entry_id (string, 33) | published (string, 14) | title (string, 17-188) | authors (sequence) | primary_category (string, 5-18) | categories (sequence) | text (string, 2-629k) |
---|---|---|---|---|---|---|
http://arxiv.org/abs/2307.08707v1 | 20230714214832 | All-optically untangling light propagation through multimode fibres | ["Hlib Kupianskyi", "Simon A. R. Horsley", "David B. Phillips"] | physics.optics | ["physics.optics"] |
[email protected]
Physics and Astronomy, University of Exeter, Exeter, EX4 4QL. UK.
Physics and Astronomy, University of Exeter, Exeter, EX4 4QL. UK.
[email protected]
Physics and Astronomy, University of Exeter, Exeter, EX4 4QL. UK.
When light propagates through a complex medium, such as a multimode optical fibre (MMF), the spatial information it carries is scrambled. In this work we experimentally demonstrate an all-optical strategy to unscramble this light again. We first create a digital model capturing the way light has been scattered, and then use this model to inverse-design and build a complementary optical system – which we call an optical inverter – that reverses this scattering process. Our implementation of this concept is based on multi-plane light conversion, and can also be understood as a diffractive artificial neural network or a physical matrix pre-conditioner. We present three design strategies allowing different aspects of device performance to be prioritised. We experimentally demonstrate a prototype optical inverter capable of simultaneously unscrambling up to 30 spatial modes that have propagated through a 1 m long MMF, and show how this enables near instantaneous incoherent imaging, without the need for any beam scanning or computational processing. We also demonstrate the reconfigurable nature of this prototype, allowing it to adapt and deliver a new optical transformation if the MMF it is matched to changes configuration. Our work represents a first step towards a new way to see through scattering media. Beyond imaging, this concept may also have applications to the fields of optical communications, optical computing and quantum photonics.
All-optically untangling light propagation through multimode fibres
David B. Phillips
August 12, 2023
===================================================================
As their name suggests, multimode optical fibres (MMFs) support the transmission of multiple spatial modes, recognisable as unique patterns imprinted in the electric field of guided laser light <cit.>. These spatial modes are capable of acting as independent information channels, offering the tantalising prospect of ultra-high density information and image transmission through hair-thin strands of optical fibre <cit.>. Such technology has a wealth of applications, from high-resolution micro-endoscopy deep inside the body <cit.>, to space-division multiplexing through short-range optical interconnects in data centres <cit.>, and emerging forms of quantum communication and photonic computing <cit.>.
However, there are significant challenges to overcome before the high data capacity of MMFs can be fully unlocked.
An optical field illuminating one end of a MMF typically emerges from the other end unrecognisably spatially scrambled – a consequence of modal dispersion and cross-talk. This presents a major hurdle to spatial signal transmission and imaging through MMFs, as the light must somehow be unscrambled again to recover images and data <cit.>. A number of techniques to achieve this are currently under development. A widely applicable strategy involves first creating a digital model of the way the fibre scrambles light. This can be accomplished by measuring the fibre's transmission matrix (TM) – a linear operator encapsulating how any spatially coherent optical field will be transformed upon propagation through the MMF <cit.>. Once the TM is known, it links monochromatic fields at either end of the MMF, and so knowledge of the field at one end enables computational recovery of the field at the other end <cit.> – a technique closely related to coherent optical multiple-input multiple-output (MIMO) in the optical communications domain <cit.>.
Knowledge of the TM also enables scanning imaging through MMFs to be accomplished <cit.>. This method, known as wavefront shaping <cit.>, uses a spatial light modulator (SLM) to dynamically structure input optical fields so they transform into focused spots after propagation through the fibre <cit.>. By scanning a spot over the scene, and recording the total return signal that has emanated from each of the known spot locations, reflectance or fluorescence images can be reconstructed. Wavefront shaping is a powerful technique that has enabled a wide variety of imaging modalities through MMF-based micro-endoscopes <cit.>. However, in these methods the spatial information is essentially unscrambled one mode at a time – which severely limits imaging frame-rates, and is not compatible with wide-field or super-resolution imaging <cit.> through fibres.
To take full advantage of the parallel information channels supported by MMFs, we would ideally be able to disentangle all propagating spatial modes simultaneously, to a high fidelity, and with minimal computational overhead <cit.>. In this article, we show how the unscrambling operation can be achieved passively in an all-optical manner, with a latency limited only by the speed of light. Armed with knowledge of a fibre's TM, we design a complementary optical system – crafted through the process of inverse design – that reverses the scattering process imparted by the MMF. We refer to this device as an optical inverter <cit.>. It brings closer the vision of being able to simply look through an optical fibre to directly see the scene at the other end.
An optical inverter must precisely manipulate many spatial modes simultaneously. Realisation of photonic systems capable of on-demand high-dimensional spatial mode transformation is a challenging task, with techniques still in their infancy <cit.>. Despite their high resolution, a single reflection from a two-dimensional SLM or metasurface cannot achieve an arbitrary multi-modal transformation – for this the interaction of light with a three-dimensional photonic architecture is required <cit.>. Our concept, shown in Fig. <ref>, relies on a technology known as multi-plane light conversion <cit.>, and can also be understood as a physically realised diffractive artificial neural network <cit.>. Light emerging from the MMF reflects from a cascade of specially designed diffractive optical elements – here referred to as `phase planes' (see Fig. <ref>(b)), each separated by free-space. These static phase planes successively rearrange the spatial information carried by the light, operating on all modes simultaneously, and enacting the inverse transformation to that applied by the fibre itself. After this process, images of input optical fields are formed at the output of the inverter, without the need for any computational processing.
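To make the multi-plane light conversion picture concrete, the sketch below propagates a field through a cascade of phase planes separated by free space using the angular spectrum method. The plane spacing and pixel pitch follow the values given in the Methods (62 mm, 12.5 μm); the grid size, the flat phase planes and the Gaussian input are illustrative assumptions only, not the experimental design.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Free-space propagation of a sampled scalar field over a distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2        # (k_z / 2*pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)             # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def mplc_forward(field, phase_planes, dz, wavelength, dx):
    """Reflect from each phase plane in turn, with free space in between."""
    for phase in phase_planes:
        field = field * np.exp(1j * phase)
        field = angular_spectrum_propagate(field, dz, wavelength, dx)
    return field

# Illustrative run: M = 5 flat (all-zero) planes, so the cascade reduces to propagation.
n_pix, dx, wavelength, dz, M = 256, 12.5e-6, 633e-9, 62e-3, 5
planes = [np.zeros((n_pix, n_pix)) for _ in range(M)]
x = (np.arange(n_pix) - n_pix / 2) * dx
X, Y = np.meshgrid(x, x)
field_in = np.exp(-(X**2 + Y**2) / (2 * (200e-6) ** 2)).astype(complex)
field_out = mplc_forward(field_in, planes, dz, wavelength, dx)
```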
We recently proposed this optical inversion concept, as a way to unscramble light propagation through MMFs, in a numerical study <cit.>. Here we experimentally implement a prototype optical inverter, with its design tuned to a specific MMF supporting up to ∼30 spatial modes. We also demonstrate how this prototype inverter can be adapted to match a new fibre TM if the bend configuration of the MMF is perturbed – pointing towards future applications involving flexible fibres. While our work predominantly targets imaging applications, the concepts we introduce here may also prove fruitful in the fields of optical communications, optical computing and quantum photonics.
Optical inverter design
We design an optical inverter complementary to a short length (1 m) of step-index MMF with a core diameter of d=25 μm, and numerical aperture NA=0.1. At a wavelength of λ = 633 nm, this MMF supports N = 42 spatial modes per polarisation channel (given by N∼(π dNA/2λ)^2). We demonstrate three inverse design protocols that enable different performance criteria to be optimised.
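As a rough numerical check of this mode-capacity estimate, the snippet below evaluates N ∼ (π d NA/2λ)^2 over the stated ±3 μm core-diameter tolerance; it is a back-of-envelope sketch, not part of the design procedure.

```python
import numpy as np

NA, wl = 0.1, 633e-9
for d in (22e-6, 25e-6, 28e-6):   # core diameter within the +/-3 um tolerance
    N_est = (np.pi * d * NA / (2 * wl)) ** 2
    print(f"d = {d*1e6:.0f} um -> N ~ {N_est:.0f} modes per polarisation")
# The nominal d = 25 um gives an estimate of ~38; the exact guided-mode count
# quoted in the text for this fibre is 42.
```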
Eigenmode-based inverter design: We start by considering how to unscramble light transmitted through an ideal MMF. Under the weakly guiding approximation (i.e. low NA), the MMF eigenmodes are a set of N circularly polarised propagation invariant modes (PIMs) <cit.>. The PIMs maintain an unchanging transverse field profile as they propagate along the fibre, denoted by Ψ_n, where n indexes the mode: n∈{1,2,3,...,N} – see Fig. <ref>(a) and Supplementary Information (SI) 4 for examples. Being eigenmodes of the system, the ideal PIMs exhibit negligible mode-dependent loss and modal coupling during propagation.
Each PIM has a mode-dependent propagation constant β – describing the rate at which its global phase shifts as it propagates along the fibre. Consequently, mode n picks up a mode-dependent global phase delay of β_n L upon reaching the output of a fibre of length L. This spatial mode dispersion causes the interference pattern cast by a superposition of PIMs to be different at the input and output fibre facets, resulting in the observed spatial scrambling of optical fields. Therefore, in matrix form the TM of an ideal MMF, in a single circular polarisation state and represented in real-space (pixellated) input and output bases, is well-approximated by T_fib = PΛP^†. Here P transforms from fibre mode space to real-space, and Λ is a diagonal unitary matrix encoding the phase delays accumulated by the PIMs along its diagonal. This TM links an arbitrary vectorised input field u to the corresponding output field v via v = T_fib u.
In this ideal case, the task of the optical inverter is to reverse these mode-dependent phase delays: it should enact the transform Ψ_n → Ψ_n exp(-iβ_n L) for all N PIMs simultaneously.
Therefore, the TM of the optical inverter is given by T_inv = PΛ^†P^†. The TM of the combined MMF-inverter system is given by T_fib-inv = T_inv T_fib = PP^†, which now represents only the spatial filtering applied to light fields transmitted through a fibre (due to the fibre's limited NA), with spatial scrambling corrected.
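The algebra above can be checked numerically with a toy model: build a random orthonormal mode-to-pixel basis P standing in for the PIMs, a diagonal phase matrix Λ, and verify that T_inv T_fib collapses to the projector PP^†. The dimensions and random basis below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_pix = 42, 200                       # number of modes, real-space pixels (toy values)

# Random orthonormal columns standing in for the PIMs expressed on the pixel grid.
P, _ = np.linalg.qr(rng.normal(size=(n_pix, N)) + 1j * rng.normal(size=(n_pix, N)))
beta_L = rng.uniform(0, 2 * np.pi, N)    # accumulated phases beta_n * L
Lam = np.diag(np.exp(1j * beta_L))

T_fib = P @ Lam @ P.conj().T             # ideal fibre TM
T_inv = P @ Lam.conj().T @ P.conj().T    # inverter: undo the modal phase delays

combined = T_inv @ T_fib
projector = P @ P.conj().T               # spatial filtering onto the guided-mode space
print(np.allclose(combined, projector))  # True
```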
The PIMs spatially overlap with one another, so separating them to impart the required mode-dependent phase delay onto each PIM is the key challenge we face in the design of an optical inverter <cit.>. The technique of multi-plane light conversion is emerging as a front-runner to efficiently manipulate tens of arbitrarily shaped spatial light modes simultaneously <cit.>. Here we aim to design a multi-plane light converter (MPLC) that has a relatively low number of planes, rendering the design practical to build: we unscramble the N=42 spatial channels using a cascade of M=5 phase planes.
The phase profiles of the layered MPLC structure can be efficiently inverse-designed using methods analogous to the back-propagation algorithms used to train layered electronic artificial neural networks <cit.>. Transforming many spatial modes with only a few phase planes typically means that not all of the input light can be fully controlled. This situation is dependent upon the desired transform asked of the MPLC, but typically occurs if the number of planes M≲ 2N. To overcome this problem, here we employ a bespoke MPLC design algorithm that allows a high degree of tunability in the low-plane number regime. We recently introduced this algorithm for generalised mode sorting <cit.>, and here we apply it to optical inverter design. Our iterative algorithm relies on gradient ascent with an objective function that enables the trade-off between fidelity and efficiency to be adjusted on a mode-by-mode basis – see Methods and ref. <cit.> for a detailed description. SI 3 shows a comparison of our gradient ascent algorithm to conventional approaches – in particular, the well-known `wavefront matching method' (WMM) <cit.>. These simulations demonstrate that our gradient ascent algorithm gives access to substantially higher fidelity inverter designs in this scenario.
Figure <ref>(a-b) shows a simulation of the performance of an optical inverter designed using gradient ascent. As can be seen in Fig. <ref>(a), the target output PIMs are projected into a disk in the output plane, and uncontrolled light is directed around the edge of this disk where it can be discarded or blocked. When the optical inverter is coupled to an MMF, this output disk becomes an image of the field illuminating the input facet of the fibre. The optical inverter will unscramble spatially coherent laser light, or spatially incoherent (or partially coherent) light, within the spectral bandwidth of the combined MMF-inverter system. Figure <ref>(b) shows simulated examples of spatially incoherent images transmitted through the combined MMF-inverter system with high fidelity. The resolution of these images is diffraction limited, governed by the NA of the MMF.
The incoherent imaging capabilities of this system can be understood by considering that light emanating from a diffraction limited point on the input facet of the MMF is re-imaged to a corresponding point at the output of the inverter. Therefore, the operation of the system does not rely on interference between light from neighbouring points, meaning there is no requirement for spatial coherence of the input optical field. The bandwidth, Δλ, is thus limited by the spectral dispersion of the combined system, which, for a step-index fibre, is typically constrained by the fibre itself (Δλ_fib). For imaging at the output facet of a step-index fibre, Δλ_fib ∼ 2n_c λ^2/(L NA^2). Although relatively narrow, this spectral bandwidth substantially increases if the image plane is moved away from the end of the fibre, as discussed in ref. <cit.>. Therefore, far-field imaging through MMFs <cit.> may be achieved over a broad spectral bandwidth. SI 6 shows simulations of the spectral bandwidth of the optical inverters designed in this work.
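For orientation, the facet-imaging bandwidth estimate can be evaluated directly. The core index n_c ≈ 1.46 assumed below is a typical silica value and is not stated in the text, so the resulting number is indicative only.

```python
n_c, wl, L, NA = 1.46, 633e-9, 1.0, 0.1     # n_c is an assumed silica-like value
dlam_fib = 2 * n_c * wl**2 / (L * NA**2)
print(f"facet-imaging bandwidth ~ {dlam_fib*1e9:.2f} nm")   # ~0.12 nm for this fibre
```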
Singular value decomposition-based inverter design:
Next, we study the design of an optical inverter matched to a real step-index MMF of 1 m in length, nominally with the same core diameter and NA as simulated above (d=25±3 μm, NA=0.1). We first experimentally measure the TM of this fibre, T_f-exp, at a wavelength of 633 nm, in a single circular input and output polarisation state <cit.>. A digital micro-mirror device (DMD), placed in the Fourier plane of the input facet and acting as a programmable diffraction grating, is used to shape the laser light projected into the fibre <cit.>. The TM is measured by scanning a focused spot over a hexagonal grid of points across the input facet, and holographically recording the fields emanating from the output facet. See Methods for experimental details, and SI 1 for a schematic of the full optical set-up.
To represent a realistic use-case, the fibre is held in place in a curved configuration (as shown in SI 2) and so the TM features non-negligible levels of spatial mode and polarisation coupling. Our prototype optical inverter is designed to operate on a single polarisation state, and so we filter out one circular polarisation of light exiting the fibre, rendering the measured TM non-unitary. We note that a polarisation-resolved TM of a short fibre is typically close to unitary <cit.>, and our approaches could be naturally extended to vectorial optical inverters capable of operating on both polarisation states simultaneously <cit.>.
Rather than the eigenvalue decomposition applied above, it is now more appropriate to design the optical inverter by considering the singular value decomposition (SVD) of the fibre TM: T_f-exp = UΣV^†, where unitary matrices U and V contain the left-hand and right-hand singular vectors, respectively, along their columns, and diagonal matrix Σ contains the singular values along its diagonal. Figure <ref>(e) shows a plot of the first 100 singular values in descending order. We see the distribution of singular values is dominated by ∼30 large values, corresponding to speckle field profiles that approximate linear combinations of ideal PIMs. These speckle patterns transmit the majority of the power through the MMF. This number of high singular values agrees well with the theoretical mode capacity calculated from the fibre geometry when also factoring in the manufacturing tolerance on core radius. We observe a long tail of lower singular values, a phenomenon which we interpret as due to core-cladding modes that are weakly excited by light scattering out of the core.
In this scenario, the inverse transform that our optical inverter must apply can be approximated by the pseudo-inverse of T_f-exp, i.e. T_f-exp^-1 ∼ VΣ^-1U^†. Here we regularise the inverse by setting all but the largest N=30 singular values to zero. Therefore, we design an MPLC to simultaneously pair-wise map the N spatial modes represented by the left-hand singular vectors (held on the columns of U) with the largest singular values, to the corresponding N spatial modes defined by the right-hand singular vectors (held on the columns of V). As before, when reformatted to a 2D array, these right-hand singular vectors fit into a disk representing an image of the input facet of the fibre formed at the output of the inverter. Examples of these left and right singular vectors are shown in Fig. <ref>(c) and SI 5, along with simulated outputs from the SVD-based inverter.
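The regularised pseudo-inverse that serves as the design target can be written in a few lines; the measured TM is replaced here by a random non-unitary stand-in, so the snippet illustrates the truncation step only.

```python
import numpy as np

def truncated_pinv(T, n_keep):
    """Pseudo-inverse of T keeping only the n_keep largest singular values."""
    U, s, Vh = np.linalg.svd(T, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_keep] = 1.0 / s[:n_keep]
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

rng = np.random.default_rng(1)
T_stand_in = rng.normal(size=(100, 100)) + 1j * rng.normal(size=(100, 100))
T_target = truncated_pinv(T_stand_in, n_keep=30)   # regularised ideal inverter transform
# The MPLC is then designed to map the 30 leading left singular vectors (columns of U)
# onto the corresponding right singular vectors (columns of V).
```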
As the TM of the MMF is non-unitary, the singular vectors do not have a uniform magnitude. In this situation an ideal inverter would somehow boost the power in the singular vectors with lower singular values to compensate for this effect – a feature that at first sight appears incompatible with a passive linear optical system. However, given a low-plane MPLC is inherently lossy, and we have independent control over the transform efficiency of each singular vector, our algorithm can, within limits, be used to re-balance these mode dependent losses by boosting the transform efficiency of certain modes – see Methods.
Figure <ref>(d) shows the simulated performance of this SVD-based optical inverter design, when matched to the experimentally measured fibre TM. In this case the fidelity of imaging is slightly reduced – in particular on the left-hand-side of the field-of-view (FoV). This is due to the non-unitary nature of the single-polarisation channel TM. In particular, some of the light transmitted from the left-hand-side of the input facet is transformed into the orthogonal polarisation channel, or into singular vectors that are filtered out. Characterising and inverting both polarisation channels simultaneously could overcome this loss of information <cit.>.
Optical inversion via speckle mode sorting:
So far we have designed optical inverters capable of reconstructing images of arbitrary continuous fields incident anywhere on the input fibre facet. However, in the more realistic non-unitary case this gives us no direct control over the spatial variation in reconstruction fidelity. In our final strategy, we investigate an inversion method that allows us to arbitrarily specify the regions of the fibre facet that we wish to reconstruct with high fidelity.
Our measured fibre TM T_f-exp maps a set of input focused spots to the corresponding speckle patterns emerging from the other end of the fibre. We now design an MPLC to perform the inverse mapping, and directly transform these speckle patterns back into focused spots. In this design protocol, we can arbitrarily specify the number and position of diffraction limited spots at the input facet which are unscrambled and re-imaged to the output of the inverter. For example, we are able to lower the number of channels to fewer than the maximum supported by the fibre, which allows a smaller set of channels to be unscrambled with higher fidelity.
If the input channels (i.e. focused spots) are not spatially overlapping, then the performance of this `speckle mode sorter' can be quantified by a cross-talk matrix, which depicts the fraction of light input into channel index i at the MMF input facet that appears in output channel index j after the inverter (see Methods). Figure <ref>(b) shows the numerically simulated cross-talk matrix of a speckle sorter-based inverter designed to operate on 19 spatially separated channels using 5 phase planes. Here we limit the design to an experimentally achievable resolution to compare with the experiments detailed below. In this case we design the speckle mode sorter by optimising the correlation between the target and actual output modes, which is equivalent to the WMM. SI Movie 1 shows a simulation of the spatial transformation undergone by input speckle fields as they propagate through the phase planes of a speckle mode-sorter inverter design. We note that our tunable inverse design algorithm can be used to substantially suppress cross-talk in future high-resolution speckle mode sorter designs <cit.>.
Experimental realisation
We now experimentally implement a prototype optical inverter matched to the 1 m long MMF characterised earlier. Figure <ref> shows a simplified schematic of our experiment, and SI 1 shows the full experimental setup. Our inverter is constructed from an MPLC with 5 reflections from a liquid crystal SLM. The phase planes are designed using knowledge of the experimentally measured fibre TM T_f-exp. The output fibre facet is magnified and imaged onto the first phase plane of the MPLC. When processing tens of spatial light modes, the complexity of the required phase profiles means that pixel-perfect alignment of each phase plane is critical. This is demanding to achieve given the numerous alignment degrees-of-freedom (as discussed in detail in ref. <cit.>). To mitigate alignment difficulties, we create a range of MPLC designs incorporating the expected span of residual experimental errors in the scaling and defocus of the input fields and in the distance between planes, and implement an auto-alignment protocol based on a genetic algorithm that simultaneously optimises the choice of phase-plane set and the lateral position of each phase plane on the SLM. Once the fibre is secured in position on an optical table, and the MPLC is aligned, we find the system is stable for days at a time.
Experimentally, the resolution of the SLM limits us to relatively low resolution MPLC designs, which best suit the speckle-sorter based optical inverter, since the SVD-based design requires much higher resolution phase patterns <cit.>. We first test the performance of the inverter to unscramble 19 channels evenly distributed over a hexagonal grid across the input facet of the fibre. Figure <ref>(a) shows two examples of input spots transformed into speckle patterns via propagation through the MMF, and then back into focused spots by the optical inverter. Our experimentally realised device compares well with simulations. Figures <ref>(b-c) show a comparison of the simulated and experimentally measured cross-talk matrices in this case. The slightly higher system cross-talk levels observed experimentally are mainly due to the effects of coupling between adjacent SLM pixels (i.e. the phase setting of one pixel affects the phase of neighbouring pixels), which may be mitigated by using higher-resolution or multiple SLMs to display the required phase profiles with greater accuracy. SI Movies 2 and 3 show the inverter outputs as all input channels are sequentially excited, for a 19-mode and 30-mode inverter.
All-optical image transmission through MMFs: Figure <ref>(d-e) demonstrates all-optical incoherent image transmission through the MMF-inverter system. We design and implement optical inverters supporting both 19 channels (Fig. <ref>(d)) and 30 channels (Fig. <ref>(e)). In these experiments, we mimic the effect of incoherent light transmitted through the system by illuminating each excited channel sequentially with the DMD, and time-averaging the intensity images at the output of the inverter – this captures the level of cross-talk expected for the transmission of incoherent light within the spectral bandwidth of the system. We transmit a variety of simple pixellated binary images of numbers, letters, smiley faces and symbols, and observe that all images are recognisable without the need for any additional processing at the output of the inverter. The contrast of the transmitted images reduces as their sparsity decreases (i.e. as more channels are simultaneously excited) – as is expected from the additive effects of incoherent channel cross-talk. Imaging contrast also reduces when the number of channels is increased from 19 to 30, due to the additional load placed on the MPLC-based speckle sorter to simultaneously sort more modes in 5 phase planes.
Adaptive optical inversion: As with all approaches that rely on knowledge of the TM of a scattering medium, our inversion strategy will fail if the fibre configuration changes enough to significantly modify the TM. The transmission properties of optical fibres are notoriously sensitive to such perturbations. Therefore, if changes are anticipated, the fibre TM should be regularly re-estimated (which can be achieved with only a few probe measurements <cit.>) and the MPLC patterns updated when needed. We explore this scenario in Fig. <ref>. Starting from an aligned MMF-inverter system (Fig. <ref>, left-hand column), we deliberately change the fibre configuration by re-routing one of its bends (see SI 2 for details). This almost entirely disrupts the unscrambling operation, as seen by the disappearance of the prominent diagonal in the cross-talk matrices shown in Fig. <ref>, middle column. Performance is restored by remeasuring the TM of the MMF, and re-configuring the phase-planes of the optical inverter to match the new TM, as shown in Fig. <ref>, right column.
We note that in micro-endoscopy applications, the input end of the fibre may not be optically accessible – thus complicating continuous TM monitoring of flexible MMFs. Nonetheless, there are a variety of methods under development to achieve single-ended TM measurement in this scenario <cit.>. Even once a new TM is known, in future adaptive applications it will be important to minimise the computational complexity associated with re-designing an entire optical inverter on-the-fly whenever the fibre state changes. To address this issue, we showed in ref. <cit.> that modulating only a single, carefully chosen phase plane can correct for a wide range of fibre states. Furthermore, selecting from a pre-designed library of inverters, matched to the range of expected fibre TMs, would bypass the need for any synchronous inverter design <cit.>, offering a viable route towards adaptive inversion operating at liquid crystal SLM update rates in the future.
Discussion
We now consider our work in the context of other emerging techniques and give an outlook towards future directions. An MPLC is equivalent to the more recently coined diffractive artificial neural network (DNN) <cit.> – the two concepts share the same physical structure, and are inverse designed using analogous approaches. DNNs have recently begun to be applied to computational imaging operations <cit.>. Viewed from the perspective of artificial neural networks, an ideal MMF optical inverter is capable of reconstructing any transmittable image after being `trained' (i.e. inverse-designed) on a minimal yet complete set of fibre responses. This minimises the training time, and prevents any undesired bias towards particular classes of image being frozen into the final network design.
More crucially, the all-optical nature of our physically-realised network conserves the phase and coherence information carried by optical fields flowing through it. This responsiveness to the optical phase renders the inversion problem linear and well-posed, offering advantages over conventional neural networks tasked with this class of light unscrambling problem. For example, electronic neural networks have been applied to unscramble coherent images transmitted through MMFs <cit.>. When trained on intensity-only images cast by spatially coherent light, these methods are faced with a non-linear mapping problem that is highly ill-posed, as many different inputs can result in the same intensity profile at the output of the fiber, differing only by their phase information <cit.>. That said, conventional neural networks currently offer additional flexibility in terms of accommodating perturbations to fibre state <cit.>.
Single-shot incoherent computational imaging directly through MMFs has also been previously explored. This technique relies on measurement of the `intensity TM', H, which links incoherent optical fields at either end of a fibre <cit.>. Measurement of the intensity pattern of a spatially incoherent field at one end of the fibre, b, then permits the intensity pattern at the other end, a, to be computed by solving the inverse problem b = Ha for a. However, in this approach, the spatial information is encoded in low contrast speckle patterns such as those shown in Fig. <ref>(c,d) (top rows). This means that matrix H is poorly conditioned, and the solution to the inverse problem is very sensitive to small changes in b. For example, we see that in Fig <ref>(d) (top row), the speckle patterns emanating from the fibre look very similar when transmitting images of `2', `3' and `8' – making accurate computational reconstruction of these images sensitive to noise. Therefore, this direct inversion technique is hampered by very low signal-to-noise ratios, and typically works best when imaging sparse scenes <cit.>.
An ideal inverter physically solves this incoherent imaging inverse problem without the need for any further computation. However, even an imperfect inverter, possessing non-negligible levels of channel cross-talk (such as our 30-channel prototype), has advantages here: it represents a physical pre-conditioning element, thus reducing the condition number (i.e. the ratio of the largest to the smallest singular value) of the intensity TM describing the optical system. This improves the stability of final computational image reconstruction. For example, in our experiments the inclusion of the inverter lowered the condition number κ of matrix H from κ=14.1 (MMF alone) to κ=5.5 (MMF-inverter) in the 30-channel case, and from κ=8.8 to κ=3.1 in the 19-channel case (see Methods). We anticipate the benefits of physically pre-conditioning in this way will grow with the dimensionality of the system.
How scalable is our concept in terms of the length and mode capacity of the fibre? The main challenge presented by longer fibres is the stability of their TM, which becomes increasingly susceptible to external perturbations (e.g. changes in temperature, or vibrations). Therefore long fibres would necessitate low latency adaptive optical inversion. Regarding mode capacity: in this work we experimentally unscramble up to 30 modes using 5 phase planes, i.e. a mode-to-plane ratio of 6, with an efficiency of ∼20% (ignoring SLM losses). SI 7 provides a table of performance metrics for all of the inverters designed in this work. In the general case of arbitrary unitary transformations, to unscramble light with high fidelity and efficiency, the number of planes scales linearly with the number of modes <cit.>. However, this scaling can be improved by sacrificing transform efficiency while preserving fidelity <cit.>. Interestingly, certain transformations can be efficiently achieved with a far more favorable mode-to-plane scaling <cit.>. We recently showed that the transformation required to sort MMF PIMs falls into this category <cit.>. In ref. <cit.> we constrained the inverter design to take advantage of the efficient PIM sorting transform, and established that, in theory, up to 400 modes could be unscrambled in 29 planes with an efficiency of ∼50%. This improves the mode-to-plane ratio to ∼14. Experimentally realising such a high-dimensional system is a promising avenue for future work.
Conclusions
In summary, we have demonstrated a passive optical system capable of reversing the strongly spatially-variant aberrations introduced by a MMF – unscrambling tens of optical modes simultaneously. Our approach can be understood as a form of all-optical MIMO equalisation <cit.>. Proposals to optically unscramble light propagation through MMFs were first suggested in the 1970s <cit.>. However, it is only in the last few years that our understanding of the inverse design of multi-modal photonic systems has matured to a level that renders such concepts experimentally feasible. In addition to the step-index MMFs studied here, these approaches apply to graded-index fibres, fibre bundles and photonic lanterns <cit.>. Our work targets micro-endoscopic imaging applications, however these concepts apply more broadly to general scattering media <cit.>, and also have potential applications in the fields of classical and quantum optical communications and photonic computing.
§ METHODS
Optical inverter inverse design algorithm
Our aim is to find the phase delay imparted by each pixel on each plane of the optical inverter, such that the resulting optical transformation is close to the desired inverter transmission matrix _inv. For each design strategy, we specify different target sets of input and output modes – essentially selecting the preferred basis in which the inverter design will be conducted:
* For the ideal Eigenmode-based inverter design, the input mode set is the PIMs – see, for example, refs <cit.> or <cit.> for details of how these modes are defined, and SI 4 for visualisations. The output set is also the PIMs, with the global phase shift of the n^th mode given by θ_n = -β_nL, to compensate for the phase delay picked up during propagation through a fibre of length L.
* For the SVD-based inverter, we express the measured TM as T_f-exp = UΣV^†, as explained in the main text. The inverter input modes are the left-hand singular vectors held on the columns of U, and the output modes are the right-hand singular vectors held on the columns of V – appropriately truncated.
* For the speckle mode sorter, the input modes are user-selected columns of T_f-exp – i.e. experimentally measured speckle patterns emanating from the fibre during the measurement of the TM. The output modes are the corresponding focused spots that were projected onto the input facet of the fibre to probe the fibre TM.
During the design process, we aim to optimise a metric that quantifies the performance of the MPLC to transform all input modes to their corresponding output modes – taking into account both the transform fidelity and efficiency of each mode pair. To achieve this, we use the efficient MPLC design method we recently introduced in ref. <cit.>, and choose the optimisation objective function as follows
F_T = ∑_n=1^N α_n F_n ,        (1)
F_n = Re[o_n^†·ρ_n]_Fidelity + γ_n (o_n^†·o_n^bk)_Efficiency ,        (2)
where o_n is the actual vectorised output field for mode-pair n, ρ_n is the corresponding target output mode, and o_n^bk (defined below) is the part of o_n falling in the designated background zone.
Here F_T is the real-valued number we aim to maximise during optimisation. N is the total number of mode-pairs to transform. The real positive number α_n weights the relative importance of transforming mode pair n.
F_n is a real-valued number quantifying the performance of the transformation for input-output mode-pair n. The interplay between the two contributions to F_n can be understood as follows: the first term on the right-hand-side (RHS) of Eqn. 2 is designed to maximise the correlation between the target output mode ρ_n and the actual output mode o_n. Taking the real part of the overlap constrains the global phase of the output to match the global phase of the target (alternatively, taking the absolute square of this term leaves the relative global phase of the output modes unconstrained <cit.>).
The second term on the RHS of Eqn. 2, weighted by positive scalar γ_n, enables the efficiency of the transformation to be tuned. A non-zero value of γ_n allows some of the transmitted light to be deliberately shepherded into a designated background region outside the image of the input fibre facet. This lowers the overall efficiency, but enables optical inverter designs to be found that yield a higher output fidelity within the image of the fibre facet itself. The background region is defined using vector m^bk: the elements of m^bk are set to zero inside the image of the fibre core where the unscrambled field will appear, and 1 in the surrounding area, which is designated as the background zone. We incorporate this information into Eqn. 2 via o_n^bk = o_n ⊙ m^bk, where the operation ⊙ signifies the element-wise Hadamard product. Therefore o_n^†·o_n^bk represents the intensity of light directed to the background zone (and is thus always real and positive). Adding this efficiency term in Eqn. 2 acts to increase F_n when more light is scattered to the background. We note that the spatial structure of o_n in the background zone is free to evolve throughout the iterative design process, so this approach does not enforce any predetermined structure on the light scattered there.
Within Eqn. 2, when the efficiency term increases, the fidelity term simultaneously decreases, since here the calculation of fidelity involves the overlap integral between the target and actual mode across the entire output plane. Thus, this formulation allows the relative importance of fidelity and efficiency to be tuned on a mode-by-mode basis by adjusting the values of γ_n. In addition, the relative importance of each input-output mode-pair can be adjusted by tuning α_n. The combination of these two adjustable controls enables the efficiency of certain mode-pair transformations to be boosted – thus correcting, to some extent, for the non-uniform singular value distribution of the inverse TM that should be implemented in the SVD-based optical inverter design.
For the eigenmode-based and SVD-based inverter designs we use non-zero values of α_n and γ_n, initialised as α_n = 1 and γ_n = 2, with γ_n adjusted throughout the design process to compensate for the variation in transformation weights governed by the diagonal elements of Σ. For the speckle mode sorter designs, we use α_n = 1 and γ_n = 0. In this case, the algorithm optimises the same objective as the well-known wavefront matching method <cit.>. If higher resolution phase masks can be implemented, the cross-talk between the speckle sorter channels can be substantially suppressed by adding an additional term to the objective function, as shown in ref. <cit.>.
Equation 2 is readily differentiable with respect to the phase profile on a particular plane, meaning that the objective function can be efficiently maximised using gradient-based adjoint methods <cit.>.
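A minimal sketch of evaluating the objective of Eqns. (1)-(2) for a set of simulated inverter outputs is given below. The forward model that would produce the output fields o_n (e.g. an MPLC propagation like the sketch given earlier) is omitted, and all fields and masks here are random placeholders.

```python
import numpy as np

def objective(outputs, targets, bk_mask, alpha, gamma):
    """Evaluate F_T = sum_n alpha_n * F_n for vectorised complex fields."""
    F_T = 0.0
    for o_n, rho_n, a_n, g_n in zip(outputs, targets, alpha, gamma):
        fidelity = np.real(np.vdot(o_n, rho_n))      # Re[o_n^dagger . rho_n]
        o_bk = o_n * bk_mask                         # field restricted to the background zone
        efficiency = np.real(np.vdot(o_n, o_bk))     # intensity sent to the background
        F_T += a_n * (fidelity + g_n * efficiency)
    return F_T

rng = np.random.default_rng(2)
n_modes, n_pix = 5, 400
outs = [rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix) for _ in range(n_modes)]
tgts = [rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix) for _ in range(n_modes)]
mask = (rng.random(n_pix) > 0.8).astype(float)       # 1 in the background zone, 0 elsewhere
print(objective(outs, tgts, mask, alpha=[1.0] * n_modes, gamma=[2.0] * n_modes))
```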
Optical inverter performance metrics
The performance of a particular optical inverter design can be quantified in several ways, described in detail in the supplementary information of ref. <cit.>, and summarised below:
Efficiency: The efficiency e_n of the transformation of mode n is given by the fraction of input power transmitted into the target zone at the inverter output. For the case of the Eigenmode and SVD-based inverters, this output zone is the disk representing the image of the fibre input facet. For the speckle mode sorter inverter design, the output zone is a small disk located where the output Gaussian spot should be focused. The mean efficiency e is given by the average of e_n over all N modes.
Fidelity: The fidelity f_n of the transformation of mode n is given by the absolute square of the normalised overlap integral between the target spatial mode, and the actual spatial mode that is transmitted into the target output zone (i.e. setting any part of the actual field outside the target zone to zero before normalisation takes place). The mean fidelity f is given by the average of f_n over all N modes.
Channel cross-talk: For the speckle mode sorter-based inverter, if the output channels do not spatially overlap, we can quantify the level of channel cross-talk. This information is stored in a matrix C, where element c_i,j is given by the amount of power appearing in output channel j when input channel i is excited, divided by the total power appearing in all output channels. In the ideal case of no channel cross-talk, c_i,i = 1 for all i, and c_i,j = 0 for all i≠j, meaning that C is equal to the identity matrix. The mean cross-talk c is given by one minus the mean value of the diagonal elements of C (represented as a percentage in the main text), i.e. c = 0% in the ideal case.
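The three metrics can be computed from simulated or measured output fields along the following lines; the fields, zones and power matrix here are placeholders, and the input fields are assumed to carry unit power.

```python
import numpy as np

def efficiency(out_field, target_zone):
    """Power delivered into the target zone, assuming unit input power."""
    return np.sum(np.abs(out_field[target_zone]) ** 2)

def fidelity(out_field, target_field, target_zone):
    """|<target, output>|^2 with the output restricted to the target zone and renormalised."""
    o = np.where(target_zone, out_field, 0.0)
    o = o / np.linalg.norm(o)
    t = target_field / np.linalg.norm(target_field)
    return np.abs(np.vdot(t, o)) ** 2

def crosstalk_matrix(powers):
    """powers[i, j]: power in output channel j when input channel i is excited."""
    C = powers / powers.sum(axis=1, keepdims=True)
    mean_crosstalk = 1.0 - np.mean(np.diag(C))
    return C, mean_crosstalk
```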
Fibre transmission matrix characterisation
Our experimental set-up for measurement of the TM of the MMF is similar to that described in ref. <cit.>. The MMF used in our experiments is a 1 m long step-index MMF with a nominal core diameter of d=25 μm, and numerical aperture NA=0.1 (Thorlabs FG025LJA). A 633 nm laser beam, generated by a 1 mW HeNe laser (Thorlabs HNLS008L-EC), is expanded to fill a DMD (Vialux V-7001), which is used to shape the light incident on the input end of the fibre <cit.>. The input facet of the fibre is placed in the Fourier plane of the DMD. The incident light is circularly polarised. Light is focused into, and collected from, the MMF using a pair of 10× objective lenses. The output facet of the fibre is imaged onto a camera that is electronically synchronised with the DMD. A combination of a quarter wave-plate and a polariser filters out one component of circular polarisation. A coherent reference beam (plane wave) is also incident onto the camera, enabling measurement of the optical field via digital holography.
We measure the TM T_f-exp of this fibre by scanning a focused spot over a hexagonal grid across the input facet. Each of these inputs results in a unique speckle pattern emerging from the output fibre facet. Each output field is vectorised, and the p^th output forms column p of the TM T_f-exp. We implement phase-drift correction to compensate for any phase drift between the signal and reference arms of the interferometer. This is achieved by interlacing the probe measurements with a repeated standard measurement. The global phase drift of this standard output mode is tracked throughout the TM measurement, and the phase drift function is subtracted from the phase of the final TM measurement. SI 1 shows a detailed schematic of the optical set-up.
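The assembly of the measured output fields into the TM, including the interlaced phase-drift correction, can be sketched as follows; the lists of fields stand in for the DMD/holography acquisition loop, which is not shown.

```python
import numpy as np

def build_tm(probe_fields, standard_fields):
    """Stack probe output fields as TM columns, removing interferometer phase drift.

    probe_fields:    P vectorised complex output fields, one per input spot
    standard_fields: P complex fields from the repeated standard input, measured
                     interlaced with the probes
    """
    ref0 = standard_fields[0]
    columns = []
    for probe, std in zip(probe_fields, standard_fields):
        drift = np.angle(np.vdot(ref0, std))     # global phase drift of the standard mode
        columns.append(probe * np.exp(-1j * drift))
    return np.stack(columns, axis=1)             # column p holds the p-th corrected output
```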
Experimental implementation of optical inverter
Once designed, we implement the optical inverter using five reflections from a liquid crystal SLM (Hamamatsu X13138-01), of total resolution 1280×1024 pixels and a pixel pitch of 12.5 μm. The SLM is placed opposite a mirror, positioned a distance of 31 mm away from the SLM chip, giving a distance between the phase planes of 62 mm. Each phase plane is first optimised using a 400×400 pixel simulation. The active area of each plane, where the light from each mode mainly stays, is a central 200×200 pixel region. In order to fit the 5 phase planes adjacently on a single SLM, we crop each pre-designed phase plane to a 200×400 pixel area; these areas are displayed next to one another on the SLM. Initial experimental alignment of this optical system is challenging, as pixel-perfect lateral positioning of the phase planes is required, leading to a large number of degrees of freedom to align simultaneously. We achieved this by implementing an automated genetic algorithm to search for the correct phase plane display positions on the SLM. See ref. <cit.> for a detailed description. The output of the inverter is recorded with a camera (Basler ace acA640-300gm), positioned in the Fourier plane of the final phase plane. The operation of this camera is electronically synchronised with the DMD.
We note that the SLM used for this work did not have its reflection efficiency optimised specifically for the operating wavelength of 633 nm. Therefore the efficiency of each reflection was relatively low (∼50%), rendering the efficiency of the optical inverter artificially low in our prototype device. This issue could be mitigated by using an SLM with a dielectric back-plane optimised for the operational wavelength.
Condition number of the intensity transmission matrix
In the main text we analyse the condition number κ of the intensity TM of just the MMF (_MMF), and of the MMF-inverter system (_MMF-inv). In general, the intensity TM of a scattering system is given by = ||^2, where is the field TM when represented in real-space input and output bases, and here we take the element-by-element (i.e. Hadamard) square. Therefore, the n^th column of _MMF is given by the vectorised intensity speckle pattern that appears at the output facet of the MMF, when the MMF is excited with the n^th input focused spot on the input facet. Similarly, the n^th column of _MMF-inv is given by the vectorised intensity pattern that appears at the output of the inverter when the MMF is excited with the n^th input focused spot incident onto the input facet. The condition number κ of these matrices is then calculated by finding the singular value decomposition of , and calculating the ratio of the largest singular value divided by the smallest singular value.
§ ACKNOWLEDGEMENTS
We thank Unė Būtaitė, Tomáš Čižmár and Joel Carpenter for useful discussions. DBP acknowledges financial support from the European Research Council (Grant no. 804626), and the
Royal Academy of Engineering.
§ CONTRIBUTIONS
DBP conceived the idea for the project and supervised the work. HK performed all simulations, experimental work and data analysis. SARH derived the gradient ascent optimisation method for Eigenmode and SVD-based optical inverter designs. DBP and HK wrote the paper, with editorial input from SARH.
Supplementary Information
1: Optical setup
Figure <ref> presents a schematic of the optical setup used to implement the experimental all-optical MMF inverter. A helium-neon laser (Thorlabs HNLS008L-EC) was used as a source of linearly polarised light. After the beam is magnified using a 4-f system of two lenses, a DMD (Vialux V-7001) is used to generate, in its Fourier plane, inputs to the MMF. Camera "Cam 1" is positioned in the image plane of the fibre input facet (i.e. the Fourier plane of the DMD), enabling the intensity profiles of the fields incident on the input of the MMF to be imaged. Prior to entering the fibre, the light passes through a quarter wave-plate to change the polarisation of the incoming light from linear to circular. The optical field generated in the Fourier plane of the DMD is demagnified to the scale of the fibre core using a 4-f system (lens L4 and objective lens OL1). Circularly polarised light at the output of the MMF is converted back to linear polarisation, one component of which is discarded using a polarising beam-splitter. The light emanating from the output facet of the fibre is re-imaged onto camera "Cam 2", where its intensity can be directly measured. The optical phase of the transmitted fields is retrieved by interfering them with the reference beam and then using off-axis digital holography. The MMF output fields are also re-imaged onto the first plane of the MPLC-based optical inverter. The MPLC is constructed from a liquid crystal SLM (Hamamatsu X13138-01) and a mirror parallel to its screen. After the light is modulated by the M=5 planes of the MPLC, it is Fourier transformed by a lens and any unmodulated light is discarded with a linear polariser.
2: Adaptive correction of the optical inverter: MMF bend configurations
Figure <ref> shows the MMF configurations before (a) and after (b) an additional bend is applied. In the photographs, the fibre has an orange jacket and is taped to the optical table to ensure the stability of its TM. These two bend configurations correspond to the 'aligned' and 'perturbed' measurements shown in Fig. <ref>, respectively.
3: Comparison of our gradient ascent algorithm to the wavefront matching method for optical inverter design
Figure <ref> compares the simulated performance of two phase mask optimisation techniques that may be used: our gradient ascent algorithm (GA) and the conventional wavefront matching method (WMM). Both algorithms are tasked with designing a 42-mode eigenmode-based optical inverter using 5 phase planes. The quality of two selected output modes is shown, and visually we see that the fidelity of the output modes from the inverter designed using the WMM is substantially lower than those generated by our GA algorithm. Quantitatively, the fidelity of the first spatial mode (top row) is f_{n=10}^GA = 98%, versus f_{n=10}^WMM = 58%. For the second example (bottom row), f_{n=14}^GA = 97%, and f_{n=14}^WMM = 40%. The overall fidelity of the transformation, averaged over all the modes of the set, is f^GA = 97% for GA and f^WMM = 49% for WMM, while the simulated average efficiencies (i.e. input light utilisation ratios) are e^GA = 13% and e^WMM = 30% (see also Table <ref>). Example simulated images using the GA-optimised inverter and the WMM-optimised inverter are shown on the right-hand side of Fig. <ref>.
4: Eigenmode-based optical inverter: transformation of all eigenmodes
Figure <ref> presents the set of all N=42 spatial mode-pairs used to design the eigenmode-based optical inverter matched to an ideal MMF. The MMF eigenmodes accumulate a mode-dependent phase delay β_n L as they propagate through the fibre. The multi-plane light conversion system used as the inverter is then designed to act on all N=42 output modes simultaneously to compensate for these global phase delays with M=5 planes. The fourth row of each mode block of the presented fields shows the spatial modes generated at the output of such an inverter system optimised using our gradient ascent algorithm (α=1, γ=2). The region of the output plane in which light is allowed to be randomly scattered, in order to enhance output fidelity within the central disk area, can be clearly seen in the outputs.
5: SVD-based optical inverter: transformation of all modes
Figure <ref> presents the set of all N=30 spatial mode-pairs used to optimise the SVD-based optical inverter, designed to be matched to a real MMF. The mode sets are derived from the experimentally measured TM of the MMF. The MMF inputs, in the form of N right-hand singular vectors, are transformed into N left-hand singular vectors as they propagate through the fibre. The multi-plane light conversion system used as the inverter is then designed to act on all N=30 modes simultaneously, to transform the left-hand singular vectors back into the right-hand ones with M=5 planes. The fourth row of each block of the presented fields shows the spatial modes at the output of such an inverter system optimised using our gradient ascent algorithm (α=1, γ=2). Once again, the area of the output plane in which light is allowed to be randomly scattered, in order to enhance output fidelity within the central disk area, can be clearly seen in the outputs.
6: Spectral bandwidth of the optical inverters
Spectral response curves of the simulated SVD-based optical inverter, designed to invert the experimentally measured fibre transmission matrix, are shown in Figure <ref>. These are numerically calculated as follows. First, a set of M=5 MPLC phase masks corresponding to the required transformation is optimised using the gradient ascent algorithm, with the base values α=1, γ=2, at the original wavelength λ_0 = 633 nm. The wavelength of the light input to the MPLC is then detuned to λ_det, and the phase delays introduced by the phase masks are modified accordingly, by multiplication by a factor λ_0/λ_det. The performance of the resulting mode transformer is then quantified by simulating the propagation of the frequency-shifted modes through the device, and calculating the average fidelity and average efficiency parameters defined in the Methods section of the main text.
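The detuning step amounts to rescaling the phase delays of the designed masks; a minimal version, assuming a forward model such as the mplc_forward sketch given earlier, would look like this.

```python
def detuned_planes(phase_planes, wl0, wl_det):
    """Phase delays designed at wl0, as experienced by light at wavelength wl_det."""
    return [phase * (wl0 / wl_det) for phase in phase_planes]

# e.g. planes_det = detuned_planes(planes, 633e-9, 640e-9)
#      out = mplc_forward(field_at_640nm, planes_det, dz, 640e-9, dx)
```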
The spectral bandwidth of the inverter can then be calculated as the full width at half maximum (FWHM) of the average fidelity curve shown in Fig. <ref>(a). As the minimum average fidelity that constitutes satisfactory inversion is not strictly defined, the spectral bandwidth of such a system may be estimated as Δλ = 5-10 nm. Note, however, that as discussed in the main text, the bandwidth of the whole MMF-inverter system is in this case limited by the bandwidth of the MMF itself.
Figure <ref> presents the simulated spectral response curves of the experimentally realised MMF inverters using the N=19 (blue) and N=30 (green) speckle mode sorters. The average cross-talk and the average efficiency as functions of the detuned wavelength are calculated in the same way as described above, for the central wavelength of λ_0 = 633 nm. We observe a broader FWHM compared to Fig. <ref>. This is because the phase masks of these systems were optimised using the WMM, which is equivalent to the gradient ascent optimisation with α=1, γ=0. This objective function produces less intricate phase masks containing fewer phase wraps, thus boosting the spectral bandwidth of the device. The phase masks used were also additionally smoothed so that they could be faithfully represented on the LC SLM screen, at the expense of overall performance <cit.>.
7: Summary of optical inverter performance metrics
8: Description of supplementary movies
* Supplementary movie 1: A simulation of the spatial transformation undergone by input speckle fields as they propagate through the phase planes of a speckle mode-sorter inverter design.
* Supplementary movie 2: The 19-mode experimentally implemented optical inverter outputs as all input channels are sequentially excited. Left panel: Intensity of field input into MMF. Middle panel: Experimentally measured optical field at output of MMF. Right panel: Intensity of field at output of optical inverter.
* Supplementary movie 3: The 30-mode experimentally implemented optical inverter outputs as all input channels are sequentially excited. Left panel: Intensity of field input into MMF. Middle panel: Experimentally measured optical field at output of MMF. Right panel: Intensity of field at output of optical inverter.
|
http://arxiv.org/abs/2307.05068v1 | 20230711071329 | A Theory of Bounded Inductive Rationality | ["Caspar Oesterheld", "Abram Demski", "Vincent Conitzer"] | cs.AI | ["cs.AI", "cs.GT", "cs.LG", "I.2"] |
A Theory of Bounded Inductive Rationality
Caspar Oesterheld, Abram Demski, Vincent Conitzer
August 12, 2023
=========================================
The dominant theories of rational choice assume
logical omniscience. That is, they assume that when facing a decision problem, an agent can perform all relevant computations and determine the truth value of all relevant logical/mathematical claims. This assumption is unrealistic when, for example, we offer bets on remote digits of π or when an agent faces a computationally intractable planning problem.
Furthermore, the assumption of logical omniscience creates contradictions in cases where the environment can contain descriptions of the agent itself.
Importantly, strategic interactions as studied in game theory are decision problems in which a rational agent is predicted by its environment (the other players).
In this paper, we develop a theory of rational decision making that does not assume logical omniscience.
We consider agents who repeatedly face decision problems (including ones like betting on digits of π or games against other agents). The main contribution of this paper is to provide a sensible theory of rationality for such agents.
Roughly, we require that a boundedly rational inductive agent tests each efficiently computable hypothesis infinitely often and follows those hypotheses that keep their promises of high rewards. We then prove that agents that are rational in this sense have other desirable properties. For example, they learn to value random and pseudo-random lotteries at their expected reward. Finally, we consider strategic interactions between different agents and prove a folk theorem for what strategies bounded rational inductive agents can converge to.
§ INTRODUCTION
The dominant theories of rational decision making – in particular Bayesian theories – assume logical omniscience, i.e., that rational agents can determine the truth value of any relevant logical statement. In some types of decision problems, this prevents one from deriving any recommendation from these theories, which is unsatisfactory (<Ref>). For one, there are problems in which computing an optimal choice is simply computationally intractable. For example, many planning problems are intractable. Second, the assumption of logical omniscience creates contradictions (resembling classic paradoxes of self reference, such as the liar's paradox) if the environment is allowed to contain references to the agent itself. These issues arise most naturally when multiple rational agents interact and reason about one another.
This paper develops a novel theory of boundedly rational inductive agents (BRIAs) that does not assume logical omniscience and yields sensible recommendations in problems such as the ones described above. Rather than describing how an agent should deal with an individual decision, the theory considers how an agent learns to choose on a sequence of different decision problems.
We describe the setting in more detail in <Ref>.
The core of our theory is a normative rationality criterion for such learning agents. Roughly, the criterion requires that a boundedly rational inductive agent test each efficiently computable hypothesis (or more generally each hypothesis in some class) infinitely often and follows hypotheses that keep their promises of high rewards. We describe the criterion in detail in <Ref>. Importantly, the criterion can be satisfied by computationally bounded agents, as we show in <Ref>.
We demonstrate the appeal of our criterion by showing that it implies desirable and general behavioral patterns. In <Ref>, we show that on sequences of decision problems in which one available option guarantees a payoff of at least l, BRIAs learn to obtain a reward of at least l. Thus, in particular, they avoid Dutch books (in the limit).
We further show that similarly on sequences of decision problems in which one available option pays off truly or algorithmically randomly with mean μ, BRIAs learn to obtain a reward of at least μ.
Finally, we consider decision problems in which one BRIA plays a strategic game against another BRIA. We show that BRIAs can converge to any individually rational correlated strategy profile. BRIAs are thus a promising model for studying ideas such as superrationality (i.e., cooperation in the one-shot Prisoner's Dilemma) <cit.> (cf. <Ref>).
Related work is discussed in <Ref>.
Throughout this paper, we describe the key ideas for our proofs in the main text. Detailed proofs are given in <Ref>.
§ SETTING
Informally, we consider an agent who makes decisions in discrete time steps. At each time step she faces some set of available options to choose from. She selects one of the options and receives a reward. She then faces a new decision problem, and so on.
Formally, let 𝒯 be some language describing available options. A decision problem DP∈Fin(𝒯) is a finite set of options. A decision problem sequence DP̅ is a sequence of decision problems DP_1,DP_2,...
An agent for DP̅ is a sequence c̅ of choices c_t ∈ DP_t.
The rewards are numbers r_1,r_2,r_3,...∈ [0,1]. Note that in contrast to the literature on multi-armed bandit problems (<Ref>) counterfactual rewards are not defined.
It is generally helpful to imagine that (similar to multi-armed bandit problems) at each time t the agent first sees DP_t; then chooses c_t from DP_t according to some algorithm that looks at the available options in DP_t and takes past experiences into account; then the environment calculates some reward as a function of c_t; the agent observes the reward and learns from it. The sequence of decision problems DP̅ may in turn be calculated depending on the agent's choices. But technically we can consider an agent who chooses c̅ in the beginning without ever looking at DP̅ or r̅.
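To fix intuitions, the following Python sketch implements this interaction loop. All names and payoffs are purely illustrative and not part of the formalism; in particular, the pseudo-random bit is only a stand-in for a payoff the agent cannot predict, and the placeholder policy is not a BRIA.

```python
import random

def decision_problem(t):
    # A toy DP_t with two options: a hard-to-predict payoff vs. a known payoff x_t.
    return [("a_pi", t), ("const", round(random.random(), 2))]

def reward(t, option):
    # Only the reward of the *chosen* option is ever evaluated/observed.
    if option[0] == "a_pi":
        return random.getrandbits(1)   # stand-in for an intractable digit of pi
    return option[1]

def agent(t, dp, history):
    # Placeholder policy; a BRIA would learn to do better than this.
    return dp[0]

history = []
for t in range(1, 11):
    dp = decision_problem(t)
    c = agent(t, dp, history)
    r = reward(t, c)
    history.append((dp, c, r))
```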
We will often consider specific and somewhat unusual types of decision problems as examples, in particular ones where options are terms in some mathematical logic. However, our theory applies at least as well to more traditional, partly empirical decision problems. For example, one could imagine that each option describes a particular medical treatment and that the agent has to select one of the treatments for a particular patient.
We focus on learning myopically optimal behavior. That is, we want our agent to learn to choose whatever gives the highest reward for the present decision problem, regardless of what consequences that has for future decision problems.
§ COMPUTATIONAL CONSTRAINTS AND PARADOXES OF SELF-REFERENCE
In this paper, we develop a normative theory of rational learning in this setting. The standard theory for rational decision making under uncertainty is
Bayesian decision theory (BDT) (<cit.>; for contemporary overviews, see <cit.>).
The main ideas of this paper are motivated by a specific shortcoming of BDT: the assumption that the agent who is subject to BDT's recommendations is logically omniscient and in particular not limited by any computational constraints. [Essentially the same issue has also sometimes been called the problem of old evidence. Cf. <Ref>.]
We develop a theory that gives recommendations to computationally bounded (and therefore in particular logically uncertain) agents. In the following, we give two kinds of examples to illustrate the role of logical omniscience in BDT and motivate our search for an alternative theory.
Mere intractability
The first problem is that in most realistic choice problems, it is intractable to follow BDT.[Pointing out this type of issue with BDT has a long history in many different strands of literature, see, e.g., the overviews given by Wheeler () and Garrabrant et al. ().]
Bayesian updating and Bayes-optimal decision making are only feasible if the environment is small or highly structured (<cit.>; <cit.>; <cit.>).
Even if the agent had a perfectly accurate world model, determining the optimal choice may require solving computationally hard problems,
such as the traveling salesman problem, protein design <cit.>, planning in 2-player competitive games (e.g., <cit.>; <cit.>), etc. Optimal choice may also rely on whether particular mathematical claims are true, e.g., when assessing the safety of particular cryptographic methods. In all these problems, BDT requires the agent to perfectly solve the problem at hand. However, we would like a theory of rational choice that makes recommendations to realistic, bounded agents who can only solve such problems approximately.[Here is an analogy to explain this critique of BDT. A trivial theory of rational choice is the following: Take the option that is best given the model that accurately describes the world. This theory assumes omniscience about both logical and empirical facts. From a BDT perspective, it is unsatisfactory because we do not know what model describes the actual world. A proponent of the trivial theory could argue that one should simply approximate the trivial theory. But it is unclear how one should perform this approximation and from a BDT perspective this is an essential question that a theory of rational choice should answer in a principled way. Similarly, we believe that there should be a principled normative theory of how one should address problems that are hard to solve exactly.]
Consider a decision problem DP={a_1,a_2 }, where the agent knows that option a_1 pays off the value of the 10^100-th digit of the binary representation of π. Option a_2 pays off 0.6 with certainty. In our formalism, r equals the 10^100-th digit of the binary representation of π if c=a_1 and r=0.6 if c=a_2. All that Bayesian decision theory has to say about this problem is that one should calculate the 10^100-th digit of π; if it is 1, choose a_1; otherwise choose a_2. Unfortunately, calculating the 10^100-th digit of π is likely intractable.[Remote digits of π are a canonical example in the literature on bounded rationality and logical uncertainty <cit.>.
To the knowledge of the authors it is unknown whether the n-th digit of π can be guessed better than random in less than O(n) time. It is (to our knowledge) not even known whether all digits appear with equal frequency in the decimal representation of π. For a general, statistical discussion of the randomness of digits of π, see Marsaglia .
] Hence, Bayesian decision theory does not have any recommendations for this problem for realistic reasoners. At the same time, we have the strong normative intuition that – if digits of π indeed cannot be predicted better than random under computational limitations – it is rational to take a_2.
We would like our theory to make sense of that intuition.
We close with a note on what we can expect from a theory about rational decision making under computational bounds. A naïve hope might be that such a theory could tell us how to optimally use some amount of compute (say, 10 hours on a particular computer system) to approximately solve any given problem (cf. our discussion in <Ref> of Russell et al.'s <cit.> work on bounded optimality); or that it might tell us in practice at what odds to bet on, say, Goldbach's conjecture with our colleagues. In this paper, we do not provide such a theory and such a theory cannot exist.[
For example, Blum's speedup theorem states, roughly, that there is a decision problem such that for every algorithm solving that decision problem, there exists another, much faster algorithm solving that decision problem. Also, by, e.g., Rice's theorem, it is not even decidable, for a given computational problem, whether it can be solved within some given computational constraints. Also see Hutter et al. () for some discussion, including a positive result, i.e., an algorithm that is in some sense optimal for all well-defined computational problems.] We must settle for a more modest goal.
Since our agents face decision problems repeatedly, our rationality requirement will be that the agent learns to approximately solve these problems optimally in the limit. For example, if digits of π are pseudo-random in the relevant sense, then a rational agent must converge to betting 50-50 on remote binary digits of π. But it need not bet 50-50 “out-of-the-box”.
While our paper thus focuses on a general theoretical answer to the problem of intractability for rational agents, note that the perspective of assigning probabilities to logical claims has recently also been used to derive novel results in complexity theory <cit.>.
Paradoxes of self-reference, strategic interactions, and counterfactuals
A second problem with BDT and logical omniscience more generally is that it creates inconsistencies if the values of different available options depend on what the agent chooses. As an example, consider the following decision problem, which we will call the Simplified Adversarial Offer (SAO) <cit.>. Imagine that an artificial agent chooses between two available alternatives a_0 and a_1, where a_0 is known to pay off 1/2 with certainty, and a_1 is known to pay off 1 if the agent's program run on this decision problem chooses a_0, and 0 otherwise.
Now assume that the agent chooses deterministically and optimally given a logically omniscient belief system. Then the agent knows the value of each of the options. This also means that it knows whether it will select a_0 or a_1.
But given this knowledge, the agent selects a different option than what the belief system predicts. This is a contradiction. Hence, there exists no agent that complies with standard BDT in this problem.
Compare the examples of Oesterheld and Conitzer and Spencer ; also see Demski and Garrabrant () for a discussion of another, subtler issue that arises from logical omniscience and introspection.
We are particularly interested in problems in which such failure modes apply. SAO is an extreme and unrealistic example, selected to be simple and illustrative. However, strategic interactions between different rational agents share the ingredients of this problem: Agent 1 is thinking about what agent 2 is choosing, thereby creating a kind of reference to agent 2 in agent 2's environment. We might even imagine that two AI players know each others' exact source code (cf. , Sect. 10.4; ; ; ; ; ). Further, it may be in agent 2's interest to prove wrong whatever agent 1 believes about agent 2. For a closely related discussion of issues of bounded rationality and the foundations of game theory, see Binmore and references therein (; ).
Besides the failure of BDT in particular, the Adversarial Offer is illustrative of the challenge of developing a normative rationality criterion for such general decision problems. Many notions of optimal behavior are based on a requirement that a rational agent should not be outperformed by an alternative strategy.[In multi-armed bandit problems, for example, one usually considers the goal of minimizing regret (see the discussion in <Ref>). BDT itself can also be motivated in this way, as is done in the complete class theorems (; ; ).] But any agent may end up in a decision problem sequence that at each time t poses the problem SAO_c_t. Regardless of what c_t selects, it always selects the option with the lowest reward. Hence, the agent choosing according to the sequence of c_t performs worse than any agent that deviates from c_t at least some of the time.
An alternative perspective on this is that in our setting (as in the real world), counterfactual claims are problematic. Although one can resolve the value of all options in SAO_c, it seems odd for an agent after choosing a_0 to believe, Had I chosen a_1, I would have gotten 1. Arguably the right counterfactual statement in this case is, Had I chosen a_1, I would have gotten 0, even though a_1 in fact resolves to 1. However, it is unclear how the right counterfactuals can be constructed in general. In the present paper, we therefore avoid the reliance on any such counterfactual claims even if some form of counterfactual is revealed or can be calculated ex post by the agent. In practice another motivation to not rely on counterfactual claims is that when interacting with the real world, counterfactuals are not directly revealed.[What counterfactuals are and what role they should play in rational choice is, of course, one of the most widely discussed questions in analytic philosophy. For an introduction to the literature on counterfactuals in general, see, for example, Starr (). Properly relating our views and the approach of the present paper to this vast literature could easily fill its own paper. Note that in the present context, logical counterfactuals are particularly relevant, which seem even harder to make sense of.
For discussions of the role of counterfactuals (and the related concept of causality) in rational choice in particular, see, e.g., Eells (), Joyce (), or Ahmed (). In <Ref>, we will relate the present theory to a particular topic of that literature.]
Without counterfactuals, we face a different problem: Imagine an agent choosing between “1/3" and “2/3". How can we design a rationality requirement that rules out an agent who simply always takes “1/3"?
In the next section, we will give an answer to this question. Roughly, our approach is the following: We do not ever make claims about counterfactuals in a particular decision problem. However, we require that in a sequence of decision problems, a rational agent tests different hypotheses about what the optimal choice is. For example, there will be a hypothesis which claims that in this type of problem one should choose “2/3" and that doing so provides a payoff of 2/3. This hypothesis has to be tested by actually taking “2/3" and seeing whether the promised payoff of 2/3 was realized. This particular hypothesis keeps its promise and is therefore prudent to follow, unless another hypothesis (which has either proved reliable or is up for testing) promises an even higher reward.
For a closely related discussion of issues of bounded rationality, counterfactuals and the foundations of game theory, see <cit.> and references therein.
§ THE RATIONALITY CRITERION
In short, our approach is as follows: Agents have to not only choose actions, but also estimate in each round the reward they will receive. As part of our rationality criterion we require that these estimates are not systematically above
what the agent actually obtains. Further, we consider rationality relative to some set of hypotheses, which in turn recommend actions and promise that some reward is achieved when following the recommendation. To satisfy computational constraints, we can restrict the set of hypotheses to only include efficiently computable ones.
Roughly, our rationality criterion then states that if a hypothesis infinitely often claims strictly higher reward than the agent estimates for its own choice, then the agent must test this hypothesis infinitely often. Testing requires taking the option recommended by the hypothesis in question. To reject a hypothesis, these tests must indicate that the hypothesis consistently over-promises.
§.§ Preliminary definitions
An estimating agent α̅ is a sequence of choices from the available options α^c_t∈_t and estimates α^e_t∈ [0,1]. Our rationality criterion uses estimating agents. For brevity, we will say agent instead of estimating agent throughout the rest of this paper.
For example, let SAO_α,t be the Simplified Adversarial Offer for the agent at time t as described in <Ref>. Then we might like an agent who learns to choose α_t^c=a_0 (which pays 1/2 with certainty) and estimate α_t^e=1/2.
A hypothesis h has the same type signature as an estimating agent. When talking about hypotheses, we will often refer to the values of h_t^e as promises and to the values of h_t^c as recommendations.
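In code, estimating agents and hypotheses can therefore be given one and the same type: a map from the time step and the current decision problem to a (choice, estimate) pair, where for a hypothesis the estimate is read as a promise and the choice as a recommendation. The sketch below is only an illustrative signature (all names are ours, not part of the formal setup).

```python
from typing import Any, Callable, List, Tuple

Option = Any
DecisionProblem = List[Option]
# An estimating agent and a hypothesis share this signature: at time t, given
# DP_t, return a chosen/recommended option and an estimate/promise in [0, 1].
Estimator = Callable[[int, DecisionProblem], Tuple[Option, float]]

def constant_hypothesis(index: int, promise: float) -> Estimator:
    """Hypothesis that always recommends the index-th option and promises `promise`."""
    def h(t: int, dp: DecisionProblem) -> Tuple[Option, float]:
        return dp[index], promise
    return h
```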
Our rationality criterion will be relative to a particular set of hypotheses ℍ. In principle, ℍ could be any set of hypotheses, e.g., all computable ones, all three-layer neural nets, all 8MB computer programs, etc. Generally, ℍ should contain any hypothesis (i.e., any hypothesis about how the agent should act) that the agent is willing to consider, similar to the support of the prior in Bayesian theories of learning, or the set of experts in the literature on multi-armed bandits with expert advice.
Following Garrabrant et al. , we will often let ℍ be the set of functions computable in O(g(t)) time, where g is a non-decreasing function.
We will call these hypotheses efficiently computable (e.c.). Note that not all time complexity classes can be
written as O(g(t)). For example, the set of functions computable in polynomial time cannot be written in such a way. This simplified set is used to keep notation simple. Our results generalize to more general
computational complexity classes.
Restricting ℍ to functions computable in O(g(t)) relates to our goal of developing computationally bounded agents (cf. <Ref>). It is not clear whether computational constraints related to t are the most relevant – usually an agent's computational power does not increase as time goes on. An alternative might be to let the computational constraints depend on some number specified by the decision problems themselves. This would require some extra notation and assumptions about the environment, however, without changing our analysis much. Another question is whether asymptotic bounds are more relevant than absolute bounds. After all even O(1) contains hypotheses that cannot in practice be evaluated, which we could avoid by considering only hypotheses that take 10 seconds on a particular machine. We will nevertheless often use sets ℍ defined by asymptotic bounds. This is done for the usual reason: asymptotic complexity classes afford closure properties that simplify analysis. For example, if two operations are in O(g(t)), then compositions of the two are also in O(g(t)).
§.§ No overestimation
We now describe the first part of our rationality requirement, which is that the estimates should not be systematically above what the agent actually obtains. The criterion itself is straightforward, but its significance will only become clear in the context of the hypothesis coverage criterion of the next section.
For T∈ℕ, we call ℒ_T(α̅,r̅)≔∑_t=1^T α_t^e - r_t
the cumulative overestimation of an agent α̅ on r̅.
We say that an agent α̅ for DP̅,r̅ does not overestimate (on average in the limit) if lim sup_T→∞ ℒ_T(α̅,r̅) / T ≤ 0.
In other words, for all ϵ >0, there should be a time t such that for all T>t, ℒ_T(α̅,r̅) / T ≤ϵ.
Note that the per-round overestimation of boundedly rational inductive agents as defined below will usually but need not always converge to 0; it can be negative in the limit (see <Ref>).
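As a rough illustration, the cumulative overestimation and a finite-horizon stand-in for the criterion can be computed as follows. The true criterion is asymptotic, so any check on finitely many rounds is only suggestive; the tolerance and burn-in below are arbitrary choices of ours.

```python
def cumulative_overestimation(estimates, rewards):
    # L_T(alpha, r) = sum_{t=1}^T (alpha_t^e - r_t)
    return sum(e - r for e, r in zip(estimates, rewards))

def no_overestimation_proxy(estimates, rewards, eps=0.01, burn_in=1000):
    # Finite-horizon stand-in: per-round overestimation at most eps after burn_in.
    return all(
        cumulative_overestimation(estimates[:T], rewards[:T]) / T <= eps
        for T in range(burn_in, len(estimates) + 1)
    )
```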
§.§ Covering hypotheses
We come to our second requirement, which specifies how the agent α̅ relates to the hypotheses in ℍ.
We say that h̅ outpromises α̅ or that α̅ rejects h̅ at time t if h_t^e>α_t^e.
We distinguish two kinds of hypotheses:
First, there are hypotheses that promise higher rewards than α̅^e in only finitely many rounds. For example, this will be the case for hypotheses that α̅ trusts and takes into account when choosing and estimating. Also, this could include hypotheses that recommend an inferior option with an accurate estimate, e.g., hypotheses that recommend “1/3" and promise 1/3 in {“1/3",“2/3" }. For all of these hypotheses, we do not require anything of α̅. In particular, α̅ need not test these hypotheses.
Second, some hypotheses do infinitely often outpromise α̅^e. For these cases, we will require our boundedly rational inductive agents to have some reason to reject these hypotheses. To be able to provide such a reason, α̅ needs to test these hypotheses infinitely often.[If we only test them finitely many times, a correct hypothesis may be rejected due to bad luck (e.g., if rewards are random, as discussed in <Ref>).] For the reasons described in <Ref>, testing a hypothesis requires choosing the hypothesis' recommended action.
We call a set M⊆ℕ a test set of α̅ for h̅ if for all t∈ M, α^c_t=h^c_t.
For α̅ to infinitely often reject h̅, these tests must then show that h̅ is not to be trusted (in those rounds in which they promise a reward that exceeds α̅^e). That is, on these tests, the rewards must be significantly lower than what the hypothesis promises. We thus introduce another key concept.
Let h̅ be a hypothesis and M⊆ℕ be a test set of α̅ for h̅. We call l_T(α̅,r̅,M,h̅)≔∑_t∈ M_≤ T r_t - h^e_t the (empirical) record of h (on M).
Here, M_≤ T≔{ t∈ M| t≤ T } is defined to be the set of elements of M that are at most T.
We now have all the pieces together to state the coverage criterion, which specifies how we want our agents to relate to the hypotheses under consideration.
Let α̅ be an agent, h̅ be a hypothesis, and let B be the set of times t at which α̅ rejects h̅. We say that α̅ covers h̅ with test set M if either B is finite or the sequence ( l_T(α̅,r̅,M,h̅) )_T∈ B goes to negative infinity.
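The record and a finite-horizon stand-in for the coverage criterion might look as follows in code (1-based time indices; the thresholds standing in for "finitely many" and "goes to negative infinity" are of course arbitrary, since the real criterion is asymptotic):

```python
def empirical_record(rewards, promises, test_set, T):
    # l_T(alpha, r, M, h) = sum_{t in M, t <= T} (r_t - h_t^e), times 1-based
    return sum(rewards[t - 1] - promises[t - 1] for t in test_set if t <= T)

def covers_proxy(agent_estimates, agent_choices, rewards,
                 h_promises, h_choices, test_set,
                 max_rejections=100, record_threshold=-25.0):
    # M must be a valid test set: the agent follows h's recommendation on M.
    assert all(agent_choices[t - 1] == h_choices[t - 1] for t in test_set)
    # B: rounds in which h outpromises the agent (h_t^e > alpha_t^e).
    B = [t for t, (he, ae) in enumerate(zip(h_promises, agent_estimates), 1) if he > ae]
    if len(B) <= max_rejections:          # stand-in for "B is finite"
        return True
    # stand-in for "the record along B goes to -infinity"
    return empirical_record(rewards, h_promises, test_set, B[-1]) <= record_threshold
```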
§.§ The boundedly rational inductive agent criterion
We now state the BRIA criterion, the main contribution of this paper.
Let α̅ be an agent for DP̅,r̅. Let ℍ={h_1,h_2,...} be a set of hypotheses.
We say α̅ is a boundedly rational inductive agent (BRIA) for DP̅,r̅ covering ℍ with test sets M_1,M_2,... if α̅ does not overestimate and for all i, α̅ covers h_i with test set M_i.
In the following, whenever α̅ is a BRIA, we will imagine that the test sets are given as a part of α̅. For example, if we say that α̅ is computable in, say, time polynomial in t, then we will take this to mean that α̅ together with a list at time t of tested hypotheses can be computed in polynomial time.
§.§ Examples
Betting on digits of π
Consider the decision problem sequence DP̅ with DP_t={a_t^π,x_t } for all t, where a_t^π pays off the 2^t-th binary digit of π – i.e., r_t is the 2^t-th digit of π if α_t^c=a_t^π – and x_t∈ [0,1] pays off x_t. As usual, we assume that the 2^t-th binary digits of π are pseudorandom (in a way we will make precise in <Ref>) and uniformly distributed (as they seem to be, cf. <Ref>).
We would then expect boundedly rational agents to (learn to) choose a_t^π when x_t<1/2 and choose x_t when x_t>1/2.
We now consider an agent α̅ for this decision problem sequence. We will step-by-step impose the components of the BRIA criterion on α̅ to demonstrate their meaning and (joint) function in this example. We start by imposing the no overestimation criterion on α̅ without any assumptions about hypothesis coverage – what can we say about α̅ if we assume that it does not overestimate? As noted earlier, the no overestimation criterion alone is weak and in particular does not constrain choice at all. For instance, α̅ might always choose α^c_t=a_t^π and alternate estimates of 0 and 1; or it might always choose x_t and estimate x_t-1.
We now impose instances of the hypothesis coverage criterion. We start with the hypothesis h_x which always recommends choosing x_t and promises a reward of x_t. Note that for all we know about the decision problem sequence this hypothesis does not give particularly good recommendations. However, in the context of our theory, h_x is useful because it always holds its promises. In particular, h_x's empirical record on any test set is 0. Hence, if α is to cover h_x, then α can only reject h_x finitely many times. By definition, this means that α_t^e≥ x_t for all but finitely many t∈ℕ.
With the no overestimation criterion, it follows that α on average obtains utilities at least equal to x_t. But α's choices may still not match our bounded ideal. For example, α may always choose x_t.
Next, consider for ϵ>0, the hypothesis h_π^ϵ that always recommends a_t^π and estimates 1/2-ϵ. Whether h_π^ϵ holds its promises is a more complicated question. But let us assume that α̅ covers h_π^ϵ with some test set M, and let us further assume that whether t∈ M is uncorrelated with the 2^t-th binary digit of π, for instance, because predicting the 2^t-th binary digit of π better than random cannot be done using the agent's computational capabilities. Then h_π^ϵ's empirical record on M will go to ∞, assuming that M is infinite – after all, following h_π^ϵ's recommendations yields a reward of 1/2 on average, exceeding its promises of 1/2-ϵ. (Note that if the 2^t-th binary digits of π act like random variables, then this would presumably not be true for ϵ=0, due to the well-known recurrence (a.k.a. Gambler's ruin) result about the simple symmetric random walk on the line <cit.>.) With the assumption that α̅ covers h_π^ϵ, it follows that for all but finitely many t, α^e_t≥1/2-ϵ. Now imagine that α not only covers one particular h_π^ϵ, but that there exist arbitrarily small positive ϵ such that α covers the hypothesis h_π^ϵ. Then it follows that in the limit as t→∞, α_t^e≥1/2.
The above three conditions – no overestimation, coverage of h_x and coverage of h_π^ϵ for arbitrarily small ϵ – jointly imply that α̅ exhibits the desired behavior. Specifically, we have shown that α̅ must estimate at least max{1/2,x_t} in the limit. By the no overestimation criterion, α̅ also has to actually obtain at least max{1/2,x_t} on average. And if α̅ cannot guess the 2^t-th digits of π better than random, then the only way to achieve max{1/2,x_t} on average is to follow with limit frequency 1 the policy of choosing a_t^π when x_t<1/2 and x_t when x_t>1/2.
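The following toy simulation illustrates the key step of this argument empirically, using Python's PRNG as a stand-in for the (assumed pseudo-random) digits of π: the record of h_x is identically 0 on any test set, while the record of h_π^ϵ on a test set chosen independently of the digits drifts upward at rate roughly ϵ per test. The parameters are arbitrary.

```python
import random
random.seed(0)

T, eps = 20000, 0.05
digits = [random.getrandbits(1) for _ in range(T)]  # stand-in for the 2^t-th digits of pi

# h_x recommends x_t and promises x_t, so its record is identically 0 on any test set.
# h_pi^eps recommends a_t^pi and promises 1/2 - eps; on a test set chosen
# independently of the digits, its record drifts towards +infinity.
test_set = [t for t in range(T) if random.random() < 0.1]
record = sum(digits[t] - (0.5 - eps) for t in test_set)
print(len(test_set), record)   # record is roughly eps * |test set| > 0
```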
Adversarial offers
Let α be an agent who faces a sequence of instances of SAO. In particular at time t, the agent faces SAO_α,t={ a_0,a_1}, where a_0 pays off 1/2 with certainty. Intuitively, a_1 is evaluated to 1 if on the present problem α chooses a_0 and to 0 otherwise. Note, however, that the former fact is never relevant to computing r_t. So effectively r_t=1/2 if α_t^c=a_0 and r_t=0 otherwise.
Assume that α does not overestimate and that it covers the hypothesis h which estimates 1/2 and recommends a_0 in every round. Hypothesis h will always have an empirical record of 0 on any test set M since it holds its promises exactly. Hence, if α is to cover h, it can reject h only finitely many times. Thus, α_t^e≥1/2 in all but finitely many rounds. To satisfy the no overestimation criterion, α must therefore obtain rewards of at least 1/2 on average in the limit. Since a_1 pays off 0 whenever it is taken by α, it must be α_t^c=a_0 with limit frequency 1.
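Since a_1 pays 0 whenever it is actually taken, the observed reward rule of SAO collapses to a simple function of the agent's own choice. The snippet below (purely illustrative) encodes this and tallies the average reward of the two constant policies, showing why following the hypothesis "take a_0, expect 1/2" is the only way to meet its promise.

```python
def sao_reward(choice):
    # Reward the agent itself observes in SAO_alpha: a_1 pays 1 only if the
    # agent chooses a_0, so a_1 pays 0 whenever it is actually taken.
    return 0.5 if choice == "a_0" else 0.0

T = 1000
avg_a0 = sum(sao_reward("a_0") for _ in range(T)) / T   # 0.5, exactly h's promise
avg_a1 = sum(sao_reward("a_1") for _ in range(T)) / T   # 0.0
print(avg_a0, avg_a1)
```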
§ COMPUTING BOUNDEDLY RATIONAL INDUCTIVE AGENTS
As described in <Ref>, the goal of this paper is to formulate a rationality requirement that is not self-contradictory and that can be satisfied by computationally bounded agents. Therefore, we must show that one can actually construct BRIAs for given ℍ and that under some assumptions about ℍ, such BRIAs are computable (within some asymptotic bounds).
Theorem (Computing BRIAs).
Let ℍ be a computably enumerable set consisting of (O(g(t))-)computable hypotheses. (Let g∈Ω(log).) Then there exists an algorithm that computes a BRIA covering ℍ (in O(g(t)q(t)), for arbitrarily slow-growing, O(g(t))-computable q with q(t)→∞) for any DP̅, r̅.
We here give a sketch of our construction. For each decision problem, we run a first-price sealed-bid auction among the hypotheses. The highest-bidding hypothesis determines the agent's choice and estimate and is tested in this round. For each hypothesis, we maintain a wealth variable that tracks the hypothesis' empirical record. A hypothesis' bid is bounded by its wealth. Thus, when a hypothesis outpromises the agent, this implies that the hypothesis' wealth is low. Upon winning an auction, the hypothesis pays its promise and gains the reward obtained after following the hypothesis' recommendation. We further distribute, at each time t, allowance to the hypotheses. The overall allowance per round is finite and goes to zero. The cumulative allowance for each hypothesis goes to ∞ over time. Thus, if a hypothesis is rejected infinitely often, then this requires the hypothesis to have spent all its allowance and thus for its record among those rejection rounds to go to -∞. Moreover, the cumulative overestimation is bounded by the overall allowance distributed, and thus the per-round overestimation goes to 0.
In <Ref>, we provide a construction for BRIAs and prove that it has the claimed computability properties. It can similarly be shown that, for example, a BRIA relative to the class P of hypotheses computable in polynomial time can be computed in arbitrarily close to polynomial time, i.e. in O(t^q(t)) for arbitrarily slow-growing q with q(t)→∞.
The next result shows that the BRIAs given by <Ref> are optimal in terms of complexity.
Theorem (No e.c. BRIA).
Let α be a BRIA for DP̅,r̅,ℍ. Assume that there are infinitely many t such that |DP_t|≥ 2 and α_t^e<1. If ℍ is the set of (O(g(t))-)computable hypotheses, then α is not computable (in O(g(t))).
We prove (and discuss) this in <Ref>.
§ LOWER BOUNDS ON AVERAGE REWARDS
Options with payoff guarantees
Throughout this section, we will show that BRIAs satisfy many desiderata that one might have for rational decision makers. We start with a simple result which shows that if at each time t one of the options can be efficiently shown to have a value of at least L_t, then a BRIA will come to obtain at least L_t on average.
Theorem (Easy options).
Let α̅ be a BRIA for DP̅,r̅ and the set of e.c. hypotheses. Let a̅ be a sequence of terms in 𝒯 s.t. for all t∈ℕ, it holds that a_t∈ DP_t and α_t^c= a_t ⟹ r_t ≥ L_t
for some e.c. sequence L̅. We require also that the a_t are efficiently identifiable from the sets DP_t. Then in the limit as T→∞ it holds that ∑_t=1^T r_t / T≥∑_t=1^T L_t / T.
A formal proof is given in <Ref>. The proof idea is simple. Consider the hypothesis that at each time t recommends a_t and promises L_t. This hypothesis always keeps its promises. Hence, if α is to cover this hypothesis, α can be outpromised by it only finitely many times.
<Ref> implies that when the value of all options is e.c., then a BRIA must choose the best available option. For example, when the choice is between “1/3" and “2/3", a BRIA has to choose “2/3" with frequency 1.
We can interpret <Ref> as providing an immunity to money extraction schemes, a widely discussed rationality condition. If a BRIA can leave with a certain payoff of L_t, it will on average leave with at least L_t. For example, in SAO of <Ref>, a BRIA walks away with at least 1/2, which in turn means that it chooses a_0=“1/2" with frequency 1. As <cit.> and <cit.> show, a different normative theory of rationality, called causal decision theory, can be used as a money pump with this example.
Another corollary of <Ref> is that BRIAs must learn and use empirical facts that can be efficiently deduced from what is revealed by DP̅ and r̅. For example, imagine that in one round, DP_t,r_t reveal that the minimum of the populations of Hamburg and Amsterdam is 0.8 million. Then in later rounds, this information can be used to efficiently compute lower bounds on other options. For example, the option that pays off the maximum of the populations of Hamburg and Detroit in millions can be deduced to be at least 0.8. If such decision problems occur infinitely often, BRIAs must converge to exploiting such inferences.
Options with algorithmically random payoffs
<Ref> only tells us something about truly random variables. But a key goal of our theory is to also be able to assign expected rewards to
algorithmically random sequences, i.e., sequences that are deterministic and potentially even computable, but relevantly unpredictable under computational constraints. We first offer a formal notion of algorithmic randomness.
Definition (Bounded vMWC randomness).
We say a sequence y̅ is (O(h(t))-boundedly) van Mises–Wald–Church (vMWC) random with means μ̅ if for every infinite set S⊆ℕ that is decidable (in O(h(t)) time) from the available information, we have that lim_T→∞1/|S_≤ T|∑_t∈ S_≤ T y_t-μ_t=0.
Thus, we call a sequence random if there is no (O(g(t))-)computable way of selecting in advance members of the sequence whose average differs from the means μ̅.
<Ref> generalizes the standard definition of (unbounded) vMWC randomness <cit.> to non-binary values with means μ̅ other than 1/2 and computational constraints with outside input (e.g., from DP̅, which could contain options containing information such as, by the way, the trillionth digit of π is 2). The notion of vMWC randomness is generally considered quite weak <cit.>.
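As a sanity check on the definition, for a truly random 0/1 sequence with means 1/2, any selection rule that only uses information available before time t should select a subsequence whose average gap to the mean vanishes. A small numerical illustration, with an arbitrary computable selection rule of our choosing, is sketched below.

```python
import random
random.seed(1)

T = 200000
y = [random.getrandbits(1) for _ in range(T)]
mu = 0.5

# A selection rule decidable from information available before time t:
# select t whenever t is odd or the previous outcome was a 1.
S = [t for t in range(1, T) if t % 2 == 1 or y[t - 1] == 1]
avg_gap = sum(y[t] - mu for t in S) / len(S)
print(avg_gap)   # close to 0, as the definition requires for random sequences
```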
Theorem (Pseudo-random lotteries).
Let μ̅ be an e.c. sequence on [0,1]. Let α be an O(h(t))-computable BRIA for a decision problem sequence DP̅ with rewards r̅ covering all e.c. hypotheses. Let a̅ be a sequence of terms in 𝒯 s.t. a_t∈ DP_t for all t∈ℕ and the payoffs r_t in rounds with α_t^c=a_t are O(h(t))-boundedly vMWC random with means μ̅. Then in the limit as T→∞, it holds that ∑_t=1^T r_t/T ≥∑_t=1^T μ_t/T.
We show an analogous result for Schnorr bounded randomness <cit.> in <Ref>.
Note that we could replace the = sign in the last line of the definition with a ≥ and all of the following would still hold – however, the resulting definition does not reasonably capture randomness.
Analogous results for truly random options follow from results for algorithmically random r̅ and the fact that a sequence of truly random, independent numbers is algorithmically random almost surely. We give a direct proof in
<Ref>.
§ BOUNDEDLY RATIONAL INDUCTIVE AGENTS AS A FOUNDATION FOR GAME THEORY
§.§ Games as decision problems
We first recap basic game-theoretic concepts. For a thorough introduction to game theory, see Osborne or any other textbook on the topic. A (two-player) game consists of two finite sets of (pure) strategies A_1,A_2, one set for each player, and two payoff functions u_1,u_2 A_1× A_2 → [0,1].
A correlated strategy profile is a distribution 𝐜∈Δ(A_1× A_2) over A_1× A_2. We can naturally extend utility functions to correlated strategy profiles as follows: u_i(𝐜)=∑_𝐚∈ A_1× A_2 c_𝐚 u_i(𝐚).
We call a correlated strategy profile 𝐜 strictly individually rational if each player's payoff in 𝐜 is greater than their pure strategy maximin payoff, i.e., u_i(𝐜) > max_a_i∈ A_imin_a_-i∈ A_-i u_i(a_i,a_-i).
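For concreteness, the following snippet computes pure-strategy maximin payoffs and checks strict individual rationality of a correlated strategy profile for a Prisoner's Dilemma whose payoffs have been scaled into [0,1]; the particular numbers are only illustrative.

```python
# Prisoner's Dilemma with payoffs scaled into [0, 1] (illustrative numbers).
A1 = A2 = ["C", "D"]
u1 = {("C", "C"): 0.6, ("C", "D"): 0.0, ("D", "C"): 1.0, ("D", "D"): 0.2}
u2 = {(a1, a2): u1[(a2, a1)] for a1 in A1 for a2 in A2}

maximin1 = max(min(u1[(a1, a2)] for a2 in A2) for a1 in A1)   # 0.2
maximin2 = max(min(u2[(a1, a2)] for a1 in A1) for a2 in A2)   # 0.2

def expected(u, c):
    # Utility of a correlated strategy profile c: a distribution over A1 x A2.
    return sum(p * u[s] for s, p in c.items())

c = {("C", "C"): 1.0}   # always mutual cooperation
print(expected(u1, c) > maximin1 and expected(u2, c) > maximin2)   # True
```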
Now imagine that two BRIAs α̅_1,α̅_2 learn to play a game against each other. That is, we consider BRIAs α̅_1,α̅_2 for DP̅^α̅_1,DP̅^α̅_2 respectively, where DP^α̅_i_t=A_i for all t and i=1,2, and r_i,t=u_i(α_1,t^c,α_2,t^c).
Abusing notation a little, we use a_i∈ A_i to represent the available options in DP^α_i_t. For instance, we write α_i,t^c=a_i to denote that α_i chooses the option from DP^α_i_t that corresponds to a_i∈ A_i.
Note that this is a fairly specific setup. Other versions are possible. For example, instead of knowing the opponent's source code or mathematical definition precisely, we could imagine that they have some distribution over opponent BRIAs. After all, if we accept our BRIA criterion as a definition of rationality, then the common rationality assumption underlying game theory still leaves open which exact BRIA the other player uses.
Theorem (Folk theorem).
Let Γ be a game. Let ℍ_1,ℍ_2 be any sets of hypotheses. Let 𝐜∈Δ(A_1× A_2) be strictly individually rational. Then there exists 𝐜' arbitrarily close to 𝐜 and BRIAs α̅_1,α̅_2 covering ℍ_1,ℍ_2 for decision problem sequences DP̅^α_1,DP̅^α_2 with rewards r̅_1,r̅_2 based on Γ as defined above s.t. the empirical distribution of (α_1,t^c,α_2,t^c) converges to 𝐜', i.e., for all 𝐚∈ A_1× A_2, 1/T∑_t=1^T 1[(α_1,t^c,α_2,t^c)=𝐚] → c'_𝐚 as T→∞.
Conversely, if α_1,α_2 are BRIAs for sets of hypotheses ℍ_1 and ℍ_2 that contain at least the constant-time deterministic hypotheses, then in the limit as T→∞, ∑_t=1^T u_i(α_1,t^c,α_2,t^c)/T ≥max_a_imin_a_-i u_i(a_i,a_-i).
That is, in the limit each player receives at least their maximin utility.
<Ref> is compelling, because it means BRIAs can learn to cooperate in one-shot games where rational agents would otherwise fail to cooperate (e.g., contrast fictitious play, or regret learning, both of which necessarily converge to defecting in the Prisoner's Dilemma).
Note that our BRIA criterion is myopic, i.e., aimed at maximizing reward in the current round. Thus, even though the BRIAs in the above setting play repeatedly, the above result is unrelated to the folk theorems for repeated games.
The proof is given in <Ref>.
§ RELATED WORK
Multi-armed bandit problems
Our setting resembles a multi-armed bandit problem with expert advice (where ℍ is the set of “experts”). The main difference is that we only define r_t, the reward actually obtained by the agent. The literature on multi-armed bandit problems assumes that the problem also defines the (counterfactual) rewards of untaken options and defines rationality in terms of these rewards. As discussed in <Ref>, one of our motivations is to do away with these counterfactuals.
Within the literature on multi-armed bandit problems, some strands of work in statistical learning theory make assumptions that avoid the problems of bounded rationality and paradoxes of self reference.
For example, Yang and Zhu () and Agarwal et al. () assume that the agent can converge to having a fully accurate model of how the available actions give rise to rewards.
Other papers explicitly assume that the reward is determined by some linear function <cit.>. These assumptions allow a much simpler rationality requirement, namely some kind of convergence to optimal behavior (cf. <Ref>). Aside from the early (and very general) work of <cit.>, the literature on contextual multi-armed bandits has come to focus on achieving fast convergence rates (which we have given little consideration in this paper).
As we have argued in <Ref>, computationally complex reward functions pose quite different theoretical problems, including the impossibility of deciding based on accurate beliefs about the available options and of low-regret learning. We have argued that facing these issues head-on is important, e.g., for studying strategic interactions. We suspect that authors in this line of work generally do not have such problems in mind and are instead inspired by settings in which uncertainty is primarily empirical and computationally simple models can be somewhat accurate, e.g., when selecting treatments for a patient based on medical data.
Within the multi-armed bandit literature, the most closely related strand of work is the literature on adversarial multi-armed bandit problems with expert advice (; ).
Like this paper, this literature addresses this problem of bounded rationality by formulating rationality relative to a set of hypotheses (the eponymous experts). However, its rationality criterion is very different from ours: they require regret minimization and in particular that cumulative regret is sublinear, a condition sometimes called Hannan-consistency. As the Simplified Adversarial Offer shows, Hannan-consistency is not achievable in our setting. However, it does become achievable if we assume that the agent has access to a source of random noise that is independent from <cit.>. Importantly, the rationality criterion itself ignores the ability to randomize, i.e., it does not prescribe that the use of randomization be optimal in any sense.
We find it implausible to require rational agents to randomize to minimize regret; most importantly, regret minimization can require minimizing the rewards one actually obtains – see <Ref>.
At the same time, we conjecture that learners with low regret relative to a set of hypotheses ℍ satisfy a version of the BRIA criterion; see <Ref> for a preliminary result.
One interesting issue in the literature on multi-armed bandit problems with expert advice is that of reactive (a.k.a. non-oblivious) bandits. For example, there could be a bandit/decision problem sequence that at each time t>T pays the agent a dividend if the agent invests (i.e., foregoes a small reward) on day T. Like the BRIA criterion of this paper, standard notions of Hannan-consistency are myopic and therefore require that one learns to take the small reward today. Depending on the setting, this may be undesirable. Some authors have therefore explicitly considered the goal of maximizing reward non-myopically (; ; ). However, as these authors have noted, it is in general difficult to define sensible non-myopic notions of regret. The underlying problem is essentially the problem of making counterfactual claims that motivates much of the present paper (see <Ref>). Only one trajectory is observed, and in general it is difficult to evaluate claims about what would have happened if alternative strategies had been used. In the theory of multi-armed bandits, this problem is usually addressed by making assumptions that ensure that variants of typical notions of regret can be applied after all. In particular, it is assumed that the bandit is forgetful. Since BRIA theory does not rely on counterfactual claims, we believe that BRIA theory can be used to address this problem more generally and satisfactorily. It seems that one merely has to adapt the BRIA theory to incorporate non-myopia. This can be done, for example, by evaluating a bid h_i,t not based on the immediate reward r_t obtained after accepting it (h_i,t^c=α_t^c) but on the discounted reward
∑_t'=t^∞γ^t'-t r_t'.
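In code, such a non-myopic evaluation of an accepted bid might simply replace r_t by a truncated discounted sum of subsequent rewards. This is only a sketch of the suggestion above, with an arbitrary discount factor and horizon of our choosing.

```python
def discounted_return(rewards, t, gamma=0.95, horizon=200):
    # Truncated stand-in for sum_{t'=t}^infinity gamma^(t'-t) * r_{t'},
    # used to score a bid that was accepted at (1-based) time t.
    return sum(gamma ** (tp - t) * rewards[tp - 1]
               for tp in range(t, min(t + horizon, len(rewards)) + 1))
```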
§.§ Decision theory of Newcomb-like problems
Problems in which the environment explicitly predicts the agent have been discussed as Newcomb-like problems by (philosophical) decision theorists <cit.><cit.>. In fact, the Adversarial Offer of <cit.> is intended as a contribution to that theory.
Most of this literature has focused on discussing relatively simple cases (similar to SAO) in which people have strong but differing intuitions about what the rational choice should be. In these cases, BRIAs generally side with what has been called evidential decision theory. For example, by <Ref>, BRIAs learn to one-box in Newcomb's problem, cooperate in a Prisoner's Dilemma against an exact copy and choose a_0 in the Adversarial Offer.[The underlying reason is roughly that BRIAs implement a version of what has been called the law of effect (; ), which roughly states that behaviors will be repeated if they have been followed by high rewards in the past. As has been pointed out by, e.g., <cit.> and <cit.>, learning according to the law of effect yields evidential decision theory-like behavior.
Of course, many evidential decision theorists may disagree with particular recommendations that the present theory makes. For example, while BRIAs learn to cooperate against exact copies of themselves, a pair of sufficiently different BRIAs will learn to defect against each other (see <Ref>). In contrast, some have argued or simply assumed that evidential decision theory-type reasoning should lead to cooperation more generally <cit.>.] Of course, BRIAs differ structurally from how a decision theorist would usually conceive of an evidential decision theory-based agent. E.g., BRIAs are not based on expected utility maximization (though they implement it when feasible; see <Ref>). We also note that the decision theory literature has, to our knowledge, not produced any formal account of how to assign the required conditional probabilities in Newcomb-like problems.
§.§ Bounded rationality
The motivations of the present work as per <Ref>, especially <Ref>, coincide with some of the motivations for the study of bounded rationality. For instance, in one of his seminal works, <cit.> writes that the task is to replace “the global rationality of economic man with a kind of rational behavior that is compatible with [...] the computational capacities that are actually possessed by organisms, including man". Compare <cit.>. However, other motivations have been given for the study of bounded rationality as well <cit.>. More importantly, since much of bounded rationality is geared towards explaining or prescribing human (as opposed to AI) behavior, the characterization and analysis of computational capacities often differ from ours <cit.>. For instance, for most humans dividing 1 by 17 is a challenge, while such calculations are trivial for computers. (Meanwhile, the brain performs many operations (e.g., recognizing objects in images) that require much more complex computations.)
A few authors have also explicitly connected the general motivations of bounded rationality with paradoxes of self reference and game theory as discussed in <Ref> (<cit.>, <cit.>).
Anyway, the literature on bounded rationality is vast and diverse. Much of it is so different from the present work that a comparison hardly makes sense. Below we discuss a few approaches in this literature that somewhat resemble ours. In particular, like the present paper (and Hannan consistency) they specify rationality relative to a given set of hypotheses (that in turn is defined by computational constraints).
§.§ Russell et al.'s bounded optimality
Like our approach and the other approaches discussed in this related work section, Russell et al. define bounded optimality as a criterion relative to a set of (computationally bounded) hypotheses called agent programs (, Sect. 1.4; ; ). Roughly, an agent program is boundedly optimal if it is the optimal program from some set of bounded programs. For example, imagine that the agent will only face a single decision problem of type { a_m,x }, where a_m is known to pay off the m-th binary digit of π and x∈ [0,1] simply pays x. Then Russell et al. have us ask questions such as: among all computer programs of size at most 16MB that return an answer (i.e., either a_m or x) after running for at most a day on a specific computer, what program maximizes expected reward if m is sampled from ℕ with probability proportional to 1/m^2 and x is sampled uniformly from [0,1]? For large enough m, the optimal bounded program would simply choose a_m whenever x<1/2, and x whenever x>1/2, in line with our approach.
The main difference between our and Russell et al.'s approach is that we address the problems of <Ref> by developing a theory of learning to make such decisions, while Russell et al. address them by moving the decision problem one level up, from the agent to the design of the agent <cit.>. As one consequence, we can design general BRIAs, while it is in general hard to design boundedly optimal agents. (<cit.> give a special class of environments in which they show the design of boundedly optimal agents to be tractable.)
Of course, the feasibility of designing BRIAs comes at the cost of our agents only behaving reasonably in the limit. As an example, imagine that the agent will with probability 1 be offered some bet on Goldbach's conjecture. Then Russell et al.'s approach requires the agent's designer to determine whether Goldbach's conjecture is true. In contrast, our approach puts no requirement on an agent who only faces this one betting situation. Moreover, the designer of boundedly optimal agents as per Russell et al. may become subject to the paradoxes of <Ref> in problematic ways. Imagine that the designer is in turn some computer program d and let us say that d is posed the problem of designing a program that will face only decision problem SAO_d with probability 1. Here, SAO_d={ a_0,a_1 } is the decision problem where a_0 is known to pay off 1/2 and a_1 pays off 1 if d selects a program that selects a_0 in SAO_d and 0 otherwise. Then d cannot select the optimal agent program for this problem.
§.§ Garrabrant inductors
The present work is in part inspired by the work of Garrabrant et al. , who address the problem of assigning probabilities under computational constraints and possibilities of self-reference.
As an alternative to the present theory of BRIAs, one could also try to develop a theory of boundedly rational choice by maximizing expected utility using the Garrabrant inductor's probability distributions.
Unfortunately, this approach fails for reasons related to the challenge of making counterfactual claims, as pointed out by Garrabrant . As in the case of Hannan consistency, we can address this problem using randomization over actions. However, like Garrabrant (ibid.), we do not find it satisfactory to require randomization (cf. again <Ref>). We conjecture that, like regret minimizers, Garrabrant inductors with (pseudo-)randomization could be used to construct BRIAs.
§ CONCLUSION
We developed BRIA theory as a theory of bounded inductive rationality. We gave results that show the normative appeal of BRIAs. Furthermore, we demonstrated the theory's utility by using it to justify Nash equilibrium play. At the same time, the ideas presented lead to various further research questions, some of which we have noted above. We here give three more that we find particularly interesting. Can we modify the BRIA requirement so that it implies coherence properties à la Garrabrant et al. ? Do the frequencies with which BRIAs play the given pure strategies of a game converge to mixed Nash and correlated equilibria? Can BRIA theory be used to build better real-world systems?
§ ACKNOWLEDGMENTS
We thank Emery Cooper, Daniel Demski, Sam Eisenstat, Daniel Kokotajlo, Alexander Oldenziel, Nisan Stiennon, Johannes Treutlein and attendants of OptLearnMAS 2021 for helpful discussions.
§ PROOFS
§.§ An easy lemma about test sets
We start with a simple lemma which we will use to simplify a few of our proofs. Roughly, the lemma shows that to cover a hypothesis h, it never helps to test h in rounds in which h_t^e=0, i.e., in rounds in which h doesn't make any promises.
Let h̅ be a hypothesis and N⊆ℕ s.t. t∈ N implies h^e_t=0. Then if α̅ covers h̅ with test set M, α̅ covers h̅ with test set M-N.
For all T, we have that
l_T(α̅,r̅, M, h̅) = ∑_t∈ M_≤ T r_t - h_t^e = ∑_t∈ M_≤ T - N r_t - h_t^e + ∑_t∈ M_≤ T∩ N r_t - h_t^e
= ∑_t∈ M_≤ T - N r_t - h_t^e + ∑_t∈ M_≤ T∩ N r_t
≥ ∑_t∈ M_≤ T - N r_t - h_t^e
= l_T(α̅, r̅, M-N, h̅).
Thus, if l_T(α̅,r̅, M, h̅)→ -∞ as T→∞, it must also be that l_T(α̅,r̅, M-N, h̅)→ -∞ as T→∞.
§.§ Proof of <Ref>
Our proof is divided into four parts. First, we give the generic construction for a BRIA (1). Then we show that this is indeed a BRIA by proving that it satisfies the no overestimation criterion (2), as well as the coverage criterion (3). Finally, we show that under the assumptions stated in the theorem, this BRIA is computable in the claimed time complexity (4).
1. The construction
First, we need an allowance function A:ℕ×ℕ→ℝ_≥ 0 which, for each time n and hypothesis h_i, specifies a non-negative amount A(n,i) added to h_i's wealth at time n. The allowance function must satisfy the following requirements:
* Each hypothesis must get infinite overall allowance, i.e.,
∑_n=1^∞ A(n,i)=∞
for all hypotheses h_i.
* The overall allowance distributed per round n must go to zero, i.e.,
∑_n=1^N 1/N∑_i=1^∞ A(n,i) N→∞→ 0.
In particular, the allowance distributed in any particular round must be finite.
An example of such a function is A(n,i) = n^-1i^-2.
We can finally give the algorithm itself. Initialize the wealth variables as (for example) w_0(i)← 0 for each hypothesis h_i∈ℍ.
At time t, we run a (first-price sealed-bid[This format is mainly chosen for its simplicity. We could just as well use a second-price (or third-price, etc.) auction. We could use even different formats to get somewhat different BRIA-like properties. For instance, with combinatorial auctions, one could achieve cross-decision optimization.
]) auction for the present decision problem among all hypotheses. That is, we determine a winning hypothesis
i^*_t ∈ arg max_i∈ℕ min (h_i,t^e , w_t(i))
with arbitrary tie breaking. Intuitively, each hypothesis h_i bids h_i,t^e, except that it is constrained by its wealth w_t(i). The idea is that if h_i has performed poorly relative to its promises, then α should not trust h_i's promise for the present problem.
Let e^*_t∈[0,1] be the maximum (wealth-bounded) bid itself. We then define our agent at time t as α_t ≔ (h_i_t^*,t^c , e^*_t).
We update the wealth variables as follows. For all hypotheses i≠ i^*_t, we merely give allowance, i.e., w_t+1(i) ← w_t(i) + A(t,i).
For the winning hypothesis i_t^*, we update wealth according to w_t+1(i_t^*) ← w_t(i^*_t) + A(t,i_t^*) + r_t - e^*_t.
That is, the highest-bidding hypothesis receives the allowance and the reward obtained after following its recommendation (r_t), but pays its (wealth-bounded) bid (e_t^*).
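A compact Python sketch of this construction is given below, using the allowance A(n,i) = n^-1 i^-2 for i < n and A(n,i) = 0 otherwise (so that only finitely many hypotheses are active at any time, as in part 4 below). Hypotheses are supplied by the caller as a dict indexed by positive integers, each a function returning a (recommendation, promise) pair. The fallback when no hypothesis is active yet, and all names, are our own illustrative choices rather than part of the formal construction.

```python
from collections import defaultdict

def allowance(n, i):
    # A(n, i) = 1/(n * i^2) for i < n, else 0: finite support per round,
    # infinite cumulative allowance per hypothesis, per-round total -> 0.
    return 1.0 / (n * i * i) if i < n else 0.0

def run_bria(hypotheses, decision_problem, reward_fn, T):
    """hypotheses: dict i -> h_i with h_i(t, dp) = (recommendation, promise);
    decision_problem: t -> DP_t; reward_fn: (t, choice) -> r_t."""
    wealth = defaultdict(float)
    choices, estimates, test_sets = [], [], defaultdict(list)
    for t in range(1, T + 1):
        dp = decision_problem(t)
        active = [i for i in hypotheses if allowance(t, i) > 0.0]
        for i in active:                       # distribute allowance
            wealth[i] += allowance(t, i)
        # First-price auction: each active hypothesis bids its promise,
        # capped by its current wealth.
        bids = {i: min(hypotheses[i](t, dp)[1], wealth[i]) for i in active}
        if bids:
            winner = max(bids, key=bids.get)   # arbitrary tie breaking
            choice, estimate = hypotheses[winner](t, dp)[0], bids[winner]
        else:
            winner, choice, estimate = None, dp[0], 0.0   # arbitrary fallback
        r = reward_fn(t, choice)
        if winner is not None:
            # The winner pays its (wealth-bounded) bid and collects the reward.
            wealth[winner] += r - bids[winner]
            test_sets[winner].append(t)
        choices.append(choice)
        estimates.append(estimate)
    return choices, estimates, test_sets
```

No attempt is made here at efficiency or full generality; the sketch is only meant to make the bookkeeping described above concrete.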
2. No overestimation We will show that the cumulative overestimation is bounded by the sum of the allowance.
For each T, let B^+_T be the set of hypotheses whose wealth w_t(i) is positive for at least one time t∈{0,...,T}. Note that all highest-bidding hypotheses in rounds 1,...,T are in B^+_T. We can then write the overall wealth of the hypotheses in B^+_T at time T as
∑_i∈ B^+_T w_T (i) = ∑_i∈ B^+_T∑_n=1^T A(n,i) + ∑_t=1^T r_t-α_t^e.
That is, the overall wealth at time T is the allowance distributed at times 1,...,T plus the money earned/lost by the highest-bidding hypotheses.
Now notice that by the construction above, if a wealth variable w_t(i) is non-negative once, it remains non-negative for all future t. Thus, for all i∈ B_T^+, w_T(i)≥ 0.
Second, the last term is the negated cumulative overestimation of α̅. Thus, re-arranging these terms and dividing by T gives us the following upper bound on the per-round overestimation:
1/Tℒ_T(α,r̅) = 1/T(∑_i∈ B_T^+∑_n=1^T A(n,i) - ∑_i∈ B^+_T w_T (i) ) ≤1/T∑_i∈ B_T^+∑_n=1^T A(n,i) ≤∑_i=1^∞1/T∑_n=1^T A(n,i),
which goes to zero as T→∞ by our requirement on the function A (line <ref>).
3. Hypothesis coverage Given a hypothesis h_i that strictly outpromises α̅ infinitely often, we use as a test M_i, the set of times t at which h_i is the winning hypothesis (i.e., the set of times t s.t. i=i_t^*). We have to show that M_i is infinite, is a valid test set (as per <Ref>), and that it satisfies the justified rejection requirement in the hypothesis coverage criterion.
A) We show that M_i is infinite. That is, we need to show that infinitely often h_i is the highest-bidding hypothesis in the auction that computes α̅. Assume for contradiction that M_i is finite. We will show that at some point h_i's bidding in the construction of α̅ will not be constrained anymore by h_i's wealth. We will then find a contradiction with the assumption that h_i strictly outpromises α̅ infinitely often.
Let T be any time after the last round in which h_i wins the auction (such a time exists because M_i is assumed to be finite). Consider that for T'>T, we have w_T'(i) = w_T(i) + ∑_t=T+1^T' A(t, i). That is, from time T to any time T', hypothesis i's wealth only changes by h_i receiving allowance, because i is (by assumption) not the winning hypothesis i^*_t in any round t≥ T. Because we required ∑_n=1^∞ A(n,i)=∞, we can select a time T*≥ T such that w_T*(i)≥ 1. Note that for all t>T* it is then also the case that w_t(i)≥ 1.
We now see that for t≥ T*, the wealth constraint is not restrictive. That is, for all such t, min (h_i,t^e , w_t(i))=h_i,t^e.
But h_i,t^e>α_t^e infinitely often. This contradicts the fact that, by construction, α_t^e equals the highest wealth-restricted bid.
B) The fact that M_i is a valid test set follows immediately from the construction – α always chooses the recommendation of the highest-bidding hypothesis.
C) We come to the justification part of the coverage criterion. Let B_i be the set of rounds in which h̅ _i strictly outpromises α̅.
At each time T∈ B_i, by construction w_T(i)<h_i,T^e.
We have that h_i,T^e≤ 1 and
w_T(i) = ∑_n=1^T A(n, i) + ∑_t∈ M_i :t<T r_t - h_i,t^e.
Hence, from the fact that w_T(i)<h_i,T^e≤ 1 for all T∈ B_i, it follows that for all T∈ B_i,
∑_t∈ M_i:t<T h_i,t^e - r_t > ∑_n=1^T A(n, i) - 1,
which goes to infinity as T→∞, as required.
4. Computability and computational complexity It is left to show that if ℍ is computably enumerable and consists only of (O(g(t))-)computable hypotheses, then we can implement the above-described BRIA for ℍ, DP̅, r̅ in an algorithm (that runs in O(g(t)q(t)), for arbitrarily slow-growing, O(g(t))-computable q with q(t)→∞).
The main challenge is that the construction as described above performs at any time t, operations for all (potentially infinitely many) hypotheses.
The crucial idea is that for an appropriate choice of A, we only need to keep track of a finite set of hypotheses, when calculating α̅ in the first T time steps. Each hypothesis starts with an initial wealth of 0. Then a hypothesis i can only become relevant at the first time t at which A(t,i)>0. At any time t, we call such hypotheses active. Before that time, we do not need to compute h̅_i and do not need to update its wealth. By choosing a function A s.t. (in addition to the above conditions) A(t,·) has finite, e.c. support at each time t, we can keep the set of active hypotheses finite at any given time. (An example of such a function is A(n,i) = n^-1i^-2 for i<n and A(n,i) =0 otherwise.) We have thus shown that it is enough to keep track at any given time of only a finite number of hypotheses.
At any time, we therefore only need to keep track of a finite number of wealth variables, only need to compute the recommendations and promises of a finite set of hypotheses, and only need to compute a minimum of a finite set in line <ref>.
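To make this bookkeeping concrete, the following Python sketch implements the example allowance A(n,i) = n^-1 i^-2 together with the resulting finite active set; the function and variable names are ours, and the winning-bid wealth update is only indicated in a comment.

def allowance(n, i):
    # Example allowance A(n, i) = n^{-1} i^{-2} for i < n, and 0 otherwise;
    # at each time n it has finite support {1, ..., n-1}.
    return 1.0 / (n * i * i) if i < n else 0.0

wealth = {}  # hypotheses become "active" once they first receive allowance

def auction_round(n, promises):
    """One round of the auction. `promises` maps an active hypothesis index i
    to its promise h_{i,n}^e in [0, 1]. Returns the winning index and the
    agent's estimate alpha_n^e (illustrative sketch only)."""
    for i in range(1, n):                      # distribute allowance
        wealth[i] = wealth.get(i, 0.0) + allowance(n, i)
    if not wealth:
        return None, 0.0
    # Wealth-restricted bids: min(h_{i,n}^e, w_n(i)).
    restricted = {i: min(promises.get(i, 0.0), wealth[i]) for i in wealth}
    winner = max(restricted, key=restricted.get)
    # After the reward r_n is observed, the caller would update
    # wealth[winner] += r_n - restricted[winner], as in the construction above.
    return winner, restricted[winner]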
Computability is therefore proven. We proceed to show the claim about computational complexity. At any time t, let C_max(t) be the largest constant by which the computational complexity of hypotheses at time t is bounded relative to g(t). Further, let h_b(t) be the number of active hypotheses. Then the computational cost of simulating all active hypotheses at time t is at most h_b(t)C_max(t)g(t).
Both C_max(t) and h_b(t) must go to ∞ as t→∞. However, this can happen arbitrarily slowly, up to the limits of fast (O(g(t))) computation.
Hence, if we let q(t)=h_b(t)C_max(t)g(t), we can let q grow arbitrarily slowly (again, up to the limits of fast computation).
Finally, we have to verify that all other calculations can be done in O(q(t)g(t)): to determine the winning hypothesis given everyone's promises, we have to calculate the maximum of h_b(t)∈ O(q(t)) numbers, which can be done in O(q(t)) time. We also need to conduct the wealth variable updates themselves, which amounts to O(h_b(t)) additions. Again, this is in O(g(t)q(t)). And so on.
§.§ Proof of <Ref> (and some discussion)
This is shown by a simple diagonalization argument. If a BRIA α̅ were computable (in O(g(t))), then consider the hypothesis that, in rounds in which |DP_t|≥ 2 and α_t^e<1, promises 1 and recommends an option other than α_t^c, and that promises 0 otherwise. This hypothesis strictly outpromises α̅ infinitely often and is computable (in O(g(t))), but it is never tested; see Lemma <Ref>. The same argument can similarly be used to show that, for example, a BRIA for the set of polynomial-time hypotheses cannot be computed in polynomial time.
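For concreteness, the diagonalizing hypothesis can be written as a few lines of Python (names are ours; it merely assumes query access to the computable BRIA's recommendation and estimate in the current round):

def diagonalizing_hypothesis(options, alpha_choice, alpha_estimate):
    """Return (recommendation, promise) of the diagonalizing hypothesis."""
    if len(options) >= 2 and alpha_estimate < 1:
        other = next(a for a in options if a != alpha_choice)
        return other, 1.0      # promise 1, recommend an option alpha does not take
    return alpha_choice, 0.0   # promise 0 otherwise (the recommendation is then irrelevant)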
Is diagonalization a silly reason to fail the BRIA criterion? After all, the diagonalizing hypothesis' recommendation will not be sensible in general. For example, imagine that DP_t={a_t,0,a_t,1} is the problem of guessing the t-th binary digit of π. Option a_t,k pays off 1/2 if the t-th digit of π is k and 0 otherwise. This problem can be solved in O(t), so we might like to say that there is an O(t) BRIA for this process. But if we use as ℍ the set of linear-time-computable hypotheses, this is not possible.
There are two possible responses to this intuition. The first is that we here assume that we at some point come to be certain of how DP̅, r̅ work and, in particular, that the diagonalizing hypothesis does not work. However, the BRIA criterion for the set of hypotheses ℍ assumes that we never develop perfect confidence that any of the hypotheses in ℍ is wrong. If we really wanted to allow an O(t) BRIA, we should therefore exclude the diagonalization hypothesis from ℍ.
A second approach is to avoid diagonalization by randomizing. In particular, we could let the BRIA test hypotheses according to a randomized scheme, where the hypotheses' computational model does not have access to the BRIA's sequence of random variables. This allows us to construct, for example, an O(t) (plus randomization) BRIA that cannot be exploited, even by more powerful hypotheses. However, this, of course, only works if the decision problem sequence is easy to solve: in this case, solvable in O(t), as the following theorem illustrates.
For any g with g(t)→∞, there exists a decision problem sequence DP̅, r̅ for which there is no BRIA relative to O(g(t)) hypotheses that can be computed with the use of randomization in O(g(t)).
§.§ Proof of <Ref>
We will show that if the assumptions are satisfied, then for all but finitely many t, we have that α_t^e≥ L_t. From this and the fact that α̅ doesn't overestimate, it then follows that in the limit as T→∞, ∑_t=1^T r_t / T≥∑_t=1^T L_t / T.
We prove this new claim by proving a contrapositive. In particular, we assume that α^e_t < L_t for infinitely many t and will then show that α̅ is not a BRIA (using the other assumptions of the theorem).
Consider hypothesis h̅_i such that h_i,t=(a_t,L_t). Because L̅ is e.c. and the a̅ are efficiently identifiable, h̅_i is e.c. We now show that h̅_i is not covered by α̅, which shows that α̅ is not a BRIA. By assumption, h̅_i strictly outpromises α̅ infinitely often. It is left to show that there is no M_i as specified in the hypothesis coverage criterion, i.e. no M_i on which h̅_i consistently underperforms its promises.
If t∈ M_i, then α_t^c=h_i,t^c=a_t and therefore r_t≥ L_t. It follows that for all T,
l_T(α̅,r̅,M_i,h̅_i)=∑_t∈ M_i:t<Tr_t_≥ L_t - h_i,t^e_=L_t≥ 0.
Thus, α̅ violates the coverage criterion for h̅_i.
§.§ Proof of <Ref>
We prove the theorem by proving that for all ϵ>0, α_t^e≥μ_t-ϵ for all but finitely many t. As usual, we prove this by proving the following contrapositive: assuming this is not the case, α̅ is not a BRIA. To prove this, consider hypothesis h̅_a,ϵ that at each time t promises max(μ_t-ϵ,0) and recommends a_t. Since h̅_a,ϵ infinitely often outpromises α̅, it must be tested infinitely often. Let the test set be some infinite set M⊆ℕ. By <Ref>, we can assume WLOG that for all t∈ M, h_a,ϵ,t^e=μ_t-ϵ.
Notice that M is by assumption computable in O(h(t)) given the information available at time t. Now
1/|M_≤ T| l_T(α̅,r̅,M,h̅_a,ϵ)= 1/|M_≤ T|∑_t∈ M_≤ T (r_t-(μ_t-ϵ)) w.p. 1→ϵ as T→∞,
where the final step is by the fact that among rounds where α_t^c=a_t, r̅ is vMWC random with means μ̅. Hence, h̅_a,ϵ's record l_T(α̅,r̅,M,h̅_a,ϵ) must be positive in all but finitely many rounds. Thus, α̅'s infinitely many rejections of h̅_a,ϵ violate the coverage criterion.
§.§ Proof of <Ref>
Let (A_1,A_2,u_1,u_2) be any game. Then
max_σ_i∈Δ(A_i)min_a_-i∈ A_-i u_i (σ_i,a_-i) = min_σ_-i∈Δ(A_-i)max_a_i∈ A_i u_i (a_i,σ_-i).
The latter part (Conversely,...) follows directly from <Ref>. It is left to prove the existence claim.
We construct the BRIAs as follows. First we fix positive probabilities p_𝐜∈ (0,1) and (p_a_i)_a_i∈ A_i for i=1,2 (WLOG assume A_1 and A_2 are disjoint) s.t. p_𝐜+∑_i=1^2 ∑_a_i∈ A_i p_a_i = 1. Further let v_i be some number that is strictly greater than Player i's maximin value but strictly smaller than p_c u_i(𝐜). By the assumption that 𝐜 is strictly individually rational, such a number exists if we make p_c large enough. Then let α_i,t^e=v_i for all t. Then in each step the BRIAs jointly randomize[We here use true randomization for simplicity. The same can be achieved using algorithmic randomness.] independently from all bidders in ℍ_1,ℍ_2 as follows:
* With probability p_𝐜 both players play according to 𝐜 by jointly implementing 𝐜, e.g., by deterministically cycling through the different strategies in the appropriate numbers.
Further, α_i,t^e=v_i. No hypotheses are tested.
* With probability p_a_i, Player i plays a_i and Player -i plays from argmin_a_-i∈ A_-i u_i(a_i,a_-i). Player -i estimates v_-i and does not test any hypothesis. Player i estimates v_i and tests every hypothesis that estimates more than v_i. (One round of this joint randomization is sketched below.)
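The following Python sketch shows one round of this joint randomization (players are indexed 0 and 1 here; all names, and the encoding of 𝐜 as a cycling callable, are our own illustrative choices rather than part of the construction's statement).

import random

def joint_round(A, u, next_c_profile, v, p_c, p_a, promises):
    """One round of the joint randomization in the construction above.
    A[i] is Player i's action set, u[i](profile) is Player i's utility for a
    pure profile, next_c_profile() returns the next pure profile implementing c,
    v[i] is Player i's fixed estimate, p_c and p_a[(i, a_i)] are the sampling
    probabilities, and promises[i] maps Player i's hypotheses to their promises."""
    outcomes = [("c", None)] + list(p_a)
    weights = [p_c] + [p_a[o] for o in p_a]
    pick = random.choices(outcomes, weights=weights)[0]
    tests = [set(), set()]                       # hypotheses tested by each player
    if pick == ("c", None):
        profile = next_c_profile()               # both players follow c, nobody tests
    else:
        i, a_i = pick                            # Player i plays a_i ...
        j = 1 - i
        def with_actions(a_j):
            prof = [None, None]
            prof[i], prof[j] = a_i, a_j
            return tuple(prof)
        a_j = min(A[j], key=lambda b: u[i](with_actions(b)))   # ... Player j punishes
        profile = with_actions(a_j)
        tests[i] = {h for h, e in promises[i].items() if e > v[i]}
    return profile, (v[0], v[1]), tests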
We now prove that α̅_1,α̅_2 thus constructed are BRIAs.
No overestimation:
ℒ_T(α̅_i,r̅_i)/T = ∑_t=1^T (α_i,t^e - r_i,t)/T = ∑_t=1^T (v_i - r_i,t)/T ≤ v_i - p_𝐜 u_i(𝐜) as T→∞.
By construction, v_i - p_𝐜 u_i(𝐜)≤ 0.
Coverage:
Let h̅_i be a hypothesis that outbids α̅_i infinitely often. Then in particular h̅_i outbids infinitely often in rounds in which h̅_i recommends some a_i and α_i,t^c=a_i. Thus, h̅_i has an infinite test set M on which the hypothesis' empirical record is
l_T(α̅_i,r̅_i, M, h̅_i) = ∑_t∈ M_≤ T (r_t - h_i,t^e) = ∑_t∈ M_≤ T (min_a_-i u_i(h_i,t^c,a_-i) - h_i,t^e) ≤∑_t∈ M_≤ T (max_a_imin_a_-i u_i(a_i,a_-i) - v_i) → -∞
as T→∞.
Thus, h̅_i is covered.
§ OPTIONS WITH RANDOM PAYOFFS
The following result shows, roughly, that in the limit BRIAs are von Neumann–Morgenstern rational if von Neumann–Morgenstern rational choice is e.c. That is, when choosing between different lotteries whose expected utilities are efficiently computable, BRIAs converge to choosing the lottery with the highest expected utility. When other, non-lottery options are available, BRIAs converge to performing at least as well as the best lottery option.
Let α̅ be a BRIA for DP̅, r̅. Let a̅ be a sequence of terms in 𝒯 s.t. a_t∈ DP_t for all t∈ℕ and the values of r_t in rounds with α^c_t=a_t are drawn independently from distributions with e.c. means μ̅.
Let the a_t be efficiently identifiable from DP_t. Then almost surely in the limit as T→∞, it holds that ∑_t=1^T r_t/T ≥∑_t=1^T μ_t/T.
The proof idea is similar to that for <Ref>. It works by considering hypotheses that recommend a_t and promise μ_t-ϵ and noting that the empirical record of such hypotheses goes to -∞ with probability 0.
We need only show that with probability 1, for all ϵ>0, it holds for all but finitely many times t that α_t^e≥μ_t-ϵ. From this and the no overestimation property of α̅, the conclusion of the theorem follows.
Again we prove the following contrapositive: If there is some ϵ>0 s.t. with some positive probability p>0 we infinitely often have that α_t^e< μ_t-ϵ, then α̅ is with positive probability not a BRIA.
Consider the hypothesis h̅_a,ϵ that at each time t promises max(μ_t-ϵ,0) and recommends a_t. Since with probability p, h̅_a,ϵ infinitely often outpromises α̅, it must in these cases (and therefore with probability (at least) p) be tested infinitely often. (If not, α̅ would in these cases not be a BRIA and we would be done.) In these cases (i.e., when h̅_a,ϵ is tested infinitely often), let the test set be some infinite set M⊆ℕ. (Note that M may depend on r̅ and inherit its stochasticity. This will not matter for the following, though.) For simplicity, let M be the empty set if h̅_a,ϵ does not outpromise α̅ infinitely often. By <Ref>, we can assume WLOG that for all t∈ M, h_a,ϵ,t^e=μ_t-ϵ. Now notice that
1/|M_≤ T| l_T(α̅,r̅,M,h̅_a,ϵ)= 1/|M_≤ T|∑_t∈ M_≤ T (r_t-h_a,ϵ,t^e) = 1/|M_≤ T|∑_t∈ M_≤ T (r_t-(μ_t-ϵ)).
Conditioning on the (probability p) event that h̅_a,ϵ infinitely often outbids and therefore that M is infinite, it must then with probability 1 be the case that 1/|M_≤ T|∑_t∈ M_≤ T (r_t-(μ_t-ϵ)) →ϵ as T→∞ by the law of large numbers. We have thus shown that with positive probability (at least p) h̅_a,ϵ outpromises α̅ infinitely often while h̅_a,ϵ's record l_T(α̅,r̅,M,h̅_a,ϵ) is positive in all but finitely many rounds. Thus, in this positive-probability event α̅'s infinitely many rejections of h̅_a,ϵ violate the coverage criterion.
§ MORE ON RANDOMIZATION AND REGRET
In the literature on multi-armed bandit problems, authors usually consider the goal of regret minimization. A natural rationality requirement is for per-round average regret to go to 0. This is sometimes called Hannan consistency. For any given agent c, the Simplified Adversarial Offer SAO_c of <Ref> is a problem on which regret is necessarily high. However, if we assume that the agent at time t can randomize in a way that is independent of how the rewards are assigned by D_t, it can actually be ensured that per-round regret (relative to any particular hypothesis) goes to 0 (see <Ref>). In the literature on such Newcomb-like problems (see <Ref>), an idea closely related to regret minimization has been discussed under the name ratificationism <cit.>. Ratificationism similarly uses distributions over actions <cit.>, though often these are not meant to arise from randomization <cit.>.
Arguably the assumption that the agent can independently randomize is almost always satisfied for artificial agents in practice. For instance, if an agent wanted to randomize independently, then for an adversary to predict the program's choices, it would not only need to know the program's source code. It would also require (exact) knowledge of the machine state (as used by pseudo-random number generators); as well as the exact content of any stochastic input such as video streams and hardware/true random number generators. Independent randomization might not be realistic for humans (to whom randomization requires some effort), but none of these theories under discussion (the present one, regret minimization, full Bayesian updating, etc.) are directly applicable to humans, anyway.
Nevertheless, we are conceptually bothered by the assumption of independent randomization. It seems desirable for a theory of choice to make as few assumptions as possible about the given decision problems. Moreover, we can imagine situations in which independent randomization is unavailable to a given agent. It seems odd for a theory of learning to be contingent on the fact that such situations are (currently) rare or practically insignificant. A detailed discussion of this philosophical concern is beyond the scope of this paper.[For brief discussions of this and closely related concerns in the literature on Newcomb-like problems, see Richter, Harper, Skyrms, Arntzenius, Levinstein and Soares, and Oesterheld and Conitzer <cit.>.]
In the rest of this section, we discuss the goal of regret minimization under the assumption that algorithms can randomize independently of D̅. The problems discussed in this section all involve references to the agent's choice.
We argue that regret minimization/ratificationism is undesirable in some decision problems (<Ref>). We then discuss the relationship between BRIA theory and regret minimization/ratificationism in a particular type of decision problem (<Ref>). As a caveat, note that in many cases – in particular under suitable independence assumptions between D̅ and the agent – zero-regret is desirable. However, we believe in all of these cases regret minimization satisfies the BRIA criterion.
§.§ The implausibility of regret minimization as a rationality condition
We consider a version of Newcomb's problem (introduced by <cit.>; see <Ref> for further discussion and references). In particular, we consider for any chooser c the decision problem NP_c={a_1,a_2} which is resolved as follows. First, we let D(a_1)=1/4+P(c=a_1)/2.
So the value of a_1 increases with the probability that c chooses a_1. And second, we let D(a_2)=D(a_1)+P(c=a_1)/4.
If we let p=P(c=a_1), then the expected reward of c in this decision problem is 1/4+p/2 + (1-p)p/4.
It is easy to see that this is strictly increasing in p and therefore maximized if c=a_1 deterministically. The regret of c, on the other hand, is p^2/4, which is also strictly increasing in p on [0,1] and therefore minimized if c=a_2 deterministically. Similarly, the competitive ratio is given by (1/4+3p/4)/(1/4 + p/2+(1-p)p/4),
which is also strictly increasing in p on [0,1] and therefore also minimized if c=a_2 deterministically.
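These monotonicity claims are easy to verify numerically; the short Python check below evaluates the formulas above on a grid of values of p (purely illustrative).

import numpy as np

p = np.linspace(0.0, 1.0, 101)         # P(c = a_1)
d1 = 0.25 + p / 2                       # D(a_1) = 1/4 + P(c = a_1)/2
d2 = d1 + p / 4                         # D(a_2) = D(a_1) + P(c = a_1)/4
expected = p * d1 + (1 - p) * d2        # expected reward of c
regret = d2 - expected                  # regret relative to the better option a_2
ratio = d2 / expected                   # competitive ratio

assert np.allclose(expected, 0.25 + p / 2 + (1 - p) * p / 4)
assert np.allclose(regret, p ** 2 / 4)
assert np.all(np.diff(expected) > 0)    # reward is strictly increasing in p
assert np.all(np.diff(regret) > 0)      # regret is increasing, minimized at p = 0
assert np.all(np.diff(ratio) > 0)       # and so is the competitive ratio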
Regret and competitive ratio minimization as rationality criteria would therefore require choosing the policy that minimizes the actual reward obtained in this scenario, only to minimize the value of actions not taken.
As noted in <Ref>, it is controversial among decision theorists what the rational choice in Newcomb's problem is. However, from the perspective of this paper, in this particular version of the problem it seems undesirable to require reward minimization. Also, it is easy to construct other (perhaps more convincing) cases. For example, if a high reward can be obtained by taking some action with a small probability, then regret minimizers take that action with high probability in a positive-frequency fraction of the rounds. Or consider a version of Newcomb's problem in which D(a_1) is defined as before, but D(a_2)=D(a_1). On such problems, Hannan-consistency is trivially satisfied by any learner, even though taking a_1 with probability 1 is clearly optimal.
§.§ BRIAs, randomization and regret minimization
Interestingly, when faced with a sequence of problems like NP_c, the BRIA criterion allows convergence both to a_1 and to a_2. Which action the BRIA converges to (if any) depends on how it learns, i.e., how it tests hypotheses. If the BRIA chooses deterministically, like the BRIA described in <Ref>, then NP_c becomes an easy problem (in the sense of <Ref>): whenever a_1 is taken, a reward of 3/4 is obtained; whenever a_2 is taken, a reward of 1/4 is obtained. Hence, by <Ref> from that section, deterministic BRIAs must converge to taking a_1 with frequency 1.
If a BRIA α̅ randomizes, then the dynamics become much more complicated. When fixing a particular probability of taking each action – e.g., imagine that the BRIA simply takes each action with probability 1/2 for a while – a_2 will be associated with higher rewards. However, if the probabilities of taking a_1 vary between rounds – e.g., because at some point the BRIA adjusts its probabilities based on past experience – then taking a_1 is correlated with P(α_t^c=a_1) being high, which in turn is correlated with obtaining high rewards.[The case of a deterministic α̅ can be viewed as one where only the extreme case of this latter phenomenon occurs.] Hence, it is not clear whether a_1 or a_2 will be empirically associated with higher rewards. Learning processes such as these are analyzed by <cit.>, who show that model-free reinforcement learners can only converge to ratifiable strategies. In this particular problem, this would mean that the learner would converge to taking a_2 with probability 1. The case of BRIAs is complicated further by the additional layer of hypotheses. Nonetheless, based on that work we suspect that there are randomizing BRIAs that converge to taking a_2 with probability 1 in this problem. More generally:
There are BRIAs that can only converge to ratifiable/regret-minimizing choice probabilities.
BRIA theory seems not to claim anything about what the rational choice is in this particular decision problem. What one makes of this depends on one's views about the decision theory of Newcomb-like problems. If you are unsure about what to do in Newcomb's problem or believe that both one- and two-boxing can be justified, then it may be an attractive feature of BRIA theory that it admits both possibilities, albeit only in this particular version of Newcomb's problem. If you have a strong opinion about what is the rational choice in this scenario, then you might complain that BRIA theory makes no particular claim about this problem. We discuss the two possible views in turn.
First, if you are a two-boxer, you might find it problematic that BRIA theory allows agents who converge to taking a_1 (one-boxing) with frequency 1. The most important point to make about the two-boxer's perspective is that, as noted in <Ref>, there are other cases in which BRIA theory sides unambiguously with one-boxers (evidential decision theory), anyway. In these other cases, fundamental changes are necessary to learn to two-box, such as learning a causal model or requiring that appropriate counterfactuals are revealed to the agent. If one believes that in those other scenarios, one-boxing is acceptable, then BRIA theory may serve as a foundation on which one could add additional rationality requirements (e.g., about randomization).
Second, if you are a one-boxer, you might find it problematic that BRIA theory allows agents who converge to taking a_2 with frequency (or probability) 1. This complaint is more critical to BRIA theory, because BRIA is, generally speaking, an evidentialist theory. Again, we could view BRIA theory as a first step on which to add other requirements (such as representing all randomization within the available options) to ensure convergence to taking a_1 with frequency 1. But we also believe that it is useful to understand what exactly goes wrong in the BRIAs of <Ref>. Roughly, it seems that these BRIAs do not take into account the fact that the way they decide which hypothesis to follow in a particular round affects what reward they get. To converge to optimal behavior (taking a_1 with probability 1), these BRIAs would have to keep track not only of how well different hypotheses perform. They would also have to test different aspects of the choice process that tests hypotheses.[Perhaps in other decision examples, it is explicitly rewarded to randomize. It might even be rewarded to randomize in a way that is not represented in the chosen option.] It is not clear to us whether the choice probabilities are special among the agent's internals. One could similarly imagine the reward that an agent obtains being affected by other internals. For example, imagine a case in which the reward is high if and only if the expert currently tests a hypothesis with a poor test record on its test set. The optimal algorithm would then only test hypotheses that happen to be right but that have a poor record. But it seems too much to ask that an algorithm optimizes all of its internals.
§ SOME REGRET MINIMIZERS SATISFY A GENERALIZED BRIA CRITERION
We here show that some regret minimizers satisfy a slightly generalized version of the BRIA criterion. We first have to give a formal definition of regret. Since the literature on adversarial bandit problems with expert advice does not consider experts who submit estimates in the way that our hypotheses do, we cannot use an existing definition and will instead make up our own. For simplicity, we will only consider the case 𝕊=ℕ.
As noted in <Ref>, to define regret we need counterfactuals. Therefore, throughout this section we assume that instead of selecting a single value r_t, the environment selects a function D_t: DP_t → [0,1] that assigns a reward even to counterfactual actions. Call such D̅ an extended decision process. Instead of r_t, we can then write D_t(α_t^c).
Let D̅ be an extended decision process, α̅ be an agent and ℍ={h_1,h_2,...} be a set of hypotheses. For simplicity, let ℍ be finite. For each h_i∈ℍ, let B_i ≔ { t∈ℕ| h_i,t^e>α_t^e } be the set of rounds in which h_i outpromises α̅. We define the average per-round regret of the learner to hypothesis h_i up to time T as
REGRET_i,T=𝔼[1/|B_i,≤ T|∑_t∈ B_i,≤ T (D_t(h_i,t^c) - α_t^e) ].
As before, the bidding mechanism means that hypotheses can specialize on specific types of decisions.[Note that we subtract the agent's estimates, not the utility that α̅ in fact achieves. This is important. Otherwise, the learner can set α^e=0 even in rounds in which D_t(α_t^c) is (expected to be) high, thus circumventing the expert's bidding mechanism.
Still, there are alternative definitions that also work. For example, one might count regret only in rounds in which α and h_i differ in their recommendations.]
As is common in the adversarial bandit problem literature, we will be interested in learning algorithms that guarantee average regret to go to zero as |B_i,≤ T| →∞.
Regret is somewhat analogous to the cumulative empirical record on the test set. As with the coverage condition, low regret can be achieved trivially by setting α^e=1. Thus, if we replace the coverage criterion with a sublinear-regret requirement, we have to keep the no overestimation criterion.
Let D̅ be an extended decision process where |DP_t| is bounded for all t∈ℕ.
With access to an independent source of randomization, and given access to the outputs of all hypotheses in ℍ, we can compute α̅ that does not overestimate on ℕ s.t. for all hypotheses h_i, REGRET_i,T→ 0 with probability 1 if |B_i,≤ T| →∞.
As noted elsewhere, without independent randomization it is clear that such an α̅ cannot be designed. Even with independent randomization, it is not obvious whether the conjecture holds. However, similar results in the literature on adversarial bandit problems with expert advice lead us to believe that it does. That said, we have not been able to prove the conjecture by using simply the results from that literature.
Let α̅ be an independently randomized agent that does not overestimate on ℕ and ensures sublinear regret with probability 1 relative to all hypotheses in some finite set ℍ={h_i}_i. Further assume that for all hypotheses h_i, P(α_t^c=h_i,t^c)∈ω(1/t) among t∈ B_i. Then we can compute based on α a new agent α̃ that does not overestimate and that satisfies for each hypothesis h_i that is infinitely often rejected,
∑_t∈ B_i,≤ T1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c) (D_t(h_i,t^c)-h_i,t^e) → -∞
among T at which α̃ rejects h_i.
Notice that the left-hand side of line <ref> is a weighted version of the cumulative empirical record on the set { t∈ B_i,≤ T|α_t^c=h_i,t^c }.
The proof combines one key idea from the literature on adversarial multi-armed bandits – importance-weighted estimation – and one from this paper – the decision auction construction (<Ref>).
For t∈ B_i, define
R̂_i,t = 1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c)(D_t(h^c_i,t) - α^e_t),
where we assume P(α_t^c=h_i,t^c)>0.
As usual we then have that 𝔼[ R̂_i,t] = D_t(h^c_i,t) - α^e_t.
For t∉ B_i, define R̂_i,t=0.
Hence, R̂_i,t can be used as an unbiased estimator of the regret in a single round.
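The unbiasedness can be checked with a small Monte Carlo simulation in Python (the numbers below are arbitrary illustrative values):

import random

def iwe_sample(p_follow, reward_if_followed, alpha_estimate):
    """One draw of the importance-weighted estimator R-hat for a single round:
    with probability p_follow the agent takes the hypothesis' recommendation."""
    followed = random.random() < p_follow
    if not followed:
        return 0.0
    return (reward_if_followed - alpha_estimate) / p_follow

random.seed(0)
p, d, a = 0.3, 0.8, 0.55                 # illustrative probability, reward, estimate
draws = [iwe_sample(p, d, a) for _ in range(200000)]
print(sum(draws) / len(draws), d - a)    # both are close to 0.25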
Further, Var(R̂_i,t)∈ o(t), and thus ∑_t=1^T Var(R̂_i,t) ∈ o(T^2). By Kolmogorov's strong law of large numbers,
1/T∑_t∈ B_i,≤ T R̂_i,t - 1/T∑_t∈ B_i,≤ T (D_t(h^c_i,t) - α^e_t) = 1/T∑_t=1^T R̂_i,t - 1/T∑_t=1^T 𝔼[R̂_i,t] w.p. 1→ 0 as T→∞.
In other terms,
∑_t∈ B_i,≤ T R̂_i,t - ∑_t∈ B_i,≤ T (D_t(h^c_i,t) - α^e_t)
is sublinear.
We now construct new estimates. Fix a non-decreasing, sublinear function CA: ℕ→ℝ with CA(T) →∞. (These are cumulative versions of the allowance functions from the construction in <Ref>.)
Next, we define
ℒ_i,T ≔ ∑_t∈ M_i,≤ T1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c)(D_t(h^c_i,t) - h^e_i,t) + ∑_t∈ B_i,≤ T-M_i1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c)(D_t(h^c_i,t) - α^e_t),
where M_i⊆ B_i will be defined in a second. Define w_T(i)=CA(T) + ℒ_i,T.
Now at each time t, we define our new estimate as
α̃_t^e = max (α_t^e, max_i:w_t-1(i) ≥ 0 h_i,t^e).
Finally, let M_i be the set of rounds in which the outer maximum in Eq. <ref> is attained through the inner term, with hypothesis i as its maximizer.
We now need to show two things: that the cumulative overestimation is still sublinear even for the new, increased α̃_t^e, and that the claimed variant of the hypothesis coverage criterion is satisfied.
We start with hypothesis coverage. First notice that because M_i⊆ B_i and for t∈ B_i, h_i,t^e> α_t^e, we get that
w_T(i) ≥CA(T) + ∑_t∈ B_i,≤ T1[α_t^c=h_i,t^c]/ P_t(α_t^c=h_i,t^c) (D_t(h^c_i,t) - h_i,t^e).
Thus, whenever h_i,T^e>α̃_T^e, then by construction w_t(i)< 0, and therefore
∑_t∈ B_i,≤ T1[α_t^c=h_i,t^c]/ P(α_t^c=h_i,t^c) (D_t(h^c_i,t) - h_i,t^e) ≤ -CA(T).
Thus, we get that among T∈B̃_i (the times where t strictly outpromises the new estimates), the empirical record on the test set goes to -∞.
It is left to show that overestimation remains low if we increase the estimates from α^e to α̃^e. We have
∑_t=1^T (α̃_t^e - D_t(h_i,t^c)) = ∑_t=1^T (α_t^e - D_t(h_i,t^c)) + ∑_t=1^T (α̃_t^e - α_t^e).
The first sum is sublinear by assumption. So we only have to show that ∑_t=1^T (α̃_t^e - α_t^e) is sublinear in T.
We have
∑_t=1^T (α̃_t^e - α_t^e) = ∑_i ∑_t∈ M_i,≤ T (h_i,t^e - α_t^e).
So, it is left to show that the increase on behalf of each expert i is sublinear.
Now, we use importance-weighted estimation again. That is, we consider
∑_t∈ M_i,≤ T1[α_t^c=h_i,t^c]/ P(α_t^c=h_i,t^c) (h_i,t^e - α_t^e).
By the same argument as above, we can show that the difference between this term and ∑_t∈ M_i,≤ T (h_i,t^e - α_t^e) is sublinear. So it is enough to show that this term is sublinear.
Now notice that
w_T(i) = CA(T) + ∑_t∈ M_i,≤ T1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c)(D_t(h_i,t^c) - h_i,t^e) + ∑_t∈ B_i,≤ T-M_i1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c)(D_t(h_i,t^c) - α_t^e)
= CA(T) - ∑_t∈ M_i,≤ T1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c) (h_i,t^e - α_t^e) + ∑_t∈ B_i,≤ T1[α_t^c=h_i,t^c]/P(α_t^c=h_i,t^c) (D_t(h_i,t^c)-α_t^e),
where the second equality uses that on M_i, (D_t(h_i,t^c) - h_i,t^e)=(D_t(h_i,t^c)-α_t^e)-(h_i,t^e-α_t^e).
Now, for T∈ M_i, by construction w_T-1(i)≥ 0. The wealth w_T(i) can still fall below 0, but only by at most |R̂_i,t| for some t∈{1,...,T}, which is in o(T). Thus,
∑_t∈ M_i,≤ T1[α_t^c=h_i,t^c]/P_t(α_t^c=h_i,t^c) (h_i,t^e - α_t^e) ≤CA(T) + ∑_t∈ B_i,≤ T1[α_t^c=h_i,t^c]/P_t(α_t^c=h_i,t^c) (D_t(h_i,t^c)-α_t^e) + o(T).
The function CA is sublinear by construction, and the second summand has been shown to be sublinear above.
§ DOMINANCE?
In this section, we show that BRIAs do not in general satisfy the dominance criterion. To even formulate the dominance criterion, we have to consider extended decision processes as defined in <Ref>.
There is an extended decision process D̅, a BRIA α̅ for the set of e.c. hypotheses and a positive number Δ>0 s.t. for all t∈ℕ there are a_t,b_t ∈ DP_t with D_t(a_t)>D_t(b_t)+Δ, but with limit frequency 1 we have that α_t^c = b_t.
This is shown by Newcomb's problem (<Ref>).
In fact, Newcomb's problem shows that for any algorithm that constructs BRIAs, there is a D̅ s.t. the algorithm's BRIA converges to b_t.
Of course, various dominance-like results follow from the results of <Ref>. However, more interesting applications of dominance are arguably ones where the conditions of these results aren't satisfied, e.g., where it is very unclear how one would assign expected utilities to different options. We will now give some reasons for why it's difficult to give any dominance result for BRIAs that does not follow from the results of <Ref>.
The first thing to notice is that relationships such as D_t(a_t)>D_t(b_t)+Δ (for all t) are irrelevant for our theory, as shown by Newcomb's problem, SAO, etc. Instead, our dominance relation needs to be statistical and relative to the test set. Roughly, we must make an assumption that when testing a_t, the rewards are (on average) higher (by Δ) than the reward of taking b_t in rounds in which b_t is taken. Of course, this already means that the result will be quite different from traditional notions of dominance.
A second, subtler issue relates to the use of estimates in our theory.[As noted in <Ref>, an alternative theory could simply require that an agent tests various choice policies and in the limit follows the ones that are empirically most successful. For such a theory, a condition like the one in the previous paragraph probably suffices.] To ensure that b_t is not taken with limit frequency, we would need to ensure not only that the a_t-recommending hypothesis doesn't underperform on its test set (as described above). We also need to ensure that this hypothesis is tested on a set on which it doesn't overestimate. We therefore need a further assumption that gives us some way to safely and efficiently estimate a_t, e.g., based on past values of a_t,b_t or estimates α^e_t. While this assumption can be made in relatively sneaky ways, we have not found any particularly interesting version of this claim.
We now discuss a subtler issue that relates to the use of estimates in our theory, to show why a particularly simple approach doesn't work. A first attempt might be to assume that for every test set M, avg_t∈ M_≤ T D_t(a_t) > Δ + avg_t≤ T: α_t^c=b_t D_t(α_t^c) as T→∞, where avg_t∈ N f(t) ≔ 1/|N|∑_t∈ N f(t) for any finite set N and function f on N. That is, we assume that a_t performs better on any test set than b_t when taken by α̅.
The trouble is that to obtain a conclusion we need to transform such an assumption into a hypothesis that not only recommends a_t (and thus receives relatively high rewards on average) but also makes appropriate estimates.
§ WHY AN EVEN SIMPLER THEORY FAILS AND ESTIMATES ARE NECESSARY
A simple mechanism of learning to choose is the law of effect (LoE) (<cit.>, p. 244):
Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond.
This notion is implicit in many reinforcement learning algorithms <cit.>. In (human) psychology it is also known as operant conditioning.
In situations like ours, where decision problems generally do not repeat exactly, the law of effect has to be applied at a meta level to general hypotheses or policies for making choices in order to be meaningful. So let a policy be a function that maps observations to actions. Then we could phrase this meta LoE as: if following a particular policy is accompanied by high rewards, then an agent will follow this policy more often in the future.
The BRIA criterion can be seen as abiding by this meta LoE, as the BRIA criterion requires testing different hypotheses and following the ones that have experimentally proven themselves. Its main conceptual innovation relative to the meta LoE is the bidding system, i.e., having the agent as well as hypotheses give estimates for how much utility will be achieved by making a particular choice, and using these estimates for testing and evaluation. A natural question then is: Are these conceptual additions to meta LoE necessary to obtain the kind of results we obtain? We here show why the answer is yes.
The biggest problem is quite simple to understand: if we don't restrict the testing regimen for policies, then biased testing can justify clearly suboptimal behavior. As an illustrative example, imagine that for all t, DP_t∈Fin([0,1]) where r_t=α^c_t. That is, at each time the agent is offered to choose from some set of numbers between 0 and 1 and then obtains as a reward the chosen number. The agent tests two policies: The first simply chooses the maximum number. The second chooses, e.g., the worst option that is greater than 1/2 if there is one, and the best option otherwise.
Of course, in this situation one would like the agent to learn at some point to follow the max policy. BRIAs indeed learn this policy (when accompanying the two tested policies with appropriate estimates) (cf. <Ref>). But now imagine that the agent tests the max hypothesis primarily in rounds where all values are at most 1/2 and the other hypothesis primarily in rounds in which there are options greater than 1/2. Then the max hypothesis could empirically be associated with lower rewards than the other hypothesis, simply because it is tested in rounds in which the maximum achievable reward is lower.
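The following small Python simulation (all details are our own illustrative choices) exhibits exactly this effect: under the biased testing schedule, the max policy looks empirically worse even though it is at least as good on every single decision problem.

import random

random.seed(1)

def max_policy(options):
    return max(options)

def other_policy(options):
    above = [x for x in options if x > 0.5]
    return min(above) if above else max(options)

records = {"max": [], "other": []}
for t in range(10000):
    options = [random.random() for _ in range(4)]   # DP_t; the reward is the chosen number
    # Biased testing: test the max policy only when all options are small,
    # and the other policy only when some option exceeds 1/2.
    if max(options) <= 0.5:
        records["max"].append(max_policy(options))
    else:
        records["other"].append(other_policy(options))

for name, rewards in records.items():
    print(name, sum(rewards) / len(rewards))
# The max policy shows a lower empirical average, although it is at least as
# good as the other policy on every single decision problem.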
To avoid this issue we would have to require that the set of decision problems on which hypothesis A is tested is in all relevant aspects the same as the set of decision problems on which hypothesis B is tested. Unfortunately, we do not know what the relevant aspects are. For instance, in the above problem it may be sufficient to test the max hypothesis on even time steps and the other hypothesis on odd time steps. However, there may also be problems where rewards depend on whether the problem is faced in an even or in an odd time step. More generally, it is easy to show that for each deterministic procedure of deciding which hypothesis to test, there is a decision process DP̅, r̅ in which this testing procedure introduces a relevant bias. In particular, the positive results we have proven in <Ref> seem out of reach. We conclude that a direct deterministic implementation of meta LoE (without the use of estimates) is insufficient for constructing a criterion of rational choice.
Besides the estimates-based approach to this problem that we have developed in this paper, a different (perhaps more obvious) approach to this problem is to test randomly. For this, we assume that we have a randomization device available to us that is independent of DP̅, r̅. If we then, for example, randomize uniformly between testing two hypotheses, testing is unbiased in the sense that for any potential property of decision problems, as the number of tests goes to infinity, both hypotheses will be tested on the same fraction of problems with and without that property. This is essentially the idea behind randomized controlled trials. We have discussed this idea in <Ref>.
§ FACTORING TEAM DECISIONS
Let n∈ℕ be a positive natural number. Let DP̅ be a decision problem sequence where for every t∈ℕ, DP_t=DP_t,1× ... × DP_t,n for some sets DP_t,1,...,DP_t,n. Let α̅ be a BRIA for DP̅, r̅ covering the set of e.c. hypotheses. Now for any t let ((a_t,1,...,a_t,n),v_t)=α_t, where a_t,i∈ DP_t,i, and define α_i,t=(a_t,i,v_t).
Then for i=1,...,n, α̅_i is a BRIA for DP̅_i, r̅ covering the e.c. hypotheses.
Instead of considering sets DP_t that are already Cartesian products of a collection of sets, one could also factorize any given set (unless its number of elements is 1 or a prime number) <cit.>. For example, a decision from {1,2,3,4} can be factorized into a decision of {1,2} versus {3,4}, and a decision of {1,3} versus {2,4}.
Low overestimation:
Clearly,
ℒ_T(α̅_i,r̅)/T = ∑_t=1^T (α_i,t^e - r_t)/T = ∑_t=1^T (α_t^e - r_t)/T ≤ 0 as T→∞,
where the last step is by the assumption that α̅ is a BRIA and therefore does not overestimate in the limit.
Coverage: Let h̅_i be an e.c. hypothesis for DP̅_i, r̅. Let h̅ be a hypothesis s.t. the i-th entry of h_t^c is equal to h_i,t^c, and h_t^e=h_i,t^e. Clearly, such an e.c. hypothesis exists. Let M be α̅'s test set for h̅. We will also use M as α̅_i's test set for h̅_i. Also, let B be the set of times at which h̅_i outpromises α̅_i. Note that B is thereby also equal to the set of times at which h̅ outbids α̅.
We now need to show that if B is infinite, then (l_T(α̅_i,r̅, M, h̅ _i))_T∈ B→ -∞.
To prove this, notice that for all T,
l_T(α̅_i,r̅, M, h̅_i) = ∑_t∈ M_≤ T (r_t-h_i,t^e)
= ∑_t∈ M_≤ T (r_t-h_t^e)
= l_T(α̅,r̅, M, h̅).
By assumption that α̅ is a BRIA, the final term goes to -∞ within T∈ B if B is infinite.
So if α̅ is a BRIA, α̅_1,...,α̅_n are BRIAs. Note that the converse of this does not hold.
§ SCHNORR BOUNDED ALGORITHMIC RANDOMNESS
A martingale is a function d: 𝔹^* → [0,∞) s.t. for all w∈𝔹^* we have that d(w)= 1/2d(w0)+1/2d(w1).
Let w∈𝔹^∞ be an infinite sequence. We say that d succeeds on w if lim sup_n→∞ d(w_1...w_n) = ∞.
We call w∈𝔹^∞ (O(g(t))-boundedly) Schnorr random if there is no martingale d such that d succeeds on w and d can be computed (in O(g(t))) given everything revealed by time t.
Let α̅ be an (O(h(t))-computable) BRIA for DP̅, r̅ covering all e.c. hypotheses.
Let a̅ be a sequence of terms in 𝒯 s.t. a_t∈ DP_t for all t∈ℕ and the values r_t in the rounds t with α^c_t=a_t are (O(h(t))-boundedly) Schnorr random. Then in the limit as T→∞, it holds that ∑_t=1^T r_t/T ≥1/2.
We prove the theorem by proving the following contrapositive: if the conclusion of the theorem does not hold, then (r_t)_t:α_t^c=a_t is not Schnorr random.
Assume that there is ϵ>0 s.t. ∑_t=1^T r_t/T<1/2-ϵ for infinitely many T. Then by the no overestimation criterion, there must also be an ϵ>0 s.t. ∑_t=1^T α^e_t/T < 1/2-ϵ for infinitely many T. Consider the hypothesis h_a,ϵ that always estimates 1/2-ϵ and recommends a_t. Now let M be α̅'s test set for h_a,ϵ. From the fact that α̅ rejects h_a,ϵ infinitely often, it follows that there are infinitely many T∈ℕ such that ∑_t∈ M_≤ T (r_t - (1/2-ϵ)) < 0.
From this fact, we will now define an (O(h(t))-computable) martingale d that succeeds on the sequence (r_t)_t:α_t^c=a_t. First, define d()=1. Whenever T is not in M, define d((r_t)_t<T:α_t^c=a_t0)=d((r_t)_t<T:α_t^c=a_t)=d((r_t)_t<T:α_t^c=a_t1). That is, when T∉ M, don't bet on r_T. If T∈ M, then bet some small, constant fraction δ of the current money that the next bit will be 0. That is, d((r_t)_t<T:α_t^c=a_t0)=(1+δ)d((r_t)_t<T:α_t^c=a_t) and d((r_t)_t<T:α_t^c=a_t1)=(1-δ)d((r_t)_t<T:α_t^c=a_t). Clearly, d thus defined is a martingale that is computable based on α̅,M.
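A minimal Python sketch of this betting scheme (names are ours) computes the wealth of d on the subsequence of rewards that is bet on:

def martingale_wealth(bits, bet_rounds, delta):
    """Wealth of the martingale d after seeing `bits` (the rewards r_t in the
    rounds with alpha_t^c = a_t, as 0/1 values). On positions in `bet_rounds`
    (the test set M) a fraction `delta` of the current wealth is bet on the
    next bit being 0; elsewhere no bet is placed."""
    wealth = 1.0                      # d() = 1 on the empty sequence
    for n, bit in enumerate(bits):
        if n in bet_rounds:
            wealth *= (1 + delta) if bit == 0 else (1 - delta)
    return wealth

# Example: print(martingale_wealth([0, 1, 0, 0, 1, 0], {0, 2, 3, 5}, 0.1))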
We now know that there are infinitely many T s.t. d((r_t)_t<T:α_t^c=a_t)≥ (1+δ)^(T+ϵ T)(1-δ)^T. It is left to show that for small enough δ, (1+δ)^(T+ϵ T)(1-δ)^T→∞ as T→∞.
First notice that
(1+δ)^(T+ϵ T)(1-δ)^T = ((1+δ)(1-δ))^T (1+δ)^(Tϵ) = (1-δ^2)^T (1+δ)^(Tϵ) = ((1-δ^2) (1+δ)^ϵ)^T.
So we need only show that for small enough but positive δ, (1-δ^2) (1+δ)^ϵ>1. The most mechanical way to do this is to take the derivative at δ=0 (where the left-hand side is equal to 1) and show that it is positive. The derivative is d/dδ (1-δ^2) (1+δ)^ϵ = (1+δ)^ϵ (ϵ - δ(ϵ+2)). Inserting δ=0 yields ϵ, which is positive.
§ A FEW MINOR RESULTS
In this section, we give a few minor results about the BRIA criterion. We don't use them anywhere, but they are helpful to understand what the BRIA criterion is about.
First, we simply note that the BRIA criterion becomes (weakly) stronger if we expand the set of hypotheses under consideration, which is immediate from the definitions in <Ref>.
Let ℍ,ℍ' be sets of hypotheses such that ℍ'⊆ℍ. Then any BRIA for ℍ is also a BRIA for ℍ'.
The following result shows that if we change a BRIA's decisions and estimates for a finite number of decisions in , it remains a BRIA.
Let α̅ be a BRIA for DP̅, r̅ covering ℍ. If for all but finitely many t∈ℕ we have ζ_t=α_t, then ζ̅ is also a BRIA for ℍ, DP̅, r̅.
The following shows that if two hypotheses differ only at finitely many time steps, then coverage of one implies coverage of the other.
Let α̅ be a BRIA covering h. Let h' be s.t. h_t=h'_t for all but finitely many t. Then α̅ covers h'.
The following states that the decisions can be reordered, as long as this is done by bounded numbers of places, while maintaining the BRIA criterion. Note that, of course, a BRIA's α_t^c,α_t^e are usually calculated as a function of DP_1,...,DP_t and r_1,...,r_t-1. Thus, a reordered BRIA will typically not be computable in this way.
Let α̅ be a BRIA for DP̅, r̅ covering h̅. Let f: ℕ→ℕ be a bijection s.t. f(n)-n is bounded (from above and below) (i.e., there is a number x s.t. |f(n)-n|<x for all n∈ℕ). Then α_f(1),α_f(2),... is a BRIA for DP_f(1),DP_f(2),... covering h_f(1),h_f(2),...
Let α̅ be a BRIA for DP̅, r̅ covering ℍ. Let ϵ̅ be a sequence of non-negative numbers such that ∑_t=1^T ϵ_t/T→ 0 as T→∞.
Let
ζ_t = (α_t^c,α_t^e+ϵ_t) for all t. Then ζ̅ is a BRIA for DP̅, r̅ covering ℍ.
Note that decreasing estimates by a similar sequence ϵ̅ in general does not maintain the BRIA property. For example, if the estimates in rounds in which an option “0.5" is chosen is decreased below 0.5, the resulting agent would be exploitable by a hypothesis that recommends “0.5" and promises 0.5.
|
http://arxiv.org/abs/2307.04106v2 | 20230709060722 | Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View | ["Jiayu Yang", "Enze Xie", "Miaomiao Liu", "Jose M. Alvarez"] | cs.CV | ["cs.CV"] |
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird’s-Eye View
Jiayu Yang^1,3^*, Enze Xie^2, Miaomiao Liu^1, Jose M. Alvarez^3
^1Australian National University, ^2The University of Hong Kong, ^3NVIDIA
{jiayu.yang, miaomiao.liu}@anu.edu.au, [email protected], [email protected]
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
========================================================================================================================================================================================================================================
Figure: Given multi-view images and camera parameters, our framework utilizes parametric depth to transform image features into BEV space for jointly estimating 3D object detection, BEV segmentation and a BEV visibility map.
Recent vision-only perception models for autonomous driving achieved promising results by encoding multi-view image features into Bird's-Eye-View (BEV) space. A critical step and the main bottleneck of these methods is transforming image features into the BEV coordinate frame. This paper focuses on leveraging geometry information, such as depth, to model such feature transformation. Existing works rely on non-parametric depth distribution modeling, leading to significant memory consumption, or ignore the geometry information to address this problem. In contrast, we propose to use parametric depth distribution modeling for feature transformation. We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view. Then, we aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame. Finally, we use the transformed features for downstream tasks such as object detection and semantic segmentation. Existing semantic segmentation methods also suffer from a hallucination problem as they do not take visibility information into account. This hallucination can be particularly problematic for subsequent modules such as control and planning. To mitigate the issue, our method provides depth uncertainty and reliable visibility-aware estimations.
[^*The work is done during an internship at NVIDIA]
We further leverage our parametric depth modeling to present a novel visibility-aware evaluation metric that, when taken into account, can mitigate the hallucination problem.
Extensive experiments on object detection and semantic segmentation on the nuScenes dataset demonstrate that our method outperforms existing methods on both tasks.
§ INTRODUCTION
In autonomous driving, multiple input sensors are often available, each of which has its own coordinate frame, such as the image coordinate
frame used by RGB cameras or the egocentric coordinate frame used by the Lidar scanner. Downstream tasks, such as motion planning, usually require inputs in a unified egocentric coordinate system, like the widely used Bird's Eye View (BEV) space. Thus, transforming features from multiple sensors into the BEV space has become a critical step for autonomous driving. Here, we focus on this transformation for the vision-only setup where we take as input multi-view RGB images captured in a single time stamp by cameras mounted on the ego vehicle and output estimation results, such as object detection and segmentation, in a unified BEV space, see Fig. <ref>.
In general, accurate depth information is crucial to achieve effective transformations.
Early methods<cit.> forgo explicit depth estimation and learn implicit feature transformations using neural networks, which suffers from a generalization problem since the neural network does not have an explicit prior on the underlying geometric relations. More recent methods <cit.> adopt explicit but simplified depth representations for the transformation, which either require large memory consumption, limiting the resolution <cit.>, or over-simplify the representation, leading to noise in the BEV space<cit.>. Moreover, these simplified depth representations cannot efficiently provide visibility information. As downstream tasks such as semantic segmentation are trained using aerial map ground truth, the lack of visibility estimation usually results in hallucination effects where the network segments areas that are not visible to the sensor <cit.>, see Figure <ref>. As a consequence, those estimations can mislead downstream planning tasks, as it is extremely dangerous to drive towards road that is hallucinated but actually non-driveable, especially at high speed.
To address these limitations, we propose to adopt explicit parametric depth representation and geometric derivations as guidance to build a novel
feature transformation pipeline. We estimate a parametric depth distribution and use it to derive both a depth likelihood map and an occupancy distribution to guide the transformation from image features into the BEV space. Our approach consists of two sequential modules: a geometry-aware feature lifting module and an occupancy-aware feature aggregation module. Moreover, our parametric depth-based representation enables us to efficiently derive a visibility map in BEV space, which provides valuable information to decouple visible and occluded areas in the estimations and thus mitigate the hallucination problem. We also derive ground-truth visibility in BEV space, which enables us to design a novel evaluation metric for BEV segmentation that takes visibility into account and reveals insights into selected recent methods <cit.> in terms of estimation on visible regions and hallucination on occluded regions.
Our contributions can be summarized as follows:
* We propose a geometry-aware feature transformation based on parametric depth distribution modeling to map multi-view image features into the BEV space. Our depth distribution modeling enables the estimation of visibility maps to decouple visible and occluded areas for downstream tasks.
* The proposed feature transformation framework consists of a novel feature lifting module that leverages the computed depth likelihood to lift 2D image features to the 3D space; and a feature aggregation module to project features to the BEV frame through the derived 3D occupancy.
* We further propose a novel visibility-aware evaluation metric for segmentation in BEV space that reveals insights into estimation on visible space and hallucination on occluded space.
Extensive experiments on the nuScenes dataset on object detection and semantic segmentation demonstrate the effectiveness of our method yielding state of the art results for these two tasks with a negligible compute overhead.
§ RELATED WORK
External depth based feature transformations.
When given depth input either from a Lidar sensor or from stereo matching, image features can easily be transformed into BEV space<cit.>. PointPillars<cit.> extracts features from a 3D point cloud and aggregates the features into BEV space. PseudoLidar<cit.>-based methods first estimate depth using stereo matching given a stereo image pair as input, followed by unprojecting the features based on the estimated depth. However, in real-life applications, Lidar sensors or stereo image inputs are not always available, which limits this line of methods.
Feature transformations without reliable depth input.
Without reliable depth input, various feature transformation methods have been proposed<cit.>, starting from early methods<cit.> that learn implicit feature transformations using neural networks. Learned transformations can suffer from the generalization problem, since the neural network does not explicitly account for changes in the cameras' intrinsic and extrinsic parameters. Recent methods <cit.> adopt various depth representations to explicitly transform features based on multi-view geometry to the BEV space. The key in these methods is the underlying depth representation, which determines the resolution and accuracy the feature transformation module can achieve. For instance, LSS <cit.> adopts a non-parametric depth representation. It represents depth as a discretized probability density function along each visual ray, which can be treated as a categorical distribution of depth. These per-pixel distributions further form the depth probability volume in LSS for all pixels in an image. When the sampling rate is sufficient, such a non-parametric depth distribution can adequately represent a large variety of depths, including multi-modal depth distributions. In practice, however, to estimate such a depth representation, the backbone needs to estimate a probability volume that grows cubically with the input image size and increases significantly with the number of input images, which limits the image and depth resolution.
To address this limitation, M^2BEV <cit.> adopts a simplified depth representation assuming the depth of all pixels follows a uniform distribution. Under this assumption, features are directly lifted to every location on the visual ray, resulting in identical features along the entire ray. Subsequent works <cit.> adopted similar depth representations. Such a simplified representation has an advantage in efficiency, as the backbone network does not need to estimate any parameters for the depth, but it can cause ambiguity and noise in the 3D space.
Unlike the non-parametric depth distribution used in <cit.> or the uniform depth distribution in M^2BEV<cit.>, we adopt a parametric depth distribution to model pixel-wise depth for feature lifting. A parametric depth distribution represents depth as a continuous distribution such as the Gaussian or the Laplacian distribution, and its estimated distribution parameters can be used to evaluate the depth likelihood or depth probability at any given depth value along each ray. To model the depth for a pixel, it takes only two parameters (μ,σ) for a Gaussian and two (μ,b) for a Laplacian, so it can be more efficient than a non-parametric distribution. Moreover, its continuous nature allows evaluating the depth likelihood at any point along the visual ray, which can achieve a higher depth resolution than the discretized non-parametric distribution. We specifically design our pipeline incorporating parametric depth to improve the 2D-to-BEV feature transformation and also propose the derivation of visibility for subsequent planning tasks and visibility-aware evaluations.
Aggregating 3D features into BEV space. Given the lifted features in 3D space, most existing works including LSS <cit.> and M^2BEV <cit.> use the feature concatenation method introduced by PointPillars<cit.> for transforming 3D features into BEV space. The 3D feature volume is split along the horizontal dimensions and interpreted as pillars of features. Then, a feature vector is created by concatenating features along the vertical dimension for each pillar. All the concatenated features form a 2D feature map, which is converted into a BEV feature map by a few convolution layers. This design allows each voxel along the Z-axis to have an equal contribution to the final BEV feature. However, this method can be affected by noisy features in empty space. We thus propose to compress the features based on a space occupancy probability calculated from the parametric depth distribution. Our proposed method can largely reduce the influence of those empty voxels on the aggregated features.
Joint Detection and Segmentation in BEV space.
M^2BEV recently proposed a unified detection and segmentation framework in BEV space, which we leverage to evaluate the effectiveness of our method. Specifically, the image features are transformed into a unified BEV feature, which is used by two parallel heads, a detection head and a segmentation head, to achieve multi-task estimation. M^2BEV leverages a detection head design from Lidar-based detection methods <cit.> and modifies it to better suit camera-based methods. Their segmentation head is inspired by the design from <cit.>. However, in contrast to prior work, we leverage the proposed explicit feature transformations based on parametric depth to address its weaknesses.
Temporal extension.
A few concurrent methods <cit.> proposed to utilize temporal information to further boost segmentation and detection performance in BEV space and achieved promising results. Most of these methods, including BEVFormer<cit.>, BEVerse<cit.> and BEVDet4D<cit.>, are based on the feature transformation module in LSS<cit.>.
<cit.> adopt depth supervision and temporal stereo matching to improve depth quality and further propose a more efficient implementation of LSS's lift-splat step. <cit.> query 2D features from the projected locations of 3D voxels, which does not explicitly use depth and is similar to the uniform depth assumption in M^2BEV<cit.>. Our contributions, focusing on depth representation, feature transformation and visibility estimation, are orthogonal to the temporal extensions of these methods, and our method can potentially be applied to these methods to further boost their performance and enable efficient visibility inference.
§ METHOD
Let us now introduce our framework to jointly perform segmentation and object detection. As shown in Fig. <ref>, our framework is comprised of three fundamental components: feature extraction, feature transformation, and multi-task estimation. The framework's key contributions include a parametric depth decoder integrated into the feature extraction, a geometry-aware feature lifting module, and an occupancy-aware feature aggregation module. Furthermore, we introduce a visibility estimation module as a constituent of the multi-task estimation that provides crucial visibility information for downstream planning tasks.
§.§ Problem Statement
Let { I_i} _i=1^N, I_i∈ℝ^H× W × 3,
be a set of RGB images taken at the same time slot, H and W define the image dimensions, and { K_i, R_i, T_i}_i=1^N represent the intrinsic and extrinsic parameters for their corresponding camera poses, respectively. We focus on lifting the image features f_i^2D∈ℝ^H× W × CH to the 3D space as f^3D∈ℝ^X'× Y' × Z'× CH and then aggregating them to the BEV space as f^BEV∈ℝ^X× Y × CH_B for 3D object detection and segmentation.
§.§ Parametric Depth Distribution Modelling
Let us first introduce our parametric depth distribution modelling. Given an image I_i, we extract its latent features f_i^T using a backbone network followed by an image feature decoder network to extract 2D image features, f_i^2D, see Fig. <ref>. Then, following depth estimation methods <cit.>, we adopt a Laplacian distribution to model depth in real-world scenarios, where the depth distribution for each pixel is given by
ℒ(d|μ,b) = 1/2bexp(-|d-μ|/b),
where μ provides an estimation of the depth, and b is the diversity parameter of the distribution, see Fig. <ref>. The goal in this module is to estimate (μ, b).
We design the parametric depth decoder network Φ_θ to map the latent feature to the parameter space of the depth distribution: Φ_θ: ℝ^H× W× CH_T→ℝ^H× W× 2,
where CH_T is the latent feature dimension. Note that when the ground-truth depth for each pixel is known, the depth distribution becomes a delta function, where the depth probability p(d_gt) on ground-truth depth d_gt is one and zero anywhere else. However, in practice, the depth is unknown for each pixel. Given our modelled depth distribution, we can calculate the depth likelihood analytically based on our parametric modelling.
Fig. <ref> shows an example of depth distribution where μ gives an estimate of the depth and b could be interpreted as the uncertainty of each estimation. Larger values of b correspond to areas where the estimation is more uncertain.
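To make the mapping Φ_θ concrete, the following is a minimal PyTorch-style sketch of a parametric depth head and the pointwise Laplacian likelihood of Eq. <ref>. The layer sizes, the softplus used to keep b positive, and all names are illustrative assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class ParametricDepthDecoder(nn.Module):
    # Maps latent features (B, CH_T, H, W) to per-pixel Laplacian parameters (mu, b).
    def __init__(self, ch_t):
        super().__init__()
        self.head = nn.Conv2d(ch_t, 2, kernel_size=1)

    def forward(self, feat):
        out = self.head(feat)                            # (B, 2, H, W)
        mu = out[:, 0:1]                                 # depth estimate per pixel
        b = nn.functional.softplus(out[:, 1:2]) + 1e-3   # diversity (uncertainty), kept positive
        return mu, b

def laplacian_likelihood(d, mu, b):
    # L(d | mu, b) evaluated at arbitrary depths d along the ray.
    return torch.exp(-torch.abs(d - mu) / b) / (2.0 * b)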
§.§ Geometry-aware Feature Lifting
Fig. <ref> depicts our geometry-aware feature lifting module to transform the 2D image features f_i^2D∈ℝ^H× W× CH from the camera coordinate system into 3D space defined for the ego vehicle coordinate system, generating the 3D feature volume f_i^3D∈ℝ^X'× Y'× Z'× CH_I.
Ideally, the 2D image feature for each pixel is back-projected along the visual ray to the 3D location defined by its ground truth depth value f^3D( P_gt) = f^2D( p), where P_gt = d_gt K_i^-1p̃, p̃ is the homogeneous coordinate for p. Without knowing the true depth value for each pixel, we discretize the 3D space into voxels and thus aggregate the feature for each voxel by forward projecting it to multi-view images.
Precisely, let P_j = (x_j, y_j, z_j)^T define the 3D coordinate of centre for voxel j. Given the camera poses for multiple views, we project it to image I_i as
d^i_jp̃^i_j = K_i( R_iP̃_j+ T_i) where p̃^i_j denotes the homogeneous coordinate of p^i_j in image I_i. Meanwhile, we can obtain the depth value of P_j in view i as d^i_j. Based on our parametric depth modelling, we obtain the likelihood of d^i_j being on the object surface as
α_d^i_j = ℒ(d^i_j|μ^i_ p^i_j,b^i_ p^i_j) = 1/2b^i_ p^i_jexp(-|d^i_j-μ^i_ p^i_j|/b^i_ p^i_j).
We similarly project the voxel to all views and aggregate the feature for the j-th voxel as
f_j^3D = ∑_i=1^Nα_d^i_j f_i^2D( p^i_j),
where f_i^2D is the extracted image feature. We adopt bilinear interpolation to obtain f_i^2D( p^i_j) when p^i_j is a non-grid coordinate. All lifted 3D features form the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH, which is then aggregated by our occupancy-aware feature aggregation module into a 2D BEV feature, as introduced in the following section.
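As a concrete illustration of Eqs. <ref>–<ref>, the snippet below lifts 2D features for a single sample by looping over views; it assumes PyTorch tensors, pinhole intrinsics/extrinsics, and uses grid_sample for the bilinear interpolation. It is a simplified sketch (no batching, no caching of projections) rather than the actual implementation.

import torch
import torch.nn.functional as F

def lift_features(feats_2d, mus, bs, K, R, T, voxel_centers):
    # feats_2d: list of N view feature maps, each (CH, H, W)
    # mus, bs:  list of N Laplacian parameter maps, each (1, H, W)
    # K, R, T:  lists of per-view intrinsics (3, 3), rotations (3, 3), translations (3,)
    # voxel_centers: (V, 3) voxel centres P_j in the ego frame
    CH = feats_2d[0].shape[0]
    V = voxel_centers.shape[0]
    f3d = torch.zeros(CH, V)
    for i in range(len(feats_2d)):
        cam = R[i] @ voxel_centers.T + T[i][:, None]            # (3, V) camera-frame coordinates
        proj = K[i] @ cam
        depth = proj[2]                                         # d_j^i for every voxel
        uv = proj[:2] / depth.clamp(min=1e-6)                   # pixel coordinates p_j^i
        H, W = feats_2d[i].shape[1:]
        grid = torch.stack([2 * uv[0] / (W - 1) - 1,            # normalise to [-1, 1] for grid_sample
                            2 * uv[1] / (H - 1) - 1], dim=-1).view(1, 1, V, 2)
        f2d = F.grid_sample(feats_2d[i][None], grid, align_corners=True)[0, :, 0]   # (CH, V)
        mu = F.grid_sample(mus[i][None], grid, align_corners=True)[0, 0, 0]         # (V,)
        b = F.grid_sample(bs[i][None], grid, align_corners=True)[0, 0, 0]           # (V,)
        alpha = torch.exp(-torch.abs(depth - mu) / b) / (2 * b)  # likelihood weight, Eq. <ref>
        alpha = alpha * (depth > 0)                              # drop voxels behind the camera
        f3d += alpha[None] * f2d                                 # accumulate over views, Eq. <ref>
    return f3d.T                                                 # (V, CH), later reshaped to (X', Y', Z', CH)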
§.§ Occupancy-aware Feature Aggregation
Our occupancy-aware feature aggregation module aggregates the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH from ego vehicle 3D coordinate frame into BEV space, forming BEV feature map f^BEV∈ℝ^X× Y× CH_B.
As shown in Fig. <ref>, the 2D BEV coordinate system is aligned with the XY plane of the ego vehicle coordinate system, where the shared origin is defined at the center of the ego vehicle. Note that the BEV coordinate system only has 2 dimensions, forgoing the Z dimension. The goal of the feature aggregation is to transform the 3D feature volume in the ego vehicle coordinate frame into a 2D feature map in the BEV space, which can be treated as aggregating the 3D feature volume along its Z axis. To this end, we first rearrange the previously computed depth likelihoods for all voxels from Eq. <ref> into a depth likelihood volume P^3D∈ℝ^X'× Y'× Z', which shares the same volumetric coordinates as the 3D feature volume f^3D. For each column along the Z-axis in the depth likelihood volume, the likelihood of the voxels at different heights reflects their spatial occupancy. Thus, we normalize the depth likelihood along the Z axis into a spatial occupancy distribution, forming a spatial occupancy volume O^3D∈ℝ^X'× Y'× Z' defined as
O^3D(x,y,z) = P^3D(x,y,z) + b_o/∑_z_i=0^Z'-1P^3D(x,y,z_i) + b_o,
where b_o is a bias term to encourage an equal contribution of features in completely occluded regions.
Our feature aggregation along the Z-axis helps minimize the influence of features from empty voxels on the final feature in the BEV frame. Given the spatial occupancy volume O^3D, we compute the final 2D BEV feature as a weighted sum of 3D features
f̂^BEV(x,y) = ∑_z_i=0^Z'-1 (O^3D(x,y,z_i)× f^3D(x,y,z_i)),
where we use the normalized spatial occupancy distribution as the 3D feature weight.
We further transform f̂^BEV via a few layers of convolution to obtain the final feature for BEV space f^BEV which is then applied to detection and segmentation tasks.
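In code, the occupancy-aware aggregation of Eqs. <ref>–<ref> reduces to a normalisation along the height axis followed by a weighted sum. A minimal sketch (the tensor layout and the value of b_o are assumptions for illustration):

def aggregate_to_bev(f3d, p3d, b_o=0.01):
    # f3d: (X', Y', Z', CH) lifted 3D feature volume
    # p3d: (X', Y', Z') depth-likelihood volume rearranged from the per-voxel likelihoods
    occ = (p3d + b_o) / (p3d.sum(dim=2, keepdim=True) + b_o)   # spatial occupancy, Eq. <ref>
    f_bev = (occ.unsqueeze(-1) * f3d).sum(dim=2)               # weighted sum along Z, Eq. <ref>
    return f_bev                                               # (X', Y', CH), then convolved into f^BEV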
§.§ Object Detection and Segmentation
Given the BEV feature map, we use two heads for detection and segmentation. Specifically, we adopt the detection head and segmentation head from M^2BEV <cit.> without modification for a fair comparison. The detection head consists of three convolution layers and outputs dense 3D anchors in BEV space along with the category, box size, and direction of each object. The segmentation head consists of five convolution layers and outputs predictions for 2 classes, road and lane, as originally defined by LSS<cit.>.
§.§ Training Strategy
We adopt a supervised training strategy. We supervise the parametric depth estimation by maximizing its depth likelihood on ground-truth depth observations. Specifically, we minimize the negative log-likelihood loss ℒ_D using sparse ground-truth depth d_gt generated from sparse lidar measurements, where ℒ denotes the Laplacian distribution of Eq. <ref>.
ℒ_D(θ) =∑_i=1^N∑_p∈𝒫^i-log(ℒ(d^p_gt,i|μ_i^p(θ), b_i^p(θ)))
where 𝒫^i defines the set of pixel coordinates with valid ground truth depth map for view i.
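In implementation terms, the loss of Eq. <ref> is the pointwise negative log-likelihood of the Laplacian, masked to pixels with a valid lidar return. A sketch (the normalisation by the number of valid pixels is a common implementation choice for scale stability, not taken from the text):

import torch

def depth_nll_loss(d_gt, mu, b, valid_mask):
    # d_gt, mu, b, valid_mask: (B, 1, H, W); valid_mask is 1 where a lidar depth exists.
    nll = torch.log(2 * b) + torch.abs(d_gt - mu) / b        # -log L(d_gt | mu, b)
    return (nll * valid_mask).sum() / valid_mask.sum().clamp(min=1)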
For the detection head, we use the 3D detection loss from PointPillars<cit.>, as follows, where ℒ_loc is the total localization loss, ℒ_cls is the object classification loss, ℒ_dir is the direction classification loss, N_pos refers to the number of positive samples, and β_cls, β_loc, β_dir are set to 1.0, 0.8, 0.8, respectively.
ℒ_det = 1/N_pos(β_clsℒ_cls + β_locℒ_loc + β_dirℒ_dir)
Please refer to <cit.> for more details.
For the segmentation head, we use both the Dice loss ℒ_dice and the binary cross entropy loss ℒ_bce as the segmentation loss ℒ_seg, with equal weights β_dice = β_bce = 1.
ℒ_seg = β_diceℒ_dice + β_bceℒ_bce
For the visibility map and additional outputs, since they are geometrically derived from the estimated parametric depth representation without any learned parameters, no supervision is applied to them.
§ VISIBILITY
§.§ Visibility Map
The segmentation in BEV space mainly focuses on segmenting lane regions. However, those regions are not always visible in the camera views due to the occlusion of vertical scene structures such as buildings (see Fig. <ref>). We thus propose to use our parametric depth modeling to infer a visibility map which decouples visible and occluded areas and helps mitigate the hallucination effect.
We define a visibility map V^BEV∈ℝ^X× Y to describe the visibility range of the ego vehicle's multi-view cameras. Starting from the likelihood of the Laplacian distribution in Eq. <ref>, the occlusion probability B(d) of a voxel in 3D space that has a back-projected depth d in a camera view is
B(d) = ∫_0^dℒ(x|μ,b) dx.
We derive this occlusion probability as follows. Firstly we find the indefinite integral of Eq. <ref> as
F(x) = ∫_-∞^x ℒ(t|μ,b) dt =
    1/2 exp( (x-μ)/b ),        if x < μ
    1 - 1/2 exp( -(x-μ)/b ),   if x ≥ μ.
Then we calculate the definite integral between [0,d] as the occlusion probability B(d), which is defined as
B(d) = F(d) - F(0) = F(d)-1/2exp(-μ/b).
In practice, this is computed very efficiently, without the need to perform the discrete integration of the depth likelihood over the range [0,d]. Based on the relationship between visibility and occlusion, we convert the occlusion probability B to visibility probability V by
V(d) = 1-B(d) = 1 + 1/2exp(-μ/b)-F(d).
To finally compute the visibility in BEV space, we take the maximum visibility probability along the Z axis to form the visibility map V^BEV.
Ṽ^BEV(x,y) = max_z∈𝒵'V(x,y,z)
where 𝒵'={0,1,2⋯ Z'-1}. The V^BEV is obtained via interpolation from Ṽ^BEV.
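Since the Laplacian CDF is available in closed form, the visibility computation of Eqs. <ref>–<ref> amounts to a few elementwise operations. A sketch (it assumes depths and μ are positive so that F(0) = 1/2 exp(-μ/b), and clamps exponents purely for numerical safety):

import torch

def laplacian_cdf(x, mu, b):
    # F(x) for the Laplacian distribution, Eq. <ref>.
    below = 0.5 * torch.exp(torch.clamp((x - mu) / b, max=0))
    above = 1.0 - 0.5 * torch.exp(torch.clamp(-(x - mu) / b, max=0))
    return torch.where(x < mu, below, above)

def visibility_map(d, mu, b):
    # d, mu, b: (X, Y, Z') per-voxel back-projected depths and matching Laplacian parameters.
    B_occ = laplacian_cdf(d, mu, b) - 0.5 * torch.exp(-mu / b)   # occlusion probability, Eq. <ref>
    V = 1.0 - B_occ                                              # visibility probability, Eq. <ref>
    return V.max(dim=2).values                                   # max over the height axis, Eq. <ref>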
§.§ Visibility-aware Evaluation
For semantic segmentation, where the ground truth is usually generated using aerial images, it is not possible to evaluate predictions in visible and occluded areas separately using the standard evaluation metrics. Therefore, in this section, we follow a process similar to the one used to generate the visibility map to derive a visibility-aware evaluation method for segmentation in BEV space. In this case, however, we project the lidar 3D points (ground truth) into the multi-view image space and use a depth completion network to obtain multi-view dense depth maps. This depth map is then used as the expected depth value to build a parametric depth representation F(θ_gt). We then evaluate the ground-truth depth likelihood at each voxel in 3D space using Eq. <ref>, forming the ground-truth depth likelihood volume L_gt. Finally, we derive the ground-truth visibility map in BEV space V using Eq. <ref> and Eq. <ref>.
In this case, V reflects the maximum visibility of the multi-view cameras in BEV space. Thus, it can be used as a mask to explicitly evaluate results in BEV space subject to visibility. Specifically, we use a threshold τ_vis to split the predicted segmentation s_pred and ground-truth segmentation label s_gt into visible region {s^vis_pred,s^vis_gt} and occluded region {s^occ_pred,s^occ_gt}. We can then compute the IoU for the visible (IoU_vis) and occluded (IoU_occ) regions separately as
s^vis = ∑_x∈𝒳,y∈𝒴 s(x,y) × 1(V(x,y) ≥ τ_vis),
s^occ = ∑_x∈𝒳,y∈𝒴 s(x,y) × 1(V(x,y) < τ_vis),
IoU_vis = s^vis_pred∩ s^vis_gt/s^vis_pred∪ s^vis_gt,  IoU_occ = s^occ_pred∩ s^occ_gt/s^occ_pred∪ s^occ_gt, where 𝒳={0,1,⋯,X-1}, 𝒴={0,1,⋯,Y-1}, and 1(·) is the indicator function.
We also report the occlusion rate on nuScenes as the percentage of visible or occluded segmentation labels over total number of segmentation labels.
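The visibility-aware metrics can then be computed by masking predictions and labels with the thresholded visibility map before the usual IoU. A sketch with boolean BEV maps (the threshold value is illustrative):

def visibility_aware_iou(pred, gt, vis_map, tau_vis=0.5):
    # pred, gt: (X, Y) boolean segmentation maps; vis_map: (X, Y) visibility in [0, 1].
    vis, occ = vis_map >= tau_vis, vis_map < tau_vis
    def masked_iou(mask):
        inter = (pred & gt & mask).sum().float()
        union = ((pred | gt) & mask).sum().float()
        return (inter / union.clamp(min=1)).item()
    return masked_iou(vis), masked_iou(occ)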
§ EXPERIMENTS
In this section, we first detail our experimental settings, then we demonstrate the effectiveness of our approach on the nuScenes dataset, and, finally, we provide ablation studies on the main components of our method.
§.§ Implementation Details
Dataset. We conduct our experiments on the nuScenes dataset <cit.>. The nuScenes dataset provides video sequences along with multiple sensor outputs, including Lidar, Radar, GPS and IMU, all of which are collected by calibrated and synchronized sensors mounted on a vehicle driving across Boston and Singapore. The dataset consists of 1000 sequences, split into 700 for training and 150 each for validation and testing. Each sample provides six RGB images captured by 6 cameras with divergent viewing directions along with Lidar sparse 3D points, Radar sparse 3D points, GPS pose and IMU readouts. We follow <cit.> to generate ground-truth segmentation labels from the global map provided by the nuScenes dataset.
Evaluation metrics. We report our results using the same metrics as in the nuScenes benchmark. For detection, we report mean Average Precision (mAP) and the nuScenes detection score <cit.>. For segmentation, we follow LSS <cit.>, and report the mean IoU score (mIoU). In addition, we report results using the proposed visibility-aware evaluation detailed in Sec. <ref>. Unless specified, we report numbers on the validation set.
Network architecture. We use a unified framework to demonstrate benefits of our depth-based feature transformation module. The network consists of a backbone image encoder and two decoding heads, one for segmentation and one for detection. We use ResNet with deformable convolution as the image encoder. For the decoding heads, we use the same architecture as the one in PointPillars <cit.>.
We set the size of the intermediate 3D volume consisting of X'× Y'× Z' = 400×400×12 voxels, with a voxel size of 0.25m× 0.25m× 0.5m, respectively. The final BEV space dimension consists of X× Y = 200×200 grids. Each grid is of size 0.5m× 0.5m.
Training and inference. During training, we use 6 RGB images and corresponding camera parameters as input.
The training for parametric depth estimation is supervised by the ground-truth sparse Lidar points provided in the dataset. Ground-truth detection and segmentation labels are used to supervise the detection and segmentation heads. We set batch size to 1 per GPU and use 3 nodes with 8 Nvidia V100 GPUs. For inference, our method only requires the 6 input RGB images together with the corresponding camera parameters.
§.§ Results
We now compare our results with M^2BEV and other state-of-the-art methods on the nuScenes dataset. To facilitate the comparison to other approaches, we use ResNeXt-101 as the backbone of our method for the detection and segmentation experiments and ResNet-50 as the backbone for the multi-task learning experiments and efficiency analysis.
Detection. We report the results of our method and related state of the art methods in Tab. <ref> and Tab. <ref>, for the validation set and the test set respectively. For the validation set, we only include frame-wise camera-based methods. That is, we exclude those approaches using temporal information. For the test set, we include the latest results including Camera, Lidar, Radar and their combination. As we can see, in both sets, our approach outperforms all existing camera-based methods on both mAP and the NDS score.
Segmentation. We now focus on evaluating our semantic segmentation results. We report our performance compared to state-of-the-art methods on the nuScenes validation set in Tab. <ref>.
We also report a variant of our model trained without depth supervision (Ours*) to fairly compare with LSS <cit.>.
Our method performs significantly better compared to LSS <cit.> on both road and lane segmentation and slightly better compared to M^2BEV <cit.>, the closest method to ours.
Our model without depth supervision still outperforms existing methods.
Interestingly, if we take the visibility into account, as shown in Tab. <ref> and Fig. <ref>, our method clearly outperforms the baselines on the visible areas while maintaining performance comparable to M^2BEV on the occluded regions. These results evidence the benefits of our parametric depth approach.
Joint detection and segmentation. Finally, we report results for jointly evaluating both tasks. In this case, we compare our results to the multi-task version of M^2BEV. We show results for this experiment in Tab. <ref>. Our method, once again, outperforms the baseline on both detection and segmentation tasks. These results further evidence the benefits of an improved depth representation in the 2D to 3D feature transformation process.
Efficiency. Our parametric depth estimation requires the estimation of additional parameters compared to simplified depth estimation approaches. As shown in Tab. <ref>, our model requires a slightly larger amount of memory; however, this does not lead to a significant increase in inference time.
§.§ Ablation Studies
We carry out ablation experiments to study the influence of feature transformations on final detection and segmentation performance and the robustness of our model to calibration error. More ablation experiments can be found in supplementary material. We use ResNet-50 as the backbone for all ablation experiments.
Feature transformations.
We evaluate the effectiveness of the parametric depth based feature lifting and aggregation modules by comparing with the baseline non-parametric depth based lifting of LSS<cit.>, a baseline uniform depth based lifting similar to M^2BEV, and the widely used PointPillars<cit.> feature aggregation. Results are shown in Tab. <ref>. Our proposed parametric depth based lifting coupled with occupancy based feature aggregation achieves the best performance for both detection and segmentation.
Limitations. Like all camera-based methods, our method can only provide reliable detection and segmentation results in visible regions. In occluded regions, although our method can provide hallucinated results and visibility information, the results are not reliable for making critical driving decisions. Downstream planning tasks should utilize the visibility and uncertainty information to achieve reliable planning.
§ CONCLUSION
We propose a parametric depth distribution modeling-based feature transformation that efficiently transforms 2D image features to BEV space. By incorporating visibility inference, our method can provide crucial visibility information to downstream planning tasks. Moreover, our approach outperforms existing methods in both detection and segmentation tasks, making it a promising candidate for feature transformation in future works. In our future work, we aim to investigate the integration of temporal information to improve estimation accuracy.
|
http://arxiv.org/abs/2307.04980v1 | 20230711023822 | A Model for Circuit Execution Runtime And Its Implications for Quantum Kernels At Practical Data Set Sizes | [
"Travis L. Scholten",
"Derrick Perry II",
"Joseph Washington",
"Jennifer R. Glick",
"Thomas Ward"
] | quant-ph | [
"quant-ph"
] |
Quantum machine learning (QML) is a fast-growing discipline within quantum computing. One popular QML algorithm, quantum kernel estimation, uses quantum circuits to estimate a similarity measure (kernel) between two classical feature vectors. Given a set of such circuits, we give a heuristic, predictive model for the total circuit execution time required, based on a recently-introduced measure of the speed of quantum computers. In doing so, we also introduce the notion of an “effective number of quantum volume layers of a circuit", which may be of independent interest. We validate the performance of this model using synthetic and real data by comparing the model's predictions to empirical runtime data collected from IBM Quantum computers through the use of the Qiskit Runtime service. At the current speeds of today's quantum computers, our model predicts that data sets consisting of on the order of hundreds of feature vectors can be processed in on the order of a few hours. For a large-data workflow, our model's predictions for runtime imply further improvements in the speed of circuit execution – as well as the algorithm itself – are necessary.
§ INTRODUCTION
Quantum machine learning (QML) is a broad, interdisciplinary topic at the intersection of quantum information/computation and classical machine learning <cit.>. Within QML, there has been much study on one particular QML algorithm, called “quantum kernel estimation" or “quantum support vector machines" <cit.>. Quantum kernels are a similarity measure K(𝐱, 𝐲) between two classical feature vectors (data points) 𝐱, 𝐲 evaluated using a quantum circuit[For details on kernel methods in general, see <cit.>.]. This circuit uses an n-qubit parameterized encoding circuit U(θ). Given U, 𝐱, and 𝐲, and some fiducial starting state |ψ_0⟩, the corresponding quantum kernel value is given by
K(𝐱, 𝐲) = |⟨ψ_0|U^†(𝐲)U(𝐱)|ψ_0⟩|^2.
Usually, |ψ_0⟩ is taken to be a computational basis state (typically, the all-zeros state, |0^⊗ n⟩). To calculate a quantum kernel using a quantum computer, |ψ_0⟩ is prepared, and the circuit U(𝐱)∘ U^†(𝐲) is applied. (Here, ∘ means the composition of two operators.) Finally, the resulting state is measured, resulting in a classical bitstring 𝐛. The probability of obtaining the bitstring corresponding to |ψ_0⟩ is estimated by repeating the just-described process many times (aka, for many “shots") to build up statistics:
P̂r(|ψ_0⟩) = (# of outcomes 𝐛 corresponding to |ψ_0⟩) / S,
with S as the number of shots. Here, the hat symbol “ ̂ " is used in the statistical sense of “is an estimate of", not in the quantum-mechanical sense of “is a quantum-mechanical operator". That is, Equation (<ref>) is an estimate of the quantum kernel, Equation (<ref>).
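As a minimal illustration of Equation (<ref>), the estimator is simply the relative frequency of the all-zeros bitstring. The sketch below assumes a counts dictionary of the form returned by typical quantum SDKs (bitstring mapped to number of occurrences); the function name is illustrative.

def estimate_kernel_entry(counts, num_qubits):
    # counts: e.g. {'0000': 3912, '0001': 55, ...} from S shots of U(x) U^†(y) applied to |0...0>.
    shots = sum(counts.values())
    return counts.get('0' * num_qubits, 0) / shots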
Given a data set 𝒟 = {𝐱_1, 𝐱_2, ⋯ , 𝐱_N}, usually the collection of pairwise quantum kernel values K(𝐱_1, 𝐱_1), K(𝐱_1, 𝐱_2), ⋯ is estimated. These values can then be used in classical kernel-based algorithms, such as support vector machines <cit.>, Gaussian processes <cit.>, etc. <cit.>. In this way, quantum kernels “enhance" classical kernel-based algorithms. This work focuses on quantum-enhanced support vector machines.
Quantum kernels have already been used in a variety of contexts, including high-energy physics <cit.>, healthcare and life sciences <cit.>, many-body physics <cit.>, natural language processing <cit.>, industrial manufacturing <cit.>, and financial services and banking <cit.>. However, to date the only proof of an advantage from using quantum kernels is theoretical in nature <cit.>. In a practical context, quantum advantage with quantum kernels has yet to be attained.
One obstacle to deploying quantum kernels in practice – and at scale across a data set where N >> 1 – is the time spent executing the necessary quantum circuits could become a bottleneck to the total runtime of the quantum-enhanced, kernel-based algorithm. At least two places exist where this bottleneck could arise: first, transferring data to the quantum computer (necessary because, usually, quantum computers are not closely co-located with the data sets they are processing, necessitating the transfer of data over networks), and second, the total time required for the quantum computer to run the required circuits. The former obstacle can be alleviated by minimizing the amount of data transfer required <cit.>; the latter is the subject of this work. The question we consider here is: “How much time is needed to execute a job consisting of S shots each of M circuits, each of which estimates a quantum kernel value based on an encoding circuit U(θ)?".
The runtime must clearly relate to M itself, as well as S, as evidenced by Figure <ref>. However, other properties of the circuit itself – as well as the system the job is being run on – may also impact runtime. In this work, we introduce a well-motivated model for job runtime (Section <ref>), and evaluate its performance by comparing the model's predictions to results obtained from running jobs on IBM Quantum's systems using the Qiskit Runtime <cit.> service (Section <ref>). Using this model, we then discuss the implications of estimating quantum kernels on practical and large data sets in for a climatologically-relevant problem; namely, flash flood prediction (Section <ref>). Finally, we conclude with
a discussion of the implications of the model for processing large data sets (N>>1), as well as interesting directions for future work (Section <ref>).
§ A MODEL FOR CIRCUIT EXECUTION (JOB) RUNTIME
This section presents a model for job runtime. The model does not take into account the time a given job spends waiting in a queue prior to being executed on hardware. Empirical studies of queue times show wide variation in how long a given circuit spends waiting to execute; see <cit.>. Queue time depends strongly on the queuing system used; instead, this work focuses on modeling the time required to run the job once it has been removed from the queue.
Modeling job runtime is hindered due to a lack of well-defined notions of “How long does it take a quantum computer to run a circuit?". One starting point is using information about how much time is needed for state initialization, gates, and measurements. However, such a model may be overly-cumbersome to use in practice, as modeling the runtime of a circuit with even a moderate number of qubits or depth could be difficult. Doing so would require getting down into the weeds of the circuits, and considering the vagaries of how the hardware executes them[For example, whether the compilers used to schedule pulses attempt to bring pulses forwards in time in the pulse-based representation of the circuit.]. What's more, such a low-level model misses the impact of contributions higher up the stack on timing performance – for example, the time spent compiling an abstract quantum circuit or program into the requisite pulse signals would clearly impact overall runtime, but wouldn't be captured by such a model.
Hence, a better model – in the sense of capturing more of the stack that impacts timing performance – would focus on modeling runtime starting from the moment a given job is pulled from a queue of jobs, to the time its results are sent back to the end-user. The necessary ingredient to do so is a holistic notion of “system speed".
Such a quantity has been recently introduced in the literature, and is called “Circuit Layer Operations Per Second" (CLOPS) <cit.>. The methodology used to calculate the CLOPS of a given system explicitly encompasses the entire stack from the moment a job is de-queued, and is straightforward to describe. Consider running a job of M parameterized quantum volume circuits <cit.> on a system with quantum volume V. Each circuit in the job has a number of quantum volume layers (repetitions of permutations and random 2-qubit gates) D=log_2(V). And suppose the parameters of each circuit are updated updated K times, and each circuit in the job is repeated for S shots. Let the total elapsed time be T. The CLOPS C of the system is then
C = MDKS/T.
The methodology used to compute CLOPS takes S=M=100, K=10, and performs the parameter updates by chaining the output of one run of a circuit to the inputs of the next run, through the use of a pseudo-random number generator <cit.>.
Assuming the stack has no fixed overheads or time costs with respect to varying any of M,K,S, or D, then a multiplicative scaling of any of these parameters would result in a corresponding scaling of the total runtime. That is, if another job was run with M' circuits, K' parameter updates, S' shots, and D' quantum volume layers, then a system with CLOPS C should take a time
T' = (M'*K'*D'*S')/C
to run such a job.
To apply Equation (<ref>) to jobs consisting of circuits which estimate quantum kernel values, two modifications are necessary. Both relate to the fact the CLOPS metric is defined using quantum volume circuits, but quantum volume circuits are not usually used as encoding circuits in QML.
The first – and most straightforward – issue is the CLOPS metric incorporates the notion of parameter updates through the variable K. When calculating quantum kernels, no parameter updates are done; K should be fixed to one[Note if quantum kernel training <cit.> was performed, then K≠ 1, and should reflect the number of update calls performed.].
The second issue is what the notion of “number of quantum volume layers" (D) would mean. While a given feature map may have a parameter which seems similar in spirit to D – for example, by repeating a base template for an encoding circuit several times – these are different categories of items, making them incomparable. Figure <ref> shows examples of what both “number of repetitions of a base template" mean for quantum volume and a particular QML circuit, called a “ZZFeatureMap" ([Equation (<ref>)] and reference <cit.>).
Consequently, a notion of the “effective" number of quantum volume layers is needed. We provide a definition below, based on 2 observations. The first observation is for an n-qubit encoding circuit U(𝐱), with a number of repetitions of its template D, the corresponding circuit for calculating a quantum kernel acts on n qubits and has a number of repetitions of the base template 2D. Thus, its volumetric area[The notion of volumetric area of a circuit is based on the idea of volumetric benchmarking of quantum computers <cit.>, with the difference that in <cit.>, the depth of the circuit when transpiled to a canonical gate set is used in place of a notion of “number of layers".] – the product of circuit width and number of base layers – is 2Dn. A quantum volume circuit acting on q qubits has volumetric area[Recall quantum volume circuits are square, meaning the number of quantum volume layers is equal to the number of qubits the circuit acts on.] q^2. Thus, a quantum volume circuit with q^2 = 2Dn has the same volumetric area as a quantum kernel circuit. This sets a required number of qubits the quantum volume circuit needs to act on in order to have the same volumetric area as the quantum kernel circuit.
The second observation is even when two circuits have the same volumetric area, their depths when transpiled to hardware will generally not be the same (see Figure <ref>). A variety of circuits with different values of n and D can have the same volumetric area, but the circuit execution time can be dramatically different – intuitively, a circuit with higher depth will take more time to execute. Hence, capturing the effect of circuit depth is necessary. To do so, we normalize the depth of the quantum kernel circuit to the depth of a quantum volume circuit with the same volumetric area, and use it as a scaling factor.
These two observations above lead to a definition of the “effective number of quantum volume layers" of a quantum kernel circuit as
D_eff≡⟨Depth(U^†(𝐱)U(𝐲))⟩/⟨Depth(QV_v) ⟩*v, where v = ⌈√(2Dn)⌉.
Here, QV_j denotes a quantum volume circuit with a number of layers j, and Depth() denotes the circuit depth when transpiled onto hardware. The expectation values are taken with respect to the parameters 𝐱, 𝐲 and random seeds for the kernel and quantum volume circuits, respectively.
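The following sketch estimates D_eff of Equation (<ref>) empirically for a ZZFeatureMap-based kernel circuit (the encoding circuit used later in this work) by averaging transpiled depths over random parameters and quantum volume seeds. It assumes Qiskit and a backend object; the number of trials, the use of assign_parameters, and the function name are illustrative, and API details may vary between Qiskit versions.

import math
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume, ZZFeatureMap

def effective_qv_layers(n, reps, backend, entanglement="linear", trials=20, seed=0):
    rng = np.random.default_rng(seed)
    v = math.ceil(math.sqrt(2 * reps * n))   # QV width matching the kernel circuit's volumetric area
    kernel_depths, qv_depths = [], []
    for t in range(trials):
        fm = ZZFeatureMap(n, reps=reps, entanglement=entanglement)
        x = rng.uniform(0, 2 * np.pi, n)
        y = rng.uniform(0, 2 * np.pi, n)
        kernel = fm.assign_parameters(x).compose(fm.assign_parameters(y).inverse())
        kernel_depths.append(transpile(kernel, backend).depth())
        qv_depths.append(transpile(QuantumVolume(v, seed=t), backend).depth())
    return np.mean(kernel_depths) / np.mean(qv_depths) * v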
Thus, our model for execution time of a job consisting of M quantum kernel circuits with an effective number of quantum volume layers D_eff on a system with with CLOPS C for a total of S shots is given by
T̂ = MS/C*D_eff.
Note here, T̂ means “An estimate of the runtime", not “Is a quantum-mechanical operator".
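In code, the model is a one-line function; the result is in seconds when C is given in circuit layer operations per second (names are illustrative):

def predicted_runtime(num_circuits, shots, clops, d_eff):
    # Equation (<ref>): T_hat = M * S / C * D_eff   (seconds)
    return num_circuits * shots * d_eff / clops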
§ MODEL PERFORMANCE
The performance of the model is evaluated using 2 kinds of circuits: quantum volume circuits and kernel circuits based on the ZZFeatureMap circuit. Both of these circuits are parameterized, so synthetic data is generate the parameters. Empirical runtime information is collected by submitting the jobs to IBM Quantum systems using the Qiskit Runtime, a quantum computing service and programming model allowing users to optimize workloads and efficiently execute them on quantum systems at scale <cit.>, via the Runtime's Sampler primitive <cit.>. Across the jobs, the number of circuits M, shots S, backend used, and number of qubits n are varied. In addition, for the ZZFeatureMap circuits, both the number of repetitions of the base template D and the circuit's entanglement structure are varied.
To quantify the model's performance at predicting runtime, two numbers are used. Suppose the actual runtime for the job is T, and the runtime predicted by the model is T̂. The corresponding loss L of the model with respect to the job is given by
L =
    r - 1,     if r ≥ 1
    1/r - 1,   if r < 1,
with r = T̂/T.
By construction L ≥ 0, with equality if, and only if, the predicted and actual runtimes agree.
The number r – the runtime ratio – is another quantifier of the degree to which the predicted and actual runtime agree. When r < 1, the model under-predicts runtime. One problem with this is if the predictions of the model are used in other contexts – for example, analyzing the overall runtime of a QML workflow – then an under-prediction on the part of the model would negatively impact such an overall analysis. Hence, the loss function more strongly penalizes under-prediction of runtime (i.e., increases more quickly when r < 1).
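For completeness, the runtime ratio and loss used below can be written as the following sketch (names are illustrative):

def runtime_ratio_and_loss(t_pred, t_actual):
    # Equation (<ref>): under-prediction (r < 1) is penalised more strongly than over-prediction.
    r = t_pred / t_actual
    return r, (r - 1 if r >= 1 else 1 / r - 1)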
§.§ Model Performance: Quantum Volume Circuits
The runtime model uses the CLOPS metric as the notion of the speed of circuit execution. The CLOPS metric is computed using quantum volume circuits. Hence, we first evaluate the performance of the model when the circuits in the job are quantum volume circuits. Note for these jobs, D_eff in Equation (<ref>) is taken to be the number of quantum volume layers D.
Table <ref> shows – across 5 backends – the actual runtime T, the runtime predicted by the model T̂, the runtime ratio r=T̂/T, and the corresponding loss for jobs where S=M=100, and D= log_2(QV). (Note in these experiments K=1, whereas for the CLOPS experiments, K=10.) The value of the runtime ratio T̂/T shows the model consistently under-estimates the runtime. As a result, the model's loss is non-zero. One reason for this discrepancy could be that, when calculating a system's CLOPS, the quantum volume circuits are pre-transpiled to a given system. For the jobs submitted here, they were not, meaning some additional time was spent in transpilation.
Note the number of quantum volume layers D depends on the quantum volume of the backend; the circuits run on systems with higher quantum volumes have more layers than those run on systems with lower quantum volumes. Hence, even if 2 systems have roughly the same CLOPS values, their actual runtimes may be different, due to differences in the number of layers in the circuit. Further, different systems have different numbers of qubits, which impacts the time cost of circuit transpilation and waveform loading. Thus, even though ibmq_jakarta, ibmq_guadalupe, and ibm_hanoi all have a comparable CLOPS value, their differences in quantum volume and qubit count mean the actual runtime T will be different for these CLOPS jobs.
The methodology used to compute CLOPS uses S=100. This is a small value for applications where precise estimates are required; commonly, jobs use on the order of thousands of shots. For quantum kernels, increasing the number of shots directly increases the accuracy with which the kernel [Equation (<ref>)] can be estimated. And, as shown back in Figure <ref>, changing S dramatically changes the runtime.
This is also reflected in the results of Table <ref> which extend Table <ref> to run the exact same set of jobs, except the number of shots is changed. Considering the model's loss for these jobs, we see it is minimized when S is 100 or 500 – exactly (or close to) the number of shots used for measuring CLOPS.
As S→ 0 the loss increases substantially, because the runtime ratio approaches 0, driven by the fact that in the model [Equation (<ref>)] the number of shots enters multiplicatively in the predicted runtime. However, there are fixed overheads across the stack which don't scale with S. For example, as noted in <cit.>, the time required for circuit compilation and data transfer is independent of S. Such an overhead would dominate circuit runtime in a low-shot regime.
When the number of shots increases, the loss does as well, albeit less dramatically as when the number of shots decreases. In terms of the runtime ratio, as S increases, the model over-predicts job runtime, though the runtime ratio appears to be similar across similar backends.
These results imply that although the model is not perfectly accurate with respect to predicting runtimes for the CLOPS job, it is – comparatively speaking – most accurate for such a (or a very similar) job, as opposed to jobs involving a small or large number of shots. As we will discuss in Section <ref>, one of the main reasons for these discrepancies could be the fact the CLOPS metric is evaluated using an execution path different from the one used here. That is, the manner in which jobs are set up and run is different, which can lead to differences in execution time, a point returned to in the Conclusions (Section <ref>).
In the next subsection, we repeat similar experiments as those whose results are presented here, but with a different kind of circuit.
§.§ Model Performance: Quantum Machine Learning Circuits
The previous sub-section evaluated the model's performance on quantum volume circuits. Next, we turn to the task of evaluating the model using a circuit used for quantum kernels, which evaluate a similarity measure K(𝐱,𝐲) between two classical feature vectors 𝐱, 𝐲. Note in this section, synthetic values for 𝐱 and 𝐲 are used.
Given an encoding circuit U(𝐱), the corresponding quantum kernel circuit is U(𝐱)∘ U^†(𝐲). We focus on a particular encoding circuit on n qubits, based on an encoding circuit introduced in <cit.>. The encoding circuit we use is given by
U(𝐱) =V(𝐱)∘ H^⊗ n,
where H^⊗ n is the Hadamard gate on all n qubits, and
V(𝐱) = Exp(i∑_𝐣∈ S[ ϕ_𝐣(𝐱)∏_a ∈𝐣Z_a]).
(Note that in <cit.>, the encoding circuit used is V(𝐱) ∘ H^⊗ n∘ V(𝐱)∘ H^⊗ n.) Here the set S indexes both individual qubits, as well as pairs of them.
The function ϕ_𝐣(𝐱) is given by
ϕ_𝐣(𝐱) =
    𝐱_j,                     for a single qubit j
    (π - 𝐱_j)(π - 𝐱_k),      for a qubit pair (j,k).
On the j^th individual qubit, V(𝐱) applies a phase rotation, with the phase being set by the value the j^th component of 𝐱, 𝐱_j. On a pair of qubits j,k, V(𝐱) applies an entangling ZZ operation, with a phase set by (π - 𝐱_j)(π - 𝐱_k).
Implicit in the notation above is the idea of an “entangling strategy", which determines which pairs of qubits become entangled. In this work, we consider two strategies:
* “Linear", in which adjacent pairs of qubits are entangled: S = {0, 1, ⋯, n-1 }∪{(0,1), (1,2), (2,3), ⋯, (n-2,n-1)}
* “Full", in which all pairs of qubits are entangled: S = {0, 1, ⋯, n-1 }∪{(0,1), (0,2), (0,3), ⋯, (0,n-1), (1,2), ⋯, (n-2,n-1)}
In general, quantum kernel circuits are rectangular: the total number of layers (2D) does not equal the circuit width (n). We use the aspect ratio of the circuit, a≡ 2D/n to capture whether the kernel circuits are wide and shallow (a < 1), square (a=1), or narrow and deep (a > 1).
Table <ref> shows the average performance of the model for kernel circuits with an aspect ratio a=1, and where M=S=100. (Note that here, the data is aggregated over circuits whose width varies between 2 and 6.) Similar behavior as Table <ref> is observed; namely, the model generally under-predicts job runtime. The degree to which the model does so depends on the entanglement structure of the encoding circuit. In particular, circuits with a “linear" entanglement structure have a runtime ratio closer to 0 than those whose entanglement structure is “full". From Figure <ref>, we see the former family of circuits has a lower depth compared to quantum volume circuits of a similar volumetric area. This suggests the depth-dependent factor in the definition of D_eff in Equation (<ref>) plays a significant role in the model's performance.
Figure <ref> extends Table <ref> to include rectangular circuits, and to vary the number of shots. The behavior of the model is very similar as to what was seen for quantum volume circuits: namely, the model's runtime ratio decreases dramatically as S→ 0, and once S is on the order of 500 or so, the ratio stabilizes. This behavior consistently occurs across a variety of circuit aspect ratios a, and is also consistent when the entanglement structure of the circuit is changed.
With respect to the circuit's aspect ratio, the model does better when the circuits are narrow and deep (a > 1) than wide and shallow (a < 1). This effect appears to be more pronounced for the “full" entanglement structure, especially when S is large. One way to understand this is that with the “full" entanglement structure, every qubit is entangled with every other; for such circuits which are also wide, the depth of the circuit when transpiled to hardware could be quite large; this impacts the model's predictions via D_eff.
Finally, we consider the impact of changing the number of circuits in the job, M. Figure <ref> shows the mean runtime ratio r as a function of M, where the data is segregated on the number of shots, and whether a=1. The model's behavior is consistent for both square (a=1) and rectangular (a≠ 1) kernel circuits, and the mean runtime ratio is fairly stable across a wide range of values for M.
Taken together, Figures <ref> and <ref> suggest that of the four parameters in the model, it is the number of shots S and the number of effective quantum volume layers D_eff which play the most (and second-most) substantial role in influencing the model's performance, respectively. Given the assumptions of the model, this makes sense. S enters multiplicatively in the model; as it goes down, the impact of fixed, shot-independent overheads becomes more important, but isn't explicitly captured by the model[As noted in Section <ref>, this is an intentional choice, to avoid creating an unwieldy and over-parameterized model.]. The job's runtime is also impacted by how deep the circuits in the job are. The depth of the circuits is impacted both by the number of repetitions of the template and the entangling strategy, both of which impact D_eff.
Having evaluating the model's performance on two kinds of circuits using synthetic data, we now turn to using the model to estimate runtimes for large data sets in a real-world context.
§ IMPLICATIONS FOR RUNTIME ON PRACTICAL DATA SET SIZES
The prior section studied the model's performance. In this section, we use the model to examine the implications of running jobs for calculating quantum kernels where the underlying data set is both large and practical. The choice of the data set was influenced by the fact this work started as part of a summer internship program offered by IBM and its Operations Risk Insights (ORI) organization[Operations Risk Insights (ORI) is an automated, comprehensive, and Watson-powered alert service which assesses employee safety, operations and natural disaster risk events to identify those posing the greatest threat of impact to the business continuity.]. Over the summer of 2022, ORI began incorporating into its capabilities a purely classical model to predict flash floods. In parallel, the authors (and others, noted in the Acknowledgements) began exploring the use of a quantum-enhanced model through the use of quantum kernels <cit.>.
Flash floods are a significant contributor to annual, weather-inflicted monetary losses. They can be catastrophic to communities, infrastructure, and of course, people. Flash flood events are often unpredictable, making it hard to prepare for or mitigate their potential effects. For example, California's flooding rains and heavy snows, which killed at least 17 people, likely caused more than $30 billion in damages and economic losses in January of 2023 <cit.>. Improved early warnings of flash floods thus can save lives and reduce economic losses.
The ORI effort initially focused on flash flood prediction within the state of Texas, at two levels of geographic granularity: county level, and ZIP code level. At these two levels of granularity, the available data set had N=2513 records and N=70571 records, respectively. Although this number of records may be modest from a classical ML perspective, it is important to keep in mind that generating quantum kernels for both of these data sets requires running on the order of 3 million and 2.5 billion circuits, respectively. Utilizing the runtime model in Equation (<ref>), we can roughly predict how long running those jobs would take.
Figure <ref> plots the predictions of the model out to data set sizes encompassing both the Texas county and ZIP code data sets[For a given number of feature vectors N, the number of quantum kernel circuits M = N(N-1)/2.]. Here, specific values for both D_eff and S are used; namely, D_eff = 2 and S=4000. As we've seen in the previous section, the runtime will be impacted by both of these quantities. The primary focus of the figure is the impact of improving the speed of circuit execution (as measured by CLOPS, C).
Current system speeds are on the order of 1K. At such speeds, processing the Texas county data set would take on the order of approximately 1 year, and processing the Texas ZIP code data set would be infeasible for all practical purposes.
Recently, a demonstration of C>10K CLOPS has been made <cit.>. At those system speeds, processing the Texas county data set could take on the order of months, and processing the Texas ZIP code data set would still remain infeasible.
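The figures quoted above follow directly from Equation (<ref>); a short worked example for the Texas county data set, with D_eff = 2 and S = 4000 as assumed in Fig. <ref> and rounded CLOPS values:

N = 2513                      # feature vectors (Texas county data set)
M = N * (N - 1) // 2          # pairwise kernel circuits = 3,156,328
S, D_eff = 4000, 2
for C in (1_000, 10_000):     # roughly current vs. recently demonstrated CLOPS
    T = M * S / C * D_eff     # seconds, Equation (<ref>)
    print(f"CLOPS={C}: {T / 86400:.0f} days ({T / (86400 * 365):.2f} years)")
# CLOPS=1000:  292 days (0.80 years);  CLOPS=10000: 29 days (0.08 years)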
Setting aside whether quantum advantage can be found for these particular data sets and the particular encoding circuit used, it is still useful to highlight how considerations from the overall flash-flood prediction workflow used by ORI would place constraints on the acceptable amount of runtime on quantum hardware, assuming quantum-enhanced classifiers were deployed to the platform. That is, the ORI platform updates its flash flood predictions every 2 hours. If a quantum-enhanced classifier was incorporated into the platform, it would be necessary to refresh the kernel values within that time window.
And while re-processing an entire data set may not be necessary, the implication from the model presented here is that, barring advances in the underlying algorithm itself, the runtime on quantum hardware would need to come down by several orders of magnitude in order for the quantum kernel part of the ORI platform to sustain the desired rate of updates for the model.
This highlights the quantum part of a quantum-enhanced workflow doesn't exist in isolation, and there are considerations which have nothing to do with quantum computing per se which can impact the feasibility of deploying a quantum-enhanced approach to a classical workflow.
§ CONCLUSIONS & DISCUSSION
Quantum kernels are one particular quantum machine learning algorithm, in which classical ML models are enhanced by similarity measures computed by running quantum circuits on quantum systems. Given a data set of size N, 𝒪(N^2) kernels need to be calculated. In this work, we studied the problem of modeling the runtime of a collection of circuits used to calculate quantum kernel values, and presented a predictive model to do so [Equation (<ref>)], based on a recently-introduced measure of the speed of quantum computers, CLOPS <cit.>.
We validated the model's performance by comparing its predictions against empirical runtime information, and found the model is most accurate when the job closely mimics those used to calculate the CLOPS of a given system. When the job being run is substantially different, the model's performance suffers. When the number of shots is small, the model consistently under-predicts runtime, due to the fact that, in reality, the software stack has fixed (and unavoidable!) overheads not accounted for by the model. When the number of shots is large, the model generally over-predicts runtime in a shot-dependent fashion. This suggests the model could be used – to reasonable accuracy – in a regime where the number of shots is modest, or large. Further, the model's performance is relatively stable as with respect to the number of circuits in the job, meaning it can be applied in the context of jobs with very large numbers of circuits.
We note here one of the main difficulties in making statement about the model as such is the degree to which the job execution path used to establish a system's CLOPS value differs from the one used here. This work leverages the Qiskit Runtime service for job execution, a service not currently used for CLOPS values. It would be interesting to re-consider the analysis presented here if it was, as we could then better understand whether the issues with the model's performance come from the model as such, or the particular execution path of the jobs.
By extrapolating the model to very large data set sizes (i.e., a number of feature vectors on the order of thousands and beyond), we find at current system speeds, processing such data sets would require a prohibitively large amount of runtime on quantum hardware. However, for smaller data set sizes, quantum kernels can be processed in a reasonable amount of time on today's systems. What's more, as noted in the Introduction, quantum advantage with quantum kernels has yet to be attained in a practical setting, meaning scaling up to larger data set sizes wouldn't be necessary right now for early users of quantum-enhanced models. That is, for small data set sizes, classical data scientists could already begin exploring quantum-enhanced, kernel based algorithms on real-world data, with circuit execution runtimes that enable interesting experimentation and work. In this sense, the speed of the hardware is not an obstacle to data scientists and other early end-users of quantum-enhanced models to begin upskilling themselves today.
It is important to note this work does not touch on the other practical or theoretical considerations necessary to substantiate a claim of quantum advantage. We make no claims – nor dare speculate – on whether improvements in job runtime would enable quantum advantage using the particular encoding circuit we studied, the particular quantum computing modality used (namely, superconducting qubits), and the particular data set considered.
The results of this work suggest 4 primary lines of additional research. First, there is a need to apply and validate the runtime model introduced here to a larger variety of circuits used for quantum machine learning. For example, ad-hoc (or “hardware-efficient") circuits are used to encode data in a way with minimal circuit depth and for which their 2-qubit gates respect the connectivity of the qubits in the hardware. Studying a larger variety of circuits would provide more evidence of the regimes of validity of the model.
Second, hardware runtime could be further reduced through parallelization of the job across multiple QPUs. If the time on 1 QPU is T, parallelizing across X > 1 QPUs could reduce the total time to approximately T/X. As more quantum systems come online, the feasibility of doing this parallelization becomes higher[Note this approach ignores any latency effects, the overhead of the software orchestrating the parallelization, and the potentiality of the parallelized jobs being sent to different queues, each with their own queue behavior.]. Further, multiple quantum kernel circuits could be executed on the same chip, assuming a sufficient number of qubits is available. This would provide another level of parallelization.
Third, one of the most straightforward ways to decrease job execution is to reduce the number of shots S. Doing so comes with the cost of increasing the shot noise of the estimated kernel values. A close collaboration with classical ML scientists and practitioners looking at kernelized ML algorithms with robust performance guarantees in the face of noisy kernel values would be fruitful, and could help the quantum ML research community understand what the practical upper bounds on S might be, both in the context of quantum-enhanced support vector machines, and other ML algorithms. For example, recent work has shown that in order for an SVM to have a generalization error at most ϵ when trained on a data set of size N, the total number of shots required per kernel entry scales as S ∼𝒪(N^8/3/ϵ^2) <cit.>. In turn, this implies a runtime – across the entire data set – of 𝒪(N^2)*𝒪(N^8/3/ϵ^2) * D_eff/C = 𝒪(N^4.67D_eff/(Cϵ^2)). This is a rather unfavorable scaling with respect to N in practice, and motivates exploring regimes wherein small amounts of training data are required, and algorithms which can tolerate relatively large amounts of error in the estimated kernel entries.
Fourth, the notion of “effective number of quantum volume layers of a circuit" should be studied in more depth. We presented one definition [Equation (<ref>)]; others are possible. In particular, the definition of D_eff introduced here was particular to quantum kernel circuits; defining one which could be applied across a wider family of circuits would be useful.
In sum, this work showed it is possible to model job execution time using a holistic measure of the speed of quantum systems. This model has four parameters: number of circuits M, number of shots S, system CLOPS C, and number of effective quantum volume layers D_eff. Although simple, we showed this model can be used – with reasonable accuracy – to predict job execution time, especially in a regime where the number of shots is large. We encourage end-users of quantum computing systems to leverage this model for analyzing the quantum-enhanced portion of their workflows, and for quantum computing applications researchers to find ways to apply it to other applications of quantum algorithms beyond quantum kernels.
§ ACKNOWLEDGEMENTS
We acknowledge prior collaborative contributions from the other IBM ORI Extreme Blue Interns for Summer 2022: Chelsea Zackey, Christopher Moppel, and Samantha Anthony. Further, we acknowledge the support of other ORI Exterme Blue mentors, including Bhanwar Gupta, Chester Karwatowski, Rinku Kanwar, Mallikarjun Motagi and Ayush Kumar. In addition, we acknowledge the support of the IBM Extreme Blue program, as well as Dr. Liliana Horne of IBM's Global Chief Data Office. TLS thanks Drs. Paul Nation, Omar Shehab, and Stefan Wörner for feedback on earlier versions of this manuscript. JW thanks Fausto Palma of the IBM CIO Supply Chain and Technology Systems group for his gracious support. Finally, we acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or IBM Quantum.
§ REAL-WORLD RESULTS: METHODS AND DETAILS
This appendix describes the methods and workflows used to generate the empirical results presented in Figure <ref>. These workflows were built in a broader context of creating an end-to-end pipeline for training classical and quantum-enhanced models, leveraging state-of-the-art, cloud-based tools. In particular, the workflows were built using Kubeflow <cit.> running on the IBM Cloud Kubernetes Service, to manage the complexity of both the classical and quantum machine learning experimental workflows. Kubeflow is an open source toolkit and a de-facto standard for building, experimenting with, and deploying ML pipelines to various environments for development, testing, and production-level model serving, on containerized environments such as Red Hat OpenShift <cit.> and vanilla Kubernetes <cit.>. Within Kubeflow are Kubeflow Pipelines (KFP), which is a “platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers”. Each KFP step or component is containerized, with the ability to share and track results and associated experiment artifacts between components, while allowing independent, long-running steps to proceed in parallel.
The end-to-end Kubeflow pipeline consisted of the following steps:
* Initialization: Obtaining the latest source code binaries from Github
* Data preparation: Performing feature selection and data resizing.
* Quantum kernel generation: create the jobs needed to calculate quantum kernel values, and send them to IBM Quantum systems.
* Aggregate Qiskit Runtime job results: extract empirical runtime information and a quantum kernel matrix from the job results
* Classical kernel generation: calculate a classical kernel (RBF kernel) for the data set generated in Step 2.
* Model training and analysis: train 2 SVMs (one for each kernel matrix), and evaluate their accuracy.
An example pipeline – showing the launching of 5 independent quantum and classical kernel generation tasks – is given in Figure <ref>. A major benefit of using Kubeflow for running the experiment done in this work is the ability to parallelize the workflow across multiple splits, where each split can consist of independent data sets. In addition, pipeline runs are automated and asynchronous, on a managed cloud environment (vs., e.g., running manually on a standalone machine). As a result, a very large experiment can be split into multiple independent ones, meaning the failure of any one sub-experiment does not impact whether other sub-experiments fail. This also allows for an easy reboot/restart of the failed sub-experiments. In addition, because of the cloud-based nature of Kubeflow, long-running experiments (e.g., several hours) can be easily handled, due to the fact the orchestration of the work is handled via the cloud. Finally, the use of splits allows for more usage of Qiskit Runtime compute resources as they become available, by, e.g., sending different splits to different systems.
We now provide brief descriptions of some of the steps above.
For step 2, the real-world data set used consisted of 38 features and was constructed from long-term flash flood records and historical analyses from the following sources:
* National Oceanic and Atmospheric Administration (NOAA), for historical precipitation data
* The Weather Channel (TWC), for hourly atmospheric and precipitation data
* Multi-Resolution Land Characteristics Consortium (MRLC), for land surface data
* US Geological Survey (USGS), for regional land classification
The particular dataset used here was generated for the state of Texas at the county level and had N=2513 records. For the data preprocessing, classical principal component analysis (PCA) was used to reduce the initial 38 features to the statistically most significant 2, 3, 5, and 7 features, allowing a study of the impact on model accuracy as the number of features was changed. For the data points in Figure <ref>, the two most significant features – PrecipAmountAvg and RelativeHumidityAvg – were used.
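A minimal scikit-learn sketch of a PCA-based reduction to k components is shown below; the file and column names are illustrative assumptions, and this projection is only one plausible reading of the feature-reduction step described above.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("texas_county_flood.csv")      # hypothetical file name
y = df["flash_flood"].values                    # 1 = flash flood, 0 = no flood (assumed label)
X = df.drop(columns=["flash_flood"]).values     # the 38 engineered features

X_std = StandardScaler().fit_transform(X)       # standardise before PCA
reduced = {k: PCA(n_components=k).fit_transform(X_std) for k in (2, 3, 5, 7)}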
Because flash floods represented only 3% of the data set, caution was needed during data preparation to avoid issues that are typical of highly imbalanced datasets. When resizing the dataset from the initial N=2513 records to smaller batches of N=10, 25, 50, 75, 100, 150, and 200, the Imbalanced-learn RandomUnderSampler <cit.> was used to ensure that an appropriate representation of flash floods was maintained in the training dataset. Note that both the feature reduction and the data resizing are done each time our experimental pipelines are run, as they are computationally easy.
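The resizing can be sketched with the imbalanced-learn API as follows; the per-class counts are illustrative assumptions rather than the exact fractions used in the experiments.

from imblearn.under_sampling import RandomUnderSampler

def resize(X, y, n_records, flood_fraction=0.1):
    # Request explicit per-class counts so the flash-flood class stays represented
    n_flood = max(1, int(round(flood_fraction * n_records)))
    counts = {0: n_records - n_flood, 1: n_flood}
    rus = RandomUnderSampler(sampling_strategy=counts, random_state=0)
    return rus.fit_resample(X, y)

batches = {n: resize(reduced[2], y, n) for n in (10, 25, 50, 75, 100, 150, 200)}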
For step 3, the code used to generate jobs consisting of quantum kernel circuits was based on the open-source quantum kernel library in the Qiskit Machine Learning project <cit.>, in particular the compute_overlap, compute_circuit, and evaluate methods. These functions were modified to include calls to the Qiskit Runtime APIs to facilitate the extraction of job execution information, as well as the quantum kernel matrix itself (step 4). Jobs were run on the ibmq_auckland system, using a dedicated reservation mode made available via the IBM Quantum Platform. The ibmq_auckland system is a 27-qubit machine with a quantum volume of 64 and a CLOPS of 2400.
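For reference, the public qiskit-machine-learning API (assuming version 0.5 or later) can generate a quantum kernel matrix as sketched below; the modified compute_overlap/evaluate methods and the Qiskit Runtime job bookkeeping used in this work are not reproduced here.

from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

X_train, y_train = batches[100]                       # one of the resized splits
feature_map = ZZFeatureMap(feature_dimension=X_train.shape[1], reps=2)
qkernel = FidelityQuantumKernel(feature_map=feature_map)

# Symmetric train-train kernel matrix; on hardware, each fidelity evaluation
# becomes a circuit execution submitted through Qiskit Runtime.
K_quantum = qkernel.evaluate(x_vec=X_train)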
For step 5, the choice of RBF kernel was motivated by prior work from the authors and other collaborators <cit.>, which showed the RBF kernel yielded the best balanced accuracy and F1 score compared to other classical kernel functions and model approaches for the flash flood data set. This step is part of the pipeline since it is not computationally intensive, and it provides a classical benchmark against which to compare a quantum-enhanced classifier.
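A scikit-learn sketch of this benchmark, with both kernel matrices fed to precomputed-kernel SVMs, is shown below; the gamma value and the in-sample scoring are placeholders, since the actual experiments evaluate on held-out data.

from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score, f1_score

K_rbf = rbf_kernel(X_train, gamma=1.0)                # classical counterpart

for name, K in (("quantum", K_quantum), ("rbf", K_rbf)):
    clf = SVC(kernel="precomputed").fit(K, y_train)
    pred = clf.predict(K)                             # use K(X_test, X_train) for real evaluation
    print(name, balanced_accuracy_score(y_train, pred), f1_score(y_train, pred))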
|
http://arxiv.org/abs/2307.04782v1 | 20230710180000 | Deeper than DEEP: A Spectroscopic Survey of $z>3$ Lyman-$α$ Emitters in the Extended Groth Strip | [
"Stephanie M. Urbano Stawinski",
"M. C. Cooper",
"Steven L. Finkelstein",
"Intae Jung",
"Pablo G. Pérez-González",
"Caitlin M. Casey",
"Olivia R. Cooper",
"Nimish P. Hathi",
"Benne W. Holwerda",
"Anton M. Koekemoer",
"Vital Fernández",
"Rebecca L. Larson",
"Ray A. Lucas",
"L. Y. Aaron Yung"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Received 24 May 2023 / Accepted 30 June 2023
================================================
We present a spectroscopic survey of Lyα emitters in the Extended Groth Strip (EGS) field, targeting the regime near the Epoch of Reionization. Using Keck/DEIMOS, we observed 947 high-z candidates with photometric redshifts of 3 < z_phot < 7 and down to an H-band (HST/WFC3 F160W) magnitude limit of < 27.5. Observations were taken over the course of 8 nights, with integration times ranging from 4 to 7.8 hours. Our survey secured 137 unique redshifts, 126 of which are Lyα emitters at 2.8 < z < 6.5 with a mean redshift of z = 4.3. We provide a comprehensive redshift catalog for our targets, as well as the reduced one- and two-dimensional spectra for each object. These observations will provide an important auxiliary dataset for the JWST Director's Discretionary Early Release Science (DD-ERS) program, the Cosmic Evolution Early Release Science Survey (CEERS), which recently completed near- and mid-IR imaging and spectroscopy of galaxies in the EGS field.
galaxies:high-redshift – surveys – catalogues
§ INTRODUCTION
Deep surveys in widely studied extragalactic fields are pivotal in characterizing galaxy evolution across cosmic time. The Extended Groth Strip (EGS) is one of the leading extragalactic fields on the sky, renowned for a balance of area and depth with observations extending from X-ray to radio wavelengths <cit.>. The EGS field is centered at α = 14^h19^m00^s and δ = +52^∘48^m00^s with the bulk of deep imaging observations covering a central region of 800 square arcminutes. Its relevance in extragalactic astronomy is due in part to major surveys using a variety of instruments, including the Hubble Space Telescope (HST) through both the All-wavelength Extended Groth strip International Survey (AEGIS, ) and the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS, ). Now with the launch of JWST <cit.>, the EGS has further cemented its status as a legacy field, due predominantly to the Cosmic Evolution Early Release Science (CEERS) Survey (ERS 1345, PI: S. Finkelstein[CEERS data can be publicly accessed in MAST: https://doi.org/10.17909/z7p0-8481]), a Director's Discretionary Early Release Science (DD-ERS) program that has conducted both imaging and spectroscopy with JWST in the EGS (; Finkelstein et al. in prep).
The significant amount of existing observations and telescope time dedicated to the EGS makes supplemental spectroscopic observations increasingly powerful. Spectroscopic observations have routinely been used for confirmation or readjustment of photometric redshifts, ultimately improving the reliability of constraints derived from photometric spectral energy distribution (SED) fitting. Spectroscopy is also critical for obtaining certain spectral properties and dynamical measurements (such as emission and absorption line strengths, velocity offsets, and velocity widths). For these reasons, spectroscopic data drastically improve the implied constraints from photometry alone.
In June 2022, CEERS began imaging in the EGS using the Near Infrared Camera (NIRCam, ) and the Mid-Infrared Instrument <cit.> and continued to obtain additional photometric imaging in December 2022. The CEERS collaboration has since published the first data release of NIRCam observations <cit.> and MIRI imaging <cit.>. With the influx of photometry in the EGS field using JWST in the first year of operation, spectroscopic catalogs at high-z become particularly useful for improving inferred galaxy properties and informing future science.
Foundational spectroscopic surveys in the EGS, such as the DEEP2 and DEEP3 surveys (, see also ) have provided extensive spectroscopy within the EGS, greatly improving the inferred constraints on galaxy properties at low and intermediate redshifts. While the DEEP2 and DEEP3 surveys provide highly uniform spectra and an extremely high sampling density of secure redshifts at z ≲ 1, the EGS has trailed other extragalactic fields, such as COSMOS, GOODS-N, and GOODS-S, with respect to spectroscopic coverage at higher z <cit.>. Over the past decade, however, the 3D-HST survey <cit.> as well as the MOSFIRE Deep Evolution Field survey (MOSDEF, ) both began to push spectroscopic studies to higher z in the EGS field. 3D-HST used HST WFC3-IR/G141 grism spectroscopy to measure ∼ 3000 secure grism redshifts, including ∼ 500 galaxies at 2 < z < 3 and another 26 at 3 < z < 3.5. In addition, MOSDEF targeted ∼ 1500 galaxies at 1.37 < z < 3.80 from the EGS, GOODS-N, and COSMOS fields. Despite these more recent near-IR spectroscopic campaigns along with smaller efforts to study very high-z sources <cit.>, the EGS is still lacking in spectral coverage for galaxies at z > 4, an important epoch with respect to the recently-completed CEERS JWST photometric observations. CEERS spectroscopic observations using JWST NIRSpec and NIRcam are poised to greatly increase the publicly-available spectroscopy of high-z galaxies in the EGS <cit.>, yielding redshifts for hundreds of sources at a range of z, including at z ∼ 8-10 <cit.>.
To further supplement the recent deep, near- and mid-IR imaging data in the EGS from JWST, we undertook spectroscopic observations of intermediate- and high-z sources using the DEep Imaging Multi-Object Spectrograph (DEIMOS; ) on the Keck II telescope. We present spectroscopic observations of 947 targets with 137 unique spectroscopically confirmed redshifts. The majority (126) of these objects are Lyα emitters at 2.8 < z < 6.5, increasing the spectroscopic coverage of high-z galaxies in the EGS. In Sections <ref> and <ref>, we describe our target selection and observations for the survey, respectively. We present the Keck/DEIMOS redshift catalog in Section <ref>, along with subsequent analysis. Finally, in Section <ref>, we conclude with a discussion of the potential use of our survey and our recent collaborations with ongoing, ground-based high-z surveys in the EGS.
§ TARGET SELECTION AND SLIT MASK DESIGN
As its name suggests, the EGS field spans an extended, narrow area on the sky. To efficiently explore Lyα emission from z = 3-6 over the entirety of the field as probed by the CANDELS HST imaging, we required an optical spectrograph with a similarly broad field-of-view (FOV) and capable of a high level of multiplexing. The Keck/DEIMOS spectrograph is particularly well-suited due to its large FOV that matches the shape of the CANDELS footprint in the EGS as well as its ability to observe ≳ 140 targets simultaneously.
Spectroscopic targets were selected from the photometric catalog of <cit.>, based upon the CANDELS HST and Spitzer IRAC observations in the EGS. Primary targets were selected to be at 3 < z_ phot < 7 (derived from ; ) and H < 27.5, with priority given to brighter sources. We adopted this magnitude limit to avoid potentially spurious sources at the detection limit of the existing HST imaging, while the photometric redshift limits were chosen to match the lowest and highest z at which Lyman-α would be detected given our instrument configuration (see <ref>). Slit masks were filled with any additional sources with z_ phot > 3 and H < 27.5 as well as (preferentially brighter) sources at z_ phot < 3 without a secure spectroscopic redshift from the DEEP2 and DEEP3 surveys. In total, the target population included 947 unique sources, with the vast majority (98%) selected to be at z_ phot > 3 from the <cit.> photo-z catalog (including 92% of targets at 3 < z_ phot < 6). Figure <ref> shows the distribution of our targeted sources as a function of z_ phot and H-band (F160W) magnitude, highlighting those that yielded a secure spectroscopic redshift (see <ref>).
We tiled the EGS with a total of 8 slitmasks, located at 4 overlapping positions along the strip, such that sources had at least two opportunities to be placed on a mask. Table <ref> summarizes the position and number of targets for each slitmask, along with the date of observation and total exposure time. Across all masks, slit widths were fixed to 1^'', with slit gaps measuring 0.5'', so as to optimize the number of potential targets observed on a given mask. Slit lengths were allowed to vary, above a minimum slit length of 4^'', such that slits were sufficiently long to avoid any significant loss in redshift success (see results from DEEP2/DEEP3, ). Each slitmask includes roughly 140-150 sources per mask, with the final targeted sample including 947 unique sources down to the magnitude limit of H < 27.5. Across the 8 masks, 173 sources are targeted more than once, with 22 of these repeat targets placed on more than two masks. We describe how we handle repeat redshift measurements later in <ref>. Figure <ref> shows the distribution of targets with respect to the CANDELS HST/WFC3 imaging footprint as well as the CEERS JWST NIRCam, MIRI, and NIRSpec pointings. Along the southeast edge of the strip, overlap with the CEERS NIRCam fields is sub-optimal due to a lack of bright guide stars, though the Keck/DEIMOS spectroscopy covers the central portion of the – at that time – planned CEERS observations.
§ DEIMOS OBSERVATIONS AND REDUCTIONS
As detailed in Table <ref>, spectroscopic observations were completed during June 2020 and 2021, prior to the launch of JWST in late 2021. With Keck/DEIMOS, we used the 600 lines mm^-1 grating blazed at 7500 Å and tilted to a central wavelength of 7200 Å, with the GG455 order-blocking filter employed. This spectroscopic setup provides an approximate spectral coverage of ∼ 4500–9900 Å, depending on the slit placement on the particular mask. The spectral resolution (FWHM) for the 600g grating on DEIMOS is ∼ 3.5 Å <cit.>, with a dispersion of 0.65 Å per pixel.
Each individual exposure was typically ∼ 1800 sec in length, with a minimum of 7 exposures per mask and no dithering applied between exposures. The total integration times achieved for each mask are listed in Table <ref>, ranging from ≲ 4 hours to as much as ∼ 7.8 hours. Calibrations for each mask included three internal quartz lamp flat-field frames and an arc lamp spectrum (using Kr, Ar, Ne, and Xe lamps). During observations, the DEIMOS flexure compensation system was utilized to ensure that the frame-to-frame flexure throughout the night (for both calibration and science images) differed by ≲ ± 0.25 pixels.
Observing conditions varied throughout the survey. In general, seeing ranged from roughly 0.6'' to 1'' with variable cloud cover. The DEIMOS detector comprises 8 CCDs, with each object spectrum spanning two chips (blue and red). One of the chips (CCD5) was inoperative during the 2020 observations and had an elevated level of read noise during the 2021 observations, resulting in decreased sensitivity (or a total loss of spectral coverage) at red wavelengths for approximately ∼ 25% of the slits per mask. The number of targets per mask for which the resulting spectra do not fall on CCD5 (i.e. unaffected by this issue) is listed in Table <ref>.
Once the DEIMOS observations were completed, we reduced the entire dataset using the DEEP2/DEEP3 DEIMOS data reduction pipeline [<https://sites.uci.edu/spec2d/>] <cit.>. Spectroscopic redshifts were measured using a custom template fitter that incorporates both an emission-line galaxy template (included to find redshifts for low-z interlopers) and an asymmetric Gaussian profile to probe a single Lyα line where no other emission lines were detected. Examples of the best-fit templates for three high-z Lyα emitters are shown in Figure <ref>. The uncertainty reported in the redshift measurement is derived from the 1σ error on the location of the peak Lyα emission determined from the fit. The quality of each redshift was visually inspected and given a quality code (Q) following the previous classification from the DEEP2/DEEP3 surveys. A quality code of Q = -2 indicates major detector/reduction issues, rendering at least half of the spectrum unusable. Out of the 276 total targets assigned Q = -2, 245 targets (∼ 89 %) were placed on CCD5. Slits that had no detected emission or continuum are assigned a quality code of Q = 1. Upon visual inspection, targets with unclear or low-quality redshift measurements received a quality code of Q = 2. These objects would require follow-up analysis for redshift confirmation and are thus not reported in our final sample. Secure redshifts have a quality flag of either Q = 3 or Q = 4. Quality Q = 4 objects differ from Q = 3 by, upon visual inspection, having clear characteristics of an asymmetric Lyα profile (or multiple emission lines in the case of low-z interlopers).
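The custom template fitter itself is not public, but the asymmetric Gaussian stage can be illustrated with the following scipy sketch; the parameterisation (independent blue and red widths plus a constant continuum) is an assumption about the profile, not the pipeline's exact implementation.

import numpy as np
from scipy.optimize import curve_fit

LYA_REST = 1215.67   # rest-frame Ly-alpha wavelength in Angstrom

def asym_gauss(lam, amp, lam_peak, sig_blue, sig_red, cont):
    sigma = np.where(lam < lam_peak, sig_blue, sig_red)
    return cont + amp * np.exp(-0.5 * ((lam - lam_peak) / sigma) ** 2)

# wave, flux, err: the extracted 1D spectrum and its uncertainty
p0 = [flux.max(), wave[np.argmax(flux)], 1.0, 3.0, 0.0]
popt, pcov = curve_fit(asym_gauss, wave, flux, sigma=err, p0=p0)
z_lya = popt[1] / LYA_REST - 1.0
z_err = np.sqrt(pcov[1, 1]) / LYA_REST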
§ REDSHIFT CATALOG
We present the spectroscopic measurements from our Keck/DEIMOS observations in Table <ref> (the full version is available on the electronic version of the Journal). In summary, from 947 unique targets we were able to secure a spectroscopic redshift of high quality (Q = 3, 4) for 137 galaxies. Of these, 126 are Lyα emitters at 2.8 < z < 6.5 (yielding a 13 % success rate) with a mean redshift of z = 4.3. Figure <ref> shows the full redshift distribution for objects with secure spectroscopic measurements. The sample includes 11 low-redshift galaxies (z < 1.2; all but one were originally targeted as high-z candidates) along with four galaxies at z > 6 that probe the end of the Epoch of Reionization (EOR). The most recent version of our catalog, including the 1D and 2D spectra, can be downloaded directly from the survey webpage.[<https://sstawins.github.io/deeper_than_deep/>]
§.§ Sources with Multiple Observations
As mentioned in <ref>, 173 targets were observed more than once. However, due to observing conditions, mask placement, and signal-to-noise from mask to mask, not all of these repeat observations resulted in multiple secure redshift measurements. Of the 173 sources with multiple observations, only 35 galaxies have two or more independent, secure (Q = 3,4) redshift measurements.
The value of the redshift measured in repeated observations of a given galaxy can be affected by variation in the placement of the slit with respect to the galaxy as well as variation in the resulting S/N of the observed spectrum. To assess the uncertainty of our redshift measurements, we utilize the deviation in redshift for galaxies with multiple z_ spec measurements. We limit this analysis to the 30 sources with repeated observations yielding a secure redshift (Q = 3,4) inferred from Lyα emission (i.e. excluding the 5 low-z sources with multiple z_ spec measurements). We fit a normalized probability density function to the differences in the measured redshift (Δ z), yielding a one-sided 1σ standard deviation of 0.0013 (∼ 389 km s^-1) and a two-sided standard deviation of 0.0007 (∼ 209 km s^-1). This uncertainty is 1-2 dex larger than the uncertainty associated with the fits to the observed emission line in a single observation, as described in <ref>. In general, this analysis suggests that the redshifts (at z > 2) reported in our catalog are accurate to ∼ 1 × 10^-3 (∼ 300 km s^-1). For low-z sources (z < 2), where redshifts are largely determined by fits to multiple emission lines including [OII], Hδ, Hβ, [OIII], and Hα, the typical redshift uncertainty is assumed to be ∼60 km s^-1, similar to that of the DEEP3 survey which utilized the same instrument configuration and slit width (, see also ).
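For illustration, this repeat-observation scatter can be estimated with a few lines of numpy/scipy; pairs_z1 and pairs_z2 denote assumed arrays holding the two secure redshifts of each repeated source.

import numpy as np
from scipy.stats import norm

c_kms = 2.998e5
dz = pairs_z1 - pairs_z2               # differences between repeated measurements
mu, sigma = norm.fit(dz)               # Gaussian fit to the Delta-z distribution
print(sigma, sigma * c_kms)            # ~1e-3 in z, i.e. a few hundred km/s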
In addition to enabling a study of spectroscopic redshift precision, repeated observations of sources within our survey allow for co-adding observations to produce higher signal-to-noise spectra. This is particularly interesting for sources that did not yield a secure redshift based upon any individual observation. One such source is 37653, which was targeted twice with Keck/DEIMOS, yielding a lower-quality redshift (Q=1,2) for each observation. To increase the signal-to-noise, we co-added the reduced, sky-subtracted 2D spectra, producing a combined observation with an effective exposure time of 7.7 hours. We then extracted a 1D spectrum, using a boxcar extraction width of ± 3 pixels (∼ 0.7''). Finally, we fit the resulting 1D spectrum, using the same asymmetric Gaussian fit described in <ref>, to find a best-fit, secure (Q=3) redshift of z_spec = 4.8998. The resulting spectroscopic redshift is in excellent agreement with the photo-z estimated from existing ground-based and HST imaging <cit.>. This source is also one of a small number of high-redshift galaxies – with a confirmed spectroscopic redshift – detected in the initial MIRI imaging for the CEERS survey <cit.>. Recent NIRSpec prism observations (in December 2022) as part of the CEERS survey have confirmed our spectroscopic redshift (with z_NIRSpec = 4.89651, , and see further discussion in <ref>). By co-adding repeated observations for other sources in our target sample, we were able to measure secure redshifts for an additional 4 galaxies (i.e. 5 in total, including IDs 37653, 30014, 27862, 20237, and 24687).
§.§ Catalog Comparison to Literature
To put our catalog into context with a large photometric catalog in the EGS, we match our sample of spectroscopically confirmed galaxies to the <cit.> photometric catalog. We match the two catalogs by separation on the sky, requiring a maximum separation of 0.2''. Out of 137 total galaxies with secure redshifts from this work, 129 are found in the <cit.> catalog. Table <ref> includes the object ID from <cit.> for a given object. We compare the expected median photometric redshift reported in <cit.> with the spectroscopic redshift from this work in Figure <ref>. For comparison, we also plot our spectroscopic redshifts versus the photometric redshifts from our target catalog <cit.> in Figure <ref>. Overall, there is good agreement between our spectroscopic redshifts and the photometric redshifts from both catalogs. We find 71.3% (92 galaxies) of photometric redshifts from <cit.> are within Δ z < 0.05 (1+z), and only 12.4% (16 galaxies) exceed a maximum difference of Δ z > 0.15 (1+z). The median offset of these photometric redshifts (excluding significant outliers, Δ z > 0.15 (1+z)) from our spectroscopic measurements is Δ z/(1+z) = 0.013 with a 1σ standard deviation of 0.03. This is similar to the results from the <cit.> catalog, for which 65.1% of the photometric redshifts are within Δ z < 0.05 (1+z) of the spectroscopic redshift, and 16.3% are significant outliers outside Δ z > 0.15 (1+z). We find the median value of Δ z/(1+z) to be 0.009 with a 1σ standard deviation of 0.04 excluding significant outliers.
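A sketch of this 0.2'' cross-match using astropy is given below; the table and column names are assumptions about the catalog files.

import astropy.units as u
from astropy.coordinates import SkyCoord

spec = SkyCoord(ra=spec_tab["ra"] * u.deg, dec=spec_tab["dec"] * u.deg)
phot = SkyCoord(ra=phot_tab["ra"] * u.deg, dec=phot_tab["dec"] * u.deg)

idx, d2d, _ = spec.match_to_catalog_sky(phot)   # nearest photometric counterpart
matched = d2d < 0.2 * u.arcsec                  # 129 of 137 sources match in this work
dz = (phot_tab["z_phot"][idx] - spec_tab["z_spec"]) / (1.0 + spec_tab["z_spec"])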
Five galaxies with secure redshifts from this work (IDs 69719, 47173, 24711, 200415, 200814) correspond to sources with previously published spectroscopic redshifts. We find good agreement between our measured redshifts and the published values in all but one case. Two galaxies in our sample, ID 69719 at z = 3.438 and 47173 at z = 3.304, were also observed (in the near-IR) as part of the MOSDEF survey (ID_MOSDEF 30847 and 13470, ), with measured redshifts of z = 3.435 and z = 3.302, respectively. One low-redshift emission line galaxy in this work (ID 24711, z = 0.2948) was also included in DEEP2 Data Release 4 (DR4, ID_DR4 12028700; ) with z = 0.2955. Finally, two objects – ID 200415 at z = 0.4639 and ID 200814 at z = 0.6426 – were included in the spectroscopic catalog from the 3D-HST survey (ID_3D-HST 21636 and 29958, ). Our measured redshift for the latter source (ID 200814) is in good agreement with the measurement of z = 0.6738 from the lower-resolution 3D-HST grism spectrum. For ID 200415, however, we find a significant difference in redshift, relative to the 3D-HST measurement of z = 0.73945. The WFC3 G141 grism spectrum for this object includes an emission feature towards the blue (near ∼ 1.14 μ m) that is identified as Hα (yielding z = 0.73945). This portion of the 3D-HST grism spectrum, however, suffers from a high level of contamination associated with a nearby bright source, such that the emission feature identified as Hα is likely spurious. In our Keck/DEIMOS spectrum, we detect a multitude of emission features, including [OII], Hβ, and [OIII] — as well as Hα and [NII].
With the exception of ID 200415, the difference in the measured spectroscopic redshift between that of our survey and previously published values for these 4 sources ranges from Δ z = 0.0007-0.03. These small differences in the measured z are consistent with the differences in spectral resolution, rest-frame spectral features sampled, and potential variation in slit placement between our observations and those of the MOSDEF, DEEP2, and 3D-HST surveys.
During December 2022, the CEERS team observed the EGS with NIRSpec. Two galaxies with a measured spectrum (ID 10496 and 14628 from ) are also found in our catalog (ID 47173 and 37653, respectively). ID 47173 was observed twice in 2020 and 2021 with DEIMOS, resulting in two redshift measurements (z_spec = 3.3038 with Q=3 and z_spec = 3.3056 with Q=2). NIRSpec spectroscopy is consistent with the Q=3 measured redshift by Δ z = 0.002, with z_NIRSpec = 3.30186 (MSA ID 11699, ). The difference in redshift in part represents the offset between the Lyα emission (Δ v_Lyα) and the associated metal lines observed in NIRSpec spectroscopy, which better trace the systemic redshift of the galaxy. In this case, the observed offset in Lyα emission for this galaxy is 580 km s^-1. This is within the range of Δ v_Lyα found by <cit.> around z ∼ 3.5 star-forming galaxies, with offsets up to 800 km s^-1 and an average offset of ⟨Δ v_Lyα⟩ = 358 km s^-1.
For ID 37653, both DEIMOS observations yielded low-quality (Q = 2) redshift measurements. As discussed previously in <ref>, we use the co-added spectrum of the two individual DEIMOS observations to measure a secure redshift for this galaxy. We find the spectroscopic redshift to be z_spec = 4.8998, which agrees with the NIRSpec observations at z_NIRSpec = 4.89651 (MSA ID 707, ). This galaxy also lies close (∼ 1-2 comoving Mpc, in projection) to a recently identified overdense region at 4.5 < z < 5.5 in the EGS <cit.>. We find 7 other galaxies within this region, all with 4.8 < z_spec < 5.0. Studies of high-redshift proto-clusters using the Millennium cosmological simulations show that such clusters at z ∼ 5 could be as large as 5-20 cMpc <cit.>. Taken together, the redshift and the location of 37653 could indicate that this galaxy is a member of this overdensity at z ∼ 5.
Using the redshift given by NIRSpec for 37653, we find that the offset of the Lyα emission is large (Δ v_Lyα = 970 km s^-1) compared to the offsets found by <cit.> for Lyα-emitting galaxies at z = 4.4-5.7 (⟨Δ v_Lyα⟩ = 377 km s^-1 with a scatter of 329 km s^-1). In that work, none of the Lyα offsets exceed ∼800 km s^-1 for z = 4.4-5.7. More work is needed to identify the cause of this high-velocity offset.
§.§ Redshift Success
Lastly, in this section, we discuss the success rate of our observations and compare it to the preliminary selection criteria and other physical parameters. In Figure <ref>, we present the redshift success rate for our survey as a function of H-band apparent magnitude, z_phot, and absolute UV magnitude, where the redshift success rate is defined as the number of sources with a secure (Q = 3,4) redshift divided by the total number of targets observed. For this analysis, we exclude objects with Q = -2 as they are effectively unobserved.
We first calculate the redshift success rate as a function of H-band magnitude and photo-z from our target sample <cit.>. In general, the redshift success rate is ∼ 15% for targets with H ≲ 25. Although the rate increases slightly, up to ∼ 20%, for fainter targets, given the 1σ uncertainty in each bin we find no significant dependence of redshift success on apparent H-band magnitude. This implies that apparent H-band magnitude is not a biased tracer of Lyα emission over this redshift range.
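For illustration, a binned success rate with a simple binomial 1σ uncertainty can be computed as follows; the bin edges, array names, and the uncertainty estimator are assumptions, not necessarily those used for Figure <ref>.

import numpy as np

keep = q_flag != -2                              # Q = -2 targets are effectively unobserved
H, secure = H_mag[keep], np.isin(q_flag[keep], (3, 4))
edges = np.arange(23.0, 28.0, 0.5)               # H-band magnitude bins
which = np.digitize(H, edges)

for i in range(1, len(edges)):
    in_bin = which == i
    n = in_bin.sum()
    if n == 0:
        continue
    rate = secure[in_bin].mean()
    err = np.sqrt(rate * (1.0 - rate) / n)       # binomial 1-sigma estimate
    print(edges[i - 1], rate, err)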
The redshift success does vary with z_phot across our targeted redshift range, mostly due to observational constraints. Near the effective low-z detection limit for Lyα emission given our DEIMOS setup (z ∼ 3.1), the success rate reaches as low as 6.8%. Meanwhile, at z ≳ 5.5, we find a decrease in the z success rate, dropping to 2.1% for z_phot > 6, likely due to the increase in sky emission at redder wavelengths. Low-redshift sources (z < 2) have a low success rate (9.1%); however, the uncertainty is larger due to the smaller total number of targets.
The average redshift success rate is relatively flat from 3.5 < z_phot < 5.5. For this redshift range, the average success rate is 24.1% and the 1σ standard deviation of the distribution is ∼ 1.5 %. For z_phot > 3, we are exclusively probing Lyα emitters, hence the redshift success rate could be related to a Lyα detection fraction (f_Lyα) as measured in previous works <cit.>. However, here we are not taking into account the variability in instrument sensitivity as a function of wavelength, which would directly translate into a corresponding variation in the sensitivity to Lyα detection as a function of z. Therefore, more work must be done to make a direct comparison between our measured redshift success rate and existing measurements of the Lyα detection fraction. Overall, this analysis shows our survey had excellent success around our desired redshift range.
Using the <cit.> photometric catalog, we can also analyze the redshift success rate as a function of other physical parameters such as stellar mass, SFR, rest-frame U-V color, and absolute UV magnitude (M_UV). We find no significant dependence of the redshift success rate on stellar mass and SFR. Conversely, the redshift success rate does strongly depend on rest-frame U-V color and absolute UV magnitude. For rest-frame U-V color, we find that the success rate increases towards bluer colors, reaching 38% for U-V ∼ -0.1 and decreasing to ∼ 5% at U-V ≳ 1.2. As shown in the right-most panel of Figure <ref>, we also find that the redshift success rate increases towards brighter M_UV magnitudes, reaching as high as 34% for M_UV∼ -21. Although previous work by <cit.> found the Lyα fraction at z = 3-6 to be higher at fainter M_UV, we again caution that our redshift success rate is not directly comparable to a Lyα detection fraction. In their previous work, <cit.> calculated the Lyα detection fraction for sources with equivalent widths (EW) >50Å. While work to measure Lyα EWs for our sample is ongoing, preliminary measurements show we are probing Lyα emitters below 50 Å for M_UV magnitudes brighter than -18.
§ CONCLUSION
In this work, we targeted 947 high-redshift galaxies (z_phot > 3) in the EGS with Keck/DEIMOS. In total, we measured spectroscopic redshifts for 137 galaxies, including 126 confirmed Lyα emitters at 2.8 < z < 6.5. This catalog significantly expands the number of spectroscopically confirmed galaxies in the EGS field at z_spec > 3.
Overall, we find good agreement between our spectroscopic redshifts and the photometric catalogs in the literature <cit.>: 65.1% and 71.3% of the matched sources, respectively, have small redshift differences (Δ z / (1+z) < 0.05). We also find 4 galaxies with spectroscopic redshifts from other surveys in the literature (i.e. MOSDEF, 3D-HST, and DEEP2), with differences in spectroscopic redshift ranging from Δ z = 0.0007 to 0.03.
This work comes at an opportune time, given the recently completed observations from the JWST ERS program CEERS. With the influx of photometric data from JWST, it becomes increasingly useful to have spectroscopic measurements to constrain photometric SED fits. Furthermore, spectroscopic redshifts are more reliable than those derived from photometric imaging, allowing improved target selection for future observations.
In December 2022, CEERS observed high-redshift galaxies, detecting faint emission from galaxies out to z = 4-6 using the NIRSpec multi-object spectrograph. Two galaxies targeted during these NIRSpec observations were also found in this catalog, with consistent redshifts. Emission lines from these JWST observations will allow for analysis of gas conditions in galaxies at much higher redshift than previously studied. Together with our DEIMOS observations, we can start to measure the dependence of Lyα emission on important galaxy conditions at z > 4. As demonstrated in <ref>, these measurements together could also lead to further characterization of Lyα velocity offsets for galaxies at z = 3-6.
Given the importance of spectroscopic measurements of galaxies at z > 4 in the EGS field, we are currently working on additional ground-based observations in collaboration with the Webb Epoch of Reionization Lyman-alpha Survey (WERLS; PI: C. Casey and J. Kartaltepe). The ongoing program, which was allocated time in 2022A/B and 2023A/B, is targeting the EGS field with Keck/LRIS and Keck/MOSFIRE with the goal of detecting Lyα emitters at the latter half of the EOR (5.5 < z < 8; see the preliminary catalogs from O. Cooper et al. in prep for the Keck/MOSFIRE observations and Urbano Stawinski et al. in prep for the Keck/LRIS observations). This future work will expand upon the efforts of this paper and significantly add to the known z > 4 galaxies in the EGS.
§ ACKNOWLEDGEMENTS
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. MCC and SMUS acknowledge support from the National Science Foundation through grant AST-1815475. PGP-G acknowledges support from Spanish Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033 through grant PGC2018-093499-B-I00.
§ DATA AVAILABILITY
The data underlying this article are available in the survey GitHub webpage: <https://sstawins.github.io/deeper_than_deep/>.
|
http://arxiv.org/abs/2307.04584v1 | 20230710142620 | Porous CrO$_2$: a ferromagnetic half-metallic member in sparse hollandite oxide family | [
"Sujoy Datta"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected], [email protected]
Department of Physics, University of Toronto, 60 Saint George Street, Toronto, Ontario M5S 1A7, Canada
A stable polymorph of CrO_2 is predicted using the PBE+U method. The porous material is isostructural with α-MnO_2, making it the second transition metal oxide in the sparse hollandite group of materials. However, unlike the anti-ferromagnetic semiconducting α-MnO_2, it is found to be a ferromagnetic half-metal. At the Fermi level, the hole pocket has an ample contribution from the O-2p orbital, whereas the electron pocket is contributed mostly by Cr-3d_xy and Cr-3d_x^2-y^2. A combination of negative charge transfer through orbital mixing and an extended anti-bonding state near the Fermi level is responsible for the half-metallic ferromagnetic character of the structure. A comparative study of the rutile and hollandite CrO_2 and hollandite MnO_2 structures delineates the interplay between structural, electronic and magnetic properties. The material shows a robust magnetic character under hydrostatic pressure, and the band topology is conserved under uniaxial strain. Moderate magneto-crystalline anisotropy is observed, and it shows a correspondence with the anisotropy of the elastic constants.
Porous CrO_2: a ferromagnetic half-metallic member in sparse hollandite oxide family
Sujoy Datta
August 12, 2023
====================================================================================
§ INTRODUCTION
In the field of materials science, studies of transition metal oxides (TMOs) are prolific. However, TMOs never cease to amaze researchers with their versatile character. Even the burgeoning development of first-principles electronic-structure schemes faces the challenge of describing the underlying physics of TMOs. Along with dynamical mean-field theory (DMFT), the inclusion of Hubbard parameters is among the most successful approaches.
Though TMOs are found to form solid-state structures of various symmetries, some structures are known for their uniqueness. The porous layered hollandite structure of α-MnO_2 is such an example <cit.>. The 2×2 tunnel within this structure can accommodate additional atomic species (e.g., Pb^2+, Ba^2+, K^+, etc.) during synthesis <cit.>. Such additional species can tune the electronic and magnetic properties as well. A good number of publications in this field are a testament to the importance of studies on this structure <cit.>. However, to date, there is no report of any other TMO exhibiting a similar crystal structure.
In the transition metal family, chromium is one of a kind. Even at room temperature, it is found to be antiferromagnetic, making it the only single-element anti-ferromagnetic (AFM) solid <cit.>. It shows valencies in the range +1 to +6 and oxidizes to form CrO, Cr_2O_3, CrO_2 and CrO_5. Experimentally, two phases of chromium dioxide have been characterised, the rutile type (namely α-, or r-) and the orthorhombic CaCl_2 type (namely β-, or o-). A second-order phase transition from the rutile to the CaCl_2-type phase is observed at 12-17 GPa <cit.>. Some other dynamically stable phases have also been proposed theoretically, though a hollandite structure has not been predicted yet <cit.>. Also, the propensity of the porous structure to accommodate other ions may hinder the synthesis of the material.
Transition metals are characterised by the electrons in their d orbitals. In its 4+ valence state, the Cr ion has two 3d electrons and a vacant 4s shell. These two 3d electrons reside in two t_2g orbitals. Strong on-site correlation between these two electrons should result in a Mott-type insulating nature; however, this is not the case. Rutile-type chromium dioxide is a ferromagnetic half-metal in its ground state <cit.>. An explanation of the half-metallic ferromagnetic character has been provided in terms of the double-exchange model <cit.>. This makes r-CrO_2 a negative charge-transfer-gap material in the Zaanen-Sawatzky-Allen (ZSA) scheme <cit.>. The O-2p bands crossing the Fermi energy act as a charge (electron/hole) reservoir that nullifies the strong electron-electron correlation between the Cr-3d electrons through hybridisation. Recently, the topological character of the rutile and orthorhombic phases has been investigated, and it has been shown that type-I and type-II Weyl fermions can emerge in these phases of chromium dioxide <cit.>.
With ongoing technological advancement, energy storage remains a bottleneck yet to be cleared. Porous structures are efficacious candidates for storing lithium, sodium, zinc, or other ions relevant to battery materials <cit.>. Both chromium and oxygen are abundant in nature, so h-CrO_2 can be a good alternative in this field. Furthermore, half-metallic ferromagnets play a critical role in modern spintronic devices, from magnetic sensors and spin valves to computer hardware components such as magneto-resistive random-access memory (MRAM) and the read heads of magnetic hard drives <cit.>.
In view of the rare ferromagnetic nature of CrO_2 within the TMO family, an investigation of the structures and their local bonding characteristics is indispensable. The aim of this article is to use the latest reliable theoretical approaches to demonstrate the mechanical, electronic and magnetic character of the proposed hollandite polymorph. To analyse and interpret the complicated electronic structures of TMOs, the introduction of Hubbard terms for both on-site and off-site electron-electron interactions has proved to be an efficacious tool <cit.>.
A detailed side-by-side investigation of the three materials, the already synthesised r-CrO_2 and h-MnO_2 and the proposed h-CrO_2, can bridge the gap in theoretical understanding of such materials with rare structures and rare electronic and magnetic properties. Such trustworthy theoretical predictions will give experimentalists a better handle on choosing materials for a desired application out of a plethora of possibilities.
§ COMPUTATIONAL DETAILS
We have used the Vienna Ab-initio Simulation Package (VASP) for the density functional theory (DFT) calculations <cit.>. Projector augmented-wave (PAW) pseudo-potentials with the Perdew-Burke-Ernzerhof (PBE) <cit.> and PBE for solids (PBEsol) <cit.> generalised gradient approximation (GGA) exchange-correlation (xc) functionals have been used. The optimised structures are found through ionic and volume relaxations using the GGA and GGA+U methods, starting from different magnetic configurations. We have set the threshold on the maximum force as 10^-5 Ry./atom and the pressure threshold as 10^-5 Kbar/cell. The convergence criterion for the energy and charge densities has been set as 10^-8 Ry. We have chosen a kinetic-energy cut-off of 520 eV. Fine reciprocal-space grids of dimensions 8×8×8, 5×5×8, and 5×5×8 are used for the hollandite, rutile and orthorhombic structures, respectively.
The electron configurations of Cr and O have been taken as [Ar]4s^2 3d^4 and [He]2s^2 2p^4, respectively. As the 3d electrons of transition metals are more strongly correlated than GGA can describe, Hubbard U and Hund J terms have been used for a proper description of the strong electron-electron correlation. While U accounts for the on-site Coulomb repulsion, J takes care of the exchange interaction between electrons in different orbitals.
Over the years, several non-empirical approaches have been proposed to estimate the Hubbard parameters, such as constrained DFT, the constrained random phase approximation (cDFT/cRPA) and the linear-response formulation <cit.>. These terms for r-CrO_2 have been calculated with the constrained screening method by Korotin <cit.>. We have used these values for CrO_2: U=3.0 eV and J=0.87 eV. Recently, the evaluation of the Hubbard parameters using self-consistent linear-response theory within the framework of DFPT has been introduced <cit.>. Hitherto, it has successfully predicted the electronic structure of diverse materials using on-site U and inter-site V parameters <cit.>. For the calculation of U and V, the Quantum Espresso (QE) package is used <cit.>. For MnO_2, U=5.87 eV is used <cit.>.
The Vesta package has been utilised to simulate the X-ray diffraction (XRD) pattern for Cu Kα radiation <cit.>. For pre- and post-processing, Vaspkit and ElATools are utilised <cit.>. The chemical bonding analysis has been done using the Lobster package for the PAW basis <cit.>.
§ STRUCTURE AND MECHANICAL PROPERTIES
The hollandite <cit.> CrO_2 adopts a body-centred tetragonal lattice with I4/m (82) crystal symmetry. The optimised lattice constants are found to be a=b=9.992Å and c=2.702Å, calculated using PBE in a spin-unpolarised calculation. As depicted in Fig.<ref>(a-c), each Cr atom coordinates with six neighbouring oxygen atoms, forming edge-sharing CrO_6 octahedra. Such an MO_6-type unit is also found in the rutile phase and is a common building block for many covalently bonded hard materials <cit.>. A 2×2 tunnel is formed in between the CrO_6 octahedra. Including the Hubbard terms U=3.0 eV and J=0.87 eV with the PBE exchange-correlation functional, the optimised lattice parameters are calculated as a=b=9.880Å and c=2.978Å <cit.>; the unpolarised calculation thus underestimates the volume by 7.23%. Using the same Hubbard parameters and PBEsol, the lattice constants are found to be a=b=9.767Å and c=2.928Å.
In Fig.<ref>(a-c), the conventional unit cell of the h-CrO_2 crystal, containing eight formula units, is shown from different angles. The primitive cell used for the electronic structure calculations is presented in Fig.<ref>(d) and contains four formula units. For experimental identification of any crystal structure, the XRD pattern is key, and in Fig.<ref>(e) we provide the simulated XRD pattern for Cu Kα radiation. Reflections from the (1,1,0), (1,1,1), (2,2,1) and (1,0,-1) crystallographic planes create the most prominent sharp peaks, which represent the signature of this particular structure.
Experimentally, two polymorphs of CrO_2 have been prepared so far, the rutile-type α- (or r-) and the orthorhombic β- (or o-) phases. The lattice constants for both structures predicted using PBEsol+U agree with the experimental findings (see <ref>).
The hollandite structure of MnO_2 is known as α-MnO_2. The calculated lattice constants for the conventional unit cell using PBEsol+U, a=b=9.787Å and c=2.903Å, match well with the reported experimental values a=b=9.750Å and c=2.861Å <cit.>.
Elastic properties: Within the elastic limit, according to Hooke's law, the stress (σ_i) and external strain (e_j) follow a linear relationship:
σ_i = ∑_j=1^6 C_ij e_j
where C_ij is the elastic stiffness tensor. The orthorhombic system has nine independent components in the 6×6 matrix. The rutile structure, belonging to the type-I tetragonal system, possesses six independent components, while the hollandite structure, which falls under the type-II tetragonal class, possesses seven. Besides the stress-strain relationship, the elastic tensor can also be calculated from the total energy (E) using the harmonic approximation as:
C_ij=1/V_0∂^2 E/∂ e_i ∂ e_j
where V_0 is the unstrained volume. The values of C_ij are tabulated in Table <ref>.
While all the other structures have been experimentally reported, h-CrO_2 has not been synthesised yet; hence, a mechanical stability check is indispensable. Following the Born stability criteria extended to different crystal classes <cit.>, the necessary and sufficient conditions for mechanical stability (Eq. <ref>), listed below, are satisfied by all the structures:
Orthorhombic: C_11, C_44, C_55, C_66 > 0; C_11C_22 > C_12^2
C_11C_22C_33 + 2C_12C_13C_23 - C_11C_23^2 - C_22C_13^2 - C_33C_12^2 > 0
Tetragonal-I: C_11> |C_12|, C_44, C_66 > 0
2C_13^2 < C_33(C_11+C_12)
Tetragonal-II: C_11> |C_12|; C_44 > 0
2C_13^2 < C_33(C_11+C_12); 2C_16^2 < C_66(C_11-C_12)
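The tetragonal-II conditions relevant to h-CrO_2 can be verified with a small helper such as the sketch below; the numerical elastic constants themselves must be taken from Table <ref>.

import numpy as np

def tetragonal_II_stable(C):
    """Born stability check; C is the 6x6 stiffness matrix in Voigt notation (GPa)."""
    c11, c12, c13, c16 = C[0, 0], C[0, 1], C[0, 2], C[0, 5]
    c33, c44, c66 = C[2, 2], C[3, 3], C[5, 5]
    return (c11 > abs(c12)
            and c44 > 0
            and 2.0 * c13**2 < c33 * (c11 + c12)
            and 2.0 * c16**2 < c66 * (c11 - c12))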
Besides the elastic stability of the proposed material, a test of its dynamical stability is necessary. As the vibrational modes (phonons) arise from the relative motion of the ions, a dynamically stable structure should have no negative phonon mode. The phonon dispersion in Fig.<ref>(a) confirms the dynamical stability. Chromium ions (Cr^4+) are much heavier than oxygen ions (O^2-); therefore, the lower-energy phonon modes are dominated by the contribution from Cr, while the higher-frequency optical modes are mostly contributed by the motion of the oxygen ions.
The resistance against external compression is reflected in the bulk modulus of a material. Using the Voigt-Reuss-Hill methodology, the bulk modulus (B_H) is calculated <cit.>. A separate set of calculations has been carried out to find the variation of the energy with volume, and equation of state (EOS) fittings to those data provide another set of bulk moduli. In Table <ref>, B_V represents the bulk modulus calculated using the Vinet EOS <cit.>. The calculated B_V using PBEsol+U for r-CrO_2 almost exactly matches the experimental value <cit.>. From the literature, the theoretically predicted values of the bulk modulus of r-CrO_2 are 261 GPa <cit.>, 282 GPa <cit.>, 225 GPa <cit.> and 238 GPa <cit.>. There is, however, a mismatch between the theoretical and experimental bulk moduli of the o- structure (216.75 vs. 181 GPa). Maddox et al. suspected that, as the orthorhombic phase is not found at ambient pressure, the zero-pressure volume cannot be measured experimentally <cit.>; therefore, inadequate data may affect the EOS prediction for this phase.
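For illustration, the Vinet fit to the energy-volume data can be sketched with scipy as follows; the initial guesses and the eV/Å^3 to GPa conversion are generic choices rather than the exact fitting setup used here.

import numpy as np
from scipy.optimize import curve_fit

def vinet(V, E0, V0, B0, B0p):
    x = (V / V0) ** (1.0 / 3.0)
    eta = 1.5 * (B0p - 1.0)
    return E0 + (9.0 * B0 * V0 / eta**2) * (1.0 + (eta * (1.0 - x) - 1.0) * np.exp(eta * (1.0 - x)))

# volumes (A^3) and energies (eV) from the constrained-volume total-energy scan
p0 = [energies.min(), volumes[np.argmin(energies)], 1.0, 4.0]
(E0, V0, B0, B0p), _ = curve_fit(vinet, volumes, energies, p0=p0)
B_V_GPa = B0 * 160.21766                        # eV/A^3 -> GPa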
Along with the Young modulus (Y), the shear modulus (G), Poisson ratio (ν) and Pugh ratio (B_H/G) are calculated using the same Voigt-Reuss-Hill methodology and are tabulated in Table <ref>. Relative to the orthorhombic and rutile structures, the hollandite structure has a large empty volume between the CrO_6 polyhedra, giving more space to accommodate the deformation produced by external strains. As a result, the B, Y and G values of the h-CrO_2 phase are much lower than those of the other polymorphs. The 2×2 tunnel is also present in h-MnO_2, so its elastic moduli come out almost similar to those of h-CrO_2.
As the elastic constants of tetragonal materials satisfy C_11=C_22, the variation of the bulk modulus in the XY plane for both h-CrO_2 and h-MnO_2 is isotropic, whereas in the ZX or ZY plane it shows an elliptical variation, as depicted in Fig.<ref>(a). The eccentricity of the variation in the ZY plane is lower for h-CrO_2, while its bulk modulus in the XY plane is higher. In contrast to h-CrO_2, the YZ plot of the bulk modulus of h-MnO_2 shows a dip at Z=0. The variation of the compressibility in Fig.<ref>(c) confirms that a higher bulk modulus yields a lower compressibility. According to the Young moduli in Fig.<ref>(b), h-CrO_2 is less stiff along the Z direction than h-MnO_2. The anisotropy of the elastic moduli can be visually confirmed through Fig.<ref>(d-e). In the XY plane, the pattern of the Young modulus shows that it is hard to deform the shape of the 2×2 tunnel in h-CrO_2. As h-MnO_2 is found in an AFM ground state, its resistance against the displacement of Mn atoms is more robust.
§ ELECTRONIC STRUCTURE
The electronic structure and magnetic behaviour of CrO_2 have always been a curious case. While a TMO is more likely to be found with an anti-ferromagnetic character, CrO_2 is found to be ferromagnetic in its ground state. For the anti-ferromagnetic spin configuration, we have taken three different arrangements, as depicted in Fig.<ref>(a-c). The EOS plot confirms the ferromagnetic ground state of h-CrO_2. The AFM-1 state is 30 meV/atom higher in energy than the FM configuration. The AFM-1 and AFM-2 states are very close in energy and show a crossover around ambient pressure. However, the ferromagnetic ground state remains stable at ambient pressure, which indicates the robustness of the magnetic response. h-MnO_2, which has a similar structure, is found to be in an anti-ferromagnetic ground state with the spin distribution shown in Fig.<ref>(c) <cit.>. Interestingly, for h-CrO_2 this AFM-3 configuration possesses the highest energy.
h-CrO_2 is a ferromagnetic material in which the nature of the bands for up and down spins is quite different. The metallic nature of one spin (up) channel is contrasted by the semiconducting nature of the other spin (down) channel, making h-CrO_2 a half-metal. The electronic band dispersion and the density of states (DOS) near the Fermi level (E_F) are depicted in Fig.<ref>. The majority-spin bands cross E_F, while the down-spin channel shows a gap. The half-metallic bandgap is found to be 2.9 eV: there is no state above -0.85 eV and below 2.07 eV relative to E_F in the minority-spin channel. The half-metallic gap is indirect, along the Z_0-M line of the irreducible Brillouin zone (BZ) edge (see Fig.<ref>). For the majority-spin channel, the first four conduction bands are separated from the other conduction bands. The pseudo-gap is 0.88 eV, from 1.56 eV at Γ to 2.38 eV at M w.r.t. E_F. An interesting feature is that the eigenvalues of all the bands at Z are equal to those at Z_0, so the diagonally opposite points of the upper surface of the irreducible BZ are equivalent.
The DOS of the majority-spin channel is shifted lower in energy than that of the minority-spin channel. Such a shift of the DOS is a well-known sign of the ferromagnetic character of a material, and the dissimilarity between the spin-resolved DOS reflects the ferromagnetic strength. For h-CrO_2, the dissimilarity is vividly noticeable, so the calculated magnetic moment of 2 μ_B per formula unit is quite justified.
Electronic bonding analysis can provide more insight into the ferromagnetic character of h-CrO_2 <cit.>. For a localised basis set, the atomic orbital overlap is straightforward to calculate, so the overlap-population-weighted DOS (crystal orbital overlap population, COOP) can provide information on the nature of the bonding, anti-bonding, or non-bonding interactions. For DFT calculations involving a plane-wave basis, the crystal orbital Hamiltonian population (COHP) is a method that partitions band energies into pairwise atomic orbital interactions, facilitating a similar identification. There is no anti-bonding state below E_F for the down spin, whereas for the up spin the anti-bonding states start at -1.78 eV. Such an extended anti-bonding state is generated from the Cr-3d and O-2p interaction, which becomes clear from the orbital-weighted bands in Fig.<ref>. The bonding/anti-bonding character is similar in the rutile phase as well (see Fig.<ref>).
To get a closer look at the orbital contributions to the full spaghetti of bands, the orbital-weighted bands are plotted in Fig.<ref>. The lowest-lying bands, 40 eV below E_F, originate from the s and p orbitals of Cr, and the majority-spin bands are lower in energy than the minority-spin bands. The O-2s bands are also separated, lying about 20 eV below E_F. Near E_F, in the valence band, the largest contribution comes from the O-2p orbitals. For the majority-spin channel, the Cr-3d_xy bands are separately visible. There is a small electron pocket at M, visible along the X-M-Γ and Z_0-M lines; the pocket is composed mostly of Cr-3d_xy and Cr-3d_x^2-y^2 states. There is also a hole pocket along X-P, with the largest contribution coming from hybridised Cr-3d_xz and O-2p_x. The band creating the hole pocket is flatter than the band responsible for the electron pocket, indicating a heavier hole than electron at E_F.
From the partial DOS plots for Cr and O, we have already noticed that the conduction bands are formed by Cr orbitals while the valence bands mostly come from O orbitals. The Cr and O orbital mixing is more pronounced for the up spin. Now, r-CrO_2 is understood to be a negative charge-transfer-gap material <cit.>. Korotin et al. have shown that an almost pure O-2p band crosses the Fermi energy and acts as an electron/hole reservoir, causing fractional occupation of the Cr-3d band at E_F. In Fig.<ref>(a) we present the oxygen-weighted bands for both the rutile and hollandite structures. In both cases, there is a Cr-3d band below E_F with its contribution coming mostly from the 3d_xy orbital. However, in the hollandite structure it is not as clearly a separate unhybridised band, so the Cr-3d_xy orbital in h-CrO_2 is not as localised as in the rutile counterpart. In r-CrO_2, in the vicinity of the BZ centre Γ, the band crossing E_F retains its pure O-2p character (brown dots), though it hybridizes with Cr-3d near Z; thus, the hybridisation of this band is highly anisotropic in r-CrO_2. In h-CrO_2 there are two bands responsible for the metallic character: the band crossing E_F around M has almost pure Cr-3d character, and the band crossing along X-P is a hybridised Cr-3d_xz and O-2p_x band. In both polymorphs, the band mixing between Cr-3d and O-2p is responsible for the half-metallic ferromagnetic character, though the nature of the hybridisation is different.
Along Z_0-M near E_F, two bands almost touch each other, one with a strong Cr-3d_xz contribution and another with an O-2p_z orbital contribution. This makes us curious whether uniaxial pressure along the z-axis can bring about some fundamental change in the band topology (see Fig.<ref>(b)). With 1% pressure, the bands come very close, though a gap of the order of meV remains. With higher pressure, the bands approach each other at two points but never cross, hence the band topology is always conserved. However, a more detailed study of the topological aspects of this material remains to be explored in the future.
Magneto-crystalline anisotropy: h-CrO_2 possesses a rare porous hollandite crystal structure and belongs to the scarce family of ferromagnetic TMOs. It is therefore interesting to see how the crystal structure tunes the direction of the magnetic moment in this system. Magneto-crystalline anisotropy is the variation of the internal energy with the direction of magnetization in a material. As the orbital motion is strongly coupled to the crystal structure (lattice), changing the orientation of the spin is resisted through spin-orbit coupling, giving rise to the magneto-crystalline anisotropy energy (MAE) <cit.>. The spatial variation of the MAE is presented in Fig.<ref>. The three-dimensional plot shows isotropy in the X-Y plane (azimuthal independence). Note that the bulk modulus of this structure also shows azimuthal symmetry; this is a feature of its layered structure. However, tilting the spin to make an angle θ with the Z-axis (the easy axis) demands external energy. The variation of the MAE with θ shows that the hard axis lies in the X-Y plane, with a maximum value of MAE = 394.96 μeV.
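As an illustration, the θ dependence can be condensed into uniaxial anisotropy constants by fitting the standard form E(θ) = K_1 sin^2θ + K_2 sin^4θ, sketched below with scipy; this functional form, and the variable names, are assumptions about the angular dependence rather than the analysis actually performed.

import numpy as np
from scipy.optimize import curve_fit

def mae_uniaxial(theta, K1, K2):
    s2 = np.sin(theta) ** 2
    return K1 * s2 + K2 * s2 ** 2

# theta (rad, measured from the easy Z-axis) and mae (micro-eV) from SOC total energies
(K1, K2), _ = curve_fit(mae_uniaxial, theta, mae)
print(K1 + K2)     # should approach the ~395 micro-eV value found in the X-Y (hard) plane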
§ CONCLUSION
The α-MnO_2 structure is one of a kind, with a 2×2 tunnel that can accommodate foreign elements and be useful in different applications. We have predicted another TMO having a similar structure, yet a drastically different electronic and magnetic character. It even demonstrates mechanical and electronic behaviour that contrasts with the other chromium dioxides. A detailed side-by-side study reveals the underlying physics of such a vibrant character.
The system is mechanically and dynamically stable. It exhibits anisotropic elastic moduli and is less stiff than the similar h-MnO_2 structure. The lattice parameters and elastic constants calculated for hollandite h-MnO_2 and rutile r-CrO_2 are on par with the experimental values, demonstrating the reliability of the methodology utilised for the proposed h-CrO_2 crystal.
h-CrO_2 is a half-metallic ferromagnet with a half-metallic bandgap of 2.9 eV. The ferromagnetic nature is a result of strong hybridisation of the Cr-3d and O-2p electrons. The band crossing is minimal, with only one electron pocket around M, composed mostly of Cr-3d_xy and Cr-3d_x^2-y^2 states, and one hole pocket along X-P, with most of the contribution coming from hybridised Cr-3d_xz and O-2p_x states at the Fermi level. The electron and hole pockets occur at different k-points, so direct electron-hole coupling is not possible. However, the prospect of phonon-mediated superconductivity is yet to be investigated.
The crystal shows ample magneto-crystalline anisotropy, with the easy axis perpendicular to the plane of the structure (Z-axis) and a maximum value of MAE = 394.96 μeV, while the azimuthal-angle independence of the MAE is evident. Though two bands almost touch near the Fermi level, the band topology remains conserved upon uniaxial pressure. However, the transformation of the bands near the Fermi energy under minimal uniaxial pressure may pave the way for a further detailed study of the topological aspects of this material.
To sum up, we conclude that a new stable transition metal oxide with vibrant physical properties is proposed, which may further ignite the interest of the physics community.
100
bystrom1950
A. Byström, A. M. Byström, The crystal structure of hollandite, the
related manganese oxide minerals, and α-mno2, Acta Crystallographica
3 (2) (1950) 146–154.
luo2010
J. Luo, H. Zhu, J. Liang, G. Rao, J. Li, Z. Du, Tuning magnetic properties of
α-mno2 nanotubes by k+ doping, The Journal of Physical Chemistry C
114 (19) (2010) 8782–8786.
li2007
L. Li, Y. Pan, L. Chen, G. Li, One-dimensional α-mno2: trapping
chemistry of tunnel structures, structural stability, and magnetic
transitions, Journal of Solid State Chemistry 180 (10) (2007) 2896–2904.
cockayne2012
E. Cockayne, L. Li, First-principles dft+ u studies of the atomic, electronic,
and magnetic structure of α-mno2 (cryptomelane), Chemical Physics
Letters 544 (2012) 53–58.
tseng2015
L.-T. Tseng, Y. Lu, H. M. Fan, Y. Wang, X. Luo, T. Liu, P. Munroe, S. Li,
J. Yi, Magnetic properties in α-mno2 doped with alkaline elements,
Scientific reports 5 (1) (2015) 1–8.
wang2009
Y. Wang, H. Liu, X. Sun, I. Zhitomirsky, Manganese dioxide–carbon nanotube
nanocomposites for electrodes of electrochemical supercapacitors, Scripta
Materialia 61 (11) (2009) 1079–1082.
marcus1998
P. Marcus, S. Qiu, V. Moruzzi, The mechanism of antiferromagnetism in chromium,
Journal of Physics: Condensed Matter 10 (29) (1998) 6541.
maddox2006
B. Maddox, C. Yoo, D. Kasinathan, W. Pickett, R. Scalettar, High-pressure
structure of half-metallic cr o 2, Physical Review B 73 (14) (2006) 144111.
kuznetsov2006
A. Y. Kuznetsov, J. De Almeida, L. Dubrovinsky, R. Ahuja, S. Kwon, I. Kantor,
A. Kantor, N. Guignot, High-pressure synthesis and physical properties of an
orthorhombic phase of chromium dioxide, Journal of applied physics 99 (5)
(2006) 053909.
bendaoud2019
H. Bendaoud, K. Obodo, B. Bouhafs, Predicted dynamically stable new phase for
cro2 compound: Dft+ u calculations, Computational Condensed Matter 21 (2019)
e00400.
kim2012
S. Kim, K. Kim, C.-J. Kang, B. Min, Pressure-induced phonon softenings and the
structural and magnetic transitions in cro2, Physical Review B 85 (9) (2012)
094106.
huang2018
S. Huang, X. Wu, J. Niu, S. Qin, Structural, magnetic and electronic properties
of cro 2 at multimegabar pressures, RSC advances 8 (43) (2018) 24561–24570.
schwarz1986
K. Schwarz, Cro2 predicted as a half-metallic ferromagnet, Journal of Physics
F: Metal Physics 16 (9) (1986) L211.
korotin1998
M. Korotin, V. Anisimov, D. Khomskii, G. Sawatzky, Cro2: A self-doped double
exchange ferromagnet, Physical Review Letters 80 (19) (1998) 4305.
katsnelson2008
M. Katsnelson, V. Y. Irkhin, L. Chioncel, A. Lichtenstein, R. A. de Groot,
Half-metallic ferromagnets: From band structure to many-body effects, Reviews
of Modern Physics 80 (2) (2008) 315.
kulatov1990
E. Kulatov, I. Mazin, Extended stoner factor calculations for the half-metallic
ferromagnets nimnsb and cro2, Journal of Physics: Condensed Matter 2 (2)
(1990) 343.
zaanen1985
J. Zaanen, G. Sawatzky, J. Allen, Band gaps and electronic structure of
transition-metal compounds, Physical review letters 55 (4) (1985) 418.
wang2018
R. Wang, Y. Jin, J. Zhao, Z. Chen, Y. Zhao, H. Xu, Ferromagnetic weyl fermions
in cro2, Physical Review B 97 (19) (2018) 195157.
tompsett2013
D. A. Tompsett, M. S. Islam, Electrochemistry of hollandite α-mno2:
Li-ion and na-ion insertion and li2o incorporation, Chemistry of Materials
25 (12) (2013) 2515–2526.
li2012
L. Li, C. Nan, J. Lu, Q. Peng, Y. Li, α-mno2 nanotubes: high surface
area and enhanced lithium battery properties, Chemical communications 48 (55)
(2012) 6945–6947.
attema2005
J. J. Attema, L. Chioncel, C. Fang, G. A. de Wijs, R. de Groot, Half-metals:
Challenges in spintronics and routes toward solutions, Local-Moment
Ferromagnets: Unique Properties for Modern Applications (2005) 199–216.
irkhin1994
V. Y. Irkhin, M. I. Katsnel'son, Half-metallic ferromagnets, Physics-Uspekhi
37 (7) (1994) 659.
yuasa2018
S. Yuasa, K. Hono, G. Hu, D. C. Worledge, Materials for spin-transfer-torque
magnetoresistive random-access memory, MRS Bulletin 43 (5) (2018) 352–357.
keen2002
D. A. Keen, Disordering phenomena in superionic conductors, Journal of Physics:
Condensed Matter 14 (32) (2002) R819.
himmetoglu2014
B. Himmetoglu, A. Floris, S. De Gironcoli, M. Cococcioni, Hubbard-corrected dft
energy functionals: The lda+ u description of correlated systems,
International Journal of Quantum Chemistry 114 (1) (2014) 14–49.
vasp
G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio
total-energy calculations using a plane-wave basis set, Physical review B
54 (16) (1996) 11169.
PBE
J. P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made
simple [physical review letters 77, 3865 (1996)], Physical Review Letters 78
(1997) 1396–1396.
PBEsol
J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A.
Constantin, X. Zhou, K. Burke, Restoring the density-gradient expansion for
exchange in solids and surfaces, Physical review letters 100 (13) (2008)
136406.
csacsiouglu2011
E. Şaşıoğlu, C. Friedrich, S. Blügel, Effective
coulomb interaction in transition metals from constrained random-phase
approximation, Physical Review B 83 (12) (2011) 121101.
pickett1998
W. Pickett, S. Erwin, E. Ethridge, Reformulation of the lda+ u method for a
local-orbital basis, Physical Review B 58 (3) (1998) 1201.
timrov2018
I. Timrov, N. Marzari, M. Cococcioni, Hubbard parameters from
density-functional perturbation theory, Physical Review B 98 (8) (2018)
085127.
timrov2021
I. Timrov, N. Marzari, M. Cococcioni, Self-consistent hubbard parameters from
density-functional perturbation theory in the ultrasoft and
projector-augmented wave formulations, Physical Review B 103 (4) (2021)
045141.
ricca2020
C. Ricca, I. Timrov, M. Cococcioni, N. Marzari, U. Aschauer, Self-consistent
dft+ u+ v study of oxygen vacancies in srtio 3, Physical review research
2 (2) (2020) 023313.
paul2023
B. Paul, D. Mondal, D. Bhattacharya, S. Datta, M. Kundu, I. Mondal, P. Halder,
S. Sarkar, A. Ghosh, T. Mandal, et al., Transition metal impregnated
nanostructured oxide material for broadband electromagnetic interference
shielding: A theoretical and experimental insight, Chemical Engineering
Journal 459 (2023) 141560.
QE
P. Giannozzi, O. Baseggio, P. Bonfà, D. Brunato, R. Car, I. Carnimeo,
C. Cavazzoni, S. De Gironcoli, P. Delugas, F. Ferrari Ruffino, et al.,
Quantum espresso toward the exascale, The Journal of chemical physics
152 (15) (2020) 154105.
vesta
K. Momma, F. Izumi, Vesta: a three-dimensional visualization system for
electronic and structural analysis, Journal of Applied crystallography 41 (3)
(2008) 653–658.
elatools
S. Yalameha, Z. Nourbakhsh, D. Vashaee, Elatools: A tool for analyzing
anisotropic elastic properties of the 2d and 3d materials, Computer Physics
Communications 271 (2022) 108195.
vaspkit
V. Wang, N. Xu, J.-C. Liu, G. Tang, W.-T. Geng, Vaspkit: A user-friendly
interface facilitating high-throughput computing and analysis using vasp
code, Computer Physics Communications 267 (2021) 108033.
nelson2020
R. Nelson, C. Ertural, J. George, V. L. Deringer, G. Hautier, R. Dronskowski,
Lobster: Local orbital projections, atomic charges, and chemical-bonding
analysis from projector-augmented-wave-based density-functional theory,
Journal of Computational Chemistry 41 (21) (2020) 1931–1940.
miura1986
H. Miura, The crystal structure of hollandite, Mineralogical Journal 13 (3)
(1986) 119–129.
sun2019
S. Sun, X. Zhang, J. Cui, Q. Yang, S. Liang, High-index faceted metal oxide
micro-/nanostructures: a review on their characterization, synthesis and
applications, Nanoscale 11 (34) (2019) 15739–15762.
chen2012
Z. Chen, Z. Jiao, D. Pan, Z. Li, M. Wu, C.-H. Shek, C. L. Wu, J. K. Lai, Recent
advances in manganese oxide nanocrystals: fabrication, characterization, and
microstructure, Chemical Reviews 112 (7) (2012) 3833–3855.
hill1952
R. Hill, The elastic behaviour of a crystalline aggregate, Proceedings of the
Physical Society. Section A 65 (5) (1952) 349.
vinet1986
P. Vinet, J. Ferrante, J. Smith, J. Rose, A universal equation of state for
solids, Journal of Physics C: Solid State Physics 19 (20) (1986) L467.
born1955
M. Born, K. Huang, M. Lax, Dynamical theory of crystal lattices, American
Journal of Physics 23 (7) (1955) 474–474.
mouhat2014
F. Mouhat, F.-X. Coudert, Necessary and sufficient elastic stability conditions
in various crystal systems, Physical review B 90 (22) (2014) 224104.
voight1928
W. Voight, Lehrbuch der kristallphysik, Teubner, Leipzig (1928).
reuss1929
A. Reuß, Berechnung der fließgrenze von mischkristallen auf grund der
plastizitätsbedingung für einkristalle., ZAMM-Journal of Applied
Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und
Mechanik 9 (1) (1929) 49–58.
vinet1987
P. Vinet, J. Ferrante, J. H. Rose, J. R. Smith, Compressibility of solids,
Journal of Geophysical Research: Solid Earth 92 (B9) (1987) 9319–9325.
wu2012
H. Wu, Y. Chen, C. Deng, X. Su, Pressure-induced phase transition and
structural properties of cro2, Phase Transitions 85 (8) (2012) 708–717.
alptekin2015
S. Alptekin, Pressure-induced phase transition in cro2, Journal of molecular
modeling 21 (2015) 1–5.
dronskowski2004
R. Dronskowski, Itinerant ferromagnetism and antiferromagnetism from the
perspective of chemical bonding, International journal of quantum chemistry
96 (2) (2004) 89–94.
daalderop1990
G. Daalderop, P. Kelly, M. Schuurmans, First-principles calculation of the
magnetocrystalline anisotropy energy of iron, cobalt, and nickel, Physical
Review B 41 (17) (1990) 11919.
sander2004
D. Sander, The magnetic anisotropy and spin reorientation of nanostructures and
nanoscale films, Journal of Physics: Condensed Matter 16 (20) (2004) R603.
|
http://arxiv.org/abs/2307.04049v1 | 20230708212820 | Parallel Algorithms Align with Neural Execution | [
"Valerie Engelmayer",
"Dobrik Georgiev",
"Petar Veličković"
] | cs.LG | [
"cs.LG"
] |
Parallel Algorithms Align with Neural Execution
Valerie Engelmayer (Department of Applied Computer Science, University of Augsburg, Augsburg, Germany)
Dobrik Georgiev (Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom)
Petar Veličković (Google DeepMind, London, United Kingdom)
Correspondence: Valerie Engelmayer
Machine Learning, ICML
Neural algorithmic reasoners are parallel processors. Teaching them sequential algorithms contradicts this nature, rendering a significant share of their computations redundant. Parallel algorithms however may exploit their full computational power, therefore requiring fewer layers to be executed. This drastically reduces training times, as we observe when comparing parallel implementations of searching, sorting and finding strongly connected components to their sequential counterparts on the CLRS framework. Additionally, parallel versions achieve strongly superior predictive performance in most cases.
§ MOTIVATION
In neural algorithmic reasoning, neural networks (NN) act as computational machines. In graph neural networks (GNN), graph nodes take on the role of storage space (throughout this paper, edge labels are interpreted as nodes adjacent to the edge's endpoints), while edges indicate which way information may flow. The update function of choice defines the set of constant (neural) time operations. But note how nodes update their features in parallel, each one acting as a processor of its own rather than mere memory.
The parallel nature of neural networks is widely known. Running them in parallel fashion on processing devices like GPUs and TPUs drastically saves computational resources <cit.>. It seems natural that this translation between computational models would also hold the other way around. And indeed, Loukas <cit.> proves that neural networks (NN) are analogous to distributed computational models under certain assumptions.
Kaiser & Sutskever <cit.> exploit the advantages of parallel processing in their Neural GPU. Freivalds et al. <cit.> derive their architecture from the parallel computational model of Shuffle-Exchange networks. Xu et al. <cit.> observe how their model learns to compute a shortest path starting from both ends in parallel when executing Bellman-Ford. Veličković et al. <cit.> and Veličković et al. <cit.> hint at parallelized computations whenever possible.
It is time the parallel processing capabilities of NN are exploited systematically. Theory on parallel computational models and algorithms explicitly designed for them are abundant <cit.>. Their trajectories are shorter and align more closely with neural architectures, as illustrated in figure <ref>. Hinting at these during training teaches NN to execute algorithmic tasks much more efficiently than when providing hints for sequential algorithms, as we demonstrate in section <ref> for the examples of searching, sorting and finding strongly connected components.
While it is common practice to modify the neural architecture for better alignment <cit.>, it seems promising to narrow the gap from the other side, by choosing algorithms that naturally align with neural execution.
§ PARALLEL COMPUTING
Fundamentally, the parallel computational models addressed here assume multiple processors collaborating to solve a task. The line between parallel and distributed computing is blurry and depends on how controlled interactions between processors are. We assume a fixed and known interconnection graph, uniquely identified processors and a common clock to govern computation. Therefore, we choose to speak of parallel computing.
§.§ Parallel Computational Models
Processor Arrays. Communication may take place via hard-wired channels between the processors. These induce an interconnection graph that may in principle take any shape. At every time step, each processor executes some computation based on the contents of its local memory and the information received from its neighbours in the previous step, and may in turn send out a tailored message through any of its channels.
PRAM Models. Alternatively, communication may be realised by reading from and writing to global memory, giving rise to PRAM (parallel random access machine) models <cit.>. Submodels allowing for concurrent reading and writing by multiple processors are referred to as CRCW PRAM. Different conventions exist on whether attempting to concurrently write different values is permitted, and if so, how to decide who succeeds. In the most powerful model, the priority CRCW PRAM, the value from the processor with the lowest index taking part in the concurrent write will be taken on.
§.§ Efficiency
Since multiple steps can be carried out at the same time, the required number of operations of a parallel algorithm does not lower-bound its run time as in the sequential case; instead, it lower-bounds the product of run time and number of processors. Optimal speedup is achieved if the use of n processors speeds up computation by a factor of n.
This gives rise to a notion of efficiency frequently used in parallel computing <cit.>.
The efficiency of a parallel algorithm solving a task of sequential complexity C on p processors in time t
is defined as
C/(pt).
It is not hard to see that optimal speedup entails an efficiency of Ω(1).
§.§ Examples of Parallel Algorithms
Searching.
For a simple parallel search for value x in a descending list of n items, assume a priority CRCW PRAM with n processors. Distribute the first item to processor 1, the second to processor 2 etc., while x is stored in the global memory. If a processor's item is ≤ x, it tries to write its index to a designated location in the global memory. Since the one with the smallest index will succeed, the location then contains the desired position of x.
The run time is independent of the input size (distributing values to processors can be done in constant time by routing over the shared memory; we neglect distributing/returning inputs/outputs from/to a host computer in the following, as this is omitted in neural execution), so the time-processor product is Θ(n), missing optimal speedup, as searching can be done in O(log n).
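To make the routine concrete, a short Python sketch that simulates the single parallel step (the shared cell and the priority write are modelled explicitly; all names are our own and purely illustrative) could look as follows:

def priority_crcw_parallel_search(A, x):
    # Simulated parallel search for x in a descending list A on a priority CRCW PRAM.
    # Processor i holds A[i]; every processor whose item is <= x attempts to write its
    # index to a shared cell, and the lowest-indexed writer wins.
    n = len(A)
    shared_cell = n  # default: x comes after all items
    attempting = [i for i in range(n) if A[i] <= x]  # all comparisons happen in one step
    if attempting:
        shared_cell = min(attempting)  # priority write: lowest index succeeds
    return shared_cell  # position of x in A (0-indexed)

assert priority_crcw_parallel_search([9, 7, 5, 3, 1], 6) == 2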
Sorting. Habermann <cit.> proposes a simple parallel sorting algorithm for a linear array of processors called Odd-Even Transposition Sort (OETS). Each processor holds one item. In an odd (even) round, all neighbouring pairs starting at an odd (even) index swap their items if they are out of order. The two types of rounds take turns for at most n rounds in total when n items are to be sorted, yielding O(n^2) operations when accounting for the n processors. Again, this is not optimal for comparison-based sorting, which may be done in O(n log n).
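A Python sketch of OETS, sequentially simulating the compare-exchange rounds that a linear processor array would perform in parallel (names are our own), is given below:

def odd_even_transposition_sort(items):
    # Each "processor" holds one item; in each round, disjoint neighbouring pairs
    # compare and swap in parallel. n rounds suffice for n items.
    a = list(items)
    n = len(a)
    for r in range(n):
        start = 1 if r % 2 == 0 else 0  # alternate between odd and even pairs
        for i in range(start, n - 1, 2):  # these compare-exchanges are independent
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

assert odd_even_transposition_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]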
Strongly Connected Components.
Fleischer et al. <cit.> propose a divide-and-conquer algorithm for computing the strongly connected components (SCC) of a digraph, which they call DCSC. First, find all descendants and predecessors of an arbitrary node, e.g. by carrying out breadth-first search (BFS) in the graph and its reversed version. The intersection of both sets constitutes an SCC. Observe how each further SCC has to be completely contained in either the descendants, the predecessors or the undiscovered nodes, such that the described routine may be called recursively for start nodes in each subset independently, until each vertex is assigned to an SCC. They prove an expected serial time complexity of O(n log n) for graphs on n nodes whose degrees are bounded by a constant. This is not optimal, but parallelization of the two searches per vertex, as well as of the recursive calls, may significantly speed up execution.
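The recursion is compact enough to state as a Python sketch; the two reachability searches and the three recursive calls are the parts that would run in parallel (all names are our own):

def dcsc(nodes, out_adj, in_adj):
    # Divide-and-Conquer SCC: out_adj / in_adj map a node to its forward / reverse neighbours.
    def reach(source, adj, allowed):
        seen, frontier = {source}, [source]
        while frontier:  # plain BFS restricted to the allowed subset
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v in allowed and v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        return seen

    nodes = set(nodes)
    if not nodes:
        return []
    pivot = next(iter(nodes))
    desc = reach(pivot, out_adj, nodes)   # descendants of the pivot
    pred = reach(pivot, in_adj, nodes)    # predecessors of the pivot
    scc = desc & pred                     # their intersection is one SCC
    rest = nodes - desc - pred            # nodes untouched by both searches
    return ([scc] + dcsc(desc - scc, out_adj, in_adj)
                  + dcsc(pred - scc, out_adj, in_adj)
                  + dcsc(rest, out_adj, in_adj))

# Example with two SCCs, {1, 2} and {3, 4}:
out_adj = {1: [2], 2: [1, 3], 3: [4], 4: [3]}
in_adj = {1: [2], 2: [1], 3: [2, 4], 4: [3]}
assert sorted(map(sorted, dcsc({1, 2, 3, 4}, out_adj, in_adj))) == [[1, 2], [3, 4]]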
§.§ Analogy to Neural Networks
Loukas <cit.> formally establishes an analogy between models like processor arrays and GNNs by identifying processors with graph nodes and communication channels with edges. Therefore, the width of a GNN corresponds to p, and its depth to t. Loukas coins the term capacity for the product of width and depth of a GNN, reflecting the time-processor product of parallel algorithms. The shared memory of a PRAM finds its neural analog in graph-level features. Since the computation of a graph feature may take into account positional encodings of the nodes, we may assume a priority CRCW PRAM, encompassing all other PRAM models.
§ EFFICIENCY OF EXECUTING ALGORITHMS NEURALLY
Inspired by the definition of efficiency in parallel computing, we define the efficiency of a neural executioner as follows.
Let a GNN with capacity c(n) execute an algorithm of sequential complexity C(n). Define its node efficiency as
η ≔ C(n)/c(n).
This definition implies an important assumption we make throughout this paper.
When executing an algorithm on a GNN, one constant-time operation is to be executed per node per layer.
This is not entirely unproblematic as discussed in section <ref>, but often expected when providing hints and helps to identify theoretical properties. Under this assumption, node efficiency denotes the share of nodes doing useful computations throughout the layers.
Since the computational cost of a GNN also scales with the number of messages that are being sent, it is insightful to study the share of edges that transport relevant information as well.
Let a GNN operate over a graph G=(V,E), m ≔ | E |, to execute an algorithm. We call an edge (i,j) ∈ E active at layer t for a certain input x if the operation to be executed by node j at time t involves information stored at node i at time t-1.
Let a(t) be the number of active edges at time t, and T the total number of time-steps.
Then define edge efficiency as the worst-case share of active edges when processing inputs x_n of size n,
ϵ ≔ min_x_n 1/T ∑_t=1^T a(t)/m.
Note how neural efficiencies are defined relative to the algorithm they are executing as opposed to the task they solve. This allows for a neural executioner to be efficient in executing an algorithm that is itself not efficient in solving a task.
§.§ Parallel Algorithms Entail Higher Efficiency
Contradicting a GNN's parallel nature by teaching it to execute sequential algorithms artificially impedes the task. Training to solve tasks in parallel instead is more efficient, which may also simplify the function to learn.
Shorter Trajectories.
As observed by Loukas <cit.>, the complexity of an algorithm lower bounds the capacity of a GNN executing it. If the number of processors is one, the depth alone needs to match the complexity, while the width might theoretically be set to one. But in practice, the width has to scale with the input size n to ensure applicability to different n.
Therefore, training sequential algorithms forces overspending on capacity by a factor of n.
Setting the width to n, as is often done to distribute one unit of information over each node, entails n available processors. Making use of them may shorten the trajectory of an algorithm by a factor of up to n in the case of optimal speedup, which allows the capacity to take on its lower bound.
The capacity of a GNN directly translates to the time needed to train and execute it. Additionally, long roll-outs give rise to an issue Bansal et al. <cit.> refer to as overthinking, where many iterations degenerate the behaviour of a recurrent processor.
Less Redundancy.
Neural efficiencies denote the share of nodes and edges involved in useful computations. Redundant computations not only harm run times, but may also interfere with the algorithmic trajectory. Parameterising them correctly to prevent this can complicate the function to learn.
Assuming the redundant nodes (grey in figure <ref>) need to preserve their information to be processed or put out later, their self-edges should execute an identity, while the additional incoming messages need to be ignored, i.e. mapped to a constant.
In practice, this will be hard to do, which could entail a temporal variant of oversmoothing, where relevant information gets lost throughout the layers <cit.>. Oyedotun et al. <cit.> highlight how skip connections help to avoid the issue, Ibarz et al. <cit.> introduce a gating mechanism to leave information unchanged, and Bansal et al. <cit.> let their architecture recall the original input.
So let's explore the efficiency of executing sequential and parallel algorithms.
Let a scalable GNN operate over a graph with n nodes and m edges. Further, let a sequential algorithm and an efficient parallel algorithm on n processors both have complexity C. Then executing the sequential and the parallel algorithm on the GNN entails efficiencies
η = O(1/n), ϵ = O(1/m) for the sequential algorithm, and
η = O(1), ϵ = O(n/m) for the parallel one.
As observed above, the capacity c of a GNN executing a sequential algorithm of complexity C has to be c ≥ nC, while it may be c=C in the case of optimal speedup. Node efficiencies follow immediately.
Since one processor can read only so much information, only a constant number of edges can be active at each layer during sequential processing, while up to a multiple of n edges can be active during parallel algorithms. This yields the stated edge efficiencies.
Therefore, the share of nodes avoiding redundant computation cannot exceed 1/n when executing sequential algorithms, whereas it may reach up to 1 for efficient parallel algorithms. At the same time, the number of redundant messages is reduced by a factor of n. Removing the artificial bottleneck of a single processor prevents data from having to be stored until the processor gets to it. Allowing nodes to carry out meaningful computation frees them of the dead weight of acting as memory.
Local Exchange of Information.
In neural networks, information exchange is inherently local. The feature h_i^t of node i at time t may only depend on itself and its neighbours 𝒩_i. E.g. for permutation-invariant MPNN <cit.>,
h_i^t = f (h_i^t-1, ⊕_j ∈ 𝒩_i g(h_i^t-1, h_j^t-1))
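As an illustration, a max-aggregation instance of this update can be written in a few lines of NumPy; the learned functions f and g are stubbed with random linear maps followed by ReLU, and all names are our own:

import numpy as np

def mpnn_step(h, edges, W_g, W_f):
    # One permutation-invariant MPNN step: h_i' = f(h_i, max_{j in N_i} g(h_i, h_j)).
    # h: (n, d) node features; edges: directed pairs (j, i) meaning i receives from j.
    n, d = h.shape
    agg = np.full((n, d), -np.inf)                 # identity element of max-aggregation
    for j, i in edges:                             # all messages are independent -> parallel
        msg = np.maximum(0.0, np.concatenate([h[i], h[j]]) @ W_g)   # g(h_i, h_j)
        agg[i] = np.maximum(agg[i], msg)
    agg[np.isneginf(agg)] = 0.0                    # nodes without neighbours get a zero message
    return np.maximum(0.0, np.concatenate([h, agg], axis=1) @ W_f)  # f(h_i, aggregate)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
h_next = mpnn_step(h, [(0, 1), (1, 2), (2, 3), (3, 0)],
                   rng.normal(size=(16, 8)), rng.normal(size=(16, 8)))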
This paradigm is often not respected by classical algorithms, as depicted in figure <ref>.
In the RAM model, the state h_i_t^t of register i_t updated at time t may depend on any two registers j_t and k_t:
h_i_t^t = f^t_i (h_k_t^t-1, h_j_t^t-1), j_t, k_t arbitrary.
Not being able to restrict which nodes have to communicate may render it advisable for a GNN to operate over a complete graph to make sure all necessary information is available at all times (see e.g. <cit.>). The situation is different in the setting of interconnected processing arrays, see figure <ref>. For example OETS only ever requires neighbouring processors to compare their items. In general, at time t, the memory state h_i^t of processor i is computed by
h_i^t = f^t_i (h_i^t-1, ||_j ∈ J_i^t h_j^t-1), J_i^t ⊆ 𝒩_i,
where concatenation indicates how i may tell apart its neighbours.
Therefore it suffices for the GNN to only rely on edges present in the interconnection graph. To emulate a PRAM algorithm, an empty graph would in principle be enough, though it might not deem advantageous to route all communication over the graph feature in practice.
Restricting the number of edges further reduces the use of resources and may help performance, since fewer unnecessary messages are being passed. Interconnection graphs are mostly chosen to be sparse, enabling maximum edge efficiency.
§ METHODOLOGY
To test the hypothesis, we consider the two elementary tasks of searching and sorting, as well as computing SCC as an example of a graph algorithm.
The parallel algorithms are chosen from section <ref>; as sequential counterparts we use binary search, bubble sort and Kosaraju's SCC algorithm from the CLRS-30 benchmark <cit.>. Key data of the GNNs we use are listed in table <ref>.
We compare performances across various processor networks, namely the widespread architectures of DeepSets <cit.>, GAT <cit.>, MPNN <cit.>, and PGN <cit.>. The trajectories of the new algorithms are encoded for the CLRS framework as follows. Note that in every case, randomized positional information, as proposed by Mahdavi et al. <cit.> and standard in CLRS, is provided as part of the input to emulate the situation of uniquely identified processors.
§.§ Searching
Parallel Search.
The hints for parallel search of x in A closely resemble its template. As seen in figure <ref>, each item A_i of A is represented by one node of an empty graph. A node feature indicates whether A_i ≤ x. The position rank_A (x) of x in A is predicted by the graph feature as a categorical variable over the nodes (as in <cit.>). Therefore we introduce an extra node carrying x as a placeholder, to allow for as many categories as there are possible positions of x.
To perfectly predict the outcome in this setting, the graph nodes may be updated by
h_i = ReLU (A_i -x),
yielding h_i = 0 if and only if A_i ≤ x.
So the graph feature may be computed by
rank_A (x) = min{i=1,…,n : h_i = 0 }.
These steps closely align with the considered neural update functions, especially since the function updating the graph level possesses its own set of parameters. Additionally, the roll-out has constant length, leaving room for only a constant number of redundant edges, see figure <ref> and table <ref>. Altogether, we expect high performance on parallel search.
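The target computation itself amounts to two NumPy lines, which is exactly why we expect it to map well onto one node-level and one graph-level update (an illustrative sketch with our own names; indices are 0-based here):

import numpy as np

def parallel_search_target(A, x):
    # Reference computation behind the hints: node update, then graph-level readout.
    h = np.maximum(0.0, np.asarray(A, dtype=float) - x)   # h_i = ReLU(A_i - x), 0 iff A_i <= x
    zeros = np.flatnonzero(h == 0.0)
    return int(zeros[0]) if zeros.size else len(h)        # rank_A(x) = min{i : h_i = 0}

assert parallel_search_target([9.0, 7.0, 5.0, 3.0, 1.0], 6.0) == 2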
Binary Search. As opposed to parallel search, binary search has an optimal complexity of O(log n). But given the need for n nodes, it still requires an enhanced capacity of O(n log n), yielding low node efficiency. In CLRS-30, binary search is executed on a complete graph (whose edges are omitted in figure <ref> to avoid clutter), impairing edge efficiency, see table <ref>. The low efficiency is visible in figure <ref> in the amount of grey components.
§.§ Sorting
OETS.
Actually swapping the items would require making numerical predictions. Instead, we predict the changing predecessors, following preimplemented examples. To still provide edges between nodes holding items to compare, we have to operate on a complete graph, sacrificing edge efficiency (see table <ref>), since only Θ(n) edges are active in each round, so ϵ = n/n^2. As hints, we feed for each round the current predecessors along with an edge feature indicating whether two nodes have to switch their roles, and a graph-level feature with the parity of the round, serving as a rudimentary clock.
Bubble Sort.
Though bubble sort requires the same number of operations, O(n^2), as OETS, it needs a larger network to be executed on (table <ref>). Again, along with operating over a complete graph, this entails low efficiencies.
§.§ Strongly Connected Components
DCSC.
We input the undirected adjacency matrix as an edge feature, along with the directed one. Parallelizing the recursive calls of DCSC on multiple disjoint sets would require an extra feature dimension for every search that is going on. Therefore we only let the two BFSs starting from the same source node be executed in parallel, each encoded as is standard in CLRS-30. Additionally, a binary flag on each node is flipped to 1 as soon as it is discovered from both directions, indicating that it belongs to the currently constructed SCC (this is reset at the start of every new search). At the same time, it receives a pointer to the source, which in the end constitutes the output. Throughout, we keep track of undiscovered nodes in another node feature. We choose the node with the smallest index from this set as the next source.
DCSC spends most of its time on the repeated BFS, a subroutine known to be learned well even on relatively simple architectures <cit.>, as it aligns well with neural execution <cit.>.
Note how they let each node consider all its incoming edges in parallel, as is done on CLRS-30. This not only allows the trajectory to be shortened from O(n+m) to O(n), but also prevents redundant computations from having to be handled explicitly.
Except for the source, each node can carry out the same computation at each step (see <cit.> for details) – just that this will only change its state whenever information flowing from the
start node reaches it. DCSC only has to pass the index s of the source node instead of computing predecessor pointers, so the computation looks as depicted in figure <ref>, closely resembling the situation in figure <ref>. Therefore, efficiency is expected to be less important for predictive performance in this special case. An obvious upper bound to DCSC's run time is O(n^2), accounting for one (two-sided) BFS per node, resulting in the large capacity reported in table <ref>. There is also no guarantee that more than one node and edge are active per step per BFS, resulting in low worst-case efficiencies. But this represents edge cases at best, such that the average trajectories will be much shorter and more efficient, as the experiments show. The core of DCSC aligning so well with neural execution promises good results.
Kosaraju.
The skeleton of Kosaraju's algorithm as implemented in CLRS-30, on the other hand, is formed by a depth-first search (DFS), which is more challenging for neural executioners <cit.>. As opposed to the closely related BFS, it is hard to parallelize. In fact, when relying on lexicographic ordering for tie-breaking, it is considered an inherently sequential algorithm <cit.>. Since nodes have to wait for the search to retract from their siblings, computation cannot be carried out as in figure <ref>, so processing needs to be timed correctly. The total run time is O(n+m), entailing the capacity and efficiencies reported in table <ref>.
§ RESULTS
Predictive performance is reported in table <ref>. As expected, parallel search achieves almost perfect results. Meanwhile, training time is reduced by a factor of almost 3 as compared to binary search (see figure <ref>). Despite DCSC's only partial parallelization and the asymptotically optimal linear run time of its sequential counterpart, training time is more than halved for the SCC task. At the same time, predictions become up to more than twice as accurate. On the sorting task, the sequential algorithm entails better accuracy, with the parallel one mostly falling within one standard deviation. Though both algorithms require the same asymptotic number of operations, training OETS takes a fraction of the time needed for bubble sort (figure <ref>).
§ DISCUSSION
Neural efficiency only loosely correlates with predictive performance when comparing tables <ref> and <ref>. This is not too surprising, since correctly parameterising redundant computations is only one of many aspects that make a function hard to learn. We propose a rather one-sided relationship, where low efficiencies can harm accuracy (if not circumvented as in BFS, see section <ref>), but high efficiencies do not necessarily enhance learning success.
We would like to highlight the importance of taking the perspective on neural networks as computational models when executing algorithms, as it opens access to the rich theory of computational complexity. E.g. the classes of NC (efficiently parallelizable) and P-complete problems (mostly thought of as inherently sequential) <cit.> inform us on which tasks may be hard to execute neurally, to tackle them more effectively. However in doing so, it is important to keep in mind the gap between the respective sets of constant time operations, with none being strictly more powerful than the other. On the one hand, a single RAM instruction may need to be approximated by entire subnetworks. On the other hand, one neural step suffices to process all incoming edges of a node during execution of BFS <cit.>. This breaks up the strict correspondence between time-processor product and capacity.
§ CONCLUSION
As suggested in section <ref>, parallel algorithms prove to be a lot more efficient to learn and execute on neural architectures than sequential ones. Often, OOD predictions on algorithmic tasks are significantly improved as well, suggesting that higher node and edge efficiency can help learning. Future work has to show how performance is impacted for other tasks, on more elaborate architectures like in <cit.>, and in generalist settings.
§ ACKNOWLEDGEMENTS
We would like to thank Razvan Pascanu and Karl Tuyls for their valuable comments, as well as Pietro Liò for insightful discussions and Torben Hagerup for the support he provided.
|
http://arxiv.org/abs/2307.05561v1 | 20230709173313 | TransPose: A Transformer-based 6D Object Pose Estimation Network with Depth Refinement | [
"Mahmoud Abdulsalam",
"Nabil Aouf"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
TransPose: A Transformer-based 6D Object Pose Estimation Network with Depth Refinement
Mahmoud Abdulsalam (corresponding author) and Nabil Aouf
Department of Engineering, School of Science and Technology
City, University of London, EC1V 0HB London, United Kingdom
Email: {mahmoud.abdulsalam, nabil.aouf}@city.ac.uk
August 12, 2023
=================================================================================================================================================================================================================================================
As demand for robotics manipulation applications increases, accurate vision-based 6D pose estimation becomes essential for autonomous operations. Approaches to pose estimation based on Convolutional Neural Networks (CNNs) have been introduced previously. However, the quest for better performance persists, especially for accurate robotics manipulation. This quest extends to the agri-robotics domain. In this paper, we propose TransPose, an improved transformer-based 6D pose estimation network with a depth refinement module. The architecture takes in only an RGB image as input, with no additional supplementing modalities such as depth or thermal images. The architecture encompasses an innovative lighter depth estimation network that estimates depth from an RGB image using a feature pyramid with an up-sampling method. A transformer-based detection network with additional prediction heads is proposed to directly regress the object's centre and predict the 6D pose of the target. A novel depth refinement module is then used alongside the predicted centres, 6D poses and depth patches to refine the accuracy of the estimated 6D pose. We extensively compared our approach with other state-of-the-art methods and analysed its performance for fruit-picking applications. The results we achieved show that our proposed technique outperforms the other methods available in the literature.
Transformer, Depth Estimation, Pose Estimation
§ INTRODUCTION
6D object pose estimation is a crucial topic to address in the robotics domain. The ability to perceive the position of an object from a single RGB image can find application in areas such as robotics for grasping tasks <cit.>, autonomous driving <cit.>, space applications <cit.> and robotics for virtual and augmented reality applications <cit.>. This problem, however, comes with several challenges, such as object appearance and texture, lighting conditions and object occlusion <cit.>.
Conventionally, the 6D object pose estimation problem is formulated as a feature-mapping problem in which feature points of a 3D object are matched to 2D images <cit.>. However, these methods are unable to detect features on smooth objects with minimal or no texture. The introduction of an additional modality such as depth data has been used to solve the problem of features on texture-less objects <cit.>; however, this requires more inputs in the form of RGB-D images. With the emergence of Convolutional Neural Networks (CNNs), some research leveraged this powerful tool as part of the pose estimation pipeline <cit.>. Transformer-based models are emerging and proving to be more efficient than CNNs <cit.>; thus, a few pipelines adopting transformer-based models for 6D pose estimation in the quest for better accuracy <cit.> exist.
In this work, we propose a new 6D object pose estimation architecture with which we aim to improve accuracy compared to existing methods. We introduce TransPose: an improved transformer-based 6D pose estimation network with a novel depth refinement module. The objective is to obtain better 3D translation and rotation estimates from a single RGB image input. For our initial estimations, we adapted the Detection Transformer (DETR) framework <cit.> to directly regress the centre of the target object. Furthermore, we obtain an image patch of the target object. The translation and rotation can be directly regressed by formulating additional prediction heads on DETR <cit.>. Indeed, feed-forward heads are added to regress the two components of the 6D pose (3D translation and 3D rotation). A novel depth refinement module is also introduced in our estimation pipeline to increase the accuracy of the pose estimation.
The TransPose architecture performs two interdependent tasks to obtain the final 6D pose of the target object. As seen in Fig. <ref>, an RGB image is used as the input to the pipeline. The image is passed to the transformer network, which has a ResNet-101 <cit.> backbone for feature extraction. These features are then passed to the transformer model, consisting of a standard encoder and decoder setup <cit.>. The model is used to obtain an image patch by detecting the object and assigning a Region Of Interest (ROI) to the detected object. The second segment of the architecture is the depth estimation and refinement module. The depth estimation network encompasses a feature pyramid network (FPN) <cit.> that takes in an RGB image as input and outputs an estimated depth image. The image patch obtained from the transformer model is used to isolate the target on the depth image and hence obtain the depth of the target from the camera. The depth is then used to compute the other components of the translation and subsequently to refine the estimated 6D pose of the target. We evaluated our approach on the YCB-Video dataset <cit.> as a benchmark and compared it with other state-of-the-art approaches. The following are our contributions in the TransPose model:
* We propose a novel pipeline for 6D object pose prediction that favourably compares with other state-of-the-art methods
* As part of the pipeline, we propose a lighter depth estimation network that utilizes a better up-sampling method for depth prediction
* Additional analyses are conducted with our own generated fruit dataset to facilitate and evaluate 6D pose estimation performance for fruit-picking applications.
The paper continues with a literature review in section II. After introducing a TransPose solution for 6D pose estimation in section III, we provide our results in the experiments section IV and finally the conclusion.
§ RELATED WORK
Many methods have been proposed to tackle the problem of 6D object pose estimation. Approaches that are non-learning-based rely heavily on object textures for pose estimation. Scale-Invariant Feature Transform (SIFT) features <cit.> and Speeded Up Robust Features (SURF) <cit.> are common examples of the classical features used. The SIFT algorithm as used in <cit.> for pose estimation requires rich texture information. This can be an issue for textureless objects. Miyake et al. <cit.> compensated for the textureless nature of objects with colour information to improve the accuracy of 6D pose estimation. Geometric information has also been used to increase the accuracy of estimation <cit.>.
Pose estimation methods that utilise local descriptors define and compute the global descriptors offline. The local descriptor is then computed and matched online with the global descriptor. Pose estimation using Iterative Closest Point (ICP), Oriented FAST and Rotated BRIEF (ORB) <cit.> and Binary Robust Independent Elementary Features (BRIEF) <cit.> has been implemented in the past <cit.>. However, these methods are computationally expensive and do not perform well on reflective objects.
We can further group pose estimation methods into template-based and feature-based methods <cit.>. The advantage of template-based methods is that they can detect objects without enough texture. Each input image location is scanned and matched with a constructed template of the object. The best match is selected based on a similarity score that compares the matched locations <cit.>. This type of method cannot properly estimate occluded objects, since the similarity score will be low.
The feature-based methods utilise 2D-3D correspondences. Features are mapped from the 2D images to the 3D models, thereby estimating the 6D poses <cit.>. This approach handles occluded objects better; however, it relies on rich features in the form of sufficient texture. Some works have proposed learning feature descriptors to solve the problem of objects with no texture <cit.>, while others regress directly from the 2D image to obtain the 3D correspondence <cit.>. Without sufficient refinement, these models can obtain relatively low accuracy when dealing with symmetrical objects.
A Convolutional Neural Network (CNN) architecture for pose estimation was introduced by <cit.> to regress the 6D pose from an RGB image. Without the depth modality, the task becomes difficult. In an attempt to address this problem, another method proposed predicting depth from the 2D image and thus acquiring the 3D position of the object <cit.>. Estimating the rotation component can also be a problem using this method due to non-linearity. <cit.> separated the rotation component and treated it as a classification problem, which often requires a post-refinement to obtain an accurate estimation. Methods that detect keypoints have been proposed to robustly and efficiently estimate the 6D pose. <cit.> utilized a segmentation technique to isolate the Region of Interest (ROI) and further regressed the keypoints from the ROI. Similarly, <cit.> utilised the YOLO <cit.> framework for this purpose. However, these methods perform poorly in the face of occlusion. To address this problem, some methods obtain keypoints through pixel-wise heatmaps <cit.>. Considering that heatmaps are of fixed size, these methods suffer when the objects are truncated.
Some other methods have considered using models that incorporate classical algorithms such as PnP to increase the accuracy of estimation <cit.>. Such models are heavy and hence not always suitable for real-time deployment on platforms. Models such as PoseCNN <cit.> and T6D-Direct <cit.>, although able to regress the 6D poses directly, require a very large training dataset since they have no refinement module to rely on.
Pose estimation using the depth modality often involves the conversion of the depth image to a point cloud, followed by the segmentation of object masks. <cit.> adopted semantic segmentation from depth images and point clouds to regress 6D poses. This is accompanied by a computational burden due to the conversion to point clouds and often requires a large dataset. In contrast, we utilise the raw depth modality for refinement of the regressed pose without converting to a point cloud, as presented further in this paper.
§ TRANSPOSE
The pipeline for TransPose 6D object pose estimation solution we propose in this work can be divided into three main parts:
* Detection and Regression Transformer
* Depth Estimation Network (DEN)
* Refinement Module for Final 6D Pose Estimation.
§.§ Detection and Regression Transformer
This transformer network is mainly adopted for object detection, image patch designation and initial 6D pose regression. The transformer architecture is inspired by the Detection Transformer DETR <cit.> and T6D-Direct <cit.>. Our model is presented in Fig. <ref>. An RGB image is used as the input to the model. A ResNet-101 is adopted as the CNN backbone to extract and create a feature vector, which is used as the input to the transformer encoder-decoder. A set of predictions of size N_c is produced by the transformer encoder-decoder. Prediction heads are added in the form of Feed-Forward Networks (FFNs) to regress the object pose and patch. The losses adopted to train this transformer network are categorized as follows:
§.§.§ Set Prediction Loss
The patch prediction in form of ROI is obtained by assigning a bounding box around the object of interest. From the input image through the decoder, the model produces a set of size N_c of tuples with fixed cardinality, where N_c also corresponds to the maximum number of the expected targets within the image. The content of each tuple is the image patch (left bottom pixel coordinates, height and width), class label probabilities and 6D pose (translation and rotation) of the predicted object. A bipartite matching is adopted to match the ground truth and the predicted sets to obtain matching pairs. The model is then trained to minimise a loss between the pairs.
Consider ground-truth objects x_1, x_2, x_3, ..., x_n and assume N_c is larger than the number of objects in the image. Bipartite matching is performed to match the ground truth x, a set of size N_c padded with no-object (∅), with the predicted set x̂ of the same size. Essentially, this searches for a permutation between the sets that minimises the loss below.
ρ̂ = arg min_ρ ∈ Θ_N_c ∑_i ^N_c ℒ_match(x_i, x̂_ρ (i))
ℒ_match(x_i, x̂_ρ (i)) is the pair-wise match cost between the prediction at index ρ (i) and the ground truth tuple x_i.
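In practice, the minimising permutation is found with the Hungarian algorithm. The SciPy sketch below illustrates the idea with a simplified cost containing only the patch and class terms (the full cost also contains the pose term; all names and values are our own, illustrative choices):

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(gt_boxes, gt_labels, pred_boxes, pred_class_probs):
    # Match each ground-truth object to one of the N_c predictions by minimising the summed cost.
    num_gt, num_pred = len(gt_boxes), len(pred_boxes)
    cost = np.zeros((num_gt, num_pred))
    for i in range(num_gt):
        for k in range(num_pred):
            cost[i, k] = (np.abs(gt_boxes[i] - pred_boxes[k]).sum()   # L1 patch term
                          - pred_class_probs[k, gt_labels[i]])        # class term
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))  # pairs (ground-truth index, prediction index)

rng = np.random.default_rng(0)
pairs = match_predictions(rng.random((3, 4)), np.array([0, 2, 1]),
                          rng.random((21, 4)), rng.dirichlet(np.ones(4), size=21))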
§.§.§ Hungarian loss
After matching, the model is trained to minimise the Hungarian loss. We denote the predicted patch as γ̂_ρ (i). Thus, the Hungarian loss is defined as:
ℒ_hung(x, x̂) = ∑_i ^N_c [ λ_pose 1_c_i≠∅ ℒ_pose(R_i, t_i, R̂_ρ̂ (i), t̂_ρ̂ (i)) - log P̂_ρ̂ (i)(c_i) + 1_c_i≠∅ ℒ_patch(γ_i, γ̂_ρ̂ (i)) ]
where ρ̂ is the lowest-cost permutation from equation <ref>, c_i is the target class label and γ_i is a vector that defines the ground-truth image patch coordinates, height and width.
§.§.§ Patch loss
The patch loss ℒ_patch(γ_i, γ̂_ρ (i)) is a component of equation <ref> and combines an l_1 norm loss and a generalised IoU loss ℒ_iou(γ_i, γ̂_ρ (i)) <cit.>, as follows:
ℒ_patch(γ_i, γ̂_ρ (i)) = σ_1 ℒ_iou(γ_i, γ̂_ρ (i)) + σ_2 ||γ_i - γ̂_ρ (i)||
and
ℒ_iou(γ_i, γ̂_ρ (i)) = 1 - ( |γ_i ∩ γ̂_ρ (i)| / |γ_i ∪ γ̂_ρ (i)| - | L(γ_i, γ̂_ρ (i)) \ (γ_i ∪ γ̂_ρ (i)) | / | L(γ_i, γ̂_ρ (i)) | )
where σ_1, σ_2 ∈ ℝ are hyperparameters and L(γ_i, γ̂_ρ (i)) is the smallest patch enclosing both the ground truth γ_i and the prediction γ̂_ρ (i).
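A NumPy sketch of this patch loss for boxes given as [x, y, height, width] (an illustrative implementation with our own names, not the exact training code) is shown below:

import numpy as np

def patch_loss(gt, pred, sigma1=2.0, sigma2=5.0):
    # Generalised IoU loss plus L1 loss for patches [B_x, B_y, H, W].
    def corners(b):
        x, y, h, w = b
        return x, y, x + w, y + h                    # (x1, y1, x2, y2)
    gx1, gy1, gx2, gy2 = corners(gt)
    px1, py1, px2, py2 = corners(pred)
    inter = max(0.0, min(gx2, px2) - max(gx1, px1)) * max(0.0, min(gy2, py2) - max(gy1, py1))
    union = (gx2 - gx1) * (gy2 - gy1) + (px2 - px1) * (py2 - py1) - inter
    lx1, ly1, lx2, ly2 = min(gx1, px1), min(gy1, py1), max(gx2, px2), max(gy2, py2)
    enclosing = (lx2 - lx1) * (ly2 - ly1)            # smallest patch containing both boxes
    l_iou = 1.0 - (inter / union - (enclosing - union) / enclosing)
    l_1 = np.abs(np.asarray(gt, dtype=float) - np.asarray(pred, dtype=float)).sum()
    return sigma1 * l_iou + sigma2 * l_1

print(patch_loss([10.0, 10.0, 20.0, 30.0], [12.0, 11.0, 18.0, 29.0]))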
§.§.§ Pose loss
ℒ_pose(R_i, t_i, R̂_ρ̂ (i), t̂_ρ̂ (i)) is the pose loss. The pose loss is divided into two components, the translation t and the rotation R. A conventional l_2 norm loss is used to supervise the translation, while a ShapeMatch loss L_R <cit.> is used for the rotation to deal with symmetrical objects.
ℒ_pose(R_i,t_i,R̂_ρ (i), t̂_ρ (i)) = L_R(R_i, R̂_ρ (i)) + || t_i - t̂_ρ (i)||
L_R = 1/| K | ∑_j_1 ∈ K min_j_2 ∈ K || R_i j_1 - R̂_ρ (i) j_2 ||   if the object is symmetric,
L_R = 1/| K | ∑_j ∈ K || R_i j - R̂_ρ (i) j ||   otherwise.
K represents the set of 3D points. R_i and t_i are the ground-truth rotation and translation, respectively. R̂_ρ (i) and t̂_ρ (i) are the respective predicted object rotation and translation.
§.§ Depth Estimation Network (DEN)
Depth estimation can be used for many applications <cit.>. In our case, the DEN is responsible for estimating depth images from monocular images and is inspired by the Feature Pyramid Network (FPN) <cit.>. The motivation is that the FPN is capable of extracting features at different scales. We adopt a ResNet-101 network as the backbone for feature extraction, where two 3x3 convolutional layers are utilised to process features and ReLU is the activation function of the layers, as seen in Fig. <ref>. A better, lightweight up-sampling technique <cit.> that covers a larger field of view and enables the generation of adaptive kernels for better prediction is utilised. The depth images are one-fourth of the original image's size. The gradient of the depth map is obtained using a Sobel filter. The depth loss adopted in the training of our network is an l_1 norm loss defined as follows:
ℒ_depth = 1/n ∑_i=1^n || d_i - d̂_i ||
where d_i and d̂_i are the ground-truth depth and the predicted depth of every pixel i, respectively.
§.§ Refinement Module for Final 6D Pose Estimation.
The refinement module consists of the depth patch generation and final pose estimation processes. The patch and the regressed 6D pose from the transformer alongside the depth image are used as inputs for the refinement module as shown in Fig. <ref>.
The patch defined as the ROI obtained by the Detection and Regression Transformer is formulated as:
ψ _i = [B_opx, B_opy, H_op, W_op]
where B_opx and B_opy represent the bottom-left corner pixel coordinates of the patch, and H_op and W_op are the height and width of the patch, all with respect to the original RGB image size (width and height) S_o = (W_o × H_o). Similarly, let us represent the size of the depth image as S_d = (W_d × H_d), where S_o ≠ S_d. We can obtain our depth patch ψ_j with respect to S_d from equ. <ref> as:
ψ _j = [B_dpx, B_dpy, H_dp, W_dp]
= ψ_i × [ W_d/W_o 0 0 0; 0 H_d/H_o 0 0; 0 0 H_d/H_o 0; 0 0 0 W_d/W_o ]
where B_dpx and B_dpy now represent the bottom-left pixel coordinates of the depth patch, and H_dp and W_dp are the height and width of the depth patch, all with respect to the depth image size S_d. The depth patch now represents our object ROI in the depth image frame, and thus we can take the depth t_z1 from the camera to the target to be the depth value at the centre pixel of the depth patch. The centre pixel coordinates C_d = (C_dx, C_dy)^T can be obtained as follows:
C_dx = B_dpx + W_dp/2
C_dy = B_dpy + H_dp/2
The translation from the depth model, t_1, utilises t_z1 (which in this case is the depth) to compute t_x1 and t_y1, the translations along the x and y axes, to complete the translation t_1 = (t_x1, t_y1, t_z1)^T. Assuming the camera matrix is known, t_x1 and t_y1 can be obtained following the projection equation of a pinhole camera model as follows:
(C_ox, C_oy)^T = (f_x t_x1/t_z1 + PP_x, f_y t_y1/t_z1 + PP_y)^T
where f_x and f_y represent the focal length of the camera, ( PP_x, PP_y)^T is the principal point. C_o = (C_ox, C_oy)^T is the object centroid, which can be obtained from the image patch similarly to equ. <ref> to be (B_opx + W_op/2, B_opy + H_op/2)^T assuming the centroid coincides with the center of the patch. Thus, t_x1 and t_y1 can be calculated as:
(t_x1, t_y1)^T = ((C_ox - PP_x) t_z1/f_x, (C_oy - PP_y) t_z1/f_y)^T
Thus a complete translation from the depth image t_1 is obtained as:
t_1 = (t_x1, t_y1, t_z1)^T
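The refinement steps above reduce to a few lines of NumPy; the sketch below assumes the camera intrinsics (f_x, f_y, PP_x, PP_y) are known and uses our own variable names and dummy values:

import numpy as np

def translation_from_depth(patch_rgb, depth_map, rgb_size, fx, fy, ppx, ppy):
    # patch_rgb = [B_opx, B_opy, H_op, W_op] in RGB-image pixels; rgb_size = (W_o, H_o).
    w_o, h_o = rgb_size
    h_d, w_d = depth_map.shape
    bx, by, hp, wp = patch_rgb
    # Rescale the patch centre into depth-image coordinates and read off t_z1.
    cx_d = int(round((bx + wp / 2.0) * w_d / w_o))
    cy_d = int(round((by + hp / 2.0) * h_d / h_o))
    t_z1 = float(depth_map[cy_d, cx_d])
    # Back-project the object centroid (in RGB pixels) with the pinhole model.
    c_ox, c_oy = bx + wp / 2.0, by + hp / 2.0
    t_x1 = (c_ox - ppx) * t_z1 / fx
    t_y1 = (c_oy - ppy) * t_z1 / fy
    return np.array([t_x1, t_y1, t_z1])

depth = np.full((120, 160), 0.85)   # dummy quarter-resolution depth map
t_1 = translation_from_depth([300.0, 200.0, 60.0, 40.0], depth, (640, 480), 615.0, 615.0, 320.0, 240.0)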
Finally, we can obtain the final fusion-based object translation t as:
t = (w_1 × t_1) + (w_2 × t_2)
where the weights w_1, w_2 ≥ 0 and w_1 + w_2 = 1, t_1 is the translation computed from the depth in equ. <ref> and t_2 is the regressed translation from the transformer model. Note that w_1 and w_2 are selected depending on the performance of the transformer and depth models, such that the model with the lower loss is given the higher weight and vice versa.
§ EXPERIMENTS
In the following, we present all the experiments conducted to test the capability of our proposed TransPose solution. From the datasets adopted to the results and comparison made between our solution and existing solutions, all will be detailed in the following subsections.
§.§ Dataset
The popular KITTI dataset is used as a benchmarking dataset for the depth estimation network. Likewise, we use the popular YCB-Video dataset, a benchmark for 6D pose estimation <cit.>, so we can easily compare our results with other methods. The dataset has 133,936 images of 640 × 480 pixels resolution. Each image is accompanied by bounding box labels, depth, segmentation and 6D object pose annotations. Similar to <cit.>, a test was carried out on 2,949 keyframes from 12 scenes. Additionally, we sampled from the Fruity dataset <cit.> to validate this approach in the context of fruit-picking applications, which is an important application for our research.
§.§ Evaluation Metrics
The metrics adopted to evaluate the depth estimation network are the abs-rel, sq-rel, RMSE and RMSE_log, as proposed in <cit.>, as follows:
abs-rel = 1/| T | ∑_i = 1^T | d_i - d̂_i | / d̂_i
sq-rel = 1/| T | ∑_i = 1^T || d_i - d̂_i ||^2 / d̂_i
RMSE = √(1/| T | ∑_i = 1^T || d_i - d̂_i ||^2)
RMSE_log = √(1/| T | ∑_i = 1^T || log d_i - log d̂_i ||^2)
where T is the number of pixels in the test set.
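A NumPy sketch of these four metrics, following the definitions above (names are our own), is given below:

import numpy as np

def depth_metrics(d_gt, d_pred):
    # abs-rel, sq-rel, RMSE and RMSE_log over all valid test pixels, as defined above.
    d = np.asarray(d_gt, dtype=float).ravel()
    dh = np.asarray(d_pred, dtype=float).ravel()
    valid = (d > 0) & (dh > 0)          # ignore pixels without a usable depth value
    d, dh = d[valid], dh[valid]
    abs_rel = np.mean(np.abs(d - dh) / dh)
    sq_rel = np.mean((d - dh) ** 2 / dh)
    rmse = np.sqrt(np.mean((d - dh) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(d) - np.log(dh)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log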
For the evaluation of the overall pose estimation, the average distance (ADD) metric, as suggested in <cit.>, is used. This metric calculates the mean pairwise distance as follows:
ADD = 1/| K | ∑_j ∈ K || (Rj + t) - (R̂j + t̂) ||
where R and t are the ground truth object rotation and translation, respectively. R̂ and t̂ are the predicted rotation and translation respectively. K is the set of 3D points.
ADD is calculated as the closest point distance for symmetrical objects as follows:
ADD-S = 1/| K | ∑_j_1 ∈ K min_j_2 ∈ K || (Rj_1 + t) - (R̂j_2 + t̂) ||
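Both metrics can be computed with a short NumPy sketch over the model point set K; the symmetric case mirrors the ShapeMatch loss above (illustrative only; names are our own):

import numpy as np

def add_metric(R, t, R_hat, t_hat, points):
    # ADD: mean distance between model points transformed by the true and predicted poses.
    p_gt = points @ R.T + t
    p_pred = points @ R_hat.T + t_hat
    return float(np.mean(np.linalg.norm(p_gt - p_pred, axis=1)))

def add_s_metric(R, t, R_hat, t_hat, points):
    # ADD-S: closest-point variant used for symmetric objects.
    p_gt = points @ R.T + t
    p_pred = points @ R_hat.T + t_hat
    dists = np.linalg.norm(p_gt[:, None, :] - p_pred[None, :, :], axis=2)  # pairwise distances
    return float(np.mean(dists.min(axis=1)))

pts = np.random.default_rng(0).normal(size=(500, 3))
R, t = np.eye(3), np.zeros(3)
print(add_metric(R, t, R, t + 0.01, pts), add_s_metric(R, t, R, t + 0.01, pts))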
§.§ Training
The model is initialised as in <cit.> with pre-trained weights. The model utilises an input image of size 640 × 480. The initial learning rate is set to 10^-3 and is eventually decayed. The batch size is set to 16 samples. The AdamW optimizer <cit.> is used for the training. The hyperparameters for calculating ℒ_patch in equation <ref>, σ_1 and σ_2, are set to 2 and 5. Also, the parameter λ_pose for calculating ℒ_hung in equation <ref> is set to 0.05. The cardinality, or number of prediction queries, N_c is set to 21.
§.§ Results
§.§.§ Depth estimation results
For the depth estimation network, the training loss and accuracy per iteration are shown in Fig. <ref>. As training proceeds, the training loss decreases, thereby increasing the training accuracy per iteration.
The results obtained for the depth evaluation using the metrics in equations. <ref>, <ref>,<ref> and <ref> are presented in Table. <ref>.
We compare the performance of our depth estimation network with other methods on the popular KITTI dataset and on our custom fruit dataset. On the KITTI dataset, our method outperforms the others in the sq-rel and RMSE_log metrics and compares very closely with <cit.> in the abs-rel and RMSE metrics. On the fruit dataset, our network outperforms the others in the abs-rel, sq-rel and RMSE_log metrics and compares closely in the RMSE metric. This comparison shows the accuracy of our network compared with the literature and its flexibility to be adapted for depth estimation as part of the TransPose pipeline. It is worth noting that higher depth accuracy comes at a computational cost and that the depth estimation network is just one step of the TransPose pipeline. Thus, a reasonable trade-off between computational cost and accuracy is established to satisfy both decent estimation and future real-time implementation. Hence, the depth results are very satisfactory for our purpose.
The depth estimation qualitative results are shown in Fig. <ref>. Samples from all the classes of our Fruit dataset including their ground truths and the corresponding predictions are shown. A colour map is added to the depth images for better visualisation and evaluation.
Further comparisons with other methods are carried out for each individual fruit class. Fig. <ref> shows the comparison for each class of the fruit dataset using the abs-rel and sq-rel metrics. From the results, our network outperforms all the other methods across all the fruit classes. For the sq-rel metric, our depth estimation network performs better in the banana class and slightly better in the other fruit classes.
Fig. <ref> compares the RMSE and RMSE_log for each class of the fruit dataset. Our network performs better on the banana, orange and lemon classes using the RMSE metric and is comparable to <cit.> on the apple and avocado classes. For the RMSE_log metric, our network performs best on the apple, avocado, banana and lemon classes.
§.§.§ TransPose pose estimation results
We sample 20 test frames for the 6D pose estimation and compare the ground-truth and predicted poses. The translation [t_x, t_y, t_z]^T and the quaternion [Q_x, Q_y, Q_z, Q_w]^T, which defines the orientation, are compared for all fruit classes, as shown in Figs. <ref> - <ref>.
The samples are randomly selected from the test data to visualise the difference between the ground truth and the prediction. We can see that our TransPose prediction solution matches well with the ground truth poses across all the fruit classes.
The qualitative results obtained for some sample frames from the fruit dataset are shown in Fig. <ref>.
Table <ref> shows a detailed evaluation of objects from the YCB-Video dataset using the metrics in equations <ref> and <ref>.
We can see that our proposed solution outperforms the other methods considering the ADD metric for all the objects except the "tuna fish can", "bowl", "wood block" and "banana" where our network closely compares with the other methods. Similarly, using the ADD-S metric, our solution outperforms the other methods except for the objects "tuna fish can" and "wood block".
A similar comparison is conducted for our fruit dataset using the ADD and ADD-S metric as shown in Table <ref>.
The mean from Table <ref> and Table <ref> shows the overall performance of TransPose across the sample objects. From the mean ADD and ADD-S, we can see that the depth refinement module improves the performance of 6D pose estimation.
§ CONCLUSION
This paper proposes TransPose, an improved transformer-based 6D pose estimation network that utilises a depth refinement module to improve the overall performance. In contrast to other multi-modal networks that require more than one sensor and data type, TransPose only requires an RGB image: the 6D poses are directly regressed by the proposed transformer network and further refined with the aid of a depth estimation network. We compare the results of the depth network with other methods using the standard evaluation metrics; its performance is competitive and satisfies the requirements of 6D pose refinement. We evaluate our results on multiple datasets for depth estimation and final object 6D pose regression, and we extend the scope to a fruit dataset to demonstrate the effectiveness of the pipeline in precision agriculture, particularly fruit picking. In future work, we aim to explore the real-time onboard deployment of TransPose in conjunction with a robotic manipulator for fruit picking applications.
|
http://arxiv.org/abs/2307.06136v1 | 20230712123834 | Theory of Elastic Microphase Separation | [
"Yicheng Qiang",
"Chengjie Luo",
"David Zwicker"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.mes-hall",
"nlin.PS"
] |
Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen, Germany
Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen, Germany
[][email protected]
Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen, Germany
Elastic microphase separation refers to equilibrium patterns that form by phase separation in elastic gels. Recent experiments revealed a continuous phase transition from the homogeneous phase to a regularly patterned phase, whose period decreased for stiffer systems. We here propose a model that captures these observations. The model combines a continuous field of the elastic component to describe phase separation with nonlocal elasticity theory to capture the gel's microstructure. Analytical approximations unveil that the pattern period is determined by the geometric mean between the elasto-capillary length and a microscopic length scale of the gel. Our theory highlights the importance of nonlocal elasticity in soft matter systems, reveals the mechanism of elastic microphase separation, and will improve the engineering of such systems.
Theory of Elastic Microphase Separation
David Zwicker
=======================================
§ INTRODUCTION
Phase separation in elastic media is a ubiquitous phenomenon, which is relevant in synthetic systems to control micro-patterning <cit.> and in biological cells, where droplets are embedded in the elastic cytoskeleton or chromatin <cit.>.
While biological systems are typically dynamic and involve active processes, the simpler synthetic systems can exhibit regular stable structures.
These patterns harbor potential for metamaterials and structural color, particularly since they are easier to produce and manipulate than alternatives like self-assembly by block co-polymers <cit.> or chemical cross-linking <cit.>.
In these applications, it is crucial to control the length scale, the quality, and the stability of the pattern.
Such control might be possible in a recent experiment, which found stable equilibrium patterns <cit.>.
However, the underlying mechanism for this elastic microphase separation is unclear, complicating further optimization.
The elastic microphase separation experiment <cit.> proceeds in two steps (M-fig:schematic_nonlocal_dA):
First, a PDMS gel is soaked in oil at high temperatures for tens of hours until the system is equilibrated.
When the temperature is lowered in the second step, the sample develops bicontinuous structures, reminiscent of spinodal decomposition.
However, in contrast to spinodal decomposition, the length scale of the structure does not coarsen but stays arrested at roughly one to ten micrometers, depending on the gel's stiffness.
Interestingly, this transition is reversible and the pattern disappears upon reheating, suggesting a continuous phase transition.
Moreover, the resulting pattern is independent of the cooling rate, in contrast to other experiments on similar materials <cit.>.
Consequently, the experiments should be explainable by an equilibrium theory that captures elastic deformations in PDMS due to oil droplets formed by phase separation.
In this paper, we propose a theoretical model that explains the experimental observations <cit.>.
The model combines the continuous density field of the elastic component, which naturally describes phase separation, with a nonlocal elasticity theory to capture the microstructure of the gel <cit.>.
This approach allows us to capture the continuous phase transition to a patterned phase and predict its equilibrium period.
§ RESULTS
To explain the experimental results <cit.> using an equilibrium theory, we define a free energy comprising contributions from elastic deformation as well as entropic and enthalpic contributions that can induce phase separation.
While elastic deformations are naturally described by the strain tensor ϵ(X) defined in a reference frame with coordinate X, phase separation is typically described by the volume fraction field ϕ(x) of the elastic component in the lab frame x.
For simplicity, we will focus on one-dimensional systems in this paper, where volume conservation connects the scalar strain ϵ to the fraction ϕ in the reference frame,
ϵ(X) =ϕ_0/ϕ(X) - 1
,
where ϕ_0 denotes the fraction in the relaxed homogeneous initial state.
The fraction ϕ(x) in the lab frame then follows from the coordinate transform d x/ d X=ϵ(X)+1.
This connection between strain ϵ and volume fraction ϕ permits a theory in terms of only one scalar field.
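To make this bookkeeping explicit, the short NumPy sketch below converts a volume-fraction profile sampled in the reference frame into the lab frame using exactly these two relations; the grid, the value of ϕ_0 and the trial profile are arbitrary illustrations of ours.

import numpy as np

phi0 = 0.8                                           # fraction in the relaxed homogeneous state
X = np.linspace(0.0, 10.0, 1001)                     # reference-frame coordinate
phi_ref = 0.5 + 0.3 * np.cos(2 * np.pi * X / 10.0)   # arbitrary trial profile phi(X)

eps = phi0 / phi_ref - 1.0                           # strain from volume conservation
# lab-frame coordinate from dx/dX = 1 + eps, integrated with the trapezoidal rule
x = np.concatenate(([0.0], np.cumsum(0.5 * (2.0 + eps[1:] + eps[:-1]) * np.diff(X))))
# the fraction carried by each material element is unchanged, but is now sampled at x
phi_lab = phi_ref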
§.§ Local elasticity models cannot explain equilibrium pattern
We start by investigating a broad class of elastic models, where the elastic energy density is a function of strain ϵ.
Since ϵ can also be expressed in terms of the volume fraction (M-eqn:strain_fraction_relation), the free energy of the system reads
F_local[ϕ] = k_B T/ν∫ d x [
 f_0(ϕ) + κ (∇ϕ)^2
 ] ,
where k_B is Boltzmann's constant, T is the constant absolute temperature of the system, and ν is a relevant molecular volume, e.g., of the solvent molecules.
Here, f_0 captures the elastic energy density as well as molecular interactions and translational entropy associated with ordinary phase separation, while the second term proportional to the interfacial parameter κ penalizes volume fraction gradients and gives rise to surface tension.
M-eqn:fe_functional_generic is identical to basic models of phase separation without elastic contributions <cit.>.
Such models exhibit phase separation and subsequent coarsening to minimize interfacial costs, known as Ostwald ripening <cit.>.
While adding local elasticity alters f_0(ϕ), functions that minimize F_local can only have a single interface <cit.> and equilibrium patterns with finite length scales are thus impossible.
We show in the <ref> that this result generalizes to higher dimensions.
Taken together, field theories based on local elasticity, including sophisticated non-linear finite strain models, cannot explain the equilibrium length scales observed in experiments.
§.§ Microscopic picture suggests nonlocal elasticity theory
Why do standard elastic theories fail to explain the observed patterns?
One answer is that only the interfacial parameters κ carries dimensions of length in M-eqn:fe_functional_generic, so on dimensional grounds we cannot expect another length scale beyond the interfacial width to emerge.
While the interfacial width is typically governed by molecular sizes (∼1 nm), realistic elastic meshes exhibit additional length scales like the mesh size (∼10 nm <cit.>) and correlation lengths of spatial inhomogeneities (∼100 nm <cit.>).
Since the last two quantities are comparable to the pattern length scale (several 100 nm to several μm <cit.>), we hypothesize that a characteristic length of the mesh is key for explaining the observed patterns.
If microscopic lengths of the elastic mesh are relevant, local elastic theories are insufficient <cit.>.
This is because moving a particular crosslink transmits forces to connected crosslinks in the vicinity (see M-fig:schematic_nonlocal_dB), implying stresses are no longer local, and the associated elastic energy cannot be expressed as a function of the strain.
Instead, the stress on a particular crosslink is now given by a sum over all connected crosslinks, whose contributions decay with distance X in the reference frame <cit.>.
In a continuous field theory, this nonlocal averaging is expressed as a convolution operation <cit.>.
Using a simple linear elastic model for the local stress E ϵ, with elastic modulus E, we obtain the nonlocal stress
σ_nonlocal(X) = E ∫ϵ(X') g_ξ(X'-X) ,
where we choose a Gaussian convolution kernel <cit.>,
g_ξ(X) = √(2/πξ^2)exp(-2 X^2/ξ^2)
,
with a characteristic length ξ, which quantifies the microscopic length of the gel <cit.>.
This nonlocal model can also be derived more rigorously, either generically (see <ref>) or from a more explicit microscopic model <cit.>.
Note that the convolution is performed in the reference frame since the topology of the network, governing which crosslinks interact with each other, is determined in this unperturbed state.
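As a numerical illustration, the nonlocal stress can be evaluated with a periodic Gaussian filter, since the kernel above is a normalised Gaussian of standard deviation ξ/2; the grid, strain profile and parameter values below are arbitrary choices of ours.

import numpy as np
from scipy.ndimage import gaussian_filter1d

E, xi = 1.0, 0.5                                   # stiffness and microscopic length (arbitrary units)
X = np.linspace(0.0, 10.0, 2048, endpoint=False)   # periodic reference-frame grid
dX = X[1] - X[0]
eps = 0.2 * np.sin(2 * np.pi * X / 10.0)           # some strain profile in the reference frame

# convolution with g_xi, i.e. a Gaussian filter of standard deviation xi/2 (periodic boundaries)
sigma_nonlocal = E * gaussian_filter1d(eps, sigma=(xi / 2) / dX, mode="wrap")

# average elastic energy per unit reference length, (1/2) <eps * sigma_nonlocal>
f_el = 0.5 * np.mean(eps * sigma_nonlocal)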
The elastic energy density of the system is then given by the product of strain and nonlocal stress, so the free energy of the entire system reads
F_nonlocal[ϕ] = F_local[ϕ]
+ 1/2∫ϵ(X) σ_nonlocal(X) ,
where F_local now only captures the contributions associated with phase separation.
To capture the essence of phase separation, we consider a simple Flory-Huggins model for the local free energy density <cit.>,
f_0(ϕ) = ϕlogϕ + (1-ϕ) log (1-ϕ) + χ ϕ(1-ϕ)
 ,
where 1-ϕ is the solvent fraction.
Here, the first two terms capture entropic contributions, while the last term describes the interaction between elastic and solvent components, quantified by the Flory-Huggins parameter χ.
Taken together, M-eqn:strain_fraction_relation–<ref> define the free energy F_nonlocal as a functional of the fraction ϕ of the elastic component.
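The local free energy density can be transcribed directly, as in the short sketch below; the value χ = 3 is an arbitrary example chosen above the critical value χ ≈ 2 of this symmetric model, so that f_0 develops two minima.

import numpy as np

def f0(phi, chi):
    # Flory-Huggins free energy density in units of k_B T per molecular volume
    return phi * np.log(phi) + (1 - phi) * np.log(1 - phi) + chi * phi * (1 - phi)

phi = np.linspace(1e-3, 1 - 1e-3, 9999)
f = f0(phi, chi=3.0)
idx = np.where((f[1:-1] < f[:-2]) & (f[1:-1] < f[2:]))[0] + 1
print(phi[idx])    # the two symmetric minima of the double well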
§.§ Nonlocal elasticity enables equilibrium patterns
We start by analyzing equilibrium states of the model by determining profiles ϕ(x) that minimize F_nonlocal using a numerical scheme described in the <ref>.
Beside typical macroscopic phase separation, we also find periodic patterns for some parameter sets; see M-fig:schematic_nonlocal_dC and S-fig:free_energy_curve.
In soft systems (small stiffness E), dilute regions, corresponding to solvent droplets, alternate with dense regions, where the elastic mesh is hardly strained (ϵ≪1).
In contrast, a harmonic profile emerges for stiff systems (large E).
Taken together, the nonlocal elastic theory supports periodic patterns that qualitatively resemble the experimentally observed ones <cit.>.
To understand when periodic patterns form, we next investigate the simple case where components can freely exchange with a surrounding reservoir kept at fixed exchange chemical potential μ.
This situation allows solvent molecules to rush in and out of the system, adjusting the average fraction ϕ̅ of the elastic component.
M-fig:phase_diagram_grandcanonical shows two phase diagrams of this grand-canonical ensemble at different stiffnesses E.
In the soft system (left panel), the phase diagram mostly resembles that of ordinary phase separation:
For weak interactions (χ < 2), we find only a homogeneous phase and μ simply controls ϕ̅.
In contrast, above the critical point at χ≈ 2 (black disk), we observe a first-order phase transition (brown line) between a dilute phase (μ≲ 0) and a dense phase (μ≳ 0).
However, at even stronger interactions (χ≳ 3.3), an additional patterned phase (denoted by P) emerges, where the periodic patterns exhibit the lowest free energy.
The line of the first-order phase transitions between the patterned phase and the dilute or dense homogeneous phase (blue-brown-dashed curves) meet the line of the phase transition between the two homogeneous states at the triple point (gray disk), where these three states coexist.
The grand-canonical phase diagram of soft systems (left panel of M-fig:phase_diagram_grandcanonical) qualitatively resembles simple pressure-temperature phase diagrams, e.g., of water.
Assuming that the chemical potential μ plays the role of pressure and that the interaction χ is negatively correlated with temperature, the dilute and dense homogeneous phases respectively correspond to the gas and liquid phases.
They become indistinguishable at the critical point at low interaction strength (corresponding to high temperatures).
In contrast, the patterned phase, with its periodic microstructure, resembles the solid phase.
The general form of the grand-canonical phase diagram persists for stiff system (right panel of M-fig:phase_diagram_grandcanonical), although the parameter region of the patterned phase is much larger.
However, the first-order transition between the dilute and dense homogeneous phases disappears together with the normal critical point of phase separation.
Instead, we now find a continuous phase transition (dotted red line) between the homogeneous and the patterned phases, which we will discuss in more detail below.
Taken together, these phase diagrams suggest that stable patterned phases emerge for sufficiently large stiffness E and interaction χ for intermediated ϕ̅.
The grand-canonical ensemble that we discussed so far is suitable when the time scale of an experiment is long compared to the time scale of particle exchange with the reservoir.
In the experiments <cit.>, the initial swelling takes place over tens of hours with a measurable increase in size and mass, indicating that solvent soaks the sample until it is equilibrated with the surrounding bath.
In contrast, the temperature quench, during which the patterned phase is observed, takes place on a time scale of minutes without the solvent bath.
This suggests that this process is better described by a closed system.
§.§ Patterned and homogeneous phases coexist in closed systems
In the closed system, corresponding to a canonical ensemble, the average fraction ϕ̅ of elastic components, and thus also the average fraction of solvent, is fixed.
In this situation, we find that multiple different phases can coexist in the same system; see M-fig:phase_diagram.
This is again reminiscent of phase separation, where the common-tangent construction reveals the fractions in coexisting homogeneous states.
Indeed, we find exactly this behavior in soft systems (left panel of M-fig:phase_diagramA), where a dilute and dense phase coexist for fractions between the two vertical dotted lines, while the free energy of the patterned phase (blue line) is always larger and thus unfavorable.
The picture changes for larger stiffness (right panel of M-fig:phase_diagramA), where the patterned phase has lower energy and we can construct two separate common tangents, which respectively connect the dilute and dense homogeneous phase with the patterned phase.
Analogously to phase separation, we thus expect situations in which a patterned phase coexists with a homogeneous phase (when ϕ̅ is in the region marked with H+P or P+H).
M-fig:phase_diagramB corroborates this picture and shows various coexisting phases as a function of the stiffness E and the interaction strength χ.
Taken together, the main additional feature of the canonical phase diagrams is the coexistence of multiple phases, which was only possible exactly at the phase transition in the grand-canonical phase diagram.
§.§ Higher stiffness and interaction strength stabilize the patterned phase
The canonical phase diagrams shown in M-fig:phase_diagramB are complex, but they generally preserve three crucial aspects of the grand-canonical phase diagram shown in M-fig:phase_diagram_grandcanonical: Higher stiffness (i) slightly favors the homogeneous phases, (ii) greatly expands the parameter region of the patterned phase, and (iii) induces a continuous phase transition.
The first point is illustrated by the binodal line of the homogeneous phase (thick brown lines and red dotted lines), which moves up with increasing stiffness E, implying that larger interaction strengths χ are necessary to stabilize inhomogeneous systems.
Inside the binodal line the system exhibits various behaviors, which can be categorized by χ.
At a critical value χ_*, the patterned phase (blue disk) coexists with the dilute and dense homogeneous phase (brown disks), and the associated tie line corresponds to the triple point in M-fig:phase_diagram_grandcanonical.
For weaker interactions (χ<χ_*), we mostly observe coexistence of a dilute and dense homogeneous phase (region H+H), which corresponds to normal phase separation.
For stronger interactions (χ>χ_*), the system exhibits the patterned phase, either exclusively (colored region) or in coexistence with a homogeneous phase (regions H+P and P+H).
Larger stiffness E lowers the critical value χ_*, thus expanding the parameter region where the patterned phase exists.
Eventually, for sufficiently large E, χ_* approaches the critical point of the binodal (gray point), a tiny region with patterned phase appears, and part of the binodal line becomes a continuous phase transition (red dotted line), reproducing the behavior predicted by the grand-canonical phase diagram of stiff systems (right panel in M-fig:phase_diagram_grandcanonical).
The influence of stiffness E and interaction strength χ becomes even more apparent in the three-dimensional phase diagram shown in M-fig:phase_diagramC:
With increasing E, the χ associated with the critical point of phase separation (black line) increases slightly, whereas the states of three-phase coexistence (blue line and brown lines) shift to lower χ.
All lines meet at the tricritical point (black sphere) for E≈ 0.037 k_B T/ν, ϕ̅≈0.54, and χ≈ 2.14.
Increasing E further, a part of the binodal line exhibits a continuous phase transition, which expands with larger E.
The phase diagram thus summarizes three main aspects of our model:
First, the binodal line of phase separation, which is only weakly affected by E, determines whether the system can exhibit non-homogeneous states.
Second, if the system can be inhomogeneous, the stiffness E determines at what value of χ patterned phases emerge.
Third, for sufficiently large E, these patterned phases form immediately due to the continuous phase transition.
§.§ Continuous phase transition explains experimental measurements
The continuous phase transition that we identified at sufficiently large stiffness E implies that the system can change continuously from a homogeneous phase to a patterned phase when the interaction strength χ is increased (corresponding to cooling).
Indeed, the amplitude of the predicted pattern vanishes near the transition (right panel of M-fig:phase_diagramB), while the length scale stays finite (left panel of M-fig:phase_diagramB).
This behavior is not expected for phase-separating systems, where transitions are typically first-order and thus associated with a jump in observables (see gray dots in M-fig:continuous_transitionA for an example).
The continuous phase transition was already hypothesized for the experiments <cit.>, based on a lack of hysteresis and a continuous change of the contrast measured by light intensity.
To connect to experiments, we mimic the contrast using the square of the amplitude of the optimal volume fraction profile.
M-fig:continuous_transitionA and the right panel of M-fig:phase_diagramB show that the contrast changes continuously from zero when the interaction strength χ is increased for sufficiently stiff systems.
Moreover, M-fig:continuous_transitionB shows that the associated pattern length scale changes only slightly, consistent with the experiments.
Note that deviations in the form of the curves could stem from thermal fluctuations, finite resolution in the experiment, and also deviations in model details.
§.§ Stiffness and interfacial cost control length scale
We next use the numerical minimization of the free energy F_nonlocal to analyze how the length scale L of the patterned phase depends on parameters.
M-fig:length_scale shows that L decreases with larger stiffness E and increases with the interfacial cost parameterized by κ.
The data in M-fig:length_scaleA suggests the scaling L/ξ∝ E^-1/2 over a significant parameter range, which matches the experimental observations <cit.>.
Moreover, M-fig:length_scaleB suggests L/ξ∝ξ^-1/2κ^1/4, which has not been measured experimentally.
Taken together, the two scaling laws suggest that the equilibrium length scale emerges from a competition between elastic and interfacial energy.
The two scaling laws emerge qualitatively from a simple estimate of the elastic and interfacial energies:
Since shorter patterns have more interfaces, the interfacial energy per unit length is proportional to γ L^-1, with surface tension γ∝κ^1/2 <cit.>.
In contrast, the elastic energy of a single period originates from stretching a part of material from initial length ξ to final length L, resulting in an elastic energy density proportional to ELξ^-1.
Minimizing the sum of these two energy densities with respect to L results in L/ξ∝ξ^-1/2 E^-1/2κ^1/4, which explains the observed scalings qualitatively.
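This one-line minimisation can be reproduced symbolically, for instance with SymPy; the energy expression below is the heuristic estimate from the argument above, not the full model.

import sympy as sp

L, gamma, E, xi = sp.symbols("L gamma E xi", positive=True)
energy = gamma / L + E * L / xi            # interfacial + elastic estimate per unit length
L_star = sp.solve(sp.diff(energy, L), L)[0]
print(L_star)                              # sqrt(gamma*xi/E); with gamma ~ kappa**(1/2) this gives
                                           # L ~ xi**(1/2) * E**(-1/2) * kappa**(1/4)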
§.§ Approximate model predicts length scale
To understand the origin of the length scale L in more detail, we consider the limit of strong phase separation, where the interfacial width is small compared to L; see M-fig:schematic_nonlocal_dC.
We thus approximate the volume fraction profile ϕ(x) of the elastic component by a periodic step function with fixed fractions ϕ_- and ϕ_+; see dotted lines in M-fig:box_function_approxA.
Material conservation implies that the relative size of these regions is dictated by the average fraction ϕ̅ in the swollen state, so we can only vary the period L̃ of the profile.
The stable period L then corresponds to the L̃ that minimizes F_nonlocal given by M-eqn:f_elastic, implying F_nonlocal'(L)=0.
Since changing L̃ does not affect the local free energy f_0, we only investigate the average free energy of the interface, f̅_int(L̃) ≈ 2γL̃^-1, and the average elastic free energy, f̅_el(L̃)= 1/2L̃^-1∫_0^L̃_0σ_nonlocal(X) ϵ(X) d X, where L̃_0 = ϕ̅/ϕ_0 L̃ is the period in the reference frame.
M-fig:box_function_approxB shows the derivatives of these contributions with respect to L̃, indicating that they sum to zero for L̃=L.
We show in the <ref> that
∂f̅_el/∂L̃ ≈E/ξ·
 0 L̃<L_min
 1/√(2π) (1 - ϕ̅/ϕ_+)^2 L_min<L̃<L_max
 1/√(8π) (ϕ_0/ϕ_- - ϕ_0/ϕ_+)^2 ξ^2/L̃^2 L̃>L_max ,
indicating three regimes bounded by
L_min = √(π/2) ϕ_0/ϕ̅ ξ and
L_max = √(1/2) (ϕ_0/ϕ_-) (ϕ_+-ϕ_-)/(ϕ_+-ϕ̅) ξ .
M-fig:box_function_approxB shows that this approximation of ∂_L̃ f̅_el captures the main features of the full numerical data.
M-fig:box_function_approxB suggests that stable patterns are mainly possible in the gray region (L_min < L̃ < L_max), which we interpret further below.
In this region, we use M-eqn:fe_derivatives_piecewise to solve ∂_L̃ f̅_int + ∂_L̃ f̅_el =0 for L̃, resulting in
L ≈ (8π)^1/4ϕ_+/ϕ_+-ϕ̅ (ξγ/E)^1/2 ,
consistent with numerical results; see transparent green lines in M-fig:length_scale.
This expression shows that the stable period L is governed by the geometric mean of the elasto-capillary length γ/E and the microscopic length ξ.
Moreover, L increases with a larger average fraction ϕ̅ of the elastic component, i.e., less swelling.
In contrast, the fraction ϕ_+ has only a weak influence since it is close to 1 in the case of strong phase separation, implying that the interaction strength χ affects L only weakly.
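As a numerical illustration of this result, the function below evaluates the predicted period together with the bounds L_min and L_max quoted above; all parameter values in the example call are arbitrary and not fitted to any experiment.

import numpy as np

def pattern_period(xi, gamma, E, phi0, phi_bar, phi_minus, phi_plus):
    # predicted period L and validity bounds of the sharp-interface estimate
    L = (8 * np.pi) ** 0.25 * phi_plus / (phi_plus - phi_bar) * np.sqrt(xi * gamma / E)
    L_min = np.sqrt(np.pi / 2) * phi0 / phi_bar * xi
    L_max = np.sqrt(0.5) * phi0 / phi_minus * (phi_plus - phi_minus) / (phi_plus - phi_bar) * xi
    return L, L_min, L_max

print(pattern_period(xi=1.0, gamma=0.1, E=0.05, phi0=0.8, phi_bar=0.5,
                     phi_minus=0.05, phi_plus=0.95))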
§.§ Patterned phase is governed by reference state
Finally, we use the approximate model to understand when the patterned phase emerges.
Here, it proves useful to interpret M-eqn:length_bound in the reference frame, where the convolution of the nonlocal elastic energy takes place.
Defining the length L_0=ϕ̅/ϕ_0L in the reference frame and the associated fraction α_0=ϕ_-/ϕ_0(ϕ_+-ϕ̅)/(ϕ_+-ϕ_-) occupied by the solvent droplet (M-fig:box_function_approxA), we find
L >L_min ⇔ L_0 >√(π/2) ξ and
L <L_max ⇔ α_0 L_0 <√(1/2) ξ ,
where the numerical pre-factors are very close to one.
The first condition (L_0 ≳ξ) suggests that two solvent droplets need to be separated by more than ξ in the reference frame since L_0 roughly estimates their separation; see M-fig:box_function_approxA.
If droplets were closer, they would feel each other's deformations, which is apparently unfavorable.
In the extreme case (L_0 ≪ξ), the average elastic energy is almost constant, essentially because short-ranged variations are averaged by the comparatively large nonlocal kernel.
In contrast, the second condition implies that the droplet size in the reference frame (α_0L_0) must be smaller than the microscopic length scale ξ.
Assuming ξ corresponds to the mesh correlation length, this suggests that the droplet can at most deform the correlated part of the mesh, which might correspond to large soft regions in natural meshes.
If droplets were larger (α_0L_0 ≫ξ), nonlocal features would only be relevant at interfaces, so the system would behave as if it had only local elasticity and coarsen indefinitely.
This analysis highlights that the existence of the periodic pattern depends on the reference frame, while its length scale L also depends on the different stretch of the dilute and dense region; see M-fig:box_function_approxA.
This observation suggests an intuitive explanation for the influence of the interaction χ:
Assuming that ϕ_- and ϕ_+ correspond to equilibrium volume fractions and ϕ̅=1/2 for simplicity, we find α_0 ∝ϕ_-, which decreases with larger χ.
Consequently, the lower bound L_min is unaffected, while L_max increases, consistent with our observation that the patterned phase forms more easily at higher χ and that the scaling law given by M-eqn:length_scale holds over a broader parameter range at higher interaction strength (M-fig:length_scale).
§ DISCUSSION
We propose a theory that explains the experimentally observed elastic microphase separation <cit.> based on nonlocal elasticity, which captures aspects of the microscopic gel structure.
Within this theory, regular periodic patterns appear for sufficiently strong phase separation (large enough χ) and stiffness E, while surface tension γ opposes the trend.
Essentially, solvent droplets inflate a region of the elastic mesh of the size of the microscopic length ξ.
The pattern period L then results from a balance of elastic and interfacial energies, so that L scales as the geometric mean between ξ and the elasto-capillary length γ/E.
In contrast, the interaction strength χ, leading to phase separation in the first place, affects L only weakly, but it determines whether the patterned phase is stable, similar to ordinary phase separation.
However, the normal first-order transition between the homogeneous and heterogeneous phase (at the binodal line) can now also exhibit a continuous phase transition.
Consequently, the patterned phase can appear with arbitrarily small amplitude in a reversible process.
Our model captures the main features of the experiment <cit.>, including the continuous phase transition leading to reversible dynamics.
Moreover, it explains that the pattern length scale L is independent of the cooling rate, only weakly affected by the final temperature, and decreases with stiffness E.
Importantly, our model predicts that a structural length ξ of the mesh is essential for the emergence of the observed L.
Our numerics indicate that L can be an order of magnitude larger than ξ, suggesting that ξ could relate to observed correlation lengths of the order of a few hundred nanometers <cit.>.
Since ξ is small compared to the distance between droplets (see M-eqn:length_bound_ref), the nonlocal effects of elasticity do not affect droplet positioning.
Furthermore, we found that a coexisting homogeneous phase does not affect the free energy of the patterned phase strongly (see <ref>), suggesting that the two phases can be interspersed, which would contribute to irregularity of the droplet placement in real systems.
In contrast, the observed variation in droplet size <cit.> likely originates from local heterogeneity in material properties, like ξ, E, and γ.
Taken together, our theory makes clear predictions that could be tested experimentally.
To capture the mesh's microstructure, we employ nonlocal elasticity <cit.> based on a convolution of the stress field, which is similar to theories used in fracture mechanics <cit.>.
Our work complements related theories, which either modeled pores explicitly <cit.> or resorted to particle-based methods <cit.>.
Nonlocality is generally responsible for the emergence of microstructure in multiple physical systems, such as the Ohta-Kawasaki model <cit.>, phase separation with electrostatic interaction <cit.>, and also nonlocal elasticity <cit.>, e.g., to study polymeric materials <cit.>.
In contrast to the first two theories,
we use a convolution in the reference frame, capturing the microscopic topology of the elastic mesh.
More generally, the convolution kernel given by M-eqn:kernel can be interpreted as a Green's function of a diffusion process in the reference frame, suggesting that the nonlocal elasticity is similar to the damage field introduced in fracture mechanics <cit.>.
We analyzed our model in the simple case of one dimension to highlight fundamental properties, but to capture experimental details, including various morphologies, we need to generalize the model to higher dimensions, which will require a tensorial convolution kernel <cit.>.
Moreover, we might require more realistic models of phase separation (including different molecular sizes and higher-order interactions terms) and elasticity (involving finite extensibility, viscoelasticity <cit.>, as well as plastic deformation, like fracture <cit.> and cavitation, which can lead to regular droplet patterns <cit.>).
Finally, experimental systems exhibit heterogeneities in key model parameters including ξ, E, and γ, which will contribute to uncertainty and might even induce large scale rearrangements <cit.>.
Such extended theories will allow us to compare the full pair correlation and scattering functions to experiments, shedding light on how we can manipulate this pattern forming system to control microstructures.
§ ACKNOWLEDGMENTS
We thank Carla Fernández-Rico, Robert W. Style, and Eric R. Dufresne for helpful discussions.
We gratefully acknowledge funding from the Max Planck Society and the European Union (ERC, EmulSim, 101044662).
§ GENERAL LOCAL FREE ENERGIES CANNOT EXHIBIT EQUILIBRIUM PATTERNS
To show that local elasticity cannot yield patterns, we first consider a generic procedure to minimize the free energy functional, and explain afterwards that any periodic structure with finite period cannot be the minimum of the free energy functional if the interfacial term is the only nonlocal term.
Consider a free energy functional of a system of arbitrary dimension, which includes a volume-fraction-dependent term [ϕ_i], an interfacial energy term [ϕ_i] to penalize sharp interface, an elastic energy term [] and the constraint [ϕ_i, , ζ, η],
F[ϕ_i, , ζ, η]
= [ϕ_i]+[ϕ_i]+[] + [ϕ_i, , ζ, η] ,
where ϕ_i with i=1,2,…,N are the volume fraction fields of N components, is the deformation vector field of the elastic component, and ζ and η are two Lagrangian multipliers.
We keep the generic form of S-eqn:fe_functiuonal_generic for simplicity and universality, except for the constraint, where we use
[ϕ_i, , ζ, η]
= ∫ζ(∑_i ϕ_i - 1)+ ∫η (J ϕ_N - ϕ_N,0) .
The first term accounts for incompressibility, while the second one indicates that the N-th component is the elastic component, so its volume fraction is related to the displacement field by volume conservation.
Here, ϕ_N,0 is the volume fraction distribution of the elastic component in the relaxed state (M-fig:schematic_nonlocal_dA), and J is the determinant of the deformation gradient tensor _, where is the coordinate in the reference frame.
Extremizing the free energy in S-eqn:fe_functiuonal_generic with respect to all of its variables leads to the corresponding self-consistent equations,
δ/δϕ_i + δ/δϕ_i + ζ + η J δ_i,N = 0
δ/δ+δ/δ(∫η J ϕ_N)=0
1 - ∑_i ϕ_i =0
J ϕ_N - ϕ_N,0=0 ,
where δ_i,N is the Kronecker delta.
Note that the last two equations are simply the incompressibility and the volume conservation of the elastic component, respectively.
The constraint term [ϕ_i, , ζ, η] does not contribute to the free energy when the constraints are satisfied.
To minimize the free energy, we follow two steps:
First we require that the phase separated structure is periodic, and describe the unit cell by a group of parameters <cit.>.
For a given , S-eqn:scf_generic can be solved to obtain the free energy minimum with fixed unit cell F^*()=F[ϕ_i^*(), ^*(), ζ^*(), η^*()], where the symbols with asterisk denote the solution of S-eqn:scf_generic.
The minimum of the free energy can then be obtained by optimizing F^* with respect to the period .
Note that F^* is not a functional, but a function of instead.
Follow the steps in <cit.>, we find that the derivative of F^*() can be obtained from the partial derivative of the free energy functional with respect to the period while keeping the shape of all the spatial functions unchanged,
F^*/ = .∂/∂|_* + .∂/∂|_* + . ∂/∂|_* .
Assuming the total volume of the system V is constant, the total free energy can be replaced by the average free energy density f̅ = F / V,
f̅^*/ = .∂/∂|_* +.∂/∂|_* + . ∂/∂|_* .
Note that S-eqn:fe_derivative does not impose any assumptions on the exact form of the terms.
For any local volume-fraction-dependent term [ϕ_i], e.g., the Flory-Huggins free energy, the partial derivative with respect to vanishes, since the variables ϕ_i are dimensionless and have no explicit dependence on the pattern length scale.
For any local elasticity, including nonlinear ones with large deformations, the elastic energy takes a local form of the deformation gradient tensor _, which is also dimensionless and has no explicit dependence on the length scale.
Therefore, the partial derivative of the local elastic energy vanishes as well, leading to
f̅_local^*/ = .∂/∂|_* .
Here, term .∂/∂|_* does not vanish since the interfacial term usually depends on the gradient of the volume fraction fields ϕ_i, which has the dimension of length^-1 and contains explicit dependence on the length scale.
The period typically contains not only the size but also the shape of the unit cell, including the angles between the base vectors of the unit cell.
Keeping the shape of the unit cell unchanged and denoting its size by L̃, we have
f̅_local^*/L̃ = .∂/∂L̃|_* ,
which is usually negative since the interfacial energy prefers larger structure size.
Specifically, for the interfacial energy
= /1/V∫∑_i 1/2κ_i (ϕ_i)^2 ,
where κ_i quantifies the interfacial cost for component i, we have
.∂/∂L̃|_* = -2/L̃^* = - /2/L̃1/V∫∑_i 1/2κ_i (ϕ_i^*)^2 ,
which is negative.
Consequently, S-eqn:fe_derivative_local states that with local elasticity, even if a periodic patterned structure is formed, it still cannot be the equilibrium state.
Instead, Ostwald ripening is inevitable, since the free energy can always be lowered by coarsening, and the equilibrium length scale diverges.
With nonlocal elasticity, the elastic energy will no longer be a local function of the deformation gradient tensor, implying two non-vanishing terms in S-eqn:fe_derivative, which compete with each other,
f̅_nonlocal^*/L̃ = .∂/∂L̃|_* + . ∂/∂L̃|_* ,
leading to equilibrium pattern length scale (S-fig:free_energy_curve).
The conclusion drawn from the free energy derivative above, although not a strict proof, can be understand in a more intuitive way:
Imagine a periodic patterned structure with average free energy f̅, which is first scaled to be slightly larger, and then relaxed.
With local elasticity, the free energy decreases during scaling since only the interfacial term is affected.
Since the free energy must not be higher after relaxation due to the variational principle, the new free energy is generally lower than f̅.
This process can be repeated until the length diverges, implying local elasticity cannot explain the finite equilibrium length scale in a continuous field theory.
§ GENERIC MODEL OF NONLOCAL ELASTICITY
The simplest (linear) nonlocal elasticity can be introduced by treating the elastic energy as a functional of the displacement field and expanding it to second order <cit.>.
We here focus on a one-dimensional system, and assume that the elastic energy F_el is a functional of the displacement field u(X), where X is the coordinate in the reference frame.
Expanding to second order, we have
 F_el = F_el,0 + ∫ d X F_el,1(X)u(X) + 1/2∫ d X d X' F_el,2(X,X')u(X)u(X') + ⋯ ,
where F_el,i denotes the functional derivative of order i of F_el.
Since the constant part is irrelevant and the expansion is made near the relaxed state, the F_el,0 and F_el,1 terms can be dropped.
Consequently, the lowest-order approximation of F_el reads
 F_el ≈1/2∫ d X d X' Φ(X,X')u(X)u(X') .
Since the elastic energy must be invariant under constant shifts of the displacement field u,
 ∫ d X' Φ(X,X')=0 .
Extracting the diagonal element from Φ(X,X'),
 Φ(X,X') = ψ(X)δ(X-X') - Ψ(X,X') ,
and integrating both sides with respect to X' and using S-eqn:f_elastic_Phi_ts_u, we find
 ψ(X) = ∫ d X' Ψ(X,X') .
Note that the function Ψ is arbitrary and not subject to a constraint similar to S-eqn:f_elastic_Phi_ts_u, in contrast to Φ.
Inserting S-eqn:f_elastic_Psi_def into S-eqn:f_elastic_expand_lite and using S-eqn:f_elastic_Phi_ts_u again, we find
 F_el ≈1/4∫ d X d X' Ψ(X,X')[u(X)-u(X')]^2 .
The equation above has a very clear physical picture:
The system contains multiple springs, with both ends of each spring tied to X and X' in the reference frame, and stretched by u(X)-u(X') after the deformation of the elastic component.
The elastic energy of the system is just the total potential energy of all the springs, while the function Ψ is related to the stiffness of the springs.
Defining the strain ϵ= d u/ d X, this can be written as
 F_el ≈1/4∫ d X d X' Ψ(X,X')[∫_X^X' d X^* ϵ(X^*)]^2
 =1/4∫ d X dγ Ψ(X-γ/2,X+γ/2)[∫_X-γ/2^X+γ/2 d X^* ϵ(X^*)]^2
 =1/2∫ d X d X' ϵ(X) ϵ(X') c(X,X') ,
with
c(X_1^*,X_2^*) = 1/2∫ dγ d X Ψ(X-γ/2,X+γ/2) Π_γ(X-X_1^*) Π_γ(X-X_2^*)
 ,
where Π_γ(X) is the box function centered at 0 with width γ.
Assuming no explicit dependence on position, we have
 Φ(X,X') =Φ(X-X')
 Ψ(X,X') =Ψ(X-X')
 and
 c(X,X') =c(X-X') .
Note that this does not require a homogeneous deformation of the elastic component, but only that the property of the continuum is homogeneous.
Defining the nonlocal stress σ_nonlocal(X) = ∫ d X' ϵ(X') c(X-X'), S-eqn:f_elastic_nonlocal becomes
 F_el ≈1/2∫ϵ(X) σ_nonlocal(X) d X ,
which is the elastic energy term in M-eqn:f_elastic of the main text.
The nonlocal elasticity given by S-eqn:f_elastic_nonlocal_final can also be obtained from a more explicit model, such as a full network model of polymer network <cit.>.
Assuming that the network is formed by multiple Gaussian chains, and each polymer chain has N monomers, while its ends are fixed at X-γ/2 and X+γ/2 in the reference frame, respectively, the total elastic energy of the network will be proportional to
∝∫ X N γ k_N [u(X+γ/2)-u(X-γ/2)]^2 w(N,γ) ,
where k_N is the stiffness of the entropic spring of a polymer chain of N monomers, w(N,γ) is the joint distribution of N and γ to consider the polydispersity of the polymer chain and the end-to-end distance distribution.
Note that originally the energy of a polymer strand should be written in the form of k_N [γ + u(X+γ/2)-u(X-γ/2)]^2.
However, one can easily show that it is equivalent to S-eqn:f_elastic_polymer except for a constant shift.
Change the order of the integration,
∝∫ X γ[∫ N w(N,γ) k_N] [u(X+γ/2)-u(X-γ/2)]^2 ,
which is just another form of S-eqn:f_elastic_spring_model.
§ NUMERICAL METHODS
We minimize the nonlocal free energy using an iterative numerical scheme, which makes use of simple mixing and the Anderson mixing method to improve numerical stability and convergence <cit.>.
A variable-cell algorithm further improves the performance of the numerical method <cit.>.
It makes use of the free energy derivative in S-eqn:fe_derivative_nonlocal, and provides a way to obtain the optimal volume fraction profile and the equilibrium length scale L at the same time, thus greatly reducing computational costs.
We first express the Flory-Huggins part of the free energy in M-eqn:f_FH in a symmetric form, with not only the volume fraction fields of the elastic component and the solvent, and , but also their conjugated fields and , respectively.
In one dimension, the average Flory-Huggins free energy at given period L̃ reads
= /1/L̃∫_0^L̃ x [(χ - - ) - log- log
+ log + log] ,
where the single molecular partition functions and are defined as
= 1/L̃∫_0^L̃ x e^- = 1/L̃∫_0^L̃ x e^- .
Minimizing the free energy in S-eqn:fe_functiuonal_generic with respect to and gives
= /e^- = /e^- .
Note that compared to M-eqn:f_FH, S-eqn:fe_FH_numerical does not change the minimum of the free energy functional, since M-eqn:f_FH can be fully recovered by inserting S-eqn:fe_FH_numerical_Q and S-eqn:fe_FH_numerical_phi into S-eqn:fe_FH_numerical.
This method brings two advantages:
First, the explicit logarithm terms of and are removed, circumventing the numerical difficulty related to negative volume fractions.
Second, the average volume fraction is automatically kept constant, since and act as normalization factors.
The interfacial term is also reinterpreted in a symmetric form,
= /1/L̃∫_0^L̃[κ/2(/ x)^2 + κ/2(/ x)^2] ,
while the elastic energy term
= E/2L̃∫_0^L̃_0∫_-∞^+∞ u/ X u/ X' g_ξ(X'-X) .
is expressed with the displacement field u, where L̃_0 is the period in the reference frame.
The Lagrangian multipliers must also be included in the numerical calculation to enforce the incompressibility and the volume conservation,
= /1/L̃∫_0^L̃ζ(+-1)+ /1/L̃∫_0^L̃η(J - ϕ_0) ,
where J= u / X + 1.
In addition to S-eqn:fe_FH_numerical_phi, other self-consistent equations for numerical calculation can be obtained by minimizing the sum of the four energy terms in S-eqn:fe_FH_numerical and S-eqn:fe_I_numerical–<ref>, which reads
= χ -κ^2/ x^2 + ζ + η J
= χ -κ^2/ x^2 + ζ
0 = / X( E/∫_-∞^+∞ u/ X' g_ξ(X'-X)) + ϕ_0 / x(η J)
1 =+
ϕ_0 =J .
Note that we transform all the coordinates to the reference frame, where the convolution exhibits a simple form and can be calculated efficiently with fast Fourier transforms (FFT).
In practice, we use the periodic alternative u^*(X)=u(X)-(ϕ_0/ϕ̅-1)X as the free variable since the displacement field u(X) is not periodic.
Using u, and as the main variables, the following iteration scheme solves S-eqn:sc_equation_numerical,
J^(i) = u^(i)/ X + 1
(η J)^(i) = - 1/ϕ_0∫ X [J^(i)/ X( E/∫_-∞^+∞ u^(i)/ X' g_ξ(X'-X))]
^(i) = 1/L̃∫_0^L̃_0 X J^(i) e^-^(i)
^(i) = 1/L̃∫_0^L̃_0 X J^(i) e^-^(i)
^(i) = /^(i)e^-^(i)
^(i) = /^(i)e^-^(i)
ζ^(i) = 1/2[^(i)+^(i)-κ^2/ x^2^(i) -κ^2/ x^2^(i) -(η J)^(i)]
u^(i,new) = ∫ X (ϕ_0/^(i)-1)
^(i,new) = χ^(i) -κ^2/ x^2^(i) + ζ^(i) + (η J)^(i)
^(i,new) = χ^(i) -κ^2/ x^2^(i) + ζ^(i) .
To improve numerical stability, a simple mixing method is used in most cases, where the differences between the new fields and the old ones are partially accepted,
u^(i+1) = u^(i)+λ_u(u^(i,new) - u^(i))
^(i+1) = ^(i)+λ_w(^(i,new)-^(i))
^(i+1) = ^(i)+λ_w(^(i,new)-^(i)) .
Here λ_u and λ_w are two empirical constants, which usually take value smaller than 0.1.
To accelerate the convergence, Anderson mixing is also used every few steps <cit.>.
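The relaxation strategy can be illustrated on a generic fixed-point problem; the sketch below shows plain simple mixing only (without Anderson acceleration), with a toy update map standing in for the actual self-consistent update of the fields.

import numpy as np

def simple_mixing(update, w0, lam=0.05, tol=1e-5, max_iter=100000):
    # relax w towards a fixed point w = update(w), accepting only a fraction lam of each step
    w = w0
    for _ in range(max_iter):
        w_new = update(w)
        if np.max(np.abs(w_new - w)) < tol:   # residual of the self-consistent equations
            return w_new
        w = w + lam * (w_new - w)             # partial acceptance stabilises the iteration
    return w

# toy example: fixed point of w -> cos(w) on a grid, standing in for the field update
w_fix = simple_mixing(np.cos, w0=np.zeros(64))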
The variable-cell method is used to simultaneously optimize the period L̃ of the structure during iteration.
L̃ is evolved in the direction of lowering the free energy <cit.>,
L̃^(i+1) = L̃^(i)-λ_L̃ξ^2/[(∂/∂L̃)^(i) + (∂/∂L̃)^(i)] ,
where the partial derivative of the interfacial energy and the elastic energy can be calculated by
(∂/∂L̃)^(i) = -2/L̃^(i)^(i)
(∂/∂L̃)^(i) = E/2L̃^(i)∫_0^L̃_0∫_-∞^+∞ u^(i)/ X u^(i)/ X' h_ξ(X'-X) ,
with the new kernel h_ξ(X)=g_ξ(X)+X g_ξ(X)/ X.
The evolution of L̃ is also accelerated by Anderson mixing <cit.>.
When converged, L̃ reaches the equilibrium length scale L.
For all of the numerical results, we use periodic boundary condition and 2048 spatial sample points per period of the .
The incompressibility and the relative square-mean-root of the field error are converged to below 10^-5, while the free energy derivative with respect to the period is converged to below 10^-8.
To perform common-tangent construction at fixed interaction strength χ and stiffness E, the free energy curve is numerically sampled with the interval of average fraction ϕ̅ no larger than 0.01.
Then, the numerical sample points are interpolated for the common-tangent construction (S-fig:phase_diagram_1D_real).
We notice that the free energy difference between the pure periodic phase and its coexistence with a homogeneous phase is tiny, which might be related to the irregularity of the droplet placement in real systems.
§ IDENTIFYING THE CONTINUOUS PHASE TRANSITION
We present two ways to identify the continuous phase transition, based on (i) overlap of spinodal and binodal lines and (ii) on a higher-order analysis.
§.§ Spinodal and binodal overlap at continuous phase transition
The spinodal line, based on linear stability analysis, and the binodal line, obtained from full numerics, overlap at the continuous phase transition.
In our system, we have multiple spinodals, since we can have coexistence between two homogeneous and a patterned phase.
To get the spinodal of the homogeneous phase, we first substitute ϵ=J-1=ϕ_0/ϕ-1 into M-eqn:f_elastic to express ϵ and σ_nonlocal with J, and transform the integral into the reference frame.
The average free energy density then reads
f̅ = /ϕ̅/ϕ_01/L̃_0∫_0^L̃_0 J [log + log(1-) + χ(1-) + κ(/ X)^2/J^2 ]
+ ϕ̅/ϕ_0E/2L̃_0∫_0^L̃_0∫_-∞^+∞[J(X)-1][J(X')-1] g_ξ(X'-X) .
Since we have J=ϕ_0/ϕ̅ for the homogeneous phase average volume fraction ϕ̅, we find
f̅_homo. = k_B T/ν(ϕ̅logϕ̅ + (1-ϕ̅)log(1-ϕ̅) + χϕ̅(1-ϕ̅)) + E/2 ϕ̅/ϕ_0(ϕ_0/ϕ̅-1)^2 .
To test the stability, we first take the second-order derivative of the f̅_homo. with respect to ϕ̅,
∂^2 f̅_homo./∂ϕ̅^2 = E ϕ_0/ϕ̅^3+ k_B T/ν(1/(1-ϕ̅)+1/ϕ̅-2 χ) .
The spinodal of macrophase separation between two homogeneous phase corresponds to f̅_homo.”(ϕ̅)=0, yielding
χ_macro = 1/2( E ν/k_B T ϕ_0/ϕ̅^3+1/(1-ϕ̅)+1/ϕ̅) ;
see the black dashed lines in S-fig:phase_diagram_with_spinodal.
To consider microphase separation, we perturb J from a constant value, J=(ϕ_0/ϕ̅)(1+a cos(q X)), where a is the amplitude of the perturbation and q is the associated wave number.
Evaluating S-eqn:fe_full, taking the second-order derivative of f̅ with respect to a, and taking the limit a → 0, we find
.∂^2 f̅/∂ a^2|_a=0 = E ϕ_0 e^-1/8ξ ^2 q^2/2 ϕ̅+ /(κ q^2 ϕ̅^4/ϕ_0^2-χϕ̅^2+ϕ̅^2/2-2 ϕ̅+ϕ̅/2) .
Stability of the homogeneous state requires .∂^2 f̅/∂ a^2|_a=0≥0 for all q, implying
χ≤χ_u = E/ϕ_0 e^-1/8ξ ^2 q^2/2 ϕ̅^3+κ q^2 ϕ̅^2/ϕ_0^2+1/2 ϕ̅+1/2-2 ϕ̅
holds for all q.
χ_u assumes its minimum at q=q^* with
q^*=2 √(2)/ξ√(log( E/ξ ^2 ϕ_0^3/16 κϕ̅^5) ) ,
if ( E/) ξ ^2 ϕ_0^3>16 κϕ̅^5, leading to the spinodal of microphase separation,
χ_micro = 8 κϕ̅^2 log( E/ξ ^2 ϕ_0^3/16 κϕ̅^5)/ξ ^2 ϕ_0^2+8 κϕ̅^2/ξ ^2 ϕ_0^2+1/2 ϕ̅+1/2-2 ϕ̅ ;
see the brown dashed lines in S-fig:phase_diagram_with_spinodal.
Comparing this line to the binodal curve obtained from the full numerics, we find that they overlap in a parameter windows, which hints at the existence of a continuous phase transition (S-fig:phase_diagram_with_spinodal).
Note that if ( E/) ξ ^2 ϕ_0^3>16 κϕ̅^5 does not hold then χ_u takes its minimum at q → +0, where χ_u turns to χ_macro and the macrophase spinodal given by S-eqn:spinodal_macro is recovered.
§.§ Higher-order analysis identifies the continuous phase transition
S-eqn:spinodal_micro defines the spinodal curve of the microphase separation in the ϕ̅-χ plane, which implies that the homogeneous phase is (meta-)stable below the curve, and unstable above the curve.
No information about stability is provided right on the spinodal curve, since S-eqn:chi_requirement_for_given_q is not a sufficient condition for stability when χ=χ_u. In order to test the stability on the spinodal, we approximate J as J=(ϕ_0/ϕ̅)(1+ a_q^*cos(q^* X) + ∑_q≠ q^* a_q cos(q X)) and perturb the average volume fraction ϕ̅ as ϕ̅+δϕ̅ at the same time.
The average free energy density can then be written as
f̅=f̅[δϕ̅, a_q^*, a_q_1, a_q_2,…] .
On the spinodal line of the homogeneous phase, we have
.∂^2 f̅/(∂δϕ̅)^2|_0 >0 , .∂^2 f̅/(∂ a_q^*)^2|_0=0 , .∂^2 f̅/(∂ a_q)^2|_0 >0 if q ≠ q^* ,
while all second-order cross derivatives vanish.
Here .·|_0 indicates the value at the homogeneous state on the spinodal.
Expanding the free energy up to fourth-order, we find
f̅ = .f̅|_0 + .∂f̅/∂δϕ̅|_0δϕ̅
+ 1/2.∂^2 f̅/(∂δϕ̅)^2|_0 (δϕ̅)^2 + 1/2∑_q q^*.∂^2 f̅/(∂ a_q)^2|_0 a_q^2
+1/2.∂^3 f̅/(∂ a_q^*)^2∂δϕ̅|_0 a_q^*^2δϕ̅ + 1/2.∂^3 f̅/(∂ a_q^*)^2∂ a_2q^*|_0 a_q^*^2 a_2q^*
+1/24.∂^4 f̅/(∂ a_q^*)^4|_0 a_q^*^4
+o(‖ (δϕ̅, a_q^*^2, a_q_1, a_q_2,…) ‖^2 ) ,
where omitted terms are either zero or absorbed in the remainder.
The stability of the homogeneous phase on the spinodal then requires
1/24.∂^4 f̅/(∂ a_q^*)^4|_0
- (1/2.∂^3 f̅/(∂ a_q^*)^2∂δϕ̅|_0)^2/(2.∂^2 f̅/(∂δϕ̅)^2|_0)
- (1/2.∂^3 f̅/(∂ a_q^*)^2∂ a_2q^*|_0)^2/(2.∂^2 f̅/(∂ a_2q^*)^2|_0)
≥ 0 .
Solving this inequality for ϕ̅ numerically, we obtain the window of ϕ̅ with a continuous phase transition.
Combined with S-eqn:spinodal_micro, the phase boundary of the continuous phase transition can be obtained, which is marked as red dotted lines or red surface in S-fig:phase_diagram_with_spinodal, M-fig:phase_diagram_grandcanonical and M-fig:phase_diagram.
In fact, in all these phase diagrams, the continuous transitions are verified with both the overlapping of spinodal and binodal, as well as the higher-order stability analysis.
§ APPROXIMATE MODEL AND ASYMPTOTIC SOLUTIONS
To understand the scaling law of the length scale L, we assume sharp interfaces and approximate the volume fraction profile ϕ(x) as a box function,
ϕ(x)=ϕ_+ + (ϕ_--ϕ_+)Π_αL̃(x-L̃/2)
within one period x∈[0,L̃).
Here α=(ϕ_+-ϕ̅)/(ϕ_+-ϕ_-) is the fraction of the solvent-rich region relative to the period L̃, ϕ̅ is the average volume fraction of the elastic component in the deformed state, and ϕ_- and ϕ_+ are the minimum and the maximum value of the volume fraction profile, respectively.
Converting the profile to the reference frame and making use of the relationship between strain and the volume fraction given by M-eqn:strain_fraction_relation, we find
ϵ(X)=ϕ_0/ϕ_+ - 1 + (ϕ_0/ϕ_--ϕ_0/ϕ_+)Π_α_0 L̃_0(X-L̃_0/2) ,
where L̃_0=(ϕ̅/ϕ_0)L̃ and α_0=(ϕ_-/ϕ̅)α are the period and relative droplet size in the reference frame, respectively.
To evaluate the elastic energy, we first consider the case where the period is much larger than the microscopic length scale (L̃≫ξ).
In this case, we can safely ignore the interference between the neighboring periods since the Gaussian convolution kernel decays exponentially with distance.
The elastic energy density thus reads
= E/2(ϕ_0/ϕ_--ϕ_0/ϕ_+)^2(ξ/L̃e^-2β^2L̃^2/ξ^2-1/√(2π)+L̃ (√(2)βL̃/ξ)) + f̅_el,0 with β = ϕ_-(ϕ_+-ϕ̅)/ϕ_0(ϕ_+-ϕ_-) .
where f̅_el,0 is a term with no dependence on L̃,
f̅_el,0 =E/2[2(1-ϕ̅/ϕ_0)ϵ_0-ϕ̅/ϕ_0ϵ_0^2] ,
with ϵ_0=ϕ_0/ϕ_+-1.
Expanding S-eqn:fe_regimeBC_general around L̃→+0, we have
f̅_el^ 2 = E(ϕ_+ - ϕ̅)^2/(√(2π)ϕ_+^2) L̃/ξ + f̅_el,0 +o(L̃^3) ,
while expanding around L̃→+∞ leads to
^ 3 = E(ϕ_+-ϕ̅)(ϕ_+-ϕ_-)ϕ_0/ϕ_-ϕ_+^2 - E1/2√(2π)(ϕ_0/ϕ_--ϕ_0/ϕ_+)^2ξ/L̃ + f̅_el,0 +o(1/L̃^2) .
Next, we consider the case L̃≪ξ, where the elastic energy can be derived from the physical picture.
Since the period is much smaller than the microscopic length scale ξ, the convolution of the strain field simply gives the average strain which is not affected by the period L̃.
Evaluating M-eqn:f_elastic and converting it to energy density, we obtain
f̅_el^ 1 = E/2 ϕ_0/ϕ̅(ϕ̅/ϕ_0-1)^2 .
Combining the results in S-eqn:fe_regimeB, S-eqn:fe_regimeC and S-eqn:fe_regimeA, we find
f̅_el ≈ E/2 ϕ_0/ϕ̅(ϕ̅/ϕ_0-1)^2 L̃<L_min
 f̅_el,0 + E(ϕ_+-ϕ̅)^2/(√(2π)ϕ_+^2) L̃/ξ L_min<L̃<L_max
 f̅_el,0 + E(ϕ_+-ϕ_-)(ϕ_+-ϕ̅)ϕ_0/(ϕ_-ϕ_+^2) - E/(2√(2π))(ϕ_0/ϕ_- - ϕ_0/ϕ_+)^2 ξ/L̃ L̃>L_max ,
where the boundary values L_min and L_max will be estimated later.
Differentiating with respect to L̃, we have
 ∂f̅_el/∂L̃≈
 0 L̃<L_min
 E(ϕ_+-ϕ̅)^2/(√(2π)ϕ_+^2) 1/ξ L_min<L̃<L_max
 E/(2√(2π))(ϕ_0/ϕ_- - ϕ_0/ϕ_+)^2 ξ/L̃^2 L̃>L_max .
Since the derivatives of the average free energy density govern the equilibrium length scale (see S-eqn:fe_derivative_nonlocal), we determine L_max by balancing the last two terms of the derivative of f̅_el, resulting in
 L_max=1/√(2) (ϕ_0/ϕ_-) (ϕ_+-ϕ_-)/(ϕ_+-ϕ̅) ξ .
In contrast, the derivatives are all constants in the first two regimes of S-eqn:fe_derivatives_piecewise, so we cannot estimate the boundary in the same way.
We thus balance f̅_el directly in the first two regimes to get
L_min=√(π/2)ϕ_0/ϕ̅ξ .
Converting the two bounds L_min and L_max to the reference frame yields M-eqn:length_bound_ref in the main text.
For completeness, we here also present the generic expression of f̅_el of the approximated model with the ϑ function
 f̅_el =E/2L̃∫_(1-α_0)L̃_0/2^(1+α_0)L̃_0/2(ϕ_0/ϕ_--ϕ_0/ϕ_+)^2 ϑ_3(π(X'-X)/L̃_0,e^-π ^2 ξ ^2/2 L̃_0^2) + f̅_el,0 .
This integral can be evaluated numerically; see the black line in M-fig:box_function_approxC for its derivative.
Note that all orders of derivative with respect to L̃ at L̃→+0 are zero, so the free energy is almost independent of L̃, consistent with our picture to derive S-eqn:fe_regimeA.
|
http://arxiv.org/abs/2307.04178v1 | 20230709140217 | Shock excitation of H$_2$ in the James Webb Space Telescope era | [
"L. E. Kristensen",
"B. Godard",
"P. Guillard",
"A. Gusdorf",
"G. Pineau des Forets"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Niels Bohr Institute, University of Copenhagen, Øster Voldgade 5–7, 1350 Copenhagen K, Denmark
[email protected]
Observatoire de Paris, Université PSL, Sorbonne Université, LERMA, 75014 Paris, France
[email protected]
Laboratoire de Physique de l’École Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université Paris Cité, 75005 Paris, France
Sorbonne Université, CNRS, UMR 7095, Institut d’Astrophysique de Paris, 98bis bd Arago, F-75014 Paris, France
Institut Universitaire de France, Ministère de l’Enseignement Supérieur et de la Recherche, 1 rue Descartes, 75231 Paris Cedex F-05, France
Université Paris-Saclay, CNRS, Institut d’Astrophysique Spatiale, 91405 Orsay, France
Molecular hydrogen, H_2, is the most abundant molecule in the Universe. Thanks to its widely spaced energy levels, it predominantly lights up in warm gas, T ≳ 10^2 K, such as shocked regions externally irradiated or not by interstellar UV photons, and it is one of the prime targets of James Webb Space Telescope (JWST) observations. These may include shocks from protostellar outflows, supernova remnants impinging on molecular clouds, all the way up to starburst galaxies and active galactic nuclei.
Sophisticated shock models are able to simulate H_2 emission from such shocked regions. We aim to explore H_2 excitation using shock models, and to test over which parameter space distinct signatures are produced in H_2 emission.
We here present simulated H_2 emission using the Paris-Durham shock code over an extensive grid of ∼ 14,000 plane-parallel stationary shock models, a large subset of which are exposed to a semi-isotropic external UV radiation field. The grid samples six input parameters: the preshock density, shock velocity, transverse magnetic field strength, UV radiation field strength, the cosmic-ray-ionization rate, and the abundance of polycyclic aromatic hydrocarbons, PAHs. Physical quantities resulting from our self-consistent calculations, such as temperature, density, and width, have been extracted along with H_2 integrated line intensities. These simulations and results are publicly available on the Interstellar Medium Services platform.
The strength of the transverse magnetic field, as quantified by the magnetic scaling factor, b, plays a key role in the excitation of H_2. At low values of b (≲ 0.3, J-type shocks), H_2 excitation is dominated by vibrationally excited lines; whereas, at higher values (b ≳ 1, C-type shocks), rotational lines dominate the spectrum for shocks with an external radiation field comparable to (or lower than) the solar neighborhood. Shocks with b ≥ 1 can potentially be spatially resolved with JWST for nearby objects. H_2 is typically the dominant coolant at lower densities (≲ 10^4 cm^-3); at higher densities, other molecules such as CO, OH, and H_2O take over at velocities ≲ 20 km s^-1 and atoms, for example, H, O, and S, dominate at higher velocities. Together, the velocity and density set the input kinetic energy flux. When this increases, the excitation and integrated intensity of H_2 increases similarly. An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to the input kinetic energy flux. These results provide an overview of the energetic reprocessing of input kinetic energy flux and the resulting H_2 line emission.
Shock excitation of H_2 in the James Webb Space Telescope eraTables B.1 – B.7 are only available in electronic form
at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (<130.79.128.5>)
or via <https://cdsarc.cds.unistra.fr/cgi-bin/qcat?J/A+A/675/A86>
L.E. Kristensen1
B. Godard2,3
P. Guillard4,5
A. Gusdorf3,2
G. Pineau des Forêts6,2
Received 27 February 2023; accepted 23 May 2023
============================================================================================================================================================================================================================================================
§ INTRODUCTION
Shocks are inherently out-of-equilibrium time-dependent phenomena that permeate space. They appear over a wide range of scales, ranging from, for example, accretion onto stars or protoplanetary disks, winds and jets driven by accreting (proto)stars, planetary nebulae, supernova remnants, starburst galaxies, jets from active galactic nuclei (AGN), and to galaxy-galaxy collisions <cit.>. Common to all these phenomena is that the input kinetic energy flux dissipated by the shock accelerates, heats, and compresses the medium. When the medium cools down, radiation is emitted, which we observe. To understand the physical origin of emission (e.g., preshock density, shock velocity) and the energetic processing taking place in shocks, it is thus necessary to reverse engineer the observed light. Doing so requires models.
One of the often-used tracers of shocks is molecular hydrogen, H_2 <cit.>. This is the most abundant molecule in the interstellar medium by some four orders of magnitude over CO and H_2O. The molecule is the lightest, and so it has the most widely spaced rotational levels (J = 1 has E_ up / k_ B = 170 K and J = 2 has E_ up / k_ B = 510 K). As such, it is predominantly excited in warm (T ≳ 10^2 K) and hot (T ≳ 10^3 K) molecular gas. This molecule has no permanent dipole moment, and only forbidden electric quadrupole transitions occur, although at low probability. The main reason H_2 emission is still bright is because of its high abundance.
H_2 emission is readily observed from the ground, particularly in higher-excited rovibrational transitions at near-infrared wavelengths <cit.>. The brightest of these is typically the = 1–0 S(1) line at 2.12 μm. A few pure rotational lines are also accessible from the ground, and the line profiles may even be velocity resolved on telescopes such as the Very Large Telescope <cit.>. However, it is necessary to go above the atmosphere to observe the lower-excited pure rotational transitions of H_2. Space-based telescopes such as the Infrared Space Observatory (ISO) and the Spitzer Space Telescope (Spitzer) both observed these transitions toward numerous shocked regions <cit.>, as did the Stratospheric Observatory For Infrared Astronomy <cit.>. Now the James Webb Space Telescope (JWST) is doing the same <cit.>. Particularly, the MIRI instrument is observing the rotational transitions with a gain in sensitivity and spatial resolution of two orders of magnitude compared with Spitzer, and an increase in spectral resolution of a factor five <cit.>. Similar improvements are reached with the NIRSpec instrument compared with the VLT-SINFONI integral-field unit, allowing deep observations of the rovibrational lines of . The wavelength coverage of NIRSpec, NIRCam, and MIRI are illustrated in Fig. <ref>, which shows a simulated H_2 spectrum with the instrument wavelength coverages displayed.
Planning and interpreting the abovementioned observations is often done by use of models. With models, it is possible to constrain, for example, the shock velocity and preshock density, which together give the input kinetic energy flux, 1/2 ρ _ s^3, where ρ is the mass density and _ s is the shock velocity. In molecular shocks, a comparison reveals that up to 50% of the input energy is radiated away in H_2 emission <cit.>, depending on shock conditions, making H_2 the dominant coolant in these shocks. Spitzer particularly opened up for characterization of the pure rotational H_2 lines. Observations and subsequent modeling revealed that most H_2 emission could be reproduced by shock models <cit.>. However, when additional constraints, such as the H/H_2 ratio and the cooling length are included for protostellar outflows, a single shock model no longer reproduces observations <cit.>. Instead, as argued, the observational beam likely catches different shocks, or more complex shock geometries than 1D, which is to be expected; this is not just the case for protostellar outflows, but also observations of shocks in the diffuse gas of starburst and colliding galaxies <cit.>. Irrespective of the specific science case, the first step in comparing observations to models is to have the models available.
The Paris-Durham shock code <cit.> has been developed and maintained for more than 35 years <cit.>. The code can either find jump (J-type shocks) or continuous (C-type shocks) solutions depending on the input physical parameters. Recent developments include the treatment of an external UV radiation field <cit.>, and self-irradiation in high-velocity shocks <cit.>. Here we present the results of running a large grid of simulations of (externally irradiated) shocks with the goal of exploring how the input energy flux (kinetic and radiative) is reprocessed and ultimately results in H_2 emission. These model predictions can be used directly to interpret, for example, JWST observations of shock emission.
The paper is organized as follows. Section <ref> describes the shock model and the model grid, with a particular emphasis on H_2 excitation and emission. The section also describes which physical quantities were extracted from the models, and the methodology applied. Section <ref> describes the results and provides a discussion of these results. Finally, the main points are summarized in Sect. <ref>.
§ MODEL AND GRID DESCRIPTION
The current version of the multifluid shock code is extensively described in <cit.> and references therein, and only the main relevant points will be described here. These points particularly relate to H_2 emission and other observable diagnostics, but also how the initial shock conditions are calculated. The code is publicly available[<http://ism.obspm.fr/shock.html>], and the entire grid presented in this paper is also available on the ISM platform[ <https://app.ism.obspm.fr/ismdb/>]. In Appendix <ref> we provide an introduction to this platform and demonstrate how it can be used.
§.§ Initial conditions
The main focus of this paper is on H_2, and so the chemistry considered in this paper and, more importantly, in the models run, is a gas-phase-only chemistry. That is, grain adsorption and desorption processes are not included. The only exceptions are the formation of H_2 on grains, and grain erosion for the release of elemental Si, Fe, etc. into the gas phase. Photochemistry is included in all steps of the calculation; readers can refer to the text below for more details.
Our assumption is that the initial conditions are in equilibrium, that is, thermal and chemical equilibrium with or without an incident radiation field. Running a shock model therefore requires multiple steps, all done using the Paris-Durham code <cit.>. This code simulates steady-state gas equilibrium, photon-dominated regions (PDRs), or shocks. These steps are illustrated in Fig. <ref>. First, a chemical steady-state calculation is run with the given density and radiation field. For irradiated shocks, the next step is to take the final equilibrium conditions from the chemical steady-state calculation and use these as input for a PDR calculation, where a tracer particle is advected at a small velocity (≤ 0.01 km s^-1) from an A_ V of 10^-9 to 10^-1. The advection speed is chosen such that the time it takes to cross the PDR front is long enough that equilibrium is reached; this timescale is 10^5–10^9 years for high to low densities. The choice of a final A_ V of 0.1 is motivated by two considerations. First, the primary focus of this paper is H_2 and the A_ V thus needs to be high enough that the preshock gas is substantially molecular (molecular fraction ≥ 0.1) for the majority of the G_0 values here, specifically the part of the grid where G_0/n_ H < 1. Second, the A_ V should be low enough that H_2 is not fully self-shielded. These two conditions are met at an A_ V of 0.1. The final conditions, in terms of steady-state abundances, temperature, and H_2 level populations, are then used as the input physical conditions of the shock calculation. The shock is run in the final step.
The initial elemental abundances are provided in Table <ref>. Of particular importance is the abundance of polycyclic aromatic hydrocarbons (PAHs). In the model, a representative PAH molecule is included, C_54H_18 and its singly charged ions. Table <ref> reports the amount of H and C locked up in this PAH for a PAH abundance of X(PAH) = 10^-6. The grain temperature is kept fixed at 15 K.
We cover a 6D parameter space with preshock density (n_ H = 2 n(H_2) + n(H)), shock velocity (v_ s), strength of the transverse magnetic field[The transverse magnetic field strength scales with the density as B = b ×√(n_ H ( cm^-3)) μG, where b is a scaling factor.] (b), external UV radiation <cit.>, H_2 cosmic-ray ionization rate (ζ_ H2), and the fractional abundance of the PAHs (X(PAH)). The parameter space is presented in Table <ref>. Depending on the initial conditions, the code either finds a Jump (J-type) solution or a Continuous (C-type) solution (see below, Sect. <ref> for more details). Throughout this paper, we use two shock models to illustrate differences when changing b from 0.1 to 1.0; these are referred to as model A and B (Table <ref>). For the given set of input parameters, model A gives rise to a J-type shock, and model B a C-type shock.
§.§ Molecular hydrogen
Collisional excitation and de-excitation of H_2 is calculated for collisions with H, H_2, and He. The collisional rate coefficients for H_2-H_2 collisions are adopted from <cit.> and for H_2-He collisions from <cit.>. In the case of H_2-H collisions, for the first 49 levels of H_2 the rates are from <cit.> and <cit.>, where the rates have been calculated using a full quantum mechanical approach. For the remaining levels, the rates from <cit.> are used. They were calculated using a quasi-classical approach. The reactive reaction rates of H_2 with H are from <cit.>.
The number of levels has been set to 150 here, and the highest level is v = 8, J = 3 (E/k_ B=39,000 K). The model assumes that there are no levels between the user-set value and the dissociation level. This may be important when calculating the dissociation rate of H_2, since molecules that are already excited have internal energies that are closer to the dissociation limit, and thus require less energy to dissociate. For the models run here, we find that there is no significant difference in H_2 emission by increasing the number of levels.
Depending on the initial conditions, H_2 may dissociate in the shock through collisions. As the post-shock gas cools, H_2 reforms on the grains <cit.> and it is necessary to account for the bond energy released (4.5 eV ∼ 5.1 × 10^4 K). We assume that approximately one third of the energy goes to internal energy of the molecule. This internal energy distribution follows a Boltzmann distribution with a temperature corresponding to ∼ 17,000 K. The remaining energy is equally split between kinetic energy of the newly formed H_2 molecule, and heating of the grain.
The H_2 level populations are used for calculating the local H_2 line emissivities. This is done under the assumption of optically thin emission, which typically applies to H_2 emission because of its lack of a permanent dipole moment. Of these lines, 1000 are output explicitly and stored as emissivity profiles in this grid. About 900 of these H_2 lines are covered by the JWST instruments MIRI and NIRSpec. These two instruments together cover the wavelength range of 0.6 – 28 μm, that is, the v = 0–0 S(0) ground-state line at 28.3 μm (Fig. <ref>) is not covered.
§.§ Grid
The total set of grid parameters is presented in Table <ref>; covering this range of parameter space resulted in ∼ 14,000 simulations in total. Each simulation produces a number of outputs that are all stored in human-readable ASCII files and an HDF5 file for easy extraction[The full model outputs are provided on the ISM platform: <https://app.ism.obspm.fr/ismdb/>]. These include physical properties of the shock (e.g., temperature, density, velocity) as a function of distance and time through the shock, and chemical properties (e.g., local densities, charge state, column densities), excitation of H_2 (level populations and local emissivities). In this case, the time is calculated as the neutral flow time, t_ n = ∫ dz / _ n. In total, more than 2600 quantities are stored as profiles through each shock, and 1400 quantities are stored as integrated values.
The model integrates the gas state far downstream in order to ensure that a steady-state solution is contained within the simulation. Therefore, special care needs to be taken when extracting integrated quantities such as column densities or line intensities. We here adopt a criterion for the size of the shock similar to that in <cit.>, based on radiative energy dissipation, and set that limit as the point where 99.9% of the total radiation has been emitted (see Appendix <ref>). Specifically, this means that the size, z_ s, is defined as:
[Υ(z_ s) - Υ(0)]/[Υ(∞)-Υ(0)] = 99.9 % ,
where Υ is the sum of the kinetic, magnetic, and thermal energy fluxes.
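In practice, this criterion can be applied to a tabulated shock profile with a few lines of code. The following Python sketch (an illustration, not part of the public code distribution) interpolates the cumulative dissipated fraction onto the 99.9% level:

import numpy as np

def shock_size(z, upsilon, fraction=0.999):
    # z: distance grid through the shock; upsilon: kinetic + magnetic + thermal
    # energy flux at each z. Assumes the dissipated fraction grows monotonically with z.
    frac = (upsilon - upsilon[0]) / (upsilon[-1] - upsilon[0])
    return np.interp(fraction, frac, z)

# Toy profile with exponential dissipation: z_s should be close to -ln(0.001) ~ 6.9
z = np.linspace(0.0, 10.0, 1001)
print(shock_size(z, np.exp(-z)))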
For ease of use, we provide a number of tables containing already-extracted results at the Centre de Données astronomiques de Strasbourg (CDS[Add link to CDS archive at publication stage.]). Example tables are provided in Appendix <ref> in Tables <ref> – <ref>. These tables include:
<ref> Physical parameters such as peak temperature, density, width, and age of the shock;
<ref> Column densities of selected species, particularly H, H_2, O, OH, H_2O, C^+, C, and CO;
<ref> Data required for creating H_2 excitation diagrams, i.e., ln(N/g) and E for each of the 150 levels;
<ref> H_2 integrated intensities of the 1000 lines extracted, along with their wavelength;
<ref> Width of the H_2 emitting zone for the v = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines;
<ref> H_2 o/p ratios determined both locally and integrated through the shock;
<ref> Integrated line intensities of 29 transitions arising from C^+, Si^+, H, C, Si, O, S^+, N^+, N, and S.
On occasion, the model does not converge for numerical reasons; this happens in ∼5% of cases. This convergence-failure occurs often in C^*-type shocks, when the flow crosses the first sonic point <cit.>. In these cases, the model output is ignored but the input parameters are still recorded in the tables.
§.§ Model limitations
The model has a number of inherent assumptions, which are discussed in the following. These include the shock geometry, magnetic field orientation, self-irradiation, stationary shocks, and grain chemistry.
Geometry. The model treats a plane-parallel shock front, thus ignoring geometry. The lack of geometry is especially important in J-type shocks, where the gas may be compressed by four orders of magnitude or more. In nature, such a compression would quickly lead to an expansion of the high-pressure post-shock gas into the surrounding low-pressure medium; however, that is not possible in a 1D simulation. As a result, the post-shock density could be overestimated. For the case of H_2 emission, this is less important: most of the H_2 emission is generated in the warm parts of the shock where T > 100 K, prior to where significant post-shock expansion would occur.
Magnetic field orientation. The magnetic field orientation is assumed to be perpendicular to the direction of motion. This may not always be the case in molecular clouds, in fact, there is no a priori reason to assume the shock wave and field orientation are well aligned. If the field is not perpendicular to the direction of motion, the compression will lead to a change in field geometry, as described and discussed in <cit.>. These effects are not included here.
Self-irradiation. The model is best suited for molecular shocks. In shocks where H_2 is dissociated and atomic H is excited, the shocks become self-irradiated. While this self-irradiation can be solved iteratively <cit.>, it is not included in the present version of the grid. This limits J-type shocks to v_ s≲ 30 km s^-1.
Stationary shocks. All the shocks in this paper are stationary shocks. This implies there needs to be enough time for the stationary structure to fully develop. While the code can mimic non-stationary shocks, an additional free parameter, the age of the shock, is needed, and it is deemed beyond the scope of this work to explore the effects of that parameter <cit.>.
Grain chemistry. Grain-grain interactions are omitted in this grid. For conditions where the velocity is below ∼ 25 km s^-1 and the density is below ∼ 10^5 cm^-3, this assumption is likely valid <cit.>. At larger velocities or densities, grains may interact, leading to grain evaporation and fragmentation which changes the size distribution of grains. Finally, in this grid we do not include ice mantles on the grains.
§ RESULTS AND DISCUSSION
The shock has an initial kinetic energy flux of 1/2 ρ _ s^3, where ρ = 1.4 n_ H m_ H is the mass density; most of this energy is radiated away in the shock. Figure <ref> shows how the energy is lost in shocks with b = 0.1, velocities of 20 and 30 km s^-1, and densities of 10^4 and 10^6 cm^-3. The pie charts are sorted by initial kinetic energy flux going from left to right, and top to bottom. The H_2 fraction decreases with increasing velocity and density because of dissociation. H_2 then reforms on the grains in the postshock gas introducing a heating term which counteracts the cooling of H_2. This is visible in the pie charts as the fraction of H_2 emission decreases monotonically with input kinetic energy flux, from 75% to 0.5%.
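For reference, the input kinetic energy fluxes behind these pie charts follow directly from the expression above; a minimal Python sketch (with the proton mass as the only constant) is:

M_H = 1.6726e-24                              # proton mass [g]

def kinetic_energy_flux(n_H, v_s_kms):
    # 0.5 * rho * v_s^3 with rho = 1.4 * n_H * m_H; result in erg s^-1 cm^-2
    rho = 1.4 * n_H * M_H                     # g cm^-3
    return 0.5 * rho * (v_s_kms * 1.0e5) ** 3

for n_H in (1.0e4, 1.0e6):
    for v_s in (20.0, 30.0):
        print(f"n_H = {n_H:.0e} cm^-3, v_s = {v_s:.0f} km/s: "
              f"{kinetic_energy_flux(n_H, v_s):.2e} erg s^-1 cm^-2")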
Figure <ref> is similar to Fig. <ref>, but for a stronger magnetic field (b = 1.0), i.e., the input kinetic energy fluxes are the same as above. Increasing b to 1 has the consequence that the two 20-km s^-1 shocks become C-type shocks; the 30-km s^-1 shocks remain J-type shocks. The J-type shocks are dissociative, and the H_2 cooling fraction thus decreases significantly, as also illustrated in Fig. <ref>.
The distribution of energy flux into emission lines has been described previously <cit.>, and a comparison in H_2 cooling fractions of the total input kinetic energy flux reveals broad agreement between different models and previous versions of the Paris-Durham model. These pie charts provide a global view of the energetic reprocessing in these shocks. In the following, the role of the different input parameters on the energetic reprocessing will be discussed in more detail, with a specific emphasis on H_2 emission.
§.§ Magnetic field
The strength of the transverse magnetic field, B, sets the ion-magnetosonic speed, c_ ims, together with the ion mass density, ρ_ i:
c_ ims = (c_ s^2 + B^2 / 4πρ_ i)^1/2,
where c_ s is the sound speed. For v_ s < c_ ims, the ionized and neutral fluids are decoupled and a magnetic precursor is present <cit.>; the code treats these multiple fluids self-consistently. For v_ s > c_ ims, the ionized and neutral fluids are coupled, and there is no magnetic precursor (Fig. <ref>). We refer to Sect. 2.1 of <cit.> for a more in-depth description of the differences between J- and C-type shocks. Figure <ref> shows where the different shock types are as a function of b and v_ s for a density of 10^4 cm^-3, while Fig. <ref> shows the shock type for a part of the grid presented in this paper. For low values of b (≲0.3), the resulting shocks are J-type shocks, while for b ≳ 1.0 the resulting shocks are predominantly C-type shocks.
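The decoupling condition can be evaluated with the scaling B = b √(n_ H) μG used in the grid. The Python sketch below is purely illustrative: the sound speed and, in particular, the mass density of the charged fluid (which in practice includes charged grains) are free assumptions here, and they control where the transition between shock types falls.

import numpy as np

M_H = 1.6726e-24                                   # proton mass [g]

def ion_magnetosonic_speed(n_H, b, c_s_kms, rho_i):
    # c_ims = sqrt(c_s^2 + B^2/(4 pi rho_i)) with B = b*sqrt(n_H) microGauss
    B = b * np.sqrt(n_H) * 1.0e-6                  # Gauss
    c_s = c_s_kms * 1.0e5                          # cm s^-1
    return np.sqrt(c_s**2 + B**2 / (4.0 * np.pi * rho_i)) / 1.0e5   # km s^-1

n_H = 1.0e4                                        # cm^-3
rho_i = 0.01 * 1.4 * n_H * M_H                     # assumed charged-fluid mass density
for b in (0.1, 1.0):
    print(f"b = {b}: c_ims ~ {ion_magnetosonic_speed(n_H, b, 0.3, rho_i):.1f} km/s")
# A magnetic precursor (C-type solution) requires v_s < c_ims.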
The effect of the magnetic precursor is that the input kinetic energy flux is deposited over a much larger spatial range (Fig. <ref>), resulting in lower peak temperatures when compared to shocks with the same input kinetic energy flux but no magnetic precursor. This naturally affects the excitation of H_2, as illustrated in Fig. <ref> in the form of the fraction of total integrated intensity to initial kinetic energy flux. The H_2 excitation is illustrated for the two reference shocks (Table <ref>), both with the same input kinetic energy flux. The figure demonstrates that for both shocks, most of the kinetic energy is radiated away in H_2 emission (see Fig. <ref> and <ref>); the difference in total H_2 integrated intensity from the two shocks is ∼ 15%. However, the integrated intensity from model B (b=1.0) is dominated by pure rotational emission (> 99% of H_2 emission), whereas it is spread over the vibrational levels in model A (b=0.1).
The differences in H_2 excitation and the origin thereof for different values of b are further explored in Fig. <ref> for models A and B in the left and right column, respectively. The first row shows the emerging H_2 spectrum from the two shocks. As was already clear from Fig. <ref>, most of the H_2 emission in model A is spread over the vibrational transitions, whereas emission in model B predominantly is rotational. To make these artificial spectra, a uniform resolving power of R = λ/Δλ = 2500 is assumed, similar to the resolving powers of the NIRSpec and MIRI instruments on JWST, and the line shapes are Gaussian. That is, the integrated intensity calculated in the models is I_ total = √(π) I_ peakΔλ / (2 √(2 ln 2)). A uniform resolving power implies that the emission from longer-wavelength transitions is spread over a larger wavelength range, and thus the peak emission is lower. This stark difference in the H_2 spectra can be understood from the physical structure of the shock.
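Before turning to that physical structure, the conversion from integrated to peak intensity used for these artificial spectra can be written out explicitly. The Python sketch below simply inverts the relation quoted above for a uniform resolving power; the line intensities are placeholders.

import numpy as np

def peak_intensity(I_tot, lam_um, R=2500.0):
    # Invert I_tot = sqrt(pi) * I_peak * dlam / (2 sqrt(2 ln 2)), with dlam = lam/R
    dlam = lam_um / R
    return I_tot * 2.0 * np.sqrt(2.0 * np.log(2.0)) / (np.sqrt(np.pi) * dlam)

# Same assumed integrated intensity for a near- and a mid-infrared line:
for name, lam in [("v = 1-0 S(1)", 2.1218), ("v = 0-0 S(1)", 17.035)]:
    print(name, f"{peak_intensity(1.0e-4, lam):.2e}")  # longer wavelength -> lower peak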
The kinetic energy flux injected into the two shocks is the same, but the temperature structure is very different. For J-type shocks, such as model A, the maximum temperature can be approximated by <cit.>:
T_ max = 53 K (v_ s/1 km s^-1)^2.
For model A, the maximum temperature is ∼ 2×10^4 K (Fig. <ref>, second row). This high temperature ensures that the vibrational H_2 levels are readily populated. For model B (b = 1.0), on the other hand, the magnetic precursor causes the kinetic energy to be deposited over a much larger scale (∼ 10^3 AU vs. ∼ 1 AU), and the resulting peak temperature is much lower (∼ 2000 K). In this case, the temperature is so low that only the rotational levels are significantly excited.
The third row of Fig. <ref> shows excitation diagrams for the two shocks. For model A, all points fall on a single curved line, indicating that the levels are probing a range of excitation temperatures, T_ ex. Particularly, the higher-J and rovibrational transitions probe hotter gas than the lower-J transitions, and the slope is thus shallower (slope = –1/T_ ex). In this case, the excitation temperature is similar to the gas temperature where the local emissivity peaks (second row of Fig. <ref>). The excitation diagram for model B shows more scatter (caused by the low initial o/p ratio, see below), but the excitation temperatures still match the gas kinetic temperature where the levels are excited. In Appendix <ref> we provide figures showing the extracted excitation temperatures sampling the full range of initial density and shock velocity for b = 0.1 and 1.0, and G_0 = 0 and 1.
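Extracting an excitation temperature from such a diagram amounts to fitting a straight line to ln(N/g) versus E_up/k_B and taking T_ex = -1/slope. A minimal Python sketch, using approximate level energies and statistical weights for v = 0, J = 3-6 and a synthetic single-temperature population, is:

import numpy as np

def excitation_temperature(E_up_K, N_over_g):
    # Straight-line fit to the excitation diagram; slope = -1/T_ex
    slope, _ = np.polyfit(E_up_K, np.log(N_over_g), 1)
    return -1.0 / slope

E = np.array([1015.0, 1682.0, 2504.0, 3474.0])   # E_up/k_B [K] for v=0, J=3-6 (approx.)
g = np.array([21.0, 9.0, 33.0, 13.0])            # (2J+1) x nuclear-spin weight
N = g * np.exp(-E / 700.0)                       # toy populations at T = 700 K
print(excitation_temperature(E, N / g))          # recovers ~700 K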
Another feature of the excitation diagram for model B is that there is a clear difference between the ortho- and para-levels of H_2. Here the ortho-levels (odd J) are displaced downward compared to the corresponding para-levels (even J), and the resulting zigzag pattern indicates that the ortho/para (o/p) ratio is lower than the high-temperature statistical equilibrium value of 3 <cit.>.
There are no radiative or collisional transitions between ortho- and para-H_2 levels, only exchange reactions with H, H_2, and protonated ions (e.g., H_3^+, HCO^+) can change the spin state <cit.>. The line emission and resulting excitation diagram is integrated through the shock, and thus does not provide information on the local o/p ratio. This is calculated directly from the level populations as n_ o / n_ p, and it can be compared to the cumulative column density ratio, N_ o / N_ p. Both these values are shown in the bottom row of Fig. <ref>. This column density ratio is often dominated by the column densities of H_2 in the two lowest rotational levels, J=0 and 1, which are not accessible in emission. Therefore, we also show the o/p ratio as calculated from the column densities of the lowest observable rotational levels, in this case from the J = 2–9 levels (S(0) to S(7) transitions). In model A, the temperature is high enough that the H exchange reaction H_2^ para + H → H_2^ ortho + H proceeds efficiently <cit.>. The resulting o/p ratios are thus close to 3, although the inferred rotational o/p is somewhat lower than 3 (∼ 1). For model B, the temperature never get high enough that the exchange reactions with H become dominant; instead, the ion-neutral proton-transfer reactions dominate, but they are limited by the low abundances of ions. Thus, the o/p ratios remain at ∼ 0.1. In both models, the initial temperature is 10 K and the gas is dense, which leads to a steady-state o/p ratio of 10^-3 <cit.>. Had the initial temperature been higher or the gas not been in steady state, the initial o/p ratio would have been higher, and the o/p ratio through the shock also correspondingly higher. All in all, however, special care must be taken when interpreting o/p ratios inferred from observations <cit.>.
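The rotational o/p ratio discussed above (from the J = 2–9 columns) is straightforward to reproduce from tabulated column densities; a short Python sketch with hypothetical numbers is:

import numpy as np

def rotational_op_ratio(J, N_J):
    # Ortho (odd J) over para (even J) column densities
    J, N_J = np.asarray(J), np.asarray(N_J)
    return N_J[J % 2 == 1].sum() / N_J[J % 2 == 0].sum()

J = np.arange(2, 10)                                                # J = 2..9
N_J = np.array([3e19, 1e19, 5e18, 2e18, 6e17, 2e17, 5e16, 2e16])    # cm^-2 (hypothetical)
print(f"o/p(J=2-9) = {rotational_op_ratio(J, N_J):.2f}")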
As mentioned above, the input kinetic energy flux is deposited over a larger spatial range for increasing values of b. Specifically, a “phase transition” occurs when the resulting shock type goes from being J- to C-type, and a magnetic precursor develops. This typically happens at higher values of b or lower velocities (Fig. <ref> shows which physical conditions lead to which shock type). Naturally the ionization fraction also plays a role in setting the shock type (Eq. <ref>), but the gas is primarily neutral for the conditions examined here, and effectively this fraction does not play a role here. To measure the width and to make it a usable observational constraint, we have extracted the scale over which 80% of the H_2 emissivity is generated for a subset of lines: the = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines. These widths are shown in Fig. <ref> together with the integrated intensity of the lines; here we show the widths of the = 0–0 S(1) and 1–0 S(1) emitting regions. The shocks with b = 0.1 all have widths less than 10 AU, whereas the b = 1 shocks have widths up to ∼ 10^5 AU or ∼ 1 pc. For these shocks, there is an anticorrelation between the width and the integrated intensity: the wider shocks have lower integrated intensities. The J-type shocks occurring for b = 1 and _ s≥ 25 km s^-1 have larger widths than their b = 0.1 counterparts by one order of magnitude. Even though these are J-type shocks, the magnetic field still plays a significant role.
§.§ Velocity and density
The shock velocity, v_ s, sets the maximum temperature in J-type shocks (Eq. <ref>). H_2 excitation is sensitive to temperature, and so the velocity effectively sets the excitation. This is seen in the simulated spectra (Fig. <ref>). At the lowest velocity (5 km s^-1), the integrated intensity is low and only a few rotational lines are seen in the spectrum. On the contrary, at velocities ≳ 20 km s^-1, we see rich vibrational H_2 spectra. At the same time the peak specific intensity increases by a factor of ∼10, until the velocity reaches 30 km s^-1 and the shock becomes dissociative. In this case, H_2 only contributes to the cooling once it has reformed on the grains. Thus, to a first order, the excitation is set primarily by the velocity in J-type shocks, and the density plays a role in setting the total integrated intensity.
In C-type shocks, the combination of density and velocity is what affects the excitation and the integrated intensity (Fig. <ref>, bottom panel). This is illustrated in the top row of Fig. <ref>, which shows the total H_2 integrated intensity emitted as well as the brightest line. Here, the brightest line serves as a proxy for the excitation in the sense that the higher excited the brightest line is, the higher the excitation is. For the C-type shocks (orange dots), there is a clear intensity and excitation gradient which depends on both density and velocity. The brightest lines are rotational over the bulk of parameter space (from 0–0 S(0) to S(6)), and they are typically para-H_2 transitions (even J). For the case of J-type shocks (blue dots), the intensity gradient is dominated by the density, as discussed above. However, the brightest lines quickly become vibrational; the = 1–0 Q(1) line (2.41 μm) is predicted to be particularly bright, as is the = 1–0 S(3) line (1.96 μm). Thus, identifying the brightest line in the H_2 spectrum provides constraints on where in parameter space the shock is located. Appendix <ref> provides an overview of the dominant cooling lines across the grid.
The H_2 fraction in the gas is highest at the lower densities and lower velocities where H_2 does not dissociate. However, for a given velocity, the total H_2 integrated intensity increases monotonically with density, as shown in Fig. <ref>. This is in spite of the fact that the fraction of input kinetic energy flux radiated by H_2 is monotonically decreasing. Thus, for the shocks with the brightest H_2 emission, other molecules and atoms are needed to trace the bulk deposition of kinetic energy. Examples include emission from CO and H_2O at lower velocities, and O, S, and H at higher velocities.
§.§ UV radiation field
In an externally UV-irradiated shock, the UV photons lead to increased gas ionization and thus higher density of the charged fluid. This increase causes a tighter coupling between the neutral and charged fluids, which in turn leads to the kinetic energy typically being deposited over shorter scales compared to in the absence of external UV radiation. Thus, the temperature typically increases and the shocks become narrower <cit.>. The increased temperature naturally causes higher excitation of H_2, as is illustrated in the H_2 spectra in Fig. <ref>. Here, the shock in model B, showing pure rotational excitation of H_2, is exposed to increasing strengths of an external UV-field, from G_0 = 0 to 10^3. The increase in temperature (from 1700 K to 2800 K) leads to an increase in excitation, and the vibrational levels start to become populated.
The second effect of the UV field is to deposit additional energy into the shock <cit.>. Either this energy deposition is indirect in the form of ionization followed by recombination and release of binding energy, or the energy deposition is direct, where UV photons excite H_2 electronically, from which the molecules can de-excite radiatively. It is clear that for the highest values of G_0, the additional energetic input is significant. This is illustrated in Fig. <ref>. Here, the energy radiated away by H_2 as a function of vibrational level is shown for model B, similar to Fig. <ref>. In this case, model B is exposed to stronger UV fields, and the higher vibrational levels are excited, as also seen in Fig. <ref>. The total fraction of energy lost in H_2 emission increases almost monotonically from 0.63 to 1.07 of the input kinetic energy flux. Thus, at least 7% of the excitation is caused by the UV field, and likely more as there are other channels of energy loss (Fig. <ref>). For a quantitative description of the role of UV pumping on the H_2 level populations, we refer to Fig. 8 of <cit.>.
Even for relatively weak UV field strengths (e.g., G_0 = 1), the UV photons may play a significant role. Figure <ref> is similar to Fig. <ref> in that the top panels show the total amount of H_2 emission and the strongest H_2 line. For the weak shocks (low density, low velocity), one major difference is seen when the UV field is turned on: in the absence of external UV radiation, the brightest lines are all para-H_2 lines (even J) because there is no significant para- to ortho-H_2 conversion. For the weak UV field, the strongest lines are predominantly ortho-lines (odd J), which is consistent with observations of the diffuse gas in colliding galaxies <cit.>. This suggests that interstellar shocks in general are not fully shielded, but exposed to some UV radiation.
§.§ H_2 excitation for JWST observers
JWST represents an increase in sensitivity, spatial and spectral resolution by more than an order of magnitude over previous infrared space-based telescopes <cit.>. We here outline some of the ways in which the models may be used to plan and interpret the JWST observations of shocked regions, keeping in mind the model limitations listed in Sect. <ref>.
H_2 spectroscopy. The spectroscopic capabilities of NIRSpec and MIRI make them perfectly suited for observing H_2 line emission. The excitation of H_2 is the result of a complex interplay between various input parameters, as discussed above, with some degeneracies, especially between the density and shock velocity. This is for example illustrated in Fig. 13 of <cit.>, where observations of H_2 emission from the explosive Orion-KL protostellar outflow are analyzed. With high enough spectral resolution, independent constraints can be made on the shock velocity, thus directly breaking the degeneracy <cit.>.
It will likely not be possible to strongly constrain shock conditions from H_2 observations alone, unless the observers only consider subgrids of physical parameters relevant to their studies. An example could be that if shocks in diffuse clouds are studied, only the lowest densities in the grid would be relevant. Furthermore, in a large number of cases, G_0 can be independently constrained, for example, by studying ionized gas lines, UV continuum observations, or PAH features at infrared wavelengths. Observers should also be aware that, in shock-dominated environments, the total H_2 line emission in a given beam is likely the product of a distribution of shocks arising from a multiphase medium with different conditions. Such an example of shock probability distributions convolved with the use of grids of shock models have been used to interpret H_2 observations in the intragroup shocked diffuse gas in colliding galaxies <cit.>.
Shock width. The NIRCam instrument on JWST is well-suited for observing H_2 emission. The instrument contains three categories of filters, narrow-, medium-, and wide-band filters. Their wavelength coverages are illustrated in Fig. <ref>. Of the narrowband filters, three center on H_2 lines: F212N (v=1-0 S(1)), F323N (v=1-0 O(5)), and F470N (v=0-0 S(9)). The spatial resolution ranges from 0."07 to 0."16, corresponding to linear scales of 14 and 32 AU at a distance of 200 pc, a typical distance to nearby star-forming regions. As illustrated in Fig. <ref>, the width of shocks with b = 1.0 is typically resolvable if the shock is observed close to edge on, except at the highest densities (≳10^7 cm^-3 for C-type shocks, and ≳10^6 cm^-3 for J-type shocks). Shocks with b = 0.1 are not resolvable at a distance of 200 pc. Having a measured shock width puts additional constraints on the shock models: the width is sensitive to the strength of the transverse magnetic field and thus serves as an independent constraint of this parameter <cit.>. Besides NIRCam, the MIRI IFU offers the possibility of producing spectral line maps of H_2 emission at 160 AU (0."5) spatial resolution at a distance of 200 pc of the 0–0 S(1) line at 17 μm. Emission from this line traces colder gas, and so is typically more extended than the higher-excited lines shown in Fig. <ref>. This resolution is therefore still enough to resolve shock-dominated line emission from dissipative regions in nearby star-forming clouds <cit.>.
H_2 photometry. As shown in Fig. <ref>, the NIRCam and MIRI imaging filters include multiple ro-vibrational and rotational H_2 lines, so the use of those filters may prove to be efficient as far as exposure time and mapping area are concerned. Such observations may be used for constraining shock conditions. As an example, Figs. <ref> and <ref> show the brightest lines for a given set of initial conditions. Thus, if an observed region is dominated by shocked H_2 emission, then it might be possible to broadly constrain the range of parameter space where the emission is generated. That is, with the model results in hand, the user can construct “H_2 photometry” which can be compared to observations, assuming H_2 emission dominates the spectrum and the contribution from, e.g., PAH emission is negligible, or assuming that a combination of filters can be used to remove the contribution of the continuum emission. A similar approach has been shown to work efficiently for the wideband MIRI filters for observations of the colliding galaxies in Stephan's Quintet <cit.>.
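As a concrete, deliberately simplified illustration of such H_2 photometry, the Python sketch below sums model line intensities that fall within approximate narrow-band filter ranges; a real application would of course fold in the actual filter transmission curves and subtract the continuum.

def h2_photometry(lines, filters):
    # lines:   {name: (wavelength_um, integrated_intensity)}
    # filters: {filter: (lambda_min_um, lambda_max_um)}, approximate cut-on/cut-off
    return {f: sum(I for lam, I in lines.values() if lo <= lam <= hi)
            for f, (lo, hi) in filters.items()}

lines = {"v=1-0 S(1)": (2.1218, 1.0e-4), "v=0-0 S(9)": (4.6947, 3.0e-5)}   # assumed intensities
filters = {"F212N": (2.109, 2.134), "F470N": (4.683, 4.707)}               # approximate ranges
print(h2_photometry(lines, filters))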
H_2 summary. Table <ref> summarizes what sets the H_2 integrated intensity and the excitation. This table is by no means exhaustive, but may be used as an overview guide of H_2 emission in shocks. To constrain the excitation properly, it is necessary to cover as large a wavelength range as possible, and to cover both rotational and rovibrational lines. The former are predominantly excited in C-type shocks, and the latter in J-type shocks. Once a solution has been found that approximately reproduces observations, we recommend the user to fine-tune the grid further for more precise solutions. This can be done either by interpolating the grid values; in this case care must be taken when going from one shock type to another. Alternatively the user can download the model and run their own shock models, in which case we recommend benchmarking their results against the models presented here in a first step. Finally, we recommend that the total integrated intensity of the H_2 lines is compared to the total available mechanical energy output from a given source, to ensure that the best-fit shock model is physical <cit.>.
Atomic lines. Apart from H_2 emission, the model calculates line emission from several other atomic and ionic species. As an example, JWST-MIRI will observe the [S I] line at 25 μm <cit.>, and the integrated line intensity of this line is calculated and tabulated from the grid. The same applies to lines from other species, e.g., O, and C. Naturally, these lines light up in different parts of parameter space compared to H_2, and thus provide complementary information.
Other emission lines. The abundances of some 140 other species have been calculated through the shock. Examples of particular relevance to JWST and shocks include Fe^+, OH and H_2O, because these species have a number of transitions visible in the NIRSpec and MIRI wavelength ranges and these species are some of the dominant coolants (e.g., Fig. <ref> and <ref>). The abundance, temperature, and density profiles are calculated through the shock, which means that the profiles can be post-processed to calculate integrated line intensities using for example a large velocity gradient (LVG) radiative transfer code <cit.>, which has not been done for this grid of models. Just as for the atomic lines, these will provide complementary observational constraints.
§ SUMMARY
Here we present the results of an extensive grid of plane-parallel steady-state shock models. The grid was constructed by varying six parameters: the preshock density, shock velocity, strength of the transverse magnetic field, strength of the UV field impinging on the shock, the cosmic-ray-ionization rate, and the PAH abundance. This is the first time such an extensive grid of shock models has been run and made publicly available.
The purpose of running this grid of models was to examine under which shock conditions H_2 is efficiently excited, and how shock conditions affect the H_2 excitation and integrated line intensities. H_2 is already being extensively observed with JWST, and the coming years will see a flood of H_2 observations. Such a grid is therefore timely for planning and interpreting JWST observations.
We find that the strength of the transverse magnetic field, as quantified by the magnetic scaling factor, b, plays a key role in the excitation of H_2. At low values of b (≲ 0.3, J-type shocks), H_2 excitation is dominated by vibrationally excited lines; whereas, at higher values (b ≳ 1, C-type shocks), rotational lines dominate the spectrum for shocks without an external radiation field. Shocks with b ≥ 1 can potentially be spatially resolved with JWST for nearby objects, which serves as an additional constraint.
H_2 is typically the dominant coolant at lower densities (≲ 10^4 cm^-3); at higher densities, other molecules such as CO, OH, and H_2O take over at velocities ≲ 20 km s^-1 and atoms, for example, H, O, and S, dominate at higher velocities. Together, the velocity and density set the input kinetic energy flux. When this increases, the excitation and integrated intensity of H_2 increases similarly.
An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to or greater than the input kinetic energy flux. Together, these results provide an overview of the energetic reprocessing of input energy and the resulting H_2 line emission observable by JWST.
We would like to thank F. Boulanger and S. Cabrit for simulating discussions, particularly at the beginning of this project, as well as J. A. Villa Vélez. The research leading to these results has received funding from the European Research Council, under the European Community’s Seventh framework Programme, through the Advanced Grant MIST (FP7/2017–2022, No. 742719). The grid of simulations used in this work has been run on the computing cluster Totoro of the ERC MIST, administered by MesoPSL. We would also like to acknowledge the support from the Programme National “Physique et Chimie du Milieu Interstellaire” (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. The research of LEK is supported by a research grant (19127) from VILLUM FONDEN. PG would like to thank the Sorbonne University, the Institut Universitaire de France, the Centre National d'Etudes Spatiales (CNES), the “Programme National de Cosmologie and Galaxies” (PNCG). This work has made use of the Paris-Durham public shock code V1.1, distributed by the CNRS-INSU National Service “ISM Platform” at the Paris Observatory Data Center[<http://ism.obspm.fr>].
§ THE ISM PLATFORM
The ISM platform[<http://ism.obspm.fr>] is a web portal that contains a series of services developed for the diffusion of state-of-the-art astrochemical models and the preparation and interpretation of observations. Regarding the Paris-Durham shock code, the platform provides access to the numerical code and its previous versions, a full documentation of the physical processes implemented, a tutorial to learn how to run the code locally, and a series of selected references. The platform also provides two analysis tools, IDAT and the Chemistry Analyzer tool, which can be used to study the output of the shock code and identify the processes responsible for the thermochemical evolution of the gas in a simulation. Finally, the platform contains a numerical database (InterStellar Medium DataBase or ISMDB) that provides an easy access to recalculated grid of theoretical models.
On this platform it is possible to “Search models in ISMDB” and from there “Browse models.” This leads to a page where combinations of input shock parameters can be specified, and once the selection has been made, it is possible to “Get model.” The resulting page shows the input parameters as well as some of the resulting quantities (e.g., shock type). The entire model output can be downloaded for further analysis, or the model can be quickly inspected directly through “Online analysis with IDAT.” This tool allows the user to select different quantities and plot them against distance through the shock on one or two different y-axes if so desired. An example could be the velocities through the shock as well as the temperature.
§ TABLES WITH EXTRACTED PARAMETERS
We here provide example tables of the physical quantities already extracted from the grid (Tables <ref> – <ref>). These tables are available on CDS in electronic format. These tables include:
<ref> Physical quantities such as peak temperature, density, width, and age of the shock;
<ref> Column densities of relevant species, particularly H, H_2, O, OH, H_2O, C^+, C, and CO;
<ref> Data required for creating H_2 excitation diagrams, i.e., ln(N/g) and E for each of the 150 levels;
<ref> H_2 integrated intensities of the 1000 lines extracted, along with their wavelength;
<ref> Width of the H_2 emitting zone for the v = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines;
<ref> H_2 o/p ratios determined both locally and integrated through the shock;
<ref> Integrated line intensities of 29 transitions arising from C^+, Si^+, H, C, Si, O, S^+, N^+, N, and S.
An energy cutoff of 99.9% was used to define the point to which integrated quantities (e.g., line intensities, column densities) were integrated (Sect. <ref>). Tests were performed using cutoffs at 95%, 99%, 99.9%, 99.99%, and 99.999%. The two lower values (95 and 99%) did not capture the H_2-emitting zone, particularly in strong CJ-type shocks where the temperature exceeds 10^5 K. The difference between 99.9% and 99.99% cutoffs was on the order of a few percent in terms of H_2 integrated line intensities for the v = 0–0 S(1), 1–0 S(1), and 2–1 S(1) transitions for most shock conditions. Thus, a threshold of 99.9% ensured that most of the H_2 radiative cooling zone was encompassed.
§ ADDITIONAL FIGURES
§.§ Excitation temperatures
Excitation temperatures have been extracted and calculated from a subset of the grid. Figures <ref> and <ref> show these temperatures calculated from the v = 0, J = 3 to 5 levels (S(1) to S(3)) and the v = 0, J = 6 to 11 levels (S(4) to S(9)), respectively. The excitation temperatures are shown for b = 0.1 and 1, and G_0 = 0 and 1. Figures <ref> and <ref> show excitation temperatures for the v = 1, J = 0–8 and v = 2, J = 0–8 vibrationally excited levels.
§.§ Cosmic ray ionization rate
In the model, cosmic rays may ionize H_2 and other species. When these species recombine, primarily H_2, secondary UV photons are emitted. Direct excitation by cosmic rays is not included. In this manner, cosmic rays serve as an additional source of both ionization and energy input. The expectation is that they will impact the H_2 emission to a similar degree as an external UV field. Their impact, however, is smaller than that of UV radiation. This is illustrated in Fig. <ref>, where the integrated line intensities of three representative lines are shown as a function of the cosmic ray ionization rate, ζ_ H2, for Model B. In this case, the PAH abundance is set to 10^-8. For no external UV radiation, the integrated intensity increases by ∼ one order of magnitude when ζ_ H2 increases by two orders of magnitude. For G_0 = 1, there is practically no change in intensity over the same range of ζ_ H2; however, the vibrationally excited lines are significantly brighter than for the shocks without an external radiation field.
§ DOMINANT COOLING LINES
It is natural, when examining such a large grid, to identify the dominant H_2 cooling lines, that is, the H_2 lines that are most likely to be observed for a given set of input parameters. One way of identifying these lines for the entire grid, is to go through each model and tabulate the lines with integrated intensities that are greater than 25% of the maximum intensity. This arbitrary cutoff is chosen from the perspective that if the strongest line is detected at 20σ, then these lines would also be detectable at the 5σ level. Next, the lines are sorted according to which ones are present in the largest number of models, i.e., which are typically the dominant cooling lines in a global perspective. The lines that are present in at least 25% of models are tabulated in Table <ref>.
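The tabulation just described can be reproduced with a few lines of code. The following Python sketch is a schematic reimplementation (not the script used for the grid): it takes a list of per-model line intensities and returns the lines passing both thresholds.

def dominant_lines(models, intensity_cut=0.25, presence_cut=0.25):
    # models: list of dicts {line_name: integrated_intensity}, one dict per model
    counts = {}
    for intensities in models:
        cutoff = intensity_cut * max(intensities.values())
        for name, value in intensities.items():
            if value >= cutoff:
                counts[name] = counts.get(name, 0) + 1
    n_models = len(models)
    keep = {name: n / n_models for name, n in counts.items() if n >= presence_cut * n_models}
    return dict(sorted(keep.items(), key=lambda kv: kv[1], reverse=True))

# Toy example with two models:
print(dominant_lines([{"0-0 S(1)": 1.0, "1-0 S(1)": 0.1},
                      {"0-0 S(1)": 0.3, "1-0 S(1)": 1.0}]))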
Twenty-four lines are present in at least 25% of models. The lines are either v = 0–0 or 1–0 transitions; the higher-excited levels are clearly not sufficiently populated over the majority of the grid. Some of the lines in Table <ref> are observable from the ground, for example, the often bright v = 1–0 S(1) line at 2.12 μm, but the majority of the lines are not (17/24 lines). All lines are, however, observable with the JWST. Eighteen lines are observable with NIRSpec, while seven are observable with MIRI. At 5.06 μm, the v = 0–0 S(8) line is observable with both instruments, and could serve as a cross-calibrator between the two instruments.
|
http://arxiv.org/abs/2307.05869v1 | 20230712015012 | Autonomous and Ubiquitous In-node Learning Algorithms of Active Directed Graphs and Its Storage Behavior | [
"Hui Wei",
"Weihua Miao",
"Fushun Li"
] | cs.DC | [
"cs.DC",
"cs.MA"
] |
Autonomous and Ubiquitous In-node Learning Algorithms of Active Directed Graphs and Its Storage Behavior
Hui Wei^a*, Weihua Miao^a, Fushun Li^a
^a Laboratory of Algorithms for Cognitive Models, School of Computer Science, Fudan University, No.2005 Songhu Road, Shanghai and 200438, China
August 12, 2023
=====================================================================================================================================================================================================
Memory is an important cognitive function for humans. How a brain with such low power consumption can accomplish such a complex memory function, and what working mechanism lies behind it, is undoubtedly fascinating. Engram theory views memory as the co-activation of specific neuronal clusters. From the perspective of graph theory, nodes represent neurons, and directed edges represent synapses. Then the memory engram is the connected subgraph formed between the activated nodes. In this paper, we use subgraphs as physical carriers of information and propose a parallel distributed information storage algorithm based on node scale in active-directed graphs. An active-directed graph is defined as a graph in which each node has autonomous and independent behavior and relies only on information obtained within the local field of view to make decisions. Unlike static-directed graphs used for recording facts, active-directed graphs are decentralized like biological neural networks and do not have a super manager who has a global view and can control the behavior of each node. Distinct from traditional algorithms with a global field of view, this algorithm is characterized by nodes collaborating globally on resource usage through their limited local field of view. While this strategy may not achieve global optimality as well as algorithms with a global field of view, it offers better robustness, concurrency, decentralization, and bio-viability. Finally, the algorithm was tested for network capacity, fault tolerance, and robustness. It was found that the algorithm exhibits a larger network capacity in a more sparse network structure because the subgraph generated by a single sample is not a single connected whole but consists of multiple weakly connected components. In this case, the network capacity can be understood as the number of permutations of several weakly connected components in the network. The algorithm maintains high recall accuracy and completeness when facing error-containing sample inputs or in the presence of node corruption within the network.
Directed Graph Storage, Connected Subgraphs, Decentralization.
§ INTRODUCTION
The traditional theoretical memory models based on psychology and cognitive science are the result of summarizing and analyzing many experimental phenomena. They decompose memory activities into the collaboration and interaction of several functional components, which are usually highly abstract and lack the underlying implementation details. However, most of the computational memory models based on artificial neural networks, represented by the Hopfield network <cit.> and the BAM network <cit.>, lack biological authenticity. Therefore, it is necessary to establish an intermediate theoretical model that satisfies biological constraints between the macroscopic models of psychology and the molecular mechanism of memory in neurobiology, so as to help us understand the working mechanism of memory.
Marr <cit.> proposed a hierarchical framework for analyzing information processing systems, which involves three levels: the theory of computation, representations and algorithms, and hardware implementation. Based on this framework, we analyzed the memory system of the brain. The theory of computation needs to specify the specific requirements of the task, i.e., how the memory storage and retrieval functions are performed by the brain. At the second level, it is necessary to specify the form of memory representation and design algorithms for storing and retrieving memories. The last level is the detailed architecture of connections and communication between neurons in the brain, and since neurobiological studies in this area are more detailed, this paper focuses only on the first two levels. Psychology and cognitive science define and classify memory from a macroscopic point of view, giving a rough division of memory stages <cit.>, while neurobiology shows how neurons connect and communicate with each other from a microscopic perspective, giving the details of implementing the memory system in the hardware dimension <cit.>. However, these perspectives alone are insufficient for a comprehensive understanding of the memory process as they lack detailed representations and algorithms. To gain a more complete understanding of memory, it is necessary to establish reasonable representations and algorithms at an intermediate level. Computer memory, hard disk, and brain memory have similar functions, and their implementation details are compared in Table <ref>. From the comparison, it can be found that the implementation process of computer storage is very transparent, but there are still systematic gaps in the implementation of human brain memory. It is difficult to provide a logically consistent, sufficiently detailed, and complete theoretical understanding of the entire memory system.
Directed graphs are a classical data structure used in computer science to model the connections between pieces of information, such as citation relationships between papers, interaction relationships between proteins, and acquaintance relationships between people in social networks. In recent years, graph databases <cit.> and knowledge graphs <cit.> have been developed to store information as graphs. These databases avoid the multi-table joins required by traditional relational database queries, providing a more efficient way to access and manage data. However, the directed graphs mentioned above are static and serve as a record of factual information and relationships, or as a graphical representation of a set of first-order logic propositions. These graphs do not exhibit dynamic behavior on their own but act as the data structure required to implement algorithms based on a global view. For instance, consider the classical Dijkstra's shortest path algorithm <cit.>. In each update, a node in the queue that satisfies the conditions is greedily selected. However, this selection operation is not a behavior of the nodes themselves but rather a task of a super manager with a global view. This manager can access information about all nodes, such as the adjacency matrix, enabling it to select the optimal node. In this case, the directed graph is just an information carrier that ensures the algorithm can run efficiently and correctly.
Looking at directed graphs from the perspective of a Multi-Agent System <cit.>, each node in the graph can be viewed as an independent and fully autonomous agent. Its field of view is limited to the upstream and downstream nodes connected to it, and information is exchanged only between neighboring nodes via directed edges. Every decision made by a node is based on local information, and it can continuously learn and optimize. Now, the directed graph is no longer static but a dynamic system in which neighboring nodes can influence each other. The intrinsic behavior of each node and the external upstream and downstream connections are unique, leading to a rich and complex dynamic behavior of the whole directed graph. In this paper, we term such a directed graph an active-directed graph, which is no longer a manipulated data structure but a cluster of agents with parallel distributed behaviors. An example of an active-directed graph is a biological neural network, where neurons receive and integrate stimulus signals from upstream neurons via dendrites, and subsequently transmit stimulus signals to downstream neurons via axons. Neurons can self-regulate through synaptic plasticity mechanisms like spike-timing-dependent plasticity(STDP) <cit.>. Since there is no super-manager with a global view to control the behavior of each neuron or node, the behavior of such active-directed graphs is much more complex than static-directed graphs that only record facts.
A directed graph consists of many nodes and directed edges that can be arranged to form numerous connected subgraphs. These connected subgraphs can be seen as a resource, where a subgraph characterizes a state in which several nodes cooperate or relate to each other. Therefore, the entire directed graph can store or remember a large amount of content. In a static-directed graph, the number of connected subgraphs obtained by a path-walking algorithm based on a global view is predictable because there is no uncertainty. However, in an active-directed graph, each node has autonomous behavior and relies only on the information obtained within its local field of view for decision-making. This makes the structure and number of functional connected subgraphs unpredictable. Moreover, since each node can perform incremental adaptive learning, the number of functional subgraphs in the active-directed graph may be much larger and more diverse. To achieve this, corresponding behavior criteria for the nodes must be set, which is not required in static-directed graphs.
In a vast active-directed graph, the challenge lies in how to form functional connected subgraphs, consolidate them, efficiently use limited node and path resources, and make multiple connected subgraphs compatible or less interfering with each other. In short, we aim to study how to achieve storage in an active-directed graph. The contribution of this paper is to propose a node-scale-based parallel distributed storage algorithm in active-directed graphs. Unlike traditional algorithms with a global view, the design challenge of such algorithms is that nodes have to collaborate globally on resource usage through their very limited local field of view. This strategy may not achieve global optimality compared to algorithms with global views, but it offers better robustness, concurrency, decentralization, and bio-viability.
§ RELATED WORK
Directed graphs have various application modes in storage, and one common mode is the abstract modeling of neuronal networks. One such example is the Hopfield network proposed by John Hopfield in 1982 <cit.>. It is a fully connected binary recurrent neural network that characterizes the network state by an energy function. Each iteration of the network proceeds towards lower energy until it reaches a steady state, also known as an attractor. The number of attractors represents the network capacity, which is approximately 0.14N, where N is the number of nodes in the network. When implementing the associative memory function, the Hopfield network enables complete content retrieval from only part of the sample. However, the capacity of the Hopfield network increases only linearly with its network size, making it difficult to preserve many samples. In 2016, Krotov and Hopfield introduced the discrete modern Hopfield network <cit.>, which allows the network capacity to be extended by changing the network energy function and the update rule, at the cost of requiring a large number of hidden-layer nodes. Demircigil et al. <cit.> further extended the energy function by introducing exponential interaction functions, increasing the network capacity. In 2021, Ramsauer, Hubert, et al. <cit.> extended the energy function of modern Hopfield networks from discrete to continuous states while maintaining exponential storage capacity and fast convergence. Hopfield networks, as classical auto-associative computational models, enable mappings between vectors of the same dimension. The bidirectional associative memory (BAM) model proposed by Bart Kosko in 1988 <cit.> can realize both auto-association and hetero-association, i.e., mappings between vectors of different dimensions. The model comprises two layers of neurons connected by a weight matrix, which encodes the mapping relationships of all samples. Activating either layer of neurons and iterating through the network results in a correlated output in the neurons of the other layer. In 2021, Bart Kosko <cit.> introduced a bidirectional backpropagation algorithm in BAM to update the matrix parameters dynamically and extended the original structure to any number of hidden layers, increasing the network capacity. In this mode, directed graphs simulate the structure of biological neuronal networks, and the network structure is often fixed, such as fully connected or hierarchically connected. The impact of structural parameters such as connectivity, clustering coefficient, and average path length on network performance is not considered. The implementation of the storage function relies mostly on the weight parameters and update rules of the network, which remains a weight-centric approach. Moreover, all stored content needs to be determined in advance, and the weights are calculated and written at once, making them hard to update incrementally or partially. Local damage to the network affects it globally, and the network does not scale well.
Directed graphs are also commonly used to represent facts and relationships. One popular use case is in graph databases, which employ a graph structure for information queries <cit.>. In graph databases, nodes represent entities such as people, accounts, or other items, while edges represent connections, such as a friendship relationship. Compared to traditional relational databases, graph databases can perform complex queries more efficiently, as they do not require table join operations but instead search the graph directly. Therefore, they usually offer better performance, especially in the era of big data, where data organization and query complexity have become increasingly important. Graph databases are widely used in various fields, such as in the biomedical domain, for modeling proteins, metabolites, and their relationships, such as digestion and catalysis <cit.>. In knowledge graphs <cit.>, information is recorded as RDF triples and stored as graphs, laying the groundwork for achieving goals like semantic search and knowledge inference. In these applications, the role of the directed graph is to record facts, and there is no dynamic behavior. This approach does not address capacity issues or the impact of network structure on storage performance.
In contrast to artificial neural networks that rely on fixed connection patterns, weight parameters, and update rules, or static directed graphs that primarily function for fact recording, this paper presents a method for storing information in a directed network in a distributed manner. This approach leverages the autonomous and dynamic behaviors of numerous nodes in the network, such as resource acquisition and competition. The information content is differentiated based on the distinct combinations of nodes and edges. Subgraphs serve as the information storage carriers without relying on super nodes. Additionally, there is no need for a global view. Information is stored through nodes and edges' local, limited, and adaptive dynamic behaviors. This subgraph-based computational storage model ensures that the stored information remains stable, distinguishable, and fault-tolerant. It also enables incremental storage of information. The performance of this storage method relies on the network's structural characteristics and the nodes' adaptive learning algorithm.
§ SUBGRAPH-BASED STORAGE IMPLEMENTATION
In the early 20th century, Richard Semon proposed the concept of engrams, which represent the neural substrate of memory function <cit.>. He believed that an engram is formed when a group of neurons undergoes a persistent physical or chemical change in response to an external stimulus. Subsequently, when the original external stimulus arrives again, these engram cells are reactivated, enabling the retrieval of the memory. This can be understood from the perspective of a directed graph: nodes represent neurons and directed edges represent synaptic connections between neurons. The engram is then the subgraph formed between the activated nodes, and differences in subgraph structure represent different information contents.
Let G=(V,E) be a directed graph, where V denotes the set of nodes and E denotes the set of edges. If G'=(V',E') is a subgraph of G, then it follows that V'⊆ V and E'⊆ E, denoted as G'⊆ G. In a subgraph-based storage implementation, information is recorded in the form of a series of active nodes and interconnected pathways between them, i.e., a subgraph in the network. For instance, consider the message to be stored as: "While observing a red apple on a tree, I also saw a chirping robin." In this case, nodes representing semantic elements like red, circle, branch, and chirping are activated simultaneously and propagate stimulus along the directed edges in the network. These nodes are referred to as the initial nodes of this sample. During the stimulus propagation process, the initial nodes activate some otherwise inactive nodes, called communication nodes, which are crucial for establishing the pathways. Not all nodes are directly connected by edges in non-fully connected networks, so communication nodes serve as bridges to establish pathways between the initial nodes. These pathways represent associations between semantic elements, such as recalling a robin when seeing an apple again. This occurs when the initial node representing the apple is activated, and the stimulus is passed along the stored pathway in the network, finally activating the node representing the robin. This implementation draws inspiration from cognitive psychology studies on long-term memory <cit.>.
However, the initial idea is insufficient and requires the design of specific implementation details. For example, a node may only characterize a fundamental physical feature, necessitating multiple nodes to represent a concept like an apple. The initial node may activate some communication nodes, which may activate others. Figure <ref>a shows 30 randomly selected nodes as initial nodes in a directed network. Figure <ref>b shows a stable subgraph obtained by propagating the stimulus of these 30 initial nodes through the network with continuous iterations.
Another factor that makes subgraphs suitable for information storage is the vast number of potential subgraphs present in the network. Given m=|E|, there can be up to 2^m subgraphs in graph G. Thus, using subgraphs as information storage carriers is a promising idea. The challenge lies in ensuring that the subgraphs do not interfere with or confuse each other, effectively utilizing the node and edge resources of the entire directed graph, enabling incremental storage, reducing unfair resource occupation due to varying sample upload orders, and sharing resources among multiple samples. These technical aspects need to be solved by the parallel distributed network adaptive learning algorithm.
§ SUBGRAPH GENERATION, STORAGE, AND RETRIEVAL
The storage and retrieval of samples are processes in which the initial nodes propagate the stimulus through the network and eventually form a stable subgraph. Assuming the stimulus propagation time between nodes is constant, the subgraph eventually reaches a stable state through iterations. The formal definition of a stable subgraph is as follows: let V_t represent the set of all active nodes in the network at time t. There exists a minimum time t' such that V_{t'} ≠ V_{t'-1} and V_{t'} = V_{t'+k} for any positive integer k. At this point, the network is considered to be in a stable state at time t', and the subgraph comprising all active nodes and edges is the stable subgraph. There are two primary concerns: first, how the subgraph is recorded in the network, and second, what rules nodes use for stimulus propagation. Section 4.1 describes the recording of subgraphs, while Sections 4.2 to 4.4 outline the stimulus propagation rules. Section 4.5 presents the specific procedure for sample storage and retrieval.
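As a minimal illustration of this fixed point, the sketch below (plain Python, hypothetical names) spreads activation from the initial nodes until the active-node set stops changing. It deliberately omits the probabilistic activation, index-table reuse, and resource-grabbing rules introduced in the following subsections, so every reachable neighbour is simply activated.

```python
# Minimal sketch: iterate stimulus propagation from the initial nodes until the
# active-node set V_t stops changing, then return the stable subgraph.
from collections import defaultdict

def stable_subgraph(edges, initial_nodes):
    out_adj = defaultdict(set)
    for u, v in edges:
        out_adj[u].add(v)
    active = set(initial_nodes)
    while True:
        newly_active = {v for u in active for v in out_adj[u]} - active
        if not newly_active:          # V_{t'} = V_{t'+k}: the subgraph is stable
            break
        active |= newly_active
    sub_edges = [(u, v) for u, v in edges if u in active and v in active]
    return active, sub_edges

nodes, sub = stable_subgraph([(1, 2), (2, 3), (3, 1), (4, 5)], initial_nodes={1})
print(nodes, sub)   # {1, 2, 3} and the edges among them
```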
§.§ The node internal index table records the local upstream and downstream connectivity traces
The storage of subgraph structures involves recording the connectivity paths between active nodes, which can only be accomplished by the nodes themselves based on their local perspectives. This necessitates that active nodes individually record activation information upstream and downstream of themselves. Define the activation trace of node u as a path fragment consisting of active fan-in nodes and active fan-out nodes of node u. Storing the activation traces of all active nodes during this sample storage process will complete the subgraph storage.
In this paper, we store node activation traces by introducing an index table in each node, a data structure with small capacity, easy access, and simple updating. Figure <ref> shows the structure of the index table, which contains two columns: the first for active fan-in nodes and the second for active fan-out nodes. Figure <ref>a shows that node B's index table contains no content before storing the samples. Once a sample is stored, its corresponding activation trace is saved in its index table. As shown in Figure <ref>b, during sample retrieval, if node B receives the same or similar input as the recorded activation traces, it generates the corresponding output based on the historical records in the index table.
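The sketch below gives one possible in-memory form of such an index table; the field names (fan_in, fan_out, strength) are our own and only mirror the two-column structure shown in Figure <ref> and the trace "strength" used later for merging and discarding.

```python
# Minimal sketch of a node-internal index table (hypothetical structure): each
# entry pairs the active fan-in nodes observed during storage with the active
# fan-out nodes chosen at that time.
from dataclasses import dataclass, field

@dataclass
class IndexEntry:
    fan_in: frozenset      # active upstream nodes when the trace was recorded
    fan_out: frozenset     # downstream nodes the stimulus was sent to
    strength: int = 1      # number of samples this trace helped store

@dataclass
class Node:
    name: str
    table: list = field(default_factory=list)   # list of IndexEntry

    def record_trace(self, fan_in, fan_out):
        """Subgraph consolidation: store (or reinforce) an activation trace."""
        for e in self.table:
            if e.fan_in == frozenset(fan_in) and e.fan_out == frozenset(fan_out):
                e.strength += 1
                return
        self.table.append(IndexEntry(frozenset(fan_in), frozenset(fan_out)))

b = Node("B")
b.record_trace({"A"}, {"C", "D"})
print(b.table)
```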
Defining an index table inside each node that records upstream and downstream active path pairings may appear straightforward and crude, but it is also biologically feasible. Biological neurons have dendrites that receive inputs from multiple directions and axons that transmit outputs in different directions, creating many-to-many connections. Actual physical signaling between upstream and downstream neurons relies on synapses, regulated by combinations of diverse neurotransmitters and ion pumps. These mechanisms precisely control the direction and intensity of positive and negative charge flow. Additionally, differences in synapse location, such as being distal or proximal to the axon, on dendrites or axons, or on the main pathway or a terminal, can precisely control the activation and deactivation of specific action-potential transmission pathways. In conclusion, this highly precise and diverse molecular- and subcellular-level modulation, and combinations thereof, equip biological neurons with various pathway-control mechanisms at the microcircuit level <cit.>. As a result, a biological neuron can achieve diverse pathway control of signaling within its small neighborhood, relying on a complex set of electrochemical processes <cit.>. This has inspired the design of the internal behavior of directed-graph nodes, allowing them to function like network routers capable of differentially steering their fan-out based on fan-in variations. An index table with limited storage space is a simple, functionally equivalent implementation.
§.§ Intra-node stimulus propagation algorithm
The creation of subgraphs depends on the propagation of stimulus between nodes. Stimulus propagation consists of two aspects: node activation rules, i.e., how nodes are activated, and stimulus propagation rules, i.e., determining the downstream nodes to which the stimulus is propagated. In this paper, the node activation rule employed is a fixed-probability activation model, where a node will be activated with a fixed probability upon receiving input. The node becomes active and begins delivering stimulus to downstream nodes if successfully activated. Two types of stimulus propagation rules are used in this paper: the first reuses similar historical activation traces, and the second employs a weighted random selection algorithm.
The specific process of stimulus propagation among nodes is as follows: each resting-state node has a fixed probability H of being activated after receiving a stimulus. Once a node is activated, if its index table is empty, it randomly selects several downstream nodes with equal probability for stimulus delivery. If the index table is not empty, the similarity between the input and each item in the node's index table is calculated first. In this paper, the F1 score <cit.> is used as the metric to evaluate the similarity of two node sequences. The F1 score is a statistical measure of the accuracy of a binary classification model and is the harmonic mean of precision and recall. When comparing similarity, either one of the node sequences can be treated as the predicted value and the other as the actual value, and the corresponding F1 score is calculated; higher scores indicate greater similarity. The corresponding historical activation trace is reused if the maximum similarity exceeds the threshold. Otherwise, a weighted random selection algorithm is used to select downstream nodes for stimulus delivery.
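A sketch of this trace-matching step is given below; the index table is modelled as a list of (fan_in, fan_out) set pairs, and the 0.8 reuse threshold is illustrative rather than a value taken from the paper.

```python
# Sketch of the trace-matching step: compute the F1 score between the current
# active fan-in set and each stored fan-in set; reuse the stored fan-out when
# the best similarity exceeds the threshold.
def f1_score(predicted, actual):
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(actual)
    return 2 * precision * recall / (precision + recall)

def match_trace(current_fan_in, index_table, threshold=0.8):
    """Return the stored fan-out to reuse, or None if no trace is similar enough."""
    best_fan_out, best_sim = None, 0.0
    for fan_in, fan_out in index_table:
        sim = f1_score(current_fan_in, fan_in)
        if sim > best_sim:
            best_fan_out, best_sim = fan_out, sim
    return best_fan_out if best_sim >= threshold else None

table = [({"A", "C"}, {"X", "Y"}), ({"D"}, {"Z"})]
print(match_trace({"A", "C", "E"}, table))   # reuses {'X', 'Y'} (F1 = 0.8)
```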
The weighted random selection algorithm is based on the frequency of the downstream node appearing in all active fan-out nodes in the current node index table. The higher the frequency of occurrence, the lower the chance of being selected. The introduction of this algorithm allows each active node to distribute stimulus evenly, thus maximizing network resource utilization. The algorithm pseudo-code is shown in Algorithm <ref>. The value of H affects the subgraph size. The larger H is, the more nodes participate in subgraph formation and the better the connectivity. However, the corresponding cost of network resources is also larger. In this paper, H is set at 60% for testing.
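The following sketch shows one way to implement this weighted random selection; the weight 1/(1+frequency) and the number of targets k=2 are our own illustrative choices.

```python
# Sketch of the weighted random selection of downstream nodes: candidates that
# already appear often in the node's recorded fan-outs get lower weights, so the
# stimulus is spread evenly over the available resources.
import random
from collections import Counter

def weighted_select(candidates, recorded_fan_outs, k=2, rng=random):
    freq = Counter(n for fan_out in recorded_fan_outs for n in fan_out)
    pool = list(candidates)
    weights = [1.0 / (1 + freq[c]) for c in pool]   # rarer -> heavier weight
    chosen = []
    for _ in range(min(k, len(pool))):
        pick = rng.choices(pool, weights=weights, k=1)[0]
        i = pool.index(pick)
        pool.pop(i)
        weights.pop(i)
        chosen.append(pick)
    return chosen

print(weighted_select(["B", "C", "D"], recorded_fan_outs=[{"B"}, {"B", "C"}]))
```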
§.§ Node Resource Grabbing Rules
Nodes are considered limited resources in a directed graph, adhering to a first-come, first-served preemption rule. Activating a node can be seen as occupying a node resource. When performing stimulus propagation, nodes first acquire the occupancy of downstream nodes, so that stimulus is not passed to already occupied nodes. If an active node does not successfully activate any downstream node, it returns to the resting state. A change in the state of some active nodes may trigger a chain reaction that causes more active nodes to become resting; this situation is called the avalanche effect. As shown in Figure <ref>, at t_0, node B receives a stimulus from node A and is subsequently activated at t_1. However, because node C has already been occupied, node B cannot transmit the stimulus to node C, causing node B to revert to the resting state at time t_2. At this point, node A is no longer activating any node due to the change in the state of node B. Therefore, at t_3, node A also becomes resting due to the avalanche effect.
§.§ Several problems are caused by insufficient resources in subgraph generation
As the number of samples stored in the network increases, new subgraphs may encounter some problems caused by insufficient resources during the generation process, preventing new samples from being stored. The causes of insufficient resources in the network can be broadly classified into four categories:
1. The node index table has a capacity limit. When the number of samples stored in the network reaches a certain level, it becomes hard to store new samples.
2. The network is poorly connected; combined with the randomness of the stimulus propagation rule and the avalanche effect, this can make it impossible to establish a pathway between the initial nodes.
3. The samples already stored in the network interfere with the samples currently about to be stored. This is because nodes may reuse historical activation traces when they are activated. Although this is an optimization strategy to increase network capacity, it somewhat affects the storage of current samples.
4. Some active nodes take up too many node resources during the current activation, resulting in no resources available for other nodes.
For the first case, the capacity of the index table can be defined as the maximum number of output types that a node can store, as many different inputs may correspond to the same output. The capacity can also be extended by reasonably discarding and merging the contents of the index table. Specifically, when a node's index table reaches its capacity limit, the node searches for the two most similar activation traces and merges them. The similarity here refers to the similarity of the active fan-in nodes of the two activation traces, and merging refers to taking the intersection of their active fan-out nodes. If all activation traces differ greatly from one another, the one with the lowest strength is discarded. The strength here refers to the number of samples that the activation trace has been involved in storing; the more samples, the higher the strength. By merging and discarding reasonably, it is possible to increase the network capacity as much as possible at the expense of some recall accuracy and completeness. The pseudo-code of the algorithm is given by Algorithm <ref>.
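A sketch of this capacity-limited update is shown below, with entries stored as (fan_in, fan_out, strength) tuples. It reuses the f1_score helper from the earlier trace-matching sketch; the 0.5 merge threshold and keeping the union of fan-ins in the merged trace are our own assumptions, since the paper only states that the fan-outs are intersected.

```python
# Sketch of the capacity-limited index-table update: merge the two most similar
# traces when possible, otherwise drop the weakest one, then insert the new trace.
def insert_with_limit(table, new_entry, capacity, merge_threshold=0.5):
    if len(table) < capacity:
        table.append(new_entry)
        return
    # find the two stored traces with the most similar fan-in sets
    best_pair, best_sim = None, -1.0
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            sim = f1_score(table[i][0], table[j][0])   # f1_score from the earlier sketch
            if sim > best_sim:
                best_pair, best_sim = (i, j), sim
    if best_pair and best_sim >= merge_threshold:
        i, j = best_pair
        merged = (table[i][0] | table[j][0],        # fan-in union (assumption)
                  table[i][1] & table[j][1],        # fan-out intersection (as in the paper)
                  table[i][2] + table[j][2])        # combined strength
        for idx in (j, i):
            table.pop(idx)
        table.append(merged)
    else:                                           # all traces differ: drop the weakest
        table.pop(min(range(len(table)), key=lambda idx: table[idx][2]))
    table.append(new_entry)
```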
The second and third cases can be solved by introducing a re-pathfinding rule, which allows an initial node that has not successfully activated any downstream node to re-propagate the stimulus and look for a path connecting it to other initial nodes. For the fourth case, active nodes can release some of the resources they occupy according to their own situation; a resource release algorithm is introduced to solve this problem. When the number of failed re-pathfinding attempts of an initial node reaches a certain threshold, it enters a dormant state, indicating that it is currently unable to communicate with other nodes. A node in the dormant state suspends pathfinding until the subgraph stabilizes. After the subgraph has stabilized, active nodes release some of the nodes they occupy to make resources available to dormant nodes. For example, if an active node has three active fan-out nodes, it can release its occupation of two of them. After the redundant resources are released, the nodes in the dormant state resume pathfinding until the subgraph stabilizes again. If, after the resources are released, a dormant node is still unable to establish path connections to other active nodes, the network connectivity is considered poor, or there is a conflict between the current sample and the samples already stored in the network. In this case, the subgraph can still be formed; however, there will be some isolated nodes that cannot establish connections with other nodes, reducing the subgraph's anti-interference ability and fault tolerance. The pseudo-code of the resource release algorithm is given by Algorithm <ref>. Figure <ref> shows the transition relationships between the three node states. Figure <ref> presents the flowchart of the algorithm from the node's perspective, including how nodes process the received stimulus and how downstream nodes are selected for stimulus delivery.
§.§ Sample Storage and Retrieval
The sample storage process consists of two stages: (1) Stimulus propagation stage: the initial nodes propagate stimulus to other nodes along the directed edges until a stable subgraph is formed. (2) Subgraph consolidation stage: All nodes in the subgraph update their internal index tables, recording the activation traces. Figure <ref> shows the storage algorithm from the processor's perspective and the flowchart of the algorithm from the task scheduling perspective. The corresponding pseudo-code is given by Algorithm <ref>.
Stimulus propagation stage: Table <ref> demonstrates a complete process of generating a stable subgraph through continuous iteration of the initial nodes. At the time t_0, the initial nodes are activated, and downstream nodes are chosen for stimulus propagation according to the weighted random selection algorithm. The subsequent t_1 and t_2 moments represent the continuous stimulus propagation in the network. At the time t_3, since downstream nodes B and C of node H have been occupied by other nodes, node H cannot perform stimulus transfer. Therefore, according to the node resource-grabbing rules, node H transitions from the active state to the resting state. At the time t_4, downstream node H, excited by node J, reverts to a resting state. At this point, node J does not activate any nodes, and due to the avalanche effect, its state also becomes a resting state. After the end of time t_4, node D will no longer propagate stimulus. However, as the initial node, it will follow the re-pathfinding rules, searching for a new path and attempting to participate in the subgraph formation. When re-pathfinding reaches a certain number of attempts, the node will enter a dormant state. Here, it is assumed that node D has entered a dormant state and will halt pathfinding until the subgraph is stable. It can be observed that at time t_4, the subgraph is already stable since there will be no change in node states. At this point, it is necessary for other active nodes in the network to release redundant resources, providing node D the opportunity to re-engage in the subgraph formation. At the time t_5, node A, which originally occupied both node E and node G resources, can choose to release the occupation of either of the two nodes. Assuming that node E is released, node E will become resting, and the stimulus from node E to node B will also vanish. However, because node B is an initial node, its state will not change. After the resource is released, node D resumes pathfinding and node J is activated at time t_6. At the time t_7, node H is activated by node J, and the stimulus is passed to the initial node B, forming a path. At this moment, the connected subgraph between active nodes becomes stable, no dormant nodes are present in the network, and the stimulus propagation stage concludes.
Subgraph Consolidation Stage: The primary task of this stage is to store a stable subgraph structure in the network. When the subgraph achieves a stable state, each active node will have corresponding active fan-in and active fan-out nodes, which are activation traces. Storing the subgraph is completed by updating the activation trace of each node in the node's internal index table. The pseudo-code of the index table update algorithm is provided by Algorithm <ref>, and the pseudo-code of subgraph generation and preservation is given by Algorithm <ref>.
The process of sample retrieval closely resembles sample storage, but it is simpler, consisting only of the stimulus propagation stage. During this stage, a node that receives a stimulus from an upstream node but cannot find a similar entry in its internal index table is not activated. The initial nodes do not enter the dormant state, and no node releases excessively occupied resources. In summary, the sample retrieval algorithm does not cause any change to the existing network; it only performs stimulus propagation based on the activation traces stored in the nodes' internal index tables. Since the retrieval procedure is otherwise identical to the storage algorithm with the parts mentioned above omitted, no separate pseudo-code is provided here.
§ EXPERIMENTS
The experiment is primarily divided into four aspects:
1. Capacity testing: This aims to investigate the number of samples the network can stably store and the factors influencing network capacity.
2. Fault tolerance testing: This mainly explores the effect of sample retrieval when the input is incomplete or has noise.
3. Robustness testing: This mainly explores the effect of sample retrieval when some nodes or edges are damaged.
4. Performance testing on different classical network structures: This mainly explores the algorithm's performance on various classic network structures.
The networks used in the experiments are ER random graphs <cit.>. This classic random network model, proposed by Paul Erdős and Alfréd Rényi in 1959, is defined by a probability p of connection between any two nodes in the network. Extending this definition to directed graphs, two distinct directed edges can exist between any two nodes; the probabilities of these two edges existing are independent of each other and both equal p.
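For reference, a directed ER graph of the kind used below can be sampled as in the following sketch; the edge probability shown reproduces, in expectation, the 3101-edge sparse test network and is only an illustration.

```python
# Sample a directed ER random graph G(n, p): each of the n(n-1) ordered pairs
# (u, v), u != v, becomes a directed edge independently with probability p.
import random

def directed_er_graph(n, p, seed=0):
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(n)
            if u != v and rng.random() < p]

edges = directed_er_graph(500, 3101 / (500 * 499))   # ~3101 expected edges
print(len(edges))
```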
§.§ Capacity testing
Let G_i=(V_i,E_i) represent the subgraph generated by the ith sample, where V_i denotes the set of nodes and E_i denotes the set of edges. Let G'_i=(V'_i,E'_i) represent the subgraph generated during the retrieval of the ith sample. Define the accuracy as P_i=|E_i ∩ E'_i|/|E'_i| and the completeness as C_i=|E_i ∩ E'_i|/|E_i|. Define an isolated node as an initial node of the subgraph whose in-degree and out-degree are both zero. Define the sample representation quality Q as the percentage of non-isolated initial nodes relative to all initial nodes: if the number of initial nodes in the sample is s and the number of isolated nodes in the subgraph generated by the sample is l, then Q=(s-l)/s. In capacity testing, a sample is considered successfully stored by the network only if it has a high sample representation quality and high completeness and accuracy during retrieval. Based on this, we can define the reliable capacity of the network. Let the reliable capacity T be the maximum number of samples that the network can successfully store such that, for each stored sample, Q_i>0.9, i∈[1,T], P̅=(1/T)∑_{i=1}^{T}P_i>0.9, and C̅=(1/T)∑_{i=1}^{T}C_i>0.9.
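These metrics translate directly into code; the sketch below computes them from the stored and retrieved edge sets (function and variable names are ours, and non-empty edge sets are assumed).

```python
# Sketch of the evaluation metrics P_i, C_i and Q defined above.
def accuracy(stored_edges, retrieved_edges):
    """P_i = |E_i ∩ E'_i| / |E'_i|."""
    return len(set(stored_edges) & set(retrieved_edges)) / len(set(retrieved_edges))

def completeness(stored_edges, retrieved_edges):
    """C_i = |E_i ∩ E'_i| / |E_i|."""
    return len(set(stored_edges) & set(retrieved_edges)) / len(set(stored_edges))

def representation_quality(initial_nodes, stored_edges):
    """Q = (s - l) / s, with l the initial nodes untouched by any subgraph edge."""
    touched = {u for u, v in stored_edges} | {v for u, v in stored_edges}
    isolated = [n for n in initial_nodes if n not in touched]
    return (len(initial_nodes) - len(isolated)) / len(initial_nodes)
```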
Assume the network has n nodes, each node has an internal index table with capacity K, and each subgraph contains on average s initial nodes and c communication nodes. For every subgraph stored in the network, the internal index table entries of the activated nodes are added to or modified. Considering these entries as a resource, the total number of resources in the network is nK, and each subgraph occupies s+c resources. Without resource reuse or other optimization measures, the network consumes s+c resources for every stored sample, so the network capacity can be roughly expressed as nK/(s+c). If resource reuse is allowed, the calculation of the network capacity becomes more complex. In the extreme case where the resources occupied by the current subgraph are all reused, the upper bound of the network capacity can be roughly expressed as the binomial coefficient C(nK, s+c). The network capacities obtained from these two calculations differ greatly. In actual testing, there are many other influencing factors, such as different network connectivities and conflicts between samples. Therefore, a purely theoretical capacity analysis is difficult, and a specific analysis should be conducted in conjunction with the actual test results.
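The two rough estimates can be evaluated as below; the value c=20 communication nodes per sample is an illustrative assumption, while n, K, and s follow the sparse test configuration used later.

```python
# Sketch of the two rough capacity estimates: nK/(s+c) without resource reuse,
# and the binomial coefficient C(nK, s+c) as an upper bound with full reuse.
from math import comb

n, K, s, c = 500, 20, 60, 20          # c (communication nodes per sample) assumed
no_reuse_capacity = n * K // (s + c)
full_reuse_bound = comb(n * K, s + c)
print(no_reuse_capacity, full_reuse_bound)
```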
Table <ref> shows the performance of sample retrieval after storing 1,000 samples in the networks. The node index table size is set to K=20. The scale of a single sample refers to the size of its initial node set. As shown in Table <ref>, the capacity of sparse graphs is typically larger than that of dense graphs with the same number of nodes. The primary distinction between sparse and dense graphs lies in the number of directed edges, which directly influences network connectivity. This can also be observed from the average number of weakly connected components in the subgraphs reported in Table <ref>: for networks with the same number of nodes, the more edges they have, the fewer weakly connected components their subgraphs have on average. In general, the subgraphs generated by samples are not necessarily connected but are composed of multiple connected components. A weakly connected component (WCC) is a component in which, after replacing all directed edges with undirected edges, any two nodes are reachable from one another. The number of connected components reflects the aggregation of the subgraph: a greater number of components indicates a more dispersed subgraph, while a smaller number indicates a more clustered subgraph.
The connectivity or structure of subgraphs is undeniably a crucial factor influencing network capacity, as it determines the resource usage of each subgraph. There are two main factors that impact subgraph connectivity: the scale of a single sample and network connectivity. Table <ref> demonstrates that a larger scale of a single sample and better network connectivity will reduce network capacity. This observation is intuitive for the former but counterintuitive for the latter. However, when the scale of a single sample node is 0 or network connectivity is extremely poor, the network capacity tends to be 0. This suggests that the relationship between network capacity and subgraph connectivity is not linear.
Figure <ref> shows the changes in network capacity and subgraph structure as the number of edges in the network increases. The network has 500 nodes and the single-sample scale is 60. It can be observed that the network capacity first rises and then declines, eventually stabilizing near the theoretical capacity value for the simple case, which is nK/(s+c). During the growth stage, both the average number of WCCs and the average number of nodes in the subgraph decrease, suggesting that the subgraphs progressively transition from "dispersed" to "clustered." Subsequently, there is a sharp decline in network capacity, and the average number of WCCs in the subgraph also drops dramatically. This indicates that the network connectivity has reached a critical point, with almost all nodes of a subgraph belonging to the same WCC. As a result, an "agglomeration effect" emerges: most initial nodes can connect directly without passing through other communication nodes. When the network capacity reaches its peak, the average number of nodes in the subgraphs is close to the single-sample scale, and the number of WCCs is slightly above 1.
Erdős and Rényi <cit.> demonstrated that when p>(1+ϵ)ln n/n, the ER random graph G(n,p) is almost always connected. To ensure that the subgraphs generated by the samples have a high probability of forming only one WCC, we need p>(1+ϵ)ln s/s, where s is the single-sample scale, here 60. Take p=ln s/s≈ 0.07; the subgraph generated in this scenario is shown in Figure <ref>a. If we instead take p=0.04, corresponding to the p at which the network capacity reaches its peak, the generated subgraph is shown in Figure <ref>b. It can be seen that the essence of the large capacity is the arrangement and combination of multiple WCCs. When p is slightly less than ln s/s, the subgraphs generated by the samples are composed of a small number of WCCs. Assuming the subgraph is evenly divided into t WCCs, the size of each WCC is (s+c)/t. The network capacity can then be viewed as the number of ways of selecting t such WCCs from all possible ones; this is essentially a uniform disordered grouping problem. The calculation result is shown in formula <ref>. Although the actual capacity is significantly smaller than this value, it still demonstrates the huge storage potential of the network.
T = C(n, (s+c)/t) · C(n-(s+c)/t, (s+c)/t) ⋯ C(n-(t-1)(s+c)/t, (s+c)/t) / t!
= n! / ( t! · (((s+c)/t)!)^t · (n-s-c)! )
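The bound in formula <ref> can be evaluated numerically as in the sketch below; the values s+c=80 and t=4 are illustrative rather than taken from the experiments.

```python
# Sketch evaluating the grouping bound: n nodes, subgraphs of s+c nodes split
# evenly into t weakly connected components of size (s+c)/t each.
from math import comb, factorial

def grouping_bound(n, s_plus_c, t):
    k = s_plus_c // t                      # size of each WCC
    total, remaining = 1, n
    for _ in range(t):                     # choose the t groups one after another
        total *= comb(remaining, k)
        remaining -= k
    return total // factorial(t)           # groups are unordered

print(grouping_bound(n=500, s_plus_c=80, t=4))
```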
Figure <ref> demonstrates the capacity difference between a sparse graph and a dense graph, both with the same number of nodes. It can be observed that in the sparse graph, the average completeness of sample retrieval drops below 80% when the number of stored samples exceeds 8000. In contrast, for the dense graph, the average completeness of sample retrieval declines below 80% when the number of stored samples approaches 300. This capacity difference between the two networks further confirms that the arrangement and combination of WCCs are the essences of large capacity. Although the dense graph has more connections, its displayed capacity is not directly proportional to the number of resources owned by the network. Conversely, the sparse graph has only a small number of connections, but the network capacity achieved by the arrangement and combination of multiple connected subgraphs is several times that of the dense graph. This indicates that a sparse connection is a more reasonable mode, which can effectively save resources and obtain a larger network capacity. Moreover, the biological neuron network of the human brain also follows a sparse connection mode, implying that sparse connections are efficient and maximize the use of resources.
§.§ Fault tolerance testing
The fault tolerance testing primarily explores the effect on sample retrieval when the input is incomplete or noisy. The networks used in the experiment are ER random graphs with 500 nodes and either 3101 edges (sparse graph) or 12606 edges (dense graph). The experiment first stores 1000 samples in the network, then modifies the sample inputs during the retrieval process, and finally compares how the average accuracy and completeness change when the sample input is incomplete or noisy. There are three categories of input modifications, illustrated by the sketch after the list:
1. Removing a part of the original sample input to explore the impact of incomplete inputs on sample retrieval performance.
2. Adding extra noisy nodes to the original sample to investigate the impact of noise on retrieval.
3. Removing part of the original sample input and replacing it with an equal number of noisy nodes to examine the impact of sample retrieval in this mixed scenario.
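The three corruption modes above can be generated as in the following sketch (the corruption fractions and the random seed are illustrative).

```python
# Sketch of the input corruptions used in the fault-tolerance tests: drop a
# fraction of the initial nodes, add noise nodes from outside the sample, or both.
import random

def corrupt(initial_nodes, all_nodes, missing_frac=0.0, noise_frac=0.0, seed=0):
    rng = random.Random(seed)
    kept = rng.sample(sorted(initial_nodes),
                      round(len(initial_nodes) * (1 - missing_frac)))
    outside = [n for n in all_nodes if n not in initial_nodes]
    noise = rng.sample(outside, round(len(initial_nodes) * noise_frac))
    return set(kept) | set(noise)

sample = set(range(30))
print(corrupt(sample, range(500), missing_frac=0.2))                   # case 1
print(corrupt(sample, range(500), noise_frac=0.2))                     # case 2
print(corrupt(sample, range(500), missing_frac=0.2, noise_frac=0.2))   # case 3
```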
Figure <ref> shows that as more sample inputs are missing, the accuracy and completeness of retrieval decrease to varying degrees in both sparse and dense graphs. The change trends of the two networks are generally similar, and the decline in accuracy is relatively gentle. When the proportion of missing parts reaches 80% of the input, the rate of accuracy decline increases significantly. Compared to Figure <ref>b, Figure <ref>a has higher completeness and accuracy when the proportion of missing inputs is between 0.0 and 0.1. This is because the network capacity of the dense graph is small, making it difficult to achieve high reading accuracy and completeness after storing 1000 samples. The overall decline rate of completeness is greater than that of accuracy, indicating that the erroneous content obtained during retrieval does not increase as the proportion of missing inputs increases. This suggests that the algorithm is relatively reliable when facing missing sample inputs, although the rapid decline in completeness represents a significant amount of correct content that cannot be read. However, even when the proportion of missing inputs is as high as 80%, the retrieval accuracy can still be maintained at around 40% to 50%, which means that even if there are numerous missing inputs, almost half of the read content is correct and reliable.
Figure <ref> shows the impact of adding noise nodes. It can be observed that these additional noise nodes have relatively little effect on the accuracy and completeness of sample retrieval. The decline rate of completeness is lower than that of accuracy because added noise nodes generally do not directly disrupt the original subgraph structure but make the final subgraph larger. This demonstrates that the network is relatively resistant to noise and more sensitive to missing sample inputs.
Figure <ref> presents the performance when the sample input is both incomplete and contains noise nodes, under sparse and dense graphs respectively. It can be seen that accuracy and completeness decline fastest in this case, indicating that the impacts of missing inputs and of noise nodes are superimposed.
§.§ Robustness testing
It is known that neurons in the biological brain may experience various functional failures. How does this affect memory? This section primarily examines how the performance of sample retrieval changes when the network is damaged to different degrees. The test includes two main aspects: damage to some nodes and damage to some directed edges. The experiment first stores 1000 samples in the network, then deletes a certain proportion of nodes or directed edges and subsequently attempts to retrieve these samples while recording average accuracy and completeness changes. The network used in the experiment is an ER random graph with 500 nodes and 3101 edges (sparse graph) or 12606 edges (dense graph).
After a node or a directed edge is deleted, the activation traces recorded in the nodes' internal index tables are affected. Assume that an index table contains two items: A,B,C → X,Y,Z and B,C,D → U,X,Z. If nodes A, D, and Z are deleted, should the corresponding nodes also be deleted from the activation traces recorded in the index table? If they are deleted, the two items become B,C → X,Y and B,C → U,X. The input parts of these two items are now identical, so handling the different output parts becomes a challenge. Usually, during the initial period after network damage, nodes can hardly respond, and the original traces stored in the node index tables do not change. As the damage persists, nodes may gradually make adaptive adjustments to the damaged network. Given these two different situations, this paper proposes four restoration schemes, shown in Table <ref>, and compares them.
Figure <ref> demonstrates the impact of partial node damage on sample retrieval performance. Figure <ref>a and Figure <ref>b respectively display the changes in average accuracy and completeness of sample retrieval in the sparse graph for the four restoration schemes. In terms of average accuracy, the scheme that maintains the original traces performs the best, the scheme that takes the union performs the worst, and the other two schemes exhibit similar performance. Conversely, in terms of average completeness, the results are reversed. The scheme that takes the union performs the best, while the one that maintains the original traces performs the worst. This is because the union-taking scheme increases the number of activated nodes, which includes both incorrect and correct nodes. The former leads to a decrease in accuracy, while the latter leads to an increase in completeness. Figure <ref>c and Figure <ref>d present the results on the dense graph, revealing that when the network has a large number of edges, the differences between the four restoration schemes progressively diminish. This occurs because when network connectivity is high, the number of communication nodes in the subgraph generated by the sample is small, with most initial nodes being directly connected. Consequently, the accuracy remains consistently high. The frequency at which each node is shared by different samples is also reduced, so when a node is deleted, the number of samples it affects decreases, making the differences between the four solutions less noticeable.
Figure <ref> shows the impact of partial directed edge damage on sample retrieval. Since a node only has a local view, it can receive and transmit the information of neighboring nodes solely through its fan-in and fan-out edges. Node damage can be understood as the interruption of all fan-in and fan-out connections, so the impact on neighboring nodes is essentially the same, whether node damage or directed edge damage. Consequently, the same restoration schemes can be used. Figure <ref>a and Figure <ref>b respectively display the changes in average accuracy and completeness of sample retrieval for the four restoration schemes in the sparse graph. Their trends are almost consistent with Figure <ref>a and Figure <ref>b. Regarding accuracy, the notable difference between the two is that in the interval [0.8,1.0], Figure <ref>a maintains relatively high accuracy. Regarding completeness, the curve of Figure <ref>b is comparatively flat. Figure <ref>c and Figure <ref>d are the results of dense graphs. The results are also consistent with those of Figure <ref>c and Figure <ref>d. The difference is the same as that observed in the sparse graph, which indicates that the network is significantly more tolerant of directed edge damage than node damage, as nodes hold information while edges do not.
§.§ Performance testing on different classical network structures
The information storage and retrieval algorithm proposed in this paper is closely related to the network structure. Firstly, the algorithm utilizes the subgraph structure as the information storage carrier, and secondly, the subgraph formation depends on stimulus propagation. Both of these characteristics emphasize the importance of the network structure for the algorithm. Therefore, different network structure characteristics are key factors affecting the algorithm's performance.
Figure <ref> showcases six classic network structures. Figure <ref>a is an ER graph with p=0.1. Figure <ref>b represents a globally coupled network, also known as a fully connected network. Figure <ref>c shows the nearest-neighbor coupled network, characterized by N nodes arranged in a ring, with each node connected to its L nearest neighbors on each side. Figure <ref>d illustrates a star coupled network, featuring a central node to which all other nodes are connected; this causes any path between two points in the network to include the central node, creating a bottleneck for the entire network capacity. Figure <ref>e presents a one-dimensional Kleinberg network <cit.>, a small-world network <cit.> constructed by adding a few random edges to the nearest-neighbor coupled network. Figure <ref>f displays the Price network <cit.>, a scale-free network whose generation relies on the preferential attachment mechanism, where newly added nodes are more likely to connect to nodes with higher degrees. Since each newly added directed edge points from the new node to an old node, there are no cycles in the network, leading to a significant decrease in network connectivity and capacity.
Evaluation parameters for different network structures typically include average path length and clustering coefficient.
Average path length: defined as the average of the shortest path lengths between any two nodes in the network. If two nodes are disconnected, the shortest path length between them is by convention positive infinity, a situation that is common in directed graphs. To prevent the calculated value from being infinite, this paper uses the harmonic mean <cit.> of the distances between all pairs of nodes to represent the average path length.
In formula <ref> <cit.>, N represents the number of network nodes and d(i,j) the shortest distance from node i to node j. GE represents the network communication efficiency; the underlying idea is that the shorter the path distances between nodes, the higher the communication efficiency. The average path length calculated this way avoids the value becoming infinite when the network is disconnected and is therefore better suited to evaluating directed graph structures.
L = 1/GE, GE = 1/(N(N-1)) ∑_{i≠j} 1/d(i,j)
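A direct implementation of this harmonic-mean path length for directed graphs is sketched below; disconnected ordered pairs simply contribute 0 to GE, and the networkx graph at the end is only a toy example.

```python
# Sketch of the harmonic-mean average path length L = 1/GE for a directed graph.
import networkx as nx

def average_path_length(G):
    N = G.number_of_nodes()
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    ge = 0.0
    for i in G:
        for j in G:
            if i != j and j in lengths[i]:
                ge += 1.0 / lengths[i][j]        # unreachable pairs contribute 0
    ge /= N * (N - 1)
    return 1.0 / ge if ge > 0 else float("inf")

G = nx.gnp_random_graph(100, 0.05, directed=True, seed=1)
print(average_path_length(G))
```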
Clustering coefficient: This metric is used to measure whether the nodes in the network exhibit aggregation characteristics. This paper adopts the calculation method of the clustering coefficient in directed graphs proposed by Fagiolo <cit.>.
Table <ref> displays the test results of six network models with different structures but similar scales. The number of nodes in all test networks is 1000, and the single sample scale is 60. The ER random graph exhibits a small clustering coefficient and average path length, while the nearest-neighbor coupled network has a large clustering coefficient and average path length. The Kleinberg directed small-world network has a large clustering coefficient and a small average path length. Kleinberg's directed small-world and nearest-neighbor coupled network demonstrate relatively excellent network capacity among these six network types. Figure <ref> illustrates examples of sample storage corresponding to the two networks. A common feature observed in both networks is the presence of numerous small WCCs. As previously mentioned in the network capacity analysis, the essence of large network capacity lies in the arrangement and combination of WCCs. Since both networks have relatively high clustering coefficients, it is quite easy to form small components locally. Each subgraph can be considered a combination of several small components, resulting in a higher network capacity. The reason for the higher capacity of the Kleinberg network is that, due to its lower average path length, it is easier to form some large WCCs. These large WCCs not only exhibit higher distinguishability but also have a higher resource reuse rate for their nodes, thus positively impacting the improvement of network capacity. In contrast, these characteristics are not present in the ER random graph. Due to its low clustering coefficient and average path length, most weakly connected components formed by the ER random graph are large in scale. This is the difference in capacity caused by different network structures.
§ CONCLUSION
In this paper, we employ subgraphs as physical carriers for information storage and leverage nodes' autonomous adaptive learning behavior to achieve a large-capacity and stable directed graph storage model. The individual nodes' learning behavior does not need a global view, meaning that the tiny algorithms operating within each node do not work under strong central control and are entirely decentralized. Both the learning behavior and the supporting hardware resources are fine-grained and distributed and can, in theory, be highly parallel in physical implementation.
The storage capacity of the network depends on factors such as connectivity and network structure. The dense graph has better connectivity, the subgraphs generated by the samples are usually gathered together, and the communication nodes are rarely used. The measured capacity at this time is low, approaching the theoretical capacity limit that disallows resource reuse. Sparse graphs exhibit poor connectivity, and the sample-generated subgraphs are generally more dispersed, often consisting of several weakly connected components. In this case, the sample-generated subgraphs can be viewed as a permutation of connected components, significantly increasing the network capacity. Tests have shown that a sparse random directed graph with 500 nodes and 3101 edges can store nearly 8000 memory samples with over 80% accuracy and completeness. In contrast, a dense graph with 500 nodes and 12606 edges can only store around 300 memory samples.
Sparse graphs have fewer resources than dense graphs, yet the actual number of samples they can store is tens of times larger. This demonstrates that resource abundance is not the sole factor determining network capacity; the network's structural properties, such as connectivity, clustering coefficient, and average path length, are also crucial. Biological neuronal networks exhibit sparse connections and are characterized by large capacity and low power consumption. To some extent, this paper thus also provides a possible explanation for how biological neuronal networks achieve memory functions.
|
http://arxiv.org/abs/2307.04438v1 | 20230710092926 | Reconfigurable Intelligent Surface Assisted Railway Communications: A survey | [
"Aline Habib",
"Ammar El Falou",
"Charlotte Langlais",
"Marion Berbineau"
] | eess.SP | [
"eess.SP"
] |
Reconfigurable Intelligent Surface Assisted Railway Communications: A survey
Aline Habib1, Ammar El Falou2, Charlotte Langlais1, Marion Berbineau4
1 Mathematical and electrical engineering department, CNRS UMR 6285 Lab-STICC, IMT Atlantique, Brest, France
2 CEMSE Division, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
4 COSYS-LEOST, Université Gustave Eiffel, Villeneuve d'Ascq, France
Email: {aline.habib, charlotte.langlais}@imt-atlantique.fr, [email protected], [email protected]
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The number of train passengers and the demand for high data rates to support new services such as video streaming and IoT technologies are continuously increasing. Therefore, the exploitation of the millimeter wave (mmWave) band is a key technology to meet this demand. However, high penetration loss makes mmWave very sensitive to blockage, limiting its coverage area. One promising, efficient, and low-cost solution is the reconfigurable intelligent surface (RIS). This paper reviews the state of the art of RIS for railway communications in the mmWave context. First, we present the different types of RIS and review some optimization algorithms used in the literature to find the RIS phase shifts. Then, we review recent works on RIS in the railway domain and provide future directions.
RIS, Railway communications, mmWave.
§ INTRODUCTION
The need to double the capacity of existing rail networks and, at the same time, to increase the overall quality of service is leading to a drastic increase in the demand for high data rates and robust, low-latency data exchange between the different actors of the rail system. This multiplication of transmission needs ultimately leads to problems of spectrum scarcity. In this context, using mmWave bands opens up new opportunities. However, mmWaves suffer from very high attenuation and high sensitivity to various masking effects. Here, Reconfigurable Intelligent Surfaces offer promising application use cases.
Reconfigurable Intelligent Surface, known in the literature by several nomenclatures as Software-Controlled Metasurface <cit.>, Intelligent Reflecting Surface (IRS) <cit.>, Large Intelligent Surface (LIS) <cit.>, and Reconfigurable Smart Surface (RSS) <cit.>, is an electromagnetic-based reconfigurable structure
that turns the random nature of the propagation channel into a controllable and programmable radio environment. RIS is a thin planar meta-surface made of several low-cost reflective elements <cit.>. Each RIS element adjusts the phase and amplitude of the incident wave to reflect it into a beam toward the target direction. This improves the signal quality and extends the coverage area especially when the direct link is blocked. The paper's main objective is to provide the reader with the basic elements to understand RIS and its interest in a railway communication environment. To do so, we review the literature in the domain and propose some future research directions.
The rest of the paper is organized as follows. Section <ref> provides a literature overview related to RIS, such as the different RIS structures and types, and their opportunity in the context of mmWave communications. We stress the need for realistic channel models in order to properly evaluate the performance of RIS-assisted systems. Section <ref> focuses on very recent works investigating RIS-assisted systems for railway communications. Finally, in Section <ref>, some future directions are drawn, and Section <ref> concludes the paper.
§ RECONFIGURABLE INTELLIGENT SURFACE
§.§ RIS General Overview
The main objective of a RIS is to provide a programmable radio environment between a transmitter (Tx), typically a base station (BS) in the downlink case, and a receiver (Rx), typically a remote user equipment (UE), by changing the phase shifts and amplitude of the RIS incident wave as follows <cit.>
z_n=β_ne^jθ_n,
where z_n is the reflection coefficient of the n^th element, β_n and θ_n are the adjustments in amplitude and phase due to the n^th element.
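As a toy illustration of Eq. (<ref>) (our own sketch, not part of the cited works), the following Python/NumPy snippet applies per-element reflection coefficients z_n = β_n e^jθ_n to an incident field; all numerical values are arbitrary placeholders.

import numpy as np

N = 16                                                 # number of RIS elements (assumed)
incident = np.exp(1j * np.linspace(0, np.pi, N))       # incident field at each element
beta = 0.9 * np.ones(N)                                # amplitude adjustments beta_n (passive, <= 1)
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)   # phase shifts theta_n set by the controller

z = beta * np.exp(1j * theta)                          # reflection coefficients z_n of Eq. (1)
reflected = z * incident                               # field re-radiated by each element
print(np.round(reflected[:4], 3))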
As the RIS should not encompass too many RF and signal processing resources to maintain a low level of energy consumption and complexity, the BS computes the needed tunable parameters and transfers commands to each RIS element thanks to a smart controller <cit.> as seen in Fig.<ref>.
To adjust the phase shift and amplitude of the incident wave, the RIS consists of adjustable components, such as diodes and liquid crystals. Diodes adjust the signal by changing the bias voltage, while liquid crystals adjust the electromagnetic signal by changing material parameters such as conductivity and permeability <cit.>. A PIN-diode-based RIS consists of three layers: 1) the outer layer, with printed metal patches on a dielectric substrate, which directly processes the incident signals; 2) the intermediate layer, a copper panel that prevents signal energy loss; and 3) the inner layer, a control board driven by a programmable digital electronic circuit (FPGA) that allows real-time adjustment of the reflection coefficients of the RIS elements <cit.>.
Two reflection paradigms govern propagation in the context of RIS-assisted communication systems, namely the specular reflection paradigm and the scattering reflection paradigm <cit.>. The difference is mainly related to the relation between the RIS size A_t and the distance D between BS-RIS or RIS-UE, as follows:
* The specular reflection paradigm: the transmission occurs in the near-field, i.e., D<d_lim = 2A_t/λ[d_lim denotes the Rayleigh distance and is defined by d_lim=2A_t/λ with A_t the RIS area and λ the wavelength <cit.>.]. The path loss, in this case, depends on the summation of the distances between BS-RIS and RIS-UE.
* The scattering reflection paradigm: the transmission occurs in the far-field, i.e., D>d_lim. In this case, the path loss depends on the product of the BS-RIS and RIS-UE separation distances.
In the case of a passive RIS (β_n ≤ 1), the RIS elements reflect the signal without amplification. Thus, in the context of scattering reflection communications (far-field), and by assuming the optimal phase shifts, the received power at the UE for the indirect link via passive RIS is expressed as <cit.>
P_r^UE = P_t G_t G_r (λ/4π)^4 (d_0)^(2μ-4)/(d_1 d_2)^μ N^2
where P_t is the transmitted power at the BS, G_t and G_r are the transmit and receive antenna gains at the BS and the UE, respectively, d_0 is the reference distance in free space, d_1 and d_2 are the BS-RIS and RIS-UE distances, μ is the path loss exponent depending on the environment type (e.g., μ≥ 3 for urban environments), and N is the number of RIS elements. Thus, the passive RIS gives a gain proportional to N^2. However, the passive RIS has limitations due to the double-path-loss effect: the signal traverses two cascaded channels, the Tx-RIS link and the RIS-Rx link <cit.>. Consequently, the received power via the indirect link can exceed that of the direct link only if N is large and/or the direct link is weak or blocked. To illustrate this concept, we plot in Fig. <ref> the power received at the UE via an attenuated direct link, a direct link without attenuation, and the indirect link via the RIS, versus the number of RIS elements. The mmWave channel links are generated using an extended version of the New York University simulator NYUSIM <cit.>. Note that, to satisfy the RIS scattering reflection paradigm, the distances d_1 and d_2 must lie in the far-field region. As the RIS size increases, the distance below which the RIS operates in the near-field also increases. Thus, for d_1 and d_2 to remain in the far-field and for equation (<ref>) to be valid, N must not exceed a certain N_max, computed from the Rayleigh distance d_lim and represented by a square in Fig. <ref> <cit.>. The behavior of the RIS in the near-field is an interesting research topic.
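For intuition only, the sketch below (ours, with illustrative numbers rather than those of Fig. <ref>) evaluates Eq. (<ref>) for a passive RIS and caps N at an N_max derived from the Rayleigh distance d_lim = 2A_t/λ so that the far-field assumption behind the formula still holds; the carrier frequency, antenna gains, distances, and element size are assumptions.

import numpy as np

f_c = 28e9                                  # carrier frequency in Hz (assumed)
lam = 3e8 / f_c                             # wavelength
P_t, G_t, G_r = 1.0, 10.0, 10.0             # transmit power (W) and antenna gains (assumed)
d0, d1, d2, mu = 1.0, 40.0, 20.0, 3.0       # reference and link distances (m), path-loss exponent

def p_rx_ris(N):
    # Received power at the UE via a passive RIS, Eq. (2), scattering (far-field) regime.
    return P_t * G_t * G_r * (lam / (4 * np.pi)) ** 4 \
           * d0 ** (2 * mu - 4) / (d1 * d2) ** mu * N ** 2

# Far-field validity: d_lim = 2 * A_t / lam must remain below min(d1, d2).
element_area = (lam / 2) ** 2               # half-wavelength square element (assumed)
N_max = int(min(d1, d2) * lam / (2 * element_area))
print(N_max, p_rx_ris(min(256, N_max)))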
§.§ RIS types
To overcome this limitation and obtain an efficient RIS when the direct link exists or the number of RIS elements is low, the authors of <cit.> propose an active RIS that can amplify the reflected signals through amplifiers embedded in the RIS elements. The simulation results in a direct link scenario without attenuation for 256 RIS elements reveal a negligible sum-rate gain
of 3 % using the passive RIS, while their proposed active RIS offers a significant sum-rate gain of 67 % compared to the case without RIS.
Nevertheless, a RIS with a large number of active elements consumes more energy. Thus, the authors in <cit.> propose a novel type of RIS composed of active and passive reflective elements, called hybrid RIS, to deal with the limited power budget of the RIS.
RIS based on continuous phase shifts is considered an ideal system that is difficult to implement in practice. Therefore, RIS based on finite discrete phase shifts is the alternative solution to cope with this hardware constraint. To this end, the authors in <cit.> compare the performance of RIS systems with continuous and discrete phase shifts and they find that 3 levels of quantization are sufficient to obtain full diversity.
§.§ RIS optimization
The efficient functioning of the RIS is strongly affected by the adopted phase shifts θ_n. For instance, in Single Input Single Output (SISO) systems, the optimal phase shift of a RIS is easily determined analytically as follows <cit.>
θ_n=θ_tn+θ_nr.
where θ_tn and θ_nr are the phases of the LoS paths in the BS-RIS and RIS-UE channels, respectively.
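To make Eq. (<ref>) concrete, the short sketch below (our illustration on synthetic unit-amplitude channels, with the phase-sign convention stated in the comments) sets θ_n = θ_tn + θ_nr and verifies that the cascaded channel then combines coherently, which is the origin of the N^2 gain discussed earlier.

import numpy as np

rng = np.random.default_rng(1)
N = 100
theta_tn = rng.uniform(0, 2 * np.pi, N)     # BS -> RIS path phases
theta_nr = rng.uniform(0, 2 * np.pi, N)     # RIS -> UE path phases
h = np.exp(-1j * theta_tn)                  # unit-amplitude BS-RIS channel (phase = -theta_tn, assumed convention)
g = np.exp(-1j * theta_nr)                  # unit-amplitude RIS-UE channel (phase = -theta_nr)

theta_opt = theta_tn + theta_nr             # Eq. (3): optimal SISO phase shifts
aligned = np.abs(np.sum(h * np.exp(1j * theta_opt) * g)) ** 2
random_cfg = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g)) ** 2
print(aligned, random_cfg)                  # ~N^2 = 10000 versus O(N) for a random configuration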
However, it is hard to find the optimal phase shifts analytically in the case of Multiple Input Multiple Output (MIMO) systems. To this end, an optimization algorithm is needed. <cit.> studied multi-user Multi Input Single Output (MISO) downlink communications assisted by RIS, where the objective is to maximize the weighted sum rate to find the optimized passive beamforming θ_n and the optimized precoding at the BS.
To solve this non-convex problem, they used the Lagrangian Dual Transform which transforms the sum-of-logarithms-of-ratio to an alternative form.
The authors in <cit.> discussed an indoor MISO multi-user system with a channel model based on the Rician K-factor. The RIS phase shifts were configured as follows
θ_n^* = arg(H_d^H) - arg(H_l^H) - arg(H),
where H_d is the direct channel between the Tx and the Rx, H is the channel between Tx and RIS, and H_l is the channel between the RIS and the lth user.
In <cit.>, the authors adopted a low-complex algorithm called the cosine similarity algorithm. The latter aims to find the sub-optimal phase shifts of the RIS that maximize the channel gain. Moreover, to minimize the transmitted power given the bit error rate for a RIS-assisted single-user multipath uplink system,
the authors of <cit.> propose an iterative algorithm to jointly optimize precoding and passive beamforming. In addition, a deep learning algorithm is applied in <cit.> to maximize the received signal-to-noise ratio and find the optimal phase shifts of RIS.
§.§ RIS versus Relay
Both RIS and relay aim to improve signal quality and coverage. However, there are two main differences.
* In the case of RIS, a power supply is only needed to configure the RIS components based on low-cost materials (diodes, switches...). Once the configuration is done, the RIS becomes passive, and no power supply is needed <cit.>. However, relays are generally considered active devices connected to active electronics such as analog-to-digital converters, digital-to-analog converters, amplifiers, etc., which require a power supply for operation. As a result, relays are more complex to implement and consume more energy than RIS <cit.>.
* A RIS operates in a full duplex mode while relays generally work in a half-duplex mode. Relays can still operate in full duplex mode, but this increases their cost, since appropriate antennas and analog and/or digital signal processing, to eliminate loop-back self-interference, are required <cit.>.
§.§ RIS is an opportunity for mmWave communications
The mmWave band, ranging from 30 to 300 GHz, offers enormous free bandwidth and high data rate possibilities <cit.>, unlike the overloaded low-frequency spectrum. However, it is highly susceptible to oxygen absorption and rain attenuation, and also suffers from penetration loss that makes mmWave signals easily blocked. Therefore, the coverage of mmWave communications is limited <cit.>. On the other hand, when the direct link is blocked or strongly attenuated, a RIS is a competitive solution to extend the coverage area and connectivity <cit.>. The location of the RIS should be optimized to obtain two efficient connections: the BS-RIS link and the RIS-UE link.
The authors in <cit.> discuss the size limitation of the RIS in low frequencies below 6 GHz, which makes their deployment in this band inefficient. A study of the specific propagation characteristics of the terahertz band is needed to use RIS in these frequencies, and the most important implementation of RIS today is in the mmWave band.
In the literature, the most used channels in RIS-assisted systems are the theoretical channels such as Rice for Line-of-Sight (LOS) environments, and Rayleigh for non-LOS (NLOS) <cit.>, <cit.>. To fill the gap towards realistic channel modeling and simulator, the authors in <cit.> propose a novel geometrical channel simulator, called SimRIS. This simulator is based on statistical modeling and can be used in indoor and outdoor environments at 28 and 73 GHz frequencies. Moreover, in <cit.>
the authors extend QuaDRiGa, a simulator used to model MIMO radio channels at sub-6GHz and mmWave frequencies, to handle RIS. This simulator is convenient for RIS-assisted MIMO systems with a mobile Rx or mobile RIS. In addition, <cit.> discusses the extension of NYUSIM, a mmWave channel simulator based on extensive measurements and well-used to assess MIMO systems <cit.>, to generate realistic channels for RIS-assisted systems.
§ RIS-ASSISTED RAILWAY COMMUNICATIONS
§.§ Railway environments characteristics
Railway environments are known to be very complex and harsh from a radio point of view. Various obstacles, such as the pylons supporting the catenary, and rapid transitions between different scenarios (cutting/tunnel, cutting/viaduct) can create severe radio impairments. Railway tunnel size and shape are very specific, depending on the category of the train. Radio propagation inside tunnels is often modeled using ray-tracing tools <cit.>, <cit.>. It is also important to mention that MIMO system performance in tunnels is subject to possible impairments depending on the spatial correlation in the tunnel and on the keyhole phenomenon <cit.>. Due to its high speed, the train can rapidly go through diverse scenarios. In addition, Doppler effects and possible interference due to the proximity of the high-voltage catenary in the vicinity of the antennas render railway environments very specific compared to the indoor, urban, or suburban environments generally considered today for the use of mmWave communication systems. A detailed description of railway-specific environments can be found in <cit.>.
Considering the capability of RIS to solve the blockage problems in mmWave wireless communications, the use of RIS for railway communications has recently been considered as a promising candidate.
§.§ RIS-assisted railway communications
§.§.§ RIS for high-speed trains
<cit.> discusses the need for RIS in the High-Speed Railway (HSR) environment for mmWave communications to improve signal quality, which suffers from frequent blockages caused by high-speed trains. The authors apply a deep reinforcement learning (DRL)-based approach to jointly optimize the RIS phase shifts and the BS beamforming for spectral efficiency maximization. The results show a significant improvement in spectral efficiency using DRL compared to the traditional approach.
In <cit.>, the authors describe how to use RIS on high-speed trains to improve communication performance by providing beamforming, interference mitigation, and reducing signal attenuation. They present a detailed discussion of the challenges associated with the RIS deployment on these trains, such as the need for tracking of the train, low latency, and high-speed RIS control, and the impact of train vibration on the RIS performance. They also propose the DRL approach to solve the sum rate maximization problem.
<cit.> deals with interference suppression in an HSR network composed of a BS, a mobile relay (MR) located on the train, a RIS located near the MR, and an interference source. The authors maximize the channel capacity using a DRL solution and consider outdated channel state information (CSI) to account for the motion of the train. They find that deploying a RIS in close proximity to the embedded MR improves interference suppression and that their algorithm is more effective in suppressing interference than other optimization algorithms based on mathematical formulations.
<cit.> proposes a new interrupt flow scheduling approach for RIS-assisted downlink mmWave HSR communications where multiple mobile relays exist. Given the existence of eavesdroppers, the BS schedules a number of flows for each MR when the MR flow quality of service (QoS) exceeds the QoS requirement. The authors seek to maximize the number of scheduled flows by jointly optimizing the beamforming, the RIS discrete phase shifts, and the flow scheduling decisions, and they find that the RIS can enhance communication security by reducing the eavesdropping capacity and extending the coverage area in HSR environments.
§.§.§ RIS in railway tunnels
In <cit.>, the authors consider a simple two-dimensional empty tunnel. Using the image theory approach and a vertical blocking element between a Tx and an Rx inside the tunnel, they show that a RIS located on the ceiling of the tunnel can reduce the blocking probability (BP) of the signal between Tx and Rx. Increasing the number of RISs and optimizing the Tx position lead to an additional decrease in BP, and increasing the distance between the RIS and the Tx can extend the effective range of the RIS for a given BP. This study could be extended by considering a train inside a 3D tunnel.
§.§.§ RIS for passengers inside trains
Recently RIS technology has been studied to extend the coverage area in the mmWave band inside an airplane cabin <cit.>. The authors aim to minimize the number of RIS deployed in this system while ensuring the user data rate remains above a threshold. Besides, they compare the performance of this system for two RIS positions in the cabin corridor near the seat and above the center seat. This study could be easily transposed to the case of the inside of a high-speed train or inside a metro to guarantee a given throughput for the passengers.
§ FUTURE DIRECTIONS
As discussed in the previous sections, RIS offers a promising low-cost solution to the blocking problems in railway networks, since it improves the efficiency and reliability of high-speed trains, mitigates interference, and extends the coverage area through controlled signal reflection. In the case of high-speed trains, channel estimation for RIS-assisted communications is a crucial challenge due to the rapid change of environments. Future research could explore RIS-assisted wireless communications in tunnels, especially when the vertical cross-section of the train is large compared to the tunnel cross-section, which increases the probability of signal blockage. The transition of the train between the inside and the outside of a tunnel is also particularly challenging; this scenario is increasingly relevant with the development of urban transport, in particular driverless metro systems, which require high-data-rate transmissions. The optimization of RIS-assisted communications in these cases will require the development of realistic channel models. It would also be interesting to study the optimal location of the RIS, the number of RIS elements, or the number of RISs themselves needed in these systems to maximize the coverage inside the tunnel and to meet the ever-increasing passenger throughput demand onboard the trains.
§ CONCLUSION
This paper presents a survey on RIS-assisted communications for railway applications, particularly in the mmWave band. First, we have defined the RIS concept, explaining its structure, and different types of RISs. A review of the various optimization algorithms used in the literature for RIS-assisted systems is proposed, and we highlight the ability of RIS to solve the blocking problem of mmWave. In the last section, the paper outlines
the characteristics of the railway environments and details some recent works concerning the use of RIS in high-speed trains. This topic is a very active field of research and we have proposed some future directions for RIS-assisted railway communications.
§ ACKNOWLEDGMENT
This work was funded by the council of the Region Bretagne, under the grant MILLIRIS.
|
http://arxiv.org/abs/2307.07250v2 | 20230714095126 | Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning | [
"Byung-Kwan Lee",
"Junho Kim",
"Yong Man Ro"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Mitigating Adversarial Vulnerability through Causal Parameter Estimation
by Adversarial Double Machine Learning
Byung-Kwan Lee*, Junho Kim*, Yong Man Ro† (*Equal contribution. †Corresponding author.)
Image and Video Systems Lab, School of Electrical Engineering, KAIST, South Korea
{leebk, arkimjh, ymro}@kaist.ac.kr
August 12, 2023
======================================================================================================================================================================================================================
Adversarial examples derived from deliberately crafted perturbations on visual inputs can easily harm decision process of deep neural networks. To prevent potential threats, various adversarial training-based defense methods have grown rapidly and become a de facto standard approach for robustness. Despite recent competitive achievements, we observe that adversarial vulnerability varies across targets and certain vulnerabilities remain prevalent. Intriguingly, such peculiar phenomenon cannot be relieved even with deeper architectures and advanced defense methods. To address this issue, in this paper, we introduce a causal approach called Adversarial Double Machine Learning (ADML), which allows us to quantify the degree of adversarial vulnerability for network predictions and capture the effect of treatments on outcome of interests. ADML can directly estimate causal parameter of adversarial perturbations per se and mitigate negative effects that can potentially damage robustness, bridging a causal perspective into the adversarial vulnerability. Through extensive experiments on various CNN and Transformer architectures, we corroborate that ADML improves adversarial robustness with large margins and relieve the empirical observation.
§ INTRODUCTION
Along with the progressive development of deep neural networks (DNNs) <cit.>, AI safety has come into prominence in various computer vision research <cit.>. In particular, adversarial examples <cit.> are known as potential threats to AI systems. With deliberately crafted perturbations on the visual inputs, adversarial examples are hardly distinguishable to human observers, but they easily mislead the decision process of DNNs. Such adversarial vulnerability undermines the reliability of DNN inference and discourages AI adoption in safety-critical areas <cit.>.
In order to achieve robust and trustworthy DNNs from adversarial perturbation, previous methods <cit.> have delved into developing various adversarial attack and defense algorithms in the sense of cat-and-mouse game. As a seminal work, Madry <cit.> have paved the way for obtaining robust network through adversarial training (AT) regarded as an ultimate augmentation training <cit.> with respect to adversarial examples. Based on its effectiveness, various subsequent works <cit.> have investigated it to further enhance adversarial robustness.
Although several AT-based defense methods have become a de facto standard due to their competitive adversarial robustness, we found an intriguing property of the current defense methods. As in Figure <ref>, we identify that the adversarial robustness for each target class varies significantly, with a large gap, and this phenomenon happens equally across (a) network architectures and (b) various AT-based defense methods. In addition, we would like to point out that particular targets remain far more vulnerable than others even with advanced architectures <cit.> and defense methods <cit.>. We argue that such a phenomenon is derived from the current learning strategies of AT-based defense methods, which lack an understanding of the causal relations between the visual inputs and predictions. When considering AT methods as the ultimate augmentation <cit.>, current methods rely solely on strengthening the correlation between adversarial examples and target classes through canonical objectives that improve robustness. To fundamentally address such vulnerability and understand the causal relation, we need to quantify the degree of vulnerability (causal parameter) and mitigate its direct effects on the network predictions.
Accordingly, we investigate the AT-based defense methods in a causal viewpoint and propose a way of precisely estimating causal parameter between adversarial examples and their predictions, namely Adversarial Double Machine Learning (ADML). We first represent a causal diagram of AT-based methods and interpret it as a generating process of robust classifiers as illustrated in Figure <ref>. Regarding standard adversarial training <cit.> as an optimizing procedure for the robust network parameters f with respect to the worst perturbations t, we can instantiate a generation g[Selecting g as proper perturbations varies according to domain specific tasks (rotations, translations <cit.>, or spatial deformations <cit.>).] as an adversarial attack of projected gradient descent (PGD) <cit.> for the given clean examples x.
Then, our research question is how to quantitatively compute the causal parameter θ between the perturbations t and target classes y, and how to identify the causal effects on the outcomes of our interest. Through double machine learning (DML) <cit.>, widely studied as a powerful causal estimator <cit.> given two regression models, we can establish an initial research point for estimating the causal parameter of adversarial perturbations with theoretical grounding. However, it is difficult to directly estimate θ on high-dimensional manifolds, especially for DNNs. In this paper, we shed some light on identifying the causal parameter of the perturbations while theoretically bridging the gap between causal inference and adversarial robustness. Then, by minimizing the magnitude of the estimated causal parameter, we essentially lessen the negative causal effects of adversarial vulnerability, and consequently acquire a robust network with the aforementioned phenomenon alleviated.
To corroborate the effectiveness of ADML on adversarial robustness, we set up extensive experiments with four publicly available datasets <cit.>. Our experiments include various convolutional neural network (CNN) architectures, as well as Transformer architectures that have drawn great attention in both vision and language tasks <cit.> yet remain relatively understudied in adversarial research.
Our contributions can be summarized as follows:
* We present an empirical evidence that despite the recent advances in AT-based defenses, fundamentally adversarial vulnerability still remains across various architectures and defense algorithms.
* Bridging a causal perspective into adversary, we propose Adversarial Double Machine Learning (ADML), estimating causal parameter in adversarial examples and mitigating its causal effects damaging robustness.
* Through extensive experiments and analyses on various CNN and Transformer architectures, we corroborate intensive robustness of our proposed method with the phenomenon alleviated.
§ BACKGROUND AND RELATED WORK
Notation. We deal with DNNs for classification as in Figure <ref>, represented by f:𝒳→𝒴, where 𝒳 and 𝒴 denotes image and probability space, respectively. Let x∈𝒳 denote clean images and y∈𝒴 indicate (one-hot) target classes corresponding to the images. Adversarial examples x_adv are generated by adversarial perturbations t through DNNs, such that x_adv=x+t. Here, the perturbations are carefully created through the following formulation:
max_‖t‖_∞≤γ ℒ_CE(f(x+t),y),
where ℒ_CE represents a pre-defined loss such as cross-entropy for the classification task. We regard the adversarial perturbation t as an l_∞ perturbation within the γ-ball (perturbation budget). Here, ‖·‖_∞ denotes the l_∞ perturbation magnitude.
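As a hedged sketch of the maximization in Eq. (<ref>) (ours, not the authors' implementation), a basic l_∞ PGD loop in PyTorch can be written as follows; the step size and iteration count are illustrative choices.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, gamma=8/255, steps=10, step_size=2/255):
    # Approximate argmax of L_CE(f(x+t), y) over the l_inf gamma-ball via projected gradient ascent.
    t = torch.empty_like(x).uniform_(-gamma, gamma)    # random start inside the gamma-ball
    for _ in range(steps):
        t.requires_grad_(True)
        loss = F.cross_entropy(model(x + t), y)
        grad = torch.autograd.grad(loss, t)[0]
        with torch.no_grad():
            t = t + step_size * grad.sign()            # ascend the loss
            t = t.clamp(-gamma, gamma)                 # project back onto the l_inf ball
            t = (x + t).clamp(0, 1) - x                # keep x + t a valid image
    return t.detach()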
§.§ Adversarial Training
After several works <cit.> have found that human-imperceptible adversarial examples easily break network predictions, Madry <cit.> have thrown a fundamental question: “How can we make models robust to adversarial examples with security guarantee?”. To answer it, they have introduced the concept of empirical risk minimization (ERM) serving as a recipe to obtain classifiers with small population risk. Thanks to its reliable guarantee, they have consolidated it on the purpose of adversarial defense and accomplished the yardstick of adversarial training. The key factor of its achievement is regarding adversarial training as min-max optimization in a perspective of saddle point problem, which can be written as follows:
min_f 𝔼_(x, y)∼𝒟[ max_‖t‖_∞≤γ ℒ_CE(f(x+t),y) ],
where 𝒟 denotes the set of data samples (x, y). To perform the inner maximization in Eq. (<ref>), they presented an adversarial attack based on PGD, an ultimate first-order adversary that applies a multi-step variant of the fast gradient sign method <cit.> starting from a random perturbation around the clean images x.
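Plugging such an inner step into the outer minimization of Eq. (<ref>) yields the usual AT loop; the simplified sketch below is ours and assumes the pgd_attack helper defined above together with a standard PyTorch dataloader and optimizer.

import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    # Outer minimization over the network parameters f in Eq. (2).
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        t = pgd_attack(model, x, y)               # inner maximization: worst-case perturbation
        loss = F.cross_entropy(model(x + t), y)   # empirical risk on adversarial examples
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()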
According to its impact, various adversarial training methods <cit.> have grown exponentially and become de facto standards robustifying DNNs against adversarial perturbation. Zhang <cit.> have pointed out the trade-off between clean accuracy and adversarial robustness, and reduced the gap between clean errors and robust errors. Wang <cit.> have claimed that all of clean images are used to perform both inner-maximization and outer-minimization process in Eq. (<ref>), irrespective of whether they are correctly classified or not. Thus, they have focused on misclassified clean images prone to be easily overlooked during adversarial training and demonstrated their significant impacts on the robustness by incorporating an explicit regularizer for them. Wu <cit.> have studied loss landscapes with respect to network parameters and shown a positive correlation between the flatness of the parameter loss landscapes and the robustness. In the end, they have presented a double-perturbation mechanism where clean images are perturbed, while network parameters are simultaneously perturbed as well.
On the other hand, we plunge into investigating where adversarial vulnerability comes from and observe that the vulnerability varies along target classes, and it significantly deteriorates network predictions. Further, we find that this phenomenon commonly happens across various network architectures and advanced defense methods. To relieve such peculiarity, we deploy double machine learning (DML) that helps to capture how treatments (adversarial perturbations) affect outcomes of our interests (network predictions), which is one of the powerful causal inference methods. After we concisely explicate the necessary background of DML, we will bridge it to the adversary in Sec. (<ref>).
§.§ Double Machine Learning
In data science and econometrics, one of the fundamental problems is how to measure causality between treatments t and outcomes of our interest y among high-dimensional observational data samples (see Figure <ref>) to identify data generating process. At a first glance, it seems simple to compute their causality, but we should keep in mind the possibility for the existence of covariates x affecting both treatments and outcomes. In other words, for example, if one may want to know the causal effects of drug dosage t to blood pressure changes y, one needs to collect observational data with respect to a variety of patients characteristics and their clinical histories x, so as not to fall into biased environment. In reality, though, it is impossible to collect observational data including all covariates concerning treatments and outcomes, so it is not an easy problem to catch genuine causality under the unknown covariates x. Therefore, there has been a growing demand for robustly predicting the unbiased causal relation, despite with the limited data samples.
Recently, the advent of double machine learning (DML) <cit.> enables us to clarify the causality between treatments t and outcomes y, when two regression models are given. The formulation of initial DML can be written as:
y = f(x) + θ t + u, (𝔼[u| x, t]=0)
t = g(x) + v, (𝔼[v | x]=0)
where θ∈ℝ denotes the causal parameter representing the causal relation between t∈ℝ^d and y∈ℝ^d. In addition, f indicates one regression model projecting covariates to the outcome domain, and g denotes another regression model generating the treatments t. Since the two regression models f and g are not the main interest of DML, they are called nuisance parameters for estimating the causal parameter θ. Note that early DML assumes the problem is posed in a partially linear setting in the Robinson style <cit.> described in Eq. (<ref>), where “partially” literally means that the treatments t∈ℝ^d are linearly connected to the outcome y∈ℝ^d, while the covariates x are not. In addition, it is supposed that the conditional expected errors of u∈ℝ^d and v∈ℝ^d equal the zero vector 0∈ℝ^d.
To obtain the causal parameter θ, Chernozhukov <cit.> have provided a solution for estimating the causal parameter, θ̂ = (y-𝔼[y| x]) · v / ‖v‖^2, which satisfies Neyman-orthogonality <cit.>. This makes θ̂ insensitive to errors in the two nuisance parameters and reduces the variance of the causal parameter. Furthermore, they have addressed a chronic problem that θ is only accessible when the two nuisance parameters lie in a Donsker class, a condition that deep neural networks do not satisfy. They have theoretically demonstrated that sample-splitting plus cross-fitting can effectively relax the Donsker condition and allow a broad array of modern ML methods <cit.> to compute an unbiased causal parameter θ.
Following this principle, they first split the data samples {𝒟_1, 𝒟_2}∼𝒟 and divided the process of causal inference into two steps: (a) training the two nuisance parameters f and g with 𝒟_1, and (b) estimating an unbiased θ with 𝒟_2. Here, the data samples 𝒟_2 used to estimate unbiased causal parameters must not overlap with the samples 𝒟_1 utilized to train the nuisance parameters. To make copious combinations, they swapped the roles of the partitioned data samples 𝒟_1⇋𝒟_2 or repeatedly split 𝒟. Subsequently, they performed cross-fitting (k-fold cross validation) by averaging the estimated causal parameters from the various split samples.
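To ground the procedure, here is a compact, self-contained sketch (our illustration on simulated data, not the authors' code) of the partially linear DML estimator with sample splitting and cross-fitting; random forests stand in for the nuisance models.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, theta_true = 2000, 1.5
x = rng.normal(size=(n, 5))                                           # covariates
t = np.sin(x[:, 0]) + rng.normal(scale=0.5, size=n)                   # t = g(x) + v
y = theta_true * t + np.cos(x[:, 1]) + rng.normal(scale=0.5, size=n)  # y = f(x) + theta*t + u

def dml_theta(x, y, t, k=2):
    folds = np.array_split(rng.permutation(len(y)), k)
    estimates = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        m_y = RandomForestRegressor(n_estimators=100).fit(x[train], y[train])  # nuisance for E[y|x]
        m_t = RandomForestRegressor(n_estimators=100).fit(x[train], t[train])  # nuisance g
        v = t[test] - m_t.predict(x[test])               # treatment residual on held-out fold
        r = y[test] - m_y.predict(x[test])               # outcome residual on held-out fold
        estimates.append(np.sum(r * v) / np.sum(v * v))  # Robinson-style estimate
    return np.mean(estimates)                            # cross-fitting: average over folds

print(dml_theta(x, y, t))                                # close to theta_true = 1.5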
Along with the success of initial DML in partially linear settings, numerous variants <cit.> have emerged, and they have extended its initial nature to non-parametric settings with continuous treatments t in order to capture more complicated non-linear causal relations in a debiased state. A non-parametric formulation <cit.> represents a more general problem setup of DML as follows:
y = f(x, t) + u, (𝔼[u| x, t]=0)
t = g(x) + v, (𝔼[v | x]=0)
where there is no explicit term for causal parameter θ exhibiting causal relation between treatments t and outcomes y, compared to Eq. (<ref>). Colangelo <cit.> have introduced a way of estimating causal parameter θ applicable to non-parametric settings with high-dimensional continuous treatments t∈𝒯, which can be written as:
θ̂ = ∂/∂ t 𝔼[y|do(𝒯=t)].
They have utilized the do-operator <cit.> commonly used in graphical causal models and intervened on treatments t to compute an interventional expectation 𝔼[y|do(𝒯=t)]. It represents the expected outcome averaged over all the possible covariates for the given fixed treatments t, such that 𝔼[y|do(𝒯=t)]=∑_x∈𝒳𝔼[y| x,t]p(x). Specifically, they have estimated the causal parameter θ by measuring how much the interventional expectation shifts once the treatments are changed slightly. Since the most important property of DML is Neyman-orthogonality, which helps to robustly estimate the causal parameter, the interventional expectation should also be modified to satisfy the property <cit.> of invariance to the nuisance parameters f and g. Its formulation can be written as follows (see details in Appendix A):
𝔼[y|do(𝒯=t)]=𝔼_𝒟_t[f(x,t)+(y-f(x,t))/p(𝒯=t| x)],
where 𝒟_t denotes a set of observational covariates and outcome samples for a fixed t∈𝒯 such that (x,y)∼𝒟_t, a sub-population of 𝒟. Note that, p(𝒯=t| x) is related to treatment generator g. Here, differentiating Eq. (<ref>) enables us to acquire unbiased causal parameter in non-parametric settings with non-linear causal relation.
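For a discretized (here binary) treatment, this Neyman-orthogonal interventional expectation can be approximated with the standard doubly robust form; the sketch below is our simplified illustration on simulated data and is not the kernel-based estimator used for continuous treatments.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=(n, 3))
t = (rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)        # treatment depends on covariates
y = 2.0 * t + x[:, 1] + rng.normal(scale=0.5, size=n)

f_hat = RandomForestRegressor(n_estimators=100).fit(np.c_[x, t], y)  # nuisance f(x, t)
p_hat = RandomForestClassifier(n_estimators=100).fit(x, t)           # nuisance for p(T=t|x)

def interventional_mean(t_val):
    # E[y | do(T=t_val)]: outcome-model term plus inverse-propensity correction on D_t.
    f_all = f_hat.predict(np.c_[x, np.full(n, t_val)])
    prop = np.clip(p_hat.predict_proba(x)[:, t_val], 0.05, None)
    correction = np.where(t == t_val, (y - f_all) / prop, 0.0)
    return np.mean(f_all + correction)

print(interventional_mean(1) - interventional_mean(0))               # close to the true effect 2.0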
In brief, DML captures unbiased causal relation between treatments t and outcomes y even with finite data samples, of which theoretical ground is (a) Neyman-Orthogonality for robustly estimated causal parameter despite undesirable outputs of nuisance parameters, and (b) sample-splitting plus cross-fitting for debiased causal parameters.
§ ADVERSARIAL DOUBLE MACHINE LEARNING
§.§ Adversarial Data Generating Process
In general deep learning schemes, we have clean visual images x∈ℝ^hwc and their corresponding target classes y∈ℝ^d at hand in the form of a dataset, where h, w, c denote the image height, width, and number of channels, respectively, and d denotes the number of classes. Thus, we do not need an additional data generating process. For adversarial training, on the other hand, we need additional data, namely the adversarial perturbations generated from the data samples (x,y) as in Eq. (<ref>).
However, as adversarial training iterates, fewer perturbations impair network predictions. In other words, not all of the newly generated perturbations can corrupt network predictions. Hence, we do not consider all of the perturbations as treatments, but selectively define as treatments the worst perturbations t breaking network predictions, i.e., those satisfying y ≠ f(x+t), and we call x_adv = x + t the worst examples. This is because our major goal is to capture the actual adversarial vulnerability of DNNs, so we do not tackle perturbations incapable of harming network predictions.
To access such worst perturbation, we choose perturbation generator g as an adversarial attack of PGD according to standard adversarial training <cit.>. In addition, we pick the worst perturbations t damaging network predictions among adversarial perturbations from the generator g. In this way, we perform adversarial data generating process.
§.§ Adversarial Problem Setup
In the nature of adversarial training, the worst perturbations t are explicitly injected to clean images x such that x_adv=x+t, and these combined images are propagated into DNNs f. Here, through this formulation as: f(x+t)=f(x,t), we connect DNNs for adversarial training and a nuisance parameter f for non-parametric DML in Eq. (<ref>). Fortunately, once we use Taylor expansion (with scalar-valued function for better understanding) and decompose f by its input component as: f(x+t)=f(x)+∑_i=1^∞ t^if^(i)(x)/i!, where f^(i) indicates i-th order derivative function, we can also express partially linear settings described in Eq. (<ref>). That is, since adversarial examples start from the concept of “additive noise”, both settings can exist at the same time in the scheme of adversarial training. From this reason, we build Adversarial Double Machine Learning (ADML):
y = f(x+t)=f(x) + θt̅ + u, (𝔼[u| x, t]=0)
t = g(x) + v, (𝔼[v | x]=0)
where t̅ indicates the Taylor-order matrix [t, t^2, ⋯]^T and θ represents the Taylor-coefficient matrix [f^(1)(x)/1!, f^(2)(x)/2!, ⋯] (see strict mathematical verification in Appendix B).
Here, we explain what the conditional expected errors of u and v in Eq. (<ref>) mean in adversarial training. The former, 𝔼[u|x,t]=0, reflects the nature of adversarial training, which can be viewed as an ultimate augmentation robustifying DNNs when an infinite data population of x and t is given. Thus, it means network predictions eventually become invariant despite the given worst perturbations. To implement it practically, we replace it with the milder assumption 𝔼[u| x, g(x)]=0 (see Appendix C), which signifies that PGD-based perturbations are used to perform adversarial training; this is because we cannot always acquire worst perturbations t using only PGD. The latter, 𝔼[v| x]=0, represents that PGD is capable of producing worst perturbations that deviate network predictions during training.
§.§ Estimating Adversarially Causal Parameter
Aligned with Eq. (<ref>), an explicit term of θ is regarded as the causal parameter in our problem setup of ADML. We can now interpret that θ is a causal factor to spur adversarial vulnerability, since its magnitude easily catalyzes the deviations from network predictions of clean images. Here, if it is possible to directly compute θ over all data samples, we can finally handle adversarial vulnerability.
Favorably, ADML follows both partially linear and non-parametric settings due to the concept of additive noise, thus we can employ the way of estimating causal parameter as in Eq. (<ref>). The following formulation represents the estimated causal parameter θ̂ in ADML (see Appendix D). Note that, as we emphasized, sample-splitting plus cross-fitting must be applied to estimate unbiased causal parameter.
θ̂ = 𝔼_𝒟_t[-(1/p(𝒯=t | x)-1) ∂/∂ t f(x+t)],
where ∂/∂ t f(x+t) indicates the input gradient of the network predictions with respect to t, and p(𝒯=t|x) represents the distribution of worst perturbations given clean images x. Here, we cannot directly handle this distribution due to the presence of multiple unknown parameters required to define it. For that reason, we instead approximate it with a sharpening technique by incorporating the information on attacked confidence, such that p(𝒯=t|x)≈𝔼_t'|x[p(y_a|x, t')] (see Appendix E), where y_a denotes the attacked classes for the given worst perturbations t. It implicitly means that the higher the attacked confidence, the higher the probability of finding worst perturbations.
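To illustrate how Eq. (<ref>) could be evaluated in practice, the PyTorch snippet below is our hedged sketch (not the authors' implementation): it approximates p(𝒯=t|x) by the attacked-class confidence and summarizes the full Jacobian ∂f(x+t)/∂t by the gradient of the attacked-class logit.

import torch
import torch.nn.functional as F

def causal_parameter_magnitude(model, x, t):
    # Proxy for |theta_hat| in Eq. (7): input-gradient magnitude weighted by (1/p(T=t|x) - 1).
    t = t.clone().detach().requires_grad_(True)
    logits = model(x + t)
    y_a = logits.argmax(dim=1)                                        # attacked classes
    p_t_given_x = F.softmax(logits, dim=1).gather(1, y_a[:, None]).squeeze(1)
    attacked_logit = logits.gather(1, y_a[:, None]).sum()
    grad = torch.autograd.grad(attacked_logit, t)[0]                  # gradient of f(x+t) w.r.t. t
    weight = 1.0 / p_t_given_x.clamp(min=1e-6) - 1.0                  # balancing factor
    per_sample = weight * grad.flatten(1).norm(dim=1)
    return per_sample.mean().item()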
Aligned with previous analyses <cit.> showing that an increasing input-gradient magnitude increases adversarial vulnerability, the magnitude of our causal parameter |θ̂| also becomes large because |θ̂| ∝ |∂/∂ t f(x+t)|. In parallel, Qin <cit.> show that the more ambiguous the confidence, the lower the robustness (i.e., the higher the vulnerability), and, interestingly, the magnitude of our causal parameter likewise becomes large because |θ̂| ∝ |1/𝔼_t'|x[p(y_a|x, t')]|.
Bringing these factors together, θ̂ represents a weighted measurement of the attacked confidence and the corresponding input gradients. Comprehensively, we can revisit why the network predictions of worst examples are easily flipped: the adversarial vulnerability stems from (a) ambiguous confidence around classification boundaries, or (b) high gradient magnitude amplifying the leverage of the perturbations. To improve the adversarial robustness of DNNs, it is essential to minimize the negative effects of the causal parameters, which are combinatorial outcomes of the gradient and confidence.
§.§ Mitigating Adversarial Vulnerability
By deploying ADML, we propose a way of estimating the causal parameter representing the degree of adversarial vulnerability that disturbs the prediction of the target classes. Then, our final goal is essentially to lessen the direct causal effect of adversarial perturbations in order to achieve robust networks. In detail, alleviating the causal effect derived from θ̂ is a process of comprehensive reconstruction that focuses more on vulnerable samples, since we reflect their attacked confidence and gradient effects altogether.
Accordingly, the most direct way is to naively reduce the magnitude of θ̂ to suppress the adversarial vulnerability damaging the robustness. However, calculating θ̂ and minimizing its magnitude at every iteration is computationally prohibitive, because the input gradient has the huge dimension ℝ^dhwc and taking its gradient inevitably requires computing a second-order gradient of this tremendous dimension. We instead approximate the partial derivative ∂/∂ t 𝔼[y|do(𝒯=t)] and minimize its magnitude, which can be written as:
min_f|θ̂| ≈ |(𝔼[y|do(𝒯=t)]-𝔼[y|do(𝒯=0)])/(t-0)|,
where only the numerator depends on the network parameters of the DNN f, so we focus on the numerator alone. Lastly, we recast 𝔼[y|do(𝒯=t)] into the form of a loss function used in deep learning and finally construct the objective function for ADML, whose formulation can be written as follows (see details in Appendix F):
min_f 𝔼_𝒟_t[ τℒ_CE(f(x+t), y)] + 𝔼_𝒟_0[ℒ_CE(f(x), y)],
where we denote τ = 1/p(𝒯=t| x)-1 as the balancing ratio. Current AT-based defenses apply an equal weight "1/n" to the loss of all data samples, 1/n∑_i=1^nℒ_Defense(x_i,y_i,t_i;f), because they presume all perturbations have an equal causal effect (a successful attack) on changing targets, without identifying vulnerable samples. In contrast, ADML uses the balancing ratio τ to adaptively focus on vulnerable samples by reweighting the loss. To realize ADML, we describe further details in Algorithm <ref>, where ℒ_Defense(x,y,t;f) indicates the main body of the loss function of the AT-based defense.
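A minimal training-step sketch of Eq. (<ref>) is given below; it is our simplification of Algorithm <ref>, assumes the pgd_attack helper sketched earlier, and keeps only the worst perturbations that actually flip the prediction.

import torch
import torch.nn.functional as F

def adml_step(model, optimizer, x, y, gamma=8/255):
    t = pgd_attack(model, x, y, gamma=gamma)                  # candidate perturbations
    with torch.no_grad():
        logits_adv = model(x + t)
        y_a = logits_adv.argmax(dim=1)
        worst = y_a != y                                      # worst perturbations only (D_t)
        p_t = F.softmax(logits_adv, dim=1).gather(1, y_a[:, None]).squeeze(1)
        tau = 1.0 / p_t.clamp(min=1e-6) - 1.0                 # balancing ratio of Eq. (8)
    loss_clean = F.cross_entropy(model(x), y)                 # E_{D_0} term
    if worst.any():
        loss_adv = F.cross_entropy(model((x + t)[worst]), y[worst], reduction="none")
        loss = (tau[worst] * loss_adv).mean() + loss_clean    # tau-weighted E_{D_t} term
    else:
        loss = loss_clean
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()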
§ EXPERIMENT
§.§ Implementation Details
Datasets & Networks. We conduct comprehensive experiments on various datasets and networks. For datasets, we use CIFAR-10 <cit.>, CIFAR-100 <cit.>, and two larger datasets: Tiny-ImageNet <cit.> and ImageNet <cit.>. For networks, four CNN architectures: <cit.> and four Transformer architectures: <cit.> are used.
Adversarial Attacks. We adaptively set perturbation budget γ of adversarial attacks depending on the classification difficulty of the four datasets: 8/255 equally for CIFAR-10 <cit.> and CIFAR100 <cit.>, 4/255 for Tiny-ImageNet <cit.>, and 2/255 for ImageNet <cit.>. We prepare three standard attacks: BIM <cit.>, PGD <cit.>, CW_∞ <cit.>, and three advanced attacks: AP (Auto-PGD: step size-free), DLR (Auto-DLR: shift and scaling invariant), AA (Auto-Attack: parameter-free), all of which are introduced by Francesco <cit.>. PGD, AP, DLR have 30 steps with random starts, where PGD has step size 2.3×γ/30, and AP, DLR both have momentum coefficient ρ=0.75. CW_∞ uses PGD-based gradient clamping for l_∞ with CW objective <cit.> on κ=0.
Adversarial Defenses. We use four defense baselines with a standard baseline: AT <cit.> and three advanced defense baselines: TRADES <cit.>, MART <cit.>, AWP <cit.>. To fairly validate experiments, a perturbation generator, PGD <cit.> is equivalently used to generate adversarial examples for which we use the budget 8/255 and set 10 steps with 2.3×γ/10 step size in training. Especially, adversarially training Tiny-ImageNet <cit.> and ImageNet <cit.> is a computational burden, thus we employ fast adversarial training <cit.> with FGSM <cit.>. For training CNNs, we use SGD <cit.> with a learning rate of 0.5 scheduled by Cyclic <cit.> in 120 epochs and use early stopping to
prevent overfitting <cit.>. For training Transformers, we use SGD <cit.> with a learning rate of 0.001 on the equal experimental setup of CNNs, where 224×224 resolution is applied for all datasets and pretrained parameters on ImageNet-1k models are utilized.
Training ADML. After the completion of standard adversarial training <cit.>, we apply AT-based defense methods to line 4 in Algorithm <ref> for ADML. We then optimize adversarially trained CNNs in 10 epochs using SGD <cit.> with a learning rate of 0.001 scheduled by Cyclic <cit.>, which allows empirically sufficient convergence to robustness. In addition, adversarially trained Transformers are also optimized with ADML using a learning rate of 0.0001 on the equal experimental setup of CNNs. Note that, we set sample-splitting ratio in half (see Appendix G) for each batch, and cross-fitting is satisfied during training iterations.
§.§ Robustness Validation on ADML
Adversarial Robustness. Based on our experimental setups, we have conducted extensive validations of adversarial robustness on CNNs in Table <ref> and Transformers in Table <ref>. As shown in these tables, employing ADML on the AT-based defense methods AT <cit.>, TRADES <cit.>, MART <cit.>, and AWP <cit.> largely improves adversarial robustness compared with each defense method baseline. Bai <cit.> have argued that Transformers do not show noticeably better adversarial robustness than CNNs, but we want to point out that the robustness of Transformers can be remarkably improved, especially on larger datasets.
Ablation Study. In Table <ref>, we conduct ablation studies on the effect of sample-splitting plus cross-fitting on robustness, and on the effect of considering the treatments as worst examples, non-worst examples, or both. According to the results, only considering the treatments as worst examples captures the actual adversarial vulnerability, thereby improving robustness much more than the alternatives.
Utilizing Synthetic Images. Recently, several works <cit.> have employed TRADES <cit.> utilizing synthetic images: DDPM <cit.> and EDM <cit.> to improve adversarial robustness based on the insight that data augmentation such as CutMix <cit.> can improve robustness <cit.>. To further investigate the benefits of ADML, we experiment ADML combined with TRADES on the synthetic images. Table <ref> shows ADML can further improve the robustness even on synthetic images, demonstrating its efficacy.
§.§ Causal Analysis on ADML
Adversarial Vulnerability. To validate the alleviation of adversarial vulnerability existing in certain classes as in Figure <ref>, we evaluate the averaged adversarial robustness for the cumulative distribution of bottom-k classes with respect to the network prediction. We set the k value as 10%, 30%, and 50%. As in Figure <ref>, we can observe that AT shows noticeable vulnerability in bottom-k classes, and such tendency pervades in four different datasets and architectures. If we successfully mitigate direct causal parameter of adversarial perturbations on each class, we expect apparent improvements of robustness for bottom-k classes. As in the figure, we can observe the notable robustness of ADML in the vulnerable bottom-k classes and corroborate its effectiveness to alleviate aforementioned phenomenon existing in current AT-based defenses. Further infographic is illustrated in Figure <ref> for the integrated distribution of baselines <cit.> and their corresponding ADML adoptions on each architecture, and it shows further adversarial robustness in general (Additional results in Appendix H).
Causal Parameter. By deploying ADML, we present a way of mitigating the magnitude of the causal parameter |θ|. To numerically calculate |θ|, we employ Eq. (<ref>) and measure the average of |θ_ADML| for ADML with respect to the bottom-k and whole classes, respectively. By dividing |θ_ADML| by |θ_AT|, we obtain the relative ratio of the causal parameter, ρ_k:=100×|θ_ADML|/|θ_AT|, for adversarial examples in the bottom-k classes. This ratio indicates the relative intensity of the causal parameter compared to that of AT <cit.>. As in Table <ref>, we observe that ADML shows a lower intensity of the causal parameter than AT, which means smaller causal effects of adversarial perturbations on the target classes. Combined with the preceding robustness comparison in Sec. <ref>, this corroborates that ADML indeed mitigates the intrinsic causal parameter and alleviates the empirical observation in Figure <ref>, thus resulting in adversarial robustness.
§ CONCLUSION
In this paper, we observe adversarial vulnerability varies across targets and still pervades even with deeper architectures and advanced defense methods. To fundamentally address it, we build causal perspective in adversarial examples and propose a way of estimating causal parameter representing the degree of adversarial vulnerability, namely Adversarial Double Machine Learning (ADML). By minimizing causal effects from the vulnerability, ADML can mitigate the empirical phenomenon as well as solidly improve adversarial robustness. Through intensive experiments, we corroborate the effectiveness of ADML for robust network.
|
http://arxiv.org/abs/2307.10200v1 | 20230709023156 | Disentangling Societal Inequality from Model Biases: Gender Inequality in Divorce Court Proceedings | [
"Sujan Dutta",
"Parth Srivastava",
"Vaishnavi Solunke",
"Swaprava Nath",
"Ashiqur R. KhudaBukhsh"
] | cs.CY | [
"cs.CY",
"cs.AI",
"cs.CL",
"cs.LG"
] |
Divorce is the legal dissolution of a marriage by a court. Since this is usually an unpleasant outcome of a marital union, each party may have reasons to call the decision to quit which is generally documented in detail in the court proceedings. Via a substantial corpus of 17,306 court proceedings, this paper investigates gender inequality through the lens of divorce court proceedings. While emerging data sources (e.g., public court records) on sensitive societal issues hold promise in aiding social science research, biases present in cutting-edge natural language processing (NLP) methods may interfere with or affect such studies. We thus require a thorough analysis of potential gaps and limitations present in extant NLP resources. In this paper, on the methodological side, we demonstrate that existing NLP resources required several non-trivial modifications to quantify societal inequalities. On the substantive side, we find that while a large number of court cases perhaps suggest changing norms in India where women are increasingly challenging patriarchy, AI-powered analyses of these court proceedings indicate striking gender inequality with women often subjected to domestic violence.
§ INTRODUCTION
The 2011 decennial census in India gave its citizens the following choices to select their marital status – never married, separated, divorced, widowed, married. Based on the census data, a study reported some startling facts <cit.>: 1.36 million Indians are divorced, which accounts for 0.24% of the married population and 0.11% of the total population. More women were separated or divorced than men, and the number of separations was almost three times as high as the number of divorces.
Divorce, a historically taboo topic in India for ages <cit.>, seldom features in mainstream Indian discourse <cit.>. Recent indications of changing social acceptance of divorcees notwithstanding <cit.>, divorce in India still carries a considerable social stigma <cit.>.
How do we quantify gender inequality in Indian divorce? Surveys about divorce often have limited participation and a small sample size <cit.>, perhaps due to the social stigma attached. A vulnerable community – Indian women under conjugal distress – had limited visibility to social scientists. Via a substantial corpus of 17,306 divorce court proceedings, this paper conducts the first-ever computational analysis of gender inequality in Indian divorce based on public court records.
Even though written in English, legal texts are often domain-specific <cit.>. The considerable variation of legal jargon across countries and courts makes domain-specific analysis important. In that vein, Indian legal NLP is an emerging field <cit.>. Most NLP research on legal texts thus far has focused on building robust tools to analyze legal text. However, recent research on in-group bias <cit.> and sexual harassment <cit.>, as well as <Ref> and <Ref>, suggests that automated methods to glean social insights from large-scale legal texts merit investigation. Barring a few recent lines of work <cit.>, there is surprisingly little literature on large-scale linguistic analysis of gender bias in India, let alone on legal text zeroing in on divorce.
While emerging data sources (e.g., public court records available on the web) offer opportunities for social scientists to study important and sensitive social issues that previously had limited survey data, applying cutting-edge NLP methods to newer domains requires careful evaluation of the critical question: How much of the (perceived) gender inequality as quantified by the methods truly reflects the corpus and how much of it is due to the inherent biases of the employed NLP methods?
In this paper, we show that the subtleties present in legal text present unique challenges. Unless we consider them and make non-trivial changes to existing methods, we may end up drawing inaccurate social conclusions.
We further show that sophisticated NLP methods built on top of large language models (LLMs) need scrutiny when applied to social inference tasks involving genders. We, in fact, conduct a much broader bias audit of these systems. Our audit reveals well-known LLMs often exhibit gender bias even on simple subject-verb-object sentence completion tasks. Through a corpus-specific text entailment analysis, we demonstrate that downstream applications such as natural language inference (NLI) systems also exhibit sensitivity to gender. We finally, present a novel inconsistency sampling method to mitigate this bias and present our social findings.
To summarize, our contributions are the following:
Social: We create a substantial corpus of 17,306 divorce court proceedings and conduct the first-ever analysis of gender inequality through the lens of divorce proceedings. While a large number of court cases perhaps suggest changing norms in India where women are increasingly challenging patriarchy <cit.>, our analyses reveal widespread domestic violence, dowry demands, and torture of the bride.
Methodological: We address extant gaps and limitations in multiple NLP frameworks. We propose non-trivial modifications to the framework <cit.> to make it suitable for legal text.
We demonstrate a novel application of text entailment <cit.> in quantifying gender inequality. We investigate several potential sources for model bias in NLP resources that can interfere with quantifying gender inequality. We present a novel inconsistency sampling method exploiting counterfactuals to mitigate this bias.
§ DATASET
§.§ Collection
We scrape all the publicly available court proceedings containing the word divorce between January 1, 2012 and December 31, 2021 from <https://indiankanoon.org/> (hereafter IndianKanoon), an Indian law search engine launched in 2008 and the largest free online repository of the court proceedings of different courts and tribunals of India. Prior computational law research <cit.> and gender-focused social science studies <cit.> have used IndianKanoon as a source of data.
We download 86,911 case proceedings containing the word divorce from IndianKanoon using its advanced search feature. Filtering based on the keyword is a high-recall approach to obtain relevant cases with precedence in computational social science research <cit.>. However, the presence of the keyword may not always indicate a divorce court proceeding; for instance, the keyword can be used to describe the marital status of any of the litigants. It can also be used in an altogether different context (e.g., divorced from reality). We use the following heuristic to further refine our dataset: we also look for other divorce-related words and phrases, and check whether such occurrences repeat at least a minimum threshold number of times (set to 5); a minimal sketch of this filter follows this paragraph. On a random sample of 100 cases after we apply this cleaning method, a manual inspection reveals that 94 are divorce cases. Hence, our keyword-based filtering is reasonably precise. This pruning step retains 25,635 cases.
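The sketch below (Python) illustrates the spirit of this keyword-threshold filter; the keyword patterns shown are illustrative only, since the exact lexicon is not reproduced here, and the helper name is ours.

import re

# Illustrative divorce-related patterns; the actual lexicon used for pruning is
# described in the text and is not reproduced here (assumption).
KEYWORD_PATTERNS = [r"\bdivorce\b", r"\bdissolution of marriage\b", r"\bmaintenance\b"]
MIN_HITS = 5  # keep a proceeding only if keyword occurrences reach this threshold

def is_divorce_case(text: str) -> bool:
    """Count keyword/phrase occurrences and keep the case if they repeat often enough."""
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in KEYWORD_PATTERNS)
    return hits >= MIN_HITS

# Usage: kept = [doc for doc in raw_proceedings if is_divorce_case(doc)]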
§.§ Data Pre-processing
To quantify gender inequality in court proceedings, we must disambiguate the legal parties – the plaintiff and the defendant – and accurately identify which of the husband and the wife plays which role. Indian legal documents use a wide range of legal terms to denote the plaintiff (e.g., appellant, applicant, complainant, petitioner) and the defendant (e.g., respondent, nonapplicant, opponent). We observe that different courts have different formats (sometimes, multiple formats) to summarize the proceedings. The documents also specify which party in the marriage represents which role in several different ways (e.g., respondent/wife, respondent-wife, respondent aggrieved wife). We write a regular-expression-based pipeline and consolidate such information to identify the gender of the plaintiff and the defendant across all the states.
The names and salutations of the plaintiff and the defendant also provide gender information. Subcultural naming conventions played a key role in assigning gender to the litigants in some of the cases. For instance, Kaur, meaning princess, is a Punjabi last name used only by females <cit.>, and the suffix ben, meaning sister, is used solely in many female names in Gujarat <cit.>. Dependence information of the litigants (e.g., being recorded as the son, daughter, or wife of another person) also provides gender information.[One such dependence marker does not appear even once in our dataset.]
Of the 25,635 cases, we could unambiguously assign gender to 17,306 cases. For each case, we replace each mention of the litigants with husband or wife accordingly. For example, a proceeding snippet “The plaintiff/wife has filed for a divorce. The plaintiff was married to the defendant for three years.”, will be modified to “The wife has filed for a divorce. The wife was married to the husband for three years.” This data set, 𝒟_divorce, consists of 30,615,754 (30 million) tokens.
§ BRIEF OVERVIEW OF INDIAN LEGAL SYSTEM
The Indian judicial system is largely based on the English common law system (where the law is developed by judges through their decisions, orders, and judgments). The nation has 28 states and 8 union territories (UT), and a total of 25 high courts (some high courts have jurisdiction over more than one state or UT). The federal structure has a supreme court coupled with the high courts that roughly handle the cases in a state or UT. The legal cases of divorce are usually handled by the family or district courts. However, some unresolved cases or sometimes fresh cases are also heard by the high courts. Since the court proceedings are public records and are made freely available in digital form by IndianKanoon, we found this dataset to be quite appropriate for a large-scale study on gender equality in court proceedings.
§ DOWRY IN DIVORCE PROCEEDINGS
The dowry system involves a transaction of financial assets between the bride's family and the bridegroom's family, with the latter being the recipient of the financial assets. While legally prohibited in India since 1961 <cit.>, this practice has continued well after its legal prohibition and has a strong link to social crises such as female feticide <cit.>, domestic abuse and violence <cit.>, and dowry deaths <cit.>. In order to protect the bride from marital cruelty and domestic violence, the Indian Penal Code introduced Section 498-A in 1983 <cit.>.
Figure <ref> reflects the relative proportions of divorce cases containing the text tokens dowry and 498-A. For each state, we report the fraction of divorce cases that contain at least one mention of these two tokens. A higher intensity color indicates a larger proportion of such cases. We observe that overall, 24.38% and 21.86% of all cases mention dowry and 498-A, respectively.
Jacob and Chattopadhyay <cit.> reported that divorce in India does not follow any one-size-fits-all pattern across different states; there exists considerable interstate variation even in the rate of divorce. We notice a considerable variation in mentions of dowry and Section 498-A across different states, indicating variance in reported cases of dowry or domestic violence. Among the states and the union territories, the top three entries in terms of dowry mentions are Telangana, Delhi, and Bihar, while the top three entries in terms of Section 498-A mentions are Bihar, Telangana, and Andhra Pradesh. Bihar and Telangana have social science literature documenting dowry and domestic violence <cit.>. Apart from the overlap in the top three entries, the statewise dowry and 498-A mentions are moderately correlated (correlation coefficient: 0.67).
We next conduct a qualitative analysis of (alleged) dowry demands.[This analysis follows the statements made by the plaintiffs.] On a random sample of 100 court proceedings where the (alleged) dowry demand is explicitly recorded, we observe that the estimated demanded amount is INR 393,100 ± 544,876. We observe demanded amounts as low as INR 5,000 and as high as INR 3,000,000, which explains the staggeringly high variance in our estimate. This also indicates the broad economic spectrum present in India and how far and wide the system of dowry (allegedly) persists. We further observe that cash is not always the solely demanded financial asset. Gold is the second-most commonly demanded asset. Out of the 100 cases, 34 cases report gold demands (71.2 ± 84.6 gm). When we adjust the valuation of demanded gold by replacing it with the historical average gold price in India across 2012 and 2021,[Obtained from <https://www.bankbazaar.com/gold-rate/gold-rate-trend-in-india.html>] the estimated (alleged) demanded dowry is INR 474,798 ± 567,219.
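A small sketch (Python) of this gold-to-cash adjustment is given below; the per-case figures and the average gold price used here are placeholders for illustration, not values from our records.

import statistics

# Placeholder records: (cash demanded in INR, gold demanded in grams).
cases = [(500_000, 0.0), (150_000, 80.0), (1_200_000, 40.0)]

AVG_GOLD_PRICE_INR_PER_GRAM = 3_000.0  # assumed 2012-2021 average, for illustration only

def adjusted_demand(cash_inr, gold_gm, price=AVG_GOLD_PRICE_INR_PER_GRAM):
    """Value the demanded gold at the average price and add it to the cash demand."""
    return cash_inr + gold_gm * price

adjusted = [adjusted_demand(c, g) for c, g in cases]
print(statistics.mean(adjusted), statistics.stdev(adjusted))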
§ METHODS OVERVIEW
We use two NLP methods to quantify gender inequality: (1) Word Embedding Association Test; and (2) a text entailment framework. A brief description follows.
§.§ Word Embedding Based Methods
The first metric is the Word Embedding Association Test (WEAT) introduced by <cit.>. To calculate the metric, the words are embedded, and the vectors a and b are obtained for the words a and b respectively. The cosine similarity of these two words is denoted by cos(a,b). The metric considers two sets of target words 𝒳 and 𝒴, and two sets of attribute words Å and ℬ. Then, the WEAT score is defined as WEAT(𝒳, 𝒴, Å, ℬ) = (mean_x ∈𝒳 σ(x, Å, ℬ) - mean_y ∈𝒴 σ(y, Å, ℬ)) / std-dev_w ∈𝒳∪𝒴 σ(w, Å, ℬ),
where σ(w, Å, ℬ) = mean_a ∈Å cos(w,a) - mean_b ∈ℬ cos(w,b).
Intuitively, σ(w, Å, ℬ) measures the association of w with the attribute sets, and the WEAT score measures the differential association of the two sets of target words with the attribute sets. A positive score implies that the target words in 𝒳 are more associated with the attribute words in Å than with those in ℬ, and that the words in 𝒴 are more associated with ℬ than with Å.
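The following Python sketch is a direct transcription of these definitions, assuming emb maps each word to its embedding vector (e.g., the keyed vectors of a trained FastText model); the helper names are ours, and the empty-𝒴 convention used later in the paper is handled explicitly.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sigma(w, A, B, emb):
    """Association of word w with attribute sets A and B (difference of mean cosines)."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B, emb):
    """WEAT score; Y may be empty, in which case its mean term is dropped."""
    sig_x = [sigma(x, A, B, emb) for x in X]
    sig_y = [sigma(y, A, B, emb) for y in Y]
    num = np.mean(sig_x) - (np.mean(sig_y) if sig_y else 0.0)
    return num / np.std(sig_x + sig_y)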
§.§ Text Entailment Based Methods
Quantifying gender inequality relying on the distributed representation of words presents a diffused, bird's-eye view of the larger trends. Also, these methods are known to be data-hungry <cit.>. Data availability often becomes a limiting factor to conducting contrastive studies at different spatio-temporal granularity. In what follows, we present a novel application of text entailment, a natural language inference (NLI) task <cit.> that bypasses the data size requirement and equips us with a finer lens through which we can compare and contrast gender inequality with respect to individual verbs.
An NLI system takes a premise 𝒫 and a hypothesis ℋ as input and outputs entailment, contradiction, or semantic irrelevance. For instance, the hypothesis some men are playing a sport is entailed by the premise a soccer game with multiple males playing <cit.>. As one can see, textual entailment is more relaxed than pure logical entailment; it can be viewed as follows: a human reading 𝒫 would most likely infer that ℋ is true. This framework has gained traction in several recent social inference tasks that include estimating media stance on policing <cit.>, aggregating social media opinion on election fairness <cit.>, and detecting COVID-19 misinformation <cit.>.
Formally, NLI(𝒫,ℋ) takes a premise 𝒫 and a hypothesis ℋ as inputs and outputs o ∈{entailment, contradiction, neutral}. Following <cit.>, we define the entailment ratio (denoted by ent(𝒟, ℋ)) for a given corpus 𝒟 and a hypothesis ℋ as the fraction of the individual sentences present in 𝒟 that entail ℋ:
ent(𝒟, ℋ) = ∑_𝒫∈𝒟I(NLI(𝒫, ℋ) = entailment)/|𝒟|,
where I is the indicator function. A larger value of ent(𝒟, ℋ) indicates greater support for ℋ in the corpus.
Suppose we are interested in learning how often the husband and the wife are accused of torture (physical or emotional) in our corpus. We analyze this research question in the following way. We first construct a sub-corpus 𝒟_torture from the divorce court proceedings consisting of sentences that (1) mention husband or wife at least once; and (2) mention torture as a verb at least once. We next construct two hypotheses – ℋ_man,torture and ℋ_woman,torture – using a man and a woman as perpetrator and victim interchangeably. ℋ_woman,torture is A woman tortures a man and ℋ_man,torture is A man tortures a woman. We next compute the entailment gap defined as
gap(𝒟_torture, torture) = ent(𝒟_torture, ℋ_man,torture) - ent(𝒟_torture, ℋ_woman,torture).
Effectively, this means we compute the fraction of sentences in 𝒟_torture that entail A woman tortures a man and subtract it from the fraction of sentences in 𝒟_torture that entail A man tortures a woman. An overall positive number indicates that the male has been described as the torturer more often than the female in the court proceedings. A negative value would indicate the opposite. A similar analysis can be extended to each of the other verbs in our target set; a sketch of the computation follows.
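A minimal sketch of the entailment-ratio and entailment-gap computations is given below, assuming nli(premise, hypothesis) is any NLI predictor that returns 'entailment', 'contradiction', or 'neutral'; the hypothesis strings are written out by hand for each verb, and the function names are ours.

def ent(corpus, hypothesis, nli):
    """Fraction of premise sentences in `corpus` that entail `hypothesis`."""
    hits = sum(1 for premise in corpus if nli(premise, hypothesis) == "entailment")
    return hits / len(corpus)

def entailment_gap(corpus, h_man, h_woman, nli):
    """Positive gap: the male-perpetrator hypothesis is entailed more often."""
    return ent(corpus, h_man, nli) - ent(corpus, h_woman, nli)

# Usage for the torture example:
# gap = entailment_gap(D_torture, "A man tortures a woman", "A woman tortures a man", nli)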
§ DESIGN CONSIDERATIONS
Adapting the WEAT and entailment frameworks to quantify gender inequality in our domain requires careful consideration of several aspects described in what follows.
§.§ Verbs for Target Sets
Traditionally, the WEAT score is used to quantify gender or racial stereotypes. The majority of the elements present in those attribute sets are nouns and adjectives (e.g., criminals, terrorists, doctors, police) <cit.> and seldom verbs <cit.>.
We are interested in understanding the action space of the two parties fighting a divorce case; we want to know if the court described that one party tortured or abused the other. Hence, verbs are a natural choice for our target set.
We inspect the list of high-frequency verbs in the corpus and narrow it down to a target set of ten unpleasant verbs, denoted 𝒳_unpleasant. A small subset of these words is already present in the list of unpleasant stimuli presented in <cit.>. We further compute the average valence score of these words as per the lexicon presented in <cit.>. We find that the average valence score of 𝒳_unpleasant is 2.7, comparable to the average valence score (2.16) of the unpleasant stimuli presented in <cit.>.
Divorce being a bitterly fought family situation, we observe only a sparse presence of pleasant verbs in our corpus. Since infrequent words in the corpus do not have reliable embeddings <cit.>, in contrast with traditional applications of the WEAT score, we choose the second target set 𝒴 to be the empty set.
§.§ The Torturer and the Tortured
The attribute sets Å and ℬ, as defined in the WEAT score, represent the identifiers used for the plaintiff and the defendant in our data (e.g., Å consisting of the identifiers for the husband and ℬ of those for the wife). However, notice that the WEAT score is agnostic about whether an identifier is the contributor or the receptor of the target words. For example, torture does not happen in isolation; it requires a torturer and one who is tortured. Unlike nouns, verbs are typically associated with two entities – the subject and the object. To disambiguate between “the husband tortured the wife” and “the wife tortured the husband”, a word embedding needs to understand this nuance. Otherwise, the embedding is likely to place both the plaintiff and defendant identifiers equidistant from the verb.
To disambiguate these two situations, we run the corpus through the Stanza POS tagger <cit.> to find the subject and the object of each sentence and whether the statement is in active or passive voice. Based on this, we classify the subjects and objects as `male perpetrator', `female perpetrator', `male victim', or `female victim' in the sentences that contain the target verbs. We replace these four cases with four unique placeholder tokens (one per role) so that those tokens do not occur anywhere else in any of the documents. We call this new dataset 𝒟_replaced.
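A sketch of this role-tagging step using Stanza is shown below; the target-verb subset, the dependency-label handling, and the helper name are simplified stand-ins for the full pipeline, not its exact implementation.

import stanza

# Assumed setup; requires stanza.download('en') to have been run once.
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse", verbose=False)

TARGET_VERBS = {"torture", "harass", "abuse"}   # illustrative subset of the target verbs

def tag_roles(sentence_text):
    """Return (agent, patient) surface forms for the first target verb, handling passives."""
    sent = nlp(sentence_text).sentences[0]
    for word in sent.words:
        if word.upos == "VERB" and word.lemma in TARGET_VERBS:
            deps = [w for w in sent.words if w.head == word.id]
            subj = next((w for w in deps if w.deprel.startswith("nsubj")), None)
            obj = next((w for w in deps if w.deprel in ("obj", "obl", "obl:agent")), None)
            if subj is None:
                continue
            if subj.deprel == "nsubj:pass":      # passive voice: the subject is the victim
                agent, patient = obj, subj
            else:                                # active voice: the subject is the perpetrator
                agent, patient = subj, obj
            return (agent.text.lower() if agent else None,
                    patient.text.lower() if patient else None)
    return None, None

# Example: tag_roles("The husband tortured the wife.") -> ("husband", "wife");
# mapping these to the four unique role tokens then follows directly.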
§ WORD EMBEDDING BASED ANALYSIS
We are interested in two research questions:
RQ 1: How does gender inequality manifest in divorce court proceedings with respect to unpleasant verbs in 𝒳?
RQ 2: Is our careful disambiguation of the torturer and the tortured necessary at all?
In order to answer these two questions, we run two sets of experiments with identical training configurations. First, we run experiments on 𝒟_replaced using the target and attribute sets as defined in the previous section. We train the word embedding model 10 times and calculate the WEAT scores for each of the following two cases: when both genders are (a) perpetrators, i.e., when Å and ℬ contain only the male-perpetrator and the female-perpetrator token, respectively, and (b) victims, i.e., when Å and ℬ contain only the male-victim and the female-victim token, respectively. We use the default parameters for training our FastText <cit.> skip-gram embeddings with the dimension set to 100 for all word embeddings in this paper. Second, we run a baseline experiment with the original text data, without the four unique role tokens (𝒟_divorce), and use the attribute sets Å={husband} and ℬ={wife}. The number of runs and the embedding method are the same in both experiments. The results are shown in <Ref>.
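For concreteness, the sketch below shows how such an embedding can be trained with gensim's FastText implementation and passed to the weat helper sketched earlier; the toy sentences and role-token spellings are placeholders, not our corpus or our exact tokens.

from gensim.models import FastText

# Toy stand-in for the tokenized D_replaced corpus (one token list per sentence).
sentences = [["MALE_PERP", "tortured", "FEMALE_VICTIM"],
             ["FEMALE_PERP", "harassed", "MALE_VICTIM"]]

model = FastText(sentences=sentences, vector_size=100, window=5, min_count=1, sg=1)
emb = model.wv   # keyed vectors: emb["token"] returns a 100-dimensional array

# Perpetrator- and victim-perspective WEAT runs (weat as sketched above):
# weat(unpleasant_verbs, [], ["MALE_PERP"], ["FEMALE_PERP"], emb)
# weat(unpleasant_verbs, [], ["MALE_VICTIM"], ["FEMALE_VICTIM"], emb)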
As already described, a negative score indicates that ℬ is more associated with the target set than Å. Hence, if we look from the perspective of the victim, we find that women are more associated with the unpleasant verbs than men. In contrast, when viewed from the perpetrator's perspective, a positive score implies that men are more associated with the unpleasant verbs. Hence, our results indicate that in our corpus, women are more often the victims while men are more often the perpetrators.
Our baseline experiments that do not make any distinction between the perpetrator and the victim give a score close to zero, indicating near-perfect gender equality. This inaccurate result, while highly surprising from a social science perspective, is not unexpected given how the original framework functions. The two entities (husband and wife) are present around the unpleasant verbs with nearly equal frequency; if the method does not make any distinction between the roles of victim and perpetrator, WEAT will give inaccurate results. We thus carefully adapt the WEAT score to elicit the correct gender bias when applying it to legal texts for our social science research question.
§ SOCIETAL INEQUALITY AND MODEL BIAS
Our word embeddings are computed from scratch, while our next set of experiments relies on downstream applications built on top of large language models. Large language models (LLMs) are known to have a wide range of biases due to their training data <cit.>, and extant literature has examined gender bias in the form of occupational stereotypes present in NLI systems <cit.>. We thus need to disentangle societal inequalities potentially reflected in our corpus from model biases potentially present in the NLP applications.
Essentially, for a premise/hypothesis pair ⟨𝒫,ℋ⟩, the NLI system estimates the probability P(ℋ |𝒫). However, how LLMs encode the probability P(ℋ) when the hypotheses primarily consist of the two genders (male and female) and a set of verbs is understudied. Our investigation first reveals that the masked word prediction probability of several well-known LLMs is sensitive to gender. We next present a measure to quantify the gender bias sensitivity of NLI frameworks and present mitigating strategies. Finally, we use a bias-mitigated NLI system on our corpus and report findings.
§.§ Implicit Bias in Agent and Theme in LLMs
Unlike existing literature that primarily targets occupational stereotypes to quantify and analyze gender bias <cit.>, we focus on a very basic unit in a sentence – the verb. Following <cit.>, in a sentence X verbs Y, let X represent the agent and Y represent the theme.
Many verbs imply the relative authority levels between the agent and the theme. For example, in the sentence The football coach instructed the players to play a conservative game, the agent (the football coach) has more authority than the theme (the players). In contrast, the agent has less authority than the theme in the sentence The football coach honored the players' suggestion to play a conservative game. First proposed in <cit.>, the connotation relation of power captures this notion of power differential between an agent and a theme with respect to a given verb.
While the connotation relation of power has been analyzed in the context of gender inequality in movie scripts <cit.> and follow-on research focused on editorial fixes to remove bias <cit.>, little or no literature exists that documents the implicit gender bias present towards the agent and the theme when specific verbs are considered. This research is important and has a broader impact beyond our current social inference task. For instance, if an LLM encodes that it is less likely for a woman to inspire or guide someone than a man, this bias may percolate to downstream tasks leading to erroneous social conclusions when applied to large-scale data for other social inference tasks.
We use cloze tests to evaluate this implicit bias. A brief description of cloze test follows.
Cloze test: When presented with a sentence (or a sentence stem) with a missing word, a cloze task <cit.> is essentially a fill-in-the-blank task. For instance, in the following cloze task: In the ____, it snows a lot, the word winter is a likely completion for the missing word. Word prediction as a test of an LLM's language understanding has been explored in <cit.>.
Bias Evaluation Framework:
We describe our proposed testing framework for gender bias. Let P_cloze(w, 𝒮) denote the completion probability of the word w with a masked cloze task 𝒮 as input. For a given verb v, we consider the following four cloze tests:
* A [MASK] v a woman (denoted by v_womanAsTheme)
* A [MASK] v a man (denoted by v_manAsTheme)
* A man v a [MASK] (denoted by v_manAsAgent)
* A woman v a [MASK] (denoted by v_womanAsAgent)
In an ideal world where the LLM treats men and women equally, P_cloze(man, v_womanAsTheme) and P_cloze(woman, v_manAsTheme) should be equal. However, our preliminary exploratory analysis indicates that this is not the case. For example, when v is set to inspire, P_cloze(man, v_womanAsTheme) is 0.20 whereas P_cloze(woman, v_manAsTheme) is 0.16. When we set v to guide, the gap widens – P_cloze(man, v_womanAsTheme) is 0.71 whereas P_cloze(woman, v_manAsTheme) is 0.36.
Again, in an ideal world where the LLM treats men and women equally, P_cloze(man, v_womanAsAgent) and P_cloze(woman, v_manAsAgent) should be equal.
Let 𝒱 denote the set of all verbs listed in <cit.> where the agent has more power than the theme. Our overall measures of implicit bias are:
(a) (1/|𝒱|)·∑_v ∈𝒱( P_cloze(man, v_womanAsTheme) - P_cloze(woman, v_manAsTheme) ), and (b) (1/|𝒱|)·∑_v ∈𝒱( P_cloze(man, v_womanAsAgent) - P_cloze(woman, v_manAsAgent) ).
Measure (a) quantifies bias_agent. A positive value indicates that the LLM encodes a man being in the position of agent likelier than a woman on expectation.
Measure (b) quantifies bias_theme. A positive value indicates that the LLM encodes a man being in the position of theme likelier than a woman on expectation. We investigate three well-known LLMs for this audit: BERT <cit.>, RoBERTa <cit.>, and Megatron-LM <cit.>. We consider the 1,222 verbs listed in <cit.>. We also consider the verbs in 𝒳_unpleasant for this study.
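The cloze probabilities can be obtained with a standard fill-mask pipeline, as in the sketch below; the checkpoint name, the example verbs, and the helper names are illustrative choices, not necessarily the exact configuration behind Table <ref>.

from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")  # assumed BERT checkpoint

def p_cloze(word, masked_sentence):
    """Completion probability of `word` in the [MASK] slot of `masked_sentence`."""
    return unmasker(masked_sentence, targets=[word])[0]["score"]

def bias_agent(verbs):
    """Positive value: a man is the likelier agent when a woman is the theme."""
    diffs = [p_cloze("man", f"A [MASK] {v} a woman.")
             - p_cloze("woman", f"A [MASK] {v} a man.") for v in verbs]
    return sum(diffs) / len(diffs)

def bias_theme(verbs):
    """Positive value: a man is the likelier theme, averaged over the verbs."""
    diffs = [p_cloze("man", f"A woman {v} a [MASK].")
             - p_cloze("woman", f"A man {v} a [MASK].") for v in verbs]
    return sum(diffs) / len(diffs)

# Usage with verbs supplied in inflected form: bias_agent(["inspires", "guides"])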
Table <ref> summarizes our gender bias audit of LLMs with respect to verbs implying more power for the agent than for the theme. We first note that for both verb sets, bias_agent is substantially larger than bias_theme. This result indicates that a man is considerably more likely to be considered the agent when a woman is the theme and the verb implies that the agent has greater power than the theme. We also note that the completions mildly favor men over women even for the theme; however, those values are closer to 0.
§.§ Implicit Bias in NLI Systems
We describe our approach to quantify model bias in our NLI framework specific to our task.
Consider modifying the sub-corpus 𝒟_torture to 𝒟_torture^flipped, where the gender identifiers in each premise sentence are flipped to the equivalent identifier of the opposite gender. For instance, the premise The wife tortured the husband both mentally and physically will be modified to The husband tortured the wife both mentally and physically. Flipping gendered words to test bias through counterfactuals in the context of coreference resolution has been previously explored in <cit.>. We argue that if a premise in 𝒟_torture entails A man tortures a woman, the flipped premise in 𝒟_torture^flipped should instead entail A woman tortures a man in a gender-neutral NLI system. Hence the entailment gap for torture computed on 𝒟_torture should be equal in magnitude and opposite in polarity to the entailment gap computed on 𝒟_torture^flipped. The NLI system's (ℳ) overall bias score with respect to the verbs present in 𝒳_unpleasant is thus computed as
NLI_bias(ℳ, 𝒳_unpleasant) = (1/|𝒳_unpleasant|) ∑_v ∈𝒳_unpleasant | gap(𝒟_v, v) + gap(𝒟_v^flipped, v) |.
In simple words, for each verb we compute the entailment gap on the relevant sub-corpus (value_1) and on the flipped sub-corpus (value_2), add the two, and take the absolute value of the sum. The bias score is the average of these absolute values across all verbs: a score close to 0 indicates that the NLI system has minimal bias, whereas larger values indicate greater bias.
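A compact sketch of this bias score, reusing the entailment_gap helper above, is given below; the data-structure names are ours, and the model is any predictor with the same string-label interface.

def nli_bias(model, verbs, subcorpora, flipped_subcorpora, hypotheses):
    """Average |gap(D_v) + gap(D_v^flipped)| over the verbs; 0 means no measured bias.
    hypotheses[v] = (male-perpetrator hypothesis, female-perpetrator hypothesis)."""
    total = 0.0
    for v in verbs:
        h_man, h_woman = hypotheses[v]
        g = entailment_gap(subcorpora[v], h_man, h_woman, model)
        g_flip = entailment_gap(flipped_subcorpora[v], h_man, h_woman, model)
        total += abs(g + g_flip)
    return total / len(verbs)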
Our baseline is an off-the-shelf NLI system from AllenNLP (denoted by ℳ_base). We find that NLI_bias(ℳ_base, 𝒳_unpleasant) is 0.27.[We note that a bias-aware NLI variant from AllenNLP has a better starting point (bias score 0.20) than the base model. However, the bias-aware model exhibits slower convergence than the base model when we conduct our active learning steps as discussed in Section 7.3. With an identical experimental setting, after iteration 3, the bias-aware model improves its bias score to 0.133.]
§.§ Bias Mitigation Via Inconsistency Sampling
Active learning is a powerful and well-established supervised machine learning technique <cit.> characterized by the interaction between the learner, aka the classifier, and the teacher (oracle or annotator). Each interaction step consists of the learner requesting from the teacher the label of an unlabeled instance sampled using a given sampling strategy, and augmenting the data set with the newly acquired label. Next, the classifier is retrained on the augmented data set. This sequential label-requesting and re-training process continues until some halting condition is reached (e.g., an exceeded annotation budget or the desired classifier performance). At this point, the algorithm outputs a classifier, and the objective is for this classifier to closely approximate the (unknown) target concept in the future. The key goal of active learning is to reach strong performance at the cost of fewer labels.
Some of the well-known sampling methods include uncertainty sampling <cit.>, certainty sampling <cit.>, and density-based sampling <cit.>.
Beyond a static strategy, more complex strategies such as adapting strategy selection parameters based on estimated future residual error reduction or combining multiple sampling strategies to balance the label distribution in the procured data set have been explored in <cit.> and <cit.>, respectively.
Inconsistency Sampling. First introduced in Dutta et al. <cit.>, this sampling technique exploits the underlying logical structure of the ⟨ premise, hypothesis ⟩ space. For instance, a premise cannot both entail (or contradict) a given hypothesis and its negation. In our work, we extend this idea and exploit a ⟨ premise, hypothesis ⟩ space richer than that of Dutta et al. <cit.> for logical inconsistency.
Consider the premise/hypothesis pair Continuously her husband used to harass and torture her everyday/A man tortures a woman. We argue that if this premise entails the hypothesis (which it does), the modified premise/hypothesis pair obtained by replacing every gendered word with its opposite-gender counterpart – i.e., Continuously his wife used to harass and torture him everyday/A woman tortures a man – should also entail. If not, it signals a logical inconsistency; a sketch of this sampling criterion follows. For each sampling iteration, we add 60 samples, giving equal weightage to the verbs present in 𝒳_unpleasant.
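The sketch below illustrates the inconsistency criterion; the gender-flip map is a deliberately naive stand-in for the full substitution rules (it ignores, for instance, the her/him ambiguity), and nli is the same placeholder predictor interface as before.

import re

# Naive, illustrative flip map; the real substitution rules are richer.
FLIP = {"husband": "wife", "wife": "husband", "man": "woman", "woman": "man",
        "he": "she", "she": "he", "his": "her", "her": "his", "him": "her"}

def flip_gender(text):
    return re.sub(r"\b\w+\b",
                  lambda m: FLIP.get(m.group(0).lower(), m.group(0)), text)

def inconsistent_premises(premises, h_man, h_woman, nli):
    """Premises whose original and gender-flipped NLI verdicts disagree logically;
    these are the candidates sent to the annotators at each active learning round."""
    picked = []
    for p in premises:
        if nli(p, h_man) == "entailment" and nli(flip_gender(p), h_woman) != "entailment":
            picked.append(p)
        elif nli(p, h_woman) == "entailment" and nli(flip_gender(p), h_man) != "entailment":
            picked.append(p)
    return picked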
Table <ref> summarizes our active learning results. For both models, ℳ_base and ℳ_bias-aware, we conduct three rounds of active learning using inconsistency sampling and stop when the performance improvement becomes indiscernible (≤ 0.01). All annotations are independently conducted by two annotators. Since legal documents are typically written in clear, unambiguous language, we observe near-perfect agreement (Cohen's κ value 0.96). The remaining disagreements are resolved through a post-annotation adjudication step. Table <ref> indicates that with subsequent active learning steps, our NLI system exhibits less bias. Given that the maximum possible bias score is 2, we achieve a substantial improvement in mitigating the bias.
Now that we are more confident that our model inferences are less sensitive to gender, we evaluate the societal bias present in our corpus.
Figure <ref> summarizes our text entailment results. Barring one verb, for all the other verbs men are identified as perpetrators more often than women. We further note that verbs indicating physical abuse particularly stand out with larger values. The average entailment gap for the five verbs unambiguously indicating physical harm is much higher (0.41) than that for the five verbs that may or may not indicate physical harm (0.19). A manual inspection of 200 randomly sampled ⟨ premise, hypothesis⟩ pairs aligns with our automated method's overall findings.
§ DISCUSSIONS AND LIMITATIONS
In this paper, we present the first-ever computational analysis (to our knowledge) of gender inequality in divorce court proceedings in India. Based on the documented allegations of the parties involved in the divorce, our analyses indicate a striking gender inequality as described in these public records. While documented evidence of marital distress in India exists in the social science literature, how such factors play out in divorce is not well understood. Our study sheds light on a vulnerable and practically invisible community in India.
Methodologically, we identify and address several gaps and limitations of existing NLP techniques to quantify gender inequality. We believe our finding specific to legal text is new, and our method to address it is simple, effective, and intuitive. Casting the problem of quantifying gender inequality as a text entailment task is also new. Our text entailment results suggest that NLI can be a viable tool for computational social science researchers to analyze similar research questions (e.g., who gets child custody can be estimated with the hypotheses the husband gets the custody of the child and the wife gets the custody of the child). Moreover, our bias mitigation strategy exploiting a novel inconsistency sampling technique using counterfactuals holds promise.
Our work has the following limitations.
Sentence level processing: An important point to keep in mind, however, is that our analyses operate at the sentence level. If in a court proceeding, a sentence records that the plaintiff accuses the defendant of wrongdoing which the defendant denies in a subsequent sentence, how these two contradicting claims are resolved in the court cannot be inferred without language models that can handle document-level contexts. We believe our research will open the gates for investigation with newer-age LLMs that can handle broader contexts.
Archival limitation: The sparse presence of the North-Eastern region in our dataset is most likely due to an archival limitation, as some of these states record the highest rates of divorce <cit.>. Our study is also limited by the overall archival extent of IndianKanoon.
Economic independence: Some of the court proceedings mention the litigants' occupations. We annotated 100 randomly sampled occupations for women. While an overwhelming majority of the sampled occupations are homemakers, 32% of the women in our sample are working women, compared to the World Bank figure of 23% for labor force participation of women in India.
Economic independence and divorce merit a deeper exploration.
Out-of-court settlements, separation, abandonment: Finally, not all unhappy marriages end up in divorce and reach court for dissolution. Many out-of-court settlements happen. As documented in <cit.>, the number of separated women in 2011 is almost three times the number of divorced women. Since divorce is still looked at as a social stigma <cit.> and family institutions are highly valued in India, there could be many women who continue with their dysfunctional marriages while unhappy. The court does not know their stories.
§ ETHICAL STATEMENT
We work with public court records. Prior studies exist on Indian court proceedings <cit.>. We conduct aggregate analysis refraining from presenting any personally identifiable information in the paper. Hence, we do not see any ethical concern. Rather, we believe our findings and methods can be valuable to policymakers and social scientists.
A study on binary gender inequality runs the risk of oversimplifying gender, which we acknowledge lies on a spectrum. Same-sex marriage is yet not legal in India. Further nuances will be needed to extend our work to other cultures allowing same-sex marriages. We are also sensitive to previous studies that point out the potential harms of the erasure of gender and sexual minorities <cit.>.
10
jacob2016marriage
Suraj Jacob and Sreeparna Chattopadhyay.
Marriage dissolution in India: Evidence from Census 2011.
Economic and Political Weekly, 51(33):25–27, 2016.
dommaraju2016divorce
Premchand Dommaraju.
Divorce and separation in India.
Population and Development Review, pages 195–223, 2016.
goode1962marital
William J. Goode.
Marital satisfaction and instability-a cross-cultural class analysis
of divorce rates.
International social science journal, 14(3):507–526, 1962.
mani2017study
A Santhosh Mani and Bhanu Priya.
A study on the recent trends of divorce in India.
ZENITH International Journal of Multidisciplinary Research,
7(8):25–32, 2017.
belliappa2013gender
Jyothsna Belliappa.
Gender, class and reflexive modernity in India.
Springer, 2013.
vasudevan2015causes
Bindhu Vasudevan, Devi M. Geetha, Anitha Bhaskar, Binu Areekal, Anupa Lucas,
et al.
Causes of divorce: a descriptive study from central Kerala.
Journal of evolution of medical and dental sciences,
4(20):3418–3427, 2015.
bhattacharya2019comparative
Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi,
Kripabandhu Ghosh, and Saptarshi Ghosh.
A comparative study of summarization algorithms applied to legal case
judgments.
In ECIR, pages 413–428. Springer, 2019.
kalia2022classifying
Arvind Kalia, Naveen Kumar, and Nischay Namdev.
Classifying case facts and predicting legal decisions of the Indian
Central Information Commission: a natural language processing approach.
In Advances in Deep Learning, Artificial Intelligence and
Robotics, pages 35–45. Springer, 2022.
ash2021group
Elliott Ash, Sam Asher, Aditi Bhowmick, Sandeep Bhupatiraju, Daniel Chen,
Tanaya Devi, Christoph Goessmann, Paul Novosad, and Bilal Siddiqi.
In-group bias in the Indian judiciary: Evidence from 5 million
criminal cases.
Technical report, Working paper, August, 2021.
kumar2020sexual
Anil Kumar.
Sexual harassment of women at workplace: How far is indian law
protective?
International Academic Journal of Law, 1(1):35–39, 2020.
madaan2018analyze
Nishtha Madaan, Sameep Mehta, Taneea Agrawaal, Vrinda Malhotra, Aditi Aggarwal,
Yatin Gupta, and Mayank Saxena.
Analyze, detect and remove gender stereotyping from bollywood movies.
In MAccT, pages 92–105. PMLR, 2018.
DBLP:conf/acl-trac/BhattacharyaSKB20
Shiladitya Bhattacharya, Siddharth Singh, Ritesh Kumar, Akanksha Bansal, Akash
Bhagat, Yogesh Dawer, Bornini Lahiri, and Atul Kr. Ojha.
Developing a multilingual annotated corpus of misogyny and
aggression.
In Proceedings of the Second Workshop on Trolling, Aggression
and Cyberbullying, pages 158–168, 2020.
khadilkar2021gender
Kunal Khadilkar, Ashiqur R. KhudaBukhsh, and Tom M. Mitchell.
Gender bias, social bias, and representation in Bollywood and
Hollywood.
Patterns, 3(4):100486, 2022.
rao1973dowry
R. Jaganmohan Rao.
Dowry system in India — a socio-legal approach to the problem.
Journal of the Indian Law Institute, 15(4):617–625, 1973.
ahmad2008dowry
Nehaluddin Ahmad.
Dowry deaths (bride burning) in India and abetment of suicide: a
socio-legal appraisal.
JE Asia & Int'l L., 1:275, 2008.
sonawat2001understanding
Reeta Sonawat.
Understanding families in India: A reflection of societal changes.
Psicologia: Teoria e Pesquisa, 17:177–186, 2001.
caliskan2017semantics
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan.
Semantics derived automatically from language corpora contain
human-like biases.
Science, 356(6334):183–186, 2017.
maccartney2008modeling
Bill MacCartney and Christopher D. Manning.
Modeling semantic containment and exclusion in natural language
inference.
In COLING 2008, pages 521–528, 2008.
mandal2021unsupervised
Arpan Mandal, Kripabandhu Ghosh, Saptarshi Ghosh, and Sekhar Mandal.
Unsupervised approaches for measuring textual similarity between
legal court case reports.
Artificial Intelligence and Law, 29(3):417–451, 2021.
HaltermanKSO21Policing
Andrew Halterman, Katherine A. Keith, Sheikh Muhammad Sarwar, and Brendan
O'Connor.
Corpus-Level Evaluation for Event QA: The IndiaPoliceEvents Corpus
Covering the 2002 Gujarat Violence.
In ACL/IJCNLP 2021, volume ACL/IJCNLP 2021 of Findings
of ACL, pages 4240–4253, 2021.
DuttaPolice
Sujan Dutta, Beibei Li, Daniel S. Nagin, and Ashiqur R. KhudaBukhsh.
A murder and protests, the capitol riot, and the chauvin trial:
Estimating disparate news media stance.
In IJCAI, pages 5059–5065, 2022.
kaur2019gap
Harjnder Kaur-Aulja, Farzana Shain, and Alison Lilley.
A Gap Exposed: What is Known About Sikh Victims of Domestic Violence
Abuse (DVA) and Their Mental Health?
European Journal of Mental Health, 14(1):179–189, 2019.
mistry1982personal
PJ Mistry.
Personal names: Their structure, variation, and grammar in
Gujarati.
South Asian Review, 6(3):174–190, 1982.
ghansham2002female
Devaki Monani Ghansham.
Female foeticide and the dowry system in India.
In Townsville International Women’s Conference, James Cook
Univ., Australia, 2002.
banerjee2014dowry
Priya R. Banerjee.
Dowry in 21st-century India: the sociocultural face of
exploitation.
Trauma, Violence, & Abuse, 15(1):34–40, 2014.
rastogi2006dowry
Mudita Rastogi and Paul Therly.
Dowry and its link to violence against women in India: Feminist
psychological perspectives.
Trauma, Violence, & Abuse, 7(1):66–77, 2006.
carpenter2016protecting
Deepshikha Carpenter and Polly Vauquline.
Protecting Women from Domestic Violence in Assam, India? Evaluating
Section 498-A, The Indian Penal Code (IPC), 1983 vs the Protection of Women
from Domestic Violence Act (PWDVA), 2005.
Journal of International Women's Studies, 18(1):133–144, 2016.
babu2011dowry
Gopalan Retheesh Babu and Bontha Veerraju Babu.
Dowry deaths: a neglected public health issue in India.
International health, 3(1):35–43, 2011.
jakimow2013everyone
Tanya Jakimow.
‘Everyone must give’: Explaining the spread and persistence of
bridegroom price among the poor in rural Telangana, India.
Journal of Asian and African Studies, 48(2):180–194, 2013.
DBLP:conf/nips/MikolovSCCD13
Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey
Dean.
Distributed representations of words and phrases and their
compositionality.
In Advances in Neural Information Processing Systems, pages
3111–3119, 2013.
dagan2005pascal
Ido Dagan, Oren Glickman, and Bernardo Magnini.
The pascal recognising textual entailment challenge.
In Machine Learning Challenges Workshop, pages 177–190.
Springer, 2005.
bowman-etal-2015-large
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning.
A large annotated corpus for learning natural language inference.
In EMNLP, 2015.
halterman-etal-2021-corpus
Andrew Halterman, Katherine Keith, Sheikh Sarwar, and Brendan O'Connor.
Corpus-level evaluation for event QA: The IndiaPoliceEvents
corpus covering the 2002 Gujarat violence.
In ACL-IJCNLP, pages 4240–4253, 2021.
Capitol2022
Ashiqur R. KhudaBukhsh, Rupak Sarkar, Mark S. Kamlet, and Tom M. Mitchell.
Fringe news networks: Dynamics of US news viewership following the
2020 presidential election.
In ACM WebScience, pages 269–278, 2022.
hossain-etal-2020-covidlies
Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean
Young, and Sameer Singh.
COVIDLies: Detecting COVID-19 misinformation on social media.
In Proceedings of the 1st Workshop on NLP for COVID-19 (Part
2) at EMNLP 2020, December 2020.
DBLP:conf/naacl/ManziniLBT19
Thomas Manzini, Yao Chong Lim, Alan W. Black, and Yulia Tsvetkov.
Black is to criminal as caucasian is to police: Detecting and
removing multiclass bias in word embeddings.
In NAACL-HLT, pages 615–621, 2019.
greenwald2014malice
Anthony G. Greenwald and Thomas F. Pettigrew.
With malice toward none and charity for some: ingroup favoritism
enables discrimination.
American Psychologist, 69(7):669, 2014.
DBLP:conf/acl/HoyleWWAC19
Alexander Hoyle, Lawrence Wolf-Sonkin, Hanna M. Wallach, Isabelle Augenstein,
and Ryan Cotterell.
Unsupervised discovery of gendered language through latent-variable
modeling.
In ACL 2019, pages 1706–1716, 2019.
warriner2013norms
Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert.
Norms of valence, arousal, and dominance for 13,915 English lemmas.
Behavior research methods, 45(4):1191–1207, 2013.
DBLP:conf/iclr/LampleCRDJ18
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and
Hervé Jégou.
Word translation without parallel data.
In ICLR. OpenReview.net, 2018.
qi2020stanza
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning.
Stanza: A Python natural language processing toolkit for many human
languages.
In ACL: System Demonstrations, 2020.
bojanowski2017enriching
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov.
Enriching word vectors with subword information.
TACL, 5:135–146, 2017.
bender2021dangers
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret
Shmitchell.
On the Dangers of Stochastic Parrots: Can Language Models Be Too
Big?
In ACM FaccT, pages 610–623, 2021.
rudinger2017social
Rachel Rudinger, Chandler May, and Benjamin Van Durme.
Social bias in elicited natural language inferences.
In Proceedings of the First ACL Workshop on Ethics in Natural
Language Processing, pages 74–79, 2017.
DBLP:journals/corr/abs-2105-05541
Shanya Sharma, Manan Dey, and Koustuv Sinha.
Evaluating gender bias in natural language inference.
CoRR, abs/2105.05541, 2021.
kumar2020nurse
Vaibhav Kumar, Tenzin Singhay Bhotia, and Tanmoy Chakraborty.
Nurse is closer to woman than surgeon? mitigating gender-biased
proximities in word embeddings.
TACL, 8:486–503, 2020.
SAPPowerAgency
Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin
Choi.
Connotation frames of power and agency in modern films.
In EMNLP 2017, pages 2329–2334, 2017.
PowerTransformer
Xinyao Ma, Maarten Sap, Hannah Rashkin, and Yejin Choi.
Powertransformer: Unsupervised controllable revision for biased
language correction.
In EMNLP 2020, pages 7426–7441, 2020.
taylor1953cloze
Wilson L. Taylor.
“Cloze procedure”: A new tool for measuring readability.
Journalism quarterly, 30(4):415–433, 1953.
paperno-etal-2016-lambada
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham,
Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel
Fernández.
The LAMBADA dataset: Word prediction requiring a broad discourse
context.
In ACL 2016, pages 1525–1534, 2016.
ettinger-2020-bert
Allyson Ettinger.
What BERT is not: Lessons from a new suite of psycholinguistic
diagnostics for language models.
TACL, 8:34–48, 2020.
devlin2018bert
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
Bert: Pre-training of deep bidirectional transformers for language
understanding.
arXiv preprint arXiv:1810.04805, 2018.
Roberta
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer
Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
Roberta: A robustly optimized BERT pretraining approach.
CoRR, abs/1907.11692, 2019.
Megatron
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper,
and Bryan Catanzaro.
Megatron-lm: Training multi-billion parameter language models using
model parallelism.
CoRR, abs/1909.08053, 2019.
lu2020gender
Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta.
Gender bias in neural natural language processing.
In Logic, Language, and Security, pages 189–202. Springer,
2020.
settles2009active
Burr Settles.
Active learning literature survey.
Computer Sciences Technical Report 1648, University of
Wisconsin–Madison, 2009.
sindhwani2009uncertainty
Vikas Sindhwani, Prem Melville, and Richard D. Lawrence.
Uncertainty sampling and transductive experimental design for active
dual supervision.
In ICML, pages 953–960. ACM, 2009.
nguyen2004active
Hieu T. Nguyen and Arnold Smeulders.
Active learning using pre-clustering.
In ICML, page 79, 2004.
donmez2007dual
Pinar Donmez, Jaime G Carbonell, and Paul N Bennett.
Dual strategy active learning.
In Machine Learning: ECML 2007, pages 116–127. Springer, 2007.
palakodety2020voice
Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell.
Voice for the voiceless: Active sampling to detect comments
supporting the Rohingyas.
In AAAI 2020, volume 34-01, pages 454–462, 2020.
ArjunErasurePaper
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff M.
Phillips, and Kai-Wei Chang.
Harms of gender exclusivity and challenges in non-binary
representation in language technologies.
In EMNLP, pages 1968–1994, 2021.
|
http://arxiv.org/abs/2307.04810v1 | 20230710180135 | Great Inequality of Jupiter and Saturn I: The Planetary Three Body Problem, Heliocentric development by Lagrange multipliers, Perturbation Theory Formulation | [
"Jonathan Tot",
"S. R. Valluri",
"P. C. Deshmukh"
] | physics.class-ph | [
"physics.class-ph",
"math.DS",
"70F07 (Primary) 37J40"
] |
In this paper, we undertake to present a self-contained and thorough analysis of the gravitational three body problem, with anticipated application to the Great Inequality of Jupiter and Saturn. The analysis of the three body Lagrangian is very convenient in heliocentric coordinates with Lagrange multipliers, the coordinates being the vector-sides r⃗_i, i=1,2,3 of the triangle that the bodies form. In two dimensions to begin with, the equations of motion are formulated into a dynamical system for the polar angles θ_i, angular momenta ℓ_i and eccentricity vectors e⃗_i. The dynamical system is simplified considerably by change of variables to certain auxiliary vector f⃗_i=r̂_i+e⃗_i. We then begin to formulate the Hamiltonian perturbation theory of the problem, now in three dimensions. We first give the geometric definitions for the Delaunay action-angle variables of the two body problem. We express the three body Hamiltonian in terms of Delaunay variables in each sector i=1,2,3, revealing that it is a nearly integrable Hamiltonian. We then present the KAM theory perturbative approach that will be followed in future work, including the modification that will be required because the Hamiltonian is degenerate.
Great Inequality of Jupiter and Saturn I: The Planetary Three Body Problem, Heliocentric development by Lagrange multipliers, Perturbation Theory Formulation
Jonathan Tot
Department of Mathematics and Statistics, Dalhousie University,
Halifax, Nova Scotia, Canada B3H 4R2
mailto:[email protected]@dal.ca
S.R. Valluri
Department of Physics and Astronomy, University of Western Ontario
and Mathematics, King’s University College
London, Ontario, Canada N6A 3K7
mailto:[email protected]@uwo.ca
P.C. Deshmukh
CAMOST, IIT Tirupati and IISER Tirupati,
Tirupati, Andhra Pradesh 517619, India
mailto:[email protected]@iittp.ac.in
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The three body-problem and the Great Inequality of Jupiter and Saturn are much studied from the 18th and 19th centuries. They are part of the history of the development of Celestial Mechanics, inaugurated by the publication of Newton’s Principia in 1687 <cit.>, and many of the great names that come down to us from that time were heavily involved, most chiefly the mathematical techniques proposed and analyses completed by Leonhard Euler, Joseph-Louis Lagrange, and finally Pierre-Simon Laplace, in his Théorie de Jupiter et de Saturne, written in 1785 <cit.>. These advances were enabled by increasing observational accuracy during the time of John Flamsteed, the first Astronomer Royal, and astronomical forecasts of such figures as Cassini and Halley <cit.>.
Following the grand realization, due to Laplace, that the observed discrepancies of Jupiter and Saturn’s motions from the essentially Keplerian predictions could actually be accounted for by the mutual Newtonian gravitation of the planets, other researchers into the 19th century continued to make improved theories and contributions to the methods, perhaps most notably the work of G.W. Hill <cit.> and the series manipulations of Charles Delaunay <cit.>.
The study of the three body problem, as a mathematical problem in its own right, also began with the widespread acknowledgement of Newtonian universal gravitation. Euler (1767) and Lagrange (1772) both initially found particular periodic solutions, which today are understood to be associated with central configurations of the system <cit.>. Much of the early focus was given to the restricted three body problem, in which one mass is negligible relative to the other two, so that the larger masses form a Keplerian system, and the third body orbits in the gravitational field of the first two. It was Euler who first set the (circular) restricted three body problem in rotating coordinates, and Lagrange who demonstrated the existence of equilibrium points within the rotating frame, today known as the five Lagrange points, which correspond to five orbits for the third mass with the same orbital frequency as the two-body system.
In the late 1800s, Poincaré studied the general three body problem. Famously, his work won the prize competition for the 60th birthday of Oscar II, the King of Sweden and Norway in 1885. Poincaré’s work on the three body problem led him to consider what are now known as Poincaré sections and first return maps, and these insights ultimately led to the development of Kolmogorov-Arnold-Moser theory in the mid-20th century <cit.>. In the large, three-volume Les Méthodes Nouvelles de la Mécanique Céleste (1892-99) <cit.>, Poincaré saw, in the unpredictable nature of three-body problem solutions, the first glimpses of chaotic dynamics, which dominates much of dynamical systems analysis today <cit.>.
In this work we present a self-contained analysis of the planetary three body problem, in which two lighter masses orbit a heavier central body, with particular application to the Sun-Jupiter-Saturn system. One aim of this work is pedagogical: by working with modern mathematical notation and physical terminology, we hope to make the subject matter more accessible to modern audiences. For readers who are new to the subject, we put classical terminology in italics upon its first occurrence and definition in the text. Jupiter and Saturn's conjunction in December 2020 gained much attention in the media <cit.>, so popular presentation of this work should serve to foster public engagement in mathematics and the sciences.
Crucial to our analysis is the work of Broucke and Lass <cit.>, who formulate the Lagrangian problem in terms of the three vector-sides of the triangle that the bodies form, using Lagrange multipliers. In Part <ref>, we present this treatment of the three body problem in heliocentric coordinates. In <ref> we show how total energy and total angular momentum are conserved in this scheme. Staying in two dimensions in <ref>, we transform the equations of motion into a system of first order ODEs for the polar angles, angular momenta and eccentricity vectors e⃗_i of the three sectors i=1,2,3 of the model. We also demonstrate how the constraint can be employed to remove the third sector entirely, leaving equations for only i=1,2, corresponding to the planets. Auxiliary vectors f⃗_i=r̂_i+e⃗_i for each sector are introduced in <ref>, which make a considerable simplification to the algebraic form of the dynamical equations. The geometrical properties of the auxiliary vectors are explored. We also present alternate forms of these equations in <ref>, in terms of the polar representations (e_i,β_i), (f_i,ψ_i) of the eccentricity and auxiliary vectors, respectively.
In Part <ref> we return to three dimensions, and move toward the perturbational analysis of this problem. In <ref> we begin with the geometric definition and construction of the Delaunay action-angle variables for the two-body problem, with particular focus on the mean anomaly. In <ref> we present Hamilton's equations for the problem, in terms of the perturbing function 𝐑=-λ⃗·(r⃗_1+r⃗_2+r⃗_3), where λ⃗ is the Lagrange multiplier, r⃗_1+r⃗_2+r⃗_3=0 being the constraint. Finally, in <ref> we present the basic approach and the setup of Kolmogorov-Arnold-Moser (KAM) theory
for this problem.
PART:
The Planetary Three Body Problem
Let the masses of the light bodies be m_J, m_S respectively, and M the mass of a heavier body. Let μ=m_S/m_J, expected to be 𝒪(1), while ϵ=m_J/M is small, ϵ≪ 1. For Jupiter, Saturn and the Sun these are
M=1.989×10^30 kg, m_J=1.898×10^27 kg, m_S=5.683×10^26 kg, so that ϵ=9.54..×10^-4≈10^-3 and μ=0.2994..≈0.3.
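These ratios can be checked with a two-line computation (Python; masses as quoted above, printed values rounded):

# Mass ratios for the Sun-Jupiter-Saturn system, using the masses quoted above (kg).
M, m_J, m_S = 1.989e30, 1.898e27, 5.683e26
eps = m_J / M      # ~9.54e-4
mu = m_S / m_J     # ~0.299
print(eps, mu)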
Let the positions of the bodies in an inertial reference
frame be X⃗ for mass M, and x⃗_J,x⃗_S for m_J,m_S. Then the Lagrangian is
ℒ_in=1/2(MẊ⃗̇^2+m_Jẋ⃗̇_J^2+m_Sẋ⃗̇_S^2)+G(M m_J/|x⃗_J-X⃗|+M m_S/|x⃗_S-X⃗|+m_J m_S/|x⃗_S-x⃗_J|)
We will make a change of variables to: the center of mass and heliocentric coordinates
R⃗ =MX⃗+m_Jx⃗_J+m_Sx⃗_S/M_Σ
r⃗_J =x⃗_J-X⃗
r⃗_S =X⃗-x⃗_S
where M_Σ=M+m_J+m_S is the total mass. With these the Lagrangian becomes
ℒ_in= 1/2M_ΣṘ⃗̇^2+1/2(M/M_Σ){ m_Jṙ⃗̇_J^2+m_Sṙ⃗̇_S^2+(m_J m_S/M)|ṙ⃗̇_J+ṙ⃗̇_S|^2}
+(m_JM/M_Σ)· GM_Σ(r_J^-1+(m_S/m_J) r_S^-1+(m_S/M)/|r⃗_J+r⃗_S|)
We see the center-of-mass coordinate R⃗ is cyclic; its dynamics decouple from the other variables, and so can be ignored. We will consequently drop the first term of (<ref>). Then taking a factor m_J out of the kinetic terms, we can see both remaining
terms are proportional to m̃_J=M/M_Σm_J, which we may regard as a reduced mass of `Jupiter'. Then a reduced and normalized Lagrangian is
𝐋_in=ℒ_in/m̃_J=1/2(ṙ⃗̇_J^2+μṙ⃗̇_S^2+ϵμ|ṙ⃗̇_J+ṙ⃗̇_S|^2)+α(r_J^-1+μ r_S^-1+ϵμ/|r⃗_J+r⃗_S|)
where α=GM_Σ is the gravitational parameter. Often α is taken to be 1. However, we will work in units of Jupiter's average distance and Jupiter's year, so that we should take α=R^3ω^2=4π^2.
At this stage we recognize 1) that the first and second terms of each parentheses, what can be called the `Jupiter' and `Saturn' terms, are just like the terms for the displacement vector of a two body problem, with large central mass at the origin, and 2) that the third terms, proportional to ϵμ, also look like two-body problem terms, but with displacement vector r⃗_SJ=±(r⃗_J+r⃗_S). Taking the minus-sign option, which gives r⃗_SJ=x⃗_S-x⃗_J, we find we are dealing with a Lagrangian for three independent two-body problems, with vectors r⃗_J,r⃗_S and r⃗_SJ that satisfy the condition r⃗_J+r⃗_S+r⃗_SJ=0. Looking back at our definition of these vectors in terms of inertial coordinates X⃗,x⃗_J,x⃗_S, the constraint is satisfied identically:
(x⃗_J-X⃗)+(X⃗-x⃗_S)+(x⃗_S-x⃗_J)≡0
We see that the three vectors are the sides of the triangle that the three bodies form. The system is analogous to three bodies, of relative masses 1,μ and ϵμ with respect to the first mass, all orbiting a central, stationary mass located at the origin, such that the gravitational parameter for each two-body problem is α=GM_Σ. Of particular note is that these three supposed bodies orbiting a central mass do not gravitate to each other. The constraint is maintained by forces that are sourced by Lagrange multipliers. We thus consider the modified Lagrangian
𝐋_λ=∑_i{μ_i(1/2ṙ⃗̇_i^2+α/r_i)+λ⃗·r⃗_i}
where the sum is on i=J,S,SJ or simply i=1,2,3, and μ_i=1,μ,ϵμ. The additional terms are λ⃗·∑_i=1^3r⃗_i, so that the constraint equation is
∑_i=1^3r⃗_i =0
The Euler-Lagrange equations are
μ_ir̈⃗̈_i=-μ_iα r⃗_i/r_i^3+λ⃗ , i=1,2,3.
The dynamical version of the constraint equation is
∑_i=1^3r̈⃗̈_i =0
If, in addition to this, we have initial conditions that satisfy both
∑_i=1^3r⃗_i =0 and ∑_i=1^3ṙ⃗̇_i =0
then the condition (<ref>) will be satisfied for all time. This allows us to solve for the Lagrange multipliers by taking linear combination of the equations (<ref>). We have
0=∑_i=1^3r̈⃗̈_i=-α(∑_i=1^3r̂_i/r_i^2)+(∑_i=1^31/μ_i)λ⃗
and thus
λ⃗=αδ∑_i=1^3r̂_i/r_i^2
where the coefficient δ is the reciprocal of the sum of reciprocal masses
δ=(∑_i=1^31/μ_i)^-1=ϵμ/(1+ϵ+ϵμ)=m_S/M_Σ=m̃_S/M=2.8536..× 10^-4
The equations (<ref>) are
r̈⃗̈_i=A_ijr̂_j/r_j^2 (summation on j)
where the matrix A of coefficients is
A =α([ -(1+ϵ)/(1+ϵ+ϵμ) δ δ; ϵ/(1+ϵ+ϵμ) -(1+ϵμ)/(1+ϵ+ϵμ) ϵ/(1+ϵ+ϵμ); (1+ϵ+ϵμ)^-1 (1+ϵ+ϵμ)^-1 -ϵ(1+μ)/(1+ϵ+ϵμ) ])
=G([ -(M+m_J) m_S m_S; m_J -(M+m_S) m_J; M M -(m_J+m_S) ]).
That each column sums to 0 corresponds to r̈⃗̈_1+r̈⃗̈_2+r̈⃗̈_3=0.
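As a numerical illustration, the planar equations of motion (<ref>) can be integrated directly as sketched below (Python with SciPy); the initial conditions shown are placeholders chosen only to satisfy the two constraints, not actual planetary data, and this is a sketch rather than the integrator used for any of our results.

import numpy as np
from scipy.integrate import solve_ivp

eps, mu = 9.54e-4, 0.2994            # mass ratios quoted above
alpha = 4 * np.pi**2                 # GM_Sigma in units of Jupiter's distance and year
mu_i = np.array([1.0, mu, eps * mu]) # mu_1, mu_2, mu_3
delta = 1.0 / np.sum(1.0 / mu_i)     # reciprocal of the sum of reciprocal masses

def rhs(t, y):
    """State y packs the three planar vectors r_i and their velocities."""
    r = y[:6].reshape(3, 2)
    v = y[6:].reshape(3, 2)
    rn = np.linalg.norm(r, axis=1)
    lam = alpha * delta * np.sum(r / rn[:, None]**3, axis=0)   # Lagrange multiplier vector
    acc = -alpha * r / rn[:, None]**3 + lam / mu_i[:, None]
    return np.concatenate([v.ravel(), acc.ravel()])

# Placeholder initial conditions obeying sum_i r_i = 0 and sum_i v_i = 0:
r0 = np.array([[1.0, 0.0], [-1.8, 0.0], [0.8, 0.0]])
v0 = np.array([[0.0, 6.0], [0.0, -4.5], [0.0, -1.5]])
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([r0.ravel(), v0.ravel()]),
                rtol=1e-9, atol=1e-9)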
§ CONSTANTS OF MOTION
§.§ Energy
The conjugate linear momenta are
p⃗_i=∇_ṙ⃗̇_i𝐋_λ=μ_iṙ⃗̇_i
and by Legendre transform, the Hamiltonian is
𝐇_λ =∑_ip⃗_i·ṙ⃗̇_i -𝐋_λ
=∑_i{p_i^2/μ_i-(1/2p_i^2/μ_i+μ_iα/r_i+λ⃗·r⃗_i)}
=∑_i{p_i^2/2μ_i-μ_iα/r_i-λ⃗·r⃗_i }
thus when the constraint (<ref>) is satisfied, the following (reduced) energy is conserved
ξ=E/m̃_J=∑_iμ_i(1/2ṙ⃗̇_i^2-α/r_i)=∑_i μ_i ξ_i
where ξ_i=ṙ⃗̇_i^2/2-α/r_i is the specific energy for each sector of the model.
§.§ Angular Momentum
Now we put the dynamical vectors r⃗_i in polar coordinates. Up to this point, we could have been in three dimensions, but for now we work in two dimensions. Each vector is r⃗_i(t)=r_i(t)r̂(θ_i(t)), where r̂(θ)=(cosθ, sinθ)^T
ṙ⃗̇_i^2=ṙ_i^2+r_i^2θ̇_i^2
momenta conjugate to θ_i are
h_i=∂ 𝐋_λ/∂θ̇_i=μ_i r_i^2θ̇_i
and the Euler-Lagrange equations are
ḣ_i=∂ 𝐋_λ/∂θ_i=∂/∂θ_i(λ⃗·r⃗_i)
=λ⃗·r_iθ̂_i
=λ⃗·r_iTr̂_i
=λ⃗·Tr⃗_i
where θ̂ is the vector function θ̂(φ)=(-sinφ, cosφ)^T, θ̂_i is the evaluation θ̂(θ_i(t)), and T is the 2×2 matrix T=[[ 0 -1; 1 0 ]], which is ccw-rotation by π/2, so that θ̂=Tr̂. This shows that the total angular momentum satisfies
ḣ=∑_i=1^3ḣ_i=λ⃗·T∑_ir⃗_i=0 by the constraint.
The presence of the rotation matrix T in ḣ shows that conservation of angular momentum is due to the fact that the system as a whole is invariant under global rotation.
Going forward, it will be best to use specific angular momenta
ℓ_i=h_i/μ_i=r_i^2θ̇_i
which gives
ℓ̇_i=1/μ_iλ⃗·r_iθ̂
Using the Lagrange multiplier (<ref>), this is
ℓ̇_i =αδ/μ_i(∑_jr̂_j/r_j^2)·r_iθ̂_i
∴ℓ̇_i =αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
we shall label these functions ℓ̇_i=τ_i=τ_i(r_j,θ_j,ℓ_j); τ for torque. Notice that these equations are perturbations for i=1,2, since μ_1,2∼𝒪(1), but this is not the case for i=3; μ_3=ϵμ∼𝒪(δ), so the coefficient in the equation for ℓ̇_3 is 𝒪(1).
§.§.§ Angular Momentum in Three Dimensions
In three dimensions, we have the vector (total) angular momentum
h⃗=∑_ir⃗_i×p⃗_i=∑_iμ_ir⃗_i×ṙ⃗̇_i
the time-derivative of which is
ḣ⃗̇ =∑_ir⃗_i×ṗ⃗̇_i
=∑_i r⃗_i×(-μ_iαr̂_i/r_i^2+λ⃗)
=(∑_ir⃗_i)×λ⃗
so angular momentum is conserved by the constraint.
§ RETURNING TO THE EQUATIONS OF MOTION
r̈⃗̈_i=A_ijr̂_j/r_j^2
With r⃗_i(t)=r_i(t)r̂(θ_i(t)), we develop these equations in polar coordinates. We will work out the r̂_i- and θ̂_i-components for the i^th equation of (<ref>).
ṙ⃗̇_i=ṙ_ir̂_i+r_iθ̇_iθ̂_i
so ṙ⃗̇_i^2=ṙ_i^2+r_i^2θ̇_i^2 , and
r̈⃗̈_i=(r̈_i-r_iθ̇_i^2)r̂_i+(r_iθ̈_i+2ṙ_iθ̇_i)θ̂_i .
The θ̂_i-component of r̈⃗̈_i is nothing other than ℓ̇_i/r_i; indeed the θ̂_i-components of (<ref>) are just the torque equations (<ref>) derived above. That leaves us with the r̂_i-components
r̈_i-r_iθ̇^2_i=(∑_j=1^3A_ijr̂_j/r_j^2)·r̂_i
with θ̇_i=ℓ_i/r_i^2 and A_ii=α(-1+δ/μ_i), while A_ij=αδ/μ_i for j≠ i, this is
r̈_i=ℓ_i^2/r_i^3+A_ii/r_i^2+αδ/μ_i∑_j≠ icos(θ_j-θ_i)/r_j^2
∴ r̈_i=ℓ_i^2/r_i^3-α r_i^-2+αδ/μ_i∑_j=1^3cos(θ_j-θ_i)/r_j^2
Here, in analogy to the definition we have for τ_i=ℓ̇_i=(αδ/μ_i) r_i∑_jsin(θ_j-θ_i)/r_j^2, we define three functions σ_i as
σ_i =1/μ_iλ⃗·r⃗_i
=αδ/μ_i r_i∑_j=1^3cos(θ_j-θ_i)/r_j^2
then what we have is
r̈_i=ℓ_i^2/r_i^3-α r_i^-2+σ_i/r_i
and define the right-hand sides as functions a_i=a_i(θ,ℓ,r)
§.§ Eccentricity or Laplace-Runge-Lenz Vectors
At this stage, we may formulate our differential equations as a system of 12 first order ODEs
θ̇_̇i̇ =Ω_i=ℓ_i/r_i^2
ℓ̇_i =τ_i=αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
ṙ_i =v_i
v̇_i =a_i=ℓ_i^2/r_i^3-α r_i^-2+σ_i/r_i
Now we introduce a change of variables from (r_i,v_i) to eccentricity vectors, in each sector
e⃗=ṙ⃗̇×ℓ⃗/α-r̂.
This is a normalization of the Laplace-Runge-Lenz vector
A⃗=p⃗×L⃗-m^2α r̂=m^2α e⃗
where p⃗ is linear momentum and L⃗ is the (dimensionfull) angular momentum. In the two-body problem, these vectors are constants of motion. The eccentricity vector has magnitude equal to that of the eccentricity of the Keplerian orbit and points in the direction of periapsis. Working in two dimensions, our specific angular momenta are out of the plane, ℓ⃗=ℓẑ, so that in polar coordinates
e⃗ =(v r̂+r θ̇ θ̂)×(ℓ ẑ/α)-r̂
=(ℓ^2/α/r-1)r̂-ℓ v/α θ̂
(where ℓ/r^2 has been substituted for θ̇). This defines the eccentricity vector in terms of its polar components
e^r=ℓ^2/α/r-1, e^θ=-ℓ v/α
the reverse change of variables being
r =ℓ^2/α/1+e^r
v =-αe^θ/ℓ.
Equation (<ref>) is precisely the form of a Keplerian elliptic orbit if ℓ and e⃗ are constant, as in the two-body problem, since
r=ℓ^2/α/1+e^r =ℓ^2/α/1+e⃗·r̂
=ℓ^2/α/1+e^xcosθ+e^ysinθ
=ℓ^2/α/1+ecos(θ-β)
where β is the angle of e⃗ from the positive x-axis, called the longitude of periapsis. Equation (<ref>) thus describes the osculating orbit to the trajectory, which is the elliptic orbit that a body would follow given it's instantaneous angular momentum and eccentricity. As a further note, if we take the time-derivative of (<ref>), writing τ for ℓ̇, we find
v=ṙ=1/α(2ℓτ/1+e^r-ℓ^2/(1+e^r)^2(ė⃗̇·r̂+ℓ/r^2e⃗·θ̂))=1/α(2ℓτ/1+e^r-r^2/ℓ^2ė⃗̇·r̂)-αe^θ/ℓ
which reduces to precisely (<ref>) if τ and ė⃗̇ are 0.
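For concreteness, a minimal Python sketch (numpy assumed) of the map from a planar state (r⃗,ṙ⃗̇) to (ℓ,e⃗), together with a numerical check of the osculating-orbit identity r=(ℓ^2/α)/(1+e⃗·r̂):

import numpy as np

alpha = 4.0 * np.pi**2                           # gravitational parameter in the paper's units

def osculating(x, v):
    # planar state (position x, velocity v) -> specific angular momentum and eccentricity vector
    r = np.hypot(x[0], x[1])
    ell = x[0] * v[1] - x[1] * v[0]              # z-component of r x v
    e = np.array([ v[1] * ell / alpha - x[0] / r,    # (v x ell zhat)/alpha - rhat
                  -v[0] * ell / alpha - x[1] / r])
    return ell, e

x, v = np.array([1.2, -0.3]), np.array([1.5, 5.0])   # an arbitrary illustrative state
ell, e = osculating(x, v)
r = np.hypot(x[0], x[1])
print(r, (ell**2 / alpha) / (1.0 + e @ (x / r)))     # the two radii agree to machine precision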
§.§ System of Equations using Eccentricity vectors
We can differentiate the definitions (<ref>) and use the system (<ref>-<ref>) to derive first order differential equations for the eccentricity vectors ė⃗̇_i=…(θ,ℓ,e⃗ ). First working in one sector (no indices i), writing τ for ℓ̇, we have
ė⃗̇ =1/α(2ℓτ/r-ℓ^2v/r )r̂+(ℓ^2/α/r-1)ℓ/r^2θ̂-(τ v+ℓ r̈) θ̂/α+ℓ v/αℓ/r^2r̂
= 2ℓτ/α rr̂-ℓ/α(r̈-ℓ^2/r^3+α r^-2+vτ/ℓ)θ̂.
Substituting v=-α e^θ/ℓ and r̈=a=ℓ^2/r^3-α r^-2+σ/r gives
ė⃗̇ =2ℓτ/α rr̂+(τ/ℓ e^θ-ℓσ/α r)θ̂.
Pairing half of the first term with the second θ̂-term,
ė⃗̇ =ℓτ/α rr̂+τ/ℓe^θθ̂+ℓ/α r(τr̂-σθ̂).
In the first term we substitute ℓ/α r=(1+e^r)/ℓ:
ė⃗̇ =τ/ℓ(1+e^r)r̂ + τ/ℓe^θθ̂+ℓ/α r(τr̂-σθ̂)
=τ/ℓ(r̂+e^rr̂+e^θθ̂)+ℓ/α r(τr̂-σθ̂).
Thus we find
ė⃗̇ =τ/ℓ(e⃗+r̂)+ℓ/α r(τ r̂-σ θ̂).
The combination (τ r̂-σ θ̂)/r is, in each sector
(τ_i r̂_i-σ_i θ̂_i)/r_i= (αδ/μ_i∑_j=1^3sin(θ_j-θ_i)/r_j^2)[[ cosθ_i; sinθ_i ]]
-(αδ/μ_ir_i∑_j=1^3cos(θ_j-θ_i)/r_j^2)[[ -sinθ_i; cosθ_i ]]
= αδ/μ_i ∑_j=1^3r_j^-2[[ sinθ_j; -cosθ_j ]]
= -αδ/μ_i ∑_j=1^3r_j^-2θ̂_j=-αδ/μ_i ∑_j=1^3r_j^-2 Tr̂_j
∴ (τ_i r̂_i-σ_i θ̂_i)/r_i= -Tλ⃗/μ_i
So finally, our differential equations for ė⃗̇_i are
ė⃗̇_i=τ_i/ℓ_i(e⃗_i+r̂_i)-ℓ_i/αμ_i Tλ⃗.
Observe that these equations are ∝ δ/μ_i, so that ė⃗̇_i are 𝒪(δ) for i=1,2, i.e. Jupiter and Saturn, while ė⃗̇_3 is 𝒪(1). The system of twelve first order ODEs for the variables (θ_i,ℓ_i,e⃗_i) is
θ̇_̇i̇ =Ω_i=ℓ_i/r_i^2
ℓ̇_i =τ_i=αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
ė⃗̇_i =τ_i/ℓ_i(e⃗_i+r̂_i)-ℓ_i/αμ_iTλ⃗ for i=1,2,3
where r_i=ℓ_i^2/[α(1+e^r_i)]
Mixing between the sectors enters through the torques τ_i and λ⃗.
§.§ Reduction by the Constraint to Two Sectors
Given that solutions will satisfy the constraint 0=∑_ir⃗_i, we can write down algebraic/trigonometric expressions for the variables of the 3rd sector r⃗_3=-r⃗_1-r⃗_2 in terms of the 1st and 2nd sectors. Substituting these relations into the i=1,2 equations
would leave the system (<ref>-<ref>) for i=1,2 only, and the equations for ℓ̇_i,ė⃗̇_i would all be 𝒪(δ).
In particular, if r⃗_3=-r⃗_1-r⃗_2, then we can write expressions for the radius and trig ratios of the argument of r⃗_3 in terms of those of r⃗_1,r⃗_2. First of all, by the cosine-law
r_3^2=r_1^2+r_2^2+2 r_1r_2cos(θ_2-θ_1)
and then from basic trigonometry, we can express cosθ_3,sinθ_3 as follows
cosθ_3=-r_1cosθ_1+r_2cosθ_2/r_3
sinθ_3=-r_1sinθ_1+r_2sinθ_2/r_3
These may then be worked into the equations (<ref>,<ref>), still using r_i=α^-1ℓ_i^2/(1+e^r_i) but now only for i=1,2. This results in a system of first order ODEs θ̇_1=Ω_1, θ̇_2=Ω_2, ℓ̇_1=τ_1, ℓ̇_2=τ_2,ė⃗̇_1=v⃗_1,ė⃗̇_2=v⃗_2 where the angular velocities Ω_i are 𝒪(1), but the torques τ_i and eccentricity-velocities v⃗_i are 𝒪(δ), facilitating a multiple-scales analysis.
§ AUXILIARY VECTORS F⃗=R̂+E⃗
The expressions for the right-hand sides of (<ref>-<ref>), especially the torques and eccentricity-velocities, are very large and cumbersome in terms of the variables (θ,ℓ,e^r,e^θ). Written as rational functions with numerators and denominators expanded out, they have respectively 16 and 9 terms for the torques, and 222 and 9 terms for the eccentricity-velocities, not including terms within 3/2-powers in the denominators.
The prevalence of the combination r̂_i+e⃗_i presents an opportunity for simplification. Observe that the denominator of the expression for the osculating orbit (<ref>) is the r̂_i-component of this vector
r=ℓ^2/α/1+e^r=ℓ^2/α/(r̂+e⃗ )·r̂.
The combination r̂_i+e⃗_i also occurs in the DEs for eccentricity vectors (<ref>). If we change variables from eccentricity vectors e⃗_i to these auxiliary vectors f⃗_i=r̂_i+e⃗_i, the modified equations become
r_i =ℓ_i^2/α f_i^r=ℓ_i^2/α f⃗_i·r̂_i=ℓ_i^2/α f_icos(ψ_i-θ_i)
ḟ⃗̇_i =τ_i/ℓ_if⃗_i-ℓ_i/αμ_i(Tλ⃗)+d/dtr̂_i
=τ_i/ℓ_if⃗_i-ℓ_i/αμ_i(Tλ⃗)+Ω_i θ̂_i
where f_i=f⃗_i and ψ_i is the argument of f⃗_i (such that f⃗_i/f_i=r̂(ψ_i)). Let the point be laboured, that given this definition of f⃗_i, the differential equation is
ḟ⃗̇_i=d/dtr̂_i+𝒪(δ).
So to leading order, the vector f⃗_i will be very nearly equal to the unit vector r̂_i(t)=r̂(θ_i(t)). Indeed, the solution is exactly f⃗_i=r̂_i+e⃗_i. So solutions will have f_i∼ 1 and ψ_i∼θ_i for small initial eccentricity, at least for a finite duration after initial conditions.
§.§ Geometric relationship of the Auxiliary vectors
For ellipses with even moderate eccentricity, up to ∼0.5, the vector -f⃗=-r̂-e⃗ points, to leading order, from the position of the planet towards the center of the ellipse. Indeed, the vector rf⃗=r⃗+re⃗ is such that
r⃗-rf⃗=-re⃗,
while the coordinate of the center of the ellipse is -ae⃗, where a is the semi-major axis, as demonstrated in Fig. <ref>. So the degree to which these coincide is the degree to which r and a agree. In terms of eccentricity, semi-major axis and true anomaly ν=θ-β, the relationship is
r=a(1-e^2)/1+ecosν.
At minimum re=ae(1-e), while the maximum value is ae(1+e). Thus the location r⃗-rf⃗ lies within a segment of the major-axis which is a length a e^2 on either side of the ellipse center, as shown in Fig. <ref>. The error of -f⃗ pointing from the location of the planet to the center of the orbit is 𝒪(e^2). Specifically, the difference in angle of -re⃗ vs. -ae⃗ as seen from the position r⃗—in other words, the angle ∠ CPE—is e^2sinνcosν to leading order in e.
Moreover, if we consider the equation
ar̂=-ae⃗+af⃗,
then we see the following geometry: construct an circle around the focus of the ellipse, with radius a. If we continue the vector r⃗ from the focus out to radius a, we reach the point ar̂ on the circle. The position of the ellipse centre from the focus is the first term -ae⃗. Thus we see from (<ref>) that the vector af⃗ points from the centre of the ellipse to the position of the orbit projected radially to radius a, as shown in Fig. <ref>.
§.§ The Dynamical System in terms of the Auxiliary vectors
What might simplify the equations the most is to write the equations for ḟ⃗̇_i in polar form, for the components ḟ_i^r=d/dt(r̂_i·f⃗_i) and ḟ_i^θ=d/dt(θ̂_i·f⃗_i). That is, we have
f⃗_i =f_i^rr̂_i+f_i^θθ̂_i
and ḟ⃗̇_i =ḟ_i^rr̂_i+f_i^r(Ω_iθ̂_i) +ḟ_i^θθ̂_i+f_i^θ(-Ω_ir̂_i)
=(ḟ_i^r-Ω_i f_i^θ)r̂_i+(ḟ_i^θ+Ω_i f_i^r)θ̂_i
The equation (<ref>) for ḟ⃗̇_i becomes
ḟ_i^r =τ_i/ℓ_if_i^r+Ω_i f_i^θ-ℓ_i/αμ_i(r̂_i· Tλ⃗)
ḟ_i^θ =τ_i/ℓ_if_i^θ-Ω_i f_i^r-ℓ_i/αμ_i(θ̂_i· Tλ⃗)+Ω_i.
We know the components of Tλ⃗ from (<ref>)[Multiplying (<ref>) by -T, we can also find λ⃗=μ_i/r_i(σ_ir̂_i+τ_iθ̂_i).]
r̂_i·(-Tλ⃗)=μ_i τ_i/r_i
θ̂_i·(-Tλ⃗)=-μ_i σ_i/r_i
which gives
ḟ_i^r =τ_i/ℓ_if_i^r+ℓ_iτ_i/α r_i+Ω_i f_i^θ
ḟ_i^θ =τ_i/ℓ_if_i^θ-ℓ_iσ_i/α r_i+Ω_i(1-f_i^r) .
§.§ Final substitutions
Things may be made yet more concise. The osculating orbits are given by r_i=ℓ_i^2/α f_i^r, and the prevalent coefficients in (<ref>,<ref>) become
Ω_i =ℓ_i/r_i^2=α^2f_i^r^2/ℓ_i^3
ℓ_i/α r_i =f_i^r/ℓ_i
so that we find
ḟ_i^r =2τ_i/ℓ_if_i^r+α^2f_i^r^2/ℓ_i^3f_i^θ
ḟ_i^θ =(-σ_i/ℓ_i+α^2f_i^r(1-f_i^r)/ℓ_i^3)f_i^r+τ_i/ℓ_if_i^θ.
For completeness, we give the expressions for σ and τ in these terms
τ_i =α^2δ/μ_iℓ_i^2/f_i^r∑_j≠ if_j^r^2/ℓ_j^4sin(θ_j-θ_i)
σ_i =α^2δ/μ_i[f_i^r/ℓ_i^2+ℓ_i^2/f_i^r∑_j≠ if_j^r^2/ℓ_j^4cos(θ_j-θ_i)].
Equations (<ref>,<ref>) are best read in the 3-sector version of this problem (that is, if one does not eliminate r_3 and θ_3). If one wishes to work in only two sectors i=1,2, the substitutions (<ref>,<ref>,<ref>) should be made into (<ref>) and (<ref>).
We make the substitutions (<ref>,<ref>) into (<ref>) and (<ref>) as well, which become
ė⃗̇_i =τ_i/ℓ_i(e⃗_i+r̂_i)+f_i^r/ℓ_i(τ_ir̂_i-σ_iθ̂_i)
ḟ⃗̇_i =ė⃗̇_i+Ω_iθ̂_i
=τ_i/ℓ_if⃗_i+f_i^r/ℓ_i(τ_ir̂_i-σ_iθ̂_i)+α^2f_i^r^2/ℓ_i^3θ̂_i
In full then, the system of equations, in terms of the polar components of the auxiliary vectors f_i^r,f_i^θ are
θ̇_̇i̇ =Ω_i=α^2f_i^r^2/ℓ_i^3
ℓ̇_i =τ_i=(<ref>)
ḟ_i^r =2τ_i/ℓ_if_i^r+α^2f_i^r^2/ℓ_i^3f_i^θ
ḟ_i^θ =(-σ_i/ℓ_i+α^2f_i^r(1-f_i^r)/ℓ_i^3)f_i^r+τ_i/ℓ_if_i^θ.
Numerical solutions of (<ref>-<ref>) for Jupiter's eccentricity vector are shown in Fig. <ref>.
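One minimal route to such solutions—a sketch of an approach, not the production code behind the figure—is to integrate the Cartesian form of the equations of motion with constraint-satisfying initial data and convert the first sector to osculating elements afterwards. The Python sketch below assumes numpy and scipy, and its near-circular initial conditions are illustrative rather than ephemeris-accurate.

import numpy as np
from scipy.integrate import solve_ivp

alpha = 4.0 * np.pi**2
M, m_J, m_S = 1.0, 9.54e-4, 2.85e-4              # approximate masses in solar units (assumed)
G = alpha / (M + m_J + m_S)
A = G * np.array([[-(M + m_J),  m_S,         m_S        ],
                  [  m_J,      -(M + m_S),   m_J        ],
                  [  M,          M,         -(m_J + m_S)]])

def rhs(t, y):
    r = y[:6].reshape(3, 2)                      # r_1, r_2, r_3
    d = np.linalg.norm(r, axis=1)
    acc = A @ (r / d[:, None]**3)                # A_ij rhat_j / r_j^2
    return np.concatenate([y[6:], acc.ravel()])

# Near-circular sectors 1 and 2; sector 3 closes the constraints sum r_i = 0, sum rdot_i = 0
x1, v1 = np.array([1.00, 0.0]), np.array([0.0,  np.sqrt(alpha / 1.00)])
x2, v2 = np.array([0.0, 1.83]), np.array([-np.sqrt(alpha / 1.83), 0.0])
x3, v3 = -(x1 + x2), -(v1 + v2)
y0 = np.concatenate([x1, x2, x3, v1, v2, v3])

sol = solve_ivp(rhs, (0.0, 500.0), y0, rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.0, 500.0, 5001))

# Jupiter's osculating eccentricity vector e_1(t) from e = (v x ell zhat)/alpha - rhat
r1, u1 = sol.y[0:2].T, sol.y[6:8].T
d1 = np.linalg.norm(r1, axis=1)
ell1 = r1[:, 0] * u1[:, 1] - r1[:, 1] * u1[:, 0]
e1 = np.stack([ u1[:, 1] * ell1 / alpha - r1[:, 0] / d1,
               -u1[:, 0] * ell1 / alpha - r1[:, 1] / d1], axis=1)
print(np.linalg.norm(e1, axis=1).max())          # small, slowly modulated forced eccentricity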
§ OTHER FORMULATIONS
For completeness, we present the following forms of the equations for both ė⃗̇_i and ḟ⃗̇_i. First of all, for ė⃗̇_i in the polar coordinates as above
ė_i^r =2τ_i/ℓ_i(1+e^r_i)+α^2(1+e_i^r)^2/ℓ_i^3e_i^θ
ė_i^θ =-(σ_i/ℓ_i+α^2e_i^r(1+e_i^r)/ℓ_i^3)(1+e_i^r)+τ_i/ℓ_ie_i^θ .
Of course, these are related to (<ref>,<ref>) as the components of e⃗_i,f⃗_i are related by
f_i^r=1+e_i^r, f_i^θ=e_i^θ.
Another option, using either the eccentricity or auxiliary vectors, is to express them in their own polar coordinates; that is,
e⃗_i(t) =e_i(t)r̂(β_i(t))
f⃗_i(t) =f_i(t)r̂(ψ_i(t)).
Here, e_i are the eccentricities (of the osculating orbits) and β_i are the longitudes of periapsis, whereas f_i are the lengths of the auxiliary vectors and ψ_i are the angles which f⃗_i make with the fixed reference (positive x-axis, for instance). These definitions give
ė⃗̇_i=ė_i r̂_β_i+e_i β̇_i θ̂_β_i
ḟ⃗̇_i=ḟ_i r̂_ψ_i+f_i ψ̇_̇i̇ θ̂_ψ_i
where here the notation r̂_× has been used for r̂(×(t)). From these, we derive differential equations for norms (e_i,f_i) and arguments (β_i,ψ_i) by taking r̂_β_i,θ̂_β_i-components[These are derived with the following helpful properties of the r̂,θ̂ functions: r̂(a)·r̂(b)=θ̂(a)·θ̂(b)=cos(a-b), r̂(a)·θ̂(b)=sin(a-b).] of (<ref>) and r̂_ψ_i,θ̂_ψ_i-components of (<ref>)
ė_i =r̂_β_i·ė⃗̇_i=τ_i/ℓ_i[e_i+(1+f_i^r)cos(θ_i-β_i)]+σ_i/ℓ_if_i^rsin(θ_i-β_i)
e_i β̇_i =θ̂_β_i·ė⃗̇_i
= τ_i/ℓ_i(1+f_i^r)sin(θ_i-β_i) -σ_i/ℓ_if_i^rcos(θ_i-β_i)
while the equations for f_i,ψ_i are
ḟ_i=r̂_ψ_i·ḟ⃗̇_i
=τ_i/ℓ_i(f_i+f_i^rcos(ψ_i-θ_i))+f_i^r/ℓ_i(α^2f_i^r/ℓ_i^2-σ_i)sin(ψ_i-θ_i)
f_i ψ̇_i=θ̂_ψ_i·ḟ⃗̇_i
= -τ_if_i^r/ℓ_isin(ψ_i-θ_i) + f_i^r/ℓ_i(α^2f_i^r/ℓ_i^2 - σ_i) cos(ψ_i-θ_i) .
The systems {θ̇_i,ℓ̇_i,ė_i,β̇_i } and {θ̇_i,ℓ̇_i,ḟ_i,ψ̇_i } can be closed by the relations
f_i^r=f_icos(ψ_i-θ_i)=1+e_i^r=1+e_icos(θ_i-β_i).
PART:
Perturbation Theory for the Three Body Problem
§ DELAUNAY VARIABLES - AN INTERPRETATION OF THE MEAN ANOMALY
We will not reproduce here the whole definition and derivation of action-angle variables for the two-body problem, nor the canonical transformation to what are known as the Delaunay variables. Suffice it to say that the Delaunay variables are a set of action-angle variables for the two-body problem, and that in these variables the Hamiltonian depends on none of the conjugate coordinates, revealing that the Hamiltonian is integrable and the dynamics remarkably simple. Excellent references for the material can be found in <cit.>.
The two-body problem, of masses m and M orbiting each other subject to an attractive inverse-square-of-distance force F⃗=-k r⃗/r^3, where r⃗ is the relative position of one body to the other, reduces to a “1-body" problem of a mass μ=mM/(m+M), called the reduced mass, moving about a fixed centre under the same inverse-square force, and the displacement of the reduced mass from the centre is equal to the relative separation of the original bodies. This problem has the equation of motion μr̈⃗̈=-k r⃗/r^3, and it may be given by the Lagrangian ℒ=1/2 μ|ṙ⃗̇|^2+k/r; equivalently the Hamiltonian ℋ=|p⃗|^2/(2μ)-k/r, where the linear momentum is p⃗=∇_ṙ⃗̇ℒ=μṙ⃗̇. The only parameter essential to the problem is the ratio α=k/μ, and in the case of the gravitational two-body problem this is GmM/μ=G(M+m), called the standard gravitational parameter (sometimes `standard' is dropped).
Many things are well known about this problem and its solutions. It is straightforward to confirm that the angular momentum L⃗=μ r⃗×v⃗ is conserved, and that consequently the motion is planar. It is also well known that the orbits {r⃗(t)| t∈ T} are conic sections, including ellipses for negative energies E=μ v^2/2-k/r (with the fixed `centre' at one of the foci of the ellipse), which will be our focus. The physical size of an elliptical orbit is characterized by the semi-major axis a, and it is also well known that the period of elliptical trajectories is given by Kepler's Third Law: the square of an orbital period is proportional to the cube of the length of the semi-major axis
T^-2a^3=α/4π^2
or equivalently, a^3ω^2=α, where ω=2π/T=√(α/a^3), the angular frequency associated with period T, is called the mean motion.
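In these units this is immediate to check numerically; in the Python sketch below (numpy assumed) the Saturn-to-Jupiter distance ratio is an approximate, assumed value.

import numpy as np

alpha = 4.0 * np.pi**2                      # a^3 * omega^2 = alpha in these units
a_J, a_S = 1.0, 1.83                        # semi-major axes; Saturn's ratio is approximate
T = lambda a: 2.0 * np.pi / np.sqrt(alpha / a**3)
print(T(a_J), T(a_S))                       # ~1.0 and ~2.47 Jupiter years
print(T(a_S) / T(a_J))                      # close to 5/2, the near-resonance behind the Great Inequality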
Before describing the Delaunay variables, we introduce various `specific' quantities—quantities that are made `massless', dividing by the chosen mass scale. Thus the specific energy ξ=v^2/2-α/r, and specific angular momentum ℓ⃗=r⃗×v⃗, ℓ=|ℓ⃗|. These are both constants of the motion, given by ξ=-α/(2a) and ℓ=√(α a(1-e^2)).
With respect to a fixed orthonormal frame {x̂,ŷ,ẑ}, the Delaunay momenta are: the z-component of angular momentum ℓ^z, the total angular momentum ℓ, and finally a third action j, related to both the energy and semi-major axis, given as j=√(α a)=α/√(-2ξ). For lack of another name, I will refer to this as the impulse of the orbit. The coordinates are as follows[It should be noted that the Delaunay variables completely describe the 3D orientation of the elliptical orbit, as well as the progression of motion within the orbit. The coordinates Φ and η describe the axis of rotation of the orbital plane relative to the reference xy-plane and the position of periapsis within the orbital plane. From the momenta ℓ^z,ℓ,j can be determined a,e and α—in other words, the size, shape and speed of motion within the orbit. The actual rotation of the orbital plane seems to be missing; that is, the inclination ζ. But since the angular momentum is perpendicular to the orbital plane, we can determine the inclination by cos(ζ)=ℓ^z/ℓ.]. Conjugate to ℓ^z is the longitude of ascending node, indicated with Φ. This is the angle from the positive x-axis to the ray along which the plane of the orbit intersects the xy-plane and where ż>0—this direction is called the ascending node. Conjugate to ℓ is the argument of periapsis, η: the angle from the ascending node to the ray connecting the central body to the position of closest approach along the orbit, or periapsis, measured in the orbital plane. The final angle, conjugate to the impulse j, is the so-called mean anomaly M. Unlike the previous two coordinates, M is not so straightforward to define geometrically. We must first construct the eccentric anomaly, denoted E, by considering a circle of radius a which circumscribes the ellipse, tangent at periapsis and apoapsis (A and B respectively). The construction is shown in Fig. <ref>. The centre C of this circle coincides with the centre of the ellipse. We describe the position P of the orbiting body around the ellipse by the angle subtended at the focus O to periapsis, called the true anomaly and denoted ν
O⃗P⃗=r⃗=r(ν) r̂(ν)≡ r(ν)(cosν,sinν)
r(ν) =a(1-e^2)/1+ecosν.
From P we project to the auxiliary circle perpendicular to the major axis, arriving at a point Q. The eccentric anomaly is the angle about C from periapsis to Q, E=∠ ACQ. Now, the ellipse can be recovered from the circle by a contraction of factor (1-e^2)^1/2 parallel to the minor axis. This reveals that, working in the orbital plane and relative to the centre C, the position P is
C⃗P⃗=(acos E, bsin E),
the axes being aligned with the major and minor axes, and b=a√(1-e^2) is the semi-minor-axis.
Since the separation of the centre and focus is CO=ae, we have the relations
cos E =(1-e^2)cosν/(1+ecosν)+e=(cosν+e)/(1+ecosν)
sin E =√(1-e^2)sinν/1+ecosν
tanE2 =√(1-e/1+e)tanν2
and the inverse relations
cosν =cos E-e/1-ecos E
sinν =√(1-e^2)sin E/1-ecos E.
These relations give the separation of the orbit as
OP=r(E)=a(1-ecos E).
We move to constructing the mean anomaly by employing Kepler's Second Law: equal areas are swept out by the orbit in equal times. This can be expressed as the fact that the time-rate-of-change (t.r.o.c.) of the swept area 𝒜=AOP is constant; in particular
d/dt(2𝒜)=ℓ.
Owing to the aforementioned contraction, the area bounded by the auxiliary circle 𝒜̃=AOQ also grows uniformly with time
d/dt(2𝒜̃)=(1-e^2)^-1/2d/dt(2𝒜)=√(α a)=j.
Thus, we may see the significance of j as (twice) the t.r.o.c. of orbit-area projected to the auxiliary circle, just as angular momentum ℓ is twice the t.r.o.c. of area swept in the orbit itself.
The mean anomaly M is defined as an angle in the auxiliary circle, measured at the center, say to a point X on the circle, M=∠ ACX, such that the resulting sector has the same area as 𝒜̃=AOQ. At a constant radius a, we see that the uniform growth of area 𝒜̃ implies a constant rate-of-change for M
2ACX=a^2M, so a^2Ṁ =d/dt(2𝒜̃)=√(α a)
Ṁ =√(α/a^3)=ω.
The mean and eccentric anomalies can be link by a straightforward calculation
a^2M=2ACX=2AOQ =2(ACQ-OCQ)
=a^2E-2(ae)(asin E)/2
∴ M =E-esin E .
We thus have the relation
E-esin E=ω(t-τ)
where τ is a time when the orbit is at periapsis. A tantalizing equation, but unfortunately this cannot be inverted for the eccentric anomaly as elementary functions of time. If this could be done, we could write the true anomaly explicitly as a function of the mean anomaly, and thus of time. As it is, the relationship is implicit, although of course ν(M;e) is formally a well-defined function
ν(M;e)=2arctan(√(1+e/1-e)tanE(M;e)/2); 0≤ e<1
where E(M;e) is the inverse[It can be seen that this inverse is a well-defined function as follows: for e≤1 the function M(E;e)=E-esin E is increasing except for isolated points. Where M(E;e) is increasing, the inverse E(M;e) is a differentiable function. Even for e=1, on any domain dM/dE≥0, with equality only at E=2kπ, k∈ℤ. This makes E(M;e) a continuous and increasing function, with dE/dM→+∞ for M∈2πℤ.] of (<ref>) for given eccentricity 0≤ e≤ 1. We can see that (<ref>) pairs E=M=kπ for k∈ℤ. If we consider E(M;e)=M+ϑ(M;e), this is ϑ=0 for M=kπ. Furthermore, ϑ(M;e) is 2π-periodic in M, and an odd function. We can thus express ϑ as a sin-Fourier series, with coefficients that are functions of e.
E(M;e)=M+ϑ(M;e)=M+∑_k≥1c_k^E(e)sin(kM)
These Fourier coefficients are c_k^E(e)=2 J_k(ke)/k, where J_k(z) are the Bessel functions of the first kind. We can see that the relationship ν(E) is just the same kind of relationship
ν(E;e)=E+∑_k≥1c_k^ν(e)sin(kE).
Finally, it is straightforward to confirm that composition of such functions is closed—these functions being the sum of the identity function and some odd 2π-periodic function. We conclude that the expression for the true anomaly in terms of the mean anomaly has the same form
ν(M;e)=M+∑_k≥1C_k(e)sin(kM)
this Fourier sine-series is known as the “equation of centre". The coefficients have a remarkable expression in terms of Bessel functions
C_k(e)=2/k{ J_k(ke)+∑_m≥1β^m[J_k-m(ke)+J_k+m(ke)]}, β=1-√(1-e^2)/e∼ e/2+⋯
These coefficients are of order C_k(e)∼𝒪(e^k); the Taylor series for the first few of these functions begin
C_1(e) =2e-1/4e^3+5/96e^5+107/4608e^7+⋯
C_2(e) =5/4e^2-11/24e^4+17/192e^6+⋯
C_3(e) =13/12e^3-43/64e^5+95/512e^7+⋯
C_4(e) =103/96e^4-451/480e^6+⋯
C_5(e) =1097/960e^5-5957/4608e^7+⋯
C_6(e) =1223/960e^6+⋯
C_7(e) =47273/32256e^7+⋯
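These series are straightforward to cross-check numerically. The Python sketch below (numpy and scipy assumed; the eccentricity is an illustrative Jupiter-like value) solves Kepler's equation for E by Newton iteration, converts to the true anomaly, and compares against the equation of centre truncated at k=7 using the Bessel-function coefficients above; the residual is of the expected order e^8.

import numpy as np
from scipy.special import jv

def true_anomaly(Mean, e, tol=1e-14):
    # Solve E - e*sin(E) = M by Newton iteration, then convert E -> nu
    E = Mean.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - Mean) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                            np.sqrt(1.0 - e) * np.cos(E / 2.0))

def C(k, e, mmax=60):
    # Equation-of-centre coefficient C_k(e) from the Bessel-series expression above
    beta = (1.0 - np.sqrt(1.0 - e**2)) / e
    s = jv(k, k * e) + sum(beta**m * (jv(k - m, k * e) + jv(k + m, k * e))
                           for m in range(1, mmax + 1))
    return 2.0 * s / k

e = 0.048                                             # Jupiter-like eccentricity (illustrative)
Mean = np.linspace(0.0, 2.0 * np.pi, 200)
nu_exact = true_anomaly(Mean, e)
nu_series = Mean + sum(C(k, e) * np.sin(k * Mean) for k in range(1, 8))
diff = np.angle(np.exp(1j * (nu_exact - nu_series)))  # compare modulo 2*pi
print(np.max(np.abs(diff)))                           # ~1e-10, i.e. O(e^8)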
§.§ Auxiliary Circular Orbit
We now discuss two different proposals for what could be considered the “auxiliary circular orbit", auxiliary to a given elliptical orbit. The pedagogical idea is to be able to compare/connect the motion of a body in a elliptic Keplerian orbit, to a corresponding position in a related circular orbit. This has the flavour of a “method of images": the (uniformly orbiting) position in the circular orbit is an image of the position in the elliptic orbit, the motion of which is non-uniform, and perhaps more difficult to develop an intuition for. The relationship between the image-position and the elliptical position will ultimately be the geometrical one, based on the definition of the mean anomaly. Indeed, we will take the mean anomaly, as determined in the elliptical orbit, to be the angular position of a body in a circular orbit about the same centre (the focus of the ellipse).
Before proceeding, it should be pointed out that we will think primarily in the orbital plane, or equivalently in two dimensions. In two dimensions Φ and η themselves are undefined (better to say, undefinable), but their sum β=Φ+η is the well defined longitude of periapsis. Delaunay variables in two dimensions are the conjugate pairs ℓ,β and j,M.
Certainly, the obvious option that first presents itself is to take the orbit with eccentricity e^'=0 and radius a^'=a. With j=√(α a), ℓ=√(α a(1-e^2)), ω=√(α/a^3) and ξ=-α/2a, this choice gives ℓ^'=j^'=j, ω^'=ω and ξ^'=ξ. It should be noted that in this scenario we are also taking the same gravitational parameter α^'=α, i.e. the same total mass in the `phantom' circular two-body set-up as in the original. This certainly has simplicity to its advantage, as well as the fact that the phantom orbit has the same period as the original.
However, since the definition of M is so linked to Kepler's Second Law and the uniform t.r.o.c. of area-sweep, it would seem desirable to consider an auxiliary circular orbit which has the same angular momentum ℓ^'=ℓ as the original, as well as having the same period, requiring ω^'=ω. With e^'=0, these conditions require not only taking a circular radius different than the semi-major axis of the elliptical orbit, but also considering a change to the gravitational parameter α^'≠α. These conditions are the following
√(α^' a^') =√(α a(1-e^2)), √(α^'/a^'^3)=√(α/a^3)
the solution to which is
a^' =a(1-e^2)^1/4, α^'=α(1-e^2)^3/4.
With equal t.r.o.c. of area as well as equal orbital duration, it follows that the orbits enclose the same area: the elliptical area π ab=π a^2√(1-e^2)=πa^'^2 is equal to the area of circle with radius a^'. We thus see the sense of this auxiliary circular orbit, as the Keplerian orbit with the same period and bounding the same area. In this sense we really can say the orbits are of the same size.
That we have to consider a modified gravitational parameter is to say that we must consider the auxiliary circular orbit as having a different total mass. But this should not be problematic to us. We do well to remember that the frequency of a Keplerian orbit depends principally on the semi-major axis and the total mass of the bodies in the orbit (we will consider Newton's constant G as universal). So to take a circular orbit with a radius different from a given semi-major axis, if we want an orbit with the same frequency, we must take a different total mass. All this is to suggest that we should consider two orbital scenarios—one of which is both double the size in linear dimension and has eight times the total mass of the other—as more similar to each other than orbits that merely have equal total mass or equal semi-major axis.
This consideration would also seem to elevate the status of the angular momentum ℓ as somehow principal over the impulse j: we have j^'=ℓ^'=ℓ≠ j, ξ^'=ξ(1-e^2)^1/2. Indeed, we are seeing that the angular momentum is more characteristic of the orbit, as it corresponds to the actual area-sweeping rate in the physical orbit, and we preserve that in our `phantom' circular orbit, whereas j is the projected area rate in the a-radius circle that we construct around the ellipse. It is seen that this circle has far less to do with the actual physics than the auxiliary circular orbit proposed above.
§ ALTERNATE MASS PARAMETRIZATION
We have freedom to take a different parametrization of the masses and the coefficients μ_1,2,3. First note, from the mass distribution of the three masses m_J,m_S,M, that any dimensionless ratio of masses depends on two independent mass-ratios. We can form the following
β_1=m_J /M_Σ = 9.54× 10^-4, β_2=m_S/M_Σ = 2.85×10^-4
β_3=M/M_Σ = 0.99876 = 1- 1.2384×10^-3
which are constrained by β_1+β_2+β_3=1 .
Our freedom is in the mass scale m̃ we take to divide the Lagrangian (<ref>), giving the coefficients
μ_1=Mm_J/M_Σm̃, μ_2=Mm_S/M_Σm̃, μ_3=m_Jm_S/M_Σm̃
If we take m̃=λ m_J for some λ>0 (originally we had λ=M/M_Σ=β_3), then these are
μ_1=β_3/λ, μ_2=β_2β_3/β_1λ, μ_3=β_2/λ, δ=β_2β_3/λ
We can choose λ such that δ=1-β_3=β_1+β_2=(m_J+m_S)/M_Σ, in which case μ_3=δ/β_3=δ/(1-δ). This is
λ=β_2β_3M_Σ/m_J+m_S=β_3m_S/m_J+m_S
and this gives
μ_1=m_J+m_S/m_S, μ_2=m_J+m_S/m_J
which are related by 1/μ_1+1/μ_2=1. In terms of the original parameter μ=m_S/m_J∼ 0.3, these are μ_1=1+μ^-1 and μ_2=1+μ. For the masses of Jupiter and Saturn, these are μ_1=4.340=4 1/3+6.452×10^-3 and μ_2=1.2994=1.3-5.796×10^-4. This new mass scale is m̃=[m_S/(m_J+m_S)]·[M/M_Σ]·m_J=Mm_Jm_S/[M_Σ(m_J+m_S)]. We will redefine ϵ≡μ_3=δ/(1-δ)=(m_J+m_S)/M. Notice that δ is always 0≤δ≤1. For the planetary regime, certainly we would say m_J+m_S≤ M, so that δ≤1/2 and ϵ≤1.
It is instructive to return to the coefficient matrix of the equations (<ref>)
A =G([ -(M+m_J) m_S m_S; m_J -(M+m_S) m_J; M M -(m_J+m_S) ])
=α([ β_2-1 β_2 β_2; β_1 β_1-1 β_1; β_3 β_3 β_3-1 ])
=α([ -1+δ/μ_1 δ/μ_1 δ/μ_1; δ/μ_2 -1+δ/μ_2 δ/μ_2; 1-δ 1-δ -δ ])
§ HAMILTON'S EQUATIONS
We may now proceed to consider the asymptotic analysis of the three body problem, in the planetary case: when two bodies (the planets) orbit a third, and their masses are also much smaller than the third, but not so much that the gravitational attraction between them is negligible. Using the Delaunay variables defined in the three sectors, we have the Hamiltonian
𝐇_λ(J_i,h_i,h^z_i,M_i,η_i,Φ_i,λ⃗) =∑_i=1^3{ - μ_i^3 α^2/2J_i^2- λ⃗·r⃗_i}
=∑_i=1^3{ - μ_i^3 α^2/2J_i^2}-λ⃗·(r⃗_1+r⃗_2+r⃗_3)
where the Delaunay momenta are the relative variables (as opposed to specific)
J_i=μ_i j_i , h_i=μ_i ℓ_i , h^z_i=μ_i ℓ^z_i .
We notice that, with the exception of the impulses, the Delaunay variables enter into this Hamiltonian via the geometry, in the vector positions r⃗_i. The vector position in the i^th sector is
r⃗_i=J_i^2/αμ_i^2(1-e_icosE_i)r̂_i
The eccentricity is given by h_i^2/J_i^2=1-e_i^2, and the direction vector r̂_i is derived by the following sequence of rotations: we start with the unit vector in a fixed reference, say x̂-direction. This is then rotated ccw about the z-axis by both the true anomaly (related to the eccentric) and the argument of periapsis, ν+η. At this stage we can imagine we have an elliptic orbit with periapsis at argument η and ascending node at the positive x-axis, but no inclination. We need to rotate ccw about the x-axis by the inclination ζ_i. Note that the rotation by ζ_i is given by cosζ_i=h^z_i/h_i and sinζ_i=√(1-h^z_i^2/h_i^2). Finally, we rotate again about the z-axis, bringing the ascending node to longitude Φ_i. Thus r̂_i is
r̂_i=R^z_Φ_iR^x_ζ_iR^z_ν_i+η_i[1,0,0]^T .
Now, the components as a result of the rotation by true anomaly, are given in terms of the eccentric by
R^z_ν_i[1,0,0]^T=[(cos E_i-e_i)/(1-e_icos E_i),(√(1-e_i^2)sin E_i)/(1-e_icos E_i),0]^T .
So we can see the i^th position vector is
r⃗_i=J_i^2/αμ_i^2 R^z_Φ_iR^x_ζ_iR^z_η_i[cos E_i-e_i,√(1-e_i^2)sin E_i,0]
The components of the remaining rotation matrix are
R^z_Φ R^x_ζ R^z_η=(
[ cosΦcosη-sinΦcosζsinη -cosΦsinη-sinΦcosζcosη sinΦsinζ; sinΦcosη+cosΦcosζsinη cosΦcosζcosη-sinΦsinη -cosΦsinζ; sinζsinη sinζcosη cosζ ])
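A short numerical check (Python, numpy assumed) that composing the three elementary rotations reproduces the matrix written out above:

import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

Phi, zeta, eta = 0.7, 0.3, 1.1                  # arbitrary test angles
R = Rz(Phi) @ Rx(zeta) @ Rz(eta)

cP, sP = np.cos(Phi), np.sin(Phi)
cz, sz = np.cos(zeta), np.sin(zeta)
ce, se = np.cos(eta), np.sin(eta)
R_text = np.array([[cP*ce - sP*cz*se, -cP*se - sP*cz*ce,  sP*sz],
                   [sP*ce + cP*cz*se,  cP*cz*ce - sP*se, -cP*sz],
                   [se*sz,             sz*ce,             cz   ]])
print(np.max(np.abs(R - R_text)))               # agreement to machine precision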
From the Hamiltonian (<ref>), Hamilton's equations are
Ṁ_i =∂𝐇_λ/∂ J_i=μ_i^3α^2J_i^-3+∂𝐑/∂ J_i ,     J̇_i =μ_ij̇_i=-∂𝐑/∂ M_i
η̇_i =∂𝐑/∂ h_i ,     ḣ_i =μ_iℓ̇_i=-∂𝐑/∂η_i
Φ̇_i =∂𝐑/∂ h^z_i ,     ḣ^z_i =μ_iℓ̇^z_i=-∂𝐑/∂Φ_i
0 =∇_λ⃗𝐇_λ=-S⃗
for i=1,2,3, where 𝐑=-λ⃗·S⃗ is the function by which the Hamiltonian is perturbed and S⃗=r⃗_1+r⃗_2+r⃗_3 is the constraint. We know that when the equations are satisfied subject to the constraint, λ⃗ takes the values (<ref>) as determined in the first section, and thus also determined by the geometry (<ref>,<ref>). Now, we must be careful with the partial derivatives of the perturbation. At first blush, we should regard both λ⃗ and S⃗ as functions of the coordinates. If χ is one of the canonical variables, then the derivative of the perturbation with respect to χ is
-∂𝐑/∂χ_i=∂λ⃗/∂χ·S⃗ + λ⃗·∂S⃗/∂χ
In particular, we compute the partial derivative ∂_χS⃗ without regard to the constraint, but from the functional form of the constraint S⃗=r⃗_1+r⃗_2+r⃗_3. If χ=χ_i is a variable of the i^th sector, then this is ∂_χ_ir⃗_i. We might similarly determine the derivative ∂_χ_iλ⃗ by combining (<ref>,<ref>,<ref>), but at this point we may evaluate (<ref>) subject to the constraint S⃗=0, so that the first term vanishes. Thus
∂𝐑/∂χ_i = -λ⃗·∂r⃗_i/∂χ_i.
For some of the derivatives we have the following (suppressing indices i), using c_ζ=cosζ and s_ζ=sinζ,
dM =(1-ecos E) dE
∂_M =(1-ecos E)^-1∂_E
∂_J
= ∂_J|_e + ∂e/∂J ∂_e
= ∂_J|_e + 1-e^2/eJ ∂_e
∂_h
= ∂e/∂h ∂_e + ∂c_ζ/∂h ∂_c_ζ + ∂s_ζ/∂h ∂_s_ζ
= -1-e^2/eh ∂_e - c_ζ/h ∂_c_ζ + c_ζ^2/s_ζh ∂_s_ζ
∂_h^z
= ∂c_ζ/∂h^z ∂_c_ζ + ∂s_ζ/∂h^z ∂_s_ζ
= c_ζ/h^z ∂_c_ζ - c_ζ^2/s_ζh^z ∂_s_ζ.
These will be instrumental in formulating the KAM theory for this perturbative problem.
§ SETUP FOR KAM THEORY
KAM theory is a perturbative approach for nearly integrable Hamiltonians—Hamiltonians that are a perturbation away from depending only on the momenta J̅∈ℝ^n,
𝐇(J̅,θ̅;δ)=𝐇_0(J̅)+ 𝐇̂(J̅,θ̅;δ)∼𝐇_0(J̅)+∑_k≥1δ^k𝐇_k(J̅,θ̅).
One seeks a nearly-identical canonical transformation 𝒞:(J̅,θ̅)↦(J̃,θ̃), given by a generating function
Ψ(J̃,θ̅;δ) =J̃·θ̅ + Ψ̂(J̃,θ̅;δ)
∼J̃·θ̅ + ∑_k≥1δ^kΨ_k(J̃,θ̅),
which gives
J̅=∇_θ̅Ψ =J̃+∇_θ̅Ψ̂(J̃,θ̅;δ)
∼J̃ + ∑_k≥1δ^k ∇_θ̅Ψ_k
θ̃=∇_J̃Ψ =θ̅+∇_J̃Ψ̂(J̃,θ̅;δ)
∼θ̅ + ∑_k≥1δ^k ∇_J̃Ψ_k.
The goal of this transformation is that the Hamiltonian, in terms of the new coordinates, depends only on the new momenta
𝐇∘𝒞^-1 (J̃,θ̃;δ)=𝐇̃(J̃;δ)
Practically speaking, if we only do this to so many terms, say truncating 𝐇̂=∑_k=1^Nδ^k𝐇_k and Ψ̂=∑_k=1^Nδ^kΨ_k, then the transformed Hamiltonian only depends on the new coordinates θ̃ at order N+1 in δ
𝐇̃(J̃,θ̃;δ)=𝐇̃_N(J̃;δ)+𝒪(δ^N+1)(J̃,θ̃;δ).
We will need to decompose the perturbations into the average over the angle-variables θ̅
⟨𝐇⟩(J̅;δ) =(2π)^-n∫𝐇̂(J̅,θ̅;δ) d^nθ̅
⟨𝐇⟩_k(J̅) =(2π)^-n∫𝐇_k(J̅,θ̅) d^nθ̅
and the remainders 𝐅=𝐇̂-⟨𝐇⟩, 𝐅_k=𝐇_k-⟨𝐇⟩_k.
Starting with just one order in δ, N=1, writing Ω̅_0(J̅)=∇_J̅𝐇_0 for the frequency functions of the unperturbed Hamiltonian, we find
𝐇(J̅,θ̅;δ) ∼𝐇_0(J̃+∇_θ̅Ψ̂)+δ 𝐇_1(J̃+∇_θ̅Ψ̂,θ̅)+⋯
∼𝐇_0(J̃+δ ∇_θ̅Ψ_1+⋯)
+δ ⟨𝐇⟩_1(J̃+δ ∇_θ̅Ψ_1+⋯)
+δ 𝐅_1(J̃+δ∇_θ̅Ψ_1+⋯,θ̅)+⋯
∼𝐇_0(J̃)+δ Ω̅_0(J̃)·∇_θ̅Ψ_1+δ ⟨𝐇⟩_1(J̃)+δ 𝐅_1(J̃,θ̅)+𝒪(δ^2)
Thus we have
𝐇̃_1(J̃;δ)=𝐇_0(J̃)+δ ⟨𝐇⟩_1(J̃).
That is, the functional form of the transformed Hamiltonian, as a function of the new momenta, is the sum of the unperturbed Hamiltonian and the average of the perturbation over all angle variables, to leading order. This is achieved by matching the remaining terms at first order
Ω̅_0(J̃)·∇_θ̅Ψ_1(J̃,θ̅)+𝐅_1(J̃,θ̅)=0
Expanding Ψ_1, 𝐅_1 in Fourier multi-series in θ̅, with coefficients Ψ_1^k̅(J̃), F_1^k̅(J̃), (<ref>) becomes (suppressing dependence on J̃)
∑_k̅∈ℤ^n\{0}{(iΩ̅_0·k̅ Ψ_1^k̅+F_1^k̅)e^ik̅·θ̅}=0
So the solution is
Ψ_1(J̃,θ̅)=i∑_k̅∈𝒮_1F_1^k̅(J̃)/k̅·Ω̅_0(J̃) e^ik̅·θ̅
where 𝒮_1⊂ℤ^n is the set of multi-indices for which 𝐅_1 has non-zero Fourier coefficient. Here we finally see a problem: that if ever the frequency vector is orthogonal to one of these integer multi-indices k̅·Ω̅_0=0, then the solution (<ref>) breaks down. This is the problem of resonance, and it requires a modification called resonant perturbation theory. We have a special case of this problem: for our Hamiltonian (<ref>), the unperturbed terms
-μ_1^3α^2/2J_1^2-μ_2^3α^2/2J_2^2
do not depend on the third impulse J_3 nor on any of the angular momenta, which only enter the Hamiltonian through the perturbing function 𝐑. This results in the corresponding components of the frequency vector being identically zero. In other words, our Hamiltonian is degenerate, and the required approach is called degenerate perturbation theory. What we need to do is decompose the perturbation not only into its average-over-all-angles and a remainder, but further decompose that remainder into its average over the mean anomalies—the angles whose momenta the 0th-order Hamiltonian depends on (it will be seen that M_3 can be included, and the integrable term for the third sector can be included with (<ref>))—and the remainder from that. If 𝐑=-λ⃗·S⃗=⟨𝐑⟩+𝐅
⟨𝐑⟩(J_i,h_i,h^z_i)=(2π)^-9∫𝐑 d^3M d^3η d^3Φ
𝐅(J_i,h_i,h^z_i,M_i,η_i,Φ_i)=𝐑-⟨𝐑⟩.
Then we further decompose 𝐅= 𝐅_M+𝐅̃
𝐅_M(J_i,h_i,h^z_i,η_i,Φ_i)=(2π)^-3∫𝐅 d^3M
𝐅̃(J_i,h_i,h^z_i,M_i,η_i,Φ_i)=𝐅-𝐅_M.
So 𝐑=⟨𝐑⟩+𝐅_M+𝐅̃.
Then we seek a canonical transformation to a new Hamiltonian that doesn't depend on the mean anomalies. This will be elaborated in future work.
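Before these complications enter, the non-degenerate first-order step itself is easy to illustrate on a toy one-degree-of-freedom Hamiltonian 𝐇=J^2/2+δcosθ (purely illustrative; this is not the planetary Hamiltonian). There ⟨𝐇⟩_1=0, 𝐅_1=cosθ and Ω̅_0=J, so the homological equation derived above gives Ψ_1=-sinθ/J, and the short Python sketch below (numpy assumed) confirms that the corresponding near-identity change of action removes the angle dependence up to 𝒪(δ^2).

import numpy as np

delta, Jt = 1e-3, 1.3                              # small parameter and a fixed new action J~
theta = np.linspace(0.0, 2.0 * np.pi, 400)

H = lambda J, th: 0.5 * J**2 + delta * np.cos(th)  # toy H = H_0 + delta*H_1
dPsi1_dtheta = -np.cos(theta) / Jt                 # with Psi_1 = -sin(theta)/J~
J_old = Jt + delta * dPsi1_dtheta                  # J = J~ + delta * dPsi_1/dtheta

spread = H(J_old, theta).max() - H(J_old, theta).min()
print(spread, delta**2 / (2.0 * Jt**2))            # residual angle dependence is O(delta^2)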
§ CONCLUSIONS
In this work, we have presented a self-contained analysis of the gravitational three-body problem in the planetary scenario. This is done in heliocentric coordinates with the use of Lagrange multipliers to elegantly handle the form of the Lagrangian. First working in two dimensions, we develop the equations of motion as a first order dynamical system in longitudes θ_i, angular momentum ℓ_i and eccentricity vectors e⃗_i. Auxiliary vectors f⃗_i=r̂_i+e⃗_i greatly simplify the equations algebraically, especially in terms of the components f_i^r,f_i^θ.
We then give the definition and construction of the Delaunay action-angle variables for the two body problem. We present a novel conceptualization of the mean anomaly of an eccentric orbit as the true anomaly of a `phantom' or `image' body, also orbiting the central mass, but in a circular orbit with radius reduced from the semi-major axis a^'=a(1-e^2)^1/4 as well as reduced gravitational parameter α^'=α(1-e^2)^3/4. These trajectories bound equal areas within the orbital plane, and the orbits have the same orbital frequency and angular momentum, as opposed to energy. We then develop the Hamiltonian for the problem, and investigate the geometry of the orbital positions r⃗_i in terms of the elements, in order to express the functional form of the Hamiltonian perturbation 𝐑=-λ⃗·(r⃗_1+r⃗_2+r⃗_3). Hamilton's equations are derived (<ref>-<ref>), and derivatives with respect to Delaunay variables are written down in terms of derivatives with respect to other elements E,e, and c_ζ,s_ζ. Finally, we considered the Hamiltonian perturbation theory of the problem, and learned that we need to proceed via a careful approach, seeing that our Hamiltonian is degenerate—not varying with all of the Delaunay momenta when δ=0.
There is much room for further work. The detailed work of the KAM theory analysis for this specific problem can now begin. Of particular interest is the Great Inequality of Jupiter and Saturn: to identify the term or terms that correspond to this perturbation that is unexpectedly large in both period and amplitude. The equations in two dimensions, in terms of the auxiliary vectors, are interesting in their own right, and might be amenable to a multiple-scales asymptotic approach, as may be the Hamiltonian equations themselves, KAM theory aside. Furthermore, the general-relativistic corrections to this work are of great interest. These would presumably be approached via the post-Newtonian formalism.
99
principia Newton, Isaac. Philosophiae Naturalis Principia Mathematica. 1687, London.
laplace Laplace, Pierre-Simon. Théorie de Jupiter et de Saturne. Memoire de l’Academie des Sciences de Paris 1788, 33-160.
wilson Wilson, Curtis. The Great Inequality of Jupiter and Saturn: From Kepler to Laplace. 1985, Springer-Verlag.
hill1 Hill, G.W. Notes on the Theories of Jupiter and Saturn. The Analyst 1881 8(2), 33-40.
hill2 Hill, G.W. On the Extension of Delaunay’s Method in the Lunar Theory to the General Problem of Planetary Motion. Trans. Amer. Math. Soc. 1900 1(2), 205-242.
quarles Musielak, Z E and Quarles, B. The Three-Body Problem. Rep. Prog. Phys. 2014 77 065901.
kam Arnold, V.I. Proof of a Theorem by A.N. Kolmogorov on the invariance of quasi-periodic motions under small perturbations of the Hamiltonian. Russ. Math. Survey 1963 18, 13-40.
poincare Poincaré, Henri. Méthodes Nouvelles de la Mécanique Céleste, vol 1-3. 1892-99, Paris: Gauthier-Villars.
chaos Feldman, David. Chaos and Dynamical Systems. 2019, Princeton Univ. Press. ISBN: 9780691161525
brouke_lass Brouke, R and Lass, H. A Note on Relative Motion in the General Three-Body Problem. Celestial Mechanics 1973 8(1), 5-10.
news1 Hunt, Katie and Strickland, Ashley. “Jupiter and Saturn's 'great conjunction' captured in stunning images.” CTVNews.ca, Dec. 22, 2020. <https://www.ctvnews.ca/sci-tech/jupiter-and-saturn-s-great-conjunction-captured-in-stunning-images-1.5241665>
news2 Byrd, Deborah and McClure, Bruce. “All you need to know: 2020’s great conjunction of Jupiter and Saturn.” EarthSky.org, Dec. 21, 2020. <https://earthsky.org/astronomy-essentials/great-jupiter-saturn-conjunction-dec-21-2020>
celletti Celletti, Alessandra. Perturbation Theory in Celestial Mechanics. 2007, obtained from <https://web.ma.utexas.edu/mp_arc/c/07/07-303.pdf>
morbidelli Morbidelli, Alessandro. Modern Celestial Mechanics. 2011, obtained from <https://www-n.oca.eu/morby/celmech.pdf>
elements Seidelmann, K.P., ed. Explanatory Supplement to the Astronomical Almanac. 1992 University Science Books, Mill Valley, California.
|
http://arxiv.org/abs/2307.05823v1 | 20230711220740 | On the association of secondary hairpin growth and surface pressure gradient for oscillating foils | [
"Suyash Verma",
"Muhammad Saif Ullah Khalid",
"Arman Hemmati"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
On the association of secondary hairpin growth and surface pressure gradient for oscillating foils
Suyash Verma, Muhammad Saif Ullah Khalid, Arman Hemmati
August 12, 2023
===========================================================================================
The correspondence of secondary spanwise structures and surface pressure gradient is numerically evaluated for a foil performing heaving and pitching motion at a range of phase offsets (90^∘ ≤ϕ≤ 270^∘) and reduced frequencies (0.32 ≤ St_c ≤ 0.56). The Reynolds number is Re = 8000. The wake is shown to be dominated by secondary hairpin-like structures that are formed due to an elliptic instability prompted by the paired primary and secondary leading edge vortex (LEV). The weaker secondary LEV undergoes a core deformation, resulting in streamwise vorticity outflux across the span of the foil, and hence the growth of hairpin-like structures. Evaluating pressure gradients on the surface of the foil reveals a unique fundamental measure to quantitatively characterize the growth of these coherent structures. Their dominant presence can be directly linked to the growth of the secondary LEV formed due to the large-scale interactions under localized adverse pressure gradients. These promote a streamwise flow compression in regions neighboring the primary LEV. This association is also consistent across a range of kinematics. Therefore, this correspondence provides a novel procedure to investigate the mechanisms involved in the formation of secondary structures in the wake of an oscillating foil.
Oscillating foil, wakes, secondary structures, instability, vortex dynamics, surface pressure
§ INTRODUCTION
Formation of vortices and evolution of complex wakes behind an oscillating foil has been the focus of many researchers in the fluid dynamics community. Their studies have enriched our understanding of efficient propulsive locomotion and flow control mechanisms employed by biological swimmers and flyers <cit.>. The influence of complex kinematics, such as coupled heaving and pitching oscillation, on the vortex enhancement and diffusion in the wake requires more detailed investigation <cit.>. These foils are largely considered as propulsors, bio-mimicking the motion of underwater swimmers and aerial flyers <cit.>. The mechanisms of LEV evolution in the wake of marine swimmers (e.g. batoid fish) and micro flyers are critical in developing propulsive techniques that relate to the reduction of noise propagation <cit.>. The associated instabilities of LEVs are further exploited to promote turbulence and early diffusion of large scale coherent structures <cit.>. Such mechanisms play a critical role in the advancement of technologies that particularly reduce wake hazards behind aircraft <cit.>.
Recent studies have also established an association between the growth of secondary hairpin-like vortex structures and the kinematics of foils undergoing combined heaving and pitching motion <cit.>. The transition from heave- to pitch-dominated kinematics coincides with changes in the flow mechanism that contributes towards the growth of hairpin-like secondary structures, or their absence <cit.>. At increased chord- and amplitude-based Strouhal numbers (St_c and St_A), the stronger deformation of primary LEVs was also discussed previously <cit.>, although an association with secondary structures is not entirely clear. In this study, we advance the fundamental knowledge about the formation of secondary hairpin-like structures at a range of kinematic parameters by evaluating the distribution of streamwise surface pressure gradients on an oscillating foil.
<cit.> provided a comprehensive three-dimensional wake evolution process for an infinite-span foil oscillating with a coupled heaving and pitching motion. The distribution of the coefficient of pressure (C_p) at low and high St_c reveals the onset of an elliptic instability mechanism for pairs of counter-rotating vortices of unequal strength <cit.>. At high St_c, dominant secondary hairpin-like structures are observed, whose origins are considered to be associated with core vorticity outflux <cit.> of dipole rollers shed in the wake. Recently, <cit.> reported novel findings related to the growth mechanisms and presence of secondary hairpin-like vortex arrangements at a constant St_c= 0.32. The phase offset (ϕ) between heaving and pitching motion varied in the range of 90^∘ to 270^∘. The profiles of C_p reveal the presence of a paired primary and secondary LEV roller arrangement, leading to an elliptic instability mechanism <cit.>, which thereby promotes an outflux of core vorticity from the weaker secondary LEV <cit.>. This subsequently leads to thin streamwise vorticity filaments, which ultimately extend to form an arrangement of hairpin-like secondary structures in the wake <cit.>. Similar assessments that employed pressure measurements on wings with finite aspect ratios <cit.> are largely focused on the dominant spanwise instability characteristics, e.g., the wavelength of undulating LEVs prior to their separation from the wings. The association of pressure distribution and the growth of secondary hairpin-like structures, however, remains unknown in the current literature.
In this study, we explain a fundamental association and quantified links between streamwise gradients of pressure, calculated on the boundary of the foil, and the formation of secondary hairpin-like vortical structures, which were recently discussed by <cit.>. The findings are extended to a range of kinematics in order to ensure a wider applicability of the association between evolution of secondary wake structures and pressure gradients on an oscillating foil.
§.§ Problem Description
The flow around an infinite-span (2D) foil with a maximum thickness (D) to chord length (c) ratio of D/c=0.1 is examined numerically for a range of chord-based (St_c = fc/U_∞ = 0.32 - 0.56) and amplitude-based Strouhal numbers (0.05 ≤ St_A≤ 0.4). <cit.> indicated that significant transitions in the wake of flapping foils were observable at 0.2 < St_A < 0.4. This also coincides with the range corresponding to the optimal propulsive efficiency in swimming mammals <cit.>. The cross section of the foil shown in Figure <ref> resembles a teardrop hydrofoil shape, which was used in recent experimental investigations <cit.>.
The Reynolds number is Re = U_∞ c/ν = 8000, which is consistent with previous studies in this area <cit.>, and agrees closely with the biological characteristics of swimming fish <cit.>. Here, U_∞ and ν represent the freestream velocity and kinematic viscosity of the fluid, respectively.
The kinematics of the foil is prescribed by a coupled heaving and pitching motion, where the pitch axis is located at approximately 0.05c from the leading edge. Figure <ref> marks the heave and pitch amplitudes as h_o and θ_o, respectively. The resultant trailing edge amplitude is also shown as A_T. The motion profiles of heave (h) and pitch (θ), where pitching has a phase advancement (or offset) of ϕ relative to heaving, are represented as h(t)=h_osin (2 π f t) and θ(t)=θ_osin (2 π f t+ϕ), respectively.
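For reference, a minimal Python sketch of the prescribed kinematics is given below (numpy assumed). The trailing-edge trace uses the 0.95c pitch arm implied by the stated pitch-axis location, and its sign convention is illustrative only.

import numpy as np

c, U_inf = 1.0, 1.0                        # chord and freestream speed (nondimensional)
St_c = 0.32                                # chord-based Strouhal number, St_c = f*c/U_inf
f = St_c * U_inf / c
h0, theta0, phi = 0.25 * c, np.deg2rad(10.0), np.deg2rad(90.0)
x_pitch = 0.05 * c                         # pitch-axis location from the leading edge

t = np.linspace(0.0, 1.0 / f, 500)         # one oscillation period
h = h0 * np.sin(2.0 * np.pi * f * t)
theta = theta0 * np.sin(2.0 * np.pi * f * t + phi)

y_te = h + (c - x_pitch) * np.sin(theta)   # approximate trailing-edge excursion
A_T = 0.5 * (y_te.max() - y_te.min())      # resultant trailing-edge amplitude
print(A_T / c)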
In order to present a broader association of secondary wake structures (hairpin-like), surface pressure gradient, and kinematics of the foil, we also vary the phase offset (ϕ) between heaving and pitching motion in the range of 90^∘ to 270^∘. This leads to changes in A_T relative to a fixed h_o/c (= 0.25) and θ_o (= 10^∘) <cit.>.
It is important to note that the most propulsively efficient phase offset corresponds to ϕ= 270^∘ in our study, following the reference coordinate system employed by <cit.>.
§.§ Computational Method
The continuity and Navier-Stokes equations are solved directly using OpenFOAM, which is a numerical package based on the finite-volume method. This platform is extensively used for simulating wake dynamics behind oscillating foils and panels <cit.>. Kinematics of the oscillatory foil is modeled using the Overset Grid Assembly (OGA) method, based on a stationary background grid and a moving overset grid that are merged for the simulation <cit.>. More details of the method can be found in <cit.>.
The computational domain is also presented in Figure <ref>, which highlights the C-type overset boundary containing the foil. At the inlet, a uniform fixed velocity (Dirichlet) is prescribed together with a zero normal gradient (Neumann) for pressure. At the outlet, a zero-gradient outflow boundary condition is applied <cit.>. The top and bottom walls are further prescribed a slip boundary condition that effectively models open-channel or free-surface flows, and closely resembles the experimental and computational conditions of <cit.> and <cit.>, respectively. At the boundary of the foil, a no-slip condition for velocity and a zero-gradient condition for pressure are ensured. The periodic boundary condition is further implemented on the side boundaries, coinciding with the spanwise extent of the foil. It provides an effective way to model flows over bodies with infinite spans without end or tip effects.
A spatial convergence analysis is completed at Re=8000, h_o/c=0.25, θ_o=15^∘, ϕ=270^∘ and St_c=0.67. This enables comparative evaluation of the numerical results with respect to the experiments of <cit.>. Table <ref> summarizes the grid convergence results involving three grids, Grid1, Grid2 and Grid3. The ratio (δ^*) of the minimum grid element size (Δ x) to the Kolmogorov scale (η) is kept approximately below 10 within the critical region near the foil (x < 2.5c), specifically for Grid2 and Grid3 (see Table <ref>). This region corresponds to the origin of the spanwise instability and secondary structures that emerge and grow with the wake evolution <cit.>. The relative error in the prediction of C_T (ϵ_T=|C_T_,exp-C_T| / C_T_,exp), calculated with respect to the experimental results of <cit.>, is below 5% for Grid2. Similarly, ϵ_L^rms (=|C_L,Grid3^rms-C_L^rms| / C_L,Grid3^rms), calculated with respect to the finest grid (Grid3), is below 0.1%. The corresponding experimental results for C_L are not yet available. This agreement in results provides sufficient confidence in Grid2 for our analysis. For more details on grid convergence, readers are referred to <cit.>.
The simulations are completed using Cedar and Narval high performance clusters, operated by Digital Research Alliance of Canada. The parallel decomposition and assignment of computational domain utilizes 96 CPUs with a total of 190 GB memory and 1440 simulation hours per case.
§ RESULTS & DISCUSSION
We begin with a brief discussion of the growth of secondary hairpin-like structures over the span of the foil for a range of kinematic conditions. It is followed by a detailed discussion of streamwise pressure gradients on the boundary of the foil and their association with the primary and secondary LEV structures. The implications of varying St_c and ϕ for this novel finding are further highlighted, which provides sufficient confidence in employing the streamwise pressure gradient as a unique quantitative measure to explain the evolution of secondary hairpin-like formations in the wake of an oscillating foil.
§.§ Formation of secondary hairpin-like structures
<cit.> recently discussed the mechanisms that contribute to the growth of secondary hairpin-like structures. Here, we provide only a brief description of the wake dynamics explored by <cit.> to set the stage for our main analysis. Across the range of increasing ϕ (90^∘≤ϕ≤ 270^∘), a paired primary and secondary LEV with unequal strengths is formed at St_c= 0.32, which leads to an elliptic instability of the vortex cores <cit.>. This subsequently promotes the outflux of vorticity from the weaker secondary LEV. The shear straining on account of the primary LEV further leads to the streamwise extension of thin hairpin-like filaments that subsequently form the dominant hairpin-like arrangement in the wake. As ϕ increases to 180^∘, however, the dominant hairpin-like arrangement fails to originate from an instability of the pair of primary and secondary LEV. Rather, the primary LEV paired with the TEV promotes the elliptic instability of the vortex cores. Hairpin-like structures are subsequently formed through a deformed TEV core, in contrast to the observations noted at ϕ= 90^∘. The wake at ϕ= 225^∘ and 270^∘ lacks any secondary structure formation, which is associated with the decreased strength of the corresponding primary LEVs <cit.>.
We further evaluate the formation of secondary hairpin-like structures at a similar range of ϕ, while increasing St_c from 0.32 to 0.56. The results (refer to the supplementary online material) confirm that at St_c > 0.48, and the entire range of ϕ examined here, the wake is characterized by the formation of secondary hairpin-like structures. These follow the mechanism of elliptic core instability triggered by a pair of primary and secondary LEV. However, for St_c ≤ 0.48 and at the onset of pitch-dominated kinematics (i.e. ϕ= 180^∘ - 270^∘), the wake either depicts a dominant hairpin-like formation, through the deformed TEV core, or a complete absence of them.
We expand on these wake dynamics by evaluating a unique quantitative measure, in terms of the streamwise pressure gradient, that enables us to understand and relate the mechanism of secondary hairpin-like growth to the transitioning kinematics of an oscillating foil.
§.§ Secondary structures and streamwise pressure gradients
We employ a span-averaged estimation of the coefficient of pressure (C_p = p/(0.5ρ U_∞^2)) over the boundary of the foil during a half oscillation cycle. The selected kinematic setting corresponds to ϕ= 90^∘ and St_c = 0.56, although the discussion and mechanisms remain consistent for the entire parameter space presented in Section <ref>. In order to provide a qualitative outlook on the formation of the vortices, Figure <ref>(a-c) provides an instantaneous snapshot of LEVs formed over the surface of the foil at first three quarters of the oscillation cycle (i.e. t^+ = 0, 0.25 and 0.5). The dominant vortical structures are identified using λ_2 criterion (λ_2^+ = -0.32) following the study of <cit.>. The arrangement of LEV_ac and LEV_c^s at t^+= 0 reflects a pair of unequal strength (Figure <ref>(a)), which subsequently triggers an elliptic instability of the vortex core <cit.>. The undulations on LEV_c^s are enhanced at t^+ = 0.25 in Figure <ref>(b), owing to the developed instability. At t^+ = 0.5 (Figure <ref>(c)), the growth of secondary hairpin-like structures is evident as a result of core vorticity outflux from the secondary LEV <cit.>. The instantaneous pressure signatures, corresponding to the LEVs identified in Figure <ref>, are averaged across the span and subsequently employed for calculating the streamwise pressure gradient (dp_w/dx).
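The post-processing step itself is straightforward; a minimal Python sketch is given below (numpy assumed). The array p_surf and the chordwise stations x_c are hypothetical placeholders for the sampled surface pressure, not the actual solver output format.

import numpy as np

rho, U_inf = 1.0, 1.0
n_span, n_chord = 64, 200
x_c = np.linspace(0.0, 1.0, n_chord)                 # x/c stations along the foil surface
p_surf = np.random.rand(n_span, n_chord)             # placeholder for sampled surface pressure

Cp = p_surf / (0.5 * rho * U_inf**2)                 # coefficient of pressure
Cp_span_avg = Cp.mean(axis=0)                        # span-averaged C_p
dpw_dx = np.gradient(p_surf.mean(axis=0), x_c)       # span-averaged streamwise pressure gradient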
In order to evaluate an association between dp_w/dx and the evolution of secondary hairpin-like structures through the secondary LEV <cit.>, we initially evaluate the observations relevant to the case exhibited in Figure <ref>. Figure <ref>(a-c) depicts the profiles of dp_w/dx at the first three quarter instants (t^+ = 0, 0.25 and 0.5) of the oscillation cycle, where the paired primary and secondary LEVs are dominant. The pressure minimum that coincides with the presence of the primary LEV is highlighted in Figure <ref>(a). The onset of the secondary LEV_c^s is also marked, which is characterized by a sharp change in the profile of dp_w/dx. <cit.> provided a quantitative interpretation with regard to the formation of the secondary re-circulation zone in the neighborhood of a thick vortex core. Particularly, the discussion highlights the contribution of large-scale interactions in the formation of re-circulation zones. These are associated with the streamwise compression of the flow due to the existence of localized adverse pressure gradients <cit.>.
Following the discussion of <cit.>, it is evident that a sudden rise from the minimum of dp_w/dx (coincident with LEV_ac) occurs towards a positive dp_w/dx. Hence, this region featuring a localized adverse pressure gradient is associated with the formation of LEV_c^s, similar to the observations highlighted by <cit.> with regard to the secondary re-circulation zone. The presence of LEV_c^s can also be observed at t^+= 0.25 in Figure <ref>(b). Further ahead in the oscillation cycle, at t^+= 0.5, a faded signature of LEV_c^s becomes evident, which also coincides with the formation of secondary hairpin-like structures through the outflux of core vorticity from the secondary LEV (see Figure <ref>(c)). This is in contrast to the observations of <cit.>, which indicate an increase in the magnitude of the drop in dp_w/dx corresponding to the secondary re-circulation zone. Hence, based on the above observations, it is reasonable to discuss a plausible association between the variations of dp_w/dx and the coincident transformation of the secondary LEV to hairpin-like vortices.
To address this association for a wider kinematic setting, evaluations are now discussed for different values of ϕ and St_c, within the range specified in Section <ref>. Figure <ref> exhibits the profiles of span-averaged dp_w/dx along the chord of the foil for ϕ= 90^∘ and increasing St_c from 0.32 to 0.56. The presented time instants correspond to t^+ = 0 and 0.25, respectively. The cases that feature the growth of secondary hairpin-like structures from the secondary LEV demonstrate a sharp change in dp_w/dx, as it increases from the local minima corresponding to the primary LEV. Further ahead in the cycle (t^+≥ 0.25), the changes diminish as secondary hairpin-like structures grow out of the secondary LEV. For kinematics characterized by heave-domination at ϕ= 90^∘, this characteristic association between secondary hairpin-like evolution and dp_w/dx is consistent in the range of increasing St_c.
Figure <ref>(a-c) shows the variations of span-averaged dp_w/dx for ϕ= 180^∘, 225^∘ and 270^∘, respectively. The observations for ϕ > 90^∘ demonstrate sharp changes in dp_w/dx only at St_c ≥ 0.48. This also coincides with the qualitative results similar to those presented in Figure <ref>, which highlight the evolution of secondary hairpin-like structures from the secondary LEV at similar kinematics. For ϕ= 180^∘ and 225^∘, observations at St_c < 0.48 suggest that the dominant secondary hairpin-like structures are associated with the instability triggered by the pair of LEV-TEV, rather than a pair of primary and secondary LEV. In order to further illustrate this mechanism, we look at the wake visualization in Figure <ref>(a-c). These plots qualitatively depict the formation of dominant secondary hairpin-like structures through a LEV-TEV pair at ϕ= 180^∘ and St_c= 0.32 <cit.>. The secondary LEV (marked as LEV1_c^s) at t^+ = 0.25 in Figure <ref>(b) appears much weaker and diffused, which only results in the growth of thin hairpin-like flow structures at t^+= 0.5 (see Figure <ref>(c)). These structures soon lose their coherence due to diffusion in the near wake (Figure <ref>(d)). The dominant secondary hairpin-like arrangement instead emerges from the core vorticity outflux of the TEV, as seen in Figure <ref>(d). The dominant hairpin-like arrangement in the wake in Figure <ref>(a) is thus formed on account of the LEV-TEV instability. We observe that the trends of increasing dp_w/dx for such cases also appear flatter and closer to zero, which reflects a low streamwise flow compression, and thus the absence of a strong secondary LEV.
To further understand the role of dp_w/dx in quantitative characterization of the evolution of secondary hairpin-like structures, we calculate the slope of dp_w/dx with respect to the streamwise distance along the foil. These estimates are further focused on a localized region that features the rise from a pressure minima (marked by blue dotted square in Figures <ref> and <ref>), coinciding with the primary LEV structures. It provides an indicative measure for the increasing streamwise flow compression as St_c increases, and hence expands on the association of dp_w/dx and the growth of secondary hairpin-like structures.
Figure <ref> demonstrates the variation in the slope of dp_w/dx for every kinematic setting considered in this study. Here, the slope is computed as the ratio of a_i and Δ X^+, where a_i represents the magnitude of the rise in dp_w/dx (marked in Figure <ref>), and Δ X^+ denotes the relative streamwise distance between the minimum and the local maximum following the rise of dp_w/dx. With increasing St_c, the slope of dp_w/dx increases in Figure <ref>, suggesting an environment of larger adverse pressure gradient and a stronger streamwise flow compression, which further coincides with a consistent secondary hairpin-like evolution from the secondary LEV once St_c exceeds 0.48. A minimal sketch of this slope estimate is given below.
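The following fragment illustrates one way to extract this slope from a discrete dp_w/dx profile. It is a sketch under the assumption that the profile is sampled on a monotonically increasing streamwise coordinate and that dp_w/dx rises after its minimum; the helper name dpdx_slope is ours.

```python
import numpy as np

def dpdx_slope(x, dpdx):
    """Slope a_i / ΔX⁺ of dp_w/dx between its minimum (primary LEV)
    and the first local maximum that follows it."""
    i_min = int(np.argmin(dpdx))
    # first index after the minimum where dp_w/dx stops increasing
    falling = np.where(np.diff(dpdx[i_min:]) < 0)[0]
    i_max = i_min + (int(falling[0]) if falling.size else len(dpdx) - 1 - i_min)
    a_i = dpdx[i_max] - dpdx[i_min]          # rise in dp_w/dx
    dX = x[i_max] - x[i_min]                 # relative streamwise distance ΔX⁺
    return a_i / dX
```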
The novel association described above presents a very useful quantitative tool for characterizing the changes in the mechanisms that govern the growth of secondary spanwise structures. These changes particularly coincide with the transition in the kinematics of an oscillating foil from heave-domination to the onset of pitch-domination. For example, <cit.> recently investigated and discussed such transitions, namely, Series A and Series B. The trends for dp_w/dx explained in this study accurately quantify the identification of these transitions. In particular, the increasing slope of dp_w/dx with increasing St_c coincides with the growth of secondary hairpin-like structures through a primary and secondary LEV pair (i.e. Mechanism “1" in <cit.>), which also coincides with the Series A transition at different ϕ <cit.>.
§ CONCLUSION
A fundamental association between the evolution of secondary structures, in the form of a spanwise hairpin-like arrangement, and the pressure gradient along the streamwise flow is computationally investigated for the case of infinite-span oscillating foils. Under large adverse gradients, a stronger streamwise flow compression promotes the formation of a dominant secondary LEV in the neighborhood of the paired primary LEV. The growth of the secondary LEV leads to an elliptic instability mechanism <cit.>, resulting in a core vorticity outflux from the secondary LEV, and a subsequent growth of secondary hairpin-like structures. The highlighted association is consistent across the range of increasing St_c and ϕ, and thus reflects a unique quantitative measure of the growth of secondary wake structures behind oscillating foils.
[Funding]This research has received support from the Canada First Research Excellence Grant. The computational analysis was completed using Compute Canada clusters.
[Declaration of Interests] The authors report no conflict of interest.
jfm
|
http://arxiv.org/abs/2307.04622v1 | 20230710150832 | Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations | [
"Ruchika Dhaka",
"Ranjeev Misra",
"JS Yadav",
"Pankaj Jain"
] | astro-ph.HE | [
"astro-ph.HE"
] |
firstpage–lastpage
Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations
Ruchika Dhaka, Ranjeev Misra, JS Yadav, Pankaj Jain
August 12, 2023
=========================================================================================================
In this work, we study the correlation between Quasi-periodic Oscillation (QPO) frequency and the spectral parameters during various X-ray states in the black hole binary GRS 1915+105 which matches well with the predicted relativistic dynamic frequency (i.e. the inverse of the sound crossing time) at the truncated radii. We have used broadband data of LAXPC and SXT instruments onboard AstroSat. Spectral fitting shows that the accretion rate varies from ∼ 0.1 to ∼ 5.0 × 10^18 gm/s and the truncated radius changing from the last stable orbit of an almost maximally spinning black hole, ∼ 1.2 to ∼ 19 Gravitational radii. For this wide range, the frequencies of the C-type QPO (2 - 6 Hz) follow the trend predicted by the relativistic dynamical frequency model and interestingly, the high-frequency QPO at ∼ 70 Hz also follows the same trend, suggesting they originate from the innermost stable circular orbit with the same mechanism as the more commonly observed C-type QPO. While the qualitative trend is as predicted, there are quantitative deviations between the data and the theory, and the possible reasons for these deviations are discussed.
accretion, accretion discs - black hole physics - stars: black holes - X-rays: binaries - relativistic processes
§ INTRODUCTION
The Black Hole X-ray Binary (BHXB) GRS 1915+105 was discovered on August 15, 1992, as a transient by the WATCH All-sky monitor onboard the Granat observatory. It was the first galactic object to show a superluminal jet <cit.>. The binary system contains a black hole of 12.4 solar masses <cit.>. This source is located at a distance D = 8.6 kpc <cit.> and its relativistic jets are directed at an angle i=70^∘ from the line of sight <cit.>. It is an outstanding source because of its huge variability <cit.>. This source is observed in 14 different X-ray classes, based on its X-ray flux, Color-Color Diagram (CCD) and hardness ratio <cit.>. Some of these classes are named ϕ, χ, θ, λ, ρ, etc. Among all the 14 different classes, the most observed class is χ. The χ class is the least variable class, and no large-amplitude, long-term X-ray flux variability has been observed in it. Most of the time since its discovery in 1992, GRS 1915+105 has been seen in bright X-ray states like the High Soft (HS) state and the High HIMS state (also called the Steep Power Law (SPL) state). The source has been in a decline phase since 2018 (the lower branch of the HIMS and the Low Hard State (LS)).
X-ray binaries exhibit variability on rapid time scales. Fourier analysis is often used to study fast variability and quasi-periodic oscillations (QPOs) by computing power-density spectra (PDS)<cit.>.
Numerous patterns have been observed in the PDS <cit.>, ranging from various types of broad-band noise to much narrower structures known as QPOs, which appear as sharp peaks in the power spectrum. QPOs with frequencies ranging from a few mHz to ∼70 Hz have been observed for the source GRS 1915+105 <cit.>.
The centroid frequencies of these QPOs during specific spectral states and transitions can be associated with physical processes occurring in these systems.
Typically, there are two types of QPOs. Low-frequency QPOs have a centroid frequency ≲ 30 Hz, whereas high-frequency QPOs have a centroid frequency ≳ 60 Hz (up to a few hundred hertz) <cit.>. Low-frequency QPOs are further subdivided into A, B, and C-type QPOs based on differences in power spectral properties and phase lag behavior, and they occur in various spectral states <cit.>.
However, the precise physical origin of QPOs in BHXBs is so far not well understood.
<cit.> have studied the dependence of QPO frequency f on the inner radius r of the truncated accretion disk. They found that f/Ṁ is well correlated with r, where Ṁ is the accretion rate. Remarkably, the relationship between the two is well described in terms of dynamical frequency arising due to normal modes of disk oscillations <cit.>.
The dynamical frequency is defined as the inverse of the sound crossing time (f_dyn∼ c_s(r)/r). The sound crossing time is the ratio of the truncation radius and the sound speed at the inner disc. According to the standard relativistic disc model proposed by <cit.>, the sound speed is dependent on several factors, including the mass accretion rate (Ṁ), spin, and inner radius (r) of the disc. This leads to the following formula for the dynamical frequency <cit.>:
f_dyn/Ṁ = N 8979 Hz (r/r_g)^-2.5(M/12.4 M_⊙)^-2× A^1 B^-2 D^-0.5 E^-0.5 L
where r_g = GM/c^2 is the gravitational radius, and r is the inner disc radii, N is a normalisation factor to take into account the assumptions made in the standard accretion disc theory.
The parameters A, B, D, E, and L are functions of the inner disc radii and the spin parameter described in <cit.> and <cit.>. All these parameters are important for small radii, r < 10 r_g. As a result, in this regime, the functional form of f_dyn considerably differs from its Newtonian dependence. Using spectral and timing analysis, one can determine the mass accretion rate, inner disc radii, and QPO frequency. Thus, the interpretation, and in particular Eqn <ref> can be verified with such an analysis. <cit.> did such an analysis using AstroSat observation data collected on 28 March 2016 and 1 April 2017 when GRS 1915+105 was in the low HIMS state (i.e., the lower horizontal track of HIMS). The source showed C-type QPOs in the frequency range of 3.5–5.4 Hz during the observation. A similar analysis was undertaken for Insight-HXMT observations of GRS 1915+105 when it exhibited low-frequency C-type QPOs <cit.>. For a wider range of QPO frequency, 2.6-4.3 Hz, and inferred accretion rate of 0.2-1.2× 10^18gm/s, they confirmed the results obtained by <cit.>.
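As a rough numerical illustration of Eq. (1), the scaling can be coded as below. The relativistic factors A, B, D, E and L are set to 1 here as placeholders (their full expressions are given in the appendix), the accretion rate is assumed to be expressed in the units implied by the 8979 Hz normalisation, and the function name is ours.

```python
def f_dyn_over_mdot(r_over_rg, M_solar=12.4, N=0.1,
                    A=1.0, B=1.0, D=1.0, E=1.0, L=1.0):
    """Dynamical frequency per unit accretion rate following Eq. (1).

    A, B, D, E, L default to 1 (placeholders); for r < 10 r_g they should
    be replaced by the relativistic expressions given in the appendix.
    """
    return (N * 8979.0 * r_over_rg**-2.5 * (M_solar / 12.4)**-2
            * A * B**-2 * D**-0.5 * E**-0.5 * L)

# e.g. a truncation radius of 5 r_g for the default mass and N = 0.1
print(f_dyn_over_mdot(5.0))
```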
Apart from these C-type QPOs, GRS 1915+105 also shows a QPO at ∼ 69 Hz, which is remarkable in having a nearly constant frequency <cit.>. This QPO has also been reported for AstroSat data, where it varied slightly from 67.4 to 72.3 Hz <cit.>.
In this paper, we perform an extensive spectro-temporal analysis of various X-ray states observed in GRS 1915+105 using AstroSat data. In GRS 1915+105, only one outburst (which started in 1992) has been observed so far, and it is still continuing. GRS 1915+105 has never been seen in the rising phase of an outburst. Our data include a low hard state (Obs. 7), which has never been reported before. The motivation here is to study the dependence of the QPO frequency on spectral parameters covering a wider range of inner disc radii, accretion rates and QPO frequencies.
In Section <ref> of this work, we describe observations and data reduction techniques using the LAXPC and SXT pipeline software. In Section <ref>, we explain the various analytical and modelling techniques used to analyse the temporal and spectral features of GRS 1915+105. In Section <ref> of the paper, we describe the outcomes of the study and draw conclusions based on those results.
§ OBSERVATION AND DATA REDUCTION
AstroSat is a multi-wavelength observatory launched for astronomical studies of various celestial objects in near and far UV, soft (0.3-80 keV) and hard (3-100 keV) X-rays <cit.>. It has four science payloads: 1) Soft X-ray Telescope (SXT) <cit.>, 2) Ultra-Violet Imaging Telescope (UVIT) <cit.>, 3) Cadmium Zinc Telluride Imager (CZTI) <cit.> and 4) the Large Area X-ray Proportional Counter (LAXPC) <cit.>. Large Area X-ray Proportional Counters (LAXPC) consist of three identical but independent PCUs (LAXPC 10, LAXPC20 and LAXPC30) with an effective area of 6000 cm^2 at 15 keV and has a time resolution of 10μs in the energy range 3.0-80.0 keV with the dead-time of about 42 μs <cit.>.
A simultaneous fit of SXT data along with LAXPC data provides a broadband spectrum of the source. We have analysed various observations with simultaneous data from SXT and LAXPC spanning over 1094 days starting from 3 March 2016. Out of all the AstroSat observations that we looked into, we picked out the ones that showed the presence of QPOs in their power density spectrum.
In our study, we have included only those observations when the source flux is more or less steady. GRS 1915+105 often shows strong flares when the flux can change by a factor of a few <cit.>. Such flaring situations are not included in this study.
All transient black hole binary outbursts are expected to trace a q-diagram. GRS 1915+105 has shown only one outburst so far, which began with its discovery on 15th August 1992 and is still not over, having lasted approximately 31 years. The rising phase of the outburst in GRS 1915+105 has never been observed. Our observations cover the period from 2016 to 2019, when the source remained mostly in luminous X-ray states. Thus our observations trace only part of the q-diagram, mostly the vertical left and bottom horizontal branches, and only partly when QPOs are present. Its variability is complex as the source stays in highly luminous X-ray states most of the time. We selected seven observations of four distinct states: the High Soft (HS) state, the Low HIMS state, the High HIMS state, and the Low Hard (LS) state.
The data used in this work consists of 7 different observations made on 3 March 2016 (Obs. 1), 25 April 2016 (Obs. 2), 27 April 2016 (Obs. 3), 28 March 2017 (Obs. 4), 1 April 2017 (Obs. 5), 15 April 2017 (Obs. 6), and 21 March 2019 (Obs. 7). Table <ref> presents the effective exposure time of LAXPC and SXT of the observations used in this study. The Burst Alert Telescope (SWIFT/BAT) Hard X-ray Transient Monitor and the Monitor of All-sky X-ray Imaging (MAXI) provide continuous coverage of GRS 1915+105 in soft and hard X-rays. To see the evolution of the source, we extract the MAXI flux in the energy range of 2–20 keV and the SWIFT/BAT flux in the energy range of 15–50 keV, as shown in Fig. <ref>. The SWIFT/BAT flux is scaled by 30 so that both X-ray band light curves of GRS 1915+105 starting from 13 January 2016 to 27 April 2019 can be seen clearly. The vertical lines in the figure represent AstroSat observations of the GRS 1915+105 source used for this study. The sequence of vertical lines in the light curve shown in Fig. <ref>
is identical to that presented in Table <ref>. Each observation was further divided into segments such that each segment was continuous without gaps.
The HID of GRS 1915+105, covering the period from 13 January 2016 (MJD 57400) to 27 April 2019 (MJD 58600), is illustrated in Fig.
<ref>, where the 2–20 keV MAXI flux is plotted against the X-ray colour (HR). The location of the source in the HID diagram broadly reflects the state of the system. Also marked in Fig. <ref> are the locations of the AstroSat observations. Obs. 2 and 3 correspond to the soft state, while the high flux of Obs. 1 shows that it is in the Hard Intermediate state (High HIMS). On the other hand, Obs. 4, 5 and 6 correspond to
the Low HIMS state. The data from Obs. 7 represents the Low Hard
state of the source.
§.§ SXT Data Reduction
Level 1 photon counting mode data of the SXT instrument was processed through the official
SXT pipeline AS1SXTLevel2 - 1.4b[https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.htmlhttps://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html] to produce Level 2 mode data. The Photon
Counting mode (PC mode) data were chosen for the analysis of all sets of observations
listed in Table <ref>.
Using Julia-based SXTevtmerger script[https://www.tifr.res.in/ astrosat_sxt/dataanalysis.htmlhttps://www.tifr.res.in/ astrosat_sxt/dataanalysis.html], we merged all the events belonging to
one set of observations into a single event file. The HEASoft (version 6.29) tool XSELECT was used to generate the spectrum, light curves and images. The response matrix file (RMF) “sxt_pc_mat_g0to12_RM.rmf,” standard background spectrum “SkyBkg_comb_EL3p5_Cl_Rd16p0_v01.pha” and ancillary response file (ARF) "sxt_pc_excl00_v04_20190608_mod_16oct21.arf" were used for the analysis. The sxtARFmodule[https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.htmlhttps://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html] provided by the SXT
instrument team was used to apply a correction for offset pointing. In order to implement simultaneous analysis, we ensured that the LAXPC 20 observations were available at the same Good Time Interval (GTI) as the SXT observation. Therefore, we
used the simultaneous data segments to generate light curves, images and spectrum of
GRS1915+105.
For the Obs. 4, Obs. 5, Obs. 6 and Obs. 7 observations (low X-ray flux states), there was no pile-up near the centre of the image due to the low flux (<40 counts per second, as mentioned in the AstroSat Handbook;[https://www.iucaa.in/ astrosat/AstroSat_handbook.pdfhttps://www.iucaa.in/ astrosat/AstroSat_handbook.pdf]). The average count rate in Obs. 1, Obs. 2 and Obs. 3 was 91.33 counts/sec, 84.25 counts/sec, and 90.00 counts/sec, respectively. Therefore, to account for the pile-up effect at the centre of the image caused by the high flux rate (∼ 1 Crab) of the source in the charge-coupled device (CCD), the inner radius of the circular annulus region was set to 2 arcmin.
§.§ LAXPC Data Reduction
Level 2 event files were extracted from Level 1 event mode data utilising the official LAXPC software version released on 04 Aug 2020[http://astrosat-ssc.iucaa.in/laxpcDatahttp://astrosat-ssc.iucaa.in/laxpcData].
LAXPC data was extracted to obtain the light curve and spectrum of the source<cit.>.
Details of the response matrix (RMF) and background spectrum generation for
proportional counters 10, 20, and 30, respectively, can be found in <cit.>.
Out of three LAXPC detectors (LAXPC 10, LAXPC20 and LAXPC 30), we used only LAXPC 20
data for energy spectral studies for all of the observations given in Table <ref>.
§ DATA ANALYSIS
§.§ X-ray lighcurve and Timing Analysis
We have produced background-subtracted light curves for four distinct observation types in the 4.0-50 keV energy range using LAXPC 20 data, binned at the minimum time resolution of the SXT, which is 2.378 seconds. The left panel of Fig. <ref> shows 800 sec long background-subtracted light curves for the HS state (Obs. 3), SPL state (Obs. 1), low HIMS (Obs. 4), and LH state (Obs. 7).
The right panel of Fig.<ref> shows 800 sec SXT light curves in the 0.3-8 keV energy range for the identical segments used to generate the LAXPC 20 background-subtracted light curves in the left panel.
In order to study the properties of QPOs, we analyse the data in the frequency regime by generating a Power Density Spectrum (PDS). The PDS were generated by dividing the lightcurve of each segment into parts and averaging the power spectra of each part. We used all three LAXPC detector units (LAXPC 10, LAXPC 20, and LAXPC 30) to plot the PDS for the HS state (Obs. 3, Seg. 2). To plot the PDS for the rest of the observations, we used the LAXPC 20 unit. The PDS for the HS state is shown in the upper left panel of Fig. <ref> in the frequency range 10-110 Hz and is modelled using several Lorentzian functions <cit.> and a
power-law component in order to account for very low frequency noise (VLFN). It shows an HFQPO at ∼ 70 Hz, while no QPO is seen in the lower frequency region.
Fig. <ref>, the upper right panel, shows the PDS of the low HIMS state (Obs. 4, Seg. 6) in the frequency range 0.1-20 Hz. The lower panels of Fig. <ref> show the PDS for the SPL state (left panel) and the LH state (right panel) for the Obs. 1 (Seg. 5) and Obs. 7 (Seg. 7), respectively.
The component of broad-band noise related to these three PDS (Obs. 4, Obs. 1, and Obs. 7) was modelled using only a few Lorentzians. The frequency of QPOs, along with errors, has been estimated and tabulated in the third column of Table <ref>. All three panels show LFQPOs along with their harmonics.
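The segment-averaged PDS described above can be sketched schematically as follows. This is a simplified illustration (unnormalised powers, uniform sampling, no dead-time or background treatment) and is not the LAXPC software pipeline itself.

```python
import numpy as np

def averaged_pds(lc, dt, seg_len):
    """Average the power spectra of equal-length segments of a light curve.

    lc      : 1D array of count rates sampled every dt seconds
    dt      : time resolution in seconds
    seg_len : number of time bins per segment
    """
    n_seg = len(lc) // seg_len
    powers = []
    for k in range(n_seg):
        seg = lc[k * seg_len:(k + 1) * seg_len]
        fft = np.fft.rfft(seg - seg.mean())
        powers.append(np.abs(fft) ** 2)            # unnormalised power
    freq = np.fft.rfftfreq(seg_len, d=dt)
    return freq[1:], np.mean(powers, axis=0)[1:]   # drop the zero-frequency bin
```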
§.§ Spectral Analysis
We have performed a simultaneous spectral fitting of SXT and LAXPC20 spectra using
XSPEC 12.12.0 in the broad energy range 1–50 keV (SXT: 1-5 keV and LAXPC20: 4-50
keV) for observation sets 4, 6 and 7 listed in Table <ref>. The high energy range above 50.0 keV has been ignored because of the low S/N (signal-to-noise) ratio. For the rest of the observation sets, we have used the combined SXT and LAXPC
energy range of 1.0-20.0 keV; during these observations the source spectrum is soft and the signal-to-noise ratio deteriorates rapidly above 20 keV. Energies below 1 keV were not considered in any of the observations due to uncertainties in the effective area and response of the SXT. The left panels of Fig. <ref> display the energy spectra of the HS state (Obs. 3, Seg. 2) in the top panel and the SPL state (Obs. 1) in the bottom panel, respectively, covering an energy range of 1-20 keV. The low HIMS and LH state spectra for Obs. 4 (Seg. 6) and Obs. 7 (Seg. 6), respectively, are shown in the right top and right bottom panels of Fig. <ref> in the energy range of 1-50 keV. A relative normalisation constant was used for the simultaneous fitting of LAXPC and SXT data.
As recommended by the LAXPC team, the 3% systematic error was incorporated for uncertainties in background estimation when fitting LAXPC and SXT data together <cit.>.
A gain correction was applied to the SXT data using the gain fit in XSPEC with slope fixed to 1, and the best-fit offset value was found to range from 0 to 35.68 eV.
SXT data were grouped with the ftgrouppha[https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/ftgrouppha.htmlhttps://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/ftgrouppha.html] tool of Ftools[https://heasarc.gsfc.nasa.gov/ftools/https://heasarc.gsfc.nasa.gov/ftools/]. There are several
ways for binning the input pha file data; we have done the optimal binning
using the ftgrouppha tool. The spectrum was fitted using a combination of models,
Constant*tbabs (kerrdisk+simpl*kerrd). The absorption by the Inter-Stellar Medium (ISM)
was taken into account with the TBabs model <cit.> implemented with the galactic absorption abundance. The hydrogen column density was kept fixed at 4 × 10^22cm^-2 for data sets of HIMS, SPL and LH states listed in Table <ref>, as there was no significant difference in the best-fit while keeping this parameter free <cit.>. N_h was
kept free for HS state data set and was found to vary from 4.47 × 10^22cm^-2 to 4.65 × 10^22cm^-2. The convolution model of comptonization “simpl” <cit.> was used to take into account the Comptonization of the disk photons in the inner flow. The simpl model processes any input spectrum and transforms a fraction f_sc of the source photons into a power-law distribution.
The inner radius of the disk and mass accretion rate was estimated from the best-fit values obtained from the relativistic disk model, “kerrd” <cit.>. The black hole mass, disk inclination angle, and distance to the source were fixed to 12.4M_⊙, 60^∘, and 8.6 kpc, respectively, <cit.>. The spectral hardening factor of kerrd was fixed to
1.7 <cit.>. For the kerrdisk model, the emissivity index for both the inner and the outer portions of the disk was fixed at 1.8 <cit.>. The rest-frame energy of the iron line was set at 6.4 keV <cit.>. As GRS 1915+105 is a highly spinning
black hole, we set the spin parameter for “kerrdisk” at 0.98 <cit.>. Keeping these parameters free does not significantly affect the best-fit values of other parameters. The break radius separating the inner and outer parts of the disk was fixed at 6 r_g (gravitational radii). The radius parameter in kerrd is measured in units of the gravitational radius (r_g), while for kerrdisk it is in units of the radius of marginal stability or innermost stable circular orbit (ISCO). Therefore, the inner radius in kerrdisk was normalised to that used for “kerrd” after dividing by a factor of 1.235. The scattered fraction parameter in the data from 3 March 2016 was not constrained; therefore, we set it to 0.6. For the HS state observation, the gamma and line emission flux parameters were not constrained; thus, we set them to 4.5 and 1 × 10^-2 photons cm^-2 s^-1, respectively. Table <ref> presents the best-fit values of the spectral parameters, including the absorption column density, inner disk radius, accretion rate, scattered fraction, photon index (gamma), and flux in the iron emission line.
§ RESULTS
An overview of the observations used in this work which includes the date of observation, X-ray flux, hardness ratio, X-ray state, QPO frequency, accretion rate, and the inner disk radius, is given in Table <ref>.
The X-ray flux observed in the LAXPC20 detector is presented in Column 2 of Table <ref>. The value of HR2 is shown in column 3, where HR2 is defined as the ratio of X-ray flux in the 13–60 keV to the 3-5 keV energy range. We observe that the hardness ratio continuously decreases as the source moves from the Low Hard (LH) state to the HS state via the SPL state and the low HIMS state. The accretion rate, shown in column 6 of Table <ref>, generally increases as energy spectra become softer. The accretion rate is highest during the SPL state and lowest during the LH state.
Columns 5 and 7 of Table <ref> list the range of QPO frequencies and the inner radii of the truncated disc for different observations.
Fig. <ref> shows the variation of QPO frequency with accretion rate (top left panel), with inner disc radius (top right panel), and the variation of accretion rate with the inner disc radius (bottom panel). While for some of the individual data sets (i.e. for observations taken during a particular spectral state, such as Obs. 4 and Obs. 1), correlations between these parameters are evident, there is, in general, no correlation seen when all the observations are considered.
Next, we consider the possibility that the QPO frequency may depend both on the accretion rate and the inner disc radius and, in particular, in the form suggested by Equation <ref>, i.e. the QPO frequency divided by the accretion rate depends on the inner disc radius, as was suggested by <cit.>. This is illustrated in Fig. <ref>, where the QPO frequency divided by the mass accretion rate is plotted against the inner radius of the accretion disc. In this case, a clear trend is visible for all the observations. The solid violet line in Fig. <ref> represents the best-fitted standard accretion disc model for Low Frequency QPOs (LFQPOs) with spin parameter 0.97 and normalisation constant 0.01 (earlier work; <cit.>, who used only low HIMS data). For all the data sets, we find that the relationship is consistent with that predicted by the dynamic frequency model (given in Equation 1 with a=0.999 and N=0.1). This is shown by the solid black line in Fig. 8. Note that the high spin value is already implied by the small inner radii of ∼ 1.2 R_g obtained from the spectral fitting. This work extends the earlier results to different spectral states and covers a large variation in accretion rate from 0.1 × 10^18 gm/s to 5.0 × 10^18 gm/s, with the truncated radius changing from the last stable orbit of a maximally spinning black hole, ∼ 1.2, to ∼ 19 gravitational radii. For this wide range, the frequencies of the C-type QPO follow the trend predicted by the relativistic model and, interestingly, the high frequency QPO at ∼ 70 Hz (which is an obvious outlier in the top panels of Fig. <ref>) also follows the same trend, suggesting a common origin. While the qualitative trend is as predicted, there are quantitative deviations, which we discuss in the next section.
We have so far studied the QPO frequency divided by Ṁ as a function of the inner disc radius, based on the interpretation that the QPO frequency is the dynamical one given by Equation <ref>. To generalise, we define a variable Y = QPO freq./(Ṁ^p) and check whether values of p other than unity would also represent the data, by checking if Y is correlated with the inner disc radius. The absolute magnitude of the Spearman rank correlation has a maximum of 0.99 for p ranging between 0.8 and 1.2. The variation of the Spearman rank correlation with p is plotted in Fig. <ref>. This figure shows that the correlation does not change significantly for p values within 0.8 to 1.2.
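A minimal sketch of this exponent scan is given below; it assumes arrays of QPO frequencies, accretion rates and inner radii are already in hand, and the helper name best_p is ours.

```python
import numpy as np
from scipy.stats import spearmanr

def best_p(f_qpo, mdot, r_in, p_grid=np.linspace(0.5, 1.5, 101)):
    """Scan the exponent p in Y = f_QPO / Mdot^p and return the p that
    maximises |Spearman rank correlation| between Y and the inner radius."""
    rho = np.array([abs(spearmanr(f_qpo / mdot**p, r_in)[0]) for p in p_grid])
    return p_grid[np.argmax(rho)], rho.max()
```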
§ DISCUSSION
In order to put the results of this work into perspective, it is necessary first to enumerate the various possible reasons why the data points in Fig. <ref> show some deviations from the predicted values. It has been assumed that the colour factor f is a constant =1.7. The colour factor depends on the local vertical radiative transfer in the disc and has been numerically obtained to be approximately 1.7 by <cit.> for black hole binaries. The radiative transfer depends on the vertical structure of the disc and on the fairly uncertain viscous energy dissipation as a function of height. Moreover, a corona on top of the disc and irradiation will also affect the colour factor. The effect of changing the colour factor is more prominent for observations with a larger inner truncated disc radius. For example, if the colour factor is increased to 2, the mass accretion rates and the inner radii of the accretion disk change only slightly for the soft state data collected on 25 April 2016 and 27 April 2016, i.e. the mass accretion rate changes from 1.95 ^+0.06_-0.02 to 1.93 ^+0.10_-0.048× 10^18 g/sec and the inner radius changes from 1.40 ^+0.42_-0.15 to 1.32 ^+0.62_-0.08 R_g. On the other hand, for the Low HIMS (15 Apr 2017), the accretion rate changes from 0.74 ^+0.07_-0.06 to 2.4 ^+0.3_-0.2× 10^18 g/sec while the inner radius changes from 4.6^+0.3_-0.3 to 9.6^+1.0_-0.3 R_g. An increase in the colour factor results in an increase in accretion rate and inner radii, making the HIMS points (Obs. 4, 5, 6) in Fig. <ref> move right and downwards. We have verified that if the colour factor is changed to 2, the predicted curve still matches the data points, but
the normalisation factor increases from 0.1 to 0.15. Note that we have also assumed that the colour factor is independent of the accretion rate and radii which may not be the case. Some of the deviations of the data points from the predicted values could be due to such dependence.
It should be emphasised that the theoretical formula for the dynamical frequency (Equation <ref>) is an order of magnitude estimate, the uncertainty of which is parameterised by the normalisation factor N. Thus, one may expect N to vary not only for different observations (with different accretion rates and inner disc radii) but also to vary with radius, leading to deviations when the data is compared with a constant N prediction. The theoretical prediction is based on the standard accretion disc, where the disc extends to the last stable orbit and is not truncated. The sound speed at a radius may differ when the disc is truncated at that radius compared to when it is not, and this difference may be a function of the accretion rate and radius. A related issue is the assumption of standard accretion disc theory that the viscous dissipation goes to zero at the last stable orbit, which is incorporated both in the form of Equation <ref> and in the spectral model kerrbb used in this work. This assumption forces the temperature (and hence the sound speed) to go to zero at the last stable orbit. However, this assumption may not correctly describe the system, and instead, the accretion flow should necessarily pass through a sonic point, which leads to deviations from the standard theory near the last stable orbit <cit.>. Apart from these theoretical considerations, another potential reason for the deviation between the data and the predicted values is that the source may not be in the steady state and may be in a variable state. Out of seven observations used in this work, the source
shows significant short-time variability (on the hour/orbital time scale) during three observations (3rd March 2016, 28th March 2017 and 1st April 2017 (Obs. 1, 4 & 5)) <cit.>, as reflected in Table 2. During these observations, the values of the QPO frequency, inner disk radii and Gamma clearly show a trend with time (for different orbits). Thus, the spectra averaged over the whole observation may not provide accurate values of the accretion rate and inner disc radius. Moreover, when the system is dynamic, it may not be correct to model the time-averaged spectra with a steady-state one, as assumed when we use a disc model like kerrbb. These three data sets show the largest deviations from the theory, as seen in Figure <ref>, since the disk was not in a steady state. The 15th April 2017 (Obs. 6) data support this argument: these data do not show any trend with time/orbit and fall in the middle of the points of Obs. 4 & 5 in Figure <ref> with little deviation (also see Table <ref>).
Given all the above-listed possibilities, which may cause the data points not to follow the theoretical predictions accurately, it is quite remarkable that the overall predicted trend is seen, for such a wide range of accretion rates, inner disc radii and QPO frequency. Indeed, as mentioned earlier, the general trend that for an empirical form of Y = f_QPO/Ṁ^p, the best anti-correlation with radii is obtained for p ∼ 1, indicates that the QPO frequency can be identified with dynamical one. It is also remarkable that the high frequency QPO at ∼ 70 Hz also follows the trend of the low frequency ones and the explanation for the observed high frequency is that for the high frequency QPO, the accretion rate is significantly higher and the inner radius close to the last stable orbit.
Interpreting the QPO frequency as the dynamic one, is an alternate explanation to the model where the QPO is due to the precession of the inner flow at the Lense-Thirring frequency <cit.>. In that interpretation, the QPO frequency is expected to be a function only of the truncation radius and not the accretion rate. Moreover, there is some evidence that the energy dependent properties of some of the QPOs vary with the inclination angle of the binary <cit.>, which would be more likely explained by a precessing inner flow. At present, this evidence is limited to a few sources due to the difficulty in estimating the inclination angle and energy dependent QPO properties. A more detailed theoretical analysis of the predicted inclination dependence of these two interpretations, along with better data, would be able to differentiate between them. Note that in the interpretation used in this work, the QPO frequency is not expected to depend on the inclination angle of the disc.
The wide-band spectral and rapid temporal capabilities of AstroSat and Insight-HXMT have shown that the frequencies of the C-type QPOs of GRS 1915+105 can be identified with general relativistic dynamical ones. In this work, we extend the results using AstroSat to a broader range of accretion rates and inner radii and have shown that the high frequency QPO may also be of a similar origin. The work needs to be extended to other observations of GRS 1915+105 and other black hole systems. Apart from AstroSat and Insight-HXMT observations, such work can also be done by NICER, with perhaps high energy spectral coverage from simultaneous NuSTAR data. Such a systematic and multi-observatory study will give a clearer picture of the origin of the QPO phenomenon in black hole systems.
§ ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewer for his or her insightful remarks and suggestions that considerably enhanced the quality of the manuscript.
This work has used the data from the Soft X-ray Telescope (SXT) developed at TIFR Mumbai. And the SXT POC at TIFR is acknowledged for verifying and releasing the data through the Indian Space Science Data Centre (ISSDC) and providing the required software tools.
We would also like to thank the LAXPC POC and SXT POC teams for their support. In addition, this study utilised the Monitor of All-sky X-ray Image (MAXI) and SWIFT/BAT data provided by the MAXI and BAT teams.
This research has used the software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA.
§ DATA AVAILABILITY
The software and packages utilised for data analysis are available at NASA’s HEASARC website (<https://heasarc.gsfc.nasa.gov/docs/software/heasoft/patch.html>). The data used in this article are available at the AstroSat-ISSDC website
(<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>), Maxi website (<http://maxi.riken.jp/top/index.html>) and the Swift/BAT observations from NASA’s SWIFT website
(<https://swift.gsfc.nasa.gov/results/transients/>).
mnras
§ APPENDIX
The relativistic correction parameters A, B, D, E, and L <cit.> which have been used to derive equation 1 are as follows.
A=1+a_*^2x^-4+2a_*^2x^-6
B=1+a_*x^-3
D=1-2x^-2+a_*^2x^-4
E=1+4a_*^2x^-4-4a_*^2x^-6+3a_*^4x^-8
L=3/2M 1/x^2(x^3-3x+2a_*)[x-x_0-3/2a_*ln(x/x_0)
-3(x_1-a_*)^2/x_1(x_1-x_2)(x_1-x_3)ln(x-x_1/x_0-x_1)
-3(x_2-a_*)^2/x_2(x_2-x_1)(x_2-x_3)ln(x-x_2/x_0-x_2)
-3(x_3-a_*)^2/x_3(x_3-x_1)(x_3-x_2)ln(x-x_3/x_0-x_3)]
where x=√(r/M) and a_* is the spin parameter in parameters A, B, D, E, L. Here,
x_1=2cos(1/3cos^-1a_*-π/3)
x_2=2cos(1/3cos^-1a_*+π/3)
x_3=-2cos(1/3cos^-1a_*)
x_0={3+Z_2-sgn(a_*)[(3-Z_1)(3+Z_1+2Z_2)]^1/2}^1/2
where Z_1=1+(1-a_*^2)^1/3[(1+a_*)^1/3+(1-a_*)^1/3] and Z_2=(3a_*^2+Z_1^2)^1/2
f=3/2M1/x^2(x^3-3x+2a_*)[x-x_0-3/2a_*ln(x/x_0)
-3(x_1-a_*)^2/x_1(x_1-x_2)(x_1-x_3)lnx-x_1/x_0-x_1 -3(x_2-a_*)^2/x_2(x_2-x_1)(x_2-x_3)lnx-x_2/x_0-x_2
-3(x_3-a_*)^2/x_3(x_3-x_1)(x_3-x_2)lnx-x_3/x_0-x_3]
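For numerical evaluation of these parameters, the auxiliary roots x_0, x_1, x_2, x_3 defined above can be computed as in the following sketch (the function name is ours; sgn(a_*) is implemented with copysign, which returns +1 at a_*=0):

```python
from math import acos, cos, copysign, pi, sqrt

def isco_roots(a_star):
    """Auxiliary quantities x_0 = sqrt(r_isco/M) and x_1, x_2, x_3 that enter
    the relativistic correction parameters A, B, D, E and L."""
    x1 = 2.0 * cos(acos(a_star) / 3.0 - pi / 3.0)
    x2 = 2.0 * cos(acos(a_star) / 3.0 + pi / 3.0)
    x3 = -2.0 * cos(acos(a_star) / 3.0)
    Z1 = 1.0 + (1.0 - a_star**2) ** (1.0 / 3.0) * ((1.0 + a_star) ** (1.0 / 3.0)
                                                   + (1.0 - a_star) ** (1.0 / 3.0))
    Z2 = sqrt(3.0 * a_star**2 + Z1**2)
    x0 = sqrt(3.0 + Z2 - copysign(1.0, a_star)
              * sqrt((3.0 - Z1) * (3.0 + Z1 + 2.0 * Z2)))
    return x0, x1, x2, x3
```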
|
http://arxiv.org/abs/2307.04459v1 | 20230710101228 | Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG | [
"Qian Li",
"Yu Zhang",
"Qi-Quan Li",
"Qi Sun"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author) Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
In this work, we study the thermal fluctuation, deflection angle and greybody factor of the high-dimensional Schwarzschild black hole in scalar-tensor-vector gravity (STVG). Based on the correction of the black hole entropy due to thermal fluctuation, we calculate some thermodynamic quantities associated with the correction of the black hole entropy. The influence of the first-order and second-order corrections, spacetime dimensionality and STVG parameters on these thermodynamic quantities is discussed in detail. Additionally, by utilizing the Gauss-Bonnet theorem, the deflection angle is obtained in the weak field limit and the effect of the two parameters on the results is visualized. Finally, we calculate the bounds on the greybody factors of a massless scalar field.
Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG
Qi Sun
August 12, 2023
================================================================================================================
§ INTRODUCTION
Although Einstein's general relativity is one of the most successful and well-established gravitational theories in modern physics, it fails to explain many observational results, such as the present stage of cosmic acceleration <cit.>, the rotation curves of galaxies <cit.> and some cosmological data <cit.>. Moreover, general relativity has inherent deficiencies in the theory, such as the presence of spacetime singularities. Therefore, the problems of general relativity motivate the study of alternative gravity theories. One of these modified gravity theories is the scalar-tensor-vector gravity (STVG) proposed by Moffat <cit.>, which is based on an action principle and is formulated in terms of the metric tensor, three scalar fields and a massive vector field. Moffat gave the black hole solution in STVG in another paper <cit.>. Moreover, this modified gravity (MOG), i.e., STVG, may be considered an alternative to the dark matter problem, which can instead be solved by changes in the gravity sector. STVG is able to fit the rotation curves of galaxies <cit.> without invoking dark matter, while showing no deviation from solar system observational tests. However, Jamali and his colleagues <cit.> found that a modified version of STVG, known as mMOG, cannot be deemed an alternative to the dark matter problem when new constants are introduced in the kinetic term of the scalar field as its coefficients.
Interest in the physical properties of high-dimensional black holes has increased significantly, even though, in contrast to four-dimensional black holes, they have not been directly observed or experimentally supported. This has a lot to do with the development of string theory. In addition, the theoretical importance of higher dimensional black hole solutions was highlighted by Emparan and Reall <cit.>. Tangherlini <cit.> first proposed the solutions of the Schwarzschild and Reissner-Nordström black holes in D-dimensional spacetime. Later, Myers et al. obtained the Kerr black hole solution in high dimensional spacetime in Ref. <cit.>. Recently, Cai et al. <cit.> derived a high-dimensional static spherically symmetric Schwarzschild black hole in STVG, which is a high dimensional extension of the STVG theory, and studied its quasinormal modes for a massless scalar field and its black hole shadow. This black hole solution is a link between Einstein's theory and the STVG theory. Specifically, this black hole degenerates to the Schwarzschild-Tangherlini black hole in Einstein's theory when the STVG coupling parameter is zero.
The black hole entropy is proportional to the area of the event horizon of the black hole, known as the Bekenstein-Hawking formula <cit.>. A black hole has the maximum entropy among objects of the same volume; otherwise the second law of black hole thermodynamics would be violated. However, due to thermal fluctuations, which are connected with the concept of the holographic principle <cit.>, the maximum entropy of black holes may be corrected. The correction term for the maximum entropy is generated by the quantum fluctuations in the spacetime geometry rather than by the matter fields in the spacetime. For large black holes, quantum fluctuations are negligible. When the size of the black hole reduces due to Hawking radiation, however, the quantum fluctuations in the spacetime geometry will increase. Thus, there is a logarithmic correction at leading order in the black hole entropy <cit.>. Upadhyay investigated the effect of thermal fluctuations on a quasitopological black hole and found that the negative correction term leads to a local instability of black holes <cit.>. The influence of logarithmic corrections on the thermodynamics due to thermal fluctuations for dilaton black holes in gravity's rainbow has been studied in Ref. <cit.>. Several works are devoted to studying the effects of thermal fluctuations on black hole thermodynamics <cit.>.
Hawking showed that black holes are not completely black objects and can emit radiation, known as Hawking radiation <cit.>. This laid an important foundation for understanding the thermodynamics of black holes. The Hawking radiation detected at infinity differs by a multiplicative factor, called the greybody factor, from the original radiation emitted at the black hole horizon. The greybody factor, which derives from the transmission amplitude, can provide information related to the quantum nature of the black hole <cit.>. There are several methods to calculate the greybody factor, such as the bounds on greybody factors <cit.>, the WKB method <cit.> and the exact numerical approach <cit.>. In this paper, we choose the bounds on the greybody factor because they provide analytical results for intermediate frequencies and all angular momenta.
When a light ray encounters a dense compact object on its trajectory toward a distant observer, the observer finds that the light ray has been deflected by some angle. That is to say, the compact object bends the light ray, which gives rise to gravitational lensing. Gravitational lensing, which can be classified into strong, weak and micro gravitational lensing, is therefore used as a special astronomical tool to check whether the theory of general relativity is correct. Concretely, strong gravitational lensing is used to calculate the magnification and position of black hole images. Weak gravitational lensing can help us to measure the masses of different objects or to constrain cosmological parameters. In addition, weak gravitational lensing also has an important effect on the cosmic microwave background <cit.>. At present, strong or weak gravitational lensing by compact objects, such as wormholes, black holes and cosmic strings, has been widely considered <cit.>. Part of the work in the above literature is based on the Gauss-Bonnet theorem to calculate the deflection angle for weak gravitational lensing. The Gauss-Bonnet approach, proposed by Gibbons and Werner <cit.> in 2008, was used to derive the deflection angle for the first time in the context of optical geometry. Since then, this method has been applied to the weak deflection angles of different black holes <cit.>. We will also study the weak gravitational lensing of a high-dimensional Schwarzschild spacetime in STVG by using the Gauss-Bonnet theorem.
Motivated by the above, the purpose of the paper is to study the thermal fluctuation, weak deflection and grey-body factor of the high-dimensional Schwarzschild black hole in STVG. The present paper is structured as follows. In section <ref>, we briefly introduce a high-dimensional Schwarzschild black hole solution in STVG. Then, we review the physical features of this black hole. In section <ref>, we study the corrected thermodynamic quantities due to thermal fluctuation. Section <ref> is devoted to calculating the weak deflection angle using Gauss-Bonnet theorem. We discuss the bounds on greybody factors in section <ref>. In the last section, our conclusions are summarized.
Throughout this paper, the natural system of units (G_N=ħ=c=1) is adopted.
§ FUNDAMENTAL SPACETIME
In this section, we introduce the high-dimensional Schwarzschild spacetime in
scalar-tensor-vector gravity (STVG) and briefly review some of its thermodynamic properties. The general action of the STVG theory in D-dimensional spacetime takes the form <cit.>
S_L=S_GR+S_ϕ+S_S+S_M,
where
S_ GR=1/16π∫ d^Dx√(-g)1/GR,
S_ϕ=-1/4π∫ d^Dx√(-g)(K-1/2μ̃^2ϕ ^μϕ _μ),
S_S =∫ d^D x √(-g)[1/G^3(1/2 g^μν∇_μ G ∇_ν G-V_G(G)) +1/μ̃^2 G(1/2 g^μν∇_μμ̃∇_νμ̃-V_μ̃(μ̃))],
here S_GR is the Einstein-Hilbert action, S_ϕ stands for the action of a massive vector field ϕ^μ, S_S denotes the action of the scalar field and S_M represents the matter action. The black hole metric in the D-dimensional spacetime has the following form
ds^2=-f(r)dt^2+dr^2/f(r)+r^2dΩ ^2_D-2,
with the line element f(r) being <cit.>
f(r)=1-m/r^D-3+Gq^2/r^2(D-3),
where G is the Newton's gravitational constant, G=G_N(1+a). And m and q are defined by
m≡16π GM/(D-2)Ω _D-2, q≡8π√(a G_N)M/√(2(D-2)(D-3))Ω _D-2,
where the dimensionless parameter a is regarded as a deviation of the STVG theory from standard general relativity and M is the black hole mass. Moreover, Ω_D-2, denoting the volume of the unit (D-2)-dimensional sphere, has the form
Ω_D-2=2π^D-1/2/Γ (D-1/2).
When the dimensionless parameter a vanishes, we recover the Schwarzschild-Tangherlini black hole of Einstein's gravity. Moffat gave the Schwarzschild black hole in STVG for the case D=4 <cit.>. Moreover, one can see from the metric that there is a similarity between a high-dimensional Schwarzschild black hole in STVG and a high-dimensional Reissner-Nordström black hole in Einstein gravity <cit.>. The high-dimensional Schwarzschild STVG black hole possesses up to two horizons
r_±=(m/2±√(m^2-4Gq^2)/2)^{1/(D-3)},
where r_- and r_+ represent the Cauchy horizon and the event horizon, respectively. But Mureika et al. <cit.> pointed out that the Schwarzschild black hole in STVG, i.e., MOG black hole, relies only on the mass M and dimensionless parameter a. So q is called the gravitational charge rather than charge.
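As a quick numerical check, the two horizon radii can be evaluated directly from the mass and the STVG parameter. The sketch below uses natural units (G_N = 1) as adopted in this paper, and the function name horizons is ours.

```python
from math import gamma, pi, sqrt

G_N = 1.0  # natural units, as adopted in this paper

def horizons(M, a, D):
    """Outer (r_+) and inner (r_-) horizon radii from f(r) = 0,
    using the definitions of m and q given above."""
    G = G_N * (1.0 + a)
    Omega = 2.0 * pi ** ((D - 1) / 2.0) / gamma((D - 1) / 2.0)
    m = 16.0 * pi * G * M / ((D - 2) * Omega)
    q = 8.0 * pi * sqrt(a * G_N) * M / (sqrt(2.0 * (D - 2) * (D - 3)) * Omega)
    disc = sqrt(m**2 - 4.0 * G * q**2)
    r_plus = ((m + disc) / 2.0) ** (1.0 / (D - 3))
    r_minus = ((m - disc) / 2.0) ** (1.0 / (D - 3))
    return r_plus, r_minus
```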
The black hole mass in terms of r_+ has the form
M=r_+^D-3(A-√(A^2-4 G B^2))/2 G B^2,
where the coefficients A and B are expressed as
A≡16 π G/(D-2) Ω_D-2, B≡8 π√(a G_ N)/√(2(D-2) (D-3))Ω_D-2.
The Hawking temperature is given by
T_ H=1/4πdf(r)/dr|_r=r_+ =(D-3)(A √(A^2-4 G B^2 )-A^2+4 G B^2 )/8 π G B^2r_ +.
Also, the Bekenstein-Hawking entropy of this high-dimensional black hole, S_0, is given by
S_0= Ω_D-2 r_+^D-2/4.
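The Hawking temperature and the Bekenstein-Hawking entropy above can likewise be evaluated numerically. The following sketch (with our own helper name) assumes a > 0, since the a → 0 limit has to be taken analytically and gives the Schwarzschild-Tangherlini value (D-3)/(4π r_+).

```python
from math import gamma, pi, sqrt

G_N = 1.0  # natural units

def hawking_T_and_S0(r_plus, a, D):
    """Hawking temperature and Bekenstein-Hawking entropy at horizon radius r_plus."""
    G = G_N * (1.0 + a)
    Omega = 2.0 * pi ** ((D - 1) / 2.0) / gamma((D - 1) / 2.0)
    A = 16.0 * pi * G / ((D - 2) * Omega)
    B = 8.0 * pi * sqrt(a * G_N) / (sqrt(2.0 * (D - 2) * (D - 3)) * Omega)
    T_H = (D - 3) * (A * sqrt(A**2 - 4.0 * G * B**2) - A**2 + 4.0 * G * B**2) \
          / (8.0 * pi * G * B**2 * r_plus)
    S_0 = Omega * r_plus ** (D - 2) / 4.0
    return T_H, S_0
```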
§ THERMAL FLUCTUATIONS
In this section, we investigate the influence of thermal fluctuations on the thermodynamic potentials of a high-dimensional Schwarzschild black hole in STVG. First of all, we briefly introduce the thermal fluctuation and then calculate some important modified thermodynamic quantities.
We cannot neglect the influence of thermal fluctuations on the black hole thermodynamics when the radius of the black hole decreases and the temperature of the black hole is large. The thermal fluctuation can be regarded as a perturbation around the equilibrium state if it is small enough. Using the partition function approach, a general expression for the corrected entropy-area relation is written as <cit.>
S=S_0 -αln(S_0T^2)+ λ/S_0,
where α is the leading order correction parameter and λ is the second order correction parameter. The leading order correction is a logarithmic term caused by the thermal fluctuations, and the second order correction proportional to the inverse to uncorrected entropy is produced by extending the entropy function around the equilibrium.
Using Eqs. (<ref>) and (<ref>), the corrected entropy of this high-dimension black hole is given as
S =1/4r_+^D-2Ω_D-2 + 4λ/(r_+^{D-2}Ω_D-2)-αln[(D-3)^2(A^2-4GB^2)(A-√(A^2-4GB^2))^2 r_+^D-4Ω_D-2/256G^2B^4π^2].
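Rather than re-deriving the expanded expression, the corrected entropy can also be obtained directly from its defining relation S = S_0 - α ln(S_0 T^2) + λ/S_0, reusing the temperature/entropy helper sketched above (again, the function name is ours):

```python
import numpy as np

def corrected_entropy(r_plus, a, D, alpha=0.0, lam=0.0):
    """Thermally corrected entropy S = S_0 - alpha*ln(S_0*T^2) + lam/S_0."""
    T_H, S_0 = hawking_T_and_S0(r_plus, a, D)
    return S_0 - alpha * np.log(S_0 * T_H**2) + lam / S_0
```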
We draw the corrected entropy versus the event horizon radius for different parameters in Figs.<ref> and <ref>. As shown in Fig.<ref>, the presence of the leading order correction leads to an increase in entropy for small values of the event horizon radius. However, the corrected entropy gradually decreases and recovers to the original entropy with the increase of the event horizon radius. This means that the equilibrium of the small black hole is unstable due to Δ S >0 when the black hole is regarded as an isolated system. The right figure in Fig. <ref> shows that the inverse correction term has a significant influence on the entropy for a small black hole. In fact, compared to a large black hole, the thermal fluctuation has a greater impact on a small black hole. We also show the effect of spacetime dimensionality on the corrected entropy in the left figure of Fig.<ref>. We find that the change of the corrected entropy is not only fast but also large in high-dimensional spacetime. One can thus see that, for a small or large black hole, the higher the dimension, the larger the corrected entropy, whereas this is not the case for intermediate black holes. We also obtain from the left figure in Fig. <ref> that the STVG parameter a leads to a slight increase in the corrected entropy.
We can calculate the Helmholtz free energy using the corrected entropy and temperature as
F =-∫ S d T = (D-3)√(A^2-4 G B^2)(A-√(A^2-4 G B^2))/8 G B^2π
×(-4λ/((D-1)r_+^{D-1}Ω_D-2) + r_+^D-3Ω_D-2/4(D-3)+ α/r_+( D-4 + ln[ (D-3)^2(A^2-4 G B^2)(A-√(A^2-4 G B^2))^2 r_+^D-4Ω_D-2/256 G^2 B^4π^2])).
In order to have a better understanding of the corrected Helmholtz free energy, we plot the Helmholtz free energy in terms of the event horizon for the different parameters α, λ, D, a in Figs.<ref> and <ref>. In Fig.<ref>, we can see that the Helmholtz free energy without any corrections is a monotonically increasing function that remains positive. It is worth noting that the Helmholtz free energy becomes negative for a small black hole under the thermal fluctuation but returns to positive values with the increase of the event horizon radius. In contrast to the case of the small black hole, the presence of the logarithmic correction term increases the Helmholtz free energy for the larger black hole. We can conclude that thermal fluctuation causes small black holes to be more stable. In addition, we also obtain from the left panel of Fig.<ref> that the impact of the spacetime dimension on the modified Helmholtz free energy is similar to that of the logarithmic correction. We can see the effect of the parameter a on the corrected Helmholtz free energy in the right panel of Fig.<ref>. It is clear that the parameter a decreases the corrected Helmholtz free energy.
The internal energy, as one of the thermodynamic quantities, satisfies the thermodynamic relation E=F+TS, i.e.,
E =(4GB^2+A(√(A^2-4GB^2)-A)) r_+^{-(D+3)}/(32 π (D-1) G B^2Ω_D-2)
×(16(D-3)(D-2)r_+^4λ+(D-1)r_+^DΩ_D-2(4(D-4)(D-3)r_+^2α+(D-2)r_+^DΩ_D-2)).
Figs.<ref> and <ref> present the behavior of the corrected internal energy with increasing event horizon radius for the different parameters α, λ, D, a. As shown in Fig.<ref>, the internal energy has a positive asymptotic value under thermal fluctuation for a small black hole, whereas the effect of thermal fluctuation can be neglected when the event horizon radius increases. We can see clearly that the higher the dimensionality of the black hole, the larger the corrected internal energy. However, the corrected internal energy decreases with the increase of the STVG parameter.
Next, we investigate the heat capacity of the black hole, which can be written as C=(dU/ dT)_V=(d U/ dr)/ (dT/dr) using Eqs. (<ref>) and (<ref>), concretely
C=(D-4)α+4(D-2)r_+^D-2λ/Ω_D-2-1/4(D-2)r_+^D-2Ω_D-2.
We plot the behavior of the heat capacity in Figs.<ref> and <ref>. In Fig.<ref>, we observe that without any thermal fluctuations the heat capacity is negative and thus the black hole is thermodynamically unstable. Thermal fluctuations give small black holes a positive heat capacity, so there is a phase transition at which the system passes from stable to unstable. Moreover, the critical point gradually moves to the right as we increase the correction coefficients α, λ. From Fig.<ref>, we can see that the phase transition occurs at a larger event horizon radius as the spacetime dimensionality D increases. It is worth mentioning that the heat capacity of a high-dimensional Schwarzschild black hole in STVG reduces to that of the Schwarzschild-Tangherlini black hole. That is to say, the STVG parameter does not affect the stability conditions of the black hole.
§ WEAK DEFLECTION ANGLE
In this section, we obtain the deflection angle in the weak field limit using the Gauss-Bonnet theorem. On the equatorial plane θ =π/2 and for null geodesics ds^2=0, the corresponding optical metric of a high-dimensional Schwarzschild black hole in STVG has the following form
dt^2=1/f^2(r)dr^2+r^2/f(r)dφ^2.
Afterwards, we can rewrite the optical metric using the coordinate transformation dr_*=1/f(r)dr as
dt^2= dr_*^2+ f̃^2(r_*)dφ^2,
where f̃(r_*)≡√(r^2/f(r)).
We obtain the Gaussian optical curvature as follows <cit.>
K =RicciScalar/2 =1/4(D-3)r^1-4D(4(D-2)G^2q^4r^9-2(D-2)Mr^3D
-6(D-2)Gq^2r^6+D+((D-1)M^2+4(2D-5)Gq^2)r^3+2D).
Now we can calculate the deflection angle using the Gauss-Bonnet theorem <cit.>. Let the domain D be a subset of a compact, oriented surface with Gaussian optical curvature K and Euler characteristic χ(D), and let ∂D be the piecewise smooth boundary of D with geodesic curvature κ. We denote by α_i the i^th exterior angle. The Gauss-Bonnet theorem states that
∫∫_DKdS+∫_∂Dκd t+∑_iα_i=2πχ(D),
where dS stands for the surface element. The geodesic curvature κ along a smooth curve γ is defined as κ=g(Δ_γ̇γ̇,γ̈), where γ̈ denotes the unit acceleration vector. We take D to be bounded by the geodesic γ_c and the curve γ_R, with γ_R perpendicular to γ_c at the source S and at the observer O, so that κ (γ_c)=0 by definition. Then ∑_iα_i=α_S+α_O and χ(D)=1, and Eq.(<ref>) reduces to
∫∫_DKdS+∫_γ_Rκ (γ_R)d t =π.
Utilizing the definition of geodesic curvature, the radial part of κ (γ_R) can be expressed as
κ (γ_R)= (Δ_γ̇_Rγ̇_R)^r=γ̇_R^φ(∂_φγ̇_R^r)+Γ_φφ^r(γ̇_R^φ)^2,
where γ̇_R represents the tangent vector of the curve γ_R and Γ_φφ^r is a Christoffel symbol. When we consider γ_R:=R=const, the first term on the right-hand side vanishes and the second term equals 1/R. So κ (γ_R) reduces to 1/R.
Along γ_R the optical metric (<ref>) allows us to change variables via dt = R dφ.
Eq.(<ref>) becomes
∫∫_DKdS+∫_0^π+αdφ =π.
Finally, we obtain the deflection angle <cit.>
α̂=-∫_0^π∫_b/ sinϕ^∞K dS.
Now we can calculate the deflection angle of a high-dimensional Schwarzschild black hole in STVG for different spacetime dimensionalities. As an example, we calculate the deflection angle for D=4,5,6,7:
α̂_D=4 = 2m/b-3 m^2π/16b^2-3Gπ q^2/4b^2+4Gmq^2/3b^3 +O(q^4/b^4),
α̂_D=5 =3mπ/4b^2-3m^2π/16b^4-15Gπ q^2/16b^4+15Gmπ q^2/32b^6+O(q^4/b^8),
α̂_D=6 =8m/3b^3-25m^2π/128b^6-35Gπ q^2/32b^6+512Gmq^2/315b^9
+O(q^4/b^12),
α̂_D=7 =15π m/16b^4-105m^2π/512b^8-315Gπ q^2/256b^8+1155Gmπ q^2/2048b^12
+O(q^4/b^16).
We plot the deflection angle as a function of the impact parameter for different values of D and a in Fig.<ref>. It is clear that the higher the spacetime dimension, the smaller the deflection angle. However, the STVG parameter increases the deflection angle, i.e., a high-dimensional Schwarzschild black hole in STVG produces a larger deflection angle than a Schwarzschild-Tangherlini black hole.
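As a quick numerical illustration, the following Python sketch evaluates the leading-order expressions for α̂ listed above for D=4,...,7. The function name and the sample values of m, q, b are ours and purely illustrative; higher-order terms are dropped, exactly as in the truncated series.

```python
import numpy as np

def weak_deflection_angle(D, m, q, b, G=1.0):
    """Leading-order weak deflection angle for D = 4,...,7, as listed above."""
    if D == 4:
        return (2*m/b - 3*np.pi*m**2/(16*b**2)
                - 3*np.pi*G*q**2/(4*b**2) + 4*G*m*q**2/(3*b**3))
    if D == 5:
        return (3*np.pi*m/(4*b**2) - 3*np.pi*m**2/(16*b**4)
                - 15*np.pi*G*q**2/(16*b**4) + 15*np.pi*G*m*q**2/(32*b**6))
    if D == 6:
        return (8*m/(3*b**3) - 25*np.pi*m**2/(128*b**6)
                - 35*np.pi*G*q**2/(32*b**6) + 512*G*m*q**2/(315*b**9))
    if D == 7:
        return (15*np.pi*m/(16*b**4) - 105*np.pi*m**2/(512*b**8)
                - 315*np.pi*G*q**2/(256*b**8) + 1155*np.pi*G*m*q**2/(2048*b**12))
    raise ValueError("only D = 4,...,7 are tabulated here")

# illustrative values: the angle decreases as D increases at fixed impact parameter
for D in (4, 5, 6, 7):
    print(D, weak_deflection_angle(D, m=1.0, q=0.3, b=10.0))
```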
§ GREYBODY FACTOR
In this section, we study the bounds on greybody factors for the massless scalar field. The massless scalar field Φ is represented by the Klein-Gordon equation <cit.>
1/√(-g)∂_μ(√(-g)g^μν∂_ν)Φ=0,
where g is the determinant of the metric tensor. In order to separate radial and angular variables, we have an ansatz Φ=e^-iω t Y_lm(Ω)Ψ(r) and make a change dr_*=dr/f(r). Substituting the above definitions and metric function Eq. (<ref>) into Eq. (<ref>), we obtain a Schrödinger-like wave expression
d^2Ψ(r)/d^2r_*+[ω^2-V_eff(r)]Ψ(r)=0,
in which ω denotes the frequency, and l and m are the azimuthal quantum number and the spherical harmonic index, respectively.
The effective potential V_eff(r) can be written as
V_eff(r)=f(r)[l(D+l-3)/r^2+(D-2)(D-4)f(r)/4r^2+(D-2)f'(r)/2r].
To better understand the effect of the spacetime dimensionality and the STVG parameter on the effective potential, we plot the effective potential as a function of r for different values of D and a in Fig.<ref>. Clearly, increasing the spacetime dimensionality increases the effective potential, whereas the STVG parameter has the opposite effect. From the effective potential we can anticipate the behavior of the greybody factors.
The bounds on greybody factors can be expressed as <cit.>
T≥sech^2[∫_-∞^∞√((h')^2+(ω^2-V_eff-h^2)^2)/ 2hdr_*],
where h≡ h(r_*)>0 is an arbitrary function satisfying h(-∞)=h(∞)=ω; two particular functional forms of h are considered in Ref.<cit.>. Here we only consider the case h=ω, for which Eq.(<ref>) becomes
T≥sech^2[1/2ω∫_r_+^∞V_eff/f(r)dr].
After expanding the integral, we obtain the lower bound on the greybody factors
T ≥sech^2[-1/(2ω)((-8+2D+D^2-12l+4lD+4l^2)/(4r_+)
-(-2+D)B^2(-16+3D)Gm^2/(4(2D-5)) r_+^(5-2D)+(D-10)Am/4 r_+^(2-D))
].
Fig.<ref> demonstrates the behavior of the greybody factor for the high-dimensional Schwarzschild black hole in STVG. From the left panel we observe that the greybody factor decreases as the dimension increases. That is to say, the greybody factor is suppressed in higher-dimensional spacetime: fewer massless scalar particles pass through the potential barrier and reach spatial infinity for a higher-dimensional black hole. Additionally, we observe that as the STVG parameter a increases, the greybody factor increases. That is, the STVG parameter makes the gravitational potential more transparent.
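For readers who wish to reproduce the qualitative behavior of the bound, the following Python sketch evaluates T ≥ sech^2[(1/(2ω))∫_{r_+}^∞ (V_eff/f) dr] by numerical quadrature, using the effective potential given above. The metric function supplied here is only a Schwarzschild-Tangherlini-like placeholder standing in for the full STVG metric function f(r) of the text, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def greybody_bound(omega, r_plus, D, l, f, df):
    """Lower bound T >= sech^2[(1/(2*omega)) * int_{r_+}^inf V_eff/f dr] with h = omega.
    Note that V_eff/f is simply the square bracket in the expression for V_eff above."""
    def integrand(r):
        return (l*(D + l - 3)/r**2
                + (D - 2)*(D - 4)*f(r)/(4*r**2)
                + (D - 2)*df(r)/(2*r))
    integral, _ = quad(integrand, r_plus, np.inf, limit=200)
    return 1.0/np.cosh(integral/(2*omega))**2

# placeholder metric function (Schwarzschild-Tangherlini-like, standing in for the
# STVG f(r) of the text); D, r_plus and omega are illustrative only
D, r_plus = 5, 1.0
f  = lambda r: 1.0 - (r_plus/r)**(D - 3)
df = lambda r: (D - 3)*r_plus**(D - 3)/r**(D - 2)
for omega in (0.5, 1.0, 2.0):
    print(omega, greybody_bound(omega, r_plus, D, l=0, f=f, df=df))
```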
§ CONCLUSION
In this paper, we analyzed thermal fluctuations, the weak deflection angle and the greybody factor for a high-dimensional Schwarzschild black hole in STVG.
First, we evaluated the influence of the logarithmic and higher-order corrections to the entropy on the Helmholtz free energy, internal energy and heat capacity, and compared the corrected and uncorrected thermodynamic properties. Overall, the corrected entropy arising from thermal fluctuations first decreases and then increases, and the impact of thermal fluctuations is significant for a small black hole. Due to the effect of the spacetime dimensionality, the curves of the modified entropy intersect one another. Consequently, for a small or large black hole the corrected entropy increases with the spacetime dimensionality, whereas this is not the case for intermediate-size black holes. The STVG parameter leads to a slight increase in the corrected entropy. A black hole with a small event horizon radius has negative Helmholtz free energy because of thermal fluctuations. For a small black hole the Helmholtz free energy increases monotonically with the parameters D and a, while for a larger black hole these parameters have the opposite effect. The internal energy remains positive and behaves similarly to the corrected entropy; it increases with the number of dimensions and decreases as the STVG parameter increases. In addition, from the analysis of the Helmholtz free energy and the heat capacity in all dimensions, we found that thermal fluctuations make small black holes more stable, and that the heat capacity is independent of the STVG parameter.
Second, we calculated the weak deflection angle with the Gauss-Bonnet theorem and presented its expression for D=4,5,6,7. We pointed out that the weak deflection angle becomes smaller in higher-dimensional spacetime, whereas the presence of the STVG parameter increases the deflection angle.
Finally, we computed the greybody factors of the massless scalar field and analyzed the effect of the spacetime dimensionality and the STVG parameter on them. We found that the 4-dimensional black hole has the largest greybody factors whereas the 7-dimensional black hole possesses the smallest. Moreover, the greybody factor increases as the STVG parameter increases; that is, more radiation can reach spatial infinity for a 4-dimensional black hole with a larger value of the STVG parameter.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Astier:2012ba
P. Astier and R. Pain,
Observational Evidence of the Accelerated Expansion of the Universe.
Comptes Rendus Physique 13 (2012), 521-538.
doi:10.1016/j.crhy.2012.04.009
Moffat:2013sja
J. W. Moffat and S. Rahvar,
The MOG weak field approximation and observational test of galaxy rotation curves.
Mon. Not. Roy. Astron. Soc. 436 (2013), 1439-1451.
doi:10.1093/mnras/stt1670
Planck:2015fie
P. A. R. Ade et al. [Planck],
Planck 2015 results. XIII. Cosmological parameters.
Astron. Astrophys. 594 (2016), A13.
doi:10.1051/0004-6361/201525830
Moffat:2005si
J. W. Moffat,
Scalar-tensor-vector gravity theory.
JCAP 03 (2006), 004.
doi:10.1088/1475-7516/2006/03/004
Moffat:2014aja
J. W. Moffat,
Black Holes in Modified Gravity (MOG).
Eur. Phys. J. C 75 (2015), 175.
doi:10.1140/epjc/s10052-015-3405-x
Brownstein:2005zz
J. R. Brownstein and J. W. Moffat,
Galaxy rotation curves without non-baryonic dark matter.
Astrophys. J. 636 (2006), 721-741.
doi:10.1086/498208
Jamali:2017zrh
S. Jamali, M. Roshan and L. Amendola,
On the cosmology of scalar-tensor-vector gravity theory,
JCAP 01 (2018), 048.
doi:10.1088/1475-7516/2018/01/048
Emparan:2008eg
R. Emparan and H. S. Reall,
Black Holes in Higher Dimensions.
Living Rev. Rel. 11 (2008), 6.
doi:10.12942/lrr-2008-6
Tangherlini:1963bw
F. R. Tangherlini,
Schwarzschild field in n dimensions and the dimensionality of space problem.
Nuovo Cim. 27 (1963), 636-651.
doi:10.1007/BF02784569
Myers:1986un
R. C. Myers and M. J. Perry,
Black Holes in Higher Dimensional Space-Times.
Annals Phys. 172 (1986), 304.
doi:10.1016/0003-4916(86)90186-7
Cai:2020igv
X. C. Cai and Y. G. Miao,
High-dimensional Schwarzschild black holes in scalar–tensor–vector gravity theory.
Eur. Phys. J. C 81 (2021), 559.
doi:10.1140/epjc/s10052-021-09351-x
Bekenstein:1973ur
J. D. Bekenstein,
Black holes and entropy.
Phys. Rev. D 7 (1973), 2333-2346.
doi:10.1103/PhysRevD.7.2333
Easther:1999gk
R. Easther and D. A. Lowe,
Holography, cosmology and the second law of thermodynamics.
Phys. Rev. Lett. 82 (1999), 4967-4970.
doi:10.1103/PhysRevLett.82.4967
Das:2001ic
S. Das, P. Majumdar and R. K. Bhaduri,
General logarithmic corrections to black hole entropy.
Class. Quant. Grav. 19 (2002), 2355-2368.
doi:10.1088/0264-9381/19/9/302
Upadhyay:2017qmv
S. Upadhyay,
Quantum corrections to thermodynamics of quasitopological black holes.
Phys. Lett. B 775 (2017), 130-139.
doi:10.1016/j.physletb.2017.10.059
Dehghani:2018qvn
M. Dehghani,
Thermodynamics of charged dilatonic BTZ black holes in rainbow gravity.
Phys. Lett. B 777 (2018), 351-360.
doi:10.1016/j.physletb.2017.12.048
Jawad:2017mwt
A. Jawad and M. U. Shahzad,
Effects of Thermal Fluctuations on Non-minimal Regular Magnetic Black Hole.
Eur. Phys. J. C 77 (2017), 349.
doi:10.1140/epjc/s10052-017-4914-6
Shahzad:2018znu
M. U. Shahzad and A. Jawad,
Thermodynamics of Black holes With Higher Order Corrected Entropy.
Can. J. Phys. 97 (2019), 742-751.
doi:10.1139/cjp-2018-0091
Sharif:2021vex
M. Sharif and Z. Akhtar,
Study of thermal fluctuations in five-dimensional rotating regular black hole.
Chin. J. Phys. 71 (2021), 669-682.
doi:10.1016/j.cjph.2021.04.005
Khan:2022zcf
Y. H. Khan and P. A. Ganai,
Remnants and thermal corrections in Horndeski black holes with non-minimal kinetic coupling.
Eur. Phys. J. Plus 137 (2022), 827.
doi:10.1140/epjp/s13360-022-03036-4
Ama-Tul-Mughani:2022wtg
Q. Ama-Tul-Mughani, A. Waseem, W. u. Salam and A. Jawad,
Greybody factor and thermal fluctuations of rotating regular black hole bounded by PFDM.
Chin. J. Phys. 77 (2022), 2213-2227.
doi:10.1016/j.cjph.2021.11.024
Chen:2021czh
X. Chen, X. Huang, J. Chen and Y. Wang,
Effect of thermal fluctuation on the thermodynamics of GMGHS black hole.
Gen. Rel. Grav. 53 (2021), 9.
doi:10.1007/s10714-020-02780-1
Upadhyay:2019hyw
S. Upadhyay, Nadeem-ul-islam and P. A. Ganai,
A modified thermodynamics of rotating and charged BTZ black hole.
JHAP 2 (2022), 25-48.
doi:10.22128/jhap.2021.454.1004
Khan:2021tzv
Y. H. Khan, S. Upadhyay and P. A. Ganai,
Stability of remnants of Bardeen regular black holes in presence of thermal fluctuations.
Mod. Phys. Lett. A 36 (2021), 2130023.
doi:10.1142/S0217732321300238
Hawking:1974rv
S. W. Hawking,
Black hole explosions.
Nature 248 (1974), 30-31.
doi:10.1038/248030a0
Hawking:1975vcx
S. W. Hawking,
Particle Creation by Black Holes.
Commun. Math. Phys. 43 (1975), 199-220
[erratum: Commun. Math. Phys. 46 (1976), 206].
doi:10.1007/BF02345020
Barman:2019vst
S. Barman,
The Hawking effect and the bounds on greybody factor for higher dimensional Schwarzschild black holes.
Eur. Phys. J. C 80 (2020), 50.
doi:10.1140/epjc/s10052-020-7613-7
Boonserm:2008zg
P. Boonserm and M. Visser,
Bounding the greybody factors for Schwarzschild black holes.
Phys. Rev. D 78 (2008), 101502.
doi:10.1103/PhysRevD.78.101502
Boonserm:2014fja
P. Boonserm, A. Chatrabhuti, T. Ngampitipan and M. Visser,
Greybody factors for Myers-Perry black holes.
J. Math. Phys. 55 (2014), 112502.
doi:10.1063/1.4901127
Boonserm:2017qcq
P. Boonserm, T. Ngampitipan and P. Wongjun,
Greybody factor for black holes in dRGT massive gravity.
Eur. Phys. J. C 78 (2018), 492.
doi:10.1140/epjc/s10052-018-5975-x
Okyay:2021nnh
M. Okyay and A. Övgün,
Nonlinear electrodynamics effects on the black hole shadow, deflection angle, quasinormal modes and greybody factors.
JCAP 01 (2022), 009.
doi:10.1088/1475-7516/2022/01/009
Kokkotas:2010zd
K. D. Kokkotas, R. A. Konoplya and A. Zhidenko,
Quasinormal modes, scattering and Hawking radiation of Kerr-Newman black holes in a magnetic field.
Phys. Rev. D 83 (2011), 024031.
doi:10.1103/PhysRevD.83.024031
Konoplya:2020jgt
R. A. Konoplya, A. F. Zinhailo and Z. Stuchlik,
Quasinormal modes and Hawking radiation of black holes in cubic gravity.
Phys. Rev. D 102 (2020), 044023.
doi:10.1103/PhysRevD.102.044023
Li:2022jda
Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan,
Gray-body factor and absorption of the Dirac field in ESTGB gravity.
Chin. J. Phys. 77 (2022), 1269-1277.
doi:10.1016/j.cjph.2022.03.027
Harris:2003eg
C. M. Harris and P. Kanti,
Hawking radiation from a (4+n)-dimensional black hole: Exact results for the Schwarzschild phase.
JHEP 10 (2003), 014.
doi:10.1088/1126-6708/2003/10/014
Catalan:2014ama
M. Catalán, E. Cisternas, P. A. González and Y. Vásquez,
Quasinormal modes and greybody factors of a four-dimensional Lifshitz black hole with z=0.
Astrophys. Space Sci. 361 (2016), 189.
doi:10.1007/s10509-016-2764-6
Abedi:2013xua
J. Abedi and H. Arfaei,
Fermionic greybody factors in dilaton black holes.
Class. Quant. Grav. 31 (2014), 195005.
doi:10.1088/0264-9381/31/19/195005
Lewis:2006fu
A. Lewis and A. Challinor,
Weak gravitational lensing of the CMB.
Phys. Rept. 429 (2006), 1-65.
doi:10.1016/j.physrep.2006.03.002
Peloton:2016kbw
J. Peloton, M. Schmittfull, A. Lewis, J. Carron and O. Zahn,
Full covariance of CMB and lensing reconstruction power spectra.
Phys. Rev. D 95 (2017), 043508.
doi:10.1103/PhysRevD.95.043508
Pratten:2016dsm
G. Pratten and A. Lewis,
Impact of post-Born lensing on the CMB.
JCAP 08 (2016), 047.
doi:10.1088/1475-7516/2016/08/047
Chen:2015cpa
S. Chen and J. Jing,
Strong gravitational lensing for the photons coupled to Weyl tensor in a Schwarzschild black hole spacetime.
JCAP 10, 002 (2015).
doi:10.1088/1475-7516/2015/10/002
Chen:2016hil
S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang,
Strong gravitational lensing for the photons coupled to a Weyl tensor in a Kerr black hole spacetime.
Phys. Rev. D 95, 104017 (2017).
doi:10.1103/PhysRevD.95.104017
Wang:2016paq
S. Wang, S. Chen and J. Jing,
Strong gravitational lensing by a Konoplya-Zhidenko rotating non-Kerr compact object.
JCAP 11, 020 (2016).
doi:10.1088/1475-7516/2016/11/020
Lu:2016gsf
X. Lu, F. W. Yang and Y. Xie,
Strong gravitational field time delay for photons coupled to Weyl tensor in a Schwarzschild black hole.
Eur. Phys. J. C 76, 357 (2016).
doi:10.1140/epjc/s10052-016-4218-2
Zhao:2016kft
S. S. Zhao and Y. Xie,
Strong field gravitational lensing by a charged Galileon black hole.
JCAP 07, 007 (2016).
doi:10.1088/1475-7516/2016/07/007
Zhao:2017cwk
S. S. Zhao and Y. Xie,
Strong deflection gravitational lensing by a modified Hayward black hole.
Eur. Phys. J. C 77, 272 (2017).
doi:10.1140/epjc/s10052-017-4850-5
Zhang:2017vap
R. Zhang, J. Jing and S. Chen,
Strong gravitational lensing for black holes with scalar charge in massive gravity.
Phys. Rev. D 95, no.6, 064054 (2017).
doi:10.1103/PhysRevD.95.064054
Abbas:2019olp
G. Abbas, A. Mahmood and M. Zubair,
Strong Gravitational Lensing for Photon Coupled to Weyl Tensor in Kiselev Black Hole.
Chin. Phys. C 44, 095105 (2020).
doi:10.1088/1674-1137/44/9/095105
Bergliaffa:2020ivp
S. E. P. Bergliaffa, E. E. d. Filho and R. Maier,
Strong Lensing and Nonminimally Coupled Electromagnetism.
Phys. Rev. D 101, 124038 (2020).
doi:10.1103/PhysRevD.101.124038
Wang:2019cuf
C. Y. Wang, Y. F. Shen and Y. Xie,
Weak and strong deflection gravitational lensings by a charged Horndeski black hole.
JCAP 04, 022 (2019).
doi:10.1088/1475-7516/2019/04/022
Kumaran:2019qqp
Y. Kumaran and A. Övgün,
Weak Deflection Angle of Extended Uncertainty Principle Black Holes.
Chin. Phys. C 44, 025101 (2020).
doi:10.1088/1674-1137/44/2/025101
Javed:2020frq
W. Javed, M. B. Khadim and A. Övgün,
Weak gravitational lensing by Bocharova–Bronnikov–Melnikov–Bekenstein black holes using Gauss–Bonnet theorem.
Eur. Phys. J. Plus 135, 595 (2020).
doi:10.1140/epjp/s13360-020-00619-x
Kumar:2020sag
R. Kumar, S. U. Islam and S. G. Ghosh,
Gravitational lensing by charged black hole in regularized 4D Einstein–Gauss–Bonnet gravity.
Eur. Phys. J. C 80, 1128 (2020).
doi:10.1140/epjc/s10052-020-08606-3
ElMoumni:2020wrf
H. El Moumni, K. Masmar and A. Övgün,
Weak deflection angle of light in two classes of black holes in nonlinear electrodynamics via Gauss–Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 19, 2250094 (2022).
doi:10.1142/S0219887822500943
Javed:2020pyz
W. Javed, J. Abbas, Y. Kumaran and A. Övgün,
Weak deflection angle by asymptotically flat black holes in Horndeski theory using Gauss-Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021).
doi:10.1142/S0219887821500031
Xu:2021rld
X. Xu, T. Jiang and J. Jia,
Deflection angle with electromagnetic interaction and gravitational-electromagnetic dual lensing.
JCAP 08, 022 (2021).
doi:10.1088/1475-7516/2021/08/022
Javed:2021arr
W. Javed, A. Hamza and A. Övgün,
Weak Deflection Angle and Shadow by Tidal Charged Black Hole.
Universe 7, 385 (2021).
doi:10.3390/universe7100385
Gao:2021luq
Y. X. Gao and Y. Xie,
Gravitational lensing by hairy black holes in Einstein-scalar-Gauss-Bonnet theories.
Phys. Rev. D 103, no.4, 043008 (2021).
doi:10.1103/PhysRevD.103.043008
Javed:2020lsg
W. Javed, A. Hamza and A. Övgün,
Effect of nonlinear electrodynamics on the weak field deflection angle by a black hole.
Phys. Rev. D 101 (2020), 103521.
doi:10.20944/preprints201911.0142.v1
Gibbons:2008rj
G. W. Gibbons and M. C. Werner,
Applications of the Gauss-Bonnet theorem to gravitational lensing.
Class. Quant. Grav. 25 (2008), 235009.
doi:10.1088/0264-9381/25/23/235009
Ishihara:2016vdc
A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura and H. Asada,
Gravitational bending angle of light for finite distance and the Gauss-Bonnet theorem.
Phys. Rev. D 94 (2016), 084015.
doi:10.1103/PhysRevD.94.084015
Islam:2020xmy
S. U. Islam, R. Kumar and S. G. Ghosh,
Gravitational lensing by black holes in the 4D Einstein-Gauss-Bonnet gravity.
JCAP 09 (2020), 030.
doi:10.1088/1475-7516/2020/09/030
Zhu:2019ura
T. Zhu, Q. Wu, M. Jamil and K. Jusufi,
Shadows and deflection angle of charged and slowly rotating black holes in Einstein-Æther theory.
Phys. Rev. D 100 (2019), 044055.
doi:10.1103/PhysRevD.100.044055
Sakalli:2017ewb
I. Sakalli and A. Ovgun,
Hawking Radiation and Deflection of Light from Rindler Modified Schwarzschild Black Hole.
EPL 118 (2017), 60006.
doi:10.1209/0295-5075/118/60006
Jusufi:2018jof
K. Jusufi, A. Övgün, J. Saavedra, Y. Vásquez and P. A. González,
Deflection of light by rotating regular black holes using the Gauss-Bonnet theorem.
Phys. Rev. D 97 (2018), 124024.
doi:10.1103/PhysRevD.97.124024
Ovgun:2018fte
A. Övgün, İ. Sakallı and J. Saavedra,
Weak gravitational lensing by Kerr-MOG black hole and Gauss–Bonnet theorem.
Annals Phys. 411 (2019), 167978.
doi:10.1016/j.aop.2019.167978
Li:2020wvn
Z. Li, G. Zhang and A. Övgün,
Circular Orbit of a Particle and Weak Gravitational Lensing.
Phys. Rev. D 101 (2020), 124058.
doi:10.1103/PhysRevD.101.124058
Javed:2020fli
W. Javed, M. B. Khadim, A. Övgün and J. Abbas,
Weak gravitational lensing by stringy black holes.
Eur. Phys. J. Plus 135 (2020), 314.
doi:10.1140/epjp/s13360-020-00322-x
Belhaj:2020rdb
A. Belhaj, M. Benali, A. El Balali, H. El Moumni and S. E. Ennadifi,
Deflection angle and shadow behaviors of quintessential black holes in arbitrary dimensions.
Class. Quant. Grav. 37 (2020), 215004.
doi:10.1088/1361-6382/abbaa9
Pourhassan:2017kmm
B. Pourhassan, K. Kokabi and S. Rangyan,
Thermodynamics of higher dimensional black holes with higher order thermal fluctuations.
Gen. Rel. Grav. 49 (2017), 144.
doi:10.1007/s10714-017-2315-7
Mureika:2015sda
J. R. Mureika, J. W. Moffat and M. Faizal,
Black hole thermodynamics in MOdified Gravity (MOG).
Phys. Lett. B 757 (2016), 528-536.
doi:10.1016/j.physletb.2016.04.041
Pourhassan:2016zzc
B. Pourhassan and M. Faizal,
Thermodynamics of a sufficient small singly spinning Kerr-AdS black hole.
Nucl. Phys. B 913 (2016), 834-851.
doi:10.1016/j.nuclphysb.2016.10.013
Pourhassan:2017rie
B. Pourhassan, H. Farahani and S. Upadhyay,
Thermodynamics of higher-order entropy corrected Schwarzschild–Beltrami–de Sitter black hole.
Int. J. Mod. Phys. A 34 (2019), 1950158.
doi:10.1142/S0217751X19501586
Pourhassan:2018wjg
B. Pourhassan, M. Faizal and S. A. Ketabi,
Logarithmic correction of the BTZ black hole and adaptive model of Graphene.
Int. J. Mod. Phys. D 27 (2018), 1850118.
doi:10.1142/S0218271818501183
Bubuianu:2018qsq
L. Bubuianu and S. I. Vacaru,
Black holes with MDRs and Bekenstein–Hawking and Perelman entropies for Finsler–Lagrange–Hamilton Spaces.
Annals Phys. 404 (2019), 10-38.
doi:10.1016/j.aop.2019.02.013
Sharif:2022ccc
M. Sharif and A. Khan,
Thermal fluctuations, quasi-normal modes and phase transitions of regular black hole.
Chin. J. Phys. 77 (2022), 1885-1902.
doi:10.1016/j.cjph.2022.01.002
Sharif:2020hid
M. Sharif and Q. Ama-Tul-Mughani,
Phase transition and thermal fluctuations of quintessential Kerr–Newman-AdS black hole.
Phys. Dark Univ. 30 (2020), 100723.
doi:10.1016/j.dark.2020.100723
Berti:2009kk
E. Berti, V. Cardoso and A. O. Starinets,
Quasinormal modes of black holes and black branes.
Class. Quant. Grav. 26 (2009), 163001.
doi:10.1088/0264-9381/26/16/163001
|
http://arxiv.org/abs/2307.05144v1 | 20230711095236 | Robust chaos in orientation-reversing and non-invertible two-dimensional piecewise-linear maps | [
"Indranil Ghosh",
"Robert I. McLachlan",
"David J. W. Simpson"
] | nlin.CD | [
"nlin.CD",
"math.DS",
"Primary: 37G35, Secondary: 39A28"
] |
I. Ghosh, R. McLachlan, D. Simpson
School of Mathematical and Computational Sciences, Massey University, Colombo Road, Palmerston North 4410, New Zealand
Robust chaos in orientation-reversing and non-invertible two-dimensional piecewise-linear maps
===================================================================================================
This paper concerns the two-dimensional border-collision normal form — a four-parameter family of piecewise-linear maps generalising the Lozi family and relevant to diverse applications. The normal form was recently shown to exhibit a chaotic attractor throughout an open region of parameter space. This was achieved by constructing a trapping region in phase space and an invariant expanding cone in tangent space, but only allowed parameter combinations for which the normal form is invertible and orientation-preserving. This paper generalises the construction to include the non-invertible and orientation-reversing cases. This provides a more complete and unified picture of robust chaos by revealing its presence to be disassociated from the global topological properties of the map. We identify a region of parameter space in which the map exhibits robust chaos, and show that part of the boundary of this region consists of bifurcation points at which the chaotic attractor is destroyed.
§ INTRODUCTION
Robust chaos refers to the persistence of a chaotic attractor throughout open regions of the parameter space of a family of dynamical systems <cit.>. Such robustness is essential for chaos-based cryptography <cit.> and can be helpful to devices that have advantageous functional characteristics when operated in a chaotic regime, examples include power converters <cit.>, optical resonators <cit.>, and energy harvesters <cit.>.
Robust chaos occurs in diverse settings <cit.>. Its occurrence in piecewise-linear maps was popularised by Banerjee et al. <cit.>, and later Glendinning and Simpson <cit.> proved the presence of robust chaos in their setting. They provided explicit parameter values yielding robust chaos, compared to earlier results giving implicit conditions in more abstract settings <cit.>, but only dealt with two-dimensional, orientation-preserving maps. Piecewise-linear maps that are not orientation-preserving are arguably more physically relevant than those that are orientation-preserving, as discussed below. The purpose of this paper is to extend the results of <cit.>
to the non-orientation-preserving cases. This is also a necessary first step towards generalising the results to higher dimensions.
Specifically, we study the family
f_ξ(x,y) = ( τ_L x + y + 1, -δ_L x ), x ≤ 0,
( τ_R x + y + 1, -δ_R x ), x ≥ 0,
known as the two-dimensional border-collision normal form.
For any fixed parameter point ξ = (τ_L,δ_L,τ_R,δ_R) ∈ℝ^4,
we are interested in the behaviour of iterations (x,y) ↦ f_ξ(x,y) on ℝ^2.
The family (<ref>) is continuous but non-differentiable on the line x = 0, termed the switching manifold,
and is a normal form in the sense that any two-dimensional, piecewise-linear, continuous map
with a single switching manifold and satisfying a genericity condition can be transformed to (<ref>) through a change of coordinates <cit.>.
It was originally formulated by Nusse and Yorke <cit.>
with an additional parameter μ that when varied through zero brings about a border-collision bifurcation <cit.>.
The simplicity of (<ref>) belies the incredible complexity of its dynamics;
it can exhibit two-dimensional attractors <cit.>,
large numbers of coexisting attractors <cit.>,
and Arnold tongues with a unique sausage-string geometry <cit.>.
In the special case τ_L = -τ_R and δ_L = δ_R,
(<ref>) reduces to the Lozi family <cit.>.
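For concreteness, a minimal Python sketch of the map (<ref>) is given below; the function and variable names are ours, and the parameter values (a Lozi-type point with τ_L = -τ_R and δ_L = δ_R) are illustrative only.

```python
def border_collision_normal_form(x, y, tau_L, delta_L, tau_R, delta_R):
    """One iterate of the two-dimensional border-collision normal form."""
    if x <= 0:
        return tau_L * x + y + 1.0, -delta_L * x
    return tau_R * x + y + 1.0, -delta_R * x

# an illustrative forward orbit at a Lozi-type parameter point
xi = dict(tau_L=1.7, delta_L=-0.5, tau_R=-1.7, delta_R=-0.5)
point = (0.1, 0.1)
orbit = [point]
for _ in range(10000):
    point = border_collision_normal_form(*point, **xi)
    orbit.append(point)
```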
At any point with x ≠ 0 the derivative of (<ref>) has determinant
( ( D f_ξ)(x,y) ) =
δ_L, x < 0,
δ_R, x > 0.
Consequently we have the following cases.
* If δ_L > 0 and δ_R > 0 then f_ξ is orientation-preserving.
This is the most natural case to consider from an applied perspective
because Poincaré maps derived from time-reversible flows on ℝ^n are necessarily orientation-preserving <cit.>.
* If δ_L < 0 and δ_R < 0 then f_ξ is orientation-reversing.
This case was considered in the seminal work of Misiurewicz <cit.> for the Lozi map. Such maps can be embedded into higher dimensional orientation-preserving maps and in this way help us understand high-dimensional chaos in physical systems <cit.>.
* If δ_L and δ_R have opposite signs then f_ξ is non-invertible
with the left and right half-planes mapping onto either the top or the bottom half-plane.
Such maps, except piecewise-smooth instead of piecewise-linear,
apply to a wide range of power converters <cit.>.
For instance, the boost converter described by Banerjee et al. <cit.>
regulates output voltage via a switch that is activated whenever the current reaches a threshold value.
This causes the flow (in phase space) to fold back over itself resulting in a stroboscopic map that
is piecewise-smooth (due to the switch) and non-invertible (due to the folding) <cit.>.
* If one of δ_L and δ_R is zero then f_ξ is non-invertible with one half-plane mapping onto a line <cit.>.
Such maps arise as leading-order approximations to grazing-sliding bifurcations in relay control systems <cit.> and mechanical systems with stick-slip friction <cit.>. This is because trajectories of the differential equations become constrained to a codimension-one discontinuity surface
causing the range of one piece of the Poincaré map to have one less dimension than the domain of the map <cit.>. This also occurs for the Hickian trade cycle model of Puu et al. <cit.> and the influenza outbreak model of Roberts et al. <cit.>.
Let
Φ = {ξ∈ℝ^4 | τ_L > |δ_L + 1|, τ_R < -|δ_R + 1| }
denote the set of all ξ for which f_ξ has two saddle fixed points (Lemma <ref>).
Banerjee et al. <cit.> considered ξ∈Φ in the orientation-preserving case
and showed that on one side of a homoclinic bifurcation
the stable manifold of one fixed point has transverse intersections with the unstable manifolds of both fixed points
and argued that this implies the existence of a chaotic attractor.
Their conclusion was verified rigorously by Glendinning and Simpson <cit.> using the methodology of Misiurewicz <cit.>.
First, a forward invariant region Ω⊂ℝ^2 was identified by using one of the unstable manifolds,
and this was perturbed into a trapping region that necessarily contains a topological attractor.
Second, an invariant expanding cone Ψ⊂ T ℝ^2 (see <ref>)
was constructed by using eigenvectors of the two pieces of D f_ξ.
The existence of this object implies that nearby forward orbits diverge exponentially
and that the attractor is chaotic in the sense of having a positive Lyapunov exponent.
In general this approach works brilliantly for piecewise-linear maps because the invariance and expansion properties can be verified by simple, explicit computations.
For more complicated constructions the verification is best done on a computer <cit.>.
The construction is robust to nonlinear perturbations to the pieces of the map
and has been used to show that chaos persists for intervals of parameter values beyond border-collision bifurcations <cit.>.
Recently this approach has also been applied to piecewise-smooth maps with a square-root singularity that arise for mechanical systems with hard impacts <cit.>.
Further techniques can reveal more properties of the attractor,
such as sensitive dependence on initial conditions <cit.>,
continuity with respect to ξ <cit.>,
and the presence of an SRB measure <cit.>.
In this paper, we extend the construction of <cit.> to the orientation-reversing and non-invertible cases.
We obtain a subset Φ_ trap⊂Φ in which f_ξ has a trapping region
and another subset Φ_ cone⊂Φ in which f_ξ has an invariant expanding cone.
It follows that f_ξ has a chaotic attractor for all ξ∈Φ_ trap∩Φ_ cone.
This is an open subset of parameter space, hence the chaos is robust with respect to the family f_ξ. We expect it is also robust to nonlinear perturbations to the pieces of the map, as in <cit.>.
Boundaries of Φ_ trap∩Φ_ cone are where some aspect of the construction fails.
As shown below, three of these boundaries correspond to bifurcations where the chaotic attractor is destroyed.
Beyond the other boundaries the chaotic attractor appears to persist and we believe robust chaos could be verified on a larger subset of parameter space by using a more complicated construction, e.g. <cit.>,
but our aim here is not to optimise the subset, only to obtain a reasonably large subset
for both negative and positive values of δ_L and δ_R by using a construction that is both natural and simple.
The remainder of this paper is organised as follows.
We start in <ref> by calculating the
stable and unstable manifolds of the fixed points.
Constraints on the geometry of the manifolds as they are extended outwards from the fixed points give rise to our definition of Φ_ trap.
Here we also define Φ_ cone
and state our main result (Theorem <ref>) for the existence of a chaotic attractor.
Being four-dimensional the sets Φ_ trap and Φ_ cone are difficult to visualise;
we show a range of two-dimensional cross-sections to give some impressions of their size and shape.
In <ref> we use the stable manifold of one of the fixed points
to form a region Ω⊂ℝ^2 and show it is forward invariant under f_ξ for any ξ∈Φ_ trap.
We then show how Ω can be perturbed into a trapping region for any ξ∈Φ_ trap.
In <ref> we identify a cone that is both invariant and expanding for any ξ∈Φ_ cone
and prove Theorem <ref>.
In <ref> we further study the extent to which Φ_ trap and Φ_ cone cover parameter space. We show a typical cross-section of Φ_ trap∩Φ_ cone and overlay numerical simulations whereby the presence of robust chaos is estimated from forward orbits. This helps clarify which boundaries of Φ_ trap and Φ_ cone correspond to bifurcations and how much of the true robust chaos region is detected by the straightforward constructions. Final remarks are provided in <ref>.
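A sketch of the kind of forward-orbit computation referred to above is given below: the largest Lyapunov exponent is estimated by accumulating the growth of a tangent vector under the piecewise Jacobians along an orbit. The function name, iteration counts and the sample parameter point (a Lozi-type point) are ours and purely illustrative.

```python
import numpy as np

def lyapunov_exponent(xi, n_transient=1000, n_iter=100000, seed=1):
    """Estimate the largest Lyapunov exponent of the normal form from a forward
    orbit, accumulating tangent-vector growth under the piecewise Jacobians."""
    tau_L, delta_L, tau_R, delta_R = xi
    A_L = np.array([[tau_L, 1.0], [-delta_L, 0.0]])
    A_R = np.array([[tau_R, 1.0], [-delta_R, 0.0]])
    shift = np.array([1.0, 0.0])
    rng = np.random.default_rng(seed)
    z = rng.uniform(-0.1, 0.1, size=2)      # initial point (x, y)
    v = np.array([1.0, 0.0])                # tangent vector
    total = 0.0
    for i in range(n_transient + n_iter):
        A = A_L if z[0] <= 0 else A_R
        z = A @ z + shift                    # f(x, y) = A_J (x, y)^T + (1, 0)^T
        v = A @ v
        growth = np.linalg.norm(v)
        if i >= n_transient:
            total += np.log(growth)
        v /= growth
    return total / n_iter

# Lozi-type example point; a positive value is consistent with chaos
print(lyapunov_exponent((1.7, -0.5, -1.7, -0.5)))
```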
§ SUFFICIENT CONDITIONS FOR A CHAOTIC ATTRACTOR
For any ξ∈Φ the map f_ξ has fixed points
X = (-1/τ_R-δ_R-1, δ_R/τ_R-δ_R-1),
Y = (-1/τ_L-δ_L-1, δ_L/τ_L-δ_L-1),
where X is in the right half-plane and Y is in the left half-plane.
The stability multipliers associated with X and Y
are the eigenvalues of the Jacobian matrices
A_L = [ τ_L 1; -δ_L 0 ],
A_R = [ τ_R 1; -δ_R 0 ].
Since ξ∈Φ, A_L has eigenvalues λ_L^s ∈ (-1,1) and λ_L^u>1,
while A_R has eigenvalues λ_R^s ∈ (-1,1) and λ_R^u<-1.
This implies X and Y are saddles.
In fact, Φ is the set of all parameter combinations for which
f_ξ has two saddle fixed points:
The map f_ξ has two saddle fixed points if and only if ξ∈Φ.
Let f^L and f^R denote the left and right pieces of f_ξ, respectively.
If δ_L = τ_L - 1 then f^L has no fixed points,
while if δ_L ≠ τ_L - 1 then f^L has the unique fixed point Y, given by (<ref>).
Similarly if δ_R = τ_R - 1 then f^R has no fixed points,
while if δ_R ≠ τ_R - 1 then f^R has the unique fixed point X, given by (<ref>).
If ξ∈Φ then X and Y are saddle fixed points of f_ξ, as noted above.
Conversely, suppose f_ξ has two saddle fixed points.
By the above remarks, these must be X and Y.
Since they are fixed points of f_ξ,
Y is in the left half-plane, so τ_L > δ_L + 1 by (<ref>),
and X is in the right half-plane, so τ_R < δ_R + 1 by (<ref>).
Since they are saddles, also τ_L > -(δ_L + 1)
and τ_R < -(δ_R + 1), hence ξ∈Φ.
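The quantities appearing in the lemma are straightforward to compute numerically; the following sketch (with function names of our choosing) returns the fixed points X and Y, the eigenvalues of A_L and A_R, and tests membership of Φ at an illustrative Lozi-type parameter point.

```python
import numpy as np

def in_Phi(tau_L, delta_L, tau_R, delta_R):
    """Membership test for the set Phi (two saddle fixed points)."""
    return tau_L > abs(delta_L + 1) and tau_R < -abs(delta_R + 1)

def fixed_points_and_multipliers(tau_L, delta_L, tau_R, delta_R):
    """Fixed points X, Y of the normal form and the eigenvalues of A_L, A_R."""
    X = (-1.0/(tau_R - delta_R - 1), delta_R/(tau_R - delta_R - 1))
    Y = (-1.0/(tau_L - delta_L - 1), delta_L/(tau_L - delta_L - 1))
    A_L = np.array([[tau_L, 1.0], [-delta_L, 0.0]])
    A_R = np.array([[tau_R, 1.0], [-delta_R, 0.0]])
    return X, Y, np.linalg.eigvals(A_L), np.linalg.eigvals(A_R)

# illustrative Lozi-type point: lies in Phi, and both fixed points are saddles
xi = (1.7, -0.5, -1.7, -0.5)
print(in_Phi(*xi))
print(fixed_points_and_multipliers(*xi))
```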
§.§ The stable and unstable manifolds of the fixed points
Since X and Y are saddles they have one-dimensional stable and unstable manifolds.
Their stable manifolds W^s(X) and W^s(Y)
have kinks at points where they meet the switching manifold x = 0
and on preimages of these points,
while their unstable manifolds W^u(X) and W^u(Y)
have kinks at points where they meet y = 0
(the image of the switching manifold) and on images of these points.
As each manifold (stable or unstable) emanates from the fixed point,
it coincides with the line through the fixed point with direction
given by the corresponding eigenvector of A_L or A_R.
We will use a subscript 0 to denote
the part of the manifold that coincides with this line.
These are indicated in Fig. <ref> for four different combinations of the parameter values.
Next, we describe some points where the stable and unstable manifolds
intersect x=0 and y=0 as these are central to our construction in <ref>.
For all ξ∈Φ, W^s_0(Y) has an endpoint on x=0, call it S.
Since W^s_0(Y) has slope -λ_L^u (due to the companion matrix form (<ref>)),
from the above formula for Y we obtain
S = (0, -λ_L^u/(λ_L^u - 1)).
Notice S_2 < -1
(here and throughout the paper for any P ∈ℝ^2
we write P_1 and P_2, respectively,
for its x and y components).
The manifold W^s(Y) continues into the right half-plane
but now with slope ϕ_1(ξ)/λ_L^u, where
ϕ_1(ξ) = δ_R - τ_Rλ_L^u .
Our trapping region construction requires that this
linear segment, call it W_1^s(Y), intersects y=0.
Certainly this is only possible if ϕ_1(ξ) > 0.
This inequality turns out to be sufficient to ensure
W_1^s(Y) intersects y=0 except in the case δ_L, δ_R < 0 (Fig. <ref>-d).
In this case W_1^s(Y) is the line segment
connecting S and f_ξ^-2(S).
So in this case for W_1^s(Y) to intersect y = 0 we need f_ξ^-2(S) to lie above y=0.
A straightforward calculation reveals that the y-component of f_ξ^-2(S)
is ϕ_2(ξ)/(λ_L^s (λ_L^u - 1) δ_R), where
ϕ_2(ξ) = δ_R(λ_L^s+1) - λ_L^u(τ_R + (δ_R + τ_R)λ_L^s).
Thus we require ϕ_2(ξ) > 0 (because with δ_L, δ_R < 0 we have λ_L^s (λ_L^u - 1) δ_R > 0).
In any case, if W_1^s(Y) intersects y = 0,
it does so at the point
C = ( 1 + ϕ_3(ξ)/((λ_L^u - 1) ϕ_1(ξ)), 0 ),
where
ϕ_3(ξ) = δ_R - (δ_R +τ_R - (τ_R+1)λ_L^u)λ_L^u.
Our construction also requires C_1 > 1, that is ϕ_3(ξ) > 0.
Now we consider the two unstable manifolds.
For all ξ∈Φ, W^u_0(Y) has an endpoint on y=0 at
D = (1/(1-λ_L^s), 0 ),
and W^u_0(X) has an endpoint on y=0 at
T = (1/(1-λ_R^s), 0).
These points are indicated in Fig. <ref>.
§.§ Homoclinic and heteroclinic bifurcations
In the orientation-preserving case (δ_L, δ_R > 0),
Banerjee et al. <cit.> noticed that as parameters are varied
a chaotic attractor can be destroyed when the points C and D collide.
This type of bifurcation is a homoclinic corner <cit.>
where the kinks (corners) of the unstable manifold of Y lie on the stable manifold of Y (and vice-versa).
This is analogous to a first homoclinic tangency <cit.> for smooth maps
which is well understood as a mechanism for the destruction of an attractor <cit.>.
From the above formulas for C and D we obtain
C_1 - D_1 = ϕ_4(ξ)/((λ_L^u - 1)(1 - λ_L^s)ϕ_1(ξ)),
where
ϕ_4(ξ) = δ_R - (τ_R+δ_L+δ_R - (1+τ_R)λ_L^u)λ_L^u.
So the condition ϕ_4(ξ) > 0 (equivalent to equation (5) of <cit.>) ensures that C lies to the right of D as in Fig. <ref>-a.
In the orientation-reversing case (δ_L, δ_R < 0)
a chaotic attractor can be destroyed when the points C and T collide.
Here kinks of the unstable manifold of X
lie on the stable manifold of Y (and vice-versa).
We have
C_1 - T_1 = ϕ_5(ξ)/((λ_L^u-1)(1-λ_R^s)ϕ_1(ξ)),
where
ϕ_5(ξ) = δ_R - (δ_R + τ_R - (1 +λ_R^u)λ_L^u)λ_L^u.
The condition ϕ_5(ξ) > 0 ensures that C lies to the right of T
as in Fig. <ref>-d.
In the special case of the Lozi map, ϕ_5(ξ) > 0 simplifies
(significantly) to equation (3) of Misiurewicz <cit.>.
In view of the above discussion we define
Φ_ trap = {ξ∈Φ | ϕ_i(ξ) > 0, i=1, …, 5 }.
This set is difficult to visualise because parameter space is four-dimensional.
Fig. <ref> shows four different cross-sections of Φ_ trap obtained by fixing τ_L and τ_R.
Broadly speaking the size of the cross-section decreases as the values of τ_L and |τ_R| increase.
Notice the topology of the cross-sections is different for different values of τ_L and τ_R.
For instance in Fig. <ref>-a the boundary of the cross-section is formed by ϕ_1(ξ) = 0, ϕ_2(ξ) = 0,
and the boundary of Φ,
whereas in Fig. <ref>-d the boundary is formed by ϕ_1(ξ) = 0, ϕ_3(ξ) = 0, ϕ_4(ξ) = 0, ϕ_5(ξ) = 0,
and the boundary of Φ.
The figure also includes curves on which f_ξ(C) = Y and f_ξ(C) = Z,
where Z = f_ξ(S) is the intersection of W_0^s(Y) with y=0.
Together with the δ_L and δ_R axes, these curves divide the cross-sections into six parts
corresponding to six cases for the vertices of the region Ω
that we construct in <ref>.
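The conditions defining Φ_ trap are simple sign conditions and can be checked directly; the sketch below transcribes ϕ_1,…,ϕ_5 into Python. The function names are ours, and the sample parameter point is an illustrative Lozi-type point.

```python
import numpy as np

def phi_values(tau_L, delta_L, tau_R, delta_R):
    """The quantities phi_1,...,phi_5; all must be positive for xi in Phi_trap."""
    lam_Ls = (tau_L - np.sqrt(tau_L**2 - 4*delta_L))/2   # stable multiplier of Y
    lam_Lu = (tau_L + np.sqrt(tau_L**2 - 4*delta_L))/2   # unstable multiplier of Y
    lam_Ru = (tau_R - np.sqrt(tau_R**2 - 4*delta_R))/2   # unstable multiplier of X
    return (
        delta_R - tau_R*lam_Lu,
        delta_R*(lam_Ls + 1) - lam_Lu*(tau_R + (delta_R + tau_R)*lam_Ls),
        delta_R - (delta_R + tau_R - (tau_R + 1)*lam_Lu)*lam_Lu,
        delta_R - (tau_R + delta_L + delta_R - (1 + tau_R)*lam_Lu)*lam_Lu,
        delta_R - (delta_R + tau_R - (1 + lam_Ru)*lam_Lu)*lam_Lu,
    )

def in_Phi_trap(tau_L, delta_L, tau_R, delta_R):
    in_Phi = tau_L > abs(delta_L + 1) and tau_R < -abs(delta_R + 1)
    return in_Phi and all(p > 0 for p in phi_values(tau_L, delta_L, tau_R, delta_R))

print(in_Phi_trap(1.7, -0.5, -1.7, -0.5))   # illustrative Lozi-type point
```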
§.§ Sufficient conditions for robust chaos
Our construction of an invariant expanding cone Ψ_K
requires similar constraints on the parameter values to those established above for the trapping region. To this end we define
θ_1(ξ) = ( δ_L + δ_R - τ_L τ_R )^2
- 4 δ_L δ_R ,
θ_2(ξ) = τ_L^2 + δ_L^2 - 1 + 2 τ_L min( 0, -δ_R/τ_R, q_L, ã),
θ_3(ξ) = τ_R^2 + δ_R^2 - 1 + 2 τ_R max( 0, -δ_L/τ_L, q_R, b̃),
where
q_L = -τ_L/2(1 - √(1 - 4δ_L/τ_L^2)),
q_R = -τ_R/2(1 - √(1 - 4δ_R/τ_R^2)),
and
ã = (δ_L -δ_R-τ_Lτ_R - √(θ_1(ξ)))/(2τ_R), b̃ = (δ_R -δ_L-τ_Lτ_R - √(θ_1(ξ)))/(2τ_L),
assuming θ_1(ξ) > 0.
We then define
Φ_ cone = {ξ∈Φ | θ_i(ξ) > 0, i=1, …, 3 }.
The condition θ_1(ξ) > 0 ensures θ_2(ξ) and θ_3(ξ) are well-defined,
and, as explained in <ref>, the conditions θ_2(ξ) > 0 and θ_3(ξ) > 0 ensure that our cone Ψ_K
is invariant and expanding.
Fig. <ref> shows cross-sections of Φ_ cone.
Broadly speaking the size of the cross-section
increases as the values of τ_L and |τ_R| increase.
Again the topology of the cross-sections is different for different values of τ_L and τ_R.
Similar to Fig. <ref> we have divided the cross-sections into six parts corresponding to six cases for the boundary of Ψ_K (see <ref>).
These correspond to different cases for the four quantities in each of
(<ref>) and (<ref>)
that attain the minimum (respectively, maximum) value.
Finally, we can state our main result.
For any ξ∈Φ_ trap∩Φ_ cone
the normal form f_ξ (<ref>)
has a topological attractor with a positive Lyapunov exponent.
This is proved at the end of <ref>.
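Similarly, membership of Φ_ cone can be tested by transcribing θ_1, θ_2, θ_3; combined with the Φ_ trap check above, this gives a numerical test of the hypothesis of Theorem <ref> at any given ξ. Again the function names and the sample parameter point are ours and purely illustrative.

```python
import numpy as np

def theta_values(tau_L, delta_L, tau_R, delta_R):
    """The quantities theta_1, theta_2, theta_3; all must be positive on Phi_cone."""
    theta1 = (delta_L + delta_R - tau_L*tau_R)**2 - 4*delta_L*delta_R
    if theta1 <= 0:
        return theta1, None, None
    root = np.sqrt(theta1)
    q_L = -tau_L/2*(1 - np.sqrt(1 - 4*delta_L/tau_L**2))
    q_R = -tau_R/2*(1 - np.sqrt(1 - 4*delta_R/tau_R**2))
    a_tilde = (delta_L - delta_R - tau_L*tau_R - root)/(2*tau_R)
    b_tilde = (delta_R - delta_L - tau_L*tau_R - root)/(2*tau_L)
    a = min(0.0, -delta_R/tau_R, q_L, a_tilde)
    b = max(0.0, -delta_L/tau_L, q_R, b_tilde)
    theta2 = tau_L**2 + delta_L**2 - 1 + 2*tau_L*a
    theta3 = tau_R**2 + delta_R**2 - 1 + 2*tau_R*b
    return theta1, theta2, theta3

def in_Phi_cone(tau_L, delta_L, tau_R, delta_R):
    t1, t2, t3 = theta_values(tau_L, delta_L, tau_R, delta_R)
    return t1 > 0 and t2 is not None and t2 > 0 and t3 > 0

# at the illustrative Lozi-type point all three theta values are positive
print(theta_values(1.7, -0.5, -1.7, -0.5), in_Phi_cone(1.7, -0.5, -1.7, -0.5))
```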
We have found that there are many possibilities for the topology
of cross-sections of Φ_ trap∩Φ_ cone defined by fixing τ_L and τ_R,
and we do not attempt to categorise these in this paper.
We provide one example in <ref>, where we also identify critical values of τ_L and τ_R
at which the cross-sections of Φ_ trap and Φ_ cone vanish entirely.
§ A FORWARD INVARIANT REGION AND A TRAPPING REGION
The following definition applies to any continuous map f on the plane.
Note we write int(·) for the interior of a set.
A set Ω⊂ℝ^2 is forward invariant if f(Ω) ⊂Ω.
A compact set Ω⊂ℝ^2 is a trapping region if f(Ω) ⊂ int(Ω).
In this section we construct a triangle Ω
and show that for any ξ∈Φ_ trap
this region is forward invariant under f (Proposition <ref>).
We then show there exists a perturbation of Ω
that is a trapping region for f (Proposition <ref>).
So that the non-invertible cases can be accommodated, our Ω differs from the triangle constructed by Glendinning and Simpson <cit.>
for the orientation-preserving case,
and by Misiurewicz <cit.> for the orientation-reversing case.
For clarity we now suppress the ξ-dependency and write f instead of f_ξ.
The linear segment of the stable manifold of Y that contains Y,
denoted W^s_0(Y), was shown in Fig. <ref>
for four different combinations of the parameter values.
In any case, this segment lies on the line
y = -λ_L^u x + S_2, where S_2 is the
y-component of S, given by (<ref>).
The point S is the right-most point of W^s_0(Y),
and it is easy to see that in the other direction W^s_0(Y) extends to infinity if δ_L ≥ 0
and to the preimage of S under the left piece of f otherwise:
Suppose δ_L ∈ℝ and τ_L > |δ_L + 1|.
Then
W^s_0(Y) = { (x,y) | -S_2/δ_L≤ x ≤ 0,
y = -λ_L^u x + S_2 }, δ_L < 0,
{ (x,y) | -∞ < x ≤ 0,
y = -λ_L^u x + S_2 }, δ_L ≥ 0,
and f(W^s_0(Y)) ⊂ W^s_0(Y).
In particular Z = f(S) ∈ W^s_0(Y).
This point is the intersection of W^s_0(Y) with y=0 and is given by
Z = (-1/(λ_L^u-1), 0).
Now recall if ϕ_1(ξ) > 0 and ϕ_2(ξ) > 0
then C ∈ W_1^s(Y) is given by (<ref>).
Let ξ∈Φ with ϕ_1(ξ) > 0 and ϕ_2(ξ) > 0.
Then
Y, Z, f(Z), f(C), f^2(C) ∈ W^s_0(Y) ∖{ S }.
Certainly Y, Z ∈ W^s_0(Y) ∖{ S } by construction.
Also f(Z) ∈ W^s_0(Y) ∖{ S }
because W^s_0(Y) is forward invariant (Lemma <ref>)
and Z = f(S) cannot be a preimage of S.
Since C_1 >0 we have f(C) = (τ_R C_1+1, -δ_RC_1),
and it is a simple exercise to use the formula (<ref>)
to show that f(C) lies on the line y = -λ_L^u x + S_2.
Also f(C) lies to the left of S because
f(C)_1 = [(τ_R + δ_R)(λ_L^u - 1) + τ_R]/[(λ_L^u - 1)ϕ_1(ξ)] is negative by inspection,
and in the case δ_L < 0 the point f(C) lies to the right
of the left-most point of W_0^s(Y) because
f(C)_1 + S_2/δ_L = ϕ_2(ξ)/(-λ_L^s (λ_L^u - 1) ϕ_1(ξ)) is positive.
Thus f(C) belongs to W_0^s(Y) and is not an endpoint of W_0^s(Y),
thus f(C) ∈ W^s_0(Y) ∖{ S }
and f^2(C) ∈ W^s_0(Y) ∖{ S }
using again Lemma <ref>.
Given ξ∈Φ with ϕ_1(ξ) > 0 and ϕ_2(ξ) > 0,
let Q and R be the upper-most and lower-most points
of { Y, Z, f(Z), f(C), f^2(C) }, respectively.
Then let Ω be the compact filled triangle
with vertices C, Q, and R
(except Ω is a line segment in the special case δ_L = δ_R = 0).
In other words, Ω is the convex hull of
Y, Z, f(Z), f(C), f^2(C), and C.
There are six cases for the points
that form the vertices of Ω.
These are shown in Fig. <ref> and correspond to the six parts of Φ_ trap indicated
in Fig. <ref>. Fig. <ref> also shows the set f(Ω). Notice in each case f(Ω) has vertices at the images of the points P and V where the boundary of Ω intersects x=0.
Let ξ∈Φ_ trap.
Then f(Ω) ⊂Ω.
The proof is long so we break it into three steps.
Step 1: Characterise f(Ω).
The vertices Q and R lie in the left half-plane,
while C lies in the right half-plane.
Let P denote the intersection of Q C
(the line segment from Q to C) with x=0,
and V denote the intersection of R C with x=0,
see Fig. <ref>.
From the formula (<ref>) for C,
the y-components of these points are given in terms of Q and R by
P_2 = (λ_L^u)^3 Q_2/((λ_L^u)^3 + [λ_L^u - Q_2(1-λ_L^u)] ϕ_1(ξ)),
V_2 = (λ_L^u)^3 R_2/((λ_L^u)^3 + [λ_L^u - R_2(1-λ_L^u)] ϕ_1(ξ)).
So Ω is the union of
the quadrilateral Ω_L in the left half-plane
with vertices P, Q, R, and V,
and the triangle Ω_R in the right half-plane
with vertices C, P, and V.
Thus f(Ω) = f(Ω_L) ∪ f(Ω_R),
where f(Ω_L) and f(Ω_R) are polygons
because each piece of f is affine.
Thus since Ω is convex, to prove f(Ω) ⊂Ω
it suffices to show that the vertices of
f(Ω_L) and f(Ω_R)
belong to Ω.
These vertices are the points
f(C), f(P), f(Q), f(R), and f(V).
Step 2: Show C, Q, and R map to Ω.
Certainly f(C) ∈Ω by the definition of Ω.
We now show f(Q), f(R) ∈QR
(the left edge of Ω).
If δ_L > 0 then λ_L^s > 0,
so f(Q) ∈QY
and f(R) ∈RY
so certainly f(Q), f(R) ∈QR.
Also if δ_L = 0, then f(Q) = f(R) = Y = Z ∈QR.
Finally if δ_L < 0, then f(Q) and f(R)
lie below y=0, so lie below Z, and hence below Q.
In this case Q is either f(C) or Z
(because Y, f(Z), and f^2(C) lie below y = 0),
thus f(Q) lies on or above R, by the definition of R.
Also λ_L^s < 0, thus f(R) lies above f(Q),
and hence above R.
Thus in any case f(Q), f(R) ∈QR.
Step 3: Show P and V map to Ω.
The points f(P) and f(V) lie on y=0, specifically
f(P) = [ P_2 + 1; 0 ],
f(V) = [ V_2 + 1; 0 ],
where P_2 and V_2 are given by (<ref>) and (<ref>).
Also V lies above S, thus f(V) lies to the right of Z=f(S).
Hence it remains for us to show that f(P) lies at or to the left of C,
that is C_1 - (P_2 + 1) ≥ 0.
To do this we consider the various possibilities for Q in turn.
There are three cases: Q is either Y, Z, or f(C).
This is because f(Z) cannot lie above Y, while f^2(C) cannot lie above Y if δ_L > 0
and cannot lie above Z if δ_L ≤ 0.
Case 1: With Q = Z we have P_2 = 0.
Thus C_1 - (P_2 + 1) = C_1 - 1 > 0 because ϕ_3(ξ) > 0.
Case 2: With Q = Y, by substituting Y_2, given by (<ref>),
in place of Q_2 in (<ref>) we obtain
C_1 - (P_2 + 1) = (1 + λ_L^s (λ_L^u)^2/((λ_L^u)^2(1-λ_L^s)+ϕ_1(ξ))) ϕ_4(ξ)/(ϕ_1(ξ)(λ_L^u - 1)).
This case requires δ_L ≥ 0, thus λ_L^s ≥ 0.
Also C_1 - (P_2 + 1) > 0
because ϕ_1(ξ)>0 and ϕ_4(ξ) >0.
Case 3: With Q = f(C) we similarly obtain
C_1 - (P_2 + 1) = (1 + λ_R^s (λ_L^u)^2/(λ_L^u(λ_L^u-δ_R)ϕ_1(ξ))) ϕ_5(ξ)/(ϕ_1(ξ)(λ_L^u - 1)).
This case requires δ_R ≤ 0 (so that f(C)_2 ≥ 0), thus λ_R^s ≥ 0.
Also ϕ_1(ξ)>0 and ϕ_5(ξ) >0,
thus C_1 - (P_2 + 1) > 0.
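The forward invariance established in the proposition is also easy to probe numerically. The sketch below builds the vertices C, Q, R from the formulas of <ref>, samples random points of Ω, and checks that their images lie in Ω. This is a sanity check at one illustrative parameter point (a Lozi-type point that appears to lie in Φ_ trap), not a substitute for the proof; the function names are ours.

```python
import numpy as np

def f(p, tau_L, delta_L, tau_R, delta_R):
    x, y = p
    return (tau_L*x + y + 1, -delta_L*x) if x <= 0 else (tau_R*x + y + 1, -delta_R*x)

def omega_vertices(tau_L, delta_L, tau_R, delta_R):
    """Vertices C, Q, R of Omega, assembled from Y, Z, f(Z), f(C), f^2(C)."""
    lam_Lu = (tau_L + np.sqrt(tau_L**2 - 4*delta_L))/2
    phi1 = delta_R - tau_R*lam_Lu
    phi3 = delta_R - (delta_R + tau_R - (tau_R + 1)*lam_Lu)*lam_Lu
    Y = (-1.0/(tau_L - delta_L - 1), delta_L/(tau_L - delta_L - 1))
    Z = (-1.0/(lam_Lu - 1), 0.0)
    C = (1 + phi3/((lam_Lu - 1)*phi1), 0.0)
    args = (tau_L, delta_L, tau_R, delta_R)
    pts = [Y, Z, f(Z, *args), f(C, *args), f(f(C, *args), *args)]
    Q = max(pts, key=lambda p: p[1])    # upper-most of the five points
    R = min(pts, key=lambda p: p[1])    # lower-most of the five points
    return C, Q, R

def in_triangle(p, A, B, V, tol=1e-10):
    """Barycentric point-in-(closed)-triangle test."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, A, B, V
    d = (y2 - y3)*(x1 - x3) + (x3 - x2)*(y1 - y3)
    l1 = ((y2 - y3)*(x - x3) + (x3 - x2)*(y - y3))/d
    l2 = ((y3 - y1)*(x - x3) + (x1 - x3)*(y - y3))/d
    return min(l1, l2, 1 - l1 - l2) >= -tol

xi = (1.7, -0.5, -1.7, -0.5)                         # illustrative parameter point
C, Q, R = omega_vertices(*xi)
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(3), size=5000)       # random points of Omega
for w in weights:
    p = tuple(w @ np.array([C, Q, R]))
    assert in_triangle(f(p, *xi), C, Q, R)
print("all sampled images landed in Omega")
```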
Let ξ∈Φ_ trap.
Given ε > 0 define
C_ε = C - (ε,0),
Q_ε = Q + ε^2(C - R),
R_ε = R + ε^2(C - Q),
and let Ω_ε be the compact filled triangle with vertices C_ε, Q_ε, and R_ε.
Then f(Ω_ε) ⊂ int(Ω_ε) for all sufficiently small ε > 0.
The triangle Ω_ε is shown in Fig. <ref> for one combination of parameter values.
It has been defined so that its left edge lies to the right of W_0^s(Y) and is parallel to W^s_0(Y).
Due to the saddle nature of Y, this edge shifts further to the right when iterated under f.
Also the left edge is an order ε^2 distance from W_0^s(Y)
to ensure f(C_ε) lies to the right of the image of this edge.
Step 1: Characterise f(Ω_ε).
Assume ε > 0 is sufficiently small
that Q_ε and R_ε lie to the left of x=0
and C_ε lies to the right of x=0.
Then the line segment C_ε Q_ε
intersects x=0 at a unique point P_ε,
as does C_ε R_ε at a point V_ε.
Similar to the previous proof,
it remains for us to show
that C_ε, P_ε, Q_ε, R_ε,
and V_ε map to the interior of Ω_ε.
Step 2: Show C_ε maps to the interior of Ω_ε.
Let O = (0,0) denote the origin
and I = (1,0) be its image under f.
Also let ℓ = C_ε O. This line segment maps under
the right piece of f to the line segment from f(C_ε) to I.
Since C_ε ∈ ℓ is an order ε distance from C,
its image f(C_ε) ∈ f(ℓ)
is an order ε distance from f(C), which belongs to Q R.
Since Q_ε R_ε is an order ε^2
distance from Q R,
f(C_ε) must lie to the right of Q_ε R_ε
for sufficiently small ε > 0.
Also, f(ℓ) lies inside the triangle Q R I.
Since C_ε lies to the right of I
(because ϕ_3(ξ) > 0 and assuming
ε is sufficiently small),
this triangle lies below Q_ε C_ε
and above R_ε C_ε.
Thus f(C_ε) lies below
Q_ε C_ε
and above R_ε C_ε.
Thus f(C_ε) lies inside all three edges of
Ω_ε, hence f(C_ε) ∈ int(Ω_ε).
Step 3: Show P_ε and V_ε map to the interior of Ω_ε.
Let A be any point on C Z with A ≠ C, Z.
Then there exists sufficiently small ε > 0
such that A ∈ int(Ω_ε).
We know f(P) and f(V) are located on C Z ∖{ C, Z },
because by construction Z_1 ≤ f(V)_1 < f(P)_1 < C_1,
so belong to int(Ω_ε) for sufficiently small ε > 0.
The same is true for f(P_ε) and f(V_ε)
because P_ε → P and V_ε → V as ε → 0.
Step 4: Show Q_ε and R_ε map to the interior of Ω_ε.
For brevity we just show f(Q_ε) ∈ int(Ω_ε) (f(R_ε) ∈ int(Ω_ε)
can be shown similarly). Let d_1 > 0 be the distance that Q_ε R_ε lies
to the right of QR, see Fig. <ref>.
Then f(Q_ε) lies a distance λ_L^u d_1 to the right of
QR, as Y is a saddle fixed point
with stable direction QR and unstable eigenvalue λ_L^u.
Since λ_L^u>1, the point f(Q_ε) lies to the right of the line Q_ε R_ε.
If Q ≠ Y, then f(Q) lies an order 1 distance below CQ,
thus f(Q_ε) lies below C_ε Q_ε
for sufficiently small ε > 0.
Now consider the case Q=Y.
As shown in Fig. <ref> let d_2 be the vertical displacement from f(Q_ε)
upwards to the line Q_ε C_ε
(we will show d_2 > 0).
By direct calculations d_2 = βε^2 + O(ε^3) where
β = δ_L (C_1 - R_1) (C_1 + (τ_L - 2) Y_1)
- R_2 (C_1 + (δ_L - 1) Y_1).
Let p = C_1 - D_1 and notice p > 0
by (<ref>) because ϕ_4(ξ) > 0.
Using also D_1 = ( 1 - λ_L^u ) Y_1
by (<ref>) and (<ref>), we obtain
β = δ_L (C_1 - R_1) ( p - Y_1 ( 1 - λ_L^s ) )
- R_2 ( p - Y_1 λ_L^u ( 1 - λ_L^s ) ),
which is positive by inspection (e.g. Y_1 < 0).
This shows that d_2 > 0 for sufficiently small values of ε,
that is f(Q_ε) lies below the upper edge of Ω_ε.
By similar calculations one can show that f(Q_ε) lies above
the lower edge of Ω_ε,
and therefore f(Q_ε) ∈ int(Ω_ε).
§ INVARIANT EXPANDING CONES
In this section we first define cones and what it means for them to be invariant and expanding under an arbitrary matrix A. We then focus on the Jacobian matrices A_L and A_R of the normal form (1), because in order for a cone to establish chaos in (1) it needs to be invariant and expanding for both A_L and A_R. We then explicitly construct such a cone for any ξ∈Φ_ cone (Proposition 3), and use this to prove Theorem 2.2.
A set C ⊂ℝ^2 is a cone
if α v ∈ C for all v ∈ C and α∈ℝ.
Given A ∈ℝ^2 × 2, a cone C ⊂ℝ^2 is
i)
invariant under A if A v ∈ C for all v ∈ C, and
ii)
expanding under A if there exists c > 1 such that ‖ A v ‖ ≥ c ‖ v ‖ for all v ∈ C.
In this paper we use the Euclidean norm ‖ v ‖ = √(v_1^2 + v_2^2) and it suffices to consider cones of the form
Ψ_K = {α[ 1; m ] | α∈ℝ, m ∈ K },
where K is an interval.
Since v ↦ A v is a linear map,
to verify invariance and expansion of a cone Ψ_K,
it suffices to verify properties (i) and (ii) for vectors
of the form v = [ 1; m ]:
If A v ∈Ψ_K for all v = [ 1; m ] with m ∈ K,
then Ψ_K is invariant under A.
If there exists c > 1 such that ‖ A v ‖ ≥ c ‖ v ‖ for all
v = [ 1; m ] with m ∈ K,
then Ψ_K is expanding under A.
Now we focus on the Jacobian matrices
A_J = [ τ_J 1; -δ_J 0 ],
where J ∈{ L, R }, of the normal form (<ref>).
The slope of v = [ 1; m ] is m
and the slope of A_J v is
G_J(m) = -δ_J/(τ_J+m),
assuming m ≠ -τ_J.
Notice
d G_J(m)/dm = δ_J/(τ_J+m)^2,
thus G_J(m) is increasing if δ_J > 0,
decreasing if δ_J < 0,
and flat if δ_J = 0.
In any case, G_J(m) is monotone and so in order to verify invariance
under A_J it suffices to consider the endpoints of K:
Let τ_J, δ_J ∈ℝ and K = [a,b] be an interval
with -τ_J ∉ K.
If a ≤ G_J(a) ≤ b and a ≤ G_J(b) ≤ b
then Ψ_K is invariant under A_J.
Since -τ_J ∉ K,
by (<ref>) and (<ref>) G_J(m) is continuous and monotone on K.
Thus for any m ∈ K, G_J(m) is equal to or lies between the values G_J(a) and G_J(b).
Thus a ≤ G_J(m) ≤ b. That is, the slope of A_J [ 1; m ]
belongs to K, thus A_J [ 1; m ]∈Ψ_K. Hence Ψ_K is invariant under A_J by Lemma <ref>.
Next we introduce the function
H_J(m) = ‖ A_J [ 1; m ] ‖^2 - ‖ [ 1; m ] ‖^2 = τ_J^2 + δ_J^2 - 1 + 2 τ_J m.
It is easy to show that if H_J(m) > 0 for all m in a compact interval K,
then Ψ_K is expanding under A_J.
Since H_J(m) is a linear function of m it again suffices to consider the endpoints of K:
Let τ_J, δ_J ∈ℝ and K = [a,b] be an interval.
If H_J(a) > 0 and H_J(b) > 0 then Ψ_K is expanding under A_J.
Let h = min[H_J(a),H_J(b)] > 0.
By (<ref>), H_J(m) ≥ h for all m ∈ K.
Then for any m ∈ K the vector v = [ 1; m ] satisfies
‖ A_J v ‖^2 = H_J(m) + ‖ v ‖^2 ≥ h + ‖ v ‖^2
= ( h/‖ v ‖^2 + 1 ) ‖ v ‖^2
≥ ( h/n + 1 ) ‖ v ‖^2,
where n = max_m ∈ K (1+m^2).
Thus Ψ_K is expanding under A_J
(with expansion factor c = √(h/n + 1) > 1)
by Lemma <ref>.
To prove chaos in (<ref>) we need to choose K = [a,b] so that Ψ_K is invariant under A_L and A_R.
This favours the interval K being relatively large. However, we want K to be as small as possible in order
to maximise the parameter region over which it is expanding under A_L and A_R. This balancing act motivates the following calculations that form the basis of our definition of K given below in Proposition <ref>.
For each J ∈{ L, R }, the fixed point equation G_J(m) = m is quadratic in m.
If δ_J 0 and δ_J < τ_J^2/4,
then G_J has exactly two fixed points
q_J = -τ_J/2(1 - √(1 - 4δ_J/τ_J^2)),
r_J = -τ_J/2(1 + √(1 - 4δ_J/τ_J^2)).
In order for Ψ_K to be invariant under A_L and A_R, we define K so that it contains q_L and q_R, see Fig. <ref>. So the smallest interval we consider is K = [q_L,q_R]. In the orientation-preserving case (δ_L, δ_R > 0), this interval indeed gives invariance, as shown in <cit.>. In the orientation-reversing case (δ_L, δ_R < 0),
invariance requires that K contains a period-two solution.
The equation (G_R ∘ G_L)(m) = m is quadratic in m with discriminant
θ_1(ξ) = ( δ_L + δ_R - τ_L τ_R )^2
- 4 δ_L δ_R ,
repeating (<ref>).
So if θ_1(ξ) > 0 there are two period-two solutions, Fig. <ref>-(IV).
Of these the inner-most solution is {ã,b̃}, where
ã = (δ_L -δ_R-τ_Lτ_R - √(θ_1(ξ)))/(2τ_R), b̃ = (δ_R -δ_L-τ_Lτ_R - √(θ_1(ξ)))/(2τ_L),
satisfying G_L(ã) = b̃ and G_R(b̃) = ã.
So in this case the smallest interval we can take is K = [ã,b̃], as
used by Misiurewicz <cit.>.
In the non-invertible cases
the slope maps G_L and G_R are either both non-negative or both non-positive, see again Fig. <ref>.
Thus a simple and effective choice for one endpoint of K is m=0.
In this case the smallest interval leading to invariance also uses one of q_L, q_R, or the image of m=0 under G_L or G_R:
G_L(0) = -δ_L/τ_L,
G_R(0) = -δ_R/τ_R.
Proposition <ref> shows that all cases
can be accommodated by simply defining a and b as the minimum and maximum of all points suggested above.
Recall Φ_ cone was defined in <ref> as the set of all ξ∈Φ
for which θ_1(ξ), θ_2(ξ), and θ_3(ξ) are positive.
The condition θ_1(ξ) > 0 ensures ã and b̃ are well-defined,
while, if a and b are given by (<ref>) and (<ref>),
θ_2(ξ) = τ_L^2 + δ_L^2 - 1 + 2 τ_L a = H_L(a),
θ_3(ξ) = τ_R^2 + δ_R^2 - 1 + 2 τ_R b = H_R(b),
and θ_2(ξ) > 0 and θ_3(ξ) > 0 ensure Ψ_K is invariant and expanding.
Let ξ∈Φ_ cone and K = [a,b] where
a = min[ 0, -δ_R/τ_R, q_L, ã],
b = max[ 0, -δ_L/τ_L, q_R, b̃].
Then Ψ_K is invariant and expanding under A_L and A_R.
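For completeness, the interval of this proposition can be assembled directly from the four parameters. The sketch below computes a and b from q_L, q_R, the period-two points and the images of m = 0; the numerical values are placeholders, and the formulas assume θ_1(ξ) > 0 and δ_J < τ_J^2/4 so that every candidate point is defined.

```python
from math import sqrt

def interval_K(tau_L, delta_L, tau_R, delta_R):
    """K = [a, b] as defined in the proposition above.
    Assumes theta_1 > 0 and delta_J < tau_J**2 / 4 for J in {L, R}."""
    theta1 = (delta_L + delta_R - tau_L * tau_R) ** 2 - 4.0 * delta_L * delta_R
    # Inner fixed points q_J of the slope maps G_L and G_R.
    q_L = -0.5 * tau_L * (1.0 - sqrt(1.0 - 4.0 * delta_L / tau_L**2))
    q_R = -0.5 * tau_R * (1.0 - sqrt(1.0 - 4.0 * delta_R / tau_R**2))
    # Inner period-two solution of G_R o G_L.
    a_tilde = (delta_L - delta_R - tau_L * tau_R - sqrt(theta1)) / (2.0 * tau_R)
    b_tilde = (delta_R - delta_L - tau_L * tau_R - sqrt(theta1)) / (2.0 * tau_L)
    a = min(0.0, -delta_R / tau_R, q_L, a_tilde)
    b = max(0.0, -delta_L / tau_L, q_R, b_tilde)
    return a, b

print(interval_K(1.5, -0.05, -1.4, -0.05))   # placeholder parameters
```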
To prove Proposition <ref> we first establish three lemmas. The first of these provides bounds on the other fixed points, r_L and r_R, of G_L and G_R.
Let ξ∈Φ.
Then r_L < (1 - δ_L^2 - τ_L^2)/(2 τ_L)
and r_R > (1 - δ_R^2 - τ_R^2)/(2 τ_R).
We have δ_L+1< τ_L, hence (δ_L+1)^2 < τ_L^2, and so (δ_L-1)^2 < τ_L^2 - 4δ_L. By multiplying the last two inequalities together we obtain (δ_L^2 - 1)^2 < τ_L^2(τ_L^2 - 4δ_L), so δ_L^2-1 < τ_L√(τ_L^2-4δ_L) which rearranges to
r_L < (1 - δ_L^2 - τ_L^2)/(2 τ_L)
using (<ref>).
The result for r_R follows similarly.
With the assumptions of Proposition <ref>,
a = ã if and only if δ_L ≤ 0 and δ_R ≤ 0;
similarly b = b̃ if and only if δ_L ≤ 0 and δ_R ≤ 0.
First suppose δ_L ≤ 0 and δ_R ≤ 0.
Then τ_L τ_R δ_L ≥ 0 (also θ_1(ξ) > 0 by assumption),
so we can use (<ref>) to obtain
2 τ_R (ã + τ_L) = -√(θ_1(ξ)) - √(θ_1(ξ) + 4 τ_L τ_R δ_L) < 0.
Thus ã > -τ_L (because τ_R < 0).
Also b̃ < -τ_R by a similar argument.
Notice δ_L ≤ 0 implies G_L(m) ≥ 0 for all m > -τ_L,
so q_L ≥ 0, G_L(0) = -δ_L/τ_L≥ 0, and G_L(ã) = b̃≥ 0.
Similarly G_R(m) ≤ 0 for all m < -τ_R,
so q_R ≤ 0, G_R(0) = -δ_R/τ_R≤ 0, and G_R(b̃) = ã≤ 0.
Also G_L(m) is non-increasing, so ã≤ 0 implies G_L(ã) ≥ G_L(0).
Thus b̃≥ -δ_L/τ_L≥ 0 ≥ q_R, so b = b̃.
Similarly ã≤ -δ_R/τ_R≤ 0 ≤ q_L, so a = ã.
Conversely suppose a = ã.
Then ã = G_R(b̃) ≤ 0, so δ_R ≤ 0.
Thus G_R(m) is non-increasing, so ã = G_R(b̃) ≤ G_R(0) implies b̃≥ 0.
Thus b̃ = G_L(ã) ≥ 0, so δ_L ≤ 0, as required
(also b = b̃ implies δ_L ≤ 0 and δ_R ≤ 0 in a similar fashion).
With the assumptions of Proposition <ref>, -τ_L < a and b < -τ_R.
For brevity we just show -τ_L < a (b < -τ_R can be shown similarly). Using θ_2(ξ) > 0 and Lemma <ref> we obtain
a > (1 - δ_L^2 - τ_L^2)/(2 τ_L) > r_L.
Thus if δ_L ≥ 0 then
r_L + τ_L = τ_L/2( 1 - √(1 - 4 δ_L/τ_L^2)) ≥ 0,
and so a > -τ_L.
Now suppose δ_L < 0.
If δ_R ≥ 0 then a = 0 > -τ_L,
while if δ_R < 0 then a = ã (by Lemma <ref>)
and -τ_L < ã as shown in the proof of Lemma <ref>.
We first show G_L(a) ≥ a and G_L(b) ≥ a.
If δ_L ≤ 0 then G_L(m) ≥ 0 for all m ∈ K
(using Lemma <ref>), so certainly G_L(a) ≥ a and G_L(b) ≥ a. Now suppose δ_L > 0. In this case G_L(m) ≥ m for all r_L ≤ m ≤ q_L (i.e. at and between the fixed points of G_L).
Observe r_L < a ≤ q_L by (<ref>)
and the definition of a, thus G_L(a) ≥ a.
Also G_L(m) is increasing thus G_L(b) > G_L(a) ≥ a.
Next we show G_L(a) ≤ b and G_L(b) ≤ b.
If δ_L ≥ 0 then we have G_L(m) ≤ 0 for all m ∈ K (using Lemma <ref>), so certainly G_L(a) ≤ b and G_L(b) ≤ b.
Now suppose δ_L < 0. If δ_R ≤ 0 then Lemma <ref> implies a = ã and b = b̃ = G_L(ã), so b = G_L(a); if δ_R > 0 then a = 0 and b ≥ G_L(0), so b ≥ G_L(a).
Also G_L(m) is decreasing thus G_L(b) < G_L(a) ≤ b.
Now from Lemma <ref> we can conclude that Ψ_K is invariant under A_L. Invariance under A_R can be proved in a similar fashion.
Next we prove expansion. By (<ref>), θ_2(ξ) > 0
implies H_L(a) > 0.
Also
H_L(b) = τ_L^2 + δ_L^2 - 1 + 2 τ_L b = H_L(a) + 2 τ_L (b - a)
is positive because τ_L > 0 and b ≥ a.
Thus Ψ_K is expanding for A_L by Lemma <ref>.
By a similar argument Ψ_K is also expanding for A_R.
Choose any ξ∈Φ_ trap∩Φ_ cone.
By Proposition <ref> there exists a trapping region Ω for f.
Then ⋂_n ≥ 0 f^n(Ω) is an attracting set
and contains a topological attractor Λ.
By Proposition <ref> there exists a non-empty cone Ψ_K that is invariant and expanding
for both A_L and A_R with some expansion factor c > 1.
Choose any z ∈Λ and let v ∈Ψ_K be non-zero.
The Lyapunov exponent λ(z,v) for z in the direction v
is the limiting rate of separation of the forward orbits of z and z + Δ v for arbitrarily small Δ > 0 <cit.>.
If the forward orbit of z does not intersect the switching manifold then
the derivative of the n^ th iterate of z under f is well-defined for all n ≥ 1 and
λ(z,v) = lim sup_n →∞ 1/n ln‖ D f^n(z) v ‖.
Observe
D f^n(z) = D f ( f^n-1(z) ) ⋯ D f ( f(z) ) D f(z),
where each of the n matrices on the right-hand side is either A_L or A_R.
By the invariance and expansion of Ψ_K, ‖ D f^n(z) v ‖ ≥ c^n ‖ v ‖ for all n, so λ(z,v) ≥ln(c) > 0.
If instead the forward orbit of z intersects the switching manifold,
λ(z,v) can similarly be evaluated and bounded using
one-sided directional derivatives because f is piecewise-linear, see <cit.> for details.
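The lower bound λ(z,v) ≥ ln(c) can also be probed numerically for a concrete orbit. The sketch below iterates a point under the two half-maps and accumulates the logarithmic growth of a direction vector pushed forward by the corresponding Jacobians A_L and A_R. The explicit form of the half-maps used here (with the border-collision parameter fixed to 1) and all numerical values are illustrative assumptions on our part; the normal form itself is given by (<ref>).

```python
import numpy as np

def lyapunov_estimate(z0, v0, tau_L, delta_L, tau_R, delta_R, n_iter=100_000):
    """Finite-time estimate of the directional Lyapunov exponent lambda(z, v),
    assuming the 2D border-collision normal form in the standard form
        f(x, y) = (tau_J * x + y + 1, -delta_J * x), with J = L if x <= 0, else R,
    so that Df^n along the orbit is a product of the matrices A_L and A_R."""
    A_L = np.array([[tau_L, 1.0], [-delta_L, 0.0]])
    A_R = np.array([[tau_R, 1.0], [-delta_R, 0.0]])
    b = np.array([1.0, 0.0])
    z = np.array(z0, dtype=float)
    v = np.array(v0, dtype=float)
    v /= np.linalg.norm(v)
    total, steps = 0.0, 0
    for _ in range(n_iter):
        J = A_L if z[0] <= 0.0 else A_R
        v = J @ v
        total += np.log(np.linalg.norm(v))
        v /= np.linalg.norm(v)            # renormalise to avoid overflow
        z = J @ z + b
        steps += 1
        if np.abs(z).max() > 1e8:         # orbit escaped: no bounded attractor here
            break
    return total / steps

# Placeholder parameters and initial data (small |delta_J|, for illustration only).
print(lyapunov_estimate([0.1, 0.0], [1.0, 0.0], 1.5, -0.05, -1.4, -0.05))
```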
§ FURTHER REMARKS ON THE PARAMETER REGIONS Φ_ TRAP AND Φ_ CONE.
In <ref> we described two-dimensional cross-sections of Φ_ trap and Φ_ cone defined by fixing the values of τ_L > 0 and τ_R < 0.
For the most part larger values of τ_L and |τ_R| yield smaller cross-sections of Φ_ trap and larger cross-sections of Φ_ cone, see Fig. <ref> and Fig. <ref>.
This is because with larger values of τ_L and |τ_R| the map is more strongly expanding,
hence less amenable for the existence of a trapping region
but more amenable for the existence of an invariant expanding cone.
Fig. <ref> shows critical curves in the (τ_L,τ_R)-plane where the cross-sections vanish entirely.
To explain this figure we treat the critical curves one by one. First consider (τ_L,τ_R) at a point just above the critical curve τ_R = -τ_L/(τ_L-1). Here the Φ_ trap cross-section has three vertices, P^(1), P^(2), and P^(3), as shown in Fig. <ref>-a. It is a simple exercise to show that as parameters are varied each vertex reaches the origin (δ_L,δ_R) = (0,0) on the critical curve. For instance the upper vertex is where ϕ_3(ξ) = 0 and ϕ_4(ξ) = 0 intersect at P^(1) = (0, (τ_R - (τ_R+1)τ_L)τ_L/(1 - τ_L)) and solving P^(1)_2 = 0 gives τ_R = -τ_L/(τ_L - 1). Thus here the Φ_ trap cross-section contracts to a point and vanishes.
With instead (τ_L,τ_R) at a point just to the left of τ_R = -1/τ_L - 2, the Φ_ trap cross-section has three vertices at different points P^(4), P^(5), and P^(6), as shown in Fig. <ref>-b. Explicit calculations reveal that each vertex reaches the corner (δ_L,δ_R) = (τ_L+1,τ_R-1) when τ_R = -1/τ_L - 2. For instance, P^(4) = ((τ_Lτ_R - τ_R+1)(τ_R-1)/τ_R^2, τ_R-1 ), and solving P^(4)_1 = τ_L + 1 gives τ_R = -1/τ_L - 2. Thus here the Φ_ trap cross-section again vanishes.
With τ_L just less than 1 and τ_R < -1 the Φ_ cone cross-section appears as in Fig. <ref>-c. As parameters are varied the Φ_ cone cross-section vanishes when the vertices P^(7) = ( τ_L(τ_R+1), -(τ_R+1) ) and
P^(8) = ( -√(1-τ_L^2), -(τ_R+1) ) coincide. Solving P^(7)_1 = P^(8)_1 yields the critical curve τ_R = -(τ_L+√(1-τ_L^2))/τ_L of Fig. <ref>. Similarly with τ_L > 1 and τ_R just greater than -1 the Φ_ cone cross-section appears as in Fig. <ref>-d. The cross-section vanishes when P^(9) and P^(10) coincide on τ_L = -(-τ_R+√(1-τ_R^2))/τ_R.
The geometry and topology of cross-sections of Φ_ trap∩Φ_ cone admit many possibilities
and a complete analysis is beyond the scope of this paper.
Here we examine one example, Fig. <ref>.
Here the cross-section of Φ_ trap∩Φ_ cone
is bounded by the following curves (going anticlockwise):
δ_L = τ_L - 1,
δ_R = -τ_R - 1,
ϕ_4(ξ) = 0,
ϕ_3(ξ) = 0,
ϕ_5(ξ) = 0,
and θ_2(ξ) = 0 (which has a kink at δ_L = 0).
The first two of these curves are boundaries of our overall parameter region Φ, the next three curves are boundaries of Φ_ trap,
and the last curve is a boundary of Φ_ cone.
Fig. <ref> also shows the result of a simple numerical simulation
to investigate the nature of the attractor.
For each point in a 300 × 300 equispaced grid of (δ_L,δ_R) values,
we computed 10^7 iterates of the forward orbit using a random initial condition.
Green points are where an estimate of the maximal Lyapunov exponent was positive, white points are where the orbit appeared to diverge (its norm exceeded 10^4), and other points are where there exists a stable periodic solution (determined by solving for periodic solutions exactly).
Note the numerical simulation gives an imperfect picture.
For example, we believe no attractor exists immediately to the left of the heteroclinic bifurcation ϕ_5(ξ) = 0, yet some green points are present here (with small δ_L > 0 and -3 < δ_R < -2) because orbits often experience a long transient before diverging
and 10^7 iterations are insufficient to detect this.
Nevertheless, the numerics effectively highlight the fact that three of the boundaries of Φ_ trap∩Φ_ cone
are bifurcations where the chaotic attractor is destroyed, so these boundaries cannot be improved upon.
As we cross δ_R = -τ_R - 1 the fixed point X becomes stable, ϕ_4(ξ) = 0 is a homoclinic bifurcation where the attractor is destroyed <cit.>, and ϕ_5(ξ) = 0 is a heteroclinic bifurcation where the attractor is destroyed.
Elsewhere the attractor is destroyed in a variety of other global bifurcations.
Above ϕ_3(ξ) = 0 and to the right of δ_L = τ_L - 1 the trapping region construction of <ref> fails.
An alternate construction that partially deals with this is described in <cit.>.
Below θ_2(ξ) = 0 the cone Ψ_K of <ref> fails to be expanding.
For some parameter combinations below θ_2(ξ) = 0 it is possible to construct an invariant expanding cone, and hence verify the presence of chaos, by using an induced map <cit.>.
§ DISCUSSION
We have extended the constructive proof of <cit.> for robust chaos in the border-collision normal form to the orientation-reversing and non-invertible settings. Specifically we identified an open parameter region Φ_ trap, where the map has a trapping region, and an open parameter region Φ_ cone, where it has an invariant expanding cone, see Figs. <ref> and <ref>. Throughout Φ_ trap∩Φ_ cone the map has an attractor with a positive Lyapunov exponent, Theorem <ref>.
In Fig. <ref> we considered a typical two-dimensional cross-section of parameter space. Numerical results showed robust chaos throughout Φ_ trap∩Φ_ cone, corroborating Theorem 2.2 as expected. The robust chaos terminates at three boundaries of Φ_ trap∩Φ_ cone: δ_R = -τ_R-1, ϕ_4(ξ) = 0, and ϕ_5(ξ) = 0, and appears to persist beyond the other boundaries. We expect our construction could be adapted to verify robust chaos beyond these boundaries, and already this has been achieved in some cases <cit.>.
Notice in Fig. <ref> the cross-section of Φ_ trap∩Φ_ cone includes a neighbourhood of (δ_L,δ_R) = (0,0). This is the case for many values of τ_L and τ_R and is a significant achievement of this paper because it shows robust chaos is not lost as we cross from the orientation-preserving setting to the orientation-reversing and non-invertible settings. Thus the presence of robust chaos is not dependent on the global topological properties of the map. Moreover, this provides a path for robust chaos to be demonstrated in higher dimensional maps. The n-dimensional border-collision normal form <cit.> can have two saddle fixed points: X with an eigenvalue λ_R^u < -1, and Y with an eigenvalue λ_L^u > 1. If all other eigenvalues associated with X and Y are sufficiently small in absolute value (which in two dimensions means (δ_L,δ_R) is sufficiently close to (0,0)), and with appropriate constraints on the values of λ_R^u and λ_L^u, we believe the map must have a chaotic attractor, and this is a tantalising avenue for future research.
§ ACKNOWLEDGMENTS
This work was supported by Marsden Fund contract MAU1809, managed by Royal Society Te Apārangi.
|
http://arxiv.org/abs/2307.04094v1 | 20230709043319 | Class-Incremental Mixture of Gaussians for Deep Continual Learning | [
"Lukasz Korycki",
"Bartosz Krawczyk"
] | cs.LG | [
"cs.LG",
"I.5.0; I.5.1"
] |
Class-Incremental Mixture of Gaussians for Deep Continual Learning
Lukasz Korycki
Virginia Commonwealth University
[email protected]
Bartosz Krawczyk
Virginia Commonwealth University
[email protected]
August 12, 2023
==================================================================================================================================================
Continual learning models for stationary data focus on learning and retaining concepts coming to them in a sequential manner. In the most generic class-incremental environment, we have to be ready to deal with classes coming one by one, without any higher-level grouping. This requirement invalidates many previously proposed methods and forces researchers to look for more flexible alternative approaches. In this work, we follow the idea of centroid-driven methods and propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework. By employing the gradient-based approach and designing losses capable of learning discriminative features while avoiding degenerate solutions, we successfully combine the mixture model with a deep feature extractor allowing for joint optimization and adjustments in the latent space. Additionally, we show that our model can effectively learn in memory-free scenarios with fixed extractors. In the conducted experiments, we empirically demonstrate the effectiveness of the proposed solutions and exhibit the competitiveness of our model when compared with state-of-the-art continual learning baselines evaluated in the context of image classification problems.
§ INTRODUCTION
While the initial research done in the domain of continual learning from stationary data was, in large part, oriented towards task-incremental solutions, more recent works attempt to address generalized cases consisting of purely class-incremental and data-incremental (also known as domain-incremental) settings <cit.>. These scenarios are usually more universal but also more challenging and restrictive mainly due to the lack of task or even class labels. Such settings make many of the previously proposed solutions practically useless, for example, the methods based on memory-free regularization <cit.>, which are not capable of discriminating between older and new classes, even if they address the catastrophic forgetting problem <cit.>. Although the most standard experience replay methods can be effectively applied in the class-incremental scenarios <cit.>, there has been also a search for alternative approaches that could provide natural capabilities required for such cases. A significant group of methods can be identified based on their reliance on centroids (or prototypes) combined with the nearest-centroid classification methods <cit.>. Since centroids can be independently added to the classifier, they are examples of methods that can be very smoothly incorporated into class-incremental scenarios, offering almost no interference in the latent space.
In this work, we explore an advanced version of these alternatives by proposing the integration of a gradient-based Gaussian mixture model with a class-incremental deep continual learning framework, called MIX. In fact, it requires us to tackle three major problems at the same time: (i) gradient-based mixture training, (ii) combining it with a trainable deep feature extractor and, finally, (iii) making it suitable for class-incremental scenarios. To achieve these goals, we introduce a set of dedicated losses, configurations and methods, providing a probabilistic classifier on top of a feature extractor and within a model capable of learning end-to-end. This opens many potential research directions that could exploit the well-modeled statistical properties of Gaussians. In addition to that, we show that our class-incremental mixture model, analogously to the centroid-driven algorithms, is characterized by some inherent properties useful in continual learning scenarios. These properties allow for much better separation of concepts at the level of the classification module, leading to significant improvements in memory-free scenarios when pre-trained extractors are used. Through an extensive empirical study, we analyze different configurations of our method, provide the reader with some intuition about its parameters and show its competitiveness in the context of other continual learning algorithms.
§ RELATED WORKS
Continual learning: In continual learning, our focus should be on effective incorporation of the arriving data and retention of the acquired knowledge <cit.>. The main problem that learning algorithms will encounter here is catastrophic forgetting <cit.>. The most straightforward approaches involve replaying instances of previously seen tasks or classes while learning new ones <cit.>. Instead of putting instance-level constraints on the learning directions, we can apply direct adjustments to the loss using dedicated regularization terms. The most commonly used approach involves utilizing the knowledge-distillation loss <cit.> combined with standard cross-entropy <cit.> or maintaining importance weights to distinguish parameters that are crucial for the retention <cit.>. These methods generally cannot be used in more realistic class-incremental or data-incremental scenarios (if they do not use memory buffers), since they cannot learn how to discriminate new instances from the older ones <cit.>. Other approaches may employ masking to isolate parameters per task to keep them static when learning new ones <cit.>, use dynamic structures to expand the network for new concepts <cit.>, utilize ensemble techniques <cit.> or meta-learning and hypernetworks <cit.>. Finally, interesting alternative approaches focus on hybridizing the neural networks with different machine learning methods, e.g. decision trees <cit.> or centroid-driven algorithms <cit.>. The latter group of methods has been found especially useful in one-class-incremental scenarios, since, as mentioned in the introduction, centroids can be stored independently per class, allowing for natural class-incremental learning without additional interference at the level of a classifier. In this work, we follow these approaches and replace basic centroids learned separately from the feature extractor with more complex end-to-end mixture models.
Mixture optimization: Various techniques can be applied for the task of fitting the mixture model to given data. The most standard approach utilizes the EM algorithm, which can be realized in both offline and online settings <cit.>. While EM provides a stable framework for learning the mixtures – in terms of mathematical constraints and convergence – it is critically limited when it comes to working with high-dimensional data and feasible memory consumption <cit.>. On top of that, this algorithm is intrinsically incapable of being fully integrated with neural networks, preventing it from achieving joint end-to-end deep learning and benefiting from dedicated features. An alternative approach involves gradient-based optimization <cit.>. This method has been proved to be able to provide more scalable and flexible algorithms capable of working in challenging scenarios with high-dimensional data and in online settings. Most importantly, the gradient-based approach naturally enables combining the model as a classifier with a trainable deep feature extractor <cit.>, allowing for extending the optimization process with the input space adjustments. Methods utilizing such a compound learning process showed much evidence of its usability in offline and unsupervised scenarios, while at the same time encouraging researchers to develop further extensions and improvements <cit.>. Given all of the characteristics, we decided to use this approach in our scenario of continual learning.
§ MIXTURE OF GAUSSIANS FOR CLASS-INCREMENTAL LEARNING
Formally, the general goal of our work is to incrementally learn a classification model defined as ϕ^(t): 𝒳→𝒞 that can effectively incorporate subsequent class batches ⟨ (X^(1), c=1), (X^(2), c=2), ..., (X^(t), c=t)⟩, where X^(t) contains instances x only for a given class c. After t classes the model ϕ^(t) should aim at minimizing the loss for the current class c=t and all previously observed ones:
ℒ^(t) = ∑_c=1^t∑_n=1^N_cℒ^(c)(ϕ^(t)(x_n^(c))),
where x_n^(c)∈X^(c) and ℒ^(c) can be any supervised loss.
Additionally, since we are interested in deep learning, we define the whole trainable model as a tuple ϕ^(t)=⟨ℱ^(t), 𝒢^(t)⟩ consisting of a feature extractor ℱ^(t) and a classifier 𝒢^(t) jointly aggregating knowledge from t classes. The model makes prediction by classifying the features provided from the extractor ϕ^(t)(x)=𝒢^(t)(ℱ^(t)(x))=𝒢^(t)(x̂). In this work, we aim at employing the mixture of Gaussians as a jointly trained incremental classifier. Although the model learns from dedicated features x̂, in the next section, we use x for the sake of simplicity of notation.
§.§ Generic supervised mixture model
Formally, in a standard unsupervised setting the density for a given point 𝐱 can be expressed using a multivariate normal distribution defined as:
𝒩(𝐱|μ_k, Σ_k) = 1/√((2π)^D|Σ_k|)
× exp(-1/2(𝐱-μ_k)^TΣ_k^-1(𝐱-μ_k)),
where μ and Σ are its mean and covariance, and D is the size of the input (number of dimensions). The Gaussian mixture models (GMM) have been designed to approximate more complex multivariate densities by decomposing them into K components:
p(𝐱) = ∑^K_k=1ω_k𝒩(𝐱|μ_k,Σ_k),
where each of them is defined using a single Gaussian defined above and ω_k are their weights. The combined model, equipped with more degrees of freedom, should be capable of providing more accurate expressions of the overall observed distributions than a simpler approach utilizing only a single component. In such a framework, the fitting of the mixture to given data X is based on minimizing the loss defined using the log-likelihood function:
ℒ̅ = -log p(X|ω,μ,Σ)
= -1/N∑^N_n=1log∑^K_k=1ω_k
𝒩(𝐱_n|μ_k,Σ_k),
where we adjust the free parameters of the model – means μ, covariance matrices Σ and weights ω. To adapt the given framework to supervised scenarios we can simply specify a separate mixture model for each class c:
p(𝐱 | c) = ∑^K_k=1ω_k^(c)𝒩(𝐱|μ_k^(c),Σ_k^(c)),
and focus on minimizing the aforementioned loss also per class ℒ̅^(c):
ℒ̂ = ∑_c=1^Cℒ̅^(c) = -∑_c=1^Clog p(X^(c)|ω^(c),μ^(c),Σ^(c)),
where X^(c) are N_c class-specific observations.
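To make the optimization target concrete, the minimal PyTorch sketch below evaluates the weighted per-component log-densities and the resulting class-wise negative log-likelihood for a diagonal-covariance mixture (the variant reported as most stable later in the paper). The tensor names, shapes and the diagonal parameterization are our own illustrative choices, not the authors' released implementation; the helper is reused in the sketches that follow.

```python
import math
import torch

def weighted_log_densities(x, mu, var, logit_w):
    """log(w_k * N(x_n | mu_k, diag(var_k))), returned with shape (N, K).
    x: (N, D) features, mu: (K, D) means, var: (K, D) variances, logit_w: (K,)."""
    log_w = torch.log_softmax(logit_w, dim=0)      # mixing weights sum to one
    diff = x.unsqueeze(1) - mu.unsqueeze(0)        # (N, K, D)
    log_gauss = -0.5 * ((diff ** 2 / var).sum(-1)
                        + var.log().sum(-1)
                        + x.shape[1] * math.log(2.0 * math.pi))
    return log_w + log_gauss

def class_nll(x, mu, var, logit_w):
    """Class-wise negative log-likelihood, averaged over the batch."""
    return -torch.logsumexp(weighted_log_densities(x, mu, var, logit_w), dim=1).mean()
```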
In continual learning we should aim at minimizing the interference of current updates with previously created models to alleviate the detrimental effect of catastrophic forgetting. Therefore, it is worth mentioning here that GMMs create such an opportunity by allowing for maximizing the log-likelihood only for a currently learned class through ℒ̅^(c). It provides a perfect separation at the level of the classification model.
§.§ Mixture optimization for class-incremental deep learning
In order to apply gradient-based learning to GMM in class-incremental deep learning scenarios, we have to address several different issues. Some of them are common for all GMM models using gradient-based learning, while others are specific for the class-incremental deep learning settings.
In general, we say that our goal is to optimize the class-incremental joint model ϕ^(t)=⟨ℱ^(t), 𝒢^(t)⟩, defined at the beginning of Sec. <ref>, using some supervised loss ℒ. Since we set 𝒢^(t) = 𝒩^(t), where 𝒩^(t) is a whole GMM model, we have ϕ^(t)(x)=𝒩^(t)(ℱ^(t)(x)). The trainable parameters are weights ∂ℒ / ∂W and biases ∂ℒ / ∂b for the extractor, and means ∂ℒ / ∂μ, covariance matrices ∂ℒ / ∂Σ and component weights ∂ℒ / ∂ω for the classifier. All of the subsequent paragraphs focus on designing optimization in the classifier (mixture) space, as it was introduced in Sec. <ref>.
§.§.§ Loss design
Max-component: It has been shown that optimizing the full loss ℒ̅^(c) given in Eq. <ref> may lead to some numerical instabilities, especially for high-dimensional data <cit.>. To address this issue a max-component approximation can be used. This approach is very straightforward. Since all p(x|c,k) in Eq. <ref> are positive, any component provides a lower bound for the whole sum used in ℒ̅^(c). If for every point x_n we find a component providing the highest log-likelihood and sum all of them, we will get the largest (max-component) lower bound <cit.>:
ℒ^(c)_max = -1/N_c∑^N_c_n=1max_klog(ω_k^(c)𝒩(𝐱^(c)_n|μ_k^(c),Σ_k^(c))).
Since we can state that ℒ^(c)_max≥ℒ̅^(c), we are able to minimize ℒ̅^(c) by focusing only on ℒ^(c)_max. It is also worth mentioning that just like the general formula given in Eq. <ref> may eliminate the interference with previously learned classes, the max-component approximation can limit the same issue at the level of class components, for example, in data-incremental scenarios <cit.>, making this approach a natural candidate for continual learning settings.
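In code, the max-component bound simply replaces the log-sum-exp over components with a maximum, as in the following sketch (building on the helper defined above):

```python
def max_component_nll(x, mu, var, logit_w):
    """Max-component approximation of the class NLL: for every sample keep only
    the best-scoring weighted component instead of summing over all of them.
    Uses weighted_log_densities() from the earlier sketch."""
    log_wd = weighted_log_densities(x, mu, var, logit_w)   # (N, K)
    return -log_wd.max(dim=1).values.mean()
```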
Inter-contrastive loss: All of the introduced losses are limited to scenarios either without a feature extractor or with a fixed pre-trained one. Unfortunately, if we operate in a setting where we can modify the input space of the mixture model and we utilize any of the aforementioned metrics relying entirely on maximizing log-likelihood, we will inevitably end up with a local minimum that for a joint model ϕ^(t) exists for ∀x(𝒢^(t)(x) = 0). This issue can be solved by incorporating an inter-contrastive loss that will distance representations for different classes. We define the loss as:
ℒ^(c)_ie = 1/N_cmax_j ≠ c∑^N_c_n=1max_klog(ω_k^(j)𝒩(𝐱^(c)_n|μ_k^(j),Σ_k^(j))),
which boils down to finding the closest component in other classes, and then optimizing against the class that on average is the closest to the one currently being considered. We keep the log-likelihood to ensure a similar numerical space of loss values as the one for the positive part given in Eq. <ref>. However, now one should notice that minimizing such a loss may very easily destabilize learning since optimization will gravitate towards ℒ̅^(c)_ie→ -∞ preventing the model from actually fitting to the class examples. To avoid it we introduce a tightness bound τ that clips the contrastive loss value at some pre-defined point ℒ^(c)_ie(τ) = max(τ, ℒ^(c)_ie). This basically means that we stop the decrease of the contrastive loss below the given bound, allowing for a more significant contribution of the actual fitting part ℒ^(c)_max. We parametrize the τ value with a simple linear transformation τ = p̅_max^(c) - 1/τ_p, where p̅_max^(c) is the average maximum density value observed across all class components (can be obtained on-the-fly) and τ_p is a tunable hyperparameter that takes values between ( 0,1 ⟩. Such a loss can provide effective discrimination between components of different classes, as shown for an example in Appendix A.
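A sketch of this term is given below. It builds on the weighted_log_densities helper from above; for simplicity the tightness bound τ is passed in directly instead of being derived from p̅_max^(c) and τ_p as described in the text.

```python
def inter_contrastive(x_c, other_params, tau):
    """Inter-contrastive term for the current class c.
    x_c: (N_c, D) features of class c; other_params: list of (mu, var, logit_w)
    tuples, one per other class; tau: tightness bound clipping the loss from below.
    Uses weighted_log_densities() from the earlier sketch."""
    per_class = [weighted_log_densities(x_c, mu, var, lw).max(dim=1).values.mean()
                 for (mu, var, lw) in other_params]
    closest = torch.stack(per_class).max()     # other class closest to c on average
    return torch.clamp(closest, min=tau)       # max(tau, L_ie): stop pushing below tau
```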
Diverse components: While all of the introduced techniques and modifications ensure reliable discrimination between components of different classes, they do not consider differentiation between components of the same class or their quality. In fact, even in offline gradient-driven settings without dynamic feature extraction it is common to obtain mixtures reduced to a single component per class with all the others practically meaningless, e.g., due to zeroed weights <cit.>. In scenarios with a trainable extractor, this problem becomes even more significant as it is very easy for the optimizer to focus on maximizing log-likelihood from a single component, as both mixture model and flexible extractor lack constraints to prevent this. While in standard scenarios this problem can be successfully addressed with a good initialization method, e.g., using k-means <cit.>, we observed that it was not enough in our case. As a consequence, we introduced two elements to the learning process.
Regionalization – before learning each class, we first divide it into K clusters using the k-means clustering. Then we force each component to fit only to the data from its cluster called a region ℛ^(c)_k. This replaces the max-component loss ℒ^(c)_max defined in Eq. <ref> with:
ℒ^(c)_reg = -∑^K_k=11/N_k∑_x∈ℛ^(c)_klog(ω_k^(c)𝒩(𝐱|μ_k^(c),Σ_k^(c))).
Intra-contrastive loss – the regionalization approach is necessary yet not sufficient to provide sufficient diversification between same-class components. The reason for it is the same as for discrimination between different classes, as described in the previous paragraph. Analogously to the inter-contrastive loss, we add the intra-contrastive loss with the tightness bound τ:
ℒ^(c)_ia(τ) = ∑^K_k=1max( τ, max_m ≠ k1/N_k
×∑_x∈ℛ^(c)_klog(ω_m^(c)𝒩(𝐱|μ_m^(c),Σ_m^(c)))),
Such an approach can effectively increase the diversity of the same-class components, as given for an example in Appendix A. However, this approach imposes a hard constraint on how the representation and mixture may look, which limits the flexibility of the whole model. Regardless of these concerns, this method can still effectively improve the overall performance of a multi-component model over a method without the proposed improvement, as we will show in our extensive experiments.
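The two ingredients can be sketched as follows, again building on the weighted_log_densities helper. The regions are assumed to have been produced beforehand, e.g., by running k-means on the features of the current class; this is an illustrative sketch rather than the authors' code.

```python
def region_fit_nll(regions, mu, var, logit_w):
    """Regionalization term: component k is fitted only to its own k-means region
    R_k of the current class. regions: list of K tensors of shape (N_k, D).
    Uses weighted_log_densities() from the earlier sketch."""
    loss = 0.0
    for k, x_k in enumerate(regions):
        loss = loss - weighted_log_densities(x_k, mu, var, logit_w)[:, k].mean()
    return loss

def intra_contrastive(regions, mu, var, logit_w, tau):
    """Intra-contrastive term: for each region, push away the closest other
    component of the same class, clipped from below by the tightness bound tau."""
    loss = 0.0
    for k, x_k in enumerate(regions):
        log_wd = weighted_log_densities(x_k, mu, var, logit_w)        # (N_k, K)
        others = [log_wd[:, m].mean() for m in range(mu.shape[0]) if m != k]
        loss = loss + torch.clamp(torch.stack(others).max(), min=tau)
    return loss
```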
Final component-based losses: To summarize, we distinguish two component-based losses. One uses the max-component approach (MC):
ℒ_mc = ∑_c=1^tℒ_max^(c) + ℒ_ie^(c)(τ_ie),
while the second loss adds the regionalization technique with the intra-contrastive part (MCR):
ℒ_mcr = ∑_c=1^tℒ_reg^(c) + β(ℒ_ie^(c)(τ_ie) + ℒ_ia^(c)(τ_ia)).
Cross-entropy loss: Last but not least, we can also attempt to directly optimize the whole standard loss ℒ̂ given in Eq. <ref>, using a high-level supervised wrapper loss, e.g., based on cross-entropy (CE). In such a case, our loss is defined as:
ℒ_ce = -∑_c=1^t∑_n=1^N_cy_n^(c)logŷ_n^(c),
where y is a one-hot target vector and ŷ_n^(c) comes from the softmax function ŷ_n^(c) = e^p_n^(c)/∑_c=1^te^p_n^(c) and p_n^(c)=p(x_n|c) is a density value for a given class produced by the mixture model accordingly to Eq. <ref>.
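A corresponding sketch is given below. For numerical stability it feeds the class log-likelihoods to the softmax, a small deviation from the density values p_n^(c) written above; everything else follows the formula.

```python
import torch.nn.functional as F

def ce_over_classes(x, class_params, targets):
    """Cross-entropy wrapper over per-class mixture scores.
    class_params: list of (mu, var, logit_w), one entry per class seen so far;
    targets: (N,) integer labels. The class log-likelihoods are used as logits.
    Uses weighted_log_densities() from the earlier sketch."""
    scores = torch.stack(
        [torch.logsumexp(weighted_log_densities(x, mu, var, lw), dim=1)
         for (mu, var, lw) in class_params], dim=1)      # (N, C)
    return F.cross_entropy(scores, targets)
```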
§.§.§ Constraints
Other issues that have to be addressed when using gradient-based mixture training are the mathematical constraints that have to be enforced to preserve a valid mixture model. This is required since gradient-based learning does not constrain the possible values for means, covariance matrices and weights, and the last two have to remain in a specific range of values.
Component weights: For the GMM model its component weights ω_k have to sum up to one: ∑_k=1^Kω_k=1. To ensure that the effective weights satisfy this requirement we simply train auxiliary free parameters ω̂_k and use the softmax-based normalization ω_k = e^ω̂_k/∑_j=1^Ke^ω̂_̂ĵ to obtain required values <cit.>.
Covariance matrices: For a general case, the covariance matrices of the GMM model should be symmetric positive definite, i.e., v^TΣv > 0 for all nonzero vectors v. This can be enforced using the Cholesky decomposition <cit.> Σ = AA^T, where A is a triangular matrix with positive diagonal values a_ii > 0 and, at the same time, our trainable proxy parameter. To enforce positive diagonal values, after each gradient-based update we clamp them with a_ii = max(a_ii, d_min) using some predefined d_min value. Finally, we also consider a case of a mixture using only the diagonal of the covariance – variance σ, which we control using the same clamp-based approach σ_i = max(σ_i, d_min).
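A sketch of the corresponding projection step is given below; the in-place style and variable names are our own choices.

```python
import torch

def project_mixture_parameters(chol_diag=None, var=None, d_min=1e-3):
    """Projection applied after every optimizer step to keep the mixture valid.
    chol_diag: (K, D) diagonal of the Cholesky factor A when a full covariance
    Sigma = A A^T is kept; var: (K, D) variances when only the diagonal is kept.
    Component weights need no projection here because they are stored as free
    logits and normalized with a softmax inside the density computation."""
    with torch.no_grad():
        if chol_diag is not None:
            chol_diag.clamp_(min=d_min)   # keep the Cholesky diagonal at least d_min
        if var is not None:
            var.clamp_(min=d_min)         # keep the variances at least d_min
```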
§.§ Memory buffer
In our work, we consider the class-incremental scenario with strictly limited access to previously seen observations (classes). Therefore, in all of the introduced losses we use all available data for the currently learned class t, while for the others we sample from the memory buffers ℳ_c that store an equal number of examples per each previously seen class. On the other hand, if the feature extractor is pre-trained and static we could remove the inter-contrastive loss and even get rid of the memory buffer, allowing for memory-free training, as we will show in our experimental study. The memory buffer is needed in a general case when we assume the joint training of the whole model.
§.§ Classification
Finally, in the presented model, the classification of an instance x_n can be performed using two approaches, either utilizing the softmax function ŷ_n^(c) = e^p_n^(c)/∑_c=1^te^p_n^(c), where p_n^(c) = p(x_n|c), or by taking the weighted support of the closest component ŷ_n^(c) = max_kω_k^(c)𝒩(𝐱_n|μ_k^(c),Σ_k^(c)). We will empirically show that these methods work best with specific losses designed in the previous sections.
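Both rules reduce to an argmax over per-class scores, as in the sketch below (the softmax and the logarithm are monotone, so working with log-likelihoods does not change the predicted label).

```python
def predict(x, class_params, use_softmax=True):
    """Classify features with either the softmax rule or the weighted closest
    component rule. class_params: list of (mu, var, logit_w) per class.
    Uses weighted_log_densities() from the earlier sketch."""
    if use_softmax:
        scores = torch.stack(
            [torch.logsumexp(weighted_log_densities(x, mu, var, lw), dim=1)
             for (mu, var, lw) in class_params], dim=1)
    else:
        scores = torch.stack(
            [weighted_log_densities(x, mu, var, lw).max(dim=1).values
             for (mu, var, lw) in class_params], dim=1)
    return scores.argmax(dim=1)             # (N,) predicted class indices
```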
§ EXPERIMENTAL STUDY
In our experiments, we empirically explore all of the introduced methods and parameters and put our method in the performance context of different state-of-the-art baselines. We show how our model performs in end-to-end scenarios and with a pre-trained extractor, compared with other solutions. For more specific details regarding data, configurations and results, please refer to Appendix A and B, as well as to our repository containing source code for our methods and all experiments: (please check the source code provided in the supplementary materials, a public URL will be added later). All of the experiments were conducted using 4 GPUs (Tesla V100) that were part of an internal cluster.
§.§ Setup
For the purpose of the evaluation we selected commonly utilized image classification datasets that were turned into class-incremental sequences by presenting their classes subsequently to the models <cit.>. We used: MNIST, FASHION, SVHN, CIFAR and IMAGENET datasets using various variants (number of classes, pre-trained features). For the analysis of different configurations of our model we used shorter sequences. We extended them with the longer benchmarks for the comparison with baselines.
In the final section of this work, we compared our class-incremental Gaussian mixture model (MIX-MCR, MIX-CE) with other classifiers dedicated for continual learning scenarios. We considered: standard experience replay (ER) <cit.>, experience replay with subspaces (ERSB) <cit.>, centroid-based iCaRL <cit.>, two gradient-based sample selection methods (GSS and A-GEM) <cit.>, experience replay combined with knowledge distillation and regularization (DER) <cit.>, and two purely regularization-based approaches – LWF <cit.> and SI <cit.>. Most of the algorithms were implemented as wrappers of the source code provided in <cit.> under MIT License. For the last two we used their modifications adjusted for single-task learning <cit.>. As our lower bound we used a naively learning net (NAIVE), and for the upper bound we present results for the offline model (OFFLINE).
We evaluated the presented methods in a class-incremental setting, where all of the classes were presented to the models subsequently and were not shown again after their initial appearance. We measured the accuracy of a given algorithm after each class batch, utilizing holdout testing sets, and then, based on <cit.>, used it to calculate the average incremental accuracy over the whole sequence:
Ω_all = 1/T∑_t=1^Tα_t,
where α_t is the model performance after t classes and T=C is the total number of classes. In addition to the whole aggregation, for the final comparison, we provided these values after each batch to present a more complete perspective of the obtained results.
§.§ Results
In this section, we present and describe all of the results that were obtained for the experiments introduced in the previous paragraphs. The first part consists of the analysis of different configurations of MIX, while the second one focuses on a comparison with other class-incremental algorithms.
Loss and classification:
We analyzed different combinations of the proposed losses and classification methods. Based on Fig. <ref>, we can make three major observations. Firstly, the softmax classification works significantly better with the CE loss, and max-component can be more efficiently paired with MC and MCR than softmax. It was evident for almost all cases (except for MC on CIFAR10) and resulted in almost 0.15 difference on average between softmax and max-component for CE, and about 0.05 for MC and MCR.
Secondly, the MCR loss performed better than MC, showing consistent improvements, especially for more complex datasets like SVHN, CIFAR10 or IMAGENET10, which resulted in more than 0.1 for a difference on average. This demonstrate that the regionalization and intra-contrastive loss are capable of providing meaningful improvements over simpler MC loss utilizing only max-component and inter-contrastive elements, and that ensuring higher diversity among class components can be beneficial to the model.
Finally, we can see that CE with softmax could provide very similar results as MCR with max-component, which means that the general GMM learning formula, wrapped with a high-level supervised loss, can be sometimes as useful as more complex MCR without the need for tuning additional parameters. One drawback of using CE, however, is the fact that it does not model the Gaussian mixtures well (see Appendix B for additional visualizations). The CE loss does not really have to fit the mixtures to the data since it is enough for it to ensure high classification quality. We can also observe a similar behavior for the MC loss. It may be prohibitive if one wants to obtain a reliable description of the latent space. The MCR loss achieves both objectives at the same time: high classification accuracy and high quality of the mixture models for features. This may be important if someone requires interpretable models or would like to extend the proposed algorithm with some Gaussian-oriented techniques that MCR may enable. Furthermore, we believe that analyzing its probabilistic properties in detail could be a part of incremental works built on top of the mixture model. They could utilize its well-defined characteristics, e.g. by proposing new mixture-based losses.
Tightness:
In Fig. <ref>, we presented a grid of values for the average incremental accuracy per each pair of inter- and intra-tightness for every dataset. One can clearly see that imposing the constraint (tightness) on the inter- and intra-contrastive loss values is beneficial to the learning process. Most of the benchmarks required τ_p, ie at the level of 0.0001 or 0.001 and slightly higher intra-tightness τ_p, ia around 0.001 or 0.01 to achieve the best results. At the same time, one should notice that imposing too high inter-tightness (0.01) leads to abrupt deterioration of quality, which is a result of blocking the contrastive part of the loss from pushing components of different classes from each other. The influence of setting too high intra-tightness is less important since we may simply end up with a single component that can still be effectively used for classification.
The examples for FASHION, given in Fig. <ref> and <ref>, show how increasing the inter-tightness (the first one) and intra-tightness (the second one) affects learned representations and mixture models. We can observe the positive impact of the constraint and the potential for sweet spots providing a good balance between differentiating components between each other and fitting them to the actual data. It is evident that too low values introduce critical instabilities to the learning process (very high contrastive loss values overwhelming the fitting part), while too high thresholds lead either to the decline of discriminative properties of the model or degenerate solutions.
Baseline comparison:
In the second section of our experimental study, we placed our algorithm in the class-incremental performance context by comparing it with the introduced baselines (Fig. <ref>). First of all, we can see that the MIX-MCR variant performed better than the MIX-CE for most of the datasets, while being very close to it for the longer sequences (difference between less than 0.01 and 0.03). This proves that MIX-MCR is capable of providing not only a better representation (mixture) model but also that it is more reliable from the accuracy perspective. This also means that it is worth trying to maximize the quality of the produced Gaussian models as an alternative to high-level cross-entropy for classification. Secondly, although our model cannot be distinguished as the best classifier (being worse than iCaRL on average, with a difference equal to about 0.04), it is, at the same time, reliably competitive when compared with the remaining baselines (ER, GSS, DER) with a difference about 0.01 and less than 0.03. Also, it does not fall into the same pitfalls as either the weakest replay method (A-GEM) or the regularization-based ones (LWF, SI), outperforming them by almost 0.4 for accuracy on average. We can see that MIX could be found among the best models for MNIST, FASHION, IMAGENET10, IMAGENET20A and IMAGENET20B, especially at the end of the datasets, providing relatively reliable performance throughout the whole sequences. On the other hand, it struggled with catching up with the best replay methods for SVHN and CIFAR-based datasets showing that there is still a potential for improvements when it comes to predictive accuracy.
The overall very poor performance of LWF and SI (but also A-GEM), which were not much better than the NAIVE approach, confirms the observations made in other publications that the regularization-based methods cannot handle the most challenging 1-class-incremental scenarios without memory buffers <cit.> even after improvements proposed in <cit.>.
We can also see that for the scenarios with end-to-end training the models were much closer (0.01-0.3) to the OFFLINE upper bound for the shorter sequences (MNIST, FASHION, SVHN and IMAGENET10, except for CIFAR10) than for the longer ones (IMAGENET20A, IMAGENET20B, CIFAR20) with differences between 0.4 and 0.5, which shows that all of the state-of-the-art methods still struggle with bridging the gap between incremental learning and the offline optimum.
Finally, the results for the memory-free scenarios with pre-trained models, given in the last row of Fig. <ref>, exhibit the main strength of the MIX algorithm. Since in these scenarios, it does not use the inter-contrastive loss, it can perfectly separate the incremental learning process for each class, preventing catastrophic forgetting at the level of the classifier. As a result, it does not have to rehearse the previous concepts at all (ℳ_c=0) while still being able to conduct very effective learning producing results very close to the OFFLINE upper bound (difference between about 0 and 0.1), regardless of the quality of the extractor (pre-trained on 10 and 20 or 100 and 200 classes). The MIX-MCR method outperforms all of the baselines for all cases except for IMAGENET200-PRE20, for which only iCaRL was able to provide slightly higher accuracy, even though they had a small advantage of having approximately one example per class in the buffer. It is not a coincidence that practically only iCaRL is close to our method on average (worse by about 0.1), since it uses a similar paradigm in the classification layer by storing prototypes/centroids that are used for classification. All of the remaining algorithms cannot handle the memory-free scenario effectively, producing solutions worse by at least 0.2 on average. This can be a crucial property when one has to consider, for example, data privacy issues or mobile and edge computing.
All of the presented observations, conclusions and recommendations can be also found in a condensed form at the end of Appendix B.
§ SUMMARY
In this work, we introduced a class-incremental mixture of Gaussians model (MIX) for deep continual learning. We proposed different variants of the algorithm to make it suitable for gradient-based optimization and, through an extensive experimental study, we exhibited its practical configurations and capabilities in the context of other state-of-the-art continual learning models.
In our future research, we will focus on replacing the regionalization approach with a more flexible method that does not assume any pre-training structure and allows the gradient-based procedure to fully explore potential solutions, e.g. annealing <cit.>, and on removing the static tightness hyperparameter to increase flexibility even more – it could be more beneficial to either find a better (parameter-free) distance function or propose an adaptive threshold. It is also an open question whether we can effectively train a gradient-based mixture using a full covariance matrix. Finally, we could consider some kind of hybridization of the mixture models with the feature extractor to benefit from the capabilities of the former to limit interference with previously learned concepts by utilizing max-component losses. All of these potential improvements combined could provide significant performance gains in the class-incremental continual learning scenarios.
§ APPENDIX
§.§ Data
We used: MNIST, FASHION, SVHN, CIFAR10 and IMAGENET10 – a subset of the tiny IMAGENET200, to gain deeper insights into our method while conducting experiments with hundreds of different configurations. Then, we extended this set with CIFAR20 – the coarse-grained version of CIFAR100, IMAGENET20A and IMAGENET20B – larger subsets of IMAGENET200 – to benchmark our method against other algorithms.
For the experiments involving fixed extractors, we used pre-trained features to construct four additional sequences – CIFAR100-PRE10, CIFAR100-PRE100, IMAGENET200-PRE20 and IMAGENET200-PRE200, which consisted of features extracted for CIAFR100 and IMAGENET200, using extractors trained on 10, 20, 100 and 200 classes of the original datasets. The summary of the used benchmarks is given in Tab. <ref>. Details of the feature extractors can be found in the next section.
§.§ Model configurations
In the first section of our experiments, we explored different configurations of our algorithm, which can be mostly seen as an ablation study. Firstly, we evaluated different losses (CE, MC and MCR) combined with different classification methods (softmax, max-component). Secondly, we checked different settings for the tightness bound parameter τ_p by evaluating a grid of values for inter-tightness and intra-tightness – we considered τ_p ∈⟨1e-06, 1e-05, 0.0001, 0.001, 0.01⟩ for both. Thirdly, we analyzed how assuming different numbers of components affects the classification performance on different datasets. We used K ∈⟨1, 3, 5, 10, 20⟩. Then we checked if it is better to maintain a whole covariance matrix or only its variance (FULL, VAR). Finally, we evaluated different learning rates for the extractor and GMM part, using α_ℱ∈⟨1e-07, 1e-06, 1e-05, 0.0001, 0.001⟩ and α_𝒢∈⟨1e-05, 0.0001, 0.001, 0.01, 0.1⟩, to check whether it may be beneficial to configure them separately, and different memory sizes ℳ_c ∈⟨8, 64, 128, 256, 512⟩ to analyze how our method exploits limited access to class examples.
While evaluating specific parameters we kept others fixed. For our base configuration we chose a setup that was capable of providing performance comparable with a standard experience replay. We used the MCR with max-component as our loss and classification method, K=3, τ_p,ie=0.002, τ_p,ia=0.01, β=0.5, α_ℱ=0.0001, α_𝒢=0.001 and d_min=0.001 with only variance stored per each component. We assumed a modest memory buffer per class ℳ_c=256 and matched the size of a memory sample per class with the training batch size. The model was trained for 10 (MNIST, FASHION) or 25 epochs per class, with 32 (IMAGENET) or 64 instances in a mini-batch.
§.§ Algorithms
Based on the observations made in the first section of the experiments, in the final evaluation we used two variants of our algorithm: MIX-CE and MIX-MCR with τ_p,ie=0.0001, τ_p,ia=0.001, α_ℱ=0.0001, α_𝒢=1e-05 and, once again, d_min=0.001 with only variance maintained per each component. The only parameter that we tuned per each dataset was the number of components K. We used Adam as the optimizer. For the memory-free scenarios with pre-trained extractors, we turned off the inter-contrastive loss to minimize interference with previously learned classes.
The main parameters of the baselines methods were set based on the original papers and other literature, including empirical surveys or works containing vast empirical studies <cit.>. For all memory sampling methods we matched the memory sampling size with the training batch size. For ERSB we used 10 centroids per class each containing up to either 25 or 15 instances to match the total memory size. DER used α_d=0.5, for LWF we set the softmax temperature T=2 and progressively increased its distillation coefficient as suggested in <cit.>, and SI used λ =0.0001. All of the methods utilized the Adam optimizer with a learning rate α=0.0001 as we did not observe any significant differences when changing this parameter.
Analogously to the configuration section, all of the algorithms, including ours, were trained for 10 (MNIST, FASHION) or 25 epochs per class, using 32 (IMAGENET) or 64 instances per mini-batch. The offline models were trained for either 50 or 100 epochs, until they achieved a saturation level. The memory buffer was set to ℳ_c=128 (IMAGENET) or ℳ_c=256 for methods supporting memory per class (ER, ERSB, iCaRL), and ℳ=C·128 or ℳ=C·256 for the remaining ones (GSS, A-GEM, DER), where C was the total number of classes. The latter group was equipped with reservoir buffers <cit.>. For the experiments with pre-trained extractors we wanted to check the memory-free scenario, therefore we set ℳ_c=0 for our methods and ℳ_c=1 or ℳ=C for others, since most of them could not be run without storing any examples.
All of the algorithms, including different configurations of our method, were combined with feature extractors. For MNIST and FASHION we used a simple CNN with two convolutional layers consisting of 32 (5x5) and 64 (3x3) filters, interleaved with ReLU, batch normalization and max pooling (2x2). For SVHN and IMAGENET we utilized ResNet18, its modified version for CIFAR10 and CIFAR20, and ResNeXt29 for CIFAR100 <cit.>. The classification layers consisted of the default configurations.
Finally, for our method, ER, ERSB, A-GEM and DER we disabled batch normalization, since, consistently with <cit.>, we observed a significant difference in performance when those layers were turned off for the given methods. As mentioned in Sec. <ref>, for the memory-free scenarios, the extractors were pre-trained on either 10, 20, 100 or 200 classes of CIFAR100 and IMAGENET200. For this setting we trained all the models for 20 epochs per class.
Results for the offline model were either obtained by us (learned from scratch for IMAGENET20A, IMAGENET20B and fine-tuned models for IMAGENET200), or by referring to other publications <cit.>.
§ APPENDIX
§.§ Additional visualizations
Fig. <ref> presents an example of a single-component class-incremental mixture model learned with the inter-contrastive loss. Fig. <ref> demonstrates the effectiveness of training a multi-component model with the intra-contrastive loss and regionalization.
As mentioned in the main document, the CE loss can often achieve similar predictive performance even if its mixture models are not really fitting the data (Fig. <ref>). We can see it when compared with MC for K=1 or MCR for both K (Fig. <ref> and <ref>). Furthermore, the model produced for MC with K=3 clearly shows that it is incapable of effectively utilizing multiple components for the same class. Please notice that only the Gaussians in the middle actually cover some data points, while the remaining components are completely unrelated to the observed data. These are examples of the degenerate solutions. While for FASHION this loss could still, analogously to CE, provide similar performance as MCR (the components in the middle are fitted to the data and they are sufficient to model it), the observed desynchronization of components results in its weaknesses for more complex problems. The MCR loss can provide high quality of predictive performance and of the produced mixture models.
§.§ Additional configurations
Number of components: Tab. <ref> presents how many components were required to obtain the best solutions per each dataset for the given settings. We can observe that for simpler datasets (MNIST, FASHION) using a single component per class was sufficient and that introducing additional ones led to slightly worse performance, most likely because the concepts are simple and additional components overcomplicate the optimization problem. On the other hand, more complex benchmarks (SVHN, CIFAR10, IMAGENET10) preferred access to more components per class, which could provide significant improvements, e.g., for SVHN the difference between K=1 and K=10 was almost 0.3. While for these experiments we set the learning rate slightly higher for the GMM model (0.001) than for the extractor (0.0001), we observed that when the former used a rate lower than the latter (as suggested by the results for learning rates that will be presented below), the optimal K tended to be lower on average. It is possible that if GMM is dominant it prefers having more flexibility (components), while when the extractor has a higher learning rate it may be more effective in adjusting representations to lower numbers of components.
Covariance: Results presented in Tab. <ref>, unequivocally show that our gradient-based MIX can much better adapt to data if it maintains only the variance of the covariance matrix (better by almost 0.3 when compared with full covariance). It is not surprising since previous publications related to the gradient-based GMMs for offline settings suggested a similar thing <cit.>. Most likely, working with a full covariance matrix leads to less stable loss values, and many more free parameters (especially if the feature space is high-dimensional) likely cause problems with convergence.
Learning rates: Analogously to the experiments for tightness, in Fig. <ref> we presented the grid of results for different extractor (horizontal) and mixture (vertical) learning rates. The obtained results suggest that the former part is more important – once the optimal rate is set (0.0001 for the given settings) tuning the latter seems less significant, although overall it should be set to a similar or slightly lower value.
Memory size: Finally, if we look at the results of class-incremental learning using different memory sizes, given in Fig. <ref>, we will see that MIX can effectively utilize larger buffers and that it seems to be quite memory-dependent, especially for SVHN where the difference between subsequent sizes ranged from 0.1 to 0.2. Still, the gap was much smaller for all of the remaining datasets. While this characteristic of the algorithm may be problematic (the fewer examples we need, the better), it is still valid that if we can use a pre-trained extractor, the whole model does not need to use the memory buffer at all.
§.§ Lessons learned
Based on the theoretical and empirical analysis presented for this work we can conclude the following.
* Class-incremental learner. Regardless of many combined challenges, it is possible to successfully hybridize the gradient-based mixture models on top of convolutional feature extractors, and use them in class-incremental end-to-end continual learning scenarios. The presented results show that MIX is capable of providing competitive results when compared with well-known incremental baselines.
* Dedicated losses. It has been shown that the training of the mixture models combined with dynamic feature extractors requires the inter-contrastive loss to effectively distinguish components of different classes from each other. In addition to that, to ensure diversity among same-class components and avoid degenerate solutions, such techniques as regionalization combined with the intra-contrastive loss are required. We showed that not only do the proposed approaches deliver what was intended, but also that they can translate into significant performance gains for more complex datasets. Finally, although the more generic high-level cross-entropy loss may provide good solutions in many cases, only the most advanced variant (MIX-MCR) delivers both high predictive performance and high quality of generated mixture models, which may be important from the perspective of interpretability or potential Gaussian-based extensions.
* Effective tightness. The tightness bound plays a crucial role in stabilizing the mixture learning procedure. Setting the optimal values of inter- and intra-tightness leads to striking a balance between pushing different components from each other and actually fitting them to the data. Intuitively, the inter-tightness prefers slightly lower values than intra-tightness.
* Recommended configurations. By analyzing other different hyperparameter settings and combinations of our methods we could observe that: (i) the CE loss works much better with the softmax classification method, while MC and MCR should be combined with the max-component approach, (ii) different numbers of components may be required for different data and different learning rates may also affect the optimal number, (iii) maintaining only the diagonal of the covariance matrices leads to more stable optimization and better results, (iv) the learning rate for the feature extractor dominates over the one for the mixture model, and that (v) MIX is quite memory-dependent in general end-to-end scenarios.
* Memory-free scenarios. At the same time, MIX is capable of learning without a memory buffer if we use a fixed pre-trained extractor and disable the contrastive loss that is not needed in this case. Our method stands out as the best model for such class-incremental scenarios which can be very important if there are any data privacy concerns or strict memory limits.
|
http://arxiv.org/abs/2307.03890v1 | 20230708034628 | Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots | [
"Jie Yin",
"Hao Yin",
"Conghui Liang",
"Zhengyou Zhang"
] | cs.RO | [
"cs.RO"
] |
Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots
Jie Yin ^†, Hao Yin ^†, Conghui Liang^* ^ and Zhengyou Zhang ^ (IEEE Fellow & ACM Fellow)
Authors ^† are independent researchers. Authors ^ are with Tencent Robotics X Lab, Shenzhen, China.
^* Corresponding Author: Conghui Liang ([email protected])
August 12, 2023
========================================================================================================================================================================================================================================================================
High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research.
To support the research on corner cases of visual SLAM systems,
this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was
collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously.
To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences.
We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challengehttps://github.com/sjtuyinjie/Ground-Challenge.
Data Sets for SLAM, Data Sets for Robotic Vision
§ INTRODUCTION
Intelligent ground robots have been widely used in industrial production and daily life, such as logistics, cleaning, warehouses, security, and food delivery. And navigation is the fundamental capability for these robots to execute these diverse tasks. To achieve reliable navigation, visual SLAM (Simultaneous Localization and Mapping) problem has been researched for decades, with quite a few classical methods proposed <cit.>.
A recent developing trend in visual SLAM is low-cost multi-sensor fusion, which has been verified to be a practical approach <cit.>
to enhance the robustness to diverse scenarios. Different sensors can complement each other, maximizing the perceptual awareness of environments. One of the best example is that visual-inertial odometry (VIO) algorithms can significantly improve the tracking stability and accuracy in aggressive motion and textureless scenarios.
While VIO systems have performed well in most cases, <cit.> has proven that this does not apply to ground vehicles.
For generic movement patterns, a VIO system has only four unobservable directions (three for global translation and one for global yaw). However, ground vehicles are restricted from moving in a 2D plane, mostly along a straight line or a circular arc, and thus the IMU is not sufficiently activated.
Therefore, the VIO system on the ground robot will suffer from additional DoF unobservability, such as the scale. To address this issue, <cit.> extends VINS-Mono <cit.> to
incorporate low-frequency wheel-encoder data and keep the scale observable. Similarly, <cit.> proposes a RGB-D Encoder SLAM system for differential-drive robots. Most recently, <cit.> proposes an optimization-based visual-inertial-wheel tightly coupled odometry, which claims to work robustly in dark or overexposed conditions. Nonetheless, its performance has not been tested on any public dataset with ground truth trajectories.
We believe that progress in SLAM, like in the AI field, is highly data-driven <cit.>.
Although there are extensive public datasets available for evaluating SLAM algorithms, most of them are outdated and no longer challenge cutting-edge SLAM systems. In our opinion, datasets focusing on challenging cases can more efficiently reveal the defects and limitations of existing algorithms. We notice that corner case detection in autonomous driving receives extensive attention from researchers <cit.> <cit.> because such cases can easily cause the navigation system to drift. Similarly, once the localization module of a robot fails, it might cause industrial accidents and even pose threats to human safety. Nonetheless, to our knowledge, there is currently little literature discussing corner cases in robot navigation, which is not conducive to the safety of real-world robot applications.
To fill this gap, we present a novel SLAM dataset for ground robots, which aims to challenge existing cutting-edge SLAM systems with corner cases and thus promotes the progress of the multi-sensor fusion SLAM algorithm.
The challenges of our datasets lie in two areas: specific movement patterns and sensor failures, which will be elaborated in subsequent sections. Some scenarios covered in our datasets are visualized in Figure <ref>. Our major contributions are summarized as follows:
* We collect a novel visual SLAM dataset for ground robots with a rich pool of sensors in diverse environments both indoors and outdoors. Particularly, the dataset covers a series of challenging sequences including sensor failures and specific movement patterns.
* State-of-the-art SLAM algorithms of different settings are tested on our benchmark. And the results indicate these systems are not robust enough for situations such as sensor failures.
* To facilitate the research on corner cases of robot navigation, we will release the full dataset with ground truth trajectories and the configuration file of each tested algorithm upon paper publication.
§ RELATED WORKS
§.§ SLAM Datasets for Ground Robots
Most existing SLAM datasets are collected by UAVs <cit.> or cars <cit.>, but only a few are targeted at ground robots. For instance, Rawseeds <cit.> and UTIAS<cit.> provide RGB images only, thus making them unsuitable for evaluating multi-sensor fusion systems. The Rosario dataset <cit.> is rich in sensor variety, yet is specifically designed for agricultural environments. M2DGR <cit.> captures diverse indoor and outdoor scenarios, including some challenging scenes like elevators and darkrooms, but doesn't contain wheel odometer information which is essential for multi-sensor fusion SLAM algorithms due to its low cost and high precision. OpenLORIS<cit.> offers rich sensor types in visual challenging scenarios such as highly dynamic markets and poorly exposed corridors, but wheel challenges or motion challenges are not included.
§.§ Corner Cases
Corner cases, i.e., extreme and non-predictable situations, are a popular research topic in autonomous driving <cit.>. Although infrequent, these cases can potentially threaten the security and reliability of autonomous navigation systems. Corner cases exist in robot navigation tasks as well. To address such challenging scenarios, researchers have proposed various methods, such as RGB-D SLAM <cit.> and DS-SLAM <cit.>, to handle dynamic environments, and GVINS <cit.> to deal with degenerate cases including low-speed movement, less than four visible satellites, and GNSS-denial environments. Additionally, <cit.> proves that their method is robust in aggressive motions and a visual texture-less white wall. Nonetheless, we note that there are still plenty of corner cases that tend to be overlooked, such as wheel slippage, motion blur, and complete visual occlusion. There is a lack of SLAM datasets specifically designed for studying these corner cases, which is a gap yet to be filled. To sum up, it is urgent and critical to collect a novel SLAM dataset with rich sensor types, precise calibration, and sufficient challenge to support studies on corner cases, particularly sensor failures.
§ THE GROUND-CHALLENGE DATASET
§.§ Sensor setup
We construct a ground robot for data collection, and the sensor locations on the robot are shown in Figure <ref>. The chassis is equipped with a front-view VI-Sensor (Visual-Inertial Sensor) that captures RGB and depth images along with 6-axis IMU measurements. The robot is driven by two driving wheels, which provide the odometer information, and supported by four assisting wheels; it additionally carries a high-precision 9-axis Xsens IMU and a 16-beam 3D LiDAR.
The ground truth trajectories and point clouds are generated from the Velodyne LiDAR and the Xsens IMU using Fast-LIO2 <cit.>, a state-of-the-art LiDAR-based SLAM system. To evaluate its performance, we compared the high-precision trajectories generated by a motion capture system with 16 infrared cameras to those generated by Fast-LIO2. The experiment revealed that Fast-LIO2 can reach a positioning accuracy of 3 cm in a small-scale (15 m x 15 m) indoor room. Additionally, as reported in <cit.>, Fast-LIO2 can achieve less than 0.1 m end-to-end error on an outdoor trajectory spanning 1000 meters. Thus, considering that it is difficult for vision-based SLAM algorithms to achieve similar accuracy in challenging scenarios, we use the result of Fast-LIO2 as the pseudo-ground-truth trajectory.
§.§ Synchronization and Calibration
We capture all the data using the ROSbag tool in the Robot Operating System (ROS). The RGB camera and 6-axis IMU embedded in the Realsense D435I are hard-synchronized, while the depth images are pixel-by-pixel aligned to the RGB images. The 3D LiDAR and 9-axis IMU are software-synchronized by triggering data capture at the same instance. To calculate the camera intrinsics of pinhole cameras, we use the MATLAB Camera Calibration Toolbox. To calibrate the internal parameters of the IMU, we use the toolbox from <cit.>, which includes the white noise and random walk of both the gyroscopic and accelerometer measurements. We choose the IMU frame as the reference to calibrate the extrinsic parameters (relative poses) between sensors, and employ the toolbox from <cit.> for calibrating the extrinsic parameters between cameras and IMU.
§.§ Data collection
We provide an overview of our dataset in Table <ref>. All data was captured using the Rosbag tool within the Robot Operating System (ROS). The recording process is as follows: First, we recorded Office and Room sequences, where the robot moves slowly in a well-lit and textured office or room respectively, to test the performance of different algorithms in normal situations. Subsequently, we designed a series of corner case experiments from three aspects: visual challenge, wheel odometer challenge, and particular movement pattern, which are presented as follows:
§.§.§ Visual Challenge
In our experiments, we manipulate the robot to move in a room with poor illumination (Darkroom sequences), back and forth in front of walls lacking texture (Wall sequences), and through scenarios of varying degrees of occlusion (Occlusion sequences). Figure <ref> (a) shows sequences Occlusion1∼2, which involves a person walking in front of the robot and causing intermittent partial occlusion. Figure <ref> (b) displays sequence Occlusion3, in which the camera is covered with the palm repeatedly. In sequence Occlusion4 (Figure <ref> (c)), a piece of black tape is attached to the camera's lens to completely block its view, disabling feature extraction and matching for visual SLAM. Furthermore, Motionblur sequences are generated by rapidly translating and rotating the robot, creating motion blur for cameras (Figure <ref> (d)).
§.§.§ Wheel Odometer Challenge
The Hall and Loop sequences are collected in a hall with smooth ground and a heavily carpeted aisle loop, respectively, where the wheels slip significantly. Moreover, we record Roughroad sequences to test the performance of the localization algorithm on rough roads.
§.§.§ Particular Moving Patterns
In the Sequences Corridor1 and Corridor2, the robot moves forward in a zigzag shape and straight forward, respectively. In the zigzag route, motion blur and less overlapping between adjacent image frames will lead to errors in feature matching.
In the Rotation sequence, the robot only rotates and hardly translates, which makes it difficult for vision-based algorithms to estimate the depth of feature points by triangulation. In the Static sequences, the robot stands still on a bracket, and we control its wheels to move in different directions through the handle. This experiment aims to test whether SLAM systems coupled with the wheel odometer can work well when the robot wheel is suspended.
Finally, we operate the robot from a flat surface to another, passing through a slope. In this experiment, since the wheel odometer only provides two-dimensional speed observations, it could be misleading to estimate three-dimensional trajectories.
§ EVALUATION
The features of all the sequences are described on our project website. We evaluated some SLAM systems with different sensor configurations on twelve representative sequences from our dataset. The tested algorithms are ORB-SLAM3 <cit.>, an optimization-based SLAM system; VINS-Mono <cit.>, one of the state-of-the-art monocular visual-inertial systems; VINS-RGBD <cit.>, a fusion algorithm of RGB-D and IMU information based on the VINS-Mono <cit.> framework; and VIW-Fusion <cit.>, a tightly-coupled visual-inertial-wheel system featuring online extrinsic calibration and wheel-aided initialization. Also, we use an EKF algorithm <cit.> for fusion of IMU and wheel odometer.
The EVO tool <cit.> was used to align all the estimated trajectories with ground truth trajectories to obtain the ATE RMSE <cit.>.
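For readers who wish to reproduce the metric outside of the EVO tool, the sketch below computes the translation-only ATE RMSE after a closed-form rigid alignment. It is an illustration rather than the EVO implementation: trajectory file parsing and timestamp association are omitted, and the function name and array shapes are our own choices.

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """RMSE of the absolute trajectory error after rigid SE(3) alignment (Horn/Umeyama,
    translation part only). gt_xyz, est_xyz: (N, 3) arrays of time-associated positions."""
    gt = np.asarray(gt_xyz, dtype=float)
    est = np.asarray(est_xyz, dtype=float)
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # rotation mapping est into the gt frame
    t = mu_g - R @ mu_e
    err = gt - (est @ R.T + t)                        # residual translation error per pose
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```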
The quantitative results are shown in Table <ref>, with the estimated trajectories in 2D plotted in Figure <ref>. Since most of the selected sequences are highly challenging (even with sharp turns), ORB-SLAM3 (both monocular-inertial and RGBD-inertial version) performed poorly on most of our test sequences, with frequent tracking failures (less than 50% of successfully tracked frames), initialization failure, or scale drift.
In contrast, SLAM algorithms with multi-sensor fusion (like VIW-Fusion <cit.>) achieved better localization results but failed in some specific scenarios as well. We discuss the experiment results in detail as follows:
Normal Situation
The ATE RMSE results on Sequence Office3 indicate that existing localization methods can perform well when the motion mode matches the assumptions of these algorithms and all the sensors work well.
Vision Challenge
In Sequences Darkroom2 and Motionblur3, VINS-Mono <cit.> and VINS-RGBD <cit.> drift significantly due to visual failures, while wheel-odometer-based algorithms work more robustly in these cases.
In Sequence Occlusion4, all the vision-based methods including VIW-Fusion <cit.> fail to initialize because of poor feature extraction. This finding indicates that VIW-Fusion <cit.> has not been adequately designed to handle adverse conditions. A more prudent strategy may be to combine the wheel odometer and IMU to output a trajectory when a visual sensor failure is detected.
Wheel Odometer Challenge
In the sequences Roughroad3 and Slope1, vision-based systems perform worse than wheel odometer-based algorithms due to inaccurate scale estimation in aggressive motion. In Sequence Hall1, VINS-Mono <cit.> and VINS-RGBD <cit.> drift significantly due to ground reflection and faraway feature points. Here, VIW-Fusion <cit.> maintains satisfactory positioning performance even with slight wheel slippage, demonstrating the advantages and necessity of multi-sensor fusion in complex scenarios. However, when the wheels slip more severely in Sequence Loop2, the significant deviation caused by the wheel odometer increases the localization error of estimated trajectories. This can be attributed to two main reasons: current algorithms lack the ability to detect wheel slippage, and the angular velocity provided by the wheel speedometer is not accurate, leading to the long-term divergence of the estimated trajectory. To reduce the accumulation of errors, it is suggested that IMU's angular velocity measurement be used instead of the wheel odometer's.
Particular Movement Patterns
In Sequence Corridor1, the zigzag movement of the robot not only causes feature extraction to fail but also leads to severe wheel slippage. Therefore, none of the tested algorithms can accurately estimate the trajectory. In Sequence Rotation1, pure rotation causes severe errors in the depth estimated by VINS-Mono's triangulation, while the remaining tested systems perform well thanks to measurements from other sensors. Finally, in Sequence Static1, VIO systems cannot be initialized successfully due to the lack of IMU excitation. Since the wheels keep moving while suspended, the wheel-odometer-based methods mistakenly infer that the robot is in motion.
In summary, VINS-Mono <cit.> is most likely to generate catastrophic localization results in corner cases, and VINS-RGBD <cit.> can also inevitably fail when severe camera failures occur.
We have noticed that the wheel odometer alone can achieve good results in most situations, except for severe wheel slippage. Integrating the IMU and the wheel odometer through the EKF <cit.> can achieve higher accuracy than the raw odometer. Nonetheless, the trajectory of the EKF can shake violently in the initialization phase due to the inaccuracy in the initial covariance estimation (this part was manually eliminated in our experiment). VIW-Fusion <cit.> can achieve satisfying accuracy and robustness in most sequences, but its initialization in visual failure needs improvement. Furthermore, it lacks consideration for wheel slippage, and its adopted dead reckoning model will diverge in a long trajectory due to inaccurate angular velocity estimates.
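As a toy illustration of the dead-reckoning divergence discussed above (this snippet is not taken from any of the evaluated systems; all names and numbers are illustrative), a planar pose propagation from linear speed and yaw rate shows how a small yaw-rate bias makes the position error grow with travelled distance, which is why substituting the IMU's angular velocity helps:

```python
import numpy as np

def dead_reckon(v, omega, dt):
    """Integrate planar dead reckoning (x, y, yaw) from linear speeds v[k] and yaw rates omega[k].
    A toy illustration of how a biased yaw rate causes the estimated trajectory to diverge."""
    x = y = yaw = 0.0
    traj = []
    for vk, wk in zip(v, omega):
        x += vk * np.cos(yaw) * dt
        y += vk * np.sin(yaw) * dt
        yaw += wk * dt
        traj.append((x, y, yaw))
    return np.array(traj)

# With a constant yaw-rate bias (e.g. 0.01 rad/s from an imperfect wheel baseline), the heading
# error grows linearly and the position error roughly quadratically with travelled time:
# compare dead_reckon(v, omega_wheel + 0.01, dt) against dead_reckon(v, omega_imu, dt).
```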
The experiments conducted demonstrate the validity and value of our dataset as a benchmark for existing SLAM systems. The results further suggest that there is still much room for improvement in current cutting-edge multi-sensor fusion algorithms for real-world applications. Sensor failures, such as complete occlusion and wheel suspension, can be fatal for single-sensor-based methods; however, multi-sensor fusion systems should be designed to be more robust in these cases. For instance, we posit that a reliable visual-IMU-wheel system should be able to explicitly identify scenarios where visual observations are inaccurate and respond accordingly (e.g. disable visual information and rely only on wheel odometer and IMU). Nevertheless, to our knowledge, corner case identification and troubleshooting have been scarcely addressed in prior work. Therefore, we provide this dataset to support relevant researches.
§ CONCLUSION
We present Ground-Challenge, a novel ground robot dataset to encourage breakthroughs in multi-sensor fusion SLAM algorithms. Specifically, we have crafted a series of corner case experiments, including sensor failures in diverse environments, to challenge current cutting-edge SLAM systems. We have tested these systems on our dataset and analyzed their limitations in various scenarios, thus providing potential developing directions for SLAM. We are committed to continually updating our benchmark dataset. Specifically, we will mount 2D and 3D LiDAR on the robot, design experiments to invoke corner cases, and utilize higher-precision equipment such as motion capture systems to ensure accurate ground truth for LiDAR SLAM in our future work.
Acknowledgement
We thank Tencent Robotics X Lab for supporting this work.
|
http://arxiv.org/abs/2307.04690v1 | 20230710164423 | Heisenberg-limited Hamiltonian learning for interacting bosons | [
"Haoya Li",
"Yu Tong",
"Hongkang Ni",
"Tuvia Gefen",
"Lexing Ying"
] | quant-ph | [
"quant-ph",
"cs.IT",
"cs.NA",
"math.IT",
"math.NA"
] |
Heisenberg-limited Hamiltonian learning for interacting bosons
Haoya Li, Yu Tong, Hongkang Ni, Tuvia Gefen, and Lexing Ying
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We develop a protocol for learning a class of interacting bosonic Hamiltonians from dynamics with Heisenberg-limited scaling. For Hamiltonians with an underlying bounded-degree graph structure, we can learn all parameters with root mean squared error ϵ using 𝒪(1/ϵ) total evolution time, which is independent of the system size, in a way that is robust against state-preparation and measurement error. In the protocol, we only use bosonic coherent states, beam splitters, phase shifters, and homodyne measurements, which are easy to implement on many experimental platforms. A key technique we develop is to apply random unitaries to enforce symmetry in the effective Hamiltonian, which may be of independent interest.
§ INTRODUCTION
Many tasks in quantum metrology and quantum sensing can be reduced to the task of learning the Hamiltonian H of a quantum system, whose evolution is described by the operator e^-iHt <cit.>. We call this task Hamiltonian learning, a name that is commonly used in the literature <cit.>. Besides quantum metrology and quantum sensing, Hamiltonian learning is also useful for quantum device engineering <cit.>, and quantum many-body physics <cit.>.
Previous works on Hamiltonian learning for many-body quantum systems are generally subject to the standard quantum limit (SQL), where to estimate the parameters in the Hamiltonian to precision ϵ, (ϵ^-2) samples are required <cit.>. On the other hand, for simple systems such as those consisting of a single spin, the Heisenberg limit can be achieved, where to obtain ϵ precision, only (ϵ^-1) total amount of resources is needed. Achieving the Heisenberg limit requires using quantum-enhanced protocols that either use (ϵ^-1) entangled probes <cit.> or coherent evolution for (ϵ^-1) time <cit.>.
The resources consumed are the number of probes and the length of time evolution, respectively.
The natural question is, can we achieve the Heisenberg limit for many-body quantum systems? When applying the existing quantum-enhanced protocols to the many-body setting, one quickly encounters difficulties. When many entangled probes are used, one needs many copies of the quantum system with the same parameters that can evolve simultaneously without interacting with each other. It is often unclear how one can create these copies, except for certain scenarios, such as when many probes undergo evolution under the same field strength. For long coherent time-evolution, the many-body nature of the quantum systems becomes problematic as subsystems undergo open-system dynamics, and phenomena such as thermalization prevent local observables from having enough sensitivity to achieve the Heisenberg limit. One can consider performing entangled measurements across all parts of the many-body system. Still, the difficulty in simulating the system makes finding a good measurement strategy extremely difficult.
Recently, a method was proposed in <cit.> to perform Hamiltonian learning for many-body spin systems with Heisenberg-limited scaling. The main technique is to apply quantum control in the form of random Pauli operators during time evolution so that the system evolves with an effective Hamiltonian that is easy to learn and, at the same time, preserves the parameters that one wants to learn. Another recent work proved that some form of quantum control is necessary for achieving the Heisenberg limit in this task <cit.>.
The above works are all focused on multi-qubit systems, and Heisenberg-limited Hamiltonian learning for bosonic systems is relatively less studied.
Bosonic systems, such as superconducting circuits <cit.>, integrated photonic circuits <cit.> and optomechanical platforms <cit.> are widely used for quantum sensing, communication, and computing <cit.>. These quantum applications require efficient calibration <cit.>, and it is thus highly desirable to develop optimal algorithms for characterizing bosonic Hamiltonians. For example, quantum computing and sensing with transmons require learning the energy levels and interactions between the transmons and microwave resonators.
For bosonic systems, there is a different set of “easy” quantum states, unitaries, and measurements than for spins. This work assumes that one can prepare coherent states, apply phase shifters and beam splitters, and perform the homodyne measurement. We note that although we may use terms from quantum optics, such as “phase shifters”, we do not constrain our discussion to the optical setting. Additionally, in our protocol, we do not require any squeezing, which can be experimentally difficult to implement <cit.>. Using these resources, we present a protocol to learn a class of interacting bosonic Hamiltonians with Heisenberg-limited scaling. These Hamiltonians involve terms that are quadratic or quartic in the creation and annihilation operators, and are particle-number preserving. The specific form of the Hamiltonians is given in (<ref>). Our protocol can also tolerate a constant amount of noise in the state preparation and measurement (SPAM) procedures and has a small classical post-processing cost.
In our method, we apply random unitaries during time evolution to reshape the Hamiltonian into an effective Hamiltonian that is easier to learn. This follows the same high-level idea as <cit.> but is specifically tailored to the bosonic setting. Moreover, we can interpret the procedure as enforcing a target symmetry in the effective Hamiltonian, thus putting constraints on the dynamics. We believe this technique may be useful for other problems in quantum simulation as well <cit.>. In analyzing the deviation from the effective dynamics, the unboundedness of the bosonic Hamiltonian terms poses a challenge, as the analysis in <cit.> requires Hamiltonian terms to be bounded. We use more involved techniques to overcome this difficulty in Section <ref>.
§ RESULTS
In this work, we focus on quantum systems on N bosonic modes forming a d-dimensional lattice, with the Hamiltonian of the form
H = ∑_⟨i,j⟩ h_ij b_i^†b_j + ∑_i ω_i b_i^†b_i + ∑_i ξ_i/2 n_i(n_i-1),
where b_i (b_i^†) are bosonic annihilation (creation) operators, and n_i=b_i^†b_i are the number operators. ⟨i,j⟩ means that the summation is over sites i,j that are adjacent to each other. h_ij=h_ji^*, and each ξ_i and ω_i is a real number. We also assume that |h_ij|,|ω_i|,|ξ_i|≤1. This class of Hamiltonians is relevant for superconducting quantum processors <cit.>, arrays of coupled cavities <cit.>, and phonon dynamics in ion crystals <cit.>. We will present a protocol that generates estimates ĥ_ij, ω̂_i, and ξ̂_i such that
𝔼[|ĥ_ij-h_ij|^2], 𝔼[|ω̂_i-ω_i|^2], 𝔼[|ξ̂_i-ξ_i|^2]≤ϵ^2,
for all i and j.
The protocol has the following properties:
* The total evolution time is 𝒪(ϵ^-1);
* The number of experiments is 𝒪(polylog(ϵ^-1));
* A constant amount of SPAM error can be tolerated.
More precisely, our protocol consists of N_exp=𝒪(polylog(ϵ^-1)) experiments, which we number by 1,2,⋯,N_exp.
In the jth experiment, we will initialize each bosonic mode in the system in a coherent state, let the system evolve for time t_j>0, and perform homodyne measurement on the bosonic modes. During time evolution, we will apply random beam splitters (on two modes) or phase shifters (on one mode). The total evolution time is defined to be ∑_j=1^N_exp t_j, which is the amount of time required to run all the experiments. We assume that after we prepare the initial state and before we perform the measurement, the system goes through error channels ℰ_1 and ℰ_2, which model the SPAM error. If ‖ℰ_1-ℐ‖_♢+‖ℰ_2-ℐ‖_♢ is upper-bounded by a small constant, then our protocol will still be able to reach arbitrary precision ϵ. Here ‖·‖_♢ is the diamond norm <cit.>, and ℐ is the identity channel. The precision is measured by the mean squared error (MSE). We are using the big-𝒪 notation to hide the constants for simplicity, and we note that these constants never depend on the system size. Our protocol generates 𝒪(NN_exp)=𝒪(N polylog(ϵ^-1)) classical data and it takes a similar amount of time to process these data to compute the estimates.
Below we will describe the protocol in detail. We will start with a protocol to learn a single anharmonic oscillator, which forms the basic building block for more complex situations.
§.§ Learning an anharmonic oscillator
We first consider the simple case in which
H_AHO = ω n + ξ/2n(n-1),
where n=b^†b. We want to estimate the coefficients ω and ξ with root mean squared error (RMSE) at most ϵ.
This is a quantum sensing problem with two parameters to be estimated. In quantum sensing, one usually calculates the quantum Cramér-Rao bound (QCRB) that provides a lower bound on the MSE of unbiased estimators. Because the two parameters correspond to Hamiltonian terms that commute with each other, the QCRB scales inverse quadratically with time, allowing us to achieve the Heisenberg-limited scaling. This bound, however, is valid only for local estimation where the prior distribution of the estimators is already concentrated around the exact value. Here we provide an estimation protocol that achieves this scaling without any prior knowledge of the parameters.
Our protocol builds upon a robust frequency estimation algorithm similar to the robust phase estimation algorithm proposed in <cit.> as well as the alternative version in <cit.>. In the robust phase estimation algorithm, we assume that through performing certain experiments that we will specify when introducing our protocol, we have access to a random variable Z_δ(t) from measurement results, such that |Z_δ(t)-e^-iω t|≤1 with probability at least 1-δ, and generating such a random variable requires evolution time 𝒪(t log(δ^-1)). With multiple samples of this variable for different values of t and δ, we can generate an estimate of ω with RMSE at most ϵ using 𝒪(ϵ^-1) total evolution time. The algorithm proceeds by iteratively obtaining estimates with increasing accuracy through longer time evolution until the target precision is achieved. A detailed description of the algorithm and proof of its correctness can be found in Section <ref>.
We initialize the system in a coherent state |α⟩=e^-|α|^2/2∑_k(α^k/√(k!))|k⟩, and let the system evolve under the Hamiltonian H_AHO. In the end we perform homodyne measurements with quadrature operators X=(b+b^†)/√(2) and P=i(b^†-b)/√(2) in separate experiments. With these measurement results we will be able to estimate ⟨b|_⟩α,t=⟨α|e^iH_AHO t b e^-iH_AHO t|α⟩, which can be exactly computed to be
⟨b|_⟩α,t = α e^-|α|^2 e^-iω t exp(|α|^2 e^-iξ t).
We perform this calculation in Section <ref>.
Using (<ref>), we can extract the values of ω and ξ from ⟨b|_⟩α,t. For ω, note that ⟨b|_⟩α,t/α = e^-iω t + 𝒪(|α|^2), and therefore we can choose |α| to be below a small constant so that an estimate for ⟨b|_⟩α,t/α will be close to e^-iω t within some small constant distance, which enables us to apply the robust frequency estimation algorithm to estimate ω with RMSE at most ϵ using total evolution time 𝒪(ϵ^-1).
For ξ, we can extract its value by constructing a periodically oscillating signal through
e^-iξ t = 1/(|α_1|^2-|α_2|^2) log((α_2⟨b|_⟩α_1,t)/(α_1⟨b|_⟩α_2,t)) + 1.
This enables us to estimate ξ using the robust frequency estimation algorithm. Note that, once again, ⟨b|_⟩α_1,t and ⟨b|_⟩α_2,t only need to be estimated to constant precision, rather than ϵ precision, which would result in an 𝒪(ϵ^-2) scaling that would destroy the Heisenberg-limited scaling.
In the above procedure, we need to estimate the expectation of the X and P operators, which are unbounded operators that can infinitely amplify any error in the quantum state. Fortunately, we found that we can replace them with the operators X𝟙_|X|≤ M and P𝟙_|P|≤ M, where 𝟙_|X|≤ M=∫_|x|≤ M|x⟩⟨x| dx and 𝟙_|P|≤ M is similarly defined. This means truncating the eigenvalues of these operators at a threshold M=𝒪(1). In practice, we can simply discard any X and P samples that are above the threshold M to implement the measurement associated with these truncated operators. This fact, together with the error tolerance in the robust frequency estimation algorithm, enables us to tolerate a constant amount of error from SPAM and time evolution.
The combined error from all sources should be below a small constant, which is sufficient for achieving arbitrarily high precision.
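As a sanity check of the closed-form expression for ⟨b|_⟩α,t, the following short numerical sketch (with an arbitrary Fock-space truncation and arbitrary test values of ω, ξ, α, and t; it is not part of the protocol) compares a direct simulation in a truncated Fock space with the formula:

```python
import numpy as np
from math import factorial

# Arbitrary test values; dim is the Fock-space truncation.
dim, omega, xi, alpha, t = 40, 0.7, 0.3, 0.5, 1.3
n = np.arange(dim)
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)                    # annihilation operator
energies = omega * n + 0.5 * xi * n * (n - 1)                   # H_AHO is diagonal in the Fock basis
coh = np.exp(-abs(alpha) ** 2 / 2) * np.array(
    [alpha ** k / np.sqrt(factorial(k)) for k in range(dim)])   # coherent state |alpha>
psi_t = np.exp(-1j * energies * t) * coh                        # e^{-i H t}|alpha>, elementwise
numeric = np.vdot(psi_t, b @ psi_t)                             # <b>_{alpha,t}
analytic = alpha * np.exp(-1j * omega * t) * np.exp(abs(alpha) ** 2 * (np.exp(-1j * xi * t) - 1))
print(abs(numeric - analytic))                                  # ~1e-16 up to truncation error
```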
§.§ Learning two coupled anharmonic oscillators
Next, we consider a system consisting of two coupled anharmonic oscillators, where the Hamiltonian is of the following form:
H = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + h_12b_1^†b_2 + h_21b_2^†b_1 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1)
The goal is to learn all the coefficients ω_1, ω_2, ξ_1, ξ_2, and h_12 (h_21=h^*_12).
We first focus on learning the single-mode coefficients ω_1, ω_2, ξ_1, and ξ_2. To do this, we will insert random unitaries during time evolution to decouple the bosonic modes from each other. In other words, the time evolution operator undergoes the following transformation
e^-iHt↦∏_j=1^r U_j^†e^-iHτ U_j = ∏_j=1^r e^-iU_j^†HU_jτ,
where the U_j, j=1,2,⋯,r, are the random beam splitters or phase shifters that we insert, r=t/τ, and the product goes from right to left. Each U_j is independently drawn from a distribution that we denote by 𝒟. In the limit of τ→ 0, the dynamics can be described by an effective Hamiltonian
H_effective = 𝔼_U∼𝒟 U^†HU.
This can be seen by considering the Taylor expansion of the time-evolved state in a small time step:
𝔼_U∼𝒟[e^-iU^†HUτρ e^iU^†HUτ] = ρ - iτ𝔼_U∼𝒟[[U^†HU,ρ]] + 𝒪(τ^2)
= e^-i𝔼_U∼𝒟[U^†HU]τρ e^i𝔼_U∼𝒟[U^†HU]τ + 𝒪(τ^2).
Note that the above is not a rigorous proof, because the 𝒪(τ^2) residue is an unbounded operator. We provide a rigorous bound on how far the actual dynamics deviate from the limiting effective dynamics with finite τ>0 in Section <ref>.
The above procedure introduces additional randomness to our protocol, but it does not introduce any sample complexity overhead, because we only need the final quantum states to be close in terms of the trace distance.
To learn all the single mode coefficients, we let the unitary U drawn from the distribution 𝒟 be
U = e^-iθ n_1, θ∼𝒰([0,2π]).
Here 𝒰([0,2π]) is the uniform distribution over [0,2π].
We can then compute the effective Hamiltonian
H_effective = 1/2π∫_0^2π e^iθ n_1 H e^-iθ n_1 dθ = ω_1 n_1 + ω_2 n_2 + ξ_1/2 n_1(n_1-1) + ξ_2/2 n_2(n_2-1).
In other words, the coupling term h_12b_1^†b_2 + h_21b_2^†b_1 is cancelled in the process, due to the equality e^iθ n_1b_1 e^-iθ n_1=e^-iθb_1.
We can interpret this procedure as enforcing a particle number conservation on the first bosonic mode. In the effective Hamiltonian, the two bosonic modes are no longer coupled together, and therefore we can apply the learning algorithm described in Section <ref> to learn the parameters of the two modes separately. For a more detailed description of the protocol see Section <ref>.
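The cancellation of the coupling term can also be checked numerically. The sketch below (with an arbitrary Fock-space truncation and arbitrary test coefficients) averages U^†HU over evenly spaced phases θ; since the θ-dependence enters only through e^{±iθ}, this discrete average coincides with the continuous one, and it confirms that exactly the hopping term is removed:

```python
import numpy as np

d = 6                                                 # Fock truncation per mode (illustration only)
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
I = np.eye(d)
b1, b2 = np.kron(b, I), np.kron(I, b)
n1, n2 = b1.conj().T @ b1, b2.conj().T @ b2
h12 = 0.4 + 0.2j                                      # arbitrary test coefficients
hop = h12 * b1.conj().T @ b2 + np.conj(h12) * b2.conj().T @ b1
H = (0.9 * n1 + 1.1 * n2 + hop
     + 0.25 * n1 @ (n1 - np.eye(d * d)) + 0.15 * n2 @ (n2 - np.eye(d * d)))
# Average U^dag H U with U = exp(-i*theta*n1) over evenly spaced theta (a discrete twirl).
thetas = 2 * np.pi * np.arange(8) / 8
H_eff = sum(np.diag(np.exp(1j * th * np.diag(n1))) @ H @ np.diag(np.exp(-1j * th * np.diag(n1)))
            for th in thetas) / len(thetas)
print(np.max(np.abs(H_eff - (H - hop))))              # ~1e-16: only the hopping term is removed
```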
Next, we will learn the coupling coefficient h_12. We will use the following unitaries
U_x(θ) = e^iθ (b_1^†b_2+b_2^†b_1), U_y(θ) = e^θ (b_1^†b_2-b_2^†b_1).
Our protocol is based on the observation that under a single-particle basis rotation, h_12 can be estimated from the new single-mode coefficients. More precisely, we let b̃_1 = U_y(π/4)b_1 U_y^†(π/4), b̃_2 = U_y(π/4)b_2 U_y^†(π/4), and the new bosonic modes will be related to the old ones through
[ b̃_1; b̃_2 ] = [ cos(π/4) sin(π/4); -sin(π/4) cos(π/4) ] [ b_1; b_2 ].
We will then rewrite the Hamiltonian (<ref>) in terms of b̃_1 and b̃_2. The quadratic part of H can be written as
ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1, where
ω̃_1 = (ω_1+ω_2)/2 + Re(h_12).
Therefore, Re(h_12) can be estimated if we can learn ω̃_1.
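For completeness, here is a short check of this relation (not spelled out in the text above). Inverting the rotation gives b_1=(b̃_1-b̃_2)/√2 and b_2=(b̃_1+b̃_2)/√2, and collecting the coefficient of b̃_1^†b̃_1 in the quadratic part yields, in standard LaTeX notation,

```latex
\omega_1 b_1^\dagger b_1+\omega_2 b_2^\dagger b_2+h_{12}b_1^\dagger b_2+h_{21}b_2^\dagger b_1
=\Big(\tfrac{\omega_1+\omega_2}{2}+\tfrac{h_{12}+h_{21}}{2}\Big)\tilde b_1^\dagger\tilde b_1+\cdots,
\qquad \tfrac{h_{12}+h_{21}}{2}=\mathrm{Re}\,h_{12},
```

where the last equality uses h_21=h_12^*.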
The quartic part becomes more complicated, but the procedure we describe next will yield an effective Hamiltonian of a simpler form.
In our protocol for learning h_12, we will let the random unitaries U_j in (<ref>) be
U_j = U_x(-θ/2), θ∼𝒰([0,2π]),
where 𝒰([0,2π]) denotes the uniform distribution on [0,2π]. Note that e^-iθñ_1=e^-i(θ/2)(n_1+n_2)U_x(-θ/2) where ñ_1=b̃_1^†b̃_1, and because the total particle number n_1+n_2 is conserved, the random unitary U_x(-θ/2) is equivalent to e^-iθñ_1 up to a global phase. This random unitary, as in (<ref>), results in an effective Hamiltonian in which ñ_1 is conserved. The effective Hamiltonian can be written as the following
H_effective = ω̃_1 ñ_1 + ω̃_2 ñ_2 + ξ̃_11/2ñ_1(ñ_1-1) + ξ̃_22/2ñ_2(ñ_2-1) + ξ̃_12ñ_1ñ_2.
In this effective Hamiltonian, the two bosonic modes b̃_1 and b̃_2 are still coupled through the term ξ̃_12ñ_1ñ_2. However, because the particle numbers on both modes are conserved, we can simply initialize the system with no particle on the mode b̃_2, and the coupling term will have no effect. More specifically, the initial state we use is U_y(π/4)|α⟩|0⟩, which is an α-eigenstate for b̃_1 and a 0-eigenstate for b̃_2. The effective Hamiltonian can then be further reduced to
H_effective' = ω̃_1 ñ_1 + ξ̃_11/2ñ_1(ñ_1-1).
This enables us to learn ω̃_1 using the single-mode protocol in Section <ref>, which then gives us h_12 through (<ref>). When performing homodyne measurement in the end, we also need to apply U_y(-π/4) to rotate back to the original single-particle basis. We write down the quantum state we get right before measurement to summarize the whole procedure:
U_y(-π/4)∏_j=1^r(U_x(θ_j/2)e^-iHτU_x(-θ_j/2))U_y(π/4)|α⟩|0⟩,
where all θ_j are independently drawn from the uniform distribution over [0,2π].
The above procedure yields Re(h_12). For Im(h_12), we only need to switch the roles of U_x(θ) and U_y(θ) and go through the same procedure. For a more detailed discussion, see Section <ref>.
§.§ Learning an N-mode system
So far, we have concerned ourselves with learning small systems with one or two modes, but the protocol we develop can be easily generalized to N-mode systems. This section will focus on N bosonic modes arranged on a 1D chain. For the more general situation with a bounded degree graph, e.g., D-dimensional square lattice, Kagome lattice, etc., see Section <ref>.
The Hamiltonian is described by (<ref>), where the bosonic modes are labeled 1,2,⋯, N, and i and j are adjacent only when j=i± 1.
For this N-mode system, we consider a divide-and-conquer approach. We will apply random unitaries so that in the effective dynamics, the system is divided into clusters of one or two modes, each of which does not interact with the rest of the system. In this way, we can learn the parameters associated with each cluster independently and in parallel using our protocol in Section <ref>.
More specifically, we apply random unitaries in the same way as described in (<ref>). The random unitary U_j is first chosen to be
U_j = ∏_k=1^⌊ N/3⌋ e^-iθ_3k n_3k,
where the random variables θ_3k are independently drawn from 𝒰([0,2π]), the uniform distribution over [0,2π]. Randomly applying the unitaries from this distribution enforces particle number conservation on sites with indices that are integer multiples of 3. Therefore, any Hamiltonian term b_i^†b_j that involves sites 3, 6, 9,⋯ are canceled. The effective Hamiltonian is
H = ω_1 n_1 + ω_2 n_2 + h_12b_1^†b_2 + h_21b_2^†b_1
+ ω_4 n_4 + ω_5 n_5 + h_45b_4^†b_5 + h_54b_5^†b_4
+ ⋯
+∑_iξ_i/2n_i(n_i-1),
where we did not include the terms ω_3 n_3, ω_6 n_6, etc., because they only contribute a global phase.
In this Hamiltonian, the two modes 1 and 2 form a cluster: they only interact with each other and not with the rest of the system. The same is true for modes 4 and 5, modes 7 and 8, etc. We can then apply the two-mode protocol in Section <ref> to learn all coefficients associated with modes 1, 2, 4, 5, ... Note that coefficients associated with different clusters can be learned in parallel in the same experiment.
Other coefficients remain to learn, such as ω_3, h_23, and h_34. We can adopt the same strategy but choose the random unitary U_j = ∏_k=0^⌊ N/3⌋-1 e^-iθ_3k+1 n_3k+1 so that modes 2 and 3, 5 and 6, etc. now form clusters. Similarly, we can let modes 3 and 4, 6 and 7, etc., form clusters. In this way, we can learn all the coefficients in the Hamiltonian using three different clustering schemes. The total evolution time required for carrying out all experiments will only be three times the cost of a two-mode protocol because different clusters can be learned in parallel.
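The bookkeeping for the three clustering schemes is simple enough to state explicitly; the helper below (illustrative only, not code from this work) lists, for each scheme, the sites that receive random phases and the resulting two-mode clusters:

```python
def clusters_1d(N, shift):
    """Sites receiving random phase kicks (whose hopping terms are cancelled) and the
    resulting two-mode clusters on a 1D chain of N modes; shift in {0, 1, 2} selects
    the clustering scheme. An illustrative helper, not code from the paper."""
    frozen = [i for i in range(1, N + 1) if (i - shift) % 3 == 0]
    pairs = [(i, i + 1) for i in range(1, N)
             if i not in frozen and i + 1 not in frozen]
    return frozen, pairs

# For N = 8:
#   shift=0 -> frozen [3, 6],    clusters [(1, 2), (4, 5), (7, 8)]
#   shift=1 -> frozen [1, 4, 7], clusters [(2, 3), (5, 6)]
#   shift=2 -> frozen [2, 5, 8], clusters [(3, 4), (6, 7)]
# Together the three schemes cover every nearest-neighbour pair and every site.
```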
More generally, we consider a system whose interaction can be described by a bounded-degree graph. We can design similar clustering schemes based on an appropriate coloring of its link graph, i.e., the graph whose vertices are the edges of the original graph. The overhead introduced will be quadratic in the degree of the original graph and independent of the system size N. This is discussed in more detail in Section <ref>.
§ DISCUSSION
In this work, we propose a protocol to learn a class of interacting bosonic Hamiltonians with Heisenberg-limited scaling. Our protocol uses only elements of linear optics that can be implemented on various experimental platforms. Besides achieving the Heisenberg-limited scaling, our protocol can also tolerate a constant amount of
SPAM noise thanks to the robust frequency estimation subroutine discussed in Section <ref>. As a part of the protocol, we also propose a method to enforce symmetry on the effective Hamiltonian governing the system's evolution as discussed in more detail in Section <ref>.
To our knowledge, our work is the first to propose a method that learns interacting bosonic Hamiltonians with Heisenberg-limited scaling in a scalable way. However, many open problems remain to be solved in this research direction. In this work, we only consider the particle-number preserving Hamiltonian in (<ref>), but realistic Hamiltonians may contain terms that do not preserve the particle number, such as the coupling term in the Jaynes–Cummings model <cit.> and capacitive and inductive couplings between superconducting circuits <cit.>. Also, higher-order anharmonic effects beyond the fourth order may be non-negligible in certain quantum systems.
In our protocol, we need to apply random unitaries at a rate that depends on the target precision. For higher precision, these unitaries need to be applied faster, which may be a problem for experimental implementation. A possible solution is to use some form of continuous control as considered in <cit.>. Moreover, since our protocol requires letting the system evolve coherently for a total time of 𝒪(ϵ^-1) to reach ϵ precision, the achievable precision will be limited by quantum noise such as dephasing and photon losses that limit the coherence time of most experimental bosonic systems.
It would therefore be interesting to explore whether noise suppression techniques such as dynamical decoupling <cit.> and quantum error correction <cit.> can mitigate this limitation and whether they can be incorporated into our protocol in a useful and scalable way.
Random Clifford unitaries played a crucial role in the classical shadow formalism <cit.> as well as Hamiltonian learning <cit.>. Similarly, one may wonder whether the random gaussian unitaries used in this work can be useful for other quantum information tasks for bosonic systems, such as classical shadow tomography for continuous-variable systems <cit.>.
§ METHODS
§.§ Enforcing symmetry using random unitaries
This section will describe how to enforce symmetry using random unitaries. This strategy is similar in spirit to the symmetry protection strategies in <cit.>, but is easier to scale to an N-mode system in the current setting.
Let us first consider the general case where we have a compact Lie group G that describes the symmetry we want in the quantum system. Our quantum system is evolving under a Hamiltonian H that does not necessarily satisfy this symmetry, i.e., there may exist g∈ G such that gHg^-1≠ H (here we equate an element of the Lie group with its matrix representation). We want to have the system evolve under an effective Hamiltonian H_effective that satisfies the symmetry, i.e.,
gH_effectiveg^-1 = H_effective.
We achieve this by inserting random unitaries in the same way as in (<ref>), which gives us an effective Hamiltonian according to (<ref>). The distribution from which we draw the random unitaries is the Haar measure on G, which we denote by μ. The effective Hamiltonian can be computed as
H_effective = ∫ gHg^-1 μ(dg).
When the Hamiltonian H is unbounded, the above equality may only hold in a weak sense.
We can verify that this effective Hamiltonian satisfies the desired symmetry because
g' H_effective g'^-1 = ∫ g'gH(g'g)^-1 μ(dg) = ∫ g'gH(g'g)^-1 μ(d(g'g)) = H_effective.
Here we have used the property of the Haar measure that μ(d(g'g))=μ(dg).
It may not be easy to randomly apply elements from the symmetry group G. Still, in our learning protocol, we will only enforce symmetries that are either U(1) or U(1)×U(1)×⋯×U(1)=U(1)^× N, where sampling can easily be done for each U(1) group separately.
§ ACKNOWLEDGEMENTS
The authors thank Matthias Caro for helpful discussions.
Y.T. acknowledges funding from the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (DE-NA0003525, and DE-SC0020290). Work supported by DE-SC0020290 is supported by the DOE QuantISED program through the theory consortium “Intersections of QIS and Theoretical Particle Physics” at Fermilab. The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center. The work of H.L. and L.Y. is partially supported by National Science Foundation under awards DMS-2011699 and DMS-2208163. T.G. acknowledges funding provided by the Institute for Quantum Information and Matter and the Quantum Science and Technology Scholarship of the Israel Council for Higher Education.
§ ROBUST FREQUENCY ESTIMATION
Our main tool to achieve the Heisenberg limit is an algorithm to estimate the frequency from a complex-valued signal with Heisenberg-limited scaling. This algorithm resembles the robust phase estimation algorithm in <cit.> but is different in that we can deal with cases where the frequency we want is encoded in the expectation value rather than the probability.
More precisely, we assume access to a signal Z_δ(t) that is close to e^-iω t, where |ω|<W, by a constant amount of error in both the phase and the amplitude with probability 1-δ, where δ can be tuned. It is also reasonable to assume that for smaller δ, generating the corresponding Z_δ(t) will be more costly, i.e., requiring longer evolution time. Our algorithm then uses Z_δ(t) for different values of δ and t to refine the estimation of ω iteratively. In each iteration, we use the result from the previous iteration to get an estimate θ_j satisfying
ω/W̃ ∈ (θ_j-π/(3·2^j), θ_j+π/(3·2^j)) mod 2π,
where
W̃ = 3W/π
is a normalization factor. A detailed algorithm description can be found in Algorithm <ref>, adapted from <cit.>.
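Before analyzing the algorithm formally, the following is an idealized sketch of the iterative refinement (for a noiseless signal; the δ_j schedule, the error tolerance η, and the failure handling of the actual algorithm are omitted, and all names are our own):

```python
import numpy as np

def circ_dist(a, b):
    """|a - b|_{2pi}: distance on the circle, i.e. modulo 2*pi."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def robust_freq_estimate(signal, W, J):
    """Idealized sketch of the iterative refinement for a noiseless signal(t) ~ exp(-1j*omega*t),
    with |omega| < W and J refinement steps."""
    Wt = 3.0 * W / np.pi                              # normalization factor W~
    theta = 0.0
    for j in range(J):
        t_j = 2.0 ** j / Wt
        phi = -np.angle(signal(t_j))                  # ~ omega * t_j (mod 2*pi)
        cand = (2 * np.pi * np.arange(2 ** j) + phi) / 2 ** j
        theta = cand[np.argmin(circ_dist(cand, theta))]
    theta = (theta + np.pi) % (2 * np.pi) - np.pi     # representative in [-pi, pi)
    return Wt * theta                                 # estimate of omega

# e.g. robust_freq_estimate(lambda t: np.exp(-1j * 0.37 * t), W=1.0, J=20) returns ~0.37
```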
In the following theorem, we analyze the performance of the above algorithm for a fixed set of values for δ in each iteration. We will then optimize these values to achieve the (ϵ^-1) scaling in Corollary <ref>.
Suppose that |ω|<W is known in advance and that we have access to a signal Z_δ(t) such that
* |Z_δ(t)|=1,
* |Z_δ(t)-e^-i(ω t+f(t))|≤η with probability at least 1-δ, where sup_t|f(t)|≤ C_f <π/3,
* 2arcsin(η/2) + C_f≤π/3,
* generating Z_δ(t) requires evolution time C_Z t(log(δ^-1)+1),
then we can produce an estimate ω̂ such that
𝔼[|ω̂-ω|^2]≤∑_j=0^J-1E_j^2 δ_j + ϵ^2/4,
with total evolution time at most
π C_Z/W̃ ∑_j=0^J-1 2^j(log(δ_j^-1)+1),
where
E_0=2π, E_j=4πW̃/(3·2^j) ∀ j≥ 1, J = ⌈log_2(4πW̃/(3ϵ))⌉, W̃ = 3W/π,
and each δ_j∈ (0,1] is arbitrarily chosen.
Denote ω/W̃=ωπ/(3W) by ω̃; then |ω̃|<π/3.
We proceed by choosing a sequence of t_j, j=0, 1, …, J-1 and refining the estimation of ω̃ progressively. In each iteration we generate a signal Z_δ_j(t_j) for arbitrarily chosen δ_j.
First, let t_0=1/W̃, then with probability at least 1-δ_0,
|Z_δ_0(t_0)-e^-i(ω̃ +f(t_0))|≤η,
which yields
|ω̃-(-arg Z_δ_0(t_0))|≤ 2arcsin(η/2) + C_f<π/3 (mod 2π).
Thus ω̃∈ (-arg Z_δ_0(t_0)-π/3+2kπ, -arg Z_δ_0(t_0)+π/3+2kπ) for some integer k. Let θ_-1 = 0 and S_0 = {-arg Z_δ_0(t_0)}, then by choosing θ_0 = argmin_θ∈ S_0|θ-θ_-1|_2π=-arg Z_δ_0(t_0), we obtain
ω̃∈ (θ_0-π/3, θ_0+π/3) mod 2π.
Here |θ|_2π is defined to be the minimum distance of θ to 0 modulo 2π, i.e., |θ|_2π = π - |(θ mod 2π)-π|.
At step j, we set t_j = 2t_j-1, S_j = {(2kπ-arg Z_δ_j(t_j))/2^j}_k=0,…,2^j-1, and θ_j = argmin_θ∈ S_j|θ - θ_j-1|_2π.
Now we are ready to prove that if
|Z_δ_j'(t_j')-e^-i(ω̃2^j' +f(t_j'))|≤η,
for all 0≤ j'≤ j, then
ω̃∈ (θ_j-π/(3·2^j), θ_j+π/(3·2^j)) mod 2π,
for all j by induction. The case j=0 is already proved. Suppose (<ref>) holds for j-1.
Because of <ref>, we have
|ω̃2^j-(-arg Z_δ_j(t_j))|≤ 2arcsin(η/2) + C_f<π/3 (mod 2π).
Thus
ω̃∈ I_k := ((2kπ - arg Z_δ_j(t_j)-π/3)/2^j, (2kπ - arg Z_δ_j(t_j)+π/3)/2^j) mod 2π,
for some k=0,1, …, 2^j-1. Notice that dist(I_k, I_k')≥π/2^(j-1)-π/(3·2^(j-1)) = π/(3·2^(j-2)) for any k≠k', and that the length of the previous estimation interval (θ_j-1-π/(3·2^(j-1)), θ_j-1+π/(3·2^(j-1))) is exactly π/(3·2^(j-2)), we can ensure that only one k^* satisfies I_k∩(θ_j-1-π/(3·2^(j-1)), θ_j-1+π/(3·2^(j-1)))≠∅ (mod 2π). Moreover, the corresponding k^* satisfies
(2k^*π-arg Z_δ_j(t_j))/2^j = argmin_θ∈ S_j|θ-θ_j-1|_2π, since
|(2k^*π-arg Z_δ_j(t_j))/2^j-θ_j-1|_2π≤|(2k^*π-arg Z_δ_j(t_j))/2^j-ω̃|_2π + |ω̃-θ_j-1|_2π < π/(3·2^j) + π/(3·2^(j-1)) = π/2^j,
and
|(2kπ-arg Z_δ_j(t_j))/2^j-θ_j-1|_2π≥π/2^(j-1) - |(2k^*π-arg Z_δ_j(t_j))/2^j-θ_j-1|_2π > π/2^(j-1) - π/2^j = π/2^j
for any k≠k^*. Now we have proved (<ref>).
In the end, notice that (<ref>) has an ambiguity of modulus 2π, we add a proper integer multiple of 2π to θ_J-1 such that |θ_J-1|≤π. We then choose this adjusted θ_J-1 as our estimate for ω̃, and our estimate for ω is Wθ_J-1=:ω̂.
From the above analysis we can see that if (<ref>) holds for 0≤ j'≤ j-1, which means that all the iterations from 0 to j-1 are successful, then by (<ref>), ω̃ is contained in (θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2kπ for some integer k, and our estimate θ_J-1 is contained in (θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2k'π for some integer k'. Since |ω̃|<π/3, we have ((θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2kπ)⊂ (-π, π), and then ((θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2k'π)∩ [-π, π]=∅ if k'≠k. Hence we must have k=k' since |θ_J-1|≤π. Therefore the error in the normalized ω̃ is at most E_j/W=4π/(3· 2^j), for j=1,2,⋯,J-1. If the very first iteration fails, the error is at most E_0/W=2π. If all the iterations are successful, then by (<ref>) and the argument above, |θ_J-1-ω̃|≤π/(3· 2^J-1)≤ϵ/(2W). From these observations, we will compute the expected error.
We define the random variable j_fail to be the first iteration that fails, i.e.,
|Z_δ_j'(t_j')-e^-i(ω̃2^j' +f(t_j'))|≤η, ∀ j'< j_fail, |Z_δ_j_fail(t_j_fail)-e^-i(ω̃2^j_fail +f(t_j_fail))|> η.
If such a j_fail cannot be found, i.e., all iterations are successful, then we let j_fail=J.
From the above analysis, conditional on j_fail=j<J, the error will be at most E_j/W. In other words 𝔼[|ω̃-θ_J-1|^2|j_fail=j]≤ E_j^2/W^2. If j=J, then the error is at most ϵ/(2W). Also, we have
[j_fail=j] = (1-δ_0)(1-δ_1)⋯ (1-δ_j-1)δ_j≤δ_j.
Therefore the expected square error is
𝔼[|ω-ω̂|^2] = W^2 𝔼[|ω̃-θ_J-1|^2]
=W^2 ∑_j=0^J𝔼[|ω̃-θ_J-1|^2|j_fail=j][j_fail=j]
≤∑_j=0^J-1 E_j^2 δ_j + ϵ^2/4.
This proves (<ref>). Generation of each Z_δ_j(t_j) requires an evolution time of C_Z t_j(log(δ_j^-1)+1), and hence we have total evolution time (<ref>) by adding them up.
In the theorem above, we have left a great deal of flexibility in choosing δ_j. Below, we will try to answer that if we want the MSE to satisfy 𝔼[|ω̂-ω|^2]≤ϵ^2, how we should choose the δ_j to minimize the total evolution time required. We first state our result:
Suppose that |ω|<W is known in advance and that we have access to a signal Z_δ(t) such that
* |Z_δ(t)|=1,
* |Z_δ(t)-e^-i(ω t+f(t))|≤η with probability at least 1-δ, where sup_t|f(t)|≤ C_f <π/3,
* 2arcsin(η/2) + C_f≤π/3,
* generating Z_δ(t) requires evolution time C_Z t(log(δ^-1)+1),
then we can produce an estimate ω̂ such that 𝔼[|ω̂-ω|^2]≤ϵ^2, with total evolution time at most (C_Z ϵ^-1).
By (<ref>) and (<ref>), we essentially need to solve the following optimization problem to get the optimal {δ_j}:
minimize over {δ_j}: ∑_j=0^J-1 2^j log(δ_j^-1)
subject to ∑_j=0^J-1 E_j^2 δ_j ≤ (3/4)ϵ^2.
This optimization problem can be easily solved using the concavity of the logarithmic function, and the optimal δ_j is
δ_j = (3ϵ^2/(4E_j^2)) · (2^j/(2^J-1)).
Using this choice of δ_j, we can then compute the total evolution time required through <ref>.
π C_Z/W̃ ∑_j=0^J-1 2^j(log(δ_j^-1)+1) = π C_Z/W̃ ∑_j=0^J-1 2^j log(δ_j^-1) + π C_Z/W̃ (2^J-1),
where we denote the first term on the right-hand side by (I) and the second by (II).
For term (II), we have
π C_Z/W̃ (2^J-1) < 8π^2 C_Z/(3ϵ)
by our choice of J given in <ref>.
For (I), using our expression for δ_j and the expression for E_j in (<ref>), we have
(I) = π C_Z/W∑_j=0^J-12^jlog(4E_j^2/3ϵ^22^J-1/2^j)
= π C_Z/Wlog(16π^2 W^2/3ϵ^2(2^J-1)) + π C_Z/W∑_j=1^J-12^jlog(64π^2 W^2/27ϵ^22^J-1/2^j1/4^j)
< π C_Z/Wlog(64π^3 W^3/9ϵ^3) + π C_Z/W∑_j=1^J-12^jlog(4/38^J-j)
= π C_Z/Wlog(64π^3 W^3/9ϵ^3) + π C_Z/Wlog(4/3)(2^J-2) + π C_Z/Wlog(8)(2^J+2-2J-2)
≤(ϵ) + (C_Z ϵ^-1).
In the last line, we have used the fact that ϵ≤ W (as otherwise we can simply estimate ω by 0) to bound the first term on the second-to-last line. Combining (<ref>), (<ref>), and (<ref>), we can see that the total evolution time of the entire procedure is (C_Z ϵ^-1).
§ LEARNING AN ANHARMONIC OSCILLATOR
The basic building block of our algorithm is a method to learn a single anharmonic oscillator of the form
H_AHO = ω b^†b + ξ/2n(n-1),
where n=b^†b.
We will then outline the experiments we run to learn the coefficients ω and ξ from this Hamiltonian. We first start with a coherent state
|α⟩ = e^-|α|^2/2∑_k=0^∞α^k/√(k!)|k⟩.
We then let the system evolve under the Hamiltonian H_AHO for time t, and obtain the quantum state
e^-i H_AHO t|α⟩ = e^-|α|^2/2∑_k=0^∞α^k/√(k!)e^-iω k t-iξ/2k(k-1)t|k⟩.
In the end, we perform POVM measurement in the eigenbasis of either X=(b+b^†)/√(2) or P=i(b^†-b)/√(2), and by taking average we obtain ⟨X|_⟩α,t and ⟨P|_⟩α,t, where ⟨·|_⟩α,t means taking expectation with respect to the state e^-iH_AHO t|α⟩. With these, we can then obtain the expectation value of b through
⟨b|_⟩α,t=1/√(2)(⟨X|_⟩α,t + i⟨P|_⟩α,t).
The expectation values ⟨b|_⟩α,t for a certain set of α and t will enable us to estimate ω and ξ, and we will demonstrate this below. First we can compute b e^-i H_AHO t|α⟩ to be
be^-iH_AHOt|α⟩ = e^-|α|^2/2∑_k=1^∞α^k e^-iω k te^-iξ/2 k(k-1)t/√((k-1)!)|k-1⟩
= e^-|α|^2/2∑_k=0^∞α^k+1 e^-iω (k+1) te^-iξ/2 k(k+1)t/√(k!)|k⟩.
This yields a closed-form expression for ⟨b⟩_α,t:
⟨b⟩_α,t = e^{-|α|^2}∑_k α |α|^{2k} e^{-iω t}e^{-iξ kt}/k!
= α e^{-|α|^2} e^{-iω t} e^{|α|^2 e^{-iξ t}}.
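As a quick sanity check of this closed form, one may compare it with a direct evaluation of ⟨b⟩ in a truncated Fock basis. The Python sketch below is purely illustrative (the numerical parameter values are arbitrary) and is not part of the protocol.

import numpy as np
from math import factorial

def expect_b(alpha, omega, xi, t, kmax=60):
    # <b> in the state e^{-i H_AHO t}|alpha>, computed in a truncated Fock basis
    k = np.arange(kmax)
    amps = np.array([alpha**m / np.sqrt(factorial(m)) for m in k], dtype=complex)
    c = np.exp(-abs(alpha)**2 / 2) * amps * np.exp(-1j * (omega * k + 0.5 * xi * k * (k - 1)) * t)
    return np.sum(np.sqrt(k[1:]) * np.conj(c[:-1]) * c[1:])   # sum_k sqrt(k+1) c_k^* c_{k+1}

alpha, omega, xi, t = 0.8, 1.3, 0.2, 0.7
closed = alpha * np.exp(-abs(alpha)**2) * np.exp(-1j * omega * t) * np.exp(abs(alpha)**2 * np.exp(-1j * xi * t))
print(abs(expect_b(alpha, omega, xi, t) - closed))   # ~1e-15 for this truncation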
Now we are ready to estimate ω and ξ with the help of Corollary <ref>. To estimate ω, we define
Z(t) = ⟨b⟩_α,t/|⟨b⟩_α,t| = e^{-i(ω t + |α|^2sin(ξ t))},
then Z(t) = e^-i(ω t + f(t)), where f(t) = |α|^2sin(ξ t). Therefore, sup_t|f(t)|≤|α|^2. The exact value of Z(t) is, however, inaccessible in practice, and we need to find an approximation Z_δ(t) such that |Z_δ(t)-Z(t)|≤η with probability at least 1-δ if we want to utilize Corollary <ref>. In the following, we decompose the approximation error into three parts and analyze them separately.
Truncation error.
The first part of the approximation error comes from the truncation of the observables. In our protocol, we truncate the observables at a threshold M, which means that rather than estimating ⟨X⟩_α,t and ⟨P⟩_α,t we estimate ⟨X 1_{|X|≤ M}⟩_α,t and ⟨P 1_{|P|≤ M}⟩_α,t.
Here 1_{|X|≤ M} and 1_{|P|≤ M} are the projectors defined by
1_{|X|≤ M} = ∫_{-M}^{M}|x⟩⟨x| dx, 1_{|P|≤ M} = ∫_{-M}^{M}|p⟩⟨p| dp.
This is necessary for the robustness of our protocol. With the original unbounded observables X and P, any small error in the quantum state can potentially be infinitely magnified in the expectation value. The use of bounded observables will ensure that this does not happen.
In the following, we will ensure that the error introduced by this truncation is acceptable for our protocol.
From Chebyshev's inequality, one has
ℙ(|X|≥ M)≤⟨X^2|_⟩α,t/M^2≤2⟨b^† b|_⟩α, t+1/M^2 = 2|α|^2+1/M^2,
where we have used the fact that ⟨b^† b|_⟩α, t=|α|^2. Then, by Cauchy-Schwarz inequality,
|⟨X|_⟩α,t-⟨X|X|≤ M|_⟩α,t| = |⟨X|X|>M|_⟩α,t|≤√(⟨X^2|_⟩α,t)√(⟨|X|>M^2|_⟩α,t)
≤√(2⟨b^† b|_⟩α,t+1)√(ℙ(|X|≥ M))≤2|α|^2+1/M.
Similarly, one has
ℙ(|P|≥ M) ≤⟨P^2|_⟩α,t/M^2≤2⟨b^† b|_⟩α,t+1/M^2 = 2|α|^2+1/M^2,
and
|⟨P|_⟩α,t-⟨P|P|≤ M|_⟩α,t|≤2|α|^2+1/M.
Combining the error bounds for X and P truncations, we will have an error bound for the truncated b operator.
Let
Z_M(t)=1/√(2)(⟨X|X|≤ M|_⟩α,t + i⟨P|P|≤ M|_⟩α,t),
then
Z_M(t)-⟨b|_⟩α,t = √( Z_M(t) - ⟨b|_⟩α,t^2 + Z_M(t) - ⟨b|_⟩α,t^2)
= √((⟨X|_⟩α,t-⟨X|X|≤ M|_⟩α,t^2 + ⟨P|_⟩α,t-⟨P|P|≤ M|_⟩α,t^2))
≤2|α|^2+1/M.
Simulation error.
In practice, the final state we obtained is different from the ideal state e^-iH_AHO t|α⟩.
This is because in the multi-mode situation, H_AHO is the Hamiltonian of the effective dynamics, which differs from the actual dynamics by a small error. In this sense, we only simulate the effective dynamics, thus calling this error the simulation error.
We denote the expectation with respect to the real final state obtained by ⟨·|_⟩α, t, r, where r stands for the parameters used in the simulation. More precisely, as will be explained in Section <ref>, and in particular (<ref>), r is the number of random unitaries that we insert during the time evolution. In Section <ref>, we will show that for any given η_0>0, there exists a choice of r such that
⟨O|_⟩α, t, r-⟨O|_⟩α, t≤Oη_0,
for any bounded observable O. In particular, for any given η_0>0, there is a choice of r such that
⟨X|X|≤ M|_⟩α, t, r-⟨X|X|≤ M|_⟩α, t≤ Mη_0, ⟨P|P|≤ M|_⟩α, t, r-⟨P|P|≤ M|_⟩α, t≤ Mη_0.
Define
Z_M,r(t)=1/√(2)(⟨X|X|≤ M|_⟩α,t,r + i⟨P|P|≤ M|_⟩α,t,r),
then
Z_M,r(t)-Z_M(t)
= √((⟨X|X|≤ M|_⟩α,t,r-⟨X|X|≤ M|_⟩α,t^2 + ⟨P|P|≤ M|_⟩α,t,r-⟨P|P|≤ M|_⟩α,t^2))
≤ Mη_0.
Statistical error. In practice, homodyne measurement generates samples corresponding to the quadrature operator X. By discarding the samples with norm larger than M, we obtain samples x̂_1,x̂_2,⋯,x̂_L corresponding to X|X|≤ M. We then approximate ⟨X|X|≤ M|_⟩α, t, r through the average x̅=(x̂_1+x̂_2+⋯+x̂_L)/L. Similarly, we can generate p̂_1,p̂_2,⋯,p̂_L corresponding to P|P|≤ M, and use p̅ = (p̂_1+p̂_2+⋯+p̂_L)/L to approximate ⟨P|P|≤ M|_⟩α, t, r. It is clear that x̂ and p̂ are unbiased estimates for ⟨X|X|≤ M|_⟩α, t, r and ⟨P|P|≤ M|_⟩α, t, r. Define
Z̅ = 1/√(2)(x̅+ip̅),
then
Z̅-Z_M,r(t)= √((x̅-⟨X|X|≤ M|_⟩α,t,r^2 + p̅-⟨P|P|≤ M|_⟩α,t,r^2))
≤max{x̅-⟨X|X|≤ M|_⟩α,t,r, p̅-⟨P|P|≤ M|_⟩α,t,r}.
Thus by the union bound and Hoeffding's inequality, we have
ℙ(|Z̅-Z_M,r(t)|≥η_1) ≤ℙ(x̅-⟨X|X|≤ M|_⟩α,t,r≥η_1) + ℙ(p̅-⟨P|P|≤ M|_⟩α,t,r≥η_1)
≤ 2e^-Lη_1^2/2M^2+2e^-Lη_1^2/2M^2= 4e^-Lη_1^2/2M^2.
Putting the three types of error together, we have
|Z̅-⟨b⟩_α,t| ≤ |Z̅-Z_{M,r}(t)|+|Z_{M,r}(t)-Z_M(t)|+|Z_M(t)-⟨b⟩_α,t|
≤ η_1+Mη_0+(2|α|^2+1)/M,
with probability at least 1-4e^-Lη_1^2/2M^2. Define
Z_δ(t) = Z̅/|Z̅|,
then
|Z_δ(t)-Z(t)| = |Z̅/|Z̅|-⟨b⟩_α,t/|⟨b⟩_α,t|| ≤ 2|Z̅-⟨b⟩_α,t|/|⟨b⟩_α,t|
= 2|Z̅-⟨b⟩_α,t|/(|α|e^{|α|^2(cos(ξ t)-1)}) ≤ 2|α|^{-1}e^{2|α|^2}|Z̅-⟨b⟩_α,t|.
Hence,
|Z_δ(t)-Z(t)| ≤ 2|α|^{-1}e^{2|α|^2}(η_1+Mη_0+(2|α|^2+1)/M),
with probability at least 1-4e^-Lη_1^2/2M^2. In order for the condition 2arcsinη/2 + C_f≤π/3 in Theorem <ref> to hold, we need
2arcsin(|α|^-1e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)) + |α|^2 ≤π/3.
In order for 1-4e^-Lη_1^2/2M^2≥1-δ to hold, we need
L≥2M^2/η_1^2log4/δ.
In conclusion, we have constructed a signal Z_δ(t) to estimate the parameter ω that satisfies the conditions required by Corollary <ref>.
Define Z_δ(t) = Z̅/|Z̅|, where Z̅ = 1/√(2)(x̅+ip̅), and (x̅, p̅) are the average values computed from L measurement results each for ⟨X 1_{|X|≤ M}⟩_α, t, r and ⟨P 1_{|P|≤ M}⟩_α, t, r, respectively. Here ⟨·⟩_α, t, r denotes the expectation with respect to the real final state obtained by a simulation using r randomly inserted unitaries, which is an approximation of the state e^-iH_AHO t|α⟩ satisfying |⟨O⟩_α, t, r-⟨O⟩_α, t|≤‖O‖η_0 for any bounded operator O. Then Z_δ(t) satisfies the conditions of Corollary <ref> for the estimation of ω if
|α|^2<π/3, M > e^2|α|^2(2|α|^2+1)/|α|sin(π/6-|α|^2/2),
η_0<1/M(|α|e^-2|α|^2sin(π/6-|α|^2/2)-2|α|^2+1/M),
η_1≤ |α|e^-2|α|^2sin(π/6-|α|^2/2)-2|α|^2+1/M -Mη_0,
L≥2M^2/η_1^2log4/δ.
As a result, α, M, η_0 and η_1 can be chosen as 𝒪(1) constants and the total runtime needed in producing Z_δ(t) is 𝒪(t(log(1/δ)+1)).
When choosing the parameters in practice, one can follow the order in (<ref>), i.e., first decide the value of α, then choose M, η_0, η_1 and L accordingly.
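For concreteness, a small Python helper of our own (not from the original text) that follows this ordering might look as follows; the factor slack is an arbitrary choice used to stay strictly inside the allowed ranges of the conditions above.

import numpy as np

def choose_parameters(alpha=0.5, delta=1e-2, slack=0.9):
    # Follow the ordering above: alpha -> M -> eta_0 -> eta_1 -> L.
    a2 = abs(alpha)**2
    assert a2 < np.pi / 3
    s = np.sin(np.pi / 6 - a2 / 2)
    M = np.exp(2 * a2) * (2 * a2 + 1) / (abs(alpha) * s) / slack    # strictly above its lower bound
    margin = abs(alpha) * np.exp(-2 * a2) * s - (2 * a2 + 1) / M
    eta0 = slack * margin / M
    eta1 = margin - M * eta0                                        # positive by construction
    L = int(np.ceil(2 * M**2 / eta1**2 * np.log(4 / delta)))
    return M, eta0, eta1, L

print(choose_parameters())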
Next, we will build the signal Z_δ(t) for the estimation of ξ with the help of the result above. In particular, when (<ref>) and (<ref>) hold, one can deduce that
|Z̅-⟨b⟩_α, t| ≤ |α|e^{-2|α|^2}sin(π/6-|α|^2/2)≤|α|e^{-2|α|^2}≤|⟨b⟩_α, t|.
We first observe that cos(ξ t) can be obtained by
cos(ξ t) = 1/|α|^2 log|⟨b⟩_α, t/α| + 1.
Therefore, when when (<ref>) and (<ref>) hold, the error in the estimation of cos(ξ t) caused by using Z̅ instead of ⟨b|_⟩α, t is
(1/|α|^2log⟨b|_⟩α, t/α + 1) -
(1/|α|^2logZ̅/α+1)
= 1/|α|^2logZ̅/⟨b|_⟩α, t=1/|α|^2log(1+Z̅-⟨b|_⟩α, t/⟨b|_⟩α, t)
≤ 2log2/|α|^2Z̅-⟨b|_⟩α, t/⟨b|_⟩α, t
≤ 2log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M),
where we have used the concavity of log x and (<ref>) in the third line. For estimating sin(ξ t), we use two different values for α. The ratio between ⟨b|_⟩α_1,t and ⟨b|_⟩α_2,t is
⟨b|_⟩α_1,t/⟨b|_⟩α_2,t = α_1/α_2 e^(|α_2|^2-|α_1|^2)(1-e^-iξ t).
Let Z_α_1, α_2 = (⟨b⟩_α_1,t/⟨b⟩_α_2,t)/|⟨b⟩_α_1,t/⟨b⟩_α_2,t|
and β = α_2^2-α_1^2.
Assume that β<π/2, then
sin(ξ t) = 1/β arcsin(ℑ Z_α_1, α_2).
Now we analyze the error in the estimate of sin(ξ t) caused by approximation. We assume that (<ref>) holds for both α_1 and α_2, and we condition on the event that
Z̅_1-⟨b|_⟩α_1, t≤η_1+Mη_0+2|α_1|^2+1/M,
Z̅_2-⟨b|_⟩α_2, t≤η_1+Mη_0+2|α_2|^2+1/M.
Then
Z̅_1/Z̅_2/Z̅_1/Z̅_2-⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ Z̅_1/Z̅_2/Z̅_1/Z̅_2- ⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ 2Z̅_1/Z̅_2-⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ 2⟨b|_⟩α_2, t/⟨b|_⟩α_1, t[Z̅_1-⟨b|_⟩α_1,t/Z̅_2+⟨b|_⟩α_1,t/⟨b|_⟩α_2,tZ̅_2-⟨b|_⟩α_2,t/Z̅_2]
≤ 4[Z̅_1-⟨b|_⟩α_1,t/⟨b|_⟩α_1,t+Z̅_2-⟨b|_⟩α_2,t/⟨b|_⟩α_2,t]
≤ 4|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+4|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M).
Here we have used the fact that Z̅≥⟨b|_⟩α_,t in the fifth line, which can be deduced from (<ref>). Now, if we further assume that β≤π/3, then since the function arcsin is 2-Lipschitz on [-sinπ/3, sinπ/3], we have
1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2)-1/βarcsin( Z_α_1, α_2)
≤ 2/βZ̅_1/Z̅_2/Z̅_1/Z̅_2- Z_α_1, α_2
≤ 8/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M))
Combining (<ref>) and (<ref>), we have
e^iξ t-ĉ+iŝ/ĉ+iŝ≤ 4log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)
+16/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M)),
where ĉ = 1/|α|^2logZ̅/α + 1 and ŝ = 1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2),
and the condition in Corollary <ref> reads
1≥4log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)
+16/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M)).
In particular, we can take α=α_1 and obtain the following result.
Define Z_δ(t) = ĉ+iŝ/ĉ+iŝ, where ĉ = 1/|α_1|^2logZ̅_1/α_1 + 1, ŝ = 1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2), and (Z̅_1, Z̅_2) are defined in the same way as in Lemma <ref> for α_1 and α_2, respectively. Then Z_δ(t) satisfies the conditions of Corollary <ref> for the estimation of ξ if
|α_1|^2<π/3, |α_2|^2<π/3, β := |α_1|^2-|α_2|^2<π/2,
M > (4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1) + 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1),
η_0<M-(4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1) - 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1)/M^2((4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2+16/β|α_2|^-1e^2|α_2|^2),
η_1≤M-(4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1+M^2η_0) - 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1+M^2η_0)/M((4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2+16/β|α_2|^-1e^2|α_2|^2),
L≥2M^2/η_1^2log8/δ.
As a result, α_1, α_2, M, η_0 and η_1 can be chosen as 𝒪(1) constants and the total runtime needed in producing Z_δ(t) is 𝒪(t(log(1/δ)+1)).
§ LEARNING TWO COUPLED ANHARMONIC OSCILLATORS
In this section, we consider a system consisting of two coupled anharmonic oscillators, and the Hamiltonian is of the following form:
H = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + h_12b_1^†b_2 + h_21b_2^†b_1 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1)
The goal is to learn all the coefficients ω_1, ω_2, ξ_1, ξ_2, and h_12 (h_21=h^*_12).
§.§ Single-mode coefficients
We first focus on learning the single-mode coefficients ω_1, ω_2, ξ_1, and ξ_2. To do this, we will insert random unitaries during time evolution to decouple the bosonic modes from each other. In other words, the time evolution operator undergoes the following transformation
e^-iHt↦∏_j=1^r U_j^†e^-iHτ U_j = ∏_j=1^r e^-iU_j^†HU_jτ,
where the U_j, j=1,2,⋯,r, are the random linear optics unitaries that we insert, r=t/τ, and the product goes from right to left. Each U_j is independently drawn from a distribution that we denote by 𝒟. In the limit of τ→ 0, the dynamics can be described by an effective Hamiltonian
H_effective = 𝔼_U∼𝒟 U^†HU.
This can be seen by considering the Taylor expansion of the time-evolved state in a small time step:
𝔼_U∼𝒟[e^-iU^†HUτρ e^iU^†HUτ] = ρ - iτ𝔼_U∼𝒟[[U^†HU,ρ]] + 𝒪(τ^2)
= e^-i𝔼_U∼𝒟[U^†HU]τρ e^i𝔼_U∼𝒟[U^†HU]τ + 𝒪(τ^2).
The above is not a rigorous proof because the 𝒪(τ^2) residue is an unbounded operator. We will provide a rigorous bound on how far the actual dynamics deviate from the limiting effective dynamics with finite τ>0 in Section <ref>.
To learn all the single mode coefficients, we let the unitary U drawn from the distribution 𝒟 be
U = e^-iθ b_1^†b_1, θ∼𝒰([0,2π]).
Here 𝒰([0,2π]) is the uniform distribution over [0,2π].
We can then compute the effective Hamiltonian
H_effective = 1/2π∫_0^2π e^iθ b_1^†b_1He^-iθ b_1^†b_1θ = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1).
In other words, the coupling term h_12b_1^†b_2 + h_21b_2^†b_1 is cancelled in the process, due to the equality
1/2π∫_0^2π e^iθ b_1^†b_1b_1e^-iθ b_1^†b_1θ=1/2π∫_0^2π e^iθb_1 θ = 0.
We can interpret this procedure as enforcing a particle number conservation on the first bosonic mode.
The effective Hamiltonian has the desirable feature that the two bosonic modes are no longer coupled together. Therefore we can apply the learning algorithm described in Section <ref> to learn the parameters of the two modes separately.
§.§ The coupling coefficient
Next, we consider learning the coupling coefficient h_12. We observe that the coupling term can be transformed into a local one under a single-particle basis transformation. This is done through the following two operators
U_x(θ) = e^iθ (b_1^†b_2+b_2^†b_1), U_y(θ) = e^θ (b_1^†b_2-b_2^†b_1),
which correspond to Pauli-X and Y rotations. They transform the annihilation operators in the following way
[ U_x(θ)b_1 U_x^†(θ); U_x(θ)b_2 U_x^†(θ) ]
=
[ cos(θ) isin(θ); isin(θ) cos(θ) ][ b_1; b_2 ], [ U_y(θ)b_1 U_y^†(θ); U_y(θ)b_2 U_y^†(θ) ]
=
[ cos(θ) sin(θ); -sin(θ) cos(θ) ][ b_1; b_2 ].
We first perform the Pauli-Y rotation and define
b̃_1 = U_y(π/4)b_1 U_y^†(π/4), b̃_2 = U_y(π/4)b_2 U_y^†(π/4).
Through (<ref>) we have
[ b_1; b_2 ]
=1/√(2)[ 1 -1; 1 1 ][ b̃_1; b̃_2 ]
We will then rewrite the Hamiltonian (<ref>) in terms of b̃_1 and b̃_2. The quadratic part of H can be written as
ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1,
where
[ ω̃_1 h̃_12; h̃_21 ω̃_2 ]
=1/2[ 1 1; -1 1 ][ ω_1 h_12; h_21 ω_2 ][ 1 -1; 1 1 ].
In particular, we have
ω̃_1 = (ω_1+ω_2)/2+ ℜ h_12.
For the quartic part, we have
ξ_1/2n_1(n_1-1) = ξ_1/2b_1^†b_1^†b_1b_1 = ∑_ijkl=1^2 ξ^(1)_ijklb̃^†_ib̃^†_jb̃_kb̃_l,
ξ_2/2n_2(n_2-1) = ξ_2/2b_2^†b_2^†b_2b_2 = ∑_ijkl=1^2 ξ^(2)_ijklb̃^†_ib̃^†_jb̃_kb̃_l.
In particular
ξ^(1)_1111 = ξ_1/4, ξ^(2)_1111 = ξ_2/4.
Combining (<ref>) and (<ref>), the Hamiltonian H can be written in terms of b̃_1 and b̃_2 as
H = ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1 + ∑_ijkl=1^2 (ξ^(1)_ijkl+ξ^(2)_ijkl)b̃^†_ib̃^†_jb̃_kb̃_l.
The above expression is much more complicated than the original expression in (<ref>), but we will use random unitaries to produce a much simpler effective Hamiltonian. This time, the random unitary we use will be
U = e^-iθb̃_1^†b̃_1, θ∼𝒰([0,2π]).
With the same derivation as in (<ref>), we can obtain the effective Hamiltonian as 𝔼[U^† HU]. Note that in conjugating with e^iθb̃_1^†b̃_1, each b̃_1 in the Hamiltonian acquires a phase e^iθ, and each b̃_1^† acquires a phase e^-iθ. If in a Hamiltonian term b̃_1 and b̃_1^† do not appear the same number of times, then the term will acquire a phase e^icθ with c∈{-2,-1,1,2}, and integrating over θ will cancel out this term. For example
1/2π∫_0^2π e^iθb̃_1^†b̃_1b̃_1^†b̃_2 e^-iθb̃_1^†b̃_1θ = 1/2π∫_0^2π e^iθb̃_1^†b̃_2 θ = 0,
1/2π∫_0^2π e^iθb̃_1^†b̃_1b̃_1^†b̃_1^†b̃_2b̃_2 e^-iθb̃_1^†b̃_1θ = 1/2π∫_0^2π e^2iθb̃_1^†b̃_1^†b̃_2b̃_2 θ = 0.
In other words, only the terms that conserve the particle number on the first bosonic mode are preserved in the effective Hamiltonian. We can then write the effective Hamiltonian as
H_effective = ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + (ξ^(1)_1111+ξ^(2)_1111)ñ_1(ñ_1-1) + (Añ_1+Bñ_2+C)ñ_2,
where ñ_1 = b̃_1^†b̃_1, and ñ_2 = b̃_2^†b̃_2.
Recall that our goal is to learn the coupling coefficient h_12, whose real part can be derived from ω̃_1, ω_1, and ω_2 through (<ref>), and ω_1, and ω_2 can be learned using the procedure outlined in Section <ref>. We, therefore, only need to estimate ω̃_1 from the effective Hamiltonian.
To do this, we start with a product state |α⟩|0⟩ on the two bosonic modes. Then we apply U_y(π/4) to this state to get the initial state of our time evolution
|Φ(0)⟩ = U_y(π/4)|α⟩|0⟩.
This state is the tensor product of the coherent states of b̃_1 and b̃_2 because one can verify that, using (<ref>),
b̃_1|Φ(0)⟩=b̃_1 U_y(π/4)|α⟩|0⟩ = U_y(π/4) b_1|α⟩|0⟩ = α U_y(π/4)|α⟩|0⟩
b̃_2|Φ(0)⟩=b̃_2 U_y(π/4)|α⟩|0⟩ = U_y(π/4) b_2|α⟩|0⟩ = 0.
Because of the above equation, we can see that there is no particle in the bosonic mode b̃_2 in this state |Φ(0)⟩. As the effective Hamiltonian in (<ref>) conserves the particle number on both bosonic modes, the particle number on the mode b̃_2 will stay 0. Consequently, any term that involves ñ_2 will not affect the dynamics. Therefore we can safely discard these terms and get a new effective Hamiltonian
H_effective' = ω̃_1 b̃_1^†b̃_1 + (ξ^(1)_1111+ξ^(2)_1111)ñ_1(ñ_1-1).
Note that this Hamiltonian only acts non-trivially on the bosonic mode b̃_1. Therefore we can use the single-mode protocol in Section <ref> to learn the coefficient ω̃_1. As guaranteed in (<ref>), we start from the α-coherent state for b̃_1. In the time evolution, the expectation value ⟨b̃_1|$⟩ contains the information to determineω̃_1. The expectation value⟨b̃_1|$⟩ can be extracted through homodyne measurement with two quadrature operators.
Note that we need to convert this homodyne measurement into homodyne measurement for b_1 or b_2. This can be easily done because b̃_1 = U_y(π/4)b_1 U_y^†(π/4). We can therefore apply the unitary U_y^†(π/4) at the end of the time evolution and then perform homodyne measurement for (b_1+b_1^†)/√(2) and i(b_1-b_1^†)/√(2), which combined yields the expectation value ⟨b̃_1|$⟩.
Let us now briefly summarize the whole procedure. We start from a state|α⟩|0⟩, applyU_y(π/4), let the system evolve for timet=rτ, while applying randome^-iθb̃_1^†b̃_1with intervalτ, and in the end applyU_y^†(π/4)=U_y(-π/4), after which we perform homodyne measurement forb_1. The quantum state right before the measurement is applied is
U_y(-π/4)∏_j=1^r(e^iθ_jb̃_1^†b̃_1e^-iHτe^-iθ_jb̃_1^†b̃_1)U_y(π/4)|α⟩|0⟩,
for randomly sampledθ_j,j=1,2,⋯,r.
Note thate^-iθ_jb̃_1^†b̃_1=e^-i(θ_j/2)(n_1+n_2)U_x(-θ_j/2), andHcommute withn_1+n_2because the particle number is conserved. We therefore have
e^iθ_jb̃_1^†b̃_1e^-iHτe^-iθ_jb̃_1^†b̃_1 = U_x(θ/2)e^-iHτU_x(-θ/2).
Consequently we can replace alle^-iθ_jb̃_1^†b̃_1withU_x(-θ_j/2). The quantum state we get in the end is, therefore
U_y(-π/4)∏_j=1^r(U_x(θ_j/2)e^-iHτU_x(-θ_j/2))U_y(π/4)|α⟩|0⟩.
Note that the adjacentU_x(-θ_j/2)andU_x(θ_j-1/2)can be merged intoU_x(-(θ_j-θ_j-1)/2), so that we only need to apply oneXrotation in each time step instead of two.
In the above procedure we estimate ω̃_1, from which, through (<ref>), we can estimate ℜ h_12. For ℑ h_12, we can instead define
b̃_1 = U_x(π/4)b_1U_x^†(π/4), b̃_2 = U_x(π/4)b_2U_x^†(π/4),
and then (<ref>) will become
ω̃_1 = (ω_1+ω_2)/2+ ℑ h_12.
We can then change the whole procedure accordingly to estimateh_12, and the corresponding state before the measurement is
U_x(-π/4)∏_j=1^r(U_y(-θ_j/2)e^-iHτU_y(θ_j/2))U_x(π/4)|α⟩|0⟩.
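The transformation rules used above can be checked directly on a truncated two-mode Fock space. The sketch below (ours, reusing b1, b2 and the truncation d from the earlier snippet) verifies U_x(θ) b_1 U_x^†(θ) = cos(θ) b_1 + i sin(θ) b_2 on a low-occupation state, where truncation effects are absent.

import numpy as np
from scipy.linalg import expm

def U_x(theta):
    return expm(1j * theta * (b1.conj().T @ b2 + b2.conj().T @ b1))

theta = 0.3
lhs = U_x(theta) @ b1 @ U_x(theta).conj().T
rhs = np.cos(theta) * b1 + 1j * np.sin(theta) * b2
psi = np.zeros(b1.shape[0]); psi[1 * d + 1] = 1.0     # the state |n1=1, n2=1>
print(np.linalg.norm((lhs - rhs) @ psi))              # ~1e-15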
§ USING A DIVIDE-AND-CONQUER APPROACH TO LEARN AN N-MODE SYSTEM
In this section, we consider the general case, where the Hamiltonian is of the form:
H = ∑_⟨i,j|⟩ h_ijb_i^†b_j + ∑_i ω_i b_i^†b_i + ξ_i/2∑_i n_i(n_i-1).
We will use a divide-and-conquer approach to learn the coefficients in this Hamiltonian. Specifically, we will insert random unitaries during time evolution to decouple the system into clusters containing one or two modes that do not interact with each other and learn the coefficients in each cluster in parallel.
We assume that the bosonic modes are arranged on a graph𝒢=(𝒱,ℰ), where𝒱is the set containing all vertices, each of which corresponds to a bosonic mode, andℰcontains all edges.∑_⟨i,j|⟩means summation over all vertices linked by an edge.
We consider decoupling the system with the help of a graphℒ=(ℰ,ℰ_ℒ)that is the link graph of𝒢. The setℰis the set of all edges in𝒢, andℰ_ℒis the set of edges ofℒ, which we will now define. For any two edgese,e'∈ℰ, we have(e,e')∈ℰif and only if they share a vertex in𝒱.
Next, we color the graphℒwith the following rule: any two vertices inℰmust be colored differently if they are at most distance2from each other. The number of colors needed for this coloring is at mostχ=deg(ℒ)^2+1, and such a coloring can be easily found by a greedy algorithm: we can simply color a vertex by any color that its neighbors or next-neighbors have not used, and such a color is always available because there are at mostχ-1neighbors and next-neighbors. For a graph𝒢with degreeD,deg(ℒ)≤2(D-1), and thereforeχ≤4(D-1)^2+1. This coloring yields a decomposition of the edges
ℰ = _c=1^χℰ_c,
whereℰ_cis the set of edges with colorc.
For each colorc=1,2,⋯,χ, we then learn all the coefficients associated with this color. We denote by𝒱_call the vertices (bosonic modes) that are contained in an edge inℰ_c. During time evolution, we apply random unitaries of the form
U = ∏_i∈𝒱∖𝒱_c e^-iθ_i b_i^†b_i, θ_i∼𝒰([0,2π]).
Hereθ_i,i∈𝒱∖𝒱_c, are independent random variables. Following the derivation in (<ref>), we can see that the effective Hamiltonian is
H_effective=∏_i∈𝒱∖𝒱_c(1/2π∫_0^2πθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_iH e^i∑_i∈𝒱∖𝒱_cθ_i n_i.
We can then examine the effect of this transformation on each term. For a termb_k^†b_l,k≠l, ifkis in𝒱∖𝒱_cbutlis not, then
∏_i∈𝒱∖𝒱_c(1/2π∫_0^2πθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_i b_k^†b_l e^i∑_i∈𝒱∖𝒱_cθ_i n_i = 1/2π∫_0^2πω_k e^iω_k b_k^†b_l = 0
The same is true iflis in𝒱∖𝒱_cbutkis not. When bothk,l∈𝒱∖𝒱_c, then
∏_i∈𝒱∖𝒱_c(1/2π∫_0^2πθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_i b_k^†b_l e^i∑_i∈𝒱∖𝒱_cθ_i n_i
= 1/(2π)^2∫_0^2πω_k ∫_0^2πω_l e^i(ω_k-ω_l) b_k^†b_l = 0.
In other words, for any coupling termb^†_k b_l, the above procedure will cancel it out if eitherkorlis in𝒱∖𝒱_c. All other terms are preserved because they commute withn_ifor anyi∈𝒱∖𝒱_c. The only possibleb^†_k b_lterms left are those withk,l∈𝒱_c.
This also means that(k,l)∈ℰ_cbecause of the following argument: first by definition of𝒱_cthere must existsk'andl'such that(k,k')∈ℰ_cand(l,l')∈ℰ_c. We must have(k,l)∈ℰ, as otherwise, this coupling term would not exist at all. This means that unlessk'=l', the two edges(k,k')and(l,l')as vertices inℒare next-neighbors, which is not allowed in our coloring. Thereforek'=l'and we have(k,l)∈ℰ_c.
Consequently, the effective Hamiltonian is
H_effective = ∑_(i,j)∈ℰ_c h_ijb_i^†b_j + ∑_i ω_i b_i^†b_i + ξ_i/2∑_i n_i(n_i-1).
Next, we will show that the above Hamiltonian is decoupled into clusters of sizes at most2. We will do this by showing that any bosonic modeiinteracts with at most one other bosonic mode in the above Hamiltonian. This can be proved by contradiction: ifiinteracts with bothjandkin the above Hamiltonian, then(i,j)∈ℰ_cand(i,k)∈ℰ_c, which makes(i,j)and(i,k)neighbors as vertices inℒ, and this is forbidden in our coloring.
With the decoupled Hamiltonian in (<ref>), we can then learn the coefficients in each one- or two-mode cluster independently and in parallel using the algorithms described in Sections <ref> and <ref>. Looping over all colorsc∈{1,2,⋯,χ}, we will obtain all the coefficients in the Hamiltonian.
§ DEVIATION FROM THE EFFECTIVE DYNAMICS
In this section, we consider the error introduced by simulating the effective dynamics with the insertion of random unitaries, as mentioned in Section <ref>. Suppose𝒟is a distribution over the set of unitaries, and the initial state of the system is represented by the density matrixρ(0). The actual final state obtained after the insertion ofrrandom unitaries is
𝔼_U_j∼𝒟(∏_1≤ j≤ r^←U_j^† e^-iτ H U_j)ρ(0)(∏_1≤ j≤ r^→U_j^† e^iτ H U_j),
where eachU_jis inserted after timeτ=t/r. On the other hand, the desired final state, which facilitates the subsequent steps of the learning process, is
e^-it H_effectiveρ(0) e^it H_effective ,
whereH_effectiveis the effective Hamiltonian:
H_effective = 𝔼_U∼𝒟 U^†HU.
In this section, we provide an analysis of the difference between the two dynamics for a certain class of Hamiltonians and thereby complete the analysis of approximation errors investigated in Section <ref>. For the sake of the Hamiltonians studied in this paper, we consider the Hamiltonians of the following form:
H = ∑_⟨i,j|⟩ h_ij b_i^†b_j + ∑_iω_i n_i + 1/2∑_⟨jklm|⟩ξ_jklmb_j^†b_k^†b_lb_m,
where in the last term we denote by⟨jklm|$⟩ the index quadruples such that {j,k,l,m} form a connected subgraph in the underlying graph 𝒢=(𝒱,ℰ) of bosonic modes. We begin with a lemma describing the action of these Hamiltonians on the product of coherent states.
Let
H = ∑_⟨i,j|⟩ h_ij b_i^†b_j + ∑_iω_i n_i + 1/2∑_⟨jklm|⟩ξ_jklmb_j^†b_k^†b_lb_m,
and
= ⊗_i∈𝒱(e^-|α_i|^2/2∑_k=0^∞α_i^k e^-iζ_i,k/√(k!)|k⟩_i),
where α_i is a complex number of magnitude O(1), and ζ_i,k∈ can be any real number. Then
H = O(Nmax{|ξ_jklm|, |ω_i|,|h_i,j|}),
and
H^2 = O(N^2(max{|ξ_jklm|, |ω_i|,|h_i,j|})^2),
where N=|𝒱|+|ℰ|.
It suffices to prove the result for H/max{|ξ_jklm|, |ω_i|,|h_i,j|}. Therefore we assume max{|ξ_jklm|, |ω_i|,|h_i,j|}=1 without loss of generality. Notice that H is the sum of O(N) terms, and each term takes the form _p b_q or _p b_q_r b_s, where p, q, r, s may be repeated. We will prove that each term acting on |⟩ yields a state whose norm is O(1). We first demonstrate this for _p b_q_r b_s. Simple calculation shows
_p b_q_r b_s
= ⊗_i∉{p,q,r,s}(e^-|α_i|^2/2∑_k=0^∞α_i^k e^-iζ_i,k/√(k!)|k⟩_i)⊗⊗_j∈{p,q,r,s}(e^-|α_j|^2/2∑_k=0^∞α_j^k e^-iζ_j,k/√(k!)√(P_j(k))|k+σ_j⟩_j),
where P_j's are polynomials with ∑_j∈{p,q,r,s} P_j = 4, and σ_j is an integer determined by the numbers of _j and b_j in _p b_q_r b_s. For example, if p=q=r=1, s=2, then P_1(k)=(k+1)^3, σ_1=1, P_2(k)=k, σ_2=-1. Straight calculations can show that
e^-|α_j|^2/2∑_k=0^∞α_j^k e^-iζ_j,k/√(k!)√(P_j(k))|k+σ_j⟩_j^2
= e^-|α_j|^2∑_k=0^∞|α_j|^2k/k!P_j(k) = Q_j(|α_j|^2) = O(1).
where Q_j is a polynomial that can be determined by P_j, but we do not care about its explicit form. Therefore we have shown that
_p b_q_r b_s=√(∏_j∈{p,q,r,s}Q_j(|α_j|^2))=O(1).
Similarly, we can show that _p b_q=O(1). Therefore (<ref>) is established.
Next, we will prove (<ref>). We can fully expand H^2 into O(N^2) terms, each of which has the form _p b_q _p' b_q', _p b_q_r b_s_p' b_q', _p b_q_p' b_q'_r' b_s', or _p b_q_r b_s_p' b_q'_r' b_s'. Again, we may go through a similar process as above and conclude that each term acting on yields a state of magnitude O(1).
Assume that |ϕ_0⟩ = ⊗_i |α_i⟩ is a product of coherent states, and |ϕ_t⟩ is the state obtained by evolving under the effective dynamics for time t, i.e., |ϕ_t⟩ = e^-it|ϕ_0⟩, then |ϕ_t⟩ is a state of the form described in (<ref>) for the distribution 𝒟 used in previous sections.
Using density matrices, the effective dynamics with the Hamiltonian starts from the state ρ(0):=|ϕ_0⟩⟨ϕ_0| and end up in the state ρ(t):=|ϕ_t⟩⟨ϕ_t| at time t, while the actual final state obtained is given by (<ref>).
To bound its distance from the desired state ρ(t), we define the following density operators:
ρ^(ℓ)(t) = (∏_1≤ j≤ℓ^←U_j^† e^-iτ H U_j)ρ(t-ℓτ)(∏_1≤ j≤ℓ^→U_j^† e^iτ H U_j).
Then ρ^(0)(t) = ρ(t) and ρ^(r)(t) is the density operator in (<ref>).
Now consider the distance between ρ^(L-1)(t) and ρ^(L)(t). Define
Q^(L) = ∏_1≤ j≤ L-1^→U_j^† e^iτ H U_j,
then by the independence of U_j, we have
ρ^(L)(t)-ρ^(L-1)(t)_*
=_Q^(L)[ Q^(L)(_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτρ(t-Lτ)e^-iτ))(Q^(L))^†]_*
≤_Q^(L)Q^(L)(_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτρ(t-Lτ)e^-iτ))(Q^(L))^†_*
= _Q^(L)_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτρ(t-Lτ)e^-iτ)_*
=_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτρ(t-Lτ)e^-iτ)_*,
where·_* denotes the trace norm (nuclear norm). The fourth line follows from the property of trace norm and the fact that Q^(L) is unitary. From the Taylor expansion, one can obtain
(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U)-ρ(t-Lτ)
= (e^-iτ U^† HUρ(t-Lτ) e^iτ U^† HU)-ρ(t-Lτ)
= (-iτ [U^† H U, ρ(t-Lτ)]-∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) )
=-iτ [ (U^† H U), ρ(t-Lτ)]-(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) )
= -iτ [, ρ(t-Lτ)]-(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) ).
Similarly, one has
( e^-iτρ(t-Lτ) e^iτ)-ρ(t-Lτ)
= -iτ [, ρ(t-Lτ)]-∫_0^τ e^-is [, [, ρ(t-Lτ)]] e^is (τ-s) .
Combining (<ref>) and (<ref>), one obtains
(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτρ(t-Lτ)e^-iτ)_*
≤(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) )_*
+∫_0^τ e^-is [, [, ρ(t-Lτ)]] e^is (τ-s)_*
≤τ^2(sup_U [U^† HU, [U^† HU, ρ(t-Lτ)]] _*+[, [, ρ(t-Lτ)]]_*)
One only needs to bound [U^† HU, [U^† HU, ρ(t-Lτ)]] _* and [, [, ρ(t-Lτ)]]_*. By a direct calculation, one sees that
[, [, ρ(t-Lτ)]]_*
≤^2ρ(t-Lτ)_* + 2ρ(t-Lτ)_*+ρ(t-Lτ)^2_*
=2ϕ_t-Lτ^2ϕ_t-Lτ+2ϕ_t-Lτ^2
≤ C N^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where C=𝒪(1) is a constant, and we have used the property of the trace norm for rank-1 matrices. In the last step, we are using <Ref> with H= and =ϕ_t-Lτ. Similarly, one can obtain
[U^† HU, [U^† HU, ρ(t-Lτ)]]_*
=2ϕ_t-LτU^† H^2Uϕ_t-Lτ+2U^† HUϕ_t-Lτ^2
=2H^2Uϕ_t-Lτ+2HUϕ_t-Lτ^2
≤ C N^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2.
In the last step, we are using <Ref> with H=H and =Uϕ_t-Lτ. As a result, we have proved the following:
For a Hamiltonian of the form described in (<ref>) and a product of coherent states |ϕ_0⟩ = ⊗_i |α_i⟩ such that α_i are 𝒪(1) constants, we have
‖𝔼_U_j∼𝒟(∏_1≤ j≤ r^←U_j^† e^-iτ H U_j)ρ(0)(∏_1≤ j≤ r^→U_j^† e^iτ H U_j) - e^-it H_effectiveρ(0) e^it H_effective‖_*
≤ C N^2 t^2/r max{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where ρ(0) = |ϕ_0⟩⟨ϕ_0|, H_effective = 𝔼_U∼𝒟 U^†HU, C is a 𝒪(1) constant, N=|𝒱|+|ℰ| and 𝒢=(𝒱,ℰ) is the underlying graph of bosonic modes.
The left-hand side of (<ref>) can be expressed as ‖ρ^(r)(t)-ρ^(0)(t)‖_*, where ρ^(r)(t) and ρ^(0)(t) are defined in (<ref>). Thus
‖ρ^(r)(t)-ρ^(0)(t)‖_*≤∑_L=1^r‖ρ^(L)(t)-ρ^(L-1)(t)‖_*
≤∑_L=1^r C N^2 τ^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2 = C N^2 t^2/r max{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where we have used (<ref>), (<ref>) and (<ref>) in the second inequality.
|
http://arxiv.org/abs/2307.04556v1 | 20230710134342 | Hairy Kiselev Black Hole Solutions | [
"Yaghoub Heydarzade",
"Maxim Misyura",
"Vitalii Vertogradov"
] | gr-qc | [
"gr-qc"
] |
Hairy Kiselev Black Hole Solutions
Yaghoub Heydarzade^(a)[[email protected]], Maxim Misyura^(b,c)[[email protected]]
and Vitalii Vertogradov^(c,d)[[email protected]]
(a) Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara, Turkey
(b) Department of High Energy and Elementary Particles Physics,
Saint Petersburg State University,
University Emb. 7/9, Saint Petersburg, 199034, Russia
(c) Physics department, Herzen state Pedagogical University of Russia,
48 Moika Emb.,
Saint Petersburg 191186, Russia
(d) SPB branch of SAO RAS, 65 Pulkovskoe Rd, Saint Petersburg
196140, Russia
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In the realm of astrophysics, black holes exist within non-vacuum cosmological backgrounds, making it crucial to investigate how these backgrounds influence the properties of black holes. In this work,
we first introduce a novel static spherically-symmetric exact solution of Einstein field equations representing a surrounded hairy black hole. This solution represents a generalization of the hairy Schwarzschild solution recently derived using the extended gravitational decoupling method. Then,
we discuss how the new induced modification terms attributed to the primary hairs and various background fields affect the geodesic motion in comparison
to the conventional Schwarzschild case. Although these
modifications may appear insignificant in most cases, we identify specific conditions where they can be
comparable to the Schwarzschild case for some particular background fields.
Keyword: Gravitational decoupling,
Kiselev black hole,
hairy
black hole,
cosmological fields, geodesics
§ INTRODUCTION
In 2019 the Event Horizon Telescope Collaboration unveiled the very first image of a black hole located at the center of the massive elliptical galaxy M87 <cit.>. More recently, scientists have successfully observed the shadow of the supermassive black hole located in the center of our own galaxy <cit.>. These direct observations provide compelling evidence that black holes are not merely abstract mathematical solutions of the Einstein field
equations but real astrophysical objects. Black holes possess a range of
miraculous properties. For instance, they allow for the extraction of energy from their rotation and electric fields <cit.>.
In the vicinity of the black hole's event horizon, particles can possess negative energy <cit.>, and black holes can even function as particle
accelerators <cit.>.
In the realm of astrophysics, black holes are not isolated objects and they inhabit non-vacuum backgrounds. Some research has focused on investigating the direct local effects of cosmic backgrounds on the known black hole solutions. As instance, Babichev et al. <cit.> have shown that in an expanding universe by a phantom scalar field, the mass of a black hole decreases as a result of the accretion of particles of the phantom field into the central black hole. However, one notes that this is a global impact. To explore the local changes in the spacetime geometry near the central black hole, one should consider a modified metric that incorporates the surrounding spacetime. In this context, an analytical static spherically symmetric solution to Einstein filed equations has been presented by Kiselev <cit.>. This solution
generalizes the
usual Schwarzschild black hole to a non-vacuum background, and is characterized by an effective equation of state parameter of the surrounding field of the black hole. Hence it can encompass a wide range of possibilities including quintessence, cosmological constant, radiation and dust-like fields. Several properties of the Kiselev black hole have been extensively investigated in the literature [85-90].
Later, this solution has been generalized to the dynamical Vaidya type
solutions <cit.>.
Such generalizations are well justified due to the non-isolated nature of real-world black holes and their existence in non-vacuum backgrounds. Black hole solutions coupled to matter fields, such as the Kiselev solution, are particularly relevant for the study of astrophysical black holes with distortions <cit.>. They also play a significant role in investigating the no-hair theorem <cit.>. This theorem states that a black hole can be described by only three charges (i.e.
mass M, electric charge Q and angular momentum a), and it relies
on a crucial assumption that the black hole is isolated, meaning that
the spacetime is asymptotically flat and free from other sources.
However, real-world astrophysical situations do not meet this assumption.
For instance, one may refer to black holes in binary systems, black
holes surrounded by plasma, or those accompanied by accretion disks or
jets in their vicinity. Such situations imply that a black hole may
put on different types of wigs, and hence the applicability of the
standard no-hair theorem for isolated black holes to these cases becomes
questionable <cit.>.
Recently, the minimal geometrical deformations <cit.> and the extended gravitational decoupling
methods <cit.> have been utilized to derive new
solutions from the known seed solutions of Einstein field equations. These techniques have been particularly effective in investigating the violation of the no-hair theorem, the emergence of novel types of hairy black holes, and the exploration of alternative theories of gravity.
Using the extended gravitational
decoupling method, Ovalle et al. <cit.>
have introduced a generalization of the Schwarzschild black hole
surrounded by an anisotropic fluid and possessing primary hairs. This new solution has motivated substantial further research in generalizing this solution to hairy
Kerr <cit.>, Vaidya and generalized
Vaidya <cit.>, regular hairy black holes <cit.> and many others.
Indeed, the gravitational decoupling
method represents a novel and powerful tool for obtaining new solutions to the
Einstein equations.
In the present work, we introduce a novel class of exact solutions to the Einstein field equations, which describe a surrounded
hairy Schwarzschild black hole. This solution serves as a generalization of the previously obtained hairy Schwarzschild solution using the extended gravitational decoupling method. Then, in order to analyze the properties
of the solution, we investigate the effect of the new modification terms, attributed to the primary hairs and various surrounding fields, on the timelike
geodesic motion. Specifically, we compare the effects of modification
terms to the conventional Schwarzschild case. While these modifications
may seem negligible in most scenarios, we identify specific situations
where they can be comparable to the Schwarzschild case, particularly
when specific surrounding fields are present. This analysis sheds light
on the significance of these modifications in certain situations,
providing insights into the behavior of geodesic motion around real
astrophysical black holes.
The structure of the present paper is as follows. In Section 2, we briefly discuss the
hairy Schwarzschild solution by the minimal geometrical deformations and
the extended gravitational decoupling method. In Section 3, we solve the Einstein
field equations in order to obtain the surrounded hairy Schwarzschild black hole. In Section 4, we analyze the timelike geodesic
motion. In Section 5, we summarize the new findings and implications of the study.
The system of units c=G=1 will be used throughout the paper.
§ GRAVITATIONAL DECOUPLING AND HAIRY SCHWARZSCHILD BLACK HOLE
The gravitational decoupling method states that one can solve the Einstein
field equations with the matter source
T̃_ik=T_ik+Θ_ik ,
where T_ik represents the energy-momentum tensor of a system for
which the Einstein field equations are
G_ik=8π T_ik .
The solution of the equations (<ref>) is supposed to be
known and represents the seed solution.
Then Θ_ik represents an extra matter source which causes
additional geometrical deformations.
The Einstein equations for this
new matter source are
G̅_ik=αΘ_ik ,
where α is a coupling constant and G̅_ik is the Einstein
tensor of deformed metric only.
The gravitational decoupling method
states that, despite the non-linear nature of the Einstein equations, a
straightforward superposition of these two solutions
(<ref>) and (<ref>)
G̃_ik≡ G_ik+G̅_ik=8π T_ik+αΘ_ik≡T̃_ik ,
is also a solution of the Einstein field equations.
Now, we briefly describe this method. Let us consider the Einstein field
equations
G_ik=R_ik-1/2g_ikR=8π T_ik .
Let the solution of (<ref>) be a static spherically-symmetric
spacetime of the form
ds^2=-e^ν(r)dt^2+e^λ(r)dr^2+r^2 dΩ^2 .
Here dΩ^2=dθ^2+sin^2θ dφ^2 is the metric on unit
two-sphere, ν(r) and λ(r) are functions of r coordinate
only and they are supposed to be known.
The metric (<ref>) is
termed as the seed metric.
Now, we seek the geometrical deformation of (<ref>) by
introducing two new functions ξ=ξ(r) and η=η(r) by:
e^ν(r)→ e^ν(r)+αξ(r),
e^λ(r)→ e^λ(r)+αη(r).
Here α is a coupling constant. Functions ξ and η are
associated with the geometrical deformations of g_00 and g_11 of the
metric (<ref>) respectively. These deformations are caused by
new matter source Θ_ik. If one puts ξ(r)≡ 0, then only the
g_11 component is deformed, leaving g_00 unperturbed; this
is known as the minimal geometrical deformation. It has some drawbacks, for
example, if one considers the existence of a stable black hole possessing
a well-defined event horizon <cit.>. Deforming both
g_00 and g_11 components is an arena of the extended
gravitational decoupling. One should note that this extended
decoupling works only for the vacuum seed solutions of the Einstein
equations and fails for the region where we have the matter source due
to the violation of the Bianchi identities except for several cases. For
example, if one opts for deformations of the Vaidya solution then the
extended gravitational decoupling still works, but it fails for
the generalized Vaidya due to the presence of an energy exchange
between two matter sources <cit.>.
Substituting (<ref>) into (<ref>) one obtains
ds^2=-e^ν+αξdt^2+(e^λ+αη)
dr^2+r^2 dΩ^2 .
The Einstein equations for (<ref>) as
G̃_ik= 8 πT̃_ik=8π (T_ik+Θ_ik ) ,
give
8π (T^0_0+Θ^0_0)=-1/r^2+e^-β( 1/r^2-β'/r) ,
8π (T^1_1+Θ^1_1)=-1/r^2+e^-β(1/r^2+ν'+αξ'/r),
8π (T^2_2+Θ^2_2)=1/4e^-β(2(ν”+αξ”)+(ν'+αξ')^2-β'(ν'+αξ')+2ν'+αξ'-β'/r),
e^β≡ e^λ+αη.
Here the prime sign denotes the partial derivative with respect to the radial
coordinate r, and we have 8π( T^2_2+Θ^2_2)=8 π(T^3_3+Θ^3_3) due to
the spherical symmetry.
From (<ref>) one can define the effective energy density
ρ̃, effective radial and tangential
P̃_r, P̃_t pressures as
ρ̃=-(T^0_0+Θ^0_0),
P̃_r=T^1_1+Θ^1_1,
P̃_t=T^2_2+Θ^2_2.
From (<ref>) one can introduce the anisotropy parameter
Π as
Π=P̃_t-P̃_r ,
where if Π≠ 0 then it indicates the anisotropic behaviour of fluid
T̃_ik.
The equations (<ref>) can be decoupled into two
parts[One should remember that it always works for
T_ik≡ 0 i.e. the vacuum solution and for special cases of
T_ik if one opts for Bianchi identities
∇_iT^ik=∇_iΘ^ik=0 with respect to the metric
(<ref>); otherwise there is an energy exchange, i.e.
∇_iT̃^ik=0⇒∇_iT^ik=-∇_iΘ^ik≠ 0.]:
the Einstein equations corresponding to the seed solution (<ref>) and the
one corresponding to the geometrical deformations.
If we consider the
vacuum solution, i.e. T_ik≡ 0 (the Schwarzschild solution), then, by
solving the Einstein field equations corresponding to the geometrical
deformations, one obtains the hairy Schwarzschild solution <cit.>
ds^2=-(1-2M/r+α e^{-r/(M-α l/2)}) dt^2+(1-2M/r+α e^{-r/(M-α l/2)})^{-1}dr^2+r^2dΩ^2 ,
where α is the coupling constant, and l is a new parameter with
the dimension of length, associated with a primary hair of the black hole. Here
M
is the mass of the black hole in relation with the
Schwarzschild mass ℳ as
M=ℳ+α l/2 .
The impact
of α and l on the geodesic motion, gravitational lensing,
energy extraction and the thermodynamics has been studied
in Refs.<cit.>.
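The metric function above is easy to explore numerically. The following Python sketch (our own, with illustrative parameter values and units G=c=1) locates the event horizon of the hairy Schwarzschild solution and shows that it approaches r=2M as α→0.

import numpy as np
from scipy.optimize import brentq

def f_hairy(r, M=1.0, alpha=0.05, ell=0.5):
    # metric function of the hairy Schwarzschild solution, with M = mathcal{M} + alpha*l/2
    return 1 - 2 * M / r + alpha * np.exp(-r / (M - alpha * ell / 2))

r_h = brentq(f_hairy, 1e-3, 10.0)                                  # root of f(r) = 0
print(r_h, brentq(lambda r: f_hairy(r, alpha=0.0), 1e-3, 10.0))    # alpha -> 0 recovers r_h = 2M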
§ SURROUNDED HAIRY SCHWARZSCHILD BLACK HOLE
Recently, the hairy Schwarzschild black hole has been
introduced in <cit.> by using the gravitational decoupling
method.
This solution in the Eddington-Finkelstein coordinates takes the
form
ds^2=-(1-2M/r+α e^{-r/(M-α l/2)}) dv^2+2ε dvdr+r^2dΩ^2 .
Here v is the advanced (ε=+1) or retarded (ε=-1)
Eddington time. In this section, using the approach in <cit.>, we obtain the generalization of this solution
representing a hairy Schwarzschild solution
surrounded by some particular fields motivated by cosmology as in the following
theorem.
Theorem: Considering the extended gravitational
decoupling <cit.> and the principle of additivity and linearity
in the energy-momentum
tensor <cit.> which allows one to get correct limits to the known solutions, the
Einstein field equations admit the following solution in the Eddington-Finkelstein coordinates
ds^2=-(1-2M/r-N/r^{3ω+1}+α e^{-2r/(2M-α l)}) dv^2+2ε dvdr +r^2dΩ^2 ,
where M=ℳ+α l/2, in which ℳ and N are integration constants. The
metric represents a surrounded hairy Schwarzschild solution or equivalently hairy Kiselev solution. We summarize our proof as follows.
Let us consider the general spherical-symmetric spacetime in the form
ds^2=-f(r)dv^2+2ε dvdr+r^2dΩ^2 .
The Einstein tensor components for the metric (<ref>) are
given by
G^0_0=G^1_1=1/r^2(f'r-1+f) ,
G^2_2=G^3_3=1/r^2(rf'+1/2r^2f”) ,
where the prime sign represents the derivative with respect to the radial
coordinate r.
The total energy-momentum tensor should be a combination of
Θ_ik associated to the minimal
geometrical deformations and T_ik associated to the surrounding
fluid as
T̃_ik=αΘ_ik+T_ik .
One should note that here we don't demand the fulfilment of the
condition Θ^ik_;k=T^ik_;k=0.
Instead, we demand that
T̃^ik_;k=0, which follows from the Bianchi identity.
The total energy-momentum tensor T̃_ik follows the same symmetries of the Einstein tensor (<ref>) for (<ref>), i.e
T̃^0_0=T̃^1_1 and T̃^2_2=T̃^3_3.
An appropriate general expression for the energy-momentum tensor T_ik of the surrounding
fluid can be <cit.>
T^0_0=-ρ(r) ,
T^i_k= -ρ(r)[ -ξ(1+3ζ)
r^ir_k/r^nr_n+ζδ^i_k] .
From the form of the energy-momentum tensor (<ref>), one can
see that the spatial profile is proportional to the time component,
describing the energy density ρ with arbitrary constants ξ and
ζ depending on the internal structure of the surrounding fields.
The isotropic averaging over the angles results in
<T^i_k>=ξ/3ρδ^i_k=Pδ^i_k ,
since we considered <r^ir_k>=1/3δ^i_kr_n r^n.
Then, we have a barotropic equation of state for the surrounding fluid
as
P(r)=ωρ(r) , ω=ξ/3 ,
where P(r) and ω are the pressure and the constant equation of the
state parameter of the surrounding field, respectively.
Here, one notes that the source T_ik associated to the surrounding fluid should possess the same symmetries as T̃_ik because Θ_ij associated to the
geometrical deformations has the same symmetries as [One should
note that the hairy Schwarzschild solution is supported by an anisotropic
fluid Θ^i_k
Θ^0_0=-ρ̅ , Θ^1_1=P̅_r , Θ^2_2=Θ^3_3=P̅_t ,
where the non-vanishing parameter Π=P̅_t-P̅_r indicates
the anisotropic nature of the energy-momentum tensor. So, in order to
satisfy the condition Θ^0_0=Θ^1_1, the anisotropic fluid
should satisfy the equation of state P̅_r=-ρ̅.]
Θ^0_0=Θ^1_1=-ρ̅,
Θ^2_2=Θ^3_3=P̅_t.
It means that
T^0_0=T^1_1 and T^2_2=T^3_3.
These
exactly provide the so-called principle of additivity and linearity
considered in <cit.> in order to determine the free
parameter ζ of the energy-momentum tensor T_ik of surrounding
fluid as
ζ=-1+3ω/6ω .
Now, substituting (<ref>) and (<ref>) into
(<ref>), the non-vanishing components of the surrounding
energy-momentum tensor T_ik become
T^0_0=T^1_1=-ρ,
T^2_2=T^3_3=1/2(1+3ω) ρ .
Now, we know the Einstein tensor components (<ref>) and the
total energy-momentum tensor
(<ref>). Putting all these equations together, the
G^0_0=T̃^0_0 and G^1_1=T̃^1_1 give us the following equation
1/r^2( f'r-1+f)=-ρ-αρ̅ .
Similarly, the G^2_2=T̃^2_2 and G^3_3=T̃^3_3
components yields
1/r^2(rf'+1/2f”r^2)=1/2
(1+3ω) ρ+P̅ .
Thus, there are four unknown functions f(r), ρ(r), ρ̅(r) and P̅ that can be determined analytically
by the differential equations (<ref>) and
(<ref>) with the following ansatz
f(r)=g(r)-α l/r+α e^-2r/2M-α l .
Then, by substituting (<ref>) into (<ref>)
and (<ref>) and using (<ref>) one obtains the following system of
linear differential equations [Here we apply the Einstein equation
Ĝ^i_k=αΘ^i_k to eliminate ρ̃ and
P̃. Ĝ^i_k is the Einstein tensor for the spacetime
ds^2=-(1-α l/r+α e^-2r/2M-α l)dv^2+2ε dvdr+r^2dΩ^2 .
] for unknowns ρ(r) and g(r)
1/r^2( g'r-1+g)=-ρ,
1/r^2(rg'+1/2g”r^2)=1/2
(1+3ω) ρ .
This second order linear system can be integrated to give the metric function g(r) as
g(r)=1-2ℳ/r-N/r^3ω+1 ,
and the energy density ρ(r) of the surrounding field as
ρ(r)=-3ω N/r^3(ω+1) .
Here ℳ and N are constants of integration representing the
Schwarzschild mass and the surrounding field structure parameter,
respectively.
By putting all these solutions together, we arrive at the
surrounded hairy Schwarzschild solution or equivalently hairy Kiselev solution as
ds^2=-(1-2M/r-N/r^{3ω+1}+α e^{-2r/(2M-α l)}) dv^2+2ε dvdr +r^2dΩ^2 ,
where M=ℳ+α l/2.
From (<ref>), one can see
that the weak energy condition
demands
that parameters ω and N have different signs.
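One can verify by direct substitution that the functions g(r) and ρ(r) above indeed solve the linear system of equations for the surrounding field. The short symbolic check below is a sketch using sympy (with ℳ denoted by M); both expressions reduce to identities.

import sympy as sp

r, M, N, w = sp.symbols('r M N omega', positive=True)
g = 1 - 2*M/r - N/r**(3*w + 1)
rho = -3*w*N / r**(3*(w + 1))

# first equation: (g'r - 1 + g)/r^2 = -rho
eq1 = sp.simplify((sp.diff(g, r)*r - 1 + g)/r**2 + rho)
# second equation: (r g' + g'' r^2 / 2)/r^2 = (1 + 3*omega)*rho/2
eq2 = sp.simplify((r*sp.diff(g, r) + sp.Rational(1, 2)*sp.diff(g, r, 2)*r**2)/r**2
                  - sp.Rational(1, 2)*(1 + 3*w)*rho)
print(eq1, eq2)    # both print 0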
§ TIMELIKE GEODESICS
Considering the geodesic motion in spherically-symmetric spacetime, without loss of generality, one
can consider the equatorial plane
θ=π/2.
The geodesic equations for the metric
(<ref>) can be obtained by varying the following action
S= ∫ℒ dτ =1/2∫(-fv̇^2
+2εv̇ṙ+r^2φ̇^2 ) dτ ,
where the dot sign means the derivative with respect to the proper time
τ.
The spacetime (<ref>) is spherically-symmetric and hence in
addition to the time-translation Killing vector ∂/∂ t, there exists another Killing vector φ^i=∂/∂φ and
the corresponding
conserved
quantity, the angular momentum per mass, is given by
φ^iu_i=∂ℒ/∂φ̇=r^2 φ̇ =L .
Taking into account (<ref>) and (<ref>), one
obtains the following three geodesic equations
φ̇=L/r^2 ,
-1/2f' v̇^2+rφ̇^2-εv̈=0 ,
εr̈=fv̈+f'v̇ṙ ,
where the prime sign denotes the derivative with respect to the radial
coordinate r.
Substituting (<ref>) into
(<ref>), one obtains
fv̈=ε f L^2/r^3-1/2ε f f' v̇^2 .
Now, by applying the timelike geodesic condition g_iku^iu^k=-1 into the
equation above, we find
f'v̇ṙ=-1/2ε f' +1/2ε
ff'-1/2ε f' L^2/r^2v̇^2 .
Substituting the equation (<ref>) into
(<ref>) we arrive at the following general equation of
motion in terms of the metric function f for the radial
coordinate
r̈= -1/2(1+L^2/r^2)
f'+fL^2/r^3 .
Hence, using the obtained metric function (<ref>), one obtains the geodesic
equation in the form
r̈ = (-M/r^2+L^2/r^3-3ML^2/r^4)_sch
+ (-γ N/(2r^{γ+1})- (γ+2) NL^2/(2r^{γ+3}))_s
+(α/(2M-α l) e^{-2r/(2M-α l)}+ α L^2/((2M-α l) r^2) e^{-2r/(2M-α l)}-α L^2/r^3 e^{-2r/(2M-α l)})_h ,
where γ=3ω+1.
From (<ref>), one can observe the following interesting points.
* The three terms in the first line are the same as that
of the standard Schwarzschild black hole in which the first term
represents the Newtonian gravitational force, the second term represents
the repulsive centrifugal force, and the third term is the relativistic
correction of Einstein's general relativity which accounts for the
perihelion precession.
* The terms in the second line are new correction terms due to
the presence of the background field which surrounds the hairy
Schwarzschild black hole, in which its first term is similar to the term
of the gravitational potential in the first brackets, while its second term
is similar to the relativistic correction of general relativity.
Then,
regarding (<ref>) one realizes that for the more realistic
non-empty backgrounds, the geodesic equation of any object depends
strictly not only on the mass of the central object of the system and
the conserved angular momentum of the orbiting body, but also on the
background field nature.
The new correction terms may be small in
general in comparison to their Schwarzschild counterparts (the first and
the third term in the first brackets).
However, one can show that, there
are possibilities that these terms are comparable to them.
One can also observe, by using the equation (<ref>), that for ω∈ (-1/3, 0) the Newtonian gravitational force is
strengthened by the corrections caused by the surrounding field, while
for other values of ω the force is weakened.
If we
consider the same question regarding the second term, which corresponds
to the relativistic correction of Einstein's general relativity, then
for values ω∈(-1, 0) the force is strengthened, while
it is weakened for other values of ω.
The
surrounding fluid does not contribute to the
repulsive centrifugal force.
* The terms in the third line represent modifications by
the primary hairs α and l.
The second term here corresponds to the
relativistic correction of Einstein's general relativity.
The third term here
represents a new correction by the primary hairs to the
repulsive centrifugal force.
One can define the effective distance D
to find out where this force disappears by relation
A_1/A_r≈ 1 where A_r is the Schwarzschild black hole
repulsive centrifugal force, and A_1 is the correction to this force
caused by primary hairs.
So the distance is given by
D= (M-α l/2 )lnα .
Considering minimal geometrical deformations, α must be
negligible, i.e. α≪ 1.
So according to (<ref>), the correction caused by
primary hairs can weaken
the repulsive centrifugal force but it can't cancel it, and hence this
correction is negligible in general.
The first term in (<ref>) contributes a correction
to the Newtonian potential. This can be seen using the effective potential
V_eff(r). One can write the geodesic equations in the form
V_eff(r)=Φ(r)+L^2/2r^2+Φ(r)L^2/r^2 ,
where Φ(r) is related to g_00 metric component via relation
g_00=-(1+2Φ) .
By comparing this with (<ref>), we come to the conclusion that
Φ(r)=-M/r+N/2r^3ω+1-α e^-r/2M-α l .
Now, taking the derivative of V_eff in (<ref>)
with respect to r
d^2r/dτ^2=-dV_eff/dr ,
we arrive at the equation of motion (<ref>).
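The radial motion can also be integrated numerically. The following sketch is our own illustrative code (arbitrary parameter values, units G=c=1); it uses the general form r̈=-(1/2)(1+L^2/r^2)f'+fL^2/r^3 with the metric function of the surrounded hairy solution and reports the radial turning points of a bound orbit.

import numpy as np
from scipy.integrate import solve_ivp

Mcal, alpha, ell, N, w, L = 1.0, 0.1, 0.5, -0.1, 1.0, 4.0    # illustrative values only
M = Mcal + alpha * ell / 2
gamma = 3 * w + 1

def f(r):
    return 1 - 2 * M / r - N / r**gamma + alpha * np.exp(-2 * r / (2 * M - alpha * ell))

def r_ddot(r, dr=1e-6):
    fp = (f(r + dr) - f(r - dr)) / (2 * dr)                  # numerical f'(r)
    return -0.5 * (1 + L**2 / r**2) * fp + f(r) * L**2 / r**3

sol = solve_ivp(lambda tau, y: [y[1], r_ddot(y[0])], (0.0, 200.0), [12.0, 0.0], max_step=0.1)
print(sol.y[0].min(), sol.y[0].max())    # radial turning points for these values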
In order to better understand the nature of the solution obtained in (<ref>),
one can consider the following two groups of forces and
investigate their behaviour for various sets of surrounding field and
primary hair parameters.
G≡ M/r^2+γ N/(2r^{γ+1})-α/(2M-α l) e^{-2r/(2M-α l)} ,
H≡ 3ML^2/r^4+ (γ+2) NL^2/(2r^{γ+3})-α L^2/((2M-α l) r^2) e^{-2r/(2M-α l)} ,
where G group represents the Newtonian
gravitational force with its modifications and H group corresponds to the
relativistic corrections of the general relativity.
One can ask whether the new modifications caused by the surrounding fields and
primary hairs can cancel the original forces or change their effect, i.e. change
their sign.
Hence, we are interested in possible cases in which, for a given set of parameters ω,
α and l, the G and H functions take negligible values
or change their signs.
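The two groups can be tabulated directly. The sketch below (our own, with illustrative parameter values) evaluates G(r) and H(r) for a chosen surrounding field and can be used to search for sign changes outside the horizon.

import numpy as np

def G(r, Mcal=1.0, alpha=0.01, ell=0.5, N=-0.1, w=1.0):
    M = Mcal + alpha * ell / 2
    gamma = 3 * w + 1
    return (M / r**2 + gamma * N / (2 * r**(gamma + 1))
            - alpha / (2 * M - alpha * ell) * np.exp(-2 * r / (2 * M - alpha * ell)))

def H(r, L=4.0, Mcal=1.0, alpha=0.01, ell=0.5, N=-0.1, w=1.0):
    M = Mcal + alpha * ell / 2
    gamma = 3 * w + 1
    return (3 * M * L**2 / r**4 + (gamma + 2) * N * L**2 / (2 * r**(gamma + 3))
            - alpha * L**2 / ((2 * M - alpha * ell) * r**2)
            * np.exp(-2 * r / (2 * M - alpha * ell)))

r = np.linspace(2.0, 10.0, 200)
print(G(r).min(), H(r).min())   # inspect possible sign changes outside the horizon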
In the following subsections, we consider some specific fields possessing particular equations
of state motivated by cosmology.
However, we can note the following facts which we can derive from
(<ref>). Let's consider the first two terms: for -1<ω<0
these two terms are always positive. However, the second term is
negative for positive ω and we can expect the sign change of H.
Let's consider two particular cases:
* the radiation ω=1/3. In this case, |N|≤ M^2
and the first two terms become negative in the region 0≤ r≤ 2M/3
which is inside the event horizon. Because the third term in
(<ref>) is negligible we can conclude that H is always positive
outside the event horizon region.
* The stiff fluid ω=1. In this case we can put N=-M^4 so that
f(r=M)>0. Thus, in this case the event horizon is located at a radius
less than M. However, the first two terms in (<ref>)
become negative at r=M, and H<0 outside the event horizon region.
§.§ Stiff Fluid
We begin our analysis of timelike geodesics with the surrounding fluid having
the average equation of
the state
of a stiff fluid as
P=ρ ⇔ ω =1.
As mentioned previously, the presence of the surrounding field has a weakening effect on the forces given by (<ref>) and (<ref>).
From (<ref>), one observes that N must be negative to maintain a positive energy density for the surrounding fluid.
Our objective is to determine whether the corrections by the surrounding field and primary
hairs can cancel out the initial Schwarzschild forces or potentially can change their sign, and thereby altering the direction of the forces.
In Figure <ref>(a), we plotted three curves corresponding to the usual Schwarzschild,
Kiselev and hairy Kiselev black holes.
We observe that the function G
for the hairy Kiselev black hole is negligible but positive near the event
horizon r=2ℳ for the given specific set of parameters.
However, in the case of purely Kiselev
black hole (i.e. α=0), the function G is negative in the
interval 2≤ r ≤ 2.15. One notes that in the purely Kiselev case, we have
a naked singularity (NS) (i.e. g_00≠ 0).[For this set of
parameters g_00 is always negative, i.e. there are no positive roots
of the equation g_00=0 for r∈(0,+∞). For
this reason, we have concluded that r=0 represents a NS because the
Kretschmann scalar diverges at r=0.
By NS we mean that the r=0 singularity is not covered by
an event horizon. The question about future-directed non-spacelike
geodesics which terminate at this singularity in the past has not been
considered within this paper.]
Figure <ref>(b) shows that the function G becomes negative in
the vicinity of the event horizon (i.e. in the region 2≤ r ≤ 2
.02) for the hairy
Kiselev black hole for the set of parameters N = -5.186, l = 1.567.
To extend the distance from the event horizon over which the function G can become negative, one should increase |N| and l; however, in this
case, ℳ∼α l/2 and it will no longer be a minimal
geometrical deformation in (<ref>).
So we can conclude that
G might be negative outside the event horizon but only in its vicinity.
Figure <ref>(c) compares the function H for the Schwarzschild, Kiselev
and hairy
Kiselev cases for the values considered in Figure 1(b).
In order to better understand the influence of a primary hair on the
geodesic motion, we set α=0.1 in order to consider larger values
of l.
Figures <ref>(a) and <ref>(b) show how G changes with
different values of l and N.
One can see that there are regions where it
becomes negative.
However, from these figures one cannot tell whether one deals
with a black hole or a naked singularity.
For this purpose one should
impose the condition of the existence of an event horizon.
Figure <ref>(c) shows how G changes in this case.
§.§ Radiation
Here we consider the surrounding field having the average equation
of state of radiation field as
P=ρ/3 ⇔ ω =1/3 .
In this case, the N parameter must be negative, and akin to the previous
case, the surrounding radiation field and primary hairs weaken the forces in
(<ref>)
and (<ref>).
Figure <ref>(a) shows three curves for the pure Schwarzschild, Kiselev
and hairy Kiselev black holes for the parameter values N=-3.729 and l=4.
For the case of a surrounding radiation-like field, one observes that the
spacetime is akin to the hairy Reissner-Nordstrom black hole, with
the parameter N playing the role of the black hole's electric charge, i.e.
N=-Q^2.
So, in the purely Reissner-Nordstrom case, the curve corresponds to a
naked singularity because ℳ^2<Q^2.
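To make the Reissner-Nordstrom analogy explicit, here is a short check (a sketch, assuming the surrounding-field term enters the metric function as -N/r^{3ω+1}, as in Kiselev's original solution; the precise form used here should be read off from (<ref>)):
\[
\omega=\tfrac{1}{3}:\qquad -\frac{N}{r^{3\omega+1}}=-\frac{N}{r^{2}}=+\frac{Q^{2}}{r^{2}}\quad\text{for }N=-Q^{2},
\]
\[
f(r)\ \supset\ 1-\frac{2\mathcal{M}}{r}+\frac{Q^{2}}{r^{2}},\qquad
\text{no horizon (NS) when } \mathcal{M}^{2}<Q^{2}\ \Longleftrightarrow\ \mathcal{M}^{2}+N<0,
\]
which is the same criterion M^2+N<0 quoted in the footnote of the Conclusion.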
In comparison to the stiff fluid case, one notes
that the parameters l and N
must take greater values to ensure that the function G is negligible.
In Figure <ref>(b), we plot curves showing that hairs
can affect the geodesic motion, so that G can become negative in the
vicinity of the event horizon (in the region 2≤ r ≤ 2.042).
In this case, we set N=-3.889
and l=4.16.
One can see that the smaller the value of ω we take, the bigger the value
of l required to ensure negative values of G.
For example, if we
take the same value of l (i.e. l=4.16), then in the case of the stiff
fluid we have N=-15.557 (obtained by demanding that
the event horizon is located at r=ℳ), and the function G
is negative in the region 2≤ r ≤ 2.534.
Thus, one can see that
the region where negative values of G are possible shrinks as
ω tends to zero.
Figure <ref>(c) shows the function H for the same values of N
and l as in the previous figure.
Similar to the stiff fluid case, we have several plots for α=0.1.
Figures <ref>(a) and <ref>(b) show that G
becomes negative at larger distances than in the stiff fluid case. This
apparently contradicts our previous statement that the smaller the ω we consider, the smaller the region where G becomes negative.
However, one notes that this is a case
of a naked singularity: if one imposes the
extra condition of the existence of an event horizon, then in this case
(α=0.1) the function G is always positive outside the horizon,
as can be seen from Figure <ref>(c).
§.§ Dust
For a dust-like field we have
P=0 ⇔ ω=0 ,
and we can show analytically that the function G is positive near the
event horizon as follows.
We have
(2M+N)/r = 1+α e^{-r/ℳ} .
Substituting this into (<ref>) and considering the event horizon at
r=2ℳ, one obtains
1/(4ℳ) - α/(4ℳ e^2) > 0 .
So, for physically relevant values of α, l and N, the
function G is positive outside the event horizon.
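As a quick arithmetic check of the inequality above (a sketch; the exact prefactors follow from (<ref>)): at the horizon r=2ℳ the exponential factor is e^{-2}, so
\[
\frac{1}{4\mathcal{M}}-\frac{\alpha}{4\mathcal{M}e^{2}}
=\frac{1}{4\mathcal{M}}\left(1-\alpha e^{-2}\right)>0
\quad\Longleftrightarrow\quad \alpha<e^{2}\approx 7.39,
\]
which is satisfied for the small (perturbative) values of α considered in this work.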
Figure <ref>(a) compares three curves: the hairy
Kiselev black hole, the purely Kiselev case with α=0, and the Schwarzschild
case with α=0 and N=0.
These curves are plotted for l=0.5, N=-0.115.
Figure <ref>(b)
is plotted for the same values of black hole parameters and shows the
behaviour of the function H.
For ω≥ 0 the function H is positive, and its
behaviour is shown in Figure <ref>(c).
For other values of ω we
could not find conditions (at small values of α) under which H
becomes negative.
§.§ Quintessence
For a quintessence-like field, the equation of state is
P=-2/3ρ ⇔ ω=-2/3 .
In this case, the parameter N must be positive as one can see from
(<ref>).
The function G can be negligible in the vicinity of
the horizon only if either N or l is negative.
However, G can
take negative values, but only at large distances from the event horizon. As
can be seen from Figure <ref>(a), for the values l=0.05, N=0.028 the
function G for the Kiselev black hole becomes negative at r>8.553. The effect of
N and α on the function H for these values is negligible and
becomes considerable only at large distances, as one can see from Figure
<ref>(b).
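The fact that the quintessence corrections only become important far from the black hole can be anticipated from the radial scaling of the surrounding-field term (a sketch, assuming the Kiselev form -N/r^{3ω+1} for this term; the hair term e^{-r/ℳ} is exponentially suppressed at large r):
\[
\omega=-\tfrac{2}{3}:\qquad \frac{N}{r^{3\omega+1}}=\frac{N}{r^{-1}}=N\,r ,
\]
so the correction grows linearly with r and can only compete with the Newtonian 2M/r term at sufficiently large radii, consistent with G turning negative only for r>8.553 in Figure <ref>(a).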
§.§ De Sitter background
In this case, the surrounding fluid has the effective equation of state
P=-ρ ⇔ ω=-1 .
As in the previous case, the parameter N must be positive, and the
function G must be positive near the event horizon.
Figure <ref>(a) shows that the function G for N=0.016, l=0.01
becomes negative for r>3.841.
The function H behaves very similar in all three cases as can be seen in Figure <ref>(b).
Figure <ref> shows the behaviour
of G at α=0.1 with the extra condition of the existence of an event horizon.
Here
<ref>(a) is plotted for a positive cosmological constant, while
<ref>(b)
is for a negative cosmological constant, the anti-de-Sitter case.
§.§ Phantom field
The equation of state of a phantom-like field is given by
P=-4/3ρ ⇔ ω = -4/3 .
The parameter N must be positive, and as can be seen in Figure
<ref>(a), the
function G takes negative values in the region r>3.056 for l=0.05,
N=0.007.
Figure
<ref>(b) shows that for the same values of l and N, the
function H can be negative in the region r>5.433.
§ CONCLUSION
Inspired by the fact that black holes inhabit non-vacuum cosmological
backgrounds, we present a new solution to the Einstein field equations representing a surrounded
hairy
Schwarzschild black hole.
This solution
takes into account both the primary hair and surrounding fields (represented
by an energy-momentum tensor following the linearity and additivity condition
<cit.>) which affect the properties of the black hole.
The effect of the corresponding contributions
on timelike geodesics is discussed. We find that the newly induced modifications
can be considerable in certain cases.
In particular, we investigate how
the specified surrounding fields and primary hairs affect the Newtonian and
perihelion precession terms. Our observations are as follows.
* The surrounding fields with
-1/3<ω<0 contribute positively to the Newtonian term, i.e. they strengthen the gravitational attraction.
* The new corrections to the Newtonian term might be of the same order or
even greater in all other cases if one considers a naked singularity.[For positive ω, the weak energy condition demands negative values of N. This restriction, for example in
the dust case, requires |N|<2M; otherwise the metric function f(r) is positive for all
r, since all the terms involved are positive, and hence there is no event
horizon. In the case of radiation, i.e. ω=1/3, the NS occurs if M^2+N<0, which requires large values of
|N|. Hence one observes that for bigger values of |N| the
function |G| becomes bigger, but
this implies the violation of the condition required for the existence of an event horizon.]
* In the case that the solution represents a black hole, the new corrections can be of the same order as or
even greater than the Newtonian term in the
vicinity of the event horizon for ω>0.
* For ω<-1/3, i.e. for effectively repulsive fluids akin
to dark energy models, the correction terms dominate far from the event horizon, mainly near the
cosmological horizon (see the scaling sketch below).
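The qualitative behaviour summarized in the last two items can be organized by the radial power of the surrounding-field term (a sketch, assuming it scales as N/r^{3ω+1}, as in the Kiselev solution; the exact coefficients are fixed by (<ref>)):
\[
\frac{N}{r^{3\omega+1}}:\qquad
\omega=1:\ r^{-4},\quad
\omega=\tfrac{1}{3}:\ r^{-2},\quad
\omega=0:\ r^{-1},\quad
\omega=-\tfrac{2}{3}:\ r^{+1},\quad
\omega=-1:\ r^{+2},\quad
\omega=-\tfrac{4}{3}:\ r^{+3}.
\]
Terms that decay faster than 1/r modify the geometry mainly near the horizon, while the growing terms (ω<-1/3) are negligible there and dominate only at large radii, in agreement with the two bullet points above.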
The Schwarzschild black hole is an idealized vacuum solution, and it is
important to consider how it gets deformed in the presence of matter fields.
Another
crucial factor to consider is the impact of the surrounding environment,
particularly the shadow of a black hole in the cosmological background,
which serves as a potential cosmological ruler <cit.>.
The
solution presented in this work can be further investigated to study the
shadow of a hairy Schwarzschild black hole in various
cosmological backgrounds, in order to find out how an anisotropic
fluid can affect the observational properties; this is the plan of our
upcoming investigations.
It is worthwhile to mention that, by applying the Newman-Janis <cit.> and
Azreg-Ainou <cit.> algorithms, one can obtain the
rotating version of the solution presented here. Also, investigation of the quasi-normal modes, thermodynamic properties,
accretion process and gravitational lensing of these solutions can help
us to better understand the nature of these objects.
Acknowledgments: V. Vertogradov and M. Misyura thank the RSF grant
No. 22-22-00112 for financial support.
The work was performed
within the SAO RAS state assignment in the part "Conducting Fundamental
Science Research".
bib:9 The Event Horizon Telescope Collaboration, First M87
Event Horizon Telescope results.
I. The shadow of the supermassive black hole, Astrophys.
J. Lett. 875 (2019) L1
bib:10 The Event Horizon Telescope Collaboration, First M87
Event Horizon Telescope results.
II. Array and instrumentation, Astrophys.
J. Lett. 875 (2019)
L2
bib:11 The Event Horizon Telescope Collaboration, First M87 Event Horizon Telescope results. III. Data processing and calibration, Astrophys. J. Lett.
875 (2019) L3
bib:ehtc2022 Akiyama, K. et al. [Event Horizon Telescope Collaboration].
First Sagittarius A* Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole in the Center of the Milky Way. Astrophys. J. Lett. 2022, 930, L12.
bib:pen R. Penrose and R. M. Floyd, Extraction of Rotational Energy from a Black Hole Nature Physical Science
229, 177 (1971).
bib:zaslav O. B. Zaslavskii, Energy extraction from extremal charged black holes due to the Banados-Silk-West effect Phys. Rev. D 86, 124039 (2012)
[arXiv:1207.5209]
bib:mp Lucas Timotheo Sanches, Mauricio Richartz. Energy extraction from non-coalescing black hole binaries
Phys. Rev. D
104, 124025(2021), arXiv:2108.12408 (gr-qc)
bib:charged_vaidya Vitalii Vertogradov, Extraction energy from
charged Vaidya black hole via the Penrose process 2023 Commun. Theor.
Phys. 75 045404
bib:zaslav_rn O. B. Zaslavskii, Negative energy states in the
Reissner-Nordstrom metric Mod. Phys. Lett. A 36,
2150120 (2021), arXiv:2006.02189 [gr-qc].
bib:grib Grib A.A., Pavlov Yu.V., Vertogradov V.D. Geodesics with negative energy in the ergosphere of rotating black holes.
Modern Physics Letters A. Vol. 29,Iss. 20. 2014- P. 14501-14510. [arXiv:1304.7360]
bib:ver_negative V. Vertogradov, The Negative Energy in Generalized Vaidya Spacetime Universe 6(9), 155 (2020),
arXiv:2209.10976 [gr-qc]
bib:ver_ker_negative Vertogradov V.D. Geodesics with negative
energy in the ergosphere of rotating black holes, Gravitation and Cosmology. Vol. 21, Iss. 2. 2015.- PP. 171-174. [arXiv:2210.04674]
babi E. Babichev, V. Dokuchaev, Yu. Eroshenko, Black Hole Mass Decreasing due to Phantom Energy Accretion Phys. Rev.
Lett. 93, 021102 (2004).
bib:bsw M. Banados, J. Silk, S.M. West, Kerr Black Holes as Particle Accelerators to Arbitrarily High Energy Phys. Rev. Lett. 103,
111102 (2009) [arXiv:0909.0169].
bib:zaslav_anti O. B. Zaslavskii, Acceleration of particles by
black holes as a result of deceleration: Ultimate manifestation of kinematic nature of BSW effect Phys. Lett. B 712 (2012)
161 [arXiv:1202.0565]
bib:zaslav_dirty O. B. Zaslavskii, Circular orbits and
acceleration of particles by near-extremal dirty rotating black holes: general approach.
Class.
Quantum Grav. 29 (2012) 205004 [arXiv:1201.5351 [gr-qc]]
bib:grib_complex Grib, A. A., Pavlov Y. V. (2020) Rotating
black holes as sources of high energy particles. Physics of Complex Systems, 1 (1), 40-49.DOI: 10.33910/2687-153X-2020-1-1-40-49
bib:ver_complex Vertogradov, V.D. (2023) On particle
collisions during gravitational collapse of Vaidya spacetimes. Physics of Complex Systems, 4 (1), 17-23.
bib:joshi_col Mandar Patil, Tomohiro Harada, Ken-ichi Nakao,
Pankaj S. Joshi, Masashi Kimura, Infinite efficiency of collisional Penrose process: Can over-spinning Kerr geometry be the source of ultra-high-energy cosmic rays and neutrinos? Phys. Rev. D 93, 104015 (2016) [arXiv:1510.08205 [gr-qc]]
bib:japan_col T. Harada, H. Nemoto, U. Miyamoto, Upper limits of particle emission from high-energy collision and reaction near a maximally rotating Kerr black hole, Phys. Rev. D 86, 024027 (2012) [arXiv:1205.7088]
bib:50 V. V. Kiselev, Class. Quintessence and black holes Quant. Grav. 20, 1187 (2003).
bib:kvaidya Heydarzade, Y., Darabi, F. Surrounded Vaidya
solution by cosmological fields. Eur. Phys. J. C 2018,
78, 582.
bib:kbvaidya Heydarzade, Y.; Darabi, F. Surrounded
Bonnor-Vaidya solution by cosmological fields. Eur. Phys. J. C
2018, 78, 1004.
bib:kevaidya Y. Heydarzade, F. Darabi Surrounded
Vaidya black holes: apparent horizon properties, Eur. Phys. J. C (2018) 78:342.
bib:51 R. Geroch and J.B. Hartle, Distorted black holes J. Math. Phys. 23, 680
(1982).
bib:52 S. Fairhurst, B. Krishnan, Distorted Black Holes with Charge Int. J. Mod. Phys. D10, 691
(2001).
bib:53 S.R. Brandt, E. Seidel, Evolution of distorted rotating black holes. III. Initial data Phys. Rev. D54, 1403 (1996).
bib:54 M. Ansorg, D. Petroff, Black holes surrounded by uniformly rotating rings Phys. Rev. D 72.2, 024019 (2005).
bib:55 S. W. Hawking, Black holes in general relativity Commun. Math. Phys. 25, 152 (1972).
bib:56 J.D. Brown and V. Husian, Black holes with short hair Int. J. Mod. Phys. D6, 563
(1997).
bib:57 S.Droz, M. Heusler, N. Straumann, New black hole solutions with hair Phys. Lett. B, 268
(3-4), 371 (1991).
bib:58 J. Barranco, A. Bernal, J.C. Degollado, A.
Diez-Tejedor, M. Megevand, M. Alcubierre, D. Ndnez, O. Sarbach, Schwarzschild Black Holes can Wear Scalar Wigs Phys.
Rev. Lett. 109, 081102 (2012).
bib:noh4 J.D. Bekenstein, Novel "no-scalar-hair" theorem for
black holes Phys. Rev. D 51(12), R6608 (1995).
bib:hok_hair Hawking, S.W.; Perry, M.J.; Strominger, A. Soft Hair on Black Holes. Phys. Rev. Lett. 2016, 116, 231301.
bib:mgd1 J. Ovalle, Extending the geometric deformation: New
black hole solutions. Int. J. Mod. Phys. Conf. Ser., 41, 1660132 (2016)
[arXiv:1510.00855 [gr-qc]]
bib:mgd2 Roberto Casadio, Jorge Ovalle, Roldao da Rocha, The
Minimal Geometric Deformation Approach Extended. Class. Quantum Grav. 32 (2015) 215020 [arXiv:1503.02873 [gr-qc]]
bib:mgd3 Ovalle, J.; Casadio, R.; Rocha, R.D.; Sotomayor, A.; Stuchlik, Z. Black holes by gravitational decoupling. Eur. Phys. J. C 2018,
bib:gd1 Ovalle, J. Decoupling gravitational sources in general relativity: From perfect to anisotropic fluids. Phys. Rev. 2017, D95, 104019.
bib:gd2 Ovalle, J. Decoupling gravitational sources in general relativity: The extended case. Phys. Lett. B 2019, 788, 213.
bib:gd3 Contreras, E.; Ovalle, J.; Casadio, R. Gravitational decoupling for axially symmetric systems and rotating black holes. Phys. Rev. D 2021, 103, 044020.
bib:bh1 Ovalle, J.; Casadio, R.; Contreras, E.;
Sotomayor, A. Hairy black holes by gravitational decoupling. Phys. Dark Universe 2021,
bib:hairy_kerr S. Mahapatra and I. Banerjee, Rotating hairy
black holes and thermodynamics from gravitational decoupling, Phys. Dark
Univ. 39 (2023) 101172 [arXiv:
2208.05796],
bib:vermax Vitalii Vertogradov, Maxim Misyura "Vaidya and Generalized Vaidya
Solutions by Gravitational Decoupling"Universe 2022, 8(11), 567;
doi:10.3390/universe8110567 [arXiv:2209.07441 [gr-qc]]
bib:vermax2 Vitalii Vertogradov, Maxim Misyura, The Regular
Black Hole by Gravitational Decoupling.
Phys. Sci.
Forum 2023, 7(1), 27
bib:ovalle_regular Jorge Ovalle, Roberto Casadio, Andrea
Giusti, Regular hairy black holes through Minkowski deformation.
[arXiv:2304.03263 [gr-qc]]
bib:85 M. Jamil, S. Hussain, B. Majeed, Dynamics of particles around a Schwarzschild-like black hole in the presence of quintessence and magnetic field, Eur. Phys. J. C 75, 24 (2015).
bib:86 I. Hussain, S. Ali, Marginally stable circular orbits in the Schwarzschild black hole
surrounded by quintessence matter Eur. Phys. J.l Plus 131. 8, 275
(2016).
bib:87 B. Malakolkalami, K. Ghaderi, The null geodesics of the Reissner-Nordstrom black hole surrounded by quintessence, Mod. Phys. Lett. A 30, no. 10, 1550049 (2015).
bib:88 S. Fernando, Schwarzschild black hole surrounded by quintessence: Null geodesics Gen. Rel. Grav. 44.7, 1857 (2012).
bib:89 R. Uniyal, N.C. Devi, H. Nandan, K.D. Purohit, Geodesic Motion in Schwarzschild Spacetime Surrounded by Quintessence, Gen. Rel. Grav. 47, no. 2, 16 (2015).
bib:90 S. Fernando, S. Meadows, K. Reis, Null Trajectories and Bending of Light in Charged Black Holes with Quintessence Int. J. Theo. Phys.
54.10, 3634 (2015).
bib:geod Ramos, A.; Arias, C.; Avalos, R.; Contreras,E. Geodesic motion
around hairy black holes. Annals Phys. 2021, 431,
168557.
bib:lens Sohan Kumar Jha, Anisur Rahaman, Gravitational
lensing by the hairy Schwarzschild black hole. arXiv:2205.06052[gr-qc]
bib:energy Zhen Li, Faqiang Yuan, Energy extraction via
Comisso-Asenjo mechanism from rotating hairy black hole. [arXiv:2304.12553 [gr-qc]]
bib:thermo Cavalcanti, R.T.; Alves, K.d.S.; da Silva, J.M.H.
Near horizon thermodynamics of hairy black holes from gravitational
decoupling.
Universe 2022, 8, 363.
bib:ver_thermo Vertogradov V. D., Kudryavcev D. A. On the temperature of hairy black holes. Physics of Complex Systems. Vol. 4, no. 2, 2023
bib:106 Y. Heydarzade, F. Darabi, Black Hole Solutions Surrounded by Perfect Fluid in Rastall Theory, Phys. Lett. B 771, 365 (2017).
bib:ruller Oleg Yu. Tsupko, Zuhui Fan, Gennady S.
Bisnovatyi-Kogan, Black hole shadow as a standard ruler in cosmology. Classical and Quantum Gravity, 37, 065016 (2020) [arXiv:1905.10509 [gr-qc]]
bib:71r E. T. Newman and A. I. Janis, "Note on the Kerr spinning particle metric," J. Math. Phys. 6 (1965) 915-917.
bib:73r M. Azreg-Ainou, "Regular and conformal regular cores for static and rotating solutions," Phys. Lett. B 730 (2014) 95-98, [arXiv:1401.0787 [gr-qc]].
bib:74r M. Azreg-Ainou, "From static to rotating to conformal static solutions: Rotating imperfect fluid wormholes with(out) electric or magnetic field," Eur. Phys. J. C 74 no. 5, (2014) 2865, [arXiv:1401.4292 [gr-qc]].
|
http://arxiv.org/abs/2307.04248v1 | 20230709190135 | Topological Hochschild homology of the image of j | [ "David Jongwon Lee", "Ishan Levy" ] | math.AT | [ "math.AT", "math.KT" ] |
We compute the mod (p,v_1) and mod (2,η,v_1) topological Hochschild homology (THH) of many variants of the image-of-J spectrum. In particular, we do this for j_ζ, whose THH is closely related to the K-theory of the K(1)-local sphere. We find in particular that the failure of THH to satisfy ℤ_p-Galois descent for the extension j_ζ→ℓ_p corresponds to the failure of the p-adic circle to be its own free loop space. For p>2, we also prove the Segal conjecture for j_ζ,
and we compute the K-theory of the K(1)-local sphere in degrees ≤ 4p-6.
Topological Hochschild homology of the image of j
David Jongwon Lee, Ishan Levy
=================================================
§ INTRODUCTION
The algebraic K-theory of the K(1)-local sphere, or K(L_K(1)), is an object capturing fundamental structural information about the K(1)-local category. Part of Ausoni–Rognes' original vision of chromatic redshift was that it could be understood, at least T(2)-locally, via Galois hyperdescent. More specifically, they conjectured <cit.> that the map
K(L_K(1))⊗ V → K(_p)^h_p^×⊗ V
is an equivalence in large degrees when V is a type 2 finite spectrum. The T(n+1)-local K-theory of Morava E-theory has been shown in <cit.> to have Galois descent for finite subgroups of the Morava stabilizer group. Moreover, recent work of Ben Moshe–Carmeli–Schlank–Yanovski <cit.> combined with <cit.> shows that L_K(2)K(L_K(1)) → L_K(2)K(_p)^h_p^× is an equivalence, i.e that Galois hyperdescent is satisfied for the K(2)-locally.
Recent work of the second author <cit.> has made K(L_K(1)) an integrally accessible object. If we consider the connective Adams summand ℓ_p (or _2 for p=2) as a -equivariant _∞-ring via the Adams operation Ψ^1+p, then j_ζ is defined to be its -homotopy fixed points. Then it is shown that there is a cofiber sequence
K(j_ζ) → K(L_K(1)) →Σ K(_p)
split on π_*. It is also shown that the Dundas–Goodwillie–McCarthy square
K(j_ζ) ⟶ TC(j_ζ)
↓                 ↓
K(ℤ_p) ⟶ TC(ℤ_p^hℤ)
is a pullback square.
The three spectra K(_p), K(_p), and (_p^h)[This is essentially the nil- of _p by <cit.>, which is studied in <cit.>.] are understood, so understanding K(L_K(1)) is essentially reduced to understanding (j_ζ).
The primary goal of this paper is to understand THH(j_ζ) modulo (p,v_1) and (2,η,v_1), which is the first step in understanding TC(j_ζ).
For p>2, there is an isomorphism of rings
π_*(j_ζ)/(p,v_1) ≅π_*(ℓ_p)/(p,v_1)⊗_𝔽_pHH_*(_p^h/_p)
For p=2, there is an isomorphism of rings
π_*(j_ζ)/(2,η,v_1) ≅π_*(_2)/(2,η,v_1)⊗_𝔽_2HH_*(_2^h/_2).
Each of the terms on the right hand side of the equivalences is well understood. The ring π_*(ℓ_p)/(p,v_1) can be found in <cit.> or <Ref>, and π_*(_2)/(2,η,v_1) can be found in <cit.> or <Ref>.
The last tensor factor is given in <Ref> as
THH_*(𝔽_p^hℤ/𝔽_p) ≅ Λ[ζ]⊗ C^0(ℤ_p;𝔽_p)
where |ζ| = -1, and C^0(ℤ_p;𝔽_p) denotes the ring of continuous functions from ℤ_p to 𝔽_p.
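Putting the two tensor factors together, the p>2 statement can be unwound as follows (a sketch of the resulting ring, combining the theorem above with the computation of the mod (p,v_1) homotopy of the THH of ℓ_p recalled later in the paper; degrees as indicated):
\[
\pi_*\big(\mathrm{THH}(j_\zeta)/(p,v_1)\big)\;\cong\;
\mathbb{F}_p[\mu_2]\otimes\Lambda[\lambda_1,\lambda_2]\otimes\Lambda[\zeta]\otimes C^0(\mathbb{Z}_p;\mathbb{F}_p),
\]
\[
|\lambda_1|=2p-1,\qquad |\lambda_2|=2p^2-1,\qquad |\mu_2|=2p^2,\qquad |\zeta|=-1 .
\]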
The factor C^0(ℤ_p;𝔽_p) appearing here can be viewed as the failure of descent at the level of THH for the ℤ_p-Galois extension coming from the ℤ-action on ℓ_p and _2. More precisely, at the level of π_*, the map
THH(ℓ_p^hℤ)/(p,v_1) → THH(ℓ_p)^hℤ/(p,v_1)
is base changed from the map C^0(ℤ_p;𝔽_p)→𝔽_p that sends a continuous function to its value at 0 (<Ref>).
This phenomenon can be explained by interpreting THH in terms of free loop spaces. If X is a pro-p-finite space, then the 𝔽_p-linear Hochschild homology of the cochain algebra C^*(X;𝔽_p) is computed as
(C^∗(X;𝔽_p)/𝔽_p) = C^∗(LX;𝔽_p)
where LX is the free loop space of X. Since C^*(Bℤ_p;𝔽_p) ≅ 𝔽_p^hℤ, the failure of the descent
THH(𝔽_p^hℤ/𝔽_p) ≄ THH(𝔽_p/𝔽_p)^hℤ
is explained by the fact that Bℤ_p is not LBℤ_p ≅ Bℤ_p×ℤ_p. For any p-complete _∞-ring R with a trivial ℤ-action, this completely accounts for the failure of p-completed THH to commute with ℤ-homotopy fixed points (<Ref>).
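As a concrete instance of the free loop space description (a sketch; here Bℤ_p denotes the p-complete circle, and we use that ℤ_p is abelian, so its free loop space splits as a product):
\[
\mathrm{THH}(\mathbb{F}_p^{h\mathbb{Z}}/\mathbb{F}_p)\;\simeq\;
\mathrm{THH}\big(C^*(B\mathbb{Z}_p;\mathbb{F}_p)/\mathbb{F}_p\big)\;\simeq\;
C^*(LB\mathbb{Z}_p;\mathbb{F}_p)\;\simeq\;
C^*(B\mathbb{Z}_p\times\mathbb{Z}_p;\mathbb{F}_p),
\]
\[
\pi_*\;\cong\; H^{-*}(B\mathbb{Z}_p;\mathbb{F}_p)\otimes C^0(\mathbb{Z}_p;\mathbb{F}_p)\;\cong\;\Lambda[\zeta]\otimes C^0(\mathbb{Z}_p;\mathbb{F}_p),\qquad |\zeta|=-1,
\]
whereas the other side of the comparison, THH(𝔽_p/𝔽_p)^hℤ ≃ 𝔽_p^hℤ, only contributes the exterior factor Λ[ζ], i.e. the constant functions.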
π_*(j_ζ)/(p,v_1) ≅π_*(ℓ_p^triv,h)/(p,v_1)
where ℓ_p^triv,h is the fixed points of ℓ_p by a trivial -action.
The key idea in our proof of <Ref> is to run the spectral sequence for obtained by filtering j_ζ via the homotopy fixed point filtration, and showing that the differentials in the associated spectral sequence behave similarly enough to the case of a trivial action. To understand the associated graded algebra of the homotopy fixed point filtration, we further filter it by the p-adic filtration. At the level of the associated graded of both filtrations, j_ζ is indistinguishable from the fixed points by a trivial action, and we show that mod (p,v_1) and (2,η,v_1) this remains true at the level of homotopy rings after running the spectral sequences for of those filtrations.
The phenomenon that the -action on ℓ_p behaves like the trivial one is shown in <cit.> to asymptotically hold even at the level of cyclotomic spectra. More precisely, it is shown there that given any fixed type 3 finite spectrum V, for all sufficiently large k,
(ℓ_p^hp^k)⊗ V ≅(ℓ_p^triv,h)⊗ V
as cyclotomic spectra.
It is shown there that the failure of descent we observe on THH persists at the T(2)-local level. Combined with the aforementioned hyperdescent result for K(2)-local K-theory and the formula for the K-theory of the K(1)-local sphere, this implies that L_T(2)K(L_K(1)𝕊) is not K(2)-local and hence is a counterexample to the height 2 telescope conjecture.
In particular, this implies that the map
K(L_K(1))⊗ V → K()^h_p^×⊗ V
considered by Ausoni–Rognes is not an equivalence in large degrees.
The ring _p that appears in our formula for (j_ζ) is a key ingredient in <cit.> to maintain asymptotic control over (j_ζ,k) as a cyclotomic spectrum, and is one of the advantages of j_ζ versus the usual connective image-of-J spectrum j = τ_≥0j_ζ. If one was only interested in understanding L_T(2)K(L_K(1)), there are isomorphisms
L_T(2)K(L_K(1)) ≅ L_T(2)(j) ≅ L_T(2)(j_ζ)
so one can in principle approach the telescopic homotopy via (j) instead of (j_ζ).
However, j is not as well behaved as j_ζ is, as we now explain. We extend our methods for computing THH(j_ζ) in <Ref> to compute the THH of j, giving a relatively simple proof of the result below, due to Angelini-Knoll and Höning <cit.>.
For p>3[We also compute an associated graded ring (j)/(p,v_1) for p=3 (see <Ref>), but are unable to solve multiplicative extension problems coming from the fact that j/(p,v_1) is not an associative algebra for p=3. Nonassociative multiplicative extensions aren't considered in <cit.>, so the results of that paper also only compute an associated graded ring for p=3.], the ring π_*(j)/(p,v_1) is the homology of the CDGA
𝔽_p[μ_2]⊗Λ[α_1,λ_2,a]⊗Γ[b], d(λ_2)=aα_1
|b| = 2p^2-2p , |a| = 2p^2-2p-1, |λ_2| = 2p^2-1, |μ_2| = 2p^2
For k≥ 1 and any p>2, we have an isomorphism of rings
π_*(τ_≥0(ℓ_p^hp^k))/(p,v_1) ≅π_*(ℓ_p)/(p,v_1)⊗_*(τ_≥0_p[v_1]^h/_p[v_1])/v_1.
The ring _*(τ_≥0_p[v_1]^h/_p[v_1])/v_1 is described in <Ref>: it is isomorphic to Γ[dα_1/p^k]⊗Λ__p[α_1/p^k] where α_1/p^k is a class in degree 2p-3 and dα_1/p^k is a divided power generator in degree 2p-2.
In the above theorem, π_*(j)/(p,v_1) is not what one would expect in the case of the trivial action: there are two more differentials in the spectral sequence for the filtration we use to prove <Ref> than what one would find for the trivial action. The differentials witness the fact that λ_1,λ_2 ∈π_*(ℓ_p)/(p,v_1) don't lift to (j)/(p,v_1). Whereas most computations of in this paper use Bökstedt's computation of (_p) as their fundamental input, these differentials ultimately come from the Adams–Novikov spectral sequence.
A key difference between the THH of j_ζ and that of j is that the ring C^0(ℤ_p;𝔽_p) that appeared in π_*THH(j_ζ)/(p,v_1) is replaced by a divided power algebra for j. The advantage of the ring C^0(ℤ_p;𝔽_p) over a divided power algebra is that, up to units, it consists entirely of idempotents, which decompose THH(j_ζ) as an S^1-equivariant spectrum into a continuous ℤ_p-indexed family of spectra. This decomposition is not evidently present in THH(j).
Another advantage of j_ζ over j is that j_ζ satisfies the Segal conjecture but j doesn't, which we show for p>2 in <Ref>:
For p>2, the cyclotomic Frobenius map
(j_ζ)/(p,v_1) →(j_ζ)^tC_p/(p,v_1)
has (2p-3)-coconnective fiber, but the fiber of the cyclotomic Frobenius map
(j)/(p,v_1) →(j)^tC_p/(p,v_1)
is not bounded above.
The Segal conjecture for a ring j is a necessary condition <cit.> for the Lichtenbaum–Quillen conjecture to hold, i.e for (j)⊗ V to be bounded above for any finite type 3 spectrum V. Thus <Ref> implies that j doesn't satisfy the Lichtenbaum–Quillen conjecture. On the other hand, <Ref> is a key ingredient in proving the Lichtenbaum–Quillen conjecture for j_ζ as carried out in <cit.>. This Lichtenbaum–Quillen conjecture can be viewed as the part of Ausoni–Rognes's conjecture that is true. Namely, it implies that the map
K(L_K(1))⊗ V → K(L_K(1))⊗ V[v_2^-1]
is an equivalence in large degrees for V a type 2-complex.
In <Ref>, we show how computations can give information about in the stable range. For a map of _1-rings f:R → S, the _1-cotangent complex L_S/R is the S-bimodule that is the fiber of the multiplication map S⊗_RS → S. We prove the following result:
Given a map of _1-ring spectra f:R → S, there is a natural map
(f) →(S;L_S/R).
If f is an n-connective map of (-1)-connective rings for n≥ 1, this natural map is (2n+1)-connective.
A consequence of <Ref> is that the natural map above can be identified with the linearization map in the sense of Goodwillie calculus for the functor (f): ()_R/→ when R is (-1)-connective.
In the case the map f is a trivial square zero extension of connective rings, a K-theory version of the result was obtained as <cit.>, and a version is essentially <cit.>[See also <cit.> and <cit.>.]. The point of <Ref> is to have a version of the result that works for arbitrary maps of _1-rings rather than trivial square-zero extensions, and for (-1)-connective rings instead of connective rings.
We use <Ref> to reprove basic facts about , such as the understanding of the map (_p) →(_p) on π_2p-1. This is an ingredient in the computation of (_p) as a spectrum (see <cit.>).
We also apply <Ref> to compute the fiber of the map (j_ζ) →(_p^h) in the stable range, giving information about K(L_K(1)):
For p>2, there are isomorphisms
τ_≤ 4p-6((j_ζ) →(_p^h)) ≅Σ^2p-2_p
and
K_*L_K(1)≅ K_*-1_p ⊕ K_*_p ⊕π_*Σ^2p-2_p/_p, *≤ 4p-6.
In particular, for p>2, the infinite family of classes in the fiber of (j_ζ) →(_p^h) found in <cit.> are simple p-torsion, and completely account for all the classes in the stable range.
§.§ Acknowledgements
We are very grateful to Robert Burklund, Sanath Devalapurkar, Jeremy Hahn, Mike Hopkins, Tomer Schlank, and Andy Senger for conversations related to this work. The second author is supported by the NSF Graduate Research Fellowship under Grant No. 1745302.
§.§ Notations and conventions
* The term category will refer to an ∞-category as developed by Joyal and Lurie.
* We refer the reader to <cit.> for basic facts about , which we freely use.
* (a, b) will denote the space of maps from
a to b (in some ambient category).
* Tensor products and THH are implicitly p-completed.
* We use Λ[x] and Γ[x] to denote exterior and divided power algebras in homotopy rings.
* In an _p-vector space, we use a ≐ b to mean that a = cb for some unit c ∈_p^×, and a b to mean that a is sent to b up to a unit in _p^×.
* Conventions about filtrations and spectral sequences are addressed in <Ref>.
* For a pro-finite set A, we use C^0(A;_p) to denote continuous functions from A to _p.
* Let 𝒟 be a monoidal category acting on a category 𝒞. Given objects X ∈𝒞, Z ∈𝒟 with a self map f:X⊗ Z→ X, we use X/f to denote the cofibre of this map. We use X/(f_1,…,f_n) to denote (…(X/f_1)/…)/f_n, where each f_i is a self map of X/(f_1,…,f_i-1).
§ FILTRATIONS
In this section, we set up notation for working with filtered objects and explain how to put filtrations on ℓ_p, _2, j_ζ, and j, as well as for finite extensions. Our constructions amount to the filtration coming from the homotopy fixed point spectral sequences computing those objects, which in all cases except for j_ζ, is also the Adams–Novikov filtration.
§.§ Filtered objects and spectral sequences
Let 𝒞 be a presentably symmetric monoidal stable category with accessible
t-structure compatible with the symmetric monoidal structure. Let (𝒞) = (_≤^op,𝒞) be the category of decreasingly filtered objects, and let (𝒞) = (,𝒞) be the category of graded objects, so that both are symmetric monoidal via Day convolution. Basic properties of these categories are developed in <cit.> and <cit.>. Given an object x ∈(𝒞) or (𝒞), we write x_i for the value at i ∈. The left adjoint of the functor (-)_i in the case of (𝒞) is the functor (-)^0,i, defined for c∈𝒞 by
(c^0,i)_j = c for j≤ i, and (c^0,i)_j = 0 for j>i.
We also use the notationthis notation isn't used consistently throughout the paper. c^k,n := Σ^kc^0,n+k, π_k,n^x := π_k^x_n+k, and π_k,nx:= π_kx_n+k = π_0(^k,n,x), and use c to also denote c^0,0. There is a filtration parameter τ∈π_-1,0^0,0 such that the map x_i → x_i-1 giving the filtration is obtained levelwise from tensoring with τ.
The functor (-)^0,0: 𝒞→(𝒞) is a symmetric monoidal fully faithful functor, which we refer to as the trivial filtration. We often identify an object c ∈𝒞 with the trivial filtered object in (𝒞).
In fact, (𝒞) can be identified with _τ((𝒞)), so that taking associated graded amounts to base changing to τ. Given an object x ∈(𝒞), we let x∈(𝒞) denote the associated graded object, so that ( x)_i = _ix = (x_i+1 x_i).
On the other hand, there is an identification (𝒞)[τ^-1] ≅𝒞, so that given a filtered object x ∈(𝒞), its underlying object ux ∈𝒞, given by _i x_i, is identified with x[τ^-1]. Under the assumption that the t-structure is compatible with filtered colimits, we have an isomorphism π_**^x[τ^-1] ≅π_*^ux⊗[τ^±1].
Given a filtered object x∈(𝒞), there is a spectral sequence which we refer to as the spectral sequence associated with x.
E_1^s,t = π^_t-s,s x = π^_t-s( x)_t ⟹ π^_t-s(ux)
The d_r-differential is a map from E_r^s,t to E_r^s+r+1,t+r, which is a page off from the usual Adams convention, i.e. our d_r differential would be the d_r+1 differential in the Adams convention. We shall say Adams weight and filtration degree to refer to the bidegrees s and t, respectively.
In addition to the spectral sequence associated with x, there is also the τ-Bockstein spectral sequence, which has signature
E_1^** = (π_**^ x)[τ] ⟹ π_**^x
We do not use the following lemma, but we state it as an exercise to help acquaint the unfamiliar reader with filtered objects. The τ-inverted τ-Bockstein spectral sequence refers to the spectral sequence obtained from the τ-Bockstein spectral sequence by inverting τ on each page.
Let x ∈(𝒞). For each r≥1, the E_r-page of the τ-inverted τ-Bockstein spectral sequence for x is isomorphic to [τ^±] tensored with the E_r-page of the spectral sequence associated with x. Moreover, the d_r differential on the former is given by τ^r times the d_r differential on the latter. The filtration on π^_**x[τ^±1] coming from the spectral sequence agrees with the filtration on π^_*x⊗[τ^±] coming from the filtration on x.
These statements can be checked for example by using explicit formulas for the pages and differentials. See, for example, <cit.>.
§.§ t-structures
We turn to studying t-structures on categories of filtered objects.
Our ability to produce t-structures comes from the following general result.
Let 𝒞 be a presentable stable category.
If {X_α} is a small collection of objects in 𝒞, then there is an accessible t-structure (𝒞_≥ 0, 𝒞_≤ 0) on 𝒞 such that 𝒞_≥ 0 is the smallest full subcategory of 𝒞 containing each X_α and closed under colimits and extensions. The full subcategory of coconnective objects is characterized by the condition that Y ∈𝒞_≤ 0 if and only if (Σ X_α,Y) = 0 for each X_α.
Let f:→ be a function. Define a t-structure ((𝒞)^f_≥0,(𝒞)^f_≤0) on the underlying category (𝒞) be the t-structure whose connective objects are generated by the objects Σ^f(i)c^0,i for c ∈𝒞_≥0. We let τ^f_≥ i and τ^f_≤ i denote the associated truncation functors. We similarly define a t-structure ((𝒞)^f_≥0, (𝒞)^f_≤0) by taking the image of those objects under the functor to be the generators.
Let x ∈(𝒞).
* x ∈(𝒞)^f_≤0 if and only if x_i is f(i)-coconnective in 𝒞 for each i.
* If f is nondecreasing, then x∈(𝒞)^f_≥0 iff x_i is f(i)-connective for each i. In this case, the truncation functor τ^f_≥ 0 is given by (τ^f_≥ 0x)_i = τ_≥ f(i)(x_i).
* The same results hold for ((𝒞)^f_≥0,(𝒞)^f_≤0).
We prove the result for (𝒞), as the result for (𝒞) is similar but easier. Coconnectivity can be checked by mapping in the generators of (𝒞)^f_≤0. Because of the adjunction defining the functor (-)^0,n, the condition for coconnectivity follows.
Now suppose f is nondecreasing. To prove the claims, It suffices to show that if x∈(𝒞) has x_i ∈𝒞_≥ f(i), then x admits no maps to a coconnected object. If y is a coconnected object, then x_i admits no maps to y_j for j ≤ i because y_j is f(j)-coconnected, and since f is nondecreasing, it is f(i)-coconnected. It follows that there are no nonzero maps of filtered objects x → y.
The t-structures (𝒞)^f, (𝒞)^f are compatible with the symmetric monoidal structure if f(0) = 0 and f(i) + f(j) ≥ f(i+j).
The condition f(0) = 0 guarantees that the unit is connective.
One needs to check that the tensor product of any pair of generators of (𝒞)^f_≥0 is still in (𝒞)^f_≥0. But the tensor product of Σ^f(i)c^0,i and Σ^f(j)d^0,j is Σ^f(i)+f(j)(c⊗ d)^0,i+j, which is in (𝒞)^f_≥0 because c⊗ d is in 𝒞_≥0 and so the assumption on f shows that this is connective.
The functor is right t-exact with respect to the t-structure corresponding to a nondecreasing function f, but not in general t-exact. In the following situation it preserves τ_≥0.
Suppose that c ∈(𝒞), f:→ is nondecreasing, π^_k,i-kc = 0 for f(i-1)≤ k < f(i), and π^_f(i)-1,i-f(i)+2 contains no simple τ-torsion. Then τ_≥0^f(c) ≅(τ_≥0^f(c)) and τ_≤0^f(c) ≅(τ_≤0^f(c)).
It suffices to prove the statement for τ_≥0 since is exact. There is a cofiber sequence c_i+1 c_i →_ic. By <Ref> we would like τ_≥ f(i+1)c_i+1→τ_≥ f(i)c_i →τ_≥ f(i)_ic to remain a cofiber sequence. From the exact sequence of homotopy groups, we see that we would like
τ_≥ f(i+1)c_i+1 = τ_≥ f(i)c_i+1 and π^_f(i)-1c_i+1→π^_f(i)-1c_i to be
injective. This is exactly the condition that π^_kc_i = π^_k,i-kc vanish when f(i-1)≤ k<f(i) and π^_f(i)-1c_i+1 = π^_f(i)-1,i-f(i)+2c has no simple τ-torsion.
Let f(i) = a i where a ≥ 0. This gives rise to the slope 1-a/a t-structure, whose truncation functors we denote τ^/a_≥0, τ^/a_≤ 0.
Let f(i) = 0 for i≤ 0 and f(i) = i/2 for i >0. This gives rise to the v t-structure, whose truncation functors we denote τ^v_≥0,τ^v_≤0.
The slope 1-a/a and v t-structures satisfy the conditions of <Ref> and <Ref>, so are compatible with the symmetric monoidal structure, and can be computed by truncating level-wise. The reason for the name slope is that in the Adams grading, the homotopy groups of objects in the heart of this t-structure lie along a line of slope 1-a/a.
The v t-structure is named so because the curve it describes is the vanishing curve on the homotopy groups of the -synthetic sphere at the prime 2.
We now specialize <Ref> to obtain two t-structures we use here.
Taking a = 0, we get the constant t-structure, whose connective cover functor τ^_≥0 just takes connective cover on each filtered piece.
Taking a = 1, we get the diagonal t-structure, whose connective cover functor τ^d_≥ 0 is given by taking the i^th-connective cover on the i^th filtered piece.
The functor (-)^:𝒞→(𝒞) is the symmetric monoidal functor given by the constant filtered object.
§.§ Filtrations on rings of interest
We now specialize to the case 𝒞 = with its standard symmetric monoidal structure. We begin by constructing j_ζ as a filtered ring. We use τ_≥*(-) to denote the composite functor τ_≥ 0^d((-)^). Indeed, τ_≥ i(-) is the i^th filtered piece of this functor.
We now use τ_≥*(-) to obtain a filtration on ℓ_p,j_ζ,_2, and j for p>2. We use R^ to denote these rings equipped with these filtrations, and R^ to denote the associated graded algebras.
Let ℤ_p^ be the ring of p-adic integers with the p-adic filtration. It is a filtered 𝔼_∞-ring since it is in the heart of the constant t-structure. Its associated graded ring is 𝔽_p[v_0], where v_0∈π_0,1ℤ_p^. We write v_0∈π_0,1ℤ_p^ for the class of filtration 1 detecting p∈ℤ_p, which projects to v_0 in the associated graded.
For p>2, consider ℓ_p, viewed as an _∞-ring equipped with the -action given by the Adams operation Ψ^1+p, and for p=2, consider it with the × C_2-action given by the Adams operations Ψ^3, Ψ^-1.
We now define most of our filtered _∞-rings of interest:
* ℓ_p^:= τ_≥*ℓ_p
* _2^:= τ_≥0^v((ℓ_2^)^hC_2)
* j_ζ,k^:= (ℓ_p^)^hp^k for p>2 and (_2^)^h for p=2
* ju_ζ,k^:= (ℓ_2^)^h2^k
* j_k^:=τ_≥0^(j_ζ,k^) for p>2.
In the case k=0, we just write j_ζ^, ju^, j^, and we remove to denote the underlying _∞-ring. For example, we write j_ζ,k = ℓ_p^hp^k.
The filtrations of <Ref> aren't as `fast' as they can possibly be. Namely, the spectra in the filtrations only change every multiple of 2p-2 filtrations. Speeding up the filtration doesn't affect very much related to the filtration in any case.
For p>2, it is also possible to use variants of the Adams filtration on the various rings of study, as in <cit.>, which would avoid the use of two filtrations. However this doesn't work as well at the prime 2, since the Adams filtration on is poorly suited to studying _2's .
The key properties of these filtrations that we use is that the associated graded algebras mod p are easy to describe.
The associated graded algebras of filtered rings defined in <Ref> are _∞--algebras.
The 0'th piece of every associated graded algebra is coconnective with π_0 = _p, so the unit map from ^0,0 factors canonically through , giving it a canonical _∞--algebra structure.
For p>2, there are isomorphisms of graded _∞-_p-algebras
ℓ_p^/p ≅_p[v_1]
j_ζ,k^/p ≅_p[v_1]⊗__p_p^h
and for p=2, there are isomorphisms of graded _∞-_2-algebras
j_ζ,k^/2 ≅ (_2^/2)⊗__2_2^h
ju_ζ,k^/2 ≅_2[v_1]⊗__2_2^h
_2^/2 ≅τ_≥ 0^v (_2^hC_2⊗__2_2[v_1]).
ℓ_p^ is the associated graded of the Postnikov filtration, which is _p[v_1], where the grading of v_1 is its topological degree, namely 2p-2. Reducing mod p, we get the claim about ℓ^/p. The -action on ℓ_p^ is the action of Ψ^1+p on the homotopy of ℓ_p. It is a ring automorphism sending v_1 to (1+p)^p-1v_1, which in particular is trivial modulo p. Since ℓ_p^ is a discrete object (it is in the heart of the diagonal t-structure), it follows that the action on ℓ_p^/p is trivial, giving the claimed identification of j_ζ,k^ for p>2 and ju_ζ,k^ for p=2.
For p=2, we first recall that the in the homotopy fixed point spectral sequence for _2≅_2^hC_2, all differentials are generated under the Leibniz rule by the differential d_3v_1^2 = η^3, where η is represented by the class in H^1(C_2;π_2_2). The spectral sequence for ℓ_2^hC_2 = _2^hC_2, displayed in <Ref>, embeds into this, after a page shift. Thus, we see that everything in π_**(^)^hC_2 above the line of slope 1 intercept zero is either in negative underlying homotopy or doesn't have τ-multiples on or below the line of slope 1 intercept 2. We learn that the bigraded homotopy ring of (ℓ^_2)^hC_2 is
_2[x,η,τ,b,v_1^4]/(b^2-4v_1^4,η^3τ^2,2η,2x,xητ^2, v_1^4x-η^4,η b),
where x represents v_1^-4η^4, and b represents 2v_1^2.
By applying <Ref>, we learn that the connective cover τ_≥ 0^v can be computed the level of associated graded, and that this even holds after taking the cofiber by 2. The C_2-action on _2^/2 is trivial, so indeed _2^/2 ≅(τ_≥ 0^v (_2^hC_2⊗__2_2[v_1])). For j_ζ,k^/2, we just observe that the residual -action is also trivial.
At the prime 2, it is possible to define j as a filtered _∞-ring, but we do not study this in this paper. One can define its underlying _∞-ring as the pullback
j ⟶ _2^h
↓             ↓
τ_≤2_2 ⟶ (τ_≤2_2)^h
and then consider the underlying filtered _∞-ring of ν_BP(j) where ν_BP is the synthetic analogue functor of <cit.>.
Finally, we show convergence properties of our applied to the filtrations we use. Given a filtered spectrum X∈(), the spectral sequence associated with X converges conditionally if and only if lim_i X_i = 0. This is equivalent to asking that X is τ-complete, where τ is in π_0,-1^0,0.
The following lemma shows completeness for with respect to all of the filtrations constructed in this section.
Suppose that R is a filtered ring such that the i-th filtered piece R_i is (-1+ci)-connective for every i and some fixed c>0. Then, the i-th filtered piece of (R) is also (-1+ci)-connective, so in particular the filtration on (R) is complete.
Note that R=(𝕊^0,0→ R) satisfies the same conditions of the statement. The filtration from the cyclic bar construction gives us an increasing filtration on (R) with k-th associated graded piece Σ^k R⊗R^⊗ k. The i-th filtered piece of Σ^k R⊗R^⊗ k is (-1+ci)-connective since it is a colimit of spectra of the form Σ^k R_j_0⊗R_j_1⊗⋯⊗R_j_k with j_0+⋯+j_k≥ i, which has connectivity of at least
k + ∑_s=0^k (-1 + cj_s) ≥ -1 + ci.
The other filtration we use is the p-adic filtration on _p, which we call _p^, whose associated graded algebra is _p[v_0]. We call ṽ_0 the element in π_0,1_p that is a lift of p to filtration 1, and projects to v_0 in the associated graded.
Let R be a (possibly graded) 𝔼_1-ℤ_p-algebra. Then, the filtration on the filtered ring
(R⊗_ℤ_pℤ_p^)/v_0.
is complete and its associated graded ring is concentrated in two filtration degrees t=0,1. Informally, the filtration is of the form
⋯→0→0→ I→(R)/p
for some (possibly graded) spectrum I. In particular, the associated spectral sequence collapses at the E_2-page.
By using the symmetric monoidality of and the fact that p=0 in _p^/ṽ_0, we obtain an equivalence
(R⊗_ℤ_pℤ_p^)/ṽ_0 ≅ ((R)/p) ⊗_((ℤ_p)/p)(ℤ_p^)/ṽ_0.
Since the conclusion of the statement is stable under base-change along trivially filtered rings, the statement reduces to the case R=ℤ_p.
For R=ℤ_p, the associated graded is (_p[v_0])/v_0, which has homotopy ring _p[σ^2p]⊗Λ[dv_0] (see <Ref>), which is indeed in filtrations ≤1. It remains to see that (_p^)/ṽ_0 = (_p^;_p) has a complete filtration. It suffices to show that (_p^;_p)⊗_(_p)_p ≅(_p^/_p;_p) has a complete filtration, since (_p) is built from _p via extensions and limits that are finite in each degree, and completeness of the filtration can be checked degreewise. The nth associated graded term of the cyclic bar construction computing this is
Σ^n (_p^)^⊗__p n⊗__p_p ≅Σ^n(_p^⊗__p_p)^⊗__p n
_p^⊗__p_p is complete since it is _p in each nonnegative degree, with transition maps 0, or in other words, it is a direct sum _p⊕⊕_1^∞Σ^0,i_p/τ. It follows that its tensor powers over _p are also sums of _p in each degree with transition maps 0 in positive filtration, so are complete. Since only finitely many terms in the cyclic bar complex contribute to each degree of , we learn that the is complete.
§ TOOLS FOR UNDERSTANDING
In this section, we explain some general tools which we use in understanding .
§.§ Suspension operation in THH
We begin by reviewing and proving some basic facts about the suspension maps, which are studied in <cit.>. Let R be an 𝔼_1-algebra in a presentably symmetric monoidal stable category 𝒞. By <cit.>, there are natural maps
σ:Σ(1_R) → R⊗ R
σ^2:Σ^2(1_R) →(R)
where 1_R is the unit map of R. Note that the first map is defined by the diagram
𝕊 ──────⟶ 0
│ 1_R            │
↓                 ↓
R ──(𝕀⊗1_R-1_R⊗𝕀)─⟶ R⊗ R
and that it factors through (μ)→ R⊗ R where μ:R⊗ R→ R is the multiplication map.
Let I be an object of 𝒞 with a map I→ R⊗ R and nullhomotopies of the composites
I → R⊗ R ──μ──⟶ R
I → R⊗ R ──μ∘T──⟶ R,
where T:R⊗ R→ R⊗ R is the exchange map. Then, we obtain a map
Σ I→(R)
by the commutative diagram
I ─────────────────⟶ 0
│     ↘                           │
│       R⊗ R ───μ──⟶ R        │
↓           │ μ∘T            ↓ 1⊗𝕀
0 ───⟶ R ─────𝕀⊗1────⟶ R⊗_{R⊗ R^op}R.
By the proof of <cit.>, if I=Σ(1_R) and the map I→ R⊗ R is given by (<ref>), then the induced map Σ I→(R) is the map (<ref>).
Let X be a spectrum. Given a class x∈π_∗(X⊗1) and a lift x∈π_∗(X⊗(1_R)) we shall write σ x∈π_∗+1(X⊗ R⊗ R) and σ^2 x∈π_∗+2(X⊗(R)) for the image of x under the maps (<ref>) and (<ref>). The notation is ambiguous since we need to choose a lift x, but these lifts will often be well-defined.
We shall write d for
π_∗(X⊗ R)→π_∗+1(X⊗(R))
induced by the map of spectra Σ R→Σ^2(1_R)→(R).
If R is homotopy commutative in addition to being an 𝔼_1-algebra, then we can set I=(μ) in (<ref>) and obtain a map
σ:Σ(μ)→(R),
which is functorial on R and the homotopy[The same construction is studied in <cit.>, but we believe that additional hypotheses are required to make sense of their argument. For example, R is only assumed to be an 𝔼_1-ring in their generality, but an assumption such as homotopy commutativity of R is needed to ensure that the composite
(μ)→ R⊗ RR
is nullhomotopic. In their notation, we would need to assume, for example, that there is a homotopy 1_k≃ 1_k^τ. This does not affect any other part of their work since they only use rings that have enough structures.] μ≃μ∘ T. Then, the map (<ref>) is the composite
Σ^2(1_R)→Σ(μ)→(R)
of (<ref>) and (<ref>) up to sign.
If X is a spectrum, given a class y∈π_∗(X⊗ R⊗ R) and a lift y∈π_∗(X⊗(μ)), we shall write σ y∈π_∗+1(X⊗(R)) for the image of y under the map (<ref>). Then, we have dx= σ((η_L-η_R)x) for x∈π_∗(X⊗ R), where η_L and η_R are the left and right units of R⊗ R, respectively.
Let X be a homotopy unital ring spectrum and R be an 𝔼_2-algebra in 𝒞. Then, d satisfies the Leibniz rule
d(xy) = d(x)y + (-1)^|x|xd(y)
for any x,y∈π_∗(X⊗ R).
By <cit.>, the map d can be identified with the map
S^1_+⊗ R →(R)
induced by the unit map R→(R) and the S^1-action on (R). Since the map R→(R) is a map of 𝔼_1-rings, the S^1-action on the target gives an S^1-family of ring maps, and so we obtain a map of 𝔼_1-rings
R→lim_S^1(R) = DS_+^1⊗(R) = (R) ⊕Σ^-1(R)
given by the sum of the identity map and d. Here, DS_+^1 is the Spanier-Whitehead dual of S^1 with the algebra structure given by the diagonal map of S^1.
The homotopy ring of DS_+^1 is given by
π_∗(DS_+^1) = (π_∗ S^0)[t]/(t^2)
with |t|=-1. Since (<ref>) is a ring map, taking the X-homology, we have
1⊗ xy + t⊗ d(xy)=(1⊗ x + t⊗ dx )(1⊗ y+t⊗ dy)
for x,y∈π_∗ (X⊗ R). Expanding it using t^2=0 gives us the desired Leibniz rule.
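For completeness, the expansion implicit in this last step is (a routine check, using t^2=0 and the Koszul sign rule for moving the degree -1 class t past x):
\[
(1\otimes x + t\otimes dx)(1\otimes y + t\otimes dy)
= 1\otimes xy \;+\; t\otimes\big(d(x)\,y\big) \;+\; (-1)^{|x|}\,t\otimes\big(x\,d(y)\big) \;+\; 0,
\]
and comparing with 1⊗ xy + t⊗ d(xy) gives d(xy) = d(x)y + (-1)^{|x|} x\,d(y).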
Our use of the symbol d recovers the use in the HKR theorem. Recall that a strict Picard element of a symmetric monoidal category 𝒞 is a map of spectra →(𝒞). Given such a strict Picard element, viewing it as a symmetric monoidal functor →𝒞, the colimit of the composite
ℕ→ℤ→𝒞
is an _∞-algebra in 𝒞 which we denote [x], where x is a class in the Picard graded homotopy in the degree of .
Let C be a presentably symmetric monoidal stable category with a strict Picard element .
Let [x] denote the polynomial algebra on a class x in degree . Then ([x]) is a free [x]-module on 1 and dx.
The universal example of such a C is graded spectra, where [x] is the graded polynomial algebra Σ^∞_+, so it suffices to prove it there. But now this follows from the Künneth spectral sequence computing π_*([x]) = π_*[x]⊗_[x_1,x_2][x], since dx is σ((η_L-η_R)(x)).
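For the reader's convenience, here is a minimal sketch of the Tor computation underlying that Künneth argument (over a general graded-commutative coefficient ring A, with gradings suppressed), using the Koszul resolution of A[x] as an A[x_1,x_2]-module:
\[
0\to A[x_1,x_2]\xrightarrow{\;x_1-x_2\;}A[x_1,x_2]\to A[x]\to 0
\;\;\rightsquigarrow\;\;
\Big(A[x]\xrightarrow{\;x_1-x_2\,=\,0\;}A[x]\Big)\otimes_{A[x_1,x_2]}A[x],
\]
\[
\mathrm{Tor}_0^{A[x_1,x_2]}(A[x],A[x])\cong A[x]\cdot 1,\qquad
\mathrm{Tor}_1^{A[x_1,x_2]}(A[x],A[x])\cong A[x]\cdot dx ,
\]
so the spectral sequence collapses and the THH of [x] is free on the classes 1 and dx, as stated.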
We now explain some basic computations involving the suspension map.
[Bökstedt periodicity]
The fundamental computation of Bökstedt states that the ring π_∗(𝔽_p) is isomorphic to 𝔽_p[σ^2p].
Let R∈() be a filtered _1-ring and X∈ a spectrum. Let y∈π_k,r-k(R⊗ X), x∈π_k X be classes such that τ^r y =x∈π_k,-k(R⊗ X).
Then there is a choice of nullhomotopy of x in ( R)⊗ X such that in the spectral sequence for (R)⊗ X, the corresponding element σ^2x on the E_1-page survives to the E_r-page and has d_r-differential d_r(σ^2 x)=± d y.
A choice of homotopy τ^r y ∼ x in R⊗ X becomes in (^0,0→ R)⊗ X a choice of nullhomotopy of the image of τ^r y, which corresponds to a map Σ^|y|(τ^r) →(^0,0→ R)⊗ X. This map of filtered spectra gives a map of the associated spectral sequences, and in the spectral sequence for (τ^r), there is a d_r-differential between the two spheres on the associated graded.
We claim the image of the two shifts of τ in the map
Σ^|y|((τ) ⊕Σ^1,-(r+1) (τ)) ≅Σ^|y|(τ^r)⊗(τ) →(^0,0→ R)⊗ X⊗(τ)
correspond to the image of y and the suspension of a nullhomotopy of x under the map ^0,0→ R.
The claim that the first τ is sent to y is clear by construction, and the claim that the second τ is sent to the suspension of a nullhomotopy of x follows since on associated graded our original homotopy τ^ry∼ x becomes a nullhomotopy of x.
It then follows that there is a d_r differential between these two classes.
Composing with the filtered map
Σ(^0,0→ R)⊗ X≅Σ^2 (^0,0→ R)⊗ X (R)⊗ X
of <Ref>, y gets sent to dy and the nullhomotopy of x gets sent to σ^2x (up to a possible sign), giving the desired differential in the spectral sequence for (R)⊗ X.
Therefore, it is enough to prove that the connecting map sends x to y, and since the map π_∗(Z⊗ X_1)→π_∗(Z⊗ X_0) is injective, it is enough to prove that x is sent to η_∗(x) by the composite F→ X_1→ X_0. This composite is homotopic to F→𝕊→ X_0 since the connecting map F→ X_1 is given by the nullhomotopy
§.§ THH in the stable range
Throughout this subsection, let S be a connective _∞-algebra and R be a connective 𝔼_1-S-algebra.
In this section, we show that in the situation that the unit map S → R is highly connective, (R/S) in low degrees becomes relatively straightforward to understand. This is used later in <Ref> to understand (j). Let Δ_n denote the subcategory of Δ consisting of ordinals of size ≤ n.
If the unit map S→ R is i-connective, then the natural map
colim_Δ^op_n R^⊗_S*+1 → colim_Δ^op R^⊗_S*+1 ≅ (R/S)
is (n+1)(i+2)-1-connective.
Let R=(S→ R) be the cofiber of the unit map. The m^th term of the associated graded of the filtration coming from the cyclic bar construction is Σ^m R⊗_S R^⊗_Sm, which is m(i+2)-connective because R is connective and R is (i+1)-connective. It follows that the cofiber of the map in question has an increasing filtration whose associated graded pieces are m(i+2)-connective for m >n. This implies the result.
The above lemma gives a simple description of in low degrees.
If the unit map S → R is i-connective, then the map
Σ^2(1_R) ⊕ R ⟶ (R/S)
is (2i+2)-connective, where σ^2 is defined as in (<ref>).
Consider the case n=1 in Lemma <ref>. Then, we have an equivalence
colim_Δ^op_1 R^⊗_S∗+1 ≃ colim( R ←─μ── R⊗_S R ──μ∘T─⟶ R )
(see <cit.>), where T is the exchange map, and this colimit maps into (R/S) by a (2i+3)-connective map.
Therefore, it is enough to prove that the map
colim( R ←─proj_2── Σ(1_R)⊕ R ──proj_2─⟶ R )
→ colim( R ←─μ── R⊗_S R ──μ∘T─⟶ R )
is (2i+2)-connective, where the map Σ(1_R)⊕ R → R⊗_SR is σ⊕(1_R⊗id) and the two maps R→ R are the identities. The fiber of this map is
Σ(Σ(1_R)⊕ R R⊗_SR)
which is (2i+2)-connective by the next lemma.
If the unit map 1_R:S→ R is i-connective, then the map
Σ(1_R)⊕ R R⊗_SR
is (2i+1)-connective.
This is equivalent to asking that the total cofiber of the following diagram
S⊗_S S ⟶ S⊗_S R
↓                  ↓
R⊗_S S ⟶ R⊗_S R
is (2i+2)-connective. This follows from the assumption since the total cofiber is Σ^2 (1_R)⊗_S(1_R), which is (2i+2)-connective since (1_R) is i-connective.
The group π_2p-1(ℤ_p) is isomorphic to ℤ/p and is generated by σ^2α_1.
Since 𝕊_p→ℤ_p is (2p-3)-connective, the result follows from <Ref>, which implies that σ^2 induces an isomorphism
ℤ/p=π_2p-3(𝕊_p→ℤ_p)≃π_2p-1(ℤ_p).
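Unwinding the connectivity count in this argument (a sketch; here fib(1_R) denotes the fiber of the unit map, as in the lemmas above, and we use that the first p-torsion in the p-complete sphere is α_1 in degree 2p-3):
\[
i=2p-3\ \Longrightarrow\ \mathbb{Z}_p\oplus\Sigma^2\,\mathrm{fib}(1_{\mathbb{Z}_p})\longrightarrow \mathrm{THH}(\mathbb{Z}_p)\ \text{is }(2i+2)=(4p-4)\text{-connective},
\]
\[
2p-1<4p-4\ \Longrightarrow\ \pi_{2p-1}\mathrm{THH}(\mathbb{Z}_p)\cong
\pi_{2p-1}\mathbb{Z}_p\oplus\pi_{2p-3}\mathrm{fib}(1_{\mathbb{Z}_p})\cong 0\oplus\mathbb{Z}/p\cdot\alpha_1 ,
\]
with the generator hitting σ^2α_1.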
For p>2, the map
j⊕Σ^2(𝕊_p→ j) ⟶ (j)
is (4p^2-4p-2)-connective.
For p>2, 𝕊_p → j is (2p^2-2p-2)-connective. This is because the first element of the fiber is β_1 (see for example <cit.>) which is in that degree.
§ THE OF J_Ζ
In this section, we compute (j_ζ)/(p,v_1) using the filtration constructed in <Ref>. Let us first assume that p is an odd prime. We shall discuss the case p=2 later in the section.
§.§ THH of ℤ_p and ℓ_p
Before computing the of j_ζ, we shall compute the of ℤ_p modulo p and the of ℓ_p modulo (p,v_1) in this section, as a warm-up. They will be computed using the spectral sequences associated with (ℤ_p^) and (ℓ_p^). Later, we show that the computation of the spectral sequence for (j_ζ^) looks the same. We note that the computations for _p and ℓ_p are well-known (see for example <cit.>).
Let k be a discrete ring and let R be a ℤ^m-graded 𝔼_2-k-algebra such that the homotopy groups of R form a polynomial algebra
π_∗ R = k[x_1,…,x_n]
on even degree generators x_1,…,x_n. Then, there is an equivalence of ℤ^m-graded 𝔼_1-(k)-algebras
(R)≃(k)⊗_k (k[x_1,…,x_n]/k).
Let 𝕊[x_1,…,x_n] be the ℤ^m-graded 𝔼_2-ring spectrum of <cit.>. Then, by <cit.>, there is an equivalence of ℤ^m-graded 𝔼_2-k-algebras
R≃ k⊗𝕊[x_1,…,x_n].
Therefore, since is a symmetric monoidal functor ()→, there is an equivalence of ℤ^m-graded 𝔼_1-k-algebras
(R) ≃(k) ⊗(𝕊[x_1,…,x_n]),
and the statement follows by base changing the second tensor factor on the right hand side along 𝕊→ k.
Consider the filtered spectrum (_p^)/ṽ_0. Its associated graded spectrum is (_p[v_0])/v_0 and its underlying spectrum is (_p)/p. The E_1-page of the associated spectral sequence is 𝔽_p[σ^2p]⊗Λ[dv_0] by <Ref>. Note that σ^2 p and dv_0 are in filtrations 0 and 1, respectively.
By <Ref>, we have a differential d_1(σ^2p)≐ dv_0 in the spectral sequence associated with the filtered ring (ℤ_p^). Then, mapping to (ℤ_p^)/v_0 and using the Leibniz rule, we can determine all differentials, and the E_2-page is isomorphic to 𝔽_p[(σ^2p)^p]⊗Λ[(σ^2p)^p-1dv_0]. There are no differentials in later pages by <Ref>.
Therefore, the homotopy ring π_∗(ℤ_p)/p is isomorphic to 𝔽_p[μ]⊗Λ[λ_1] with |μ|=2p and |λ_1|=2p-1. By <cit.>, μ can be identified with σ^2 v_1[v_1 is not well defined at the prime 2, but still exists: it is just not a self map of (2). It is generally defined as any element of π_2p-2/p whose -Hurewicz image is v_1.], where v_1 ∈π_2p-2_p and λ_1 can be identified with σ t_1, in the sense of <Ref>, where t_1∈π_∗(ℤ⊗ℤ) is the image of t_1∈π_∗(⊗) under the map →ℤ. By <Ref>, we have λ_1 ≐σ^2α_1[Alternatively, if one knows that the p-Bockstein on μ is ≐λ_1, one learns that σ^2α≐λ_1 from the fact that the p-Bockstein on v_1 is α_1 and the fact that σ^2 is compatible with the p-Bockstein (since it comes from a map of spectra).].
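Spelling out the Leibniz-rule bookkeeping in this example (a summary of the differentials just described):
\[
d_1\big((\sigma^2 p)^n\big)\;\doteq\; n\,(\sigma^2 p)^{n-1}\,dv_0 ,
\]
so the classes (σ^2 p)^n with p∤n cancel in pairs against (σ^2 p)^{n-1}dv_0, and
\[
E_2=E_\infty=\mathbb{F}_p\big[(\sigma^2 p)^p\big]\otimes\Lambda\big[(\sigma^2 p)^{p-1}dv_0\big],
\qquad \mu\leftrightarrow(\sigma^2 p)^p\ (|\mu|=2p),\quad \lambda_1\leftrightarrow(\sigma^2 p)^{p-1}dv_0\ (|\lambda_1|=2p-1).
\]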
Consider the filtered spectrum (ℓ_p^)/(p,v_1), where v_1∈π_∗ℓ_p is the class of filtration (2p-2). Its associated graded spectrum is (ℤ[v_1])/(p,v_1) and its underlying spectrum is (ℓ_p)/(p,v_1). By <Ref>, the E_1-page of the associated spectral sequence is 𝔽_p[σ^2v_1]⊗Λ[λ_1,dv_1]. Note that, for degree reasons, the first and last page on which a differential can occur is the E_2p-2-page.
Applying Lemma <ref>, there is a differential d_2p-2σ^2 v_1 ≐ dv_1 in the spectral sequence associated with the filtered spectrum (ℓ_p^)/p. Mapping to (ℓ_p^)/(p,v_1) and using the Leibniz rule, we can determine the d_2p-2-differentials on powers of σ^2 v_1. The class λ_1 is a permanent cycle for degree reasons. Therefore, the E_2p-1-page is isomorphic to 𝔽_p[(σ^2 v_1)^p]⊗Λ[λ_1, (σ^2v_1)^p-1dv_1]. The classes (σ^2v_1)^p,(σ^2v_1)^p-1dv_1 are permanent cycles for degree reasons, so the spectral sequence degenerates at the E_2p-1-page.
We let λ_2 denote a class detecting (σ^2v_1)^p-1dv_1, and μ denote a class detecting (σ^2v_1)^p.
To check that there are no multiplicative extensions, we need to check λ_1^2=λ_2^2=0, which follows for degree reasons. The homotopy ring π_∗(ℓ_p)/(p,v_1) is thus isomorphic to 𝔽_p[μ_2]⊗Λ[λ_1,λ_2], where λ_1 and λ_2 can be identified with σ t_1 and σ t_2 as in the case of (ℤ_p)/p. For p>2, μ_2 can be identified with σ^2v_2.
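Analogously to the previous example, the differential and the surviving classes here are (a summary; the degrees follow from |σ^2v_1|=2p and |dv_1|=2p-1):
\[
d_{2p-2}\big((\sigma^2 v_1)^n\big)\;\doteq\; n\,(\sigma^2 v_1)^{n-1}\,dv_1,\qquad
E_{2p-1}=E_\infty=\mathbb{F}_p\big[(\sigma^2 v_1)^p\big]\otimes\Lambda\big[\lambda_1,(\sigma^2 v_1)^{p-1}dv_1\big],
\]
\[
|\lambda_1|=2p-1,\qquad |\lambda_2|=\big|(\sigma^2 v_1)^{p-1}dv_1\big|=2p^2-1,\qquad |\mu_2|=\big|(\sigma^2 v_1)^{p}\big|=2p^2 .
\]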
§.§ The associated graded
We further filter the associated graded ring j_ζ^ by the p-adic filtration to ultimately reduce the computation to our understanding of THH(𝔽_p). In running the spectral sequences to obtain the THH mod (p,v_1), we find that they are close enough to the spectral sequences of (ℓ_p^triv)^hℤ, the fixed points of ℓ_p with the trivial ℤ-action.
We define the p-adic filtration on j_ζ^ to be j_ζ^⊗_ℤ_pℤ_p^. This is an 𝔼_∞-ℤ-algebra object in the category of filtered graded spectra.
By taking the associated graded, we obtain j_ζ^⊗_ℤ_p𝔽_p[v_0], which is an 𝔼_∞-ℤ-algebra object in the category of bigraded spectra. We shall write hfp grading for the grading on j_ζ^ if we need to distinguish it from the p-adic grading on 𝔽_p[v_0]. For example, in j_ζ^⊗_ℤ_p𝔽_p[v_0], v_1 has hfp degree 2p-2 and p-adic degree 0, and v_0 has hfp degree 0 and p-adic degree 1.
For p>2, there is an isomorphism of bigraded 𝔼_1-THH(𝔽_p)-algebras
THH(j_ζ^⊗_ℤ_p𝔽_p[v_0]) ≅ THH(𝔽_p)⊗_𝔽_p HH(𝔽_p[v_0,v_1]/𝔽_p)⊗_𝔽_p HH(𝔽_p^hℤ/𝔽_p)
First note that j_ζ^⊗_ℤ_p𝔽_p[v_0] ≅ j_ζ^/p⊗_𝔽_p𝔽_p[v_0], which by <Ref> is equivalent to 𝔽_p[v_1,v_0]⊗_𝔽_p𝔽_p^hℤ. Then, the statement follows from <Ref>.
We next study the behavior of fixed points by trivial ℤ-actions on THH. We use the spherical Witt vectors adjunction <cit.> <cit.> between perfect 𝔽_p-algebras and p-complete 𝔼_∞-rings. For a perfect 𝔽_p-algebra A, the spherical Witt vector ring 𝕎(A) is an 𝔼_∞-ring that is (p-completely) flat under 𝕊_p, and whose 𝔽_p-homology is A. The right adjoint is π_0^♭, which is defined to be the inverse limit perfection of the 𝔽_p-algebra π_0(R)/p.
There is an equivalence of 𝔼_∞-𝕊_p^hℤ-algebras THH(𝕊_p^hℤ) ≅ 𝕊_p^hℤ⊗𝕎(C^0(ℤ_p;𝔽_p)). The restriction map 𝕊_p^hℤ→𝕊_p^hpℤ on π_0^♭ is the map C^0(ℤ_p;𝔽_p) → C^0(pℤ_p;𝔽_p) that restricts a function to pℤ_p.
There is a natural map ^h_p⊗(π_0^♭((_p^h)) →(_p^h), and so for the first claim it suffices to show that this is an equivalence and that π_0^♭((_p^h)) ≅ C^0(_p;_p). Both of these can be checked after base change to _p. Note that (_p^h)_p⊗_p≅(_p^h/_p).
Since _p^h = _n _p^B/p^n and B/p^n is p-finite, we have, by <cit.>,
(_p^B/p^n/𝔽_p)≅_p^B/p^n⊗__p^(B/p^n)^2_p^B/p^n≅_p^B/p^n×_(B/p^n)^2B/p^n.
We have equivalences of spaces natural in n
B/p^n×_(B/p^n)^2B/p^n≅LB/p^n = Bℤ/p^n×ℤ/p^n.
where L denotes the free loop space.
Then, via the Künneth isomorphism and taking the colimit over n, we get
(_p^h/_p) ≅_p^h⊗_n_p^/p^n.
Since _n_p^/p^n is _p, so we obtain the desired equivalence.
To see the claim about π_0^♭, we note the natural map _p^h→_p^hp is the colimit of _p^hB/p^n→_p^hB/p^n-1, where the map is given by the inclusion /p^n-1→/p^n. At the level of the π_0, LB/p^n-1→ LB/p^n is also the inclusion /p^n-1→/p^n, so induces the restriction map at the level of -. Taking the colimit over n gives the claim.
<Ref> can be interpreted as saying that the failure of p-adic to commute with taking -homotopy fixed points in the universal case is measured by π_0^♭. In particular, the map (_p^h) (_p)^h on π_0^♭ is the map _p_p evaluating at 0, and the comparison map is base changed along (π_0^♭f).
Let R be a p-complete _∞-ring with trivial -action. Then there is an equivalence of _∞-R-algebras (R^h)≅(R)^h⊗(_p).
Combining <Ref> with <Ref> and the HKR isomorphism, we get the following.
For p>2, we have an isomorphism of rings
π_*THH(j_ζ^⊗_ℤ_p𝔽_p[v_0]) ≅𝔽_p[σ^2p,v_0,v_1]⊗Λ[dv_0,dv_1,ζ]⊗ C^0(ℤ_p;𝔽_p).
§.§ Spectral sequences
Let us first run the spectral sequence for the p-adic filtration.
For p>2, we have an isomorphism of rings
π_*THH(j_ζ^)/p ≅π_*THH(ℤ_p)/p⊗𝔽_p[v_1]⊗Λ[dv_1,ζ]⊗ C^0(ℤ_p;𝔽_p).
As in Example <ref>, the spectral sequence associated with (j_ζ^⊗_p^)/ṽ_̃0̃ has E_1-page isomorphic to π_*(j_ζ^⊗__p_p[v_0])/v_0 ≅_p[σ^2p,v_1]⊗Λ[dv_0,dv_1]⊗ H^*_(S^1×_p;_p) and converges to π_∗(j_ζ^)/p.
Because there is a map of filtered rings j_ζ^⊗__p_p^→(j_ζ^⊗__p_p^), we see that the classes v_1, ζ are permanent cycles. The class dv_1 is a permanent cycle since it detects the suspension dv_1 of v_1∈π_∗ j_ζ^/p. The elements of C^0(_p;_p) are permanent cycles since there are no elements of negative topological degree and positive filtration.
From the map of filtered rings
THH(ℤ_p^)→THH(j_ζ^⊗_ℤ_pℤ_p^),
there is a d_1-differential σ^2p↦σ v_0 by Example <ref>, and (σ^2p)^p and (σ^2p)^p-1dv_0 are permanent cycles detecting images of classes in THH(ℤ_p). It follows that after the d_1-differential, the E_2-page is 𝔽_p[(σ^2p)^p,v_1]⊗Λ[(σ^2p)^p-1dv_0,dv_1,ζ]⊗ C^0(ℤ_p;𝔽_p), so the spectral sequence collapses at the E_2-page. There are no multiplicative extensions since every class comes from either j_ζ^, THH(ℤ_p), or THH(𝕊_p^hℤ).
Our next goal is to compute, mod (p,v_1), the spectral sequence THH(j_ζ^) ⇒ THH(j_ζ).
Before doing so, we run the analogous spectral sequence for computing (ℓ_p)/(p,v_1), as a warm up. We consider the _∞-ring _ζ=_p^h with the trivial filtration.
For p>2, π_*THH(j_ζ)/(p,v_1) ≅𝔽_p[σ^2v_2]⊗Λ[λ_1,λ_2,ζ]⊗ C^0(ℤ_p;𝔽_p) with |λ_i| = 2p^i-1 and |σ^2v_2| = 2p^2.
As in Example <ref>, we consider the spectral sequence associated with the filtered spectrum (j_ζ^)/(p,ṽ_̃1̃). The analogous spectral sequence in the case p=2 is displayed in <Ref> above. The underlying spectrum is (j_ζ)/(p,v_1) and the associated graded spectrum is (j_ζ^)/(p,v_1). By <Ref>, the E_1-page is isomorphic to 𝔽_p[σ^2 v_1]⊗Λ[λ_1, dv_1,ζ]⊗ C^0(ℤ_p;𝔽_p).
The classes in C^0(ℤ_p;𝔽_p) are permanent cycles by the Leibniz rule, since they are all their own p^th-power. The class ζ∈ H^1(S^1;_p) is a permanent cycle because it detects a class in the image of j_ζ→(j_ζ).
By <Ref>, there is a differential d_2p-2(σ^2 v_1)≐ dv_1, and the Leibniz rule determines the differentials on powers of σ^2v_1.
Similarly, by Lemma <ref>, there must be a d_2p-2 differential λ_1≐σ^2 α_1 dα_1 in the spectral sequence (j_ζ^)(j_ζ) mod p. By Lemma <ref>, we have
dα_1 = d(v_1ζ) = v_1dζ - ζ dv_1,
so that we have the differential d_2p-2(λ_1) ≐ζ dv_1 mod (p,v_1). By using the previous paragraph and replacing λ_1 with
λ_1'=λ_1 - ϵζμ
for some ϵ∈𝔽_p^×, we may assume that d_2p-2(λ_1')=0.
This completely determines the spectral sequence up to the E_2p-2-page, and we learn the E_2p-1-page is isomorphic to 𝔽_p[(σ^2v_1)^p]⊗Λ[λ_1',(σ^2v_1)^p-1dv_1,ζ]⊗ C^0(Z_p;𝔽_p). There are no more differentials since there is no class outside filtration degree 0 and 2p-2.
There are no multiplicative extension problems since the multiplicative generators in nonzero degree are free generators as a graded ring.
Finally, let us show that the polynomial generator μ_2 is the class σ^2 v_2. Let us consider the map j_ζ^→_ζ induced by applying (τ_≥*(-))^h to the -equivariant truncation map ℓ_p →_p. This induces a map of spectral sequences for . Since _ζ has the trivial filtration, its does too, so has no differentials in its associated spectral sequence. By <Ref> and <Ref>, (_ζ)_p≅(_p)^h⊗(_p), so
π_*(_ζ)/(p,v_1) ≅_p[σ^2v_1,λ_1,λ]⊗_p.
v_2 ∈π_2p^2-2/(p,v_1) has a canonical nullhomotopy in j_ζ/(p,v_1) ≅_p^h and _ζ/(p,v_1) ≅_p[σ v_1]^h, so there is a canonical element σ^2v_2 in π_2p^2(j_ζ)/(p,v_1) and π_2p^2(_ζ)/(p,v_1), which we claim is detected in the spectral sequence for (j_ζ^) by (σ^2v_1)^p. To see this, it suffices to show this in (_ζ) because the map is injective in degree 2p^2-2. But now it is the image of σ^2v_2 from the map ℓ_p^→_ζ, and in ℓ_p^, which we know by <Ref> is detected by (σ^2v_1)^p.
In the proof of the previous theorem, a reader might wonder why λ_1 supports a differential while σ^2α_1 is still well-defined in (j_ζ). This can be explained by the fact that σ^2α_1 is not well-defined in (ℤ_ζ)/(p,v_1) since
π_2p-3((𝕊→ℤ_ζ)/(p,v_1)) →π_2p-3(𝕊/(p,v_1))
is not injective. The class σ^2α_1 is well-defined in (ℤ)/(p,v_1) and (j_ζ)/(p,v_1), but their images in (ℤ_ζ)/(p,v_1) are different. The class λ_1 in the E_1-page represents the former and λ_1' represents the latter.
We can carry out the same computation for (ℓ_p)^hℤ/(p,v_1) using the same filtrations ℓ_p^ and ℓ_p^⊗_p^. Then, we obtain an isomorphism of rings
π_∗(ℓ_p)^hℤ/(p,v_1) ≃𝔽_p[σ^2 v_2] ⊗Λ[λ_1,λ_2,ζ].
Furthermore, by keeping track of the map
(j_ζ)/(p,v_1)→(ℓ_p)^hℤ/(p,v_1)
at every stage, we see that on homotopy groups, this map is the base-change along
C^0(ℤ_p;𝔽_p)→𝔽_p
that evaluates a function at 0∈ℤ_p.
§.§ The prime 2
We next turn to the prime 2. We first need to run the analogous analysis as in <Ref> for _2. We consider _2^/2[v_0] as the bigraded ring given as the associated graded of _2^⊗__2_2^. To understand this, we need the following lemma.
There is an isomorphism of bigraded rings
π_*(_2^/2[v_0])/η≅_2[v_0,v_1,σ^22,dη]/((dη)^2+v_1dη) ⊗Λ[dv_0,dv_1]
The associated graded of _2^/2[v_0] with respect to the Postnikov filtration is _2[v_0,v_1,η].
By symmetric monoidality of , we have an equivalence
(_2[v_0,v_1,η])≅(_2[v_0,v_1])⊗_(_2)(_2[η])
Since the argument of <Ref> works at the prime 2, we learn that the first tensor factor has homotopy ring _2[σ^2p,v_0,v_1]⊗Λ[dv_0,dv_1].
For the second tensor factor, we note that (_2[η])⊗_(_2)_2 ≅(_2[η]/_2), whose homotopy ring is _2[η]⊗Λ[dη]. Since the map (_2) →_2 is the cofiber of σ^2p, we can run a σ^2p-Bockstein spectral sequence to recover (_2[η]). In the spectral sequence, η,dη are permanent cycles since they are in the image of the unit map and the map d. We also see that there are no multiplicative extensions mod η for degree reasons, i.e. we have
π_∗(𝔽_2[η])/η = Λ(dη)⊗_𝔽_2𝔽_2[σ^22].
In the spectral sequence computing (_2^/2[v_0]) from this, everything is a permanent cycle since all classes are generated either from the image of the unit map, the map from (_2), or the map d.
Now we turn to the multiplicative extensions, which we compute by mapping to the σ^22-completion of (_2^hC_2[v_0,v_1]). As before, we can compute this via the σ^22-Bockstein spectral sequence whose E_1-page is (_2^hC_2[v_0,v_1]/_2)[σ^22].
We have an isomorphism (_2^hC_2[v_0,v_1]/_2) ≅(_2^hC_2/_2)⊗__2(_2[v_0,v_1]).
Moreover, _*(_2[v_0,v_1]) ≅_2[v_0,v_1]⊗Λ[dv_0,dv_1], and (_2^hC_2) is _2^hC_2×_2^hC_2, since the free loop space of BC_2 is BC_2× C_2. If h is the generator of π_-1_2^hC_2, then a nontrivial idempotent in π_0(_2^hC_2) is given by dh. By the Leibniz rule (<Ref>), dη = v_1dh+hdv_1, so (dη)^2 = v_1^2dh = v_1dη+η dv_1. This relation happens in (_2)/σ^22, but for degree reasons, this forces it to happen in (_2)/η as well.
To see that the classes dv_0 and dv_1 square to 0, we note that this is true in (_2[v_0,v_1]/_2), and that we have a map
(_2)⊗__2(_2[v_0,v_1]/_2) ≅(_2[v_0,v_1]) →(_2[v_0,v_1]^hC_2)
using the isomorphism of <Ref>.
There is an isomorphism of graded rings
π_*((_2^)/(2,η)) ≅_2[v_1,σ^2v_1,dη]/((dη)^2+v_1dη)⊗Λ[σ^2η,dv_1]
We now understand the spectral sequence computing π_*((_2^)/(2,η)) by running the 2-adic filtration spectral sequence on (_2⊗__2_2^)/(v_0,η). By <Ref>, there is a differential from σ^2v_0 to dv_0, σ^2η is a class squaring to zero detected by σ^2v_0dv_0, and σ^2v_1[The element v_1∈π_2/2 exists, even though it does not extend to a self map.] detects (σ^2v_0)^2. The remaining classes are either in the image of the unit map or the image of d, so are permanent cycles. The relation (dη)^2+v_1dη=0 occurs because it does on associated graded, and because there are no classes in topological degree 4 and positive p-adic filtration. The class dv_1 squares to zero since there are no classes of weight -2, topological degree 6, and positive p-adic filtration.
We now compute (_2)/(2,η,v_1), which was also computed in <cit.>.
We now can run the spectral sequence
(_2^)/(2,η,v_1) (_2)/(2,η,v_1)
, which is a spectral sequence associated with a filtered _∞-ring since _2 ≅_2^/(2,η,v_1), where η and v_1 are taken in filtration 2. This spectral sequence is displayed in <Ref>. The first page of this spectral sequence by <Ref> is _2[σ^2v_1]⊗Λ[dv_1,dη,σ^2η]. It follows as in <Ref> that there are differentials from σ^2η to dη and σ^2v_1 to dv_1. What remains after these differentials are _2[(σ^2v_1)^2]⊗Λ[σ^2v_1dv_1, σ^2η dη]. For degree reasons, there can be no further differentials. the classes in odd degree square to 0 because there are no classes in degrees 2 or 6 mod 8.
We now run the analogous analysis to compute (j_ζ)/(2,η,v_1).
There is an isomorphism of graded rings
π_*(j_ζ^)/(2,η,v_1) ≅π_*(_2^)/(2,η,v_1)⊗π_*((_2^h/_2))
Since _2^/2⊗__2_2^h≅ j_ζ^/2, we learn from <Ref> that
π_*((j_ζ^/2[v_0])/(v_0,η,v_1)≅_2[σ^2v_0]⊗Λ[dη,dv_0,dv_1]⊗_*(_2^h/_2)
where (_2^h/_2) is computed via <Ref> as _2^h⊗_p.
Exactly as in <Ref>, in the spectral sequence for the 2-adic filtration, there is a differential from σ^2v_0 to dv_0, σ^2η is a class squaring to zero detected by σ^2v_0dv_0, and σ^2v_1 is a class detecting (σ^2v_0)^2. The rest of the classes are permanent cycles because they are either in the unit map, come from d, or are permanent cycles by the Leibniz rule.
There is an isomorphism of rings for p=2
π_*(j_ζ)/(2,η,v_1) ≅_2[μ]⊗Λ[λ_2,x,ζ]⊗_p
where |x| = 5, |λ_2| = 7, |μ| = 8.
We run the spectral sequence (_2^)/(2,η,v_1) (_2)/(2,η,v_1).
As in <Ref>, there are differentials from σ^2η to dη and σ^2v_1 to dv_1. For degree reasons, (σ^2v_1)^2 is a permanent cycle, as are σ^2η dη, σ^2v_1dv_1, and ζ. _2 is a permanent cycle by the Leibniz rule. If we let λ_2 and x denote classes detecting σ^2v_1dv_1 and σ^2η dη respectively, then λ_2^2=0 and x^2=0 for degree reasons.
§ THE THH OF J
We now consider (j)/(p,v_1) for p>2. We first compute the Hochschild homology of the _p-algebra j^/p, which is isomorphic to τ_≥0(_p[v_1]^h) by <Ref>.
Let p>2. _*((j^/p)/_p) ≅_*(τ_≥0(_p[v_1]^h)/_p) is isomorphic as a ring to
Λ[dv_1,α_1]⊗𝔽_p[v_1,x_0,x_1,…]/(x_i^p = v_1^{p^{i+1}-p^i}x_i + v_1^{p^{i+1}-p^i-1}α_1 (∏_{j=0}^{i-1}x_j^{p-1})dv_1; i≥0)
where |x_i|=p^i(2p-2), and x_i is in grading p^i(2p-2).
Define a graded ring R = τ_≥ 0ℤ_p[v_1]^hℤ using a trivial ℤ-action so that R/p≃ j^/p. We shall show that π_∗(R/ℤ_p) is the ℤ_p-algebra generated by v_1,dv_1,α, and a set of generators x_0,x_1,… with |x_i| = p^i(2p-2) having relations
x_i^p = px_{i+1} + v_1^{p^{i+1}-p^i}x_i + v_1^{p^{i+1}-p^i-1}α (∏_{k=0}^{i-1}x_k^{p-1})dv_1.
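For instance (our own unwinding, recorded for orientation), in the bottom case i=0 the product over k is empty and the relation reads
x_0^p = px_1 + v_1^{p-1}x_0 + v_1^{p-2}α dv_1.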
Then, the statement follows by the base-change ℤ_p→𝔽_p.
Let R_ζ = ℤ_p[v_1]^hℤ defined using a trivial ℤ-action and let η:R→ R_ζ denote the connective cover map. To compute the Hochschild homology, we shall show that the map η_∗:π_∗(R)→π_∗(R_ζ) is injective and describe the image. Note that π_∗ R_ζ = ℤ_p[v_1,ζ] and π_∗ R = ℤ_p[v_1,α] where η_∗(α)= v_1ζ.
Let us consider the Künneth spectral sequence
E_2((R)) = ^π_∗(R⊗_ℤ R)(π_∗ R,π_∗ R) π_∗(R).
Since π_∗ R =ℤ_p[v_1]⊗Λ[α], the E_2-page can be computed as
E_2((R)) = ℤ_p[v_1]⊗Λ[dv_1,α]⊗Γ[dα].
Similarly, there is a spectral sequence
E_2((R_ζ)) = ℤ_p[v_1]⊗Λ[dv_1,ζ]⊗Γ[dζ]π_∗(R_ζ)
up to p-completion.
We claim that E_2((R))→ E_2((R_ζ)) is injective. By Lemma <ref>, we have
dα↦ -ζ dv_1 + v_1dζ.
To prove the injectivity, it is enough to prove it after taking the associated graded group with respect to the (dv_1)-adic filtration. Then, we may assume that dα maps to v_1dζ, and since E_2((R_ζ)) is torsion-free, the divided power γ_n(dα) maps to v_1^nγ_n(dζ). Therefore, we have the desired injectivity. Note also that the map is injective mod p.
The spectral sequence (<ref>) degenerates at the E_2-page using the symmetric monoidality of , <Ref>, and <Ref>. We then see that (<ref>) also degenerates at the E_2-page and that η_∗:π_∗(R)→π_∗(R_ζ) is injective, even after mod p.
Let us describe the Künneth filtration on
π_∗(R_ζ) = ℤ_p[v_1]⊗Λ[dv_1,ζ]⊗ W(C^0(ℤ_p;F_p))
in more detail. Here, the ring
W(C^0(ℤ_p;𝔽_p))=lim_kC^0(ℤ_p;ℤ_p/p^k)
is the ring of all continuous functions ℤ_p→ℤ_p. It can also be described, up to completion, as the algebra generated by y_0,y_1,… with relations
y_i^p = p y_{i+1}+y_i.
Here, the element y_0 is the identity function ℤ_p→ℤ_p and the y_i's for i>0 can be defined with the above formula since y^p≡ y mod p for any y∈ W(C^0(ℤ_p;𝔽_p)). In π_∗(R_ζ), the element y_0 equals dζ, and the y_i's represent the p^i-th divided power of dζ in the Künneth spectral sequence (<ref>).
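Concretely (our own illustration of this standard description): solving the relation for y_{i+1} gives y_{i+1} = (y_i^p - y_i)/p, so for example
y_1: ℤ_p → ℤ_p,  x ↦ (x^p - x)/p,
which indeed takes values in ℤ_p and is continuous, since x^p ≡ x mod p.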
To determine π_∗(R), we need to find the classes x_i's representing the divided powers γ_p^i(dα)∈ E_2((R)) up to a p-adic unit. The first divided power dα∈ E_2((R)) has a canonical lift x_0:=dα∈π_∗(R) and its image under η_∗ is v_1y_0 - ζ dv_1. Inductively, suppose that we have chosen x_0,…,x_i in a way that the image of x_j is
η_∗(x_j)= v_1^{p^j}y_j - v_1^{p^j-1}(∏_{k=0}^{j-1}y_k^{p-1})ζ dv_1
for 0≤ j≤ i. Let x_i+1 be any class representing γ_p^i+1(dα). Then, after scaling by a unit, we must have
x_i^p = px_i+1+c
for some class c∈π_∗(R) with Künneth filtration <p^i+1. Applying η_∗, we have
η_∗(c)≡η_∗(x_i)^p ≡ v_1^{p^{i+1}}y_i^p ≡ v_1^{p^{i+1}}y_i mod p.
Let d∈π_∗(R) be the class v_1^{p^{i+1}-p^i-1}(v_1x_i + α(∏_{k=0}^{i-1}x_k^{p-1})dv_1), having Künneth filtration p^i. Then, we can compute that η_∗(d) = v_1^{p^{i+1}}y_i, so that η_∗(c) ≡η_∗(d) mod p.
Since η_∗ is injective mod p, we have c≡ d p, so by replacing x_i+1 with x_i+1 - (c - d)/p, we can assume that c=d. Then, we have
η_∗(x_{i+1}) = p^{-1}η_∗(x_i^p - c)
= p^{-1}( v_1^{p^{i+1}}y_i^p - pv_1^{p^{i+1}-1}(y_i⋯ y_0)^{p-1}ζ dv_1 - v_1^{p^{i+1}}y_i)
= v_1^{p^{i+1}}y_{i+1} - v_1^{p^{i+1}-1}(y_i⋯ y_0)^{p-1}ζ dv_1.
The desired ring structure of π_∗(R) can now be read off from the ring structure on π_*(R_ζ).
There is an isomorphism of bigraded _1-(_p)-algebras for p>2
(j^⊗__p_p[v_0]) ≅(_p)⊗__p(_p[v_0]/_p)⊗__p(τ_≥0_p[v_1]^h/_p)
We run the strategy of <Ref> with appropriate modifications.
First, we have the isomorphism j^⊗__p_p[v_0] ≅ j^/p⊗__p_p[v_0], which by <Ref> is equivalent to τ_≥0_p[v_1,v_0]⊗__p_p^h. As an _2-ring, we claim this is equivalent to the tensor product of _p⊗[v_0] with the pullback of the cospan
[v_1]⊗^h [d]
[r] ^h
where the vertical map is the augmentation sending v_1 to 0.
This isomorphism is a consequence of the isomorphism of <Ref> and the pullback square
j^/p [r][d] j_ζ^/p [d]
[r] _p _p^h
Given this equivalence, we conclude by arguing exactly as in <Ref>.
Let p>2. Then
π_*(j^)/p ≅π_*(_p)/p⊗π_*(τ_≥0_p[v_1]^h/_p)
We follow the strategy in <Ref>, running the spectral sequence corresponding to the p-adic filtration
π_*(j^/p[v_0])/p π_*(j^)/p.
The E_1-page is understood via <Ref> to be
_p[σ^2p,v_0]⊗Λ[dv_0]⊗π_*(τ_≥0_p[v_1]^h/_p)
where the last tensor factor is described in <Ref>. There is a differential d_1σ^2p = dv_0, coming from the map from _p^→_p^⊗ j^ and <Ref>.
We need to show that the remaining classes are permanent cycles. The classes v_1,α are permanent cycles because they are in the image of the unit map, and dv_1 is a permanent cycle because it is in the image of the map σ^2. The classes x_i are permanent cycles for degree reasons, as everything of positive p-adic filtration is in nonnegative degree, and the differentials respect the hfp grading. One also sees for degree reasons and the map from (_p)/p that there are no multiplicative extension problems.
We now run the spectral sequence (j^)/(p,v_1) (j)/(p,v_1) associated with the filtered spectrum (j^)/(p,v_1) where v_1∈π_∗ j/p is the class of filtration 2p-2. The following lemma guarantees the multiplicativity of the spectral sequences.
j^/(p,v_1) admits a homotopy commutative 𝔸_p-1-multiplication for p>2, and in particular is homotopy associative for p>3.
By <cit.>, it follows that 𝕊/p is an 𝔸_p-1-algebra, and it is easy to see that there is no obstruction to its multiplication being homotopy commutative for p>2. We conclude by observing that j^/(p,v_1) ≅τ_≤ 2p-3j^⊗𝕊/p.
Note that by loc. cit., the multiplication is not 𝔸_p, the obstruction being α_1.
For p>3, π_*(j)/(p,v_1) is the homology of the CDGA
𝔽_p[μ_2]⊗Λ[α_1,λ_2,a]⊗Γ[b], d(λ_2)=aα_1
|b| = 2p^2-2p , |a| = 2p^2-2p-1, |λ_2| = 2p^2-1, |μ_2| = 2p^2
and for p=3, the above result is true after taking an associated graded ring.
The E_1-page of the spectral sequence
E_1 = π_∗(j^)/(p,v_1)π_∗(j)/(p,v_1)
is isomorphic to
𝔽_p[μ_1]⊗Λ[σ^2α_1,dv_1,α_1]⊗Γ[dα_1].
by <Ref>. By <Ref>, there are d_2p-2-differentials
σ^2α_1 dα_1
σ^2 v_1 dv_1.
The class α_1 is a permanent cycle since it must represent the image of α_1∈π_∗ j/(p,v_1) along the unit map, and the divided power classes (dα_1)^(k) are permanent cycles because they are in weight 0, and there are no classes of weight >1. Therefore, by the Leibniz rule, the E_2p-1-page is isomorphic to
𝔽_p[μ_2]⊗Λ[λ_2, a, α_1]⊗Γ[γ_p(dα_1)]
where μ_2,λ_2 and a represent (μ_1)^p,(σ^2 v_1)^p-1dv_1 and (σ^2 α_1)γ_p-1(dα_1), respectively.
For degree reasons, the only possible further nonzero differential is
d_p-1(λ_2) ≐α_1
To prove that this differential actually happens, it is enough to show that
π_2p^2-2(j)/(p,v_1)=0.
By <Ref>, there is a (4p^2-4p-2)-connective map j ⊕Σ^2(_p → j) →(j), so it suffices to show that
π_2p^2-2(j/(p,v_1)) = π_2p^2-2(Σ^2(1_j)/p,v_1)=0.
The former group is clearly 0. The latter is 0 from the computation of the Adams–Novikov E_2-page for /(p,v_1) in low degrees (see the discussion after <cit.> and Theorem 4.4.8 of op. cit.).
The last nontrivial differential of the spectral sequence is displayed for p=3 in <Ref>.
We now check for p≥ 5 that there are no multiplicative extension problems in our description of the commutative ring structure on π_*(j)/(p,v_1). If we choose γ_p^ib to be detected by (γ_p^i+1(dα_1)), the relations γ_p^i(b)^p=0 follow since there is nothing of higher filtration in that degree. Let μ_2 be any lift of (σ^2v_1)^p. The homology of the CDGA Λ__p[α_1,λ_2,a], d(λ_2) = aα_1 is 6-dimensional over _p, given by
{1,a,α_1,λ_2a,λ_2α_1,λ_2aα_1}
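To verify this count (our own check): the only monomial of Λ_𝔽_p[α_1,λ_2,a] supporting a nonzero differential is λ_2 itself, since d(λ_2α_1) = aα_1α_1 = 0 and d(λ_2a) = aα_1a = 0; hence
ker d = ⟨1, α_1, a, aα_1, λ_2α_1, λ_2a, λ_2aα_1⟩,  im d = ⟨aα_1⟩,
and the quotient is precisely the six classes displayed above.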
Let α_1,x, y,z denote lifts of the classes α_1,a,λ_2a,λ_2α_1 respectively (so that α_1y is a lift of λ_2aα_1). The relation α_1y=-xz holds because it is true on the associated graded and there is nothing of higher filtration in that degree. The classes α_1z,yz,xα_1 are 0 because there are no nonzero classes in degree (p+1)(2p-2),2p^2-1+2(2p-3),2(2p^2-1)+(2p-3)+ p(2p-2)+1 respectively. The only remaining relation, xy=0, occurs because it happens on the associated graded, and there is nothing of higher filtration.
For p=3, it is more complicated to figure out the multiplicative extensions, since the homotopy ring is not necessarily associative. Many of the multiplicative extensions can be ruled out using the Postnikov filtration on j/(3,v_1), but not all of them: for example this doesn't rule out the possible non-associative extension x(x μ_2^2) = zb^2 in degree 62.
§ THH OF FINITE EXTENSIONS
In this section, we shall make the analogous computations for the THH of j_ζ,k:=ℓ_p^hp^kℤ, ju_ζ,k, and also of j_k:=τ_≥0j_ζ,k for p>2, which are introduced as filtered rings in <Ref>. j_ζ,k is a ℤ/p^k-Galois extension of j_ζ in _p. The computations are very similar to the cases of j_ζ and j, so we shall only point out the differences from the proofs of those cases.
There is an isomorphism of rings for p>2
π_∗(j_ζ,k)/(p,v_1) ≃π_*((ℓ_p)/(p,v_1))⊗Λ[ζ]⊗_p
and for p=2
π_∗(j_ζ,k)/(2,η,v_1) ≃π_*((_2)/(2,η,v_1))⊗Λ[ζ]⊗_2
The maps (j_ζ,k)/(p,v_1) →(j_ζ,k+1)/(p,v_1)
on π_* are the identity on the (ℓ_p)/(p,v_1) component,
send ζ to 0, and are the restriction map _p→p_p≅_p.
The proof is exactly the same as in <Ref> and <Ref>. The only difference is that for k≥1, <Ref> doesn't apply: the class λ_1 in the spectral sequence (j_ζ^)/(p,v_1) (j_ζ)/(p,v_1) is a permanent cycle, which can be seen from the Leibniz rule. As noted in the remark, this doesn't affect the final answer.
The claim about the maps π_*(j_ζ,k)/(p,v_1) →(j_ζ,k+1)/(p,v_1) can be deduced at the level of associated graded of the filtrations. For example, by choosing elements λ_1,λ_2,σ^2v_2 in (j_ζ)/(p,v_1), one sees that their images in (j_ζ,k)/(p,v_1) are valid generators of the corresponding classes. To see what the transition maps do on Λ[ζ]⊗_p, we can use <Ref> since these classes are in the image of (_p^h). It then follows that map sends _p→p_p given by restriction of functions, and ζ goes to p ζ=0 because that is what happens on the level of mod p cohomology of the p-fold cover map S^1 → S^1.
We next explain the computation for ju_ζ,k, which is nearly identical to that of j_ζ,k
For each k≥0, there is an isomorphism of rings
π_∗(ju_ζ,k)/(2,v_1) ≃π_*((ℓ_2)/(2,v_1))⊗Λ[ζ]⊗_2
The maps (ju_ζ,k)/(p,v_1) →(ju_ζ,k+1)/(p,v_1) on π_* are the identity on the
(ℓ_2)/(2,v_1) component, send ζ to 0, and are the restriction map _2→2_2≅_2
The proof is nearly exactly as the proof of <Ref> for p>2. The only difference is that in checking multiplicative extension problems in spectral sequences, one must check that odd degree classes square to zero (since we are at the prime 2). This always follows because the square lands in a zero group; see <Ref> for a chart.
Our argument to compute (j_k) for k≥1 uses Dyer–Lashof operations to produce permanent cycles, so we first give j_k/(p,v_1) an _∞-structure.
For k≥1, j_k/(p,v_1) admits the structure of an _∞-algebra under j_k that is a trivial square zero extension of _p by Σ^2p-2_p.
To construct j_k/(p,v_1) as an _∞-ring, we first begin with τ_≤ 2p-3j_k, whose homotopy groups are _p in degree 0 and /p^k+1 in degree 2p-3, where α_1 is a p-torsion class in degree 2p-3.
By <cit.> this is a square zero extension of _p by Σ^2p-3/p^k+1, i.e it fits into a pullback square
τ_≤2p-3j_k [r][d] _p [d]
_p[r] _p⊕Σ^2p-2/p^k+1
By using the map /p^k+1→/p that kills every multiple of p (including α_1 since k≥1), we can produce an _∞-algebra R under τ_≤ 2p-3j_k defined as the pullback
R [r][d] _p [d]
_p[r] _p⊕Σ^2p-2/p
We claim that R is a trivial square zero extension of _p. To see this, square zero extensions of _p by Σ^2p-1_p are classified by maps of _p-modules L__p/_p→Σ^2p-1_p, where L__p/_p denotes the _∞ relative cotangent complex. By <cit.>, since _p →_p is 2p-3-connective, there is a 4p-4-connective map
_p ⊗__p(_p →_p) → L__p/_p
showing that π_2p-2L__p/_p is _p. It follows that up to isomorphism, there is a unique nontrivial square zero extension of _p by Σ^2p-3_p. But τ_≤2p-3_p must be this nontrivial extension, since α_1≠0 there. Since α_1=0 in R, it follows that R is the trivial square zero extension _p ⊕Σ^2p-3_p. Thus τ_≤2p-3(R⊗__p_p) is an _∞-_p-algebra under it that is a trivial square zero extension of _p by Σ^2p-2_p. But it is easy to see that the underlying unital j_k-module of this is j_k/(p,v_1).
For k≥1,p>2, there is an isomorphism
π_∗THH(j_k)/(p,v_1) ≃π_*THH(ℓ_p)/(p,v_1) ⊗Λ[α_1/p^k]⊗Γ[dα_1/p^k]
where |α_1/p^k| = 2p-2 and |σα_1/p^k| = 2p-1.
The proof of <Ref> carries over exactly for j_k to give an isomorphism
π_*(j_k^)/(p,v_1) ≅π_*(_p)/p⊗π_*(τ_≥0_p[v_1]^hp^k/_p)/v_1
The second tensor factor on the right hand side by <Ref> is Λ[α_1/p^k,dv_1]⊗Γ[d α_1/p^k][As an algebra this doesn't depend on k, but we have given names depending on k to indicate that the exterior class α_1/p^k is sent to 0 in (j_k+1^)/(p,v_1).].
In the spectral sequence for (j_k^)/(p,v_1) (j_k)/(p,v_1), there is a differential d_2p-2σ^2v_1 = d v_1 arising as in <Ref>, but the target of the differential from σ^2α_1, which is σα_1, is zero since α_1 = 0 in j_k/(p,v_1). In fact, the class σ^2α_1 is a permanent cycle since it can be constructed using a nullhomotopy of α_1. Let λ_1 be a class in (j_k)/(p,v_1) detecting this.
By <Ref>, j_k/(p,v_1) is an _∞-algebra under j_k that is an _∞-_p-algebra, so (j_k)/(p,v_1) ≅(j_k)⊗_j_kj_k/(p,v_1) is an _∞-_p-algebra with Dyer–Lashof operations. We define λ_2 to be the _2-Dyer–Lashof operation on λ_1. In (ℓ_p)/(p,v_1), this operation on the class λ_1 gives the class λ_2 in π_2p^2-1(ℓ_p)/(p,v_1) <cit.>, which is detected by σ^2v_1^p-1dv_1 in the spectral sequence for (ℓ_p^)/(p,v_1) by <Ref>. Since maps of filtered objects can only increase filtrations in which elements are detected, it follows that λ_2 must also be detected by σ^2v_1^p-1dv_1 in (j_k)/(p,v_1), so that class is a permanent cycle. The class α_1/p^k is a permanent cycle since it is in the image of the unit map, and the classes in Γ[dα_1/p^k] must be permanent cycles for degree reasons, so there are no further differentials. There are no even degree classes of positive weight, so classes representing the divided powers of dα_1/p^k have zero p^th-power for degree reasons. For degree reasons there can be no further multiplicative extensions.
§ TC IN THE STABLE RANGE
TC is an important invariant of rings, partially because of the Dundas–Goodwillie–McCarthy theorem, which says that for nilpotent extensions of rings, the relative K-theory is the relative TC.
Let f:R→ S an i-connective map of connective _1-rings, for i≥1. Then there is a pullback square
K(R) [r][d] K(S) [d]
(R) [r] (R)
A precursor to this theorem is a result of Waldhausen[Although Waldhausen proves this result for 𝔼_1-ℤ-algebras, the proof works equally well for any 𝔼_1-algebra: see for example <cit.>.], which computes the first nonvanishing homotopy group of K(f) ≅ TC(f) in terms of Hochschild homology.
Let f:R → S be an i-connective map of connective 𝔼_1-algebras for i≥1. Then K(f) ≅ TC(f) is (i+1)-connective, with π_i+1K(f) ≅HH_0(π_0S;π_i f).
Our goal in this section is to refine <Ref> to compute the spectrum K(f) ≅ TC(f) in the stable range in terms of THH. We use this to understand the maps K(𝕊_p) → K(ℤ_p) and K(j_ζ) → K(ℤ_p^hℤ) in the stable range.
Given a map of _1-rings, R → S, the relative _1-cotangent complex L_S/R is the S-bimodule given by the fiber of the multiplication map S⊗_RS → S[See for example <cit.>.]. Our result is as follows:
Given a map of ring spectra f:R → S, there is a natural map TC(f) →THH(S;L_S/R). If f is an n-connective map of -1-connective rings for n≥ 1, this natural map is 2n+1-connective.
In fact the map of <Ref> is the linearization map in the sense of Goodwillie calculus, of the functor f ↦ ((f)). See <cit.> for a variant of this, where one considers only trivial square-zero extensions of S rather than arbitrary _1-ring maps.
We first construct the natural transformation using the following lemma.
Let f:R → S be a map of _1-rings. Then there is a natural equivalence (R;S) ≅(S;S⊗_RS) making the diagram below commute.
(R;S)[rr][dr] (S;S)
[ur] (S;S⊗_RS)
Consider the map f^*:(R) →(S) and its right adjoint f_*:(S) →(R). The composite f^*f_* corresponds to the S-bimodule S⊗_RS, and the composite f_*f^* corresponds to the R-bimodule S. Since of a bimodule is the trace of the bimodule as an endomorphism in presentable stable categories, cyclic invariance of the trace gives the desired equivalence (R;S) ≅(S;S⊗_RS). There is a diagram
(R) →^{f^*} (S)
(S) →^{1_S} (S)
with the columns carrying the adjunctions f^*⊣ f_* (left) and 1_S⊣ 1_S (right),
We construct the natural transformation ((f)) →(S;L_S/R) for a map f:R → S as follows: composing the map (R) →(R) with (R) →(R;S), we obtain a commutative square
(R)[r][d] [d](S)
(R;S)[r] (S;S)
Taking horizontal fibers and using the isomorphism of <Ref>, we obtain the desired natural transformation.
We will first prove <Ref> in the case R → S is a square-zero extension with ideal M. To do this, we consider the square-zero extension as a filtered _1-ring with underlying R and associated graded S⊕ M[1]. Then (R) is a filtered S^1-equivariant spectrum, and the Frobenius maps Φ_p:(R) →(R)^tC_p send filtration i to filtration ip, so in particular can be thought of as filtration preserving maps, since the filtration is only in nonnegative degrees.
The key input we use is the computation of of a trivial square-zero extension as an S^1-equivariant spectrum:
For S⊕ M the trivial square-zero extension of an 𝔼_1-ring S by a bimodule M, there is an S^1-equivariant graded equivalence THH(S⊕ M) ≅THH(S) ⊕⊕_m=1^∞_/m^S^1THH(S;(Σ M)^⊗ m)
Here _/m^S^1 is the right adjoint of the forgetful functor from S^1-equivariant spectra to ℤ/m-spectra, and the ℤ/m-action on THH(S;(Σ M)^⊗ m) comes from cyclically permuting the tensor factors.
We also record a key property of the of -1-connective rings that we use:
Let R → S be an n-connective map of -1-connective rings, and M a connective S-bimodule. Then (S;M) is connective, and the map (R;M) →(S;M) is n+1-connective.
Both of these follow from examining the associated graded coming from the cyclic bar complex computing (R;M) and (S;M). For the latter is given by Σ^mS^⊗ m⊗ M which indeed is connective, and Σ^mS^⊗ m⊗ M →Σ^mR^⊗ m⊗ M is n+m-connective for m≥1 and an isomorphism for m=0.
Let f:R→ S be an n-connective square-zero extension of -1-connective _1-rings for n≥0. Then the map (f) →(S;L_S/R) is 2n+1-connective.
We consider the map (f) →(f) →(S;L_S/R) as a map of filtered spectra, viewing S as a filtered _1-ring with associated graded R⊕ M. By <Ref>, ((R)) ≅⊕_m=1^∞_/m^S^1(S;(Σ M)^⊗ m) as an S^1-spectrum. Since The Frobenius map is zero on associated graded since it takes filtration i to ip, so we learn that _m((f)) ≅ (Σ_/m^S^1(S;(Σ M)^⊗ m))_hS^1[See also <cit.>.]. In particular, since S is -1-connective and n≥ 0, the connectivity of these terms goes to ∞ as m →∞ via <Ref> so the filtration on is complete. Since _/m^S^1 decreases connectivity by 1, we learn that _m((f)) is (n+1)m-1-connective. In particular, the map (f) →_1(f) is 2n+1-connective.
To finish, it suffices to show the following two claims:
* (S;L_S/R) →_1(S;L_S/R) is 2n+2-connective.
* _1(f) →_1 (S;L_S/R) is an isomorphism.
The claim (1) follows from the fact that L_S/R≅ L_S/S⊕ M≅⊕_m=1^∞(Σ M)^⊗_S m, and (Σ M)^⊗_S m is 2n+2-connective for m≥ 2.
For claim (2), we see that
_1(f) ≅Σ(_/1^S^1(S;Σ M))_hS^1≅ (_/1^S^1(S;Σ M))^hS^1≅(S;Σ M)
Σ M is exactly _1L_S/R, and (S;_1L_S/R) ≅_1(S;L_S/R) since S is entirely in grading 0, so we are done.
We prove <Ref> by reducing to the case of a square-zero extension. First, we produce a natural way to factor a map of _1-rings through a square-zero extension. We recall that given a S'-S-bimodule M with a unit map → M, the pullback S'×_MS admits an _1-algebra structure where the maps S' → M and S → M are the S'-module and S-module maps adjoint to the unit map. This ring structure can be constructed as the endomorphism ring of the triple (S',S,S → S'⊗_S'M) viewed as an object of the oplax limit (S)×⃗M(S') (see <cit.> and <cit.>). When M comes from a cospan of ring maps S' → R ← S, this agrees with the pullback of the span of rings by <cit.>.
Given a map f:R → S, we consider S⊗_RS as an S-S-bimodule with unit 1. We define R_f,2 to be the _1-ring given by S×_S⊗_RSS.
We have natural maps R R_f,2 S. If R → S is an n-connective map of connective rings for n≥ 0, then h is 2n-connective, g is n-connective, and g is a square-zero extension.
The fiber of h:R → R_f,2 is the total fiber of the square
R [r][d] S [d]
S[r] S⊗_RS
which is f⊗_R f, which is 2n-connective. Since f is n-connective, it follows that g is too. It remains to show that g is a square-zero extension, which will follow if we identify S⊗_RS as an S-bimodule with unit with the associated structure on S ⊕ L_S/R coming from the cospan of rings S → S⊕ L_S/R←S corresponding to the universal derivation. But since R maps into the pullback of this cospan (since it is the universal square-zero extension of S under R) we have a square of ring maps
R[r][d] S [d]
S[r] S⊕L_S/R
which defines an isomorphism of unital S-bimodules S⊗_RS → S⊕ L_S/R.
We consider the maps h,g,f as in <Ref>, giving us the diagram
(h)[r][d] (f)[r][d] (g)[d]
(R_f,2;L_R_f,2/R)[r] (S;L_R/S) [r] (S;L_R_2,f/S)
To produce a nullhomotopy of the composite of the lower horizontal maps, we identify them with the vertical fibers of the following cofiber sequence using <Ref>:
(R;R_f,2)[r][d] (R;S)[r][d] (R_f,2;S)[d]
(R_f,2)[r] (S) [r] (S)
The map (R_f,2) →(S) lifts to (R_f,2;S), and this lifting provides the desired nullhomotopy. Moreover, we see that the fiber of the map (R_f,2;L_R_f,2/R) →(S; L_R/S→ L_R_2,f/S) is identified with the total fiber of the square
(R;R_f,2)[r][d] (R;S)[d]
[r](R_f,2) (R_f,2;S)
which is the fiber of the map (R; g) →(R_f,2; g). By <cit.>, since h is 2n-connective and g is n-connective, we see that this map is 3n+1-connective.
We next observe that in the right square of diagram (4), we know all maps except possibly the vertical map which we want to show is 2n+1-connective. Indeed, (h) is 2n+1-connective by <Ref> and <Ref>, the right vertical map is 2n+1-connective by <Ref>, and the lower horizontal map is 2n+1-connective since the map S⊗_RS → S⊗_R_f,2S is 2n+1-connective by <cit.>. It follows that the middle vertical map in diagram (4) is 2n-connective. But since f is an arbitrary n-connective map and h is 2n-connective, we learn that the left vertical map is 4n-connective. It follows that the middle vertical map is 2n+1-connective since it is an extension of a 2n+1-connective map and a 4n-connective map since n≥1.
There is a version of <Ref> for a 0-connective map of connective rings, but one must ask that π_0R→π_0S has a nilpotent kernel.
§.§ Applications to the sphere and the K(1)-local sphere
We now apply <Ref> to the map _p →_p for p≥ 2 to understand the map (_p) →(_p) in the stable range. The proposition below contains a key ingredient of <cit.> used to understand the homotopy type of (_p).
For p>2, the map π_*(_p) →π_*(_p) in degrees ≤ 4p-6 is an isomorphism in all degrees except 2p-1, where it is the map p_p →_p.
By <Ref> we have a 4p-5-connective map
((_p) →(_p))→((_p;_p) →(_p))
The target of the map is (_p →(_p)), which after applying τ_≤4p-5 is Σ^2p-2_p. Thus it follows that there is a cofiber sequence
Σ^2p-2_p →τ_≤4p-4(_p) →τ_≤4p-4(_p)
Recall that TC(𝕊_p) ≅𝕊_p ⊕Σ (ℂP^∞_-1)_p <cit.>[see also <cit.>], and that π_*TC(𝕊_p)/(p,v_1) is 𝔽_p in odd degrees between -1 and 2p-1, and in degrees 0,2p-2, and 0 in all other degrees <cit.>[This argument is not circular, because TC(𝕊_p)/(p,v_1) is computed without knowing this proposition.]. From this description, it follows that both TC(𝕊_p)/(p,v_1) and TC(ℤ_p)/(p,v_1) are 𝔽_p in degrees 2p-2,2p-1. Thus in the cofiber sequence above mod (p,v_1), the class in degree 2p-2 must go to 0 and the class in degree 2p-1 must go to the generator. It follows that integrally, the class must go to 0, and that it maps to the ℤ_p in TC_2p-1(ℤ_p) via the p-Bockstein, giving the conclusion.
For p>2, <Ref> also holds for j. In particular, the obstruction to lifting λ_1 ∈(_p) to (j) is up to a unit in _p the class σα_1 in (_p;L__p/j).
Since the map _p → j is 2p^2-2p-2-connective (see <Ref>), the map _p →_p agrees with the map j_p→_p in the stable range, so the analysis in <Ref> applies for j. In particular, the obstruction to lifting the class λ_1 ∈(_p) to j is nonzero in (_p;L__p/j), so must be σα_1 up to a unit in _p, since π_2p-2(_p;L__p/j) ≅_p is generated by this class.
We now apply <Ref> to the map j_ζ→_ζ, and then make deductions about K(L_K(1)) in the stable range.
There is an isomorphism Σ^2p-2_p ≅ L__ζ/j_ζ, where the generator is σ(α_1).
In fact, we claim that L__ζ^/j_ζ^≅Σ^2p-2,0_p on the class σ(α_1) which implies the result, since this is the associated graded of L__ζ/j_ζ. To see this, we note that L__ζ^/j_ζ^/p ≅ L__ζ^/p/j_ζ^/p. Since j_ζ^/p →_ζ/p is the augmentation of a polynomial algebra over the target on the class v_1, L__ζ^/p/j_ζ^/p≅Σ^2p-1_ζ^/p, where the generating class is σ(v_1). In j_ζ^, there is a p-Bockstein differential d_1v_1 = v_1ζ = α_1, so applying the map σ, we get that σ(v_1) has a p-Bockstein d_1-differential hitting ζσ(v_1) = σ(α_1). Thus we can conclude.
The following proposition gives a way in which (j_ζ) does not behave as if the action on ℓ_p is trivial.
For p>2, the image of the class λ_1 ∈TC(ℤ_p)/(p,v_1) in TC(ℤ_ζ)/(p,v_1) does not lift to TC(j_ζ)/(p,v_1). The same statement is true for K-theory replacing TC.
The result for K-theory is equivalent to the one for TC by <cit.>.
We have a commutative square of maps
((j) →(_p) [r][d] ((j_ζ)→(_p^h))[d]
(_p;L__p/j)[r] (_ζ;L__ζ/j_ζ)
where the vertical maps are 4p-5-connective by <Ref>.
The lower horizontal map sends σ(α_1) to σ(α_1), the generator of π_2p-2(_ζ;L__ζ/j_ζ). But σ(α_1) since the class is the obstruction to lifting λ_1 from (_p) to (_p), we learn that the obstruction to lifting λ_1 from (_p^h) to (j_ζ) is nontrivial. We also see that this obstruction is nonzero modulo (p,v_1).
For p>2, there are isomorphisms
τ_≤ 4p-6((j_ζ) →(_ζ)) ≅Σ^2p-2_p
and
K_*L_K(1)≅ K_*-1_p ⊕ K_*_p ⊕π_*Σ^2p-2_p/_p, *≤ 4p-6
The map f:j_ζ→_ζ is 2p-3-connective, so we learn that (f) →(_ζ;L__ζ/j_ζ) is 4p-5-connective using <Ref>. For the first statement, it suffices to show that τ_≤ 4p-4(_ζ;L__ζ/j_ζ) ≅Σ^2p-2_p. But using <Ref> and <Ref>, we learn
(_ζ;L__ζ/j_ζ) ≅(_ζ;Σ^2p-2_ζ/p⊗__p^h_p)
≅Σ^2p-2(_ζ)/p⊗__p^h_p ≅Σ^2p-2(_p)/p⊗__p_p
Since π_*(_p)/p is by <Ref> _p[σ^2α_1,σ^2v_1], we indeed learn the claim.
To get the statement about K-theory, by <cit.>, K_*(L_K(1)) ≅ K_*(j_ζ) ⊕ K_*-1(_p), and we have a cofiber sequence
((j_ζ) →(_ζ)) → K(j_ζ) → K(_p)
K_2p-1(_p)_p ≅_p, generated by λ_1, and the map K_2p-1(_p)_p. As noted in <Ref>, the boundary map K(_p) →((j_ζ)→(_ζ)) is nontrivial in the stable range, and λ_1 doesn't lift to K(j_ζ). In the stable range, (_ζ;L__ζ/j_ζ) is Σ^2p-2_p. The kernel of the map K(_p) →((j_ζ)→(_ζ) in the stable range then agrees with K(_p) by <Ref>, so from the long exact sequence on homotopy groups, we see that there is a short exact sequence in the stable range
0 →π_*Σ^2p-2_p/_p → K_*(j_ζ) → K_*(_p) → 0
But the map K(_p) → K(j_ζ) clearly splits this sequence, giving the result.
§ THE SEGAL CONJECTURE
The Segal conjecture for a cyclotomic spectrum X is the statement that the cyclotomic Frobenius map X → X^tC_p is an isomorphism in large degrees. Knowing the Segal conjecture for THH(R)⊗ V where V is a finite spectrum is a key step in proving the Lichtenbaum–Quillen conjecture for R, i.e., the fact that TC(R)⊗ V (and hence K(R)⊗ V) is bounded (see <cit.>).
Asking that the Segal conjecture hold for (R)⊗ V is a regularity and finiteness condition on R: for example it holds when V is p-torsion and R is a p-torsion free excellent regular noetherian ring with the Frobenius on R/p a finite map <cit.>. In this section, we show that the Segal conjecture does hold for j_ζ for p>2 as well as the extensions j_ζ,k, but doesn't hold for the connective covers j and j_k. In particular the Lichtenbaum–Quillen conjecture doesn't hold for j_k, and our result is used in <cit.> to show that it does hold for j_ζ,k for p>2.
A related regularity phenomenon was noted in <cit.>, namely that j_ζ is regular[See <cit.> for a discussion of regularity in the setting of prestable ∞-categories.] at the height 2-locus: i.e the t-structure on (j_ζ) restricts to a bounded t-structure on (j_ζ)^ω⊗_≥2. This t-structure is the key point in relating j_ζ's algebraic K-theory to that of the K(1)-local sphere. On the other hand, j is not regular at the height 2-locus which is why its integral K-theory is not closely related to that of the K(1)-local sphere.
Our first goal is to show that for odd p, j_ζ,k satisfies the Segal conjecture. A key input is the following proposition, the proof of which is the same as in the reference, though the statement is somewhat more general.
<cit.>
Let R be an _1-ring, and consider the ^m-graded polynomial algebra R[a_1,…,a_n]:=R⊗⊗_1^n [a_i], where each a_i has positive weight[i.e it is nonnegative weight in each copy of in ^m, and positive weight in some copy of .] and is even topological degree and [a_i] is the free _1-algebra. The map
φ: L_p(R[a_1,…,a_n]) →(R[a_1,…,a_n])^tC_p
at the level of π_*
is equivalent to the map
π_*(R)[a_i]⊗Λ[da_i] →π_*(R)^tC_p[a_i]⊗Λ[da_i]
where the a_i,da_i are sent to themselves. If R is an _2-algebra and [a_i] are given the _2-algebra structures coming from <cit.>, this is a homomorphism of rings.
The following lemma is used to reduce showing the Segal conjecture is true to the associated graded of a filtration on the ring.
Let C be a presentably symmetric monoidal stable category with a complete t-structure compatible with filtered colimits, and suppose that f:R^→ R'^ is a map of homotopy associative filtered rings in C, where the filtration on the source and target is complete.
If there is an element x ∈π_*R := π_*(,R), *>0 such that the associated graded map R^→ R'^ is n-coconnective in the constant t-structure and sends a class detecting x to a unit, then the map R → R' is also n-coconnective, and is equivalent to the map
R → R[x^-1]
First, since the filtrations are complete and the map f is n-coconnective on associated graded, we learn that the fiber is n-coconnective on associated graded, and complete, so the underlying object is n-coconnective.
Let x̃ be an element in π_**R^ whose underlying element is x that is sent to a unit in R'^gr. Since the filtration on R' is complete, it follows that x̃ is sent to a unit, which allows us to build a map R^[x̃^-1]→ R'^ via the colimit of the diagram
Σ^|x| R^ ⟶ Σ^|x|R'^
↓x               ↓x
R^ ⟶ R'^
⋮                ⋮
Before proceeding to prove the Segal conjecture, we recall as in <cit.> that given a filtered ^m-graded _1-ring R^, the cyclotomic Frobenius map refines to a filtered map
φ: L_p(R^)→(R^)^tC_p
where L_p is the operation on filtered spectra scaling the filtration and the gradings on R by p.
For p>2 and k≥0, the map THH(j_ζ,k)/(p,v_1) →THH(j_ζ,k)^tC_p/(p,v_1) has 2p-3-coconnective fiber, and is equivalent to the map
THH(j_ζ,k)/(p,v_1) →THH(j_ζ,k)[μ^-1]/(p,v_1)
where μ∈π_2p^2THH(j_ζ,k).
Using the filtration on j_ζ,k constructed in <Ref>, we get a filtered map
φ: L_p(j_ζ,k)/(p,ṽ_̃1̃) →(j_ζ,k)^tC_p/(p,φṽ_̃1̃)
By the proof of <Ref> and <Ref>, the class μ is detected in the spectral sequence for (j_ζ,k)/(p,v_1) by (σ^2v_1)^p. Thus by applying <Ref> for C = and R^→ R'^ the maps in question, it suffices to show
* The filtration on the source and target are complete.
* The associated graded map inverts the class σ^2v_1 and is 2p-3-coconnective.
To see (a), the source is complete by <Ref>. The Tate construction (-)^tC_p sits in a cofiber sequence up to shifts between the orbits (-)_hC_p and fixed points (-)^hC_p, so it suffices to show each of those is complete. The orbits are complete for connectivity reasons: in any finite range of degrees, the orbits are computed via a finite colimit. The fixed points are complete because complete objects are closed under limits.
We turn to proving (b). We further filter j_ζ,k^ by the p-adic filtration as j_ζ,k^⊗_p^ and consider the map of filtered graded _∞-rings L_p(j_ζ,k)/(p̃,v_1) →(j_ζ,k)^tC_p/(φp̃,φ v_1). We claim:
* The filtration on the source and target are complete.
* The associated graded map inverts the class σ^2p and is 2p-3-coconnective.
Given these claims, the proof is complete, since σ^2v_1 is detected in the spectral sequence by (σ^2p)^p (see <Ref>), so claim (b) follows from <Ref>.
(i) follows from an argument identical to the argument for (a), the only difference being that we use <Ref> to see that the filtration on (j_ζ^⊗_p^)/(p̃,v_1) is complete. To see (ii), by <Ref> the associated graded algebra is _p[v_0,v_1]^h, where the action is trivial. By <Ref> we have π_*(_p[v_0,v_1]^h)/(v_0,v_1) ≅_p⊗Λ[dv_0,dv_1,ζ]⊗_p[σ^2p], where |dv_0| = 1, |ζ| = -1, |dv_1|= 2p-1. It follows that if the Frobenius map mod (v_0,v_1) inverts σ^2p, it is 2p-3-coconnective, since it is injective on π_*, and an element in the cokernel of largest degree is (σ^2p)^-1σ v_1σ v_0, which is in degree 2p-2.
Thus it remains to see that the Frobenius map mod (v_0,v_1) on π_* inverts the class σ^2p. Since is a localizing invariant and ^h is a trivial square-zero extension as an _1-algebra, by <cit.> we have a pullback square of bigraded (_p)-modules in cyclotomic spectra
(_p[v_0,v_1]^h) [r][d] (_p[v_0,v_1]) [d]
[r] (_p[v_0,v_1]) (_p[v_0,v_1][x_0])
where x_0 is a polynomial generator in degree 0. It thus suffices to show that for
(_p[v_0,v_1][x_0]),(_p[v_0,v_1])
the cyclotomic Frobenius map inverts σ^2p. These statements follow from <Ref> with R = _p,_p[x_0], using the Segal conjecture for these discrete rings which is well known: for example <cit.> implies the Frobenius is an isomorphism in large degrees, but since it sends σ^2p to a unit <cit.>, it must just invert σ^2p.
The bound 2p-3 in <Ref> is optimal: the map is injective on π_*, and a class of largest degree not in the image is μ^-1λ_1λ_2, in degree 2p-2.
Now we show that the Segal conjecture fails for (j_k).
For p>2 and k≥0, the fiber of the Frobenius map THH(j_k)/(p,v_1) →THH(j_k)^tC_p/(p,v_1) is not bounded above. Thus j_k does not satisfy the Lichtenbaum–Quillen conjecture, i.e., TC(j_k)⊗ V is not bounded above for V a finite type 3 spectrum.
First we note that the failure of the Segal conjecture implies the failure of the Lichtenbaum–Quillen conjecture by <cit.>, so we show that the Segal conjecture fails.
We first show that μ∈(j_k)/(p,v_1) is sent to a unit in (j_k)^tC_p/(p,v_1). It follows from the spectral sequences used to calculate (j_k)/(p,v_1) that the image of μ in (_p) is (σ^2p)^p^2 up to a unit, which is sent under the Frobenius map to a class detected up to a unit by t^-p^2 in the Tate spectral seqence for (_p)^tC_p/(p,v_1) by <cit.>. This is the lowest filtration of the Tate spectral sequence, so since in that filtration, the map (j_k)/(p,v_1) →(_p)/(p,v_1) is the map _p →_p, we learn that the image of μ must be detected by a unit multiple of t^-p^2 in the Tate spectral sequence for (j_k)^tC_p and hence be a unit.
If the Frobenius map has an element x in the kernel, then xμ^i is also in the kernel for each i, so the fiber isn't bounded above. On the other hand, if the Frobenius map is injective, then the classes φ(μ)^-1φ((σα_1/p^k)^(pi)) are an infinite family of classes of increasing degree in (j_k)^tC_p that are not in the image of φ, so in this case too, we learn that the fiber is not bounded above.
In fact, π_*(j_k)^tC_p/(p,v_1) under the Frobenius map is the completion of π_*(j_k)[μ^-1]/(p,v_1) at the ideal generated by (σα_1/p^k^(pi)) for each i, and the map is in particular injective on π_*.
|
http://arxiv.org/abs/2307.05914v1 | 20230712044359 | FIS-ONE: Floor Identification System with One Label for Crowdsourced RF Signals | [
"Weipeng Zhuo",
"Ka Ho Chiu",
"Jierun Chen",
"Ziqi Zhao",
"S. -H. Gary Chan",
"Sangtae Ha",
"Chul-Ho Lee"
] | cs.NI | [
"cs.NI",
"cs.LG",
"eess.SP"
] |
Weipeng Zhuo∗, Ka Ho Chiu∗, Jierun Chen∗, Ziqi Zhao∗, S.-H. Gary Chan∗, Sangtae Ha†, Chul-Ho Lee‡
∗The Hong Kong University of Science and Technology, †University of Colorado Boulder, ‡Texas State University
Email: ∗{wzhuo,khchiuac,jcheneh,zzhaoas,gchan}@ust.hk, †[email protected], ‡[email protected]
Corresponding authors: S.-H. Gary Chan and Chul-Ho Lee.
This work was supported in part by Hong Kong General Research Fund (under grant number 16200120). The work of Chul-Ho Lee was supported in part by the NSF under Grant IIS-2209921.
FIS-ONE: Floor Identification System with One Label for Crowdsourced RF Signals
=========================================================================
Floor labels of crowdsourced RF signals are crucial for many smart-city applications, such as multi-floor indoor localization, geofencing, and robot surveillance. To build a prediction model to identify the floor number of a new RF signal upon its measurement, conventional approaches using the crowdsourced RF signals assume that at least few labeled signal samples are available on each floor. In this work, we push the envelope further and demonstrate that it is technically feasible to enable such floor identification with only one floor-labeled signal sample on the bottom floor while having the rest of signal samples unlabeled.
We propose FIS-ONE, a novel floor identification system with only one labeled sample. FIS-ONE consists of two steps, namely signal clustering and cluster indexing. We first build a bipartite graph to model the RF signal samples and obtain a latent representation of each node (each signal sample) using our attention-based graph neural network model so that the RF signal samples can be clustered more accurately. Then, we tackle the problem of indexing the clusters with proper floor labels, by leveraging the observation that signals from an access point can be detected on different floors, i.e., signal spillover. Specifically, we formulate a cluster indexing problem as a combinatorial optimization problem and show that it is equivalent to solving a traveling salesman problem, whose (near-)optimal solution can be found efficiently. We have implemented FIS-ONE and validated its effectiveness on the Microsoft dataset and in three large shopping malls. Our results show that FIS-ONE outperforms other baseline algorithms significantly, with up to 23% improvement in adjusted rand index and 25% improvement in normalized mutual information using only one floor-labeled signal sample.
§ INTRODUCTION
Many smart-city applications are enabled by radio frequency (RF) signals with floor labels. Such applications include multi-floor navigation in cities <cit.>, geofencing for pandemic control <cit.>, robot rescue or navigation in environments where visual information is not available <cit.>, and unmanned aerial vehicle surveillance in restricted areas <cit.>. In these scenarios, it is costly and labor-intensive to employ trained surveyors to collect all the RF signals with floor labels. One practical solution is to leverage crowdsourcing, where different people contribute different subsets of signals collected in a building. However, the crowdsourced RF signals, albeit abundant enough to cover the whole building, are largely unlabeled. Hence, an important question is how to leverage the unlabeled RF signals for floor identification.
Traditionally, different sensors, such as barometers and inertial measurement units (IMUs), have been leveraged to detect floor changes <cit.>, but these techniques either suffer from device heterogeneity or require users to follow specific routes for data collection. There have also been other studies which explore signal propagation models <cit.> to predict floor labels. However, the locations of access points (APs) are required in the studies, hindering their solutions from being deployed in practice. Recently, there is a growing interest in developing machine learning-based solutions <cit.> for floor identification due to their strong learning capability and high prediction accuracy. They, however, need to train models with a substantial amount of labeled data, which are difficult to obtain in crowdsourcing scenarios. Such a strong requirement of labeled data greatly hampers the large-scale deployment of the aforementioned applications using crowdsourced RF signals.
One natural question is to what extent we can eliminate the need for such expensive floor-labeled RF signals. In this work, we demonstrate that it is technically feasible to infer floor labels of RF signals (upon their arrival) using only one labeled signal sample on the bottom floor while the rest of the crowdsourced signal samples are unlabeled. Specifically, we are able to eliminate the need for floor-labeled signal samples significantly by leveraging the `signal spillover' effect. As shown in Figure <ref>(a), a transmitted signal from an AP can be detected across different floors, i.e., the signal spills over to different floors. Intuitively, adjacent floors would see more and stronger signals from each other than distant floors, i.e., having a higher signal spillover effect between adjacent floors. This is validated in Figure <ref>(b), where we show the number of common APs, or, more precisely, media access control (MAC) addresses that are detectable across different floors in a large shopping mall, i.e., a building of eight floors, where there are a total of 168 MAC addresses detected. For instance, if a MAC address can be detected across four floors, it will only be counted once in the bin of “4” in Figure <ref>(b). We see that signals of most APs can spill over to neighboring floors. Note that a few MACs could be detected on many floors because there is a large empty space in the middle of the mall.
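As a concrete illustration of how such a spillover histogram can be tabulated from floor-labeled survey scans (a minimal sketch for exposition only; the variable names and input format below are ours, not code from this work):

from collections import defaultdict

# scans: iterable of (floor_id, readings) pairs, where readings maps MAC address -> RSS
def spillover_histogram(scans):
    floors_per_mac = defaultdict(set)
    for floor_id, readings in scans:
        for mac in readings:
            floors_per_mac[mac].add(floor_id)
    # histogram[k] = number of MAC addresses detectable on exactly k floors
    histogram = defaultdict(int)
    for floors in floors_per_mac.values():
        histogram[len(floors)] += 1
    return dict(sorted(histogram.items()))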
Given this spillover observation, we expect that if we are able to group signals from the same floor together and figure out which groups are direct neighbors based on the signal spillover, then all the groups can be ordered, i.e., direct neighbor groups are placed next to each other. Since there is also a labeled signal sample on the bottom floor, the cluster containing the labeled sample is considered as the one for the starting floor, thereby making the ordering complete for floor identification. Thus, we propose FIS-ONE, a novel floor identification system based on crowdsourced RF signals, where only one labeled signal sample is needed from the bottom floor. As illustrated in Figure <ref>, FIS-ONE consists of the following steps: The crowdsourced RF signals are first modeled as a bipartite graph, which is then processed by our radio-frequency graph neural network (RF-GNN) to obtain their vector representations (embeddings). These vector representations are further grouped into different clusters whose number is the same as the number of floors. Finally, the clusters are indexed properly.
To cluster the crowdsourced RF signals, we first model RF signals as a bipartite graph. RF signals are inherently heterogeneous, meaning that different signal samples would observe different subsets of APs in the building, even if they are collected on the same floor. Thus, it may not be feasible to use a vector of the superset of APs from the building to represent each signal sample, as there would be many missing entries in each vector, which could make clustering inaccurate. With the bipartite graph modeling, APs, or, more specifically, MAC addresses, are considered nodes of one type, i.e., MAC nodes, and signal samples are considered nodes of the other type, i.e., signal-sample nodes. A MAC node and a signal-sample node are connected if the MAC address is detected in the signal sample.
We then obtain a vector representation (or embedding) of each node with a graph neural network model. High-quality vector representations of the signal-sample nodes can preserve the relative distance (similarity) among the signal samples in the embedding vector space, i.e., if two signal samples are similar to each other in the physical space, their vector representations are also close to each other in the embedding space. Graph embedding techniques <cit.> can be used to obtain such representations, but they are limited to static bipartite graphs. In other words, they are not a suitable choice for dealing with new incoming RF signals, i.e., new nodes added to the graph. To enable efficient representation learning on such a dynamic bipartite graph with incoming nodes (new RF signals), in this work, we design RF-GNN, an attention-based graph neural network (GNN) model for RF signals. Specifically, RF-GNN incorporates the received signal strength (RSS) between a MAC node and each of its connected signal-sample nodes as a special type of attention, such that node representations can be learned effectively. With the learned representations of signal-sample nodes, we then apply the hierarchical clustering algorithm to divide them into a given number of floor clusters accurately.
After obtaining the clusters, we index the clusters, i.e., identifying which cluster corresponds to which floor, by leveraging the signal spillover effect. We first propose a novel measurement metric to measure the similarity between clusters based on the level of the signal spillover. The higher the spillover level, the closer the two clusters are. We next formulate the cluster indexing problem as a combinatorial optimization problem, which is to find an optimal ordering of clusters such that the spillover level between any two adjacent clusters is maximized. We show that it is equivalent to solving a travelling salesman problem (TSP), more specifically the problem of finding the shortest Hamiltonian path. Given the spillover levels between pairwise clusters (cities), it boils down to finding an optimal path that visits each cluster (city) exactly once such that the sum of the spillover levels (distances) is maximized (minimized). Since we have one labeled signal sample on the bottom floor, the cluster with the labeled data sample is treated as the starting cluster (city). We empirically validate that the visiting sequence in the optimal path accurately indexes the clusters with floor numbers.
We further discuss how FIS-ONE can be extended to the case when the one labeled signal sample comes from an arbitrary floor. This randomness would make the starting point of the TSP unfixed, leaving numerous paths (orderings) as candidate solutions. In other words, if the labeled signal sample comes from a different floor than the bottom one, the cluster containing the labeled sample can no longer be used as the starting cluster for the TSP, so the solutions to the TSP with different starting clusters all need to be evaluated. Thus, we propose a simple yet efficient heuristic method and numerically demonstrate that it still achieves accurate floor identification without much performance degradation (∼3%).
Our contributions can be summarized as follows.
* FIS-ONE: a novel floor identification system with only one labeled signal sample for crowdsourced RF signals. FIS-ONE is able to infer the floor number of each crowdsourced RF signal with only one labeled signal sample from the bottom floor, which greatly reduces the label requirement for the floor identification system and allows us to take a first step towards unsupervised floor identification for crowdsourced RF signals.
* RF-GNN: a novel attention-based graph neural network model to process heterogeneous RF signals. RF-GNN enables efficient representation learning on the graph built from RF signals by incorporating the RSS values as a type of attention to encode different levels of importance over edges, so that the vector representation of each signal-sample node is learned more accurately.
* Cluster indexing based on the signal spillover effect. We index the clusters with proper floor numbers based on our observation of the signal spillover effect between floors. To achieve high indexing accuracy, we propose a novel measure of similarity between clusters depending on the level of the signal spillover effect and then solve a cluster indexing problem, which is transformed into a TSP, to obtain the optimal indexing, i.e., floor identification of unlabeled signal samples.
* Extensive experiments on two large-scale crowdsourced datasets. We implement FIS-ONE and evaluate its performance using the Microsoft open dataset and data collected in three large shopping malls. Experiment results show that FIS-ONE achieves high accuracy for all buildings using only one floor-labeled signal sample on the bottom floor. FIS-ONE outperforms other baseline algorithms significantly, with up to 23% improvement in adjusted rand index and 25% improvement in normalized mutual information.
§ RELATED WORK
Requirement of a substantial amount of labeled data: A substantial number of floor-labeled RF signal samples are required in existing floor identification systems <cit.> which are purely based on RF signals. For instance, in <cit.>, a floor-level classifier is first trained using labeled RF signals collected from different floors in a building before its online deployment. RMBMFL <cit.> selects reliable APs and extracts features from RF signals coming from the APs to train a softmax classifier with corresponding floor labels. GRAFICS <cit.> assumes that a few labeled RF signals are available on every floor for floor identification. FedDSC-BFC <cit.> obtains a collection of datasets of floor-labeled RF signals from different sensing clients in a crowdsourced manner and trains a floor classification model with federated learning. In contrast, FIS-ONE aims to go beyond the conventional assumption of the presence of floor-labeled RF signals on every floor and demonstrates the feasibility of floor identification with only one labeled RF signal sample on the bottom floor while the rest of the samples are unlabeled.
Requirement of AP locations: AP locations are necessary for other RF signal-based floor identification systems <cit.> that do not require floor labels. For instance, HyRise <cit.> measures, in an offline phase, the pressure readings of RF signals and then obtains the AP floor information using the pressure readings. This information is stored in a database for online inference. StoryTeller <cit.> first identifies APs with the strongest signals among the measured RF signals and then converts the signal distribution into images with corresponding AP locations. These images are used to train a convolutional neural network model for floor classification. However, the locations of APs are generally difficult to obtain in practice, especially in crowdsourcing scenarios. FIS-ONE leverages only RF signal readings and does not require such AP locations during the floor identification process.
Requirement of other sensors: Other sensor signals <cit.> have also been used to facilitate the floor identification process. In <cit.>, it is observed that slow updates of pressure readings may be caused by reasons such as weather changes, while sudden changes of pressure readings are due to user movement. This observation is then leveraged by setting a threshold on the pressure readings to detect floor changes. However, it is usually difficult to set the threshold accurately in practice. 3D-WFBS <cit.> learns relative altitude information from barometer readings and then obtains absolute floor information by combining the barometer reading and RSS from landmark APs. The deployment of landmark APs is, however, still challenging. MPiLoc <cit.> takes advantage of trajectories generated by IMU signals and then uses a barometer to separate the trajectories into different floors. However, the collection of IMU signals often incurs significant overhead in data storage for mobile devices. In contrast, FIS-ONE is able to achieve high accuracy in floor identification based only on crowdsourced RF signals, among which only one signal sample obtained on the bottom floor needs to be floor-labeled.
§ RF-GNN: ATTENTION-BASED GRAPH NEURAL NETWORKS FOR RF SIGNALS
Given the crowdsourced RF signal samples, we first model them as a bipartite graph and then process the graph using our RF-GNN to obtain a vector representation of each node.
§.§ Graph Construction
RF signals are heterogeneous, meaning that different RF signals may observe only different subsets of APs in the building. A traditional way to represent an RF signal is to use a vector consisting of the superset of all APs in the building, which makes many entries empty, as illustrated in Figure <ref>. Such missing entries, which are typically filled up with some arbitrary small values, would lead to unsatisfactory application performance. Recently, RF signals have been modeled as a bipartite graph <cit.> to overcome the missing-value problem. We adopt this bipartite graph modeling in FIS-ONE and explain its details below for the sake of completeness.
As shown in Figure <ref>, there are two types of nodes. One is for APs, or, more precisely, their MAC addresses, while the other is for RF signal samples (records). Recall that each RF signal sample contains a list of sensed MAC addresses along with their received signal strength (RSS) values. Then, a node of a MAC address is connected to another node corresponding to an RF signal sample if the MAC address is detected in the RF sample (record). Thus, we can represent the crowdsourced RF signals as a bipartite graph. Specifically, we construct a weighted bipartite graph 𝒢= (𝒰, 𝒱, ℰ), where 𝒱 is the set of nodes representing the crowdsourced RF signal samples, 𝒰 is the set of nodes representing the sensed MAC addresses, and ℰ is the set of edges. Each edge e_uv∈ℰ denotes the edge between u ∈𝒰 and v ∈𝒱. Let RSS_uv be the RSS value of an RF signal from u that appears in v. The edge weight w_uv is then defined as w_uv := f(RSS_uv), where f(RSS_uv) > 0 for all RSS_uv. We use f(RSS_uv) := RSS_uv + c for our weighted bipartite graph 𝒢, where c is some constant such that c >max{|RSS_uv|, ∀ u, v}. In our case, c is set to 120 dBm.
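To make the construction concrete, the following is a minimal sketch (in Python, using networkx) of how such a weighted bipartite graph could be assembled; the function name, node-naming convention and the toy samples are illustrative assumptions of ours, not the authors' implementation, and each crowdsourced sample is assumed to be a dictionary mapping MAC addresses to RSS values in dBm.

```python
# Minimal sketch: building the weighted bipartite graph G = (U, V, E) from
# crowdsourced samples. Each sample is assumed to be a dict {mac: rss_dBm}.
import networkx as nx

C = 120.0  # offset so that every edge weight f(RSS) = RSS + c stays positive

def build_bipartite_graph(samples):
    """samples: list of dicts mapping MAC address -> RSS (dBm)."""
    g = nx.Graph()
    for v, reading in enumerate(samples):
        sample_node = ("sample", v)          # node in V (signal-sample nodes)
        g.add_node(sample_node, bipartite=1)
        for mac, rss in reading.items():
            mac_node = ("mac", mac)          # node in U (MAC nodes)
            g.add_node(mac_node, bipartite=0)
            g.add_edge(mac_node, sample_node, weight=rss + C)  # w_uv = f(RSS_uv)
    return g

# Example: two samples that share one access point
g = build_bipartite_graph([{"aa:bb": -60.0, "cc:dd": -75.0}, {"aa:bb": -82.0}])
print(g.number_of_nodes(), g.number_of_edges())  # 4 nodes, 3 edges
```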
§.§ RF-GNN: Vector Representation Learning for Nodes
Next, we efficiently learn a vector representation of each node from the constructed graph. The advantage of learning high-quality representations is that the relative distance (similarity) between two `signal-sample' nodes in the physical space can be well preserved in the embedding vector space. We first elaborate on the general aggregation process <cit.> in graph neural networks for representation learning and then introduce our proposed RF-GNN.
Given a target node whose representation is to be learned, there are two steps in the aggregation process. First, we sample nodes from the N-hop neighborhoods of the target node based on uniform distribution. Second, we aggregate information from the sampled nodes towards the target node. Figure <ref> shows an illustrative example where the information is aggregated from two-hop neighbors towards the target node. It first samples two nodes from each of the first- and second-hop neighbors and implements two iterations of aggregation in the example. In each iteration, each node obtains information from its sampled immediate neighbors. After two iterations, the target node contains information from its two-hop neighbors.
We now introduce RF-GNN, our attention-based GNN model for crowdsourced RF signals. In our scenario, it is natural to define the weights of edges as a function of sensed RSS values to encode different levels of signal strength from different APs (MAC addresses). To sample neighbors of a target node, intuitively, the higher the sensed RSS value between the node and its neighbor, the more likely the neighbor should be chosen. Thus, we design our own neighbor sampling strategy as follows. Consider u ∈ U and v ∈ V with e_uv∈ℰ and suppose that v is the target node. The sampling probability that u is selected for aggregation is given by
Pr(u) = f(RSS_uv)/∑_u'∈ N(v) f(RSS_u'v).
Similarly for when u is the target node.
Graph attention networks have been introduced in the literature <cit.> to learn better representations by incorporating an attention mechanism into the aggregation process compared to the GNN model without attention, since they consider different neighbors with different levels of importance (attention weights). However, the weight learning process requires a substantial amount of labeled data for supervised or semi-supervised training, which is infeasible in our scenario. We instead observe that in our weighted bipartite graph, the edge weights naturally capture the importance of neighbors, i.e., higher edge weights (RSS values) generally indicate closer distances. Thus, we incorporate the edge weights as a type of attention into the aggregation process and design an aggregator based on edge weights. Specifically, let N'(v) be the set of sampled neighbors of v and let r_u be the vector representation of u ∈ N'(v). The weighted aggregator is defined as
AGG^w(r_u, ∀ u∈ N'(v)) = ∑_u∈ N'(v)f(RSS_uv)/∑_u'∈ N'(v) f(RSS_u'v)r_u.
We next explain the remaining details of RF-GNN and its unsupervised training to obtain the vector representation of each node. Consider i ∈ U∪ V. Let r_i^k be the representation of i in the k-th iteration, let N'(i) be the sampled neighborhood of i, and let K be the number of hops. We set r_i^0 to a random vector. In the k-th iteration of the aggregation process, RF-GNN first aggregates information from the sampled direct neighbors of i and stores it in a temporary variable, say, r_N'(i)^k, which is given by
r_N'(i)^k = AGG^w(r_j^k-1, ∀ j ∈ N'(i)),
where r_j^k-1 denotes the representation of neighbor j in the (k-1)-th iteration. RF-GNN then concatenates r_N'(i)^k with the vector representation of i itself in the previous iteration, i.e., r_i^k-1. The concatenated vector goes through a fully connected layer with a trainable weight matrix 𝐖^k and a non-linear function σ(·) to generate r_i^k, which is given by
r_i^k = σ(𝐖^k (r_i^k-1,r_N'(i)^k)).
Finally, r_i^k is normalized as r_i^k := r_i^k/||r_i^k||_2, where ||·||_2 is the ℓ_2 norm. r_i^k is then used for the (k+1)-th iteration. After repeating the whole process K times, the final representation for node i is given by r_i^K.
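As a rough sketch of one such iteration, the snippet below combines the RSS-based neighbor sampling Pr(u), the weighted aggregator AGG^w, and the update r_i^k with ℓ_2 normalization. It assumes the weighted bipartite graph from the earlier sketch and a dictionary of current node representations; the function names, the choice of tanh for σ(·), and the sample size are illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np

def sample_neighbors(graph, node, n_samples, rng):
    # RSS-weighted neighbour sampling: Pr(u) proportional to f(RSS_uv)
    nbrs = list(graph.neighbors(node))
    w = np.array([graph[node][u]["weight"] for u in nbrs], dtype=float)
    k = min(n_samples, len(nbrs))
    idx = rng.choice(len(nbrs), size=k, replace=False, p=w / w.sum())
    return [nbrs[i] for i in idx]

def weighted_aggregate(graph, node, sampled, reps):
    # AGG^w: RSS-based edge weights act as fixed attention coefficients
    w = np.array([graph[node][u]["weight"] for u in sampled], dtype=float)
    w = w / w.sum()
    return sum(wi * reps[u] for wi, u in zip(w, sampled))

def aggregate_step(graph, reps, W_k, n_samples=5, seed=0):
    # One iteration: r_i^k = sigma(W^k [r_i^{k-1} ; r_{N'(i)}^k]), then l2-normalise.
    # W_k must have shape (d, 2d) if the representations are d-dimensional.
    rng = np.random.default_rng(seed)
    new_reps = {}
    for node in graph.nodes:
        sampled = sample_neighbors(graph, node, n_samples, rng)
        r_nbr = weighted_aggregate(graph, node, sampled, reps)
        h = np.tanh(W_k @ np.concatenate([reps[node], r_nbr]))
        new_reps[node] = h / np.linalg.norm(h)
    return new_reps
```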
For unsupervised training, we follow the procedure in <cit.>, which is commonly used to train GNN models in an unsupervised manner <cit.>. Specifically, it is based on a large number of short random walks of length five generated on the graph. The intuition here is that the nodes that appear in the same random walk should have similar vector representations as they are close to each other. Suppose that nodes i and j co-occur in a short random walk and let r_i and r_j be their corresponding vector representations. We use the following loss function to learn the vector representation of each node and the weight matrices 𝐖^k's:
ℒ_𝒢 := - log( σ(r_i·r_j)) - τ×E_z∼ P_n(z)log( σ (-r_i·r_z) ),
where σ(x) = 1/(1+exp(-x)), r_i ·r_j denotes the inner product of r_i and r_j, and the expectation E_z∼ P_n(z) is with respect to P_n(z), which is a user-defined distribution over nodes. Note that the second term is based on the so-called `negative sampling' in that τ nodes are randomly sampled from the entire graph according to P_n(z), so they are less likely to appear in the same random walk. In other words, the first term encourages the nodes that co-occur in the same random walk to stay close to each other in the embedding vector space, while the second term forces the nodes that are probably far from each other to separate apart in the embedding vector space. As used in <cit.>, we choose τ=4 and P_n(z) ∝ d_z^3/4, where d_z is the degree of node z.
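A possible reading of this loss for a single co-occurring pair, with τ negative samples drawn from P_n(z) ∝ d_z^3/4, is sketched below; the function signature and the random-number handling are assumptions of ours rather than the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pair_loss(i, j, reps, degrees, tau=4, seed=0):
    # -log sigma(r_i . r_j) pulls co-occurring nodes together; tau negative
    # samples drawn from P_n(z) ~ deg(z)^(3/4) push unrelated nodes apart.
    rng = np.random.default_rng(seed)
    nodes = list(reps)
    p = np.array([degrees[n] ** 0.75 for n in nodes])
    p = p / p.sum()
    loss = -np.log(sigmoid(reps[i] @ reps[j]))
    for k in rng.choice(len(nodes), size=tau, p=p):
        loss += -np.log(sigmoid(-(reps[i] @ reps[nodes[k]])))
    return loss
```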
§ SIGNAL CLUSTERING AND CLUSTER INDEXING
After obtaining the latent vector representation of each node in the bipartite graph, we cluster the representations of signal-sample nodes into floor clusters and then index the clusters with proper floor numbers.
§.§ Signal Clustering
To cluster the representations of the signal-sample nodes into different clusters whose number is the same as the number of floors in the building, we employ proximity-based hierarchical clustering. To start with, each representation is treated as its own cluster. We then merge the two clusters with the shortest distance in each round. Let C_i be the set of representations of the signal-sample nodes in cluster i. The distance between clusters i and j is then defined as
d(C_i, C_j) := 1/|C_i| |C_j|∑_r∈C_i∑_r' ∈C_jr -r' _2,
where r∈C_i is the representation of a signal-sample node in cluster i. This clustering process continues until the number of clusters becomes the same as the number of floors in the building.
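Since the distance d(C_i, C_j) above is exactly the average pairwise (average-linkage) distance, the clustering step could be sketched with SciPy's hierarchical clustering as below; the helper name and the random example data are illustrative only.

```python
# Sketch: average-linkage hierarchical clustering of the learned signal-sample
# embeddings, cut at the known number of floors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_embeddings(embeddings, n_floors):
    """embeddings: (n_samples, d) array of signal-sample representations."""
    Z = linkage(embeddings, method="average", metric="euclidean")  # d(C_i, C_j) as in the text
    return fcluster(Z, t=n_floors, criterion="maxclust")           # labels in {1, ..., n_floors}

labels = cluster_embeddings(np.random.default_rng(1).normal(size=(300, 16)), n_floors=5)
print(np.unique(labels))
```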
§.§ Cluster Indexing: A TSP Formulation
We next index the clusters with floor numbers. Recall that the signal from an AP can be detected on different floors, i.e., there is a signal spillover effect. Two adjacent floors would observe a higher spillover effect, as it was empirically validated in Figure <ref>(b). See Figure <ref> for an illustration. If we are able to infer which two clusters are direct neighbors based on the signal spillover effect, we can eventually obtain an ordering of the clusters. Here, since we have one floor-labeled signal sample measured from the bottom floor, we use the cluster including the labeled sample as the starting cluster in the ordering.
To that end, we need a measure to gauge the level of the signal spillover effect between floors, which now becomes the similarity between their corresponding clusters. A natural choice here would be the Jaccard similarity coefficient <cit.> as a measure of similarity between two clusters, which in our setting becomes the ratio of the number of shared MACs to the total number of MACs detected across the two clusters. To be precise, letting A_i be the set of MACs detected in cluster i, the Jaccard similarity coefficient J_ij for clusters i and j is given by
J_ij = |A_i ∩A_j|/|A_i ∪A_j|.
However, this measure only considers the presence of a MAC (a set element) rather than its coverage. For instance, there is no difference between a MAC that is sensed by most signal samples and another MAC that is only sensed by few signal samples in each cluster. The former would correspond to an AP that has a wider coverage than the latter, but such a difference cannot be captured by the Jaccard similarity coefficient.
To overcome this limitation, we propose an adapted Jaccard similarity coefficient to capture the coverage of each AP. Instead of simply measuring the existence of a MAC, we also consider its appearance frequency. Since crowdsourced RF signals are generally abundant, the frequency of a MAC that appears in a cluster (a collection of RF signals) should be a good indicator of its coverage. Consider two clusters i and j, and suppose that there are a total of m MACs detected in the clusters. Letting f_ik be the frequency of MAC k that appears in cluster i, we define the frequency count of shared MACs between clusters i and j as
f^share_ij := ∑_k=1^m f_ikf_jk.
Here we do not simply compute the frequency of each MAC that appears in both clusters i and j, since in that case we could not see how its appearances are distributed over i and j. For example, a MAC could appear predominantly in one cluster over the other. Thus, we use the product of the separate frequencies of each MAC in i and j for the frequency count f^share_ij. In addition, we define the frequency count of unshared MACs between clusters i and j as
f^diff_ij := ∑_k=1^m(1_{f_ik=0} f_jkf̅_i + 1_{f_jk=0} f_ikf̅_j ),
where f̅_i is the average frequency count of MACs appearing in cluster i, i.e., f̅_i =∑_k=1^m f_ik/m, and 1_{·} is an indicator function. For example, 1_{f_ik=0} is given by
1_{f_ik=0} =
1, if MAC k does not appear in cluster i,
0, otherwise.
Note that for the definition of f^diff_ij in (<ref>), we do not simply compute the pure frequency count of unshared MACs between i and j as its value would not be on the same scale as that of f^share_ij in (<ref>), which is in the product form. Thus, we consider f̅_i in the first term and f̅_j in the second term. From (<ref>) and (<ref>), our adapted Jaccard similarity coefficient J^n_ij between clusters i and j is finally defined as
J^n_ij := f^share_ij/f^share_ij + f^diff_ij.
It is worth noting that this adapted Jaccard similarity coefficient yields better floor identification performance than the original Jaccard similarity coefficient, as will be demonstrated numerically in Section <ref>.
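For concreteness, a small sketch of J^n_ij computed from per-cluster MAC frequency counts is given below; the dictionary-based interface is an assumption of ours, not the paper's implementation.

```python
# Sketch of the adapted Jaccard similarity J^n_ij between two clusters,
# given per-cluster MAC frequency counts (dict: mac -> count).
def adapted_jaccard(freq_i, freq_j):
    macs = set(freq_i) | set(freq_j)                       # all MACs seen in either cluster
    f_i = {m: freq_i.get(m, 0) for m in macs}
    f_j = {m: freq_j.get(m, 0) for m in macs}
    fbar_i = sum(f_i.values()) / len(macs)                 # average frequency in cluster i
    fbar_j = sum(f_j.values()) / len(macs)
    f_share = sum(f_i[m] * f_j[m] for m in macs)           # shared-MAC frequency count
    f_diff = sum((f_j[m] * fbar_i if f_i[m] == 0 else 0) +
                 (f_i[m] * fbar_j if f_j[m] == 0 else 0) for m in macs)
    return f_share / (f_share + f_diff) if (f_share + f_diff) > 0 else 0.0

print(adapted_jaccard({"a": 10, "b": 2}, {"a": 8, "c": 1}))  # about 0.89
```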
For each pair of clusters, we calculate their adapted Jaccard similarity coefficients to gauge their similarity. The higher the coefficient is, the higher the similarity is, i.e., the clusters that correspond to adjacent floors should have higher coefficients than the ones corresponding to distant floors. Using the cluster that contains the only labeled signal sample as the starting cluster, our cluster indexing problem is to find an optimal ordering of the clusters such that the sum of the pairwise (adapted Jaccard) coefficients of the clusters that are adjacent in the ordering is maximized. Then, the optimal ordering simply indicates the floor number of each cluster, determining the labels of all the signal samples in the cluster.
Consider a weighted complete graph G with a set of nodes 𝒩={1, 2,…, N}, where N is the number of floors in a building of interest. Without loss of generality, suppose that node 1 corresponds to the cluster that contains the only labeled signal sample. Let w_ij be an edge weight from nodes i to j. Note that there is no self loop in G. We set the edge weight w_ij := 1 - J^n_ij for all i ∈𝒩 and j ∈𝒩∖{1}, while setting w_i1 := 0 for all i 1. See Figure <ref> for an illustration. Note that w_ij = w_ji for all i,j ∈𝒩∖{1} due to the symmetricity of J^n_ij between i and j. Then, we have the following.
The cluster indexing problem is equivalent to solving a TSP variant, or finding the shortest Hamiltonian path, on G, which is formally given by
minimize ∑_i=1^N∑_j=1^N w_ij1_ij
subject to ∑_i=1,i ≠ j^N 1_ij = 1, ∑_j=1,j ≠ i^N 1_ij = 1, and
∑_i,j ∈𝒮, i ≠ j1_ij≤ |𝒮| - 1, ∀𝒮⊊𝒩, |𝒮| ≥ 2,
where 1_ij is the indicator function, i.e.,
1_ij =
1, if the path goes directly from i to j,
0, otherwise.
Recall that given a set of cities and the pairwise distances between cities, the TSP is to find the shortest route that visits each city exactly once and returns to the starting city. If all the distances to the starting city are set to zero, it boils down to the problem of finding the shortest Hamiltonian path with a given starting city since the way back from the final city in the route does not contribute to the total route length. Also, observe from (<ref>) that 0 ≤ J^n_ij≤ 1. Thus, with the settings of w_ij, the cluster indexing problem, which is a maximization problem, is equivalent to the problem of finding the shortest Hamiltonian path on G starting with node 1.
Note that the exact solution to the TSP can be obtained by the Held-Karp algorithm <cit.> with a time complexity of O(N^2 2^N). We can thus resort to the Held-Karp algorithm to solve our problem. Once the solution, i.e., the optimal ordering of the clusters, is obtained, the clusters are indexed with the corresponding floor numbers sequentially, with the first cluster being the bottom floor. In case N is large, we can also resort to approximation algorithms <cit.> to obtain a near-optimal ordering. We empirically validate in the next section that an approximation algorithm works reasonably well compared to the exact algorithm.
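A compact sketch of the Held-Karp dynamic program for this shortest-Hamiltonian-path variant is given below, with node 0 standing for the cluster that contains the labeled bottom-floor sample; the data layout and function name are our own choices, and the zero-cost return edge w_i1 = 0 is encoded by simply never paying for a way back to the start.

```python
# Sketch: Held-Karp dynamic programming for the shortest Hamiltonian path on the
# complete cluster graph, with node 0 (the cluster holding the labeled sample)
# fixed as the start. w[i][j] = 1 - J^n_ij.
import itertools

def shortest_hamiltonian_path(w):
    n = len(w)
    # dp[(S, j)] = (cost, parent): cheapest path starting at 0, visiting set S, ending at j
    dp = {(frozenset([0, j]), j): (w[0][j], 0) for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in itertools.combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for j in subset:
                dp[(S, j)] = min(
                    (dp[(S - {j}, k)][0] + w[k][j], k)
                    for k in subset if k != j and (S - {j}, k) in dp
                )
    full = frozenset(range(n))
    cost, end = min((dp[(full, j)][0], j) for j in range(1, n))
    path = [end]                                # backtrack the optimal ordering
    S = full
    while path[-1] != 0:
        _, parent = dp[(S, path[-1])]
        S = S - {path[-1]}
        path.append(parent)
    return path[::-1], cost                     # cluster ordering, bottom floor first

order, cost = shortest_hamiltonian_path([[0, .2, .9], [.2, 0, .3], [.9, .3, 0]])
print(order)   # [0, 1, 2]
```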
§ EXPERIMENT RESULTS
We present here the extensive experiment results for FIS-ONE. We first discuss our experiment settings and compare the performance of FIS-ONE with other baseline algorithms. We then study the impact of different system components and parameters on FIS-ONE. Our code is available online.[https://github.com/SteveZhuo/FIS-ONE]
§.§ Experiment Settings
Experiment setup: We conduct experiments on the Microsoft's open dataset <cit.> (denoted as `Microsoft' in the results) and in three large shopping malls (denoted as `Ours' in the results). For the Microsoft dataset, we first filter out two-story buildings as we have one labeled signal sample on the starting floor, which makes the indexing straightforward. Since crowdsourced data are usually abundant, we also filter out floors with less than 100 RF signal samples while the other floors remain intact. We end up using the dataset of 152 buildings in which each floor is associated with around 1000 RF signal samples on average, and the number of floors in a building ranges from three to ten. For the shopping malls, two of them have five floors while the other one has seven floors. We collected around 1000 RF signal samples on each floor. We show the floor number distribution of buildings in Figure <ref>. Unless otherwise mentioned, we present the average results of the buildings from each dataset.
Baseline algorithms for comparison: To the best of our knowledge, there is no existing work on floor identification with only one floor-labeled signal sample and the rest of the samples being unlabeled. Hence, we consider the following recent and popular clustering algorithms. Since they only provide clustering results (i.e., no cluster indexing) of RF signal samples, we adapt them with different components from FIS-ONE such that they can be applied to our target scenario. Specifically, once we have the clusters generated by the baseline algorithms, we use our cluster indexing method explained in Section <ref> to label the resulting clusters with floor numbers. In addition, for SDCN <cit.>, DAEGC <cit.> and METIS <cit.>, the bipartite graph constructed from RF signal samples is used as their input. On the other hand, for MDS <cit.>, a matrix representation of RF signal samples is used as an input, as illustrated in Figure <ref>. The missing entries are filled with -120 dBm. To summarize, we have
* SDCN <cit.>: It learns a vector representation of each node in the graph while at the same time grouping the representations into different clusters using a combination of a deep neural network model and a graph convolution network model.
* DAEGC <cit.>: It generates the embedding of each node in the graph using an autoencoder and gradually clusters the embeddings based on the cluster centroids that are being updated during training.
* METIS <cit.>: It is a popular graph partition algorithm, which first coarsens the graph and partitions the coarsened graph to obtain initial clusters. It then uncoarsens the graph to refine the clusters.
* Multidimensional scaling (MDS) <cit.>: It learns the embeddings of RF signal samples by using pairwise distances among the vectors of RF signal samples, which are represented in a matrix form. We here use the pairwise distance of 1 - cosine similarity. The hierarchical clustering is then applied to the learned embeddings to obtain clusters.
Note that the number of clusters obtained by each algorithm is the same as the number of floors in each building. For SDCN and DAEGC, we use their code provided in <cit.> and <cit.>, respectively. We use the python implementation of METIS.
Evaluation metrics: We use the adjusted rand index (ARI) and the normalized mutual information (NMI) to evaluate the clustering performance. Intuitively, ARI <cit.> measures the pairwise data-point similarity between predicted clusters and ground-truth clusters. For instance, if two data points appear in the same cluster by both predicted clustering and ground-truth clustering, ARI will be higher. Formally speaking, let X= (X_1, …, X_N) be the predicted clustering results with corresponding clusters and let Y= (Y_1, …Y_N) be the ground-truth clusters. Let n_ij := |X_i∩Y_j| and let n := ∑_ij n_ij. Then, ARI is defined as
ARI := [∑_ij\binom{n_ij}{2} - (∑_i\binom{|X_i|}{2}∑_j\binom{|Y_j|}{2})/\binom{n}{2}] / [1/2(∑_i\binom{|X_i|}{2}+∑_j\binom{|Y_j|}{2}) - (∑_i\binom{|X_i|}{2}∑_j\binom{|Y_j|}{2})/\binom{n}{2}],
where |X_i| is the number of elements in predicted cluster i. Similarly for |Y_i|.
Mutual information <cit.> measures the similarity of the two distributions formed by the predicted clustering results X and the ground truth clusters Y using Kullback-Leibler divergence, which is defined as
MI(X, Y) := ∑_ijn_ij/nlogn· n_ij/|X_i||Y_j|.
The higher the MI value is, the better the clustering results. Since MI is not bounded, in this work, we use the following normalized version of MI, i.e., NMI:
NMI(X, Y) := 2· MI(X, Y)/H(X) + H(Y),
where H(X) is the entropy of X and defined as
H(X) = -∑_i g(X_i) log g(X_i), with g(X_i) = |X_i|/∑_j|X_j|.
Similarly for H(Y). Note that NMI is in [0, 1].
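Both metrics could be computed with scikit-learn's implementations (with the default arithmetic averaging, its NMI matches the normalization 2·MI(X,Y)/(H(X)+H(Y)) used here); the toy label arrays below are illustrative only.

```python
# Sketch: computing ARI and NMI for predicted floor labels against ground truth.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

y_true = [0, 0, 1, 1, 2, 2]   # ground-truth floor labels
y_pred = [0, 0, 1, 2, 2, 2]   # predicted floor labels
print(adjusted_rand_score(y_true, y_pred))
print(normalized_mutual_info_score(y_true, y_pred))
```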
In addition, for the indexing performance, we use an edit distance <cit.> to measure how similar two given sequences are by considering the number of transpositions needed to make them identical to each other. Consider a five-cluster case as an example. Suppose that the ground-truth indexing of five clusters is given by [F1, F2, F3, F4, F5]. Then, its corresponding ground-truth sequence is S_Y = (1,2,3,4,5). Also, assuming that the predicted indexing is [F1, F4, F3, F2, F5], we have the predicted sequence as S_X = (1,4,3,2,5). Thus, in this example, we need one transposition, i.e., to swap 4 and 2, to make S_X identical to S_Y. Specifically, in this work, we use the following Jaro-Winkler edit distance <cit.>:
Edit Distance :=
0, if m=0,
1/3(m/|S_X| + m/|S_Y| + (m-t)/m), otherwise,
where m is the number of matching numbers, t is the number of transpositions, and |S_X| and |S_Y| are the lengths of sequences S_X and S_Y, respectively.
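Evaluated on the worked example above (m = 5 matching numbers and t = 1 transposition), the formula can be checked with a few lines such as the following; passing m and t explicitly is a simplification of ours.

```python
# Sketch: the edit-distance formula above, checked on S_Y = (1,2,3,4,5),
# S_X = (1,4,3,2,5) with m = 5 matches and t = 1 transposition (swapping 4 and 2).
def jaro_edit_distance(m, t, len_x, len_y):
    if m == 0:
        return 0.0
    return (m / len_x + m / len_y + (m - t) / m) / 3.0

print(jaro_edit_distance(m=5, t=1, len_x=5, len_y=5))  # (1 + 1 + 4/5)/3 ~ 0.933
```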
For all metrics, higher values indicate better performance.
§.§ Overall System Performance Comparison
We report, in Table <ref>, the clustering and indexing results of FIS-ONE and other baseline algorithms. For clustering, FIS-ONE outperforms SDCN and DAEGC in ARI by more than 20% and 23%, respectively. The gain in NMI is also up to 17% and 25%, respectively. These all indicate the effectiveness of RF-GNN, our carefully designed representation learning algorithm with an attention mechanism. SDCN obtains clusters in a self-supervised manner by leveraging the centers of clusters. However, the centers estimated during training may not provide good guidance as RF signals on the same floor can even exhibit quite different characteristics, which leads to a multi-modal distribution. DAEGC also suffers from the same problem as the cluster loss that it uses also involves the computation of cluster centroids. METIS does not perform well as the boundary between different clusters of RF signals may not be obvious due to the signal spillover effect. MDS learns the signal embeddings using the matrix of the superset of APs (MACs) to represent RF signals, so it suffers from the missing-value problem (see Figure <ref>).
As shown in Table <ref>, FIS-ONE also achieves the best performance in edit distance among all the schemes. This demonstrates that our clusters are well-formed based on RF-GNN, and the signal indexing is correctly done based on the optimal solution to the cluster indexing problem, which is transformed into a TSP, where our adapted Jaccard coefficient effectively measures the similarity between clusters. However, the other algorithms show inferior performance in edit distance, which stems from their low-quality clustering performance.
§.§ Ablation Study
To see the gain that FIS-ONE obtains from the attention mechanism in RF-GNN, in Figure <ref>(a) and Figure <ref>(b), we show the performance of FIS-ONE when RF-GNN is used with and without the attention mechanism. As shown in Figure <ref>(a) and Figure <ref>(b), incorporating edge weights as an attention mechanism in the learning process boosts the system performance significantly, with up to 80% improvement in ARI, 49% improvement in NMI, and 34% improvement in edit distance. This is because the edge weight-based attention mechanism correctly incorporates proximity information between different signal samples in learning their vector representations. If two signal samples are collected close to each other in the physical space, their representations in the latent vector space are also close to each other. Hence, the representations learned with the attention mechanism can be more easily separated across different clusters, leading to better clustering performance.
To study the effectiveness of the hierarchical clustering, we next compare, in Figure <ref>(c) and Figure <ref>(d), FIS-ONE with hierarchical clustering against FIS-ONE with the clustering algorithm replaced by K-means. We see that the hierarchical clustering performs better than K-means when integrated into FIS-ONE (with 4% improvement in ARI, 4% improvement in NMI, and 6% improvement in edit distance). This is because the hierarchical clustering better handles the signal representations around the boundary as it gradually merges similar representations together from the very beginning. In contrast, K-means may not be efficient in differentiating the boundary cases.
We further evaluate the improvement of our adapted Jaccard similarity coefficient over the original Jaccard similarity coefficient and present the results in Figure <ref>(a) and Figure <ref>(b). Our adapted coefficient achieves higher edit distance with lower standard deviation compared to the original one, meaning that this adapted coefficient better captures the signal spillover effect between floors by considering the appearance frequency of APs (MACs) and thus provides a better similarity measure between signal clusters. As such, the cluster indexing can be done more accurately.
In solving the TSP, the exact algorithm can also be replaced by an approximation algorithm to improve the computational efficiency, possibly at the cost of accuracy loss. We show the results in Figure <ref>(c) and Figure <ref>(d), where the 2-opt approximation algorithm <cit.> is used to obtain a near-optimal solution to the TSP. We can see that the performance degradation is insignificant (∼ 3%) by adopting the approximation algorithm. Hence, for tall buildings with many floors, we expect that one can resort to the approximation algorithm as a cost-efficient alternative to achieve accurate floor identification without much performance degradation.
§.§ System Parameter Study
For practical deployment, we have a wide range of choices for the embedding dimension used in our proposed FIS-ONE and the other baseline algorithms. To check the sensitivity to this parameter, we vary the embedding dimension from 8 to 64 for each scheme and run the experiments on the two datasets. As presented in Figure <ref> and Figure <ref>, FIS-ONE consistently performs well and better than the other baselines across different choices of embedding dimension, meaning that it is robust to changes in the embedding dimension. Note that METIS has no embedding-dimension parameter. We, however, plot its performance for consistency.
We are also interested in evaluating how FIS-ONE performs for different building types, i.e., buildings with different numbers of floors. Hence, we summarize the statistics in Figure <ref>. We see that FIS-ONE performs well for all building types with small fluctuations, and it is consistently better than the other baseline algorithms. The performance of FIS-ONE and the other baseline algorithms overall fluctuates a bit more for taller buildings. This is because there are fewer such buildings (see Figure <ref>), exhibiting larger variations due to a smaller sample size. Nonetheless, FIS-ONE still performs well in such cases, which again verifies its effectiveness.
§ DISCUSSION
We discuss here the feasibility of using only one labeled signal sample from an arbitrary floor instead of the bottom (or top) floor. So far we have assumed that the labeled sample is collected on the bottom (or top) floor, which is used as an indicator of the starting point for the TSP. It ensures that only one path with the maximum sum of adapted Jaccard similarity coefficients along the path can be obtained. Thus, we can index the clusters correspondingly, as explained in Section <ref>.
Now, we explain how we can relax the assumption such that the labeled sample can be collected from an arbitrary floor. We first do not consider the labeled signal sample in the clustering process after its vector representation is obtained. Since there is no fixed starting point for the TSP, we solve the TSP with all possible starting points, e.g., leading to N orderings for a building of N floors. From these orderings, we pick out the one with the maximum sum of adapted Jaccard similarity coefficients and use the ordering for cluster indexing. However, there are two cases to consider.
Case 1: The building has an odd number of floors, and the labeled sample is collected from the middle floor. For instance, as shown in Figure <ref>, there are five floors in the building, and the labeled sample is collected from the third floor. Hence, it is not possible to index the ordering, as there is no indicator which side of the sequence contains the starting floor.
Case 2: For all the other scenarios, given a labeled sample, we can always find two candidate clusters to locate the labeled signal sample, as shown in Figure <ref>. Then, we “predict" which candidate cluster the labeled sample belongs to by finding the cluster that is closer to the labeled sample. The distance between cluster i and the vector representation of the labeled sample, say, r, is calculated as
d(r, C_i) := ∑_r' ∈C_ir'-r_2/|C_i|.
In other words, we calculate the averaged pairwise distance between the vector representations in C_i and the representation of the labeled sample.
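A minimal sketch of this tie-breaking step, assuming NumPy arrays of member representations per candidate cluster (names are our own), is given below.

```python
import numpy as np

def nearest_cluster(r, candidates):
    # candidates: list of (cluster_id, members) with members an (n_i, d) array;
    # d(r, C_i) is the mean Euclidean distance from r to the members of C_i.
    def avg_dist(members):
        return np.linalg.norm(members - r, axis=1).mean()
    return min(candidates, key=lambda c: avg_dist(c[1]))[0]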
To check the feasibility of this approach, we conduct an experiment on the two datasets with a labeled sample obtained from a random floor in Case 2. The experiment is repeated ten times, and the average results are presented in Figure <ref>. We see that FIS-ONE still performs well without much performance degradation (∼7%).
§ CONCLUSION
We proposed FIS-ONE, a floor identification system for crowdsourced RF signals with only one labeled signal sample. Based on the observation of signal spillover, FIS-ONE clusters the RF signals effectively and indexes the clusters accurately. As an integral component of FIS-ONE, RF-GNN enables efficient representation learning for a large number of RF signals on a (possibly dynamic) graph. Extensive experiment results on Microsoft's open dataset and in three large shopping malls validated the effectiveness of FIS-ONE and demonstrated its superior performance over baseline algorithms (with up to 23% improvement in ARI and 25% improvement in NMI). We also discussed how FIS-ONE can be extended to the case when the one labeled signal sample comes from an arbitrary floor. We believe that we have taken a first step towards unsupervised floor identification for crowdsourced RF signals.
|
http://arxiv.org/abs/2307.04413v1 | 20230710083640 | Quantum Zeno effect: a qutrit controlled by a qubit | [
"Komal Kumari",
"Garima Rajpoot",
"Sudhir Ranjan Jain"
] | quant-ph | [
"quant-ph"
] |
For a three-level system monitored by an ancilla, we show that the quantum Zeno effect can be employed to control quantum jumps for error correction. Further, we show that we can realize a cNOT gate, and effect dense coding and teleportation. We believe that this work paves the way to generalize the control of a qudit.
§ INTRODUCTION
Quantum errors can be corrected only by developing methods to control quantum jumps. Recently, the quantum Zeno effect <cit.> has been employed to delay spontaneous emission, giving us time to detect possible erroneous jumps. Moreover, to observe and hence control quantum jumps, QZE has been shown to realize Dehmelt-like shelving <cit.>. This work was inspired by a very interesting and important experiment on “catching" and “reversing" a quantum jump by Minev et al. <cit.>. To take these thoughts further for realistic applications, we need to show this method of control for multi-level systems. Here we take the next step and consider a three-level system which has the possibility of three distinct frequencies ω_12, ω_23 and ω_13. One of the levels is monitored by a detector: a two-level ancillary qubit <cit.>. In contrast to the control of a two-level system, where there is just one frequency, here there are three frequencies. Thus there are multiple time-scales under consideration. The aim of this article is to study the possibility of controlling spontaneous errors and shelving in the sense of Dehmelt, as improvised in <cit.>.
The plan of the paper is as follows. In Section 2.1, we state the problem and present the principle-of-least-action approach relevant to our physical situation. This is based on the mathematical treatment of an n-level system, the details of which are reviewed in the Appendix. The solution of the evolution equation of the density matrix in terms of coordinates and conjugate momenta is shown. In Section 2.2, the construction of a cNOT gate using a three-level system is explained. It is interesting to see that the three-level system considered here can be related to dense coding and teleportation, explained in Sections 2.3 and 2.4.
§ QUTRIT DYNAMICS
We have a three-level system, i.e., a qutrit, with levels |1⟩, |2⟩ and |3⟩ and transition frequencies ω_12, ω_23 and ω_31.
For a three-level system, N=3, the density matrix is
ρ=1/3𝕀̂+1/2∑_i=1^8x_ix̂_i,
where 1≤ j<k≤ N, 1≤ l≤ N-1 <cit.>. For a detailed description, see Appendix. The operators are
x̂_1 = û_12 = |1⟩⟨2|+|2⟩⟨1|
x̂_2 = v̂_12 = -ι(|1⟩⟨2|-|2⟩⟨1|)
x̂_3 = ŵ_1 = |1⟩⟨1|-|2⟩⟨2|
x̂_4 = û_13 = |1⟩⟨3|+|3⟩⟨1|
x̂_5 = v̂_13 = -ι(|1⟩⟨3|-|3⟩⟨1|)
x̂_6 = û_23 = |2⟩⟨3|+|3⟩⟨2|
x̂_7 = v̂_23 = -ι(|2⟩⟨3|-|3⟩⟨2|)
x̂_8 = ŵ_2 = √(1/3)(|1⟩⟨1|+|2⟩⟨2|-2|3⟩⟨3|).
The density operator in the matrix form is
ρ̂ =[ 1/3+x_3/2+x_8/√(3) 1/2(x_1-ιx_2) 1/2(x_4-ιx_5); 1/2(x_1+ιx_2) 1/3-x_3/2+x_8/√(3) 1/2(x_6-ιx_7); 1/2(x_4+ιx_5) 1/2(x_6+ιx_7) 1/3-2x_8/√(3) ].
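As an illustrative numerical check (our own, not part of the original analysis), the eight operators and the expansion ρ = 𝕀/3 + (1/2)∑_i x_i x̂_i can be assembled in a few lines of NumPy; the helper names are ours.

```python
# Sketch (numpy): the eight qutrit operators and the density matrix
# rho = I/3 + (1/2) sum_i x_i xhat_i, in the basis {|1>, |2>, |3>}.
import numpy as np

def ket(n):                      # |1>, |2>, |3> as unit column vectors
    v = np.zeros((3, 1), dtype=complex)
    v[n - 1] = 1.0
    return v

def op(a, b):                    # |a><b|
    return ket(a) @ ket(b).conj().T

xhat = [
    op(1, 2) + op(2, 1),                                     # u_12
    -1j * (op(1, 2) - op(2, 1)),                             # v_12
    op(1, 1) - op(2, 2),                                     # w_1
    op(1, 3) + op(3, 1),                                     # u_13
    -1j * (op(1, 3) - op(3, 1)),                             # v_13
    op(2, 3) + op(3, 2),                                     # u_23
    -1j * (op(2, 3) - op(3, 2)),                             # v_23
    np.sqrt(1 / 3) * (op(1, 1) + op(2, 2) - 2 * op(3, 3)),   # w_2
]

def rho_from_coords(x):
    return np.eye(3, dtype=complex) / 3 + 0.5 * sum(xi * Xi for xi, Xi in zip(x, xhat))

rho = rho_from_coords(np.zeros(8))
print(np.trace(rho).real)        # 1.0 for any choice of the coordinates x_i
```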
§.§ Monitoring a single level
Consider that the qutrit is interacting with an ancilla, a two-level system prepared initially in the state |0⟩ of σ_z, Fig. <ref>. The ancilla monitors the third level of the qutrit with a coupling strength J_3=√(α_3/δ t), where α_3 is a stochastic parameter related to the frequency of the detector. The qutrit+ancilla system evolves for a time δ t and then the σ_y operator of the ancilla is measured. If the outcome of the measurement is 0, the qutrit is in state |1⟩ or |2⟩. This evolution and measurement are performed n times for a total time T=nδ t. The ancilla is reset after every measurement. The Hamiltonian of the qutrit+ancilla system is
H =H_s+H_s-d
=ω_12(|1⟩⟨2|+|2⟩⟨1|) + ω_23(|2⟩⟨3|+|3⟩⟨2|)+ ω_13(|1⟩⟨3|+|3⟩⟨1|)+J |3⟩⟨3|⊗σ_y^(3),
where H_s-d=J|3⟩⟨3|⊗σ_y^(3), denoting that the state |3⟩ is entangled with the ancilla and a measurement of the y observable of the ancilla. The Kraus operators for measurement are given by
ℳ_r =⟨r|exp[-ιH_s-dδt]|0⟩
= ⟨r|𝕀-ιH_s-d δt -1/2H_s-d^2 (δt)^2|0⟩
ℳ_0 = 𝕀-α_3/2|3⟩⟨3|δt
ℳ_1 =√(α_3δt)|3⟩⟨3|.
Upon unitary evolution of the system via the operator 𝒰=exp(-ι H_sδ t) and measurements post-selected on r=0, we obtain
ρ(t+δ t)=ℳ^0 𝒰ρ𝒰^†ℳ^0†/Tr[ℳ^0 𝒰ρ𝒰^†ℳ^0†].
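A rough numerical sketch of this repeated evolve-and-measure update is given below; the frequency values, the measurement strength α_3 and the time step are illustrative choices of ours, not values used in the paper.

```python
# Sketch: repeated unitary evolution and null-result (r = 0) measurement update of
# the qutrit state. Parameter values are illustrative only.
import numpy as np
from scipy.linalg import expm

w12, w23, w13, alpha3, dt = 1.0, 1.2, 0.8, 20.0, 0.01
P3 = np.diag([0, 0, 1]).astype(complex)                   # |3><3|
Hs = np.zeros((3, 3), dtype=complex)
Hs[0, 1] = Hs[1, 0] = w12
Hs[1, 2] = Hs[2, 1] = w23
Hs[0, 2] = Hs[2, 0] = w13
U = expm(-1j * Hs * dt)
M0 = np.eye(3) - 0.5 * alpha3 * dt * P3                   # null-result Kraus operator

rho = np.diag([1, 0, 0]).astype(complex)                  # start in |1>
for _ in range(1000):
    rho = M0 @ U @ rho @ U.conj().T @ M0.conj().T
    rho = rho / np.trace(rho).real                        # post-select on r = 0
print(np.diag(rho).real)   # |3> population remains small in this strong-measurement regime
```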
By extremising the action obtained for the Joint Probability Distribution Function (JPDF) for the system, we obtain eight coupled equations, their canonical conjugates, and a functional ℱ incorporating the back-action of measurement performed by the detector <cit.>
ẋ_1 =ω_23x_5+ω_13x_7+1/3α_3x_1(1-2√(3)x_8)
ẋ_2 =-2ω_12x_3-ω_23x_4+ω_13x_6+α_3/3x_2(1-2√(3)x_8)
ẋ_3 =2ω_12x_2+ω_13x_5-ω_23x_7+α_3/3x_3(1-2√(3)x_8)
ẋ_4 = ω_23x_2-ω_12x_7-α_3/6x_4(1+4√(3)x_8)
ẋ_5 = -ω_23x_1 +ω_12x_6 - ω_13(x_3+2√(3)x_8)-α_3/6x_5(1+4√(3)x_8)
ẋ_6 = -ω_13x_2-ω_12x_5-α_3/6x_6(1+4√(3)x_8)
ẋ_7 = -ω_13x_1+ω_12x_4+ω_23(x_3-2√(3)x_8)-α_3/6x_7(1+4√(3)x_8)
ẋ_8 =√(3)/2[ω_13x_5+ω_23x_7+2/9α_3(1-√(3)x_8(1+2√(3)x_8))]
The functional ℱ is given by ℱ=-α_3/3x_8(1-2√(3)x_8). The dynamical Hamiltonian is given by
ℋ =∑_i=1^8 p_iẋ_i+ℱ.
The evolution of the canonically conjugate momenta follows from Hamilton's equations,
ṗ_i=-∂ℋ/∂ x_i.
Thus we obtain the coupled equations:
ṗ_1 = -α_3/3(1-2√(3)x_8)p_1+ω_23p_5+ω_13p_7
ṗ_2 = -α_3/3(1-2√(3)x_8)p_2-2ω_12p_3-ω_23p_4 +ω_13p_6
ṗ_3 = 2ω_12p_2-α_3/3(1-2√(3)x_8)p_3+ω_13p_5-ω_23p_7
ṗ_4 = ω_23p_2+α_3/6 (1+4√(3)x_8)p_4
ṗ_5 = ω_23 p_1 -ω_13p_3+α_3/6 (1+4√(3)x_8)p_5+ω_12p_6-√(3)/2ω_13p_8
ṗ_6 =-ω_13p_2-ω_12p_5+α_3/6(1+4√(3)x_8)p_6
ṗ_7 = -ω_13p_1+ω_23p_3+ω_12p_4+α_3/6(1+4√(3)x_8)p_7-√(3)/2ω_23p_7
ṗ_8 =2/√(3)α_3(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7+2x_8p_8)
+2√(3)(ω_13p_5+ω_23p_7)+α_3/3(p_8+1)-4/√(3)α_3x_8.
The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequency is low compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). At an intermediate frequency, the system shows oscillations for some time, after which it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is high compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6 and the system does not evolve any further.
The phase space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for a frequency lower and higher than the transition frequencies, respectively. In Fig. <ref>, for each coordinate, the qutrit shows evolution in the phase-space. However, in the Zeno regime, Fig. <ref>, it is evident that localization in x(p) is accompanied by delocalization of p(x). This shows that the system is shelved to a state. In terms of stability, localization in x or p corresponds to stability along that coordinate. It is clear that both x and p are not stable simultaneously, hence the points are saddle points, as in <cit.>.
§.§ Creating a cNOT gate
The three-level system can be used as a control and the ancilla as a target, such that when the system is in |1⟩ or |2⟩, it does nothing to the ancilla (the ancilla stays in the initial state |0⟩_(n)), whereas it flips the ancilla to |1⟩_(n) when the qutrit is in |3⟩. Such a gate can be represented as
cNOT =(|1⟩⟨1|+|2⟩⟨2|)⊗𝕀̂ + |3⟩⟨3|⊗σ_x^(n).
The states on which the cNOT acts are |1,0⟩, |2,0⟩ or |3,0⟩, where the first state is the qutrit state which controls the target ancilla initially in the state |0⟩. When cNOT acts on |3,0⟩, it gives |3,1⟩ and leaves the others unchanged.
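As a small consistency check (our own, with an assumed basis ordering), the 6×6 matrix of this cNOT can be built and applied to |3,0⟩ as follows.

```python
# Sketch (numpy): the qutrit-controlled cNOT above, on the six-dimensional
# qutrit (x) ancilla space, ordered as |1>,|2>,|3> (x) |0>,|1>.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
P12 = np.diag([1, 1, 0])                  # |1><1| + |2><2|
P3 = np.diag([0, 0, 1])                   # |3><3|
cnot = np.kron(P12, I2) + np.kron(P3, sx)

state_30 = np.zeros(6); state_30[4] = 1   # index 4 = |3> (x) |0>
print(cnot @ state_30)                    # unit weight on index 5 = |3> (x) |1>
```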
§.§ Dense coding and teleportation
Some of the applications of entangled pairs are dense coding and teleportation. Dense coding uses one quantum bit together with a shared EPR pair to encode and transmit two classical bits <cit.>. Without using entanglement, only one classical bit of information can be extracted. Teleportation is the opposite of dense coding, as it uses two classical bits to transmit the state of an unknown qubit. The initial setup for both includes two parties, Alice and Bob, who wish to communicate. Each is sent one of the entangled particles of an EPR pair
|ψ_0⟩=1/√(2) (|0⟩_A|0⟩_B+|1⟩_A|1⟩_B).
Each can perform transformations only on their particle unless they send over their particle.
Dense coding: Alice wants to transmit the state of two classical bits encoding one of the numbers {0,1,2,3}, depending on which, she performs one of the transformations {I,X,Y,Z} on her qubit of |ψ_0⟩. The resulting state is shown in table <ref>.
Bob decodes the information in two steps: cNOT to the entangled pair followed by Hadamard H on the first qubit:
Bob finally measures the two qubits to obtain the binary encoding sent by Alice.
Quantum teleportation: Due to the no-cloning theorem, the original state is destroyed and finally created at the target, hence the name teleportation. Alice has an qubit with unknown state |ϕ⟩=a|0⟩+b|1⟩. Both Alice and Bob share a part of the EPR pair just like in dense coding (<ref>). The initial state is then the three-qubit state:
|ϕ⟩⊗|ψ_0⟩ =1/√(2)(a|0⟩⊗(|00⟩+|11⟩)+b|1⟩⊗(|00⟩+|11⟩))
=1/√(2)(a|000⟩+a|011⟩+b|100⟩+b|111⟩).
Alice controls the first two qubits and Bob controls the third. Alice uses the decoding step used by Bob in dense coding to the first two qubits in (<ref>), i.e., cNOT on first two followed by Hadamard on first qubit
(H⊗I⊗I) (cNOT⊗I)(|ϕ⟩⊗|ψ_0⟩)
=(H⊗I⊗I)1/√(2)(a|000⟩+a|011⟩+b|110⟩+b|101⟩)
=1/2[a(|000⟩+|011⟩+|100⟩+|111⟩)+b(|010⟩+|001⟩-|110⟩-|101⟩)]
=1/2(|00⟩(a|0⟩+b|1⟩)+|01⟩(a|1⟩+b|0⟩)+|10⟩(a|0⟩-b|1⟩)+|11⟩(a|1⟩-b|0⟩)).
Upon measuring the first two qubits, Alice obtains one of the four states |00⟩, |01⟩, |10⟩ or |11⟩, depending upon which, Bob's qubit is projected to one of the four states a|0⟩+b|1⟩, a|1⟩+b|0⟩, a|0⟩-b|1⟩ or a|1⟩-b|0⟩. Alice sends her result as two classical bits to Bob. The original state |ϕ⟩ is contained in Bob's qubits. Upon receiving the two bits, Bob reconstructs the state by applying decoding transformation to his qubit:
Bob will finally have the qubit Alice wished to send.
§.§ Applications of entanglement using three-level system
We have considered a three-level system where the third level is being monitored by an ancilla. For communication and teleportation using the qutrit, we need two of the states to act as the ground levels and the third, which is being monitored, as the higher level. This will enable us to create a cNOT gate for the qutrit. Further, we need the regular Pauli operators corresponding to this setup, such that the bit-flip operator acts on the states as
X_13|1⟩=|3⟩, X_23|2⟩=|3⟩, X_13+23|3⟩= |1⟩+|2⟩/√(2).
Hence, the operators may be written as
X_13=[ 0 0 1; 0 0 0; 1 0 0; ] X_23=[ 0 0 0; 0 0 1; 0 1 0; ].
The resulting X operator read as
X=X_13+X_23/√(2)=1/√(2)[ 0 0 1; 0 0 1; 1 1 0; ].
We have
X|1⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩, X|2⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩ and X|3⟩=|1⟩+|2⟩/√(2).
Similarly, the Y operator is
Y=1/√(2)[ 0 0 1; 0 0 1; -1 -1 0; ],
with
Y|1⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩, Y|2⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩ and Y|3⟩=|1⟩+|2⟩/√(2).
The phase operator should act as
Z(|1⟩+|2⟩/√(2))=(|1⟩+|2⟩/√(2)) and Z|3⟩=-|3⟩.
That is,
Z=1/√(2)[ 1 0 0; 0 1 0; 0 0 -√(2); ].
The cNOT gate is given by
cNOT=(|1⟩⟨1|+|2⟩⟨2|)⊗ I^(n)+|3⟩⟨3|⊗σ_x^(n),
where superscript (n) represents the ancilla. To find the Hadamard operator, note that
|ψ_0⟩ =1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩)
H 1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩)=|1⟩+|2⟩/√(2)
H 1/√(2)((|1⟩+|2⟩)/√(2)-|3⟩)=|3⟩.
These are effected by the Hadamard gate:
H=1/2√(2)[ 1 1 √(2); 1 1 √(2); √(2) √(2) -2; ].
Now we have a set of operators at our disposal, acting as gates on this three-level system for dense coding and teleportation.
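Before using them, the stated actions of X and of the Hadamard operator can be verified numerically with a short script such as the one below (a check of ours, not part of the original text).

```python
# Sketch (numpy): checking the action of X and H on the qutrit basis |1>, |2>, |3>.
import numpy as np

s2 = np.sqrt(2)
X = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]]) / s2
H = np.array([[1, 1, s2], [1, 1, s2], [s2, s2, -2]]) / (2 * s2)

e1, e2, e3 = np.eye(3)
plus = (e1 + e2) / s2                       # (|1> + |2>)/sqrt(2)
psi0 = (plus + e3) / s2                     # the shared state |psi_0>

print(np.allclose(X @ e1, e3 / s2))         # X|1> = |3>/sqrt(2)
print(np.allclose(X @ e3, plus))            # X|3> = (|1> + |2>)/sqrt(2)
print(np.allclose(H @ psi0, plus))          # H maps |psi_0> to (|1> + |2>)/sqrt(2)
print(np.allclose(H @ ((plus - e3) / s2), e3))
```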
Dense coding: Alice encodes the digits {0,1,2,3} in state |ψ_0⟩ and performs transformations on her part of the state. Let the states of ancilla be {|g⟩,|e⟩}, the eigenstates of σ_z. These are entangled with the qutrit to parallel the EPR pair of qubits.
Then, Bob decodes using cNOT followed by Hadamard on the (first) qutrit. Here, the cNOT has control as the three-level system and target as a two level system. Hence, the flip operator will be the usual 2D Pauli σ_x. This is shown in table <ref>.
Teleportation: Alice has an unknown qubit |ϕ⟩=a|g⟩+b|e⟩ (ancilla). She wants to send this to Bob through a classical channel. They each share a part of the state
|ψ_0⟩=1/√(2)[|11⟩+|12⟩+|21⟩+|22⟩/2+|33⟩],
so that the combined state initially is
|ϕ⟩⊗|ψ_0⟩ =1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩)
+b(|e11⟩+|e12⟩+|e21⟩+|e22⟩+2|e33⟩)].
Alice controls the first two states in the tensor product in (<ref>) and Bob controls the third state. For the decoding step, Alice applies cNOT (|g⟩⟨g|⊗ I_3+|e⟩⟨e|⊗ X_3) on the first two states of the product followed by Hadamard on the first
(H_2⊗I⊗I) (cNOT⊗I)(|ϕ⟩⊗|ψ_0⟩)
= (H_2⊗I⊗I)1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩)
+√(2)b(|e31⟩+|e32⟩+1/√(2)(|e13⟩+|e23⟩))]
= 1/4[a(|g11⟩+|e11⟩+|g12⟩+|e12⟩+|g21⟩+|e21⟩+|g22⟩+|e22⟩+2|g33⟩+2|e33⟩)
+√(2)b(|g31⟩-|e31⟩+|g32⟩-|e32⟩+|g13⟩-|e13⟩+|g23⟩-|e23⟩)]
= 1/2√(2)[|g1⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e1⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩)
+|g2⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e2⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩)
+|g3⟩(√(2)a|3⟩+ √(2)b(|1⟩+|2⟩)/√(2))+|e3⟩(√(2)a|3⟩- √(2)b(|1⟩+|2⟩)/√(2))].
Thus the final encoded state is
|ψ⟩_f =1/2[|g⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))+b|3⟩}+|e⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))-b|3⟩}
+|g⟩|3⟩{a|3⟩+b(|1⟩+|2⟩/√(2))}+|e⟩|3⟩{a|3⟩-b(|1⟩+|2⟩/√(2))}]
Upon measuring the first two states, Alice will obtain one of the four states mentioned in the first column of Tab. <ref>, which she sends as two classical bits to Bob. Upon receiving them, Bob reconstructs the state by applying a decoding transformation (<ref>) to his part of the product state which contains the unknown state |ϕ⟩. Thus Bob will finally have the qubit state Alice wanted to send.
§.§ Monitoring two levels
Consider a qutrit interacting with two ancillae. The ancillae are again two-level systems, one of which monitor the state |2⟩ whereas the other monitors the state |3⟩ as shown in Fig. <ref>. The interaction strength between qutrit and ancilla monitoring |2⟩ (|3⟩) is J_2=√(α_2/δ t) (J_3=√(α_3/δ t)).
The Hamiltonian for this system can be given as
H =ω_12(|1⟩⟨2|+|2⟩⟨1|)+ω_23(|2⟩⟨3|+|3⟩⟨2|)+ω_13(|1⟩⟨3|+|3⟩⟨1|)+ H_s-d,
where
H_s-d = J_2|2⟩⟨2|⊗σ_y^(2)⊗𝕀^(3) + J_3 |3⟩⟨3| ⊗𝕀^(2)⊗σ_y^(3) + (J_2 |2⟩⟨2|+ J_3 |3⟩⟨3|) ⊗σ_y^(2) ⊗σ_y^(3).
The Kraus operators are given by
ℳ_r =⟨r_1 r_2|exp[-ιH_s-d δt]|00⟩
ℳ_00 = 𝕀 -J_2^2|2⟩⟨2| (δt)^2 -J_3^2|3⟩⟨3| (δt)^2
ℳ_01 = -J_3|3⟩⟨3| δt -ιJ_2^2 |2⟩⟨2| (δt)^2
ℳ_10 = -J_2|2⟩⟨2| δt -ιJ_3^2 |3⟩⟨3| (δt)^2
ℳ_11 = ι(J_2 |2⟩⟨2| +J_3 |3⟩⟨3|) δt.
So we have a 2× 2 Kraus operator matrix. The unitary evolution of qutrit under system Hamiltonian H_s and measurement postselected on r=00, we obtain 8 coupled dynamic equations from the density matrix
ρ(t+δ t)=ℳ_00𝒰ρ𝒰^†ℳ_00^†/Tr[ℳ_00𝒰ρ𝒰^†ℳ_00^†].
These equations are
ẋ_1 = -α_2 x_1x_3 +ω_23x_5 +ω_13x_7+1/3(α_2-2α_3)x_1(2√(3)x_8-1)
ẋ_2 = - [2ω_12x_3 + α_2 x_2x_3 + ω_23 x_4-ω_13 x_6 -1/3(α_2-2α_3)x_2(2√(3)x_8-1) ]
ẋ_3 = 1/3 [6ω_12x_2 + 2α_3 x_3 +3 ω_23 x_5 -3ω_23x_7-4√(3)α_3 x_3 x_8-α_2(1+x_3)(-2+3x_3-2√(3)x_8)]
ẋ_4 =1/3[3ω_23x_2-3ω_12x_7 +α_2 x_4 (2-3x_3+2√(3)x_8) -α_3 x_4 (1+4√(3) x_8) ]
ẋ_5 =1/3[-3ω_23x_1+(2α_2-α_3-3α_2 x_3)x_5 +3ω_12x_6+2√(3)(α_2-2α_3)x_5x_8-3ω_13(x_3+2√(3)x_8)]
ẋ_6 = [-ω_13 x_2 - ω_12x_5 - 1/3x_6 (α_2+α_3+3α_2 x_3 - 2√(3)α_2 x_8 +4√(3)α_3 x_8) ]
ẋ_7 = [-ω_13x_1 +ω_12x_4 -1/3 (α_2+α_3 +3 α_2 x_3)x_7 +2/√(3)(α_2-2α_3)x_7x_8+ω_23 (x_3-2√(3)x_8) ]
ẋ_8 = 1/6√(3) [4 α_3 + 9 ω_13 x_5+9 ω_23x_7-4α_3x_8(√(3)+6x_8)+α_3(-2+3x_3+2√(3)(1-3x_3)x_8+12x_8^2)]
The functional incorporating the backaction is ℱ=α_2x_3-2/3(α_2+α_3+√(3)α_2x_8-2√(3)α_3x_8). The corresponding conjugate momenta are
p_1 =α_2x_3p_1-1/3(α_2-2α_3)(2√(3)x_8-1)p_1+ω_23p_5+ω_13p_7
p_2 =α_2x_3 p_2-1/3(α_2-2α_3)(2√(3)x_8-1)p_2-2ω_12p_3-ω_23p_4+ω_13p_6
p_3 =α_2x_1p_1+2ω_12p_2+α_2x_2p_2-2/3α_3p_3+4√(3)/3α_3x_8p_3+α_2/3(1+6x_3-2√(3)x_8)p_3
+α_2x_4p_4+α_2x_5p_5+ω_13p_5+α_2x_6p_6+α_2x_7p_7-ω_23p_7-α_3/2√(3)p_8+x_8p_8-α_2
p_4 =ω_23p_2-α_2/3(2-3x_3+2√(3)x_8)p_4+α_3/3(1+4√(3)x_8)p_4-ω_12p_7
p_5 =-ω_23p_1-ω_23p_3-1/3(2α_2-α_3-3α_2x_3)p_5-2√(3)/3(α_2-2α_3)x_8p_5+ω_12p_6-√(3)/2ω_13p_8
p_6 =-ω_13p_2-ω_12p_5+1/3(α_2+α_3+3α_2x_3-2√(3)α_2x_8+4√(3)α_3x_8)p_6
p_7 =-ω_13p_1+ω_23p_3+ω_12p_4+1/3(α_2+α_3+3α_2x_3)p_7-2/√(3)(α_2-2α_3)x_8p_7-√(3)/2ω_23p_8
p_8 =-2/√(3)(α_2-2α_3)(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7-1?)-2/√(3)α_2p_3
+2√(3)(ω_13p_5+ω_23p_7)+2/3α_3(1+2√(3)x_8)p_8-α_3/3(1-3x_3)p_8-4/√(3)α_3x_8p_8.
The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequencies of the two detectors are low compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). In an intermediate frequency range, the system shows oscillations for some time, after which it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is high compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6, just as in the previous section where a single state was being monitored.
The phase-space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for detector frequencies lower and higher than the transition frequencies, respectively. In Fig. <ref>, each coordinate of the qutrit evolves throughout phase space. However, in the Zeno regime, Fig. <ref>, the system follows the uncertainty principle: as soon as the position coordinates are fixed at a particular value, the uncertainty in the momentum coordinates peaks. This also shows that there is a saddle point. The qutrit gets shelved in the position coordinates and is delocalised in the momentum coordinates.
§ CREATING A TOFFOLI GATE
The Kraus operators in (<ref>) indicate that the system may be in state 1 (M_00), state 2 (M_10), state 3 (M_01), or in a combination of 2 and 3, i.e., anywhere but in state 1 (M_11). This can be interpreted as an operator
T = |1⟩⟨1|⊗|1⟩⟨1|⊗(𝕀⊗𝕀) + |2⟩⟨2|⊗|1⟩⟨1|⊗(X⊗𝕀)
+|1⟩⟨1|⊗|3⟩⟨3|⊗(𝕀⊗X) + |2⟩⟨2| ⊗|3⟩⟨3|⊗(X⊗X).
Consider 𝕀⊗𝕀, 𝕀⊗ X and X⊗𝕀 as giving an outcome of 0, and X⊗ X as producing an outcome of 1. The setup can then be interpreted as a Toffoli gate. For instance, if the control is (1,1) and the target is 0, the state is |1,1,(00)≡ 0⟩. If the control is (2,3) and the target is 1, the state is |2,3,(11)≡ 1⟩.
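This interpretation is easy to verify numerically. The sketch below builds the operator T as a matrix (qutrit levels |1⟩,|2⟩,|3⟩ mapped to indices 0,1,2; the two ancilla qubits form the target register) and applies it to the two example states quoted above; the basis labelling and layout are our own bookkeeping choices.

import numpy as np

def basis(i, d):
    v = np.zeros(d); v[i] = 1.0; return v

def proj(i, d):
    v = basis(i, d); return np.outer(v, v)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Operator T from the equation above; qutrit levels |1>,|2>,|3> -> indices 0,1,2
T = (np.kron(np.kron(proj(0, 3), proj(0, 3)), np.kron(I2, I2))
     + np.kron(np.kron(proj(1, 3), proj(0, 3)), np.kron(X, I2))
     + np.kron(np.kron(proj(0, 3), proj(2, 3)), np.kron(I2, X))
     + np.kron(np.kron(proj(1, 3), proj(2, 3)), np.kron(X, X)))

# control (1,1), target (00): the target register is left unchanged
in_11 = np.kron(np.kron(basis(0, 3), basis(0, 3)), np.kron(basis(0, 2), basis(0, 2)))
# control (2,3), target (00): both target qubits are flipped, (00) -> (11)
in_23 = np.kron(np.kron(basis(1, 3), basis(2, 3)), np.kron(basis(0, 2), basis(0, 2)))
out_11, out_23 = T @ in_11, T @ in_23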
§ CONCLUDING REMARKS
Control of a qutrit is demonstrated by monitoring one or two of its levels. Due to the quantum Zeno effect, the state of the system is shown to shelve into a state other than the bare states of the three-level system. Treating a three-level system takes us beyond the Pauli algebra; here we work with the Gell-Mann matrices. In addition, we write a new set of operators to realise the cNOT gate with the qutrit as the control and the two-level ancilla as the target. With these operators, applications of entanglement in a three-level system have been realised for dense coding and teleportation for the purpose of quantum communication. Application of the system to universal gates allows us to manipulate the states. In general, the conclusions also hold for an N-level system.
0.5 truecm
Data Availability Statement: No Data associated in the manuscript
0.25 truecm
Conflict of interests: Authors declare no conflict of interest.
§ APPENDIX: DENSITY MATRIX OF AN N-LEVEL SYSTEM
An N-level system is defined by a Bloch vector whose components are expectation values of some observables <cit.>. The number of observables needed to identify the state is N^2-1. These correspond to N^2-1 independent parameters used to define a Hermitian density matrix operator ρ̂ with the constraint Trρ̂=1. Choosing the generators of SU(N) for the observables x̂_i, the density matrix is determined from their expectation values ⟨x̂_i⟩ as
ρ=1/N𝕀̂_N + 1/2∑_i=1^N^2-1⟨x̂_i⟩x̂_i.
The properties of the density matrix associated with a Hilbert space ℋ_N is given as
ρ∈ℒ(ℋ_N) : (i) Trρ=1 (ii) ρ = ρ^† (iii) ρ_i ≥ 0,
where ℒ is the space of linear operators on ℋ_N, i=1,2,… N and ρ_i's are the eigenvalues of ρ. The property (iv) Trρ^2≤ 1 follows from Eq. (<ref>). Equality holds when ρ is a pure state.
Following these properties, the operators x̂_i satisfy
( i) x̂_i = x̂_i^† ( ii) Tr [x̂_i] = 0 ( iii) Tr [x̂_ix̂_j] = 2δ_ij.
The x_i's are characterised with structure constants f_ijk, completely asymmetric tensor and g_ijk, completely symmetric tensor of Lie algebra
[x̂_i,x̂_j] = 2if_ijk x̂_k
{x̂_i,x̂_j} =2/Nδ_ijÎ_N+2g_ijkx̂_k.
By imposing (iv), the length of the operators x̂_i are restricted as
|x|≡√(x_ix_i)≤√(2(N-1)/N).
Systematic construction of the generators generalising the Pauli spin operators for an N-level system is given by <cit.>
{x̂_i}_i=1^N^2-1 = {û_jk,v̂_jk,ŵ_l}
where
û_jk = |j⟩⟨k| + |k⟩ ⟨j|,
v̂_jk = -ι(|j⟩⟨k| - |k⟩⟨j|),
ŵ_l = √(2/l(l+1))( ∑_j=1^l |j⟩⟨j|-l|l+1⟩⟨l+1|),
1≤j < k ≤N, 1≤l ≤N-1.
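This construction is straightforward to implement; the sketch below (our own illustration, not code from the reference) generates the N²−1 generators for arbitrary N and checks the normalisation Tr[x̂_i x̂_j] = 2δ_ij.

import numpy as np

def gell_mann_generators(N):
    """Generalised Gell-Mann matrices {u_jk, v_jk, w_l} for an N-level system."""
    E = lambda j, k: np.outer(np.eye(N)[j], np.eye(N)[k])    # |j><k|
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            gens.append(E(j, k) + E(k, j))                   # u_jk
            gens.append(-1j * (E(j, k) - E(k, j)))           # v_jk
    for l in range(1, N):
        w = sum(E(j, j) for j in range(l)) - l * E(l, l)
        gens.append(np.sqrt(2.0 / (l * (l + 1))) * w)        # w_l
    return gens

gens = gell_mann_generators(3)                               # the eight Gell-Mann matrices for N = 3
gram = np.array([[np.trace(a @ b) for b in gens] for a in gens])
assert np.allclose(gram, 2 * np.eye(len(gens)))              # Tr[x_i x_j] = 2 delta_ij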
For N=2,
x̂_1 = û_12 = |1⟩⟨2| + |2⟩ ⟨1| ≡X̂,
x̂_2 = v̂_12 = -ι(|1⟩⟨2| - |2⟩⟨1|) ≡Ŷ,
x̂_3 = ŵ_1= |1⟩⟨1| - |2⟩⟨2| ≡Ẑ,
where |1⟩=[ 1 0; ]^ T and |2⟩=[ 0 1; ]^ T and the structure constants are f_ijk = ϵ_ijk (Levi-Civita), g_ijk=0.
99
ms B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977).
dehmelt H. G. Dehmelt, Bull. Am. Phys. Soc. 20, 60 (1975).
deh H. G. Dehmelt, IEEE Transactions on Instrumentation and Measurement, IM31, 83 (1982).
minev Z. Minev et al., Nature 570, 200 (2019).
parveen K. Snizhko, P. Kumar, A. Romito, Phys. Rev. Res. 2, 033512 (2020).
krjj Komal Kumari, Garima Rajpoot, Sandeep Joshi, and Sudhir R. Jain, Ann. Phys. 450, 169222 (2023).
kimura G. Kimura, The Bloch vector for N-level systems, Phys. Lett. A 314, 339 (2003).
jordan A. Chantasri, J. Dressel, A. Jordan, Phys. Rev. A 88, 042110 (2013).
jordan2 A. Chantasri, A. Jordan, Phys. Rev. A.92, 032125 (2015).
rieffel E. Rieffel, Wolfgang Polak, Quantum Computing: A gentle introduction, (The MIT Press, Cambridge) (2011).
hioe F. T. Hioe and J. H. Eberly, N-Level Coherence Vector and Higher Conservation Laws in Quantum Optics and Quantum Mechanics, Phys. Rev. Lett. 47, 838 (1981).
pottinger_lendi J. Pöttinger and K. Lendi, Generalized Bloch equations for decaying systems, Phys. Rev. A 31, 1299 (1985).
lendi K. Lendi, Entropy production in coherence-vector formulation for N-level systems, Phys. Rev. A 34, 662 (1986).
|
http://arxiv.org/abs/2307.15758v2 | 20230710132241 | Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor | [
"Rui Li",
"Shaochun Lin",
"Liang Zhang",
"Changkui Duan",
"Pu Huang",
"Jiangfeng Du"
] | astro-ph.CO | [
"astro-ph.CO",
"physics.ins-det",
"quant-ph"
] |
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
[email protected]
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing, 210093, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Among several dark matter candidates, bosonic ultralight (sub-meV) dark matter is well motivated because it could couple to the Standard Model (SM) and induce new forces. Previous MICROSCOPE and Eöt-Wash torsion experiments have achieved high accuracy in the sub-1 Hz region, but at higher frequencies there is still a lack of relevant experimental research. We propose an experimental scheme based on a diamagnetically levitated micromechanical oscillator, one of the most sensitive acceleration sensors below the kilohertz scale. To extend the measurement range, we use a sensor whose resonance frequency ω_0 can be adjusted from 0.1Hz to 100Hz. The limits on the coupling constant g_ B-L are improved by more than a factor of 10 compared to previous reports, and it may be possible to achieve higher accuracy by using an array of sensors in the future.
Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor
Jiangfeng Du
August 12, 2023
==========================================================================================
§ INTRODUCTION
There are many astronomical <cit.> and cosmological observations <cit.> that support the existence of dark matter particles <cit.>, but the specific parameters of dark matter, especially its mass, are still highly uncertain <cit.>. Many direct detection studies have assumed that dark matter is composed of supersymmetric fermions, but so far there has not been enough evidence. The focus of research is now gradually shifting to ultralight bosons, with masses approximately in the range 10^-22eV≲ m_ϕ≲0.1eV <cit.>. For ultralight bosons with a mass less than 1eV, the high particle number density means they behave like a classical field. Due to the virial theorem, if the DM has virialized to the Galaxy, it will be moving with a typical speed v_DM≈ 10^5m/s <cit.>. This corresponds to a Compton frequency ω_s=m_ϕ/ ħ and a de Broglie wavelength λ_DM=hc^2/(m_ϕ v_DM).
According to previous reports, experiments such as ADMX <cit.> can search for the Peccei-Quinn axion in the mass range 10^-6eV≲ m_ϕ≲ 10^-3eV <cit.>. Searches for pseudoscalar axion-like ULMBs with masses between 10^-23eV and 10^-18eV <cit.>, and for scalar dilaton ULMBs with masses between 10^-21eV and 10^-5eV using ultrastable clocks <cit.> and gravitational wave detectors <cit.>, have also recently been reported.
When the DM is a vector field coupled to a conserved current, corresponding to the baryon number minus lepton number (B-L charge) in the SM, the Lagrangian can be written as <cit.>:
ℒ=-1/4 F_μν F^μν -1/2 m_ϕ^2 A^2 +i g_ B-L A_μnγ^μ n
where n is the neutron field, so that the DM field couples directly to the number of neutrons, and g_ B-L is the coupling strength.
Using the Lorentz gauge and the plane wave approximation, the dark electric field can be written as: E≈√(ρ_DM)sin (ω_s t-k⃗·x⃗), where ρ_DM≈ 0.3GeV/cm^3 <cit.> is the local DM density.
In a ground-based experiment, assuming a magneto-gravitational mechanical oscillator is used to measure the ultralight DM field along the Earth's axis, the force exerted on the sensor can be parameterized as:
F_sig(t)=α g_ B-L N_g F_0 sin(ω_s t)
Because the de Broglie wavelength of the DM is much larger than the size of the sensor, we drop the x dependence. In this equation, α=sinθ_N denotes the component along the direction of gravity, where θ_N is the latitude of the ground-based experiment. To avoid the effects of the Earth's rotation over long measurement times and to maximise the force, the experiment is best carried out at high latitude, for example in the Arctic, where α≈1. Here F_0=√(ρ_DM)≈ 10^-15N and N_g is the total number of neutrons in the sensor, which we approximate as N_g≈1/2 m/m_neu for a sensor of mass m, with m_neu the neutron mass. The force F_sig(t) is proportional to the mass of the sensor,
so the key figure of merit for the sensor is its acceleration sensitivity.
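For orientation, the size of this force for the millimetre-scale sensor described below can be estimated directly from equation (<ref>); in the sketch the PMMA density and the value of g_ B-L are assumptions chosen purely for illustration.

import numpy as np

m_neu = 1.675e-27              # neutron mass [kg]
rho_pmma = 1.19e3              # assumed PMMA density [kg/m^3]
r1 = 0.5e-3                    # sphere radius [m], as in the scheme below
m = rho_pmma * 4.0 / 3.0 * np.pi * r1**3
N_g = 0.5 * m / m_neu          # approximate neutron number, N_g ~ m / (2 m_neu)

F0 = 1e-15                     # sqrt(rho_DM) expressed as a force scale [N], from the text
alpha, g_BL = 1.0, 1e-25       # latitude factor and an assumed coupling strength
F_amp = alpha * g_BL * N_g * F0     # amplitude of F_sig(t) [N]
print(f"m = {m:.2e} kg, N_g = {N_g:.2e}, F_amp = {F_amp:.2e} N")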
Here we propose an experimental scheme to detect DM using a frequency-adjustable diamagnetically levitated sensor. The resonance frequency can be changed from 0.1Hz to 100Hz by adjusting the magnetic field gradient at a paramagnetic part of the oscillator.
This gives high detection sensitivity for DM with mass in the range from 10^-16eV to 10^-13eV.
Compared to previously reported experiments, our theoretical calculations indicate that the scheme can achieve more than an order of magnitude improvement in the measurement of the coupling strength g_ B-L.
§ THEORETICAL CALCULATION
Under the drive of the ultralight DM field, and including thermal and measurement noise,
the equation of motion of a mechanical oscillator with resonance frequency ω_0 can be written as:
mẍ+ mγẋ + mω_0^2 x
=F_sig(t)+F_th+F_mea
where γ is the damping coefficient;
F_sig(t) is the DM drive from equation (<ref>); F_th is the environmental thermal noise; and F_mea represents the measurement noise, which is mainly composed of the detector imprecision noise and the backaction of radiation pressure fluctuations.
The total acceleration noise of the system is given by:
S_aa^tot= S_aa^th+ (S_xx^imp/|χ_ m(ω,ω_0)|^2+ S_ff^ba/m^2 )
where χ_ m(ω,ω_0) is the mechanical susceptibility given by |χ_ m(ω,ω_0)|^2=1/[(ω^2-ω_0^2)^2+γ^2 ω^2],
and S_aa^th =4 γ k_B T/m is the thermal noise, where k_B is the Boltzmann constant and T is the environment temperature.
The detector imprecision noise S_xx^imp and the backaction noise S_ff^ba
make up the total measurement noise
S_aa^mea=S_xx^imp /|χ_ m(ω,ω_0)|^2 +S_ff^ba / m^2,
with S_xx^imp· S_ff^ba=(1/η) ħ^2.
Here η⩽ 1 is the measurement efficiency, and η= 1 corresponds to the standard quantum limit (SQL).
The total measurement noise S_aa^mea for a sensor operating at the SQL at resonance frequency ω_0 is given by the simple formula <cit.>:
S_aa^mea,SQL=2 ħ√((ω_0^2-ω^2)^2+γ^2ω^2)/m
Achieving the SQL over a frequency range requires optimizing the measurement parameters frequency by frequency as the range is scanned.
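As an illustration, the thermal noise and the SQL-limited measurement noise can be evaluated as follows; the mass, temperature and damping are the values quoted later in the text, while the conversion of the quoted frequencies to angular frequencies and the frequency grid are our own simplifying assumptions.

import numpy as np

hbar, kB = 1.055e-34, 1.381e-23
m, T, gamma = 6.2e-7, 30e-3, 1e-4         # mass [kg], temperature [K], damping [Hz]
w0 = 2 * np.pi * 10.0                     # resonance frequency, here 10 Hz

w = 2 * np.pi * np.logspace(-1, 3, 500)   # analysis frequencies
S_aa_th = 4 * gamma * kB * T / m          # thermal acceleration noise (white)
chi_inv = np.sqrt((w0**2 - w**2)**2 + gamma**2 * w**2)
S_aa_sql = 2 * hbar * chi_inv / m         # SQL-limited measurement noise
S_aa_tot = S_aa_th + S_aa_sql             # total acceleration noise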
We use the total acceleration noise S_aa^tot as the acceleration measurement sensitivity of the system. From equations (<ref>)-(<ref>), considering the optimal case α=1, we obtain the relationship between the coupling strength g_ B-L and the acceleration measurement sensitivity S_aa^tot:
g_ B-L= 2 m_neu/F_0√(S_aa^tot/T_tot)
where T_tot denotes the effective total integration time. The DM signal is essentially a coherent force on a timescale T_coh≈ 10^6/ ω_s.
When the DM frequency ω_s is low enough that T_coh> T_mea,
all of the measurement time T_mea contributes to the coherent DM signal. As the DM frequency ω_s increases, so that T_coh< T_mea, only a fraction T_coh/T_mea of the measurement time contributes to the coherent signal. We therefore define the effective integration time:
T_tot={[ T_mea if T_coh≥ T_mea; √(T_mea· T_coh) if T_coh<
T_mea ].
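A minimal sketch of the resulting coupling reach, implementing the two equations above (the example sensitivity value is an assumption for illustration only):

import numpy as np

def g_BL_reach(S_aa_tot, omega_s, T_mea, F0=1e-15, m_neu=1.675e-27):
    """Coupling reach from the two equations above (SI units throughout)."""
    T_coh = 1e6 / omega_s                                      # DM coherence time
    T_tot = T_mea if T_coh >= T_mea else np.sqrt(T_mea * T_coh)
    return 2.0 * m_neu / F0 * np.sqrt(S_aa_tot / T_tot)

# example: an assumed sensitivity of 1e-22 m^2 s^-4 / Hz at omega_s = 10 s^-1 and T_mea = 1e5 s
print(g_BL_reach(S_aa_tot=1e-22, omega_s=10.0, T_mea=1e5))     # ~1e-25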
§ EXPERIMENTAL SCHEME
Levitated micromechanical and nanomechanical oscillators have been demonstrated to be among the most sensitive acceleration sensors owing to their ultralow dissipation <cit.>.
We propose a realistic scheme based on our calculations, as shown in Fig. <ref>(a). A diamagnetic sphere made of PMMA with radius r_1=0.5mm (corresponding volume V_1), density ρ_1 and magnetic susceptibility χ_ 1 is levitated in the central region of the upper magnet (denoted Magnet-A), and the oscillator motion is read out through the fibres on both sides.
A paramagnetic microsphere made of Tb_2 O_3 with
radius r_2=11 μm (corresponding volume V_2), density ρ_2 and magnetic susceptibility χ_ 2 is connected to the upper diamagnetic sphere through a thin glass rod. Another magnet assembly (denoted Magnet-B) is placed under the paramagnetic microsphere. The whole magnet assembly is mounted on a multi-stage suspension system, and active vibration isolation devices are used to further improve the isolation <cit.>.
Magnet-A is constructed in a similar way to our previous work <cit.>, and requires high-remanence magnetic material with two different magnetisation directions to generate sufficient magnetic force; red indicates magnetisation pointing towards the centre, and blue indicates magnetisation pointing away from the centre. Magnet-B is built from a lower-remanence magnetic material in its upper layer and a high-remanence material in its lower layer. This combination gives Magnet-B a large magnetic field gradient while keeping the magnetic field strength low. The magnetisation directions are again indicated by red and blue.
The magnetic field energy of the upper diamagnetic sphere can be written as:
U_1=-∫_V_1χ_ 1/2μ_0 B_A ^2 dV
where B_A represents the magnetic field created by
Magnet-A.
Assuming that Magnet-B is initially far away, the equilibrium position z_0 of the oscillator along the z direction in the magneto-gravitational trap satisfies:
∂ U_1/∂ z |_z=z_0=(ρ_1 V_1+ρ_2 V_2 )g.
The resonance frequency in the z direction is:
ω_0=√(1/ρ_1 V_1+ρ_2 V_2·∂^2 U_1/∂ z^2)|_z=z_0
Then, as Magnet-B is raised, the magnetic field B_ B from Magnet-B at the lower paramagnetic microsphere becomes larger. Because V_2≪ V_1, we can simplify the magnetic field energy of the paramagnetic microsphere as U_2=-χ_ 2 B_B^2 V_2/2μ_0.
The resonance frequency of the oscillator along the z direction then changes to:
ω_0^'=√(ω_0^2-χ_ 2V_2/μ_0(ρ_1
V_1+ρ_2V_2)( ∂ B_ B/∂ z)^2)|_z=z_0
where χ_ 2> 0 and ω_0^'<ω_0.
We ignore the second-order gradient term because
(∂ B_B/∂ z)^2≫ B_B (∂^2 B_ B / ∂ z^2).
Since B_B and V_2 are very small, the magnetic force from Magnet-B on the paramagnetic microsphere is much weaker than the total gravity of the oscillator, so the equilibrium position z_0 is essentially unchanged.
We use the finite element method to simulate how the magnetic field gradient ∂ B_B/∂ z changes with the distance d between the paramagnetic microsphere and Magnet-B, over the range from 50μm to 100 μm, and then use equation (<ref>) to calculate the corresponding resonance frequency ω_0^', as shown in Fig.<ref>(b). It is theoretically possible to bring the resonance frequency ω_0^' close to zero by reducing the distance d, but in order to improve the stability of the oscillator and reduce the requirements on the isolation system, we restrict the resonance frequency ω_0^' to the range from 0.1Hz to 100Hz.
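The tuning described by equation (<ref>) can be sketched as follows; the material densities, the Tb_2 O_3 susceptibility and the gradient range are assumed values for illustration only, whereas Fig. <ref>(b) in the text is obtained from the finite element simulation.

import numpy as np

mu0 = 4 * np.pi * 1e-7
rho1, rho2 = 1.19e3, 7.8e3            # assumed PMMA and Tb2O3 densities [kg/m^3]
chi2 = 0.1                            # assumed Tb2O3 volume susceptibility
r1, r2 = 0.5e-3, 11e-6
V1, V2 = 4/3 * np.pi * r1**3, 4/3 * np.pi * r2**3
w0 = 2 * np.pi * 100.0                # bare resonance frequency [rad/s] (assumed)

grad = np.linspace(0.0, 3e4, 300)     # field gradient dB_B/dz [T/m]
w0p_sq = w0**2 - chi2 * V2 * grad**2 / (mu0 * (rho1 * V1 + rho2 * V2))
w0p = np.sqrt(np.clip(w0p_sq, 0.0, None)) / (2 * np.pi)   # tuned resonance frequency [Hz]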
§ EXPERIMENTAL RESULT ESTIMATE
Now we calculate the acceleration measurement sensitivity of this system. To improve the acceleration sensitivity, the whole system is placed in a low-temperature environment with T=30mK, and we estimate the damping coefficient to be γ=10^-4Hz <cit.>. In the Appendix, we calculate the dependence of the total measurement noise S_aa^mea on the laser input power P_in and obtain the optimized laser input
power P_opt(ω,ω_0) that minimises the total measurement noise.
For oscillator resonance frequencies ω_0 of 10Hz and 100Hz,
we calculate the corresponding acceleration noise; the results are shown in Fig.<ref>(a) and Fig.<ref>(b). When the resonance frequency is ω_0=10Hz,
assuming measurement efficiency η=1 and setting the laser input power at each frequency to its optimal value P_opt(ω,ω_0), the measurement noise S_aa^mea can almost reach the SQL.
When the measurement efficiency η is reduced to 0.1, the measurement noise increases slightly.
In practice, however, to simplify the experiment the laser input power is fixed at its value at the resonance frequency, P_opt(ω_0,ω_0), which makes the measurement noise S_aa^mea increase rapidly away from resonance.
In Fig.<ref>(a), in the frequency range from 9Hz to 11Hz, the measurement noise S_aa^mea is always below the thermal noise S_aa^th with η=0.1. When the resonance frequency ω_0 is adjusted to 100Hz, the range over which the measurement noise S_aa^mea stays below the thermal noise S_aa^th shrinks to 99.6Hz to 100.4Hz, as shown in Fig.<ref>(b). We choose the oscillator resonance frequency scan step Δω_0 accordingly.
Based on the calculation results in Fig.<ref>(a) and
Fig.<ref>(b), we choose the scan step Δω_0=1Hz in the resonance-frequency region from 0.1Hz to 100Hz. Each scan covers the frequency range from ω_0-Δω_0/2 to ω_0+Δω_0/2, and the laser input power is fixed at P_in=P_opt(ω_0,ω_0) during each scan.
We calculate the acceleration measurement noise S_aa^mea with η=0.1 for each scan, and take the envelope of this series of S_aa^mea, written as S_aa^mea^'. The acceleration measurement sensitivity is then S_aa^tot=S_aa^th+S_aa^mea^'; these results are presented in Fig.<ref>(c).
Following the earlier discussion of the effective integration time T_tot,
we fix the measurement time of each scan at T_mea=10^5s.
When the DM frequency ω_s≤10Hz, T_tot=T_mea; and when ω_s>10Hz, T_tot=√(T_mea· 10^6/ω_s).
Combining this with the choice of scan step, we estimate that about one hundred adjustments and measurements will be required in total, corresponding to a total time of 1 × 10^7 seconds.
The final result for the coupling strength g_ B-L from equation (<ref>) is shown in Fig.<ref>. In the region ω_s≲ 100Hz, the system always has high acceleration sensitivity, obtained by adjusting the resonance frequency of the mechanical oscillator, and we achieve more than an order of magnitude improvement in the measurement of g_ B-L compared to the MICROSCOPE and Eöt-Wash torsion experiments.
In the region ω_s≳ 100Hz, the measurement accuracy of g_ B-L decreases rapidly, due to the increase in the measurement noise S_aa^mea.
Finally, we estimate the minimum g_ B-L that this system can detect, assuming DM frequencies ω_s of 1Hz, 10Hz and 100Hz respectively.
Using equation (<ref>) with measurement times T_mea ranging from 10^3s to 10^7s, the results are shown in Fig.<ref>.
When T_mea is less than the coherence time T_coh, g_ B-L decreases rapidly as T_mea increases; when T_mea is greater than T_coh, g_ B-L decreases more slowly. For a final measurement time of about 10^7 s, the minimum g_ B-L that can be measured is on the scale of 10^-26.
§ CONCLUSION
We propose an experimental scheme to detect ultralight dark matter using a frequency-adjustable diamagnetically levitated microsphere sensor which can theoretically approach the standard quantum limit.
We change the resonance frequency by adjusting the distance between the paramagnetic microsphere and the lower combined magnets, obtaining a larger frequency range over which high acceleration measurement sensitivity is maintained.
Compared to existing systems, our method can achieve at least an order of magnitude improvement in the coupling constant g_ B-L, especially at frequencies from 0.1Hz to 100Hz, and it may be possible to achieve higher accuracy by using an array of sensors in the future.
In this article, we consider only the effects of thermal noise and quantum measurement noise on the acceleration measurement sensitivity of the system.
In practice, many low-frequency disturbances, such as seismic waves and Earth tides, also have a large impact on the accuracy of the experiment and cannot be shielded by the suspension system. This poses a great challenge for the actual measurement. Reducing the frequency scan step according to the accuracy of the active vibration isolation device may bring the effect of such noise below the thermal noise; this needs to be verified by further experiments.
In general, current ground-based precision measurement systems may have broader prospects for dark matter detection than previous astronomical observation methods. In the future, with improvements in the measurement sensitivity and
measurement range of mechanical sensors, and especially with advances in quantum sensing technology, the measurement sensitivity may surpass the standard quantum limit. This will open up more possibilities for dark matter detection.
This work was supported by the National Natural Science Foundation of China (Grants No.12205291, No. 12075115, No. 12075116, No. 11890702 and No. 12150011), the Fundamental Research Funds for the Central Universities, and Anhui Provincial Natural Science Foundation (Grant No. 2208085QA16).
apsrev4-1
§ APPENDIX: LIGHT FIELD CALCULATION AND MEASUREMENT NOISE OPTIMIZATION
Optical Calculation. The light emitted from the incident fiber is assumed to be Gaussian. Taking the light propagation direction as the z-axis, the incident Gaussian intensity distribution at the waist can be written as <cit.>:
I_1 (r)=I_0 exp(-2r^2/ω_01^2)
The waist radius of the incident Gaussian beam is ω_01, which satisfies
ω_01=√(a_0^2 λ^2/λ^2+π^2 a_0^2 tan^2 α)
where a_0 is the radius of the fiber core and sinα = N.A., with N.A. the numerical aperture of the fiber. Here a_0=5μm and N.A.=0.13 for the single-mode fiber. The incident optical power is:
P_in=∫_0^∞ I_1 (r) 2 π rdr=π/2ω_01^2 I_0
The response of the light to the micro-sphere is calculated using the standard optical ABCD ray matrix <cit.>. Under the paraxial approximation, the transmission matrix 𝐓 is:
𝐓=[ A B; C D ]
which has the equation:
[ r_f; θ_f ]
=
𝐓[ r_i; θ_i ]
In calculating the transmission matrix 𝐓, we neglect the reflection of light at the interfaces and the absorption in the micro-sphere. Here A, B, C, D are
A=2/n-1, B=2R/n, C=2(1-n)/(nR), D=2/n-1, β_0=λ/(πω_01^2)
With the parameters λ=1550 nm and n=1.45, we find that d_2 and ω_02 satisfy
d_2=AC/β_0^2+ACd_1^2+ADd_1+BCd_1+BD/C^2 /β_0^2+C^2 d_1^2+2CDd_1+D^2
ω_02=ω_01√((A+Cd_2 )^2+β_0^2(Ad_1+B+Cd_1 d_2+Dd_2 )^2)
Both d_2 and ω_02 are functions of d_1; we choose a suitable d_1 so that ω_02≈ a_0.
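A sketch of this procedure, evaluating the expressions above on a grid of d_1 (the sphere radius R is taken to be r_1 = 0.5 mm from the main text; the grid itself is an arbitrary choice):

import numpy as np

lam, n, R = 1550e-9, 1.45, 0.5e-3          # wavelength, index, sphere radius (R = r_1)
a0, NA = 5e-6, 0.13                        # fiber core radius and numerical aperture

w01 = np.sqrt(a0**2 * lam**2 / (lam**2 + np.pi**2 * a0**2 * np.tan(np.arcsin(NA))**2))
beta0 = lam / (np.pi * w01**2)
A, B, C, D = 2/n - 1, 2*R/n, 2*(1 - n)/(n*R), 2/n - 1

def image(d1):
    """d_2 and omega_02 from the two expressions above."""
    num = A*C/beta0**2 + A*C*d1**2 + A*D*d1 + B*C*d1 + B*D
    den = C**2/beta0**2 + C**2*d1**2 + 2*C*D*d1 + D**2
    d2 = num / den
    w02 = w01 * np.sqrt((A + C*d2)**2 + beta0**2 * (A*d1 + B + C*d1*d2 + D*d2)**2)
    return d2, w02

d1_grid = np.linspace(10e-6, 2e-3, 4000)
w02_grid = np.array([image(d1)[1] for d1 in d1_grid])
d1_opt = d1_grid[np.argmin(np.abs(w02_grid - a0))]   # d_1 for which omega_02 is closest to a_0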
The coupling efficiency Γ of the laser beam into the single-mode optical fiber can be written as:
Γ=Γ_0 exp(-Γ_0·x_fib^2/2 (1/ω_02^2 +1/a_0^2)),
Γ_0=4ω_02^2a_0^2/(ω_02^2+a_0^2 )^2
Here x_fib denotes the fiber offset in the x direction; when x_fib=0, Γ=Γ_max=Γ_0.
In the experiment, x_fib is fixed at the point where ∂Γ/∂ x_fib is largest, giving x_fib=2.51μ m and Γ(x_fib)=0.604, as shown in Fig.<ref>(b).
δ x is the displacement of the micro-sphere perpendicular to the optical axis (a similar result holds for the y direction), while δ x' is its projection on the incident fiber surface. Under the paraxial approximation, δ x'=ζ·δ x for a small displacement δ x of the micro-sphere, with the displacement magnification factor:
ζ=(d_1+d_2+2R)/(d_1+R),
ς=∂Γ/∂ x=∂Γ/∂ x'·∂ x'/∂ x=ζ·∂Γ/∂ x'
Measurement Noise. The relationship between an average optical power P and the corresponding photon number N is:
N_in=P_inT_mea/(ħω_op),
N_dec=P_decT_mea/(ħω_op)
where ω_op is the optical frequency. The photon numbers obey Poisson statistics, with fluctuations δ N_in=√(N_in) and δ N_dec=√(N_dec). These fluctuations give rise to an imprecision in the displacement detection, δ x_imp:
δ x_imp =∂ x/∂Γ√((∂Γ/∂ N_inδ N_in)^2+
(∂Γ/∂ N_decδ N_dec)^2)
=1/ς√(Γ+Γ^2/N_in)
Thus the power density of displacement noise is:
S_xx^imp=1/ς^2(Γ+Γ^2)ħω_op/P_in
On the other hand, a photon passing through the micro-sphere changes direction and therefore generates a back-action force δ f_ba, whose strength is also proportional to the fluctuation of the incident photon number δ N_in. The back-action force δ f_ba can be written as:
δ f_ba=√(N_in)ħΔ k /T_mea
where Δ k is the change of the wave vector.
Here we suppose that the light wave vector points along the local propagation direction of the Gaussian beam, and that the probability of a photon appearing at a given transverse position is proportional to the local Gaussian intensity. Δ k is the average change of the light wave vector on passing through the micro-sphere. It is calculated as √((Δ k_in)^2+(Δ k_out)^2), where Δ k_in is the average wave vector of the light entering the micro-sphere and Δ k_out is that of the light leaving the micro-sphere. We obtain
(Δ k)^2= k^2 β
= k^2 ∫_0^∞k^2 r^3/k^2 r^2 +((1-z_r^2/z_l^2)kR^2/2ρ(z_l)+z_r/z-kρ(z_l))^2·
1/ω_1^2(z_l)exp(-2r^2/ω_1^2(z_l))dr
where k=ω_op/c, z_l=d_1+R-√(R^2-r^2), ω_1(z_l)=ω_01√(1+(z_l /z_r)^2), z_r=2 πω_01^2 / λ and ρ(z_l)=z_r (z_l/z_r +z_r/z_l).
The power density of back-action noise is thus:
S_ff^ba=P_inħω_opβ/c^2
and the product of imprecision noise and back-action noise is:
S_xx^imp· S_ff^ba=1/ς^2 (Γ+Γ^2 )
(ω_op /c)^2 β^2 ħ^2
The quantum efficiency of the measurement is defined as:
η=ς/4(Γ+Γ^2)β k^2
where η = 1 corresponds to the standard quantum limit (SQL). The total measurement noise is
S_aa^mea (ω)=S_xx^imp/|χ_ m(ω,ω_0)|^2
+S_ff^ba/m^2
S_aa^mea is minimized by tuning the incident laser power P_in under the product constraint of the imprecision noise and backaction noise. The optimized power is:
P_opt (ω,ω_0 )=√(Γ+Γ^2/β)m c/ς|χ_ m(ω,ω_0)|
with the minimised total acceleration measurement noise as:
S_aa,min^mea=2ħω_op/mς c |χ_ m(ω,ω_0)|√(β(Γ+Γ^2 ))
In order to simplify the experimental procedure, we choose P_in =P_opt (ω_0,ω_0), with the corresponding optimized acceleration measurement noise:
S_aa,opt^mea=ħω_op√(β(Γ+Γ^2 ))/mς c γω_0·(1/ |χ_ m(ω,ω_0)|^2+γ^2 ω_0^2 )
|
http://arxiv.org/abs/2307.06055v1 | 20230712101754 | Function-Space Regularization for Deep Bayesian Classification | [
"Jihao Andreas Lin",
"Joe Watson",
"Pascal Klink",
"Jan Peters"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Function-Space Regularization for Deep Bayesian Classification
Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters
==========================================================================================
Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.
However, weight-space priors are model-specific, can be difficult to interpret and are hard to specify.
Instead, we apply a Dirichlet prior in predictive space and perform approximate function-space variational inference.
To this end, we interpret conventional categorical predictions from stochastic neural network classifiers as samples from an implicit Dirichlet distribution.
By adapting the inference, the same function-space prior can be combined with different models without affecting model architecture or size.
We illustrate the flexibility and efficacy of such a prior with toy experiments and demonstrate scalability, improved uncertainty quantification and adversarial robustness with large-scale image classification experiments.
§ INTRODUCTION
Deep learning has enabled powerful classification models capable of working with complex data modalities and scaling to large data sets <cit.>.
The aim of Bayesian neural networks (BNNs) is to provide these complex models with priors for regularization, generalization and uncertainty quantification (UQ) useful in prediction tasks <cit.>.
Predictive uncertainty is crucial for machine learning systems in real-world settings, as it provides a degree of safety <cit.>, trust <cit.>, sample efficiency <cit.> and human-in-the-loop cooperation <cit.>.
In this work, we leverage function-space variational inference
[We use function-space VI instead of functional VI to avoid the overloaded term with different connotations.]
<cit.> (fVI) to implement regularization for classification tasks.
Function-space priors can explicitly affect the predictive distribution and do not depend on the particular model parameterization, whereas weight-space priors are implicit and model-specific.
Given any stochastic neural network capable of producing multiple predictions, such as Monte Carlo dropout <cit.> or deep ensembles <cit.>, we estimate a Dirichlet predictive distribution from several categorical outputs via maximum likelihood.
This approach retains the same mean prediction of conventional deep learning classifiers, while also capturing the information contained in the variance of the outputs.
The Dirichlet predictive distribution can then be used to specify a function-space prior to regularize classification.
Various prior work which uses the Dirichlet distribution and function-space regularization <cit.> can be viewed as a special case of fVI.
We demonstrate that our method improves uncertainty quantification and adversarial robustness across a range of popular models and datasets for both small- and large-scale inference.
§ DIRICHLET FUNCTION PRIORS
Let 𝒟 = {(x_n, y_n)}_n=1^N be the training data consisting of N observed pairs of input data x_n ∈𝒳 and corresponding K-dimensional, one-hot class label vectors y_n ∈𝒴.
A neural network ϕ with weights w defines a deterministic function which maps an input x ∈𝒳 to an element f_x∈Δ^K-1, where Δ^K-1 denotes the K-1 simplex.
More precisely, we write f_x = σ(ϕ(x; w)) and y∼Cat(· | f_x), where σ is the softmax function.
In conventional maximum likelihood (ML) training, the weights are optimized by maximizing
log∏_𝒟Cat(y | f_x)
= ∑_𝒟∑_k=1^K y_k logf_x,k,
where ∏_𝒟 and ∑_𝒟 denote ∏_(x, y) ∈𝒟 and ∑_(x, y) ∈𝒟, and ϕ, σ, and w are implicit in f_x.
In Bayesian deep learning, w becomes a random variable and the goal is to estimate its posterior weight distribution.
In general, exact inference is intractable and various approximations employ different parameterizations.
In this paper, we assume that samples from a weight distribution p(w) are available but an explicit density is not.
This makes our method particularly generic and compatible with most BNNs and stochastic models.
Dirichlet Posterior Predictive
Bayesian neural networks and stochastic deep learning models for classification typically make predictions by first sampling from a weight distribution p(), then predicting a softmax output for each weight sample, and finally averaging those predictions to produce a posterior categorical predictive.
Treating model predictions as samples from a simplex (left), the mean reduction (middle) discards information which is present in the variance.
A Dirichlet distribution (right) fitted to the same samples can capture the uncertainty with its density function.
However, taking the average throws away the epistemic uncertainty of the classifier.
Instead, we interpret categorical predictions as samples from a Dirichlet distribution p(f_x), which allows us to leverage those samples to estimate a Dirichlet distribution over probability vectors instead of computing the average.
Figure <ref> shows how the Dirichlet density captures the variance of the samples.
We assume that, given any input x, the model predicts a corresponding Dirichlet distribution p(f_x), which is induced by the weight distribution p(w).
Implicit Stochastic Processes
The model's capability to predict a K-dimensional Dirichlet distribution p(f_x) for any x∈𝒳 implicitly defines a stochastic process whose state space is the K-1 simplex Δ^K-1 and whose index set is 𝒳 <cit.>.
This stochastic process, despite using the Dirichlet distribution, is not a Dirichlet process <cit.>.
A Dirichlet process with index set 𝒳 requires that any finite subset {x_1, …, x_L}⊂𝒳 follows a joint L-dimensional Dirichlet, whereas our implicit stochastic process defines a K-dimensional Dirichlet for each x∈𝒳.
For us, the finite collection {x_1, …, x_L} would produce an element of (Δ^K-1)^L, and the whole implicit stochastic process can be rigorously defined as a random variable taking values in (Δ^K-1)^𝒳 (see Appendix <ref>).
Function-Space Regularization
To apply regularization in function space, we use the function-space evidence lower bound objective (fELBO) <cit.>,
ℒ(θ) = _f ∼ q[ log p(𝒟 | f) ]
- [q(f |θ) || p(f)],
which resembles the conventional evidence lower bound objective (ELBO) <cit.>.
To compute the likelihood term, we stay faithful to the backbone model and use M samples to estimate the expected categorical log-likelihood,
_f∼ q[ log p(𝒟 | f) ]
≈1/M∑_n,m=1^N, Mlog p(y_n | f^(m)_x_n),
which is identical to the likelihood term in conventional ELBO optimization for BNNs.
The novelty of our approach manifests in the KL term, which requires computing a function-space KL divergence (fKL) between stochastic processes.
<cit.> derived this divergence as the supremum over regular KL divergences evaluated at all possible finite sets X ⊂𝒳,
[q|| p]
=
sup_X⊂𝒳, |X| < ∞[q(f_X |θ)|| p(f_X)].
However, the supremum is generally intractable because there are infinitely many possible finite measurement sets.
A tractable approximation <cit.> replaces the supremum with an expectation,
[q|| p]
≈ _ [q(_ |)|| p(_)],
where ⊂𝒳 is a randomly sampled, finite measurement set of size L,
which contains all the points that the stochastic processes are conditioned on.
For us,
this is the training data, but we can improve it further by adding unlabeled data (see Section <ref>).
Dirichlet and KL Divergence Estimation
Assuming f^(m)_x∼Dir(· | α_x), we compute a maximum likelihood estimate (MLE) of α_x using M samples f^(m)_x.
To this end, we consider α_x in terms of two separate but dependent parameters: the Dirichlet mean α̅_x = α_x / z_x and the Dirichlet precision z_x, where α̅_x is akin to categorical class probabilities and z_x can be interpreted as a confidence score.
By matching the first moment of the empirical distribution of f_x, we obtain α̅_x≈1/M∑_m=1^M f^(m)_x.
To estimate z_x, we fix α̅_x and employ a fast, iterative, quasi-Newton algorithm <cit.>
using M predictive samples
f_x^(1:M) = {f^(1)_x,…,f^(M)_x},
(z^(t+1)_x)^-1 = (z^(t)_x)^-1
+ (z^(t)_x)^-2∂_z_xℒ(z^(t)_x)/∂^2_z_xℒ(z^(t)_x), ℒ(z^(t)_x) = ℒ_Dir(f_x^(1:M), α_x^(t)),
where z_x^(t) and α_x^(t) = α̅_x z_x^(t) are the Dirichlet precision and concentration at iteration t, and ℒ_Dir is the Dirichlet log-likelihood log∏_m=1^M Dir(f_x^(m)| α_x).
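A minimal sketch of this estimator follows; the initial precision value and the synthetic predictions in the usage lines are assumptions for illustration, and the derivatives of the Dirichlet log-likelihood are written out with digamma and trigamma functions.

import numpy as np
from scipy.special import digamma, polygamma

def fit_dirichlet(P, n_iter=25):
    """Moment-match the mean, then run the precision update from the equation above."""
    M, K = P.shape
    mean = P.mean(axis=0)                      # alpha_bar via the first moment
    log_pk = np.log(P).sum(axis=0)             # sufficient statistics sum_m log p_k^(m)
    z = float(K)                               # initial precision (assumed)
    for _ in range(n_iter):
        a = z * mean
        dL = M * digamma(z) - M * (mean * digamma(a)).sum() + (mean * log_pk).sum()
        d2L = M * polygamma(1, z) - M * (mean**2 * polygamma(1, a)).sum()
        inv_z = 1.0 / z + dL / (z**2 * d2L)    # (z^(t+1))^-1 = (z^(t))^-1 + (z^(t))^-2 dL/d2L
        z = 1.0 / max(inv_z, 1e-8)             # guard against a non-positive update
    return mean, z

# usage: P is an (M, K) array of categorical predictions from M stochastic forward passes
P = np.clip(np.random.dirichlet([4.0, 2.0, 1.0], size=64), 1e-6, 1.0)
alpha_bar, precision = fit_dirichlet(P)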
With α̅_x and z_x estimated,
α_x = α̅_x z_x, q(f_x | θ) = Dir( f_x | α_x), and we compute the KL divergence as
[q || p]
≈ 1/M∑_l,m=1^L,M( log q(f_x_l^(m)| θ)
-log p(f_x_l^(m)) ),
where f_x_l^(m) is the m-th prediction of the model evaluated at the l-th measurement item x_l ∈ X, and log q(f_x_l^(m) | θ) and log p(f_x_l^(m)) are the log-likelihoods of f_x_l^(m) under the variational Dirichlet posterior and the Dirichlet prior, respectively.
Further details about optimization and prior specification are discussed in Appendix <ref>.
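For concreteness, the Monte Carlo estimate above can be written as the following sketch; the measurement-set layout, the uniform prior and the stand-in posterior concentration in the usage example are assumptions for illustration.

import numpy as np
from scipy.stats import dirichlet

def fkl_estimate(preds, alpha_post, alpha_prior):
    """Monte Carlo fKL: (1/M) sum over l, m of [log q(f^(m)_{x_l}) - log p(f^(m)_{x_l})].
    preds[l] holds the M simplex predictions at measurement point x_l and
    alpha_post[l] the fitted posterior concentration there; alpha_prior is constant."""
    M = len(preds[0])
    total = 0.0
    for P_l, a_l in zip(preds, alpha_post):
        for p in P_l:
            total += dirichlet.logpdf(p, a_l) - dirichlet.logpdf(p, alpha_prior)
    return total / M

# usage with one measurement point and a uniform Dir(1, ..., 1) prior
P = np.random.dirichlet([4.0, 2.0, 1.0], size=16)
P = P / P.sum(axis=1, keepdims=True)
alpha_hat = 10.0 * P.mean(axis=0)              # stand-in for the fitted posterior concentration
print(fkl_estimate([P], [alpha_hat], np.ones(3)))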
§ EXPERIMENTS
In this section, we present an empirical evaluation of our proposed approach, comparing the performance of several models against their conventional training procedure.
We used feedforward multilayer perceptrons (MLPs) and convolutional neural networks (CNNs).
Metrics include classification accuracy, log-likelihood (LLH) and expected calibration error (ECE) <cit.>, which estimates the calibration of accuracy versus confidence through binning the predicted class probabilities.
Appendix <ref> contains additional details.
Toy Problem
To visualize the effects of function-space variational inference, we conducted a toy experiment with the Two Moons data set and MLP models.
We used MAP, MC Dropout <cit.>, and deep ensemble <cit.> models, training each with their regular weight-space method, a uniform function prior, a GP function prior and a random forest function prior.
Figure <ref> shows that weight-space training leads to overconfident extrapolation, while our inference approach combined with Dirichlet function priors adequately increases predictive uncertainty outside of the observed data.
While the weight-space approach learns a decision boundary that bisects the data, the function-space approaches learn richer boundaries which capture the data distribution more accurately and display properties that resemble the respective prior.
Rotated MNIST
Following <cit.>, we train on the MNIST handwritten digit classification data set <cit.> and evaluate on constructed test data with rotations of up to 180^∘, which simulates a challenging OOD scenario due to the absence of data augmentation.
For this experiment, we used the same MLP models as for the toy problem.
The log-likelihood between models is shown in Figure <ref>.
In terms of classification error, weight-space inference and fVI yield the same performance.
In terms of log-likelihood, fVI consistently outperforms their weight-space counterparts as the data becomes more OOD.
Subnetwork linearized Laplace <cit.> is also reported as a competitive baseline,
however, these results were obtained using a ResNet-18.
Assessing Measurement Set Design
To illustrate the importance of the measurement set, we train the fVI models for rotated MNIST using three different measurement sets: the training data, additional 90° augmentation, and additional 90° and 180° augmentation.
While simply using the training data without rotations already outperforms the weight-space counterparts, a direct comparison in Figure <ref> illustrates that performance can be further increased if an appropriate measurement set, i.e. example OOD data, is available.
With the enriched measurement sets, the OOD performance moves closer to that of the prior, indicating more accurate inclusion in the fELBO.
Sets for greater OOD performance could be designed through manual data augmentation, unlabeled data, or synthetic data generation.
Note, for all other image classification experiments, we use the training data as the measurement set.
Image Classification under Corruption
We used the regular train splits of the CIFAR10 and CIFAR100 <cit.> as training data and their corrupted versions <cit.> as OOD test data.
CIFAR10 and CIFAR100 consist of natural color images of animals and vehicles.
Their corrupted versions perturb the images at five increasing levels of severity by changing the brightness, contrast or saturation, or adding noise, blur or other artifacts, such that classification becomes more difficult.
For this experiment, we used ResNet-18 CNN models <cit.>.
In addition to the previous MAP, MC Dropout and deep ensemble model types, we also evaluate our fVI approach on Radial BNNs <cit.>, as an effective variant of MFVI, and Rank-1 BNNs <cit.>, which combine ensembles and VI.
In Appendix <ref>, we investigate a similar setting where the corruptions are replaced by adversarial attacks of varying strength.
Figure <ref> shows the results for CIFAR10 and CIFAR100 under corruption.
The function-space prior frequently provides gains in OOD uncertainty quantification with only a small decrease in (uncorrupted) test performance.
This trade-off between accuracy and robustness has been observed and discussed in the adversarial robustness setting <cit.> and it remains an open problem if and how both qualities can be achieved in practice.
Moreover, the shared function-space prior resulted in remarkable consistency across models, compared to the variety seen in weight-space priors.
For CIFAR100, higher prior regularization due to higher dimensionality (see Appendix <ref>) resulted in reduced benefit over weight-space models, with improved performance only evident at stronger corruptions.
§ CONCLUSION
We propose an approach to function-space regularization for deep Bayesian classification, which enables the use of Dirichlet predictive priors to improve uncertainty quantification.
Our approach provides a generic view of prior work on Dirichlet-based classifiers with function-space regularization, and can be applied to a general class of BNNs and stochastic models without altering their underlying architectures and mechanisms.
Experiments demonstrate that our approach generally outperforms the corresponding weight-space priors in terms of uncertainty quantification and adversarial robustness.
Different measurement sets can trade-off scalability against OOD uncertainty quantification by extending the fKL evaluation beyond the training data.
Future research should improve measurement sets for fVI, for example, by developing effective methods for constructing them to reflect the test distribution, e.g. through using data augmentation or unlabeled data.
§ RELATED WORK
In this section, we summarize related work on Bayesian classification and function-space inference, and discuss previous research which is of particular relevance to our work.
Bayesian Classification
Compared to regression, classification is non-trivial for Bayesian methods due to the nonlinear link function required to predict the class labels.
As a result, closed-form Bayesian models, such as Gaussian processes (GP), require approximate inference methods such as the Laplace approximation <cit.>, variational inference <cit.>, and expectation propagation <cit.>.
The Pólya-Gamma data augmentation trick <cit.> has enabled scalable closed-form variational training of sparse Gaussian process classifiers <cit.>.
Gaussian processes have also been used with a Dirichlet predictive using a log-normal approximation <cit.>.
Classification with Bayesian neural networks is possible through a wide range of approximate inference methods, including Markov chain Monte Carlo <cit.>, (mean-field) variational inference (MFVI) <cit.>, Laplace approximations <cit.>, ensembles <cit.>, expectation propagation <cit.> and Monte Carlo dropout <cit.>.
Radial BNNs <cit.> are motivated as a practical alternative to MFVI BNNs that uses Gaussian weight priors and posteriors.
By sampling weights in a radial fashion, they avoid the pathologies encountered when sampling high-dimensional Gaussian distributions.
Rank-1 BNNs <cit.> combine ensembles and weight priors.
Using the shared BatchEnsemble structure <cit.> and Rank-1 covariance parameterizations, Rank-1 BNNs have a scalable memory requirement.
Alternatively, the Laplace bridge <cit.> approximately maps a Dirichlet predictive density backwards through the softmax into a latent Gaussian predictive.
A Gaussian-predictive BNN can then be trained using this latent approximation.
Alternative methods avoid propagating uncertainties by predicting Dirichlet concentrations directly with deep neural networks.
Prior networks <cit.> require categorical labels to be converted to Dirichlets, and resembles fVI as the objective consists of two KL divergences, for in- and outside the data distribution respectively.
They can be used to distill a trained ensemble into a single model <cit.>.
Similarly, belief matching <cit.> converts training labels to Dirichlets using Bayes rule.
This method can also be viewed as fVI where the measurement set is the training data.
Another method converts the training labels to categorical probabilities and uses a Bayes risk objective with KL regularization against a function-space prior <cit.>.
Compared to these methods, we introduce generic function-space regularization that allows us to use any BNN or stochastic model with the conventional categorical likelihood, avoiding the need to design networks and data representations that facilitate a model-specific training approach.
A longer discussion and comparison is provided later in this section.
Function-Space Variational Inference
Function-space variational inference generalizes conventional variational inference over finite weight distributions to inference over stochastic processes, which entails difficulties because the standard KL divergence between finite-dimensional probability distributions becomes an infinite-dimensional fKL divergence between stochastic processes.
Gaussian processes <cit.> are a rare exception where analytically tractable function-space inference is possible.
Sparse GPs may be viewed as variational inference over functions <cit.>, minimizing the fKL from its exact posterior via inducing points.
In functional variational BNNs (fBNNs), <cit.> derive the fKL as a supremum over an infinite set of finite, marginal KL divergences.
<cit.> showed that this fKL can be infinite under certain conditions, for example when considering the divergence between two BNNs with different architectures.
<cit.> replace the intractable supremum by an expectation based on finite measurement sets.
We explain this in more detail in Section <ref> because our approach is based on this approximation.
Further, <cit.> used a trained GP as an explicit function-space prior, which can be viewed as a form of empirical Bayes, and employed the spectral Stein gradient estimator (SSGE) <cit.> to enable implicit function priors.
Similar approaches take a mirror descent view for batch training <cit.>.
Variational implicit processes <cit.> interpret
parametric models with stochastic parameters as stochastic processes and introduce a wake-sleep procedure for inference in the regression setting with Gaussian likelihoods.
Our generic view of Bayesian neural networks and other stochastic models can be formally understood within their stochastic process perspective of parametric models, although our inference approach is unrelated (see Appendix <ref>).
Neural linear models have also been used with fVI, because closed-form Gaussian predictive distributions allow explicit computation of gradients <cit.>.
Concurrent work <cit.> has also adopted fVI for classification, by linearizing a neural network about a Gaussian weight distribution to estimate the fKL.
This model works with a Gaussian (latent) predictive prior and posterior which loses the intuitive aspect of function-space priors.
Moreover, the linearization requires computation of the Jacobian of the neural network function with respect to the model parameters, for which the memory requirement scales with the number of model parameters and outputs.
<cit.> propose particle optimization methods using finite function representations to learn a particle representation of the function-space posterior through the gradient flow of the log posterior.
Function-space inference is also an attractive approach to continual learning <cit.>.
Prior Networks
<cit.> use a neural network with parameters θ to directly predict the concentration parameters α_c of a Dirichlet distribution p(μ|x; θ), given input x, which is distinct from our approach of estimating a posterior Dirichlet from M categorical predictions.
This model is not a Bayesian neural network in practice, as only point estimates for the weights are learned.
To ensure α_c > 0, an element-wise exponential operation is applied as the final layer of the neural network.
Additionally, Prior Networks minimize an optimization objective consisting of two separate KL divergences, representing in- and out-of-distribution data respectively,
ℒ(θ) =
_p_in(x)
[[Dir(μ|α̂)||p(μ|x; θ)]]
+ _p_out(x)[[Dir(μ|α̃)||p(μ|x; θ)]].
The first expectation _p_in accounts for the actual learning, i.e. fitting the training data, whereas the second expectation _p_out is supposed to regularize the model by matching a prior distribution.
Accordingly, the first expectation is computed for the training data and can be compared to the expected log-likelihood term in our approach.
Instead of maximizing the categorical log-likelihood of M observations, Prior Networks construct Dirichlet targets by smoothing categorical ground truth labels to define the Dirichlet mean and setting the precision as a hyperparameter during training.
Although we also apply `label smoothing' to the predictions, it is for numerical reasons and not for the construction of target distributions from labels.
Additionally, Prior Networks treat the precision of their constructed target distribution as a hyperparameter, whereas we estimate the Dirichlet precision of our predicted variational posterior distribution via maximum likelihood.
The second expectation is computed for OOD data and resembles the fKL term in our approach, where the OOD data is used as measurement set.
In contrast to Prior Networks, our more general approach also allows the training data or mixtures of training data and OOD data as measurement sets, whereas Prior Networks explicitly compute their second expectation for OOD data only.
Furthermore, both Prior Network expectations consider the KL divergence from the neural network predictive distribution (right) to the target or prior distribution (left), whereas, in our approach, and variational inference in general, the KL divergence from the prior distribution (right) to the variational posterior (left) is considered.
Belief Matching
<cit.> assume a Dirichlet prior which, together with the categorical ground truth class labels, define a target Dirichlet posterior.
A neural network is used to directly predict concentration parameters of a Dirichlet posterior q_| by replacing the final softmax layer with an element-wise exponential operation.
To learn the target posterior, belief matching maximizes
l_EB(, α^())
= _q_| [log_]
- [q_|^|| p_|],
where _q_| [log_] is the expected log-likelihood of the training data and q_|^p_| is the KL divergence between the predicted Dirichlet posterior and the Dirichlet prior.
Therefore, their objective matches our fELBO objective (Equation (<ref>)) except for two differences:
Firstly, belief matching computes both the expected log-likelihood and the KL divergence with respect to their single, directly predicted Dirichlet distribution, whereas we evaluate them as arithmetic averages of M stochastic categorical model outputs.
Secondly, belief matching does not recognize the function-space aspect and instead only considers evaluation of the KL divergence using the training data, which resembles the fKL in our case where the measurement set is constrained to be the training data.
Evidential Deep Learning
<cit.> directly predict the concentration parameter of a Dirichlet distribution by using a neural network with ReLU activations as final layer to assert the positive constraint.
Additionally, a loss function is derived via type-II maximum likelihood by integrating over a Dirichlet prior and the sum of squares between target labels _i and predicted probabilities _i.
Furthermore, a regularizing KL divergence term is added, resulting in a total loss function,
(Θ) =
∑_i=1^N (
∑_j=1^K ( (y_ij - p̂_ij)^2 + p̂_ij(1 - p̂_ij)/S_i + 1)
+ λ_t [Dir(_i| α̃_i)||Dir(_i| 1)]
),
where y_ij are individual 0-1 target labels, p̂_ij are components of the predicted Dirichlet mean, S_i is the predicted Dirichlet precision, α̃_i is the predicted Dirichlet concentration parameter, 1 is a vector of ones and λ_t is a annealing coefficient for optimization.
The first part of their loss is responsible for fitting the training data and can thus be compared to the maximum likelihood objective in Section <ref>.
The ML objective can be derived from the categorical log-likelihood via type-I maximum likelihood, whereas their objective is derived by minimizing the sum of squares via type-II maximum likelihood.
The second part of their loss resembles the fKL in our approach.
However, they only evaluate the KL divergence for the training data and explicitly consider the uniform Dirichlet distribution with concentration 1.
Therefore, their KL divergence regularization term is a special case of our proposed regularization with the measurement set being the training data and the prior being the uniform Dirichlet distribution with precision K.
For both parts, a major difference between evidential deep learning and our approach is the realization of the predictive Dirichlet distribution.
Evidential deep learning directly predicts Dirichlet concentration parameters, whereas we use M predictions to estimate a Dirichlet distribution via maximum likelihood.
Experimental Comparison
We compare our MAP and MAP fVI models to Belief Matching and Prior Networks, which both demonstrated scalability to ResNet models.
To reproduce their results, we used the official open-source implementations
[
<github.com/tjoo512/belief-matching-framework>]
[
<github.com/KaosEngineer/PriorNetworks>].
Figure <ref> illustrates the test accuracy, log-likelihood, and expected calibration error for the corrupted CIFAR10 image classification task (see Section <ref>).
We trained the models using the same procedure and hyperparameters described in Section <ref>.
The Belief Matching model corresponds closely to the MAP fVI model, as both the objectives and models are similar.
Unfortunately, we were not able to reproduce the Prior Networks performance described in the paper <cit.>, neither with their listed hyperparameters (Table 4, <cit.>) or the hyperparameters used in Appendix <ref>.
It is uncertain whether this is due to the model, implementation bugs or unrecorded hyperparameters
[The authors did not respond to personal correspondence regarding this matter.].
§ DEEP STOCHASTIC CLASSIFIERS AS IMPLICIT STOCHASTIC PROCESSES
An implicit stochastic process <cit.> is an infinite set of random variables , such that any finite subset
__1:L = {__1, __2, ..., __L} with L ∈ℕ has a joint distribution which is implicitly defined as
∼ p(), __l = (_l, ), ∀ _l∈, 1 ≤ l ≤ L,
where the classifiers which we consider, such as BNNs and other stochastic neural networks, are instantiated as a feedforward or convolutional neural networks with stochastic weights, such that
(_l, ) = σ(ϕ(; )) in our case.
In practice, the implicit stochastic process interpretation of BNNs and stochastic models entails that we consider the weight distribution p() in a parameterized form q_() with parameters which we wish to optimize.
The actual form and meaning of depends on the specific neural network architecture, encoded through ϕ.
Table <ref> lists mathematical expressions to describe for various models.
Different model-specific parameterizations q_() induce the same generic variational posterior over functions q( |) which allows us to implement function-space regularization independent of the specific model.
Formally, the stochastic process is defined on the sample space
Ω
with an index set defined by the data type, such that
: ×Ω→Δ^K-1, where
Δ^K-1 is the state space, which is the K-1 simplex.
A random variable (): Ω→Δ^K-1 can be defined for each ∈ and we write () = _.
Kolmogorov's extension theorem <cit.> guarantees the existence of a stochastic process if for each L ∈ ℕ the finite marginal joint distributions p_x_1:L(f_x_1:L), where f_x_1:L = {f_x_1, …, f_x_L}, satisfy exchangeability and consistency.
Exchangeability
For any permutation π of 1,…,L, p_π(x_1:L)(f_π(x_1:L)) = p_x_1:L(f_x_1:L).
This requires that the process behavior is invariant to the order of inputs. For a feedforward neural network, this is satisfied because the respective predictions do not change if the order of inputs changes.
Consistency
For any 1 ≤ L' < L, p_x_1:L'(f_x_1:L') = ∫ p_x_1:L(f_x_1:L) df_x_L'+1:L.
This requires that future evaluations are independent of past evaluations. For a feedforward neural network, this is satisfied because predictions do not depend on previous predictions.
§ OPTIMIZATION AND PRIOR SPECIFICATION
Here, we provide additional methodological details which were omitted in Section <ref>.
Optimization
We optimize θ using backpropagation on the fELBO objective.
For some models, such as deep ensembles, this is standard gradient descent, while for others, such as MFVI, the reparameterization trick is required.
In cases where only a single sample is available (M = 1), such as MAP models or MC Dropout with a single forward pass, we set the precision z_x to the size of the training data.
When computing gradients, we assume that α_x does not depend on θ.
This serves the practical purpose of pruning the Dirichlet MLE from the computation graph, speeding up computation and evoking expectation maximization-style inference.
In Appendix <ref>, we connect this approximation to the pathwise gradient estimator <cit.>, which can in fact be lower variance than the total gradient and lead to faster convergence in terms of computation time.
In terms of mini-batching, like <cit.>, we divide both the batched expected log-likelihood and the fKL by the mini-batch size for numerical stability.
Consequently, the fKL weight in the total ELBO depends on the mini-batch size, which is theoretically undesirable, but, in practice, KL divergence scaling in (weight-space) variational inference is a topic of active debate <cit.> and is frequently scaled or annealed for numerical reasons <cit.>.
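To make this procedure concrete, the following is a minimal PyTorch-style sketch of a single fELBO mini-batch evaluation under the assumptions stated above. The names (model, dirichlet_mle) and shapes are illustrative rather than taken from the released code, and model(x) is assumed to return class probabilities.

```python
import torch
from torch.distributions import Dirichlet

def felbo_step(model, x, y, prior_conc, M=5, fkl_scale=1.0, eps=1e-4):
    # One mini-batch fELBO evaluation (sketch): M stochastic forward passes,
    # Dirichlet MLE treated as a constant, and both terms averaged over the batch.
    B, K = x.shape[0], prior_conc.shape[-1]
    probs = torch.stack([model(x) for _ in range(M)])             # (M, B, K) class probabilities
    probs = (1.0 - eps) * probs + eps / K                         # label smoothing as in the text

    # Expected log-likelihood of the observed labels over the M function samples.
    ell = torch.log(probs).gather(-1, y.expand(M, B).unsqueeze(-1)).mean()

    # Dirichlet fit to the M samples; detached so that alpha does not depend on theta
    # (the MLE is pruned from the computation graph, EM-style).
    alpha_q = dirichlet_mle(probs.detach())                       # (B, K), e.g. Minka's estimator

    q = Dirichlet(alpha_q)
    p = Dirichlet(prior_conc.expand_as(alpha_q))
    fkl = (q.log_prob(probs) - p.log_prob(probs)).mean()          # Monte Carlo fKL, divided by B

    return -(ell - fkl_scale * fkl)                               # negative fELBO to minimize
```

Gradients reach the fKL only through the sampled probabilities, which corresponds to the partial-derivative scheme discussed in Appendix <ref>.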
Prior Specification
We require the Dirichlet function prior p(f) to be defined as a regular K-dimensional Dirichlet distribution p(f_x) at each input location x.
For most experiments in this paper, we choose p(f_x) = Dir(· | β) with β = 1, which is a constant uniform Dirichlet distribution with precision K.
One might criticize that this constant uniform prior is factorized and does not encode any correlations between input locations.
However, the posterior will still be correlated among input locations due to the neural network.
As we learn a variational posterior over functions by adapting the implicit neural network weights, the neural network function induces smoothness in the variational posterior despite the factorized prior.
A similar scenario arises in conventional weight-space variational inference with factorized Gaussian priors (MFVI):
The weights are also not correlated by the prior yet the neural network function enables learning.
In practice, it is often difficult to define correlated priors in domains with high-dimensional inputs, such as images.
In a toy problem, we show that it is also possible to use more sophisticated priors based on, for example, GPs or random forests.
Nonetheless, the constant uniform prior is simple, scalable, intuitive to understand, yet effective (see Section <ref>).
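For illustration, a small sketch of how such priors could be instantiated is given below; the helper names are ours, and the random-forest settings mirror those used for the toy problem in the implementation details. The categorical predictions of the auxiliary model serve as the Dirichlet mean with a fixed precision.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uniform_dirichlet_prior(S, K):
    # Constant uniform prior Dir(1, ..., 1) with precision K at each of S measurement points.
    return np.ones((S, K))

def forest_dirichlet_prior(X_train, y_train, X_measure, K, z=None):
    # Data-driven prior: random-forest class probabilities as the Dirichlet mean,
    # scaled by a fixed precision z (here z = K to match the uniform prior).
    z = K if z is None else z
    rf = RandomForestClassifier(n_estimators=20, criterion="entropy",
                                max_depth=10, random_state=123)
    rf.fit(X_train, y_train)
    mean = rf.predict_proba(X_measure)                 # (S, K)
    return z * np.clip(mean, 1e-4, None)               # concentration parameters alpha = z * mean
```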
§ IMAGE CLASSIFICATION UNDER ADVERSARIAL ATTACKS
Despite the success of CNNs in computer vision, adversarial attacks are one of the biggest risks when it comes to practical applications <cit.>.
We evaluate the robustness of fVI compared to standard weight-space prior approaches on the CIFAR10 and CIFAR100 data using the fast gradient sign method (FGSM) <cit.>.
Figure <ref> compares the accuracy and the log-likelihood of the test data with increasing amounts of perturbation, ranging from ϵ = 0 (no attack) to ϵ = 0.3.
Although both weight-space and function-space models lose their classification accuracy when the FGSM attack is introduced, the fVI models only suffer small decreases in log-likelihood, whereas the weight-space LLH performance drops significantly.
We also observe the accuracy vs robustness trade-off in the fVI models.
We attribute this behavior to the quality of the uncertainty quantification at the decision boundary.
While both approaches have brittle boundaries due to the nature of CNNs, the predictive uncertainty at these decision boundaries is richer for fVI.
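For reference, a minimal FGSM implementation consistent with this evaluation might look as follows. It is a sketch rather than the exact attack code used here, and it assumes the model outputs class probabilities and that the input normalization range is supplied by the caller.

```python
import torch

def fgsm_attack(model, x, y, eps, clip=None):
    # Single gradient-sign step on the input (FGSM).
    x = x.clone().detach().requires_grad_(True)
    nll = -torch.log(model(x).clamp_min(1e-12)).gather(1, y.unsqueeze(1)).mean()
    nll.backward()
    x_adv = x + eps * x.grad.sign()
    if clip is not None:                  # e.g. clip=(-1.0, 1.0) for inputs normalized to [-1, 1]
        x_adv = x_adv.clamp(*clip)
    return x_adv.detach()
```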
§ IMPLEMENTATION DETAILS AND COMPUTATIONAL COMPLEXITY
We implemented all models using the library <cit.> and all experiments were conducted using a i5-6600K CPU, a GTX1070 GPU and a GTX2080 GPU with less than 300 hours of total runtime.
The Two Moons data was generated by the function from the library using 100 samples, 0.2 noise and random state 456, the manual seed was set to 123.
For this toy problem, all models were MLPs with two hidden layers consisting of 25 hidden units each, bias terms enabled and ReLU activation.
For the Dropout models, the dropout rate was set to 0.2 and for the Ensemble models, we used 10 members per ensemble.
All models were trained for 1000 epochs at a learning rate of 0.005 using the Adam optimizer <cit.> with default parameters.
The measurement set for the KL divergence was the visible 2D input plane, discretized at steps of 0.05.
Since there was no mini-batch training, the KL term was not scaled according to Section <ref>.
This toy experiment is the only exception in this regard.
For the constant uniform prior, β was set to (1, 1).
The GP prior and the random forest prior were implemented using their respective implementations by taking their categorical predictions as a Dirichlet mean and using a Dirichlet precision z = K = 2 to match the precision of the uniform prior.
The GP used the RBF kernel with optimized hyperparameters and the random forest used 20 trees, the 'entropy' criterion and a maximum depth of 10.
The random seeds were set to 123 for both GP and random forest.
For Rotated MNIST, we used all 60000 images of shape 28x28x1 reshaped to 784 from the train split with pixel values normalized to [-1, 1] and no other pre-processing or data augmentation.
All 10000 images from the test set were used during evaluation, rotated by a fixed degree, ranging from 0° to 180° in 10° steps, resulting in a total of 190000 test images.
The MNIST <cit.> data is available under the terms of the Creative Commons Attribution-Share Alike 3.0 license.
All models were MLPs with two hidden layers consisting of 50 units each, bias terms enabled and ReLU activation.
For the Dropout models, the dropout rate was set to 0.2 and for the Ensemble models, we used 10 members per ensemble.
All models were trained for 30 epochs at a learning rate of 0.001 using a mini-batch size of 256.
The measurement set for the KL divergence was the training data itself, except for the measurement set comparison section, where the different measurement sets are stated explicitly.
Results were obtained using 10 random seeds.
For corrupted CIFAR10 and CIFAR100, we used all 50000 images of shape 32x32x3 from the regular train splits.
Following <cit.>, we normalized pixel values using the empirical mean and standard deviation, and employed data augmentation during training by first selecting random crops of size 32x32x3 after adding 4 pixels of zero padding to each side and then randomly flipping 50% of the images horizontally.
All 10000 images from the regular test set were used during evaluation plus their corrupted versions <cit.> with 19 different corruptions and 5 levels of severity, resulting in a total of 960000 test images.
The CIFAR10 and CIFAR100 <cit.> data is available under the terms of the MIT License and the corrupted CIFAR10 and corrupted CIFAR100 <cit.> data is available under the terms of the Apache License 2.0.
All models were CNNs following the ResNet-18 architecture <cit.>,
designed for CIFAR images, rather than ImageNet.
Adopting <cit.>, we trained with a batch size of 128 and used the SGD optimizer with momentum (0.9) for 200 epochs and scaled the learning rate by 0.1 at epochs 100 and 150.
For the MAP, MC Dropout and Ensemble models without fVI, we used 0.0005 weight decay.
For MC Dropout models <cit.>, the dropout rate was set to 0.2.
For Ensemble <cit.> and Ensemble fVI, we used 5 members per ensemble.
For Radial BNNs <cit.> and Radial fVI, we implemented weight priors for all convolutional weights but not for the final linear layer.
The standard deviation σ was parameterized using σ = log(1 + exp(ρ)) and ρ was initialized to -5 while the means were initialized using the default initialization scheme for CNNs.
For Radial BNNs without fVI, we used a closed-form Gaussian weight KL divergence with a Gaussian prior with a mean of 0 and a standard deviation of 0.1.
For Radial fVI, we used our fKL instead of the weight-space KL.
For Rank-1 BNNs <cit.> and Rank-1 fVI, we used 4 ensemble members and 250 training epochs instead of 200 due to slow convergence and scaled the learning rate by 0.1 at epochs 150 and 200.
During training, we used implicit batch ensembling <cit.>, whereas during prediction, we created explicit ensemble predictions by replicating the input.
We placed Rank-1 Gaussian distributions over all convolutional weights but not over the final linear layer.
The standard deviation σ was parameterized using σ = log(1 + exp(ρ)) and ρ was initialized to -3 while the means were initialized to 1.
For Rank-1 BNNs without fVI, following <cit.>, the Rank-1 priors were Gaussian with a mean of 1 and a standard deviation of 0.1, and weight decay of 0.0001 was used.
We did not use KL annealing epochs.
For Rank-1 fVI, we used our fKL instead of the weight-space KL.
For all fVI models, the measurement set for the KL divergence during fVI training was always the training data itself.
Results were obtained using 10 random seeds.
When scaling to higher dimensional classification tasks, specifically K ≥ 100, we observed numerical issues with the fELBO objective when using the uniform Dirichlet predictive prior.
In higher dimensions, this prior would provide greater regularization.
This is because the magnitude of the categorical likelihood does not change with dimensionality, as it is the log probability of the label class.
Conversely, the KL between two Dirichlet densities requires summing over the parameters, so the magnitude naturally increases with K.
To alleviate this over-regularization, we adopt the strategy of <cit.> and apply additional scaling to the KL term in the fELBO.
This scaling can be shown to be numerically equivalent to a certain prior, i.e. β (Section 3.4, <cit.>).
Therefore, optimizing this scaling is a form of model selection.
For our CIFAR100 experiments, we simply chose a scaling such that the fKL magnitude was close to the CIFAR10 values.
We found this to be about 0.1, which matches a 10x scaling suggested by the Dirichlet KL due to the summation terms.
To ensure numerical stability, we defined a minimum and maximum precision for the posterior Dirichlet estimation: z_min = K and z_max = N, where K is the number of classes and N is the number of training examples.
For the MAP models, we skipped the Dirichlet MLE and set z = z_max because M = 1.
Similarly, we used M = 1 for the MC Dropout and Radial BNN models during training and also set z = z_max, although we set M = 10 during evaluation.
For the Ensemble models, M was always the number of members in the ensemble and for the Rank-1 models, we replicated the input M times during evaluation, which results in M distinct predictions.
Furthermore, we applied a small amount of label smoothing, f_x,k^(m) ≈ (1 - γ) f_x,k^(m) + γ/K, throughout all steps of the KL divergence estimation, where γ was set to 10^-4 for our experiments.
Minka's quasi-Newton maximum likelihood Dirichlet precision estimator <cit.>, which we used for our implementation, translated to our notation, is given by
1/z^(new) = 1/z + (1/z^2) (Δ_1/Δ_2),
α̅_k = (1/M) ∑_m=1^M f_x,k^(m),
ᾰ_k = (1/M) ∑_m=1^M log f_x,k^(m),
Δ_1 = M ( ψ_0(z) - ∑_k=1^K α̅_k ψ_0(z α̅_k) + ∑_k=1^K α̅_k ᾰ_k ),
Δ_2 = M ( ψ_1(z) - ∑_k=1^K α̅_k^2 ψ_1(z α̅_k) ),
where ψ_0 is the digamma function and ψ_1 is the trigamma function.
We initialized the algorithm with an approximate maximum likelihood solution using Stirling's approximation to the gamma function Γ <cit.>,
z^(0) = (K - 1) / ( -2 ∑_k=1^K α̅_k (ᾰ_k - logα̅_k) ).
We stopped the algorithm once the change per step is less than 10^-5.
Counting the number of iterations until convergence for a trained Ensemble fVI model with M = 10 ensemble members and 10000 MNIST test examples, the mean was 3.0796, the 95^th quantile was 3, the 99^th quantile was 15 and the maximum was 1172.
Note that the number of iterations until convergence in vectorized mini-batch computation is equal to the maximum number of iterations until convergence of the items in the mini-batch.
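The update above can be implemented in a few lines; the following sketch (ours, using SciPy's digamma/trigamma) follows the stated initialization, stopping criterion, and the z_min = K clipping, and returns the concentration parameters α = z α̅.

```python
import numpy as np
from scipy.special import digamma, polygamma

def dirichlet_mle(f, tol=1e-5, max_iter=1000):
    # Fixed-mean Dirichlet MLE from M probability vectors f of shape (M, K),
    # following Minka's quasi-Newton update for the precision z.
    f = np.clip(f, 1e-6, None)                         # guard against log(0); the text uses label smoothing
    M, K = f.shape
    mean = f.mean(axis=0)                              # alpha_bar_k
    mean_log = np.log(f).mean(axis=0)                  # alpha_breve_k

    # Stirling-based initialization of the precision, clipped to z_min = K.
    denom = max(-2.0 * np.sum(mean * (mean_log - np.log(mean))), 1e-12)
    z = max((K - 1) / denom, float(K))

    for _ in range(max_iter):
        d1 = M * (digamma(z) - np.sum(mean * digamma(z * mean)) + np.sum(mean * mean_log))
        d2 = M * (polygamma(1, z) - np.sum(mean ** 2 * polygamma(1, z * mean)))
        z_new = 1.0 / (1.0 / z + d1 / (d2 * z ** 2))
        if abs(z_new - z) < tol:
            z = z_new
            break
        z = z_new

    return z * mean                                    # concentration parameters alpha_k = z * alpha_bar_k
```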
Although the computational complexity of the underlying deep learning model depends on the model architecture, data input size, number of parameters, etc., for the following comparison, we assume that a single forward pass through the model takes 𝒪(1), i.e. a constant amount of time, because the weight-space and function-space objectives share the same model.
With a mini-batch size of B, computation of the standard ML objective takes 𝒪(BMK) time per mini-batch iteration.
Assuming a constant number of quasi-Newton steps, the Dirichlet precision estimation takes 𝒪(SK + M) time for a measurement set of size S. Computing the fKL for a measurement set of size S takes 𝒪(SMK) time.
If the training data is used as the measurement set, the forward pass through the model can be shared between the log-likelihood and fKL calculation, resulting in an overall asymptotic time complexity of 𝒪(BMK) per mini-batch iteration.
In case of a different measurement set, the asymptotic time complexity becomes 𝒪((B+S)MK).
§ ABLATION STUDIES
Samples During Training
In Section <ref>, a Dirichlet estimation procedure was proposed using M samples.
In the single sample case M = 1, motivated by MAP models, a crude approximation was proposed to approximate the precision with the number of training data samples. During training, the M = 1 approximation was also used for MC Dropout, Radial and Rank-1 BNNs, akin to their respective weight-space variational inference procedures.
To assess the consequence of this approximation, we repeated the CIFAR10 corruption experiment for MC dropout with M = 5, matching the Ensemble models.
Figure <ref> shows that the 5 sample MC Dropout performance is closer to the 1 sample MC Dropout performance than the Ensemble.
This result indicates that the model, rather than M during training, has greater impact.
The similarity in performance between 1 and 5 sample MC Dropout suggests that the 1 sample approximation is reasonable.
Scaling Issues with High Label Dimensionality
The CIFAR100 experiments revealed an issue with the fELBO objective that caused underfitting for the larger label dimension.
To examine why, recall that the categorical likelihood is log f_x,k when y_k = 1.
Therefore, the dimensionality of f_x does not directly influence its value.
Conversely, the KL divergence between two Dirichlet densities KL(p_1 || p_2) does incorporate the label dimensionality K <cit.>,
KL(p_1 || p_2) = logΓ(z^(1)) - ∑_k=1^K logΓ(α_k^(1)) - logΓ(z^(2)) + ∑_k=1^K logΓ(α_k^(2)) + ∑_k=1^K (α_k^(1) - α_k^(2)) (ψ(α_k^(1)) - ψ(z^(1))),
where z^(1) and z^(2) are the precisions (sums) of the respective concentration parameters and ψ is the digamma function.
To counteract this linear increase due to the summation terms, we can apply a heuristic scale factor λ = 1/K to the fKL during training,
ℒ(θ) = 𝔼_f ∼ q(· | θ)[ log p(𝒟 | f) ] - λ KL[ q(f | θ) || p(f) ].
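A quick way to check this scaling argument numerically is to evaluate the closed-form Dirichlet KL for increasing K; the helper below is a sketch using SciPy and the formula above.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha1, alpha2):
    # KL( Dir(alpha1) || Dir(alpha2) ) for concentration vectors of length K.
    z1, z2 = alpha1.sum(), alpha2.sum()
    return (gammaln(z1) - gammaln(z2)
            - np.sum(gammaln(alpha1)) + np.sum(gammaln(alpha2))
            + np.sum((alpha1 - alpha2) * (digamma(alpha1) - digamma(z1))))

# The summation terms grow with K: a mildly concentrated posterior against the
# uniform prior Dir(1, ..., 1) gives a KL that increases roughly linearly in K.
for K in (10, 100, 1000):
    print(K, dirichlet_kl(np.full(K, 2.0), np.ones(K)))
```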
To investigate this relationship between the Dirichlet KL divergence and the number of classes of the classification, we conducted a toy experiment in a hypercube [-1, 1]^D with fixed number of input dimensions D and increasing number of classes K.
The classes were created by using each dimension as a decision boundary, i.e. x_d = 0, and leveraging all permutations to create up to K = 2^D classes, where D was set to 8.
The training data, measurement set for fKL, and test data, consisting of 1000 data points each, were all sampled uniformly at random from the hypercube.
We used a MAP model with 2-layer MLP architecture with 25 hidden units each, bias terms enabled, and ReLU activation functions.
We trained for 3000 epochs using Adam optimizer with a learning rate of 0.005 and default parameters otherwise.
Figure <ref> illustrates the model's test log-likelihood and fKL after training.
The regular MAP model represents a decent baseline with linear decrease in performance as the label dimensionality increases.
In contrast, the test log-likelihood of the MAP fVI model without KL scaling decreases exponentially while the fKL increases approximately linearly.
Applying the above proposed scaling to the fKL during training aids the optimization, keeps the final fKL at convergence consistently low and significantly improves the model's test log-likelihood.
Changing Prior Dirichlet Parameters
The uniform Dirichlet distribution with concentration parameters β_k = 1 is a natural choice for an uninformed prior over the simplex.
However, potentially interesting cases to consider are priors where all β_k are set to another value which is greater or smaller than 1.
While the Dirichlet mean remains the same, β_k > 1 corresponds to greater confidence that the class probabilities are uniformly distributed and β_k < 1 prefers dominance of any particular class.
It was also hypothesized that scaling β_k could yield results comparable to scaling the fKL as discussed in the previous subsection.
To test this hypothesis, we repeated the hypercube experiment from the previous subsection with the MAP fVI model without fKL scaling while using different β_k as prior parameters.
Figure <ref> shows the test log-likelihood of the MAP fVI model without fKL scaling after training with Dirichlet priors using varying prior concentration parameters β_k.
However, there are no significant differences when using different β_k and no particular β_k achieves test log-likelihoods which would be comparable to the improvements due to fKL scaling discussed in the previous subsection.
To further investigate the Dirichlet prior with different concentration parameters, we repeated the visualizable Two Moons toy problem using the MC Dropout fVI model with different β_k.
Figure <ref> depicts the predicted class probabilities of the toy problem with K = 2 classes.
For β_k < 1, the areas of confident prediction enlarge but quickly fall back to uniformity, whereas for β_k > 1, the confident predictions are more locally concentrated, slowly tapering towards uniformity.
§ APPROXIMATE GRADIENT COMPUTATION
As stated in the main paper, we do not compute the total derivative of the approximate fKL divergence
KL[q || p] ≈ (1/M) ∑_l,m=1^L,M ( log q(f_x_l^(m) | θ) - log p(f_x_l^(m)) )
= (1/M) ∑_l,m=1^L,M ( log q(f_x_l^(m)(θ) | α(f_x_l^(1:M)(θ)) ) - log p(f_x_l^(m)(θ)) ),
but only a partial one. Note that we have introduced the symbols α(f_x_l^(1:M)(θ)) and f_x_l^(m)(θ) to highlight the dependence of the Dirichlet posterior estimates on the M implicit functions f_x_l^(m)(θ), m ∈ [1, M], and the dependence of the individual implicit functions on the network parameters θ. The total derivative of the above approximate KL divergence corresponds to
∇_θ KL[q || p]
≈ (1/M) ∑_l,m=1^L,M ( ∇_θ log q(f_x_l^(m)(θ) | α(f_x_l^(1:M)(θ)) ) - ∇_θ log p(f_x_l^(m)(θ)) )
= (1/M) ∑_l,m=1^L,M ∂_f ( log q(f | α(f_x_l^(1:M)(θ)) ) - log p(f) ) |_f = f_x_l^(m)(θ) ∇_θ f_x_l^(m)(θ)
+ (1/M) ∑_l,m=1^L,M ∂_α log q(f_x_l^(m)(θ) | α) |_α = α(f_x_l^(1:M)(θ)) ∇_θ α(f_x_l^(1:M)(θ)).
The partial derivative with which we optimize the fKL divergence in our algorithm omits the last term in (<ref>), i.e. the term involving ∇_θ α(f_x_l^(1:M)(θ)). This simplifies the computation graph, as α(f_x_l^(1:M)(θ)) is a maximum-likelihood estimate computed from the implicit function samples f_x_l^(m)(θ).
Computing ∇_θα(__l^(1:M)(θ)) requires to compute the gradient of a maximum-likelihood estimate (MLE) α w.r.t. the implicit functions f.
Given that we use an iterative scheme to compute an approximate MLE, this would require us to differentiate through each iteration of the MLE computation.
We now want to provide evidence that using this partial derivative of the approximate fKL divergence is still a reasonable choice.
As argued in <cit.>, terms of the form 𝔼_p(x | θ)[ ∂_θlog p(x | θ) ] = 0 tend to introduce high variance into the gradient of variational inference objectives due to the Monte Carlo expectation approximation.
Therefore, omitting that term from the gradient estimate can actually benefit the convergence of gradient-based variational inference methods in certain cases.
We visualize this phenomenon in Figure <ref> for fitting a high-dimensional Dirichlet, in which the gradient-based optimization of a KL divergence objective using the total derivative prematurely converges due to high variance, while an optimization that ignores the variance-inducing terms does not face this problem.
Moreover, the MLE fit of the variational density using 5 samples has no visible effect on the gradient estimation quality, despite fitting a 100-dimensional distribution.
This result indicates that as long as the predictive distribution is approximately Dirichlet, which is a central assumption of this approach, the gradient assumption is reasonable.
However, our method does not exactly match the pathwise gradient of <cit.>.
The ignored term of the total approximate fKL divergence derivative (<ref>) resembles the variance-inducing term, as the M implicit functions f_x^(m)(θ) evaluated at the different elements of the measurement set 𝒮 approximate the expectation over q(f | α),
(1/M) ∑_m=1^M ∂_α log q(f_x^(m)(θ) | α) ≈ 𝔼_q(f | α)[ ∂_α log q(f | α) ] = 0.
However, the implicit function samples f_x^(m)(θ) are clearly not i.i.d. samples, as there is a tight correlation between the parameter α of the Dirichlet distribution q and the implicit function samples.
Moreover, our ensemble model does not use reparameterized gradients, optimizing a set of network weight `particles' instead.
Therefore, we also compared the optimization of the approximate fKL divergence objective using the partial- and total derivative on a particle-based variational representation of the Dirichlet.
To compute the gradient of the MLE w.r.t. the implicit function samples required by the total derivative, we first compute the MLE α by solving the underlying convex optimization problem <cit.> using the library <cit.>.
We then leverage the implicit function theorem <cit.> to compute the gradient of α w.r.t. the samples, leveraging that the gradient of the likelihood function vanishes for the MLE
0 = ∇_f^(1:M)log(p(f^(1:M)|α)).
The results in Figure <ref> indeed highlight that in the setting of this paper, the total derivative leads to a faster descent along the fKL divergence objective per gradient step when optimizing particles.
However, we see that the partial derivatives also minimize the fKL divergence.
Furthermore, due to the lower computational overhead of the partial derivatives, this optimization is carried out in significantly less time.
§ EXPERIMENTAL RESULTS
Numerical values of all our experimental results are reported here.
|
http://arxiv.org/abs/2307.04436v1 | 20230710092159 | Full event simulation of Photoproduction at NLO QCD in Sherpa | [
"Peter Meinzinger"
] | hep-ph | [
"hep-ph"
] |
Full event simulation of Photoproduction at NLO QCD in Sherpa
Peter Meinzinger
Institute for Particle Physics Phenomenology,
Durham University, Durham DH1 3LE, UK
Photoproduction is an important mode for the production of jets and electro-weak particles at lepton–lepton and lepton–hadron colliders and allows for interesting studies of exclusive production at hadron–hadron colliders. In this talk, I will review recent efforts of extending the Sherpa event generator to include the calculation of photoproduction cross sections for electron and proton beams, including the simulation of underlying events. The framework is validated using data of jet production at the HERA and LEP experiments and lepton production at the LHC. I will discuss advances towards achieving matched accuracy and fully capturing the dynamics of inclusive and exclusive photoproduction at different colliders.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION
The cross section of jet production at lepton–hadron or lepton–lepton collider experiments is dominated by the exchange of a virtual photon. While, in particular at the latter, this is well understood at large photon virtualities, the descriptive power of the theoretical calculations deteriorates with decreasing virtuality <cit.>. This has been reflected in decomposing the full cross section into electro- and photoproduction where the latter is identified with a regime where the photon is quasi-real and has to be seen as the incoming particle.
Simulating these events needs a different approach than the typical DIS processes. Here we report on the implementation of the relevant physics in Sherpa and its validation.
§ SIMULATION IN SHERPA
§.§ Photon flux
As the electron decouples from the hard interaction in the scattering, the flux of the quasi-real photons has to be calculated. In the Weizsäcker-Williams approximation <cit.> the cross section is calculated as
dσ_e p → e^' + 2j + X = σ_γ p → 2j + X(x, s)|_Q^2=0 dn(x) ,
where the electron momentum can be reconstructed from the photon and the photon virtuality is integrated out in the equivalent photon flux dn, leaving only the maximum virtuality Q^2_max as a free parameter, which has to be determined by the experimental setup and by the considered process.
For the measurements considered in this study, the photon flux for electron beams includes a mass-dependent correction as proposed in <cit.>:
dn(x) = (α_em/2π) (dx/x) [ (1 + (1 - x)^2) log(Q^2_max/Q^2_min) + 2 m_e^2 x^2 (1/Q^2_min - 1/Q^2_max) ]
Here, x is the fraction of the photon momentum with respect to the electron momentum, m_e is the electron mass and Q_min/max are the minimum and maximum photon virtualities, where the former is given by kinematic constraints as Q^2_min = m_e^2 x^2/(1 - x).
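As an illustration of this flux, a short numerical sketch is given below; the constants are approximate, the virtualities are taken in GeV^2, and the function merely evaluates the equation above rather than being part of the Sherpa implementation.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.035999   # fine-structure constant
M_E = 0.000510999             # electron mass in GeV

def ww_flux(x, q2_max):
    # Equivalent-photon (Weizsaecker-Williams) flux dn/dx for an electron beam,
    # including the electron-mass correction term.
    q2_min = M_E ** 2 * x ** 2 / (1.0 - x)
    return (ALPHA_EM / (2.0 * np.pi) / x
            * ((1.0 + (1.0 - x) ** 2) * np.log(q2_max / q2_min)
               + 2.0 * M_E ** 2 * x ** 2 * (1.0 / q2_min - 1.0 / q2_max)))
```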
§.§ Parton distributions in the photon
Initial State Radiation off the photon cannot be neglected in photoproduction of jets, necessitating the inclusion of the resolved photon component in the calculation, i.e. its hadronic structure. Hence <cit.>:
dσ_γ p → 2 j + X = dσ_γ p → 2 j + X^(hl) + dσ_γ p → 2 j + X^(pl) , with
dσ_γ p → 2 j + X^(hl) = ∑_ij∫dx f_i/γ(x, μ_F^') f_j/p(x, μ_F) dσ̂_ij (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^')
dσ_γ p → 2 j + X^(pl) = ∑_j ∫dx f_j/p(x, μ_F) dσ̂_γ j (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^') ,
where the superscripts stand for the hadron- and point-like photon respectively, the f_i/A are the parton distribution functions (PDFs) related to finding parton i in particle A, the μ_F, R are the factorisation and renormalisation scales, and p are the momenta.
The photon PDF obeys an evolution slightly different to hadronic PDFs, due to the presence of a QED splitting kernel, leading to
∂ f_i/γ/∂logμ^2 = α_S/2 π∑_j P_ij⊗ f_j/γ + α_em/2 π P_iγ
with P the splitting kernels and where the first term is the usual QCD evolution and the latter the QED evolution stemming from a photon splitting into two quarks.
Photon PDFs from Glück-Reya-Vogt <cit.>, Glück-Reya-Schienbein <cit.>, Slominski-Abramowicz-Levy <cit.>, and Schuler-Sjöstrand <cit.> have been included in Sherpa.
As exemplified in a comparison between two PDF sets (the SAS1D paramterisation by Schuler-Sjöstrand and the set by Slominski-Abramowicz-Levy) in Fig. <ref>, there are large deviations, especially in the gluon distribution function.
The distinction of direct and resolved processes cannot be maintained at Next-to-Leading-Order (NLO) due to the ambiguity of real emissions. While the resolved-photon processes can be computed at NLO analogously to jet production in p-p collisions, the direct-photon processes show divergences stemming from the photon splittings P_iγ. However, in <cit.> it was shown that these divergences cancel against the resolved-photon cross-section as these splittings are re-absorbed into the PDF by means of the inhomogeneous term proportional to P_iγ in the evolution equation. Hence, these divergences can be subtracted from dσ_γ p → 2 j + X^(pl) and care only has to be taken to use a photon PDF with the correct evolution and the same factorisation scheme as in the matrix element generation. The calculation can then be matched to the parton shower with the prescription. The main difference lies in the fact that momentum fractions have to be calculated with respect to the variable photon energies instead of fixed beam energies.
§ VALIDATION
For validation, the simulation has been compared to data from the HERA and LEP colliders, namely photoproduction of one or two jets at the ZEUS, OPAL and L3 experiments. Typical observables in these analyses are the (average) jet transverse energy E_T, the pseudo-rapidity η, cosΘ^*, which approximates the angle between the two jets, and x_γ^±, which is defined as
x_γ^± = ( ∑_j=1,2 E^(j)± p_z^(j)) / ( ∑_i∈ hfs E^(i)± p_z^(i) )
and works as a proxy to experimentally distinguish the direct from the resolved modes.
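As a concrete illustration, x_γ^± can be computed directly from the reconstructed four-momenta; the snippet below is a sketch with an assumed (E, p_x, p_y, p_z) tuple convention.

```python
def x_gamma(jets, hadronic_final_state, sign=+1):
    # x_gamma^+/- from the two leading jets and the full hadronic final state,
    # each given as iterables of (E, px, py, pz) four-momenta.
    num = sum(E + sign * pz for (E, px, py, pz) in jets)
    den = sum(E + sign * pz for (E, px, py, pz) in hadronic_final_state)
    return num / den
```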
In Fig. <ref> we studied, at LO where all PDF sets could be used, the uncertainties from the different PDF parametrisations and found significant deviations, in agreement with the large discrepancies in the parton distributions. This underlines the need for a new fit to the available data and a more thorough study of the parton distribution of the real and quasi-real photon. Overall, the simulation shows good agreement with the data within the uncertainties. The results at NLO, cf. Figs. <ref> and <ref>, were generated as an average over the SAS1M and SAS2M PDF sets, which use the MSbar factorisation scheme.
§ OUTLOOK
§.§ Minimum Bias photoproduction for the LHC
Multiple-parton interactions are non-negligible in photoproduction <cit.> and the implementation based on <cit.> has been extended to also cover parametrisations of γ p and γγ interactions.
One object of study could be the simulation of Minimum Bias events where interactions are allowed not only between the two proton beams, but also in the photon–proton and photon–photon systems, to examine systems with rapidity gaps at the LHC.
When studying semi-diffractive processes, e.g. at the LHC, the LUXqed PDF can be used to access both the elastic and the dissociative contributions to the photoproduction processes.
§.§ Diffractive photoproduction and pomeron exchange
The diffractive production of jets is often understood in terms of a pomeron exchange which is factorized into a pomeron flux and a pomeron parton distribution. At HERA this factorisation was observed to break down, so there is ongoing interest in understanding this phenomenon <cit.>, especially in view of the upcoming Electron-Ion Collider.
The implementation of the pomeron flux in Sherpa is work in progress.
§ SUMMARY
We showed progress in Sherpa towards including photoproduction at various colliders and achieving matched NLO accuracy in QCD. The validation has been done at LO and is ongoing for NLO. We also discussed several ideas on how to extend the framework further and use it in experimental studies at the LHC and the EIC.
|
http://arxiv.org/abs/2307.05182v2 | 20230711113540 | CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery | [
"Long Bai",
"Mobarakol Islam",
"Hongliang Ren"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
Department of Electronic Engineering, The Chinese University of Hong Kong (CUHK), Hong Kong SAR, China
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
Shun Hing Institute of Advanced Engineering, CUHK, Hong Kong SAR, China
[email protected], [email protected], [email protected]
CAT-ViL: Co-Attention Gated Vision-Language Embedding for Visual Question Localized-Answering in Robotic Surgery
Long Bai1 ⋆
Mobarakol Islam2
Long Bai and Mobarakol Islam are co-first authors.
Hongliang Ren1,3
Corresponding author.
August 12, 2023
==============================================================================================================================
Medical students and junior surgeons often rely on senior surgeons and specialists to answer their questions when learning surgery. However, experts are often busy with clinical and academic work, and have little time to give guidance. Meanwhile, existing deep learning (DL)-based surgical Visual Question Answering (VQA) systems can only provide simple answers without the location of the answers. In addition, vision-language (ViL) embedding is still a less explored research in these kinds of tasks. Therefore, a surgical Visual Question Localized-Answering (VQLA) system would be helpful for medical students and junior surgeons to learn and understand from recorded surgical videos. We propose an end-to-end Transformer with the Co-Attention gaTed Vision-Language (CAT-ViL) embedding for VQLA in surgical scenarios, which does not require feature extraction through detection models. The CAT-ViL embedding module is designed to fuse multimodal features from visual and textual sources. The fused embedding will feed a standard Data-Efficient Image Transformer (DeiT) module, before the parallel classifier and detector for joint prediction. We conduct the experimental validation on public surgical videos from MICCAI EndoVis Challenge 2017 and 2018. The experimental results highlight the superior performance and robustness of our proposed model compared to the state-of-the-art approaches. Ablation studies further prove the outstanding performance of all the proposed components. The proposed method provides a promising solution for surgical scene understanding, and opens up a primary step in the Artificial Intelligence (AI)-based VQLA system for surgical training. Our code is available at https://github.com/longbai1006/CAT-ViLgithub.com/longbai1006/CAT-ViL.
§ INTRODUCTION
Specific knowledge in the medical domain needs to be acquired through extensive study and training. When faced with a surgical scenario, patients, medical students, and junior doctors usually come up with various questions that need to be answered by surgical experts, and therefore, to better understand complex surgical scenarios. However, the number of expert surgeons is always insufficient, and they are often overwhelmed by academic and clinical workloads. Therefore, it is difficult for experts to find the time to help students individually <cit.>. Automated solutions have been proposed to help students learn surgical knowledge, skills, and procedures, such as pre-recorded videos, surgical simulation and training systems <cit.>, etc. Although students may learn knowledge and skills from these materials and practices, their questions still need to be answered by experts. Recently, several approaches <cit.> have demonstrated the feasibility of developing safe and reliable VQA models in the medical field. Specifically, Surgical-VQA <cit.> made effective answers regarding tools and organs in robotic surgery, but they were still unable to help students make sense of complex surgical scenarios. For example, suppose a student asks a question about the tool-tissue interaction for a specific surgical tool, the VQA model can only simply answer the question, but cannot directly indicate the location of the tool and tissue in the surgical scene. Students will still need help understanding this complex surgical scene. Another problem with Surgical-VQA is that their sentence-based VQA model requires datasets with annotation in the medical domain, and manual annotation is time-consuming and laborious.
Currently, extensive research and progress have been made on VQA tasks in the computer vision domain <cit.>. Models using long-short term memory modules <cit.>, attention modules <cit.>, and Transformer <cit.> significantly boost the performance in VQA tasks. Furthermore, FindIt <cit.> proposed a unified Transformer model for joint object detection and ViL tasks. However, firstly, most of these models acquire the visual features of key targets through object detection models. In this case, the VQA performance strongly depends on the object detection results, which hinders the global understanding of the surgical scene <cit.>, and makes the overall solution not fully end-to-end. Second, many VQA models employ simple additive, averaging, scalar product, or attention mechanisms when fusing heterogeneous visual and textual features. Nevertheless, in heterogeneous feature fusion, each feature represents different meanings, and simple techniques cannot achieve the best intermediate representation from heterogeneous features. Finally, the VQA model cannot highlight specific regions in the image relevant to the question and answer. Supposing the location of the object in the surgical scene can be known along with the answer by VQLA models, students can compare it with the surrounding tissues, different surgical scenes, preoperative scan data, etc., to better understand the surgical scene <cit.>.
In this case, we propose CAT-ViL DeiT for VQLA tasks in surgical scene understanding. Specifically, our contributions are three-fold: (1) We carefully design a Transformer-based VQLA model that can relate the surgical VQA and localization tasks at an instance level, demonstrating the potential of AI-based VQLA system in surgical training and surgical scene understanding. (2) In our proposed CAT-ViL embedding, the co-attention module allows the text embeddings to have instructive interaction with visual embeddings, and the gated module works to explore the best intermediate representation for heterogeneous embeddings. (3) With extensive experiments, we demonstrate the extraordinary performance and robustness of our CAT-ViL DeiT in localizing and answering questions in surgical scenarios. We compare the performance of detection-based and detection-free feature extractors. We remove the computationally costly and error-prone detection proposals to achieve superior representation learning and end-to-end real-time applications.
§ METHODOLOGY
§.§ Preliminaries
VisualBERT <cit.> generates text embeddings (including token embedding e_t, segment embedding e_s, and position embedding e_p) based on the strategy of natural language model BERT <cit.>, and uses object detection model to extract visual embeddings (consisting of visual features representation f_v, segment embedding f_s and position embedding f_p). Then, it concatenates visual and text embeddings before feeding the subsequent multilayer Transformer module.
Multi-Head Attention <cit.> can focus limited attention on key, high-value information. In each head 𝐡_i, given a query q ∈ℝ^d_q, a key matrix K ∈ℝ^d_k and a value matrix V ∈ℝ^d_v, the attention is calculated as 𝐡_i=A(𝐖_i^(q)𝐪, 𝐖_i^(K)𝐊, 𝐖_i^(V)𝐕). 𝐖_i^(q)∈ℝ^p_q × d_q, 𝐖_i^(k)∈ℝ^p_k × d_k, 𝐖_i^(v)∈ℝ^p_v × d_v are learnable parameters, and A represents the single-head attention aggregation function. A linear conversion is then applied for the attention aggregation from the multiple heads:
𝐡 = MA(𝐖_o[𝐡_1 …𝐡_h]),
where 𝐖_o ∈ℝ^p_o × h p_v contains the learnable parameters of the multi-head aggregation. Each head may focus on a different part of the input to achieve the optimal output.
§.§ CAT-ViL DeiT
We present CAT-ViL DeiT to process the information from different modalities and implement the VQLA task in the surgical scene. DeiT <cit.> serves as the backbone of our network. As shown in Fig. <ref>, the network consists of a vision feature extractor, a customized trained tokenizer, a co-attention gated embedding module, a standard DeiT module, and task-specific heads.
Feature Extraction:
Taking a given image and the associated question, conventional VQA models usually extract visual features via object proposals <cit.>. Instead, we employ ResNet18 <cit.> pre-trained on ImageNet <cit.> as our visual feature extractor. This design enables faster inference speed and global understanding of given surgical scenes. The text embeddings are acquired via a customized pre-trained tokenizer <cit.>. The CAT-ViL embedding module then processes and fuses the input embeddings from different modalities.
CAT-ViL Embedding:
In the following, the extracted features are processed into visual and text embeddings following VisualBERT <cit.> as described in Section <ref>. However, VisualBERT <cit.> and VisualBERT ResMLP <cit.> naively concatenate the embeddings from different modalities without optimizing the intermediate representation between heterologous embeddings. In this case, information and statistical representations from different modalities cannot interact perfectly and serve subsequent tasks.
Inspired by <cit.>, we replace the naive concatenation operation with a co-attention gated ViL module. The gated module can explore the best combination of the two modalities. Co-attention learning enables active information interaction between visual and text embeddings. Specifically, the guided-attention module is applied to infer the correlation between the visual and text embeddings. The normal self-attention module contains a multi-head attention layer, a feed-forward layer, and ReLU activation. The guided-attention module also contains the above components, but its input comes from both modalities, in which the query q is from the visual embeddings and K, V are from the text embeddings:
𝐡_i=A(𝐖_i^(q)𝐪_visual, 𝐖_i^(K)𝐊_text, 𝐖_i^(V)𝐕_text)
Therefore, the visual embeddings are reconstructed from the original visual queries and the keys and values of the text embeddings, which allows the text embeddings to interact instructively with the visual embeddings and helps the model focus on the image context related to the question. Six guided-attention layers are applied in our network, so the correlation between questions and image regions can be constructed gradually. Besides, we also build six self-attention blocks for both visual and text embeddings to strengthen the internal relationships within each modality. This step also avoids `over'-guidance and seeks a trade-off. Then, the attended text embeddings and the text-guided attended visual embeddings are output from the co-attention module and propagated through the gated module.
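A possible realization of one text-guided attention layer is sketched below in PyTorch; the residual connections, layer normalization and dimensions are our assumptions for illustration and do not necessarily match the released implementation.

```python
import torch.nn as nn

class GuidedAttentionBlock(nn.Module):
    # Text-guided attention over visual tokens: queries come from the visual
    # embeddings, keys and values from the text embeddings.
    def __init__(self, dim=768, heads=8, ff_dim=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, ff_dim), nn.ReLU(), nn.Linear(ff_dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, visual, text):
        attended, _ = self.attn(query=visual, key=text, value=text)
        visual = self.norm1(visual + attended)
        return self.norm2(visual + self.ff(visual))
```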
Compared to the naive concatenation <cit.>, summation <cit.>, or the multilayer perceptron (MLP) layer <cit.>, this learnable gated neuron-based model can control the contribution of multimodal input to output through selective activation (set as tanh here). The gate node α is employed to control the weight for selective visual and text embedding aggregation. The equations of the gated module are:
𝐄_𝐨 =𝐰 * tanh(θ_v·𝐄_v) +(1-𝐰) * tanh(θ_t ·𝐄_t)
𝐰 =α(θ_𝐰·[𝐄_v𝐄_t])
𝐄_v and 𝐄_t denote the visual and text embeddings, respectively.
(θ_v, θ_t, θ_𝐰) are learnable parameters. [· ·] denotes the concatenation operation. 𝐄_𝐨 is the final output embedding. The activation function internally encodes the text and visual embeddings separately, and the gate weights are used for embedding fusion. This method is uncomplicated and effective, and can optimize the intermediate aggregation of visual and text embeddings while constraining the model.
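A compact PyTorch sketch of the gated fusion described by the two equations above is shown below; the sigmoid gate activation and the embedding dimension are assumptions on our part.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Gated vision-language fusion: E_o = w * tanh(theta_v E_v) + (1 - w) * tanh(theta_t E_t).
    def __init__(self, dim=768):
        super().__init__()
        self.theta_v = nn.Linear(dim, dim, bias=False)
        self.theta_t = nn.Linear(dim, dim, bias=False)
        self.theta_w = nn.Linear(2 * dim, dim, bias=False)

    def forward(self, e_v, e_t):
        # Gate weights w = alpha(theta_w [E_v, E_t]); a sigmoid is a common choice for alpha.
        w = torch.sigmoid(self.theta_w(torch.cat([e_v, e_t], dim=-1)))
        return w * torch.tanh(self.theta_v(e_v)) + (1.0 - w) * torch.tanh(self.theta_t(e_t))
```

Compared with plain concatenation, the gate lets the network modulate, per dimension, how much each modality contributes to the fused embedding.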
Subsequently, the fused embeddings 𝐄_o shall feed the pre-trained DeiT-Base <cit.> module before the task-specific heads. The pre-trained DeiT-Base module can learn the joint representation, resolve ambiguous groundings from multimodel information, and maximize performance.
Prediction Heads:
The classification head, following the normal classification strategy, is a linear layer with Softmax activation. Regarding the localization head, we follow the setup in Detection with Transformers (DETR) <cit.>. A simple feed-forward network (FFN) with a 3-layer perceptron, ReLU activation, and a linear projection layer is employed to fit the coordinates of the bounding boxes.
The entire network is therefore built end-to-end without multi-stage training.
Loss Function:
Normally, the cross-entropy loss ℒ_CE serves as our classification loss. The combination of ℒ_1-norm and Generalized Intersection over Union (GIoU) loss <cit.> is adopted to conduct bounding box regression. GIoU loss <cit.> further emphasizes both overlapping and non-overlapping regions of bounding boxes. Then, the final loss function is ℒ = ℒ_CE + (ℒ_GIoU +ℒ_1).
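For clarity, the joint objective can be assembled as in the sketch below; boxes are assumed to be in (x1, y1, x2, y2) format for the GIoU term (DETR-style pipelines typically convert from normalized (cx, cy, w, h) first), and no loss weights are assumed beyond the equation above.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou

def vqla_loss(class_logits, box_pred, labels, box_target):
    # L = L_CE + (L_GIoU + L_1), with one predicted box per sample.
    l_ce = F.cross_entropy(class_logits, labels)
    l_l1 = F.l1_loss(box_pred, box_target)
    giou = torch.diag(generalized_box_iou(box_pred, box_target))
    return l_ce + (1.0 - giou).mean() + l_l1
```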
§ EXPERIMENTS
§.§ Dataset
EndoVis 2018 Dataset is a public dataset with 14 robotic surgery videos from MICCAI Endoscopic Vision Challenge <cit.>. The VQLA annotations are publicly accessible by <cit.>, in which the QA pairs are from <cit.> and the bounding box annotations are from <cit.>. Specifically, the QA pairs include 18 different single-word answers regarding organs, surgical tools, and tool-organ interactions. When the question is about organ-tool interactions, the bounding box will contain both the organ and the tool. We follow <cit.> to use video [1, 5, 16] as the test set and the remaining as the training set. Statistically, the training set includes 1560 frames and 9014 QA pairs, and the test set has 447 frames and 2769 QA pairs.
EndoVis 2017 Dataset is also a publicly available dataset from the MICCAI Endoscopic Vision Challenge 2017 <cit.>, and the annotations are also available by <cit.>. We employ this dataset as an external validation dataset to demonstrate the generalization capability of our model in various surgical domains. Specifically, we manually select and annotate frames with common organs, tools, and interactions in EndoVis 2017 Dataset, generating 97 frames with 472 QA pairs. We conduct no training but only testing on this external validation dataset.
§.§ Implementation Details
We conduct our comparison experiments against VisualBERT <cit.>, VisualBERT ResMLP <cit.>, MCAN <cit.>, VQA-DeiT <cit.>, MUTAN <cit.>, MFH <cit.>, and BlockTucker <cit.>. In VQA-DeiT, we use a pre-trained DeiT-Base block <cit.> to replace the multilayer Transformer module in VisualBERT <cit.>. To keep the comparison of VQLA tasks fair, we use the same prediction heads and loss function as in Section <ref> for all models. The evaluation metrics are accuracy, f-score, and mean intersection over union (mIoU) <cit.>. All models are trained on NVIDIA RTX 3090 GPUs using the Adam optimizer <cit.> with PyTorch. The epoch, batch size, and learning rate are set to 80, 64, and 1 × 10^-5, respectively. The reported experimental results are averaged over five different random seeds.
§.§ Results
Fig. <ref> presents the visualization and qualitative comparison of the surgical VQLA system. Quantitative evaluation in Table <ref> presents that our proposed model using ResNet18 <cit.> feature extractor suppresses all SOTA models significantly. Additionally, we compare the performance between using object proposals (Faster RCNN <cit.>) and using features from the entire image (ResNet18 <cit.>). The experimental results in EndoVis-18 show that removing the object proposal model improves the performance appreciably on both question-answering and localization tasks, which demonstrates the impact of this approach in correcting potential false detections. Meanwhile, in the external validation set - EndoVis-17, our CAT-ViL DeiT with RCNN feature extractor suffers from domain shift and class imbalance problems, thus achieving poor performance. However, our final model, CAT-ViL DeiT with ResNet18 feature extractor, endows the network with global awareness and outperforms all baselines in terms of accuracy and mIoU, proving the superiority of our method. The inference speed is also enormously accelerated, demonstrating its potential in real-time applications.
Furthermore, a robustness experiment is conducted to observe the model stability when test data is corrupted. We set 18 types of corruption on the test data based on the severity level from 1 to 5 by following <cit.>. Then, the performance of our model and all comparison methods on each corruption severity level is presented in Fig. <ref>. As the severity increases, the performance of all models degrades. However, our model shows good stability against corruption, and presents the best prediction results at each severity level. The excellent robustness of our model brings great potential for real-world applications.
Finally, we conduct an ablation study on different ViL embedding techniques with the same feature extractors and DeiT backbone in Table <ref>. We compare with Concatenation <cit.>, Joint Cross-Attention (JCA) <cit.>, Multimodal Multi-Head Convolutional Attention (MMHCA) <cit.>, Multimodal Attention Transformers (MAT) <cit.>, Gated Fusion <cit.>, Self-Attention Fusion <cit.>, Guided-Attention Fusion <cit.>, Co-Attention Fusion (T2V: Text-Guide-Vision) <cit.>. Besides, we explore the Co-Attention module with different directions (V2T: Vision-Guide-Text, and Bidirectional). Furthermore, we also incorporate the Gated Fusion with different attention mechanisms (Self-Attention, Guided-Attention, Bidirectional Co-Attention, Co-Attention (V2T), Co-Attention (T2V)) for detailed comparison. They are shown as `Self-Attn Gated', `Guided-Attn Gated', `CAT-ViL (Bi)', `CAT-ViL (V2T)' and `CAT-ViL (T2V)' in Table <ref>. The study proves the superior performance of our ViL embedding strategy against other advanced methods. We also demonstrate that integrating attention feature fusion techniques and the gated module will bring performance improvement.
§ CONCLUSIONS
This paper presents a Transformer model with CAT-ViL embedding for the surgical VQLA tasks, which can give the localized answer based on a specific surgical scene and associated question. It brings up a primary step in the study of VQLA systems for surgical training and scene understanding. The proposed CAT-ViL embedding module is proven capable of optimally facilitating the interaction and fusion of multimodal features. Numerous comparative, robustness, and ablation experiments display the leading performance and stability of our proposed model against all SOTA methods in both question-answering and localization tasks, as well as the potential of real-time and real-world applications. Furthermore, our study opens up more potential VQA-related problems in the medical community. Future work can be focused on quantifying and improving the reliability and uncertainty of these safety-critical tasks in the medical domain.
§.§.§ Acknowledgements.
This work was funded by Hong Kong RGC CRF C4063-18G, CRF C4026-21GF, RIF R4020-22, GRF 14203323, GRF 14216022, GRF 14211420, NSFC/RGC JRS N_CUHK420/22; Shenzhen-Hong Kong-Macau Technology Research Programme (Type C 202108233000303); Guangdong GBABF #2021B1515120035. M. Islam was funded by EPSRC grant [EP/W00805X/1].
§ SUPPLEMENTARY MATERIALS FOR “CAT-VIL: CO-ATTENTION GATED VISION-LANGUAGE EMBEDDING FOR VISUAL QUESTION LOCALIZED-ANSWERING IN ROBOTIC SURGERY”
|
http://arxiv.org/abs/2307.10208v1 | 20230714115503 | COHERENT Collaboration data release from the measurements of CsI[Na] response to nuclear recoils | [
"D. Akimov",
"P. An",
"C. Awe",
"P. S. Barbeau",
"B. Becker",
"V. Belov",
"I. Bernardi",
"M. A. Blackston",
"C. Bock",
"A. Bolozdynya",
"J. Browning",
"B. Cabrera-Palmer",
"D. Chernyak",
"E. Conley",
"J. Daughhetee",
"J. Detwiler",
"K. Ding",
"M. R. Durand",
"Y. Efremenko",
"S. R. Elliott",
"L. Fabris",
"M. Febbraro",
"A. Gallo Rosso",
"A. Galindo-Uribarri",
"M. P. Green",
"M. R. Heath",
"S. Hedges",
"D. Hoang",
"M. Hughes",
"T. Johnson",
"A. Khromov",
"A. Konovalov",
"E. Kozlova",
"A. Kumpan",
"L. Li",
"J. M. Link",
"J. Liu",
"K. Mann",
"D. M. Markoff",
"J. Mastroberti",
"P. E. Mueller",
"J. Newby",
"D. S. Parno",
"S. I. Penttila",
"D. Pershey",
"R. Rapp",
"H. Ray",
"J. Raybern",
"O. Razuvaeva",
"D. Reyna",
"G. C. Rich",
"J. Ross",
"D. Rudik",
"J. Runge",
"D. J. Salvat",
"A. M. Salyapongse",
"J. Sander",
"K. Scholberg",
"A. Shakirov",
"G. Simakov",
"G. Sinev",
"W. M. Snow",
"V. Sosnovtsev",
"B. Suh",
"R. Tayloe",
"K. Tellez-Giron-Flores",
"I. Tolstukhin",
"E. Ujah",
"J. Vanderwerp",
"R. L. Varner",
"C. J. Virtue",
"G. Visser",
"T. Wongjirad",
"Y. Yang",
"Y. -R. Yen",
"J. Yoo",
"C. -H. Yu",
"J. Zettlemoyer"
] | physics.ins-det | [
"physics.ins-det"
] |
COHERENT Collaboration data release from the measurements of CsI[Na] response to nuclear recoils
The COHERENT Collaboration
August 12, 2023
==============================================================================================
§ OVERVIEW OF THE RELEASE
The data release <cit.> includes the CsI[Na] nuclear recoil quenching factor (QF) data acquired in a series of measurements performed by the COHERENT collaboration and described in ref. <cit.> and references therein. We duplicate the scripts from the release in the public repository <cit.>, which can be updated if any issues are found. Please direct questions about the material provided within this release to the corresponding authors.
§ STRUCTURE
The root directory of the data release is . It contains five subfolders. Four of them correspond to the four COHERENT CsI[Na] QF measurements (referred to as COHERENT-1 through COHERENT-4 below), and the remaining one is for a global QF data fit tool based on the data from ref. <cit.>. Each of the numbered folders contains the data organized according to the steps in our data acquisition and analysis:
* — time of flight data for characterizing the energy distribution of incident neutrons;
* — data for the calibration of the backing detectors (BD);
* — data for the calibration of the CsI[Na] detector;
* — neutron beam data for determining the CsI[Na] QF.
Example code macros to read the data files and plot the recorded waveforms are also stored in the folders and share a common naming pattern. The folders also contain the files with the energy distributions of incident neutrons evaluated from the time-of-flight data. The MCNPX-PoliMi based predictions of the nuclear recoil energy depositions in CsI[Na], and macros to read these predictions, are stored in the subfolders inside the folders.
§ GUIDANCE ON ANALYSIS
§.§ Evaluation of incident neutron energy
The incident neutron energy is evaluated based on the time of flight data acquired with an EJ - 309 liquid scintillator detector. The detector was placed at a known distance from the ion beam target – either a deuterium gas cell or Li foil – in which neutrons were produced. Beam related gamma rays and neutrons are registered by the detector. The capability of EJ-309 to distinguish between signals from neutrons and gamma rays provides an estimate of the delay between the arrival of the former and the latter, enabling the kinematic reconstruction of incident neutron energies. Both arrival times are defined relative to the periodic signal of the beam-pick-off monitor (BPM) associated with primary ion beam pulse. The exact expression for the velocity of neutrons can be evaluated in the following way:
ct = v_n t + v_n Δt = d,
v_n = cd/(c Δt + d),
where d is the distance travelled by neutrons, t is the time needed for a gamma ray to travel d, Δ t is the time delay between arrival times of neutrons and gamma rays and c is the speed of light. We suggest converting directly from the time of flight spectrum (histogram)
to a distribution of neutron velocities and then to energy while taking into account non-linear dependence of the energy on velocity (variable bin size of the final histogram). Results of such a straightforward approach for COHERENT-1/2 match with the more complex evaluation based on the MCNPX-PoliMi simulation. We recommend using constant fraction discrimination (CFD) thresholds for analysis of both the EJ-309 and BPM waveforms. It is useful to pick only one of the BPM pulses (e.g. the first following/preceding the EJ-309 signal) to determine the time delay if several pulses are presented in the recorded waveform. The leading uncertainties in neutron energies are associated with the time resolution of the measurements, the number of distances at which measurements were performed, and uncertainties in the neutron production site in the source as well as interaction site in the EJ-309 cell. Table <ref> presents a summary of the available time of flight measurements. The distances listed take into account the contributions from the source geometries and the EJ-309 cell. The uncertainty of the distance measurements is about 3.8 cm for COHERENT-1/2/3 and about 1 cm for COHERENT-4 for the absolute value, but is cancelled out if the difference of distances is considered. The difference in path length between gamma rays and neutrons is negligible. The velocity of neutrons can be calculated with the classical expression. Examples of time-of-flight spectra and evaluated neutron energy distributions can be found in Fig. 1 and Tab. 2 of ref. <cit.>. The description of the MCNPX-PoliMi simulation of the TOF data and the corresponding neutron beam energy spectra for COHERENT-1/2 can be found in Appendix D of ref. <cit.>. The definitions of pulse shape discrimination (PSD) parameters required to separate neutron- and gamma-induced signals in the EJ-309 detector and illustrations of PSD plots can be found in Fig. 3, 6, 9 of ref. <cit.>.
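A bare-bones version of the suggested spectrum conversion is sketched below; the constants and the classical kinetic-energy expression are standard, but the function names and the simple edge-transform of the histogram are our own illustration.

```python
import numpy as np

C = 29.9792458      # speed of light in cm/ns
M_N = 939.565       # neutron rest mass in MeV/c^2

def neutron_energy(delta_t_ns, distance_cm):
    # Kinetic energy (MeV) from the neutron-gamma time difference and flight path,
    # using v_n = c d / (c dt + d) and the classical expression E = m v^2 / 2.
    v = C * distance_cm / (C * delta_t_ns + distance_cm)
    return 0.5 * M_N * (v / C) ** 2

def tof_hist_to_energy(bin_edges_ns, counts, distance_cm):
    # Transform the time-of-flight bin edges to energy; the counts are preserved,
    # which yields the variable bin sizes of the final energy histogram.
    e_edges = neutron_energy(np.asarray(bin_edges_ns), distance_cm)[::-1]
    return e_edges, np.asarray(counts)[::-1]
```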
The neutron energy can be verified using the time delays between neutrons and gammas measured at different distances from the source. Such a cross-check is independent of the absolute measurement of the distance from the source to the EJ-309 cell and uses only the relative distance information for the different detector positions:
v_n = (d_2 - d_1)/(t_2 - t_1) ,
where d_1, d_2 are the distances between the source and the EJ-309 cell and t_1, t_2 are the maxima of the neutron arrival time distributions at these distances. This method suffers from an increased relative uncertainty because of the compound effect of the two distance measurements.
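A short illustration of this cross-check and of the loss in relative precision caused by combining two distance and two timing measurements (all numbers below are placeholders, not values from the release):

```python
import numpy as np

def v_two_distances(d1, d2, t1, t2, sd=10.0, st=0.5):
    """Relative-distance estimate v_n = (d2-d1)/(t2-t1) with simple Gaussian
    error propagation; sd, st are per-measurement sigmas in mm and ns."""
    v = (d2 - d1) / (t2 - t1)
    rel_v = np.sqrt(2 * sd**2 / (d2 - d1) ** 2 + 2 * st**2 / (t2 - t1) ** 2)
    return v, rel_v

v, rel_v = v_two_distances(d1=1000.0, d2=2000.0, t1=40.0, t2=80.0)
# since E ~ v^2, the relative energy uncertainty is about twice rel_v
print(f"v_n = {v:.1f} mm/ns, dv/v = {100*rel_v:.1f}%, dE/E = {200*rel_v:.1f}%")
```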
§.§ Calibration of backing detectors
The COHERENT-1/2/3 QF measurements utilized organic scintillator detectors to tag neutrons scattering off the CsI[Na] crystal. The calibration of the electron recoil (ER) energy scale of these backing detectors is required to account for the effect that the BD energy deposition selections have on the nuclear recoil (NR) spectra. We calibrated all 12 EJ-309 BDs of the COHERENT-1 measurement with a neutron energy deposition having an endpoint of about 1.3 MeV_ee <cit.>. For the COHERENT-2 measurement we suggest calibrating the EJ-299-33A BD with ^22Na and ^137Cs gamma rays of 511 keV and 662 keV, with corresponding Compton edges of 341 keV and 477 keV. Plots presenting the results of such a calibration can be found in ref. <cit.> (Fig. 8.5). The calibrations of the BD energy scale in the original analysis <cit.> and the reanalysis <cit.> agree within 3%. The ^252Cf calibration provides a verification of the PSD parameter values for neutron-induced NRs and for signals from gamma rays. The ^137Cs source was also used to calibrate the EJ-309 liquid scintillator BD in the COHERENT-3 measurement.
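The quoted Compton-edge energies follow from the standard kinematic limit E_C = E_γ/(1 + m_e c²/(2E_γ)); a quick check:

```python
M_E = 510.999  # electron rest energy, keV

def compton_edge(e_gamma_kev):
    """Maximum energy transferred to an electron by a gamma ray (keV)."""
    return e_gamma_kev / (1.0 + M_E / (2.0 * e_gamma_kev))

for e in (511.0, 661.7):  # 22Na annihilation and 137Cs lines
    print(f"E_gamma = {e:6.1f} keV -> Compton edge = {compton_edge(e):5.1f} keV")
# prints about 341 keV and 477 keV, matching the values used in the calibration
```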
§.§ Calibration of CsI[Na]
The calibration of the CsI[Na] ER energy scale was performed with the 59.5 keV gamma ray line of ^241Am in all of the COHERENT QF measurements. The description of the signal analysis approach and illustrations of the energy deposition spectra can be found in Section 3 and Fig. 2 of ref. <cit.>. Table <ref> presents benchmark values of the 59.5 keV peak positions in “ADC units”, a measure of the integral equal to “ADC counts × ADC time sample”, for each measurement. We also provide the information needed for the conversion from ADC units to nVs and to PMT photoelectrons (PE). The uncertainties in Tab. <ref> represent the spread of the values obtained throughout the dataset rather than a particular statistical or systematic uncertainty. Note that the mean single photoelectron (SPE) integrals presented in Tab. <ref> were evaluated with the help of a Gaussian fit. Such a fit does not provide a consistent description of the mean SPE integral at different PMT bias voltages and can introduce a bias relative to the true SPE value (see the discussion in Appendix B of ref. <cit.>). The absolute value of the SPE mean integral does not affect the evaluated QF values, as it cancels out in the definition of the QF. A second-order dependence may come from the effect of the absolute light yield on the resolution model and the smearing of the predicted NR spectrum.
The 59.5 keV peak position for COHERENT-1 in Tab. <ref> differs from the one suggested by ref. <cit.>. The latter had an issue with the onset finding in the ^241Am signals, leading to a 3% lower 59.5 keV peak position and a distorted spectral shape. The issue is confirmed by the original author of ref. <cit.>. The 59.5 keV response after fixing the issue in the original analysis pipeline coincides with the one presented in Tab. <ref>. We also note that ref. <cit.> used the Polya model for the mean SPE integral. We use a Gaussian model here and in ref. <cit.> for consistency between data-sets and for comparison with the results of ref. <cit.>. The SPE spectra presented in Appendix B of ref. <cit.> suggest that the Polya model alone cannot describe the full SPE integral distribution.
The 59.5 keV peak characteristic integral agrees between the original analysis of COHERENT-2 <cit.> and the reanalysis presented in ref. <cit.> to within 1%, although the original analysis used units of "V×2ns" (which differ by a factor of two from the nVs values in Tab. <ref>). The SPE mean estimates obtained with the Gaussian model in ref. <cit.> are smaller by about 7% than those in the reanalysis and yield an absolute light yield estimate of 17.7 PE/keV. We were unable to track down the reason for this discrepancy, but the low-energy signal integral estimates differ between these analyses in general, as suggested by the differing QF estimates.
§.§ Evaluation of CsI[Na] QF
The CsI[Na] QF evaluation relies on the selection of the NR signals related to the neutron source pulses. These selections utilize information from the backing detectors (in the tagged-neutron measurements) and from the beam-pick-off monitor. An overview of the selection requirements is presented in Tab. <ref>, where “” means that a requirement was used in the COHERENT analysis of a particular dataset, “×” means that a requirement is available but was not used, and “n/a” means that the requirement is not available due to the absence of one of the signals needed. Illustrations of the distributions of the restricted parameters can be found in the corresponding sections of ref. <cit.>. The tagged-neutron experiments rely on the selection of the neutron-induced signals of the BDs identified by the PSD parameter, along with a BD integral cut. Additional accidental background suppression comes from the analysis of the time delay between the BD and BPM signals (COHERENT-1 and COHERENT-3) or between the BD and CsI[Na] signals. It is important to remember that cuts relying on the onset of the CsI[Na] signals (e.g. the CsI-BD delay) may introduce an inefficiency in the selection of the lowest-energy signals due to the relatively slow scintillation decay times of the crystal. The COHERENT-4 measurement relies on the selection of neutron-induced signals of CsI[Na] by the time delay between CsI[Na] and the BPM. In the COHERENT analysis, additional effort was put into the suppression of Cherenkov-like signals in the COHERENT-3 and COHERENT-4 data (described in the corresponding sections of ref. <cit.>).
The empirical NR distributions can be fit to the prediction based on the MCNPX-PoliMi simulations. The prediction should account for the experimental resolution of the CsI[Na] setup, with the leading contributions coming from the PMT photoelectron statistical fluctuations and from the fluctuations of the SPE integral. The afterglow contribution should also be taken into account if it is non-negligible given the CsI[Na] pretrace cut (see Appendix A.3 and Tab. 8 in ref. <cit.>). The MCNPX-PoliMi predictions in the folders represent the NR spectrum with none of the above-mentioned resolution effects taken into account. The NR spectra were produced for the BD integral cuts described in ref. <cit.> for the COHERENT-1/2/3 data. Although the elastic scattering NR mean energy depends only weakly on the BD energy deposition, the contribution from the inelastic scattering of neutrons with gamma ray escape may affect the NR mean energy, especially for the largest scattering angles of COHERENT-1. We therefore recommend using the BD integral cuts utilised in our analysis <cit.>.
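As a rough illustration of the resolution treatment described above (not the released analysis code), the following sketch smears a toy nuclear-recoil prediction with Poisson photoelectron statistics and Gaussian SPE-integral fluctuations; the light yield, SPE width, and trial QF are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def smear_nr_spectrum(e_nr_kev, qf, light_yield_pe_per_kevee, spe_rel_sigma=0.4):
    """Turn true nuclear-recoil energies (keV) into observed signal sizes in PE,
    applying a quenching factor, Poisson PE statistics, and SPE smearing."""
    mean_pe = e_nr_kev * qf * light_yield_pe_per_kevee
    n_pe = rng.poisson(mean_pe)
    # each photoelectron integral fluctuates; the summed signal spreads by
    # roughly spe_rel_sigma * sqrt(n_pe) photoelectrons
    spread = spe_rel_sigma * np.sqrt(np.maximum(n_pe, 1))
    return np.where(n_pe > 0, rng.normal(n_pe, spread), 0.0)

# toy MCNPX-PoliMi-like prediction: exponential recoil spectrum up to 30 keV
e_true = rng.exponential(8.0, size=100_000)
e_true = e_true[e_true < 30.0]
pe_obs = smear_nr_spectrum(e_true, qf=0.08, light_yield_pe_per_kevee=13.0)
hist, edges = np.histogram(pe_obs, bins=60, range=(0, 60))
print(hist[:10])
```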
§.§ Global CsI[Na] QF data fit
Together with the COHERENT CsI[Na] QF data we release a macro to perform the global QF data fit. The input QF data are harmonized with ref. <cit.> and allow the user to perform fits under different assumptions, taking into account sub-selections of the existing data-sets. Such a tool will be useful if any new measurements or clarifications of the systematic uncertainties of the existing data appear.
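For orientation, a heavily simplified stand-in for such a global fit is sketched below; the released macro is more sophisticated, and the data points and the linear QF(E) model are purely illustrative:

```python
import numpy as np

# illustrative (energy [keVnr], QF, total uncertainty) tuples
data = np.array([
    (10.0, 0.070, 0.006),
    (20.0, 0.078, 0.005),
    (30.0, 0.082, 0.005),
    (45.0, 0.085, 0.007),
])
e, qf, sig = data.T

def chi2(a, b):
    """Chi-square of a linear QF(E) = a + b*E model against the data."""
    return np.sum(((qf - (a + b * e)) / sig) ** 2)

# brute-force scan instead of a minimizer, to keep the sketch dependency-free
a_grid = np.linspace(0.05, 0.10, 201)
b_grid = np.linspace(-1e-3, 1e-3, 201)
chi = np.array([[chi2(a, b) for b in b_grid] for a in a_grid])
ia, ib = np.unravel_index(np.argmin(chi), chi.shape)
print(f"best fit: a = {a_grid[ia]:.4f}, b = {b_grid[ib]:.2e}, chi2 = {chi.min():.2f}")
```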
§ VERIFICATION AND DISCUSSION OF RESULTS
In order to make the results of ref. <cit.> falsifiable, we provide benchmark lists of triggers from the CsI[Na] NR signal subselections (see the folders). These lists allow the unique identification of a trigger within a dataset and contain the reconstructed event parameters used to evaluate the CsI[Na] QF. We provide images of the waveforms for the events from these lists, or scripts to produce such images, to visualize our results. The demonstrator event reconstruction scripts reproducing the parameters from the lists are also released. Thus, a way to verify both our reconstruction procedure and our event selections is provided. Interested parties may contact COHERENT at the e-mail addresses listed in this description for discussions. Communication will be supported for two years starting from the date of the data release.
§ SUMMARY
This paper describes the structure of the CsI[Na] QF data release by COHERENT <cit.> and presents the scripts (also in the repository <cit.>) required to start working with the data. The global QF data fit scripts allow for the reproduction of the results used to estimate the coherent elastic neutrino-nucleus scattering cross-section in ref. <cit.> and permit including new measurements or additional information on the systematic uncertainties in the fit. We hope that this release will contribute to enhancing the reproducibility of the CsI[Na] QF measurements. We encourage the scientific community working on the QF measurements to share the data, as existing measurements for the same materials indicate significant scatter, making it hard to reach consensus on the QF values and potential systematic uncertainty.
§ ACKNOWLEDGEMENTS
The COHERENT collaboration acknowledges the resources generously provided by the Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The Triangle Universities Nuclear Laboratory is supported by the U.S. Department of Energy under grant DE-FG02-97ER41033. This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Project "New Phenomena in Particle Physics and the Early Universe", FSWU-2023-0073); the Russian Foundation for Basic Research (proj. # 17-02-01077 A); and the DOE HEP grant DE-SC0020518. Support for the DOI 10.13139/OLCF/1969085 dataset is provided by the U.S. Department of Energy, project HEP106 under Contract DE-AC05-00OR22725. Project HEP106 used resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. We are thankful to J. Collar and B. Scholz for their contribution to the COHERENT-2 measurement.
§ LIST OF SCRIPTS
In this section we present the list of scripts published with the release <cit.> (and in the repository <cit.>) that allow one to start the analysis of the COHERENT CsI[Na] QF data and to verify the results. Guidance on the usage and compilation (when needed) of the scripts is provided in the code of the scripts.
§.§ COHERENT-1
— a script to view raw ADC waveforms of individual recorded events (triggers) of COHERENT-1;
— a script to read the neutron time-of-flight data recorded to evaluate the neutron beam energy in COHERENT-1 and COHERENT-2;
— a script to estimate the energy of beam neutrons based on the difference between time-of-flight values of gamma rays and neutrons;
— a script to view the simulated predictions of nuclear recoil energy depositions for COHERENT-1;
— a script reproducing the approach used to evaluate the key parameters of signal events (triggers) of COHERENT-1;
— an analog of the verification script utilizing a more straightforward approach to pulse finding and signal integration; the CsI[Na] integrals calculated with its help are lower by 1–2% on average compared with the default approach.
§.§ COHERENT-2
— a script to view raw ADC waveforms of individual recorded events (triggers) of COHERENT-2;
— a script to parse the raw binary data of COHERENT-2 and produce the ROOT tree with raw waveforms;
— a script creating images with the waveforms of events from the signal subselections of COHERENT-2;
— a script reproducing the approach used to evaluate the key parameters of signal events (triggers) of COHERENT-2;
— a script illustrating the fluctuations of the DC baseline of ADC and the selections used to reject the triggers with significant fluctuations from the analysis of COHERENT-2.
§.§ COHERENT-3
— a script to view raw ADC waveforms of individual recorded events (triggers) of COHERENT-3;
— a script reproducing the approach used to evaluate the key parameters of signal events (triggers) of COHERENT-3;
§.§ COHERENT-4
— a script to view raw ADC waveforms of individual recorded events (triggers) of COHERENT-4;
— a script to show the functional form of the neutron energy distributions in COHERENT-4;
— a script to view the simulated predictions of nuclear recoil energy depositions for the 0.94 MeV run of COHERENT-4;
— a script to view the simulated predictions of nuclear recoil energy depositions for the 1.26 MeV run of COHERENT-4;
— a script reproducing the approach used to evaluate the key parameters of signal events (triggers) of COHERENT-4;
§.§ Global QF data fit
— a script to reproduce the global QF data fit evaluated in ref. <cit.> and used in ref. <cit.>; it contains the QF and visible energy values as well as the nuclear recoil energies with the appropriate uncertainties.
§ DC BASELINE VOLTAGE FLUCTUATIONS OF ADC IN COHERENT-2
The inspection of the COHERENT-2 raw waveforms recorded by the Acqiris U1071A ADC suggests that about 20% of the triggers suffer from significant fluctuations in the DC voltage baseline of the ADC. Such fluctuations may distort the integrals of the low-energy signals of interest and affect the evaluated QF values. In this section we describe the way we find and reject the triggers with DC voltage fluctuations.
For each of the ADC channels we define baseline value estimates based on certain waveform intervals. The recorded amplitude values for such an interval are used to fill a histogram, which is then fit to a Gaussian distribution. This fit is performed in the range of ±3 ADC units around the most probable amplitude value. The fitted Gaussian mean is used as a local baseline estimate. Below we list the symbols for these estimates and their combinations:
V^CsI_Beg — the estimate based on the first microsecond of the CsI[Na] waveform, also used as a “default” baseline value for the CsI channel analysis;
V^CsI_End — the estimate based on the last microsecond of the CsI[Na] waveform;
Δ V^CsI — the difference between V^CsI_Beg and V^CsI_End;
V^EJ_Beg — the estimate based on the first microsecond of the EJ plastic scintillator waveform;
V^EJ_Def — the estimate based on 0.9 to 1.9 μ s pre-trigger region of the EJ plastic scintillator waveform, a “default” baseline value for the EJ channel analysis;
V^EJ_End — the estimate based on the last microsecond of the EJ plastic scintillator waveform;
Δ V^EJ — the difference between V^EJ_Beg and V^EJ_End.
We use a combination of these values, calculated for each trigger with no ADC range overflows in either channel, to characterize the baseline fluctuations. The correlation between the “default” baseline values and the difference of the local baseline value estimates is presented in Figure <ref>. Both channels show a pattern that can be interpreted as a distortion and recovery of the baseline voltage. The maxima of the baseline value histograms correspond to the absence of a Δ V, with the exception of an excursion to negative values. Another feature of the fluctuations is that Δ V^CsI is highly correlated with Δ V^EJ, see Figure <ref>. In order to reject the triggers with pathological baseline voltage fluctuations we, firstly, require V^CsI_Beg and V^EJ_Def to be contained within 0.8 ADC units of the most probable value. Note that both of these values are calculated from the pre-trigger region and thus do not affect the expected nuclear recoil signals. Secondly, we restrict the absolute value of Δ V^EJ to be within 1 ADC unit. This latter cut allows us to reject the remaining pathological waveforms based on the correlation between Δ V^CsI and Δ V^EJ. Tests show that Δ V^EJ does not depend on the size of the recorded EJ signals up to events heavily affected by the ADC overflow. The EJ scintillation decay time is quite short and the signal does not reach the last microsecond of a recorded waveform, so the cut does not introduce a bias into the analysis of the nuclear recoil signals from the beam neutrons. The remaining triggers show a Δ V^CsI with a mean of about 0.02 ADC units and an RMS of 0.4 ADC units in the absence of a signal. The average waveform accumulated from the triggers with no pulses in the CsI[Na] channel confirms the absence of significant fluctuations of the baseline voltage value within a waveform. We thus conclude that the remaining triggers are not affected by pathological fluctuations of the baseline voltage values and are suitable for the QF analysis. A script illustrating our selections is included in the release.
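A minimal sketch of the local baseline estimate described above, with the Gaussian fit around the most probable value realized as a parabola fit to the logarithm of the histogram (the synthetic samples stand in for a real trigger):

```python
import numpy as np

def local_baseline(samples, window=3):
    """Gaussian-like estimate of the DC baseline from an interval of ADC samples:
    histogram the integer amplitudes, then fit a parabola to log(counts) within
    +-window ADC units of the most probable value; the vertex gives the mean."""
    vals, counts = np.unique(np.round(samples).astype(int), return_counts=True)
    mode = vals[np.argmax(counts)]
    sel = np.abs(vals - mode) <= window
    x, y = vals[sel].astype(float), np.log(counts[sel].astype(float))
    a, b, _ = np.polyfit(x, y, 2)            # y ~ a x^2 + b x + c, with a < 0
    return -b / (2.0 * a)                    # vertex of the parabola

rng = np.random.default_rng(0)
pre_trace = rng.normal(127.3, 1.1, size=1000)   # synthetic first-microsecond samples
post_trace = rng.normal(127.8, 1.1, size=1000)  # synthetic last-microsecond samples
v_beg, v_end = local_baseline(pre_trace), local_baseline(post_trace)
print(f"V_Beg = {v_beg:.2f}, V_End = {v_end:.2f}, dV = {v_end - v_beg:.2f} ADC units")
```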
Release_2023
COHERENT collaboration, COHERENT Collaboration data release from the measurements of CsI[Na] response to nuclear recoils, 10.13139/OLCF/1969085 (2023), https://doi.ccs.ornl.gov/ui/doi/426
Akimov_2021
COHERENT collaboration, Measurement of scintillation response of CsI[Na] to low-energy nuclear recoils by COHERENT, Journal of Instrumentation 17 (2022) P10034, arXiv:2111.02477
Gitlab_2023
COHERENT collaboration, https://code.ornl.gov/COHERENT/qf_data_release
Park_2002
H. Park et al., Neutron beam test of CsI crystal for dark matter search, NIM A 491 (2002) 460
Guo_2016
C. Guo et al., Neutron beam tests of CsI(Na) and CaF_2(Eu) crystals for dark matter direct search, NIM A 818 (2016) 38
Collar_2019
J.I. Collar, A.R.L. Kavner, and C.M. Lewis, Response of CsI[Na] to Nuclear Recoils: Impact on Coherent Elastic Neutrino-Nucleus Scattering (CEνNS), Physical Review D 100 (2019) 033003, arXiv:1907.04828
Grayson_2017
G.C. Rich, Measurement of Low-Energy Nuclear-Recoil Quenching Factors in CsI[Na] and Statistical Analysis of the First Observation of Coherent, Elastic Neutrino-Nucleus Scattering, The University of North Carolina at Chapel Hill, PhD thesis (2017)
Pino_2014
F. Pino et al., The light output and the detection efficiency of the liquid scintillator EJ-309, App. Radiation and Isotopes 89 (2014) 79
Scholz_2017
B. Scholz, First Observation of Coherent Elastic Neutrino-Nucleus Scattering, Springer International Publishing, Springer Theses series (2018)
Akimov_cevns
COHERENT collaboration, Measurement of the Coherent Elastic Neutrino-Nucleus Scattering Cross Section on CsI by COHERENT, Physical Review Letters 129 (2022) 081801, arXiv:2110.07730
|
http://arxiv.org/abs/2307.04022v1 | 20230708175848 | Explicit a posteriori error representation for variational problems and application to TV-minimization | ["Sören Bartels", "Alex Kaltenbach"] | math.NA | ["math.NA", "cs.NA", "math.OC", "35Q68, 49M25, 49M29, 65N30, 65N50"] |
[1] Sören Bartels, Department of Applied Mathematics, University of Freiburg, Hermann–Herder–Straße 10, 79104 Freiburg
[2] Alex Kaltenbach, Institute of Mathematics, Technical University of Berlin, Straße des 17. Juni 136, 10623 Berlin
Explicit a posteriori error representation for variational problems and application to TV-minimization
August 12, 2023
========================================================================================================
In this paper, we propose a general approach for explicit a posteriori error representation for convex minimization problems using basic convex duality relations.
Exploiting discrete orthogonality relations in the space of element-wise constant vector fields as well as a discrete integration-by-parts formula between the Crouzeix–Raviart and the Raviart–Thomas element, all convex duality relations are transferred to a discrete level, making the explicit a posteriori error representation, initially based on continuous arguments only, practicable from a numerical point of view. In addition,
we provide a generalized Marini formula for the primal solution that determines a discrete primal solution in terms of a given discrete dual solution.
We benchmark all these concepts via the Rudin–Osher–Fatemi model. This leads to an adaptive algorithm that yields a (quasi-optimal)
linear convergence rate.
35Q68; 49M25; 49M29; 65N30; 65N50
§ INTRODUCTION
The numerical analysis of the approximation of variational problems
is challenging when these are non-differentiable, degenerate, or involve
constraints. In particular, following established concepts for linear
elliptic partial differential equations often leads to sub-optimal results only.
The framework of convex duality provides an attractive concept to
reveal hidden information and structures to obtain quasi-optimal error representation formulas
under meaningful regularity conditions. Similar to <cit.>, we first exploit this
idea to derive explicit computable a posteriori error estimates for a natural error
measure. Then, this general result is transferred to a non-differentiable model problem with discontinuous solutions. As a whole, our results, similar to <cit.>, show that
the question of developing asymptotically exact a posteriori error estimators is
rather a question of identifying optimal error quantities. However, different from <cit.>, we also propose a general approach for making our results practicable from a numerical point of view.
Given a domain Ω⊆ℝ^d, d∈ℕ,
a convex energy density ϕℝ^d→ℝ∪{+∞}, a
(Lebesgue) measurable energy density ψΩ×ℝ→ℝ∪{+∞} that is convex with respect to the second argument, and a Banach space X consisting of functions defined in
Ω, we call the minimization of the energy functional I X→ℝ∪{+∞}, for every v∈ X defined by
I(v) ∫_Ωϕ(∇ v) dx + ∫_Ωψ(·, v) dx ,
the primal problem.
Its (Fenchel) dual problem consists in the maximization of the functional D Y→ℝ∪{-∞}, where Y is a Banach space consisting of vector fields defined in
Ω, for every y∈ Y is defined by
D(y) -∫_Ωϕ^*(y) dx - ∫_Ωψ^*(·, div y) dx .
Here, ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} (with respect to the second argument) denote the (Fenchel) conjugates of ϕℝ→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞}, respectively.
Under rather general conditions, cf. <cit.>, we have the well-posedness of the
primal problem and the dual problem, i.e., the existence of a minimizer u∈ X of (<ref>), i.e., a primal solution, and of a maximizer z∈ Y of (<ref>), i.e., a dual solution, and the strong duality relation
min_v∈ X I(v) = I(u)= D(z) = max_y∈ Y D(y) .
Since u∈X and z∈ Y are optimal for (<ref>) and (<ref>), respectively, it holds 0∈∂ I(u) and 0∈∂ D(z).
In particular, for every v∈ X and y∈ Y, the quantities
ρ_I^2(v,u) I(v) - I(u) ,
ρ_-D^2(y,z) D(z) - D(y) ,
are non-negative. They define distances, if (<ref>) and (<ref>), respectively, are
strictly convex, and are called coercivity functionals or optimal convexity measures.
For accessible and admissible approximations v∈ X and y∈ Y of the solutions u ∈ X and z ∈ Y, given the definitions (<ref>) and (<ref>), the strong duality relation (<ref>) implies the error identity
ρ_I^2(v,u) + ρ_-D^2(y,z)
= I(v) - D(y)
≕ η^2(v,y) .
Hence, the fully computable error estimator η^2 X× Y→ℝ∪{+∞}, cf. (<ref>), exactly
represents the sum of the primal and dual approximation errors, i.e., of (<ref>) and (<ref>).
The error representation (<ref>) can be seen as a generalization of the Prager–Synge
result, cf. <cit.>, which states that for the Poisson problem, i.e., ϕ ≔ 1/2|·|^2∈ C^1(ℝ^d), ψ ≔ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω), X ≔ W^1,2_D(Ω), and Y ≔ W^2_N(div;Ω), for every v∈ W^1,2_D(Ω) and y∈ W^2_N(div;Ω) with -div y=f a.e. in Ω, we have that
1/2‖∇ v -∇ u‖_L^2(Ω;ℝ^d)^2 + 1/2‖y - z‖_L^2(Ω;ℝ^d)^2
= 1/2‖∇ v-y‖^2_L^2(Ω;ℝ^d) .
The equation (<ref>) has been used by various authors to define error estimators; for a comprehensive list of references, we refer the reader to <cit.>.
Often, local procedures are devised to construct an admissible vector field
y∈ W^2_N(div;Ω) with -div y=f a.e. in Ω from a given function v∈ W^1,2_D(Ω). While this leads to efficient procedures
to obtain accurate error estimators, the arguments cannot be expected to transfer
to non-linear problems. Another alternative to computing approximations
for the primal and dual problems consists in using finite element methods
for which reconstruction formulas are available, e.g., using the discontinuous Crouzeix–Raviart finite element
method and the Marini formula in the case of the Poisson problem, cf. <cit.>.
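As a simple illustration of this reconstruction idea (our own sketch, not taken from the cited works), the following script treats the one-dimensional Poisson problem -u'' = 1 on (0,1) with homogeneous Dirichlet data: it computes the P1 finite element solution, builds an admissible flux via a Marini-type formula z_h|_T = u_h'|_T - Π_h f (x - x_T), and verifies numerically that the primal-dual gap η²(u_h, z_h) coincides with the sum of the two error terms in the Prager–Synge identity.

```python
import numpy as np

def poisson_1d_gap(n_elems=8):
    """P1 FEM for -u'' = 1 on (0,1), u(0)=u(1)=0, Marini flux reconstruction,
    and the primal-dual gap eta^2 = I(u_h) - D(z_h) versus the two error terms."""
    h = 1.0 / n_elems
    x = np.linspace(0.0, 1.0, n_elems + 1)
    # assemble and solve the tridiagonal P1 system
    A = (np.diag(2.0 * np.ones(n_elems - 1)) - np.diag(np.ones(n_elems - 2), 1)
         - np.diag(np.ones(n_elems - 2), -1)) / h
    b = h * np.ones(n_elems - 1)
    U = np.zeros(n_elems + 1)
    U[1:-1] = np.linalg.solve(A, b)

    q, w = np.polynomial.legendre.leggauss(5)    # Gauss rule on [-1, 1]
    z_ex = lambda t: 0.5 - t                     # exact flux u' for f = 1

    I_h = D_h = err_p = err_d = 0.0
    for k in range(n_elems):
        xm, du = 0.5 * (x[k] + x[k + 1]), (U[k + 1] - U[k]) / h
        t, wt = xm + 0.5 * h * q, 0.5 * h * w    # mapped quadrature points/weights
        uh = U[k] + du * (t - x[k])
        zh = du - (t - xm)                       # Marini flux: -zh' = 1 exactly
        I_h += np.sum(wt * (0.5 * du**2 - uh))   # primal energy (f = 1)
        D_h += -np.sum(wt * 0.5 * zh**2)         # dual energy of admissible flux
        err_p += np.sum(wt * 0.5 * (du - z_ex(t)) ** 2)
        err_d += np.sum(wt * 0.5 * (zh - z_ex(t)) ** 2)
    return I_h - D_h, err_p + err_d

gap, err = poisson_1d_gap()
print(f"eta^2 = {gap:.6e},  rho_I^2 + rho_-D^2 = {err:.6e}")
```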
It has recently been found (cf. <cit.>) that the discontinuous Crouzeix–Raviart finite element method leads to quasi-optimal a priori error estimates for non-linear and non-differentiable problems, while continuous finite element methods provide only a sub-optimal
convergence behavior. In the derivation of those results, a general
discrete convex duality theory with Raviart–Thomas vector fields has emerged that
also leads to reconstruction
formulas in rather general settings. As a consequence, given an approximation
v∈ X or y∈ Y, respectively, the missing one can be obtained via a simple post-processing procedure.
Then, the pair leads to the error representation formula (<ref>). It should also
be noted that neither v∈ X nor y∈ Y needs to be optimal in a subspace
of X or Y. By introducing appropriate residuals, any pair of admissible
approximations of u∈ X and z∈ Y can be used. This is particularly important for non-linear
problems, i.e., non-quadratic functionals, where an exact solution of discrete problems is neither possible nor rational.
A difficulty in the application of the explicit a posteriori error representation
formula (<ref>) arises from the condition that v∈ X and y∈ Y need to be admissible for
the functionals (<ref>) and (<ref>). In the case of the Poisson problem,
this arises, e.g., via element-wise constant approximations of f∈ L^2(Ω)
that are the images of Raviart–Thomas vector fields under the divergence operator. While data terms can be controlled by introducing appropriate
data oscillation terms, structural peculiarities of the energy densities
ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} and their (Fenchel) conjugates ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} are often more challenging.
We illustrate this
by analyzing a non-differentiable
problem
which leads to a new error analysis and an adaptive refinement procedure
for the computationally challenging problem.
With ϕ = |·|∈ C^0(ℝ^d) and ψ=((x,t)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ
for a given function
g∈ L^2(Ω), i.e., the noisy image, and a given parameter α>0, i.e., the fidelity parameter,
the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, seeks a minimizing function u∈ BV(Ω)∩ L^2(Ω), i.e., the de-noised image, where BV(Ω) denotes the space of functions with bounded variation,
for the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v) ≔ |Dv|(Ω) + α/2‖v-g‖_L^2(Ω)^2 ,
where |D(·)|(Ω)BV(Ω)→ [0,+∞] denotes the total variation functional.
The (Fenchel) dual problem to the minimization of the functional (<ref>) consists in the maximization of
the functional D W_N^2(div;Ω)∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W_N^2(div;Ω)∩ L^∞(Ω;ℝ^d) defined by
D(y) ≔ -I_K_1(0)(y)-1/(2α)‖div y+α g‖_L^2(Ω)^2+α/2‖g‖_L^2(Ω)^2 ,
where
I_K_1(0)(y) ≔ 0 if |y|≤ 1 a.e. in Ω and I_K_1(0)(y) ≔ +∞ else.
The primal solution u∈ BV(Ω) ∩ L^2(Ω), i.e., the unique minimizer of (<ref>), and a dual solution z∈ W_N^2(div;Ω)∩ L^∞(Ω;ℝ^d), i.e., a (possibly non-unique) maximizer of (<ref>), are
(formally) related via, cf. <cit.>,
z ∈ {∇ u/|∇ u|} if |∇ u|>0 and z ∈ K_1(0) if |∇ u|=0, a.e. in Ω ,
div z = α (u-g) a.e. in Ω .
The relations (<ref>) determine z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) via u∈ BV(Ω)∩ L^2(Ω) and vice versa.
A
Crouzeix–Raviart finite element approximation of (<ref>) is given by the minimization of the regularized, discrete functional
I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ, h,ε>0, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I_h,ε^cr(v_h) ≔ ‖f_ε(|∇_h v_h| )‖_L^1(Ω)
+ α/2‖Π_h(v_h-g)‖_L^2(Ω)^2 .
Here, ∇_h is the element-wise application of the gradient operator
and f_ε∈C^1(ℝ) is a regularization of the modulus |·|, and Π_h denotes
the (local) L^2-projection onto element-wise constant functions.
A quasi-optimal dual Raviart–Thomas vector field z_h,ε^rt∈ℛT^0_N(𝒯_h) can be associated with a
minimizing function u_h,ε^cr∈𝒮^1,cr(𝒯_h) of I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ via the reconstruction formula
z_h,ε^rt = f_ε'(|∇_h u_h,ε^cr|)/|∇_h u_h,ε^cr| ∇_h u_h,ε^cr
+ α Π_h (u_h,ε^cr -g)/d ( id_ℝ^d- Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) .
For canonical choices of f_ε∈ C^1(ℝ), e.g.,
f_ε =|·|_ε= ((·)^2+ε^2)^1/2, it holds |Π_h z_h,ε^rt|≤ 1 a.e. in Ω, but not
|z_h,ε^rt|≤ 1 a.e. in Ω. Thus, we employ f_ε = (1-ε) |·|_ε,
so that
|f_ε'(t)|≤ 1-ε for all t∈ℝ. The choice ε∼ h^2 in (<ref>) and an additional projection step onto K_1(0)
lead to an accurate approximation z_h,ε^rt∈ℛT^0_N(𝒯_h) of z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), which
satisfies |z_h,ε^rt|≤ 1 a.e. in Ω and, thus, represents an admissible test function that leads to the definition
of an error estimator. The resulting adaptive mesh-refinement procedure leads to significantly
improved experimental convergence rates compared to recent related contributions, cf. <cit.>. More precisely, we report quasi-optimal linear convergence rates which have been obtained only for meshes with quadratic grading towards a sufficiently simple jump set of a regular g in <cit.>.
This article is organized as follows: In Section <ref>, we introduce the employed notation and the relevant finite element spaces. In Section <ref>, we propose a general approach for explicit a posteriori error representation for convex minimization problems based on (discrete) convex duality relations. In Section <ref>,
we transfer the concepts of Section <ref> to the Rudin–Osher–Fatemi model and propose a regularization scheme. In Section <ref>, we review our theoretical findings via numerical experiments.
§ PRELIMINARIES
§.§ Convex analysis
For a (real) Banach space X, which is equipped with the norm ·_X X→ℝ_≥ 0, we denote its corresponding (continuous) dual space by X^* equipped with the dual norm
·_X^* X^*→ℝ_≥ 0, defined by x^*_X^*sup_x_X≤ 1⟨ x^*,x⟩_X for every x^*∈ X^*, where ⟨·,·⟩_X X^*× X→ℝ, defined by ⟨ x^*,x⟩_X x^*(x) for every x^*∈ X^* and x∈ X, denotes the duality pairing.
A functional F X→ℝ∪{+∞} is called sub-differentiable in x∈ X, if F(x)<∞ and if there exists x^*∈ X^*, called sub-gradient, such that for every y∈ X, it holds
⟨ x^*,y-x⟩_X≤ F(y)-F(x) .
The sub-differential ∂ F X→ 2^X^* of a functional F X→ℝ∪{+∞} for every x∈ X is defined by (∂ F)(x){x^*∈ X^*|(<ref>) holds for x^*} if F(x)<∞ and (∂ F)(x)∅ else.
For a given functional F X→ℝ∪{±∞}, we denote its corresponding (Fenchel) conjugate by F^* X^*→ℝ∪{±∞}, which for every x^*∈ X^* is defined by
F^*(x^*)sup_x∈ X⟨ x^*,x⟩_X-F(x) .
If F X→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, then also its (Fen-chel) conjugate F^* X^*→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, cf. <cit.>.
Furthermore, for every x^*∈ X^* and x∈ X such that
F^*(x^*)+F(x) is well-defined, i.e., the critical case ∞-∞ does not occur, the Fenchel–Young inequality
⟨ x^*,x⟩_X≤ F^*(x^*)+F(x)
applies.
In particular,
for every x^*∈ X^* and x∈ X, it holds the Fenchel–Young identity
x^*∈ (∂ F)(x) ⇔ ⟨ x^*,x⟩_X= F^*(x^*)+F(x) .
The following convexity measures for functionals play an important role in the derivation of an explicit a posteriori error representation for convex minimization problems in Section <ref>; for further information, please refer to <cit.>.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper, i.e., D(F){x∈ X| F(x)<∞}≠∅.
(i) The σ^2_F
D(F)× X→ [0,+∞] for every x∈ D(F) and y∈ X is defined by
σ^2_F(y,x) F(y)-F(x)-sup_x^*∈ (∂ F)(x)⟨ x^*,y-x⟩_X ,
where we use the convention sup(∅)-∞.
(ii) The σ^2_F
D(F)^2→ [0,+∞] for every x,y∈ D(F) is defined by
σ_F,s^2(y,x)σ_F^2(y,x)+σ_F^2(x,y)=inf_x^*∈ (∂ F)(x);y^*∈ (∂ F)(y)⟨ x^*-y^*,x-y⟩_X ,
where we use the convention inf(∅) +∞.
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, the ρ^2_F
X^2→ [0,+∞] x∈ X for every y∈ X is defined by
ρ^2_F(y,x) F(y)-F(x)≥ 0 .
Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, due to 0∈ (∂ F)(x), for every y∈ X, it holds
σ^2_F(y,x)≤ρ^2_F(y,x) .
§.§ Function spaces
Throughout the article, we denote by Ω⊆ℝ^d, d ∈ℕ, a bounded polyhedral Lipschitz domain, whose (topological) boundary is disjointly divided into a closed Dirichlet part Γ_D and an open Neumann part Γ_N, i.e., ∂Ω = Γ_D∪Γ_N and ∅ = Γ_D∩Γ_N.
For p∈[1,∞] and l∈ℕ, we employ the standard notations[Here, W^-1/p,p(Γ_N) (W^1-1/p',p'(Γ_N))^* and W^-1/p,p(∂Ω) (W^1-1/p',p'(∂Ω))^*.]
W^1,p_D(Ω;ℝ^l) ≔ {v∈ L^p(Ω;ℝ^l) |∇ v∈ L^p(Ω;ℝ^l× d), tr v=0 in L^p(Γ_D;ℝ^l)} ,
W^p_N(div;Ω) ≔ {y∈ L^p(Ω;ℝ^d) | div y∈ L^p(Ω), tr_n y=0 in W^-1/p,p(Γ_N)} ,
W^1,p(Ω;ℝ^l) ≔ W^1,p_D(Ω;ℝ^l) if Γ_D=∅, and W^p(div;Ω) ≔ W^p_N(div;Ω) if Γ_N=∅,
where we denote by tr W^1,p(Ω;ℝ^l)→L^p(∂Ω;ℝ^l) the trace operator and by tr_n(·) W^p(div;Ω)→W^-1/p,p(∂Ω) the normal trace operator, respectively. In particular, we always omit tr(·) and tr_n(·). In addition, we employ the abbreviations L^p(Ω) ≔ L^p(Ω;ℝ^1), W^1,p(Ω) ≔ W^1,p(Ω;ℝ^1), and W^1,p_D(Ω) ≔ W^1,p_D(Ω;ℝ^1). For (Lebesgue) measurable functions u,vΩ→ℝ and a (Lebesgue) measurable set M⊆Ω, we write
(u,v)_M∫_Mu v dx ,
whenever the right-hand side is well-defined. Analogously, for (Lebesgue) measurable vector fields z,yΩ→ℝ^d and a (Lebesgue) measurable set M⊆Ω, we write (z,y)_M∫_Mz· y dx. Moreover,
let |D(·)|(Ω) L^1_loc(Ω) →ℝ∪{+∞}, for every v∈ L^1_loc(Ω) defined by[Here, C_c^∞(Ω;ℝ^d) denotes the space of smooth and in Ω compactly supported vector fields.]
|Dv|(Ω) ≔ sup{-(v, div ϕ)_Ω|ϕ∈ C_c^∞(Ω;ℝ^d);
‖ϕ‖_L^∞(Ω;ℝ^d)≤ 1} ,
denote the total variation functional. Then, the space of functions with bounded variation is defined by
BV(Ω) ≔ {v∈ L^1(Ω)| |Dv|(Ω)<∞} .
§.§ Triangulations
Throughout the entire paper, we denote by {𝒯_h}_h>0, a family of regular, i.e., uniformly shape regular and conforming, triangulations of Ω⊆ℝ^d, d∈ℕ, cf. <cit.>.
Here, h>0 refers to the average mesh-size, i.e., if we set h_T(T) for all T∈𝒯_h, then, we have that h
= 1/(𝒯_h)∑_T∈𝒯_hh_T.
For every element T ∈𝒯_h,
we denote by ρ_T>0, the supremum of diameters of inscribed balls. We assume that there exists a constant ω_0>0, independent of h>0, such that max_T∈𝒯_hh_Tρ_T^-1≤ω_0. The smallest such constant is called the chunkiness of {𝒯_h}_h>0. The sets 𝒮_h, 𝒮_h^i, 𝒮_h^∂, and 𝒩_h contain the sides, interior sides, boundary sides, and vertices, respectively, of the elements of 𝒯_h.
We have the following relation between the average mesh-size and the number of vertices:
h∼(𝒩_h)^-1/d .
For k∈ℕ∪{0} and T∈𝒯_h, let 𝒫_k(T) denote the set of polynomials of maximal degree k on T. Then, for k∈ℕ∪{0} and l∈ℕ, the sets of continuous and polynomial functions or vector fields, respectively, are defined by
ℒ^k(𝒯_h)^l {v_h∈ L^∞(Ω;ℝ^l)| v_h|_T∈𝒫_k(T)^l for all T∈𝒯_h} ,
𝒮^k(𝒯_h)^l ℒ^k(𝒯_h)^l∩ C^0(Ω;ℝ^l) .
For every T∈𝒯_h and S∈𝒮_h, let x_T1/d+1∑_z∈𝒩_h∩ Tz∈ T and x_S1/d∑_z∈𝒩_h∩ Sz∈ S denote the barycenters of T and S, respectively. The (local) L^2-projection operator Π_h L^1(Ω;ℝ^l)→ℒ^0(𝒯_h)^l onto element-wise constant functions or vector fields, respectively, for every
v∈ L^1(Ω), is defined by Π_h v|_T_Tv dx for all T∈𝒯_h.
The element-wise gradient
∇_hℒ^1(𝒯_h)^l→ℒ^0(𝒯_h)^l× d, for every v_h∈ℒ^1(𝒯_h)^l, is defined by ∇_hv_h|_T∇(v_h|_T) for all T∈𝒯_h.
§.§.§ Crouzeix–Raviart element
11mm
The Crouzeix–Raviart finite element space, cf. <cit.>, consists of affine functions that are continuous at the barycenters of inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, ⟦v_h⟧_S ≔ v_h|_T_+-v_h|_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S, and for every boundary side S∈𝒮_h^∂, ⟦v_h⟧_S ≔ v_h|_T on S, where T∈𝒯_h satisfies S⊆∂ T.]
𝒮^1,cr(𝒯_h) ≔ {v_h∈ℒ^1(𝒯_h)|⟦v_h⟧_S(x_S)=0 for all S∈𝒮_h^i} .
Note that 𝒮^1,cr(𝒯_h)⊆ BV(Ω). More precisely, for every v_h∈𝒮^1,cr(𝒯_h), cf. <cit.>, we have that Dv_h=∇_ hv_h⊗dx+⟦v_h⟧⊗ds|_𝒮_h with ∇_ hv_h⊗dx⊥⟦v_h⟧⊗ds|_𝒮_h, so that, cf. <cit.>,
|Dv_h|(Ω)= ‖∇_ hv_h‖_L^1(Ω;ℝ^d)+‖⟦v_h⟧‖_L^1(𝒮_h) .
The Crouzeix–Raviart finite element space with homogeneous Dirichlet boundary condition on Γ_D is defined by
𝒮^1,cr_D(𝒯_h){v_h∈𝒮^1,cr(𝒯_h)| v_h(x_S)=0 for all S∈𝒮_h∩Γ_D} .
A basis for 𝒮^1,cr(𝒯_h) is given by functions φ_S∈𝒮^1,cr(𝒯_h), S∈𝒮_h, satisfying the φ_S(x_S')=δ_S,S' for all S,S'∈𝒮_h. A basis for 𝒮^1,cr_D(𝒯_h) is given by φ_S∈𝒮^1,cr_D(𝒯_h), S∈𝒮_h∖Γ_D.
§.§.§ Raviart–Thomas element
The Raviart–Thomas finite element space, cf. <cit.>, consists of element-wise affine vector fields that have continuous constant normal components on inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, ⟦y_h· n_S⟧_S ≔ y_h|_T_+· n_T_++y_h|_T_-· n_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S and for every T∈𝒯_h, n_T∂ T→𝕊^d-1 denotes the outward unit normal vector field to T,
and for every boundary side S∈𝒮_h^∂, ⟦y_h· n_S⟧_S ≔ y_h|_T· n on S, where T∈𝒯_h satisfies S⊆∂ T and n∂Ω→𝕊^d-1 denotes the outward unit normal vector field to Ω.]
ℛT^0(𝒯_h) ≔ {y_h∈ℒ^1(𝒯_h)^d| y_h|_T· n_T=const on each side of ∂ T for all T∈𝒯_h ,
⟦y_h· n_S⟧_S=0 on S for all S∈𝒮_h^i} .
Note that ℛT^0_N(𝒯_h)⊆ W^∞_N(;Ω).
The Raviart–Thomas finite element space with homogeneous normal component boundary condition on Γ_N is defined by
ℛT^0_N(𝒯_h){y_h∈ℛT^0(𝒯_h)| y_h· n=0 on Γ_N} .
A basis for ℛT^0(𝒯_h) is given by vector fields ψ_S∈ℛT^0(𝒯_h), S∈𝒮_h, satisfying ψ_S|_S'· n_S'=δ_S,S' on S' for all S'∈𝒮_h, where n_S is the unit normal vector on S pointing from T_- to T_+ if T_+∩ T_-=S∈𝒮_h. A basis for ℛT^0_N(𝒯_h) is given by ψ_S∈ℛT^0_N(𝒯_h), S∈𝒮_h∖Γ_N.
§.§.§ Discrete integration-by-parts formula
For every v_h∈𝒮^1,cr_D(𝒯_h) and y_h∈ℛT^0_N(𝒯_h), it holds the discrete integration-by-parts formula
(∇_hv_h,Π_h y_h)_Ω=-(Π_h v_h, div y_h)_Ω .
In addition, cf. <cit.>,
if a vector field y_h∈ℒ^0(𝒯_h)^d satisfies for every v_h∈𝒮^1,cr_D(𝒯_h)
(y_h,∇_h v_h)_Ω=0 ,
then, choosing v_h=φ_S∈𝒮^1,cr_D(𝒯_h) for all S∈𝒮_h∖Γ_D, one finds that y_h∈ℛT^0_N(𝒯_h).
Similarly, if a function v_h∈ℒ^0(𝒯_h) satisfies for every y_h∈ℛT^0_N(𝒯_h)
(v_h, div y_h)_Ω=0 ,
then, choosing y_h=ψ_S∈ℛT^0_N(𝒯_h) for all S∈𝒮_h∖Γ_N, one finds that v_h∈𝒮^1,cr_D(𝒯_h). In other words,
we have the orthogonal (with respect to the inner product (·,·)_Ω) decompositions
ℒ^0(𝒯_h)^d =ker(div|_ℛT^0_N(𝒯_h))⊕∇_h(𝒮^1,cr_D(𝒯_h)) ,
ℒ^0(𝒯_h) =ker(∇_h|_𝒮^1,cr_D(𝒯_h))⊕ div(ℛT^0_N(𝒯_h)) .
§ EXACT A POSTERIORI ERROR ESTIMATION FOR CONVEX MINIMIZATION PROBLEMS
§.§ Continuous convex minimization problem and continuous convex duality
Let ϕℝ^d→ℝ∪{+∞} be a proper, convex, and lower semi-continuous function and let ψΩ×ℝ→ℝ∪{+∞} be a (Lebesgue) measurable function such that for a.e. x∈Ω, the function ψ(x,·)Ω×ℝ→ℝ∪{+∞} is proper, convex, and lower semi-continuous. We examine the convex minimization problem that seeks for a function u∈ W^1,p_D(Ω), p∈ (1,∞), that is minimal for the functional I W^1,p_D(Ω)→ℝ∪{+∞}, for every v∈W^1,p_D(Ω) defined by
I(v)∫_Ωϕ(∇ v) x+∫_Ωψ(·,v) x .
In what follows, we refer to the minimization of I W^1,p_D(Ω) →ℝ∪{+∞} as the primal problem.
A (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional DL^p'(Ω;ℝ^d)→ℝ∪{ -∞}, for every y∈ L^p'(Ω;ℝ^d) defined by
D(y) -∫_Ωϕ^*( y) x-F^*( y) ,
where the distributional divergence L^p'(Ω;ℝ^d)→ (W^1,p_D(Ω))^* for every y∈L^p'(Ω;ℝ^d) and v∈W^1,p_D(Ω) is defined by ⟨ y,v⟩_W^1,p_D(Ω) -(y,∇ v)_Ω and
F^*L^p'(Ω)→ℝ∪{±∞} denotes the Fenchel conjugate to F L^p(Ω)→ℝ∪{+∞}, defined by F(v)∫_Ωψ(·,v) x for all v∈ L^p(Ω). Note that for every y∈W^p'_N(;Ω), we have that ⟨ y,v⟩_W^1,p_D(Ω)=( y, v)_Ω for all v∈ W^1,p_D(Ω) and, thus, the representation
D(y)=-∫_Ωϕ^*( y) x-∫_Ωψ^*(·, y) x .
A weak duality relation applies, cf. <cit.>, i.e.,
inf_v∈ W^1,p_D(Ω)I(v)≥sup_y∈ L^p'(Ω;ℝ^d)D(y) .
In what follows, we
always assume that ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u∈ W^1,p_D(Ω), called the primal solution, (<ref>) at least one maximizer z∈ L^p'(Ω;ℝ^d), called the dual solution, and that a strong duality relation applies, i.e.,
I(u)= D(z) .
By the Fenchel–Young inequality (cf. (<ref>)), (<ref>) is equivalent to
the convex optimality relations
z·∇ u =ϕ^*(z)+ϕ(∇ u) Ω ,
z ∈∂ F(u) .
If z∈W^p'_N(;Ω), then the convex optimality relation (<ref>) is equivalent to
z u=ψ^*(·, z)+ψ(·, u) Ω .
If ϕ∈ C^1(ℝ^d),
then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
z= Dϕ(∇ u) L^p'(Ω;ℝ^d) .
Similarly, if z∈W^p'_N(;Ω) and
ψ(x,·)∈ C^1(ℝ) for a.e. x∈Ω,
then (<ref>) is equivalent to
z=Dψ(·, u) L^p'(Ω) .
The convex duality relations (<ref>)–(<ref>) motivate introducing the primal-dual error estimator η^2 W^1,p_D(Ω)× L^p'(Ω;ℝ^d)→ [0,+∞], for every
v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d) defined by
η^2(v,y) ≔ I(v)-D(y) .
Note that the sign of the estimator (<ref>) is a consequence of the weak duality relation (<ref>).
Together with the optimal convexity measures (cf. Definition <ref>) ρ_I^2 W^1,p_D(Ω)^2→ [0,+∞] of (<ref>) at a primal solution u∈ W^1,p_D(Ω) and ρ_-D^2 (L^p'(Ω;ℝ^d))^2→ [0,+∞] of the negative of (<ref>) at a dual solution z∈L^p'(Ω;ℝ^d), we arrive at the following explicit a posteriori error representation.
The following statements apply:
(i) For every v∈ W^1,p_D(Ω) and y∈L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ W^1,p_D(Ω) and y∈W^p'_N(;Ω), we have that
η^2(v,y) = ∫_Ωϕ(∇ v)-∇ v· y+ϕ^*(y) dx+∫_Ωψ(·, v)- v div y+ψ^*(·,div y) dx .
(i) By the Fenchel–Young inequality (<ref>), the integrands in the representation (<ref>), are non-negative and, thus, suitable as local refinement indicators.
(ii) Appealing to Remark <ref>, from Theorem <ref> (i), for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), it follows that
σ_I^2(v,u)+σ_-D^2(y,z)≤η^2(v,y).
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). Using (<ref>), (<ref>), and integration-by-parts, we conclude that (<ref>) applies.
(i) In the , cf. <cit.>, i.e., ϕ1/p|·|^p∈ C^1(ℝ), p∈ (1,∞), and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^p'(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)∼F(∇ v)-F(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)∼F^*(y)-F^*(z)_L^2(Ω;ℝ^d)^2 ,
where F,F^*ℝ^d→ℝ^d for every a∈ℝ^d are defined by F(a)| a|^p-2/2a and F^*(a)| a|^p'-2/2a.
(ii) In the , cf. <cit.>, i.e., ϕ1/2|·|^2∈ C^1(ℝ) and ψ ((t,x)^⊤↦ -f(x)t+I_χ(x)(t))Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω) and χ∈ W^1,2(Ω) with χ≤ 0 on Γ_D, cf. <cit.>, where I_χ(x)(t) 0 if t≥ 0 and I_χ(x)(t) +∞ else, we have that
ρ^2_I(v,u)= 12∇ v-∇ u_L^2(Ω;ℝ^d)^2+⟨ -Λ,v-u⟩_W^1,2_D(Ω) , ρ^2_-D(y,z)≥12y-z_L^2(Ω;ℝ^d)^2 ,
where Λ∈ (W^1,2_D(Ω))^* is defined by ⟨Λ,v⟩_W^1,2_D(Ω) (f,v)_Ω-(∇ u,∇ v)_Ω for all v∈ W^1,2_D(Ω).
(iii) In an , cf. <cit.>, i.e., ϕζ∘|·|∈ C^1(ℝ), where ζ(0) 0, ζ'(t)μ_2 t if t∈ [0,t_1], ζ'(t)μ_2 t_1 if t∈ [t_1,t_2], and ζ'(t)μ_1 t if t∈ [t_2,+∞) for some 0<t_1<t_2 and 0<μ_1<μ_2 with t_1μ_2=t_2μ_1, and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^2(Ω), cf.
<cit.>,
we have that
ρ^2_I(v,u)≥12μDϕ(∇ v)-Dϕ(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)≥12μy-z_L^2(Ω;ℝ^d)^2 .
(iv) In the , cf. <cit.>, i.e.,
ϕ|·|∈ C^0(ℝ) and ψ ((t,x)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ, where g∈ L^2(Ω), cf. <cit.>, we have that
ρ^2_I(v,u)≥α2v-u_L^2(Ω)^2 , ρ^2_-D(y,z)≥12α y- z_L^2(Ω)^2 .
Since the dual problem to the minimization of the negative of (<ref>), in turn, consists in the maximization of the negative of (<ref>),
the roles of the primal problem and the dual problem may be interchanged. An advantage of Theorem <ref> consists in the fact that it yields reliable and efficient a posteriori error estimators for both the primal problem and the dual problem:
Theorem <ref> also shows that for each y∈ L^p'(Ω;ℝ^d), the estimator η^2_I,y (v↦η^2(v,y)) W^1,p_D(Ω)→ [0,+∞]
satisfies
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_I,y(v) ,
and for each v∈ W^1,p_D(Ω), the estimator η^2_-D,v (y↦η^2(v,y)) L^p'(Ω;ℝ^d)→ [0,+∞]
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_-D,v(y) .
For the a posteriori error estimators (<ref>) and (<ref>) for being numerically practicable, it is necessary to have a
computationally cheap way to obtain sufficiently accurate approximation of the dual solution (for (<ref>)) and/or of the primal solution
(for (<ref>)), respectively. In Section <ref>, resorting to (discrete) convex duality relations between a non-conforming Crouzeix–Raviart approximation of the primal problem and a Raviart–Thomas approximation of the dual problem, we arrive at discrete reconstruction formulas, called generalized Marini formulas, cf. <cit.>.
§.§ Discrete convex minimization problem and discrete convex duality
Let ψ_hΩ×ℝ→ℝ∪{+∞} denote a suitable approximation[We refrain from being too precise concerning
what we mean with approximation to allow for more flexibility. Assumptions on both ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞}, h>0, that imply, e.g., Γ-convergence results can be found in <cit.>.] of ψΩ×ℝ→ℝ∪{+∞} such that ψ_h(·,t)∈ℒ^0(𝒯_h) for all t∈ℝ and for a.e. x∈Ω, ψ_h(x,·)Ω×ℝ→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional. Then, we examine the (discrete) convex minimization problem that seeks for a function u_h^cr∈𝒮^1,cr_D(𝒯_h) that is minimal for the functional I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞}, for every v_h∈𝒮^1,cr_D(𝒯_h) defined by
I_h^cr(v_h)∫_Ωϕ(∇_ h v_h) x+∫_Ωψ_h(·,Π_h v_h) x .
In what follows, we refer the minimization of I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞} to as the discrete primal problem.
In <cit.>, it is shown that the corresponding (Fenchel) dual problem to the minimization of (<ref>)
consists in the maximization of D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h)-∫_Ωϕ^*(Π_h y_h) x-∫_Ωψ_h^*(·, y_h) x .
A discrete weak duality relation, cf. <cit.>, applies
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h^rt(y_h) .
We will always assume that ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h), called the discrete primal solution,
(<ref>) admits at least one maximizer z_h^rt∈ℛT^0_N(𝒯_h), called the discrete dual solution, and that a discrete strong duality relation applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
By the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to the discrete convex optimality relations
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
If ϕ∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
Π_h z_h^rt=Dϕ(∇_ h u_h^cr) in ℒ^0(𝒯_h)^d ,
and if ϕ^*∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to
∇_ h u_h^cr=Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Similarly, if ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
z_h^rt=Dψ_h(·,Π_hu_h^cr) in ℒ^0(𝒯_h) ,
and if ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to
Π_hu_h^cr=Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) .
The relations (<ref>)–(<ref>) motivate the following discrete recontruction formulas for a discrete dual solution z_h^rt∈ℛT^0_N(𝒯_h) from a discrete primal solution u_h^cr∈𝒮^1,cr_D(𝒯_h) and vice versa, called generalized Marini formulas, cf. <cit.>.
The following statements apply:
(i) If ϕ∈ C^1(ℝ^d) and ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>),
a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>) is given via
z_h^rt= Dϕ(∇_ h u_h^cr)+Dψ_h(·, Π_hu_h^cr)/d(_ℝ^d-Π_h_ℝ^d) in ℛT^0_N(𝒯_h) ,
a discrete strong duality relation applies, i.e., (<ref>).
(ii) If ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) is given via
u_h^cr = Dψ_h^*(·, z_h^rt)+ Dϕ^*(Π_h z_h^rt)·(_ℝ^d-Π_h_ℝ^d)
in 𝒮^1,cr_D(𝒯_h) ,
a discrete strong duality relation applies, i.e., (<ref>).
It is possible to derive reconstructions formulas similar to (<ref>) and (<ref>) under weak conditions, e.g., resorting to a regularization argument (cf. Proposition <ref>) or given discrete Lagrange multipliers (cf. <cit.>).
ad (i). See <cit.>.5mm
ad (ii). By definition, it holds u_h^cr∈ℒ^1(𝒯_h) and the discrete convex optimality relation (<ref>) is satisfied.
Since z_h^rt∈ℛT^0_N(𝒯_h) is maximal for (<ref>) as well as ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, for every y_h∈ℛT^0_N(𝒯_h), we have that
(Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω+(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 .
In particular, (<ref>) implies that Dϕ^*(Π_h z_h^rt)∈ ((|_ℛT^0_N(𝒯_h)))^⊥.
Appealing to <cit.>, it holds
((|_ℛT^0_N(𝒯_h)))^⊥=∇_h(𝒮^1,cr_D(𝒯_h)). Therefore, there exists
v_h∈𝒮^1,cr_D(𝒯_h) such that
∇_h v_h= Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d .
Hence, for every y_h∈ℛT^0_N(𝒯_h), resorting to the discrete integration-by-parts formula (<ref>), (<ref>), (<ref>), and (<ref>), we find that
(Π_hv_h-Π_h u_h^cr, y_h)_Ω
=- (Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω-(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 .
In other words, for every y_h∈ℛT^0_N(𝒯_h), we have that
( v_h-u_h^cr, y_h)_Ω= (Π_h v_h-Π_h u_h^cr, y_h)_Ω=0 .
On the other hand, we have that ∇_ h(v_h-u_h^cr)=0 in ℒ^0(𝒯_h)^d, i.e., v_h-u_h^cr∈ℒ^0(𝒯_h).
Therefore, (<ref>) in conjunction with (<ref>) implies that
v_h-u_h^cr∈ ( (ℛT^0_N(𝒯_h)))^⊥=(∇_h|_𝒮^1,cr_D(𝒯_h)). As a result, due to v_h∈𝒮^1,cr_D(𝒯_h), we conclude that u_h^cr∈𝒮^1,cr_D(𝒯_h) with
∇_ h u_h^cr =Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d ,
Π_hu_h^cr =Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) .
By the Fenchel–Young identity, cf. (<ref>), (<ref>) is equivalent to
Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω ,
z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω .
Eventually, adding (<ref>)_1 and (<ref>)_2, subsequently, integration with respect to x∈Ω, resorting to the discrete integration-by-parts formula (<ref>), and using the definitions (<ref>) and (<ref>), we arrive at I_h^cr(u_h^cr)=D_h^rt(z_h^rt),
which, appealing to the discrete weak duality relation (<ref>), implies that u_h^cr∈𝒮^1,cr_D(𝒯_h) is minimal for (<ref>).
§ APPLICATION TO THE RUDIN–OSHER–FATEMI (ROF) MODEL
In this section, we transfer the concepts derived in Section <ref> to the non-differentiable Rudin–Osher–Fatemi (ROF) model, cf. <cit.>. The approximation of the ROF model has been investigated by numerous authors: A priori error estimates has been derived in <cit.>.
A posteriori error estimates and adaptivity results can be found in <cit.>.
§.§ The continuous Rudin–Osher–Fatemi (ROF) model
Given a function g∈ L^2(Ω), i.e., the noisy image, and a constant parameter α>0, i.e., the fidelity parameter, the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, consists in the minimization of the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by
I(v) ≔ |Dv|(Ω)+α/2‖v-g‖^2_L^2(Ω) .
In <cit.>, it has been established that there exists a unique minimizer u∈ BV(Ω)∩ L^2(Ω)
of (<ref>).
Appealing to <cit.> or <cit.>, the (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D W^2_N(div;Ω) ∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W^2_N(div;Ω) ∩ L^∞(Ω;ℝ^d) defined by
D(y) ≔ -I_K_1(0)(y) - 1/(2α)‖div y+α g‖_L^2(Ω)^2+α/2‖g‖_L^2(Ω)^2 ,
where I_K_1(0) L^∞(Ω;ℝ^d)→ℝ∪{+∞} is defined by I_K_1(0)(y) ≔ 0 if y∈ L^∞(Ω;ℝ^d) with |y|≤ 1 a.e. in Ω and I_K_1(0)(y) ≔ +∞ else. Apart from that, in <cit.>, it is shown that (<ref>) admits a maximizer z∈ W^2_N(div;Ω)∩ L^∞(Ω;ℝ^d) and that a strong duality relation applies, i.e.,
I(u)=D(z) .
Appealing to <cit.>, (<ref>) is equivalent to
the convex optimality relations
div z =α (u-g) in L^2(Ω) ,
-(u, div z)_Ω =|Du|(Ω) .
Next, if we introduce, by analogy with Section <ref>, the primal-dual error estimator
η^2 BV(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞], for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) defined by
η^2(v,y) I(v)-D(y) ,
then the concepts of Section <ref> can be transferred to the ROF model.5mm
The following statements apply:
(i) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) .
(ii) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
η^2(v,y)= |Dv|(Ω)+(div y,v)_Ω+1/(2α)‖div y-α (v-g)‖_L^2(Ω)^2+I_K_1(0)(y) .
ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>),
for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that
ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) .
ad (ii). For every v∈ BV(Ω) and y∈ W^2_N(div;Ω)∩ L^∞(Ω;ℝ^d), using the definitions (<ref>) and (<ref>), we have that
η^2(v,y) =|Dv|(Ω)+α/2‖v-g‖_L^2(Ω)^2+I_K_1(0)(y)+1/(2α)‖div y+α g‖_L^2(Ω)^2-α/2‖g‖_L^2(Ω)^2
=|Dv|(Ω)+α/2‖v-g‖_L^2(Ω)^2+I_K_1(0)(y)+1/(2α)‖div y‖_L^2(Ω)^2+(div y,g)_Ω
=|Dv|(Ω)+(div y,v)_Ω+I_K_1(0)(y)+1/(2α)‖div y-α(v-g)‖_L^2(Ω)^2 ,
where the last step uses 1/(2α)‖div y-α(v-g)‖_L^2(Ω)^2 = 1/(2α)‖div y‖_L^2(Ω)^2-(div y,v-g)_Ω+α/2‖v-g‖_L^2(Ω)^2,
which yields the claimed representation.
Restricting the estimator (<ref>) to subclasses of BV(Ω) and W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), , for which an appropriate integration-by-parts formula apply, e.g., (<ref>), it is possible to derive alternative representations of the estimator (<ref>), whose integrands are point-wise non-negative and, thus, suitable as local refinement indicators.
(i) For every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), by integration-by-parts, it holds
η^2(v,y)=∇ v_L^1(Ω;ℝ^d)-(∇ v,y)_Ω+12α y+α (v-g)_L^2(Ω)^2+I_K_1(0)(y)≥ 0 .
(ii) For every T∈𝒯_h, we define the local refinement indicator η_T^2 W^1,1(Ω)× W^2_N(;Ω)∩ L^∞(Ω;ℝ^d)→ [0,+∞] for every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) by
η^2_T,W(v,y)∇ v_L^1(T;ℝ^d)-(∇ v,y)_T+12α y+α (v-g)_L^2(T)^2+I_K_1(0)(y)≥ 0 .
(iii) For every v_h∈𝒮^1,cr(Ω) and y_h∈ℛT^0_N(𝒯_h), by the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>), it holds
η^2(v_h,y_h) =∇_ h v_h_L^1(Ω;ℝ^d)+v_h_L^1(𝒮_h)-(∇_ h v_h,Π_h y_h)_Ω
+12α y_h+α (v_h-g)_L^2(Ω)^2+I_K_1(0)(y_h)≥ 0 .
(iv) For every T∈𝒯_h, we define the discrete local refinement indicator η_T,CR^2𝒮^1,cr(𝒯_h)×ℛT^0_N(𝒯_h) → [0,+∞] for every v_h∈𝒮^1,cr(𝒯_h) and y_h∈ℛT^0_N(𝒯_h) by
η^2_T,CR(v_h,y_h) ∇ v_h_L^1(T;ℝ^d)+∑_S∈𝒮_h;S⊆ Tv_h_L^1(S)-(∇_ h v_h,Π_h y_h)_T
+12α y_h+α (v_h-g)_L^2(T)^2+I_K_1(0)(y_h)≥ 0 .
We emphasize that the primal-dual error estimator (<ref>) and the representations (<ref>) or in Remark <ref> (i) & (ii) are well-known, cf. <cit.>. However, the combination of (<ref>) with the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>) in Remark <ref> (iii) & (iv), to the best of the authors' knowledge, is new and leads to significantly improved experimental convergence rates of the corresponding adaptive mesh-refinement procedure compared to the contributions <cit.>, cf. Section <ref>.
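To make the use of the indicator in (iv) concrete, the following sketch evaluates η²_{T,CR} from element-wise quantities; all arrays below are illustrative placeholders, and the jump integrals ‖⟦v_h⟧‖_{L¹(S)} as well as the element averages are assumed to be precomputed from the mesh. The L² residual term is integrated with the edge-midpoint rule, which is exact for the quadratic integrand once g is replaced by its element-wise mean.

```python
import numpy as np

def rof_cr_indicators(area, grad, mean_y, div_y, v_mid, jump_l1, g_elem, alpha):
    """Per-element refinement indicators eta_{T,CR}^2 for the ROF estimator:
    ||grad_h v_h||_{L^1(T)} + sum_S ||[v_h]||_{L^1(S)} - (grad_h v_h, Pi_h y_h)_T
    + 1/(2*alpha) ||div y_h - alpha*(v_h - g)||_{L^2(T)}^2.
    v_mid holds the values of v_h at the three edge midpoints of each element;
    jump_l1 collects the (precomputed) jump integrals of the sides of each element."""
    tv_term = area * np.linalg.norm(grad, axis=1)          # |T| * |grad_h v_h|
    pairing = area * np.einsum("ij,ij->i", grad, mean_y)   # (grad_h v_h, Pi_h y_h)_T
    # edge-midpoint quadrature (exact for quadratics) for the residual term
    resid = div_y[:, None] - alpha * (v_mid - g_elem[:, None])
    resid_sq = area / 3.0 * np.sum(resid**2, axis=1)
    return tv_term + jump_l1 - pairing + resid_sq / (2.0 * alpha)

# toy data for three elements in d = 2; in practice these arrays come from the mesh
area    = np.array([0.5, 0.5, 0.25])
grad    = np.array([[1.0, 0.0], [0.2, -0.4], [0.0, 0.0]])
mean_y  = np.array([[0.9, 0.0], [0.3, -0.5], [0.1, 0.1]])   # |Pi_h y_h| <= 1
div_y   = np.array([2.0, -1.0, 0.0])
v_mid   = np.array([[0.2, 0.3, 0.1], [0.0, -0.1, 0.1], [0.05, 0.05, 0.05]])
jump_l1 = np.array([0.01, 0.02, 0.0])
g_elem  = np.array([0.1, 0.0, 0.05])
eta_T = rof_cr_indicators(area, grad, mean_y, div_y, v_mid, jump_l1, g_elem, alpha=10.0)
print(eta_T, "-> mark the elements with the largest values for refinement")
```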
§.§ The discretized Rudin–Osher–Fatemi (ROF) model
Given g∈ L^2(Ω) and α>0, with g_h ≔ Π_hg∈ℒ^0(𝒯_h), the discretized ROF model, proposed in <cit.>, consists in the minimization of I^cr_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h(v_h) ≔ ‖∇_hv_h‖_L^1(Ω;ℝ^d)+α/2‖Π_hv_h-g_h‖^2_L^2(Ω) .
Note that the functional (<ref>) defines a non-conforming approximation of the functional (<ref>), as, e.g., jump terms of across inner element sides are not included. This, however, turned out to be essential in the derivation of optimal a priori error estimate in <cit.>.
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h^cr∈𝒮^1,cr(𝒯_h), called the discrete primal solution. Appealing to <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h^rt(y_h) ≔ -I_K_1(0)(Π_hy_h)-1/(2α)‖div y_h+α g_h‖_L^2(Ω)^2+α/2‖g_h‖_L^2(Ω)^2 .
Appealing to Theorem <ref> (below), there exists a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), which satisfies |Π_h z_h^rt|≤ 1 a.e. in Ω, a
discrete strong duality relation applies, i.e.,
I^cr_h(u_h^cr)= D_h^rt(z_h^rt) ,
and the discrete convex optimality relations
div z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) .
§.§ The regularized, discretized Rudin–Osher–Fatemi model
To approximate a discrete minimizer u_h^cr∈𝒮^1,cr(𝒯_h) of (<ref>), it is common to approximate
the modulus function by strictly convex regularizations. In this connection, for every ε∈ (0,1), we define a special regularization f_εℝ→ℝ_≥ 0 of the modulus function, for every t∈ℝ, via
f_ε(t) ≔ (1-ε) | t|_ε , | t|_ε ≔ (t^2+ε^2)^1/2 ,
where |·|_εℝ→ℝ_≥ 0 is commonly referred to as the standard regularization.
Let us collect the most important properties of the regularization (<ref>).
For every ε∈ (0,1), the following statements apply:
(i) f_ε∈ C^1(ℝ) with f_ε'(0)=0.
(ii) For every t∈ℝ, it holds -ε | t|-ε^2≤ f_ε(t)-| t|≤ε (1-| t|).
(iii) For every t∈ℝ, it holds | f_ε'(t)|≤ 1-ε.
(iv) For every s∈ℝ, it holds
f_ε^*(s)-ε ((1-ε)^2-| s|^2)^1/2 if | s|≤ 1-ε
+∞ if | s|> 1-ε .
The main reason to consider the regularization f_εℝ→ℝ_≥ 0 instead of the standard regularization |·|_εℝ→ℝ_≥ 0 consists in the property (iii) in Lemma <ref>. This additional slope reduction enables us later to construct a sufficiently accurate, admissible approximation of the dual solution using an additional projection step, cf. Remark <ref> (below) and Section <ref> (below).
ad (i). The claimed regularity f_ε∈ C^1(ℝ) is evident. Since for every t∈ℝ, it holds
f_ε'(t)=(1-ε) t(t^2+ε^2)^1/2 ,
we have that f_ε'(0)=0.
ad (ii). For every t∈ℝ, due to 0≤| t|_ε-| t|≤ε, we have that
-ε | t|-ε^2≤ -ε | t|_ε≤ f_ε(t)-| t|≤ε-ε | t|_ε≤ε (1-| t|) .
ad (iii). Immediate consequence of the representation (<ref>).
ad (iv). Due to <cit.>, for every s∈ℝ and ε∈ (0,1), we have that
f_ε^*(s)=((1-ε) |·|_ε)^*(s)=(1-ε) (|·|_ε)^*(s/(1-ε)) .
Since for every s∈ℝ and ε∈ (0,1), it holds
(|·|_ε)^*(s)=
-ε (1-| s|^2)^1/2 if | s|≤ 1
+∞ if | s|> 1
,
we conclude that
the claimed representation of the Fenchel conjugate applies.
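For orientation, the regularization (<ref>), its derivative, and the Fenchel conjugate of Lemma <ref> (iv) are straightforward to evaluate numerically. The following Python snippet is a minimal sketch (all names are ours, not part of the paper's implementation) that can be used to spot-check properties (ii) and (iii) of the lemma on a grid of sample points.

```python
import numpy as np

def f_eps(t, eps):
    """Regularization f_eps(t) = (1 - eps) * (t^2 + eps^2)^(1/2)."""
    return (1.0 - eps) * np.sqrt(t**2 + eps**2)

def df_eps(t, eps):
    """Derivative f_eps'(t) = (1 - eps) * t / (t^2 + eps^2)^(1/2)."""
    return (1.0 - eps) * t / np.sqrt(t**2 + eps**2)

def f_eps_conj(s, eps):
    """Fenchel conjugate; equals +inf outside the ball |s| <= 1 - eps."""
    s = np.asarray(s, dtype=float)
    out = np.full_like(s, np.inf)
    mask = np.abs(s) <= 1.0 - eps
    out[mask] = -eps * np.sqrt((1.0 - eps) ** 2 - s[mask] ** 2)
    return out

# numerical check of Lemma (ii) and (iii) on sample points
t, eps = np.linspace(-5.0, 5.0, 1001), 0.1
assert np.all(f_eps(t, eps) - np.abs(t) <= eps * (1.0 - np.abs(t)) + 1e-12)
assert np.all(np.abs(df_eps(t, eps)) <= 1.0 - eps + 1e-12)
```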
Given g∈ L^2(Ω), α> 0, and an element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h) with 0<ε_h<1 a.e. in Ω, for g_h ≔ Π_h g∈ℒ^0(𝒯_h), the regularized, discrete ROF model consists in the minimization of the functional I^cr_h,ε_h : 𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by
I^cr_h,ε_h(v_h) ≔ ‖f_ε_h(|∇_h v_h|)‖_L^1(Ω)+α/2 ‖Π_h v_h-g_h‖^2_L^2(Ω) .
Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous,
the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h,ε_h^cr∈𝒮^1,cr(𝒯_h), called the regularized, discrete primal solution.
Appealing to (f_ε_h∘|·|)^*=f_ε_h^*∘|·|, cf. <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D_h,ε_h^rt : ℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by
D_h,ε_h^rt(y_h) ≔ -∫_Ω f_ε_h^*(|Π_h y_h| ) dx-1/(2α) ‖div y_h+α g_h‖_L^2(Ω)^2+α/2 ‖g_h‖_L^2(Ω)^2 .
The following proposition clarifies the well-posedness of the dual regularized, discretized ROF model, i.e., the existence of a maximizer of (<ref>). It also yields a discrete reconstruction formula for a maximizer of (<ref>) from a minimizer of (<ref>) and proves discrete strong duality.
The following statements apply:
(i) A discrete weak duality relation applies, i.e.,
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) .
(ii) The discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined via the generalized Marini formula
z_h,ε_h^rt ≔ f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr| ∇_h u_h,ε_h^cr+α (Π_h u_h,ε_h^cr-g_h)/d (id_ℝ^d-Π_h id_ℝ^d) ,
satisfies z_h,ε_h^rt∈ℛT^0_N(𝒯_h) and the discrete convex optimality relations
div z_h,ε_h^rt =α (Π_h u_h,ε_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_h z_h,ε_h^rt =f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr| ∇_h u_h,ε_h^cr in ℒ^0(𝒯_h)^d .
(iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is a maximizer of (<ref>) and discrete strong duality applies, i.e.,
I^cr_h,ε_h(u_h,ε_h^cr)=D_h,ε_h^rt(z_h,ε_h^rt) .
Note that, by the Fenchel–Young identity, cf. <cit.>, (<ref>) is equivalent to
Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr =f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε (|∇_h u_h,ε_h^cr|) in ℒ^0(𝒯_h) .
Appealing to Lemma <ref> (iii), we have that |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω. Therefore,
if ‖Π_h u_h,ε_h^cr-g_h‖_L^∞(Ω)≤ c_0 for some c_0>0, which can be expected by discrete maximum principles, then, choosing
ε_h ≔ (α c_0/d) h, yields that
‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)≤ 1. However, choices like ε_h∼ h let us expect convergence rates not better than 𝒪(h^1/2), cf. Proposition <ref> (i) (below). In order to allow for the convergence rate 𝒪(h), one needs to choose ε_h∼ h^2. But, in this case, we cannot guarantee that ‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)≤ 1, so that we instead consider the scaled vector field z̃_h,ε_h^rt ≔ z_h,ε_h^rt (max{1,‖z_h,ε_h^rt‖_L^∞(Ω;ℝ^d)})^-1∈ℛT^0_N(𝒯_h), which is still a sufficiently accurate approximation of the dual solution, as indicated by the numerical experiments, cf. Section <ref>.
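The scaling step described in this remark is cheap to realize in practice. The following NumPy sketch is our own illustration (not taken from the authors' code): it evaluates the element-wise projection Π_h z_h,ε_h^rt via the optimality relation (<ref>) from element-wise gradients of u_h,ε_h^cr and then applies the normalization by max{1, ‖·‖_L^∞}; as a simplification, the L^∞-norm of the Raviart–Thomas field is replaced here by the maximum modulus of its element-wise projection.

```python
import numpy as np

def scaled_projected_flux(grad_u, eps):
    """grad_u: (n_elements, d) element-wise gradients of the regularized
               discrete primal solution
       eps:    (n_elements,)  element-wise regularization parameters
    Returns a scaled approximation of the element-wise projected flux."""
    norm2 = (grad_u ** 2).sum(axis=1, keepdims=True)
    # optimality relation: Pi_h z = f_eps'(|grad u|) / |grad u| * grad u
    Pi_z = (1.0 - eps)[:, None] * grad_u / np.sqrt(norm2 + (eps ** 2)[:, None])
    # approximate sup-norm scaling, as in the remark above
    scale = max(1.0, float(np.sqrt((Pi_z ** 2).sum(axis=1)).max()))
    return Pi_z / scale
```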
ad (i). Using element-wise that f_ε_h=f_ε_h^**, the definition of the convex conjugate, cf. (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)=inf_v_h∈𝒮^1,cr_D(𝒯_h)f_ε_h^**(|∇_ h v_h|)_L^1(Ω)+α2Π_h v_h-g_h_L^2(Ω)^2
=
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℒ^0(𝒯_h)^d-∫_Ωf_ε_h^*(|y_h |) dx+(y_h,∇_ h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-( y_h,Π_h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2
≥
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-sup_v_h∈ℒ^0(𝒯_h)( y_h,v_h)_Ω-α2v_h-g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2
=
sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) ,
which is the claimed discrete weak duality relation.
ad (ii). By Lemma <ref>, the minimality of u_h,ε_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>), for every v_h∈𝒮^1,cr(𝒯_h), yields that
(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω=0 .
By definition, the discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined by (<ref>), satisfies the discrete convex optimality condition (<ref>) and (z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h.
Choosing v_h=1∈𝒮^1,cr(𝒯_h) in (<ref>), we find that ∫_Ωα (Π_hu_h,ε_h^cr-g_h) dx=0.
Hence, since for Γ_D=∅ the divergence operator ℛT^0_N(𝒯_h)→ℒ^0(𝒯_h)/ℝ is surjective, there exists
y_h∈ℛT^0_N(𝒯_h) such that y_h=α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h). Then, we have that ((z_h,ε_h^rt-y_h)|_T)=0 in T for all T∈𝒯_h, i.e., z_h,ε_h^rt-y_h∈ℒ^0(𝒯_h)^d. In addition, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(Π_h y_h,∇_ h v_h)_Ω =-( y_h,Π_h v_h)_Ω
=-α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω
=(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω
=(Π_h z_h,ε_h^rt,∇_ h v_h)_Ω .
In other words, for every v_h∈𝒮^1,cr(𝒯_h), it holds
(y_h-z_h,ε_h^rt,∇_ h v_h)_Ω=(Π_h y_h-Π_h z_h,ε_h^rt,∇_ h v_h)_Ω=0 ,
i.e., y_h-z_h,ε_h^rt∈∇_ h(𝒮^1,cr_D(𝒯_h))^⊥. By the decomposition (<ref>), we have that ∇_ h(𝒮^1,cr_D(𝒯_h))^⊥=(|_ℛT^0_N(𝒯_h))⊆ℛT^0_N(𝒯_h).
As a result, it holds y_h-z_h,ε_h^rt∈ℛT^0_N(𝒯_h). Due to y_h∈ℛT^0_N(𝒯_h), we conclude that z_h,ε_h^rt∈ℛT^0_N(𝒯_h). In particular, now from
(z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h, it follows the discrete optimality condition
(<ref>).
ad (iii). Using (<ref>), (<ref>), and the discrete integration-by-parts formula (<ref>), we find that
I_h,ε_h^cr(u_h,ε_h^cr) =
f_ε_h(|∇_ h u_h,ε_h^cr|)_L^1(Ω)+α2Π_h u_h,ε_h^cr-g_h_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx+(Π_h z_h,ε_h^rt,∇_ h u_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-( z_h,ε_h^rt,Π_hu_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-1α( z_h,ε_h^rt, z_h,ε_h^rt+α g_h)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2
=-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-12α z_h,ε_h^rt+α g_h_L^2(Ω)^2
=D_h,ε_h^rt(z_h,ε_h^rt) ,
which is the claimed discrete strong duality relation and, thus, appealing to the discrete weak duality relation (<ref>), proves the maximality of z_h,ε_h^rt∈ℛT^0_N(𝒯_h) for (<ref>).
The following proposition describes the approximative behavior the regularized, discretized ROF problem towards the (unregularized) discretized ROF problem, given uniform convergence (to zero) of the element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h). In what follows, in the convergence ε_h_L^∞(Ω)→ 0,
the average mesh-size h>0 is always fixed.
If ‖ε_h‖_L^∞(Ω)<1, then the following statements apply:
(i) It holds α/2 ‖Π_h u_h,ε_h^cr-Π_h u_h^cr‖_L^2(Ω)^2
≤‖ε_h‖_L^∞(Ω)/(1-‖ε_h‖_L^∞(Ω)) (α/2 ‖g‖_L^2(Ω)^2+2 |Ω|).
(ii) div z_h,ε_h^rt→α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) (‖ε_h‖_L^∞(Ω)→ 0).
(iii) f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (‖ε_h‖_L^∞(Ω)→ 0).
(iv) f_ε_h (|∇_h u_h,ε_h^cr|)→|∇_h u_h^cr| in ℒ^0(𝒯_h) (‖ε_h‖_L^∞(Ω)→ 0).
ad (i). Using both the strong convexity of I_h^cr𝒮^1,cr(𝒯_h)→ℝ∪{+∞} and Lemma <ref> (ii),
we obtain
α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2 ≤ I_h^cr(u_h,ε_h^cr)-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h,ε_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω| -I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω|-I_h^cr(u_h^cr)
≤11-ε_h_L^∞(Ω) ( I_h^cr(u_h^cr)
+2 ε_h_L^∞(Ω) |Ω|)-I_h^cr(u_h^cr)
=
ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (I_h^cr(u_h^cr)+2 |Ω|) .
Since, by the minimality of u_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>) and the L^2-stability of Π_h L^2(Ω)→ℒ^0(𝒯_h), it holds
I_h^cr(u_h^cr)≤ I_h^cr(0)=α2g_h_L^2(Ω)^2≤α2g_L^2(Ω)^2 ,
from (<ref>) we conclude the claimed error estimate.
ad (ii). From claim (i), it follows that
Π_h u_h,ε_h^cr→Π_hu_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Thus, using (<ref>), from div z_h,ε_h^rt=α ( Π_h u_h,ε_h^cr-g_h) in ℒ^0(𝒯_h), cf. (<ref>), we conclude that
div z_h,ε_h^rt→α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
ad (iii). Due to Π_h z_h,ε_h^rt=f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr and Lemma <ref> (iii), we have that
|Π_h z_h,ε_h^rt| =| f_ε_h'(|∇_h u_h,ε_h^cr|)|≤ 1-ε_h a.e. in Ω .
Therefore, using Lemma <ref> (iv) together with (<ref>), we conclude that
. | f_ε_h^*(|Π_h z_h,ε_h^rt| )| =
ε_h ((1-ε_h)^2-|Π_h z_h,ε_h^rt| ^2)^1/2
≤ε_h (1-ε_h)≤ε_h
} a.e. in Ω ,
which implies that f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
ad (iv). Due to (<ref>), (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) is bounded. The finite-dimensionality of 𝒮^1,cr(𝒯_h) and the Bolzano–Weierstraß theorem yield a subsequence (u_h,ε_h'^cr)_ε_h'_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) and a function ũ_h^cr∈𝒮^1,cr(𝒯_h) such that
u_h,ε_h'^cr→ũ_h^cr in 𝒮^1,cr(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
From (<ref>) it is readily derived that
f_ε_h' (|∇_h u_h,ε_h'^cr|)→|∇_hũ_h^cr| in ℒ^0(𝒯_h) (ε_h'_L^∞(Ω)→ 0) .
Consequently, for every v_h∈𝒮^1,cr(𝒯_h), we find that
I_h^cr(ũ_h^cr) =lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(u_h,ε_h'^cr)
≤lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(v_h)
=I_h^cr(v_h) .
Thus, due to the uniqueness of u_h^cr∈𝒮^1,cr(𝒯_h) as a minimizer of (<ref>), we get ũ_h^cr=u_h^cr in 𝒮^1,cr(𝒯_h). Since this argumentation remains valid for each subsequence of (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h), the standard subsequence principle implies that f_ε_h (|∇_h u_h,ε_h^cr|)→|∇_h u_h^cr| in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0).
The approximation properties of the regularized, discrete ROF model (<ref>) (and (<ref>)) towards the (unregularized) discrete ROF model (<ref>) (and (<ref>)) enable us to transfer the discrete convex duality relations established in Proposition <ref>, which apply mainly due to the differentiability of the regularized, discrete ROF model, to the non-differentiable discrete ROF model. To the best of the authors' knowledge, the following discrete convex duality relations for the (unregularized) discrete ROF model (<ref>)
seem to be new.
There exists a vector field z_h^rt∈ℛT^0_N(𝒯_h) with |Π_h z_h^rt|≤ 1 a.e. in Ω and the following properties:
(i) For a not relabeled subsequence, it holds
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
(ii) There hold the following discrete convex optimality relations:
div z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) ,
Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) .
(iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is maximal for D_h^rtℛT^0_N(𝒯_h)→ℝ and discrete strong duality applies, i.e.,
I_h^cr(u_h^cr)=D_h^rt(z_h^rt) .
ad (i). Due to Proposition <ref> (ii) and (<ref>), the sequence (z_h,ε_h^rt)_ε_h_L^∞(Ω)→ 0⊆ℛT^0_N(𝒯_h) is bounded. Thus, by the finite-dimensionality of ℛT^0_N(𝒯_h), the Bolzano–Weierstraß theorem yields a not relabeled subsequence and a vector field z_h^rt∈ℛT^0_N(𝒯_h) such that
z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
Due to the continuity of Π_h L^1(Ω)→ℒ^0(𝒯_h) and ℛT^0_N(𝒯_h)↪ L^1(Ω), from (<ref>), we obtain
Π_h z_h,ε_h^rt→Π_h z_h^rt in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) .
From |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω, cf. (<ref>), and (<ref>), we obtain |Π_h z_h^rt|≤ 1 a.e. in Ω, i.e.,
I_K_1(0)(Π_h z_h^rt)=0 .
ad (ii). Using Proposition <ref>, (<ref>), and (<ref>), we find that
div z_h^rt =lim_ε_h_L^∞(Ω)→ 0 div z_h,ε_h^rt
=lim_ε_h_L^∞(Ω)→ 0α (Π_h u_h,ε_h^cr-g_h)
=α (Π_h u_h^cr-g_h) a.e. in Ω ,
as well as
Π_h z_h^rt·∇_h u_h^cr =lim_ε_h_L^∞(Ω)→ 0Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr
=lim_ε_h_L^∞(Ω)→ 0[f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε_h(|∇_h u_h,ε_h^cr|)]
=|∇_h u_h^cr| a.e. in Ω ,
i.e., the claimed discrete convex optimality conditions.
ad (iii).
Using Proposition <ref> and (<ref>), we find that
I_h^cr(u_h^cr) =lim_ε_h_L^∞(Ω)→ 0I_h,ε_h^cr(u_h,ε_h^cr)
=lim_ε_h_L^∞(Ω)→ 0D_h,ε_h^rt(z_h,ε_h^rt)
=D_h^rt(z_h^rt) ,
i.e., the claimed discrete strong duality relation.
§ NUMERICAL EXPERIMENTS
In this section, we review the theoretical findings of Section <ref> via numerical experiments. To compare approximations to an exact solution, we impose Dirichlet boundary conditions on Γ_D=∂Ω, though an existence theory is difficult to establish, in general. However, the concepts derived in Section <ref> carry over verbatim with Γ_N=∅ provided that the existence of a minimizer is given. All experiments were conducted deploying the finite element software package FEniCS (version 2019.1.0), cf. <cit.>. All graphics were generated using the matplotlib library (version 3.5.1), cf. <cit.>, and the vedo library (version 2023.4.4), cf. <cit.>.
§.§ Implementation details regarding the optimization procedure
All computations are based on the regularized, discrete ROF problem (<ref>). This is motivated by the fact that appealing to Proposition <ref> (i), in order to bound the error u-Π_h u_h^cr_L^2(Ω), it suffices to determine the error u-Π_h u_h,ε_h^cr_L^2(Ω). The iterative minimization of (<ref>) is realized using a semi-implicit discretized L^2-gradient flow from <cit.> (see also <cit.>) modified with a residual stopping criterion guaranteeing the necessary accuracy in the optimization procedure.
Appealing to <cit.>, the iterates u_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, the residuals r_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, generated by Algorithm <ref>, and the minimizer u_h,ε_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) satisfy
‖u_h,ε_h^cr-u_h^k‖_L^2(Ω)≤ 2 ‖r_h^k‖_L^2(Ω) .
In consequence, if we choose as a stopping criterion that ‖r_h^k^*‖_L^2(Ω)≤ε_stop^h ≔ c_stop h for k^*∈ℕ, where c_stop>0 does not depend on h>0, then, owing to Proposition <ref> (i) and (<ref>), we have that
‖Π_h(u_h^cr-u_h^k^*)‖_L^2(Ω)^2≤‖ε_h‖_L^∞(Ω)/(1-‖ε_h‖_L^∞(Ω)) (2 ‖g‖_L^2(Ω)^2+(8/α) |Ω|)+8 c_stop^2 h^2 .
If ‖ε_h‖_L^∞(Ω)≤ c_reg h^2, where c_reg∈ (0,1), then we arrive at ‖Π_h(u_h^cr-u_h^k^*)‖_L^2(Ω)=𝒪(h).
Thus, to bound the error ‖u-Π_h u_h^cr‖_L^2(Ω) experimentally, it is sufficient to compute ‖u-Π_h u_h^k^*‖_L^2(Ω).
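For orientation, a single step of the semi-implicit discretized L^2-gradient flow can be realized in a few lines of legacy FEniCS (DOLFIN 2019.1.0). The snippet below is only a sketch under simplifying assumptions of our own: it uses a constant regularization parameter instead of an element-wise ε_h, mimics the element-wise projection Π_h by a one-point (barycentre) quadrature rule, omits boundary conditions and the residual-based stopping criterion, and all names are placeholders rather than the authors' implementation.

```python
from dolfin import *

n, alpha, eps, tau = 64, 10.0, 1e-4, 1.0
mesh = UnitSquareMesh(n, n)
V = FunctionSpace(mesh, "CR", 1)                  # Crouzeix-Raviart space
g = Expression("x[0] < 0.5 ? 1.0 : 0.0", degree=0)

u_old = Function(V)                               # u_h^{k-1}, initialised to 0
u, v = TrialFunction(V), TestFunction(V)

def w_eps(p):                                     # weight f_eps'(|p|)/|p|
    return (1.0 - eps) / sqrt(dot(p, p) + eps ** 2)

# one-point (barycentre) quadrature mimics the element-wise projection Pi_h
dxm = dx(metadata={"quadrature_degree": 1})

a = (u * v / tau) * dx \
    + w_eps(grad(u_old)) * dot(grad(u), grad(v)) * dx \
    + alpha * u * v * dxm
L = (u_old * v / tau) * dx + alpha * g * v * dxm

u_new = Function(V)
solve(a == L, u_new)                              # one step u_h^{k-1} -> u_h^k
u_old.assign(u_new)                               # prepare the next step
```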
The following proposition proves the well-posedness, stability, and convergence of Algorithm <ref>.
Let the assumptions of Algorithm <ref> be satisfied and let ε_h∈ℒ^0(𝒯_h) such that ε_h>0 a.e. in Ω and ε_h_L^∞(Ω)<1. Then, the following statements apply:
(i) Algorithm <ref> is well-posed, i.e., for every k∈ℕ, given the most-recent iterate u_h^k-1∈𝒮^1,cr_D(𝒯_h), there exists a unique iterate u_h^k∈𝒮^1,cr_D(𝒯_h) solving (<ref>).
(ii) Algorithm <ref> is unconditionally strongly stable, i.e., for every L∈ℕ, it holds
I_h,ε_h^cr(u_h^L)+τ∑_k=1^Ld_τ u_h^k_L^2(Ω)^2≤ I_h,ε_h^cr(u_h^0) .
(iii) Algorithm <ref> terminates after a finite number of steps, i.e., there exists k^*∈ℕ such that ‖r_h^k^*‖_L^2(Ω)≤ε_stop^h.
The proof of Proposition <ref> (ii) is essentially based on the following inequality.
For every ε∈ (0,1) and a,b∈ℝ^d, it holds
f_ε'(| a|)/| a| b·(b-a)≥ f_ε(| b|)-f_ε(| a|)+1/2 f_ε'(| a|)/| a| | b-a|^2 .
Follows from <cit.>, since f_ε∈ C^1(ℝ_≥ 0) and (t↦ f_ε'(t)/t)∈ C^0(ℝ_≥ 0) is positive and non-decreasing for all ε∈ (0,1).
ad (i). Since f_ε'(t)/t≥ 0 for all ε∈ (0,1) and t≥ 0, the of Algorithm <ref> is a direct consequence of the Lax–Milgram lemma.
ad (ii).
Let L∈ℕ be arbitrary. Then,
for every k∈{1,…,L}, choosing v_h=d_τ u_h^k∈𝒮^1,cr_D(𝒯_h) in (<ref>), we find that
‖d_τ u_h^k‖_L^2(Ω)^2+(f_h,ε_h'(|∇_h u_h^k-1| )/|∇_h u_h^k-1| ∇_h u_h^k,∇_h d_τ u_h^k)_Ω+α (Π_h u_h^k-g_h,Π_h d_τ u_h^k)_Ω=0 .
Appealing to Lemma <ref> with a=∇_hu_h^k-1|_T∈ℝ^d and b=∇_h u_h^k|_T∈ℝ^d applied for all T∈𝒯_h, for every k∈{1,…,L}, we have that
f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k·∇_h d_τ u_h^k≥ d_τ f_h,ε_h(|∇_hu_h^k| ) a.e. in Ω .
In addition, since d_τ g_h=0, for every k∈{1,…,L}, we have that
(Π_hu_h^k-g_h)Π_h d_τ u_h^k =(Π_hu_h^k-g_h)d_τ(Π_h u_h^k-g_h)
=d_τ2|Π_hu_h^k-g_h|^2 .
Using (<ref>) and (<ref>) in (<ref>), for every k∈{1,…,L},
we arrive at
d_τ u_h^k_L^2(Ω)^2+d_τ I_h,ε_h^cr(u_h^k)≤ 0 .
Summation of (<ref>) with respect to k∈{1,…,L}, using ∑_k=1^Ld_τ I_h,ε_h^cr(u_h^k)=I_h,ε_h^cr(u_h^L)-I_h,ε_h^cr(u_h^0), yields the claimed stability estimate.
ad (iii). Due to (i), we have that d_τ u_h^k_L^2(Ω)^2→ 0 (k→∞), i.e., by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h) and the equivalence of norms, it holds
u_h^k-u_h^k-1→ 0 in 𝒮^1,cr_D(𝒯_h) (k→∞) .
In addition, due to (i), we have that I_h,ε_h^cr(u_h^k)≤ I_h,ε_h^cr(u_h^0), which, using Lemma <ref>, implies that
(u_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h) is bounded. Due to the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), the -straß theorem yields a subsequence (u_h^k_l)_l∈ℕ⊆𝒮^1,cr_D(𝒯_h) and a function ũ_h∈𝒮^1,cr_D(𝒯_h) such that
u_h^k_l→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
Due to (<ref>), from (<ref>), we deduce that
u_h^k_l-1→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) .
As a result, using (<ref>)–(<ref>), by passing for l→∞ in (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(f_h,ε_h'(|∇_hũ_h| )|∇_hũ_h|∇_hũ_h ,∇_hv_h )_Ω+α (Π_hũ_h-g_h,Π_hv_h)_Ω=0 ,
and, by uniqueness, ũ_h=u_h,ε_h^cr.
Hence, using (<ref>) and (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain
(r_h^k_l,v_h)_Ω =(f_h,ε_h'(|∇_hu_h^k_l| )|∇_hu_h^k_l|∇_hu_h^k_l,∇_hv_h )_Ω+α (Π_hu_h^k_l-g_h,Π_hv_h)_Ω
→(f_h,ε_h'(|∇_hu_h,ε_h^cr| )|∇_hu_h,ε_h^cr|∇_hu_h,ε_h^cr ,∇_hv_h )_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_hv_h)_Ω=0 (l→∞) ,
i.e., r_h^k_l⇀ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), and, thus, by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), r_h^k_l→ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), which implies that r_h^k_l→ 0 in L^2(Ω) (l→∞). As this remains valid for each subsequence of (r_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h), the standard convergence principle yields that r_h^k→ 0 in L^2(Ω) (k→∞). In particular, there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε^h_stop.
§.§ Implementation details regarding the adaptive mesh refinement procedure
Before we present numerical experiments, we briefly outline the details of the implementations regarding the adaptive mesh refinement procedure.
In general, we follow the adaptive algorithm, cf. <cit.>:
(i) The regularized, discrete primal solution u_i^cr∈𝒮^1,cr_D(𝒯_i) in step (Solve') is computed using
the semi-implicit discretized L^2-gradient flow, cf. Algorithm <ref>, for fixed step-size τ=1.0, stopping criterion ε_stop^h_i ≔ h_i/√(20), and initial condition u_i^0=0∈𝒮_D^1,cr(𝒯_i). Appealing to Proposition <ref> (ii), Algorithm <ref> is unconditionally strongly stable, so that employing the fixed step-size τ=1.0 is a reasonable choice.
The stopping criterion ε_stop^h_i ≔ h_i/√(20) ensures (cf. the argumentation below Algorithm <ref>) that the final iterate u_h_i^k^*∈𝒮^1,cr_D(𝒯_i) is a sufficiently accurate approximation of the discrete primal solution, in the sense
that its accuracy does not violate the best possible linear convergence rate, cf. Remark <ref> (below).
(ii) As an approximation ũ_i^cr∈𝒮^1,cr_D(𝒯_i) with ũ_i^cr=0 on ∂Ω, we employ
ũ_i^cr ≔
u_i^cr if u_i^cr=0 on ∂Ω ,
I_i^∂ u_i^cr else ,
where the operator I_i^∂ : 𝒮^1,cr(𝒯_i)→𝒮^1,cr_D(𝒯_i) for every v_h_i∈𝒮^1,cr(𝒯_i) is defined by
I_i^∂ v_h_i ≔ ∑_S∈𝒮_h_i;S∩∂Ω=∅ v_h_i(x_S) φ_S .
(iii) Note that the particular choices in (ii) are only due to the imposed homogeneous Dirichlet boundary condition. In the case Γ_D=∅, the choice ũ_i^cr ≔ u_i^cr∈𝒮^1,cr(𝒯_i) is always admissible.
(iv) If not otherwise specified, we employ the parameter θ=1/2 in (Mark').
(v) To find the set ℳ_i⊆𝒯_i in step (Mark'), we deploy the Dörfler marking strategy, cf. <cit.> (see the sketch following this list).
(vi) The (minimal) conforming refinement of 𝒯_i with respect to ℳ_i in step (Refine') is obtained by deploying the red-green-blue-refinement algorithm, cf. <cit.>.
(vii) For the construction of the adaptively modified regularization parameter ε_i∈ℒ^0(𝒯_i) in step (Refine'), we employ separately the following two cases:
ε_i ≔ α/d |Π_h_i-1 u_i-1^cr-g_h_i| h_i^2 + h_i^3 (local) ,
ε_i ≔ h_i^2 (global) .
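As referenced in item (v), the Dörfler marking step admits a compact greedy implementation. The following Python sketch is our own illustration (array names are hypothetical): given the element-wise squared indicators, it selects a set of elements whose indicators sum to at least a θ-fraction of the total.

```python
import numpy as np

def doerfler_mark(eta2, theta=0.5):
    """Return indices of a (greedy) set M of elements with
       sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2."""
    order = np.argsort(eta2)[::-1]              # largest indicators first
    csum = np.cumsum(eta2[order])
    k = np.searchsorted(csum, theta * csum[-1]) + 1
    return order[:k]

# usage sketch: eta2 holds the squared refinement indicators per element
eta2 = np.random.rand(1000) ** 4
marked = doerfler_mark(eta2, theta=0.5)
```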
§.§ Example with Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^d, Γ_D=∂Ω, d∈{2,3}, r=1/2, α =10, and g=χ_B_r^d(0)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^d), for a.e. x∈Ω are defined by
u(x) ≔ (1-d/(α r)) g(x) ,
z(x) ≔
-x/r if | x| < r ,
-r x/| x|^d if | x|≥ r .
Note that z∈ W^1,∞(Ω;ℝ^d), so that, appealing to <cit.>, uniform mesh-refinement (i.e., θ=1 in Algorithm <ref>) is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
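For reproducibility, the exact pair above is simple to evaluate pointwise; the following NumPy sketch (function names are ours) implements u and z for arrays of sample points and can be used to compute the error quantities reported below.

```python
import numpy as np

def exact_primal(x, d=2, r=0.5, alpha=10.0):
    """u(x) = (1 - d/(alpha*r)) * chi_{B_r(0)}(x); x has shape (N, d)."""
    chi = (np.linalg.norm(x, axis=1) < r).astype(float)
    return (1.0 - d / (alpha * r)) * chi

def exact_dual(x, d=2, r=0.5):
    """z(x) = -x/r inside B_r(0) and -r*x/|x|^d outside."""
    nrm = np.linalg.norm(x, axis=1, keepdims=True)
    outside = -r * x / np.maximum(nrm, 1e-14) ** d
    return np.where(nrm < r, -x / r, outside)
```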
2D Case.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement towards the circle ∂ B_r^2(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it is seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below). In addition, Figure <ref> indicates that the primal-dual error estimator is reliable and efficient with respect to the error quantity
ρ̃^2(u_i^cr,z_i^rt) ≔ α/2 ‖u_i^cr-u‖^2_L^2(Ω)+1/(2α) ‖z_i^rt- z‖^2_L^2(Ω) , i∈ℕ ,
which, appealing to Remark <ref> (iv), is a lower bound for the sum of the optimal convexity measures.
3D Case. The initial triangulation 𝒯_0 of Algorithm <ref> consists of 27 cubes each divided into six tetrahedrons. Using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global), we report similar results to the 2D case: for both choices,
a refinement towards the sphere ∂ B_r^3(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported, which can be seen
in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10) and
the (local) L^2-projection onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted.
Figure <ref> shows that the adaptive Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below).
In one dimension, the L^2-best-approximation error of the sign function on quasi-uniform
partitions is of order 𝒪(h^1/2), cf. <cit.>. More generally, using that the
intersection BV(Ω) ∩ L^∞(Ω) is contained in
fractional Sobolev spaces W^s,2(Ω) for all s<1/2,
cf. <cit.>, one cannot expect a higher convergence rate
than 𝒪(h^1/2) for generic, essentially bounded functions of bounded variation. For triangulations that are graded towards the jump
sets of certain discontinuous functions with a quadratic grading
strength, i.e., the local mesh-size satisfies
h_T ∼ h^2 for all elements T∈𝒯_h at the discontinuity set, with the average mesh-size h∼(𝒩_h)^-1/d, a linear
convergence rate 𝒪(h) has been established in <cit.>. Since our
error estimates not only bound squared L^2-errors but also control
squares of L^p-norms of non-linear error quantities involving derivatives, a higher convergence rate than linear cannot be expected.
In view of these aspects, the linear convergence rate 𝒪(h) for
the devised adaptive strategy is quasi-optimal.
§.§ Example without Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, r=1/2, α =10, and g=χ_B_r^2(re_1)-χ_B_r^2(-re_1)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), for a.e. x∈Ω are defined by
u(x) ≔ (1-2/(α r)) g(x) ,
z(x) ≔ ∓(x∓ r e_1)/r if | x∓ r e_1| < r ,
∓ r(x∓ r e_1)/| x∓ r e_1|^2 if | x∓ r e_1|≥ r .
Note that z∉ W^1,∞(Ω;ℝ^2), so that we cannot refer to <cit.> in order to expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
However, since z|_Ω^±∈ W^1,∞(Ω^±;ℝ^2), where Ω^+ ≔ Ω∩ (ℝ_>0×ℝ) and Ω^- ≔ Ω∩ (ℝ_<0×ℝ), and since the coarsest triangulation 𝒯_0 of Figure <ref> and, hence, also all resulting refinements 𝒯_i, i∈ℕ, of 𝒯_0 resolve J_z ≔ Ω∩ ({0}×ℝ), i.e., the jump set of
z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2), in the sense that J_z⊆⋃_S∈𝒮_h_iS for all i∈ℕ,
referring to <cit.>, we can expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement towards ∂ B_r^2(re_1)∪∂ B_r^2(-re_1), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the scaled regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that employing the adaptively modified regularization parameter, cf. (local), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example with Lipschitz continuous primal solution and Lipschitz continuous dual solution
We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, α =10, s(t) ≔ √(3t) and r(t) ≔ (1+√(1-4t))/2 for t=0.1, and g∈ BV(Ω)∩ L^∞(Ω) for a.e. x∈Ω, be defined by
g(x) ≔
1 +(2-α(s(t)^2+t))/(α s(t)) if | x|≤ s(t) ,
1 +(1-α(| x|^2+t))/(α | x|) if s(t)<| x|≤ r(t) ,
0 else .
Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(div;Ω)∩ L^∞(Ω;ℝ^2) with | z|≤ 1 a.e. in Ω, for a.e. x∈Ω are defined by
u(x) ≔
1 -(s(t)^2+t)/s(t) if | x|≤ s(t) ,
1 -(| x|^2+t)/| x| if s(t)<| x|≤ r(t) ,
0 else ,
z(x)
-x/s(t) if | x|≤ s(t) ,
-x/| x| if s(t)<| x|≤ r(t) ,
-xr(t)/| x|^2 else .
Note that z∈W^1,∞(Ω;ℝ^2), so that, appealing to <cit.>, uniform mesh-refinement is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2).
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,5,10,15}, generated by Algorithm <ref>
employing either ε_i∈ℒ^0(𝒯_i), cf. (local), or ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement mainly towards and on the set {|∇ u| >0} is reported.
This is also seen in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10), the (local)
L^2-projection onto element-wise constant functions
Π_h_10 u_10^cr∈ℒ^0(𝒯_10), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) and of the scaled regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted. Figure <ref> shows that employing the adaptively modified regularization parameter, cf. (local), the refinement takes place at and on the set {|∇ u| >0}. However, in Figure <ref>, again, it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref>. In addition, Figure <ref> indicates that the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>).
§.§ Example without Dirichlet boundary condition and without exact solution
We examine an example from <cit.>. In this example, we let Ω=(-1,1)^2, r=1/2, Γ_D=∅, α =100, and g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution
and the dual solutions are not known. However, appealing to <cit.>, given the regularity of g∈ BV(Ω)∩ L^∞(Ω),
we can expect the convergence rate 𝒪(h^1/4) using uniform mesh refinement.
The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays
the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref>
using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), or the global choice ε_i ≔ h_i^2, cf. (global). For both choices,
a refinement towards the square ∂ [-r,r]^2, i.e., the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω) is reported.
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local)
L^2-projection onto element-wise constant functions
Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and
the (local) L^2-projections onto element-wise affine functions of
the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (local), the refinement is, again, more concentrated at the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω). However, in Figure <ref> it can be seen that (local) does not result in an improved error decay, but an error decay comparable to (global). In addition,
Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/4) predicted by <cit.> for uniform mesh-refinement to the value 𝒪(h^2/5). This, on the one hand, confirms the optimality of the a priori error estimates established in <cit.> and, on the other hand, appealing to <cit.>, lets us expect that there exists no Lipschitz continuous dual solution to the given data g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). The reported reduced error decay of 𝒪(h^2/5) compared to <cit.>, where an error decay of 𝒪(h^1/2) is reported, might only be pre-asymptotic and due to slight accuracy losses resulting from the global scaling step. This might be due to potential singularities of a dual solution located at the corners of the square ∂ [-r,r]^2, as indicated in Figure <ref>. Therefore, it is possible that the error decay 𝒪(h^1/2) in <cit.> may be reported after surpassing a potential pre-asymptotic regime.
§.§ Numerical experiments with application to image processing
In order to benchmark the performance of the proposed numerical scheme (cf. Algorithm <ref> and Algorithm <ref>)
in a problem related to image processing, we examine a standard example from the field of image processing (cf. Section <ref>) and a new example (cf. Section <ref>).
§.§.§ The Cameraman image
We examine the cameraman image, which in a similar context has been considered in <cit.>. In this example,
we let Ω ≔ (0,1)^2, Γ_D=∅, α=1e+4, and g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the cameraman image on a uniform triangulation with 66,049
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as a coarsening strategy, reduces
the number of nodes within 30 iteration steps to 25,059 nodes, which corresponds to 38.0% of the initial number of nodes and results in a squared L^2-error of ‖u_30^cr-g‖_L^2(Ω)^2≈ 2.211e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
§.§.§ The Merle image
We examine an image of Merle, the male cat of the second author. In this example,
we let Ω ≔ (0,1)^2, Γ_D=∅, α=1e+4, and
g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the Merle image on a uniform triangulation with 140,625
nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as a coarsening strategy, reduces
the number of nodes within 30 iteration steps to 41,749 nodes, which is 30.0% of the initial number of nodes and results in a squared L^2-error of ‖u_30^cr-g‖_L^2(Ω)^2≈ 2.162e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges.
AO00
M. Ainsworth and J. T.
Oden, A posteriori error estimation in finite element
analysis, Pure and Applied Mathematics (New York), Wiley-Interscience
[John Wiley & Sons], New York, 2000.
10.1002/9781118032824.
Bar12
S. Bartels, Total variation minimization with finite
elements: convergence and iterative solution, SIAM J. Numer. Anal.
50 no. 3 (2012), 1162–1180.
10.1137/11083277X.
Bar15
S. Bartels, Numerical methods for nonlinear
partial differential equations, Springer Series in Computational
Mathematics 47, Springer, Cham, 2015.
10.1007/978-3-319-13797-1.
Bar21
S. Bartels, Nonconforming discretizations of convex
minimization problems and precise relations to mixed methods, Comput.
Math. Appl. 93 (2021), 214–229.
10.1016/j.camwa.2021.04.014.
BDN18
S. Bartels, L. Diening, and
R. H. Nochetto, Unconditional stability of
semi-implicit discretizations of singular flows, SIAM J. Numer. Anal.
56 no. 3 (2018), 1896–1914.
10.1137/17M1159166.
BKROF22
S. Bartels and
A. Kaltenbach, Error estimates for total-variation
regularized minimization problems with singular dual solutions, Numer.
Math. 152 no. 4 (2022), 881–906.
10.1007/s00211-022-01324-w.
BK22Obstacle
S. Bartels and
A. Kaltenbach, Error analysis for a
Crouzeix-Raviart approximation of the obstacle problem, 2023.
10.48550/ARXIV.2302.01646.
BM20
S. Bartels and
M. Milicevic, Primal-dual gap estimators for a
posteriori error analysis of nonsmooth minimization problems, ESAIM
Math. Model. Numer. Anal. 54 no. 5 (2020), 1635–1660.
10.1051/m2an/2019074.
BNS15
S. Bartels, R. H. Nochetto,
and A. J. Salgado, A total variation diminishing
interpolation operator and applications, Math. Comp. 84
no. 296 (2015), 2569–2587. 10.1090/mcom/2942.
BTW21
S. Bartels, R. Tovey, and
F. Wassmer, Singular solutions, graded meshes,and
adaptivity for total-variation regularized minimization problems,
ESAIM Math. Model. Numer. Anal. 56 no. 6 (2022), 1871–1888.
10.1051/m2an/2022056.
BW21
S. Bartels and Z. Wang,
Orthogonality relations of Crouzeix-Raviart and Raviart-Thomas finite
element spaces, Numer. Math. 148 no. 1 (2021), 127–139.
10.1007/s00211-021-01199-3.
bartels15
S. Bartels, Error control and adaptivity for a
variational model problem defined on functions of bounded variation,
Math. Comp. 84 no. 293 (2015), 1217–1240.
10.1090/S0025-5718-2014-02893-7.
BC08
S. Bartels and
C. Carstensen, A convergent adaptive finite element
method for an optimal design problem, Numer. Math. 108 no. 3
(2008), 359–385. 10.1007/s00211-007-0122-x.
BBHSVN23
L. Baumgärtner,
R. Bergmann, R. Herzog,
S. Schmidt, and
J. Vidal-Núnez, Total generalized variation for
piecewise constant functions on triangular meshes with applications in
imaging, SIAM Journal on Imaging Sciences 16 no. 1 (2023),
313–339. 10.1137/22M1505281.
BC11
H. H. Bauschke and P. L.
Combettes, Convex analysis and monotone operator theory in hilbert
spaces, in CMS Books in Mathematics, 2011.
BW22
L. Baňas and A. Wilke,
A posteriori estimates for the stochastic total variation flow, SIAM
J. Numer. Anal. 60 no. 5 (2022), 2657–2680.
10.1137/21M1447982.
BB20
F. Bertrand and D. Boffi,
The Prager-Synge theorem in reconstruction based a posteriori error
estimation, in 75 years of mathematics of computation, Contemp.
Math. 754, Amer. Math. Soc., [Providence], RI, [2020] 2020, pp. 45–67. 10.1090/conm/754/15152.
Braess13
D. Braess, Finite Elemente. Theorie,
schnelle Löser und Anwendungen in der Elastizitätstheorie, 5th
revised ed. ed., Springer-Lehrb. Mastercl., Berlin: Springer Spektrum,
2013 (German). 10.1007/978-3-642-34797-9.
Brae09
D. Braess, An a posteriori error estimate and a
comparison theorem for the nonconforming P_1 element, Calcolo
46 no. 2 (2009), 149–155. 2520373.
10.1007/s10092-009-0003-z.
braides98
A. Braides, Approximation of free-discontinuity
problems, Lecture Notes in Mathematics 1694,
Springer-Verlag, Berlin, 1998. 10.1007/BFb0097344.
bregman67
L. Brégman, The relaxation method of finding the
common point of convex sets and its application to the solution of problems
in convex programming, USSR Computational Mathematics and Mathematical
Physics 7 no. 3 (1967), 200–217.
https://doi.org/10.1016/0041-5553(67)90040-7.
CL15
C. Carstensen and D. J.
Liu, Nonconforming FEMs for an optimal design problem, SIAM
J. Numer. Anal. 53 no. 2 (2015), 874–894.
10.1137/130927103.
CKNS08
J. Cascon, C. Kreuzer,
R. Nochetto, and
K. Siebert, Quasi-optimal convergence rate for an
adaptive finite element method, SIAM J. Numer. Anal. 46
no. 5 (2008), 2524–2550. 10.1137/07069047X.
CCMN08
V. Caselles, A. Chambolle,
S. Moll, and M. Novaga, A
characterization of convex calibrable sets in ℝ^N with respect to
anisotropic norms, Ann. Inst. H. Poincaré Anal. Non Linéaire
25 no. 4 (2008), 803–832.
10.1016/j.anihpc.2008.04.003.
CP20
A. Chambolle and T. Pock,
Crouzeix-Raviart approximation of the total variation on simplicial meshes,
J. Math. Imaging Vision 62 no. 6-7 (2020), 872–899.
10.1007/s10851-019-00939-3.5mm
CR73
M. Crouzeix and P.-A.
Raviart, Conforming and nonconforming finite element methods for
solving the stationary Stokes equations. I, Rev. Française
Automat. Informat. Recherche Opérationnelle Sér. Rouge 7
no. R-3 (1973), 33–75.
Dac08
B. Dacorogna, Direct methods in the calculus of
variations, second ed., Applied Mathematical Sciences 78,
Springer, New York, 2008.
DK08
L. Diening and C. Kreuzer,
Linear convergence of an adaptive finite element method for the
p-Laplacian equation, SIAM J. Numer. Anal. 46 no. 2
(2008), 614–638. 10.1137/070681508.
DR07
L. Diening and
M. Růžička, Interpolation operators in
Orlicz-Sobolev spaces, Numer. Math. 107 no. 1 (2007),
107–129. 10.1007/s00211-007-0079-9.
Doe96
W. Dörfler, A convergent adaptive algorithm for
Poisson's equation, SIAM J. Numer. Anal. 33 no. 3 (1996),
1106–1124. 10.1137/0733054.
ET99
I. Ekeland and
R. Témam, Convex analysis and variational
problems, english ed., Classics in Applied Mathematics 28,
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,
1999, Translated from the French.
10.1137/1.9781611971088.
EG21
A. Ern and J. L. Guermond,
Finite Elements I: Approximation and Interpolation, Texts in
Applied Mathematics no. 1, Springer International Publishing, 2021.
10.1007/978-3-030-56341-7.
FV04
F. Fierro and A. Veeser, A
posteriori error estimators for regularized total variation of characteristic
functions, SIAM J. Numer. Anal. 41 no. 6 (2003), 2032–2055.
10.1137/S0036142902408283.
HK04
M. Hintermüller and
K. Kunisch, Total bounded variation regularization
as a bilaterally constrained optimization problem, SIAM J. Appl.
Math. 64 no. 4 (2004), 1311–1333.
10.1137/S0036139903422784.
Hun07
J. D. Hunter, Matplotlib: A 2d graphics environment,
Computing in Science & Engineering 9 no. 3 (2007), 90–95.
10.1109/MCSE.2007.55.
LW10
A. Logg and G. N. Wells,
DOLFIN: automated finite element computing, ACM Trans. Math.
Software 37 no. 2 (2010), Art. 20, 28.
10.1145/1731022.1731030.
Mar85
L. D. Marini, An inexpensive method for the
evaluation of the solution of the lowest order Raviart-Thomas mixed
method, SIAM J. Numer. Anal. 22 no. 3 (1985), 493–496.
10.1137/0722029.
vedo
M. e. a. Musy, marcomusy/vedo: 2023.4.4, March 2023.
10.5281/zenodo.7734756.
NSV00
R. H. Nochetto,
G. Savaré, and
C. Verdi, A posteriori error estimates for variable
time-step discretizations of nonlinear evolution equations,
Communications on Pure and Applied Mathematics 53 no. 5
(2000), 525–589.
https://doi.org/10.1002/(SICI)1097-0312(200005)53:5<525::AID-CPA1>3.0.CO;2-M.
OBGXY05
S. Osher, M. Burger,
D. Goldfarb, J. Xu, and
W. Yin, An iterative regularization method for
total variation-based image restoration, Multiscale Modeling &
Simulation 4 no. 2 (2005), 460–489. 10.1137/040605412.
PraSyn47
W. Prager and J. L. Synge,
Approximations in elasticity based on the concept of function space,
Quart. Appl. Math. 5 (1947), 241–269.
10.1090/qam/25902.
RT75
P.-A. Raviart and J. M.
Thomas, A mixed finite element method for 2nd order elliptic
problems, in Mathematical aspects of finite element methods (Proc.
Conf., Consiglio Naz. delle Ricerche (C.N.R.), Rome, 1975),
1977, pp. 292–315. Lecture Notes in Math., Vol. 606.
Repin18
S. Repin and J. Valdman,
Error identities for variational problems with obstacles, ZAMM Z.
Angew. Math. Mech. 98 no. 4 (2018), 635–658.
10.1002/zamm.201700105.
Rep99
S. I. Repin, A posteriori error estimates for
approximate solutions to variational problems with strongly convex
functionals, J. Math. Sci. (New York) 97 no. 4 (1999),
4311–4328, Problems of mathematical physics and function theory.
10.1007/BF02365047.
ROF92
L. I. Rudin, S. Osher, and
E. Fatemi, Nonlinear total variation based noise
removal algorithms, Phys. D 60 no. 1-4 (1992), 259–268,
Experimental mathematics: computational issues in nonlinear science (Los
Alamos, NM, 1991). 10.1016/0167-2789(92)90242-F.
dr-nafsa
M. Růžička and
L. Diening, Non–Newtonian fluids and function
spaces, in Nonlinear Analysis, Function Spaces and Applications,
Proceedings of NAFSA 2006 Prague, 8, 2007, pp. 95–144.
Tart07-book
L. Tartar, An introduction to Sobolev spaces
and interpolation spaces, Lecture Notes of the Unione Matematica
Italiana 3, Springer, Berlin; UMI, Bologna, 2007.
Ver13
R. Verfürth, A Posteriori Error Estimation
Techniques for Finite Element Methods, Oxford University Press, 04 2013.
10.1093/acprof:oso/9780199679423.001.0001.
ZeiIII
E. Zeidler, Nonlinear functional analysis and
its applications. III, Springer-Verlag, New York, 1985, Variational
methods and optimization, Translated from the German by Leo F. Boron.
10.1007/978-1-4612-5020-3.
|
http://arxiv.org/abs/2307.05047v2 | 20230711065249 | A Blockchain-based two Factor Honeytoken Authentication System | [
"Vasilis Papaspirou",
"Leandros Maglaras",
"Ioanna Kantzavelou",
"Naghmeh Moradpoor",
"Sokratis Katsikas"
] | cs.CR | [
"cs.CR"
] |
|
http://arxiv.org/abs/2307.04785v1 | 20230710180000 | Empirically Constraining the Spectra of a Stars Heterogeneities From Its Rotation Lightcurve | [
"David Berardo",
"Julien de Wit",
"Benjamin V. Rackham"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"astro-ph.SR"
] |
David Berardo (ORCID 0000-0001-6298-412X)
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Julien de Wit (ORCID 0000-0003-2415-2191)
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Benjamin V. Rackham (ORCID 0000-0002-3627-1676)
51 Pegasi b Fellow
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Corresponding author: David Berardo, [email protected]
Transmission spectroscopy is currently the most powerful technique to study a wide range of planetary atmospheres, leveraging the filtering of a star's light by a planet's atmosphere rather than its own emission. However, both a planet and its star contribute to the information encoded in a transmission spectrum, and a particular challenge relates to disentangling their contributions. As measurements improve, the lack of fidelity of stellar spectral models presents a bottleneck for accurate disentanglement. Considering JWST and future high-precision spectroscopy missions, we investigate the ability to derive empirical constraints on the emission spectra of stellar surface heterogeneities (i.e., spots and faculae) using the same facility as used to acquire the transmission spectra intended to characterize a given atmosphere.
Using TRAPPIST-1 as a test case, we demonstrate that it is possible to constrain the photospheric spectrum to within 0.2% and the spectra of stellar heterogeneities to within 1-5%, which will be valuable benchmarks to inform the new generation of theoretical stellar models. Long observational baselines (≥90% of the stellar rotation period) are necessary to ensure the photon-limited (i.e., instrument-limited) exploration of exoplanetary atmospheres via transmission spectroscopy.
§ INTRODUCTION
Transmission spectroscopy was the first technique introduced to study the atmospheres of worlds beyond the solar system <cit.>. Today, it is still one of the most powerful techniques in this context, as it leverages the light coming from a host star rather than the light directly emitted by the planet itself, which is orders of magnitude fainter. As the field of exoplanetary science transitions in the coming decade towards the spectroscopic characterization of directly imaged exoplanets, emission spectroscopy will become the primary avenue to study planetary atmospheres. Until then, perfecting the art of transmission spectroscopy studies is a must.
Currently, the dominant bottlenecks for transmission spectroscopy are associated with imperfections in our opacity models <cit.> and stellar models <cit.>. The current limitations in opacity models have been shown to result in an accuracy wall preventing the retrieval of most atmospheric properties to better than ∼0.5 dex for all planets but large, hot, and highly-metallic ones <cit.>. Future efforts supporting the standardization of existing databases, and the improvement of treatments of broadening and far-wing behaviors, should mitigate the current bottleneck.
Regarding stellar models, <cit.> showed that not accounting for stellar contamination will yield biased inferences of atmospheric properties. However, correcting for stellar contamination is challenging, as the model limitations (i.e., lack of fidelity) can yield a biased correction of the contamination via an inadequate fit of the out-of-transit spectrum. The lack of fidelity can also result in challenges in inferring the number of components present on the stellar disk <cit.>. Fortunately, when stellar models with a sufficient fidelity are accessible, the degeneracy between the number of components and their covering fractions can be lifted, leading to an optimal correction of the stellar contamination <cit.>. Sufficient fidelity is defined here as a model precision equal to or better than the expected uncertainty associated with the out-of-transit spectra obtained for transit observations in the targeted system. This definition therefore supports returning to a regime of photon-limited studies, in which instruments are used at their maximum potential. While a new generation of stellar models is being computed following the guidance of the report from NASA's Exoplanet Exploration Program Study Analysis Group 21 <cit.>, we investigate a possible avenue to empirically derive the emission spectra of a star's heterogeneities. Doing so would provide the community with a data-driven solution to the stellar-model challenge, i.e., benchmarks for ongoing theoretical simulations.
In this paper, we present a framework leveraging a multi-wavelength stellar spectroscopic rotation curve to constrain empirically the emission spectra of its different heterogeneities. We focus our injection–retrieval test on M-dwarf stars with properties similar to those of TRAPPIST-1 (T_eff = 2566 K), for which stellar contamination is expected to be the most pronounced <cit.> and the most challenging to correct <cit.>. We present in <ref> the forward model developed to generate the synthetic, multi-wavelength observations of an heterogeneous stellar surface. In <ref>, we present the retrieval framework used to assess the extent to which the properties of individual heterogeneities (size, positions, and emission spectra) can be constrained based on a synthetic rotation light-curve. In <ref>, we present the injection–retrieval tests performed and their results, including testing the effect of varying the duration and sampling of an observation relative to the stellar rotation period. In <ref>, we describe the results of these preliminary tests, as well as highlight future steps to improve and expand upon this initial framework.
§ FORWARD MODEL FOR GENERATING SYNTHETIC DATA
In this section we present the forward model used to generate synthetic time- and wavelength-dependent observations of an heterogeneous stellar surface. These synthetic observations are generated using a grid-based stellar surface model, which consists of a star (described by its rotation period and rotation axis orientation) as well as a list of heterogeneities, which are each described by a latitude, longitude, radius, and temperature.
§.§ Spectral Model
For this analysis, we use the PHOENIX stellar spectral model grid[<https://phoenix.astro.physik.uni-goettingen.de/>] to simulate the emission of an individual surface feature <cit.>. These grids provide adequate coverage to describe the photospheric background of an M dwarf, as well as heterogeneities which vary by several hundred degrees in either direction relative to the photosphere. For the stellar photosphere we use a spectral model with a temperature of 2500 K, a log g of 5.0, and an [Fe/H] metallicity of 0 (similar to TRAPPIST-1, which has a surface temperature of 2566 ±26 K, a log g of 5.2396 ± 0.006 <cit.> and an [Fe/H] metallicity of 0.05 ± 0.08 <cit.>). For heterogeneities, we alter only the temperature of the model spectrum used, since the surface gravity and metallicity are typically expected to remain constant across a stellar surface <cit.>. In this way, we make the common assumption that the emission from heterogeneities resembles that of a stellar photosphere with a different effective temperature. For our analysis, we used spectral models corresponding to 2300 K and 2700 K (varying ± 200 K relative to the photosphere).
For this analysis we use the PHOENIX grids of specific intensity spectra, which provide spectral information as a function of viewing angle μ, as opposed to disk-averaged intensities. When sampling from these specific intensity spectra, we take the value corresponding to μ = 0 (i.e., the center of the star, normal to the observer's line of sight). We then calculate a quadratic limb-darkening profile for the stellar surface, and scale this intensity across the stellar surface, allowing us to have control over the limb darkening of the signal.
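The quadratic limb-darkening profile mentioned above reduces to a one-line helper. The following sketch is our own illustration (the coefficient names a and b are placeholders, not the values adopted in this work) of the standard quadratic law used to scale the intensities across the disk.

```python
import numpy as np

def quadratic_limb_darkening(mu, a, b):
    """Quadratic limb-darkening law I(mu)/I_0 = 1 - a(1-mu) - b(1-mu)^2,
    where mu parameterizes the viewing angle across the stellar disk."""
    mu = np.clip(np.asarray(mu, dtype=float), 0.0, 1.0)
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2
```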
We emphasize that although we use simulated models to generate the synthetic data, this does not invalidate the premise of this study to empirically retrieve stellar spectra. This is because when fitting for these spectra later on, we use no information about the input spectra whatsoever, and thus the retrieval is not biased based on prior knowledge.
§.§ Instrumental Model
We consider observations made with the NIRISS Single Object Slitless Spectroscopy (SOSS) instrument <cit.> on JWST <cit.>, which has a spectral resolution of R≈700 at 0.6–2.8 μm[https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-observing-strategies/niriss-soss-recommended-strategies], providing an adequate compromise between resolving power and spectral coverage for such work considering the spectral energy distribution (SED) of stars, including M dwarfs <cit.>. The spectral resolution of the PHOENIX spectra is much higher than can be observed with JWST, and so they must first be down-sampled to a resolution of R = 700 using a Gaussian convolution filter to match the expected signal from NIRISS. After adjusting the resolution, we also bin the spectra down to a wavelength spacing of 8.8 μm. These are appropriate transformations in this case given that the forward model is linear, and thus high resolution is not needed (see <cit.> for further discussion on binning and down-sampling spectra).
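The down-sampling and binning steps can be sketched as follows. This is a minimal implementation of our own, assuming an approximately log-uniform wavelength grid so that a single Gaussian kernel width applies across the array; the exact kernel and binning used to generate the synthetic data may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def downsample_to_R(wave, flux, R_out=700):
    """Smooth a high-resolution spectrum to a resolving power of ~R_out."""
    R_in = np.median(wave[:-1] / np.diff(wave))   # per-pixel lambda/d_lambda
    fwhm_pix = R_in / R_out                       # kernel FWHM in pixels
    sigma_pix = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter1d(flux, sigma_pix)

def bin_spectrum(wave, flux, dlam):
    """Average the smoothed spectrum onto bins of width dlam
    (assumes every bin contains at least one sample)."""
    edges = np.arange(wave.min(), wave.max() + dlam, dlam)
    idx = np.digitize(wave, edges) - 1
    binned = np.array([flux[idx == i].mean() for i in range(len(edges) - 1)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, binned
```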
§.§ Spatial Model
The stellar surface is treated as a grid in the longitudinal and latitudinal directions. Once the stellar spectra are calculated, we must then determine where on the surface each heterogeneity lies. This is done using a flood fill technique, where we begin at the cell of the stellar surface corresponding to the heterogeneity center, and spread out from this point until we reach a cell which is too far from the central cell to be a part of a given heterogeneity. As this is done, each cell is marked as being a part of the heterogeneity and assigned the flux corresponding to its temperature as well as the relevant wavelength. While the model has been optimized for a circular feature, in principle any shape can be `painted' on the stellar surface grid, accounting for projection effects. This model is based off of a similar one used in <cit.>, which was used to model the interactions of an heterogeneous star with a debris disk.
In addition to this flux map, we also calculate maps which correspond to the projected area of a given cell, taking into account the shape of the cell as well as its normal vector relative to the observer. We also calculate a limb darkening map. These three maps can then be multiplied together to produce a final observation map, which can be rapidly summed to measure the observed flux at a given time. In order to calculate the flux at a different time, the flux map is simply `rolled' along the longitudinal axis, since the projected area and limb darkening effects are constant in time.
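A minimal NumPy sketch of this map-based evaluation is given below (the array shapes and names are our own assumptions): the geometric factors are computed once, and the flux map is simply rolled in longitude to advance the rotation phase over one full rotation.

```python
import numpy as np

def rotational_lightcurve(flux_map, area_map, ld_map, n_phases):
    """flux_map: (n_lat, n_lon, n_wav) intensity assigned to each surface cell
       area_map: (n_lat, n_lon) projected cell area (zero on the far side)
       ld_map:   (n_lat, n_lon) limb-darkening weight
    Returns an (n_phases, n_wav) spectroscopic rotation curve."""
    n_lon = flux_map.shape[1]
    weights = (area_map * ld_map)[..., None]        # fixed geometric factors
    curve = []
    for k in range(n_phases):
        shift = int(round(k * n_lon / n_phases))    # one rotation in n_phases
        rolled = np.roll(flux_map, shift, axis=1)   # rotate the surface
        curve.append((rolled * weights).sum(axis=(0, 1)))
    return np.array(curve)
```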
§ RETRIEVAL FRAMEWORK
The goal of this initial study is to demonstrate the capability to characterize arbitrary heterogeneities of a stellar surface and their contribution to the overall stellar spectrum without relying on physical models, which currently cannot provide a sufficient level of accuracy. In this work we focus in particular on heterogeneities which can be described by their size, location, and temperature. The effects of the position and size of a heterogeneity are highly non-linear, due to both their projection onto the observing plane as well as limb-darkening effects. Thus when retrieving these parameters we will employ standard Markov chain Monte Carlo (MCMC) methods in order to sample the full range of parameter space. For a given distribution of heterogeneities, however, the total spectral signal can be described as a linear combination of the stellar photosphere and the heterogeneity spectra (scaled by their relative surface area), and thus can be solved for as a linear matrix problem, which we outline in this section. Once we have re-formulated the spectral retrieval as a linear algebra problem, we utilize singular value decomposition (SVD)[https://en.wikipedia.org/wiki/Singular_value_decomposition] in order to estimate the spectral signal of each component (including the photosphere). Thus the problem can be separated into a non-linear MCMC retrieval (the geometric properties of the heterogeneity) and linear retrieval (the spectral signal of the photosphere and individual heterogeneities).
§.§ Linear component of retrieval model
Given a set of synthetic observations, we now describe the framework used to constrain the properties of individual components (size, positions, and spectra). The total flux observed, Flux(λ,t), at a given wavelength λ and time t is a linear combination of the geometric signals of all the components modulated by the spectral signal of each component and can thus be written as:
Flux(λ,t) = Λ_phot(λ) + ∑_i[Λ_i(λ)-Λ_phot(λ)] × S_i(t)
where Λ_phot(λ) is the (constant in time) spectral signal of the photosphere, Λ_i(λ) is the spectrum of the i^th heterogeneity, and S_i(t) is the time-varying geometric projection of a heterogeneity, which is a function of its size and position on the stellar surface, as well as any limb-darkening effects. The sum runs over the number of individual heterogeneity features. A graphical depiction of this decomposition is show in <ref>
Within an MCMC framework, the linear component of the model can be estimated using SVD, allowing us to leverage rapid and robust libraries available in Python to retrieve the spectral signal of each feature in just a few milliseconds on a modern laptop computer. The benefit of this separation is that the geometric signal of any surface features can often be estimated from a white light curve, as well as with more sophisticated techniques to analyze the frequency components of the light curves. Thus strong priors can be placed on the position and sizes of heterogeneities, which reduces the overall time needed to run such a retrieval.
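Concretely, the flux decomposition above can be rearranged as Flux(λ,t) = Λ_phot(λ)[1 - ∑_i S_i(t)] + ∑_i Λ_i(λ) S_i(t), which is linear in the unknown spectra once the geometric signals S_i(t) are fixed. A hedged sketch of this linear solve using NumPy's SVD-based least-squares routine (function and variable names are ours):

```python
import numpy as np

def fit_component_spectra(flux, S):
    """Solve for the photosphere and heterogeneity spectra by least squares.

    flux : (n_time, n_wave) observed spectra
    S    : (n_time, n_het) geometric signals S_i(t) of each heterogeneity
    Returns an (n_het + 1, n_wave) array: row 0 is the photospheric spectrum,
    rows 1..n_het are the heterogeneity spectra.
    """
    n_time, n_het = S.shape
    # Flux(l,t) = L_phot(l) * (1 - sum_i S_i(t)) + sum_i L_i(l) * S_i(t)
    design = np.empty((n_time, n_het + 1))
    design[:, 0] = 1.0 - S.sum(axis=1)
    design[:, 1:] = S
    # lstsq uses an SVD internally; all wavelength bins are solved at once.
    spectra, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return spectra
```

Inside the MCMC, a routine of this kind would be called once per proposed set of geometric parameters, and the resulting spectra used to evaluate the likelihood.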
§.§ A Note on Limb Darkening
The geometric signal of the heterogeneity in the previous equations (i.e., the quantity S_i(t)) requires a choice of limb darkening coefficients for the stellar surface, since it is calculated as the combination of the size of a cell and its projected area, multiplied by a limb darkening factor. However, in general, limb darkening is an effect which depends on the temperature of the stellar surface, which is the quantity we are attempting to fit. Thus we find ourselves in a circular dependency: the stellar spectrum is required in order to know the appropriate values of the limb darkening coefficients, which are in turn required in order to fit for the stellar spectrum. As a result, the current fitting routine assumes that limb darkening is independent of temperature, at least within the range considered in this work (± 200 K). In general, limb darkening is expected to vary with temperature <cit.>. However, since the models are generated under the same assumption, we may still assess the ability of our framework to recover injected signals. In <ref> we briefly highlight how this may be addressed in the future and the additional prospects for characterization it will allow.
§ INJECTION–RETRIEVAL TESTS
Given the forward model used to simulate observations described in <ref>, and the retrieval mechanism described in <ref>, we now describe a series of injection–retrieval tests we use to test the ability of the model to recover stellar surface heterogeneities.
§.§ Fitting for Spectral Components
In order to test the effectiveness of the model in retrieving spectral features of a star, we first perform a series of injection–retrieval tests in an idealized scenario in which we assume to know the number of heterogeneities, as well as their positions and sizes. Thus in this first stage we are attempting to retrieve only the spectral features of heterogeneities and photosphere (the linear part of the retrieval), which represents a best-case scenario and effectively acts as an upper limit on the strength of the current framework. In this idealized scenario, we have removed the complex, non-linear component of fitting for the feature positions, and the problem is reduced to a linear one of disentangling the spectral contribution of each component. By employing SVD, this can be solved in just milliseconds (including the full range of time and wavelength observations), allowing rapid testing of a variety of scenarios. This can similarly represent a scenario where strong priors have been obtained for the spectral components, based on an analysis of a white lightcurve or a pre-fitting routine which places constraints on the possible heterogeneity configurations.
We tested the model on a suite of stellar surfaces, including ones with heterogeneities hotter than the photosphere, colder than the photosphere, or both, as well as anywhere from one to four individual heterogeneities. Additionally, we tested a series of single-heterogeneity models with all but one parameter held constant, varying either the size of a heterogeneity or its latitudinal position. The full sample of surfaces considered is described in <ref>, along with the deviation from the true spectra used to simulate the observations. The results of these tests reveal that the model is able to recover the spectra of heterogeneities to sufficient precision (i.e., better than the out-of-transit spectrum; see <ref>). For example, the precision achieved on the photospheric spectrum is ≤ 0.1% vs ∼0.5% for the out-of-transit spectrum associated with transit observations in the TRAPPIST-1 system, typically based on a ∼ 2 hr integration. The spectra of heterogeneities are constrained at the level of 1 to 5%, depending notably on their sizes and latitudinal positions.
The spectra of heterogeneities are less well constrained because of their smaller covering fractions, which result in fewer photons being received from them. Their small covering fractions also mean that, while the uncertainties associated with their spectra are larger, they contribute to the total uncertainty budget of the stellar model at a level similar to that of the photosphere. For this reason, we will assess sufficient model fidelity based on the ratio of the uncertainty associated with the retrieved photospheric spectrum and the one associated with the out-of-transit spectrum.
§.§ Retrieving Full Heterogeneities
In order to fully test the ability of the model to characterise a heterogeneous stellar surface, we also run a set of retrievals where we attempt to estimate not only the spectral signature of each component, but also their sizes and positions on the stellar surface. For a fit with N heterogeneities, we thus have 3N + 2 parameters: a size, latitude and longitude for each heterogeneity, as well as two limb darkening parameters for a quadratic limb darkening law. As described in <ref>, we run an MCMC retrieval within which we linearly retrieve the spectral signals of each component using SVD.
The results of this fitting process highlight the inherent difficulty in constraining the position and size of a heterogeneity, which outlines clear areas for future improvement. The longitude of a spot is typically reliably constrained to within a few degrees of the true value, due to the high time-sampling resolution. The latitude, however, is often much less constrained, with the model only able to differentiate between equatorial and polar spots. Additionally, the size of a spot is typically only constrained to within 50% of its true value, although the model is capable of excluding extremely large or small/non-existent spots. In <ref> we outline how additional prior information may be used to help further constrain the size of a feature, based on global physical constraints on the overall scaling of its spectrum (leveraging the trade-off between feature size and spectral amplitude).
A subset of the models from the previous section were tested, where we fixed the number of heterogeneities to the true value. As an aside, we ran fits on the white lightcurve for each model, where we sequentially added in additional features. In most cases, the true number of components was found to best describe the data, while adding additional components did not improve the fit and resulted in a worse BIC (Bayesian Information Criterion) value.
In this first run, heterogeneities were allowed to occur anywhere on the stellar surface, and in some cases this led to degeneracies where two heterogeneities would overlap and contribute to the overall spectrum jointly. Additionally, we found that without additional information, the latitudinal position of a heterogeneity was difficult to constrain. These issues highlight clear areas for improvement for future work, which we discuss further in <ref>.
Despite issues with constraining the geometric properties of spot features, in most cases the model was still able to recover the photospheric signal to within 1%. We show the results of an example fit in <ref>, comparing the individual retrieved component spectra to the spectra used to generate the synthetic observations.
§.§ Varying Observation Baseline
In the previous sections, retrieval was performed using simulated observations covering an entire rotation period of the host star. However, in most cases a strong argument must be made to justify the use of high-demand facilities to continuously stare at a single target. In this section we investigate the effect of observing only a portion of the full rotation lightcurve on the ability of the framework to accurately measure the photospheric spectrum of a star. Given the time-variability of a heterogeneity signal, there exists a strong correlation between the duration of an observation, the phase offset relative to a heterogeneity's longitude, and the retrieved uncertainty on the stellar photosphere.
To this end, we first simulate a heterogeneous stellar surface as in the previous section, with anywhere from 1–4 heterogeneities which may be colder or hotter than the background photosphere. From this model, we then generate a set of synthetic observations again as described in the previous sections.
For each observation, we chose two parameters: (1) an offset for the longitudinal rotation of the star relative to the observer, and (2) a viewing window, defined as a fraction from 0–1 of the stellar rotation period. Selecting a value of one recovers the analysis done in the previous section, for which the entire stellar rotation was supplied to the fitting routine. These two values define a time series, for which we generate the basis-vector signals attributed to each heterogeneity on the stellar surface. We then use SVD to rapidly fit the linear component of the model. As in the previous section, we can then compare the retrieved spectrum to the injected spectrum for each component, the results of which are shown in <ref>.
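A small sketch of how such a viewing window can be imposed before the linear solve; the phase bookkeeping and names are our own simplification of the procedure described above.

```python
import numpy as np

def windowed_fit(flux, S, phases, offset, window):
    """Fit the component spectra using only a fraction of the stellar rotation.

    flux   : (n_time, n_wave) observed spectra
    S      : (n_time, n_het) geometric signals S_i(t)
    phases : (n_time,) rotational phase of each exposure in [0, 1)
    offset : starting phase of the observing window
    window : observed fraction of the rotation period (0-1]
    """
    keep = ((phases - offset) % 1.0) < window          # exposures inside the window
    design = np.column_stack([1.0 - S[keep].sum(axis=1), S[keep]])
    spectra, *_ = np.linalg.lstsq(design, flux[keep], rcond=None)
    return spectra                                     # (n_het + 1, n_wave)
```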
The various curves represent different observation durations. For a given observation duration, the residual signal can vary strongly as a function of stellar rotation phase, an effect that is more pronounced for the shorter durations. For example, the residual for an observation covering 0.1 of the stellar rotation can vary from approximately 1% to over 100%. We attribute this variation to the unequal ability of each phase to contribute a set of component spectra descriptive of the entire photosphere. In other words, when few or no heterogeneities are in view, one cannot extract the information necessary to model the photosphere at a phase showing many heterogeneities. Thus, the shorter-duration observations show both larger residuals overall and larger variability in residuals with rotation phase. For this reason, we find that only a phase coverage of ≥90% can reliably constrain the stellar spectra to within the OOT uncertainty (0.5%). Indeed, while the targeted precision of 0.5% may be achieved for some configurations with only a 40% phase coverage, it is not achieved for all (average precision ∼1%).
§ DISCUSSION & FUTURE STEPS
This work represents the first steps towards building a library of empirical emission spectra for stellar surface heterogeneities. While similar in scope to the work of <cit.> that compiled a library of empirical spectra for various stellar types, an important distinction resides in that the spectra being measured are not for disk-integrated features, but rather for `pure' basis components which may be combined with rotational geometry in order to produce accurate spectra for stars with arbitrarily complex surface features. Such a library will not only enable the robust correction of the TLS effect based on out-of-transit measurements, it will also provide important benchmarks for the next generation of theoretical stellar models <cit.>, and further inform key relationships between the properties of stars and those of their heterogeneities, such as between heterogeneity temperatures and sizes, photospheric temperatures, and atomic line-depth ratios.
Indeed, we are able to constrain photospheric spectra at the 0.1% level, and heterogeneity spectra typically at the 1–5% level, while spectra with precisions of ∼ 1% (S/N∼ 100) are commonly used to constrain the fundamental physical parameters of exoplanet host stars <cit.>.
In terms of absolute flux calibrations, for example, the goal for the X-SHOOTER instrument is ≤ 10% <cit.>, while the eventual goal of the JWST calibration program is 1% accuracy for each observing mode <cit.>.
Thus, constraints on component spectra from this technique are on par with the precisions currently available for disk-integrated spectra, and will ultimately be limited by the overall precision and accuracy of the JWST observations themselves, providing valuable data-driven benchmarks to inform the next generation of models.
Our framework enables retrieving both the geometric features of heterogeneities as well as their individual spectral contributions, without relying on any prior information from spectra generated by physical models.
In the rest of this discussion, we highlight a series of possible improvements to the framework introduced here.
§.§ Series of Snapshots for Slow Rotators
Covering 90% of a stellar rotation of TRAPPIST-1 would correspond to a ∼72-hr stare at the system, which is both feasible and reasonable for such a high-priority target. Doing so for slow rotators that may have periods up to 30 times that of TRAPPIST-1's, however, would be impractical. For such hosts, we show that a series of short stares (“snapshots”) could be used instead (see Figure <ref>). In order to reach the targeted precision, we find that the snapshots need a minimum duration equal to the intended OOT integration and must occur often enough to sample the time-varying contribution of the heterogeneities.
As seen in the bottom panels of <ref>, the duration and number of snapshots required to achieve a given SNR are related, offering multiple observational options. For a 30-day rotation period, a sufficient precision is achieved for, e.g., 40 2-hr, 20 4-hr, 10 8-hr, or 5 16-hr snapshots. These options correspond to a 10× lower observation requirement than when considering a long continuous stare. Of the four options highlighted above, we expect that the latter will be favored when accounting for practical considerations (e.g., overheads and slew time).
§.§ Wavelength-dependent Limb Darkening
The models described in this work used limb darkening laws which did not change as a function of temperature or wavelength. While this represents an important first step in estimating the capability of this framework, future developments should account for such dependencies, which could notably be used to break the currently observed degeneracies between the latitude and size of a heterogeneity and thus better constrain the latitudinal distribution of heterogeneities.
§.§ Including Prior Knowledge From Model Spectra
The present proof-of-concept is performed without any prior knowledge regarding stellar physics. Future works could explore how relevant priors could be added to the framework without introducing biases from complete stellar models. An example of such priors would be a parametrization of the relative flux expected between wavelength bins associated with the features of the same molecule. While absolute flux values may be biased, relationships between wavelengths may be robust enough to provide additional constraints. This information could be extracted using Gaussian processes in order to measure correlations between different wavelengths <cit.>. Constraining the spectra in this way would enable tighter constraints on the size and latitude of a given feature, which are currently degenerate with the overall amplitude of its spectrum. Additionally, activity indicators provided by high-precision spectroscopy could be included to help solve the inverse problem of reconstructing active regions on the stellar surface <cit.>.
§.§ Correcting for Stellar Contamination at Different Epochs
The ultimate goal of this work is to generate a library of empirically retrieved spectra for the heterogeneities of a given star in order to support the robust correction of in-transit stellar contamination at any past and future epochs. The feasibility of this approach is supported by the following. First, heterogeneities of a given star have been shown to have consistent properties. For example, molecular-band modeling of echelle spectra of DM UMa suggests a spot temperature of 3570 ± 100 K during an observing campaign in 1995, with filling factors ranging from 0.25 ± 0.08 to 0.30 ± 0.10 <cit.>. Returning to the same star during six nights in 1998, a later analysis found a spot temperature of 3450 ± 120 K and filling factors ranging from 0.28 ± 0.06 to 0.42 ± 0.05 <cit.>. Second, the properties of heterogeneities appear to be correlated, making them easier to pin down. Starspot temperatures show a clear dependence on photospheric temperature, based on Doppler imaging, modeling of molecular bands, and atomic line-depth ratios <cit.>. Therefore, while heterogeneity filling factors surely evolve over a stellar activity cycle, their temperatures, and thus spectra, are a static characteristic of a given star, supporting our proposition of their relevance across epochs.
In other words, while a series of improvements to this framework can (and should) be made in the future, the present theoretical proof-of-concept suffices to move towards a practical application with JWST data as a next step. Such data would also usefully inform the aforementioned improvements (e.g., empirical wavelength and temperature dependencies of the limb darkening). We thus look forward to an on-sky validation and further development of this framework in the near future to enable the robust atmospheric characterization of planets whose spectra would otherwise remain contaminated.
§ ACKNOWLEDGEMENTS
We thank Elsa Ducrot and the Pandora Team for helpful discussions regarding this project.
B.V.R. thanks the Heising-Simons Foundation for support.
This material is based upon work supported by the National Aeronautics and Space Administration under Agreement No. 80NSSC21K0593 for the program “Alien Earths”.
The results reported herein benefited from collaborations and/or information exchange within NASA’s Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA’s Science Mission Directorate.
§ TEST MODEL PARAMETERS
|
http://arxiv.org/abs/2307.04946v1 | 20230711002138 | DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based Minimization | [
"Kyle Luther",
"H. Sebastian Seung"
] | cs.CV | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based Minimization
Vincenzo Vitelli
October 2023
====================================================================================
Inverse problems generally require a regularizer or prior for a good solution. A recent trend is to train a convolutional net to denoise images, and use this net as a prior when solving the inverse problem. Several proposals depend on a singular value decomposition of the forward operator, and several others backpropagate through the denoising net at runtime. Here we propose a simpler approach that combines the traditional gradient-based minimization of reconstruction error with denoising. Noise is also added at each step, so the iterative dynamics resembles a Langevin or diffusion process. Both the level of added noise and the size of the denoising step decay exponentially with time. We apply our method to the problem of tomographic reconstruction from electron micrographs acquired at multiple tilt angles. With empirical studies using simulated tilt views, we find parameter settings for our method that produce good results. We show that high accuracy can be achieved with as few as 50 denoising steps. We also compare with DDRM and DPS, more complex diffusion methods of the kinds mentioned above. These methods are less accurate (as measured by MSE and SSIM) for our tomography problem, even after the generation hyperparameters are optimized. Finally we extend our method to reconstruction of arbitrary-sized images and show results on 128 × 1568 pixel images.
§ INTRODUCTION
A linear inverse problem is defined by a known measurement operator A. Given observed data y, the goal is to recover x by “explaining” the data, Ax ≈ y. Traditionally one minimizes the reconstruction error ‖ Ax - y‖^2, often by some kind of gradient descent. When the condition number of A is large, the inverse problem is said to be “ill-posed.” The true minimum of the reconstruction error is a bad solution because it tends to amplify noise. Better results can often be obtained by early stopping of the gradient descent. Another possibility is to formulate a prior probability distribution for x, and find the best x by maximizing the posterior probability, treating the reconstruction error as a log likelihood.
Recently, it has been shown that neural nets trained to denoise images can be incredibly successful at generating images when used in a diffusion process <cit.>. Another exciting application of these denoising nets would be as priors for solving inverse problems. Although we have no direct access to the x that gave rise to the data y, we assume that we have access to images that are statistically like x, i.e., samples from the prior probability distribution P(x) are available. If a net is trained to denoise these samples, it effectively learns something about the prior distribution, and should be helpful for reconstructing the unknown x that gave rise to the data y.
We propose a simple method of doing this. The method augments classical gradient-based minimization of the reconstruction error with denoising by the pretrained net. The only perhaps nonintuitive aspect of our method is that noise is also added back in before subsequent denoising.
As far as we know, our simple method is novel. Unlike <cit.>, our method does not require a singular value decomposition (SVD) to run. Unlike <cit.> our method does not require backpropagating through the denoiser. And finally, unlike <cit.> and the previous methods our method does not couple the number of gradient updates to the number of denoiser updates. We'll see that we require an order of magnitude fewer denoiser updates than gradient updates, so our method is fast. We also show that accuracy is also superior, when measured by standard metrics such as MSE or SSIM.
We compare our method to denoising diffusion restoration models (DDRM) and a variant of diffusion posterior sampling (DPS) on the inverse problem of tomographic reconstruction from tilt series electron micrographs, a popular technique in biological imaging <cit.>. 2D images of a specimen with a slab geometry are acquired at multiple tilt angles, and a 3D image is inferred by solving the linear inverse problem (Fig. 1). The problem is highly ill-posed, because the angles span a limited range, typically (-60^∘, +60^∘). For simplicity, we will study the problem of reconstructing a 2D image from 1D projections. The generalization to reconstructing a 3D image from 2D projections is conceptually straightforward and will be discussed elsewhere, because implementing a 3D denoising net is somewhat more involved.
The generalization of our method to other kinds of inverse problems is very natural. However we have not explored such applications here, because each inverse problem will require some tuning of annealing schedules. This seems to be the case for diffusion methods more generally. We had to extensively tune other methods to achieve performance even competitive with a traditional (non-neural) gradient descent method.
On another note, electron micrographs can be extremely large, and this is the case for biomedical images more generally. Another contribution of this paper is a novel patch-based diffusion process that enables denoisers trained on small patches to handle arbitrarily large images, either for generating images or for solving inverse problems. In related work, GANs were used to synthesize images resembling electron micrographs of brain tissue <cit.>, but the GANs were not applied to inverse problems.
§ DIFFUSIVE DENOISING OF GRADIENT-BASED MINIMIZATION (DDGM)
We assume that a network ϵ_θ has already been trained to denoise images x. We discuss the training objective later in Eq. <ref>. Our diffusion method for inverse problems is given in Algorithm <ref>. We take K gradient descent steps on the reconstruction error ‖ Ax-y‖^2. Then we add noise to x. Then we denoise x using the net. This process is repeated N times with a noise level that decays exponentially.
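The sketch below expresses this loop in code. The function signature, the zero initialization, and the use of a full one-step denoise x - σ ε_θ(x) are our assumptions for illustration; A and At stand for the forward operator and its adjoint, and the factor of 2 in the gradient of ‖Ax - y‖^2 is absorbed into λ.

```python
import torch

def ddgm_reconstruct(y, A, At, eps_theta, sigmas, K=25, lam=9e-5):
    """Sketch of the DDGM loop described above.

    A, At     : callables applying the forward operator and its adjoint
    eps_theta : pretrained noise-prediction network
    sigmas    : exponentially decaying noise levels sigma_1 > ... > sigma_N
    """
    x = torch.zeros_like(At(y))                  # start from an empty image
    for sigma in sigmas:
        for _ in range(K):                       # K gradient steps on ||Ax - y||^2
            x = x - lam * At(A(x) - y)           # factor of 2 absorbed into lam
        x = x + sigma * torch.randn_like(x)      # re-inject noise at the current level
        with torch.no_grad():                    # one denoising step with the net
            x = x - sigma * eps_theta(x)
    return x
```

With N = 50 noise levels and K = 25 inner steps, this corresponds to 50 denoiser evaluations and 1250 gradient steps, matching the fast setting discussed later.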
We note that the K gradient descent steps on the reconstruction error are essentially a classical algorithm, and by themselves already yield some sort of solution. It might be tempting to simply denoise this with our net, but we had little success with this. We speculate this is because the output of algebraic reconstruction is not simply the real image corrupted by Gaussian noise, which is the only kind of corruption that the net has been trained to remove.
The trick is to add Gaussian noise to x before applying the denoising net. If we add enough noise, our denoiser appears to improve x in some ways, at the expense of making it blurry. If this process is repeated with a decaying noise level, we will see that x becomes both sharp and accurately produces features of the true image.
§ EXPERIMENTS
Dataset We downloaded two volumes of size 1k × 10k × 1k from the center of a publicly available 3D image dataset acquired by FIB-SEM from a fly brain <cit.>. We chose the location of these sub-volumes to avoid stitching artifacts that are present in the full dataset. The voxel sizes at MIP-1 resolution are 16 × 16 × 16 nanometers. We used one volume for training, and the other for validation and testing. The dataset was normalized to have zero mean and unit variance computed over all the training set pixels.
Training the denoising network We train a U-Net to denoise 128× 128 images corrupted by adding randomly rescaled Gaussian noise σϵ to a clean image x. The elements of the vector ϵ are drawn from a Gaussian distribution with zero mean and unit variance. The scalar σ, or noise level, is chosen from a LogUniform distribution, i.e., logσ is uniformly distributed in the interval [log(0.03), log(30.0)]. The network output is denoted by ϵ̂_θ(x+σϵ), where θ are the network parameters and x+σϵ is the corrupted image. The network is trained to predict the unscaled noise ϵ, i.e., we minimize the mean squared error
‖ϵ - ϵ̂_θ(x+σϵ) ‖^2
using the Adam optimizer with default PyTorch parameters. We train for 380,000 gradient updates which took 20 hours using 8 NVIDIA 3090 GPUs with a batch size of 64 (8 images per GPU).
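A minimal sketch of one such training step (the helper name and the per-image sampling of σ are our assumptions; the noise-level bounds match the text):

```python
import math
import torch
import torch.nn.functional as F

def training_step(unet, x_clean, sigma_min=0.03, sigma_max=30.0):
    """Corrupt a clean batch with Gaussian noise at a log-uniformly sampled
    level and regress the unscaled noise."""
    b = x_clean.shape[0]
    log_sigma = torch.empty(b, 1, 1, 1, device=x_clean.device).uniform_(
        math.log(sigma_min), math.log(sigma_max))
    sigma = log_sigma.exp()                      # one noise level per image
    eps = torch.randn_like(x_clean)
    eps_hat = unet(x_clean + sigma * eps)        # net is not conditioned on sigma
    return F.mse_loss(eps_hat, eps)
```

The returned loss is then backpropagated and applied with Adam, as described above.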
Our U-Net style architecture has 128 feature maps at all 5 levels of resolution, Group Normalization <cit.>, residual connections within blocks, and 6 convolutional layers in each block. The net has 8 million parameters in total.
Following the work of <cit.>, the net is not conditioned on the noise level, unlike many other models in the diffusion model literature <cit.>. Rather, a single unconditional net is trained to denoise at any noise level.
Above the target task was characterized as noise prediction. However, it is more intuitive to flip the sign and think of the target task as denoising. If we regard the output of the net as -ϵ̂_θ, the net is trained to predict a direction in image space that is denoising. Traditionally, a denoising autoencoder is trained to predict a clean image in one step. Our net might be called a residual denoising autoencoder, since it predicts the direction of the difference between the clean and noisy image. This is suitable for the iterative diffusion method that will be introduced below. Note that the net is not trained to predict the magnitude of the denoising step, since the target is the unscaled noise ϵ. Later on, our diffusion procedure will rescale the denoising direction -ϵ̂_θ appropriately.
Our networks are trained using PyTorch <cit.> and PyTorch Lightning <cit.>.
§.§ Unconditional generation
Our ultimate goal is to solve an inverse problem, i.e., generate an image that explains the data. However, unconditional generation of images turns out to be invaluable for evaluating the quality of the prior learned by the denoising network, and for adjusting the parameters of the diffusion schedule. We find a simple exponential decay works well enough with 50 diffusion steps. Specifically we initialize σ_1= 30.0 and x_1 = σ_1 ϵ_1 where ϵ_1 ∼𝒩(0,1). We iterate the following for 50 steps to generate images unconditionally:
σ_n = σ_1 ((1-α)^2 + αβ)^(n-1)/2
x_n+1← x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n
We set α=0.183 and β=0.5 as constants. This schedule is motivated by the simple exponential-decay schedule proposed in <cit.> (discussed before their more sophisticated schedule in their Algorithm 1). Our results are shown in <ref>. This shows that this denoiser is indeed quite powerful and should be very helpful as a prior for solving inverse problems.
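A sketch of this sampler, transcribing the schedule and update rule above (the function signature and defaults are ours):

```python
import torch

@torch.no_grad()
def sample_unconditional(eps_theta, shape, sigma1=30.0, alpha=0.183, beta=0.5,
                         n_steps=50, device="cpu"):
    """Unconditional sampling with the exponential-decay schedule above."""
    x = sigma1 * torch.randn(shape, device=device)        # x_1 = sigma_1 * eps_1
    sigma = sigma1
    decay = ((1.0 - alpha) ** 2 + alpha * beta) ** 0.5    # sigma_{n+1} / sigma_n
    for _ in range(n_steps):
        x = (x - alpha * sigma * eps_theta(x)
             + (alpha * beta) ** 0.5 * sigma * torch.randn_like(x))
        sigma *= decay
    return x
```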
§.§ Simulated tomographic tilt series
We extract random 2D image patches of size 128 × 128 from these volumes to train and evaluate our network. We use the Astra Toolbox to simulate 128 uniformly spaced tilt views over the range (-60^∘, +60^∘). Each tilt view is a 1D projection of the original image (128 pixels wide). These 128 views are concatenated to form a 128 × 128 dimensional data vector y called the sinogram. This is then corrupted with Gaussian noise of magnitude 4.1, which gives a signal-to-noise ratio of 10 to 1. The tilt views are a linear function of the image:
y = Ax + σ_y ·ϵ
In this setting, the number of observed variables (the dimension of y) equals the number of variables we are trying to infer (the dimension of x), which is 128^2. The matrix A is highly ill-conditioned, however, with over half of its singular values smaller than 0.1 × the largest singular value. We will see that simply performing gradient descent on ‖ Ax - y ‖^2 can recover some structure but inevitably misses a significant portion of critical information.
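For concreteness, a hedged sketch of this simulation using the ASTRA Toolbox Python interface; the specific geometry calls and angle conventions shown here are illustrative rather than a copy of our pipeline.

```python
import numpy as np
import astra

def simulate_tilt_series(x, n_views=128, sigma_y=4.1):
    """Simulate a noisy, limited-angle sinogram y = Ax + sigma_y * eps."""
    n = x.shape[0]
    angles = np.linspace(-np.pi / 3, np.pi / 3, n_views)           # +/- 60 degrees
    vol_geom = astra.create_vol_geom(n, n)
    proj_geom = astra.create_proj_geom('parallel', 1.0, n, angles)
    projector_id = astra.create_projector('linear', proj_geom, vol_geom)
    A = astra.OpTomo(projector_id)                                  # matrix-like operator
    sinogram = (A * x.ravel()).reshape(n_views, n)
    y = sinogram + sigma_y * np.random.randn(n_views, n)
    return y, A
```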
We use a validation set of 128 images of size 128 × 128 to tune the hyperparameters of all reconstruction methods. We report mean squared errors and SSIM metrics on a different test set of 128 images <cit.>. We use TorchMetrics to compute SSIM <cit.>.
While the test set is small, the standard error on our measurements is still sufficiently low that we can see clear differences between all methods.
§.§ Using the prior for tomographic reconstructions
We run Alg. <ref> on our simulated tilt series. We report quantitative results in Tab. <ref>. We report all methods using the absolute best (meaning lowest MSE on a validation set) settings found and the best settings subject to the number of denoiser evaluations being 50. All qualitative figures shown use the best settings for any number of denoiser evals. For the best settings, we found σ_1=3.0, σ_N=0.03, N=150, K=15 and λ = 9e-5. For the best settings at 50 denoiser evaluations, we set σ_1=3.0, σ_N=0.03, N=50. We set K=25 and λ = 9e-5. We found that performance of our method was still quite high at just 50 denoiser evaluations.
In <ref> we show two example reconstructions given by our method. Since our method is stochastic, we may end up with different results each time. Ideally these variations would be small, as there should be only one unique solution. For challenging patterns, we occasionally see meaningfully different outputs of the network. In <ref> we show individual reconstructions generated by our method compared to three other reconstruction methods, which we'll discuss in the next section. We now discuss how we arrived at this setting of parameters.
Step size λ for gradient of reconstruction error We found the largest λ for which gradient descent causes the reconstruction error ‖ Ax -y ‖^2 to decrease. For larger λ, the error explodes. The value of λ is held constant throughout our algorithm. We kept this value λ = 9e-5 for all experiments. Future work may explore tuning this parameter as well.
Initial noise level σ_1 The initial noise level we use, σ_1=3.0, is actually 10× lower than what we used for unconditional generation σ_1=30.0. Empirically we found that for fixed N, lowering the starting σ improved reconstruction performance slightly (Appendix). Intuitively, after just a few gradient iterations ∇_x ‖ Ax-y‖^2 the reconstruction x already bears some similarity to a real image from the training set. Therefore x+σϵ may resemble a clean image + Gaussian noise image for relatively low levels of Gaussian noise. We do not have an explanation for why starting with lower noise levels is actually better for MSE however.
Ending noise level σ_N We choose the smallest noise level the network was trained on, which in this case was σ_N=0.03. This is an imperceptibly small level of noise. We did not vary this choice in the experiments.
Number of gradient updates K per iteration This parameter is important. When the number of denoiser evaluations N=50, we found the optimal value of K=25. When the number of denoiser evaluations N=150, we found the optimal value of K=15 though the MSE differences were very slight between K=15 and K=25. Interestingly these values are less than the optimal value of K=100 when doing simple algebraic reconstruction (Eq. <ref>). But we found that the total number of gradient iterations, the product NK, was quite large. NK=1250 when N=50 and NK=2250 when N=150.
§.§ Comparisons
We compare our method to three other methods, DDRM <cit.>, a variant of DPS which we call DPS_* <cit.>, and a non-neural algebraic reconstruction method. Mean squared error and SSIM are computed between the recovered image x and the ground truth x_true. Results from a test set of 128 images are provided in Tab. <ref>. We evaluate the neural methods with two different settings: one where we allow any number of denoiser evaluations and one where we only allow 50 denoiser evaluations (as this setting is much faster).
We find in both cases our method significantly outperforms both DDRM and DPS_* in terms of MSE and SSIM. We find that DPS_* in particular benefits from a large number of denoiser evaluations, but even after 1000 evaluations, its error is far worse than our method with just 50 denoiser evaluations. In this setting of 50 denoiser evaluations, we actually found that the DPS_* method was outperformed even by the simple non-neural algebraic method.
Algebraic reconstruction The simplest method does not use a neural network. We just perform K steps of gradient descent on the squared error between predicted tilt views Ax and the measured tilt views y
x ← x - λ∇_x ‖ Ax - y ‖^2
Early stopping is used as an implicit regularizer. We initialize x=0. We set λ=9e-5 to be the λ which gives rise to the fastest decrease in objective value. This means we have one hyperparameter, the number of gradient steps K. We find that K=100 gives the lowest MSE between true and generated reconstructions x on our validation set. We show the validation set reconstruction errors as we vary K in the Appendix. We show the test set values for K=100 in Table <ref> and show reconstructions in Fig. <ref> and Fig. <ref>
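A sketch of this baseline, where A and At are callables applying the projector and its adjoint and the factor of 2 is again absorbed into λ:

```python
import numpy as np

def algebraic_reconstruction(y, A, At, K=100, lam=9e-5, shape=(128, 128)):
    """Early-stopped gradient descent on ||Ax - y||^2, with no learned prior."""
    x = np.zeros(shape)
    for _ in range(K):
        x = x - lam * At(A(x) - y)     # gradient step on the reconstruction error
    return x
```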
Denoising Diffusion Restoration Models (DDRM) We refer the reader to Eq. 7 and 8 of <cit.> for the full description of the algorithm, which relies on the SVD of the projection operator A. We make note that computing the SVD of A is simple enough for 128 × 128 images (see Appendix for singular values), but more thought would be required to apply this method to our 128 × 1568 pixel images in Fig. 1 due to memory constraints.
This method treats different singular values of the measurement operator differently depending on the level of noise at each step of the diffusion process. We note that the method appears to recommend setting the initial noise level to be larger than the largest non-zero singular value of A^†. In our case this would imply setting σ_init≈ 1 / 10^-5 = 10^5. Our network has only been trained on noise levels up to 30 however. To proceed with the DDRM method, we therefore set all singular values of A which are smaller than 1 / 30.0 to zero and initialize our DDRM diffusion process at σ=30.0.
We keep η_b = 1.0 as used in the paper and tune their η parameter and the number of diffusion steps N. We find that η=1.0 and N=10 provided the minimal mean squared reconstruction error on a validation set of images. We show the validation set reconstruction errors as we vary η and N in the Appendix. We show the test set values in Table <ref> and show reconstructions in Fig. <ref>.
The reconstructions were notably blurry for all settings of parameters we tried (see examples in Fig. 4). This is not inconsistent with the recoveries shown by <cit.>. We also observed that unlike many diffusion models, using a surprisingly small number of steps (10 was optimal) gave higher performance than more steps.
Diffusion Posterior Sampling (DPS_*) We compare to a variant of the DPS method proposed in <cit.>, which we'll call DPS_*. We cannot use our networks with their exact diffusion schedule, as they use a diffusion method which learns the variances at each step, and they operate in the "variance preserving" regime (as opposed to the "variance exploding" regime we work in, where the variance of our patterns grows with increasing noise level). However, we make several modifications and perform extensive parameter tuning in an attempt to get this method working for our reconstruction problem.
We first apply their key insight, line 7 of their Algorithm 1, to our pre-existing diffusion schedule. Specifically we add a normalized gradient term to the diffusion step of (Eqn. <ref>):
x_n+1 = x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n - ζ∇_x_n‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖^2/‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖
If we stick with our pre-existing diffusion schedule (so setting α=0.183 and β=0.5 and N=50, the schedule used to produce the images in Fig. <ref>) then there is only one parameter to tune: ζ. We tune this parameter and show the results in the Appendix. We are unable to set ζ large enough to make the reconstructions match the data (the MSE is always worse than for the simple algebraic reconstruction method).
There is a slight technical detail here. We are operating in the variance exploding regime, meaning our x_n are related to their x'_n via x_n ≈√(1+σ_n^2) x'_n so it may be more appropriate to rescale their gradients ∇_x by 1/√(1+σ_n^2). Therefore we also compare a rescaled version of DPS:
x_n+1 = x_n - ασ_n ϵ_θ(x_n) + √(αβ)σ_n ϵ_n - ζ∇_x_n‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖^2/√(1+σ_n^2)‖ A (x_n - σ_n ϵ_θ(x_n)) - y ‖
We find this moderately improves MSEs, so we use the rescaled version in the rest of our experiments. We explore what happens when we lower the starting noise level of our diffusion process, lowering α so that σ_N is still 0.03 and N is still 50. We find that lowering the starting noise level helps substantially but does not allow us to match even the MSE given by the classic algebraic reconstruction method. We explain this result as follows: our problem is highly ill-conditioned, and if we use our pre-existing diffusion schedule, we only allow 50 gradient steps, which simply is not enough for the data to strongly influence the reconstructions.
In their paper, they iterate for 1000 steps, so we modify N=1000 and α such that σ_N=0.03 and perform more experiments, tuning the coefficient ζ. The experiments are rather slow at this point since each image requires backpropagation through our denoiser 1000 times. However we found that σ_1=3.0 and ζ=0.1 gave optimal performance in this setting.
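For reference, the following sketch shows how the guidance term of Eqs. (5) and (6) can be computed with automatic differentiation; unlike our method, it requires backpropagating through the denoiser at every step. The names are our own, and A must here be a differentiable torch implementation of the projector.

```python
import torch

def dps_guidance(x, sigma, eps_theta, A, y, zeta, rescale=True):
    """Guidance term: gradient of the data misfit at the one-step denoised
    estimate, obtained by backpropagating through the denoiser."""
    x = x.detach().requires_grad_(True)
    x0_hat = x - sigma * eps_theta(x)            # one-step denoised estimate
    sq_err = ((A(x0_hat) - y) ** 2).sum()        # ||A x0_hat - y||^2
    grad = torch.autograd.grad(sq_err, x)[0]
    grad = grad / sq_err.detach().sqrt()         # normalise by ||A x0_hat - y||
    if rescale:
        grad = grad / (1.0 + sigma ** 2) ** 0.5  # rescaled variant of Eq. (6)
    return -zeta * grad
```

The returned term is then added to the diffusion update of Eq. (2).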
§.§ Arbitrary-sized image reconstruction
For this algorithm to have practical utility in connectomics, we must be able to reconstruct arbitrarily large images. Naively one could try running Algorithm 1 in patches and then stitching the outputs together. However, the algorithm is inherently random and it is not obvious how to avoid seams in that scenario. One idea is to attempt to modify the work of <cit.> to the setting of inverse problem solving. We take a conceptually simpler approach. Instead, we modify the denoiser network itself to run in patches, then smoothly blend the denoised patches together. Mathematically, we convolve the denoiser outputs with a 2D bump function. The details are provided in the Appendix. We use this method to produce the 128 × 1568 pixel reconstruction shown in Fig <ref>.
§ DISCUSSION
Future directions In this paper we focused on a particular inverse problem, limited angle computed tomography. It would be interesting to explore application of our method to other inverse problems, especially non-linear ones. Since we do not require SVD, our method is at least well-equipped in principle to solve non-linear inverse problems such as those considered by the DPS method.
Another line of work should consider annealing the step size or number of gradient steps inside each loop. In our algorithm, the effective strength of the prior decays exponentially over time, while the data-term does not change. Surprisingly we did not need to anneal the data-driven term for our application, but other applications may benefit from such an annealing. Another interesting line of work would be the use of a preconditioner in the gradient updates with the idea of reducing the number of gradient evaluations at each iteration. Currently the gradient updates are the slow component of our algorithm.
Limitations A notable drawback of this method and related methods is the sensitivity to parameters. This work was aided by the fact that we have a ground truth by which we could tune the parameters. However, in the real world, one will typically use tomography to infer the 3D structure of an object that no other method can. This means there is no ground truth on which to tune the parameters, or more generally, evaluate the method. Another limitation regards our evaluation method. We have relied on MSE and SSIM, but these might encourage blurry reconstructions in uncertain image regions. Future evaluations should explore additional quantitative metrics.
Potential negative impacts One concerning outcome of this line of work is the tendency of the networks to hallucinate or eliminate real biological structures. We have observed that the reconstructions usually look very realistic, even when they are incorrect. For scientific applications, such hallucinations can be very concerning. One must take great care to validate any systems that derive scientific results from methods such as ours which use powerful priors to guide data-driven reconstructions.
plainnat
21
urlstyle
[Chung et al.(2022)Chung, Sim, Ryu, and Ye]chung2022improving
Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye.
Improving diffusion models for inverse problems using manifold
constraints.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,
editors, Advances in Neural Information Processing Systems, 2022.
URL <https://openreview.net/forum?id=nJJjv0JDJju>.
[Chung et al.(2023)Chung, Kim, Mccann, Klasky, and
Ye]chung2023diffusion
Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, and
Jong Chul Ye.
Diffusion posterior sampling for general noisy inverse problems.
In The Eleventh International Conference on Learning
Representations, 2023.
URL <https://openreview.net/forum?id=OnD9zGAGT0k>.
[Falcon and The PyTorch Lightning team(2019)]falcon2019lightning
William Falcon and The PyTorch Lightning team.
PyTorch Lightning, March 2019.
URL <https://github.com/Lightning-AI/lightning>.
[Graikos et al.(2022)Graikos, Malkin, Jojic, and
Samaras]graikos2022diffusion
Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras.
Diffusion models as plug-and-play priors.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,
editors, Advances in Neural Information Processing Systems, 2022.
URL <https://openreview.net/forum?id=yhlMZ3iR7Pu>.
[Ho et al.(2020)Ho, Jain, and Abbeel]ho2020denoising
Jonathan Ho, Ajay Jain, and Pieter Abbeel.
Denoising diffusion probabilistic models.
Advances in Neural Information Processing Systems,
33: 6840–6851, 2020.
[Jain(2017)]jain2017adversarial
Viren Jain.
Adversarial image alignment and interpolation.
arXiv preprint arXiv:1707.00067, 2017.
[Jalal et al.(2021)Jalal, Arvinte, Daras, Price, Dimakis, and
Tamir]jalal2021robust
Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alex Dimakis, and
Jonathan Tamir.
Robust compressed sensing MRI with deep generative priors.
In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan,
editors, Advances in Neural Information Processing Systems, 2021.
URL <https://openreview.net/forum?id=wHoIjrT6MMb>.
[Kadkhodaie and Simoncelli(2021)]kadkhodaie2021stochastic
Zahra Kadkhodaie and Eero P Simoncelli.
Stochastic solutions for linear inverse problems using the prior
implicit in a denoiser.
In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan,
editors, Advances in Neural Information Processing Systems, 2021.
URL <https://openreview.net/forum?id=x5hh6N9bUUb>.
[Kawar et al.(2021)Kawar, Vaksman, and Elad]kawar2021snips
Bahjat Kawar, Gregory Vaksman, and Michael Elad.
SNIPS: Solving noisy inverse problems stochastically.
In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan,
editors, Advances in Neural Information Processing Systems, 2021.
URL <https://openreview.net/forum?id=pBKOx_dxYAN>.
[Kawar et al.(2022)Kawar, Elad, Ermon, and Song]kawar2022denoising
Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song.
Denoising diffusion restoration models.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,
editors, Advances in Neural Information Processing Systems, 2022.
URL <https://openreview.net/forum?id=kxXvopt9pWK>.
[Mastronarde and Held(2017)]mastronarde2017automated
David N Mastronarde and Susannah R Held.
Automated tilt series alignment and tomographic reconstruction in
imod.
Journal of Structural Biology, 197 (2): 102–113, 2017.
[Nicki Skafte Detlefsen et al.(2022)Nicki Skafte Detlefsen, Jiri
Borovec, Justus Schock, Ananya Harsh, Teddy Koker, Luca Di Liello,
Daniel Stancl, Changsheng Quan, Maxim Grechkin, and William
Falcon]nicki2022torchmetrics
Nicki Skafte Detlefsen, Jiri Borovec, Justus Schock, Ananya Harsh,
Teddy Koker, Luca Di Liello, Daniel Stancl, Changsheng Quan, Maxim
Grechkin, and William Falcon.
TorchMetrics - Measuring Reproducibility in PyTorch, February 2022.
URL <https://github.com/Lightning-AI/torchmetrics>.
[Paszke et al.(2019)Paszke, Gross, Massa, Lerer, Bradbury, Chanan,
Killeen, Lin, Gimelshein, Antiga, et al.]paszke2019pytorch
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory
Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.
Pytorch: An imperative style, high-performance deep learning library.
Advances in neural information processing systems, 32, 2019.
[Scheffer et al.(2020)Scheffer, Xu, Januszewski, Lu, Takemura,
Hayworth, Huang, Shinomiya, Maitlin-Shepard, Berg,
et al.]scheffer2020connectome
Louis K Scheffer, C Shan Xu, Michal Januszewski, Zhiyuan Lu, Shin-ya Takemura,
Kenneth J Hayworth, Gary B Huang, Kazunori Shinomiya, Jeremy Maitlin-Shepard,
Stuart Berg, et al.
A connectome and analysis of the adult drosophila central brain.
Elife, 9: e57443, 2020.
[Song et al.(2023)Song, Vahdat, Mardani, and
Kautz]song2023pseudoinverseguided
Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz.
Pseudoinverse-guided diffusion models for inverse problems.
In International Conference on Learning Representations, 2023.
URL <https://openreview.net/forum?id=9_gsMA8MRKQ>.
[Song et al.(2020)Song, Sohl-Dickstein, Kingma, Kumar, Ermon, and
Poole]song2020score
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano
Ermon, and Ben Poole.
Score-based generative modeling through stochastic differential
equations.
arXiv preprint arXiv:2011.13456, 2020.
[Song et al.(2022)Song, Shen, Xing, and Ermon]song2022solving
Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon.
Solving inverse problems in medical imaging with score-based
generative models.
In International Conference on Learning Representations, 2022.
URL <https://openreview.net/forum?id=vaRCHVj0uGI>.
[Wang et al.(2023)Wang, Yu, Yu, and Zhang]wang2023unlimited
Yinhuai Wang, Jiwen Yu, Runyi Yu, and Jian Zhang.
Unlimited-size diffusion restoration.
arXiv preprint arXiv:2303.00354, 2023.
[Wang et al.(2004)Wang, Bovik, Sheikh, and Simoncelli]Wang2004ssim
Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli.
Image quality assessment: from error visibility to structural
similarity.
IEEE Transactions on Image Processing, 13 (4): 600–612, 2004.
10.1109/TIP.2003.819861.
[Wu and He(2018)]wu2018group
Yuxin Wu and Kaiming He.
Group normalization.
In Proceedings of the European conference on computer vision
(ECCV), pages 3–19, 2018.
[Xu et al.(2020)Xu, Januszewski, Lu, Takemura, Hayworth, Huang,
Shinomiya, Maitin-Shepard, Ackerman, Berg, et al.]xu2020connectome
C Shan Xu, Michal Januszewski, Zhiyuan Lu, Shin-ya Takemura, Kenneth J
Hayworth, Gary Huang, Kazunori Shinomiya, Jeremy Maitin-Shepard, David
Ackerman, Stuart Berg, et al.
A connectome of the adult drosophila central brain.
BioRxiv, pages 2020–01, 2020.
[Silversmith et al.(2021)Silversmith, Collman, Kemnitz, Wu, Castro,
Falk, Roat, Macrina, Perlman, shangmu, Halageri, Gunn, Jagannathan, Hoag,
Turner, and Dorkenwald]silversmith2021cloudvolume
William Silversmith, Forrest Collman, Nico Kemnitz, Jingpeng Wu, Manuel Castro,
Ben Falk, Chris Roat, Thomas Macrina, Eric Perlman, shangmu, Akhilesh
Halageri, Pat Gunn, Sridhar Jagannathan, Austin Hoag, Nicholas Turner, and
Sven Dorkenwald.
seung-lab/cloud-volume: Zenodo release v1, November 2021.
URL <https://doi.org/10.5281/zenodo.5671443>.
[Wu et al.(2021)Wu, Silversmith, Lee, and Seung]wu2021chunkflow
Jingpeng Wu, William M Silversmith, Kisuk Lee, and H Sebastian Seung.
Chunkflow: hybrid cloud processing of large 3d images by
convolutional nets.
Nature Methods, 18 (4): 328–330, 2021.
§ DATASET DETAILS AND SPLITS
We download data using the Cloud Volume Python client <cit.> to access to the Janelia Fly Hemibrain dataset <cit.>. For the training set, we download a contiguous 10 gigavoxel volume at MIP-1 resolution from corner (x,y,z) = (10750, 6500, 9000) to (x,y,z) = (11750,16500,10000). For the validation and test sets, we extract randomly located patches from a contiguous volume that extends from corner (x,y,z)=(12250,6500,9000) to (x,yz)=(13250,16500,10000).
The whole volume from which these subvolumes were downloaded can be viewed interactively in 3D by visiting the following Neuroglancer link
<https://hemibrain-dot-neuroglancer-demo.appspot.com/#!
§ MORE UNCONDITIONAL GENERATIONS
Figure: More unconditional generations with our diffusion model. We generate these images using the algorithm and hyperparameters described in Section 3.3.
§ MORE RECONSTRUCTIONS FROM OUR METHOD
Figure: More tomographic reconstructions on validation set images from our model. Tilt views were simulated as described in Section 3.2. We generate these reconstructions using our method with the best-performing hyperparameters described in Table 1 of the paper.
Figure: Ground truth corresponding to the reconstructions from Fig. <ref>.
§ SINGULAR VALUES OF THE PROJECTION MATRIX
Figure: Singular values of the matrix A in the equation y=Ax+σ_y ϵ. This matrix A implements the forward projection operator, returning projections of the images from the angular range (-60^∘,+60^∘). We can see that this matrix is highly ill-conditioned, with singular values spanning a range from 10^2 down to 10^-5.
§ PARAMETER TUNING
§.§ Diffusion Denoising of Gradient Minimization
We compute performance of our method on the validation set of 128 images as we modify various hyperparameters.
Figure: Tuning K (the number of gradient steps per iteration) and the starting σ with 50 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5. The MSE between the predicted measurements Ax and the observed data y decreases monotonically as K increases. However, the more important metric, the MSE between the reconstruction x and the ground truth x_true, reaches its minimum at K=25. Similarly, the SSIM between the reconstruction x and the ground truth x_true reaches its maximum at K=25.
Figure: Tuning K (the number of gradient steps per iteration) and the starting σ with 100 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5.
Figure: Tuning K (the number of gradient steps per iteration) and the starting σ with 150 diffusion steps, σ_N=0.03 (the final noise level). We set λ=9e^-5. Using 150 diffusion iterations (compared to 50 as in Fig. <ref>) slightly improves the MSE and SSIM between the reconstruction x and the ground truth x_true. Notably, the MSE between the predicted measurements Ax and the observed data y (right figure) is higher in this setting than when the number of diffusion iterations is 50.
§.§ Algebraic Reconstruction
Figure: Varying K, the number of gradient steps taken. The only other hyperparameter is λ=9e^-5.
§.§ Denoising Diffusion Restoration Models
Figure: Varying η and number of diffusion iterations for DDRM. We also tried niter=5 but the performance was substantially worse than niter=10 and does not fit on these charts.
§.§ Diffusion Posterior Sampling
Besides the hyperparameter ζ governing the gradient step sizes, we find that DPS has an extreme sensitivity to the details of the noise schedule. In particular both the number of diffusion steps and the precise noise levels used have a large impact on the ultimate reconstruction performance. This makes somes sense as the number of gradient steps is tied to the number of diffusion steps in this algorithm. More performance could perhaps be achieved by a more extensive grid search over diffusion schedule parameters, but even evaluating a singular parameter configuration with 1000 diffusion steps (a single point in Fig. <ref>) takes over 1 hour so performing a grid search would require significant time investment. Furthermore, we struggled to find any setting of parameter
Figure: Varying γ (from Eq. 5 and 6 of the main text) and comparing the unscaled and rescaled versions of Diffusion Posterior Sampling. We keep the diffusion schedule from the paper that we found gave high quality unconditional generations. This schedule is described in Eq. 2 of the main text, but in brief we use 50 iterations with an exponentially decaying noise schedule.
Figure: We were able to improve performance of DPS by modifying the diffusion noise schedule. In particular, we try different starting noise levels σ_1, and explore performance of the rescaled version of DPS for various γ. Note that we choose our schedule according to σ_n = σ_1 (σ_N/σ_1)^(n-1)/(N-1) so that we keep the number of diffusion steps fixed when we decrease the starting noise level (the spacing between noise levels just decreases as we decrease the starting noise level). In this figure, we still use 50 total denoiser evaluations.
Figure: We evaluate DPS now with 1000 diffusion iterations. We choose our schedule according to σ_n = σ_1 (σ_N/σ_1)^(n-1)/(N-1). We vary γ and the starting σ_1. Interestingly the data error terms (right plot) Ax - y are nearly identical for all configurations.
§ ARBITRARY SIZED RECONSTRUCTION
We modify the noise-prediction network itself to run in patches, then we smoothly average them together. This is the Approach diagrammed in Figure 1 of <cit.>. Mathematically we run:
ϵ_patchified(x)[i,j] = ∑_u,v=1^∞ B[i-su,j-sv] ϵ_θ(x[su:su+p,sv:sv+p])[i-su,j-sv]/∑_u,v B[i-su,j-sv]
where s=96 is the stride, and p=128 is the patch size, that we use in the experiments. The bump function we use is the product of two 1D bump functions
B[x, y] = b(2x/p-1) · b(2y/p-1)
and each 1D bump function is given by:
b(u) =
1-exp(-1/max{1-u^2,0.2}) if |u| < 1
0 otherwise
These decay smoothly to 0.2 as x→ +p/2 and x → -p/2. With this overlap fraction, pixels are on average processed 1.8× by the network, so this method is approximately 1.8× slower than just running a larger patch through our network.
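A sketch of this patchified denoiser (the tensor layout and names are ours; edge handling assumes the image dimensions minus the patch size are multiples of the stride, as is the case for our 128 × 1568 images):

```python
import torch

def bump_1d(p, device):
    """1D bump profile b(u) sampled across a length-p patch, following the
    piecewise definition above."""
    u = torch.linspace(-1.0, 1.0, p, device=device)
    return 1.0 - torch.exp(-1.0 / torch.clamp(1.0 - u ** 2, min=0.2))

@torch.no_grad()
def eps_patchified(eps_theta, x, p=128, s=96):
    """Apply the denoiser to overlapping p x p patches and blend with the bump window."""
    h, w = x.shape[-2:]
    b = bump_1d(p, x.device)
    window = b[:, None] * b[None, :]              # B = outer product of 1D bumps
    out = torch.zeros_like(x)
    weight = torch.zeros_like(x)
    # Assumes (h - p) and (w - p) are multiples of the stride s, so every pixel
    # is covered; otherwise edge patches would need separate handling.
    for i in range(0, h - p + 1, s):
        for j in range(0, w - p + 1, s):
            patch = x[..., i:i + p, j:j + p]
            out[..., i:i + p, j:j + p] += window * eps_theta(patch)
            weight[..., i:i + p, j:j + p] += window
    return out / weight
```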
§ EXPERIMENTAL DETAILS
Dataset We download two 10GB volumes at MIP-1 resolution from the center of the fly hemibrain dataset, one for training and one for testing. There are notable stitching artifacts between blocks which we avoid in our volumes. The voxel sizes at MIP-1 resolution are 16 × 16 × 16 nanometers.
Network architecture We use a 2D residual symmetric U-Net architecture with 4 downsampling layers. Our network is not conditioned on the noise level, similar to Song. More details are provided in the appendix. We train it on patches of size 128 × 128. We use Group Normalization.
Denoiser training
Diffusion sampling To sample with our noise-predictor model we pick a series of noise levels σ_1 > σ_2 > ... > σ_N and apply the following updates
x_i+1 := x_i - ((σ_i^2 - σ_i+1^2)/σ_i) ϵ̂_θ(x_i) + √(σ_i^2 - σ_i+1^2) z_i, where z_i ∼𝒩(0,1)
To generate samples unconditionally, i.e. when not solving an inverse problem, we initialize x_1 ∼𝒩(0, σ_1^2). We choose our noise scales using an exponentially decaying schedule
σ_m = σ_1 e^(-β m/N)
where σ_1 controls the amplitude of the noise and β controls the sharpness and the rate of decay of the noise schedule. β > 1 gives rise to a relatively quick drop in noise followed by a slow decay. These values are chosen purely empirically for the unconditional setting.
We sample from our noise-predictor network ϵ̂ in two different regimes. First, we sample in the unconditional regime (i.e. normal diffusion) simply to confirm that we have indeed trained a quality denoiser. Second, we sample conditionally: instead of initializing with Gaussian noise, we initialize the process with the output of a classical reconstruction plus a modest amount of noise.
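A minimal sketch of this sampling loop is given below, assuming noise_predictor is the trained network ϵ̂_θ; for the warm-started regime the amount of noise added to the classical reconstruction is taken here to be the first schedule level, which is a placeholder choice rather than the exact implementation.

import numpy as np

def noise_schedule(sigma_1, beta, N):
    # Exponentially decaying schedule: sigma_m = sigma_1 * exp(-beta * m / N).
    return sigma_1 * np.exp(-beta * np.arange(1, N + 1) / N)

def sample(noise_predictor, shape, sigmas, x_init=None):
    # Unconditional sampling starts from pure noise at level sigmas[0];
    # the warm-started regime starts from a classical reconstruction plus noise.
    if x_init is None:
        x = sigmas[0] * np.random.randn(*shape)
    else:
        x = x_init + sigmas[0] * np.random.randn(*shape)   # noise amplitude here is a choice
    for i in range(len(sigmas) - 1):
        s_i, s_next = sigmas[i], sigmas[i + 1]
        z = np.random.randn(*shape)
        x = x - ((s_i ** 2 - s_next ** 2) / s_i) * noise_predictor(x) \
            + np.sqrt(s_i ** 2 - s_next ** 2) * z
    return x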
§ DENOISING AND DIFFUSION PRIORS FOR LINEAR INVERSE PROBLEMS
§.§ Linear inverse problems
We consider linear inverse problems of the form y = Ax + η, where A is a known linear forward operator, y is the measurement and η is noise. Ill-conditioning of A can have an immense impact on the difficulty of the reconstruction: components of x associated with small singular values are only weakly constrained by the data and must instead be supplied by the prior.
§.§ Denoising priors
Before getting into diffusion, it is much simpler to discuss denoising priors, in which a pretrained denoiser is used to regularize the reconstruction. Simple attempts of this kind run into a fundamental challenge: the denoiser has been trained on i.i.d. Gaussian noise, which the errors encountered during reconstruction generally do not resemble.
§.§ Diffusion priors
Previous works tend to make use of the score-based perspective on diffusion models. Score-based diffusion models use a mean-squared-error denoising objective to train a score function s_θ(x_σ, σ) ≈ ∇_x_σ log P(x_σ). This score function is trained over several orders of magnitude of noise scales, and samples are generated unconditionally with annealed Langevin dynamics:
x_σ_i+1 ← x_σ_i + α s_θ(x_σ_i, σ_i) + √(2 ασ_i) ϵ
α and σ are annealed to zero in this process. Solving an inverse problem can be formalized as sampling from P(x|y). Many methods therefore proceed by using Bayes' rule to write P(x_i|y) = P(y|x_i) P(x_i) / P(y). A natural idea is to replace log P(x_i) with log P(x_i|y) in the update, in the hope that samples will instead be generated from P(x|y):
x_σ_i+1 ← x_σ_i + α [s_θ(x_σ_i, σ_i) + ∇_x_σ log P(y | x_σ_i)] + √(2 ασ_i) ϵ
Challenge #1: intractable gradients
It is tempting to approximate ∇_x_σ log P(y | x_σ) by A^⊤ (y - Ax)/λ^2, where λ is a hyperparameter controlling the strength - i.e., to treat the noisy iterate as if it were a clean image. Unfortunately this is a mistake. Theoretically, P(y | x_σ) = ∫ P(y | x_0) P(x_0 | x_σ) dx_0, which is not available in closed form. Empirically, this approximation can be off by several orders of magnitude, which can have very negative consequences for the sampling trajectory.
Naive: ∇_x_σlog P(y | x_σ) ≈γ· A^⊤ (y - Ax)
One of the earlier approximations was made by
Jalal et al: ∇_x_σlog P(y | x_σ) ≈γ‖∇_x log P(x_σ)‖/‖ A^⊤ (y - Ax)‖(A^⊤ (y - Ax))
A later approximation was made by Chung et al., who used:
Chung et al: ∇_x_σ log P(y | x_σ) ≈ γ (1/‖ y - A x̂_0(x) ‖) ∇_x ‖ y - A x̂_0(x) ‖^2
We note there is some confusion here: the algorithm as written in their paper instead uses ∇_x_σ log P(y | x_σ) = A^⊤ (y - Ax)/(σ^2 + γ_i^2), but in line 144 of their code the update rule above is what is used, with γ_i as an adaptive (rather than fixed) hyperparameter.
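For illustration, the sketch below contrasts the naive likelihood gradient with a Chung et al.-style guidance term computed by differentiating the data misfit of the denoised estimate. It is a hedged PyTorch approximation rather than the authors' exact implementation: x0_hat_fn stands in for the denoiser-based estimate of x_0, A is assumed to be an explicit matrix, and sign conventions are absorbed into γ.

import torch

def naive_guidance(x, y, A, gamma):
    # gamma * A^T (y - A x): treats the noisy iterate as if it were a clean image.
    return gamma * (A.T @ (y - A @ x))

def dps_style_guidance(x, y, A, x0_hat_fn, gamma):
    # Chung et al.-style guidance: differentiate the data misfit of the denoised
    # estimate x0_hat(x) with respect to the current iterate x, then normalise
    # by the residual norm, as in the expression above.
    x = x.detach().requires_grad_(True)
    x0_hat = x0_hat_fn(x)                                   # denoiser-based estimate of x_0
    residual = torch.linalg.norm(y - A @ x0_hat)
    grad = torch.autograd.grad(residual ** 2, x)[0]
    return gamma * grad / (residual.detach() + 1e-8)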
A number of works take a different strategy that relies on the SVD of A. Components associated with large-magnitude singular values are constrained by the data, while components associated with small-magnitude singular values are inferred through a diffusion process. These methods are significantly more complicated, so we do not write out the equations. Their downsides are that 1) they require actually computing an SVD, and 2) they require even more mathematical machinery on top of an already complex diffusion procedure - and the result is still an approximation.
Challenge #2: coupling diffusion schedule to gradient updates
This is especially problematic for ill-conditioned problems, which can require hundreds of gradient updates; setting the gradient step too large can lead to an exponential blow-up.
At the end of the day, there are two properties shared by all such previous methods that we will modify: 1) the diffusion process is initialized with random Gaussian noise in the same manner, and 2) the diffusion updates themselves are modified.
http://arxiv.org/abs/2307.15072v1 | 20230712132837 | Detecting the Presence of COVID-19 Vaccination Hesitancy from South African Twitter Data Using Machine Learning | [
"Nicholas Perikli",
"Srimoy Bhattacharya",
"Blessing Ogbuokiri",
"Zahra Movahedi Nia",
"Benjamin Lieberman",
"Nidhi Tripathi",
"Salah-Eddine Dahbi",
"Finn Stevenson",
"Nicola Bragazzi",
"Jude Kong",
"Bruce Mellado"
] | cs.CY | [
"cs.CY",
"cs.CL",
"cs.LG",
"cs.SI"
] |
Low complexity convergence rate bounds for the synchronous gossip subclass of push-sum algorithms
Balázs Gerencsér
B. Gerencsér is with the Alfréd Rényi Institute of Mathematics, Budapest, Hungary and the Eötvös Loránd University, Department of Probability and Statistics, Budapest, Hungary, [email protected]
Miklós Kornyik
M. Kornyik is with the Alfréd Rényi Institute of Mathematics, Budapest, Hungary, [email protected]
The research was supported by NRDI (National Research, Development and Innovation Office) grant KKP 137490.
August 12, 2023
§ ABSTRACT
Very few social media studies have been done on South African user-generated content during the
COVID-19 pandemic, and even fewer using hand-labelling over automated methods. Vaccination is a major tool in the fight against the pandemic, but vaccine hesitancy jeopardizes any public health effort. In this study, sentiment analysis on South African tweets related to vaccine hesitancy was performed, with the aim of training AI-mediated classification models and assessing their reliability in categorizing UGC. A dataset of 30000 tweets from South Africa was extracted and hand-labelled into one of three sentiment classes - positive, negative, neutral. The machine learning models used were LSTM, bi-LSTM, SVM, BERT-base-cased and RoBERTa-base, whereby their hyperparameters were carefully chosen and tuned using the WandB platform. We used two different pre-processing approaches for comparison - one semantics-based and the other corpus-based - and pre-processed the tweets in our dataset with each method. All models were found to have low F1-scores within a range of 45%-55%, except for BERT and RoBERTa, which both achieved significantly better measures with overall F1-scores of 60% and 61%, respectively. Topic modelling using an LDA was performed on the misclassified tweets of the RoBERTa model to gain insight into how to further improve model accuracy.
§ NOMENCLATURE
SVM Support Vector Machine.
COVID-19 Coronavirus Disease-19.
NLP Natural Language Processing.
BERT Bidirectional Encoder Representations from Transformers.
RoBERTa Robustly Optimized BERT Pre-training Approach.
UGC User Generated Content.
LSTM Long Short Term Memory.
Bi-LSTM Bidirectional-LSTM.
VADER Valence Aware Dictionary and sEntiment Reasoner.
LDA Latent Dirichlet Allocation.
AI Artificial Intelligence.
NPI Non-pharmaceutical interventions.
ABSA Aspect-based Sentiment Analysis.
NB Naïve Bayes.
VAI Vaccine Acceptance Index.
TF-IDF Term Frequency-Inverse Document Frequency.
RF Random Forest.
NSP Next Sentence Prediction.
MLM Masked Language Modelling.
WandB Weights and Biases.
§ INTRODUCTION
The still ongoing COVID-19 pandemic, which represents the most significant healthcare emergency in recent times, has had a shattering effect all over the globe - both physically and psychologically <cit.>. Many NPIs, such as wearing masks, washing hands regularly, and maintaining social distancing, can help reduce the spread of the virus and have been, indeed, effective in mitigating the infectious outbreak.<cit.>
However, they are not sustainable in the long term, both in terms of acceptability and psychological and economic impact. Also, they may not fully eradicate the disease. In this context, pharmaceutical interventions, including drugs and vaccination, can play a very crucial role in combating the infection and immunization could potentially eradicate this virus.
Several government agencies have worked closely with public and private organizations worldwide to provide the scientific community with the necessary resources to work toward the development of vaccines and drugs that would protect against COVID-19 infection, as well as mitigate the severity of symptoms arising from COVID-19 infection in the elderly and people with co-morbidities<cit.>. While drug discovery and vaccine development and roll-out are, in general, long and complicated processes, often taking an average of 10 to 15 years for development to be completed and approval to be finalized, the prompt development of the pharmacological compounds and vaccines against COVID-19 has been facilitated by several years of past basic and translational research.<cit.>
A global research effort coupled with novel technological advancements has allowed faster ways to manufacture drugs and vaccines while extensive funding has allowed firms to run multiple trials in parallel, thereby, expediting the process enormously <cit.>. Specifically concerning immunization, according to WHO, by August 2022, there were 198 COVID-19 vaccine candidates in pre-clinical development and 170 in clinical development <cit.>, but even once a vaccine has been developed and manufactured, challenges are not ended yet. The implementation of a mass immunization campaign may present, indeed, organizational and logistic hurdles and having to face vaccine hesitancy <cit.>.
For instance, in South Africa, the national vaccination program against COVID-19 commenced on 17th February 2021 <cit.>. The roll-out of the vaccine strategy in South Africa was implemented in a three-phase approach: first by vaccinating the most vulnerable population such as front-line healthcare workers and, then, by catering to other essential workers, people in congregated settings, persons over 60 years old, people over 18 years old with co-morbidities, and, finally, the population over the age of 18. The goal was to vaccinate at least 67% of the population by the end of 2021 <cit.>.
As of May 30, 2022, around 50.03% of adults in the country had had at least one COVID-19 vaccination. Gauteng leads other provinces in terms of the number of jabs administered (over ten million), followed by KwaZulu-Natal with more than five million vaccinations. It is quite apparent that most populations in different provinces have not yet been fully or partially vaccinated <cit.>.
The lack of willingness of the public to get vaccinated against COVID-19 is a matter of great concern to both health scientists and workers in the field of public health. After the initiation of the vaccination roll-out process, the public's opinions and emotions have become quite diverse. Different studies have been conducted all over the world with the aim of trying to detect and understand the reasons behind vaccine hesitancy <cit.>, which represents a complex, multi-factorial phenomenon<cit.>.
From these studies, some of the reasons that were identified were erroneous beliefs such as those that the vaccines were produced too quickly without proper research being undertaken, the vaccines were thought to cause cancer and/or infertility, uncertainty regarding the second dose's availability, increased risk of serious side-effects for people with pre-existing conditions/co-morbidities, and possible allergic reactions<cit.>. Also instrumental in the rapid rise in vaccine hesitancy were the spread and propagation of conspiracy theories and misinformation - which were due to anti-science, political and religious posts on social media persuading users towards adopting an anti-vaccination attitude.<cit.>
This rapid flow of multiple sources and types of misinformation drastically slowed down the acceptance of the COVID-19 vaccines. Several opposing opinions further divided the general population into groups of people and created a near-hostile temperament toward the topic of vaccination.<cit.>
A previous study by B. Mellado et al., published in a paper titled “Leveraging Artificial Intelligence and Big Data to Optimize COVID-19 Clinical Public Health and Vaccination Roll-Out Strategies in Africa", has shown that “Big data and artificial intelligence (AI) machine learning techniques and collaborations can be instrumental in an accurate, timely, locally nuanced analysis of multiple data sources to inform CPH decision-making, vaccination strategies and their staged roll-out” <cit.>.
Therefore, the government and other agencies should analyze people’s sentiments about vaccination campaigns to maximize and optimize their roll-out, collecting available data from different social networking sites. Examples of UGC <cit.> include tweets, Facebook status updates, videos, blogs, forum posts, and consumer-produced product reviews, among others. <cit.>
UGC can be mined in order to identify trends and make predictions on a range of diverse subjects and topics, spanning from product launches and sales to political campaigns and elections, natural disasters, infectious epidemics, and pandemics. Concerning the latter topic, there are several studies where UGC has been used to understand people's opinions about the Coronavirus and its spread, government measures taken to control its spread, and the development and administering of vaccines <cit.>.
However, to the best of our knowledge, the public's hesitation associated with getting vaccinated against COVID-19 has been investigated mainly in the Global North and, to a lesser extent, in the Global South. This is very apparent if one considers that by the 9th of July 2021, the shares of people that had been partially or fully vaccinated were under 50% on every continent, with North America and Europe having 44% and 43% of their residents receiving at least one vaccination against COVID-19, followed by South America with 34%, which is well ahead of Asia and Oceania with respective shares of 25% and 19%, and then Africa with a dismal figure of under 5%. <cit.>
The fact that Africa is far behind the rest of the world in terms of vaccination rates further justifies and motivates the importance of this study <cit.>. Moreover, the platform most frequently used for sharing thoughts on the COVID-19 situation, from its emergence until now and especially during 2021, was Twitter - hence the use of Twitter data, as opposed to data from other social media platforms, for this study. <cit.>
A total of 20 related works were analyzed, with the 6 most relevant papers mentioned in the upcoming literature review section, with each using either NLP techniques and/or ML methods in order to probe the public's sentiments towards certain pandemic-related topics such as vaccination and lockdown measures through user comments on one or more social media platforms extracted within or from one country/continent or amongst several countries/continents, with the intention of guiding policy-makers in making decisions, given the devastating effect of the pandemic.
Most studies exclusively used automated labelling methods in their sentiment analysis, while some included both manual and automated labelling in their experiments. The machine learning models that were commonly used included state-of-the-art models such as BERT, classical models such as SVM and novel recurrent neural networks such as LSTM/Bi-LSTM.
All these related works showcased the power of sentiment analysis and the potential of using NLP techniques in conjunction with machine learning methods for extracting meaningful conclusions pertaining to people's feelings and opinions towards a particular topic - which can be used in future studies to create more sophisticated models that would help policymakers in making decisions during a pandemic or public health crisis. No studies exclusively used manual labelling in their research, while some used partial manual labelling and others used ABSA, with relatively good results.
However, there are many limitations to these studies involving data bias given the intrinsic characteristics of social media users being young and from more urbanized areas, as well as model bias given the choice of keyword selection. Moreover, other limitations arise from the tremendous amount of time manual labelling takes as opposed to automated labelling, class imbalance, dataset size and characteristics, as well as conflict from subtle deviations in terms of agreement with the choice of definition for vaccine hesitancy and the accompanying sentiment labels, along with the method of pre-processing and the rules used in the labelling process.
With these observations in mind, this study explored vaccine hesitancy in South Africa, which is in the Southern Hemisphere, using Twitter data as a source of public opinion. More specifically, the aim was to quantify and qualify the public’s willingness to be vaccinated in order to develop an AI model that would be able to detect the presence of vaccine hesitancy and track its dynamics, thereby, paving the path to an AI-mediated response to a global health crisis. This would allow for a faster, more efficient, implementation and deployment of disaster management systems for the detection, mitigation, and eradication of infectious pandemics.
§ RELATED WORK
In 2020, M.B. Mutanga and A. Abayomi used Twitter data from South Africa and identified issues relating to the pandemic using an LDA, which they showcased in a paper entitled: “Tweeting on COVID-19 pandemic in South Africa: LDA-based topic modelling approach." From the LDA analysis, some topics that were being discussed were identified pertaining to the sale and consumption of alcohol, lockdown, daily rates of infection, police brutality, 5G radiation causing COVID-19 and vaccines, as well as conspiracy theories. These topics were an illustration of the attitudes and perceptions the citizens had towards the topic of vaccines. The findings also revealed people’s resistance to measures that affect their economic activities, and their unwillingness to take tests or vaccines as a result of fake news and conspiracy theories <cit.>.
The study was very comprehensive but is limited given that as the COVID-19 pandemic continues its offence and new sources of damage and opportunities are being found, future work needed to be inclusive of extracting the emotion behind the sentiments from the collected tweets - in order to investigate the evolution of the public's opinions with time before and after certain remarkable events. Testing of additional topic extraction algorithms, including a combination of NLP techniques and machine learning methods toward an automatic classification and prediction of diverse factors relating to the COVID-19 pandemic were not performed in this study <cit.>.
In 2022, a paper entitled: “Sentiment analysis tracking of COVID‑19 vaccine through tweets," by A. Sarirete et al., people's sentiments towards vaccination during the pandemic from tweets that were scraped via the use of the TAGS tool from Twitter users from all over the world were investigated, using a hybrid approach, which combined the use of linear, probability and/or decision tree classifiers with a statistical-, semantics- and/or a dictionary-based approach. In other words, the hybrid approach uses NLP techniques in conjunction with ML methods, and in this case, were applied in order to classify text and extract the degree of vaccination hesitancy towards COVID-19 vaccines in general<cit.>
From the corpus analysis, emojis and words related to a sentiment were identified. The frequency of these keywords was recorded and each tweet was classified based on the keyword frequency using the aforementioned machine learning models. It was found that the tweets could be separated into positive and negative sentiments, with a dominance towards the negatives. Although, several tweets were collected, analyzed and classified based on keywords frequency, manual labelling was absent and more testing is needed on tweets using machine learning techniques to compare the results with the NLP techniques, and generalizing the algorithm to different hashtags and other applications <cit.>.
In 2021, a paper entitled: “Sentiment Analysis of COVID-19 Vaccine Perception Using NLP," by M.A. Mudassir, Y. Mor, R. Munot et al., the sentiments of the people residing in India with regards to the COVID-19 vaccine were analyzed. The paper used three different classification models i.e., TextBlob, VADER, and ABSA to perform the sentiment analysis on English tweets that were posted by users in India and then chose the best deep learning model after comparing their results based on F1-score and test accuracy<cit.>.
TextBlob and VADER are commonly used automated labelling algorithms, while ABSA is an ML that finds and attributes sentiment to aspects, features, and topics that it has categorized within the body of text - more in line with a human perspective used when manually labelling text.
In this study, 2000 or 10 % of the tweets in the dataset were manually labelled and tested on the three different models. The model with the highest accuracy was chosen and rest of the tweets were labelled using this model. It was found that ABSA produced the best result out of the other models due to its ability to focus on the specified aspects enhanced by the Attention based Transformer model and it was argued that ABSA should be used more frequently in sentiment analysis studies tasks which have a narrow focus rather than general purpose models<cit.>.
The results of this study showed that the insights gained from the ABSA model were more detailed and descriptive than other techniques that fail to give a more than a general overview of sentiment - however it is a notably a significantly slower method, which will need to be investigated in future studies. Thus, this study illustrates the advantages of using other methods of text classification used in the training phase such as manual labelling in conjunction with ABSA, instead of solely relying on automated labelling methods <cit.>.
In 2021, a paper entitled: “Dynamic assessment of the COVID-19 vaccine acceptance leveraging social media data," by L. Li and J. Zhou et al, over 29,000,000 vaccine-related tweets from 08 August to the 19th April 2021 were collected and quantified using a VAI, which they computed based on opinion classifications identified with the help of NLP techniques and provided them with a quantitative metric to show the level of vaccine acceptance across different geographic scales in the U.S. Text classification was either automated and performed using TextBlob and VADER or manually labelled into one of three classes i.e. positive, negative or unrelated <cit.>.
A fixed sample of 20000 unique tweets from the collected tweets that were most frequently re-tweeted were manually labelled according to specific labelling criteria, which were based on the CDC strategy in order to consolidate confidence in COVID-19 vaccines, whereby 10% of this dataset was chosen for the testing sample. A total of 9 candidate models were selected and then trained and tested on the aforementioned tweets and the best model in terms of F1 score and accuracy was selected after an extensive grid search was performed in order to obtain the model-specific set of hyperparameters whose values have been optimized to provide the best possible performance <cit.>.
The TF-IDF + RF that was trained on an augmented training set obtained the best overall performance and hence was applied to the entire dataset in subsequent steps. A classification was assigned to each tweet and used to compute a user-based vaccine acceptance measure. Different VAI measures were constructed for national-level, state-level and country-level analysis, respectively <cit.>.
At the national level, it showed that the VAI transitioned from negative to positive in 2020 and stayed steady after January 2021 - which was supported by national vaccination rates over that time interval - and re-iterated via a comprehensive analysis of the state- and county-level data. The paper discussed information characteristics that enabled a consistent method of estimation of the VAI <cit.>.
The findings supported the use of social media to understand opinions and to offer a fast and inexpensive way to assess vaccine acceptance.– which is also relevant. Therefore, future work could consider using NLP and machine learning tools trained in other languages and integrating data from surveys or models to complement the social media estimation, as well as considering the generalizability of this research framework by applying it to investigate the vaccine acceptance on other types of vaccine and in a broader geographical scale, such as the vaccine acceptance over HPV vaccine and flu vaccine in different countries <cit.>.
In 2021, a paper entitled: “Applying Machine Learning to Identify Anti-Vaccination Tweets during the COVID-19 Pandemic", by Quyen G. and Kien G. et al, the performance of various different NLP models i.e., BERT, NB, SVM and Bi-LSTM networks with pre-trained GLoVe embeddings in identifying anti-vaccination tweets published during the COVID-19 pandemic were evaluated <cit.>.
From the 1st of Jan up until the 23rd of August 2020, 150,000,000 tweets from all over the world were collected using a Twitter Stream API which allowed public access to a one percent sample of the daily stream of Twitter. After removing all non-English tweets and re-tweets, ≈75,000,000 tweets were left behind and used for training and testing <cit.>.
A systematic random sampling method was used to select 20,854 tweets from 1,474,276 tweets for automated labelling. This sampling method made sure that tweets from across different time intervals during the pandemic were chosen. Tweets were labelled as either “anti-vaccination” or “other” as the model was aimed to use for stance analysis using stance analysis, in which a tweet is determined to be in favour or against a target <cit.>.
The optimal model performance on the test set for the BERT model was: accuracy = 91.6%, precision = 93.4%, recall = 97.6%, F1 score = 95.5%, and AUC = 84.7%. From this result along with the other optimized model performances, it was concluded that the BERT models had outperformed all of the other models across all metrics and that given its excellent performance is viable as an identifier of anti-vaccination attitudes from tweets <cit.>.
However, since stance analysis was used, which is different from sentiment analysis in which a tweet is classified as positive or negative, a negative tweet may not mean anti-vaccine while a positive tweet may not mean pro-vaccine and moreover, only two classes were chosen, which both may have served to inflate the model to such high-performance values. Moreover, it may be possible that BERT has a high correlation with tweets labelled with Textblob and Vader, which needs to be investigated <cit.>.
Hence, this study should be repeated and cross-checked with the results of similar studies, as well as to check if such a correlation exists and also to compare results against a manually labelled dataset, as well as performing sentiment analysis on the dataset using the same labels and then extending the number of classes to three, in order to verify whether or not this model is reliable as a tool for identifying anti-vaccination attitudes across the globe towards the COVID-19 vaccines <cit.>.
In 2021, a paper entitled: “Deep Learning-Based Sentiment Analysis of COVID-19 Vaccination Responses from Twitter Data", by K.N. Alam and Md.S. Khan et al, the authors used a Kaggle dataset called "All COVID-19 Vaccines Tweets" consisting of 125906 vaccine-related tweets from across the globe to train LSTM and Bi-LSTM sentiment classifiers, whereby the tweets were labelled by VADER and not by hand <cit.>.
However, portions of the labelled tweets were assessed and if the label didn't match the sentiment that a human would have given it based on some rules, the cut-offs used for polarity identification were adjusted until more tweets had automated labels that agreed with their ascribed manual labels. This process was iterative and was done until the optimal or near-optimal cut-offs for the three sentiments are found<cit.>.
From the datasets, 125,906 tweets were analyzed using the lexicon-based VADER and separated into three parameters: positive, negative, and neutral. It was found that neutral tweets formed the majority; the negative reactions were lowest in frequency, indicating that fear and unrest related to COVID-19 vaccination procedures were still at large<cit.>.
LSTM and Bi-LSTM models were trained and tested on this dataset, in which The LSTM architecture showed 90.59% accuracy, and the Bi-LSTM model showed 90.83% accuracy, and both models showed good prediction scores in the precision, recall, F-1 scores, and confusion matrix calculation<cit.>.
Upon other analyses, it was found that people's reactions towards vaccines, the words “first dose”, “second dose”, “Moderna”, “Pfizer”, “Bharat BioNTech”, “death”, “emergency”, “Covishield” and “clinical trial” were very commonly used by twitter users in Canada and India, along with alarming words like “blood clot”, “feel” and “trial”<cit.>.
Furthermore, from January 21 to the end of February 21, the number of tweets related to vaccines was fewer than 500; from March 21 it rose to nearly 3000, indicating that people were very excited about the vaccines after the completion of the clinical trials and the vaccines were to be administered in large numbers. Then, from March 21 to the present, tweets regarding COVID-19 vaccines had fluctuated from 1000 to 2500 per month, which indicated people’s emotions about them had greatly transformed <cit.>.
This is an example of another study that showed the power of using NLP techniques alongside machine-learning methods in probing people’s vaccination attitudes, as well as their underlying characteristics, across the globe towards the COVID-19 vaccines <cit.>.
§ EXPERIMENTAL PROCEDURE
A total of 30000 tweets were collected using the Twitter Research License. The extraction focused on hashtags related to vaccines and vaccination over a time period spanning from the 5th March 2020 - when COVID-19 was first identified in South Africa - to the 24th November 2021 when the Omicron variant was first detected in South Africa. Duplicate tweets were removed, leaving 27069 unique tweets. In this study, two distinct pre-processing methods were used: corpus-based and semantics-based methods - each having their own unique emoji dictionaries.
In the corpus-based or lexical pre-processing method, contractions were removed and replaced by their full forms, uppercase text was lower-cased, integers were removed, hashtags were removed, hyperlinks were replaced by the word `url', @mentions were replaced by the term `atUser', repetitions of emojis, as well as all punctuation marks were removed. Thereafter, using a pre-defined emoji lexicon, relevant emojis were replaced by their physical descriptions in words, while other emojis not thought to convey any sentiment were discarded. This was followed by the replacement of common slang terms with their formal definitions, using a slang-term lexicon. The last step was, then, tokenisation using the TweetTokenizer from the NLTK database.
In the semantics-based pre-processing method, the same afore-mentioned procedure was followed with a few differences i.e. all punctuation marks and integers were not removed, upper-cased text was not lower-cased, each @mention was replaced by the word `Name' followed by an integer denoting its position relative to other @mentions in each tweet. Furthermore, the emoji lexicon was revised in order to describe the context of the emotion inherent in the emojis and the dictionary of slang terms was extended to include slang terms meaning vaccine or vaccinated such as `vaxxed' and `vaxx'. Certain hallmark pre-processing steps were not followed i.e., the removal of stop-words, lemmatization, and/or stemming. This was deliberately done in order to preserve the context of the tweets and, thus, the core sentiment. See Tables 4 and 5 under Section II of the Appendix for more details.
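For concreteness, a minimal sketch of the corpus-based (lexical) pipeline is given below. The contraction, emoji and slang dictionaries shown are tiny illustrative stand-ins for the full lexicons described in the Appendix, and the exact ordering of steps follows the description above only approximately; TweetTokenizer is the NLTK tokenizer named in the text.

import re
import string
from nltk.tokenize import TweetTokenizer

# Tiny illustrative stand-ins for the full lexicons described in the Appendix.
CONTRACTIONS = {"can't": "cannot", "won't": "will not", "i'm": "i am"}
EMOJI_LEXICON = {"\U0001F637": "face with medical mask", "\U0001F489": "syringe"}
SLANG_LEXICON = {"lol": "laughing out loud", "gr8": "great"}

def lexical_preprocess(tweet):
    text = tweet.lower()                                   # lowercase (corpus-based method only)
    for short, full in CONTRACTIONS.items():
        text = text.replace(short, full)                   # expand contractions
    text = re.sub(r"https?://\S+", " url ", text)          # hyperlinks -> 'url'
    text = re.sub(r"@\w+", " atUser ", text)               # @mentions -> 'atUser'
    text = text.replace("#", "")                           # drop the hashtag symbol, keep the word
    text = re.sub(r"\d+", "", text)                        # remove integers
    for emoji, description in EMOJI_LEXICON.items():
        # collapse back-to-back repetitions, then translate to a word description
        text = re.sub("(?:" + re.escape(emoji) + ")+", " " + description + " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))   # strip punctuation
    words = [SLANG_LEXICON.get(w, w) for w in text.split()]            # replace slang terms
    return TweetTokenizer().tokenize(" ".join(words))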
Topic Modelling was then performed. The procedure was as follows: The dataset was converted to a list of tweets. Thereafter, standard pre-processing was performed, in which hashtags, urls, emojis and punctuation were removed. Bi-gram and tri-gram models were built with functions defined for stopword removal and text lemmatization. Then contractions were replaced with their full form and stopwords removed. Lemmatization was performed, in which the nouns, verbs, adverbs and adjectives were kept. A dictionary was created using ‘id2word’, in which each word was given its own integer I.D. The corpus from the lemmatized data was created in the form of a list of term and term frequency entries. An LDA was then built using the Gensim module lda-model function, whereby the optimal value for the number of topics was determined to be 5 at a coherence value of 0.3707. The top 30 most salient terms in each topic were extracted, topics visualized using the pyLDAvis tool and thereafter, the topics where identified.
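A hedged Gensim sketch of these topic-modelling steps is shown below, assuming docs is the list of cleaned, lemmatized token lists produced by the steps just described; note that the pyLDAvis module name (pyLDAvis.gensim_models) depends on the installed pyLDAvis version.

import gensim
import gensim.corpora as corpora
from gensim.models import CoherenceModel
import pyLDAvis.gensim_models

def fit_lda(docs, num_topics=5):
    # docs: list of lemmatized token lists produced by the cleaning steps above.
    id2word = corpora.Dictionary(docs)                       # each word gets an integer id
    corpus = [id2word.doc2bow(doc) for doc in docs]          # (term id, term frequency) pairs
    lda = gensim.models.LdaModel(corpus=corpus, id2word=id2word,
                                 num_topics=num_topics, random_state=42, passes=10)
    coherence = CoherenceModel(model=lda, texts=docs, dictionary=id2word,
                               coherence="c_v").get_coherence()
    vis = pyLDAvis.gensim_models.prepare(lda, corpus, id2word)   # interactive topic visualization
    return lda, coherence, vis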
§.§ Hand-Labelling of Tweets
Before we can motivate why we used manual labelling over automated labelling that uses sentiment analysis algorithms, it is useful to consider why automated sentiment analysis is so popular and the challenges that arise in machine learning when using a classification algorithm or when building a specific type of machine learning classification model. Firstly, in both manual and automatic labelling, there are unavoidable factors that will impact the reliability of the labelled data to some extent, thus making any analytic results not directly applicable to real-world problems. The most concerning factors are mentioned below:
* Subjectivity of Text
* Context and Tone of the statement
* Presence of Sarcasm and Irony
* Presence of Negations
* Use of Emojis and Special Characters
* Use of idiomatic expressions
* Use of Colloquialisms and Slang <cit.>.
These are all relevant because many misclassifications by an algorithm or classification model arise directly from these factors. Moreover, even though many classification algorithms have been formulated such as TextBlob and VADER <cit.>, AI-based algorithms continue to struggle with - or are completely incapable of - detecting and understanding human emotion, and since tweets contain a strong emotional component, this may give rise to misinterpretation of text and incorrect labelling - when analysed by humans <cit.>.
Despite this, the benefit of using automated labelling over manual labelling is 2-fold:
* Removal of Text Subjectivity
* Reliable and Realistic Labels <cit.>.
Firstly, text subjectivity which arises from the fact that the meaning behind a statement is understood through our own life experiences and unconscious biases that, we, as humans have developed over the years, is no longer an influencing factor. This is obvious, since the tone and context of a piece of text is not considered by an algorithm i.e., it is always objective in its decision making - unlike humans who often will encounter texts that are difficult to classify.
Secondly, for human beings, labelling text is a long, tiring, and time-consuming process, which is not the case of machines. For example, sentiment analysis algorithms can analyze hundreds of Mbs of text within minutes - while the average human would struggle to label more than 45 tweets in an hour <cit.>.
However, we would like to draw meaningful conclusions from our analysis. So, it is important that the dataset that we used for training and testing NLP classifiers have reliable and realistic labels that are applicable in the real world <cit.>. Thus, hand-labelling of our dataset is justified in this regard.
Furthermore, even though the precision of these classification algorithms is quite high owing to a consistent sentiment analysis not impacted by subjectivity, the accuracy of the labels from a human perspective would be incredibly low <cit.>. Even though human beings would occasionally disagree on the correct label of a text in a large enough dataset, they are still much better apt at understanding the meaning behind the text <cit.>.
It is possible to mitigate the effect of subjectivity when hand-labelling text. This is especially useful and the findings of sentiment analysis would be relevant and important to policymakers. However, one can extend this mitigation by creating a fixed and unchanging bias that is used during the manual labelling process. This is not easy but the more defined the subject matter of the text that we are analysing, the more consistent and dependable the dataset will be, once labelling is complete <cit.>. This is imperative and aligns with the aim of this study, which is to create a machine learning model that would be able to accurately predict sentiments pertaining to vaccine hesitancy in order to guide current policy-makers during a pandemic.
The hand-labelling of the dataset was done by several persons in the team using a strict, clear, and consistent set of rules to minimise the frequency of disagreements on the correct label for a particular tweet. Such workforce also serves to minimize labelling errors, maximize quality control by checking that the labelling rules used were correctly and consistently implemented amongst the labellers and by finding consensus on difficult-to-label tweets.
A collection of 30000 tweets was selected to be hand-labelled. A label is ascribed to each tweet based on the opinion of the author towards a particular theme or topic - in this case, vaccination. Each tweet was hand-labelled into one of three sentiment classes, i.e., positive, negative, or neutral. The criteria for hand-labelling involved answering a simple question: “Does the author of this comment approve or disapprove of taking a vaccination shot against COVID-19, and to what extent does he/she agree or disagree?"
To answer this question, a careful look at the punctuation, grammar, choice of words and symbols as well as tone inherent in the tweets were examined. To make things easier to categorize, easy-to-label tweets were labelled first and difficult-to-label, in which both negative and positive sentiments could be found in the tweet, were left towards the end.
In this paper we adopted IBM's definition of vaccine hesitancy, which is based on the WHO's definition: a negative sentiment was defined as a refusal to get vaccinated - referred to as overt hesitancy; a positive sentiment was defined as a decisive decision to get vaccinated; and a neutral sentiment was defined as a delay towards getting vaccinated, i.e., an indecisive temperament towards vaccination - referred to as subtle hesitancy <cit.>.
Additionally, a statement in which the author's viewpoint is unclear or unrelated to vaccination is by default labelled as neutral. Hence, we argue that the practice of hand-labelling is superior to automated classification algorithms - which frequently mislabel text that contain certain tones and contexts especially when negations, colloquial slang, emojis, and sarcasm are present, as previously said.
Refer to Table 3 under Section I of the Appendix for examples where the aforementioned statement holds true.
In the next section, we introduce the type of machine learning algorithms/models that we used, briefly discussing the architecture of the models, as well the data processing, training, and testing steps that were involved.
§.§ Support Vector Machines
The Support Vector Machine (SVM) is a popular supervised machine learning algorithm that can be utilized for classification as well as for solving regression problems. In our model, feature extraction and the “Bag of Words" model were implemented after pre-processing, with vectorisation performed by the TfidfVectorizer on the text samples, while the labels were label-encoded using the LabelEncoder function from sklearn. The maximum number of features, in this case, was chosen to be 5000. This procedure is the preferred data preparation technique for SVMs within the context of NLP <cit.>.
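A minimal scikit-learn sketch of this pipeline is shown below; the train/test split ratio and the initial kernel, C and gamma values are placeholders, since the actual values were selected by the hyperparameter search described later.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def train_svm(texts, labels):
    # texts: pre-processed tweets joined back into strings;
    # labels: 'positive' / 'neutral' / 'negative' hand labels.
    X = TfidfVectorizer(max_features=5000).fit_transform(texts)
    y = LabelEncoder().fit_transform(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel, C and gamma are the tuned hyperparameters
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    return clf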
§.§ LSTM and Bi-LSTM
Both LSTMs and Bi-LSTMs are recurrent neural networks (RNNs). LSTM-based models make use of both Long-Term Memory (LTM) and Short-Term Memory (STM) in simplifying calculations through the application of gates i.e., Forget Gate, Learning Gate, Recall Gate and an Output Gate <cit.>. Owing to its bi-directionality, the Bi-LSTM is, in general, considered to be more effective in the deep learning process as compared to LSTMs <cit.>.
The architecture of both models was chosen to be the same, for this study, i.e., both the LSTM and Bi-LSTM models consisted of an Input layer followed by an Embedding layer, Dense layer, two LSTM or bi-LSTM layers, another Dense layer and finally an Output Dense layer, whereby each individual layer is separated by a Dropout layer.
The activation function for all the Dense layers was chosen to be `relu', except for the Output layer with activation function, `softmax'. The model was compiled with a loss function of `categorical crossentropy'. The argument for stacking LSTM or Bi-LSTM layers on top of each other is to allow for a greater model complexity <cit.>.
The target labels were not label-encoded into ordinal variables of varying weight, as was done in the case of the SVM, but were instead one-hot encoded into categorical variables that each carry equal weighting. Our choice of embedding technique was feature extraction with a maximum number of features set to 2000.
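The sketch below shows the stated architecture in Keras for the Bi-LSTM case (the LSTM variant simply drops the Bidirectional wrappers); the layer widths, dropout rate and sequence length shown are placeholders to be overridden by the tuned hyperparameters.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_bilstm(max_features=2000, embed_dim=128, hidden_dim=64,
                 dense_units=64, dropout=0.3, seq_len=60):
    # Layer widths and dropout are placeholders; tuned values come from the WandB sweep.
    model = models.Sequential([
        tf.keras.Input(shape=(seq_len,)),
        layers.Embedding(input_dim=max_features, output_dim=embed_dim),
        layers.Dropout(dropout),
        layers.Dense(dense_units, activation="relu"),
        layers.Dropout(dropout),
        layers.Bidirectional(layers.LSTM(hidden_dim, return_sequences=True)),
        layers.Dropout(dropout),
        layers.Bidirectional(layers.LSTM(hidden_dim)),
        layers.Dropout(dropout),
        layers.Dense(dense_units, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(3, activation="softmax"),       # one-hot targets: positive / neutral / negative
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model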
§.§ BERT and RoBERTa
Over the past years, supervised models have shown consistently better results than unsupervised models, until the introduction of the pre-trained BERT text-embedding model, which enabled unprecedented precision in many automated word-processing tasks. This model replaced the widely known “word2vec" model in prevalence, becoming the industry standard, which motivates the use of BERT in this study. BERT-base-cased was chosen since it does not lowercase the sample text, thus preserving tone and context, and takes less computational power and time to train than BERT-large-cased. Soon after the introduction of BERT, the Robustly Optimized BERT Pre-training Approach, RoBERTa, was developed. RoBERTa is a retraining of BERT with an improved training methodology, more data and more computational power: it removes the Next Sentence Prediction (NSP) task from BERT's pre-training and instead introduces dynamic masking. For this reason, RoBERTa-base was also chosen for this study. Both the BERT-base-cased and RoBERTa-base models were fine-tuned, trained and evaluated on our dataset, with the results then compared against pre-selected pre-trained models evaluated on our dataset.
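A hedged sketch of one possible fine-tuning setup, using the Hugging Face transformers Trainer, is given below; the dataset wrapping, sequence length, and the particular learning rate, batch size and epoch values shown are placeholders for the tuned values reported in the Appendix, and labels are assumed to be integer-encoded (e.g. 0 = negative, 1 = neutral, 2 = positive).

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def finetune(texts, labels, model_name="bert-base-cased"):   # or "roberta-base"
    # labels: integer-encoded sentiments (a placeholder ordering, e.g. 0/1/2).
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
    ds = Dataset.from_dict({"text": texts, "label": labels})
    ds = ds.map(lambda batch: tok(batch["text"], truncation=True,
                                  padding="max_length", max_length=128), batched=True)
    ds = ds.train_test_split(test_size=0.2, seed=0)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5,
                             weight_decay=0.0)               # weight decay fixed at zero, as in the text
    trainer = Trainer(model=model, args=args,
                      train_dataset=ds["train"], eval_dataset=ds["test"])
    trainer.train()
    return trainer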
§.§ Model Hyper-parameters Used
In this study, all the machine learning models underwent extensive hyperparameter tuning using Bayesian optimization. The hyperparameters chosen for tuning in the case of the SVM were the cost function, C, γ, and the kernel. The hyperparameters chosen for tuning for both the LSTM and bi-LSTM models were the dropout rate, learning rate, weight decay, batch size, dense units, embedded dimensions, hidden dimensions, number of epochs and choice of optimizer.
The hyperparameters chosen for tuning for both BERT-base-cased and RoBERTa-base were the learning rate, batch size and number of epochs - but not the weight decay, which was set to zero. Given the uniqueness and complexity of the data-set and subject matter, the slight shift away from a balanced dataset, the small size of the dataset, as well as the non-typical method of labelling that was used, the overall and individual F1-scores were chosen as the defining measures for which the models could be assessed. For the Model-specific hyperparameters and their pre-selected ranges, please see Tables 6 and 7 of the Appendix under Section-III. Next, we will discuss model performance.
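For reference, a WandB sweep configured for Bayesian optimisation over these transformer hyperparameters might look as follows; the metric name and the ranges shown are illustrative placeholders (the actual ranges appear in Table 6 of the Appendix), and train_fn is assumed to wrap a single training-plus-evaluation run that logs the metric with wandb.log.

import wandb

# Illustrative sweep over the transformer hyperparameters named above;
# the actual tuning ranges are listed in Table 6 of the Appendix.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "eval_f1", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 1e-6, "max": 1e-4},
        "batch_size": {"values": [8, 16, 32]},
        "epochs": {"values": [2, 3, 4, 5]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="vaccine-hesitancy")
# wandb.agent(sweep_id, function=train_fn, count=30)   # train_fn wraps one training + evaluation run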
§ RESULTS AND DISCUSSION
§.§ Machine Learning Models
After hand-labelling the tweets, the distribution of sentiments in our dataset was as follows: 31.7% positive; 36.1% neutral; 32.2% negative. There is no dominant sentiment, and the three classes are roughly evenly represented.
Table 1, above, shows a summary and comparison of optimised model performance for the various models. In terms of model performance, the LSTM model achieved an overall precision of 48% and overall accuracy of 49%, with a combined F1-score of 49% using the semantics pre-processing, with a similar result being achieved with an overall precision of 50%, an overall accuracy of 48%, and a combined F1-score of 48%, on the lexical pre-processing approach. As expected, the Bi-LSTM models performed better than the LSTM models, with the Bi-LSTM model achieving an overall precision of 49%, an overall accuracy of 51%, and a combined F1-score of 50% on the semantics pre-processing approach, but a significantly better result on the lexical pre-processing approach with a higher overall precision of 53%, and an overall accuracy of 52%, yielding an F1-score of 52%. Furthermore, the SVM model achieved identical results for both pre-processing methods with an overall precision of 54%, an overall accuracy of 54%, and a combined F1-score of 54%. It is clear that the SVM model produced results that were better than both pairs of LSTM and Bi-LSTM models, which may sound counter-intuitive at first glance, but can be explained as follows: feature embeddings as used in the SVM model generally perform better than word embeddings as were used in the LSTM/bi-LSTM models in the context of NLP.
Results of pre-selected BERT and RoBERTa pre-trained models served as a comparison for the performance of our respective fine-tuned models. Since their classification measures were much lower than those of our fine-tuned models, this showed that the manual labelling of the vaccination-hesitancy dataset was a novel approach, and confirmed that the task is more complex than simple sentiment analysis, in which texts are categorised into positive, negative or neutral labels based on the overall emotion inherent in the text rather than on a specific topic. The pre-trained RoBERTa model chosen was a Twitter sentiment model by the Cardiff NLP group <cit.>; it was trained on ≈ 58M tweets and fine-tuned for sentiment analysis on the TweetEval benchmark using positive, negative and neutral labels. The pre-trained BERT model chosen is a fine-tuned version of a multilingual BERT-base sentiment model by the NLP Town group <cit.>, which is a model for sentiment analysis using positive, negative and neutral labels, trained to classify product reviews on a scale of 1 to 5 stars in English, German, French, Spanish, and Italian. The pre-trained BERT model achieved an overall precision of 46% and an overall accuracy of 46%, yielding an F1-score of 46%, while our fine-tuned BERT-base-cased model achieved a much better result with an overall precision of 60% and an overall accuracy of 61%, yielding an F1-score of 60%. By pairwise comparison, the RoBERTa models performed better than the BERT models, with the pre-trained RoBERTa model achieving an overall precision of 46% and an overall accuracy of 48%, yielding an F1-score of 48%, while the fine-tuned RoBERTa model achieved a much better result with a higher overall precision of 62% and an overall accuracy of 61%, yielding an F1-score of 61%.
Hence, from these results, one can conclude that based on the overall weighted F1-score, the best models for this classification problem in decreasing order of optimal performance were fine-tuned Roberta-base, fine-tuned BERT-base-cased, SVM, Bi-LSTM, LSTM, pre-trained RoBERTa-base and lastly pre-trained BERT-base-uncased.
§.§ Topic Modelling
An LDA was performed on the set of tweets that were misclassified by the best-performing model – in this case, RoBERTa-base. The top 10 most frequent terms per LDA cluster grouping are shown below in Table 2. Given these keywords, the topics related to vaccine hesitancy were inferred and described, i.e., mass-vaccination roll-out schemes in terms of availability and service delivery, defiance in response to international travel restrictions that target the non-vaccinated or partially vaccinated population, safety concerns about severe side-effects from the vaccine, and concerns about the ineffectiveness of vaccines in preventing the spread of the virus. The five topics inferred from the LDA, whose clusters are visualized in Figure 1, are:
* Topic 1 : Inefficient mass vaccination
* Topic 2 : Selective air travel restrictions
* Topic 3 : Severe side-effects
* Topic 4 : Inescapable from illness/death
* Topic 5 : Ineffective to COVID
§.§ Limitation Of Study
Overall, the model performance is quite low, with 60-65% being a very average score and also the range containing the best performance achieved by our set of models. This could be due to a number of reasons: in the case of the LSTM/bi-LSTM models, additional LSTM layers may be needed for the model to better grasp the complexity inherent in the dataset; for the SVM, a different word-embedding technique and an alternative model, other than feature extraction and the `Bag of Words' approach, might have resulted in better performance; and in the case of BERT and RoBERTa, the -LARGE- formulations of these models could be used instead of the -BASE- formulations to improve performance, which may be further enhanced by incorporating some of the pre-processing steps used in the other, non-Transformer models, by fine-tuning on other BERT and RoBERTa models trained on similar pandemic-related sentiment-classification use-cases, or by pairing a particular BERT or RoBERTa model with an alternative tokenizer taken from a different BERT or RoBERTa model.
§ CONCLUSION
In conclusion, the models used were LSTM, bi-LSTM, SVM, BERT-base-cased and RoBERTa-base, whereby their hyperparameters were carefully chosen and tuned using the WandB platform and trained on a hand-labelled dataset containing tweets from South Africa on the topic of vaccine hesitancy. Out of all the machine learning models, excluding the pre-trained and fine-tuned ones, SVM was the best model with an overall F1-score of 54%, followed by the bi-LSTM with an overall F1-score of 52% and lastly the LSTM with an overall F1-score of 49%. The best model overall was the fine-tuned RoBERTa-base model with an overall F1-score of 61%, followed closely by the fine-tuned BERT-base-cased model with an overall F1-score of 60%, where the best model was defined as the model with the highest overall F1-score. From the LDA on the misclassified tweets of the fine-tuned RoBERTa model, certain types of vaccine hesitancy were identified as topics, which would serve to improve our best model's performance in future studies to better detect vaccine hesitancy and guide policymakers in managing the pandemic. Furthermore, since BERT and RoBERTa are Transformer models that can be trained on downstream tasks, further training on additional datasets with pandemic-related use-cases other than vaccination hesitancy - such as public compliance with other safety measures or the degree of faith in government interventions - whose data may originate from other countries around the globe, could essentially pave the way towards a universal tool for early disease detection and the enforcement of public compliance during public health crises or emergencies.
§ APPENDIX
§ APPENDIX I
§ HAND-LABELLING OF TWEETS
Here we highlight the pitfalls of using text classification algorithms over hand-labelling using explicit examples. In Table 3, the four different cases of tweets one would encounter when performing sentiment analysis along with three hand-labelled examples for each case, each corresponding to one of the three sentiment classes i.e., positive (+), negative (-), neutral (0) are provided. The four categories are: clear-cut cases, borderline cases, difficult-to-label tweets and same text tweets. Clear-cut cases correspond to tweets whose sentiment labels are obvious and there is no debate on the validity of its classification - in other words the tweet's polarity is heavily skewed towards a single sentiment type. Borderline cases correspond to tweets that can arguably take on one of two labels i.e., either neutral or positive or alternatively neutral or negative, whereby the author's point of view is debatable. Difficult-to-label tweets are tweets that contain both positive and negative sentiments each with high polarity scores, which makes it difficult to decided on the overall text polarity. Same text tweets are a class of tweets whereby the raw text is identical but differ in the amount of punctuation and/or emojis present in the tweet, which serve to change the message behind the tweet often through the introduction of satire. Two different classification algorithms were selected namely, VADER and TextBlob. These classification algorithms were then given each example tweet and their predicted labels were compared to the manually-classified tweet labels. The results are presented in the table. Overall VADER correctly predicted the labels of 50% of the tweets, in which 100% of the clear-cut case examples were classified correctly, while none, or 0%, of the border-line case tweets were classified correctly and only one third, 33%, of the difficult-to-label or the same text tweets were correctly labelled. VADER was able to get 50% recall for each respective class. Comparatively, TextBlob correctly predicted the labels of a third, or ≈ 33%, of all the tweets, in which two thirds, or ≈ 67%, of the clear-cut case examples were classified correctly, while none, or 0%, of the border-line case tweets were classified correctly and none, 0%, of the difficult-to-label and the same text tweets were correctly labelled. TextBlob got recalls of 25% for the positives, 75% for the neutrals but nothing, 0%, for the negatives. This shows that both classification algorithms perform well on simple clear-cut examples, but become much less efficient in correctly classifying tweets, as the complexity of the tweets increases. Furthermore, given the recall values, it is apparent that VADER is equally good in labelling each sentiment type, while TextBlob strongly favours a neutral label. In both cases, the overall accuracies are very low in comparison to hand-labelling and it is clear that when given same text tweets, the algorithms are unable to identify sarcasm or the nuanced effect of changing punctuation marks i.e., from ! to ?, given that VADER provided a positive label for each sentiment belonging to the same text case, while TextBlob provided all neutral labels. Hence, the table clearly highlights the advantages of manual over automated hand-labelling.
§ APPENDIX II
§ MATERIALS AND METHODS
Here, we show the similarities and differences between the corpus-based and semantics-based approaches. From Table 4, which provides and contrasts the emoji-to-text translation of each approach, it is clear that the two emoji lexicons are very different from each other i.e., while the lexical definition of an emoji is the statement of the physical features of the emoji in words with any punctuation marks, the semantics definition is the statement of the message and emotional intensity attached to that message that is communicated through the use of a particular emoji. The emotional intensity is given by the choice of punctuation marks; in this case only exclamation marks or full stops were used. Not shown in the table are examples of emojis that both methods do not have a definition for and are instead replaced by white-spaces. These emojis were deemed to not carry a sentiment, with an example being the soccer ball emoji.
Here we provide and contrast the pre-processing rules used in the two approaches with explicit examples. The idea behind using two different pre-processing methods was to compare the performance of the models once they were fully trained in order to gain insight into how the models `learn'. Unfortunately, since similar results were obtained by both approaches and a full semantics analysis was not performed, no conclusions or insights could be made on this matter. From Tables 5, it is clear that the pre-processing steps involved in each approach are largely the same, with the exception of how emojis are treated. In both methods, contractions are expanded into their full expressions, spelling errors are not corrected, back-to-back repetitions of emojis within a tweet are discarded leaving behind one of them before it is translated, slang terms are replaced by their formal expressions, hashtags in front of words are removed leaving the words behind, urls and @mentions are replaced by some more generic expression, double spaces are contracted into single ones. The minor differences come about in the way that the urls, @mentions and slang words related to vaccination are treated by each approach. In the lexical approach, there are no corrections for slang terms pertaining to the word vaccine and its different forms according to the part of speech it adopts in a sentence. The major differences between the two approaches are in the way emojis, punctuation marks, uppercase letters and numerical characters are treated. In the semantics approach, punctuation marks and back-to-back repetitions of punctuation marks are not discarded, uppercase letters were not lower-cased and numerical characters were kept. This is not the case in the lexical approach.
§ APPENDIX III
§ MACHINE LEARNING MODELS
In Table 6, below, we highlight the hyperparameters of each model; we explicitly show the chosen model-specific hyperparameters and their associated tuning ranges. The SVM model was chosen to have three hyperparameters, i.e., the kernel, gamma, and the cost function, while the LSTM and bi-LSTM models were given the following hyperparameters: the dropout rate, learning rate, batch size, dense units, embedding dimensions, hidden dimensions, number of epochs, weight decay, and the choice of optimizer. The BERT and RoBERTa models were chosen to have three hyperparameters, namely the number of epochs, dropout rate, and the learning rate. The possible kernels for the SVM models were the rbf and linear kernels, while the candidate optimizers for the RNN models were adam, adamax, rmsprop, and SGD. Note that the optimizer and weight decay were not treated as hyperparameters for the transformer models, since the weight decay was found to be the least important relative to the other three hyperparameters when included, and since adamW is the standard optimizer used in all BERT/RoBERTa models. Also note that the tuning ranges are much larger than usual, especially in the case of the RNN models, so that an extensive hyperparameter search and optimisation could be performed using the WandB platform. The mode of optimisation was chosen to be Bayesian optimisation.
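For reference, a Bayesian hyperparameter search of the kind described above can be configured on the WandB platform roughly as follows. The parameter names, ranges, and the metric name val_f1 are illustrative assumptions rather than the exact values from Table 6; the wandb.sweep and wandb.agent calls follow the library's standard sweep API.

import wandb

sweep_config = {
    "method": "bayes",                       # Bayesian optimisation over the search space
    "metric": {"name": "val_f1", "goal": "maximize"},
    "parameters": {                          # illustrative ranges, not those of Table 6
        "learning_rate": {"min": 1e-5, "max": 1e-1},
        "dropout": {"values": [0.1, 0.2, 0.3, 0.5]},
        "batch_size": {"values": [16, 32, 64, 128]},
        "optimizer": {"values": ["adam", "adamax", "rmsprop", "sgd"]},
    },
}

def train():
    with wandb.init() as run:
        config = run.config               # sampled hyperparameters for this trial
        # ... build and train the model with `config`, then log the validation F1
        run.log({"val_f1": 0.0})          # placeholder value

sweep_id = wandb.sweep(sweep_config, project="covid-vaccine-sentiment")
wandb.agent(sweep_id, function=train, count=50)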
In Table 7, below, we explicitly provide the optimised hyperparameter values for each model under the lexical-based and semantics-based pre-processing methods, respectively. In the case of the SVM models, the obtained values were identical for both pre-processing methods. One can immediately see that some of the optimal hyperparameter values are unusual or uncommon for these models, particularly in the case of the RNN classification models and for both pre-processing methods, owing to the uniqueness and complexity of our dataset for this particular use case and the wide tuning range of each hyperparameter.
In Table 8, below, we show the model performances of various classification models on the hand-labelled Covid-19 dataset. The results from the VADER and TextBlob algorithms served as a comparison of the degree of similarity or dissimilarity in the criteria used when classifying sentiments via automated means versus classifying sentiments using a manual approach.
In this case, these algorithms showed a poor correlation with the hand labels, i.e., overall F1-scores of 43% and 37%, respectively, which again highlights the superiority of manual labelling over automated labelling. The pre-trained NLP-Town BERT model, when tested on the COVID-19 dataset, achieved an overall precision of 46% and an overall accuracy of 46%, yielding an F1-score of 46%, while the pre-trained Cardiff-NLP RoBERTa model achieved a similar result with an overall precision of 46% and an overall accuracy of 48%, yielding an F1-score of 47%. The original BERT-BASE-CASED model, when tested on the COVID-19 dataset, achieved an overall precision of 50% and an overall accuracy of 48%, yielding an F1-score of 50%, while our COVID-19 BERT-BASE-CASED model achieved a much better result with an overall precision of 60% and an overall accuracy of 61%, yielding an F1-score of 60%. By comparison, the original RoBERTa-BASE model, when tested on the COVID-19 dataset, achieved an overall precision of 49% and an overall accuracy of 50%, yielding an F1-score of 49%, while the COVID-19 RoBERTa-BASE model achieved a much better result with an overall precision of 62% and an overall accuracy of 61%, yielding an F1-score of 61%. The superior performance of the COVID-19 models, when compared to the pre-trained NLP-Town BERT and Cardiff-NLP RoBERTa models, illustrates both the complexity of the dataset and use case relative to sentiment analysis on much simpler use cases with the same labels, and the cultural and linguistic differences in the way people communicate in South Africa as compared to the rest of the world. The superior performance of the COVID-19 models relative to the original BERT and RoBERTa models shows that significant training has been achieved.
§ ACKNOWLEDGMENTS
We give special thanks to the IBM team, with whom we had many valuable discussions, as well as to Malipalema Khang and Abhaya Kumar Swain for technical support during the initial phase of this project. We also thank Mahnaz Alavinejad for useful discussions. We thank Canada's International Development Research Centre (IDRC) and the Swedish International Development Cooperation Agency (SIDA) (Grant No. 109559-001) for funding this research.
§ AUTHOR BIOGRAPHIES
Nicholas Perikli is an Astrophysics and Biological Sciences graduate with an Honours in Physics, currently pursuing an MSc in Particle Physics at the University of the Witwatersrand. He has interests in varying fields of academia, enjoys tutoring Maths and Physics, has extensive experience with Python, is well-versed in the field of NLP, and has been privileged to take shifts in the ATLAS Control Room during Run 3. He is creative, a fast learner, and a deep thinker with an innate and deep curiosity that drives him to question the nature of everything and compels him to continuously learn new things, develop new skills, gain and expand on existing knowledge, and apply this knowledge and these skills to various research topics and fields. (ORCID ID: 0000-0002-8963-4290).
Srimoy Bhattacharya has received a Ph.D. in High Energy Physics Phenomenology from the Indian Institute of Technology, Guwahati, (IITG) in 2018. He is currently, a Post-Doctoral Researcher at the University of the Witwatersrand, Johannesburg, South Africa. In addition to the cutting-edge fields in High Energy Physics phenomenology, his current research interests include social media computing, NLP, machine learning for health, and data science for social benefit. Contact Srimoy at [email protected] (ORCID ID: 0000-0002-9468-5113).
Blessing Ogbuokiri received the Ph.D. degree in computer science from the University of the Witwatersrand, Johannesburg, South Africa. He is currently a postdoctoral researcher in the Africa Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) laboratory, Department of Mathematics and Statistics, York University, Toronto, Canada. He has received several academic awards and research grants. He is the recipient of the Dahdaleh Institute Seed Grant for Critical Social Science Perspectives in Global Health Research. His recent research interests include machine learning for health, natural language processing, data science for social good and social media computing. Contact Blessing at [email protected] (ORCID ID:0000-0003-1606-0019). Visit Blessing at (https://www.blessingogbuokiri.com/).
Zahra Movahedi Nia is a postdoctoral researcher in the Africa Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) laboratory, York University (Toronto, Canada). She received her PhD in computer engineering, from University of Isfahan (Iran). Her research interests include machine learning, deep learning, and data analytics. Contact Zahra at [email protected] (ORCID ID:0000-0002-5528-638X).
Benjamin Lieberman was born in 1994 in Johannesburg, Gauteng province, South Africa. He received his Bachelor of Science (B.Sc) in the field of nuclear physics and engineering, from the University of Witwatersrand in 2016. He completed his Bachelor of Engineering Honours in Mechanical Engineering at the University of Cape Town in 2018. He began his Master of Science (M.Sc) in the field of Particle Physics at the University of Witwatersrand in collaboration with the ATLAS experiment at the European Organization for Nuclear Research (CERN) in 2019. In 2021 his research was upgraded to a Doctor of Philosophy (PhD) with focus on machine learning applications in particle physics. His research interests lie in the area of applications of semi-supervised machine learning techniques in discovering new bosons at the Large Hadron Collider (LHC), as well as applying scientific modelling and machine learning techniques for epidemiological response. He has specific interest in applying data driven solutions, including machine learning and epidemiological modelling to help inform the COVID-19 response in South Africa. Contact Benjamin at [email protected] (ORCID ID: 0000-0001-5281-8937).
Nidhi Tripathi received the master’s degree in computer engineering from Iowa State University, USA in 2017 and is currently a Ph.D. candidate at University of Witwatersrand, South Africa and a member of iThemba labs, South Africa. Her research interest is machine learning based data analysis and modelling. Contact Nidhi at [email protected] (ORCID ID: 0000-0001-7518-1238).
Dr. Salah-Eddine Dahbi is presently a postdoctoral fellow in the Institute for Collider Particle Physics at the University of the Witwatersrand and a member of the ATLAS collaboration at the CERN Large Hadron Collider. Salah's dissertation research focused on two topics. The first addresses the fundamental question of electroweak symmetry breaking and the search for new physics predicted in different theoretical models by performing a resonant search. The second part of his PhD thesis is related to the performance of the ATLAS detector at the High Luminosity LHC, where the luminosity is expected to be up to ten times the nominal one, leading to a degradation of the ATLAS sub-components, especially in the forward region closest to the proton-proton interaction point. Dr. Salah-Eddine has been conducting a novel project at the Institute for Collider Particle Physics to search for new resonances beyond the Standard Model of particle physics using machine learning techniques. The creativity in this study is that the method he developed helped the team to reveal new physics under a huge amount of background in the context of weak prior knowledge. Email: [email protected] (ORCID ID: 0000-0002-5222-7894).
Finn Stevenson is an Electro-Mechanical Engineering graduate from the University of Cape Town, currently pursuing a Masters degree through the University of the Witwatersrand in artificial intelligence and data science. His work over the past year has been primarily focused on COVID-19 data modelling in South Africa as a member of the Wits-iThemba COVID-19 Modelling team, the official modelling team of the Gauteng province, which has provided valuable AI-driven tools related to COVID-19 case data predictions, economic recovery predictions, data-informed policy recommendations, and the development of early alert systems. During this period of working on COVID-19 related content, he has learnt many skills and connected with many interesting people owing to the urgency and necessity of the work. He is now applying the techniques he has learnt to data from the LHC at CERN for anomaly detection in Beyond the Standard Model searches as part of the ATLAS experiment, and in the future hopes to apply the broad skill set he has developed to many exciting projects, being most passionate about work that is novel and experimental in nature. Outside of the office (home office), he is a keen adventurer who loves being outdoors, on the mountain, in the forest or in the ocean; he is an avid surfer, skateboarder and hiker who never misses the opportunity of a spontaneous adventure, whatever that might entail. Contact Finn at [email protected] (ORCID ID: 0000-0003-0444-2992).
Nicola Luigi Bragazzi got his MD in general medicine and surgery from Genoa University (Genoa, Italy) in 2011, his PhD in biophysics from Marburg University (Marburg, Germany) in 2014 and his specialization in Public Health from Genoa University (Genoa, Italy) in 2017. He is currently with the Department of Food and Drugs, University of Parma, Parma, Italy. Contact Bragazzi at [email protected] (ORCID ID: (0000-0001-8409-868X).
Jude Kong is the Director of the Africa-Canada Artificial Intelligence and Data Innovation Consortium (ACADIC). He leads two other networks of researchers that are designing early warning frameworks for emerging infectious disease outbreaks: Mathematics for Public Health (MfPH) and One Health Modelling Network for Emerging Infectious Diseases (OMNI). He is an expert in mathematical modelling, artificial intelligence, data science, infectious disease modelling and mathematics education. His principal research program focuses on the use of quantitative methods to improve decision-making for epidemic and pandemic prevention, preparedness and response. In 2020, he won a York Research Leader Award. In 2021 he was spotlighted among Canadian Innovation Research Leaders 2021(https://researchinfosource.com/pdf/CIL2021.pdf) for his work with ACADIC. In 2022, he was spotlighted as a Change Maker by People of York University for his work in helping others learn mathematical concepts and encouraging them to find their passion and achieve more than they thought was possible (https://www.yorku.ca/peopleofyu/2022/02/18/ jude-kong-faculty/). Contact Jude Kong at [email protected] (ORCID ID: 0000-0002-7557-5672).
Prof. Bruce Mellado is the Co-president of Africa Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) and a member of the Gauteng Premier’s COVID-19 Advisory Committee, where he leads work on predicting and forecasting the dynamics of COVID-19. He is the recipient of several awards and fellowships. He is an Internationally acclaimed, B1-rated researcher of the National Research Foundation of South Africa. He is an expert in Artificial Intelligence. Contact Bruce at [email protected] (ORCID ID: 0000-0003-4838-1546).
|
http://arxiv.org/abs/2307.07477v1 | 20230714165908 | Population Expansion for Training Language Models with Private Federated Learning | ["Tatsuki Koga", "Congzheng Song", "Martin Pelikan", "Mona Chitnis"] | cs.LG | ["cs.LG", "cs.CL", "cs.CR"] |
Population Expansion for Training Language Models with Private Federated Learning

Tatsuki Koga (UC San Diego; work done while interning at Apple), Congzheng Song (Apple), Martin Pelikan (Apple), Mona Chitnis (Apple)
Correspondence: Congzheng Song <[email protected]>
Keywords: Machine Learning, ICML
Federated learning (FL) combined with differential privacy (DP) offers machine learning (ML) training with distributed devices and with a formal privacy guarantee.
With a large population of devices, FL with DP produces a performant model in a timely manner.
However, for applications with a smaller population, not only does the model utility degrade as the DP noise is inversely proportional to population, but also the training latency increases since waiting for enough clients to become available from a smaller pool is slower.
In this work, we thus propose expanding the population based on domain adaptation techniques to speed up training and improve the final model quality when training with small populations.
We empirically demonstrate that our techniques can improve the utility by 13% to 30% on real-world language modeling datasets.
§ INTRODUCTION
Federated learning (FL) <cit.> enables training machine learning (ML) models using on-device data and is widely used in our daily lives as usage of mobile devices, e.g., smartphones, smart watches, and smart speakers, increases.
Although FL, by design, does not require raw data to be transmitted from devices, privacy breaches can happen by transmitting model gradients to the central server.
Thus, FL algorithms are modified to satisfy differential privacy (DP) <cit.> to provide a formal privacy guarantee.
We refer this learning framework as private federated learning (PFL).
Successful ML models trained with PFL typically require the number of devices sampled at each round, cohort size, to be large enough to reduce the detrimental impact of DP noise on the model utility <cit.>.
The requirement of large cohort size, which is easily met with hundreds of millions of devices, can be hard to fulfill for applications with device-constrained populations.
For a motivating example, to train a language model (LM) with PFL for automatic speech recognition (ASR) system in a virtual assistant, the on-device training data are transcribed speech.
For popular languages, such as English or Chinese, there are ample devices with transcriptions.
However, for less popular languages such as Romanian or Swahili, the population with data is orders of magnitude smaller due to the limited speaker base.
In such small populations, as we will show in Section <ref>, the server needs to spend much longer waiting for a full cohort of devices to become available in each iteration, which is impractical for models that require thousands of iterations to converge.
Thus, PFL has the tradeoff among privacy, utility, and latency for device-constrained applications.
Our contributions
In this work, we develop approaches to expand the population size to address the latency bottleneck for PFL in the device-constrained scenarios.
We propose to use data from different applications than the target application to augment the training data, e.g. there are more devices with typed text than those with audio transcriptions as the messaging application is used more frequently than a virtual assistant.
Population expansion for PFL has three benefits: (1) training will be faster as there are more devices available, (2) DP noise scale will be smaller from amplification by subsampling <cit.> by making population size larger, and (3) sampling error will be smaller.
We explore combinations of various domain adaptation techniques and show that they outperform naively augmenting the training population with devices from other sources.
We focus on training LMs and evaluate the proposed approaches on public benchmark datasets including Reddits Comments and Common Voice.
We demonstrate our methods can expand the population size by 10 times, which significantly reduces the latency and achieves better model utility.
§.§ Related Work
Prior works on domain adaptation in the LM applications focuses on centralized training.
<cit.> explored instance weighting with importance sampling to reweight the training objective for domain adaptation.
<cit.> selected and used a portion of non-domain-specific language data for domain-specific LM training.
<cit.> extended LM neural networks (NNs) to have domain-specific and domain-shared representations so that those representations are learned separately.
<cit.> focused on the transformer model and modified the model architecture to have domain-specific layers.
More recently, <cit.> adopted hierarchical network structures for training on data from a larger number of domains, where models are gradually trained along the hierarchy in a top-down manner.
With regards to domain adaptation in the federated setting, prior works address the setting where the clients and the server own data from different domains <cit.>.
<cit.> extended the adversarial domain adaptation technique to the federated setting, but their main focus is cross-silo FL, where the number of clients is much smaller.
<cit.> also proposed a domain adaptation technique in cross-silo FL with differential privacy, which properly combines general and specific models.
§ PRELIMINARIES
Federated Learning (FL) <cit.> enables model training on multiple devices, each having a separate dataset, without sharing on-device dataset with a central server.
In particular, we focus on cross-device FL where the number of clients is very large, as opposed to cross-silo FL where client population is small.
The standard iterative procedure for training machine learning models executes at each iteration t:
(1) the central server samples a set of clients 𝒞_t from the population,
(2) each sampled client i∈𝒞_t downloads the shared model parameter θ_t from the server and locally trains the model on its own data to produce a local model θ_i,
(3) each sampled client i sends back the model difference Δ_t,i = θ_i - θ_t to the server, and
(4) the server aggregates the model differences as a “pseudo-gradient” Δ_t = 1/|𝒞_t|∑_i∈𝒞_tΔ_t,i and uses it to update θ_t with any standard optimizer; a minimal sketch of one such round is given below.
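The following minimal NumPy sketch illustrates one such round for model parameters flattened into a single vector. The plain averaging and SGD-style server step are simplifications; the experiments in this paper use FedAdam on the server side, and local_train is a placeholder for each client's local optimisation.

import numpy as np

def fl_round(theta_t, sampled_clients, local_train, server_lr=1.0):
    # steps (2)-(3): each sampled client trains locally and returns its model difference
    deltas = [local_train(theta_t, client) - theta_t for client in sampled_clients]
    # step (4): the server averages the differences into a pseudo-gradient ...
    pseudo_grad = np.mean(deltas, axis=0)
    # ... and applies it with a standard optimizer (a plain SGD step is shown here)
    return theta_t + server_lr * pseudo_grad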
Differential Privacy (DP) provides strong privacy protections for sensitive data on device.
DP is formally defined as follows:
A randomized algorithm M satisfies (ϵ, δ)-DP if for any neighboring datasets D,D^' and for any S⊆range(M),
P[M(D) ∈ S] ≤exp (ϵ) P[M(D^') ∈ S] + δ.
We say two datasets D, D^'∈𝒳 are neighboring if they differ in at most one individual's participation.
Two additional steps are added to the FL algorithm to ensure a DP guarantee:
(1) each sampled client clips the model difference before sending it back to have a bounded norm, and
(2) the server applies a DP building block, commonly the Gaussian mechanism <cit.>, when aggregating the model differences to get the noisy pseudo-gradient.
We focus on using the Gaussian mechanism for aggregating the model differences in this work.
The noise variance is then calibrated by the moment accountant <cit.> with fixed sampling rate q (fraction of clients sampled in each iteration), number of training iterations T, and privacy budgets (ϵ, δ).
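A minimal sketch of the two DP modifications, clipping on the client and Gaussian noise on the server, is shown below. The noise standard deviation is written as noise_multiplier times clip_norm, where the noise multiplier is the quantity one would calibrate with the moments accountant for given (ϵ, δ), sampling rate q, and number of iterations T; the specific function and variable names are our own.

import numpy as np

def dp_aggregate(deltas, clip_norm, noise_multiplier, rng=None):
    rng = rng or np.random.default_rng()
    # client side: clip each model difference so its L2 norm is at most clip_norm
    clipped = [d * min(1.0, clip_norm / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    # server side: Gaussian mechanism on the sum, with sigma calibrated to the clipping bound
    sigma = noise_multiplier * clip_norm
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma, size=clipped[0].shape)
    return noisy_sum / len(deltas)   # noisy pseudo-gradient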
§ EXPANDING POPULATION IN PFL
§.§ Device Sampling Latency
We first formulate how population size N impacts latency in PFL.
In each round of PFL, a cohort of C ≈ Nq devices is sampled to participate in training, where q is the device sampling probability, which provides privacy amplification <cit.>.
The server tends to over-sample, using a slightly larger q > C/N, to improve latency.
In reality, only a proportion of devices satisfying certain conditions (e.g. locked, charging and on Wi-Fi) are eligible for training and devices might dropout or abort training <cit.>, and we denote this ratio of eligible devices as p.
Therefore, if C is larger than Npq, we need to wait until enough devices become available to participate before updating the model.
More formally, assuming Npq < C, we model the latency of waiting for C−Npq more devices to become available and be sampled as follows.
Let m = N-Np be the number of current unavailable devices, k = C-Npq be the number of devices needed for current PFL iteration.
Assume that the time for the i-th unavailable device becoming available and being sampled for training is T_i∼Exponential (λ).
Let U_k be the random variable which describes the time when the first k devices become available and are sampled. Then
1/λ·(C−Npq)/(N(1−p)+1) ≤𝔼[U_k] ≤ C/(λ(N−C)).
We defer the proof to Appendix <ref>.
We use an exponential time model since it is a common choice for modeling training time in the distributed scenario <cit.>.
From the above proposition we see that the expected latency U_k is inversely proportional to the population size, i.e. the smaller the population size, the longer the server needs to wait for enough devices to become available in each iteration.
Figure <ref> illustrates the relationship between latency and population size.
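The bound can also be checked numerically: under the exponential model, U_k is the k-th order statistic of m i.i.d. Exponential(λ) waiting times, so a short Monte Carlo simulation can be compared against the two sides of the proposition. The parameter values below are purely illustrative and are not taken from this paper.

import numpy as np

def expected_latency(N, p, q, C, lam, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    m = int(N - N * p)               # currently unavailable devices
    k = int(np.ceil(C - N * p * q))  # additional devices the server must wait for
    if k <= 0:
        return 0.0
    waits = rng.exponential(1.0 / lam, size=(trials, m))
    return np.sort(waits, axis=1)[:, k - 1].mean()   # k-th order statistic

# illustrative values: N = 50,000 devices, p = 0.1 eligible, q = 0.01, C = 400, lambda = 1
N, p, q, C, lam = 50_000, 0.1, 0.01, 400, 1.0
lower = (C - N * p * q) / (lam * (N * (1 - p) + 1))
upper = C / (lam * (N - C))
print(lower, expected_latency(N, p, q, C, lam), upper)   # lower <= E[U_k] <= upper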
§.§ Domain Adaptation for Expanding Population
The small-population situation often arises when building task-specific LMs, where potential data sources are scarce, e.g., training an LM on Swahili spoken texts as part of a virtual assistant system.
This is a challenging task since only a small number of users are frequent users of a virtual assistant and have Swahili speech on their devices.
Nonetheless, for such device-constrained locales, there could be other data sources, e.g., typed texts, with a larger population.
This motivates us to expand the population by exploiting another text source with a different distribution to train the LM for the target data source, which can be cast as a domain adaptation (DA) problem.
Following DA convention, we denote data from other source applications with larger population as source domain 𝒮, and data from target application with smaller population as target domain 𝒯.
Goal: We wish to learn a global model that minimizes the objective 𝔼_x∼𝒯 [L(x)], where L is the loss function, with data from 𝒮∪𝒯 under a fixed privacy budget (ϵ, δ).
The latency-utility trade-off should be much better than training in 𝒯 alone.
Instance weighting (IW)
Naively training with devices sampled from 𝒮∪𝒯 would bias towards 𝒮 due to its larger population.
To remedy this sampling bias, we apply instance weighting <cit.> on the training objective:
𝔼_x∼𝒮∪𝒯[w(x) L(x)],
where w(x) = p_𝒯(x) / p_π(x) is the importance weight, π∈{𝒮, 𝒯} denotes which domain x is from and p_π(x) is the data density function for domain π.
As p_π(x) has to be estimated privately, we choose to approximate it with the unigram likelihood p̂_π(x)=∏_i û_π(x_i), as the unigram frequency û_π can be efficiently learned with a relatively small privacy budget.
The product of unigrams in p̂_π(x) can lead to bipolarized density estimation, and thus unstable importance weights.
We instead use relative importance weight <cit.> to provide a more robust estimation:
w(x) = p̂_𝒯(x)/(αp̂_𝒯(x) + (1-α) p̂_π(x)),
where α is the proportion of the devices with data from 𝒯.
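Concretely, given privately estimated unigram frequencies for the two domains, the relative importance weight of a training sequence can be computed roughly as follows. The floor value used for out-of-vocabulary tokens is an assumption made for numerical stability and is not specified in this work.

import numpy as np

def relative_importance_weight(tokens, u_target, u_source, alpha, from_target):
    # unigram-product approximations of the sequence densities p_T(x) and p_pi(x)
    floor = 1e-8   # assumed smoothing for tokens missing from the noisy unigram estimates
    p_t = np.prod([u_target.get(t, floor) for t in tokens])
    p_pi = p_t if from_target else np.prod([u_source.get(t, floor) for t in tokens])
    # relative importance weight; alpha is the fraction of devices holding target-domain data
    return p_t / (alpha * p_t + (1.0 - alpha) * p_pi)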
The overall PFL training procedure with IW is: (1) learn the unigram frequency û_π for π∈{𝒮, 𝒯} with privacy budget (ϵ_0, δ_0) which can be done with private federated statistics <cit.>, and (2) train model using objective weighted by Equation <ref> with privacy budget (ϵ-ϵ_0, δ-δ_0).
Pretrain in 𝒮 and finetune in 𝒯 (PT)
Recent work <cit.> has shown that pretraining a model in a domain different from the target domain, using a large population, reduces the amount of data required for private finetuning.
We consider pretraining in 𝒮 with a large cohort size C and finetuning in 𝒯 with a small cohort size α C so that the latency of finetuning stays roughly the same as that of pretraining.
We enforce that the population of 𝒮 and 𝒯 to be disjoint so that both pretraining in 𝒮 and finetuning in 𝒯 can spend privacy budget of (ϵ, δ) with parallel composition <cit.>.
Instance weighted pretraining (IWPT)
Domain adaptive pretraining (DAPT) <cit.> demonstrated the benefits of pretraining with in-domain data.
However, because the in-domain population is limited and inefficient to train on with PFL, we consider instance weighted pretraining on 𝒮, with the objective weighted by Equation <ref>, as an approximation of DAPT.
§ EXPERIMENT
§.§ Datasets
To simulate a practical situation, we focus on using real-world datasets with user identifiers so that we can partition data naturally by users.
In particular, we use two sources of data: (1) Reddits <cit.> and (2) Common Voice (CV) <cit.> to build two datasets for DA tasks.
More data processing details are described in Appendix <ref>.
SubReddits
The first constructed DA dataset consists of only the Reddits dataset with different SubReddit topics.
We treat a set of similar subreddits as a domain, where we choose stock-related subreddits {Superstonk, amcstock, wallstreetbets, GME, Wallstreetsilver} as 𝒮 and news-related subreddits {news, worldnews, politics} as 𝒯.
As a result of this construction, we have 117,708 clients in total, and 14,072 clients (about 12%) have target domain data as well as source domain data.
CV&Reddits
The other constructed DA dataset combines Reddit (typed texts) and CV (transcribed audio), which simulates the difference between spoken-text and typed-text domains.
We treat texts from Reddits as 𝒮 and texts from CV as 𝒯.
CV dataset has 68,312 clients.
We randomly select clients from Reddit dataset so that the total number of clients is 10 times more than the number of clients with Common Voice data.
§.§ Experiment Setup
Since there usually is a constraint on the client device storage and communication cost in real world applications, we consider a rather simple LSTM following <cit.>.
We evaluate the performance of our approaches by the perplexity (PPL) in 𝒯.
We divide clients into training, validation, and test sets with the ratio of 6:2:2, where the hyper-parameters are tuned on validation set.
We consider two baselines with an unweighted objective: (1) training with cohort sizes α C and C in 𝒯 only, where α is the proportion of the devices with data from 𝒯, and (2) training with cohort size C in 𝒮∪𝒯.
We also experiment the baseline (2) with domain adaptive layers proposed in domain-shared/domain-specific representations (DSDSR) <cit.> and DEMix <cit.>.
To speed up the training process, we follow <cit.> and set the cohort size C to be 5,000 for adjusting the magnitude of noise in the DP analysis and to be 400 for actual training.
We set α=0.1 i.e. the ratio of population between 𝒯 and 𝒮.
All experiments last for 2,000 server iterations and 1 client iteration.
For fine-tuning experiments (PT and IWPT), we split the server iterations into 1,000 and 1,000 for pretraining and fine-tuning, respectively.
We use FedAdam <cit.> as the server optimizer with learning rate 0.1 and SGD as the client optimizer with learning rate 0.5.
We set the total privacy parameters to (ϵ, δ) = (2, 10^-6) throughout the experiments.
The clipping bound of Gaussian mechanism in PFL is set to 0.5.
For IW and IWPT, we allocate (ϵ_0, δ_0)=(0.8, 0) for estimating unigrams with Geometric Mechanism <cit.>, and (ϵ,δ) = (1.2, 10^-6) for model training.
To bound the sensitivity for the unigram estimation, we use at most 5 sequences per client, each with a fixed length of 10 tokens.
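The private unigram release step can be sketched as follows, with each client's contribution bounded (at most 5 sequences of 10 tokens, i.e., 50 tokens, in our setup) and two-sided geometric noise added to the aggregated counts. The parameterisation of the noise and the clipping of negative counts are our own simplifications of the Geometric Mechanism rather than the exact pipeline used here.

import numpy as np

def dp_unigram_frequencies(client_token_counts, epsilon, max_tokens_per_client=50, seed=0):
    rng = np.random.default_rng(seed)
    # aggregate raw counts; each client contributes at most max_tokens_per_client tokens
    totals = {}
    for counts in client_token_counts:
        for tok, c in counts.items():
            totals[tok] = totals.get(tok, 0) + c
    # two-sided geometric noise (difference of two geometric variables) per token count,
    # with the parameter set by the L1 sensitivity max_tokens_per_client
    a = np.exp(-epsilon / max_tokens_per_client)
    noisy = {tok: max(c + int(rng.geometric(1 - a) - rng.geometric(1 - a)), 0)
             for tok, c in totals.items()}
    total = max(sum(noisy.values()), 1)
    return {tok: c / total for tok, c in noisy.items()}   # estimated unigram frequencies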
§.§ Results
Table <ref> summarizes the model performance of our algorithm and baseline approaches.
First, we observe from results on both datasets that training models with a small cohort size α C in 𝒯 only has the worst performance, which is because the DP noise dominates the model update in each iteration.
Increasing the cohort size to C can greatly improve the utility for 𝒯 only.
However, according to the argument made in Section <ref>, we need to trade off a significant amount of training time for a larger C.
For the baseline trained with the large population in 𝒮∪𝒯 and a large cohort size, simply treating source domain data as target domain data does not improve performance much, possibly because the source domain data come from a different distribution and have a larger volume, which dominates the model update.
DA specific architectures (DSDSR and DEMix) improved this baseline to some extent but can incur more communication cost due to larger model sizes.
On the other hand, both the IW and PT approaches outperform the baseline methods, and are better than the DA-specific architectures on the SubReddits dataset.
The combined IWPT approach achieves the best PPL, 13% and 30% lower than the baseline models on SubReddits and CV&Reddits, respectively.
§ CONCLUSION AND FUTURE WORK
We demonstrate that the population size being small in PFL not only harms the model quality but also slows down the LM training.
With our proposed domain adaptation algorithm, which weights the source domain data appropriately, we show that it is possible to work with a larger population and train LMs of better quality in a timely manner.
Since the instance weighting framework can be applied to data domains other than language, extending the framework to other domains, e.g., images, is a direction for future work.
icml2023
§ DATASET PREPROCESSING
The set of known vocabulary is built with target domain data in the training set by choosing top 10K frequent words and is assumed to be known in advance.
Every word outside the vocabulary list is mapped as .
We append to the beginning and to the end of every sentence.
Within each user, we limit the number of tokens (words) to 1,600 and cut the input sentences into sequences of length 10.
When a sequence has length less than 10, we append to make it have length 10.
§ PROOF OF PROPOSITION 3.1
Here we restate and prove Proposition 3.1.
Let m = N-Np be the number of current unavailable devices, k = C-Npq be the number of devices needed for current PFL iteration.
Assume that the time for the i-th unavailable device becoming available and being sampled for training is T_i∼Exponential (λ).
Let U_k be the random variable which describes the time when the first k devices become available and are sampled. Then
1/λ·(C−Npq)/(N(1−p)+1) ≤𝔼[U_k] ≤ C/(λ(N−C)).
We first state two properties about Exponential distribution:
* The minimum of n exponential random variables is exponential: min{T_1, … T_n}∼Exponential (nλ) <cit.>.
* The exponential random variable T_i is memoryless: P(T_i > a + b | T_i > b) = P(T_i > a) <cit.>.
In our definition, U_1 = min{T_1, … T_m}∼Exponential (mλ) from the first property.
WLOG, let U_i=min{T_i, …, T_m} | T_j > U_i-1 where j={i,…, m} and i>1, then with the second property we can derive:
P(U_i - U_i-1 > a) = P(U_i - U_i-1 > a | U_i > U_i-1)
= P(U_i > a + U_i-1 | U_i > U_i-1)
= P(U_i > a)
= P(min{T_i, …, T_m} > a).
Thus, U_i - U_i-1∼Exponential ((m-i+1)λ) from the first property.
Then we have:
𝔼[U_k] = 𝔼[∑_i=2^k(U_i - U_i-1) + U_1] = ∑_i=2^k𝔼[(U_i - U_i-1)] + 𝔼[U_1] = 1/λ∑_x=m-k+1^m1/x.
Since 1/x is convex, we know from the lower and upper Riemann sum that:
∫_m-k+1^m+1 (1/x) dx ≤∑_x=m-k+1^m 1/x ≤∫_m-k^m (1/x) dx
Then the lower bound can be derived as follows:
∫_m-k+1^m+1 (1/x) dx = ln((m+1)/(m-k+1))
≥ 1 - (m-k+1)/(m+1)
= k/(m+1) = (C-Npq)/(N(1-p) + 1),
where the first inequality comes from the fact that 1-1/x≤lnx≤ x-1.
Similarly, the upper bound can be derived as follows:
∫_m-k^m (1/x) dx = ln(m/(m-k))
≤ m/(m-k) - 1
= k/(m-k)
= (C-Npq)/(N(1-p) - (C-Npq))
= C/(N(1-p)/(1-Npq/C) - C)
≤ C/(N-C),
where the last inequality comes from the fact that server tends to oversample q ≥C/N.
|
http://arxiv.org/abs/2307.05845v2 | 20230711233649 | PIGEON: Predicting Image Geolocations | ["Lukas Haas", "Michal Skreta", "Silas Alberti"] | cs.CV | ["cs.CV", "cs.LG"] |
PIGEON: Predicting Image Geolocations

Lukas Haas (Department of Computer Science, Stanford University, Stanford, CA, USA), Michal Skreta (Department of Computer Science, Stanford University, Stanford, CA, USA), Silas Alberti (Department of Electrical Engineering, Stanford University, Stanford, CA, USA)
Correspondence: Lukas Haas <[email protected]>, Michal Skreta <[email protected]>, Silas Alberti <[email protected]>
Keywords: Image Geolocalization, Computer Vision, Multi-task Learning, Meta Learning, Deep Learning, Machine Learning
Planet-scale image geolocalization remains a challenging problem, necessitating fine-grained understanding of visual information across countries, environments, and time. Although traditional retrieval-based approaches using hand-crafted features have recently been superseded by deep learning methods, transformer-based advances in machine learning have rarely been applied in image geolocalization.
We introduce PIGEON, a novel deep multi-task model for planet-scale Street View image geolocalization that incorporates, inter alia, semantic geocell creation with label smoothing, conducts pretraining of a CLIP vision transformer on Street View images, and refines location predictions with ProtoNets across a candidate set of geocells. Our work presents three major contributions: first, we design a semantic geocell creation and splitting algorithm based on open-source data which can be adapted to any geospatial dataset. Second, we show the effectiveness of intra-geocell few-shot refinement and the applicability of unsupervised clustering and ProtoNets to the task. Finally, we make our pre-trained CLIP transformer model, StreetCLIP, publicly available for use in adjacent domains, with applications to fighting climate change and to urban and rural scene understanding.
Motivated by the rising popularity of the online game GeoGuessr, with over 50 million players worldwide, we focus specifically on Street View images and create the first AI model that consistently beats human players in GeoGuessr, ranking in the top 0.01% of players.
In addition to our novel modeling approach, we create a new planet-scale dataset for image geolocalization of 400,000 images. Our model achieves impressive results, aided by positive multi-task transfer in both an implicit and explicit multi-task setting. We attain 91.96% country accuracy on our held-out set and 40.36% of our guesses are within 25 km of target.
One of the most important results of our work is demonstrating the domain generalization of our pre-trained CLIP model called StreetCLIP <cit.> and its robustness to distribution shifts. We apply StreetCLIP in a zero-shot fashion to out-of-distribution benchmark datasets IM2GPS and IM2GPS3k and achieve state-of-the-art results, beating models finetuned on more than four million in-distribution images.
Finally, we show that contrastive pretraining is an effective meta-learning technique for image geolocalization, with StreetCLIP realizing an accuracy increase of more than 10 percentage points over CLIP on countries not seen during StreetCLIP-specific pretraining. With image geolocalization datasets varying widely in terms of geographical distribution, our results demonstrate the effectiveness of applying StreetCLIP to a broad range of geolocalization and related problems.
§ INTRODUCTION
The game of GeoGuessr has become a worldwide sensation in the recent years, attracting over 50 million players globally and getting covered by the New York Times <cit.>. On its surface, GeoGuessr seems quite simple: given a Street View location, players need to say where they find themselves in the world. Yet despite this seeming simplicity, the game is infamously difficult. As a result of the diversity of countries, seasons, and climates in the world, it is very hard for most humans to accurately pinpoint their locations.
Motivated by GeoGuessr, we embarked on finding a state-of-the-art approach to planet-scale image geolocalization. The general problem of photo geolocation has a variety of popular use cases, ranging from geographic photo tagging and retrieval at large technology companies to academic, historical research based on archival images. The societal interest in artificial intelligence being able to recognize location from images became clear in 2016, when a paper published by Google garnered worldwide coverage by the media <cit.>. Given the rising popularity of GeoGuessr, numerous amateur attempts have been made at “solving" the game <cit.>. There is also an additional incentive to contribute to a growing community of geography enthusiasts: AI models have the potential to improve geography education, and the learned Street View representations may benefit applications in sustainability, e.g., the prediction of buildings' energy efficiency <cit.>.
In this work, we present PIGEON, a model trained on Street View data drawn from the same distribution as GeoGuessr, achieving impressive image geolocalization results and consistently beating humans in the game of GeoGuessr, ranking amongst the top players globally. Some of our work's major contributions revolve around the use of CLIP, a recent multi-modal vision transformer which has been shown to be an effective few-shot learner <cit.>; this is important given the geographical sparsity of images in most image geolocalization datasets. As such, our work innovates on approaches still leveraging convolutional neural networks (CNNs) such as <cit.>.
The remainder of this paper proceeds as follows. In Section <ref>, we outline past approaches to the problem of image geolocalization. In Section <ref>, we describe our dataset and the process of acquiring and augmenting our data. In Section <ref>, we discuss our proposed approach, outlining the six-step process comprising PIGEON. In Section <ref>, we present our results, discussing both distance-based metrics pertaining to our main image geolocalization task as well as other metrics relevant for our augmented dataset. In Section <ref>, we analyze the particularities of the performance of our model while attempting to interpret some predictions of the model. Section <ref> summarizes our work, and Section <ref> outlines potential future directions for our research.
§ RELATED WORK
§.§ Traditional Image Geolocalization
The task of image geolocalization, also referred to as visual place recognition <cit.>, is typically described as a difficult problem due to the sheer diversity of the conditions in which images are taken. An image can be taken during daytime or nighttime, with varying weather, illumination, season, traffic, occlusion, viewing angle, and many other factors. In fact, the task is deemed so difficult that it was not immediately clear that visual features could have superior predictive power in localizing images than textual features <cit.>.
What is perhaps even more challenging, however, is the fact that images can be taken anywhere in the world, representing an extremely vast classification space. To that end, many previous approaches to image geolocalization were constrained to specific parts of the world, such as looking exclusively at cities <cit.>, specific mountain ranges like the Alps <cit.>, deserts <cit.>, or even beaches <cit.>. Other approaches focused on highly constrained geographical areas, such as the United States <cit.> or even specific cities like Pittsburgh and Orlando <cit.> or San Francisco <cit.>.
The first modern attempt at planet-scale image geolocalization is attributed to IM2GPS in 2008 <cit.>, a retrieval-based approach using nearest-neighbor search based on hand-crafted features. It was the first time that image geolocalization was considered in an unconstrained manner on a global scale. Yet despite this scale, dependence on nearest-neighbor retrieval methods <cit.> meant that an enormous database of reference images would be necessary for accurate image geolocalization on the scale of the entire planet.
§.§ Deep Image Geolocalization
§.§.§ Convolutional Neural Networks (CNNs)
Interest in image geolocalization surged with the arrival of deep learning to computer vision, marking an evolution from hand-crafted to deep-learned features <cit.>. In 2016, Google released a paper called PlaNet <cit.> that first applied convolutional neural networks (CNNs) <cit.> to photo geolocalization. It also first cast the problem as a classification task, which was particularly important as past research had shown that it was difficult for deep learning models to directly predict geographic coordinates <cit.>, because most models do not learn the distributions of data points efficiently and because of the interdependence of latitude and longitude. The improvements made with deep learning led researchers to revisit IM2GPS <cit.>, apply CNNs to massive datasets of mobile images <cit.>, and make applications to GeoGuessr more widespread <cit.>. Nevertheless, some researchers argue for approaches combining classification and retrieval <cit.>.
§.§.§ Vision Transformers
Following the success of transformers <cit.> in natural language processing, the transformer architecture found its application to computer vision, such as through the ViT architecture <cit.>. The global context of ViT architectures explains the immediate, significant improvements compared with CNNs <cit.>. Additionally, vision transformers have been found to be useful in multi-modal text and image settings, such as through OpenAI's CLIP model <cit.> being applied to image geolocalization <cit.>. Prior papers have also used contrastive learning without the use of CLIP <cit.>.
Although vision transformers have been successfully applied to a range of problems in computer science, their application to image geolocalization has thus far been fairly limited <cit.>, though it has recently been accelerating <cit.>. In particular, vision transformer models have not yet been widely applied to the problem of geolocalization from Street View imagery.
§.§ Multi-task Image Geolocalization
Multi-task approaches have been found to improve results on the main task by using complementary tasks <cit.>, with certain types of task being more beneficial for the main task than others <cit.>. This, coupled with the fact that auxiliary information was found to be a vital pre-processing step for image geolocalization <cit.>, pointed to the potential of multi-task learning to significantly accelerate the field of image geolocalization.
Extracting sets of priors about objects that can potentially be seen in an image <cit.> can be framed as ingredients to a multi-task setting, such as by using scene recognition as a secondary task in a multi-task framework <cit.>. By using semantic segmentation, the problem of extreme variation can be alleviated <cit.>. In fact, until recently, state-of-the-art performance <cit.> was made possible by combining convolutional neural networks with contextual information about environmental scenes. This is particularly important as image geolocalization is very difficult in natural environments <cit.>. More recent work showed that vision transformers and multi-task settings <cit.> contribute to superior performance, further accelerating research in the field.
§.§ Geocell Partitioning
The chosen method of partitioning the world into geocells can have an enormous effect on downstream classification performance. Previous approaches rely on geocells that are either plainly rectangular <cit.>, rectangular using the S2 library <cit.>, or effectively arbitrary, such as through combinatorial partitioning <cit.>. While semantic construction of geocells has been found to be of high importance to image geolocalization <cit.>, even current state-of-the-art papers still rely on the S2 library <cit.>. Alternative methods for achieving optimized geocells include creating specific loss functions for the classification layer <cit.>.
§.§ Additional Prior Work
Other prior academic work cited the need for cross-view image geolocalization as photos tend to be concentrated in landmarks and urban areas with sparse ground level geo-tagged photos. Cross-view approaches can combine ground-level appearance, overhead appearance, and land cover attributes <cit.>. What is more, methods using Street View images have shown incredible potential in inferring factors such as income, race, education, and voting patterns <cit.>. In prior work, oftentimes the Street View images were inputted to the model in conjunction with images of landmarks <cit.>, images taken indoors, or cross-viewed with aerial images <cit.>. Moreover, recent paper cited the potential of also geolocalizing objects within images <cit.>, factoring in the differences in land cover <cit.>, and setting new benchmarks <cit.>. Further information about work done in image geolocalization can be found in various surveys of the field <cit.>.
§ DATASET
§.§ Dataset Acquisition
While most image geolocalization approaches rely on publicly available datasets, this is not the case for Street View given the lack of publicly available planet-scale Street View datasets.
To that end, we decided to create an original dataset. We proactively reached out to Erland Ranvinge, the Chief Technology Officer of GeoGuessr, who generously agreed to share a dataset of 1 million locations used in the Competitive Duels mode of GeoGuessr. From the dataset, we randomly sampled 100,000 of the provided locations, or 10% of the overall dataset. For each of the locations, we downloaded four images, ending up with 400,000 images.
The distribution of countries in our training set is displayed in Figure <ref> in Section <ref> of the Appendix, which also describes the details of our process of querying the Street View API, including relevant parameters for both Street View metadata and Street View images. As can be seen, there are clear “tiers" of countries delineated by the frequency of sampling, and we denote each tier by a different color. Approximately 70% of the locations are in the “high" tier, 24% are in the “medium" tier, and the remaining 6% are in the “low" tier.
For each location, we start with a random compass direction and take four images separated by 90 degrees, thus differing from a single-image setup typically seen in Street View image geolocalization <cit.>. We carefully created non-overlapping image patches like in prior approaches <cit.>, and cropped images to remove auxiliary watermarks.
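As an illustration of the acquisition step, the sketch below downloads four images at headings 90 degrees apart, starting from a random compass direction, using the public Google Street View Static API. The image size and the omission of field-of-view and pitch parameters are simplifying assumptions; the exact parameters we used are documented in the Appendix, and the cropping of watermarks happens downstream.

import random
import requests

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def download_location(lat, lng, api_key, size="640x640"):
    start_heading = random.uniform(0, 360)   # random initial compass direction
    images = []
    for i in range(4):                       # four views separated by 90 degrees
        params = {
            "location": f"{lat},{lng}",
            "heading": (start_heading + 90 * i) % 360,
            "size": size,
            "key": api_key,
        }
        response = requests.get(STREETVIEW_URL, params=params, timeout=30)
        response.raise_for_status()
        images.append(response.content)      # raw JPEG bytes
    return images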
Prior work addressing using Street View for GeoGuessr image geolocalization did not specifically look at data obtained directly from the GeoGuessr game <cit.>, making our approach particularly novel.
§.§ Image Format
Four images for a sample location in our dataset are visualized in Figure <ref>. It is crucial to notice the advantage of a four-image setting compared to a single-image setting. The leftmost image in Figure <ref> mainly contains information on vegetation, making it difficult to locate the image with confidence. However, the additional images provide clues pertaining to roads, buildings, and cars, pointing to the advantages of extending the dataset with additional images in lieu of taking a single image for each location.
§.§ Dataset Augmentation
Recognizing that adding auxiliary geographic metadata can be beneficial for image geolocalization <cit.>, we decided to augment our dataset with data on Köppen-Geiger climate zones <cit.>, as well as elevation temperature, precipitation, etc. We also capture information frequently used by human GeoGuessr players in placing their guesses such as the side of the road that traffic travels on.
Details regarding specific datasets used in our dataset augmentation procedure are described in Section <ref> of the Appendix.
§ METHODOLOGY
This work introduces a variety of technical novelties applied to the problem of image geolocalization, summarized in the following subsections.
§.§ Geocell Creation
Prior research has shown that predicting latitudes and longitudes directly for any image geolocalization problem does not result in state-of-the-art performance <cit.>. Current methods all rely on the generation of geocells to discretize the coordinate regression problem and thus transform it into a classification setting, making geocell design "crucial for performance" <cit.>.
§.§.§ Naive Geocells
Our initial geocell design is inspired by the approach undertaken by papers that had previously achieved state-of-the-art results on image geolocalization <cit.> using the S2 geometry library. The S2 geocell algorithm uses numerous rectangles which observe the curvature of the earth and splits each rectangle into four equally-sized smaller rectangles if the number of data points within a given rectangle reaches a pre-defined threshold. Our naive geocell algorithm works in a similar fashion; it is first initialized with one large rectangle which is, in every subsequent step, divided into two rectangles along the longest side, only dividing a rectangle further if both resulting rectangles contain a minimum of thirty points. Instead of splitting each rectangle into two equally-sized rectangles, a k-means clustering is performed with k = 2 to find a decision boundary, only splitting the given rectangle if the minimum geocell size of thirty training data points is respected. Figure <ref> illustrates the resulting rectangular geocells derived from our naive geocell creation algorithm for the metropolitan area of Paris.
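A simplified sketch of this recursive splitting procedure is given below; it treats coordinates as planar (ignoring the curvature handled by S2), uses the midpoint between the two 1-D k-means centers along the longest side as the decision boundary, and stops once a split would violate the minimum cell size. All function and variable names are our own.

import numpy as np
from sklearn.cluster import KMeans

MIN_CELL_SIZE = 30   # minimum number of training samples per geocell

def naive_geocells(points, bounds):
    """points: (n, 2) array of (lon, lat); bounds: (min_lon, min_lat, max_lon, max_lat)."""
    if len(points) < 2 * MIN_CELL_SIZE:
        return [bounds]
    min_lon, min_lat, max_lon, max_lat = bounds
    # split along the longest side of the current rectangle
    axis = 0 if (max_lon - min_lon) >= (max_lat - min_lat) else 1
    coords = points[:, axis].reshape(-1, 1)
    # 1-D 2-means gives a data-adaptive decision boundary (midpoint of the two centers)
    centers = KMeans(n_clusters=2, n_init=10).fit(coords).cluster_centers_.ravel()
    boundary = float(np.mean(centers))
    left_mask = points[:, axis] <= boundary
    left, right = points[left_mask], points[~left_mask]
    if len(left) < MIN_CELL_SIZE or len(right) < MIN_CELL_SIZE:
        return [bounds]
    if axis == 0:
        left_bounds, right_bounds = (min_lon, min_lat, boundary, max_lat), (boundary, min_lat, max_lon, max_lat)
    else:
        left_bounds, right_bounds = (min_lon, min_lat, max_lon, boundary), (min_lon, boundary, max_lon, max_lat)
    return naive_geocells(left, left_bounds) + naive_geocells(right, right_bounds)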
§.§.§ Semantic Geocells
A major contribution of this work is the generation of semantic geocells that automatically adapt to the geographic distribution of the training samples. The motivation behind a semantic geocell design is that visual features in images often follow the semantics of the given country (e.g., road markings), region (e.g., quality of infrastructure), or city (e.g., street signs). In addition, country and administrative boundaries often follow natural borders such as rivers or mountain ranges, which in turn influence visual features such as the type of vegetation, soil color, and more.
We use planet-scale open-source administrative data for our semantic geocell design, relying on non-overlapping political shape files of three levels of administrative boundaries (country, admin 1, and admin 2 levels) obtained from <cit.>. Starting at the most granular level (admin 2), our algorithm merges adjacent admin 2 level polygons such that each geocell contains at least thirty training samples. Our method attempts to preserve the hierarchy given by admin 1 level boundaries and never merges cells across country borders (defined by distinct ISO country codes). It randomly merges geocells with adjacent cells using the following prioritization:
* Small adjacent geocells in same admin 1 area.
* Large adjacent geocells in same admin 1 area.
* Small adjacent geocells in same country.
* Large adjacent geocells in same country.
The above prioritization ensures that geocells containing fewer than the minimum threshold of training samples are not simply appended to large adjacent geocells; instead, low-density regions are aggregated into one larger cell, often surrounding major metropolitan areas. This further preserves rural and urban semantics. Figure <ref> shows an example of our semantic geocell design preserving the urban area of Paris as well as the surrounding suburban regions.
One limitation of aggregating admin 2 level areas as defined by <cit.> is that for some urban areas, the number of training examples for a single cell might greatly exceed the minimum sample threshold defined by the algorithm's user. In addition, through the process of merging adjacent geocells, some cells might be created which could be split again into multiple smaller cells based on different boundaries.
We address this limitation in our geocell design through the following algorithm, which uses Voronoi Tessellation and the OPTICS clustering algorithm <cit.> to further split such geocells into smaller semantic geocells.
Our Semantic Geocell Division Algorithm uses OPTICS <cit.> to find a large cluster within a cell, checking whether removing this cluster from the cell would result in two cells that each contain more training samples than MINSIZE. If this is the case, the new geocell's polygon is determined by performing Voronoi Tessellation over all points in the initial cell, as depicted in Figure <ref>, and assigning the Voronoi polygons to a new cell containing all training samples in the computed OPTICS cluster. The area found through Voronoi Tessellation is then removed from the old geocell. The splitting is performed until convergence for each OPTICS parameter setting. In our work, we use three distinct OPTICS settings with minsamples values of 8, 10, and 15 for the three respective rounds, and xi parameters of 0.05, 0.025, and 0.015 for the same rounds. With each successive setting, the requirements defining a cluster are thus relaxed to find clusters even in cells that are difficult to divide further.
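The core of one splitting round can be sketched as follows; the polygon bookkeeping that assigns the Voronoi regions to the new cell's shape file is omitted, and the names are illustrative.

# Simplified sketch of one round of the Semantic Geocell Division Algorithm.
import numpy as np
from scipy.spatial import Voronoi
from sklearn.cluster import OPTICS

MIN_CELL_SIZE = 30

def split_cell_once(points, min_samples=8, xi=0.05):
    """points: (n, 2) array of (lon, lat) training locations inside one geocell."""
    labels = OPTICS(min_samples=min_samples, xi=xi).fit_predict(points)
    clusters = [c for c in set(labels) if c != -1]
    if not clusters:
        return None                                   # no cluster proposed; keep the cell
    largest = max(clusters, key=lambda c: int(np.sum(labels == c)))
    in_cluster = labels == largest
    # Split only if both resulting cells respect the minimum cell size.
    if in_cluster.sum() < MIN_CELL_SIZE or (~in_cluster).sum() < MIN_CELL_SIZE:
        return None
    vor = Voronoi(points)                             # Voronoi regions over all points
    return points[in_cluster], points[~in_cluster], vor   # regions of the cluster define the new polygon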
Merging geocells according to administrative boundary hierarchies and dividing large cells with our Semantic Geocell Division Algorithm results in geocells that are roughly balanced in size and that preserve the semantics of cities, regions, countries, and the natural environment. Applying our method to our training dataset, we compute the boundaries of a total of 2203 geocells used for our experiments.
§.§ Label Smoothing
By discretizing our image geolocalization problem via our semantic geocell creation process, a trade-off is created between the granularity of geocells and predictive accuracy. The more granular the geocells are, the more precise a prediction can be, but the classification problem becomes more difficult due to higher cardinality. To address this issue, we devise a loss function which penalizes based on the distance between the predicted geocell and the correct geocell. By smoothing the one-hot geocell classification label according to equation <ref>, we train our models in a much more data-efficient way, as the parameters for multiple geocells are trained concurrently with each training example. The value of the smoothed one-hot label L_i for geocell i given the correct geocell c is given by
L_i = exp(- [Hav(g_i, x_c) - Hav(g_c, x_c)] / 75)
where g_i are the centroid coordinates of the geocell polygon of cell i and x_c are the true coordinates of the example for which the label is computed. The constant of 75 acts as a temperature setting for the label smoothing, which worked well in our experiments. Hav(·, ·) is the Haversine distance in kilometers defined as:
2r arcsin(√(sin^2((ϕ_2 - ϕ_1)/2) + cos(ϕ_1)cos(ϕ_2)sin^2((λ_2 - λ_1)/2)))
One advantage of using the Haversine distance is that it respects the Earth's spherical geometry, giving accurate estimates of the distance between two points. Figure <ref> demonstrates the results of smoothing geocell labels, which ideally results in lower geolocalization errors at the cost of slightly lower geocell prediction accuracy due to the added noise in the label.
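A minimal sketch of this label construction is shown below; renormalizing the smoothed labels into a distribution is an implementation choice we make here for illustration, and the function names are ours.

# Haversine-smoothed geocell labels (distances in km, temperature of 75).
# Latitudes and longitudes are expected in radians.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    dphi, dlam = lat2 - lat1, lon2 - lon1
    a = np.sin(dphi / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def smooth_labels(centroids, true_coord, true_cell, temperature=75.0):
    """centroids: (C, 2) array of geocell centroids (lat, lon); true_coord: (lat, lon)."""
    d = haversine_km(centroids[:, 0], centroids[:, 1], true_coord[0], true_coord[1])
    labels = np.exp(-(d - d[true_cell]) / temperature)   # equals 1 for the correct cell
    return labels / labels.sum()                          # renormalize to a distribution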
By combining our semantic geocell design with label smoothing, we encourage our model to spread probabilities across semantically similar and adjacent cells. Figure <ref> shows the distribution of probabilities of our best model for a true location close to the sea in Jakobstad, Finland. Notably, our semantic geocell design and label smoothing result in our model placing high probabilities on semantically similar cells adjacent to the Gulf of Bothnia in Scandinavia.
§.§ Vision Transformer (CLIP)
The input image is encoded using a pre-trained vision transformer <cit.>. We utilize a pre-trained ViT-L/14 architecture and either fine-tune only the prediction heads or additionally unfreeze the last vision transformer layer. For model versions with multiple image inputs, we average the embeddings of all four images. Averaging the embeddings performed better in our experiments than combining the embeddings via multi-head attention or an additional transformer layer.
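The following sketch illustrates the embedding-averaging step with a frozen CLIP vision encoder; the Hugging Face model identifier and the projection dimension are assumptions made for illustration and may differ from our training code.

# Embed the four images of a location with a frozen CLIP vision encoder
# and average the (normalized) embeddings before the prediction heads.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def location_embedding(images):                      # images: list of four PIL images
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)       # (4, projection_dim)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0)                         # one embedding per location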
We were particularly interested in exploring the effect of the type of pretraining on downstream performance. We compare a ViT-L/16 that was pre-trained on ImageNet-21k with 14 million images <cit.> with CLIP ViT-L/14, a multi-modal model that utilized contrastive pre-training on a dataset of 400 million image-caption pairs <cit.>.
Based on our priors and commonly observed strategies by professional GeoGuessr players, there are a variety of relevant features for the image location task, e.g., vegetation, road markings, street signs, and architecture. We hypothesize that the multi-modal pre-training creates embeddings with a much deeper semantic understanding of the image, enabling it to learn such features. As we show later, the CLIP vision transformer gives a substantial improvement over a comparable ImageNet vision transformer and using attention maps, we can indeed show how this enables the model to learn these strategies in an interpretable way.
§.§ StreetCLIP Contrastive Pretraining
Inspired by the substantial improvement that we observed from using CLIP's contrastive pre-training over the ImageNet pre-trained vision transformer, we explored designing a contrastive pre-training task that we could use to fine-tune our CLIP foundation model even before learning the geocell prediction head.
For that, we augment our Street View dataset with geographic, demographic, and geological auxiliary data. This data is used to create randomized captions for each image using a rule-based system that samples components from different task categories and combines them in a randomized order. The probabilities for each category are adjusted based on priors. Some examples of categories & corresponding caption components include:
* Location: “A Street View photo in the region of Eastern Cape in South Africa."
* Climate: “This location has a temperate oceanic climate."
* Compass Direction: “This photo is facing north."
* Season: “This photo was taken in December."
* Traffic: “In this location, people drive on the left side of the road."
This creates an implicit multi-task setting and ensures the model maintains rich representations of the data while adjusting to the distribution of Street View images and learning features that are relevant & correlated with geolocation.
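The caption construction can be sketched as follows; the component templates and sampling probabilities shown are illustrative examples rather than the exact production configuration.

# Illustrative rule-based caption generator used for StreetCLIP pretraining.
import random

def make_caption(meta):
    components = []
    if random.random() < 0.9:
        components.append(f"A Street View photo in the region of {meta['region']} in {meta['country']}.")
    if random.random() < 0.5:
        components.append(f"This location has a {meta['climate']} climate.")
    if random.random() < 0.3:
        components.append(f"This photo is facing {meta['compass']}.")
    if random.random() < 0.3:
        components.append(f"This photo was taken in {meta['month']}.")
    if random.random() < 0.3:
        components.append(f"In this location, people drive on the {meta['side']} side of the road.")
    random.shuffle(components)                        # randomized order of the components
    return " ".join(components)

caption = make_caption({"region": "Eastern Cape", "country": "South Africa",
                        "climate": "temperate oceanic", "compass": "north",
                        "month": "December", "side": "left"})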
§.§ Multi-task Learning
We also experiment with making our multi-task setup explicit by creating task-specific prediction heads for auxiliary climate variables, population density, elevation, and the month (season) of the year. As climate variables we include the Köppen-Geiger climate zone, the yearly average temperature and precipitation at the given location, as well as the difference in temperature and precipitation between the month with the highest average value and the month with the lowest average value. The climate zone and season prediction tasks are posed as classification problems, while the other six auxiliary tasks are formulated as regression tasks.
In <cit.>, the authors note that the "distribution of likely locations for an image provides huge amounts of additional meta-data for climate, average temperature for any day, vegetation index, elevation, population density, per capita income, average rainfall," and more which can be leveraged for the task of geolocalization.
We unfreeze the last CLIP layer to allow for parameter sharing across tasks, with the goal of observing a positive transfer from our auxiliary tasks to our geolocalization problem and of learning more general image representations which reduce the risk of overfitting to the training dataset. Our loss function weights the geolocalization task as much as all auxiliary tasks combined. A novel contribution of our work is that we use eight auxiliary prediction tasks instead of just two in prior research employing multi-task methods <cit.>; multi-task methods have shown impressive results across fields <cit.>.
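A minimal sketch of the explicit multi-task heads and the loss weighting is given below; head dimensions, the task grouping, and the soft-label cross-entropy are illustrative assumptions.

# Multi-task heads on top of the shared CLIP embedding; the geolocalization
# loss is weighted as much as all auxiliary losses combined.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHeads(nn.Module):
    def __init__(self, embed_dim=768, num_geocells=2203, num_climate_zones=30):
        super().__init__()
        self.geocell = nn.Linear(embed_dim, num_geocells)
        self.climate = nn.Linear(embed_dim, num_climate_zones)
        self.month = nn.Linear(embed_dim, 12)
        self.regress = nn.Linear(embed_dim, 6)   # temp./precip. stats, elevation, pop. density

    def forward(self, z):
        return self.geocell(z), self.climate(z), self.month(z), self.regress(z)

def total_loss(outputs, targets):
    geo, clim, month, reg = outputs
    # Soft-label cross-entropy over the Haversine-smoothed geocell labels.
    geo_loss = torch.sum(-targets["geocell"] * F.log_softmax(geo, dim=-1), dim=-1).mean()
    aux = (F.cross_entropy(clim, targets["climate"])
           + F.cross_entropy(month, targets["month"])
           + F.mse_loss(reg, targets["regression"])) / 3.0
    return geo_loss + aux      # geolocalization weighted as much as all auxiliary tasks combined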
§.§ ProtoNet Refinement
To further refine our model's guesses within a geocell and to improve street- and city-level performance, we perform intra-geocell refinement using ProtoNets <cit.>. Instead of simply predicting the mean latitude and longitude of all points within a geocell, as current state-of-the-art approaches such as <cit.> do, we pose each cell's intra-cell refinement as a separate few-shot classification task.
We again use the OPTICS clustering algorithm <cit.>, with a minsample parameter of 3 and a xi parameter of 0.15, to cluster all points within a geocell and thus propose classes to learn in the intra-cell classification setting. Each cluster consisting of at least three training examples forms a prototype, and its representation is computed by averaging the embeddings of all images within the prototype. To compute the prototype embeddings, we use the same model as in our geocell prediction task but remove the prediction heads and freeze all weights. Figure <ref> illustrates examples of refinement clusters found by the OPTICS algorithm in the Greater Los Angeles metropolitan area.
During inference, we first compute and average the new location's embeddings. Once our geocell classification model has predicted a cell, instead of returning that cell's centroid coordinates, we take the Euclidean distance between the averaged image embedding and all prototypes within the given geocell, selecting the prototype location with the smallest Euclidean embedding distance to the inference location as the final geolocalization prediction. The creation of intra-cell location prototypes allows our model to predict one of more than 11,000 distinct locations for a training dataset of 90,000 locations instead of just choosing from the 2,203 distinct geocell centroid coordinates, thus allowing for more precise decision making.
While guess refinement via ProtoNets is in itself a novel idea, our work goes one step further by allowing the ProtoNet refiner to optimize across cells. Instead of refining a geolocalization prediction within a single cell, our ProtoNet refiner optimizes across multiple cells, which further increases performance. During inference, our geocell classification model outputs the top five predicted geocells as well as the model's associated probabilities for these cells. The refinement model then picks the most likely location within each of the five proposed geocells, after which a softmax is computed across the five Euclidean image embedding distances yielded through ProtoNet refinement. We use a softmax with a temperature of 1.6, which was carefully tuned to balance probabilities across different geocells. Finally, these refinement probabilities are multiplied with the probabilities provided by the geocell classification model, and the refinement location corresponding to the highest joint probability is chosen as the final geolocalization prediction.
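The cross-cell refinement can be sketched as follows; the sign convention of the softmax over distances and the data layout are illustrative assumptions.

# Cross-cell ProtoNet refinement: combine top-5 geocell probabilities with a
# temperature-1.6 softmax over prototype embedding distances.
import torch

def refine(query_emb, top_cells, top_probs, prototypes, temperature=1.6):
    """query_emb: (d,) averaged embedding of the four inference images;
    prototypes: dict cell_id -> (embeddings (P, d), coordinates (P, 2))."""
    best_dists, best_coords = [], []
    for cell in top_cells:
        embs, coords = prototypes[cell]
        dists = torch.cdist(query_emb[None], embs).squeeze(0)   # Euclidean distances
        j = int(torch.argmin(dists))
        best_dists.append(dists[j])
        best_coords.append(coords[j])
    refine_probs = torch.softmax(-torch.stack(best_dists) / temperature, dim=0)
    joint = refine_probs * top_probs                  # multiply with classifier probabilities
    return best_coords[int(torch.argmax(joint))]      # final (lat, lon) prediction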
§ RESULTS
The results of our best-performing PIGEON model are listed in the bottom row of Tables <ref> and <ref>. We achieve an astounding 91.96% Country Accuracy (based on political boundaries) and 40.36% of guesses are within 25 km of the correct location. Moreover, the median kilometer error is 44.35 km and the average GeoGuessr score is 4,525. In Table <ref>, we list the results of our multi-task models on our augmented dataset. Our results show that geographical, demographic, and geological features can be inferred from Street View images.
§.§ Ablation Studies on Geolocalization Accuracy
We perform a detailed ablation study for each of our methodological contributions as described in Section <ref>. We summarize our results in Table <ref>, displaying the percentage of our guesses that fall within a given kilometer radius from the actual location, using standard kilometer-based metrics in line with the literature <cit.>. Furthermore, for each ablation, we calculate additional distance-based metrics in Table <ref> that provide insights as to the performance of our modeling approach.
We have the following observations:
* Label Smoothing, Four-image Panorama, Multi-task Parameter Sharing, Semantic Geocells and CLIP Pretraining all significantly improve continent, country, and region-level metrics.
* On the other hand, ProtoNet Refinement has almost no effect on continent, country and region-level metrics, but significantly improves street-level accuracy from 1.32% to 4.84% as well as city level accuracy from 34.96% to 39.86%.
* Fine-tuning the last CLIP layer hurts model performance on its own; however, when performing multi-task training with the last CLIP layer as shared parameters, there is positive transfer and performance increases. The multi-task training acts as a regularizer.
* When additionally performing the Contrastive StreetCLIP Pretraining, unfreezing the last CLIP layer again hurts performance. In particular, there is no positive transfer from the multi-task training anymore. Presumably, all of the benefits from multi-task supervision have already been captured by the implicitly multi-task StreetCLIP pretraining.
In Figure <ref> we visualize the improvement of the best-performing PIGEON models over the simplest model using CLIP Base, showing how the performance gains are more palpable at finer granularities of distance compared to coarser distance metrics.
§.§ Contrastive Pretraining Results with StreetCLIP
The geolocation task is usually framed as a supervised learning problem. However, this has the major drawback that the resulting models are restricted to a specific task, e.g., a fixed number of classes and the distribution of the training data. For example, our training dataset contains only Street View images taken during the day, whereas IM2GPS, a common benchmark dataset for geolocalization, contains a much wider distribution of images, e.g., images of the inside of buildings and images taken at night. Moreover, both datasets have different, non-overlapping sets of countries and differing definitions of countries, e.g., whether overseas territories like French Guiana or Guam are considered their own countries or not.
We hypothesize that StreetCLIP <cit.>, through our Street View Multi-task Contrastive Pretraining, learns relevant strategies for geolocalization while keeping the general world knowledge from the original CLIP pretraining. Thereby, it can generalize to countries it has never seen during our Street View pretraining and is robust with regard to distribution shift.
We test our trained StreetCLIP model on the benchmark image geolocalization datasets IM2GPS and IM2GPS3k, which contain a much broader distribution of images than Street View. By generating an exhaustive list of 234 country captions, we perform a zero-shot linear probe of StreetCLIP to get country-level predictions which we then translate into coordinates. Table <ref> presents our results. We compare against TransLocator <cit.>, the current state-of-the-art on both of these datasets, and following their work, we report our performance on continent-level accuracy.
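The zero-shot linear probe reduces to matching each image against a list of country captions, which can be sketched as follows; the caption template and the checkpoint identifier are illustrative, and in practice the predicted country is mapped to coordinates (e.g., its capital, see Section <ref> of the Appendix).

# Zero-shot country prediction by matching an image against country captions.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("geolocal/StreetCLIP").eval()   # checkpoint id assumed
processor = CLIPProcessor.from_pretrained("geolocal/StreetCLIP")

countries = ["France", "Japan", "Brazil"]             # in practice, all 234 country captions
captions = [f"A Street View photo in {c}." for c in countries]

@torch.no_grad()
def predict_country(image):                           # image: a PIL image
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image         # (1, num_countries)
    return countries[int(logits.argmax(dim=-1))]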
Whereas TransLocator was trained in a supervised manner on 4.72 million images, our model was trained in a semi-supervised manner on only 1 million Street View images. Surprisingly, despite the distribution shift, StreetCLIP outperforms the state-of-the-art on both benchmark datasets using just linear probing. In particular, StreetCLIP performs significantly better than CLIP which implies that there is a transfer of image geolocalization performance onto new distributions.
We conjecture that contrastive pretraining is performing implicit meta-learning. To further confirm this hypothesis, we investigated the performance of CLIP and StreetCLIP in countries that were not seen during StreetCLIP training <cit.>. On the latest benchmark IM2GPS3k, StreetCLIP achieves an accuracy of 52.79% for countries not seen during pretraining vs. 41.51% accuracy for CLIP. An explanation for this surprising transfer is that knowledge about these countries was already learned during the initial CLIP pretraining, e.g., the text encoder presumably has a good embedding of every country in the world. However, the StreetCLIP pretraining primes the model for the geolocalization task and unlocks additional knowledge from the original CLIP pretraining. Thereby, StreetCLIP can perform well on zero-shot transfer to new tasks (i.e., new countries), and our contrastive pretraining can be seen as a form of implicit meta-learning.
§ ANALYSIS
We analyze our results in detail both through quantitative and qualitative evaluations. We confirmed the accuracy of our results by deploying our model in the GeoGuessr game, where our model consistently beats high-ranking human players, ranking in the Top 1,000 globally. We try to understand whether StreetCLIP is learning interpretable strategies by utilizing an explainability method. Furthermore, we analyze some of our underperforming guesses, and discuss the limitations of our work.
§.§ Quantitative Evaluation
§.§.§ Comparison with Human Performance
Using our Chrome extension (see Appendix <ref>), we deploy PIGEON in online competitive GeoGuessr and aggregate the results of 298 rounds of the game mode Duel against human players of varying skill levels. We visualize the comparison of PIGEON with actual human in-game performance in Figure <ref>. Players are ranked into the following divisions by skill level: Bronze Division, Silver Division, Gold Division, Master Division, and Champion Division. For reference, GeoGuessr has 30 million players worldwide, and the Master Division represents roughly the top 1% of players, whereas the Champion Division represents the Top 1000 players worldwide.
As we observe in Figure <ref>, PIGEON comfortably outperforms human performance. It even beats Champion Division players in median kilometer distance and, therefore, belongs to the Top 0.1% or Top 1000 players globally. Moreover, PIGEON is able to perform guesses almost instantly.
§.§.§ Urban vs. Rural
In order to elucidate the difficulty of different sub-distributions, we investigate whether a performance differential exists between urban and rural locations. Presumably, the density of relevant cues should be higher in Street View images from urban locations.
We bin our validation dataset into quintiles by population density and visualize PIGEON's median kilometer error. In Figure <ref>, we observe that indeed higher population density correlates with better predictions. In particular, there is a sharp dropoff in the highest quintile compared to the other four quintiles. This confirms our hypothesis that there is a higher density of cues in urban locations.
§.§ Qualitative Evaluation
§.§.§ Explainability
One of our hypotheses in Section <ref> was that the contrastive pre-training used by CLIP gives the model a deeper semantic understanding of scenes and thereby enables it to discover strategies that are interpretable by humans. Surprisingly, the model was able to learn strategies that are taught in online GeoGuessr guides without ever having been directly supervised to learn these strategies.
In order to visualize what patches of the image are considered relevant for a given caption, we visualize attention relevancy maps for our finetuned StreetCLIP model by implementing the method from Generic Attention-model Explainability for Bi-Modal Transformers <cit.>.
In our experiments, we observed that this explainability method does not generalize well from a patch size of 32, as used in the official implementation, to our patch size of 14. Our hypothesis is that this is caused by the distribution of relevancy scores across patches having a lower entropy when the patch size is smaller. In order to resolve this issue, we modify the method by filtering out outliers and squaring relevancy scores. This significantly improved the interpretability of both regular CLIP and our StreetCLIP on smaller patch sizes and should be applicable beyond our project.
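The post-processing step can be sketched as follows; the exact outlier percentile is an illustrative choice.

# Post-processing of patch relevancy scores for ViT-L/14 (patch size 14):
# clip outliers, then square the scores to sharpen the relevancy map.
import numpy as np

def sharpen_relevancy(scores, pct=99):
    """scores: (num_patches,) raw relevancy scores from the bi-modal explainability method."""
    upper = np.percentile(scores, pct)
    scores = np.clip(scores, 0.0, upper)       # filter out extreme outliers
    scores = scores ** 2                       # squaring sharpens the distribution across patches
    return scores / (scores.max() + 1e-8)      # normalize to [0, 1] for visualization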
For the visualizations in Figure <ref>, we generated relevancy maps for an image from the validation dataset and the corresponding ground-truth caption, e.g. “This photo is located in Canada". Indeed, the model pays attention to features that professional GeoGuessr players consider important, e.g., vegetation, road markings, utility posts, and signage. This makes the strong performance of the model explainable and could furthermore enable the discovery of new strategies that professional players have not yet discovered.
§.§.§ Error Analysis
In spite of our model's generally high accuracy in estimating image geolocations, there were several scenarios in which our model underperformed. By computing the entropy of the probabilities of the top predicted geocells for each location in our validation set, we identified the images whose geolocation our model was most uncertain about. We visualize those cases in Figure <ref>.
The features of poorly classified images are aligned with our intuitions and prior literature about difficult settings for image geolocalization. Figure <ref> shows that images from tunnels, bodies of water, poorly illuminated areas, forests, indoor areas, and soccer stadiums are amongst the imagery that is the most difficult to pinpoint geographically. This makes sense: without recognizable features directly pertaining to a specific geographical area, classification is much more difficult when compared to images with features that clearly distinguish a given geography.
§.§ Limitations
Nevertheless, several limitations remain. Although PIGEON can successfully identify the vast majority of countries in which photos were taken, it still cannot be used at extremely precise levels (street-level) that are necessary for detailed geo-tagging. Moreover, the Street View images in our dataset were taken during daytime, raising doubts over the generalization of the model to images taken during nighttime. Further testing under different appearance variations could provide insights into the robustness of PIGEON to different seasons, illuminations, weather, etc. Additionally, we recognize that some of our visualizations may be prone to cherry-picking, thus not being wholly representative of the underlying datasets.
§ CONCLUSION
Overall, PIGEON presents multiple novel improvements to multi-task image geolocalization while providing important insights and artifacts for related problems in fighting climate-change and urban and rural scene understanding. PIGEON achieves impressive results in planet-scale image geolocalization on Street View images, achieving a country accuracy of 91.96% on our held-out dataset and placing 40.36% of our guesses within 25 km of the target. Our model consistently beats human players in the game of GeoGuessr which samples data from the same distribution as introduced in our novel dataset of 100,000 Street View locations.
The three major contributions of our work can be summarized as follows: first, we introduce a semantic geocell creation and splitting algorithm based on open-source data that is adaptable to any geospatial dataset. Second, we show the effectiveness of intra-geocell few-shot refinement via ProtoNets and the use of clustering to generate potential prediction candidates. Finally, we make our pre-trained CLIP transformer model, StreetCLIP <cit.>, publicly available for use by other researchers.
Finally, we show that contrastive pretraining is an effective meta-learning technique, ideal for domain generalization and robustness to distribution shifts. One of the most important results of our work is achieving state-of-the-art performance on the IM2GPS and IM2GPS3k image geolocalization benchmark datasets, which are strongly out-of-distribution compared to the Street View dataset used for the pre-training of StreetCLIP. Most notably, this state-of-the-art performance is achieved zero-shot, shining a light on the potential of StreetCLIP to help solve problems in many other domains.
§ FUTURE WORK
§.§ Potential Extensions
Going forward, several extensions can be made to make image geolocalization more precise. Future models can detect text included in images to leverage linguistic information for predictions, textual data having previously been suggested as a feature aiding geolocalization <cit.>. Instead of being constrained to street-level imagery, cross-view approaches could be employed, such as synthesizing satellite imagery with Street View <cit.>. Although we propose novel semantic geocells, our experiments are constrained to one granularity of geocells; in the future, various granularities can be tested to find the optimal geocell sizes. Ideally, future image geolocalization models would also be robust to appearance changes, which brings up the need to incorporate changes over the years and requires datasets of images collected over extended periods of time <cit.>. In a multi-task setting, determining the optimal number of tasks is likely to be a priority. Additionally, image segmentation and concept influence could be used for further interpretability of location predictions, as could fusing images to exploit information from the entire four-image panorama rather than individual images. In the long term, future work could go beyond Street View, with models able to geolocate any photo taken anywhere in the world at fine-grained granularity. To that end, future experiments in CLIP-based zero-shot settings should go beyond continent-level accuracy.
Some additional extensions we thought of exploring in this project, but did not end up pursuing, include using knowledge graphs, using road networks and compass directions for intra-geocell refinement, as well as adding an urban/rural scene recognition task to the multi-task setting.
§.§ Social Impact
The results we achieved have vast social impact potential. By predicting climate from images, we may be able to assess the risk posed by the consequences of climate change. This is why we decided to augment our data specifically with the Köppen-Geiger climate classification system, given its emphasis on the geospatial understanding of the impacts of climate change <cit.>. Image geolocalization can also be used for applications in autonomous driving <cit.>, in war zones (such as during the Russian invasion of Ukraine), for attributing locations to archival images (https://www.tiktok.com/@georainbolt/video/7167138543725301035?is_from_webapp=v1&item_id=7167138543725301035), helping historical research, as well as in promoting geography education through gamified e-learning <cit.>.
Even with the potential benefits to humans, image geolocalization nevertheless has to deal with various ethical issues. Some actors posting images might not want their images to be geolocalized, leading to questions about the fragility of privacy protections. Furthermore, accurate image geolocalization systems could be used by governments for citizen surveillance, posing a threat to individual freedoms.
In this Appendix, we provide additional information that describes our work in further detail.
In Section <ref>, we list our data sources and visualize the data used for augmenting our dataset. In Section <ref>, we provide details regarding the process of obtaining our images from the Street View API. In Section <ref>, we discuss the background of the GeoGuessr game that is relevant for understanding this project. In Section <ref>, we describe a Chrome Extension we built to play GeoGuessr by deploying our model online, allowing us to compare our results to human performance. In Section <ref>, we describe the technical details about the infrastructure used for running our models as well as the hyperparameters used during model training.
§ DATA SPECIFICATION FOR DATASET AUGMENTATION
§.§ Country Area Polygons
We obtain data on country areas from the Database of Global Administrative Areas (GADM) <cit.>, with the data available https://geodata.ucdavis.edu/gadm/gadm4.1/gadm_410-levels.ziphere. Additionally, we obtain data on several granularities of political boundaries of administrative areas, with the data available https://github.com/wmgeolab/geoBoundaries/raw/main/releaseData/CGAZ/geoBoundariesCGAZ_ADM1.geojsonhere and https://github.com/wmgeolab/geoBoundaries/raw/main/releaseData/CGAZ/geoBoundariesCGAZ_ADM2.geojsonhere.
§.§ Köppen-Geiger Climate Zones
We obtain data on global climate zones through the Köppen-Geiger climate classification system <cit.>, with the data available https://figshare.com/ndownloader/files/12407516here.
Our planet-scale climate zone data is visualized in Figure <ref>.
§.§ Elevation
We obtain data on elevation through the United States Geological Survey's Earth Resources Observation and Science (EROS) Center, with the data available https://www.usgs.gov/centers/eros/science/usgs-eros-archive-digital-elevation-shuttle-radar-topography-mission-srtm-1here. As elevation data was missing for several locations in our dataset, we further augmented our data with missing values from parts of Alaska and parts of Europe, with the data for Alaska available http://stacks.stanford.edu/file/druid:sg962yb7367/data.ziphere and the data for Europe available https://land.copernicus.eu/imagery-in-situ/eu-dem/eu-dem-v1.1/viewhere.
Our planet-scale elevation data is visualized in Figure <ref>.
§.§ GHSL Population Density
We obtain data on population density through the Global Human Settlement Layer (GHSL), with the data available https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/GHSL/GHS_POP_GLOBE_R2022A/GHS_POP_E2020_GLOBE_R2022A_54009_1000/V1-0/GHS_POP_E2020_GLOBE_R2022A_54009_1000_V1_0.ziphere.
Our planet-scale population density data is visualized in Figure <ref>.
§.§ WorldClim 2 Temperature and Precipitation
We obtain data on average temperature, temperature difference, average precipitation, and precipitation difference through WorldClim 2 <cit.>, with the data available https://www.worldclim.org/data/worldclim21.htmlhere.
Our planet-scale average temperature is visualized in Figure <ref>. Our planet-scale temperature difference is visualized in Figure <ref>. Our planet-scale average precipitation is visualized in Figure <ref>. Our planet-scale precipitation difference is visualized in Figure <ref>.
§.§ Location of Country Capitals
We obtain data on the locations of country capitals used for refining our zero-shot StreetCLIP predictions through Kaggle, with the data available https://www.kaggle.com/datasets/nikitagrec/world-capitals-gpshere.
§.§ Alpha-2 Country Codes
We obtain our ISO 3166-2 alpha-2 country codes used for matching country codes generated through the Street View API with country names through Kaggle, with the data available https://www.kaggle.com/datasets/juanumusic/countries-iso-codeshere.
§.§ Driving Side of the Road
We obtain our driving side of the road data through WorldStandards, with the data available https://www.worldstandards.eu/cars/list-of-left-driving-countries/here.
§ QUERYING STREET VIEW API
After signing an NDA with Erland Ranvinge, the Chief Technology Officer of GeoGuessr, we obtained a list of exactly one million locations that actually appear in the Competitive Duels mode of GeoGuessr. From that list, we randomly sampled 100,000 locations, or 10% of the dataset, maintaining the distribution of countries representative of the larger dataset as visualized in Figure <ref>.
It should be emphasized, however, that while the distribution is representative of the broader distribution of locations in Google Street View, the Google Street View distribution itself cannot be thought of as a uniform global distribution, as visualized in Figure <ref>.
To obtain the actual images from our dataset, we queried the https://developers.google.com/maps/documentation/streetview/overviewStreet View API using the Google Cloud Platform Education Grants generously supplied to us by Google with the help of Dan Russell.
§.§ Metadata
We first queried the Street View API for the location metadata by supplying an id pertaining to each location with each request. That way, we were able to verify whether Street View images actually existed at this location and which month and year a given image was taken. For each unavailable image, we sampled a random location from the same country to maintain the prior distribution. Each metadata request was free of charge.
§.§ Images
Subsequently, we proceeded to download images from each location. Aside from the , we specified additional parameters specific to Street View image downloads. We chose the image size to be pixels, or the largest available size. For each location, we generated a random , or compass direction, between and , and added 90 degrees to each subsequent picture for that location to come up with a full panorama. Subsequently, we chose a field of view, of , allowing us to retain all of the image's information even after cropping the watermarks. We picked our parameter as to be limited to outdoor images, however a small portion of images was still from indoors, as Figure <ref> shows, emphasizing mislabeling on Google's side. For the remaining parameters, we set the default values to for , for , and for . All in all, this allowed us to download images consistently for each location in our dataset.
§ BACKGROUND ON GEOGUESSR
https://www.geoguessr.com/GeoGuessr is an online game founded in Sweden in 2013. Upon starting the game, the user is placed in a location supplied by Google Street View and needs to guess where that location is in the world by placing a guess on the map. The game can be played in both single- and multi-player modes on maps that are both GeoGuessr-provided as well as user-generated. When playing with others, users can play both with their friends as well as with random opponents on the Internet. We decided to focus PIGEON on the Competitive Duels mode, whereby the user directly competes with an opponent and thus must not only place guesses accurately but also more accurately than the opponent. Each guess is translated into a GeoGuessr score, the function of which we re-engineered as outlined in Equation <ref>:
score(x) = 5000 · e^-x/1492.7,
where x is the prediction error in kilometers.
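For reference, a one-line implementation of this re-engineered scoring function is shown below (a sketch using the constant given above).

# Re-engineered GeoGuessr scoring function.
import math

def geoguessr_score(error_km: float) -> float:
    return 5000.0 * math.exp(-error_km / 1492.7)      # 5000 points for a perfect guess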
To get a better sense of the game, we provide some sample screenshots in Figure <ref>. We took these screenshots while deploying PIGEON in the real GeoGuessr game against real opponents using our self-developed Chrome Extension, which we describe in Section <ref> of this Appendix.
§ CHROME EXTENSION FOR GEOGUESSR
We constructed a Chrome extension that plays GeoGuessr by automating the browser. We did this to achieve two goals: First, having an engaging live demonstration. Second, confirming that our model is robust enough to also perform well on the real-world data from the GeoGuessr game.
§.§ Chrome Extension Behavior
The extension automatically activates itself once it detects that it is in a game and then autonomously places guesses. Moreover, it is able to detect when a game is over and restarts the game automatically if it is configured to do so. At the moment, it supports the following GeoGuessr game modes: Classic, Duel & Team Duel. It is able to play both in the “Play With Friends" and “Competitive" mode. In the latter mode, you are matched online against another player of similar rank, and each game either increases or decreases an Elo-based rank.
The procedure to place a guess works as follows and is repeated for each GeoGuessr round until the game is detected to be over:
* Resize Chrome window to correct aspect ratio.
* Wait until Street View scene is fully loaded.
* Repeat the following for all four directions:
* Hide all UI elements.
* Take a screenshot.
* Unhide all UI elements.
* Rotate by 90^∘ using simulated clicks.
* POST request to our backend server endpoint with the four images encoded as Base64 as payload.
* Receive predicted latitude & longitude from our server.
* Optional: Random delay to behave more human-like
* Place guess using reverse-engineered API call from GeoGuessr API.
* Collect statistics about true location & human performance and submit to the server using an additional POST request.
§.§ Backend
In addition to the Chrome extension, we run a backend server on a machine with a GPU that runs the model inference. We utilize the Python library FastAPI to implement the two API endpoints described below (a minimal sketch follows the list):
* Inference endpoint: A POST endpoint that receives either one or four images, passes them to a Pytorch pipeline that preprocesses the images, and then runs inference on a GPU. In addition, it saves the images on disk in order to collect an additional dataset. Then, it returns the latitude & longitude of our models to the client.
* Statistics endpoint: A POST endpoint that receives the statistics about the correct location, the score & distance of our guess, and human performance (i.e., location, score, and distance of our online opponent). This data is saved on disk and then used for our evaluations.
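The sketch below illustrates the two endpoints; payload field names and routes are illustrative, and the model pipeline call is omitted.

# Minimal FastAPI sketch of the inference and statistics endpoints.
import base64, io
from fastapi import FastAPI
from pydantic import BaseModel
from PIL import Image

app = FastAPI()

class InferenceRequest(BaseModel):
    images: list[str]                                 # one or four Base64-encoded images

@app.post("/inference")
def inference(req: InferenceRequest):
    images = [Image.open(io.BytesIO(base64.b64decode(s))) for s in req.images]
    # latitude, longitude = pipeline(images)          # preprocessing + GPU inference (omitted)
    latitude, longitude = 0.0, 0.0
    return {"latitude": latitude, "longitude": longitude}

@app.post("/statistics")
def statistics(payload: dict):
    # Append the round statistics (true location, scores, opponent data) to disk.
    return {"status": "ok"}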
§ TECHNICAL SPECIFICATION
The following is an overview of the technical infrastructure used for this project, an estimation of the time needed to compute our results, and an overview of the most important model parameters.
§.§ Technical Infrastructure
Our geolocalization models were trained on four NVIDIA A100 80GB GPUs with each model training between three hours and two days. The contrastive pretraining of StreetCLIP required a total of eight NVIDIA A100 80 GB GPUs on which we pre-trained our model for two days.
The geocell creation algorithm ran for one day on a local machine on a single CPU.
§.§ Hyperparameter Specification
For all our geolocalization models, we started by freezing all CLIP layers and solely training the prediction heads. To do so, we used a learning rate of 1e-4 and a batch size (accumulated across GPUs and gradient updates) of 256. Once the prediction heads were trained to convergence, we unfroze the last CLIP layer for the respective models and used the same batch size of 256 but lowered the learning rate to 2e-5.
For the contrastive pretraining of StreetCLIP, we used a batch size of 2048 accumulated across all GPU cores and gradient updates, a learning rate of 1e-6, linear learning rate warmup with rate 0.2, and a weight decay of 0.001.
Exact Diffusion Inversion via Bi-directional Integration Approximation

Guoqiang Zhang, J. P. Lewis, and W. Bastiaan Kleijn
Recently, different methods have been proposed to address the inconsistency issue of DDIM inversion to enable image editing, such as EDICT <cit.> and Null-text inversion <cit.>. However, the above methods introduce considerable computational overhead. In this paper, we propose a new
technique, named bi-directional integration approximation (BDIA), to perform exact diffusion inversion with negligible computational overhead. Suppose we would like to estimate the next diffusion state z_i-1 at timestep t_i with the historical information (i,z_i) and (i+1,z_i+1). We first obtain the estimated Gaussian noise ϵ̂(z_i,i), and then apply the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i, t_i+1] in the backward manner. The DDIM step for the previous time-slot is used to refine the integration approximation made earlier when computing z_i. One nice property of BDIA-DDIM is that the update expression for z_i-1 is a linear combination of (z_i+1, z_i, ϵ̂(z_i,i)). This allows for exact backward computation of z_i+1 given (z_i, z_i-1), thus leading to exact diffusion inversion.
Interestingly, the update expression for z_i-1 is in fact time-symmetric in that switching the timestep t_i-1 and t_i+1 produces the inverse update expression for z_i+1 in terms of (z_i,z_i-1).
Experiments on both image reconstruction and image editing were conducted, confirming our statement.
BDIA can also be applied to improve the performance of other ODE solvers in addition to DDIM. In our work, we find that applying BDIA to the EDM sampling procedure produces a slightly better FID score on CIFAR10.
§ INTRODUCTION
As one type of generative models, diffusion probabilistic models (DPMs) have made significant progress in recent years. The pioneering work <cit.> applied non-equilibrium statistical physics to estimating probabilistic data distributions. In doing so, a Markov forward diffusion process is constructed by systematically inserting additive noise into a data sample until the data distribution is almost destroyed. The data distribution is then gradually restored by a reverse diffusion process starting from a simple parametric distribution. The main advantage of DPM over classic tractable models (e.g., HMMs, GMMs, see <cit.>) is that DPM can accurately model both the high and low likelihood regions of the data distribution by estimating a sequence of progressively less noise-perturbed data distributions. In comparison to generative adversarial networks (GANs) <cit.>, DPMs exhibit more stable training dynamics by avoiding adversarial learning, as well as showing better sample diversity.
Following the work of <cit.>, various learning and/or sampling strategies have been proposed to improve the performance of DPMs, which include, for example, denoising diffusion probabilistic models (DDPMs) <cit.>, denoising diffusion implicit models (DDIMs) <cit.>, improved DDIMs <cit.>, latent diffusion models (LDMs)<cit.>, score matching with Langevin dynamics (SMLD) <cit.>, analytic-DPMs <cit.>, optimized denoising schedules <cit.>, guided diffusion strategies <cit.>, and classifier-free guided diffusion <cit.>. It is worth noting that DDIM can be interpreted as a first-order ODE solver. As an extension of DDIM, various high-order ODE solvers have been proposed, such as EDM <cit.>, DEIS <cit.>, PNDM <cit.>, DPM-Solvers <cit.>, and IIA-EDM and IIA-DDIM <cit.>.
In recent years, image-editing via diffusion models has attracted increasing attention in both academia and industry. One important operation for editing a real image is to first run the forward process on the image to obtain its final noise representation and then run a backward process with embedded editing to generate the desired image <cit.>. DDIM inversion has been widely used to perform the above forward and backward processes <cit.>. A major issue with DDIM inversion is that the intermediate diffusion states in the forward and backward processes may be inconsistent due to the inherent approximations (see Subsection <ref>). This issue becomes significant when utilizing the classifier-free guidance technique in text-to-image editing <cit.>. The newly generated images are often perceptually far away from the original ones, which is undesirable for image-editing.
Recently, two methods have been proposed to address the inconsistency issue of DDIM inversion. Specifically,
the work of <cit.> proposed a technique named null-text inversion to push the diffusion states of the backward process to be optimally close to those of the forward process via iterative optimization. The null-text inputs to the score neural network are treated as free variables in the optimization procedure.
In <cit.>, the authors proposed the EDICT technique to enforce exact DDIM inversion. Their basic idea is to introduce an auxiliary diffusion state and then perform alternating updates on the primal and auxiliary diffusion states, which is inspired by the flow generative framework <cit.>. One drawback of EDICT is that the number of neural functional evaluations (NFEs) has to be doubled in comparison to DDIM inversion (See Subsection <ref>). Another related line of research work is DDPM inversion (see <cit.>).
In this paper, we propose a new technique to enforce exact DDIM inversion with negligible computational overhead, reducing the number of NFEs required in EDICT by half. Suppose we are in a position to estimate the next diffusion state z_i-1 at timestep t_i by utilizing the two most recent states z_i and z_i+1. With the estimated Gaussian noise ϵ̂(z_i,i), we perform the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i,t_i+1] in the backward manner. The DDIM step for the previous time-slot is employed to refine the integration approximation made earlier when computing z_i. As a result, the expression for z_i-1 becomes a linear combination of (z_i+1, z_i,ϵ̂(z_i,i)), and naturally facilitates exact diffusion inversion. We refer to the above technique as bi-directional integration approximation (BDIA). We emphasize that the obtained update expression for z_i-1 under BDIA-DDIM is time-symmetric in that switching the timesteps t_i-1 and t_i+1 inverts the diffusion direction (see Section <ref> for a discussion of relevant literature). Experiments demonstrate that BDIA-DDIM produces satisfactory results on both image reconstruction and image editing. We have also applied BDIA to EDM, and found that the image quality is also improved slightly.
§ PRELIMINARY
Forward and reverse diffusion processes:
Suppose the data sample x∈ℝ^d follows a data distribution p_data(x) with a bounded variance. A forward diffusion process progressively adds Gaussian noise to the data samples x to obtain z_t as t increases from 0 until T. The conditional distribution of z_t given x can be represented as
q_t|0(z_t|x) = 𝒩(z_t|α_tx, σ_t^2I), or equivalently z_t = α_tx+σ_t ϵ,
where α_t and σ_t are assumed to be differentiable functions of t with bounded derivatives. We use q(z_t; α_t,σ_t) to denote the marginal distribution of z_t. The samples of the distribution q(z_T;α_T,σ_T) should be practically indistinguishable from pure Gaussian noise if σ_T ≫α_T.
The reverse process of a diffusion model firstly draws a sample z_T from 𝒩(0, σ_T^2I), and then progressively denoises it to obtain a sequence of diffusion states {z_t_i∼ p(z;α_t_i,σ_t_i)}_i=0^N,
where we use the notation p(·) to indicate that reverse sample distribution might not be identical to the forward distribution q(·) because of practical approximations. It is expected that the final sample z_t_0 is roughly distributed according to p_data(x), i.e., p_data(x)≈ p(z_t_0;α_t_0,σ_t_0) where t_0=0.
ODE formulation: In <cit.>, Song et al. present a so-called probability flow ODE which shares the same marginal distributions as z_t in (<ref>). Specifically, with the formulation (<ref>) for a forward diffusion process, its reverse ODE form can be represented as
dz = [f(t)z_t-1/2g^2(t)∇_zlog q(z_t; α_t,σ_t)]dt = d(z_t, t)dt,
where d(z_t,t) denotes the gradient vector at time t, and the two functions f(t) and g(t) are represented in terms of (α_t, σ_t) as
f(t) = dlogα_t/dt, g^2(t)=dσ_t^2/dt-2dlogα_t/dtσ_t^2.
∇_zlog q(z;α_t,σ_t) in (<ref>) is the score function <cit.> pointing towards higher density of data samples at the given noise level (α_t,σ_t). One nice property of the score function is that it does not depend on the generally intractable normalization constant of the underlying density function q(z;α_t,σ_t).
As t increases, the probability flow ODE (<ref>) continuously reduces the noise level of the data samples in the reverse process. In the ideal scenario where no approximations are introduced in (<ref>), the sample distribution p(z;α_t,σ_t) approaches p_data(x) as t goes from T to 0. As a result, the sampling process of a diffusion model boils down to solving the ODE form (<ref>), where randomness is only introduced in the initial sample at time T. This has opened up the research opportunity of exploiting different ODE solvers in diffusion-based sampling processes.
Denoising score matching: To be able to utilize (<ref>) for sampling, one needs to specify a particular form of the score function ∇_zlog q(z;α_t,σ_t). One common approach is to train a noise estimator ϵ̂_θ by minimizing the expected L_2 error for samples drawn from q_data (see <cit.>):
𝔼_x∼ p_data𝔼_ϵ∼𝒩(0, σ_t^2I)ϵ̂_θ(α_t x+σ_tϵ,t)-ϵ_2^2,
where (α_t, σ_t) are from the forward process (<ref>). The common practice in diffusion models is to utilize a neural network of U-Net architecture <cit.> to represent the noise estimator ϵ̂_θ. With (<ref>), the score function can then be represented in terms of
ϵ̂_θ(z_t; t) as (see also (229) of <cit.>)
∇_zlog q(z_t;α_t,σ_t) =-(z_t-α_t x)/σ_t^2 = -ϵ̂_θ(z_t; t)/σ_t.
Alternatively, the score function can be represented in terms of an estimator for x (see <cit.>). The functional form for the noise level (α_t,σ_t) also plays an important role in the sampling quality in practice. For example, the setup (α_t,σ_t)=(1,√(t)) was studied in <cit.>, which corresponds to constant-speed heat diffusion. The recent work <cit.> found that a simple form of (α_t,σ_t)=(1,t) works well in practice.
§ BI-DIRECTIONAL INTEGRATION APPROXIMATION (BDIA) FOR DDIM
In this section, we first review DDIM inversion and EDICT as an extension of DDIM inversion. We then
present our BDIA technique to enable exact diffusion inversion.
§.§ Review of DDIM inversion
We first consider the update expression of DDIM for sampling, which is in fact a first-order solver for the ODE formulation (<ref>)-(<ref>) (see <cit.>), given by
z_i-1= α_i-1(z_i -σ_iϵ̂_θ(z_i, i) /α_i)+σ_i-1ϵ̂_θ(z_i, i)
= a_i z_i +b_iϵ̂_θ(z_i, i)
≈ z_i+∫_t_i^t_i-1d(z_τ,τ)dτ,
where a_i=α_i-1/α_i and b_i=σ_i-1-σ_iα_i-1/α_i. It is clear from (<ref>)-(<ref>) that the integration ∫_t_i^t_i-1d(z_τ,τ)dτ is approximated by the forward DDIM update. That is, only the diffusion state z_i at the starting timestep t_i is used in the integration approximation.
To perform DDIM inversion, z_i can be approximated in terms of z_i-1 as
z_i =α_i(z_i-1-σ_i-1ϵ̂_θ(z_i,i)/α_i-1)+σ_iϵ̂_θ(z_i,i)
≈α_i(z_i-1-σ_i-1ϵ̂_θ(z_i-1,i)/α_i-1)+σ_iϵ̂_θ(z_i-1,i),
where z_i in the RHS of (<ref>) is replaced with z_i-1 to facilitate explicit computation. This naturally introduces approximation errors, leading to inconsistency of the diffusion states between the forward and backward processes.
§.§ Review of EDICT for exact diffusion inversion
Inspired by the flow generative framework <cit.>, the recent work <cit.> proposed EDICT to enforce exact diffusion inversion. The basic idea is to introduce an auxiliary diffusion state y_i to be coupled with z_i at every timestep i. The next pair of diffusion states (z_i-1, y_i-1) is then computed in an alternating fashion as
z_i^inter = a_iz_i + b_iϵ_θ(y_i,i)
y_i^inter = a_iy_i + b_iϵ_θ(z_i^inter,i)
z_i-1 = pz_i^inter+(1-p)y_i^inter
y_i-1 = py_i^inter+(1-p)z_i-1,
where p∈ [0,1] is the weighting factor in the mixing operations and the pair (z_i^inter, y_i^inter) represents the intermediate diffusion states. According to <cit.>, the two mixing operations (<ref>)-(<ref>) are introduced to make the update procedure stable.
Due to the alternating update formalism in (<ref>)-(<ref>),
the computation can be inverted to obtain (z_i, y_i) in terms of (z_i-1, y_i-1) as
y_i^inter = (y_i-1-(1-p)z_i-1)/p
z_i^inter = (z_i-1-(1-p)y_i^inter)/p
y_i = (y_i^inter - b_iϵ_θ(z_i^inter,i))/a_i
z_i = (z_i^inter - b_iϵ_θ(y_i,i))/a_i
Unlike (<ref>)-(<ref>), the inversion of (<ref>)-(<ref>) does not involve any approximation, thus enabling exact diffusion inversion.
Finally, it is clear from the above equations that the NFE that EDICT has to perform is two times the NFE required for DDIM. This makes the method computationally expensive in practice. It is highly desirable to reduce the NFE in EDICT while retaining exact diffusion inversion. We provide such a method in the next subsection.
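For concreteness, one EDICT step and its exact inversion can be sketched as follows; here eps denotes the noise estimator, a and b the DDIM coefficients of (<ref>), and the value of the mixing weight p is illustrative. Note that each step calls the noise estimator twice, which is exactly the doubling of the NFE mentioned above.

# One EDICT step (z_i, y_i) -> (z_{i-1}, y_{i-1}) and its exact inversion.
def edict_step(z, y, i, a, b, eps, p=0.93):           # p: mixing weight (value illustrative)
    z_inter = a[i] * z + b[i] * eps(y, i)
    y_inter = a[i] * y + b[i] * eps(z_inter, i)
    z_next = p * z_inter + (1 - p) * y_inter
    y_next = p * y_inter + (1 - p) * z_next
    return z_next, y_next                             # diffusion states at timestep i-1

def edict_invert(z_next, y_next, i, a, b, eps, p=0.93):
    y_inter = (y_next - (1 - p) * z_next) / p
    z_inter = (z_next - (1 - p) * y_inter) / p
    y = (y_inter - b[i] * eps(z_inter, i)) / a[i]
    z = (z_inter - b[i] * eps(y, i)) / a[i]
    return z, y                                       # recovered states at timestep i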
§.§ BDIA-DDIM for exact diffusion inversion
Reformulation of DDIM update expression: In this section, we present our new technique BDIA to assist DDIM in achieving exact diffusion inversion. To do so, we first reformulate the update expression for z_i-1 in (<ref>) in terms of all the historical diffusion states {z_j}_j=N^i as
z_i-1 =z_N+∑_j=N^iΔ(t_j→ t_j-1|z_j)
≈z_N+∑_j=N^i∫_t_j^t_j-1d(z_τ, τ)dτ ,
where we use Δ(t_j→ t_j-1|z_j) to denote approximation of the integration ∫_t_j^t_j-1d(z_τ,τ)dτ via the forward DDIM step, given by
Δ(t_j→ t_j-1|z_j) =z_j-1 - z_j
=a_jz_j + b_jϵ̂_θ(z_j,j)-z_j.
Replacing forward DDIM by backward DDIM: We argue that, in principle, the integration ∫_t_j^t_j-1d(z_τ,τ)dτ in (<ref>) can be alternatively approximated by the backward DDIM update, expressed as
∫_t_j^t_j-1d(z_τ,τ)dτ≈ - Δ(t_j-1→ t_j|z_j-1),
where the notation Δ(t_j-1→ t_j|z_j-1) denotes the backward DDIM step from t_j-1 to t_j. The minus sign in front of Δ(t_j-1→ t_j|z_j-1) is due to integration over reverse time. The update expression for the backward DDIM step can be represented as
Δ(t_j-1→ t_j|z_j-1) =z_j - z_j-1
=α_j(z_j-1-σ_j-1ϵ̂_θ(z_j-1, j-1) /α_j-1)+σ_jϵ̂_θ(z_j-1, j-1) -z_j-1
=z_j-1/a_j - b_j/a_jϵ̂_θ(z_j-1,j-1) - z_j-1.
It is noted that in practice, we first need to perform a forward DDIM step over [t_j,t_j-1] to obtain z_j-1, and then we are able to perform the backward DDIM step computing Δ(t_j-1→ t_j|z_j-1).
Bi-directional integration approximation (BDIA):
We now present our new BDIA technique. Our primary goal is to develop an update expression for each z_i-1 as a linear combination of (z_i+1, z_i,ϵ̂_θ(z_i,i)). As will be explained in the following, the summation of the integrations ∑_j=N^i∫_t_j^t_j-1d(z_τ,τ)dτ for z_i-1 will involve both forward DDIM updates and backward DDIM updates.
Suppose we are at the initial time step t_N with state z_N. Then the next state z_N-1 is computed by applying the forward DDIM (see (<ref>)):
z_N-1 = a_Nz_N +b_Nϵ̂_θ(z_N, N)
=z_N + Δ(t_N→ t_N-1|z_N).
Upon obtaining z_N-1, we are able to compute Δ(t_N-1→ t_N|z_N-1) over the previous time-slot [t_N-1, t_N] and Δ(t_N-1→ t_N-2|z_N-1) over the next time-slot [t_N-1, t_N-2]. Consequently, the integration ∫_t_N^t_N-1d(z_τ,τ)dτ can be approximated by -Δ(t_N-1→ t_N|z_N-1). We define the update for z_i-1 for i≤ N-1 as below:
When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as
z_i-1 = z_i+1 + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i))
=z_i+1-Δ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i).
We can conclude from (<ref>) that in the computation of each z_i-1, the integration for the most recent time-slot [t_i, t_i-1] is approximated by a forward DDIM update, and the integration for the second most recent time-slot [t_i+1, t_i] is approximated by a backward DDIM update. Fig. <ref> demonstrates how the entire integration ∫_t_N^t_i-1d(z_τ,τ)dτ for different z_i-1 is approximated. It can be seen from the figure that the directions of the integration approximation for neighbouring time-slots are always opposite. In other words, the forward and backward DDIM updates are interlaced over the set of time-slots {(t_j, t_j-1)}_j=N^i for each z_i-1. We summarize the results in a proposition below:
Let z_N-1 and {z_i| i≤ N-2} be computed by following (<ref>) and (<ref>) sequentially. Then for each timestep i≤ N-2, z_i can be represented in the form of
z_i = z_N + Δ(t_N→ t_N-1|z_N) mod(N-i, 2)
+ ∑_j=i+1^N-1(-Δ(t_j→ t_j+1|z_j)+Δ(t_j→ t_j-1|z_j)) mod(j-i,2).
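For concreteness, the BDIA-DDIM sampling recursion can be sketched in a few lines of Python-style pseudocode (an illustration only, not the reference implementation; the noise-prediction network eps_theta and the DDIM coefficients a[i], b[i] are assumed to be supplied by the pre-trained diffusion model):

def bdia_ddim_sample(z_N, N, a, b, eps_theta):
    """Minimal BDIA-DDIM sampling sketch.

    a[i], b[i] are the coefficients of the forward DDIM update
    z_{i-1} = a_i z_i + b_i eps(z_i, i); eps_theta(z, i) is the trained
    noise-prediction network.  Only one network call is made per time step.
    """
    # First step: plain forward DDIM, z_{N-1} = a_N z_N + b_N eps(z_N, N)
    z_prev = z_N                                      # holds z_{i+1}
    z_curr = a[N] * z_N + b[N] * eps_theta(z_N, N)    # holds z_i (here z_{N-1})
    for i in range(N - 1, 0, -1):
        eps_i = eps_theta(z_curr, i)                  # the only NFE at step i
        # forward increment  Delta(t_i -> t_{i-1} | z_i)
        fwd = a[i] * z_curr + b[i] * eps_i - z_curr
        # backward increment Delta(t_i -> t_{i+1} | z_i)
        bwd = z_curr / a[i + 1] - (b[i + 1] / a[i + 1]) * eps_i - z_curr
        # BDIA update: z_{i-1} = z_{i+1} - Delta_bwd + Delta_fwd
        z_prev, z_curr = z_curr, z_prev - bwd + fwd
    return z_curr                                     # z_0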
BDIA-DDIM inversion: Whereas the conventional DDIM inversion (<ref>) requires the approximation z_i-1≈z_i, which is only true in the limit of infinite steps,
the formulation (<ref>) allows exact inversion (up to floating point error). Note that (<ref>) is symmetric in time: switching the timestep t_i+1 and t_i-1 in (<ref>) inverts the diffusion direction. That is, it follows from (<ref>) that the diffusion state z_i+1 can be computed in terms of (z_i, z_i-1) as
z_i+1 = z_i-1 + Δ(t_i→ t_i+1|z_i) - Δ(t_i→ t_i-1|z_i)
=
z_i-1 - [a_iz_i+ b_iϵ̂_θ(z_i, i)]+ (z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)).
We summarize the above property of time-symmetry in a lemma below:
Switching the timestep t_i-1 and t_i+1 in (<ref>) produces the reverse update (<ref>), and vice versa.
Finally, similarly to EDICT, the computation (<ref>) does not involve any approximation and therefore results in exact diffusion inversion.
However, in contrast to EDICT, (<ref>) does not require a doubling of the NFE.
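The corresponding inversion loop can be sketched as follows (again an illustrative Python sketch; it assumes that the two most recent sampled states z_1 and z_0 have been kept, since the time-symmetric update maps the pair (z_i-1, z_i) to z_i+1):

def bdia_ddim_invert(z_0, z_1, N, a, b, eps_theta):
    """Recover the latent z_N exactly (up to floating-point error)
    from the last two states (z_1, z_0) produced by bdia_ddim_sample."""
    z_next, z_curr = z_0, z_1            # (z_{i-1}, z_i) with i = 1
    for i in range(1, N):
        eps_i = eps_theta(z_curr, i)
        fwd = a[i] * z_curr + b[i] * eps_i - z_curr
        bwd = z_curr / a[i + 1] - (b[i + 1] / a[i + 1]) * eps_i - z_curr
        # time-symmetric inverse of the BDIA update:
        # z_{i+1} = z_{i-1} + Delta_bwd - Delta_fwd
        z_next, z_curr = z_curr, z_next + bwd - fwd
    return z_curr                        # z_N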
§ RELATED WORKS
In the literature, there is a branch of research on the development of time-reversible ODE solvers. For instance, Verlet integration is a time-reversible method for solving 2nd-order ODEs <cit.>. Leapfrog integration is another time-reversible method developed for solving 2nd-order ODEs <cit.>.
§ EXPERIMENTS
We conducted two types of experiments: (1) evaluation of image sampling for both BDIA-DDIM and BDIA-EDM; (2) image-editing via BDIA-DDIM. It was found that our new technique BDIA produces promising results for both tasks.
§.§ Evaluation of image sampling
In the first experiment, we consider the task of image sampling. The tested pre-trained models can be found in Appendix <ref>. Given a pre-trained model, 50K artificial images were generated for a particular NFE, and the corresponding FID score was computed.
Tables <ref> and <ref> summarize the computed FID scores. It is clear that incorporating BDIA into both DDIM and EDM improves the FID scores. This can be explained by the fact that BDIA introduces an additional backward integration approximation per time step in the sampling process, which makes the resulting integration approximation more accurate.
§.§ Evaluation of image-editing
In this second experiment, we evaluated BDIA-DDIM for image-editing by utilizing the open-source repository of EDICT[<https://github.com/salesforce/EDICT>]. Fig. <ref> visualizes the obtained results. We point out that BDIA-DDIM produces results very similar to EDICT while requiring approximately half the NFE of EDICT.
§ CONCLUSIONS
In this paper, we have proposed a new technique, BDIA, to assist DDIM in achieving exact diffusion inversion. The key step of BDIA-DDIM is to perform the DDIM update procedure twice at each time step t_i: once over the previous time-slot [t_i, t_i+1] and once over the next time-slot [t_i,t_i-1] when computing z_i-1. By doing so, the expression for z_i-1 becomes a linear combination of (z_i, ϵ̂_θ(z_i,i), z_i+1) that is symmetric in time. As a result, z_i+1 can be computed exactly as a linear function of (z_i, ϵ̂_θ(z_i,i), z_i-1), enabling exact diffusion inversion. Note that although the DDIM update is evaluated twice at each step, this is inexpensive since the costly neural function evaluation is performed only once.
10
Arjovsky17WGAN
M. Arjovsky, S. Chintala, and L. Bottou.
Wasserstein GAN.
arXiv:1701.07875 [stat.ML], 2017.
Bao22DPM_cov
F. Bao, C. Li, J. Sun, J. Zhu, and B. Zhang.
Estimating the Optimal Covariance with Imperfect Mean in Diffusion
Probabilistic Models.
In ICML, 2022.
Bao22DPM
F. Bao, C. Li, J. Zhu, and B. Zhang.
Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance
in Diffusion Probabilistic Models.
In ICLR, 2022.
Bishop06
C. M. Bishop.
Pattern Recognition and Machine Learning.
Springer, 2006.
Chen20WaveGrad
N. Chen, Y. Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan.
WaveGrad: Estimating Gradients for Waveform Generation.
arXiv:2009.00713, September 2020.
Dhariwal21DPM
P. Dhariwal and A. Nichol.
Diffusion models beat gans on image synthesis.
arXiv:2105.05233 [cs.LG], 2021.
Dinh14Nice
L. Dinh, D. Krueger, and Y. Bengio.
Nice: Non-linear independent components estimation.
arXiv preprint arXiv:1410.8516, 2014.
Dinh16DensityEsti
L. Dinh, J. Sohl-Dickstein, and S. Bengio.
Density estimation using real nvp.
arXiv preprint arXiv:1605.08803, 2016.
Goodfellow14GAN
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair,
A. Courville, and Y. Bengio.
Generative Adversarial Nets.
In Proceedings of the International Conference on Neural
Information Processing Systems, pages 2672–2680, 2014.
Gulrajani17WGANGP
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville.
Improved training of wasserstein gans.
In Advances in neural information processing systems, pages
5767–5777, 2017.
Ho20DDPM
J. Ho, A. Jain, and P. Abbeel.
Denoising diffusion probabilistic models.
In NeurIPS, 2020.
Ho22ClassiferFreeGuide
J. Ho and T. Salimans.
Classifier-free diffusion guidance.
arXiv preprint arXiv:2207.12598, 2022.
Huberman23DDPMInversion
I. Huberman-Spiegelglas, V. Kulikov, and T. Michaeli.
An Edit Friendly DDPM Noise Space: Inversion and Manipulations.
arXiv:2304.06140v2 [cs.CV], 2023.
Hyvarinen05ScoreMatching
A. Hyvarinen.
Estimation of non-normalized statistical models by score matching.
Journal of Machine Learning Research, 24:695–709, 2005.
Karras22EDM
T. Karras, M. Aittala, T. Aila, and S. Laine.
Elucidating the Design Space of Diffusion-Based Generative Models.
In 36th Conference on Nueral Information Processing Systems
(NeurIPS), 2022.
Kim22GuidedDiffusion
D. Kim, Y. Kim, S. J. Kwon, W. Kang, and I.-C. Moon.
Refining Generative Process with Discriminator Guidance in
Score-based Diffusion Models.
arXiv preprint arXiv:2211.17091 [cs.CV], 2022.
Kingma18Glow
D. P. Kingma and P. Dhariwal.
Glow: Generative flow with invertible 1x1 convolutions.
In Advances in neural information processing systems, 2018.
Kingma21DDPM
D. P. Kingma, T. Salimans, B. Poole, and J. Ho.
Variational diffusion models.
arXiv: preprint arXiv:2107.00630, 2021.
Lam22BDDM
M. W. Y. Lam, J. Wang, D. Su, and D. Yu.
BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality
Speech Synthesis.
In ICLR, 2022.
Liu22PNDM
L. Liu, Y. Ren, Z. Lin, and Z. Zhao.
Pseudo Numerical Methods for Diffusion Models on Manifolds.
In ICLR, 2022.
Lu22DPM_Solver
C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu.
DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Sampling
in Around 10 Steps.
In NeurIPS, 2022.
Mokady23NullTestInv
R. Mokady, A. Hertz, K. Aberman, Y. Pritch, and D. Cohen-Or.
Null-text Inversion for Editing Real Images using Guided Diffusion
Models.
In CVPR, 2023.
Nichol21DDPM
A. Nichol and P. Dhariwal.
Improved denoising diffusion probabilistic models.
arXiv preprint arXiv:2102.09672, 2021.
Nichol22GLIDE
A. Nichol, P. Dharwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew,
I. Sutskever, and M. Chen.
GLIDE: Towards Photorealistic image generation and editing with
text-guided diffusion models.
In ICML, 2022.
Rombach22LDM
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer.
High-resolution image synthesis with latent diffusion models.
In CVPR, 2022.
Rombach22StableDiffusion
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer.
On High-resolution image synthesis with latent diffusion models.
In CVPR, page 10684–10695, 2022.
Ronneberger15Unet
O. Ronneberger, P. Fischer, and T. Brox.
U-Net: Convolutional Networks for Biomedical Image Segmentation.
arXiv:1505.04597 [cs.CV], 2015.
Saharia22Imagen
C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S.-K.-S.
Ghasemipour, B.-K. Ayan, S. S. Mahdavi, R.-G. Lopes, T. Salimans, J. Ho,
D. J. Fleet, and M. Norouzi.
Photorealistic text-to-image diffusion models with deep language
understanding.
arXiv preprint arXiv:2205.11487, 2022.
Sauer22StyleGAN
A. Sauer, K. Schwarz, and A. Geiger.
StyleGAN-XL: Scaling StyleGAN to large diverse datasets.
In SIGGRAPH, 2022.
Shi23DragDiffusion
Y. Shi, C. Xue, J. Pan, and W. Zhang.
DragDiffusion: Harnessing Diffusion Models for Interactive
Point-based Image Editing.
arXiv:2306.14435v2, 2023.
Dickstein15DPM
J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli.
Deep unsupervised learning using nonequilibrium thermodynamics.
ICML, 2015.
Song21DDIM
J. Song, C. Meng, and S. Ermon.
Denoising Diffusion Implicit Models.
In ICLR, 2021.
Song21DPM
Y. Song, C. Durkan, I. Murray, and S. Ermon.
Maximum likelihood training of score-based diffusion models.
In Advances in neural information processing systems (NeurIPS),
2021.
Song19
Y. Song and S. Ermon.
Generative modeling by estimating gradients of the data
distribution.
In Advances in neural information processing systems (NeurIPS),
page 11895–11907, 2019.
Song21SDE_gen
Y. Song, J. S.-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole.
Score-Based Generative Modeling Through Stochastic Differential
Equations.
In ICLR, 2021.
Wallace23EDICT
B. Wallace, A. Gokul, and N. Naik.
EDICT: Exact Diffusion Inversion via Coupled Transformations.
In CVPR, 2023.
Verlet67VerletInt
L. Verlet.
Computer Experiments on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules.
Physical Review, 159:98–103, 1967.
Skeel93leapfrog
R. D. Skeel.
Variable Step Size Destabilizes the Störmer/Leapfrog/Verlet Method.
BIT Numerical Mathematics, 33:172–175, 1993.
GuoqiangIIA23
G. Zhang, K. Niwa, and W. B. Kleijn.
On Accelerating Diffusion-Based Sampling Processes by Improved
Integration Approximation.
arXiv:2304.11328 [cs.LG], 2023.
Zhang22DEIS
Q. Zhang and Y. Chen.
Fast Sampling of Diffusion Models with Exponential Integrator.
arXiv:2204.13902 [cs.LG], 2022.
§ EXTENSION OF THE UPDATE PROCEDURE OF (<REF>)
As an extension of (<ref>), we can also compute z_i-1 by the update below:
When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as
z_i-1 = γ(z_i+1-z_i) + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-γ(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)-z_i)
=z_i+γ(z_i+1-z_i)-γΔ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i),
where γ∈ [0,1].
§ TESTED PRE-TRAINED MODELS FOR BDIA-DDIM AND BDIA-EDM
|
http://arxiv.org/abs/2307.06002v1 | 20230712083015 | On off-critical zeros of lattice energies in the neighborhood of the Riemann zeta function | [
"Laurent Bétermin",
"Ladislav Šamaj",
"Igor Travěnec"
] | math.NT | [
"math.NT",
"math-ph",
"math.CV",
"math.MP",
"11E45"
] |
Institut Camille Jordan, Université Claude Bernard Lyon 1,
69622 Villeurbanne, France
Institute of Physics, Slovak Academy of Sciences,
Dúbravská cesta 9, 84511 Bratislava, Slovakia
The Riemann zeta function ζ(s):= ∑_n=1^∞ 1/n^s
can be interpreted as the energy per point of the lattice ℤ,
whose points interact pairwise via the Riesz potential 1/r^s.
Given a parameter Δ∈ (0,1], this physical model is generalized
by considering the energy per point E(s,Δ) of a periodic
one-dimensional lattice alternating the distances between the
nearest-neighbour particles as 2/(1+Δ) and 2Δ/(1+Δ),
keeping the lattice density equal to one independently of Δ.
This energy trivially satisfies E(s,1)=ζ(s) at Δ=1,
it can be easily expressed as a combination of the Riemann and
Hurwitz zeta functions, and extended analytically to the punctured
s-plane ∖{ 1}.
In this paper, we perform numerical investigations of the zeros of
the energy {ρ=ρ_x+ iρ_y}, which are defined by
E(ρ,Δ)=0.
The numerical results reveal that in the Riemann limit Δ→ 1^-
these zeros include the anticipated critical zeros of the Riemann zeta
function with ρ_x=1/2 as well as an unexpected
– compared to the Riemann Hypothesis – infinite series of off-critical
zeros.
The analytic treatment of these off-critical zeros shows that their
imaginary components are equidistant and their real components diverge
logarithmically to -∞ as Δ→ 1^-, i.e., they become invisible
at the Riemann point Δ=1.
Riemann zeta function; Hurwitz zeta function; critical and off-critical zeros;
Riemann hypothesis
[2010]11E45
§ INTRODUCTION AND MAIN RESULTS
Let two points at distance r interact via the Riesz potential
1/r^s with real s <cit.>.
If the points are located on the lattice ℤ and interact pairwise
via the Riesz potential with s>1, the energy per point
is given by the Riemann zeta function <cit.>
ζ(s) := 1/2∑_n∈^*1/| n|^s = ∑_n=1^∞1/n^s s>1 ,
where the prefactor 1/2 is due to the fact that each interaction
energy is shared by a pair of points.
The function ζ can be analytically continued to the whole complex
s-plane, with a simple pole at s=1.
The Riemann zeta function plays a fundamental role in the algebraic and
analytic number theories
<cit.>,
see monographs <cit.>.
The so-called Riemann Hypothesis about the location of its nontrivial zeros
exclusively on the critical line ℜ(s)=1/2 (the symbol ℜ means
the real part) is one of the Hilbert and Clay Millennium Prize problems
<cit.>.
Throughout the present paper we assume that the Riemann Hypothesis holds.
The Riemann zeta function and its Epstein's
<cit.>,
Hurwitz's <cit.>, Barnes's <cit.>, etc.
generalisations have numerous applications both in mathematics
(prime numbers, applied statistics <cit.>) and
in physics <cit.>.
Let the Riemann zeta function be a member of a family of functions which
exhibit nontrivial zeros off the critical line.
Possible mechanisms of the disappearance of these off-critical zeros
at the Riemann point might then be of general interest.
In this paper, we propose a natural extension of the Riemann zeta function
as the energy of a unit density lattice L_Δ with alternating distances
between the nearest neighbours, say 2/(1+Δ) and
2Δ/(1+Δ); due to the Δ→ 1/Δ symmetry of
the problem, it is sufficient to restrict oneself to Δ
from the interval (0,1].
In analogy with the original model with constant unit spacing, each point
interacts pairwisely with the other points via the Riesz interaction
1/r^s, s>1 and the lattice energy per point is therefore given
(see Proposition <ref>) by
E(s,Δ) = 1/2^sζ(s) + 1/2^s+1[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ) ], s>1, Δ∈ (0,1],
where
ζ(s,a) = ∑_n=0^∞1/(n+a)^s , s > 1,
is the Hurwitz zeta function with the real (positive) parameter a.
Remark that this lattice energy, as a combination of Riemann and
Hurwitz zeta functions, has an analytic continuation on
\{1} (see Proposition <ref>).
For given Δ∈ (0,1], the set of zeros of the lattice energy
is defined as
Z_Δ:={ρ=ρ_x+ iρ_y∈ℂ, (ρ_x,ρ_y)∈ℝ^2 :
E(ρ,Δ)=0}, noticing that Z_1 is the set of zeros of
the Riemann zeta function, composed of the trivial zeros (i.e., ρ∈ -2ℕ)
and the critical zeros (i.e., ℜ(ρ)=1/2), assuming that
the Riemann Hypothesis holds.
Furthermore, for specific values of the parameter
Δ∈{1/5,1/3,1/2}, the energy can be factorized as
E(s,Δ)=f_Δ(s)ζ(s) where f_Δ is a sum of p^s with
integers p.
This automatically gives us critical zeros (i.e. solutions of ζ(ρ)=0
assuming the Riemann Hypothesis) and possible off-critical zeros
(i.e. solutions of f_Δ(ρ)=0), as shown in
Proposition <ref>.
The goal of this paper is to study, both numerically and analytically,
the set of zeros Z_Δ when Δ is in a neighborhood of 1,
i.e. when E(·,Δ) is in the neighborhood of
the Riemann zeta function.
Numerical and analytic analysis shows that approaching Δ→ 1^- the
zeros of E(ρ,Δ) involve the anticipated critical zeros of the Riemann
zeta function with ρ_x=1/2 as well as an infinite series
of unexpected off-critical zeros with the following asymptotics for their
components, as Δ→ 1^- (see Theorem <ref>):
ρ_x(Δ)= 2/ln 2ln(1-Δ)+O(1-Δ)→ -∞ ,
ρ_y(Δ)= (2k+1)π/ln 2+O((1-Δ)^2ln 3/ln 2), k∈ℤ.
This means that, asymptotically, there is an infinite sequence of
equidistant zero components along the ρ_y axis.
Furthermore, the divergence of ρ_x to -∞ as Δ→ 1^-
is an example of the disappearance of off-critical zeros when approaching
the Riemann's point.
Moreover, the behavior of these zero components with respect to
Δ∈ (0,1] is numerically studied (see Figures <ref> and <ref>).
Plan of the paper.
The generalized 1D model for Riesz points with alternating lattice
spacings is presented in section <ref>.
The energy per particle E(s,Δ) is expressed as a combination
of Hurwitz zeta functions in section <ref>.
The properties of the Hurwitz zeta function are discussed in section
<ref>.
Special values of the parameters Δ when the energy E(s,Δ)
factorizes itself onto the product of the Riemann zeta function and
some simple function are given in section <ref>.
Numerical results for zeros at any 0<Δ<1, together with tests
at the special values of Δ=1/5,1/3,1/2 are presented in section
<ref>.
The spectrum of critical and off-critical zeros in the Riemann's
limit Δ→ 1^- is discussed in section <ref>.
§ THE GENERALIZED ONE-DIMENSIONAL MODEL
§.§ Definition of the model
Given Δ∈ (0,1], we consider the infinite set of points
L_Δ⊂ℝ given by
L_Δ:=2ℤ∪( 2ℤ + 2Δ/1+Δ),
which is the unit density periodic configuration with alternating distances 2/(1+Δ)
and 2Δ/(1+Δ), since 2-2/(1+Δ)=2Δ/(1+Δ). Assuming that each pair of points in L_Δ interacts via the Riesz potential 1/r^s, s>1, the total energy per point of this system is therefore
E(s,Δ):=1/4∑_k∈{0,2Δ/1+Δ}∑_p∈ L_Δ p≠ k 1/|p-k|^s.
The following proposition shows how to write this energy in terms of Riemann and Hurwitz zeta functions.
For any s>1 and any Δ∈ (0,1], we have
E(s,Δ) = 1/2^sζ(s) + 1/2^s+1[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ) ].
We simply compute the above double sum as follows:
E(s,Δ): = 1/4∑_k∈{0,2Δ/1+Δ}∑_p∈ L_Δ p≠ k 1/|p-k|^s
=1/4∑_k∈{0,2Δ/1+Δ}(∑_p∈ 2 p≠ k 1/|p-k|^s +
∑_p∈2+ 2Δ/1+Δ p≠ k 1/|p-k|^s)
=1/4∑_n∈ n≠ 01/|2n|^s
+1/4∑_n∈ 2n≠2Δ/1+Δ1/| 2n- 2Δ/1+Δ|^s
+ 1/4∑_n∈ 2n≠ - 2Δ/1+Δ1/| 2n+ 2Δ/1+Δ|^s
+1/4∑_n∈ n≠ 01/|2n|^s
=1/2^s+1∑_n∈^*1/|n|^s
+1/2^s+2∑_n∈1/| n+ Δ/1+Δ|^s+1/2^s+2∑_n∈1/| n- Δ/1+Δ|^s.
We now split the two last sums in order to get two Hurwitz zeta functions and two rests that we write again in terms of the same Hurwitz zeta functions:
E(s,Δ): =1/2^sζ(s)
+1/2^s+2[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ)]
+1/2^s+2(∑_n=1^+∞1/| -n+ Δ/1+Δ|^s
+∑_n=1^+∞1/| -n+ 1/1+Δ|^s)
=1/2^sζ(s)+1/2^s+2[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ)]
+1/2^s+2(∑_n=1^+∞1/( n- Δ/1+Δ)^s
+∑_n=1^+∞1/( n- 1/1+Δ)^s).
Since we have, by the change of variables n=k+1,
∑_n=1^+∞1/( n- Δ/1+Δ)^s
=∑_k=0^+∞1/(k+1/1+Δ)^s and∑_n=1^+∞1/( n- 1/1+Δ)^s
=∑_k=0^+∞1/(k+Δ/1+Δ)^s,
we obtain
E(s,Δ) =1/2^sζ(s)+2/2^s+2[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ)]
=1/2^sζ(s)+1/2^s+1[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ)]
and the proof is complete.
Notice that the energy satisfies the required symmetry relation
E(s,Δ) = E(s,1/Δ).
It is known (see e.g. <cit.>), by a convexity argument
(or by the so-called one-dimensional “Universal Optimality" of ℤ,
see <cit.>) that, for all s>0,
min_Δ∈ (0,1] E(s,Δ)=E(s,1)=ζ(s),
with equality if and only if Δ=1. From our results (see Theorem <ref>), the Riemann zeta function is therefore at the same time the minimal value of our
energy and the only one in its Δ-neighborhood for which the non-trivial zeros are strictly
located on the critical line (s)=1/2. It might be interesting to investigate other lattice energies to understand how universal this phenomenon is.
§.§ The Hurwitz zeta function and the analytic continuation of E(·, Δ)
The Hurwitz zeta function (<ref>) is a generalization of
the Riemann zeta-function (<ref>) via a shift a>0.
In particular,
ζ(s,1) = ζ(s).
In the symbolic computer language Mathematica, the Riemann
and Hurwitz zeta functions are tabulated under the symbols Zeta[s]
and Zeta[s,a], respectively.
The Hurwitz zeta function satisfies two easily verifiable important equalities,
∀ x∈[ 0, 1/2], ∀ s>1,
ζ(s,x) + ζ(s,1/2+x) = 2^s ζ(s,2x) ,
and the multiplication theorem
∀ k∈ℕ, ∀ s>1, k^s ζ(s)
=∑_n=1^kζ( s, n/k).
The second relation easily implies that
ζ(s,1/3) + ζ(s,2/3) = (3^s-1) ζ(s)
as well as
ζ(s,1/2) = (2^s-1) ζ(s).
From (<ref>) with x=1/4 and (<ref>), we therefore obtain
ζ(s,1/4)+ζ(s,3/4)=(4^s-2^s)ζ(s).
From (<ref>) and (<ref>), it is also straightforward to check
that E(s,1)=ζ(s).
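These identities are straightforward to verify numerically; the short Python/mpmath sketch below (an illustration only, not part of the original derivation) evaluates E(s,Δ) directly from its expression in terms of the Riemann and Hurwitz zeta functions and checks two of the relations above:

from mpmath import mp, mpf, zeta

mp.dps = 30  # working precision in decimal digits

def E(s, delta):
    # E(s,Delta) = zeta(s)/2^s + [zeta(s,1/(1+Delta)) + zeta(s,Delta/(1+Delta))]/2^(s+1)
    d = mpf(delta)
    return (zeta(s) / mpf(2)**s
            + (zeta(s, 1/(1 + d)) + zeta(s, d/(1 + d))) / mpf(2)**(s + 1))

s = mpf(3)                                            # any s with real part > 1 will do
print(E(s, 1) - zeta(s))                              # ~ 0: E(s,1) reduces to the Riemann zeta function
print(zeta(s, mpf(1)/2) - (mpf(2)**s - 1)*zeta(s))    # ~ 0: zeta(s,1/2) = (2^s-1) zeta(s)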
Furthermore, it is clear that (<ref>) holds for all s∈ℂ such
that ℜ(s)>1 and we get the following result by directly applying
the classical one (see e.g. <cit.>)
on the analytic continuation of the Riemann and Hurwitz zeta functions.
For all Δ∈ (0,1], the function s↦ E(s,Δ) has
an analytic continuation on ℂ\{1}.
Furthermore, we have, for all s such that ℜ(s)<1 and all
Δ∈ (0,1],
π^-s/2Γ(s/2)E(s,Δ)
= 1/2^s+1{∫_0^∞[ϑ(1/1+Δ,it)-1]
t^-1-s/2 dt + f(s) } ,
where ϑ(z,it)=∑_n∈ℤ e^-π n^2 te^2iπ n z
is the Jacobi theta function defined for t>0 and z∈ℝ, and
f(s) = ∫_0^∞[ ϑ(0,it)-1-1/√(t)]
t^s/2-1 dt
= ∫_0^∞[ ϑ(0,it)-1/√(t)]
t^s/2-1 dt
Recall that, for Δ∈ (0,1] and (s)>1, we have
E(s,Δ) = 1/2^sζ(s) + 1/2^s+1[ ζ(s,1/1+Δ)
+ζ(s,Δ/1+Δ) ].
It has been shown in <cit.> that
s↦ζ(s) and s↦ζ(s,a), a>0, admit an analytic
continuation to \{1}, which implies the same for
s↦ E(s,Δ).
Furthermore, writing z=1/1+Δ, we have
E(s,Δ)= 1/2^sζ(s) + 1/2^s+1[ ζ(s,z)
+ζ(s,1-z) ] .
Considering the analytic continuation of ζ(1-α,a), a>0,
the following formula is well-known <cit.> for all
α such that ℜ(α)>0:
π^-1-α/2Γ( 1-α/2)
[ ζ(1-α,z)
+ζ(1-α,1-z) ]
=∫_0^∞[ϑ(z,it)-1]t^α/2 dt/t
holding for z∉ℤ (see the remark in Theorem 12.6 on page 257
of <cit.>).
Replacing α by s=1-α (so that ℜ(s)<1 for ℜ(α)>0),
we write
π^-s/2Γ(s/2)E(s,Δ)
= π^-s/2/2^sΓ(s/2)ζ(s)
+ π^-s/2/2^s+1Γ(s/2)
[ ζ(s,z) +ζ(s,1-z) ]
=2π^-s/2/2^s+1Γ(s/2)ζ(s)
+ π^-s/2/2^s+1Γ(s/2)
[ ζ(s,z) +ζ(s,1-z) ] .
Next we replace
π^-s/2Γ(s/2)[ ζ(s,z)
+ζ(s,1-z) ]
by the integral given in (<ref>) and
2π^-s/2Γ(s/2)ζ(s)
by the d=1 integral in Eq. (21) of <cit.> for 0<ℜ(s)<1 and
by the d=1 integral in Eq. (22) of <cit.> for ℜ(s)<0
to complete the proof.
Therefore, we can consider the zeros of s↦ E(s,Δ)
in ℂ\{1} defined as
Z_Δ:={ρ=ρ_x+ iρ_y∈ℂ, (ρ_x,ρ_y)∈ℝ^2 : E(ρ,Δ)=0},
noticing that Z_1 is the set of zeros of the Riemann zeta function.
We recall that, according to the Riemann Hypothesis,
Z_1=-2ℕ∪ Z^C, Z^C⊂{ℜ(z)=1/2},
where Z^C is called the set of critical zeros of ζ and -2ℕ is the set of trivial zeros of ζ. We are going to see in the next sections that Z_Δ can have other nontrivial off-critical zeros.
§.§ Factorization and zeros of the energy for special values of Δ
There exist special values of Δ for which the energy E(s,Δ)
factorizes itself into a product of the Riemann zeta function ζ(s) and
some simple functions of s, by using the previously presented relations
(<ref>) and (<ref>).
For these cases, both critical and off-critical zeros can be found easily.
The most obvious choice of Δ is Δ=1 for which we have
E(s,1)=ζ(s).
In the cases Δ∈{1/2, 1/3} we have the following result giving the zeros of E(s,Δ) as well as the factorization of the energy.
For all s∈\{1}, we have
E(s,1/2)=1/2^s+1(1+3^s)ζ(s) and
E(s,1/3)=1/2^s+1(2-2^s+4^s) ζ(s) .
Furthermore, the zeros of E(s,1/2) and E(s,1/3) are
Z_1/2=Z_1 ∪{(2k+1) iπ/ln 3}_k∈ℤ,
Z_1/3=Z_1∪{1/ln2[ ln( 1± i√(7)/2)
+2 π i k] }_k∈ℤ.
For Δ=1/2, one has
1/1+Δ=2/3 and
Δ/1+Δ=1/3 and therefore,
using (<ref>), it holds that
E(s,1/2)=1/2^s+1(1+3^s)ζ(s).
where the function 1+3^s yields an infinite sequence of (purely imaginary)
off-critical zeros
ρ_k = (2k+1) iπ/ln3, k∈ℤ.
Furthermore, for Δ=1/3, we have
1/1+Δ=3/4 and
Δ/1+Δ=1/4 and therefore,
applying (<ref>), it holds that
E(s,1/3)=1/2^s+1(2-2^s+4^s) ζ(s).
The function 2-2^s+4^s yields the following zeros
ρ_k=1/ln2[ ln( 1± i√(7)/2)
+2 π i k] , k∈ℤ.
In the Δ=1/3 case, since all these zeros have the real part equal to 1/2, the energy for Δ=1/3 exhibits only critical zeros.
The last factorization we are considering in our paper corresponds to Δ=1/5:
E(s,1/5)=1/2^s+1(3-2^s-3^s+6^s)ζ(s).
The function 3-2^s-3^s+6^s exhibits only off-critical zeros which can be
found only numerically, e.g., s≈ 0.635084± 1.07885 i.
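Such zeros are easy to locate with a standard complex root finder; the following Python/mpmath sketch (illustrative only, with an arbitrary starting guess) recovers the quoted zero:

from mpmath import mp, mpf, mpc, findroot

mp.dps = 30

def f(s):
    # factor of E(s,1/5) in the factorization above
    return 3 - mpf(2)**s - mpf(3)**s + mpf(6)**s

rho = findroot(f, mpc(0.6, 1.1))   # rough starting point near the first zero
print(rho)                         # approximately 0.635084 + 1.07885i, i.e. off the critical line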
§ NUMERICAL RESULTS
The starting point of our numerical determination of zeros of the energy
E(s,Δ) was the case Δ=1/2, with the factorization form (<ref>),
whose spectrum of zeros involves both the critical zeros of the Riemann
zeta function as well as an infinite set of off-critical zeros (<ref>).
It was checked that the accuracy of determination of complex zeros by using
the symbolic language Mathematica is 34-35 decimal digits for both real
and imaginary components.
Then we proceeded to the left and right from this point Δ=1/2 by
changing successively Δ by a small amount to avoid an uncontrolled
skip between neighbouring branches of zeros.
Our numerical experience indicates that changing Δ by 0.01 is
certainly safe from this point of view.
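A schematic Python/mpmath version of this continuation procedure is shown below (an illustration only; the actual computations reported here were performed in Mathematica at much higher precision). The zero found at the previous value of Δ seeds the root search at the next one:

from mpmath import mp, mpf, mpc, zeta, findroot

mp.dps = 30

def E(s, delta):
    # analytic continuation of the energy via the Riemann and Hurwitz zeta functions
    d = mpf(delta)
    return (zeta(s) / mpf(2)**s
            + (zeta(s, 1/(1 + d)) + zeta(s, d/(1 + d))) / mpf(2)**(s + 1))

# start from the known off-critical zero of E(s,1/2), rho = i*pi/ln 3 (k = 0 branch)
rho = mpc(0, mp.pi / mp.log(3))
delta = mpf('0.5')
branch = []
while delta > mpf('0.3'):                        # follow the branch towards smaller Delta
    rho = findroot(lambda s: E(s, delta), rho)   # previous zero seeds the next search
    branch.append((delta, rho))
    delta -= mpf('0.01')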
We observe the following:
* The zeros of E(s,Δ) form continuous non-crossing curves in the
complete (ρ_x, ρ_y, Δ) space.
* Nevertheless, the curves may intersect in the reduced spaces
(ρ_y,Δ) and (ρ_x,Δ), see Figures <ref> and <ref>,
respectively.
Notice that the values of Δ∈{1/5,1/3,1/2} serve as
test points of our numerical calculations.
The dependence of the imaginary component of zeros ρ_y, in the range
of its values [0,25], on the parameter Δ∈ (0,1] is pictured
in Figure <ref>.
The special cases Δ∈{1/5,1/3,1/2}, when the energy factorizes
itself onto a product of the Riemann zeta function and a simple function,
are visualized by vertical dashed lines.
These cases yield us precise values of the corresponding zeros and also make
us sure not to miss any curve of zeros.
The critical zeros with ρ_x=1/2 are denoted by red colour,
all other zeros are off-critical; the off-critical zeros for the special
values of Δ∈{1/5,1/2} are denoted by blue colour. We observe two types of zero curves:
* The first three “standard” zero curves ρ_y(Δ), which end up at
the Riemann critical zeros at Δ=1, are represented by full symbols.
* The first three “non-standard” curves ρ_y(Δ) are represented
by open symbols.
Since in the limit Δ→ 1^- these curves tend to off-critical zeros
with the divergent real component ρ_x→ -∞, the curves end up with
crosses indicating the absence of off-critical zeros at the Riemann's
Δ=1.
The dependence of the real part ρ_x of the first three
“non-standard” energy zeros on Δ∈ (0,1] is presented
in Figure <ref> by open symbols, in close analogy with Figure <ref>.
As before, the critical zeros with ρ_x=1/2 are denoted by
red colour. We observe the following:
* For Δ⪆ 0.75, the three curves coincide
on the considered scale and go to -∞ as Δ→ 1^-.
§ ANALYTIC RESULTS IN THE LIMIT Δ→ 1^-
Approaching Δ→ 1^-, with regard to Eq. (<ref>) one anticipates
the presence of critical zeros of the Riemann zeta function for E(s,Δ).
Surprisingly, as was already indicated, there are also additional curves of
off-critical zeros.
To derive coordinates of these off-critical zeros, we set
Δ=1-ε in (<ref>) and expand the energy in Taylor series
in the small positive ε→ 0^+ up to the order ε^5.
Let s∈\{1}, then, as ε→ 0^+,
E(s,1-ε) = ζ(s) + 2^2+s-1/2^5+s s (1+s)
ζ(2+s) ( ε^2 + ε^3
+ 3/4ε^4 + 1/2ε^5 )
+ 1/32^4+s-1/2^11+s s (1+s) (2+s) (3+s)
ζ(4+s) ( ε^4 + 2 ε^5 )
+ O(ε^6) .
It directly follows from the Taylor expansion of the Hurwitz zeta function
(see e.g. <cit.>): for |a|<1,
ζ(s,a)=1/a^s+∑_n=0^∞ (-a)^n
s+n-1nζ(s+n) ,
where the binomial coefficient for a complex s has to be understood as
s+n-1n = s (s+1) (s+2) ⋯ (s+n-1)/n!,
as well as the analytic continuation of s↦ E(s,Δ).
It is clear that in the limit ε→ 0^+ the zeros of
E(s,1-ε) coincide trivially with the critical ones
of the Riemann zeta function ζ(s).
Let us compute the ε→ 0^+ asymptotics of the other
nontrivial zeros.
The nontrivial off-critical zeros of E(s,1-ε) are given, as
ε→ 0^+, by {ρ(k)=ρ_x(k)+ iρ_y(k)}_k∈ℤ
where
ρ_x(k)= 2/ln2lnε +
( -3 + 2/ln 2lnπ) + 1/ln 2ε
+ 1/ln 2( 1/4 + 7 π^2/24) ε^2 + 1/ln 2( 1/12
+ 7 π^2/24) ε^3
+ 8/3 ln 2( π^2/8)^ln 3/ln 2cos[ ln 3/ln2 (2k+1) π]
ε^2ln 3/ln 2+ o(ε^2ln 3/ln 2),
ρ_y(k)=1/ln2 (2k+1)π + 8/3 ln 2( π^2/8)^ln 3/ln 2sin[ ln 3/ln2 (2k+1) π]
ε^2ln 3/ln 2
+ o(ε^2ln 3/ln 2).
In particular,
* Vanishing of off-critical zeros: we have
lim_ε→ 0^+ρ_x(k)=-∞;
* Asymptotic crystallization of their imaginary parts: at first order,
as ε→ 0^+, the imaginary parts of off-critical zeros are
equidistributed on the lattice (2ℤ+1)π/ln 2.
The other nontrivial zeros {ρ}, besides the one of the Riemann
zeta function, correspond to solutions of the equation
2^ρ = 1-2^2+ρ/2^5ρ (1+ρ) ζ(ρ+2)/ζ(ρ)( ε^2 + ε^3 + 3/4ε^4
+ 1/2ε^5 )
+ 1/31-2^4+ρ/2^11ρ (1+ρ) (2+ρ) (3+ρ) (4+ρ) ζ(4+ρ)/ζ(ρ)( ε^4 + 2 ε^5 )
+ O(ε^6) .
As will be showed later, the component ρ_x of ρ=ρ_x+ iρ_y
goes to -∞ as ε→ 0^+.
To simplify our computations, one applies the well known duality transformation
π^-s/2Γ( s/2) ζ(s) =
π^(s-1)/2Γ( 1-s/2) ζ(1-s)
to each Riemann zeta function in (<ref>).
Using then the formula Γ(x+1) = x Γ(x),
one ends up with the result
2^ρ = - 1-2^2+ρ/2^3π^2
ζ(-1-ρ)/ζ(1-ρ)( ε^2 + ε^3 + 3/4ε^4
+ 1/2ε^5 )
+ 1/31-2^4+ρ/2^7π^4 ζ(-3-ρ)/ζ(1-ρ)( ε^4 + 2 ε^5 ) + O(ε^6) .
In the limit ε→ 0^+, the r.h.s. of this equation vanishes
and, consequently, the component ρ_x of ρ=ρ_x+ iρ_y
must go to -∞ as indicated before.
In the limit ρ_x→ -∞, the ratios of Riemann zeta functions
in (<ref>) can be expanded as follows
ζ(-1-ρ)/ζ(1-ρ) =
1+2^ρ+1 + 3^ρ+1+∑_k≥ 4k^ρ+1/1 + 2^ρ-1
+ 3^ρ-1+∑_k≥ 4k^ρ-1 = 1 + 3/2 2^ρ
+ 8/3 3^ρ + O(4^ρ)
and
ζ(-3-ρ)/ζ(1-ρ) =
1+2^ρ+3 + 3^ρ+3+∑_k≥ 4k^ρ+3/1 + 2^ρ-1
+ 3^ρ-1+∑_k≥ 4k^ρ-1 = 1 + 15/2 2^ρ
+ O(3^ρ) .
In the leading order of the smallness parameter ε, it holds that
2^ρ_x+ iρ_y = - π^2/8ε^2 +o(ε^2).
Since the right-hand side of this equation is real and negative,
the leading order of the ρ_y-component is given by
2^ iρ_y = -1 + o(1), or, equivalently,
ρ_y(k) = 1/ln2 (2k+1) π + o(1) , k∈ℤ.
This means that in the limit ε→ 0^+ there exists an infinite
sequence of equidistant zero components along the ρ_y axis.
As follows from (<ref>), the x-component of these zeros diverges
logarithmically as ε→ 0^+:
ρ_x(k) = 2/ln2lnε + ( -3 + 2/ln 2lnπ) + O(ε).
Note that the leading terms are the same for any value of k.
This behavior can be seen in Figure <ref>.
Higher orders of the expansion of ρ_y(k) and ρ_x(k)
in ε can be obtained by inserting the leading order expressions
(<ref>) and (<ref>) directly into the basic relation (<ref>).
Performing the expansion procedure in ε it is important to
realize that
3^s =
( π^2/8ε^2 )^ln 3/ln 2exp[ iln 3/ln 2 (2k+1) π]
+ o(ε^2ln 3/ln 2)
is of order 2ln 3/ln 2 ≈ 3.17>3.
After simple algebra one obtains the desired asymptotics for ρ_x(k)
and ρ_y(k).
Comparison with our numerics.
To check numerically our expansion in ε for the imaginary parts of
the first three (k=0,1,2) off-critical zeros, let us define the deviations
from their ε=0 values as follows
δρ_y(k) := ρ_y(k) - 1/ln2 (2k+1)π .
We know from (<ref>) that the deviations are expected to behave in
the region of the small anisotropy parameter ε→ 0^+ as
δρ_y(k) = 8/3 ln 2( π^2/8)^ln 3/ln 2sin[ ln 3/ln2 (2k+1) π]
ε^2ln 3/ln 2 +o(ε^2ln 3/ln 2).
The numerical results for δρ_y(k) are depicted by open circles
(k=0), squares (k=1) and triangles (k=2) in Figure <ref>.
It is seen that the numerical data fit perfectly the plots given by the
asymptotic formula (<ref>), represented by dashed curves,
for small values of ε≤ 0.02.
As concerns the real parts of the first three (k=0,1,2) off-critical zeros,
we define the deviations as follows
δρ_x(k) := ρ_x(k) -2/ln2lnε -
( -3 + 2/ln 2lnπ) -1/ln 2ε
- 1/ln 2( 1/4 + 7 π^2/24)
ε^2 - 1/ln 2( 1/12
+ 7 π^2/24) ε^3 .
It is obvious from (<ref>) that the deviations are anticipated
to behave for small values of anisotropy ε→ 0^+ as
δρ_x(k) = 8/3 ln 2( π^2/8)^ln 3/ln 2cos[ ln 3/ln2 (2k+1) π]
ε^2ln 3/ln 2 +o( ε^2ln 3/ln 2).
The numerical results for δρ_x(k) are represented by open circles
(k=0), squares (k=1) and triangles (k=2) in Figure <ref>.
The numerical data fit very well the plots deduced from the
asymptotic formula (<ref>) (dashed curves).
§ ACKNOWLEDGEMENTS
The support received from VEGA Grant No. 2/0092/21
and Project EXSES APVV-20-0150 is acknowledged.
10
Apostol76T.M. Apostol,
Introduction to analytic number theory,
Springer, New York,1976.
Barnes04E.W. Barnes,
On the theory of the multiple gamma function,
Trans. Camb. Philos. Soc. 19 (1904) 374–-425.
Borwein13J.M. Borwein, M.L. Glasser, R.C. McPhedran, J.G. Wan,
J.L. Zucker, Lattice sums then and now,
Cambridge University Press, Cambridge, 2013.
BrauJ.S. Brauchart,
Optimal discrete Riesz energy and discrepancy,
Unif. Distrib. Theory 6 (2011) 207–220.
Chowla49S. Chowla, A. Selberg,
On Epstein's zeta function,
Proc. Natl. Acad. Sci. USA 35 (1949) 371–374.
CohnKumarH. Cohn, A. Kumar,
Universally optimal distribution of points on spheres,
J. Amer. Math. Soc. 20(1) (2007) 99–148.
Edwards74H.M. Edwards,
Riemann's Zeta function,
Dover Publications, New York, 1974.
Elizalde12E. Elizalde,
Ten Physical Applications of Spectral Zeta Functions,
Springer Verlag, Berlin, 2012.
Epstein03P. Epstein,
Zur Theorie allgemeiner Zetafunctionen,
Math. Ann. 56 (1903) 615–644.
Epstein07P. Epstein,
Zur Theorie allgemeiner Zetafunctionen II,
Math. Ann. 63 (1907) 205–216.
Fine51N.J. Fine,
Note on the Hurwitz zeta-function,
Proc. Amer. Math. Soc. 2 (1951) 361-364.
Hadamard93J. Hadamard,
Étude sur les propriétés des fonction entières et un particulier
d'une fonction considéré par Riemann,
J. Math. Pure Appl. 9 (1893) 171–215.
Hardy14G.H. Hardy,
Sur les zeros de la fonction ζ(s),
Compt. Rend. Acad. Sci. 158 (1914) 1012–-1014.
Hardy21G.H. Hardy, J.E. Littlewood,
The zeros of Riemann's zeta-function on the critical line,
Math. Z. 10 (1921) 283–317.
Hurwitz1882A. Hurwitz,
Einige Eigenschaften der Dirichletschen Fuctionen
F(s)=∑( D/n)·1/n^s
die bei der Bestimmung der Klassenzahlen binärer quadratischer Formen
auftreten,
Z. Math. Phys. 27 (1882) 86–101.
Hutchinson25J.I. Hutchinson,
On the Roots of the Riemann Zeta-Function.
Trans. Amer. Math. Soc. 27 (1925) 49–60.
Ivic85A. Ivić,
The Riemann Zeta Function,
John Wiley & Sons, New York, 1985.
Jaffe06A.M. Jaffe,
The Millenium Grand Challenge in Mathematics,
Notices of the AMS 53 (2006) 652–660.
Nakamura16T. Nakamura, Real zeros of Hurwitz-Lerch zeta and
Hurwitz-Lerch type of Euler-Zagierdouble zeta functions,
Math. Proc. Cambridge Philos. Soc. 160 (2016) 39–50.
Riemann1859B. Riemann,
Über die Anzahl der Primzahlen unter einer gegebenen Grösse.
Monats-berichte der Berliner Akademie (1859) 671–680.
Riesz16M. Riesz,
Sur l'hypothèse de Riemann,
Acta Math. 40 (1916) 185–-190.
Selberg46A. Selberg,
Contributions to the theory of the Riemann zeta-function,
Arch. Math. Naturvid. 48 (1946) 89–-155.
Spira76R. Spira,
Zeros of Hurwitz zeta functions,
Mathematics of computations 30 (1976) 863–866.
Titchmarsh35E.C. Titchmarsh,
The Zeros of the Riemann Zeta-Function,
Proc. Royal Soc. London A 151 (1935) 234–255.
Titchmarsh88E.C. Titchmarsh,
The Theory of The Riemann Zeta-function, 2nd ed.,
Clarendon Press, Oxford, 1988.
Travenec22I. Travěnec, L. Šamaj,
Generation of off-critical zeros hypercubic Epstein zeta functions,
Appl. Math. Comput. 413 (2022) 126611.
Vepstas08L. Vepštas,
An efficient algorithm for accelerating the convergence of oscillatory
series, useful for computing the polylogarithm and Hurwitz zeta functions,
Numer. Algor. 47 (2008) 211–252.
VentevogelW.J. Ventevogel,
On the configuration of systems of interacting particles with minimum
potential energy per particle,
Physica A 92 (1978) 343–361.
|
http://arxiv.org/abs/2307.04100v1 | 20230709052546 | Visible and infrared self-supervised fusion trained on a single example | [
"Nati Ofir"
] | cs.CV | [
"cs.CV"
] |
Visible and infrared self-supervised fusion trained on a single example
Nati Ofir
August 12, 2023
=======================================================================
This paper addresses the problem of visible (RGB) to Near-Infrared (NIR) image fusion. Multispectral imaging is an important task relevant to image processing and computer vision, even more so since the development of the RGBT sensor. While the visible image sees color and suffers from noise, haze, and clouds, the NIR channel captures a clearer picture and is significantly required by applications such as dehazing or object detection. The proposed approach fuses these two aligned channels by training a Convolutional-Neural-Network (CNN) with Self-Supervised-Learning (SSL) on a single example. For each such pair, RGB and IR, the network is trained for seconds to deduce the final fusion. The SSL is based on a Structure-of-Similarity (SSIM) loss combined with an Edge-Preservation (EP) loss. The labels for the SSL are the input channels themselves. This fusion preserves the relevant detail of each spectral channel while not relying on a heavy training process. In the experiments section, the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods that are not based on large-dataset training.
§ INTRODUCTION
The problem of visible-to-infrared image fusion is a well-studied area with a plethora of works. Even though many solutions have been developed, there is still a need for an Artificial-Intelligence (AI) approach that is based on Deep-Learning (DL) yet does not require heavy pre-training and the acquisition of a large dataset to carry out a single multispectral fusion. This paper introduces a DL method that works on a single example and produces a fusion result in an SSL way, such that no manual human labeling is required. Given this solution, every multispectral camera can be extended with a fusion channel such that the observer will be able to see the details captured by each spectrum without flickering between the different images. While the visible RGB (0.4-0.7μ m) sees color information, the NIR (0.8-2.5μ m) sees beyond haze and fog and suffers less from the noise of low-light imaging. Since each spectral channel captures different information about the scene, their fusion is informative and relevant for a person observing the camera.
While most DL fusion approaches, such as attention-based ones <cit.>, require a lengthy training phase, the proposed method trains CNN weights for each input image for forty seconds on an Nvidia Geforce GTX 3060 GPU. In addition, while classic image fusion methods, such as <cit.>, are relatively fast to compute, it is shown in the experiments of this paper that they preserve the input detail less well according to several quantitative measurements. For example, Figure <ref> demonstrates the proposed method's results of RGB to NIR fusion on a country example of the dataset <cit.>. These results manage to combine the information of both inputs; it can be seen that the far mountains, seen only in infrared, are emphasized by the computed CNN in the final fusion. Moreover, the color information of the RGB sensor is preserved in the fusion. Even though this method is based on a learned CNN, the outcome looks natural and free of special artifacts.
Often, the input channels are not aligned with each other, and multispectral image registration is required as a preprocessing step. As the dataset <cit.>
contains only small misalignments, this paper proposes simple solutions for that problem. The first approach is to align the images in advance by methods tailored to multispectral imaging, either DL-based <cit.> or based on traditional computer vision <cit.>. The second solution, which can be integrated into the proposed CNN architecture, is to learn a Spatial-Transformation-Network (STN) <cit.> in a holistic end-to-end manner to compute the final aligned fusion result. As this example shows, the CNN output does not suffer from channel misregistration.
This manuscript is organized as follows. In Section <ref> the previous methods for image fusion are covered. Next, in Section <ref> the proposed approach is explained in detail, including the CNN architecture, training algorithm, and loss functions. Then, Section <ref> illustrates the fusion performance with respect to other methods that do not depend on a time-consuming training phase. Finally, this paper is concluded in Section <ref>.
§ PREVIOUS WORK
Image fusion is a classic problem of computer vision. Early methods utilized signal characteristics for fusion, such as wavelet-based methods <cit.>. Laplacian pyramid blending was used, for example, to overcome multi-focus image capturing <cit.>. Statistical features of the input images can contribute to their fusion, such as Principal-Component-Analysis (PCA) <cit.>. Fusion can also be carried out according to spectral analysis of the images, as was introduced in <cit.>. A recent approach utilized superpixel <cit.> segmentation for content-based multispectral fusion <cit.>. The DL revolution produced many related works with state-of-the-art (SOTA) blending performance, like <cit.>. Visible and infrared fusion has also been combined with DL to enhance object detection <cit.>. The proposed method utilizes DL techniques and a lightweight CNN architecture, yet does not depend on heavy training processes and large datasets, contrary to most recent approaches. The idea of training a CNN on a single example has shown significant potential in super-resolution <cit.> and image generation by Generative-Adversarial-Networks (GAN) <cit.>. This work is the first to utilize single-image training for multispectral image fusion.
If the input spectral channels are not geometrically aligned, an apriori step of multispectral registration is required. A single channel registration can be carried out by engineered feature descriptors like Scale-Invariant-Feature-Transform (SIFT) <cit.>. Unfortunately, regular alignment methods usually fail in the multispectral scenario, and therefore a tailored approach to this case is needed. A descriptor that is invariant to different spectra can be based on edge detection <cit.>, like Canny <cit.>, however, this method has limitations on the geometric transformation level. An additional method is to apply for a Mutual-Information based registration <cit.>. MI usually solves translation, or small optical flow fields. Recent methods utilize DL to compute a spectra-invariant descriptor like <cit.>, unfortunately, this method is also geometrically limited. Another DL method, learned a hybrid network for multispectral key points matching <cit.>, it shows better accuracy, however, depends on a training dataset that is manually labeled. The dataset that the proposed methods fuse <cit.> contains small misalignments that are usually solved holistically by the learned CNN. The geometric correction also can be trained using Spatial-Transformation-Network (STN) <cit.>, that computed a geometric transformation by end-to-end learning. In conclusion, multispectral image alignment is a challenging problem that is hardly solved, however, less relevant since the development of RGBT cameras <cit.>.
Self-Supervised-Learning (SSL) is a relevant field, enabling AI and DL to be independent of human labeling. A common SSL approach is to utilize contrastive learning <cit.>. In this paper, the proposed method uses the input spectral channels as the labels for their fusion, based on the Structural-Similarity-Measure (SSIM) <cit.> and the Edge-Preservation (EP) loss <cit.>. As a whole, this study introduces a holistic solution for visible-to-infrared fusion and registration based on SSL.
§ THE PROPOSED MULTISPECTRAL FUSION
This Section will introduce the proposed method to fuse visible and infrared multispectral images, by training a fusion CNN on a single example for several seconds using self-supervised loss functions.
§.§ Network architecture
The proposed CNN architecture for image fusion takes two channels of any image dimension and outputs a single channel with the same height and width as the input. A typical image in the dataset used to evaluate the method <cit.> is 900x768 pixels. The compact fusion network contains four convolutions with 3x3 kernels; the first three are followed by a ReLU(x) = max(x,0) activation, and the final output convolution is followed by Sigmoid(x) = e^x/(1+e^x). The architecture contains two skip connections that are based on numeric addition. Before the feed-forward CNN, an STN is applied to align the spectral channels. In addition, a UNet <cit.> with a Resnet18 backbone <cit.> is applied in parallel to the feed-forward CNN, to obtain a smooth fusion with semantic information.
For more graphic details see Figure <ref>, and for the full list of CNN parameters see Table <ref>. The total number of parameters is ≈ 4M, such that the CNN is lightweight and can be trained quickly. In the experiments in Section <ref>, an ablation study is carried out on this architecture, and each part is assigned a contribution score to the final fusion result.
Figure <ref> shows a compact version of the proposed architecture, which, according to the ablation study done in this paper, provides the main contribution to the final fusion results.
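A minimal PyTorch sketch of the compact feed-forward branch is given below (illustrative only: the channel width and the exact placement of the two additive skip connections are assumptions, since only the layer count, kernel size and activations are specified above):

import torch
import torch.nn as nn

class CompactFusionNet(nn.Module):
    """Compact fusion branch: four 3x3 convolutions, ReLU x3, Sigmoid output,
    two additive skip connections.  Input: 2 channels (GRAY, NIR); output: 1 channel."""

    def __init__(self, width=64):
        super().__init__()
        self.conv1 = nn.Conv2d(2, width, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(width, width, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(width, 1, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2) + f1)   # first additive skip connection
        out = self.conv4(f3 + f2)             # second additive skip connection
        return torch.sigmoid(out)

# usage sketch: fused = CompactFusionNet()(torch.cat([gray, nir], dim=1)), NCHW tensors in [0, 1]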
§.§ Training algorithm
To train the method's CNN, a per-example training loop is introduced. See Algorithm <ref> for the whole fusion algorithm, whose core is the self-supervised training loop. The RGB input image is converted to grayscale (GRAY), and the training then computes the CNN weights to fuse a specific pair of NIR and GRAY images. During training, the network weights are updated according to a combination of SSIM <cit.> and Edge Preservation <cit.> losses. Finally, after the training loop, the fusion is computed and used to adjust the RGB channels to contain the fusion result. The number of epochs found to be required for high-quality fusion is three hundred. In addition, the CNN is initialized with random weights.
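A schematic PyTorch version of this per-example training loop is given below (illustrative only: the optimizer, learning rate and loss weighting are assumptions, and ssim_loss and ep_loss stand for differentiable versions of the losses defined in the next subsection; a sketch of both follows that subsection):

import torch

def fuse_single_pair(gray, nir, model, n_epochs=300, lr=1e-3, lam=1.0):
    """Self-supervised fusion of one aligned (GRAY, NIR) pair.

    The input channels themselves serve as the labels: the fused output is
    pushed towards both of them by the SSIM and edge-preservation (EP) losses.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.cat([gray, nir], dim=1)            # 1x2xHxW input tensor
    for _ in range(n_epochs):
        opt.zero_grad()
        fused = model(x)
        loss = (ssim_loss(fused, gray) + ssim_loss(fused, nir)
                + lam * (ep_loss(fused, gray) + ep_loss(fused, nir)))
        loss.backward()
        opt.step()
    return model(x).detach()                     # final fusion for this pair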
§.§ Loss functions
The loss functions that are used to train the CNN are SSIM and Edge Preservation, each self-labeled with the input images.
Given two input images I_1, I_2, the SSIM, which correlates with the human visual system, is defined by:
(2μ_1μ_2+c_1)(2σ_12+c_2)/(μ_1^2+μ_2^2+c_1)(σ_1^2+σ_2^2+c_2),
where μ_i is the mean of each image, σ_i is the standard deviation and σ_12 is the joint covariance.
This similarity function is widely used for understanding the perception of similar images, and it has its differentiable loss definition <cit.>.
Regarding the Edge-Preservation loss (EP), it is a regular reconstruction loss, applied after image gradient detection.
EP(I_1,I_2) = ||∇ I_1(x)-∇ I_2(x)||_2^2.
In the experiment Section <ref> it is shown that using the EP loss in addition to SSIM improves the quantitative fusion results of the proposed method.
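For completeness, simplified differentiable versions of the two losses are sketched below (a global-statistics SSIM rather than the usual windowed implementation, and finite-difference image gradients for the EP term; both are illustrative sketches rather than the exact implementation used in the paper):

import torch

def ssim_loss(x, y, c1=0.01**2, c2=0.03**2):
    """1 - SSIM computed from global image statistics (simplified, differentiable)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
    return 1 - ssim

def ep_loss(x, y):
    """Edge-preservation loss: L2 distance between finite-difference image gradients."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return ((dx(x) - dx(y))**2).mean() + ((dy(x) - dy(y))**2).mean()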
§.§ Multispectral registration
The dataset of <cit.> contains small misalignments between the spectral channels that are basically aligned holistically by the various convolutions of the proposed CNN architecture. However, if the misregistration is significant, there are approaches to solve it first and then fuse with the proposed self-supervised approach. The first solution is based on Spatial-Transformation-Networks (STN) <cit.>. The idea is to apply an STN to the NIR channel at the beginning of the CNN and to train the whole network by the proposed method. If the misregistration is dramatically significant, then explicit matching is required, for example with the algorithm of <cit.>.
§ RESULTS
The evaluation of the proposed method is done both quantitatively and qualitatively. For the evaluation, the multispectral dataset <cit.> is used; it contains 954 pairs of NIR and RGB images, divided into different categories such as country, mountain, urban, and street. The following experiments show that the proposed method produces better results than alternative fast methods for image fusion, in terms of SSIM, Canny <cit.> edge preservation, and statistical correlation. The proposed approach is compared to the recent SuperPixel <cit.>, PCA Fusion <cit.>, and Spectral Fusion <cit.> methods. In addition, the contribution of the edge preservation loss itself is emphasized.
Figure <ref> demonstrates the proposed method's visual results when fusing RGB and IR images from the dataset of <cit.>. It can be seen that this approach manages to smoothly fuse images from different categories while maintaining the relevant information of each spectral channel. In addition, Figure <ref> compares the proposed fusion algorithm to the recent SuperPixel <cit.> method; it shows that the proposed approach picks the relevant information of each spectral channel even though it is holistic and trained in an end-to-end fashion. While the SuperPixel method is based on classic computer vision and is engineered to produce such results, the proposed algorithm achieves a similar quality of image fusion while being based on compact and short DL CNN training per example.
Table <ref> compares the edge preservation of the method when training with and without the EP loss. For input images I_1,I_2, their fusion F and their corresponding Canny <cit.> binary edge maps C_1, C_2, C_F, this score is defined by:
EP(I_1,I_2,F) = 0.5∑_i∑_x C_i(x) · C_F(x)/∑_x C_i(x).
It is demonstrated in the table that the EP loss is crucial for preserving the edge maps in the proposed self-supervised fusion.
In addition, Table <ref> shows that the self-supervised fusion achieves the highest SSIM fusion score, where:
SSIM(I_1,I_2, F) = 0.5SSIM(I_1, F)+0.5SSIM(I_2,F).
This is another proof of the quality of the proposed algorithm. Moreover, Table <ref> depicts similar result for the correlation metric:
corr(I_1,I_2, F) = 0.5corr(I_1, F)+0.5corr(I_2,F).
In addition, Table <ref> presents an ablation study of the proposed CNN architecture, showing the fusion SSIM score for each CNN alternative: Compact, Compact+UNet, and Compact+UNet+STN. It can be seen that even the compact CNN can fuse the input images with high quality; however, adding the extra parts to the architecture improves the overall performance of the self-supervised training.
Overall, this experimental section shows that the self-supervised fusion method trained on a single example achieves a high quality of image fusion with respect to competitive fusion alternatives.
§ CONCLUSIONS
In conclusion, this paper introduces a novel approach for infrared and visible image fusion based on short self-supervised CNN training for a single example pair. The paper presented this method's technical details, including the CNN architecture, training algorithm, and the relevant loss functions. In addition, it was shown in the experiments of the paper that the proposed method achieves the best results, both quantitatively and qualitatively, over competitive methods for fast multispectral fusion. Overall, this manuscript introduces a relevant approach that can be incorporated easily into multi-sensor cameras and systems.
|
http://arxiv.org/abs/2307.04084v1 | 20230709022832 | A Sustainability Roadmap for C$^3$ | [
"Martin Breidenbach",
"Brendon Bullard",
"Emilio Alessandro Nanni",
"Dimitrios Ntounis",
"Caterina Vernieri"
] | hep-ex | [
"hep-ex",
"physics.acc-ph"
] |
Age of FGK Dwarfs Observed with LAMOST and GALAH: Considering the Oxygen Enhancement
Jinghua Zhang
Received August 12, 2023; accepted August 12, 2023
====================================================================================
§ INTRODUCTION
An electron-positron collider gives a unique opportunity to study the Higgs boson's properties with unprecedented precision and also provides an exceptionally clean environment to search for subtle new physics effects <cit.>. A number of different "Higgs factory" proposals, based on linear and circular colliders, are now under consideration. All of these provide collisions at center-of-mass energies in the range of 240-370 GeV, and some are also capable of reaching higher energies.
A high-energy particle collider is a large energy-consuming research facility. As such, it is important to balance its scientific importance against its environmental cost. The environmental impact of large accelerators has been analyzed in the recent Snowmass 2021 study <cit.> of the future of particle physics in the US <cit.>. The papers <cit.> have examined the environmental cost of particular Higgs factory proposals, though often concentrating on particular elements of the total cost.
In this paper, we attempt a comprehensive evaluation of the carbon cost of the Cool Copper Collider (C^3) Higgs factory proposal <cit.> over its full lifetime, including costs from construction and from operation over the proposed timeline. The structure of this paper is as follows: in Section <ref>, we briefly review the design of C^3. In Section <ref>, we review the physics reach for C^3 and other Higgs factory proposals and introduce a metric for balancing carbon impact against the physics impact of each proposal. In Section <ref>, we analyze the power costs of operation of C^3 and describe methods for modifying the power design of the accelerator that would lead to substantial savings with little impact on the physics performance. In Section <ref>, we analyze the carbon impact of the construction of C^3 and emphasize that cut-and-cover construction, as opposed to construction in a deep tunnel, has significant advantages. In Section <ref>, we discuss options for the source of electrical power for the laboratory. In Section <ref>, we bring these analyses together to estimate the total carbon footprint of C^3. Using information from available studies and design reports, we estimate the carbon impact of other Higgs factory proposals and compare these to C^3 in the framework described in Section <ref>.
§ REVIEW OF THE ACCELERATOR DESIGN
C^3, recently proposed <cit.>, is a linear facility that will first operate at 250 GeV center-of-mass collisions. Immediately after, without further extension of the linac, it will run at 550 GeV with an RF power upgrade. The high energy operations will enable the exploration of the Higgs-top coupling, and provide direct access to the Higgs self-coupling with double Higgs production <cit.>. Furthermore, the beam polarization, which exploits the strong dependence of electroweak processes on the chirality of the initial state particles, will offer unique insights into the underlying physics, acting as a new tool for discovery <cit.>. This offers a strong complementarity with proton and circular colliders, where beam polarization is not possible.
C^3 utilizes a radically different approach to linear accelerators to build a collider with high gradient and high RF efficiency, and thus lower capital and operating costs <cit.>. C^3 is based on a distributed coupling accelerator concept, running under liquid nitrogen (LN) <cit.>, that has led to an optimized accelerating gradient and minimized breakdown problems with respect to earlier designs based on normal conducting technologies. This has yielded an overall optimization of the gradient at 70 and 120 MeV/m for the 250 GeV and 550 GeV operating points, respectively <cit.>. Much higher energies are possible if length is not the major consideration. The fundamental parameters, assumed for the analysis in this paper, are shown in Table <ref>.
By far the major development to date is the actual distributed coupling accelerator structure. C^3 will use C-band (5.712 GHz) standing wave RF accelerating structures that are 1 m long. Each has an RF waveguide to bring power in, and in the more probable operating modes, splits RF power evenly between the beam and dissipation in the structure with 43% beam loading. Operating at 80 K brings the shunt impedance up to 300 MΩ/m, allowing for efficient operation at 120 MeV/m. These gradients have been demonstrated at C-band <cit.> and with an electron beam in an X-Band (11.424 GHz) structure on the SLAC XTA beamline <cit.>. The C-band structure has been tested at low power at SLAC and at high power without beam at Radiabeam <cit.>. The gradient results in a collider with a 550 GeV center-of-mass energy capability on an 8 km footprint.
A pre-conceptual design for the overall linac cryogenics has been developed that includes the design for the CryoModules. For the 250 GeV and 550 GeV design, each linac will have 3 re-liquification cryoplants. LN will flow out along the linac in both directions, so there are 6 flow runs. The LN will be above the raft structures, with an initial velocity of ∼0.03 m/s. The LN will cool the accelerator structures by nucleate boiling with a power density of 0.4 W/cm^2, producing saturated vapor which counter-flows back to the cryoplant. Each cryo-run is about 450 meters in length. The vapor velocity near the cryoplant is ∼3 m/s.
§ COMPARISON OF HIGGS FACTORY PHYSICS REACH
Among the colliders being evaluated by the community, the International Linear Collider (ILC) <cit.>, based on superconducting RF technology, has the most advanced design <cit.>, and the ILC is currently under consideration for construction in Japan.
CERN is pursuing as its main strategy a large circular collider, the FCC <cit.>, and China is planning a similar circular collider, the CEPC <cit.>. Each of these circular colliders would require a tunnel with circumference of the order of 100 km to limit synchrotron radiation. Still, though, the expected instantaneous luminosity drops off significantly above center-of-mass energies of 350–400 GeV.
A different alternative is to construct a compact linear collider based on high gradient acceleration. CERN is also pursuing such a proposal, CLIC <cit.>, that would operate at a collision energy of 380 GeV.
The carbon footprint of the proposed future Higgs factories should be assessed relative to the expected physics reach, which has been reviewed most recently in the context of the Snowmass Community process <cit.>. The primary physics goal of a future Higgs factory is the determination of the total Higgs width and Higgs couplings with per-cent or sub-per-cent precision. A reasonable figure of merit to gauge the physics reach of each machine is the expected level of precision for each of these measurements. We note that evaluating the projected measurement precision accounts for the fact that different beam configurations (center-of-mass energy and beam polarization) have a strong impact on the physics reach of each of those machines. These differences in precision are not accounted for when comparing the total number of Higgs bosons produced alone <cit.>.
The physics reach of e^+e^- colliders increases with the center-of-mass energy, since different Higgs boson production mechanisms become accessible. In 250 GeV center-of-mass operations, the main Higgs boson production mechanism is associated production with a Z boson (e^+e^- → ZH), enabling a model-independent determination of the Higgs boson total width. Higgs boson production via the W-boson fusion reaction e^+e^-→νν̅H is accessible at √(s)∼500 GeV, where the only visible signals in the final state come from Higgs boson decays. This allows Higgs boson measurements governed by different systematic effects, complementary to the 250 GeV data, as well as opportunities to study effects such as the separation of H → gg/bb̅/cc̅ decays and CP violation in H →τ^+τ^- <cit.>. Importantly, at high center-of-mass energies, double Higgs boson production in the ZHH channel opens up, providing direct access to the Higgs boson self-coupling λ_3. At circular machines, given the energy limitations, double Higgs boson production mechanisms are not accessible, allowing only indirect and model-dependent measurements of λ_3 through loop effects in single-Higgs production.
The use of longitudinal beam polarization offers unique advantages for effective precision measurements at a linear collider, since the interaction cross sections at an e^+e^- collider have strong dependencies on beam polarization.
It has been demonstrated that at 250 GeV center-of-mass energy, the ultimate precision reach in the determination of Higgs couplings, through a Standard Model Effective Field Theory (SMEFT) analysis, for an integrated luminosity of 2 ab^-1 with polarized beams, has comparable sensitivity to 5 ab^-1 with unpolarized beams, with most of the gain coming from e^- polarization alone <cit.>. The main effect of beam polarization is to discriminate the effect of different SMEFT operators that contribute to the Higgs boson coupling. There is a similar gain of about a factor of 2.5 from discrimination of the effects of the operators contributing to the WWγ and WWZ couplings, which also enter the SMEFT analysis.
The positron polarization becomes more relevant at higher center-of-mass energies. For instance, W-boson fusion reactions, such as e^+e^-→νν̅H, proceed only from e_L^-e_R^+ initial states, providing a cross-section (or, equivalently, effective luminosity) enhancement of ∼ 2.5 for typical polarizations foreseen at future linear machines <cit.>. Here positron polarization makes a significant contribution. This implies that the same number of Higgs bosons can be produced through this process with only ∼ 40 % of the integrated luminosity, compared to having unpolarized beams.
Moreover, beam polarization at high energy enables the suppression of relevant backgrounds, such as the dominant e^+e^-→ W^+W^- background for positive (negative) electron (positron) beam polarization, increasing the signal-over-background ratio and allowing the rate of other backgrounds to be measured precisely. It also enables the reduction of detector-related systematic uncertainties through combined measurements of datasets with four distinct initial-state polarization configurations. These effects collectively indicate the increased precision reach that beam polarization provides for linear machines <cit.>.
In short, electron (primarily) and positron (secondarily) polarization enhance the precision of the extracted Higgs couplings relative to unpolarized beams: the effective luminosity improvement factor of up to ∼ 2.5 allows linear machines to reach the same precision on various Higgs couplings with only ∼ 40 % of the integrated luminosity.
For these reasons, in this analysis we propose a comparison of the carbon footprint of collider concepts relative to their expected precision in Higgs coupling measurements. Table <ref> summarizes the projected relative precision of Higgs boson coupling measurements at each collider, combined with projected results from the HL-LHC. As can be seen, the overall physics reach of all proposed Higgs factories is similar <cit.> for the 240-250 GeV operations, and additional measurements become accessible for the higher center-of-mass energy runs at linear colliders. We also compare the Higgs factory proposals in terms of total energy consumption and carbon emissions, for both construction activities and operations, with the latter being the most relevant number when evaluating each project's impact on the global climate.
We then present an estimate of energy consumption and carbon footprint per unit of physics output. This is achieved by taking the average of the relative precision over all Higgs couplings, weighting them by the relative improvement in their measurement with respect to HL-LHC:
⟨δκ/κ⟩ = [∑_i w_i (δκ/κ)_i] / [∑_i w_i]
where the sum runs over the columns of Table <ref> and the weight is defined as:
w = [(δκ/κ)_HL-LHC - (δκ/κ)_HL-LHC+HF] / (δκ/κ)_HL-LHC+HF
This definition weights measurements by their relative improvement over HL-LHC when combining the HL-LHC and future Higgs Factory (HF) results. Qualitatively, measurements that minimally improve those of HL-LHC are assigned weights near zero, while HF measurements with high precision or large improvement over HL-LHC are assigned larger weights. While other weighting schemes could be used, we argue that Equation <ref> is unbiased towards the type of physics measurement (e.g. Yukawa, self-coupling, vector coupling) and it emphasises the individual strengths of each collider facility.
For the estimation of the weighted average precision, the hcc̅ coupling was excluded, since there is no estimate for HL-LHC, and we assume that the hhh coupling for CEPC can be measured with the same precision as for FCC. The weighted average precision for each collider is given in the last row of Table <ref>.
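As a minimal illustration of how the two equations above are applied, the following Python sketch computes the weighted average precision for a single hypothetical collider; the coupling names and numbers are placeholders for illustration only, not the entries of Table <ref>.

    # Hedged sketch of the precision-weighted average; values below are illustrative placeholders.
    hl_lhc    = {"hbb": 4.4, "hWW": 2.8, "htautau": 2.9, "hZZ": 2.9}   # delta_kappa/kappa in %, HL-LHC alone
    hl_lhc_hf = {"hbb": 1.0, "hWW": 0.6, "htautau": 0.8, "hZZ": 0.4}   # HL-LHC + Higgs factory combination

    def weighted_average_precision(prec_hl, prec_comb):
        # weight = relative improvement of the combination over HL-LHC alone
        weights = {k: (prec_hl[k] - prec_comb[k]) / prec_comb[k] for k in prec_comb}
        num = sum(weights[k] * prec_comb[k] for k in prec_comb)
        den = sum(weights[k] for k in prec_comb)
        return num / den

    print(f"<delta kappa / kappa> = {weighted_average_precision(hl_lhc, hl_lhc_hf):.2f} %")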
§ POWER CONSUMPTION AND OPTIMIZATIONS
The most obvious way to reduce the carbon impact of a major facility is to minimize the amount of power that it consumes, thereby minimizing the associated emissions from energy production. This is firmly within the means of the facility designers and crucially does not rely on grid electrification. The nominal operating parameters for C^3-250 are given in Table <ref>.
Several avenues can be pursued to optimize operational power requirements. Improvements in luminosity or reductions in power consumption are possible through the development of ancillary technology: increasing the RF source efficiency, powering the accelerating structures more efficiently, or modifying beam parameters to increase luminosity. At present the main linac requires ∼100 MW of power, with 40 MW for the RF sources and 60 MW for the cryogenics.
For the RF sources, the C^3 concept assumes an overall RF system efficiency of 50%, which is in line with present high power RF sources that are designed with efficiency in mind. However, modern design techniques are increasing the klystron amplifier's ultimate efficiency significantly through the inclusion of higher order mode cavities, multi-cell outputs and advanced multi-dimensional computational tools. For example, designs now exist for a 50 MW class RF source <cit.> approaching an amplifier efficiency of 70%. Multi-beam RF sources, which reduce the beam perveance, have advanced design efforts exceeding 80% efficiency <cit.>. These results reinforce the modern understanding of the limits of klystron efficiency <cit.>, which indicates that a klystron amplifier efficiency of 70-80% is possible, leading to an overall RF source efficiency of 65%.
RF pulse compression, presently not in the baseline, is also a well known technique for powering high gradient structures. For C^3, pulse compression is particularly useful due to the impact of power loss at cryogenic temperatures and the relatively long fill time of a copper structure operating at cryogenic temperatures. A previous study <cit.> found that low factors of pulse compression, which preserve RF efficiency in the compressor <cit.>, improve the overall efficiency of the system by 30%. Recently, additional efforts have been made to realize the extremely high Q cavities required for pulse compression with cryogenically cooled RF structures <cit.>; these include concepts operating at room temperature and inside the cryostat at 80 K.
For the baseline design <cit.> we anticipate operation with 700 ns and 250 ns flat tops for gradients of 70 and 120 MeV/m, respectively, and a constant power dissipation of 2.5 kW/m at 120 Hz. Figure <ref> and Figure <ref> show the RF power, dissipated energy and gradient during the pulse. While these flat top lengths were selected to limit the challenges of breakdown, increasing the flat top length and reducing the repetition rate should be investigated in order to reduce the thermal load on the linac. At present, the thermal balance between the structure fill/dump time and the flat top is approximately 50% (equal thermal load). If we were to extend the flat top lengths by a factor of two and reduce the repetition rate by a factor of two, the thermal dissipation in the main linac would decrease by 25%. This improvement would have little effect on the overall design of the accelerator, and would be acceptable if the breakdown rates remain low enough. Proving that this is possible will require high gradient testing of structures with 1400 ns and 500 ns flat tops, respectively.
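The 25% figure can be checked with a hedged back-of-the-envelope model, assuming the per-pulse dissipated energy splits evenly between fill/dump and flat top at the baseline and that the flat-top contribution scales linearly with its length:

    # Hedged arithmetic check of the 25% thermal-load reduction quoted above.
    e_fill, e_flat = 1.0, 1.0          # arbitrary units; equal thermal load at the baseline
    rep_rate = 120.0                   # Hz, baseline repetition rate

    p_baseline = rep_rate * (e_fill + e_flat)
    p_modified = (rep_rate / 2) * (e_fill + 2 * e_flat)   # double flat top, halve repetition rate

    print(f"thermal load reduction: {100 * (1 - p_modified / p_baseline):.0f} %")   # -> 25 %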
The beam current of C^3 is relatively low thanks to the large bunch spacing and efficient accelerating structures. One could pursue the possibility of reducing the bunch spacing to increase the current, although this will require compatibility studies with the detector design. Here we consider the scenario where the bunch spacing is reduced by a factor of two. This would keep a bunch spacing of >1 ns for both C^3-250/550, resulting in a decrease of 25% for the cryogenics power. The RF power required would only decrease by 20%, because the peak RF power required would be slightly higher during the RF pulse flat top to compensate for the additional current.
We note that these approaches can all be combined for mutual benefit as shown in the last row of Table <ref>. The demonstration R&D plan <cit.> will be able to investigate these approaches and lead to potential power savings.
§ CARBON IMPACT OF CONSTRUCTION
Under the assumption that the electric grid will be successfully de-carbonized by 2040, as is the goal of many international climate plans, construction, rather than operations, may well dominate the climate impact of a new particle physics facility <cit.>.
For FCC it is projected that the whole accelerator complex[The main tunnel plus the additional buildings on the site, the materials for the accelerator and detectors, assuming a main tunnel length of 97.7 km (the updated FCC design anticipates 91 km).] will have a carbon impact similar to that of the redevelopment of a neighbourhood of a major city <cit.>. This indicates that the environmental impact of any future collider facility is going to receive the same scrutiny as that of a major urban construction project.
The bottom-up analysis in <cit.> derives an estimate of the global warming potential (GWP) of the main tunnel material (concrete) manufacture alone equivalent to the release of 237 kton of CO_2 equivalent (CO_2e). An alternative top-down analysis instead depends on the character of the earth to be excavated, leading to estimates ranging from 5-10 kton CO_2e/km of tunnel construction and total emissions of 489-978 kton CO_2e[Contributions from many bypass tunnels, access shafts, large experimental caverns, and new surface sites are excluded.].
A life cycle assessment of the ILC and CLIC accelerator facilities is being performed by ARUP <cit.> to evaluate their holistic GWP, so far providing a detailed environmental impact analysis of construction. The components of construction are divided into classes: raw material supply, material transport, material manufacture, material transport to work site, and construction process. These are labelled A1 through A5, where A1-A3 are grouped as materials emissions and A4-A5 are grouped as transport and construction process emissions. The total GWP for ILC and CLIC is taken to be 266 and 127 kton CO_2e <cit.>, respectively[We use the emissions figures associated with the CLIC drive-beam design, which is more efficient than the alternative design utilizing only klystrons for RF power.]. The approximate construction GWP for the main tunnels is 6.38 kton CO_2e/km for CLIC (5.6 m diameter) and 7.34 kton CO_2e/km for ILC (9.5 m diameter); the FCC tunnel design is similar to that of CLIC, so 6.38 kton CO_2e/km is used for the calculation of emissions for both FCC and CEPC. While a comprehensive civil engineering report is unavailable for FCC and CEPC, we estimate that the concrete required for the klystron gallery, access shafts, alcoves, and caverns contributes an additional 30% of emissions, similar to what is anticipated for CLIC. The analysis indicates that the A4-A5 components constitute 20% for CLIC and 15% for ILC. In the absence of an equivalent life cycle assessment for FCC and CEPC, we account for the A4-A5 contributions as an additional 25%. A summary of these parameters is given in Table <ref>.
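The bookkeeping described above can be sketched in a few lines of Python. The per-km intensity and the two add-on fractions follow the text, and the 91 km FCC tunnel length is the updated value quoted in the footnote above; the resulting total is only indicative of the method, not a substitute for the values of Table <ref>.

    # Hedged sketch of the construction-GWP bookkeeping for a tunnelled facility.
    def construction_gwp(tunnel_km, gwp_per_km, extra_civil_frac, a4_a5_frac):
        a1_a3 = tunnel_km * gwp_per_km * (1 + extra_civil_frac)   # materials (A1-A3), incl. shafts/caverns
        return a1_a3 * (1 + a4_a5_frac)                           # add transport + construction process (A4-A5)

    fcc_like = construction_gwp(tunnel_km=91, gwp_per_km=6.38, extra_civil_frac=0.30, a4_a5_frac=0.25)
    print(f"FCC-like construction GWP ~ {fcc_like:.0f} kton CO2e")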
The C^3 tunnel will be about 8 km long, with a rectangular profile for each of its component systems. Assuming a cut and cover approach, all the excavated material will be replaced to yield a small berm. We estimate that for the whole accelerator complex only about 50 thousand cubic meters of spoil, for the experimental hall, will have to be relocated. Figure <ref> shows a schematic of the cross section, where the klystron gallery is situated directly above the accelerator hall with sufficient concrete shielding to allow constant access to the klystron gallery during operation. The application of the top-down estimate of 6-7 kton CO_2e/km obtained from the ARUP report is not appropriate for the surface site, due to the differing cross section geometries of the accelerator housing. To allow for a fair comparison among facilities, we take the same basic assumptions for the construction materials: construction uses a mix of CEM1 C40 concrete and 80% recycled steel, the GWP of concrete is taken to be 0.18 kg CO_2e per kg of concrete with density 2400 kg/m^3 <cit.>, and 85%/15% of emissions originate from concrete/steel production. Taking into account construction of the main linacs, injector linacs, damping rings, beam delivery system, and experimental hall, the total volume of construction material is estimated to be about 260,000 m^3 (consisting mostly of concrete by volume). This leads to a GWP of 133 kton CO_2e for the A1-A3 components, and a GWP per unit length of the main linac of around 17 kton CO_2e/km. Notably, this is roughly a factor of 2 larger than the GWP/km of main tunnel construction for ILC and CLIC, suggesting that further tunnel geometry optimizations are achievable with a detailed engineering study. The surface site construction eliminates the need for additional infrastructure (e.g. access tunnels and turnarounds) and greatly reduces the complexity of the construction process, which we estimate to add another 10%[This estimate is half the A4-A5 component associated with tunnelled facilities and is expected to overestimate the improvement associated with a cut and cover approach, due to the significant reduction in spoil transport and boring machine operation] to the GWP. This yields a final estimate of 146 kton CO_2e for civil engineering.
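The 133 and 146 kton CO_2e figures follow directly from the assumptions just listed; a hedged arithmetic check (treating the full volume as concrete for the concrete term, then scaling up for the steel share and the construction-process allowance, as in the text):

    # Hedged arithmetic check of the surface-site civil-engineering GWP quoted above.
    volume_m3  = 260_000
    density    = 2400      # kg of concrete per m^3
    gwp_per_kg = 0.18      # kg CO2e per kg of CEM1 C40 concrete

    concrete_kton = volume_m3 * density * gwp_per_kg / 1e6   # ~112 kton CO2e from concrete alone
    a1_a3_kton    = concrete_kton / 0.85                     # concrete is 85% of material emissions -> ~133
    total_kton    = a1_a3_kton * 1.10                        # +10% for the construction process -> ~146
    print(f"A1-A3: {a1_a3_kton:.0f} kton, total: {total_kton:.0f} kton CO2e")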
Unlike the other Higgs factories under evaluation, the C^3 site has not been decided yet. A C^3 collider could in principle be sited anywhere in the world.
A community decision will be made regarding the actual site selection, although we note that C^3 offers a unique opportunity to realize an affordable energy frontier facility in the US in the near term, and the entire program could be sited within the existing US National Laboratories. The tunnel layout would be adapted to its location, and a cut and cover site, suitable for a horizontal layout, is extremely attractive for both cost and schedule reasons.
The details of the siting options at FNAL are discussed in <cit.>. Sites such as the DOE Hanford site located in the Pacific Northwest have room to accommodate even bigger footprint machines within their site boundary.
§ POSSIBLE MITIGATION STRATEGY DURING OPERATIONS
The carbon footprint of the electricity production required to meet the total site power requirements of 150-175 MW can be substantial. The average carbon intensity of energy production since May 2022 is 194 and 381 g CO_2/kWh for the CAISO and PJM power grids, respectively <cit.>. This would result in emissions of 5.7 and 11.2 megatonnes of CO_2 equivalent for a 20 year run. The electrification of the grid will allow C^3 to operate much more sustainably by the time data taking begins. The U.S. “has set a goal to reach 100 percent carbon pollution-free electricity by 2035” in its 2021 emissions target report <cit.>. The U.S. is making progress toward this goal, having been ranked #1 on the Renewable Energy Country Attractiveness Index in 2021, driven primarily by widespread adoption of solar energy. The outlook for renewable energy investments has been further buoyed by the recent passage of the Inflation Reduction Act <cit.>. While full electrification by 2035 is conceivable, it is helpful to consider the power infrastructure required to operate only with renewable energy sources, in order to evaluate the associated costs and feasibility. The three technologies of interest to this study are photovoltaic cells (solar), onshore and offshore turbines (wind), and energy storage systems (batteries) to buffer the diurnal cycle of power generation by solar and wind sources.
Solar is the most appealing renewable energy source. It has achieved the highest market penetration among renewable sources and is expected to achieve utility-scale parity with non-renewables within the next decade. The present cost of PV cells is between 0.82 - 1.01 $/W, and the land area required to operate a 3 MW scale solar farm is 6-8 acres/MW <cit.>. Assuming that PV cell efficiencies will be driven well beyond the present 30% limit by multi-junction fabrication techniques, we adopt the values $0.80/W and 4 acres/MW <cit.>.
While wind energy trails solar in terms of market penetration, providing over 120 GW domestically, it offers a daily load profile complementary to that of solar energy: both onshore and offshore wind farms generate approximately twice as much power at night as during the day <cit.>. While onshore wind has the greatest penetration in the Midwest, where average wind speeds at 100 m elevation can exceed 10 m/s, smaller wind turbines with lower peak output capacity and lower cut-in wind speeds can be suitable for regions where wind patterns are less intense <cit.>. Typical peak power outputs for onshore and offshore wind turbines are 3 MW and 10 MW, with typical capacity factors (efficiency) of 40% and 60%, respectively <cit.>. The significantly higher power production capacity of offshore wind turbines offers an advantage to candidate sites located on the coasts. Fixed-bottom and floating turbines are preferred for offshore farms on the Atlantic and Pacific coasts, respectively. Floating turbines have the additional advantage of eliminating high-frequency vibrations resulting from mechanical coupling to the sea floor, which can significantly increase the turbine's functional lifetime, and installation of a floating turbine has a significantly reduced impact on local marine life <cit.>. The costs of onshore, fixed-bottom offshore and floating offshore turbines are around 1.3, 3.25 and 5.3 $/W <cit.>.
A major challenge to full electrification is the need to deliver power to end-users reliably when generation depends on natural processes that fluctuate on short timescales (local weather patterns, the daily cycle) and long timescales (seasons, regional climate cycles). Energy storage systems are required to eliminate dependence on non-renewables during periods of low production by renewable sources, and can be realised using mechanical, thermal, and chemical energy storage techniques. For example, pumped storage hydro-power (PSH) stations, each with GWh-scale capacity, represented 99% of utility-scale energy storage in 2019 <cit.>. While PSH stations can be used to balance load profiles on the regional scale, they can only be situated where geological constraints allow. Battery energy storage systems (BESS) are not subject to such constraints and can further be built in a distributed network near end-users, rather than in large centralised plants. However, utility-scale battery technology is still nascent, with liquid lithium-ion as the most common battery chemistry. While other designs, like lithium-sulfur, lithium-metal, and sodium-ion, can offer higher energy densities and longer lifetimes, various technical challenges must be overcome. As alternative designs are developed for the future, lithium-ion batteries can support BESS operating on the scale required today. The world's largest BESS is located in Moss Landing, CA; it has a capacity of 1.4 GWh and can deliver 350 MW to the CAISO grid. The Edwards and Sanborn Solar and Energy Storage site, to be completed in 2023, will use 2.5 million PV modules and 110,000 lithium-ion batteries situated on 6,000 acres to produce up to 1.1 GW and store 3.32 GWh.
We rely on projections of BESS costs and capacities in the years 2040 and 2050 to appraise those associated with C^3. A reference case for the projected domestic storage capacity in batteries in the years 2040 and 2050 is 120 GWh and 210 GWh, respectively <cit.>. The maximum amount of storage capacity needed to power C^3 for a 12 hour period at 150 (175) MW is 1.2 (1.4) GWh, constituting less than 1% of the expected total market capacity. By 2040, hydro-pumped energy storage will constitute 20% of total storage capacity and will be relegated to storage durations of more than 12 hours. Lithium-ion battery cell lifetimes are typically on the order of 1000 cycles, and other battery chemistries have rapidly increased in lifetime in recent years, topping 600 cycles for lithium NMC <cit.>. If a 1000 cycle lifetime is assumed for future battery technologies, and batteries experience 300 full cycles in a year, each battery module would need to be replaced 3 times in each 10 year run period. Costs could be mitigated through battery recycling: even if modules are only smelted and the valuable elements nickel and cobalt captured, 10% of the battery cost could feasibly be reclaimed. The cost of batteries designed for 10 hour storage in the years 2040 and 2050 is 125 and 100 $/kWh, respectively <cit.>. These parameters can be used to estimate the total cost of batteries for powering scenarios over the full 20 year run time.
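A hedged sketch of this battery-cost bookkeeping follows. The capacity, prices and replacement cadence are taken from the text; applying the 2040 price to the first 10-year run and the 2050 price to the second is an assumption for illustration.

    # Hedged estimate of the 20-year battery cost implied by the parameters above.
    capacity_kwh   = 1.4e6     # ~1.4 GWh of storage for 175 MW nighttime operation
    sets_per_run   = 3         # 1000-cycle lifetime at ~300 cycles/year -> ~3 battery sets per 10-year run
    recycle_credit = 0.10      # fraction of battery cost reclaimed through recycling

    cost = 0.0
    for price in (125, 100):   # $/kWh projected for the 2040 and 2050 run periods
        cost += sets_per_run * capacity_kwh * price * (1 - recycle_credit)

    print(f"indicative 20-year battery cost ~ ${cost / 1e6:.0f} M")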
Finally, cost mitigation strategies can be explored. The compensation rate for surplus power sold back to Pacific Gas and Electric was around $525/kW/year on average from January 2022 to May 2023 <cit.>. An analysis by S&P indicates that in 2030, $55/kW/year could be generated through energy arbitrage, where energy purchased during the day can be stored and sold at night, when energy prices are driven by the higher cost non-renewables <cit.>. This analysis also shows that the average cost of energy will not substantially decrease over time. Higher battery capacity would be required to capitalise on arbitrage opportunities, which is therefore less appealing than selling excess energy production immediately during daytime production. An additional 150 MW of solar capacity in excess of requirements could generate $380 million. If government investment on the scale of the Production and Investment Tax Credits (PTC and ITC) outlined in the IRA were available during construction, the cost of batteries could be reduced by 30% and the cost of renewable power generation could be reduced by $0.0275/kWh <cit.>.
For the following analysis, a day/night cycle of 12 hours each is considered and the average power production over the course of a full day is 175 MW. The total energy storage capacity from batteries is set to provide the difference in nighttime power generation (and must be charged during the day with power generated in excess of 175 MW). Table <ref> summarises a possible design configuration using a mix of solar and wind energy.
While the composition of this energy portfolio can impact the total cost estimates, the total cost of the energy infrastructure required to de-carbonize C^3 operations is approximately $1 billion over the course of 20 years of operation. It is important to note that this falls largely outside the scope of the project budget; indeed, most of this cost will be covered by general investment by the US government in electrification of the grid. While FCC would not be able to access 550 GeV center-of-mass energy, it is expected to require 350 MW in the 365 GeV tt̅ run configuration <cit.>. CERN receives significantly de-carbonized energy from France, where 56 nuclear reactors collectively deliver 63 GW to the grid (1.1 GW/plant on average) <cit.>. Assuming FCC operated with nuclear power alone, it would consume 30% of the power output of a single plant. A nuclear reactor today typically costs around 8 billion euros, implying that the energy infrastructure required to operate FCC sustainably amounts to roughly $2.5 billion.
The previous analysis leads to two conclusions about sustainable operation of C^3:
* The required technological innovation in solar, wind, and energy storage systems is expected to meet the site power needs of C^3 by the beginning of operations
* Market availability of these technologies will be sufficiently scaled such that they can be deployed for C^3, and the associated costs, borne by government investment in renewable energy, will be comparable to if not less than those of alternate e^+e^- Higgs factory options
We would like to estimate the cost, within the budget scope, required to operate C^3 sustainably in a realistic scenario. A $200 million budget for renewables would support a 250 MW solar farm, fully covering the needs of C^3 during the day with an average excess production of 87.5 MW that can be sold to the grid. Assuming increased capacity of domestic BESS results in negligible energy price differences between day and night through arbitrage, C^3 would incur energy costs only from the additional 75 MW needed at night on average. At $0.06/kWh, this would amount to $780 million over 20 years. To effectively erase this additional energy cost, the solar farm budget can be increased to $270 million to provide twice the average site power needs. It should be emphasised that C^3 can achieve effective energy independence with a modest investment in solar infrastructure. Given the carbon intensities of solar, wind, nuclear, and natural gas of 11, 11, 12, and 524 gCO_2/kWh in the CAISO grid, along with the least optimistic projection of domestic renewable energy production by the US Energy Information Administration, the carbon intensity of electricity produced by the CAISO grid can be expected to fall below 125 gCO_2/kWh by 2050 <cit.>. This is driven by a doubling of solar/wind and a 25% reduction in gas in the total energy portfolio composition. Since half of the site power originates purely from solar, the average carbon intensity of energy consumption will be better than 68 gCO_2/kWh. This is further improved to 46 gCO_2/kWh in the high technology uptake scenario. These values are comparable to the carbon intensity in France of 38 gCO_2/kWh, which is not expected to be further reduced.
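The 68 gCO_2/kWh figure is simply the equal-weight blend of the two supplies quoted above, as the following hedged check shows:

    # Hedged check of the blended carbon intensity: half of site power from the dedicated solar
    # farm, half from a CAISO grid at the projected 2050 intensity quoted in the text.
    solar_ci = 11.0     # gCO2/kWh
    grid_ci  = 125.0    # gCO2/kWh, least-optimistic 2050 CAISO projection

    blended = 0.5 * solar_ci + 0.5 * grid_ci
    print(f"blended carbon intensity ~ {blended:.0f} gCO2/kWh")   # -> 68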
§ MITIGATION STRATEGIES FOR OPERATIONS
There can be considerable emissions associated with the production of the energy required to meet site operation power requirements. This is highly dependent on the region in which the project operates; regions with highly de-carbonized electricity grids (via solar, wind, hydroelectric, and nuclear power) offer significantly reduced carbon emissions related to energy production compared with those running on non-renewable energies (gas, oil, and coal). The total emissions of each collider project are then evaluated as the product of the total amount of energy consumed and the local carbon intensity of its production.
While total de-carbonization of the electric grid by 2040 is a nominal goal, it is not assured. The 2040 projections of carbon intensity based on the stated policies scenario for Japan, China, the European Union, and the United States are roughly 150, 300, 40, and 45 t/GWh, respectively <cit.>. However, local variations in the implementation of renewable energy systems are neglected in these estimates; for example, the CERN-based colliders could take advantage of a 50-50 mix of renewable and nuclear energy. Additional mitigation strategies, such as construction of dedicated renewable energy plants, would reduce the carbon impact of operations in other regions. This strategy has been thoroughly investigated by the Green ILC Project <cit.>. A more moderate strategy can be envisioned for C^3. A 185 MW solar farm could be built with a $150 million budget <cit.>, covering twice the average power requirement of C^3[This estimate considers the power optimizations in Table <ref>], such that excess power could be stored for later use at night[The additional cost of selling and purchasing energy through utility companies can be reduced through special contracts and is neglected here], allowing C^3 to achieve green energy independence. The use of multi-junction photovoltaic cell fabrication techniques would improve power conversion efficiency well beyond the 30% that is common in today's cells <cit.>, allowing such a solar farm to be situated on about 5 km^2 of land <cit.>.
This estimate relies on energy storage systems supported by regional electricity grids. To better understand the feasibility of scaling all parts of the energy production (which may fall under the project budget) and energy storage infrastructure (which would be funded by the US government, but would nonetheless need investment), we perform a holistic cost estimate. We first note that the energy storage capacity required to supply 150 MW continuously for 12 hours is less than 1% of the expected grid energy storage capacity in 2040 <cit.>, indicating that the US grid should be able to reasonably support C^3 operations at this scale using renewable energy. We assume lithium-ion batteries[Lithium-ion batteries are not considered to be viable long term energy storage solutions; instead, technologies such as flow batteries and systems based on mechanical potential energy are favored] are the primary energy storage technology, with a lifetime of 1000 cycles, experiencing 300 cycles per year, with 10% of the battery cost reclaimed through recycling, at a base cost of 125 (100) $/kWh in 2040 (2050) <cit.>. We take the cost of solar energy production to be $0.80/W <cit.>, and that of onshore, fixed-bottom offshore and floating offshore wind turbines to be around 1.3, 3.25 and 5.3 $/W <cit.>. An energy production portfolio that provides continuous power for C^3 over a 12 hour day / 12 hour night period based on these technologies alone would cost approximately $1 billion. This estimate is primarily driven by the requirements of the battery energy storage systems and holds for a variety of energy source mixes. This indicates that a similar cost would be associated with a site located near the Pacific or Atlantic coasts, which could leverage floating and fixed-bottom turbines respectively, in the Southern US where solar would be most efficient, or proximate to large wind farms in the Midwest. A more precise cost and feasibility analysis can be performed once a candidate site is defined, as has been done for experiments operating at the South Pole, for example <cit.>. This cost analysis demonstrates that C^3 operations could be supported sustainably within the US within the next two decades, given conservative projections of technological development.
As a point of comparison, the power requirement of FCC would be about 30% of the output of a large nuclear plant (generating 1.1 GW on average <cit.>). At about $8 billion per facility, the cost of the corresponding low-carbon energy infrastructure for FCC would be about $2.5 billion.
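A hedged restatement of that arithmetic, using the round numbers quoted above:

    # Share of one nuclear plant needed by FCC and the implied infrastructure cost.
    fcc_power_mw    = 350      # site power at the 365 GeV ttbar run
    plant_output_mw = 1100     # average output of a large nuclear plant
    plant_cost_busd = 8.0      # approximate cost of a new nuclear facility, in billions

    share = fcc_power_mw / plant_output_mw                                    # ~0.3 of one plant
    print(f"implied infrastructure cost ~ ${share * plant_cost_busd:.1f} B")  # ~ $2.5 B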
To obtain an estimate of the carbon impact of operations at future collider facilities that takes mitigation strategies into account, we first note that the carbon intensities of solar, wind, hydro, and nuclear are around 30, 15, 25 and 5 t/GWh, respectively <cit.>. These estimates have some regional variation due to differences in supply chains and local infrastructure. For instance, given the roughly 30 year lifetime of existing nuclear plants, replacement or construction of entirely new facilities will be required, which might affect the overall carbon intensity. While the ultimate energy production portfolio will be different for facilities constructed in different regions, we take a common estimate of 20 t/GWh for all collider facilities in this analysis. We find this to be a reasonable estimate given that any facility can propose mitigation strategies to decouple its carbon impact from the regional average. It also reflects the expectation that clean energy infrastructure supply chains will improve over the next 20 years.
§ ANALYSIS OF TOTAL CARBON FOOTPRINT
A straightforward calculation of total energy consumption is possible using the information summarized in Table <ref>, which includes estimates of the site power P during collision mode, the annual collision time T_collisions and the total running time in years T_run for each center-of-mass energy √(s) considered. We take into account the time spent with the beam operating at full RF and cooling power outside of data-taking mode, for example for machine development, as an additional week for every 6 weeks of data-taking (i.e. +17%), represented as T_development. We take the site power requirement for the remaining period in a calendar year to be 30% of the site power requirement during data-taking (denoted by κ_down). This value is a conservative upper estimate, since without RF power and associated heat load, any accelerator can be kept cold with a small fraction of power to the cryogenics system.
Using these values, the annual energy consumed is calculated as:
E_annual = P[κ_down· T_year+(1-κ_down)(T_collisions + T_development)]
and the total energy consumption, summing over all √(s) run configurations, is
E_total = ∑_r ∈ runs E_annual(r) · T_run(r)
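The two expressions above translate directly into code. In the hedged sketch below, the run parameters are illustrative placeholders for a single collider; the values actually used in the analysis are those of Table <ref>.

    # Hedged sketch of the annual and total energy-consumption formulas above.
    HOURS_PER_YEAR = 365 * 24

    def annual_energy_gwh(site_power_mw, collision_hours, kappa_down=0.30, dev_frac=1/6):
        development_hours = dev_frac * collision_hours   # +1 week per 6 weeks of data taking (~17%)
        hours = kappa_down * HOURS_PER_YEAR + (1 - kappa_down) * (collision_hours + development_hours)
        return site_power_mw * hours / 1000.0            # MW * h -> GWh

    runs = [  # (site power in MW, annual collision hours, run duration in years) -- placeholders
        (150, 5000, 10),   # e.g. a 250 GeV run
        (175, 5000, 10),   # e.g. a 550 GeV run
    ]
    total = sum(annual_energy_gwh(p, t) * years for p, t, years in runs)
    print(f"total energy consumption ~ {total:.0f} GWh")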
For the circular collider projects, FCC and CEPC, we consider separately the cumulative energy consumption of the Higgs physics runs (i.e. √(s)>240 GeV) for a focused comparison on the basis of Higgs physics reach argued in Section <ref>, but additionally include the contribution of Z-pole and WW-threshold runs which impact the climate nevertheless.
Figure <ref> shows the energy consumption for the considered collider projects. The least energy is consumed by CLIC, driven by the lowest planned run time at low energies and its marginally lower power consumption compared to C^3 and ILC, which are comparable. The energy consumption of CEPC is large compared to FCC because CEPC plans to collect four times the integrated luminosity at 240 GeV, with an associated tripling of the total run duration.
Figure <ref> shows the precision-weighted energy consumption for the considered collider projects, estimated by multiplying the energy consumption of Figure <ref> with the average relative precision in the last row of Table <ref>. The lowest run time, for CLIC, is now compensated by the reduced relative precision in comparison to C^3 and ILC, leading to overall closer precision-weighted energy consumption. Similarly, the large proposed run time for CEPC is now taken into account in conjunction with the improved precision reach, yielding a total weighted energy consumption closer to that of FCC.
Figure <ref> shows the associated GWP of the total energy required for operations, obtained by multiplying the total energy consumption by the respective carbon intensity. The GWP of FCC operations benefits from the de-carbonized electricity expected in France and Switzerland, despite its high total energy requirements.
Figure <ref> shows the GWP due to construction of the accelerator facilities. The carbon footprint is very similar among the linear and circular colliders and is driven primarily by the total length of the accelerator. Figure <ref> shows the total GWP from construction and operations. CLIC is the most environmentally friendly option, owing to its lead performance in operations emissions as well as its small footprint. The total GWP of C^3 and ILC is driven by operations, while that of CLIC, FCC, and CEPC is almost entirely driven by construction emissions. Possible reductions in the construction component could be achieved by using concrete with a lower cement content than the CEM1 C40 considered in this analysis. Even in such cases, the FCC GWP would remain dominated by construction processes.
Finally, Figure <ref> shows the total precision-weighted GWP from construction and operations, estimated in the same way as the precision-weighted energy consumption in Figure <ref>. Given the overall similar GWP for CLIC and C^3, and the superior precision reach of C^3 at higher energies compared to CLIC, C^3 appears to be the most environmentally friendly option when accounting for the precision-weighted total carbon footprint.
The total energy consumption is given in Table <ref> for three cases:
(a) when considering the complete running scenarios of Table <ref>, which include higher √(s) runs for ILC, and runs at the Z-pole and WW-threshold for CEPC and FCC;
(b) when only considering the "Higgs factory" modes of the proposed colliders, thus excluding the Z and WW runs for CEPC and FCC;
(c) and when only including the √(s)=250 GeV run for ILC/C^3, since this run already provides comparable sensitivity to the Higgs couplings as the other proposed Higgs factories, as shown in Table <ref>.
The 2045 estimates for the carbon intensity in the various locations where the collider projects could be hosted are given in the third row of Table <ref>, and the total carbon footprint is given in the same table for the cases considered (sixth and last rows). The total energy consumption and carbon footprint are also shown in Figures <ref> and <ref>.
§ CONCLUSIONS
We present the first analysis of the environmental impact of the newly proposed C^3 collider and a comparison with the other proposed facilities in terms of physics reach, energy needs and carbon footprint for both construction and operations.
The physics reach of the proposed linear and circular e^+e^- colliders has been studied extensively in the context of the US Snowmass and European Strategy processes. We zero in on the Higgs boson coupling measurement precision achievable at C^3, CLIC, ILC, FCC, and CEPC and point out that it is generally similar across the facilities, though linear colliders can operate at higher collision energies, allowing access to additional measurements of the Higgs boson's properties. Moreover, the use of polarization at linear facilities effectively compensates for the lower luminosity.
On this basis, the global warming potential of these facilities is compared in terms of absolute environmental impact and in terms of environmental impact per unit of physics output, obtained from a weighted average of the expected precision on Higgs coupling measurements. The operations emissions of C^3 could be reduced through beam parameter optimization, leading to a 63 (79) MW power reduction compared to the nominal 150 (175) MW in the 250 (550) GeV running mode. Mitigation strategies using dedicated renewable energy facilities can reduce the carbon intensity of energy production to 20 t CO_2e/GWh. We find that, beyond 2040, the global warming potential is driven by construction rather than by operations. The compact nature of linear collider facilities reduces the total volume of construction materials and opens up the option of a surface site to simplify the construction process. We conclude that linear colliders, and C^3 in particular, have great potential for an environmentally sustainable path forward for high energy collider facilities.
§ ADDITIONAL POINTS
When assessing the energy consumption and carbon footprint of a proposed Higgs factory, one has to keep the following points in mind:
* The figure of merit when assessing the scientific output of a Higgs factory should not be the number of Higgs bosons produced per se, but rather the precision in the Physics observables of interest (particularly Higgs couplings) that can be reached for a given number of Higgs bosons produced.
* Electron (primarily) and positron (secondarily) polarization can yield an effective luminosity improvement factor for linear machines of ∼ 2.5, i.e. allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
* Additionally, linear machines can probe higher center-of-mass energies, which offers various advantages compared to circular machines:
* At higher √(s), Higgs boson production cross section increases, enabling a more efficient production of Higgs bosons.
* At high √(s) (above ≃ 500 GeV), linear machines can probe double Higgs production via the ZHH channel, allowing for a direct measurement of the Higgs trilinear coupling λ_3.
For the electron Yukawa coupling, FCC can achieve a 𝒪(1) fractional uncertainty with the dedicated run at the Higgs mass pole, which was however not taken into account for the studies presented here.
§ ACKNOWLEDGEMENTS
The authors express their gratitude to Dan Akerib, Tom Shutt, Sridhara Dasu, Patrick Maede, and Jim Brau for their insightful discussions, which have significantly contributed to this work. The authors also extend their appreciation to Michael Peskin and Steinar Stapnes for providing feedback on the manuscript.
The work of the authors is supported by the US Department of Energy under contract DE–AC02–76SF00515.
|
http://arxiv.org/abs/2307.03887v1 | 20230708034254 | Improving Prototypical Part Networks with Reward Reweighing, Reselection, and Retraining | [
"Robin Netzorg",
"Jiaxun Li",
"Bin Yu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.HC"
] |
In recent years, work has gone into developing deep interpretable methods for image classification that clearly attribute a model's output to specific features of the data. One such method is the prototypical part network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this method results in interpretable classifications, it often learns to classify based on spurious or inconsistent parts of the image. Hoping to remedy this, we take inspiration from the recent developments in Reinforcement Learning with Human Feedback (RLHF) to fine-tune these prototypes. By collecting human annotations of prototype quality via a 1-5 scale on the CUB-200-2011 dataset, we construct a reward model that learns to identify non-spurious prototypes. In place of a full RL update, we propose the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which adds three additional steps to the ProtoPNet training loop. The first two steps are reward-based reweighting and reselection, which align prototypes with human feedback. The final step is retraining to realign the model's features with the updated prototypes. We find that R3-ProtoPNet improves the overall consistency and meaningfulness of the prototypes, but lowers the test predictive accuracy when used independently. When multiple trained R3-ProtoPNets are incorporated into an ensemble, we find an increase in test predictive performance while maintaining interpretability.
§ INTRODUCTION
With the widespread use of deep learning, having these models be interpretable is more important now than ever. As these models continue to see use in high-stakes situations, practitioners hoping to justify a decision need to understand how a deep model makes a prediction, and trust that those explanations are valuable and correct <cit.>. One such proposed method for image classification is the prototypical part network (ProtoPNet), which classifies a given image based on its similarity to prototypical parts of training images, called prototypes <cit.>. This model aims to combine the power of deep learning with an intuitive reasoning process similar to that of humans.
While ProtoPNet aims to learn meaningful prototypical concepts, in practice, learned prototypes suffer from learning spurious concepts, such as the background of an image, from inconsistent concepts, such as learning both the head and the wing of a bird, and from duplicating concepts, such as having two prototypes that correspond to the same wing of the same bird <cit.>. Such problems are highly detrimental to the efficacy of these models, resulting in wasted computation at best and incorrect reasoning at worst. Various methods have been proposed to account for these issues <cit.>, but these methods involve either costly labelling procedures or fall short of providing a means of measuring prototype quality.
We seek to increase the performance of the learned prototypes by taking inspiration from recent advances in reinforcement learning with human feedback (RLHF) <cit.> and reward learning <cit.>. RLHF and reward learning have become popular approaches for aligning large language models with human preferences, partially due to the flexibility of learned rewards and feedback collection methods <cit.>. While prior work has incorporated human feedback into ProtoPNets <cit.>, no variation of ProtoPNet has incorporated a cheap and flexible reward learning fine-tuning framework.
Towards this end, we propose the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet), which seeks to improve the original ProtoPNet via fine-tuning with a learned reward model. With minimal human feedback data on the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset <cit.>, we are able to train a high-quality reward model that achieves 91.5% test accuracy when ranking human preferences, serving as a strong measure of prototype quality. R3-ProtoPNet is then able to improve the meaningfulness of prototypes, removing dependence on spurious features, and to slightly decrease inconsistency across images compared to the original ProtoPNet. When used as base learners in an ensemble, R3-ProtoPNets outperform an ensemble of ProtoPNets on a held-out test dataset.
In summary, our contributions are as follows. Firstly, we demonstrate that a reward model trained on small amounts of human feedback data (roughly 300 ratings) can accurately rank human preference data. Secondly, due to the high performance of the reward model, we propose using the reward model as a measure of prototype quality. Thirdly, we introduce the R3-ProtoPNet, which uses reward-guided fine-tuning to improve prototype meaningfulness and ensemble performance.
§ RELATED WORK
§.§ Reinforcement Learning with Human Feedback
Since the success of InstructGPT <cit.>, Reinforcement Learning with Human Feedback (RLHF) has received a great deal of attention in the machine learning community. Although this success is recent, incorporating human feedback into reinforcement learning methods via a learned reward model has a deep history in reward learning <cit.>. While works taking inspiration from InstructGPT have used proximal policy optimization (PPO) to fine-tune networks with human feedback <cit.>, it is unclear to what extent formal reinforcement learning is necessary to improve models via learned reward functions <cit.>, or whether the human feedback needs to follow a particular form <cit.>. Some prior work incorporates the reward function as a way to weigh the likelihood term <cit.>. Keeping this work in mind, we incorporate the reward model into ProtoPNet as a way to reweigh prototypes post-training.
§.§ Example-based Models and Prototypical Part Networks
The field of interpretable deep learning is vast, with a plethora of explainability and interpretability methods available to the user. For a more complete overview of interpretable deep learning, please refer to <cit.>. To ground the discussion, we focus primarily on example-based models, one such example being ProtoPNet. Other example-based methods exist, such as the non-parametric xDNN <cit.> or SITE, which performs predictions directly from interpretable prototypes <cit.>; we focus on the ProtoPNet due to its intuitive reasoning structure.
Since its introduction by <cit.>, ProtoPNets have received a great deal of attention, and various iterations have been developed. Work has explored extending the ProtoPNet to different architectures such as transformers (<cit.>), or sharing class information between prototypes (<cit.>). <cit.> increase the spatial flexibility of ProtoPNet, allowing prototypes to change spatial positions depending on the pose information available in the image. ProtoPNets and variations have seen success in high-stakes applications, such as kidney stone identification (<cit.>) and mammography (<cit.>).
Many works have commented on how the original ProtoPNet tends to overemphasize spurious features, and they have taken different approaches to solving this issue. <cit.> introduce an explainability interface to ProtoPNet, allowing users to see the dependence of the prototype on certain image attributes like hue and shape. The authors claim that seemingly dissimilar or spurious prototypes share certain difficult-to-perceive features, like texture or contrast. <cit.> introduce a variation of the ProtoPNet, IAIA-BL, which biases prototypes towards expert-labelled annotations of classification-relevant parts of the image.
Similar to how we provide human feedback at the interpretation level, <cit.> introduce ProtoPDebug, where a user labels a prototype and image pair as "forbidden" or "valid", and a fine-tuning step maximizes the distance between learned prototypes and patches in the forbidden set and minimizes the distance between learned prototypes and patches in the valid set. While also incorporating human feedback, <cit.> do not ground their method in RLHF, but instead include the binary feedback as a supervised constraint in the ProtoPNet loss function. Learning a reward function via ratings allows us to simultaneously increase the interpretability of the prototypes and develop an evaluation metric for the quality of a particular prototype. Compared to previous approaches, reward reweighing, reselection, and retraining allows for fast collection of high-quality human feedback data and the construction of a reward model that measures prototype quality, while increasing the interpretability and the performance of the model.
§ PROTOTYPICAL PART NETWORK (PROTOPNET)
In this section, we describe the base architecture used in our method, the Prototypical Part Network (ProtoPNet) introduced in <cit.>. The ProtoPNet aims to introduce interpretability to otherwise uninterpretable image classifiers. In place of predicting from an arbitrary representation, the model makes a classification based on part attention and similar prototypical parts of an image. The general reasoning of a model is to classify an unseen image by finding training images with similar prototypical parts to those of the unseen image. This approach allows the user to interrogate the reasoning of the model, and clearly see which parts of the image led to the model's classification.
§.§ Description
Here we briefly describe the ProtoPNet, adopting the notation used in <cit.>. The ProtoPNet architecture builds on a base convolutional neural network f, which is then followed by a prototype layer denoted g_p, and a fully connected layer h. Typically, the convolutional features are taken from pretrained models like VGG-19, ResNet-34, or DenseNet-121.
The ProtoPNet injects interpretability into these convolutional architectures with the prototype layer g_p, consisting of m prototypes P = {p_j}^m_j=1, typically of size 1×1×D, where D is the depth of the convolutional output f(x). By keeping the depth the same as that of the convolutional output, but restricting the height and width to be smaller, each learned prototype selects a patch of the convolutional output. Reversing the convolution recovers a prototypical patch of the original input image x. Using upsampling, the method constructs an activation pattern per prototype p_j.
To use the prototypes to make a classification given a convolutional output z=f(x), ProtoPNet's prototype layer computes a max pooling over similarity scores: g_p_j(z) = max_z'∈patches(z) log((‖z' - p_j‖_2^2 + 1)/(‖z' - p_j‖_2^2 + ϵ)), for some small ϵ < 1. This function is monotonically decreasing with respect to the distance, with small values of ‖z' - p_j‖_2^2 resulting in a large similarity score g_p_j(z). Assigning m_k prototypes to each of the K classes, such that ∑_k=1^K m_k = m, the prototype layer outputs a vector of similarity scores that matches parts of the latent representation z to prototypical patches across all classes. The final layer in the model is a linear layer connecting similarities to class predictions.
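A hedged PyTorch sketch of this prototype layer is given below (1×1×D prototypes, log-ratio similarities, max pooling over patches). The tensor shapes, epsilon value and function names are assumptions for illustration, not the reference implementation of the original authors.

    # Minimal sketch of the ProtoPNet similarity computation, assuming 1x1xD prototypes.
    import torch

    def prototype_similarities(z, prototypes, eps=1e-4):
        """z: (B, D, H, W) convolutional features; prototypes: (m, D) prototype vectors."""
        B, D, H, W = z.shape
        patches = z.permute(0, 2, 3, 1).reshape(B, H * W, D)                 # (B, HW, D)
        diffs = patches.unsqueeze(2) - prototypes.view(1, 1, -1, D)          # (B, HW, m, D)
        d2 = (diffs ** 2).sum(dim=-1)                                        # squared L2 distances (B, HW, m)
        sim = torch.log((d2 + 1.0) / (d2 + eps))                             # large when a patch is close to p_j
        return sim.max(dim=1).values                                         # max pool over patches -> (B, m)

    # Usage: scores = prototype_similarities(f(x), P); logits = h(scores)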
In order to ensure that the prototypes match specific parts of training images, during training the prototype vectors are projected onto the closest patch in the training set. For the final trained ProtoPNet, every p_j corresponds to some patch of a particular image.
§.§ Limitations
While ProtoPNet is capable of providing interpretable classifications, the base training described in <cit.> results in prototypes that are inconsistent and represent spurious features of the image (<cit.>). Additionally, same-class prototypes will often converge to the same part of the image, resulting in duplicate prototypes.
<cit.> note that a prototype whose top L (usually L=5) closest training image patches come from classes other than the target class tends to be spurious and inconsistent, focusing on features like the background. To remedy this issue, they introduce a pruning operation, removing these prototypes entirely. While pruning does remove dependency on some subpar prototypes, we find that pruning still leaves some prototypes that rely on spurious and inconsistent features (Table <ref>) and does not improve accuracy. We also find that duplicate prototypes still occur after the pruning operation. We visualize subpar prototypes in Figure <ref>. For more examples of low-quality prototypes, please see the supplementary material.
§ HUMAN FEEDBACK AND THE REWARD REWEIGHED, RESELECTED, AND RETRAINED PROTOTYPICAL PART NETWORK (R3-PROTOPNET)
Inspired by the recent advances in reinforcement learning with human feedback (RLHF) <cit.>, the reward reweighed, reselected, and retrained prototypical part network (R3-ProtoPNet) utilizes a learned reward model to fine-tune prototypes. In place of pruning prototypes and sacrificing potential information, we demonstrate that incorporating human feedback into the training of the ProtoPNet improves prototype quality while increasing ensemble accuracy. In this section, we describe the collection of high-quality human feedback data, our reward model, and how we incorporate the reward model into the training loop via a three-stage training procedure.
§.§ Human Feedback Collection
A crucial aspect behind the success of RLHF methods is the collection of high quality human feedback data. Unclear or homogeneous feedback may result in a poor performing reward model <cit.>. The design of human feedback collection is vitally important to the training of a useful reward model.
The inherent interpretability of ProtoPNet leads to a useful benefit for RLHF. Given a trained ProtoPNet, it is possible for a knowledgeable user to directly critique the learned prototypes. Given a particular classification task, a human with enough expertise should be able to recognize if a particular prototype is "good" or "bad" <cit.>. In the case of classifying birds in the CUB-200-2011 dataset, one of the original classification tasks used in <cit.>, it is clear that if a prototype gives too much weight to the background of the image (spurious), or if the prototype corresponds to different parts of the bird when looking at different images (inconsistency), the learned prototype is not meaningfully or interpretably contributing to prediction. Given these prototypes that fail to contribute to prediction, a knowledgeable human trying to classify birds would rate these prototypes as "bad".
There are many different ways to elicit this notion of "goodness" from a user <cit.>. Although it is possible to incorporate many different forms of feedback into the R3-ProtoPNet, such as asking a user to compare prototypes to elicit preferences or asking for a binary judgment of whether a prototype is "good" or "bad", we found most success with asking the user to rate a prototype on a scale from 1 to 5. While scalar ratings can be unstable across different raters, with a clear, rule-based rating method, rating variance is reduced and it is possible to generate high-quality labels. An example rating scale on the CUB-200-2011 dataset is provided in Figure <ref>.
§.§ Reward Learning
We note that, when a user provides feedback on a prototype, it is not the training image or the model prediction that the user is providing feedback on, but the prototype's resulting interpretation: the activation patterns. Our task is therefore different from RLHF applied to language modeling or RL tasks (<cit.>, <cit.>), where human feedback is provided on the model output or resulting state. We therefore collect a rating dataset 𝒟 = {(x_i, y_i, h_i,j, r_i,j)}_i=1,j=1^n,m, where x_i and y_i are the training image and label, and h_i,j and r_i,j are prototype p_j's activation pattern on image x_i and the user-provided rating of that pattern. We note that collecting ratings for the entire dataset is prohibitive and unnecessary, so we only collect a subset.
Given the dataset 𝒟, we generate the induced comparison dataset, whereby each entry in 𝒟 is paired with one another. Given i≠ i' and/or j≠ j', we populate a new paired dataset, 𝒟_paired, which consists of the entries of 𝒟 indexed by i,j,i',j', and a comparison c, which takes values -1, 0, 1. If the left-hand sample is greater, and therefore considered higher-quality, r_i,j > r_i',j', then c = -1. If the right-hand sample is greater r_i,j < r_i',j', then c = 1. We note that, during learning, we exclude entries with c=0 to increase the contrast between pairs. This synthetic construction allows us to model the reward function, r(x_i, h_i,j), via the Bradley-Terry Model for pairwise preferences <cit.>. We train this model with the same loss function as in <cit.>, a cross-entropy loss over the probabilities of ranking one pair over the other. This synthetic construction combinatorially increases the amount of preference data, allowing us to train a high-quality reward model on relatively small amounts of quality human feedback data.
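The induced pairwise construction and the Bradley–Terry objective reduce to a few lines; the sketch below is illustrative, with the logistic preference probability and all function names being our own choices rather than the paper's code.

```python
import itertools
import torch.nn.functional as F

def induced_pairs(ratings):
    """Turn scalar ratings into ordered (winner, loser) comparison pairs.

    ratings : list of (sample_id, rating); ties are dropped to sharpen contrast.
    """
    pairs = []
    for (id_a, r_a), (id_b, r_b) in itertools.combinations(ratings, 2):
        if r_a != r_b:
            pairs.append((id_a, id_b) if r_a > r_b else (id_b, id_a))
    return pairs

def bradley_terry_loss(reward_winner, reward_loser):
    """Cross-entropy of P(winner preferred over loser) = sigmoid(r_w - r_l)."""
    return -F.logsigmoid(reward_winner - reward_loser).mean()
```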
§.§ Reward Reweighed, Reselected, and Retrained Prototypical Part Network (R3-ProtoPNet)
After having collected high-quality human feedback data and trained a reward model, we can now incorporate it into a fine-tuning framework to improve the interpretability of ProtoPNet. We incorporate the reward model via a three step process consisting of reward weighting, reselection, and retraining. Each step is described in more detail below.
§.§.§ Reward Reweighing
Although PPO is a popular option for RLHF (<cit.>), there is evidence that simpler fine-tuning algorithms can lead to similar performance increases (<cit.>). Inspired by the success and the ease of implementation of reward-weighted learning <cit.>, we develop a reward-weighted update for the ProtoPNet:
max_p_j ℒ_reweigh(z_i^*, p_j) = max_p_j ∑_i ∈ I(p_j) r(x_i, p_j) / ( ‖z_i^* - p_j‖_2^2 / λ_dist + 1 )
where z_i^* = argmin_z̃ ∈ patches(f(x_i)) ‖z̃ - p_j‖_2^2, I(p_j) = {i | y_i ∈ class(p_j)}, and λ_dist is a fixed hyperparameter. The loss function ℒ_reweigh is a sum of inverse distances weighted by the reward of the prototype on each image. Since we only update the prototype p_j, the only way to maximize the objective is to minimize the distance between the prototype and image patches with high reward r(x_i, p_j). This causes the prototype to resemble high-reward image patches, improving the overall quality of the prototypes. To preserve prototypes that already have high reward, we only update prototypes whose mean reward is below γ = 0.45. λ_dist is included in the loss function to rescale distances, since the closest distances are near zero. We find best performance with λ_dist = 100.
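As an illustration, one ascent step on this objective for a single prototype might look like the sketch below, assuming the closest patches z_i^* and rewards r(x_i, p_j) are precomputed; λ_dist = 100 and the γ = 0.45 gate follow the text, while the plain gradient-ascent optimizer, learning rate, and tensor layout are assumptions.

```python
import torch

def reweigh_step(prototype, closest_patches, rewards, lam_dist=100.0,
                 gamma=0.45, lr=1e-3):
    """One gradient-ascent step on the reward-reweighed objective for one prototype.

    prototype       : (D,) tensor with requires_grad=True
    closest_patches : (n, D) tensor of z_i^* for the images of the prototype's class
    rewards         : (n,) tensor of r(x_i, p_j) values in (0, 1)
    """
    if rewards.mean() >= gamma:
        return prototype                     # already high quality: leave untouched
    dist2 = ((closest_patches - prototype) ** 2).sum(dim=1)
    objective = (rewards / (dist2 / lam_dist + 1.0)).sum()
    objective.backward()
    with torch.no_grad():
        prototype += lr * prototype.grad     # ascent: we maximize the objective
        prototype.grad.zero_()
    return prototype
```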
Practically, we find that optimizing this loss function leads to locally maximal solutions: the resulting local updates rarely modify prototypes with a quality rating of 1, but are more likely to improve prototypes with quality ratings of 2 or higher. If the prototype p_j has high activation over the background of an image x_i, for example, the closest patches z_i^* in the training data will also be background patches, and the reward of the prototype will be low, leaving minimal room for change. This update alone therefore cannot dramatically change the location of the patch in the image.
§.§.§ Prototype Reselection
In order to improve low-quality prototypes that require significant manipulation, we introduce a reselection procedure based on a reward threshold. Given a prototype p_j, if (1/n_k) ∑_i ∈ I(p_j) r(x_i, p_j) < α, where α is a pre-determined threshold and n_k is the number of training images in class k, we reselect the prototype. The reselection process iterates over patch candidates z'_i, temporarily setting the prototype p'_j = z'_i, where z'_i is chosen randomly from the patches of a randomly selected image x'_i in the class of p_j. If (1/n_k) ∑_i ∈ I(p_j) r(x_i, p'_j) > β, where β is an acceptance threshold, and if no existing prototype already matches the candidate patch z'_i, then we accept the candidate as the new prototype. We found that α = 0.15 and β = 0.50 led to good performance. We refer to the combination of reweighing and reselection as the R2 update step, and call the corresponding trained model the R2-ProtoPNet.
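The reselection loop can be sketched as the rejection-sampling routine below; α = 0.15 and β = 0.50 follow the text, while the reward and patch-extraction interfaces, the duplicate test, and the retry cap are simplifying assumptions.

```python
import random

def reselect_prototype(p_j, class_images, reward_fn, patches_fn,
                       existing_prototypes, alpha=0.15, beta=0.50, max_tries=1000):
    """Replace a low-reward prototype with a randomly drawn same-class patch.

    reward_fn(images, proto) -> mean reward of `proto` over `images`
    patches_fn(image)        -> list of candidate latent patches for one image
    """
    if reward_fn(class_images, p_j) >= alpha:
        return p_j                                   # prototype is good enough; keep it
    for _ in range(max_tries):
        candidate = random.choice(patches_fn(random.choice(class_images)))
        duplicate = any(bool((candidate == q).all()) for q in existing_prototypes)
        if not duplicate and reward_fn(class_images, candidate) > beta:
            return candidate                         # accepted replacement
    return p_j                                       # fall back to the original prototype
```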
The reasoning process behind our prototype reselection method takes inspiration from the original push operation in <cit.>. Similar to how ProtoPNet projects prototypes onto a specific training image patch, here we reselect prototypes to be a particular reward-filtered training image patch. With a high enough acceptance threshold β, this forces the elimination of low reward prototypes while preserving the information gain of having an additional prototype.
One possible alternative approach is to instead search over the training patches, and select those patches with the highest reward. We found that randomly selecting patches, in place of searching for patches with the highest reward, led to higher prototype diversity and less computation time. As discussed in Section <ref>, it is possible that a reward model that more explicitly accounts for prototype diversity could alleviate the duplicate issue, but we leave this to future work.
While we do not use a traditional reinforcement learning algorithm to fine-tune our model as is typically done in RLHF <cit.>, pairing the reselection and fine-tuning steps together resembles the typical explore-exploit trade-off in RL problems. We see that fine-tuning with our reward model leads to exploit behavior, improving upon already high-quality prototypes. At the same time, the reselection step serves as a form of exploration, drastically increasing the quality of uninformative prototypes. We find that these similarities are enough to improve the quality of ProtoPNet, as discussed in the next section.
§.§.§ Retraining
A critical step missing in the R2 update is a connection to prediction accuracy. As discussed in Section <ref>, without incorporating predictive information, performing the reward update alone results in lowered test accuracy. Since the above updates only act on the prototypes themselves, not the rest of the network, the result is a misalignment between the prototypes and the model's base features and final predictive layer. The reward update guides the model towards more interpretable prototypes, but the reward update alone fails to use the higher quality prototypes for better prediction.
To account for the lack of predictive performance, the final step of R3-ProtoPNet is retraining. Simply retraining with the same loss function used in the original ProtoPNet update results in the realignment of the prototypes and the rest of the model. Although one could worry that predictive accuracy would reduce the interpretability of the model <cit.>, we find that retraining increases predictive accuracy while maintaining the quality increases of the R2 update. The result is a high accuracy model with higher-quality prototypes. We explore evidence of this phenomenon and why this is the case in the following section.
§ EXPERIMENTS
Here we discuss the results of training the R3-ProtoPNet on the CUB-200-2011 dataset, the same dataset as used in <cit.>. We demonstrate that the R3-ProtoPNet leads to higher-quality prototypes across base model architectures and prototype configurations while not sacrificing predictive performance.
§.§ Datasets
R3-ProtoPNet requires two datasets: the original dataset for initial training, and the dataset of scalar ratings of activation patterns. Combined, this results in the dataset described in Section <ref>. To offer better comparison against the original ProtoPNet, we use the same dataset for initial training that was used in <cit.>, the CUB-200-2011 dataset <cit.>. The CUB-200-2011 dataset consists of roughly 30 images for each of 200 different bird species. We employ the same data augmentation scheme used in <cit.>, which adds additional training data by applying a collection of rotation, shear, and skew perturbations to the images, resulting in a larger augmented dataset.
For the collection of the activation pattern ratings, we only provide activation patterns overlaid on the original images to the rater. Although it is possible to crowdsource the collection of human preference data, we found that it was possible to increase the performance of ProtoPNet with a relatively small amount of human preference data that we collected ourselves. We rated a total of 700 prototype-image pairs according to the scale described in Figure <ref>, which we justify in the next subsection.
§.§ Architectures and Training
Similar to <cit.>, we study the performance of R3-ProtoPNet across three different base architectures: VGG-19, ResNet-34, and DenseNet-121. While the original ProtoPNet sets the number of prototypes per class at m_k = 10, we additionally run the VGG19 architecture with m_k=5 prototypes to explore model performance when the number of prototypes is limited. No other modifications were made to the original ProtoPNet architecture. We train for 100 epochs and report results for the best performing model.
The reward model r(x_i, h_i) is similar to the base architecture of the ProtoPNet. Two ResNet-50 base architectures take in the input image x_i and the associated activation pattern h_i separately, and both are followed by two additional convolutional layers. The outputs of the convolutional layers are concatenated and fed into a final linear layer with sigmoid activation to predict the Bradley-Terry ranking. Predicted rewards are therefore bounded in the range (0, 1). We train the reward model for 5 epochs on a comparison dataset of 71,875 paired images and preference labels, and evaluate on 13,831 testing pairs. The reward model achieves 91.54% test accuracy when trained on the whole dataset, and we additionally find that the reward model converges to roughly 91% test accuracy on a comparison dataset generated from at least 300 rated activation patterns.
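A hedged PyTorch sketch of this two-branch reward network is shown below; only the broad structure (two ResNet-50 trunks, two extra convolutional layers per branch, concatenation, and a sigmoid-activated linear output) comes from the text, while the layer widths, global average pooling, and single-channel heatmap handling are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class RewardModel(nn.Module):
    """Predict a quality score in (0, 1) for an (image, activation pattern) pair."""

    def __init__(self):
        super().__init__()
        # Two independent ResNet-50 trunks: one for the image, one for the heatmap.
        self.image_trunk = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        self.heatmap_trunk = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        # Two extra convolutional layers per branch (widths are illustrative).
        self.image_head = nn.Sequential(nn.Conv2d(2048, 256, 1), nn.ReLU(),
                                        nn.Conv2d(256, 128, 1), nn.ReLU())
        self.heatmap_head = nn.Sequential(nn.Conv2d(2048, 256, 1), nn.ReLU(),
                                          nn.Conv2d(256, 128, 1), nn.ReLU())
        self.fc = nn.Linear(256, 1)

    def forward(self, image, heatmap):
        # image: (B, 3, H, W); heatmap: (B, 1, H, W), broadcast to three channels.
        a = self.image_head(self.image_trunk(image)).mean(dim=(2, 3))
        b = self.heatmap_head(self.heatmap_trunk(heatmap.expand(-1, 3, -1, -1))).mean(dim=(2, 3))
        return torch.sigmoid(self.fc(torch.cat([a, b], dim=1))).squeeze(1)
```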
§.§ Evaluation Metrics
To evaluate the performance of R3-ProtoPNet, we compare it to ProtoPNet using three metrics: test accuracy, reward, and prototype class mismatch. We use test accuracy to measure the predictive performance of the models. As the above section demonstrates, the learned reward model achieves high accuracy in predicting which prototype ranks above another in accordance with human preferences, so we use it as a measure of prototype quality. Regarding the class mismatch metric, <cit.> note that low-quality prototypes tend to have close training images that come from different classes. To evaluate the effect of the R3 update, we compute the average class mismatch across all prototypes of a given model for the Top-5 and Top-10 closest training images.
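Concretely, the Top-L class-mismatch metric can be computed as in the short numpy sketch below; the precomputed distance matrix and array layout are assumptions.

```python
import numpy as np

def class_mismatch(proto_class, patch_classes, proto_patch_dists, L=5):
    """Average fraction of the Top-L closest training patches whose image class
    differs from the prototype's class.

    proto_class       : (m,) class label of each prototype
    patch_classes     : (N,) class label of the image each training patch comes from
    proto_patch_dists : (m, N) prototype-to-patch distances
    """
    topL = np.argsort(proto_patch_dists, axis=1)[:, :L]      # (m, L) nearest patches
    mismatch = patch_classes[topL] != proto_class[:, None]   # (m, L) boolean
    return mismatch.mean()
```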
§.§ Results
After training ProtoPNet, running the R2 update step, and then performing retraining, we see several trends across multiple base architectures. In Table <ref>, we report the test accuracy of the different base architectures across stages of R3-ProtoPNet training. Generally, the test accuracy from ProtoPNet substantially decreases after applying the R2 update, but retraining tends to recover most of the predictive loss. This accuracy maintenance demonstrates that it is possible to align prototypes with human preferences without sacrificing predictive power.
In Table <ref>, we report the average reward of all prototypes on all test images for a given base architecture. We see that ProtoPNet achieves an average reward between 0.48 and 0.57 across architectures. Investigating the distribution of rewards further in Figure <ref>, it is revealed that ProtoPNet tends to produce a bimodal distribution over prototype rewards, with some bias towards low-quality and high-quality prototypes. Applying the R2 update results in the desired behavior, increasing the average reward and shifting the distribution of rewards upwards. We additionally see that the retraining step in R3-ProtoPNet actually continues to increase average reward across all base architectures while slightly increasing the spread of the reward distribution.
Finally, we report the Top-5 and Top-10 class mismatch in Table <ref>. Here we see an interesting phenomenon. Across all base architectures, ProtoPNet has an average class mismatch of at least half of the Top-L closest image patches, for both L=5 and L=10. Although performing the R2 update greatly increases the average reward for all base architectures except ResNet-34, we see that class mismatch is only marginally reduced, with all base architectures still producing mismatches for over half of the Top-L closest training image patches. We see that R3-ProtoPNet greatly reduces class mismatch for the m_k=5 VGG-19 base architecture, but tends to only marginally reduce class mismatch in the m_k=10 case.
§.§ Discussion
Given the results, we see that R3-ProtoPNet manages to increase the quality of learned prototypes without sacrificing predictive performance. While the ResNet-34 and DenseNet-121 base architectures do see a slight performance decrease, producing an ensemble of trained R3-ProtoPNets results in an accuracy increase over an ensemble of the original trained ProtoPNets. We see that R3-ProtoPNet results in a substantial increase of the average test reward, verifying that prototype quality is increasing. There is still much room for improvement, as class mismatch for 10 prototypes does not decrease across all architectures, while there is some class mismatch decrease for the 5 prototype VGG-19-based ProtoPNet. Overall, these results demonstrate that incorporating reward information into the ProtoPNet via reweighing, reselection, and retraining does increase interpretability of ProtoPNets, and, when incorporated into an ensemble, increases predictive performance.
§ LIMITATIONS AND FUTURE WORK
While R3-ProtoPNet improves interpretability and predictiveness in an ensemble, there is plenty of room for improvement. We note that the reward model is trained on ratings of a single image and heatmap, and is therefore highly constrained to measuring overlap between the prototype and the object of interest, but it is quite possible to extend ratings to multiple images and heatmaps. This would allow the reward model to better learn cross-image preferences, such as consistency. We hope that this could alleviate the duplicate issue as well. We note that R3-ProtoPNet fails to entirely eliminate duplicates, with several high-reward prototypes converging to the same part of the image.
While this work investigated increasing the performance of ProtoPNet, it is possible to extend the R3 update to other extensions of the ProtoPNet. A major benefit of reward fine-tuning is its flexibility in application, and we expect that combining the R3 update with other variations of the ProtoPNet would result in further increased performance gains. Combining multiple feedback modalities, such as the binary feedback used in ProtoPDebug <cit.>, could further increase model performance.
A final limitation of R3-ProtoPNet and other methods that rely on human feedback is that the model itself might be learning features that, while seemingly confusing to a human, are helpful and meaningful for prediction. <cit.> argue that the ProtoPNet can predict with non-obvious features like texture and contrast, which might be penalized by a learned reward function. Future work is necessary to investigate how ProtoPNet variants could critique human feedback, and argue against a learned reward function.
§ CONCLUSION
In this work, we propose the R3-ProtoPNet, a method that uses a learned reward model of human feedback to improve the meaningfulness of learned prototypical parts. We find that ensembling multiple R3-ProtoPNets results in increased performance over original ProtoPNet ensembles. Given the high performance of the reward model, we use it as a measure of prototype quality, allowing us to critique the interpretability of ProtoPNet through a human lens. The ability of reward learning to quantify qualitative human preferences makes reward-based fine-tuning a promising direction for the improvement of interpretable deep models.
|
http://arxiv.org/abs/2307.04469v1 | 20230710103740 | Beyond spectroscopy. II. Stellar parameters for over twenty million stars in the northern sky from SAGES DR1 and Gaia DR3 | [
"Yang Huang",
"Timothy C. Beers",
"Hai-Bo Yuan",
"Ke-Feng Tan",
"Wei Wang",
"Jie Zheng",
"Chun Li",
"Young Sun Lee",
"Hai-Ning Li",
"Jing-Kun Zhao",
"Xiang-Xiang Xue",
"Yu-Juan Liu",
"Hua-Wei Zhang",
"Xue-Ang Sun",
"Ji Li",
"Hong-Rui Gu",
"Christian Wolf",
"Christopher A. Onken",
"Ji-Feng Liu",
"Zhou Fan",
"Gang Zhao"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
1School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; [email protected]
2Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, P. R. China; [email protected]; [email protected]
3Department of Physics and Astronomy and JINA Center for the Evolution of the Elements (JINA-CEE), University of Notre Dame, Notre Dame, IN 46556, USA
4Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China
5Department of Astronomy and Space Science, Chungnam National University, Daejeon 34134, Republic of Korea
6Department of Astronomy, School of Physics, Peking University, Beijing 100871, People's Republic of China
7Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China
8Department of Space Science and Astronomy, Hebei Normal University, Shijiazhuang 050024, People's Republic of China
9Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia
10Centre for Gravitational Astrophysics, Research Schools of Physics, and Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia
We present precise photometric estimates of stellar parameters, including effective temperature, metallicity, luminosity classification, distance, and stellar age, for nearly 26 million stars using the methodology developed in the first paper of this series, based on the stellar colors from the Stellar Abundances and Galactic Evolution Survey (SAGES) DR1 and Gaia EDR3.
The optimal design of stellar-parameter sensitive uv filters by SAGES has enabled us to determine photometric-metallicity estimates down to -3.5, similar to our previous results with the SkyMapper Southern Survey (SMSS), yielding a large sample of over five million metal-poor (MP; [Fe/H] ≤ -1.0) stars and nearly one million very metal-poor (VMP; [Fe/H] ≤ -2.0) stars.
The typical precision is around 0.1 dex for both dwarf and giant stars with [Fe/H] >-1.0, and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H] <-1.0.
Using the precise parallax measurements and stellar colors from Gaia, effective temperature, luminosity classification, distance and stellar age are further derived for our sample stars.
This huge data set in the Northern sky from SAGES, together with similar data in the Southern sky from SMSS, will greatly advance our understanding of the Milky Way, in particular its formation and evolution.
§ INTRODUCTION
Estimates of stellar parameters, in particular the metallicity, of a large, complete sample of stars is of vital importance to understand the formation and evolution of the Milky Way.
In the past decades, massive progress has been achieved by large-scale spectroscopic surveys, such as the HK Survey <cit.>, the Hamburg/ESO Survey (HES; ), the Sloan Digital Sky Survey (SDSS; ), the Radial Velocity Experiment (RAVE; ), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; ), the Galactic Archaeology with HERMES project (GALAH; ), and the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ).
However, the total number of observed targets collected from all those surveys is no greater than about ten million, less than one ten-thousandth of the estimated total number of Milky Way stars.
This under-sampling, together with the complex target-selection strategies, makes it extremely difficult to understand the full assembly history of our Galaxy.
In the first paper of this series <cit.>, we proposed to alleviate this issue of current spectroscopic surveys by deriving stellar parameters for a huge number of stars using narrow/medium-bandwidth photometric surveys (see Table 1 of H22 for a summary).
As a pioneering experiment, H22 present measurements of stellar parameters, including metallicity, luminosity classification, effective temperature,
distance, and stellar age, for over 24 million stars, based on the stellar colors from the SkyMapper Southern Survey (SMSS; ) and Gaia <cit.>, as well as the parallax measurements from Gaia.
This huge data set has already been applied to a number of Galactic studies, including searching for metal-poor stars <cit.>, discovery of ancient halo substructures <cit.>, and understanding the disk/halo formation history (Hong et al. 2023). Its contribution to this field is just beginning to be explored.
In this paper, we present a second pioneering experiment in the Northern sky, using the data from the first data release of the Stellar Abundance and Galactic Evolution Survey <cit.> and Gaia EDR3 <cit.>.
SAGES is an optical multi-band (u, v, g, r, i, DDO-51, Hα_ wide, Hα_ narrow) large-scale photometric survey, aiming to cover 12,000 square degrees of the Northern sky with δ > -5^∘ down to a 5σ depth of 21.5 in the u-band <cit.>.
The u-band filter is the same as in the Strömgren system <cit.>, and the v-band is optimized to provide reliable metallicity measurements by shifting the central wavelength of the SkyMapper v <cit.> to longer wavelengths, by about 100 Å, to reduce the effect of molecular bands of carbon and nitrogen on the metallicity estimates.
The special design of the uv filters (especially the v-band) provides photometric sensitivity to stellar surface gravity and metallicity that are well-demonstrated by numerous previous efforts with similar filter systems (e.g., ; H22).
The gri filters are SDSS-like, which can be used to estimate the stellar effective temperature.
The combination of Hα and other filters can be used to estimate the values of reddening.
Similar to our effort with SMSS (H22), here we present stellar parameter estimates for over 26 million stars using the uv-band data released in SAGES DR1, along with the photometric and parallax information provided by Gaia EDR3 <cit.>.
This paper is structured as follows.
In Section 2, we introduce the data adopted in the current work.
In Section 3, photometric-metallicity estimates from the stellar colors of SAGES DR1 and Gaia EDR3 are described, along with various checks on the photometric measurements.
The determinations of effective temperature, T_ eff, distance, and age are presented in Section 4.
Radial velocity measurements collected from previous spectroscopic surveys and the final sample are described in Section 5.
We present a summary in Section 6.
§ DATA
In the present work, the SAGES DR1 <cit.> dataset is adopted.
SAGES DR1 has released a total of about 100 million sources extracted from 36,092 accepted frames in the uv-bands collected by the 90-inch (2.3m) Bok Telescope at Kitt Peak National Observatory in Arizona.
DR1 covers about half of the Northern Hemisphere (9960 square degrees), about 90 per cent of the planned area.
The median completeness is about 20.4 and 20.3 for the u- and v-band, respectively.
This is one of the deepest near-ultraviolet large-scale photometric surveys, with a 5σ depth close to 21.5 in the u-band.
Compared to other near-ultraviolet deep photometric surveys, e.g., the SDSS <cit.> and the South Galactic Cap u-band Sky Survey <cit.>, SAGES has the advantage of using the two medium-bandwidth filters uv, which are optimized for estimates of stellar parameters.
In addition to the uv-band data provided by SAGES DR1, the optical bands of G, G_ BP, G_ RP, as well as astrometric information, is adopted from the Gaia EDR3 <cit.>.
The Gaia EDR3 broadband photometry is essentially complete between G = 12 and G = 17.
The completeness is quite complicated for sources fainter than G = 17, which is strongly dependent on celestial position <cit.>.
In total, nearly 33 million stars are selected by the following cuts:
* flag_u/v = 0 in SAGES DR1
* Uncertainties of G, G_ BP, and G_ RP smaller than 0.05 mag
* Galactic latitude |b| ≥ 10^∘
SAGES was initially designed to avoid the high-reddening regions with |b| ≤ 10^∘, although a few disk areas are observed for specific reasons.
The former two cuts are required for precise metallicity estimates, but they do affect the completeness in the faint range (G > 18.5).
The last cut is to exclude those disk regions in our analysis, given their high values of extinction.
This sample is referred to as the main sample for our following analysis.
In this study, the colors u-G_ BP, v-G_ RP, and G_ BP - G_ RP are used.
We note that the mean G_ BP flux in Gaia EDR3 is
over-estimated for faint red sources with G ≥ 20.0 <cit.>. However, only 650 thousand stars (no more than 3 per cent of the full sample) in our final catalog are fainter than 20th magnitude in the G-band.
Therefore, the systematic issue for G_ BP is minor for the current study.
Unless indicated otherwise, these colors are corrected for reddening using the extinction map of <cit.>
[Here the SFD98 E(B-V) is corrected for a 14% systematic
over-estimate <cit.>].
The reddening coefficients for those colors, as well as for the G-band, are calculated using the same way as in H22.
§ METALLICITY DETERMINATION
§.§ Training Set
The key to determinations of metallicity using stellar colors is the training set.
The training set adopted here is similar to that used in H22, which consists of 1) LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>], 2) the revised parameters of metal-poor ([Fe/H] ≤ -1.8) stars of SEGUE <cit.>, along with other datasets from SDSS (we refer to the total dataset below as SEGUE), and LAMOST <cit.> by a custom version of the SSPP (LSSPP; Lee et al. 2015), along with careful visual inspection (by Beers), and 3) the bibliographical compilation of measurements
of stellar atmospheric parameters from high-resolution spectroscopy (HRS) by PASTEL <cit.> and SAGA <cit.>.
The metallicity scale of the former two sets is calibrated to the one obtained from the HRS dataset.
More details of our efforts to construct a training set with a homogenous scale of metallicity, as well as other elemental-abundance ratios, will be described in Huang et al. (2023).
We then cross-match the above training set to the main sample, together with the following cuts:
* The stars must have small values of extinction (to minimize uncertainties due to reddening corrections): Galactic latitude |b| ≥ 20^∘ and E (B - V) ≤ 0.08
* The stars must have reliable metallicity estimates: LAMOST/SEGUE spectral signal-to-noise ratio (SNR) greater than 20, effective temperatures in the range 3800 ≤ T_ eff (K)≤ 7500 (i.e., typical FGK-type stars)
* The photometric uncertainties in the SAGES uv and Gaia G_ BPG_ RPG bands must be smaller than 0.035 mag
* The stars must have Gaia relative parallax measurement uncertainties smaller than 50%
In addition to the above cuts, only about half of the metal-rich ([Fe/H] >-1.0) stars are selected to avoid large differences in the number of metal-rich ([Fe/H] >-1.0) and metal-poor ([Fe/H] <-1.0) stars (see the right panel of Fig. 1).
Given the number of stars in common between SAGES and those with spectroscopy, the cut on Galactic latitude would not introduce bias in the training sets, e.g., a lack of metal-rich disk populations (see the right panel of Fig. 1).
A total of 223,537 stars (182,035 dwarfs and 41,502 giants) are selected to construct the final training set.
The absolute G-band magnitudes of these stars are derived by adopting the distances from <cit.>, based on the parallax measurements from Gaia EDR3.
The Hertzsprung–Russell (H-R) diagram of the training set is then shown in the left panel of Fig. 1.
By using empirical cuts defined in H22, the training stars are further divided into dwarf and giant stars.
The right panel of Fig. 1 shows the metallicity distributions of the dwarf and giant stars in the training set.
§.§ Metallicity Estimation
To estimate photometric metallicity, we first define the metallicity-dependent stellar loci of (u/v-G_ BP)_0 versus (G_ BP - G_ RP)_0 in Fig. 2 for both dwarf stars (top panel) and giant stars (bottom panel).
Similar to our results with SMSS DR2 in H22, both (u-G_ BP)_0 and (v-G_ BP)_0 colors exhibit significant sensitivities to stellar metallicity for different types of stars characterized by (G_ BP - G_ RP)_0.
Third-order 2D polynomials with 10 free parameters are then applied to describe the stellar loci of dwarf and giant stars:
(u/v - G_ BP)_0 = a_0,0 + a_0,1y + a_0,2y^2 + a_0,3y^3 + a_1,0x +
a_1,1xy + a_1,2xy^2 + a_2,0x^2 + a_2,1x^2y + a_3,0x^3,
where x and y represent (G_ BP - G_ RP)_0 and [Fe/H], respectively.
Two to three sigma-clipping is applied in the fitting process.
The resultant fit coefficients are listed in Table 1.
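A minimal numpy sketch of this fit, including iterative sigma clipping, is given below; the fixed 3σ threshold, the number of iterations, and the least-squares solver are illustrative assumptions (the text uses two-to-three-sigma clipping).

```python
import numpy as np

def design_matrix(x, y):
    """Ten terms of the third-order 2D polynomial in x = (G_BP - G_RP)_0 and y = [Fe/H]."""
    return np.column_stack([np.ones_like(x), y, y**2, y**3,
                            x, x*y, x*y**2, x**2, x**2*y, x**3])

def fit_stellar_locus(x, y, color, n_iter=5, clip=3.0):
    """Fit (u/v - G_BP)_0 = P(x, y) with iterative sigma clipping."""
    mask = np.ones_like(color, dtype=bool)
    for _ in range(n_iter):
        A = design_matrix(x[mask], y[mask])
        coeffs, *_ = np.linalg.lstsq(A, color[mask], rcond=None)
        resid = color - design_matrix(x, y) @ coeffs
        mask = np.abs(resid) < clip * np.std(resid[mask])
    return coeffs
```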
Using the stellar loci, one can determine the photometric metallicity using the maximum-likelihood approach developed in H22.
For a given star, the metallicity is obtained from the probability distribution function (PDF) of [Fe/H] estimated from the likelihood function:
L_c = 1/( √(2π) σ_c_obs ) · exp( -(c_obs - c_pred)^2 / (2 σ_c_obs^2) ),
where c_ obs are the observed colors, i.e., (u/v - G_ BP)_0, with assumed Gaussian errors σ_c_ obs.
The c_ pred represents the same colors predicted by the metallicity-dependent stellar loci (defined by Equation 1) with (G_ BP - G_ RP)_0 from observations and [Fe/H] ranging from -3.5 to +0.8 in steps of 0.01 dex.
The uncertainty in the estimated photometric metallicity is taken to be half of the 68% interval of the resultant PDF.
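The per-star likelihood evaluation is a grid scan over [Fe/H]; the sketch below illustrates it for a single color, re-using the locus of Equation (1). The grid range and step follow the text, while the explicit normalization and the percentile-based 68% interval are our own implementation choices.

```python
import numpy as np

def photometric_feh(c_obs, sigma_c, x_obs, locus_coeffs,
                    feh_grid=np.arange(-3.5, 0.81, 0.01)):
    """Maximum-likelihood [Fe/H] from a single observed color.

    c_obs, sigma_c : observed (u/v - G_BP)_0 color and its Gaussian error
    x_obs          : de-reddened (G_BP - G_RP)_0 color of the star
    locus_coeffs   : the ten stellar-locus coefficients of Equation (1)
    """
    x = np.full_like(feh_grid, x_obs)
    y = feh_grid
    A = np.column_stack([np.ones_like(x), y, y**2, y**3,
                         x, x*y, x*y**2, x**2, x**2*y, x**3])
    c_pred = A @ locus_coeffs
    pdf = np.exp(-0.5 * ((c_obs - c_pred) / sigma_c) ** 2) / (np.sqrt(2 * np.pi) * sigma_c)
    pdf /= pdf.sum()                                   # normalize on the grid
    feh_best = feh_grid[np.argmax(pdf)]
    cdf = np.cumsum(pdf)
    lo, hi = np.interp([0.16, 0.84], cdf, feh_grid)    # 68% interval of the PDF
    return feh_best, 0.5 * (hi - lo)
```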
From the above approach, we estimate the photometric metallicities of training-set stars to be compared to the spectroscopic measurements as an internal test.
These comparisons are shown in Fig. 3 for both dwarf stars (top panel) and giant stars (bottom panel).
Generally, the estimated photometric metallicities agree with the spectroscopic metallicities very well for both dwarf and giant stars, either from (u - G_ BP)_0 or (v - G_ BP)_0; the overall scatter is only 0.09 dex and 0.13 dex for dwarf stars achieved by (u - G_ BP)_0 and (v - G_ BP)_0, respectively.
The scatter of the combined estimates using an error-weighted mean is further reduced to 0.08 dex, even better than the precision of low/medium-resolution spectroscopy.
As shown in the top-right panel of Fig. 4, no significant systematic offset is found for dwarf stars with photometric [Fe/H] >-1.0, and a mild offset of -0.20 to -0.4 dex (photometric minus spectroscopic) is found for metal-poor dwarf stars with photometric [Fe/H] ≤-1.0.
The metallicity precision for dwarf stars as revealed by the internal comparisons is a function of [Fe/H], with scatter smaller than 0.1 dex for [Fe/H] >-0.5, increasing to 0.3-0.4 dex at the extremely metal-poor end ([Fe/H] ∼ -3.0).
For giant stars, the overall scatter is around 0.11 dex.
The comparisons show that photometric metallicity derived from (v - G_ BP)_0 is in excellent agreement with that of spectroscopy, with negligible offsets for [Fe/H] >-2.0 and a small offset of -0.2 dex (photometric minus spectroscopic) at the extremely metal-poor end ([Fe/H] ∼ -3.0).
The metallicity precision from (v - G_ BP)_0 is around 0.1 dex for [Fe/H] >-1.0, and 0.2-0.3 dex for [Fe/H] ≤ -1.0.
The performance of photometric metallicity derived from (u - G_ BP)_0 is moderately worse, especially for warmer giant stars, which are mostly BHB stars (see the blue box in the bottom left panel of Fig. 3).
Finally, the internal checks indicate that there are no systematic trends with effective temperature for the photometric-metallicity estimates of both dwarf and giant stars (see the top-left panel of Fig. 4).
In addition to the internal test, we derive photometric metallicities for LAMOST targets with larger values of E (B-V) that are not included in the training set.
Using the LAMOST targets (including these stars with low values of extinction in the training set), we show the metallicity differences between the photometric and spectroscopic values as a function of E (B-V) in Fig. 5.
The metallicity differences (photometric minus spectroscopic) steadily decrease with E (B-V), and reach ∼ +0.2 dex at E (B-V) ∼ 0.5 for both dwarf and giant stars.
This trend is possibly due to spatial systematic uncertainties in the SFD98 extinction map, as found most recently by <cit.>.
Moreover, <cit.> have shown that the reddening coefficients depend not only on effective temperature/intrinsic colors, but also extinction itself (ignored in this work).
The neglect of the extinction term may also partly contribute to this E (B-V) dependent trend.
To correct for this systematic trend, a fifth-order polynomial is applied to describe the differences as a function of E (B-V) for dwarf and giant stars, respectively.
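This amounts to a one-dimensional polynomial fit of the metallicity residuals against E(B-V), applied separately to dwarfs and giants; the brief sketch below assumes a simple least-squares fit and is only meant to make the correction step explicit.

```python
import numpy as np

def fit_ebv_correction(ebv, delta_feh, order=5):
    """Fit Delta[Fe/H] (photometric minus spectroscopic) as a polynomial in E(B-V)."""
    return np.polynomial.Polynomial.fit(ebv, delta_feh, deg=order)

def apply_ebv_correction(feh_phot, ebv, correction):
    """Subtract the fitted E(B-V)-dependent zero-point offset."""
    return feh_phot - correction(ebv)
```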
According to the above tests, the final metallicity of a dwarf star is given by the combined estimate if both (u - G_ BP)_0 and (v - G_ BP)_0 colors are
available, or given by the single measurement from either (u - G_ BP)_0 or (v - G_ BP)_0, depending on which color is available.
The final metallicity of a giant star is given by the measurement of color (v - G_ BP)_0, or the color (u - G_ BP)_0 if the former is not available.
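The adopted combination rule can be written in a few lines, as in the sketch below; the NaN convention for a missing color and the inverse-variance weighting are our reading of the text.

```python
import numpy as np

def combine_feh(feh_u, err_u, feh_v, err_v, is_dwarf):
    """Final [Fe/H]: error-weighted mean of the u- and v-based estimates for dwarfs
    (when both exist), the single available color otherwise; for giants the v-based
    estimate is preferred and the u-based one is the fallback."""
    if is_dwarf:
        if not np.isnan(feh_u) and not np.isnan(feh_v):
            w_u, w_v = 1.0 / err_u**2, 1.0 / err_v**2
            return (w_u * feh_u + w_v * feh_v) / (w_u + w_v)
        return feh_v if np.isnan(feh_u) else feh_u
    return feh_u if np.isnan(feh_v) else feh_v
```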
In this manner, photometric-metallicity estimates are derived for over 26 million stars (23 million dwarf stars and 3 million giant stars) in SAGES.
Note that the extinction-dependent zero-point offsets are corrected using the fifth-order polynomial constructed above.
The G-band magnitude distributions of stars with metallicity estimates are shown in the left panel of Fig. 6.
The overall completeness limit is around magnitudes G = 17.5 and 18.5, for dwarf and giant stars, respectively.
As mentioned earlier, we caution that the completeness of Gaia broadband photometry is quite complicated, especially in crowded regions, for stars with G > 17 <cit.>.
The photometric-metallicity distributions of dwarf and giant stars are shown in the right panel of Fig. 6.
The total number of very metal-poor (VMP; [Fe/H] < -2.0) stars is about one million, which is the largest database of VMP candidates yet assembled from photometric techniques.
The metallicity uncertainty of a star is contributed by two sources: the method error deduced from the internal checks and the random errors derived from the likelihood function.
The metallicity uncertainty as a function of G-band magnitude is shown in Fig. 7, which is dominated by the method error and random errors in the bright and faint end, respectively.
§.§ Comparison with APOGEE DR17 and GALAH DR3+
The accuracy of our photometric estimates of metallicity is examined by comparisons with the independent spectroscopic measurements from the APOGEE DR17 <cit.> and GALAH DR3+ <cit.>.
The comparisons are shown in Fig. 8 for 72,995 high-quality (SNR ≥ 30) stars in common with APOGEE and 13,038 high-quality (SNR ≥ 30) stars in common with GALAH DR3+.
Generally, the photometric-metallicity estimates agree very well with the spectroscopic values, without significant offsets.
The overall scatter is only 0.09 dex for dwarf stars and 0.10-0.15 dex for giant stars.
The zero-point and precision of individual metallicity bins are also examined in the lower panels of Fig. 8; the results are consistent with our internal tests (see Fig. 4).
We also present the metallicity differences between the photometric estimates and spectroscopic values from APOGEE DR17 as a function of E (B-V) in Fig. 9.
The plot clearly shows that the offsets are all around zero for different bins of E (B-V), a validation of our polynomial corrections described in Section 3.2 (see Fig. 5).
§.§ Comparison with Metal-poor Samples from High-resolution Spectroscopy
To explore the capabilities of the SAGES filters for determinations of metallicity for metal-poor stars, we collect samples of independent metallicity estimates from HRS, especially for metal-poor stars.
The HRS samples we compare with include a sample of the most metal-poor stars <cit.>, the R-Process Alliance sample <cit.> for over 600 VMP stars, the CFHT ESPaDOnS follow-up observations of 132 metal-poor candidates selected from the Pristine survey <cit.>, the Subaru follow-up observations of 400 VMP candidates selected from the LAMOST <cit.>, and the GTC follow-up observations of extremely metal-poor (EMP) candidates identified from the Pristine and LAMOST surveys <cit.>.
We cross-match the SAGES sample to the collected HRS samples and find 112 stars in common (54 dwarfs and 58 giant stars).
The comparison result is shown in Fig. 10.
Generally, our photometric-metallicity estimates are consistent with the HRS values for metal-poor stars without significant carbon enhancements ([C/Fe] < +0.6).
The overall scatter of the differences (photometric minus spectroscopic) is 0.57 dex and 0.30 dex for dwarf and giant stars, respectively, with mild offsets of +0.38 dex and +0.18 dex, respectively.
The result is in line with our internal checks (see Fig. 4).
We note the photometric-metallicity estimates of ultra metal-poor (UMP; [Fe/H] < -4.0) stars can be over-estimated by up to 2 dex for stars with very high carbon enhancements ([C/Fe] ≥ +2.0).
§.§ Comparison with SMSS and Gaia XP Spectra
We compare our results to those of H22 from SMSS and those of <cit.> from Gaia XP low-resolution spectra. The latter has recently delivered estimates of metallicity using a
data-driven technique for over 120 million stars from Gaia XP low-resolution spectra.
As shown in Fig. 11, our estimates are consistent with those of <cit.> and H22, with tiny offsets and a scatter smaller than 0.20 dex.
Finally, although the total number of our metallicity estimates (SAGES + SMSS) does not exceed 50 million stars,
we emphasize that the volume of our sample is much larger than that of the sample constructed from Gaia XP spectra, given that the limiting magnitude of SAGES and SMSS is nearly 3 mag deeper than that of the Gaia XP spectra.
This larger volume will enable numerous interesting studies of the Milky Way, e.g., searching for substructures in the stellar halo.
§ EFFECTIVE TEMPERATURE, DISTANCE, AND AGE ESTIMATES
The effective temperatures of dwarf and giant stars are derived from the metallicity-dependent T_ eff–color relations constructed in H22.
Here the color is the de-reddened (G_ BP - G_ RP)_0, and metallicity is given by photometric [Fe/H].
In this way, effective temperatures are obtained for all of our program stars.
As examined with over 159,000 stars in common, the effective temperature estimated in this work is quite consistent with that from LAMOST, with a small offset around -24 K (this work minus LAMOST) and a scatter of only 84 K (see Fig. 13).
Distances estimated by <cit.> are adopted for stars with reliable parallax measurements with precision better than 30%, parallax greater than 0.15 mas, and renormalized unit weight error (RUWE) smaller than 1.4.
A total of 15,974,812 stars have distances estimated in this way.
Using the apparent G-band magnitudes and SFD E (B-V), the G-band absolute magnitudes have been derived for the nearly 16 million stars with reliable geometric distances.
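As a worked example of this step, the G-band absolute magnitude follows from the distance modulus with an extinction term; in the sketch below the G-band extinction is approximated as A_G = R_G E(B-V) with an illustrative coefficient, whereas the paper computes its reddening coefficients in the same way as H22.

```python
import numpy as np

def absolute_G(G_app, distance_pc, ebv, R_G=2.5):
    """M_G = G - 5 log10(d / 10 pc) - A_G, with A_G approximated as R_G * E(B-V).

    G_app       : apparent Gaia G magnitude
    distance_pc : distance in parsecs (here, the geometric estimates)
    ebv         : E(B-V), already rescaled for the 14% SFD98 over-estimate
    R_G         : assumed G-band reddening coefficient (illustrative value)
    """
    return G_app - 5.0 * np.log10(distance_pc / 10.0) - R_G * ebv
```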
Fig. 12 is the Hertzsprung-Russell (H-R) diagram for about 8 million stars with relative parallax error better than 10%, parallax greater than 0.4 mas, and RUWE≤ 1.4.
Guided by the isochrones of PARSEC <cit.>, empirical cuts are defined to further classify dwarf stars into main-sequence turn-off, main-sequence,
and binary stars.
For the stars without geometric distance estimates, the distances are obtained by inferring their absolute magnitudes from the constraints of stellar colors and photometric metallicity.
For main-sequence dwarf stars, the G-band absolute magnitudes are derived from the third-order 2D polynomial relation constructed in H22.
Combining with the G-band magnitude and the SFD E (B -V), the distances are found for over one million main-sequence dwarf stars with (G_ BP - G_ RP)_0 ≥ 1.0.
For giant stars, a likelihood method developed in <cit.> and <cit.> is adopted to infer the i-band absolute magnitude using the (g - i)_0 color, photometric [Fe/H], and empirical color–magnitude fiducials interpolated from six globular clusters.
Here, the g- and i-band magnitudes are from the Pan-STARRS1 surveys <cit.>; the reddening-correction coefficients are from <cit.>.
The interested reader is referred to X14 or <cit.> for more details.
In the above manner, a total of over 1.6 million giant stars have their distances estimated. To test the accuracies of our distance estimates for giant stars, Fig. 14 compares these with those of X14 for over 1600 stars in common.
The results are consistent with each other, with a tiny relative offset of -3.7% (this work minus X14) and a scatter of 21.7%.
This scatter implies that both estimates have a typical precision of about 16%, which is expected by X14.
Finally, we derive stellar ages for stars with good parallax measurements, i.e., parallax measurements with precision better than 30%, parallax greater than 0.15 mas, and RUWE≤ 1.4, using the technique developed in H22.
Nearly 15 million stars have their ages estimated in this way.
We note that the RUWE cut cannot exclude all of the binary stars, whose ages may be over-estimated.
As noted by H22, this technique is mostly valid for main-sequence turn-off and sub-giant stars; uncertainties are larger for other types of stars in the H-R diagram.
We perform a similar check as done in H22 with over 160,000 stars in common between this work and <cit.>, who derived isochrone ages for over 3 million stars with both spectroscopic and astrometric information.
The check shows that the age estimates in this work agree with those from SD18, with an offset of 5% in the relative age difference (age_ TW -age_ SD18)/age_ SD18 and a scatter in the relative age difference of around 20%.
§ RADIAL VELOCITIES AND THE FINAL SAMPLE
We collect measurements of radial velocities for our sample stars available from completed and ongoing spectroscopic surveys, including
GALAH DR3+ <cit.>, SDSS/APOGEE DR17 <cit.>, Gaia DR3 <cit.>, RAVE DR5 <cit.>, LAMOST DR9[<http://www.lamost.org/dr9/v1.0/>] and SDSS/SEGUE DR16 <cit.>, with typical measurement errors of 1.1, 0.5, 1.0-6.0, 2.0, 5.0 and 5.0 km s^-1, respectively.
In total, over 4.2 million stars in our final sample have radial velocity measurements.
The detailed contributions of radial velocities from each survey are given in Table 2.
If a star has radial velocity measurements from two or more surveys, the result from the survey with the highest spectral resolution is adopted.
We note that all of the radial velocity zero-points are calibrated to the updated APOGEE radial-velocity standard stars based on the SDSS/APOGEE DR17 constructed using the same technique proposed in <cit.>.
In the final sample, over 22 million dwarf and 3 million giant stars have photometric-metallicity estimates (see Section 3) from the stellar colors provided by SAGES DR1 <cit.> and Gaia EDR3 <cit.>, and effective temperature estimates from the intrinsic (G_ BP - G_ RP)_0 colors and photometric [Fe/H] (see Section 4).
From the well-developed techniques described in H22, distances and ages are further derived for 18 and 15 million stars in the final sample, respectively (see Section 4).
The radial velocity measurements, if available from the spectroscopic surveys, and the astrometric parameters in Gaia EDR3 <cit.> are also included.
A description of the information for stars in the final sample catalog is presented in Table 3.
The final stellar-parameter sample catalog will be released by the SAGES project as a value added catalog.
This sample already represents large progress on the development of stellar samples from the Northern sky for use in Galactic studies.
Together with our former effort with SMSS DR2, described in the first paper of this series, these results provide photometric metallicities for on the order of 50 million stars and will shed light on the formation and evolutionary history of our Galaxy.
The next step of this project is to extend this technique to derive photometric-metallicity with improved precision, especially at the metal-poor end, and other
elemental-abundance ratios (e.g., [α/Fe] and [C/Fe]) from the narrow/medium-band photometric surveys <cit.>, or from Gaia XP low-resolution spectra, although only for stars with a relatively bright limiting magnitude around G ∼ 17.5 mag <cit.>.
§ SUMMARY
In this, the second paper of this series, we present stellar parameters for over 20 million stars in the Northern sky, using SAGES DR1 and Gaia EDR3.
With a careful and comprehensive selection of a training set from spectroscopic measurements, we present photometric-metallicity estimates for nearly 26 million stars (23 million dwarf and 3 million giant stars), with useful metallicity determinations down to [Fe/H] = -3.5.
Both internal and external checks show that the precisions of our photometric measurements are about 0.1 dex in the metal-rich range ([Fe/H] > -1.0) and 0.15-0.25/0.3-0.4 dex for dwarf/giant stars with [Fe/H]≤ -1.0.
This result is comparable to, or even better than, that obtained from low/medium-resolution spectroscopy.
In addition to metallicity, the final sample also includes measurements of effective temperature from metallicity-dependent T_ eff–color relations, distances either from Gaia parallax measurements or from the metallicity-dependent color-absolute magnitude fiducials, and ages from comparisons between observations and stellar isochrones.
Radial velocities from spectroscopic surveys and astrometric parameters from Gaia EDR3 are also included.
To date, we have delivered stellar parameters for over 50 million stars covering almost 3π steradians of sky, which will be useful to a variety of studies of the Milky Way.
§ ACKNOWLEDGEMENTS
This work is supported by National Key R&D Program of China No. 2019YFA0405500 and National Natural Science Foundation of China grants 11903027, 11833006, 11973001, 11603002, 11811530289 and U1731108.
We used data from the European Space Agency mission Gaia (<http://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC; see <http://www.cosmos.esa.int/web/gaia/dpac/consortium>).
T.C.B. acknowledges partial support from grant PHY 14-30152, Physics
Frontier Center/JINA Center for the Evolution of the
Elements (JINA-CEE), awarded by the US National Science
Foundation. His participation in this work was initiated by conversations that took place during a visit to China in 2019, supported by a PIFI Distinguished Scientist award from the Chinese Academy of Science. Y.S.L. acknowledges support from the National Research Foundation (NRF) of
Korea grant funded by the Ministry of Science and ICT (NRF-2021R1A2C1008679).
Y.S.L. also gratefully acknowledges partial support for his visit to the University
of Notre Dame from OISE-1927130: The International Research Network for Nuclear Astrophysics (IReNA),
awarded by the US National Science Foundation.
CAO acknowledges support from the Australian Research Council through Discovery Project DP190100252.
The Stellar Abundance and Galactic Evolution Survey (SAGES) is a multi-band photometric project built and managed by the Research Group of the Stellar Abundance and Galactic Evolution of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC).
The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National
Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the
National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
apj
|
http://arxiv.org/abs/2307.04729v2 | 20230710173843 | Bootstrapping the Chiral Anomaly at Large $N_c$ | [
"Teng Ma",
"Alex Pomarol",
"Francesco Sciotti"
] | hep-th | [
"hep-th",
"hep-lat",
"hep-ph"
] |
|
http://arxiv.org/abs/2307.05436v1 | 20230711165857 | Conformal actions of solvable Lie groups on closed Lorentzian manifolds | [
"Vincent Pecastaing"
] | math.DG | [
"math.DG"
] |
We consider conformal actions of solvable Lie groups on closed Lorentzian manifolds. With <cit.>, in which we addressed similar questions for semi-simple Lie groups, this work contributes to the understanding of the identity component G of the conformal group of compact Lorentzian manifolds. In the first part of the article, we prove that G is inessential if and only if its nilradical is inessential. In the second, we assume that the nilradical is essential and establish conformal flatness of the metric on an open subset, under certain algebraic hypotheses on the solvable radical. This is related to the Lorentzian Lichnerowicz conjecture. Finally, we consider the remaining situations where our methods do not apply to prove conformal flatness, and conclude that for an essential compact Lorentzian n-manifold, n ≥ 3, the radical of its conformal group admits a local embedding into O(2,n).
§ INTRODUCTION
This work has two main motivations. First, we would like to obtain a classification, up to local isomorphism, of conformal groups of compact Lorentzian manifolds, that would generalize that of Adams-Stuck-Zeghib, in which the identity component of their isometry groups are classified. Secondly, the Lorentzian version of a conjecture of Lichnerowicz asserts that essential closed Lorentzian manifolds are conformally flat. We aim to establish conformal flatness of these manifolds, assuming that their conformal group satisfies extra algebraic assumption. Let us first recall briefly these problems.
§.§ General context
A conformal diffeomorphism of a pseudo-Riemannian manifold (M,g) is an element f ∈ Diff(M) for which there exists φ∈𝒞^∞(M), φ > 0, such that f^*g = φ g. We denote by [g] the conformal class of g and we call (M,[g]) a pseudo-Riemannian conformal structure. We also denote by Conf(M,[g]) the group of conformal diffeomorphisms of (M,[g]).
A natural question is to know if a conformal structure (M,[g]) admits conformal transformations which are not isometries of any metric g' in the conformal class.
Let (M,[g]) be a pseudo-Riemannian conformal structure. A subgroup H < Conf(M,[g]) is said to be inessential if there exists g' ∈ [g] such that H < Isom(M,g'). Otherwise, it is said to be essential. If Conf(M,[g]) is essential, we say that (M,[g]) is essential.
For positive-definite metrics, a closed subgroup H of conformal transformations is essential if and only if it acts non-properly on M. So, for M compact, this is equivalent to the non-compactness of the conformal group. These properties are no longer true for higher signatures, so one of the first questions is how to characterize essentiality.
In dimension at least 3, a conformal class of pseudo-Riemannian metrics defines a rigid geometric structure, be it in Gromov's sense <cit.>, or in the sense that it is equivalent to a normalized Cartan geometry modeled on ^p,q, the pseudo-Riemannian analogue of the Möbius sphere. As a consequence, (M,[g]) has the structure of a Lie transformation group (possibly non connected) and a natural problem is to determine which Lie groups arise in this way, and on which geometry they can act. We will concentrate here on conformal structures on compact manifolds and to the identity component of their conformal group.
Lichnerowicz conjectured in the 1960's that if a closed Riemannian manifold is essential, then it is conformally equivalent to the round sphere. More generally, a “Vague general conjecture” of D'Ambra and Gromov (<cit.>, §0.8) asserts that rigid geometric structures with large automorphism groups can be classified, as it had been the case when Ferrand and Obata proved Lichnerowicz's conjecture, first as formulated above, and then its non-compact analogue <cit.>, see also <cit.> and the extension to the CR case. Since then, various people have been working on generalizations to other geometric structures, notably on a projective version of Lichnerowicz's conjecture (e.g. <cit.>). It appeared that a pseudo-Riemannian analogue of Ferrand-Obata's theorem is not plausible <cit.>. One of the reasons is that it is deeply related to the “rank 1 nature” of conformal Riemannian structures (<cit.>), whereas other signatures are modeled on higher-rank parabolic geometries. In <cit.>, the authors asked if general compact pseudo-Riemannian manifolds admitting an essential conformal group are conformally flat, i.e. locally conformally equivalent to the flat pseudo-Euclidean space ^p,q. Frances gave counter-examples to this question in all signatures of metric index at least 2 (<cit.>). The case of Lorentzian metrics remains however open, and led to the following problem.
Let (M,[g]) be a compact, conformal Lorentzian structure. If it is essential, then it is conformally flat.
Recently, a proof for 3-dimensional, real-analytic, compact Lorentzian manifolds has been established in <cit.>, and the main result of <cit.> implies that it is also true for compact, real-analytic Lorentzian manifolds with finite fundamental group.
Apart from these general results, various works have been dealing with compact pseudo-Riemannian manifolds with “large” conformal groups.
It appeared that if a semi-simple Lie group S of non-compact type acts conformally on a closed pseudo-Riemannian manifold of signature (p,q), then _(S) ≤min(p,q)+1 <cit.> and if equality holds, then the manifold is a quotient of the universal cover of ^p,q by a cyclic group <cit.>. In <cit.>, the opposite situation of actions by rank 1 simple Lie groups is considered and similar conclusions are obtained when the metric index is minimal. For the Lorentzian signature, it is proved in <cit.> that if a non-compact semi-simple Lie groups acts conformally essentially on a closed Lorentzian manifold, then it is conformally flat (see §<ref> below).
In another direction, it is natural to consider the “solvable part” of the conformal group of these manifolds and to derive geometric or dynamical information. Frances and Melnick proved in <cit.> that if a nilpotent Lie group N, with nilpotence degree k, acts conformally on a closed pseudo-Riemannian manifold of signature (p,q), then k ≤ 2min(p,q)+1 and that if equality holds, the manifold is again a quotient of the universal cover of ^p,q by a cyclic group. Apart from this result, to the best of our knowledge, most results about solvable Lie group actions in pseudo-Riemannian geometry concern isometric actions and not much is known about their conformal essential actions, even in Lorentzian signature (see nonetheless <cit.> in the general homogeneous case and the announced results therein).
§.§ Embedding of the radical of the conformal group
The present work focuses on the case of solvable Lie group actions on compact Lorentzian manifolds. We obtain an optimal obstruction for an essential conformal action of a solvable Lie group on a closed Lorentzian manifold:
Let (M^n,[g]), n ≥ 3, be a compact manifold endowed with a Lorentzian conformal structure, and let R be a connected, solvable Lie subgroup of Conf(M,[g]). If R is essential, then there exists a Lie algebra embedding 𝔯 ↪ 𝔰𝔬(2,n), where 𝔯 denotes the Lie algebra of R.
The projective model of the Lorentzian Einstein Universe Ein^1,n-1 is defined as the smooth quadric ℙ({Q_2,n=0}) ⊂ ℝP^n+1, where Q_2,n is a quadratic form of signature (2,n). It naturally comes with a conformally flat conformal class of Lorentzian metrics and its conformal group is PO(2,n). So, the converse statement is immediate: any Lie subgroup of PO(2,n) acts faithfully on (at least) one compact Lorentzian manifold.
As a consequence of Liouville's theorem, any Lie algebra of conformal vector fields of a conformally flat Lorentzian manifold embeds into (2,n) (see also Lemma <ref>). Therefore, Theorem <ref> supports Conjecture <ref>, because if true, the latter would imply that the identity component of (M,[g]) locally embeds into O(2,n).
Suppose that (M,[g])_0 is essential. In <cit.>, we proved that if the Levi factor of (M,[g])_0 is non-compact, then the metric is conformally flat, and in particular the whole identity component of the conformal group locally embeds into O(2,n). Applying Theorem <ref> to the solvable radical R of (M,[g])_0, and Theorem <ref> below, we conclude that for any closed Lorentzian manifold (M,g), if (M,[g])_0 is essential, then, up to local isomorphism, it is a semi-direct product of a compact semi-simple Lie group with an immersed subgroup of O(2,n).
This raises the following question: Conversely, which solvable Lie subgroups of O(2,n) can be realized as the radical of the conformal group of some closed Lorentzian n-manifold? A similar problem was treated in <cit.> in the classification of isometry groups of closed Lorentzian manifolds, and turned out to be quite delicate. We leave this question for further investigation ; it would ultimately be interesting to reach a statement comparable to Adams-Stuck-Zeghib's classification result, that we recall below.
[<cit.>]
Let (M,g) be a compact Lorentzian manifold and let G = Isom(M,g)_0 denote the identity component of its isometry group. Then, its Lie algebra splits into a direct product 𝔤 = 𝔰 ⊕ 𝔞 ⊕ 𝔨 where 𝔨 is the Lie algebra of a compact semi-simple Lie group, 𝔞 is abelian and 𝔰 is in the following list:
* 𝔰𝔩_2(ℝ),
* Heisenberg algebras 𝔥𝔢𝔦𝔰(2n+1), n ≥ 1,
* Oscillator algebras, i.e. certain solvable extensions ℝ ⋉ 𝔥𝔢𝔦𝔰(2n+1), n ≥ 1,
* {0}.
Conversely, for any such Lie algebra, there exists a compact Lorentzian manifold whose isometry group admits it as Lie algebra.
§.§ Essentiality of the nilradical
Let(M,g)be a pseudo-Riemannian manifold. Given an essential subgroupG < (M,[g]), and a non relatively compact subgroupH < G, a natural question is to know ifHis also essential. Or equivalently: ifHpreserves a metric in the conformal class, is it also the case for all ofG?
In our situation, we obtained previously a positive answer whenHis semi-simple and non-compact
[<cit.>]
Let (M,g) be a closed Lorentzian manifold and S be a non-compact semi-simple Lie group. Suppose that S acts conformally on M with discrete kernel.
* If S is inessential, then so is all of (M,g)_0.
* If S is essential, then (M,[g]) is conformally flat.
We would like to get a similar result in the setting of solvable Lie groups actions. We prove here:
Let (M,g) be a closed Lorentzian manifold and let R be a solvable Lie subgroup of (M,[g]). Let N be the nilradical of R. If N is inessential, then so is R.
Furthermore, whenNis non-abelian, then its essentiality is characterized by that ofN_k, the last non-zero term of its lower-central series (see Theorem <ref> below).
Let G the identity component of (M,[g]). Let R ◃ G be its solvable radical and let N ◃ R be the nilradical. If G/R is compact, then G is essential if and only if N is essential.
Lorentzian manifolds for whichG/Ris non-compact andGis essential are conformally flat by <cit.>. The fact that their holonomy centralizes a non-compact simple Lie subgroup of(2,n) = (^1,n-1)seems to be an indication that they are classifiable up to conformal equivalence, justifying our assumption.
We note nonetheless that the statement is false in general whenGhas a non-compact Levi factor. For instance, if(M,g)is a Hopf manifold,(M,g) ≃O(1,n-1) ×^1. The^1factor is inessential but(M,g)_0is not. More generally, we will observe:
Let G,R,N be as in Corollary <ref>. If N is inessential while G is not, then R is abelian and ≃⊕𝔯 as Lie algebras, where is a Levi factor of , which necessarily contains a factor of non-compact type.
§.§ Actions of nilpotent Lie groups
By Corollary <ref>, if the identity componentGis essential and has compact Levi factor, then some nilpotent subgroup ofGis also essential.
As recalled above, a general bound on the nilpotence degree of a nilpotent Lie group acting on closed pseudo-Riemannian manifolds, as well as a geometric characterization of manifolds at the critical case, was obtained in <cit.>.
We prove here the following for general nilpotent Lie group actions on closed Lorentzian manifolds. For a nilpotent Lie algebra, its nilpotence degree is the smallest integerd ≥1such that_d = {0}, where we denote by{_i}_i ≥1the lower central series of.
Let H be a connected nilpotent real Lie group of nilpotence degree k+1 and let (M^n,g), n≥ 3, be a compact Lorentzian manifold. Let H act locally faithfully by conformal transformations of M. Then, we have the following.
* Assume that H is abelian.
* Then H acts locally freely on an open-dense subset of M, hence H ≤ n.
* If H is essential, then it admits either a fixed point, or an isotropic 1-dimensional orbit.
* If H = n or H ≃^n-1 and if H acts faithfully and essentially, then an open subset of M is conformally flat.
* If H is non-abelian, then it is inessential if and only if H_k acts locally freely.
* If H is non-abelian and essential, then an open subset of M is conformally flat. Precisely, has nilpotence degree k ≤ 3, _k=1 and if X ∈_k ∖{0}, then X has a singularity of order 2.
For (1)(a) and (1)(b), the results are true without assuming M compact.
We stress that for (1)(c), our proof requires the action to be faithful in the case of H=^n-1.
According to (1), if H is abelian of dimension greater than 1, and if H acts locally freely, then it is inessential. The converse is false.
For H =, consider the 3-dimensional Hopf manifold (M,[g]) = (^1,2\{0}) / ⟨ 2 𝕀⟩. Let {u^t} be a unipotent one-parameter subgroup of O(1,2) and let {k^t} be the one induced by the homothetic flow on ^1,2 (it factorizes into an ^1-action). Then, the commutative product {u^tk^t} is an essential conformal flow with no singularity.
Point (1) generalizes easily to abelian conformal Lie group actions on Riemannian manifolds. Essentiality is characterized by the existence of a global fixed point.
However, as shown in <cit.> §.5, there are examples of isometric actions of ×𝐓^k on closed pseudo Riemannian manifolds of metric index greater than 1, all of whose orbits are tori of dimension k.
Combining Theorem <ref> and Theorem <ref>, we get:
Let (M,g) be a closed real-analytic Lorentzian manifold and suppose that G=(M,[g])_0 is essential. If the nilradical N of G is non-abelian, then (M,g) is conformally flat.
As explained above, it is not clear in general if, given an essential groupG, there exists a proper subgroupHwhich is still essential. In view of Conjecture <ref>, it is notably an important matter to know ifGcontains an essential flow, so that the assumption of the conjecture can be replaced by that of an essential conformal vector field. Note that this is the first step in the3-dimensional case <cit.>.
In this direction, another consequence of what precedes is that in the real-analytic case, if the identity component is essential, then either the manifold is conformally flat or the nilradical is abelian and acts essentially.
§.§ Organization of the article
In Section <ref>, we introduce some materials that will be used frequently, especially a version of Zimmer's embedding theorem for Cartan geometries, which is a central ingredient in our methods.
In Section <ref>, we start proving (1)(a), (1)(b) and (2) in Theorem <ref>, which relates inessentiality and locally free actions. We then prove Theorem <ref>, first in the case where the nilradical is abelian and then in the non-abelian case.
For these results, the renormalization function is built using the pseudo-normg(X,X)of certain conformal vector fieldsXassociated to the action. In the non-abelian case, Corollary <ref> gives precious information about the signature of the orbits, which will guarantee that the renormalization function is everywhere positive. We show that the corresponding conformal metric isR-invariant by combining elementary arguments based on the commutation relations between flows in theR-action.
Section <ref> is devoted to essential actions of solvable Lie groups. The results in the inessential case imply that the nilradical is essential. In several contexts, we show that either a flow or a non-trivial element in the nilradical admits a singularityxwhich of order2(i.e. the1-jet is trivial), or more generally has a non-linear unipotent holonomy, a notion that we recall at the beginning of Section <ref>, with other tools derived from the normalized Cartan geometry associated to a conformal structure. The results of <cit.> then imply that an open subset containing the singularity in its closure is conformally flat.
The main result of <cit.> plays a central role here. Working in the closure of a light-like orbit of the nilradicalN, this result implies that the holonomy of vector fields inhave to satisfy certain bracket relations related to the adjoint representation ofon, when the latter has non purely imaginary eigenvalues. With some technical algebraic considerations, we will exhibit flows with non linear unipotent singularities inNin every case that does not appear in Proposition <ref>. The section concludes with the case of conformal essential actions of abelian Lie groups of dimensionnandn-1, concluding the proof of (1)(c) in Theorem <ref>. The adjoint action being trivial, Zimmer's embedding theorem does not provide information. The fact that we are close to the critical dimension implies that the isotropy of a point in a light-like orbit is locally linearizable and conjugate to the full horospherical subgroup ofO(1,n-1)(if not, an open subset is conformally flat). It follows that all light-like orbits are closed, and an element of the formϕ_X^t_0,t_0>0, admits a singularity of order2, yielding conformal flatness of an open subset as before.
In the last section <ref>, we conclude the proof of Theorem <ref>. The latter being immediate if we can prove that an open subset is conformally flat, we can restrict to cases to which the methods of Section <ref> do not apply. These are described by the following.
Let (M^n,g) be a closed Lorentzian manifold, with n ≥ 3. Suppose that a connected, solvable Lie group R acts conformally and essentially on M. Then, either there exists a non-empty conformally flat open subset of M, or is isomorphic to a semi-direct product ⋉_ρ, where:
* is abelian, with 1 ≤≤ n-1 ;
* is abelian and ρ : →() is a faithful representation such that ρ(X) is semi-simple for all X ∈;
* Considering the complex weights {α_1,…,α_r} of the complexified representation ρ^ℂ, and writing λ_i = Re(α_i) and μ_i = Im(α_i), there exists λ ∈ ℝ^* such that for all i, λ_i ∈ {0,λ}.
With a last technical observation in the case= n-1(Lemma <ref>), it will then be easy to perform an explicit embedding ofinto(2,n).
§.§ Notations and conventions
We will call Heisenberg triple (or 𝔥𝔢𝔦𝔰-triple) in a Lie algebra any triple (X,Y,Z) such that [X,Y] = Z, [X,Z] = [Y,Z] = 0 and Z ≠ 0. Such a triple spans a copy of the 3-dimensional Heisenberg Lie algebra 𝔥𝔢𝔦𝔰(3). The lower central series {𝔤_k} of a Lie algebra 𝔤 is defined by 𝔤_0 = 𝔤 and 𝔤_k+1 = [𝔤,𝔤_k]. For 𝔤 nilpotent, the nilpotence degree is defined as the smallest integer k such that 𝔤_k = 0.
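For concreteness, such a triple can be realized (this explicit model is only an illustration) inside the strictly upper-triangular 3×3 matrices: writing E_ij for the elementary matrix whose only non-zero entry is a 1 in position (i,j), the elements X = E_12, Y = E_23, Z = E_13 satisfy
[X,Y] = E_12E_23 - E_23E_12 = E_13 = Z,  [X,Z] = [Y,Z] = 0,
so that (X,Y,Z) spans a copy of the 3-dimensional Heisenberg algebra 𝔥𝔢𝔦𝔰(3).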
IfGis a Lie group and⊂a Lie subalgebra of its Lie algebra, we call integral subgroup associated tothe connected, immersed subgroup ofGtangent toat the identity.
IfGis a Lie group acting smoothly and with discrete kernel on a manifoldM, then we associate to any elementX ∈the vector fieldX̅ofMwhich is the opposite of the infinitesimal generator of the flow corresponding to the action ofe^tX. The mapping{X ↦X̅}is then a Lie algebra embedding identifyingwith a Lie algebra of vector fields ofM, which is implicitly used in all the article.
A Lie group action of a Lie groupGon a manifoldMis said to be locally faithful if its kernel is discrete. It is said to be locally free if the stabilizer of any point is discrete. We will use the same terminology for a Lie sub-algebra⊂when the corresponding integral subgroupHhas the property in question. For instance, we will say “has a fixed point” to mean thatHhas a fixed point, or “has a degenerate orbit atx” to mean that the orbitH.xis degenerate. For a vector subspace⊂andx ∈M, we will note(x) = {X_x, X ∈} ⊂T_xM.
IfXis a vector field ofM, we say thatXhas a singularity of order2at a pointx_0ifX(x_0)=0and_̣x_0ϕ_X^t = 𝕀, i.e. if its local flow has trivial1-jet atx_0.
If(M,g)is a pseudo-Riemannian manifold on which a groupGacts conformally, the conformal distortion ofGwith respect togis the_>0-valued cocycleλ: G ×M →_>0defined for allφ∈Gandx ∈Mby[φ^* g]_x = λ(φ,x) g_x. The action ofGis said to be inessential ifgcan be replaced by a conformal metricg'with respect to which the action is isometric.
§ EMBEDDING THEOREM AND OTHER ANTERIOR RESULTS
§.§ Embedding theorem for Cartan geometries
We will use several times a version of Zimmer's embedding theorem for Cartan geometries proved in <cit.>, Theorem 4.1. We recall the setting and its statement below. We will apply a corollary of this theorem in Section <ref>, which can be stated (and in fact proved) without the formalism of Cartan geometries. We will then use its full strength in Section <ref> and afterwards. We refer the reader to Section <ref> for a brief introduction to the Cartan connection associated to a conformal structure. At first reading, the reader can skip the general version and retain only Corollary <ref> below.
LetHbe a connected Lie group and letS < Hbe a subgroup, not required to be closed. Following <cit.>, define the discompact radical ofSas the largest algebraic subgroupS̅_din the Zariski closure of_(S)which does not admit any proper, algebraic, normal, cocompact subgroup. For instance,H̅_d = _(H)forS=Han algebraic semi-simple Lie group of non-compact type.
Let(M,M̂,ω)be a Cartan geometry with an effective model space(,P). We noteπ: M̂ →Mthe fibration. LetH →(M,M̂,ω)be an action by automorphisms of the Cartan geometry, which we assume to be a proper Lie group homomorphism. Any such action gives rise to a natural mapι: M̂ →Mon(,), whereMon(,)denotes the variety of injective linear maps fromto, defined byι()(X) = ω_(X)for all∈M̂andX ∈. Recall that by effectiveness of the model, we can define without ambiguity lifts to the Cartan bundle of infinitesimal automorphisms of the Cartan geometry andXcan be seen as a vector field onMor as a right-P-invariant vector field onM̂.
Suppose that _(P) is almost algebraic in ().
If S preserves a finite Borel measure μ on M, then for μ-almost every x, for every ∈π^-1(x), there exists an algebraic subgroup S < _(P) such that
* for all p∈S, p. ι_() = ι_(),
* the induced homomorphism S→() is algebraic, with image S̅_d.
LetH,Sandμbe as above. Forx ∈M, we denote byH_xthe stabilizer ofxinHand_xits Lie algebra. The tangent spaceT_x(H.x)identifies with/ _x, we note[q_x]the ray of quadratic forms on/_xcorresponding to the restriction of the conformal class[g]to the tangent space of theH-orbit ofx. It is a general fact that the adjoint action ofH_xon/_xis conformal with respect to[q_x]. The following consequence of the previous theorem states that forμ-almost every point, the same holds for the discompact radical ofS, even whenHacts locally freely for instance.
For μ-almost every point x, _x is S̅^d-invariant and the induced action of S̅^d on / _x is conformal with respect to [q_x].
In particular, whenSis amenable, every compactS-invariant subset ofMcontains a point where the conclusions of this corollary are true, since amenability guaranties the existence of a finiteS-invariant measure supported in this compact subset.
§.§ Isotropic conformal vector fields
We will use several times the following results (see for instance Lemma 6.1 of <cit.>).
Let X be a Killing vector field of a Lorentzian manifold (M,g). If g(X,X) = 0 and if X vanishes at some point, then X=0.
Let M be a compact manifold and let G be a Lie group acting smoothly on M. Let X,Y ∈ 𝔤 be such that Z := (ad X)^k Y is centralized by X, for some k ≥ 1. If Y vanishes at some point of M, then so does Z.
Note that if (X,Y,Z) is an 𝔥𝔢𝔦𝔰-triple of vector fields of a compact manifold, and if Z has no singularity, then the corresponding action of Heis(3) is locally free.
Another tool that we will use is the following.
Let X,Y be two conformal vector fields of a pseudo-Riemannian manifold (M,g). If X_x and Y_x are collinear for every x ∈ M, then X and Y are collinear. In particular, in Lorentzian signature, if X and Y are everywhere light-like and orthogonal, then they are proportional.
If X ≠ 0, let U ⊂ M be an open subset on which X(x) ≠ 0. There is a smooth map f defined on U such that Y = fX over U. If we prove that f is constant, then the lemma is established because Y will coincide with a scalar multiple of X on a non-trivial open subset of M.
So we are reduced to observe that for X a non-vanishing conformal Killing vector field of (M,g) and f ∈𝒞^∞(M), if fX is conformal, then f is constant. From the existence of ϕ,ψ∈𝒞^∞(M) such that ℒ_X g=ϕ g and ℒ_fXg=ψ g, we deduce that for all vector fields Y,Z,
(ϕ-fψ)g(Y,Z) = (Y.f) g(X,Z) + (Z.f)g(X,Y).
If we choose Z ∈ X^⊥ non collinear to X, and if Y ∈ Z^⊥∖ X^⊥, then we get Z.f = 0. The same holds for all Z ∈ X^⊥ by density. Now, because X does not vanish, there exists an isotropic vector field Y such that g(X,Y) ≠ 0. From the same relation with Y=Z, we obtain 2(Y.f) g(X,Y) = 0, so Y.f = 0, and because Y ∉ X^⊥, we deduce f̣ = 0, as announced.
§ INESSENTIAL ACTIONS
LetH<(M,g)be a subgroup, which could be non-closed a priori. Its conformal distortion with respect togis an_>0-valued cocycleλ: H ×M →_>0defined by[ϕ^* g]_x= λ(ϕ,x) g_xfor allϕ∈H,x ∈M. TheH-action is inessential if and only if there exists a functionϕ: M →_>0such thatλ(h,x) = ϕ(h.x)ϕ(x)^-1, the invariant metric beingg/ϕ.
§.§ Inessential nilpotent groups
We consider a connected nilpotent Lie groupNand a compact Lorentzian manifold(M,g)on whichNacts conformally with discrete kernel. We stress that the action is just assumed to be an immersionN →(M,[g])which could be non-proper. We notek+1the nilpotence degree of, i.e. such that_kis the last non-zero term in the lower central series of. For alli ≤kletN_idenote the connected Lie sugroup ofNtangent to_i.
We start with an elementary proposition for the abelian case (k=1), which is (1)(b) in Theorem <ref>. We stress that no compactness is needed.
Assume that N ≃^m is abelian. If all N-orbits have dimension greater than 1, then N is inessential.
Let λ : N × M →_>0 be the conformal distortion of the N-action with respect to g. Pick (X_1,…,X_m) a basis of and consider the function φ defined on M by
φ(x) = ( ∑_1≤ i,j ≤ m g_x(X_i,X_j)^2 )^1/2.
By assumption, φ does not vanish. So, it is a smooth positive function which satisfies φ(g.x) = λ(g,x) φ(x) for every g ∈ N because N is abelian, proving that N is inessential.
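In more detail, for h ∈ N we have h_*X_i = X_i because N is abelian, hence g_h.x(X_i,X_j) = g_h.x(h_*X_i,h_*X_j) = [h^*g]_x(X_i,X_j) = λ(h,x) g_x(X_i,X_j). Since λ(h,x) > 0, this yields φ(h.x) = λ(h,x) φ(x), and therefore
[h^*(g/φ)]_x = λ(h,x) g_x / φ(h.x) = g_x / φ(x),
so the conformal metric g/φ is indeed N-invariant.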
This observation generalizes immediately to pseudo-Riemannian signature (p,q), assuming that orbits have dimension greater than min(p,q).
Proposition <ref> can be compared to a well known argument (see for instance <cit.>, Theorem 2.4). For X a conformal vector field of a pseudo-Riemannian manifold (M,g), if g(X,X) does not vanish, then the renormalized metric g/|g(X,X)| is preserved by X.
We now prove (1)(a) in Theorem <ref>. It follows that a conformal Lorentzian action of an abelian Lie group of rank at least2is always inessential over an open-dense subset.
Let N be an abelian Lie group acting conformally on a Lorentzian manifold (M,g). Then, N acts locally freely over an open-dense subset of M.
The approach follows the main argument <cit.>, which was adapted in <cit.>, §2.2, to the conformal setting, but formulated in the context of a generalization of D'Ambra's theorem, so non-valid in general.
The map x ∈ M ↦ N.x ∈𝐍 being lower semi-continuous, there exists an open-dense subset Ω of M on which it is locally constant. Let x ∈Ω and suppose to the contrary N.x < N. Let {ϕ^t}⊂ N_x be a one-parameter subgroup fixing x. Since the N-orbits foliate trivially a neighborhood of x, and N being abelian, it follows that d_x ϕ^t acts trivially on T_x(N.x) and on T_xM / T_x(N.x). By Lemma 2.4 of <cit.>, we obtain _̣x ϕ^t = 𝕀.
Thus, the flow ϕ^t has a singularity of order 2 at x. The proof of <cit.> shows that there are points y arbitrarily close to x such that ϕ^t(y) → x. This contradicts the fact that a neighborhood of x is trivially foliated by the N-orbits, since, replacing y by ϕ^T(y) for T large enough, y would have to belong to N.x, the latter being fixed pointwise by ϕ^t.
For the non-abelian case, we prove the following proposition (case (2) of Theorem <ref>).
Assume that N is non-abelian. Then, N is inessential if and only if N_k acts locally freely everywhere.
The fact that N is inessential implies that N_k acts locally freely is already known (for instance <cit.>, Corollary 6.3). We point out that it follows independently of Lemma <ref> bellow: if the action is isometric, then no non-trivial one-parameter subgroup has a trivial 1-jet at a point, hence the compact subset K in question is empty.
We start with the following intermediate result.
Let 𝔥 be a Lie algebra of conformal vector fields of (M,g), isomorphic to 𝔥𝔢𝔦𝔰(3), and let Z be a non-trivial element in the center. If 𝔥 acts locally freely, then 𝔥(x) ⊥ Z_x everywhere. In particular, Z is everywhere isotropic and any X ∈ 𝔥 ∖ ℝ.Z is space-like everywhere.
Let H' be the closure of the connected subgroup H<(M,[g]) corresponding to . Then, H' is a 2-step nilpotent Lie subgroup of (M,[g]) due to the following.
Let G be a Lie group and N<G an integral subgroup such that is 2-step nilpotent. Then, the closure N' of N in G is a 2-step nilpotent Lie subgroup of G.
N is 2-step nilpotent as an abstract group, as can be easily seen using the Baker-Campbell-Hausdorff formula. Since 𝒵(N) ⊂𝒵(N') and the latter is closed, it follows that the closure of 𝒵(N) is contained in 𝒵(N'). Therefore, the commutator [x,y] of two elements of N' must be central in N', because it is the limit of commutators in N, which are central in N. It implies that N' is itself 2-step nilpotent, as an abstract group. We are left to verify that a Lie group which is 2-step nilpotent has a Lie algebra with the same property.
Let Z be the center of N', and let be its Lie algebra. Fix X,Y ∈'. For any s,t ∈, the BCH formula gives an explicit formula for the element Z(s,t) ∈ such that
e^tXe^sYe^-tXe^-sY = e^s(e^tX)Y e^-sY = e^Z(s,t).
For s,t sufficiently small, Z(s,t) ∈. Consequently, seen as an analytic map in s, its first order term (e^tX)Y-Y belongs to . Taking now derivative at t=0, we obtain [X,Y] ∈, proving that ' is 2-step nilpotent.
Let X ∈ be a non-central element and S = {e^tX}_t∈ < H'. As ' is nilpotent, the discompact radical of S coincides with _'(S).
We apply Corollary <ref> to this pair (H',S): any closed, S-invariant subset of M contains a point x such that _'(S) preserves _x' and _'(S) ⊂('/_x',[q_x]). Since the action of _'(S) on '/ _x' is unimodular, it is in fact isometric with respect to q_x. Let Y ∈ such that Z = [X,Y]. We have b_x(Z,Y) = b_x([X,Y],Y) = -b_x(Y,Z), so b_x(Y,Z) = 0, and similarly we obtain b_x(Z,Z)=0. By density, it follows that Z is isotropic and orthogonal to the projection of in ' / _x'. Consequently, for any X ∈∖(), g_x(X,X)>0.
Recall that this is true for some x in the arbitrary compact S-invariant subset we picked. Now consider {x ∈ M : g_x(X,X) ≤ 0}. It is of course compact, but also S-invariant as S acts conformally. Therefore, this subset is empty and we obtain that X is everywhere space-like as expected. Thus, we have shown that any non-central vector field of is everywhere space-like.
As recalled in Remark <ref>, X is inessential, so that the conformal distortion of its flow has range in a compact sub-interval of _>0. For x ∈ M, we note x_t = e^tX.x. Since Z is central, λ(e^tX,x) g_x(Z,Z)=g_x^t(Z,Z), and similarly for g(X,Z). Consequently either g_x(Z,Z)=0, or the function t ↦ g_x_t(Z,Z) is bounded away from 0, and similarly for g(X,Z). Pick now Y ∈ such that [X,Y]=Z. From the relation λ(e^tX,x) g_x(Y,Z) = g_x_t(Y,Z) - t g_x_t(Z,Z), we deduce g_x(Z,Z)=0, proving that Z is everywhere isotropic and g_x_t(Y,Z)=λ(e^tX,x) g_x(Y,Z) is either constant equal to 0, or bounded away from 0. Consequently, using that λ(e^tX,x)g_x(Y,Y) = g_x_t(Y,Y)-2tg_x_t(Y,Z) for all x ∈ M and t ∈, we get that g_x(Y,Z) = 0. This conclusion is valid for any element Y which does not centralize X, so it must be valid for all elements of by continuity.
Let us observe now that N_k = 1 and that its orbits are 1-dimensional and isotropic. For any X ∈_k-1, if Y ∈ is chosen such that Z:=[X,Y] ≠ 0, then (X,Y,Z) span a Lie algebra to which we can apply the previous lemma. Indeed, Z ∈_k is non-zero so must be central and nowhere vanishing by hypothesis and we can apply Lemma <ref>. In particular, for any X∈_k-1 and Y ∈, the vector field [X,Y] is everywhere isotropic.
If, for all X ∈_k-1 and Y ∈, the vector field [X,Y] is nowhere vanishing and isotropic, then _k = 1.
For any x ∈ M and X ∈_k-1, the linear map Z ∈ [X,] ↦ Z_x ∈ T_xM is injective and its range is totally isotropic. It follows that [X,] ≤ 1 for all X ∈_k-1.
Now, let Z_1 = [X_1,Y_1] and Z_2 = [X_2,Y_2] with X_i ∈_k-1 and Y_i ∈. We distinguish two cases:
* [X_1,Y_2]=[X_2,Y_1]=0. In this case, Z_1 +Z_2 = [X_1+X_2,Y_1+Y_2] ∈ [X_1+X_2,] is everywhere isotropic. The same being true for Z_1 and Z_2, it implies that they are proportional by Lemma <ref>.
* Either [X_1,Y_2] ≠ 0 or [X_2,Y_1] ≠ 0. Let us assume [X_1,Y_2] ≠ 0. Because [X_1,] ≤ 1, there exists λ∈ such that Z_1 = [X_1,Y_1] = λ[X_1,Y_2]. Therefore, Z_1+Z_2=[λ X_1 + X_2,Y_2] ∈ [λ X_1 + X_2,]. Similarly to the first case, we deduce that Z_1 is collinear to Z_2.
We deduce that _k=[_k-1,] is a line as claimed.
By hypothesis, the flow corresponding to _k acts with no singularity on M and has isotropic orbits by Lemma <ref>.
Finally, let us prove that N is inessential. Let X ∈_k-1∖(). Let Y ∈ which does not commute with X and let Z = [X,Y] ∈_k. Then, X,Y,Z span a Lie algebra isomorphic to (3) and satisfying the hypothesis of Lemma <ref>. In particular, g(X,X) > 0 everywhere. Let T ∈ and let us note Z' := [T,X].
We distinguish two cases:
* If Z'=0, then λ(e^tT,x) g_x(X,X)=g_ϕ_T^t(x)(X,X) for all x ∈ M and t∈.
* If Z' ≠ 0, then (X,T,Z') is another (3)-triple to which we can apply Lemma <ref>. Therefore, g(Z',Z') = 0 and g(X,Z') = 0 everywhere. From (ϕ_T^t)_* X_x = X_ϕ_T^t(x) - t Z'_ϕ_T^t(x), we then obtain λ(e^tT,x) g_x(X,X) = g_ϕ_T^t(x)(X,X) as in the first case.
As the function φ = g(X,X) is everywhere positive, we get that the conformal distortion of any one-parameter subgroup in N is a coboundary defined by this map. So, all of N preserves g/φ by connectedness.
§.§ Proof of Corollary <ref>
We prove in this section that Theorem <ref> implies Corollary <ref> by a standard averaging argument. We will use the same notationsG,RandNand we assume thatG/Ris compact and that the nilradicalN is inessential. By Theorem <ref>,Ris inessential.
ConsiderK<Ga compact Levi factor, i.e. a compact semi-simple Lie group such that(M,g)_0 = KRandK ∩Ris finite. Observe thatKpreserves the set ofR-invariant metrics in the conformal class because it normalizesR. Hence, by averaging in this convex set of metrics, we obtain a conformal metric which is bothK-invariant andR-invariant.
§.§ Inessential abelian nilradical
We prove in this section Theorem <ref> in the case where the nilradical N ◃ R is abelian. If N = R, the statement is obvious, so we assume that 𝔯 ≠ 𝔫, i.e. that 𝔯 is not nilpotent. We prove that 𝔯 ≃ 𝔞𝔣𝔣(ℝ) ⊕ ℝ^k and that R is inessential.
Since 𝔫 is abelian and [𝔯,𝔯] ⊂ 𝔫, if ρ : 𝔯 → 𝔤𝔩(𝔫) denotes the representation ρ(X) = ad(X)|_𝔫, then ρ has abelian image.
For all X ∈ 𝔯 ∖ 𝔫, we have dim [X,𝔫] = 1.
We first note that = ρ because .X + is nilpotent if and only if X ∈ by definition of the nilradical. So, if X ∉, there exists Y ∈ such that Z = [X,Y]≠ 0. Since (ϕ_Y^t)_* X_x= X_ϕ_Y^t(x) + tZ_ϕ_Y^t(x) = X_ϕ_Y^t(x) + t(ϕ_Y^t)_*Z_x and [Y,Z]=0, we obtain for all x ∈ M and t ∈
g_x(X,Z) = g_ϕ_Y^t(x)(X,Z) + t g_x(Z,Z),
proving that g_x(Z,Z)=0. Since Z is moreover a Killing vector field of (M,g), Lemma <ref> implies that it does not vanish anywhere. As a consequence, for any x ∈ M, the map Z ∈ [X,] ↦ Z_x ∈ T_xM is injective and has isotropic image, and we get [X,] has dimension 1.
It follows that forX ∈∖, all eigenvalues ofρ(X)^ ∈(^)are real because the existence of a non-real one would imply that of a real plane in the image ofρ(X). It also has exactly one non-zero eigenvalue, and the corresponding eigenspace is1-dimensional, because if all eigenvalues were zero, thenρ(X)would be nilpotent, implyingX ∈. Sinceρ(X)has rank1, it follows that it is diagonalisable over. Using that theρ(X),X ∈, form a commutative family, we deduce that≃() ⊕^kfork ≥0.
Let X,Y be non-zero conformal vector fields of (M,g), with [X,Y] = Y and Y inessential. Then the Lie algebra they span is inessential, and g(X,X) > 0 everywhere.
Using that Y is a Killing vector field of some conformal metric, which we may assume to be g, we obtain that for all x ∈ M, the function t ↦ g_ϕ_Y^t(x)(Y,Y) is constant. Because (e^tY).X = X - tY, we also have g_x(X,Y) = g_ϕ_Y^t(x)(X,Y) + t g_ϕ_Y^t(x)(Y,Y), showing that g(Y,Y) = 0 identically and t ↦ g_ϕ_Y^t(x)(X,Y) is constant. Similarly, g_x(X,X) = g_ϕ_Y^t(x)(X,X)+2t g_ϕ_Y^t(x)(X,Y) also implies g(X,Y)=0. By Lemma <ref>, it follows that Y does not vanish. Therefore, (X_x,Y_x) is 2-dimensional everywhere because if Z ∈(X,Y) vanishes at a point x, then [Z,Y]=α Y for α≠ 0, and then 0 = (ϕ_Y^t)_* Z_x = Z_ϕ_Y^t(x) + α t Y_ϕ_Y^t(x) for all t ∈, contradicting the fact that Z is bounded and Y never vanishes.
Therefore, g(X,X) > 0 everywhere and since g_x(X,X) = g_ϕ_Y^t(x)(X+tY,X+tY) = g_ϕ_Y^t(x)(X,X), we obtain that g/g(X,X) is preserved by both X and Y.
Finally, when≠, any()factor insatisfies the hypothesis of this lemma. If we choose anyX ∈∖, then we haveg(X,X)>0everywhere andg/g(X,X)isR-invariant, proving Theorem <ref> in the case whereNis abelian.
§.§ Inessential non-abelian nilradical
In this section, we treat the case of a non-abelian nilradical for the proof of Theorem <ref>.
Let us assume that the nilradicalN ◃ Rpreserves a metricg. We denote byλ: R ×M →_>0the conformal distortion with respect tog. Let= [,]and letbe the center of. According to Proposition <ref> and Remark <ref>,⊂is a line.
We fixZ ∈a non-zero element.
For all X ∈, g(X,Z)=0 and for X ∈∖(), g(X,X)>0.
Let X ∈. If X ∉, there is Y∈ such that [X,Y]=Z and (X,Y,Z) is an inessential (3)-triple. By Remark <ref>, the corresponding action of (3) is everywhere locally free and g(X,Z) = 0 and g(X,X)>0 everywhere. If X ∈ and if X',Y ∈ are such that [X',Y] = Z, then we also have [X+X',Y]=Z. So, g(X+X',Z) = g(X',Z)=0 and we get g(X,Z)=0.
Note thatandare also ideals of.
We have [,] ∩⊂. In particular, [,] ⊂.
Let X ∈ and X_0 ∈. Assume that X_1 = [X,X_0] ∈. Let x ∈ M. Because X_0 and X_1 commute and N is isometric with respect to g, we have g_ϕ_X_0^t(x)(X_1,X_1) = g_x(X_1,X_1) for all t ∈. Using (e^tX_0)X = X - t X_1 and the corresponding action of the flow of X_0 on X, we get
g_x(X,X) = g_ϕ_X_0^t(x)(X,X) - 2t g_ϕ_X_0^t(x)(X,X_1) + t^2 g_x(X_1,X_1).
It follows that g_x(X_1,X_1) = 0 everywhere. Since we also have g(X_1,Z) = 0 by Lemma <ref>,
it means that (X_1)_x is collinear to Z_x for all x ∈ M. Thus, for some λ∈, X_1 - λ Z is a light-like Killing vector field of (M,g) which vanishes at some point. According to Lemma <ref>, it follows that X_1-λ Z = 0, i.e.X_1 ∈ as claimed.
It is a general fact that[,]is a nilpotent ideal of, so[,] ⊂and/ is abelian. Consider the representation induced by the adjoint representation
ρ : / →(/).
For all X̅∈ /, all the eigenvalues of ρ(X̅)^ are purely imaginary.
Assume to the contrary that 1+iθ is a complex eigenvalue of some ρ(X̅)^. It implies that there exist non-zero vectors X̅_̅0̅ and Y̅_0 in / such that
ρ(X̅) X̅_̅0̅ = X̅_̅0̅ + θY̅_̅0̅
ρ(X̅) Y̅_̅0̅ = Y̅_̅0̅ - θX̅_̅0̅ .
Assume first that θ≠ 0 so that X̅_̅0̅ and Y̅_̅0̅ are linearly independent. Now, for arbitrary X ∈ and X_0,Y_0 ∈ projecting to X̅, X̅_̅0̅ and Y̅_̅0̅ respectively, we have X_1,Y_1 ∈ such that
[X,X_0] = X_0 + θ Y_0 + X_1
[X,Y_0] = Y_0 + θ X_0 + Y_1.
Let us define X_0'= X_0 + 1/1+θ^2X_1 - θ/1+θ^2 Y_1 and Y_0' = Y_0 + θ/1+θ^2 X_1+ 1/1+θ^2 Y_1. Since [,] ⊂ (Lemma <ref>), we obtain :
[X,X_0'] = X_0' + θ Y_0' mod.
[X,Y_0'] = Y_0' - θ X_0' mod. .
Remark that changing the representative of X̅ does not affect these relations. We may assume that the representatives X_0 and Y_0 were initially chosen such that (<ref>) is valid. We get
(e^tX)X_0 = e^t(cos(θ t)X_0 + sin(θ t) Y_0) mod.
and similarly for Y_0. Let us note X_t = cos(θ t)X_0 + sin(θ t) Y_0. Since the projections of X_0 and Y_0 in / are linearly independent, P_x = ((X_0)_x,(Y_0)_x) is a plane on which the metric is everywhere positive-definite (see Remark <ref>).
Hence, we have positive constants α,β such that α≤ g_x(X_t,X_t) ≤β for all t ∈ and x ∈ M. Recall that Z is everywhere orthogonal to any vector field of . Therefore,
λ(ϕ_X^t,x) g_x(X_0,X_0) = e^-2t g_ϕ_X^t(x)(X_t,X_t).
Thus, we get α/β≤ e^2tλ(ϕ_X^t,x) ≤β / α for all x ∈ M and t ∈. We also have the relations λ(ϕ_X^t,x)g_x(X,X) = g_ϕ_X^t(x)(X,X) for all t. So, we must have g_x(X,X)=0, for any x ∈ M.
Since [,] ⊂, we have μ such that [X,Z]=μ Z, therefore (ϕ_Z^t)_* X_x = X_ϕ_Z^t(x) + μ t Z_ϕ_Z^t(x). Using that ϕ_Z^t is isometric with respect to g and g(Z,Z)=0 everywhere, this implies g_x(X,Z) = g_ϕ_Z^t(x)(X,Z) and then
g_x(X,X) = g_ϕ_Z^t(x)(X,X) + 2μ t g_x(X,Z),
so that either μ = 0, or g(X,Z) = 0 everywhere. If μ = 0, then we obtain λ(ϕ_X^t,x) g_x(X,Z) = g_ϕ_X^t(x)(X,Z), which in turn implies g_x(X,Z) = 0 because λ(ϕ_X^t,x) → +∞ as t → -∞. In both cases, g(X,Z) = 0 everywhere.
Now, we recall that the choice of X as a representative of X̅∈/ was arbitrary. In particular, the same conclusions follow for X+Z and we obtain that X_x is everywhere collinear to Z_x. By Lemma <ref>, we obtain that X is proportional to Z. This is the desired contradiction because (Z) acts trivially on /.
Assume now that θ = 0. It implies that there exists X_1 ∈ such that [X,X_0] = X_0 + X_1 and X_0 ∈∖. Since [X,X_1] ∈, we get [X,X_0'] = X_0' mod., where X_0' = X_0+X_1. The same approach as above shows that λ(e^tX,x) is unbounded, so g(X,X) = 0 identically and we derive the same contradiction.
As the algebra/ is abelian, there exists a non-zero vector in(/)^which is common eigenvector to all theρ(X̅)^,X ∈. By the previous lemma, we obtain a linear functionθ: →and non-zero vectorsX̅_0,Y̅_0 ∈/ such that
ρ(X̅) X̅_0 = θ(X) Y̅_0
ρ(X̅) Y̅_0 = -θ(X) X̅_0.
For any representatives X_0,Y_0 ∈, and for all x ∈ M, the metric g_x is positive definite in restriction to the plane spanned by X_0,Y_0 by Lemma <ref>.
Assume first thatθ≠0and let us fixX ∈such thatθ(X) = 1. As in the proof of Lemma <ref>, we can choose representativesX_0,Y_0 ∈such that
[X,X_0] = Y_0 mod.
[X,Y_0] = -X_0 mod. .
Thus,(e^tX)acts as a rotationR^ton(X_0,Y_0)modulo. Letφ: M →_>0be defined by
φ(x) = ∫_S g_x(U,U) Ụ
whereS ⊂(X_0,Y_0)is the unit circle of the Euclidean norm for which(X_0,Y_0)form an orthonormal basis andỤthe Lebesgue measure onS. For allt ∈,x ∈M, andU ∈(X_0,Y_0), we have
[(ϕ_X^t)^*g]_x(U,U) = g_ϕ_X^t(x)(R^t(U),R^t(U)))
sinceZis everywhere isotropic and orthogonal to every vector field of. So, we obtain
λ(ϕ_X^t,x) φ(x) = φ(ϕ_X^t(x)).
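In more detail, the last identity follows by integrating the previous relation over S and using the rotation invariance of the Lebesgue measure on S:
λ(ϕ_X^t,x) φ(x) = ∫_S g_ϕ_X^t(x)(R^t(U),R^t(U)) dU = ∫_S g_ϕ_X^t(x)(U,U) dU = φ(ϕ_X^t(x)),
the middle equality coming from the change of variables U ↦ R^-t(U).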
By Lemma <ref>, anyX' ∈θ,(e^tX')acts trivially on(X_0,Y_0)modulo. As a consequence,[(ϕ_X'^t)^*g]_x(U,U) = g_ϕ_X'^t(x)(U,U)for everyU ∈(X_0,Y_0), implyingλ(ϕ_X'^t,x) φ(x) = φ(ϕ_X'^t(x)). Therefore, for anyX ∈, the conformal distortion ofϕ_X^twith respect togis given byφ(ϕ_X^t(x)) / φ(x), proving thatg / φis preserved by all ofR.
Finally, ifθ= 0the arguments of the last paragraph show that ifX_0 ∈∖projects to an element in the kernel of allρ(X̅),X ∈, thenψ(x) := g_x(X_0,X_0)is everywhere positive andg/ψis alsoR-invariant.
§.§ Remark on the case of a non-compact Levi factor
We conclude this section with the proof of Proposition <ref>. Assume thatNis inessential. Letbe a Levi factor ofand assume that its non-compact part'is non-trivial. Consider the restriction of the adjoint representation of'to. Let⊂'be an-split Cartan subalgebra. If the representation is non-trivial, considering a non-trivial weight, there existsX ∈andY ∈such that[X,Y]=Y. Hence,g(X,X)>0everywhere by Lemma <ref>. Applying Lemma 2.3 of <cit.> to couples(X,Y_α)whereY_αis any element in a weight space of(), we obtain a basis ofwhose elements preserve the metricg/g(X,X). SoGis inessential.
Therefore, whenGis essential,'centralizes. IfNwas non-abelian, then anyX ∈∖would satisfyg(X,X) > 0and because'centralizesX, it would be inessential, implying thatGis inessential by <cit.> (Proof of Corollary 1.2).
§ ESSENTIAL ACTIONS
From now on, we will consider compact Lorentzian manifolds admitting essential actions of Lie groups. We still note(M,g)a compact Lorentzian manifold of dimensionn ≥3and we assume that a solvable Lie groupRacts conformally and essentially onM. The aim of this section is to provide certain sufficient conditions for the existence of elements or one-parameter subgroups ofRadmitting a singularity with a non-linear unipotent holonomy (mainly, the element or the corresponding flow will have trivial1-jet at this point). We will then conclude that there exists a non-empty conformally flat open subset via anterior results.
§.§ Preliminaries: associated Cartan geometry modeled on ^1,n-1
Recall that the (projective model of the) Lorentzian Einstein Universe is the smooth projective quadric {Q=0} ⊂ ℝP^n+1 where Q is a quadratic form of signature (2,n) on ℝ^n+2. It inherits a natural conformal Lorentzian structure such that Conf(Ein^1,n-1) ≃ PO(2,n). From now on, we will denote PO(2,n) by G.
§.§.§ Equivalence principle
We introduce now a central tool in conformal geometry. LetP<Gdenote the stabilizer of a null-linex_0in^2,n, letdenote its Lie algebra. We denote byP^+ ≃^nthe nilradical ofPand byG_0 ≃_>0 ×O(1,n-1)a section ofG/P^+, so thatP ≃G_0 ⋉P^+.
The following theorem says that a general Lorentzian conformal structure(M,[g])can be interpreted as a curved version of^1,n-1.
[Equivalence principle for conformal Lorentzian structures]
Let (M^n,[g]) be a conformal Lorentzian structure in dimension n ≥ 3. Then, there exists a P-principal fibration π : M̂→ M and a 1-form ω∈Ω^1(M̂,) verifying
* ∀∈M̂, ω_ : T_M̂→ is a linear isomorphism ;
* ∀ A ∈, ω(A^*) ≡ A, where A^* stands for the fundamental field associated to A ;
* ∀∈M̂, ∀ p ∈ P, (R_p)^* ω = (p^-1) ω, with R_p standing for the right P-action on M̂.
Additionally, a technical normalization condition is required, making this correspondence one-to-one. We then have the following lifting property:
Let f ∈(M) be a diffeomorphism. Then, f is conformal with respect to [g] if and only if there exists a bundle automorphism f̂ such that π∘f̂ = f ∘π and (f̂)^* ω = ω. In this situation, f̂ is uniquely determined, and called the lift of f.
Let X be a vector field on M. Then, X is conformal if and only if there exists a vector field X̂ defined on M̂ such that (R_p)^* X̂ = X̂ for all p ∈ P, π_* X̂ = X, and ℒ_X̂ω = 0. In this situation, X̂ is uniquely determined, and called the lift of X.
When there is no possible confusion, we will however use the same notation for the lift of X. For instance, we will frequently denote by ω_(X) the evaluation at a point ∈M̂ of the Cartan connection on the lift of a conformal vector field X.
The curvature of the associated Cartan geometry is the horizontal2-formΩ∈Ω^2(M̂,)defined byΩ= ω̣+ 1/2[ω,ω]. It is classically known (<cit.> for instance) that the curvature identically vanishes if and only if(M,[g])is locally conformally equivalent to^1,n-1.
Let X,Y be two conformal vector fields. Then, for all ∈M̂, we have
ω_([X̂,Ŷ]) = - [ω_(X̂),ω_(Ŷ)] + Ω_(X̂,Ŷ).
In particular, if X̂_ or Ŷ_ is vertical, then ω_([X̂,Ŷ]) = - [ω_(X̂),ω_(Ŷ)].
Note that we recover here that a Lie algebra of conformal vector fields of a conformally flat Lorentzian manifold can be embedded into=(2,n). This identity even implies that if the Cartan curvature vanishes at a single point, then we obtain an embedding.
§.§.§ Restricted root-space decomposition of (2,n)
Recall that the restricted root-system of 𝔤 = 𝔰𝔬(2,n) is B_2, i.e. isomorphic to {±α, ±β, ±α±β} with α ⊥ β and |α|=|β|. We introduce linear coordinates on ℝ^2,n which we fix once and for all. Let (e_0,…,e_n+1) be a basis of ℝ^2,n in which the quadratic form Q reads 2u_0u_n+1+2u_1u_n+u_2^2+⋯+u_n-1^2 and such that P < G is the stabilizer of x_0 := [e_0].
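For instance, the change of variables u_0 = (s_1+t_1)/√2, u_n+1 = (s_1-t_1)/√2, u_1 = (s_2+t_2)/√2, u_n = (s_2-t_2)/√2 (an elementary check, recorded here for convenience) turns Q into
Q = s_1^2 + s_2^2 + u_2^2 + ⋯ + u_n-1^2 - t_1^2 - t_2^2,
which exhibits n coefficients of one sign and 2 of the other, i.e. the signature (2,n) of Q.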
We denote by 𝔞 the ℝ-split Cartan subalgebra of diagonal matrices
𝔞 = { A = diag(λ, μ, 0_n-2, -μ, -λ), λ,μ ∈ ℝ },
where 0_n-2 stands for the zero block of size n-2.
We denote by α and β the restricted roots α(A) = λ and β(A) = μ. The compact part 𝔪 of the centralizer of 𝔞 is identified as
𝔪 = { diag(0, 0, X, 0, 0), X ∈ 𝔰𝔬(n-2) },
and the restricted root-spaces of 𝔤 are located in the following block positions (diagonal blocks omitted):
[    ·        𝔤_α-β     𝔤_α      𝔤_α+β     0      ]
[  𝔤_β-α        ·       𝔤_β        0      𝔤_α+β    ]
[  𝔤_-α       𝔤_-β       ·        𝔤_β      𝔤_α     ]
[  𝔤_-α-β       0       𝔤_-β       ·      𝔤_α-β    ]
[    0       𝔤_-α-β     𝔤_-α     𝔤_β-α      ·      ].
The Lie algebra of P is 𝔭 = 𝔞 ⊕ 𝔪 ⊕ 𝔤_-β ⊕ 𝔤_β ⊕ 𝔤_α-β ⊕ 𝔤_α ⊕ 𝔤_α+β and the nilradical P^+ of P has Lie algebra 𝔫^+ = 𝔤_α-β ⊕ 𝔤_α ⊕ 𝔤_α+β.
The element A_α ∈ 𝔞 such that α(A_α) = 1 and β(A_α) = 0 defines a grading 𝔤 = 𝔤_-1 ⊕ 𝔤_0 ⊕ 𝔤_1, where 𝔤_k = {X ∈ 𝔤 | [A_α,X] = kX} for k = -1,0,1. We have 𝔭 = 𝔤_0 ⋉ 𝔤_1, 𝔤_1 = 𝔫^+, 𝔤_-1 = 𝔤_-α+β ⊕ 𝔤_-α ⊕ 𝔤_-α-β and 𝔤_0 ≃ ℝ ⊕ 𝔰𝔬(1,n-1), where the ℝ-factor is ℝ.A_α.
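Concretely, in the coordinates above one may take A_α = diag(1, 0, 0_n-2, 0, -1), i.e. λ = 1 and μ = 0. Since (α-β)(A_α) = α(A_α) = (α+β)(A_α) = 1 while β(A_α) = 0, the grading is read off from the table of root-spaces:
𝔤_1 = 𝔤_α-β ⊕ 𝔤_α ⊕ 𝔤_α+β = 𝔫^+,  𝔤_0 = 𝔞 ⊕ 𝔪 ⊕ 𝔤_-β ⊕ 𝔤_β,  𝔤_-1 = 𝔤_-α+β ⊕ 𝔤_-α ⊕ 𝔤_-α-β.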
§.§.§ Stereographic projections and holonomy of conformal maps and vector fields with a singularity
Let X be a conformal vector field of (M,[g]), let X̂ be its lift to the Cartan bundle. Assume that X has a singularity x ∈ M. For any ∈π^-1(x), the holonomy of X at is defined as the element ω_(X) ∈.
If f ∈(M,[g]) fixes a point x ∈ M, and if ∈π^-1(x), then the holonomy of f at , denoted ^(f), is the unique element p ∈ P such that f̂() = .p^-1.
The correspondencef ∈(M,[g])_x ↦^(f) ∈Pis then an injective Lie group homomorphism. Of course, whenX(x) = 0, the holonomy ofϕ_X^tatise^tX_h.
Sinceωdefines a global framing onM̂,Xis completely determined by the evaluation ofX̂at any point∈M̂. Thus, when it admits a singularity,Xis determined by the data of its holonomy. The ideal situation is when a neighborhood ofxis conformally flat:Xis then locally conjugate to the conformal vector fieldX_hofG/Pnear its singularityeP. However, relating directly the dynamics ofXto its holonomy is a difficult task in general.
The Cartan connection induces a linear identificationφ_ : T_xM →/, such that for anyf ∈(M,[g])with a fixed pointxand ifp = ^(f)for∈π^-1(x), we have
φ_∘_̣x f = (p) ∘φ_
whereis the representation ofPon/induced by the adjoint representation. So,_̣x fis conjugate to(g_0)|_-1, whereg_0is theG_0component of^(f). In particular,_̣x f = 𝕀if and only if^(f) ∈P^+, and similarly for conformal vector fields.
We describe the dynamics of the holonomies ^(f) and X^h via stereographic projections. Let (e_0,…,e_n+1) be a basis of ℝ^2,n in which the quadratic form Q reads 2u_0u_n+1+2u_1u_n+u_2^2+⋯+u_n-1^2 and such that P < G is the stabilizer of x_0 := [e_0]. We denote by s_x_0 : ℝ^1,n-1 → M_x_0 the stereographic projection given by
s_x_0(v_1,…,v_n) = [ -⟨ v,v ⟩/2 : v_1 : ⋯ : v_n : 1 ],
where ⟨ v,v ⟩ = 2v_1v_n + v_2^2 + ⋯ + v_n-1^2. It is a conformal diffeomorphism between ℝ^1,n-1 and an open-dense subset of Ein^1,n-1 called the Minkowski patch M_x_0 associated to x_0, whose complement is the light-cone based at that point (see for instance <cit.> §2.1.2 or <cit.> §2.1). The parabolic subgroup P ≃ CO(1,n-1) ⋉ ℝ^n preserves M_x_0 and s_x_0 conjugates its action to the affine action of CO(1,n-1) ⋉ ℝ^n on ℝ^1,n-1.
By Liouville Theorem, every conformal diffeomorphism from^1,n-1to an open subset of^1,n-1is of the formg.s_x_0, for someg ∈G. Such maps are called stereographic projections, and the pole of projection ofg.s_x_0isg.x_0. Forx ∈^1,n-1, ifG_xdenotes the stabilizer ofx, then the nilradical_x^+of_xis characterized as the subspace of conformal vector fieldsXof^1,n-1fixingxand such thats^*Xis translation of^1,n-1for some (equivalently any) stereographic projectionswith polex.
For any x ∈^1,n-1, an element X ∈ is called a translation of the Minkowski patch M_x if X ∈_x^+, or equivalently s^*X is a translation of ^1,n-1 for any stereographic projection s with pole x. It is said to be a space-like/time-like/light-like translation if s^*X is a translation of ^1,n-1 with the same property.
§.§.§ Local linearization, non-linear unipotent point-wise holonomy
We identifyPas the subgroup(G_0)|_^+ ⋉^+of the affine group of^+, and its Lie algebra correspondingly. A one-parameter subgroup{e^tX} ⊂Pis conjugate to a one-parameter subgroup ofG_0if and only if its affine action on^+has a fixed point.
Let f ∈(M,[g]) fixing x and let ∈π^-1(x). Then, f is locally linearizable near x if and only if ^(f) is conjugate to an element of G_0.
We have the analogue for the linearization of a conformal vector field X. Note that writing X^h = (X_0, X_1) ∈_0 ⋉^+, X^h is conjugate to an element of _0 if there exists Y_1 ∈^+ such that X_1 = [X_0,Y_1], and in this case (e^Y_1) X^h = X^h - [X_0,Y_1] = X_0.
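The identity Ad(e^Y_1)X^h = X^h - [X_0,Y_1] used here is a one-step computation with e^ad, spelled out for convenience: since 𝔤_1 = 𝔫^+ is abelian,
Ad(e^Y_1)(X_0+X_1) = X_0 + X_1 + [Y_1,X_0] + [Y_1,X_1] + (1/2)[Y_1,[Y_1,X_0+X_1]] + ⋯ = X_0 + X_1 - [X_0,Y_1],
because [Y_1,X_1] and the iterated brackets [Y_1,[Y_1,·]] all lie in [𝔤_1,𝔤_1] = 0.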
We will use the following results for essential singularities of conformal vector fields:
Let X be a conformal vector field of a Lorentzian manifold admitting a singularity x_0. Let X_h ∈ be its holonomy at some given point in the fiber of x_0. If e^tX_h is a unipotent one-parameter subgroup of P, and is not conjugate to a one-parameter subgroup of G_0, then an open subset of M containing x_0 in its closure is conformally flat.
This is proved in <cit.>, §5.3. Although the authors give a statement (Theorem 1.2) valid only in analytic regularity, they use this analyticity assumption to deduce conformal flatness of the whole manifold from that of an open subset, and importantly to reduce their problem to the case of a vector field with unipotent holonomy. The rest of their proof is then valid for smooth metrics, yielding a proof of this proposition.
A special situation where this proposition applies is when the_0-part ofX_his trivial, i.e. if the flow ofXhas trivial1-jet atx. This case is in fact covered by Theorem 4.1 of <cit.> which treats every metric signature. Moreover, the proof of this result can be adapted to the following setting (a stronger result is announced by the authors at the moment, see for instance Theorem 9.1 of <cit.> for the3-dimensional case).
Let (M,g) be a Lorentzian manifold and f ∈(M,[g]) be such that f(x)=x and ^(f) ∈exp(^+) for some point x and some (equivalently all) in the fiber over x. Then, an open subset containing x in its closure is conformally flat.
§.§ -triples in (2,n)
We prove here an algebraic property which will be used later. Recall that we noteP < (2,n)the stabilizer of the isotropic linex_0 = [1:0:⋯:0]in the fixed coordinates of^2,nthat we chose in Section <ref>.
Let (X,Y,Z) be an 𝔥𝔢𝔦𝔰-triple in 𝔰𝔬(2,n) such that Z ∈ 𝔭. Then, Z is a light-like translation of the Minkowski patch M_x_0.
Otherwise stated, there existsp ∈Psuch that(p)Z ∈_α+β.
We use the general Proposition 3.3 of <cit.> which guarantees the conclusion here, modulo conjugacy in G. It is enough to show that the conjugacy can be realized by an element of P, since the cone of null-translations of P is (P)-invariant.
Let g ∈(2,n) such that (g)X ∈_α+β. Recall that we identify the Lie algebra (2,n) with that of conformal vector fields of ^1,n-1.
Let X ∈(2,n). If X is a light-like translation of some Minkowski patch M_x, then its fixed points form a light-like geodesic Δ⊂^1,n-1, i.e. the projectivization of a null plane in ^2,n.
For any y ∈Δ, X is still a light-like translation of the Minkowski patch M_y.
For the first point, we just have to see that it is true for X = X_α+β∈^+ by homogeneity. It is then a straightforward verification.
For the second point, we may assume that x = x_0 = [1:0:⋯:0] and y=[0:1:0:⋯:0] because x and y belong to a common light-like geodesic. A stereographic projection at y is then given by a similar formula as for the projection at x_0, and an explicit computation gives the result.
Let x _0 ∈^1,n-1 be the point such that P=G_x_0 and let x=g.x_0. Let s_0=s_x_0 be the stereographic projection introduced above and let s = g s_0. Then, (s_0)^* Z = s^*((g^-1)^*Z) = s^*((g)Z). Let Z' = (g)Z. Then Z' fixes x because Z fixes x_0, and since Z' also fixes x_0 and is a light-like translation of M_x_0, the previous lemma implies that Z' is also a light-like translation of M_x. So, (s_0)^*Z is a light-like translation, as claimed.
§.§ Light-like singularities in _k
We prove now point (3) in Theorem <ref>.
LetNbe a non-abelian nilpotent Lie group of nilpotence degreek+1. Assume thatNacts locally faithfully and conformally essentially on a compact Lorentzian manifold(M,g).
There exist X ∈_k-1 and Y ∈ such that the vector field [X,Y] is non-zero and has a singularity.
If not, then we can apply Lemma <ref> to any (3)-triple of the form (X,Y,[X,Y]) with X ∈_k-1 and Y ∈ non-commuting and obtain that [X,Y] is everywhere isotropic. We can then apply Lemma <ref> to conclude that _k=1. But this contradicts the essentiality of N by Proposition <ref> since _k = .[X,Y] would act locally freely.
Letbe the Lie algebra of conformal vector fields of(M,g)spanned by(X,Y,Z), withZ := [X,Y]given by the previous lemma. LetH<(M,g)denote the associated integral subgroup. Note that the compact subsetK={x ∈M : Z_x=0}isH-invariant.
There exists x ∈ K at which the holonomy of Z is a null-translation in ^+.
We see H as an integral subgroup of (M,[g])_0 and we note H' the closure of H in (M,[g])_0. As seen in Lemma <ref>, H' is a 2-step nilpotent Lie subgroup of (M,[g]). Let X ∈ be a non-central element, Y ∈ such that [X,Y]=Z and let S = {e^tX}_t∈. We apply Theorem <ref> to the pair (S,H'). Let μ be a finite S-invariant measure supported in K. As H' is nilpotent, S̅^d = _'(S), and then for μ-almost every x ∈ M and for all ∈π^-1(x), we obtain a one parameter subgroup {e^tX'}⊂ P such that the map ι_ : ' → conjugates the action of (e^tX) on ' and that of (e^tX') on ι_(').
In particular, we obtain that [X',ι_(Y)]=ι_(Z), [X',ι_(Z)]=0. Because x ∈ K, the lift of Z to M̂ is tangent to the fiber of M̂, meaning that ι_(Z) ∈. Applying Lemma <ref>, we get [ι_(Y),ι_(Z)]=0. Hence, (X',ι_(Y),ι_(Z)) generates a copy of (3) in (2,n) whose center is contained in . By Proposition <ref>, we obtain that ι_(Z) is a null-translation in ^+ as announced.
Therefore, by Proposition <ref>, an open subset ofMis conformally flat. In particular, we have an embedding↪(2,n). Applying Proposition 3.3 of <cit.>, it follows thatk ≤3and_kis a line, and this concludes the proof of Theorem <ref>.
§.§ Semi-simplicity of ρ and reduction to a semi-direct product.
Let R < Conf(M,g) be a connected solvable Lie subgroup. We assume that R is not nilpotent and has abelian nilradical N. By Propositions <ref> and <ref>, N has dimension at most n and there exists x ∈ M with dim N.x ≤ 1, and if dim N.x = 1 then the orbit is light-like.
We note ρ : X ∈ 𝔯 ↦ ad(X)|_𝔫 ∈ 𝔤𝔩(𝔫). By assumption, ρ is trivial in restriction to 𝔫 and has abelian image since [𝔯,𝔯] ⊂ 𝔫.
Let X ∈ 𝔯. If ρ(X) is not a semi-simple element of 𝔤𝔩(𝔫), then a vector field of 𝔫 admits a singularity at which its holonomy is a light-like translation. In particular, some non-empty open subset is conformally flat.
Let = .X ⊕. We first observe that the associated connected subgroup H < R is closed. To see it, let H' = {e^tXe^Y, t ∈, Y ∈}. It is a connected subgroup, contained in H. If we prove that H' is closed, we will have H'=H. Let then g = lim e^t_n Xe^Y_n be a limit of elements of H'. In particular, we have a bounded sequence g_n ∈ N_R() such that e^t_n X = g_n e^-Y_n. Since is abelian, we get that (e^t_n X) |_ = (g_n)|_ must be bounded too. Since, the Jordan decomposition of ρ(X) has a non-zero nilpotent component and (e^t_n X)|_ = e^t_n ρ(X), (t_n) must be bounded. So, up to an extraction e^Y_n converges, and the limit is in N = {e^Y, Y ∈}, the latter being closed because is the (abelian) nilradical of
Let (X)|_ = X_ss + X_u be the Jordan decomposition in (). By hypothesis, X_u ≠ 0. We apply now Theorem <ref> to the pair (H,S) where S = {e^tX}. Since the Zariski closure of _(S) contains {e^tX_u} which is Zariski closed, then so does the discompact radical S̅^d (<cit.>, Prop. 1.4).
Since K: = {x ∈ M | (N.x) ≤ 1} is closed and S-invariant, considering an S-invariant measure supported in K, we deduce that there exists x ∈ K such that for all ∈π^-1(x), there exists X' ∈ such that [X',ι_(Y)] = ι_(X_u(Y)) for all Y ∈. Choose Y ∈ such that Z:=X_u(Y) ≠ 0 and X_u^2(Y)=0. If we note Y'=ω_(Y) and Z' = ω_(Z), we then have [X',Y']=Z', [X',Z']=0. Since N.x ≤ 1, (Y',Z') ∩≠ 0, so we must have Z' ∈ because if α Y' + β Z' ∈, then [X',α Y' + β Z'] = α Z' ∈. By Lemma <ref>, we get [Y',Z']= ι_([Y,Z])=0, so (X',Y',Z') is an -triple of (2,n) with Z' ∈. By Proposition <ref>, we get that the holonomy of Z at x is a light-like translation.
Assume that every element of ρ() is semi-simple. Then, the short exact sequence →→ / is split modulo the center, i.e. there exists ⊂ such that = ⊕ and [,] ⊂(). In particular, if [,] ≠ 0, then contains an -triple. Moreover, every -triple of is essential.
By our hypothesis on semi-simplicity, and the fact that elements in commute pairwise, we have a common (complex) diagonalisation basis of of the form
(X_1,Y_1,…,X_r,Y_r,T_1,…,T_s,Z_1,…,Z_t)
and linear functions λ_i,μ_i and ν_j in ^* such that μ_i ≠ 0 and ν_j ≠ 0 and for all X ∈
[X,X_i] = λ_i(X) X_i + μ_i(X)Y_i
[X,Y_i] = -μ_i(X)X_i + λ_i(X) Y_i
[X,T_j] = ν_j(X)T_j
[X,Z_k] = 0
Let d = / and let us choose X^1,…,X^d ∉⋃_i μ_i ∪⋃_j ν_j which project to a basis of /. For all l,m, we have
[X^l,X^m] = ∑_i (a_lm^iX_i + b_lm^iY_i )+ ∑_j c_lm^j T_j + ∑_k d_lm^kZ_k.
Now, for X̃^l=X^l + ∑_i (α_l^i X_i + β_l^iY_i) + ∑_j γ_l^jT_j we obtain
[X̃^l,X̃^m] = ∑_i(a_lm^i + λ_i(X^l)α_m^i - λ_i(X^m)α_l^i -μ_i(X^l)β_m^i + μ_i(X^m)β_l^i)X_i
+ ∑_i (b_lm^i + λ_i(X^l)β_m^i - λ_i(X^m) β_l^i + μ_i(X^l) α_m^i - μ_i(X^m)α_l^i)Y_i
+ ∑_j (c_lm^j + ν_j(X^l)γ_m^j - ν_j(X^m) γ_l^j) T_j + ∑_k d_lm^k Z_k.
Let us see that we can adjust the coefficients α_l^i,β_l^i,γ_l^i such that [X̃^l,X̃^m]∈(Z_1,…,Z_t). For a fixed index i, we impose for all l,m the equations
a_lm^i + λ_i(X^l)α_m^i - λ_i(X^m)α_l^i -μ_i(X^l)β_m^i + μ_i(X^m)β_l^i = 0
b_lm^i + λ_i(X^l)β_m^i - λ_i(X^m) β_l^i + μ_i(X^l) α_m^i - μ_i(X^m)α_l^i = 0,
which we see as the real part and imaginary part of a same equation. Let us remove temporarily the index i. Our relations read for all l,m
A_lm = ζ(X^m) Z_l - ζ(X^l)Z_m,
where ζ = λ+iμ, A_mn = a_mn+ib_mn, and Z_l = α_l + i β_l. Note that A_mn = - A_nm. The Jacobi relation between X^l,X^m,X^n then reads
A_mnζ(X^l) + A_nlζ(X^m) + A_lmζ(X^n) = 0.
Therefore, equation (<ref>) for l,m ≥ 2 follows from the equation (<ref>) for (1,l), (1,m) and the Jacobi relation between X^1,X^l,X^m. So, the solutions (Z_1,…,Z_d) of our systems are the (complex) multiples of
(ζ(X^1), A_2,1/ζ(X^1) + ζ(X^2), …,A_d,1/ζ(X^1) + ζ(X^d)).
Consequently, if we choose
α_m^i = λ_i(X^m) + λ_i(X^1)a_m,1^i + μ_i(X^1)b_m,1^i/λ_i(X^1)^2+μ_i(X^1)^2
β_m^i = μ_i(X^m) + λ_i(X^1)b_m,1 - μ_i(X^1)a_m,1^i/λ_i(X^1)^2+μ_i(X^1)^2
γ_m^i = ν_i(X^m) + c_m,1^i/ν_i(X^1),
then, we get that [X̃^l,X̃^m] ∈() for all l,m.
Thus, there exists a section of → / such that [,] ⊂(). Assume that () ≠ 0 and let X,Y ∈ be such that [X,Y] =:Z ≠ 0. Then (X,Y,Z) is an -triple.
We prove now that any -triple of is essential. Let us assume by contradiction that some -triple (X,Y,Z) preserves a metric g in the conformal class. We know that this triple of vector fields acts everywhere locally freely, with degenerate orbits, and that the orbits of Z give the kernel (Remark <ref>). In particular, g_x(U,U)>0 everywhere, for every U ∈(X,Y,Z) ∖.Z.
For 1 ≤ i ≤ r, for all V ∈(X_i,Y_i), we have (ϕ_U^t)_* V_x = e^-λ_i(U)t [R^-μ_i(U)tV]_ϕ_U^t(x) where R^θV stands for the standard rotation in the plane (X_i,Y_i). Thus,
g_x(Z,V) = e^-λ_i(U)t g_ϕ_U^t(x)(Z,R^-μ_i(U)tV)
g_x(V,V) = e^-2λ_i(U)t g_ϕ_U^t(x)(R^-μ_i(U)tV,R^-μ_i(U)tV).
Therefore, if λ_i(U) ≠ 0, then for all V ∈(X_i,Y_i), g(V,V) = g(Z,V)= 0. By Lemma <ref>, it implies that V is a multiple of Z, a contradiction. So λ_i(U) = 0 for all U ∈(X,Y). Pick now U_0 ∈(X,Y) such that μ_i(U_0) = 0. Then [U_0,V]=0 for all V ∈(X_i,Y_i). So, every V ∈(X_i,Y_i) is a Killing vector field of g' := g/g(U_0,U_0). Now for any U, we have (ϕ_V^t)_* U_x = U_ϕ_V^t(x) + t[U,V]_ϕ_V^t(x)
and using that ϕ_V^t is isometric with respect to g', we deduce similarly as above that [U,V] is everywhere light-like and orthogonal to Z. It follows that μ_i(X)=μ_i(Y) =0 because if for instance μ_i(X) ≠ 0, then Lemma <ref> would imply that Y_i = 1/μ_i(X) [X,X_i] and X_i = - 1/μ_i(X)[X,Y_i] are collinear to Z. Thus, for all 1 ≤ i ≤ r, λ_i and μ_i vanish on (X,Y).
For 1 ≤ j ≤ s, from (ϕ_X^t)_* (T_j)_x = e^-ν_j(X)t(T_j)_ϕ_X^t(x), we deduce that ν_j(X) = 0, because if ν_j(X) ≠ 0, then we could prove similarly as above that T_j is a multiple of Z, a contradiction because [X,T_j] = ν_j(X)T_j ≠ 0. Symmetrically, ν_j(Y) = 0 for all j. Finally, both X and Y centralize all of , which implies that they belong to , which is the desired contradiction because [X,Y] ≠ 0.
§.§ Complex eigenvalues of ρ(X)
We still assume N abelian and denote by ρ : → () the representation defined by ρ(X) = (X)|_. Recall that G denotes (2,n), P < G the parabolic subgroup introduced in Section <ref>.
Let X ∈. If ρ(X)^ has two eigenvalues with distinct, non-zero real parts, then a vector field in has a singularity of order 2.
Let λ_1+iμ_1 and λ_2 + iμ_2, with λ_1 ≠λ_2 and λ_1,λ_2 ∈∖{0} be two such eigenvalues. Let us choose X_1,Y_1,X_2,Y_2 ∈ all non-zero such that
ρ(X) X_1 = λ_1 X_1 + μ_1 Y_1
ρ(X)Y_1 = -μ_1 X_1 + λ_1 Y_1
and ρ(X)X_2 = λ_2 X_2 + μ_2 Y_2
ρ(X)Y_2 = -μ_2 X_2 + λ_2 Y_2.
If μ_k=0, we choose Y_k=X_k. Let be the Lie algebra spanned by X, X_1, X_2, Y_1, Y_2, and let S denote the integral subgroup of G associated to .
The Zariski closure of _(e^tX) contains an -split one-parameter subgroup {h^t} such that h^t(X)=X, h^t(X_1)=e^λ_1 tX_1, h^t(Y_1) = e^λ_1 tY_1, h^t(X_2) = e^λ_2 tX_2, and h^t(Y_2) = e^λ_2 tY_2. Also, if u_1^t = _(e^tX_1), u_2^t = _(e^tX_2), v_1^t = _(e^tY_1), and v_2^t = _(e^tY_2), the subgroups {u_1^t},{u_2^t},{v_1^t}, and {v_2^t} are unipotent, hence Zariski closed in (). By the Noetherian property of the discompact radical (Prop. 1.4 of <cit.>), S̅_d contains {h^t}, {u_1^t},{u_2^t},{v_1^t}, and {v_2^t}.
Let K = {x ∈ M | N.x is isotropic}, where we consider a fixed point an isotropic orbit. By Theorem <ref>, K is non-empty, compact and S-invariant, so there exists a point x ∈ K at which the conclusion of Theorem <ref> are valid. For in the fiber of x, we note S^⊂_(P) the algebraic subgroup obtained in the conclusion of the theorem. By equivariance, S^.p = pS^p^-1.
Considering the -split component of a one-parameter subgroup of S^ which is sent to h^t (by Theorem <ref>), and changing the point appropriately in the fiber, we obtain a one-parameter subgroup {_(e^tA_X)} < P^ contained in the -split Cartan subgroup A described in Section <ref> such that [A_X,ι_(Y)] = ι_([X,Y]) for all Y ∈. Let U_1,U_2, V_1,V_2 ∈ be such that {_)(e^tU_k)} is sent onto {_(e^tX_k)}, for k=1,2, and {_(e^tV_k)} is sent onto {_(e^tY_k)}, for k=1,2.
We show now that X_1 and X_2 both vanish at x. Since x ∈ K and X_1,X_2 ∈, a non-trivial linear combination α_1 X_1 + α_2 X_2 vanishes at x. It means that α_1 ι_(X_1) + α_2 ι_(X_2) ∈. Consequently
α_1λ_1 ι_(X_1) + α_2λ_2 ι_(X_2) = [A_X, α_1 ι_(X_1) + α_2 ι_(X_2) ] ∈.
Since λ_1≠λ_2, we deduce that ι_(X_1) ∈ or ι_(X_2) ∈. Let us assume by contradiction that ι_(X_2) ∉. Necessarily, ι_(X) ∉ because if it did, then
λ_2ι_(X_2)+μ_2 ι_(Y_2)= - [U_2,ι_(X)] ∈
-μ_2ι_(X_2)+λ_2 ι_(Y_2)= - [V_2,ι_(X)] ∈
from which we would derive ι_(X_2) ∈, a contradiction. Next, we see that no element in {X,X_2} vanishes at x, because for all ν∈, [A_X, ι_ (ν X + X_2)] = λ_2 ι_(X_2), so ι_ (ν X + X_2) ∉. Since X_2(x) is tangent to N.x, it is isotropic in T_xM, and using [μ_2V_2-λ_2U_2,ι_(X)] = (λ_2^2 + μ_2^2) ι_(X_2) and Corollary <ref>, we obtain that
0=g_x(X,[μ_2Y_2-λ_2X_2, X]) = (λ_2^2 + μ_2^2) g_x(X,X_2),
and we get that X(x) is orthogonal to X_2(x), and since they are non-collinear, we must have g_x(X,X)>0. Since [A_X,ι_(X)]=0, ι_(X) must have a component on _-α because if not, then it would have components on both _-α+β and _-α-β, which would force α(A_X)=β(A_X)=0, a contradiction because A_X ≠ 0. So, since it has a component on _-α, we deduce α(A_X)=0.
Consider now the components of ι_(X_1) on the restricted root-spaces. Since [A_X,ι_(X_1) ]= λ_1 ι_(X_1) and α(A_X)=0, we deduce that λ_1 = β(A_X) and ι_(X_1) = X_β^1 + X_α+β^1. As for ι_(X_2), we deduce similarly that λ_2=-λ_1 = -β(A_X) and that its decomposition is ι_(X_2) = X_-α-β^2 + X_-β^2 + X_α-β^2. Now,
0 = [ι_(X_1),ι_(X_2)] since ι_(X_1) ∈ and using Lemma <ref>
= [X_β^1,X_-α-β^2] mod.
So, X_β^1 = 0 because ι_(X_2) ∉. We deduce similarly [X_α+β^1,X_-α-β^2] = 0, from which we finally get ι_(X_1) = 0, the contradiction.
Thus, we have obtained that both ι_(X_1) and ι_(X_2) belong to . We conclude with the following.
Let H,Y,Z ∈(2,n) be such that H ∈, Y,Z ∈, and [H,Y]=λ_1Y, [H,Z] = λ_2Z and [Y,Z]=0. Then, Y ∈^+ or Z ∈^+.
We use the same notations as in Section <ref> for the restricted root-spaces of (2,n). Since λ_1 , λ_2 ≠ 0, Y and Z have no component on ⊕. We denote by Y = Y_-β + Y_β + Y^+ and Z = Z_-β+Z_β+Z^+ their decomposition according to = _-β⊕_β⊕⊕⊕^+.
If β(H)=0, then Y_-β=0 and Y_β=0, (similarly for Z), and the lemma is established.
We are then reduced to assume that β(H)≠ 0 and Y_β≠ 0 (the other case being symmetric). Then, β(H) = λ_1 and Y_-β=0. Since λ_1 ≠λ_2, we get Z_β = 0. Therefore, the bracket [Y,Z] decomposes into [Y,Z] = [Y_β,Z_-β] + [Y_β,Z^+] - [Z_-β,Y^+], the first term belonging to ⊕, the second and the third to ^+. It follows therefrom that [Y_β,Z_-β]=0, and it follows that Z_-β=0 (see for instance <cit.>, Lemma 3.1). Finally, Y ∉^+ implies Z ∈^+ as expected.
Applying this fact to A_X, ι_(X_1),ι_(X_2), we obtain that either X_1 or X_2 has a singularity of order 2 at x, concluding the proof.
§.§ Essential actions of abelian Lie groups of dimension n and n-1
This section is devoted to the proof of the following proposition.
Let N be an abelian Lie group acting faithfully, conformally and essentially on a compact Lorentzian manifold (M^n,g), n ≥ 3.
* If N = n, then contains an element X with a singularity of order 2.
* If N is isomorphic to ^n-1, then one of the following is true:
* There exists X ∈ with a singularity x around which X is linearizable and locally conjugate to a homothetic flow.
* There exists X ∈ with a singularity x whose holonomy is non-linear and unipotent.
* There exists an element n_0 ∈ N with a fixed point at which its holonomy is of light-like type.
In every case, an open subset of M is conformally flat.
In the second case, the fact that N has no non-trivial compact subgroup is necessary. Recall that the conformal group of a Hopf manifold is isomorphic to ^1 × O(1,n-1). Considering a horospherical subgroup U < O(1,n-1), the abelian group ^1 × U is essential, of dimension n-1, but does not satisfy (a), (b) or (c).
We start with the following result.
For all n ≥ 1, the maximal dimension of an abelian subalgebra of (1,n+1) is n. For n ≥ 3, any n-dimensional abelian subalgebra of (1,n+1) is conjugate to a restricted root-space, i.e. the Lie algebra of a horospherical subgroup in O(1,n+1).
In (1,3) ≃_2(), an abelian Lie subalgebra of dimension 2 is either a restricted root-space, or a (complex) Cartan subalgebra.
We can see a Lie subalgebra ⊂(1,n+1) as a Lie algebra of conformal vector fields of the Möbius n-sphere ^n. According to Remark <ref>, acts locally freely on an open-dense subset of the sphere, giving the upper bound.
Suppose now = n ≥ 3 and consider the function φ(x) = ∑ g_x(X_k,X_k) where (X_1,…,X_n) is a basis of and g the round metric. If φ was non-vanishing, then all elements of would be Killing vector fields of g/φ. But (^n,g/φ) would be a compact subgroup of (1,n+1) = (^n,[g]), hence contained in a conjugate of O(n+1). This would be a contradiction because any abelian Lie subalgebra of (n+1) is contained in a Cartan subalgebra, and must have dimension at most ⌊n+1/2⌋. We deduce that φ vanishes somewhere, i.e. fixes a point in ^n.
So, we obtain an embedding ι : → (⊕(n)) ⋉^n, the ^n factor corresponding to the horospherical group at the fixed point of ^n. The projection on the (n) factor is abelian. Since an hyperplane of ι() (at least) has no component on , and since n-1 > _((n)), a non-zero element Y ∈ι() belongs to the ^n factor. It follows easily that the embedding has range in (n) ⋉^n. Considering p_1 the projection on the (n) factor and V = ι() ∩^n= p_1 ∘ι, matrices of p_1(ι()) vanish on V, so belong to (V^⊥). Since p_1(ι()) is abelian of dimension n-m in (n-m), we get m=n as expected.
§.§.§ Case N = n
According to Proposition <ref>, we can choose x ∈ M such that N.x ≤ 1. Let N_x be the stabilizer of x. According to Lemma <ref>, for any x̂ ∈π^-1(x), the map X ∈_x ↦ω_(X) ∈≃(⊕(1,n-1)) ⋉^n is a Lie algebra homomorphism.
If N_x = N, then the composition with the projection on the ⊕(1,n-1) factor has a non-trivial kernel according to Lemma <ref>, and any non-zero X in this kernel satisfies ω_(X) ∈^+ for every ∈π^-1(x).
If _x = n-1, then the composition with the projection on the (1,n-1) factor has a non-trivial kernel, and there exists X ∈_x such that ω_(X) ∈.A_α ⊕^+, where A_α ∈ is characterized by α(A_α)=1 and β(A_α)=0. Since (A_α) acts homothetically on _-1, the component on .A_α has to be trivial since _̣x ϕ_X^t fixes X_0(x) ∈ T_xM for any X_0 ∉_x. Thus, ω_(X) ∈^+, which establishes point (1) in the proposition.
§.§.§ Case N ≃^n-1 with n ≥ 5
If N has a fixed point x, then what precedes shows that there exists X ∈ such that ω_(X) = λ A_α + X_1 ∈.A_α ⊕^+. If λ ≠ 0, then (p)ω_(X) = λ A_α where p = e^1/λX_1, so ω_.p^-1(X) ∈.A_α, showing that X is locally conjugate to (A_α)|__-1 by Proposition <ref>, and we are in situation (a). If λ = 0, then we are in situation (b).
Consequently, we can assume that N has no fixed point, and that for all x ∈ M such that N.x is 1-dimensional and light-like, the embedding ι_ : X ∈_x ↦ω_(X) ∈ is such that p ∘ι_ is injective, where p : (⊕(1,n-1)) ⋉^n →(1,n-1) denotes the projection (if not, we are in situation (b)). Furthermore, we can assume that for all X ∈_x, if ι_(X) ∈ is unipotent, then it is conjugate in P to an element of _0.
Recall that K refers to the compact subset of M where N-orbits are totally isotropic. We note _x the Lie algebra of the stabilizer in N of a point x. The following lemma will be reused later.
Let x ∈ K. If n ≥ 5, and if we are not in cases (a) and (b) of Proposition <ref>, then there exists ∈π^-1(x) such that ι_(_x) = _β.
If p : →(1,n-1) denotes the projection, since _x = n-2, Lemma <ref> implies that p ∘ι_(_x) ⊂(1,n-1) is a restricted root-space. So, replacing by some . g_0, with g_0 ∈ G_0, we can assume p ∘ι_(_x) = _β.
In fact, no element of ι_(_x) has a component on the grading element A_α. Indeed, every one-parameter subgroup {ϕ_X^t}⊂ N_x fixes an isotropic vector in T_xM which corresponds to the tangent line of the N-orbit of x. Given the form of ι_(X), _̣xϕ_X^t = e^λ t u^t, where {u^t} is a unipotent one-parameter subgroup of O(T_xM) and t the component of ι_(X) on A_α. Hence, λ =0.
Thus, ι_(_x) is included in _β⊕_α-β⊕_α⊕_α+β. Next, we show that it is in fact in _β⊕_α⊕_α+β. Let ι_(X) = X_β+ X_α-β + X_α + X_α+β be the decomposition for some element X ∈_x. Choose Y ∈_x non-zero and such that Y has no component on _α-β (recall that _α-β = 1), and let Y_β+ Y_α + Y_α+β be the components of ι_(Y). Expressing that the bracket is zero, we get [X_α-β,Y_β] = 0, and since Y_β≠ 0, we obtain X_α-β=0.
Therefore, every X ∈_x has a unipotent holonomy, which is then conjugate to an element of _0 by assumption. The following shows that the conjugacy is realized by the same element of P^+, i.e. that we have a simultaneous linearization of the isotropy. This is a slight variation of Proposition 5.2 of <cit.>. Since the hypotheses are different, we give a proof in this particular setting.
* An element X_β+ X_α + X_α+β∈ with X_β≠ 0 is conjugate to an element of _0 if and only if there exists Z ∈_α-β such that X_α = [Z,X_β].
* If ⊂_β⊕_α⊕_α+β is a linear subspace all of whose elements are conjugate to an element of _0, then there exists p ∈exp(_α-β⊕_α) such that (p)⊂_β.
We identify P as the subgroup (G_0)|_^+⋉^+ of the affine group of ^+, and its Lie algebra correspondingly. A one-parameter subgroup {e^tX}⊂ P is conjugate to a one-parameter subgroup of G_0 if and only if its affine action on ^+ has a fixed point, or equivalently, writing X = (X_0, X_1) ∈_0 ⋉^+, if there exists Y_1 ∈^+ such that X_1 = [X_0,Y_1] and then (e^Y_1) X = X - [X_0,Y_1] = X_0.
Recall that if Z ∈_α-β is non-zero, then (Z)|__β : _β→_α is a linear isomorphism and that the bracket _α×_β→_α+β is non-degenerate in the sense that it identifies _α with (_β)^* and conversely.
For the first point, if X_β+ X_α + X_α+β is conjugate to an element of _0, then there exists Y_1 = Y_α-β + Y_α + Y_α + β such that X_α + X_α+β = [X_β,Y_1] = [X_β,Y_α-β ] + [X_β,Y_α], so Z:= -Y_α-β is convenient. Conversely, if there exists Z ∈_α-β such that X_α = [Z,X_β], then if Z' ∈_α is such that [Z',X_β] = X_α+β , then [X_β,-Z-Z'] = X_α+X_α+β, showing that (e^-Z-Z') (X_β+ X_α + X_α+β) = X_β.
For the second point, the linear projection to _β is injective in restriction to . We deduce the existence of a linear subspace V ⊂_β and linear maps φ_α : V →_α and φ_α + β : V →_α+β such that = {X + φ_α(X) + φ_α+β(X), X ∈ V}. Our hypothesis is that for a given non-zero Z_α-β∈_α-β, φ_α(X) is collinear to [Z_α-β,X] for every X ∈ V. Let f : _α→_β be the inverse of ( Z_α-β)|__β. Then, f ∘φ_α : V → V is a linear map fixing all the lines, hence an homothety. Replacing Z_α-β by λ Z_α-β if necessary, we must have φ_α(X) = [Z_α-β,X] for all X ∈ V. Since the bracket _α×_β→_α+β is non-degenerate, there exists Z_α∈_α such that φ_α+β(X) = [Z_α,X] for all X ∈ V. Finally, we obtain (e^-Z_α-β-Z_α) =V ⊂_β as announced.
Replacing now by .p^-1, we obtain ι_(_x) ⊂_β, which concludes the proof of the Lemma since _x = _β = n-2.
Combined with Proposition <ref>, Lemma <ref> implies that the action of the identity component of the stabilizer (N_x)_0 is locally linearizable in a neighborhood of x, and explicitly the local conjugacy is given by
exp_ : 𝒰⊂_-1→ U ⊂ M,
where _-1 = _-α+β⊕_-α ⊕_-α-β, 𝒰 is a neighborhood of 0 and U a neighborhood of x. For all X ∈_x, we have (exp_)^* X = (ω_(X))|__-1 over 𝒰. Therefore, because _-α+β is the subspace of _-1 centralized by _β, the set of fixed points of (N_x)_0 in U is a 1-dimensional submanifold, parametrized by π(exp_(tX_-α+β)) for t in a neighborhood of 0. Necessarily, this 1-dimensional fixed-point set coincides locally with the N-orbit of x. We get the following consequence:
Any 1-dimensional, light-like N-orbit is closed.
Let x be a point with a 1-dimensional, light-like orbit. Let y ∈N̅.̅x̅ be a point in the orbit closure. Since N is abelian, N_x ⊂ N_y, so that _x =_y since we assumed that N has no fixed point. Let U be a neighborhood of y over which (N_y)_0 is locally linearizable. We have observed that {z ∈ U | ∀ g ∈ (N_y)_0, g.z =z} = N.y ∩ U. Let now g_0 ∈ N be such that g_0.x ∈ U. Then, g_0.x is fixed by N_x, which contains (N_y)_0, so g_0.x ∈ N.y. Therefore, y ∈ N.x, showing that N.x is closed as we claimed.
Therefore, for every X ∈ and every x ∈ K, if X(x) ≠ 0, then there exists t_0 > 0 such that e^t_0X ∈ N_x. In general, it is possible that e^t_0X ∈ (N_x)_0 (for instance if ϕ_X^t is a free ^1-action). This is not possible in our situation since N is isomorphic to ^n-1.
The centralizer 𝒵_O(1,n-1)(_β) is 𝒵_O(1,n-1) ×exp(_β).
Consider the standard conformal action of O(1,n-1) on the round sphere ^n-2. Let x ∈^n-2 be the (unique) point fixed by all of _β. Then, for g in the centralizer, we must have g.x=x by uniqueness. Therefore, g ∈ Q < O(1,n-1), the parabolic subgroup fixing x. The group Q decomposes into Q = (^* × O(n-2)) ⋉^n-2, where the ^n-2 factor parametrizes exp(_β) < Q. As the latter clearly centralizes _β, we can assume that g has no component on the ^n-2 factor. Let (a,A) ∈^* × O(n-2) be corresponding to the other factors of g, then the adjoint action of g on some X_β in the root-space, which is parametrized by u ∈^n-2, is a^-1 Au. We get Au = au, for all u ∈^n-2, showing that A is a scalar matrix, and finally g = ±𝕀.
If x ∈ K, then there exists X ∈, ∈π^-1(x) and t_0 > 0 such that X(x) ≠ 0, n_0 := e^t_0X∈ N_x, and ^(n_0) is a light-like translation in P^+.
Let X ∈ be such that X(x) ≠ 0 and let t_0>0 be such that n_0 = e^t_0X∈ N_x. Let ∈π^-1(x) be such that ω_(_x) = _β.
Recall that the map ^ : f ∈(M,g)_x ↦^(f) ∈ P is an injective Lie group homomorphism, where ^ refers to the unique p ∈ P such that f̂() = .p^-1.
So, ^ (n_0) ∈ P is an element centralizing ^((N_x)_0) = exp(_β). Since ^ is injective and n_0 ∉ (N_x)_0, ^(n_0) ∉exp(_β). Using the decomposition P = G_0 ⋉ P^+, and replacing t_0 by 2t_0 if necessary, we obtain that the G_0-part of ^ (n_0) belongs to {A^t}×exp(_β), say (e^s_0 A,e^X_β) for X_β∈_β. Thus, picking Y ∈_x such that ^(e^t_0Y) = e^X_β, and replacing X by X-Y, we may assume that ^(n_0) ∈{A^t}⋉ P^+ and is a non-trivial element. Necessarily, the factor on A^t must be the identity because if not, it would mean that the differential _̣x n_0 is a non-trivial homothety, whereas it fixes pointwise the light-like periodic orbit N.x. Finally, ^(n_0) is a non-trivial element of P^+ which centralizes _β, so we conclude ^(n_0) ∈exp(_α + β).
This concludes the proof of Proposition <ref> in the case n ≥ 5.
§.§.§ Case of ^n-1, n=3
In this situation, _x is a line. If {ϕ^t} is a one-parameter subgroup of N_x, then _̣x ϕ^t fixes an isotropic vector in T_xM given by the orbit N.x. As a consequence, _̣x ϕ^t = e^λ t g^t, where {g^t} ⊂ (T_xM,g_x) ≃ (1,2) is in the parabolic subgroup fixing this isotropic line. It implies that either λ=0 and _̣x ϕ^t = g^t is unipotent, or {g^t} is hyperbolic and λ ≠ 0.
For any x ∈ K, the identity component (N_x)_0 = {ϕ_Y^t}_t ∈ is locally linearizable near x, and Y is locally conjugate to one of the two linear vector fields on ^3
X_h = [ 0; x_2; 2x_3 ] or X_u = [ x_2; -x_3; 0 ].
In both cases, there exists a neighborhood U of x such that {y ∈ U | Y(y)=0} = N.x ∩ U is a one-dimensional submanifold of U.
If _̣x ϕ_Y^t is unipotent, then the proof of the case n ≥ 5 can be directly applied.
So, we assume that _̣x ϕ_Y^t is hyperbolic. Up to conjugacy in P, we have ι_(Y) = Y_0 + Y_1 where
Y_0 = diag(1, 1, 0, -1, -1)
and Y_1 ∈^+. After conjugacy by an element of exp(_α⊕_α+β), we can furthermore assume Y_1 = Y_α-β∈_α-β. Now let X ∈ be such that X(x) ≠ 0. Since [X,Y]=0 and Y(x) = 0, [ι_(X),ι_(Y)] = 0. It follows that the _-1 component of ι_(X) is in _-α+β. Let ι_(X) = X_-α+β+X_0+X_1 be the decomposition according to the grading of (2,3). Expressing that the bracket [ι_(X),ι_(Y)]=0, we obtain [X_-α+β,Y_α-β]+[X_0,Y_0]=0. This forces Y_α-β=0 because [_0,_0] is contained in the (1,2) factor of _0 whereas [_α-β,_-α+β] is not. By Proposition <ref>, Y is locally linearizable near x and conjugate to the linear vector field (Y_0)|__-1. The latter fixes only the line _-α+β, so the fixed points of Y near x form a 1-dimensional submanifold near x, which necessarily locally coincides with N.y, as claimed.
It follows, as in Lemma <ref> in the case n ≥ 5, that for all x ∈ K, N.x is closed. Let X ∈ be such that X(x) ≠ 0 and let t_0>0 be such that n_0 = e^t_0X ∈ N_x. Since N ≃^2, we have n_0 ∉ (N_x)_0.
If (N_x)_0 has a unipotent linear action near x, then the end of the proof of the case n ≥ 5 applies. Let us assume then that we are in the hyperbolic case. The proof of Lemma <ref> shows that there exists ∈π^-1(x) such that ι_(Y) = diag(1,1,0,-1,-1). Hence, ^(n_0) is an element of P centralizing ι_(Y). It follows that there exists s_0 ∈ such that ^(n_0.ϕ_Y^s_0) ∈exp(_α-β), which concludes since n_0 ∉ (N_x)_0.
§.§.§ Case of ^n-1, n=4
When n=4, by Lemma <ref>, considering the projection p : →(1,3) as before, we get that for all x ∈ K and ∈π^-1(x), p ∘ι_(_x) is either a restricted root-space, or a Cartan subalgebra.
As in the previous cases, we obtain that near every point x ∈ K, there exists Y ∈_x which is locally linearizable and whose fixed points locally coincide with N.x (in the unipotent case, the proof is the same as in the case n ≥ 5; in the other case, we pick Y ∈_x such that p ∘ι_(Y) is an -split element in the Cartan subalgebra, and the proof of the case n=3 applies).
It follows similarly that N.x is closed for every x ∈ K. Let t_0>0 be such that n_0 = e^t_0X ∈ N_x. If the isotropy representation of (N_x)_0 at x is unipotent, then the same approach as in the case n ≥ 5 implies the existence of some Y_0 ∈_x and ∈π^-1(x) such that ^(n_0e^Y_0) ∈exp(_α+β) (and non-trivial). If the isotropy representation sends (N_x)_0 onto a Cartan subgroup of (T_xM,g_x), then the same proof as in the case n=3 shows that if Y ∈_x is sent onto the -split element of the Cartan subalgebra and Z ∈_x to the imaginary part, then there exists ∈π^-1(x) such that
ι_(Y) = diag(1, 1, 0, 0, -1, -1) and ι_(Z) = diag(0, 0, [ 0 -1; 1 0 ], 0, 0).
Expressing that ^(n_0) centralizes these elements, we obtain that
^(n_0) = diag( [ λ x_0; 0 μ ], R, [ μ^-1 -x_0; 0 λ^-1 ] ),
with R ∈ O(2), λ,μ ∈^*.
Since _̣x n_0 fixes the same isotropic vector in T_xM as Y, it follows that λ=μ. So, replacing n_0 by its square, and translating it by some e^s_0Y+s_1Z, we obtain
^(n_0) = diag( [ 1 x_0; 0 1 ], 𝕀, [ 1 -x_0; 0 1 ] ) ∈exp(_α-β),
as expected.
§ PROOF OF THEOREM <REF>: EMBEDDING OF THE RADICAL INTO (2,N)
We assume throughout this section that no non-empty open subset of (M^n,g) is conformally flat. We denote by R a connected, solvable Lie subgroup of (M,g), which is assumed to be essential. By Theorem <ref> and Theorem <ref>, R has abelian nilradical N and N is essential. By Proposition <ref>, N ≤ n-1 and if N = n-1, then the maximal torus of N is non-trivial. By Propositions <ref> and <ref>, there exists ⊂ abelian and ρ : →() a faithful representation such that for all X ∈, ρ(X) is semi-simple and ≃⋉_ρ. Let {α_1,α̅_1,…,α_r,α̅_r} ⊂ (^*)^ be the complex weights of ρ. According to Lemma <ref>, there exist real linear forms λ,μ_1,…,μ_r ∈^* and s ∈{0,…,r} such that λ ≠ 0 and
* α_k = λ + i μ_k for 1 ≤ k ≤ s ;
* α_k = iμ_k for s+1 ≤ k ≤ r,
the case where all weights are purely imaginary corresponding tos=0.
We first set some notation. Let us take the convention that if there exists k ∈{1,…,s} such that μ_k = 0, then it is k=1, and if there exists k ∈{s+1,…,r} such that μ_k = 0, then it is k=s+1. Let ^ = ⊕_k (^)_α_k denote the complex weight-space decomposition. Remark that ∩_k α_k = {0} since ρ is faithful (the centralizer of in coincides with ). At most two weight-spaces are real: (^)_α_1 if μ_1=0 and (^)_α_s+1 if μ_s+1=0, the latter corresponding to the center of . For a non-real weight α_k, we define the real even-dimensional subspace
^k = {X+X̅, X ∈ (^)_α_k}⊕{i(X-X̅), X ∈ (^)_α_k}⊂,
which verifies ^k ⊕ i^k = (^)_α_k ⊕ (^)_α̅_k. Finally, let
_1 = ⊕_α_k ∉^* ∪ i ^*_k , _2 = ⊕_α_k ∈ i ^* ∖{0}_k.
Hence, we have the decomposition
= _1 ⊕_2 ⊕_λ⊕(),
where () denotes the center of and _λ = {X ∈ | ∀ H ∈, [H,X] = λ(H)X}.
§.§ Case N ≤ n-2
In this situation, we have the following explicit embedding of any such solvable Lie algebra into (2,n), which sends the nilradical into _α ⊕_β. Note that _α = _β = n-2. To an element (X,Y) with X ∈ and Y ∈ with decomposition Y = Y_1+Y_2+Y_λ+Y_, we associate the matrix of (2,n):
(
[ λ(X) Y_1 Y_λ 0 0 0; 0 0 0 Y_2 Y_ 0; D_1(X) 0 - ^t Y_1; 0 0 - ^t Y_λ; D_2(X) - ^t Y_2 0; 0 - ^t Y_ 0; 0 0 0; 0 ; -λ(X) ] )
where
D_1(X) = diag( [ 0 -μ_1(X); μ_1(X) 0 ], …, [ 0 -μ_s(X); μ_s(X) 0 ] )
and
D_2(X) = diag( [ 0 -μ_s+1(X); μ_s+1(X) 0 ], …, [ 0 -μ_r(X); μ_r(X) 0 ] ).
§.§ Case N = n-1
Recall that we can assume that N has a non-trivial maximal torus by Proposition <ref> and that N acts without fixed points. We still denote by K the compact subset of points admitting a one-dimensional light-like N-orbit.
If _1 ≠ 0, then _2 = 0 and () ≤ 1.
The result is immediate if n<5. Let us then assume n ≥ 5. Let X ∈ be such that λ(X)=1. We apply Theorem <ref> to the pair (G,S) where S = {e^tX}_t∈. Considering a finite measure supported on K, we deduce the existence of x∈ K at which the conclusions of this theorem are true. Similarly as before, the Zariski closure of (S) in () contains an -split one-parameter subgroup {h^t} which acts homothetically on _1 ⊕_λ and trivially on _2 ⊕().
Necessarily, {h^t} is contained in the discompact radical of this Zariski closure. Choose now ∈π^-1(x) such that ι_(_x) = _β. Considering the Jordan decomposition of a one parameter subgroup of _(2,n)(P) which is sent onto {h^t} by the algebraic homomorphism provided by Theorem <ref>, we get the existence of an -split one parameter subgroup {p^t} < P such that for all X ∈, (p^t)ι_(X) = ι_(h^t(X)). Let X∈ be the generator of {p^t}. Then, we have:
* [X,ι_(Y)] = ι_(Y) for all Y ∈_1 ⊕_λ ;
* [X,ι_(Y)] = 0 for all Y ∈_2 ⊕().
Recall that = (⊕(1,n-1)) ⋉^n. Consider X_0 the (1,n-1) component of X. Then, (X_0) is still -split. It is moreover non-zero because if it was, it would imply (in particular) that X centralizes _β, but this is excluded because _1 ≥ 2, so it intersects _x and there exists Y ∈_1 non-zero such that ι_(Y) ∈_β and [X,ι_(Y)] = [X_0, ι_(Y)] = ι_(Y). So, X_0 generates an -split Cartan subgroup of (1,n-1). Consequently, the centralizer of X_0 in (1,n-1) is isomorphic to .X_0 ⊕(n-2), and an element of (1,n-1) centralizing X_0 is semi-simple, so it cannot belong to a restricted root-space.
If _2 was non-zero, then, being at least 2-dimensional, it would intersect the hyperplane _x. But for Y ∈_x ∩_2, we have ι_(Y) ∈_β and [X,ι_(Y)] = [X_0,ι_(Y)] = 0, a contradiction.
() ≠ 0.
Let T ≃^k denote the maximal torus of the abelian Lie group N. Then, T is normal in R and since (T) is discrete, T is in fact central in R and, by assumption, ≠ 0. So, () ≠ 0.
We can now conclude by distinguishing several cases.
§.§.§ _λ≠ 0
In this situation, we distinguish a direction in _λ (denoted Y'_λ) that we embed into _α-β:
(
[ λ(X) Y_λ' Y_λ” Y_1 0 0 ; 0 0 0 Y_2 Y_ ; 0 - ^t Y_λ”; D_1(X) - ^t Y_1; D_2(X) - ^t Y_2; 0 - ^t Y_ ; 0 -Y_λ'; -λ(X) ] )
§.§.§ Case _λ = 0 and _1 ≠ 0
Recall that by Lemma <ref>, we have _2 = 0 and the center is contained in a line, so () is a line by Lemma <ref>, which we embed into _α-β as follows
(
[ λ(X) Y_ Y_1 ; λ(X) ; D_1(X) - ^t Y_1; -λ(X) -Y_; -λ(X) ] )
§.§.§ Case _λ = 0 and _1=0
In this last configuration, we have the embedding
(
[ 0 Y_' Y_” Y_2 ; 0 ; 0 - ^t Y_”; D_2(X) - ^t Y_2; 0 -Y_'; 0 ] )
|
http://arxiv.org/abs/2307.04673v1 | 20230710161837 | Holographic $T\bar{T}$ deformed entanglement entropy in dS$_3$/CFT$_2$ | [
"Deyou Chen",
"Xin Jiang",
"Haitang Yang"
] | hep-th | [
"hep-th"
] | |
http://arxiv.org/abs/2307.10993v1 | 20230711194842 | Unsupervised Learning in Complex Systems | [
"Hugo Cisneros"
] | cs.NE | [
"cs.NE",
"cs.AI",
"cs.LG"
] |
Probabilistic Unitary Formulation of Open Quantum System Dynamics
[
August 12, 2023
==================================================================
CHAPTER: RÉSUMÉ
CHAPTER: ABSTRACT
CHAPTER: ACKNOWLEDGMENTS
I express my sincere gratitude to Professor Josef Sivic and Tomas Mikolov for
their invaluable guidance, enthusiasm, and endless encouragement, without which
this thesis would not have been possible.
I am very grateful to a number of people who have directly or indirectly
influenced and helped me in my work, both as colleagues and friends: Jelle, Barbora, Kateryna, Teven, and David.
Special thanks to Barbora Hudcová and Josef Sivic for their meticulous
proofreading of parts of this thesis.
I would also like to thank everyone at the Foundational AI Lab and the
Automated Reasoning Lab of the Czech Institute of Informatics, Robotics and
Cybernetics (CIIRC) for creating a welcoming and supportive work environment.
Most importantly, I would like to express my deep appreciation to my parents,
family, and friends for their support and understanding throughout my academic
journey.
Lastly, I would like to express my heartfelt thanks to Brune for her unwavering
patience, help, support, and constant inspiration.
|
http://arxiv.org/abs/2307.04348v2 | 20230710051628 | Full statistics of non-equilibrium heat and work for many-body quantum Otto engines and universal bounds: A non-equilibrium Green's function approach | [
"Sandipan Mohanta",
"Bijay Kumar Agarwalla"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.mes-hall"
] |
|
http://arxiv.org/abs/2307.04218v1 | 20230709162030 | Investigating Berezinskii-Kosterlitz-Thouless phase transitions in Kagome spin ice by quantifying Monte Carlo process: Distribution of Hamming distances | [
"Wen-Yu Su",
"Feng Hu",
"Chen Cheng",
"Nvsen Ma"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
Those authors contributed equally to this work.
Key Laboratory of Quantum Theory and Applications of MoE, Lanzhou Center for Theoretical Physics, and Key Laboratory of Theoretical Physics of Gansu Province, Lanzhou University, Lanzhou, Gansu 730000, China
Those authors contributed equally to this work.
School of Physics, Beihang University, Beijing 100191, China
[email protected]
Key Laboratory of Quantum Theory and Applications of MoE, Lanzhou Center for Theoretical Physics, and Key Laboratory of Theoretical Physics of Gansu Province, Lanzhou University, Lanzhou, Gansu 730000, China
[email protected]
School of Physics, Beihang University, Beijing 100191, China
We reinvestigate the phase transitions of the Ising model on the Kagome lattice with antiferromagnetic nearest-neighbor and ferromagnetic next-nearest-neighbor interactions, which has a six-state-clock spin ice ground state and two consecutive Berezinskii-Kosterlitz-Thouless (BKT) phase transitions. Employing the classical Monte Carlo (MC) simulations, the phases are characterized by the magnetic order parameter, and the critical temperatures are obtained by the finite-size scaling of related physical quantities. Moreover, we attempt to gain general information on the phase transitions from the MC process instead of MC results and successfully extract the correct transition points with surprisingly high accuracy. Specifically, we focus on the selected data set of uncorrelated MC configurations and quantify the MC process using the distribution of two-configuration Hamming distances in this small data collection. This distribution not only features different behaviors in different phases but also nicely supports the same BKT scaling form as the order parameter, from which we successfully determine the two BKT transition points with surprisingly high accuracy. We also discuss the connection between the phase transitions and the intrinsic dimension extracted from the Hamming distances, which is widely used in the growing field of machine learning and is reported to be able to detect critical points. Our findings provide a new understanding of the spin ice transitions in the Kagome lattice and can hopefully be used similarly to identify transitions in the quantum system on the same lattice with strong frustrations.
Investigating Berezinskii-Kosterlitz-Thouless phase transitions in Kagome spin ice by quantifying Monte Carlo process: Distribution of Hamming distances
Nvsen Ma
August 12, 2023
========================================================================================================================================================
§ INTRODUCTION
In a simple magnetic system, such as the two-dimensional Ising square lattice with antiferromagnetic nearest neighbor (NN) couplings, the ordered magnetic state at low temperatures changes into the paramagnetic state as temperature increases with a single finite-temperature phase transition characterized by the order parameters, corresponding susceptibilities or correlations. The universality class of the phase transition is predicted through the scaling behavior and critical exponents of those physical quantities <cit.>. However, the phases and transitions can be totally different once geometric frustration is introduced, where the competing interactions that cannot be satisfied simultaneously bring degeneracy and result in exotic ground states in both quantum and classical cases <cit.>. In classical phase transitions, as temperature increases, the complex ordered ground state usually turns into the disordered phase at high temperatures step by step through several intermediate phases rather than one direct phase transition. These consecutive phase transitions and different kinds of ordered or partly-ordered intermediate states are observed in various frustrated spin systems and attract many theoretical and experimental investigations <cit.>. On the other hand, these intermediate phases are usually less well understood and sometimes even lack a well-defined order parameter, which makes it difficult to understand the phase transitions in frustrated spin systems. What is more, in many theoretical and experimental studies, the existence of the intermediate phases remains controversial when judged by traditional physical properties similar to those used in simple Ising magnets, such as order parameters, susceptibilities, or correlations <cit.>.
In recent years, exploring new methods to detect phase transitions with less prior knowledge of the phases, especially integrating the machine learning (ML) ideas with the Monte Carlo simulations, has attracted increasing interest <cit.>. The ML techniques can be used to directly recognize the differences between phases and trace the critical points by classifying these phases, and have succeeded in detecting the second-order phase transitions <cit.> and Berezinskii-Kosterlitz-Thouless (BKT) phase transitions <cit.> in some well-studied spin systems. While most of these works still require prior knowledge as the ML approach is based on the numerically obtained observables from MC simulations <cit.>, some recent studies have attempted to get the universal information of phase transitions from analyzing only the raw data of spin configurations generated through MC processes <cit.>. In principle, the latter approaches, which do not depend on the specific physical quantities and details of the target system, can hopefully inspire further works with the same idea to probe phase transitions universally by quantifying the MC process.
Beyond classifying the phases and locating the transition points, recent works demonstrate that the MC process can further verify the universality class of the phase transition and extract the corresponding critical exponent via finite-size scaling <cit.>. In some sense, the MC sampling in the huge configuration space shares a similar idea with the ML procedure, which identifies the universal properties of high-dimensional data sets from a minimally processed data collection. The key aspect is to extract the relevant information with finite degrees of freedom from a system with seemingly infinite complexity. Inspired by the ML studies and along the same line, we aim to find a new way, independent of models or universality classes, that can describe the phase transition by quantifying the MC process rather than physical quantities and without employing any specific ML techniques.
On the other hand, our work is in parallel inspired by recent studies on the dynamical phase transition in the context of thermalization and many-body localization (MBL) in closed quantum systems <cit.>. The disorder-induced thermal-MBL transition can be characterized by analyzing the Hamming distances between the Fock states in the configuration basis, with the probability of Fock states determined by the target wavefunction <cit.>. Extending the similar idea to the MC procedure, where the partition function determines the distribution of MC thermal configurations, we presume that the distribution of Hamming distances between these sampled configurations can possibly probe the phase transitions. In this work, we adopt this conception to reinvestigate the phases and phase transitions in the frustrated kagome Ising model (KIM) with antiferromagnetic NN couplings and ferromagnetic next nearest neighbor (NNN) ones, as shown in Fig. <ref>(a). The system has two consecutive temperature-driven BKT phase transitions with the charge-ordered spin-ice state of six-fold symmetry at low temperatures <cit.>. The same ordered ground state has recently been realized in the inter-metallic compound HoAgGe <cit.>, supporting the naturally existing kagome spin ice for the first time. We expect the present work to reveal the rich physics in this system via quantifying the MC procedure without any physical quantities.
The rest of the paper is organized as follows. In Sec. <ref>, we introduce the Kagome spin ice model and BKT phase transitions therein, which have been systematically reinvestigated by MC simulations according to physical quantities in Sec. <ref>. In Sec. <ref>, we demonstrate that the full information of phase transitions and the critical points can be retrieved by non-physical quantities, specifically the distribution of Hamming distances of the selected configurations in the MC procedure. The intrinsic dimension, which is commonly used in ML in detecting transitions, is also discussed in Sec. <ref>. Finally, the summary and discussion are presented in Sec. <ref>.
§ PRELIMINARIES
We study the two-dimensional Ising model on the kagome lattice with antiferromagnetic NN interactions and ferromagnetic NNN ones. The model Hamiltonian reads
H=J_1 ∑_⟨ ij⟩σ_i σ_j + J_2 ∑_⟨⟨ ij⟩⟩σ_i σ_j
where σ_i=± 1 denotes the Ising spin on the site i, and J_1>0 (J_2<0) is the interaction between the NN (NNN) sites on the Kagome lattice illustrated in Fig. <ref>(a). The KIM described by Eq. (<ref>) features the ground state of a six-state clock spin ice <cit.>, which can be described by a complex order parameter
M=1/N∑σ_i exp(iQ·r_i)
with Q=(4π/3,0) and N for the total number of spins. In the ordered phase at low temperatures, the order parameter follows M=|M| e^iϕ with magnitude |M|≠ 0, and the phase ϕ can only take one of the values from ϕ=nπ/3 with n=0,1,2,3,4 and 5, as illustrated in Fig.<ref>(b). As temperature increases, the system first goes into a critical phase with power-law decay of spin correlations and unrestricted values of ϕ, which exhibits an emergent U(1) symmetry <cit.>. At sufficiently high temperatures, all orders break down, and the system features a disordered state. The KIM undergoes two finite-temperature phase transitions, which belong to the same universality class as the six-state clock model <cit.>. Both phase transitions in the q-state clock model are predicted to belong to the Berezinskii-Kosterlitz-Thouless (BKT) universality class as long as q≧ 5 from theoretical analysis <cit.>. Although some numerical studies claim that BKT transitions only occur for systems with q≧ 8 <cit.>, most recent works agree with the idea that those two phase transitions in q=6 clock model are of the BKT type through various computational methods <cit.>.
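As a minimal illustration (not the authors' code), the complex order parameter can be evaluated directly from a sampled configuration once the real-space site coordinates are available; in the sketch below the array names are our own, and the coordinate convention that makes Q=(4π/3,0) the ordering wave vector is assumed to be respected when generating pos.
import numpy as np

def order_parameter(spins, pos, Q=(4.0 * np.pi / 3.0, 0.0)):
    """Complex order parameter M = (1/N) sum_i sigma_i exp(i Q . r_i).

    spins : (N,) array of +/-1 Ising variables
    pos   : (N, 2) array of site coordinates r_i (convention assumed to match Q)
    """
    phases = np.exp(1j * (pos @ np.asarray(Q)))
    return np.mean(spins * phases)

# |M| and the phase angle phi distinguish the six clock states:
# M = order_parameter(spins, pos); amp, phi = np.abs(M), np.angle(M)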
The six-clock spin-ice ground state has recently been observed in the inter-metallic compound HoAgGe for the first time in Ref. <cit.>, where the authors have claimed that the physics in the experiment can be described by a generalized KIM with geometry distortions and weak dipolar interactions. Our present work aims to reinvestigate the known physics of the standard KIM in a universal way, and future works following the same procedure would help to understand phase transitions in more complex models without obvious order parameters, such as the compound HoAgGe.
§.§ MC Simulation of Physical Quantities
This paper presents a large-scale computational study on the KIM in Eq.(<ref>) with fixed J_2=-1/3 and J_1=1, with the latter as the energy scale. Specifically, we consider a system with L× L unit cells that contribute a total number of spins N=3L^2, and use the periodic boundary conditions to minimize the boundary effect. We adopt the classical MC method with a standard Metropolis sampling algorithm of single-spin update, where a randomly chosen spin is flipped with a probability p= exp(-βΔ E). Here Δ E is the energy change of the flipping, β=1/k_BT with T the simulated temperature, and k_B is set as 1 for simplicity.
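A minimal sketch of one such single-spin Metropolis sweep is given below, assuming precomputed NN and NNN neighbor tables for the Kagome lattice; the function and variable names are illustrative and this is not the authors' implementation.
import numpy as np

def metropolis_sweep(spins, nn, nnn, beta, J1=1.0, J2=-1.0/3.0, rng=None):
    """One sweep of N single-spin Metropolis updates for the KIM.

    spins : (N,) array of +/-1
    nn[i], nnn[i] : index arrays of the NN and NNN sites of spin i
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(spins)
    for _ in range(N):
        i = rng.integers(N)
        # local field from the J1 (NN) and J2 (NNN) bonds attached to site i
        h = J1 * spins[nn[i]].sum() + J2 * spins[nnn[i]].sum()
        dE = -2.0 * spins[i] * h          # energy change if spin i is flipped
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins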
For all calculations with different system sizes (L=12 to 120), we take N_b=56 independent bins of MC procedures to avoid autocorrelation between samples [Here 56 equals the number of cores in the computing cluster for simulations in this work. Our results have confirmed that 56 bins are reasonable and sufficient.]. Each bin has 10000 update steps to reach equilibrium and 20000 MC steps for measurement; that is, we employ a total number of ∝ 10^7 configurations to get the expected thermodynamic averages for different observables. For a generic quantity Q, the value and statistical error are computed as
Q̅ = 1/N_b ∑_b=1^N_b Q̅_b
σ_Q = √( 1/(N_b(N_b-1)) ∑_b=1^N_b (Q̅_b - Q̅)^2 )
with Q̅_b the average over all MC steps in each bin. Therefore, the estimated value for the expectation of Q becomes ⟨ Q⟩=Q̅±σ_Q. One of the most often studied physical quantities for probing thermal phase transitions is the specific heat C_v defined as
C_v=1/Nk_BT^2(⟨ E^2⟩-⟨ E⟩^2),
where E is the total energy of the system. Usually, the peak value of C_v as a function of temperature T changes dramatically with the size, indicating the singularity behavior at L→∞, which can be used to detect phase transitions. However, it is not the case for the BKT transitions, where the peak height does not change much as L increases, as shown in Fig. <ref> (b). Actually, the peak positions for the T-dependence of C_v are usually away from the real critical points in the BKT transitions <cit.>. Therefore one has much more difficulty finding a proper physical quantity with known critical behavior to get the correct information about the BKT transitions in the thermodynamic limit. As for the KIM we can use the order parameter defined in Eq. <ref>. However, the quantity used to detect a BKT transition usually depends on the specific phase, which can be tricky to find for a system with computational difficulties and without preliminary knowledge.
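For reference, the bin-average error estimate of the equations above and the specific heat defined above can be evaluated as in the following sketch; the function and array names are our own and not taken from the authors' code.
import numpy as np

def bin_statistics(bin_averages):
    """Mean and statistical error from N_b independent bin averages Q_bar_b."""
    q_bar = bin_averages.mean()
    n_b = len(bin_averages)
    sigma = np.sqrt(((bin_averages - q_bar) ** 2).sum() / (n_b * (n_b - 1)))
    return q_bar, sigma

def specific_heat(E_samples, T, N):
    """C_v per spin from the energy samples of one bin (k_B = 1)."""
    return (np.mean(E_samples ** 2) - np.mean(E_samples) ** 2) / (N * T ** 2)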
§.§ Collection of Uncorrelated MC Configurations and Hamming Distance
A standard approach analyzes the phases and phase transitions based on the physical quantities obtained from numerical simulations. In this paper, we consider another possibility: Can one learn the same physics without borrowing the concept of any physical quantities? For quantum MC studies with a severe sign problem, some recent works have tried to extract physics from the sign value itself and reported several successful cases where the transition can be tackled by the anomaly of either the sign or its derivatives <cit.>. This procedure is apparently unsuitable for classical MC simulations, which do not face the obstacle of the sign. In the present work, we aim to extract information on transitions that happen in the infinite configuration space by quantifying the limited set of sampled configurations, in a way that can be used in different systems and transitions.
The MC simulation performs an importance sampling in the configuration space {σ⃗}, and the probability of visiting a configuration σ⃗ at given parameters is determined by the partition function. Therefore, it is reasonable to assume that the collection of visited configurations in the MC sampling procedure is closely related to the system's physical properties. In the data set of uncorrelated configurations, we are interested in the normalized Hamming distance between two configurations σ and σ^' defined as
D(σ⃗,σ⃗^') = 1/N∑_i (1/2-σ_iσ^'_i/2).
Obviously, D=0 for two identical configurations and D=1 between two exactly opposite ones. For totally uncorrelated data collections, the Hamming distances follow a Gaussian distribution centered at D=1/2. The Hamming distance is widely used in the dynamical phase transition from thermalization to many-body localization (MBL) in closed quantum systems <cit.>. In detecting the thermal-MBL transition, the time evolution of D from a single initial state can quantify the ergodicity of the system. For the thermal phase transitions that focus on the thermal equilibrium state, we can compute Hamming distances between every two configurations in a small set of data collection. Unlike some very recent works studying static phase transitions using the average of the Hamming distance <cit.>, we focus on the distribution of D since the average D is always 1/2 if the MC procedure is not trapped by the local minima in the KIM.
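The pairwise normalized Hamming distances, and hence a histogram estimate of P(D), follow directly from the spin overlaps; the sketch below assumes a modest stack of +/-1 configurations (for the full N_c ≈ 10^5 collection the pairs would have to be processed in chunks), and the names are illustrative.
import numpy as np

def hamming_histogram(configs, bins=200):
    """Histogram of normalized Hamming distances between all configuration pairs.

    configs : (N_c, N) array of +/-1 spins
    """
    N_c, N = configs.shape
    overlap = configs @ configs.T                  # sum_i s^a_i s^b_i
    D = 0.5 * (1.0 - overlap / N)                  # D = 1/2 - overlap/(2N)
    iu = np.triu_indices(N_c, k=1)                 # each unordered pair once
    hist, edges = np.histogram(D[iu], bins=bins, range=(0.0, 1.0), density=True)
    return hist, edges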
To get a collection of uncorrelated configurations, we adopt 200 equally-spaced configurations in each bin and collect the data of all bins. With a total number N_c≈ 10^5, these configurations produce a data set of D with size N_ D≈ 10^10. This is a large number but still vanishingly small compared to the total number of configurations 2^N. Fig. <ref>(c) shows the distribution of the Hamming distance as a function of T for L=120 as an example, where one can tell three different regions at the very first glance. At zero temperature, the Hamming distance between the six-fold degenerate ground states supports only four possible values (0, 1/3, 2/3, and 1) [See Appendix <ref>], which is consistent with the numerically obtained P( D) at T=0.5, where one observes a slightly broadened bright region instead of a delta function for the finite system at finite temperatures. As the temperature increases, the position of the peak at D=0 (1) moves towards 0.5, as depicted by the bright curves, and the peak at D=1/3 (2/3) hardly varies. These four sharp peaks disappear in the intermediate phase, where two broad distributions occur. Further increasing T to sufficiently high temperatures, P( D) in the disordered phase has a symmetric Gaussian distribution centered at 0.5 as all configurations have equal probability.
§ BKT TRANSITIONS IN THE ASPECT OF PHYSICAL QUANTITIES
The BKT transitions in the q-state clock model as well as the KIM can be characterized by the order parameter M and its susceptibility χ_M≡d⟨ M⟩/dh|_h→ 0 to the magnetic field h, which is calculated using
χ_M=β/N(⟨ M^2⟩-⟨ M⟩^2).
Then the critical temperature and exponents in the thermodynamic limit can be extracted from the standard finite-size scaling approach <cit.>. The order parameter and its susceptibility in KIM obey the scaling forms within the region close to T_c as:
M= L^-β_c/ν F_M(ξ/L)
χ_M= L^γ/ν F_χ(ξ/L),
where β_c, γ, and ν are the critical exponents for the magnetization, susceptibility, and correlation length, with F_M and F_χ the functional forms for the data of different system sizes L. For the BKT universality class, the correlation length ξ exponentially diverges at T_c as ξ∼exp(c/√(t)) with t=|T-T_c| and a non-universal constant c <cit.>. Using the general exponent relations γ=ν(2-η) and β_c=ν(d-2+η)/2 with the lattice dimension d=2, we can rewrite the scaling forms of M and χ_M with a single scaling exponent η as
M = L^-η/2 F_M^-1 (L/e^c/√(t))
χ_M = L^2 - η F_χ^-1 (L/e^c/√(t)).
Close to the critical points and inside the whole critical phase, the functional form F_M^-1 (L/e^c/√(t)) approaches a constant value as L/ξ→ 0. Therefore M (as well as χ_M) should behave as a power law in L in the whole region between T_c1 and T_c2, as confirmed by the numerical results displayed in Fig. <ref>(b), depicted by the linear behavior on the log-log scale. Moreover, one can extract the critical exponent η by fitting M∝ L^-η/2.
The obtained η is shown in Fig. <ref>(c), where the exponents for both transition points and physical quantities (M and χ_M) agree well with the theoretical predictions, that is, η(T_c1)=1/9 and η(T_c2)=1/4 <cit.>. For the critical phase between T_c1 and T_c2, the logarithm of the order parameter depends linearly on lnL, with slope -η/2 and 1/9<η<1/4.
We further carry out a standard finite-size scaling by the data collapse approach of the functional form in Eq. (<ref>), as displayed in Fig. <ref>. In the scaling procedure, we adopt the above-mentioned η as fixed values, and searching for the best data collapse is equivalent to a minimization problem in the two-dimensional parameter space {T_c,c} [see Appendix <ref> for the scaling details and error estimation]. The results of the M scaling are shown in Fig. <ref>(a) and (b), where the data for different L nicely collapse onto a smooth curve, and the obtained critical points are in accordance with the fitting in Fig. <ref>(b). Considering the error, the scaling of χ_M provides consistent results, as displayed by the corresponding data collapse in Fig. <ref>(c) and (d) for both critical points. Hereafter, for all scaling procedures without specific instructions, the data collapse is carried out using the data within the temperature range [0.70,T_c1) for the first transition point, and (T_c2,1.25] for the second.
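One simple way to implement the search for the best collapse is to minimize the spread of the rescaled data around a common master curve over the parameters (T_c, c), with η fixed at its known value; the cost function below is only one possible choice, and everything in it (the smoothing window, the optimizer, the variable names, the initial guess) is our own assumption rather than the authors' procedure.
import numpy as np
from scipy.optimize import minimize

def collapse_cost(params, T, M, L, eta):
    """Spread of M*L^(eta/2) vs ln(L) - c/sqrt|T - Tc| around a smooth master curve."""
    Tc, c = params
    t = np.abs(T - Tc)
    if c <= 0 or np.any(t < 1e-9):
        return np.inf
    x = np.log(L) - c / np.sqrt(t)        # log of the scaling variable L / e^(c/sqrt(t))
    y = M * L ** (eta / 2.0)
    order = np.argsort(x)
    y_sorted = y[order]
    window = 5                            # crude running average as the master curve
    smooth = np.convolve(y_sorted, np.ones(window) / window, mode="same")
    return np.mean((y_sorted - smooth) ** 2)

# example call with eta(T_c1) = 1/9 held fixed:
# res = minimize(collapse_cost, x0=[0.9, 1.0], args=(T, M, L, 1.0 / 9.0),
#                method="Nelder-Mead")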
§ PROBING BKT TRANSITION BY QUANTIFYING MC PROCEDURE
We reinvestigated the BKT transitions in the KIM with the finite-size scaling of the order parameter and its susceptibility in the previous section. However, as we demonstrated in Sec. <ref>, not all physical quantities, such as the specific heat, can correctly catch the transition points. Moreover, the order parameter may be hard to define for a system without prior knowledge or with computational difficulties. In this section, we aim to study the phase transitions employing universal quantities that quantify the MC procedure and do not depend on the details of the target system. The main idea is to quantitatively analyze the MC visited configurations based on the Hamming distances in the small data set of selected uncorrelated configurations.
§.§ Distribution of the Hamming distance
The distribution of the Hamming distance features distinct behaviors in the three phases, as demonstrated in Fig. <ref> (c) and the related text in Sec. <ref>. We are not content with qualitatively resolving the phases and further seek a proper scaling procedure for P( D). As specifically displayed in Fig. <ref>(a), the distribution of the Hamming distance is symmetric around 0.5, and the curves of P( D) are quite smooth. Therefore, we try to fit P( D)∈[0,0.5] at each fixed T by the summation of two Gaussian forms as:
G( D)= A/(Δ√(2π))exp[-(( D- D_0)/Δ)^2/2]
+A^'/(Δ^'√(2π))exp[-(( D- D_0^')/Δ^')^2/2],
where D_0 and D_0^' denote the two peak centers with the restriction D_0<D^'_0, Δ and Δ^' are the respective widths, and the parameters A and A^' weight the two peaks with A+A^'=1. Noting that there can only be one peak in the region [0,0.5] at large temperatures, we manually set A^'=0 for T>1. As shown in Fig. <ref>(b-d), this fitting nicely catches the original data.
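In practice this amounts to a standard nonlinear least-squares fit; a sketch using scipy is given below, where the fitting function encodes the constraint A+A^'=1 and the initial guess and bounds are our own illustrative choices rather than values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(D, A, D0, delta, D0p, deltap):
    """Sum of two normalized Gaussian peaks with weights A and 1 - A."""
    g1 = A / (delta * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((D - D0) / delta) ** 2)
    g2 = (1 - A) / (deltap * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((D - D0p) / deltap) ** 2)
    return g1 + g2

# fit P(D) on the interval D in [0, 0.5]; p0 is a rough initial guess
# popt, pcov = curve_fit(two_gaussians, D_vals, P_vals,
#                        p0=[0.6, 0.05, 0.02, 0.33, 0.05],
#                        bounds=([0, 0, 1e-4, 0, 1e-4], [1, 0.5, 0.5, 0.5, 0.5]))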
In the following, we focus on the evolution of the first peak, which starts from D_0=0 at zero temperature. The position D_0 and the effective height 1/Δ√(2π) as a function of T for various L are displayed in Fig. <ref> (a) and (b), respectively. For both curves, there is an anomaly close to T_c1 and a discontinuity at a temperature greater than T_c2. The latter rapidly decreases as L increases, and its position is always the same in the two panels in Fig. <ref>. This information on possible transitions is similar to the order parameter M and looks more promising for detecting transitions than the specific heat. Therefore, we try a similar scaling procedure for 1/Δ√(2π) as for the order parameter with a general BKT scaling form:
1/Δ√(2π)L^b= F(L/e^c/√(t)),
where b is the scaling exponent. Here we choose only 1/Δ√(2π) for the scaling since D_0 is bounded by 0.5. Different from the scaling of M and χ_M where the exponent η is known, here we have three parameters to be determined, and the search for the best data collapse is carried out in the three-dimensional parameter space {T_c,b,c}. Nevertheless, the minimization process generates a very good data collapse and critical points. Both critical points agree well with those obtained from the scaling of the order parameter and its susceptibility.
Noting that the discontinuity above T_c2 in Fig. <ref> may indicate the real critical point, we mark it as T^*_c2 and examine its extrapolated value in the thermodynamic limit. In the BKT transition, the finite-size shift of the critical point scales as ∝ 1/ln^2(L); thus in Fig. <ref>(a) we display the size-dependence of T^*_c2 on this scale. This extrapolation successfully catches the critical point, and the extrapolated T^*_c2(∞) agrees surprisingly well with the M scaling. We are then interested in whether one can take a similar extrapolation procedure from the order parameter. Here we extract a more distinct anomaly for the absolute value |M| in Fig. <ref>(a) from its susceptibility
χ_|M|=β/N(⟨ |M|^2⟩-⟨ |M|⟩^2).
The data of χ_|M| are displayed in Fig. <ref>(b), where T_c2^*(L) is determined by the position of the sharp peak. In Fig. <ref>(a), the extrapolation of T_c2 from χ_|M| yields the same critical T_c2 in the thermodynamic limit. In summary, quantifying the MC procedure with the distribution of D catches the correct critical points, and the P( D) peak features a critical behavior similar to that of the order parameter and its susceptibility.
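The extrapolation itself amounts to a linear fit of T^*_c2(L) against 1/ln^2(L); a minimal sketch (with hypothetical variable names) reads:
import numpy as np

def extrapolate_Tc(L_vals, Tc_star):
    """Fit T*_c2(L) = T_c2(inf) + a / ln(L)^2 and return the L -> infinity value."""
    x = 1.0 / np.log(np.asarray(L_vals, dtype=float)) ** 2
    a, Tc_inf = np.polyfit(x, Tc_star, 1)     # slope a, intercept T_c2(inf)
    return Tc_inf, a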
§.§ Intrinsic dimension
On the other hand, the process of MC sampling in the exponentially large configuration space shares a similar spirit with the growing field of machine learning (ML), which identifies universal properties of high-dimensional data sets from minimally processed data. Recently, ML ideas have motivated various applications in the context of statistical physics <cit.>. While most works focus on analyzing the dimension-reduction results of other methods, recent studies demonstrated that the reduction procedure itself can provide the same information <cit.>. Among them, the authors of Refs. <cit.> and <cit.> employ the concept of the intrinsic dimension I_d, which roughly measures the minimum number of variables required to describe the global features of a large data set <cit.>, and compute I_d of the MC thermal configurations to probe different kinds of phase transitions.
In this section, we follow the same approach as in Ref. <cit.> and compute I_d using the so-called two-NN method, which focuses only on the distances to the NN and NNN of each element in the data set, assuming that these two points are drawn uniformly from sufficiently small I_d-dimensional hyperspheres. Specifically, the data set of interest in the present work is a small fraction of the MC thermal configurations. As for P( D), we choose 200 uncorrelated configurations in each bin. For each configuration, the two NNs determined by the smallest (non-zero) Hamming distances define a ratio μ= D_2/ D_1, which obeys the distribution f(μ) = I_d μ^-(I_d+1). In terms of the cumulative distribution P_c(μ), the intrinsic dimension satisfies
I_d=-ln[1-P_c(μ)]/ln(μ),
so one can obtain I_d from a linear fit to this non-decreasing data collection. Different from P( D) in Sec. <ref>, where configurations from all bins are treated as a single data set, here I_d is computed in each bin, as for the physical quantities. One reason is that different bins might produce degenerate configurations, which would make the nearest neighbors ill-defined. Even in a single bin, this degeneracy may appear at very low temperatures, and ln[1-P_c(μ)] does not scale linearly with ln(μ), as displayed in Fig. <ref>(a). Nevertheless, we perform the same linear fitting for all temperatures, as the linear behavior looks reliable for the temperatures of interest where the phase transitions occur, as shown in panels (b-d) for T from 0.70 to 1.30 in the same figure. Repeating this fitting procedure for all MC bins, one can obtain the average I_d and its statistical error.
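A possible implementation of this two-NN estimate is sketched below (illustrative, not the authors' code; it assumes the same ±1 configuration arrays as above and a simple fit through the origin of -ln[1-P_c(μ)] versus ln(μ)).

```python
# Illustrative two-NN estimate of the intrinsic dimension from Hamming distances.
import numpy as np

def two_nn_intrinsic_dimension(configs):
    configs = np.asarray(configs)
    mu = []
    for i in range(len(configs)):
        d = np.mean(configs[i] != configs, axis=1)   # distances to all configurations
        d = np.sort(d[d > 0])                        # drop zero (degenerate) distances
        if len(d) >= 2:
            mu.append(d[1] / d[0])                   # NNN / NN distance ratio
    mu = np.sort(np.array(mu))
    pc = np.arange(1, len(mu) + 1) / len(mu)         # empirical cumulative P_c(mu)
    x = np.log(mu[:-1])                              # drop last point where 1 - P_c = 0
    y = -np.log(1.0 - pc[:-1])
    return np.sum(x * y) / np.sum(x * x)             # slope of y = I_d * x
```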
The results for I_d as a function of T for different system sizes are displayed in Fig. <ref>(a), where the curves at larger system sizes feature two anomalies that may correspond to the two critical points. For the first transition, the position of the anomaly is close to T_c1, as for all other quantities examined above. However, the second anomaly is far from T_c2 and hardly varies with L. Moreover, for smaller system sizes with L=12 and 24, I_d is almost flat with no apparent anomaly, since I_d may fail to capture the universal information for such a relatively small data set <cit.>. Excluding the data for the two smallest sizes, we perform the same scaling procedure as for the distribution of Hamming distances, with a BKT scaling form I_d L^b= F(L/ξ) and three unknown parameters to be determined. However, as shown in Fig. <ref>, the data collapse is not as good as for the previous quantities [see Figs. <ref> and <ref>], and the obtained critical points are inconsistent with the actual phase transitions, especially for the second one.
§ SUMMARY AND DISCUSSION
We numerically study the KIM and focus on its two consecutive BKT phase transitions with comprehensive classical MC simulations. After reinvestigating the phases and phase transitions employing physical quantities such as the magnetic order parameter and its susceptibility, we propose that the BKT phase transitions can also be characterized by information extracted from the MC procedure itself, in a new way that can be applied to many different models and phase transitions. Specifically, we first select a small set of uncorrelated configurations generated by the MC procedure, then measure the Hamming distances between every pair of configurations to build the target data collection. We demonstrate that the distribution of Hamming distances P( D) contains all the information on the phase transitions, and the critical points can be extracted using the same BKT scaling form for the effective height of the P( D) peak. Moreover, using the anomaly in either the height or the position of the P( D) peak, one can successfully obtain the transition point by a finite-size extrapolation.
We also compute the intrinsic dimension I_d from the nearest neighbors of the selected MC configurations, which was recently used to analyze phase transitions in the context of machine learning. However, compared to P( D), the value of I_d is unstable at low temperatures, and the data collapse procedure does not work well for either phase transition. For comparison and summary, we list the two critical points obtained from the different approaches in Table <ref>. As one can see, while P( D) yields the critical points with surprisingly high accuracy, I_d fails to capture the correct phase transitions.
Quantifying the MC process, rather than its results, to identify phase transitions has attracted increasing interest in recent years, especially in cooperation with ML ideas. On the other hand, for quantum systems with severe sign problems, such as fermionic systems or frustrated spin systems, quantum MC simulations cannot give meaningful physical quantities because of the negative weights of the sampled configurations. In this case, extracting information on phase transitions directly from these configurations is especially important. Although based on classical MC simulations, our findings can be useful for studying quantum systems with the same protocol. In fact, our procedure on the selected configurations makes no distinction between quantum and classical cases as long as the MC simulation is carried out on a configuration basis. For example, for the same lattice as in the present work but with quantum spins, which has strong frustration and a sign problem when calculating physical quantities with MC methods <cit.>, one can try to study the phase transitions following the same procedure. Of course, the quantum system with frustration is much more intractable, and the MC sampling itself can be unstable. We hope the present results can shed some light on these challenges, and research along this line deserves further investigation.
§ ACKNOWLEDGMENTS
N.M. thanks Kan Zhao for discussions and collaborations in related contributions. This research was supported by the National Natural Science Foundation of China (grant nos. 12004020, 12174167, 12247101), the 111 Project under Grant No.20063, and the Fundamental Research Funds for the Central Universities.
§ ORDER PARAMETER IN THE COMPLEX PLANE
The main text focuses on the two BKT phase transitions and the corresponding scaling approach to the critical points. Here we present our results on the phases characterized by the distribution of M=|M|e^iϕ in the complex plane {ReM, ImM} at different temperatures. At low temperatures, the system features the same phase as the ground state, where |M|=2/9 is constant and ϕ only takes six fixed values. This is displayed in Fig. <ref>(a) for numerical results at a low temperature T=0.65, which agree with the schematic pattern for the ground state in Fig. <ref>. When the temperature increases, a U(1) symmetry emerges with a nonzero but temperature-dependent |M|, as exhibited by the circle in the complex plane {ReM, ImM} in panel (c) for T=0.90. All order breaks down for sufficiently large T due to strong thermal fluctuations, the order parameter vanishes, |M|=0, and the system is in the disordered phase. For finite system sizes, as shown in Fig. <ref>(b) and (d), one also observes some intermediate patterns in the M distribution, which will disappear in the thermodynamic limit. For all temperatures, the M distribution is centrally symmetric about zero.
§ HAMMING DISTANCE OF THE SIX-CLOCK GROUND STATE
The KIM has a six-clock spin-ice ground state, as explicitly demonstrated in Fig. <ref>. The six-fold degenerate ground states satisfy M=|M|e^iϕ, with ϕ=nπ/3 and n=0,1,2,3,4,5. All possible Hamming distances can easily be obtained by counting the spins that differ between these six degenerate configurations. For example, considering the states with n=0 and n=1, the three spins that differ between them result in a Hamming distance D=1/3, as there are in total nine spins in the super unit cell. Repeating this counting procedure, it is easy to find that all NN pairs of the six states have D=1/3, all NNN pairs have D=2/3, and opposite pairs have D=1. Taking into account that at low temperatures the MC simulation stays around one of these configurations, which is metastable in the MC procedure and yields configurations with D=0, these possible values of D explain the sharp peaks and their positions at low temperatures in Fig. <ref>(c) and <ref>.
Beyond the possible values of D, we can also estimate the weights in P( D) at low temperatures. In the data set of uncorrelated MC thermal configurations, all six degenerate ground states have the same weight, with 36 possible combinations in total. The number of combinations giving D=0 (1) is 6, resulting in P( D)=1/6; each state has two NNs and two NNNs, thus P( D)=1/3 for D=1/3 and 2/3. Despite the thermal fluctuations that broaden the delta peaks, the peak heights of P( D) at low temperatures in Fig. <ref> confirm our estimate.
§ DATA COLLAPSE AND UNCERTAINTY
In this work we obtain the critical T_c, η, and c in the scaling form of Eq. (<ref>) from the best data collapse with the minimum error defined by the following cost function <cit.>
C_X=∑_j |X_j+1-X_j|/(max{X_j}-min{X_j})-1,
where X is the data collection of all M(L,T) [χ_M(L,T)] values for the different temperatures and system sizes. After sorting X_j in nondecreasing order with X_j≤ X_j+1, the minimum of C_X gives the smoothest curve through all the collected data. Since η at the critical point is known, we fix η=1/9 (1/4) when solving for the first (second) critical point, leaving only two parameters in the two-dimensional parameter space {T_c,c }.
In practice, one obtains the value of the cost function C_X(T_c,c) for each pair of fitted parameter values, and repeating this over the two-dimensional parameter space {T_c,c} gives the minimum of C_X and the best data collapse. As shown in Fig. <ref>, the cost function is unimodal in the target region of the {T_c,c} plane for both critical points and both physical quantities. Therefore, we can easily extract the unambiguous minimum of C_X and obtain the corresponding data collapse shown in Fig. <ref> of the main text. Note that all three parameters are unknown for the scaling of the P( D) height and the intrinsic dimension I_d, so the minimizations in Figs. <ref> and <ref> are carried out in three dimensions.
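The sketch below illustrates this procedure (not the authors' code; the grid ranges, the data layout, the fixed exponent, and the choice of reduced temperature t=(T-T_c)/T_c are placeholder assumptions). It evaluates the cost function on a grid of {T_c, c} for a fixed scaling exponent, using the BKT form quoted in the main text.

```python
# Illustrative data-collapse search over {T_c, c} for a fixed scaling exponent b,
# using the BKT form  Y * L**b = F( L / exp(c / sqrt(t)) ),  t = (T - T_c) / T_c.
import numpy as np

def collapse_cost(scaling_var, scaled_vals):
    """C_X = sum_j |X_{j+1} - X_j| / (max X - min X) - 1, with X sorted by L/xi."""
    X = np.asarray(scaled_vals)[np.argsort(scaling_var)]
    return np.sum(np.abs(np.diff(X))) / (X.max() - X.min()) - 1.0

def best_collapse(data, Tc_grid, c_grid, b):
    """`data` maps L -> (T_array, Y_array); returns (min cost, T_c, c)."""
    best = (np.inf, None, None)
    for Tc in Tc_grid:
        for c in c_grid:
            sv, X = [], []
            for L, (T, Y) in data.items():
                t = (T - Tc) / Tc
                keep = t > 0                        # the form e^{c/sqrt(t)} needs t > 0
                xi = np.exp(c / np.sqrt(t[keep]))   # BKT correlation length
                sv.extend(L / xi)
                X.extend(Y[keep] * L ** b)
            cost = collapse_cost(np.array(sv), np.array(X))
            if cost < best[0]:
                best = (cost, Tc, c)
    return best
```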
Equation (<ref>) and the above procedure do not provide the uncertainty of the target parameters. Here the uncertainty is estimated by performing three data collapses, with the data collections X_j, X_j+σ_X_j, and X_j-σ_X_j, and the corresponding critical temperatures denoted as T_c, T_c^+, and T_c^-. The error of T_c is then defined as σ_T_c=max(|T_c^+-T_c|,|T_c^- - T_c|), and in the main text [see Fig. <ref> as an example] the critical temperature is expressed as T_c±σ_T_c. The other target parameters of the scaling follow the same procedure and notation.
|
http://arxiv.org/abs/2307.07366v1 | 20230714141525 | Reconstructing Three-decade Global Fine-Grained Nighttime Light Observations by a New Super-Resolution Framework | ["Jinyu Guo", "Feng Zhang", "Hang Zhao", "Baoxiang Pan", "Linlu Mei"] | eess.IV | ["eess.IV"] |
Reconstructing Three-decade Global Fine-Grained Nighttime Light Observations by a New Super-Resolution Framework

Jinyu Guo^1,2, Feng Zhang^1,2, Hang Zhao^2,3, Baoxiang Pan^2,4, and Linlu Mei^5

^1 Department of Atmospheric and Oceanic Sciences & Institute of Atmospheric Sciences, Fudan University, Shanghai 200438, China
^2 Shanghai Qizhi Institute, Shanghai 200030, China
^3 Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
^4 Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China
^5 Institute of Environmental Physics, University of Bremen, Bremen 28359, Germany

Correspondence: Feng Zhang ([email protected])
Satellite-collected nighttime light provides a unique perspective on human activities, including urbanization, population growth, and epidemics. Yet, long-term and fine-grained nighttime light observations are lacking, leaving the analysis and applications of decades of light changes in urban facilities undeveloped. To fill this gap, we developed an innovative framework and used it to design a new super-resolution model that reconstructs low-resolution nighttime light data into high resolution. The validation of one billion data points shows that the correlation coefficient of our model at the global scale reaches 0.873, which is significantly higher than that of other existing models (maximum = 0.713). Our model also outperforms existing models at the national and urban scales. Furthermore, through an inspection of airports and roads, only our model's image details can reveal the historical development of these facilities. We provide the long-term and fine-grained nighttime light observations to promote research on human activities. The dataset is available at <https://doi.org/10.5281/zenodo.7859205>.
Satellite-collected nighttime light (NTL) data can depict the spatial distribution and strength of artificial light sources on the earth's surface, providing a distinct perspective for studying various facets of human activities <cit.>. The Defense Meteorological Satellite Program's Operational Linescan Systems (DMSP-OLS) are sensors installed on a family of satellites <cit.>. These sensors produced the longest-term NTL archives, from 1992 to the present, and have become the main source of NTL data <cit.>. DMSP-OLS is widely used in the analysis of carbon emissions and light pollution <cit.>, estimation of gross domestic product (GDP) and population <cit.>, observation of conflicts and disasters <cit.>, and mapping of built-up areas and impervious surfaces <cit.>. Nevertheless, despite its usefulness, DMSP-OLS data suffers from coarse granularity, which hinders its applicability for in-depth analysis of urban facilities <cit.>.
One reason for the coarse granularity is its low spatial resolution of 1 km <cit.>. More importantly, it has problems of overglow effect and saturation <cit.>. The overglow effect refers to the significant impact of bright pixels on surrounding areas <cit.>. This effect not only results in images appearing too smooth, obscuring details within cities, but also causes some areas without light sources, such as the sea surface, to be illuminated by nearby cities <cit.>. The saturation problem means that the quantization capacity of DMSP-OLS is only 8 bits, so the brightness of urban centers is stable at 63 and has not changed <cit.>. In response to the overglow effect, thresholding methods and classification algorithms were used to identify and eliminate bright pixels without light sources <cit.>. Additionally, some studies assumed that the overglow effect was governed by the point spread function and thus used deconvolution filters to sharpen the image <cit.>. Another assumption was that the pixel-level overglow effect was linearly cumulative, so a self-adjusting model was used to correct the image <cit.>. In dealing with the saturation problem, multi-source data represented by vegetation indices is considered correlated with NTL, and many studies fused DMSP-OLS data with the multi-source data to increase urban interior details <cit.>. All of these efforts tried to cope with the problem of coarse granularity. However, most of them were based on strong assumptions, which were not necessarily consistent with facts. Therefore, their results are inadequate, and it is necessary to develop new methods to effectively solve the coarse granularity issue.
Another widely used NTL sensor is the Suomi National Polar-orbiting Partnership Satellite’s Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) <cit.>. It spans from April 2012 to the present and has a spatial resolution of approximately 500 m <cit.>, as well as onboard calibration <cit.>. Compared with the previous generation of sensors, its overglow effect is much less significant <cit.>. Additionally, its quantization capacity is 14 bits, so there is no saturation problem <cit.>. Overall, NPP-VIIRS is more advanced, and its data has the advantage of being continuous and fine-grained. However, the short time range makes it difficult to be used for the long time-series analysis and applications.
In recent years, some studies have attempted to reconstruct DMSP-OLS data into NPP-VIIRS data to advantageously combine the long-term temporality of DMSP-OLS with the fine-grained nature of NPP-VIIRS. One such study fused the Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and DMSP-OLS data to obtain the vegetation-adjusted NTL urban index (VANUI), which contains more details. A power function was then used to establish the regression relationship between VANUI and NPP-VIIRS <cit.>. Subsequently, Random Forest and Multilayer Perceptron models were adopted in the reconstruction process. The input variables included DMSP-OLS data, a Digital Elevation Model (DEM), and a road map <cit.>. The models they employed adhered to a point-to-point paradigm, in which the spatial relationship among pixels is disregarded. As a result, their application is confined to the city scale, and their accuracy warrants further improvement. Super resolution, a technique that reconstructs blurred images into clear images, has made great progress in the last decade with the advent of deep learning <cit.>. AutoEncoder, a deep learning model, was modified and used to reconstruct DMSP-OLS data to NPP-VIIRS data <cit.>. This is an image-to-image model that utilizes the spatial relationship among pixels. It is the first model to create global VIIRS-like images. Nevertheless, owing to its reliance on the MODIS Enhanced Vegetation Index (EVI), DMSP-OLS archives before 2000 were not exploited. Moreover, the model's generalization ability degrades in untrained years, as shown in the Result Section, which causes unclear output images in earlier years. In general, all current models establish a direct mapping from DMSP-OLS data and associated variables to NPP-VIIRS data of the same year. Despite some advancements in this field, there remains a dearth of long-term and fine-grained observations of NTL.
This study proposes a novel approach to effectively reconstruct global fine-grained NTL observations covering a period of three decades. We show that both statistical and visual performances of our methodology are significantly better than those of existing methods through validation at the urban, national, and global scales. Moreover, our model reveals the long-term historical changes in some certain facilities that cannot be detected by existing models. The main contributions of this study are as follows:
* We derived a new super-resolution framework and designed a new deep learning model, DeepNTL, to reconstruct DMSP-OLS data into NPP-VIIRS data;
* We evaluated our model and found it is superior to existing models at different scales, and can reveal the decades of light changes in urban facilities;
* To our knowledge, this is the first time a global fine-grained NTL dataset from 1992 to the present has been achieved and released, with the highest accuracy so far.
The rest of this paper is organized as follows: Section 2 introduces materials and the continuity correction for DMSP-OLS; Section 3 describes the new super-resolution model; Section 4 illustrates implementation and experiments; Section 5 evaluates our result comprehensively; Section 6 presents the summary and conclusions.
§ MATERIALS
There are 6 satellites carrying DMSP-OLS sensors, and they are coded as F10, F12, F14, F15, F16, and F18 <cit.>. From 1992 to 2013, their overpass times were between 19:30 and 22:00, as represented in blue in Figure <ref>. After 2013, the orbits of F15 and F16 drifted and their overpass times changed to between 02:00 and 04:30 after midnight, as shown in orange <cit.>. The Earth Observation Group of the Colorado School of Mines released Version 4 DMSP-OLS Nighttime Lights Time Series[https://eogdata.mines.edu/products/dmsp/]. There are three types of products in the dataset: cf_cvg, avg_vis, and stable_lights.avg_vis. The cf_cvg product records the number of cloud-free observations for each pixel in each year. The avg_vis product is the annual average light without any filtering. The stable_lights.avg_vis product represents the annual average product in which ephemeral light and background noise have been eliminated. The last type was adopted in this study because its main light sources are built-up areas, which are the focus of most of the related studies. The dimension of a global DMSP-OLS image is (16801,43201), and its values range from 0 to 63.
Due to lack of onboard calibration and the orbital drift, DMSP-OLS data is inconsistent between years and between satellites <cit.>. It is necessary to perform an effective inter-calibration to enhance the continuity from 1992 to the present. We used the spatial and temporal variation coefficients to select calibration fields. The details are presented in Appendix <ref>.
The Earth Observation Group also released the NPP-VIIRS dataset[https://eogdata.mines.edu/products/vnl/] from April 2012, as shown in green. The overpass time of NPP-VIIRS is 01:30 after midnight <cit.>. There are several versions of the dataset, and the average-masked one was adopted in this work. This version is the annual average radiance product in which biomass burning, aurora, and most of the background noise were removed. The dimension of a global NPP-VIIRS image is (33601, 86401), which is twice as large as that of a global DMSP-OLS image when the units digit is ignored. The unit of its value is nanoWatts/cm^2/Sr.
§ DEEPNTL MODEL
In the previous models, multi-source data was reconstructed into NPP-VIIRS data using the direct mapping approach. However, this approach led to several issues, including (a) The need to discard parts of DMSP-OLS historical archives to align the times of different variables; (b) Insufficient accuracy; and (c) Degraded generalization ability in untrained years. Therefore, to fully exploit the historical archives of the DMSP-OLS, these auxiliary variables should be circumvented. In addition, to improve the accuracy and maintain a good generalization ability in untrained years, the direct mapping approach should also be circumvented.
We proposed a new super-resolution framework, learning annual difference, for NTL reconstruction. If the NPP-VIIRS image of a certain year can be selected as a reference, and the super-resolution reconstruction for the target year can be regarded as a brightness change on the reference image, then the super-resolution problem can be simplified. Moreover, the basis for the brightness change on the NPP-VIIRS image can be learned from the brightness difference in the DMSP-OLS images of different years. In this way, the above direct mapping approach and auxiliary variables can be avoided.
x^' is considered as the reference year, and a DMSP-OLS image of year x^' is denoted as DMSP_x^' y^' whose satellite code is y^'. x is considered as the target year, and a DMSP-OLS image of year x is represented as DMSP_xy whose satellite code is y. DMSP_x^' y^' and DMSP_xy are located in the same place. Function F_1 is the feature extractor of DMSP-OLS images. The feature difference between DMSP_x^' y^' and DMSP_xy represents the annual difference of DMSP-OLS, as expressed in Eq.<ref>:
F_2(DMSP_x^' y^',DMSP_x y)=F_1(DMSP_x^' y^')-F_1(DMSP_xy),
where F_2(DMSP_x^' y^',DMSP_x y) is the annual difference of DMSP-OLS; the first item in the right is the feature of DMSP_x^' y^'; the second item in the right is the feature of DMSP_xy.
Similarly, VIIRS_x^' denotes the NPP-VIIRS image of the reference year x^'. It covers the same area as DMSP_x^' y^'. VIIRS_x, also in the same place, is the NPP-VIIRS image of the target year x. Function F_3 is the feature extractor of the NPP-VIIRS images. The feature difference between VIIRS_x^' and VIIRS_x is the annual difference of NPP-VIIRS, as expressed in Eq.<ref>:
F_4(VIIRS_x^',VIIRS_x )=F_3(VIIRS_x^')-F_3(VIIRS_x),
where F_4(VIIRS_x^',VIIRS_x ) is the annual difference of NPP-VIIRS; the first item in the right is the feature of VIIRS_x^'; the second item in the right is the feature of VIIRS_x.
Eq.<ref> is rearranged as Eq.<ref>. If there is a transform function H, as shown in Eq.<ref>, which can transform F_2(DMSP_x^' y^',DMSP_x y) into F_4(VIIRS_x^',VIIRS_x ), then we can rewrite Eq.<ref> as Eq. <ref>. The new equation shows that, with the help of H, we can combine the NPP-VIIRS feature F_3(VIIRS_x^') of the reference year and the annual difference of DMSP-OLS F_2(DMSP_x^' y^',DMSP_x y) to obtain the NPP-VIIRS feature F_3(VIIRS_x) of the target year. Then, Eq.<ref> is substituted into Eq.<ref> to obtain Eq.<ref>.
F_3(VIIRS_x)=F_3(VIIRS_x^')-F_4(VIIRS_x^',VIIRS_x )
F_4(VIIRS_x^',VIIRS_x )=H(F_2(DMSP_x^' y^',DMSP_x y))
F_3(VIIRS_x)=F_3(VIIRS_x^')-H(F_2(DMSP_x^' y^',DMSP_x y))
F_3(VIIRS_x)=F_3(VIIRS_x^')-H(F_1(DMSP_x^' y^')-F_1(DMSP_xy))
To obtain the NPP-VIIRS image instead of the feature of the target year, a reconstruction function G is needed, as shown in Eq.<ref>. This is the initial prototype of our super-resolution model.
VIIRS_x=G(F_3(VIIRS_x^')-H(F_1(DMSP_x^' y^')-F_1(DMSP_xy)))
The minus sign in Eq.<ref> represents the ideal linear scenario; however, the actual scenario may be more complicated. To better reflect the complex and non-linear nature of neural network models, we assume a new function H^∗ that learns to capture the annual difference of DMSP-OLS and then transforms it into that of NPP-VIIRS. Similarly, we assume a new function G^∗ that learns to subtract the annual difference of NPP-VIIRS from its features in the reference year to obtain its features in the target year, and then reconstructs the features into an image. In this way, Eq.<ref> is obtained and shown below. This is the final prototype of our model. In the deep learning model, G^∗, F_3, H^∗, and F_1 are separate modules, and their parameters can be learned from data.
VIIRS_x=G^∗(F_3(VIIRS_x^'),H^∗(F_1(DMSP_x^' y^'),F_1(DMSP_xy)))
According to Eq.<ref>, a new deep learning model, DeepNTL, was proposed, as shown in Fig.<ref>. The sizes of DMSP_x^' y^' and DMSP_xy are (1, h, w), which indicates that the channel number, height, and width are 1, h, and w, respectively. When extracting the features of DMSP-OLS images, module F_1 doubles the heights and widths of images, because NPP-VIIRS images are twice as wide and high as that of DMSP-OLS images. Therefore, the sizes of the extracted features F_1(DMSP_x^' y^') and F_1(DMSP_x y) are (1, 2h, 2w).
F_1(DMSP_x^' y^') and F_1(DMSP_xy) are concatenated along the channel dimension to obtain a tensor with a size of (2,2h,2w). When learning the annual difference of DMSP-OLS and transforming it into that of NPP-VIIRS, the module H^∗ increases the tensor’s channel number to c.
The size of VIIRS_x^' is (1,2h,2w). The module F_3 increases its channel number to c when extracting its features. Therefore, the tensor F_3(VIIRS_x^') with a size of (c,2h,2w) is created by this module.
Then, the NPP-VIIRS feature F_3(VIIRS_x^') of the reference year and its annual difference
H^∗(F_1(DMSP_x^' y^'), F_1(DMSP_xy)) are concatenated to obtain a new tensor whose size is (2c,2h,2w). The module G^∗ decreases the channel number from 2c to 1 during the reconstruction. The final output VIIRS_x with the size of (1,2h,2w) is the NPP-VIIRS image of the target year.
In this study, the modified Residual Network (ResNet) was used for the modules H^∗, F_3, and G^∗ <cit.>. The Residual Channel Attention Network (RCAN) was used for the module F_1 <cit.>. The architectures and hyperparameters of the ResNet and RCAN are described further in Appendix B.
§ IMPLEMENTATION AND EXPERIMENTS
§.§ Datasets
The years from 2012 to 2019 are the intersecting times of DMSP-OLS and NPP-VIIRS. The DMSP-OLS products in the intersecting years include DMSP_2012F18, DMSP_2013F15, DMSP_2013F18, DMSP_2014F15, DMSP_2015F15, DMSP_2016F15, DMSP_2016F16, DMSP_2017F15, DMSP_2017F16, DMSP_2018F15, DMSP_2018F16, DMSP_2019F15, and DMSP_2019F16. The products of NPP-VIIRS during this period include VIIRS_2012, VIIRS_2013, VIIRS_2014, VIIRS_2015, VIIRS_2016, VIIRS_2017, VIIRS_2018, and VIIRS_2019. To eliminate the background noise in NPP-VIIRS data, the pixels less than 0.5 were assigned as 0.0. In addition, a few pixels have abnormally high values in NPP-VIIRS images. These abnormal values may result from some unstable factors such as flames from the burning of natural gas, and hence, they were usually replaced using specific values in previous studies. In this work, it was found that the value corresponding to 99.99% quantile is 496 nanoWatts/cm^2/Sr among all lit pixels in all global NPP-VIIRS images. Therefore, the pixels greater than 496 nanoWatts/cm^2/Sr were replaced by it.
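A minimal sketch of this preprocessing is given below (array names and dtype handling are placeholder assumptions; the thresholds follow the values stated above).

```python
# Illustrative NPP-VIIRS preprocessing: suppress background noise below 0.5 and
# replace abnormally bright pixels by the 99.99% quantile value of 496 nW/cm^2/sr.
import numpy as np

def preprocess_viirs(viirs, noise_floor=0.5, clip_value=496.0):
    out = np.asarray(viirs, dtype=np.float32).copy()
    out[out < noise_floor] = 0.0          # remove background noise
    out[out > clip_value] = clip_value    # cap abnormally high radiances
    return out
```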
The reference year x^' was selected between 2013 and 2019. The selection was based on two considerations: (a) For the super-resolution reconstruction between 1992 and 2011, it is beneficial to select a reference year close to this period; (b) The model should not only learn the light change with the increase of year, but also learn that with the decrease of year. Therefore, 2014 was determined as the reference year, DMSP_2014F15 was used as DMSP_x^' y^', and VIIRS_2014 was used as VIIRS_x^'.
The global images are too large to be directly input into the model. Therefore, it is necessary to extract tiles from the global images. The size of the DMSP-OLS tile is (1,128,128), and that of the NPP-VIIRS tile is (1,256,256). The strategy of global random sampling was used to determine the upper left points of tiles, as shown in Fig.<ref>. This strategy can increase the number of tiles, and thus enables the model to learn more possible situations. We generated 30,000 random points all over the world. For each point, the percentages of lit pixels in its DMSP_2014F15 tile and the VIIRS_2014 counterpart were both larger than 1%.
The selected points were used to extract tiles from DMSP_2013F15 to DMSP_2019F16, and hence, each point extracted 12 DMSP-OLS tiles. Each DMSP-OLS tile was combined with the NPP-VIIRS counterpart in the same position and same year to form an example. Therefore, 360,000 examples were created in total. 95% examples among them were used for training, and 5% examples were used for validation. The datasets of 2012 were used for independent testing.
§.§ Loss Function
L_1 loss was used in this study. A set is denoted as {V̂IIRS_x^n,VIIRS_x^n}_n=1^N, which contains N reconstructed images V̂IIRS_x^n and ground truth images VIIRS_x^n. The loss function can be expressed as Eq.<ref>:
L(Θ)=1/N∑_n=1^N‖V̂IIRS_x^n-VIIRS_x^n‖_1,
where L is the loss function; Θ is the set of parameters of the model; N represents the number of examples; n is the index of each example.
§.§ Evaluation Metrics
Pearson correlation coefficient (r), peak signal-to-noise ratio (PSNR), and structural similarity measure (SSIM) were used to evaluate the consistency between a super-resolution image (SR) and a ground truth NPP-VIIRS image (GT). r can be expressed as follows:
r=∑_i=1^I(p_i-μ_GT)(p̂_i-μ_SR)/√(∑_i=1^I(p_i-μ_GT)^2)√(∑_i=1^I(p̂_i-μ_SR)^2),
μ_GT=1/I∑_i=1^I p_i,
μ_SR=1/I∑_i=1^I p̂_i,
where μ_GT is the average pixel value of a GT image; μ_SR represents the average pixel value of the corresponding SR image; I is the number of pixels, and i is the pixel index; p_i and p̂_i represent the i-th pixel values in the GT and SR images, respectively. r ranges from −1 to 1, and a larger value indicates a stronger correlation. PSNR is expressed as follows:
PSNR=10 log_10(MAX^2/MSE),
MSE=1/I∑_i=1^I(p_i-p̂_i)^2,
where MSE represents the mean square error; MAX is the maximum possible value. A higher value of PSNR represents a smaller difference between the GT and SR images. Another metric, SSIM, is defined as follows:
SSIM=(2 μ_GTμ_SR+(0.01MAX)^2)(2 σ_GT,SR+(0.03MAX)^2)/(μ_GT^2+μ_SR^2+(0.01 MAX)^2)(σ_GT^2+σ_SR^2+(0.03MAX)^2),
σ_GT=√(1/I∑_i=1^I(p_i-μ_GT)^2),
σ_SR=√(1/I∑_i=1^I(p̂_i-μ_SR)^2),
σ_GT,SR=1/I∑_i=1^I(p_i-μ_GT)(p̂_i-μ_SR),
where σ_GT and σ_SR are the standard deviations of pixel values in the GT and SR images, respectively; σ_GT,SR is the covariance between the GT and SR images. The range of SSIM is between 0 and 1, and a larger value of SSIM indicates a higher consistency between the GT and SR images.
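The sketch below mirrors these definitions in NumPy (illustrative; it follows the global formulation of SSIM given above rather than a windowed version, and it assumes MAX is set to the 496 nW/cm^2/sr cap used in preprocessing, which the text does not state explicitly).

```python
# Illustrative global r / PSNR / SSIM between a ground-truth (gt) and a
# super-resolved (sr) image, following the formulas above.
import numpy as np

def pearson_r(gt, sr):
    g, s = gt.ravel(), sr.ravel()
    g, s = g - g.mean(), s - s.mean()
    return np.sum(g * s) / (np.sqrt(np.sum(g ** 2)) * np.sqrt(np.sum(s ** 2)))

def psnr(gt, sr, max_value=496.0):
    mse = np.mean((gt - sr) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

def global_ssim(gt, sr, max_value=496.0):
    c1, c2 = (0.01 * max_value) ** 2, (0.03 * max_value) ** 2
    mu_g, mu_s = gt.mean(), sr.mean()
    var_g, var_s = gt.var(), sr.var()
    cov = np.mean((gt - mu_g) * (sr - mu_s))
    return ((2 * mu_g * mu_s + c1) * (2 * cov + c2)) / (
        (mu_g ** 2 + mu_s ** 2 + c1) * (var_g + var_s + c2))
```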
§.§ Training and Inference
The model's parameters were randomly initialized. The Adam algorithm was used as the optimizer. The initial learning rate was set to 0.0001. The learning rate was reduced to 95% of its value whenever the validation loss did not decrease for 3 epochs. We used 8 NVIDIA A10 graphics cards to train our DeepNTL in parallel for approximately three weeks. The batch size for each card was set to 4. During training, each epoch took more than 4 h, and each card was fully utilized. During inference, the global DMSP-OLS images from 1992–2019 were split into tiles and then fed into the trained model. After all the super-resolution results were generated, they were re-organized into global images.
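These settings correspond, for example, to the following PyTorch configuration (a sketch with a stand-in model; the choice of ReduceLROnPlateau is an assumption consistent with the description of reducing the learning rate to 95% after 3 stagnant epochs).

```python
# Illustrative optimizer, scheduler, and loss configuration for training.
import torch

model = torch.nn.Conv2d(1, 1, 3, padding=1)        # stand-in for the DeepNTL model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.95, patience=3)
criterion = torch.nn.L1Loss()
# After each epoch: scheduler.step(validation_loss) with the current validation L1 loss.
```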
§.§ Baseline Models
The bilinear model is a commonly used, simple baseline model. The RCAN is a baseline model based on convolution operation. The SwinIR is another baseline model for image restoration based on Swin Transformer <cit.>. Additionally, the AutoEncoder is the first model used to convert the DMSP-OLS images into the NPP-VIIRS images, and its productions are open-access, so the production of 2012 was used for comparison in this study. All of them belong to the direct mapping paradigm.
§ RESULTS AND DISCUSSION
§.§ Evaluating visual consistency of super resolution
Visual consistency was evaluated using image textures. Six big cities from different continents were used for the detailed analysis, including Shanghai in Asia, Melbourne in Oceania, Athens in Europe, Johannesburg in Africa, Phoenix in North America, and Rio de Janeiro in South America. Figure <ref> shows the global reconstructed NTL image in 1992 using our DeepNTL model as well as its regional images of these cities. Among these cities, Shanghai, Johannesburg, and Rio de Janeiro are located in developing countries, and Phoenix, Athens, and Melbourne are located in developed countries. Johannesburg and Phoenix are interior cities and the others are coastal cities. These cities have different socioeconomic backgrounds and geographical conditions, and hence are representative for the evaluation. Additionally, the NTL images of these cities are bright with distinctive textures, which facilitates comparison of the models' performances.
Shanghai borders the East Sea to the east, Hangzhou Bay to the south, inland to the west, and the Yangtze River estuary to the north. Figure <ref> shows the reconstructed and GT images in Shanghai from 1992–2012. Columns and rows represent different years and models, respectively. The GT image is presented in the last row. The images produced by the bilinear model, displayed in the first row, suffer from a couple of issues. Firstly, its images are blurry, making it very difficult to distinguish details such as streets or buildings. Secondly, the model's results inherit the overglow effect from the original DMSP-OLS images. As an example, the vast sea area to the east of Shanghai is erroneously illuminated Since 2005, even though there is no stable human activity on the sea. Additionally, the images suffer from a saturation problem, resulting in no brightness variation in the urban center. The second row presents the images of the RCAN model, which displays some roads outside the urban center. The model partially alleviates the overglow effect by maintaining the darkness of the eastern sea. It also overcomes the saturation problem to exhibit some brightness variations in the urban center. However, its images are still indistinct within the urban center. The images produced by SwinIR, shown in the third row, share similar characteristics with RCAN, although they are slightly smoother. The AutoEncoder's images are displayed in the fourth row. The model successfully overcomes the overglow effect and the saturation problem, because it does not illuminate the pixels on the East Sea and shows some brightness variations in the urban center. Compared to the previous models, AutoEncoder enhances textures a lot. However, its textures are not entirely consistent with the GT image, making it difficult to discern streets and buildings. Additionally, as the model's input is the combination of DMSP-OLS image and MODIS EVI, and the latter starts from 2000, it cannot reconstruct the images in 1992 and 1998. Furthermore, the images inherit the water mask from MODIS EVI, resulting in some bright pixels in the urban center being set to 0. The water masks appear as south-north dark curves between 121.33°E and 121.57°E in the images produced by AutoEncoder. Finally, the fifth row shows the reconstructed images of the DeepNTL model. Its images successfully overcome the overglow effect and saturation problem, while also being free of any water mask, making them complete. As it does not require any auxiliary data, the entire DMSP-OLS archives since 1992 can be fully utilized. Moreover, its images are clear enough to display streets and other details in all years, and the image in 2012 is highly consistent with the GT image.
Unprecedented fine-grained and long-term NTL analysis can now be supported by DeepNTL. For example, point a in the GT image of Figure <ref> indicates Shanghai Pudong International Airport, the largest hub airport in East China, which was completed in 1999. Its lights are very bright and conspicuous. According to the DeepNTL images, this spot did not show up in 1992 and 1998, and became increasingly clear in 2005 and 2012. Hence, what DeepNTL shows matches the facts. Furthermore, point b indicates Dishui Lake, an artificial, circular lake. It was mostly completed in 2003, with ongoing refinements since then. It appears as a small ring in the GT image, formed by the street lamps surrounding the lake. In the DeepNTL images, this small ring did not exist before 1998; by 2005 it had appeared, and it became evident in 2012. This is also consistent with the facts. Lastly, point c indicates the East Sea Bridge, a sea-crossing bridge connecting Shanghai with Zhoushan in the south, which was put into service in 2005. In the DeepNTL images, the bridge did not show prior to 1998 and appeared after 2005. This is again in line with the facts. Such long-time analysis of individual facilities is only possible with DeepNTL. In contrast, other models cannot show these details at all. DeepNTL has the same advantages in the analysis of Athens and Rio de Janeiro. To save space, the images of these two cities are presented in Figures C1 and C2 of Appendix C.
Another advantage of DeepNTL is that it is able to maintain a strong generalization ability for earlier years that are far from the training years. As depicted in Figure <ref>, the GT image of Phoenix shows a dark polygon area near point a that represents the Phoenix Mountain Preserve. It is much darker than the surrounding area as it has maintained its original ecology over the years. The images of RCAN and SwinIR were able to show the dark mountain reserve in 2005 and 2012, but it disappeared from their images of 1992 and 1998. This is due to the decline in the generalization ability of the two models when they are applied in the early years. In addition, the AutoEncoder image of 2012 was able to display the dot-matrix-like block layout, which is similar to that of the GT image. However, in 2005, its images became blurred and were unable to show the block layout clearly. This, too, is caused by the decline in the generalization ability of the model. Notably, DeepNTL can clearly display the mountain reserve every year and maintain the dot-matrix-like block layout over the years. Incidentally, this city expands in the southeast direction. This advantage can also be found in Melbourne and Johannesburg. To save space in the article, the images of these two cities are presented in Figure C3 and C4 of [AppendixC]Appendix C.
Overall, DeepNTL has the strongest visual consistency when compared to other models, its advantages include: (a) It can reconstruct DMSP-OLS images of all years; (b) The reconstructed images are clear, and the textures are very close to that of GT images; (c) It maintains good generalization ability for earlier years and can accurately detect annual changes for individual facilities.
§.§ Evaluating statistical consistency of super resolution
§.§.§ Statistical consistency at urban scale
Figure <ref> shows the pixel relationship between the model outputs and GT images at an urban scale in 2012. All the pixels in these cities were used to calculate the evaluation metrics. To magnify the most concentrated parts, we zoomed in to the subrange between 0 and 200. The maximum value of the bilinear model is 63, as it does not make a substantive change to the DMSP-OLS images. The points of the bilinear model are concentrated on the abscissas due to the overglow effect. The points of RCAN, SwinIR, and AutoEncoder are more dispersed and have different degrees of overestimation or underestimation. In contrast, all the points of DeepNTL are concentrated along the diagonals.
For each city, DeepNTL has the highest evaluation metrics. For instance, in Shanghai, SwinIR is better than other direct mapping models, with r of 0.79, PSNR of 34.553, and SSIM of 0.828. Our DeepNTL outperforms SwinIR, whose r is 0.967, PSNR is 42.548 and SSIM is 0.978. In Melbourne, r, PSNR, and SSIM of AutoEncoder are 0.862, 39.198, and 0.926 respectively, and higher than those of other direct mapping models. DeepNTL is better than AutoEncoder, with r of 0.981, PSNR of 47.46 and SSIM of 0.985. Similar results can be observed for Athens, Johannesburg, Phoenix, and Rio de Janeiro. The metrics of direct mapping models have unstable rankings in different cities. In contrast, the metrics of DeepNTL consistently rank first in each city and are significantly higher than those of other models. This is due to the ability of DeepNTL to learn the annual difference, which provides a distinct advantage.
§.§.§ Statistical consistency at global and country scale
The performances at global and country scales were evaluated by statistical consistency. Figure <ref> shows the pixel relationships between the output values of the models and GT values at these two scales in 2012. The first row represents the global comparison. One billion pixels were randomly selected from nearly three billion pixels worldwide for evaluation. Such a tremendous number of selected pixels is enough to illustrate the global performance. The metrics of the bilinear model are the lowest among all models. Its global r is 0.395 and PSNR is 42.333, which indicates that there is a huge gap between the global images of bilinear model and GT. AutoEncoder improves the relationship significantly, with a r value of 0.677 and PSNR of 52.652. Since the maximum value of AutoEncoder is less than 400, there is a blank area on the right of its global scatter diagram. RCAN is slightly better than AutoEncoder with a r value of 0.699 and PSNR of 53.231, and its data range is consistent with that of GT. SwinIR is better than RCAN, with a r value of 0.713 and PSNR of 53.425. Compared with the bilinear model, these direct mapping models see evident improvement at the global scale, but there is also obvious dispersion in their scatter diagrams. DeepNTL model has the best performance at the global scale with a r value of 0.873 and PSNR of 55.899. Its data range is consistent with that of GT. The model's points are concentrated on the diagonal, and its dispersion degree is the lowest compared with that of other models.
To evaluate statistical consistency at the country scale, China, Australia, Greece, South Africa, the United States, and Brazil were selected for comparison. All the pixels within each country were used to produce scatter diagrams. DeepNTL maintains the first place in each country. For example, in China, SwinIR performs better than other direct mapping models, with a r value of 0.786, PSNR of 53.155, and SSIM of 0.995. However, DeepNTL outperforms SwinIR with a r value of 0.896, PSNR of 55.526, and SSIM of 0.997. In Australia, AutoEncoder is better than other direct mapping models, with a r value of 0.797, PSNR of 59.416, and SSIM of 0.999. But DeepNTL surpasses AutoEncoder with a r value of 0.954, PSNR of 65.903, and SSIM of 1.0. In Greece, South Africa, the United States, and Brazil, DeepNTL still has the highest evaluation metrics.
§ SUMMARY AND CONCLUSIONS
NTL has played an irreplaceable role in analyzing human activity. DMSP-OLS provides the longest NTL historical archives, starting from 1992. NPP-VIIRS is the new-generation NTL sensor introduced in 2012, with a higher spatial resolution that gives it more application potential. The inconsistency between these two kinds of sensors results in the lack of a long-term, fine-grained NTL dataset. This problem has persisted without an effective solution for a long time. We introduced a novel super-resolution framework based on the concept of learning the annual difference. Using this framework, we developed a new model called DeepNTL, which is specifically designed for NTL data. We created a large dataset comprising 360,000 image examples to train our model. Through visual and statistical evaluations, we demonstrated that DeepNTL surpasses baseline models across multiple scales. In particular, DeepNTL is the only model that can accurately capture the dynamics of infrastructure such as airports and roads.
Although our DeepNTL model and product prove significant advantages over other models on various fronts, the temporal resolution of our product is annual. Fortunately, the monthly DMSP-OLS and daily NPP-VIIRS have been released in recent years. Moreover, some new satellite datasets, e.g. the daily Luojia dataset since 2018 with 130 m spatial resolution, are also available. These new datasets can be used to produce long-term NTL datasets with higher spatiotemporal resolution in the future by our DeepNTL. Our future work also includes using DeepNTL products to conduct large-scale surveys of the long-term changes in global infrastructure, such as airports, roads, bridges, and so on.
For the first time, the long-term and fine-grained NTL observation becomes a reality. The DeepNTL product is a valuable extension of NPP-VIIRS, which provides reliable NTL data for earlier years. Further, it is open access to the public. Users can easily combine future NPP-VIIRS annual data with the DeepNTL product after removing background noise.
The long-term and fine-grained nighttime light dataset is available at <https://doi.org/10.5281/zenodo.7859205>.
JG developed the methodology and software, performed visualization, and drafted the manuscript; FZ conceived the study, performed validation and formal analysis, revised the manuscript, supervised the project, and administered the project; HZ performed investigation and provided resources, and revised the manuscript; BP performed validation and visualization, and revised the manuscript; LM performed formal analysis, and revised the manuscript.
The authors declare that they have no competing interest.
The authors would like to thank Shanghai Qizhi Institute. They also would like to thank the Earth Observation Group of the Colorado School of Mines for providing nighttime light datasets.
In addition, they appreciate all the related studies.
§ INTER-CALIBRATION FOR DMSP-OLS
§.§ The selection of calibration fields
The homogeneity at both spatial and temporal dimensions matters for ideal calibration fields <cit.>. Hence, in addition to the spatial variation coefficient, we used a temporal variation coefficient to measure the temporal stability of the whole period.
GDMSP_xy is a global DMSP-OLS image, in which x is the year and y is the satellite code. With each pixel as a center and three surrounding pixels as the kernel size, the spatial variation coefficient is calculated according to Eq.<ref>:
vc_i^s = σ_i^s/μ_i^s,
where vc_i^s is the spatial variation coefficient of the kernel centered on pixel i; σ_i^s represents the standard deviation of the kernel; μ_i^s is the average light within the kernel. In this way, the spatial variation coefficient image SVC_xy for each GDMSP_xy was obtained. Then, we sorted all the pixel values of all SVC_xy, and took the commonly used 1/4 quantile as the spatial threshold. Pixels lower than this threshold are more uniform in space and were assigned 1. In contrast, those higher than the threshold were assigned 0. After that, the spatial mask SM_xy for each SVC_xy was acquired. Finally, the total spatial mask TSM was calculated by multiplying all the SM_xy. The TSM represents the pixels with high spatial uniformity.
All the GDMSP_xy were stacked together along the channel dimension to form a thick image GDMSP_thick. The temporal variation coefficient was calculated using all channel values for each pixel, as shown in Eq.<ref>:
vc_i^t= σ_i^t/μ_i^t,
where vc_i^t is the temporal variation coefficient for pixel i; σ_i^t is the standard deviation of all channel values at this pixel; μ_i^t denotes the average, and thus, the temporal variation coefficient image TVC was obtained. We also used the 1/4 quantile as the temporal threshold, and then, binarized TVC in a similar way to SVC_xy. After that, the temporal mask TM was created, which represents pixels having a high temporal stability.
In addition, each GDMSP_xy has some saturated pixels with a saturated value of 63. The saturated pixels cannot represent true light values, hence, such pixels must be eliminated. The unsaturated mask USM_xy for each GDMSP_xy was obtained by setting pixels equivalent to 63 as 0 and setting other pixels as 1. Subsequently, we multiplied all the USM_xy to obtain the total unsaturated mask TUSM.
Finally, the calibration fields (CF) were obtained by making an intersection between the total spatial mask (TSM), temporal mask (TM), and total unsaturated mask (TUSM) as expressed in Eq.<ref>:
CF=TSM×TM×TUSM
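The following sketch illustrates this selection (not the authors' code; it uses SciPy's uniform_filter for the 3×3 neighbourhood statistics and takes the quantile-derived thresholds as inputs rather than recomputing them over all years).

```python
# Illustrative calibration-field selection from a stack of annual DMSP-OLS images
# of shape (n_years, H, W): spatial/temporal variation coefficients plus an
# unsaturated mask, combined as CF = TSM * TM * TUSM.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_mask(image, threshold):
    img = image.astype(np.float64)
    mu = uniform_filter(img, size=3)                      # 3x3 neighbourhood mean
    sigma = np.sqrt(np.maximum(uniform_filter(img ** 2, size=3) - mu ** 2, 0.0))
    vc = np.divide(sigma, mu, out=np.full_like(mu, np.inf), where=mu > 0)
    return (vc < threshold).astype(np.uint8)              # 1 = spatially uniform

def calibration_fields(stack, spatial_thr, temporal_thr):
    tsm = np.ones(stack.shape[1:], dtype=np.uint8)        # total spatial mask
    tusm = np.ones_like(tsm)                              # total unsaturated mask
    for img in stack:
        tsm *= spatial_mask(img, spatial_thr)
        tusm *= (img < 63).astype(np.uint8)               # drop saturated pixels (DN = 63)
    mu_t = stack.mean(axis=0)
    vc_t = np.divide(stack.std(axis=0), mu_t,
                     out=np.full(mu_t.shape, np.inf), where=mu_t > 0)
    tm = (vc_t < temporal_thr).astype(np.uint8)           # temporal mask
    return tsm * tm * tusm                                # calibration fields CF
```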
Of a total of 725,820,001 pixels in a global DMSP-OLS image, 94,737 pixels constitute calibration fields, as shown in Figure <ref> (A). The calibration fields are mainly located in the Northern Hemisphere because most of the NTL is released by countries in this region. The calibration fields located in North America are shown in Figure <ref> (B). They are mainly distributed in eastern United States and southern Canada. These developed areas completed most of the infrastructure construction in the twentieth century, and hence, the NTL in these areas has remained temporally stable and spatially uniform during the past 30 years. Therefore, the calibration fields are densely distributed in these areas. As shown in Figure <ref> (C), the dense distribution of calibration fields is also found in western Europe. Previous studies chose the Sicily Island, Italy, as the calibration field. However, based on our method, it is found that only few pixels remain temporally stable and spatially uniform in Sicily. In fact, Belgium has the densest distribution of calibration fields in western Europe. In addition, the calibration fields are also distributed in the United Kingdom, France, Germany, and some other countries. As shown in Figure <ref> (D), China has a sparse distribution of calibration fields because it has a vast territory and its cities developed rapidly in recent decades.
After the calibration fields were determined, based on Elvidge <cit.>, the quadratic polynomial function was used to correct the continuity of the DMSP-OLS. Fitted parameters and determination coefficients are presented in table <ref>.
§.§ The improved continuity of DMSP-OLS
The global total light values (TLV) in time series can reveal the continuity of DMSP-OLS images. In addition, it has been proven that TLV is well correlated with GDP <cit.>. As shown in Figure <ref> (A), the global GDP has been continuously increasing during the considered period, while the TLVs of original DMSP-OLS images fluctuate significantly and lack continuity. Figure <ref> (B) indicates the TLVs of the images calibrated using Elvidge’s method, which used the Sicily Island as the calibration field <cit.>. The TLVs between 1995 and 2005 are noticeably overestimated. This leads to a wide gap between the TLV and global GDP curves. Moreover, the TLVs after 2014 are underestimated, and hence, there is another wide gap between the TLV and global GDP curves. Figure <ref> (C) presents the TLVs of images calibrated using our method. The TLV increases continuously with the increase in global GDP, and there is no significant underestimation or overestimation. The good correlation between TLV and global GDP indicates that our method can promote the continuity of DMSP-OLS images significantly, and it is better than that of Elvidge’s method.
§ MODEL ARCHITECTURES AND HYPERPARAMETERS
§.§ ResNet Architecture
The modified ResNet is shown in Fig.<ref>. The sizes of its input, middle, and output tensors are (inpC,2h,2w), (midC,2h,2w), and (outC,2h,2w), respectively. inpC, midC and outC are different in H^∗, F_3 and G^∗. The basic structures of the modified ResNet are some continuously stacked residual blocks (ResBlock), as presented in the bottom of Fig.<ref>. Our modification lies in a long skip connection which sums the input of the first residual block and the output of the last residual block. The long skip connection facilitates a deeper model. If inpC is not equal to midC, an additional convolutional layer (Optional Conv) is needed in the long skip connection to change the tensor’s channel number; otherwise, Optional Conv is not needed. In the tail of the ResNet, a convolutional layer (Conv), a batch norm layer (Norm), and an activation function (ReLU) are used to produce the output.
A certain ResBlock_s is shown in the top of Fig.<ref>. It consists of two groups of Conv, Norm, and ReLU. The key point is a short skip connection, which sums the input of the first Conv and the output of the second Norm. This short skip connection enables the neural network to learn easily. Similarly, if inpC and midC are not equal in ResBlock_1, an Optional Conv is also needed in the short skip connection.
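A condensed PyTorch sketch of this modified ResNet is given below (simplified and not the released code; channel counts and the number of blocks are passed in as parameters, and the exact placement of the second ReLU after the short skip is an assumption based on the description above).

```python
# Illustrative modified ResNet with short and long skip connections.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_ch, mid_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.BatchNorm2d(mid_ch))
        # optional 1x1 conv on the short skip when the channel number changes
        self.skip = nn.Conv2d(in_ch, mid_ch, 1) if in_ch != mid_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class ModifiedResNet(nn.Module):
    def __init__(self, inp_ch, mid_ch, out_ch, num_blocks):
        super().__init__()
        blocks = [ResBlock(inp_ch, mid_ch)] + [ResBlock(mid_ch, mid_ch) for _ in range(num_blocks - 1)]
        self.blocks = nn.Sequential(*blocks)
        # optional conv on the long skip when inp_ch differs from mid_ch
        self.long_skip = nn.Conv2d(inp_ch, mid_ch, 1) if inp_ch != mid_ch else nn.Identity()
        self.tail = nn.Sequential(nn.Conv2d(mid_ch, out_ch, 3, padding=1),
                                  nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.tail(self.blocks(x) + self.long_skip(x))
```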
§.§ RCAN Architecture
The size of the input and output tensors of RCAN are (1, h, w) and (1,2h,2w), respectively. As shown in the bottom of Fig.<ref>, it can be divided into four parts, including shallow feature extraction, deep feature extraction, upscale module, and reconstruction part. The shallow feature extraction increases the channel number of the input tensor from 1 to dim by the Conv operation.
The main part of the RCAN is the deep feature extraction. It further extracts features by stacking G residual groups (RG). To deepen the model, a long skip connection, which sums the input of RG_1 and the output of the Conv immediately after RG_G, is required. A certain RG_g is shown in the middle of Fig.<ref>. Essentially, it consists of B residual channel attention blocks (RCAB). A medium skip connection sums the input of the RCAB_g,1 and the output of the Conv immediately after RCAB_g,B. A certain RCAB_g,b is presented in the upper left of Fig.<ref>. It first uses a Conv to extract features, then uses a Channel Attention to weight each channel of the features. Subsequently, a short skip connection sums the weighted features and the input of RCAB_g,b. The Channel Attention is shown in the upper right part of the figure. In this block, the height and width of the input tensor shrink to 1 through a global average pooling (Pooling). Then, the channel number shrinks to dim/e through a Conv and a ReLU. After that, a pair of Conv and Sigmiod restores the channel number to dim, and makes the value of each element between 0 and 1. Thus, the weight of each channel can be obtained.
The upscale module occurs after the deep feature extraction, as shown in the bottom of Fig.<ref>. It consists of a Conv and a periodic shuffling operator (Pixel Shuffle). The Pixel Shuffle reorders the tensor elements to double the height and width of its input tensor. The last part of the RCAN is reconstruction, in which the tensor’s channel number shrinks to 1 by a Conv. Finally, the output of the RCAN with a size of (1,2h,2w) is obtained.
§.§ Hyperparameters
h and w were both set to 128. For the module F_3, inpC, midC, outC, and S were set to 1, 32, 32, and 16, respectively. For the module H^∗, these four hyperparameters were 2, 32, 32, and 32, respectively. For the module G^∗, they were 64, 64, 1, and 32, respectively. In the module F_1, dim is 64 and G and B were both 6. In addition, the reduction ratio e in the channel attention of RCAN was set to 16.
Kernel Size, Padding and Stride are three key hyperparameters of a convolution operation. In the convolutional layers of the channel attention, they were set to 1, 1, and 0, respectively. For the other convolutional layers in the model, Kernel Size was set to 3, and the zero-padding strategy was used to maintain the heights and widths of the tensors.
§
As shown in Figure <ref>, point a in the GT image of Athens indicates Leo Eateiou Road, a seaside road with Salamina Bay to the west. Among these models, only DeepNTL shows it clearly. Point b indicates Athens International Airport, whose construction began in 1996. In the DeepNTL image, the lights at this location were dim in 1992 and brightened significantly after 1998. This is consistent with the facts. In Figure <ref>, point a in the GT image of Rio de Janeiro indicates the President Costa e Silva Bridge, which straddles Guanabara Bay and connects the cities of Rio de Janeiro and Niteroi. The bridge was completed in 1974. Each year's DeepNTL image shows the bridge clearly, while other models produce unclear images.
As shown in Figure <ref>, point a in the GT image of Melbourne indicates Princes Freeway, and point b indicates Sydney Road. The bilinear, RCAN, and SwinIR models cannot display them. AutoEncoder can show the two roads only in 2012, while it fails to show them in 2005 due to the decline of its generalization ability. DeepNTL is able to clearly display these two roads each year because its generalization ability is stable. In Figure <ref>, point a in the GT image of Johannesburg indicates the N1 Western Bypass, which opened in 1975. It presents an approximate circular arc shape. Point b of Johannesburg indicates the Ben Schoeman Freeway, which opened in 1968. This freeway connects Johannesburg with Pretoria in the northeast. Similarly, the images of the first three models are too blurry to clearly show these two roads. AutoEncoder can show them in 2012; however, the decline of its generalization ability resulted in a loss of image clarity in 2005. Only DeepNTL can clearly display these two roads during these years.
|
http://arxiv.org/abs/2307.05969v1 | 20230712073326 | QCD effective charges from low-energy neutrino structure functions | [
"Tanjona Rabemananjara"
] | hep-ph | [
"hep-ph"
] |
QCD effective charges from low-energy
neutrino structure functions
Tanjona R. Rabemananjara
Department of Physics and Astronomy, Vrije Universiteit, NL-1081 HV Amsterdam
Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands
We present a new perspective on the study of the behavior of the strong coupling α_s(Q^2)
– the fundamental coupling underlying the interactions between quarks and gluons as described
by Quantum Chromodynamics (QCD) – in the low-energy infrared (IR) regime.
We rely on the NNSFν determination of neutrino-nucleus structure functions
valid for all values of Q^2 from the photoproduction to the high-energy region
to define an effective charge following the Gross-Llewellyn Smith (GLS) sum rule.
As a validation, our predictions for the low-energy QCD effective charge
are compared to experimental measurements provided by JLab.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
Introduction.
The study of (anti-)neutrino-nucleus interactions plays a crucial role in the interpretation
of ongoing and future neutrino experiments which ultimately will also help improve our general
understanding of the
strong interactions as described by Quantum Chromodynamics (or QCD in short). Different types of
interactions occur depending on the neutrino energies E_ν probed. The one of particular relevance
to QCD is inelastic neutrino scattering, which occurs at energies above the resonance region,
for E_ν≳𝒪(10) GeV and when the invariant mass of the final states satisfies
W ≳ 2 GeV. In such a regime, the inelastic neutrino scattering is composed of nonperturbative
and perturbative regimes referred to as shallow-inelastic scattering (SIS) and deep-inelastic scattering
(DIS), respectively.
The main observables of interest in neutrino inelastic scattering are the differential cross-sections
which are expressed directly as linear combinations of structure functions
F_i,A^ν / ν̅ (x, Q^2) with x the Bjorken variable, Q^2 the momentum transfer, and
A the atomic mass number of the proton/nuclear target.
In the DIS regime, the neutrino structure functions are factorized as a convolution between the
parton distribution functions (PDFs) and hard-partonic cross-sections. The latter are calculable
to high order in perturbation theory while the former have to be extracted from experimental
data. On the other hand, in the SIS regime in which nonperturbative effects dominate, theoretical
predictions of neutrino structure functions do not admit a factorised
expression in terms of PDFs. Various
theoretical frameworks have been developed to model these low-Q^2 neutrino structure functions,
e.g. <cit.>, but all of them present limitations.
In <cit.> we presented the first determination of neutrino-nucleus structure
functions and their associated uncertainties that is valid across the entire range of Q^2 relevant
for neutrino phenomenology, dubbed NNSFν. The general strategy comes down to dividing the Q^2 range
into three distinct but interconnected regions. These regions refer respectively to the low-, intermediate-,
and large-momentum transfers. At low momentum transfers Q^2 ≲ Q^2_ dat in which
nonperturbative effects occur, we parametrize the structure functions in terms of neural networks
(NN) based on the information provided by experimental measurements following
the NNPDF approach <cit.>. In the intermediate momentum
transfer region, Q^2_ dat≲ Q^2 ≲ Q^2_ thr, the NN is fitted to the DIS
predictions for convergence. Finally, at large momentum transfers, Q^2 ≳ Q^2_ thr,
the NN predictions are replaced by the pure DIS perturbative computations.
Such a framework allows us to provide more reliable predictions of the low-energy neutrino structure functions –
we refer the reader to <cit.> for more details.
The NNSFν enables the robust, model-independent evaluation of inclusive inelastic
neutrino cross-sections for energies from a few tens of GeV up to the several EeV
relevant for astroparticle physics <cit.>, and
in particular fully covers the
kinematics of present <cit.> and future <cit.> LHC neutrino experiments.
Aside from its relevance in studying neutrino physics, the NNSFν framework may also potentially
be used as a tool to strengthen our understanding of the nonperturbative regions of QCD owing to its
predictions in the low-energy regime. It is commonly understood that studying the theory of the strong
interactions in the Infrared (IR) regime is necessary to understand both high-energy and hadronic
phenomena therefore providing sensitivity to a variety of Beyond the Standard Model (BSM) scenarios.
One aspect that deserves a closer look in studying long-range QCD dynamics is the behavior of the
strong coupling α_s due to its special property as an expansion parameter for first-principle
calculations. In the perturbative regime, the uncertainty in the value of α_s is known at
the sub-percent level (Δα_s / α_s = 0.85%, <cit.>). At
low-Q^2, however, its determination is subject to large uncertainties mainly due to the lack of
theoretical frameworks that can correctly accommodate for the nonperturbative effects.
A number of approaches have been explored in the literature to study the coupling in the nonperturbative
regime including lattice QCD or the Anti-de-Sitter/Conformal Field Theory (AdS/CFT) duality implemented
using QCD's light-front quantization. In the following, we use Grunberg's effective charge approach
defined from the Gross-Llewellyn Smith sum rule. In perturbative QCD, the effective
charge can be calculated from the perturbative series of an observable – usually defined in terms of
a sum rule – truncated to its first order in α_s. Such a truncation is motivated by the fact that,
at leading order, the observable is independent of the renormalization scheme (RS). One of
the main advantages of the effective charge with respect to other approaches is that several
experiments measure the effective coupling α_s^ eff(Q^2), against which the theoretical
computations can be compared.
Here first we briefly review the Gross-Llewellyn Smith sum rule
and verify that it is satisfied using the neutrino structure function predictions from the NNSFν
determination. We then use the NNSFν framework to compute the effective charge defined from the
sum rule and compare the results to experimental measurements extracted at
JLab <cit.>.
The Gross-Llewellyn Smith sum rule.
The neutrino structure function x F_3^ν N must satisfy the Gross-Llewellyn Smith (GLS)
sum rule <cit.> in which its unsubtracted dispersion relation has to be equal to the
number of valence quarks inside the nucleon N. Such a dispersion relation could also be
extended to the neutrino-nucleus interactions in which the GLS sum rule writes as follows:
GLS(Q^2, A) ≡1/2∫_0^1 d x (F_3^ν A + F_3^ν̅ A)(x, Q^2) = 3 ( 1 + ∑_k=1^3 (α_s(Q^2)/π)^k c_k(n_f) ) + Δ^ HT/Q^2,
where n_f is the number of active flavors at the scale Q^2. The terms inside the
parentheses on the right-hand side represent the perturbative contribution to the leading-twist
part whose coefficients c_k have been computed up to 𝒪(α_s^4). The
Δ^ HT-term instead represents the power suppressed non-perturbative corrections,
see <cit.> for a recent review. Notice that the form of the perturbative part in
Eq. (<ref>) is convenient because, as opposed to many observables in pQCD,
it depends neither on x nor on the mass number A.
The low-energy experimental data from which the NNSFν neutrino structure functions were determined
do not provide measurements in the low-x region, and therefore the evaluation of
Eq. (<ref>) largely depends on the modeling of the small-x extrapolation region.
In our predictions, the behavior at small-x is inferred from the medium- and large-x regions via
the preprocessing factor x^1-α_i whose exponents are fitted to the data. In addition,
due to the large uncertainties governing the small-x region, we have to truncate the integration at
some x_ min value. The truncated sum rule should however converge to the pQCD prediction
in the limit x_ min→ 0. The truncated sum rule reads
GLS(Q^2, A, x_ min) ≡1/2∫_x_ min^1 d x
(F_3^ν A + F_3^ν̅ A) (x, Q^2),
for different values of the lower integration limit x_ min and different nucleon/nuclear
targets.
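As an illustration, a minimal numerical evaluation of the truncated sum rule could look as follows; the structure-function callables are placeholders standing in for interpolators of the NNSFν replica predictions.

```python
import numpy as np

def truncated_gls(f3_nu, f3_nubar, q2, a, x_min, n_x=500):
    """GLS(Q^2, A, x_min) = 1/2 * int_{x_min}^1 dx [F_3^{nu A} + F_3^{nubar A}](x, Q^2).

    f3_nu and f3_nubar are placeholder callables returning F_3(x, Q^2) for nucleus A,
    e.g. interpolators built from a single NNSFnu replica.
    """
    # Sample in log x so the low-x end, which drives the truncation effect, is well resolved.
    x = np.logspace(np.log10(x_min), 0.0, n_x)
    integrand = 0.5 * (f3_nu(x, q2, a) + f3_nubar(x, q2, a))
    return np.trapz(integrand, x)

# Evaluating this replica by replica and taking the 68% interval over replicas gives the
# uncertainty bands quoted for the NNSFnu curves.
```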
In Fig. <ref> we display the results of computing the truncated GLS sum rule in
Eq. (<ref>)
using our NNSFν predictions. The results are shown for different lower integration limits
x_ min = 10^-3, 10^-4 and for different nuclei A=1, 56. For reference, we compare
the NNSFν calculations with the NLO fit to the CCFR data <cit.>,
the CERN-WA-047 measurements <cit.>,
and to the pure exact QCD predictions. All the results, except for the NNSFν, are always the
same in all the panels, since they are independent of both x_ min and A. In the case of the QCD
predictions, the Q^2 dependence is entirely dictated by the running of the strong coupling α_s(Q^2).
As in the previous section, the error bars on the NNSFν represent the 68% confidence level intervals
from the N_ rep = 200 replicas fit.
Based on these comparisons, we can conclude that there is in general good agreement between
the different results. In particular, the NNSFν and pure QCD predictions are in perfect agreement
when the lower integration limit is taken to be x_ min = 10^-3. Even more remarkably, the
slope of the GLS sum rule, which in the QCD computation is purely dictated by the running of
the strong coupling α_s(Q^2), is correctly reproduced by the NNSFν predictions. The agreement
in central values slightly worsens when the lower integration limit is lowered down to x_ min = 10^-4.
Such a deterioration can also be seen in the increase of the uncertainties. As alluded earlier, such
a behavior is expected due to the fact that NNSFν does not have direct experimental constraints below
x ≈ 10^-3. Notice that the observations above hold for the different nuclei considered.
QCD effective charges.
In order to fully understand the short- and long-range interactions, knowing the strong coupling α_s in
the nonperturbative domain (or equivalently in the IR regime) is crucial. Further arguments can be
put forth that knowing the IR-behavior of α_s is necessary to fully understand the mechanism for dynamical
chiral symmetry breaking <cit.>.
However, studying the strong coupling in the IR domain is very challenging since standard perturbation theory
cannot be used. In the following section, we explore an attempt to extend the perturbative domain using our NNSFν
framework to provide predictions for the low-energy strong coupling α_s (Q^2).
In the framework of perturbative QCD, the strong coupling – which at leading order can be approximated as
α_s(Q^2) = 4 π / [β_0 ln(Q^2 / Λ^2)] – predicts a diverging behavior at the Landau pole
when Q^2 →Λ^2. Such a diverging nature is not an inconsistency of
perturbative computations per se, since the pole is located in a region far beyond the range of validity
of perturbative QCD. Instead, the origin of such a divergence is the absence of nonperturbative terms in
the series that cannot be captured by high order perturbative approximations. That is, the Landau singularity
cannot be cured by simply adding more terms to the perturbative expansion. The Landau Pole however is unphysical
(with the value of Λ^2 defined by the renormalization scheme) and this is supported by the fact that
observables measured in the domain Q^2 < Λ^2 display no sign of discontinuity or unphysical behavior.
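The divergence is easy to visualise from the leading-order expression itself; the short sketch below is purely illustrative (the value of Λ^2 is scheme-dependent and the number used here is only a placeholder).

```python
import numpy as np

def alpha_s_lo(q2, lam2=0.04, nf=3):
    """Leading-order running coupling alpha_s(Q^2) = 4*pi / (beta_0 * ln(Q^2 / Lambda^2)).

    lam2 is Lambda^2 in GeV^2 (scheme-dependent, illustrative value), nf the number of flavours.
    """
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * np.pi / (beta0 * np.log(q2 / lam2))

# The expression blows up as Q^2 -> Lambda^2 from above (the Landau pole) and is undefined
# below it -- precisely the behaviour the effective-charge definitions discussed next avoid.
```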
Several approaches have been explored to study the low-energy running of the coupling, each with its advantages,
justifications, and caveats. A prominent one, based on Grunberg's effective charges – the approach we
pursue here – provides a definition of the coupling that behaves as α_s^ pQCD at large-Q^2
but remains finite at small values of Q^2. Since the regime is extended down to small-Q^2, the effective
charge incorporates nonperturbative contributions that appear as higher-twist. Such an effective charge
is explicitly defined in terms of physical observables that can be computed in the perturbative QCD domain. An
example of such observable that has been very well studied in the literature is the effective charge
α_s^ Bj (Q^2) defined from the polarized Bjorken sum rule <cit.>.
Such an observable has important advantages in that it has a simple perturbative series and is a non-singlet
quantity implying that some Δ-resonance contributions cancel out.
In the following study, we use the effective charge α_s^ GLS (Q^2) defined from the GLS sum rule
introduced above. Following the Grunberg's scheme, the definition of the effective charge α_s^ GLS (Q^2)
which follows from the leading order of Eq. (<ref>) writes as:
GLS(Q^2, A) ≡ 3 ( 1 - α_s^ GLS (Q^2, A)/π) ⟺α_s^ GLS (Q^2, A) = π( 1 - GLS(Q^2, A)/3).
In the perturbative domain Λ^2 ≪ Q^2, we expect the effective charges from the Bjorken and GLS
sum rules to be equivalent α_s^ GLS (Q^2)=α_s^ Bj (Q^2) up to 𝒪(α^2_ MS).
In addition, at zero momentum transfer we expect α_s^ GLS (Q^2=0)=α_s^ Bj (Q^2=0)=π. The latter
kinematic limit originates from the fact that cross-sections are finite quantities: when
Q^2 → 0, also x=Q^2/(2Mν) → 0, and the support of the integrand in Eq. (<ref>) must
vanish; therefore we have the following relations:
lim_Q^2 → 0GLS(Q^2, A) = 0 ⟺α_s^ GLS (Q^2=0)=π.
It is important to emphasize that Eq. (<ref>) is directly related to the right-hand side
of Eq. (<ref>). We can see from this definition of the coupling that both the short-distance
effects – those within the parentheses of Eq. (<ref>) – and the long-distance nonperturbative
QCD interactions – represented by the Δ^ HT term – are incorporated into the expression of
the effective coupling α_s^ GLS (Q^2).
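In practice, extracting the effective charge from a GLS value is a one-line operation, sketched below; combined with the truncated integral above, it yields one α_s^ GLS(Q) curve per replica.

```python
import numpy as np

def alpha_gls(gls_value):
    """Effective charge from the GLS sum rule: alpha_s^GLS = pi * (1 - GLS / 3).

    In the kinematic limit GLS -> 0 (Q^2 -> 0) this gives alpha_s^GLS -> pi,
    while GLS -> 3 recovers the free-parton limit alpha_s^GLS -> 0.
    """
    return np.pi * (1.0 - np.asarray(gls_value) / 3.0)

# The spread of alpha_gls over the NNSFnu replicas provides the uncertainty band on the
# effective charge shown in the comparison with the JLab measurements.
```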
Fig. <ref> displays the effective coupling α_s^ GLS (Q) computed from
the NNSFν predictions. As for the sum rules, the effective charge is truncated at some values x_ min
in order to not be influenced by the small-x extrapolation region. The results are shown for A=208 and
for two different values of x_ min = 10^-3, 10^-4. Our predictions are compared to the
experimental measurements from JLab <cit.>
which measures the Bjorken effective charge α_s^ Bj(Q) using a polarized electron beam. Since the
JLab results depend neither on x nor on the atomic mass number A, they are the same in all the panels.
This insensitivity of the results w.r.t the value of A reflects the expectation that both the GLS and Bjorken sum rules
are related to the nucleon valence sum rules and therefore take the same values irrespective of the value
of A entering the calculations.
Based on these comparisons, we can infer that the NNSFν predictions and the JLab experimental measurements
agree very well down to Q ∼ 0.5 GeV. As Q → 0 we can see that the effective coupling α_s^ GLS/π
measured at JLab converges to 1, as dictated by the kinematic limit, while our predictions converge to ∼ 0.6.
Perhaps this result would slightly improve if the structure functions were forced to satisfy the sum rules during
the fit and if more experimental measurements were available to constrain the small-x region. As before, the
decrease in the value of the lower integration limit x_ min induces a significant increase in the
uncertainties.
Conclusions and outlook.
In the first part of the manuscript, we reviewed a new framework – referred to as NNSFν – for the determination
of the neutrino-nucleus structure functions in the inelastic regime. In particular, we stressed its
capability to provide predictions for low-energy neutrino interactions. As a verification of the methodology,
we compared the GLS sum rule computed from such predictions with
experimental measurements, finding very good agreement.
In the second part, we used the NNSFν determination as a tool to understand the running of the coupling
α_s(Q^2) which encodes at the same time the perturbative dynamics at large momentum transfers and the
nonperturbative dynamics underlying the color confinement at small momentum transfers. The use of standard
perturbative computations to study the coupling at low-Q^2 yields erroneous results as it predicts a diverging
behavior due to the existence of an unphysical pole. Owing to the lack of theoretical formalism that
correctly accounts for the nonperturbative effects, studying the strong coupling in the IR regime is a
challenging task.
A prominent approach that resolves the ambiguity in defining the strong coupling in the nonperturbative
regime is the use of effective charges defined directly from a leading order perturbatively computable
observable. In our study we defined the effective charge α_s^ GLS(Q) from the GLS sum
rule which at large momentum transfers reproduces the perturbative computations and at low momentum
transfers is expected to converge to π. Our predictions yield comparable results to experimental
measurements – accounting for the uncertainties – down to Q ∼ 0.5 GeV. However, our
predictions do not fully satisfy the kinematic limit α_s(Q=0)/π = 1 at zero momentum
transfers. This issue of convergence might be resolved by imposing the neutrino structure
functions to satisfy the sum rules during the fit. From this we conclude that further investigation
is needed in that direction in order to fully understand the Q ∼ 0 behavior.
Acknowledgments.
The author is grateful to Juan Rojo for the careful reading of the manuscript. T. R. is supported by an
ASDI (Accelerating Scientific Discoveries) grant from the Netherlands eScience Center.
|
http://arxiv.org/abs/2307.04366v1 | 20230710064648 | A New Wind Farm Active Power Control Strategy to Boost Tracking Margins in High-demand Scenarios | [
"Simone Tamaro",
"Carlo L. Bottasso"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cs.CE",
"cs.SY",
"eess.SY"
] |
Explanation Needs in App Reviews: Taxonomy and Automated Detection
Max Unterbusch
University of Cologne
[email protected]
Mersedeh Sadeghi
University of Cologne
[email protected]
Jannik Fischbach
Netlight Consulting GmbH | fortiss GmbH
[email protected]
Martin Obaidi
Leibniz University Hannover, Software Engineering Group
[email protected]
Andreas Vogelsang
University of Cologne
[email protected]
August 12, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================
This paper presents a new active power control algorithm designed to maximize the power reserve of the individual turbines in a farm, in order to improve the tracking accuracy of a power reference signal. The control architecture is based on an open-loop optimal set-point scheduler combined with a feedback corrector, which actively regulate power by both wake steering and induction control. The methodology is compared with a state-of-the-art PI-based controller by means of high-fidelity LES simulations. The new wind farm controller reduces the occurrence of local saturation events, thereby improving the overall tracking accuracy, and limits fatigue loading in conditions of relatively high-power demand.
§ INTRODUCTION
The growth of wind energy penetration in the electricity mix requires new control algorithms to keep the electrical grid in balance <cit.>. When operating in active power control (APC) mode, a wind farm intentionally extracts less than the available power from the wind, in order to meet the demands of the transmission system operator (TSO). The application of APC to a wind farm is not trivial and introduces new challenges. In fact, the maximum available power depends on ambient conditions, which vary dynamically in uncertain ways <cit.>. Additionally, wind may suddenly drop, possibly leaving insufficient power reserves to track a given reference signal <cit.>. In a wind farm, the situation is further complicated by the presence of low-momentum turbulent wakes, which are responsible for power losses and fatigue loading of waked turbines <cit.>. Various solutions have been proposed to mitigate wake effects, such as induction and yaw control <cit.>. The latter consists of “steering” the wake away from downstream rotors, and its effectiveness for power boosting has been demonstrated numerically <cit.>, experimentally in the wind tunnel <cit.>, as well as in field trials <cit.>.
Different APC approaches have been presented in the literature. An open-loop APC strategy is discussed in <cit.>. The authors showed that the lack of feedback poses a limitation on the power tracking accuracy of the method, especially in conditions of strong waking. Furthermore, an equal dispatch of power sharing among the turbines proved to be suboptimal, due to the different local power reserves induced by the heterogeneity of the flow.
Recently, various authors have used model predictive control (MPC) for APC <cit.>. The main drawback of such methods lies with the need of a dynamic farm flow model, which can be computationally expensive.
Simpler control structures based on classical PI (proportional integral) loops have also been extensively investigated <cit.>. While lacking the sophistication of MPC, such methods do not need a wind farm flow model and can provide fast response times with simple implementations. The APC PI controller of ref. <cit.> operates on the tracking error and adjusts the power demands to follow a reference, sharing power in an arbitrary, static manner among the turbines. The method includes gain scheduling based on the fraction of saturated wind turbines, defined as the ones whose available power is smaller than the demanded one. This method was improved in ref. <cit.> by dynamically adjusting the set-points of the wind turbines, with the goal of equalizing their loading. The authors tested this methodology with an actuator disk model using large eddy simulations (LES). Later, this approach was also demonstrated with the more sophisticated actuator line method (ALM) in LES <cit.>. So far these PI-based methods have been applied only to induction control, and they are not necessarily optimal. Moreover, saturation conditions are problematic, due to the possible local lack of power reserves (margins), which are not explicitly accounted for nor monitored in the existing implementations.
In this paper, a new wind farm control architecture is presented to improve the power tracking accuracy in conditions of strong persistent wakes, when the wind farm power demand is close to the maximum available power. An improved tracking performance is obtained by explicitly maximizing the power margin, in order to hedge against wind lulls. This novel methodology combines wake steering with induction control. Wake steering is used because of its ability to increase power margins by mitigating wake effects <cit.>. Wake steering is implemented through an open-loop model-based set-point optimal scheduler, closely following the standard implementation that has recently become popular in power-boosting wind farm control <cit.>. Induction control is implemented through a fast closed-loop corrector to improve tracking accuracy. The new methodology is demonstrated in a partial wake impingement scenario of a cluster of turbines, using a TUM-modified version of NREL's ALM-LES Simulator fOr Wind Farm Applications (SOWFA) <cit.>.
The paper is structured as follows. First, the novel APC methodology is presented. Second, the simulation model is described and finally, results are discussed for steady-state and unsteady conditions.
§ METHODOLOGY
The core of the proposed wind farm control architecture is an open-loop model-based set-point optimal scheduler. This control element determines the yaw misalignment of each turbine and its contribution to the demanded value (i.e. power share), given the power demand required by the TSO and the ambient conditions. The latter can be obtained in real time from SCADA data or with wind sensing methods <cit.>. A feedback loop serves the main purpose of correcting tracking errors, which will inevitably arise from the open loop control element during operation. A sketch of the overall control architecture is shown in fig. <ref>. The closed and the open loops are executed at two distinct time rates, since their outputs involve physical phenomena characterized by different time scales. Specifically, the open loop updates the yaw-set points and the power shares at a slower rate, due to the time required by the wake to propagate downstream. On the other hand, the closed loop changes the turbine inductions at a faster pace, to reduce tracking errors.
§.§ Open-loop set-point optimal scheduler
The open-loop component of the algorithm provides the optimal set-points in terms of yaw misalignment and power share. These are computed by a gradient-based optimization that maximizes the smallest power reserve within the wind turbines of the farm, for a given overall power demand.
The power of the ith turbine is noted P_i = P_i (A_i,u_i), where A_i indicates the local ambient conditions (here assumed to include wind speed, wind direction and turbulence intensity), and u_i are the control inputs (namely, induction and yaw misalignment). Power is computed using a wind farm flow model, which here is based on the FLOw Redirection and Induction in Steady-state (FLORIS v2) tool <cit.>.
The maximum power that can be captured by turbine i by adjusting its control set-point u_i (while keeping the set-points of the other turbines fixed) is computed as
P_a,i = max_u_i P_i (A,u_i) = 1/2ρπR^2 C_p U^3 cos^P_p(γ),
where ρ is the air density, R is the wind turbine radius, U is the undisturbed free-stream velocity, and P_p is the cosine exponent relating the yaw misalignment angle γ to power. The algorithm looks for the combination of set-points that minimizes the largest power ratio P_i/P_a,i across all turbines in the farm – equivalently, maximizes the smallest power margin – while satisfying the power demand of the TSO. This can be expressed as
min_u max_i ∈ [1,N]P_i/P_a,i
such that ∑_i=1^N P_i=P_ref.
In fact, the smaller the power ratio P_i/P_a,i, the larger the margin m_i = 1-P_i/P_a,i that is available to compensate against drops in the wind. Equation (<ref>) represents a constrained optimization problem, which is solved with the gradient-based Sequential Quadratic Programming (SQP) method <cit.>. The optimization does not need to be performed in real time during operation. Rather, it is executed offline for a set of ambient conditions and relative wind farm capacities. Results are collected in a look-up table, which is then interpolated at run-time, similarly to what is routinely done for power-boosting wind farm control <cit.>.
In the example shown later in this work, the open loop is executed every 30 seconds.
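A minimal sketch of how such a max–min margin problem can be posed for an SQP solver is given below; the epigraph reformulation, the SciPy SLSQP call, and the farm_power wrapper standing in for the FLORIS-based model are illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_setpoints(farm_power, p_avail, p_ref, u0):
    """Solve min_u max_i P_i(u)/P_a,i  s.t.  sum_i P_i(u) = P_ref  (epigraph form, SLSQP).

    farm_power(u) -> array of turbine powers P_i (placeholder FLORIS wrapper);
    p_avail       -> array of maximum available powers P_a,i;
    u0            -> initial guess for the stacked set-points (yaw angles and power shares).
    """
    def objective(z):
        return z[-1]                                   # epigraph variable t >= P_i / P_a,i

    def ratio_constraints(z):
        u, t = z[:-1], z[-1]
        return t - farm_power(u) / p_avail             # t - P_i/P_a,i >= 0 for every turbine

    def demand_constraint(z):
        return np.sum(farm_power(z[:-1])) - p_ref      # total power matches the TSO demand

    z0 = np.concatenate([u0, [1.0]])
    res = minimize(objective, z0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": ratio_constraints},
                                {"type": "eq", "fun": demand_constraint}])
    return res.x[:-1]

# Run offline over a grid of ambient conditions and demands; store the resulting yaw and
# power-share set-points in a look-up table interpolated at run time.
```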
§.§ Closed-loop corrector
The closed-loop corrector is directly taken from the work of ref. <cit.>, and it is executed every 0.01 seconds. The corrector consists of a simple PI feedback loop that operates on the power tracking error, which arises from the open-loop component of the control structure. The tuned PI gains used in this work are K_P,APC=0.2 and K_I,APC=0.05 s^-1.
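A discrete-time sketch of such a corrector, using the gains and execution rate quoted above, is shown below; any anti-windup logic used in practice is omitted here.

```python
def make_pi_corrector(kp=0.2, ki=0.05, dt=0.01):
    """Discrete PI corrector acting on the farm power tracking error, executed every dt seconds."""
    state = {"integral": 0.0}

    def step(p_ref, p_meas):
        error = p_ref - p_meas
        state["integral"] += error * dt
        # Correction added to the open-loop power demands before dispatching to the turbines.
        return kp * error + ki * state["integral"]

    return step
```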
§.§ Identification of saturation conditions
On each turbine, the occurrence of saturations is determined by a condition that combines tracking error and pitch angle. In particular, a saturation is detected when the blade pitch is at its optimal value and the tracking error exceeds a given negative threshold, set to the value of 100 in this work. The magnitude of this threshold determines the aggressiveness of the wind farm controller. This method was chosen because it can be implemented based on standard information that is readily available on board wind turbines, and does not rely on uncertain and difficult-to-estimate parameters such as thrust coefficient or axial induction.
§ NUMERICAL MODEL
§.§ Steady-state model
The engineering farm flow model FLORIS v2 <cit.> is used here both to synthesize the open-loop part of the controller and to perform steady-state analyses, prior to testing in the dynamic higher-fidelity LES-ALM environment. The standard FLORIS implementation is extended with the option to derate the turbines by modifying the C_p and C_t tables, following a basic curtailment approach. Moreover, a linear dependency of the power loss exponent P_p on C_t is also included in the model <cit.>, so that
P_p=A C_t+B,
where A=-1.56 and B=3.16, based on experimental and numerical observations. This dependency between the power loss exponent and the thrust coefficient is particularly relevant when combining derating and yaw misalignment, since the wind turbines operate at a wide range of C_t values due to their dynamic curtailment.
§.§ Unsteady simulations
LES-ALM simulations are used for testing the performance of the new APC formulation, because they are able to deal with the complex dynamics typical of wind turbine wakes and their interactions <cit.>.
The filtered ALM of refs. <cit.> is used to model the blades, by projecting forces computed along the lifting lines onto the LES mesh grid. Simulations are run with a turbulent wind obtained from a precursor generated in stable atmospheric conditions. The Cartesian mesh consists of approximately 13.5 million cells, and uses six refinement levels. The smallest cells measure 1, and are located in correspondence of the rotors. The computational domain, grids and turbine layout are shown in fig. <ref>.
§ RESULTS AND ANALYSIS
The scenario analyzed in this paper consists of a cluster of three IEA 3.4-MW wind turbines <cit.>, installed at a distance of 4 diameters and offset by half a diameter relative to the incoming wind vector. The scenario is adapted from <cit.>, and it is chosen to mimic the typical operating conditions of an onshore wind plant with close spacings and partial wake overlaps. The inflow is characterised by a turbulence intensity of 6% at hub height, a shear exponent of 0.2, and a mean wind speed of 9.5 m/s, equal to the rated speed of the turbines.
§.§ Steady-state conditions
First, the open-loop optimal scheduler is demonstrated in steady-state conditions. For each turbine, fig. <ref> reports the yaw set-points and power share percentage that maximize the smallest power margin.
The figure shows that the most upstream turbines are misaligned relatively to the wind, with the goal of increasing the power reserves of the downstream ones. Moreover, power share is not distributed equally, because of different local inflow conditions and wake effects.
These margin-optimal set-points (noted induction + yaw in the following) are compared to the ones of two alternative strategies in fig. <ref>. In the first of these strategies (noted induction), only induction is used to match the demand (i.e. the turbines are always aligned with the incoming wind vector). In the second (noted first yaw then induction), the turbines are first misaligned to maximize power capture, and then induction control is used to match the demand. In both cases, the power share is computed in order to maximize the smallest power margin in the wind farm.
The figure shows that —as expected— the margin drops to zero in correspondence of the maximum power of the plant, and increases as the power demand is lowered and the wind turbines are derated. Compared to the induction case, the methods featuring wake steering are able to significantly increase the power margin for a wide range of wind farm power demands. Furthermore, the first yaw then induction strategy generates similar margins to the induction + yaw case at relatively high TSO demands. However, its performance drops slightly as the power demand is lowered, because of the power losses caused by its larger persistent yaw misalignments. These losses are particularly enhanced by the low thrust coefficient at which the turbines operate, due to curtailment <cit.>. Because of its better ability to generate large margins, only the induction + yaw strategy is considered in the remainder of this work.
§.§ Unsteady simulations
Next, the methodology is tested with unsteady CFD simulations. Results are compared with the controller developed in ref. <cit.>, which is assumed here as the state-of-the-art benchmark.
A dynamic reference power signal typical of automatic generation control (AGC) is used as input signal. AGC is the secondary response regime of grid frequency control, and it consists in the modification of the power output of a plant depending on the dynamically changing requests by the transmission system operator <cit.>. A similar signal has been considered by other authors <cit.>.
Fig. <ref> presents the average velocity fields in the wind farm obtained with the benchmark control and with the proposed induction+yaw approach. The effect of yaw misalignment can be clearly observed, as the wakes of the upstream turbines appear to have been deflected in Fig. <ref>.
Fig. <ref> shows a comparison of the power tracking error obtained with the benchmark method and the newly proposed one.
The figure shows that the benchmark method presents frequent negative deviations from the reference signal. These deviations are due to the power saturation of the wind turbines operating in waked inflow conditions. On the other hand, the controller featuring wake steering is capable of reducing the frequency of occurrence of these phenomena, thereby improving the overall tracking accuracy. For the results of fig. <ref>, the new wind farm controller reduces the root-mean-square of the tracking error by 42.6% relative to the benchmark. In the latter, the significant error occurring at t ≈ 760 s is due to a simultaneous saturation of all the wind turbines in the cluster.
In order to better understand how the local power margin is increased by the new method, the pitch angles commanded by the wind turbine controllers are plot in fig. <ref>.
For a standard curtailment derating strategy, larger power reserves are obtained for larger absolute differences between the commanded pitch angle and the optimal value. Figures <ref> and <ref> show that waked turbines display the highest margin increase compared to the benchmark case, due to the lowered impact of the impinging wakes. On the other hand, the most upstream wind turbine (see fig. <ref>) generally displays a lower margin with the new control strategy, because of its yaw misalignment. Nevertheless, for the benchmark controller, the frequent saturation of the downstream turbines number 2 and 3 forces the upstream turbine number 1 to compensate, and in these conditions its margin drops relatively to the new proposed formulation.
Finally, the effect of the new methodology on loads is briefly considered. Fig. <ref> shows the damage equivalent loads (DEL), computed by rainflow counting (<cit.>), for the tower base fore-aft bending moment of each turbine.
Results indicate that the new control strategy reduces fatigue compared to the benchmark one. These results can be explained by the fact that the benchmark controller is unable to maintain load balancing within the farm in high-power-demand conditions, due to the frequent saturation events. Conversely, the new controller reduces the extent of the saturation phenomena, thereby suppressing the abrupt controller actions that are responsible for high-amplitude fatigue cycles.
§ CONCLUSIONS
A new wind farm control methodology for power tracking was presented. The methodology combines wake steering and induction control with the aim of maximizing the lowest power margin within a wind farm. The implementation is based on a slow-rate open-loop optimal set-point scheduler, combined with a fast feedback loop corrector. Compared to a state-of-the-art benchmark, the new methodology is capable of reducing the root-mean-square of the tracking error in conditions of power demand close to the maximum capacity of the plant. In such conditions, the fatigue of the individual wind turbines is also mitigated, because of less frequent saturation phenomena.
§ ACKNOWLEDGMENT
The authors acknowledge the support of the German Federal Ministry for Economic Affairs and Climate Action (BMWK) through the PowerTracker project. The authors express their appreciation to the Leibniz Supercomputing Centre (LRZ) for providing access and computing time on the SuperMUC Petascale System under Projekt-ID pr84be “Large-eddy Simulation for Wind Farm Control”.
|
http://arxiv.org/abs/2307.03995v1 | 20230708153652 | Linear approximation to the statistical significance autocovariance matrix in the asymptotic regime | [
"V. Ananiev",
"A. L. Read"
] | physics.data-an | [
"physics.data-an",
"stat.ME"
] |
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
Masum Hasan
August 12, 2023
==========================================================================================
§ INTRODUCTION
In high energy physics searches for new particles that appear in the data as resonances <cit.>,
one usually scans a mass region and hopes to find a peak of high significance at some mass.
The significance at each mass of the scan is generally found by applying Wilks' theorem <cit.>
to the likelihood-ratio test statistic (LRT) <cit.>
for each point,
and results in a field of significances measured across the search region.
While the resonance may appear anywhere in the search region, the analysis
usually targets the highest (local) significance,
which leads to the recurring challenge of estimating the global significance of this
observation.
The necessity of calculating the probability for a background fluctuation to give such a peak of significance anywhere in the search region,
and not simply where the significance is maximal,
is commonly referred to as the look-elsewhere effect (LEE).
There have been a number of studies investigating the LEE, and in our work we
pay particular attention to those describing the significance field with a Gaussian process.
While some studies <cit.>
set the upper bound on the trials factor,
which converts a local p-value into a global one,
and only use a Gaussian process implicitly to link
the low and high significance regions,
other studies <cit.> require explicit values for
the Gaussian process parameters.
In this paper we establish a chain of lightweight steps
from a non-linear parametric statistical model to the trials factor by estimating the covariance matrix of the significance field.
To construct an estimate involving only a single background-only fit to the data,
we apply a linear expansion to the non-linear background shape.
The way to calculate the covariance matrix starting from a linear model
was briefly discussed by Demortier <cit.>.
As part of our work, we give a strict mathematical formulation of the method and demonstrate a practical application of it to non-linear background shapes,
with the estimated covariance matrix serving as a proxy for the straightforward trials factor estimate.
A common input for the methods that quantify the LEE is a set of maximum likelihood fits to
some number of Monte Carlo generated data realizations. They may be used to
estimate the trials factor in the lower significance region, or the covariance
matrix of the Gaussian process itself (the significance autocovariance).
The challenge, then, is to fit enough
datasets to estimate the trials factor with a satisfactory precision, while
keeping the number of fits as small as possible.
In high-energy physics searches for a new particle or a resonance, typically,
the likelihood-ratio test statistic is used to construct the p-value for
each point on a search grid.
In the asymptotic regime, the test statistic follows a χ^2 distribution.
For analyses that use a Gaussian process to model the significance,
the number of degrees of freedom of the test statistic distribution is,
typically, 1. For this case, in Chapter <ref>,
we suggest a method to estimate the significance covariance matrix
that makes use of a single background-only fit to the data.
We replace the set of fits that were required in our previous work,
with derivatives of the best-fit-to-the-data background model.
Fortunately, the derivatives can often be extracted from the fit software.
Core assumptions. In section <ref> we show that three
quite generic requirements:
* the background model should be well approximated by its linear expansion around the best fit parameters,
* the assumption that the fluctuations in different bins of the data set
are independent,
* the fluctuations in each bin follow a Gaussian distribution,
together, are consistent with the assumptions made in the empirical study by Ananiev & Read <cit.>,
which relied on the additivity (superposition) principle
for the fluctuations to empirically estimate the covariance matrix
of the significances.
We argue, therefore, that this work serves
as a theoretical basis for the method of the Asimov set of background samples introduced in the study,
and at the same time may rely on its validations.
§.§ Statistical model
The basic structure of a statistical model commonly used in high-energy physics
experiments that search for a new particle or a resonance was described in
detail in the empirical study <cit.>.
For the present study, we chose the H→γγ inspired model as a benchmark,
because it satisfies without approximation the second and third requirements above.
The search is conducted with the likelihood ratio test statistic evaluated
for each point M of the search grid ℳ.
In this binned model,
the expected background b_i(θ⃗), used as null-hypothesis H_0,
together with the expected signal μ s_i(θ⃗)
form the alternative H_1, expected signal + background estimate:
n_i(μ, θ⃗, M) = b_i(θ⃗) + μ s_i(θ⃗, M),
where i enumerates bins, θ⃗ denotes the vector
of nuisance parameters and μ is the signal strength nuisance parameter.
In the asymptotic regime (i.e., in the large-sample limit), and neglecting constant terms, the log-likelihoods for H_0 and H_1 may be approximated as follows:
-2lnℒ_0(μ=0, θ⃗) = ∑_i ( d_i - b_i(θ⃗)/σ_i)^2,
-2lnℒ_1(μ, θ⃗, M) = ∑_i ( d_i - b_i(θ⃗) - μ s_i(M, θ⃗)/σ_i)^2,
where i enumerates bins, M ∈ℳ denotes the point in the search region ℳ of parameters which are not present under the background-only hypothesis, θ⃗ are the nuisance parameters, and d_i corresponds to the binned data with errors σ_i.
We have assumed that the errors σ_i are independent of the nuisance
parameters θ⃗.
With a linear correction to σ_i it is still possible to get a closed form expression
for the test statistic and significance. The calculation of the covariance would require sampling toys to average out the fluctuations.
No additional fits would be required, however,
so this may be a potential option for more sophisticated analyses.
Our goal is to estimate the covariance matrix Σ_MN of the statistical significances Z_M and Z_N evaluated at two different points of the search region ℳ:
Σ_MN = ⟨ Z_M Z_N ⟩_d, M, N ∈ℳ,
Z_M = sign(μ̂) √(t_μ(M))∼𝒩[0, 1],
t_μ(M) = -2 ln[ ℒ_0(μ=0, θ⃗_0)/ℒ_1(μ̂, θ⃗_0 + θ⃗_1, M) ] ∼χ^2_d.o.f=1,
where t_μ(M) is the likelihood-ratio test statistic (LRT), Z_M is the so-called signed-root LRT,
θ⃗_0 are the nuisance parameters that maximize
the background-only likelihood ℒ_0,
and θ⃗_0 + θ⃗_1 together with the signal strength μ̂
maximize the signal+background likelihood ℒ_1.
We would like to remark that for the signal+background model we are fitting θ⃗ as a deviation from θ⃗_0.
This is essential for the proper separation of variables in the subsequent calculations.
We assume that the best fit of the background model b_i to the data d_i is available for the study as b_i(θ⃗̂⃗) = b̂_i. In order to simplify the notation, we make use of the freedom to choose the reference point for the model parameters θ⃗ and define the best fit parameters to be θ⃗̂⃗ = 0⃗.
§ METHOD
To simplify the notation, we redefine d_i, s_i and b_i to include σ_i:
d_i/σ_i↦ d_i, s_i/σ_i↦ s_i, b_i/σ_i↦ b_i.
The log-likelihoods then become:
-2lnℒ_0 = ∑_i ( d_i - b_i(θ⃗) )^2,
-2lnℒ_1 = ∑_i ( d_i - b_i(θ⃗) - μ s_i(θ⃗) )^2.
For every realization of the data (e.g. an LHC run), we expect the deviations of the fit parameters μ and θ⃗ from 0 to be small (in the absence of a signal), and therefore the first-order expansion of b_i(θ⃗) and s_i(θ⃗) around 0⃗ to be accurate enough.
The log-likelihoods then are:
-2lnℒ_0 = ∑_i ( d_i - b̂_i - Δ_i βθ^β)^2,
-2lnℒ_1 = ∑_i ( d_i - b̂_i - Δ_i βθ^β - μ s_i(0⃗) )^2,
where Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗ is the Jacobian of the best-fit background model and the Einstein summation rule applies to the indices β.
Since the signal model s_i contributes to the log-likelihoods eq. (<ref>) only at lowest order, thus is constant, we simplify s_i(0⃗) to s_i from now on.
The equations that define optimal values of θ⃗_0, θ⃗_1, and μ then are:
∂ℒ_0/∂θ_α|_θ⃗_0∝
∑_i (d_i - b̂_i - Δ_i βθ_0^β)·Δ_iα = 0,
∂ℒ_1/∂θ_α|_θ⃗_1, μ̂∝
∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)·Δ_iα = 0,
∂ℒ_1/∂μ|_θ⃗_1, μ̂∝
∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)· s_i = 0.
To reduce the number of indices, we rewrite the expressions above with bra-ket notation:
⟨d -b̂|Δ = ⟨θ_0|Δ^⊺Δ,
0⃗ = ⟨θ_1|Δ^⊺Δ + μ̂⟨s|Δ,
⟨d - b̂|s⟩ = ⟨θ_0 + θ_1|Δ^⊺|s⟩ + μ̂⟨s|s⟩,
where in eq. (<ref>) we used eq. (<ref>) to cancel the θ⃗_0 contribution. We can solve eq. (<ref>) and eq. (<ref>) for θ⃗_0 and θ⃗_1 correspondingly:
⟨θ_0| = ⟨d-b̂|Δ(Δ^⊺Δ)^-1,
⟨θ_1| = - μ̂⟨s|Δ(Δ^⊺Δ)^-1.
It is important to mention that, although Δ itself is generally singular, the product Δ^⊺Δ appears to be a Hessian of -2lnℒ_1 with respect to θ⃗_1. For the background model best-fit point θ⃗ = 0⃗ to be a minimum, it is required that the Hessian be positive definite, thus Δ^⊺Δ is invertible.
We substitute eq. (<ref>) and eq. (<ref>) into eq. (<ref>) and solve for μ̂:
μ̂(M) = ⟨d-b̂| P |s_M⟩/⟨s_M| P |s_M⟩,
P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺.
An interesting and important fact is that P is a projector and it is symmetric:
P^2 = P, P = P^⊺.
A projector is always positive semi-definite, which means that the product below is non-negative for any non-zero s⃗:
⟨s| P |s⟩ = ⟨s| P^2 |s⟩ = ( P |s⟩)^2 ≥ 0, ∀s⃗≠0⃗ .
Let us estimate the test statistic t_M:
t_M = (-2 lnℒ_0) - (-2 lnℒ_1)
= 2 ⟨d - b̂ - Δθ⃗_0|Δθ⃗_1 + μ̂ s⟩
+ ⟨Δθ⃗_1 + μ̂ s|Δθ⃗_1 + μ̂ s⟩.
We again use eq. (<ref>) to cancel the θ⃗_0 contribution and eq. (<ref>) to substitute the solution for θ⃗_1:
t_M = μ̂⟨d-b̂| P |s_M⟩ = μ̂^2 ⟨s_M| P |s_M⟩.
The significance Z_M, as defined in eq. (<ref>), is:
Z_M = μ̂√(⟨s_M| P |s_M⟩) = ⟨d-b̂| P |s_M⟩/√(⟨s_M| P |s_M⟩).
The square root in eq. (<ref>) is always defined, as the product under the square root is always positive (eq. (<ref>)).
For the covariance matrix estimation, we would need to average over data. We are looking for a solution with uncorrelated fluctuations in each bin (sec. <ref>), and we recall that we normalized the errors to 1 in eq. (<ref>), therefore, the following is true:
E_d{|d-b̂⟩⟨d-b̂|} = 1.
The covariance matrix, then, is:
Σ_MN = E_d{ Z_M Z_N }
= E_d{⟨s_M| P |d-b̂⟩/√(⟨s_M| P |s_M⟩)⟨d-b̂| P |s_N⟩/√(⟨s_N| P |s_N⟩)}
= ⟨s_M| P /√(⟨s_M| P |s_M⟩) E_d{|d-b̂⟩⟨d-b̂|} P |s_N⟩/√(⟨s_N| P |s_N⟩)
= ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩),
where we used the symmetry and projector properties of P.
To see the parallel with Demortier <cit.>, one needs to think of the background model as a linear combination of the column vectors of Δ. Then eq. (<ref>) defines a vector |v_M⟩ = P|s_M⟩/√(⟨s_M|P|s_M⟩), which was introduced by Demortier and is orthogonal to each of the vectors constituting the background shape. The test statistic can then be rewritten as t_M = (⟨d - b̂|v_M⟩)^2, and the covariance can be expressed as Σ_MN = ⟨v_M|v_N⟩.
It should be noted that from the data
fluctuations d⃗ - b⃗̂⃗ contributing to the
covariance matrix in the form
Fluct. ∝ E_d{|d - b̂⟩⟨d - b̂|},
a superposition principle, relied on in ref. <cit.>, can be derived:
Σ_MN = ∑_f Σ^f_MN,
where f enumerates independent fluctuations in different bins.
In summary, we can estimate the autocovariance matrix of the significance field from the signal model and derivatives of the background model:
Σ_MN = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), M, N ∈ℳ
P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺,
Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗.
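A short numerical sketch of evaluating these expressions is given below; the array names and shapes are our own choice, and the inputs are assumed to be pre-normalised by the bin errors σ_i as in eq. (<ref>).

```python
import numpy as np

def significance_covariance(delta, signals):
    """Linear-approximation covariance of the significance field.

    delta   : (n_bins, n_params) Jacobian of the best-fit background, Delta_{i,alpha}
    signals : (n_points, n_bins) array of signal shapes s_i(M), one row per search point M
    Returns the (n_points, n_points) matrix Sigma_MN.
    """
    # Projector P = 1 - Delta (Delta^T Delta)^{-1} Delta^T onto the subspace orthogonal
    # to the background variations.
    gram = delta.T @ delta
    proj = np.eye(delta.shape[0]) - delta @ np.linalg.solve(gram, delta.T)

    # v_M = P s_M / sqrt(<s_M|P|s_M>);   Sigma_MN = <v_M|v_N>.
    ps = signals @ proj                                    # rows are (P s_M)^T, P symmetric
    norms = np.sqrt(np.einsum("mi,mi->m", signals, ps))    # sqrt(<s_M|P|s_M>)
    return (ps @ signals.T) / np.outer(norms, norms)
```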
§ JUSTIFICATION OF THE SET OF ASIMOV BACKGROUND SAMPLES
In this section we would like to compare the derived expression
eq. (<ref>) for the linear approximation of the significance
covariance matrix
to the empirical study <cit.> and the
H →γγ inspired model introduced there.
To carry out the calculations we used the SigCorr package
that we developed specifically for trials factor studies,
which now includes functionality for the linear approximation <cit.>.
We estimate the linear approximation using eq. (<ref>)
with the true parameters of the model, which were predefined in the paper.
The resulting matrix shown in figure <ref>
clearly resembles the one presented in the empirical study.
We also show, in figure <ref>,
the difference between the linear approximation computed on
the model's true parameters (figure <ref>)
and the empirical estimate.
We confirm that the empirical covariance matrix is compatible with
the linear approximation suggested in this paper
within the accuracy of the empirical estimate.
On the one hand, the compatibility of the linear approximation and
the empirical study allows us to refer to the validations conducted in
the empirical study, including those regarding trials factor estimation,
and to re-apply them to the method suggested in this paper.
The direct calculation of the up-crossings from the covariance matrix, described in <cit.>, becomes particularly appealing now, since it requires only a single fit of the statistical model to the data.
The linear approximation, on the other hand, serves as the theoretical basis
for the empirical set of Asimov background samples used to estimate the covariance matrix in the aforementioned work.
§ CONCLUSION
In this work we proposed a novel method for the estimation of the covariance matrix of statistical
significance in new particle searches using a linear expansion of the statistical
model around its background-only best fit to the data.
In addition to the closed form expression for the linear approximation
of the significance covariance matrix,
we also presented elegant expressions for the best fitted signal strength
and statistical significance in this approximation.
We proved that the suggested covariance matrix satisfies the superposition
principle with regard to the fluctuations of the data, which makes it a good
proxy to the covariance matrix constructed with
the set of Asimov background samples<cit.>.
Finally, we compared these two approaches with
the example of a H →γγ inspired model
and showed that the deviations are compatible with the error of
the set of Asimov background samples.
We, therefore, claim that all the validations conducted in
the empirical study, including those regarding trials factor estimation,
hold for the linear approximation suggested in this paper,
and the linear approximation serves as a theoretical basis for
the empirical set of Asimov background samples construction.
We would like to thank Elliot Reynolds for the encouraging discussion at the HDBS Workshop at Uppsala.
This research was supported by
the European Union Framework Programme for Research and Innovation Horizon 2020 (2014–2021)
under the Marie Sklodowska-Curie Grant Agreement No.765710.
|
http://arxiv.org/abs/2307.05470v1 | 20230708213703 | A Robust and Efficient Optimization Model for Electric Vehicle Charging Stations in Developing Countries under Electricity Uncertainty | [
"Mansur Arief",
"Yan Akhra",
"Iwan Vanany"
] | math.OC | [
"math.OC",
"econ.GN",
"q-fin.EC",
"stat.AP"
] |
Mansur M. Arief^1 (corresponding author, [email protected]), Yan Akhra^2, Iwan Vanany^2
^1 Department of Aeronautics and Astronautics Engineering, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
^2 Department of Industrial and Systems Engineering, Institut Teknologi Sepuluh Nopember, Sukolilo, Surabaya 60111, East Java, Indonesia
The rising demand for electric vehicles (EVs) worldwide necessitates the development of robust and accessible charging infrastructure, particularly in developing countries where electricity disruptions pose a significant challenge. Earlier charging infrastructure optimization studies do not rigorously address such service disruption characteristics, resulting in suboptimal infrastructure designs. To address this issue, we propose an efficient simulation-based optimization model that estimates candidate stations' service reliability and incorporates it into the objective function and constraints. We employ the control variates (CV) variance reduction technique to enhance simulation efficiency. Our model provides a highly robust solution that buffers against uncertain electricity disruptions, even when candidate station service reliability is subject to underestimation or overestimation. Using a dataset from Surabaya, Indonesia, our numerical experiment demonstrates that the proposed model achieves a 13% higher average objective value compared to the non-robust solution. Furthermore, the CV technique successfully reduces the simulation sample size up to 10 times compared to Monte Carlo, allowing the model to solve efficiently using a standard MIP solver. Our study provides a robust and efficient solution for designing EV charging infrastructure that can thrive even in developing countries with uncertain electricity disruptions.
* Proposed a simulation-based optimization model to design optimal EV charging station infrastructure that can withstand uncertain power supply in developing countries.
* Used control variates (CV) variance reduction technique to enhance simulation efficiency and provide a highly robust solution that buffers against uncertain electricity disruptions.
* Numerical experiment using data from Surabaya, Indonesia showed the proposed model achieved 13% higher average objective value compared to the non-robust solution.
* The enhanced simulation efficiency through CV reduces the required sample size by a factor of 10 compared to Monte Carlo simulations
* The proposed model showcases a potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions in developing countries.
electric vehicle charging station; developing country; uncertainty; variance reduction
§ INTRODUCTION
The growing global demand for electric vehicles (EVs) has brought to the forefront the need for reliable and easily accessible EV charging infrastructure. According to a report by the International Energy Agency, as numerous governments set ambitious goals for electrifying their transportation systems, worldwide EV demand has grown exponentially in recent years. In 2010, there were only approximately 17,000 EVs on the world’s roads. In 2019, for instance, China led the global EV market with more than 1 million EVs sold that year (more than 50% of global EV demand), followed by the whole of Europe with 561,000 cars sold and the USA with 327,000 cars sold. This trend is projected to persist in the upcoming years <cit.>.
Developing countries are also striving to promote EV adoption, coupled with greener electricity <cit.> to expedite the achievement of their sustainability goals. For example, Indonesia has set an ambitious target of having 20% of all automobile sales be electric by 2025, with a long-term goal of achieving fully electrified transportation by 2050 <cit.>. However, developing countries like Indonesia face significant infrastructure constraints that must be addressed to achieve these goals. The availability of EV charging infrastructure is a crucial issue that must be addressed to support the widespread adoption of EVs. In Indonesia, there were only 240 public EV charging points across the country as of 2021 <cit.>. However, an estimated 31,000 EV charging stations are required throughout the country to support sustainable electrification of vehicles in the country <cit.>.
This infrastructure gap is not unique to Indonesia; many other developing countries face it as they seek to support the growth of EV adoption. Tackling this challenge by designing a convenient and reliable EV charging network is, however, a very complex task. To ensure convenient locations, it is essential to consider factors such as population density or the potential EV demand distribution <cit.>. However, in major cities in developing countries, finding suitable land for charging stations may be challenging due to limited space availability. Furthermore, service uncertainty, particularly in the electricity supply, is one of the most significant issues in developing countries. Implementing smart charging strategies <cit.> becomes hardly feasible under electricity supply uncertainty. Outages and other electricity disruptions occur frequently, posing a significant problem for users who demand reliable service.
To address this challenge, our study proposes a robust solution for designing EV charging infrastructure that accounts for electricity disruptions in developing countries. We introduce a simulation-based optimization model that estimates the service reliability of candidate charging stations and incorporates this information into the objective function and constraints. By relying on simulation, this approach is more versatile than previous works that assume a disruption probability model is available. Additionally, we employ a variance reduction technique called control variates (CV) to enhance simulation efficiency, reducing the required sample size by up to 10 times compared to naive Monte Carlo (MC) simulation. This results in an efficient mixed-integer programming (MIP) model whose optimal solutions strike a balance between minimizing the total cost of operating and investing in the charging infrastructure and providing high-quality service to the public. Fig. <ref> compares the traditional modeling approach without variance reduction against the proposed framework, which utilizes the variance reduction technique to achieve tighter confidence intervals (hence much more precise output) with less computational burden.
Our work contributes in three key ways. Firstly, we propose a model that specifically addresses the critical issue of electricity disruption in EV charging station planning, particularly in developing countries. Secondly, we integrate the estimation of disruption probabilities into our model, providing a more data-driven approach compared to previous works that assumed available disruption probability models apriori. Finally, our study demonstrates the robustness of the proposed model in solving EV charging infrastructure problems by comparing its performance to a non-robust model, even when disruption probabilities are slightly under or over-estimated. Our numerical experiment, based on an EV dataset from Surabaya, Indonesia, shows that our model achieves a 13% higher average objective value compared to the non-robust solution, highlighting its superior performance to help build sustainable and thriving ecosystems for EVs, both in developed and developing countries in the years to come.
The rest of this paper is structured as follows. In Section <ref>, we provide a concise overview of the literature related to the optimization of EV charging infrastructure. We then present the proposed model formulation in Section <ref>, along with our approach for incorporating the CV technique to estimate the service reliability (i.e., the complement of the disruption probability). In Section <ref>, we describe the experiment settings, and we discuss the main findings in Section <ref>. Finally, we conclude our work in Section <ref>.
§ LITERATURE REVIEW
In this section, we briefly review earlier works directly related to the planning of EV charging infrastructure and relevant case studies that motivate our approach. Examining these earlier works offers insight into the evolution of methodologies, leading to the proposed work, which uniquely introduces a combination of stochastic modeling and variance reduction techniques. The summary is provided in Table <ref>.
The planning of EV charging infrastructure can be viewed as a facility location problem, which aims to minimize an objective function subject to constraints related to the desired performance of the network facilities. Early studies, including those by <cit.> and <cit.>, adopted deterministic models focusing on minimizing charging stations and development costs, respectively. <cit.> sought to maximize service demand, whereas <cit.> aimed to minimize infrastructure and access costs. Similar objectives were pursued by <cit.>, <cit.>, and <cit.>, with deterministic models being the common methodology.
Several other studies, like those conducted by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, continued the trend of deterministic models, exploring various aspects of EV charging station optimization. Other researchers, including <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, and <cit.>, focused on minimizing the number of charging stations or the operating cost, or maximizing the EV flow coverage.
Another line of work integrates charging infrastructure into the smart-grid design <cit.> or other renewable energy sources such as solar cells <cit.>. While this approach provides an integrated solution to renewable energy issues and amplifies the positive impact of EVs on the environment, it may not be practical for urban areas in developing countries. A comprehensive review of charging infrastructure designs is presented by <cit.>, emphasizing the need for increasingly detailed modeling that accounts for randomness and variability. However, there is a lack of rigorous real-world case studies that emphasize uncertainty quantification in the modeling framework.
Several case studies have been conducted in both developed and developing countries. For example, <cit.> studied the problem of slow-charging technology in Lisbon, where vehicles are often parked overnight. In contrast, <cit.> considered both fast- and slow-charging technologies, focusing on robustly covering all demands and avoiding partial fulfillment in the city of Toronto. Another case study was conducted by <cit.> using a GIS-based model in Ankara and adopting a fuzzy approach. A city-scale simulation was developed for Singapore by <cit.>, focusing on the trade-off between cost minimization and customer accessibility maximization. Lastly, <cit.> proposed a set covering model for EV charging stations in Surabaya but ignored electricity disruption, relying only on redundant demand coverage as a buffer against uncertainty, which results in an overly simplified model and sub-optimal solutions.
In light of these studies, it is clear that the EV facility location problem is a complex and multifaceted issue that requires a tailored approach for different regions and contexts. Developing countries, in particular, may face unique challenges, such as electricity disruptions, that must be considered in the planning and design of EV facilities. Such disruptions and uncertainty are addressed only in a handful of studies. For instance, <cit.> uses a multi-criteria decision-making approach aiming to strike a balanced solution against flooding disruption that maximizes the charging convenience, minimizes the impact of flood hazards, and minimizes the impact of existing charging stations using TOPSIS. <cit.> integrates electric bus charging stations with photovoltaic and energy storage systems using a two-stage stochastic programming model, enabling them to incorporate the uncertainty of PV power outputs. <cit.> optimizes the size of the energy storage system considering the annualized cost, penalty cost for buying power during peak hours, and penalty cost for resilience violations. Other works that consider stochastic modeling include <cit.>, which directly use either structured stochastic models or simulations to represent elements of uncertainty in their optimization models. The caveat is that the resulting model can be extremely hard to solve, especially when a solution with high confidence is desired.
The proposed work extends the use of stochastic modeling and introduces control variates <cit.>, a variance reduction technique that can speed up a simulation-based optimization model, to the field. Our approach accounts for electricity disruptions via simulation and controls the resulting uncertainty in the objective value by adjusting the simulation sample size. Simulation modeling enables the modeler to adjust the degree of modeling fidelity depending on the prior knowledge available, and it can be easily verified by estimating the probability of electricity disruptions and comparing it with available historical data. The resulting simulation-based robust model can be accelerated using variance reduction techniques (i.e., control variates), and it offers a more accurate and practical approach for planning and designing EV charging infrastructure that considers uncertainty and disruptions. The integration of stochastic modeling and control variates sets this work apart from previous research, potentially paving the way for more efficient and effective EV charging station location optimization solutions.
§ MODEL FORMULATION
In this section, we describe our modeling components, including the decision variables, objective function, constraint set, model benchmarks (robust and non-robust model), and the CV method we employ to improve simulation efficiency.
§.§ Decision Variables
We consider a set of demand nodes I and supply nodes J, representing sub-district centers and charging station candidate locations in the region under study. We also consider K vehicle types, representing different vehicle modalities that the residents use for commuting (here, we consider two modalities: electric motorcycles and electric cars). The average time to travel from node i ∈ I to node j ∈ J is denoted by d_ij. A threshold parameter d_max is introduced as an upper bound for this travel time as a proxy to study the robustness of the solution w.r.t. consumer time-to-travel for charging.
The decision variables include binary variables
x_j indicating whether the charging station candidate j is selected or not and y_ij indicating whether demand node i is to be assigned to be served by charging station j. In addition, we also use integer decision variables v_ij^k and u_j, denoting the number of electric vehicles of type k from node i charged at node j and the number of units of charging connectors installed at node j, respectively.
x_j = 1 if station j ∈ J is selected, and 0 otherwise;
y_ij = 1 if demand node i ∈ I is assigned to station j ∈ J, and 0 otherwise;
v_ij^k ∈{0, 1, ⋯}, ∀ i ∈ I, j ∈ J, k ∈ K;
u_j ∈{0, 1, ⋯}, ∀ j ∈ J.
Each opened station j incurs a daily cost h_j and can only accommodate q_j charging connectors due to limited space. Each charging connector incurs a daily operational cost g and has a limited daily charging capacity of c_j (expressed in charging minutes per day, consistent with the capacity constraint below). A vehicle of type k requires e_k kWh of energy and t_k minutes to charge using fast-charging technology. We use the electricity price r to convert the energy delivered into monetary value.
§.§ Objective Function
The objective is to maximize the expected daily profit under random disruption events at each station, i.e., the revenue from all undisrupted stations minus the operational and investment costs. We also add a penalty term for customer demand left unmet due to disruptions, which allows us to study incentive mechanisms for even more robust designs in the ablation study.
To this end, we consider each charging station j ∈ J to have a service reliability
p_j = ℙ(Z_j ≤ z_j) = 𝔼[𝕀(Z_j ≤ z_j)].
The disruption events are simulated using the random vector Z = [Z_j]_∀ j ∈ J∼ q, where Z_j represents the underlying state that triggers an electricity disruption at station j whenever it exceeds some threshold z_j. In practice, electricity disruption events may occur due to extreme weather, spiking demand, or fallen trees <cit.>; in these cases Z_j might represent the wind speed, the cumulative region-wide demand, or the weight of a fallen tree branch hitting electrical equipment, respectively, and z_j is the corresponding threshold the equipment can withstand. <cit.> presents a review of how EV charging infrastructure strains electricity grids, which, in turn, exacerbates the likelihood of electricity outages, especially in developing countries.
With this consideration, and assuming we have prior information about p_j, ∀ j ∈ J, the objective function can be formulated as
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K ( [ r e_k p_j v_ij^k ]_revenue - [ s d_ij (1-p_j) v_ij^k ]_penalty ) - ∑_j ∈ J [ g u_j + h_j x_j ]_total cost.
On the other hand, if p_j is not available, then we can use simulation to estimate the following objective:
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K ( [ r e_k v_ij^k 𝔼[𝕀(Z_j≤ z_j)] ]_revenue - [ s d_ij v_ij^k 𝔼[𝕀(Z_j > z_j)] ]_penalty ) - ∑_j ∈ J [ g u_j + h_j x_j ]_total cost,
where 𝕀(Z_j ≤ z_j) is a binary indicator of whether station j avoids disruption, i.e., 𝕀(Z_j ≤ z_j) = 1 if Z_j ≤ z_j and 0 otherwise.
Monte Carlo (MC) simulation is one of the most practical methods to achieve this. MC uses n i.i.d. copies of the random variable to estimate the expectation. For each j ∈ J, we first generate Z_j1, Z_j2, ⋯, Z_jn. We then check whether the disruption event is triggered at the l-th sample and record the binary indicator I_jl = 𝕀 (Z_jl≤ z_j). Then, we use the binary indicators in our final (robust) objective function:
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K∑_l=1^n (1/n) ( [ r e_k v_ij^k I_jl ]_revenue - [ s d_ij v_ij^k (1-I_jl) ]_penalty ) - ∑_j ∈ J [ g u_j + h_j x_j ]_total cost.
We call our model the Robust Model in the experiment, to contrast with the original (Non-Robust) model proposed by <cit.>, which is attained when setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯ n} in (<ref>) during optimization. The solutions of both models are evaluated under random disruption events generated using a different random seed.
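To make the sampling step concrete, the following is a minimal sketch (not the code used in this study) of how the indicators I_jl could be generated, assuming each Z_j follows a Gaussian demand model; the station parameters shown are placeholders rather than real data.

```python
import numpy as np

def sample_disruption_indicators(mu, sigma, z_thresh, n, seed=0):
    """Return I[j, l] = 1 if station j is NOT disrupted in MC sample l."""
    rng = np.random.default_rng(seed)
    mu, sigma, z_thresh = (np.asarray(a, dtype=float) for a in (mu, sigma, z_thresh))
    # Z[j, l] ~ Normal(mu_j, sigma_j) is the random state driving disruptions.
    Z = rng.normal(mu[:, None], sigma[:, None], size=(mu.size, n))
    return (Z <= z_thresh[:, None]).astype(int)

# Placeholder (masked) parameters for three candidate stations.
mu = [80.0, 95.0, 70.0]          # mean of Z_j
sigma = [10.0, 12.0, 9.0]        # standard deviation of Z_j
z_thresh = [105.0, 120.0, 95.0]  # disruption thresholds z_j

I = sample_disruption_indicators(mu, sigma, z_thresh, n=10_000)
print(I.mean(axis=1))            # MC estimates of the station reliabilities p_j
```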
§.§ Constraints
The maximization of the objective function in (<ref>) is subject to a set of constraints:
s.t. ∑_k ∈ K v_ij^k ≤ y_ij M, ∀ i ∈ I, j ∈ J,
d_ij y_ij≤ d_max , ∀ i ∈ I, j ∈ J,
∑_j ∈ J v_ij^k = w_i^k, ∀ i ∈ I, k ∈ K,
∑_i ∈ I∑_k ∈ K t_k v_ij^k ≤ c_j u_j, ∀ j ∈ J,
u_j ≤ x_j q_j, ∀ j ∈ J,
∑_i ∈ I y_ij≤ x_j M, ∀ j ∈ J,
∑_j ∈ J y_ij≥ 1, ∀ i ∈ I,
∑_j ∈ J x_j ≤ N
∑_j ∈ J∑_l=1^n 1/n y_ij I_jl≥p̅, ∀ i ∈ I
∑_j ∈ J∑_l=1^n 1/n v_ij^k I_jl≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K
In the above formulation, constraint (<ref>) ensures that charging stations can only charge vehicles assigned to them. Constraint (<ref>) ensures the maximum travel time to charge for consumers does not exceed the set threshold d_max. Constraint (<ref>) ensures all charging demands are fulfilled, where w_i^k denotes the number of vehicles of type k to charge at demand point i. Constraint (<ref>) ensures that the charging capacity required to fulfill each station's assigned demand does not exceed the installed capacity. Constraint (<ref>) restricts the number of charging connectors installed in each station. Constraint (<ref>) ensures that demands are assigned only to opened stations. Constraint (<ref>) guarantees that at least one station covers each demand. Constraint (<ref>) limits the maximum number of stations to open.
Finally, constraints (<ref>)-(<ref>) ensure that the probability that at least one of the assigned charging stations serving a given demand is not under an electricity outage is greater than or equal to p̅, assuming that outages at different stations are independent.
§.§ Robust vs. Non-Robust Model
The consideration of p_j in our formulation is part of our attempt to boost the robustness of the original model and address the unique challenges and characteristics of urban areas in developing countries. The Non-Robust Model ignores the disruption probability, resulting in a more simplified model. Our formulation is general, in the sense that we can recover the earlier model by setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯, n}. This earlier model ignores disruption uncertainty and often results in an overly cost-optimized solution that can suffer serious performance degradation when a disruption occurs. Fig. <ref> (left) shows a non-robust solution where only two stations are selected to cover 30+ demand nodes in the city of Surabaya. In this solution, many demand nodes are covered by only one station (no redundancy), and thus, when an electricity disruption hits that charging station, the charging demands will not be met and the residents are served very poorly. Our proposed robust model incorporates the disruption uncertainty and optimizes the location and capacity of EV charging stations while balancing the trade-offs between consumer service level and economic profit. This incorporation maintains a linear objective function and linearized constraints, which still yields an MIP model that can be solved efficiently using standard solvers.
§.§ Improving the Efficiency of Disruption Probability Estimation
While the proposed objective function in (<ref>) is still linear, the sample size n required to achieve high statistical confidence might blow up as the disruption probabilities 1 - p_j, ∀ j ∈ J, become lower (e.g., as the utilities in developing countries mature). Note that our objective essentially estimates p_j by generating enough samples Z_j1, Z_j2, ⋯, Z_jn and computing
p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j),
which can be shown to be unbiased and to converge to p_j.
Under the assumption that the samples Z_j1, Z_j2, ⋯, Z_jn ∼ q are independently and identically distributed and z_j, ∀ j ∈ J, are fixed threshold values, p̂_j is an unbiased and consistent estimator of p_j.
The proof is straightforward but is provided here for completeness.
Unbiasedness:
𝔼[p̂_j] = 𝔼[ 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) ]
= 1/n∑_l=1^n 𝔼[ 𝕀(Z_jl≤ z_j) ]
= 1/n∑_l=1^n p_j
= p_j
where the first equality follows from the definition of p̂_j, the second from the linearity of expectation, the third from the fact that the Z_jl are identically distributed with 𝔼[𝕀(Z_jl≤ z_j)] = p_j, and the last from summing n identical terms.
Consistency:
We know that by the law of large numbers, for any ϵ > 0,
lim_n →∞ℙ(|p̂_j - p_j| ≥ϵ) = 0.
Hence, p̂_j converges in probability to p_j, and thus it is a consistent estimator of p_j.
Suppose that we already have an estimate p̂_j, ∀ j ∈ J. We can now plug the estimate into our optimization problem, giving
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K ( [ r e_k p̂_j v_ij^k ]_revenue - [ s d_ij (1-p̂_j) v_ij^k ]_penalty ) - ∑_j ∈ J [ g u_j + h_j x_j ]_total cost
s.t. Constraint (<ref>)-(<ref>)
∑_j ∈ J y_ijp̂_j ≥p̅, ∀ i ∈ I
∑_j ∈ J v_ij^k p̂_j ≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K .
Note that this formulation using p̂_j, ∀ j ∈ J, is equivalent to the earlier robust model that uses the indicator variables I_jl, ∀ j ∈ J, l ∈{1, 2, ⋯, n}, in the objective function (<ref>).
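To illustrate how the p̂_j-based formulation can be assembled in practice, below is a compact sketch using the open-source PuLP modeler (chosen here only for illustration; it is not necessarily the solver used in this study). The sets, parameter values, and the subset of constraints shown are simplified placeholders rather than the Surabaya instance.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpInteger

# --- toy sets and placeholder parameters (illustration only) ---
I_nodes, J_sta, K_veh = range(4), range(2), range(2)
w = {(i, k): 5 for i in I_nodes for k in K_veh}       # vehicles of type k at node i
d = {(i, j): 10.0 for i in I_nodes for j in J_sta}    # travel times
p_hat = {0: 0.97, 1: 0.93}                            # simulated station reliabilities
r, s, g = 0.1, 0.05, 50.0                             # tariff, penalty rate, connector cost
e = {0: 2.0, 1: 40.0}                                 # kWh per charge, by vehicle type
t = {0: 30.0, 1: 60.0}                                # minutes per charge, by vehicle type
h = {j: 200.0 for j in J_sta}                         # daily station cost
c, q, M, p_bar, N = 1440.0, 8, 10_000, 0.9, 2

prob = LpProblem("robust_ev_charging", LpMaximize)
x = LpVariable.dicts("x", J_sta, cat=LpBinary)
y = LpVariable.dicts("y", (I_nodes, J_sta), cat=LpBinary)
v = LpVariable.dicts("v", (I_nodes, J_sta, K_veh), lowBound=0, cat=LpInteger)
u = LpVariable.dicts("u", J_sta, lowBound=0, cat=LpInteger)

# Objective: expected revenue minus expected penalty minus operating/investment cost.
prob += lpSum(r * e[k] * p_hat[j] * v[i][j][k]
              - s * d[i, j] * (1 - p_hat[j]) * v[i][j][k]
              for i in I_nodes for j in J_sta for k in K_veh) \
        - lpSum(g * u[j] + h[j] * x[j] for j in J_sta)

for i in I_nodes:
    for j in J_sta:
        prob += lpSum(v[i][j][k] for k in K_veh) <= M * y[i][j]   # charge only if assigned
    for k in K_veh:
        prob += lpSum(v[i][j][k] for j in J_sta) == w[i, k]       # fulfil all demand
    prob += lpSum(p_hat[j] * y[i][j] for j in J_sta) >= p_bar     # reliability coverage
for j in J_sta:
    prob += lpSum(t[k] * v[i][j][k] for i in I_nodes for k in K_veh) <= c * u[j]
    prob += u[j] <= q * x[j]
    prob += lpSum(y[i][j] for i in I_nodes) <= M * x[j]           # assign only to open stations
prob += lpSum(x[j] for j in J_sta) <= N

prob.solve()
print({j: x[j].value() for j in J_sta}, prob.objective.value())
```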
§.§.§ Estimating p̂_j to Sufficient Accuracy
While p̂_j is unbiased and consistent, the sample size required to ensure a precise estimate can be arbitrarily large, especially when we want higher accuracy (e.g., when the disruption rate 1-p_j is tiny, such as in developed countries where utility service has high reliability). Suppose we want a δ-accuracy and a 1-α confidence level to estimate p_j = 0.9999. Then, we can use Hoeffding's inequality to determine the sample size. According to Hoeffding's inequality, for any δ > 0, the probability that the estimate deviates from the true value by more than δ is bounded by
ℙ(|p̂_j - p_j| > δ) ≤ 2e^-2nδ^2,
where n is the sample size. Hence, if we want to ensure 1-α confidence level, we set 2e^-2nδ^2 = α, and solve for n
n = [1/(2δ^2)] ln(2/α).
For instance, if we want an accuracy of δ = 0.0001 and a confidence level of 1-α = 0.95, then the required sample size is
n = [1/(2(0.0001)^2)] ln(2/0.05) ≈ 1.8 × 10^8,
which is quite huge. Figure <ref> shows the sample size (in a log_10 scale) for various α and δ values. Note, however, that this is an upper bound and in practice, this sample size is not always necessary.
If we have N := |J| stations and each p_j has to be estimated using n ≈ 1.8 × 10^8 samples, then we will need N × 1.8 × 10^8 samples in total before solving the optimization problem, which can be overly burdensome if each simulation run involves complex systems. Thus, we seek ways to improve efficiency and reduce the variance of the estimator.
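As a quick check of the bound, the sample size implied by Hoeffding's inequality can be tabulated for a few (δ, α) pairs; this is a standalone calculation, not tied to any particular dataset.

```python
import math

def hoeffding_sample_size(delta, alpha):
    """Smallest n such that P(|p_hat - p| > delta) <= alpha by Hoeffding's bound."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * delta ** 2))

for delta in (1e-2, 1e-3, 1e-4):
    for alpha in (0.10, 0.05, 0.01):
        n = hoeffding_sample_size(delta, alpha)
        print(f"delta={delta:g}, alpha={alpha:g}: n={n:,}")
```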
§.§.§ Improving Efficiency via Control Variates
One way to improve the estimation efficiency and thus reduce the sample size is through the use of control variates (CV) <cit.>. CV involves introducing a new variable that is correlated with the random variable of interest and can be easily estimated. The CV is then used to adjust the estimate of the random variable to improve its efficiency by reducing the variance of the estimator using the cheaper-to-compute random variable. In our case, we can use CV to estimate p_j = ℙ(Z_j ≤ z_j). Let g(Z_j) be a function of Z_j that is easy to compute. Specifically, if we consider Gaussian q = N(μ, σ) and Z_j ∼ q, we can use
g(z) = Φ(z)
the CDF of the standard normal distribution, so that the control variate 𝕀(X_jl≤z̅_j) has the known mean g(z̅_j) = Φ(z̅_j). The CV estimator for p_j is computed as
p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) + π_j ( 𝕀 (X_jl≤z̅_j)-g(z̅_j) )
where Z_jl is the l-th sample from the distribution q, the X_jl's are standard normal random variables correlated with the Z_jl's, and z̅_j is a suitably scaled version of z_j used to threshold X_jl. Finally, π_j is chosen to minimize the variance:
π_j = - Cov( ∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/Var(∑_l=1^n 𝕀(X_jl≤z̅_j)).
We can show that the CV estimator is unbiased and achieves variance reductions in the following remarks. The reduction in variance, subsequently, allows us to reduce the sample size to achieve the same level of δ and α.
The CV estimator (<ref>) is unbiased for p_j.
The proof is straightforward, showing 𝔼[p̂_j] = p_j.
𝔼[p̂_j] = 1/n∑_l=1^n𝔼[𝕀(Z_jl≤ z_j)]
+π_j (1/n∑_l=1^n𝔼[ 𝕀(X_jl≤z̅_j)]-g(z̅_j) )
= 1/n∑_l=1^np_j + π_j (1/n∑_l=1^n g(z̅_j) ) - π_j g(z̅_j)
= p_j.
Assuming we can generate highly correlated random variables Z_jl and X_jl simultaneously and choose the optimal π_j (<ref>), the CV estimator (<ref>) attains a variance reduction.
Note that the variance without using CV is
Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)).
With CV, the variance of the estimator is
Var(p̂_j) = 1/n^2( Var(∑_l=1^n𝕀(Z_jl≤ z_j))
+2π_j Cov(∑_l=1^n𝕀(Z_jl≤ z_j),∑_l=1^n𝕀(X_jl≤z̅_j) )
+π_j^2 Var(∑_l=1^n𝕀(X_jl≤z̅_j)) ) .
Plugging in the optimal π_j for our problem and simplifying, we have
Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j))
- Cov^2(∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/n^2 Var(∑_l=1^n 𝕀(X_jl≤z̅_j)).
We can see that the second term on the RHS is non-positive, which means that the variance is reduced the most when 𝕀(Z_jl≤ z_j) and 𝕀(X_jl≤z̅_j) are highly correlated (either positively or negatively), which intuitively means X_jl provides some information about Z_jl. It is important to note, however, that in practice we often use sample covariances and sample variances to compute π_j, so the CV estimator might not fully achieve this theoretical variance reduction.
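As a minimal numerical sketch of the estimator above (not the code used in the study), the snippet below builds Z_jl from a standard normal X_jl with an assumed correlation ρ, uses 𝕀(X_jl ≤ z̅_j) as the control variate with known mean Φ(z̅_j), and sets π_j from sample covariances; because π_j is estimated, the theoretical variance reduction is only approximately attained.

```python
import math
import numpy as np

def cv_reliability_estimate(mu, sigma, z, n, rho=0.9, seed=0):
    """Estimate p = P(Z <= z) by Monte Carlo with a control variate."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n)                       # drives the control variate
    eps = rng.standard_normal(n)
    Z = mu + sigma * (rho * X + math.sqrt(1 - rho**2) * eps)  # correlated with X
    z_bar = (z - mu) / sigma                         # scaled threshold

    Y = (Z <= z).astype(float)                       # indicator of interest
    C = (X <= z_bar).astype(float)                   # control variate
    phi = 0.5 * (1.0 + math.erf(z_bar / math.sqrt(2.0)))  # known mean E[C]

    pi = -np.cov(Y, C)[0, 1] / np.var(C, ddof=1)     # estimated optimal coefficient
    p_mc = Y.mean()
    p_cv = p_mc + pi * (C.mean() - phi)
    return p_mc, p_cv

print(cv_reliability_estimate(mu=80.0, sigma=10.0, z=105.0, n=5_000))
```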
§ NUMERICAL EXPERIMENTS
In this study, we examine the EV and electricity data obtained from Surabaya, Indonesia. The EV dataset includes 11 candidate charging stations, 31 sub-regions of the city representing demand nodes, and two vehicle types, namely motorcycles (k=1) and cars (k=2). Figure <ref> illustrates the locations of the candidate charging stations (red nodes) and demand nodes (blue nodes), where the size of the blue nodes denotes the size of the demand at each location. This charging demand, i.e. the number of EVs of type k at each demand node i, is represented by w_i^k. The average travel time from demand node i to charging station j using vehicle k, d_ij^k, is amassed from Google Maps. The full capacity for each charging connector is considered as c_j=1440 minutes/day for all j ∈ J with 24/7 operational hours and the number of connectors installed in station j ∈ J is limited to q_j=8 for all j ∈ J, due to land availability in the candidate locations.
We estimate the disruption probability by simulating random electricity demands Z = [Z_j]_∀ j ∈ J where Z_j ∼ q_j. We obtain this masked data from the local electricity company, which performed data masking and rescaling for privacy and security reasons. The masked mean and standard deviation of q_j along with demand threshold z_j are summarized in Table <ref>. The simulation uses this probability model to generate random demands and an electricity disruption event is triggered for the whole day at station j when Z_j ≥ z_j. Hence, we have station reliability p_j = ℙ(Z_j ≤ z_j), ∀ j ∈ J. The other experiment parameters are summarized in Table <ref>.
We then build our model by running n simulation replications and computing the mean of the objective function values. The result is summarized in Fig. <ref> and Fig. <ref> for n up
to 10,000. The selected stations and demand assignments for each model solution are shown in Fig. <ref> (left: Non-Robust Model, right: Robust Model) and Fig. <ref> (left: Misspecified Model #1, right: Misspecified Model #2). The Misspecified Model #1 is built assuming 0.95p_j while the Misspecified Model #2 assumes 1.05p_j for all j ∈ J, highlighting underestimation and overestimation of service reliability respectively.
The CV estimator is constructed using standard normal random variables X_jl with z̅_j properly scaled. This yields indicator variables 𝕀(X_jl≤z̅_j) that are highly correlated with 𝕀(Z_jl≤ z_j). We show the estimated station reliability (p_j) using MC and CV in Fig. <ref> and its standard error in Fig. <ref> to highlight the superior estimation efficiency of the CV estimator.
§ DISCUSSION AND FINDINGS
In this section, we discuss our findings regarding the robustness of the optimal solutions against disruptions, even when the disruption probability is misspecified, and the enhanced simulation efficiency that enables robust decision-making under disruption uncertainty. We also highlight the limitations of the model and our outlook for future research.
§.§ Robustness of the Optimal Solutions
Figure <ref> summarizes the objective function values obtained by benchmarking the Robust Model, Non-Robust Model, Misspecified Model #1 (underestimated station reliability), and Misspecified Model #2 (overestimated station reliability). The optimal solution of the Robust Model (represented by orange and brown lines) outperforms the other models. Conversely, the solution of the Non-Robust Model (represented by blue and purple lines) yields the lowest objective value. The Non-Robust Model prioritizes minimizing operational and investment costs, resulting in only two charging stations being opened. This leads to lower revenue and higher penalties, particularly during disruptions. In contrast, the Robust Model balances operational and investment costs with potential revenue losses and penalties incurred during disruptions. As a result, the Robust Model opens three charging stations, distributing the large charging stations across the geography of the city, resulting in an 18% higher total cost than the Non-Robust Model solution. However, it provides better protection against revenue loss and penalties incurred during disruptions. We also suggest that these charging stations implement a smart energy management policy <cit.> for added robustness. The added robustness of the Robust Model leads to a 10% higher revenue and a 60% lower penalty when disruptions occur, yielding an approximately 13% higher overall objective. Figure <ref> shows that the Robust Model's balanced solution covers more demand points with two or more charging stations, resulting in a better revenue and penalty trade-off than the Non-Robust Model.
The Robust Model with misspecified station reliability still provides some level of robustness, as evidenced by the objective values of both the underestimation and overestimation scenarios. These models' solutions have objective values lower than the Robust Model solution but higher than the Non-Robust Model solution. Thus, while accurately estimating station reliability is beneficial, the model can still tolerate imperfections. When utilizing the Robust Model with underestimated station reliability, the solution tends to be more conservative and provides a higher level of buffer against disruptions. This results in a solution with four charging stations, with over 90% of demand points covered by two or more charging stations. On the other hand, overestimating station reliability leads to a solution with only three charging stations, resulting in a lower cost and an objective value very close to the Robust Model. Figure <ref> illustrates the charging station placement for both the underestimated and overestimated scenarios.
§.§ Improved Simulation Efficiency using CV Estimator
We now discuss how we incorporate the simulation into our robust model. The main challenge centers around incorporating the station reliability p_j, ∀ j ∈ J (and thus the corresponding disruption probability 1-p_j, ∀ j ∈ J), which might require a huge sample size to achieve the desired precision level, thus increasing the computational burden of computing the objective function (either (<ref>) or (<ref>)) and the reliability constraints (either (<ref>)-(<ref>) or (<ref>)-(<ref>)).
While both the MC and CV estimators of the objective values are unbiased and converge to the same value for each model, the proposed CV estimation approach effectively reduces the estimation variance, thus yielding tighter confidence intervals in Fig. <ref> (brown, silver, pink, and purple lines vs. orange, red, green, and blue lines). Furthermore, Fig. <ref> highlights that all CV estimators attain about 10× smaller standard errors compared to their MC counterparts. This means that CV improves the simulation efficiency and reduces the sample size required to attain the same precision by up to a factor of 10 compared to the naive MC simulation approach, without accuracy loss.
The superior efficiency of the CV-based estimation technique, which reduces the sample size requirement while maintaining accuracy, allows us to incorporate the estimated station reliability into the objective function and reliability constraints. This results in the proposed Robust Model, which can be solved without significantly increasing the computational cost. The high efficiency of CV over MC in estimating the reliability probabilities (even for values close to 1.00) is emphasized in Fig. <ref>, in which all CV estimates attain much tighter confidence intervals regardless of the target probability. In this estimation, again, the CV estimators attain 10× smaller standard errors for the same sample size used by the MC estimators. This highlights the applicability of our robust modeling method to problems where electricity disruptions are extremely rare and need to be estimated with ultra-high precision.
§.§ Limitation of the Current Work
Although our CV-assisted robust model provides optimal solutions that strike a balance between minimal cost and buffering against electricity disruptions, we acknowledge that scaling it to larger problems, such as a larger charging station candidate set and more fine-grained demand points, heavily relies on the efficiency of the MIP solver. Moreover, we acknowledge that the electricity pricing rate used in this study is simplified, whereas more recent dynamic electricity pricing schemes are available and more realistic, though highly nonlinear. Incorporating such schemes could improve the accuracy of our revenue model, but it may not be feasible with our current solver. Additionally, the CV estimation approach used in this study is based on some prior knowledge about the probability model of the random variable triggering the disruption events. In practice, such knowledge may not be easy to obtain. However, we recognize that machine learning models can be leveraged to extract features from historical datasets and estimate disruption events. We can also leverage machine learning techniques to estimate the battery capacity of the EVs <cit.> to better predict the charging time for each arriving demand, which would allow us to extend our model to incorporate nonlinear dynamics and more realistic operations in future work.
§ CONCLUSION
In this study, we propose a simulation-based optimization model to address the critical issue of designing robust planning for EV charging stations in developing countries, where electricity disruptions may frequently occur and impact customer satisfaction. Our model considers service reliability as a key factor and incorporates it into the objective function and constraints using the control variates (CV) variance reduction technique to improve simulation efficiency. Our numerical experiment, based on a dataset from Surabaya, Indonesia, demonstrates the superior performance of our robust model solution compared to its non-robust counterpart, even in cases of underestimated or overestimated service reliability. While our proposed model shows promise, we acknowledge its reliance on an efficient MIP solver and its use of a simplified electricity pricing rate. Furthermore, our CV estimator is based on prior knowledge of the probability model, which may not be available in practice. As such, we seek to extend our model to cover nonlinear MIP and learning-based disruption estimation in future work. Nonetheless, our model's ability to reduce the required sample size by up to 10× compared to Monte Carlo simulations highlights its potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions.
|
http://arxiv.org/abs/2307.04989v1 | 20230711025828 | Composition constraints of the TRAPPIST-1 planets from their formation | [
"Anna C. Childs",
"Cody Shakespeare",
"David R. Rice",
"Chao-Chin Yang",
"Jason H. Steffen"
] | astro-ph.EP | [
"astro-ph.EP"
] |
We study the formation of the TRAPPIST-1 (T1) planets starting shortly after Moon-sized bodies form just exterior to the ice line. Our model includes mass growth from pebble accretion and mergers, fragmentation, type-I migration, and eccentricity and inclination damping from gas drag. We follow the composition evolution of the planets fed by a dust condensation code that tracks how various dust species condense out of the disc as it cools. We use the final planet compositions to calculate the resulting radii of the planets using a new planet interior structure code and explore various interior structure models. Our model reproduces the broader architecture of the T1 system and constrains the initial water mass fraction of the early embryos and the final relative abundances of the major refractory elements. We find that the inner two planets likely experienced giant impacts and that fragments from collisions between planetary embryos often seed the small planets that subsequently grow through pebble accretion. Using our composition constraints we find solutions for a two-layer model, a planet comprised of only a core and mantle, that match observed bulk densities for the two inner planets b and c. This, along with the high number of giant impacts the inner planets experienced, is consistent with recent observations that these planets are likely desiccated. However, two-layer models seem unlikely for most of the remaining outer planets, which suggests that these planets have a primordial hydrosphere. Our composition constraints also indicate that no planets are consistent with a core-free interior structure.
planets and satellites: composition – planets and satellites: formation – physical evolution – terrestrial planets
§ INTRODUCTION
The TRAPPIST-1 (T1) system consists of a late-M dwarf star that hosts seven tightly packed terrestrial planets <cit.>. The unique and tightly constrained planet masses and orbital distribution of this system have implications for its formation history, and as a result, this system has been widely studied. Observations show that the two innermost planets have the largest masses in the system, while the mass of the outer planets increases with their orbital distance <cit.>. This mass trend has been referred to as a reversed mass ranking and is difficult to explain with current planet formation theories <cit.>. The planets are in a complex resonance chain where the outer four planets are in first-order mean-motion resonances with each adjacent planet and the inner three planets are in higher-order resonances (8:5 and 5:3). Three-body Laplace resonances also exist throughout the system <cit.>.
Terrestrial planets can primarily grow their mass though core accretion <cit.> or through pebble accretion <cit.>. <cit.> numerically modeled both mechanisms around a T1-like star to understand if planetary systems around low-mass stars preferentially form through either planetesimal or pebble accretion. They found that while both planetesimal and pebble accretion form similar planetary systems, planets that formed through planetesimal accretion had a much larger water content than the planet analogues that formed through pebble accretion. Thus, constraints on the water mass fraction (WMF) and bulk densities of the T1 planets can provide insight into their formation history.
Measurements from transit-timing variations and dynamical modeling helped constrain the bulk densities of the planets <cit.>. These studies showed that the planets have similar densities and are consistent with rocky worlds with water mass fractions (WMF) <20%, suggesting that all the planets formed in a similar manner and that their primary growth mode was via pebble accretion.
<cit.> proposed that the formation of the T1 system first took place at the ice line where planetesimals formed by the streaming instability. The streaming instability is when solid particles concentrate into dense filaments and undergo gravitational collapse into numerous bound objects. Planetesimals up to ∼100 km may form in this way <cit.>. After the initial planetesimals form, they continue to grow by pebble accretion <cit.>. Each planet then undergoes inward type-I migration and accretes silicate pebbles once it is inside the ice line. In this manner, the planets form sequentially at the ice line. The innermost planets stall near the inner edge of the gas disc, which is set by the star's magnetosphere.
The <cit.> formation theory has been the most widely accepted formation theory for the T1 system and as a result, it has been extensively tested. <cit.> analytically tested this theory by evolving protoplanets in a gas disc that begin at the ice line and grow their mass via pebble accretion while migrating in. Protoplanets sequentially appear every protoplanetary appearance timescale of ∼ 10^5 orbital periods at the ice line. This approach was able to reproduce the multiplicity and resonance structure of the T1 system. However, analytical approaches neglect important effects such as two body interactions and the study was not able to reproduce the mass distribution of the T1 system.
<cit.> tested the plausibility of the T1 planets forming sequentially at the ice line by numerically modeling the inward migration of the fully formed T1 planets from the ice line, using the n-body code rebound <cit.> and reboundX <cit.> to model the effects of an evolving gas disc on the planets. <cit.> demonstrated that if the T1 planets were sequentially produced and migrated inwards, the planets naturally converged into a chain of first-order resonances. Modeling a migration barrier in the inner gas-free cavity, where planets were pushed further inwards by an outer Lindblad torque from the gas disc, tidal forces, and orbital damping from gas drag, they were able to reproduce the observed two-body and three-body resonances of the system. However, this mechanism does not explain the composition of the planets, since beginning with fully formed planets at the ice line may imply a water content much larger than what is observed.
<cit.> numerically modeled the formation of the T1 system starting with results from a Lagrangian dust evolution code that modeled the formation of planetesimals via the streaming instability <cit.>. They tracked the growth of planetesimals into planets via pebble accretion and planetesimal accretion using a modified version of the n-body code mercury <cit.> which included pebble accretion and the gas effects of type-I migration and aerodynamic drag. Once the planets migrated to shorter orbits, they modeled the final stage of mass evolution semi-analytically in order to reduce computation time. This approach reproduced the general mass distribution of the T1 planets, but not the orbital architecture as the last stages were not modeled numerically.
<cit.> also successfully reproduced the masses of the T1 planets by numerically modeling the growth of embryos in a gas disc that loses mass from photoevaporation and disc winds in addition to gas accretion onto the central star. The more complex temporal evolution of this gas disc results in an initially fast and then, once the surface density profile flattens out, slower migration of the embryos. They found that this fast-then-slow migration resulted in systems that display the reversed mass-ranking trend found in T1. However, their simulations began with relatively large embryos that had already reached the pebble isolation mass, distributed between 0.015–0.2 au, after the disc had evolved for 1 Kyr.
Although previous n-body studies have reproduced the broader features of the T1 system, it has proven difficult to reproduce the planet densities and masses, and the orbital architecture of the system when starting from an early stage in planet formation. In this paper, we use a suite of numerical tools to constrain the bulk compositions of the planets by following their formation process starting just after Moon-sized bodies form and up until the gas disc dissipates, while reproducing the observed planet densities. Detailed modeling of this period allows us to follow the composition of the planets throughout their formation process. While cometary impactors can build or destroy the atmospheres of the T1 planets at a later time <cit.>, and other late-stage mechanisms can alter the surface properties of the planets, we provide constraints on the bulk compositions of the T1 planets from their formation.
We improve upon previous work by using a disc model that changes in time and has not yet been used to study the T1 system, resolving solid body collisions in a more realistic manner (i.e. fragmentation), and by using the most up-to-date prescriptions for pebble accretion. Furthermore, we provide the abundances of the refractory elements of the T1 planets for the first time and use these results to probe the interior structure of the planets using a new planetary interior structure code.
We present a new module for reboundX that tracks pebble accretion growth, type-I migration, and eccentricity and inclination damping from gas drag. Our simulations also model the fragmentation of solid bodies involved in collisions <cit.>. We place tight constraints on the composition of the bodies by simultaneously modeling the composition of the accreted dust pebbles as a function of location and time. The composition of the dust is determined by a dust condensation code by <cit.> that tracks how dust condenses out of an evolving protoplanetary disc as it cools in time. We use our final constraints on bulk composition to calculate the planet radius and density using the planetary interior structure code Magrathea <cit.>.
In Section <ref> we describe our evolving gas disc and our prescriptions for pebble accretion, type-I migration, and eccentricity and inclination damping. We also describe the model we use to track the composition evolution of the bodies. In Section <ref> we describe our n-body setup. In Section <ref> and Section <ref> we lay out our results and in Section <ref> we discuss the implications of these results and caveats of our models. Lastly, we summarise our results in Section <ref>.
§ MODELS
In this section we discuss our models for disc evolution, mass growth via pebble accretion, and the effects of gas on the dynamics of the solid bodies. We include gas effects throughout the duration of the simulation because the T1 planets are thought to have formed in less than a few Myr, a timescale shorter than the disc lifetime <cit.>. However, the effects from the gas disc (i.e. pebble accretion, type-I migration, eccentricity and inclination damping) are turned off after a body moves inwards past 0.02 au because the disc is thought to be truncated by the magnetosphere of the star <cit.>. We implement these prescriptions into a new module for reboundX <cit.> that works in tandem with the n-body integrator rebound <cit.>. We also describe how we track the evolution of the composition of the planets throughout the planet formation process, which is done as a post-processing step.
§.§ Disc Evolution
Our disc evolution model is based on the accretion and evolution of T Tauri discs from <cit.>. We adopt parameter values that are most fitting for the T1 system. The surface density for a gas disc with mass M_ d follows
Σ(r,t_1) = M_ d/(2 π R_1^2) (r/R_1)^-1 t_1^-3/2 e^-(r/R_1)/t_1
with orbital distance r from the star and a radius scale of R_1. R_1 is the radius within which ∼0.6 of the disc mass resides initially. We set R_1=500 au and the initial disc mass M_ d=0.03 M_⋆. Observations of CO line emissions suggest gas discs are between 100-1000 au in size (see Figure 4 of <cit.>). We choose an intermediate value that results in reasonable migration timescales of our starting solid bodies. t_1 is a dimensionless time defined as
t_1=t/t_ s+1
at time t. The viscous timescale for the gas disc is
t_ s = R_1^2/(3ν_1),
where ν_1 = ν(R_1) is the kinematic viscosity at r = R_1.
Equation (<ref>) assumes that the disc is vertically isothermal and has a radial temperature profile of T(r) ∝ r^-1/2. As in the minimum mass solar nebula <cit.>, we adopt the temperature profile of
T(r)=280(r/ au)^-1/2(L_⋆/L_⊙)^1/4 K
where L_⋆ is the luminosity of the star. For consistency with previous studies of the T1 system, we place the ice line at r_ ice=0.1 au. Assuming the ice line corresponds to where the temperature is 170 K, Equation (<ref>) leads to a stellar luminosity of L_⋆≈ 1.5 × 10^-3 L_⊙. This luminosity is three times larger than the current luminosity of the T1 star, and therefore our model considers a much earlier time in the life of the star. While the location of the ice line will change in time we opt to use a fixed location of the ice line for simplicity.
The viscosity of the gas disc is prescribed by ν=α c_ s H, where α is a dimensionless constant, c_s the speed of sound, and H the vertical scale height of the gas <cit.>.
The speed of sound is given by
c_ s=√(k_ B T(r)/μ m_ H)
where k_ B is the Boltzmann constant, μ = 2.34 is the mean molecular weight of the gas, and m_H is the hydrogen mass. In turn, the vertical scale height of the gas is given by
H = c_ s/Ω,
where Ω=√(G M_⋆/r^3) is the Keplerian angular frequency. We adopt α=1 × 10^-4, and along with R_1, Equation (<ref>) sets the timescale for the viscous evolution of the gas disc.
Finally, we assume that the column density of the pebbles is determined by a constant dust-to-gas ratio, that is,
Σ_ p(r)=0.01f_ pΣ(r)
where f_ p is a constant scale factor. We adopt a dust-to-gas ratio of 1% and hence f_p = 1.
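For reference, a short sketch (in SI units) evaluating the disc profiles above with the fiducial choices quoted in the text (M_d = 0.03 M_⋆, R_1 = 500 au, α = 10^-4, μ = 2.34, L_⋆ ≈ 1.5 × 10^-3 L_⊙); the TRAPPIST-1 stellar mass of ≈0.09 M_⊙ is an assumed value not quoted above.

```python
import numpy as np

# Physical constants (SI) and assumed stellar/disc parameters.
G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
M_sun, L_sun, au, yr = 1.989e30, 3.828e26, 1.496e11, 3.156e7
M_star = 0.09 * M_sun            # assumed TRAPPIST-1 mass (not quoted in the text)
L_star = 1.5e-3 * L_sun          # early-time luminosity adopted above
M_disc, R1, alpha, mu = 0.03 * M_star, 500 * au, 1e-4, 2.34

def temperature(r):
    """T(r) = 280 K (r/au)^-1/2 (L_star/L_sun)^1/4."""
    return 280.0 * (r / au) ** -0.5 * (L_star / L_sun) ** 0.25

def sound_speed(r):
    return np.sqrt(k_B * temperature(r) / (mu * m_H))

def scale_height(r):
    return sound_speed(r) / np.sqrt(G * M_star / r**3)

def surface_density(r, t):
    """Gas surface density Sigma(r, t) of the viscously evolving disc."""
    nu1 = alpha * sound_speed(R1) * scale_height(R1)   # viscosity at R1
    t_s = R1**2 / (3.0 * nu1)                          # viscous timescale
    t1 = t / t_s + 1.0
    return (M_disc / (2.0 * np.pi * R1**2)) * (R1 / r) * t1**-1.5 \
        * np.exp(-(r / R1) / t1)

r_ice = 0.1 * au                     # adopted ice-line location
print(temperature(r_ice))            # ~170 K, consistent with the ice line
print(surface_density(r_ice, 1e5 * yr), 0.01 * surface_density(r_ice, 1e5 * yr))
```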
§.§ Pebble accretion
Mass growth of a planetesimal via pebble accretion is separated into either the Bondi regime or the Hill regime, depending on the size of the pebbles and the mass of the central planetesimal <cit.>. Pebbles move from the Bondi regime to the Hill regime once the radius of the central planetesimal reaches the transition radius,
R_ t(r)=1160 km ( r/5 au)^1/2 ( Δ v/30 m s^-1 ) ( ρ_ pl/2×10^3 kg m^-3 )^-1/3
where ρ_ pl is the bulk density of the planetesimal and Δ v can be approximated by the sub-Keplerian speed of the gas when the pebbles are at least marginally coupled to the gas <cit.>. In our models, Δ v is then
Δ v = -(1/2) (H/r) (∂ln P/∂ln r) c_ s ≈ (11/8) c_ s^2/v_ K = 29.9 m s^-1,
where P is the pressure in the mid-plane of the disc and v_ K= √(GM_⋆/r) is the Keplerian velocity, and we have used in the last two steps the profiles in Section <ref>.
In the 3D-Bondi branch, mass growth from pebble accretion proceeds as
Ṁ = 8.4 × 10^-3 M_⊕ Myr^-1 f_ p ( m_ pl/10^-4M_⊕)^2 ( Δ v/30 m s^-1)^-3 (H_ p/H/0.1)^-1 (H/r/0.05)^-1 (r/5 au )^-2
Once the radius of the planetesimal reaches the transition radius, mass growth proceeds in the 2D-Hill branch as
Ṁ = 2 R_ accΣ_ p ( Δ v + Ω R_ acc ),
where R_ acc is the radius from which the planetesimal accretes pebbles.
The accretion radius can be approximated by
R_ acc = (Ωτ_ f/0.1)^1/3 R_ Hexp [-0.4 ( τ_ f/ t_ p )^0.65 ],
where R_ H=(m_ pl/3M_⋆)^1/3r is the Hill radius, t_ p=Gm_ pl/(Δ v + Ω R_ H)^3 is the characteristic timescale for pebbles to pass by the planetesimal, and τ_ f is the friction time <cit.>.
Mass growth continues on the 2D-Hill branch until the planetesimal reaches the isolation mass when the planetesimal is large enough to induce a pressure bump exterior to its orbit which halts the incoming flux of pebbles. We adopt the pebble isolation mass (PIM) from <cit.>,
PIM = (H/r)^3 √(37.3 α + 0.01)× [ 1 + 0.2 ( √(α)/H/r√(1/τ_ s^2+4))^0.7 ] M_⋆,
where τ_s≡Ωτ_f is the dimensionless stopping time. The dimensionless stopping time τ_s of the pebbles depends on their location and composition <cit.>. We use τ_ s=0.1 at or exterior to the ice line, approximating the radial drift barrier, and τ_ s=0.001 interior to the ice line which roughly corresponds to the fragmentation barrier <cit.>. We do not model a change in the pebble surface density profile at the ice line. The assumption of a constant pebble flux is not trivial as planetesimals that form near the ice line will reduce the pebble flux in the inner disk <cit.>.
Once a given body reaches the PIM, all other bodies interior to it also stop growing by pebble accretion. When the bodies accrete pebbles we do not decrease the pebble density in the local region, since the pebble accretion rate does not appear to be comparable to, let alone exceed, the radial pebble flux often needed in modeling <cit.>. On the other hand, we do track the total pebble mass: all bodies stop growing by pebble accretion once the cumulative accreted mass reaches the total dust mass.
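As a concrete illustration of these prescriptions, a short self-contained sketch evaluating the transition radius and the pebble isolation mass at the ice line; the stellar mass, luminosity, and planetesimal density are assumed values consistent with the scalings quoted above.

```python
import numpy as np

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
M_sun, L_sun, au, M_earth = 1.989e30, 3.828e26, 1.496e11, 5.972e24
M_star, L_star = 0.09 * M_sun, 1.5e-3 * L_sun   # assumed stellar parameters
mu, alpha = 2.34, 1e-4

def aspect_ratio(r):
    """H/r = c_s / v_K for the adopted temperature profile."""
    T = 280.0 * (r / au) ** -0.5 * (L_star / L_sun) ** 0.25
    c_s = np.sqrt(k_B * T / (mu * m_H))
    return c_s / np.sqrt(G * M_star / r)

def transition_radius_km(r, dv=29.9, rho_pl=2.0e3):
    """Planetesimal radius separating the Bondi and Hill accretion regimes."""
    return 1160.0 * (r / (5.0 * au)) ** 0.5 * (dv / 30.0) * (rho_pl / 2.0e3) ** (-1.0 / 3.0)

def pebble_isolation_mass(r, tau_s):
    """PIM of the fit quoted above, returned in Earth masses."""
    h = aspect_ratio(r)
    bump = 1.0 + 0.2 * (np.sqrt(alpha) / h * np.sqrt(1.0 / tau_s**2 + 4.0)) ** 0.7
    return h**3 * np.sqrt(37.3 * alpha + 0.01) * bump * M_star / M_earth

r_ice = 0.1 * au
print(transition_radius_km(r_ice))            # km
print(pebble_isolation_mass(r_ice, 0.1))      # M_earth, using tau_s = 0.1 at the ice line
```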
§.§ Type-I migration and gas drag
Angular momentum exchange via spiral density waves causes the planetesimals to migrate inwards via type-I migration, and gas drag dampens the eccentricity and inclination of a planetesimal. <cit.> empirically derived expressions for the accelerations a body experiences in a gas disc, which can be implemented into n-body codes. The accelerations a body experiences from type-I migration, eccentricity damping, and inclination damping are
a_ m=-v/t_ m,
a_ e=-2(v·r)r/r^2t_ e,
a_ i = -(v_z/t_ i) k,
respectively. k is the unit vector in the z-direction and v and r are the velocity and position vectors of the body. The timescales associated with each of these accelerations are scaled by the damping timescale
t_ wave = (M_⋆/m_ pl) [M_⋆/(Σ(r) r^2)] ( H/r)^4 Ω^-1,
from <cit.>. The eccentricity damping time is
t_e=t_ wave/0.780×
[1-0.14(e/H/r)^2 + 0.06(e/H/r)^3 +0.18(e/H/r) (i/H/r)^2 ],
and the inclination damping time is
t_ i=t_ wave/0.544×
[1-0.30(i/H/r)^2 + 0.24(i/H/r)^3 +0.14(i/H/r) (e/H/r)^2 ].
The type-I migration timescale is
t_ m = [2 t_ wave/(2.7+1.1 β)] ( H/r)^-2 ( P(e) + [P(e)/|P(e)|] [ 0.07( i/(H/r)) + 0.085( i/(H/r))^4 - 0.08( e/(H/r)) ( i/(H/r))^2 ] )
where
P(e) = [1+( e/(2.25 H/r))^1.2 + ( e/(2.84 H/r))^6]/[1-( e/(2.02 H/r))^4],
and Σ(r) ∝ r^-β. Following our surface density profile adopted in Section <ref>, we set β=1.
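To make the damping prescriptions concrete, a brief sketch evaluating t_wave and the associated timescales for a single body; the embryo mass, local surface density, aspect ratio, eccentricity, and inclination used here are illustrative placeholders.

```python
import numpy as np

G = 6.674e-11
au, M_sun, M_earth, yr = 1.496e11, 1.989e30, 5.972e24, 3.156e7

def damping_timescales(m_pl, r, e, inc, Sigma, M_star, h, beta=1.0):
    """t_wave, t_e, t_i and t_m following the expressions above (SI units)."""
    Omega = np.sqrt(G * M_star / r**3)
    t_wave = (M_star / m_pl) * (M_star / (Sigma * r**2)) * h**4 / Omega
    eh, ih = e / h, inc / h
    t_e = t_wave / 0.780 * (1 - 0.14 * eh**2 + 0.06 * eh**3 + 0.18 * eh * ih**2)
    t_i = t_wave / 0.544 * (1 - 0.30 * ih**2 + 0.24 * ih**3 + 0.14 * ih * eh**2)
    P = (1 + (eh / 2.25)**1.2 + (eh / 2.84)**6) / (1 - (eh / 2.02)**4)
    bracket = P + (P / abs(P)) * (0.07 * ih + 0.085 * ih**4 - 0.08 * eh * ih**2)
    t_m = 2.0 * t_wave / (2.7 + 1.1 * beta) / h**2 * bracket
    return t_wave, t_e, t_i, t_m

# Illustrative values: a Mars-mass embryo at 0.1 au around a 0.09 M_sun star.
tw, te, ti, tm = damping_timescales(m_pl=0.1 * M_earth, r=0.1 * au, e=0.01, inc=0.005,
                                    Sigma=750.0, M_star=0.09 * M_sun, h=0.03)
print([x / yr for x in (tw, te, ti, tm)])      # timescales in years
```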
Dynamical studies have shown that rapid migration of the fully formed planets is needed to break out of various three-body mean-motion resonances (MMRs) before arriving in their current resonant chain. Because these fast migration rates are needed to reproduce the resonant structures, an efficient stalling mechanism may have been present in the inner region of the disc to prevent the planets from falling into the star. Rapid migration of the fully formed T1 planets can naturally explain the first-order MMRs in the system <cit.>, but the inner two planets are observed to be in higher-order MMRs, which indicates divergent migration in the inner disc. <cit.> demonstrated that divergent migration can happen close to the star from magnetospheric rebound. <cit.> were able to reproduce the T1 resonant structure by modeling a strong negative torque in the inner cavity, although this divergent torque was not physically motivated. These studies considered the dynamics of the fully formed T1 planets, when less gas is present. Migration timescales earlier in the formation process are likely much shorter because more gas is present, but scattering and resonances may help reduce the migration rates.
To reproduce the stalling mechanism thought to exist in the T1 system, we use the “inner_disc_edge” module in reboundX by Kajtazi et al. (in prep.). This module applies an inner disc edge that functions as a planet trap by exerting a torque of roughly equal magnitude and opposite sign on a migrating body that enters the trap. We do not allow bodies to migrate past the orbit of the innermost T1 planet (∼0.01 au) by setting the inner disc edge to 0.01 au and the width of the planet trap to 0.01 au; the region in which this planet trap operates is therefore 0.01-0.02 au. All parameter choices for our fiducial disc evolution model are listed in Table <ref>.
§.§ Composition evolution
We use the dust condensation code by <cit.>, which models how dust condenses out of an evolving protoplanetary disc as the disc cools. The dust condensation code gives the initial elemental and mineral distributions of the protoplanetary disc, which determine the composition of embryos that form at different orbital distances. We then follow the composition evolution of the embryos as they collide with one another and grow via pebble accretion to form planets. The formation location and collision history of the embryos determine the resulting composition of the planets. The final Fe/Si molar ratio is used in Magrathea <cit.> to determine the mass fraction of the planet's iron core and the planet's radius.
The dust condensation model is run independently of the evolution models discussed in previous sections. We use the dust condensation model for a solar-type star as presented in <cit.> as this is the system the code was developed for. We encountered difficulties converting the surface density profile to the one shown in Equation <ref> and using disc parameters more compatible with an M-dwarf system. We use the solar abundances in the condensation code since the measured metallicity of T1 is similar to the Sun's <cit.>. For these reasons, we use the dust composition profile from a Sun-like star at a single epoch and re-scale the results to fit our T1 disc. Successfully modeling dust condensation in an M-dwarf disc and comparing results with the re-scaled Sun-like disc, used here, will be the subject of future work.
Following the same solar model of <cit.>, the surface density profile for the dust condensation evolution is
Σ (r,t)= [M_ disc/π r_0^2(t)] exp{-[r/r_0(t)]^2},
where M_ disc=0.21M_⊙ and r_0(t) is the characteristic disc radius. This disc mass corresponds to a disc around a Sun-like star immediately after formation. High accretion rates onto the star quickly deplete the disc mass, and after one evolution timescale (∼ 26 Kyr) the disc mass is less than 20% of its initial value (see <cit.> Figure 2).
The temperature profile is
T^4=3Gτ M_* Ṁ/(64πσ_SB r^3),
where Ṁ is the gas accretion rate onto the star, σ_SB is the Stefan-Boltzmann constant, the optical depth is τ=κΣ/2, and κ=4 cm^2 g^-1 is the opacity of the Solar nebula, the same value used in <cit.>. This temperature profile is for a disc dominated by viscous heating <cit.>. The disc is not in a steady state and the mass accretion rate changes in time (see <cit.> Equation 8).
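For reference, a short Python sketch of these two profiles is given below. The characteristic radius r_0(t) and gas accretion rate Ṁ(t) evolve in time following <cit.> and are simply passed in as inputs here; the cgs constants and function names are our own choices and are not part of the condensation code itself.

import numpy as np

G = 6.674e-8            # gravitational constant, cgs
sigma_SB = 5.6704e-5    # Stefan-Boltzmann constant, cgs
M_sun = 1.989e33        # solar mass in g

def sigma_cond(r, r0, M_disc=0.21 * M_sun):
    # Gaussian-tapered surface density of the condensation disc
    return M_disc / (np.pi * r0**2) * np.exp(-(r / r0)**2)

def temperature(r, r0, Mdot, M_star=M_sun, kappa=4.0):
    # Viscously heated mid-plane temperature with optical depth tau = kappa*Sigma/2
    tau = kappa * sigma_cond(r, r0) / 2.0
    return (3.0 * G * tau * M_star * Mdot /
            (64.0 * np.pi * sigma_SB * r**3))**0.25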
Finally, we re-scale the abundance distribution from the solar model to fit the size of our T1 disc by normalizing the locations of the ice line between the two discs. The re-scaled dust distributions for 12 elements are shown in Figure <ref>. For more details on how the dust condensation results are re-scaled and the validity of extrapolating results from the solar system, see Appendix <ref>.
The chemical equilibrium of condensing dust is modeled with GRAINS <cit.> which includes 33 different elements that form 520 condensed and 242 gaseous species. The combined disc evolution and chemical equilibrium code returns the relative abundance of the elements and condensed species as a function of orbital distance in the disc, at different times. Further details on the dust evolution model can be found in <cit.>.
Using this code, we determine the dust composition of our disc at any orbital distance to track the composition evolution of the solid bodies. Our composition tracking code for the solid bodies is based on the composition tracking code of <cit.> and includes composition changes from pebble accretion. The bodies all begin just exterior to the ice-line and we experiment with different refractory element:water-ice ratios. The refractory elements for a given embryo are determined by the dust composition at the embryo's orbital distance, as determined by the condensation code.
When two bodies collide, the target is the more massive one involved in the collision, with mass M_ t and initial composition
X = (x_1,x_2,...,x_n)
where x_i is the relative abundance of the i^ th species such that
∑_i=1^n x_i =1.
The projectile is the less massive body involved in the collision, with mass M_ p and initial composition
Y = (y_1,y_2,...,y_n)
where y_i is the relative abundance of the i^ th species such that
∑_i=1^n y_i =1.
If the collision results in an elastic bounce with no mass exchange, then the composition of each body remains the same. If the collision results in a merger or partial accretion of the projectile, the composition of the target becomes
𝐗' = (M_ t𝐗 + M_ p'𝐘)/(M_ t + M_ p'),
where M_ p' is the mass of the projectile that is accreted by the target. If any fragments are produced from the projectile, they are assigned the composition of the projectile.
If the target is eroded, then the composition of the target remains the same but its mass is reduced to M_ lr. The composition of the new fragment(s) becomes
𝐗' = (M_ diff𝐗 + M_ p𝐘)/(M_ tot-M_ lr),
where M_ tot=M_ t + M_ p and M_ diff=M_ t -M_ lr.
We chronologically resolve all the collisions, using the prescription described above, and update the compositions of all the bodies according to the amount and location of the pebbles the body accumulated over the last 100 years. The amount of pebbles a body has accumulated over 100 years, M_ peb, is the mass difference between the body at time t and time t+100 years, after all collisions have been accounted for. The relative abundances of the pebbles is given by,
𝐘_ peb = (y_ peb, 1,y_ peb, 2,...,y_ peb, n),
where y_ peb, i is the relative abundance of the i^th species for the pebble and
∑_i=1^n y_ peb,i =1.
𝐘_ 𝐩𝐞𝐛 is determined by the radial location of the pebble at time t and the output from the dust condensation code. The new composition of a body, which we refer to as the target, from pebble accretion is set by
𝐗'= (M_ t𝐗 + M_ peb𝐘_ peb)/(M_ t + M_ peb).
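The composition bookkeeping above reduces to two mass-weighted averages. The short Python sketch below implements the accretion and erosion updates exactly as written; the pebble-accretion update has the same form as the accretion case with Y = Y_peb and M_p' = M_peb. The function names are ours, and the final renormalisation is only a guard against rounding error.

import numpy as np

def accretion_composition(M_t, X, M_acc, Y):
    # Target composition after accreting a mass M_acc with composition Y
    # (mergers, partial accretion, or pebble accretion all have this form).
    X_new = (M_t * np.asarray(X) + M_acc * np.asarray(Y)) / (M_t + M_acc)
    return X_new / X_new.sum()

def erosion_fragment_composition(M_t, X, M_p, Y, M_lr):
    # Composition of the fragments when the target is eroded down to the
    # largest-remnant mass M_lr; the target itself keeps composition X.
    M_tot = M_t + M_p
    M_diff = M_t - M_lr
    Z = (M_diff * np.asarray(X) + M_p * np.asarray(Y)) / (M_tot - M_lr)
    return Z / Z.sum()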
§ N-BODY SETUP
In our n-body simulations we begin with 30 embryos of 0.01 M_⊕ each, exterior to the ice line. The number of embryos is chosen so that the total initial embryo mass is 5% of the mass of the T1 planetary system. We distribute the embryos in an annulus just exterior to the ice line, between 0.1-0.15 au. The embryos follow a different surface density profile than the gas because they preferentially form at the ice line, and the formation of these large solid bodies decouples them from the gas disc to some degree. As a result, we choose for the embryos to follow a surface density profile of Σ_ pl∝ r^-3/2, which is a commonly used profile for the starting bodies in studies of the solar system <cit.>.
We adopt a density of 1.5 g cm^-3 for our embryos, which is consistent with 50% ice and 50% rock. We also experiment with multiple initial WMFs to find values that result in planet radii that match the observations. We tested initial WMFs of 15%, 20%, and 25% and find that an initial WMF of 20% for the planetesimals results in planet radii in better agreement with the observed T1 planet radii (see Section <ref>). As a result, we report results for two different initial compositions: the starting composition of an embryo is either 50% water-ice and 50% of the dust composition found at the embryo's initial radial location, or 20% water-ice and 80% of the dust composition found at the embryo's initial radial location.
All of the orbital elements for each body are chosen randomly from uniform distributions. The eccentricities (e) are distributed between 0.0-0.1 and the inclinations (i) between 0.0^∘-0.5^∘. Because our model assumes planetesimals form in a narrow annulus just exterior to the ice line, and motivated by the results of <cit.> and <cit.>, we use a relatively large value for the initial eccentricity (see Section <ref> for more details on this point). The longitude of ascending node (Ω), argument of pericenter (ω), and mean anomaly (f) are all distributed between 0^∘-360^∘.
Using the n-body code rebound and the reboundX module described above, we integrate 100 runs using the mercurius hybrid integrator. We change the random seed in each run to vary the orbital elements of the particle disc, and we integrate each run for 3 Myr.
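A minimal sketch of one such realisation of the initial conditions is shown below. It only sets up the star and embryos and chooses the integrator; the custom pebble-accretion, disc, and fragmentation modules are not included, and the stellar mass, unit system, timestep, and the sampling of semi-major axes from Σ_pl ∝ r^-3/2 are our own illustrative choices rather than the exact production settings.

import numpy as np
import rebound

def build_sim(seed, n_emb=30, m_emb=0.01 * 3.0e-6, a_in=0.1, a_out=0.15):
    """One realisation of the initial conditions: 30 embryos of 0.01 M_Earth
    in an annulus exterior to the ice line, a drawn from Sigma_pl ~ r^(-3/2),
    e in [0, 0.1], i in [0, 0.5 deg], remaining angles uniform in [0, 360) deg."""
    rng = np.random.default_rng(seed)

    sim = rebound.Simulation()
    sim.units = ("yr", "AU", "Msun")
    sim.integrator = "mercurius"
    sim.add(m=0.09)                      # TRAPPIST-1-like central star (assumed)

    # Sigma_pl ~ r^(-3/2) implies dN/da ~ a^(-1/2); sample via the inverse CDF
    u = rng.uniform(size=n_emb)
    a = (a_in**0.5 + u * (a_out**0.5 - a_in**0.5))**2

    for ak in a:
        sim.add(m=m_emb, a=ak,
                e=rng.uniform(0.0, 0.1),
                inc=np.radians(rng.uniform(0.0, 0.5)),
                Omega=rng.uniform(0.0, 2.0 * np.pi),
                omega=rng.uniform(0.0, 2.0 * np.pi),
                M=rng.uniform(0.0, 2.0 * np.pi))
    sim.move_to_com()
    sim.dt = 5.0e-3                      # a few per cent of the innermost period (illustrative)
    return sim

sim = build_sim(seed=1)
sim.integrate(3.0e6)   # 3 Myr in these units; accretion, damping, and fragmentation omitted here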
Unless they are inside the inner cavity, which extends out to 0.01 au, all bodies experience growth via pebble accretion (until they reach the PIM), type-I migration, and eccentricity and inclination damping at all times.
Solid bodies are also free to interact with one another, and collisions are resolved with fragmentation. We set the minimum fragment mass to 0.01 M_⊕; a smaller minimum fragment mass would be more realistic, but this value is chosen because of computational limitations. The fragmentation model we implement is detailed in <cit.>.
§ SYSTEM ARCHITECTURE AND FORMATION HISTORY
Of the 100 runs we conducted, we first focus our analysis on the runs that returned systems with at least six planets, since we are interested in systems similar to T1. We define a planet as a body having a mass greater than or equal to 0.2 M_⊕. We find 24 runs that meet this criterion: nine runs with six planets, two with seven planets, eight with eight planets, four with nine planets, and one with 11 planets. Interestingly, our model is four times more likely to produce eight planets than seven. By analyzing the three-body resonance angles throughout the T1 system, the existence of an outer eighth planet in the system has been predicted <cit.>. While <cit.> did not find evidence of TTVs from an eighth planet over a limited range of exterior orbital radii, our models produce systems with an eighth planet that has a mass similar to T1-h and is usually found exterior to the ice line.
Table <ref> lists the planet properties for each planet that formed in these 24 runs. We list the simulation run number, the planet multiplicity (No.), and the mass (M_ p), semi-major axis (a_ p), eccentricity (e), inclination (i), water mass fraction (WMF), and the iron (Fe), magnesium (Mg), silicon (Si), and oxygen (O) mass fractions (along with eight other elements) for each planet, built from bodies that begin with either a 20% or a 50% WMF. Lastly, we list the fraction of the final planet mass that came from pebble accretion (Peb), fragments (Frag), and embryo accretion (Em).
§.§ Mass distributions
Figure <ref> shows the mass (M_ p) and semi-major axis (a_ p) of the planets that formed in the 24 runs. The T1 planets are shown in orange and all the remaining bodies at t=3 Myr in a given run are shown in blue. The size of each dot is proportional to the mass of the planet. The PIM is marked by a black line and the ice line by a vertical blue line for reference. The average total planetary mass of our simulated systems is 5.64 M_⊕, while the total mass of the T1 system is 6.45 M_⊕. On average, each run grows its total initial embryo mass of 0.3 M_⊕ by almost a factor of 19 (to ∼ 5.64 M_⊕) through pebble accretion.
In the T1 system, the inner two planets are the most massive, and a reversed mass ranking is found for planets d-g, where planet mass increases with semi-major axis. In 13 of the 24 runs, the innermost planets are the most massive planets in the system. The bodies that first undergo runaway accretion and accrete the most embryos at the start of the simulation are the first to migrate inwards. As the mass of a body increases, so does its migration rate, and the body migrates inwards until it either reaches the inner cavity or is trapped by resonances with inner planets. Since there are no resonances to halt the inward migration of the first protoplanets, material builds up at the inner edge, which leads to collisions and accretion that eventually build the more massive inner planets.
The innermost planets typically accrete more embryos, which allows them to grow more massive than the PIM in the inner disc region. This finding is in agreement with <cit.>, although <cit.> explored a formation pathway more akin to in-situ planet formation whereas we start with smaller bodies only exterior to the ice line. The bodies that avoid mergers at the start of the simulation migrate inwards at a later time, when there are fewer planetesimals and embryos available to accrete, and thus grow the majority of their mass via pebble accretion. These subsequently formed planets grow quickly to the PIM once they cross the ice line. As a result, most of the outer planet masses are near the PIM, which increases with distance up to the ice line and can explain the reversed mass ranking of the T1 d-g planets.
The small size of the outermost planet is achieved when the planet forms exterior to the ice line, where the PIM is much lower due to a larger value for τ_ s. While this formation mechanism may explain the small size of T1-h, it is not clear if a planet that grows most of its mass exterior to the ice line is consistent with the observed density of T1-h. In 10 of the 24 runs, the outermost planet is the smallest (or close to the smallest) but is found exterior to the ice line. This places our T1-h analogues at larger semi-major axes and results in planets with larger amounts of water than what is expected from the observed bulk density of T1-h (see Section <ref>).
§.§ Period distributions
Figure <ref> shows the period ratios between adjacent planets in each of our 24 runs, together with the period ratios of the T1 planets. The first-order 3:2 MMR is found in all 24 runs, 13 of the runs contain the first-order 4:3 MMR, and nine of the runs contain the second-order 5:3 MMR. While the stronger resonances of the T1 system are found in most of our runs, we do not find the 8:5 MMR of the innermost observed T1 planet pair in any of our 24 runs. We attribute this to the simplified treatment of the inner disc cavity and the lack of tidal effects. However, close to an 8:5 MMR, a 2:1 MMR is found in all but four of the runs, and some planets may also be found in 5:4 and 6:5 MMRs.
<cit.> demonstrated that once a more accurate treatment of the inner disc region is modeled, by incorporating the effects of an expanding gas-free cavity and the dynamics of the planets in this cavity, the fully formed inner two planets may break out of first order resonances and migrate into the observed 8:5 and 5:3 resonances. Similarly, <cit.> better recovered the observed resonances of the T1 system due to their use of a more complex disc evolution which resulted in more dynamic migration rates of the planets.
Three-body Laplace resonances may also be found throughout the T1 system. These resonances contribute to the long term stability of the system <cit.>. The generalized three-body Laplace relation (GLR) angle is given by,
ϕ_i,i+1,i+2=p λ_i - (p+q) λ_i+1+q λ_i+2,
where λ_i is the mean longitude of the ith planet, and p and q are integers. The GLR is considered stable if the angle ϕ librates about 180^∘. We consider the five main GLR angles observed in the T1 system (see <cit.> for a review of these angles) in all of our runs that contain at least seven planets. We do not find any of these five GLR resonances over the last 1 Myr of simulation time in any of the runs. Again, this could be attributed to the assumed evolution of the gas disc <cit.>, the assumed evolution of the disc's inner cavity <cit.>, and/or the lack of tidal effects <cit.>.
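A small helper for this resonance check is sketched below; it evaluates the GLR angle of the equation above from the mean longitudes (here in degrees) and applies a crude libration test over a sampled time series. The function names and the libration half-width are our own choices and do not constitute a full resonance analysis.

import numpy as np

def glr_angle(lam_i, lam_ip1, lam_ip2, p, q):
    # Generalised three-body Laplace angle, wrapped to [0, 360) degrees
    phi = p * lam_i - (p + q) * lam_ip1 + q * lam_ip2
    return np.mod(phi, 360.0)

def librates_about_180(phi_series, half_width=90.0):
    # Crude test: the angle stays within +/- half_width of 180 deg at all sampled times
    phi_series = np.asarray(phi_series)
    return np.all(np.abs(phi_series - 180.0) < half_width)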
§.§ Eccentricity and inclination distributions
Pebble accretion efficiency increases with the eccentricity of the accreting body until the body moves faster than the pebbles <cit.>. Thus, the eccentricity evolution of the bodies can have significant effects on the final planetary system. Figure <ref> shows the e and i evolution for all the bodies in the 24 runs. The black dashed lines mark the initial values for the starting bodies of e=0.1 and i=0.5^∘. The color corresponds to the mass of the body. While our starting eccentricity is larger than what is commonly used in n-body models, the commonly adopted smaller value of e=0.01 comes from studies of the solar system where bodies are distributed across the whole radial range of the starting disc <cit.>. In contrast, in our formation channel all bodies start in a narrow annulus just exterior to the ice line, where there is a relatively large number density of bodies. <cit.> modeled planetesimal formation just exterior to the ice line and found that shortly after formation the bodies experience body-body scattering, which increases the eccentricities of the bodies exterior to the ice line <cit.>. Similar excitation of eccentricity was also found when the planetesimals are formed in narrow axisymmetric dust filaments driven by the streaming instability <cit.>. Motivated by these findings, we choose to initialize our starting bodies with a larger e than is commonly used.
From Figure <ref> we can see that the bodies in our simulations also experience a high degree of scattering at the start of the simulation, which increases both the eccentricity and inclination of the bodies. At 50 Kyr the average eccentricity and inclination are e=0.15 and i=6.1^∘, both larger than our initial values. Later, the orbits are damped as the bodies' masses grow through pebble accretion and these larger bodies interact with the gas. At 500 Kyr, the average eccentricity and inclination are e=0.03 and i=0.6^∘. Once bodies begin to reach planet size, interactions with the planets can re-excite orbits. The bodies at the end of our simulations have average values of e=0.07 and i=0.04^∘, and the more massive bodies have less excited orbits than the smaller bodies (see Table <ref> for the eccentricity and inclination of the final planets).
§.§ Collision history and formation timescales
The collision history is a result of the stochastic behavior of the n-body system. Because we randomise the spatial distribution of the starting planetesimals in each run, we expect each run to have a different collision history. We take the time of the last collision as a proxy for the planet formation timescale. The right panel of Figure <ref> is a histogram of the times of every collision that took place in our 24 runs. The earliest final collision occurs at t∼ 1.5 Kyr in Run38, while Run12 has its last collision at t ∼ 2.5 Myr. The pileup of collisions at t ∼ 2.5 Myr is entirely from Run12. When considering all 100 runs, Run38 still has the earliest final collision, but Run22 has a collision just before t∼ 3 Myr. In all 100 runs, pebble accretion continues after the last collision. In three runs, pebble accretion is still ongoing just before 3 Myr; in the rest of the runs pebble accretion stops before this time, as all the bodies have either reached the PIM or are found interior to a body that has reached the PIM.
Run38 reached its final configuration the earliest, experiencing its last collision and ceasing pebble accretion before 1 Myr. While we did not extend our runs beyond the gas disc phase, it is possible that these systems undergo further evolution during and after gas disc dispersal. This is expected in systems that do not have stable resonances in place prior to gas disc dispersal. However, the orbital architecture in each of our 24 runs is dominated by strong first-order MMRs.
The left panel of Figure <ref> shows a histogram of the total number of fragments produced in each of our 24 systems most similar to T1. Run54 experiences the most fragmentation and produces a total of 371 fragments. This run experiences a super-catastrophic collision around 1.1 Myr, a relatively late time when the colliding bodies have grown more massive, which results in 339 fragments. But we view this collision history as atypical and statistically insignificant. Ten of the runs produce 15 or less fragments. Run55 and Run100 experience the least fragmentation and produce only five fragments. Relative to the fragmentation that may occur in the solar system, our simulations experience little fragmentation <cit.>. However, as discussed below, we find that fragmentation has a significant effect on the multiplicity of the planetary system when fragments are able to grow their mass through pebble accretion.
Fragmentation not only affects the final system architecture and the physical properties of each individual planet, but it can also affect the multiplicity of the system. In all but one of the 24 runs, at least one planet was seeded by a fragment, that is, a fragment produced from a collision grew its mass by pebble accretion and accreted smaller bodies to become a planet. On average, three planets in each run are seeded by a fragment. In Table <ref> the last three columns show the percentage of the final planet mass that came from pebble accretion, the accretion of fragments, and the accretion of other planetesimals. We note that in all but three runs the outermost planet is seeded by fragments and grew most of its mass by pebble accretion.
We further test the effects of fragmentation by performing 100 identical runs where we only resolve collisions with perfect merging. We find this results in systems with lower multiplicities. Five out of the 100 runs with perfect merging returned six planets, and the rest of the runs resulted in fewer than six planets. The mass distribution of the planets with and without fragmentation are similar. This finding suggests that collisional fragmentation is an important process for systems with high terrestrial planet multiplicities.
§ COMPOSITION AND PLANETARY INTERIOR STRUCTURE
The T1 planets have observed bulk densities that are consistent with rocky worlds and water mass fractions (WMFs) of less than 20% <cit.>. There are degeneracies between the assumed interior structure and the observed bulk density that result in different WMF predictions for different interior models. <cit.> model the T1 planets with a coupled atmosphere-interior model and find WMFs for the outer four planets of 9-12%. Their predicted WMF at a given CMF is 1σ higher than reported in <cit.>, as they use a less compressible water equation of state. They also find T1-d could have a condensed water layer rather than the water-vapor atmosphere assumed in <cit.>.
In this section, we discuss the WMFs along with the elemental compositions of the T1 analogs found in our planet formation models and their implications for the structure of the T1 planets.
Figure <ref> shows the output of our condensation code for the 12 main elements we focus our analysis on: O, Fe, Si, Mg, Al, Ca, Ni, Na, Cr, Mn, Co, and P. We choose these elements so that we may make a direct comparison to Earth's composition as deduced by <cit.>. These data are taken from our dust condensation code at five times the characteristic evolution timescale used in <cit.>, a total of t∼ 130 Kyr, and are used to initialize the composition of the bodies at the start of our n-body simulations. The total time of t∼ 130 Kyr corresponds to the dust condensation simulations that best match the relative elemental abundances of the solar system's terrestrial planets and the CM, CO, and CV chondrites <cit.>. Dust condensation is thought to be complete before planetesimal formation, so this stage is assumed to take place prior to our n-body simulations. We use these data, along with the assumption that a fraction of the solid material at and exterior to the ice line is water-ice, to set the starting compositions of our bodies and to track composition changes from pebble accretion. The dust condensation code does not follow the evolution of the ices, so we must make assumptions about the initial water-ice fraction of the starting bodies.
Table <ref> lists the final WMF and elemental abundances for each planet in our 24 runs after running our composition tracking code. We report values for bodies initialised with both a 20% and a 50% WMF. We find large variability in all of the elements from planet to planet and run to run. Such variation highlights the sensitivity of the final planet composition to the formation process. To make a more direct comparison of our simulated planets to the T1 planets, all of the simulated planets are binned into seven semi-major axis bins with the same number of planets in each bin. Figure <ref> shows the average and ± 1 σ of the final wt% of water and each element for our seven binned T1 analogs with both initial WMFs. We show Earth's values in black <cit.>.
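The binning step itself is simple; a minimal sketch is given below, where the planets are sorted by semi-major axis and split into seven near-equal groups. The function names are ours, and the input arrays would come from the runs listed in Table <ref>.

import numpy as np

def bin_planets(a_p, quantity, n_bins=7):
    # Sort planets by semi-major axis, split into n_bins groups of (nearly)
    # equal size, and return the mean and standard deviation of `quantity`
    # (e.g. the WMF or an elemental wt%) in each bin.
    order = np.argsort(a_p)
    groups = np.array_split(np.asarray(quantity)[order], n_bins)
    means = np.array([g.mean() for g in groups])
    stds = np.array([g.std(ddof=1) for g in groups])
    return means, stds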
Compared to the Earth, these planets tend to have reduced fractions of oxygen, iron, and magnesium, but an enhanced WMF. We find that the inner and outer planets have higher WMFs, in contrast to the trend of increasing WMF with orbital distance found in <cit.>. Among them, the outermost planet, T1-h, has the largest WMF, as its formation region lies exterior to the ice line. T1-h shows a wide range of WMF and O abundance because the outermost planets can accrete pebbles from either interior or exterior to the ice line, where water and condensed oxygen are depleted or abundant, respectively (see Figure <ref>).
Across all runs, we find large variability in the final WMFs of our simulated planets ranging from less than 1% up to 50 % (see Table <ref>). If a planet is seeded by a fragment and grows via pebble accretion interior to the ice line, it has a low WMF and hence becomes a dry planet. On the other hand, if a planet grows via pebble accretion exterior to the ice line, it has a high WMF and is a water world.
All of the average WMFs of our T1 analogs are in excess of that of Earth. The WMF_50 values are not consistent with terrestrial worlds, but rather with water worlds. Significant volatile loss is thought to take place throughout the planet formation process from impacts, irradiation, and greenhouse effects <cit.>. If we were to model volatile loss, the final WMFs would be lower than what we report here. However, it is unclear how much devolatilisation the T1 planets underwent throughout the formation process. We examine this further in the following section.
We use our average binned values for elemental compositions of T1 analogs to inform planet interior models with the planet structure code Magrathea <cit.>[Magrathea can be accessed at <https://github.com/Huang-CL/Magrathea>]. magrathea assumes a fully differentiated planet with an iron core, silicate mantle, and water liquid or ice hydrosphere. After specifying the mass of each layer, the code solves the equations of hydrostatic equilibrium and returns the radius for the planet. We feed magrathea the seven average binned values for the iron mass fraction for the core, the average binned value of the WMF for the hydrosphere, and place all remaining elements into the mantle. We use the observed masses of the T1 planets, the default equations of states and phase diagrams in magrathea, and the null-albedo equilibrium temperatures as the start of the adiabatic temperature gradient. We present solutions for a three-layer model (core, mantle, and hydrosphere) here and solutions for a two-layer model (core and mantle) in Section <ref>. We use a 300 K surface temperature for planets b and c and do not model the water-vapor atmosphere required by their high equilibrium temperatures if they do indeed have a water surface layer (see <cit.> for an atmosphere model of T1-b and T1-c).
Table <ref> shows the average binned values for the core mass fraction (CMF), mantle mass fraction (MMF), WMF, and the planet radius as determined by magrathea for a three-layer model, for the runs that begin with a 50% and a 20% WMF. We also include the observed radii (R_ O) from <cit.> for comparison. The radii calculated for bodies that begin with a WMF of 50% are much larger than the observed radii of the T1 planets, while the radii of the T1 analogs that begin with a WMF of 20% agree much better with observations. As a result, we suggest that either the starting Moon-sized bodies have an initial WMF closer to 20%, or extreme volatile loss takes place throughout the planet formation process (see Section <ref> for a discussion of this).
Next, we use the average binned values of iron and silicate from the initial 50% WMF runs to find the WMF needed to match the observed masses and radii of the T1 planets. We use 5000 draws of the correlated masses and radii for each planet from the <cit.> pipeline[Masses and radii obtained through <https://github.com/ericagol/TRAPPIST1_Spitzer>] rather than fitting distributions to the reported medians and standard errors. Table <ref> shows the Fe/Si molar ratios (the same abundances are used to find the CMF and MMF in Table <ref>) and the WMF results for the outer five planets. We do not include planets b and c in this table as we do not find three-layer solutions that match observations. The Fe/Si molar ratio is similar across all seven planets. The value is higher than, but within 1σ of, the 0.76±0.12 used in <cit.>, which was derived from the observed abundances of stars similar to T1.
This shows that the WMFs need to be reduced to about 6%, 4%, 6%, 8%, and 9% for planets d-h, respectively, to match the observed radii of the T1 planets. These reductions imply that planets formed from planetesimals with a 50% WMF must lose approximately 75% of their water throughout the formation process in order to match the observed planet densities. The planet analogs that begin with a 20% WMF have Fe/Si molar ratios within 0.1% of those of the initial 50% WMF runs, but have final WMFs mostly within 2σ of the observationally inferred WMFs in Table <ref>.
§.§ Volatile loss
There are multiple mechanisms that can deplete the planets of their volatile inventory. Volatile loss can take place prior to the planet formation process in the nebula as the result of chemical interactions between gas and dust and the formation of chondrules <cit.>. Small pebbles may lose some of their volatile reserves through ablation <cit.>. Later, small planetesimals can lose volatiles as they accrete smaller bodies, which heats the body and leads to differentiation <cit.>. As the bodies grow into larger embryos which exceed the PIM, planetary growth proceeds via core accretion. The larger bodies collide with one another in giant impacts, which leads to even further volatile loss <cit.>. Lastly, after the final terrestrial planets have formed, the planets may lose their atmospheres through photoevaporation by the host star <cit.>, through core-powered mass loss, where heat from the planet's core is thermally transferred to the planet's surface and evaporates the atmosphere <cit.>, or through runaway greenhouse effects <cit.>.
Instead of tracking the multiple ways in which the volatiles may be lost throughout (and after) the formation process, we artificially adjust the WMF while keeping the refractory abundances constant such that the final WMF are similar to those inferred from observations, as described above. However, we consider the collision history to get a sense of the frequency of giant impacts the T1 planets experienced.
The runs that begin with a 20% WMF result in planets with radii and densities similar to those observed for the T1 planets. Our simulated planets b, c, and e are slightly larger than the observed T1 planets, so these planets would need to undergo slight volatile loss to reproduce the observed radii. Planets b and c are the largest planets in the system and they do experience collisions throughout the formation process, which may be a source of volatile loss. Because a 20% WMF reproduces planets with radii similar to the T1 planets without requiring appreciable volatile loss, the starting bodies that formed the T1 planets likely had no less than a 20% WMF on average. However, as noted previously, T1-b and T1-c may have water-vapor atmospheres, which would require an even lower water mass fraction <cit.>.
Since our simulations begin after the formation of planetesimals, giant impacts are the main mechanism for removing volatiles at this stage in the planet formation process. <cit.> showed that when the specific energy, Q_ s, of a collision between two bodies is more than 10^8 J/kg, it can strip an entire ocean of water on a planet that has an atmosphere-to-ocean ratio of 1:300. This same planet can have its entire atmosphere striped if it is involved in a collision with Q_ s≥ 10^7 J/kg. We track the collisions that the simulated planets were involved in to better understand the extent of volatile loss these planets may have experienced. The top panel of Figure <ref> shows Q_ s versus time for all of the collisions the final planets experienced and the bottom panel shows Q_ s versus planet orbital radius. The blue horizontal line marks the specific energy needed to remove the atmosphere of a planet that has a 1:300 atmosphere-to-ocean ratio and the red line marks the specific energy needed to remove an entire ocean from a similar planet.
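The bookkeeping behind Figure <ref> is a simple thresholding of the collision specific energies. A minimal sketch is shown below, where the Q_s values (in J kg^-1) are taken from the collision records of a run and the two thresholds are those quoted above for a planet with a 1:300 atmosphere-to-ocean ratio; the function name is ours.

import numpy as np

# Specific-energy thresholds from the text: above ~1e7 J/kg a collision can
# strip the atmosphere, above ~1e8 J/kg it can evaporate an entire ocean.
Q_ATMOSPHERE = 1.0e7   # J/kg
Q_OCEAN = 1.0e8        # J/kg

def classify_impacts(Q_s):
    # Count atmosphere-stripping and ocean-evaporating impacts, given the
    # specific energies Q_s (J/kg) of the collisions a planet experienced.
    Q_s = np.asarray(Q_s)
    n_atm = np.count_nonzero(Q_s >= Q_ATMOSPHERE)
    n_ocean = np.count_nonzero(Q_s >= Q_OCEAN)
    return n_atm, n_ocean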
Two-thirds of our 24 runs had a final planet that experienced at least one ocean evaporating impact. Of these 16 runs, 14 runs had fewer than 10, Run37 experienced 17, and Run54 experienced 42 ocean evaporating impacts. Run54 is also the system that experienced the most fragmentation which created a chain reaction of giant impacts. This can be seen in Figure <ref> near ∼ 1 Myr.
All 24 runs experienced atmosphere stripping collisions. The runs experienced anywhere from 12 up to 83 atmosphere stripping impacts with an average of 36 such collisions. Run37, which experienced 17 ocean evaporating collisions had the most atmosphere stripping collisions. In this run, we would expect such excessive volatile loss to result in dry planets. We observe that the ocean evaporating impacts take place in the inner regions of the disc where orbital speeds are higher and more massive planets are found. As a result, the innermost planets are most susceptible to ocean evaporating impacts which might indicate a lower volatile content for these planets. However, eight of our simulated runs experience no ocean stripping events and no more than 12 atmosphere stripping events, which suggests the planets may experience little volatile loss from giant impacts. The atmosphere stripping impacts can be found throughout regions of the disc but are most commonly found around the ice line. Collisions with smaller impact energies may not result in significant volatile loss and may even be a vehicle for volatile transport <cit.>. More detailed modeling of volatile loss/gain in collision processes in planet formation scenarios is needed in order to better constrain final volatile budgets. From Table <ref>, however, we see that many of the smallest planets in a run form almost all of their mass through pebble accretion. These planets are not involved in any collisions and so we do not expect the smaller planets to lose any volatiles through collisional processes.
§.§ Desiccated Planets
While the slight under-density of the T1 planets compared to Earth suggests a volatile layer, and our formation models lead to planets with high WMFs, the volatile loss mechanisms discussed above could produce desiccated planets devoid of volatiles. Recently, JWST observed thermal emission from secondary eclipses of T1-b <cit.>. The measured temperature of T1-b supports the planet having no atmosphere, counter to the water-vapor atmosphere that would result from a water-rich surface. Here we discuss small-core interior solutions to the observed densities of the T1 planets which match our compositions.
As described above, we find core and mantle mass fractions with two-layer models in Magrathea which match 5000 draws of the observed masses and radii of the T1 planets from <cit.>. A number of draws of mass and radius do not have solutions assuming only two layers and require a volatile layer. The desiccated CMFs using our simulated Fe/Si ratios (CMF_ D), the CMFs for two-layer planets that match observations (CMF_ O), and the percentage of draws with solutions are shown for the seven planets in Table <ref>.
CMF_ D is found from the CMFs in Table <ref>, where CMF_ D=CMF/(1-WMF), and is nearly identical for all planets except T1-h. For T1-b, T1-c, and T1-e, the 2σ uncertainties of CMF_ O and the CMF_ D of our T1 analogs overlap. However, for the remaining four planets, the desiccated CMFs of our T1 analogs are 30-33%, significantly larger than the CMFs inferred from observations for these planets. T1-h needs the smallest core to match observations, with a 5% CMF_ O. Our desiccated T1-h analog has the largest CMF_ D, 33%, indicating that all of the iron differentiates into a pure iron core.
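As a quick worked example of this rescaling (with illustrative numbers rather than values from Table <ref>), a three-layer solution with CMF = 0.25 and WMF = 0.25 gives CMF_ D = 0.25/(1-0.25) ≈ 0.33 once the water layer is removed.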
To match our iron weight percentages from formation, the outer T1 planets would need a large amount of the iron in the mantle which we previously assumed to be pure magnesium silicate. However, adding iron to the mantle would increase the density of the mantle <cit.>. If we hold total mass constant, the CMF would need to decrease further with iron in the mantle to match the observed radius. <cit.> found a mass-radius line passes through all seven T1 planets for a core-free composition. For this model they used an abundance ratio for Fe/Si/Mg/O of 29.2/17.3/15.3/38.2 wt%. The composition of our T1 analogs after the removal of water is on average 30.0/18.1/16.1/21.2 wt% of Fe/Si/Mg/O. While the Fe/Si/Mg we find coincides well with the core-free composition, a core-free planet requires a high oxygen abundance not seen in our formation models. All of the oxygen weight percents are lower than the 38% needed to oxidize the iron in the mantle.
Other interior models may also fit our compositions and the observed density of the T1 planets. The CMF can be increased by assuming a liquid core or by putting lighter elements in the core. However, a liquid core only increases the inferred CMF of T1-f from 14 to 15%. In addition, a promising area of research not investigated here is the incorporation of water into the mantles of the T1 planets. A hydrated mantle could link our formation mechanisms with the observed lack of atmosphere on T1-b. However, <cit.> found mantles of 1 M_⊕ planets can store 1-2% of their mass in water within the mantle which can change the radius by approximately 0.5% from a dry model. In comparison a 1% change in CMF changes the radius of the planet by 0.2%. A change in the assumed CMF is a much larger effect. To match both our formation models and the observed densities, T1-b and T1-c could be desiccated while the outer planets most likely need a significant volatile layer.
§ DISCUSSION
<cit.> modeled the stellar evolution of T1 and found that the current luminosity of T1 is ∼ 5 × 10^-4 L_⊙ and that the luminosity of the star at 10 Myr was ∼ 0.01 L_⊙. However, the stellar luminosity of M-dwarfs in their pre-main-sequence stage should be orders of magnitude larger. <cit.> modeled the luminosity of a pre-main-sequence M8 star and found that when the star was ∼1 Myr old its luminosity was ∼0.05 L_⊙, from which point it continually dimmed over the course of ∼1 Gyr until it reached the main sequence and its current luminosity.
The PIM is proportional to the temperature of the disc, which, in turn, depends on the luminosity of the star and the viscosity of the disc. Figure <ref> shows the PIM for four different values of the luminosity using the <cit.> temperature profile for the MMSN, Eq. (<ref>), which assumes disc heating is dominated by stellar irradiation. We also show the PIM for the temperature profile used in <cit.>, which accounts for stellar irradiation and viscous heating. We plot the T1 planets in orange. We see that the ice line and the luminosity change the PIM, which should affect the subsequent evolution of the system. As the luminosity decreases in time, the PIM also decreases. The evolution of the stellar luminosity, and thus of the PIM, is therefore important to capture when modeling the formation of the T1 planets. Since the PIM decreases with stellar luminosity and time, this may indicate that planets T1-d and T1-h formed at a later time. Because fragmentation extends the planet formation process and we find that fragments are capable of seeding an entire planet, a fragment produced at a later time which grows by pebble accretion in a relatively depleted gas disc could explain the current masses of T1-d and T1-h.
On the other hand, we note that the other T1 planets appear to follow the PIM curve derived from an MMSN profile at a specific luminosity.
Nevertheless, accurate temperature profiles are necessary for determining the location of the ice line, which also strongly affects the PIM. We adopt a constant, intermediate value for the luminosity in our temperature profile and assume a fixed location of the ice line, which results in planet masses smaller than the observed masses for planets e-g. We assume that the ice line is the location in the disc where the temperature is 170 K. However, <cit.> found that in the denser protoplanetary discs that exist around M-dwarfs, the ice line is more likely to lie where the temperature is 212 K. Future work that implements more accurate, time-dependent temperature profiles is necessary to better understand the formation of the T1 system, particularly the role of pebble accretion in the formation process, and may more accurately reproduce the observed masses of the T1 planets. However, a cooling disc alone cannot explain the reversed mass distribution, as this would imply that planets d-g formed from the outside in. Even though the ice line would move inwards in a cooling disc, it seems unlikely that it would move as far in as the current orbits of planets d, e, f, or perhaps even g. An additional explanation for the low mass of T1-h, though, would be if the ice line moved interior to T1-h near the time T1-h reached its current mass, thus ceasing pebble accretion and halting the continued growth of the planet.
The PIM significantly affects the evolution of the system, particularly the mass distribution and thus the migration rates of the planets. <cit.> derived a PIM expression from 3D hydrodynamical simulations, while our adopted expression from <cit.> is derived from 2D hydrodynamical results. <cit.> compared their results to <cit.> and found overall good agreement; however, their PIM values are a factor of 1.5-2 smaller than those of <cit.>. This may be attributed to the 3D nature of the <cit.> simulations, where gap carving is more difficult to achieve than in the 2D case. Exploring how the 3D PIM expression of <cit.> affects our results is another area of interest for future work.
Our disc evolution only considers mass loss from accretion onto the central star. This leads to relatively low rates of disc mass loss, which may contribute to the inability of our model to reproduce the higher-order resonances in which the inner planets are found. <cit.> showed that mass loss from photoevaporation and disc winds rapidly depletes the disc mass in the T1 system, which results in fast and then slow migration of the planets, as well as a late expansion of the orbits due to the clearance of the inner cavity. The parameters we chose for our disc model result in relatively modest migration rates. This produces the observed first-order MMRs, but traps the planets in a three-body resonance not found in the T1 system and does not produce the MMRs of the two innermost planet pairs.
A faster disc evolution model, in which planetesimals quickly reach the PIM and migration timescales quickly become short, may help produce the observed planet orbits. Fast disc evolution can also help explain the masses of T1-d and T1-h if they were seeded by a fragment and reached their current masses when the disc dissipated, preventing them from growing to the PIM. In addition to a disc model that more accurately describes mass loss, a more accurate treatment of the physics in the inner disc cavity may also be needed to produce the observed MMRs between the inner T1 planets <cit.>.
While we find that fragmentation is an important mechanism for producing planets, as fragments seed planets that grow primarily from pebble accretion, our simulations assume a relatively large minimum fragment mass due to computational limitations. A smaller minimum fragment mass is likely to affect the planet formation process, as more fragments would be produced and each fragment is able to grow its mass through pebble accretion. We note, though, that the pebble accretion rate depends sensitively on the accreting mass (Eqs. <ref>-<ref>). Additionally, the fragmentation model we use in this study assumes that the fragments from a collision are all equally sized and have the same composition. To better understand the role of fragmentation in the formation process, higher-resolution models for the fragmentation of differentiated bodies are needed. In addition to more accurately modeling planet formation, this should help place tighter constraints on the planet compositions.
Volatile loss and gain must be considered in future models in order to reproduce the observed bulk densities of the T1 planets. Our model, which neglects all volatile loss and any atmospheric accretion, over-estimates the water mass fractions when starting with reasonable initial WMFs. Detailed modeling of the various giant impacts that could have produced the Earth's Moon indicates that some vaporization of the Earth's mantle took place over a wide range of impact energies <cit.>. Accurate handling of volatile and mantle loss from giant impacts <cit.>, irradiation <cit.>, and greenhouse effects <cit.> should help constrain the composition of the final planetary system. However, planet encounters with water-rich bodies may also increase a planet's water content. Whether an encounter with a water-rich body results in net water loss or water gain depends on the specifics of the collision, as demonstrated by previous SPH and n-body simulations <cit.>. Furthermore, terrestrial planets may directly accrete their atmospheres from the surrounding gas disc, which may later be reduced by UV and X-ray radiation from the young host star <cit.>. Detailed modeling of atmospheric gain and loss throughout the formation process is also necessary for placing tighter constraints on the bulk composition and interior structure of the planets.
§ CONCLUSIONS
In this study, we presented a disc evolution and pebble accretion model. We incorporated this model into reboundX and used our newly developed module to study the formation of the TRAPPIST-1 (T1) planets. Our model allows for type-I migration and for eccentricity and inclination damping from gas drag in a gas disc. In our simulations, 0.01 M_⊕ bodies began just exterior to the ice line and grew by pebble accretion until the pebble isolation mass (PIM) was reached. We also modeled collisional accretion and fragmentation of the bodies. We used results from a dust condensation code to track the composition evolution of the planets. Using the final compositions and assuming various interior structures, we used the planetary interior structure code Magrathea to obtain radii for our simulated planets.
We reproduced planetary systems that are similar in mass, orbital radius, and multiplicity to the T1 system by numerically modeling planet formation. We found that Moon-sized bodies quickly grow to the pebble isolation mass exterior to the ice line and migrate inwards at rates that commonly result in first-order MMRs between planetary pairs. Our model indicates that the largest planets in the inner system likely grew from a combination of embryo, pebble, and fragment accretion and experienced giant impacts, while the smaller planets in the outer system grew mainly by pebble accretion. We also found that fragmentation between larger bodies plays an important role in seeding the smaller planets as the resulting fragments subsequently grow into planets via pebble accretion.
Tracking the formation process of the planets allowed us to place constraints on the initial water content of the bodies at the start of our simulations. We did not account for any volatile loss but found the inner, larger planets experienced ocean stripping collisions and most planets experienced a few atmosphere stripping collisions. Assigning the initial bodies a WMF of 50% resulted in planets with larger radii and lower densities than those observed in the T1 system. We found that starting bodies with a WMF of 20% resulted in radii and densities similar to those of the T1 planets.
Using our composition constraints and planet interior structure code, we found solutions for a two-layer model for planets b and c. This, along with the high number of giant impacts the inner planets experienced throughout their formation, is in line with recent observations suggesting that these planets are likely devoid of an atmosphere. However, two-layer models seem unlikely for most of the remaining outer planets, which suggests that these planets have primordial hydrospheres: an atmosphere and/or a water surface layer. Our composition constraints also indicate that none of the planets are consistent with a core-free interior structure.
§ ACKNOWLEDGEMENTS
We thank Shichun Huang, Rebecca G. Martin and Zhaohuan Zhu for useful conversations. Computer support was provided by UNLV’s National Supercomputing Center. AC acknowledges support from the NSF through grant NSF AST-2107738.
CCY is grateful for the support from NASA via the Astrophysics Theory Program (grant number 80NSSC21K0141), NASA via the Emerging Worlds program (grant number 80NSSC20K0347), and NASA via the Theoretical and Computational Astrophysics Networks program (grant number 80NSSC21K0497).
§ DATA AVAILABILITY
Simulations in this paper made use of the rebound code (Astrophysics Source Code Library identifier ascl.net/1110.016) and reboundX (Astrophysics Source Code Library identifier ascl.net/2011.020) which can be downloaded freely at <http://github.com/hannorein/rebound> and <https://github.com/dtamayo/reboundx>, respectively. The fragmentation code and bulk composition tracking code for rebound (Astrophysics Source Code Library identifier ascl:2204.010) may be found at <https://github.com/annacrnn/rebound_fragmentation>. magrathea, the planet interior solver, may be downloaded freely at <https://github.com/Huang-CL/Magrathea>. The data underlying this article will be shared on reasonable request to the corresponding author.
§ PLANET PROPERTIES
In Table <ref>, we list the properties of the planets in each of our 24 runs (out of 100) that produced a system of at least six planets, which most closely resemble the T1 system.
The second column lists the planet multiplicity (No.) of each system.
The next columns are the properties of each planet, including: mass (m_ p), semi-major axis (a_ p), eccentricity (e), inclination (i), and relative abundances (by weight) for water (WMF), O, Fe, Si, Mg, Al, Ca, Ni, Na, Cr, Mn, Co, and P.
The slashes (/) separate the results for initial embryos with a starting WMF of either 20% or 50%.
The three rightmost columns are the percentages of the planet mass that came from pebble accretion (Peb), fragments (Frag), and embryos (Em), respectively.
Final properties of the planets in each of our runs that produced six or more planets.
Run No. M_ p a_ p e i WMF O Fe Si Mg Al Ca Ni Na Cr Mn Co P Peb Frag Em
(M_⊕) (au) (deg) %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 % % %
Run3 9
1.5 0.012 0.03 0.0 14.0/34.9 13.0/11.5 27.6/20.3 16.5/12.2 14.6/10.8 1.38/1.03 1.51/1.13 1.64/1.21 0.76/0.55 0.39/0.29 0.29/0.21 0.08/0.06 0.15/0.11 54.4 15.8 29.8
1.1 0.016 0.08 0.0 9.4/23.6 17.5/16.8 28.0/22.9 16.7/13.7 14.8/12.2 1.42/1.18 1.55/1.28 1.66/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.15/0.13 41.0 16.8 42.3
1.2 0.021 0.05 0.0 14.0/35.0 9.7/9.3 28.9/21.1 17.3/12.7 15.3/11.2 1.41/1.03 1.56/1.15 1.73/1.26 0.8/0.58 0.41/0.3 0.31/0.22 0.08/0.06 0.16/0.12 35.6 17.9 46.5
0.5 0.034 0.04 0.0 10.5/26.3 14.2/14.2 28.8/22.9 17.3/13.7 15.3/12.2 1.47/1.19 1.61/1.29 1.71/1.35 0.75/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.054 0.02 0.0 9.6/23.9 15.7/15.7 28.9/23.5 17.3/14.1 15.3/12.4 1.38/1.12 1.54/1.25 1.73/1.41 0.73/0.58 0.41/0.34 0.3/0.25 0.08/0.07 0.16/0.13 98.4 1.6 0.0
0.9 0.086 0.03 0.0 5.6/13.9 19.5/19.5 27.9/24.7 16.7/14.9 14.7/13.1 1.34/1.19 1.49/1.32 1.68/1.49 0.81/0.72 0.4/0.35 0.3/0.27 0.08/0.07 0.15/0.14 98.8 1.2 0.0
1.0 0.097 0.03 0.0 7.5/18.7 15.7/15.7 28.6/24.4 17.2/14.6 15.1/12.9 1.38/1.18 1.53/1.31 1.72/1.47 0.83/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 97.2 2.8 0.0
0.4 0.155 0.01 0.0 20.0/50.0 0.1/0.1 29.9/18.7 18.0/11.2 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 88.5 11.5 0.0
0.3 0.246 0.0 0.0 20.0/50.0 0.0/0.0 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.62/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.9 4.1 0.0
Run12 6
2.0 0.012 0.01 0.0 10.3/25.8 16.3/15.6 28.0/22.5 16.8/13.5 14.9/12.0 1.43/1.17 1.56/1.27 1.66/1.33 0.73/0.57 0.4/0.32 0.29/0.23 0.08/0.07 0.16/0.12 8.5 21.0 70.6
1.3 0.016 0.02 0.0 6.3/15.8 23.1/22.2 27.6/24.4 16.5/14.6 14.7/13.0 1.33/1.18 1.49/1.32 1.61/1.41 0.64/0.55 0.39/0.34 0.27/0.24 0.08/0.07 0.15/0.14 87.8 4.3 7.9
0.7 0.021 0.07 0.0 5.8/14.6 21.2/21.2 28.3/25.0 17.0/15.0 15.0/13.2 1.35/1.19 1.5/1.33 1.71/1.51 0.78/0.69 0.41/0.36 0.31/0.28 0.08/0.07 0.16/0.14 86.6 0.0 13.4
0.8 0.028 0.02 0.0 2.6/6.4 22.0/22.0 28.0/26.6 16.8/15.9 14.8/14.1 1.35/1.28 1.5/1.42 1.68/1.6 0.82/0.78 0.4/0.38 0.3/0.29 0.08/0.08 0.16/0.15 98.6 1.4 0.0
0.4 0.044 0.02 0.0 5.0/12.5 22.5/22.5 28.3/25.5 16.9/15.2 15.0/13.5 1.41/1.28 1.56/1.41 1.68/1.51 0.7/0.62 0.4/0.36 0.29/0.26 0.08/0.07 0.16/0.14 97.4 2.6 0.0
0.7 0.051 0.01 0.0 8.2/20.6 17.4/17.4 28.6/24.0 17.2/14.4 15.2/12.7 1.37/1.14 1.52/1.27 1.71/1.44 0.78/0.65 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.3 1.7 0.0
Run15 6
0.6 0.011 0.03 0.0 13.9/34.6 17.5/14.3 25.9/19.4 15.6/11.6 13.7/10.3 1.27/0.96 1.41/1.06 1.55/1.16 0.71/0.52 0.37/0.28 0.28/0.21 0.08/0.06 0.14/0.11 17.5 1.8 80.7
1.0 0.015 0.06 0.0 10.7/26.9 15.5/14.8 28.1/22.3 16.8/13.4 14.9/11.8 1.39/1.11 1.53/1.23 1.67/1.33 0.76/0.59 0.4/0.32 0.3/0.23 0.08/0.07 0.16/0.12 27.3 1.3 71.4
1.1 0.02 0.05 0.0 12.2/30.5 12.9/12.4 28.4/21.8 17.0/13.0 15.1/11.6 1.42/1.1 1.56/1.2 1.69/1.29 0.77/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 22.5 11.8 65.7
0.5 0.028 0.02 0.0 8.4/21.1 17.1/17.1 28.6/23.8 17.1/14.3 15.2/12.7 1.47/1.24 1.6/1.34 1.69/1.41 0.74/0.6 0.41/0.34 0.3/0.24 0.08/0.07 0.16/0.13 80.1 0.0 19.9
0.6 0.045 0.01 0.0 7.2/18.1 19.4/19.4 28.5/24.5 17.0/14.6 15.2/13.0 1.48/1.29 1.65/1.44 1.66/1.42 0.68/0.56 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.14 98.1 1.9 0.0
0.7 0.059 0.01 0.0 6.0/15.1 21.1/21.0 28.4/25.0 17.0/15.0 15.0/13.2 1.35/1.18 1.5/1.32 1.71/1.51 0.74/0.64 0.41/0.36 0.31/0.27 0.08/0.07 0.16/0.14 98.4 1.6 0.0
Run26 9
2.0 0.012 0.12 0.0 10.8/26.9 16.6/15.4 27.7/22.1 16.6/13.2 14.7/11.7 1.39/1.12 1.52/1.22 1.64/1.31 0.74/0.58 0.39/0.31 0.29/0.23 0.08/0.06 0.15/0.12 27.6 5.1 67.2
1.1 0.019 0.15 0.0 5.8/14.5 21.7/21.6 28.2/25.0 16.8/14.9 15.0/13.3 1.59/1.43 1.66/1.49 1.64/1.45 0.66/0.57 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 86.6 13.4 0.0
0.6 0.025 0.07 0.0 6.8/17.0 20.2/20.1 28.4/24.6 17.0/14.7 15.1/13.1 1.53/1.34 1.67/1.47 1.65/1.42 0.67/0.56 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 83.7 0.0 16.3
0.7 0.033 0.05 0.0 7.2/18.1 19.3/19.3 28.5/24.5 17.1/14.7 15.1/13.0 1.35/1.16 1.51/1.3 1.72/1.48 0.72/0.61 0.41/0.35 0.31/0.26 0.08/0.07 0.16/0.14 82.6 0.0 17.4
0.3 0.044 0.05 0.0 0.0/0.0 30.6/30.6 27.7/27.7 16.5/16.5 14.8/14.8 1.54/1.54 1.67/1.67 1.59/1.59 0.58/0.58 0.39/0.39 0.26/0.26 0.08/0.08 0.15/0.15 96.8 3.2 0.0
0.7 0.057 0.04 0.0 7.4/18.5 18.4/18.4 28.4/24.3 17.1/14.6 15.1/12.8 1.36/1.16 1.51/1.29 1.71/1.46 0.76/0.64 0.41/0.35 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.8 0.075 0.04 0.0 1.1/2.8 26.8/26.8 27.2/26.5 16.3/15.9 14.4/14.1 1.31/1.28 1.45/1.42 1.64/1.6 0.8/0.78 0.39/0.38 0.3/0.29 0.08/0.08 0.15/0.15 98.8 1.2 0.0
0.9 0.091 0.03 0.0 0.2/0.6 26.4/26.4 27.3/27.1 16.4/16.3 14.4/14.4 1.31/1.31 1.46/1.45 1.64/1.63 0.79/0.79 0.39/0.39 0.3/0.3 0.08/0.08 0.15/0.15 98.9 1.1 0.0
0.3 0.103 0.03 0.0 19.6/49.0 1.1/0.9 29.7/18.8 17.8/11.3 15.7/10.0 1.44/0.91 1.6/1.01 1.78/1.13 0.84/0.53 0.42/0.27 0.32/0.2 0.09/0.05 0.16/0.1 95.3 4.7 0.0
Run29 8
0.6 0.012 0.18 0.0 11.7/29.2 19.6/16.9 26.1/20.6 15.7/12.4 13.9/11.0 1.31/1.04 1.44/1.14 1.55/1.22 0.7/0.54 0.37/0.29 0.27/0.22 0.08/0.06 0.14/0.11 21.0 16.1 62.8
1.5 0.016 0.15 0.0 11.2/28.0 14.5/13.9 28.2/22.2 16.9/13.3 15.0/11.8 1.4/1.11 1.55/1.22 1.68/1.32 0.76/0.59 0.4/0.32 0.3/0.23 0.08/0.07 0.16/0.12 37.0 9.1 53.9
0.5 0.021 0.18 0.0 8.7/21.6 17.2/17.2 28.6/23.8 17.1/14.2 15.2/12.6 1.52/1.28 1.63/1.37 1.68/1.39 0.71/0.57 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.5 0.026 0.06 0.0 9.5/23.9 15.7/15.6 28.5/23.3 17.1/13.9 15.1/12.3 1.45/1.2 1.59/1.3 1.69/1.37 0.75/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 78.5 2.5 18.9
0.6 0.034 0.09 0.0 10.5/26.2 16.0/15.4 28.1/22.5 16.8/13.5 14.9/11.9 1.45/1.18 1.57/1.27 1.66/1.32 0.73/0.57 0.4/0.32 0.29/0.23 0.08/0.07 0.16/0.12 50.8 4.6 44.6
0.7 0.054 0.01 0.0 14.9/37.4 7.4/7.4 29.3/20.9 17.6/12.5 15.5/11.1 1.41/1.01 1.57/1.12 1.76/1.26 0.84/0.6 0.42/0.3 0.32/0.23 0.09/0.06 0.16/0.12 61.4 38.6 0.0
0.9 0.086 0.01 0.0 5.4/13.4 19.7/19.7 27.8/24.8 16.7/14.9 14.7/13.1 1.34/1.19 1.49/1.33 1.67/1.49 0.81/0.73 0.4/0.36 0.3/0.27 0.08/0.07 0.15/0.14 98.8 1.2 0.0
0.4 0.137 0.01 0.0 19.9/49.8 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.13 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.9 3.1 0.0
6*Run37 6*6
1.5 0.01 0.12 0.0 11.5/28.8 16.0/14.7 27.6/21.6 16.5/13.0 14.6/11.5 1.4/1.11 1.53/1.21 1.64/1.28 0.73/0.56 0.39/0.31 0.29/0.22 0.08/0.06 0.15/0.12 28.1 44.3 27.6
1.9 0.015 0.17 0.0 12.2/30.5 13.1/12.6 28.5/21.9 17.1/13.1 15.1/11.6 1.56/1.24 1.65/1.29 1.68/1.28 0.73/0.54 0.4/0.31 0.29/0.22 0.08/0.06 0.16/0.12 8.7 28.5 62.8
0.5 0.023 0.08 0.0 6.6/16.6 20.0/20.0 28.4/24.7 17.0/14.7 15.1/13.1 1.52/1.34 1.63/1.43 1.66/1.44 0.69/0.59 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.14 97.7 2.3 0.0
0.5 0.03 0.05 0.0 8.2/20.4 17.8/17.8 28.6/24.0 17.1/14.4 15.2/12.8 1.52/1.3 1.63/1.39 1.68/1.4 0.7/0.57 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.6 0.04 0.04 0.0 9.6/24.1 15.6/15.6 28.8/23.4 17.2/14.0 15.3/12.4 1.46/1.2 1.6/1.31 1.7/1.38 0.73/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.063 0.0 0.0 6.8/16.9 19.7/19.7 28.4/24.6 17.0/14.8 15.0/13.0 1.35/1.17 1.51/1.31 1.71/1.48 0.82/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
6*Run38 6*6
0.5 0.013 0.19 0.0 7.8/19.5 23.4/21.5 26.6/22.9 15.9/13.7 14.1/12.2 1.4/1.22 1.51/1.31 1.56/1.34 0.67/0.56 0.38/0.32 0.27/0.23 0.08/0.07 0.15/0.13 40.8 12.3 46.9
0.5 0.016 0.14 0.0 9.9/24.6 15.9/15.6 28.3/22.9 16.9/13.7 15.0/12.2 1.43/1.17 1.56/1.27 1.68/1.36 0.75/0.6 0.4/0.33 0.3/0.24 0.08/0.07 0.16/0.13 58.1 10.4 31.5
0.6 0.02 0.15 0.0 13.6/34.0 14.1/12.3 27.3/20.3 16.4/12.2 14.4/10.8 1.33/0.99 1.47/1.1 1.63/1.21 0.76/0.56 0.39/0.29 0.29/0.22 0.08/0.06 0.15/0.11 42.1 7.6 50.3
0.5 0.032 0.05 0.0 7.2/18.1 19.4/19.3 28.3/24.3 16.9/14.5 15.0/12.9 1.52/1.32 1.62/1.41 1.66/1.42 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 80.3 0.0 19.7
0.4 0.05 0.07 0.0 5.8/14.5 21.5/21.5 28.5/25.2 17.0/15.0 15.2/13.4 1.38/1.22 1.54/1.37 1.65/1.46 0.66/0.57 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 97.0 3.0 0.0
0.7 0.058 0.03 0.0 10.2/25.4 15.2/15.0 28.5/22.9 17.1/13.8 15.1/12.1 1.36/1.09 1.52/1.22 1.72/1.38 0.78/0.62 0.41/0.33 0.31/0.25 0.08/0.07 0.16/0.13 62.1 0.0 37.9
11*Run39 11*11
1.3 0.011 0.05 0.0 14.1/35.3 12.1/10.8 27.8/20.4 16.7/12.2 14.7/10.8 1.35/1.0 1.5/1.1 1.66/1.22 0.78/0.56 0.4/0.29 0.3/0.22 0.08/0.06 0.15/0.11 32.7 25.4 42.0
1.0 0.016 0.14 0.0 7.1/17.6 19.8/19.6 28.2/24.4 16.9/14.6 15.0/12.9 1.5/1.32 1.61/1.41 1.66/1.42 0.7/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 41.3 3.2 55.6
0.9 0.021 0.09 0.0 7.9/19.8 18.5/18.4 28.4/24.0 17.0/14.4 15.1/12.8 1.53/1.32 1.63/1.4 1.66/1.4 0.7/0.57 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 60.0 28.4 11.6
0.6 0.027 0.04 0.0 8.9/22.2 17.4/17.1 28.3/23.4 16.9/14.0 15.0/12.4 1.43/1.2 1.57/1.31 1.68/1.39 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 67.1 15.3 17.6
0.4 0.033 0.05 0.0 0.2/0.6 31.5/31.5 28.0/27.8 16.5/16.4 15.0/14.9 2.08/2.08 1.98/1.98 1.51/1.51 0.4/0.4 0.38/0.38 0.23/0.22 0.08/0.08 0.15/0.15 97.4 2.6 0.0
0.3 0.04 0.03 0.0 0.2/0.6 31.5/31.5 27.9/27.8 16.5/16.4 15.0/14.9 2.01/2.0 1.95/1.94 1.52/1.51 0.42/0.42 0.38/0.38 0.23/0.23 0.08/0.08 0.15/0.15 96.5 3.5 0.0
0.4 0.046 0.03 0.0 0.5/1.2 30.6/30.5 28.0/27.7 16.6/16.5 14.9/14.8 1.55/1.54 1.73/1.72 1.57/1.56 0.52/0.51 0.39/0.39 0.25/0.25 0.08/0.08 0.15/0.15 90.7 9.3 0.0
0.5 0.054 0.03 0.0 3.1/7.7 26.6/26.3 27.8/26.2 16.7/15.8 14.7/13.9 1.31/1.23 1.47/1.38 1.66/1.57 0.65/0.6 0.4/0.38 0.29/0.27 0.08/0.08 0.15/0.15 98.0 2.0 0.0
0.8 0.063 0.03 0.0 6.6/16.6 20.1/20.0 28.3/24.6 17.0/14.8 15.0/13.0 1.35/1.17 1.5/1.31 1.7/1.48 0.81/0.7 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 83.6 3.2 13.2
0.9 0.082 0.02 0.0 2.4/6.0 24.2/24.2 27.3/25.9 16.4/15.6 14.4/13.7 1.31/1.24 1.46/1.38 1.64/1.56 0.8/0.76 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 98.9 1.1 0.0
1.0 0.1 0.01 0.0 2.5/6.4 24.1/23.3 27.2/26.1 16.3/15.7 14.4/13.8 1.31/1.25 1.45/1.39 1.64/1.57 0.79/0.76 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 98.9 1.1 0.0
8*Run41 8*8
1.4 0.012 0.17 0.0 9.2/23.0 19.7/18.3 27.3/22.6 16.3/13.5 14.5/12.0 1.4/1.18 1.53/1.28 1.61/1.33 0.7/0.57 0.39/0.32 0.28/0.23 0.08/0.07 0.15/0.13 14.8 4.1 81.1
1.5 0.019 0.1 0.0 8.3/20.9 19.2/18.5 27.8/23.4 16.6/14.0 14.8/12.4 1.45/1.23 1.57/1.33 1.64/1.38 0.71/0.58 0.39/0.33 0.29/0.24 0.08/0.07 0.15/0.13 50.0 12.0 37.9
0.5 0.025 0.1 0.0 9.5/23.8 16.7/16.2 28.2/23.1 16.9/13.8 15.0/12.3 1.45/1.2 1.58/1.3 1.67/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 77.3 0.0 22.7
0.5 0.033 0.07 0.0 8.0/20.0 18.0/18.0 28.6/24.1 17.1/14.4 15.2/12.8 1.45/1.23 1.58/1.34 1.69/1.42 0.71/0.59 0.41/0.34 0.29/0.25 0.08/0.07 0.16/0.13 98.0 2.0 0.0
0.6 0.044 0.04 0.0 7.9/19.8 18.4/18.3 28.6/24.1 17.1/14.4 15.2/12.8 1.51/1.3 1.65/1.41 1.67/1.4 0.69/0.56 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.069 0.03 0.0 5.0/12.6 21.9/21.9 28.0/25.2 16.8/15.1 14.8/13.3 1.34/1.2 1.49/1.34 1.69/1.52 0.82/0.74 0.4/0.36 0.31/0.28 0.08/0.07 0.16/0.14 98.6 1.4 0.0
0.9 0.084 0.03 0.0 3.4/8.4 22.7/22.7 27.5/25.6 16.5/15.4 14.5/13.5 1.32/1.23 1.47/1.36 1.65/1.54 0.81/0.75 0.39/0.37 0.3/0.28 0.08/0.07 0.15/0.14 98.8 1.2 0.0
0.3 0.102 0.02 0.0 20.0/49.9 0.2/0.1 29.9/18.7 17.9/11.2 15.8/9.9 1.45/0.91 1.61/1.01 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.5 3.5 0.0
6*Run46 6*6
1.3 0.013 0.08 0.0 11.6/28.9 16.2/14.7 27.4/21.5 16.4/12.9 14.5/11.4 1.35/1.07 1.49/1.18 1.63/1.28 0.75/0.57 0.39/0.31 0.29/0.23 0.08/0.06 0.15/0.12 21.1 3.8 75.1
0.8 0.016 0.11 0.0 14.9/37.3 9.6/8.7 28.3/20.3 17.0/12.2 15.0/10.8 1.36/0.98 1.52/1.09 1.7/1.22 0.81/0.58 0.41/0.29 0.31/0.22 0.08/0.06 0.16/0.11 54.7 14.9 30.4
0.5 0.021 0.07 0.0 7.5/18.7 19.2/19.0 28.3/24.2 16.9/14.4 15.0/12.8 1.5/1.3 1.61/1.39 1.66/1.41 0.7/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 79.0 0.0 21.0
0.6 0.026 0.05 0.0 10.9/27.4 13.4/13.4 28.9/22.7 17.3/13.6 15.3/12.1 1.45/1.15 1.59/1.26 1.71/1.34 0.76/0.59 0.41/0.32 0.3/0.24 0.08/0.07 0.16/0.13 79.9 20.1 0.0
0.6 0.041 0.02 0.0 5.9/14.9 21.4/21.4 28.4/25.0 16.9/14.9 15.1/13.3 1.59/1.43 1.68/1.5 1.64/1.44 0.65/0.56 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 98.1 1.9 0.0
0.6 0.047 0.02 0.0 8.0/20.1 18.1/18.1 28.7/24.2 17.1/14.4 15.2/12.9 1.43/1.21 1.6/1.36 1.67/1.4 0.69/0.56 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
8*Run47 8*8
1.8 0.012 0.09 0.0 12.5/31.2 13.4/12.5 28.1/21.4 16.8/12.8 14.9/11.4 1.38/1.06 1.53/1.17 1.67/1.28 0.77/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 18.6 5.8 75.5
1.3 0.017 0.21 0.0 8.1/20.3 19.2/18.7 28.0/23.7 16.8/14.2 14.9/12.6 1.36/1.15 1.52/1.29 1.65/1.39 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 30.9 2.1 67.0
0.6 0.022 0.1 0.0 8.2/20.5 17.8/17.8 28.7/24.1 17.2/14.5 15.2/12.8 1.37/1.15 1.53/1.28 1.71/1.44 0.71/0.58 0.41/0.34 0.3/0.25 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.6 0.029 0.09 0.0 9.6/24.0 15.3/15.3 28.8/23.4 17.2/14.0 15.3/12.4 1.4/1.14 1.56/1.27 1.7/1.38 0.74/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.3 0.038 0.1 0.0 3.8/9.5 24.6/24.6 28.2/26.0 16.8/15.5 15.0/13.9 1.48/1.38 1.66/1.54 1.63/1.5 0.63/0.57 0.4/0.37 0.27/0.25 0.08/0.08 0.16/0.14 95.4 4.6 0.0
0.7 0.046 0.04 0.0 8.9/22.3 16.7/16.6 28.8/23.8 17.2/14.2 15.3/12.7 1.39/1.15 1.55/1.28 1.69/1.39 0.71/0.57 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.073 0.03 0.0 6.4/16.1 19.5/19.5 28.1/24.5 16.9/14.7 14.9/13.0 1.35/1.17 1.5/1.31 1.69/1.48 0.82/0.72 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.7 1.3 0.0
0.9 0.089 0.03 0.0 7.5/18.8 16.5/16.5 28.3/24.1 17.0/14.5 15.0/12.7 1.36/1.16 1.52/1.29 1.7/1.45 0.82/0.7 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.8 1.2 0.0
7*Run51 7*7
0.6 0.011 0.16 2.56 13.5/33.8 17.8/14.7 25.9/19.5 15.5/11.7 13.7/10.3 1.25/0.95 1.4/1.05 1.55/1.17 0.72/0.54 0.37/0.28 0.28/0.21 0.08/0.06 0.14/0.11 46.5 22.0 31.6
1.0 0.015 0.21 0.67 9.0/22.5 17.5/17.1 28.2/23.3 16.9/13.9 15.0/12.4 1.47/1.23 1.59/1.33 1.66/1.37 0.72/0.58 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 26.7 1.4 71.9
0.5 0.019 0.18 3.77 7.4/18.5 18.9/18.8 28.4/24.3 17.0/14.5 15.1/12.9 1.49/1.29 1.61/1.39 1.67/1.42 0.71/0.6 0.4/0.34 0.29/0.25 0.08/0.07 0.16/0.13 78.6 0.0 21.4
0.5 0.025 0.17 0.3 6.7/16.8 20.1/20.0 28.3/24.6 16.9/14.7 15.0/13.1 1.54/1.36 1.64/1.44 1.65/1.43 0.69/0.58 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 80.5 0.0 19.5
0.6 0.04 0.05 0.15 7.1/17.7 20.1/19.9 28.2/24.3 16.9/14.5 15.0/13.0 1.56/1.37 1.65/1.44 1.65/1.41 0.68/0.56 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.13 80.7 0.0 19.3
0.8 0.064 0.01 0.03 5.6/14.1 22.5/22.1 27.8/24.8 16.7/14.9 14.7/13.1 1.32/1.18 1.47/1.31 1.68/1.5 0.81/0.72 0.4/0.36 0.31/0.27 0.08/0.07 0.15/0.14 81.3 0.0 18.7
0.2 0.102 0.01 0.01 19.7/49.3 0.7/0.6 29.8/18.7 17.9/11.3 15.8/9.9 1.44/0.91 1.6/1.01 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.16/0.1 94.0 6.0 0.0
8*Run54 8*8
1.1 0.011 0.13 0.0 9.0/22.6 19.1/18.1 27.6/23.0 16.5/13.7 14.7/12.2 1.45/1.22 1.57/1.32 1.62/1.34 0.69/0.56 0.39/0.32 0.28/0.23 0.08/0.07 0.15/0.13 6.6 93.4 0.0
1.9 0.016 0.15 0.0 9.3/23.3 17.3/16.7 28.1/23.1 16.8/13.8 14.9/12.3 1.43/1.19 1.57/1.3 1.66/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 14.7 35.2 50.1
0.6 0.021 0.14 0.0 7.8/19.5 19.6/19.1 28.0/23.8 16.7/14.2 14.8/12.6 1.46/1.26 1.58/1.35 1.65/1.4 0.72/0.6 0.4/0.34 0.29/0.24 0.08/0.07 0.15/0.13 78.0 0.0 22.0
0.7 0.033 0.06 0.0 7.1/17.7 19.0/19.0 28.4/24.4 17.0/14.7 15.0/12.9 1.36/1.16 1.51/1.3 1.71/1.47 0.82/0.7 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
0.6 0.043 0.06 0.0 8.0/20.1 17.2/17.2 28.4/23.8 17.0/14.3 15.0/12.6 1.36/1.14 1.52/1.27 1.7/1.43 0.8/0.67 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.057 0.05 0.0 8.2/20.5 16.6/16.6 28.5/23.9 17.1/14.3 15.1/12.6 1.37/1.14 1.52/1.27 1.72/1.44 0.78/0.65 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.4 1.6 0.0
0.8 0.069 0.04 0.0 13.4/33.6 9.6/9.6 29.1/21.5 17.5/12.9 15.4/11.4 1.4/1.04 1.56/1.15 1.75/1.3 0.84/0.62 0.42/0.31 0.32/0.23 0.08/0.06 0.16/0.12 69.3 30.7 0.0
0.9 0.09 0.01 0.0 6.9/17.3 17.3/17.2 28.2/24.3 16.9/14.6 14.9/12.9 1.36/1.17 1.51/1.3 1.7/1.46 0.82/0.71 0.4/0.35 0.31/0.26 0.08/0.07 0.16/0.13 98.9 1.1 0.0
7*Run55 7*7
0.8 0.012 0.08 0.0 13.3/33.2 17.1/14.4 26.3/19.9 15.8/11.9 13.9/10.5 1.27/0.96 1.41/1.07 1.57/1.19 0.73/0.55 0.38/0.28 0.28/0.21 0.08/0.06 0.15/0.11 14.5 1.5 84.0
1.0 0.016 0.1 0.0 7.3/18.2 19.4/19.2 28.2/24.2 16.9/14.5 15.0/12.9 1.49/1.3 1.6/1.39 1.66/1.42 0.71/0.6 0.4/0.34 0.29/0.25 0.08/0.07 0.16/0.13 40.3 0.0 59.7
0.5 0.025 0.07 0.0 5.4/13.4 24.2/23.5 27.6/24.9 16.5/14.8 14.7/13.3 1.61/1.48 1.67/1.52 1.59/1.43 0.61/0.53 0.39/0.35 0.27/0.24 0.08/0.07 0.15/0.14 80.7 0.0 19.3
0.6 0.028 0.05 0.0 7.2/18.1 20.0/19.8 28.2/24.3 16.9/14.5 15.0/12.9 1.58/1.39 1.66/1.45 1.64/1.4 0.66/0.55 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.13 82.5 0.0 17.5
0.6 0.032 0.05 0.0 13.2/32.9 10.2/10.1 29.0/21.7 17.4/13.0 15.4/11.5 1.41/1.05 1.57/1.17 1.73/1.29 0.8/0.59 0.41/0.31 0.31/0.23 0.08/0.06 0.16/0.12 52.9 0.0 47.1
0.7 0.052 0.02 0.0 6.1/15.2 21.0/21.0 28.5/25.1 17.1/15.0 15.1/13.3 1.35/1.19 1.51/1.33 1.68/1.48 0.68/0.59 0.41/0.36 0.29/0.25 0.08/0.07 0.16/0.14 98.3 1.7 0.0
0.8 0.068 0.01 0.0 5.7/14.3 21.1/21.1 28.2/25.0 16.9/15.0 14.9/13.2 1.34/1.19 1.5/1.33 1.7/1.51 0.83/0.74 0.4/0.36 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
8*Run60 8*8
0.4 0.012 0.18 0.0 6.8/17.0 25.3/23.3 26.4/23.3 15.8/13.9 14.0/12.4 1.46/1.31 1.54/1.38 1.54/1.35 0.63/0.54 0.37/0.33 0.26/0.23 0.08/0.07 0.15/0.13 41.9 0.0 58.1
1.2 0.016 0.17 0.0 11.3/28.3 17.3/15.6 27.1/21.5 16.3/12.9 14.4/11.4 1.35/1.07 1.49/1.18 1.61/1.27 0.73/0.56 0.39/0.31 0.29/0.22 0.08/0.06 0.15/0.12 17.0 4.9 78.1
0.5 0.025 0.18 0.0 5.5/13.8 21.8/21.8 28.2/25.2 16.9/15.0 15.0/13.4 1.55/1.4 1.65/1.48 1.65/1.46 0.67/0.58 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 97.8 2.2 0.0
0.5 0.032 0.08 0.0 7.6/18.9 18.7/18.7 28.5/24.2 17.0/14.5 15.1/12.9 1.51/1.3 1.63/1.4 1.67/1.42 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.7 0.052 0.05 0.0 5.9/14.7 21.5/21.5 28.5/25.2 17.0/15.1 15.1/13.4 1.35/1.19 1.51/1.33 1.67/1.48 0.66/0.56 0.4/0.36 0.28/0.25 0.08/0.07 0.16/0.14 85.2 0.0 14.8
0.9 0.082 0.02 0.0 5.0/12.4 20.6/20.6 27.7/24.9 16.6/14.9 14.6/13.2 1.33/1.19 1.48/1.33 1.67/1.5 0.81/0.73 0.4/0.36 0.3/0.27 0.08/0.07 0.15/0.14 88.7 0.0 11.3
0.2 0.107 0.04 0.0 20.0/50.0 0.2/0.1 29.8/18.6 17.9/11.2 15.8/9.9 1.44/0.9 1.6/1.0 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.2 4.8 0.0
0.2 0.13 0.01 0.0 20.0/50.0 0.2/0.1 29.8/18.6 17.9/11.2 15.8/9.9 1.44/0.9 1.6/1.0 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.6 4.4 0.0
8*Run68 8*8
1.0 0.011 0.11 0.0 12.2/30.6 17.3/15.0 26.5/20.6 15.9/12.3 14.0/10.9 1.28/1.0 1.43/1.11 1.59/1.23 0.75/0.58 0.38/0.29 0.29/0.22 0.08/0.06 0.15/0.11 27.7 7.3 65.0
1.0 0.015 0.13 0.0 6.1/15.2 21.5/21.2 28.0/24.7 16.7/14.8 14.9/13.1 1.51/1.35 1.61/1.44 1.64/1.44 0.69/0.59 0.4/0.35 0.28/0.25 0.08/0.07 0.15/0.14 40.6 1.2 58.2
0.5 0.019 0.11 0.0 7.6/19.1 18.5/18.5 28.5/24.2 17.0/14.5 15.1/12.9 1.5/1.29 1.63/1.4 1.67/1.42 0.71/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 80.7 0.0 19.3
0.6 0.025 0.05 0.0 11.3/28.2 13.1/13.1 28.8/22.5 17.2/13.5 15.3/11.9 1.46/1.16 1.6/1.26 1.71/1.33 0.76/0.58 0.41/0.32 0.3/0.23 0.08/0.07 0.16/0.12 59.7 0.0 40.3
0.5 0.033 0.05 0.0 9.1/22.8 16.7/16.5 28.6/23.5 17.1/14.1 15.2/12.5 1.48/1.23 1.61/1.33 1.68/1.38 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 81.7 0.0 18.3
0.6 0.044 0.05 0.0 8.3/20.9 17.7/17.6 28.7/24.0 17.2/14.4 15.2/12.7 1.38/1.15 1.54/1.29 1.7/1.42 0.71/0.58 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.7 0.057 0.04 0.0 8.3/20.9 17.6/17.6 28.6/23.9 17.2/14.4 15.1/12.7 1.36/1.14 1.52/1.27 1.72/1.44 0.75/0.62 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.8 0.075 0.01 0.0 7.0/17.5 18.8/18.5 27.8/24.0 16.7/14.4 14.7/12.7 1.34/1.15 1.49/1.28 1.67/1.45 0.81/0.7 0.4/0.34 0.3/0.26 0.08/0.07 0.15/0.13 88.2 0.0 11.8
6*Run73 6*6
1.0 0.011 0.06 0.0 12.3/30.7 16.7/14.7 26.8/20.8 16.1/12.5 14.2/11.0 1.3/1.01 1.45/1.12 1.61/1.24 0.74/0.57 0.38/0.3 0.29/0.22 0.08/0.06 0.15/0.12 66.5 15.3 18.2
0.7 0.015 0.12 0.0 10.2/25.4 16.6/15.8 27.9/22.5 16.7/13.5 14.8/12.0 1.39/1.13 1.53/1.24 1.66/1.34 0.75/0.59 0.4/0.32 0.29/0.24 0.08/0.07 0.15/0.12 34.3 21.4 44.3
0.7 0.019 0.05 0.0 5.8/14.5 21.5/21.4 28.2/25.0 16.8/14.9 15.0/13.3 1.52/1.37 1.63/1.46 1.65/1.45 0.68/0.59 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 52.7 33.6 13.7
0.6 0.025 0.05 0.0 7.2/18.1 19.0/19.0 28.4/24.4 17.0/14.6 15.1/13.0 1.46/1.26 1.6/1.38 1.67/1.43 0.71/0.6 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.13 82.2 0.0 17.8
0.5 0.033 0.04 0.0 7.8/19.5 18.3/18.3 28.5/24.2 17.1/14.4 15.2/12.8 1.5/1.29 1.63/1.4 1.67/1.41 0.71/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 81.5 0.0 18.5
0.7 0.044 0.02 0.0 7.0/17.4 19.8/19.7 28.5/24.6 17.1/14.8 15.1/13.1 1.36/1.17 1.51/1.3 1.7/1.47 0.7/0.59 0.41/0.35 0.29/0.25 0.08/0.07 0.16/0.14 98.5 1.5 0.0
6*Run74 6*6
1.5 0.013 0.1 0.0 10.0/25.0 19.4/17.7 27.0/22.1 16.2/13.2 14.3/11.7 1.37/1.14 1.5/1.24 1.6/1.3 0.7/0.56 0.38/0.31 0.28/0.23 0.08/0.06 0.15/0.12 13.2 3.5 83.3
0.7 0.016 0.18 0.0 4.6/11.6 22.9/22.9 28.1/25.5 16.8/15.2 14.9/13.6 1.49/1.37 1.6/1.46 1.65/1.49 0.69/0.61 0.4/0.36 0.28/0.26 0.08/0.08 0.16/0.14 54.5 32.0 13.5
0.5 0.02 0.17 0.0 6.5/16.3 22.6/21.7 27.5/24.2 16.4/14.4 14.6/12.9 1.46/1.3 1.58/1.4 1.6/1.41 0.66/0.57 0.39/0.34 0.28/0.24 0.08/0.07 0.15/0.13 76.7 3.9 19.4
0.6 0.026 0.08 0.0 7.1/17.8 19.5/19.4 28.5/24.5 17.0/14.7 15.1/13.0 1.42/1.22 1.56/1.35 1.68/1.44 0.69/0.58 0.4/0.35 0.29/0.24 0.08/0.07 0.16/0.14 98.2 1.8 0.0
0.6 0.041 0.04 0.0 6.8/17.0 19.8/19.8 28.5/24.7 17.1/14.8 15.1/13.1 1.41/1.23 1.56/1.35 1.69/1.46 0.7/0.59 0.41/0.35 0.29/0.25 0.08/0.07 0.16/0.14 81.4 0.0 18.6
0.8 0.054 0.02 0.0 6.0/14.9 21.1/20.9 27.9/24.7 16.8/14.8 14.8/13.1 1.34/1.18 1.49/1.31 1.69/1.49 0.82/0.73 0.4/0.35 0.31/0.27 0.08/0.07 0.15/0.14 87.2 0.0 12.8
8*Run81 8*8
0.9 0.011 0.18 0.0 9.0/22.6 20.6/19.0 27.1/22.6 16.2/13.5 14.4/12.0 1.42/1.2 1.53/1.29 1.59/1.33 0.69/0.56 0.38/0.32 0.28/0.23 0.08/0.07 0.15/0.12 80.6 6.5 12.9
1.6 0.018 0.06 0.0 11.7/29.1 14.0/13.4 28.3/22.0 16.9/13.2 15.0/11.7 1.42/1.11 1.56/1.22 1.68/1.3 0.76/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 43.5 15.2 41.3
0.5 0.023 0.14 0.0 8.0/19.9 17.9/17.9 28.5/24.1 17.1/14.4 15.1/12.8 1.49/1.28 1.61/1.37 1.68/1.41 0.72/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.1 2.9 0.0
0.6 0.031 0.05 0.0 9.8/24.5 15.4/15.4 28.7/23.2 17.2/13.9 15.2/12.3 1.5/1.23 1.62/1.32 1.69/1.36 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 98.1 1.9 0.0
0.6 0.04 0.05 0.0 10.9/27.3 13.7/13.6 28.9/22.8 17.3/13.6 15.3/12.1 1.44/1.14 1.59/1.26 1.71/1.34 0.75/0.58 0.41/0.32 0.3/0.23 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.8 0.064 0.01 0.0 6.8/17.0 19.8/19.7 28.3/24.5 17.0/14.7 15.0/13.0 1.35/1.17 1.51/1.3 1.71/1.48 0.82/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.6 1.4 0.0
0.3 0.102 0.02 0.0 20.0/49.9 0.5/0.3 29.8/18.7 17.9/11.2 15.8/9.9 1.45/0.9 1.61/1.0 1.79/1.12 0.84/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.4 3.6 0.0
0.4 0.133 0.02 0.0 20.0/50.0 0.1/0.0 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.2 2.8 0.0
9*Run93 9*9
1.8 0.012 0.1 0.0 13.1/32.8 12.9/11.8 27.8/20.9 16.7/12.6 14.7/11.1 1.35/1.01 1.49/1.12 1.67/1.26 0.79/0.59 0.4/0.3 0.3/0.23 0.08/0.06 0.15/0.12 49.7 39.0 11.3
1.1 0.017 0.2 0.0 10.1/25.3 15.4/15.1 28.5/22.9 17.1/13.7 15.1/12.2 1.44/1.17 1.58/1.28 1.69/1.35 0.74/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 32.1 40.9 27.0
0.5 0.023 0.07 0.0 9.4/23.5 16.0/15.9 28.6/23.4 17.1/14.0 15.2/12.4 1.48/1.23 1.61/1.33 1.69/1.37 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 80.8 0.0 19.2
0.3 0.03 0.12 0.0 1.4/3.4 29.5/29.5 28.0/27.3 16.6/16.1 15.0/14.6 2.01/1.97 1.94/1.9 1.54/1.49 0.45/0.43 0.39/0.37 0.24/0.23 0.08/0.08 0.15/0.15 75.8 24.2 0.0
0.6 0.036 0.09 0.0 9.5/23.8 15.7/15.6 28.7/23.4 17.2/14.0 15.2/12.4 1.45/1.19 1.59/1.3 1.7/1.38 0.74/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 82.1 0.0 17.9
0.6 0.048 0.04 0.0 9.1/22.8 16.5/16.5 28.8/23.7 17.2/14.1 15.3/12.6 1.43/1.19 1.6/1.33 1.68/1.38 0.71/0.56 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.076 0.01 0.0 7.2/18.0 18.2/18.2 28.1/24.1 16.9/14.5 14.9/12.7 1.35/1.15 1.5/1.28 1.69/1.45 0.82/0.71 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.7 1.3 0.0
0.3 0.12 0.01 0.0 19.9/49.7 0.6/0.4 29.8/18.7 17.9/11.2 15.8/9.9 1.45/0.91 1.61/1.01 1.79/1.12 0.84/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.3 3.7 0.0
0.3 0.191 0.0 0.0 19.7/49.2 0.6/0.5 29.9/18.9 18.0/11.3 15.9/10.0 1.45/0.91 1.61/1.02 1.79/1.13 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.06 0.17/0.1 96.2 3.8 0.0
6*Run94 6*6
1.7 0.012 0.03 0.0 10.8/27.0 17.4/16.0 27.4/21.9 16.4/13.1 14.6/11.6 1.42/1.16 1.54/1.24 1.62/1.29 0.71/0.55 0.39/0.31 0.28/0.22 0.08/0.06 0.15/0.12 21.3 2.5 76.2
1.0 0.017 0.1 0.0 9.9/24.8 15.0/15.0 28.7/23.2 17.2/13.9 15.2/12.3 1.47/1.2 1.6/1.3 1.7/1.37 0.75/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 45.4 28.3 26.3
0.6 0.023 0.06 0.0 13.3/33.4 11.3/10.7 28.4/21.2 17.1/12.7 15.1/11.2 1.37/1.02 1.53/1.14 1.7/1.27 0.79/0.58 0.41/0.3 0.3/0.23 0.08/0.06 0.16/0.12 64.5 2.1 33.4
0.7 0.03 0.03 0.0 6.7/16.8 19.4/19.4 28.2/24.5 17.0/14.7 15.0/13.0 1.35/1.17 1.5/1.3 1.7/1.47 0.82/0.71 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 97.5 2.5 0.0
0.5 0.042 0.03 0.0 3.0/7.6 24.0/23.9 27.4/25.8 16.5/15.5 14.5/13.6 1.31/1.23 1.46/1.37 1.65/1.55 0.8/0.75 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 97.7 2.3 0.0
0.8 0.048 0.02 0.0 6.8/17.1 17.7/17.6 28.2/24.4 16.9/14.6 14.9/12.9 1.36/1.17 1.51/1.3 1.69/1.47 0.82/0.71 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
8*Run95 8*8
0.5 0.013 0.19 0.0 7.6/19.1 24.8/22.5 26.2/22.8 15.7/13.6 13.9/12.1 1.36/1.2 1.48/1.3 1.54/1.34 0.65/0.55 0.37/0.32 0.27/0.23 0.08/0.07 0.15/0.13 41.7 16.2 42.2
1.2 0.016 0.12 0.0 13.1/32.8 11.7/11.0 28.3/21.2 17.0/12.7 15.0/11.2 1.37/1.02 1.52/1.14 1.7/1.27 0.8/0.6 0.4/0.3 0.31/0.23 0.08/0.06 0.16/0.12 31.6 11.0 57.4
0.9 0.02 0.14 0.0 12.1/30.3 13.5/12.8 28.2/21.7 16.9/13.0 14.9/11.5 1.38/1.06 1.53/1.18 1.68/1.29 0.77/0.59 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 58.5 12.6 28.9
0.5 0.032 0.08 0.0 10.1/25.3 15.9/15.4 28.2/22.8 16.9/13.6 15.0/12.1 1.43/1.17 1.57/1.27 1.67/1.35 0.75/0.59 0.4/0.32 0.3/0.24 0.08/0.07 0.16/0.13 39.7 41.9 18.4
0.8 0.041 0.05 0.0 17.9/44.7 4.4/3.8 29.0/19.2 17.4/11.5 15.4/10.2 1.4/0.93 1.56/1.03 1.74/1.15 0.83/0.55 0.41/0.27 0.31/0.21 0.08/0.06 0.16/0.11 40.2 2.9 56.9
0.8 0.066 0.04 0.0 5.6/14.0 21.9/21.6 27.9/24.9 16.8/14.9 14.8/13.2 1.33/1.18 1.48/1.32 1.68/1.5 0.82/0.73 0.4/0.36 0.31/0.27 0.08/0.07 0.15/0.14 85.9 0.0 14.1
0.9 0.076 0.02 0.0 7.8/19.6 17.3/17.3 28.2/23.8 16.9/14.3 14.9/12.6 1.36/1.14 1.51/1.27 1.7/1.43 0.82/0.7 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.3 0.121 0.0 0.0 20.0/50.0 0.0/0.0 30.0/18.7 18.0/11.2 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.4 3.6 0.0
9*Run98 9*9
0.7 0.01 0.11 0.0 14.8/37.1 13.4/11.2 26.9/19.5 16.2/11.7 14.3/10.3 1.3/0.94 1.44/1.04 1.62/1.17 0.77/0.55 0.39/0.28 0.29/0.21 0.08/0.06 0.15/0.11 25.9 18.9 55.2
1.6 0.016 0.04 0.0 13.5/33.7 10.8/10.3 28.6/21.2 17.1/12.7 15.1/11.2 1.39/1.03 1.54/1.15 1.71/1.27 0.79/0.58 0.41/0.3 0.31/0.23 0.08/0.06 0.16/0.12 22.0 33.0 45.0
0.7 0.021 0.09 0.0 9.0/22.5 18.0/17.3 28.0/23.2 16.7/13.9 14.8/12.3 1.41/1.18 1.55/1.29 1.66/1.37 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.15/0.13 52.0 25.3 22.7
0.6 0.028 0.04 0.0 8.7/21.7 17.3/17.2 28.7/23.8 17.1/14.2 15.2/12.7 1.47/1.23 1.64/1.38 1.68/1.39 0.7/0.56 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.6 0.037 0.03 0.0 9.9/24.8 15.1/15.1 28.8/23.2 17.3/13.9 15.3/12.3 1.45/1.18 1.61/1.31 1.7/1.37 0.74/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.048 0.03 0.0 9.9/24.7 15.2/15.2 28.8/23.2 17.3/13.9 15.3/12.3 1.39/1.12 1.55/1.25 1.72/1.39 0.76/0.6 0.41/0.33 0.31/0.25 0.08/0.07 0.16/0.13 97.8 2.2 0.0
0.8 0.064 0.01 0.0 8.8/21.9 16.3/16.3 28.5/23.6 17.1/14.2 15.1/12.5 1.37/1.13 1.52/1.26 1.72/1.42 0.82/0.68 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.7 1.3 0.0
0.3 0.101 0.02 0.0 20.0/49.9 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.0 3.0 0.0
0.4 0.132 0.02 0.0 20.0/49.9 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.2 2.8 0.0
6*Run100 6*6
0.4 0.013 0.11 0.0 9.9/24.7 19.6/17.8 26.9/22.0 16.1/13.2 14.2/11.7 1.33/1.09 1.47/1.21 1.6/1.31 0.72/0.58 0.38/0.31 0.28/0.23 0.08/0.06 0.15/0.12 29.4 6.2 64.4
1.2 0.016 0.1 0.0 12.0/30.1 14.1/13.2 28.0/21.6 16.8/13.0 14.8/11.5 1.4/1.09 1.54/1.19 1.67/1.28 0.76/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 23.7 1.4 74.9
0.4 0.022 0.14 0.0 3.2/8.1 25.1/25.1 28.0/26.2 16.7/15.6 14.9/13.9 1.53/1.45 1.63/1.53 1.63/1.52 0.66/0.61 0.4/0.37 0.28/0.26 0.08/0.08 0.15/0.14 97.0 3.0 0.0
0.5 0.026 0.05 0.0 7.3/18.2 18.8/18.8 28.4/24.4 17.0/14.6 15.1/12.9 1.48/1.28 1.6/1.38 1.68/1.43 0.72/0.6 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.13 80.6 0.0 19.4
0.5 0.034 0.05 0.0 7.0/17.5 19.7/19.6 28.4/24.5 17.0/14.6 15.1/13.0 1.54/1.35 1.64/1.43 1.66/1.42 0.68/0.57 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 97.6 2.4 0.0
0.7 0.054 0.0 0.0 6.1/15.2 21.1/21.1 28.5/25.1 17.1/15.1 15.1/13.3 1.35/1.18 1.51/1.32 1.7/1.5 0.68/0.58 0.41/0.36 0.29/0.26 0.08/0.07 0.16/0.14 98.5 1.5 0.0
§ RE-SCALING THE DUST CONDENSATES
We re-scale the dust condensation model around a Sun-like star from <cit.> to a model for the dust condensation around a T1-like star.
To this end, we re-scale the size of the disc by matching the location of the ice line. The ice line, the location where T=170 K, sits at 6.5 au around a Sun-like star at the epoch we adopt from the <cit.> model.
We multiply the radii of the Sun-like disc by 0.1/6.5 such that the new ice line resides at 0.1 au, which results in a disc spanning 0.01–0.3 au.
Our composition tracking code then uses the unmodified relative abundances at each radius of the re-scaled disc to initialize the bodies and to track the composition change from pebble accretion.
Differences in stellar evolution between G-type and M-dwarf stars will lead to different disc masses, lifetimes, mid-plane pressures, temperature profiles and thus different evolutionary tracks and timescales for dust condensation <cit.>. However, extrapolating the relative abundances of the dust from a disc around a Sun-like star at a single epoch and re-scaling for disc size should be representative of the dust in an M-dwarf system with a solar composition, such as TRAPPIST-1 <cit.>, at some earlier epoch. In the dust condensation code the dust condensates do not affect the subsequent evolution of the disc, so to first approximation the results reported here should be valid for the T1 system at some earlier time. While the dust condensation is determined locally by density, mid-plane pressure and temperature, and differences in stellar evolution between G- and M-type stars can lead to differences in these parameters, fast evolution of the condensation only occurs while the local disc temperature is higher than the condensation temperature. This can be seen in Fig. 7 of <cit.>, where a clear condensation front moves through the disc over time. As soon as the temperature drops below the condensation temperature, the abundance evolution becomes slowly varying. Thus, once the disc cools sufficiently the dust condensation is sensitive primarily to the initial disc composition.
Figure <ref> compares the density of the gas disc ρ, temperature T, and mid-plane pressure P profiles over 0.01–0.3 au of the re-scaled profile around a Sun-like star from <cit.> and the analytic profiles used in the n-body simulations at t=0, as described in Section <ref>.
To derive the analytic P profile we use
P=ρ c_ s^2
where ρ = Σ/(H√(2 π)) is the gas density, H is the gas scale height and c_ s is the sound speed.
The profiles used in <cit.> correspond to a disc shortly after disc formation, whereas our disc profiles are for a more evolved disc, so differences can be seen between the profiles which may lead to differences in the dust evolution. However, the density and pressure are still within approximately an order of magnitude of one another, while the temperatures are below the condensation temperatures of the major elements; thus the relative abundances in condensed dust could remain similar in these two discs, as both discs begin with a solar-like composition.
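For concreteness, the radius re-scaling and the analytic mid-plane pressure described above can be sketched as follows. This is a minimal illustration, not the composition-tracking code itself: the function names, the cgs unit choices, the isothermal sound speed c_s^2 = k_B T/(μ m_H) and the mean molecular weight μ=2.3 are assumptions made for this example.

```python
import numpy as np

k_B = 1.380649e-16    # Boltzmann constant [erg/K]
m_H = 1.6726e-24      # hydrogen mass [g]
mu = 2.3              # mean molecular weight (assumed for this sketch)

def rescale_radii(r_au, iceline_old=6.5, iceline_new=0.1):
    """Shrink the Sun-like disc so its T = 170 K ice line moves from 6.5 au to 0.1 au."""
    return np.asarray(r_au) * (iceline_new / iceline_old)

def midplane_pressure(sigma_gas, H, T):
    """P = rho c_s^2 with rho = Sigma / (H sqrt(2 pi)); all inputs in cgs units."""
    rho = np.asarray(sigma_gas) / (np.asarray(H) * np.sqrt(2.0 * np.pi))
    cs2 = k_B * np.asarray(T) / (mu * m_H)
    return rho * cs2
```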
|
http://arxiv.org/abs/2307.05597v1 | 20230710193937 | Phase transitions in systems of particles with only hard-core interactions | [
"Deepak Dhar",
"R. Rajesh",
"Aanjaneya Kumar"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] | |
http://arxiv.org/abs/2307.07600v1 | 20230714194538 | How large is a disk -- what do protoplanetary disk gas sizes really mean? | [
"Leon Trapman",
"Giovanni Rosotti",
"Ke Zhang",
"Benoit Tabone"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Leon Trapman
[email protected]
Leon Trapman (ORCID: 0000-0002-8623-9703)
Department of Astronomy, University of Wisconsin-Madison, 475 N Charter St, Madison, WI 53706
Giovanni Rosotti (ORCID: 0000-0003-4853-5736)
Dipartimento di Fisica, Università degli Studi di Milano, Via Giovanni Celoria, 16, 20133, Milano, Italy
School of Physics and Astronomy, University of Leicester, Leicester LE1 7RH, UK
Leiden Observatory, Leiden University, 2300 RA Leiden, the Netherlands
Ke Zhang (ORCID: 0000-0002-0661-7517)
Department of Astronomy, University of Wisconsin-Madison, 475 N Charter St, Madison, WI 53706
Benoît Tabone (ORCID: 0000-0002-1103-3225)
Université Paris-Saclay, CNRS, Institut d’Astrophysique Spatiale, F-91405 Orsay, France
Leiden Observatory, Leiden University, 2300 RA Leiden, the Netherlands
It remains unclear what mechanism is driving the evolution of protoplanetary disks. Direct detection of the main candidates, either turbulence driven by magnetorotational instability or magnetohydrodynamical disk winds, has proven difficult, leaving the time evolution of the disk size as one of the most promising observables able to differentiate between these two mechanisms. But to do so successfully, we need to understand what the observed gas disk size actually traces. We studied the relation between R_ CO, 90%, the radius that encloses 90% of the ^12CO flux, and R_c, the radius that encodes the physical disk size, in order to provide simple prescriptions for conversions between these two sizes. For an extensive grid of thermochemical models we calculate R_ CO, 90% from synthetic observations and relate properties measured at this radius, such as the gas column density, to bulk disk properties, such as R_c and the disk mass M_ disk. We found an empirical correlation between the gas column density at R_ CO, 90% and disk mass: N_ gas(R_ CO, 90%) ≈ 3.73×10^21(M_ disk/M_⊙)^0.34 cm^-2. Using this correlation we derive an analytical prescription of R_ CO, 90% that only depends on R_c and M_ disk.
We derive R_c for disks in Lupus, Upper Sco, Taurus and DSHARP, finding that disks in the older Upper Sco region are significantly smaller (⟨ R_c ⟩ = 4.8 au) than disks in the younger Lupus and Taurus regions (⟨ R_c ⟩ = 19.8 and 20.9 au, respectively). This temporal decrease in R_c goes against predictions of both viscous and wind-driven evolution, but could be a sign of significant external photoevaporation having truncated disks in Upper Sco.
§ INTRODUCTION
Proto-planetary disks are the birth-sites of planets and only by understanding disks and their properties can we understand planet formation <cit.>.
Among the disk properties, size is one of the most fundamental. On a simple level, in combination with the disk mass, disk size is the main parameter determining the disk surface density, which in turn represents the available material to be accreted into planets. On a perhaps deeper level, the evolution of the size can inform us on the mechanism driving disc evolution. For example, in a scenario in which accretion is driven by viscosity, the disc size needs to get larger with time <cit.> in order to conserve the disk angular momentum: this is normally called viscous spreading. Conversely, if angular momentum is extracted by MHD winds, expansion is not required (although see <cit.> for the possibility of wind-driven disks growing over time). It is worth mentioning that both processes could affect different parts of the disk simultaneously, thereby complicating our simple view of disk evolution (e.g. ). Other mechanisms such as the presence of a stellar companion <cit.>, external photo-evaporation <cit.> and, if the disk size is determined from the dust continuum emission at sub-millimeter wavelengths, radial drift <cit.> can reduce the size of a proto-planetary disk and make it smaller with time, which has important consequences for disk evolution.
In the previous discussion we have been purposely negligent in describing in detail what “size” means. The underlying assumption in the way the term is normally used is that the disk size should somehow reflect where the disk mass is distributed. In practice, since following the analytical solutions of <cit.> it is common to parameterize disk surface densities using an exponentially tapered power-law, disk size is often intended as the scale radius of the exponential, normally denoted with R_ c. Any other parametrization of the surface density can always be characterized by defining the radius enclosing a given fraction of the disk mass.
In observations, however, proto-planetary disks have multiple “sizes”, and one has to be careful which size is being considered for any analysis to be meaningful. Sizes are different first of all because proto-planetary disks can be observed at multiple wavelengths and in multiple tracers. Before ALMA became available, most available measurements of disk sizes were done in the continuum at sub-mm wavelengths (see review of pre-ALMA results by ), with measurements available also at optical wavelengths thanks to HST <cit.>, although predominantly for objects in Orion. While ALMA greatly expanded the sample of sub-mm continuum disc sizes <cit.>, one of ALMA's biggest contributions is that we now have relatively large samples with measurements of gas sizes <cit.>. We should highlight however that also “gas” is a generic term since many different gas-phase species are known in proto-planetary disks. In this paper with “gas” disk size we always mean its most abundant species, CO, and particularly its most abundant isotopologue, ^12CO. This choice is motivated by the fact that ^12CO, by virtue of its brightness, is the tracer with by far the largest observational sample of measured disk sizes.
Even once the wavelength and tracer are specified, one still needs to specify how the disk size is exactly determined from the observations - e.g., see <cit.> for a discussion concerning the continuum. In this paper we will consider as observational disk size the radius enclosing a given fraction of the total flux, since this definition is generic enough to be applied to any observation, and following common observational conventions take the fraction to be 90 %. We denote this radius as R_ CO, 90%.
Regardless of the observational tracer, one should stress that no available tracer is really tracing the disk size in the purely theoretical sense; i.e., these tracers tell us the surface brightness distribution of the given tracer, and not how the mass of the disk is distributed.
This is because of several reasons: the abundance of the chosen tracer may vary throughout the disk, the intensity can get weaker or stronger as the disk temperature varies, and the given tracer may not be optically thin, implying that its surface brightness does not trace its surface density.
Investigating the link between the observed size (R_ CO, 90%) of a proto-planetary disk and the theoretical size (R_c) is the purpose of this paper.
In order to accomplish this goal, we have run a grid of thermochemical models where we compute the abundance of ^12CO in the disk and we have ray-traced the models to account for radiative transfer effects. Starting from earlier work presented in <cit.> and <cit.>, we then use this grid to derive simple, yet accurate, analytical relations which allow us to predict the observed disk size for a given disk mass and theoretical size. The benefit of an analytical relation is that it can be inverted relatively easily. We make use of this to derive R_c from observations of R_ CO, 90% of disks in Lupus and Upper Sco, and discuss the implications for disk evolution.
The paper is structured as follows. We first present the technical details of our models in section <ref>, and then show our results concerning the relation between R_ CO, 90% and R_c in section <ref>. In section <ref> we apply the inverse relation to measure R_c in an observational sample and discuss the caveats of our work, before finally drawing our conclusions in section <ref>.
§ THE MODELS
The location of R_ CO, 90%, defined as the radius that encloses 90% of the ^12CO 2-1 flux, depends on the CO emission profile, which in turn depends on the CO chemistry and thermal structure of the disk, both of which can be obtained using a thermochemical model. In this work we use the thermochemical code <cit.> to run a series of disk models.
self-consistently calculates the thermal and chemical structure of a disk with a given (gas and dust) density structure and stellar radiation field. The code first computes the internal radiation field and dust temperature structure using a 2D Monte Carlo method to solve the radiative transfer equation. It then iteratively solves the time-dependent chemistry, calculates molecular and atomic excitation levels, and computes the gas temperature by balancing heating and cooling processes until a self-consistent solution is found. Finally, the model is raytraced to construct synthetic emission maps. A more detailed description of the code is provided in Appendix A of <cit.>.
For the surface density profile of our models we take the self-similar solution of the generalized, i.e. viscous and/or wind-driven, disk evolution given in <cit.>, which is a tapered power-law of the form
Σ_ gas(R) = Γ((ξ + 2 - γ)/(2 - γ)) M_ disk/(2π R_c^2) (R/R_c)^(-γ+ξ) exp[-(R/R_c)^(2-γ)].
Here M_ disk is the mass of the disk, R_c is the characteristic size, γ is the slope of the surface density, which is related to the slope of α̃ (see ). For the viscous case γ coincides with the slope of the kinetic viscosity (see, e.g. ).
ξ is the mass ejection index <cit.> and Γ is the gamma function, which for common ranges of γ and ξ is a factor of order unity.
In this work we will set ξ=0.25, which is equivalent to only vertical angular momentum transport by a MHD wind.
Note that ξ has only a small effect on R_ CO, 90%, as shown in Figure 11 in <cit.>. Similarly we set γ=1 for most of this work, but see Section <ref> for the effect of γ on our results. Note that in contrast to <cit.> disk evolution is not included and the surface density is fixed for each model.
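As a concrete illustration, the tapered power-law surface density above can be evaluated numerically as in the short sketch below. This is not the model code itself but a minimal rendering of the profile; the function name and the cgs unit conventions are choices made for this example.

```python
import numpy as np
from scipy.special import gamma as Gamma

M_SUN = 1.989e33   # g
AU = 1.496e13      # cm

def sigma_gas(R_au, M_disk_msun, R_c_au, gam=1.0, xi=0.25):
    """Tapered power-law gas surface density (see text), returned in g cm^-2."""
    prefac = Gamma((xi + 2.0 - gam) / (2.0 - gam))              # order-unity factor (see text)
    sigma_c = M_disk_msun * M_SUN / (2.0 * np.pi * (R_c_au * AU) ** 2)
    x = np.asarray(R_au) / R_c_au
    return prefac * sigma_c * x ** (-gam + xi) * np.exp(-x ** (2.0 - gam))

# example: a fiducial-like disk with M_disk = 0.01 Msun and R_c = 50 au
# sigma_gas(np.logspace(-1, 3, 200), 1e-2, 50.0)
```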
The vertical density is assumed to be a Gaussian around the disk midplane, which is the outcome of hydrostatic equilibrium under the simplifying assumption that the disk is vertically isothermal (see Eq (<ref>)).
H(R) = R h_c (R/R_c)^ψ
where h_c is the opening angle at and ψ is the flaring angle.
Dust is included in the form of two dust population following e.g. <cit.>. Small grains [0.005-1 μm], making up a fraction (1-f_ large) of the total dust mass are distributed over the full vertical and radial extent of the disk, following the gas. Large grains [1-10^3μm] that make up the remaining f_ large fraction of the dust mass have the same radial distribution as the gas, but are vertically confined to the midplane to simulate the effect of vertical dust settling. This is achieved by reducing their scale height by a factor χ < 1.
Finally, the star is assumed to be a 4000 K blackbody with a stellar radius chosen such that the star has a stellar luminosity L_* = 0.28 L_⊙. To this spectrum we add a 10^4 K blackbody to simulate the accretion luminosity released by a 10^-8 M_⊙/yr stellar mass accretion flow, where we assume that 50% of the gravitational potential energy is released as radiation (e.g. ).
Table <ref> summarizes the parameters of our fiducial models.
To test the empirical correlation presented in the next section we also ran multiple sets of models that, similar to our fiducial models, span a range of disk masses but where one of the fiducial model parameters was varied over two or more values. The selected parameters are all expected to have a significant effect on the gas density, the temperature structure and/or the chemistry of CO. These model parameters include: the stellar luminosity L_*, the opening angle h_c, the external UV field (ISRF), the characteristic radius R_c, the slope of the surface density γ, the flaring angle ψ, the dust settling parameter χ and the fraction of large grains f_ large.
Further parameters such as, for example, the UV luminosity of the star were also examined, but tests showed that they had no significant effect on R_ CO, 90%.
The inclination of the disk can also affect R_ CO, 90%, but its effects can be minimized for moderately inclined disks (< 60 deg) by measuring R_ CO, 90% in the deprojected disk frame (see, e.g. appendix A in ).
§ RESULTS
§.§ A tight empirical correlation between N_ gas(R_ CO, 90%) and the disk mass
It is common practice to measure the protoplanetary gas disk sizes from the extent of the sub-millimeter ^12CO rotational emission. These low J lines require a relatively small column to become optically thick, allowing us to easily detect the low density material found in the outer part of the disk. Furthermore, at low column densities UV photons are able to photo-dissociate CO, thus removing the molecule from the gas. The exact CO column density required to self-shield against this depends somewhat on the molecular hydrogen column and the temperature, but it lies at around a few times 10^15 cm^-2 (see, e.g. ). Back-of-the-envelope calculations show that the radius where CO millimeter lines become optically thin (R_τ[mm]=1) approximately coincides with the radius where CO stops being able to self-shield against photodissociation (R_ CO p.d.).
It should be noted that CO is also partly protected by mutual line shielding of CO by H_2, but this is negligible compared to the effect of CO self-shielding (see, e.g. ).
This sets the expectation of a link between the observed gas disk size R_ CO, 90%, which is linked to R_τ=1, and the surface density, albeit indirectly, from N_ CO(R_ CO p.d.)≈ 10^15 cm^-2 (see, e.g., ).
Using our thermochemical models, we can test this expectation. After measuring R_ CO, 90% from the synthetic CO 2-1 observations of our models we find a surprisingly tight correlation between the gas column density[In this work the gas column density is defined assuming a mean molecular weight μ=2.3, so N_ gas = Σ_ gas/(2.3 m_H).] at the observed outer radius (N_ gas(R_ CO, 90%)) and the mass of the disk (M_ disk). The top left panel of Figure <ref> shows that N_ gas(R_ CO, 90%) increases with M_ disk as a powerlaw, N_ gas(R_ CO, 90%) ∝ M_ disk^0.34.
The positive correlation can be understood, at least qualitatively, by looking at the other quantities shown in Figure <ref> that are also obtained at R_ CO, 90%.
First off, the column density of CO at R_ CO, 90%, denoted as N_ CO(R_ CO, 90%), has an approximately constant value of ≈ 2×10^15 cm^-2 across the full disk mass range examined here.
This value corresponds to the CO column required for CO self-shielding (e.g. ), which matches with the expectation discussed earlier that R_ CO, 90% roughly coincides with the radius where CO starts to become photo-dissociated. We would therefore expect that the observed disk size increases with disk mass, because this critical CO column density, assuming a fixed CO abundance, lies further outward for a disk that has more mass (see e.g. ). However, further out from the star the disk is also colder and a larger fraction of the CO column is frozen out, resulting in a lower column-averaged CO abundance. This is corroborated by the rightmost panels of Figure <ref>, which show that the column-averaged CO abundance decreases for higher disk masses and that the height below which CO freezes out increases with disk mass. This decreasing CO abundance means that the gas column density at R_ CO, 90% needs to increase with disk mass in order to reach the same constant CO column density.
While this empirical correlation is evident in the models and can be understood qualitatively, it is difficult to reproduce it quantitatively. Appendix <ref> shows how this could be done using a toy model. It also shows that R_τ_ CO=1, the radius where ^12CO 2-1 becomes optically thin, is the more logical choice for such a model, rather than R_ CO, 90%. However, while this toy model is able to show a correlation between N_ gas(R_τ_ CO=1) and M_ disk, in practice the empirical relation between N_ gas(R_ CO, 90%) and M_ disk shown in Figure <ref> provides a much tighter correlation. In light of this we will use this empirical correlation throughout the rest of this work.
§.§ Robustness of the correlation against varying disk parameters
Figure <ref> shows that the correlation between N_ gas(R_ CO, 90%) and M_ disk not only shows up for a single set of models but is unaffected by most disk parameters. The exceptions are the stellar luminosity, the strength of the external interstellar radiation field (ISRF) and the slope of the surface density profile. The stellar luminosity directly affects the temperature structure of the disk. Increasing it moves the CO snow surface closer to the midplane. This increases the column averaged CO abundance at R_ CO, 90%, which reduces the gas column needed to obtain the critical CO column density.
Increasing the ISRF has two effects on the location of R_ CO, 90%. Firstly, a larger CO column, and therefore also a larger gas column, is required to self-shield the CO against the stronger UV radiation field. Secondly, the external radiation will heat up the gas in the outer disk, which can thermally desorb CO ice back into the gas. This will increase the column averaged CO abundance, moving N_ gas(R_ CO, 90%) down again. The latter effect likely explains why for high disk mass both sets of models coincide again in Figure <ref>.
Finally, models with a steeper surface density slope (γ=1.5) have a much shallower exponential taper in the outer disk (Σ_ gas,outer ∝ exp[-(R/R_c)^(2-γ)]). Depending on the mass of the disk the CO emission in this taper can be partially optically thin. Inspection of the models shows that ones with γ=0.5-1 have τ≳1 at R_ CO, 90%, whereas models with γ=1.5 have τ≈0.1 at this radius. The presence of significant optically thin CO emission means R_ CO, 90% no longer directly traces the radius where CO stops being able to self-shield.
This is an important reason why N_ gas(R_ CO, 90%) scales with M_ disk (see Appendix <ref> for details).
§.§ Deriving an analytical expression for R_ CO, 90%
If we fit the models presented in the previous section with a simple powerlaw between the gas column density at R_ CO, 90% and the disk mass, we obtain
N_ gas(R_ CO, 90%) ≡ N_ gas,crit ≈ 3.7×10^21 (M_ gas/M_⊙)^0.34 cm^-2.
As discussed in the previous section, most disk parameters do not affect this powerlaw. Of the ones that do, only the stellar luminosity dependence can be readily included, as it only changes the slope and normalization of the powerlaw by a small factor. If we fit the stellar luminosity dependence of these two parts of our powerlaw, we obtain
N_ gas,crit ≈ 10^(21.27 - 0.53 log_10 L_*) (M_ gas/M_⊙)^(0.3 - 0.08 log_10 L_*) cm^-2.
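A short numerical sketch of these two fits is given below; it simply evaluates the power laws above and is provided for convenience rather than taken from the original analysis (the function name and default arguments are choices made here).

```python
import numpy as np

def n_gas_crit(M_disk_msun, L_star_lsun=None):
    """Critical gas column density at R_CO,90% in cm^-2.
    L_star_lsun=None -> fiducial fit; otherwise the L_*-dependent fit."""
    if L_star_lsun is None:
        return 3.7e21 * M_disk_msun ** 0.34
    logL = np.log10(L_star_lsun)
    return 10.0 ** (21.27 - 0.53 * logL) * M_disk_msun ** (0.3 - 0.08 * logL)

# example: a 0.01 Msun disk around a 0.28 Lsun star
# n_gas_crit(1e-2, L_star_lsun=0.28)
```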
As shown in the recent work by <cit.> we can use this critical gas column density to obtain an analytical expression for R_ CO, 90%. While <cit.> left this critical value as a free parameter (Σ_ crit in their notation), our models provide a quantitative estimate for this parameter.
Because the analytical solution contains a special function, the Lambert-W function or product-log function, it is convenient to consider the case in which R_ CO, 90% ≫ R_c, i.e. that R_ CO, 90% lies far into the exponential taper of the surface density profile. This case is more tractable and it is
straightforward to show from Eqs. (<ref>) and (<ref>) that the observed outer radius scales with the logarithm of the disk mass
μ m_H N_ gas,crit/Σ_c ≈ exp[-(R_ CO, 90%/R_c)^(2-γ)],
which for γ=1 gives R_ CO, 90% ≈ R_c [0.66 ln M_ disk - 2 ln R_c + const],
where M_ disk and R_c are in units of M_⊙ and au, respectively.
To first order the observed outer radius is thus expected to scale with the logarithm of the disk mass. Its dependence on R_c is more complex and will be explored in the next section.
If the surface density profile (Eq. (<ref>)) is inverted without any simplifying assumptions we obtain the following analytical prescription for R_ CO, 90% as a function of R_c, M_ disk (and L_*):[Note that if the stellar luminosity dependence of N_ gas,crit is included, the term in the square brackets of Equations (<ref>), (<ref>) and (<ref>) becomes
[..] = 9.862×10^(7 + 0.53 log_10 L_*) (M_d/M_⊙)^(0.70 + 0.08 log_10 L_*) (R_c/ au)^-2,
where L_* is in units of L_⊙.]
R_ CO, 90% = R_c ( (γ-ξ)/(2-γ) W( ((2-γ)/(γ-ξ)) [ 4.9×10^7 (M_ d/M_⊙)^0.66 ( au/R_c)^2 ]^((2-γ)/(γ-ξ)) ) )^(1/(2-γ))
Here W(z) is the Lambert-W function, or product-log function, specifically its principal solution (k=0).
For common assumptions of a viscously evolving disk, i.e. γ=1 and ξ=0, the prescription reduces to
R_ CO, 90% = R_c× W( [ 4.9×10^7 (M_ disk/M_⊙)^0.66(R_c/ 1 au)^-2] )
Similarly, for γ=1 and ξ=0.25 (the values of the fiducial models in this work)
R_ CO, 90% = 3 R_c/4× W(4/3[ 4.9×10^7 (M_ disk/M_⊙)^0.66(R_c/ 1 au)^-2]^4/3).
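The general prescription above is straightforward to evaluate numerically with the principal branch of the Lambert-W function; a minimal sketch (not the authors' code) is shown below, and it reduces to the two special cases quoted above for the corresponding values of γ and ξ. The function name and units (au, M_⊙) are choices made for this example.

```python
import numpy as np
from scipy.special import lambertw

def r_co90(R_c_au, M_disk_msun, gam=1.0, xi=0.25):
    """Observed CO disk size R_CO,90% in au from the general prescription, using W_0."""
    a = (gam - xi) / (2.0 - gam)                          # (gamma - xi) / (2 - gamma)
    bracket = 4.9e7 * M_disk_msun ** 0.66 / R_c_au ** 2   # term in the square brackets
    arg = bracket ** (1.0 / a) / a
    x = a * np.real(lambertw(arg, k=0))
    return R_c_au * x ** (1.0 / (2.0 - gam))

# example: R_c = 50 au, M_disk = 0.01 Msun with the fiducial gamma = 1, xi = 0.25
# r_co90(50.0, 1e-2)
```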
Equation (<ref>) allows us to analytically calculate R_ CO, 90% from just R_c, M_ disk, L_* and the slope of the surface density. Before we use it, however, it is worthwhile to examine how well it reproduces the R_ CO, 90% obtained from our disk models. Figure <ref> shows this comparison for both the approximation that the dominant part of the surface density profile is its exponential taper (see Eq. (<ref>)) and for the full derivation of an analytical R_ CO, 90% (Eq. (<ref>)).
The approximation of the surface density as just its exponential taper, as was proposed in e.g. <cit.>, captures the general trend of R_ CO, 90% increasing with M_ disk, but does not match the exact shape of the mass dependence of R_ CO, 90%.
The R_ CO, 90% calculated using Eq. (<ref>) greatly improves the match, showing excellent agreement with the R_ CO, 90% obtained from the disk models. Only for the very lowest and highest disk masses do we see a significant difference between the models and the analytical R_ CO, 90%. Note that these are the same models where N_ gas(R_ CO, 90%) does not follow the powerlaw relation with M_ disk (see Figure <ref>).
Figure <ref> also shows the equivalent of the expression for R_ CO, 90% presented by <cit.>, who derive R_ CO, 90% from where the surface density reaches a critical value Σ_ crit = ξ_ CO^-1 2 m_H N̂_ CO. The line shown here is for their adopted best values, ξ_ CO=10^-6 and N̂_ CO = 10^16 cm^-2. Around a disk mass of ≈ 10^-2 - 10^-1 M_⊙ this R_ CO, 90% agrees well with both the models and the analytical expression for R_ CO, 90% from this work. In their work <cit.> use a fiducial initial disk mass of 0.1 M_⊙ and evaluate the viscous evolution of R_ CO, 90% between 0.1 and 3 Myr. Given the fact that the mass of viscously evolving disks only decreases slowly over time (∝ t^-0.5 for γ=1) the disk masses covered in their work mostly lie in the ≈ 10^-2 - 10^-1 M_⊙ range where the models and the analytical expressions all agree.
§.§ The link between R_ CO, 90% and R_c
Up to this point we have computed the observed radius for models where R_c was given. Observationally, however, we are interested in solving the opposite problem: for a given R_ CO, 90% that was obtained from observations, what is the corresponding R_c?
To that end, having vetted Equation (<ref>) using our DALI models, we can now use it to study the relation between R_ CO, 90% and R_c in disks. Figure <ref> shows R_ CO, 90% as a function of R_c for four different disk masses using γ=1 and ξ=0.25.
The shape of the curve shows that there are two values of R_c that can be inferred from a measurement of R_ CO, 90%.
Figure <ref> is a visualization of this, showing a set of example gas surface densities that all have the same total disk mass but a different R_c. Two profiles intersect with N_ gas(R_ CO, 90%) at R_ CO, 90%: one with R_c = 20 au and one with R_c = 1000 au. The first R_c is much smaller than R_ CO, 90%, meaning that R_ CO, 90% lies in the exponential taper, while for the other, which is larger than R_ CO, 90%, R_ CO, 90% lies in the powerlaw part of the surface density.
We should note however that while the "powerlaw" R_c is a mathematical solution, it is also an extrapolation of Eq. <ref> beyond the domain where it was tested. None of the DALI models examined in this work have R_c > R_ CO, 90% and it is entirely possible that disks with such a disk structure, likely those with a very low disk mass, do not follow the N_ gas(R_ CO, 90%)-M_ disk correlation on which Eq (<ref>) is built.
Interestingly, the curves in Figure <ref> also imply that for a given disk mass there is a maximum observed disk size, where R_ CO, 90% is equal to R_c. Increasing R_c beyond this point decreases R_ CO, 90% as a large fraction of the disk mass (≳ 50 %, see the top panel of Figure <ref>) now exists as low surface density material below the CO photodissociation threshold. A demonstration of this effect can be seen in the evolution of R_ CO, 90% for a low mass viscously evolving disk. As can be seen in <cit.> (e.g. their Figure 3), the R_ CO, 90% of a low mass, high viscosity disk first increases with time until the rapid viscous expansion lowers the surface density to the point where the CO photo-dissociation front starts moving inward, resulting in R_ CO, 90% now decreasing with time.
The existence of a maximum R_ CO, 90% for each disk mass also suggests that R_ CO, 90% places a lower limit on the disk mass. By taking the derivative of R_ CO, 90% with respect to R_c and setting it to zero, this minimum disk mass can be written as (for the derivation, see Appendix <ref>)
M_ disk ≳ 1.3×10^-5 (R_ CO, 90%/ 100 au)^3 M_⊙
It should be kept in mind however that this disk mass has been derived by assuming a surface density profile and fitting it through a single point (N_ gas at R_ CO, 90%). Its accuracy therefore depends on how well this surface density profile matches the actual surface density of protoplanetary disks.
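To make the two-branch behaviour concrete, the sketch below numerically inverts the r_co90 helper from the earlier example for a given observed size and disk mass, returning the "taper" and "powerlaw" R_c solutions, or nothing when the observed size exceeds the maximum allowed by the mass limit above. The bracketing bounds and function names are choices made for this illustration, not part of the original analysis.

```python
from scipy.optimize import brentq, minimize_scalar

def r_c_from_observed(R_obs_au, M_disk_msun, gam=1.0, xi=0.25):
    """Invert R_CO,90%(R_c) = R_obs; returns (taper R_c, powerlaw R_c) in au, or None."""
    # locate the peak of R_CO,90%(R_c) at fixed disk mass (search in log10 R_c)
    res = minimize_scalar(lambda lrc: -r_co90(10.0 ** lrc, M_disk_msun, gam, xi),
                          bounds=(-3.0, 5.0), method="bounded")
    rc_peak = 10.0 ** res.x
    if r_co90(rc_peak, M_disk_msun, gam, xi) < R_obs_au:
        return None                                  # disk mass below the minimum-mass limit
    f = lambda rc: r_co90(rc, M_disk_msun, gam, xi) - R_obs_au
    rc_taper = brentq(f, 1e-3, rc_peak)              # R_c < R_CO,90%: taper branch
    rc_power = brentq(f, rc_peak, 1e6)               # R_c > R_CO,90%: powerlaw branch
    return rc_taper, rc_power
```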
§ DISCUSSION
§.§ Extracting an estimate of R_c from observed R_ CO, 90%.
In the previous Section we showed that Equation (<ref>) provides the link between R_ CO, 90% and R_c based on M_ disk. Leveraging this equation we can derive R_c from the observed disk sizes that have now been measured from ^12CO emission for a large number of disks distributed over several star forming regions[Note that for the DSHARP sample we limit ourselves to the sources without severe cloud contamination, see <cit.> for more details.] (e.g. , see Table <ref> and Figure <ref>). Before we continue there are two things that should be kept in mind.
The observations from which these sizes are measured are shallow, which means that the uncertainties on most R_ CO, 90% are large, up to 30 % (see ). Another good example of this is given by the observations of disks in Upper Sco, where <cit.> detected ^12CO 3-2 in 23 of the 51 continuum detected sources, but from fitting the CO visibilities was only able to provide well constrained gas disk sizes (i.e. statistically inconsistent with 0) for 7 disks in the sample. So when deriving R_c we have to take the uncertainties on R_ CO, 90% into account.
Inverting Eq. (<ref>) also requires the disk gas mass, which is a difficult quantity to measure. Gas masses derived from CO isotopologue emission are found to be low (≲ 1 M_ jup, see e.g. ). However, there are large uncertainties on the CO abundance in disks (e.g. ). We will therefore make the assumption that all disks have a gas-to-dust mass ratio of 100. For the disks where the gas mass is measured using HD the gas-to-dust mass ratio seems approximately 100, although this is only for a few disks in a very biased sample.
New observations from the ALMA survey of Gas Evolution in Protoplanetary disks (AGE-PRO) will allow us to overcome this hurdle by measuring accurate gas masses for 20 disks in Lupus and Upper Sco, observationally constraining their CO abundance (see for details).
We will discuss the assumption of a single gas-to-dust mass ratio later in this section.
The details for our approach of obtaining R_c from R_ CO, 90% can be found in Appendix <ref>. Before continuing to the R_c-distributions of our various samples, let us first examine the R_c computed for five well known disks that have been previously studied in detail using thermochemical models that reproduce, among a number of other observables, the observed extent of CO and its isotopologues: TW Hya, DM Tau, IM Lup, AS 209 and GM Aur <cit.>. For three of the five disks, DM Tau, IM Lup and TW Hya the simple estimate in Table <ref> roughly agrees with the R_c in the more detailed studies. Not so for GM Aur and AS 209 however, which have estimated R_c that are much smaller than the literature values. For AS 209, the difference in R_c can be traced back to the fact that the disk mass used here (i.e. 100× the dust mass) is ∼10× larger than the one derived by <cit.>. For GM Aur it is harder to identify a similar cause.
It should be noted though that fitting the observed CO extent was not the primary goal of the previous studies discussed here. These detailed models reproduce the observations for the given R_c, but due to the complexity of the fitting it is hard to determine how unique these values of R_c are.
To examine and compare the distributions of R_c in different star-forming regions, we sum up the distributions of R_c for individual sources in each region and normalize the resulting distribution. Figure <ref> shows the normalized distribution of R_c for Lupus, Upper Sco, Taurus and the DSHARP sample. Lupus and Taurus have similar median R_c, 19.8^+12.8_-9 au and 20.9^+54.4_-9.6 au for the two regions respectively, while the DSHARP sample has a slightly larger median R_c = 26.1^+12.1_-9.7 au. Here the uncertainties denote the 25% and 75% quantile of the distribution. The clear outlier is Upper Sco with a median R_c = 4.9^+4.4_-3.2 au. This is a surprising find given the age difference between Lupus/Taurus (∼1-3 Myr, e.g. ) and Upper Sco (∼5-11 Myr, e.g. ). Figure <ref> thus shows a decrease in R_c with time, which does not match with predictions from either of the predominant theories of disk evolution. Viscously evolving disks are expected to grow over time, with R_c increasing with age. Conversely, disks evolving under the effect of magneto-hydrodynamical disk winds are expected to have an R_c that is constant with time. Even a combination of viscous and MHD wind-driven evolution would be hard pressed to explain the decrease of R_c, given the inability of both components to explain the observed decrease in R_c. A potential cause for the systematically smaller R_c in Upper Sco is the environment in which these disks find themselves, specifically their proximity to the nearby Sco-Cen OB association. Ultraviolet radiation from these O- and B-stars could have truncated the disks (e.g. ), resulting in a different evolutionary path compared to the disks in the more quiescent Lupus and Taurus star-forming regions. Note that in the case of truncated disks the R_c values derived here for Upper Sco should be viewed with caution, as they are derived under the assumption of a tapered powerlaw surface density profile, an assumption which is no longer valid in this case. We reserve a more comprehensive analysis of the effect of external photo-evaporation on R_c in Upper Sco for a future work.
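The distribution-stacking step described at the start of this paragraph is simple to reproduce; a minimal sketch is given below, where the per-source inputs are assumed to be Monte Carlo draws of R_c for each disk, and the binning and function name are choices made for this example.

```python
import numpy as np

def stack_rc_distribution(rc_samples_per_source, bins=np.logspace(-1, 3, 41)):
    """Sum the per-source R_c distributions of one region and normalize the result."""
    stacked = np.zeros(len(bins) - 1)
    for samples in rc_samples_per_source:                 # one array of R_c draws per disk
        hist, _ = np.histogram(samples, bins=bins, density=True)
        stacked += hist
    norm = np.sum(stacked * np.diff(bins))                # normalize to unit area
    return stacked / norm, bins
```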
§.§ Caveats and limitations
When comparing the median R_c of different regions in Section <ref> there are several factors that we should keep in mind.
The first is that none of these samples are complete. Due to the limited sensitivity of the observations the faintest and most compact sources are likely not detected and thus not included in the sample. The inclusion of these sources would decrease the median R_c if they are compact, but without deep observations we cannot rule out the existence of large, low surface brightness disks that would increase the median R_c.
Similarly, the binarity of the samples should be considered. Binaries can truncate the disk and, more generally, disks in multiple systems evolve differently than those around single stars (see, e.g. ). Indeed, there is some suggestion that Upper Sco has a higher binary fraction than Lupus (e.g. ). However, this should not be taken at face value, as our samples are not complete and the binarity surveys are not homogeneous (see appendix A of for an extensive discussion on this). A homogeneous study of disk multiplicity is needed to conclusively show its effect on disk sizes.
There is also a difference in methodology that needs to be considered. The R_CO,90% in Lupus, Taurus and DSHARP were all measured from the integrated intensity map of the CO emission. As mentioned above, the gas disk sizes of Upper Sco are instead measured from a CO intensity profile that was fitted to the visibilities (see ). It is possible that this introduces a systematic effect that results in lower values of R_CO,90% in Upper Sco. These observed sizes are then compared to the noiseless, high resolution synthetic CO observations of our models, which are most akin to the DSHARP observations (see, e.g., Section 3.1.2 in for a detailed discussion of how higher resolution and/or sensitivity affects the measurement of R_CO,90%).
It is also worth pointing out that the Upper Sco gas disk sizes are measured from the J=3-2 line rather than the J=2-1 line used for the other regions, but models show that this has only a small (≲ 10%) effect on R_CO,90% (e.g. ).
The forthcoming AGE-PRO observations will test this possibility by consistently measuring gas disk sizes for a carefully selected sample of disks in Lupus and Upper Sco.
The assumption of a single gas-to-dust mass ratio for all sources irrespective of their age is also likely to be incorrect. Dust evolution models show that the gas-to-dust ratio increases with age as more of the dust mass is converted into larger bodies that either drift inward and are accreted onto the star (e.g. ) or form planetesimals that do not emit at millimeter wavelengths and are thus unaccounted for in our dust masses (e.g. ). This would, however, only increase the difference between Upper Sco and the younger regions, as explaining the same R_CO,90% with a higher-mass disk requires a smaller R_c. Using a lower gas-to-dust mass ratio for Upper Sco would move its median R_c closer to the values of Lupus and Taurus. However, Figure <ref> shows that the effect of changing the disk mass is small. To produce an R_CO,90% of, for example, 60 au requires an R_c of ≈5 au if the disk has a mass of 0.1 M_⊙, which increases to ≈10 au for a disk that is three orders of magnitude less massive (10^-4 M_⊙).
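To make this point concrete, the short sketch below numerically inverts the analytical prescription of Appendix <ref> (assuming γ=1 and ξ=0, and the coefficients 4.9×10^7 and 0.66 quoted there) to find the R_c that reproduces a given R_CO,90% for different disk masses. It is a minimal illustration rather than the fitting code used in this work.

```python
# Minimal sketch: invert R_CO,90%(R_c, M_disk) = R_c * W(y), with
# y = 4.9e7 (M_disk/Msun)^0.66 (R_c/au)^-2 (gamma = 1, xi = 0 branch),
# to find the R_c needed to reproduce a given observed gas disk size.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def r_co90(r_c, m_disk):
    """Observed gas disk size [au] for r_c [au] and m_disk [Msun]."""
    y = 4.9e7 * m_disk**0.66 / r_c**2
    return r_c * lambertw(y).real

def invert_r_c(r_obs, m_disk):
    """R_c [au] on the R_c << R_CO,90% branch that gives R_CO,90% = r_obs."""
    return brentq(lambda r_c: r_co90(r_c, m_disk) - r_obs, 1e-3, r_obs)

for m in (0.1, 1e-4):  # disk masses in Msun
    print(f"M_disk = {m:g} Msun -> R_c = {invert_r_c(60.0, m):.1f} au")
```

For these two masses the sketch returns R_c of roughly 5–6 au and 11–12 au, in line with the values quoted above.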
Another source of uncertainty is the global CO abundance in the disk. The processes that have been proposed for removing CO from the gas in disks (beyond CO freeze-out and photo-dissociation) are expected to operate on Myr timescales (e.g. ), which is corroborated by observations (e.g. ). In addition to differences between individual sources we can thus expect a trend of lower CO abundances with age. Observations of N_2H^+ of two disks in Upper Sco suggest that this is indeed the case (see ). If the overall CO abundance in the disk is lower, the gas column at R_CO,90% needs to be larger to build up a CO column capable of self-shielding against photo-dissociation. Given that the total disk mass is fixed, the derived R_c will have to increase to explain the same R_CO,90% with a lower CO abundance.
Quantifying the effect on R_c depends on the exact physical and/or chemical processes responsible for removing the CO from the gas, but also, maybe even more importantly, on how well-mixed the disk is vertically. If vertical mixing is inefficient, CO could be removed from the midplane while the upper layers of the disk from which the ^12CO emits remain unaffected. In this case, R_CO,90% would not be significantly affected by a decrease in CO abundance (see ).
Conversely, if the disk is well-mixed vertically the CO abundance in the emitting layer will also be lower than currently assumed. <cit.> showed that this, coupled with the relatively poor brightness sensitivity of the shallow ALMA disk surveys, can significantly reduce the observed value of R_CO,90%. Accounting for this fact would bring the characteristic radii of Upper Sco closer to those of disks in Lupus and Taurus. Recent work by <cit.> arrived at a similar conclusion.
We should also note that the CO depletion factor is seen to vary with radius <cit.>, which complicates extrapolating the CO abundance of the bulk of the gas to the region in the outer disk that is most relevant for setting R_CO,90%.
The shape of the surface density in the outer disk is an important part of the analytical relation between R_CO,90%, R_c and M_disk presented in this work (see also Appendix <ref>). Most notably, our models assume that the surface density follows an exponential taper. While this assumption is well grounded in theory, observational constraints on the surface density in the outer part of disks are sparse (e.g. ). Figure <ref> gives some indication of how different the surface density must be to nullify the N_gas(R_CO,90%)-M_disk relation. If γ is decreased, in which case the exponential taper becomes steeper and the surface density approaches a truncated powerlaw, the N_gas(R_CO,90%)-M_disk relation is retained. This suggests that the relation should hold for disks where the surface density drops off steeply, whereas for disks with a shallow surface density profile in the outer disk the relation will no longer hold and the analytical expression for R_CO,90% should not be used. However, we should remain cautious when extrapolating from our “γ-models”. By construction, a steeper exponential taper (i.e. small γ) corresponds to a flatter powerlaw at small radii, and vice versa for large γ. In the end it is always prudent to use tailored models for disks with noticeably different, or expected to be different, surface density profiles rather than a generalized model.
In a similar vein, substructures in the gas and radial variations in the gas-to-dust mass ratio could affect R_CO,90%. However, as R_CO,90% is measured from the optically thick ^12CO emission, these structures would need to meaningfully change the temperature structure in the ^12CO emitting layer to affect the ^12CO emission profile and therefore R_CO,90%. <cit.> showed that high resolution ^12CO observations show comparatively little substructure in contrast to more optically thin CO isotopologues and the dust. However, if a substructure near R_CO,90% were to locally change the temperature structure and thereby the location of the CO snow surface, it would likely break the N_gas(R_CO,90%)-M_disk correlation on which the analytical equation for R_CO,90% is built. That being said, most substructures are found much closer to the star, far away from R_CO,90%, meaning their effect on R_CO,90% is likely minimal.
Similarly, the temperature structure of the disk and its vertical structure, or more precisely how much of the CO column is frozen out, form a key link in the correlation between N_gas(R_CO,90%) and M_disk, as demonstrated by the tight correlation between z_freeze(R_CO,90%) and M_disk. We have explored the parameters that predominantly affect the temperature structure in our models. From the observational side, several recent studies have used high resolution ALMA observations of CO to map the radial and vertical temperature structures of disks (e.g. ). Temperature structures computed with models similar to the ones in this work have been found to match these observational constraints (e.g. ). However, the number of disks with good observational constraints on their 2D temperature structures is still limited and, due to the requirement of deep, high resolution observations, biased towards large disks. There is therefore still the possibility that our models do not accurately describe the temperature structure of all disks, in which case it is very likely that the analytical expression for R_CO,90% presented in this work will no longer hold.
§ CONCLUSIONS
In this work we have presented an empirical relation between the gas column density measured at the observed gas outer radius, N_gas(R_CO,90%), and the mass of the disk, M_disk. Using this relation we provided simple prescriptions for converting R_c to R_CO,90% and R_CO,90% to R_c (Eq. <ref>).
Our main take-away points are:
* Using thermochemical models, we found an empirical correlation between the gas column density at the observed gas disk size and the mass of the disk: N_gas(R_CO,90%) ≈ 3.7×10^21 (M_disk/M_⊙)^0.34 cm^-2. Importantly, this correlation does not significantly depend on other disk parameters.
* Following <cit.> we used this empirical correlation to provide an analytical prescription of R_CO,90% that only depends on M_disk and R_c. This analytical prescription is able to reproduce the R_CO,90% of thermochemical models over a large range of M_disk and R_c.
* Exploring the analytical prescription of R_CO,90% reveals a maximum R_CO,90% for a given M_disk that is independent of R_c (Eq. <ref>). It also shows that for a given M_disk any R_CO,90% below this maximum can be obtained with two different values of R_c, one much smaller and one much larger than the R_c at which the maximum is reached.
* Using the observed R_CO,90% and M_gas = 100×M_dust we derived R_c for four samples of disks in Lupus, Upper Sco, Taurus and DSHARP. We find that Lupus and Taurus have similar median R_c, 19.8 and 20.9 au respectively, and the DSHARP disks are slightly larger (median R_c = 26.1 au). Surprisingly, the disks in Upper Sco are significantly smaller, with a median R_c = 4.9 au. This decrease in R_c for the older Upper Sco region goes against the predictions of both viscous and wind-driven evolution.
We thank the referee for their valuable feedback, which helped to improve the quality of this manuscript.
L.T. and K. Z. acknowledge the support of the NSF AAG grant #2205617.
B.T. acknowledges the support by the Programme National “Physique et Chimie du Milieu Interstellaire” (PCMI) of CNRS/INSU with INC/INP and co-funded by CNES.
GR acknowledges support from the Netherlands Organisation for Scientific Research (NWO, program number 016.Veni.192.233), from an STFC Ernest Rutherford Fellowship (grant number ST/T003855/1) and is funded by the European Union (ERC DiscEvol, project number 101039651). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
All figures were generated with the -based package <cit.>. This research made use of Astropy,[http://www.astropy.org] a community-developed core Python package for Astronomy <cit.> and SCIPY <cit.>.
§ A TOY MODEL FOR ANALYTICALLY DERIVING THE OBSERVED CO OUTER RADIUS
Section <ref> showed a clear correlation between the gas column density measured at R_CO,90%, the radius that encloses 90% of the ^12CO J=2-1 emission, and the total mass of the disk M_disk. It also showed a similarly tight correlation between the height of the CO snow surface as measured at R_CO,90% and M_disk, giving a hint as to the origin of the first correlation. Here we will set up a simple toy model of the CO abundance in protoplanetary disks, link it to the resulting CO emission, and show how it can produce a correlation between the column density at the outer radius and the disk mass.
§.§ Concept and assumptions
Starting from the observations, it is common to use ^12CO rotational emission to measure the size of protoplanetary disks. Low J lines of CO become optically thick already at small column densities, making CO emission bright and easy to detect out to large disk radii. The transition from optically thick to optically thin CO emission thus occurs in the outer part of the disk, where the surface density likely declines steeply with radius. This is indeed the case if the surface density follows an exponential taper, but one should keep in mind that observational constraints on the shape of the surface density in the outer disk are very limited (see, e.g. ). Given that the density is low here, we can expect only a small contribution of the optically thin CO emission to the total CO flux. In other words, we expect that most, if not all, of the CO emission is optically thick.
At the same time, we know that CO will become photo-dissociated in the outer disk. The exact CO column density required to self-shield against photo-dissociation depends somewhat on the molecular hydrogen column density and temperature, but in general the threshold is taken to be a CO column density of a few times 10^15 cm^-2 (see, e.g. ).
It is common to assume that the radius at which the CO line emission becomes optically thin coincides with the radius at which CO stops being able to self-shield, i.e., that the CO emission disappears beyond this point (e.g. ). In this case we can give a simple description of the CO radial emission profile:
I_CO(R) = T_0 (R/R_0)^-β   if N_CO(R) ≥ a×10^15 cm^-2,
I_CO(R) = 0                otherwise,
where T_0 (R/R_0)^-β describes the temperature profile of the CO emitting layer as a simple powerlaw and a is a constant of order unity.
Under these simplifying assumptions, the radius that encloses 100% of the CO flux would be the radius where we reach N_CO(R) ≈ a×10^15 cm^-2. Note that the definition commonly used in observations to measure gas disk sizes, i.e. R_CO,90%, the radius that encloses 90% of the flux, is very closely related to the 100% radius (see, e.g., appendix F in ):
R_CO,90% = 0.9^{1/(2-β)} R_CO,100% ≈ 0.93× R_CO,100%   for β = 0.5.
However, as we will discuss further on in this section, this small difference has a meaningful impact on the N_gas(R_CO,90%)-M_disk relation discussed in the main body of this work. For the rest of the derivation we will therefore use R_CO,100% = R_{τ_CO=1} ≡ R_τ rather than R_CO,90%.
The relation between the CO column density and the H_2 column density depends on the column averaged CO abundance. The zeroth order assumption would be that the CO abundance is a constant 10^-4, where all of the available carbon is locked up in the gas. However, this ignores the fact that the disk becomes colder towards the midplane, causing the CO to freeze out and thus lowering the local CO abundance. Similarly, photo-dissociation will decrease the CO abundance in the uppermost layer of the disk. These two processes confine CO to a so-called warm molecular layer, first introduced as a concept by . As a result, the column averaged CO abundance will be lower than 10^-4.
Given that most of the mass in the column is concentrated towards the midplane we can, to first order, ignore the decrease in CO abundance due to photo-dissociation and write the vertical CO abundance profile as a simple step function
x_CO(R,z) = 0            if z ≤ z_freeze(R),
x_CO(R,z) = x_CO,peak    if z > z_freeze(R),
where z_ freeze(R) describes the height of the CO ice-surface, which is approximately equivalent to T_ gas(R,z_ freeze) = 20 K and we assume that x_ CO, peak = 10^-4.
In principle obtaining z_freeze(R) requires computing the 2D temperature structure of the disk. This can be done by assuming that T_gas ≈ T_dust, a reasonable assumption for the area of interest here, and computing T_dust(r,z) by solving the radiative transfer equation (e.g., ). Alternatively, the temperature structure can be measured from optically thick emission lines (e.g., ). Here we will keep using z_freeze(R) until later in the derivation.
The vertical density distribution resulting from isothermal hydrostatic equilibrium is given by a Gaussian (e.g. )
ρ_gas(r,z) = Σ(r)/(√(2π) H(r)) exp[-z^2/(2 H(r)^2)],
where H(r) is the height of the disk.
To obtain the CO column density of our simple two-part CO abundance model (Eq. (<ref>)) we need to find the column density above z_ freeze
N_gas(r) = Σ_gas(r)/(μ m_H),
N_{>z_freeze}(r) = ∫_{z_freeze}^∞ Σ_gas(r)/(μ m_H √(2π) H(r)) exp[-z^2/(2H(r)^2)] dz
= N_gas(r) × 1/(√(2π) H(r)) ∫_{z_freeze}^∞ exp[-z^2/(2H(r)^2)] dz
= N_gas(r)/√π ∫_{z_freeze/(√2 H)}^∞ exp[-t^2] dt   (substituting t = z/(√2 H))
= N_gas(r)/2 [1 - erf(z_freeze(r)/(√2 H(r)))].
Here N_ gas is the gas column density, μ is the mean molecular weight, m_H is the hydrogen atomic mass and erf is the error function.
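As a quick numerical cross-check of this column fraction, the following snippet compares the direct integral of a Gaussian vertical profile above an assumed freeze-out height with the erf expression above; the value of z_freeze/H is an arbitrary illustrative choice.

```python
# Check: the column above z_freeze of a Gaussian vertical profile equals
# (N_gas/2) * [1 - erf(z_freeze / (sqrt(2) H))].
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

H = 10.0             # scale height [au], arbitrary illustrative value
z_freeze = 1.5 * H   # assumed freeze-out height

# Gaussian vertical profile normalised to a total column of 1
profile = lambda z: np.exp(-0.5 * (z / H) ** 2) / (np.sqrt(2 * np.pi) * H)

numerical = quad(profile, z_freeze, np.inf)[0]
analytic = 0.5 * (1.0 - erf(z_freeze / (np.sqrt(2.0) * H)))
print(numerical, analytic)   # both ~0.0668
```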
This allows us to write out the CO column density above z_ freeze (see eq. (<ref>))
N_CO = x_CO N_{z > z_freeze}
= (x_CO,peak/2) N_gas(r) [1 - erf(z_freeze(r)/(√2 H(r)))].
Using the gas surface density instead of the gas column density N_ gas(r), Equation (<ref>) becomes
2 N_CO/x_CO,peak = (Σ_gas(r)/(μ m_H)) [1 - erf(z_freeze(r)/(√2 H(r)))].
We recall that in our toy model R_τ coincides with the radius where the CO column density equals the critical CO column density needed for CO self-shielding (N_CO(R_τ) = N_CO,crit). We can then derive an expression for R_τ from the previous equations as:
2 μ m_H N_CO,crit/x_CO,peak ≡ 2 Σ_CO,crit = Σ_c [1 - erf(z_freeze(R_τ)/(√2 H(R_τ)))] (R_τ/R_c)^{-γ+ξ} exp[-(R_τ/R_c)^{2-γ}],
which can be rewritten as
2 Σ_CO,crit/Σ_c ≡ Φ̂ = [1 - erf(z_freeze(R_τ)/(√2 H(R_τ)))] (R_τ/R_c)^{-γ+ξ} exp[-(R_τ/R_c)^{2-γ}].
A solution for a similar equation without the CO freeze-out term in the square brackets was recently presented by <cit.>.
Here we follow their work by introducing the shorthands Σ_ CO,crit and Φ̂[For direct comparison with <cit.>: Σ_ crit,toci+2022 = Σ_ CO,crit/[..] and Φ_ toci+2022 = 0.5Φ̂/[..], where [..] is the term in square brackets in Equation (<ref>).].
With the introduction of the CO freeze-out term, Equation <ref> can no longer be solved analytically. However, if the vertical density and temperature structures are known and prescriptions for H(R_τ) and z_freeze(R_τ), or more accurately z_freeze(R_τ)/H(R_τ), can be provided, the equation can be solved numerically.
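As an illustration of such a numerical solution, the sketch below solves the equation above with a root finder. The constant value of z_freeze/H, the self-similar normalization Σ_c = M_disk(2-γ)/(2π R_c^2) and the input mass and radius are illustrative assumptions rather than values from the models of this work; only x_CO,peak and N_CO,crit follow the numbers quoted in the next paragraph.

```python
# Minimal sketch: solve Phi_hat = [1 - erf(z_f/(sqrt(2)H))] (R/R_c)^(-gamma+xi)
#                 * exp[-(R/R_c)^(2-gamma)] for R = R_tau with a root finder.
# The constant z_freeze/H and the Sigma_c normalization are illustrative.
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

mu, m_H = 2.3, 1.67e-24            # mean molecular weight, hydrogen mass [g]
Msun, au = 1.989e33, 1.496e13      # cgs conversions
x_co_peak, N_co_crit = 3e-5, 3e15  # CO abundance and self-shielding column [cm^-2]

def r_tau(m_disk, r_c, gamma=1.0, xi=0.0, zf_over_h=2.5):
    """Radius [au] where the CO column drops to the self-shielding value."""
    sigma_c = m_disk * Msun * (2 - gamma) / (2 * np.pi * (r_c * au) ** 2)
    phi_hat = 2 * mu * m_H * N_co_crit / (x_co_peak * sigma_c)
    freeze = 1.0 - erf(zf_over_h / np.sqrt(2.0))

    def f(r):
        x = r / r_c
        return freeze * x ** (-gamma + xi) * np.exp(-x ** (2 - gamma)) - phi_hat

    return brentq(f, 1.01 * r_c, 100.0 * r_c)  # root in the outer disk

print(r_tau(1e-2, 30.0))  # ~70 au for these illustrative inputs
```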
As a proof-of-concept we obtain z_freeze(R_τ) from our models, rather than incurring the increased complexity that a full fit of the temperature structure would bring, and combine it with informed values of x_CO,peak = 3×10^-5 and N_CO = 3×10^15 cm^-2 to calculate N_gas(R_τ) using Eq. (<ref>). The left panel of Figure <ref> shows that these analytical N_gas reproduce the values from the models, including their dependence on disk mass. However, the figure also shows that the relation between N_gas(R_τ) and M_disk is not a powerlaw, as it is for N_gas(R_CO,90%), and it is also less tight. The underlying cause for this is the fact that the relation between R_CO,90% and R_τ also depends on disk mass. The right panel of Figure <ref> shows the ratio R_τ/R_CO,90%, which decreases towards lower disk mass. This mass dependence might appear small, but one should bear in mind that the surface density at these radii follows an exponential; a small difference in radius will correspond to a much larger difference in gas column density. This effect introduces a further mass dependence, as more massive disks have a larger R_CO,90% (and R_τ) that lies further into the exponential taper of the surface density profile, where it is steeper, meaning that differences between R_CO,90% and R_τ result in larger differences between N_gas(R_CO,90%) and N_gas(R_τ) for more massive disks. This is a complex process to model, prompting us to use the empirical correlation presented in Section <ref>.
§ EFFECT OF DISK AND STELLAR PARAMETERS ON THE HEIGHT OF THE CO SNOW SURFACE
In Figure <ref>, from left to right and top to bottom, the examined parameters are the stellar luminosity (L_*), the scale height at R_c (h_c), the external interstellar radiation field (ISRF), the characteristic size (R_c), the slope of the surface density (γ), the disk flaring angle (ψ), the scale height reduction of the large grains (χ) and the fraction of large grains (f_large). The gray points in each panel show the fiducial models shown in Figure <ref>. The black dashed line shows N_gas(R_CO,90%) ∝ M_disk^0.34.
§ MEASURING THE OBSERVED DISK RADIUS USING 68% INSTEAD OF 90% OF THE CO FLUX
Throughout this work we have used R_CO,90% as an observational measure of the disk size, but tests show that a similar result, at least qualitatively, can also be obtained if we instead use the radius that encloses 68% of the CO 2-1 flux (R_CO,68%). Recreating Figure <ref> but now for disk properties measured at R_CO,68%, we find that there exists a similar powerlaw relation between N_gas(R_CO,68%) and M_disk as there did for R_CO,90%. While there is much less of a direct link between R_CO,68% and the radius where CO becomes photo-dissociated, a fact that can be gleaned from the wide range of N_CO at R_CO,68%, we find a tight relation between R_CO,90% and R_CO,68% in our models, which allows us to also relate R_CO,68% to the radius where CO becomes photo-dissociated. The tight relation between R_CO,90% and R_CO,68% reflects the overall similarity in CO emission profiles between our models, suggesting that our findings for R_CO,90% and R_CO,68% likely hold for most fraction-of-CO-flux radii.
If we fit a powerlaw to N_ gas(R_ CO, 68%) and the disk mass we obtain the following critical gas column density
N_ gas(R_ CO, 68%) ≈ 4.1×10^22(M_ gas/M_⊙)^0.5 cm^-2.
Using this critical column density instead of the one for R_CO,90% changes the square bracket term in Equation (<ref>) to
[..] = Σ_c/(μ m_H N_gas(R_CO,68%)) = 4.5×10^6 (M_d/M_⊙)^0.5 (R_c/au)^-2,
which gives us the following analytical prescription for R_ CO, 68%
R_CO,68% = R_c ( (γ-ξ)/(2-γ) W( (2-γ)/(γ-ξ) [4.5×10^6 (M_d/M_⊙)^0.5 (R_c/au)^-2]^{(2-γ)/(γ-ξ)} ) )^{1/(2-γ)}.
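For reference, a direct evaluation of this prescription with the principal branch of the Lambert W function might look as follows; the example mass and radius are arbitrary.

```python
# Evaluate the R_CO,68% prescription above for general (gamma, xi), written as
# R_c * ((1/a) W(a y^a))^(1/(2-gamma)) with a = (2-gamma)/(gamma-xi) and
# y = 4.5e6 (M_d/Msun)^0.5 (R_c/au)^-2.
import numpy as np
from scipy.special import lambertw

def r_co68(m_disk, r_c, gamma=1.0, xi=0.0):
    """R_CO,68% [au] for a disk mass m_disk [Msun] and characteristic radius r_c [au]."""
    a = (2.0 - gamma) / (gamma - xi)
    y = 4.5e6 * m_disk**0.5 / r_c**2
    w = lambertw(a * y**a).real
    return r_c * (w / a) ** (1.0 / (2.0 - gamma))

print(r_co68(1e-2, 30.0))  # ~140 au for this arbitrary example
```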
§ DERIVING A MINIMUM DISK MASS BASED ON
In Section <ref> we showed that there is a minimum disk mass associated with each . Here we derive this mass analytically.
We begin with the analytical formula for R_CO,90% (Eq. (<ref>)),
R_CO,90% = R_c ( (γ-ξ)/(2-γ) W( (2-γ)/(γ-ξ) [4.9×10^7 (M_d/M_⊙)^0.66 (R_c/au)^-2]^{(2-γ)/(γ-ξ)} ) )^{1/(2-γ)}
= R_c [ (1/a) W(x) ]^{1/(2-γ)}.
Here we have defined the shorthands x = a y^a, y = 4.9×10^7 (M_d/M_⊙)^0.66 (R_c/au)^-2 and a = (2-γ)/(γ-ξ). Taking the derivative of R_CO,90% with respect to R_c and setting it to zero, we obtain
∂R_CO,90%/∂R_c = [(1/a)W(x)]^{1/(2-γ)} + R_c (∂/∂R_c)[(1/a)W(x)]^{1/(2-γ)}
= [(1/a)W(x)]^{1/(2-γ)} + (R_c/(2-γ)) [(1/a)W(x)]^{(γ-1)/(2-γ)} (∂W(x)/∂x)(∂x/∂y)(∂y/∂R_c)
= [(1/a)W(x)]^{1/(2-γ)} - (2ax/(2-γ)) [(1/a)W(x)]^{(γ-1)/(2-γ)} ∂W(x)/∂x
= [(1/a)W(x)]^{1/(2-γ)} - (2ax/(2-γ)) [(1/a)W(x)]^{(γ-1)/(2-γ)} W(x)/(x(1+W(x)))
0 = 1 - (2a/(2-γ)) [(1/a)W(x)]^{-1} W(x)/(1+W(x))
(2-γ)/(2a^2) = 1/(1+W(x))
W(x) = 2a^2/(2-γ) - 1.
The inverse of the Lambert-W function is given by W^-1(y) = y e^y. With this and our shorthands we can write out the maximum R_CO,90%, and its corresponding R_c, for a given mass M_disk:
R_c,max = √(4.9×10^7 M_disk^0.66) [ (1/a) W^{-1}(2a^2/(2-γ) - 1) ]^{-1/(2a)},
R_CO,90%,max = R_c,max [ (1/a) (2a^2/(2-γ) - 1) ]^{1/(2-γ)}.
For γ=1,ξ=0 these equations reduce to
R_ CO, 90%,max = R_ c, max = √(4.9·10^7 M_ disk^0.66/e)
Writing the disk mass in terms of R_CO,90%,max then gives Eq. (<ref>):
M_disk ≥ [e R_CO,90%,max^2/(4.9×10^7)]^{1/0.66} ≳ 1.295×10^-5 (R_CO,90%,max/100 au)^3 M_⊙.
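As a quick numerical cross-check of this bound (assuming γ=1, ξ=0):

```python
# A disk with R_CO,90% = 100 au requires a minimum disk mass of roughly 1e-5 Msun.
import numpy as np

def m_disk_min(r_co90_au):
    """Minimum disk mass [Msun] for an observed gas size r_co90_au [au]."""
    return (np.e * r_co90_au**2 / 4.9e7) ** (1.0 / 0.66)

print(m_disk_min(100.0))  # ~1.2e-5 Msun, close to the 1.295e-5 prefactor quoted above
                          # (the small difference reflects rounding of the fit coefficients)
```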
§ DERIVING FOR DISKS IN LUPUS, UPPER SCO, TAURUS AND DSHARP
Our approach for deriving R_c is as follows (shown in Figure <ref>). We collected a sample of disks with a measured R_CO,90%, or an upper limit on it, from the literature (see Table <ref>). For each source in this sample we first draw a random M_gas = 100×M_dust from the distribution of the observed M_dust and its uncertainties. We do the same for R_CO,90%, where upper limits on R_CO,90% are treated as a uniform distribution between 0 and the upper limit. For Upper Sco, R_CO,90% is calculated from the fitted CO intensity profile reported in Table 4 of <cit.>. Note that the uncertainties on the intensity profile are asymmetric, which, when propagated into the uncertainty on R_CO,90%, is represented by two half-Gaussians of different width (see Figure <ref>).
For each draw (M_gas, R_CO,90%)_i we calculate R_c by inverting Equation (<ref>), where we assume that R_CO,90% ≫ R_c (see Section <ref>). This procedure is repeated N=1000 times to properly sample the distribution of R_c for each source. Table <ref> lists the derived R_c and its uncertainties for each source.
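A minimal sketch of this Monte-Carlo procedure is given below; the dust mass, disk size and their uncertainties are placeholders rather than values from Table <ref>, and the inversion assumes γ=1, ξ=0 and the R_CO,90% ≫ R_c branch.

```python
# Minimal sketch of the Monte-Carlo inversion described above.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def invert_r_c(r_obs, m_gas):
    """R_c [au] on the R_c << R_CO,90% branch (gamma = 1, xi = 0)."""
    f = lambda r_c: r_c * lambertw(4.9e7 * m_gas**0.66 / r_c**2).real - r_obs
    return brentq(f, 1e-3, r_obs)

m_dust, m_dust_err = 1e-4, 2e-5   # placeholder dust mass and uncertainty [Msun]
r_obs, r_err = 150.0, 20.0        # placeholder R_CO,90% and uncertainty [au]

draws = []
for _ in range(1000):
    m_gas = 100.0 * rng.normal(m_dust, m_dust_err)   # gas-to-dust ratio of 100
    r90 = rng.normal(r_obs, r_err)
    if m_gas > 0 and r90 > 0:
        draws.append(invert_r_c(r90, m_gas))

print(np.percentile(draws, [25, 50, 75]))  # median R_c and quartiles [au]
```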
|
http://arxiv.org/abs/2307.05105v1 | 20230711082718 | Charge conservation in spin torque oscillators leads to a self-induced torque | [
"Pieter M. Gunnink",
"Tim Ludwig",
"Rembert A. Duine"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
[email protected]
Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Spin torque oscillators are conventionally described by the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. However, at the onset of oscillations, the predictions of the conventional LLGS equation differ qualitatively from experimental results and thus appear to be incomplete. In this work we show that taking charge conservation into account leads to a previously-overlooked self-induced torque, which modifies the LLGS equation. We show that the self-induced torque originates from the pumping current that a precessing magnetization drives through a magnetic tunnel junction. To illustrate the importance of the self-induced torque, we consider an in-plane magnetized nanopillar, where it gives clear qualitative corrections to the conventional LLGS description.
Charge conservation in spin torque oscillators leads to a self-induced torque
Rembert A. Duine
August 12, 2023
=============================================================================
§ INTRODUCTION
The conventional Slonczewski spin-transfer torque <cit.> provides the basis for the description of steady-state precessions in spin torque oscillators (STO), a versatile functional element of spintronics <cit.>. The corresponding Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation <cit.>, however, does not accurately predict the observed precession frequency <cit.>. A commonly employed solution to address the discrepancy between prediction and observation is to generalize the Gilbert damping by phenomenologically including nonlinear dissipation terms <cit.>. Micromagnetic simulations that go beyond the single-domain approximation indicate that inhomogeneities in the magnetization could also play an important role <cit.>.
In this work we show that the charge conservation in spin torque oscillators gives rise to a previously-overlooked self-induced torque, modifying the LLGS equation. This self-induced torque leads to qualitatively important corrections to the precession frequency and power and should thus be included in the description of the magnetization dynamics in spin torque oscillators.
We consider a magnetic double tunnel junction as shown in Fig. <ref>, where a metallic nanomagnet is tunnel-coupled to two leads, one of which is also a magnet but with a fixed magnetization. The magnetization of the nanomagnet is dynamic and its angular dynamics are described by the LLGS equation. We treat the magnetization as a single macrospin with constant magnitude and include the Gilbert damping enhancement <cit.> and a Slonczewski torque <cit.>.
Applying a voltage over the system drives a spin-polarized current into the nanomagnet, which exerts a torque on its magnetization that can drive it into a precession <cit.>.
Charge conservation requires that tunneling is the only way to change the charge of the nanomagnet and we therefore need to keep track of the number of electrons that tunnel into or out of the nanomagnet. This gives rise to a coupling between two degrees of freedom: (1) the nanomagnet's magnetization and (2) the nanomagnet's charge.
On the one hand, the magnetization dynamics affects the charge degree of freedom by pumping a charge current through the system, since the left tunnel junction is magnetic <cit.>. On the other hand, the charge degree of freedom affects the magnetization dynamics by altering the voltage drop across the magnetic tunnel junction, in turn altering the Slonczewski torque. As we will show, the emerging interplay gives rise to a self-induced torque in the magnetization dynamics. We emphasize that in this work we do not rederive any new results for the individual tunnel junctions; the self-induced torque emerges from charge conservation in the whole system. To demonstrate the relevance of the charge conservation in spin torque oscillators, we consider an in-plane magnetized nanopillar <cit.> and show that accounting for the self-induced torque leads to qualitatively different predictions.
The remainder of this work is organized as follows. We first describe the magnetization and charge dynamics and their coupling in Sec. <ref>. In Sec. <ref> we show that this gives rise to a self-induced torque in a steady-state situation, which we show to be qualitatively and quantitatively important in the description of an in-plane magnetized nanopillar in Sec. <ref>. We end with a conclusion and discussion in Sec. <ref>.
§ COUPLED DYNAMICS OF MAGNETIZATION AND CHARGE
The degrees of freedom of interest are the magnetization direction and the charge of the nanomagnet. For simplicity, we model the nanomagnet's magnetization by a single macrospin M, which is justified when the nanomagnet is small enough. Similarly, we model the nanomagnet's charge by a single number Q, which describes the total charge on the nanomagnet. The coupled dynamics of magnetization and charge are then described by the Landau-Lifshitz-Gilbert-Slonczewski equation (<ref>) together with the continuity equation (<ref>).
Assuming the magnetization's magnitude M to be constant, the magnetization dynamics is described in terms of its direction m = M / M, which can also be specified with spherical coordinates m = ( sinθcosϕ, sinθsinϕ, cosθ ), where we choose the left lead's fixed magnetization m_fix as the z-axis. The equation of motion for m is the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation <cit.>,
ṁ = - γ m × H_eff + α(θ) m×ṁ + γ/M 𝒱 m × I_s × m .
Here, the first term describes the precession of the magnetization with a frequency ω around the effective magnetic field H_eff, where γ is the gyromagnetic ratio. The second term, known as Gilbert damping <cit.>, describes the relaxation of the magnetization towards an energetic minimum. The Gilbert-damping coefficient, α(θ) = α_0 + (ħ^2 γ /4 e^2 M 𝒱) [g̃_l(θ) + g̃_r], contains two terms corresponding to two sources of Gilbert damping: the first term, α_0, accounts for internal Gilbert damping <cit.>; the second term accounts for Gilbert-damping enhancement due to spin pumping into the leads <cit.>, where 𝒱 is the nanomagnet's volume and g̃_l(θ) and g̃_r are the spin-flip conductances of the left and right junction respectively. The third term, known as Slonczewski torque, describes the torque arising due to the spin transfer from the electron system to the magnetization <cit.>.
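As a concrete illustration of the structure of the LLGS equation above, a minimal sketch of its right-hand side is given below. The Gilbert term is evaluated with ṁ → -γ m × H_eff, which is valid for α ≪ 1 (the same simplification used in the appendix on the numerical simulations); all parameter values are placeholders in reduced units.

```python
# Minimal sketch of the LLGS right-hand side in reduced units.
import numpy as np

gamma, M_s, V = 1.0, 1.0, 1.0   # gyromagnetic ratio, |M|, volume (placeholders)

def llgs_rhs(m, h_eff, i_s, alpha):
    """Time derivative of the magnetization direction m."""
    prec = -gamma * np.cross(m, h_eff)                        # precession
    damp = alpha * np.cross(m, prec)                          # Gilbert damping (m_dot ~ prec)
    stt = gamma / (M_s * V) * np.cross(m, np.cross(i_s, m))   # Slonczewski torque
    return prec + damp + stt
```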
We assume strong internal relaxation in the nanomagnet's electron system, which means that it relaxes to equilibrium on time scales much shorter than the inverse tunneling rates. Electrons that enter the nanomagnet through one tunnel junction will therefore equilibrate with the other electrons in the nanomagnet before they leave again through the other tunnel junction; for a more detailed discussion, see App. <ref>. In turn, both tunnel junctions are effectively independent.
The Slonczewski spin-transfer torque is then governed by
I_s = ħ/4e g_l^s V_l m_fix,
where g_l^s is the spin-flip conductance of the left junction, and V_l is the voltage drop across the left junction <cit.>.
The charge dynamics are governed by charge conservation. The only way to change the nanomagnet's charge is by tunneling of electrons between the nanomagnet and its leads. Consequently, the charge dynamics are described by the continuity equation
Q̇= I_c^l + I_c^r,
where I_c^r and I_c^l are the charge currents flowing from respectively the right and left lead to the nanomagnet. The charge current through the right junction is simply given by the Ohmic relation
I_c^r = g_r V_r,
where g_r is the conductance of the right junction and V_r is the voltage drop across the right junction [Strictly speaking, this is not an Ohmic current, because the power or energy loss in a tunnel junction is different from an Ohmic resistor.].
In contrast to the right junction, the charge current across the left junction is spin-polarized, and thus the charge current is given by
I_c^l = g_l(θ) V_l + I_c^p,
where g_l(θ) is the conductance of the left junction, V_l is the voltage drop across the left junction, and
I_c^p = - ħ/4e g_l^s m_fix· ( m ×ṁ)
is the pumping current contribution, which is the reciprocal of the Gilbert-damping enhancement, as it arises from the spin pumping into the leads combined with the spin filtering of the magnetic left lead <cit.>.
The vector form, I_c^p ∝ m_fix· ( m ×ṁ), holds for the specific geometry where the precession axis of the nanomagnet's magnetization is parallel to the fixed magnetization of the magnetic lead. However, we believe it to be valid in more general situations, since the cross product, m ×ṁ, is related to the spin pumping which leads to the Gilbert-damping enhancement. The projection onto m_fix enters due to the spin-filtering of the left lead, because of which the pumped spin current is accompanied by a pumped charge current. Therefore, in different geometries the same processes will occur, and thus the same vector form is to be expected. Finally we note that, since the left lead is magnetic, the conductance g_l(θ) also depends on the angle between the magnetizations of the nanomagnet and the left lead <cit.>.
As the Landau-Lifshitz-Gilbert-Slonczewski equation (<ref>) and the continuity equation (<ref>) are coupled to each other, an interplay emerges between the magnetization dynamics and the charge current flow: the magnetization dynamics affects the charge current flow via the pumping current I_c^p and the charge current flow affects the magnetization dynamics via the Slonczewski spin-transfer torque I_s.
§ CHARGE CONSERVATION GIVES RISE TO A SELF-INDUCED TORQUE
To investigate how the interplay with the charge dynamics affects the magnetization dynamics, we want to eliminate the charge degree of freedom. For that purpose, we focus on a steady-state situation for the charge (currents), Q̇=0, which can be justified in two distinct ways: first, when we search for steady-state precessions of the magnetization—as in section <ref>—the charge enters a steady-state as well. Second, the charge degree of freedom usually relaxes on much shorter time scales than the magnetization direction, such that the charge degree of freedom adjusts adiabatically, Q̇≈ 0, to the magnetization dynamics <cit.>.
In a steady-state situation for the charge, the current flowing into the nanomagnet from the left lead is equal to the current flowing out of the nanomagnet into the right lead, I_c^l = - I_c^r, which follows immediately from the continuity equation (<ref>). In turn, using Kirchhoff's voltage law V_l - V_r = V, we find the voltage drop across the left junction,
V_l = g_r/g_l(θ) + g_r V - I_c^p/g_l(θ)+ g_r .
The first term is the standard voltage drop for two resistors (or tunnel junctions) in series. The second term is more interesting: it is an additional voltage drop that arises due to the pumped charge current I_c^p, as given in Eq. (<ref>). Since this additional pumping-current-induced voltage drop is across the left junction with the magnetic lead, it also gives rise to an additional Slonczewski spin-transfer torque. Explicitly, using the voltage drop over the left junction, Eq. (<ref>) to find the Slonczweski spin-transfer-torque, Eq. (<ref>), we obtain a modified LLGS equation (<ref>), which now becomes
ṁ = - γ m × H_eff + α(θ) m×ṁ + γ/M 𝒱 m × I_s^V× m
+ γ/M 𝒱 m × I_s^p × m .
where we have split up the spin-transfer torque in two contributions. Firstly, we obtain the standard Slonczewski spin-transfer torque induced by the voltage bias V applied across the whole system,
I_s^V = ħ/4eg_l^sg_r/g_l(θ) + g_r V m_fix.
It is simply proportional to the voltage bias applied over the two leads and has been obtained before by numerous authors <cit.>. However, we also obtain
I_s^p = (ħ/4e)^2(g_l^s)^2/g_l(θ) + g_r[ m_fix· ( m ×ṁ)] m_fix ,
which is our central result: a new self-induced torque. To be precise, it is a pumping-current induced spin-transfer torque.
Physically speaking, it originates from the pumping current I_c^p driving electrons over a magnetic tunnel junction. If the precession axis of m is aligned with m_fix, it is simply proportional to the precession frequency ω. Therefore—despite its origin—the self-induced torque effectively acts more similarly to Gilbert damping than to the conventional voltage-bias induced spin-transfer torque. To be more precise, it will act effectively as an anti-Landau-Lifshitz damping, ∝ m × ( m_fix× m), but with a prefactor that has a specific dependence on the magnetization dynamics, ∝ m_fix· ( m ×ṁ). This specific dependence on the magnetization orientation and dynamics could be used to tell the self-induced torque apart from other spin-transfer torque terms and from the phenomenological nonlinear Gilbert damping.
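To make the vector structure of the two spin-transfer torque contributions explicit, a minimal sketch of the corresponding spin currents is given below; the conductance values and units are illustrative placeholders, and the angle dependence of g_l(θ) is ignored for brevity.

```python
# Minimal sketch of the bias-driven and self-induced spin currents entering
# the modified LLGS equation above. Conductances and units are placeholders,
# and the theta dependence of g_l is neglected for brevity.
import numpy as np

hbar_over_4e = 1.0                  # hbar/(4e) in reduced units (placeholder)
g_l, g_r, g_l_s = 0.12, 0.12, 0.1   # charge and spin-flip conductances (placeholders)
m_fix = np.array([0.0, 0.0, 1.0])   # fixed-layer magnetization direction

def spin_currents(m, m_dot, V):
    """Return (I_s^V, I_s^p) for magnetization m, its derivative m_dot and bias V."""
    i_s_v = hbar_over_4e * g_l_s * g_r / (g_l + g_r) * V * m_fix
    i_s_p = hbar_over_4e**2 * g_l_s**2 / (g_l + g_r) \
            * np.dot(m_fix, np.cross(m, m_dot)) * m_fix
    return i_s_v, i_s_p
```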
While a similar effect has been seen before in calculations far away from equilibrium <cit.>, we have shown here that those effects can also be interpreted as a consequence of charge conservation. We therefore expect this effect to be present independent of the strength of relaxation in the electron system, although it might change quantitatively with the internal relaxation rate.
§ EXPERIMENTAL RELEVANCE OF THE SELF-INDUCED TORQUE
In order to show that the self-induced torque is qualitatively and quantitatively relevant, we now consider an in-plane magnetized nanopillar <cit.>.
The effective magnetic field for the nanomagnet is then H_eff = H_0 ẑ- 4π M x̂, where the ẑ-direction is parallel to the magnetization of the left lead ẑ = m_fix and H_0 is the applied external magnetic field. Then the frequency of the linear ferromagnetic resonance is ω_0=γ√(H_0(H_0 + 4π M)) and the critical voltage where the parallel alignment becomes unstable is V_c=γ α(0)(H_0 + 2π M) / σ_0, where σ_0≡γ/M𝒱ħ/4eg_l^s/g_l(0)+g_r is the spin-polarization efficiency for a parallel alignment. Note that the critical voltage is independent of the self-induced torque, which only affects the dynamics in the auto-oscillation regime, where V>V_c.
The charge conductances, g_l/r(θ), the spin-flip conductances, g̃_l/r(θ), and spin conductances, g_l/r^s, that characterize the tunnel junctions are not independent of each other. Instead, they are related to each other as <cit.>
g_l/r(θ) = 1/2[g_l/r^P + g_l/r^AP + (g_l/r^P - g_l/r^AP) cosθ] ,
g̃_l/r(θ) = 1/2[g_l/r^P + g_l/r^AP - ( g_l/r^P - g_l/r^AP) cosθ] ,
g_l/r^s = P^-1 (g_l/r^P - g_l/r^AP),
where g_l/r^P/AP are the conductances of the left/right tunnel junction with the magnetization parallel or anti-parallel to the fixed magnetization. Furthermore, P=(ρ_m^↑ - ρ_m^↓ )/ (ρ_m^↑ + ρ_m^↓) is the polarizing factor of the nanomagnet, where ρ_m^↑ and ρ_m^↓ are the nanomagnet's electron density of states for spin-up and spin-down respectively <cit.>. Note that, since the right tunnel junction is nonmagnetic, we have g_r^P=g_r^AP and it follows that g_r(θ) = g̃_r(θ) = g_r and g_r^s = 0.
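These relations translate directly into code; the sketch below uses, for concreteness, the parameter choices quoted in the next paragraph (g_l^P/G_0 = 0.12, g_l^AP/G_0 = 0.05, P = 0.4) as defaults.

```python
# Conductances of the magnetic left junction as a function of the angle theta
# between m and m_fix, in units of G_0. Defaults follow the values chosen in the text.
import numpy as np

def left_junction_conductances(theta, g_p=0.12, g_ap=0.05, P=0.4):
    """Return (g_l(theta), g~_l(theta), g_l^s)."""
    g = 0.5 * (g_p + g_ap + (g_p - g_ap) * np.cos(theta))        # charge conductance
    g_tilde = 0.5 * (g_p + g_ap - (g_p - g_ap) * np.cos(theta))  # spin-flip conductance
    g_s = (g_p - g_ap) / P                                       # spin conductance
    return g, g_tilde, g_s
```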
We choose the polarization factor of Fe, P=0.4 <cit.>, and g_l^P/G_0=0.12 and g_l^AP/G_0=0.05, where G_0=2e^2/h is the conductance quantum. The tunneling magnetoresistance (TMR) is then TMR=(R_AP - R_P)/R_P=70% [Here, we defined TMR as in sunMagnetoresistanceSpintransferTorque2008.]. We choose the conductance of the right junction to be equal to the parallel conductance of the left junction, g_r=g_l^P. Furthermore, we set H_0/4π M=0.35, ħγ / M𝒱=1 and choose α_0 such that α(0)=0.01. The applied voltage V is then variable, since in experiment it can also be directly controlled.
We numerically solve the LLGS Eq. (<ref>), varying the applied voltage and we show the resulting microwave frequency and average power of the magnetization in Fig. <ref> with (solid) and artificially without (dashed) the self-induced torque. Here p=(1-m_z)/2 is the power of the spin-torque oscillator <cit.>. We give further details regarding the numerical simulation of the LLGS Eq. (<ref>) in App. <ref>.
The quantitative relevance of the self-induced torque for this specific set of parameters is clearly visible, since the solid and dashed lines in Fig. <ref> are obviously different.
The qualitative relevance of the self-induced torque can be seen from the onset of oscillations (V ≥ V_c), where the self-induced torque causes the main oscillation frequency to plateau, Fig. <ref> (solid), which is in qualitative agreement with experimental observations in similar systems <cit.>. In contrast, if one only accounts for the conventional spin-transfer torque, Fig. <ref> (dashed), the LLGS predicts a sharper drop of the main oscillation frequency at the onset of oscillations. A similar behavior is present for the average z-component, resulting in a plateau in the microwave power at the onset of oscillations.
This nonlinear frequency shift at the onset of oscillation [Here, nonlinear refers to the fact that the precession frequency is proportional to ⟨ p⟩.] is well known to be incompletely described by the conventional, or unmodified, LLGS equation <cit.>. The inclusion of the self-induced torque corrects the qualitative behavior at the onset of oscillations.
We also observed that the self-induced torque extends the voltage-range over which auto-oscillations are allowed (not shown here). Here however the θ-dependency of the Gilbert damping enhancement and the Slonczewski spin-transfer torque through the charge and spin-flip conductances g_l(θ) and g̃_l(θ) also plays an important role.
§ DISCUSSION AND CONCLUSION
In this work, we have shown that charge conservation in spin torque oscillators can lead to a novel self-induced torque. Explicitly, we considered a magnetic double tunnel junction, Fig. <ref>, for which we showed that charge conservation leads to an interplay between magnetization and charge dynamics, which gave rise to a self-induced torque. This self-induced torque leads to important modifications in the precession power and frequency, especially at the onset of oscillations. It therefore offers an additional microscopic explanation for the experimental observation of a plateau at the onset of oscillations, complementing the phenomenological nonlinear Gilbert damping of Tiberkevich and Slavin <cit.>.
Due to their similar qualitative behavior, the self-induced torque could be easily mistaken as a contribution to nonlinear Gilbert damping. However, we do not suggest to replace the phenomenological nonlinear Gilbert damping of Tiberkevich and Slavin <cit.> by the self-induced torque. It is likely that there are other effects, such as the magnon-electron and magnon-phonon interactions, that will lead to a proper nonlinear Gilbert damping. However, the self-induced torque and nonlinear Gilbert damping can be distinguished experimentally through their different angle dependence—as is clear from the vector form in Eq. (<ref>).
We want to stress that the main ingredient in our derivation of the self-induced torque is charge conservation and its consequence: the interplay between magnetization and charge dynamics. Therefore, one should expect the self-induced torque to be relevant beyond voltage-biased STOs. For example, it should be equally relevant in ferromagnetic resonance experiments, where a transverse oscillating magnetic field excites the macrospin <cit.>. Such experiments could be used to verify the existence of the self-induced torque proposed here and, simultaneously, to gain deeper insights into the nonlinear Gilbert damping as conceived by Tiberkevich and Slavin <cit.>. Another important example, where the self-induced torque might play a crucial but underacknowledged role is the magnetization switching in magnetic tunnel junctions <cit.>.
Finally, we note that within spin torque oscillators the understanding of the linewidth has suffered from a similar problem as the precession frequency <cit.>. The often employed solution is to phenomenologically include the nonlinear Gilbert damping when describing the thermal fluctuations <cit.>. This solution, although effective in its description of the observed linewidth, does not offer a physical explanation of the nature of the noise. Following the results of this work, it would be of interest to further investigate the effects of charge conservation on the noise in spin torque oscillators. In particular, there will be an additional noise source coming from the altered voltage drop across the left junction as given in Eq. (<ref>), which affects the spin shot noise associated with the discreteness of the spin passing through the left junction <cit.>.
We thank A. Slavin, and A. Shnirman for fruitful discussions.
R.D. is member of the D-ITP consortium, a program of the Dutch Research Council (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work is part of the research programme Fluid Spintronics with project number 182.069, financed by the Dutch Research Council (NWO).
§ ELECTROCHEMICAL POTENTIALS
To relate our simple approach to more elaborate calculations (for example Refs. <cit.>), it is useful to connect the charge on the nanomagnet, Q, and the voltage drops across the tunnel junctions, V_l and V_r, to the electrochemical potentials of the nanomagnet and the leads.
The leads' electrochemical potentials are fixed by the setup (Fig. <ref>) with a ground and voltage source. The electrochemical potential of the right lead μ_r serves as a reference point, as the right lead is connected to the ground. Thus, in the right lead, the electrochemical potential is identical to the chemical potential, as the electrical potential of the ground is (chosen to be) zero. The electrochemical potential of the left lead μ_l is then determined (in reference to μ_r) by the applied voltage; explicitly, μ_l = μ_r + eV. In other words, the voltage that is applied across the whole system is determined by the difference between the leads' electrochemical potentials, V = (μ_l - μ_r)/e.
The nanomagnet's electrochemical potential, μ_m, is closely related to the nanomagnet's charge Q = e ∑_σ∫ερ_m^σ(ε)f_m(ε) + Q_b, where Q_b is the positive background charge, ρ_m^σ(ε) is the nanomagnet's density of states for electrons with spin σ, and f_m(ε) = (e^(ε-μ_m)/k_B T + 1)^-1 is the Fermi-Dirac distribution describing the electron distribution in the nanomagnet. Due to the close relation between μ_m and Q, we can use μ_m as a degree of freedom instead of Q [For simplicity, we assume the temperature T to be irrelevant for the currents, which is justified, for example, when the densities of states are approximately constant on the scale of k_B T.]. In direct analogue, an interplay now emerges between the magnetization dynamics and the nanomagnet's electrochemical potential.
The electrochemical potential of the nanomagnet μ_m determines the voltage drops across the tunnel junctions through V_l/r = (μ_l/r - μ_m)/e. Using these relations for the voltage drops, and proceeding as in the main text, we can determine the steady-state electrochemical potential corresponding to equation (<ref>),
μ_m = g_l(θ) μ_l + g_r μ_r/g_l(θ) + g_r + e I_c^p/g_l(θ) + g_r.
The electrochemical potential contains two terms: the first term is the standard result for double tunnel junctions with strong internal relaxation; the second term is a shift due to the pumping current. From equation (<ref>), the interplay between the nanomagnet's electrochemical potential and its magnetization dynamics can be seen as follows. The magnetization dynamics affects the electrochemical potential μ_m via the pumping current I_c^p. The electrochemical potential μ_m directly affects the voltage drop across the left junction and, in turn, also the Slonczewski spin-transfer torque in equation (<ref>). As a result of this interplay, we find the LLGS equation (<ref>) with a self-induced torque.
Let us note that for g_r ≫ g_l(θ), g̃_l(θ), g_l^s, we find μ_m ≈μ_r. Then, the voltage drop across the magnetic left junction becomes the same as the voltage applied across the whole system, V_l ≈ V, and the self-induced torque becomes irrelevant. So, in this case, we can recover previous results for magnetic single tunnel junctions <cit.>. However, when g_r is roughly comparable to g_l(θ), g̃_l(θ), g_l^s, the self-induced torque is relevant. We expect this result to hold even if the tunnel junctions are replaced by direct contacts.
Finally, however, let us also note that the electrochemical potential of the nanomagnet μ_m (and with it the voltage drops V_l and V_r) is only well defined since we assume strong internal relaxation in the nanomagnet, which leads to an equilibrium Fermi-Dirac distribution for the electrons. Away from that local equilibrium, the nanomagnet's electrochemical potential is ill defined and the full electron distribution takes its role as degree of freedom. In turn, an interplay emerges between the magnetization dynamics and the electron distribution of the nanomagnet. While this interplay has been seen in the strong nonequilibrium case without internal relaxation <cit.>, the vast regime between the two limiting cases of strong and absent internal relaxation has—to the best of our knowledge—not yet been explored theoretically.
§ NUMERICAL LLGS SIMULATIONS
The results as shown in Fig. <ref> are obtained as follows. For every voltage we run a separate simulation of the LLGS equation (<ref>) numerically, starting from an initial condition where the macrospin is orientated along the z-axis, with a random small deviation. The simulations are then run for 4πγ M t=5e4, in order to ensure a steady state precession has been reached. The precession frequency is then obtained from finding the highest peak in the Fourier transform of m_x(t)+im_y(t) and the average z-component from averaging m_z(t), where both are taken over the final 4πγ M t=5e3. Since for the parameters chosen here we have that both α(θ)≪1 and γ/M𝒱| I_s^p|≪1, the dynamics of the magnetization are governed primarily by ṁ= -γ m × H_eff and we can thus replace ṁ→ -γ m × H_eff on the right hand side of LLGS equation (<ref>) and in the self-induced torque, Eq. (<ref>).
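A minimal sketch of this procedure, for a single bias voltage in the reduced units of the main text and with illustrative torque prefactors, could look as follows; it is not the code used to produce Fig. <ref>.

```python
# Minimal sketch: integrate the modified LLGS equation for one bias voltage and
# read off the precession frequency from the FFT of m_x + i m_y.
# Units are reduced (4 pi gamma M = 1); sigma0 and sigma_p are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

alpha, sigma0, sigma_p = 0.01, 1.0, 0.002   # damping and torque prefactors (placeholders)
h0, four_pi_m = 0.35, 1.0                   # H_0/4piM = 0.35 as in the main text
m_fix = np.array([0.0, 0.0, 1.0])

def rhs(t, m, v):
    h_eff = np.array([-four_pi_m * m[0], 0.0, h0])   # in-plane nanopillar field
    prec = -np.cross(m, h_eff)                       # m_dot ~ prec for alpha << 1
    damp = alpha * np.cross(m, prec)
    stt_v = sigma0 * v * np.cross(m, np.cross(m_fix, m))
    stt_p = sigma_p * np.dot(m_fix, np.cross(m, prec)) * np.cross(m, np.cross(m_fix, m))
    return prec + damp + stt_v + stt_p

m0 = np.array([0.01, 0.0, 1.0])
m0 /= np.linalg.norm(m0)
t = np.linspace(0.0, 5.0e3, 2**17)
sol = solve_ivp(rhs, (t[0], t[-1]), m0, t_eval=t, args=(0.02,), rtol=1e-8, atol=1e-10)

tail = 2**15                                     # keep only the steady-state part
signal = sol.y[0, -tail:] + 1j * sol.y[1, -tail:]
freqs = np.fft.fftfreq(signal.size, d=t[1] - t[0])
spectrum = np.abs(np.fft.fft(signal))
spectrum[0] = 0.0                                # ignore any dc offset
print(abs(freqs[np.argmax(spectrum)]))           # precession frequency (reduced units)
```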
[Slonczewski(1996)]slonczewskiCurrentdrivenExcitationMagnetic1996
author author J. C. Slonczewski, title title Current-driven
excitation of magnetic multilayers, https://doi.org/10.1016/0304-8853(96)00062-5 journal
journal Journal of Magnetism and Magnetic Materials volume 159, pages L1 (year
1996)NoStop
[Ralph and Stiles(2008)]ralphSpinTransferTorques2008
author author D. C. Ralph and author M. D. Stiles, title title Spin transfer torques, https://doi.org/10.1016/j.jmmm.2007.12.019 journal
journal Journal of Magnetism and Magnetic Materials volume 320, pages 1190 (year
2008)NoStop
[Slavin and Tiberkevich(2009)]slavinNonlinearAutoOscillatorTheory2009
author author A. Slavin and author V. Tiberkevich, title title Nonlinear
Auto-Oscillator Theory of Microwave Generation by Spin-Polarized
Current, https://doi.org/10.1109/TMAG.2008.2009935 journal journal IEEE Transactions on Magnetics volume 45, pages 1875 (year
2009)NoStop
[Kim(2012)]kimChapterFourSpinTorque2012
author author J.-V. Kim, title title Chapter Four -
Spin-Torque Oscillators, in https://doi.org/10.1016/B978-0-12-397028-2.00004-7 booktitle Solid State Physics, Vol. volume 63, editor edited by editor R. E. Camley and editor R. L. Stamps (publisher Academic Press, year 2012) pp. pages 217–294NoStop
[Chen et al.(2016)Chen,
Dumas, Eklund, Muduli,
Houshang, Awad, Dürrenfeld, Malm, Rusu, and Åkerman]chenSpinTorqueSpinHallNanoOscillators2016
author author T. Chen, author R. K. Dumas,
author A. Eklund, author P. K. Muduli, author
A. Houshang, author
A. A. Awad, author
P. Dürrenfeld, author
B. G. Malm, author
A. Rusu, and author
J. Åkerman, title
title Spin-Torque and Spin-Hall Nano-Oscillators, https://doi.org/10.1109/JPROC.2016.2554518 journal
journal Proceedings of the IEEE volume
104, pages 1919 (year 2016)NoStop
[Kiselev et al.(2003)Kiselev, Sankey, Krivorotov, Emley, Schoelkopf, Buhrman, and Ralph]kiselevMicrowaveOscillationsNanomagnet2003
author author S. I. Kiselev, author J. C. Sankey,
author I. N. Krivorotov,
author N. C. Emley, author R. J. Schoelkopf, author R. A. Buhrman, and author D. C. Ralph, title
title Microwave oscillations of a nanomagnet driven by a
spin-polarized current, https://doi.org/10.1038/nature01967
journal journal Nature volume 425, pages 380 (year
2003)NoStop
[Berkov and Gorn(2005)]berkovMagnetizationPrecessionDue2005
author author D. V. Berkov and author N. L. Gorn, title title Magnetization precession due
to a spin-polarized current in a thin nanoelement: Numerical simulation
study, https://doi.org/10.1103/PhysRevB.72.094401 journal journal Physical Review B volume 72, pages 094401 (year
2005)NoStop
|
http://arxiv.org/abs/2307.03952v2 | 20230708110202 | Is ChatGPT a Good Personality Recognizer? A Preliminary Study | [
"Yu Ji",
"Wen Wu",
"Hong Zheng",
"Yi Hu",
"Xi Chen",
"Liang He"
] | cs.CL | [
"cs.CL"
] |
Is ChatGPT a Good Personality Recognizer? A Preliminary Study
Yu Ji^1,2 ([email protected]), Wen Wu^2,3 (corresponding author, [email protected]), Hong Zheng^4, Yi Hu^3, Xi Chen^3, Liang He^1,2
^1 Institute of AI Education, East China Normal University, Shanghai, China
^2 School of Computer Science and Technology, East China Normal University, Shanghai, China
^3 Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
^4 Shanghai Changning Mental Health Center, Shanghai, China
In recent years, personality has been regarded as a valuable personal factor being incorporated into numerous tasks such as sentiment analysis and product recommendation. This has led to widespread attention to text-based personality recognition task, which aims to identify an individual's personality based on given text. Considering that ChatGPT has recently exhibited remarkable abilities on various natural language processing tasks, we provide a preliminary evaluation of ChatGPT on text-based personality recognition task for generating effective personality data. Concretely, we employ a variety of prompting strategies to explore ChatGPT's ability in recognizing personality from given text, especially the level-oriented prompting strategy we designed for guiding ChatGPT in analyzing given text at a specified level. The experimental results on two representative real-world datasets reveal that ChatGPT with zero-shot chain-of-thought prompting exhibits impressive personality recognition ability and is capable to provide natural language explanations through text-based logical reasoning. Furthermore, by employing the level-oriented prompting strategy to optimize zero-shot chain-of-thought prompting, the performance gap between ChatGPT and corresponding state-of-the-art model has been narrowed even more. However, we observe that ChatGPT shows unfairness towards certain sensitive demographic attributes such as gender and age. Additionally, we discover that eliciting the personality recognition ability of ChatGPT helps improve its performance on personality-related downstream tasks such as sentiment classification and stress prediction.
ChatGPT; Personality Recognition; Chain-of-Thought Prompting Strategy; Level-Oriented Prompting Strategy; Natural Language Explanation; Unfairness
August 12, 2023
===================
§ INTRODUCTION
As one of the basic individual characteristics, personality describes the relatively stable pattern of an individual's behavior, thought, and emotion <cit.>. In recent years, an increasing number of researchers have considered personality as a valuable factor and incorporated it into various tasks (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), resulting in significant performance improvements. In order to obtain large-scale user personality automatically, the text-based personality recognition task is designed to infer a user's personality from given user-generated text <cit.>. With the rapid development of pre-trained Large Language Models (LLMs) (e.g., BERT <cit.>, RoBERTa <cit.>, GPT-3 <cit.>, PaLM <cit.>, and LLaMA <cit.>), more and more LLM-based methods have been proposed for the text-based personality detection task and have achieved remarkable performance improvements <cit.>.
More recently, ChatGPT[https://chat.openai.com/] has attracted a considerable amount of attention with its impressive general language processing ability <cit.>, sparking exploration into its capability boundaries <cit.>. Several works have provided a preliminary evaluation of ChatGPT on various common tasks such as machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>. Therefore, in this work, we are interested in evaluating the performance of ChatGPT on text-based personality recognition task for generating effective personality data. We also would like to see whether eliciting the personality recognition ability of ChatGPT contributes to improving its performance on other downstream tasks. Concretely, we raise the following Research Questions (RQs):
RQ1: How do different prompting strategies affect ChatGPT's ability to identify personality?
RQ2: How unfair is ChatGPT when serving as a personality recognizer on various sensitive demographic attributes?
RQ3: Does the personality inferred by ChatGPT help improve its performance on other downstream tasks?
To answer these research questions, we conduct experiments on two representative text-based personality recognition datasets (i.e., Essays and PAN) to compare the performance of ChatGPT, traditional neural network (e.g., Recurrent Neural Network (RNN)), fine-tuned RoBERTa, and corresponding State-Of-The-Art (SOTA) model. Specifically, we adopt three classic prompting strategies to elicit the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot Chain-of-Thought (CoT) prompting, and one-shot prompting. Furthermore, considering that researchers typically analyze texts at different levels (e.g., word level, sentence level, and document level) to obtain valuable text information <cit.>, we design zero-shot level-oriented CoT prompting to guide ChatGPT in analyzing given text at a specified level, thereby gaining a more targeted understanding of given text and recognizing personality more precisely. According to the experimental results, our findings can be summarized as follows:
(1) Among the three classic prompting strategies, zero-shot CoT prompting can better elicit ChatGPT's ability to predict personality based on given text, resulting in its optimal overall performance on the two datasets, although there is still a certain gap in performance compared to the SOTA model. Additionally, ChatGPT with zero-shot CoT prompting could generate more natural language explanations by text-based logical reasoning, enhancing the interpretability of the prediction results. Furthermore, with the assistance of zero-shot level-oriented CoT prompting, ChatGPT could perform more targeted text analysis, enabling it to complete more accurate personality prediction.
(2) ChatGPT exhibits unfairness to some sensitive demographic attributes on text-based personality recognition task. Based on ChatGPT's analysis, the woman group is more likely to have high levels of Openness, Conscientiousness, and Agreeableness when compared to the man group. Besides, relative to the younger group, the elderly group has a higher likelihood to have low Openness.
(3) The personality inferred by ChatGPT could enhance its performance on sentiment classification task and stress prediction task, which may provide new insights for other personality-related tasks (e.g., machine translation and product recommendation).
In the following sections, we first introduce related work regarding personality recognition in Section <ref>. After that, we present the details of our experimental design and analyze the experimental results in Section <ref>. Finally, we conclude the paper and indicate some future directions in Section <ref>.
§ BACKGROUND AND RELATED WORK
Big-Five Factor (BFF) model and Myers-Briggs Type Indicator (MBTI) are two most popular personality assessment models <cit.>. To be specific, BFF model describes personality based on five traits: Openness (O), Conscientiousness (C), Extraversion (E), Agreeableness (A), and Neuroticism (N) <cit.>. Table <ref> shows the propensities of individuals under different personality traits and levels. On the contrary, MBTI describes personality according to four dimensions, including Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving <cit.>. Compared to BFF model, MBTI still faces controversy within the academic community <cit.>. Hence, we adopt BFF model to describe individuals' personalities in this paper.
In recent years, an increasing number of researchers regarded Big-Five personality as a valuable personal factor and incorporated it into their models, resulting in significant performance improvements on various tasks <cit.>. For example, Wu et al. <cit.> adopted users' Big-Five personalities to personalize the recommendation diversity being tailored to the users' diversity needs. Ban et al. <cit.> utilized learners' Big-Five personalities to model the individual differences for better predicting the learners' knowledge levels. This has sparked researchers' interest in efficiently acquiring Big-Five personalities.
The conventional approach to identify an individual's Big-Five personality is via personality questionnaires (e.g., NEO-FFI questionnaire <cit.>, BFI-44 <cit.>, BFI-10 <cit.>, and BFMS <cit.>). These personality questionnaires are typically carefully designed by psychology experts and require individuals to rate their behaviors using Likert scales, which is time-consuming and labor-intensive <cit.>. In order to apply Big-Five personality on a large scale across various domains (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), researchers attempted to implicitly obtain Big-Five personality from various User-Generated Content (UGC), including text <cit.>, handwriting <cit.>, speech <cit.>, electroencephalography (EEG) <cit.>, and so on. Due to substantial evidence from psychological research demonstrating the correlation between user-generated texts and users' Big-Five personalities <cit.>, researchers made an extensive exploration of text-based personality recognition. However, the related methods normally regarded text-based personality recognition task as a special case of text classification. Most of them utilized machine learning algorithms to build personality recognizers with text features such as Linguistic Inquiry and Word Count (LIWC) <cit.> and Structured Programming for Linguistic Cue Extraction (SPLICE) <cit.>. Furthermore, with the rapid development of deep learning, more and more methods using deep neural networks are proposed to solve text-based personality recognition task, as deep neural networks could extract high-order text features from user-generated text automatically <cit.>. For example, Majumder et al. <cit.> designed a deep convolutional neural network with Word2Vec embeddings <cit.> for personality detection. Xue et al. <cit.> presented a two-level hierarchical neural network to learn the deep semantic representations of users' posts for recognizing users' Big-Five personalities. Lynn et al. <cit.> utilized message-level attention to learn the relative weight of users' posts for assessing users' Big-Five personalities. Zhu et al. <cit.> learned post embeddings by contrastive graph transformer network for personality detection. Zhu et al. <cit.> proposed a lexical psycholinguistic knowledge-guided graph neural network to enrich the semantics of users' posts with the personality lexicons. Recently, the remarkable performance enhancements achieved by LLMs in numerous Nature Language Processing (NLP) tasks <cit.> prompted researchers to explore the utilization of LLMs in text-based personality prediction task <cit.>. For example, Mehta et al. <cit.> performed extensive experiments with BERT to arrive at the optimal configuration for personality detection. Ren et al. <cit.> leveraged BERT to generate sentence-level embedding for personality recognition, while a sentiment dictionary is used to consider sentiment information in the process of personality prediction.
Lately, the release of ChatGPT has drawn increasingly great attention due to the incredible general language processing ability of ChatGPT. Therefore, more and more researchers attempted to explore the capability boundaries of ChatGPT and evaluate it on various tasks, including machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, mental health analysis <cit.>, and so on. Hence, in this work, we are interested in exploring the personality recognition ability of ChatGPT through different prompting strategies for obtaining effective personality data.
§ EXPERIMENTS
§.§ Datasets
We adopt two well-known publicly available datasets in our experiments for text-based Big-Five personality recognition task:
(1) Essays <cit.>: This stream-of-consciousness dataset consists of 2,467 essays written by psychology students, and the Big-Five personality levels (i.e., low and high levels) of the students were acquired through standardized self-report questionnaire.
(2) PAN[https://pan.webis.de/clef15/pan15-web/author-profiling.html]: This dataset comes from the PAN2015 data science competition, which consists of four language sub-datasets (i.e., Dutch, English, Italian, and Spanish). In this work, we choose the English sub-dataset, which contains 294 users' tweets and their Big-Five personality scores. The Big-Five personality scores of the users were obtained by BFI-10 questionnaire <cit.>. Note that, similar to <cit.>, for each of the five personality traits, we adopt the corresponding mean value to convert personality scores into two personality levels (i.e., low and high levels). To be specific, personality score below the corresponding mean value is converted into the low level, while personality score equal to or above the corresponding mean value is converted into the high level.
Similar to <cit.>, we randomly split Essays and PAN datasets into training, validation, and testing sets in the proportion of 8:1:1. The statistics of the two datasets are summarized in Figure <ref>.
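For concreteness, the mean-threshold binarization and the 8:1:1 split described above can be sketched in a few lines of Python (an illustrative sketch of our own; the column names, file layout, and random seed are assumptions rather than details of the original datasets):

import pandas as pd

TRAITS = ["O", "C", "E", "A", "N"]  # Big-Five personality traits

def binarize_by_mean(df: pd.DataFrame) -> pd.DataFrame:
    """Convert continuous Big-Five scores into low (0) / high (1) levels using the trait-wise mean."""
    out = df.copy()
    for t in TRAITS:
        mean_t = df[t].mean()
        out[t] = (df[t] >= mean_t).astype(int)  # below mean -> low level, equal or above -> high level
    return out

def split_8_1_1(df: pd.DataFrame, seed: int = 42):
    """Randomly split a dataset into training/validation/testing sets in the proportion 8:1:1."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n_train, n_val = int(0.8 * len(shuffled)), int(0.1 * len(shuffled))
    return (shuffled.iloc[:n_train],
            shuffled.iloc[n_train:n_train + n_val],
            shuffled.iloc[n_train + n_val:])

# Hypothetical usage, assuming a CSV with columns ["text", "O", "C", "E", "A", "N"]:
# train, val, test = split_8_1_1(binarize_by_mean(pd.read_csv("pan2015_en.csv")))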
§.§ Prompting Strategies
We employ three classic prompting strategies to explore the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. The reason for using one-shot prompting alone is that ChatGPT has a limitation on the length of input. Considering that the texts in both Essays and PAN datasets are normally long (i.e., the average lengths of texts in Essays and PAN datasets are 749 and 1,405 respectively), we only provide one demonstration example in the input (i.e., one-shot prompting) without offering more demonstration examples (e.g., two-shot prompting). In addition, inspired by existing NLP research mining valuable text information at different levels (e.g., word level, sentence level, and document level) <cit.>, we design level-oriented prompting strategy to guide ChatGPT in analyzing text at a specified level. Concretely, we combine the level-oriented prompting strategy with zero-shot CoT prompting to construct zero-shot level-oriented CoT prompting. The reason for constructing zero-shot level-oriented CoT prompting based on zero-shot CoT prompting is that ChatGPT with zero-shot CoT prompting has better overall performance on the two datasets when compared to zero-shot prompting and one-shot prompting (see Section <ref>). Hence, we would like to see whether the level-oriented prompting strategy could further enhance the effectiveness of zero-shot CoT prompting. Note that, the four prompting strategies require ChatGPT to simultaneously output the person's levels of five personality traits (i.e., O, C, E, A, and N) based on given text.
(1) Zero-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level:
(2) Zero-Shot CoT prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
(3) One-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Example Text]"
Level: [Openness Level of Example Text] Openness, [Conscientiousness Level of Example Text] Conscientiousness, [Extraversion Level of Example Text] Extraversion, [Agreeableness Level of Example Text] Agreeableness, [Neuroticism Level of Example Text] Neuroticism
Text: "[Text]"
Level:
Note that, to minimize the variance resulting from the sampling of demonstration examples, we randomly select three demonstration examples for conducting experiments and reporting the average performance.
(4) Zero-Shot Level-Oriented CoT prompting
We modify zero-shot CoT prompting as follows to construct zero-shot level-oriented CoT prompting, where [Specified Level] can be set to word level, sentence level, or document level.
Analyze the person-generated text from [Specified Level], determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
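As an illustration, the four templates above can be assembled programmatically as plain strings; the sketch below is our own (only the wording of the templates follows the paper, while the helper name and argument conventions are assumptions):

INSTRUCTION = (
    "Analyze the person-generated text{level_clause}, determine the person's levels of "
    "Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. "
    "Only return Low or High.\n"
)

def build_prompt(text, cot=False, level=None, example=None):
    """Assemble zero-shot, zero-shot CoT, one-shot, or zero-shot level-oriented CoT prompts."""
    level_clause = f" from {level}" if level else ""  # e.g. "word level", "sentence level", "document level"
    prompt = INSTRUCTION.format(level_clause=level_clause)
    if example is not None:                           # one-shot: a single (text, levels) demonstration
        demo_text, demo_levels = example              # demo_levels, e.g. "High Openness, Low Conscientiousness, ..."
        prompt += f'Text: "{demo_text}"\nLevel: {demo_levels}\n'
    prompt += f'Text: "{text}"\nLevel:'
    if cot:
        prompt += " Let's think step by step:"
    return prompt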
§.§ Baselines
Based on our literature research, we choose the following representative models as baselines:
(1) RNN <cit.>: uses RNN to generate text representation for recognizing Big-Five personality. In addition, the pre-trained GloVe model <cit.> is used to initialize the word embeddings.
(2) RoBERTa <cit.>: fine-tunes pre-trained RoBERTa-Base model and utilizes the representation of [CLS] with a linear layer for personality classification.
(3) HPMN (BERT) <cit.>: is one of the SOTA personality prediction models, which uses the personality lexicons to incorporate relevant external knowledge for enhancing the semantic meaning of the person-generated text. Its performance on Essays and PAN datasets is quoted from the original paper.
§.§ Evaluation Metrics
It can be observed from Figure <ref> that Essays and PAN datasets maintain class balance across most of the five personality traits. Therefore, we use Accuracy (the higher the better) <cit.> as the evaluation metric, which is used to measure the personality classification performance. Besides, to make a more intuitive comparison, we adopt Accuracy Improvement Percentage (AIP) to measure the accuracy improvement percentage of ChatGPT against the SOTA model (i.e., HPMN (BERT)), which is calculated as:
AIP=(Accuracy_testmodel-Accuracy_SOTA)/Accuracy_SOTA×100%
where Accuracy_SOTA and Accuracy_testmodel denote the accuracy of the SOTA model and the test model such as ChatGPT with zero-shot prompting.
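In code, the two metrics amount to the following helper functions (a sketch of our own; the example numbers are the average accuracies of ChatGPT with zero-shot prompting and of HPMN (BERT) on PAN quoted in the results discussion):

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted personality level matches the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def aip(acc_test_model, acc_sota):
    """Accuracy Improvement Percentage of a test model against the SOTA model, in percent."""
    return (acc_test_model - acc_sota) / acc_sota * 100.0

# Example: aip(0.573, 0.675) is approximately -15.1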
§.§ Implementation Details
For the usage of ChatGPT, we adopt the representative version of ChatGPT (i.e., gpt-3.5-turbo). In addition, we set the temperature to 0 to produce more deterministic and focused responses. For RNN and fine-tuned RoBERTa, we limit each text to no more than 512 words (padding when the text length is less than 512, truncation when it is greater than 512). Besides, for RNN, the dimension of the hidden state, the batch size, and the learning rate are set to 128, 32, and 1e-3, respectively, while for fine-tuned RoBERTa, the batch size and the learning rate are set to 32 and 5e-5, respectively.
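The ChatGPT queries themselves can be issued, for example, with the pre-1.0 openai-python interface; the sketch below, including the simple Low/High parsing heuristic, is our own simplification rather than the exact implementation:

import re
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder
TRAITS = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]

def query_personality(prompt: str) -> dict:
    """Send one prompt to gpt-3.5-turbo at temperature 0 and parse the Low/High level per trait."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # more deterministic and focused responses
    )
    answer = response["choices"][0]["message"]["content"]
    levels = {}
    for trait in TRAITS:
        # Heuristic: look for "Low"/"High" adjacent to the trait name anywhere in the reply.
        m = re.search(rf"(Low|High)\s+{trait}|{trait}\s*:?\s*(Low|High)", answer, re.IGNORECASE)
        if m:
            levels[trait] = (m.group(1) or m.group(2)).capitalize()
    return levels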
§.§ Overall Performance (RQ1)
Considering that ChatGPT may refuse to perform personality recognition in some cases[One unexpected response of ChatGPT: “Unfortunately, there is not enough information in the provided text to accurately determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.”.], we adopt the Majority approach to obtain the prediction results when encountering such rare situations. Specifically, for each personality trait, we regard the majority personality level in the training set as the personality level of each sample in the testing set. The experimental results on Essays and PAN datasets are shown in Table <ref> and Table <ref>. Concretely, ChatGPT_ZS, ChatGPT_CoT, and ChatGPT_OS represent ChatGPT with zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. In addition, ChatGPT_CoT_W, ChatGPT_CoT_S, and ChatGPT_CoT_D denote ChatGPT with zero-shot level-oriented CoT prompting, where [Specified Level] is set to word level, sentence level, and document level respectively.
Results of zero-shot prompting. As shown in Table <ref> and Table <ref>, ChatGPT_ZS has better performance than the traditional neural network RNN on both Essays and PAN datasets. For example, relative to RNN, ChatGPT_ZS increases its average classification accuracy from 50.3% to 57.4% on Essays dataset. Furthermore, ChatGPT_ZS not only performs comparably to fine-tuned RoBERTa on Essays dataset (e.g., 57.4% vs. 57.3% in terms of average classification accuracy) but also outperforms fine-tuned RoBERTa on PAN dataset (e.g., 57.3% vs. 55.3% w.r.t. average classification accuracy). Therefore, ChatGPT_ZS exhibits incredible text-based personality recognition ability under zero-shot setting. Since the SOTA model is a task-specific fully-supervised model with complex architecture for personality recognition task, the performance of ChatGPT_ZS falls far behind that of the SOTA model on the two datasets (e.g., 57.3% vs. 67.5% w.r.t. average classification accuracy on PAN dataset). However, another interesting observation is that compared with Essays dataset (i.e., the relatively large-scale dataset), ChatGPT_ZS shows a relatively higher AIP on PAN dataset (i.e., the relatively small-scale dataset). For example, the AIP of ChatGPT_ZS against the SOTA model on Essays and PAN datasets are -29.0% and -15.1% respectively. Furthermore, ChatGPT_ZS even surpasses the SOTA model when predicting personality trait A on PAN dataset (i.e., 70.0% vs. 66.3%). The possible reason is that PAN dataset provides relatively fewer training data for the fully-supervised SOTA model, preventing it from fully learning the differences in personality levels. In contrast, ChatGPT_ZS does not require training data and relies solely on its existing knowledge under zero-shot setting, narrowing the performance gap between ChatGPT_ZS and the SOTA model.
Results of zero-shot CoT prompting. Table <ref> and Table <ref> reveal that zero-shot CoT prompting could effectively enhance ChatGPT's ability on text-based personality recognition task. For example, ChatGPT_CoT increases its average classification accuracy from 57.3% to 60.7% on PAN dataset when compared with ChatGPT_ZS. As for reason, with the help of zero-shot CoT prompting, ChatGPT_CoT can perform more complex logical reasoning, so as to accurately complete the personality prediction task. Besides, ChatGPT_ZS only provides final prediction results (see Figure <ref>), while ChatGPT_CoT could provide additional natural language explanations for its prediction results in most cases (see Figure <ref>). The natural language explanations generated by ChatGPT_CoT not only enhance users' trust in the prediction results but also enables developers to obtain a better understanding of the knowledge deficiencies in ChatGPT. To gain a deep insight into the natural language explanations generated by ChatGPT_CoT, we categorize the nature language explanations into three types: (1) None: no explanation or refuse personality recognition; (2) Original Content: only the original text is provided as explanation; (3) Logical Reasoning: logical reasoning based on the original text. Figure <ref> shows the examples of three types of natural language explanations for the prediction of personality trait O, and Figure <ref> illustrates the distribution of three types of natural language explanations on different datasets and personality traits. As depicted in Figure <ref>, on both Essays and PAN datasets, ChatGPT_CoT provides more natural language explanations of the logical reasoning type for the prediction of personality trait O, while offering more natural language explanations of the original content type when identifying personality trait N. With regard to possible reasons, personality trait O reflects whether a person is creative/open-minded (with high level) or reflective/conventional (with low level) <cit.>, which may not be directly presented in person-generated text. Hence, the prediction of personality trait O requires ChatGPT to engage in more logical reasoning for a deeper analysis of given text. For example, as shown in Figure <ref>, based on given text, ChatGPT_CoT infers that the person's text is mostly focused on concrete details and experiences, with little indication of abstract or imaginative thinking. Therefore, ChatGPT_CoT predicts that the person has low O. On the contrary, personality trait N reflects whether a person is emotionally stable (with low level) or emotionally unstable (with high level) <cit.>. Since individuals normally directly express their negative emotions (e.g., anxiety) in their texts, it is relatively easier for ChatGPT_CoT to predict personality trait N based on the original text without logical reasoning. For example, one of natural language explanation of the original content type generated by ChatGPT_CoT for predicting personality trait N is mentions feeling stressed, tense, and worried about health problems and homework overload. Furthermore, as demonstrated in Figure <ref>, compared with Essays dataset, ChatGPT_CoT provides relatively more natural language explanations of the logical reasoning type for personality recognition on PAN dataset. 
The possible reason is that Essays dataset consists of stream-of-consciousness essays written by psychology students under professional guidance, while PAN dataset is composed of tweets written freely by various internet users. Hence, compared with the texts in Essays dataset, the texts in PAN datasets generally contain relatively less valuable information, which increases the difficulty of text-based personality prediction on PAN dataset. Therefore, compared to Essays dataset, ChatGPT_CoT needs to perform more logical reasoning to accomplish personality recognition task accurately on PAN dataset.
Results of one-shot prompting. From Table <ref> and Table <ref>, it can be observed that by providing a demonstration example, ChatGPT's performance has improved on Essays dataset but largely declined on PAN dataset. To be specific, ChatGPT_OS increases its average classification accuracy from 57.4% to 58.2% on Essays dataset when compared with ChatGPT_ZS. However, relative to ChatGPT_ZS, ChatGPT_OS decreases its average classification accuracy from 57.3% to 49.3% on PAN dataset. Regarding possible reasons, on the one hand, as mentioned above, the texts in Essays dataset generally contain more valuable information when compared to PAN dataset. Hence, there is a higher probability of selecting samples containing more invalid information from PAN dataset than from Essays dataset, thereby affecting ChatGPT_OS's learning of the relationship between text and Big-Five personality on PAN dataset. On the other hand, the persons in Essays dataset are all psychology students, while the persons in PAN dataset are various internet users from different age groups (from 18 years old to over 50 years old). Hence, without the corresponding demographic attributes (e.g., age) provided, the demonstration example selected from the training set of PAN dataset may not assist ChatGPT_OS in predicting the personalities of certain groups. For instance, if the demonstration example is generated by a young person, the association between text and personality that ChatGPT_OS learns from this demonstration example may not be helpful in predicting the personality of an old person.
Results of zero-shot level-oriented prompting. Table <ref> and Table <ref> demonstrate that guiding ChatGPT_CoT to analyze given text from specified level could help ChatGPT in analyzing given text more targeted and completing personality prediction task precisely. For example, by guiding ChatGPT_CoT_D to analyze given text from document level, its performance on Essays dataset can rival the performance of ChatGPT_OS (58.3% vs. 58.2% w.r.t. average classification accuracy). Similarly, on PAN dataset, when ChatGPT_CoT_S is guided to analyze given text from sentence level, its average classification accuracy has been a notable improvement when compared to ChatGPT_CoT, rising from 57.3% to 62.7%. We believe the possible reason is that the texts in Essays dataset were written within a limited time frame, making it more suitable for conducting overall analysis from document level. On the other hand, the texts in PAN dataset are composed of tweets posted at different times. Hence, it is more appropriate to analyze given text in PAN dataset from sentence level, which is helpful to mine diverse individual information reflected in different tweets. This discovery not only helps optimize existing promptings for text analysis but also offers new insights into eliciting various abilities of LLMs in a fine-grained manner.
§.§ Fairness of ChatGPT on Personality Recognition (RQ2)
Considering that LLMs may be unfair to certain groups due to social bias in their large pre-training corpora <cit.>, we further investigate the fairness of ChatGPT on personality prediction task across different groups. To be specific, we adopt ChatGPT_CoT with different demographic attributes for personality prediction on PAN dataset, as PAN dataset provides various demographic attributes, including gender and age (see Table <ref>). Concretely, we modify zero-shot CoT prompting as follows to provide ChatGPT with the specific demographic attribute corresponding to the given text:
Analyze the person-generated text, determine the person's level of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High. Note that, the person is [Corresponding Attribute].
Text: "[Text]"
Level: Let's think step by step:
Please refer to Table <ref> for the setting of [Corresponding Attribute]. For example, [Corresponding Attribute] is set to aged between 18 and 24 when the age of the corresponding person is between 18 and 24 years old. To be specific, ChatGPT_CoT_gender and ChatGPT_CoT_age represent ChatGPT with the modified zero-shot CoT promptings, which incorporates gender and age information respectively.
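Programmatically, the only change relative to the zero-shot CoT prompt is one extra sentence in the instruction; a minimal sketch (the helper name is ours, and the attribute wording follows the template above):

def add_demographic_note(instruction: str, attribute: str) -> str:
    """Append the demographic hint used by ChatGPT_CoT_gender / ChatGPT_CoT_age to the instruction."""
    # attribute examples: "a woman", "a man", "aged between 18 and 24" (exact wording per the table of attributes)
    return instruction.rstrip() + f" Note that, the person is {attribute}.\n"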
It is apparent from Figure <ref> that the incorporation of demographic attributes impairs the personality prediction ability of ChatGPT_CoT to some extent, especially the integration of age information. For example, relative to ChatGPT_CoT, ChatGPT_CoT_gender and ChatGPT_CoT_age decrease their average accuracy from 55.5% to 55.2% and 54.0% respectively. We speculate that this phenomenon may be due to ChatGPT's biases towards certain groups, which leads to unfair treatment of those groups. In order to better observe ChatGPT's biases on personality prediction task, we first obtain the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups. We then visualize the proportion of low and high levels in those prediction results. Concretely, Figure <ref> and Figure <ref> show the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_gender towards woman and man groups respectively. In addition, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> illustrate the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_age towards different age groups. Take Figure <ref> as an example, the figure represents that among the 174 women in PAN dataset, 51% of them have high O (i.e., ground truth). However, ChatGPT_CoT classifies 74.8% of the 174 women as high O, while ChatGPT_CoT_gender classifies 82.3% of the 174 women as high O. In contrast, as shown in Figure <ref>, among the 174 men in PAN dataset, 47.6% of them have low O (i.e., ground truth). However, ChatGPT_CoT classifies 29.9% of the 174 men as low O, while ChatGPT_CoT_gender classifies 32.0% of the 174 men as low O. In summary, after adding gender information, ChatGPT_CoT_gender classifies more women as high O and classifies more men as low O. This phenomenon suggests that ChatGPT considers women to be more likely to belong to high O when compared to men. In order to make a more intuitive comparison of the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups, we further visualize the changes of the proportion of high level in the prediction results of ChatGPT_CoT_gender/ ChatGPT_CoT_age relative to ChatGPT_CoT (see Figure <ref>). For example, as displayed in Figure <ref>, for 174 women in PAN dataset, the proportion of women with high A in the prediction results of ChatGPT_CoT_gender has increased by 8.1% when compared to ChatGPT_CoT. Based on Figure <ref>, the biases of ChatGPT towards certain groups can be summarized as follows:
(1) Relative to the man group, the woman group is more likely to exhibit high levels of personality traits O, C, and A.
(2) The older an individual is, the greater the likelihood of her/his personality traits O being low level.
However, these findings are not entirely consistent with existing research. For example, some studies suggest that the woman group is more likely to exhibit high levels of personality traits A and N compared to the man group, whereas gender differences in the other personality traits (i.e., O, C, and E) have been either inconsistent or of negligible magnitude <cit.>. Possible reasons for this could be that, on the one hand, ChatGPT's biases are influenced by the biases of the annotators, which may not be representative. On the other hand, these findings are discovered based solely on the PAN dataset, limiting their generalization to some extent. Nevertheless, this phenomenon serves as a cautionary reminder for researchers to consider fairness when utilizing ChatGPT for personality prediction.
§.§ ChatGPT's Personality Recognition Ability on Downstream Task (RQ3)
We apply the personality data generated by ChatGPT to other downstream tasks for validating the effectiveness of ChatGPT's personality recognition ability. Concretely, we choose sentiment classification task and stress prediction task as the downstream tasks, because existing psychological research indicates that there is a correlation between Big-Five personality and sentiment expression <cit.> as well as stress vulnerability <cit.>. For each task, to make a more comprehensive assessment of the impact of personality data generated by ChatGPT, we first adopt ChatGPT_CoT and fine-tuned RoBERTa to generate the corresponding Big-Five personality based on given text respectively. We then use a basic prompting to elicit the task-related ability (i.e., sentiment classification ability and stress prediction ability) of ChatGPT. Finally, we modify the basic prompting by incorporating different Big-Five personalities and observe the task-related ability of ChatGPT with different modified basic promptings.
To be specific, for sentiment classification task, we adopt a subset of Yelp-2 dataset <cit.> for conducting experiments. The reason for not utilizing the complete Yelp-2 dataset is to take into account the cost of using ChatGPT's API. Concretely, we randomly select 500 positive samples and 500 negative samples from the testing set of Yelp-2 dataset to construct the subset. While for stress prediction task, we choose Dreaddit dataset, which consists of 715 samples (369 positive samples and 346 negative samples) in its testing set. Specifically, considering that the texts in the PAN dataset, Yelp-2 dataset, and Dreaddit dataset are all web posts, we use fine-tuned RoBERTa trained on PAN dataset to generate personality data. Besides, since both tasks are binary classification tasks, we adopt Accuracy (the higher the better) as the evaluation metric. In addition, the basic promptings used for sentiment classification task and stress prediction task are proposed by <cit.> and <cit.>. Please refer to Table <ref> for the details of the unmodified/modified basic promptings.
The experimental results are illustrated in Figure <ref>. Note that, ChatGPT_basic represents ChatGPT with the basic prompting, while ChatGPT_basic_PC and ChatGPT_basic_PR denotes ChatGPT with the modified basic promptings, which incorporates the personality data generated by ChatGPT_CoT and fine-tuned RoBERTa respectively. It can be observed that after incorporating the personality data predicted by ChatGPT_CoT, there is an improvement in ChatGPT's performance on both sentiment classification task and stress prediction task. For example, ChatGPT_basic_PC increases its classification accuracy from 96.6% to 97.6% on sentiment classification task when compared to ChatGPT_basic. While for stress prediction task, ChatGPT_basic_PC increases its classification accuracy from 71.3% to 73.0% when compared to ChatGPT_basic. This proves the effectiveness of the personality data generated by ChatGPT_CoT. With an understanding of individuals' Big-Five personalities, ChatGPT can analyze their sentiment expression and stress condition in a more personalized manner. Another interesting finding is that the personality data generated by fine-tuned RoBERTa can help improve the performance of ChatGPT in sentiment classification tasks, but it actually decreases ChatGPT's performance in stress prediction task. We believe that the possible reason for this is that fine-tuned RoBERTa trained on PAN dataset does not generalize well, which results in the poor performance of personality prediction on Dreaddit dataset. In contrast, ChatGPT relies solely on zero-shot CoT prompting to elicit its personality prediction ability and does not require training data, thus exhibiting stronger generalization performance on different datasets.
§ CONCLUSION AND FUTURE DIRECTIONS
In this work, we evaluate the personality recognition ability of ChatGPT with different prompting strategies, and compare its performance with RNN, fine-tuned RoBERTa, and corresponding SOTA model on two representative text-based personality identification datasets. With the elicitation of zero-shot CoT prompting, ChatGPT exhibits impressive personality recognition ability and has strong interpretability for its prediction results. In addition, we find that guiding ChatGPT to analyze text at a specified level helps improve its ability to predict personality, which proves the effectiveness of level-oriented prompting strategy. Moreover, we discover that ChatGPT exhibits unfairness to some sensitive demographic attributes, leading to unfair treatment of some specific groups when predicting personality. Besides, we apply the personality data inferred by ChatGPT in other downstream tasks and achieve performance improvement to some extent. This proves that ChatGPT's personality prediction ability is effective and has high generalization performance.
As for future work, on the one hand, we would like to apply level-oriented prompting strategy to more NLP tasks for observing its effectiveness in mining text information. On the other hand, with the continuous emergence of various LLMs, we are interested in exploring the construction of domain-specific LLMs based on psychological data in order to enhance the personality recognition ability of LLMs.
§ ACKNOWLEDGMENT
This work is funded by Science and Technology Commission of Shanghai Municipality, China (under project No. 21511100302), National Natural Science Foundation of China (under project No. 61907016), Natural Science Foundation of Shanghai (under project No. 22ZR1419000), the Research Project of Changning District Science and Technology Committee (under project No. CNKW2022Y37), and the Medical Master's and Doctoral Innovation Talent Base Project of Changning District (under project No. RCJD2022S07). In addition, it is also supported by The Research Project of Shanghai Science and Technology Commission (20dz2260300) and The Fundamental Research Funds for the Central Universities.
§.§ CRediT Authorship Contribution Statement
Yu Ji: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing-Original Draft, Writing-Review and Editing. Wen Wu: Conceptualization, Methodology, Formal analysis, Investigation, Writing-Original Draft, Writing-Review and Editing, Supervision. Hong Zheng: Writing-Review and Editing. Yi Hu: Supervision, Writing-Review and Editing. Xi Chen: Writing-Review and Editing. Liang He: Supervision, Writing-Review and Editing.
§.§ Ethical Approval
Not applicable.
§.§ Data Availability
Data will be made available on request.
§.§ Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
unsrt
|
http://arxiv.org/abs/2307.04269v1 | 20230709215514 | Big-Bang Nucleosynthesis within the Scale Invariant Vacuum Paradigm | [
"V. G. Gueorguiev",
"A. Maeder"
] | nucl-th | [
"nucl-th"
] |
Ronin Institute, Montclair, NJ, USA
Institute for Advanced Physical Studies, Sofia, Bulgaria
Geneva Observatory, University of Geneva, Switzerland
The Scale Invariant Vacuum (SIV) paradigm is applied to the Big-Bang Nucleosynthesis
using the known analytic expressions for the expansion factor a and
the plasma temperature T as functions of the SIV time τ since the Big-Bang
when a(τ=0)=0. The results are compared to the known standard
BBNS model as calculated with the PRIMAT code. Potential SIV-guided deviations from the
local statistical equilibrium are explored. Overall, we find that a smaller than usual baryon
content and a non-zero dark matter content, both reduced by a factor of three to five,
result in a reproduction of the light-element abundances compatible with the standard one.
Keywords:
Cosmology: theory – primordial nucleosynthesis – dark matter
Big-Bang Nucleosynthesis within
the Scale Invariant Vacuum Paradigm
A. Maeder
August 12, 2023
====================================================================
§ INTRODUCTION
It is often ignored that when the cosmological constant Λ is assumed to be equal to zero,
the equations of General Relativity are scale invariant,
which is also a property present in Maxwell equations of electrodynamics.
However, for a non-zero Λ, the field equations of the gravitation no longer show the property of scale invariance.
A fact, which as discussed by <cit.>, was one of the reasons of Einstein's disenchantment with the cosmological constant.
It is thus of interest to examine at what conditions the scale invariant properties of General Relativity may be restored,
since current cosmological observations support a positive cosmological constant.
A theoretical framework has been developed by <cit.> and <cit.>,
in the so-called co-tensorial calculus based on Weyl's Integrable Geometry (WIG), building on the original idea by <cit.>.
It offers a consistent basis to account for the properties of scale invariance of gravitation via a scale factor λ,
as also illustrated by several properties studied by <cit.>.
Scale invariant derivatives, modified affine connexions, modified Ricci tensor and curvatures can be obtained leading to a general scale
invariant field equation. Dirac and Canuto et al. have expressed an action principle in the scale invariant framework,
with a matter Lagrangian, as a coscalar of power n=-4 (varying like λ^-4).
By considering the variations of this action, they also obtain the generalization of the Einstein field equation.
This is Equation (7) in <cit.> from which the scale invariant cosmological equations are derived.
In the Weyl’s Integrable Geometry (WIG), the scale factor is undetermined without any other constraints
as shown by Dirac and Canuto et al. Thus, these authors were fixing the scale factor by some external
considerations based on the so-called Large Number Hypothesis; this hypothesis, however, is often disputed <cit.>.
Thus, it seems appropriate to explore other conditions for setting the gauge.
The proposition was made to fix the gauge factor by simply assuming the scale invariance of the empty space <cit.>.
This means that the properties of the empty space should not change under a contraction or dilatation of the space-time.
Indeed, we note, as shown by <cit.>, that the current equation of the vacuum
p_vac= - ϱ_vac c^2 already implies that
ϱ_vac should remain constant
“if a volume of vacuum is adiabatically compressed or expanded”.
On this basis, the cosmological equations derived by <cit.> were simplified <cit.>.
A number of cosmological tests were performed, with positive results. These equations were
then shown to have rather simple analytical solutions <cit.> for models of a matter dominated Universe with a zero curvature.
In order to express the motions of free particles, a geodesic equation was obtained <cit.>
from a minimum action in the Weyl's integrable geometry (WIG). In the weak field approximation,
the geodesic equation leads to a modification of the Newton equation <cit.> where it contains
a (currently very small) additional acceleration term proportional to the velocity of the particles.
This equation was applied to study the internal motions of clusters of galaxies,
the flat rotation curves of spiral galaxies and
the age increase of the “vertical” velocity dispersion of stars in galaxies <cit.>.
The interesting result was that the observational properties of these various systems
could be accounted for without resorting to the current hypothesis of dark matter,
and the same for the radial acceleration relation (RAR) of galaxies <cit.>.
The growth of the density fluctuations in the early Universe was also studied by <cit.>
who showed that dark matter is not needed, within the Scale-Invariant Vacuum (SIV) paradigm,
to achieve the growth of the density fluctuations to the currently observed inhomogeneities <cit.>.
Such studies suggested a connection between the Scale-Invariant Vacuum (SIV) theory and
the origin of Dark Matter and Dark Energy <cit.>.
This was further reenforced by the study of scale-invariant dynamics of galaxies,
MOND, dark matter, and the dwarf spheroidals <cit.>.
Furthermore, it was shown that the SIV framework naturally relates the scale invariance, horizons, and inflation,
while providing also a graceful exit from inflation <cit.>.
Summary of the main results was compiled and presented at the conference on
Alternative Gravities and Fundamental Cosmology (AlteCosmoFun'21) and published
in the journal Universe <cit.>.
The above successes naturally invite further studies of the applicability of the SIV paradigm to other well-known phenomena.
A study of the Cosmic Microwave Background <cit.> is one such phenomenon along with
the Big-Bang nucleosynthesis (BBNS), which could be understood without complicated numerical simulations
<cit.>. The BBNS phenomenon is very relevant for us
since the SIV possesses analytic expressions suitable for first exploration of such a problem <cit.>.
This could be very useful as an approach to study BBNS within the SIV paradigm given
by the recent Mathematica code PRIMAT <cit.>.
The objective of the present work is to apply the analytic expressions derived within the SIV paradigm <cit.>
to the BBNS via the use of the PRIMAT code and to see how well the SIV will perform compared to the standard BBNS model.
For this purpose, in Section <ref>, we provide a summary of the background needed by the reader to understand
the framework to be utilized. In Section <ref> are discussed the main methods, similarities and difference of various relevant functions,
and the equations that need to be employed within the computational process. In Section <ref> we present our main results and
explain the various model choices described in the tables shown.
Finally, summary and conclusions are presented in the Section <ref>.
§ BACKGROUND FRAMEWORK
We start this section with a summary of the commonly used fundamental
physical constants and expressions relevant for the description of
the early Universe and their relation to the observations during the
current epoch:
H_0 = h H_100 , h=70/100 , H_100=100 km/s/Mpc=3.2408×10^-18 s^-1 ,
ρ_c0 = 3H_0^2/(8π G) , G=6.6743×10^-11 m^3/kg/s^2 , τ_0=4.355×10^17s ,
T_0 = 2.7255 K , a_BB=π^2/(15 ħ^3 c^5)=2.31674×10^59 s^2/m^5/J^3 ,
k_B = 1.3806×10^-23J/K , ρ_γ0=a_BB(k_B T_0)^4/c^2=4.6485×10^-34 g/cm^3 ,
N_ν = 3 , K_0=1+7/8(4/11)^4/3N_ν=1.6813 , ρ_γ0 h^2/ρ_c0=2.47476×10^-5.
Here, H_0 is the Hubble constant expressed via the reduced dimensionless
Hubble parameter h and the scale fixing formal constant H_100.
The usual current critical density based on H_0 is ρ_c0, G is Newton's gravitational constant,
and τ_0 is the current age of the Universe
(13.8 Gyr with 365.25 days in a year as in <cit.>).
Some minor differences from ref. <cit.> are to be noted here:
the choice h=0.7 is used in ref. <cit.>, while PRIMAT uses Planck's
CMB value of h=0.677 <cit.>; the pre-factor defining v_eq in Eq. (27) of <cit.>
is 2.4741×10^-5 rather than the above value for ρ_γ0 h^2/ρ_c0;
furthermore, the current value of the CMB temperature is T_0=2.726 K in <cit.>,
and a=a_BB k_B^4 in Eq. (27) of ref. <cit.>.
PRIMAT <cit.> uses units such that c=1, k_B=1, and ħ=1 with Planck's
CMB value of h=67.66/100 along with the number of effective neutrino flavors as N_ν^eff=3.01,
while Ω_m=0.31 and Ω_b=0.05 correspondingly.
The relevant SIV analytic expressions based on <cit.> are summarized
in the next set of formulas where the prefix “A” is used to indicate
that the subsequent equation number refers to the corresponding original equation
in ref. <cit.>:
(A27) v_eq=K_0 ρ_γ0/(Ω_mρ_c0) , c_2=(v_eq^2+√(v_eq^4+C_rel))/t_eq^2 , (A21)
(A20) C_m=4Ω_m/(1-Ω_m)^2, C_rel=C_m v_eq ,
(A25) t_eq=2^-2/3(v_eq^3/2(1-Ω_m)+√(v_eq^3(1-Ω_m)^2+4Ω_m))^2/3,
(A29) t_ in=C_rel^1/4/c_2^1/2 , Δt=(t_0-t_ in) τ/τ_0 , (A30)
(A33) a(Δt)=√(2c_2t_ in^3 Δt) , τ(T)=T_0^2τ_0/(2(t_0-t_ in)√(C_rel)) · 1/T^2, (A39)
(A37) ρ_r(Δt)=ρ_γ0K_0/(4C_relΔt^2) , ρ_m(Δt)=ρ_m0c_2^1/4/(C_rel^7/8(2Δt)^3/2).
where in (39) of <cit.> one has 1.272×10^9 instead of T_0√(τ_0/2)=1.271×10^9 here.
The quantities v_eq and c_2 are integration constants for the SIV modified FLRW equation and are related to the
matter energy content C_m and the radiation energy content C_rel, while t_eq is the moment of matter–radiation
equality given in the SIV t–time such that the current time satisfies t_0=1. The moment of the Big-Bang (BB),
when a=0, is denoted by t_ in∈[0,1), while Δt is the time since the BB.
Δt is related via (A30) to the conventional time τ=Δτ since the BB in seconds,
where τ_0 is the current age of the Universe in seconds and the BB is at τ=0.
The expansion factor a(τ), also known as the RW spatial scale factor,
is given by substituting (A30) in (A33),
while (A39) gives the relationship between age τ and temperature T of the radiation.
The last two expressions are the energy-densities for radiation and matter within the SIV.
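For concreteness, the relations (A20)–(A39) can be transcribed into a few lines of Python. The sketch below is only an illustration of the analytic background (the helper name, its return structure, and the default constants are our choices and are not part of PRIMAT); the numerical constants are those listed at the beginning of this section.

import numpy as np

T0   = 2.7255          # CMB temperature today [K]
tau0 = 4.355e17        # current age of the Universe [s]
h    = 0.70
K0   = 1.0 + 7.0/8.0 * (4.0/11.0)**(4.0/3.0) * 3.0        # = 1.6813 for N_nu = 3
rho_g0_over_rho_c0 = 2.47476e-5 / h**2                    # rho_gamma0 / rho_c0

def siv_background(Omega_m, t_0=1.0):
    """Integration constants and analytic background functions of the SIV model."""
    v_eq  = K0 * rho_g0_over_rho_c0 / Omega_m                                   # (A27)
    C_m   = 4.0 * Omega_m / (1.0 - Omega_m)**2                                  # (A20)
    C_rel = C_m * v_eq
    t_eq  = 2.0**(-2.0/3.0) * (v_eq**1.5 * (1.0 - Omega_m)
            + np.sqrt(v_eq**3 * (1.0 - Omega_m)**2 + 4.0 * Omega_m))**(2.0/3.0) # (A25)
    c_2   = (v_eq**2 + np.sqrt(v_eq**4 + C_rel)) / t_eq**2                      # (A21)
    t_in  = C_rel**0.25 / np.sqrt(c_2)                                          # (A29)
    lam   = 1.0 / t_in               # lambda is practically constant (=1/t_in) during BBNS

    def tau_of_T(T):                 # (A39): age since the BB [s] at radiation temperature T [K]
        return T0**2 * tau0 / (2.0 * (t_0 - t_in) * np.sqrt(C_rel)) / T**2

    def a_of_tau(tau):               # (A30) + (A33): expansion factor versus age [s]
        dt = (t_0 - t_in) * tau / tau0
        return np.sqrt(2.0 * c_2 * t_in**3 * dt)

    def a_of_T(T):                   # combined form: a(T) = T0*sqrt(t_in)/T = T0/(T*lambda^(1/2))
        return a_of_tau(tau_of_T(T))

    return dict(v_eq=v_eq, C_rel=C_rel, t_eq=t_eq, c_2=c_2, t_in=t_in, lam=lam,
                tau_of_T=tau_of_T, a_of_tau=a_of_tau, a_of_T=a_of_T)

siv = siv_background(Omega_m=0.30)
print(siv["t_in"], siv["lam"], siv["tau_of_T"](1e9))   # t_in, lambda = 1/t_in, and the age [s] at T = 1 GK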
In PRIMAT, the thermonuclear reaction equations that describe the rate of change
of the abundances of the various nuclear species are defined via in-scalar variables
Y_i=n_i/n_b based on the number density of the nucleus i relative to
the total baryon number density n_b.
For PRIMAT based expressions, we will use prefix P followed by the corresponding equation number in ref. <cit.>.
These equation numbers may differ by ±1 between the arXiv version and the published version of the paper.
The usual reaction rates for the production and reduction of a specific nucleus
are re-expressed from the traditional form, i.e. a two-body reaction i+j↔ k+l (P131),
into the new form (P136), and also into the more general case of more bodies (P138),
but the overall co-tensor structure stays the same since all Γ-s are now in units of inverse seconds:
(P131) ṅ_i⊃ n_kn_lγ_kl→ij-n_in_jγ_ij→kl , γ_ij→kl=⟨σ v⟩_ij→kl , (P132)
(P136) Ẏ_i⊃ Y_kY_lΓ_kl→ij-Y_iY_jΓ_ij→kl , Γ_ij→kl=n_bγ_ij→kl . (P137)
Here, the reaction rate γ_j…→i… is in units cm^3/s but when multiplied by
the appropriate n_b factor it results in Γ_j…→i… being in inverse seconds.
The forward γ_j…→i… and the reverse reaction rates
γ̅_j…→i…=γ_i…→j…
are related due to the assumption of a local thermodynamic equilibrium;
thus, there is a simple three-parameter factor
containing the reaction constants α,β, γ and
expressed using temperature T_9 in GK units,
γ̅_j…→i…=γ_i…→j…
=γ_j…→i…×α T_9^β exp(γ/T_9)
(see (P141) and (P142) for details).
The constant α is an overall reaction constant related to the
stoichiometric coefficients of the reaction species and their spins,
while γ is a factor in a Boltzmann-like exponent and
depends on the reaction Q-factor over a temperature constant of 1 GK;
as such, γ/T_9 ∼ Q/T is an in-scalar quantity if mass and thermal energy
have the same λ-scaling.
Thus, the only co-scalar of non-zero power is related to the constant β
since it is coming from a factor of the type m× T (see (P141)).
This means that if energy is scaling as m→ m'=mλ^n_m
and k_BT→ k_BT'=k_BTλ^n_T,
then the effective T_9 re-scaling in the reverse reaction factor T_9^β
should be scaling as λ^(n_m+n_T).
For most of our study we will assume that the scaling powers of the rest-mass energy
and thermal energy are the same[
It is possible to argue for different scaling powers of the
radiation and rest-mass energies based on the different conservation laws
for matter (w=0) and radiation (w=1/3) based on the SIV conserved quantity
ρ_w a^3(1+w)λ^1+3w=ρ_0 within SIV.
In doing so, one may induce a deviation from the usual energy conservation.
To avoid such deviation one will have to use the
appropriate equation of state w=p/ρ to determine the unique λ-scaling
for the energy.], that is, n_m=n_T, otherwise it may result in apparent deviations
from the law of energy conservation. Furthermore, we also adopt the
PRIMAT view that one can choose the system of units so that k_B=1
and therefore temperature is directly measuring thermal energy[
One can consider scaling for k_B, that is,
k_B→ k'_B=k_Bλ^n_k_B.
However, since k_B is a conversion constant from temperature to energy (erg/K),
it is tied to the choice of units, which, once made, should not be subject to change.
Thus, choosing k_B=1 fixes/eliminates this λ-scaling just like the choice c=1
fixes the time and space units.
However, one has to keep in mind whether the energy is related to thermal energy or rest-mass energy.],
thus there is no question of how the constant k_B scales with
λ since it is just a conversion constant that an observer
can choose to be 1.
§ METHOD
In PRIMAT, one first finds the expansion factor a(T), then
builds the energy density and its corrections as functions of a
and/or T; finally, the time variable τ is obtained
from the FLRW equation[
This is a first order ordinary differential equation that needs
proper initial conditions. In this case it is set to be
τ(a(T_i))=1/(2H(a(T_i))) when integrating
dτ/da=1/(a H(a)) for τ(a), where T_i≈10^12K is the
initial temperature to be considered for the BBNS processes.]
ȧ/a=H=√(8/3π Gρ). In
our approach to study BBNS within SIV, we bypass the numerical solution
of the FLRW equation in favor of using the analytic SIV
functions above. In particular, the functions used in the SIV-PRIMAT
are: a(T)=a(τ(T)) based on τ(T) and a(τ) above,
while the inverse function T(a) is computed within PRIMAT.
[
The PRIMAT (numerical) inverse function process is validated by
comparing the numerically inverted function to the known SIV analytic function T(a).
We use the same validation approach as done by the original PRIMAT code for other similar cases.]
This way densities are not needed for FLRW integration to obtain
τ(a) and a(τ).
As usual, the expansion factor a is
inversely proportional to the temperature T. As can be seen from (A39),
(A30), and (A33), a(T)=a(τ(T))= const/T, where the constant is
T_0 C_ rel^-1/4√(c_2t_ in^3) which by
(A29) becomes T_0√(t_ in)=T_0λ^-1/2. This
constant depends only on Ω_m and the CMB temperature T_0.
During the BBNS λ is practically constant since it is
very close to 1/t_ in. However, λ generally evolves
over the history of the Universe towards the
value λ_0=1 at the current epoch. In the case of Ω_m→1
one also has λ→1. Either way, one obtains a→ T_0/T→1
towards the current epoch. In PRIMAT, for the decoupled-neutrino scenario,
the expansion factor a(T) is also of the form T_0/T but has a
distortion factor of 𝒮(T)^-1/3 due to
neutrino physics and entropy injection by the electron-positron annihilation
process around T∼1GK. The distortion factor 𝒮(T)
becomes 1 for T≪1GK and therefore recovers the usual a(T)=T_0/T
behavior in the low-temperature regime (see Fig. <ref> for details).
Note that if one interprets Ω_m and Ω_b as the current epoch (now)
values, then Ω_m=1 is not a realistic limit here given the known current values;
however, since the BBNS takes place in the radiation epoch, far from the matter–radiation equality,
where radiation dominates, Ω_m=1 for the total matter and radiation seems reasonable;
thus, we consider Ω_m=1 along with Ω_m=0.3 and Ω_m=Ω_b=0.05
to illustrate the trend due to Ω_m in the graphs.
To define the time variable, PRIMAT solves the FLRW equation
using the relevant energy density. Within the SIV we have analytic
expressions for the SIV time. In the standard units
(age of the Universe is 13.8 Gyr), the SIV analytic form is
τ(a)=const× a^2 and from (A30) and (A33) the constant is:
τ(a)/a^2 = τ_0/(2c_2 t_ in^3(t_0-t_ in))
=τ_0/(2C_rel^1/2 t_ in(t_0-t_ in))
= τ_0(1-Ω_m)/(4√(Ω_m v_eq) t_ in(t_0-t_ in))
=τ_0 f(Ω_m).
This constant depends only on Ω_m and for
Ω_m→1 goes to 3τ_0/(4√( v_eq(Ω_m=1))).
As seen in Fig. <ref>, during the relevant interval of time,
there is a clear quadratic relation τ(a) ∼ a^2,
which becomes obvious on the τ(a)/a^2 plot. In the limit
Ω_m→1, one has C_m, C_rel, and c_2→∞;
in this respect, t_eq is sandwiched between t_ in and
t_0=1, and based on (A25) t_eq→1. Notice that the
PRIMAT τ(a)/a^2 is larger in the initial stages of the BBNS
and then becomes smaller than the SIV τ(a)/a^2/λ, but
it is of about the same order of magnitude.
The λ–scaling is due to the corresponding λ^2–scaling
of the 8π Gρ within the SIV and the fact that the PRIMAT
time is based on integrating the FLRW equation ȧ/a=√(8/3π Gρ).
The time keeping between PRIMAT and SIV is non-uniform (as seen in Fig. <ref>);
this has an impact on the overall time-related observables, such as the lifetimes of processes and particles,
e.g., the neutron lifetime and the nuclear reaction rates within the SIV framework.
The details of the PRIMAT τ(a)/a^2 variations
could be understood to be due to the high-temperature behavior of the relativistic density
containing correction terms δρ(T) and also via the dependence on a(T),
which in PRIMAT has the distortion factor 𝒮(T)
that is missing in the SIV model.
As can be seen, τ(a)/a^2 decreases with Ω_m
but always stays above the PRIMAT value.
The gap between Ω_m=1 (λ=1)
and the PRIMAT τ(a) can be resolved by the
λ–scaling of 8π Gρ as seen in the bottom curves.
This means that if one uses the SIV a(T) instead of the default PRIMAT functions,
but utilizes the PRIMAT densities and the usual FLRW equation to define τ(a),
then one would obtain a curve similar to the one displayed but shifted
by a factor λ towards the corresponding SIV τ/a^2
line shown at Ω_m=0.3. The reason for this is the
factor λ in the SIV T∼λ^-1/2, which propagates
through the density ρ∼ T^4 to a λ^-2 factor, which
becomes λ^-1 due to the square root in ȧ/a=√(8/3π Gρ),
ultimately resulting in dτ'=λ dτ, which is consistent
with the SIV view about its effects on time and space intervals.
Below are the densities used in PRIMAT for the case of decoupled neutrinos:
ρ_γ = a_BB(k_BT)^4/c^2 (1+δρ(T)+7/8 N_ν(⟨ T_ν⟩/T)^4)
≡ T^4 ρ̄_γ(T),
ρ_m = n_b0 m_b0/(c^2 a^3) (1+Ω_c0/Ω_b0+3/2 k_BT/m_b0)
= a_0^3 ρ_m0(T)/a^3.
In the high-temperature regime, ρ_tot∼ a_BB(k_BT)^4/c^2,
where the proportionality constant is (1+δρ_max+7N_ν/8),
since at high temperature the neutrinos are in thermal equilibrium with
the radiation; that is, ⟨ T_ν⟩ =T. In
the low-temperature regime this constant is K_0, since the plasma corrections
δρ become negligible
and ⟨ T_ν⟩ /T→(4/11)^1/3;
see PRIMAT (P32). This is demonstrated in Fig. <ref>.
For the rate of change of the nuclear species,
we have to consider that the right-hand side of equation (P136) is in SIV time τ,
while the original PRIMAT reaction rates Γ'_j…→i…
are in the Einstein GR frame
(we use a prime to indicate this); these rates therefore need to be expressed in the SIV frame.
Based on (P136), we have for the transition from the EGR to the WIG (SIV) frame:
dY_i/dτ'=Y_kY_lΓ'_kl→ij-Y_iY_jΓ'_ij→kl, ⇒1/λdY_i/dτ=Y_kY_lΓ'_kl→ij-Y_iY_jΓ'_ij→kl.
Thus, Γ_kl→ij in the SIV frame is related to the measured EGR laboratory rates
Γ'_kl→ij via a simple rescale factor λ; that is,
Γ_kl→ij=λΓ'_kl→ij, which is
based on the relationship[Even though
dτ' in EGR is in seconds and so is the WIG time interval dτ,
these two time units (seconds) are not necessarily the same. If they were the same,
then the EGR and WIG frames would coincide since this would imply λ=1.
Furthermore, the relation dτ'=λ dτ
could be viewed as a consequence of the λ scaling of ρ and the definition of
τ via the solution of the FLRW equation as it was discussed in connection to the ρ/T^4 relationship.]
dτ→ dτ'=λ dτ;
thus, the original PRIMAT rates need to be rescaled as
Γ'_j…→i…→ Γ_j…→i…
=λ×Γ'_j…→i….
This is accomplished by applying a Forward Rescale Factor (FRF) to all the reaction rates.
That is, FRF=λ for SIV-guided studies[
One can argue that FRF should be 1 because the reaction
cross-sections σ should not be modified since the sizes of the nuclei are not governed by gravitation.
Such an argument ignores the possibility that ħ may not be an in-scalar object.
Nevertheless, we can carry on and consider
ṅ_i⊃ n_i n_j ⟨σ v⟩_ij→kl based on (P131) and (P132).
Thus, because v is an in-scalar, there is no change on the RHS of the equation.
However, our argument was about the change of the time parametrization
on the LHS, which takes into account dτ→ dτ'=λ dτ.
So, the λ in the denominator of the LHS becomes an FRF scale-factor on the RHS
when one uses a different time parametrization in switching from the EGR to the WIG (SIV) formulation.].
Furthermore, the T_9 argument of the factor T_9^β in the reverse reaction rates
may have to be rescaled as well with the appropriate λ-factor
for mass and thermal energy scaling.
That is, for SIV guided studies,
mT_9→ mT_9×λ^n_m+n_T.
In our results section, we refer to the scale-factor λ^n_m+n_T
as mŤ scale-factor λ^2n when n_m=n_T=n.
However, the two scale-factors powers n_m and n_T may not be the same,
but we have deferred the discussion on this topic to the Appendix <ref>.
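Schematically, this bookkeeping can be written as follows. The sketch is only an illustration of the scaling logic, not PRIMAT code: the function and argument names are ours, the per-reaction constants α, β, γ are assumed to come from the rate library, and the Boltzmann exponent is left untouched, corresponding to the Q/Ť=1 choice used in the main-text runs.

import numpy as np

def siv_rates(Gamma_fwd_lab, alpha, beta, gammaQ, T9, lam, n_m=1.0, n_T=1.0):
    """EGR laboratory forward rate -> SIV-frame forward and reverse rates."""
    Gamma_fwd = lam * Gamma_fwd_lab               # FRF = lambda, from d tau' = lambda d tau
    T9_eff = T9 * lam**(n_m + n_T)                # "mT-check" rescaling of the T9**beta term only
    Gamma_rev = Gamma_fwd * alpha * T9_eff**beta * np.exp(gammaQ / T9)   # Q/T-check factor set to 1
    return Gamma_fwd, Gamma_rev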
In this paragraph we drop the primes since the discussion is about the standard BBNS.
The validity of Ta=T_0a_0 is affected during the standard BBNS by the e^+e^- annihilations,
which is captured by the distortion factor 𝒮(T) (see P31);
that is, a_0T_0=aT𝒮^1/3(T). Furthermore,
since n_b∝ a^-3 and n_γ∝ T^3,
this also affects the baryon to photon in-scalar ratio
η=n_b/n_γ=η_0𝒮^1/3(T),
where η_0=6.0913×10^-10 is the current ratio of baryons to photons;
it is often written as η_10=η_0×10^10 which removes the factor 10^-10 in η_0.
Finally, we would like to point out that if the SIV paradigm is valid, and since during the BBNS λ
is practically constant, then one could include the effect of the e^+e^- annihilation via the distortion factor
𝒮(T) as an equivalent effect within the SIV background.
To do so, we recognize that for EGR ↔ SIV with a'=aλ and T'=Tλ^-1/2,
one has a_0T_0(𝒮'(T'))^-1/3=T'a'=Taλ^1/2 and therefore the new
ã(T)=a_0T_0/(Tλ^1/2)/𝒮^1/3(T)=a_SIV(T)/𝒮^1/3(T),
where 𝒮(T) is defined according to the discussion in the Appendix <ref>
via the known EGR laboratory function 𝒮'(T'), that is,
𝒮(T)=𝒮'(T'(T))=𝒮'(Tλ^-1/2).
Furthermore, the new ã(T) is also equivalent to a'(T')/λ,
as it should be based on a'=aλ.
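In code form, this amounts to a one-line wrapper (a sketch; S_lab stands for the laboratory-frame distortion function 𝒮'(T'), e.g. as tabulated by PRIMAT):

def a_tilde(T, a_siv, S_lab, lam):
    """S(T)-distorted SIV scale factor: a_SIV(T) / S'(T*lambda**(-1/2))**(1/3)."""
    return a_siv(T) / S_lab(T * lam**(-0.5))**(1.0 / 3.0)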
As a measure of goodness of fit for the theory against the experimental data,
we use √(χ_ϵ^2), which is the 2-norm of
the deviation of theory (th) from observation (ob) with respect to
the experimental error ϵ.
√(χ_ϵ^2)=√((1/N)∑_i^N((y_i^(ob)-y_i^(th))/ϵ_i)^2).
A value less than one indicates that, on average, the theory
values are within the observational uncertainties.
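A minimal implementation of this measure reads (array names are illustrative):

import numpy as np

def sqrt_chi_eps2(y_obs, y_th, eps):
    """Root of the error-weighted mean-square deviation of theory from observation."""
    r = (np.asarray(y_obs) - np.asarray(y_th)) / np.asarray(eps)
    return np.sqrt(np.mean(r**2))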
§ RESULTS
In Table <ref> we have shown the values of a(T), τ(T), and
ρ(T) for PRIMAT when using standard cosmological parameters
for Ω_CDM=0.26 and Ω_b=0.05 along with
the corresponding values for the relevant SIV functions for the same
cosmological parameters[
We have used temperature T as the
control variable, which is customary;
however, an in-scalar quantity will be more appropriate
since there could be a λ-scaling for T within SIV.].
From Table <ref> and from Fig. <ref> we see that the two “clocks”
are irregular in the first few moments after the Big-Bang,
with the SIV time ticking about 1.008 times faster than the PRIMAT time;
however, at low temperatures they become synchronized
and shifted only by a few seconds.
This may be taken as a justification to use FRF=1 since
the 1.008 is practically 1.
The relevant element abundances are given in Table <ref>.
In the second column are shown the observational values,
while in the third column are the results of the PRIMAT code
when run with a small reaction network and decoupled neutrinos,
without QED dipole corrections[When the partially decoupled neutrinos
and QED dipole corrections are turned on there are minor insignificant changes
to the results that are not relevant to the current discussion.],
with standard Ω _b and Ω _m values.
The fourth column shows the SIV results for the same values of
Ω _b and Ω _m using the
analytic SIV functions a_SIV(T) and τ_SIV(T).
The results reveal under-production of
^4He, deuterium, and ^3He with significant
over-production of ^7Li. Abundances improve if we fit the
^4He and D/H by changing the values of
Ω _b and Ω _m as seen in the fifth column.
Now ^3He and ^7Li are compatible with the PRIMAT results,
but at much smaller values of the baryon and total matter content.
In this case the dark matter (DM) is less than 3× the baryon matter (BM),
unlike the usual case where the DM is more than 5× the BM.
In order to study the contribution of 𝒮(T) within SIV
we consider the last two runs (columns six and seven in Table <ref>).
The sixth column is based on the parameters in the fifth column.
Although not shown in the Table, if we use the PRIMAT values for Ω _b and Ω _m (based on column three)
instead of column five, then we obtain over-production of ^4He, significant under-production of deuterium, and
very high production of ^7Li - all further enhancing the trends of the results in column four.
So, the last two columns are for a_SIV(T) distorted by 𝒮(T)
which is equivalent to PRIMAT a̅(T) distorted by λ.
Such a choice of modification is relevant since
the SIV runs (columns four and five) do not include the
electron–positron annihilation and neutrino effects
encoded in the function 𝒮(T).
Adding the distortion function 𝒮(T)
to a_SIV(T), or equivalently modifying the PRIMAT a̅(T) by λ,
results in a slight increase of ^4He, under-production of deuterium and tritium,
and over-production of ^7Li (comparing column six to column five).
The next, seventh column, fit* is the best possible, but not perfect,
fit for ^4He and D/H and seems to require a significant mass content.
This is a simple (“naive”) SIV model without the utilization of the λ-modifications of the
various reaction parameters (FRF, mŤ, Q/Ť)
that have been discussed in the Appendix <ref>.
Thus, the failure of this fit* to achieve a perfect 2D fit (on ^4He and D/H) likely reflects the need for proper λ-scaling.
To check this, we have added the calculations discussed in the Appendix <ref>.
The last two columns use modified a(T) and therefore one has to rely on the numeric
integrations in PRIMAT for τ(a) and a(τ).
That is, we do not have, as far as we know,
analytic SIV solutions for τ(a) and a(τ) when a(T) is distorted;
thus, we instead use the PRIMAT numerical integration to obtain the relevant time variable τ(T).
The last column seems to be close to the default PRIMAT run.
That is because λ is close to 1, which is practically PRIMAT
since the last two columns are using the a_SIV(T)
augmented by the distortion factor of 𝒮(T)^-1/3
as indicated in column six, or equivalently this is PRIMAT
a(T) rescaled by λ since
a_SIV/𝒮^1/3=a̅(T)/λ,
where the a̅(T) is the PRIMAT a(T).
The need to have λ close to 1 is not an indicator of dark matter content, but rather
indicates the quality of the standard PRIMAT results, which allow only λ close to 1
as an augmentation; this nevertheless leads to a slight but important improvement in D/H,
as seen when comparing columns three and seven.
In order to study further the SIV guided modifications to the reverse reactions,
we study only the effects due to mŤ scaling
by utilizing the SIV analytic functions for a(τ) and τ(T)
as in the corresponding middle a_SIV columns in Table <ref>.
The relevant element abundances are given in Table <ref>.
The value of λ is set to 1/t_ in.
The last five columns were fitted on Ω _b and Ω _m
to reproduce ^4He and D/H since these are known
within 1% uncertainty. Given that we have chosen Q/Ť=1,
and due to the traditionally preferred energy scaling by λ^n_m with n_m=1,
the SIV scaling of the thermal energy k_B T
with λ (k_B=1) should be λ^+1.
However, the BBNS is in the radiation-dominated epoch, where
n_T=-1/2 is expected, as discussed in the Appendix <ref>;
therefore, we have also considered a few other scaling options
λ^n with 2n∈{±2,±1,0}.
We have also explored the mŤ=λ^±2 cases, but
it was not possible to find a perfect reproduction of ^4He and D/H for the
mŤ=λ^2 scaling. That is, we had a fit* problem. The minimum was at Ω_b=0.02075 and
Ω_m=0.0863 with λ=2.26 and √(χ_ϵ^2)=4.8.
On the other hand, mŤ=λ^-2 resulted in a reproduction of ^4He and D/H at
Ω_b=0.0149 and Ω_m=0.0809 with λ=2.31 and √(χ_ϵ^2)=7.5.
These cases are consistent with what is seen in Table <ref>, but there is no specific
justification for choosing such an mŤ=λ^±2 scaling of the T^β-term in the reverse reaction formulas.
From Table <ref> one can conclude that the SIV-guided modifications to the local statistical equilibrium,
implemented via the mŤ-scaling of the T^β-term in the reverse reactions,
that are consistent with the ^4He and D/H data are actually mŤ-scaling independent.
The overall result is a reduced baryon and dark matter content in general, but no significant
λ-scaling dependence.
§ SUMMARY AND CONCLUSIONS
The SIV analytic expressions for a(T) and τ(T) can be utilized to study the BBNS within the SIV paradigm.
The functional behavior is very similar to that of standard models such as PRIMAT, except during the very early universe,
where electron-positron annihilation and neutrino processes affect the a(T) function
(see Table <ref> and Fig. <ref>).
The distortion due to these effects encoded in the function 𝒮(T)
could be incorporated by considering the SIV paradigm
as a background state of the universe where the processes could take place.
It has been demonstrated that incorporation of 𝒮(T)
within the SIV paradigm results in an outcome compatible with the standard BBNS
(see Table <ref>); if one fits the observational data, the result is λ≈1
for the SIV parameter λ (see the last column of Table <ref>).
However, a pure SIV treatment results in Ω_b≈1% and less total matter,
either around Ω_m≈23%
when all the λ-scaling connections are utilized (see Table <ref>), or around
Ω_m≈6% without any λ-scaling factors
(see the fit column of Table <ref>).
The SIV paradigm suggests specific modifications to the reaction rates,
as well as the functional temperature dependences of these rates,
that need to be implemented to have consistency between the
Einstein GR frame and the WIG (SIV) frame.
In particular, the non-in-scalar factor T^β in the reverse reaction rates
may be affected the most by the SIV effects.
As shown in Table <ref>, we have studied a specific case of dependences and have seen that,
within the assumptions made, the SIV model requires about three times less baryon matter,
usually around Ω_b≈1.6%, and less total matter - around Ω_m≈6%.
The lower baryon matter content also leads to a lower baryon-to-photon ratio, η_10≈2,
within the SIV, which is about three times smaller than the standard value of η_10=6.14.
The results in Table <ref> indicate insensitivity to the specific
λ-scaling dependence of the mŤ-factor in the reverse reaction expressions.
Thus, one may have to explore the SIV-guided λ-scaling relations further, as done
for the last column in Table <ref>; however, this would require the
utilization of the numerical methods used by PRIMAT and as such would take us away from the
SIV analytic expressions explored in this paper, which provide a simple model for understanding the BBNS within the SIV paradigm.
Furthermore, it would take us further away from the accepted local statistical equilibrium and
may require the application of the reparametrization paradigm, which seems to result in SIV-like
equations but does not impose a specific form for λ <cit.>.
Our main conclusion is that the SIV paradigm provides a concurrent model of the BBNS
that is compatible with the description of ^4He, D/H, T/H, and ^7Li/H achieved in the standard BBNS.
It suffers from the same ^7Li problem as the standard BBNS, but it also suggests a possible
SIV-guided departure from local statistical equilibrium, which could be a fruitful direction to explore
towards the resolution of the ^7Li problem.
§ APPENDIX: EXPLORING THE SIV-GUIDED Λ-SCALING RELATIONS
As mentioned earlier, the two scale-factor powers n_m and n_T may not be the same,
since one can argue for different scaling powers of the
radiation and rest-mass energies based on the different conservation laws.
For example, in a matter dominated state with w=0 one has
ρ_m a^3λ=ρ_m0 with
m∝ρ_m a^3 R_0^3⇒ m→ m'_0=mλ,
while for radiation dominated epoch w=1/3 one has
ρ_r a^4λ^2=ρ_r0, then by using
ρ_r∝ T^4⇒ Taλ^1/2=T_0a_0 along with
a→ a'=aλ this gives T→ T'=Tλ^-1/2,
so that the usual T'a'=T_0a_0 holds. Thus, while mass scales as λ
when matter is dominating, then the thermal energy scales[
This scaling for radiation is consistent with the mass scaling by λ
since ρ_γ∝ T^4 then the total energy in a comoving 3D volume
will be E_γ=ρ_γ a^3R^3_0∝ T^4 a^3
=T'^4λ^4/2 a'^3λ^-3∝ E'_γ/λ;
that is, E_γλ=E'_γ just as mλ=m'.
This argument shows that there is no contradiction with the
law of energy conservation; that is, while the radiation (thermal energy)
has a different λ-scaling from the rest-mass energy,
when radiation is absorbed from a finite 3D-region of space the process
results in the correct energy scaling as for a system with a rest-mass energy,
which is also finite and localized. The key difference is the different λ-scaling
of a thermal radiation with a state label T in an infinite volume compared to
the λ-scaling of a 3D-localized rest-mass system with a state label m
but consistent upon absorption and emission of localized photons.]
as λ^-1/2 when radiation is dominating.
Thus, this is the correct thermal energy scaling during the BBNS!
Such λ-scalings of FRF, mŤ, and Q/Ť are easy to implement in our SIV study
of the BBNS, where λ is practically constant.
In doing so, one has to pull back the known functions; that is, for a function within the SIV frame
f(T) one has to define its value via the corresponding function f'(T') measured within the EGR laboratory frame.
This way one has f(T)=f'(T'(T))=f'(Tλ^n_T) when the two temperatures are related via T'=Tλ^n_T.
We will use this to define the λ-scalings for functions that depend on mŤ and Q/Ť.
In particular, since our control variable is T, we adjust only the corresponding scaling that comes along
with T and do not include any mass-related scaling, since the formulas for evaluating f'(T',m') already use
EGR laboratory-frame values for these quantities; that is, for functions containing m T,
the mŤ scaling must be by λ^-1/2, and those that depend on Q/T should be scaled by λ^+1/2.
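As a small illustration of this pull-back convention (the helper is ours, not part of PRIMAT):

def pull_back(f_lab, lam, n_T):
    """Return the SIV-frame function f(T) = f'(T * lam**n_T) for a laboratory-frame f'."""
    return lambda T: f_lab(T * lam**n_T)

# e.g. an mT-dependent factor is pulled back with n_T = -1/2, while a Q/T-dependent
# factor then effectively acquires a lam**(+1/2) rescaling, as stated above.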
The SIV runs shown in Table <ref> could be viewed as a “naive” SIV model because
we have not utilized any λ-modifications of the
various reaction parameters (FRF, mŤ, Q/Ť).
The results of using such modifications are presented in columns six and nine of Table <ref>,
while the other columns are the same as in Table <ref>.
The fit* here is the best possible, but not perfect, fit for ^4He and D/H and
seems to require a reduced total mass content relative to PRIMAT
but much more than in the previous case (column five).
The failure of this fit* (column six) to achieve a perfect fit to
^4He and D/H reflects the importance of the electron-positron annihilation process
accounted for by 𝒮(T).
We have already discussed the contribution of 𝒮(T) within SIV
as shown in Table <ref> which uses a simple (“naive”) SIV model without the utilization of the
λ-modifications of the various reaction parameters (FRF, mŤ, Q/Ť).
We already suggested that the second fit* failure to achieve a perfect 2D fit (on ^4He and D/H)
likely reflects the need to implement such λ-modifications.
To check this, we have performed the calculations shown in the last column in Table <ref>.
The closeness of Ω _m to 1 in the last two columns actually reflects the
need to have FRF close to 1.
This is because FRF is present in the thermonuclear reactions
as well as in the weak reactions, where the time scale is set naturally by the neutron lifetime.
Thus, in order not to change the weak reactions significantly,
which are related to the neutron lifetime, one has to have FRF close to 1.
Since FRF is expected to be equal to λ,
this leads to λ close to 1 as well. Thus, we kept FRF=1 in the runs for the two
columns shown that use a modified a(T), and therefore one has to rely on the numerical
integrations in PRIMAT for τ(a) and a(τ). That is because we do not have
analytic SIV solutions for τ(a) and a(τ) when a(T) is distorted;
thus, we do the numerical integration instead.
The last column seems to be close to the default PRIMAT run.
That is because λ is close to 1, which is practically PRIMAT
since the last three columns are using the a_SIV(T)
augmented by the distortion factor of 𝒮(T)^-1/3
as indicated in column six, or equivalently this is PRIMAT
a(T) rescaled by λ since
a_SIV/𝒮^1/3=a̅(T)/λ,
where the a̅(T) is the PRIMAT a(T).
The need to have λ close to 1 is not an indicator of dark matter content.
These results indicate that an unmodified FRF (=1) is preferred,
which pushes λ towards 1 when it is considered as a possible modification option,
as the fit in the last column indicates. For this fit we allowed the
λ-scaling for mŤ and Q/Ť to depend on λ as
stated in the table caption and discussed above, but we got back
to the PRIMAT case with a slightly smaller Ω _b and
Ω _m very close to 1, since this is an effective way of obtaining λ=1.
[Bondi(1990)]Bondi90 Bondi, H. 1990, in Modern Cosmology
in Retrospect, Eds. Bertotti, B., Balbinot, R., & Bergia, S.,Cambridge Univ. Press., 426 pp.
[Bouvier & Maeder(1978)]BouvierM78
Bouvier, P. & Maeder, A. 1978, , 54, 497
[Bronstein & Semendiaev(1974)]Bronstein74
Bronstein, L.N., Semendiaev, K.A. 1974, Aide-memoire de mathematiques,
Ed. Eyrolles, Paris, 935 p.
[Canuto et al.(1977)Canuto, Adams, Hsieh, & Tsiang]Canu77
Canuto, V., Adams, P. J., Hsieh, S.-H., & Tsiang, E. 1977, , 16, 1643
[Carroll et al.(1992)Carroll, Press, & Turner]Carr92
Carroll, S. M., Press, W. H., & Turner, E. L. 1992, , 30, 499
[Carter(1979)]Carter79
Carter, B. 1979, in ”Confrontation of cosmological theories with observational data”,
IAU Symp. 63, Reidel Publ. Co., Dordrecht, p. 291.
[Coles & Lucchin(2002)]Coles02
Coles, P., Lucchin, F. 2002, Cosmology. The Origin and Evolution of Cosmic Structure,
Wiley & Sons Ltd, 492 p.
[Dirac(1973)]Dirac73
Dirac, P. A. M. 1973, Proceedings of the Royal Society of London Series A,
333, 403
[Durrer(2008)]Durrer08
Durrer, R. 2008, The Cosmic Microwave Background, Cambridge Univ. Press, 401 p.
[Jesus(2018)]Jesus18 Jesus, J.F. 2018, arXiv:1712.00697
[Maeder(2017a)]Maeder17a Maeder, A. 2017a, , 834, 194
[Maeder(2017b)]Maeder17b Maeder, A. 2017b, , 847, 65
[Maeder(2017c)]Maeder17c Maeder, A. 2017c, , 849, 158
[Maeder(2018)]Maeder18 Maeder, A. 2018, arXiv:1804.04484
[Maeder & Bouvier (1979)]MBouvier79 Maeder, A., Bouvier, P. 1979, Astron. Astrophys., 73, 82
[Maeder & Gueorguiev(2019)]MaedGueor19
Maeder, A.; Gueorguiev, V.G. The growth of the density fluctuations in the scale-invariant vacuum theory.
Phys. Dark Univ. 2019, 25, 100315.
[Maeder & Gueorguiev(2020)]MaedGueor20a
Maeder, A.; Gueorguiev, V.G. The Scale-Invariant Vacuum (SIV) Theory: A Possible Origin of Dark Matter and Dark Energy.
Universe 2020, 6, 46.
[Maeder & Gueorguiev(2020)]MaedGueor20b
Maeder, A.; Gueorguiev, V.G. Scale-invariant dynamics of galaxies, MOND, dark matter, and the dwarf spheroidals.
MNRAS 2019, 492, 2698.
[Maeder & Gueorguiev(2021)]SIV-Inflation'21
Maeder, A.; Gueorguiev, V.G. Scale invariance, horizons, and inflation.
MNRAS 2021, 504, 4005.
[Gueorguiev & Maeder(2022)]univ8040213
Gueorguiev, V.G.; Maeder, A. The Scale Invariant Vacuum Paradigm: Main Results and Current Progress.
Universe 2022, 8, 213.
[Gueorguiev & Maeder(2021)]sym13030379
Gueorguiev, V.G.; Maeder, A. Geometric Justification of the Fundamental Interaction Fields for the Classical Long-Range Forces.
Symmetry 2021, 13, 379.
[Mukhanov(2004)]Mukhanov04 Mukhanov, V. 2004,
Intnl. J. Theoretical Physics, 43, 669
[Steigman(2007)]Steigman07 Steigman, G. 2007, Ann. Rev.
Nuclear and Particle Sci. 57, 463
[Weinberg(2008)]Weinberg08 Weinberg, S. 2008, Cosmology,
Oxford Univ. press, 593 p.
[Weyl(1923)]Weyl23
Weyl, H. 1923, Raum, Zeit, Materie. Vorlesungen über allgemeine
Relativitätstheorie. Re-edited by Springer Verlag, Berlin, 1970
[Maeder(2019)]Maeder18
Maeder, A. 2019, “Evolution of the early Universe in the scale invariant theory,” arXiv:1902.10115
[Pitrou et al.(2018)]PRIMAT
Pitrou, C., Coc, A., Uzan, J.-P., & Vangioni, E. 2018,
“Precision big bang nucleosynthesis with improved Helium-4 predictions,”
Physics Reports, 754, 1–66, arXiv:1801.08023
|
http://arxiv.org/abs/2307.06207v2 | 20230712145231 | Local Conditional Neural Fields for Versatile and Generalizable Large-Scale Reconstructions in Computational Imaging | [
"Hao Wang",
"Jiabei Zhu",
"Yunzhe Li",
"QianWan Yang",
"Lei Tian"
] | eess.IV | [
"eess.IV",
"cs.LG",
"physics.comp-ph",
"physics.optics"
] |
Deep learning has transformed computational imaging, but traditional pixel-based representations limit their ability to capture continuous, multiscale details of objects. Here we introduce a novel Local Conditional Neural Fields (LCNF) framework, leveraging a continuous implicit neural representation to address this limitation. LCNF enables flexible object representation and facilitates the reconstruction of multiscale information. We demonstrate the capabilities of LCNF in solving the highly ill-posed inverse problem in Fourier ptychographic microscopy (FPM) with multiplexed measurements, achieving robust, scalable, and generalizable large-scale phase retrieval. Unlike traditional neural fields frameworks, LCNF incorporates a local conditional representation that promotes model generalization, learning multiscale information, and efficient processing of large-scale imaging data. By combining an encoder and a decoder conditioned on a learned latent vector, LCNF achieves versatile continuous-domain super-resolution image reconstruction. We demonstrate accurate reconstruction of wide field-of-view, high-resolution phase images using only a few multiplexed measurements. LCNF robustly captures the continuous object priors and eliminates various phase artifacts, even when it is trained on imperfect datasets. The framework exhibits strong generalization, reconstructing diverse objects even with limited training data. Furthermore, LCNF can be trained on a physics simulator using natural images and successfully applied to experimental measurements on biological samples. Our results highlight the potential of LCNF for solving large-scale inverse problems in computational imaging, with broad applicability in various deep-learning-based techniques.
§ INTRODUCTION
Deep learning has revolutionized the field of computational imaging <cit.>, providing powerful solutions to enhance performance and address various challenges in areas such as phase retrieval <cit.>, digital holography <cit.>, diffraction tomography <cit.>, ghost imaging <cit.>, super-resolution imaging <cit.>, lightfield imaging <cit.>, lensless imaging <cit.>, and imaging through scattering media <cit.>. Computational imaging treats the image formation process as a two-step procedure: the object information is first physically encoded in the measurement through the imaging optics, and then the information is computationally reconstructed by solving an inverse problem. The effectiveness of deep learning in computational imaging lies in their ability to capture the underlying imaging model and exploit object priors, enabling robust solutions to ill-posed inverse problems <cit.>.
However, the most widely used reconstruction methods in computational imaging rely on discrete pixels to represent the objects. For instance, a Convolutional Neural Network (CNN) for computational imaging is typically trained on a fixed pixel or voxel grid <cit.>. This representation is inherently limited by the resolution and size of the grid and does not capture the continuous nature and multiscale details of the physical objects. Furthermore, the pixel grid representation poses challenges in scaling to process and store large-scale multi-dimensional computational imaging data <cit.>.
To overcome these limitations, we propose a novel deep learning framework called Local Conditional Neural Fields (LCNF) to solve the imaging inverse problem using a continuous-domain implicit neural representation that is both compact and highly generalizable. By utilizing a continuous representation of objects, the LCNF framework offers a more natural and flexible representation that can capture fine-grained details and reconstruct object features of varying scales. We showcase the unique capabilities of LCNF to solve the highly ill-posed inverse problem in Fourier ptychographic microscopy (FPM) with multiplexed measurements <cit.>, demonstrating robust, scalable, and generalizable large-scale phase retrieval.
The Neural Fields (NF) framework <cit.> has recently gained significant interest in computer vision for its ability to represent and render continuous 3D scenes <cit.>. Unlike traditional CNN structures, NF uses a coordinate-based representation, where spatial coordinates (e.g. (x,y)) are mapped to physical values (e.g. [0,1]) using a multi-layer perceptron (MLP). This unique characteristic of NF allows for the encoding of objects in a continuous representation, decoupled from a discrete grid. It enables on-demand synthesis of any part of the object by simply querying relevant coordinates across arbitrary dimensions and resolutions.
Several NF-based deep learning techniques have been introduced in computational imaging for solving inverse problems using continuous object representations <cit.>. However, these methods are limited by the high computational cost and limited generalization ability. They either require retraining a new NF network for each object reconstruction <cit.> or suffer from the limited representation power of the latent space learned only on the global scale <cit.>, restricting their ability to generalize to diverse objects.
Our proposed LCNF framework overcomes these limitations by leveraging a local conditional NF representation. The conditional representation embeds measurement-specific information into the latent space that promotes model generalization. Additionally, the local representation allows for the incorporation of multiscale information and enables efficient processing of large-scale imaging data. Together, LCNF enables highly scalable and generalizable deep learning-based image reconstructions.
A conceptual illustration of our proposed LCNF framework for FPM reconstruction is shown in Fig. <ref>(a). Building on the concept of conditional NF from prior work <cit.>, our framework utilizes a CNN-based encoder to learn measurement-specific information from a set of 2D multiplexed FPM measurements and encode them into a compact latent-space representation. In FPM, the phase information of an object point is spread across multiple pixels on the measured images due to light diffraction. The CNN-based encoder effectively extracts this information by utilizing its extended receptive field, condensing them into latent vectors. Next, an MLP decoder is employed to reconstruct the phase values of the object at specific locations based on the corresponding latent information.
Unlike the traditional NF framework <cit.> that perform a one-to-one mapping between a single coordinate to the corresponding object value, our decoder is conditioned on a learned latent vector that incorporates information across a region of the input images. This conditioning enables adaptation to different objects since each set of measurements is projected onto a distinct latent space representation by the CNN-based encoder. A crucial aspect of FPM reconstruction is achieving “super-resolution” reconstruction, surpassing the diffraction limit of the input measurements. To achieve this goal, our framework extracts “super-resolved” latent information beyond the “discrete” pixel grid in the measurement by incorporating the Local Implicit Image Function (LIIF) method <cit.> into the decoding process. By combining these components, our LCNF framework achieves versatile deep-learning-based continuous-domain super-resolution image reconstruction based on low-resolution measurements that is applicable to arbitrary objects with varying spatial scales and resolutions.
In this study, we present the capabilities of our proposed LCNF framework for large-scale phase reconstruction based on multiplexed FPM measurements. FPM is a well-established computational imaging technique that combines synthetic aperture and phase retrieval principles to achieve high-resolution reconstructions of amplitude and phase images over a wide field-of-view (FOV) using low-resolution intensity images <cit.>. Here, we showcase the effectiveness of our LCNF framework in accurately reconstructing continuous-domain high-resolution phase images over a large FOV using only five multiplexed measurements. Notably, our approach eliminates the need for complex Generative Adversarial Network (GAN) training, as required in previous state-of-the-art approaches <cit.>.
Our results highlight the ability of LCNF to capture the continuous and smooth priors of the object, enabling robust reconstruction of high-resolution phase images. First, using experimental datasets captured on Hela cells fixed in ethanol or formalin, we show that the LCNF network can accurately reconstruct complex cellular and subcellular structures. In addition, we highlight the robustness of the LCNF framework when subjected to imperfect training datasets, benefiting from the implicit continuous priors embedded in our framework. Specifically, LCNF effectively eliminates common artifacts encountered in traditional model-based FPM algorithms, such as residual phase unwrapping errors, noise, and background artifacts, without the need for additional parametric or learned constraints.
Furthermore, we showcase the strong generalization capabilities of our LCNF framework. Firstly, we demonstrate that LCNF can consistently reconstruct high-resolution phase images even when trained with very limited training data or under different experimental conditions. Remarkably, we achieve high-quality reconstructions even when the network is trained on a single imaging data pair. This superior generalization capability is attributed to our NF-based training strategy, which utilizes pixels as training pairs and effectively expands the training data from a single paired image to a diverse set of pixels. Moreover, we demonstrate that LCNF can be trained using purely simulated datasets composed of natural images. We show that the simulation-trained LCNF network generalizes well when applied to experimental biological measurements, consistently reconstructing detailed subcellular structures with minimal artifacts. Finally, we establish that all LCNF networks, regardless of the training strategy, reliably reconstruct high-resolution phase images across a wide FOV.
In summary, we introduce the LCNF framework as a versatile and generalizable approach for solving highly ill-posed large-scale imaging inverse problems in computational imaging. By leveraging a continuous implicit neural representation, LCNF effectively captures continuous multiscale object information from low-resolution measurements. It provides robust super-resolution reconstruction capabilities, bypassing the limitations of traditional model-based and CNN-based methods that rely on discrete representations. The framework's ability to generalize with very limited training data and its capacity to leverage simulated data further enhance its potential for advancing deep learning-based computational imaging techniques, making it highly attractive for challenging application scenarios where collecting experimental training data is both time-consuming and costly.
§ RESULTS
§.§ The LCNF framework
Our LCNF framework for phase reconstruction from multiplexed FPM measurements is illustrated in Fig. <ref>(a). The encoder E_θ_e takes six low-resolution images as input and projects the learned low-dimensional information into a latent space. The input images consist of two brightfield (BF) and three darkfield (DF) intensity measurements captured with the illumination patterns shown in Fig. <ref>(b), along with a low-resolution linear estimate of the object's phase computed from the two BF measurements using the differential phase contrast (DPC) method <cit.>. To handle the distinct distributions of BF, DF, and DPC images, three separate encoders {e_1,e_2,e_3} are employed to effectively extract the underlying latent information. Each encoder utilizes convolutional layers and residual blocks <cit.> to extract spatial features. The lateral dimensions of the spatial features match those of the input image, allowing for direct coordinate-dependent latent information retrieval during decoding. The spatial features learned from the three encoders are concatenated to form the final latent space representation Φ∈ℝ^H× W× D, where H and W represent the lateral dimensions and D represents the total number of concatenated feature maps in the latent space.
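As a rough illustration of this encoding step, a PyTorch-style sketch is given below; the layer widths, block counts, and feature depth are illustrative placeholders rather than the exact values used in our implementation.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

def make_branch(in_ch, feat=64, n_blocks=4):
    return nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1),
                         *[ResBlock(feat) for _ in range(n_blocks)])

class LCNFEncoder(nn.Module):
    """Three branches for the BF, DF, and DPC inputs; outputs a latent map of depth D = 3*feat."""
    def __init__(self, feat=64):
        super().__init__()
        self.e1 = make_branch(2, feat)   # two brightfield measurements
        self.e2 = make_branch(3, feat)   # three darkfield measurements
        self.e3 = make_branch(1, feat)   # DPC phase estimate
    def forward(self, bf, df, dpc):
        return torch.cat([self.e1(bf), self.e2(df), self.e3(dpc)], dim=1)   # (B, D, H, W)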
To enable high-resolution phase reconstruction independent of a fixed grid, a five-layer MLP denoted as f_θ is employed as the decoder. For local conditioning, a specific latent vector ϕ∈ℝ^1× 1× D is concatenated with the corresponding spatial coordinate 𝐱 before being inputted to f_θ. This conditioning mechanism ensures that the learned mapping by the MLP is dependent on the input measurement, allowing for generalizability across different objects.
The output of the decoder is a scalar representing the predicted phase value at the location 𝐱. The LCNF network is trained end-to-end in a supervised manner by minimizing the loss function L:
min_θ_e,θ L(f_θ(𝐱, ϕ), ψ(𝐱)),
where θ_e and θ represent the network parameters of the encoder and decoder respectively, ϕ=E_θ_e(m,𝐱) is the latent vector encoded from the input m for the queried coordinate 𝐱, and ψ(𝐱) is the high-resolution ground-truth phase value at the position 𝐱. The ground-truth phase images are reconstructed using separate standard FPM measurements and a previously developed model-based reconstruction algorithm <cit.>.
A key aspect of FPM is the reconstruction of super-resolved images beyond the low-resolution input. To facilitate the learning of high-resolution information beyond the low-resolution H× W grid, the LCNF network is also trained on “off-the-grid” high-resolution data queried from a denser grid H'× W'. However, the corresponding “off-the-grid” latent vector is not readily available from the encoded latent space. In practice, the nearest latent vector (based on the Euclidean distance) is used for the decoder. Additionally, to inform the decoder about the relative position of the queried “off-the-grid” location with respect to the nearest latent vector location, the implementation of Eq. (<ref>) utilizes their relative coordinate Δ𝐱 instead of the absolute coordinate 𝐱, following the approach introduced in the LIIF method <cit.>. Furthermore, to utilize the information provided by the neighboring latent vectors and improve the continuity of the reconstruction, enhancement techniques including feature unfolding, local ensemble, and cell decoding <cit.> are applied.
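The local conditional decoding can be sketched as follows. This is a simplified illustration of the nearest-latent-vector query with relative coordinates (feature unfolding, local ensemble, and cell decoding are omitted, and all sizes are placeholders), not our released implementation.

import torch
import torch.nn as nn

class LCNFDecoder(nn.Module):
    def __init__(self, latent_dim, hidden=256):
        super().__init__()
        layers, in_dim = [], latent_dim + 2                 # latent vector + relative (x, y)
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(hidden, 1)]                    # scalar phase value
        self.mlp = nn.Sequential(*layers)

    def forward(self, latent_map, coords):
        """latent_map: (B, D, H, W); coords: (B, N, 2), normalized to [-1, 1]."""
        B, D, H, W = latent_map.shape
        ix = ((coords[..., 0] + 1) / 2 * (W - 1)).round().long().clamp(0, W - 1)
        iy = ((coords[..., 1] + 1) / 2 * (H - 1)).round().long().clamp(0, H - 1)
        phi = latent_map[torch.arange(B)[:, None], :, iy, ix]        # nearest latent vectors, (B, N, D)
        grid = torch.stack([ix.float() / (W - 1) * 2 - 1,
                            iy.float() / (H - 1) * 2 - 1], dim=-1)
        rel = coords - grid                                          # relative coordinate of each query
        return self.mlp(torch.cat([phi, rel], dim=-1)).squeeze(-1)   # (B, N) predicted phase

# one supervised training step on randomly sampled high-resolution coordinates:
#   loss = nn.MSELoss()(LCNFDecoder(latent_dim=192)(Phi, xy), gt_phase)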
After training, the LCNF network allows for querying arbitrary points on the object by providing the corresponding low-dimensional measurements and the queried coordinates as the network input. This eliminates the requirement for a fixed input grid in traditional model-based and deep neural network architectures. The high-resolution phase reconstruction can be visualized on any desired grid. This feature is demonstrated in the results depicted in Fig. <ref>(c), where reconstructions are queried at three distinct pixel densities, showcasing smooth transitions across these diverse spatial scales without any artifacts.
More details about the FPM setup, measurements, and model-based reconstructions are provided in Sections <ref> and <ref>. Additionally, further details about the LCNF framework, including data acquisition and preprocessing, network structure, reconstruction enhancement techniques, and network training and inference are provided in Sections <ref>, and Figs. <ref>, <ref> and <ref>.
§.§ LCNF reconstruction trained with experimental dataset
We first evaluate the performance of our LCNF network using Hela cells fixed with ethanol or formalin as imaging samples. The network is trained separately for these two cell types (see Section <ref>), and the reconstruction results using the network trained on the same cell type are shown in Figs. <ref>(c) and <ref>.
In Fig. <ref>(c), we present the raw low-resolution BF intensity image, the model-based FPM reconstruction, and our LCNF-based reconstruction of ethanol-fixed Hela cells. Furthermore, we display two small subareas (area 1 and area 2). From the figure, it is evident that our network successfully reconstructs high-resolution phase images from low-resolution intensity images, accurately recovering intricate subcellular structures.
To evaluate the continuous object representation capability of our LCNF network, we conduct queries on arbitrary coordinates within subarea 1, as shown at the bottom of Fig. <ref>(c). We perform queries at densities of 6×, 21×, 49.8× compared to the input low-resolution intensity image. Our network successfully reconstructs the phase at these density grids. For comparison, we also include the model-based FPM reconstruction of the same area. Due to predefined grids, the FPM reconstruction exhibits discrete grid artifacts in the enlarged image.
Furthermore, it may suffer from phase unwrapping artifacts (see Section <ref> for more details). For an additional comparison, we present high-resolution phase reconstructions with a grid density of 105.9× in Fig. <ref>. In contrast, our network provides a continuous object reconstruction without any discrete or other phase artifacts.
Figure <ref>(a) and (b) showcase additional reconstruction results for Hela cells fixed with ethanol and formalin, respectively. As shown in the figures, we successfully reconstruct high-resolution phase images from the low-resolution intensity images, accurately capturing detailed cellular and subcellular structures without any artifacts, as highlighted in the zoom-in regions (1)-(6).
§.§ Robustness to phase artifacts
Next, we highlight the robustness of our LCNF network to various phase artifacts that arise from practical FPM experiments, including noise, phase unwrapping errors, and artifacts resulting from an imperfect imaging model.
As shown in Fig. <ref>(c) and Fig. <ref>(a), the model-based FPM reconstruction may exhibit discontinuous artifacts due to imperfect phase unwrapping when dealing with samples that have a phase range exceeding 2π. Moreover, Fig. <ref>(a) illustrates the presence of rippling artifacts in the background region of the model-based FPM reconstruction, possibly resulting from the imperfect FPM imaging model used for the model-based reconstruction <cit.>. Additionally, Fig. <ref>(b) demonstrates that the model-based FPM reconstruction can be susceptible to random phase noise.
In contrast, our LCNF network effectively eliminates these artifacts and achieves accurate, smooth, and continuous reconstructions, even though it has been trained using imperfect ground-truth images from experiments that inevitably contain these artifacts. We quantitatively evaluate the artifact-suppression capability of our LCNF network using the method in <cit.>.
Our analysis shows that our LCNF network can reduce the background artifacts by several folds compared with the model-based FPM reconstruction, as illustrated in Fig. <ref>.
The robustness can be attributed to the implicit continuous priors embedded in our LCNF network structure. The LCNF framework employs a two-step process to achieve continuous representations. Firstly, it encodes the input images into a continuous latent space representation, effectively filtering out noisy information. Secondly, it decodes the queried point by conditioning it on the selected latent vector. This process leverages the continuity priors embedded in the MLP decoder, enabling it to learn a continuous neural representation of the object.
Overall, our network demonstrates robust reconstruction capabilities even when trained with imperfect datasets, benefiting from the continuity of the learned latent space and the continuous representation imposed by the MLP decoder.
§.§ Generalizability of experimental data trained LCNF network
One notable advantage of our LCNF framework is its superior generalization capability, overcoming the limitations of traditional NF frameworks <cit.>. To thoroughly evaluate its generalization performance, we conduct training using three distinct types of experimentally collected datasets, as outlined in Section <ref>. These training scenarios included: (1) utilizing one type of experimental data to train the network and evaluating it on the same type of dataset, (2) training the network with a single pair of data and testing it on the same type of data, and (3) evaluating the aforementioned networks' performance on other types of cells. The LCNF networks consistently demonstrate successful reconstruction of high-resolution phase images across all three training scenarios, as depicted in Fig. <ref>.
We quantitatively assess the performance of our LCNF-based reconstructions for both image patches from the FOV regions matching the training conditions and outside the training region. This assessment is informative because spatially varying aberrations are known to degrade FPM reconstructions <cit.>. By evaluating the reconstruction quality outside the training FOV, we gain insights into the network's robustness against realistic spatially varying aberrations in our experiment. For the evaluation, we employ the mean square error (MSE) as the objective metric. The results, presented in Fig. <ref>, illustrate the robust performance of our LCNF network in all three scenarios, with the corresponding MSE values provided at the bottom of the figure. We also compute the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and frequency measurement metric (FM). The FM quantifies the recovery of frequency components <cit.>, where higher FM values represent the recovery of more frequency components. The quantitative metrics for the results in Fig. <ref> are presented in Table <ref>, and the metrics for an additional 100 image patches outside the training FOV region are provided in Table <ref>.
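For reference, the standard image-quality metrics quoted in the tables can be computed as sketched below using scikit-image (the FM metric follows the cited frequency-content definition and is not reproduced here):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    rng = gt.max() - gt.min()
    return {"MSE":  float(np.mean((pred - gt) ** 2)),
            "PSNR": peak_signal_noise_ratio(gt, pred, data_range=rng),
            "SSIM": structural_similarity(gt, pred, data_range=rng)}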
As shown in Fig. <ref>, and Tables <ref> and <ref>, the MSE generally increases, while the PSNR, SSIM, and FM decrease when training the LCNF network with a very limited dataset or a different type of data compared to the network trained with the same cell type and the full experimental dataset. This indicates that the network's generalization performance generally degrades when it is trained on a smaller training dataset or the distribution of the testing data is shifted from that of the training data, which is expected. However, the changes in the metric values are small and hardly noticeable in the visualizations in Fig. <ref>, even for the network trained with a single paired training dataset. This highlights the robust generalization performance of our LCNF network.
In addition, when the network is trained with ethanol-fixed Hela cells (Network_Hela-E(18)) and applied to formalin-fixed Hela cells, the SSIM and FM are slightly higher than those of the network trained with formalin-fixed Hela cells (Network_Hela-F(16)), as shown in Table <ref>. We attribute this “unusual” result to the fact that ethanol-fixed Hela cells contain more structural details and provide a broader spectrum compared to formalin-fixed Hela cells (see Fig. <ref>(b)). Therefore, the network trained with ethanol-fixed Hela cells may reconstruct more frequency components and thus yield better results.
We attribute this generalization capability to our novel training strategy, which utilizes pixels as the training pairs (as shown in Eq. (<ref>)).
By adopting this approach, we effectively expand the training data from a single paired image (250×250 pixels for the input and 1500×1500 pixels for the high-resolution reconstruction) to a diverse set of pixels. This enables the network to learn from a larger and more varied dataset, contributing to its remarkable generalization capabilities. As a result, our LCNF network demonstrates the ability to reconstruct high-resolution phase images even when trained with very limited training data. This not only reduces the necessity for a large number of training samples but also expedites the overall experimental process, making it highly suitable for challenging experimental scenarios where collecting experimental training data is both time-consuming and costly.
§.§ LCNF network generalizes from simulation to experiment
We further demonstrate the robust generalization capability of our LCNF network by employing a simulator-trained network for the reconstruction of the experimental Hela cells dataset, as depicted in Fig. <ref>. When solving the inverse problem using deep learning-based methods, acquiring paired datasets for network training can be challenging. Traditional NF-based approaches incorporate the imaging forward model within the network, allowing for self-supervised training without the need for paired datasets. However, as mentioned earlier, these methods often lack the ability to generalize across different objects and necessitate separate training for each new object.
An alternative approach is to use the imaging forward model to generate simulated paired datasets for network training <cit.>. However, in the context of FPM reconstruction, the application of simulator-trained networks has been obstructed by the use of the GAN structure that learns highly specific but less generalizable object priors <cit.>.
Here, we demonstrate straightforward deployment of simulation-based training of our LCNF network and achieve high-resolution, wide-FOV phase reconstructions on the experimental biological dataset (see training details in Section <ref>).
We first evaluate the network's performance on simulated data, as illustrated in Fig. <ref>(a). The results confirm the successful reconstruction of high-resolution phase images from low-resolution intensity images. The MSE, PSNR, SSIM, and FM metrics are presented in Table <ref>. When evaluating the network using the simulated dataset, we did not apply any preprocessing techniques described in Section <ref> to the ground-truth high-resolution natural images, except for linearly matching pixel values with the phase range of [0, 9]. Consequently, we can assess the network's performance more accurately without the influence of data preprocessing effects. Notably, our network demonstrates high effectiveness in recovering a significant portion of the spatial frequency components, as evidenced in the visualization of the reconstructed spectrum in the right column of Fig. <ref>(a) and Table <ref>.
Subsequently, we employ our simulator-trained network for the reconstruction of experimentally captured Hela cells datasets, as depicted in Fig. <ref>(b). The network successfully reconstructs Hela cells with detailed subcellular structures and recovers the rich spatial frequency components.
The quantitative metrics, including MSE, PSNR, SSIM and FM, are provided in Table <ref>. The results show that the simulator-trained network performs slightly worse than the experimental data-trained network in terms of MSE, PSNR, and SSIM. This is expected because our training data consists only of natural images, which have significantly different image features compared to the cells in the experiment. However, the FM metrics of our simulation-trained network are consistently higher than all other methods. This observation suggests that training the network on PSD-corrected natural images may promote more effective learning for high-frequency content. To further evaluate the performance of the simulation-trained network, we conduct additional reconstructions on 100 experimental image patches beyond the training FOV. The quantitative metrics are presented in Table <ref>.
These results clearly demonstrate the generalization capability of our simulation-trained LCNF network in achieving high-resolution phase reconstructions in the experiment.
§.§ Robust wide-FOV high-resolution phase reconstruction
Finally, we employ our LCNF network for wide-FOV high-resolution phase reconstructions, as shown in Fig. <ref>. The network is trained solely using the central 250×250-pixel region, indicated by the dashed black square in Fig. <ref>(a). Subsequently, we perform phase image reconstruction across a much larger FOV that encompasses a circular region with a 2160-pixel diameter in the raw measurements (3.51 mm). The resulting wide-FOV reconstructions (12960-pixel in diameter and 6× denser pixel grid compared to the input), obtained using Network_Hela-E(18) and Network_Simulate, are presented in Fig. <ref>(b) and (c), respectively. Additionally, Fig. <ref>(d)-(g) displays selected subareas extracted from the central to the edge of the full-FOV image. The corresponding images include the BF intensity image, model-based FPM reconstruction, LCNF Network_Hela-E(18) reconstruction trained with the experimental dataset (LCNF(exp)), and LCNF Network_Simulate reconstruction trained with the simulated dataset (LCNF(sim)). Overall, the LCNF networks achieve high-quality reconstructions, with subcellular features clearly recovered and minimal artifacts. However, at the very edge of the image, as observed in Fig. <ref>(d)-(g)iv, some distortions are present. This behavior is expected due to the spatially varying aberrations present in our microscope setup, which become more pronounced at the edge of the FOV <cit.>. The experimental dataset used for training the network is obtained from the central FOV, where aberrations are minimal. The simulation assumes a perfect imaging system without any aberrations. Consequently, the network was not exposed to these aberrations during training. Addressing this limitation will require incorporating a spatially variant imaging model, which we plan to consider in our future work.
Furthermore, we present additional wide-FOV reconstructions in Figs. <ref>, <ref>, and <ref> for Hela cells fixed in ethanol or formalin. These reconstructions were obtained using the LCNF networks trained with all the strategies detailed in Section <ref> and <ref>, based on experimental or simulated datasets. The results further underscore the reliability of our LCNF framework in achieving wide-FOV high-resolution phase reconstructions, regardless of the training strategy employed. Notably, our framework demonstrates excellent performance even when trained with very limited data, including a single paired image at the extreme case, or when utilizing simulated training data.
§ DISCUSSION AND CONCLUSION
In this study, we have introduced LCNF, a versatile and generalizable deep learning framework for solving large-scale imaging inverse problems. Unlike traditional CNN frameworks, LCNF leverages a continuous implicit neural representation to enable flexible reconstruction of multiscale information. It introduces a novel local conditioning approach, enhancing its generalization capability compared to existing NF frameworks.
By applying LCNF to solve the multiplexed FPM reconstruction problem, we demonstrate its effectiveness in achieving continuous-domain super-resolution reconstruction from low-resolution measurements, applicable to objects of varying spatial scales and resolutions. In addition, LCNF exhibits robustness against noisy training data. The LCNF reconstructions are free from common artifacts, such as residual phase unwrapping errors, noise and background ripples, that contaminate the training data obtained by traditional FPM reconstructions.
Furthermore, LCNF demonstrates remarkable generalization capabilities across different object types and experimental conditions. Notably, LCNF can be effectively trained even with limited datasets, including a single paired image dataset, considerably simplifying the experimental training data collection process. Additionally, we show that LCNF can be trained entirely on simulated data and generalize well to experimental data without requiring network retraining or transfer learning. This further highlights the robustness and adaptability of our LCNF approach.
LCNF's efficient processing of multiscale information makes it highly suitable for large-FOV high-resolution image reconstruction applications. We showcase LCNF's ability to robustly perform large-scale super-resolution phase reconstructions using multiplexed FPM measurements, regardless of the training strategy employed.
In summary, we present LCNF as a robust and scalable deep-learning-based continuous-domain reconstruction framework. Its ability to handle large FOV and high-resolution imaging reconstruction, combined with its strong generalization capabilities, makes it suitable for a wide range of computational imaging techniques.
§ METHODS
§.§ FPM experimental setup
Our LCNF network was developed based on the experimental data obtained from our previous study in <cit.>. To briefly describe the FPM setup and the data acquisition method: the illumination multiplexing scheme combined patterns used in DPC <cit.> and randomly multiplexed FPM <cit.> to efficiently encode high-resolution phase information across a wide FOV. Specifically, we used five LED illumination patterns (central wavelength 630 nm), including two BF semi-circle patterns and three 120^∘-arc patterns, as illustrated in Fig. <ref>(b). To capture the standard FPM dataset, sequential illumination with 185 LEDs was employed. In both illumination schemes, the maximum illumination NA was 0.41. The samples used for testing were unstained Hela cells fixed with ethanol or formalin. Intensity images were collected using a 4×, 0.1 NA objective lens (Nikon CFI Plan Achromat) and an sCMOS camera (PCO: pco.edge 5.5) with 2560×2160 pixels and a pixel size of 6.5 μm.
§.§ Model-based reconstructions
§.§.§ DPC-based phase imaging
Here, we briefly explain the principle of DPC phase imaging; additional details can be found in <cit.>. DPC is a technique used to recover quantitative phase information from intensity images acquired with asymmetric illumination patterns. It offers a 2× improvement in lateral resolution compared to the native objective NA.
Under the weak object assumption, o(𝐫)=e^{-μ(𝐫)+iψ(𝐫)}≈ 1-μ(𝐫)+iψ(𝐫), where μ(𝐫) represents absorption and ψ(𝐫) represents phase, a BF intensity measurement I_s(𝐫) can be approximated to have a linear relationship with the sample <cit.>:
I_S(𝐮)=Bδ(𝐮) + H_abs(𝐮)M(𝐮) + H_ph(𝐮)Ψ(𝐮),
where I_S(𝐮), M(𝐮), Ψ(𝐮) denote the spectra of I_s(𝐫), μ(𝐫) and ψ(𝐫), respectively, and 𝐮=(u_x,u_y) represents the spatial frequency. B is a constant representing the background signal, and δ(𝐮) is the Dirac delta function. H_abs, H_ph are the transfer functions for amplitude and phase, respectively <cit.>.
By subtracting the background term and normalizing the acquired BF intensity image, the DPC reconstruction can be formulated as:
min_{M,Ψ} ∑_{j=1}^{N_bf} ‖ I_{S-,j}(𝐮) - H_{abs,j}(𝐮)M(𝐮) - H_{ph,j}(𝐮)Ψ(𝐮) ‖_2^2 + τ_1 R_1(M(𝐮)) + τ_2 R_2(Ψ(𝐮)),
where I_{S-,j}(𝐮) represents the spectrum of the j-th background-subtracted intensity image, j is the index of DPC measurements, N_bf=2 denotes the number of captured BF images, and ‖·‖_2 represents the L_2 norm. τ_1, τ_2 are the regularization parameters, and R_1 and R_2 represent the regularization terms that incorporate prior information about the sample. Here, we utilized L_2 regularization to solve the inverse problem <cit.>.
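For concreteness, a minimal Tikhonov (L_2-regularized) inversion of the linear model above is sketched below, under the simplifying assumption that absorption is neglected so only Ψ is recovered; the transfer functions are assumed to be precomputed on the same Fourier grid as the measurements, and the function name and default τ are ours.

    import numpy as np

    def dpc_phase_tikhonov(I_bf, H_ph, tau=1e-3):
        """L2-regularised, phase-only DPC inversion (sketch).

        I_bf : list of background-subtracted, normalised BF images (N_bf = 2)
        H_ph : list of matching phase transfer functions (same Fourier grid)
        tau  : regularization parameter (tau_2 in the text)
        """
        num = np.zeros(I_bf[0].shape, dtype=complex)
        den = np.zeros(I_bf[0].shape, dtype=float)
        for I_j, H_j in zip(I_bf, H_ph):
            num += np.conj(H_j) * np.fft.fft2(I_j)
            den += np.abs(H_j) ** 2
        Psi = num / (den + tau)               # per-frequency least-squares solution
        return np.real(np.fft.ifft2(Psi))     # low-resolution phase estimate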
It should be noted that DPC reconstruction relies on the weak object approximation, which means it only provides accurate results when the phase change of the sample is below 0.64 radians <cit.>. However, in our experiment, the Hela cell samples are fixed in ethanol or formalin, causing phase changes exceeding 2π. This violates the weak object approximation and leads to an underestimation of the object's phase. Despite its limitations, the DPC estimation serves as a useful low-resolution initial guess for the object's phase <cit.>. Therefore, we input this estimation into our network.
§.§.§ FPM forward model
The forward model of FPM describes the intensity image obtained from a single LED illumination. After appropriate normalization to account for the system magnification, it can be expressed as:
I_i(𝐫) = |ℱ^{-1}[O(𝐮-𝐮_i)P(𝐮)](𝐫)|^2,
where I_i(𝐫) represents the captured low-resolution intensity image for the i^th LED, |·| takes the amplitude of the complex field, and 𝐫=(x, y) denotes the lateral coordinates. ℱ^{-1} represents the inverse Fourier transform, and O(𝐮) is the spectrum of the object o(𝐫).
Each LED illumination is modeled as a plane wave with spatial frequency 𝐮_i=(u_{xi},u_{yi})=(sinθ_{xi}/λ, sinθ_{yi}/λ), where (θ_{xi}, θ_{yi}) defines the illumination angle of the i^th LED and λ denotes the central wavelength. The pupil function of the microscope, denoted by P(𝐮), is a circular low-pass filter with a diameter of 2NA/λ, set by the objective lens NA.
In the case of multiplexed illumination, the sample is illuminated by different sets of LEDs based on different illumination patterns, as depicted in Fig. <ref>(b). The captured intensity image can be modeled as the sum of multiple intensity images obtained from individual LEDs <cit.>:
I_S(𝐫)=∑_{i∈S} I_i(𝐫),
where the symbol ∈ indicates that i is an element of the illumination set S.
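The multiplexed forward model can be simulated with a few lines of NumPy, as sketched below; the integer-pixel LED shifts, the grid sizes and the energy normalization factor are assumptions made for this sketch rather than details taken from the original implementation.

    import numpy as np

    def fpm_multiplexed_forward(obj_hr, pupil, led_shifts, pattern):
        """Simulate one multiplexed low-resolution intensity image (sketch).

        obj_hr     : complex high-resolution object o(r), shape (Nh, Nh)
        pupil      : pupil P(u) sampled on the low-resolution grid, shape (Nl, Nl)
        led_shifts : list of integer pixel shifts (dy, dx) corresponding to u_i
        pattern    : indices of the LEDs switched on in this illumination pattern
        """
        Nh, Nl = obj_hr.shape[0], pupil.shape[0]
        O = np.fft.fftshift(np.fft.fft2(obj_hr))       # centred object spectrum O(u)
        c = Nh // 2
        I = np.zeros((Nl, Nl))
        for i in pattern:
            dy, dx = led_shifts[i]
            sub = O[c + dy - Nl // 2: c + dy + Nl // 2,
                    c + dx - Nl // 2: c + dx + Nl // 2]   # O(u - u_i) within the pupil band
            psi = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
            I += np.abs(psi) ** 2 * (Nl / Nh) ** 2        # incoherent sum over LEDs
        return I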
§.§.§ Model-based FPM reconstruction
FPM is a recently developed computational imaging technique that enables increasing the imaging system's space-bandwidth product (SBP) by synthesizing multiple low-resolution images into a high-resolution image across a wide FOV <cit.>.
The FPM reconstruction involves a non-convex optimization problem that jointly estimates the object O(𝐮) and the pupil function P(𝐮) by solving the minimization problem:
min_{O(𝐮),P(𝐮),{b_i}} ∑_{i=1}^{N_led} ‖ √(I_i(𝐫)) - |ℱ^{-1}[O(𝐮-𝐮_i)P(𝐮)](𝐫)| - b_i ‖_2^2,
where b_i is the background offset for the i^th image, and N_led is the total number of LEDs used in the sequential FPM measurement.
The reconstruction is performed with an iterative algorithm following <cit.>.
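The cited algorithm is a sequential, second-order update; as a rough, simplified stand-in, the same amplitude-based data-fidelity term can be minimized by plain gradient descent on the object spectrum using PyTorch autograd, as sketched below. Pupil recovery and the background offsets b_i are omitted, and the initialization, learning rate and function name are our own assumptions.

    import torch

    def fpm_reconstruct(meas, pupil, led_shifts, Nh, iters=200, lr=0.05):
        """Gradient-descent FPM reconstruction sketch (not the cited algorithm).

        meas       : list of low-resolution intensity images (float tensors, Nl x Nl)
        pupil      : complex pupil tensor on the low-resolution grid (Nl x Nl)
        led_shifts : integer pixel shifts (dy, dx) of each LED spatial frequency u_i
        Nh         : side length of the high-resolution grid
        """
        Nl = pupil.shape[0]
        c = Nh // 2
        # initialise from a nearly flat object so the spectrum is nowhere exactly zero
        o0 = (1.0 + 0.01 * torch.randn(Nh, Nh)).to(torch.complex64)
        O0 = torch.fft.fftshift(torch.fft.fft2(o0))
        O_re = O0.real.clone().requires_grad_(True)     # optimise real and imaginary parts
        O_im = O0.imag.clone().requires_grad_(True)
        opt = torch.optim.Adam([O_re, O_im], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            O = torch.complex(O_re, O_im)
            loss = 0.0
            for I_i, (dy, dx) in zip(meas, led_shifts):
                sub = O[c + dy - Nl // 2: c + dy + Nl // 2,
                        c + dx - Nl // 2: c + dx + Nl // 2]
                psi = torch.fft.ifft2(torch.fft.ifftshift(sub * pupil))
                loss = loss + torch.mean((torch.sqrt(I_i) - psi.abs()) ** 2)
            loss.backward()
            opt.step()
        O = torch.complex(O_re, O_im).detach()
        return torch.fft.ifft2(torch.fft.ifftshift(O))   # high-resolution complex object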
§.§ The LCNF framework
§.§.§ Data acquisition and preparation
In our study, we investigated different strategies for training our LCNF network using both experimental and simulated datasets.
The experimental data was obtained from <cit.> and was taken on Hela cells fixed in ethanol or formalin. We collected 22 groups of low-resolution measurements (2560×2160 pixels) on ethanol-fixed Hela cells and 20 groups of measurements on formalin-fixed Hela cells using multiplexed illumination. The LED patterns used for illumination are described in Section <ref>, which includes two BF and three DF patterns.
To prepare the input for training the LCNF network, we performed the following steps. Firstly, we extracted the central 250×250 pixels from the raw low-resolution intensity images. Then, we applied dynamic range correction by clipping the minimum 0.1% and maximum 0.1% pixel values for each measurement, following the approach described in <cit.>. This correction helped suppress shot noise and hot pixels. Next, we used the DPC reconstruction algorithm, as explained in Section <ref>, to generate a linear estimation of the phase based on the two BF intensity measurements. Additionally, we normalized the LED intensities by dividing the intensity images by their mean value. We also applied a morphological open operator to estimate and subtract the slow-changing background, following the method described in <cit.>. This process effectively eliminated the unwanted background components and improved the accuracy of the subsequent learning process. Finally, we concatenated the preprocessed low-resolution intensity images with the DPC image.
To obtain ground-truth high-resolution phase images for the experimental data, we applied the following procedure. Firstly, for each standard FPM measurement, we sequentially illuminated 185 LEDs and captured the corresponding low-resolution intensity images. Then, we employed the model-based FPM reconstruction algorithm, detailed in Section <ref>, to reconstruct the phase of the central 250×250-pixel region and produce a high-resolution phase image of 1500×1500 pixels. Next, we applied a phase unwrapping algorithm <cit.> to unwrap the reconstructed high-resolution phase image. Furthermore, we addressed the slow-varying background component present in the reconstructed high-resolution phase image by utilizing a morphological open operator with a kernel size of 50. This step removed the slowly changing background, enhancing the clarity and quality of the phase image. To normalize the range of values in the high-resolution phase image, we clipped the phase range within [0, 12] for the Hela cells fixed in both ethanol and formalin. Subsequently, we divided the phase images by this clipping threshold, resulting in a normalized range of values within [0, 1]. Finally, we paired the preprocessed low-resolution input images with the normalized high-resolution reconstructions, which served as the training data for our neural network.
For the simulated dataset, we utilized a portion of the high-resolution DIV2K dataset <cit.> from the NTRE 2017 challenge <cit.> as our ground-truth phase images. The dataset consisted of 900 cropped natural images, each with a size of 600×600 pixels. Since the natural images have different histogram and spectral distributions compared to the biological cell images (see Fig. <ref>), we performed a preprocessing procedure on these images. The preprocessing involved removing the slowly varying background using an open operator with a kernel size of 20. Then, we applied a maximum value threshold of 0.6 to crop the image values and normalized the cropped images to the range [0, 1] by dividing them by this threshold. To ensure consistency between the simulated dataset and the experimental Hela cells fixed in ethanol (here, we only utilize the data for the Hela cells fixed in ethanol since it contains more frequency content), we took steps to match the power spectrum density (PSD) of the simulated dataset with the experimental data. This involved multiplying the spectrum of each simulated data with a correction map, whose value at a specific frequency is determined by the ratio between the square root of the PSDs of the experimental and the simulated dataset. We then normalized the spectrum-corrected image by dividing it by its maximum value. The resulting normalized and spectrum-corrected high-resolution images were used as ground truth for our network. The effect of the preprocessing procedure for the natural images can be observed in Fig. <ref>.
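A sketch of the PSD-matching step is given below; here psd_exp and psd_sim are assumed to be the dataset-averaged power spectral densities of the experimental and simulated images on the same grid, and the function name is ours.

    import numpy as np

    def match_psd(img, psd_exp, psd_sim, eps=1e-12):
        """Apply the spectral correction map sqrt(PSD_exp / PSD_sim) to one image."""
        F = np.fft.fft2(img)
        corr = np.sqrt((psd_exp + eps) / (psd_sim + eps))
        out = np.real(np.fft.ifft2(F * corr))
        return out / out.max()                  # renormalise to [0, 1]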
To generate the un-normalized object phase, we multiplied the normalized and spectrum-corrected high-resolution images by a factor of 9 and then subtracted 2.5, resulting in a phase range of [-2.5, 6.5]. This range corresponds closely to the predominant distribution observed in the histogram of the experimental ethanol-fixed Hela cell dataset and also balances the effect from the large phase values observed in the experimental dataset (for additional information, refer to Figure <ref>). To simulate the low-resolution intensity images, we used Eq. (<ref>) as the forward model and downsampled the simulated intensity images to 100×100 pixels.
Throughout the simulation process, we make the assumption that our simulated system does not exhibit aberration. As a result, the pupil function P() in Eq. (<ref>) is considered an ideal circular low-pass filter, with a value of 1 within the circular region and 0 outside of it.
We applied the same preprocessing steps used for the experimental dataset to obtain the preprocessed simulated low-resolution intensity images, which served as the input for the network. Finally, we paired the preprocessed high-resolution images with their corresponding low-resolution intensity images to create the training pairs for the network training.
§.§.§ LCNF network structure
Our LCNF network consists of a CNN-based encoder and an MLP-based decoder. A detailed visual illustration of the network can be found in Fig. <ref>.
Encoder:
We use three CNN-based encoders, denoted as {e_1, e_2, e_3} to independently encode three different types of input: BF, DF, and DPC images. Each encoder follows a deep residual network structure similar to <cit.>. The encoders take specific image types as input and initially extract spatial features using a convolutional layer. The number of input channels for the first convolutional layer varies according to the number of input images: 2 for two BF images, 3 for three DF images, and 1 for the DPC image. The output channels for the first convolutional layer are fixed at 128 for all encoders.
After the initial convolutional layer, we employ 32 residual blocks to further extract spatial feature maps. Each residual block consists of two convolutional layers with 128 input and output channels, a ReLU activation layer, and a multiplication layer with a factor of 1. Skip connections are incorporated in the residual blocks, where feature maps are added together. The spatial features extracted by the residual blocks are then passed through an output convolutional layer with 128 input and output channels. Finally, these features are added to the feature maps provided by the initial convolutional layer through a long skip connection. All convolutional layers use 3×3 convolutional kernels.
Once the feature maps are extracted from the input, they are concatenated to form the encoded latent space representation Φ∈ℝ^H× W× D of the image, where H and W represent the pixel resolution along the x and y axes, respectively, while D represents the number of concatenated channels. The H and W dimensions remain the same as the input low-resolution measurements, as we do not include pooling or upsampling layers in our encoder networks. The network structure of the encoders is visually depicted in Fig. <ref>: Encoder.
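A minimal PyTorch sketch of one such encoder is given below; the class and variable names are ours and do not correspond to the released implementation.

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, ch=128, scale=1.0):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1))
            self.scale = scale                       # multiplication factor (here 1)
        def forward(self, x):
            return x + self.scale * self.body(x)     # residual skip connection

    class Encoder(nn.Module):
        """One of the three encoders (BF: in_ch=2, DF: in_ch=3, DPC: in_ch=1)."""
        def __init__(self, in_ch, feat=128, n_blocks=32):
            super().__init__()
            self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
            self.body = nn.Sequential(*[ResBlock(feat) for _ in range(n_blocks)],
                                      nn.Conv2d(feat, feat, 3, padding=1))
        def forward(self, x):
            h = self.head(x)
            return h + self.body(h)                  # long skip connection

    # latent space: concatenate the three encoders' outputs along the channel axis
    # Phi = torch.cat([e_bf(bf), e_df(df), e_dpc(dpc)], dim=1)   # (B, 384, H, W)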
Decoder: To represent a high-resolution image in a continuous representation, we employ the LIIF approach <cit.>, which represents an object in the encoded latent space and utilizes an MLP as a decoding function to decode the object from the latent space back to the object domain. In our case, we use a standard 5-layer MLP as the decoder, denoted as f_θ. Each layer of the MLP has 256 neurons, and ReLU activation is applied to the first four layers, while the last layer is unactivated. The input dimension of the MLP is 3460, which is obtained by 3 (number of encoders) × 128 (feature maps learned by each encoder) × 9 (feature unfolding) + 2 (dimension of the coordinates) + 2 (cell decoding), where feature unfolding and cell decoding are reconstruction enhancement techniques explained in Section <ref>. The output dimension of the MLP is 1, representing the predicted phase value at the queried location. The structure of the decoder is illustrated in Fig. <ref>: Decoder.
The decoding function can be expressed as:
ψ̂(𝐫)=f_θ(𝐫, ϕ),
where ψ̂(𝐫) is the decoded physical quantity, such as the phase value in our case, at the queried position 𝐫. The variable 𝐫 represents the 2D coordinates in the continuous image domain, assumed to range in [-h, h] and [-w, w] for the height and the width, respectively. ϕ∈ℝ^{1×1×D} is the selected latent vector from the latent space representation Φ, which is related to the queried position. The decoding function f_θ(𝐫, ϕ) can be seen as a mapping function f_θ(·, ϕ): 𝐫→ψ(𝐫) that maps a coordinate 𝐫 to the phase value at the position 𝐫, with the latent vector ϕ as conditional parameters.
It should be noted that the latent space is a low-dimensional space with a dimension H× W× D, where we assign 2D coordinates to each latent vector with the pre-defined sparse grids, as depicted by the gray circles in Fig. <ref> Encoder.
However, for a continuous representation, we may need to query arbitrary coordinates that are not on the predefined grids, as shown by the green circle in Fig. <ref> Encoder.
Consequently, we cannot obtain the exact latent vector for the queried position since the density of the grid in the latent space is much lower than that of the high-resolution grid (H'× W') for the same FOV.
To bypass this issue, we adopt the LIIF approach <cit.>, which assumes that the latent space is continuous; in addition, each latent vector can represent a local part of the continuous image and is responsible for predicting the signals at the set of coordinates that are closest to itself.
Accordingly, we reformulate Eq. (<ref>) as:
ψ̂(𝐫)=f_θ(Δ𝐫, ϕ),
where ϕ is the selected latent vector for the coordinate 𝐫, chosen as the nearest latent vector based on the Euclidean distance. Here, Δ𝐫=𝐫-𝐯, where 𝐯 is the actual coordinate of the selected latent vector ϕ.
Taking Fig. <ref> Encoder as an example, the bottom-left gray circle represents the selected latent vector, and 𝐯 denotes the coordinate of this chosen latent vector.
In summary, our network utilizes CNN-based encoders to encode the measurements into a low-dimensional latent space representation, where coordinates are assigned to latent vectors using predefined sparse grids. We can then query the phase value at arbitrary coordinates and use the MLP decoder to decode the physical quantity based on the selected latent vector. The latent space representation, generated by the encoders, adapts to different objects, allowing our decoding function f_θ(·, ϕ) to demonstrate robust generalization capabilities compared to traditional NF methods.
§.§.§ Reconstruction enhancement techniques
To enhance the information extraction from the latent space and improve the continuity of the reconstruction, we utilize feature unfolding, local ensemble, and cell decoding techniques as described in the LIIF method <cit.>.
Feature unfolding:
To capture additional information beyond a single latent vector ϕ, we employ feature unfolding, which extends ϕ to ϕ̂. Specifically, ϕ̂ is obtained by concatenating the 3×3 neighboring latent vectors of ϕ, as illustrated in Fig. <ref>(a), and is defined as:
ϕ̂_p,q = Concat({ϕ_p+l, q+n})_l,n∈{-1,0,1},
where Concat represents the concatenation of a set of latent vectors. The indices p and q correspond to the selected latent code ϕ that matches the queried coordinate 𝐫 in the latent space. If the queried position is at the image's edge, the latent space Φ is padded with zero-vectors.
Cell decoding:
We incorporate cell decoding, which takes into account the pixel size information in the decoding function f_θ, as illustrated in Fig. <ref>(b). The updated decoding function is expressed as:
ψ̂(𝐫)=f_{θ,cell}([Δ𝐫, c_h, c_w], ϕ̂),
where [c_h, c_w] specifies the height and width of the query pixel with the desired pixel size in the reconstruction. The notation [Δ𝐫, c_h, c_w] denotes the concatenation of the relative coordinate and the pixel size.
Thus, f_{θ,cell}([Δ𝐫, c_h, c_w], ϕ̂) signifies that the decoding function reconstructs the value at the relative coordinate Δ𝐫 with the pixel size (c_h, c_w), conditioned on the “unfolded” latent vector ϕ̂ at the coordinate 𝐫.
Local ensemble:
A concern with Eq. (<ref>) is the discontinuous prediction when the queried coordinate crosses the middle area between two neighboring latent vectors, resulting in a switch between latent codes (i.e. the selection of the nearest latent vector changes). For example, it occurs when the queried coordinate (green dot) crosses the dashed line depicted in Fig. <ref>(c). Around such coordinates, predictions for two infinitesimally close coordinates are generated based on different latent vectors. Due to imperfections in the learned encoder E_θ_e and decoding function f_θ, these borders may exhibit discontinuous patterns.
To address this issue, we employ the local ensemble technique, extending Eq. (<ref>) to:
ψ̂(𝐫)=∑_{t∈{00,01,10,11}} (S_t/S)·f_{θ,cell}([Δ𝐫_t, c_h, c_w], ϕ̂_t),
where ϕ̂_t (t∈{00,01,10,11}) represents the four nearest latent vectors (top-left, top-right, bottom-left, bottom-right) relative to the queried coordinate, Δ𝐫_t denotes the relative coordinate between the queried coordinate and the selected latent vector, and S_t indicates the area of the rectangle between the queried coordinate and the coordinate of the latent vector diagonal to the selected latent vector, as shown in Fig. <ref>(c). The weights S_t are normalized by S=∑_t S_t. Moreover, the latent space representation Φ is mirror-padded outside the edge, allowing the above formula to work for coordinates near the image borders.
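Putting the pieces together, a single-coordinate query with feature unfolding, cell decoding and the local ensemble might look as sketched below; the grid conventions follow the description above (with D = 384 the decoder input is 9·384 + 2 + 2 = 3460, matching the stated MLP input dimension), and the helper names are ours.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # mlp: the 5-layer decoder, e.g.
    # mlp = nn.Sequential(nn.Linear(3460, 256), nn.ReLU(), nn.Linear(256, 256),
    #                     nn.ReLU(), nn.Linear(256, 256), nn.ReLU(),
    #                     nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))

    def query_phase(Phi, coord, cell, mlp):
        """Decode the phase at one continuous coordinate (sketch).

        Phi   : latent space (D, H, W); latent vectors sit on a grid of spacing 2
                spanning [-H, H] x [-W, W]
        coord : (y, x) continuous query coordinate in that system
        cell  : (c_h, c_w) pixel size of the query
        """
        D, H, W = Phi.shape
        Phi_unf = F.unfold(Phi.unsqueeze(0), 3, padding=1).view(9 * D, H, W)  # unfolding
        y, x = coord
        iy0 = min(max(int((y + H - 1.0) // 2), 0), H - 2)   # lower of the two nearest rows
        ix0 = min(max(int((x + W - 1.0) // 2), 0), W - 2)
        out, wsum = 0.0, 0.0
        for iy in (iy0, iy0 + 1):                            # local ensemble over the
            for ix in (ix0, ix0 + 1):                        # four nearest latent vectors
                vy, vx = -H + 1.0 + 2 * iy, -W + 1.0 + 2 * ix
                rel = torch.tensor([y - vy, x - vx], dtype=torch.float32)
                w = abs((2.0 - abs(y - vy)) * (2.0 - abs(x - vx)))   # area weight S_t
                inp = torch.cat([Phi_unf[:, iy, ix], rel,
                                 torch.tensor(cell, dtype=torch.float32)])
                out = out + w * mlp(inp)                     # cell decoding via [rel, cell]
                wsum = wsum + w
        return out / wsum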
§.§.§ Network training
Implementation details:
To train our network, we follow the procedure outlined in Section <ref> to prepare the training data, which consists of paired input images and the corresponding ground-truth phase images. During each training step, we randomly crop a smaller patch of size 48×48 pixels from the input images. Recall that the size of the raw input measurements differs for the experimental and simulated datasets, with dimensions of 250×250 pixels and 100×100 pixels respectively.
We encode the input using three encoders, as described in detail in Section <ref>, resulting in a latent space representation Φ with dimensions of H=48, W=48, D=384.
Subsequently, we assign 2D coordinates to each latent vector ϕ∈ℝ^1× 1× 384, which is defined on a sparse grid with the same grid density of 48×48 as the input. The height and width range of the latent space is set as [-H, H] and [-W, W] respectively, resulting in a distance of 2 between neighboring latent vectors.
The high-resolution ground-truth phase images, with an original pixel resolution of 1500×1500 pixels for the experimental dataset and 600×600 pixels for the simulated dataset, are correspondingly cropped into 288×288-pixel patches to match the same FOV as the input images. This scaling indicates that our ground-truth high-resolution phase image has a pixel resolution 6× higher than that of the input in both the x and y directions.
Similar to the assignment of 2D coordinates in the latent space, we assign 2D coordinates to the cropped high-resolution image within the height and width range of [-H, H] and [-W, W] respectively. The grid density is increased to 288×288, and the distance between adjacent pixels is reduced to 1/3. This coordinate assignment ensures positional consistency across the measurement domain, latent space, and reconstruction domain, assuming that the information within a 2D image is inherently positionally dependent and the information at a given position is preserved across different domains.
Next, we randomly select 2304 pixels from the high-resolution image patch as the ground-truth phase values by randomly picking the coordinates defined in the high-resolution grid. These coordinates are also used to select the corresponding latent vectors ϕ from Φ, as described in Section. <ref>. The selected latent vectors and relative coordinates are concatenated and input into the MLP. We further employ the reconstruction enhancement techniques detailed in Section <ref>.
The output of the MLP is the predicted phase value at the queried position 𝐫, and we train our network by comparing this prediction with the ground truth using the L_1 norm, as shown in Eq. (<ref>).
It is important to note that during the training stage, we define grids for the high-resolution ground-truth image and query the high-resolution image at these predefined coordinates.
However, after training, we no longer need to query points at predefined grids and can freely query phase values at any coordinates since our MLP can effectively represent the object in a continuous manner.
We utilize the PyTorch framework for training our network. The Adam optimizer is employed, with an initial learning rate of 1×10^-4. To adaptively adjust the learning rate during training, we use a learning-rate scheduler in PyTorch that reduces the learning rate by a factor of 0.2 when the loss function fails to improve. During training, a batch size of 5 is used.
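A stripped-down version of one training step is sketched below; for brevity the feature unfolding and local ensemble are omitted (so the decoder input here is 384 + 4 rather than 3460), and the dictionary keys and function name are our own illustrative choices.

    import torch
    import torch.nn.functional as F

    def train_step(encoders, mlp, opt, inp, gt_hr, n_pix=2304):
        """One LCNF training step on a 48x48 input crop and its 288x288 ground truth."""
        Phi = torch.cat([encoders['bf'](inp['bf']),
                         encoders['df'](inp['df']),
                         encoders['dpc'](inp['dpc'])], dim=1)      # (B, 384, 48, 48)
        B, D, H, W = Phi.shape
        # randomly sample pixels of the 6x denser ground-truth grid
        iy = torch.randint(0, gt_hr.shape[2], (B, n_pix))
        ix = torch.randint(0, gt_hr.shape[3], (B, n_pix))
        target = gt_hr[torch.arange(B)[:, None], 0, iy, ix]
        # nearest latent vector (6 high-resolution pixels per latent cell)
        ly, lx = iy // 6, ix // 6
        latents = Phi[torch.arange(B)[:, None], :, ly, lx]         # (B, n_pix, 384)
        # relative coordinates in the [-H, H] x [-W, W] system (grid spacing 2)
        y_hr = -H + (iy.float() + 0.5) / 3.0
        x_hr = -W + (ix.float() + 0.5) / 3.0
        rel = torch.stack([y_hr - (-H + 1.0 + 2 * ly.float()),
                           x_hr - (-W + 1.0 + 2 * lx.float())], dim=-1)
        cell = torch.full((B, n_pix, 2), 1.0 / 3.0)                # cell decoding input
        pred = mlp(torch.cat([latents, rel, cell], dim=-1)).squeeze(-1)
        loss = F.l1_loss(pred, target)                             # L1 training loss
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()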
Training with different datasets:
To comprehensively evaluate the generalization capability of our LCNF framework, in total, we explored three different training strategies and trained five different networks using different datasets, as detailed below.
Training with the full experimental dataset. In this case, we trained two networks using two different experimentally captured Hela cell datasets, denoted as Network_Hela-E(18) and Network_Hela-F(16).
For the first Hela(E) dataset, we gathered 22 groups of images for Hela cells fixed in ethanol. Network_Hela-E(18) was trained using 18 paired datasets, validated using 2 paired datasets, and tested using 2 paired datasets.
For the second Hela(F) dataset, we captured 20 groups of Hela cells fixed in formalin. Network_Hela-F(16) was trained using 16 paired datasets, validated using 2 paired datasets, and tested using 2 paired datasets.
Training with a single pair of experimental dataset. To further evaluate the generalization capability of our network, we conducted training of two networks using only a single training image pair from two different cell types, denoted as Network_Hela-E(1) and Network_Hela-F(1). The remaining images were designated as the test set. This approach allows us to assess the network's performance when trained on extremely limited data, providing insights into its generalization ability and the capability to reduce the complexity of acquiring experimental training data.
Training with the simulated natural images dataset. In addition to the experimental dataset, we also trained another network using only simulated datasets on natural images, denoted as Network_Simulate. The data preparation is described in Section <ref>. For this purpose, we utilized a total of 800 paired images for training, with 50 paired images for validation and another 50 paired images for testing. This simulated dataset allows us to assess the performance of our network in the absence of an experimental training dataset, providing insights into its ability to generalize from simulation to experiment.
For all three training scenarios, the network typically converged at around 500 epochs.
The training duration varied depending on the dataset. Training the network with a single pair of the experimental dataset took approximately 3 hours while training with the full experimental dataset and the simulated dataset took approximately 24 hours to converge using an NVIDIA Tesla P100 GPU on the Boston University Shared Computing Cluster.
§.§.§ Network inference
Upon completion of network training, we can reconstruct high-resolution phase images using a continuous local conditional neural field representation. To perform network inference, we provide the preprocessed measurements of the desired FOV as input and configure the pixel resolution for the resulting reconstructed image.
During the inference process, the network assigns coordinates to each pixel, as described in Section <ref>, and predicts the corresponding phase value for each queried position.
In contrast to previous NF frameworks <cit.> that require a consistent number of input coordinates, resulting in a smaller FOV when aiming for a higher pixel resolution image, our approach maintains the same FOV while increasing the number of queried coordinates. We achieve this by employing varying grid densities for reconstruction. Notably, the prediction process for a 1500×1500-pixel high-resolution image takes approximately 25 seconds, resulting in an average rate of approximately 1×10^-5 seconds per pixel on a computer with an NVIDIA Quadro RTX8000 GPU.
For the reconstruction of the wide-FOV phase image, we performed inference with a 6× denser grid sampling compared to the raw measurement. We first divided our measurement into a series of small patches with 250×250 pixels each. Next, we reconstructed each patch individually, resulting in high-resolution phase images with dimensions of 1500×1500 pixels. To create the final wide-FOV reconstruction image, we employed the alpha blending algorithm <cit.> to stitch together the individual reconstructions, forming a high-resolution phase image with a diameter of 12960 pixels. It is worth noting that our LCNF network is inherently capable of directly inferring the entire FOV image without requiring any stitching process. However, due to the limitation of GPU memory (48 GB) on our computer, we utilized this patch-wise inference method since direct inference of the entire FOV would exceed the available memory.
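The patch-wise inference and blending can be summarized by the sketch below; the window shape and the assumption of overlapping patches are ours, and the exact alpha-blending weights of the cited method may differ.

    import numpy as np

    def stitch(patches, coords, full_shape, patch_hr=1500, up=6):
        """Blend per-patch reconstructions into one wide-FOV phase image (sketch).

        patches    : list of (1500, 1500) patch reconstructions
        coords     : top-left (row, col) of each 250x250 patch in the raw image
        full_shape : shape of the stitched high-resolution output
        """
        acc = np.zeros(full_shape)
        wgt = np.zeros(full_shape)
        ramp = np.hanning(patch_hr)
        win = np.outer(ramp, ramp) + 1e-6          # smooth alpha-blending window
        for p, (r, c) in zip(patches, coords):
            r6, c6 = r * up, c * up
            acc[r6:r6 + patch_hr, c6:c6 + patch_hr] += p * win
            wgt[r6:r6 + patch_hr, c6:c6 + patch_hr] += win
        return acc / np.maximum(wgt, 1e-6)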
§ ACKNOWLEDGEMENTS
The authors acknowledge the Boston University Shared Computing Cluster for providing the computational resources.
The work is funded by the National Science Foundation (1846784).
§ DATA AVAILABILITY
The neural network and the data set used in this work are available at <https://github.com/bu-cisl/LCNF>.
§ CONFLICT OF INTEREST
The authors declare no competing interests.
§ SUPPLEMENTARY INFORMATION FOR:
LOCAL CONDITIONAL NEURAL FIELDS FOR VERSATILE AND GENERALIZABLE LARGE-SCALE RECONSTRUCTIONS IN COMPUTATIONAL IMAGING
Hao Wang^1, Jiabei Zhu^1, Yunzhe Li^1,†, Qianwan Yang^1, Lei Tian^1,2,*
[1] Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA.
[2] Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA.
[†]Current address: Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, CA 94720, USA.
Correspondence: [email protected]
|
http://arxiv.org/abs/2307.04299v1 | 20230710013448 | Schiff moments of deformed nuclei | [
"Oleg P. Sushkov"
] | nucl-th | [
"nucl-th",
"physics.atom-ph"
] |
School of Physics, The University of New South Wales, Sydney, New South Wales
2052, Australia
Stimulated by the recent suggestion of a Cosmic Axion Spin Precession Experiment with a Eu-containing compound, we develop a new method for the accurate calculation of Schiff moments of even-odd deformed nuclei.
The method is essentially based on experimental data on magnetic moments and
E1,E3-amplitudes in the given even-odd nucleus and in adjacent even-even
nuclei. Unfortunately, such sets of data are not yet known for most nuclei of interest. Fortunately, the full set of data is available for ^153Eu.
Hence, we perform the calculation for ^153Eu and find the value of the Schiff moment.
The value is about 30 times larger than a typical Schiff moment
of a spherical heavy nucleus. The enhancement of the Schiff moment
in ^153Eu is related to the low energy octupole mode.
On the other hand the value of Schiff moment we find is 30 times
smaller than that obtained in the assumption of static octupole
deformation.
Schiff moments of deformed nuclei.
O. P. Sushkov
August 12, 2023
====================================
§ INTRODUCTION
Electric dipole moment (EDM) of an isolated quantum object in a nondegenerate
quantum state is a manifestation of violation of time reversal (T) and
parity (P) fundamental symmetries. The search for a neutron EDM is a long quest for fundamental P,T-violation <cit.>.
EDM of a nucleus can be significantly larger than that of a
neutron <cit.>.
However, a nucleus has a nonzero electric charge, and therefore the EDM of a nucleus cannot be measured in a charge-neutral system (atom, molecule, solid) <cit.>. The quantity that can be measured is the so called Schiff Moment (SM)
which is nonzero due to the finite nuclear size <cit.>.
Like EDM the SM is a vector directed along the angular momentum.
Renewal of my interest to this problem is related to Cosmic Axion Spin
Precession Experiment (CASPEr) on searches of the QCD axion dark matter.
The current CASPEr experiment is based on Lead Titanate
ferroelectric <cit.>, see also Ref. <cit.>.
The experiment is probing the Schiff moment of ^207Pb nucleus.
There is a recent suggestion <cit.> to use the crystal of EuCl_3· 6H_2O instead of Lead Titanate for the CASPEr experiment.
The major advantage is experimental: a possibility to polarise Eu nuclei via
optical pumping in this crystal
allows to improve sensitivity by orders of magnitude.
Expected effect in EuCl_3· 6H_2O has been calculated in
Ref. <cit.>.
The observable effect in a solid is built like a Russian doll Matreshka,
it has four different spatial and energy scales inside each other.
(i) Quark-gluon scale, r < 1fm,
(ii) Nuclear scale, 1fm ≲ r ≲ 10fm,
(iii) Atomic scale, 10fm < r ≲ 1Å,
(iv) Solid state scale, r > 1Å.
The calculation <cit.> is pretty accurate at the scale (iii),
it has an uncertainty at most by factor 2 at the scales (i) and (iv).
However, the uncertainty at the scale (ii), the nuclear scale, is two
orders of magnitude, this is the uncertainty in ^153Eu Schiff moment.
Such an uncertainty is more or less typical for deformed even-odd nuclei.
The aim of the present work is twofold: (i) development of an accurate method for SM calculations, and (ii) performance of the calculation for ^153Eu.
A reliable purely theoretical calculation is hardly possible.
Therefore, our approach is to use available experimental data as much as possible.
^153Eu is a deformed nucleus. A simplistic estimate of SM of a
nucleus with quadrupolar deformation based on Nilsson model
performed in Ref. <cit.>
gave a result by an order of magnitude larger than SM of a spherical heavy
nucleus, say SM of ^207Pb.
It has been found later in Ref. <cit.> that if the nucleus
has a static octupolar deformation the SM is dramatically enhanced.
Based on analysis of rotational spectra of ^153Eu authors of
Ref. <cit.> argued that ^153Eu has a static octupolar
deformation and hence, using the idea <cit.>
arrived to the estimate of SM that is 10^3 times larger than that of a
heavy spherical nucleus.
To elucidate structure of wave functions of ^153Eu in the
present work we analyse available experimental data on magnetic moments and
amplitudes of E1,E3-transitions. As a result of this analysis we confidently claim that the model of static octupolar deformation
is too simplistic. Nilsson wave functions of quadrupolar deformed nucleus are
almost correct. However, this does not imply that the octupolar mode is
irrelevant.
There is an admixture of the octupole vibration to the Nilsson states
and we determine the amplitude of the admixture. All in all this allows us
to perform a pretty reliable and accurate calculation of SM.
To avoid misunderstanding, our statement about the magnitude of the SM
is based on analysis of a broad set of data, therefore, the statement is
nuclear
specific, it is valid for ^153Eu and it is valid for ^237Np.
Unfortunately, such sets of data are not yet known for many nuclei of interest.
Structure of the paper is the following.
In Section II we analyse lifetimes of relevant levels in ^152Sm and
^153Eu and hence find the relevant E1-amplitudes.
The Section III is the central one, here we discuss the structure of wave
functions of the parity doublet |5/2^±⟩ in ^153Eu.
Section IV determines the quadrupolar deformation of ^153Eu.
In Section V we explain the parametrisation we use for the octupolar
deformation.
Section VI describes the structure of octupole excitations.
Section VII extracts the value of octupole deformation from experimental data.
In section VIII we calculate the T- and P-odd mixing of 5/2^+ and 5/2^-
states in ^153Eu.
EDM of ^153Eu nucleus is calculated in Section IX and SM of ^153Eu
nucleus is calculated in Section X.
Section XI presents our conclusions.
§ EXPERIMENTAL E1-AMPLITUDES IN ^152SM AND ^153EU
All data in this Section are taken from Ref. <cit.>.
Even-even nuclei in vicinity of ^153Eu have low energy ≈ 1MeV
collective octupole excitation.
There is the quadrupolar ground state rotational band and the octupolar
rotational band starting at energy of the octupole excitation.
As a reference even-even nucleus we take ^152Sm. In principle ^154Sm
also would do the job, but the data for ^154Sm are much less detailed,
especially on electron scattering that we discuss in Section VII.
Energies of the relevant states of the octupolar band in ^152Sm
are: E(1^-)=963keV, E(3^-)=1041keV.
The halftime of the 1^- state is t_1/2=28.2fs, hence the
lifetime is τ=28.2/ln(2)=40.7fs.
The state decays via the E1-transition to the ground state, 0^+, and to the
2^+ state of the ground state rotational band.
The decay branching ratio is W(0^+)/W(2^+)=0.823.
Therefore, the partial lifetime for 1^- → 0^+ transition is
τ_partial=90fs.
The 1^- → 0^+ E1-transition decay rate is <cit.>
1/τ_partial = [4ω^3/(3(2j+1))] |⟨ j^'||d||j⟩|^2 ,
For 1^- → 0^+ transition j=1 and j^'=0.
The reduced matrix element of the dipole moment can be expressed in terms
of d_z in the proper reference frame of the deformed nucleus <cit.>
|⟨ j^'||d|| j⟩|^2 = (2j+1)(2j^'+1) ( j^' 1 j; -m 0 m )^2 |⟨ 0| d_z|1⟩|^2 ,
where ( j^' 1 j; -m 0 m ) denotes the Wigner 3j-symbol.
For 1^- → 0^+ transition j=1, j^'=0, m=0. Hence
⟨ 0| d_z|1⟩=+ e× 0.31fm
Here e=|e| is the elementary charge.
^153Eu is a deformed nucleus with the ground state |5/2^+⟩.
The nearest opposite parity state |5/2^-⟩ has energy E=97.4keV.
The halftime of the |5/2^-⟩ state is t_1/2=0.20ns, hence the
lifetime is τ=0.29ns. The lifetime is due to the E1-decay
|5/2^-⟩→ |5/2^+⟩.
Using Eqs.(<ref>),(<ref>) with j=j^'=m=5/2 and comparing with
experiments we find the corresponding d_z in the proper reference frame.
⟨ 5/2^+ |d_z|5/2^-⟩= - e× 0.12fm
Of course, lifetimes do not allow one to determine the signs in Eqs. (<ref>) and
(<ref>). We explain in Section VI how the signs are determined.
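As a numerical cross-check of the two amplitudes quoted above, the decay-rate formula and the 3j reduction given above can be evaluated directly; the sketch below works in natural units, with the squared 3j values (1/3 for the 1^- → 0^+ case and 5/42 for j = j' = m = 5/2) inserted by hand.

    import numpy as np

    hbar_c = 197.327          # MeV fm
    c_fm = 2.9979e23          # fm / s
    alpha = 1.0 / 137.036     # e^2 in units with hbar = c = 1

    # 152Sm, 1- -> 0+ : |<0||d||1>|^2 = |<0|d_z|1>|^2, so 1/tau = (4/9) w^3 |d_z|^2
    omega, tau = 0.963, 90e-15                     # MeV, s (partial lifetime)
    Gamma = hbar_c / (c_fm * tau)                  # width in MeV
    dz_sm = np.sqrt(9.0 * Gamma * hbar_c**2 / (4.0 * alpha * omega**3))
    print(dz_sm)                                   # ~0.31, i.e. <0|d_z|1> ~ 0.31 e fm

    # 153Eu, 5/2- -> 5/2+ : j = j' = m = 5/2, (3j)^2 = 5/42, (2j+1)(2j'+1) = 36
    omega, tau = 0.0974, 0.29e-9
    Gamma = hbar_c / (c_fm * tau)
    dz_eu = np.sqrt(Gamma * 3 * 6 * hbar_c**2 /
                    (4 * omega**3 * 36 * (5.0 / 42.0) * alpha))
    print(dz_eu)                                   # ~0.12 e fm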
§ WAVE FUNCTIONS OF THE GROUND STATE PARITY DOUBLET |5/2^±⟩ IN ^153EU.
The standard theoretical description of low energy states in ^153Eu
is based on the Nilsson
model of a quadrupolar-deformed nucleus. In agreement with experimental data,
the model predicts the spin and parity of the ground state, 5/2^+. It also
predicts the existence of the low-energy excited state with opposite parity,
5/2^-. The wave functions of the odd proton in the Nilsson scheme are
|5/2^+⟩=|4135/2⟩, |5/2^-⟩=|5325/2⟩.
Explicit form of these wave functions is presented in Appendix.
There are two rotational towers built on these states.
An alternative to Nilsson approach is the model of static collective octupolar
deformation <cit.>.
In this model the odd proton moves in the pear shape potential forming
the Ω=5/2 single particle state.
A single rotational tower built on this odd proton state is consistent with
observed spectra and this is why the paper <cit.> argues in favour
of static octupole deformation. However, two different parity rotational
towers in Nilsson scheme are equally consistent with observed spectra.
Therefore, based on spectra one can conclude only that both the Nilsson model
and the static octupolar deformation model are consistent with spectra. One
needs additional data to distinguish these two models.
The Nilsson model explains the value Ω=5/2 while in the
“static octupole” model this value pops up from nowhere. However, in
principle it is possible that accidentally
the single particle state in the pear shape potential has
Ω=5/2.
To resolve the issue “Nilsson vs octupole” we look at magnetic moments.
The magnetic moment of the ground state is μ_5/2^+=1.53μ_N,
see Ref. <cit.>. This value is consistent with prediction
on the Nilsson model <cit.>.
The magnetic moment of the 5/2^- state has some ambiguity,
the measurement led to two possible interpretations,
“the recommended value” μ_5/2^-=3.22μ_N, and another value
consistent with measurement μ_5/2^-=-0.52μ_N, see Ref. <cit.>. The recommended value is consistent with the prediction of the
Nilsson model <cit.>.
Thus the magnetic moments are consistent with the Nilsson model and
inconsistent with the octupole model which implies
μ_5/2^-≈μ_5/2^+.
While the arguments presented above rule out the static octupole model,
they do not imply that the octupole is irrelevant, actually it is relevant.
We will show now that while the Nilsson model explains magnetic moments
it cannot explain E1-amplitudes.
Within the Nilsson model one can calculate the E1 matrix
element ⟨ 5/2^+|d_z|5/2^-⟩.
A straightforward calculation with wave functions (<ref>) gives the
dipole matrix element
d_z = e (1-Z/A)⟨ 5325/2|z|4135/2⟩
= e (1-Z/A)z_0/√(2)(0.527-0.510+0.017)
= e× 0.036fm .
Here we account the effective proton charge (1-Z/A)=0.59.
The calculated matrix element (<ref>)
is 3 times smaller than the experimental one (<ref>).
The first impression is that the disagreement is not bad
having in mind the dramatic compensations in Eq.(<ref>).
However, there are two following observations.
(i) It has been pointed out in Ref.<cit.> that the compensation
in (<ref>) is not accidental: the compensation is due to the structure
of Nilsson states, and the matrix
element ⟨ 5325/2|z|4135/2⟩ is proportional
to the energy
splitting Δ E = E_5/2^--E_5/2^+. The matrix element is small
because Δ E is small compared to the shell model energy
ω_0≈ 7.7MeV.
The value (<ref>) is calculated with wave functions from
Ref. <cit.> that correspond to Δ E ≈ 450keV.
On the other hand in reality Δ E ≈ 97keV.
Therefore, the true matrix element must be even smaller than the value
(<ref>).
(ii) The electric dipole operator is T-even. Therefore, there is a suppression of the matrix element due to pairing of protons, d_z → d_z (u_1u_2-v_1v_2),
where u and v are pairing BCS factors. This further reduces the matrix
element, see Ref.<cit.>.
The arguments in the previous paragraph lead to the conclusion that while the
Nilsson model correctly predicts quantum numbers and explains magnetic
moments, the model does not explain the electric dipole
transition amplitude.
The experimental amplitude is by an order of magnitude larger than the Nilsson one. This observation has been made already in Ref.<cit.>.
Admixture of the collective octupole to Nilsson states resolves the dipole moment issue.
So, we take the wave functions as
|+⟩ = |5/2^+⟩ = √(1-α^2)|4135/2⟩|0⟩-α| 5325/2⟩|1⟩
|-⟩ = |5/2^-⟩ = √(1-α^2)| 5325/2⟩|0⟩
-α|4135/2⟩|1⟩
Here the states |0⟩ and |1⟩ describe collective octupole mode,
|0⟩ is the symmetric octupole vibration and
|1⟩ is antisymmetric octupole vibration. For intuition:
|0⟩ corresponds to the
ground state of ^152Sm and |1⟩ corresponds to the octupole
excitation at energy ≈ 1MeV.
We will discuss in Section VI the specific structure of the states
|0⟩, |1⟩, explain why the mixing coefficient in both states
in (<ref>) is the same, and explain why α >0.
Using (<ref>) and
neglecting the small single particle contribution the transition electric dipole moment is
⟨5/2^+|d_z|5/2^-⟩=-2α√(1-α^2)⟨ 0 |d_z|1⟩
Hence, using the experimental values (<ref>) and (<ref>) we find
α≈0.12/2 × 0.31=0.20
Thus, the weight of the admixture of the collective vibration to the simple Nilsson state is just α^2= 4%.
This weight is sufficiently small to make the Nilsson scheme calculation of
magnetic moments correct. On the other hand the weight is sufficiently large
to influence electric properties.
Note that the octupole vibration itself does not
have an electric dipole transition matrix element. The E1 matrix element
is zero due to elimination of zero mode, ⟨ 1|d_z|0⟩=0.
The nonzero value of the dipole matrix element, ⟨ 1|d_z|0⟩ 0,
arises due to a small shift of the neutron
distribution with respect to the proton distribution in combination
with the octupole deformation, see e.g.
Refs. <cit.>.
While this issue is important theoretically, pragmatically it is not
important to us since we take both values of matrix elements (<ref>)
and (<ref>) from experiment.
It is worth noting also that in the static octupole model one expects
⟨ 5/2^+ |d_z|5/2^-⟩= ⟨ 0| d_z|1⟩=+ e× 0.31fm
that is like magnetic moments inconsistent with data.
§ QUADRUPOLAR DEFORMATION OF ^153EU.
The standard way to describe nuclear deformation is to use parameters
β_l.
In the co-rotating reference frame for the quadrupolar deformation
the surface of the nucleus is given by equation (we neglect β_2^2
compared to 1)
R(θ)=R_0(1+β_2Y_2,0)
R_0=r_0A^1/3
r_0≈1.2fm
Here A is the number of nucleons.
Let us determine β_2 using the known electric quadrupole moment Q
in the ground state of ^153Eu. There are two contributions in Q,
(i) collective contribution due to the collective deformation,
(ii) single particle contribution of the odd proton. Using Nilsson
wave functions it is easy to check that the single particle contribution is
about 3-4% of the experimental one, so it can be neglected.
Collective electric quadrupole moment is given by density of protons ρ_p,
Q_0=Q_zz = ∫ρ_p(3z^2-r^2)dV = 4√(π/5)∫ρ_p r^2 Y_20 dV
= [3ZR_0^2/√(5π)] β_2 [1 + (2√5/(7√π)) β_2 + (12/(7√π)) β_4]
Here we also account β_4. Z is the nuclear charge.
Eq.(<ref>) gives the quadrupole moment in the proper reference frame.
In the laboratory frame for the ground state, J=Ω=5/2, the quadrupole
moment is Q=5/14Q_0, see problem to 119 in Ref.<cit.>.
The ground state quadrupole moment of ^153Eu is
Q=2.412 barn <cit.>. From here, assuming β_4=0.07,
we find the quadrupole deformation of ^153Eu nucleus in the ground state,
β_2≈ 0.29 .
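For transparency, the numerical extraction of β_2 from Q can be reproduced with a short root-finding script (a sketch, with r_0 = 1.2 fm and β_4 = 0.07 as assumed in the text):

    import numpy as np
    from scipy.optimize import brentq

    A, Z = 153, 63
    R0 = 1.2 * A ** (1.0 / 3.0)               # fm
    Q_lab = 241.2                             # 2.412 barn in fm^2
    Q0 = 14.0 / 5.0 * Q_lab                   # intrinsic moment, Q = (5/14) Q0
    beta4 = 0.07

    def f(b2):
        return (3 * Z * R0**2 / np.sqrt(5 * np.pi) * b2 *
                (1 + 2 * np.sqrt(5) / (7 * np.sqrt(np.pi)) * b2
                   + 12 / (7 * np.sqrt(np.pi)) * beta4) - Q0)

    print(brentq(f, 0.0, 1.0))                # ~0.29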
The values β_2≈ 0.29, β_4=0.07 perfectly agree with that
in ^152Sm determined from electron scattering <cit.>.
The electric quadrupole moment of ^151Eu nucleus in the ground state is
Q=0.903barn <cit.>.
Therefore, in ^151Eu the quadrupolar deformation, β_2≈ 0.12,
is significantly smaller than that in ^153Eu.
§ NUCLEAR DENSITY VARIATION DUE TO THE OCTUPOLE DEFORMATION
The standard way to describe the static octupole deformation β_3 is to
use parametrisation (<ref>)
R(θ)=R_0(1+β_1Y_10+β_2Y_2,0+β_3Y_3,0+...)
This Eq. describes the surface of nucleus in the proper reference frame.
The dipole harmonic Y_10 is necessary to eliminate the zero mode, i.e.
to satisfy the condition
⟨ z⟩= ∫ρ(r)rY_10dV=0
where ρ(r) is the number density of nucleons.
From (<ref>) we find
β_1=-xβ_2β_3 , x=√(243/140π)≈ 0.743 .
For our purposes it is more convenient to use parametrisation different from
(<ref>), the parametrisation we use is
δρ= β_3
3A/4π R_0^2δ[r-R_0(1+β_2Y_20)](Y_30-xβ_2Y_10) .
Here δρ is the octupolar component of the nuclear density.
Due to the δ-function, δ[...], the component is nonzero only
at the surface of the nucleus. Parametrisations (<ref>) and (<ref>)
are equivalent, both satisfy the constraint (<ref>) and
both give the same octupole moment
Q_30=√(4π/7)∫ρ r^3 Y_30dV=
β_33A/√(28π)R_0^3 .
§ STRUCTURE OF THE VIBRATIONAL STATES |0⟩, |1⟩
The deformation picture described in the previous section is purely classical.
Quantum mechanics makes some difference.
We work in the proper reference frame where the nuclear axis direction, the
z-axis, is fixed.
Hence, there are two possible orientations of the pear, as it is shown in
Fig.<ref>. There is tunnelling between these two orientations, the
tunnelling leads to the energy splitting and to formations of
symmetric and antisymmetric states
|0⟩, |1⟩. This picture is valid when the tunnelling energy
splitting, Δ E_tun, is larger than the rotational energy
splitting,
Δ E_rot. Experimentally Δ E_tun∼ 1MeV,
Δ E_rot≈ 20keV, so the description is well justified.
The description of Fig. <ref> implies that
the octupole deformation is quasistatic. The quasistatic description is
justified by the existence of well defined rotational towers in ^152Sm
built on |0⟩ and |1⟩ states, see Ref. <cit.>.
Note that even if the pear tunneling amplitude is comparable with the
rotatinal energy, Δ E_tun∼Δ E_rot, the octupole
deformation is not static. To have a trully static octupole one needs
Δ E_tun≪Δ E_rot.
The Hamiltonian for the odd proton reads
H=p^2/2m+U(r) .
Here U(r) is the selfconsistent potential of the even-even core.
It is well known that the nuclear density ρ(r) has approximately the
same shape as the potential
U(r) ≈U_0/ρ(0)ρ(r) ,
where U_0≈ -50MeV and ρ(0)=3/(4π r_0^3).
Hence the variation of the potential related to the octupole deformation is
δ U = U_0/ρ(0)δρ
= β_3 U_0 R_0 δ[r-R_0(1+β_2Y_20)](Y_30-xβ_2Y_10) .
This is the perturbation that mixes single particle Nilssen states
with simultaneous mixing of |0⟩ and |1⟩. The mixing matrix
element is
M=⟨ 1|⟨ 5325/2|δ U|4135/2⟩|0⟩
= ∫ρ_sp(r)δ U(r) dV
ρ_sp(r)= ⟨ψ^*_532(r)ψ_413(r)⟩ .
Here ρ_sp is offdiagonal single particle density
of Nilsson wave functions (<ref>),
the density depends on r, the brackets ⟨ ..⟩ in
ρ_sp denote averaging over spin only.
Numerical evaluation of the mixing matrix element is straightforward,
the answer at β_2=0.29 is M≈ 5β_3MeV. The value slightly
depends on β_2, at β_2=0 the value of M is 10% smaller.
The coefficient α in Eqs.(<ref>) is
α=M/Δ E_tun ,
where Δ E_tun≈ 1MeV.
Eqs.(<ref>),(<ref>) together with positive value of M explain why the
coefficient α is the same in both Eqs.(<ref>) and why
α > 0.
Moreover, comparing (<ref>) with the value of α extracted from experimental data, Eq. (<ref>), we determine the octupole deformation,
β_3 =0.04. While the value is reasonable, unfortunately one cannot
say that this is the accurate value.
The shape approximation (<ref>) is not very accurate.
Even more important, it is not clear how the BCS factor influences ρ_sp. The BCS factor can easily
reduce ρ_sp by factor ∼ 2-3, hence increasing β_3 by the
same factor.
Theoretical calculations of β_3 give
values from 0.05 <cit.>, to 0.075 <cit.>, and
even 0.15 <cit.>.
§ THE VALUE OF THE OCTUPOLE DEFORMATION PARAMETER Β_3
With wave functions shown in Fig.<ref> one immediately finds
the electric octupole matrix element between states |0⟩ and
|1⟩
⟨ 1|Q_30^(e)|0⟩=eZ/AQ_30 ,
where Q_30 is given by Eq.(<ref>).
We are not aware of direct measurements of Q_30^(e) in ^152Sm.
The book <cit.> presents the “oscillator strengths” for
corresponding E3 transitions in ^152Sm and ^238U,
^152Sm: B_3=1.2× 10^5e^2fm^6, ^238U: B_3=5× 10^5e^2fm^6
(table 6.14 in the book).
However, these values were not determined from direct electromagnetic
measurements; the “oscillator strengths” were extracted indirectly from
deuteron scattering on these nuclei.
Fortunately, for ^238U there is a more recent value determined from
electron scattering <cit.>: B_3=(6.4± 0.6)× 10^5e^2fm^6.
All in all, these data give β_3 ≈ 0.08 for both ^152Sm and
^238U.
Fortunately, the electron scattering data <cit.>
allow one to determine β_3 in ^152Sm fairly accurately.
Ref. <cit.> was aimed at determining β_2 and β_4;
the results, β_2=0.287± 0.003 and β_4=0.070± 0.003, are remarkably
close to those we obtain for ^153Eu in Section IV.
The inelastic scattering spectrum copied from Ref. <cit.> is
shown in Fig.<ref>.
Here we reanalyse the spectrum.
The first inelastic peak at E=122keV (≈ channel 73) corresponds to
the 2^+
excitation of the rotational ground state band. The peak after subtraction
of the background is shown in panel a of Fig.<ref>.
Red dots are experimental points and the solid curve is the Gaussian fit
I=Ae^-(x-x_0)^2/σ^2 ,
A=7.23, x_0=72.9, σ=5.21 .
Hence, the halfwidth is
Γ =2ln(2)σ=49.3 keV, where we take into account that one channel step is
6.82 keV.
This energy resolution is 0.065% of the electron energy 76 MeV.
This is slightly smaller than the “typical value” 0.08% mentioned
in Ref. <cit.>.
The peak in Fig.<ref> near channel 210 is a combination of the
3^- octupole state (E=1041 keV) and the 2^+ state of the
γ-band (E=1086 keV).
The peak after subtraction of the background is shown in panel b of
Fig.<ref>. We fit the double peak by the double Gaussian
I=B[e^-(x-x_1)^2/σ^2+e^-(x-x_2)^2/σ^2]
B=0.670, x_1=207.6, x_2=214.2, σ=5.21.
The value of x_1 corresponds to E=1041keV, the value of x_2 corresponds
to E=1086keV, σ is known from (<ref>).
The fit shows that the intensities of the 3^- and γ 2^+ lines cannot
differ by more than 5%, so we take them to be equal. Therefore, in the end
there is only one fitting parameter, B.
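The fits just described are ordinary least-squares fits of one- and two-Gaussian line shapes with a common width. A minimal Python sketch of the procedure is given below; the arrays `channels_el`, `counts_el`, `channels_in`, `counts_in` stand for the background-subtracted spectra read off Fig. <ref> and are hypothetical placeholders, not the actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

KEV_PER_CHANNEL = 6.82   # one channel step, as quoted in the text

def gauss(x, A, x0, sigma):
    # single line, I = A exp(-(x-x0)^2/sigma^2)
    return A * np.exp(-(x - x0)**2 / sigma**2)

def double_gauss(x, B, x1, x2, sigma):
    # two lines of equal intensity and common width
    return B * (np.exp(-(x - x1)**2 / sigma**2) + np.exp(-(x - x2)**2 / sigma**2))

# Hypothetical placeholders for the background-subtracted data of Fig. <ref>.
channels_el = np.arange(60, 86)
counts_el   = gauss(channels_el, 7.2, 72.9, 5.2)                   # stand-in, panel a
channels_in = np.arange(195, 228)
counts_in   = double_gauss(channels_in, 0.67, 207.6, 214.2, 5.2)   # stand-in, panel b

# Elastic-band 2+ peak: fit A, x0, sigma.
(A, x0, sigma), _ = curve_fit(gauss, channels_el, counts_el, p0=(7.0, 73.0, 5.0))

# 3- / gamma 2+ doublet: keep sigma and the two centroids fixed
# (sigma from the fit above, the splitting from the level energies), fit only B.
x1 = 207.6
x2 = x1 + (1086.0 - 1041.0) / KEV_PER_CHANNEL
(B,), _ = curve_fit(lambda x, B: double_gauss(x, B, x1, x2, sigma),
                    channels_in, counts_in, p0=(0.7,))

print("A =", A, " x0 =", x0, " sigma =", sigma)
print("B =", B, "  B/A =", B / A)
```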
Based on Eqs.(<ref>) and (<ref>) we find the ratio of spectral weights
S(3^-)/S(2^+)=B/A=0.093 .
Here 2^+ refers to the rotational 2^+ state of the ground-state band.
Interestingly, the analysis also gives the spectral weight of the
γ 2^+ state. This allows one to determine the magnitude of the
γ-deformation. However, this issue is irrelevant to the Schiff
moment and therefore we do not analyse it further.
The Coulomb potential of the nucleus at r ≈ R_0 is about 15 MeV. This is
significantly smaller than the electron energy 76 MeV. Therefore, the electron
wave function can be considered as a plane wave. The momentum transfer is
q=2p sin(93.5°/2)≈ 111 MeV≈ 0.562 fm^-1 .
Using the expansion of the plane wave in spherical harmonics together with
the Wigner-Eckart theorem, the spectral weights can be expressed as integrals
in the co-rotating reference frame
S(2^+) ∝ |∫ Y_20j_2(qr)ρ(r) dV|^2
S(3^-) ∝ |∫ Y_30j_3(qr)δρ(r) dV|^2
Here j_l(qr) is the spherical Bessel function <cit.>,
ρ(r) is the density with quadrupolar deformation, and
δρ is given by Eq.(<ref>).
The coefficient of proportionality in both equations (<ref>) is the same
and therefore we omit it. Evaluation of the integrals in (<ref>)
is straightforward; it gives
∫ Y_20j_2(qr)ρ(r) dV ∝β_2j_2(qR_0)=0.302β_2
∫ Y_30j_3(qr)δρ(r) dV ∝β_3j_3(qR_0)=0.205β_3
Comparing the theoretical ratio with its experimental value (<ref>) and
using the known quadrupolar deformation we find the octupolar deformation
β_3=0.45β_2=0.130.
In the previous paragraph we used the plane-wave approximation
for the electron wave function, neglecting the Coulomb potential ≈ 15 MeV
compared to the electron energy 76 MeV. A simple way to estimate the
Coulomb correction is to change q→ q^'≈ q(1+15/76)=0.673 fm^-1.
This results in β_3=0.090. Probably this simplistic recipe overestimates the
effect of the Coulomb potential. An accurate calculation with distorted electron
wave functions would allow one to determine β_3 very accurately.
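The arithmetic of this and the preceding paragraphs is easy to retrace. The sketch below (Python/scipy) computes the momentum transfer, the spherical Bessel factors and the resulting β_3; the only input not stated explicitly above is the nuclear radius, for which we assume R_0 = r_0 A^{1/3} with r_0 ≈ 1.2 fm.

```python
import numpy as np
from scipy.special import spherical_jn

HBARC  = 197.327                 # MeV fm
E      = 76.0                    # electron energy, MeV
THETA  = np.radians(93.5)        # scattering angle
A      = 152
R0     = 1.2 * A**(1.0/3.0)      # assumed: r0 ~ 1.2 fm  ->  R0 ~ 6.4 fm
beta2  = 0.287                   # quadrupole deformation from the same data
ratio  = 0.093                   # S(3-)/S(2+) = B/A from the fit

def beta3(q):
    j2 = spherical_jn(2, q * R0)          # enters S(2+) as (beta2*j2)^2
    j3 = spherical_jn(3, q * R0)          # enters S(3-) as (beta3*j3)^2
    return np.sqrt(ratio) * (j2 / j3) * beta2

q = 2 * E * np.sin(THETA / 2) / HBARC     # plane-wave momentum transfer, fm^-1
print("q =", round(q, 3), "fm^-1")                        # ~ 0.56
print("beta3 (plane wave)        =", round(beta3(q), 3))  # ~ 0.13

q_c = q * (1 + 15.0 / 76.0)               # crude Coulomb-corrected momentum
print("beta3 (Coulomb corrected) =", round(beta3(q_c), 3))  # ~ 0.09
```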
For now we take
β_3= 0.10
§ T- AND P-ODD MIXING OF 5/2^+ AND 5/2^- STATES IN ^153EU
The operator of the T, P-odd interaction reads <cit.>
H_TP=ηG/2√(2)mσ⃗·∇⃗ρ
Here G≈ 1.03× 10^-5/m^2 is the Fermi constant, η is a dimensionless
constant characterising the interaction, σ⃗ is the Pauli matrix
corresponding to the spin of the unpaired nucleon, and ρ is the nuclear
number density.
The single particle matrix element of H_TP between the Nilsson states can be
estimated as, see Ref. <cit.>,
⟨ 532|H_TP|413⟩ ∝ ⟨ 532|∇ρ|413⟩ ∝ ⟨ 532|∇ U|413⟩
∝ ⟨ 532|[p, H]|413⟩ ∝ (E_532-E_413) ⟨ 532|p|413⟩
∝ (E_532-E_413) ⟨ 532|[r,H]|413⟩
∝ (E_532-E_413)^2 ⟨ 532|r|413⟩
Thus, the matrix element is suppressed by the small parameter (Δ E/ω_0)^2,
with Δ E ≈ 100keV and ω_0 ≈ 8MeV. Hence,
the single particle matrix element can be neglected.
The matrix element between the physical states (<ref>) also contains the
collective octupole contribution
⟨ -|H_TP|+⟩= -α⟨ 532 5/2|⟨ 1|H_TP|0⟩|532 5/2⟩
-α⟨ 413 5/2|⟨ 0|H_TP|1⟩|413 5/2⟩ .
Integrating by parts we transform this to
⟨ -|H_TP|+⟩ = αη G/(2√2 m) ∫[ρ_532(r)+ρ_413(r)]δρ(r) dV ,
ρ_532(r) = ∂_z⟨ 532|σ_z|532⟩ ,
ρ_413(r) = ∂_z⟨ 413|σ_z|413⟩ .
Here δρ is the octupole density (<ref>). Note that the “spin densities”
ρ_532 and ρ_413 depend on r; the brackets ⟨..⟩ in the
definition of the densities in (<ref>) denote averaging over spin only.
Note also that the “spin densities” are T-odd. Therefore, the BCS factor
practically does not influence them.
Numerical evaluation of the integrals in (<ref>) with the Nilsson wave functions (<ref>) is straightforward; the result is
⟨ -|H_TP|+⟩ = αηβ_3 · G/(2√2 m) · 3A/(4π R_0^4) · [ I_413+I_532] .
The dimensionless integrals I_413 and I_532 are plotted in Fig.<ref> versus β_2.
At the physical deformation β_2=0.29, Eq.(<ref>), the values are
I_413=-0.66 and I_532=-1.05. Hence we arrive at the following mixing
matrix element
⟨ - |H_TP|+⟩=-0.24 αηβ_3 eV .
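The numerical coefficient follows from straightforward unit bookkeeping. A short Python check, taking the dimensionless integrals quoted above and assuming R_0 = 1.2 A^{1/3} fm (this radius is not stated explicitly in the text), is:

```python
import numpy as np

HBARC = 197.327                    # MeV fm
m     = 940.0                      # nucleon mass, MeV
G     = 1.03e-5 / m**2             # Fermi constant, MeV^-2
A     = 153
R0    = 1.2 * A**(1.0/3.0) / HBARC # assumed nuclear radius, converted to MeV^-1
I413, I532 = -0.66, -1.05          # dimensionless integrals at beta2 = 0.29

# <-|H_TP|+> = alpha*eta*beta3 * G/(2 sqrt(2) m) * 3A/(4 pi R0^4) * (I413 + I532)
prefactor = G / (2*np.sqrt(2)*m) * 3*A / (4*np.pi*R0**4) * (I413 + I532)   # in MeV
print(round(prefactor * 1e6, 3), "eV  (coefficient of alpha*eta*beta3)")   # ~ -0.24
```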
§ ELECTRIC DIPOLE MOMENT OF ^153EU NUCLEUS
We need to determine the signs in Eq.(<ref>) and Eq.(<ref>).
In our notation β_3 > 0 corresponds to the pear orientation with
respect to the z-axis shown in Fig.<ref>.
According to Refs. <cit.> the protons are
shifted in the positive z-direction. Hence d_z in Eq.(<ref>) is
positive. Therefore, using Eqs.(<ref>), we conclude that the sign in
Eq.(<ref>) is negative.
With Eqs.(<ref>) and (<ref>) we find the T,P-odd electric dipole moment
in the ground state.
d^TP_z = 2⟨+|d_z|-⟩⟨-|H_TP|+⟩/(E_+-E_-)
= -0.59× 10^-6αβ_3η [e · fm]
= -1.18× 10^-8η [e· fm] .
For the numerical value we take α=0.20, see Eq.(<ref>), and
β_3=0.10, see Eq.(<ref>).
Eq. (<ref>) gives the EDM in the co-rotating reference frame. The EDM in
the laboratory reference frame is
d^TP=5/7d^TP_z = -0.84× 10^-8η [e · fm]
This EDM is comparable with that of a heavy spherical nucleus, see
Ref. <cit.>
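The numbers above follow directly from the quoted coefficient together with the adopted values α=0.20 and β_3=0.10; a minimal check:

```python
alpha, beta3 = 0.20, 0.10
d_intr = -0.59e-6 * alpha * beta3        # EDM in the co-rotating frame, e fm
print(d_intr)                            # ~ -1.18e-08
print(5.0 / 7.0 * d_intr)                # lab-frame EDM, ~ -0.84e-08 e fm
```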
§ SCHIFF MOMENT OF ^153EU NUCLEUS
The operator of the Schiff moment (SM) reads <cit.>
Ŝ_z=1/10[∫ρ_q r^2 z dV -5/3r_q^2d_z]
It is a vector. Here ρ_q is the charge density and
r_q^2 ≈3/5R_0^2
is the rms electric charge radius squared.
With the static octupole deformation (<ref>) the 1st
term in (<ref>) is
S_intr=(1/10)∫ρ_q r^2 z dV = 9/(20π√35) · eZR_0^3β_2β_3
Here we use the same notation S_intr as that in
Refs.<cit.>.
The matrix element of the first term in (<ref>) between the states
(<ref>) is
⟨ +|Ŝ_1z|-⟩ = -2α S_intr
= -α · 9/(10π√35) · eZR_0^3β_2β_3
Combining this with Eq.(<ref>) we find the expectation value over the
ground state
⟨ +|Ŝ_1z|+⟩ =
2⟨+|Ŝ_1z|-⟩⟨-|H_TP|+⟩/(E_+-E_-)
= -0.24× 10^-6 e Z R_0^3 α^2β_2β_3^2 η
Hence, the Schiff moment is
S_z = ⟨ +|Ŝ_z|+⟩ =⟨ +|Ŝ_1z|+⟩-
1/10R_0^2d_z^TP
= [-4.0× 10^-3α^2β_2β_3^2
+2.4× 10^-6αβ_3]η [e · fm^3]
= -4.16×10^-7η [e · fm^3]
For the final numerical value we take α=0.20, see Eq.(<ref>),
β_2=0.29, see Eq.(<ref>) and β_3=0.10, see Eq.(<ref>).
Note that the first term in the middle line of Eq.(<ref>) is proportional
to α^2β_3^2 and at the same time the second term is
proportional to αβ_3. This is because one power of αβ_3
is “hidden” in the experimental dipole matrix element (<ref>).
The second term is just about 10% of the first one.
Eq. (<ref>) gives the Schiff moment in the co-rotating reference frame.
The Schiff moment in the laboratory reference frame is
S=5/7S_z = -2.97× 10^-7η [e · fm^3]
This result is fairly reliable; the major uncertainty, about a
factor of 2, is due to the uncertainty in the value of β_3.
A more accurate analysis of the inelastic electron scattering
data <cit.>, see Section V, could reduce this uncertainty.
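The chain of numbers leading to the Schiff moment can be retraced in a few lines. In the sketch below the values Z=63 and R_0 = 1.2 A^{1/3} fm ≈ 6.4 fm are assumptions (they are not stated explicitly above); everything else is taken from the text.

```python
import numpy as np

alpha, beta2, beta3 = 0.20, 0.29, 0.10
Z, A = 63, 153
R0 = 1.2 * A**(1.0/3.0)                          # fm (assumed r0 = 1.2 fm)

# First term: <+|S_1z|+> = -0.24e-6 * e Z R0^3 * alpha^2 beta2 beta3^2 * eta
c1 = -0.24e-6 * Z * R0**3                        # -> ~ -4.0e-3  [e fm^3]
# Second term: -(1/10) R0^2 d_z^TP, with d_z^TP = -0.59e-6 alpha beta3 [e fm]
c2 = -(0.1 * R0**2) * (-0.59e-6)                 # -> ~ +2.4e-6  [e fm^3]
print("coefficients:", c1, c2)

S_body = c1 * alpha**2 * beta2 * beta3**2 + c2 * alpha * beta3
print("S_z (body frame) =", S_body, "eta e fm^3")                     # ~ -4.2e-7
print("2nd/1st term     =",
      abs(c2*alpha*beta3 / (c1*alpha**2*beta2*beta3**2)))             # ~ 0.10
print("S (lab frame)    =", 5.0/7.0 * S_body, "eta e fm^3")           # ~ -3.0e-7
```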
In ^151Eu the energy splitting E_- - E_+ is 3.5 times larger than that
in ^153Eu, and the quadrupolar deformation is 2.5 times smaller.
Therefore, the Schiff moment is at least an order of magnitude smaller
than that of ^153Eu.
Unfortunately, there are not enough data for an accurate calculation in
^151Eu.
Another interesting deformed nucleus is ^237Np.
Performing a simple rescaling from our result for ^153Eu we get the
following estimate of ^237Np Schiff Moment,
S ∼-1.5× 10^-6η [e · fm^3]. This is 40 times larger than
the single particle estimate <cit.>.
Of course, following our method and using ^238U as a reference nucleus
(like the pair ^153Eu, ^152Sm in the present work), one can
perform an accurate calculation of the ^237Np Schiff moment.
Data for ^238U are available in Ref. <cit.>.
§ CONCLUSIONS
The Hamiltonian of the nuclear time- and parity-violating interaction is
defined by Eq.(<ref>). For the connection of the dimensionless
interaction constant η with the QCD axion θ-parameter see
Ref. <cit.>. The Hamiltonian (<ref>) leads to the Schiff moment
of a nucleus.
In the present work we have developed a new method to calculate the Schiff moment
of an even-odd deformed nucleus.
The method is essentially based on experimental data on magnetic moments and
E1,E3-amplitudes in the given even-odd nucleus and in adjacent even-even
nuclei. Unfortunately, such sets of data are not yet known for most
nuclei of interest.
Fortunately, the full set of necessary data exists for ^153Eu.
Hence, using the new method, we perform the calculation for ^153Eu.
The result is given by Eq.(<ref>).
The theoretical uncertainty of this result, about factor 2, is mainly due to
the uncertainty in the value of the octupole deformation.
A more sophisticated analysis of available electron scattering
data can further reduce the uncertainty.
The Schiff moment (<ref>) is about 20-50 times larger than that in
heavy spherical nuclei <cit.>, and it is
3 times larger than what Ref. <cit.>
calls the “conservative estimate”.
On the other hand, it is a factor of 30 smaller than the result of
Ref. <cit.> based on the model of a static octupole deformation.
Using the calculated value of the Schiff moment we rescale the results of
Ref. <cit.> for the energy shift of the ^153Eu nuclear spin
and for the effective electric field in the EuCl_3· 6H_2O compound.
The result of the rescaling is
δ E_o= 0.9× 10^-9θ [eV]
E_o^*=0.3 MV/cm
These are figures of merit for the proposed <cit.>
Cosmic Axion Spin Precession Experiment with EuCl_3· 6H_2O.
§ ACKNOWLEDGEMENT
I am grateful to A. O. Sushkov for stimulating discussions and interest in this
work. This work has been supported by the Australian Research Council Centre
of Excellence in Future Low-Energy Electronics Technology (FLEET)
(CE170100039).
§ NILSSON WAVE FUNCTIONS
Parameters of the deformed oscillator potential used in the Nilsson model are
ω_z=ω_0(1-2/3δ) , z_0=1/√(mω_z)
ω_ρ=ω_0(1+1/3δ) , ρ_0=1/√(mω_ρ)
ω_0=41MeV/A^1/3
where m≈ 940MeV is the nucleon mass.
The parameter δ is related to the β_2 used in the main text as
δ = 3√5/(4√π) β_2 ≈ 0.946 β_2 .
The oscillator wave functions defined in Ref. <cit.> are, with z̃ = z/z_0 and ρ̃ = ρ/ρ_0,
|0⟩_z = (√π z_0)^{-1/2} e^{-z̃^2/2}
|1⟩_z = √2 (√π z_0)^{-1/2} z̃ e^{-z̃^2/2}
|2⟩_z = (2√π z_0)^{-1/2} [2z̃^2-1] e^{-z̃^2/2}
|3⟩_z = (3√π z_0)^{-1/2} z̃[2z̃^2-3] e^{-z̃^2/2}
|2,2⟩_ρ = (2π)^{-1/2} ρ_0^{-1} ρ̃^2 e^{-ρ̃^2/2} e^{2iφ}
|3,3⟩_ρ = (6π)^{-1/2} ρ_0^{-1} ρ̃^3 e^{-ρ̃^2/2} e^{3iφ}
|4,2⟩_ρ = (6π)^{-1/2} ρ_0^{-1} ρ̃^2(ρ̃^2-3) e^{-ρ̃^2/2} e^{2iφ}
|5,3⟩_ρ = (24π)^{-1/2} ρ_0^{-1} ρ̃^3(ρ̃^2-4) e^{-ρ̃^2/2} e^{3iφ}
The Nilsson wave functions for the quadrupolar deformation δ=0.3,
written in the oscillator basis (<ref>), are <cit.>
|413 5/2⟩ = 0.938|1⟩_z|3,3⟩_ρ|↓⟩
-0.342|2⟩_z|2,2⟩_ρ|↑⟩
+ 0.054|0⟩_z|4,2⟩_ρ|↑⟩
|532 5/2⟩ =
0.861|3⟩_z|2,2⟩_ρ|↑⟩
+0.397|2⟩_z|3,3⟩_ρ|↓⟩
+ 0.310|1⟩_z|4,2⟩_ρ|↑⟩
+0.075|0⟩_z|5,3⟩_ρ|↓⟩
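As a consistency check on the formulas above, the normalisations of the oscillator factors and of the Nilsson expansion coefficients can be verified numerically; the oscillator lengths drop out of the dimensionless integrals, so no physical parameters are needed.

```python
import numpy as np
from scipy.integrate import quad

# Axial factors |n>_z written as P_n(t) exp(-t^2/2)/sqrt(sqrt(pi) z_0), t = z/z_0,
# with the remaining numerical constants pulled into P_n.
z_polys = [lambda t: 1.0,
           lambda t: np.sqrt(2.0) * t,
           lambda t: (2*t*t - 1) / np.sqrt(2.0),
           lambda t: t * (2*t*t - 3) / np.sqrt(3.0)]
for n, P in enumerate(z_polys):
    val, _ = quad(lambda t: P(t)**2 * np.exp(-t*t) / np.sqrt(np.pi),
                  -np.inf, np.inf)
    print("z-factor", n, "norm:", round(val, 6))          # each ~ 1.0

# Radial factors |n,Lambda>_rho = (pi C)^(-1/2) rho_0^(-1) Q(t) e^{-t^2/2} e^{i Lambda phi};
# the normalisation integral reduces to (2/C) Int Q^2 e^{-t^2} t dt.
rho_funcs = {(2, 2): (lambda t: t**2,            2.0),
             (3, 3): (lambda t: t**3,            6.0),
             (4, 2): (lambda t: t**2*(t*t - 3),  6.0),
             (5, 3): (lambda t: t**3*(t*t - 4), 24.0)}
for key, (Q, C) in rho_funcs.items():
    val, _ = quad(lambda t: Q(t)**2 * np.exp(-t*t) * t * 2.0/C, 0, np.inf)
    print("rho-factor", key, "norm:", round(val, 6))      # each ~ 1.0

# Nilsson expansion coefficients of |413 5/2> and |532 5/2>:
print(round(sum(c*c for c in (0.938, -0.342, 0.054)), 4))          # ~ 1.0
print(round(sum(c*c for c in (0.861, 0.397, 0.310, 0.075)), 4))    # ~ 1.0
```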
§ REFERENCES
Ramsey1982
N. F. Ramsey, Ann. Rev. Nucl. Part. Sci. 32, 211 (1982).
Serebrov2015
A. P. Serebrov et al,
Physics of Particles and Nuclei Letters 12, 286 (2015).
Abel2020
C. Abel et al., Physical Review Letters 124, 081803 (2020), arXiv:2001.11966.
Sushkov1984
O. P. Sushkov, V. V. Flambaum, and I. B. Khriplovich, Sov. Phys. JETP 60, 873 (1984).
Schiff1963 L. I. Schiff, Physical Review 132, 2194 (1963).
Budker2014 D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran,
and A. O. Sushkov, Phys. Rev. X 4, 21030 (2014).
Mukhamedjanov2005 T. N. Mukhamedjanov and O. P. Sushkov,
Physical Review A 72, 34501 (2005).
ASushkov2023a A. O. Sushkov, arXiv:2304.12105.
ASushkov2023
A. O. Sushkov, O. P. Sushkov, A. Yaresko,
Phys. Rev. A 107, 062823 (2023); arxiv 2304.08461.
Auerbach1996 N. Auerbach, V. V. Flambaum, and V. Spevak,
Physical Review Letters, 76, 4316 (1996).
Flambaum2020 V. V. Flambaum and H. Feldmeier.
Phys. Rev. C 101, 015502 (2020).
Firestone1999
Richard B Firestone. Table of Isotopes. Ed. by S Y Frank Chu and Coral M Baglin. 1999.
LL4
Quantum Electrodynamics: Volume 4 (Course of Theoretical Physics) 2nd Edition
by V B Berestetskii, E.M. Lifshitz, and L. P. Pitaevskii.
LL3
Quantum Mechanics: Non-Relativistic Theory 3rd Edition
by L. D. Landau, E. M. Lifshitz.
Lamm1969
I. L. Lamm, Nuclear Physics A 125, 504 (1969).
Kemah2022 E. Kemah, E. Tabar, H. Yakut, G. Hosgor
https://dergipark.org.tr/en/pub/saufenbilder/article/1123474
BohrMottelson
Aage Bohr and Ben R. Mottelson. Nuclear Structure. World Scientific, 1998
Sushkov1993 O. P. Sushkov and V. B. Telitsin,
Phys. Rev. C 48, 1069 (1993).
Leander1986
G. A. Leander, W. Nazarewicz, G. F. Bertsch, and J.
Dudek, Nucl. Phys. A 453, 58 (1986).
Dorso1986
C. O. Dorso, W. D. Myers, and W. J. Swiatecki, Nucl.
Phys. A 451, 189 (1986).
Butler1991 P. A. Butler and W. Nazarewicz, Nucl. Phys. A 533,
249 (1991).
Bertozzi1972
W. Bertozzi, T. Cooper, N. Ensslin, J. Heisenberg, S. Kowalski, M. Mills,
W. Turchinetz, C. Williamson, S. P. Fivozinsky, J. W. Lightbody, Jr., and S.
Penner, Phys. Rev. Lett. 28, 1711 (1972).
Hirsch1978
A. Hirsch, C. Creswell, W. Bertozzi, J. Heisenberg, M. V. Hynes, S. Kowalski,
H. Miska, B. Norum, F. N. Had, C. P. Sargent, T. Sasanuma, and W. Turchinetz,
Phys. Rev. Lett. 40, 632 (1978).
Ebata2017 S. Ebata and T. Nakatsukasa
Physica Scripta 92, 064005 (2017).
Zhang2010 W. Zhang, Z. P. Li, S. Q. Zhang, and J. Meng,
Phys. Rev. C 81, 034302 (2010).
Spevak1997
V. Spevak, N. Auerbach, and V. V. Flambaum. Phys. Rev. C 56, 1357 (1997).
|
http://arxiv.org/abs/2307.03950v1 | 20230708105034 | Mod 2 instanton homology and 4-manifolds with boundary | [
"Kim A. Frøyshov"
] | math.GT | [
"math.GT",
"math.DG"
] |
Mod 2 instanton homology and 4-manifolds
with boundary
Kim A. Frøyshov
======================================================
Using instanton homology with coefficients in /2 we construct a
homomorphism from the homology cobordism group to the integers
which is not a rational linear combination of
the instanton h–invariant and the Heegaard Floer
correction term d. If an oriented
homology 3–sphere Y bounds a smooth, compact,
negative definite 4–manifold without 2–torsion in its homology
then (Y)≥0, with strict inequality if the intersection form
is non-standard.
§ INTRODUCTION
This paper will introduce an integer invariant q_2(Y) of oriented
integral homology 3–spheres Y. This invariant is defined in terms of
instanton cohomology with coefficients in /2 and may be regarded as a
mod 2 analogue of the h–invariant <cit.>, which was defined with
rational coefficients. Both invariants grew out of efforts to extend
Donaldson's diagonalization theorem <cit.> to 4–manifolds with
boundary.
We will use the instanton (co)homology originally introduced by Floer
<cit.>, an exposition of which can be found in <cit.>. With
coefficients in /2, instanton cohomology I(Y;/2) comes equipped with
some extra structure, namely two “cup products” u_2 and u_3 of degrees
2 and 3, respectively, and homomorphisms
δ_0: I^4(Y;/2)⟶/2 and δ_0': /2⟶ I^1(Y;/2)
counting index 1 trajectories running into and out of the trivial flat
SU(2) connection, respectively.
This extra structure enters into the definition of the
invariant q_2, which is given in Section <ref>.
Reversing the rôles of the cup products u_2,u_3 in the definition
yields another invariant q_3. However, the present paper will focus on
q_2.
It would be interesting to try to express the invariants h,q_2,q_3 in terms of
the equivariant instanton homology groups recently introduced by Miller Eismeier
<cit.>.
We now describe some properties and applications of q_2.
For any oriented homology 3–spheres Y_0 and Y_1 one has
q_2(Y_0#Y_1)=q_2(Y_0)+q_2(Y_1).
The proof of additivity is not quite straightforward and occupies more
than half the paper.
Theorem (Monotonicity).
Let W be a smooth compact oriented 4-manifold with boundary
W=(-Y_0)∪ Y_1, where Y_0 and Y_1 are oriented homology
3–spheres. Suppose the intersection form of W is negative
definite and H^2(W;) contains no element of order 4. Then
q_2(Y_0)≤ q_2(Y_1).
If the manifold W in the theorem actually satisfies b_2(W)=0 then one can
apply the theorem to -W as well, so as to obtain q_2(Y_0)=q_2(Y_1).
This shows that q_2 descends to a group homomorphism
q_2:Θ→ℤ, where Θ denotes the integral homology cobordism group.
far also hold for the instanton
h–invariant, the negative of its monopole analogue <cit.>, and
the Heegaard Floer correction term d. Note that the latter three
invariants are monotone
with respect to any negative definite cobordism, without any assumption on the
torsion in the cohomology.
Theorem (Lower bounds).
Let X be a smooth compact oriented 4-manifold whose boundary
is a homology sphere Y. Suppose the intersection form of X is negative
definite and H^2(X;) contains no 2-torsion. Let
J_X:=H^2(X;)/torsion,
and let w be an element of J_X which is not divisible by 2.
Let k be the minimal square norm (with
respect to the intersection form) of any element
of w+2J_X. Let n be the number of elements of w+2J_X of square norm k.
If k≥2 and n/2 is odd then
q_2(Y)≥ k-1.
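For a concrete lattice the quantities k and n are easy to compute. The sketch below works with the positive definite form -J_X; as an illustration it uses the E_8 lattice in its standard even-coordinate model and a hypothetical choice of w (a root of E_8), for which it finds k=2 and n=2, so n/2 is odd and the theorem would give a lower bound of 1. The norm cut-off of 4 is enough for E_8; for other lattices the enumeration box would have to be enlarged.

```python
import itertools

# E8 in the even coordinate system: integer vectors, or vectors with all
# coordinates in Z+1/2, whose coordinate sum is even.
def in_E8(v):
    ints = all(abs(x - round(x)) < 1e-9 for x in v)
    halfs = all(abs((x - 0.5) - round(x - 0.5)) < 1e-9 for x in v)
    s = sum(v)
    return (ints or halfs) and abs(s - round(s)) < 1e-9 and round(s) % 2 == 0

# All E8 vectors of norm <= 4 (every nonzero coset of 2*E8 contains one).
def short_vectors():
    for c in itertools.product(range(-2, 3), repeat=8):             # integer type
        if sum(c) % 2 == 0 and 0 < sum(x*x for x in c) <= 4:
            yield tuple(float(x) for x in c)
    for c in itertools.product((-1.5, -0.5, 0.5, 1.5), repeat=8):   # half-integer type
        if round(sum(c)) % 2 == 0 and sum(x*x for x in c) <= 4:
            yield c

w = (1.0, 1.0, 0, 0, 0, 0, 0, 0)      # hypothetical choice of w: a root of E8

k, n = None, 0
for v in short_vectors():
    if in_E8(tuple((vi - wi) / 2 for vi, wi in zip(v, w))):         # v in w + 2*E8
        q = sum(x*x for x in v)
        if k is None or q < k - 1e-9:
            k, n = q, 1
        elif abs(q - k) < 1e-9:
            n += 1

print("k =", k, " n =", n)            # expect k = 2, n = 2
```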
By an integral lattice we mean a free abelian group of finite rank
equipped with
a symmetric bilinear integer-valued form. Such a lattice is called
odd if it contains an element of odd
square; otherwise it is called even.
Corollary.
Let X be as in Theorem <ref>.
Let J̃_X⊂ J_X be the orthogonal complement of the sublattice of J_X
spanned by all vectors of square -1, so that J_X is an orthogonal sum
J_X = m⟨-1⟩ ⊕ J̃_X
for some non-negative integer m.
(i) If J̃_X≠0, i.e. if J_X is not diagonal, then q_2(Y)≥1.
(ii) If J̃_X is odd then q_2(Y)≥2.
To deduce (i) from the theorem, take C:=v+2J_X, where v is any non-trivial
element of
J̃_X of minimal square norm. To prove (ii), choose v with minimal odd
square norm.
Theorem.
Let Y be the result of (-1) surgery on a knot K in S^3. If changing
n^- negative crossings in a diagram for K produces a positive knot then
0≤ q_2(Y)≤ n^-.
For k≥2 the Brieskorn sphere Σ(2,2k-1,4k-3) is the boundary of a
plumbing manifold with intersection form -Γ_4k (see
Section <ref>), and it is also
the result of (-1)
surgery on the (2,2k-1) torus knot. In these examples
the upper bound on q_2 given by
Theorem <ref> turns out to coincide with the lower bound
provided by Theorem <ref>, and one obtains the following.
For k≥2 one has
q_2(Σ(2,2k-1,4k-3))=k-1.
On the other hand, by <cit.> one has
h(Σ(2,2k-1,4k-3))=⌊ k/2⌋,
and in these examples the correction term d satisfies d=h/2, as follows
from <cit.>. This shows:
The invariant q_2 is not a rational linear combination of the
h–invariant and the correction term d.□
In particular,
h, q_2:Θ→ℤ
are linearly independent homomorphisms, and the same is true for d, q_2.
It follows from this that Θ has a ℤ^2 summand. However, much more is
true: Dai, Hom, Stoffregen, and Truong <cit.> proved that Θ
has a ℤ^∞ summand. Their proof uses involutive Heegaard Floer homology.
The monotonicity of the invariants h, d, q_2 leads to the following result.
Let Y be an oriented homology 3-sphere. If
min(h(Y),d(Y))<0<q_2(Y)
then Y does not bound any definite 4-manifold without elements of order 4
in its second cohomology.
An explicit example to which the theorem applies is 2Σ(2,5,9) # -3Σ(2,3,5).
A related result was obtained by Nozaki, Sato, and Taniguchi <cit.>.
Using a filtered version of instanton homology they proved that certain linear
combinations of Brieskorn homology 3–spheres do not bound any definite
4–manifold.
If an oriented homology 3-sphere Y satisfies
h(Y)≤0<q_2(Y)
then I^5(Y;ℤ) contains 2–torsion, hence Y is not homology cobordant
to any Brieskorn sphere Σ(p,q,r).
We conclude this introduction with two sample applications of the invariant
q_2.
Let X be a smooth compact oriented connected 4-manifold whose boundary
is the Poincaré sphere Σ(2,3,5).
Suppose the intersection form of X is negative definite.
Let J̃_X be as in Corollary <ref>.
(i) If J̃_X is even then J̃_X=0 or -E_8.
(ii) If J̃_X is odd then H^2(X;ℤ)
contains an element of order 4.
Earlier versions of this result were obtained using instanton homology
in <cit.> (assuming X
is simply-connected) and in <cit.> (assuming X has no 2–torsion
in its homology).
There are up to isomorphism two even, positive
definite, unimodular forms of rank 16, namely 2E_8 and Γ_16.
If Z denotes the negative definite E_8–manifold then the boundary
connected sum Z#_∂ Z has intersection form -2E_8.
It is then natural to ask whether Σ(2,3,5)#Σ(2,3,5) also bounds
-Γ_16.
There appears to be no obstruction to this coming from
the correction term.
Let X be a smooth compact oriented 4-manifold whose boundary
is Σ(2,3,5)#Σ(2,3,5). Suppose the intersection form of X is negative
definite and H^2(X;ℤ) contains no 2–torsion. If J̃_X is even then
J̃_X=0, -E_8, or -2E_8.
Further results on the definite forms bounded by a given homology 3–sphere
were obtained by Scaduto <cit.>.
Some of the results of this paper were announced in various talks several years
ago. The author apologizes for the long delay in publishing the results.
§ THE BASE-POINT FIBRATION
Let X be a connected smooth n–manifold, possibly with boundary, and
P→ X a principal 3 bundle. Fix p>n and let A be a p1
connection in P. This means that A differs from a smooth connection by a
1–form which lies locally in L^p_1. Let _A be the group of
p2 automorphisms (or gauge transformations) of P that preserve A.
The connection A is called
* irreducible if _A={1}, otherwise reducible;
* Abelian if _A≈1;
* twisted reducible if _A≈/2.
Note that a non-flat reducible connection in P is either
Abelian or twisted reducible.
Recall that automorphisms of P can be regarded as sections of the bundle
P×_SO(3) SO(3) of Lie groups, where SO(3) acts on itself by
conjugation. An automorphism is called even if it lifts to a
section of P×_SO(3) SU(2). A connection A in P is called
even-irreducible if its stabilizer _A contains no non-trivial
even automorphisms; otherwise A is called even-reducible.
A non-flat connection is even-reducible if and only if it is Abelian.
Now suppose X is compact and let be the space of
all L^p_1 connections in P. The affine Banach space is acted upon
by the Banach Lie group consisting of all L^p_2 automorphisms of P.
Let ^*⊂ be subset of irreducible connections and define
=/. The irreducible part ^*⊂ is a Banach manifold,
and it admits smooth partitions of unity provided p>n is an even integer,
which we assume from now on. Instead of ^* we often write ^*(P),
or ^*(X) if the bundle P is trivial. Similarly for , etc.
Let ^* be the space of all even-irreducible
L^p_1 connections in P.
Let be the group of even p2
automorphisms of P. As explained in <cit.>, there is an exact
sequence
1→→→ H^1(X;/2)→0.
The quotient ^*=^*/ is a Banach manifold.
Let X be a topological space.
(i) A class v∈ H^2(X;/2) is called admissible if
v has a non-trivial pairing with a class in H_2(X;), or equivalently,
if there exist a closed oriented 2–manifold and a continuous map
f:→ X such that f^*v≠0. If and f can be chosen such that,
in addition,
f^*a=0 for every a∈ H^1(X;/2),
then v is called
strongly admissible.
(ii) An 3 bundle E→ X is called
(strongly) admissible if
the Stiefel-Whitney class w_2(E) is (strongly) admissible.
For example, a finite sum v=∑_ia_i∪ b_i with
a_i,b_i∈ H^1(X;/2) is never strongly admissible.
Let X be a compact, oriented, connected smooth 4–manifold with base-point
x∈ X. Let P→ X be an 3 bundle.
(i) If P is admissible then the 3 base-point fibration over
^*(P) lifts to a 2 bundle.
(ii) If P is strongly admissible then the 3 base-point
fibration over ^*(P) lifts to a 2 bundle.
We spell out the proof of (ii), the proof of (i) being similar
(or easier). Let be a closed oriented surface and f:→ X
a continuous map such that f^*P is non-trivial and eqn:fa0 holds.
We can clearly arrange that is
connected. Because X≥2 it follows
from <cit.> that f can be uniformly approximated
by (smooth) immersions f_0. Moreover, if the approximation is sufficiently
good then f_0 will be homotopic to f. Therefore, we may assume f is an
immersion.
Since base-point fibrations associated to different base-points in
X are isomorphic we may also assume that x lies in the image of f,
say x=f(z).
We adapt the proof of <cit.>, see also
<cit.>. Let →^*:=^*(P) be the
oriented Euclidean 3–plane bundle associated to the
base-point fibration. We must find an Hermitian 2-plane bundle
such that
is isomorphic to the bundle ^0_
of trace-free
skew-Hermitian endomorphisms of .
Let E→ X be the standard 3–plane bundle associated to P.
Choose an Hermitian 2–plane bundle W→ together with an isomorphism
ϕ:^0_W≈→ f^*E, and fix a connection A_,det
in (W). Any (orthogonal)
connection A in E induces a connection in f^*E
which in turn induces a connection A_ in W with central part
A_,det. Choose a spin structure on and let S^*± be
the corresponding spin bundles over . For any
connection A in E let
_,A:S^+⊗ W→ S^-⊗ W
be the Dirac operator
coupled to A_. If A is an L^p_1 connection, p>4, and A_0 is a
smooth connection in E then A-A_0 is continuous, hence
_,A-_,A_0 defines a bounded operator L^2→ L^2 and
therefore a compact operator L^2_1→ L^2. Let
:=(_,W)
be the determinant line bundle over (E)
associated to the family of Fredholm operators
_,A:L^2_1→ L^2.
Then automorphism (-1) of W acts on with weight equal to the
numerical index of _,A. According to Atiyah-Singer's theorem
<cit.> this index is
(_,A)={ch(W)Â()}·[]=c_1(W)·[].
But the mod 2 reduction of c_1(W) equals f^*(w_2(E)),
which is non-zero by assumption, so the index is odd.
The assumption eqn:fa0 means that every automorphism of E
pulls back to an
even automorphism of f^*E. Moreover, every even automorphism of
f^*E≈^0_W
lifts to an automorphism of W of determinant 1, the lift being well-defined
up to an overall sign since is connected. Because the automorphism
(-1) of W acts trivially on ⊗ W_z this yields an action of
(E) on ⊗ W_z. The quotient
:=(⊗ W_z)/(E)
is a complex 2-plane bundle over ^*(E).
We claim that there is an Hermitian
metric on such that on every fibre _A there is an Hermitian
metric for which the projection _A⊗ W_z→_[A] is an
isometry. To see this, let S⊂(E) be any local slice for the action
of (E), so that S projects diffeomorphically onto an open subset
U⊂^*(E). Choose any Hermitian metric on |_S and let
g_U be the induced Hermitian metric on _U≈(⊗ W_z)|_S.
Now cover ^*(E) by such open sets U and patch together the corresponding
metrics g_U to obtain the desired metric on .
Given any Hermitian metric on a fibre _A there are linear isometries
^0__A⊗ W_z≈→^0_W_z≈→ E_x,
where the first isometry is canonical and independent of the chosen metric
on _A and the second one is given by ϕ. This yields an isomorphism
^0_≈→.□
§ MODULI SPACES
Let P→ Y be a principal 3 bundle, where Y is a closed oriented
3–manifold. The Chern-Simons functional
:(P)→/
is determined up to an additive constant by the property that if A is any
connection in the pull-back of P to the band [0,1]× Y then
(A_1)-(A_0)=∫_[t_0,t_1]× Y F_A∧ F_A,
where A_t denotes the restriction of A to the slice
{t}× Y, and ·∧· is formed by combining the wedge
product on forms with minus the Killing form on the Lie algebra of 3.
If P=Y×3 then we normalize so that its value on
the product connection θ is zero. If v is any automorphism of P
then for any connection B in P one has
(v(B))-(B)=-1/2(v),
where the degree (v) is defined to be the intersection number of v
with the image of the constant section 1.
Equation eqn:csdeg,
up to an overall sign, was stated without proof in <cit.>.
A proof of eqn:csdeg
can be obtained by first observing that the left-hand side of the
equation is independent of B, and both sides define homomorphisms from the
automorphism group of P into . Replacing v by v^2 it then
only remains to verify the equation
for even gauge transformations, which is easy.
If v lifts to a section v of P3×2 then
(v)=2( v),
where ( v) is the intersection number of v with the image
of the constant section 1. In particular, every even automorphism of
P has even degree.
The critical points of the Chern-Simons functional
are the flat connections in P. In practice, we will add a small holonomy
perturbation to as in <cit.>, but this will usually not be
reflected in our notation.
Let (P) denote the space of all critical points of modulo
even automorphisms of P. The even-reducible part of (P) is denoted
by ^*(P). If Y is an (integral) homology sphere then P is
necessarily trivial and we write (Y)=(P).
Now let X be an oriented Riemannian 4–manifold with tubular ends
[0,∞)× Y_i, i=0,…,r, such that the complement of
:=⋃_i [0,∞)× Y_i
is precompact. We review the standard set-up of moduli spaces of anti-self-dual
connections in a principal 3 bundle Q→ X, see <cit.>. Given a
flat connection ρ in Q|_, we define the moduli space
M(X,Q;ρ) as follows. Choose a smooth connection A_0 in Q which agrees
with ρ outside a compact subset of X.
We use the connection A_0 to define Sobolev norms on forms with values
in the adoint bundle _Q of Lie algebras associated to Q.
Fix an even integer p>4.
Let =(Q) be the space of connections in Q of the form A_0+a with
a∈ pw1, where w is a small, positive exponential weight as in
<cit.>. There is a smooth action on by the Banach
Lie group consisting of all p2 gauge transformation u
of Q such that ∇_A_0u· u∈ pw1.
Let :=/ and let M(X,Q;ρ) be the subset of consisting
of gauge equivalence classes of connections A satisfying F^+_A=0.
In practice, we will often add a small holonomy perturbation to the
ASD equation, but this will usually be suppressed from notation.
We observe that the value of the Chern-Simons integral
(Q,ρ):=-1/8π^2∫_X F_A∧ F_A
is the same for all A∈. (If X is closed then the right hand side
of Equation eqn:ka-int equals the value of -p_1(Q) on the
fundamental class of X. This normalization will be convenient in
Section <ref>.)
If u is an automorphism of Q|_ then
from Equations eqn:cs-int-band and eqn:csdeg we deduce that
(Q,u(ρ))-(Q,ρ)=2∑_i(u_i),
where u_i is the restriction of uto the slice {0}× Y_i.
Similarly, for the expected dimensions we have
M(X,Q;u(ρ))-M(X,Q;ρ)=4∑_i(u_i).
On the other hand, if u extends to a smooth automorphism of all of Q
then ∑(u_i)=0,
and the converse holds at least if u is even.
Given the reference connection A_0, we can identify the restriction of the
bundle Q to an end [0,∞)× Y_i with the pull-back of a bundle
P_i→ Y_i.
Let _i∈(P_i) be the element obtained by restricting
ρ to any slice {t}× Y_i where t>0. We will usually
assume that each _i is non-degenerate.
The above remarks show that
the moduli space M(X,Q;ρ) can be specified by the
r–tuple =(_1,…,_r) together with one extra piece of data:
Either the Chern-Simons value =(Q,ρ) or the expected dimension d
of M(X,Q;ρ). We denote such a moduli space by
M_(X,Q;) or M_(d)(X,Q;).
Note that for given there is exactly one moduli space M_(d)(X,Q;)
with 0≤ d≤7; this moduli space will just be denoted by M(X,Q;).
For any anti-self-dual connection A over X, the energy _A(Z)
of A over a measurable subset Z⊂ X is defined by
_A(Z):=-∫_Z F_A∧ F_A
=∫_Z|F_A|^2.
If X= and Z=I× Y for some interval I then we write
_A(I) instead of _A(I× Y).
§ SPACES OF LINEARLY DEPENDENT VECTORS
This section provides background for the definition of the cup product u_2
as well as results which will be used in the proof of
Proposition <ref>.
For any finite-dimensional real vector space V set
L(V):={(v,w)∈ V⊕ Vv,w are linearly dependent in V}.
Then L(V) is closed in V⊕ V and
L^*(V):=L(V)∖{(0,0)}
is a smooth submanifold of V⊕ V of codimension n-1, where n is the
dimension of V.
As a short-hand notation we will often write v∧ w=0 to express that
v,w are linearly dependent.
If B is any smooth Banach manifold and π:E→ B a smooth real vector
bundle of finite rank let L^*(E)→ B be the associated smooth fibre bundle
whose fibre over a point x∈ B is L^*(E_x), where E_x=π(x).
Similarly, let L(E)→ B be the topological fibre bundle with fibre
L(E_x) over x.
Let ℓ→ S^1 be the non-trivial real line bundle such that for
z∈ S^1 the fibre of ℓ over z^2 is the line z in . Let
E:=E× S^1 and ℓ:=B×ℓ be the pull-backs of
the bundles E and ℓ, respectively, to B× S^1. We identify
R^2=, so that (a,b)=a+bi for real numbers a,b.
Let s=(s_1,s_2) be a nowhere vanishing smooth section of E⊕ E.
Let be the section of E⊗ℓ such that for any
p∈ B and z=(x_1,x_2)∈ S^1 one has
(p,z^2)=(x_1s_1(p)+x_2s_2(p))⊗ z.
(i) The projection B× S^1→ B maps the zero-set of
bijectively onto the locus in B where s_1,s_2 are linearly dependent.
(ii) A zero (p,w) of is regular if and
only if s is transverse to L^*(E) at p.
The proof of (i) is left as an exercise. To prove (ii) we may assume
E is trivial, so that s_j is represented by a smooth map f_j:B→ V
for some finite-dimensional real vector space V. We observe that
for any u_1,u_2∈ V and z=(x_1,x_2)∈ S^1 one has
(u_1,u_2)=(x_1u_1+x_2u_2)⊗ z+(x_1u_2-x_2u_1)⊗ iz
as elements of V⊕ V=V⊗_.
It follows that the tangent space of L^*(V) at a point (v_1,v_2)
which satisfies x_1v_1+x_2v_2=0 is given by
T_(v_1,v_2)L^*(V)=V⊗ iz+(x_1v_2-x_2v_1)⊗ z.
Now suppose (p,w) is a zero of and s(p)=(v_1,v_2), z^2=w.
Then eqn:tlv holds. Let L_j:T_pB→ V be the derivative of f_j at p.
Then (p,w) is
a regular zero of precisely when V is spanned by the vector
x_1v_2-x_2v_1 together with the image of the map x_1L_2+x_2L_2.
From eqn:u1u2 we see that
the latter condition is also equivalent to s being transverse to
L^*(V) at p.□
We record here a description of the sections of
E⊗ℓ which
will be used in the proof of Proposition <ref> below.
Let _a( E) denote
the space of all sections s∈( E) such that
s(p,-z)=-s(p,z)
for all (p,z)∈ B× S^1.
Then there is a canonical real linear isomorphism
( E⊗ℓ)→_a( E), ↦
characterized by the fact that
(p,z^2)=(p,z)⊗ z
for all (p,z)∈ B× S^1.□
If B is finite-dimensional, the bundle E has rank 3,
and s is a generic smooth
section of E⊕ E then s(L(E))
represents the Poincaré dual of the second Stiefel-Whitney
class w_2(E) in the following sense. Given any class a∈ H_2(B;/2),
represented by a generic smooth map f:→ B
where is a closed surface, then
a,w_2(E)≡#(s∘ f)(L(E))2.
§ “GENERIC” SECTIONS
Let B be a smooth Banach manifold and π:E→ B a smooth
real vector bundle of finite rank. If B is infinite-dimensional then we
do not define a topology on the space (E) of (smooth) sections of
E, so it makes no sense to speak about residual subsets of (E).
Instead, we will say
a subset Z⊂(E) is “residual” (in quotation marks) if
there is a finite-dimensional subspace ⊂(E) such that
for every finite-dimensional subspace '⊂(E) containing
and every section s of E there is a residual subset
⊂' such that s+⊂ Z. Note that “residual” subsets
are non-empty, and any finite intersection of “residual” subsets is again
“residual”. We will say a given property
holds for a “generic” section of E if it holds for every section belonging
to a “residual” subset of (E).
We indicate one way of constructing such subspaces .
Suppose B supports smooth bump functions, i.e. for any point
x∈ B and any neighbourhood U of x there exists a smooth function
c:B→ such that c(x)≠0 and c=0 outside U. Given a compact subset
K of B, one can easily construct a finite-dimensional subspace
⊂(E) such that, for every x∈ K, the evaluation map
→ E_x, s↦ s(x)
is surjective. Therefore, if we are given a collection of smoooth maps
f_k:M_k→ B, k=1,2,…, where each M_k is a
finite-dimensional manifold and the image of each f_k is
contained in K then, for a “generic” section s of E, the map
s∘ f_k:M_k→ E
is transverse to the zero-section in E for each k.
§ INSTANTON COHOMOLOGY AND CUP PRODUCTS
In this section we will work with 3 connections modulo
even gauge transformation (see Section <ref>),
although this will not be
reflected in our notation.
In particular, we write ^* instead of ^*. This notational
convention applies only to this section.
(In Subsection <ref>, which only deals with
homology spheres, the convention is irrelevant.)
§.§ Instanton cohomology
Let Y be a closed oriented connected 3-manifold and P→ Y an
3 bundle. If Y is not an homology sphere then we assume P is
admissible. For any ,β∈(P) let M(,β) denote the
moduli space of instantons in the bundle × P→ with flat
limits at -∞ and β at ∞ and with expected dimension
in the interval [0,7]. Let
(,β)=M(,β)/,
where acts by translation. If ,β are irreducible then
the relative index
(,β)∈/8 is defined by
(,β)= M(,β)8.
For any commutative ring R with unit we denote by I(P;R) the relatively
/8 graded
instanton cohomology with coefficients in R as defined in
<cit.>. Recall that this is the cohomology of a cochain complex
(C(P;R),d) where C(P;R) is the free R–module generated by ^*(P)
and the differential d is defined by
d=∑_β#(,β)·β.
Here, # means the number of points counted with sign,
and the sum is taken over all β∈^*(P) satisfying
(,β)=1.
If P is admissible then ^*(P)=(P). If instead Y is an homology
sphere then (P)=(Y) contains exactly one reducible point θ,
represented by the trivial connection.
The presence of the trivial connection provides C(P;R)=C(Y;R) with an absolute
/8 grading defined by
()= M(θ,)8.
The trivial connection also gives rise to homomorphisms
C^4(Y;R)→ R'→ C^1(Y;R)
defined on generators by
=#(,θ), 1=∑_β#(θ,β)·β,
where we sum over all β∈^*(Y) of index 1.
These homomorphisms satisfy d=0 and d'=0 and therefore define
I^4(Y;R)_0→ R_0'→ I^1(Y;R).
We conclude this subsection with some notation for energy. If A is any
ASD connection in the bundle Q:=× P and I is any interval then
we write _A(I) instead of _A(I× Y). Moreover, if
,β∈(Y) and the moduli space M(,β) is expressed as
M(,Q;ρ) in the notation of Section <ref> then
we define
(,β):=1/4(Q,ρ),
which equals the total energy of any element of M(,β). (Note,
however, that M(,β) may be empty.)
§.§ Cup products
We continue the discussion of the previous subsection, assuming P is
admissible unless Y is an homology sphere.
In most of this paper
the coefficient ring R will be /2, and we write
I(P):=I(P;/2).
For j=2,3 we will define a degree j endomorphism
u_j:I^*(P)→ I^*+j(P).
Insofar as the Floer cohomology is some kind of Morse
cohomology of ^*(P), one may think of u_j as cup product with
the jth Stiefel-Whitney class of the base-point fibration over ^*(P).
The map u_j will be
induced by an endomorphism
v_j:C^*(P)→ C^*+j(P)
which we now define. For any t∈ set
t:=[t-1,t+1]× Y.
Let P_0=[-1,1]× P denote the pull-back of the bundle P to 0.
For any ,β∈(P) and any irreducible point ∈ M(,β)
let
[t]:=|_Y[t]∈^*(P_0)
denote the restriction of to the band Y[t]. (The fact that [t]
is irreducible follows from
Proposition prop:unique-continuation-cylinder.)
Choose a base-point y_0∈ Y,
and let
→^*(P_0)
be the natural
real vector bundle of rank 3 associated to the base-point (0,y_0)∈0.
To define v_3, choose a “generic” smooth section s_1 of .
For any ,β∈^*(P)
with (β)-()≡38 the matrix coefficient
v_3,β is defined to be
equation
v_3,β:=#{∈M(,β)s_1([0])=0},
where # means the number of points counted modulo 2.
To define v_2, let s_2,s_3 be a pair of smooth sections of which
define a “generic” section of ⊕.
For any ,β∈^*(P)
with (β)-()≡28 the matrix coefficient
v_2,β is defined to be
equation
v_2,β:=
#{∈M(,β)s_2,s_3 are linearly dependent at [0]}.
Note that, for dimensional reasons, s_2 and s_3 cannot simultaneously
vanish at [0] for any ∈ M(,β).
prop
For j=2,3 one has
dv_j=v_jd
as homomorphisms C^*(P)→ C^*+j+1(P).
To prove this for j=2, let ,β∈^*(P) with
(β)-()≡38.
The number of ends of the 1-manifold
{∈ M(,β)s_2,s_3 are linearly dependent at [0]},
counted modulo 2, is (dv_2+v_2d),β. Since the number of ends
must be even, this proves the assertion for j=2. The case j=3 is similar.
□
The homomorphism u_j:I^*(P)→ I^*+j(P) induced by v_j is independent of
the sections s_i. For u_3 this will follow from
Lemma <ref> below,
and a similar argument works for u_2.
We consider again the bundle P_0=[-1,1]× P over Y[0]=[-1,1]× Y.
Let U be an open subset of ^*(P_0) such that for all
,β∈^*(P) with (,β)≤3 and every
∈ M(,β) one has that [0]∈ U. A section s of
|_U is said to satisfy Property 3 if for all ,β
as above the map
M(,β)→, ↦ s([0])
is transverse to the zero-section in .
Let U⊂^*(P_0) be as in Definition <ref>
and suppose s,s' are sections of |_U satisfying Property 3.
Let v_3,v'_3 be the corresponding cup products defined as in
eqn:v3def. Then there is an endomorphism
H:C(P)→ C(P)
such that
v_3+v'_3=dH+Hd.
For a “generic” section of the map
f_β:M(,β)×[0,1]→,
↦(1-t)s([0])+ts'([0])+t(1-t)([0])
is transverse to the zero-section whenever (,β)≤3.
Fix such a and let Z_β denote the zero-set of f_β.
If (,β)=2 then Z_β is a finite set. Let H be the
homomorphism with matrix coefficients
H,β=#Z_β.
If (,β)=3 then Z_β is a compact
1–manifold-with-boundary. Counted modulo 2, the number of boundary points
of Z_β is (v_3+v'_3),β, whereas the number of
ends is (dH+Hd),β. These two numbers must agree, proving
the lemma.□
Let W be a smooth, compact, oriented, connected 4–manifold with two
boundary components, say W=-Y_0∪ Y_1. Let Q→ W be an
3 bundle, and let P_i be the restriction of Q to Y_i. Suppose
one of the following two conditions holds.
(i) At least one of the bundles P_0,P_1 is admissible.
(ii) Both Y_0 and Y_1 are homology spheres, the bundle Q is
trivial, and H_1(W;)=0 and b_+^2(W)=0.
Then the homomorphism T:I(P_0)→ I(P_1) induced by (W,Q) satisfies
Tu_j=u_jT for j=2,3.
Moreover, if (ii) holds then
T=:I^4(Y_0)→/2.□
If P→ Y is an admissible 3 bundle then u_3=0 on I(P).
By Proposition <ref> there is an Hermitian
2–plane bundle →^* such that ≈^0_.
For a “generic” section s of , we have
s([0])≠0 whenever lies in a moduli space M(,β)
of dimension at most 3. Given such a section s, let U be the
open subset of ^* where s≠0. Then |_U splits
as an orthogonal sum
|_U=⊕ L
of two complex line bundles. Hence |_U has a nowhere vanishing
trace-free skew-Hermitian endomorphism
(
[ i 0; 0 -i ]). This yields a non-vanishing section s' of |_U.
Let s be the restriction to U of a “generic” section of ,
and let v_3,v'_3 be the cup products defined by s,s', respectively.
Then v'_3=0, so by Lemma <ref> we have
v_3=dH+Hd.
By definition, v_3 induces the cup product u_3 in cohomology,
so u_3=0.□
Let Y be an oriented homology 3–sphere and Y' the result of (±1)
surgery on a knot in Y. Let n be a non-negative integer.
(i) If (u_3)^n=0 on I(Y) then (u_3)^n+1=0 on I(Y').
(ii) If (u_2)^n=0 on I(Y) and has genus 1
then (u_2)^n+1=0 on I(Y').
If R is a commutative ring and
A⟶ B⟶ C
an exact sequence of modules over the polynomial ring R[u] such
that u^m=0 on A and u^n=0 on C for non-negative integers m,n then
u^m+n=0 on B. (Here, u^0 acts as the identity map.)
Now suppose Y' is (-1) surgery on . (If instead Y' is (+1)
surgery on then the proof is similar with the roles of Y,Y' reversed.)
Let Y” be 0 surgery on and I(Y”) the instanton cohomology of
the non-trivial 3 bundle over Y”.
We apply the above observation to the long exact surgery sequence
(see <cit.>)
⋯→ I(Y”)→ I(Y)→ I(Y')→ I(Y”)→⋯
Statement (i) now follows from Proposition <ref>. To prove
(ii), recall that if P_T^3 is a non-trivial 3 bundle over the
3–torus then I(P_T^3) is non-zero in two degrees differing by
4 modulo 8 and zero in all other degrees. Therefore, u_2=0 on
I(P_T^3). If has genus 1 then by arguing as in the proof of
<cit.> we find that u_2=0 on I(Y”), from which
(ii) follows.□
As a special case of Proposition <ref> we have the following
corollary.
If Y is (±1) surgery on a knot in S^3 then u_3=0 on I(Y).
Let P→ Y be an 3 bundle. We assume P is admissible
if Y is not a homology sphere. Then the endomorphisms u_2 and u_3
on I(P)
are nilpotent. In other words, there is a positive integer n such that
u_2^n=0, u_3^n=0 on I(P).
We use the same link reduction schemes as
in the proofs of <cit.>.
In the present case there is no need to consider any reduced groups, as
the cup products u_j are defined on all of I(Y).□
We include here a result for oriented homology 3–spheres Y obtained by
adapting the proof of Proposition <ref> for j=2 to 2–dimensional
moduli spaces M(,θ). This result will be used in
Proposition <ref> below.
For any ∈^*(Y) we introduce the
temporary notation
M_:={∈ M(,θ)s_2∧ s_3=0 at [0], and _([0,∞))≥},
where is a small positive constant.
If M(,θ)<6 then M_ is a manifold-with-boundary, and
M_ has a description analogous to that of M_, just replacing
the inequality _([0,∞))≥
by an equality. We define homomorphisms
:C^2(Y)→/2, ^-:C^3(Y)→/2
on generators by
:=#( M_), ^-β:=# M_β.
v_2+^-d=.
Let ∈^*(Y), ()=2. Then M_ is a
1–manifold-with-boundary. The number of boundary points,
counted modulo 2, is
by definition, and this must agree with the number of ends of M_, which
is ( v_2+^-d).□
§.§ Commutators of cup products
Let Y be an oriented homology 3–sphere.
We introduce a degree 4 endomorphism
ϕ:C^*(Y)→ C^*+4(Y)
which will be used to describe the commutator of v_2 og v_3.
defn For any ,β∈^*(Y)
let 23(,β) be the subspace of × consisting of those
points (,t) satisfying the following conditions:
itemize
* s_1([-t])=0,
* s_2([t]) and s_3([t]) are linearly dependent.
If (β)-()≡48
then 23(,β) consists of a finite number of points (see part (I)
of the proof of Proposition <ref> below), and we set
ϕ,β:=#23(,β).
prop
If Y is an oriented integral homology 3-sphere then for “generic”
sections s_1,s_2,s_3 one has
equation
v_2v_3+v_3v_2+'=dϕ+ϕd.
Hence, on I(Y) one has
equation
u_2u_3+u_3u_2=_0'_0.
The proof will be given in Subsection <ref>.
Let v_3,v_3':C^*(Y)→ C^*+3(Y) be the cup products defined by “generic”
sections s,s' of . At least in degrees different from 4, the
commutator of v_3 and v_3' is given by a formula analogous to
eqn:v2v3chhom. This formula involves the homomorphism
ψ:C^p(Y)→ C^p+5(Y), p≠4
with matrix coefficients
ψ,β=#{(,t)∈× s([-t])=0=s'([t])}.
The condition p≠4 is imposed to make sure that factorizations through
the trivial connection do not occur in the moduli spaces M(,β).
For q≢3,48 one has
dψ+ψ d=v_3v'_3+v'_3v_3
as maps C^q(Y)→ C^q+6(Y).
If the sections s,s' are sufficiently close (in a certain
sense) then v_3=v_3' (see Lemma <ref> below)
and the following hold.
If the sections s,s' are sufficiently close then there exist
* an extension of ψ to a cochain map C^*(Y)→ C^*+5(Y)
defined in all degrees, and
* a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that
ψ=v_2v_3+dΞ+Ξ d.
The proof will be given in Subsection <ref>.
§ DEFINITION OF THE INVARIANT
Let Y be any oriented homology 3-sphere.
Definition.
We define a non-negative integer ζ_2(Y) as follows. If δ_0=0
on ker(u_3)⊂ I(Y) set ζ_2(Y):=0. Otherwise, let ζ_2(Y)
be the largest positive integer n for which there exists an
x∈ker(u_3) such that
δ_0u_2^kx = 0 for 0≤ k<n-1, and δ_0u_2^{n-1}x = 1.
Here, u_2^k denotes the k'th power of the endomorphism u_2. Note that
if x is as in Definition <ref> then using
the relation eqn:u2u3 one finds that u_3u_2^kx=0 for
0≤ k≤ n-1.
Definition. Set q_2(Y):=ζ_2(Y)-ζ_2(-Y).
An alternative description of q_2 will be given in
Proposition <ref> below.
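In concrete examples ζ_2 is a finite linear-algebra computation over the field with two elements. The sketch below illustrates the definition with made-up matrices for u_2, u_3 and δ_0 acting on a small vector space over ℤ/2 standing in for I(Y) (the ℤ/8 grading is ignored); it simply searches ker(u_3) for the element realising the largest n. The matrices are purely illustrative and are not computed from any actual 3–manifold.

```python
import itertools
import numpy as np

def zeta2(u2, u3, delta0, nmax=10):
    """Brute-force zeta_2 from the definition: the largest n for which some
    x in ker(u3) has delta0 u2^k x = 0 for k < n-1 and = 1 for k = n-1."""
    dim = u2.shape[0]
    best = 0
    for bits in itertools.product((0, 1), repeat=dim):
        x = np.array(bits, dtype=int)
        if (u3 @ x % 2).any():              # keep only x in ker(u3)
            continue
        vals, y = [], x.copy()
        for _ in range(nmax):
            vals.append(int(delta0 @ y % 2))
            y = u2 @ y % 2
        for n in range(nmax, 0, -1):        # largest n with pattern 0,...,0,1
            if vals[n-1] == 1 and all(v == 0 for v in vals[:n-1]):
                best = max(best, n)
                break
    return best

# Hypothetical toy data on (Z/2)^3:
u2     = np.array([[0,0,0],[1,0,0],[0,1,0]])   # "multiplication by u_2"
u3     = np.zeros((3, 3), dtype=int)           # u_3 = 0 on this toy module
delta0 = np.array([0, 0, 1])

print(zeta2(u2, u3, delta0))   # -> 3 for this toy example
```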
If ('_0)⊂(u_3) in I^1(Y) then ζ_2(-Y)=0. Otherwise,
ζ_2(-Y) is the largest positive integer n for which the inclusion
(u_2^k'_0)⊂(u_3)+∑_j=0^k-1(u_2^j'_0)
in I(-Y)
holds for 0≤ k<n-1 but not for k=n-1.
Of course, in eqn:imu2delincl it suffices to sum over those j that are
congruent to k mod 4, since I(-Y) is mod 8 periodic.
Recall that I^q(Y) and I^5-q(-Y) are dual vector spaces for any
q∈/8. Furthermore, the maps
_0:I^4(Y)→/2, u_3:I^q(Y)→ I^q+j(Y)
are dual to
'_0:/2→ I^1(-Y), u_3:I^5-q-j(-Y)→ I^5-q(-Y),
respectively. In general, the kernel of a linear map between finite-dimensional
vector spaces is equal to the annihilator of the image of the dual map.
Applying this to _0u_2^j:I^4-2j(Y)→/2 we see that the inclusion
eqn:imu2delincl holds if and only if
(_0u_2^k)⊃(u_3)∩⋂_j=0^k-1(_0u_2^j)
in I(Y).
This proves the lemma.□
prop
Either ζ_2(Y)=0 or ζ_2(-Y)=0.
Suppose ζ_2(Y)>0, so there is an x∈ I^4(Y) such that
u_3x=0 and _0x=1. Then Proposition <ref> yields
δ_0'(1)=u_3u_2x, hence ζ_2(-Y)=0 by Lemma <ref>.□
We now reformulate the definition of ζ_2 in terms of the mapping cone of
v_3. This alternative definition will display a clear analogy with the
instanton h-invariant and will be essential for handling the algebra involved
in the proof of additivity of .
For q∈/8 set
MC^q(Y):=C^q-2(Y)⊕ C^q(Y),
and define
D:MC^q(Y)→ MC^q+1(Y), (x,y)↦(dx,v_3x+dy).
Then D∘ D=0, and we define MI(Y) to be the cohomology of the
cochain complex (MC(Y),D). The short exact sequence of cochain
complexes
0→ C^*(Y)→ MC^*(Y)τ→ C^*-2(Y)→0,
where (y)=(0,y) and τ(x,y)=x,
gives rise to a long exact sequence
equation
⋯→I^q-3(Y)u_3→I^q(Y)_*
→MI^q(Y)τ_*→I^q-2(Y)→⋯.
We introduce some extra structure on *j(Y). Firstly,
the homomorphisms
gather*
:=∘τ:MC^6(Y)→/2,
':=∘':/2→MC^1(Y)
induce homomorphisms
MI^6(Y)_0⟶/2'_0⟶
MI^1(Y).
We extend trivially to all of MC(Y), and similarly for _0.
Furthermore, we define a homomorphism
V:MC^*(Y)→ MC^*+2(Y), (x,y)↦(v_2x,ϕ x+v_2y).
A simple calculation yields
equation
DV+VD=',
which is analogous to the relation <cit.> in rational
instanton homology. It follows that V induces homomorphisms
gather*
MI^q(Y)→MI^q+2(Y), q≢6,78,
MI^6(Y)∩(_0)→MI^0(Y),
each of which will be denoted by U.
If _0=0 on MI^6(Y) then ζ_2(Y)=0. Otherwise,
ζ_2(Y) is the
largest positive integer n for which there exists a z∈ MI(Y)
such that
_0 U^kz=cases
0 for 0≤ k<n-1,
1 for k=n-1.
This follows immediately from the definitions.□
§ DEFINITE 4-MANIFOLDS
The goal of this section is to prove Theorem <ref>.
Let X be an oriented, connected
Riemannian 4–manifold with a cylindrical end [0,∞)× Y,
where Y is an integral homology sphere.
Suppose
b_1(X)=0=b^+(X).
Let E→ X be an oriented Euclidean 3–plane bundle and w_2(E)
its second Stiefel-Whitney class. We will count reducibles in
ASD moduli spaces for E
with trivial asymptotic limit.
Let w∈ H^2(X,;/2) be the unique
lift of w_2(E). Abusing notation, we denote by w_2(E)^2∈/4
the value of the Pontryagin square
w^2∈ H^4(X,;/4)
on the fundamental class in
H_4(X;;/4). Then for ∈^*(Y) the expected dimension of
a moduli space for E with asymptotic limit satisfies
M_(X,E;)≡()-2w_2(E)^28.
If ρ is a trivial connection in E|_ then (E,ρ) is an
integer reducing to -w_2(E)^2 modulo 4. Hence,
M_k:=M_k(X,E;θ)
is defined for integers k satisfying k≡-w_2(E)^24. Moreover,
M_k is empty for k<0, and M_0 (when defined) consists of flat connections.
The expected dimension is
M_k=2k-3.
§.§ Reducibles
In this subsection we restrict to k>0.
After perturbing the Riemannian metric on X in a small ball we can arrange
that M_k contains no twisted reducibles (see <cit.>).
The set M_k of reducible (i.e. Abelian)
points in M_k has a well known description
in terms of the cohomology of X, which we now recall. Let
P:={c∈ H^2(X;) [c]_2=w_2(E), c^2=-k},
where [c]_2 denotes the image of c in H^2(X;/2).
Let P:= P/±1 be the quotient of P by the involution
c↦-c.
There is a canonical bijection M_k→ P.
If [A]∈ M_k then A respects a unique splitting
E=⊕ L,
where is a trivial rank 1 subbundle of E. A choice of orientation
of defines a complex structure on L. Mapping [A] to the point in
P represented by c_1(L) yields the desired bijection. For further
details see <cit.> and <cit.>.□
Assuming P is non-empty we now express the number |P| of elements of P
in terms of the intersection form of X and the torsion subgroup
of H^2(X;). For any v∈ H^2(X;) let v̅ denote
the image of v in H^2(X;)/. Choose a∈ P and let
Q_a:={r∈ H^2(X;)/ r≡a̅ mod 2, r^2=-k}.
Define Q_a:= Q_a/±1.
|P|=|2|·|Q_a|.
Note that 2 has even order precisely when H^2(X;) contains an element
of order 4.
Because k>0 we have that (-1) acts without fixed-points on both
P and Q_a. Therefore,
| P|=2|P|, | Q_a|=2|Q_a|.
The short exact sequence 0→2→→/2→0 gives rise to a long
exact sequence
⋯→ H^2(X;)2→ H^2(X;)→ H^2(X;/2)→ H^3(X;)→⋯.
From this sequence we see that there is a well defined map
P→ Q_a, c↦c̅
which descends to an injective map
f: P/2→ Q_a.
In fact, f is bijective. To see that f is surjective, let r∈ Q_a.
Then
r=a̅+2x̅=a+2x
for some x∈ H^2(X;), and a+2x∈ P. This shows that
| P|=|2|·| Q_a|.
Combining this with eqn:2PQ we obtain the proposition.□
§.§ 2–torsion invariants of 4–manifolds
The proof of Theorem <ref> will involve certain 2–torsion
Donaldson invariants which we now define. Let d_0 be the smallest expected
dimension of any moduli space M_k=M_k(X,E;θ) that contains a reducible,
where k is a non-negative integer.
For any pair
(r,s) of non-negative integers satisfying
2r+3s≤ d_0+2
we will define an element
rs= rs(X,E)∈ I(Y)
which will be independent of the Riemannian
metric on X and also independent of the choice of small holonomy
perturbations.
To define rs, choose disjoint compact codimension 0 submanifolds
Z_1,…,Z_r+s of X and base-points z_j∈ Z_j.
It is convenient to assume that each of these submanifolds contains a band
[t_j,t_j+1]× Y for some t_j≥1. (We assume that the perturbed
ASD equation is of gradient flow type in the region [1,∞)× Y.)
Then
Proposition <ref> guarantees that
every perturbed ASD connection in E with irreducible limit will
restrict to an irreducible connection over each Z_j.
Choose “generic”
sections {_ij}_i=1,2,3 of the canonical 3–plane bundle
_j→^*(Z_j,E_j), where E_j:=E|_Z_j. For any ∈^*(Y)
let d=d() be the integer such that
0≤ d-2r-3s≤7,
d≡()-2w_2(E)^28.
Let M_r,s(X,E;) be the set of all ∈ M_(d)(X,E;) such that
* _2,j,_3,j are linearly dependent at |_Z_j for
j=1,…,r, and
* _1,j(|_Z_j)=0 for j=r+1,…,r+s.
Let
q_r,s:=∑_#M_r,s(X,E;)·∈ C(Y),
where the sum is taken over all generators in C(Y) of index
2w_2(E)^2+2r+3s. Then q_r,s is a cocycle, and we define
rs(X,E):=[q_r,s]∈ I(Y).
Standard arguments show that rs is
independent of the choice of submanifolds
Z_j and sections _ij.
Let k be an integer greater than one.
If M_ℓ is empty for ℓ<k then
k-20=#M_k.
Deleting from M_k a small neighbourhood of each reducible point
we obtain a manifold-with-boundary W with one boundary component P_η
for each reducible η, each such component being diffeomorphic to
k-2. Let
Ŵ:=W∩ M_k-2,0(X,E;θ)
be the set of all ∈ W such that _2,j and _3,j are linearly
dependent at |_Z_j for j=1,…,k-2.
Then Ŵ is a 1–manifold-with-boundary. For dimensional reasons and
because of the condition that M_ℓ be empty for ℓ<k, bubbling
cannot occur in sequences in Ŵ. Therefore, the only source of
non-compactness in Ŵ is factorization over the end of X, so
the number of ends of Ŵ equals k-20 modulo 2.
As for the boundary points of Ŵ,
observe that for every x∈ X the restriction of the 3–plane bundle
_θ,x→ M^*_k to P_η is isomorphic to the direct sum
⊕ L of a trivial real line bundle and the tautological
complex line bundle. It follows easily from this that P_η∩Ŵ
has an odd number of points for every reducible η, hence
|Ŵ|≡|M_k|2.
Since the number of boundary points of Ŵ must agree with the number of
ends when counted modulo 2, this proves the proposition.□
In the proof of the following proposition and at many places later we will
make use of a certain kind of cut-off function. This should be a smooth
function b:→ such that
b(t)=
0 for t≤-1,
1 for t≥1.
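For definiteness, one admissible choice (recorded only as an example; nothing
below depends on this particular formula) is
b(t)=g(t+1)/(g(t+1)+g(1-t)), where g(s)=e^{-1/s} for s>0 and g(s)=0 for s≤0;
this b is smooth, vanishes for t≤-1 and equals 1 for t≥1.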
Suppose 2r+3s≤ d_0+2, so that D_{r,s} is defined.
(i) D_{r,s}=u_2 D_{r-1,s} if r≥1.
(ii) D_{r,s}=u_3 D_{r,s-1} if s≥1.
We only spell out the proof of (ii), the proof of (i) being similar.
Let M_r,s-1(X,E;) be defined as above, but using only the submanifolds
Z_1,…,Z_r+s-1 and the corresponding sections _ij.
Choose a path :[-1,∞)→ X such that (-1)=z_r+1 and
(t)=(t,y_0) for t≥0, where y_0∈ Y is a base-point.
For any ∈^*(Y) and x∈ X let
_,x→ M_r,s-1(X,E;)
be the canonical 3–plane bundle associated to the base-point x.
For any =[A]∈ M_r,s-1(X,E;) and t≥-1 let
_,t:(_,(t))_→(_,(-1))_
be the isomorphism defined by the holonomy of A along .
Here, (_,x)_ denotes the fibre of the bundle _,x at
the point .
Given a “generic” section s of →^*(Y[0]) we define
a section s_ of the bundle
_,(-1)×[-1,∞)→ M_r,s-1(X,E;)×[-1,∞)
by
s_(,t):=(1-b(t-2))·_1,r+s(|_Z_r+s)
+b(t-2)·_,t(s([t])),
where b is as in eqn:b-prop1.
Let j:=2w_2(E)^2+2r+3s∈/8. If ()=j-1 then the zero set
s_(0) is a finite set. Summing over such we define
h_r,s:=∑_(#s_(0))·∈ I^j(Y).
Counting ends and boundary points of the 1–manifolds s_β(0)
for (β)=j we see that
dh_r,s+v_3q_r,s-1=q_r,s.
Passing to cohomology, we obtain (ii).□
If E is strongly admissible then D_r,s(X,E)=0 for s>0.
Let f:→ X be as in Definition <ref>
with v=w_2(E). For t≥0 let X t be the
result of deleting from X the open subset (t,∞)× Y.
Choose t>0 so large that X t contains f(). Then
E|_X t is strongly admissible.
Choose the submanifolds Z_1,…,Z_r+s such that Z_r+s=X t.
By Proposition <ref>
the (frame bundle of) _j→^*(E_r+s)
lifts to a 2 bundle.
For j=1,…,r+s-1 choose “generic” sections {_ij}_i=1,2,3
of _j. Arguing as in the proof of Proposition <ref>
we see that there is an open subset U⊂^*(Z_r+s,E_r+s)
and a section of _r+s such that if is any element of
a 3–dimensional moduli space M_r,s-1(X,E;) then |_Z_r+s∈ U
and (|_Z_r+s)≠0. Taking _1,r+s:= we have that all
0–dimensional moduli spaces M_r,s(X,E;) are empty.
Reasoning as in the proof of Lemma <ref> we conclude
that D_r,s=0.□
§.§ Lower bound on
Recall Definition <ref> above.
Given a space, X, a non-zero class w∈ H^2(X;)/torsion
is called strongly admissible
if some (hence every) lift of w to H^2(X;) maps to a strongly
admissible class in H^2(X;/2).
Let V be a smooth compact oriented connected 4-manifold whose boundary
is a homology sphere Y. Suppose the intersection form of V is negative
definite and at least one of the following two conditions holds:
(i) H^2(V;) contains no 2–torsion.
(ii) H^2(V;) contains no element of order 4, and
w^2≢0 (mod 4). Furthermore, either w is strongly admissible or
u_3=0 on I(Y) (or both).
Let
J:=H^2(V;)/torsion,
and let w be an element of J which is not divisible by 2.
Let k be the minimal square norm (with
respect to the intersection form) of any element
of w+2J. Let n be the number of elements of w+2J of square norm k.
If k≥2 and n/2 is odd then
equation
(Y)≥k-1.
Note that if we leave out case (ii) then the theorem says the same as
Theorem <ref>.
After performing surgery on a collection of loops in V representing
a basis for H_1(V;ℤ)/torsion we may assume that b_1(V)=0.
From the exact sequence eqn:2long-exact-seq we see that the
2–torsion subgroup of H^2(V;ℤ) is isomorphic to H^1(V;ℤ/2), using that
H^1(V;ℤ)=0 once b_1(V)=0.
Let
X:=V∪(0,∞)× Y
be the result of adding a half-infinite cylinder to V, and choose a
Riemannian metric on X which is of cylindrical form over the end.
We identify the (co)homology of X with that of V. Choose a
complex line bundle L→ X whose Chern class represents w. Choose a
Euclidean metric on the 3–plane bundle
E:=ℝ⊕ L.
Since we assume that H^2(X;)
contains no element of order 4, it follows from Proposition <ref>
that M_ℓ contains an odd number of reducibles for ℓ=k
but no reducibles for 0<ℓ<k.
We now show that if w^2≡0 (4), so that M_0 is defined, then M_0
is free of reducibles. Suppose A is a connection in E
representing a reducible point in M_0. Then A preserves some orthogonal
splitting E=⊕ L', where → X is a real line bundle.
Because Condition (i) of the proposition must hold, the bundle is
trivial. Choose a complex structure on L'. Since L' admits a flat
connection, its Chern class c_1(L') is a torsion class in H^2(X;).
But c_1(L) and c_1(L') map to the same element of H^2(X;/2), namely
w_2(E), hence
c_1(L)=c_1(L')+2a
for some a∈ H^2(X;ℤ). This contradicts our assumption that w∈ J
is not divisible by 2. Thus, M_0 is free of reducibles as claimed.
By Proposition <ref> we have
D_{k-2,0}≠0,
and Proposition <ref> says that
D_{k-2,0}=u_2^{k-2}D_{0,0}.
Now suppose w is strongly admissible (which is trivially the case
if Condition (i) holds). Then
the bundle E is strongly admissible, so by
Propositions <ref> and <ref> we have
u_3D_{0,0}=D_{0,1}=0.
This proves eqn:q2ineq.□
§ OPERATIONS DEFINED BY COBORDISMS
§.§ Cutting down moduli spaces
Let Y_0,Y_1,Y_2 be oriented (integral) homology 3–spheres and W a
smooth compact connected oriented 4–manifold such that
H_i(W;)=0 for i=1,2 and W=(-Y_0)∪(-Y_1)∪ Y_2. Then we call
W a (4–dimensional) pair-of-pants cobordism from Y_0∪ Y_1 to
Y_2, or a pair-of-pants cobordism from Y_1 to (-Y_0)∪ Y_2.
We will consider various operations on Floer cochain complexes induced by
pair-of-pants cobordism. To define these we first introduce some notation.
Let X be an oriented connected Riemannian 4–manifold with incoming
tubular ends (-∞,0]× Y_j, j=0,…,r and outgoing tubular ends
[0,∞)× Y_j, j=r+1,…,r', where each Y_j is an
homology sphere. For t≥0 let X t be the result of deleting
from X the open pieces (-∞,-t)× Y_j, j=0,…,r and
(t,∞)× Y_j, j=r+1,…,r'. We assume
X0 is compact. For i=0,…,r' let y_i∈ Y_i be a base-point
and set
e_i:=
-1, i=0,…,r,
1, i=r+1,…,r'.
For any integers j,k in the interval [0,r'] such that j<k
let _jk:→ X be a smooth path satisfying
_jk(t)∈ X1 for |t|≤1 and
_jk(t)=
(-e_jt,y_j), t≤-1,
(e_kt,y_k), t≥1.
Loosely speaking, the path _jk enters along the jth end
and leaves along the kth end of X.
Let =(_1,…,_r'), where _j∈(Y_j) and at least one
_j is irreducible. For the remainder of this subsection we write
M:=M(X,E;),
where E→ X is the product 3 bundle.
The unique continuation result of
Proposition prop:unique-continuation-cylinder ensures that if
_j is irreducible then the restriction of any
element of M to a band on the jth end of X will be irreducible.
Let → M× X be the universal (real) 3–plane bundle (see
<cit.>).
For any t≥0 let t denote the
restriction of to M× X t. Given a base-point x_0∈ X let
_X,x_0;→ M be the canonical 3–plane bundle,
which can be identified
with the restriction of to M×{x_0}.
If :J→ X is a smooth path in X defined on some interval J then
a section of the pull-back bundle (𝕀×)^* over
M× J is called holonomy invariant if
for all =[A]∈ M and real numbers s<t one has that (,s)
is mapped to (,t) by the isomorphism
_(,(s))→_(,(t))
defined by holonomy of A along the path |_[s,t].
Suppose Z⊂ X is a compact codimension 0 submanifold-with-boundary
such that A|_Z is irreducible for every [A]∈ M. Given a base-point
z_0∈ Z, let _Z,z_0→^*(E|_Z)
be the base-point fibration, and let
R_Z:M→^*(E|_Z), ↦|_Z.
Then the pull-back bundle R_Z^*_Z,z_0 is canonically isomorphic to
_X,z_0;, and we will usually identify the two bundles without further
comment.
Choose (smooth) sections z_1,z_2,z_3 of 2 and
for any x∈ X2 let
M∩ w_3(x):=
{∈ M z_1(,x)=0},
M∩ w_2(x):=
{∈ M
z_2,z_3 are linearly
dependent at (,x)}.
For j=0,…,r' let _j→^*(Y_j[0]) be the canonical
3–plane bundle associated to a base-point (0,y_j).
For j<k, any j', and i=1,2,3 choose
* a section ijk of _j
and a section ijk of _k,
* a section ijk of 2,
* a section s_ij' of _j'.
Let b_-1,b_0,b_1 be a partition
of unity of ℝ subordinate to the open cover
{(-∞,-1),(-2,2),(1,∞)}.
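For definiteness, one such partition of unity (only an example, built from a
cut-off function b as in eqn:b-prop1) is
b_1(t):=b(4t-6), b_-1(t):=b(-4t-6), b_0:=1-b_1-b_-1;
here b_1 and b_-1 are never simultaneously positive, so 0≤ b_j≤1, the three
functions sum to 1, and their supports are contained in (1,∞), (-∞,-1)
and (-2,2), respectively.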
If j<k and both _j,_k are
irreducible we introduce, for i=1,2,3, a section
of the bundle (𝕀×_jk)^* associated, loosely speaking,
to a base-point moving along the path _jk. Precisely, we define
s_ijk(,t):=b_-1(t) ijk(|_Y_j[-e_jt])
+b_0(t) ijk(|_X2,_jk(t))
+b_1(t) ijk(|_Y_k[e_kt]).
Using these sections, we define cut-down moduli spaces
M∩ w_3(_jk):=
{(,t)∈ M× s_1jk(,t)=0},
M∩ w_2(_jk):=
{(,t)∈ M×
s_2jk, s_3jk are linearly
dependent at (,t)}.
We now consider the case of a base-point moving along the jth end.
For t≥0 let _j(t):=(e_jt,y_j). If _j is irreducible let
M∩ w_2(_j):={(,t)∈ M×[0,∞)
s_2j,s_3j are linearly dependent at |_Y_j[e_jt]}.
We omit the definition of M∩ w_3(_j) since it will not be
needed in the remainder of this paper
(although something close to it was used in the proof of
Proposition <ref>).
We can also combine the ways moduli spaces are cut down in
the above definitions. Namely, for ℓ,ℓ'∈{2,3} let
M∩ w_ℓ(x)∩ w_ℓ'(_jk):=
{(,t)∈ M∩ w_ℓ'(_jk)
∈ M∩ w_ℓ(x)},
M∩ w_ℓ(_jk)∩ w_ℓ'(_j'k'):=
{(,t,t')∈ M××
(,t)∈ M∩ w_ℓ(_jk),
(,t')∈ M∩ w_ℓ'(_j'k')},
M∩ w_ℓ(_jk)∩ w_2(_j'):=
{(,t,t')∈ M××[0,∞)
(,t)∈ M∩ w_ℓ(_jk),
(,t')∈ M∩ w_2(_j')}.
If one of the α_j's is trivial, say α_h=θ, and dim M<8
(to prevent bubbling) then one can also cut
down M by, loosely speaking, evaluating w_2 or w_3 over the
“link of θ at infinity” over the hth end of X. We now make this
precise in the case of w_2 and an outgoing end [0,∞)× Y_h. The
definitions for w_3 or incoming ends are similar. To simplify notation
write Y:=Y_h.
We introduce a function τ^+=τ^+_h on M related to the energy
distribution of elements over the hth end.
Choose ε>0 so small that for any β∈(Y) the Chern-Simons
value cs(β)∈ℝ/ℤ has no real lift in the interval (0,ε].
(Recall that we assume cs(θ)=0.)
Given ∈ M, if there exists a t>0 such that
_([t-2,∞)× Y)=ε then t is unique, and we write
t^+():=t. This defines t^+ implicitly as a smooth function on an open
subset of M. We modify t^+ to get a smooth function
τ^+:M→[1,∞) by
τ^+():=
1+b(t^+()-2)·(t^+()-1) if t^+() is defined,
1 else,
where the cut-off function b is as in eqn:b-prop1.
Note that τ^+()<3 if t^+()<3 and
τ^+()=t^+() if t^+()≥3.
The restriction of to the band Y[τ^+()] will be denoted by
R^+()∈(Y[0]).
In the above situation there is a real number T_0
such that if is any element of M satisfying
τ^+()>T_0-1 then R^+() is irreducible.
Suppose the lemma is false. Then we can find a sequence _n
in M such that τ^+(_n)→∞ and
R^+(_n) is reducible for every n. Let A_n be a smooth connection
representing _n, and let t_n=τ^+(_n).
By assumption, there is no bubbling in M, so
we can find gauge transformations u_n defined over [0,∞)× Y
and a smooth connection A' over ℝ× Y such that, for every constant
c>0, the sequence u_n(A_n)|_[t_n-c,t_n+c] converges in C^∞
to A'|_[-c,c]. The assumption on ε means that no energy can be
lost over the end [0,∞)× Y in the limit, hence
_A'([-2,∞)× Y)=ε.
In particular, A' is not trivial. But there are no non-trivial reducible
finite-energy instantons over ℝ× Y (as long as the perturbation of the
Chern-Simons functional is so small that there are no
non-trivial reducible critical points).
Therefore, A' must be irreducible. From the unique continuation result of
Proposition <ref> it follows that
A'|_{0}× Y is
also irreducible, so A_n is irreducible for large n.
This contradiction proves the lemma.
□Let T_0 be as in the lemma. For any element of M for which
R^+() is irreducible, let
s'_ih() denote the holonomy invariant section of
(𝕀×_h)^* such that s'_ih(,τ^+())=s_ih(R^+()).
Let x_h:=(0,y_h) and define a section of _X,x_h; by
s_ih():=(1-b(τ^+()-T_0))· z_i(|_X2,x_h)
+b(τ^+()-T_0)· s'_ih(R^+()),
where again b is as in eqn:b-prop1.
Let
M∩ w_2(τ^+):={∈ Ms_2h,s_3h linearly dependent
at }.
If j<k and both _j,_k are irreducible let
M∩ w_ℓ(_jk)∩ w_2(τ^+):=
{(,t)∈ M∩ w_ℓ(_jk)∈ M∩ w_2(τ^+)}.
If M is regular, then the various cut down moduli
spaces defined above will be transversely cut out when the sections involved
are “generic”.
§.§ Operations, I
We now specialize to the case when X has two incoming ends
(-∞,0]× Y_j, j=0,1 and one outgoing end [0,∞)× Y_2,
and
H_i(X;)=0, i=1,2.
Such a cobordism gives rise to a homomorphism
A:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q(Y_2)
for any p,q∈/8, with matrix coefficients
A(_0⊗_1),_2:=#M(X;)
for generators _0∈ C^p(Y_0), _1∈ C^q(Y_1), and
_2∈ C^p+q(Y_2), where =(_0,_1,_2). We can construct
more homomorphisms using the sections s_ijk chosen above.
For any path _jk as above and k=2,3 let
T_i,j,k:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+i-1(Y_2)
be defined on generators by
T_i,j,k(_0⊗_1),_2:=
#[M(X;)∩ w_i(_jk)].
For the cases used in this paper we introduce the simpler notation
B:=T_3,0,1, E:=T_3,0,2, A':=T_2,1,2.
We will also consider homomorphisms defined using two base-points, each
moving along a path in X. At this point we only define
B':C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+3(Y_2)
by
B'(_0⊗_1),_2:=
#[M(X;)∩ w_3(_01)∩ w_2(_12)].
In the next proposition, the differential in the cochain complex
C(Y_i) will be denoted by d (for i=0,1,2), and
d=d⊗1+1⊗ d
will denote the differential in C(Y_0)⊗ C(Y_1).
Let
v_3:=v_3⊗1+1⊗ v_3,
regarded as a degree 3 cochain map from C(Y_0)⊗ C(Y_1) to itself.
(i) dA+A d=0.
(ii) dB+B d=A v_3.
(iii) dE+E d=A(v_3⊗1)+v_3A.
(iv) dA'+A' d=A(1⊗ v_2)+v_2A.
(v) dB'+B' d=B(1⊗ v_2)+v_2B
+A' v_3+A(1⊗ϕ)+A_θ(1⊗).
The only non-trivial part here is (v), where one encounters
factorization through the trivial connection over the end
(-∞,0]× Y_1. This can be handled as in the proof of
Proposition <ref> given in Subsection <ref>,
to which we refer for details.□
The homomorphism
:MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2),
(x_0,y_0)⊗(x_1,y_1) ↦ B(x_0,x_1)+A(x_0⊗ y_1+y_0⊗ x_1)
is a cochain map of degree -2.
Let D=D⊗1+1⊗ D be the differential in the complex
MC(Y_1)⊗ MC(Y_2). Then
D[(x_0,y_0)⊗(x_1,y_1)] =
[(dx_0,v_3x_0+dy_0)⊗(x_1,y_1)+(x_0,y_0)⊗(dx_1,v_3x_1+dy_1)]
=B(dx_0⊗ x_1+x_0⊗ dx_1)
+A[dx_0⊗ y_1+(v_3x_0+dy_0)⊗ x_1+
x_0⊗(v_3x_1+dy_1)+y_0⊗ dx_1]
=B d(x_0⊗ x_1)
+A[ v_3(x_0⊗ x_1)+ d(x_0⊗ y_1+y_0⊗ x_1)]
=d[(x_0,y_0)⊗(x_1,y_1)],
where the last equality follows from Proposition <ref>.□
The homomorphism
MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2)
obtained from Proposition <ref> will also be denoted by .
In order to simplify notation we will often write ,
instead of _0,_0 if no confusion can arise.
For all a∈ MI(Y_0), b∈ MI(Y_1), the following hold.
(i) If a=0 then (Ua,b)=u_2(a,b).
(ii) If b=0 then (a,Ub)=u_2(a,b).
We spell out the proof of (ii). Reversing the roles of Y_0,Y_1
yields a proof of (i). Let
',:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2)
be given by
'[(x_0,y_0)⊗(x_1,y_1)]
:= B'(x_0,x_1)+A'(x_0⊗ y_1+y_0⊗ x_1),
[(x_0,y_0)⊗(x_1,y_1)]
:=( x_1)A_θ(x_0).
Let D be as in the proof of Proposition <ref>. We show that
d'+' D=v_2+(1× V)+,
from which (ii) follows. Observe that the first four lines
in the calculation of D in
Proposition <ref> carry over to ' D.
That proposition then gives
' D [(x_0,y_0)⊗(x_1,y_1)]
=(B' d+A' v_3)(x_0⊗ x_1)
+A' d(x_0⊗ y_1+y_0⊗ x_1)
=dB'(x_0⊗ x_1)+B(x_0⊗ v_2x_1)+v_2B(x_0⊗ x_1)
+A(x_0⊗ϕ x_1)+( x_1)A_θ(x_0)
+[dA'+A(1⊗ v_2)+v_2A](x_0⊗ y_1+y_0⊗ x_1)
=[d'+v_2+(1× V)+][(x_0,y_0)⊗(x_1,y_1)].□
Our next goal is to compute u_2. To this end we introduce some
variants Ȧ,Ḃ,A^+,B^+ of the operators A,B. Each of these
variants is a homomorphism
C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+d(Y_2)
for d=2,4,1,3, respectively, defined for all p,q, and the matrix
coefficients are
Ȧ(_0⊗_1),_2 :=
#[M(X;)∩ w_2(x_2)],
Ḃ(_0⊗_1),_2 :=
#[M(X;)∩ w_2(x_2)∩ w_3(_01)],
A^+(_0⊗_1),_2 :=
#[M(X;)∩ w_2(_2)],
B^+(_0⊗_1),_2 :=
#[M(X;)∩ w_3(_01)∩ w_2(_2)],
where =(_0,_1,_2) as before, x_2=_2(0)∈ X, and
_i,_ij are as in Subsection <ref>.
(i) dȦ+Ȧ d=0.
(ii) dḂ+Ḃ d=Ȧ v_3.
(iii) dA^++A^+ d=v_2A+Ȧ.
(iv) dB^++B^+ d=A^+ v_3+v_2B+Ḃ.
Standard.□
The homomorphism
:MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2),
(x_0,y_0)⊗(x_1,y_1) ↦Ḃ(x_0,x_1)
+Ȧ(x_0⊗ y_1+y_0⊗ x_1)
is a (degree preserving) cochain map.
The same as for Proposition <ref>, using
Proposition <ref> (i), (ii).□
The homomorphism
MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2)
obtained from Proposition <ref> will also be denoted by .
As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has
=u_2.
This is analogous to the proof of Proposition <ref>. Let
^+:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2)
be given by
^+[(x_0,y_0)⊗(x_1,y_1)]
:= B^+(x_0,x_1)+A^+(x_0⊗ y_1+y_0⊗ x_1).
We show that
d^++^+ d=v_2+.
From Proposition <ref> we get
^+ D(x_0,y_0)⊗(x_1,y_1)
=(B^+ d+A^+ v_3)(x_0⊗ x_1)+A^+ d(x_0⊗ y_1+
y_0⊗ x_1)
=(dB^++v_2B+Ḃ)(x_0⊗ x_1)
+(dA^++v_2A+Ȧ)(x_0⊗ x_1)
=(d^++v_2+)(x_0,y_0)⊗(x_1,y_1).□
We also need to bring in moduli spaces over X with trivial limit over the
end _+× Y_2. These give rise to homomorphisms
A^θ,B^θ,Ȧ^θ,Ḃ^θ:C^p(Y_0)⊗ C^d-p(Y_1)→/2
where d=5,3,3,1, respectively. They are defined on generators by
A^θ(_0⊗_1) :=#M(_0,_1,θ),
B^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_3(_01)],
Ȧ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0)],
Ḃ^θ(_0⊗_1)
:=#[M(_0,_1,θ)∩ w_2(x_0)∩ w_3(_01)].
(i) A+ A^θ d=0.
(ii) B+B^θ d=A^θ v_3.
(iii) Ȧ+Ȧ^θ d=0.
(iv) Ḃ+Ḃ^θ d=
Ȧ^θ v_3+⊗.
Here, (⊗)(x_0⊗ x_1)=( x_0)( x_1).
The proof is standard.
(i) =0.
(ii) u_2=⊗.
Statement (i) is proved just as Proposition <ref>, replacing
Proposition <ref> by Proposition <ref>.
We now prove (ii). For g_i=(x_i,y_i)∈ MC(C_i), i=0,1 let
^θ(g_0⊗ g_1):=Ḃ^θ(x_0⊗ x_1)
+Ȧ^θ(x_0⊗ y_1+y_0⊗ x_1).
Arguing as in the proof of Proposition <ref> and using
Proposition <ref> we obtain
^θ D(g_0⊗ g_1)
=(Ḃ^θ d+Ȧ v_3)(x_0⊗ x_1)
+Ȧ^θ d(x_0⊗ y_1+y_0⊗ x_1)
=Ḃ(x_0⊗ x_1)+ x_0· x_1
+Ȧ(x_0⊗ y_1+y_0⊗ x_1)
=(+⊗)(g_0⊗ g_1).
If g_0,g_1 are cocycles then by Proposition <ref> we have
v_2(g_0⊗ g_1)=(g_0⊗ g_1)
= g_0· g_1.□
For p≠4 let
F:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+4(Y_2)
be defined by
F(_0⊗_1),_2:=
#[M(X;)∩ w_3(_01)∩ w_3(_02)].
For p=4 the map F may not be well-defined due to possible factorizations
through the trivial connection over the end _-× Y_0.
The definition of F involves two different sections of the bundle
_0→^*(Y_0[0]), namely
s_k:= 10k, k=1,2.
From now on we assume s_1,s_2 are so close that
they define the same cup product v_3:C^*(Y_0)→ C^*+3(Y_0).
If the sections s_1,s_2 are sufficiently close then the map
F in eqn:Fdef can be extended to all bidegrees (p,q) such that
dF+F d=B(v_3⊗1)+v_3B+E v_3+A(ψ⊗1),
where ψ is as in Proposition <ref>.
The main difficulty in extending the map F to degree p=4,
related to factorization through the trivial connection over the end
(-∞,0]× Y_0, is the same as in
extending the map ψ to degree 4, and the main difficulty in
proving eqn:Fthm is the same as in proving that ψ is a cochain map
(Proposition <ref>). As we prefer to explain the ideas involved
in the simplest possible setting, we will not spell out the proof
of Proposition <ref> but instead refer to
Subsection <ref> for details.
Sometimes we will fix the variable _1 in the expressions defining
A,B,E,F. Thus, for any y∈ C^r(Y) we define a homomorphism
A_y:C^*(Y_0)→ C^*-r(Y_2), x↦ A(x⊗ y),
and we define B_y,E_y,F_y similarly. Looking at moduli spaces over X
with trivial limit over the end _-× Y_1 we obtain homomorphisms
A_θ :C^*(Y_0)→ C^*(Y_2),
E_θ :C^*(Y_0)→ C^*+2(Y_2).
with matrix coefficients
A_θ(_0),_2 :=#M(X;_0,θ,_2),
E_θ(_0),_2 :=#[M(X;_0,θ,_2)∩ w_3(_02)].
We consider a variant of Floer's complex introduced by Donaldson
<cit.>.
For any oriented homology 3–sphere Y let *(Y) be the complex
with cochain groups
p(Y) =C^p(Y), p≠0,
0(Y) =C^0(Y)⊕/2
and differential d̅=d+'.
Now take Y:=Y_1. For y=(z,t)∈0(Y_1) let
A_y:=A_z+tA_θ, E_y:=E_z+tE_θ.
For any x∈ C(Y_1) and y∈*(Y_1) we have
[d,A_y]+A_d̅y =0,
[d,E_y]+E_d̅y =[A_y,v_3],
[d,B_x]+B_dx =A_xv_3+A_v_3x,
[d,F_x]+F_dx =[B_x,v_3]+E_xv_3+E_v_3x+A_xψ.
Here, [d,A_y]=dA_y+A_yd, and similarly for the other commutators.
For y∈ C(Y_1) this follows from Propositions <ref> and
<ref>, whereas the case y=(0,1)∈0(Y_1) is easy.□
Suppose x∈ C^-2(Y_1) and y=(z,t)∈0(Y_1) satisfy
dx=0, v_3x=d̅y.
Then the homomorphism :MC^*(Y_0)→ MC^*(Y_2) given by the matrix
(
[ A_y+B_x A_x; E_y+F_x+A_xΞ A_y+B_x+E_x+A_xv_2 ])
is a cochain map.
Writing =([ P Q; R S ])
we have
d+ d=(
[ dP+Pd+Qv_3 dQ+Qd; dR+Rd+v_3P+Sv_3 dS+Sd+v_3Q ]).
The fact that this matrix vanishes is easily deduced from
Propositions <ref> and <ref> and Lemma <ref>.
We write out the calculation only for the bottom
left entry.
[d,E_y +F_x+A_xΞ]
=E_v_3x+[v_3,A_y]+[v_3,B_x]+E_v_3x+E_xv_3+A_xψ+A_x[d,Ξ]
=v_3(A_y+B_x)+(A_y+B_x+E_x+A_xv_2)v_3,
hence [d,R]=v_3P+Sv_3 as claimed.□
As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has
u_3=0.
For j=0,1 let (x_j,y_j) be a cocycle in MC(Y_j), i.e.
dx_j=0, v_3x_j=dy_j.
Let the map of Lemma <ref> be defined with
x=x_1, y=y_1, and let (x_2,y_2):=(x_0,y_0). Then
((x_0,y_0)⊗(x_1,y_1))=B_x_1(x_0)+A_y_1(x_0)+A_x_1(y_0)=x_2.
Since (x_2,y_2) is a cocycle, we have v_3x_2=dy_2, proving the proposition.
□
If (Y_j)≥1 for j=0,1 then
(Y_2)≥(Y_0)+(Y_1).
For j=0,1 let n_j:=(Y_j) and choose z_j∈ MI(Y_j) such that
U^kz_j=
0 for 0≤ k<n_j-1,
1 for k=n_j-1.
Let x:=(z_0⊗ z_1)∈ I(Y_2). Then u_3x=0 by
Proposition <ref>. For 0≤ k_j≤ n_j-1, repeated
application of Proposition <ref> yields
u_2^k_0+k_1x=(U^k_0z_0⊗ U^k_1z_1),
hence u_2^k_0+k_1x=0 by Proposition <ref>. Therefore,
u_2^mx=0, 0≤ m≤ n_1+n_2-2.
On the other hand,
u_2^n_1+n_2-1x = u_2u_2^n_0-1u_2^n_1-1x
= u_2(U^n_0-1z_0⊗ U^n_1-1z_1)
=( U^n_0-1z_0)( U^n_1-1z_1)
=1.
Therefore, (Y_2)≥ n_0+n_1 as claimed.□
We will give a second application of Lemma <ref>, but first we need
some preparation. Let A^θ_θ:C^5(Y_0)→/2
be defined on generators by
A^θ_θ():=#M(,θ,θ).
For y=(z,t)∈ q(Y_1) define
A^θ_y:C^5-q(Y_0)→/2 and B^θ_z:C^3-q(Y_0)→/2
by
A^θ_y(x):=A(x⊗ z)+tA^θ_θ(x),
B^θ_z(x):=B^θ(x⊗ z).
(i) A_θ+A^θ_θ d+A^θ_'(1)=.
(ii) A_y+A^θ_y d+A^θ_d̅y=t.
(iii) B_z+B^θ_z d+B^θ_dz=A^θ_zv_3+A^θ_v_3z.
If (Y_0)≥1 and (Y_1)=0 then (Y_2)≥1.
Since (Y_0)≥1 we can find (x_0,y_0)∈ MC^6(Y_0) such
that
dx_0=0, v_3x_0=dy_0, x_0=1.
Since (Y_1)=0, Lemma <ref> says that there exist
x_1∈ C^-2(Y_1) and y_1=(z_1,1)∈ 0(Y_1) such that
dx_1=0, v_3x_1=d̅y_1.
Let be as in Lemma <ref>. Then (x_0,y_0) is a cocycle
in MC(Y_2), and by Lemma <ref> we have
(x_0,y_0) =(A_y_1+B_x_1)x_0+ A_x_1y_0
=(_d̅y_1++_x_1v_3+_v_3x_1)x_0+_x_1dy_0
=1.
Therefore, (Y_2)≥1.□
§.§ Operations, II
We now consider the case when X has one incoming end (-∞,0]× Y_0
and two outgoing ends [0,∞)× Y_1 and [0,∞)× Y_2,
where Y_2=Σ=Σ(2,3,5) is the Poincaré homology sphere oriented as the
boundary of the negative definite E_8–manifold. We again assume that
H_i(X;)=0, i=1,2.
We will define homomorphisms
P,P',Q:C^*(Y_0)→ C^*+d(Y_1)
where d=2,3,4, respectively, making use of cut-down moduli spaces
introduced at the end of Subsection <ref> with
h=2, so that τ^+=τ^+_2.
We define P,P',Q on generators by
P_0,_1 :=#[M(X;_0,_1,θ)∩ w_2(τ^+)],
P'_0,_1 :=
#[M(X;_0,_1,θ)∩ w_2(_01)∩ w_2(τ^+)],
Q_0,_1 :=
#[M(X;_0,_1,θ)∩ w_3(_01)∩ w_2(τ^+)].
As maps C(Y_0)→ C(Y_1) the following hold.
(i) [d,P]=0.
(ii) [d,P']=[v_2,P].
(iii) [d,Q]=[v_3,P]+'.
(iv) P+Pd=.
Here, is as defined at the end of Subsection <ref>.
In (iii), argue as in the proof of Proposition <ref>
to handle
factorization through the trivial connection over X.□
Note that statements (i), (iii) are equivalent to the fact that the
homomorphism
Ψ=
([ P 0; Q P ])
:MC^*(Y_0)→ MC^*+2(Y_1)
satisfies
[D,Ψ]='.
The homomorphism I^*(Y_0)→ I^*+2(Y_1) induced by P will also be denoted
by P.
As maps I(Y_0)→ I(Y_1) the following hold.
(i) [u_2,P]=0.
(ii) [u_3,P]='.
(iii) P= u_2.
Combine Propositions <ref> and <ref>.□
If (Y_0)≥2 then
(Y_1)≥(Y_0)-1.
Let n:=(Y_0) and choose x∈ I(Y_0) such that u_3x=0 and
u_2^kx=
0 for 0≤ k<n-1,
1 for k=n-1.
By Proposition <ref> we have u_3Px=0 and
u_2^kPx= Pu_2^kx= u_2^k+1x=
0 for 0≤ k<n-2,
1 for k=n-2.
This shows that (Y_1)≥ n-1.□
§.§ Additivity of
Throughout this subsection, Y,Y_0,Y_1 will denote oriented homology
3–spheres. As before, Σ will denote the Poincaré homology sphere.
If (Y_j)≥1 for j=0,1 then
(Y_0# Y_1)≥(Y_0)+(Y_1).
Recall that there is a standard cobordism W from (-Y_0)∪(-Y_1)
to Y_0# Y_1. By attaching half-infinite tubular ends to W we obtain
a manifold X to which we can apply the results of
Subsection <ref>. The proposition now follows from
Proposition <ref>.□
If (Y_0)≥1 and (Y_1#(-Y_0))=0 then (Y_1)≥1.
This follows from Proposition <ref>.□
If (Y#Σ)≥2 then
(Y)≥(Y#Σ)-1.
This follows from Proposition <ref>
with Y_0=Y#Σ and Y_1=Y.
□
In the following, we write Y_0∼ Y_1 to indicate that Y_0 and
Y_1 are homology cobordant.
If Y_0# Y_1∼Σ then (Y_0)+(Y_1)=1.
Let n_j:=(Y_j).
Case 1: n_0n_1=0. Without loss of generality we may assume that
n_1=0. By Proposition <ref> we have n_0≥1.
If n_0≥2 then,
since Y_0∼#(-Y_1), Proposition <ref> would give
-n_1=(-Y_1)≥(#(-Y_1)-1≥1,
a contradiction. Hence, n_0=1, so the lemma holds in this case.
Case 2: n_0n_1>0. We show that this cannot occur.
If n_0,n_1>0 then Proposition <ref> yields
1=(Σ)≥ n_0+n_1≥2,
a contradiction. Similarly, if n_0,n_1<0 then the same proposition yields
-1=(-Σ)≥2.
Case 3: n_0n_1<0. Then we may assume that n_0>0.
Applying Proposition <ref> we obtain
n_0=(Σ#(-Y_1))≥1-n_1≥2.
Proposition <ref> now gives -n_1≥ n_0-1.
Altogether, this shows that
n_0+n_1=1.□
(Y#Σ)=(Y)+1.
Apply the lemma with Y_0=Y#Σ and Y_1=-Y.□
For any oriented integral homology 3–spheres Y_0,Y_1 one has
(Y_0# Y_1)=(Y_0)+(Y_1).
Let n_j:=(Y_j) and Z_j:=Y_j#(-n_j)Σ.
By Corollary <ref> we have (Z_j)=0, so by
Proposition <ref>,
0=(Z_0# Z_1)=(Y_0# Y_1#(-n_0-n_1)Σ)=(Y_0# Y_1)-n_0-n_1.
□
§ FURTHER PROPERTIES OF . EXAMPLES
§.§ Proof of Theorem <ref>
Let W' be the result of connecting the two boundary components of W
by a 1–handle. Then W and W' have the same second cohomology group
and the same intersection form.
Let Z be the negative definite E_8–manifold (i.e. the result of
plumbing on the E_8 graph), so that the boundary of Z
is the Poincaré sphere Σ. We will apply Theorem <ref>
to the boundary-connected sum
V:=W'#_∂ Z.
Let S,S'⊂ Z be embedded oriented 2–spheres corresponding to adjacent
nodes on the E_8 graph. These spheres both have self-intersection number
-2, and S· S'=1. Let
v=P.D.([S])∈ H^2(V,∂ V)≈ H^2(V)
be the Poincaré dual of the homology class in V represented by S. Then
v·[S']=1, hence v is strongly admissible. The class
w∈ J_V represented by v satisfies w^2=-2, and
± w are the only classes in w+2J_V with square norm 2; since square norms of
elements of w+2J_V are congruent to 2 modulo 4, this means that in
Theorem <ref> we have k=2 and n=2, so n/2 is odd.
Theorem <ref>
and Proposition <ref> now yield
(Y)+1=(Y#Σ)≥ k-1=1,
hence (Y)≥0 as claimed.□
§.§ Proof of Theorem <ref>
Theorem <ref> is an immediate consequence of the following
two propositions.
Let K,K' be knots in S^3 such that K' is obtained from K by changing
a positive crossing. Let Y,Y' be (-1) surgeries on K,K', respectively.
Then
0≤(Y')-(Y)≤1.
We observe that Y' is obtained from Y by (-1) surgery on a linking
circle γ of the crossing such that γ
bounds a surface in Y of genus 1.
The surgery cobordism W from Y to Y' satisfies H_1(W;)=0 and
b^+_2(W)=0, hence (Y')≥(Y) by Theorem <ref>. Since Y
bounds a simply-connected negative definite 4–manifold (the trace of the
surgery on K) we have (Y)≥0 by the same theorem.
Let Y” be 0–surgery on γ.
By Floer's surgery theorem <cit.> there is a long exact sequence
⋯→ I(Y”)→ I(Y)ϕ→ I(Y')ψ→ I(Y”)→⋯
where ϕ is induced by the cobordism W.
Let n:=(Y') and suppose n≥2, the proposition already being proved
for n=0,1. Then there is a b∈ I(Y') such that
u_2^jb=
0, 0≤ j<n-1,
1, j=n-1.
By Proposition <ref> we have
ψ u_2b=u_2ψ b=0,
hence u_2b=ϕ a for some a∈ I(Y). For j≥0 we have
u_2^j a= u_2^jϕ a= u_2^j+1 b.
Combining this with Corollary <ref> we
obtain (Y)≥ n-1=(Y')-1 and the proposition is proved.□
If Y is (-1) surgery on a positive knot K in S^3 then (Y)=0.
This follows from Theorem <ref> because Y bounds
simply-connected 4–manifolds V_± where V_+ is
positive definite and V_- is negative definite. As V_- one can take the
trace of the (-1) surgery on K. On the other hand, since K can be
unknotted by changing a collection of positive crossings, the observation in
the beginning of the proof of Proposition <ref>
yields V_+.□
§.§ Proof of Proposition <ref>
Let Y_k:=Σ(2,2k-1,4k-3). Then Y_k bounds the simply-connected
4–manifold V_k obtained by plumbing according to the weighted graph
in Figure 1,
where the total number of nodes is 4k.
Let e_1,…,e_4k be an orthonormal basis for ^4k. The
intersection form of V_k is isomorphic to the lattice
Γ_{4k}:=
{∑_i x_ie_i :
2x_i∈ℤ, x_i-x_j∈ℤ, ∑_i x_i∈2ℤ},
with the nodes of the plumbing graph corresponding to the following
elements of Γ_{4k}:
1/2∑_i=1^4ke_i, e_2+e_3,
(-1)^j(e_j-1-e_j), j=3,…,4k.
Let w∈ J_k=H^2(V_k;) be the element corresponding to
1/2∑_i=1^4ke_i. Since ± w are the only elements
of minimal square norm
in w+2J_k it follows from Theorem <ref> that
(Y_k)≥ k-1.
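As a quick check of the numerology (our remark, with square norms taken up to
the overall sign of the negative definite form): the element
w=1/2∑_i e_i satisfies
w· w=4k·(1/4)=k,
and since ± w realize the minimum, the constants of Theorem <ref> are
k and n=2, so that n/2 is odd.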
On the other hand, Y_k is also the result of (-1) surgery on the
torus knot T_2,2k-1. Since T_2,2k-1 can be unknotted by changing k-1
crossings we deduce from Theorem <ref> that
(Y_k)≤ k-1.
This proves the proposition.□
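For orientation, a sketch of the smallest interesting case (k=2, stated here
only as an illustration): the graph has 8 nodes, Γ_{8} is the E_8 lattice,
V_2 carries the (negative definite) E_8 form, and Y_2=Σ(2,3,5) is the
Poincaré sphere, obtained by (-1) surgery on the trefoil T_{2,3}; the two
bounds above then give (Y_2)=1.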
§.§ Proof of Theorem <ref>
Since we will use different coefficient rings R, the homomorphism
:C^4(Y;R)→ R
defined in Subsection <ref> will now be denoted by
_R.
By definition, the condition h(Y)>0 means that there exists a cocycle
w∈ C^4(Y;) such that _ w≠0. Note that replacing the
coefficient group by yields an equivalent condition.
On the other hand, the condition (Y)>0 means that there exists a
cocycle z∈ C^4(Y;/2) such that _/2z≠0 and such that the
cohomology class of z is annihilated by u_3. If in addition z lifts
to an integral cocycle z∈ C^4(Y;) then _ z must be odd,
in particular non-zero, hence h(Y)>0.
Now suppose (Y)>0 and h(Y)≤0.
The above discussion shows that the homomorphism
I^4(Y;ℤ)→ I^4(Y;ℤ/2) is not surjective, hence the Bockstein homomorphism
I^4(Y;ℤ/2)→ I^5(Y;ℤ) is non-zero. This proves the theorem.□
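For the last step, recall the standard coefficient long exact sequence
(recorded here as a sketch; it applies because the Floer cochain groups are
free abelian groups):
⋯→ I^4(Y;ℤ)→ I^4(Y;ℤ/2)β→ I^5(Y;ℤ)2→ I^5(Y;ℤ)→⋯,
whose exactness at I^4(Y;ℤ/2) says that the Bockstein β vanishes exactly on
the image of I^4(Y;ℤ).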
§.§ Proofs of Theorems <ref> and
<ref>
Proof of Theorem <ref>:
Part (i) was proved in <cit.> using Seiberg-Witten
theory. To prove (ii), let =(2,3,5). Then ()=1 by
Proposition <ref>. If H^2(X;) contains no 2–torsion
then (ii) follows from Corollary <ref>. Under the weaker assumption
that H^2(X;) contains no element of order 4, we can appeal to
Theorem <ref> since u_3=0 on I().□
Proof of Theorem <ref>:
Let be the
monopole h–invariant defined in <cit.>. (One could
equally well use the correction term d.) Then (Σ)=-1, and
additivity of yields (Σ#Σ)=-2.
characteristic vector for J_X then by <cit.> one has
-(Y)≥1/8(b_2(X)+ξ·ξ).
Let J_X=m⟨-1⟩⊕ J_X' as in Corollary <ref>. By
assumption, J_X' is even, so
J_X has characteristic vectors ξ with ξ·ξ=-m. Therefore,
rank J_X'=b_2(X)-m≤16.
By the classification
of even unimodular definite forms of rank ≤16 (see <cit.>) one has
J_X'=0, -E_8, -2E_8, or -Γ_{16}.
It only remains to rule out J_X'=-Γ_{16}.
Recalling that Σ is the result of
(-1) surgery on the negative trefoil knot and applying
Proposition <ref>
twice we find that u_2^2=0 on I^*(Σ#Σ), hence
(Σ#Σ)≤2. On the other hand,
if J_X'=-Γ_{16} then applying Theorem <ref> as in the proof
of Proposition <ref> we would obtain
(Σ#Σ)≥3, a contradiction. This proves the theorem.□
§ TWO POINTS MOVING ON A CYLINDER, I
The main goal of this section is to prove
Proposition <ref>. The first two subsections
will introduce some concepts used in the proof, which appears in the final
subsection.
§.§ Energy and holonomy
Let Y be an oriented (integral) homology 3–sphere with base-point y_0.
Let
→^*(Y[0])
be the canonical oriented Euclidean 3–plane bundle, where
Y[0]=[-1,1]× Y as in eqn:ybt-def.
Let ,β∈(Y), not both reducible. Over M(,β)× there
is a canonical 3–plane bundle β
obtained by pulling back the universal bundle over
M(,β)×× Y by the map (,t)↦(,t,y_0).
There is a canonical isomorphism β→ R^* where
R:M(,β)×→^*(0), (,t)↦[t],
so we can identify the fibre of β at (,t) with
the fibre _[t] of at [t].
Recall from Subsection <ref>
that a section of β is called holonomy invariant if
for all =[A]∈ and real numbers s<t one has that (,s)
is mapped to (,t) by the isomorphism
equation*
_[s]→_[t].
defined by holonomy of A along the path [s,t]×{y_0}.
Let be the set of elements of ^*(0) that can be
represented by flat connections.
Choose three sections ρ_1,ρ_2,ρ_3 of which form a positive
orthonormal basis at every point in some neighbourhood of .
Choose ε>0 so small that the following three conditions hold:
(i) If A is any instanton over (-∞,2]× Y satisfying
_A((-∞,2])<ε such that the flat limit α of A is
irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0].
(ii) If A is any instanton over [-2,∞)× Y satisfying
_A([-2,∞))<ε such that the flat limit β of A is
irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0].
(iii) For each pair α,β∈(Y) the difference
cs(α)-cs(β)∈ℝ/ℤ has no real lift in the half-open interval
(0,2ε].
Here, _A refers to the energy of A as defined in
eqn:def-energy.
Let α,β be distinct elements of (Y). If [A]∈ M(α,β) then
_A()>2ε,
since the left hand side is a positive real lift of
cs(α)-cs(β). We can therefore define smooth functions
τ^-,τ^+:M(α,β)→ℝ
implicitly by
_A((-∞,τ^-(A)+2])=ε
=_A([τ^+(A)-2,∞)).
We will consider the average and difference
τ_a:=1/2(τ^++τ^-), τ_d:=τ^+-τ^-.
Clearly, τ_d>0.
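Note (immediately from the definitions) that τ^+ and τ^- are recovered from
this pair by
τ^±=τ_a±(1/2)τ_d.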
There are translationary invariant smooth restriction maps
R^±:M(,β)→^*(0), ↦[τ^±()]
which, by the unique continuation result of
Proposition prop:unique-continuation-cylinder, descend to injective maps
Ř^±:(,β)→^*(0).
If is irreducible then for any =[A]∈ M(,β) the vectors
equation
ρ_i(R^-()), i=1,2,3
form an orthonormal basis for _R^-(), by choice of .
Let ρ^-_i be the holonomy invariant section of β whose
value at (,τ^-()) is ρ_i(R^-()).
Similarly, if β is irreducible, then the vectors
ρ_i(R^+()) form an orthonormal basis for _R^+().
Let ρ^+_i be the holonomy invariant section of β whose
value at (,τ^+()) is ρ_i(R^+()).
If ,β are both irreducible let
h=(h_ij):M(,β)→3
be the map whose value at [A] is the holonomy of A along
[τ^-(A),τ^+(A)]×{y_0} with respect to the bases described above,
so that
ρ^-_j(,t)=∑_ih_ij()ρ^+_i(,t).
§.§ Factorization through the trivial connection
Now assume ()=4, (β)=1. We will introduce real valued
functions ^± on M(,β) which measure the extent
to which a given element factors through the trivial connection over Y.
Set
M_,θ:=R^-(M(,θ)),
which is a finite subset of ^*(0).
Let M_ be the union of all subsets
R^-(M(,β'))⊂^*(0) where β'∈^*(Y) and
M(,β')≤4. Note that M_ is compact.
Choose an open neighbourhood U_ of M_,θ in ^*(0) such that
itemize
* the closure of U_ is disjoint from M_,
* U_ is the disjoint union of open sets
U_,i, i=1,…,r, each of which
contains exactly one point from M_,θ.
Choose a closed neighbourhood U'_ of M_,θ contained in U_
and a smooth function
equation
e_:→[0,∞)
such that e_=1 on U'_ and e_=0 outside U_. Define the
translationary invariant function
λ^-:M(,β)→[0,∞), ↦ e_(R^-())·τ_d().
The function ^+ is defined in a symmetrical fashion (corresponding to
reversing the orientation of Y).
Let M_β be the union of all subsets
R^+(M(',β))⊂^*(0) where '∈^*(Y) and
M(',β)≤4.
Choose an open neighbourhood V_β of
M_θ,β:=R^+(M(θ,β) in ^*(0) such that
the closure of V_β is disjoint from M_β, and such that
V_β is the disjoint union of open sets
V_β,j, j=1,…,s, each of which
contains exactly one point from M_θ,β.
Choose a
closed neighbourhood V'_β of M_θ,β contained in V_β
and a smooth function
e_β:→[0,∞)
such that e_β=1 on V'_β and e_β=0 outside V_β. Set
λ^+:M(,β)→[0,∞), ↦ e_β(R^+())·τ_d().
lemma
There is a constant C<∞ such that for any ∈ M(,β)
satisfying ^-()+^+()>C one has ^-()=^+().
Suppose the lemma does not hold. Then one can find a sequence _n
in M(,β) such that ^-(_n)+^+(_n)→∞ and
^-(_n)≠^+(_n). After passing to a subsequence we may assume
that the sequence _n chain-converges. If the chain-limit lay in
(,β), or if the chain-limit involved factorization through
an irreducible critical point, then ^±(_n) would be bounded.
Therefore, the chain-limit must lie in
(,θ)×(θ,β) and, consequently,
^-(_n)=τ_d(_n)=^+(_n) for n≫0, a contradiction.□
In the course of the proof we also obtained the following:
lemma
For a chain-convergent sequence _n in M(,β) the following are
equivalent:
description
(i) λ^-(_n)→∞.
(ii) λ^+(_n)→∞.
(iii) The chain-limit of _n lies in
(,θ)×(θ,β).□
Since ^+ will not appear again in the text, we set
:=^-
to simplify notation. For any real number T set
_=T:={∈()=T}.
Given ∈ M(,β), one has R^-()∈ U_ if ()>0
(by definition of ), and R^+()∈ V_β if ()≫0
(by Lemma <ref>).
Therefore, if ()≫0 then there is a map
d:M(,β)_=T→(,θ)×(θ,β)
characterized by the fact that if d()=(_1,_2) then
R^-() and Ř^-(_1) lie in the same set U_,i, and
R^+() and Ř^+(_2) lie in the same set V_β,j.
Gluing theory (see <cit.>) provides the following result:
lemma
There is a T_0>0 such that for any T≥ T_0 the map
d× h×τ_a:
_=T→((,θ)×(θ,β))×3×
is a diffeomorphism.□
§.§ Proof of Proposition <ref>
Let ,β∈^*(Y) with
(β)-()≡58. To compute the
matrix coefficient (v_2v_3+v_3v_2),β we distinguish between two
cases. If ()≢48 the calculation will consist in counting
modulo 2 the number of ends of the 1-manifold 23(,β).
If ()≡48 then M(,β) may contain
sequences factoring through the trivial connection over Y. To deal
with this we consider the subspace of
M(,β)× consisting of points (,t) with
()≤ T for some large
T. By carefully cutting down this subspace to a 1-manifold and then
counting the number of ends and boundary points modulo 2 we obtain
eqn:v2v3chhom.
For s∈ we define the translation map
_s:→, (t,y)↦(t+s,y).
Part (I) Suppose ()≢48. Then
no sequence in M(,β) can have a chain-limit involving
factorization through the trivial connection.
We will determine the ends of the smooth 1-manifold 23(,β).
Let (_n,t_n) be a sequence in
23(,β). After passing to a subsequence we may assume that
the following hold:
description
(i) The sequence ^*_-t_n(_n)
converges over compact subsets of to some
^-∈ M(^-,β^-). (By this we mean
that there are connections
A_n,A̅ representing _n,^- respectively, such that
A_n→A̅ in C^∞ over compact subsets of .)
(ii) The sequence ^*_t_n(_n) converges over compact subsets of
to some ^+∈ M(^+,β^+).
(iii) The sequence t_n converges in [-∞,∞] to some point
t_∞.
Here, [-∞,∞] denotes the compactification of the real line obtained
by adding two points ±∞.
Suppose (_n,t_n) does not converge in 23(,β).
Case 1: t_∞ is finite. Then M(^-,β^-) has
dimension 4 and either ^-= or β^-=β. The corresponding
number of ends of 23(,β), counted modulo 2, is
(dϕ+ϕ d),β.
Case 2: t_∞=∞. Let n^± be the dimension of
M(^±,β^±). Because
s_1(^-[0])=0, s_2(^+[0])∧ s_3(^+[0])=0
we must have n^-≥3 and n^+≥2. On the other hand,
n^-+n^+≤ M(,β)=5,
so n^-=3, n^+=2. It follows that
=^-, β^-=^+, β^+=β.
The corresponding number of ends of 23(,β) is
v_2v_3,β modulo 2.
Case 3: t_∞=-∞. Arguing as in Case 2 one finds that the number
of such ends of 23(,β) is
v_3v_2,β modulo 2.
Since the total number of ends of 23(,β) must be zero modulo 2,
we obtain the equation eqn:v2v3chhom in the case
()≢48.
Part (II) Now suppose ()≡48.
We will again make use of a cut-off function b as in eqn:b-prop1 in
Subsection <ref>,
but we now impose two further conditions, namely
b(0)=1/2, b'(t)>0 for -1<t<1.
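One concrete function with all of these properties (an example only; the
arguments use nothing beyond the stated properties) is
b(t)=g(t+1)/(g(t+1)+g(1-t)) with g(s)=e^{-1/s} for s>0 and g(s)=0 for s≤0,
for which b(0)=g(1)/(2g(1))=1/2 and, for -1<t<1,
b'(t)=[g'(t+1)g(1-t)+g(t+1)g'(1-t)]/(g(t+1)+g(1-t))^2>0.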
Set
c:×→, (,t)↦ b(t-τ_a()).
Choose generic 3×3 matrices A^+=(a^+_ij) and A^-=(a^-_ij) and
for j=1,2,3 define a section ρ_j of the bundle R^*
over M(,β)× by
ρ_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijρ^+_i.
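In particular, since c vanishes where t≤τ_a-1 and equals 1 where t≥τ_a+1,
the section ρ_j agrees with ∑_ia^-_ijρ^-_i well to the left of τ_a and with
∑_ia^+_ijρ^+_i well to the right; the matrices A^± only interpolate between
the two frames {ρ^-_i} and {ρ^+_i} on the band |t-τ_a|<1.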
Define a function g:M(,β)×→[0,1] by
g(,t):=b(()-1)· b(τ^+()-t)· b(t-τ^-()).
For j=1,2,3 we now define a section s_j of R^* by
s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·ρ_j(,t).
defn
Let 23(,β) be the subspace of × consisting of those
points (,t) that satisfy the following conditions:
itemize
* s_1(,-t)=0,
* s_2(,t) and s_3(,t) are linearly dependent.
To understand the ends of 23(,β)
we will need to know that certain subspaces of M(,θ) and
M(θ,β), respectively, are “generically” empty.
These subspaces are defined as
follows. For ∈ M(,θ) and j=1,2,3 let
s_j():=(1-b(-τ^-()))· s_j([0])+b(-τ^-())
∑_ia^-_ijρ^-_i(,0),
and for ∈ M(θ,β) let
s_j():=(1-b(τ^+()))· s_j([0])+b(τ^+())
∑_ia^+_ijρ^+_i(,0).
Set
M_2(,θ) :={∈ M(,θ) s_2()∧ s_3()=0},
M_3(,θ) :={∈ M(,θ) s_1()=0}.
Replacing (,θ) by (θ,β) in the last two definitions
we obtain subspaces M_k(θ,β) of M(θ,β).
For k=2,3, each of the spaces M_k(,θ) and M_k(θ,β)
has expected dimension
1-k and is therefore empty for “generic” choices of sections s_j and
matrices A^±.
There is a constant C_0<∞ such that for all
(,t)∈23(,β) one has
|t|≤min(-τ^-(),τ^+())+C_0.
We must prove that both quantities |t|+τ^-() and
|t|-τ^+() are uniformly bounded above for (,t)∈23(,β).
The proof is essentially the same in both cases, so we will only spell it out
in the first case. Suppose, for contradiction, that (_n,t_n)
is a sequence in 23(,β)
with |t_n|+τ^-(_n)→∞.
After passing to a subsequence we may assume
that the sign of t_n is constant, so |t_n|=-et_n for some constant
e=±1. Then [et_n]→ by exponential decay
(see <cit.>), and
s_j(,et_n)=s_j(_n[et_n]) for n≫0.
If e=1 then this gives
0=s_2(_n[t_n])∧ s_3(_n[t_n])→ s_2()∧ s_3(),
as n→∞, whereas if e=-1 we get
0=s_1(_n[-t_n])→ s_1().
However, for “generic” sections s_j, both s_2()∧ s_3()
and s_1() are non-zero. This contradiction proves the lemma.
□
For any constant C_1<∞ there is constant L>0 such that for
all (,t)∈23(,β) satisfying ()≥ L one has
|t|≤min(-τ^-(),τ^+())-C_1.
Suppose to the contrary that there is a constant C_1<∞ and a
sequence (_n,t_n) in 23(,β) such that (_n)→∞
and
|t_n|>min(-τ^-(_n),τ^+(_n))-C_1.
After passing to a subsequence we may assume that at least one of the following
two conditions holds:
(i) |t_n|>-τ^-(_n)-C_1 for all n,
(ii) |t_n|>τ^+(_n)-C_1 for all n.
The argument is essentially the same in both cases, so suppose (i) holds. By
Lemma <ref> we also have
|t_n|≤-τ^-(_n)+C_0,
hence the sequence τ^-(_n)+|t_n| is bounded. Since
(_n)→∞ we have τ_d(_n)→∞, so
τ^+(_n)+|t_n|=τ_d(_n)+(τ^-(_n)+|t_n|)→∞.
After passing to a subsequence we may assume that
* the sequence _n chain-converges;
* the sequence τ^-(_n)+|t_n| converges to a real number;
* |t_n|=-et_n for some constant e=±1.
From Lemma <ref> we deduce that '_n:=^*_et_n_n
converges over compact subsets of to
some ∈ M(,θ). For large n we have c(_n,et_n)=0
and
g(_n,et_n)=b(et_n-τ^-(_n))=b(-τ^-('_n))→ b(-τ^-()).
For j=1,2,3 we now get
s_j(_n,et_n)→ s_j().
But then lies in
M_2(,θ) (if e=1) or in M_3(,θ) (if e=-1),
contradicting the fact that the latter two spaces are empty.□
Choose L≥2 such that for all (,t)∈23(,β) with
()≥ L one has
|t|≤min(-τ^-(),τ^+())-1,
which implies that s_j(,t)=ρ_j(,t). Set
23(,β):={(,t)∈23(,β)()≥ L}.
We will show that 23(,β) is transversely cut and therefore
a one-manifold with boundary, and determine the number of boundary
points and ends modulo 2. We will see that the number of ends is given by
the same formula as in Part (I), whereas the boundary points contribute the
new term ' of eqn:v2v3chhom.
Ends of 23(,β):
Let (_n,t_n) be a sequence in
23(,β). After passing to a subsequence we may assume that
(i),(ii), (iii) of Part (I) as well as the following hold:
description
(iv) The sequence _n is chain-convergent.
(v) The sequence τ_a(_n) converges in [-∞,∞].
(vi) Either (_n)>0 for all n, or (_n)=0 for all n.
Suppose (_n,t_n) does not converge in 23(,β).
Case 1: (_n)=0 for all n. Then g(_n,t_n)=0 and therefore
s_j(_n,t_n)=s_j(_n[t_n]).
This case is similar to Part (I) and the corresponding number of ends of
23(,β), counted modulo 2, is
(v_2v_3+v_3v_2+dϕ+ϕ d),β,
where ϕ is defined as before.
Case 2: (_n)>0 for all n. We show this is impossible.
By definition of the
chain-limit of _n must lie in (,β), so
τ_d(_n) is bounded. By Lemma <ref>, the sequence
τ^-(_n) is bounded above whereas τ^+(_n) is bounded below,
hence both sequences must be bounded.
Applying Lemma <ref>
again we see that t_n is bounded. Therefore, both sequences
τ_a(_n) and t_n converge in , so (_n,t_n) converges
in M(,β)× and hence in 23(,β),
which we assumed was not the case.
Boundary points of 23(,β): Let M=M(3,) be the space of
all 3×3 real matrices, and let U⊂ M be the open subset
consisting of those matrices B satisfying
B_1≠0, B_2∧ B_3≠0,
where B_j denotes the jth column of B. Then M∖ U is the union
of three submanifolds of codimension at least two, hence U is a connected
subspace and a dense subset of M. Let
F:3××× U× U →^3×^3×^3,
(H,v,w,B^+,B^-) ↦(F_1,F_2,F_3),
where
F_1 =(1-b(v))HB^-_1+b(v)B^+_1,
F_j =(1-b(w))HB^-_j+b(w)B^+_j, j=2,3.
Then F is a submersion, so F(0,0,0) is empty. Moreover, the set
Z:=F^-1({0}× L(ℝ^3)),
consisting of those points in the domain of F for which
F_1=0, F_2∧ F_3=0,
is a codimension 5 submanifold and a closed subset of
3×^2× U^2.
The projection π:Z→ U^2 is a proper map whose mod 2 degree is
_2(π)=1.
The equations eqn:FFF imply -1<v,w<1, hence π is
proper. To compute its degree,
let e_1,e_2,e_3 be the standard basis for ^3 and let B^± be given by
B^-_1=B^-_2=e_1, B^-_3=e_2,
B^+_1=-e_1, B^+_2=e_1, B^+_3=-e_2.
We show that the preimage
Z':=π^-1(B^+,B^-) consists of precisely one point.
Suppose (H,v,w)∈ Z'. Because 0≤ b≤1, the equation F_1=0 implies
b(v)=1/2 and hence v=0, He_1=e_1, F_2=e_1. Because
He_2⊥ e_1, the vectors F_2,F_3 are linearly dependent if and only if
F_3=0, which yields w=0, He_2=e_2. Thus,
Z'={(I,0,0)},
where I is the identity matrix.
Using the fact that F(I,0,0,B^+,B^-)=(0,e_1,0)
and that the tangent space to L^*(^3) at (e_1,0) is
^3×{0}+ e_1 it is easy to see that the map
F( · , · , · ,B^+,B^-):3××→^9
is transverse to
{0}× L^*(^3) at (I,0,0), or equivalently, that (B^+,B^-)
is a regular value of π. This proves the claim.□
By Lemma <ref> we can identify
∂23(,β)=
(,θ)×(θ,β)×π^-1(A^+,A^-),
where (H,v,w) corresponds to (h(),-t-τ_a(),t-τ_a()) for
(,t)∈∂23(,β).
Hence, for generic matrices A^± the number of boundary points of
23(,β), counted modulo 2, is ',β.
This completes the proof of Proposition <ref>.
□
§ TWO POINTS MOVING ON A CYLINDER, II
Let Y be an oriented homology 3–sphere.
In this section we will prove Proposition <ref>, which concerns
a certain cochain map
ψ:C^*(Y)→ C^*+5(Y)
appearing in the proof of additivity of .
We will continue using the notation
introduced in Section <ref>.
§.§ The cochain map ψ
We begin by recalling the definition of ψ in degrees
different from 4 mod 8 given in Subsection <ref>.
Let s_1,s_2 be "generic" sections of the canonical 3–plane bundle
→^*(Y[0]).
(Later we will impose further conditions on s_1,s_2.)
For any ,β∈^*(Y) set
33(,β):={(,t)∈× s_1([-t])=0=s_2([t])}.
If (,β)=5 and ()≢48 then
arguing as in Part (I) of the proof of Proposition <ref>
one finds that
33(,β) is a finite set. We define the matrix coefficient
ψ,β by
ψ,β:=#33(,β).
Recall that any "generic" section of defines a cup product
C^*(Y)→ C^*+3(Y) by the formula eqn:v3def. Let v_3 and v'_3
be the cup products defined by s_1 and s_2, respectively.
prop
For q≢3,48 one has
dψ+ψ d=v_3v'_3+v'_3v_3
as maps C^q(Y)→ C^q+6(Y).
Let ,∈^*(Y) with (,)=6 and
()≢3,48. Note that no sequence in M(,) can
have a chain-limit involving factorization through the trivial connection.
Now let (_n,t_n) be a sequence
in 33(,). After passing to a subsequence we may assume that
description
(i) The sequence ^*_t_n_n converges over compact subsets of
to some point ^+∈ M(^+,^+).
(ii) The sequence ^*_-t_n_n converges over compact subsets of
to some point ^-∈ M(^-,^-).
(iii) The sequence t_n converges in [-∞,∞] to some point
t_∞.
Clearly, s_1(^+[0]=0=s_2(^-[0]), hence (^±,^±)≥3.
Case 1: t_∞ finite. Then (^+,^+)=5 and either
^+= or ^+=. The corresponding number of ends of
33(,), counted modulo 2, is
(dψ+ψ d),.
Case 2: t_∞=∞. Then (^±,^±)=3, so
^-=, ^-=^+, and ^+=. The corresponding number of ends of
33(,) is v_3v'_3, modulo 2.
Case 3: t_∞=-∞. As in Case 2 one finds that the number of such
ends is v'_3v_3, modulo 2.
Since the total number of ends of 33(,) must be zero modulo 2,
we obtain the proposition.□
We now show that v_3=v'_3 if the sections s_1,s_2
are close enough in a certain
sense. To make this precise, we introduce the following
terminology: We will say a section s of has
Property 4 if for all
,β∈^*(Y) with (,β)≤4 the map
s_β:M(,β)→, ↦ s([0])
is transverse to the zero-section in .
Suppose s∈() has Property 4, and let be any
finite-dimensional linear
subspace of (). Then for any sufficiently small ∈
the following hold:
description
(i)The section s':=s+ has Property 4.
(ii)The sections s and s' define the same cup product
C^*(Y)→ C^*+3(Y).
Let (,β)=3.
Combining the transversality assumption with a compactness argument
one finds that the zero-set Z of s_β is a finite set.
Now observe that the map
equation
M(,β)×→, (,)↦(s+)([0])
is smooth, since has finite dimension. Therefore, given any
neighbourhood U of Z in M(,β) then the zero-set of
(s+)_β is contained in U for all sufficiently small .
The lemma now follows by applying the implicit function theorem to the map
eqn:sfrpmap.□
From now on we assume that s_1,s_2 are sufficiently close in the sense of the
lemma, so that in particular v_3=v'_3. Since we are taking coefficients
in /2, we deduce from Proposition <ref> that dψ=ψ d
in degrees different from 3 and 4 modulo 8.
We now extend the definition of ψ to degree 4.
Let ,β∈^*(Y) with
()=4 and (β)=1. To define ψ,β we use
the set-up of Subsections <ref> and <ref>
and define ρ_j, s_j for j=1,2
as in Subsection <ref>, where A^± should now be
generic 3×2 real matrices. In particular, we require that
A^± should have non-zero columns and that the angle between the columns
of A^+ should be different from the angle between the columns
of A^-. For any 3×2 real matrix B with non-zero columns B_j
set
ν(B):= B_1,B_2/B_1B_2,
using the standard scalar product and norm on ^3.
Then the above assumption on the angles means that ν(A^+)≠ν(A^-).
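For illustration only (in the construction A^± are of course chosen
generically subject to these constraints), one pair meeting the requirements
is given by the columns
A^-_1=e_1, A^-_2=e_2 and A^+_1=e_1, A^+_2=e_1+e_2,
for which all columns are non-zero and ν(A^-)=0 while ν(A^+)=1/√2.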
Now define
33(,β):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}.
prop
33(,β) is a finite set.
It is easy to see that Lemmas <ref>
and <ref> hold with 33(,β) in place of
23(,β). Arguing as in the proof of Proposition <ref>
one finds that for any L>0 there are only finitely many points
(,t)∈33(,β) with ()≤ L. Choose L≥2 such that
for all (,t)∈33(,β) with ()≥ L one has
|t|≤min(-τ^-(),τ^+())-1,
which implies that s_j(,t)=ρ_j(,t). We claim that
there are no such (,t). For suppose (,t)
is such an element and set
(H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××.
Then for j=1,2 one has
(1-b(v_j))HA^-_j+b(v_j)A^+_j=0.
However, there is no solution (H,v_1,v_2) to these equations, since we
assume the columns A^±_j are non-zero and ν(A^+)≠ν(A^-).□
We define ψ in degree 4 by
ψ,β:=#33(,β).
prop
If the endomorphism ψ is defined in terms of “generic” sections s_1,s_2
that are sufficiently close then
dψ=ψ d
as maps C^*(Y)→ C^*+6(Y).
Although we could deduce this from Proposition <ref> below,
we prefer to give a direct proof, partly because the techniques involved
are also needed in the proof of Proposition <ref>.
It only remains to prove this in degrees 3 and 4 modulo 8. There is a
complete symmetry between these two cases because of
Lemma <ref>, so we will spell out the proof only in
degree 4. Let ,∈^*(Y) with ()=4, ()=2.
We will show that (dψ+ψ d),=0 by counting the ends of
a certain 1–dimensional submanifold
33(,) of M(,)×.
For any '∈(Y) we define a smooth function
:M(',)→
as follows.
For each β∈^1_Y let K_β be the union of all subsets
R^+(M(”,))⊂^*(Y[0]) where β≠”∈(Y) and
(”,)≤(β,),
where ( · , · ) is as in eqn:cs-al-beta.
Then K_β is compact. Choose a closed neighbourhood
W_β in ^*(Y[0]) of the finite set R^+(M(β,)) such that
W_β is disjoint from K_β, and a smooth function
f_β:^*(Y[0])→[0,1]
such that the following two conditions hold:
* W_β and W_β' are disjoint if β≠β';
* f_β=1 on a neighbourhood of R^+(M(β,)),
and f_β=0 outside W_β.
Set f:=1-∑_β f_β.
Let be the set of all β∈^1_Y such that
(',)>(β,)>0.
For ∈ M(',) and β∈ we
define τ^+_β()∈ implicitly by
_([τ^+_β()-2,∞))=(β,)+,
where the constant is as in Subsection <ref>, and set
():=f(R^+())·τ^+()+
∑_β f_β(R^+())·τ^+_β().
The function behaves under translation in the same way as
τ^±. Namely, for any real number s one has
(^*_s())=()-s.
For any ∈ M(',) let
() denote the restriction of to the band ().
For i=1,2,3 let i be the holonomy invariant section of
'β whose value at (,()) is ρ_i(()).
lemma
Let _n be a chain-convergent sequence in M(',).
If the last term of the chain-limit of _n lies in (β,)
for some β∈^*(Y) of index 1 then
(τ^+-)(_n)→∞,
otherwise the sequence (τ^+-)(_n) is bounded.
Because of the translationary invariance of τ^+- we may
assume that τ^+(_n)=0. Then _n converges over compact subsets of
to some element ∈ M(”,) representing the last term
in the chain-limit of _n. In fact, because no energy
can be lost at ∞ by the choice of , there are, for any real number
r, connections A_n,A representing _n,, respectively, such
that
A_n-A_L^p,w_1((r,∞)× Y)→0,
as follows from the exponential decay results of <cit.>.
Here, p,w are as in the definition of the space of connections
in Section <ref>.
Suppose first that β:=” is irreducible of index 1. Then
(_n)=τ^+_β(_n) for n≫0 and
(τ^+-τ^+_β)(_n)=-τ^+_β(_n)→∞,
proving the first assertion of the lemma.
Now suppose the sequence
(τ^+-)(_n) is not bounded. After passing to a subsequence we may
assume that there exists a β∈ such that for each n one has
R^+(_n)∈ W_β. Suppose, for contradiction, that ”≠β.
Since W_β is closed we must have R^+()∈ W_β
as well, hence
(”,)>(β,).
From eqn:anai we deduce that
τ^+_β(_n)→τ^+_β(),
so
(-τ^+)(_n)=τ^+_β(_n) is bounded. This contradiction shows
that ”=β.□
If _n is a sequence in M(',) which converges over compacta
to ∈ M(”,), where ”∈(Y) and
(”)≠1, then
(_n)→().
Let β∈^1_Y with (β,)>0.
If (”,)≤(β,) then R^+()∉W_β. Since
W_β is closed, we have R^+(_n)∉W_β for n≫0. This means
that β contributes neither to () nor to (_n) for
n≫0. If on the other hand (”,)>(β,) then
τ^+_β(_n)→τ^+_β().
From this the lemma follows.□
Let and be the real-valued functions on M(,) defined by
:=1/2(+τ^-), :=1/2(-τ^-).
Let
:M(,)→[0,∞), ↦ e_(R^-())·(),
where e_ is as in eqn:eal. As the following lemma shows,
the quantity () measures the extent to which
factors through the trivial connection θ over Y.
lemma
Let _n be a chain-convergent sequence in M(,).
If the first term of the chain-limit of _n lies in (,θ) then
(_n)→∞,
otherwise the sequence (_n) is bounded.
Because of the translationary invariance of we may assume
τ^-(_n)=0 for all n,
so that the sequence _n converges over compact subsets of to some
∈ M(,β), where β∈(Y). Then represents
the first term of the chain-limit of _n.
Part I. Suppose first that β=θ. We will show that
(_n)→∞.
There are two sequences 1,2 of real numbers such that
itemize
* ^*_1(_n) converges over compact subsets of to an
element of M(,θ).
* ^*_2(_n) converges over compact subsets of to an
element of M(θ,β'), where β' is an element of ^*(Y) which
is either equal to or has index 1.
* 2-1→∞.
Define the sequence r_n of real numbers implictly by
__n((-∞,r_n])=(,θ)+.
Then r_n<τ^+(_n) and r_n<τ^+_β(_n) for all β∈_,
hence r_n<(_n). For large n one therefore has
(_n)=(_n)-τ^-(_n)>r_n-τ^-(_n).
But
1-τ^-(_n), 2-r_n
are both bounded sequences and 2-1→∞, hence
(_n)>r_n-τ^-(_n)→∞.
Part II. Now suppose β is irreducible. We will show that
the sequence (_n) is bounded.
Case 1: β=. Then _n converges to in
M(,), hence (_n) is bounded.
Case 2: (,β)≤4. For large n one would then have
R^-(_n)∉U_, hence e_(R^-(_n))=0 and therefore
(_n)=0.
Case 3: (,β)=5, i.e. (β)=1.
For large n one would then have
R^+(_n)∈ W_β and therefore
(_n)=e_(_n[0])·τ^+_β(_n)
→ e_([0])·τ^+(),
so that (_n) is bounded in this case, too.□
Given '∈(Y), a real number d, and a real 3×2 matrix
A'=(a'_ij) of maximal rank we define two sections ζ_1,ζ_2 of
' by
ζ_j(,t):=b^+ j+(1-b^+)∑_i=1^3a'_ijρ^+_i,
where b^+:=b(τ^+--d). Here, and in the remainder of this
section, b:→ is a smooth function satisfying eqn:b-prop1
and eqn:b-prop2.
We will show that for '= and generic matrix A' the sections
ζ_1,ζ_2 are linearly independent at any point
(,t)∈ M(,)× with ()≫0. We begin by spelling
out sufficient conditions on A' under which this holds.
For any β∈^1_Y the finite set
(θ,β)×(β,) is in 1-1 correspondence with
the set of points (,')∈ M(θ,β)× M(β,)
satisfying
τ^+()=0=τ^+(').
(In other words, this is one way of fixing translation.) For each such pair
(,'), represented by a pair (A,A') of connections, say, the holonomy
of A along the path [0,∞)×{y_0} composed with the holonomy of
A' along (-∞,0]×{y_0} defines an isomorphism
_,':_[0]→_'[0].
For any real number r and j=1,2 let
η_j(r)=r·_,'(ρ_j([0]))+
(1-r)∑_i=1^3a'_ijρ_i('[0]).
Then the set
C:={r∈[0,1]η_1(r)∧η_2(r)=0}
has expected dimension 1-2=-1 and is empty for generic matrices A'.
Since (Y) is finite we conclude that for generic A', the set C
is empty for any β∈^1_Y and any
(,')∈ M(θ,β)× M(β,) satisfying ttom.
From now on we assume A' is chosen so that this holds.
lemma
Let A' be as described above.
If d>0 is sufficiently large then
the sections ζ_1,ζ_2 are linearly independent at every
point in M(θ,)×.
If the lemma were false then we could find a sequence d_n of real
numbers converging to ∞ and for each n an element
_n∈ M(θ,) such that ζ_1,ζ_2, defined with d_n
in place of d, are linearly dependent at (_n,t) for some (hence any)
t. Because A' has maximal rank and the assumptions on ensure that
ρ_1,ρ_2,ρ_3 are linearly independent at R^+(_n), we must have
b^+(_n)>0, i.e.
(τ^+-)(_n)>d_n-1,
which shows that (τ^+-)(_n)→∞. After passing to a subsequence
we can assume that the sequence _n is chain-convergent and that
b^+(_n) converges to some r∈[0,1]. By Lemma <ref>
the chain-limit lies in (θ,β)×(β,) for some
β∈^1_Y. Then the sequences
^*_τ^+(_n)(_n), ^*_τ^+_β(_n)(_n)
converge over compact subsets of to some ∈ M(θ,β) and
'∈ M(β,), respectively, and ttom holds. But then
η_1(r) and η_2(r) are linearly dependent, contradicting the assumption
on A'.□
From now on we assume that d is chosen so that the
conclusion of Lemma <ref> holds.
lemma
There is a constant T_1<∞ such that the sections ζ_1,ζ_2
are linearly independent at every point (,t)∈ M(,)× with
()>T_1.
Recall that if ζ_1,ζ_2 are linearly independent at
(,t) for some real number t then the same holds at (,t') for all
t'. Now suppose the lemma
were false. Then we could find a sequence _n in M(,) such that
(_n)→∞ and ζ_1(_n,t),ζ_2(_n,t) are linearly
dependent for every n. We may also arrange that τ^+(_n)=0.
After passing to a subsequence we may assume that
is chain-convergent. From Lemma <ref> we see that
there are two possibilities for the chain-limit.
Case 1: The chain-limit of _n lies in
(,θ)×(θ,β)×(β,) for some
β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0.
Let ∈ M(θ,β) be a representative for the
middle term of the chain-limit. By Lemma <ref> we have
(τ^+-)(_n)→∞, so for t_n:=() one has
ζ_j(_n,t_n)→ρ_j(R^+()),
contradicting the fact that the ρ_j are linearly independent at
R^+().
Case 2: The chain-limit of _n lies in
(,θ)×(θ,). Then _n converges over compact
subsets of to some ∈ M(θ,) satisfying
τ^+()=0. According to Lemma <ref> we have
(_n)→(), so
ζ_j(_n,t)→ζ_j(,t)
for any t. Hence, ζ_1,ζ_2 must be linearly dependent at
(,t). But d was chosen so that the conclusion of
Lemma <ref> holds, so we have a contradiction.□
At any point (,t)∈ M(',)× where ζ_1,ζ_2
are linearly independent let ξ_1(,t),ξ_2(,t) be the
orthonormal pair of vectors in _[t] obtained by applying the
Gram-Schmidt process to ζ_1(,t) and ζ_2(,t), and let
ξ_3=ξ_1×ξ_2 be the fibrewise cross-product of ξ_1 and ξ_2.
Then {ξ_j(,t)}_j=1,2,3 is a positive orthonormal basis for
_[t].
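For concreteness, the Gram–Schmidt step can be spelled out explicitly (a routine restatement, introducing no new assumptions): at any point (,t) where ζ_1,ζ_2 are linearly independent,
ξ_1 := ζ_1/|ζ_1|, ξ_2 := (ζ_2-⟨ζ_2,ξ_1⟩ξ_1)/|ζ_2-⟨ζ_2,ξ_1⟩ξ_1|, ξ_3 := ξ_1×ξ_2,
where ⟨·,·⟩ and |·| denote the fibrewise inner product and norm on _[t].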
We now have the necessary ingredients to define the cut-down moduli space
33(,). Set
c:M(,)×→[0,1], (,t)↦ b(t-())
and for j=1,2,3 define a section _j of the bundle _ over
M(,)× by
_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijξ_i.
Choose a constant T_1 for which the conclusion of Lemma <ref>
holds and define a function g:M(,)×→[0,1] by
g(,t):=b(()-T_1)· b(()-t)· b(t-τ^-()).
For j=1,2,3 we now define a section s_j of _ by
s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·_j(,t).
Now set
33(,):={(,t)∈× : s_1(,-t)=0, s_2(,t)=0}.
In the study of the ends of 33(,) we will encounter certain
subspaces of M(θ,) which we now define. For ∈ M(θ,)
and j=1,2 set
s_j():=(1-b(()))· s_j([0])
+b(())∑_i=1^3a^+_ijξ_i(,0)
and define
M_3;j(θ,):={∈ M(θ,) : s_j()=0}.
This space has expected dimension 2-3=-1 and is empty for “generic”
choices of sections s_j and matrix A^+.
There is a constant C_0<∞ such that for all
(,t)∈33(,) one has
|t|≤min(-τ^-(),())+C_0.
That |t|+τ^-() is uniformly bounded above for
(,t)∈33(,) is proved in the same way as the corresponding part
of Lemma <ref>. To prove the same for |t|-(),
suppose there were a sequence (_n,t_n)∈33(,) with
|t_n|-(_n)→∞.
After passing to a subsequence we may assume the following.
* The sequence _n is chain-convergent;
* There is a constant e=±1 such that |t_n|=et_n for all n;
* The sequence et_n-τ^+(_n) converges in [-∞,∞] to
some point t.
Let j:=1/2(3+e). Then for n≫0 we have
0= s_j(_n,et_n)=s_j(_n[et_n]).
According to Lemma <ref> one of the following two cases
must occur.
Case 1: The sequence (τ^+-)(_n) is bounded. Then
et_n-τ^+(_n)→∞, so _n[et_n]→. By continuity of
s_j we must have s_j()=0, which however will not hold for a
“generic” section s_j.
Case 2: (τ^+-)(_n)→∞. From Lemma <ref>
we deduce that ^*_τ^+(_n)(_n) converges over compact subsets of
to some ∈ M(β,), where β∈^1_Y. Then
(_n)=τ^+_β(_n) for n≫0. Furthermore,
^*_τ^+_β(_n) converges over compacta to an element of some
moduli space M(',β), where β≠'∈(Y).
Case 2a: t=±∞. Then the exponential decay results of
<cit.> imply that
_n[et_n] converges to (if t=-∞) or to
(if t=∞). This is ruled out in the same way as Case 1.
Case 2b: t finite. Then ^*_et_n(_n) converges over compacta
to ':=^*_t()∈ M(β,), and _n[et_n]→'[0].
But then s_j('[0])=0, which will not hold for a “generic” section
s_j of the bundle , since M(β,) has dimension 1
whereas has rank 3.□
For any constant C_1<∞ there is a constant L>0 such that for
all (,t)∈33(,) satisfying ()≥ L one has
|t|≤min(-τ^-(),())-C_1.
If not, then there would be a constant C_1<∞ and
a sequence (_n,t_n)∈33(,) with (_n)→∞
such that either
(i) |t_n|>-τ^-(_n)-C_1 for all n, or
(ii) |t_n|>(_n)-C_1 for all n.
Case (i) is ruled out as in the proof of Lemma <ref>. Now
suppose (ii) holds. Because (_n)→∞ we have
(_n)→∞.
From Lemma <ref> we deduce that |t_n|-(_n) is bounded,
so
|t_n|-τ^-(_n)→∞.
This implies that c(_n,t_n)=1 for n≫0. After passing to a subsequence
we may assume that the sequence _n chain-converges and
|t_n|=-et_n for some constant e=±1.
Case 1: (τ^+-)(_n) is bounded. By Lemmas <ref>
and <ref> the chain-limit of _n must lie in
(,θ)×(θ,), so after passing to a subsequence
we may assume that '_n:=^*_et_n(_n) converges over compacta to some
∈ M(θ,). Using Lemma <ref> we obtain
g(_n,et_n)=b((_n)-et_n)=b(('_n))→ b(()).
Let j:=1/2(3+e). Then
0= s_j(_n,et_n)→ s_j().
But then lies in M_3;j(θ,), which is empty by choice of the
matrix A^+.
Case 2: (τ^+-)(_n)→∞. Then the chain-limit of _n
lies in (,θ)×(θ,β)×(β,) for
some β∈^1_Y. For large n we now have
(_n)=τ^+_β(_n) and ξ_j(_n,et_n)= j(_n,et_n),
j=1,2. After passing to a subsequence we may assume that
'_n:=^*_et_n(_n) converges over compacta to some
∈ M(θ,β). For large n we have
g(_n,et_n)=b(τ^+_β(_n)-et_n)=b(τ^+_β('_n))→
b(τ^+()).
Let j:=1/2(3+e). Then
0= s_j(_n,et_n)→(1-b(τ^+()))· s_j([0])
+b(τ^+())∑_ia^+_ijρ^+_i(,0).
Thus, lies in M_3(θ,β), which is empty by choice of A^+.
□
There is a constant L<∞ such that for all (,t)∈33(,)
one has ()<L.
For any (,t)∈33(,) with ()>T_1 let
h()∈3 be the matrix whose coefficients h_ij() are given by
ρ^-_j(,t)=∑_ih_ij()ξ_i(,t).
By Lemma <ref> there is an L≥ T_1+1 such that for all
(,t)∈33(,) with ()≥ L one has
|t|≤min(-τ^-(),())-1,
which implies that s_j(,t)=_j(,t). Given such a (,t),
the triple
(H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××
satisfies the equation
(1-b(v_j))HA^-_j+b(v_j)A^+_j=0.
for j=1,2. However, as observed in the proof of
Proposition <ref>, these equations have no solution
for generic matrices A^±.□
We will now prove Proposition <ref> in degree 4
by counting the number of ends of 33(,) modulo 2.
Ends of 33(,): Let (_n,t_n) be a sequence in
33(,). After passing to a subsequence we may assume that
the following hold:
(i) The sequences ^*_-t_n(_n) and ^*_t_n(_n)
converge over compact subsets of .
(ii) The sequence ^*_τ^-(_n)(_n) converges over compacta
to some ∈ M(,β), where β∈(Y).
(iii) The sequences t_n and τ^-(_n) converge in
[-∞,∞].
Suppose (_n,t_n) does not converge in 33(,).
Case 1: β=. We show this cannot happen. First observe that
the sequence (_n)
converges in . Since Lemma <ref> provides an upper bound
on τ^-(_n) and a lower bound on (_n) it follows that
both sequences must be bounded. Applying the same lemma again we see that
|t_n| is bounded. But then assumptions (ii) and (iii) imply that
(_n,t_n) converges in 33(,), which we assumed was not the
case.
Case 2: β irreducible, M(,β)≤4. Then
(_n)=0 for n≫0. As in the proof of
Proposition <ref> we find that the corresponding number of ends
of 33(,) is ψ d,.
Case 3: β irreducible, M(,β)=5. Then
(_n)=τ^+_β(_n) for n≫0, and
(_n)→τ_d().
As in Case 1 we see that the sequences τ^-(_n)
and t_n must be bounded, hence they both converge in by assumption (iii).
From (ii) we deduce that _n converges over compacta to some
'∈ M(,β) (related to by a translation).
By Lemma <ref> we have ξ_j(_n,t)= j(_n,t) for
n≫0 and any t, so
_j(_n,t)→_j(',t).
Setting t':=lim t_n we conclude that
(',t')∈33(,β). The corresponding number of ends of
33(,) is dψ,.□
§.§ Calculation of ψ
There are constants ^±∈/2 independent of Y and satisfying
^++^-=1 such that if ψ is defined in terms of “generic”
sections s_1,s_2
that are sufficiently close and e is the sign of
ν(A^+)-ν(A^-) then there is a homomorphism
Ξ:C^*(Y)→ C^*+4(Y) such that
ψ=v_3v_2+^e'+dΞ+Ξ d.
To be precise, if s'∈() satisfies Property 4 and
⊂() is any sufficiently large finite-dimensional linear
subspace then for any sufficiently small generic
(_0,_1)∈×
the conclusion of the proposition holds with s_j=s'+_j.
The above proposition completes the proof of Proposition <ref>
except for the order of v_2,v_3, which is insignificant in view of
Proposition <ref>. (The order could be reversed by a small
change in the proof given below.)
Let ,β∈^*(Y) with (,β)=5.
Part (I) Suppose ()≢48.
For -3≤ y≤3 we define a section χ_y of by
6χ_y:=(3-y)s_1+(3+y)s_2.
In particular,
χ_-3=s_1, χ_3=s_2.
Let
:={z∈:|(z)|≤3, |z|≥1}
and let ':=/±1 be the surface-with-boundary obtained by identifying
each z∈ with -z. The image of a point z∈ in ' will be
denoted by [z].
Let ξ̅∈(), and let ξ̂ be a section of the bundle
× S^1 over ^*(Y[0])× S^1 satisfying
ξ̂(,-z)=-ξ̂(,z),
so that ξ̂∈_a() in the notation of
Section <ref>. We then define a section ξ of the
bundle × over ^*(Y[0])× as follows. Let
b_1(z):=b(|z|-2).
For ∈^*(Y[0]) and z=(x,y)∈ let
ξ(,z):=(1-b_1(z))·(ξ̅()+ξ̂(,z/|z|))
+b_1(z)χ_y().
Let f:→ be the smooth function given by
f(z):=b_1(z)(z).
Note that f(z)=(z) for |z|≥3, and f(z)=0 for |z|=1.
Moreover, f(-z)=-f(z).
(i) Let =(,β)
be the subspace of M(,β)×'
consisting of those points (,[z]) such that
ξ([f(z)],z)=0, ξ([f(-z)],-z)=0.
(ii) Let =(,β) be the subspace of
M(,β)× S^1×[0,∞)
consisting of those points (,z^2,r) such that z∈ S^1 and
ξ̂([-r],z)=0, ξ̅([r])=0.
If ξ̅ is “generic” and ξ̂ is given by a “generic” section
of ⊗ (see Lemma <ref>)
then will be a smooth
1–manifold-with-boundary. Now choose a section s'∈() satisfying
Property 4. If is a sufficiently large finite-dimensional
linear subspace of () and (_0,_1) a generic element of
× then taking s_j=s'+_j, j=1,2 the space will be
a smooth 1–manifold-with-boundary. (The reason that transversality can be
achieved over the boundary component of M(,β)×' given by
|z|=1 is essentially that if V is any real vector space then
every element of V× V can be written as (a+b,a-b) for suitable
a,b∈ V.) If in addition _0,_1 are
sufficiently small then for -3≤ y≤3 the section χ_y
will satisfy Property 4 and define the same cup product
v_3:C^*(Y)→ C^*+3(Y) as s', by Lemma <ref>.
The part of the boundary of given by |z|=1 can be identified with
the boundary of (defined by r=0). To see this, let
(,z)∈ M(,β)× and set _0:=[0]. Then
(,[z])∈ if and only if
ξ̅(_0)+ξ̂(_0,z)=0=ξ̅(_0)-ξ̂(_0,z),
which in turn is equivalent to (,z^2,0)∈.
This allows us to define a topological 1–manifold-with-boundary
=(,β) as a quotient
of the disjoint union ∐ by identifying each boundary point
of with the corresponding boundary point of .
The proposition will be proved by counting the ends and boundary points
of modulo 2. Before doing this, we pause to define the homomorphism
Ξ. Let ',β'∈^*(Y) with (',β')=4.
Replacing (,β) by (',β') in Definition <ref>
yields zero-dimensional manifolds _j(',β'), j=1,2.
The argument that we will give below to determine the ends of _j(,β)
can also be applied to show that _j(',β') is compact.
Granted this, we define Ξ:=Ξ_1+Ξ_2, where Ξ_j has matrix coefficient
Ξ_j',β':=#_j(',β').
Ends of (,β): Let (_n,[z_n]) be a sequence in
(,β), where z_n=(x_n,y_n)∈^2. After passing to a subsequence
we may assume that
(i) The sequence ^*_-x_n(_n)
converges over compact subsets of to some
^-∈ M(^-,β^-).
(ii) The sequence ^*_x_n(_n) converges over compact subsets of
to some ^+∈ M(^+,β^+).
(iii) The sequence (x_n,y_n) converges in
[-∞,∞]×[-3,3] to some point (x,y).
Suppose (_n,[z_n]) does not converge in (,β).
Case 1: x finite. Then (^+,β^+)=4 and either
^+= or β^+=β. The corresponding number of ends of
(,β) is (dΞ_1+Ξ_1d),β modulo 2.
Case 2: x=±∞. Then for n≫0 one has
0=ξ([± x_n],± z_n)→χ_± y(^±[0]).
Hence χ_± y(^±[0])=0. Since
χ_± y satisfy Property 4 we must have (^±,β^±)≥3,
so
5=(,β)≥(^-,β^-)+(^+,β^+)≥6.
This contradiction shows that there are no ends in the case x=±∞.
Ends of (,β): We argue as in part (I) of the proof of
Proposition <ref>. Let (_n,z_n^2,r_n) be a sequence in
(,β).
After passing to a subsequence we may assume that r_n converges in
[0,∞] to some point r. Then the number of ends modulo 2
corresponding to r<∞ is (dΞ_2+Ξ_2d),β.
Using Proposition <ref> and Lemma <ref> we see that
the number of ends corresponding to r=∞ is v_3v_2,β.
Boundary points of (,β): These are the points
(,[z]) in M(,β)×' where (z)=3 and
0=ξ([x],z)=s_2([x]), 0=ξ([-x],-z)=s_1([-x]).
The number of such points is by definition ψ,β.
Since the number of ends plus the number of boundary points of must
be zero modulo 2 we obtain the equation eqn:psi-v3v2 in the
case ()≢48.
Part (II) Suppose ()≡48. We define maps
V^±:[-3,3]→^3 by
6V^±(y):=(3-y)A^±_1
+(3+y)A^±_2.
Choose generic elements
L̅^±∈^3 and functions L̂^±:S^1→^3 satisfying
L̂^±(-z)=-L̂^±(z) for z∈ S^1. We define maps
L^±:→^3 by
L^±(z):=(1-b_1(z))· (L̅^±+L̂^±(z/|z|))+
b_1(z)· V^±((z)),
where the function b_1 is as in eqn:b1def. Let (,β)
be the vector bundle over × obtained by pulling back
the bundle →^*(Y[0]) by the map
×→^*(Y[0]), (,z)↦[f(z)].
Let c and g be the functions defined in
eqn:c23def and eqn:gomt, respectively.
We define sections ,s of (,β) by
(,z):=(1-c(,f(z)))∑_i=1^3L^-_i(z)ρ^-_i(,f(z))
+c(,f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)),
s(,z):=(1-g(,f(z)))·ξ([f(z)],z)+g(,f(z))·(,z).
Let =(,β) be the subspace of ×' consisting
of those points (,[z]) such that
s(,z)=0, s(,-z)=0.
We define sections ,s̅ of the bundle (,β)
over × by
(,r):=(1-c(,r))∑_i=1^3L̅^-_iρ^-_i(,r)
+c(,r)∑_i=1^3L̅^+_iρ^+_i(,r),
s̅(,r):=(1-g(,r))·ξ̅([r])+g(,r)·(,r).
Let (,β) be the vector bundle over
× S^1× obtained by pulling back the bundle
by the map
× S^1×→ Y[0], (,z,r)↦[r].
We define sections ,ŝ of (,β) by
(,z,r):=(1-c(,r))∑_i=1^3L̂^-_i(z)ρ^-_i(,r)
+c(,r)∑_i=1^3L̂^+_i(z)ρ^+_i(,r),
ŝ(,z,r):=(1-g(,r))·ξ̂([r],z)+g(,r)·(,z).
Note that ŝ(,-z,r)=-ŝ(,z,r).
Let =(,β) be the subspace of
× S^1×[0,∞)
consisting of those points (,z^2,r) such that z∈ S^1 and
ŝ(,z,-r)=0, s̅(,r)=0.
By inspection of the formulas involved one finds that for |z|=1 one has
(,0)+(,z,0) =(,z),
s̅(,0)+ŝ(,z,0) =s(,z).
Therefore, the part of the boundary of given by |z|=1 can be
identified with the boundary of (defined by r=0). By gluing
and correspondingly
we obtain a topological 1–manifold-with-boundary .
There is a constant C_0<∞ such that for all
(,[z])∈ one has
|f(z)|≤min(-τ^-(),τ^+())+C_0.
The proof is similar to that of Lemma <ref>.
We must provide upper bounds on both quantities |f(z)|+τ^-() and
|f(z)|-τ^+() for (,[z])∈.
The proof is essentially the same in both cases, so we will only spell it out
in the second case. Suppose, for contradiction, that (_n,[z_n])
is a sequence in with |f(z_n)|-τ^+(_n)→∞.
By perhaps replacing z_n by -z_n we can arrange that
(z_n)≥0. Then f(z_n)≥0 as well,
and g(_n,f(z_n))=0 for n≫0. Let z_n=(x_n,y_n).
After passing to a subsequence we may assume
that z_n converges in [0,∞]×[-3,3] to some
point (x,y).
Case 1: x finite. Let z:=(x,y)∈. The sequence _n
converges to over compact subsets of , so
for large n we have
0=ξ(_n[f(z_n)],z_n)→ξ(,z).
However, the space of all w∈ for which ξ(,w)=0 has
expected dimension 2-3=-1, so this space is empty for “generic”
sections s_1,s_2,ξ̅,ξ̂. Hence, x cannot be finite.
Case 2: x=∞. Then f(z_n)=x_n for large n.
Now, ^*_x_n_n converges over compacta
to , so for large n we have
0=ξ(_n[x_n],z_n)=χ_y_n(_n[x_n])→χ_y().
However, the space of all t∈[-3,3] for which χ_t()=0 has
expected dimension 1-3=-2, so this space is empty for “generic”
sections s_1,s_2. Hence, x≠∞.
This contradiction proves the lemma.□
In the proof of Lemma <ref> below
we will encounter certain limits
associated to sequences in with chain-limits in
(,θ)×(θ,β). These limits lie in
cut down moduli spaces analogous to those introduced in
Definitions <ref> and <ref>,
with M(,θ) or M(θ,β)
in place of . We now define these cut-down spaces in the case of
M(θ,β) and observe that they are “generically” empty.
The case of M(,θ) is similar.
For any (,z)∈× let
s(,z):= (1-b(τ^+()-f(z)))·ξ([f(z)],z)
+b(τ^+()-f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)).
Let (θ,β) be the subspace of M(θ,β)×'
consisting
of those points (,[z]) such that
s(,z)=0, s(,-z)=0.
Then (θ,β) has expected dimension 3-6=-3 and is
empty for “generic” sections s_1,s_2,ξ̅,ξ̂ and generic choices of
A^+,L̅^+,L̂^+.
Let (θ,β) be the subspace of
M(θ,β)×[-3,3] consisting of those points (,y) such that
(1-b(τ^+()))·χ_y([0])
+b(τ^+())∑_iV^+_i(y)ρ^+_i(,0)=0.
We observe that the space (θ,β) (a parametrized version of
the space M_3(θ,β) defined in
Subsection <ref>)
has expected dimension 2-3=-1 and is
empty for “generic” sections s_1,s_2 and generic matrix
A^+.
For any constant C_1<∞ there is a constant L>0 such that for
all (,[z])∈ satisfying ()≥ L one has
|f(z)|≤min(-τ^-(),τ^+())-C_1.
The proof is similar to that of Lemma <ref>. If the lemma
did not hold there would be a sequence (_n,[z_n]) in such that
(_n)→∞ and one of the following two conditions hold:
(i) |f(z_n)|>-τ^-(_n)-C_1 for all n,
(ii) |f(z_n)|>τ^+(_n)-C_1 for all n.
Suppose (ii) holds, the other case being similar.
By replacing z_n by -z_n, if necessary, we can arrange that (z_n)≥0.
From Lemma <ref> we deduce that the sequence
f(z_n)-τ^+(_n) is bounded, whereas
f(z_n)-τ^-(_n)→∞.
For large n we therefore have
c(_n,f(z_n))=1, g(_n,f(z_n))=b(τ^+(_n)-f(z_n)).
Let z_n=(x_n,y_n).
After passing to a subsequence we may assume that
* '_n:=^*_x_n_n converges over compact subsets of to
some '∈ M(θ,β);
* z_n converges in [0,∞]×[-3,3] to some point z=(x,y).
Case 1: x finite. Then _n converges over compacta to some
∈, and
0=s(_n,z_n)→ s(,z).
Because the sequence z_n is bounded, we also have c(_n,f(-z_n))=1 for
large n, so
0=s(_n,-z_n)→ s(,-z).
But then (,[z]) belongs to (θ,β), contradicting the fact
that that space is empty.
Case 2: x=∞. Since
τ^+('_n)=τ^+(_n)-x_n,
we obtain
g(_n,f(z_n))=b(τ^+('_n)) for n≫0.
Therefore,
0=s(_n,z_n)→
(1-b(τ^+(')))·χ_y('[0])
+b(τ^+('))∑_iV^+_i(y)ρ^+_i(',0).
But this means that (',y) belongs to (θ,β), which is
empty.
This contradiction proves the lemma.□
There is a constant C_0<∞ such that for all
(,z^2,r)∈ one has
r≤min(-τ^-(),τ^+())+C_0.
This is similar to the proof of Lemma <ref>.□
For any constant C_1<∞ there is a constant L>0 such that for
all (,z^2,r)∈ satisfying ()≥ L one has
r≤min(-τ^-(),τ^+())-C_1.
This is similar to the proof of Lemma <ref>.□
Choose L≥2 such that the conclusions of Lemmas <ref>
and <ref> hold with C_1=1. For all (,[z])∈
with ()≥ L we then have
s(,z)=(,z),
and for all (,z^2,r)∈ with ()≥ L we have
ŝ(,z,-r)=(,z,-r), s̅(,r)=(,r).
From Lemma <ref> it follows that
L is a regular value of the real functions on
and defined by . Therefore,
:={(,[z])∈ : ()≤ L},
:={(,z^2,r)∈ : ()≤ L}
are smooth 1–manifolds-with-boundary, and
^L:=∪
is a topological 1–manifold-with-boundary. (As before we identify the
part of given by |z|=1 with the part of given by r=0.)
Ends of ^L: From Lemma <ref> we deduce that every
sequence (_n,[z_n]) in which satisfies (_n)>0 has a
convergent subsequence. Similarly, it follows from Lemma <ref>
that every sequence (_n,z_n^2,r_n) in with (_n)>0 has a
convergent subsequence. (See the proof of Proposition <ref>,
“Ends of 23(,β)”, Case 2.) Therefore, all ends of ^L
are associated with sequences on which =0.
The number of such ends,
counted modulo 2, is given by the same formula as in Part (I), namely
(v_3v_2+dΞ+Ξ d),β.
Boundary points of ^L: The boundary of ^L decomposes as
^L=∪'∪,
where and are the parts of the boundaries of
and , respectively, given by ()=L, and '
is the part of the boundary of given by (z)=±3.
By choice of matrices A^± there are no points (,t)∈33(,β)
with ()≥ L, hence W'_=33(,β) and
#'=ψ,β.
By Lemma <ref> we can identify
=(,θ)×(θ,β)×, =(,θ)×(θ,β)×,
where is the set of points (H,τ,[z]) in
3××' satisfying
(1-b(f(z)-τ))HL^-(z)+b(f(z)-τ)L^+(z)=0,
(1-b(f(-z)-τ))HL^-(-z)+b(f(-z)-τ)L^+(-z)=0,
whereas is the set of points (H,τ,z^2,r) in
3×× S^1×[0,∞) satisfying
(1-b(-r-τ))HL̂^-(z)+b(-r-τ)L̂^+(z)=0,
(1-b(r-τ))HL̅^-+b(r-τ)L̅^+=0.
Here, (H,τ) corresponds to (h(),τ_a()).
It follows from these descriptions that
#(∪)=',β,
where =#(∪)∈/2 is independent of the manifold Y.
To prove the theorem it only remains to understand the dependence of
on the pair of matrices A=(A^+,A^-). To emphasize the dependence on A
we write =(A) and =(A). The space is
independent of A. The part of corresponding to |z|=1 is also
independent of A and is empty for generic L̅,L̂ for dimensional
reasons.
Let P denote the space of all pairs (B^+,B^-) of 3×2 real
matrices with non-zero columns B^±_j. Let
P^±:={(B^+,B^-)∈ P : ±(ν(B^+)-ν(B^-))>0},
where ν is as in eqn:nuB. Note that each of P^+,P^- is homotopy
equivalent to S^2× S^2 and therefore path connected.
For any smooth path C:[0,1]→ P we define
:=⋃_0≤ t≤1(C(t))×{t}⊂3××'×[0,1].
As observed above there are no points (H,τ,[z],t) in with |z|=1.
Since b_1(z)>0 for |z|>1 we can therefore make regular
(i.e. transversely cut out) by varying C alone. If is regular
then it is a compact 1–manifold-with-boundary, and
=(C(0))∪(C(1))∪ X_C,
where X_C is the set of points (H,τ,x,t) in
3×××[0,1] satisfying the two equations
(1-b(x-τ))HC^-_1(t)+b(x-τ)C^+_1(t)=0,
(1-b(-x-τ))HC^-_2(t)+b(-x-τ)C^+_2(t)=0.
It follows that
(C(0))+(C(1))=#X_C.
If A,B∈ P^+ then we can find a path C:[0,1]→ P^+ from A to B.
Then X_C is empty. By perturbing C(t) for 0<t<1 we can arrange that
is regular. This yields (A)=(B). The same holds if
A,B∈ P^-.
Let ^± be the value that takes on P^±. To compute
^++^-, let (e_1,e_2,e_3) be the standard basis for ^3 and define
C:[0,1]→ P by
-C^+_1(t) =C^-_1(t):=e_1,
-C^+_2(t) :=(1-t)e_1+te_2,
C^-_2(t) :=(1-t)e_2+te_1.
Then C(0)∈ P^+ and C(1)∈ P^-. Moreover,
X_C consists of the single point
(I,0,0,1/2), and this point is regular. (Here I is the identity matrix.)
If we perturb C a little in order
to make regular then X_C will still consist of a single, regular point.
We conclude that
^++^-=#X_C=1.
This completes the proof of the proposition.□
§ INSTANTONS REDUCIBLE OVER OPEN SUBSETS
The following proposition is implicit in <cit.> but we include a
proof for completeness.
Let X be an oriented connected Riemannian 4–manifold and E→ X
an oriented Euclidean 3–plane bundle. Suppose A is a non-flat
ASD connection in E which restricts to a reducible connection over some
non-empty open set in X. Then there exists a rank 1 subbundle of E
which is preserved by A.
This is a simple consequence of the unique continuation argument in the
proof of <cit.>. The proof has two parts: local existence
and local uniqueness.
(i) Local existence. By unique continuation, every point in X has a connected
open neighbourhood V such that A|_V is reducible, i.e. there exists
a non-trivial automorphism u of E|_V such that ∇_Au=0. The
1–eigenspace of u is then a line bundle preserved by A.
(ii) Local uniqueness. Because A is not flat, it follows from
unique continuation that the set of points in X where F_A=0 has empty
interior. Now let V be any non-empty connected open set in X and suppose
A preserves a rank 1 subbundle ⊂ E|_V. We show that is
uniquely determined. Let x∈ V be a point where F_A≠0. By the
holonomy description of curvature
(see <cit.>) we can find a loop
in V based at x such that the holonomy _(A) of A along
is close to but different from the identity. The 1–eigenspace of
_(A) is then 1–dimensional and must agree with the fibre _x.
If x' is an arbitrary point in V then there is a similar description of
_x' in terms of the holonomy of A along a loop obtained by
conjugating with a path in V from x to x'.
□
§ UNIQUE CONTINUATION ON A CYLINDER
As in Subsection <ref> let
Y be a closed oriented connected 3-manifold and P→ Y an
3 bundle. If Y is not an integral homology sphere then we assume
P is admissible.
Let J⊂ be an open interval.
We consider the perturbed ASD equation for connections
in the bundle J× P→ J× Y obtained by
adding a holonomy perturbation to the Chern-Simons function. For a connection
A in temporal gauge the equation takes the form
A_t/ t=-*F(A_t)+V(A_t),
where A_t is the restriction of A to the slice {t}× P and
V is the formal gradient of the perturbation.
The following proposition is probably well known among experts, but we
include a proof for completeness.
Suppose A,A' are perturbed ASD connections in the bundle
J× P→ J× Y. If A and A' are in temporal gauge and
A_T=A'_T for some T∈ J, then A=A'.
We will apply (an adaptation of)
the abstract
unique continuation theorem in <cit.>. To this end, fix an arbitrary
connection B in P and let
c_t=A_t-A'_t, a_t=A_t-B, a'_t=A'_t-B.
We have
F(A_t)=F(B)+d_Ba_t+a_t∧ a_t
and similarly for A'_t, so
c_t/ t+*d_Bc_t=-*(a_t∧ c_t+c_t∧ a'_t)
+V(A_t)-V(A'_t).
By <cit.> we have
V(A_t)-V(A'_t)_L^2≤c_t_L^2,
hence
c_t/ t+*d_Bc_t_L^2≤ϕ(t)c_t_L^2
where
ϕ(t)=(a_t_∞+a'_t_∞+1).
Because *d_B is a formally self-adjoint operator on 1–forms on Y and
ϕ is locally square integrable (in fact, continuous), we deduce
from <cit.> that for any
compact subinterval [t_0,t_1] of J
there are constants C_0,C_1 such that for t_0≤ t≤ t_1 one has
c_t_L^2≥c_t_0_L^2·exp(C_0t+C_1).
(<cit.> considers the case when c_t is defined for 0≤ t<∞, but
the approach works equally well in our case.)
Taking t_1=T we obtain c_t=0 for
t<T. Replacing c_t by c_-t we get c_t=0 for
t>T as well.□
10
AS1
M. F. Atiyah and I. M. Singer.
The index of elliptic operators: I.
Ann. of Math., 87:484–530, 1968.
BD1
P. J. Braam and S. K. Donaldson.
Floer's work on instanton homology, knots and surgery.
In H. Hofer, C. H. Taubes, A. Weinstein, and E. Zehnder, editors,
The Floer Memorial Volume, pages 195–256. Birkhäuser, 1995.
DHST1
I. Dai, J. Hom, M. Stoffregen, and L. Truong.
An infinite-rank summand of the homology cobordism group.
arXiv:1810.06145.
D1
S. K. Donaldson.
An application of gauge theory to four dimensional topology.
J. Diff. Geom., 18:279–315, 1983.
D2
S. K. Donaldson.
The orientation of Yang–Mills moduli spaces and 4–manifold
topology.
J. Diff. Geom., 26:397–428, 1987.
D5
S. K. Donaldson.
Floer Homology Groups in Yang–Mills Theory.
Cambridge University Press, 2002.
DK
S. K. Donaldson and P. B. Kronheimer.
The Geometry of Four-Manifolds.
Oxford University Press, 1990.
Miller-Eismeier1
M. Miller Eismeier.
Equivariant instanton homology.
arXiv:1907.01091.
FS2
R. Fintushel and R. J. Stern.
Definite 4–manifolds.
J. Diff. Geom., 28:133–141, 1988.
F1
A. Floer.
An instanton invariant for 3–manifolds.
Comm. Math. Phys., 118:215–240, 1988.
Fr0
K. A. Frøyshov.
On Floer homology and 4–manifolds with boundary, 1995.
D.Phil. thesis, University of Oxford.
Fr1
K. A. Frøyshov.
The Seiberg–Witten equations and four-manifolds with boundary.
Math. Res. Lett., 3:373–390, 1996.
Fr3
K. A. Frøyshov.
Equivariant aspects of Yang–Mills Floer theory.
Topology, 41:525–552, 2002.
Fr7
K. A. Frøyshov.
An inequality for the h–invariant in instanton Floer theory.
Topology, 43:407–432, 2004.
Fr13
K. A. Frøyshov.
Compactness and gluing theory for monopoles, volume 15 of
Geometry & Topology Monographs.
Geometry & Topology Publications, 2008.
Fr4
K. A. Frøyshov.
Monopole Floer homology for rational homology 3–spheres.
Duke Math. J., 155:519–576, 2010.
Fr14
K. A. Frøyshov.
4–manifolds and intersection forms with local coefficients.
J. Diff. Geom., 91:233–259, 2012.
Hirsch
M. W. Hirsch.
Differential Topology.
Springer, 1976.
HM
D. Husemoller and J. Milnor.
Symmetric Bilinear Forms.
Springer-Verlag, 1973.
Kotsch1
D. Kotschick.
SO(3)–invariants for 4-manifolds with b_2^+=1.
Proc. London Math. Soc., 63(3):426–448, 1991.
KM3
P. B. Kronheimer and T. S. Mrowka.
Embedded surfaces and the structure of Donaldson's polynomial
invariants.
J. Diff. Geom., 41:573–734, 1995.
KM5
P. B. Kronheimer and T. S. Mrowka.
Monopoles and Three-Manifolds.
Cambridge University Press, 2007.
KM7
P. B. Kronheimer and T. S. Mrowka.
Knot homology groups from instantons.
J. Topology, 4:835–918, 2011.
Jeffrey-Lee-Manifolds-DG
Jeffrey M. Lee.
Manifolds and Differential Geometry.
AMS, 2009.
NST1
Y. Nozaki, K. Sato, and M. Taniguchi.
Filtered instanton Floer homology and the homology cobordism group.
arXiv:1905.04001.
Ogawa
H. Ogawa.
Lower bounds for solutions of differential inequalities in Hilbert
space.
Proc. AMS, 16:1241–1243, 1965.
OS6
P. S. Ozsváth and Z. Szabó.
On the Floer homology of plumbed three-manifolds.
Geometry & Topology, 7:185–224, 2003.
Scaduto2
Ch. W. Scaduto.
On definite lattices bounded by a homology 3–sphere and
Yang-Mills instanton Floer theory.
arXiv:1805.07875.
Scaduto1
Ch. W. Scaduto.
Instantons and odd Khovanov homology.
J. Topology, 8(3):744–810, 2015.
University of Oslo, Norway
Email: [email protected]
|
http://arxiv.org/abs/2307.04034v1 | 20230708191132 | Robust Universal Inference | [
"Beomjo Park",
"Sivaraman Balakrishnan",
"Larry Wasserman"
] | stat.ME | [
"stat.ME"
] |
Robust Universal Inference
Beomjo Park, Sivaraman Balakrishnan, and Larry Wasserman
Department of Statistics & Data Science
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213.
August 12, 2023
In statistical inference, it is rarely realistic that the hypothesized statistical model is
well-specified, and consequently it is important to understand the effects of misspecification on inferential procedures.
When the hypothesized statistical model is misspecified, the natural target of inference is a projection of the data generating distribution onto the model.
We present a general method for constructing valid confidence sets for such projections, under weak regularity conditions, despite possible model misspecification.
Our method builds upon the universal inference method of <cit.> and
is based on inverting a family of split-sample tests of relative fit. We study settings in which our methods yield either exact or approximate,
finite-sample valid confidence sets for various projection distributions. We study rates at which the resulting confidence sets shrink around the target of inference and complement these results
with a simulation study.
§ INTRODUCTION
One of the broad goals of statistical inference is to draw conclusions about a population from a sample of the population. This goal is typically facilitated by the use of a
statistical model 𝒫, a collection of distributions, which the statistician hypothesizes will contain a useful approximation to the data generating distribution.
The well-specified case is when ∈ and the misspecified case is when this does not necessarily hold.
In the misspecified case, the target of inference is usually
a projection distribution. Formally, given a divergence ρ which maps a pair of distributions to ℝ_+, we can define the (forward) projection[We tacitly assume that the projection exists and is unique. When the projection is not unique our inferential guarantees always hold for any (arbitrary) fixed choice of the projection . Characterizing the existence of a projection distribution (for f-divergences) has received some attention in past work <cit.>.] of the distribution
onto the statistical model as:
:= _P ∈ρ( P).
The general goal of our paper is to construct uniformly valid confidence sets for assuming only weak regularity conditions
on the distribution and the statistical model .
We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, where 𝒬⊇𝒫 is a class of distributions
satisfying weak regularity conditions. We wish to construct (honest)
confidence sets, C_α(X_1,…, X_n)
such that,
inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α.
In parametric statistical models, in the well-specified case,
the likelihood-ratio test, and confidence sets
obtained from asymptotically Gaussian estimators, are the main inferential tools for constructing hypothesis tests and confidence intervals. In the misspecified case,
one can develop analogous tools for constructing tests and intervals for the Kullback-Leibler (KL) projection parameter,
using sandwich estimates for the variance <cit.>.
The validity of these methods, in both the well-specified and misspecified cases, relies on large sample asymptotic theory and requires that the statistical
model 𝒫 and the sampling distribution satisfy strong regularity conditions.
In recent work <cit.>, we introduced a procedure (described in more detail in Section <ref>)
based on data-splitting to construct uniformly, finite-sample valid likelihood-ratio confidence sets under no regularity conditions. This work showed that,
in the well-specified setting, sample-splitting can yield practical, finite-sample valid inference, even for irregular statistical models, often at a surprisingly small statistical price.
The challenges of inference under weak regularity conditions are exacerbated in the misspecified setting.
In contrast to the well-specified case where the target of inference is unambiguous, in the misspecified case there are many natural targets of inference. Each choice of
the divergence ρ in (<ref>), yields a different target and in most cases these targets will have drastically different properties.
This in turn poses significant
challenges in constructing a unified framework for statistical inference in the misspecified setting. Under weak regularity conditions, the KL projection distribution
can be an unstable inferential target, wherein small perturbations to the data-generating distribution P^* can lead to dramatic shifts in the target P. From a theoretical standpoint,
this makes finite-sample valid inference for the KL projection distribution
challenging, unless strong regularity conditions are imposed. From a practical
standpoint, these instabilities can make the KL projection distribution an undesirable target, and in these cases it is essential to develop a flexible family of methods that can target
other (more stable) projection distributions.
To address these challenges, we develop
a re-interpretation of the universal inference method <cit.> as inverting a particular family of pairwise likelihood-ratio tests. This interpretation
brings into focus the key building block of universal inferential methods – pairwise hypothesis tests. Building on this insight
we show that one can develop robust universal inference procedures by inverting appropriate families of robust pairwise tests. We then study
the design and properties of robust pairwise tests, and relate them to the coverage and size properties of our proposed robust universal inference method.
§.§ Related Work
Asymptotic statistical inference, in both the well-specified and misspecified cases, is a topic
of classical interest.
Some entry points to the vast literature on this topic include the reference books <cit.>. Results in this literature <cit.>
typically leverage strong regularity conditions to determine the asymptotic distribution of a point estimate (such as the Maximum Likelihood Estimator (MLE)), and use the asymptotic distribution of the estimate
to construct (asymptotically valid) confidence sets.
Our work is motivated in part by a recent line of work <cit.>, and more classical work <cit.>,
where sample-splitting is used to avoid the strong regularity conditions typically needed for valid statistical inference.
The focus on statistical inference under weaker regularity conditions, despite model misspecification, is the central theme of work in robust statistics <cit.>.
One of the best understood methods for constructing robust estimators is to select, from a set of candidates, one which wins a carefully setup tournament – an idea which goes back to <cit.>, and
others. At the heart of these tournament estimators are pairwise selectors, which attempt to robustly select one of a pair of candidates, which provide a better relative fit to the sampling distribution.
These robust pairwise tests have been used to great effect in robust estimation, and our work highlights their usefulness in constructing assumption-light confidence sets.
§.§ Outline
The rest of this paper is organized as follows. Section <ref> provides some background. We briefly introduce the universal inference procedure, and develop a new perspective on it.
Section <ref> motivates the methods we study in this paper by pinpointing some of the failures of universal inference.
Section <ref> describes a general strategy to construct confidence sets for projection distributions, and highlights the importance of designing tests of relative fit.
Section <ref> highlights some important examples where we are able to build on prior work in order to design exact and approximate tests of relative fit for different choices of the underlying divergence measure.
Section <ref> studies the size of the resulting confidence sets.
Section <ref> demonstrates some of the strengths of our proposed inference methods
based on illustrative numerical examples. We conclude in Section <ref> with a brief discussion of future work.
§ BACKGROUND
We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, and
we let denote our working statistical model.
Throughout the paper, the collection of distributions will be quite general, typically only satisfying some weak regularity conditions.
§.§ Universal Inference
Our starting point is our prior work <cit.> which
introduced a procedure
based on data-splitting to construct uniformly, finite-sample valid confidence sets under weak regularity conditions.
Importantly, the validity guarantees of universal inference
require the statistical model to be well-specified. The universal inference procedure is to:
* Split the data := {X_1,…,X_n} into two sets _0 and _1.
* On the set _1 calculate any estimate (e.g., could be the MLE in the model ).
* Assume that the distributions in 𝒫 have densities (denoted with lower-case symbols) with respect to a dominating measure λ. We let ℒ_0(P) denote the likelihood of the distribution P evaluated on the samples in _0:
ℒ_0(P) := ∏_i ∈_0 p(X_i),
and define ℒ_0() analogously.
Then construct the confidence set,
C_α(X_1,…,X_n) = { P: ℒ_0(P)/ℒ_0()≥α}.
In the well-specified case, <cit.> show (in their Theorem 1) that,
under no additional regularity conditions, C_α is a finite-sample
valid 1 - α confidence set for
the distribution .
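To make the construction concrete, the following is a minimal numerical sketch of the split LRT set for a one-dimensional Gaussian location model; the model, the grid of candidate means, the sample size and the random seeds are illustrative choices of ours and not part of the original procedure.

import numpy as np
from scipy.stats import norm

def universal_set(x, alpha=0.05, grid=np.linspace(-3, 3, 601), seed=1):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[:len(x) // 2]], x[idx[len(x) // 2:]]        # D_1 and D_0
    theta_hat = d1.mean()                                      # pilot estimate (MLE on D_1)
    loglik0 = lambda th: norm.logpdf(d0, loc=th).sum()         # log L_0(theta)
    keep = np.array([loglik0(th) - loglik0(theta_hat) >= np.log(alpha) for th in grid])
    return grid[keep]                                          # {theta : L_0(theta) / L_0(theta_hat) >= alpha}

x = np.random.default_rng(0).normal(loc=0.5, size=200)
conf = universal_set(x)
print(conf.min(), conf.max())                                  # endpoints of the resulting interval

In this well-specified example the returned set is an interval of grid points around the pilot estimate, and the finite-sample coverage guarantee above applies no matter how the pilot is chosen.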
§.§ A Re-Interpretation of Universal Inference
To motivate the main proposal of this paper it is useful to re-visit (and generalize)
the procedure described above, via the lens of inverting a family of hypothesis tests. The basic idea is classical, and is sometimes referred
to as the duality between confidence sets and hypothesis tests.
Formally, given samples X_1,…, X_n ∼,
suppose we have a family of tests
ϕ_P: {X_1,…,X_n}↦{0,1}
for testing the null hypothesis H_0: = P. Here the test function
ϕ_P takes the value 1 to indicate a rejection of the null hypothesis and takes the value 0 otherwise.
If the family of tests is valid, i.e. they control the Type I error,
_P [ϕ_P(X_1,…,X_n) ]≤α, ∀ P ∈,
then the following confidence set,
C_α(X_1,…,X_n) := { P ∈: ϕ_P = 0 },
is uniformly valid when the statistical model is correctly specified, i.e.
inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α.
Although this is a general recipe for constructing valid confidence sets, it does not provide the statistician much guidance in designing
tests which might lead to small confidence sets.
Universal inference is based on the idea that one can use a separate sample
to construct an accurate estimate . We can then construct our family of tests, on the remaining samples, to have high power in distinguishing
the sampling distribution from this pilot estimate. Formally, we could choose our family of tests
to have high power to distinguish the hypotheses:
H_0: = P, versus
H_1: = .
This use of a separate sample to construct a pilot estimate,
simplifies the design of the tests to invert considerably since now we can focus on
tests that have strong guarantees for distinguishing this simple null-versus-simple alternative. Indeed, universal inference uses
the likelihood-ratio test for distinguishing these hypotheses, resulting in tests ϕ_P of the form:
ϕ_P = 𝕀[ ℒ_0()/ℒ_0(P) > (P, ) ],
for a choice of the threshold (P, ) which ensures that the condition in (<ref>) is satisfied. Although it is possible to determine optimal thresholds in the likelihood-ratio tests above, this can be practically cumbersome since these thresholds depend on both the pilot estimate and the null
hypothesis P under consideration. The work of <cit.> further shows that a universal threshold = 1/α suffices to ensure the condition in (<ref>). To summarize,
one can view the universal inference confidence set (<ref>) as arising by inverting a family of likelihood-ratio tests designed to distinguish
each candidate distribution P from a pilot estimate .
We emphasize that the universal inference procedure, and its reinterpretation described above rely crucially on correct model-specification to ensure validity.
For instance, inverting a family of tests that satisfies (<ref>) is no longer meaningful when the model is misspecified.
However, the testing interpretation suggests that one might develop novel variants of the universal inference procedure which
are useful despite model-misspecification, by formulating appropriate robust hypotheses and designing robust tests for distinguishing them. We make these ideas
precise in Section <ref>.
§.§ Divergences
Throughout this paper, we make frequent use of different divergences between pairs of
probability distributions. We briefly introduce them here. We let P and Q be distributions defined on ℝ^d with
densities p and q with respect to a common dominating measure λ.
The Hellinger distance is defined as:
(P,Q) = 1/√(2)( ∫ (√(p) - √(q))^2 dλ)^1/2,
and the Kullback-Leibler (KL) divergence is defined as:
(P Q) =
∫(log p/q) dP, if P is dominated by Q,
∞, otherwise.
The family of density power divergences <cit.>, are defined for a parameter β≥ 0 as,
_β (P Q) =
∫{ q^1+β - ( 1+1/β) q^β p + 1/β p^1+β} d λ,
β > 0
(P Q),
β = 0
where the β = 0 case is defined by taking the limit β→ 0, which recovers the KL divergence.
Finally, the family of Integral Probability Metrics (IPMs) <cit.>,
are defined as
_ℱ(P, Q) = sup_f ∈ℱ| _P (f) - _Q (f) |
where ℱ is a symmetric class (i.e., f ∈ℱ implies -f ∈ℱ) of real-valued bounded measurable functions on the domain of P and Q.
Important special cases of IPMs include the Total Variation distance (TV, where ℱ is the collection of functions with sup-norm at most 1), the Wasserstein-1 distance (where ℱ is the collection of 1-Lipschitz functions) and
the Maximum Mean Discrepancy (MMD, where ℱ is the unit ball of a Reproducing Kernel Hilbert Space with kernel k).
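As a quick numerical illustration (ours, not from the paper), each of these divergences can be evaluated in closed form for a pair of discrete distributions on a common finite support; the vectors p and q below are arbitrary.

import numpy as np

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
kl = np.sum(p * np.log(p / q))           # finite here since q > 0 wherever p > 0
beta = 0.5                               # density power divergence tuning parameter
dpd = np.sum(q ** (1 + beta) - (1 + 1 / beta) * q ** beta * p + (1 / beta) * p ** (1 + beta))
tv = np.sum(np.abs(p - q))               # IPM over {f : sup-norm at most 1}, i.e. the L1 distance

print(hellinger, kl, dpd, tv)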
§ FAILURES OF UNIVERSAL INFERENCE
To provide some motivation and intuition for the methods we propose in this paper, it is useful to understand some of the failures of the universal inference framework
when the statistical model is misspecified, and the target of inference is the KL projection.
§.§ Unbounded Likelihood-Ratios
The behavior of likelihood-ratio based methods can be sensitive to the tail behavior of likelihood-ratios.
The following simple example illustrates that under model misspecification, universal inference can fail to cover the KL projection parameter. These pathologies
do not arise when the statistical model is correctly specified, and the challenges in this example arise due to an interplay between poorly behaved likelihood-ratios and model misspecification.
This example also serves to highlight the fact that the KL projection parameter can in some cases be an undesirable inferential target. We let (p) denote the Bernoulli distribution with parameter p.
Suppose we observe
X_1,…,X_n ∼ := (ϵ_n) for some 0 < ϵ_n < (1-α)/n.
We use the statistical model = {(p) : p ∈{0, 1/2 }}.
Suppose we consider the pilot estimator to be the MLE,
= _p ∈ℒ_1(p).
Then, for all sufficiently large n ≥ n_0 where n_0 only depends on α the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1,
fails to cover the KL projection at the nominal level.
The proof is in Appendix <ref>. The intuition is however clear. In this example the KL projection distribution is (1/2).
For ϵ_n ≪ 1/n, with high probability the samples X_1,…,X_n are all 0. Consequently, the MLE with high-probability
will be (0). Furthermore, the split sample likelihood ℒ_0 will be much higher for (0) than (1/2), and consequently (1/2)
will not be included in the universal set.
In this example likelihood-ratios are unbounded and as a consequence the KL divergence is an unstable function of the model parameters, i.e.
when ϵ_n = 0, ((ϵ_n) (0)) is 0, but is ∞ for any ϵ_n > 0. In such cases,
the finite-sample (log)-likelihood-ratio is a poor estimate of the population KL divergence, and this poses significant challenges for
finite-sample valid inference. From a practical standpoint, a more reasonable inferential target could be a different, stabler projection distribution
(e.g., the Hellinger or TV projection distribution) and we address this in Sections <ref> and <ref>.
§.§ Failure Despite Bounded Likelihood-Ratios
In the previous example it is clear that unbounded likelihood-ratios can result in pathologies which are challenging to address with finite-sample valid inference.
However, even when all likelihood-ratios in the model are well-behaved, universal inference can fail to cover the KL projection parameter. It is important to note that except under the stringent
condition that the underlying model is convex (see Section 6 of <cit.>), universal inference has no guaranteed coverage when the model is misspecified.
Suppose we obtain X_1,…,X_n ∼ := (0.5 + ϵ_n) for some small, positive 0 <ϵ_n ≤ c/n, where c > 0 is a small positive universal constant.
Our hypothesized model consists of two distributions, = {(p) : p ∈{1/4, 3/4 }}.
Suppose we take the pilot estimator to be the MLE (<ref>).
Then, for all sufficiently large n (depending only on α) the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1,
fails to cover the KL projection at the nominal level.
We give a formal proof in Appendix <ref>. The KL projection distribution is (3/4). We show that the pilot estimate with probability near 1/2 will
be the distribution (1/4), and further with probability near 1/2 the KL projection (3/4) will have a much smaller split sample likelihood than . As a direct consequence,
universal inference will fail to cover the projection distribution (3/4).
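The following Monte Carlo sketch (our illustration, with one arbitrary choice of n, ε_n and the number of replications) reproduces this failure numerically: the empirical coverage of the KL projection (3/4) falls visibly below the nominal 95% level.

import numpy as np

rng = np.random.default_rng(0)
n, alpha, eps = 2000, 0.05, 0.5 / 2000               # one illustrative choice of n and eps_n
model = np.array([0.25, 0.75])

def loglik(p, x):                                    # Bernoulli log likelihood
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

covered, reps = 0, 2000
for _ in range(reps):
    x = rng.binomial(1, 0.5 + eps, size=n)
    d1, d0 = x[:n // 2], x[n // 2:]                  # equal-sized split
    p_hat = model[np.argmax([loglik(p, d1) for p in model])]      # MLE pilot on D_1
    covered += loglik(0.75, d0) - loglik(p_hat, d0) >= np.log(alpha)
print(covered / reps)                                # empirical coverage, noticeably below 0.95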
In contrast to the previous example, this example is much less pathological. All the relevant likelihood-ratios are bounded, and the log-likelihood is a consistent
estimate of the KL divergence. However, even in this relatively benign example universal inference fails.
We show in Section <ref> that a simple modification to the universal inference procedure fixes this issue when the relevant likelihood-ratios are bounded, and ensures correct coverage.
In order to focus on the main issues, we have illustrated the failure of universal inference when the pilot estimator is the MLE. Indeed, part of the appeal of universal inference is that its
coverage guarantees hold, in the well-specified case for any pilot estimate (including the MLE).
Though we do not pursue this here, it is straightforward to extend these examples to show that both failures persist irrespective of
how the pilot is chosen, i.e. the failures of universal inference that we highlight are driven by the second stage (of constructing the confidence set) and not by the first stage (of constructing a reasonable pilot estimate).
These examples set the stage for the methodological development of the rest of the paper. To address problems of the first type we recommend targeting a different
projection parameter (for instance, the TV or Hellinger projection, in Sections <ref> and <ref>), and to address problems of the second type we develop methods which guarantee coverage
of the KL projection parameter when the likelihood-ratios are uniformly upper bounded or more generally have finite 2 + ξ moments for some ξ > 0 (see Section <ref>).
§ ROBUST UNIVERSAL INFERENCE
In this section, we present a simple but powerful pair of general results which yield exact and approximate universal confidence sets. The workhorse of these results
are tests of relative fit which we first briefly introduce before showing how these tests can be inverted to derive robust confidence sets.
§.§ Tests of Relative Fit
Suppose that we are given samples X_1,…, X_n ∼, together with a pair of candidate distributions (P_0, P_1) ∈^2, and a divergence measure ρ.
With this setup in place, we now consider a family of tests ϕ_P_0, P_1 to distinguish the hypotheses:
H_0: ρ( P_0) ≤ρ( P_1), versus
H_1: ρ( P_0) > ρ( P_1).
We refer to the tests ϕ_P_0, P_1 as exact tests of relative fit.
Notice that in contrast to the classical setting, where we hypothesize that one of the distributions (P_0, P_1) truly generated
the samples, in the misspecified setup
this assumption is no longer tenable.
Instead, we hypothesize that one of the distributions (P_0, P_1) is closer to the data generating distribution.
In general, the two hypotheses are no longer simple hypotheses and we need to take some care in designing the family
of tests ϕ_P_0, P_1. The design of tests of relative fit (and closely related variants) have a rich history and form the basis for a class of tournament-based robust estimators
<cit.>.
For divergences like the Total Variation and the Hellinger distance, designing exact tests of relative fit can require strong regularity conditions akin to those that would be required
to estimate these divergences. Surprisingly, in these cases, it is still possible to design approximate tests of relative fit
under weak regularity conditions. More formally, suppose that for some ν≥ 1, we can design a test for the following null hypothesis:
H_0: νρ( P_0) ≤ρ( P_1).
We refer to tests for this hypothesis as approximate tests of relative fit ϕ_P_0, P_1,ν. Under the null hypothesis, the distribution P_0 is closer than P_1 to by a factor ν≥ 1, which can
ease the design of valid tests for this hypothesis.
Robust tests for null hypotheses of the form in (<ref>) (for the Hellinger distance) were introduced by <cit.> and are discussed in detail in the work of <cit.>. In the context
of estimation these approximate tests yield what are known as non-sharp oracle inequalities. In the context
of inference, as we explore further in Section <ref>, inverting approximate relative fit tests will yield weaker guarantees.
In Section <ref> we consider the design of tests of relative fit in concrete settings, but now proceed to study the implications of designing such tests
for the construction of robust confidence sets.
§.§ Exact and Approximate Robust Universal Confidence Sets
We now propose to construct a confidence set by inverting a family of tests of relative fit. This is similar in spirit to
the procedure described in Section <ref>.
§.§.§ Exact Robust Universal Confidence Sets
Suppose, for every ∈, the family of tests of relative fit ϕ_P_0, P_1 is valid, i.e. it controls the Type I error:
_[ϕ_P_0, P_1(X_1,…,X_n) ]≤α, ∀ (P_0, P_1) ∈_0
where _0 = { (P_0, P_1) ∈^2: ρ( P_0) ≤ρ( P_1)}.
Then, for any fixed P_1 ∈, the confidence set we construct is the set of candidates P_0 which we fail to reject:
C_α,n≡ C_α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, P_1 (X_1,…,X_n) = 0 }.
The following result shows that irrespective of the choice of P_1 the above construction yields a valid confidence set for the projection distribution:
For any fixed P_1 ∈, C_α,n is a uniformly valid (1-α) honest confidence set for the projection .
For any ∈,
_(∉ C_α,n )
= _( ϕ_, P_1 = 1 )
= _( ϕ_, P_1 ) ≤α
using (<ref>) since (, P_1) ∈_0 for any choice of P_1 ∈.
As in the well-specified case discussed earlier, this general result does not provide any guidance on how to choose P_1.
We follow the idea of universal inference and first construct an accurate estimate of from a separate sample _1 and then construct the family of split tests of relative fit ϕ_P_0, from the remaining samples _0. We call the resulting confidence set the exact Robust Universal Confidence set:
C_α,n≡ C_α (X_1,…, X_n) := {P_0∈: ϕ_P_0, (_0) = 0}.
Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n is a uniformly valid confidence set for , meaning that
inf_∈_ (∈ C_α, n) ≥ 1 - α.
The proof is straightforward noticing that conditional on _1, the claim reduces to the claim of Proposition <ref>. Concretely, for any ∈,
_ (∉ C_α, n)
=_ (ϕ_, )
= __1[ __0(ϕ_, (_0) | _1) ]
≤__1 (α) = α.
The robust confidence set will often contain both the pilot estimate as well as the projection distribution
(see Proposition <ref> in Appendix <ref> for a formal statement). This is similar to the classical universal inference procedure which in the well-specified case will often contain both the
pilot estimate and the true sampling distribution. In universal inference this suggests that in order to obtain small confidence sets,
we should aim to design to be a good estimate of the true sampling distribution . On
the other hand in the misspecified case, this suggests that we should design to be a good estimate of the projection . Specifically, our pilot estimate should
be tailored to the divergence measure ρ. We investigate the choice of and its effect on the size of the resulting confidence set further in Section <ref>.
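Schematically, the exact robust universal confidence set only requires a data split, a pilot-fitting routine and a valid test of relative fit. The sketch below is our illustration of this recipe; the candidate set, the pilot-fitting routine and the test ϕ are user-supplied placeholders rather than objects specified by the paper.

import numpy as np

def robust_universal_set(x, candidates, fit_pilot, phi, alpha=0.05, seed=1):
    # candidates : iterable of candidate distributions P0 from the working model
    # fit_pilot  : callable mapping D_1 to a pilot estimate (ideally close to the projection)
    # phi        : callable (D_0, P0, pilot, alpha) -> 0/1, a valid level-alpha test of the
    #              null hypothesis that P0 fits the sampling distribution at least as well as the pilot
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[:len(x) // 2]], x[idx[len(x) // 2:]]
    pilot = fit_pilot(d1)
    return [p0 for p0 in candidates if phi(d0, p0, pilot, alpha) == 0]

Plugging in an exact test of relative fit gives the coverage guarantee of the theorem above; plugging in an approximate test instead gives the approximate guarantee developed in the next subsection.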
§.§.§ Approximate Robust Universal Confidence Sets
In some cases, e.g., for the Hellinger distance and the TV distance, designing exact robust tests will require some (potentially strong) regularity conditions.
However, in these cases one can design approximate tests of relative fit straightforwardly.
Suppose that, for any ∈, the family of approximate tests of relative fit ϕ_P_0, P_1, ν controls the Type I error, i.e. satisfies (<ref>) with _0 = { (P_0, P_1) ∈^2 : νρ( P_0) ≤ρ( P_1)} for some ν≥ 1. We will additionally make the mild assumption that our tests of relative fit do not reject (with probability at least 1-α) when comparing the relative fit of a distribution to itself, i.e.:
sup_∈_ [ϕ_P,P,ν] ≤α for any fixed P∈.
This condition will be true for all the tests we introduce in Section <ref>.
Let be any estimate of from _1.
Then, the approximate robust universal confidence set, akin to (<ref>), is obtained by inverting the family of valid split tests ϕ_P_0, , ν constructed from the remaining samples _0:
C_ν,α,n≡ C_ν,α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, , ν (_0) = 0 }.
This confidence set may not cover the projection distribution . We will relax our goal to instead be to
cover an approximate projection distribution.
More formally, we relax the target of inference to be the ν-approximate projection set _ν defined as
_ν = {P∈: ρ( P) ≤νρ() }.
If a set C is a ν-approximate confidence set, we define its coverage by
_(Q∈ C for some Q ∈_ν) =
_(_ν∩ C ≠∅).
Figure <ref> shows a schematic diagram to illustrate the notion of approximate coverage. When ν = 1, i.e. we invert an exact test, we guarantee that with probability at least 1 - α, the set
C_ν,α,n contains . On the other hand, when ν > 1 we only guarantee that the intersection of C_ν,α,n with the collection of ν-approximate projections (in cyan) is non-empty.
The set _ν is a collection of distributions that are as close to as (up to a factor ν). The approximate confidence set guarantee
is most meaningful when ν is close to 1, or when the model misspecification is not too extreme, i.e. ρ() is small.
Let ∈ be any estimate of based on _1. Suppose that our approximate relative fit tests are valid, and satisfy the condition in (<ref>).
Then, the approximate robust universal confidence set C_ν,α,n is a uniformly valid ν-approximate confidence set for :
inf_∈_ (_ν∩ C_ν,α, n∅) ≥ 1 - α.
Fix any ∈. Let the event E = {∈_ν}. On the event E, (<ref>) implies
_ (∉ C_ν,α,n | E) = __1 (__0 (ϕ_, ,ν (_0) | _1, E) | E) ≤α.
On the complement of E, i.e., ∉_ν, _0 contains (, ). Thus, an analogous argument to that in the proof of Theorem <ref> can be used.
Combining the two results, we obtain that, for all ∈,
_ (_ν∩ C_ν,α, n = ∅)
≤_ (∉ C_ν,α, n | E) (E) + _ (∉ C_ν,α, n | E^∁) (E^∁)
≤α.
As in the construction of the exact robust universal confidence set, one should aim to choose the pilot estimate as close as possible to .
In the exact setting, the choice of the pilot estimate does not affect the validity of the resulting set and only affects its size. However, in
constructing an approximate robust universal set,
if we can ensure the pilot is accurate, then our approximate validity guarantees improve. Concretely, for some sequence κ_n we define:
(κ_n) := {P∈ : ρ( P) ≤ρ() + κ_n}.
If we can ensure that the pilot estimate is contained in (κ_n) with probability at least 1 - β for some sequence κ_n,
then we can show that the constructed confidence set C_ν, α,n will intersect (κ_n) with high probability. For instance, if κ_n → 0
as n grows, then rather than simply intersecting the set of approximate projections _ν, we can now show that C_ν,α,n intersects a shrinking neighborhood
around . More formally we have the following result (we omit its proof since it follows the same arguments as in Theorem <ref>):
Let (κ_n_1) be defined as in (<ref>), and suppose that our pilot is accurate, i.e. we can ensure that with probability at least 1 - β, ∈(κ_n_1).
Suppose further that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then:
inf_∈_((κ_n_1) ∩ C_ν,α, n∅) ≥ 1 - α - β.
In this section, we have shown that inverting exact or approximate tests of relative fit yield robust exact or approximate confidence sets despite model-misspecification. We now turn our attention
to the design and analysis of these tests.
§ DESIGNING TESTS OF RELATIVE FIT
Our proposed method relies on designing valid tests of relative fit.
In this section,
we design exact tests of relative fit in KL and the density power divergences, and design approximate tests
for the Hellinger, TV and IPM-based divergences.
§.§ Kullback-Leibler Divergence
To design an exact test of relative fit for the KL divergence we make a simple observation
that there is a natural plug-in estimator of the difference in KL divergences. We can rewrite
the difference in KL divergences as:
( P) - () =
∫logp_1/p
where p and p_1 are the density of P and with respect to a common dominating measure. When
we obtain samples from this suggests the following
log split likelihood ratio test:
ϕ_P = [ 1/n_0∑_i∈_0 T_i (P,) > t_α (P, ) ],
T_i(P, ) ≡ T(X_i; P, ) = logp_1 (X_i)/p (X_i),
where _0 is an index set of _0 and t_α (P, ) is chosen to ensure validity. This test
was called the relative information fit test (RIFT) and studied in the work of <cit.> to study the relative
goodness-of-fit of two candidate estimates. In our paper,
we invert the same test in order to construct a robust universal confidence set.
When the variance of T_i(P, ) (conditional on 𝒟_1) is finite, we can derive the asymptotic
distribution (conditional on 𝒟_1)
of the log split likelihood ratio via the CLT.
Let T_n_0 (P,) = ∑_i∈_0 T_i(P, ) / n_0.
Conditional on _1 and assuming that the variance _ [T(P_0, P_1)] < ∞, for any (P_0,P_1) ∈^2,
√(n_0)( T_n_0 (P,) - _ T (P, ) ) ⇝(0, s_P^2 )
as n_0 →∞
where s_P^2 ≡ s_P^2 (_1) = _ [T_1^2] - _ T_1^2 can be estimated by ŝ_P^2 = 1/n_0∑_i∈_0 (T_i(P, ) - T_n_0)^2,
and ⇝ denotes convergence in distribution (conditional on 𝒟_1).
When assessing distributions P that are very similar to the pilot , it might be the case that s_P^2 is vanishingly small. Consequently, it is possible that ŝ_P/s_P does
not converge in probability to 1, and the CLT with estimated variance ŝ_P^2 need not hold. Following <cit.>
we modify each T_i(P,) by adding a small amount of independent Gaussian noise, i.e. we replace each T_i(P, ) above by T_i(P, ) + δ Z_i where Z_1,…,Z_n_0∼ N(0,1),
for some small positive constant δ > 0 (we use δ = 0.01 but note that this has no practical effect and this modification simply eases the theoretical analysis). We denote the resulting
statistic by T_n_0,δ(P, ) and the corresponding empirical standard deviation by s_P,δ.
Then, we define the KL Relative Divergence Fit () set as
_, n≡_α, n () = {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)}
where z_α is the 1-α quantile of the standard normal distribution. The following result provides asymptotic and non-asymptotic guarantees for the set _, n.
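As an illustration of this construction, a minimal Python sketch of the set is given below; it assumes the candidate distributions and the pilot are scipy-style frozen distributions with a logpdf method, and the candidate grid and noise level δ are illustrative choices rather than prescriptions.

import numpy as np
from scipy.stats import norm

def kl_rdf_set(candidates, pilot, D0, alpha=0.05, delta=0.01, seed=0):
    # KL relative-divergence-fit set: keep P whenever the studentized split
    # log-likelihood-ratio statistic T_i = log p1(X_i)/p(X_i), perturbed by
    # delta-scaled Gaussian noise, falls below z_alpha * s_hat / sqrt(n0).
    rng = np.random.default_rng(seed)
    D0 = np.asarray(D0)
    n0 = len(D0)
    z_alpha = norm.ppf(1.0 - alpha)
    kept = []
    for P in candidates:
        T = pilot.logpdf(D0) - P.logpdf(D0) + delta * rng.standard_normal(n0)
        if T.mean() <= z_alpha * T.std(ddof=1) / np.sqrt(n0):
            kept.append(P)
    return kept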
Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P_0,P_1 := _ |T(X; P_0, P_1) - _T(X; P_0, P_1)|^2+ξ are finite, for any (P_0,P_1) ∈^2, then
inf_∈_ (∈_, n) ≥ 1 - α - C n^-ξ/2,
where C < C' (1 + sup_(P_0,P_1) ∈𝒫^2 M_P_0,P_1) /δ^(2+ξ) for a universal constant C'.
We give a formal proof in Appendix <ref>. The claim follows as a consequence of the Berry-Esseen bound for the studentized statistic <cit.>. Some care is required
to handle the degeneracy (discussed above) when the variance of the summands can be small and to handle the randomness in the pilot estimate .
We can now revisit the failures of universal inference discussed in Section <ref>. Recall that Example <ref> illustrates the instability of the KL projection because likelihood ratios may not be bounded.
The KL set does not resolve this weakness since the KL set uses the same split likelihood ratio statistic as for the universal confidence set <cit.> and its 2 + ξ
moment is not uniformly bounded in Example <ref>. However, the KL set does resolve the failure highlighted in Example <ref>.
Assume the same model as in
Example <ref>. Suppose we take the pilot estimator to
be the MLE. The KL set (<ref>), with an equal sized split into _0 and _1, covers the KL projection at the nominal level asymptotically.
This result follows directly from Theorem <ref>, since in this example all of the relevant log likelihood ratios are uniformly upper bounded.
It is worth noting that both the standard universal set, and the set _, n are based on essentially the same split likelihood ratio statistic,
and it is perhaps surprising that the standard universal set fails but _, n succeeds in guaranteeing coverage.
Despite being based on the same statistic, the two sets use very different thresholds. It is easy to see that one can rewrite the split
LRT confidence set in universal inference <cit.> as:
_sLRT= {P∈ : T_n_0 (P,) ≤log (1/α)/n_0}.
The threshold used in (non-robust) universal inference decays at the fast rate of order O(1/n_0) compared to that of the robust universal confidence set _, n
whose threshold decays at the rate O(1/√(n_0)). When the model is misspecified the (non-robust) universal set shrinks too rapidly leading to the failure highlighted in Example <ref>.
The confidence set _, n is constructed by approximating the distribution of the test statistic in (<ref>).
When likelihood ratios are uniformly upper bounded it is straightforward to construct finite-sample valid sets via an exponential tail bound.
For example, the finite-sample exact robust universal confidence set based on the Hoeffding bound is:
_HF,B,n = {P∈ : T_n_0 (P,) ≤ B√(log(1 / α)/2n_0)},
where B is such that |T_i (P_0, P_1) - 𝔼_ T(P_0,P_1)| ≤ B for all (P_0,P_1)∈^2. In this case we assume that the
upper bound B is known to the statistician. One can generalize this construction in various ways. When the statistic is assumed to only have finite
variance one can use Chebyshev's inequality to construct a finite-sample valid set. When in addition to boundedness the statistic might have small variance
one can use empirical Bernstein-type inequalities to construct finite-sample valid confidence sets. We explore these further in Appendix <ref>.
We compare the empirical performance of _, n and these finite-sample valid sets in Section <ref>.
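For comparison, the Hoeffding-based finite-sample set only changes the threshold; a minimal sketch is given below, where the bound B on the split statistic must be supplied by the user and the frozen-distribution interface is again an assumption.

import numpy as np

def hoeffding_rdf_set(candidates, pilot, D0, B, alpha=0.05):
    # Finite-sample set: accept P when the average split log-likelihood-ratio
    # statistic lies below B * sqrt(log(1/alpha) / (2 n0)).
    D0 = np.asarray(D0)
    thr = B * np.sqrt(np.log(1.0 / alpha) / (2.0 * len(D0)))
    return [P for P in candidates
            if np.mean(pilot.logpdf(D0) - P.logpdf(D0)) <= thr]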
§.§ Density Power (DP) Divergences
We can construct an exact test of relative fit for the family
of DP divergences following the same strategy as in the KL case.
Let
T_n_0(P, )
= _β (_n_0 P) - _β (_n_0)
= ∫{ p^1+β - p_1^1+β}λ̣- ( 1+1/β) 1/n_0∑_i∈_0[ p^β - p_1^β] (X_i)
:= 1/n_0∑_i∈_0 T_i(P, ),
where _n_0 is the empirical measure constructed from _0.
The split statistics T_i(P, ) encode the difference in average β-powered densities (penalized with L_1+β norm) rather than the log-likelihood ratio evaluated on the sample _0 when β > 0.
Then, conditional on 𝒟_1, _ T(P,) = _β ( P) - _β (). We define the DP set _,n exactly as in (<ref>),
and observe that the analogue of Theorem <ref> holds (with an identical proof) for _,n.
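A sketch of the per-observation DP split statistics for univariate continuous densities follows; the integral term is approximated by numerical quadrature over a user-supplied interval, and the function names are illustrative. The resulting per-observation values can then be fed into the same studentized or Hoeffding-type thresholds as in the KL case.

import numpy as np
from scipy.integrate import quad

def dp_split_statistics(P, pilot, D0, beta=0.5, support=(-30.0, 30.0)):
    # Per-observation DP statistics T_i(P, pilot); their average estimates
    # DP_beta(P* || P) - DP_beta(P* || pilot) when averaged over draws from P*.
    D0 = np.asarray(D0)
    integral, _ = quad(
        lambda x: P.pdf(x) ** (1.0 + beta) - pilot.pdf(x) ** (1.0 + beta),
        support[0], support[1])
    return integral - (1.0 + 1.0 / beta) * (P.pdf(D0) ** beta - pilot.pdf(D0) ** beta)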
Recall that the KL set was unable to resolve the instability problem
in Example <ref>. This is because the likelihood ratios in this model can blow up. On the other hand, the DP set relies on the statistics in (<ref>), which are bounded for any β > 0, provided the relevant
densities are well-defined.
Formally, we have the following result:
Suppose we have the same model as in
Example <ref>.
For sufficiently large n, for any pilot estimator , the DP set _,B,n defined as in (<ref>) with B=1 + 1/β, with an equal sized split into _0 and _1, covers the DP projection at the nominal level.
A formal proof can be found in Appendix <ref>. The key observation is that the DP projection is (0) for a sufficiently large sample size for any fixed β > 0. The DP projection in this example is more stable than the KL projection (1/2), considering that ϵ_n is much closer to 0 than 1/2.
Consequently, we show that the DP set will cover the target of inference (0) with high probability. We emphasize that the MLE is also (0) with high probability, yet both universal split LRT and KL set based on the MLE fail to cover the KL projection due to the instability of the population projection distribution.
§.§ Hellinger Distance
The Hellinger distance (or the difference in Hellinger distances) does not lend itself to a natural plug-in estimator. The usual method of estimating the Hellinger distance proceeds instead via some type of non-parametric density estimation, which in turn
requires additional smoothness assumptions. Since our goal in this paper is to design assumption-light methods, we instead relax the target of inference. This in turn opens the door for designing approximate tests of relative fit.
Our strategy will be to modify the ρ-estimator[The name “ρ-estimator” comes from the standard symbol used for the Hellinger affinity.] <cit.>
which is a density estimator tailored to the Hellinger loss.
Define the split ρ-test statistic
T_n_0 (P, ) := Δ (P, ) + 1/n_0∑_i∈_0ψ( √(p_1/p) (X_i) ),
Δ (P, ) = 1/√(2)[^2(P, P̄) - ^2(, P̄) ],
where P̄ = (P + ) / 2 and ψ: [0,∞] ↦ [-1,1] is a non-decreasing Lipschitz function satisfying ψ (x) = - ψ (1/x).
The choice of ψ we adopt throughout this paper, is to take ψ(u) = (u-1)/√(1+u^2) which
comes from work on the ρ-estimator <cit.>.
The function
ψ is a bounded transformation of the likelihood ratio, and due to this boundedness the split ρ-test statistic is tightly concentrated around its expectation.
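The split ρ-statistic can be computed directly from the densities; the following sketch mirrors the display above, with the quadrature interval and the frozen-distribution interface being assumptions.

import numpy as np
from scipy.integrate import quad

def psi(u):
    # Bounded transform of the likelihood ratio with psi(x) = -psi(1/x).
    return (u - 1.0) / np.sqrt(1.0 + u ** 2)

def hellinger_sq(p, q, support=(-30.0, 30.0)):
    # Squared Hellinger distance between two density functions via quadrature.
    val, _ = quad(lambda x: (np.sqrt(p(x)) - np.sqrt(q(x))) ** 2, support[0], support[1])
    return 0.5 * val

def split_rho_statistic(P, pilot, D0, support=(-30.0, 30.0)):
    # Delta(P, pilot) plus the average of psi(sqrt(p1/p)) over D0, with
    # Pbar the equal mixture of P and the pilot.
    D0 = np.asarray(D0)
    pbar = lambda x: 0.5 * (P.pdf(x) + pilot.pdf(x))
    delta = (hellinger_sq(P.pdf, pbar, support)
             - hellinger_sq(pilot.pdf, pbar, support)) / np.sqrt(2.0)
    return delta + np.mean(psi(np.sqrt(pilot.pdf(D0) / P.pdf(D0))))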
The following proposition, which follows directly from Proposition 11 of <cit.>, characterizes the expectation of the split ρ-statistic.
For any P^*, P_0, P_1,
(2 + √(2)) _T_n_0 (P_0,P_1) ≤(3 + 2√(2)) ^2 (, P_0) - ^2 (, P_1).
This proposition ensures that _T_n_0(P_0, P_1) is negative for any ∈ when the null hypothesis H_0 : (3+2√(2)) ^2 (, P_0) ≤^2 (, P_1) is true. This proposition in
turn suggests that T_n_0(P_0, ) could be a useful statistic for designing an approximate test of relative fit in the Hellinger distance with ν = √(3+2√(2)).
We define the Hellinger Relative Distance fit () set _,n exactly analogous to the
KL set (<ref>) (obtained from a δ-corrupted version of the statistics T_n_0(P, )).
The following result follows by combining Theorems <ref> and <ref>, and noticing that the split statistic is uniformly upper bounded.
Let ν = √(3 + 2√(2)). For any 𝒬,
inf_∈_ (_ν∩_, n∅) ≥ 1 - α - C/√(n),
where C < C'/δ^3 (for a universal constant C').
We are now in a position to revisit Example <ref>. In Proposition <ref>, we showed that changing the target of inference to DP projection could address the failure of universal inference.
In a similar vein, targeting the Hellinger projection resolves the failure, but interpreting the resulting guarantee requires some nuance, as the set may not cover the exact Hellinger projection and is only guaranteed to cover
a ν-approximate projection.
In the case of Example <ref>, it will turn out that for sufficiently small values of ϵ the ν-approximate Hellinger projection set is a singleton (and equal to the exact Hellinger projection). As highlighted earlier, when
the amount of model-misspecification is not too large the distinction between the ν-approximate projection set and the exact projection can be small.
Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the Minimum Hellinger Distance estimator <cit.>,
= _P ∈ (_n_1 P).
For sufficiently large n (> 20), the Hellinger set _,n, with an equal sized split into _0 and _1, covers the Hellinger projection ≡(0) at the nominal level asymptotically.
A formal proof is provided in Appendix <ref>. It will turn out that in this example
the ν-approximate Hellinger projection is exactly the Hellinger projection when ϵ≤ 0.05, and is the entire model , otherwise. This means that for larger values of ϵ, approximate validity is trivial, yet vacuous, as the target of inference can be any distribution in . This highlights the downside of targeting the ν-approximate projection set: when the model-misspecification is severe the resulting guarantees might be vacuous.
§.§ Integral Probability Metrics (IPMs)
Our proposal for a ν-approximate test of relative fit for IPMs is inspired by the work of <cit.> and <cit.>, where
a similar idea was used to design robust density estimates. Recall the definition of the IPM,
_(P_0, P_1) = sup_f ∈( _P_0 (f) - _P_1 (f) ).
Associated with any pair of distributions is a so-called witness function f^*_(P,Q) = arg sup_f ∈ ( _P (f) - _Q (f) ), which
witnesses the largest mean discrepancy between the two distributions.
The split test statistic is then defined by:
T_n_0 (P, ) = ∫ f^*_(P, ) d( P + /2) - 1/n_0∑_i ∈_0 f^*_(P, ) (X_i).
The usefulness of this statistic is highlighted by the following characterization of the expectation of the statistic.
For any P^*, P_0, P_1,
2 _ T (P_0,P_1) ≤ 3 (, P_0) - (, P_1).
See Appendix <ref> for a formal proof. For the TV IPM this result appears in the work of <cit.> and <cit.>, and our result generalizes their argument
to other IPMs. Proposition <ref> ensures that _ T(P,Q) is negative for all ∈ under the null hypothesis in (<ref>) with ν=3.
We can construct _ by inverting the IPM approximate relative fit test, to obtain an identical guarantee to the one in Corollary <ref> (now with ν = 3).
To further illustrate the construction of IPM approximate relative fit tests we consider three widely used IPMs—total variation distance, Wasserstein distance, and maximum mean discrepancy—where the witness functions are more explicit.
Total Variation Distance.
Suppose ρ(P_0 P_1) = (P_0, P_1) where is the total variation distance. This is an IPM over the function class = {f : f≤ 1}. An equivalent definition is (P_0, P_1) = sup_A | P_0 (A) - P_1(A) | = P_0 () - P_1 () where = {p_0 > p_1} is the Yatracos set with maximal discrepancy between P_0 and P_1. The witness function is f^*_(P_0, P_1) (x) = (x ∈) - 1/2. An immediate advantage of targeting the
TV projection comes from the fact that f^* is uniformly bounded.
Given samples , consider the following test statistic, which is referred to as the split Scheffé statistic:
T_n_0 (P,) = P () + ()/2 - _n_0(), _n_0 () = 1/n_0∑_i∈_0 (X_i ∈)
where is redefined to be = {p > p_1}.
The split Scheffé statistic, as the name suggests, is a sample-split analogue of the Scheffé estimate that was originally proposed in <cit.> building upon the work of <cit.>.
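A sketch of the split Scheffé statistic follows; here the probabilities P() and () are approximated by Monte Carlo draws from the two candidate distributions, which is an implementation convenience of this sketch rather than part of the method.

import numpy as np

def split_scheffe_statistic(P, pilot, D0, n_mc=100000, seed=0):
    # Split Scheffe statistic for the TV distance: A = {p > p1} is the set with
    # maximal discrepancy; compare (P(A) + pilot(A))/2 with its empirical
    # frequency on D0.  P(A) and pilot(A) are approximated by Monte Carlo.
    D0 = np.asarray(D0)
    in_A = lambda x: P.pdf(x) > pilot.pdf(x)
    P_A = np.mean(in_A(P.rvs(size=n_mc, random_state=seed)))
    pilot_A = np.mean(in_A(pilot.rvs(size=n_mc, random_state=seed + 1)))
    return 0.5 * (P_A + pilot_A) - np.mean(in_A(D0))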
Wasserstein Distance.
Suppose ρ(P_0 P_1) = _1 (P_0, P_1) is the 1-Wasserstein distance (or Kantorovich metric). The associated function class is = {f: Lf≤ 1 } where Lf := sup{ |f(x) - f(y) | / x - y : x y } is the Lipschitz semi-norm.
Although the ideas are much more general, we limit our discussion to univariate distributions on a compact support, i.e., = [0,b]. In this case, the witness function is explicit and easy to describe <cit.>.
Define t; P_0, P_1 = ( F_P_1(t) > F_P_0 (t) ) - ( F_P_0 (t) > F_P_1 (t) ) ∈{0, ± 1 },
where F_P denotes the CDF of P.
The witness function is
f^*_(P_0, P_1) (x) = ∫_0^xt; P_0, P_1ṭ <cit.>.
A direct application of the split statistic (<ref>) yields
T_n_0 (P,) = 1/2∫t; P, ( _n_0 (t) - F_P (t) + F_ (t)/2) ṭ,
where _n_0 (t) = 1/n_0∑_i∈_0(X_i ≤ t) is the empirical distribution. This particular split statistic is a sample-split analogue of the ℓ-estimator <cit.>.
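A sketch of this split statistic for univariate distributions on a compact interval is given below, evaluated on a user-supplied grid of t values; the grid and names are assumptions, and the integral is approximated by the trapezoidal rule.

import numpy as np

def w1_split_statistic(P, pilot, D0, grid):
    # Split statistic for the 1-Wasserstein IPM on a compact interval, using
    # the CDF-based witness from the display above, evaluated on `grid`.
    D0 = np.asarray(D0)
    grid = np.asarray(grid)
    F_emp = np.array([np.mean(D0 <= t) for t in grid])   # empirical CDF on D0
    F_P, F_pilot = P.cdf(grid), pilot.cdf(grid)
    sgn = np.sign(F_pilot - F_P)                         # +1 / 0 / -1 indicator term
    integrand = sgn * (F_emp - 0.5 * (F_P + F_pilot))
    # Trapezoidal integration over the grid.
    return 0.5 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))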
Maximum Mean Discrepancy.
Suppose that is a unit ball of the reproducing kernel Hilbert space (RKHS) ,
with kernel k(x,y) and RKHS norm ‖·‖_ℋ,
i.e., = {f: ‖ f ‖_ℋ≤ 1}. Then the corresponding IPM (<ref>) is called the Maximum Mean Discrepancy <cit.>. It was shown by <cit.> that the analytic witness function is f^*_(P, ) = (μ_P - μ_)/‖μ_P - μ_‖_ℋ, where μ_P(·) := 𝔼_P [k(X,·)] is the mean embedding of P.
The split statistic T_n_0 (P, ) in this case
reduces to an average of the (negative) witness function - _n_0 (f^*_(P, ) ) if the kernel k(·,·) is symmetric. In this case, the sign of the split statistic captures, in expectation, whether the population is closer to P or based on mean embeddings.
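A sketch of the MMD split statistic with a Gaussian kernel follows; the mean embeddings and the RKHS norm are approximated from Monte Carlo draws, and the bandwidth and sample sizes are illustrative choices.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    x, y = np.atleast_1d(x), np.atleast_1d(y)
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2.0 * bandwidth ** 2))

def mmd_split_statistic(P, pilot, D0, n_mc=2000, bandwidth=1.0, seed=0):
    # MMD split statistic: the witness is the (normalized) difference of mean
    # embeddings, averaged under the mixture (P + pilot)/2 and under D0.
    xP = P.rvs(size=n_mc, random_state=seed)
    xQ = pilot.rvs(size=n_mc, random_state=seed + 1)
    D0 = np.asarray(D0)

    def embed_diff(t):   # (mu_P - mu_pilot)(t), estimated from the draws
        return (gaussian_kernel(t, xP, bandwidth).mean(axis=1)
                - gaussian_kernel(t, xQ, bandwidth).mean(axis=1))

    # Estimated RKHS norm of mu_P - mu_pilot, used to normalize the witness.
    norm_sq = (gaussian_kernel(xP, xP, bandwidth).mean()
               - 2.0 * gaussian_kernel(xP, xQ, bandwidth).mean()
               + gaussian_kernel(xQ, xQ, bandwidth).mean())
    scale = np.sqrt(max(norm_sq, 1e-12))
    mixture_term = 0.5 * (embed_diff(xP).mean() + embed_diff(xQ).mean())
    return (mixture_term - embed_diff(D0).mean()) / scale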
§.§ Unified Sufficient Conditions for any Divergence Measure
In this section we unify some of the treatment of the previous sections by giving conditions on split test statistics which ensure
the exact and approximate validity of the resulting confidence sets.
Given data , we consider tests of the form:
ϕ_P_0, P_1, ν = ( T_n(P_0,P_1) > t_α(P_0,P_1)).
We assume that the test statistic satisfies the following two additional conditions:
T is anti-symmetric, i.e.,
T(X; P_0, P_1) = - T(X; P_1, P_0) for all P_0, P_1 ∈.
There exist fixed positive numbers ν, c_1 ≥ 1 such that for all ∈, and any fixed P_0, P_1 ∈,
c_1 _ T (; P_0, P_1) ≤νρ ( P_0) - ρ ( P_1).
Assumption <ref> ensures that _ T (; P_0, P_1) is always negative for all ∈ when the null hypothesis (<ref>) is true. For instance, Propositions <ref> and <ref> establish the analogue of Assumption <ref> for Hellinger and IPM projection, respectively.
Now, we may define ρ-set _ρ,n as in KL set (<ref>) by inverting the test based on (a δ corrupted version of) the statistic T:
_ρ, n := {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)}
If the test statistic is bounded, i.e. T(X;P_0,P_1) ≤ B for any pair of distributions P_0,P_1 ∈𝒫^2 then
we can define the finite-sample ρ-set as in (<ref>):
_ρ,B,n = {P∈ : T_n_0 (P, ) ≤ B√(log(1 / α)/2n_0)}
The following general result holds:
Suppose that the test statistic satisfies Assumptions <ref> and <ref>.
* Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P,Q := _ |T(X; P, Q) - _T(X; P, Q)|^2+ξ are finite, for any (P,Q) ∈^2, then
inf_∈_ (∈_ρ, n) ≥ 1 - α - C n^-ξ/2,
where C < C' (1 + sup_P,Q M_P,Q) /δ^(2+ξ) for a universal constant C'.
* Suppose that T(X; P,Q) ≤ B, then:
inf_∈_ (_ν∩_ρ,B, n∅) ≥ 1 - α.
The proof of the validity claims follows the same structure as the proof of Theorem <ref>. The crucial Assumption <ref> distills out the key property of the test statistics that is useful in ensuring asymptotic or
finite-sample validity. With these general validity results in place, we now turn our attention to studying the size of the resulting robust universal sets.
§ SIZE OF ROBUST UNIVERSAL CONFIDENCE SETS
In the well-specified setting, for statistical models which satisfy classical regularity conditions, <cit.> showed that the Hellinger diameter of the split LRT confidence set depends on
two factors: the size of determined by its (local) Hellinger bracketing entropy, and the closeness of to in the Hellinger distance. In a similar vein, in this section we show that
the size of the universal sets, under certain regularity conditions, can be upper bounded by two factors: roughly, the quality of the pilot estimate and the size of the statistical model.
In the misspecified setting, we would like the robust universal set to shrink around its target at a fast rate.
To measure the (directed) divergence between two sets, in a divergence ρ and with respect to the true distribution outside of the model, we define the ρ_^-divergence, motivated by the directed Hausdorff distance.
For a given divergence ρ and a collection of distributions S_1 ⊂, we define an ϵ-fattening of S_1 by:
S_1 ⊕ϵ := ∪_Q ∈ S_1{P ∈ : ρ ( P) ≤ρ ( Q) + ϵ}.
Now given two collections of distributions S_0, S_1 ⊂, we define the ρ_^-divergence by
ρ^_ (S_0, S_1) = inf{ϵ≥ 0 : S_0 ⊆ S_1 ⊕ϵ}.
ρ^_ (S_0, S_1) is the minimum ϵ-fattening of S_1 with reference to which contains S_0.
To express the rate at which the robust universal sets shrink,
we use the Rademacher complexity of ℱ_T, 𝒫, a function class which depends on the test statistic of choice, and the statistical model 𝒫. Concretely, we define,
ℱ_T, 𝒫 := {f: f(x) := T(x; P,Q), P,Q ∈𝒫}.
We denote the Rademacher complexity of this class by ℜ_n(ℱ_T, 𝒫):
ℜ_n(ℱ_T, 𝒫) := 𝔼[ sup_f ∈ℱ_T, 𝒫1/n∑_i=1^n R_i f(X_i)],
where R_i are i.i.d. Rademacher random variables.
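The empirical Rademacher complexity of ℱ_T, 𝒫 can be approximated by Monte Carlo over a finite grid of candidate distributions; a sketch is given below, assuming a statistic function that returns the per-observation values T(X_i; P, ).

import numpy as np

def empirical_rademacher(candidates, pilot, data, statistic, n_rounds=200, seed=0):
    # Monte Carlo estimate of the empirical Rademacher complexity of
    # {x -> T(x; P, pilot) : P in candidates} on the observed data.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    # values[j, i] = T(X_i; P_j, pilot) for every candidate P_j.
    values = np.stack([np.asarray(statistic(P, pilot, data)) for P in candidates])
    sups = [np.max(values @ rng.choice([-1.0, 1.0], size=n)) / n
            for _ in range(n_rounds)]
    return float(np.mean(sups))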
In some of the cases we have considered in this paper, under additional regularity conditions the complexity measure
ℜ_n(ℱ_T, 𝒫), can be related to a complexity measure of the underlying model 𝒫 using a standard contraction argument <cit.>:
Suppose that , and the pilot estimate are distributions supported on some compact set 𝒞, with densities with respect to the Lebesgue measure that are upper and lower bounded by constants.
Then, for the test statistics introduced in Sections <ref>,<ref> and <ref>, ℜ_n(ℱ_T, 𝒫) ≲ℜ_n(𝒫).
Finally, to characterize the quality of the pilot estimator , we say that the
is an η_n-consistent estimator if
ρ () - ρ() = O_ (η_n),
where we use the standard big O in probability notation to indicate stochastic boundedness.
With these preliminaries in place, we have the following result for the size
of the ρ-set obtained by inverting a finite-sample valid relative fit test. The proof will be given in Appendix <ref>.
Suppose that (<ref>) holds and sup_(P, Q)∈^2 |T(P, Q) - 𝔼 T(P,Q)| ≤ B.
Fix any projection distribution , and recall the collection _ν in (<ref>).
Then the robust universal confidence set _ρ,B,n in (<ref>), for an equal sized split into 𝒟_0 and 𝒟_1,
satisfies for any ∈,
ρ_^( _ρ,B,n, _ν) ≤ O_( η_n + ℜ_n(ℱ_T, 𝒫) + B√(log(1/α)/n)).
Theorem <ref> states that the directed ρ_^-divergence between the exact robust universal confidence set and its target shrinks to zero at the prescribed rate, since _ν is a singleton {} when ν = 1. One can no longer show such a result for the ν-approximate robust universal confidence set even with an infinite number of observations. This is because, conditional on _1, the split test ϕ_P, , ν is guaranteed to achieve (exponentially) small Type 2 error uniformly over ∈ only for distributions P which are at least νρ() away from .
Nevertheless, Theorem <ref> characterizes the rate at which _ρ,B,n shrinks to _ν.
Theorem <ref> also shows how the size of the set depends on the choice of . When possible we should
choose a pilot estimate which converges to the target at a fast rate to ensure that the term η_n is sufficiently small. A sensible choice is often a minimum distance estimator <cit.> which is not only a consistent estimator of under some regularity conditions but is also robust to some misspecification in its corresponding distance <cit.>.
§ SIMULATIONS
In this section, we evaluate our proposed exact and approximate robust universal confidence sets in two particular setups—Overdispersion and Contamination—and demonstrate the advantages of the methods
we propose.
§.§ Overdispersion
Overdispersion is a classic example of model misspecification where the true distribution has larger variance than what can be represented by the hypothesized model. Specifically, consider a case of count data generated from the negative binomial distribution with mean 𝔼_ (X):= θ^* and variance 𝕍_ (X) = κθ^* where the positive constant κ represents the dispersion ratio. Suppose a statistician hypothesized a Poisson model 𝒫_Θ = {Poi(θ) : θ∈ℝ_+} to best describe . Since the mean and the variance are the same for the Poisson distribution (implicitly assuming κ=1), the dispersion ratio κ captures the severity of the model misspecification. Figure <ref> shows ρ (Poi(θ)) with ρ = , , across the dispersion ratio. Notice that KL projection is the true mean θ^* (= 10) regardless of the dispersion ratio whereas Hellinger and TV projection gets smaller as the true variance is more inflated.
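A sketch of this data-generating mechanism is given below (negative binomial with mean θ^* = 10 and variance κθ^*); the moment-matching parameterization is our assumption about how the dispersion is induced, not a statement about the exact simulation code used here.

import numpy as np
from scipy.stats import nbinom, poisson

def simulate_overdispersed(theta_star=10.0, kappa=3.0, n=200, seed=0):
    # Draw counts with mean theta_star and variance kappa * theta_star:
    # Poisson when kappa == 1, otherwise negative binomial with size
    # r = theta/(kappa - 1) and success probability p = r/(r + theta).
    rng = np.random.default_rng(seed)
    if kappa == 1.0:
        return poisson(theta_star).rvs(size=n, random_state=rng)
    r = theta_star / (kappa - 1.0)
    p = r / (r + theta_star)
    return nbinom(r, p).rvs(size=n, random_state=rng)

# Example usage: data = simulate_overdispersed(kappa=3.0, n=200); the KL set can
# then be formed over a grid such as [poisson(t) for t in np.linspace(8, 12, 81)].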
The split LRT is sensitive to the misspecification. As highlighted in Section <ref>, the split LRT confidence set (_sLRT) may fail to cover the KL projection unlike the KL set (_) even with the same choice of θ_1 and the same log split likelihood-ratio statistic. Figure <ref> contrasts the performance of _sLRT and _ based on 1000 replicates of 200 simulated observations. In computing the confidence sets, the observations are equally split in half and we choose θ_1 to be the sample mean (which is the MLE) of the first half samples. As the misspecification gets more severe (larger κ), the empirical coverage of KL projection parameter (θ̃) decreases for _sLRT. When the dispersion ratio becomes larger than 3, _sLRT fails to achieve the nominal 95% coverage whereas _ maintains the validity regardless of how severe the misspecification is. Both the center and the right panel depict the size of the estimated confidence set varying over the dispersion ratio but from a different perspective. The former is based on the maximal excess KL divergence from the KL projection (which can be at most twice the KL-diameter of the set) whereas the latter is based on the L_2 distance over the parameter space. It is not surprising that compared to _, _sLRT is smaller in the L_2 sense and is closer to in an excess divergence sense.
Beyond KL projection
Unlike the KL projection, the Hellinger and TV projections are different for different degrees of overdispersion. Our target of inference for the Hellinger and TV distances is the ν-approximate projection rather than the projection itself, as seen in the left panel of Figure <ref>. When the dispersion ratio κ≥ 6, the ν-approximate target for both the Hellinger and TV distances includes any θ∈ℝ_+, i.e. it becomes the entire model, and thus the approximate coverages are trivially 100%. Once again
this highlights that the approximate projection is a meaningful target only when the model misspecification is not too severe.
Figure <ref> summarizes the performance of approximate sets regarding Hellinger (_) and TV distance (_) based on 1000 replicates of 200 simulated observations. We choose the minimum distance estimator for θ_1 for both _ and _. Both _ and _ yield 100% empirical coverage—defined as the proportion of replications in which the confidence set intersects _ν—across all dispersion ratios, except in the almost well-specified case (0.01% dispersion) with 97.4% and 99.1% coverage, respectively. This conservatism is expected because for these divergences we have relaxed our target of inference to be the set
of ν-approximate projections.
Nevertheless, this does not mean that the Hellinger and TV sets are vacuously large. The center and right panels of Figure <ref> show the diameter of the set in the Hellinger or TV distance sense, or in the Euclidean sense. The size of the set increases as the misspecification worsens, regardless of the distance measure. In general, _ is larger than _. _ behaves closer to _ under slight to moderate overdispersion and to _ as the overdispersion becomes severe.
Comparison between asymptotic and finite sample valid sets
Figure <ref> compares the various TV sets when the data-generating distribution is a 32% variance-inflated negative binomial—Berry-Esseen (_), Hoeffding bound (_HF), empirical Bernstein bound <cit.>, and empirical Bentkus bound <cit.>. See Appendix <ref> for explicit forms of each confidence set. In all cases, we choose the same minimum TV distance estimator θ_1. The KL set dominates all finite sample valid confidence sets considered in this section, despite its validity relying on asymptotics. The finite sample valid sets are too conservative (and yield a meaningless set =) when only a few observations are available (n ≤ 50).
Although our paper does not primarily focus on obtaining the tightest finite-sample valid confidence set, leveraging the variance _(X) can often be beneficial when constructing the confidence set. In this example, _EBS and _EBK outperform _HF since the Bernstein and Bentkus bounds are more sensitive to the variance.
§.§ Contamination
Consider the following contaminated data generating distributions
which are mixtures of Gaussians. This simulation setup is used in the work of <cit.>.
_1 = 0.99 N(0, 1) + 0.01 N(0, 30^2) (Symmetric)
_2 = 0.94 N(0, 1) + 0.01 N(20, 20^2) + 0.05 N(-30, 20^2) (Asymmetric)
_3 = 0.7 N(2, 1) + 0.2 N(-2, 1) + 0.1 N(0, 30^2) (Heavily Asymmetric)
For each case, we let _ denote the uncontaminated distribution that excludes the outlying noise components.
Consider a location-scale family of Gaussian distribution 𝒫_Θ = {N(μ, σ^2 ) : (μ, σ)∈Θ} as a working model. (See Appendix <ref> for additional simulations for
a location family with fixed scale.) Our goal is to evaluate the empirical performance—coverage and size—of robust universal confidence sets for the (approximate) projection of the various contaminated distributions onto 𝒫.
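A sketch of the three contaminated data-generating distributions is given below; the component weights, means and standard deviations are taken from the displays above, while the sampler itself is illustrative.

import numpy as np

def sample_mixture(weights, means, sds, n, seed=0):
    # Draw n points from a Gaussian mixture with the given weights/means/sds.
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[comp], np.asarray(sds)[comp])

# The three contaminated data-generating distributions of this section:
CASES = {
    "symmetric":        ([0.99, 0.01],       [0.0, 0.0],         [1.0, 30.0]),
    "asymmetric":       ([0.94, 0.01, 0.05], [0.0, 20.0, -30.0], [1.0, 20.0, 20.0]),
    "heavy_asymmetric": ([0.7, 0.2, 0.1],    [2.0, -2.0, 0.0],   [1.0, 1.0, 30.0]),
}

# Example: 200 draws from the heavily asymmetric case.
data = sample_mixture(*CASES["heavy_asymmetric"], n=200, seed=1)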
Figure <ref> shows the mean and standard deviation of the projection distribution with respect to the KL, DP, Hellinger and TV distances along with the mean and standard deviation of the contaminated and uncontaminated distributions. The KL projection parameters are the same as the parameters of the contaminated distribution in all cases.
The DP projection parameters get closer to the uncontaminated parameters as β increases.
The Hellinger projection is the closest to the uncontaminated parameters among all projections we considered; however, the size of _ν is much larger than that of the approximate TV projection. The set _ν for both Hellinger and TV distance is quite large for the heavily misspecified case (Case 3).
Practically, we recommend targeting DP projection with a reasonable choice of β (> 0.05) for this heavily misspecified case.
Figure <ref> illustrates the empirical coverage and size of split LRT and ( and ) sets based on 1000 replications. For split LRT and KL sets, we choose θ̂_1 to be the quasi-MLE, whereas, for the
DP set, we use the minimum DP divergence estimator. The split LRT fails to cover the KL projection in all cases, whereas the sets achieve the nominal coverage with large enough sample sizes. The DP sets show better coverage than the KL set across all sample sizes. Such a coverage improvement is more evident at smaller sample sizes below 200 and as β gets larger, i.e., when the DP set targets a more stable projection. Regardless of which divergence measure ρ is of interest, the size of the confidence set with reference to ρ shrinks to zero as the sample size increases. Again, the extremal values of _^ (_, ) for sample sizes below 500 highlight the instability of the KL projection.
Figure <ref> shows the maximal ρ-distance of and set from based on 1000 replications along with the ρ(_ν), a set of ρ-distance from to approximate projection _ν. ρ(_ν) illustrates the same phenomena as in Figure <ref> but with respect to each distance. Theoretically, we can only claim the shrinkage of set up to _ν. This can be seen in Figure <ref> for both Hellinger and TV set as the maximum excess distance from reaches νρ(_ν) with large enough samples. sets shrink beyond _ν in this example: the Hellinger set converges to a set quite close to with large enough sample size, while the TV set converges to a set around which does not appear to shrink with sample size.
§ DISCUSSION
In this paper,
we presented a general method for constructing uniformly valid exact and approximate confidence sets for
various projection distributions
under weak regularity conditions in the presence of possible model misspecification.
We demonstrated that the universal inference procedure <cit.> can fail catastrophically
even in simple examples, under fairly benign model-misspecification. We then showed
that the robust universal inference framework can address these failures, providing methods which are robust and can
meaningfully target different projection distributions.
Although data splitting plays an essential role in constructing an assumption-light universal confidence set, it also introduces inefficiency and algorithmic randomness, since only a random subset of the observations is used in constructing the split statistics. This can be partially addressed with cross-fitting, where we average the split statistic with the one obtained after swapping the roles of _0 and _1. In contrast to the well-specified
setting where the validity of the crossfit set is immediate, more care is needed under model-misspecification. We investigate the validity of the crossfit set in Appendix <ref>.
The papers <cit.> study many variants of universal inference (including constructing confidence sequences instead of confidence sets, to combining multiple sample-splits) and investigating these
variants in the context of the robust universal inference framework of this paper would be interesting.
Finally, our paper brings to the forefront the role of pairwise tests of fit (and relative fit) together with sample-splitting, in designing broadly applicable inference methods. We expect this basic insight to
have further implications in other contexts, for instance, in designing universal inference procedures in other settings where likelihood-based methods are inapplicable.
§ ACKNOWLEDGEMENTS
This work was partially supported by funding from the NSF grants DMS-1713003, DMS-2113684 and
CIF-1763734, as well as an Amazon AI and a Google Research Scholar Award to SB. The authors are grateful to Arun Kuchibhotla, Aaditya Ramdas and Ian Waudby-Smith for
helpful discussions regarding finite-sample valid confidence sets.
§ PROOFS FROM SECTION <REF>
§.§ Example <ref>
Note that the KL projection = ( 1/2 ). Consider the event E where all of the observed samples X_1,…,X_n are 0.
We can see that,
_(E) = 1 - _( ∑_i=1^n X_i > 0 ) ≥ 1 - _[ ∑_i=1^n X_i ] = 1 - n ϵ_n.
Now, on the event E, it is clear that the MLE = (0).
Let us denote the split-sample universal set by C_α(X_1,…,X_n), where we assume for simplicity that 𝒟_0 and 𝒟_1 each have n/2 samples.
We then have,
_(∉ C_α(X_1,…,X_n)| E)
= _(ℒ_0()/ℒ_0() ≤α | E)
= _(1/2^n/2≤α | E) = 1,
for n ≥ 2 log_2(1/α). As a consequence, we can upper bound the coverage of the universal set by,
_(∉ C_α(X_1,…,X_n))
≥_(E) _(∉ C_α | E) ≥ 1 - n ϵ_n.
Thus, we see that if 0 < ϵ_n ≤β/n for some β > 0, and n ≥ 2 log_2(1/α) then the universal set has coverage at most β. Choosing β < (1 - α)
we see that the universal set fails to have its advertised coverage.
§.§ Example <ref>
The KL projection is ( 3/4 ). For simplicity we suppose that n is even, and that 𝒟_0 consists of the first n/2 samples and 𝒟_1 consists of the remaining samples.
For a constant β > 0, let us consider the events E_0, E_1 defined as,
E_0 = ( ∑_i=1^n/2 X_i < n/4 - β√(n))
E_1 =( ∑_i=n/2^n X_i < n/4 ).
When events E_0 and E_1 hold we can see that the universal set C_α(X_1,…,X_n) fails to cover . In more detail, on the event E_1 the MLE, is (1/4) and thus,
_(∉ C_α(X_1,…,X_n) | E_0, E_1) = _(ℒ_0()/ℒ_0() ≤α | E_0, E_1)
≥_(1/3^2β√(n)≤α) = 1,
provided that n ≥ (log_3(1/α))^2/(4β^2). Thus, it suffices to show that E_0 and E_1 happen with sufficiently large probability.
Using the fact that the Total Variation distance between the n-fold product measures,
((1/2)^n, (1/2 + ϵ_n)^n) ≤ n ϵ_n,
we can reason instead about the probability of the events E_0 and E_1 when drawing samples from (1/2), and account for the difference using the Total Variation. Combining this fact with the standard
Berry-Esseen bound applied to Bernoulli sums, together with some simple algebraic manipulations, we obtain that for some universal constant C > 0,
P(E_0 ∩ E_1) ≥ P(Z < 0) × P(Z < -2√(2)β) - 2 C/√(n) - n ϵ_n.
Thus, choosing ϵ_n ≪ 1/n, and β to be a sufficiently small constant, when n is sufficiently large, we obtain that,
P(E_0 ∩ E_1) ≥ 1/8,
and thus that,
P(∉ C_α(X_1,…,X_n)) ≥ 1/8.
§ PROOFS FROM SECTION <REF>
In this section, we formally verify the claim that the universal set typically includes both the pilot and the projection distribution.
We first define the ρ-diameter of the set C as _ρ (C) = sup_P_a, P_b ∈ Cρ(P_a P_b).
Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n defined in (<ref>) has diameter at least ρ() with
-probability at least 1 - 2α:
inf_∈_(_ρ (C_α,n) ≥ρ() ) ≥ 1 - 2 α.
Following a similar argument to that in the proof of Theorem <ref>, notice that for any ∈, _ (∉ C_α,n | _1) ≤α. Together with a union bound, we obtain that
both and are included in the set C_α,n with -probability at least 1- 2α (conditionally on _1), and on this event, the diameter of the set is at least ρ() in expectation.
§ PROOFS FROM SECTION <REF>
§.§ Proof of Theorem <ref>
We first work conditional on the sample 𝒟_1 used to construct the pilot estimate .
Let us define
M_P,δ,ξ := 𝔼_ [|T_i(P,) + Z_i δ - 𝔼_ T(P,)|^2+ξ | 𝒟_1].
Due to the added Gaussian noise, the variance M_P,δ,0 is always strictly positive (i.e., larger than δ^2).
By Minkowski's inequality, conditional on _1, we have
M_P,δ,ξ≤[ (_| T_i (P, ) - _ T_i (P, )|^2+ξ | _1)^1/(2+ξ)
+ δ( |Z|^2+ξ)^1/(2+ξ)]^2+ξ.
This means that for assumed , there exists a universal constant C_M such that (conditionally on _1) the 2+ξ moment of corrupted statistic T_i (P, ) + δ Z_i is uniformly bounded by C_M for all P∈.
Conditionally on _1, the generalized Berry-Esseen bound
for the studentized statistic <cit.> yields that, for a universal constant C',
sup_t| ℙ_(√(n_0)( T_n_0,δ (P,) - _ T (P, ) )/s_P,δ≥ t | 𝒟_1) - P(Z ≥ t)|
≤C' M_P,δ,ξ/n_0^ξ/2δ^2+ξ≤ C n_0^-ξ/2,
where C = C' C_M δ^-(2+ξ).
This holds in particular for ∈𝒫.
Consequently, we see that,
inf_∈_ (∈_, n) = inf_∈𝔼_ [ _ (∈_, n | 𝒟_1)]
≥ 1 - sup_∈𝔼_ [ _ (∉_, n | 𝒟_1)]
≥ 1 - [α + C n^-ξ /2],
as claimed.
§.§ Proof of Proposition <ref>
Recall that X_i iid∼(ϵ_n) for ϵ_n ≤ (1-α)/n and our hypothesized model is ={(p): p∈{0, 1/2}}. For a fixed β >0,
_β((ϵ_n) (p) )
= C + (p^1+β + (1-p)^1+β) - (1 + 1/β) [ϵ_n p^β + (1-ϵ_n) (1-p)^β]
where C = ∑_x∈0,1ϵ_n^(1+β)x (1-ϵ_n)^(1+β)(1-x). The DP divergences from to the elements of the working model are
_β((ϵ_n) (0) )
∝ 1 - (1 + 1/β) (1-ϵ_n)
= (1 + 1/β) ϵ_n - 1/β
_β((ϵ_n) (1/2) )
∝ - (1/2)^β / β.
Therefore, the DP projection is
=
(0), if ϵ_n ≤ (1 -(1/2)^β) / (1 + β),
(1/2), otherwise.
Since ϵ_n < (1-α)/n, the projection will be (0) for any β >0, provided that n ≥ (1-α) (1 + β) / (1- (1/2)^β).
Now we turn our attention to constructing the DP set. For any fixed (P,Q) ∈^2, the split statistic is uniformly bounded, i.e., |T_i (P,Q) - _ T (P,Q)| ≤ 1 + 1/β since
T_i (P, Q)
= ∑_x∈{0,1}[ (p^x (1-p)^1-x)^1+β - (q^x (1-q)^1-x)^1+β]
- ( 1+1/β) [ (p^X_i (1-p)^1-X_i)^β - (q^X_i (1-q)^1-X_i)^β].
By Hoeffding's inequality, _,1+1/β,n ensures nominal coverage for any estimator , since we have that:
_(∉_,1+1/β,n)
= _(_( T_n_0 (,) > ( 1+1/β) √(log(1/α)/2 n_0) | _1 )) ≤α.
§.§ Proof of Proposition <ref>
Note that Hellinger projection is (0) for n>6 (as long as ϵ_n < 0.146) since
^2((ϵ_n), (0))
= 1 - √(1 - ϵ_n), ^2((ϵ_n), (1/2))
= 1 - √(ϵ_n / 2) - √((1-ϵ_n) / 2).
Similarly, the ν-approximate Hellinger projection set is
{(0)} if ϵ_n < 0.051, and the entire model otherwise. Hereafter we only consider n > 20, where _ν = {(0)}.
The minimum Hellinger Distance Estimator (MHDE) is
_p∈{0, 1/2}^2 (_n_1, (p))
= _p∈{0, 1/2}√(p X_n_1) + √((1-p) (1 - X_n_1))
where X_n_1 = ∑_i∈_1 X_i / n_1.
Thus,
= (0), X_n_1 < 0.5-1/(2√(2)) ≈ 0.146
(1/2), Otherwise.
This implies that the advertised coverage is guaranteed when X_n_1 < 0.146. Otherwise, Corollary <ref> ensures the asymptotic (approximate) validity.
§.§ Proof of Proposition <ref>
The proof follows directly by the triangle inequality.
2 _ T (P_0, P_1)
= _P_0 f^*_(P_0, P_1) + _P_1 f^*_(P_0, P_1) - 2 _ f^*_(P_0, P_1)
= 2 [ _P_0 f^*_(P_0, P_1) - _ f^*_(P_0, P_1)] - [_P_0 f^*_(P_0, P_1) - _P_1 f^*_(P_0, P_1)]
= 2 [ _ f^*_(P_1, P_0) - _P_0 f^*_(P_1, P_0)] - _ (P_0, P_1)
≤ 2 _ (, P_0) - _ (P_0, P_1)
≤ 2 _ (, P_0) - [_(, P_1) - _(, P_0)]
(by the triangle inequality)
= 3 _(, P_0) - _(, P_1)
§ PROOFS FROM SECTION <REF>
§.§ Proof of Theorem <ref>
Recall that the exact robust universal confidence set based on the Hoeffding bound is
_ρ,B,n = { P∈ : T_n_0 (P,) ≤ B √(log (1/α)/2 n_0)}.
We denote t_α,n := B √(log (1/α)/2 n_0) throughout the proof, and use C to denote _ρ,B,n.
Throughout the proof, we fix a projection distribution and assume an equal split between _0 and _1.
Denote δ_ν (P, Q) = ρ( P) - νρ( Q) for any P, Q ∈.
We want to show that, for fixed κ > 0, for some finite M > 0,
_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤κ,
where ϵ̃_n = ℜ_n(_T,𝒫) ∨ t_α,n and for all n large enough.
Let the event E be δ_1 (, ) ≤ (M/ν) η_n which happens with probability at least 1-κ/2. Then,
_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) )
≤_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) | E ) + κ/2
= _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) - νδ_1(, ) | E ) + κ/2
≤_( sup_P ∈δ_ν(P, ) > M ϵ̃_n | E ) + κ/2.
Thus, it suffices to show that, conditional on E, with -probability at most κ/2, some P∈ with δ_ν(P, ) > M ϵ̃_n is included in the confidence set. Hereafter we condition on the event E.
Let _ϵ := {P ∈ : δ_ν(P, ) > ϵ}.
From Assumption <ref>, we have that
_( ∃ P∈_ϵ : P ∈_ρ,B,n | _1 )
= _( sup_P∈_ϵT_n_0(, P) ≥ - t_α,n | _1 ),
≤_( sup_P∈_ϵ[T_n_0(, P) - _ T(, P)] ≥ϵ - t_α,n | _1 ).
where the inequality is from noticing that conditional on _1,
sup_P∈_ϵ [- _ T(, P)] ≥sup_P∈_ϵδ_ν(P, ) ≥ϵ by Assumption <ref>.
To ease the notation, denote the centered statistic as T_P := T_n_0(, P) - _ T(, P).
Since |T(,P)| ≤ B, any change in a single X_i can change sup_P∈_ϵT_P by at most 2B/n_0. By McDiarmid's inequality, we have that
_(sup_P∈_ϵT_P ≥ϵ - t_α,n | _1)
≤exp( - n(ϵ - t_α,n - _[sup_P∈_ϵT_P] )^2/2 B^2).
Now we focus on bounding _ [sup_P∈_ϵ |T_P|] (which is greater than _ [sup_P∈_ϵT_P ]).
Let _T,𝒫 = {T(·; , P) : P ∈}. The symmetrization lemma <cit.> states that
_Xsup_f∈_T,𝒫1/n_0| ∑_i=1^n_0[f (X_i) - _ f(X_i)] |
≤ 2 _X,εsup_f∈_T,𝒫|1/n_0∑_i=1^n_0 R_i f(X_i)| := 2 _n_0 (_T,𝒫)
where R_i are iid Rademacher random variables.
§ FINITE-SAMPLE VALID CONFIDENCE SET FOR BOUNDED TEST STATISTIC
Suppose the split statistics are uniformly bounded, i.e., |T_i (P)| ≤ B for all i. Classic Cramér-Chernoff bounds yield finite-sample valid exact (ν = 1) or approximate (ν > 1) confidence sets.
_HF is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where
_HF = {P ∈ : T_n_0 (P) ≤√(B^2/2n_0log(1/α))}.
Typically, Hoeffding's bound does not scale with the variance which results in a conservative confidence set. Confidence set based on Bernstein's inequality is given as follows.
_BS is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where
_BS = { P∈ : T_n_0(P) ≤√(2 S^2 log (1/α)/n_0 + B^2/9( log (1/α)/n_0)^2) + B log (1/α)/3 n_0}
where S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()].
However, _BS above requires knowledge of to compute S. Empirical Bernstein bounds <cit.> address this issue.
Denote T̃_i (P,Q) = (T (X_i; P, Q)+ B) / (2B). _EBS is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where
_EBS = { P∈ : ∑_i=1^n_0λ_i T̃_i (P, )≤log(1/α) + ∑_i=1^n_0 v_i ψ_E (λ_i) },
v_i = (T̃_i (P, ) - T̄_i-1 (P, ))^2, ψ_E(λ) = - (log(1-λ) - λ), and
λ_i = √(2log(1/α)/(n_0 Ŝ_i-1^2))∧ c, Ŝ_i^2 = (1/4 + ∑_l=1^i (T̃_l - T̄_l)^2)/(i + 1), T̄_i = 1/(i+1)∑_l=1^i T̃_l,
for some c ∈ (0,1).
When the variance or an upper bound of the variance is known, Bentkus's bound <cit.> is sharper than any Cramér-Chernoff type bounds. See <cit.> for details. Define a Bernoulli random variable G = G(S^2, B) as
( G = B ) = S^2/(S^2 + B^2) := p_SB, ( G = - S^2/B) = 1 - p_SB
_BK is a valid 1-α confidence set for the exact (ν = 1) or approximate (ν > 1) projection, where
_BK = { P∈ : T_n_0(P,) ≤ q(α) }
where q(α) is the solution to
P_2 ( u; ∑_i∈_0 G_i )
:= inf_t ≤ u_( ∑_i∈_0 G_i - t )_+^2/(u -t )_+^2 = α,
and S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()].
As in the case of Bernstein's bound (<ref>), Bentkus's bound (<ref>) requires prior knowledge of to compute the variance S. The empirical Bentkus's bound <cit.> addresses this by taking the union bound on variance over-estimation and the Bentkus's inequality. Following <cit.> define the over-estimator of S as, for δ∈[0,1],
S_n (δ) = √(S_n_0^2 + g_2,n_0 (δ)) + g_2,n_0(δ), S_n^2 = 1/⌊ n / 2 ⌋∑_i=1^⌊ n / 2 ⌋(T_2i - T_2i-1)^2/2,
where g_2,n(δ) := B (√(2) n)^-1√(⌊ n / 2 ⌋)Φ^-1 (1- 2 δ / e^2) and Φ is the cdf of a standard Gaussian.
_EBK is a valid 1-α confidence set for the exact (ν = 1) or approximate (ν > 1) projection where, for some δ∈[0,1],
_EBK = { P∈ : T_n_0(P,) ≤ q(α - δ) }
where q(α - δ) is the solution to
P_2 ( u; ∑_i∈_0 G_i ( S^2_*(δ), B ) ) = α - δ.
with S_* (δ) := min_1 ≤ i ≤ n_0S_i (δ).
In Section <ref>, we choose δ = α/3 to construct the empirical Bentkus's bound-based TV set.
§ CROSSFIT SET
Although universal inference holds for any , let us assume we choose such that sup_P ∈‖ T(·; P, P_1) - T(·; P, ) ‖_L_2() = o(1).
For any fixed P ∈, consider the following decomposition:
_n_0 T(·; P, P_1) - _ T(·; P, )
= (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + _[T(·; P, P_1) - T(·; P, )] + (_n_0 - _) T(·; P, ).
The first term is an empirical process term, which is o_ (1/√(n_0)) by Lemma 2 of <cit.>. The second term is the bias, which is o(1) by our choice of . The last term yields the CLT.
Now let T_n_1 (P; P_0) := ∑_i∈_1 T(X_i; P, P_0) /n_1 where we change the role of _0 and _1. Define a cross-fitting estimator as
T_n^× (P)
= n_1 T_n_1 (P; P_0) + n_0 T_n_0 (P; P_1)/n.
Then n (T^× (P) - _ T(·; P, )) has the following decomposition:
n_0 (_n_0 - _) [T(·; P, P_1) - T(·; P, )]
+ n_1 (_n_1 - _) [T(·; P, P_0) - T(·; P, )]
+ n_0 _ [T(·; P, P_1) - T(·; P, )] + n_1 _ [T(·; P, P_0)- T(·; P, )]
+ n (_n - _) T(·; P, ).
Similarly, both empirical process terms in the first line are o_ (1/√(n)), and the bias terms in the second line are o (1). Thus, we are left with the same CLT term. The decomposition implies that, as long as one chooses a “good” pilot estimator, the cross-fit statistic also provides asymptotically (uniformly) valid inference on .
Construct a cross-fit ρ-set as follows:
C^×_ρ,α, n = {P∈ : T^× (P) ≤ z_αŝ_P^×/√(n)}.
where ŝ_P^× 2 = [_ (T (X; P, P_1)) + _ (T (X; P, P_0) )] / 2 is a consistent estimator of _ (T(X; P, )).
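A sketch of the cross-fit statistic and its variance estimate follows, assuming a statistic function returning per-observation values and a user-supplied pilot-fitting rule; the names are illustrative.

import numpy as np

def crossfit_statistic(P, D0, D1, fit_pilot, statistic):
    # Cross-fit split statistic: average the statistic computed on D0 (pilot
    # fitted on D1) with the statistic computed on D1 (pilot fitted on D0),
    # with weights proportional to the fold sizes.
    pilot_1 = fit_pilot(D1)              # used when evaluating on D0
    pilot_0 = fit_pilot(D0)              # used when evaluating on D1
    T0 = np.asarray(statistic(P, pilot_1, D0))   # per-observation values on D0
    T1 = np.asarray(statistic(P, pilot_0, D1))   # per-observation values on D1
    n0, n1 = len(T0), len(T1)
    t_cross = (n0 * np.mean(T0) + n1 * np.mean(T1)) / (n0 + n1)
    s_cross = np.sqrt(0.5 * (np.var(T0, ddof=1) + np.var(T1, ddof=1)))
    return t_cross, s_cross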
§ ADDITIONAL RESULTS AND TECHNICAL DETAILS ON NUMERICAL STUDIES
§.§ Computational detail
We adopt a heuristic search method for finding a confidence set in a multivariate parameter space. For brevity, we explain the procedure in 2 dimensions, but it can be straightforwardly extended to higher dimensions. We start from the observation that when _1 is close to θ̃, i.e., ρ( P__1) ≤νρ() as seen in the proof of Theorem <ref>, T_n_0 (_1) = 0 for split statistics that satisfy Assumptions <ref> and <ref>. Therefore, we construct a star-convex confidence set that always includes _1. We construct rays originating from _1, i.e., R_ω = {θ∈Θ : r_ω^⊤ (θ - _1) = 0, r ≥ 0 } where r_ω = (r sinω, - r cosω) for angle ω∈ [- π, π]. For each ω, we find a root of the evidence function (θ) = T_n_0 (θ) - t_α (θ) using Brent's method <cit.> on R_ω, with the radius r varying from 0 (corresponding to θ=_1) to some r_0 > 0 such that the corresponding θ_0 satisfies (θ_0) > 0.
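A sketch of this ray-search procedure in two dimensions using Brent's method from scipy is given below; the number of rays, the maximal radius, and the bracketing scan are illustrative choices, and the sketch assumes the set is bounded within radius r_max along every ray.

import numpy as np
from scipy.optimize import brentq

def star_convex_boundary(theta1_hat, evidence, n_rays=64, r_max=10.0):
    # Trace the boundary of a star-convex 2-D confidence set around theta1_hat.
    # `evidence(theta)` should return T_n0(theta) - t_alpha(theta); the boundary
    # point along each ray is a root of the evidence function (Brent's method).
    theta1_hat = np.asarray(theta1_hat, dtype=float)
    boundary = []
    for omega in np.linspace(-np.pi, np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(omega), np.sin(omega)])
        g = lambda r, d=direction: evidence(theta1_hat + r * d)
        # Scan outward until the evidence becomes positive (point excluded).
        r_hi = next((r for r in np.linspace(r_max / 50.0, r_max, 50) if g(r) > 0),
                    r_max)
        boundary.append(theta1_hat + brentq(g, 1e-8, r_hi) * direction)
    return np.array(boundary)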
§.§ Gaussian contamination - Location family
Consider a Gaussian location family = {(θ, 1) : θ∈} where the variance is fixed to that of uncontaminated distributions. Figure <ref> shows the projection parameters along with those of contaminated and uncontaminated distributions. The mean of contaminated distribution and that of uncontaminated distributions are the same for Cases 1 and 3 but not for Case 2. This leads to the interesting observation that forward KL projection is the closest to the uncontaminated distribution in Case 3 unlike location-scale family in Figure <ref>, Section <ref>.
Figure <ref> summarizes the performance of confidence sets targeting the forward KL or DP projection over 1000 replications. Clearly, split LRT fails to attain the nominal coverage even for a large enough sample size. All other sets achieve the nominal coverage for moderate to large sample size. _ are shorter than _ and even than the invalid split LRT set for Cases 2 and 3.
|
http://arxiv.org/abs/2307.04619v1 | 20230710150729 | Learning Fine Pinch-Grasp Skills using Tactile Sensing from Real Demonstration Data | ["Xiaofeng Mao", "Yucheng Xu", "Ruoshi Wen", "Mohammadreza Kasaei", "Wanming Yu", "Efi Psomopoulou", "Nathan F. Lepora", "Zhibin Li"] | cs.RO | ["cs.RO"] |
Learning Fine Pinch-Grasp Skills using Tactile Sensing from Real Demonstration Data
Xiaofeng Mao, Yucheng Xu, Ruoshi Wen, Mohammadreza Kasaei, Wanming Yu, Efi Psomopoulou, Nathan F. Lepora, Zhibin Li
August 12, 2023
===================================================================================
This work develops a data-efficient learning from demonstration framework which exploits the use of rich tactile sensing and achieves fine dexterous bimanual manipulation. Specifically, we formulated a convolutional autoencoder network that can effectively extract and encode high-dimensional tactile information. Further, we developed a behaviour cloning network that can learn human-like sensorimotor skills demonstrated directly on the robot hardware in the task space by fusing both proprioceptive and tactile feedback. Our comparison study with the baseline method revealed the effectiveness of the contact information, which enabled successful extraction and replication of the demonstrated motor skills.
Extensive experiments on real dual-arm robots demonstrated the robustness and effectiveness of the fine pinch grasp policy directly learned from one-shot demonstration, including grasping of the same object with different initial poses, generalizing to ten unseen new objects, robust and firm grasping against external pushes, as well as contact-aware and reactive re-grasping in case of dropping objects under very large perturbations.
Moreover, the saliency map method is employed to describe the weight distribution across various modalities during pinch grasping. The video is available online at: https://youtu.be/4Pg29bUBKqs.
§ INTRODUCTION
Dexterous robot manipulation has the capability to work across a range of tasks and environments. However, enabling dexterous manipulation in robots, particularly in a manner that is comparable to human capabilities, remains an unsolved challenge. Currently, numerous studies utilize visual feedback to enable robots to perform dexterous manipulation tasks such as box flipping <cit.>, object rotating <cit.>, and door opening <cit.>. However, these visual-based methods have limitations, as the visual data could be influenced by occlusion and lighting variations. Consequently, it is very important to investigate how to incorporate tactile information for the enhancement of dexterous manipulation in robotic systems.
Tactile sensing plays a vital role in capturing detailed information about contact surfaces, including the distribution of contact forces and their variations during force-sensitive tasks – which is indispensable for achieving dexterous handling of lightweight objects with irregular surfaces, shapes, and deformable properties. Especially during close-range interaction between hands and objects, visual occlusion restricts the ability to perceive detailed information of the contact surfaces, during which tactile sensors become valuable for providing essential information of these unseeable surfaces. Integrating tactile sensing into motor learning of dexterous grasping can enhance the rich and precise sensing of surface contacts and interaction dynamics, provide irreplaceable and direct feedback when manipulating objects <cit.>, and enable more robust and precise manipulation tasks. It is crucial to explore how robots can leverage this information to achieve human-comparable dexterous manipulation abilities.
The canonical hardware for robot manipulation incorporates Force/Torque sensors that can only measure the 6-degree-of-freedom (DoF) wrench at each end-effector. Soft optical-based tactile sensors can provide abundant and discriminative contact information by quantifying the deformation of the soft materials using a camera system <cit.>.
Currently, several soft tactile sensors have been developed, including TacTip <cit.>, DigiTac <cit.>, Gelsight <cit.>, and DIGIT <cit.>. However, how to use high-dimensional data from tactile sensors for robot dexterous grasping remains open research.
The complex and non-trivial deformation of soft tactile sensors during dexterous grasping tasks presents a considerable challenge. Humans can deal with soft contacts, quickly adapt to new tasks, and produce skills of dual-arm coordination for manipulating objects. Learning from Demonstration (LfD) offers an intuitive, efficient method for acquiring human skills through synchronized tactile information, encoding rich state-action mapping and enabling robots to learn human sensorimotor skills while responding to tactile and proprioceptive feedback. In addition, the common issue of accumulating compounding errors during dexterous manipulation task execution in LfD can be mitigated by utilizing rich tactile information as feedback. The challenge involves effectively extracting features from sensory data and integrating them with proprioceptive states for sample-efficient human dexterous manipulation behavior learning.
This work is motivated to develop an effective LfD framework that leverages rich tactile sensing to learn dexterous sensorimotor skills. Our approach focuses on achieving one-shot LfD of fine pinch grasp, using high-dimensional contact information from tactile sensors and a limited amount of real data. The contributions are summarized as follows:
* A novel feature extraction approach to encapsulate essential features from tactile sensing data, which are then fused with robot proprioceptive states and tactile image difference, thus resulting in a low-dimensional latent space representation that significantly enhances the learning process of fine grasping skills.
* An effective LfD framework that integrates tactile sensory input and robot proprioceptive state, which enables the robot to efficiently acquire feedback-driven dexterous grasping skills through a single demonstration.
The proposed framework is validated by pinch grasp tasks on a dual-arm setup equipped with TacTips sensors <cit.> and has achieved the successful retrieval of a small, cylindrical object on a table using one-shot demonstration. Our experimental results show that the policy, learned from one-shot human demonstration data, can achieve stable grasping of unseen objects with different diameters, masses, and materials.
Furthermore, the robustness of the framework against external disturbances has been validated, with the learned policy demonstrating stable grasping under external disturbance, as well as the capacity to autonomously execute successful re-grasping in case of a large external force that pushes off the object.
We applied saliency map analysis <cit.> and revealed how the learned policy uses different sensory modalities in a variable way throughout the dexterous pinch grasp process, and demonstrate the capability and effectiveness of our proposed network to efficiently learn features of the high-dimensional data and autonomously segment the long-horizon data into several distinct fine-skills for execution.
§ RELATED WORKS
During robotic grasping, tactile sensors can provide rich contact information which is not easily accessed via visual information, thereby playing a crucial role in enhancing the dexterous grasping capabilities <cit.>.
Prior research on robotic pinch grasp has primarily focused on either force analysis and planning to achieve force closure <cit.> or the development of specialized grippers <cit.>. Soft deformable tactile sensors have the ability to perform contact-rich interactions with the environment and manipulate delicate objects safely <cit.>. With optical-based tactile sensors, the orientation of the contact surface can be inferred from the tactile image, enabling stabilization of the pinch grasp by rolling the sensor on the contact surface and applying desired grasping forces <cit.>. The study in <cit.> proposed a novel tactile sensor capable of measuring and localizing distributed forces that enables the robot hand to grasp deformable soft objects.
One open question is how to extract useful information from high-dimensional tactile images. The works in <cit.> estimate 6D contact wrenches from tactile images and the estimated wrenches that can be used as feedback to the grasping controllers within the classical control theory. Deep neural networks can also be used to process tactile images. The works in <cit.> show that contact poses can also be detected from tactile images, which was then combined with goal-driven methods to achieve non-prehensile object stable pushing. The works in <cit.> introduce Autoencoder networks <cit.> to compress the high-dimensional tactile images into low-dimensional latent vectors which can be used for several down-stream tasks, such as object classification.
Moreover, although deformable tactile sensors facilitate area contact, potentially improving grasp stability and protecting delicate objects, the dynamics of the deformable sensor cannot be neglected. The work proposed in <cit.> combines 3D geometry of the tip of a deformable tactile sensor with robot proprioceptive action to learn the tactile sensor membrane dynamics and predict the deformation conditioned on robot action. Data-driven methods can be used to learn the dynamics and can be combined with Model Predictive Control (MPC) methods to achieve tactile servoing <cit.>. Insights from human intrinsic understanding may prove valuable in leveraging deformable sensors to achieve dexterous dual-arm manipulation tasks. LfD is an intuitive and effective way to learn human skills from collected demonstrations, which is very helpful for tasks requiring high-level skills, such as intricate coordination between two arms. By segmenting the collected motion data, the work proposed in <cit.> generates a set of motion primitives to complete tasks. Additionally, humans use multiple senses to accomplish different tasks; this observation can be used to investigate how multi-sensory data can work jointly to help with manipulation tasks <cit.>.
§ METHODS
§.§ System Overview
Teleoperation through a physical robot is a viable approach for generating real demonstration data that can be executed on a physical system, and it was shown to be effective for performing fine dexterous grasping <cit.>. As shown in Fig. <ref>, the overall architecture incorporates a teleoperation system for the collection of human demonstration data, and a dual-arm setup for executing pinch grasp tasks. The teleoperation system consists of two haptic devices (Force Dimension Sigma 7) for human operators to control the dual-arm robot <cit.>. The dual-arm robot system includes two Franka Emika Panda arms, each with a TacTip <cit.> installed on its end-effector. The TacTips capture contact information between the end-effectors and objects as 2D tactile images. Task-Space Sequential Equilibrium and Inverse Kinematics Optimization (SEIKO) runs in the backend to guarantee satisfaction of the physical constraints and the safety of the dual-arm robot <cit.>.
The Learning from Demonstration (LfD) framework (see Fig. <ref>) is composed of two distinct networks: 1) a Convolutional AutoEncoder (CAE) network to extract the latent features from tactile images; 2) a Behaviour Cloning (BC) network to learn the policy of dexterous dual-arm grasping with tactile sensing from human demonstrations.
§.§ Demonstration Dataset of Bimanual Manipulation
In our implementation, the haptic devices allow operators to adjust the 6D pose simultaneously, providing an intuitive way to demonstrate bimanual grasping skills on a dual-arm robot. During the demonstration, a human operator teleoperates the dual-arm robot to complete the grasping task by sending Cartesian commands to the two end-effectors via two haptic devices. The human demonstration data are recorded automatically during the entire grasping.
§.§ Tactile Feature Extraction
The TacTip used in this work is an optical tactile sensor with a soft hemispherical tip, which was 3D-printed in one piece combining an elastic skin with 330 rigid white markers (pins) <cit.>. When the soft tip deforms during contact with objects, the white pins start to move away from their initial positions. The displacement of these pins reflects the complex deformation of the soft surface. An inner camera captures and projects the displacement to an array of white pins on a black background in the image plane, as shown in Fig. <ref>. Raw tactile RGB images are first resized to 256×256 pixels using linear interpolation and converted to grayscale images, which are then cropped using a circle mask and converted to binary images by thresholding. A median filter is applied to denoise the binary images.
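As a concrete illustration of this preprocessing pipeline, the following minimal sketch uses OpenCV; the threshold value, kernel size, and function names are our own assumptions rather than the authors' exact settings.

```python
# Hypothetical sketch of the tactile-image preprocessing described above.
import cv2
import numpy as np

def preprocess_tactile(raw_bgr, size=256, thresh=60):
    """Resize, grayscale, circular-mask, binarize and denoise a raw TacTip frame."""
    img = cv2.resize(raw_bgr, (size, size), interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Keep only the circular sensing area of the hemispherical tip.
    mask = np.zeros((size, size), dtype=np.uint8)
    cv2.circle(mask, (size // 2, size // 2), size // 2, 255, thickness=-1)
    gray = cv2.bitwise_and(gray, gray, mask=mask)
    # Threshold so the white pins appear on a black background, then denoise.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    binary = cv2.medianBlur(binary, 5)  # median filter removes salt-and-pepper noise
    return binary
```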
We propose to use a self-supervised learning method – a convolutional autoencoder (CAE) network – to extract robust features that represent the contact properties from the preprocessed tactile images. Eight convolutional layers are used in the CAE network to extract the spatial information represented by the displacement of the pins. The structure of the CAE network is shown in Fig. <ref>. The CAE network consists of an encoder and a decoder, formulated as follows:
g_Θ(·): 𝒳→ℋ, f_Φ(·): ℋ→𝒳̂.
The encoder g_Θ(·) projects each tactile image γ_t in the high-dimensional input space 𝒳 (256×256) to 16 feature maps γ_l in the low-dimensional latent space ℋ (16×16), then the decoder f_Φ(·) reconstructs that image from the same feature maps to the output space 𝒳̂ (256×256). The binary cross-entropy loss function is used as the reconstruction loss between the input images 𝒳 and the reconstructed images 𝒳̂ to
update the network parameters via back-propagation:
L_CAE(γ_t, γ_p) = -(γ_tlogγ_p + (1-γ_t)log(1-γ_p)), γ_l = g_Θ(γ_t), γ_p = f_Φ(γ_l),
where γ_p is the reconstructed image by the decoder network.
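A minimal PyTorch sketch of the CAE is given below; only the 256×256 input, the 16×16×16 latent feature maps, the eight encoder convolutions, and the binary cross-entropy reconstruction loss follow the text, while the channel widths and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        chs = [1, 16, 32, 64, 16]
        enc = []
        for c_in, c_out in zip(chs[:-1], chs[1:]):
            # each stage: a stride-2 conv halving the resolution + a stride-1 conv
            enc += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(c_out, c_out, 3, stride=1, padding=1), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)            # 1x256x256 -> 16x16x16
        dec = []
        for c_in, c_out in zip(chs[::-1][:-1], chs[::-1][1:]):
            dec += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU()]
        dec[-1] = nn.Sigmoid()                        # reconstruction in [0, 1]
        self.decoder = nn.Sequential(*dec)            # 16x16x16 -> 1x256x256

    def forward(self, x):
        z = self.encoder(x)                           # latent feature maps gamma_l
        return self.decoder(z), z

model = CAE()
x = torch.rand(4, 1, 256, 256)                        # batch of preprocessed tactile images
recon, latent = model(x)
loss = nn.functional.binary_cross_entropy(recon, x)   # L_CAE
```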
§.§ Behavior Cloning Network
We propose and design a novel BC network to learn the coordinated manipulation skills of bimanual grasping from human demonstration data. Dexterous bimanual grasping skills can be divided into two categories: (1) adaptive interaction with different objects, and (2) dual-arm motion coordination. To capture these skills, we have designed the input to our network to include encoded tactile feature maps, tactile image differences, and the robot's proprioceptive state. The encoded feature maps and tactile image differences capture the human-object interaction skills. The robot's proprioceptive state, on the other hand, offers insights into the coordination of movements between both arms. These inputs collectively reflect the complexity and adaptability of dexterous grasping skills.
Following this idea, we use the encoded tactile feature maps l_t, the proprioceptive state ϕ_t, and the tactile image difference e_t as input to the BC network to represent and learn fine human skills. The discrete-time state-action pair set G = {(s_0, a_0), (s_1,a_1), ..., (s_t,a_t),...} is created to train the BC network, where s_t = (l_t, ϕ_t, e_t) denotes the robot state and a_t denotes the Cartesian commands of the two arms at time t.
Using such data of multiple modalities as input to train a network requires a well-crafted embedding structure. A common way of fusing a 2D feature map and a 1D feature vector is to flatten the 2D feature map into a 1D vector and concatenate the flattened vector and the 1D feature vector. However, we found that the flattening projection results in the loss of spatial correlation of tactile information. In this work, we specifically tile the proprioceptive state of robots and the tactile image difference to match the dimension of the tactile feature maps, so as to keep the spatial information of the encoded tactile feature maps.
We then concatenate the tactile feature maps, the tiled proprioceptive state maps, and the tactile image difference on each feature channel, as shown in Fig. <ref>. The convolutional layers in the BC network first filter the input feature maps (46×16×16) to a feature map (1×8×8), which is then flattened and fed into a fully connected network (FCN). The FCN network outputs a vector â∈^12 as the predicted Cartesian pose commands of the two arms, including 3D position and 3D orientation for each arm.
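The following sketch illustrates the tiling-based fusion and the convolution-plus-FCN head. The split of the 46 input channels (2×16 tactile feature maps, a 12-D tiled proprioceptive state, and two tiled image differences) and the hidden layer width are our reading of the text, not confirmed details.

```python
import torch
import torch.nn as nn

class BCNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(46, 16, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
        )
        self.fcn = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 12))

    def forward(self, tactile_maps, proprio, img_diff):
        # tactile_maps: (B, 32, 16, 16); proprio: (B, 12); img_diff: (B, 2)
        B, _, H, W = tactile_maps.shape
        proprio_maps = proprio[:, :, None, None].expand(B, 12, H, W)   # tile, keep spatial layout
        diff_maps = img_diff[:, :, None, None].expand(B, 2, H, W)
        x = torch.cat([tactile_maps, proprio_maps, diff_maps], dim=1)  # (B, 46, 16, 16)
        x = self.conv(x).flatten(1)                                    # (B, 64)
        return self.fcn(x)                                             # 12-D Cartesian command

a_hat = BCNet()(torch.rand(2, 32, 16, 16), torch.rand(2, 12), torch.rand(2, 2))
```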
The loss function used to train the BC network consists of two parts, which are formulated as:
L_BC(a,â) = ‖a - â‖^2 + ‖d - d̂‖^2, â = ψ(l, ϕ, e; Φ_conv, Φ_fcn),
where a∈ℝ^12 is the Cartesian pose command of the two arms from the human demonstration dataset, and â∈ℝ^12 is the predicted Cartesian pose command by the BC network ψ(·;Φ_conv,Φ_fcn), parameterized by Φ_conv and Φ_fcn; l, ϕ and e denote the tactile feature maps, the proprioceptive state maps and the tactile image difference, respectively. The second term ‖d - d̂‖^2 is added to learn the dual-arm coordination skills from human demonstrations, where d∈ℝ^3 is the relative position between the two end-effectors, and d̂∈ℝ^3 is the predicted relative position between the two end-effectors by the BC network.
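One possible reading of this loss, in which the predicted relative position d̂ is taken as the difference between the two predicted end-effector positions and the action layout is assumed to be [pos_L, rot_L, pos_R, rot_R], is sketched below.

```python
import torch

def bc_loss(a_pred, a_demo):
    # a_*: (B, 12); assumed layout [pos_L(3), rot_L(3), pos_R(3), rot_R(3)]
    d_pred = a_pred[:, 0:3] - a_pred[:, 6:9]      # predicted relative EE position
    d_demo = a_demo[:, 0:3] - a_demo[:, 6:9]      # demonstrated relative EE position
    return ((a_pred - a_demo) ** 2).sum(dim=1).mean() + \
           ((d_pred - d_demo) ** 2).sum(dim=1).mean()
```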
§ EXPERIMENTS AND RESULTS
§.§ Experimental Setup and Data Collection
We validate the performance of LfD with tactile sensing for robot dexterous manipulation on a challenging task: object retrieval from the desk using a dual-arm pinch grasp. We have designed a comparative study involving two different configurations to show that our method can outperform the vanilla BC baseline qualitatively and quantitatively. During dexterous grasping, external vision can easily be occluded by the end-effector, potentially leading to inaccurate object estimation. Therefore, our experiments operate without external visual sensors. By default, the starting position of the object lies between the two robot hands, and the whole demonstration is represented in the task space. This assumption allows us to have an interface to connect with a 6D pose estimation from an external camera (e.g., as in <cit.>) and then position the initial poses of the two hands around the target object before starting our control policy.
We collected two demonstrations for each task. The human demonstration dataset collected in the grasping task includes three main components: the Cartesian commands, the proprioceptive states, and the tactile feedback (i.e., tactile images provided by the TacTip sensors). The Cartesian commands and the proprioceptive states of the two arms are collected at a frequency of 1000 Hz. Two TacTips record the tactile image pairs at a frequency of 60 Hz. For each demonstration, about 1000 tactile images are recorded. Before using the collected dataset to train the networks, several pre-processing steps are applied to the raw data. The proprioceptive states of the two arms and the tactile images, collected at different sampling rates, are synchronized using linear interpolation to align their timestamps. A median filter is then applied to smooth the Cartesian commands a_t, i.e., the 6D poses of the two end-effectors. For the raw tactile images, the structural similarity index measure (SSIM) <cit.> is used to quantify the difference between the current frame and the original frame.
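A minimal sketch of these pre-processing steps (timestamp alignment by linear interpolation, median filtering of the commands, and the SSIM-based image difference) is given below; the function names and the use of 1−SSIM as the difference measure are our assumptions.

```python
import numpy as np
from scipy.signal import medfilt
from skimage.metrics import structural_similarity as ssim

def synchronize(t_robot, poses, t_tactile):
    """Linearly interpolate 1 kHz robot states onto the 60 Hz tactile timestamps."""
    return np.stack([np.interp(t_tactile, t_robot, poses[:, k])
                     for k in range(poses.shape[1])], axis=1)

def smooth_commands(commands, k=5):
    """Median-filter each of the 12 channels of the two 6D pose commands."""
    return np.stack([medfilt(commands[:, i], kernel_size=k)
                     for i in range(commands.shape[1])], axis=1)

def tactile_difference(frame, reference):
    """Difference between the current tactile frame and the initial (no-contact) frame."""
    return 1.0 - ssim(frame, reference, data_range=255)  # assumes uint8 binary frames
```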
§.§ Design of Validation Tasks
§.§.§ Learning grasping vial
The human demonstrator performs teleoperation of dual-arm robots to grasp a plastic vial (a test tube with Φ=15.65mm) that is horizontally placed on the table.
A Behaviour Cloning (BC) network is trained using the gathered demonstration data, and the trained policy is tested on the dual-arm robot to validate its generalisation to unseen initial poses. During the evaluation phase, we positioned the test tube between the end-effectors to evaluate the performance of the learned policy under variations in the starting position, specifically alterations of up to ±20 degrees and displacements of up to ±2 centimetres in the object's location.
§.§.§ Generalization to unseen objects
To evaluate the generalisability of the trained policy to unseen objects with a variation of radius, weight, or even materials (e.g., soft and fragile objects), a set of test experiments have been conducted using multiple objects of different radii ranging from 11.7mm to 28.6mm.
§.§.§ Robustness against external disturbance
We also validate the robustness of the trained policy against external disturbances. We applied random external pushes from the left, right, up, and down directions on the grasped object to test if the two arms can coordinate their end-effectors' poses to ensure the balance of the object.
§.§.§ Re-grasping capability
The re-grasping experiments are conducted to test whether the trained controller is contact-aware and can perceive the loss of contact with the object, in order to make the necessary adjustments according to the tactile feedback and react to grasping failures. After a successful normal grasp, we pushed the object away hard enough to break its static equilibrium, so that the object dropped down between the two end-effectors again.
§.§ Network Evaluation
Our proposed model is developed using PyTorch <cit.>. For the training of the Convolutional AutoEncoder (CAE), a dataset comprising 1500 tactile images obtained from the TacTip sensor during the demonstration was utilized. Representative reconstruction results on the validation set are shown in Fig. <ref>. The trained CAE exhibits satisfactory reconstruction quality, with a Mean Squared Error (MSE) loss of 0.015 and a Structural Similarity Index Measure (SSIM) of 0.934. The model training process, which involved 100 iterations, was completed in approximately two hours using an NVIDIA 1080 GPU. In the case of the Behaviour Cloning (BC) network, the model was trained for 1000 iterations and the training process took approximately 5 minutes.
§.§ Results of Grasping Tasks
The BC network trained on human demonstration data is deployed on a real dual-arm robot to verify its performance on the designed tasks. A set of snapshots showing grasping of a tube from the tube holder and from the table is shown in Fig. <ref>. In both grasping tasks, the learned control policy achieved a 100% success rate, even when the initial poses of the tube differ from the original pose in the demonstration.
The dual-arm robot can make prompt adjustments and enable stable dexterous grasping by learning from only one demonstration. In the process of lifting the object, the dual-arm robot achieves stable grasping by constantly twiddling the “fingertips” (tips of the TacTip sensors) and adjusting the object to the central position. The process of retrieving an object from the table and adjusting its pose to maintain balance requires very fine movements and interactions supported by rich tactile information; 6-axis force/torque information alone is not sufficient to discern the different contact situations in this scenario.
We evaluate the robustness of the learned policy against external disturbances. It can be seen from Fig. <ref> that the dual-arm robot can make a proper adjustment to adapt to pushes. Although the pose of the two-arm robot in contact with the object was changed each time while being pushed, the dual-arm robot can always fine-adjust the object reactively to the center of the fingertips (Tactip sensors), roll and move the object to the desired position. Compared with the manually programmed behavior, this serves as a feedback policy that has been successfully acquired from human dexterity skills, which enables the dual-arm robot to autonomously adjust the posture and ensure a stable grasp quickly. It is noteworthy that such active rolling adjustment has not been explicitly demonstrated by any separate trials, but rather, this behavior was successfully captured by the rich tactile data during one-shot demonstration of pick-lift grasping.
To examine the reaction in the presence of an unknown situation, i.e., grasping failure, the learned policy demonstrated contact-awareness of the falling object, i.e., it perceived the loss of contact from the tactile feedback and controlled the robot to restart the grasping process, a behaviour that was not explicitly programmed or demonstrated in the prior LfD data. The result of the re-grasping experiments in Fig. <ref> shows that the tactile-based control learned from human demonstrations is very effective in performing robotic dexterous bimanual manipulation tasks autonomously and quickly, without the need for explicit manual programming or complex planning.
The policy also achieves successful grasping of previously unseen objects, as shown in Fig. <ref>. Although the test objects vary in size and weight compared with the object used in the demonstration, the policy can still perform stable grasping. The experimental results show that the trained policy can generalize to unseen objects with similar cylindrical shapes but different sizes and weights.
§.§ Comparison Study
We conducted a comparison study to validate that successful grasping is achieved by the active use of tactile sensing. Besides training a BC network using the structure shown in Fig. <ref>, we also train two different BC networks for comparison. The first one has exactly the same BC network structure but with frozen input of the tactile images, meaning that the image input stays unchanged during both the training and testing. The second one has an FCN structure and uses the poses of two end-effectors (both positions and orientation) as the input to train the network.
The proposed BC network demonstrates convergence to a loss of 0.04 on the testing set. In contrast, the network employing frozen tactile information achieves convergence with a loss of 0.5, while the FCN converges to a loss value of 1.
These results prove that the effective integration of tactile information significantly enhances the convergence rate and leads to a reduced loss value in the final model.
We also compared all the grasping performances on the real dual-arm robot. As shown in Fig. <ref>, both BC network structures without using the tactile information failed in grasping the tube: robot arms failed to approach the object from its initial pose, and instead, they bypassed the object and moved towards the desired end-poses, showing no contact-awareness. The experimental results indicate that tactile feedback plays an essential role in providing contact information for initiating contacts, generating appropriate adjustments, lifting and retrieving to the desired target locations, enabling the dual-arm robot to perform very fine and dexterous contact-rich skills.
§.§ Interpretability
To explicitly show how much different modalities influence the entire operation, we use the saliency map method for calculating the weight distribution. The procedure for calculating this distribution is formulated as follows:
W_i = N(I)/(N(I) + N(J)), W_j = N(J)/(N(I) + N(J)),
where W_i and W_j are the weight distributions of the two modalities and N(·) represents the normalization process. I is the importance of the tactile information, calculated by summing the absolute values of the weights that the learned policy assigns to the tactile features. J is calculated in the same way by summing the absolute values of the weights assigned to the robot proprioceptive state features.
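A sketch of this weight-distribution computation, with the per-modality saliency values assumed to be available as arrays, is:

```python
import numpy as np

def modality_weights(saliency_tactile, saliency_proprio):
    I = np.abs(saliency_tactile).sum()   # total absolute weight on tactile features
    J = np.abs(saliency_proprio).sum()   # total absolute weight on proprioceptive features
    return I / (I + J), J / (I + J)      # W_i, W_j
```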
The comprehensive process of dexterous pinch grasping can be subdivided into four primary stages: pre-grasp, pressing, rolling and lifting, and stabilization. Each of these stages utilizes tactile feedback in a distinct manner.
In Fig. <ref>, the weight changes during the complete dexterous pinch grasping process are depicted. Initially, as the end-effector moves toward the objects without any contact deformation on the tactile sensor, the weight of the robot's proprioceptive state exceeds that of the tactile information. When the tactile sensor comes into contact with the desk and is prepared for a pre-grasp pose, the weight of the tactile information increases (stage A). As the end-effector advances towards the object and initiates contact, the weight attributed to the tactile information increases further, exceeding that of the proprioceptive state (stage B). During the roll and lift phase, the weight of the tactile information initially decreases, subsequently reaching equilibrium with the proprioceptive state (stage C). This indicates that during the lifting phase, the learned policy requires tactile information for successful in-hand manipulation and proprioceptive information for effective dual-arm coordination. Finally, upon successfully lifting the tube, the weight reverts to the tactile information, facilitating the stabilization of the tube (stage D).
§ CONCLUSION AND FUTURE WORK
The presented Learning from Demonstration (LfD) framework showed successful skill transfer from humans to robots with only a single trial of real robot data, through the use of rich tactile sensing at the robot's fingertips. Through our journey of exploring how to best utilize the new generation of compliant tactile sensors, we have developed encoding methods that effectively extract and capture high-dimensional contact sensing from soft tactile sensors, together with its fusion with proprioceptive feedback. The interesting outcome is the confirmation that it is possible to learn from real robot data directly, without the need for heavy computation or big data, if the right data are used.
Our comparison studies showed that without the use of tactile sensing, dexterous motor skills cannot be learned from one-shot demonstrations with traditional, rather limited robot sensing. Our proposed approach overcomes the traditional limitations of one-shot learning through the use of tactile and proprioceptive information for extracting useful information and mapping it into fine-grained motor skills. This approach is shown to be robust in the presence of external pushes and is able to re-grasp the object if it drops, a behaviour that was not shown in the one-shot demonstration and emerges as a natural outcome of sensorimotor skills learned through state-action mapping. The ability to learn from real data/hardware and a single demonstration is very attractive for bringing machine learning approaches to a wider range of real-world settings, where tasks can hardly be simulated and only a small amount of data is available.
Meanwhile, one apparent limitation is that one-shot learning is a priori trained on a specific task and object, and it can be generalised and robust only in neighbourhood situations within a category of similar tasks: generalization applies to new/unseen objects that are similar to the demonstrated object up to certain variations. The advantage of having only one demonstration comes with the trade-off that when a very different object needs to be grasped, at least one new demonstration is needed. Another limitation is that the robot's performance is based on blind grasping and re-grasping, and has not yet utilised external visual perception. In the future, integration of the current framework with stereo vision could extend the versatility and dexterity of object manipulation. Overall, our proposed LfD framework provides an attractive solution for learning from one demonstration with tactile sensing and supports broad real-world applications in robotics with data scarcity.
|
http://arxiv.org/abs/2307.04525v2 | 20230710124936 | Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans | [
"Mingze Yuan",
"Yingda Xia",
"Xin Chen",
"Jiawen Yao",
"Junli Wang",
"Mingyan Qiu",
"Hexin Dong",
"Jingren Zhou",
"Bin Dong",
"Le Lu",
"Li Zhang",
"Zaiyi Liu",
"Ling Zhang"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.LG"
] |
Yuan, M. et al.
Effective Opportunistic Gastric Cancer Screening
^1DAMO Academy, Alibaba Group
^2Peking University
^3Hupan Lab, 310023, Hangzhou, China
^4Guangdong Province People's Hospital
^5The First Affiliated Hospital of Zhejiang University
^6Peking University Changsha Institute for Computing and Digital Economy
Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans
Mingze Yuan^1,2,3,*, Yingda Xia^1,, Xin Chen^4,, Jiawen Yao^1,3, Junli Wang^5, Mingyan Qiu^1,3, Hexin Dong^1,2,3, Jingren Zhou^1, Bin Dong^2,6, Le Lu^1, Li Zhang^2, Zaiyi Liu^4,, Ling Zhang^1
August 12, 2023
===================================================================================================================================================================================================
Gastric cancer is the third leading cause of cancer-related mortality worldwide, but no guideline-recommended screening test exists. Existing methods can be invasive, expensive, and lack sensitivity to identify early-stage gastric cancer. In this study, we explore the feasibility of using a deep learning approach on non-contrast CT scans for gastric cancer detection. We propose a novel cluster-induced Mask Transformer that jointly segments the tumor and classifies abnormality in a multi-task manner. Our model incorporates learnable clusters that encode the texture and shape prototypes of gastric cancer, utilizing self- and cross-attention to interact with convolutional features. In our experiments, the proposed method achieves a sensitivity of 85.0% and specificity of 92.6% for detecting gastric tumors on a hold-out test set consisting of 100 patients with cancer and 148 normal. In comparison, two radiologists have an average sensitivity of 73.5% and specificity of 84.3%. We also obtain a specificity of 97.7% on an external test set with 903 normal cases. Our approach performs comparably to established state-of-the-art gastric cancer screening tools like blood testing and endoscopy, while also being more sensitive in detecting early-stage cancer. This demonstrates the potential of our approach as a novel, non-invasive, low-cost, and accurate method for opportunistic gastric cancer screening.
Work was done during an internship at DAMO Academy, Alibaba Group.
Corresponding authors: [email protected]; {wolfchenxin, zyliu}@163.com
§ INTRODUCTION
Gastric cancer (GC) is the third leading cause of cancer-related deaths worldwide <cit.>. The five-year survival rate for GC is approximately 33% <cit.>, which is mainly attributed to patients being diagnosed with advanced-stage disease harboring unresectable tumors. This is often due to the latent and nonspecific signs and symptoms of early-stage GC. However, patients with early-stage disease have a substantially higher five-year survival rate of around 72% <cit.>. Therefore, early detection of resectable/curable gastric cancers, preferably before the onset of symptoms, presents a promising strategy to reduce associated mortality. Unfortunately, current guidelines do not recommend any screening tests for GC <cit.>. While several screening tools have been developed, such as Barium-meal gastric photofluorography <cit.>, upper endoscopy <cit.>, and serum pepsinogen levels <cit.>, they are challenging to apply to the general population due to their invasiveness, moderate sensitivity/specificity, high cost, or side effects. Therefore, there is an urgent need for novel screening methods that are noninvasive, highly accurate, low-cost, and ready to distribute.
Non-contrast CT is a commonly used imaging protocol for various clinical purposes. It is a non-invasive, relatively low-cost, and safe procedure that exposes patients to less radiation dose and does not require the use of contrast injection that may cause serious side effects (compared to multi-phase contrast-enhanced CT). With recent advances in AI, opportunistic screening of diseases using non-contrast CT during routine clinical care performed for other clinical indications, such as lung and colorectal cancer screening, presents an attractive approach to early detect treatable and preventable diseases <cit.>. However, whether early detection of gastric cancer using non-contrast CT scans is possible remains unknown. This is because early-stage gastric tumors may only invade the mucosal and muscularis layers, which are difficult to identify without the help of stomach preparation and contrast injection. Additionally, the poor contrast between the tumor and normal stomach wall/tissues on non-contrast CT scans and various shape alterations of gastric cancer, further exacerbates this challenge.
In this paper, we propose a novel approach for detecting gastric cancer on non-contrast CT scans. Unlike the conventional “segmentation for classification" methods that directly employ segmentation networks, we developed a cluster-induced Mask Transformer that performs segmentation and global classification simultaneously. Given the high variability in shape and texture of gastric cancer, we encode these features into learnable clusters and utilize cluster analysis during inference. By incorporating self-attention layers for global context modeling, our model can leverage both local and global cues for accurate detection. In our experiments, the proposed approach outperforms nnUNet <cit.> by 0.032 in AUC, 5.0% in sensitivity, and 4.1% in specificity. These results demonstrate the potential of our approach for opportunistic screening of gastric cancer in asymptomatic patients using non-contrast CT scans.
§ RELATED WORK
Automated Cancer Detection. Researchers have explored automated tumor detection techniques on endoscopic <cit.>, pathological images <cit.>, and the prediction of cancer prognosis <cit.>. Recent developments in deep learning have significantly improved the segmentation of gastric tumors <cit.>, which is critical for their detection. However, our framework is specifically designed for non-contrast CT scans, which is beneficial for asymptomatic patients. While previous studies have successfully detected pancreatic <cit.> and esophageal <cit.> cancers on non-contrast CT, identifying gastric cancer presents a unique challenge due to its subtle texture changes, various shape alterations, and complex background, e.g., irregular gastric wall; liquid and contents in the stomach.
Mask Transformers. Recent studies have used Transformers for natural and medical image segmentation <cit.>. Mask Transformers <cit.> further enhance CNN-based backbones by incorporating stand-alone Transformer blocks, treating object queries in DETR <cit.> as memory-encoded queries for segmentation. CMT-Deeplab <cit.> and KMaX-Deeplab <cit.> have recently proposed interpreting the queries as clustering centers and adding regulatory constraints for learning the cluster representations of the queries. Mask Transformers are locally sensitive to image textures for precise segmentation and globally aware of organ-tumor morphology for recognition. Their cluster representations demonstrate a remarkable balance of intra-cluster similarity and inter-class discrepancy. Therefore, Mask Transformers are an ideal choice for an end-to-end joint segmentation and classification system for detecting gastric cancer.
§ METHODS
Problem Formulation. Given a non-contrast CT scan, cancer screening is a binary classification with two classes as ℒ={0, 1}, where 0 stands for “normal” and 1 for “GC” (gastric cancer). The entire dataset is denoted by 𝒮 = {(𝐗_i, 𝐘_i, 𝐏_i) | i=1,2,⋯,N}, where 𝐗_i is the i-th non-contrast CT volume, with 𝐘_i being the voxel-wise label map of the same size as 𝐗_i and K channels. Here, K=3 represents the background, stomach, and GC tumor. 𝐏_i ∈ℒ is the class label of the image, confirmed by pathology, radiology, or clinical records. In the testing phase, only 𝐗_i is given, and our goal is to predict a class label for 𝐗_i.
Knowledge Transfer from Contrast-Enhanced to Non-contrast CT. To address difficulties with tumor annotation on non-contrast CTs, the radiologists start by annotating a voxel-wise tumor mask on the contrast-enhanced CT, referring to clinical and endoscopy reports as needed. DEEDs <cit.> registration is then performed to align the contrast-enhanced CT with the non-contrast CT and the resulting deformation field is applied to the annotated mask. Any misaligned ones are revised manually. In this manner (Fig. <ref>d), a relatively coarse yet highly reliable tumor mask can be obtained for the non-contrast CT image.
Cluster-Induced Classification with Mask Transformers.
Segmentation for classification is widely used in tumor detection <cit.>. We first train a UNet <cit.> to segment the stomach and tumor regions using the masks from the previous step. This UNet considers local information and can only extract stomach ROIs well during testing. However, local textures are inadequate for accurate gastric tumor detection on non-contrast CTs, so we need a network of both local sensitivity to textures and global awareness of the organ-tumor morphology. Mask transformer <cit.> is a well-suited approach to boost the CNN backbone with stand-alone transformer blocks. Recent studies <cit.> suggest interpreting object queries as cluster centers, which naturally exhibit intra-cluster similarity and inter-class discrepancy. Inspired by this, we further develop a deep classification model on top of learnable cluster representations.
Specifically, given an image 𝐗∈ℝ^H × W × D, annotation 𝐘∈ℝ^K × HWD, and patient class 𝐏∈ℒ, our model consists of three components: 1) a CNN backbone to extract pixel-wise features 𝐅∈ℝ^C × HWD (Fig. <ref>a), 2) a transformer module (Fig. <ref>b), and 3) a multi-task cluster inference module (Fig. <ref>c). The transformer module gradually updates a set of randomly initialized object queries 𝐂∈ℝ^N × C into meaningful mask embedding vectors through cross-attention between the object queries and multi-scale pixel features,
𝐂←𝐂 + max_N (𝐐^c (𝐊^p)^T) 𝐕^p,
where c and p stand for query and pixel features, 𝐐^c, 𝐊^p, 𝐕^p represent linearly projected query, key, and value. We adopt cluster-wise argmax from KMax-DeepLab <cit.> to substitute spatial-wise softmax in the original settings.
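A minimal sketch of one such cluster-update step with the cluster-wise argmax (hard assignment) in place of the spatial softmax is given below; the projections and tensor shapes are simplified and do not reproduce the full kMaX-DeepLab block.

```python
import torch
import torch.nn.functional as F

def cluster_update(C, Fpix, Wq, Wk, Wv):
    # C: (N, D) cluster centers / object queries; Fpix: (D, HWD) pixel features
    Q = C @ Wq                       # (N, D)
    K = Fpix.T @ Wk                  # (HWD, D)
    V = Fpix.T @ Wv                  # (HWD, D)
    logits = Q @ K.T                 # (N, HWD) cluster-pixel affinities
    # cluster-wise argmax: each pixel contributes only to its best-matching cluster
    A = F.one_hot(logits.argmax(dim=0), num_classes=C.shape[0]).T.float()
    return C + A @ V                 # residual update of the cluster centers

N, D, HWD = 8, 128, 1024
C, Fpix = torch.randn(N, D), torch.randn(D, HWD)
Wq, Wk, Wv = (0.02 * torch.randn(D, D) for _ in range(3))
C_new = cluster_update(C, Fpix, Wq, Wk, Wv)
```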
We further interpret the object queries as cluster centers from a cluster analysis perspective. All the pixels in the convolutional feature map are assigned to different clusters based on these centers. The assignment of clusters (a.k.a. mask prediction) 𝐌∈ℝ^N × HWD is computed as the cluster-wise softmax function over the matrix product between the cluster centers 𝐂 and pixel-wise feature matrix 𝐅, i.e.,
𝐌 = Softmax_N(𝐑) = Softmax_N(𝐂𝐅).
The final segmentation logits 𝐙∈ℝ^K × HWD are obtained by aggregating the pixels within each cluster according to cluster-wise classification, which treats pixels within a cluster as a whole. The aggregation of pixels is achieved by 𝐙 = 𝐂_K 𝐌, where the cluster-wise classification 𝐂_K is represented by an MLP that projects the cluster centers 𝐂 to K channels (the number of segmentation classes).
The learned cluster centers possess high-level semantics with both inter-cluster discrepancy and intra-cluster similarity for effective classification. Rather than directly classifying the final feature map, we first generate the cluster-path feature vector by taking the channel-wise average of the cluster centers 𝐂 = 1/N∑_i=1^N𝐂_i ∈ℝ^C. Additionally, to enhance the consistency between the segmentation and classification outputs, we apply global max pooling to the cluster assignments 𝐑 to obtain the pixel-path feature vector 𝐑∈ℝ^N. This establishes a direct connection between classification features and segmentation predictions. Finally, we concatenate these two feature vectors to obtain the final feature and project it onto the classification prediction 𝐏∈ℝ^2 via a two-layer MLP.
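The multi-task cluster inference can be sketched as follows; the layer sizes follow the text where given (N=8 queries, a 128-dimensional MLP hidden layer), and the rest is illustrative.

```python
import torch
import torch.nn as nn

N, D, K, HWD = 8, 128, 3, 1024
C = torch.randn(1, N, D)                          # learned cluster centers
Fpix = torch.randn(1, D, HWD)                     # CNN pixel features

R = torch.bmm(C, Fpix)                            # (1, N, HWD) cluster affinities
M = R.softmax(dim=1)                              # cluster-wise softmax -> assignments
cluster_cls = nn.Linear(D, K)                     # C_K: cluster-wise classification
Z = torch.bmm(cluster_cls(C).transpose(1, 2), M)  # (1, K, HWD) segmentation logits

c_feat = C.mean(dim=1)                            # (1, D) cluster-path feature (channel-wise average)
p_feat = R.amax(dim=2)                            # (1, N) pixel-path feature (global max pooling)
head = nn.Sequential(nn.Linear(D + N, 128), nn.ReLU(), nn.Linear(128, 2))
P_hat = head(torch.cat([c_feat, p_feat], dim=1))  # (1, 2): normal vs. gastric cancer
```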
The overall training objective is formulated as,
ℒ = ℒ_seg(𝐙, 𝐘) + ℒ_cls(𝐏, 𝐏),
where the segmentation loss ℒ_seg(·,·) is a combination of Dice and cross entropy losses, and the classification loss ℒ_cls(·,·) is cross entropy loss.
§ EXPERIMENTS
§.§ Experimental setup
Dataset and Ground Truth. Our study analyzed a dataset of CT scans collected from Guangdong Province People's Hospital between 2018 and 2020, with 2,139 patients consisting of 787 gastric cancer and 1,352 normal cases. We used the latest patients in the second half of 2020 as a hold-out test set, resulting in a training set of 687 gastric cancer and 1,204 normal cases, and a test set of 100 gastric cancer and 148 normal cases. We randomly selected 20% of the training data as an internal validation set. To further evaluate specificity in a larger population, we collected an external test set of 903 normal cases from Shengjing Hospital. Cancer cases were confirmed through endoscopy (and pathology) reports, while normal cases were confirmed by radiology reports and a two-year follow-up. All patients underwent multi-phase CTs with a median spacing of 0.75 × 0.75 × 5.0 mm and an average size of (512, 512, 108) voxels. Tumors were annotated on the venous phase by an experienced radiologist specializing in gastric imaging using CTLabeler <cit.>, while the stomach was automatically annotated using a self-learning model <cit.>.
Implementation Details. We resampled each CT volume to the median spacing while normalizing it to have zero mean and unit variance. During training, we cropped the 3D bounding box of the stomach and added a small margin of (32, 32, 4). We used nnUNet <cit.> as the backbone, with four transformer decoders, each taking pixel features with output strides of 32, 16, 8, and 4. We set the number of object queries N to 8, with each having a dimension of 128, and included an eight-head self-attention layer in each block. The patch size used during training and inference is (192, 224, 40) voxel. We followed <cit.> to augment data. We trained the model with RAdam using a learning rate of 10^-4 and a (backbone) learning rate multiplier of 0.1 for 1000 epochs, with a frozen backbone of the pre-trained nnUNet <cit.> for the first 50 epochs. To enhance performance, we added deep supervision by aligning the cross-attention map with the final segmentation map, as per KMax-Deeplab <cit.>. The hidden layer dimension in the two-layer MLP is 128. We also trained a standard UNet <cit.> to localize the stomach region in the entire image in the testing phase.
Evaluation Metrics and Reader Study. For the binary classification, model performance is evaluated using area under ROC curve (AUC), sensitivity (Sens.), and specificity (Spec.). And successful localization of the tumors is considered when the overlap between the segmentation mask generated by the model and the ground truth is greater than 0.01, measured by the Dice score. A reader study was conducted with two experienced radiologists, one from Guangdong Province People's Hospital with 20 years of experience and the other from The First Affiliated Hospital of Zhejiang University with 9 years of experience in gastric imaging. The readers were given 248 non-contrast CT scans from the test set and asked to provide a binary decision for each scan, indicating whether the scan showed gastric cancer. No patient information or records were provided to the readers. Readers were informed that the dataset might contain more tumor cases than the standard prevalence observed in screening, but the proportion of case types was not disclosed. Readers used ITK-SNAP <cit.> to interpret the CT scans without any time constraints.
Compared Baselines. <ref> presents a comparative analysis of our proposed method with three baselines. The first two approaches belong to “Segmentation for classification" (S4C) <cit.>, using nnUNet <cit.> and TransUNet <cit.>. A case is classified as positive if the segmented tumor volume exceeds a threshold that maximizes the sum of sensitivity and specificity on the validation set. The third baseline (denoted as “nnUNet-Joint") integrates a CNN classification head into UNet <cit.> and trained end-to-end. We obtain the 95% confidence interval of AUC, sensitivity, and specificity values from 1000 bootstrap replicas of the test dataset for statistical analysis. For statistical significance, we conduct a DeLong test between two AUCs (ours vs. compared method) and a permutation test between two sensitivities or specificities (ours vs. compared method and radiologists).
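For instance, the bootstrap confidence interval of the AUC can be obtained with a short routine such as the following sketch (scikit-learn is assumed for the AUC computation).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, aucs = len(y_true), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)                # resample the test set with replacement
        if len(np.unique(y_true[idx])) < 2:        # a replica needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(np.mean(aucs)), (lo, hi)          # point estimate and 95% CI
```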
§.§ Results
Our Method Outperforms Baselines. Our method outperforms the three baselines (<ref>) in all metrics, particularly in AUC and sensitivity. The advantage of our approach is that it captures local and global information simultaneously by virtue of the unique architecture of the Mask Transformer. It also extracts high-level semantics from cluster representations, making it suitable for classification and facilitating a holistic decision-making process. Moreover, our method reaches a considerable specificity of 97.7% on the external test set, which is crucial in opportunistic screening to reduce false positives and unnecessary human workload.
AI Models Surpass Experienced Radiologists on Non-contrast CT Scans. As shown in <ref>, our AI model's ROC curve lies above that of the two experienced radiologists. The model achieves a sensitivity of 85.0% in detecting gastric cancer, which significantly exceeds the mean performance of the radiologists (73.5%) and also surpasses the best-performing radiologist (R2: 75.0%), while maintaining a high specificity. A visual example is presented in <ref>. This early-stage cancer (T1) is missed by both radiologists, whereas it is classified and localized precisely by our model.
Subgroup Analysis. In <ref>, we report the performance of patient-level detection and tumor-level localization stratified by tumor (T) stage. We compare our model's performance with that of both radiologists. The results show that our model performs better in detecting early stage tumors (T1, T2) and provides more precise tumor localization. Specifically, our model detects 60.0% (6/10) T1 cancers, and 77.8% (7/9) T2 cancers, surpassing the best performing expert (50% T1, 55.6% T2). Meanwhile, our model maintains a reliable detection rate and credible localization accuracy for T3 and T4 tumors (2 of 34 T3 tumors missed).
Comparison with Established Screening Tools. Our method surpasses or performs on par with established screening tools <cit.> in terms of sensitivity for gastric cancer detection at a similar specificity level with a relatively large testing patient size (n=1151 by integrating the internal and external test sets), as shown in <ref>. This finding sheds light on the opportunity to employ automated AI systems to screen gastric cancer using non-contrast CT scans.
§ CONCLUSION
We propose a novel Cluster-induced Mask Transformer for gastric cancer detection on non-contrast CT scans. Our approach outperforms strong baselines and experienced radiologists. Compared to other screening methods, such as blood tests, endoscopy, upper-gastrointestinal series, and ME-NBI, our approach is non-invasive, cost-effective, safe, and more accurate for detecting early-stage tumors. The robust performance of our approach demonstrates its potential for opportunistic screening of gastric cancer in the general population.
Acknowledgement
This work was supported by Alibaba Group through the Alibaba Research Intern Program. Bin Dong and Li Zhang were partly supported by NSFC 12090022 and 11831002, and the Clinical Medicine Plus X-Young Scholars Project of Peking University PKU2023LCXQ041.
|
http://arxiv.org/abs/2307.03999v1 | 20230708154620 | Transport properties in gapped graphene through magnetic barrier in a laser field | [
"Rachid El Aitouni",
"Miloud Mekkaoui",
"Ahmed Jellal",
"Michael Schreiber"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
[email protected]
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Canadian Quantum Research Center,
204-3002 32 Ave Vernon, BC V1T 2L7, Canada
Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany
We study the transport properties of Dirac fermions through gapped graphene through a magnetic barrier irradiated by a laser field oscillating in time. We use Floquet theory and the solution of Weber's differential equation to determine the energy spectrum corresponding to the three regions composing the system. The boundary conditions and the transfer matrix approach are employed to explicitly determine the transmission probabilities for multi-energy bands and the associated conductance. As an illustration, we focus only on the three first bands: the central band T_0 (zero photon exchange) and the two first side bands T_±1 (photon emission or absorption). It is found that the laser field activates the process of translation through photon exchange. Furthermore, we show that varying the incident angle and energy gap strongly affects the transmission process. The conductance increases when the number of electrons that cross the barrier increases, namely when there is a significant transmission.
78.67.Wj, 05.40.-a, 05.60.-k, 72.80.Vp
Keywords: Graphene, laser field, magnetic field, energy gap, transmission, Klein effect, conductance.
Transport properties in gapped graphene through magnetic barrier in a laser field
Michael Schreiber
August 12, 2023
===================================================================================
§ INTRODUCTION
Graphene is a two-dimensional carbon-based material that is one atom thick, and has atoms structured in a hexagonal shape like a honeycomb <cit.>. Graphene has incredible properties such as a very high mobility <cit.>, electrons moving with a speed 300 times lower than the speed of light, a good conductivity (minimal in the vicinity of the Dirac points, i.e., always the fermions pass), being flexible <cit.> and being very hard <cit.>.
Due to these properties, graphene is becoming the most used material in the technological industries <cit.>.
It is theoretically studied in the framework of the tight-binding model <cit.> and as a result, the energy spectrum shows a linear dispersion relation. In addition, the energy bands are in contact at six points <cit.>, called Dirac points K (K'), and form cones around them. It is surprising that electrons can pass from the valance band to the conduction band easily without any effect. This lack of excitation energy constitutes, in fact, an obstacle and a challenge for the fabrication of devices based on graphene. Consequently, to control the passage of electrons, an energy gap should be created between the two bands. Several studies have been reported on the subject to overcome such situations, for instance, either by deforming graphene to generate pseudo-magnetic fields that play the role of a real magnetic field <cit.> or by stacking one layer of graphene on the other <cit.>.
On the other hand, fermions confined in graphene under barriers, at normal incidence, can cross them even if their energy is less than the barrier heights, an effect known as the Klein paradox <cit.>.
For an oscillating potential over time, the energy spectrum acquires sub-bands, generating several transmission modes, and each mode corresponds to an energy band <cit.>.
Furthermore, a magnetic field applied to graphene generates a quantized energy spectrum known as Landau levels <cit.>. Combining these with the oscillating potential gives rise to a current density in the x- and y-directions <cit.>. When the graphene is irradiated by a time-varying laser field,
subbands emerge in the energy spectrum, and then the barrier exchanges photons with the fermions, generating infinite transmission modes <cit.>. As a consequence, the laser field suppresses the Klein effect, which makes it possible to control the passage of fermions.
We investigate how Dirac fermions can cross gapped graphene subjected to a magnetic barrier and irradiated by a laser field. Within the framework of Floquet theory <cit.> and by using the solution of Weber's differential equation <cit.>, we determine the eigenspinors corresponding to each region composing the system. These are matched at the boundaries and mapped into matrix form by applying the transfer matrix approach, finally yielding the transmission coefficients for all energy bands. Then, with the help of the current density, we derive the transmission probabilities for all modes.
The conductance is also calculated by integrating the total transmission over all incident angles.
Since it is not easy to treat all modes numerically, we limit our study to the first three bands, namely the central band (l=0) and the first two side bands (l=±1). We show that increasing the barrier width, or the incident energy, decreases the transmissions, which implies that the number of electrons crossing the barrier decreases and, consequently, the conductance decreases. On the other hand, when the intensity of the laser field increases, the transmissions decrease, whereas they increase when its frequency increases. When the barrier width increases, resonance peaks appear, and their number increases. Another set of results shows that the transmissions are almost zero when the incident energy is less than the energy gap, while the Klein paradox is still present.
This paper is organized as follows. In Sec. <ref>, we present the Hamiltonian describing our system and we will solve the eigenvalue equations to determine the wave functions in the three regions. We use the boundary conditions and the matrix formalism to express the transmission probabilities of each band, and we calculate the integral of this total transmission which makes it possible to determine the conductance at zero temperature in Sec. <ref>. We discuss our numerical results in Sec. <ref>. Finally, we conclude our work.
§ THEORETICAL MODEL
We study the behavior of Dirac fermions in a graphene sheet divided into three regions. Regions 1 and 3 contain only pristine graphene, whereas the gapped region 2 of width d is subjected to a perpendicular magnetic field and irradiated by a laser field, as shown in Fig. <ref>.
The present system can be described by the following Hamiltonian
H= v_F σ⃗·[p⃗-e/c(A⃗_L(t)+A⃗_B(x))]+Δσ_z
where σ_x,y,z are Pauli matrices, v_F≈ c/300 is the Fermi velocity, p⃗=-iħ(∂/∂ x,∂/∂ y) the momentum operator, e the electronic
charge. The vector potential
A⃗_L(t) of the laser field in the dipole approximation <cit.> is generated by an electric field of amplitude F and frequency ω defined as E(t)=Fsin(ω t), which is given by
A⃗_L(x,y,t)=(0,A_0cos(ω t),0)
with the laser field amplitude A_0=F/ω. For the magnetic field, the vector potential A⃗_B( x) is chosen in the Landau gauge B(0,x,0) and the continuity allows us to write
A⃗_B(x)= {[ 0, x<0; Bx, 0<x<d; Bd, x>d. ].
To determine the eigenspinors Ψ(x,y,t)=(Ψ_1, Ψ_2)^T in the three regions, we solve the eigenvalue equation, with T standing for transpose. In region 2 (0<x<d), we get
ΔΨ_1(x,y,t) + v_F[p_x-i(p_y-eF/ωcos(ω t)-eBx)]Ψ_2(x,y,t)=iħ∂/∂ tΨ_1(x,y,t)
v_F[p_x+i(p_y-eF/ωcos(ω t)-eBx)]Ψ_1(x,y,t)-ΔΨ_2(x,y,t)=iħ∂/∂ tΨ_2(x,y,t)
To proceed further, note that in the framework of the Floquet approximation <cit.>, the oscillation of the laser field over time produces several energy modes in the eigenspinors. As a result, we have
Ψ(x,y,t)=ψ(x,y,t)e^-iEt/ħ
where E is the Floquet quasi-energy, ψ(x,y,t) is a time-periodic function satisfying ψ(x,y,t+t_0)=ψ(x,y,t), and t_0 is the time period of the laser field. On the other hand, since the Hamiltonian is invariant along the y-direction, we write Ψ(x,y,t)=e^ik_yye^-iEt/ħφ(t)(ϕ_1(x),ϕ_2(x))^T, and therefore (<ref>,<ref>) become
v_F[-i∂/∂ x-i(k_y-F/ωcos(ω t)-Bx)]ϕ_2(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t-Δ)ϕ_1(x)φ(t)e^ik_yye^-iEt
v_F[-i∂/∂ x+i(k_y-F/ωcos(ω t)-Bx)]ϕ_1(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t+Δ)ϕ_2(x)φ(t)e^ik_yye^-iEt
in the system unit (ħ=e=c=1). It is straightforward to find
-iF/ωcos(ω t)φ(t)=∂/∂ tφ(t)
and therefore the temporal component is
φ(t)=e^-iαsin(ω t), with α=F/ω^2.
Now, we use the Jacobi–Anger identity e^-iαsin(ω t)=∑_-∞^+∞J_m(α)e^-imω t to write (<ref>,<ref>) as
∂ϕ_2(x)/∂ x-[x/ℓ_B^2-k_y+mϖ]ϕ_2(x)-i (ε+mϖ-δ)ϕ_1(x)=0
∂ϕ_1(x)/∂ x+[x/ℓ_B^2-k_y+mϖ]ϕ_1(x)-i (ε+mϖ+δ)ϕ_2(x)=0
where ℓ_B=1/√(B), ϖ=ω/v_F, F̃=F/v_F, ε=E/v_F and δ=Δ/v_F. From
(<ref>,<ref>), we obtain two new decoupled equations
∂^2ϕ_1(x)/∂ ^2 x+[1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_1(x) = 0
∂^2ϕ_2(x)/∂ ^2 x+[-1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_2(x) = 0.
These can be expressed in terms of the Weber differential equations <cit.> by making the change of variable X_m=√(2)(x/ℓ_B-k_yℓ_B+mϖℓ_B) and setting v_m=(εℓ_B+mϖℓ_B)^2-(δℓ_B)^2/2, to get
d^2ϕ_1,2(X_m)/dX_m^2+[±1/2-X^2_m/4 +v_m]ϕ_1,2(X_m)=0
having the following solutions
ϕ_1(X_m) = A_mD_v_m(X_m)+B_mD_v_m(-X_m)
ϕ_2(X_m) = -i√(2 )/εℓ_B+mϖℓ_B+δℓ_B[ A_mD_v_m+1(X_m)-B_mD_v_m+1(-X_m)]
where A_m, B_m are constant coefficients corresponding to mth side-band, and D_v_m is the parabolic cylinder function. Consequently, the eigenspinors in region 2 take the form
Ψ_2(x,y,t)=e^ik_yy∑_l=-∞^+∞[A_l[ Ξ^+_l(x); η^+_l(x) ]
+B_l[ Ξ^-_l(x); η^-_l(x) ]]∑_m=-∞^+∞J_m(α)e^-i(ε+(l+m)ω)t
and we have defined
Ξ^±(x) = D_v_m(± X_m)
η^±(x) = ∓i√(2)/εℓ_B+m ϖℓ_B+δℓ_B D_v_m+1(± X_m).
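These components can be evaluated numerically, for instance with SciPy's parabolic cylinder function D_v; the sketch below (in units ħ=e=c=1, with illustrative variable names) returns Ξ^± and η^± at a point x.

```python
import numpy as np
from scipy.special import pbdv

def region2_components(x, eps, m, ky, lB, varpi, delta):
    """Spinor components (Xi^+/-, eta^+/-) of region 2 at position x."""
    Xm = np.sqrt(2.0) * (x / lB - ky * lB + m * varpi * lB)
    vm = ((eps * lB + m * varpi * lB) ** 2 - (delta * lB) ** 2) / 2.0
    Dv_p, _ = pbdv(vm, Xm)           # D_v(+X_m); pbdv also returns the derivative
    Dv_m, _ = pbdv(vm, -Xm)          # D_v(-X_m)
    Dv1_p, _ = pbdv(vm + 1.0, Xm)    # D_{v+1}(+X_m)
    Dv1_m, _ = pbdv(vm + 1.0, -Xm)   # D_{v+1}(-X_m)
    pref = 1j * np.sqrt(2.0) / (eps * lB + m * varpi * lB + delta * lB)
    return (Dv_p, -pref * Dv1_p), (Dv_m, +pref * Dv1_m)   # (Xi^+, eta^+), (Xi^-, eta^-)
```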
In the region 1 (x<0) we have only pristine graphene, and then we can easily obtain the associated eigenspinors and eigenvalues <cit.>
Ψ_1(x,y,t)=e^ik_yy∑_m=-∞^+∞[δ_l,0[ 1; Λ_l ]e^ik_lx+∑_m,l=-∞^+∞r_l[ 1; -Λ^*_l ]e^-ik_lx]δ_m,le^-iv_F( ε+mϖ)t
ε+lϖ=s_l√(k^2_l+k^2_y)
where r_l is the amplitude of the reflected wave corresponding to band l, δ_m,l=J_m-l(α=0), s_l=sgn(v_Fε+lv_Fϖ),
ϕ_l=tan^-1k_y/k_l,
k_l=εcosϕ_l,
k_y=εsinϕ_l and
Λ_l=s_lk_l+ik_y/√(k^2_l+k^2_y)=s_le^iϕ_l.
We can establish
the relation between the incident angles
ϕ_l=arcsin(ε/ε+lϖsin(ϕ_0)).
In region 3 (x>d), the emergent angle ϕ'_l is different than the incident one ϕ_0 because of the continuity of the vector potential. The solution is <cit.>
Ψ_3(x,y,t)=e^ik_yy∑_m,l=-∞^+∞[t_l[ 1; Λ'_l ]e^ik'_lx+b_l[ 1; -Λ'^*_l ]e^-ik'_lx]δ_m,le^-iv_F(ε+mϖ)t
ε+lϖ =s_l√(k_l^'2+(k_y- d/ℓ_B^2)^2)
where t_l is the transmission amplitude of the transmitted wave corresponding to the band l, b_l is a null vector,
ϕ'_l=tan^-1ky- d/ℓ_B^2/k'_l,
k'_l=(ε+lϖ)cosϕ'_l,
k_y=(ε+lϖ)sinϕ'_l+d/ℓ_B^2
and
Λ'_l=s_lk'_l+i(k_y-d/ℓ_B^2)/√(k_l^'2+(k_y-d/ℓ_B^2)^2)=s_le^iϕ'_l.
From the conservation of the momentum k_y, we get the relation
ϕ'_l=arcsin(ε/ε+l ϖsinϕ_0- d/ℓ_B^2/ε+lϖ).
As we will see, the above results can be used to study the transport properties of gapped graphene scattered by a magnetic barrier and irradiated by a laser field. We obtain the transmissions associated with several energy bands and the corresponding conductance.
§ TRANSMISSION PROBABILITIES
We use the continuity of the eigenspinors at x=0 and x =d to
determine the transmission probabilities for the present system. This corresponds to the processes
Ψ_1(0,y,t)=Ψ_2(0,y,t) and Ψ_2(d,y,t)=Ψ_3(d,y,t),
which yields
δ_m,0+r_m=∑_l=-∞^+∞(A_lΞ^+_l(0)+B_lΞ^-_l(0))J_m-l(α)
δ_m,0Λ_m-r_mΛ_m^*=∑_l=-∞^+∞(A_lη^+_l(0)+B_lη^-_l(0))J_m-l(α)
t_me^ik'_md+b_me^-ik'_md=∑_l=-∞^+∞(A_lΞ^+_l(d)+B_lΞ^-_l(d))J_m-l(α)
t_mΛ^'_me^ik'_md-b_mΛ_m^'*e^-ik'_md=∑_l=-∞^+∞(A_lη^+_l(d)+B_lη^-_l(d))J_m-l(α).
We have four equations, but each one has an infinite number of modes, and to solve the problem, we use the transfer matrix approach. As a result, we get
[ Υ_1; Υ'_1 ]
=[ ℕ_1,1 ℕ_1,2; ℕ_2,1 ℕ_2,2 ][ Υ_2; Υ'_2 ]=ℕ[ Υ_2; Υ'_2 ]
with
ℕ=[ 𝕀 𝕀; Γ^+ Γ^-; ]^-1[ 𝕏^+_0 𝕏^-_0; ℝ^+_0 ℝ^-_0 ][ 𝕏^+_d 𝕏^-_d; ℝ^+_d ℝ^-_d ]^-1[ 𝕀 𝕀; Γ'^+ Γ'^-; ][ 𝕂^+ 𝕆; 𝕆 𝕂^-; ]
and
Γ^±=±δ_m,lΛ_l^±1, Γ'^±=±δ_m,lΛ_l^'±1, 𝕏^±_z=Ξ_l^±(z)J_m-l(α), ℝ^±_z=η_l^±(z)J_m-l(α), 𝕂^±=e^± ik'_lLδ_m,l
where 𝕆 is the zero matrix, 𝕀 is the unit matrix and z={0,d}.
In this case, we take into account Dirac fermions traveling from left to right with energy E, and from (<ref>), we obtain
Υ_2=ℕ^-1_1,1Υ_1
with the Kronecker coefficient δ_0,l=Υ_1 and Υ_2=t_l.
Because m and l range from -∞ to +∞, the aforementioned transfer matrix is of infinite order and is challenging to handle. For this reason, we replace the infinite series with a finite set of terms ranging from -N to N, provided that N≥F/ω^2 <cit.>, resulting in
t_-N+k=ℕ'[k+1,N+1]
where ℕ'=ℕ^-1_11, k=0, 1, 2,⋯ N.
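Once the truncated block matrix ℕ has been assembled from the boundary conditions, the transmission amplitudes follow from a single matrix inversion, as in the sketch below (the construction of ℕ itself is omitted and the indexing is zero-based).

```python
import numpy as np

def transmission_amplitudes(N11, n_side):
    """t_l for l = -N..N from the upper-left block N_{1,1} of the transfer matrix."""
    dim = 2 * n_side + 1                       # number of retained side bands
    Nprime = np.linalg.inv(N11[:dim, :dim])    # N' = (N_{1,1})^{-1}
    Upsilon1 = np.zeros(dim, dtype=complex)
    Upsilon1[n_side] = 1.0                     # delta_{0,l}: incidence only in the central band
    return Nprime @ Upsilon1                   # equals the column N'[:, n_side], i.e. t_{-N+k}
```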
To simplify, we limit our studies only to the central band and the first two side bands l=0,± 1 of energy E± hω having the following transmission coefficients
t_-1=ℕ'[1,2],
t_0=ℕ'[2,2],
t_1=ℕ'[3,2].
On the other hand, the current density is determined from the continuity equation; its expression is J=e v_F Ψ^*σ_xΨ. Therefore, the incident, reflected, and transmitted current densities are given by
J_inc,0=ev_F(Λ_0+Λ^*_0)
J_tra,l=ev_Ft^*_lt_l(Λ'_l+Λ'^*_l)
J_ref,l=ev_Fr^*_lr_l(Λ_l+Λ^*_l)
The relation between the current density and the transmission probability is expressed as T_l=J_tra,l/J_inc,0. Then, after some algebra, we get
T_l=cosϕ'_l/cosϕ_0|t_l|^2
and the total transmission probability is given by summing up over all modes
T=∑_lT_l.
By definition, the conductance at zero temperature is the average of the fermion flux over half the Fermi surface <cit.>; equivalently, it is the integral of the total transmission T over k_y <cit.>, given by
G=G_0/2π∫_-k_y^max^k_y^maxT dk_y
where G_0 is the conductance unit.
Using the relation between the transverse wave vector k_y and the incident angle ϕ_0, we express G as
G=G_0/2π∫_-π/2^π/2T cosϕ_0dϕ_0.
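Numerically, this angular integral can be evaluated with a simple quadrature, for instance as in the following sketch (in units of G_0, with a placeholder transmission function):

```python
import numpy as np

def conductance(total_T, n_angles=401):
    """total_T(phi0) returns the total transmission at incident angle phi0."""
    phi = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    integrand = np.array([total_T(p) for p in phi]) * np.cos(phi)
    return np.trapz(integrand, phi) / (2.0 * np.pi)

# e.g. perfect (Klein) tunnelling, T = 1 at all angles, gives G = 1/pi in units of G_0
G = conductance(lambda p: 1.0)
```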
To investigate and underline the basic features of the present system, we numerically analyze the transport properties based on the transmission channels and the associated conductance in the following section.
§ RESULTS AND DISCUSSION
We numerically study the transmission probabilities of Dirac fermions in gapped graphene through a magnetic barrier in a laser field. Recall that the oscillation of the barrier over time generates several energy bands, which give rise to transmission channels. Due to the difficulty of analyzing all modes, we will limit ourselves to the first three bands, where the central band T_0 corresponds to zero photon exchange and the first two side bands T_±1 to absorption or emission of photons.
Fig. <ref> shows the transmission probability as a function of the energy εℓ_B for different incident angles. There is transmission if the condition ε >(d/ℓ_B^2-l ϖ)/(1+sinϕ_0)
is satisfied; in other words, this quantity plays the role of an effective mass <cit.>. For normal incidence, as depicted in Fig. <ref>, the transmission is zero for ε<δ. Due to this condition, resonance peaks appear with decreasing amplitudes along the εℓ_B-axis, that is to say the Fabry-Pérot resonances disappear, in agreement with previous results <cit.>. The transmission process with zero photon exchange, T_0, is dominant, and therefore the majority of the electrons cross the barrier without photon exchange.
Fig. <ref> shows the behavior of T_0 for different incident angles. As a result, in Fig. <ref> it increases sharply away from normal incidence. On the other hand, for the transmissions with photon exchange, as shown in Figs. <ref> and <ref>, there is a decrease at large energy.
We can conclude that the behavior of T_0 changes if we move away from the normal incidence and that
the photon exchange process is suppressed.
Fig. <ref> displays the transmission probability as a function of εℓ_B under a suitable choice of physical parameters. Transmission appears when the condition ε >δ is satisfied. As clearly seen in Fig. <ref>, T_0 dominates over the first two side bands and is almost equal to the total transmission, as found in <cit.>. Now, for different values of F̃ℓ_B^2, we plot T_0 in Fig. <ref>. We see that T_0 decreases as F̃ℓ_B^2 increases, because increasing the laser field amplitude suppresses T_0, as we have already seen <cit.>.
Fig. <ref> displays the effect of field frequency on transmission: increasing the frequency increases T_0.
Fig. <ref> is drawn for different values of the barrier width d/ℓ_B. As the width increases, resonance peaks appear, their number grows, and the oscillations get closer together. A similar result was obtained in our previous work <cit.>.
Fig. <ref> presents the transmission probabilities as a function of the energy gap δℓ_B.
We show in Fig. <ref> the total transmission probability (magenta line) and those with or without photon exchange.
We distinguish two interesting cases: first, for δℓ_B<6, the Klein effect is very clear and transmission with photon exchange is almost zero, that means that the majority of electrons cross the barrier without photon exchange. Second, for δℓ_B > 6, the transmissions decrease in an oscillatory way until they become zero when δℓ_B is close to εℓ_B=15.
Fig. <ref> displays the total transmission for different values of F̃ℓ_B, and we see that the increase of F̃ℓ_B suppresses the transmission, as has been found in <cit.>. The Klein effect is clear for very small values of F̃ℓ_B and δℓ_B. For F̃ℓ_B=0.3, the Klein effect is observed only for δℓ_B<6, then the transmission decreases in an oscillatory way until the oscillations vanish. If we increase F̃ℓ_B the transmission keeps the same shape with decreasing amplitude, which is in agreement with the results of <cit.>.
Fig. <ref> is similar to the previous one, but here we vary ϖℓ_B. For ϖℓ_B=1 the Klein effect persists up to δℓ_B≈5, beyond which the transmission decreases in an oscillatory way towards zero as δℓ_B approaches εℓ_B. On the other hand, there is total reflection if the incident energy is lower than the energy gap.
If the frequency decreases, the transmission retains the same shape, but the amplitude decreases. Fig. <ref> shows the effect of the barrier width on the total transmission. We observe that resonance peaks appear when the width increases. For very small widths, the Klein effect is found up to δℓ_B ≈ 6, and then the transmission decreases towards zero. Increasing the width increases the number of oscillations and their amplitudes, as already seen in <cit.>. We summarize that increasing the amplitude of the field suppresses transmission inside the barrier. On the other hand, increasing the frequency increases the transmission, and increasing the width increases the number of oscillations and their amplitude.
Fig. <ref> shows the transmission probabilities as a function of the barrier width d/ℓ_B. In Fig. <ref> we observe that all the transmissions have sinusoidal behavior. The total transmission oscillates in the vicinity of one (Klein paradox). T_0 is predominant and its oscillation amplitude decreases when the width increases. The transmissions with photon exchange also oscillate, but with phase shift, which increases along the d/ℓ_B-axis. For certain values of d/ℓ_B, the transmissions with or without photon exchange are equal.
Fig. <ref> displays transmission with photon emission for different values of the transverse wave vector k_yℓ_B. There is always a sinusoidal behavior with increasing amplitude along the d/ℓ_B-axis. When k_yℓ_B increases, the width of the oscillations decreases.
In Fig. <ref>, we show the effect of the laser field frequency on transmission. We notice that the amplitude and period of oscillations decrease as the frequency increases. Thus, the increase in frequency suppresses the transmissions with photon exchanges.
We vary the intensity of the laser field F̃ℓ_B^2 in Fig. <ref> and observe that the transmission is oscillating with the same period. We notice that the increase in F̃ℓ_B^2 causes an increase in transmission with photon exchange and decreases that of the central band.
In Fig. <ref>, we plot the conductance as a function of the energy εℓ_B. Choosing different values of width d/ℓ_B, Fig. <ref> reveals that the conductance varies almost exponentially for lower values of d/ℓ_B, and oscillates when d/ℓ_B increases.
Fig. <ref> shows the effect of intensity F̃ℓ_B^2 of the laser field on conductance. We observe that conductance increases as F̃ℓ_B^2 increases, but it vanishes when ε→δ.
Fig. <ref> is plotted for different values of frequency ϖℓ_B. We notice that the conductance tends to zero when εℓ_B is close to δℓ_B and the oscillations increase as ϖℓ_B increases.
In Fig. <ref>, we vary δℓ_B to observe that the conductance is always almost zero when ε tends towards δ.
Finally, to increase the conductance, it is necessary to increase the number of electrons crossing the barrier, thereby increasing the transmission. As we have seen, the transmission increases when the incident energy increases or the barrier width decreases, as well as when the intensity of the laser field decreases or its frequency increases.
In Figure <ref>, the conductance is represented as a function of the energy gap δℓ_B. By choosing three values of incident energy in Fig. <ref>, we show that the conductance is maximum at the beginning, then decreases in an oscillatory way towards zero near the value δ =ε. The amplitude increases when incident energy increases as well, exhibiting a behavior similar to transmission as we have seen before.
Fig. <ref> shows the effect of width d/ℓ_B on the conductance. There are always resonance peaks that appear around δℓ_B=3, the number of oscillations increases with the increase of d/ℓ_B. In Figs. <ref> and <ref>, we visualize the effect of the laser field parameters on the conductance. They show that the amplitude of the conductance increases with the increase in frequency, and decreases when the amplitude increases.
§ CONCLUSION
We studied the effect of a gapped magnetic barrier irradiated by a laser field, generated by an electric field of amplitude F and frequency ω, on Dirac fermions in graphene. We started by solving the eigenvalue equations to determine the spinors in the three regions of the gapped sheet. Using Floquet theory and the solution of Weber's differential equation, we determined the eigenspinors of each region as combinations of parabolic cylinder functions. We then imposed the boundary conditions, which give four equations, each containing infinitely many modes. To solve them, we used the transfer matrix approach, which yields a matrix of infinite order that cannot be handled directly. For simplicity, we focused only on the first three bands: the central band (l=0) and the first two side bands (l=±1). Lastly, we integrated the total transmission probability to obtain the conductance at zero temperature.
When a barrier oscillates in time, it generates several energy bands through photon exchange between the barrier and the Dirac fermions. Here we found that the transmission process with zero photon exchange is much more important than the processes with photon exchange. Klein's paradox is still present, but it can be suppressed. While the original Klein effect is only observed at normal incidence (ϕ_0=0), in this work the effect is also observed at non-normal incidence. When the barrier width is increased, the transmission decreases until it disappears at a critical width, and the same happens for the conductance. On the other hand, the transmission increases when the incident energy increases. However, to have transmission, it is necessary to satisfy the condition that binds the incident energy to the other barrier parameters: ε >d/ℓ_B^2-l ϖ/1+sinϕ_0. Since the conductance is non-zero only when the transmission is non-zero, this last condition must always be satisfied.
9
Novoselov2004
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
Novoselov2005
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
mobil2
S. Morozov, K. Novoselov, M. Katsnelson, F. Schedin, D. Elias, J. Jaszczak, and A. Geim, Phys. Rev. Lett. 100, 016602
(2008).
mobil
K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun.
146, 351 (2008).
flix
C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321, 385 (2008).
Beenakker2008
C. W. Beenakker, Rev. Mod. Phys. 80, 1337 (2008).
Bhattacharjee2006
S. Bhattacharjee and K. Sengupta, Phys. Rev. Lett. 97, 217001 (2006).
Bunch2005
J. S. Bunch, Y. Yaish, M. Brink, K. Bolotin, and P. L. McEuen,
Nano Lett. 5, 2887 (2005).
Berger2004
C. Berger, Z. M. Song, T. B. Li, X. B. Li, A. Y. Ogbazghi, R. Feng,
Z. T. Dai, A. N. Marchenkov, E. H. Conrad, P. N. First, and W. A. de Heer, J.
Phys. Chem. B 108, 19912 (2004).
Tight
S. Reich, J. Maultzsch, C. Thomsen, and P. Ordejon, Phys. Rev. B 66, 035412 (2002).
Castro2009
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
propr
N. M. R. Peres, J. Phys.: Condens. Matter 21, 323201 (2009).
def1
F. Guinea, M. I. Katsnelson, and A. K. Geim, Nat. Phys. 6, 30 (2010).
def4
G.-X. Ni, Y. Zheng, S. Bae, H. R. Kim, A. Pachoud, Y. S. Kim, C.-L. Tan, D. Im, J.-H. Ahn, B. H. Hong, and B. Ozyilmaz, ACS Nano 6, 1158 (2012).
scatring
S. Latil and L. Henrard, Phys. Rev. Lett. 97, 036803 (2006).
Morozov2005
S. V. Morozov, K. S. Novoselov, F. Schedin, D. Jiang, A. A. Firsov, and A. K. Geim, Phys. Rev. B 72, 201401 (2005).
klien2
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. 2, 620 (2006).
jellal2014
A. Jellal, M. Mekkaoui, E. B. Choubabi, and H. Bahlouli, Eur. Phys. J. B 87, 123 (2014).
conmagnetic
A. De Martino, L. Dell’Anna, and R. Egger, Phys. Rev. Lett. 98, 066802 (2007).
Landau
F. Xu and L. Zhang, Chin. Phys. B 28, 117403 (2019).
Magnetic2011
M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
confinementmagnetic
N. Myoung and G. Ihm, Physica E 42, 70 (2009).
Elaitouni2022
R. El Aitouni and A. Jellal, Phys. Lett. A 447, 128288 (2022).
biswas2013
R. Biswas and C. Sinha, Appl. Phys. 114, 183706 (2013).
biswas2012
C. Sinha and R. Biswas, Appl. Phys. Lett. 100, 183107 (2012).
laser2
M. Ahsan Zeb, K. Sabeeh, and M. Tahir, Phys. Rev. B 78, 165420 (2008).
rachid2022
R. El Aitouni, M. Mekkaoui, A. Jellal, Ann. Phys. (Berlin) 535, 2200630 (2023).
floquetappr
Z. Gu, H. A. Fertig, D. P. Arovas, and A. Auerbach, Phys. Rev. Lett. 107, 216601 (2011).
grad
I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, Inc. New York, 1980).
approx
R. Loudon, The Quantum Theory of Light (3rd ed, Oxford University Press, New York, 2000).
math
F. W. J. Olver, J. Res. Nat. Bur. Standards Sect. B 63, 131 (1959).
conduct1
X. Chen and J. W. Tao, Appl. Phys. Lett. 94, 262102 (2009).
conduct2
M. R. Masir, P. Vasilopoulos, and F. M. Peeters, Phys.
Rev. B 79, 035409 (2009).
Biswas2021
R. Biswas and C. Sinha, Sci. Rep. 11, 2881 (2021).
biswas2016
R. Biswas, S. Maitty, and C. Sinha, Physica E. 84, 235 (2016).
Mekkoui2021
M. Mekkaoui, A. Jellal, and H. Bahlouli, Solid State Commun.
358, 114981 (2022).
Sergy2011
S. E. Savel’ev and A. S. Alexandrov, Phys. Rev. B 84, 035428 (2011).
MEKKAOUI2018
M. Mekkaoui, R. El Kinani, and A. Jellal, Mater. Res. Expr. 6, 085013 (2019).
Makkoui2015
H. Chnafa, M. Mekkaoui, A. Jellal, and A. Bahaoui, Physica E 148, 115645 (2023).
|
http://arxiv.org/abs/2307.03874v1 | 20230708013842 | The geometry of the Thurston metric: a survey | [
"Huiping Pan",
"Weixu Su"
] | math.GT | [
"math.GT",
"math.CV",
"math.DG",
"32G15, 30F45, 30F60"
] |
New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump
Jaehoon Yu
==============================================================================================
This chapter is a survey about the Thurston metric on the Teichmüller space.
The central issue is the constructions of extremal Lipschitz maps between hyperbolic surfaces. We review several constructions, including the original work of Thurston.
The coarse geometry and isometry rigidity of the Thurston metric, as well as the relation between the Thurston metric and the Thurston compactification, are discussed.
Some recent generalizations and developments of the Thurston metric are sketched.
Mathematical classification (2010)
32G15; 30F45; 30F60.
|
http://arxiv.org/abs/2307.06139v2 | 20230709045849 | Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations | [
"Sheref Nasereldin",
"Kayll Lake"
] | gr-qc | [
"gr-qc"
] |
[email protected]
[email protected]
Department of Physics, Queen's University, Kingston, Ontario, Canada, K7L3N6
This paper explores a complete representation of the Vaidya model, a radial flux of radiation in the eikonal approximation, used for modeling various phenomena in both classical and semi-classical General Relativity and Astrophysics. The majority of the applications of the Vaidya model have been formulated in an incomplete representation. A complete representation is obtained here by direct integration of the Einstein field equations. We present the methodology to obtain this complete representation, and its utility in the modeling of general relativistic phenomena.
Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates:
I. Integration of the Field Equations
Kayll Lake
August 12, 2023
===================================================================================================================
§ INTRODUCTION
The Schwarzschild metric <cit.> has been used to study the exterior geometry of spherical stellar objects undergoing gravitational collapse <cit.>, where it is assumed that the radiation emitted by the object is insignificant. However, during the advanced stages of stellar collapse, these objects are expected to emit a considerable amount of mass in the form of radiation, see for example <cit.>. Therefore, the exterior of a collapsing stellar object is no longer empty, and the Schwarzschild vacuum metric is no longer suitable for its description. The Vaidya metric <cit.> is more suitable for this situation and has been widely used to classically study the geometry outside [With suitable boundary conditions, such as Israel's conditions, see <cit.>, on the spherical surface, this exterior solution can be matched to some proper interior solution, see for example <cit.> and <cit.>.] radiating spherical stellar objects, see for example <cit.>.
Thus, one can treat this dynamical mass distribution with its envelope of radiation as an isolated system existing in an otherwise vacuum, asymptotically flat spacetime described by the Schwarzschild vacuum metric.
The “self-similar" Vaidya metric has been used to construct spacetimes that exhibit a visible strong singularity, demonstrating the potential for the failure of the Penrose “Cosmic censorship hypothesis" <cit.>. This conjecture states that singularities arising from regular initial conditions do not have any causal influence on spacetime. If the hypothesis were to fail, it would be a major flaw in the theory of general relativity and would make it impossible to predict the events in any region of spacetime containing a singularity, as new information could emerge in an unpredictable manner. The growth of curvature along non-spacelike geodesics has been examined (see for example, <cit.>), and the visible singularity in self-similar spacetimes has been classified as strong. Furthermore, Lake and Zannias <cit.> showed that the emergence of naked singularities in these spacetimes is due to the self-similarity assumption, rather than spherical symmetry.
On the semi-classical level, the Vaidya metric has been utilized to explore black hole evaporation, possibly due to Hawking's radiation <cit.>, (see for example <cit.>). Furthermore, the Vaidya metric in the double-null coordinates (the mass function must be linear) <cit.> has been used to study the quasi-normal modes (QNM) as a model that supposedly will give deeper insights on the gravitational excitations of black holes (see for example <cit.>).
Despite the fact that the majority of applications were structured with the Vaidya metric written in the Eddington-Finkelstein-Like (EFL) coordinates, these coordinates have been known for some time to be incomplete (see for example <cit.>), leaving the Vaidya manifold not maximally covered. Thus, to ensure the accuracy of all applications, it is required to construct a complete set of coordinates and thoroughly assess the impact of this set of coordinates. This is the primary objective of this paper.
We organize this paper as follows. In the next section, we review the EFL coordinates and provide a proof of incompleteness of this set of coordinates, which is the main motivation for any subsequent coordinate representation. In Section <ref>, we review the use of Israel coordinates <cit.> to write the Vaidya metric <cit.>, and discuss why the derivation of these coordinates resulted in unsatisfactory results when attempting to obtain maximal coverings of the Vaidya manifold. The main results of this paper are outlined in Section <ref>, in which we introduce an algorithmic method to obtain Israel coordinates by direct integration of the field equations, without relying on any coordinate transformation. In Section <ref>, we present necessary physical restrictions that must be imposed on the flux of radiation. In Section <ref>, we provide a general derivation regarding the location of the apparent horizon in the Vaidya manifold. It is emphasized that the location of the apparent horizon is established before introducing any expressions to the characterizing functions. In Section <ref>, we demonstrate that our construction can be used to obtain both EFL and Israel coordinates by choosing different expressions for the functions that arise from integrating the field equations; such functions, as well as the coefficient of the cross term in the general metric that is presented, shall be referred to as the “characterizing functions". In Section <ref>, we briefly calculate some of the invariants of the Vaidya metric in Israel coordinates. The last section highlights the main results of the paper and discusses the possible extensions of the current work.
§ THE EFL COORDINATES
The Vaidya metric, in the EFL coordinates, is a spherically symmetric solution to the Einstein field equations with the energy momentum tensor approximated in “the eikonal form" <cit.>, which expresses a unidirectional radial flow of unpolarized radiation,
T_αβ = Φ k_αk_β= ϵ/4π r^2dm(u)/duk_αk_β,
where ϵ = ± 1 and k_α = δ^u_α is tangent to radial inward or outward-going null geodesics. The spacetime line element in the EFL coordinates takes the form
ds^2 = -(1-2m(u)/r)du^2+2ϵ dudr+r^2dΩ^2_2,
where dΩ^2_2 = dθ^2+sin^2θ dϕ^2 is the metric of a unit 2-sphere. For ϵ = +1, the metric expresses inward-directed radiation (towards smaller values of the radius r) with a monotonically increasing m as a function of the “advanced time" coordinate u. If ϵ = -1, the metric is that of outgoing radiation (towards larger values of the radius r) with m being monotonically decreasing as a function of the “retarded time" coordinate u. However, it is conventional, as stated in <cit.>, to assign u as the retarded time and v as the advanced time. Furthermore, it is worthwhile to note that the quantity Φ, usually called as the energy density of the radiation flux, does not have a direct operational meaning because the tangent null vector k_α does not have a natural normalization. Thus, it is preferable, see also <cit.>, to consider the following quantity:
ρ = Φ (k_αu^α)^2,
which defines the energy density as measured locally by an observer with a timelike 4-velocity u^α.
§.§ Incompleteness of the EFL Coordinates
In this subsection, we demonstrate why the EFL coordinates (u,r,θ,ϕ) do not provide a complete description of the Vaidya manifold. The incompleteness of these coordinates is the primary motivation for the search for new coordinates in which the manifold is complete, allowing radial null geodesics to continue moving to infinite values of their affine parameter or be terminated upon encountering a gravitational singularity. The incompleteness of the coordinates (u,r,θ,ϕ) becomes evident when studying the behavior of the ingoing radial null geodesics, emanating from the past null infinity ℐ^- or from the past singularity surface r=0, for the case (0<m(∞)<∞). It was suggested, but not proven in <cit.>, that the geodesics appear to approach the future event horizon (FEH) surface, r=2m(∞), as u →∞, though they actually reach it for finite values of their affine parameter, see Fig. <ref>.
To support these insightful claims, we present a more articulated proof. We draw attention to the fact that, whereas Fig. <ref> is only valid for outgoing radiation, the forthcoming proof is valid for both ingoing and outgoing radiation. Let us consider the two branches of radial null curves, for which ds^2=0 and θ = ϕ = const. The first branch is given by u=const (red), and the second branch (blue) is given by the solution of the following ordinary differential equation [This differential equation is a special case of Chini's equation <cit.>, which does not have a general solution.],
du/dr =2 ϵ r/r-2m(u).
We assume the following to hold
0 < m(±∞)< ∞,
the question now arises as to whether the affine parameter λ remains finite as r → 2m(±∞) along the second branch. In order to answer this question we write the second branch (<ref>) as a system of 1^st order ODEs
ṙ = r-2m(u)/λ,
u̇ = 2ϵ r/λ,
where an overdot indicates d/dλ, so that differentiation of the previous system with respect to λ produces the geodesic equations of (<ref>)
r̈ = - 4 ϵ m^'(u)r/λ^2,
ü = - 4ϵ m(u)/λ^2,
where use has been made of both (<ref>) and (<ref>). Now let us assume that λ→±∞ as r → 2m(±∞) then by virtue of (<ref>) and (<ref>) we obtain
lim_λ→±∞u̇= lim_λ→±∞ü = 0,
which is not possible as this changes the second geodesic branch into the first [Note that the first branch is characterized by u=const, which entails u̇ = ü = 0.]. Therefore, our assumption is wrong, and we conclude that λ along the second branch remains finite as r → 2m(±∞). If we write this value of λ as λ_0, we obtain
lim_λ→λ_0ṙ = 0,
and
lim_λ→λ_0u̇ = 4ϵ m(±∞)/λ_0.
Evidently, the last equation remains finite because the mass function m(±∞) is assumed finite from the beginning. By virtue of (<ref>), we conclude that the region (r<2m(±∞)) is inaccessible in the EFL coordinates. Therefore, an extension is necessary.
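This behavior can also be illustrated numerically. The sketch below integrates the second branch du/dr = -2r/(r-2m(u)) for an assumed, purely illustrative outgoing mass function with m(∞)=1; the retarded time u grows without bound as r approaches 2m(∞), showing how the EFL chart pushes the surface r=2m(∞) to u=∞. Both the mass function and the numerical values are assumptions made for illustration only.
    import numpy as np
    from scipy.integrate import solve_ivp

    def m(u):
        # Assumed, purely illustrative outgoing mass function: monotonically
        # decreasing in the retarded time u, with m(+infinity) = 1.
        return 1.0 + 0.5 / (1.0 + np.exp(u))

    def dudr(r, y):
        # Second branch of radial null curves, epsilon = -1 (outgoing radiation).
        return [-2.0 * r / (r - 2.0 * m(y[0]))]

    r_eval = [2.1, 2.01, 2.001, 2.0001]          # radii approaching 2 m(infinity) = 2
    sol = solve_ivp(dudr, (4.0, r_eval[-1]), [0.0], t_eval=r_eval,
                    rtol=1e-8, atol=1e-10)

    for r_val, u_val in zip(sol.t, sol.y[0]):
        print(f"r = {r_val:.4f}   u = {u_val:.1f}")
    # u grows without bound as r -> 2 m(infinity): the surface r = 2 m(infinity)
    # is pushed to u = infinity, so the EFL chart never covers r <= 2 m(infinity).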
§ ISRAEL COORDINATES
In order to overcome the “incompleteness problem" of the EFL coordinates, Israel <cit.> introduced what he described as the analytic completion of the Vaidya manifold (<ref>). In Israel coordinates (u,w,θ,ϕ), the Vaidya line element reads
ds^2 = (w^2/2m(u)r(u,w)+4m^'(u)/U(u)) du^2+2dudw+r(u,w)^2dΩ^2_2,
where U(u) = ∫_0^udu/4m(u), r(u,w) = U(u)w+2m(u), and the function m(u) is always positive. Notice that (<ref>) suffers a true singularity at r(u,w) = 0, see (<ref>), and at u=0, if m'(u) does not vanish there, as explained below. To avoid any possible confusion about what is to be said, let us label the EFL retarded coordinate, u, as t. This then shows that (<ref>) is reduced to the outgoing Vaidya metric, (<ref>) with u=t and ϵ=-1, by the transformation
t(u) = -∫_0^udu/U(u),
regular for (u>0, t<∞). Apart from the cumbersome nature of Israel coordinates, the Vaidya metric in Israel coordinates (<ref>) does not adequately represent both the internal and external fields as long as the mass function m(u) is only defined for u ≥ 0. Since u=0 corresponds to t=+∞ (t(u)∝ -log U(u)), it is impossible to extend the line element to the range (u<0) via a coordinate transformation, as it would require knowledge of the mass function m(t>∞), i.e., beyond FEH. Hence, we believe that the “maximal" extension of the Vaidya manifold, as given by the line element (<ref>), is imprecise. It is worth noting that there was an attempt <cit.> to extend the Vaidya metric in terms of Israel coordinates. However, this approach faced the same problem as the original Israel extension of relying on coordinate transformations and the necessity of knowing the mass function m(u) beyond the FEH in advance. It is also worthy of notice that although Israel coordinates have obvious advantages over the EFL coordinates, the Vaidya metric in Israel coordinates has not gained enough attention. To our knowledge, the metric has only been used once (see <cit.>) to study the complete gravitational collapse of a radiating shell of matter. Prior to the attempt given in <cit.>, all the work done to investigate the gravitational collapse in the presence of radiation was not complete. That is, the gravitational collapse was not followed beyond the event horizon because the Vaidya manifold in the EFL coordinates only describes the external field around a collapsing radiating object.
§ GENERAL COORDINATE CONSTRUCTION
Consider the following general spherically symmetric metric expressed in the coordinates (u,w,θ,ϕ) <cit.>
ds^2 = f(u,w) du^2+2h(u,w) du dw + r(u,w)^2dΩ^2_2,
where r(u,w) measures the area of the 2-sphere u=w=const. The energy momentum tensor is once more taken to be of the eikonal form,
T^αβ = Φ k^αk^β,
where k^α = δ^α_w is a radial null vector and the quantity Φ(k^αu_α)^2 is the energy flux, measured by an observer with tangent u_α. Straightforward calculations <cit.> show that the only non-zero component of the Einstein tensor is G^w w from which Φ can be directly obtained. If we take radial null trajectories with four-tangent k^α to be radial null geodesics affinely parametrized by w, i.e.,
k^β∇_βk^α = 0,
this yields
∂ h(u,w)/∂ w = 0.
Thus, the function h(u,w) reduces to a function of only u, h(u,w)≡ h(u). While we will limit ourselves to the choice h(u) = ±1, we will keep the function as is for potential future use.
§.§ Solving the Einstein Field Equations
First [This approach of solving the field equations was first introduced in <cit.> to express the Schwarzschild-de Sitter vacuum metric in Israel coordinates, and was later utilized in <cit.> to obtain the Vaidya metric in the same set of coordinates.], we benefit from the vanishing of the G^uu component to obtain
∂ ^2/∂ w^2 r(u,w)= 0.
This leads, by integration, for a general expression [We also note that this expression can be deduced by assuming that (<ref>) has a vanishing second Ricci invariant <cit.>. This result is particularly important because it is directly obtained from the geometry of the spacetime before considering the matter content.], to r(u,w)
r(u,w) = f_1(u)w+f_2(u).
In the sequel all the functions f_n (u) are assumed suitably smooth [ All the functions are assumed to be at least C^2.]. Second, by solving G^θθ = 0, with the aid of (<ref>), we obtain
r(u,w)∂ ^2/∂ w^2 f(u,w) + 2f_1(u)∂/∂ wf(u,w) - 4h(u)d /duf_1(u) =0.
Integrating (<ref>) yields
f(u,w)= 2 f_1^'(u) h(u) f_2(u)^2-f_1(u)f_3(u)/f_1(u)^2r(u,w)
+2 f_1^'(u) h(u)w/f_1(u)+f_4(u),
where (') denotes ordinary differentiation with respect to the coordinate u. By solving G^uw = 0, we find that f_4(u) is given by
f_4(u) = h(u)(2f_1(u)f_2^'(u)-h(u))/f_1(u)^2,
where use has been made of (<ref>) and (<ref>). By virtue of (<ref>), (<ref>), and (<ref>) the only non-zero component of the Einstein tensor can be given as
G^ww = 1/χ(u)(2h(u)^2f_2(u)^2f_1^”(u)+4h(u)^2f_2(u)f_1^'(u)f_2^'(u)
-h(u)f_3(u)f_1^'(u)-2h(u)f_2(u)^2 h^'(u)f_1^'(u)
-h(u) f_1(u)f_3^'(u)+2f_1(u)f_3(u)h^'(u) ),
where χ(u,w)=h(u)^4f_1(u)r(u,w)^2. The G^ww is conveniently expressed in the following way. First define the Hernandez-Misner mass <cit.>
m ≡r(u,w)^3/2 R_θϕ^ θϕ,
where R is the Riemann tensor. By calculating R_θϕ^ θϕ for (<ref>) and making the necessary simplifications, (<ref>) can be given in terms of the characterizing functions f_n(u) as
m = m(u) = 2h(u)f_2(u)^2f_1^'(u)-f_1(u)f_3(u)/2h(u)^2,
where the mass function must always remain positive-valued over its domain. As a result, G^ww can be expressed in a more succinct form,
G^ww = 2 m^'(u)/h(u)f_1(u)r(u,w)^2 = 8 πΦ.
Similarly, a more convenient expression of the function f(u,w) can be obtained with the aid of (<ref>), (<ref>), (<ref>), and (<ref>)
f(u,w) = 𝒜(u) r(u,w)^2 +ℬ(u) r(u,w)+𝒞(u)/f_1(u)^2r(u,w),
where
𝒜(u) = 2h(u)f_1^'(u),
ℬ(u) = 2h(u)f_1(u)f_2^'(u)-2h(u)f_2(u)f_1^'(u)-h(u)^2,
𝒞(u) = 2h(u)^2m(u).
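The equivalence of the two forms of f(u,w) above can be checked symbolically; a minimal SymPy sketch, with h, f_1, f_2, f_3 left as arbitrary functions of u, is given below.
    import sympy as sp

    u, w = sp.symbols('u w')
    h, f1, f2, f3 = (sp.Function(n)(u) for n in ('h', 'f1', 'f2', 'f3'))

    r = f1*w + f2                                     # r(u,w) = f_1(u) w + f_2(u)
    m = (2*h*f2**2*sp.diff(f1, u) - f1*f3)/(2*h**2)   # Hernandez-Misner mass
    f4 = h*(2*f1*sp.diff(f2, u) - h)/f1**2

    # f(u,w) as obtained from the direct integration
    f_direct = ((2*sp.diff(f1, u)*h*f2**2 - f1*f3)/(f1**2*r)
                + 2*sp.diff(f1, u)*h*w/f1 + f4)

    # Compact form in terms of A(u), B(u), C(u)
    A = 2*h*sp.diff(f1, u)
    B = 2*h*f1*sp.diff(f2, u) - 2*h*f2*sp.diff(f1, u) - h**2
    C = 2*h**2*m
    f_compact = (A*r**2 + B*r + C)/(f1**2*r)

    print(sp.simplify(f_direct - f_compact))          # expected output: 0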
§ PHYSICAL RESTRICTIONS ON THE CHOICE OF THE CHARACTERIZING FUNCTIONS
The first restriction that we impose, using (<ref>), is given by the following inequality
2h(u)f_2(u)^2f_1^'(u)>f_1(u)f_3(u).
This is necessary to ensure that the mass function, m(u), is always positive.
The second restriction is that the measured radiation flux is a positive quantity,
Φ (k^αu_α)^2> 0.
Substituting (<ref>) in (<ref>) and simplifying, we obtain
m^'(u)/h(u)f_1(u)>0,
which dictates that the signs of m^'(u) and h(u)f_1(u) have to be identical. As our attention is confined to classical matter fields (radiation), a minimum requirement is that this matter distribution must satisfy the Weak Energy Condition (WEC). This requirement implies, with the aid of (<ref>), the following stipulations on the different forms of radiation, summarized in Table <ref>.
Table. <ref> clearly illustrates that both ingoing and outgoing radiation can be obtained without changing the sign of the function h(u). However, as will be seen shortly, the direction of radiation in the EFL coordinates is dictated by the sign of the function h(u).
§ THE APPARENT HORIZON AND THE EVENT HORIZON
We begin this section by providing a general derivation of the location of the apparent horizon of (<ref>). To this end, let us examine the congruence of radial null trajectories
characterized by the four-tangent ℓ^α,
ℓ^α = δ^α_u-f(u,w)/2h(u)δ^α_w,
This tangent, however, does not satisfy the geodesic equation in affine-parameter form. This is evident from the equations ℓ^α∇_αℓ^u = κℓ^u and ℓ^α∇_αℓ^w = κℓ^w, where κ = κ(u,w) is called the inaffinity. The geodesic equations are:
ℓ^α∇_αℓ^u = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(1) = κℓ^u,
and
ℓ^α∇_αℓ^w = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(-f(u,w)/2h(u)) = κℓ^w,
with the inaffinity κ given by
κ = 2d/duh(u)-∂/∂ wf(u,w)/2h(u).
The associated expansion scalar Θ^(ℓ) of this non affinley parametrized congruence of radial null geodesics, see <cit.> for the definition of the expansion in this case, is given by
Θ^(ℓ) = ∇_αℓ^α-κ,
= -r(u,w) ∂/∂ wf (u,w)-2 r(u,w) d/d uh (u)/2 h (u) r(u,w)
- 2 f (u,w) ∂/∂ wr (u,w)-4 h (u) ∂/∂ ur (u,w)/2 h (u) r(u,w)-κ,
= - 1/h(u)r(u,w)( f(u,w) ∂/∂ wr(u,w)-2h(u)∂/∂ ur(u,w)).
The apparent horizon is characterized by Θ^(ℓ) = 0, and thus by virtue of (<ref>) we obtain the following condition
2h(u)∂ r(u,w)/∂ u = f(u,w) ∂ r(u,w)/∂ w.
We substitute (<ref>) in (<ref>), which yields
2h(u) ( f_1^'(u)w+f_2^'(u)) = f(u,w)f_1(u).
With the aid of (<ref>) the previous equation takes the form
0 = 2f_1^'(u)r(u,w)^2+2h(u)m(u)
-( 2w f_1(u)f_1^'(u)+2f_2(u)f_1^'(u)+h(u))r(u,w).
We can use (<ref>) once more to reduce the last equation to
-h(u)( r(u,w)-2m(u) ) = 0,
which immediately gives the sought-after result:
r(u,w) = 2m(u).
It is thus established that the apparent horizon is located at r=2m(u).
We also note that the previous result is established before making any choices for the characterizing functions, f_n(u). Determining the location of the event horizon in the Vaidya metric is not as straightforward as locating the apparent horizon. In fact, the entire future history of the metric, as specified by the functions f(u,w) and h(u), must be predetermined in order to identify the null generators of the event horizon <cit.>.
However, we may generically define the future (past) event horizon as a causal boundary for the timelike geodesics terminating at future (past) timelike infinity, i^+(i^-) [For the definitions of these infinities we refer to <cit.>.].
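As a consistency check of the algebra above, the following SymPy sketch (using the same symbolic setup as before) verifies that the vanishing-expansion condition 2h ∂_u r = f ∂_w r reduces identically to r = 2m(u).
    import sympy as sp

    u, w = sp.symbols('u w')
    h, f1, f2, f3 = (sp.Function(n)(u) for n in ('h', 'f1', 'f2', 'f3'))

    r = f1*w + f2
    m = (2*h*f2**2*sp.diff(f1, u) - f1*f3)/(2*h**2)
    A = 2*h*sp.diff(f1, u)
    B = 2*h*f1*sp.diff(f2, u) - 2*h*f2*sp.diff(f1, u) - h**2
    f = (A*r**2 + B*r + 2*h**2*m)/(f1**2*r)

    # Vanishing expansion: 2 h dr/du = f dr/dw.  Multiplying by f1 r, the condition
    # should reduce to h^2 (r - 2m), i.e. the apparent horizon sits at r = 2 m(u).
    condition = 2*h*sp.diff(r, u) - f*sp.diff(r, w)
    print(sp.simplify(condition*f1*r - h**2*(r - 2*m)))   # expected output: 0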
§ SPECIFIC COORDINATE REPRESENTATIONS OF THE VAIDYA METRIC
In this section, we demonstrate that we can obtain various coordinate representations of the Vaidya metric by selecting different expressions for the characterizing functions, h(u) and f_n(u). Additionally, we emphasize that the meaning of the coordinate u is dependent on the choice of the characterizing functions, and thus the coordinate u in the EFL coordinates has a different interpretation to that in Israel coordinates.
§.§ The Vaidya Metric in the EFL Coordinates
Let us choose the characterizing functions such that h(u) = ± 1, f_1(u) = 1, and f_2(u) = 0, then we obtain w = r with the help of (<ref>). Furthermore, we get f_3(u) = -2m(u) from (<ref>). Substituting these values in (<ref>) yields
f(u,r) = -r+2m(u)/r,
and thus the metric (<ref>) becomes
ds^2 = -(1-2m(u)/r)du^2± 2dudr+r^2dΩ_2^2,
with G^ww = ± 2m^'(u)/r^2. It is clear that, with the help of Table <ref>, we can obtain h(u) = -1 for the outgoing radiation version of the Vaidya metric, where the coordinate u is a retarded time. Similarly, selecting h(u) = +1 yields the ingoing radiation version of the Vaidya metric, with u as an advanced time.
§.§ The Vaidya Metric in Israel Coordinates
In this subsection, we explore how by introducing different choices to the functions f_n(u), we obtain Israel coordinates. Let us consider the following choices: f_1(u) = U(u), f_2(u) = 2 M(u), and f_3(u) = 0. It follows from (<ref>) that for M(u)=m(u) (which is a choice),
U^'(u) = h(u)/4m(u).
Thus, with the aid of the first fundamental theorem of calculus we write
U(u) = ∫_0^uh(x)/4m(x) dx.
However, since our choices for the function h(u) will be confined to either +1 or -1, we set h(u)=h=±1. Consequently, the expression (<ref>) takes the form
U(u) = h∫_0^u1/4m(x) dx.
It follows that the spacetime line element (<ref>) can be written as
ds^2 = (w^2/2m(u)r+4hm^'(u)/U(u)) du^2+2hdudw+r^2dΩ^2_2,
where r is no longer a coordinate; it is now a function r=r(u,w) = U(u)w+2m(u) and G^ww = 2hm^'(u)/U(u)r(u,w)^2. Here, u is a null coordinate and (<ref>) describes both outgoing and ingoing radiation. It is interesting to note that the presence of h is not necessary for (<ref>), as demonstrated in <cit.>, particularly when m^'(u)=0. It is noteworthy that, in accordance with (<ref>), the apparent horizon is now located at w=0.
There is some ambiguity regarding the sign of u which appears in the definition of the function U(u) (<ref>); for example, in <cit.>, u is always positive, whereas in <cit.> u can be either positive or negative. We shall resolve this ambiguity and demonstrate when u can be negative or positive. To this end, recall that
U^'(u) = h/4m(u),
which means that the sign of U^'(u) is solely determined by the sign of h. Also, with the aid of the WEC, (<ref>), and (<ref>), we have
m^'(u)/hU(u) = m^'(u)/∫_0^udx/4m(x) > 0,
where in the last equation we have taken h^2 = 1. Hence, for m^'(u)>0 the integral must be positive (u in the integral must be positive) and for m^'(u)<0 the integral has to be negative (u in the integral must be negative). Consequently, we have seen that the sign of u in the integral is not always positive like in <cit.>, and the dichotomy in the function U(u) based on the sign of u is explained in a more articulated way. We have summarized all the choices we have considered thus far in Table <ref>.
Finally, we introduce a restriction on the w coordinate corresponding to the surface r(u,w) = 0, the physical singularity, see below. Since r(u,w) = U(u)w+2m(u), for r(u,w) = 0 we obtain
w = -2m(u)/U(u)≡ w_0(u),
and so w_0 > 0 for U(u)<0 and w_0 < 0 for U(u)>0. It turns out that this is exactly the case when we study the radial null geodesics in the proposed maximal extensions of the Vaidya metric <cit.>.
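To make the construction concrete, the short sketch below evaluates U(u) and the singular surface w_0(u) for an assumed, purely illustrative mass function with m'(u)>0 and h=+1; the apparent horizon then sits at w=0. Both the mass function and the numerical values are assumptions, not part of the model above.
    import numpy as np
    from scipy.integrate import quad

    # Assumed, purely illustrative mass function for ingoing radiation:
    # m'(u) > 0 with u >= 0, and we take h = +1.
    m0, u0, h = 1.0, 10.0, 1.0
    def m(u):
        return m0 * (1.0 + u / u0)

    def U(u):
        val, _ = quad(lambda x: 1.0 / (4.0 * m(x)), 0.0, u)   # U(u) = h * int_0^u dx / (4 m(x))
        return h * val

    for u in (1.0, 5.0, 20.0):
        Uu = U(u)
        w0 = -2.0 * m(u) / Uu          # singular surface r(u, w0) = 0
        print(f"u = {u:5.1f}   U(u) = {Uu:.4f}   r = 0 at w0 = {w0:.2f}   apparent horizon at w = 0")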
§ INVARIANTS
Up to syzygies <cit.>, we find that the only non-differential non-vanishing invariant of (<ref>) is the first Weyl invariant,
w1R ≡1/8C_αβγδC^αβγδ
= 3/2h(u)^4r(u,w)^6(f_1(u)f_3(u)-2h(u)f_1(u)'f_2(u)^2),
which reduces to the following expression in Israel coordinates,
w1R ≡1/8C_αβγδC^αβγδ = 6m(u)^2/r(u,w)^6,
where C_αβγδ is the Weyl tensor. However, as (<ref>) makes clear, it would be informative to have invariant information for m^'(u). This is obtained by way of the Bach tensor <cit.>, see also <cit.>. First define
A_αβδ = ∇^γC_αγβδ,
where ∇^γ denotes contravariant derivative. The Bach tensor is given by
B_αβ = ∇^δ A_αβδ+R^γδC_αγβδ/2.
Since the Bach tensor is trace-free, the first Bach invariant is
B≡ B_αβB^αβ.
In the present case we find, with the aid of (<ref>), that
B = (4U(u)m^'(u)/r(u,w)^4)^2.
Nevertheless, the preceding result does not provide the desired invariant definition of m'(u) due to its dependence on the functions r(u,w) and U(u).
§ SUMMARY AND DISCUSSION
We have examined the construction of Israel coordinates for the Vaidya metric and have simplified the problem to finding appropriate expressions for the characterizing functions that arise from integrating the field equations. This construction is systematic and does not necessitate any coordinate transformation, which provides us with the chance to spot potential extensions of the Vaidya manifold by introducing distinct expressions for the characterizing functions, f_n(u). Nonetheless, the main focus of this paper is to reconstruct Israel coordinates for the Vaidya metric. By utilizing the WEC, we have understood the role of the function h(u) in the Vaidya metric. Although the sign of the h(u) is paramount in determining the direction of radiation in the EFL coordinates, we have demonstrated that this is not the case for Israel coordinates. That is, both ingoing and outgoing radiation can be achieved with h=+1 or h=-1. However, the impact of changing the sign of the function h(u) will be further investigated when we discuss the completeness of Israel coordinates in <cit.>. The next step, see <cit.>, is to introduce explicit mass functions as candidates for the three possible Vaidya models and assess the completeness of Israel coordinates in relation to these mass functions.
§ ACKNOWLEDGEMENT
This work was supported (in part) by a grant from the Natural Sciences and Engineering Research Council of Canada (to KL).
|
http://arxiv.org/abs/2307.04892v1 | 20230710203027 | Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction | [
"El Mehdi Chouham",
"Jessica López Espejel",
"Mahaman Sanoussi Yahaya Alassan",
"Walid Dahhane",
"El Hassane Ettifouri"
] | cs.CL | [
"cs.CL"
] |
Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction
El Mehdi Chouham^1, [email protected]
Jessica López Espejel^1 (ORCID 0000-0001-6285-0770), [email protected]
Mahaman Sanoussi Yahaya Alassan^1, [email protected]
Walid Dahhane^1 (ORCID 0000-0001-5387-3380), [email protected]
El Hassane Ettifouri^1 (ORCID 0000-0001-5299-9053), [email protected]
^1 Novelis Research and Innovation Lab, 207 Rue de Bercy, 75012 Paris, France
[1]Corresponding author
The field of programming has a diversity of paradigms that are used according to the working framework. While current neural code generation methods are able to learn and generate code directly from text, we believe that this approach is not optimal for certain code tasks, particularly the generation of classes in an object-oriented project. Specifically, we use natural language processing techniques to extract structured information from requirements descriptions, in order to automate the generation of CRUD (Create, Read, Update, Delete) class code. To facilitate this process, we introduce a pipeline for extracting entity and relation information, as well as a representation called an "Entity Tree" to model this information. We also create a dataset to evaluate the effectiveness of our approach.
* We have presented Entity Identifier, a pipeline method for transforming requirements specifications in natural language into a model diagram that incorporates Stanford Scene Graph Parsing.
* We create a dataset and define evaluation metrics to assess the effectiveness of our approach and facilitate future research in this area.
* Our method achieves high scores on simple requirement statements, but struggles in handling complex Wikipedia paragraphs.
Keywords: Entity Relation Extraction, Entity Tree, Natural Language Processing
August 12, 2023
===================
§ INTRODUCTION
In Natural Language Processing (NLP), many tasks focus on converting input text into a more readily understandable form for humans. Examples of such tasks include translation <cit.>, summarization <cit.>, question answering <cit.>, text rephrasing <cit.>, and named entity recognition <cit.>. However, only a relatively small number of tasks, such as sequence and token classification, may be primarily useful for machines. We believe that the development of automatic methods to model natural language for machine use has the potential to enable a wide range of future applications. These models may allow for the creation of systems that can understand and interpret human language and leverage this knowledge for downstream use, potentially leading to new and innovative applications in various industries and sectors.
For instance, in the field of text-to-code generation, current AI models such as CodeBERT <cit.>, CodeT5 <cit.>, JaCoText <cit.>, and Codex <cit.> have shown promising results, but still struggle with lack of optimization, inconsistency, and syntax issues. The latter is a major limitation, as syntactically correct code is necessary for a machine to execute it. Additionally, practical considerations such as code structure, clarity, and readability are important aspects of software development that these models have not yet been able to fully address. We believe that incorporating a structuring phase for natural language inputs could significantly advance the capabilities of tasks like text-to-code generation.
In this work, we work on automating the job of requirement analysis by extracting key information that can be directly used to build UML (United Modeling Language) Class diagram and generate class code for CRUD (Create, Read, Update, Delete) operations from natural language requirement specifications. Our primary goal is exploring the benefits of structuring natural language for code generation. Specifically, we aim to extract entities and relationship triplets, including their characteristics, in a manner similar to the joint entity and relation extraction task <cit.>. In addition, we aim to extract common unnamed entities, data types, class attributes, and database relations between entities, in order to build a rich representation, we refer to as an Entity Tree. This representation can be useful for downstream tasks. Figure 1 illustrates an example of the Entity Tree representation.
In development workflows, diagrams such as UML and MCD (Merise Conceptual Diagram) are extensively used for engineering and visualization purposes. These diagrams enable collaborators to analyze, refine, and plan different aspects of a project more effectively. Our approach not only grants direct access to these diagram types but also simplifies the generation of class code and database structure using template heuristics. This liberates developers from repetitive tasks, allowing them to concentrate on more challenging ones. The key advantage of this approach in code generation task is the creation of more dependable code with fewer syntax errors.
In this work, we present a parsing framework that constructs an Entity Tree (ET) from natural language text. To the best of our knowledge, there are currently no datasets available for common entity and relation extraction. Therefore, we create a dataset and define evaluation metrics to assess the effectiveness of our approach and facilitate future research in this area. Our method achieves notable results in the overall extraction of triplets.
The rest of the paper is organized as follows: In section <ref>, we present the related work. In section <ref>, we provide a comprehensive formulation of the problem. In Section <ref>, we describe in details our proposed method. In Section <ref>, we present the experimental protocol and the proposed dataset. In Section <ref>, we provide obtained results and discuss them. Finally, We conclude in Section <ref> and present and some future directions.
§ RELATED WORK
There is a significant amount of prior research on system requirements parsing, but most existing models do not anticipate the use of an entirely natural language input. <cit.> proposed a framework for requirement parsing that utilizes a set of word processing tools and typing rules. This approach involves the user in typing the input, with the system providing error messages and highlighting patterns to enforce adherence to what the paper refers to as a controlled natural language for requirements' specifications. This approach relies on the user to follow specific typing rules and may not be suitable for users who want a fully natural user experience while typing the input.
In <cit.>, a greater emphasis is placed on the use of natural language processing and ontology techniques with OpenNLP to build the parsing architecture. This approach allows for more flexibility for the user, but the parsers rely on transforming complex sentences into simple sequences rather than consistently representing the information. Additionally, the approach does not utilize coreference resolution, meaning that splitting sentences can result in loss of linking information. Furthermore, the use of rules to ignore certain concept words later in the pipeline may lead to error propagation and confusion. Overall, this approach may require more user intervention to ensure accurate parsing of the input.
Similarly, <cit.> uses a suite of natural language processing tools to extract UML diagrams directly. Their approach involves the use of heuristic rules for identifying classes, relationships, and objects. However, our research posits that it is more effective to extract entities and relationships jointly, as the semantics of a sentence are dependent on the combination of both. Furthermore, while the aforementioned tool is presented as software specifically designed for the creation of UML diagrams, our research aims to output the extracted elements in a more widely-used representation.
The approach presented in <cit.> addresses the issue of information loss through the use of a coreference resolver in the pipeline. The pipeline utilizes both dependency parsing and syntactic parsing during the parsing phase, and employs a set of rules to construct a Domain Model in a subsequent phase. However, the approach has some limitations, including the fact that coreference resolution is performed after segmentation, and the reliance on noun phrases and verbs instead of a compact representation for building the domain model.
In our research, we focus on a fundamental task of common entity and relation extraction. Unlike traditional entity and relation extraction, our task aims to extract all entities and relationships in a text, not just named entities. We use the extracted sets of entities and relationships to build a model of the system. We believe that this approach can highlight the potential utility of jointly extracting unnamed entities and relations as a distinct task within the field of knowledge extraction. After optimizing our pipeline for this task, we then build our domain model equivalent referred to as the Entity Tree.
§ PROBLEM FORMULATION
Our goal is to design a mechanism that translates a natural language description of system requirements into a structured representation called an Entity Tree (as shown in Figure <ref>). This representation will enable us to easily generate UML, MCD, CRUD diagrams, and database structures.
Let S be a sequence and E={e_1, e_2, ..., e_|S|} the set of its entities' components. An Entity Component e_1 is defined by 4 elements:
* Attributes - set of descriptive labels representing the entity.
* Extends - the entity it extends, if it has a parent entity.
* Name - the entity name.
* Relationships - set of relationships with other entities.
We define a relationship as the link between two entities. Here we refer to the entity containing the list of relationships as the subject e_s, the target entity as the object e_o and the predicate as the relation r. We explicitly define a relationship as a dictionary of the following eight items:
* Exact number - the exact number of the subject entities e_s, if mentioned.
* Primitivity - it indicates if the e_o is primitive. An entity is primitive if it is a literal such as a number, a date, a boolean, etc. In other words, when it is not a Class instance. Values which are considered primitive can be redefined by the user.
* Min - cardinality on the e_s side.
* Max - cardinality on the e_o side.
* Object attributes - set of labels describing e_o.
* Object name - the e_o name.
* Relationship type - the predicate r.
* Type - the type notion refers to the parent class of e_o if it has one, otherwise it assigns a corresponding primitive type.
The Entity Tree models the entities and relationships of a sentence in a very rich way as it describes every item and its relation with other items. Its data structure serves as a representation of entities and their relationships within a sentence or piece of text. Its utilization allows for a more intricate and comprehensive depiction of sentence structure, surpassing conventional approaches such as parsing trees. In the example "The cat sat on the mat.", the words "cat" and "mat" are entities, and the relationship between them is "sat on".
In our case, the Entity Tree can also include additional information about each entity, such as its type or attributes. This presentation offers valuable applications in tasks such as information extraction, text summarization, and question answering.
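As a concrete illustration, a minimal Entity Tree for the sentence "The cat sat on the mat." could look as follows (written here as a Python dictionary); the particular field values shown are assumptions chosen for illustration rather than actual output of the pipeline.
    entity_tree = {
        "Cat": {
            "Attributes": [],
            "Extends": None,
            "Name": "Cat",
            "Relationships": [
                {
                    "Exact number": None,       # no explicit numeral in the sentence
                    "Primitivity": False,
                    "Min": "1",
                    "Max": "1",
                    "Object attributes": [],
                    "Object name": "Mat",
                    "Relationship type": "sat on",
                    "Type": "String",           # "Mat" carries no explicit primitive type
                }
            ],
        },
        "Mat": {"Attributes": [], "Extends": None, "Name": "Mat", "Relationships": []},
    }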
§ PROPOSED METHOD
Our approach uses a combination of multiple natural language processing (NLP) tools to extract entities, relationships, and their characteristics. An overview of our pipeline is shown in Figure <ref>. Firstly, the input text undergoes coreference resolution to replace each pronoun with the word to which it refers. Secondly, the text is segmented into sentences. Thirdly, for each sentence, we use a Scene Graph Parser (SGP)[A Stanford's CoreNLP tool] <cit.> to obtain the corresponding scene graph. An aggregator is used to fuse different scene graphs. Finally, we apply additional methods to double-check features extraction, generate cardinality, and build up the Entity Tree representation of the input. We detail each component of our system in the following subsections.
§.§ Text Segmentation
To use the CoreNLP scene graph parser for building a scene graph, the input should adhere to the scene graph parser’s input format. According to the official documentation, the implementation only works on single sentences. For this reason, the input goes through sequential methods, a sentence segmentation function and a coreference resolver.
Firstly, the text goes through Spacy's integrated sentence segmenter, which ensures that the text is not split solely on periods. Instead, the split is done using regular expressions that take into consideration links, phone numbers, decimal numbers, etc.
Secondly, in sentences starting with a pronoun, the pronoun is replaced by the nominal group to which it refers. This prevents sentences without a nominal group from being sent to the scene graph parser and stops confusion from spreading downstream.
The text segmentation block outputs a list of clear sentences, that can be semantically understood without the need for the rest of the text. In the next step, each sentence will be sent to the scene graph parser.
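A minimal sketch of this segmentation step, assuming spaCy and its small English model are installed, is shown below; the example text is hypothetical.
    import spacy

    nlp = spacy.load("en_core_web_sm")      # assumes the small English model is installed

    text = ("A library has several members. It also stores many books. "
            "Each member can borrow up to five books.")
    doc = nlp(text)
    sentences = [sent.text.strip() for sent in doc.sents]
    print(sentences)
    # A sentence starting with a pronoun ("It also stores many books.") would then
    # have the pronoun replaced by the nominal group it refers to ("A library").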
§.§ Scene Graph Parsing
The scene graph parser extracts knowledge from a sentence into a scene graph. As its name suggests, the scene graph can be thought of as a representation of the scene described in the input sentence. Various implementations are available. In our work, we stick with the rule-based scene graph parser since its implementation showed the best stability. Practically, the CoreNLP implementation returns a list of relationships. A relationship consists of a subject, the predicate and the object, and a list of attributes built similarly with a list of sets, each containing a subject, the predicate and an attribute.
The scene graph parser starts by building an enhanced version of the dependency tree of the sentence (Klein and Manning, 2003), and only then resolves quantificational modifiers, coreference, and plurals.
§.§.§ Quantificational modifiers
Dependency tree parsers treat noun expression quantificational modifiers such as “both of” or “a bunch of” like noun phrases leading to the latent dependency trees containing root pronouns instead of the subject. Scene graph parser tackles this issue by checking if a noun expression is a quantification modifier against a precompiled list. This addition guarantees an intermediate dependency tree were the head component is an entity.
§.§.§ Coreference
The parser performs coreference resolution on the pronouns of the dependency tree, disambiguating pronouns and other words that might be ambiguous without context. It uses an adaptation of the <cit.> algorithm for dependency trees to preserve the underlying semantic links between sentences, clear up ambiguities, and improve the accuracy of downstream natural language processing tasks.
§.§.§ Plural resolution
In the context of plural resolution, the scene graph parser is tasked with addressing the challenge of "collective-distributive" ambiguity, which refers to the difficulty of determining whether a group noun refers to a single entity that is comprised of multiple individual parts (collective) or multiple distinct entities (distributive). On the one hand, in the example sentence "Two guys in police uniforms", the parser would need to determine that the noun "guys" refers to two distinct individuals, each of whom is wearing a police uniform. On the other hand, in the example sentence "Two passengers in the backseat of the car", the parser would need to recognize that the noun "passengers" refers to two distinct individuals, but that there is only one backseat entity. The scene graph parser only considers that the distributive plural is far more common. This is convenient as it aligns with the common nature of specifications' descriptions.
At this stage the parser outputs what is referred to as a Semantic Graph. From this point, the semantic graph will undergo a set of methods to extract objects and attributes.
§.§.§ Rule-based Parser
The goal of this procedure is to extract object entities, relations, and attributes from the input sentence using the semantic graph and Semgrex2 expressions. At the time this research was conducted, Semgrex2 expressions were able to capture a wide range of pattern constructions. The following patterns are supported:
* Adjectival modifiers
* Subject-predicate-object constructions
* Subject-predicate constructions
* Copular constructions
* Prepositional phrases
* Possessive constructions
* Passive constructions
* Clausal modifiers of nouns
In addition to the rule-based approach, the CoreNLP library offers a classifier-based approach for extracting relations between entities. However, we have found that the rule-based approach is more reliable, as the classifier-based approach may overlook complex patterns and underlying semantics. The output of this process is a scene graph, which consists of a list of relationships and a list of attributes.
We transform the scene graph into a list of entities, each of which has a sub-list of attributes and a sub-list of relationships. A relationship is a link between two entities, consisting of an object entity and a predicate verb describing the connection between the subject entity and the object entity.
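A simplified sketch of this aggregation step is shown below; the input triples and attribute pairs are hypothetical stand-ins for the parser output.
    from collections import defaultdict

    # Hypothetical scene-graph output: (subject, predicate, object) relation triples
    # and (subject, attribute) pairs produced by the rule-based parser.
    relations = [("library", "have", "member"), ("member", "borrow", "book")]
    attributes = [("member", "registered"), ("book", "available")]

    entities = defaultdict(lambda: {"attributes": [], "relationships": []})
    for subj, pred, obj in relations:
        entities[subj]["relationships"].append({"predicate": pred, "object": obj})
        _ = entities[obj]          # ensure the object is also registered as an entity
    for subj, attr in attributes:
        entities[subj]["attributes"].append(attr)

    print(dict(entities))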
§.§ Cardinality Extraction
The scene graph produced at this stage does not include information about the cardinality of entities, as the process converts the corresponding words into their lowercase stem. To address this issue, our cardinality extractor iterates through the list of relationships and the corresponding sentences to determine cardinality. Plural noun expressions are assigned a cardinality of "*", while singular noun expressions are assigned a cardinality of "1". Additionally, for database structure purposes, if an object is described as "optional" in the sentence, the corresponding entity is assigned a cardinality of "0".
This makes it easy to deduce database cardinality types downstream if needed. For example, from "A level has multiple bosses" we can deduce a many-to-one cardinality. Furthermore, this module also extracts and preserves any explicit numerical quantifier as the "Exact Number" entry of the Entity Tree.
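A minimal sketch of these cardinality rules, assuming the Entity structure sketched above and a part-of-speech tagged sentence (Penn Treebank tags; all helper names are ours):

```python
NUMBER_WORDS = {"two": 2, "three": 3, "four": 4, "five": 5}   # illustrative only

def assign_cardinality(entity, tagged_tokens):
    """tagged_tokens: list of (word, pos_tag) pairs for a sentence mentioning the entity."""
    for i, (word, tag) in enumerate(tagged_tokens):
        # crude surface match between the stored lowercase stem and the inflected word
        if not word.lower().startswith(entity.name.lower()):
            continue
        entity.cardinality = "*" if tag in ("NNS", "NNPS") else "1"
        # an explicit numeral right before the noun is preserved as the "Exact Number"
        if i > 0 and tagged_tokens[i - 1][1] == "CD":
            prev = tagged_tokens[i - 1][0]
            entity.exact_number = int(prev) if prev.isdigit() else NUMBER_WORDS.get(prev.lower())
    # entities described as "optional" get cardinality "0"
    if any(w.lower() == "optional" for w, _ in tagged_tokens):
        entity.cardinality = "0"
```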
§.§ Inheritance Resolver
Attributes in a scene graph can be either noun expressions or adjectives. In our case, however, for a noun attribute to be kept as a plain attribute it should not be mentioned elsewhere as an entity; otherwise it is primarily considered a parent class of the described entity rather than an attribute.
We implement an engine that uses a pre-defined set of rules to iterate through the scene graph attributes. It looks for relationships containing a “to be” predicate and applies the rules to determine the entity's relative position in the inheritance hierarchy. In this way, we build up the inheritance structure, as sketched below.
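A minimal sketch of this rule engine, again assuming the Entity structure above (the copula check on the predicate is a simplification of the full rule set):

```python
COPULA = {"be", "is", "are", "was", "were"}

def resolve_inheritance(entities):
    """entities: dict mapping entity name -> Entity (see sketch above)."""
    for entity in entities.values():
        # a noun attribute that is itself an entity elsewhere becomes a parent class
        for attr in list(entity.attributes):
            if attr in entities and attr != entity.name:
                entity.parent = attr
                entity.attributes.remove(attr)
        # "X is a Y" relationships also induce inheritance
        for rel in list(entity.relationships):
            if rel.predicate in COPULA and rel.object_name in entities:
                entity.parent = rel.object_name
                entity.relationships.remove(rel)
    return entities
```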
§.§ Type description
Similar to a variable type in programming, we assign to each entity the kind of data that it can hold.
If a type is explicitly described through specific attributes appearing in a user pre-defined list (such as Date, Long, Short, and Int), the corresponding entity is marked as primitive and is assigned this type. Object entities, which appear as subjects elsewhere in the text, are not assigned an explicit type; their value thus defaults to "String".
The resulting Entity Tree reflects domain-model-like information, including class attributes and relationships, with additional entries. Heuristics can be applied to this Entity Tree to generate consistent class code or UML class diagrams directly from system requirements. With a simple rule script, we easily generate the UML class diagram illustrated in Figure <ref>.
§ EXPERIMENTAL PROTOCOL
In this section, we will present our dataset, as well as the metrics used to evaluate our method.
§.§ Dataset
§.§.§ Entity relation triplet dataset
We aim to evaluate our entity relation extraction work on a dataset specifically tailored for this purpose. However, we have found that existing datasets such as WebNLG <cit.> and New York Times <cit.> primarily concentrate on the extraction of named entities, which is not appropriate for our use case. For this reason, we constructed a dataset consisting of randomly selected Wikipedia paragraphs, and labeled each paragraph with its corresponding (e_s, relationship, e_o) triples for evaluation purposes. Our dataset contains:
* Text theme
* The text content
* The corresponding (e_s, r, e_o) triplets
* Scene Graph Extracted Dependencies
The dataset contains a total of 198 examples and is intended for evaluation only. We therefore consider this number of examples sufficient for evaluating an algorithm that involves no training phase.
§.§.§ Patterns set
A close examination of Wikipedia paragraphs reveals that they can be quite complex, which is at odds with the more precise, concise, and syntactically clear nature of system requirements descriptions. To evaluate the performance of our parser in a more representative way, we have developed an additional checklist of six patterns.
§.§ Metrics
For evaluation, we define a set of metrics measuring extraction accuracy. We believe that a successful information extraction from a sentence resides in a consistent extraction of the triplet (e_s, r, e_o); we therefore focus on evaluating this aspect. We propose metrics based on Intersection over Union as well as on the BLEU <cit.> score. The use of BLEU is motivated by the idea that our task can be viewed as a type of translation, and the BLEU metric is well-suited to evaluating the quality of translations.
• Pair IoU -
First of all, we introduce an IoU metric to evaluate the number of exactly detected pairs.
IoU_Pair(A, B) = | A ∩ B | / | A ∪ B |
where A is the set of reference entity pairs and B the set of candidate entity pairs. For the sake of readability, e_s and e_o refer to the concatenation of the name and the attribute of the subject entity and of the object entity, respectively.
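A minimal sketch of the Pair IoU metric (entities are represented here as already-concatenated "name attribute" strings; the helper name is ours):

```python
def pair_iou(reference_pairs, candidate_pairs):
    """Exact-match IoU between two sets of (e_s, e_o) pairs."""
    A, B = set(reference_pairs), set(candidate_pairs)
    return len(A & B) / len(A | B) if (A | B) else 1.0

ref = {("level", "boss"), ("player", "score")}
cand = {("level", "boss"), ("player", "name")}
print(pair_iou(ref, cand))   # 1/3
```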
• Pairs mean BLEU -
We acknowledge that detecting exact pairs in complex text can be challenging, particularly when the entities are compound. Despite this, we believe that the use of the BLEU metric may provide a more suitable approach for comparing texts in this context compared to other methods. In this context, we introduce a BLEU mean of pairs metric.
BLEU_Mean(D_x, D_y) = ( ∑_i ∈ D_x ∑_j ∈ D_y [ B(e_s^i, e_s^j) ≥ k ] × [ B(e_o^i, e_o^j) ≥ k ] ) / | D_x |
with :
B(entity^i, entity^j) = [ BLEU(entity^i, entity^j) + BLEU(entity^j, entity^i) ] / 2
D_x is the reference sentences' set,
D_y is the candidate sentences' set, and
k is the threshold parameter. Comparisons with B() values that are less than k are omitted.
• Pairs exclusive BLEU -
We introduce another BLEU-based metric that only takes into account max BLEU value of the two permutations of reference and generated pair.
BLEU_Exclusive(D_x, D_y) = ( ∑_i ∈ D_x ∑_j ∈ D_y [ B(e_s^i, e_s^j) ≥ k ] × [ B(e_o^i, e_o^j) ≥ k ] ) / | D_x |
with :
B(entity^i, entity^j) = max [ BLEU(entity^i, entity^j), BLEU(entity^j, entity^i) ]
entity refers to the concatenation of the name and the attribute of entity.
• Triplet BLEU -
Finally, we introduce a similar metric that evaluates both entities and their relation based on BLEU value of their concatenation.
BLEU_Triplet(D_x, D_y) = ( ∑_i ∈ D_x ∑_j ∈ D_y [ B(W(triplet^i), W(triplet^j)) ≥ k ] ) / | D_x |
with :
B(triplet^i, triplet^j) = max [ BLEU(triplet^i, triplet^j), BLEU(triplet^j, triplet^i) ]
where:
W(.) is the concatenation of e_s, r and e_o of a triplet through white space characters,
triplet refers to the concatenation of the name and the attribute of each entity and the relationship.
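A minimal sketch of the BLEU-based metrics defined above, using NLTK's sentence-level BLEU restricted to unigrams with smoothing (these are illustrative implementation choices of ours; the paper does not prescribe a particular BLEU order or threshold):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

_smooth = SmoothingFunction().method1

def bleu(a, b):
    # unigram sentence-level BLEU on whitespace tokens (illustrative choice of order)
    return sentence_bleu([a.split()], b.split(),
                         weights=(1.0,), smoothing_function=_smooth)

def B(a, b, exclusive=False):
    s1, s2 = bleu(a, b), bleu(b, a)
    return max(s1, s2) if exclusive else (s1 + s2) / 2

def bleu_pair_metric(reference_pairs, candidate_pairs, k=0.5, exclusive=False):
    """BLEU_Mean (exclusive=False) or BLEU_Exclusive (exclusive=True) over (e_s, e_o) pairs."""
    hits = sum(1 for es_i, eo_i in reference_pairs
                 for es_j, eo_j in candidate_pairs
                 if B(es_i, es_j, exclusive) >= k and B(eo_i, eo_j, exclusive) >= k)
    return hits / len(reference_pairs)

def bleu_triplet_metric(reference_triplets, candidate_triplets, k=0.5):
    """BLEU_Triplet: triplets are (e_s, r, e_o) strings, concatenated with whitespace."""
    W = lambda t: " ".join(t)
    hits = sum(1 for ti in reference_triplets for tj in candidate_triplets
                 if B(W(ti), W(tj), exclusive=True) >= k)
    return hits / len(reference_triplets)
```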
§ RESULTS AND DISCUSSION
Our entity identification pipeline achieves high scores on the basic patterns set, with scores of 0.905, 0.929, and 0.929 on the IoU_Pair, BLEU_Mean and BLEU_Exclusive metrics, respectively. It also performs exceptionally well in identifying triplets, earning a perfect score. These results demonstrate that the pipeline is capable of handling simple requirement statements, which is necessary for achieving our final goal.
However, when applied to complex Wikipedia paragraphs, the model encounters challenges and struggles to maintain the same level of performance. While the pipeline still manages to achieve decent results in terms of the overall extraction metric BLEU_Triplet on the WikiTriplet dataset, it performs poorly when it comes to detecting entity pairs. Specifically, it obtains scores of 0.004, 0.036, and 0.036 on the IoU_Pair, BLEU_Mean, and BLEU_Exclusive metrics, respectively, indicating a significant drop in performance for entity pair identification in complex text.
Our objective is to build a parser that performs well on simple sentences while also being adaptable to complex entries. For this reason, we have adopted an iterative optimization approach. This approach focuses on continuously improving the parser's performance based on its performance on the basic patterns list. The idea behind this strategy is to establish a solid basis in optimizing the pipeline's performance on simpler inputs and then gradually enhancing it to handle more complex text.
§ CONCLUSIONS AND PERSPECTIVES
We have presented Entity Identifier, a pipeline method for transforming natural-language requirements specifications into a model diagram, built around Stanford Scene Graph Parsing, a natural text parser originally used for extracting scene information from image queries. We propose a novel task, common entities and relations extraction, which aims to extract all related entities in a text together with their relationships; this task will help better model natural text for machine use. While our entity identification pipeline demonstrates impressive capabilities in handling simple requirement statements, it encounters difficulties when confronted with complex Wikipedia paragraphs. A natural improvement of the current work would thus be to extend this task to the extraction of all entities, including common words. In addition, it would be interesting to expand our evaluation dataset to include more examples from different sources.
|
http://arxiv.org/abs/2307.07548v1 | 20230714180002 | Topology of 2D Dirac operators with variable mass and an application to shallow-water waves | [
"Sylvain Rossi",
"Alessandro Tarantola"
] | math-ph | [
"math-ph",
"cond-mat.mes-hall",
"math.MP"
] |
Topology of 2D Dirac operators with variable mass and an application to shallow-water waves
Sylvain Rossi
Alessandro Tarantola
Institute for Theoretical Physics, ETH Zürich, Wolfgang-Pauli-Str. 27, 8093 Zürich, Switzerland
A 2D Dirac operator with constant mass is a topological insulator, sitting in class D of the Kitaev table. Yet, it lacks a bulk invariant due to its non-compact Brillouin zone. We address the issue by letting the mass depend on the position operator in the vertical direction, interpolating between the original model and its negative-mass counterpart. The resulting Hamiltonian has a compact Brillouin zone and a well-defined bulk index. Relating the latter to the signed number of localized chiral modes propagating along the zero-mass line yields an example of bulk-edge correspondence. Identical methods reveal another instance of the correspondence, when applied to the rotating shallow-water model with variable angular velocity.
=======================================
§ INTRODUCTION
Dirac's model is almost ubiquitous in modern physics. Originally introduced to describe relativistic electrons <cit.>, it has more recently found applications in condensed matter <cit.>. Band touchings with linear dispersion relation, well-described by Dirac-type Hamiltonians, appear in Weyl semimetals <cit.>, graphene <cit.> and many other platforms <cit.>. When the host material is topological, such Dirac cones become central in describing the gap closing and quantum phase transition. Indeed, the classification of the Clifford algebras associated with these Dirac Hamiltonians, in presence of the symmetries allowed by the ten-fold way <cit.>, is one of the routes that lead to the derivation of the periodic table of topological insulators <cit.>.
The topology of Dirac operators is often probed by introducing an edge (or interface) and counting the signed number of chiral modes propagating along it <cit.>. However, the paradigm of bulk-edge correspondence <cit.> suggests that the unbounded sample should possess an equivalent bulk index. For 2D translation invariant materials with broken time-reversal symmetry, such index is usually expected to be the first Chern number <cit.> of their Bloch bundle <cit.>. This is however ill-defined if the Brillouin zone is non-compact, a typical feature of continuum models shared also by the 2D massive Dirac Hamiltonian that we wish to study.
The aim of this article is to resolve this issue and endow the 2D Dirac model with a properly defined bulk topological index. This goal is achieved by joining the positive-mass model with its negative-mass counterpart, abiding by the rule that fermions always come in pairs (cf. Nielsen-Ninomiya theorem <cit.>). In practice, the pairing is performed by writing the mass term as a function of the second component x_2 of the two-dimensional position operator x = (x_1,x_2). The resulting model has appeared multiple times in the literature, see e.g. <cit.> or <cit.> and references therein. The mass profile changes sign at x_2=0 and saturates to a constant value m_±≷ 0 at x_2 →±∞. Considering both asymptotic Hamiltonians (x_2 →±∞) at once allows for a compactification of the Brillouin zone. The Chern number of the Bloch bundle constructed upon it is the new bulk index. It is equal to the signed number of states that localize around the x_2=0 interface, and such equality is an instance of bulk-edge correspondence.
The 2D Dirac Hamiltonian describes spin-1/2 particles. Its spin-1 counterpart happens to coincide with the Hamiltonian of the rotating shallow-water model <cit.>. The latter is a hydrodynamical model derived from Euler's equations and used to describe the dynamics of thin layers of fluid lying on a rotating bottom. The angular velocity f of this rotation plays the same role as the mass in the Dirac setting. The shallow-water model, upon addition of an odd-viscous term, displays an anomalous bulk-edge correspondence <cit.> or violates it altogether <cit.> depending on the boundary condition. After defining the bulk index in complete analogy with the Dirac case, we show that no such violation is present in our setting, at least for a specific profile of the variable angular velocity.
The paper is organized as follows. In Section <ref>, we introduce the Dirac Hamiltonian with constant mass, naively compute its Chern number and argue why it is ill-defined. An alternative model with non-constant mass is proposed and its essential spectrum found. In Sec. <ref>, we propose a bulk index for the new model and prove it is topological. In Sec. <ref>, we define the edge index, compute it and prove its independence from the choice of mass profile. The shallow-water model is introduced in Sec. <ref> as the spin-1 counterpart of the previously studied Hamiltonian. A bulk index is defined in complete analogy with Section <ref>, and the edge index is computed in a simple case. The two coincide. Sec. <ref> is finally devoted to conclusions and future prospects.
§ SETUP: TWO-DIMENSIONAL DIRAC HAMILTONIANS WITH (NON)-CONSTANT MASS
A Dirac Hamiltonian with constant mass is introduced. Some of its spectral properties are listed, and the Bloch bundle associated with its positive-energy band constructed. Naively computing its Chern number yields non-integer values. A related model with non-constant mass is introduced and its essential spectrum specified. We claim it can be equipped with a well-defined Chern number, which serves as bulk invariant. The proof of such claim is the content of the next section.
Consider a spin-1/2 particle on ℝ^2, whose Hilbert space is H = L^2 (ℝ^2) ⊗ ℂ^2. Describe its dynamics by a two-dimensional Dirac Hamiltonian H_±, written in terms of Pauli matrices as
H_± ≡ (p_1, p_2, m_±) ·σ⃗ =
[ m_± ,  -i ∂_1 - ∂_2 ;  -i ∂_1 + ∂_2 ,  -m_± ] ,
where σ⃗ = (σ_1, σ_2, σ_3), p_j ≡ -i ∂_j (j = 1,2) is the momentum operator in direction j, and m_± ≷ 0 is a positive (negative) constant.
The operators H_± enjoy a particle-hole symmetry C H_± C^-1 = - H_±, where C = σ_1 K and K denotes complex conjugation. Neither time-reversal nor chiral symmetry are present. The model therefore sits in class D w.r.t. the Kitaev table <cit.>, and a -valued invariant is expected. The Hamiltonians are moreover translation invariant, and their Fourier transform reads
H_± ≡ (k_1, k_2, m_±) ·σ⃗ = d⃗_± (k) ·σ⃗ ,
where k = (k_1,k_2) ∈ ℝ^2 is a point in the non-compact Brillouin zone ℝ^2, dual of position space ℝ^2 ∋ x = (x_1,x_2). The spectra are purely essential, and consist of two bands
±ω_+ (k) = ± | d⃗_+ (k) | = ±√(k^2 + m_+^2)
for H_+ and
±ω_- (k) = ± | d⃗_- (k) | = ±√(k^2 + m_-^2)
for H_-, with k √(k_1^2 + k_2^2).
To discuss the topology of H_±, one has to associate a Bloch bundle to their bands. We can w.l.o.g. restrict our discussion to the top (positive) bands, because the bottom ones are their symmetric counterpart under particle-hole conjugation. Consider the flattened Hamiltonians
H'_± (k) = e⃗_± (k) ·σ⃗ ≡ [ d⃗_± (k) / | d⃗_± (k) | ] ·σ⃗ = (1/√(k^2 + m_±^2)) (k_1, k_2, m_±) ·σ⃗
and use e⃗_± to construct the projections
P_± (k) = [ 𝕀 + e⃗_± (k) ·σ⃗ ] / 2
onto the positive bands. Their associated Bloch bundles then read
E_± ≡ { (k, ψ_k) | k ∈ ℝ^2 , ψ_k ∈ ran (P_± (k)) ⊂ ℂ^2 } .
If E_± had a compact base space Ω in place of ℝ^2, its (rightful) Chern number could be computed by (Prop. 2.1 in <cit.>)
Ch (E_±) = (1/2π) ∫_Ω e⃗_± (k) · (∂_k_1 e⃗_± (k) ∧ ∂_k_2 e⃗_± (k)) dk_1 dk_2 .
In our current setup, the outcome of the integration in Eq. (<ref>) is not a topological invariant, and thus not necessarily an integer. We nonetheless compute it, but denote it by C ħ ( · ) to distinguish it from well-defined Chern numbers
C ħ (E_±) = ± 1/2 = (1/2) sgn (m_±) .
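As an illustrative sanity check (not part of the derivation above), the values ±1/2 can be reproduced numerically by evaluating the solid-angle integral (1/4π) ∫ e⃗_± · (∂_k_1 e⃗_± × ∂_k_2 e⃗_±) dk_1 dk_2 on a large but finite grid; the 1/(4π) normalization and the grid parameters below are our own choices:

```python
import numpy as np

def e_hat(kx, ky, m):
    # unit vector e(k) = (k_1, k_2, m) / |d(k)|
    d = np.stack([kx, ky, np.full_like(kx, m)], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def c_hbar(m, kmax=80.0, n=1201):
    k = np.linspace(-kmax, kmax, n)
    dk = k[1] - k[0]
    KX, KY = np.meshgrid(k, k, indexing="ij")
    E = e_hat(KX, KY, m)
    dE1 = np.gradient(E, dk, axis=0)   # d e / d k_1
    dE2 = np.gradient(E, dk, axis=1)   # d e / d k_2
    berry = np.einsum("xyi,xyi->xy", E, np.cross(dE1, dE2))
    return berry.sum() * dk**2 / (4.0 * np.pi)

print(c_hbar(+1.0), c_hbar(-1.0))   # ~ +0.5 and -0.5 (approached slowly as kmax grows)
```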
Evocatively,
C ħ (E_+) - C ħ (E_-) = +1
is a non-zero integer. We claim that such a difference is a well-defined topological index, and more specifically the bulk invariant of the following Hamiltonian
H ≡ (p_1, p_2, m) ·σ⃗ ,
with m a function of the position operator in direction 2 with profile m(x_2) satisfying:
* m(x_2) differentiable with continuous derivative;
* m'(x_2) ≥ 0 ∀ x_2 (monotonicity);
* m(x_2) → m_± as x_2 →±∞.
Without loss of generality, we shall also assume m(0)=0. Notice that translation invariance in direction x_1 is not lost, and H can hence be written fiber-wise as
H(k_1) = (k_1, -i ∂_2, m) ·σ⃗ =
[ m ,  k_1 - ∂_2 ;  k_1 + ∂_2 ,  -m ] .
The essential spectrum σ_e (H(k_1)) of the fibered operator reads
σ_e (H(k_1)) = { ω ∈ ℝ : | ω | ≥ √(k_1^2 + min{ m_-^2, m_+^2 }) } ,
and this result follows from Thm. 3.11 in <cit.> and
(H - H_±) T_a s→ 0 , (a →±∞) ,
where T_a ≡ e^{- i p_2 a} and s→ denotes strong convergence. Eq. (<ref>) says that H reduces to the Hamiltonians H_± above when x_2 →±∞. Given this fact, (<ref>) embodies the common wisdom that the essential spectrum is determined by what happens very far away.
Let us heuristically justify our choice of H. Intuitively, H_± are two insulators in the same symmetry class but in different topological phases, as suggested by Eqs. (<ref>). The global Hamiltonian H coincides with H_± for x_2 →±∞, and spatially interpolates between the two for finite values of x_2. At x_2 = 0, m(0)=0 and the local Hamiltonian is gapless. The gap closing hints at a quantum phase transition, and the system described by H can thus be seen as two different topological insulators smoothly glued together along the x_2 = 0 line. The invariant associated to a system of this kind (see e.g. Ref. <cit.>, discussion above Eq. (7)) is either the signed number of bound states propagating along the x_2 = 0 interface (edge), or the difference between the Chern numbers of the two insulators (bulk), just as proposed in Eq. (<ref>).
§ BULK INDEX
The notion of bulk is intimately linked to translation invariance: An observer finds itself in the bulk of a material when he cannot perceive any edges or interfaces nearby, i.e. the Hamiltonian he is subject to does not change if he moves around slightly. In this sense, the system described by H has two separate bulk regions x_2 →±∞ where the relevant Hamiltonians are H_±, respectively. How to combine this doubled bulk picture into a single, coherent one can be understood with a story.
Imagine position space ^2 as a translucent sheet of paper. Draw on it the x_2=0 equatorial line and one reference frame for each half-plane (x_2 > 0 and x_2 <0). Now, fold along the equator, keeping the x_2 > 0 half-pane facing upwards. Think of an observer (Alice) living on the top layer, near the x_2 = 0 edge. Have her move away from it in the positive x_2-direction, until the equator is no longer visible and she experiences constant mass m_+. She has reached the upper bulk, where physics is governed by the translation invariant Hamiltonian H_+. However, looking down through the translucent paper, Alice will be able to see another plane with opposite orientation of the x_2-axis (due to the folding), negative mass m_- and Hamiltonian H_-. Her journey is graphically recounted in Fig. <ref>.
Aided by this intuition, one can formalize the bulk picture of system H. Let E be the affine plane and E_± two oppositely oriented copies of it, with natural identification i: E_+ → E_- through E_+ ≡ E ≡ E_-. Let ℝ^2 be equipped with the canonical orientation. Let φ_± : E_± → ℝ^2 be two orientation preserving charts (linear w.l.o.g.). Then i: ℝ^2 → ℝ^2 acts between charts with det i = -1, for example as
i: (x_1, x_2) ↦ (x_1,-x_2) .
Differently put, i ∘φ_± = φ_∓.
Orientability of a manifold is one of the prerequisites for carrying a spin structure and hence a Dirac operator. E_± can thus host the Dirac Hamiltonians H_± with mass m_± respectively. Moving to momentum space, let E^* be the dual of E, and let the Brillouin zones E_±^* inherit orientation from E_±. We wish to express H_± in the coordinates k of p ∈E^* induced by φ_±. Here and in the following, let φ_+ (p) = (k_1,k_2). By Eq. (<ref>)
i: (k_1, k_2) ↦ (k_1,-k_2) .
When embedded in H as asymptotic Hamiltonians, the action of H_± on a state of momentum p ∈E^* can only differ by the mass m_±. In other words, if
H_+ (p) ≡ H_+ (φ_+ (p)) = (k_1,k_2,m_+) ·σ⃗ ,
then
H_- (p) ≡ H_- (φ_- (p)) = (k_1,k_2,m_-) ·σ⃗ .
By Eq. (<ref>), this in turn means
H_± (k) = (k_1, ± k_2, m_±) ·σ⃗
as maps ^2 → L(H), where L(H) denotes linear operators on the Hilbert space.
The bundles E_± are completely determined by the projections P_±, which read
P_± (p) ≡ [ 𝕀 + e⃗_± (p) ·σ⃗ ] / 2 ,
where
e⃗_± (p) = 1/ω_± (p) (k cosθ, k sinθ, m_±) ,
by Eqs. (<ref>,<ref>) and having switched to polar coordinates φ_+ (p) = (k cosθ, k sinθ). We notice that
lim_k →∞e⃗_+ (p) = lim_k →∞e⃗_- (p) ⟹ lim_k →∞ P_+ (p) = lim_k →∞ P_- (p) ,
and such limits are finite, albeit direction dependent. Given existence of these limits, just as the real line can be compactified to a closed interval, so can the planes E^*, E_±^* be compactified to closed disks D, D_±, where points at their boundary represent infinity in the corresponding direction. In turn, the disk is topologically equivalent to a hemisphere.
Using the new Brillouin zones D_±, Eq. (<ref>) is rewritten as
P_+ (p) = P_- (p) , (p ∈∂D) ,
by ∂D_+ = ∂D_-.
Note that
D̃ ≡ D_+ ∪ D_-
joined along ∂D_+ = ∂D_- is a sphere equipped with an orientation, and in fact consistent on D_±.
By Eq. (<ref>),
P = P_+ ⊔ P_-
defines a line bundle E on D̃. We have thus been able to construct a bundle E over compact base space D̃≃ S^2 associated with the global Hamiltonian H.
The bulk index I of the Hamiltonian H defined in Eq. (<ref>) is
I ≡ Ch (E) .
This index can now be computed. Formally
Ch (E) = (1/2π) ∫_D̃ e⃗ (q) · (∂_q_1 e⃗ (q) ∧ ∂_q_2 e⃗ (q)) dq_1 dq_2 ,
where e⃗ is the unit vector associated to P and q = (q_1,q_2) some coordinates on the sphere. Operationally, it is natural to place a chart on each hemisphere and evaluate the integral accordingly:
Ch (E) = (1/2π) ∫_ℝ^2 e⃗_+ (k_1, k_2) · (∂_k_1 e⃗_+ (k_1, k_2) ∧ ∂_k_2 e⃗_+ (k_1,k_2)) dk_1 dk_2
+ (1/2π) ∫_ℝ^2 e⃗_- (k_1, -k_2) · (∂_k_1 e⃗_- (k_1, -k_2) ∧ ∂_k_2 e⃗_- (k_1,-k_2)) dk_1 dk_2 .
The second integral inherits a flip of the k_2-axis from the negative orientation of E^*_-, which can be undone by a change of variables k_2 ↦ - k_2. The latter results in
Ch (E) = (1/2π) ∫_ℝ^2 e⃗_+ (k_1, k_2) · (∂_k_1 e⃗_+ (k_1, k_2) ∧ ∂_k_2 e⃗_+ (k_1,k_2)) dk_1 dk_2
- (1/2π) ∫_ℝ^2 e⃗_- (k_1, k_2) · (∂_k_1 e⃗_- (k_1, k_2) ∧ ∂_k_2 e⃗_- (k_1,k_2)) dk_1 dk_2
= C ħ (E_+) - C ħ (E_-) .
We have thus proven the following proposition.
The quantity C ħ (E_+) - C ħ (E_-) is a genuine topological invariant, namely there exists a bundle E over compact base space D̃≃ S^2 such that
Ch (E) = C ħ (E_+) - C ħ (E_-) .
A few things have been achieved in the previous paragraphs. First of all, we have shown that a single Dirac spin does not have a well-defined bulk topology: it acquires one after being combined with a spin of opposite mass, much in the spirit of the Nielsen-Ninomyia theorem. Secondly, we have endowed the Hamiltonian H, namely the one considered by Jackiw and Rebbi <cit.>, with a bulk invariant. Finally, the idea of folding position space connects edge and interface pictures: One can equivalently look for states localizing along the x_2 = 0 line of the unfolded plane, or consider the bulk picture of the superposed E_± planes, cut them in half and observe modes along the newly created insulator-vacuum boundary.
§ EDGE INDEX AND BULK-EDGE CORRESPONDENCE
Ever since <cit.>, many topological insulators have been shown to exhibit bulk-edge correspondence. The latter means that the edge index, typically the signed number of chiral states propagating along the boundary, coincides with the bulk invariant. Our model Hamiltonian H enjoys the same property. We prove this claim by defining an edge index I^#, computing its value and verifying it matches I. The index I^# is moreover proven independent on the choice of mass profile m(x_2), as expected of a topological quantity.
The following definition of the edge index is a simple extension of the fiducial line approach (see Ref. <cit.> and references therein) to unbounded operators.
Consider H as in Eq. (<ref>), and let m̃ ≡ min{ |m_-|, |m_+| }. Introduce a Fermi line μ_ϵ (k_1), approximating the lower rim of the upper band from below:
μ_ϵ (k_1) ≡ -ϵ + √(k_1^2 + m̃^2) ,
where ϵ > 0 is a (small) constant, to be chosen so that μ_ϵ lies entirely in the bulk gap. Denote by ψ_j (x) ≡ e^{i k_1 x_1} ψ̂_j (x_2;k_1) the bound eigenstate with dispersion relation ω_j (k_1),
H(k_1) ψ̂_j (x_2;k_1) = ω_j (k_1) ψ̂_j (x_2;k_1) .
Then, the edge index I^# is
I^# ≡ - ∑_j ∈ J I (μ_ϵ, ω_j) ,
with I (μ_ϵ, ω_j) the intersection number of the Fermi line with the j-th edge channel ω_j, and J the index set of bound eigenstates.
A word on conventions. Both μ_ϵ (k_1) and ω_j (k_1), (j ∈ J) are curves on the (k_1,ω)-plane. The intersection number between the k_1 and ω axes is taken to be +1. The minus sign in Eq. (<ref>) is then necessary to recover the standard condensed matter convention <cit.> that states born on the lower rim of a bulk band (while proceeding in the positive k_1 direction) contribute +1 to the edge index. Our choice is also equivalent to counting left-movers (right-movers) positively (negatively).
The edge index is computed by finding the discrete spectrum and the relative eigenstates. Systems like our H have been extensively studied elsewhere, see Refs. <cit.> and <cit.> (very close to our work in techniques and spirit). Two eigenstates ψ^R/L (x, k_1) (right- and left-moving respectively) are known for any profile of the mass. They read
ψ^R/L (x; k_1) = e^{i k_1 x_1} ψ̂^R/L (x_2;k_1) , ψ̂^R/L (x_2;k_1) = N_R/L e^{± ∫_0^x_2 m(x_2') dx_2'} [ 1; ± 1 ] ,
with N_R/L∈ some normalization constants. Their dispersion relation is ω_R/L (k_1) = ± k_1, and they are bound for m(x_2) monotonically decreasing or increasing, respectively. Since
I (μ_ϵ, ω_R/L) = ± 1 ,
their contribution to the edge index would be ∓ 1, respectively. We claim that these are the only net contributions, namely all other bound states (if they exist) intersect the Fermi line an even number of times with opposite signs. Equivalently, they emerge from and disappear into the same bulk band.
Let I^# as in Def. <ref>, and let m(x_2) be monotonous: m'(x_2) ≷ 0 for all x_2. Then
I^# = sgn (m') .
The unique bound state giving a net non-zero contribution to I^# for m' > 0 (m' < 0) is ψ^L (ψ^R), cf. Eq. (<ref>). Its dispersion relation reads ω_L = -k_1 (ω_R = +k_1).
The full proof is reported in App. <ref>, but its main elements can be sketched here. Let ψ = (u,v)^T be a candidate solution of the eigenvalue equation (<ref>). The latter consists in two coupled first order ODEs. Decoupling them results in second order ODEs for u and v. These are formally equivalent to time-independent Schrödinger equations, with an effective potential dependent on m and m'. The bound state energies must lie above the infimum of this potential and below its asymptotic value. Such a condition confines possible edge channels to the region of the (k_1,ω) plane between the light cone | ω | = | k_1 | and σ_e (H), see Fig <ref>. If a channel ω_i (k_1) connects the bands, thus giving a non-zero contribution to I^#, it must cross the ω = k_1 = 0 point. However, only two states solve H(0) ψ̂ (x_2,0) = 0, and they are the ones reported in Eq. (<ref>). All other edge modes, if they exist, thus emerge from and disappear into the same band. Their net contribution to I^# is zero, whence the claim.
For m'>0, as originally assumed in section <ref>, Prop. <ref> implies bulk-edge correspondence in the form of
I = +1 = I^# .
Ref. <cit.> freely makes use of the results of Proposition <ref>, which must thus be known in the literature. The same reference, however, fails to acknowledge where such results are first shown. We could not find this information either, and we hence reported our own proof for completeness.
§ APPLICATION: ROTATING SHALLOW-WATER MODEL
The techniques developed above (in particular, compactification of the Brillouin zone via fermion doubling) apply to the rotating shallow-water model <cit.>, where they lead to the proof of a new instance of bulk-edge correspondence. The shallow-water Hamiltonians H_SW^± are given as the spin-1 counterparts of H_±, cf. (<ref>). They have no well-defined bulk invariant, but acquire one when embedded (as asymptotic limits, in the strong sense) into H_SW, a Hamiltonian with variable angular velocity f. H_SW has associated Bloch bundle E_SW over S^2, whose Chern number is the bulk index I = Ch (E_SW). The edge invariant I^# is defined in complete analogy with Def. <ref>. Its value is explicitly computed for a velocity profile f(x_2) = f sgn (x_2), f>0 constant, resulting in I = I^# = +2.
Consider H_± as in Eq. (<ref>). Substitute m_± by f_± and σ⃗ by S⃗ = (S_1,S_2,S_3), where
S_1 =
[ 0 1 0; 1 0 0; 0 0 0 ] ,
S_2 =
[ 0 0 1; 0 0 0; 1 0 0 ] ,
S_3 =
[ 0 0 0; 0 0 -i; 0 i 0 ] .
This results in new spin-1 operators
H_SW^± ≡ (p_1,p_2,f_±) ·S⃗ .
Eigenstates of H_SW^± are solutions of the shallow-water equations, used among other things to describe Earth's oceanic layers <cit.>. The coefficients f_± are thus recognized as positive (negative) angular velocities, determining sign and strength of the Coriolis force. On Earth, f=0 at the equator.
The Hamiltonians H_SW^± are translation invariant (by construction) and particle-hole symmetric. Writing them fiber-wise, one finds the spectra
σ ( H_SW^± (k) ) = σ_e ( H_SW^± (k) ) = { - √(k^2 + f_±^2)}∪{0}∪{√(k^2 + f_±^2)} , (k∈^2) .
The Brillouin zone is again ^2 (non-compact). As in Sec. <ref>, define the bundles E_SW^± associated with the positive band of H_SW^± (k). These bundles have ill-defined Chern numbers. Even so, computing (<ref>) happens to return integer values
Cħ (E^±_SW) = ± 1 .
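These values can be checked numerically as well. The sketch below (our conventions and parameters) integrates the gauge-invariant Berry flux (1/2π) ∫ i Tr( P [∂_k_1 P, ∂_k_2 P] ) dk_1 dk_2 for the projector P onto the top band, using finite differences of P on a grid; up to the overall sign fixed by the orientation convention, the spin-1 result is twice the spin-1/2 one, consistent with Cħ(E^±_SW) = ±1 given Cħ(E_±) = ±1/2:

```python
import numpy as np

SIGMA = [np.array(m, complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
SPIN1 = [np.array(m, complex) for m in
         ([[0, 1, 0], [1, 0, 0], [0, 0, 0]],
          [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
          [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])]

def top_band_projector(kx, ky, mass, spin):
    h = kx * spin[0] + ky * spin[1] + mass * spin[2]
    _, v = np.linalg.eigh(h)
    u = v[:, -1]                      # eigenvector of the largest eigenvalue
    return np.outer(u, u.conj())

def berry_flux(mass, spin, kmax=40.0, n=401):
    k = np.linspace(-kmax, kmax, n)
    dk = k[1] - k[0]
    P = np.array([[top_band_projector(kx, ky, mass, spin) for ky in k] for kx in k])
    dP1, dP2 = np.gradient(P, dk, axis=0), np.gradient(P, dk, axis=1)
    F = 1j * np.trace(P @ (dP1 @ dP2 - dP2 @ dP1), axis1=-2, axis2=-1)
    return float(F.real.sum()) * dk**2 / (2.0 * np.pi)

print(berry_flux(1.0, SIGMA), berry_flux(1.0, SPIN1))   # magnitudes ~ 0.5 and ~ 1.0
```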
Introduce the Hamiltonian
H_SW = (p_1, p_2, f) ·S⃗ ,
where f is a function of position operator x_2 and we assume: f(x_2) continuously differentiable and monotonous; f(x_2) → f_± as x_2 →±∞; f(0)=0. H_SW tends to H_SW^± asymptotically, in the sense that
( H_SW - H_SW^± ) T_a s⟶ 0 , a →±∞ .
The essential spectrum of the partially Fourier-transformed operator
H_SW (k_1) = (k_1, p_2, f) ·S⃗
is
σ_e (H_SW (k_1)) = {ω∈ | |ω| ≥√(k_1^2 + min{f_+^2, f_-^2})}∪{ 0 } .
Unlike Eq. (<ref>), the proof of (<ref>) is non-trivial because of the flat zero-energy band. We defer this proof to App. <ref>.
The Brillouin zone of H_SW is compactified to the sphere S^2 just as in Sec. <ref>. The bundle E_SW constructed over it has
Ch (E_SW) = Cħ (E^+_SW) - Cħ (E^-_SW) = 2 ,
which we take as our bulk index I.
Bulk-edge correspondence is finally probed in a special case, f_+ = f = -f_- and f(x_2) = f sgn (x_2). The bound states, found by direct computation (see App. <ref>), are precisely two:
ψ_a (x,k_1) = C_a e^{i k_1 x_1} e^{- f |x_2|} [ 1; -1; 0 ] , ψ_b (x, k_1) = C_b e^{i k_1 x_1} e^{- |k_1| |x_2|} [ 0; sgn (x_2) sgn (k_1); i ] ,
with C_a, C_b ∈ normalization constants. The corresponding dispersion relations are
ω_a (k_1) = -k_1 , ω_b (k_1) = f sgn (k_1) ,
as illustrated in Figure <ref>.
Computing the edge index I^# as given in Def. <ref> yields
I^# = - I (μ_ϵ, ω_a) - I (μ_ϵ, ω_b) = +2 = I ,
and Eq. (<ref>) represents another instance of bulk-edge correspondence.
It should be noticed that the solutions in Eq. (<ref>) of the Dirac eigenvalue problem extend immediately to the shallow-water setup. Indeed,
ψ^R/L_SW (x; k_1) = e^{i k_1 x_1} ψ̂^R/L_SW (x_2;k_1) , ψ̂^R/L_SW (x_2;k_1) = C_R/L e^{± ∫_0^x_2 f(x_2') dx_2'} [ 1; ± 1; 0 ] , C_R/L ∈ ℂ ,
are eigenstates of H_SW with eigenvalues ω_R/L = ± k_1 for any integrable angular velocity profile f(x_2). Moreover, ψ^R_SW (ψ^L_SW) is certainly bound for f(x_2) monotonically decreasing (increasing). If f(x_2) = f sgn (x_2), ψ^L_SW = ψ_a (cf. Eq. (<ref>)).
By contrast, ψ_b is a new solution with no counterpart in the Dirac case.
§ CONCLUSIONS AND FUTURE DIRECTIONS
This work considered some of the topological properties of a 2D Dirac operator. In particular, the latter was shown to lack a bulk topological invariant in the constant mass case. It acquired one after being combined with its opposite-mass counterpart, as asymptotic limits of a global Hamiltonian with variable mass profile. It was useful to think of the latter as spatially interpolating between two distinct topological phases (positive and negative mass): The difference of their (ill-defined) invariants provided the global model with a (legitimate) bulk index I. This object was then related to the signed number I^# of bound states propagating along the interface x_2 = 0 of the two insulators. I^# was proven independent on the choice of mass profile m(x_2), thus qualifying as an edge invariant. The equality I = I^# represented an example of bulk-edge correspondence.
Identical ideas were then used to show another instance of correspondence for the shallow-water model, albeit only for the specific profile f(x_2) = f sgn (x_2) of the angular velocity. A couple reasons made the analysis of this system interesting in his own regard, and not just as a corollary of the previous case. On the one hand, finding the essential spectrum of the operator with variable f required non-standard techniques. On the other hand, the presence of bulk-boundary correspondence in our system seemed surprising, considering that its relative with constant f and addition of odd viscosity shows disagreement between bulk and edge indices for certain boundary conditions <cit.>.
Future endeavours include: proving stability of the edge index under reasonable deformations of f(x_2), in the shallow-water case; extending the doubling approach to different systems, starting perhaps with Dirac operators in higher spatial dimension; adapting the techniques to different symmetry classes; ultimately, achieving a general proof of bulk-boundary correspondence for models on the continuum.
§ ACKNOWLEDGEMENTS
The authors are deeply grateful to Gian Michele Graf for his crucial contributions to Appendix <ref>, for numerous illuminating discussions and for useful advice on the structure and message of the paper.
§ PROOF OF PROPOSITION <REF>
The aim of this appendix is to flesh out the proof of Prop. <ref>, so far only sketched in Sec. <ref>.
Fix a fiber k_1 and look for solutions ψ̂ of
H(k_1) ψ̂ (x_2;k_1) = ω (k_1) ψ̂ (x_2;k_1) ,
with energy ω in the bulk gap. In the following, we drop the k_1-dependence since this parameter is fixed.
Let ψ̂ (x_2) = (u (x_2),v (x_2))^T. Eq. (<ref>) explicitly reads
[ m(x_2) k_1 - ∂_2; k_1 + ∂_2 -m(x_2) ][ u(x_2); v(x_2) ] = ω[ u(x_2); v(x_2) ] ,
see Eq. (<ref>).
Let s ≡ (u+v)/2, d ≡ (u-v)/2, or equivalently
u = s+d v = s-d .
Eq. (<ref>) gets rewritten as
d'(x_2) + m(x_2) d(x_2) = (ω - k_1) s(x_2)
s'(x_2) - m(x_2) s(x_2) = -(ω + k_1) d(x_2)
in terms of s,d, where (·)' = ∂_2 (·). Eqs. (<ref>) can be decoupled at the price of going from first to second order ODEs
( -Δ + W_s (x_2) ) s(x_2) = ω^2 s (x_2) , W_s (x_2) ≡ k_1^2 + m^2(x_2) + m'(x_2)
( -Δ + W_d (x_2) ) d(x_2) = ω^2 d (x_2) , W_d (x_2) ≡ k_1^2 + m^2(x_2) - m'(x_2) .
Passing to second order may potentially introduce spurious solutions. Even so, if s,d are (square-summable) solutions of System (<ref>), then they are also (square-summable) solutions of System (<ref>). What follows only hinges on the contrapositive of the previous statement.
The ones in (<ref>) are one-dimensional Schrödinger equations with potential W_s/d (x_2), bounded from below. ψ̂ is bound only if both u and v, or equivalently both s and d, are. Bound eigenstates of a Schrödinger operator are known to have energy lying between the minimum and asymptotic value of the potential, see Eq. (2.91) in <cit.> as a reference. By Eq. (<ref>), if s,d are both bound their energy ω is such that
max{ω_s, ω_d }≤ω^2 < min{ k_1^2 + m_-^2, k_1^2 + m_+^2 } ,
where
ω_s/d ≡ inf_x_2 W_s/d (x_2) .
If m'>0 (m' < 0), then W_s (x_2) ≥ W_d (x_2) (W_d (x_2) ≥ W_s (x_2)) for all x_2 and thus max{ω_s, ω_d } = ω_s (max{ω_s, ω_d } = ω_d). Moreover, ω_s ≥ k_1^2 (ω_d ≥ k_1^2) by
ω_s ≡ inf_x_2 (k_1^2 + m^2 (x_2) + m'(x_2)) ≥ k_1^2 + inf_x_2 m^2 (x_2) + inf_x_2 m'(x_2) = k_1^2
( ω_d ≡ inf_x_2 (k_1^2 + m^2 (x_2) - m'(x_2)) ≥ k_1^2 + inf_x_2 m^2 (x_2) - sup_x_2 m'(x_2) = k_1^2 ) .
In either case, all bound states must have energy ω satisfying
k_1^2 ≤ω^2 < min{ k_1^2 + m_-^2, k_1^2 + m_+^2 }≡ k_1^2 + m̃^2 .
Notice that ω^2 < k_1^2 + m̃^2 is nothing but the gap condition. Combined with k_1^2 ≤ω^2 ↔ |ω| ≥ |k_1|, it tells us that the allowed bound-state energies must lie between the light cone |ω| = | k_1| and the essential spectrum σ_e (H).
As pointed out in Sec. <ref>, states emerging from and disappearing into the same band give a zero net contribution to I^#. The relevant ones must thus cross the entire band gap. By Eq. (<ref>), they can only do so if their energy is ω = 0 at k_1 = 0. At ω = k_1 = 0, system (<ref>) reduces to
d'(x_2) + m(x_2) d(x_2) = 0
s'(x_2) - m(x_2) s(x_2) = 0
⟷
d(x_2) = D e^{- ∫_0^x_2 m(x_2') dx_2'}
s(x_2) = S e^{∫_0^x_2 m(x_2') dx_2'} ,
with S,D ∈ some normalization constants. When m'>0, both are square-summable only for S=0, i.e. s(x_2) ≡ 0. Then u(x_2) = d(x_2) = -v(x_2), and
ψ̂ (x_2) = D e^{- ∫_0^x_2 m(x_2') dx_2'} [ 1; -1 ] ≡ ψ^L (x_2)
as claimed. Acting with the Hamiltonian reveals the dispersion relation ω_L = -k_1, and I^# = - I (μ_ϵ, ω_L) = +1 = sgn (m').
Similarly, when m'<0 square-summability requires d(x_2) ≡ 0, so that u(x_2) = s(x_2) = v(x_2) and
ψ̂ (x_2) = S e^{∫_0^x_2 m(x_2') dx_2'} [ 1; 1 ] ≡ ψ^R (x_2) .
This time ω_R = k_1, and I^# = - I (μ_ϵ, ω_R) = -1 = sgn (m').
§ ESSENTIAL SPECTRUM OF THE SHALLOW-WATER HAMILTONIAN
The aim of this appendix is to prove the following theorem, which implies Eq. (<ref>) directly, and all the results that lead to it.
Let H_SW (k_1) be as in Eq. (<ref>) and H^±_SW (k_1) = (k_1, - ∂_2, f_±) ·S⃗, where f_± = lim_x_2 →±∞ f(x_2). Then
σ_e (H_SW (k_1)) = ⋃_s = ±σ_e ( H^s_SW (k_1) )
for all k_1 ∈.
The entire section concerns the shallow-water model, and in particular the operators H_SW (cf. Eq. (<ref>)) and H^±_SW (cf. Eq. (<ref>)). Below, we thus drop subscripts (·)_SW for readability.
Notice that Thm. <ref> could also be stated as σ_e (H (k_1)) = ∪_s = ±σ (H^s (k_1)), because the spectrum of H^± is purely essential. This fact is proven as a warm-up and to display some of the techniques employed later.
σ (H^s (k_1)) = σ_e ( H^s (k_1) ) , ( s = ± ) .
We recall Weyl's criterion: Let A be a self-adjoint operator on some Hilbert space H. Then λ∈ belongs to σ_e (A) if and only if there exists a so-called Weyl sequence {ψ_n }_n ∈H such that ‖ψ_n ‖ = 1, ψ_n w→ 0 and
(A - λ) ψ_n → 0
in Hilbert-space norm.
Both operators H^s are translation invariant in x_2, i.e. commuting with T_a = ^- p_2 a, (a ∈). Then, any sequence of approximate eigenvectors, (H^s - λ ) ψ_n → 0, (‖ψ_n ‖ = 1) can be turned into one, ψ̃_n T_a_nψ_n that also has ψ̃_n w→ 0 by suitable choice of a_n. Thus, by Weyl's criterion λ∈σ (H^s (k_1)) implies λ∈σ_e ( H^s (k_1) ) for all k_1.
We actually have a slightly stronger statement: χψ̃_n → 0 for any χ = χ (x_2) with χ (x_2) vanishing at x_2 → + ∞ (or x_2 → - ∞). This follows from
χ T_a s⟶ 0 , (a → + ∞)
(respectively a → - ∞).
The proof of the main theorem rests on the following lemma, which will be shown later.
Let
K = χ H (H-z)^-2 ,
where χ = χ (x_2), supp χ compact and we recall H ≡ H_SW. Then K is compact for all z ∈, Im z ≠ 0.
We start by showing
σ_e (H^± (k_1)) ⊆σ_e (H (k_1)) , ∀ k_1 .
Assume λ∈σ_e (H^+ (k_1)). Just as in the proof of Lemma <ref>, given a sequence ψ_n of approximate eigenstates we construct a new one
φ_n T_a_nψ_n .
By suitable choice of a_n ∈, φ_n can be made weakly convergent to zero and such that χφ_n → 0 for any χ = χ(x_2) vanishing at x_2 → + ∞. This follows from
χ T_a s⟶ 0 , (a → + ∞) .
By the same reason (<ref>), we have
( H (k_1) - H_+ (k_1) ) T_a s⟶ 0 , (a → + ∞) ,
and likewise for H_- and a → - ∞. The desired inclusions, cf. Eq. (<ref>), follow by the triangle inequality.
The result just derived and Eq. (<ref>) imply 0 ∈σ_e (H (k_1)) ∩σ_e (H^± (k_1)). We are thus left to prove
σ_e (H (k_1)) ∖{0}⊂⋃_s = ±σ_e (H^s (k_1)) .
Let λ∈σ_e (H (k_1)) ∖{0}, and let g = g(λ') be such that: g(λ') = 1 for all λ' in some neighbourhood U ∋λ; supp g compact; 0 ∉supp g. Let ψ_n be a Weyl sequence for λ,
(H- λ) ψ_n → 0 , ψ_n w⟶ 0 , ‖ψ_n ‖→ 1 .
We claim
χψ_n → 0
for any χ = χ (x_2) with suppχ compact. Indeed, if this is the case ψ_n must escape towards either positive or negative spatial infinity, where H (k_1) tends to H^+ (k_1) or H^- (k_1) strongly. In the first (second) case, λ is in the essential spectrum of H^+ (k_1) (H^- (k_1)) by the reasoning employed to prove Eq. (<ref>).
To show Eq. (<ref>) we write
χψ_n = χ (1 - g(H)) ψ_n + χ g(H) ψ_n .
Here and onwards, results are still meant for all k_1 despite omitting k_1 from the notation. Since (1 - g(H)) (H - λ)^-1 is bounded, the first term in Eq. (<ref>) is bounded by a constant times ‖ (H-λ) ψ_n ‖, and thus complies with Eq. (<ref>) by hypothesis (H-λ) ψ_n → 0. Since B ≡ (H-z)^2 H^-1 g(H) is also bounded, we just need to show that the second term vanishes:
χ H (H-z)^-2 B ψ_n → 0 .
This follows from Lemma <ref>.
Before proving Lemma <ref>, we state a further one, to be proven later.
For any z ∈ with Im (z) ≠ 0, there exists C = C(z) < ∞ such that
H (p_2^2 +1) H ≤ C ( (H-z̅) (H - z) )^2 ,
where again H = H_SW.
By Lemma <ref>, the operator
A ≡ (p_2 + i) H (H-z)^-2 ,
is such that A^* A ≤ C for some finite C, i.e. it is bounded.
Then, K = χ (p_2 + i)^-1 A is compact if χ (p_2 + i)^-1 is. This is true because χ and (p_2 + i)^-1 vanish at infinity in position and momentum space respectively, which is a known sufficient condition for compactness (see Remark <ref> below). The result follows.
Consider the operator T = f(x) g(p) on some functional Hilbert space over ℝ^n, say L^2 (ℝ^n) for definiteness, where x and p denote the position and momentum operator respectively. It is proven e.g. in <cit.> (page 47, Thm. XI.20) that, if f,g ∈ L^q (ℝ^n) as functions, then T ∈ J_q ⊂ K, where J_q denotes the q-th Schatten class and K the compact operators. Even if f is only vanishing at infinity, its restriction f_L(x) ≡ f(x) χ_[-L,L] (x) is in L^2 (ℝ^n), with χ_[-L,L] indicator function of the hypercube [-L,L]^× n. The same holds for g. Then, by the aforementioned result, T_L ≡ f_L (x) g_L (p) ∈ J_2. The claim that T is compact now follows by T_L → T (L →∞) in operator norm, and the fact that convergent sequences of operators in K have limit in K.
The boundedness of A in Eq. (<ref>) informally states that, if p_2 diverges along some sequence of states, then H(k_1) has to do so too, or to tend to zero.
Before proving Lemma <ref>, we state yet another one, to be once again proven later. It is an identity, somewhat close in spirit to the Weitzenböck formula.
Let d⃗ = (d_1,d_2,d_3) with d_i = d_i^* self-adjoint operators on some Hilbert space H. Let S⃗ = (S_1,S_2,S_3) be as in Eq. (<ref>). Then, on H⊗^3, the following identity holds
( d⃗·S⃗ ) d^2 ( d⃗·S⃗ ) = ( d⃗·S⃗ )^4 + 1/2( (d⃗·S⃗) D + D^* (d⃗·S⃗) ) ,
where d^2 = d_1^2 + d_2^2 + d_3^2 and
D =
[ (d_1 d_3 d_2 - d_2 d_3 d_1 ) [d_3^2,d_1] [d_3^2,d_2]; [d_2^2, d_1] ( d_3 d_2 d_1 - d_1 d_2 d_3 ) - [d_2^2, d_3]; [d_1^2, d_2] - [d_1^2, d_3] ( d_2 d_1 d_3 - d_3 d_1 d_2 ) ] .
If the d_i's are numbers (or commuting operators), then D=0 and Eq. (<ref>) becomes trivial. In fact, the operator d⃗·S⃗ then has eigenvalues 0, ±‖d⃗‖.
Apply the results of Lemma <ref> to H ≡ H(k_1) = (k_1, p_2, f(x_2)) ·S⃗ (operator on L^2 () ⊗^3), having momentarily reinstated the momentum label k_1.
Since d⃗ = (k_1, - ∂_2, f) (where f ≡ f(x_2) here and throughout the proof)
D =
[ -k_1 f' 0 2 f f'; 0 - k_1 f' f” - 2 f' p_2; 0 0 k_1 f' ] ,
with (·)' = ∂_2 (·). Moreover
H d^2 H = H^4 + 1/2 (H D + D^* H) .
Now use A + A^* ≤ϵ A A^* + ϵ^-1 for all ϵ >0 and A = HD to bound
H D + D^* H ≤ϵ H D D^* H + ϵ^-1 .
Split D = D_1 + D_2 with
D_1 =
[ 0 0 0; 0 0 - 2 f' p_2; 0 0 0 ]
and D_2 bounded. Then
D D^* ≤ 2 (D_1 D_1^* + D_2 D_2^*) ,
which by
D_1 D_1^* ≤ 4 p_2 f' p_2 ≤ C_1 p_2^2 , D_2 D_2^* ≤ C_2
becomes
D D^* ≤ 2 ( C_1 p_2^2 + C_2 ) ,
where C_1,C_2 > 0 some constants.
Use p_2^2 ≤ d^2 and plug Eq. (<ref>) into Eq. (<ref>):
H p_2^2 H ≤ H d^2 H ≤ H^4 + ϵ H ( C_1 p_2^2 + C_2) H + (1/2 ϵ) ,
whence
(1- C_1 ϵ) H p_2^2 H ≤ H^4 + ϵ C_2 H^2 + (1/2 ϵ) .
By picking (once and for all) ϵ so small that C_1 ϵ < 1/2, Eq. (<ref>) is recast as
H (p_2^2 + 1) H ≤ 2 H^4 + ( 2 ϵ C_2 + 1) H^2 + ϵ^-1 .
The thesis of the Lemma then follows by two observations. First, ∀ z with Im z ≠ 0, there exist α, β > 0 such that
(H- z̅) (H-z) ≥α H^2 + β .
Second, one can always find a finite, positive constant C = C(z) for which
2 H^4 + ( 2 ϵ C_2 + 1) H^2 + ϵ^-1≤ C (α H^2 + β)^2 .
Everything proceeds by direct calculation. We start by showing
d^2 ( d⃗·S⃗ ) = ( d⃗·S⃗ )^3 + D ,
with D as in Eq. (<ref>). This is seen by
( d⃗·S⃗ )^3 =
[ - (d_1 d_3 d_2 - d_2 d_3 d_1) d^2 d_1 - [d_3^2, d_1] d^2 d_2 - [d_3^2, d_2]; d^2 d_1 -[d_2^2, d_1] - (d_3 d_2 d_1 - d_1 d_2 d_3) - d^2 d_3 + [d_2^2, d_3]; d^2 d_2 -[d_1^2, d_2] + d^2 d_3 + [d_1^2, d_3] - ( d_2 d_1 d_3 - d_3 d_1 d_2 ) ]
d^2 ( d⃗·S⃗ ) =
[ 0 d^2 d_1 d^2 d_2; d^2 d_1 0 - d^2 d_3; d^2 d_2 + d^2 d_3 0 ]
and comparison between Eqs. (<ref>,<ref>).
The conjugate of Eq. (<ref>) reads
( d⃗·S⃗ ) d^2 = ( d⃗·S⃗ )^3 + D^* .
Thus
( d⃗·S⃗ ) d^2 ( d⃗·S⃗ ) = ( d⃗·S⃗ ) ( ( d⃗·S⃗ )^3 + D ) = ( d⃗·S⃗ )^4 + ( d⃗·S⃗ ) D ,
having used Eq. (<ref>). However, its conjugate Eq. (<ref>) implies
( d⃗·S⃗ ) d^2 ( d⃗·S⃗ ) = ( ( d⃗·S⃗ )^3 + D^* )( d⃗·S⃗ ) = ( d⃗·S⃗ )^4 + D^* ( d⃗·S⃗ ) ,
and the final claim follows from summing Eqs. (<ref>,<ref>).
§ INTERFACE STATES OF THE SHALLOW-WATER MODEL, COMPLETE CALCULATION
The following paragraphs are meant to derive the edge states and dispersion relations of Eqs. (<ref>,<ref>), see Sec. <ref>. Such states are square-summable solutions of
H_SWψ = ωψ ,
where H_SW is the shallow water Hamiltonian of Eq. (<ref>)
H_SW = (p_1, p_2, f(x_2)) ·S⃗ = (- ∂_1, - ∂_2, f(x_2)) ·S⃗ ,
with f(x_2) = f sgn (x_2) (f > 0 constant) as our choice of angular velocity profile. In other words, and dropping the (·)_SW subscripts here and below,
H |_x_2 ≷ 0 = H^± = (- ∂_1, -∂_2, ± f) ·S⃗ .
Notice that H^± are as in Eq. (<ref>).
The Hamiltonian (<ref>) is translation invariant in direction x_1. Solutions of Eq. (<ref>) must be of the form
ψ (x;k_1) = ^ k_1 x_1ψ̂ (x_2;k_1) ,
and the eigenvalue problem can be rewritten fiber-wise as
H (k_1) ψ̂ (x_2;k_1) = ω (k_1) ψ̂ (x_2; k_1) , H(k_1) = (k_1, - ∂_2, f sgn (x_2)) ·S⃗ .
Before solving for ψ̂, let us notice that H enjoys the following symmetry for any odd profile f(x_2) of the angular velocity:
Π H Π^-1 = H ,
where
Π ≡ diag (1,1,-1) ⊗ Π_2 ≡ M Π_2 , Π^2 = 𝕀 ,
and Π_2 denotes the parity operator in direction x_2 (namely Π_2 g(x_2) = g(-x_2) for any function g).
By Eq. (<ref>), H preserves the even/odd parity sectors
Πψ = ±ψ ,
a fact that will be employed later.
Write ψ̂ in components as ψ̂ = (η, u, v), and let moreover
ψ̂_±ψ̂|_x_2 ≷ 0 = (η_±, u_±, v_±) .
Eq. (<ref>) is explicitly rewritten as
[ (H^± - ω) ψ̂_± = 0 ⇔ ψ̂_±∈Ker (H^± - ω); ⇔ - ωη_± + k_1 u_± - v_±' = 0
k_1 η_± - ω u_±∓ f v_± = 0
- η_±' ± f u_± - ω v_± = 0
, ]
where (·)' = ∂_2 (·), ξ = ξ (x_2;k_1) , (ξ = η, u,v) and the equations must hold for all k_1 ∈. Continuity of η,v across x_2 = 0 follows from the general theory of differential equations, see e.g. Chapter 15 in Ref. <cit.>. By contrast, no continuity condition is imposed on u.
Translation invariance in the two half-planes x_2 ≷ 0 and the requirement of square-summability justify the following ansatz:
η_± (x_2) = N_± e^{- κ |x_2|} , u_± (x_2) = U_± e^{- κ |x_2|} , v_± (x_2) = V_± e^{- κ |x_2|} , N_±, U_±, V_± ∈ ℂ ,
with κ > 0 a positive constant.
Moreover, by continuity of η,v
N_+ = N_- ≡ N_0 , V_+ = V_- ≡ V_0 .
Plugging the Ansätze (<ref>) into Eq. (<ref>) leads to
- ω N_0 + k_1 U_±±κ V_0 = 0
k_1 N_0 - ω U_±∓ f V_0 = 0
±κ N_0 ± f U_± - ω V_0 = 0 .
We now consider the even/odd cases Πψ̂ = ±ψ̂ (same as Πψ = ±ψ) separately.
* Even states. Observe that
Πψ̂ (x_2) = ψ̂ (x_2) ⟷ψ̂_+ (x_2) = M ψ̂_- (-x_2) ,
cf. (<ref>,<ref>). Component-wise
η_+ (x_2) = η_- (-x_2) , u_+ (x_2) = u_- (-x_2) , v_+ (x_2) = - v_- (-x_2) ,
which combined with Eqs. (<ref>) gives
N_+ = N_- = N_0 , U_+ = U_- ≡ U_0 , V_+ = - V_- .
The first equality is known by continuity of η. The last one complies with v continuous if and only if
v(x_2) ≡ 0 ↔ V_0 = 0 .
System (<ref>) reduces to
- ω N_0 + k_1 U_0 = 0
k_1 N_0 - ω U_0 = 0
κ N_0 + f U_0 = 0
↔ω N_0 = k_1 U_0
(k_1^2-ω^2) U_0 = 0
(k_1 κ + ω f) U_0 = 0 .
Besides the trivial one N_0 = U_0 = 0, there exists exactly one solution meeting our requirement κ > 0, namely
κ = f , ω = - k_1 , U_0 = - N_0 .
We have thus recovered the edge channel ω_a (k_1) of Eq. (<ref>). By Eqs. (<ref>,<ref>,<ref>), the corresponding state reads
ψ̂_a (x_2;k_1) = C_a e^{- f |x_2|} [ 1; - 1; 0 ] ,
with C_a ∈ some normalization constant. ψ_a (x;k_1) as in Eq. (<ref>) is finally obtained by
ψ_a (x;k_1) = e^{i k_1 x_1} ψ̂_a (x_2;k_1) .
* Odd states. The procedure is identical. Enforcing
Πψ̂ (x_2) = - ψ̂ (x_2) ⟷ψ̂_+ (x_2) = - M ψ̂_- (-x_2) ,
produces
η (x_2) ≡ 0 , u_+ (x_2) = - u_- (-x_2) , v_+ (x_2) = v_- (-x_2) ,
whence
N_0 = 0 , U_+ = - U_- V_+ = V_- = V_0
and
k_1 U_+ = - κ V_0
(f k_1 - ωκ) V_0 = 0
(f κ - ω k_1) V_0 = 0 .
The only non-trivial solution is
κ = | k_1 | , ω = f sgn (k_1) , U_+ = - sgn (k_1) V_0 .
The middle quantity is indeed the odd edge channel ω_b (k_1) of Eq. (<ref>). The corresponding state reads (cf. Eq. (<ref>))
ψ̂_b (x_2;k_1) = C_b e^{- |k_1| |x_2|} [ 0; sgn (x_2) sgn (k_1); i ] ,
or equivalently
ψ_b (x;k_1) = e^{i k_1 x_1} ψ̂_b (x_2;k_1) ,
as in Eq. (<ref>).
|
http://arxiv.org/abs/2307.05685v1 | 20230711180008 | Entanglement transitions and quantum bifurcations\\ under continuous long-range monitoring | [
"Angelo Russomanno",
"Giulia Piccitto",
"Davide Rossini"
] | quant-ph | [
"quant-ph",
"cond-mat.quant-gas"
] |
Scuola Superiore Meridionale, Università di Napoli Federico II
Largo San Marcellino 10, I-80138 Napoli, Italy
Dipartimento di Fisica dell’Università di Pisa and INFN,
Largo Pontecorvo 3, I-56127 Pisa, Italy
Dipartimento di Fisica dell’Università di Pisa and INFN,
Largo Pontecorvo 3, I-56127 Pisa, Italy
We study the asymptotic bipartite entanglement entropy of the quantum trajectories of a free-fermionic system,
when subject to a continuous nonlocal monitoring. The measurements are described by Gaussian-preserving
two-point operators, whose strength decays as a power-law with exponent α.
Different behaviors of the entanglement entropy with the system size emerge: for α below a given threshold value
a volume-law behavior sets in, while for larger α we observe a transition from subvolume to area-law,
whose exact location depends on the measurements rate and on the presence of a Hamiltonian dynamics.
We also consider the expectation probability distribution of the measurement operators,
and find that this distribution features a transition from a unimodal to a bimodal shape. We discuss the possible connections
between this qualitative change of the distribution and the entanglement transition points.
Entanglement transitions and quantum bifurcations
under continuous long-range monitoring
Davide Rossini
=========================================================================================
§ INTRODUCTION
Nowadays it is widely believed that entanglement, alias a kind of quantum correlations with no classical
analog <cit.>, plays an important role in the equilibrium and the out-of-equilibrium
physics of quantum many-body systems <cit.>.
A prototypical example is that of the entanglement entropy for a pure state, which is defined
as the von Neumann entropy of the reduced density matrix of a given portion of the full system.
Due to its peculiar scaling properties at the critical point <cit.>, it may act as
a witness of the presence of quantum phase transitions.
Moreover, in nonequilibrium conditions, it generally increases linearly in time to eventually attain
an asymptotic value proportional to the system size <cit.>,
and the slope of this scaling contains information on the thermalization properties of the system <cit.>.
Such scenario changes qualitatively in the presence of disorder: For example, in many-body localized phases,
the entanglement entropy undergoes a characteristic, much slower, logarithmic increase
in time (see Ref. <cit.> for a review).
More recently, the focus has been moved to situations beyond the unitary dynamics, which consider
the evolution of monitored systems. The interplay between the intrinsic dynamics of the system and that induced
by the quantum measurement process can lead to a variety of scaling regimes for the asymptotic entanglement entropy,
giving rise to the so called entanglement transitions.
In this framework, an extensive number of works has been focusing on local measurements (either discrete or continuous in time)
performed in monitored quantum
circuits <cit.>,
as well as non-interacting <cit.>
and interacting <cit.> Hamiltonian systems.
Moreover, there exists a deep connection between measurement-induced phases and the encoding/decoding properties of a quantum channel <cit.>.
Situations where the dynamics is only induced by random measurements of non-local
string operators (measurement-only dynamics) have been also considered, finding different scaling regimes of the entanglement entropy, according to the statistics of the randomly measured operators, and the range and the nature of the strings <cit.>.
Among the various theoretical models of monitored quantum systems, considerable coverage has been dedicated to the dynamics
of fermionic Gaussian states, in the presence of quadratic Hamiltonians and Gaussian-preserving measurement
processes (see, e.g., Refs. <cit.>),
as they are amenable to an accurate numerical treatment up to relatively large sizes.
In this framework, for short-range Hamiltonians and local measurements, area-law (saturation to a finite value)
or logarithmic scaling of the asymptotic entanglement entropy with the system size have been reported.
A somewhat richer situation has been found for Hamiltonians with extended power-law interactions,
although keeping the measurement operators onsite, where regimes with a power-law scaling of the entanglement entropy
with the system size are possible <cit.>.
Something similar has been considered in the context of quantum circuits <cit.>.
In a recent paper, we have also shown that the measurement-only dynamics through operators connecting two distant sites
can give rise to a non-trivial entanglement entropy dynamics, with a fast growth of the entanglement entropy <cit.>.
In this work we deal with the quantum dynamics of a Kitaev chain under continuous nonlocal monitoring, which
can be cast as a quantum state diffusion unraveling <cit.>
of a Lindblad master equation with long-range Lindblad operators.
Specifically, we consider two-point fermionic measurement operators, suitably chosen
to preserve Gaussianity, where the coupling decays as a power-law with some exponent α>0.
In the context of dissipation engineering, similar kind of dissipators have been already scrutinized
in some recent
works <cit.>;
these can be realized with two-level atoms in lossy cavity QED experiments, using a magnetic field gradient and a
Raman drive with multiple sidebands <cit.>.
In noninteracting spins monitored by infinite-range operators, an entanglement transition
from area-law to sublogarithm scaling can occur <cit.>.
Here we first consider the asymptotic bipartite entanglement entropy and find a rather rich phenomenology:
for α smaller than a threshold value α^⋆_1, it obeys a volume law,
suggesting a strong entangling power of the long-range measurement operators.
For intermediate values of α, a crossover region emerges, in which the entanglement entropy
scales non-trivially with the size.
For α larger than α^⋆_2 (> α^⋆_1), we recover the area-law scaling
observed in the presence of onsite measurements.
The fact that 0.5 ≲ α^⋆_1 ≲ 1, independently of the Hamiltonian parameters and
of the coupling to the measurement apparatus, is suggestive.
Indeed, α=1 is the threshold below which both the unitary (Hamiltonian) long-range dynamics
in one dimension <cit.> and at least one instance of long-range Lindblad dynamics
in one dimension <cit.> are exactly described by the mean-field approximation.
We also focus on a measurement-only dynamics, i.e., one in which no Hamiltonian provides
a unitary part of the evolution.
In that case, we still find evidence that 0.5 ≲ α^⋆_1 ≲ 1. Moreover,
we can locate the transition point between subvolume and area-law behavior at α^⋆_2 ∼ 2,
suggesting an even more interesting comparison with the behavior of long-range Hamiltonians,
where the system behaves as short-range above the threshold α=2 <cit.>.
Finally, we study the probability distribution of the expectation values of the measurement operators.
When increasing α, the distribution of these expectation values over a single quantum trajectory
undergoes a transition from unimodal (one maximum) to bimodal (two maxima), at a point
α̅ that is not immediately related to the change of scaling of the entanglement entropy.
Such a transition is reminiscent of the bifurcations occurring in nonlinear driven-dissipative classical dynamical systems, where a single stable stationary point splits into two <cit.>.
Here, due to the presence of quantum fluctuations and classical noise, there are no stationary points; their counterparts are the maxima of the distribution, whose number changes from one to two.
In view of this analogy, we dub the unimodal-to-bimodal transition
of the distribution of the expectation values a “quantum bifurcation”.
The paper is organized as follows. In Sec. <ref>, we define our model, specifying both the Hamiltonian
and the measurement operators, together with the bipartite entanglement entropy we are going to analyze.
In Sec. <ref>, we introduce the quantum state diffusion unraveling of the Lindblad master equation
and discuss how to treat the time evolution of the system, preserving the Gaussian form of the wavefunction.
Section <ref> is devoted to the presentation of our numerical findings summarized above,
for the entanglement entropy (Sec. <ref>) and for the expectation distribution of the measurement operators
(Sec. <ref>). In Sec. <ref> we draw our conclusions.
§ MODEL
We start from a system of spinless fermions on a one-dimensional lattice with N sites, described by the Kitaev Hamiltonian <cit.>
Ĥ = ∑_i [ J ( ĉ_i - ĉ_i^† )( ĉ_{i+1} + ĉ_{i+1}^† ) + 2h ĉ_i^†ĉ_i ] .
The real constants J and h set, respectively, the nearest-neighbor coupling and the chemical potential
μ ≡ 2h, while ĉ_i^{(†)} are annihilation (creation) operators on the i-th site (i=1,…,N),
obeying canonical anticommutation relations.
The Hamiltonian (<ref>) is responsible for the unitary part of the dynamics.
We notice that this model can be mapped, via a Jordan-Wigner transformation, onto a
quantum Ising chain in a transverse field <cit.>. Hereafter we set J=1 as the energy scale
and work in units of ħ=1.
We consider the Lindblad master equation
d/dt ρ(t) = - i [ Ĥ, ρ(t) ]
- (γ/2) ∑_i ( { ℓ̂_i^†ℓ̂_i, ρ } - 2 ℓ̂_i ρ ℓ̂_i^† ) ,
with measurement operators
ℓ̂_i = ∑_j f_{ij} ( ĉ_i - ĉ_i^† )( ĉ_j + ĉ_j^† ) ,
and focus on its quantum state diffusion unraveling.
This corresponds to a continuous time monitoring of the system, which is described by
the following stochastic Schrödinger equation for the pure state |ψ_t⟩:
d|ψ_t⟩ = -i Ĥ dt |ψ_t⟩ + ∑_i ( √(γ) [ ℓ̂_i - ⟨ℓ̂_i⟩_t ] dW_t^i
- (γ/2) [ ℓ̂_i - ⟨ℓ̂_i⟩_t ]^2 dt ) |ψ_t⟩ ,
where γ>0 is the coupling strength to the measurement apparatus,
⟨ℓ̂_i⟩_t = ⟨ψ_t| ℓ̂_i |ψ_t⟩, and W_t^i are independent Wiener processes
describing the quantum-state-diffusion process that unravels Eq. (<ref>).
In Eq. (<ref>), the real prefactor f_{ij} is assumed to decay algebraically with the distance D_{i,j} between sites i and j,
such that
f_{ij} = (1/N(α)) (1+D_{i,j})^{-α} , (α ≥ 0) .
Here N(α) = (N-1)^{-1} ∑_{i,j} (1+D_{i,j})^{-α} is a proper normalization constant (the Kac factor),
ensuring extensivity in the system <cit.>. In what follows, we choose periodic boundary conditions for the fermions (such that
ĉ_{j+N}^{(†)} ≡ ĉ_j^{(†)} for any j), so that D_{i,j} = min(|i-j|, N-|i-j|). Notice also that
the ℓ̂_i are Hermitian, ℓ̂_i = ℓ̂_i^†.
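For concreteness, the following minimal Python sketch (ours, not part of the original formulation) builds the coupling matrix f_{ij} exactly as written above, with the periodic-boundary distance D_{i,j} and the Kac factor N(α); the function name and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

def coupling_matrix(N, alpha):
    """Power-law couplings f_ij = (1/N(alpha)) (1 + D_ij)^(-alpha), with
    periodic distance D_ij = min(|i-j|, N-|i-j|) and Kac factor
    N(alpha) = (N-1)^(-1) sum_ij (1 + D_ij)^(-alpha), as quoted in the text."""
    idx = np.arange(N)
    D = np.abs(idx[:, None] - idx[None, :])
    D = np.minimum(D, N - D)                 # periodic-boundary distance
    raw = (1.0 + D) ** (-float(alpha))
    kac = raw.sum() / (N - 1)                # Kac normalization factor
    return raw / kac

f = coupling_matrix(N=64, alpha=1.5)         # symmetric: f equals its transpose
```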
An important property of the operators ℓ̂_i is that
ℓ̂_i^2 = ∑_{j,l} f_{ij} f_{il} ( ĉ_j + ĉ_j^† )( ĉ_l + ĉ_l^† ) = ∑_j f_{ij}^2 ,
where we used the anticommutation relations for fermions and the fact that (ĉ_i - ĉ_i^†)^2 = -1.
Thanks to this property, Eq. (<ref>) can be seen as a Schrödinger equation
with a non-Hermitian quadratic Hamiltonian. As a consequence, the state |ψ_t⟩ retains a simple Gaussian form,
described by just N(N-1)/2 independent complex parameters, as we discuss in more detail in Sec. <ref>.
One can thus push the numerics to system sizes of a few hundred sites and investigate how the presence
of power-law decaying measurement operators affects the production of entanglement during the quantum dynamics.
To this purpose, we concentrate on the entanglement entropy of a subchain
of length l, averaged over different quantum trajectories
S_l(t) ≡ - Tr [ ρ_l ln ρ_l ] ,
where the logarithm is taken in the natural basis.
Here, ρ_l(t) = Tr_N-l[|ψ_t⟩⟨ψ_t|] is the reduced density matrix
of the subchain and |ψ_t⟩ is the (pure) state of a single quantum trajectory
given by a single realization of the stochastic Schrödinger equation dynamics in Eq. (<ref>)
(see also Sec. <ref>).
To obtain the average entanglement entropy, we evaluate it on each single stochastic quantum trajectory
and then ensemble-average over different realizations.
In our analysis, we will mostly focus on the asymptotic long-time value
S_l = lim_{T→∞} (T - t^*)^{-1} ∫_{t^*}^{T} dt' S_l(t') .
As discussed in Ref. <cit.>, for fermionic Gaussian states the entanglement entropy can be determined
from the knowledge of the correlation functions, which are introduced in the next section.
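A possible NumPy sketch of this entropy evaluation is the following; it assumes the index conventions G[j,l] = ⟨ĉ_j^†ĉ_l⟩ and F[j,l] = ⟨ĉ_jĉ_l⟩ for those correlation matrices (a different convention simply requires transposing the blocks), and both the function name and the eigenvalue-based construction are our own illustrative choices rather than the procedure of Ref. <cit.>.

```python
import numpy as np

def entanglement_entropy(G, F, sites):
    """Von Neumann entropy of a fermionic Gaussian state reduced to `sites`.

    Assumes G[j,l] = <c_j^dag c_l> (Hermitian) and F[j,l] = <c_j c_l>
    (antisymmetric). The 2l x 2l Nambu correlation matrix of the subsystem
    has eigenvalues {n_k, 1 - n_k}; summing -nu ln nu over all of them
    gives the entanglement entropy.
    """
    GA = G[np.ix_(sites, sites)]
    FA = F[np.ix_(sites, sites)]
    l = len(sites)
    C = np.block([[np.eye(l) - GA.T, FA],
                  [-FA.conj(),       GA]])
    nu = np.linalg.eigvalsh(C).clip(1e-12, 1.0 - 1e-12)
    return float(-(nu * np.log(nu)).sum())
```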
§ DYNAMICS UNDER CONTINUOUS MONITORING
Equation (<ref>) can be discretized in time and cast as a sequence of Trotterized evolution steps
that, in the limit Δ t → 0, converge back to Eq. (<ref>) <cit.>. In each Trotterized step,
the measurement and the unitary part of the dynamics act separately and in sequence:
|ψ_{t+Δt}⟩ ≃ C e^{ ∑_i ( A_i ℓ̂_i - γ ℓ̂_i^2 Δt ) } e^{ -i Ĥ Δt } |ψ_t⟩ ,
where we have defined
A_i ≡ √(γ) ΔW_t^i + 2 γ ⟨ℓ̂_i⟩_t Δt ,
with Δ W_t^i being independent real Gaussian
random variables with vanishing expectation value and variance Δ t.
Expression (<ref>) can be further simplified by using Eq. (<ref>).
In this way one can rewrite Eq. (<ref>) in the simpler form
|ψ_{t+Δt}⟩ ≃ C̃ e^{ ∑_i A_i ℓ̂_i } e^{ -i Ĥ Δt } |ψ_t⟩ ,
where the irrelevant constant exp( -γ Δ t ∑_j f_i j^2 ), coming from the exponential
of ℓ̂_i^2, has been absorbed into the normalization prefactor C̃.
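A minimal sketch of how the stochastic coefficients A_i entering this Trotterized step can be generated numerically is given below; it assumes the correlation-matrix conventions G[j,l] = ⟨ĉ_j^†ĉ_l⟩ and F[j,l] = ⟨ĉ_jĉ_l⟩ (introduced in the next section) to evaluate ⟨ℓ̂_i⟩_t, and all function names are ours.

```python
import numpy as np

def ell_expectations(f, G, F):
    """<l_i>_t = sum_j f_ij <(c_i - c_i^dag)(c_j + c_j^dag)>.

    With the assumed conventions G[j,l] = <c_j^dag c_l>, F[j,l] = <c_j c_l>,
    the bracket evaluates to 2 Re F_ij + delta_ij - G_ij - G_ji.
    """
    bracket = 2.0 * F.real + np.eye(len(G)) - G - G.T
    return np.einsum('ij,ij->i', f, bracket).real

def noise_amplitudes(f, G, F, gamma, dt, rng):
    """A_i = sqrt(gamma) dW_t^i + 2 gamma <l_i>_t dt for one Trotter step."""
    dW = rng.normal(0.0, np.sqrt(dt), size=len(G))   # Wiener increments, variance dt
    return np.sqrt(gamma) * dW + 2.0 * gamma * ell_expectations(f, G, F) * dt
```

Here `rng` would be, e.g., `np.random.default_rng(seed)`, supplying the Gaussian increments ΔW_t^i.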
Since both the ℓ̂_i and the Kitaev Hamiltonian are quadratic in the fermionic operators ĉ_j^{(†)},
when starting from an initial Gaussian state the time evolution of Eq. (<ref>) preserves
Gaussianity. In particular, the state |ψ_t⟩ can be cast as
|ψ_t⟩ = 𝒩_t exp( (1/2) ∑_{j_1,j_2} [Z_t]_{j_1,j_2} ĉ_{j_1}^† ĉ_{j_2}^† ) |0⟩ ,
where 𝒩_t is a normalization factor and |0⟩ denotes the vacuum state of the ĉ fermions;
the state is thus uniquely described by the N×N antisymmetric matrix Z_t (which, being antisymmetric, contains N(N-1)/2 independent complex parameters).
From the matrix Z_t, one can easily derive any two-point correlation functions.
Defining
[G_t]_{j,l} ≡ ⟨ψ_t| ĉ_l^† ĉ_j |ψ_t⟩ , [F_t]_{j,l} ≡ ⟨ψ_t| ĉ_l ĉ_j |ψ_t⟩ ,
the correlation matrices can be written in terms of the matrix Z_t as <cit.>
G_t = ( 1 + Z_t Z_t^† )^{-1} Z_t Z_t^† , F_t = ( 1 + Z_t Z_t^† )^{-1} Z_t ,
where 1 is the N×N identity matrix.
Since Z_t^T = -Z_t, we see that G_t = G_t^T and F_t = -F_t^T.
In the next subsection we present a simple numerical prescription (whose computational cost scales
polynomially with N) to evaluate the matrix Z_t after the application of the unitary
and the dissipative parts of the evolution step in Eq. (<ref>), and therefore the entanglement entropy.
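As an aside, the relations above for G_t and F_t admit a direct NumPy transcription, which we sketch here purely for illustration (the function name is ours):

```python
import numpy as np

def correlations_from_Z(Z):
    """G = (1 + Z Z^dag)^(-1) Z Z^dag and F = (1 + Z Z^dag)^(-1) Z,
    for an antisymmetric matrix Z parametrizing the Gaussian state."""
    N = Z.shape[0]
    ZZ = Z @ Z.conj().T
    M = np.eye(N) + ZZ
    G = np.linalg.solve(M, ZZ)   # avoids forming the explicit inverse
    F = np.linalg.solve(M, Z)
    return G, F
```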
§.§ Evolution of the matrix Z_t
It is possible to write a system of ordinary differential equations for the matrix Z_t that describes
the evolution in Eq. (<ref>) and can be solved efficiently (an alternative derivation
can be found in Ref. <cit.>).
Both the unitary step and the measurement step in Eq. (<ref>) can be described
as the application, to a Gaussian state of the form of Eq. (<ref>), of an operator
of the form e^{-ξT̂Δ}, where
T̂ = ∑_{i,j} ( D_{i,j} ĉ_i^†ĉ_j + O_{i,j} ĉ_i^†ĉ_j^† + h.c. )
is a generic (Hermitian) quadratic operator, Δ is real, and ξ ∈ {i, -1} accounts for dynamics
in real or imaginary time, respectively.
Let |ψ⟩ be a Gaussian state of the form (<ref>), described by the antisymmetric matrix Z
as in Eq. (<ref>), to which we apply the operator e^{-ξT̂Δ}.
The resulting state |ψ'⟩ ≡ e^{-ξT̂Δ} |ψ⟩ is still Gaussian, and its corresponding
matrix Z' is obtained by integrating the system of ordinary differential equations
ξ (d/ds) Z(s) = 2 [ D·Z(s) + Z(s)·D + O + Z(s)·O·Z(s) ]
from s=0 to s=Δ, with initial condition Z(0) ≡ Z, as shown in Ref. <cit.>.
The unitary step of Eq. (<ref>) is obtained by setting ξ=i, Δ = Δt,
and T̂ = Ĥ, such that
D_{i,i+1} = - D_{i,i-1} = -J/2 , D_{i,i} = h ,
O_{i,i+1} = - O_{i,i-1} = -J/2 ,
and zero otherwise.
Analogously, the dissipative step is obtained by setting ξ=-1, Δ=1, and T̂ = ∑_i A_i ℓ̂_i
[with A_i the real coefficients defined in Eq. (<ref>)].
Using the anticommutation relations and the symmetry of the couplings f_{ij} = f_{ji}, we can write
∑_i A_i ℓ̂_i = ∑_{i,j} [ A_i f_{ij} ( ĉ_i ĉ_j + ĉ_j^†ĉ_i ) + h.c. ]
= (1/2) ∑_{i,j} [ (A_i - A_j) f_{ij} ĉ_i ĉ_j + (A_i + A_j) f_{ij} ĉ_j^†ĉ_i + h.c. ] ,
so that in Eq. (<ref>) one has
D_{i,j} = -(1/2)(A_i + A_j) f_{ij} , O_{i,j} = -(1/2)(A_i - A_j) f_{ij} .
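In practice, the two quadratic-form matrices can be assembled as in the following sketch (periodic boundary conditions are assumed, and the helper names are ours):

```python
import numpy as np

def kitaev_DO(N, J=1.0, h=0.5):
    """D and O for the unitary step, with the nonzero entries quoted above:
    D_{i,i+1} = -D_{i,i-1} = -J/2, D_{i,i} = h, O_{i,i+1} = -O_{i,i-1} = -J/2."""
    D = np.diag(np.full(N, h, dtype=complex))
    O = np.zeros((N, N), dtype=complex)
    for i in range(N):
        D[i, (i + 1) % N] += -J / 2
        D[i, (i - 1) % N] += +J / 2
        O[i, (i + 1) % N] += -J / 2
        O[i, (i - 1) % N] += +J / 2
    return D, O

def measurement_DO(A, f):
    """D and O for the dissipative step: D_ij = -(A_i + A_j) f_ij / 2,
    O_ij = -(A_i - A_j) f_ij / 2, with A_i the stochastic coefficients."""
    D = -0.5 * (A[:, None] + A[None, :]) * f
    O = -0.5 * (A[:, None] - A[None, :]) * f
    return D, O
```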
Defining N× N matrices U(s), V(s) such that
U^†(s) Z(s) = - V^†(s) ,
we can show <cit.> that, if and only if Z(s) obeys Eq. (<ref>),
then U(s) and V(s) satisfy the linear system of differential equations
ξ \frac{d}{ds} \begin{pmatrix} U(s) \\ V(s) \end{pmatrix} = \begin{pmatrix} -D & -O \\ -O & -D \end{pmatrix} \begin{pmatrix} U(s) \\ V(s) \end{pmatrix} .
This can be straightforwardly integrated to give <cit.>
\begin{pmatrix} U' \\ V' \end{pmatrix} =
\exp\!\left[ -2 ξ Δ \begin{pmatrix} -D & -O \\ -O & -D \end{pmatrix} \right] \begin{pmatrix} U(0) \\ V(0) \end{pmatrix} ,
where U(0) and V(0) correspond to the initial condition Z(0) for |ψ⟩.
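Taking the block structure and the prefactor of this expression at face value, one possible NumPy/SciPy sketch of the propagation step is (the function name is ours):

```python
import numpy as np
from scipy.linalg import expm

def propagate_UV(U, V, D, O, xi, Delta):
    """(U'; V') = exp[-2 xi Delta ((-D, -O), (-O, -D))] (U; V),
    with xi = 1j for the unitary step and xi = -1 for the dissipative one."""
    M = np.block([[-D, -O],
                  [-O, -D]])
    W = expm(-2.0 * xi * Delta * M) @ np.vstack([U, V])
    N = U.shape[0]
    return W[:N], W[N:]
```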
The above observation provides a direct and simple solution to the problem of finding how the matrix
Z of a Gaussian state, as in Eq. (<ref>), is modified by the evolution (<ref>).
The latter is composed of a unitary step followed by a dissipative step: each time an operator
of the form e^{-ξT̂Δ}
is applied to the state (<ref>), the matrix Z ≡ -[U^†]^{-1} V^†
is transformed into Z' ≡ -[U'^†]^{-1} V'^†,
with the matrices U' and V' given by Eq. (<ref>).
In the measurement step, to restore the normalization of the state it is necessary to perform the QR decomposition
\begin{pmatrix} U' \\ V' \end{pmatrix} = \begin{pmatrix} U_Q \\ V_Q \end{pmatrix} R ,
where R is an N×N upper-triangular matrix and U_Q, V_Q obey the unitarity condition U_Q^† U_Q + V_Q^† V_Q = 1. On the one hand, the QR decomposition does not modify the matrix Z that defines the state, since it is easy to check that Z' = -[U'^†]^{-1} V'^† = -[U_Q^†]^{-1} V_Q^†. On the other hand, it restores unitarity <cit.>, allowing the evaluation of the correlation matrices as
G' = 1 - U_Q U_Q^† , F' = - U_Q V_Q^† ,
as one can easily check by substituting Z' = -[U_Q^†]^{-1} V_Q^†
into Eqs. (<ref>) and imposing the unitarity condition.
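The renormalization and the extraction of the correlation matrices can be condensed into a few NumPy lines, sketched below with our own naming:

```python
import numpy as np

def renormalize_and_correlations(Up, Vp):
    """Thin QR decomposition of (U'; V') and correlation matrices
    G' = 1 - U_Q U_Q^dag, F' = -U_Q V_Q^dag, as quoted in the text."""
    N = Up.shape[0]
    Q, _ = np.linalg.qr(np.vstack([Up, Vp]))   # Q has shape (2N, N)
    UQ, VQ = Q[:N], Q[N:]
    G = np.eye(N) - UQ @ UQ.conj().T
    F = -UQ @ VQ.conj().T
    return UQ, VQ, G, F
```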
§ RESULTS
The results presented below have been obtained by initializing the system in the ground state
of the Hamiltonian (<ref>) with J=1, h_i=100,
and letting it evolve after a sudden quench of the field to h=0.5.
We checked that the asymptotic value of the entanglement entropy, as well as the probability
distribution of the expectation values of the measurement operators, is not affected by the choice of h_i and depends only weakly on h.
Therefore, without loss of generality, hereafter we keep them fixed.
To compute the entanglement entropy, we choose a balanced bipartition by taking l = N/2 [see Eq. (<ref>)],
and finally perform the averages in Eqs. (<ref>) and (<ref>)
over a given number N_r of realizations of the stochastic process.
On the other hand, to obtain the full counting statistics, we evolve a single quantum trajectory
up to a long time T.
Details on the convergence of our numerical results are provided in Appendix <ref>.
§.§ Entanglement entropy
§.§.§ Dynamics with unitary and measurement parts
We specifically address the behavior of the average asymptotic entanglement entropy [see Eq. (<ref>)]
as a function of the system size N and of the power-law exponent α for the measurement operator.
We first consider a free-fermionic system described by the Kitaev Hamiltonian (<ref>),
and continuously monitored through the long-range operators (<ref>).
In Fig. <ref>(a) we show S_N/2 versus the system size N,
for different values of α (color gradient).
We notice that, for α≲ 1, it exhibits a volume-law scaling (i.e., it grows linearly with N).
When increasing the power-law exponent, the curves bend to eventually show a flat profile, for very large α.
This behavior suggests the emergence of a volume law in the long-range regime (α < 1) that,
after a crossover at intermediate values of α, turns into an area law
for short-range monitoring (α ≫ 1).
This can be appreciated more clearly in Fig. <ref>(b), where we plot the normalized
asymptotic entanglement S_N/2/N versus the power-law exponent α.
As expected, for α ≲ 1 the curves for different values of N collapse onto a finite value,
evidencing a linear scaling with N. On the other hand, for α ≳ 3.5,
the curves approach zero, signaling the onset of a regime where the dependence
of S_l on N is very weak, if not absent (i.e., area-law behavior).
In the intermediate regime 1 ≲α≲ 3.5, we also observe a less-than-linear dependence
on the system size, which is more difficult to characterize properly.
Further insight into the sublinear region (α ≳ 1) can be obtained after rescaling the entropy by ln(N),
as in Fig. <ref>(c).
In particular, looking at the inset, the curves for different system sizes exhibit a crossing at α ∼ 3.2.
This should correspond to the value marking the transition between a more-than-logarithmic
and a sublogarithmic (most probably area-law) dependence on N.
At this point we note that, since for α > 2 the measurement operators have a short-range character,
one cannot rule out the possibility of a further transition in the intermediate region, from a power-law (sublinear)
to a logarithmic scaling, before ending up in the area-law region at α ≳ 3.2.
Although hardly visible in our numerical data, such a logarithmic scaling, if present, would be
of the same kind as that emerging in free-fermionic systems in the presence
of local monitoring <cit.>.
Summarizing, we can locate two special points α^⋆_1 and α^⋆_2 separating three
regions with qualitatively different behaviors in the entropy scaling with N
(increasing α, we have volume-law, intermediate subvolume, and area-law scalings of S_l with N).
To the best of our numerics, for γ = 0.1 (corresponding to the data reported in Fig. <ref>),
the turning points correspond to 0.5 ≲α^⋆_1 ≲ 1 and α^⋆_2 ∼ 3.2.
While the position of α^⋆_1 is quite robust against changes of the Hamiltonian parameters,
this does not seem to be the case for α^⋆_2.
In fact, we have performed simulations for other values of γ (see, e.g.,
the data for γ=0.5 in Appendix <ref>) and
found that, while the three above regimes (volume-law, intermediate crossover, and area-law) are still present,
the transition point from the intermediate to the area-law behavior
moves to different values of the power-law exponent (namely, α^⋆_2 decreases with increasing γ).
In contrast, we always find 0.5 ≲ α^⋆_1 ≲ 1.
§.§.§ Measurement-only dynamics
We now switch to the study of a measurement-only dynamics, i.e., for the case without a Hamiltonian providing
a unitary part in the dynamics (J = h = 0).
The plot of S_N/2/N versus α is provided in Fig. <ref>(a)
[corresponding to Fig. <ref>(b) for the case with Hamiltonian], and S_N/2/ln N versus α
can be found in Fig. <ref>(b) [corresponding to Fig. <ref>(c)
for the case with Hamiltonian].
We notice that the behavior in the small-α regime is quite stable and, in particular,
exhibits a volume-law scaling with N for α ≲ 1.
This suggests that the transition point at 0.5 ≲ α^⋆_1 ≲ 1 does not depend on the presence
of a Hamiltonian and is thus a property of the measurement operators alone.
There is still an intermediate region featuring a subvolume scaling, which disappears at α^⋆_2 ∼ 1.9,
corresponding to the intersection point of the curves S_N/2/ln N [cf. the crossing point of the curves in
the inset of Fig. <ref>(b)].
The fact that α^⋆_1 appears to be independent of the system parameters suggests a comparison with other
long-range systems.
On the one hand, it is known that long-range Hermitian Hamiltonians in one dimension behave as mean-field
for N→∞ when α < 1 and as short-range for α > 2, while for 1 < α < 2 there is an intermediate
regime in which the excited states of the system can break a symmetry, but in a non-mean-field way <cit.>.
In the case without Hamiltonian, the dynamics is generated by a long-range noisy (pseudo-)Hamiltonian in imaginary time
[see Eq. (<ref>)], and it is interesting that the transition points of the
dynamics (0.5 ≲ α^⋆_1 ≲ 1 and α^⋆_2 ∼ 1.9) approximately coincide with those of the unitary dynamics.
We also recall that, at least in one case <cit.>, α = 1 is the threshold below which
a mean-field description is exact for N→∞ for a one-dimensional Lindblad dynamics with long-range Lindbladians.
We conclude the section with a remark on the behavior of α^⋆_2. Considering that the measurement-only case corresponds to the limit of infinite γ (more precisely, γ ≫ h, J), we find that α^⋆_2 decreases with γ, as shown in Table <ref>.
§.§ Expectation probability distribution of the measurement operators
We now consider the statistics of the expectation values of the measurement operators, a quantity that is
more directly relevant experimentally, since it is given by the expectation values of physically observable operators.
Recent studies have pointed out that, for local measurements, the properties of this distribution
or of related quantities may be connected to the entanglement transitions <cit.>.
Operationally, we consider a single quantum trajectory, evolve it up to a time T
with a given discretization step Δt, and evaluate all the expectation values ⟨ℓ̂_j⟩_{t_n}
at the discrete times t_n = n Δt (n = 1, …, T/Δt)
and for all sites j = 1, …, N. We then arrange these data into a normalized histogram.
This is the distribution of the expectation values of the measurement operators, which we call P(ℓ).
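In practice, this histogram construction amounts to a few NumPy lines, sketched below (the bin number and the function name are our illustrative choices):

```python
import numpy as np

def expectation_histogram(ell_values, bins=200):
    """Normalized histogram P(l) of the expectation values <l_i>_{t_n}
    collected along a single trajectory (all sites and all recorded times)."""
    data = np.asarray(ell_values).ravel()
    P, edges = np.histogram(data, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, P
```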
In Fig. <ref> we show, for the dynamics of the monitored Kitaev chain with γ=0.1,
the histograms of the probability of ℓ
for α = 0 (a), α = 2 (b), α = 3 (c), and α = 4 (d).
The various curves in each panel are for different system sizes (color gradient – see legend).
The distributions for α > 1 tend to a limit with increasing system size,
while for α ≤ 1 they converge to a limiting shape only after an appropriate rescaling.
It is evident that the shape of the distribution exhibits a crossover from a unimodal to a bimodal character,
depending on the value of α. As already emphasized, this is reminiscent of bifurcations
in nonlinear classical driven-dissipative dynamical systems <cit.>,
where one stationary point splits into two. Here we also have quantum fluctuations and classical noise,
so instead of stationary points we have maxima, whose number changes from one (unimodal) to two (bimodal).
To locate the turning point α̅, in the main panel of Fig. <ref>(a) we plot the absolute value
of the position of the maximum, |ℓ(max[P(ℓ)])|, versus α. The latter starts deviating from zero
at α̅ ≳ 2, which is far from both the crossover points identified from the entanglement dynamics
(0.5 ≲ α^⋆_1 ≲ 1 and α^⋆_2 ∼ 3.2, for γ=0.1).
The inset of Fig. <ref>(a) shows the variance of the distribution (logarithmic scale on the y-axis).
For α ≲ 1 the variance is size dependent, meaning that the distribution shrinks with increasing N.
This dependence is still present for 1 < α ≲ 2, but it seems to disappear for the larger system sizes.
For α > 2, no size dependence is observed and, consistently with the bimodal character of the distribution, the variance becomes considerably larger.
Different information can be extracted by looking at the value of the absolute maximum of the distribution,
max[P(ℓ)], shown in Fig. <ref>(b) (logarithmic scale on the y-axis).
The first observation is that, in accordance with the behavior of the variance, the absolute maximum exhibits a strong size dependence for α ≲ 1.
This size dependence is still present at small N for 1 < α ≲ 2, and eventually disappears for larger power-law exponents.
We then notice that max[P(ℓ)] behaves non-monotonically in α. The absolute minimum
occurs not far from the value α^⋆_2 at which we observed the transition to the area-law regime of the entanglement entropy.
Since we do not have any theoretical insight into this, we do not make a direct connection between the two transitions.
We finally comment that a different scenario emerges for the measurement-only dynamics.
In this case we observe the transition from unimodal to bimodal character at α̅ ∼ 1.
As discussed in Sec. <ref>, this value corresponds to α^⋆_1,
at which we observe the crossover of the entanglement entropy
from the volume-law to the subvolume-law phase (cf. Fig. <ref>).
This result is consistent with the hypothesis that the interplay with the Hamiltonian can generate an intermediate region
displaying more complex features.
No clear information can be extracted from the analysis of the maxima or of the moments of the distribution (e.g., the variance).
§ CONCLUSION
We have studied the dynamics of the entanglement entropy of a fermionic Kitaev chain undergoing a quantum-state-diffusion evolution,
as a result of a continuous measurement process generated by two-point, power-law decaying operators.
This dynamics preserves the Gaussianity of the state, allowing us to simulate systems of up to a few hundred sites.
First, we focused on the asymptotic entanglement entropy, averaged over the different stochastic measurement processes,
both as a function of the system size and of the power-law measurement exponent α.
We found three regimes: for α < α^⋆_1 (with 0.5 ≲ α^⋆_1 ≲ 1),
the entanglement scales linearly with the system size N, that is, as a volume law; on the other hand,
for α > α^⋆_2 (with α^⋆_2 dependent on the parameters of the system), it exhibits
a sublogarithmic (probably area-law) scaling.
A similar behavior emerges when considering the measurement-only dynamics.
In this case, the transition from volume-law to the non-trivial phase roughly occurs at the same
value of 0.5 ≲α^⋆_1 ≲ 1 observed for the full Hamiltonian and measurement-induced evolution,
suggesting that this transition is an effect of the measurement process only.
The other transition point at α^⋆_2, from the subvolume to the area-law phase,
shifts to a smaller power-law exponent.
These findings suggest a comparison with the case of one-dimensional long-range Hamiltonians,
where two values of α marking dynamical transitions are also present:
The investigation of a possible connection between the unitary case and our non-Hermitian dynamics
may be the focus of future research.
Second, we considered the probability distribution of the expectation values of the measurement operators.
For the dynamics both with and without the Hamiltonian, we have seen that this distribution exhibits
a transition from a unimodal to a bimodal behavior when α is increased above a given threshold α̅.
However, while for the measurement-only dynamics this transition occurs at the value α^⋆_1
at which the entanglement entropy passes from volume to subvolume scaling,
in the additional presence of the Kitaev Hamiltonian this correspondence disappears. The absolute maximum of the distribution,
however, behaves non-monotonically in α and exhibits a minimum at a power-law exponent that is
compatible with the transition from the subvolume to the area-law phase.
Nevertheless, this phenomenon is very interesting in itself, being a quantum analog of the bifurcations
occurring in classical driven-dissipative dynamical systems. For that reason we dub it a “quantum bifurcation”.
In view of the apparently large finite-size effects, a confirmation of the stability of the different
regimes as a function of α could come from other quantities, such as the mutual information or the correlation
functions.
It would also be tempting to investigate the dependence of these results on the specific unraveling.
For example, one could check whether the α^⋆_1 threshold is robust against the stochastic process
chosen to simulate the Lindblad master equation, i.e., whether it is a property of the operators themselves,
as discussed in Ref. <cit.>.
Moreover, the effects of long-range measurement operators could be tested in other systems,
such as quantum circuits <cit.>.
From an experimental perspective, it is important to investigate the connection between the transition
of the entanglement entropy and the quantum bifurcation of the distribution.
Before concluding, we mention that a remarkably similar phenomenology has been observed in Ref. <cit.>, where a system of two coupled chains of monitored free fermions is considered.
In that work it is shown that non-Markovian effects can be induced on one of the two chains (referred to as the system) by performing Markovian measurements on the other one (referred to as the bath).
This non-Markovianity is reflected in the entanglement dynamics, which exhibits three different regimes: an area-law scaling, a logarithmic scaling, and a mixed (logarithmic-volume) scaling. Although it would be interesting to investigate the connection between this non-Markovianity and our non-locality, we leave this to future studies.
We thank V. Alba, G. Chiriacò, and J. De Nardis for fruitful discussions.
A. R. acknowledges computational resources from MUR, PON “Ricerca e Innovazione 2014-2020”,
under Grant No. PIR01 00011 - (I.Bi.S.Co.).
We acknowledge support from the Italian MIUR through PRIN Project No. 2017E44HRF.
§ CONVERGENCE OF THE NUMERICAL RESULTS
All the results have been obtained by fixing the integration step to Δt = 5 × 10^{-3}. This value has been chosen after a convergence check.
The average entanglement entropy S_N/2, defined in Eq. (<ref>), is evaluated after averaging over a finite time T,
which has been chosen a posteriori, once convergence is attained.
In Fig. <ref> we show the characteristic behavior of S_N/2(t) for different system sizes (color scale) and α = 2. From this figure it is clear that convergence is reached within reasonable times.
The ensemble average is evaluated over N_r ≥ 48 trajectories. The inequality means that for small system sizes we can easily average over N_r = O(10^2) trajectories, while for larger N the numerical effort required by the simulations does not allow us to go beyond N_r = 48.
However, we checked that all the results are consistent within the error bars δS_N/2, evaluated as
δS_N/2 = (1/√(N_r)) [ lim_{T→∞} (T - t^*)^{-1} ∫_{t^*}^{T} dt' S_N/2^2(t') - S_N/2^2 ]^{1/2} .
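In practice, if each trajectory provides a time-averaged entropy over the stationary window, an error bar in the spirit of the expression above reduces to the standard error of the mean over the N_r realizations, as in the following sketch (ours):

```python
import numpy as np

def average_with_error(S_traj):
    """Ensemble mean and error bar of the asymptotic entropy, given the
    time-averaged entropy of each of the N_r trajectories in `S_traj`."""
    S_traj = np.asarray(S_traj, dtype=float)
    mean = S_traj.mean()
    err = S_traj.std(ddof=1) / np.sqrt(len(S_traj))
    return mean, err
```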
§ CASE WITH HAMILTONIAN AND Γ = 0.5
Here we provide results for a case similar to the one considered in Fig. <ref>,
with the only difference that now γ=0.5. The corresponding numerical data are shown in Fig. <ref>.
Looking at the plot of S_N/2/N versus α, we see that the volume law still persists for small values of α,
up to 0.5 ≲ α^⋆_1 ≲ 1 [Fig. <ref>(a)].
From the data in the inset at fixed size, we also notice that, for γ=0.5, the entanglement generally drops faster than for γ=0.1.
On the other hand, the transition from the subvolume law to the sublogarithmic law occurs at a different value of α^⋆_2
(α^⋆_2 ∼ 2.4), as one can see from the crossing of the curves of S_N/2/ln N versus α
for different sizes N [inset of Fig. <ref>(b)].
[Nielsen and Chuang(2011)]Nielsen
author author M. A. Nielsen and author I. L. Chuang, @noop title Quantum Computation and
Quantum Information: 10th Anniversary Edition (publisher
Cambridge University Press, address Cambridge, UK, year 2011)NoStop
[Horodecki et al.(2009)Horodecki, Horodecki, Horodecki, and Horodecki]RevModPhys.81.865
author author R. Horodecki, author P. Horodecki, author M. Horodecki, and author K. Horodecki, title title Quantum entanglement, https://doi.org/10.1103/RevModPhys.81.865 journal
journal Rev. Mod. Phys. volume 81, pages 865 (year 2009)NoStop
[Amico et al.(2008)Amico,
Fazio, Osterloh, and Vedral]Amico_RMP
author author L. Amico, author R. Fazio,
author A. Osterloh, and author V. Vedral, title title Entanglement in many-body systems, https://doi.org/10.1103/RevModPhys.80.517 journal journal Rev. Mod. Phys. volume 80, pages 517 (year 2008)NoStop
[Vidal et al.(2003)Vidal,
Latorre, Rico, and Kitaev]Vidal2003
author author G. Vidal, author J. I. Latorre,
author E. Rico, and author A. Kitaev, title
title Entanglement in quantum critical phenomena, https://doi.org/10.1103/physrevlett.90.227902 journal
journal Phys. Rev. Lett. volume 90, pages 227902 (year 2003)NoStop
[Latorre et al.(2004)Latorre, Rico, and Vidal]Vidal2003b
author author J. I. Latorre, author E. Rico, and author G. Vidal, title title Ground state entanglement in quantum spin
chains, https://doi.org/10.5555/2011572.2011576 journal journal Quantum Inf. Comput. volume 4, pages 48 (year 2004)NoStop
[Alba and Calabrese(2017)]Alba_2017
author author V. Alba and author P. Calabrese, title title Entanglement and
thermodynamics after a quantum quench in integrable systems, https://doi.org/10.1073/pnas.1703516114 journal journal Proc. Natl. Acad. Sci. U.S.A. volume
114, pages 7947 (year 2017)NoStop
[Alba and Calabrese(2018)]Alba_2018
author author V. Alba and author P. Calabrese, title title Entanglement dynamics
after quantum quenches in generic integrable systems, https://doi.org/10.21468/SciPostPhys.4.3.017 journal
journal SciPost Phys. volume 4, pages 017 (year 2018)NoStop
[Singh et al.(2016)Singh,
Bardarson, and Pollmann]Singh_2016
author author R. Singh, author J. H. Bardarson, and author F. Pollmann, title title Signatures of the
many-body localization transition in the dynamics of entanglement and
bipartite fluctuations, https://doi.org/10.1088/1367-2630/18/2/023046 journal
journal New J. Phys. volume 18, pages 023046 (year 2016)NoStop
[Russomanno et al.(2020)Russomanno, Fava, and Fazio]PhysRevB.102.144302
author author A. Russomanno, author M. Fava, and author R. Fazio, title title Nonergodic behavior of the clean Bose-Hubbard
chain, https://doi.org/10.1103/PhysRevB.102.144302 journal journal Phys. Rev. B volume
102, pages 144302 (year 2020)NoStop
[Abanin et al.(2019)Abanin,
Altman, Bloch, and Serbyn]Abanin_RMP
author author D. A. Abanin, author E. Altman,
author I. Bloch, and author M. Serbyn, title
title Colloquium: Many-body localization, thermalization, and
entanglement, https://doi.org/10.1103/RevModPhys.91.021001
journal journal Rev. Mod. Phys. volume 91, pages 021001 (year
2019)NoStop
[Li et al.(2018)Li,
Chen, and Fisher]Li2018
author author Y. Li, author X. Chen, and author M. P. A. Fisher, title title Quantum Zeno effect and the
many-body entanglement transition, https://doi.org/10.1103/PhysRevB.98.205136 journal journal Phys. Rev. B volume 98, pages 205136 (year 2018)NoStop
[Chan et al.(2019)Chan,
Nandkishore, Pretko, and Smith]Chan2019
author author A. Chan, author R. M. Nandkishore, author M. Pretko, and author G. Smith, title title Unitary-projective
entanglement dynamics, https://doi.org/10.1103/PhysRevB.99.224307
journal journal Phys. Rev. B volume 99, pages 224307 (year
2019)NoStop
[Skinner et al.(2019)Skinner, Ruhman, and Nahum]Skinner2019
author author B. Skinner, author J. Ruhman, and author A. Nahum, title title Measurement-induced phase transitions in the
dynamics of entanglement, https://doi.org/10.1103/PhysRevX.9.031009 journal journal Phys. Rev. X volume 9, pages
031009 (year 2019)NoStop
[Szyniszewski et al.(2019)Szyniszewski, Romito, and Schomerus]Szynieszewski2019
author author M. Szyniszewski, author A. Romito, and author H. Schomerus, title title Entanglement transition
from variable-strength weak measurements, https://doi.org/10.1103/PhysRevB.100.064204 journal journal Phys. Rev. B volume 100, pages 064204 (year 2019)NoStop
[Potter and Vasseur(2022)]Vasseur2021
author author A. C. Potter and author R. Vasseur, title title Entanglement dynamics in
hybrid quantum circuits, in https://doi.org/10.1007/978-3-031-03998-0_9 booktitle
Quantum Science and Technology (publisher Springer
International Publishing, year 2022) pp. pages
211–249NoStop
[Bao et al.(2021a)Bao, Choi, and Altman]Bao2021
author author Y. Bao, author S. Choi, and author E. Altman, title title Symmetry enriched phases of quantum circuits, https://doi.org/10.1016/j.aop.2021.168618 journal
journal Ann. Phys. volume 435, pages 168618 (year 2021a)NoStop
[Nahum and Skinner(2020)]Nahum2020
author author A. Nahum and author B. Skinner, title title Entanglement and dynamics
of diffusion-annihilation processes with Majorana defects, https://doi.org/10.1103/PhysRevResearch.2.023288 journal
journal Phys. Rev. Res. volume 2, pages 023288 (year 2020)NoStop
[Chen et al.(2020)Chen,
Li, Fisher, and Lucas]Chen2020
author author X. Chen, author Y. Li, author M. P. A. Fisher, and author A. Lucas, title
title Emergent conformal symmetry in nonunitary random dynamics
of free fermions, https://doi.org/10.1103/physrevresearch.2.033017
journal journal Phys. Rev. Res. volume 2, pages 033017 (year
2020)NoStop
[Li et al.(2019)Li,
Chen, and Fisher]Li2019
author author Y. Li, author X. Chen, and author M. P. A. Fisher, title title Measurement-driven entanglement
transition in hybrid quantum circuits, https://doi.org/10.1103/PhysRevB.100.134306 journal journal Phys. Rev. B volume 100, pages 134306 (year 2019)NoStop
[Jian et al.(2020)Jian,
You, Vasseur, and Ludwig]Jian2020
author author C.-M. Jian, author Y.-Z. You,
author R. Vasseur, and author A. W. W. Ludwig, title title Measurement-induced criticality in random quantum
circuits, https://doi.org/10.1103/PhysRevB.101.104302 journal journal Phys. Rev. B volume
101, pages 104302 (year 2020)NoStop
[Y. Li, R. Vasseur, M. P. A. Fisher, and A. W. W.
Ludwig(2021)]Li2021
author author Y. Li, R. Vasseur, M.
P. A. Fisher, and A. W. W. Ludwig, @noop title
Statistical mechanics model for Clifford random tensor networks and
monitored quantum circuits (year 2021), https://arxiv.org/abs/2110.02988 arXiv:2110.02988 NoStop
[Szyniszewski et al.(2020)Szyniszewski, Romito, and Schomerus]Szyniszewski2020
author author M. Szyniszewski, author A. Romito, and author H. Schomerus, title title Universality of
entanglement transitions from stroboscopic to continuous measurements, https://doi.org/10.1103/PhysRevLett.125.210602 journal
journal Phys. Rev. Lett. volume 125, pages 210602 (year 2020)NoStop
[Turkeshi et al.(2020)Turkeshi, Fazio, and Dalmonte]Turkeshi2020
author author X. Turkeshi, author R. Fazio, and author M. Dalmonte, title title Measurement-induced criticality in
(2+1)-dimensional hybrid quantum circuits, https://doi.org/10.1103/PhysRevB.102.014315 journal journal Phys. Rev. B volume 102, pages 014315 (year 2020)NoStop
[Lunt et al.(2021)Lunt,
Szyniszewski, and Pal]Lunt2021
author author O. Lunt, author M. Szyniszewski, and author A. Pal, title title Measurement-induced criticality and entanglement
clusters: A study of one-dimensional and two-dimensional Clifford
circuits, https://doi.org/10.1103/PhysRevB.104.155111 journal journal Phys. Rev. B volume
104, pages 155111 (year 2021)NoStop
[Sierant et al.(2022a)Sierant, Schirò, Lewenstein, and Turkeshi]Sierant2022_B
author author P. Sierant, author M. Schirò,
author M. Lewenstein, and author X. Turkeshi, title title Measurement-induced phase transitions in
(d+1)-dimensional stabilizer circuits, https://doi.org/10.1103/PhysRevB.106.214316 journal journal Phys. Rev. B volume 106, pages 214316 (year 2022a)NoStop
[Nahum et al.(2021)Nahum,
Roy, Skinner, and Ruhman]Nahum2021
author author A. Nahum, author S. Roy, author B. Skinner, and author
J. Ruhman, title title Measurement and entanglement phase transitions in all-to-all quantum
circuits, on quantum trees, and in Landau-Ginsburg theory, https://doi.org/10.1103/PRXQuantum.2.010352 journal journal PRX Quantum volume 2, pages
010352 (year 2021)NoStop
[Zabalo et al.(2020)Zabalo,
Gullans, Wilson, Gopalakrishnan, Huse, and Pixley]Zabalo2020
author author A. Zabalo, author M. J. Gullans,
author J. H. Wilson, author S. Gopalakrishnan, author D. A. Huse, and author J. H. Pixley, title
title Critical properties of the measurement-induced transition
in random quantum circuits, https://doi.org/10.1103/PhysRevB.101.060301 journal journal Phys. Rev. B volume 101, pages 060301 (year 2020)NoStop
[Sierant and Turkeshi(2022)]Sierant2022_A
author author P. Sierant and author X. Turkeshi, title title Universal behavior
beyond multifractality of wave functions at measurement-induced phase
transitions, https://doi.org/10.1103/PhysRevLett.128.130605
journal journal Phys. Rev. Lett. volume 128, pages 130605 (year
2022)NoStop
[Chiriacò et al.(2023)Chiriacò, Tsitsishvili, Poletti,
Fazio, and Dalmonte]Chiriaco2023
author author G. Chiriacò, author M. Tsitsishvili, author D. Poletti, author R. Fazio, and author M. Dalmonte, @noop title Diagrammatic method for many-body
non-Markovian dynamics: memory effects and entanglement transitions
(year 2023), https://arxiv.org/abs/2302.10563
arXiv:2302.10563 NoStop
[Klocke and Buchhold(2023)]Klocke2023
author author K. Klocke and author M. Buchhold, @noop title Majorana loop models for
measurement-only quantum circuits (year 2023), https://arxiv.org/abs/2305.18559 arXiv:2305.18559 NoStop
[Cao et al.(2019)Cao,
Tilloy, and De Luca]DeLuca2019
author author X. Cao, author A. Tilloy, and author A. De Luca, title title Entanglement in a fermion chain under continuous
monitoring, https://doi.org/10.21468/SciPostPhys.7.2.024
journal journal SciPost Phys. volume 7, pages 24 (year
2019)NoStop
[Buchhold et al.(2021)Buchhold, Minoguchi, Altland, and Diehl]Buchhold2021
author author M. Buchhold, author Y. Minoguchi,
author A. Altland, and author S. Diehl, title
title Effective theory for the measurement-induced phase
transition of Dirac fermions, https://doi.org/10.1103/PhysRevX.11.041004 journal journal Phys. Rev. X volume 11, pages 041004 (year 2021)NoStop
[Jian et al.(2022)Jian,
Bauer, Keselman, and Ludwig]Jian2022
author author C.-M. Jian, author B. Bauer,
author A. Keselman, and author A. W. W. Ludwig, title title Criticality and entanglement in
nonunitary quantum circuits and tensor networks of noninteracting fermions, https://doi.org/10.1103/PhysRevB.106.134206 journal
journal Phys. Rev. B volume 106, pages 134206 (year 2022)NoStop
[Coppola et al.(2022)Coppola, Tirrito, Karevski, and Collura]Coppola2022
author author M. Coppola, author E. Tirrito,
author D. Karevski, and author M. Collura, title title Growth of entanglement entropy under local
projective measurements, https://doi.org/10.1103/PhysRevB.105.094303 journal journal Phys. Rev. B volume 105, pages 094303 (year 2022)NoStop
[Fava et al.(2023)Fava,
Piroli, Swann, Bernard, and Nahum]Fava2023
author author M. Fava, author L. Piroli,
author T. Swann, author D. Bernard, and author
A. Nahum, @noop title
Nonlinear sigma models for monitored dynamics of free fermions (year 2023), https://arxiv.org/abs/2302.12820
arXiv:2302.12820 NoStop
[Poboiko et al.(2023)Poboiko, Pöpperl, Gornyi, and Mirlin]Poboiko2023
author author I. Poboiko, author P. Pöpperl, author I. V. Gornyi, and author A. D. Mirlin, @noop title Theory of free fermions under
random projective measurements (year 2023), https://arxiv.org/abs/2304.03138 arXiv:2304.03138 [quant-ph] NoStop
[Jian et al.(2023)Jian,
Shapourian, Bauer, and Ludwig]Jian2023
author author C.-M. Jian, author H. Shapourian,
author B. Bauer, and author A. W. W. Ludwig, @noop
title Measurement-induced entanglement transitions in quantum
circuits of non-interacting fermions: Born-rule versus forced measurements
(year 2023), https://arxiv.org/abs/2302.09094
arXiv:2302.09094 NoStop
[Merritt and Fidkowski(2023)]Merritt2023
author author J. Merritt and author L. Fidkowski, title title Entanglement
transitions with free fermions, https://doi.org/10.1103/PhysRevB.107.064303 journal journal Phys. Rev. B volume 107, pages 064303 (year 2023)NoStop
[Alberton et al.(2021)Alberton, Buchhold, and Diehl]Alberton2021
author author O. Alberton, author M. Buchhold, and author S. Diehl, title title Entanglement transition in a monitored
free-fermion chain: From extended criticality to area law, https://doi.org/10.1103/physrevlett.126.170602 journal
journal Phys. Rev. Lett. volume 126, pages 170602 (year 2021)NoStop
[Turkeshi et al.(2021a)Turkeshi, Biella,
Fazio, Dalmonte, and Schirò]Turkeshi2021
author author X. Turkeshi, author A. Biella,
author R. Fazio, author M. Dalmonte, and author M. Schirò, title
title Measurement-induced entanglement transitions in the
quantum Ising chain: From infinite to zero clicks, https://doi.org/10.1103/physrevb.103.224210 journal journal Phys. Rev. B volume 103, pages 224210 (year 2021a)NoStop
[Szyniszewski et al.(2022)Szyniszewski, Lunt, and Pal]Szynieszewski2022
author author M. Szyniszewski, author O. Lunt, and author A. Pal, @noop
title Disordered monitored free fermions (year
2022), https://arxiv.org/abs/2211.02534 arXiv:2211.02534
[quant-ph] NoStop
[Turkeshi et al.(2021b)Turkeshi, Dalmonte, Fazio, and Schirò]Turkeshi2022
author author X. Turkeshi, author M. Dalmonte,
author R. Fazio, and author M. Schirò, title
title Entanglement transitions from stochastic resetting of
non-hermitian quasiparticles, https://doi.org/10.1103/PhysRevB.105.L241114 journal
journal Phys. Rev. B volume 105, pages L241114 (year
2021b)NoStop
[Piccitto et al.(2022a)Piccitto, Russomanno, and Rossini]Piccitto2022
author author G. Piccitto, author A. Russomanno, and author D. Rossini, title title Entanglement transitions
in the quantum Ising chain: A comparison between different unravelings of
the same Lindbladian, https://doi.org/10.1103/PhysRevB.105.064305 journal journal Phys. Rev. B volume 105, pages 064305 (year 2022a)NoStop
[Piccitto et al.(2022b)Piccitto, Russomanno, and Rossini]Piccitto2022e
author author G. Piccitto, author A. Russomanno, and author D. Rossini, title title Erratum: Entanglement
transitions in the quantum Ising chain: A comparison between different
unravelings of the same Lindbladian, https://doi.org/10.1103/PhysRevB.106.219901 journal journal Phys. Rev. B volume 106, pages 219901(E) (year 2022b)NoStop
[Tirrito et al.(2022)Tirrito, Santini, Fazio, and Collura]Tirrito2022
author author E. Tirrito, author A. Santini,
author R. Fazio, and author M. Collura, @noop
title Full counting statistics as probe of measurement-induced
transitions in the quantum Ising chain (year 2022), https://arxiv.org/abs/2212.09405 arXiv:2212.09405 [cond-mat.stat-mech]
NoStop
[Paviglianiti and Silva(2023)]Paviglianiti2023
author author A. Paviglianiti and author A. Silva, @noop title Multipartite entanglement in the
measurement-induced phase transition of the quantum Ising chain (year 2023), https://arxiv.org/abs/2302.06477 arXiv:2302.06477
[quant-ph] NoStop
[Lunt and Pal(2020)]Lunt2020
author author O. Lunt and author A. Pal, title title Measurement-induced entanglement
transitions in many-body localized systems, https://doi.org/10.1103/PhysRevResearch.2.043072 journal
journal Phys. Rev. Res. volume 2, pages 043072 (year 2020)NoStop
[Rossini and Vicari(2020)]Rossini2020
author author D. Rossini and author E. Vicari, title title Measurement-induced
dynamics of many-body systems at quantum criticality, https://doi.org/10.1103/PhysRevB.102.035119 journal journal Phys. Rev. B volume 102, pages 035119 (year 2020)NoStop
[Tang and Zhu(2020)]Tang2020
author author Q. Tang and author W. Zhu, title title Measurement-induced phase transition:
A case study in the nonintegrable model by density-matrix renormalization
group calculations, https://doi.org/10.1103/PhysRevResearch.2.013022 journal
journal Phys. Rev. Res. volume 2, pages 013022 (year 2020)NoStop
[Fuji and Ashida(2020)]Fuji2020
author author Y. Fuji and author Y. Ashida, title title Measurement-induced quantum
criticality under continuous monitoring, https://doi.org/10.1103/PhysRevB.102.054302 journal journal Phys. Rev. B volume 102, pages 054302 (year 2020)NoStop
[Sierant et al.(2022b)Sierant, Chiriacò, Surace, Sharma, Turkeshi, Dalmonte, Fazio, and Pagano]Sierant2021
author author P. Sierant, author G. Chiriacò,
author F. M. Surace, author S. Sharma, author
X. Turkeshi, author
M. Dalmonte, author
R. Fazio, and author
G. Pagano, title title Dissipative Floquet dynamics: from steady state to measurement
induced criticality in trapped-ion chains, https://doi.org/10.22331/q-2022-02-02-638 journal journal Quantum volume 6, pages
638 (year 2022b)NoStop
[Doggen et al.(2022)Doggen,
Gefen, Gornyi, Mirlin, and Polyakov]Doggen2022
author author E. V. H. Doggen, author Y. Gefen, author I. V. Gornyi,
author A. D. Mirlin, and author D. G. Polyakov, title title Generalized quantum measurements with
matrix product states: Entanglement phase transition and clusterization, https://doi.org/10.1103/PhysRevResearch.4.023146 journal journal Phys. Rev. Res. volume
4, pages 023146 (year 2022)NoStop
[Altland et al.(2022)Altland, Buchhold, Diehl, and Micklitz]Altland2022
author author A. Altland, author M. Buchhold,
author S. Diehl, and author T. Micklitz, title
title Dynamics of measured many-body quantum chaotic systems, https://doi.org/10.1103/PhysRevResearch.4.L022066 journal journal Phys. Rev. Res. volume
4, pages L022066 (year 2022)NoStop
[Gullans and Huse(2020a)]Gullans2020_A
author author M. J. Gullans and author D. A. Huse, title title Scalable probes of
measurement-induced criticality, https://doi.org/10.1103/PhysRevLett.125.070606 journal
journal Phys. Rev. Lett. volume 125, pages 070606 (year 2020a)NoStop
[Gullans and Huse(2020b)]Gullans2020_B
author author M. J. Gullans and author D. A. Huse, title title Dynamical purification phase
transition induced by quantum measurements, https://doi.org/10.1103/PhysRevX.10.041020 journal journal Phys. Rev. X volume 10, pages 041020 (year 2020b)NoStop
[Lóio et al.(2023)Lóio,
De Luca, De Nardis, and Turkeshi]Loio2023
author author H. Lóio, author A. De Luca,
author J. De Nardis, and author X. Turkeshi, @noop
title Purification timescales in monitored fermions
(year 2023), https://arxiv.org/abs/2303.12216
arXiv:2303.12216 NoStop
[Choi et al.(2020)Choi,
Bao, Qi, and Altman]Choi2020
author author S. Choi, author Y. Bao, author X.-L. Qi, and author
E. Altman, title title Quantum error correction in scrambling dynamics and
measurement-induced phase transition, https://doi.org/10.1103/PhysRevLett.125.030505 journal
journal Phys. Rev. Lett. volume 125, pages 030505 (year 2020)NoStop
[Bao et al.(2020)Bao,
Choi, and Altman]Bao2020
author author Y. Bao, author S. Choi, and author E. Altman, title title Theory of the phase transition in random unitary
circuits with measurements, https://doi.org/10.1103/PhysRevB.101.104301 journal journal Phys. Rev. B volume 101, pages 104301 (year 2020)NoStop
[Bao et al.(2021b)Bao, Choi, and Altman]Bao2021_A
author author Y. Bao, author S. Choi, and author E. Altman, title title Symmetry enriched phases of quantum circuits, https://doi.org/10.1016/j.aop.2021.168618 journal
journal Ann. Phys. volume 435, pages 168618 (year 2021b)NoStop
[Fidkowski et al.(2021)Fidkowski, Haah, and Hastings]Fidkowski2021
author author L. Fidkowski, author J. Haah, and author M. B. Hastings, title title How dynamical quantum memories
forget, https://doi.org/10.22331/q-2021-01-17-382 journal journal Quantum volume
5, pages 382 (year 2021)NoStop
[Bao et al.(2021c)Bao, Block, and Altman]Bao2021_B
author author Y. Bao, author M. Block, and author E. Altman, @noop
title Finite time teleportation phase transition in random
quantum circuits (year 2021c), https://arxiv.org/abs/2110.06963 arXiv:2110.06963 NoStop
[Barratt et al.(2022)Barratt, Agrawal, Potter, Gopalakrishnan, and Vasseur]Barratt2022_A
author author F. Barratt, author U. Agrawal,
author A. C. Potter, author S. Gopalakrishnan, and author R. Vasseur, title
title Transitions in the learnability of global charges from
local measurements, https://doi.org/10.1103/PhysRevLett.129.200602
journal journal Phys. Rev. Lett. volume 129, pages 200602 (year
2022)NoStop
[Dehghani et al.(2023)Dehghani, Lavasani, Hafezi, and Gullans]Dehgani2023
author author H. Dehghani, author A. Lavasani,
author M. Hafezi, and author M. J. Gullans, title title Neural-network decoders for measurement induced
phase transitions, https://doi.org/10.1038/s41467-023-37902-1
journal journal Nat. Commun. volume 14, pages 2918 (year
2023)NoStop
[Kelly et al.(2022)Kelly,
Poschinger, Schmidt-Kaler, Fisher, and Marino]Kelly2022
author author S. P. Kelly, author U. Poschinger,
author F. Schmidt-Kaler, author M. P. A. Fisher, and author J. Marino, @noop
title Coherence requirements for quantum communication from
hybrid circuit dynamics (year 2022), https://arxiv.org/abs/2210.11547 arXiv:2210.11547 NoStop
[Ippoliti et al.(2021)Ippoliti, Gullans, Gopalakrishnan,
Huse, and Khemani]Ippoliti2021
author author M. Ippoliti, author M. J. Gullans, author S. Gopalakrishnan, author D. A. Huse, and author V. Khemani, title title Entanglement phase
transitions in measurement-only dynamics, https://doi.org/10.1103/PhysRevX.11.011030 journal journal Phys. Rev. X volume 11, pages 011030 (year 2021)NoStop
[Sriram et al.(2022)Sriram,
Rakovszky, Khemani, and Ippoliti]Sriram2022
author author A. Sriram, author T. Rakovszky,
author V. Khemani, and author M. Ippoliti, @noop
title Topology, criticality, and dynamically generated qubits in
a stochastic measurement-only Kitaev model (year 2022), https://arxiv.org/abs/2207.07096 arXiv:2207.07096 [quant-ph]
NoStop
[Lang and Büchler(2020)]Lang2020
author author N. Lang and author H. P. Büchler, title title Entanglement transition
in the projective transverse field Ising model, https://doi.org/10.1103/PhysRevB.102.094204 journal journal Phys. Rev. B volume 102, pages 094204 (year 2020)NoStop
[Minato et al.(2022)Minato,
Sugimoto, Kuwahara, and Saito]Minato2022
author author T. Minato, author K. Sugimoto,
author T. Kuwahara, and author K. Saito, title title Fate of measurement-induced phase transition in
long-range interactions, https://doi.org/10.1103/PhysRevLett.128.010603 journal
journal Phys. Rev. Lett. volume 128, pages 010603 (year 2022)NoStop
[Zerba and Silva(2023)]Zerba2023
author author C. Zerba and author A. Silva, @noop title Measurement phase transitions in the
no-click limit as quantum phase transitions of a non-hermitean vacuum
(year 2023), https://arxiv.org/abs/2301.07383
arXiv:2301.07383 [quant-ph] NoStop
[Müller et al.(2022)Müller, Diehl, and Buchhold]M_ller_2022
author author T. Müller, author S. Diehl, and author M. Buchhold, title title Measurement-induced dark state phase
transitions in long-ranged fermion systems, https://doi.org/10.1103/physrevlett.128.010605 journal
journal Phys. Rev. Lett. volume 128, pages 010605 (year 2022)NoStop
[Block et al.(2022)Block,
Bao, Choi, Altman, and Yao]Block_2021
author author M. Block, author Y. Bao, author S. Choi, author
E. Altman, and author
N. Yao, title title The measurement-induced transition in long-range interacting quantum
circuits, https://doi.org/10.1103/PhysRevLett.128.010604
journal journal Phys. Rev. Lett. volume 128, pages 010604 (year
2022)NoStop
[Sharma et al.(2022)Sharma,
Turkeshi, Fazio, and Dalmonte]Sharma_2022
author author S. Sharma, author X. Turkeshi,
author R. Fazio, and author M. Dalmonte, title
title Measurement-induced criticality in extended and long-range
unitary circuits, https://doi.org/10.21468/scipostphyscore.5.2.023
journal journal SciPost Phys. Core volume 5, pages 023 (year
2022)NoStop
[Piccitto et al.(2023)Piccitto, Russomanno, and Rossini]Piccitto2023
author author G. Piccitto, author A. Russomanno, and author D. Rossini, @noop title Entanglement dynamics with
string measurement operators (year 2023), https://arxiv.org/abs/2303.07102 arXiv:2303.07102 [cond-mat.stat-mech]
NoStop
[Gisin and Percival(1992)]Gisin1992
author author N. Gisin and author I. C. Percival, title title The quantum-state
|
http://arxiv.org/abs/2307.10395v1 | 20230714133845 | RDSim, a fast and comprehensive simulation of radio detection of air showers | [
"Washington R. de Carvalho Jr.",
"Abha Khakurdikar"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.IM"
] |
RDSim, a fast and comprehensive simulation of radio detection of air showers
Washington R. de Carvalho Jr. and Abha Khakurdikar
=============================================================================
§ INTRODUCTION
Initially proposed in the sixties, the radio detection of cosmic rays has undergone a renaissance in the last 15 years. It has now come of age as it has been shown to be competitive with other detection techniques while offering many advantages over them. It is currently being used by several cosmic ray and neutrino experiments worldwide <cit.>.
In this context we introduce RDSim, a framework for the simulation of the radio emission of extensive air showers (EAS) and its detection by an arbitrary antenna array. It is being developed with speed in mind and uses simple, yet still precise, toymodel-like approaches to simulate both the radio emission and the detector response. After an initial set up, it is able to simulate in detail millions of events in just a few minutes. This speed makes it possible to investigate larger areas around the detector, study events with very low detection probability and examine geometrical effects, such as border effects and those that arise due to asymmetries in the radio array. Thanks to the large statistics it makes possible, RDSim is specially suited to be used as a fast and accurate aperture calculator.
This work is organized as follows: Section <ref> describes the radio emission and detector response models used, including trigger parameters and a brief description of the optional particle trigger simulation.
Section <ref> outlines the extra models used in the case of neutrino events, such as sampling of the neutrino interaction point and tau-lepton propagation. Section <ref> describes the general structure and procedures of RDSim, which is followed by a discussion on Section <ref>.
§ RADIO EMISSION AND DETECTOR RESPONSE MODELING
The radio emission model in RDSim is based on the superposition of the Askaryan and geomagnetic emission mechanisms and is an expansion of the model presented in <cit.>. It uses as input full ZHAireS <cit.> simulations of just a few antennas along a reference line. The superposition model then disentangles the Askaryan and geomagnetic components in order to get the amplitudes of the peak electric field, for each of the emission mechanisms separately, as a function of the distance to the core along the reference line. By assuming an elliptical symmetry for the amplitudes of each mechanism and using their theoretical polarizations (see Fig. <ref>), it is able to estimate the net peak electric field, along with its polarization, at any position on the ground. Given an arbitrary observer at a distance r from the center of the ellipse (blue antenna on Fig. <ref>), we use the elliptical symmetry to get the distance R_ref along the reference line (red line in Fig. <ref>) where we sample the Askaryan and geomagnetic amplitudes. We then add them up, taking into account their expected theoretical polarizations, to obtain the net electric field and polarization at the desired observer position (see <cit.> for more details).
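As an illustration of this superposition step, the short sketch below reconstructs the net field at an arbitrary antenna from two tabulated amplitude profiles. It is a minimal stand-in rather than RDSim's actual interface: the profile interpolators, the polarization helpers and the assumption that the reference line lies along the x axis of the ground plane are all ours.
[language=Python]
import numpy as np

def net_peak_field(obs_xy, core_xy, axis_ratio,
                   askaryan_profile, geomagnetic_profile,
                   askaryan_pol, geomagnetic_pol):
    # askaryan_profile / geomagnetic_profile: 1D interpolators (e.g. built with
    # scipy.interpolate.interp1d) of the peak amplitudes along the reference line,
    # obtained from a ZHAireS simulation of a few antennas.
    # askaryan_pol / geomagnetic_pol: callables returning unit polarization
    # vectors at the observer position (illustrative placeholders).
    dx, dy = np.asarray(obs_xy, float) - np.asarray(core_xy, float)
    # Elliptical symmetry: map the observer onto an equivalent distance R_ref
    # along the reference line (assumed here to lie along the x axis).
    r_ref = np.hypot(dx, dy * axis_ratio)
    e_ask = askaryan_profile(r_ref) * askaryan_pol(obs_xy)
    e_geo = geomagnetic_profile(r_ref) * geomagnetic_pol(obs_xy)
    return e_ask + e_geo   # net peak electric-field vector at the observer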
The original superposition model described in <cit.> did not take Early-Late effects into account and started to become inaccurate for showers with θ>70^∘ (see Fig. 7 of <cit.> and the right panel of Fig. <ref>, where the old model is labeled as “No scaling”). In this new iteration of the model we now take into account the changes in the distance to the shower as the position of the observer changes. This is modeled by scaling both the Askaryan and geomagnetic amplitudes at the relevant point R_ref on the reference line by the ratio D_ref/D_obs, where D_ref is the distance between the shower and R_ref and D_obs is the distance between the shower and the observer.
We have compared the results of our expanded superposition model with full simulations of the radio emission. An example can be seen on Fig. <ref>. The left panel shows the results of a full ZHAireS simulation of an 80^∘ shower and the middle panel shows the estimated field using the new model at the same positions. On the right panel we show the amplitudes of the electric field along the major axis of the elliptical radio footprint, where the Early-Late effects are maximum. One can see that the new model (marked as “With Scaling”) has a very good agreement with the full simulation. The maximum difference in this example comparison is 6%.
The old model could only estimate the net electric field for the exact same arrival direction as the ZHAireS simulation used to construct it. We have implemented a way to rotate the new superposition model to any desired azimuth angle, making it possible to reuse a single input ZHAireS simulation multiple times. In order to do this we rotate the ellipse to match the azimuth of the new desired arrival direction. But here we also have to take into account the changes in the angle α between the shower axis and the magnetic field (see left panel of Fig. <ref>), which has an impact on the amplitude of the geomagnetic component of the emission, as it roughly scales with sinα. To correct for this we scale the geomagnetic amplitudes along the whole reference line by sinα'/sinα, where α (α') is the angle between the original (rotated) shower axis and B⃗. We have found that the errors introduced by this rotation are very small, as can be seen in Fig. <ref>, where we compare the fields obtained by an unrotated model of a shower with θ=85^∘ coming from the West (left) with one rotated in azimuth by 45^∘ to match that same arrival direction (right). The maximum difference in this example is just 2%. We have also implemented a simple linear scaling of the electric field with shower energy, extending even further the phase-space that can be covered by a single ZHAireS input simulation.
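A sketch of how a single reference simulation might be reused for a different azimuth and primary energy is given below; the rescaling factors mirror the description above, but the function names and signatures are illustrative only and do not correspond to RDSim's internal classes.
[language=Python]
import numpy as np

def rescale_profiles(askaryan_profile, geomagnetic_profile,
                     axis_old, axis_new, b_field, e_old, e_new):
    # axis_old / axis_new: unit vectors of the original and rotated shower axes;
    # b_field: geomagnetic field vector; e_old / e_new: primary energies.
    b_hat = np.asarray(b_field, float) / np.linalg.norm(b_field)
    sin_alpha = np.linalg.norm(np.cross(axis_old, b_hat))        # sin(alpha)
    sin_alpha_rot = np.linalg.norm(np.cross(axis_new, b_hat))    # sin(alpha')
    energy_scale = e_new / e_old                   # linear scaling with energy
    geo_scale = energy_scale * sin_alpha_rot / sin_alpha
    scaled_askaryan = lambda r: askaryan_profile(r) * energy_scale
    scaled_geomagnetic = lambda r: geomagnetic_profile(r) * geo_scale
    return scaled_askaryan, scaled_geomagnetic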
The characteristics of the detector, including its response, are modeled in a very simple way. The antenna positions on a flat plane defined by the ground altitude at the array are arbitrary and read from an input text file. In the case of arrays that do not measure the vertical component of the electric field, RDSim can be set to just use its horizontal component. Antenna triggers are modeled by a simple settable threshold in electric field amplitude (default is 100 μV/m). The effect of the beam pattern of the antennas can also be taken into account by setting an optional input file. By default we assume the pattern is the same for each detectable polarization and we only take into account the zenithal dependence of the beam pattern w.r.t. the radiation arrival direction. For a given antenna, we then just multiply the original electric field obtained from the superposition model by the beam pattern, obtaining an "effective" electric field, which is then used instead of the original field for the simple threshold trigger. An array-level trigger is also implemented in a very simple way, by a settable minimum number of antenna-level triggers required to consider the whole event as triggered. At the end of the run, the information of all events is saved to a compressed ROOT file, including the components of the measured electric field of each triggered antenna. This makes it possible to implement more complex analyses outside the simulation, e.g. more complex triggers, such as the "veto antenna" approach of the radio-only trigger at OVRO-LWA <cit.>, or the simple particle trigger described below.
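The trigger logic described above fits in a few lines; the sketch below uses the default 100 μV/m threshold and an illustrative beam-pattern callable of our own, not RDSim's actual input format.
[language=Python]
import numpy as np

def array_trigger(peak_fields, zenith, beam_pattern=None,
                  threshold=100e-6, min_triggered=3):
    # peak_fields: (N, 3) array of peak E-field vectors at the N antennas (V/m).
    # zenith: arrival zenith angle of the radiation (rad), used by the beam pattern.
    amplitudes = np.linalg.norm(peak_fields, axis=1)
    if beam_pattern is not None:
        amplitudes = amplitudes * beam_pattern(zenith)   # "effective" electric field
    antenna_hits = amplitudes >= threshold               # antenna-level threshold trigger
    event_triggered = antenna_hits.sum() >= min_triggered  # array-level trigger
    return event_triggered, antenna_hits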
For detectors that require ground particles in order to trigger, we have implemented a simple particle trigger simulation. At the moment we only model high-energy muons arriving at the ground. This particle trigger is based on a simple model to approximate the muon density at ground level, which uses low-thinning AIRES <cit.> simulations as input. As is the case with the radio emission model, we implemented a rotation of the muon model in order to use a single AIRES simulation to approximate the muon density for many arrival directions. More details can be seen in <cit.>. The shape of the particle detector is taken into account by using an effective area A_eff(θ) as a function of the shower zenith angle, which is akin to a shadow area of the detector on the ground. For a circular water tank like the ones used in AUGER <cit.>, A_eff(θ) = π r^2 + 2rh tanθ, where r is the tank radius and h is the height of the water inside it. In the case of a horizontally installed scintillator at ground level, the effective area is just the geometrical area of the detector. We then estimate the number of muons crossing the detector by sampling a Poisson distribution with a rate parameter λ = A_eff(θ) ρ_μ, where ρ_μ is the muon density at the location of the detector. The particle detector is considered triggered if the number of sampled muons crossing it is greater than a settable threshold. In order to maintain RDSim's great speed, this calculation is done after the main radio simulation is finished. This means that the particle trigger is calculated only for the events and stations that actually triggered in the radio-only part of the simulation.
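A possible implementation of this particle trigger for a cylindrical water tank is sketched below; the tank dimensions and the muon-number threshold are illustrative placeholders, not the parameters used by RDSim.
[language=Python]
import numpy as np

rng = np.random.default_rng()

def tank_effective_area(theta, radius=1.8, water_height=1.2):
    # Shadow area of a cylindrical tank for a shower of zenith angle theta (rad).
    return np.pi * radius**2 + 2.0 * radius * water_height * np.tan(theta)

def particle_trigger(muon_density, theta, min_muons=1):
    # muon_density: muons per m^2 at the detector location (from the AIRES-based model).
    lam = tank_effective_area(theta) * muon_density   # Poisson rate  A_eff * rho_mu
    return rng.poisson(lam) >= min_muons              # trigger on the sampled muon count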
§ EXTRA MODELING FOR NEUTRINO EVENTS
The emission model used in RDSim can handle any downgoing shower that can be simulated with ZHAireS. This includes neutrino CC and NC induced showers as well as those initiated by tau-lepton decays in the atmosphere. To simulate CC and NC initiated showers we use a previously produced extensive library of Herwig <cit.> simulations of neutrino interactions and inject the secondaries into ZHAireS, which then simulates the shower and its radio emission. In the case of showers initiated by tau decays the procedure is the same, but instead of Herwig, we use our library of tau decays simulated with Tauola <cit.>.
Due to their nature, these neutrino-initiated showers require extra steps in the simulation, such as sampling the position where the neutrino interacts in the atmosphere (see left panel of Fig. <ref>) and in the case of ν_τs also the propagation of the τ from its creation to its decay, where a shower is created (see right panel of Fig. <ref>). Since the cross-section of neutrino interactions is very small, even at the highest energies, we assume that the point where the neutrino interaction occurs is equally distributed in atmospheric depth. So, in order to sample the interaction position along the shower axis for the events, we simply divide the atmosphere into slices of equal thickness Δ X in atmospheric depth, each centered at a different interaction depth X_int measured from the top of the atmosphere. This means that we have many more slices at low altitude (high X_int), due to the higher air density. For each of these slices we perform ZHAireS simulations of showers initiated by CC or NC interactions. From these simulations, instances of the superposition emission model are then created for each of the slices. During the run, for each event the interaction point X_int along the shower axis is then sampled by just choosing one of the available atmospheric slices randomly. RDSim then chooses one of the corresponding instances of the emission model at that particular X_int slice to simulate the radio emission [The thickness Δ X of the slices is settable. Smaller values of Δ X will sample the atmosphere more finely, but will require many more ZHAireS simulations to set up an RDSim run. The biggest impact of using a large Δ X is at the highest altitudes. Due to the lower air density this will lead to large distances (or equivalently large differences in altitude) between the interaction points of the available simulations.]
In the case of showers initiated by tau-lepton decays (see right panel of Fig. <ref>), the propagation of the tau is handled in a very simple way, disregarding the τ energy losses in air. First we sample the depth X_0 of the ν_τ interaction in the atmosphere. Just as before, this is done by choosing a random atmospheric Δ X slice. The distance L_dec(E_τ) between the ν_τ interaction and the tau-lepton decay is then sampled by propagating the tau (of energy E_τ) in steps of length Δ L between the interaction position and the ground. For this we use the probability of tau decay per meter, dP(E_τ) = m_τ c^2/(E_τ c τ_τ), where c τ_τ = 86.93 μm is the tau decay length. If the τ decays above ground, this will give us the position along the axis where the tau decays and the shower starts, described by the decay depth X_dec measured from the top of the atmosphere. If it does not decay above ground, no shower is created. For speed, the tau propagation simulation is performed externally and prior to the main RDSim run. We record the fraction of taus that do not decay before reaching the ground and thus create no shower. For those that do decay above ground we create parametrizations of the distributions of X_dec, which also take into account the position X_0 of the initial interaction of the ν_τ, where the tau-lepton is produced. These are recorded in text files that are read by RDSim during the main simulation. To create the instances of the superposition emission model to be used in tau decay events, just as before, the atmosphere is divided into slices of equal thickness Δ X and for each slice ZHAireS simulations are performed. But in this case the products of the tau decay, obtained from Tauola, are injected into ZHAireS instead. During the main run, for each tau decay event we sample the previously obtained probability of the tau decaying above ground. If it does decay above ground the position of the decay is sampled from the previously produced parametrizations. RDSim then chooses one of the corresponding instances of the emission model at the particular slice where the decay occurs to simulate the radio emission. If the tau does not decay before reaching the ground, no shower is created and the event is instantly marked as not triggered. On the left panel of Fig. <ref> we show an example tau decay event simulated at the AUGER-RD array <cit.>.
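The stepping procedure for the tau can be written compactly. The sketch below ignores energy losses, as in the text; the step length is arbitrary and the constants are only indicative, so it should be read as a toy version of the external propagation step rather than the code actually used.
[language=Python]
import numpy as np

rng = np.random.default_rng()

M_TAU = 1.777e9          # tau mass in eV (assumed value)
C_TAU = 86.93e-6         # tau decay length c*tau in metres, value quoted above

def sample_decay_distance(e_tau, distance_to_ground, step=10.0):
    # Decay probability per metre is (m_tau c^2 / E_tau) / (c tau); multiplying
    # by the step length gives the per-step decay probability.
    p_step = (M_TAU / e_tau) * step / C_TAU
    for travelled in np.arange(0.0, distance_to_ground, step):
        if rng.random() < p_step:
            return travelled          # decay point: a shower starts here
    return None                       # tau reaches the ground without decaying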
§ STRUCTURE AND PROCEDURE
RDSim was built around speed, but offers comprehensive and precise simulations of the radio emission and its detection by an arbitrary radio array. It was implemented in C++ and is structured around a few key classes, along with tools to help set up a run and analyze the results.
At the end of a run, the information of every event is saved, including event number, arrival direction, core position, energy, sinα, the number of triggered stations and the full information about the instance of the emission toymodel used. In the case of neutrino events, the interaction (or tau decay) height is also saved. For those events that triggered, the full information on all triggered stations is saved, including the electric field components.
The first step to run an RDSim simulation is the creation of multiple instances of the emission model that are relevant for the type of events we want to study. The effect of shower-to-shower fluctuations can be included by just running multiple ZHAireS simulations to create multiple instances of the emission toymodel for the same class of event. This will automatically include the effect of e.g. an X_max distribution in the simulation. The typical runtime for ZHAireS simulations for this purpose is just ∼15 minutes per antenna on a single core, making it possible to create hundreds of emission toymodel instances in just a few hours, if a cluster with ∼ 100 cores is used.
The main run is controlled by an input file which contains the parameters of the simulation to be run, such as the ranges for the arrival direction, energy and core position, as well as the trigger thresholds. It also lists the emission toymodel instances that are to be included, along with settable ranges for the azimuth angle and energy each one is allowed to be used for[In general it is safe to allow a toymodel to be rotated to any azimuth angle and to be scaled to energies at least half an order of magnitude around its original energy.]. This main input file also contains the location of the optional files for the antenna beam patterns and, in the case of tau decay events, the parametrizations previously obtained for the tau propagation. During the run, for each event we sample an isotropic arrival direction (with zenith rounded up to the nearest available toymodel), along with an energy and a core position equally distributed in area. RDSim then searches for all toymodels that can be used for these particular values and chooses one of them randomly. The chosen toymodel is rotated and scaled to match the parameters of the event and then used to calculate the electric field at each antenna. The beam pattern is then applied to the calculated fields and the trigger condition is checked for each antenna.
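Putting the pieces together, the main loop reduces to sampling and bookkeeping. The sketch below is schematic: accepts, rotated_and_scaled and field_at are hypothetical method names standing in for RDSim's internal C++ classes, and the sampling ranges are arbitrary examples.
[language=Python]
import numpy as np

rng = np.random.default_rng()

def run(n_events, toymodels, antenna_positions, trigger):
    results = []
    for _ in range(n_events):
        zenith = np.degrees(np.arccos(rng.uniform(0.0, 1.0)))   # stand-in sampling
        azimuth = rng.uniform(0.0, 360.0)
        core = rng.uniform(-2000.0, 2000.0, size=2)              # uniform in area
        energy = 10.0 ** rng.uniform(17.0, 19.0)
        # Keep only the toymodel instances compatible with this event.
        usable = [m for m in toymodels if m.accepts(zenith, azimuth, energy)]
        if not usable:
            results.append(None)
            continue
        model = usable[rng.integers(len(usable))].rotated_and_scaled(azimuth, energy)
        fields = np.array([model.field_at(xy - core) for xy in antenna_positions])
        results.append(trigger(fields))
    return results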
§ DISCUSSION
RDSim is a very flexible framework for the simulation of radio detection. It can be used to model the response of very different antenna arrays, as illustrated by the left and middle panels of Fig. <ref>, where we show example events simulated at AUGER-RD (left) and OVRO-LWA (middle). Owing to the large statistics made possible by its speed, RDSim can be used to perform detailed studies of the impact of the array characteristics on its detection capabilities. On the right panel of Fig. <ref> we show a very simple example of such a study, depicting the number of triggered stations as a function of core position for 30^∘ 0.5 EeV proton showers at OVRO-LWA. One can see that for the zenith angle studied, events landing in the NW and South parts of the OVRO-LWA array cannot be detected due to the low antenna density in those regions, while the number of triggered stations rises fast as the shower core approaches the center of the array with its very high antenna density.
Events with a low probability of detection can also be investigated in detail. This can be very important as many different classes of these low-probability events can add up to a sizable contribution to e.g. the aperture of a detector. RDSim can also be very useful to generate a much clearer picture of the contribution of each class of event, making it possible to determine what can and cannot be seen by a detector and to what extent, by estimating the detection probability of each class of event. This is particularly useful if one desires to perform more detailed studies based on full simulations, as it can be used to optimize the phase-space to be fully simulated. It can be used to estimate not only the total number of full simulations needed to thoroughly cover the relevant phase-space, but also the relative number of simulations to be performed for each class of event, based on their detection probability. This is especially important for ν (τ decay) induced showers, as they have a much bigger phase space compared to regular showers due to their extra relevant variables, such as their highly variable interaction (decay) depth, making the generation of unoptimized full simulation libraries unfeasible.
Recently we started comparing RDSim results to full emission and detector simulations. Preliminary results of these comparisons show very good agreement, despite the enormous decrease in the computing time needed by RDSim. We are also in the process of adding the functionality to simulate mountain events, i.e. events induced by the decay of tau-leptons produced by ν_τ interactions in the terrain around the detector. For this we will use topographical maps of the region to calculate, given a core position and an arrival direction for an event, the amount of rock traversed and the distance to the closest rock face. This information will then be used after the main run to convolve the probability of detection of a τ of energy E_τ, established by RDSim, with the probability of such a tau-lepton exiting the mountain as a function of the energy E_ν of the ν_τ that produces it.
[FrankRadioReview] Frank G. Schröder, Prog. Part. Nucl. Phys. 93, 1-68 (2017)
[toymodel] J. Alvarez-Muniz, Washington R. Carvalho Jr., Harm Schoorlemmer, Enrique Zas, Astropart. Phys. 59, 29-38 (2014)
[zhaires] J. Alvarez-Muniz, W. R. Carvalho Jr. and E. Zas, Astropart. Phys. 35, 325 (2012)
[ovroupdates] Kathryn Plant et al., PoS (ICRC2021) 204 (2021) and PoS (ARENA2022)
[aires] S. Sciutto, AIRES manual, http://www.fisica.unlp.edu.ar/auger/aires/
[AugerPrime] Jörg R. Hörandel for the Auger Collaboration, EPJ Web Conf. 210 (UHECR 2018) (2019)
[Herwig] G. Corcella et al., HERWIG 6.5, JHEP 0101 (2001)
[Tauola] R. Decker et al., Comput. Phys. Commun. 76, 361 (1993)
[RDSim-ECRS] W. R. de Carvalho Jr. and Abha Khakurdikar, PoS (ECRS2022), in preparation
|
http://arxiv.org/abs/2307.03892v1 | 20230708035958 | Embedding Mental Health Discourse for Community Recommendation | [
"Hy Dang",
"Bang Nguyen",
"Noah Ziems",
"Meng Jiang"
] | cs.IR | [
"cs.IR",
"cs.CL"
] |
Embedding Mental Health Discourse for Community Recommendation
Hy Dang, Bang Nguyen, Noah Ziems, and Meng Jiang
===============================================================
*These authors contributed equally to this work
Our paper investigates the use of discourse embedding techniques to develop a community recommendation system that focuses on mental health support groups on social media. Social media platforms provide a means for users to anonymously connect with communities that cater to their specific interests. However, with the vast number of online communities available, users may face difficulties in identifying relevant groups to address their mental health concerns. To address this challenge, we explore the integration of discourse information from various subreddit communities using embedding techniques to develop an effective recommendation system. Our approach involves the use of content-based and collaborative filtering techniques to enhance the performance of the recommendation system. Our findings indicate that the proposed approach outperforms the use of each technique separately and provides interpretability in the recommendation process.
§ INTRODUCTION
The rise of social media as a platform has allowed people all over the world to connect and communicate with one another.
Further, these communities that exist online are able to keep their members anonymous from one another, allowing new communities to form which would have a hard time existing without anonymity.
Specifically, this new and robust anonymity has allowed an explosion of online communities with a focus on giving each other advice on health issues.
While being involved in seeking peer support in a community with people that have experienced similar issues can provide a significant positive impact on someone's ability to navigate their personal problems <cit.>, finding communities with relevant discourse is not trivial.
Often, the platforms which host these communities have a very large quantity of them.
There are over 100,000 different communities on Reddit alone.
Further, some communities are not easily found due to their inherently anonymous nature, so the only way a user can decide if they fit within the community is by spending time reading through the discourse happening within the community.
For these reasons, new users seeking others who have experienced similar situations may have a very hard time finding communities that would help them the most, even if they are familiar with the platform which hosts the communities.
Recently, embedding long sequences of text has received lots of interest both from the research community and from practitioners.
A number of studies have shown embeddings can be useful for measuring the similarity both between document pairs and between question-document pairs <cit.>, allowing for retrieval of the most similar documents given a new question or document.
However, little work has been done investigating how the discourse within a community, which represents the meaning of that community, can be represented in a single embedding. The discourse of a community in this context can be all users' posts in that specific community or the community's description.
This poses a unique challenge as discourse within these communities is often in the form of threads that, unlike documents, are not naturally represented as a single block of text.
The goal of this work is to develop a system to recommend support groups to social media users who seek help regarding mental health issues using embeddings to represent the communities and their discourse.
Specifically, we aim to leverage the text of a given user's posts along with the description and posts in each subreddit community to help recommend support groups that the user could consider joining.
Our main research questions are as follows:
* In representing online communities through discourse embeddings, what type of information can be used?
* To what degree do these representations improve the accuracy of predicting users' behaviors regarding their involvement in sharing experiences within groups or communities?
* Do different discourse embedding methods change the prediction capacity of our community recommendation model?
In exploring these research questions, we propose a hybrid recommendation approach that leverages both content-based and collaborative filtering to construct our community recommendation model. As shown in Fig. <ref>, the content-based filtering component investigates different methods of embedding discourse within a community to recommend similar communities to users. It is then combined with a matrix factorization model that learns user engagement behavior in a community to improve recommendation decisions. Utilizing users' past interactions as well as text-based information about the communities, we show that our model achieves promising accuracy while offering interpretability.
§ RELATED WORK
There are a number of studies related to our work.
<cit.> and <cit.> constructed discourse embeddings to find relations between short text segments.
While the two studies were similar in concept, they focused on short text segments where this work instead focused on constructing discourse embeddings for entire social media communities.
<cit.> showed NLP techniques could be used with electronic health records to predict mental health crises 4 weeks in advance.
While online communities were no replacement for professional medical help, this suggested many who had looming mental health problems seek help before a crisis.
<cit.> experimented on the same dataset we used with Natural Language Processing techniques such as TF-IDF and sentiment analysis to understand the effects of COVID-19 on mental health.
Although working on the same dataset, our work studies a different task: to recommend mental health-related support community to Reddit users.
<cit.> adopted a similar approach to ours in content-based filtering for recommendation.
Specifically, they mapped a Wikipedia page to each item and generate its corresponding vector representation using three feature-extraction methods - Latent Semantic Indexing, Random Indexing, and Word2Vec.
We extended this method by exploring more recent representations of text such as BERT <cit.> and OpenAI embeddings.
<cit.> recommended threads in health forums based on the topics of interest of the users.
Specifically, self-reported medical conditions and symptoms of treatments were used as additional information to help improve thread recommendations <cit.>.
While our work is also situated in the health domain, we are interested in recommending a broader support group to users rather than a specific thread.
<cit.> used sentiment and other features to automatically evaluate dialog, showing NLP techniques could be used to evaluate quality of discourse.
In doing so, they leveraged weak supervision to train a model on a large dataset without needing quality annotations.
§ PROBLEM DEFINITION
Suppose we have Reddit's "who-posts-to-what" graph, which is denoted by G = (U, V, E) where U is the set of users, V is the set of subreddit communities, and E, a subset of U× V, is the set of edges.
The number of user nodes is m = |U| and the number of subreddit communities is n = |V|. So, U = {(u_1, P_1), (u_2, P_2) , ..., (u_m, P_m)} where P_i is the set of posts by user u_i and V = {(v_1, P^'_1), ..., (v_n, P^'_n)} where P^'_j is the set of all posts in subreddit v_j.
If a user u_i posts to subreddit v_j, there is an edge that goes from u_i to v_j, which is denoted by e_ij = e(u_i, v_j).
The problem is that given G, predict if e_ij = e(u_i, v_j) exists.
In other words, will user u_i post to subreddit v_j?
§ METHODOLOGY
Figure <ref> illustrates our recommendation pipeline, which adopts a hybrid approach by incorporating both content-based filtering (CBF) and collaborative filtering, specifically matrix factorization (MF) strategies. The CBF model recommends new subreddits based on the average of a user's previous interactions, weighted by how similar the previous subreddits are to the new ones. Meanwhile, users and subreddits are represented in a k-dimensional joint latent space in the MF model. The distance between users and subreddits in this latent space is used to provide recommendations for new subreddits. The predictions from these two components are linearly combined to obtain the final recommendation of subreddits to users.
The collaborative filtering component of our solution leverages nonnegative matrix factorization to represent our users and subreddits in lower-dimensional latent space. In this sense, we redefine the adjacency matrix 𝐀 in our problem definition so that it works with nonnegative factorization. More specifically, users' past interactions with items are represented by the adjacency matrix 𝐀∈{5, 1, 0}^m × n. A_ij = 5 if the user u_i has posted to subreddit j, A_ij = 1 if the user u_i has NOT posted to the subreddit v_j, and A_ij = 0 is the missing connection that needs predicting. Given this adjacency matrix 𝐀, the task is to predict the missing elements A_ij = 0. In the following sections, we elaborate on each component of our recommendation model and then discuss how they are combined to obtain our final solution.
§.§ Content-based Filtering
In recommending items to users based on their past interactions and preferences, content-based filtering methods represent each item with a feature vector, which can then be utilized to measure the similarity between items <cit.>. If an item is similar to another item with which a user interacted in the past, it will be recommended to that same user. Thus, in addition to the adjacency matrix 𝐀, we utilize another matrix 𝐂 of size n× n, where 𝐂_ab is the similarity between the embeddings for two subreddits with embedding vectors 𝐚 and 𝐛.
In this paper, we use cosine similarity as the similarity measure:
𝐂_ab = 𝐚·𝐛 / (‖𝐚‖ ‖𝐛‖),
To predict the value of the missing element where A_ij = 0 (whether user u_i will post to subreddit v_j), we compute the average of user u_i's past interactions (which subreddits user u_i posted and did not post to), weighted by the similarity of these subreddits to subreddit v_j.
Mathematically,
A^'_ij = ∑_k=1^n A_ik C_kj/∑_k=1^n C_kj.
We can generalize the above formula to obtain the new predicted adjacency matrix using matrix-level operations:
𝐀^(CBF) = (𝐀𝐂) ⊙𝐃,
where
* 𝐃 = 1. / (𝐈·𝐂) (element-wise),
* 𝐈 is an indicator matrix such that I_ij = 1 if A_ij≠ 0, otherwise I_ij = 0,
* and ⊙ is the Hadamard product.
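The matrix form above translates directly into a few lines of NumPy. The sketch below is our own helper, not the authors' code; it only assumes that subreddit embeddings have already been stacked row-wise into a matrix, independently of which feature extractor produced them.
[language=Python]
import numpy as np

def cosine_similarity_matrix(E):
    # E: (n, d) matrix whose rows are subreddit embeddings.
    norms = np.clip(np.linalg.norm(E, axis=1, keepdims=True), 1e-12, None)
    En = E / norms
    return En @ En.T

def cbf_predict(A, C):
    # A: (m, n) interaction matrix with entries in {5, 1, 0};  C: (n, n) similarities.
    I = (A != 0).astype(float)        # indicator of observed entries
    denom = I @ C
    denom[denom == 0.0] = 1.0         # guard against division by zero
    return (A @ C) / denom            # A_cbf = (A C) element-wise divided by (I C)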
§.§.§ Representing Subreddit Discourse with Description and Posts
It is helpful to consider the specific domain of the application to represent each item as an embedding. In the context of our subreddit recommendation problem, we take advantage of two types of text-based information about a subreddit to construct the similarity matrix: (1) the posts within the subreddit itself and (2) the general description about the reddit provided by the subreddit moderators.
We then use a feature extraction method to obtain two embeddings of a subreddit, one based on its description and the other based on its posts. As a subreddit contains many posts, each of which has a different embedding given the same feature-extraction method, we take the average of the embeddings across all posts within a subreddit to obtain one embedding for the subreddit.
§.§.§ Feature Extraction
In this paper, we consider three feature-extraction methods: Term Frequency-Inverse Document Frequency (TF-IDF), Bidirectional Encoder Representations from Transformers (BERT) <cit.>, and OpenAI.[OpenAI API Embeddings: <https://platform.openai.com/docs/guides/embeddings>]
TF-IDF: The TF-IDF algorithm represents a document as a vector, each element of which corresponds to the TF-IDF score of a word in that document.
The TF-IDF score for each word in the document is dictated by (1) the frequency of the word in the document <cit.>, and (2) the rarity of the word in the entire text corpus <cit.>.
That is, a term is important to a document if it occurs frequently in the document but rarely in the corpus.
We use the implementation from scikit-learn <cit.> to obtain the TF-IDF representations of our subreddits.
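A minimal version of this TF-IDF route, combined with the per-subreddit averaging of post vectors described below, might look as follows; posts_by_subreddit is an assumed dictionary mapping a subreddit name to its list of post texts, and the vectorizer settings are illustrative.
[language=Python]
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_subreddit_embeddings(posts_by_subreddit):
    names = list(posts_by_subreddit)
    corpus = [post for name in names for post in posts_by_subreddit[name]]
    X = TfidfVectorizer(stop_words="english").fit_transform(corpus).toarray()
    embeddings, start = [], 0
    for name in names:
        count = len(posts_by_subreddit[name])
        embeddings.append(X[start:start + count].mean(axis=0))  # average over posts
        start += count
    return names, np.vstack(embeddings)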
BERT: We employ BERT to generate sentence embeddings as another feature extraction technique <cit.>.
BERT takes a sentence as input and generates a fixed-length vector representation of the sentence.
This representation is meant to capture the syntactic and semantic meaning of the input sentence in a way that can be used for various natural language processing tasks, such as sentence classification or semantic similarity comparison.
In the context of our problem, we can treat each subreddit description or each post as a sentence and feed it to a pre-trained BERT model to generate the embeddings that represent the subreddit. Long posts are truncated to fit within the context limits of pre-trained models. We experiment with 4 different variations of BERT embeddings:
* BERT base and large <cit.>
* Sentence-BERT, or SBERT <cit.>
* BERTweet <cit.>
OpenAI: Similar to BERT embeddings, OpenAI embeddings take in a string of text and output an embedding that represents the semantic meaning of the text as a dense vector.
To do this, the input string is first converted into a sequence of tokens.
The tokens are then fed to a Large Language Model (LLM), which generates a single embedding vector of fixed size.
OpenAI's text-embedding-ada-002 can take strings of up to 8191 tokens and returns a vector with 1536 dimensions.
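The dense-embedding route is analogous. The sketch below uses a Sentence-BERT checkpoint from the sentence-transformers library as a stand-in encoder; the OpenAI embedding endpoint, or any other of the BERT variants above, would slot into the same place.
[language=Python]
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative SBERT checkpoint

def dense_subreddit_embeddings(posts_by_subreddit):
    names = list(posts_by_subreddit)
    embeddings = []
    for name in names:
        vectors = encoder.encode(posts_by_subreddit[name])     # one vector per post
        embeddings.append(np.asarray(vectors).mean(axis=0))    # average over posts
    return names, np.vstack(embeddings)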
§.§ Nonnegative Matrix Factorization for Collaborative Filtering
Matrix factorization (MF) approaches map users and items (subreddits in this case) to a joint latent factor space of a lower dimension k <cit.>. The goal of this method is to recommend to a user the subreddits that are close to them in the latent space. More formally, MF involves the construction of user matrix 𝐏 of dimension m× k and subreddit matrix 𝐐 of dimension n× k. In this sense, the resulting term, 𝐩_i^⊤𝐪_j, captures user u_i's interest in item v_j's characteristics, thereby approximating user u_i's rating of item v_j, denoted by A_ij.
This modeling approach learns the values in 𝐏 and 𝐐 through the optimization of the loss function
min_𝐏,𝐐∑_A_ij∈𝐀 ( A_ij - 𝐩_i^⊤𝐪_j )^2 + λ ( ‖𝐩_i‖^2 + ‖𝐪_j‖^2 ).
Matrix factorization offers the flexibility of accounting for various data and domain-specific biases that may have an effect on the interaction between user u_i and subreddit v_j. In this paper, we consider three types of biases: global average μ, user bias b_i^(p), and subreddit bias b_j^(q). The updated loss function is given by:
min_𝐏,𝐐∑_A_ij∈𝐀 ( A_ij - μ - b_i^(p) - b_j^(q) - 𝐩_i^⊤𝐪_j )^2 +
λ ( ‖𝐩_i‖^2 + ‖𝐪_j‖^2 + (b_i^(p))^2 + (b_j^(q))^2 ).
After optimization, each element in the new predicted adjacency matrix 𝐀^𝐌𝐅 is given by:
𝐀^(MF)_ij = 𝐩_i^⊤𝐪_j + μ + b_i^(p) + b_j^(q)
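A plain stochastic-gradient fit of this biased factorization is sketched below. It is a generic implementation of the loss above, not the authors' training code, and the hyperparameters are illustrative.
[language=Python]
import numpy as np

def train_mf(A, k=16, lr=0.005, reg=0.02, epochs=50, seed=0):
    # A: (m, n) matrix with 5 = posted, 1 = not posted, 0 = unknown (skipped).
    rng = np.random.default_rng(seed)
    m, n = A.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    b_user, b_item = np.zeros(m), np.zeros(n)
    rows, cols = np.nonzero(A)
    mu = A[rows, cols].mean()                          # global average
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = A[i, j] - (mu + b_user[i] + b_item[j] + P[i] @ Q[j])
            b_user[i] += lr * (err - reg * b_user[i])
            b_item[j] += lr * (err - reg * b_item[j])
            # Right-hand side is evaluated before assignment, so both rows
            # are updated from the old values.
            P[i], Q[j] = (P[i] + lr * (err * Q[j] - reg * P[i]),
                          Q[j] + lr * (err * P[i] - reg * Q[j]))
    return mu + b_user[:, None] + b_item[None, :] + P @ Q.T   # predicted A_mf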
§.§ Final Model: Hybrid Approach
Our main model leverages insights from both content-based filtering and matrix factorization by taking a linear combination of their predicted adjacency matrix. Specifically, the new adjacency matrix is given by:
𝐀^(MF+CBF) = β𝐀^(CBF) + (1 - β) 𝐀^(MF),
where β is a hyperparameter that controls how much the CBF model (vs MF model) contributes to the final prediction.
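The final blend is then a one-liner; in practice β would be chosen on held-out data, as explored in the experiments below.
[language=Python]
def hybrid_predict(A_cbf, A_mf, beta=0.5):
    # Linear combination of the content-based and matrix-factorization predictions.
    return beta * A_cbf + (1.0 - beta) * A_mf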
§ DATA AND EXPERIMENTAL SETUP
For the experimental setup, we use the data from <cit.> working on Reddit platforms in mental health domains, particularly health anxiety.
§.§ Data Description
The dataset is collected from 28 mental health and non-mental health subreddits.
The dataset is suitable for studying how subreddits and social media platforms correlate with individuals' mental health and behavior.
The original data comprises 952,110 Reddit posts from 770,176 unique users across 28 subreddit communities, which include 15 mental health support groups, 2 broad mental health subreddits, and 11 non-mental health subreddits. We also manually collect descriptions of the 28 subreddits and use that information along with the posts to conduct the content similarity matrix.
§.§ Data Preprocessing
Although the original dataset has a large number of unique users, the majority of them only contribute posts to one or two different communities. This presents a challenge when evaluating our specific task. As our objective is to examine users' behavior over time and provide recommendations for engaging in suitable subreddits, we have implemented a filter to exclude users who post to fewer than three subreddits. After filtering, the remaining users and posts are 16,801 and 69,004, respectively, while the number of subreddits remains to be 28.
We also seek to understand the distribution of interactions between users and different subreddits. The detailed distribution of post frequency across subreddits is visualized in Figure <ref>.
§.§ Experimental Setup
§.§.§ Data Splits
To construct our data splits, for each user in our dataset, we choose the most recent subreddit that the user first posted to as the test example.
For example, if the user's post history is [subreddit1, subreddit2, subreddit3, subreddit1, subreddit2], then subreddit3 will be used as the test example.
For each positive training example, we pair it with a negative example randomly sampled from the list of subreddits where the user has not posted to.
§.§.§ Evaluation Metrics
In assessing the performance of our recommendation method and the baseline, we use the following evaluation metrics: Recall@K and Mean Reciprocal Rank (MRR).
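Both metrics can be computed directly from the predicted score matrix and each user's held-out subreddit, as in the small helper below (our own sketch, with K=3 as an example value).
[language=Python]
import numpy as np

def mrr_and_recall_at_k(scores, held_out, k=3):
    # scores: (m, n) predicted scores; held_out: index of each user's test subreddit.
    order = np.argsort(-scores, axis=1)                 # best-first ranking per user
    ranks = np.array([int(np.where(order[u] == held_out[u])[0][0]) + 1
                      for u in range(len(held_out))])
    return float(np.mean(1.0 / ranks)), float(np.mean(ranks <= k))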
§.§ Results
Table <ref> presents the performance of our hybrid recommendation system as well as its individual components (MF or CBF). For CBF, we report its performance on different types of embeddings constructed using different information (posts or description) and different feature extraction methods (TF-IDF, BERT, or OpenAI). Figure <ref> visualizes the results of exemplary models in a diagram for better analysis using Recall@K.
According to Table <ref>, all variants of our recommendation method outperform the random predictor. Among all the variants, the hybrid solution using the content similarity matrix generated from OpenAI embeddings achieves the highest performance in MRR (0.4244) and average Recall@K.
For CBF, operating a feature-extraction method on subreddit posts results in higher performance than operating the same method on descriptions. For example, the MRR for CBF - BERT base is 0.3140 when using posts and 0.3024 when using descriptions. It can also be observed that given the same information (either posts or descriptions), deep-learning-based feature extraction methods like OpenAI and BERT bring about better performance for CBF than TF-IDF.
As our recommendation model combines both MF and CBF, we investigate the effect of hyperparameter β, which dictates how much CBF contributes to the final prediction. Figure <ref> illustrates the performance of the hybrid models on varying β. When β = 0, the hybrid model's performance is the same as that of MF. When β = 1, the hybrid model's performance is the same as that of CBF. It can be seen from the peak of these curves that this way of linearly combining MF and CBF brings about significant improvement in MRR.
§.§ Case Studies
We perform a series of case studies to understand why certain information and methods are more helpful than others in recommending subreddits to users. We present our findings by comparing the behavior of the following models: (1) CBF models using TF-IDF and OpenAI Embedding on Subreddit Descriptions, (2) CBF models using OpenAI Embeddings on Subreddit Descriptions and Posts, and (3) MF model and Hybrid model.
§.§.§ CBF models using TF-IDF and OpenAI Embedding on Subreddit Descriptions
The objective of the first case study is to investigate the impact of different types of embedding methods on the performance of recommendations. To achieve this, we employ TF-IDF and OpenAI Embedding approaches to analyze subreddit descriptions and compare their predictions using content-based filtering (CBF) approaches, as illustrated in Figure <ref>. Specifically, we consider User A's historically interacted subreddits, which relate to depression, loneliness, and anxiety, respectively, with the ground truth of socialanxiety. For CBF models, the content similarity C between historically interacted and ground truth subreddits is crucial for accurate predictions. Hence, we evaluate the similarity scores between them. According to the result, the OpenAI Embedding technique outperforms TF-IDF in learning the representation of subreddits.
Based on the analysis of content similarity matrices of the two approaches, we observe that TF-IDF has low similarity scores among subreddits due to its bag-of-words (BOW) approach, which fails to capture semantic relationships in short texts <cit.>, such as subreddit descriptions. In contrast, OpenAI Embeddings, which can capture semantic meanings, performs better for encoding the meanings of subreddit descriptions for recommendation tasks.
§.§.§ CBF models using OpenAI Embeddings on Subreddit Descriptions and Posts
The second case study aims to investigate the impact of different types of information on the performance and recommendations of CBF models. To achieve this goal, we evaluate OpenAI Embeddings approaches on two types of information, subreddit descriptions, and posts. Figure <ref> illustrates the predictions using CBF approaches utilizing OpenAI Embeddings on posts and descriptions. Specifically, we examine User B's historical posts, which are in depression and personalfinance, and the ground truth label is legaladvice. To understand the behavior of CBF on these two types of information, we analyze the similarities between historical subreddit interactions of User B and how the ground truth label is correlated with these subreddits. Our analysis shows that using OpenAI Embeddings on subreddit posts can capture strong relationships between personalfinance and legaladvice, where many legaladvice posts are related to financial information. However, when only using subreddit descriptions of legaladvice, which is "A place to ask simple legal questions, and to have legal concepts explained.", the model fails to capture this relationship.
Furthermore, as shown in Table <ref>, the use of subreddit posts as representations for communities generally exhibits higher performance across most metrics when compared to using community descriptions. The reason is that subreddit descriptions contain less information than posts describing only the general purpose of the subreddit. In contrast, using subreddit posts can accurately learn the representations of the subreddits. Therefore, among the two types of information, using subreddit posts to represent subreddits helps models achieve better performance.
§.§.§ MF vs MF + CBF model using OpenAI Embeddings on Subreddit Discourses
The objective of the third study is to investigate the performance improvement achieved by combining MF and CBF. Specifically, we aim to explore how the use of discourse embeddings to generate content similarity matrices among subreddits can address challenges encountered by the MF approach. To this end, we evaluate the MF and MF + CBF approaches using OpenAI Embeddings on posts. The predictions generated by the two models are presented in Figure <ref>.
We further examine the construction of scores using MF for this case study. The score values are generated using latent features P, Q, μ, b^(p), and b^(q), representing user features, item features, the global average, user biases, and item biases, respectively. However, due to the imbalance in the dataset, there are more posts in some subreddits than others, leading to a cold start problem for the MF approach to accurately learn communities with a small number of examples. In this case study, MF fails to generate correct predictions for the divorce community due to the limited number of posts available. Additionally, MF is biased towards subreddits with more posts, as reflected by the b^(q) values that have strong correlations with the number of posts in the subreddit communities, as depicted in Figure <ref>.
We demonstrate that the top three predictions generated by MF are the subreddits with the highest item biases compared to other subreddits, which are also the ones with the most posts. However, as divorce only accounts for 0.78% of the dataset, the performance of MF is limited. By utilizing OpenAI Embeddings on Subreddit Discourses to represent subreddit communities, we can integrate semantic information into the prediction process, thereby overcoming the cold start problem encountered by MF. Furthermore, this approach captures the relationships between the target recommended subreddit, historically interacted communities and semantic similarities. In this case, the most similar subreddits to personalfinance are legaladvice and divorce, while the most similar subreddits to parenting are autism and divorce.
Overall, we showcase that integrating semantic information into MF can address the cold start problem, and combining MF with CBF using discourse embeddings can make better recommendations.
§ CONCLUSION
This study aimed to investigate the effectiveness of different types of discourse embeddings when integrated into content-based filtering for recommending support groups, particularly in the mental health domain. Our findings showed that the hybrid model, which combined content-based filtering and collaborative filtering, yielded the best results. Moreover, we conducted an extensive case study to demonstrate the interpretability of our approach's predictions.
Previous studies have brought to light the use of past behaviors to make more accurate recommendations in mental health <cit.>. They also emphasize effective communication between the recommender system and the user as an essential factor for users' proper understanding of mental health in general as well as in their own journey <cit.>. Through promising prediction accuracy and interpretability, we believe that this method can serve as a valuable tool to support individuals, particularly those with mental health concerns, to share and seek help regarding their issues.
§ LIMITATIONS
In our current project, we have not taken into account the temporal information that treats the historical behavior of users as a sequence of actions. Thus, the model may not capture how user behaviors change over time. To ensure full support to users in need, we recommend that future work should address this limitation by considering users' historical behaviors as a sequence of actions. Moreover, although our pre-trained models achieved significant results without fine-tuning discourse embeddings, we suggest that fine-tuning these models can enhance performance by capturing the nuances of the datasets' distribution and contexts. Furthermore, conducting a detailed comparison of additional open-source Large Language Models (LLMs) would provide more comprehensive insights into their performance.
Additionally, in addition to analyzing the efficiency of different models, it is crucial to evaluate the cost associated with implementing these models. Therefore, future work should consider both fine-tuning and evaluating additional LLMs, while also taking into account the costs of utilizing these models.
§ ACKNOWLEDGEMENT
This work was supported by NSF IIS-2119531, IIS-2137396, IIS-2142827, CCF-1901059, and ONR N00014-22-1-2507.
|
http://arxiv.org/abs/2307.04960v1 | 20230711014210 | Simple Reference Immutability for System F-sub | [
"Edward Lee",
"Ondřej Lhoták"
] | cs.PL | [
"cs.PL"
] |
Keeps objects fresh for up to 5X longer!
Computer Science, University of Waterloo, 200 University Ave W., Waterloo, ON, N2L 3G1, Canada
Computer Science, University of Waterloo, 200 University Ave W., Waterloo, ON, N2L 3G1, Canada
Reference immutability is a type based technique for taming mutation that has long been
studied in the context of object-oriented languages, like Java. Recently, though,
languages like Scala have blurred the lines between functional programming languages
and object oriented programming languages. We explore how reference immutability
interacts with features commonly found in these hybrid languages, in particular
with higher-order functions – polymorphism – and subtyping. We construct a calculus which encodes a reference immutability system as a simple extension of System F<: and prove
that it satisfies the standard soundness and immutability safety properties.
[500]Software and its engineering General programming languages
[500]Software and its engineering Compilers
Simple Reference Immutability for System F<:
Edward Lee and Ondřej Lhoták
October 2023
==============================================
§ INTRODUCTION
Code written in a pure, functional language is referentially transparent – it has no side
effects and hence can be run multiple times to produce the same result. Reasoning about
referentially transparent code is easier for both humans and computers. However, purely
functional code can be hard to write and inefficient, so many functional languages contain
impure language features.
One important side effect that is difficult to reason about is mutation of state.
Mutation arises naturally, but can cause bugs which can be hard to untangle;
for example, two modules which at first glance are completely unrelated may interact
through some shared mutable variable. Taming – or controlling – where
and how mutation can occur can reduce these issues.
One method of taming mutation is reference immutability <cit.>.
In this setting, the type of each reference to a value can be either mutable or immutable.
An immutable reference cannot be used to mutate the value or any other
values transitively reached from it.
Mutable and immutable references can coexist for the same value, so an immutable reference
does not guarantee that the value will not change through some other,
mutable reference. This is in contrast to the stronger guarantee of object immutability,
which applies to values, and ensures that a particular value does not change through
any of the references to it.
Reference immutability has long been studied in existing object-oriented programming
languages such as Java <cit.>
and C# <cit.>. However, reference immutability is largely unexplored
in the context of functional languages with impure fragments – languages like Scala or OCaml, for example.
Many programs in Scala are mostly immutable <cit.>.
A system that formally enforces specified patterns of immutability would help programmers and compilers
better reason about immutability in such programs.
One feature that is important in all languages but especially essential in functional programs is polymorphism.
The interaction of polymorphism and reference immutability raises interesting questions.
Should type variables abstract over annotated types including their immutability annotations (such as @readonly Pair[Int]), or only
over the base types without immutability annotations (such as Pair[Int])? Should uses of type variables
admit an immutability annotation like other types do? For example, should @readonly X be allowed, where X is a type variable rather than a concrete type?
If yes, then how should one interpret an annotated variable itself instantiated with an annotated type?
For example, what should the type @readonly X mean if the variable X is instantiated with @readonly Pair[Int]?
Our contribution to this area is a simple and sound treatment of reference immutability in System F<: <cit.>.
Specifically, we formulate a simple extension of System F<: with the following properties:
* Immutability safety: When dealing with reference immutability, one important property
to show is immutability safety: showing that when a reference is given a read-only type, then the underlying value is not modified through that reference.
In our calculus we introduce a dynamic form of immutability, a term-level construct, which makes precise the
runtime guarantees that we expect from a reference that is statically designated as immutable by the type system.
We do this by formalizing an untyped calculus with references and seals.
Dynamic seals are transitive in that they seal any new references that are read from a field of an object through
a sealed reference.
* F<:-style polymorphism: our calculus preserves the same bounded-quantification
structure of System F<:. At the same time, it allows type variables to be further modified by immutability
modifiers.
* Immutable types are types: To allow for F<:-style polymorphism, we need to treat immutable types
as types themselves. To do so, instead of type qualifiers, we introduce a type operator that can be freely applied
to existing types (including type variables). The operator turns a type into an immutable version of the same type.
While this complicates the definition of subtyping and proofs of canonical forms lemmas,
we resolve these issues by reducing types to a normal form.
Our hope is to enable reference immutability systems in functional languages by giving simple, sound foundations in System F<:, a calculus that underpins many practical functional programming languages.
The rest of this paper is organized as follows. In Section <ref> we give an overview of reference immutability. In Section <ref> we introduce an untyped core calculus to describe sealing
and how it relates to reference immutability safety at run time.
In Section <ref> we present our typed calculus, which enriches the untyped core with types, and show that it satisfies the standard
soundness theorems. In Section <ref> we use the soundness results of the typed calculus and the dynamic safety results of the untyped calculus to show that our desired immutability safety properties hold. We survey related and possible future work in Section <ref> and we conclude in Section <ref>.
Our development is mechanized in the Coq artifact that we will submit to the OOPSLA artifact evaluation process.
§ REFERENCE IMMUTABILITY
Reference immutability at its core is concerned with two key ideas:
* Immutable references: References to values can be made immutable,
so that the underlying value cannot be modified through that reference.
* Transitive immutability: An immutable reference to a compound value that contains other
references cannot be used to obtain a mutable reference to another value.
For example, if x is a read-only reference to a pair, the result of evaluating x.first
should be viewpoint adapted <cit.> to be a read-only reference, even if the pair contains references that are otherwise mutable.
For example, consider the following snippet of Scala-like code that deals with polymorphic mutable pairs.
[language=Scala]
case class Pair[X](var first: X, var second: X)
def good(x : Pair[Int]) = x.first = 5
def bad1(y : @readonly Pair[Int]) = y.first = 7
def bad2(y : @readonly Pair[Pair[Int]]) = y.first.first = 5
def access(z: @readonly Pair[Pair[Int]]): @readonly Pair[Int] = z.first
A reference immutability system would deem the function good to be well-typed because it mutates
the pair through a mutable reference x.
However, it would disallow bad1 because it mutates the pair through a read-only reference y. Moreover, it would also disallow bad2 because it mutates the pair
referenced indirectly through the read-only reference y. This can also be seen by looking
at the access function, which returns a read-only reference of type @readonly Pair[Int]
to the first component of the pair referenced by z.
§.§ Why Reference Immutability?
Immutable values are crucial even in impure functional programming languages because pure code is often easier to reason about.
This benefits both the programmer writing the code,
making debugging easier, and the compiler when applying optimizations.
Although most values, even in impure languages, are immutable by default <cit.>,
mutable values are sometimes necessary for various reasons. For example, consider a compiler
for a pure, functional language. Such a compiler might be split into multiple passes,
one which first builds and generates a symbol table of procedures during semantic analysis,
and one which then uses that symbol table during code generation.
For efficiency, we may wish to build both the table and the procedures in that table with an impure loop.
[language=Scala]
object analysis
  class Procedure(name: String)
    val locals: mutable.Map[String, Procedure] = mutable.Map.empty
    def addLocalProcedure(name: String, proc: Procedure) =
      locals += (name -> proc)
  val table: mutable.Map[String, Procedure] = mutable.Map.empty
  def analyze(ast: AST) =
    ast.foreach(node => table += (node.name -> new Procedure(...)))
The symbol table and the properties of the procedures should not be mutable everywhere, though;
during code generation, our compiler should be able to use the information in the
table to generate code, but it should not be able to change the table or the information in it.
How do we enforce this?
One solution is to create an immutable copy of the symbol table for the code generator, but this
can be fragile. A naive solution which merely clones the table itself will not suffice, for example:
[language=Scala]
object analysis
  private val table: mutable.Map[String, Procedure] = ...
  def symbolTable: Map[String, Procedure] = table.toMap // create an immutable copy of the table
object codegen
  def go() =
    analysis.symbolTable("main").locals += ("bad" -> ...) // whoops...
While this does create an immutable copy of the symbol table for the code generator,
it does not create immutable copies of the procedures held in the table itself!
We would need to recursively rebuild a new, immutable symbol table with new, immutable procedures to guarantee immutability, which can be an expensive proposition, both in terms of code and in terms of runtime costs.
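To make the cost concrete, the sketch below shows what such a recursive rebuild might look like; it assumes a Procedure with a name and a mutable locals map as above, and the names FrozenProcedure, deepFreeze, and freezeTable are ours, purely for illustration:
[language=Scala]
import scala.collection.mutable

class Procedure(val name: String) {
  val locals: mutable.Map[String, Procedure] = mutable.Map.empty
}

// An immutable mirror of Procedure, rebuilt field by field.
case class FrozenProcedure(name: String, locals: Map[String, FrozenProcedure])

def deepFreeze(p: Procedure): FrozenProcedure =
  // Note: diverges on cyclic procedure graphs -- see the letrec example below.
  FrozenProcedure(p.name, p.locals.view.mapValues(deepFreeze).toMap)

def freezeTable(table: mutable.Map[String, Procedure]): Map[String, FrozenProcedure] =
  table.view.mapValues(deepFreeze).toMap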
Moreover, creating an immutable copy might not even work in all cases.
Consider an interpreter for a pure, functional language with support for letrec x := e in f.
The environment in which e is interpreted contains a cyclic reference to x, which necessitates mutation in the interpreter. Without special tricks like laziness, this sort of structure cannot be constructed, let alone
copied, without mutation.
[language=Scala]
abstract class Value
type Env = Map[String, Value]
case class Closure(var env: Env, params: List[String], body: Exp) extends Value
def interpret_letrec(env: Env, x: String, e: Exp, f: Exp): Value =
  val v = interpret(env + (x -> Nothing), e)
  v match
    case c: Closure => c.env = c.env + (x -> c) // update the cyclic binding
  interpret(env + (x -> v), f)
Here, the closure that v refers to needs to be mutable while it is being constructed,
but since the underlying language is pure, it should be immutable afterwards. In particular,
we should not be able to mutate the closure through the self-referential reference
v.env = env + (x -> v), nor should we be able to mutate the closure while interpreting f.
We would like a system that prevents writes to v from the self-referential
binding in its environment and from the reference we pass to interpret (env + (x -> v), f).
This is what reference immutability provides.
[language=Scala]
abstract class Value
type Env = Map[String, @readonly Value]
case class Closure(var env: Env, params: List[String], body: Exp) extends Value
def interpret_letrec(env: Env, x: String, e: Exp, f: Exp): Value =
  val v = interpret(env + (x -> Nothing), e)
  v match
    case c: Closure => c.env = c.env + (x -> @readonly v) // update the cyclic binding
  interpret(env + (x -> @readonly v), f)
§ DYNAMIC IMMUTABILITY SAFETY
Now, to formalize reference immutability, we need to formalize exactly when references are used
to update the values they refer to. For example, from above, how do we check that access
does what it claims to do?
[language=Scala]
def access(z: @readonly Pair[Pair[Int]]): @readonly Pair[Int] = z.first
How do we check that access returns a reference
to z.first that, at runtime, is never used to write to z.first or any other values transitively reachable from it through other references? How do we even express this guarantee precisely?
If we consider a reference as a collection of getter and setter methods for the fields of the
object it refers to, we could ensure that a reference is immutable by dropping all the setter
methods. To ensure that immutability is transitive, we would also need to ensure that the result
of applying a getter method is also immutable, i.e. by also dropping its setter methods and recursively
applying the same modification to its getter methods. We will make this precise by introducing
the calculus with a notion of sealed references.
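Before making this precise, the following minimal Scala sketch illustrates that intuition at the level of runtime objects; the Ref, Record, and Sealed names are our own illustration and are not part of the calculus:
[language=Scala]
import scala.collection.mutable

// References expose getters and setters on fields.
sealed trait Ref {
  def get(field: String): Ref
  def set(field: String, v: Ref): Unit
}

final case class Num(n: Int) extends Ref {
  def get(field: String): Ref = sys.error("not a record")
  def set(field: String, v: Ref): Unit = sys.error("not a record")
}

final class Record(fields: mutable.Map[String, Ref]) extends Ref {
  def get(field: String): Ref = fields(field)
  def set(field: String, v: Ref): Unit = fields(field) = v
}

// A sealed view drops the setter and re-seals every value read through it.
final class Sealed(underlying: Ref) extends Ref {
  def get(field: String): Ref = new Sealed(underlying.get(field))
  def set(field: String, v: Ref): Unit =
    sys.error("attempted to write through an immutable reference")
}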
§.§
To answer this question we introduce , the untyped lambda calculus with collections of mutable references – namely, records
– extended with a mechanism for sealing references. is adapted from the CS-machine of
<cit.> and extended with rules for dealing with sealed references.
Sealed references: To address the question of dynamic, runtime safety – can we ensure that read-only references are never used to mutate values? – references can be explicitly sealed so
that any operation that would mutate the referenced cell fails to evaluate; see Figure <ref>.
[Figure: reduction rules for the untyped calculus with seals]
The seal form protects its result from writes. A term under a seal form reduces
until it becomes a value. At that point, values that are not records,
like functions and type abstractions, are passed transparently through the seal construct. However, values that are records remain protected by the seal form, and do not reduce further. For example:
seal {y : 0001}
is an irreducible value – a sealed record whose field y is stored at location 0001 in the store.
Intuitively, this can be viewed as removing the setter methods from an object reference. A sealed
reference behaves exactly like its unsealed variant v, except that writes through it are forbidden and reads from it return sealed results.
Rules that mutate the cells corresponding to a record
explicitly require an unsealed open record; see write-field.
This ensures that any ill-behaved program that mutates a store cell through a sealed record will get stuck,
while an unsealed record can have its fields updated:
⟨{x : 10}.x = 5, []⟩ ⟶ ⟨{x : 0001}.x = 5, [0001 : 10]⟩ ⟶ ⟨10, [0001 : 5]⟩
A sealed record cannot have its fields written to. Unlike record field reads, where rule
sealed-field is the sealed counterpart of the standard read rule field,
there is no sealed counterpart of the write rule write-field.
Recall that write-field requires an open, unsealed record as input:
l : v ∈ σ
──────────────────────────────────────── (write-field)
⟨{x : l}.x = v', σ⟩ ⟶ ⟨v, σ[l ↦ v']⟩
The calculus does not contain any rule like the following, which would reduce writes on a sealed record:
l : v ∈ σ
────────────────────────────────────────────
⟨(seal {x : l}).x = v', σ⟩ ⟶ ⟨v, σ[l ↦ v']⟩
So a term that writes through a sealed record gets stuck:
⟨(seal {x : 10}).x = 5, []⟩ ⟶ ⟨(seal {x : 0001}).x = 5, [0001 : 10]⟩,
and the resulting configuration cannot step further.
Dynamic viewpoint adaptation: After reading a field from a sealed record, the
semantics seals that value,
ensuring transitive safety – see sealed-field.
l : v ∈ σ
──────────────────────────────────── (sealed-field)
⟨(seal {x : l}).x, σ⟩ ⟶ ⟨seal v, σ⟩
For example:
⟨(seal {y : {x : 10}}).y, []⟩
  ⟶ ⟨(seal {y : {x : 001}}).y, [001 : 10]⟩
  ⟶ ⟨(seal {y : 002}).y, [001 : 10, 002 : {x : 001}]⟩
  ⟶ ⟨seal {x : 001}, [001 : 10, 002 : {x : 001}]⟩
Sealed references and dynamic viewpoint adaptation allow for a succinct guarantee
of dynamic transitive immutability safety – that no value is ever mutated through a read-only
reference or any other references transitively derived from it.
Aside from preventing writes through sealed references, we should show that
sealing does not otherwise affect reduction.
For this we need a definition that
relates pairs of terms that are essentially equivalent except that one has more seals than the other.
Let s and t be two terms. We say s ≤ t if t can be obtained from s
by repeatedly replacing sub-terms s' of s with sealed subterms s'.
This implies a similar definition for stores:
Let σ and σ' be two stores. We say σ≤σ' if and only if
they have the same locations and for every location l ∈σ, we have σ(l) ≤σ'(l).
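For instance, writing seal t for the sealed form of a term t (as in the reduction examples above):
{x : 0001} ≤ seal {x : 0001},   {y : {x : 10}} ≤ {y : seal {x : 10}} ≤ seal {y : seal {x : 10}},
and, for stores, [0001 : {x : 0002}] ≤ [0001 : seal {x : 0002}].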
The following three lemmas formalize how reduction behaves for terms that are equivalent modulo seals.
The first one is for a term t that is equivalent to a value – it states that if t
reduces, the resulting term is still equivalent to the same value.
It also shows that the resulting term has fewer seals than t, which we'll need
later for an inductive argument.
Let s be a term. Then |s| is the number of seals in s.
Let v be a value, σ_v be a store, t be a term such that v ≤ t, and σ_t be
a store such that σ_v ≤σ_t.
If ⟨ t, σ_t ⟩⟨ t', σ_t'⟩
then v ≤ t', σ_v ≤σ_t', and |t'| < |t|.
The next lemma is an analogue of Lemma <ref> for terms. Given two equivalent
terms s and t, if s steps to s' and t steps to t', then
either s and t' are equivalent or s' and t' are equivalent. Moreover, as before,
to support the inductive argument that reduction in t matches reduction in s, we have |t'| < |t| in the case where s ≤ t'.
Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that
σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩
and ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then:
* Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or
* s' ≤ t' and σ_s' ≤σ_t'.
Together, Lemmas <ref> and <ref> relate how terms
s and t reduce when they are equivalent modulo seals. Assuming that both s and t reduce, every step of s corresponds to finitely many steps of t, and they reduce to equivalent results as well.
This shows that sealing is transparent when added onto references that are never written to,
allowing for a succinct guarantee of immutability safety.
Finally, the last lemma states that erasing seals will never cause a term to get stuck.
Seals can be safely erased without affecting reduction.
Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that
σ_s ≤σ_t. If ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then:
* Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or
* There exists s' and σ_s' such that ⟨ s, σ_s⟩⟨ s', σ_s'⟩, s' ≤ t' and σ_s' ≤σ_t'.
From this we can derive the following multi-step analogue, after observing the following lemma:
If s is a term and v is a value such that s ≤ v, then s is also a value.
Hence:
Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t, then for any σ_s ≤σ_t
we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ such that
v_s ≤ v_t and σ_s' ≤ σ_t'.
Finally, it can be shown that the seals are to blame when two equivalent terms s and t reduce differently
– in particular, when one reduces but the other gets stuck.
Let s, t be terms such that s ≤ t, and let σ_s, σ_t be stores such that
σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩
and t gets stuck, then the reduction performed on s was a write to a record using rule write-field.
(Sketch) As t cannot reduce further, the evaluation contexts of s and t must match; there
are no extraneous seals in t that need to be discharged. From inspection of the reduction rules,
we see that in every case except write-field, for each possible reduction that s
could have taken, there is a corresponding reduction that t could have taken as well, as desired.
§ TYPING AND STATIC SAFETY
provides a dynamic guarantee
that a given program will never modify its sealed references, but it does not provide any
static guarantees about the dynamic behavior of a given program. To do that, we need a type
system for that will reject programs
like access(seal Pair(3,5)).first = 10,
which we know will crash.
To ensure that well-typed programs do not get stuck, a type system for needs a static analogue of sealing – a way to turn an existing type into a read-only type.
Read-only types denote references that are immutable and that (transitively) adapt
any other references read through them to be immutable as well.
Issues arise, however, when we introduce polymorphism.
§.§ Polymorphism
Recall our earlier example – a polymorphic Pair object.
[language=Scala]
case class Pair[X](var first: X, var second: X)
In a functional language, it is only natural to write higher-order functions that are polymorphic
over the elements stored in the pair. Consider an in-place map function over pairs, which applies a function
to each element in the pair, storing the result in the original pair. This naturally requires mutable
access to a pair.
[language=Scala]
def inplace_map[X](pair: Pair[X], f: X => X): Unit =
  pair.first = f(pair.first)
  pair.second = f(pair.second)
This is all well and good, but we may wish to restrict the behaviour of f over the elements of the
pair – in particular, so that it cannot mutate the elements
passed to it. Note that we cannot restrict access to the pair itself, however, as we still need to mutate it.
[language=Scala]
// Is this well founded?
def inplace_map[X](pair: Pair[X], f: @readonly X => X): Unit =
  pair.first = f(pair.first)
  pair.second = f(pair.second)
Now, such a definition requires the ability to further modify type variables with immutability
qualifiers. This raises important questions – for example, is this operation even well founded? This
depends on what X ranges over.
X ranges over an unqualified type: If type variables range over types which
have not been qualified by @readonly, then
this operation is clearly well founded – it is simply qualifying the unqualified type that
X will eventually be substituted by with the @readonly qualifier. This approach
has been used by ReIm for Java and for an immutability system for C# – <cit.>.
However, this raises the problem of polymorphism over immutability
qualifiers as well – for example, a Pair should be able to store both immutable and mutable object
references. The natural solution is then to introduce a mutability qualifier binder
to allow for polymorphism over immutability qualifiers, as follows:
[language=Scala]
case class Pair[M, X](var first: M X, var second: M X)
def inplace_map[M, X](pair: Pair[M, X], f: @readonly X => M X): Unit =
  pair.first = f(pair.first)
  pair.second = f(pair.second)
Mutability qualifier binders have been used previously, most notably by <cit.>.
However, this sort of solution has its downsides. For one, updating the binding structure
of a language is not an easy task – ReIm notably omits this sort of parametric mutability polymorphism <cit.>.
In addition, existing higher-order functions need to be updated with immutability annotations or variables, as type variables no longer stand for a full type.
For example, an existing definition of map over List, which originally appears as:
[language=Scala]
def map[X](l: List[X], f: X => X): List[X]
needs to be updated to read as the following instead:
[language=Scala]
def map[M, X](l: List[M X], f: M X => M X): List[M X]
Instead, we would like X to range over fully qualified types as well, but as we will
see, that poses issues of its own.
X ranges over fully-qualified types: If type variables can range over
types which have been already qualified by @readonly, then we can avoid introducing mutability binders in the definitions for Pair, inplace_map,
and map above. A Pair can be polymorphic over its contents X without caring about
the underlying mutability of X. However, this raises the question – how do we interpret repeated
applications of the @readonly qualifier? For example, what if we applied inplace_map on a
Pair[@readonly Pair[Int]]? Then inplace_map would expect a function f with type
@readonly (@readonly Pair[Int]) => @readonly Pair[Int]. While our intuition would tell us that @readonly (@readonly Pair[Int]) is really just a @readonly Pair[Int], discharging this equivalence in a proof is not so easy.
One response is to explicitly prevent type variables from being further qualified. Calculi which take this
approach include <cit.>. However,
this restriction prevents this version of inplace_map from being expressed. How can we address this?
Our approach, which we explain below, is to treat @readonly as a type operator that works over all types.
Following the intuition that sealing removes setters from references, @readonly should be a type operator which removes setters
from types. While this does cause complications, we show below how types like @readonly @readonly Pair[Int]
can be dealt with, using subtyping and type normalization.
§.§
To address these issues, we introduce , which adds a type system in the style of to . The syntax of is given in Figure <ref>;
changes from are noted in grey.
[Figure: syntax]
is a straightforward extension of with collections of mutable references – namely, records
– and with two new extensions: read-only types and sealed references. To be close to existing
functional languages with subtyping and records, records in are modelled
as intersections of single-element record types, to support record subsumption, as in <cit.> and
<cit.>.
See Figures <ref> and <ref> for full subtyping and typing rules respectively.
[Figure: normal forms]
[Figure: subtyping rules]
[Figure: typing rules]
Read-only types:
The readonly type operator transforms an existing type to a read-only version of itself.
Unlike the read-only mutability qualifier in Javari and ReIm, which is paired with a base type
to form a qualified type, a read-only type in is itself a type.
The readonly operator can be seen as the static counterpart of sealing or of deleting setter methods from an object-oriented class type.
Any type T is naturally a subtype of its readonly counterpart T, which
motivates the choice of as a base calculus. This subtyping relationship
is reflected in the subtyping rule mutable. The seal typing rule gives a read-only type to sealed references.
Static viewpoint adaptation: The readonly-record-elim rule is a static counterpart
of the sealed-field reduction rule. Given a reference s to a record with read-only type, it gives a
read-only type to the result of a read s.x of a field x from that reference. If S is the type of field
x in the record type given to s, the rule viewpoint-adapts the type, giving s.x the type S.
§.§.§ Normal Forms for Types
In , is a type operator that can be applied to any type, which
enables us to express types such as X, where X is some type
variable of unknown mutability. However, if X is itself instantiated with
some readonly type T, the type X becomes
T, with two occurrences of the type operator.
Intuitively, such a type should have the same meaning as T.
Additionally, certain types should be equivalent under subtyping. For example,
for both backwards compatibility and simplicity, arrow S → T and for-all types ∀ (X S).T should be equivalent under subtyping to their read-only forms (S → T) and (∀ (X S).T), respectively, as well.
Having multiple representations for the same type, even infinitely many,
complicates reasoning about the meanings of types and proofs of soundness.
Therefore, we define a canonical representation for types as follows:
A type T is in normal form if:
* T is the top type ⊤.
* T is a function type S_1 → S_2, where S_1 and S_2
are in normal form.
* T is an abstraction type ∀(X S_1).S_2, where S_1
and S_2 are in normal form.
* T is an intersection type S_1 ∧ S_2, where S_1 and S_2
are in normal form.
* T is a record type { x : S }, where S is in normal form.
* T is a read-only record type { x : S }, where S
is in normal form.
* Type variables X and read-only type variables X
are in normal form.
A type in normal form is simple – it is an intersection of function, abstraction,
and record types, each possibly modified by a single readonly operator.
For example, {x : X} ∧ {y : Y} is in normal form. The type readonly ({x : X} ∧ {y : Y})
is not. A grammar for types in normal form can be found in Figure <ref>.
This allows us to reason
about both the shape of the underlying value being typed, and whether or not it has been modified by
a readonly operator. Naturally we need a theorem which states that every type has a normal form and
a function nf to compute that normal form. Such a function nf is shown in Figure <ref>.
Normalization both computes a normal form and is idempotent – a type in normal form normalizes to itself.
[Figure: the normalization function nf]
For any type T, nf(T) is in normal form. Moreover, if T is in normal form, nf(T) = T.
Moreover, types are equivalent to their normalized forms under the subtyping relationship.
Γ | Σ ⊢ nf(T) <: T and Γ | Σ ⊢ T <: nf(T).
For one direction, note that nf(nf(T)) = nf(T), and hence nf(nf(T)) <: nf(T). Applying denormalize allows us to show that nf(T) <: T, as desired. The other case follows
by a symmetric argument.
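To illustrate, here is a minimal sketch of a normalization function over a simplified type syntax; the AST and the helper nfRo are ours, and the definitive version of nf is the one in Figure <ref> and in the Coq mechanization. Note how a readonly applied to an arrow or abstraction type is dropped (they are equivalent to their read-only forms), a readonly over an intersection is distributed to its components, and repeated readonly applications collapse:
[language=Scala]
// Simplified type AST (our own encoding, for illustration only).
sealed trait Ty
case object Top                       extends Ty
case class Arrow(a: Ty, b: Ty)        extends Ty  // S -> T
case class All(bound: Ty, body: Ty)   extends Ty  // forall (X <: S). T
case class Inter(a: Ty, b: Ty)        extends Ty  // S /\ T
case class Rec(field: String, a: Ty)  extends Ty  // { field : S }
case class TVar(name: String)         extends Ty  // X
case class RO(a: Ty)                  extends Ty  // readonly T

def nf(t: Ty): Ty = t match {
  case Top            => Top
  case Arrow(a, b)    => Arrow(nf(a), nf(b))
  case All(bd, body)  => All(nf(bd), nf(body))
  case Inter(a, b)    => Inter(nf(a), nf(b))
  case Rec(f, a)      => Rec(f, nf(a))
  case TVar(x)        => TVar(x)
  case RO(a)          => nfRo(nf(a))              // normalize underneath, then push readonly in
}

// Push a single readonly over an already-normalized type.
def nfRo(t: Ty): Ty = t match {
  case Top            => Top                      // we assume readonly has no effect on Top
  case Arrow(a, b)    => Arrow(a, b)              // arrows are equivalent to their readonly forms
  case All(bd, body)  => All(bd, body)            // so are type abstractions
  case Inter(a, b)    => Inter(nfRo(a), nfRo(b))  // distribute over intersections
  case r: Rec         => RO(r)                    // readonly sticks to record types ...
  case v: TVar        => RO(v)                    // ... and to type variables
  case ro: RO         => ro                       // collapse readonly readonly T
}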
Not only does this allow us to simplify types to a normal form,
this also allows us to state and prove canonical form lemmas and inversion lemmas, necessary
for preservation and progress: Theorems <ref> and <ref>.
Below we give examples for record types. Similar lemmas exist and are mechanized
for function types and type-abstraction types as well.
If S is a subtype of {f : T'}, and S is in normal form,
then at least one of its components is a type variable X or a record type
{f : S'}, where Γ⊢ T' S' T'.
If v is a value and ∅ | Σ⊢ v : {f : T},
then v is a record and f is a field of v that maps to some location l.
If S is a subtype of {f : T'}, and S is in normal form,
then at least one of its components is a type variable X, read-only type variable X,
a record type {f : S'} where Γ⊢ T' S' T', or a read-only record type
{f : S'} where Γ⊢ T' S' T'.
If v is a value and ∅ | Σ⊢ v : {f : T},
then v is a record or a sealed record and f is a field of v that maps to some location l.
Note that normalization is necessary to state the inversion lemmas for read-only records, as readonly {f : T'},
readonly readonly {f : T'}, and so on give an infinite series of syntactically distinct but semantically
equivalent types describing the same object – a read-only record where field f has type T'.
§.§.§ Operational Safety
Operationally, we give small-step reduction semantics coupled with a store to in Figure <ref>.
[Figure: evaluation rules]
Again, these rules are a straightforward extension of with mutable boxes and records, with additional rules for
reducing sealed records. To prove progress and preservation theorems, we
additionally need to ensure that the store σ itself is well typed in the context of some store typing
environment Σ – see rule store.
The crux of preservation for is to show that sealed records are never given a non-read-only
type, so that the typing rule for reading from a mutable record – record-elim – cannot
be applied to sealed record values.
Suppose Γ | Σ⊢ r : T for some record r. If T is in normal form,
then the components of T are:
* The top type ⊤, or
* a read-only record type {f : T'}.
From this key result we can show that preservation holds for .
Suppose ⟨ s, σ⟩⟨ t, σ' ⟩.
If Γ | Σ⊢σ and Γ | Σ⊢ s : T for some type T,
then there is some environment extension Σ' of Σ such that Γ | Σ' ⊢σ' and
Γ | Σ' ⊢ t : T.
Conversely, values given a non-read-only record type must be an unsealed collection of references.
Suppose ∅ | Σ⊢ v : {f : T} for runtime value v. Then v is an unsealed runtime record
where field f maps to some location l.
This lemma is needed to prove progress.
Suppose ∅ | Σ⊢σ and ∅, Σ⊢ s : T. Then either
s is a value or there is some t and σ' such that ⟨ s, σ⟩⟨ t, σ' ⟩.
§ STATIC IMMUTABILITY SAFETY
Armed with Progress and Preservation, we can state immutability safety for full .
allows us to show that sealed records are never used to mutate their underlying
referenced values. shows that well-typed programs using seals do not get stuck.
To prove immutability safety for , one problem still remains –
allows records that are not sealed to be given a read-only type.
We still need to show that records with such a type
are not used to mutate their values. In other words, we need to show
that records with a read-only type could be sealed,
and that the resulting program would execute in the same way.
We will do this by showing that,
given an original, well-typed program s, we can add seals to its read-only
subterms to obtain a new, well-typed program t, and furthermore that t behaves the same way as s,
up to having additional seals in the resulting state.
The first step is to show that
sealing does not disturb the typing judgment
for terms.
Suppose Γ | Σ⊢ t : T. Then Γ | Σ⊢ t : T.
By seal, Γ | Σ⊢ t : T. Then since T <: T,
by sub, Γ | Σ⊢ t : T, as desired.
From this, given a term s and a typing derivation for s, D = Γ | Σ⊢ s : T, we can seal
those subterms of s that are given a read-only type in D.
Let C be a term context with n holes, and let s = C[s_1, s_2, s_3, …, s_n] be a term.
Suppose D is a typing derivation showing that Γ | Σ⊢ s : T. Suppose also that
D gives each subterm s_i of s a type T_i. Then s' = C[seal s_1, seal s_2, …, seal s_n] has the following properties:
* s ≤ s', and
* There exists a typing derivation D' showing that Γ | Σ⊢ s' : T as well.
(1) is by definition. As for (2), to construct D',
walk through the typing derivation D showing that Γ | Σ⊢ s : T.
When we reach the point in the typing derivation that shows that s_i
is given the type T_i, note that s_i can also be given the type T_i
by the derivation given by Lemma <ref>. Replace the sub-derivation in D
with the derivation given by Lemma <ref> to give a derivation in D' for s_i,
as desired.
This motivates the following definition.
Let s be a term and let D = Γ | Σ⊢ s : T be a typing derivation for s.
Define (s,D) to be the term constructed from s by replacing all subterms s_i of s
given a read-only type in D by s_i.
A crested term essentially seals any sub-term of the original term
that is given a read-only type in a particular typing derivation.
By definition, for any term s and typing derivation D for s, we have s ≤(s,D).
Moreover, a crested term can be given the same type as its original term as well.
Let s be a term and let D = Γ | Σ⊢ s : T be a typing derivation for s.
Then s ≤(s,D), and there exists a typing derivation showing that Γ | Σ⊢(s, D) : T as well.
Now by progress – Theorem <ref> – we have
that for any well typed term s with typing derivation D = ∅ | Σ⊢ s : T, its protected – crested – version (s,D) will also step. By preservation – Theorem <ref> – we have
that (s,D) either eventually steps to a value or runs forever, but never gets stuck.
It remains to relate the reduction steps of (s,D) to those of s, and specifically to show that if one reduces to some specific value and store, then the other also reduces to an equivalent pair of value and store.
We will do so by using the dynamic immutability safety properties proven in Section <ref>.
satisfies the same sealing-equivalence properties as – seals do not
affect reduction, except perhaps by introducing other seals. The following
are analogues of Lemmas
<ref>,
<ref>,
and
<ref>
for .
Let v be a value, σ_v be a store, t be a term such that v ≤ t, and σ_t be
a store such that σ_v ≤σ_t.
If ⟨ t, σ_t ⟩⟨ t', σ_t'⟩
then v ≤ t', σ_v ≤σ_t', and |t'| < |t|.
Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that
σ_s ≤σ_t. If ⟨ s, σ_s⟩⟨ s', σ_s'⟩
and ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then:
* Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or
* s' ≤ t' and σ_s' ≤σ_t'.
Let s, t be terms such that s ≤ t and let σ_s, σ_t be stores such that
σ_s ≤σ_t. If ⟨ t, σ_t⟩⟨ t', σ_t'⟩ then:
* Either s ≤ t', σ_s ≤σ_t', and |t'| < |t|, or
* There exists s' and σ_s' such that ⟨ s, σ_s⟩⟨ s', σ_s'⟩, s' ≤ t' and σ_s' ≤σ_t'.
Stepping back, we can see using Lemma <ref> that one step of s to a term s' corresponds to finitely many steps of (s,D); every step that (s,D) takes either removes a seal or
corresponds to a reduction step that s originally took. Hence (s,D) eventually steps to a term t' such that s' ≤ t', preserving the desired equivalence of reduction between s and (s,D). The following is a generalization of the previous statement to two arbitrarily chosen well-typed terms s and t satisfying s ≤ t.
Suppose ∅ | Σ ⊢ σ_s and ∅ | Σ ⊢ s : T.
Suppose ⟨ s, σ_s ⟩⟨ s', σ_s' ⟩.
Then for any t and σ_t such that s ≤ t, σ_s ≤ σ_t, ∅ | Σ ⊢ σ_t,
and ∅ | Σ ⊢ t : T, we have that ⟨ t, σ_t ⟩⟨ t', σ_t' ⟩ where s' ≤ t' and σ_s' ≤ σ_t'.
From Theorem <ref> we have that there exists a t' and σ_t'
that ⟨ t, σ_t⟩⟨ t', σ_t'⟩.
By Lemma <ref> we have that either s ≤ t', σ_s ≤σ_t', and
|t'| < |t|,
or that s' ≤ t' and σ_s' ≤σ_t'. If s' ≤ t' and σ_s' ≤σ_t'
we are done. Otherwise, observe that since |t'| < |t|, a seal was removed.
This can only occur a finite number
of times, as t and t' have at most a finite number of seals, so we can simply loop until we obtain a t' and σ_t' such that s' ≤ t' and σ_s' ≤ σ_t'. Note that Preservation – Theorem <ref> –
allows us to do so, as each intermediate term t' can be given the same type Γ | Σ ⊢ t' : T.
Finally, when s eventually reduces to a value v, we can use Lemma <ref> to show
that (s,D) reduces to a similar value v' as well. Again, the following is a generalization
of the previous statement to two arbitrarily chosen well-typed terms s and t satisfying s ≤ t.
Suppose ∅ | Σ ⊢ σ_s and ∅ | Σ ⊢ s : T,
and that s eventually reduces to a value v_s – namely, that ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ for some σ_s'.
Then for any t and σ_t such that s ≤ t, σ_s ≤ σ_t, and ∅ | Σ ⊢ t : T,
we have that t eventually reduces to some value v_t
– namely ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ – such that
v_s ≤ v_t and σ_s' ≤ σ_t'.
For each step in the multi-step reduction ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ we can apply Lemma <ref> to show that
⟨ t, σ_t⟩ eventually reduces to some ⟨ t', σ_t'⟩ where v_s ≤ t'
and σ_s' ≤ σ_t'. Now by Theorem <ref> and Lemma <ref> we have that either t' is a value, in which case we are done, or that ⟨ t', σ_t' ⟩ steps to some ⟨ t'', σ_t'' ⟩ where v_s ≤ t''. Again, we can only
take finitely many steps of this fashion, as each such step can only be one that removes a seal,
so eventually we obtain a value v_t such that ⟨ t, σ_t ⟩⟨ v_t, σ_t'⟩ with v_s ≤ v_t and σ_s' ≤ σ_t', as desired.
Again, note that Preservation – Theorem <ref> –
allows us to do so, as each intermediate term t' can be given the same type Γ | Σ ⊢ t' : T.
Now from Lemma <ref> we obtain our desired immutability safety
results as a consequence – namely, given a well-typed term s
that reduces to a value v_s, any references in s given a read-only type are never actually
mutated, since they can be transparently sealed (which does not change the typing) to no ill effect. Formally,
our main result is:
Suppose s is a term, D = ∅ | Σ⊢ s : T is a typing derivation for s,
and let σ_s be some initial store such that ∅ | Σ⊢σ_s.
Then:
* (s, D) can be given the same type as s – ∅ | Σ⊢ crest(s,D) : T.
Moreover, if ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩, for some value v_s,
then:
* (s, D) will reduce to a value v_t – ⟨ crest(s, D), σ_s⟩⟨ v_t, σ_t'⟩ – such that
* v_t and σ_t' are equivalent to v_s and σ_s',
modulo additional seals – namely, that
v_s ≤ v_t and σ_s' ≤σ_t'.
Finally, it is useful to show that the converse result is also true; seals can be safely removed without affecting reduction. First note that seals themselves can be transparently removed
without affecting the types assigned to the term.
Suppose Γ | Σ⊢ s : T. Then Γ | Σ⊢ s : T.
Moreover, the following analogue of Lemma <ref> holds in .
Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t,
then for any σ_s ≤σ_t we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ such that
v_s ≤ v_t and σ_s' ≤σ_t'.
While Lemma <ref> is enough to show that, when s ≤ t, if t reduces to a value then
so does s, we need Lemma <ref> to reason about the types of s and v_s.
Suppose s and t are terms such that s ≤ t. If ⟨ t, σ_t⟩⟨ v_t, σ_t'⟩ for some value v_t,
then for any σ_s ≤σ_t we have ⟨ s, σ_s⟩⟨ v_s, σ_s'⟩ for some value v_s such that
v_s ≤ v_t and σ_s' ≤ σ_t'.
Moreover, Γ | Σ⊢ s : T and Γ | (Σ', Σ) ⊢ v_s : T for some Σ' as well.
By Lemma <ref> we can show that Γ | Σ⊢ s : T.
By Lemma <ref> we have that s reduces to some value v_s. By preservation –
Theorem <ref> – we have that v_s has type T, as desired.
§ MECHANIZATION
Our mechanization of is based on the mechanization of by <cit.>.
Our mechanization is a faithful model of as described in this paper, with one exception:
to facilitate the mechanization, reduction is performed via explicit congruence rules in
each reduction rule instead of an implicit rule for reducing inside an evaluation context, similar to
how <cit.> originally mechanize as well.
Proofs for all lemmas except for Theorem <ref> and Lemmas <ref>, <ref>, and
<ref> have been mechanized using Coq 8.15 in the attached artifact. Theorem <ref> and Lemmas
<ref>, <ref>, and <ref> have not been mechanized as they require computation
on typing derivations which is hard to encode in Coq as computation on Prop cannot be reflected
into Set. Lemma <ref> has been omitted from our mechanization as it is hard to formally state, let alone prove, in a setting where reduction is done by congruence, though it almost follows intuitively from how the reduction rules are set up.
As the proofs of Lemmas <ref>, <ref>,
<ref>, and <ref> do not rely on any extra structure present in over ,
proofs for their analogues Lemmas <ref>, <ref>,
<ref>, and <ref> have been omitted, as they can be recovered by erasing
the appropriate cases from their analogues.
§ RELATED AND FUTURE WORK
§.§ Limitations – Parametric Mutability Polymorphism
Unlike other systems, does not directly support mutability polymorphism, either through
a restricted @polyread modifier as seen in <cit.>, or through
explicit mutability variables as seen in <cit.>.
This is a real limitation of ; however, we note that it is possible to desugar parametric mutability polymorphism from a surface language into a core calculus like .
As <cit.> point out in their work, parametric mutability polymorphism can be desugared via overloading,
noting that overloading itself can be dealt with in a surface language before desugaring into a base calculus, as seen before with Featherweight Java <cit.>.
For example, consider the following top-level parametric function,
access, which is parametric on mutability variable M:
[language=Scala]
def access[M](z: M Pair[Pair[Int]]): M Pair[Int] = z.first
This function can be rewritten instead as two functions with the same name access,
one taking in a regular, mutable pair, and one taking in a readonly pair:
[language=Scala]
def access(z: Pair[Pair[Int]]): Pair[Int] = z.first
def access(@readonly z: Pair[Pair[Int]]): @readonly Pair[Int] = z.first
Nested and first-class functions are a little trickier but one can view a polymorphic, first-class
function value as a read-only record packaging up both overloads.
[language=Scala]
{ access: (z: Pair[Pair[Int]]) => z.first,
  access: (@readonly z: Pair[Pair[Int]]) => z.first }
It would be interesting future work to see how one could add parametric mutability polymorphism to .
§.§ Future Work – Algorithmic Subtyping
The subtyping rules of are fairly involved, and it is difficult to see whether an algorithmic subtyping system could
be devised. We conjecture that it can be done, using techniques from <cit.>'s integrated
subtyping work, but algorithmic subtyping for nonetheless remains an interesting open problem.
§.§ Viewpoint Adaptation
Viewpoint adaptation has been used in reference immutability systems to denote the type-level adaptation
which is enforced to guarantee transitive immutability safety.
When a field r.f is read from some record r,
the mutability of the resulting reference needs to be adapted from both the mutability
of r and from the type of f in the record itself. While this notion of adaptation was known as
early as Javari <cit.>, the term “viewpoint adaptation” was first coined by <cit.>. They realized that this notion of adaptation could be
generalized to arbitrary qualifiers – whether or not the type of a field read r.f should
be qualified by some qualifier @q should depend on whether or not f's type is qualified and whether or
not r's type is qualified as well – and used it to implement an ownership system for Java references
in order to tame aliasing in Java programs.
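Specialized to reference immutability, the adaptation can be summarized as follows (a rough sketch in our own notation, writing q_r ▷ q_f for the qualifier of the read r.f when r has qualifier q_r and the field f is declared with qualifier q_f):

mutable ▷ q_f = q_f        readonly ▷ q_f = readonly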
§.§ Reference Immutability
Reference immutability has long been studied in the context of existing object-oriented languages such
as Java and C#, and more recently has been studied in impure, functional languages like Scala.
roDOT <cit.>: roDOT extends the calculus of Dependent Object Types <cit.> with support for reference immutability. In their system, immutability
constraints are expressed through a type member field x.M of each object, where x is mutable
if and only if M ≤, and x is read-only if and only if M ≥⊤. Polymorphism in roDOT
is, of all reference immutability systems, the closest to how polymorphism is done in . Type variables
quantify over full types, and type variables can be further restricted to be read-only as in .
Constructing a read-only version of a type, like how we use readonly in , is done in roDOT by
taking an intersection with a bound on the type member M. For example, inplace_map
from before could be expressed in roDOT using an intersection type to modify immutability on the
type variable X:
[language=Scala]
def inplace_map[X](pair: Pair[X], f: (X & {M :> Any}) => X): Unit
Dort et al. also prove that roDOT respects immutability safety, but with techniques different from
the ones we use to show immutability safety in . Instead of giving operational semantics with special forms
that guard references from being mutated, and relying on progress and preservation to imply static safety,
they take a different approach and show instead that values on the heap that change during reduction must be reachable by some statically-typed mutable reference in the initial program. roDOT is a stronger system
than , as in particular mutabilities can be combined. For example, one could write
a generic getF function which reads a field f out of any record that has f as a field
polymorphic over both the mutabilities of the record x and the field f:
[language=Scala]
def getF[T](x: {M: *, f: T}): T & {M :> x.M} = x.f
Here, the return type of getF will give the proper, tightest, viewpoint-adapted type for reading
x.f depending on both the mutabilities of x and f. This is not directly expressible
in and can only be expressed using overloading:
[language=Scala]
def getF[T](x: @readonly {f: T}): @readonly T = x.f
def getF[T](x: {f: T}): T = x.f
However, in contrast, roDOT is significantly more complicated than .
Immutability for C# <cit.>: Of all the object calculi with
reference immutability the calculus of <cit.> is closest to that of roDOT in terms of flexibility. Polymorphism is possible over both mutabilities and types in Gordon's system, but must be done separately; type variables instead quantify over base types that have
not been qualified with some immutability annotation, whether that be read-only or mutable. The inplace_map function that we discussed earlier would be expressed with both a base-type variable as well as a mutability variable:
[language=Scala]
def inplace_map[M, X](pair: Pair[M X], f: @readonly X => M X): Unit
Like roDOT, Gordon's system also allows for mutability annotations to be combined in types, in effect
allowing viewpoint adaptation to be expressed at the type level using the mutability operator ~>.
For example, getF could be written as the following in Gordon's system:
[language=Scala]
def getF[MS, MT, T, S <: {f : MT T}](x: MS S): (MS ~> MT) T = x.f
Unlike roDOT however, which allows for inferences to be drawn about the mutability of the type (T & {M :> x.M}).M depending on the bounds on T and S, the only allowable judgment we
can draw about MS ~> MT is that it can be widened to @readonly. We cannot conclude,
for example, that MS ~> MT <: M in the following, even though both MS <: M
and MT <: M:
[language=Scala]
def getF[M, MS <: M, MT <: M, T, S <: {f : MT T}](x: MS S): (MS ~> MT) T = x.f
Gordon et al. also demonstrate the soundness and immutability safety of their system, but
through an embedding into a program logic <cit.>.
Javari <cit.>: Reference immutability was first modelled in the context
of Java; Javari is the earliest such extension. In Javari's formalization,
Lightweight Javari, type variables X stand in for either other type variables, class types, and readonly-qualified class types. Unlike roDOT and , in Lightweight Javari, type variables cannot be further qualified by the readonly type qualifier. Lightweight Javari, however, does support parametric mutability polymorphism for class types, but does not support parametric mutability polymorphism directly on methods. Instead, limited parametric mutability method polymorphism in Javari, denoted with the keyword romaybe, is desugared using overloading into the two underlying methods handling the read-only case and the mutable case replacing romaybe in the source. Our earlier example, getF, can be written using romaybe
as follows:
[language=Java]
class HasF<T> {
  T f;
  romaybe T getF() romaybe { return f; }
}
However, this example is inexpressible in the core calculus Lightweight Javari, as @readonly T is ill-formed. As for safety, immutability safety is argued in Lightweight Javari through a case analysis on how typed Lightweight Javari program terms can reduce.
<cit.> claim that the soundness of Lightweight Javari reduces to showing the soundness of Lightweight Java, but no formal proof is given.
ReIm: <cit.>: ReIm simplifies Javari to enable fast, scalable mutability
inference and analysis. Like Javari, ReIm supports two type qualifiers – readonly and polyread,
where readonly marks a read-only type and polyread is an analogue of romaybe from Javari.
Like Lightweight Javari, and unlike roDOT and , ReIm restricts how qualifiers interact with generics. ReIm's polymorphism model is similar to that of <cit.> – type variables range over unqualified types. However, ReIm has no mechanism for mutability polymorphism, and therefore getF cannot be written in ReIm at all. Unlike other related work, neither soundness nor immutability safety is proven to hold for ReIm.
Immutability Generic Java: <cit.>: Immutability Generic Java is a scheme for expressing immutability using Java's existing generics system. The type List<Mutable> denotes
a mutable reference to a List, whereas the type List<Readonly> denotes a read-only reference to a list. Viewpoint adaptation is not supported, and transitive immutability must be explicitly opted into. For example, in the following snippet, the field value of C is always mutable unless List is instantiated with the immutability parameter ImmutOfC.
[language=Java]
class C<ImmutOfC> {
  List<Mutable /* ImmutOfC for transitivity */, Int> value;
}
Moreover, transitive immutability cannot be expressed at all over fields given a generic type.
By the nature of how immutability is expressed in IGJ, type variables range over fully qualified types,
and there is no mechanism for re-qualifying a type variable with a new immutability qualifier. For example,
the mutability of value in any Box below depends solely on whether or not T is mutable.
Hence the value field of a Box is mutable even if it was read through a read-only Box reference – that is, a reference of type Box<ReadOnly>.
[language=Java]
class Box<ImmutOfBox, T> {
  T value;
}
Box<Readonly, List<Mutable, Int>> b = new Box(...);
b.value.add(10); // OK – even though it mutates the underlying List.
§.§ Languages with Immutability Systems
Finally, some languages have been explicitly designed with immutability in mind.
C++: const-qualified methods and values provide limited viewpoint adaptation.
Reading a field from a const-qualified object returns a const-qualified field, and C++
supports function and method dispatching based on the constness of its arguments <cit.>. Mutability
polymorphism is not explicitly supported but can be done with a combination of templates and overloading.
[language=C++]
struct BoxedInt {
  int v = 0;
};
template<typename T> struct HasF {
  T f;
  T getF() { return f; }
  const T getF() const { return f; }
};
const HasF<BoxedInt> x{};
x.getF();                  // calls the const-qualified getF()
const BoxedInt& OK = x.f;  // OK, as x.f is an l-value of type const BoxedInt
BoxedInt& Bad = x.f;       // error: discards the const qualifier
In this example a C++ compiler would disallow Bad because x.f has been adapted
to an l-value of type const BoxedInt. However, viewpoint adaptation does not lift to reference
or pointer types in C++. For example, if instead we had a pointer-to-T in HasF:
[language=C++]
template<typename T> struct HasF {
  T* f;
};
BoxedInt b{5};
const HasF<BoxedInt> x{&b};
BoxedInt* NotGreat = x.f; // OK, as x stores a constant pointer to a mutable BoxedInt
NotGreat->v = 10;         // modifies b!
C++'s limited viewpoint adaptation gives x.f the type BoxedInt * const,
which is a constant pointer to a mutable BoxedInt, not the type BoxedInt const * const,
which would be a constant pointer to a constant BoxedInt. This allows
the underlying field to be mutated.
D: In contrast to C++, where const becomes useless for pointer and reference
fields, D supports full reference immutability and viewpoint adaptation with a transitive const extended to work for pointer and reference types <cit.>.
Again, mutability polymorphism is not directly supported but can be encoded with D's compile-time meta-programming system.
Rust: In Rust, references are either mutable or read-only, and only
one mutable reference can exist for any given value. Read-only references are transitive,
like they are in , roDOT, and other reference immutability systems, and unlike C++. Here,
in this example, we cannot write to s3.f, as s3 is a read-only reference to s2,
even though s2.f has type &mut String.
[language=Rust]
struct HasF<T> {
    f: T,
}
fn main() {
    let mut s1 = String::from("hello");
    let s2 = HasF { f: &mut s1 };
    s2.f.push_str("OK");
    let s3 = &s2;
    s3.f.push_str("BAD"); // error: s3.f is behind a shared reference
}
Unlike other languages, though, the mutability of a reference is an intrinsic property of the reference type itself.
Instead of having a type operator readonly that, given a reference type T, creates a read-only
version of that reference type, Rust instead defines & and &mut, type operators
that, given a type T, produce the type of a read-only reference to a T and
the type of a mutable reference to a T, respectively. Here, in the following example,
s1 is a String, s2 is a mutable reference to s1 – a &mut String – and s3 is a read-only reference to s2 – a &(&mut String) – where all three of s1, s2, and s3 are stored at distinct locations in memory.
[language=Rust]
let mut s1 = String::from("hello");
let s2 = &mut s1;
let s3 = &s2;
As such, in Rust, one cannot create a read-only version of an existing reference type.
This makes higher-order functions over references that are polymorphic over mutability,
like inplace_map from above, inexpressible in Rust. However, if we instead had a Pair
that owned its elements, we could write the following version of inplace_map:
[language=Rust]
struct Pair<T> {
    fst: T,
    snd: T,
}
fn inplace_map<T>(p: &mut Pair<T>, f: fn(&T) -> T) {
    p.fst = f(&p.fst);
    p.snd = f(&p.snd);
}
Note, though, that in this setting, the elements p.fst and p.snd are embedded in the pair p
and owned by it.
§.§ Type Qualifiers and Polymorphism
<cit.> formalize
a system for enriching types with qualifiers with support for polymorphism over both ground, unqualified
types and qualifiers themselves. In this setting, readonly can be viewed as a type qualifier,
similar to how C++'s const can be viewed as a qualifier in <cit.>. The
resulting calculus is similar to the calculus of <cit.> restricted
only to reference immutability qualifiers.
§.§ Contracts
Our approach to sealing references is similar to and was inspired by practical programming experience
with Racket contracts – <cit.>.
Sealing, in particular, can be viewed as attaching a chaperone contract which
raises an exception whenever the underlying chaperoned value is written to, and attaches
a similar chaperone to every value read out of it. For example, a
dynamic reference immutability scheme for Racket vectors could be implemented with the following
chaperone contract:
[language=Scheme]
(define (chaperone-read vec idx v)
  (seal v))
(define (chaperone-write vec idx v)
  (error 'seal "Tried to write through an immutable reference."))
(define (seal v)
  (cond
    [(vector? v) (chaperone-vector v chaperone-read chaperone-write)]
    [else v]))
Strickland et al. prove that chaperones can be safely erased without changing the behaviour
of the underlying program when it reduces to a value. Our results on dynamic safety, Lemmas <ref>, <ref>, and <ref> can be viewed as an analogue of <cit.> specialized to reference immutability. In this setting, our static
immutability safety results show that a well-typed program will never raise an error by writing to a chaperoned
vector.
§ CONCLUSION
We contributed a simple and sound treatment of reference immutability in .
We show how a simple idea, sealing references, can provide dynamic immutability safety guarantees
in an untyped context – – and how soundness and -style polymorphism can be recovered in
a typed calculus which builds on both and .
Our hope is to enable reference immutability systems in functional languages via this work, by giving simple soundness foundations in a calculus () which underpins many impure functional languages today.
We thank Yaoyu Zhao for his interesting discussions on reference immutability.
We thank Alexis Hunt and Hermann (Jianlin) Li for their useful feedback on early drafts of this work.
This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and
by an Ontario Graduate Scholarship. No seals were clubbed in the creation of this paper.
TESS Stellar Rotation up to 80 days in the Southern Continuous Viewing Zone
Zachary R. Claytor, Jennifer L. van Saders, Lyra Cao, Marc H. Pinsonneault, Johanna Teske, Rachael L. Beaton
arXiv:2307.05664 [astro-ph.SR, astro-ph.EP], 11 July 2023
Zachary R. Claytor ([email protected]; ORCID 0000-0002-9879-3904)
Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611, USA
Institute for Astronomy, University of Hawai‘i at Mānoa, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Jennifer L. van Saders (ORCID 0000-0002-4284-8638)
Institute for Astronomy, University of Hawai‘i at Mānoa, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Lyra Cao (ORCID 0000-0002-8849-9816)
Department of Astronomy, The Ohio State University, Columbus, OH 43210, USA
Marc H. Pinsonneault (ORCID 0000-0002-7549-7766)
Department of Astronomy, The Ohio State University, Columbus, OH 43210, USA
Johanna Teske (ORCID 0009-0008-2801-5040)
Earth & Planets Laboratory, Carnegie Institution for Science, 5241 Broad Branch Road, NW, Washington, DC 20015, USA
Rachael L. Beaton (ORCID 0000-0002-1691-8217)
Space Telescope Science Institute, Baltimore, MD, 21218, USA
Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
AAS Journals
The TESS mission delivers time-series photometry for millions of stars across the sky, offering a probe into stellar astrophysics, including rotation, on a population scale. However, light curve systematics related to the satellite's 13.7-day orbit have prevented stellar rotation searches for periods longer than 13 days, putting the majority of stars beyond reach. Machine learning methods have the ability to identify systematics and recover robust signals, enabling us to recover rotation periods up to 30 days for FGK dwarfs and 80 days for M dwarfs. We present a catalog of rotation periods for cool dwarfs in the Southern Continuous Viewing Zone, estimated using convolutional neural networks. We find evidence for structure in the period distribution consistent with prior Kepler and K2 results, including a gap in 10–20-day cool star periods thought to arise from a change in stellar spin-down or activity. Using a combination of spectroscopic and gyrochronologic constraints, we fit stellar evolution models to estimate masses and ages for stars with rotation periods. We find strong correlations between the detectability of rotation in TESS and the effective temperature, age, and metallicity of the stars. Finally, we investigate the relationships between rotation and newly obtained spot filling fractions estimated from APOGEE spectra. Field star spot filling fractions are elevated in the same temperature and period regime where open clusters' magnetic braking stalls, lending support to an internal shear mechanism that can produce both phenomena.
§ INTRODUCTION
Rotation, activity, and magnetism are all deeply connected to the structure and evolution of stars.
In stars similar to and less massive than the Sun, rotation and convection power magnetism, which influences stellar winds and causes flares. Magnetized winds create torque on stars, causing them to spin down over time <cit.>; this allows us to infer stellar ages from rotation periods using gyrochronology <cit.>. Stellar magnetism is the source of space weather, which directly affects life on Earth as well as the habitability of planets around other stars.
Because of the inextricable links to rotation, a complete picture of stellar activity and magnetism demands a grasp of rotation across all types of stars.
The Kepler mission <cit.> enabled rotation period estimates for more than 50,000 stars in a single 100-square-degree patch of sky <cit.>, revolutionizing our understanding of stellar rotation. Kepler's rotation periods enabled precise age estimates for field stars <cit.> and investigations of changing stellar activity with time <cit.>. The mission also revealed departures from expected rotational behavior, such as a gap in the period distribution of cool stars <cit.> and a halt of magnetic braking in middle-aged Solar-like stars <cit.>. Stellar evolution and population synthesis models failed to predict these behaviors <cit.>, highlighting the need for updated theory as well as more period measurements.
As successful as Kepler was in measuring rotation periods, the survey design imposed tight limitations on the kinds of stars that could be studied in rotation. The spacecraft's goal of finding Earth-like planets around Sun-like stars resulted in a complex selection function that biased observed stellar samples in comparison to the underlying population <cit.>. For example, the choice to avoid young stellar populations biases the Kepler sample toward less active, slowly rotating stars intrinsically more difficult to detect in rotation. Any Kepler study preserves these biases, and the selection function is difficult to correct for. Furthermore, the small observing footprint means that any new rotational physics inferred from Kepler must be tested against other samples across the sky. The solution is an untargeted, all-sky survey.
The Transiting Exoplanet Survey Satellite <cit.> stares at millions of stars in its search for transiting planets, surveying the entire sky in 27-day sectors. In addition to short-cadence light curves for pre-selected targets, TESS delivers full-frame images (FFIs), enabling high-precision photometry for any source brighter than ∼15th magnitude. Importantly, TESS does not rely only on postage stamps for selected targets as Kepler did; the FFIs permit investigators to design their own surveys. While primarily a planet-finding mission, the mission's short cadence and long temporal baseline also make it suitable for studying stellar variability due to oscillations, pulsations, and rotational spot modulation. While studies of stellar oscillations and pulsations have achieved some success <cit.>, systematics related to TESS's observing strategy and data processing have slowed the quest for rotation periods <cit.>. It is worth noting that the Kepler mission faced similar challenges; the seminal stellar rotation paper <cit.> was published 5 years after the satellite was launched.
TESS's unique 2:1 resonance orbit of the Earth-Moon system subjects the detectors to earthshine and moonlight on the timescale of the orbit, 13.7 days <cit.>. The earthshine itself has time-varying signals within it, such as a 1-day modulation from the rotation of the Earth <cit.>. Besides earthshine, TESS encounters systematics related to angular momentum dumps, detector heating, data downlinks, and more, all on timescales that interfere with astrophysical signals. The telescope's large field of view (24^∘ by 96^∘ total) makes the background spatially non-uniform as well. Because of these effects, throughout a sector the TESS detectors encounter systematics on different pixels at different times and with varying intensity, making them difficult to correct.
Attempts to remove or correct spurious instrumental signals may also attenuate astrophysical signals, particularly those on the timescales of the telescope's orbital period (13.7 days) and longer. Rapid rotators, which also have larger spot modulation amplitudes, are affected less, and conventional rotation searches with TESS have been largely successful at measuring periods shorter than 13 days (see, for example, published catalogs of 131, 169, 13,504, and ∼100,000 periods). However, the same searches have struggled to recover longer periods, instead catching periods associated with the TESS systematics. So far, only <cit.> have claimed to recover long periods in TESS, but they relied heavily on priors from the observed Kepler period distribution.
The efforts to correct TESS systematics have yielded broadly useful public pipelines and tools like eleanor <cit.>, TESS-SIP <cit.>, Unpopular <cit.>, and T'DA <cit.>. While each pipeline makes different but well-motivated choices to handle the systematics, each decision runs the risk of accidentally removing stellar signals <cit.>. Rather than trying to remove the systematics at the risk of removing astrophysical signals, we adopt deep machine learning methods that see the periodicity with the noise and disentangle them.
Deep learning methods are now widely used in stellar astrophysics. <cit.> used random forests to classify and detect rotation signals in Kepler light curves, while <cit.> used random forests to draw connections between stellar parameters to estimate TESS rotation periods. <cit.> employed convolutional neural networks (CNNs) to identify stellar flares in light curves, while <cit.> applied similar techniques to detect oscillations in red giant stars. CNNs are particularly powerful when working with images or image-like data, which are ubiquitous in astronomy. CNNs can be trained to identify images of many different classes despite contaminating features, making them particularly attractive for our problem with TESS systematics.
In a pilot study of 21 objects, <cit.> demonstrated that long periods can be recovered from TESS data using CNNs trained on simulated data. In this work we apply the <cit.> approach to a greatly expanded sample to infer rotation periods with uncertainties for cool, main-sequence stars in the TESS Southern Continuous Viewing Zone (SCVZ). We employ new training sets tailored to specific period ranges and the specific light curves in which we search for periods. We present the periods, their associated uncertainties, and model-inferred stellar parameters in the first catalog probing long stellar rotation periods with TESS.
The paper is outlined as follows. In Section <ref> we describe our data and sample selection. In Section <ref> we outline our deep learning framework, including the training sets and model architectures. Section <ref> details our method to fit stellar evolutionary models to stars with reliable rotation periods. In Section <ref> we present the rotation periods and analyze the TESS SCVZ period distribution, comparing and contrasting with Kepler and K2. In Section <ref> we explore the detectability of periods as a function of temperature, metallicity, age, and convection zone properties to understand the effects of detection limits on the period distribution. In Section <ref> we use new spot filling fraction measurements from infrared spectroscopy to examine the effects of spottedness on the detectability of rotation, and we finally conclude in Section <ref>.
§ DATA AND SAMPLE SELECTION
Sample Selection

Designation | Description | Criteria | # targets
A1 | TESS-SPOC Dwarfs | ecliptic latitude ≤ -78^∘ & Tmag ≤ 15 & T_eff ≤ 10,000 K | 38,215
A2 | TASOC Dwarfs | ecliptic latitude ≤ -78^∘ & Tmag ≤ 15 & T_eff ≤ 5,000 K | 29,609
B1 | APOGEE–TESS-SPOC | A1 & APOGEE DR17 | 16,545
B2 | APOGEE–TASOC | A2 & APOGEE DR17 | 3,156
C1 | Rotators | Either A1 or A2 & reliable period | …
C2 | Non-Rotators | Either A1 or A2 & no reliable period | …
D | APOGEE Rotators | Either B1 or B2 & C1 | 2,654
Gold | Binary-Cleaned Rotators | D & STAR_BAD = SNR_BAD = 0 & RUWE < 1.2 & contratio < 0.1 | …
Platinum | Single Cool Dwarfs | Gold & M_G > 4.4 & G_BP-G_RP > 1 & |Δ M_G| < 0.4 | …

Our sample selections use TIC version 8.2 <cit.>, APOGEE DR17 <cit.>, and Gaia DR3 <cit.>. Selection criteria are given in terms of catalog quantities: ecliptic latitude, TESS magnitude (Tmag), and effective temperature (T_eff) from the TIC; the ASPCAP <cit.> spectral fit flags STAR_BAD and SNR_BAD; Gaia renormalized unit weight error (RUWE); TIC flux contamination ratio (contratio); absolute G-magnitude (Gaia M_G); color index (Gaia G_BP - G_RP); and Gaia photometric excess above the main sequence (|Δ M_G|). The lettered samples are described in Section <ref>, while the Gold and Platinum science samples are detailed in Sections <ref> and <ref> respectively.
For the period search, we targeted stars cool enough to maintain surface convection zones and dynamos capable of producing surface spots. The steps of our full sample selection are outlined in Table <ref>. We excluded evolved red giants, of which the large majority are slow rotators <cit.>. The 1–2% that rotate rapidly are typically the products of binary star interactions <cit.>, and not reliable age tracers. We selected relatively bright, cool dwarf and subgiant stars in the TESS SCVZ, a 450 square degree field centered around the southern ecliptic pole. TESS observed the SCVZ continuously for 350 days in its first year, taking FFIs every 30 minutes. The long baseline ensures sufficient coverage for the most slowly-rotating stars we might hope to detect. For example, an M-dwarf rotating once every 100 days will complete 3.5 rotations under observation in the CVZs. In the same interval, an old K-dwarf rotating at 45 days will rotate nearly 8 times, and a G-dwarf at 30 days will rotate more than 10.
We selected stars from the TESS Input Catalog <cit.> with effective temperature ≤ 10,000 K, TESS magnitude ≤ 15, and ecliptic latitude ≤ -78^∘ to target the SCVZ. There are 398,977 such stars in the TIC, but requiring public photometry narrowed the sample considerably. 38,215 targets had public FFI photometry from the TESS Science Processing Operations Center <cit.>. We also used FFI data products from the TESS Asteroseismic Science Operations Center <cit.>, but we selected only the 29,609 targets with TIC T_eff < 5,000 K to prioritize the most likely rotation detections. We motivate the choice to use both TESS-SPOC and TASOC products in Section <ref>, and we detail each pipeline's target selections in Sections <ref> and <ref>.
§.§ APOGEE Spectroscopy
While the TESS Input Catalog has metallicities and surface gravities for all the stars in our sample, the sources are a heterogeneous combination of photometry and spectroscopy, observations and models. Furthermore, the TIC has no information on detailed abundances, which are useful when investigating changing Galactic chemistry with time, and which are important to the connection between rotation and magnetism <cit.>. We therefore supplement TESS photometric rotation periods with spectroscopic parameters from the Apache Point Observatory Galactic Evolution Experiment <cit.>.
APOGEE collects high-resolution (R ∼ 22,500), near-infrared (1.51–1.70 μm) spectra and provides calibrated, model-dependent estimates of effective temperature, surface gravity, metallicity, and detailed abundances for hundreds of thousands of stars across the entire sky. The TESS/APOGEE survey within APOGEE-2S <cit.> targeted 38,000 stars in the TESS SCVZ with 2MASS color and magnitude ranges 7 < H < 11 and J-K > 0.3, and about 9,000 other SCVZ stars were observed for other programs. We cross-matched the TIC SCVZ cool dwarfs with APOGEE Data Release 17 <cit.> to obtain spectroscopic parameters for 47,142 stars. Of those, 16,545 have TESS-SPOC data products, and 3,156 have data products in our TASOC subsample. These combine to yield 17,796 unique targets with APOGEE spectroscopy and either TESS-SPOC or TASOC photometry.
We adopted calibrated effective temperatures, metallicities, and α-element abundances estimated by the APOGEE Stellar Parameters and Abundances Pipeline <cit.>. Comparisons between APOGEE stellar parameters and high-fidelity measurements have demonstrated the ASPCAP-derived uncertainties to be underestimated for giants <cit.> and dwarfs alike <cit.>. Pinsonneault et al. (in prep.) find temperature errors of 30 K in giants, larger for dwarfs, and scatter in clusters of 0.05 dex in metallicity and 0.03 dex in α enhancement. We therefore set minimum uncertainty floors of 50 K for T_eff, 0.05 dex for [M/H], and 0.03 dex for [α/M]. While these likely still underestimate the error in the ASPCAP measurements, they were large enough for our fitting routines to find self-consistent models that successfully predicted other stellar parameters, e.g., luminosity or surface gravity.
§.§ Gaia
We supplemented our sample with data from Gaia DR3 <cit.>, including G, G_BP, and G_RP magnitudes, parallaxes, and Renormalized Unit Weight Error (RUWE). Gaia data were available for all targets. Computing the absolute magnitude M_G from G and parallax, we use a photometric excess and RUWE to identify and remove likely binaries before population analysis.
§.§ Photometry
There are several publicly available light curve sets, pipelines, and tools designed and optimized for TESS data. We review some of the most widely used in Appendix <ref>. After trying several systematics removal pipelines and data products, we found that all pipelines were too aggressive and removed stellar signal. Instead, we used the apertures from two public pipelines and performed our own minimal corrections. Due to data availability and lightweight data products, we determined the apertures from the TESS-SPOC <cit.> and TASOC <cit.> pipelines to be the best available for a rotation search at the time of writing.
TESS-SPOC provides data products for fewer stars over a longer baseline, while TASOC provides products for a larger sample, but over a shorter baseline in TESS year 1. The two pipelines feature different target and aperture selections, providing two slightly overlapping stellar samples so that we can maximize the number of rotation periods while testing for robustness of periods against the pipelines' different apertures. We summarize the pipelines' key differences in the next two sections; we then describe our custom photometry using the pipeline apertures in Section <ref>. Both pipelines' target pixel file (TPF) and aperture data are publicly available on MAST[TESS-SPOC data are available at [10.17909/t9-wpz1-8s54]10.17909/t9-wpz1-8s54, while TASOC data are available at [10.17909/t9-4smn-dx89]10.17909/t9-4smn-dx89].
§.§.§ TESS-SPOC
The SPOC pipeline <cit.> was initially used to calibrate the TESS FFIs and generate TPFs and light curves for all two-minute cadence targets. <cit.> more recently used the SPOC pipeline to create TPFs and light curves for FFI targets, providing the TESS-SPOC light curves on MAST.
<cit.> selected a maximum of ten thousand targets per CCD from the TIC for a maximum of 40,000 stars in the SCVZ. For each CCD, the selection order was (1) all two-minute cadence targets; (2) potentially high-value planet host candidates with H magnitude ≤ 10 or distance ≤ 100 pc, flux contamination ≤ 50%, and TESS magnitude Tmag ≤ 16; (3) field star targets brighter than Tmag ≤ 13.5, log surface gravity ≥ 3.5 (CGS units), and flux contamination ≤ 20%. The depth Tmag ≤ 13.5 was chosen to ensure sufficient signal-to-noise. We estimated the 6-hour CDPP of our custom TESS-SPOC light curves to be about 4,000 ppm at Tmag = 13.5. At this faint limit, a 5σ detection should vary at the 2% level. About 0.3% of Kepler rotators varied at this level <cit.>.
TESS-SPOC computed photometric apertures using the same module as was used for Kepler. Briefly, the module uses a synthetic FFI produced from the input catalog and the real pixel response function to compute the optimal aperture for each target. <cit.> detail the full FFI target selection, <cit.> describe the SPOC pipeline, and <cit.> outline the aperture selection. The TESS-SPOC pipeline has produced TPFs, which include target apertures, for all sectors in year 1. We queried all TPFs available for our sample, yielding time-series images and photometric apertures for 38,215 targets.
§.§.§ TASOC
TASOC has performed photometry for all stars brighter than TESS magnitude ≤ 15 for use in asteroseismology <cit.>. To date, only sectors 1–6 from the first year have been processed, yielding time-series FFI photometry with a 160-day baseline and 30-minute cadence. While fewer sectors of data are available from TASOC, limiting us to shorter rotation periods than TESS-SPOC, TASOC's fainter magnitude limit and lack of number cap (i.e., TESS-SPOC processed not more than 10,000 stars per CCD, but TASOC has no such limit) complements the TESS-SPOC data. To compute light curves, we downloaded the TASOC apertures and applied them to cutouts from the calibrated FFIs.
The TASOC pipeline computed apertures for all TIC targets brighter than Tmag ≤ 15. Aperture selection is fully described by <cit.>, but uses the clustering algorithm DBSCAN <cit.> to find clusters of pixels associated with TIC targets. The watershed image segmentation routine from <cit.> is then used to segment apertures containing more than one target. In general, the apertures created by the TASOC pipeline are larger than those created by TESS-SPOC, resulting in light curves with higher photometric precision. We estimated our custom TASOC light curves to have 6-hour CDPP of 3,000 ppm at Tmag = 13.5. A 5σ detection at this magnitude will vary at the 1.5% level. In Kepler, about 0.8% of rotating stars varied at this level <cit.>.
TASOC data products are also available on MAST. To obtain the likeliest targets for detecting rotation, we queried data for TIC dwarf stars cooler than 5,000 K, yielding FFI cutouts for 29,609 targets spanning the first 6 sectors of the TESS mission.
§.§.§ Custom Light Curves and Wavelet Transform
For both datasets, we began with the publicly available TPF cutouts from calibrated FFIs. The FFI calibrations include traditional bias, dark, and flat field corrections, cosmic ray removal, corrections for variations in pixel sensitivity, and removal of smear signals resulting from the cameras' lack of shutters <cit.>. After FFI calibration, both the TESS-SPOC and TASOC pipelines perform background subtraction and systematics correction to produce light curves; we opt not to use this next level of data correction, as they can have the unintended consequence of removing or attenuating the stellar signals. To mitigate the removal of stellar rotation signals, we performed custom photometry using the apertures supplied by the pipelines. For each available TPF, we computed light curves as follows:
* reject cadences with bad quality flags, which are usually associated with cosmic rays, data downlinks, or angular momentum dumps
* compute a raw light curve using simple aperture photometry, adding all pixels within the aperture
* remove the first three principal components of the time series
* reject 5σ outliers from the light curve.
Although neural networks can perform regression in spite of systematics to some extent, some systematics removal is necessary.
We sought to perform as little systematics correction as possible in order to preserve the underlying stellar signals. Removing the first three principal components corrected the largest TESS systematics—Earthshine and angular momentum dumps—while leaving smaller systematics and stellar signals mostly intact. To determine the optimal number n_pca of principal components to remove, we removed 1, 2, 3, 4, and 5 components from a set of 10 randomly selected light curves. We then visually inspected the resulting light curves to determine for what value of n_pca the largest systematics were removed. Meanwhile, removing 5σ outliers cleaned the light curves of systematic jumps and stellar flares. Next, we median-divided the light curves for each target and stitched them together, linearly interpolating to fill any gaps. Finally, we computed Morlet wavelet transforms following <cit.> and binned them to 64×64 pixels to be used as input to the convolutional neural network.
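As a concrete illustration of these steps, the sketch below implements the per-sector photometry in plain numpy. The function name, assumed array layout, and the choice to regress the pixel principal components out of the simple-aperture flux are our own illustrative assumptions, not code from the TESS-SPOC or TASOC pipelines; the sector stitching and wavelet transform are omitted.

import numpy as np

def custom_light_curve(flux_cube, aperture, quality, n_pca=3, clip=5.0):
    """Illustrative sketch of the per-sector photometry steps described above.
    Assumed inputs: flux_cube[time, row, col] from a calibrated FFI cutout,
    a boolean aperture mask, and the TESS quality flag array."""
    # 1. Reject cadences with bad quality flags
    good = quality == 0
    pixels = flux_cube[good][:, aperture]          # shape (n_cadence, n_pix)

    # 2. Simple aperture photometry: sum all pixels within the aperture
    sap = pixels.sum(axis=1)

    # 3. Regress out the first n_pca principal components of the pixel
    #    time series (a constant column preserves the mean flux level)
    u, _, _ = np.linalg.svd(pixels - pixels.mean(axis=0), full_matrices=False)
    design = np.column_stack([u[:, :n_pca], np.ones_like(sap)])
    coeffs, *_ = np.linalg.lstsq(design, sap, rcond=None)
    corrected = sap - design[:, :n_pca] @ coeffs[:n_pca]

    # 4. Reject 5-sigma outliers (robust sigma from the median absolute deviation)
    resid = corrected - np.median(corrected)
    sigma = 1.4826 * np.median(np.abs(resid))
    keep = np.abs(resid) < clip * sigma

    # Median-divide; sector light curves would then be stitched, interpolated,
    # and wavelet-transformed before being passed to the neural network
    return corrected[keep] / np.median(corrected[keep])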
§.§.§ Variability Amplitudes
We computed the photometric variability amplitudes R_per and S_ph for all our stars with estimated periods. Like <cit.>, to compute R_per we measured the interval between the 5th and 95th percentile of normalized flux in each period bin, then took the median of those values. We computed S_ph as in <cit.>, by partitioning the light curve into segments of duration 5P_rot, then taking the standard deviation of the light curve flux over each segment. This creates a time series of standard deviations; S_ph is taken to be the median value. For different analyses we use either R_per or S_ph, but in theory the two metrics are related by S_ph≈ 0.35 R_per[This approximation holds true for perfect sinusoids observed for an integer number N of cycles or in the limit of large N. However, our measured S_ph and R_per follow this relation remarkably well.]. We verified this relation in our measurements, so for this work we consider the two metrics to be interchangeable.
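The following sketch illustrates these two definitions; the helper name and the minimum number of points per bin are arbitrary choices, and the actual measurements may treat data gaps and bin edges more carefully.

import numpy as np

def variability_metrics(time, flux, prot):
    """Hedged sketch of the R_per and S_ph definitions above.
    Assumes time in days, median-normalised flux, and rotation period prot in days."""
    def binned_stat(width, stat):
        edges = np.arange(time.min(), time.max() + width, width)
        vals = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            seg = flux[(time >= lo) & (time < hi)]
            if seg.size > 10:                        # skip nearly empty bins
                vals.append(stat(seg))
        return np.median(vals)

    # R_per: median over period-length bins of the 5th-95th percentile flux range
    r_per = binned_stat(prot, lambda f: np.percentile(f, 95) - np.percentile(f, 5))
    # S_ph: median over 5*P_rot segments of the flux standard deviation
    s_ph = binned_stat(5 * prot, np.std)
    return r_per, s_ph        # expect s_ph ~ 0.35 * r_per for sinusoidal signals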
We emphasize that due to sector-to-sector stitching and detector sensitivity changes during momentum dumps, variability amplitudes for periods longer than about 13 days and especially 27 days will inevitably be suppressed. To attempt to account for this, we ran a series of noiseless, sinusoidal light curve simulations through a renormalization and stitching algorithm and compared the measured amplitudes to the true input amplitudes. For perfect sinusoids, we found that the amplitude suppression factor decays exponentially with period past 27 days. However, applying a correction to our measured amplitudes did not affect our results; to avoid artificially injecting period–amplitude biases we leave our reported amplitudes uncorrected.
Finally, we also measured the 6-hour combined differential photometric precision <cit.>, which quantifies the photometric noise, for each of our light curves. Since the CDPP is measured on timescales shorter than the typical TESS systematics, it should be unaffected by momentum dumps and sector-to-sector stitching.
§ DEEP LEARNING FRAMEWORK
To infer rotation periods from TESS light curves, we applied the method of <cit.> with a few modifications. Namely, we generated new training sets tailored to both the TESS-SPOC and TASOC samples (mostly to represent the different light curve lengths), and we optimized different neural networks for each data set.
§.§ Training Set
In <cit.> we trained a convolutional neural network on a set of synthetic light curves made from physically realistic spot evolution simulations, combined with real TESS noise from SCVZ galaxy light curves. Inactive galaxies do not vary on yearlong timescales or shorter, and thus they provide a robust reference sample from which to infer systematics. Other quiescent objects can serve the same role, such as hot stars, which we employ here.
Two weaknesses of our previous approach were that (1) we were not successful in recovering periods of less than 10 days from our held-out test set, and (2) the neural network overfit within a few (of order 10) iterations over the training set. The first weakness was due to the choice of a loss function that enabled the network to estimate period uncertainty. In the presence of uncertainty, inferred periods are biased toward the mean of the distribution and away from the upper and lower limits. The effect is most pronounced for the top and bottom 10% of the training set period range, affecting the ranges from 0–18 days and 162–180 days. Since the ability to estimate the period uncertainty is a key strength of our approach, we worked around this problem by using multiple training sets with different period ranges.
We created four separate simulated training sets using <cit.> with periods ranging from 0.1 day to 30, 60, 90, and 180 days. Having a shorter upper limit such as 30 days allows us to more successfully probe the short-period range—here only 0–3 days and 27–30 days are severely affected—while having multiple training sets with increasing upper limits gives us multiple period estimates that we can mutually compare for extra tests of reliability. The distributions of all simulation input parameters besides period were the same as in <cit.> (the simulations for the 180 day upper limit are the same as in the previous work), and the same simulations were used for both the TESS-SPOC and TASOC training sets. The only other difference was the source of the light curves combined with the simulations to emulate instrumental noise and systematics. We note that using multiple training realizations yields multiple period estimates for the same star; we discuss the breaking of ties in Section <ref>.
The second shortcoming was simply due to the small number (∼ 2,000) of galaxy light curve examples. If there are too few examples of noise in the training set, the neural network learns the noise quickly and overfits the data. Since there are many more bright stars than galaxies in TESS, we addressed this by combining our simulations with light curves of stars in temperature ranges that should be comparatively quiescent to emulate TESS noise. <cit.> detected periods in Kepler stars hotter than the Sun half as often as in cooler stars. Given TESS's slightly worse photometric precision and redder pass band than Kepler, we expect TESS stars hotter than the Sun to be even harder to detect in rotation. This makes stars in the temperature range above ∼5,800 K ideal for use as quiescent light curve sources. At first we queried TPFs and computed light curves for TASOC stars in the range 5,800 K ≤ T_eff≤ 6,000 K. We kept light curves with at least 4 sectors to allow for gaps in the data while ensuring that there were data for more than half the time baseline. This yielded a set of 23,332 TASOC noise templates, an order of magnitude more than the number of galaxy sources used in the previous exercise. The same range of temperatures in TESS-SPOC, requiring that light curves have at least 7 sectors to cover more than half of the time baseline, has only 6,000 targets, so a larger temperature range was required. We used the range 6,000 K ≤ T_eff≤ 8,000 K, which contained 17,637 sources. Table <ref> details the noise light curve samples that make up the TESS-SPOC and TASOC training sets.
We note that the temperature range for the TESS-SPOC noise light curves overlaps with the δ Scuti instability strip <cit.> and with the γ Doradus strip <cit.>. Of our TESS-SPOC noise targets, 1,724 (∼10%) fall within the δ Scuti strip, and depending on the criterion used, as few as 30% <cit.> and as many as two-thirds <cit.> are within the γ Dor strip. Because δ Scuti stars pulsate with periods on the order of hours and γ Dor less than about 3 days, we do not expect significant contamination from pulsation in our training light curves. The TASOC noise sample does not overlap with either instability strip. The presence of contaminants in the training set should make the CNN more robust against contamination (i.e., misidentifying a pulsator as a rotator), but thoroughly testing this is beyond the scope of this work.
§.§ Convolutional Neural Network
We began with the same CNN as in <cit.>, which uses the Adam optimizer <cit.> and negative log-Laplacian loss, enabling the estimation of uncertainty along with the rotation period. The loss function has the form
ℒ = ln(2b) + |P_true - P_pred|/b,
where b, the median absolute deviation, is taken to represent the uncertainty.
We experimented with different architectures to optimize different networks to the TESS-SPOC and TASOC training sets. The original architecture had three convolution layers with (A) 8, 16, and 32 kernels, respectively, but we also tried (B) 16, 32, and 64 kernels; (C) 32, 64, and 128; and (D) 64, 128, and 256. More kernels or filters per layer allow the network to learn more features if they are present in the data, but they may also cause the network to overfit the data faster. We trained each architecture individually on each training set and chose the architecture with the best overall recovery on a held-out test sample. For the TESS-SPOC set, architecture C performed best overall, but architecture A was optimal for the TASOC set. We discuss the details of architecture optimization in Appendix <ref>.
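To make the setup concrete, the sketch below shows architecture A and the negative log-Laplacian loss in PyTorch (the original implementation may use a different framework). The kernel size, pooling scheme, and dense-layer widths are illustrative assumptions; only the kernel counts, the 64×64 single-channel wavelet input, and the two-output (period, uncertainty) head follow the description above. Training would use the Adam optimizer, e.g. torch.optim.Adam(model.parameters()), as stated in the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationCNN(nn.Module):
    """Sketch of architecture A (three convolution layers with 8, 16, and 32
    kernels). Kernel size, pooling, and dense-layer widths are assumptions."""
    def __init__(self, channels=(8, 16, 32)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels[-1] * 8 * 8, 128),   # 64x64 input -> 8x8 after pooling
            nn.ReLU(),
            nn.Linear(128, 2))                      # outputs: period and raw scale

    def forward(self, x):
        period, raw_b = self.head(self.features(x)).unbind(dim=-1)
        b = F.softplus(raw_b) + 1e-6                # keep the Laplace scale positive
        return period, b

def laplace_nll(p_true, p_pred, b):
    """Negative log-Laplacian loss from the equation above: ln(2b) + |P_true - P_pred| / b."""
    return (torch.log(2 * b) + torch.abs(p_true - p_pred) / b).mean()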
§ ROTATIONAL MODELING
With newly obtained TESS rotation periods, we will be able to look for trends of rotation detectability and variability across fundamental stellar parameters. Stars spin down and become less active as they age <cit.>, so we expect both detectability and variability to decrease with age. We also know activity to vary with Rossby number, the ratio of rotation period to the convective overturn timescale <cit.>. To validate these relationships and look for potential departures from expected behavior in TESS, we will need to derive ages and Rossby numbers for our sample. We employ stellar evolution and rotational modeling <cit.> to infer ages, masses, convective timescales, and Rossby numbers for our stars with rotation periods and APOGEE spectroscopy.
Input Physics to Stellar Evolution Models

Parameter | Value/Source
Atmosphere | <cit.>
Convective overshoot | False
Diffusion | True
Equation of state | OPAL <cit.>
High-temperature opacities | OP <cit.>
Low-temperature opacities | <cit.>
Mixing length α | 1.86
Mixture and solar Z/X | <cit.>
Nuclear reaction rates | <cit.>
Solar X | 0.7089
Solar Y | 0.2728
Solar Z | 0.0183
Δ Y / Δ Z | 1.4
Surface (Z/X)_⊙ | 0.02289
Angular momentum evolution | <cit.>
Initial rotation period | 8.134 d
Critical Rossby number | 2.16
Critical ω for saturation | 3.394×10^-5 s^-1
f_k | 6.575
Disk coupling timescale | 0.281 Myr
The stellar evolution tracks were generated using the non-rotating version of the Yale Rotating Stellar Evolution Code <cit.>, then global stellar properties were used to calculate angular momentum evolution following the magnetic braking law of <cit.>. The models are fully described by <cit.>, but we list the input physics and solar calibration here in Table <ref>. The angular momentum evolution includes weakened magnetic braking beginning about halfway through the main sequence <cit.>, but does not include core-envelope decoupling <cit.> or the apparent stalling of spin-down that appears to occur in young, cool stars <cit.>.
Using Markov-chain Monte Carlo (MCMC) tools, we interpolated and fit stellar evolution models to the observational data. For the MCMC we used a χ^2 log-likelihood of the form
ℒ_χ^2 = -1/2∑_i(x_i - x_i')^2/σ_x_i^2,
where x_i and σ_x_i are the observational input parameters and uncertainties, respectively, x_i' is the computed value from the model, and i iterates over the input parameters. The observables used in this computation were the CNN-inferred rotation periods, APOGEE calibrated temperatures, metallicities ([M/H]) and α-element abundances ([α/M]). All MCMC input data are provided with uncertainties in Table <ref>.
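In code, this likelihood is a one-line χ^2 sum. The sketch below assumes a stand-in predict interface for the stellar-model interpolator, since we do not reproduce the grid interpolation here; an ensemble sampler (e.g., emcee) can then explore this log-likelihood plus a log-prior.

import numpy as np

def log_likelihood(theta, observed, errors, predict):
    """Sketch of the chi-square log-likelihood used in the MCMC fits.
    `predict` stands in for the stellar-model interpolator, mapping sampled
    parameters theta (e.g. mass, age, [M/H], [alpha/M]) to modeled observables
    (P_rot, T_eff, [M/H], [alpha/M]); its interface is an assumption."""
    model = predict(theta)
    return -0.5 * np.sum((observed - model) ** 2 / errors ** 2)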
§ ROTATION PERIODS OF TESS STARS
We estimated periods for TESS-SPOC targets with at least 7 sectors and TASOC targets with at least 4 sectors, the same minimum numbers of sectors as for the training light curves. To determine reliability we followed the method of <cit.> and used a cut in fractional uncertainty to denote reliability. We do not treat the estimated σ_P (= b in Eq. <ref>) as a formal uncertainty. Rather, the quantity σ_P/P serves as a metric of relative credibility of period estimates. σ_P/P ≤ 35% translated to ∼10% median percent error in the training set recovery, so we adopt this uncertainty cut as our baseline for reliability. Since there are four neural networks, each with its own period range, we obtained four sets of period candidates for both the TESS-SPOC and TASOC data sets. If two or more neural networks yielded estimates passing our reliability cut for the same star, we averaged the estimates and added their standard deviation in quadrature to the uncertainty. If the newly combined fractional uncertainty was larger than 35%, we discarded the star.
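The tie-breaking logic can be summarized with the following sketch; how the individual uncertainties are pooled before adding the scatter in quadrature is an assumption on our part (we take their root mean square).

import numpy as np

def combine_period_estimates(periods, sigmas, max_frac=0.35):
    """Sketch of combining period estimates from multiple neural networks,
    keeping only those that pass the fractional-uncertainty cut."""
    periods = np.asarray(periods, float)
    sigmas = np.asarray(sigmas, float)
    ok = sigmas / periods <= max_frac                 # reliable estimates only
    if not ok.any():
        return None                                   # no reliable period for this star
    p = periods[ok].mean()
    sigma = np.sqrt(np.mean(sigmas[ok] ** 2) + np.var(periods[ok]))
    if sigma / p > max_frac:                          # combined estimate too uncertain
        return None
    return p, sigma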
Properties of Rotationally Detected TESS SCVZ Stars, MCMC Input & Fit Parameters

Label | Description
TIC | TESS Input Catalog ID
prot | CNN-inferred rotation period
e_prot | rotation period uncertainty
prov | period provenance: TESS-SPOC or TASOC
rper | photometric activity range R_per
sph | photometric activity index S_ph
cdpp | combined differential photometric precision
Tmag | TESS magnitude
contratio | TIC flux contamination ratio
instability | instability strip flag
parallax | Gaia DR3 parallax
ruwe | Gaia DR3 renormalized unit weight error
phot_g_mean_mag | Gaia DR3 apparent G magnitude
bp_rp | Gaia DR3 G_BP - G_RP color index
teff | APOGEE DR17 effective temperature
teff_err | temperature uncertainty
m_h | APOGEE DR17 metallicity [M/H]
m_h_err | metallicity uncertainty
alpha_m | APOGEE DR17 α enhancement [α/M]
alpha_m_err | α enhancement uncertainty
snr_bad | APOGEE DR17 spectral signal-to-noise flag
fspot | spot filling fraction
age | MCMC gyrochronological age
e_age+ | 1σ age upper credible limit
e_age- | 1σ age lower credible limit
mass | MCMC-inferred stellar mass
e_mass+ | 1σ mass upper credible limit
e_mass- | 1σ mass lower credible limit
rad | MCMC-inferred stellar radius
e_rad+ | 1σ radius upper credible limit
e_rad- | 1σ radius lower credible limit
Ro | MCMC-inferred Rossby number
fconv | MCMC convergence flag
The “snr_bad” flag represents the APOGEE spectral signal-to-noise flag and is set for only 21 stars. The “instability” flag marks stars whose TIC temperatures and luminosities place them in the instability strip. It is set to 1 for 98 stars within the δ Scuti instability strip characterized by <cit.>, 2 for 105 stars in the γ Doradus strip of <cit.>, and 3 for 243 stars in the γ Dor strip of <cit.>. This table is available in its entirety in machine-readable format.
We obtained TESS-SPOC stars with reliable periods and reliable TASOC periods. These combine for a total of unique targets, of which overlap between the two samples. We discuss the overlap sample in Section <ref>. The rotation periods up to 80 days, their photometric amplitudes, selected spectroscopic parameters, and associated flags are presented in Table <ref>. We also list the stellar parameters for the rotationally non-detected stars in Table <ref>. For stars with periods from both TESS-SPOC and TASOC data, we favored the TESS-SPOC period in the final table due to the light curves having twice the duration of the TASOC light curves. We note that while the CNN estimated periods longer than 80 days that passed the uncertainty cut, this regime is highly contaminated by obviously spurious detections. We leave the vetting of periods longer than 80 days to future work, and for now we consider only shorter periods to be reliable. Figure <ref> shows a small selection of light curves for which we obtained periods. The periods are plotted against TIC effective temperature in Figure <ref> to illustrate, for the first time, the distribution of main sequence stellar rotation periods longer than 13 days in TESS.
Properties of Rotationally Nondetected TESS SCVZ Stars

Label | Description
TIC | TESS Input Catalog ID
prov | period provenance: TESS-SPOC or TASOC
rvar | photometric activity range R_var
cdpp | combined differential photometric precision
Tmag | TESS magnitude
contratio | TIC flux contamination ratio
instability | instability strip flag
parallax | Gaia DR3 parallax
ruwe | Gaia DR3 renormalized unit weight error
phot_g_mean_mag | Gaia DR3 apparent G magnitude
bp_rp | Gaia DR3 G_BP - G_RP color index
teff | APOGEE DR17 effective temperature
teff_err | temperature uncertainty
m_h | APOGEE DR17 metallicity [M/H]
m_h_err | metallicity uncertainty
alpha_m | APOGEE DR17 α enhancement [α/M]
alpha_m_err | α enhancement uncertainty
star_bad | APOGEE DR17 stellar parameter fit flag
snr_bad | APOGEE DR17 spectral signal-to-noise flag
The “star_bad” flag represents the APOGEE stellar parameter fit flag, set when a best-fit model is close to a grid edge. It is set for 772 stars. The “snr_bad” flag represents the APOGEE spectral signal-to-noise flag and is set for only 123 stars. The “instability” flag marks stars whose TIC temperatures and luminosities place them in the instability strip. It is set to 1 for 1,997 stars within the δ Scuti instability strip characterized by <cit.>, 2 for 3,412 stars in the γ Doradus strip of <cit.>, and 3 for 6,161 stars in the γ Dor strip of <cit.>. This table is available in its entirety in machine-readable format.
§.§ Features of the Period Distribution: Biases
Since the TESS-SPOC sample spans a wider range in temperature than our TASOC sample, we will focus our main discussion of the period distribution on the TESS-SPOC sample. In this period distribution there are two apparent edges worth noting. First, there is a temperature edge at 6,000 K. The underlying sample distribution has no such edge, so it must be produced by the period search. 6,000 K is the lower bound of the noise source sample used for the TESS-SPOC training set, so above this temperature there is some overlap between the training light curves and the “real” data. It is possible that inclusion in the training set as a noise template (multiple instances with varying injected simulated rotation signals) confuses the neural network and causes it to assign a large uncertainty to these targets. Another possibility is that spot modulation amplitudes drop above 6,000 K, where the convective envelope disappears and stars become less active. This drop in amplitude is seen in the Kepler stars of Fig. <ref>. The drop in detections above 6,000 K is likely a combination of these effects.
The other edge is in rotation period and occurs at roughly 27 days. While slow rotators tend to be less active than fast rotators at fixed temperature, the spot modulation amplitudes at which we expect to lose detections vary in period across temperature. In other words, a period detection edge produced by astrophysical variability should not be flat. Rather, the 27-day detection edge is likely related to the 27-day sector length in TESS. Without a reliable absolute flux calibration in each sector, stitching sector light curves together can destroy coherent signals longer than 27 days in period. While we include sector-to-sector stitching in all our training sets, the 27-day edge suggests that the training sets do not fully capture the sector effects in TESS, or at the very least the sector effects make period estimates much less certain beyond 27 days.
§.§ Features of the Period Distribution: Populations
The period–temperature distribution displays a sloped, short-period edge, similar to what was seen in Kepler <cit.>. This edge represents the point at which field stars converge onto the slowly-rotating sequence <cit.>.
The distribution also displays a gap in rotation period, occurring at roughly 12 days at 5,000 K and increasing to 20 days at 4,000 K. <cit.> first identified this gap in the Kepler field, and it has also been recovered in other field star samples using K2 <cit.>. <cit.> showed that the gap may close in fully convective star samples. We present here the first detection of the rotation period gap using TESS.
Figure <ref> shows another look at the rotation period distribution, now colored by the photometric variability amplitude S_ph, in comparison with the distribution from the Kepler field <cit.>. As we expect, stellar variability generally decreases with increasing periods at fixed temperature, since slowly rotating stars are less magnetically active than faster stars. There is a significant dip in the variability between 3,500 K and 4,500 K, most notably near the location of the rotation period gap, which goes from about (5,000 K, 12 d) and curves upward to (4,000 K, ∼20 d) (refer to Figure <ref>). This is consistent with <cit.>, who found a similar dip in variability near the period gap in Kepler and K2 stars. They argued that the dip in variability causes the apparent gap in rotation periods, where stars in the gap exhibit modulation too small to be detected in rotation.
Figure <ref> shows the TESS-SPOC period–temperature distribution using different variability range R_per floor values. Requiring log(R_per/ppm) > 3.5 removes many stars from the top-left corner of the diagram, which are hot but have apparently long rotation periods. While we do not expect to find many stars in this regime based on Galactic population synthesis <cit.>, the stars that are here should have low variability because they are hot and therefore have thin-to-nonexistent outer convective envelopes, and because they spin relatively slowly. The stars that are lost from the top panel to the middle panel of Figure <ref> are likely mostly spurious detections whose measured R_per is actually the photometric noise, as well as a handful of correctly measured, low-variability stars. As we continue to increase the R_per floor, we see two effects. First, we lose more low-variability stars on the hot, long-period edge. This is precisely what we expect to see in a period sample: raising the variability floor, we should lose the highest-Rossby number stars first. These are the slowly rotating, hot stars in the top left “corner” of the distribution. Second, the gap becomes more apparent, consistent with <cit.>, although stars are not lost from the gap at a significantly higher rate than stars outside the gap.
§.§ Comparison between TESS-SPOC and TASOC
In the TASOC sample (e.g., in the right panel of Figure <ref>), we again see a weak presence of the period gap as well as the sloped short-period edge. The TASOC sample also shows the 27-day horizontal detection edge exhibited by the TESS-SPOC sample, resulting from the increase in uncertainty past 27 days from sector-to-sector stitching.
There are stars in common between the TASOC and TESS-SPOC samples. We estimated two periods for each of these stars using different neural networks fit to different training sets tailored to the different light curve lengths. While the underlying pixel data between the two samples were the same, the apertures used to perform photometry were different, and the TESS-SPOC light curves were more than twice as long (13 sectors) as the TASOC light curves (6 sectors). In addition, the two training sets used different underlying samples of stars for noise and systematics. This gives us a sample to compare period estimates for robustness against photometric aperture, training set, and duration of observation.
In Figure <ref> we compare the period estimates for the overlap sample. They mostly agree, with a median relative error of 7%. The estimates that disagree have relatively large uncertainties, though the fact that they make our 35% reliability cut means that there will be some contamination in our period sample. 76% of stars in the overlap sample have period estimates agreeing to within 20%. The discrepancies likely arise from the different aperture selection, different light curve durations, or differences in the underlying training sets, although here we do not attempt to isolate the main contributor.
§.§ Comparison with other large field rotation samples
The TESS rotation period distribution is the product of the underlying distribution of periods, the presence of modulation in the light curve, the availability of data products, and the ability to detect periods across various stellar parameters. To try and understand the relative influence of these effects, we compare the TESS period distribution with other large period data sets, particularly Kepler and K2. Figure <ref> shows the period distributions from Kepler and K2, while Figure <ref> shows our newly obtained TESS distribution. We represent temperature bins as vertical histograms in the style of <cit.> to increase the clarity of the period gap in the cool-temperature regime. The number of temperature and period bins is adjusted in each panel to account for the total number of stars in each sample.
The top panel of Figure <ref> displays 52,338 carefully vetted Kepler rotation periods from <cit.>. The Kepler period distribution exhibits a pileup on its upper edge for stars hotter than ∼ 5,500 K, which is a prediction of the weakened magnetic braking hypothesis <cit.> and has been well-studied in the Kepler field <cit.>. Also present is the rotation period gap, clearly visible at ∼ 15 days at 5,000 K, ∼ 17 days at 4,500 K, and continuing to increase and widen at cooler temperatures.
The bottom panel of Figure <ref> shows 13,847 rotation periods from stars in K2 measured by <cit.>. These represent a high-fidelity subsample with normalized Lomb-Scargle peak height > 0.5 and variability range R_var > 0.1%. Peak heights range from 0 to 1 and quantify how sinusoidal a light curve is, with a perfect sinusoid returning unit peak height, and noisy, non-sinusoidal data returning values close to zero. R_var is defined similarly to R_per, except that R_var is the variability range over the entire light curve, rather than a median of ranges per period bin. The K2 distribution shows the period gap most strongly between 5,000 K and about 4,250 K, but it is weakly visible in cooler stars, where it appears to increase in period and widen as in Kepler. The hot star pileup is not apparent here. This is likely due to the relatively large temperature uncertainty in the K2 Ecliptic Plane Input Catalog <cit.>, which blurs out features in the temperature distribution <cit.>. Finally, periods longer than about 35 days are largely absent from the K2 distribution because of K2's 80-day observing campaigns in each field.
The TESS distribution in the top panel of Figure <ref> shows periods for 5,056 TESS-SPOC stars with σ_P/P ≤ 35%, representing the most credible detections. The period gap is present and is most apparent at temperatures between 4,000 and 5,000 K. It is still visible at temperatures cooler than 4,000 K, but the dearth of reliable detections at periods nearing 30 days makes the gap more difficult to detect. In addition to the gap, we detect a handful of M-dwarfs rotating with periods between 40 and 60 days; similar stars were also observed in the Kepler period distribution. We visually inspected the light curves for these stars and confirmed them to be true rotation detections with photometric variability R_per approaching 1%. On the hot end, the distribution lacks the long-period edge seen in Kepler because of the abundance of hot stars apparently rotating with ∼20-day periods. These are likely spurious detections, as their measured amplitudes are close to the noise floor (close to 100 ppm for Tmag = 8 and 1% for Tmag = 15). When we raise the variability floor in the bottom panel of Figure <ref>, the hot, slow rotators mostly disappear, but the gap and the slowly rotating M-dwarfs remain.
We offer one final view of the TESS period distribution, now plotted over the Kepler distribution of <cit.>, in Figure <ref>. The short-period edge of the TESS distribution has the same location and slope as Kepler's, suggesting that the edge is a result of rotational evolution, rather than arising from details of the star formation history <cit.>. The rotation period gap agrees as well, following Kepler's for as long as the TESS gap remains visible into the hot stars. TESS appears to see stars in regions Kepler does not: the slowly rotating, hot stars (T_eff > 5000 K and P_rot > 30 d) have amplitudes close to the noise floor and are likely spurious detections. On the other hand, the slowly rotating M dwarfs, with TESS periods up to 80 days, have been vetted by eye and are mostly real rotation detections. Interestingly, the branch of stars beneath the period gap turns over at temperatures below 3,500 K, which is not seen in Kepler but is seen in some younger samples observed by K2 and MEarth <cit.>.
§.§ Modeling Results
Taking the stars with reliable periods from either TESS-SPOC or TASOC, we cross matched with APOGEE DR17 spectroscopic parameters estimated with ASPCAP <cit.>. To ensure a high quality sample, we removed objects with the ASPCAP STAR_BAD and SNR_BAD flags set. We also checked the MULTIPLE_SUSPECT flag for possible double-lined spectroscopic binaries, but none of our APOGEE rotators were flagged. Some stars in our sample had multiple visits and therefore multiple ASPCAP measurements. For targets with multiple measurements, we averaged the temperatures, metallicities, and α abundances, then added the standard deviation of those measurements in quadrature with the formal ASPCAP uncertainties to obtain an uncertainty for each measurement. This affected 201 targets out of with APOGEE spectroscopy. We then filtered out targets with large Renormalized Unit Weight Error <cit.> and high flux contamination ratio (TIC contratio > 10%) to clean the sample of potential binary or nearby flux contaminants. This yielded a sample of stars, which we designate as our “Gold” sample. We fit models to these stars according to the procedure in Section <ref>, taking the posterior medians as the nominal fit parameters. The fit parameters and their uncertainties are presented in Table <ref>.
§.§.§ The TESS SCVZ age distribution
The ages for our stars, which are estimated using our TESS rotation periods, are shown in Figure <ref>. We separate stars with rotation periods less than 10 days, which in Kepler were more likely to be spun-up by close binary companions than be true rapid rotators <cit.>. The age distribution peaks between 2 and 4 Gyr, which is consistent with other age distributions of Solar neighborhood stars: <cit.> used isochrones for GALAH stars, <cit.> used rotation-based ages for Kepler dwarfs, <cit.> used isochrones for Kepler dwarfs, <cit.> used asteroseismology for Kepler giants, and <cit.> used seismology in the TESS CVZs; all obtained a distribution peaking between 2 and 4 Gyr. We note that our age distribution lacks many of the old (> 6 Gyr) stars seen in other samples. This is a consequence of two detection biases: (1) our 27-day detection edge prevents the reliable detection of old stars hotter than 4,000 K, whose periods exceed that limit, and (2) old stars rotate more slowly and are less active, further complicating their detection in rotation.
§.§.§ Galactic chemical evolution
With rotation-based ages and high-resolution APOGEE spectroscopic abundances, we can also look for Galactic chemical evolution trends in TESS <cit.>. Stars' initial composition patterns are set by the compositions of the clouds in which they form, which are in turn enriched by stars that have lived and died before them. Galactic chemical evolution is often traced using the relative abundances of α elements (e.g., O, Mg, Ca) to metals and metals to hydrogen <cit.>. APOGEE provides values for [α/M] and [M/H], which we adopt. The background Galactic composition was governed by the dominance of core-collapse supernovae in the early Milky Way, followed by dominance of type Ia supernovae beginning about 8 Gyr ago <cit.>. Both types of supernovae enriched the interstellar medium with metals, but in different ratios. Consequently, stars display decreasing [α/M] and increasing [M/H] with time (reversed as a function of age). We therefore expect old stars to have low metallicity but high α enhancement, while young stars should have higher metallicity and lower α-element abundances <cit.>. These young and old populations are representative of the classical Galactic “thin” and “thick” disks, respectively.
Figure <ref> shows stellar α-element abundance as a function of rotation-based age in the TESS SCVZ. As expected, young stars are generally α-poor and metal-rich. There is a slight increasing trend of α enhancement with age. We detect very few stars in rotation older than 6 Gyr due to the detection biases discussed in Section <ref>. Finally, we also detect a few young α-rich stars. These are known from other samples <cit.> and are likely to be the products of stellar mergers. In this scenario, two old, α-enhanced stars merge, destroying the stars' rotation histories and yielding a fast-rotating, apparently young, α-enhanced product <cit.>.
§.§.§ Stellar activity
Finally, with rotationally characterized stars we can begin to investigate trends of photometric activity with model-derived parameters like age and Rossby number. We define the Rossby number as the ratio of the rotation period over the convective overturn timescale τ_cz, where the convective timescale is computed from our models as the pressure scale height H_P divided by the convective velocity evaluated at a distance H_P above the base of the convection zone. To quantify the photometric activity, we use the photometric activity index S_ph rather than R_per so that we can compare the trends in the TESS SCVZ with those in the Kepler field observed by <cit.>, and <cit.>. The ages and Rossby numbers for the Mathur et al. sample were computed using the same procedure and models underlying this work, so we can directly compare the Kepler and TESS distributions. We start with the Gold sample and discard stars with periods less than 10 days as before, leaving stars with TESS periods, APOGEE spectroscopic parameters, and well-determined Rossby numbers and ages.
Figure <ref> shows the photometric activity index S_ph versus the Rossby number for our binary-cleaned stars, plotted over the distribution of stars from Kepler. Activity decreases with increasing Rossby number, as expected. The TESS distribution generally agrees with the Kepler distribution. We have a few stars close to the high-activity saturated regime <cit.>, but most of our stars are magnetically unsaturated. The TESS detection limits are clear here, as our lowest-activity star with a period detection has S_ph = 345 ppm, compared to Kepler's lower limit in the tens of ppm. We have a few hot stars at Ro ≳ 2 where Kepler has almost none. These are the likely spurious period detections from before (e.g., Figure <ref>).
We show S_ph as a function of rotation-based age in Figure <ref>. Photometric activity decreases with age, an effect of stellar spin-down. The TESS distribution follows the range and morphology of the Kepler distribution all the way down to the TESS rotation detection limit of S_ph≈ 350 ppm.
§ DETECTABILITY OF ROTATION
Here we consider the detectability of rotation as a function of fundamental stellar parameters. At a fixed rotation period, at lower temperature and higher metallicity, we expect deeper convective envelopes, stronger magnetism, more surface spots, and more easily detectable rotational modulation. Besides changing with static stellar parameters, the strength of a star's magnetism changes as the star ages. Main-sequence stars with outer convective envelopes are born spinning fast and spin down as they age as they lose angular momentum to magnetized stellar winds <cit.>. The decrease in rotation speed results in a weakening magnetic field, fewer or smaller spots, and less flux modulation, making rotation more difficult to detect in older stars than in younger stars at fixed mass and composition.
While we might expect rotation to be harder to detect in lower metallicity stars, an age dependence enters the picture because of the variation of Galactic composition with age. Because the background abundance ratios change with time, any apparent changes in rotation detectability with stellar abundances may actually be caused by the decreasing detectability with age.
To investigate the detectability of rotation with fundamental stellar properties, we consider the fraction of targets for which we detected periods in stellar parameter bins. While the CNN infers a period for each target, we can use the estimated uncertainty to determine whether those periods are reliable. As in <cit.>, we label targets with σ_P/P < 0.35 (corresponding to ∼10% median percent error) as successful detections, and anything else as a nondetection.
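The binned detection fraction shown in the following figures can be computed with a short helper like the one below; the five-target floor follows the description in the next paragraph, while the function name and interface are ours.

import numpy as np

def detection_fraction(x, y, reliable, bins=20, min_per_bin=5):
    """Sketch of the binned detection fraction: share of stars with reliable
    periods (sigma_P / P < 0.35) per bin of two stellar parameters, masking
    bins with fewer than `min_per_bin` targets."""
    total, xe, ye = np.histogram2d(x, y, bins=bins)
    detected, _, _ = np.histogram2d(x[reliable], y[reliable], bins=[xe, ye])
    frac = np.where(total >= min_per_bin, detected / np.maximum(total, 1), np.nan)
    return frac, xe, ye

# e.g. detection_fraction(teff, m_h, sigma_p / p_rot < 0.35)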
Figure <ref> shows the rotation detection fraction versus temperature and metallicity for all our rotationally-searched stars with APOGEE spectroscopy. Only bins with at least five targets are shown so that the diagram is not muddled by small number fluctuations. As expected, cooler stars, especially cooler than 5,000 K, are detected in period significantly more often than hotter stars. In the range 5,000 K < T_eff≲ 6,000 K, where the detections begin to decrease as a function of temperature, there appears to be a weak trend in metallicity, with higher-metallicity ([M/H] ≳ -0.1) stars being detected in period more frequently than lower-metallicity stars. This is consistent with <cit.>, who found that rotation is more easily detected in Kepler stars with higher metallicity at fixed mass. We see the same bias toward higher metallicity among our rotators, which may be due either to the deeper convective envelope resulting from enhanced opacity, or to more rapid rotation (and therefore higher activity) from increased moment of inertia and slower spin-down.
Another view of the detection fraction is shown in Figure <ref>, this time as a function of metallicity and α-element enhancement. At fixed metallicity, we detect fewer stars in rotation at high [α/M] due to the underlying relationship between age and α enhancement. High-α stars tend to be older, spin more slowly, and are less active, so we expect them to be more difficult to detect in rotation. This view also allows us to inspect the period detection fraction across metallicity at fixed α enhancement. At fixed [α/M], there is significant scatter in the detection fraction across metallicity. Some bins (e.g., 0 < [α/M] < 0.05) show gradually increasing detectability with increasing metallicity, while others (e.g., -0.05 < [α/M] < 0) worsen in detection at higher metallicity. Due to the amount of noise in the bins, it is difficult to conclude whether the apparently enhanced detection fraction at higher metallicity is caused by higher activity from deeper convection zones or by the underlying age distribution.
§ SPOT FILLING FRACTION
The links between temperature, metallicity, age, convection, rotation, and photometric variability shed light on the generation of magnetism in cool, main-sequence stars. The strength of rotational modulation in the light curve, and therefore the detectability of rotation, hint at the presence of cool spots created by magnetic fields concentrated near the stellar surface. Because spots are created by the same dynamo that rotation and convection drive, we can use the prevalence of spots in different temperature and rotation ranges to infer dynamo properties in those regimes.
<cit.> found that temperature-sensitive spectral features include contributions from the quiet photosphere and cooler spots. Thus, fitting APOGEE spectra with two temperature components, they inferred the surface spot filling fractions and the temperature contrasts of a sample of stars. They used a modified version of the FERRE code <cit.>, the spectral fitting code used by the ASPCAP pipeline, to infer spot filling fractions for all stars in APOGEE DR17. Following this method, we obtained spot filling fractions and updated effective temperatures for the stars in our sample with APOGEE spectra.
We began with the stars in our Gold sample (described in Section <ref>). We made cuts in Gaia DR3 magnitudes and colors using M_G > 4.4 and G_BP - G_RP > 1 to target below the field main-sequence turnoff and ensure all our stars are securely on the main sequence. This yielded cool, main-sequence stars, but a few (less than 20) showed an excess in M_G, indicating that they were likely leftover binary systems <cit.>. To remove these, we fit a line to the main sequence and computed the magnitude excess as Δ M_G = M_G - ⟨ M_G ⟩, where ⟨ M_G ⟩ was the fit main-sequence magnitude. The distribution of magnitude excesses had two clear peaks, with a trough we visually identified at Δ M_G = -0.4. We removed stars with |Δ M_G| ≥ 0.4, leaving stars. We designate these as our “Platinum” sample, a pure, cool, main-sequence sample robustly free from binary contamination.
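A minimal sketch of this photometric-excess cut is given below; the simple least-squares line fit is an assumption, and the published selection may use a more careful (e.g., iterative or robust) fit to the single-star sequence.

import numpy as np

def magnitude_excess_cut(bp_rp, m_g, threshold=0.4):
    """Sketch of the photometric-excess binary rejection: fit a line to the
    main sequence in (G_BP - G_RP, M_G) and keep stars within `threshold` mag
    of that fit."""
    coeffs = np.polyfit(bp_rp, m_g, deg=1)            # linear main-sequence fit
    delta_mg = m_g - np.polyval(coeffs, bp_rp)        # magnitude excess
    return np.abs(delta_mg) < threshold               # True = retained single stars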
With spot filling fractions, we can now investigate the detectability of rotation as a function of surface spot coverage. We might expect more spotted stars to be easier to detect in rotation, as they should have higher photometric variability. Figure <ref> shows the Platinum sample K-dwarfs with 1.5 < G_BP - G_RP < 2, along with the stars in the same regime but with no rotation detection. The left panel shows the subsamples' distributions of spot filling fractions, while the right panel shows the cumulative frequency distributions. A Kolmogorov–Smirnov test returns a p-value of 0.3, meaning the null hypothesis that the two samples are drawn from the same underlying distribution can be rejected with only 70% (i.e., just over 1σ) confidence. There are too few stars in this regime to confirm any difference in spot filling fraction between the period detection and nondetection samples.
We show the Platinum sample on a Gaia color-magnitude diagram in Figure <ref>, with points colored by spot filling fraction.
While most stars in our sample have spot filling fractions below 10%, the mid-K range (1.5 < G_BP - G_RP < 2) exhibits elevated fractions. Here, filling fractions reach ≈ 0.3–0.4, behavior first observed by <cit.> and attributed to internal differential rotation. There is a clear gradient of increasing filling fraction with increasing M_G. This may represent an increase of spot coverage with increasing metallicity in this temperature regime; the correlation between spot filling fraction and metallicity is still present after strong binary rejection, and the trend disappears outside this temperature range.
<cit.> suggested that core-envelope decoupling gives rise to anomalous rotation behavior in cool stars, evidenced by elevated spot filling fractions in cluster K-dwarfs between 4,000 and 4,500 K. The process of decoupling and recoupling drives a radial shear layer and enhanced surface magnetism. With field star rotation periods up to 27 days, we can investigate the behavior of rotation and spottedness in the TESS SCVZ. Figure <ref> shows the period–temperature distribution of our Platinum sample, again colored by spot filling fraction, with the rotation sequences from benchmark open clusters Pleiades <cit.>, Praesepe <cit.>, NGC 6811 <cit.>, Ruprecht 147 <cit.>, and M67 <cit.>. Here we use the two-component fit effective temperature, rather than the TIC or ASPCAP values, for consistency with the spot filling fractions. As a function of temperature, spot filling fractions increase in the mid-K range—the same behavior <cit.> identified in Praesepe. At fixed temperature, we might expect filling fractions to be higher at shorter periods, where stars rotate faster and are more magnetically active. Instead, spot filling fractions in the mid-K range appear to be elevated across the entire span of recovered periods (∼10–30 d). <cit.> predict shear-enhanced magnetism to persist in this temperature range until ages of a few Gyr (temperature-dependent), so it is likely we do not reach long enough periods for the spot filling fraction to decrease.
The increase in spot filling fraction occurs in the color and temperature range where open clusters NGC 6811 and Rup 147 were shown to exhibit an unexpected epoch of stalled rotational braking <cit.>. NGC 6811 is 1 Gyr old <cit.>, but for temperatures cooler than 5,000 K its rotation sequence rests upon that of the 670-Myr-old Praesepe <cit.>. Somewhere between the ages of these clusters, stellar spin-down departs from the classical picture from gyrochronology. By 2.7 Gyr <cit.>, stars at the hot end have resumed braking, but the cooler stars lag behind, suggesting that the epoch of stalled braking lasts longer for lower-mass stars <cit.>.
<cit.> showed that a two-zone interior model, which allows the core and envelope to rotate at different rates, can nearly reproduce the stalled spin-down behavior exhibited by these clusters. In these models the core and envelope decouple, and the envelope continues to spin down from magnetic braking while the core maintains its rotation speed. During recoupling, angular momentum is transferred from the core to the envelope, and the apparent spin-down is temporarily slowed or halted. After recoupling, the star again behaves as a solid body and undergoes classical <cit.> braking. While <cit.> argued in favor of the two-zone model, they could not rule out a temporary reduction in the braking torque, either from reduced wind or weakening of the magnetic field, as a possible cause. We suggest that the coincidence of elevated spot filling fractions in field stars with the stalled braking seen in open clusters supports the shear-driven dynamo hypothesis argued by <cit.>.
§ SUMMARY & CONCLUSION
We used deep learning to infer reliable periods for main sequence stars near the southern ecliptic pole from year-long TESS full-frame image light curves. Our periods represent the first large-scale recovery and measurement of rotation in TESS stars rotating more slowly than 13.7 days, the limit previously imposed by TESS's complicated systematics. We fit stellar evolutionary models to the stars using rotation and high-resolution spectroscopic parameters to determine stellar ages, masses, convection timescales, Rossby numbers, and more. We investigated the detectability of rotation as a function of fundamental stellar parameters as well as new spot filling fractions inferred from spectroscopy. Our key results and conclusions are as follows:
* We find evidence for the intermediate rotation period gap, first discovered in the Kepler field and seen in K2 field star samples across the ecliptic plane, the first such detection from TESS stars. The period gap in TESS closely aligns with the gaps from previous missions, cementing the conclusion that the gap is a product of stellar structure and evolution and not star formation history.
* The rotation period gap coincides with a dip in photometric variability, consistent with the findings of <cit.> in other field star populations.
* The distribution of rotation periods in TESS closely resembles the distributions seen by Kepler and K2. Its lower edge features a slope of increasing period with decreasing temperature, similar to the distributions from previous missions, and we detect slowly rotating M-dwarfs with a similar location and distribution as in Kepler.
* We detect a higher fraction of stars in rotation at cooler effective temperatures, where stars rotate faster at fixed age and have deeper convective envelopes resulting in higher activity amplitudes. We also preferentially detect rotation in stars at higher metallicities at fixed temperature. This may owe to deepening convective envelopes with increasing metallicity, or to increased moment of inertia with increasing metallicity resulting in slower spin down and faster rotation (and therefore higher activity) at fixed age.
* In Gaia color regimes with a range of spot filling fractions, stars detected in rotation showed no significant difference in spot filling fraction compared to stars with no period detection.
* Field stars with elevated spot filling fractions coincide with open cluster stars that exhibit a temporary stall in magnetic braking. These coincide at least partly with the period gap and its variability depression, suggesting a common cause.
While TESS systematics have presented unique challenges that remain difficult to solve with conventional period-finding techniques, deep learning presents a way to circumvent instrument systematics without having to solve systematics removal for every individual case. Since first observing the southern hemisphere in 2018, TESS has also observed the North, revisited both hemispheres, and continues to observe the entire sky in its search for transiting exoplanets. As it does, it continues to build a vast trove of stellar light curves to search for rotation in stars across the entire sky.
Our simulation-driven CNN approach enables the inference of more than just rotation. The existing training sets include activity level, latitudinal differential rotation, spot lifetimes, and activity cycles. These quantities can be probed with minimal modification to our CNN framework and would provide new avenues of investigation of stellar rotational, activity, and magnetic evolution.
Understanding the complicated rotational evolution of low-mass stars and the related anomalies in activity and spot coverage will require more rotation periods for more diverse populations of stars. As we grow the number of rotation periods obtained with TESS, precise and homogeneously derived temperatures and metallicities will be imperative to pinpoint the regimes where stellar rotation and activity processes change. The Milky Way Mapper (MWM) of the Sloan Digital Sky Survey V <cit.> is obtaining APOGEE spectroscopy for 6 million stars across the whole sky, including 300,000 observed with TESS two-minute cadence in the SCVZ. MWM will provide homogeneous temperatures, metallicities, and detailed chemical abundances for all these stars, offering unprecedented precision on the fundamental parameters of a large rotation sample.
Upcoming space missions will provide crucial avenues to rotation periods as well. The methods in this work will be applicable to photometry obtained by the Nancy Grace Roman Space Telescope <cit.>. Roman will perform a Galactic Bulge Time Domain Survey <cit.> with cadence similar to TESS, with the addition of lower cadence photometry in at least one secondary band. Not only will rotation be made accessible in a relatively unprobed population of stars toward the Galactic bulge, but the multi-band coverage will provide access to time-domain temperature resolution, enabling the study of stellar spot and facula distributions for hundreds of thousands of stars. Furthermore, the potential to observe two globular clusters near the Galactic center with Roman <cit.> would provide the first large gyrochronology anchors at both old ages and sub-Solar metallicities.
We gratefully acknowledge Gagandeep Anand, Ashley Chontos, Monique Chyba, Ryan Dungee, Rafael Garcia, Daniel Huber, Corin Marasco, Savita Mathur, Peter Sadowski, Ângela Santos, Benjamin Shappee, Xudong Sun, and Jamie Tayar for helpful conversations that improved the quality of this manuscript.
The technical support and advanced computing resources from the University of Hawai‘i Information Technology Services – Cyberinfrastructure are gratefully acknowledged.
J.v.S. and Z.R.C. acknowledge support from the National Aeronautics and Space Administration (80NSSC21K0246, 80NSSC18K18584)
This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA's Science Mission Directorate.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Software: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, Astroquery, <cit.>
§ PUBLIC TESS PHOTOMETRY AND TOOLS
There are several publicly available light curve sets, pipelines, and tools designed and optimized for TESS data. We list some of the most widely used in Table <ref>. Tools like <cit.> and <cit.> are general tools to download, process, and analyze TESS data. is a flexible tool that allows for several different systematics correction routines to be used on the same light curves. However, it requires large downloads, making it somewhat inconvenient for working with large data. <cit.> is a light curve processing pipeline optimized for systematics removal while preserving multi-sector astrophysical signals. It may be ideal for the problem of rotation, but it requires downloading large FFI cutouts, or the entire set of FFIs, for it to work optimally. does no automatic processing and provides simple tools for downloading and interacting with image and light curve data. We use for all our photometry and light curve processing.
Among the many public light curve datasets, the TESS Quick-Look Pipeline <cit.> and DIAmante <cit.> are designed for planet searches, so their light curve processing is aggressive and can remove the stellar signals we are interested in. The difference imaging analysis (DIA) light curves of <cit.> are for general use, but only sectors 1–5 of the first year are available. The GSFC-ELEANOR-LITE light curves <cit.> are a brand new data set using to create general-use light curves for all TESS stars brighter than 16th magnitude in the TESS band pass. They will be worth considering for large scale investigations in TESS, but currently only four sectors are publicly available. The TESS Science Processing Operations Center <cit.> has FFI light curves for nearly 40,000 bright SCVZ targets, with background subtraction and systematics correction, as well as underlying pixel data and apertures, available. They are suitable for general use and are easily downloaded from MAST. Finally, the TESS Asteroseismic Science Operations Center <cit.> is producing data products for all targets brighter than 15th TESS magnitude. They provide two different light curve products optimized for signals at different timescales with varying levels of systematics correction.
Table: TESS Full Frame Image Light Curves, Pipelines, and Tools
Name | Reference(s) | Science Use
* | <cit.> | general
 | <cit.> | general
 | <cit.> | general
DIA | <cit.> | general
QLP | <cit.> | exoplanet detection
TESS-SPOC* | <cit.> | general
DIAmante | <cit.> | exoplanet detection
T'DA/TASOC* | <cit.> | asteroseismology
GSFC-ELEANOR-LITE | <cit.> | general
Software tools are listed first, followed by public light curve data sets. Tools and data sets used in this work are marked with an asterisk. All light curve data sets are documented and publicly available as MAST High Level Science Products at <https://archive.stsci.edu/hlsp>, except for DIA <cit.>, which is available at <https://filtergraph.com/tess_ffi>.
§ OPTIMIZING THE NEURAL NETWORK ARCHITECTURE
In Section <ref> we lay out the various convolutional neural network (CNN) architectures that we trained and assessed to optimize our network's performance. Here we discuss the details of that optimization and the justification for our choices of architecture.
For both the TESS-SPOC and TASOC data products, we trained four different CNNs, each with 3 convolution layers, but each CNN had different numbers of convolution kernels to give the networks different flexibility in learning features. The architectures were (A) 8, 16, and 32 kernels; (B) 6, 32, and 64 kernels; (C) 32, 64, and 128; and (D) 64, 128, and 256. We also used four different training sets for both TESS-SPOC and TASOC, each with a different upper limit on rotation period. The period upper limits were 30, 60, 90, and 180 days, intended to optimize different networks for different period ranges. We trained all four architectures on each period range, compared performance metrics, and chose the architecture that had the best performance on average across all four training sets. For performance metrics, we considered (1) the average test loss, (2) the median relative uncertainty, (3) the percentage of test targets recovered to within 10% and 20% accuracy, and (4) the 1st and 99th percentiles of the filtered period estimates. To illustrate the meaning of these values, we will use the 180-day TESS-SPOC training set as an example.
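For concreteness, a sketch of one such architecture (run C, with 32, 64, and 128 kernels) is given below in PyTorch. Only the number of convolution layers and kernel counts is specified here; the input representation, kernel sizes, pooling, dense head, and the Gaussian negative-log-likelihood loss used to obtain an uncertainty estimate are illustrative assumptions, not the published implementation.

```python
import torch
import torch.nn as nn

class RotationCNN(nn.Module):
    """Three convolution blocks with 32, 64, 128 kernels (architecture C);
    hyper-parameters other than the kernel counts are assumptions."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:
            blocks += [nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool1d(4)]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(in_ch, 2))   # (mean, log variance)

    def forward(self, x):            # x: (batch, 1, n_cadences)
        mu, log_var = self.head(self.features(x)).unbind(dim=-1)
        return mu, log_var

def gaussian_nll(mu, log_var, target):
    """Heteroscedastic loss that lets the network predict sigma_P alongside P."""
    return torch.mean(0.5 * (log_var + (target - mu) ** 2 / torch.exp(log_var)))
```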
During training, each training set is partitioned into a training, validation, and test set. The training set is used to fit the network parameters, the validation set is used to determine when to stop training to avoid overfitting, and the test set is used to assess performance. We monitored the average loss for all three partitions during training so that we can construct learning curves, which show the loss values versus training epoch. Figure <ref> shows the learning curves for all four architectures on the 180-day training set. The solid lines represent the training loss, while the dashed lines represent the test loss. Left unchecked, training loss will continue to decrease, but the loss on a held-out validation set will plateau or begin to increase once the network begins overfitting, which we use as our stopping criterion. The test loss is highest for run A, the simplest architecture we used. This indicates that run A is not complex enough to fully learn the features in the data, or at least that it begins overfitting before it can fully learn the features. Run B performs better, but is comparable to runs C and D, which fully train in fewer epochs. We can rule out run A for this case, but more metrics are needed to properly assess which run performs best.
One of the strengths of our method is the ability to estimate an uncertainty, which we can use as a metric of predicted reliability <cit.>. Specifically, we use the fractional uncertainty σ_P/P to normalize for period dependence. A better-trained network should have lower values of σ_P/P, indicating more reliable estimates. We use the median σ_P/P as an additional metric of performance in addition to using it to filter out bad estimates. Figure <ref> shows the filtered period estimates for each run, but note that the median fractional uncertainty listed in each panel is computed over the unfiltered periods. Run B has the lowest estimated uncertainty, so by this metric it performs the best and has the most reliable estimates.
We also use accuracy metrics to assess performance. The “acc10" and “acc20" metrics quantify what fraction of test targets are recovered to within 10% and 20% accuracy after filtering by uncertainty. The “acc10" metrics for each run are near 50%, which also means that the median relative error on the period estimates is near 10% for all runs. Run B has the highest accuracy metrics, so it once again performs best.
Estimating uncertainty biases estimates toward the median of the distribution, making period inference near the edges of the training set period range more difficult <cit.>. We attempt to mitigate this by tabulating the 1st and 99th percentiles of each (unfiltered and filtered) inferred period range. Figure <ref> shows the distribution of periods for both the unfiltered (left) and filtered (right) estimates. Though it is difficult to assess by eye, run A has the lowest 1st percentile (12.1 d) in the filtered sample, although all runs have first percentiles in the 12–13 day range. This also gives us a lower limit for where we can expect successful period estimates from this training set: networks trained on the 180-day set struggle to infer periods less than 12 days, motivating the need for training sets with smaller period ranges.
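The performance metrics described above can be tabulated as in the following sketch; the uncertainty threshold used for filtering is an illustrative value, not the one adopted in the analysis.

```python
import numpy as np

def performance_metrics(p_true, p_pred, sigma_pred, max_frac_unc=0.35):
    """Median sigma_P/P (unfiltered), acc10/acc20 and period percentiles (filtered)."""
    frac_unc = sigma_pred / p_pred
    keep = frac_unc < max_frac_unc                  # uncertainty filtering
    rel_err = np.abs(p_pred[keep] - p_true[keep]) / p_true[keep]
    return {"median_frac_unc": np.median(frac_unc),
            "acc10": np.mean(rel_err < 0.10),
            "acc20": np.mean(rel_err < 0.20),
            "p01_filtered": np.percentile(p_pred[keep], 1),
            "p99_filtered": np.percentile(p_pred[keep], 99)}
```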
We prioritized metrics as follows: we considered the average test loss to rule out runs that failed to compete in loss value (e.g., runs B, C, and D achieved comparable loss values, but run A fell short). We then prioritized the accuracy metrics and uncertainty together, then if those were comparable we used the 1st and 99th percentile values to break ties.
When considering all our metrics for the 180-day TESS-SPOC training set, run B performs the best overall. We then repeated this process for each training set and chose the architecture that performed best over all training sets. Following this procedure, we chose architecture C for the TESS-SPOC data and architecture A for TASOC. We note that it may be preferable to use the best-performing architecture for each individual training set, rather than adopt one architecture for all sets. We will consider this before publication and release of the final period catalog.
|
http://arxiv.org/abs/2307.03944v1 | 20230708095104 | Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States | [
"Jie Qian",
"Jie Li",
"Shi-Yao Zhu",
"J. Q. You",
"Yi-Pu Wang"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"physics.optics"
] |
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Hefei National Laboratory, Hefei 230088, China
[email protected]
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
[email protected]
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of edge states, such as the photonic density of states and scattering parameters, by non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and the magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions.
Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States
Yi-Pu Wang
August 12, 2023
========================================================================================
Introduction.—Topology has evolved as a powerful governing principle for predicting and harnessing the robust propagation of currents in various systems, including condensed matter system <cit.>, acoustics <cit.>, mechanics <cit.> and photonics <cit.>. In topological photonics, a topological invariant ensures robust localization or propagation of electromagnetic waves <cit.>. On the other hand, non-Hermitian photonics <cit.> has also flourished in recent years, not only due to the ubiquitous non-Hermiticity in nature <cit.>, but also because the non-Hermiticity provides additional degrees of freedom to manipulate the wave behaviors. In pursuit of the simultaneous robustness and greater control flexibility, as well as the interest in fundamental research, non-Hermitian topological physics <cit.> has received considerable attention and substantial development. Scientists investigate new paradigms <cit.> and explore potential applications in this interdisciplinary territory <cit.>.
A coupled system can have two forms of non-Hermiticity. One kind is generated when there is asymmetric interaction between the sites, which leads to the non-Hermitian skin effect <cit.>. The other type, which is caused by on-site loss, can lead to intriguing phenomena associated with parity-time (PT) symmetry. PT-symmetric systems have received special attention because they were proved to possess real spectra <cit.>. A sequence of works has studied the topologically protected bound (defect) states in PT-symmetric topological systems <cit.>, where the defect states are real in the PT-symmetry unbroken phase. Moreover, a number of studies have investigated whether topological edge states exist in PT-symmetric systems <cit.>, concluding that since the edge state is not an eigenstate of the PT operator, an imaginary eigenvalue is obtained along with the spontaneous PT-symmetry breaking. In this case, a non-Hermitian edge state is obtained. We find that these imaginary edge states in the PT-symmetric system are actually topologically protected by the particle-hole symmetry <cit.>. In the one-dimensional (1D) non-Hermitian PT-symmetric Su-Schrieffer-Heeger (SSH) model <cit.>, the chiral symmetry of the system is broken, so it loses its topological ℤ invariant, but the particle-hole symmetry is preserved and the system retains a topological ℤ_2 invariant. In the presence of perturbations that do not violate the particle-hole symmetry, the real parts of the eigenvalues of the edge modes remain zero, reflecting their topological protection. In this situation, the topological photonic mode, while retaining its robustness, can be further manipulated by non-Hermiticity, which is highly desirable for investigating light-matter interactions <cit.>.
To investigate the interaction between topological photonic modes and matters <cit.>, we employ the photon-magnon coupling system <cit.>, which has benefits including the flexible tunability and experimental demonstration at room temperature. In this Letter, we use a set of lossy microwave resonators to build 1D non-Hermitian SSH photonic lattices. By coupling a ferromagnetic spin ensemble (FSE) to Hermitian and non-Hermitian SSH chains and monitoring the strength of the coupling between the photonic modes and the magnon mode in the FSE, we verify the topological edge states and bulk states. Non-Hermiticity introduced by the on-site alternating losses breaks the passive PT-symmetry of zero-energy modes and results in two complex-valued edge states, which localize exponentially at the opposite ends of the chain [Fig. <ref>(b)]. Further, the photonic density of state (PDOS) at boundaries is larger than that in the Hermitian case [Fig. <ref>(a)], which strengthens the coupling between the topological photonic mode and the magnon mode. Our experiment demonstrates the potential of manipulating the interaction between topological photonic states and matter by exploiting non-Hermiticity.
System and model.—The SSH chain consists of six unit cells [Figs. <ref>(a) and <ref>(b)], in which each unit contains two split-ring-resonators (SRRs) fabricated on the F4B substrate [Fig. <ref>(a)]. In the experiment, the SRR exhibits a resonance at ω_0/2π=5.62 GHz with an intrinsic loss of γ_0/2π=24.42 MHz, and the topological property is unaltered by the uniform losses along the chain <cit.>. Therefore, SRRs with the same loss can be used to build the Hermitian SSH model. Two neighboring SRRs are separated by staggered spacings to realize the intracell and intercell coupling rates, v and w. Edge states appear in the finite chain when the bulk winding number of the Hermitian Hamiltonian is 𝒲_h=1 <cit.>. The effective Hermitian SSH chain is designed in the topological non-trivial phase (v/2π=216.5 MHz, w/2π=341 MHz) and the Hamiltonian is written as <cit.>:
ℋ_h/ħ=∑_s=1^2N(ω_0-iγ_0)â_s^†â_s+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where â_s^† (â_s) is the photon creation (annihilation) operator of the s-th SRR. The uniform losses of the units merely give all eigenvalues of the chain the same imaginary component -iγ_0. The eigenvalues of the coupled SRRs are plotted in the complex plane, as shown in Fig. <ref>(c). A pair of zero-energy modes (Re(ω_m=6,7)-ω_0=0, green dots) appear in the band gap (gray area), which are the edge modes. The measured transmission spectrum of the chain is shown in Fig. <ref>(d), where the peaks correspond to the resonances of the eigenmodes. By simulating the field distribution at the edge mode frequency of ω_0/2π=5.62 GHz, we find that the electromagnetic field tends to localize at both edges of the chain, as predicted by the wave-function distribution <cit.>. In the low-frequency region, the measured spectrum [Fig. <ref>(d), solid line] displays an amplitude deviation from that in the high-frequency region. This is due to the residual dissipative coupling between SRRs <cit.>.
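The appearance of the two zero-energy edge modes can be checked with a simple tight-binding diagonalization of the finite chain. The following sketch uses the quoted parameters (frequencies given as values of ω/2π in MHz) and neglects the residual dissipative couplings, so it is not the full electromagnetic model of the SRRs.

```python
import numpy as np

def ssh_chain(N=6, v=216.5, w=341.0, omega0=5620.0, gamma=24.42):
    """Finite SSH chain of 2N resonators with alternating hoppings v (intracell)
    and w (intercell) and a uniform on-site loss gamma (all in MHz)."""
    n = 2 * N
    H = (omega0 - 1j * gamma) * np.eye(n, dtype=complex)
    for s in range(n - 1):
        t = v if s % 2 == 0 else w          # v within a cell, w between cells
        H[s, s + 1] = H[s + 1, s] = t
    return np.linalg.eigvals(H)

eigs = ssh_chain()
# Two eigenvalues sit near Re(omega) = omega0 (the zero-energy edge modes),
# while all eigenvalues share the same imaginary part -i*gamma.
print(np.round(np.sort(eigs.real) - 5620.0, 1))
```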
Then, on-site non-Hermiticity is added to the SSH chain. As depicted in Fig. <ref>(a), resistors R_A=0.1 Ω and R_B=2.7 Ω are integrated into odd and even sites of the chain, respectively, which induce alternated losses of γ_A/2π=36 MHz and γ_B/2π=73 MHz. The Hamiltonian becomes <cit.>:
ℋ_nh/ħ= ∑_s∈ X(ω_0-iγ_A)â_s^†â_s+∑_s∈ Y(ω_0-iγ_B)â_s^†â_s
+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where X={1, 3, 5, ..., 2N-1}, Y={2, 4, 6, ..., 2N}, and N=6. The integrated resistors shift ω_0/2π to 5.48 GHz, and the hopping rates shift to v/2π=208.5 MHz and w/2π=335.5 MHz. The alternated losses make the system a passive PT-symmetric one. Spontaneous PT-symmetry breaking occurs for the zero-energy modes, splitting their imaginary parts, as shown in Fig. <ref>(e). The mode with the lower loss, Im(ω_m=6)/2π=40.42 MHz (Edge_1, blue dot), localizes at the left boundary of the chain, and the one with the higher loss, Im(ω_m=7)/2π=68.58 MHz (Edge_2, red dot), localizes at the right, as schematically shown in Fig. <ref>(b). The bulk Hamiltonian still preserves the PT-symmetry when δγ/2<|w-v|, where δγ=γ_B-γ_A. In this regime, the topological property is still determined by the generalized integer winding number 𝒲_nh <cit.>. 𝒲_nh=1 guarantees the existence of two non-Hermitian topological edge modes.
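Introducing the alternating losses into the same tight-binding sketch reproduces the splitting of the imaginary parts of the two mid-gap modes; again, the parameters are the quoted values and the residual dissipative couplings are neglected.

```python
import numpy as np

def pt_ssh_chain(N=6, v=208.5, w=335.5, omega0=5480.0, gA=36.0, gB=73.0):
    """SSH chain with alternating on-site losses gA (odd sites) and gB (even sites)."""
    n = 2 * N
    losses = np.where(np.arange(n) % 2 == 0, gA, gB)   # sites 1, 3, 5, ... carry gA
    H = np.diag(omega0 - 1j * losses)
    for s in range(n - 1):
        t = v if s % 2 == 0 else w
        H[s, s + 1] = H[s + 1, s] = t
    return np.linalg.eigvals(H)

eigs = pt_ssh_chain()
edge = eigs[np.argsort(np.abs(eigs.real - 5480.0))[:2]]   # two mid-gap modes
# Their imaginary parts split toward roughly -40 and -69 MHz (broken-PT edge pair),
# while the bulk modes stay close to the average loss -(gA + gB)/2.
print(np.round(edge.imag, 1))
```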
Experiment results.—To investigate the edge modes engineered by the non-Hermiticity, we measure the PDOS and linewidths of the edge and bulk modes in both Hermitian and non-Hermitian cases. Notably, conventional detection of the PDOS relies on the near-field radiation <cit.>, but in the non-Hermitian situation, the local gain and loss will diminish its reliability. Using the spin ensemble as a probe, we can directly detect the PDOS. In addition, it allows us to study the strong coherent interaction between the topological photonic modes and magnons.
In the experiment, the spin ensemble employed to couple with the chain is a 1-mm diameter yttrium iron garnet (YIG) sphere. The magnon mode in the sphere interacts with the local photonic modes, with a coupling strength g proportional to ηχ√(nSħω_r/2V) <cit.>, where η≤1 describes the spatial overlap and polarization matching between the photonic mode and the magnon mode, χ is the gyromagnetic ratio, n is the total number of spins, S=5/2 is the spin number of the ground state Fe^3+ ion in YIG, ω_r is the resonance frequency, and V is the photonic mode volume. Consequently, the square of the coupling strength g^2 directly reflects the PDOS at the coupling location. Firstly, we move the YIG sphere to each site (labeled as s, s=1,2,3,...,12) of the Hermitian chain, and obtain the PDOS distribution of the m-th eigenmode by analyzing the transmission spectra. The bias magnetic field is perpendicular to the device plane, and mappings of transmission spectra are measured versus electromagnet current and probe frequency. Figures <ref>(b) and <ref>(e), for instance, show the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The coupling strength between m-th eigenmode of the chain and the magnon mode at the s-th site is defined as g_m,s, which can be obtained by fitting the level repulsion with:
ω_m,s^±=1/2[ω̃_n+ω̃_m±√((ω̃_n-ω̃_m)^2+4g_m,s^2)],
where ω̃_n=ω_n-iγ_n and ω̃_m=ω_m-i(γ_m+κ_m) are the complex eigenvalues of the uncoupled magnon mode and the m-th eigenmode of the chain, respectively. γ_n is the total loss rate of the magnon mode, γ_m is the intrinsic loss rate of the m-th eigenmode, and κ_m is the extrinsic loss rate of the m-th eigenmode to the input/output ports <cit.>. Coupling strengths between the magnon mode and edge modes (m=6,7) at site-1 and site-12 are obtained by fitting the level repulsion depicted in Figs. <ref>(b) and <ref>(e), which are g_edge,1/2π=g_edge,12/2π=80 MHz. Similarly, coupling strengths between the magnon mode and bulk mode (m=8) at site-1 and site-12 are obtained as g_bulk,1/2π=g_bulk,12/2π=37 MHz. g_m,s^2 as a function of the site index s are illustrated in Figs. <ref>(c) and <ref>(d), denoted by blue (m=8) and red dots (m=6,7), respectively. The observed g_m,s^2 are in good agreement with the intensity distributions for the wave function |φ_m,s|^2 (gray bar diagram).
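In practice g_m,s can be extracted by fitting the measured peak positions with the two branches of the level-repulsion formula. A minimal sketch of such a fit is given below; the grid-search approach and the variable names are illustrative and may differ from the actual experimental analysis.

```python
import numpy as np

def anticrossing(omega_magnon, omega_mode, g):
    """Upper/lower hybridized branches of the level-repulsion formula.
    Both frequencies may be complex (including -1j*loss terms)."""
    s = np.sqrt((omega_magnon - omega_mode) ** 2 + 4.0 * g ** 2 + 0j)
    return 0.5 * (omega_magnon + omega_mode + s), 0.5 * (omega_magnon + omega_mode - s)

def fit_g(omega_magnon_scan, peaks_upper, peaks_lower, omega_mode, g_grid):
    """Grid search for the coupling strength minimizing the residual between the
    measured peak positions and the real parts of the two branches."""
    def residual(g):
        up, lo = anticrossing(omega_magnon_scan, omega_mode, g)
        return np.nansum((peaks_upper - up.real) ** 2 + (peaks_lower - lo.real) ** 2)
    return min(g_grid, key=residual)
```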
Then, we couple the spin ensemble to the non-Hermitian SSH chain, as shown in Fig. <ref>(a). Figures <ref>(b) and <ref>(e) display the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The mappings show a similar amount of level repulsion but reflect very different linewidths of the edge modes. Using Eq. (<ref>), the loss of the edge mode at site-1 is fitted to be γ_edge,1/2π=41.1 MHz, which is contributed by the addition of the two edge modes (m=6,7). The relation is γ_edge,s=[Im(ω_m=6)·|φ_6,s|^2+Im(ω_m=7)·|φ_7,s|^2]/(|φ_6,s|^2+|φ_7,s|^2), and the wave functions of the edge modes |φ_m,s|^2 are displayed as the bar diagram in Fig. <ref>(d). Similarly, we get γ_edge,12/2π=67.9 MHz. More interestingly, the coupling strengths between the magnon mode and edge modes at site-1 and site-12 are observed to be g_edge,1/2π=g_edge,12/2π=112 MHz, which is larger than that in the Hermitian case (80 MHz). We plot g_m,s^2 versus site index s for m=8 and m=6, 7 in Figs. <ref>(c) and <ref>(d), respectively. It can be found that the bulk mode remains extended, similar to the Hermitian bulk mode. However, as shown in Fig. <ref>(d), the low-loss edge state (Edge_1) accumulates at the left boundary, while the high-loss edge state (Edge_2) accumulates at the right edge. The introduction of on-site loss does contribute to the increase of PDOS at the boundaries. The mechanism can be interpreted as follows: when the PT-symmetry of the edge states is broken, the energy flow between adjacent resonators is partly blocked <cit.>. The low-loss (high-loss) edge state becomes more localized at the low-loss (high-loss) sites; as shown in Figs. <ref>(b) and <ref>(a), this corresponds to the left (right) boundary of the chain.
It is also intriguing to probe the properties of the non-Hermitian topological edge states through spectroscopic measurements. In the PT-symmetry unbroken phase, the two topological edge states cannot be distinguished via spectroscopic measurement, as shown in Fig. <ref>(a). The absorptivity spectrum A_1, measured when loading the microwave signal into port 1, is totally coincident with A_2, measured when loading port 2. In the symmetry-broken phase, the two topological edge states can be distinguished in the spectra, as shown in Fig. <ref>(b). The spectrum A_1 exhibits the low-loss state with a relatively narrow bandwidth, while the spectrum A_2 reveals the high-loss state.
Finally, we discuss some additional characteristics of the exceptional points (EPs) in the non-Hermitian chain. The dimensionless eigenvalues are defined as β_real+iβ_imag, where β_real=[Re(ω)-ω_0]/(v+w), β_imag=[|Im(ω)|-γ̅]/(v+w), and γ̅=(γ_A+γ_B)/2. In a finite SSH chain, when increasing the non-Hermitian parameter δγ/2(v+w), a series of exceptional points is gradually reached [Figs. <ref>(c) and <ref>(d)]. It can be found that the EP of the edge modes is distinctly away from the EPs of the bulk modes. The edge modes experience spontaneous PT-symmetry breaking (SPTB) at EP_1, where δγ/2(v+w) is only about 0.02. With the increase of chain length, the non-Hermiticity needed for SPTB of the edge modes decreases exponentially. In the case of N≫1, any finite δγ will lead to SPTB of the edge modes <cit.>. However, SPTB of the bulk modes requires δγ/2>|w-v|, i.e., δγ/2(v+w)>|w-v|/(v+w), which is much larger than 0.02. Additional analysis is provided in the supplementary materials.
Conclusion.—We have implemented the PT-symmetric non-Hermitian topological SSH model with microwave resonators and achieved the control of topological edge states using the on-site non-Hermiticity. Through spontaneous PT-symmetry breaking, we obtain the non-Hermitian edge modes, where the photonic mode densities are enhanced at both ends of the chain. We realize the strong coupling between the edge modes and the magnon mode in both Hermitian and non-Hermitian cases. We experimentally verify that the coupling strength between the non-Hermitian edge states and the spin ensemble is stronger than that in the Hermitian situation. Our research illustrates non-Hermiticity engineered topological edge states and paves a way for studying strong coherent interaction between topological photonic modes and matter.
This work is supported by the National Key Research and Development Program of China (No. 2022YFA1405200), National Natural Science Foundation of China (No. 92265202, No. 11934010, No. U1801661, and No. 12174329), and the Fundamental Research Funds for the Central Universities (No. 2021 FZZX001-02).
99
Burkov-16
A. A. Burkov, Topological semimetals, Nature Materials 15, 1145 (2016).
Hasan-10
M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
Zhaoju-15
Z. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015).
Ma-19
G. Ma, M. Xiao and C. T. Chan, Topological phases in acoustic and mechanical systems, Nat. Rev. Phys. 1, 281 (2019).
Yihao-22
H. Xue, Y. Yang, B. Zhang, Topological acoustics, Nature Reviews Materials 7, 974 (2022).
Huber-16
S. D. Huber, Topological mechanics, Nat. Phys. 12, 621 (2016).
Haldane-08
F. D. M. Haldane and S. Raghu, Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry, Phys. Rev. Lett. 100, 013904 (2008).
Wang-09
Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of unidirectional backscattering-immune topological electromagnetic states, Nature 461, 772 (2009).
Lu-14
L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014).
Ozawa-19
T. Ozawa et al., Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
Blanco-Redondo-18
A. Blanco-Redondo, B. Bell, D. Oren, B. J. Eggleton and M. Segev, Topological protection of biphoton states, Science 362, 568 (2018).
Yang-18
B. Yang et al., Ideal Weyl points and helicoid surface states in artificial photonic crystal structures, Science 359, 1013 (2018).
Klembt-18
S. Klembt et al., Exciton-polariton topological insulator, Nature, 562, 552 (2018).
Feng-17
L. Feng, R. EI-Ganainy, and L. Ge, Non-Hermitian photonics based on parity–time symmetry, Nat. Photon. 11, 752 (2017).
EI-Ganainy-18
R. EI-Ganainy et al., Non-Hermitian physics and PT symmetry, Nat. Phys. 14, 11 (2018).
Longhi-18
Stefano Longhi, Parity-time symmetry meets photonics: A new
twist in non-hermitian optics, Europhysics Letters 120, 64001 (2018).
Bender-07
C. M. Bender, Making sense of non-hermitian hamiltonians, Reports on Progress in Physics 70, 947 (2007).
Ashida-20
Y. Ashida, Z. P. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2020).
Coulais-21
C. Coulais, R. Fleury, and J. Van Wezel, Topology and broken Hermiticity, Nat. Phys. 17, 9 (2021).
Bergholtz-21
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021).
Yao-18
S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018).
Yokomizo-19
K. Yokomizo and S. Murakami, Non-Bloch band
theory of non-Hermitian systems, Phys. Rev. Lett. 123, 066404 (2019).
CHL-20
C. H. Lee, L. Li, R. Thomale, and J. Gong,
Unraveling non-Hermitian pumping: Emergent spectral singularities and anomalous responses, Phys. Rev. B 102, 085151
(2020).
Helbig-20
T. Helbig et al., Generalized bulk–boundary correspondence in non-Hermitian topolectrical circuits. Nat. Phys. 16, 747 (2020).
Xue-20
L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian bulk–boundary correspondence in quantum dynamics, Nat. Phys. 16, 761 (2020).
Zhao-19
H. Zhao et al., Non-Hermitian topological light steering, Science 365, 1163 (2019).
St-Jean-17
P. St-Jean et al., Lasing in topological edge states of a one-dimensional lattice, Nat. Photon. 11, 651 (2017).
Parto-18
M. Parto et al., Edge-Mode Lasing in 1D Topological Active Arrays, Phys. Rev. Lett. 120, 113901 (2018).
Hu-21
B. Hu et al., Non-Hermitian topological whispering gallery, Nature 597, 655 (2021).
Alvarez-18
V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres,
Non-Hermitian robust edge states in one dimension: Anomalous localization and eigenspace condensation at exceptional points, Phys. Rev. B 97, 121401(R) (2018).
Okuma-20
N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. Rev. Lett. 124, 086801 (2020).
Bender-98
C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998).
Schomerus-13
H. Schomerus, Topologically protected midgap states in complex photonic lattices, Opt. Lett. 38, 1912 (2013)
Malzard-15
S. Malzard, C. Poli, and H. Schomerus, Topologically Protected Defect States in Open Photonic Systems with Non-Hermitian Charge-Conjugation and Parity-Time Symmetry, Phys. Rev. Lett. 115, 200402 (2015).
Weimann-17
S. Weimann et al., Topologically protected bound states in photonic parity-time-symmetric crystals, Nat. Mater. 16, 433-438 (2017).
Stegmaier-21
A. Stegmaier et al., Topological Defect Engineering and PT Symmetry in Non-Hermitian Electrical Circuits, Phys. Rev. Lett. 126, 215302 (2021).
Esaki-11
K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Edge states and topological phases in non-Hermitian systems, Phys. Rev. B 84, 205128 (2011).
Hu-11
Y. C. Hu and T. L. Hughes, Absence of topological insulator phases in non-Hermitian PT-symmetric Hamiltonians, Phys. Rev. B 84, 153101 (2011).
Xue-17
L. Xiao, X. Zhan, Z. H. Bian, K. K. Wang, X. Zhang, X. P. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders, P. Xue, Observation of topological edge states in parity–time-symmetric quantum walks, Nature Physics 13, 1117 (2017).
Cheng-22
D. Cheng et al., Truncation-dependent PT phase transition for the edge states of a two-dimensional non-Hermitian system, Phys. Rev. B 105, L201105 (2022).
SM
See Supplementary Materials at ... for device details, Hamiltonian and topological invariant analysis, additional transmission mappings, and the experimental measurement details, which includes Refs. <cit.>.
Su-79
W. P. Su, J. R. Schrieffer and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979).
Gutzler-21
R. Gutzler, M. Garg, C. R. Ast, K. Kuhnke, and Kern, K. Light–matter interaction at atomic scales, Nat. Rev. Phys. 3, 441 (2021).
Ruggenthaler-18
M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, From a quantum-electrodynamical light–matter description to novel spectroscopies, Nat. Rev. Chem. 2, 0118 (2018).
Kockum-19
A. F. Kockum, A. Miranowicz, S. De Liberato, S. Savasta, and F. Nori, Ultrastrong coupling between light and matter, Nat. Rev. Phys. 1, 19 (2019).
Kim-21
E. Kim et al., Quantum Electrodynamics in a Topological Waveguide, Phys. Rev. X 11, 011015 (2021).
Huebl-PRL-2013
H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, High Cooperativity in Coupled Microwave Resonator Ferrimagnetic Insulator Hybrids, Phys. Rev. Lett. 111, 127003 (2013).
Tabuchi-PRL-2013
Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Hybridizing Ferromagnetic Magnons and Microwave Photons in the Quantum Limit, Phys. Rev. Lett. 113, 083603 (2014).
Zhang-PRL-2014
X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Strongly Coupled Magnons and Cavity Microwave Photons, Phys. Rev. Lett. 113, 156401 (2014).
Tobar-PRApp-2014
M. Goryachev, W. G. Farr, D. L. Creedon, Y. Fan, M. Kostylev, and M. E. Tobar, High-Cooperativity Cavity QED with Magnons at Microwave Frequencies, Phys. Rev. Applied 2, 054002 (2014).
You-npj-2015
D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, J. Q. You, Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere, npj Quantum Information 1, 15014 (2015).
Wang-2019
Y.-P. Wang, J. W. Rao, Y. Yang, P.-C. Xu, Y. S. Gui, B. M. Yao, J. Q. You, and C.-M. Hu, Nonreciprocity and Unidirectional Invisibility in Cavity Magnonics, Phys. Rev. Lett. 123, 127202 (2019).
Wang-2020
Y.-P. Wang and C.-M. Hu, Dissipative couplings in cavity magnonics, Journal of Applied Physics 127, 130901 (2020).
Rameshti-22
B. Z. Rameshti, S. V. Kusminskiy, J. A. Haigh, K. Usami, D. Lachance-Quirion, Y. Nakamura, C. Hu, H. X. Tang, G. E. W. Bauer and Y. M. Blanter, Cavity Magnonics, Physics Reports 979, 1-60 (2022).
Yuan-22
H. Y. Yuan, Y. Cao, A. Kamra, P. Yan, and R. A. Duine, Quantum magnonics: when magnon spintronics meets quantum information science, Physics Reports 965, 1 (2022).
Bellec-13
M. Bellec, U. Kuhl, G. Montambaux, and F. Mortessagne, Tight-binding couplings in microwave artificial graphene, Phys. Rev. B 88, 115437 (2013).
Peng-14
B. Peng, Ş. K. Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. Long, S. Fan, F. Nori, C. M. Bender and L. Yang, Parity-time-symmetric whispering-gallery microcavities, Nat. Phys. 10, 394 (2014).
|
http://arxiv.org/abs/2307.07230v1 | 20230714085044 | Aharonov-Bohm caging in spin-orbit coupled exciton-polariton lattices | [
"Wei Qi"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
APS/123-QED
[email protected]
Department of physics, Shaanxi University of Science and Technology, Xi'an 710021, China
We study the Aharonov-Bohm (AB) caging effect in rhombic exciton-polariton lattices, with the Rashba-Dresselhaus spin-orbit coupling (RDSOC) acting as a synthetic gauge field. The effective magnetic flux through each plaquette is controlled by the orientation of the RDSOC and the geometry of the rhombic lattice. The results show that the interplay of lattice geometry and RDSOC dramatically influences the energy band structure and thereby determines the transport properties of exciton-polariton condensates. Non-Hermitian effects, which arise from the intrinsic polariton loss mechanism, on the AB caging are also discussed in detail. Meanwhile, the effect of disorder on the dynamics of AB caging is investigated, and we find that disorder leads to inverse Anderson localization. We propose that the AB caging effect allows one to trap and steer the propagation of polaritons in a given parameter regime. Considering the specific example of a photonic liquid crystal microcavity to achieve our theoretical predictions, the AB caging could be switched on and off by applying an external voltage.
Aharonov-Bohm caging in spin-orbit coupled exciton-polariton lattices
Wei Qi
August 12, 2023
=====================================================================
§ INTRODUCTION
Investigating the properties of localization, disorder, and transport is essential for different areas of physics and modern quantum technologies. In condensed matter physics, electron transport is destroyed in the presence of random disorder, a phenomenon called Anderson localization <cit.>. Another interesting and more controllable mechanism exploits the interplay of a π-flux and lattice geometry, which can yield full localization of quantum dynamics in lattice systems, a striking interference phenomenon known as Aharonov-Bohm caging <cit.>, by which the single-particle spectrum collapses into a set of perfectly flat (dispersionless) Bloch bands. Therefore, an input excitation is decomposed into flat-band states, the energy is caged, and the transport is abruptly reduced to a couple of unit cells <cit.>.
Different from dynamic localization in other systems, AB caging requires synthetic gauge fields that produce destructive interference in rhombic lattice systems through the tuning of the tunnelling amplitudes and phases.
Over the past decade, there has been great interest in synthesizing artificial gauge fields on various platforms, such as ultra-cold atoms <cit.> and photonic materials <cit.>, where they constitute the basis of synthetic topological matter <cit.> and quantum simulations <cit.>. Although the use of a magnetic flux was initially thought of for electronic lattices, this phenomenon extends to other neutral systems by the use of artificial gauge fields. With the help of synthetic magnetic fluxes induced by the synthetic gauge fields, this special flat-band localization mechanism has spurred great interest in different areas of physics <cit.>. AB cages were first observed in networks of conducting wires <cit.>, and were recently realized in ultra-cold atom systems <cit.> and photonic lattices <cit.> both theoretically and experimentally. In further steps, the nonlinear dynamics of AB cages <cit.> and nonlinear symmetry breaking of AB cages <cit.> have been discussed.
Exciton-polaritons are part-light part-matter quasiparticles formed in semiconductor microcavities <cit.>. Many novel dynamic properties arising from lattice geometry and gauge fields have been reported in this system, such as: spiraling vortices in exciton-polariton condensates <cit.>; dynamical critical exponents in polariton quantum systems <cit.>; the flatband of a one-dimensional Lieb lattice of coupled micropillar cavities <cit.>; exciton polaritons in a two-dimensional Lieb lattice <cit.>; polariton topological insulators in flat band systems <cit.>; and topological phase transition in an exciton-polariton lattice <cit.>. Most recently, synthetic gauge fields have been achieved in liquid crystal exciton-polariton systems <cit.>. It has been suggested that Rashba-Dresselhaus spin-orbit coupling (RDSOC) in lattices acts as a synthetic gauge field, which can be used to control the phases and magnitudes of the coupling coefficients in the lattice system <cit.>. The results presented in this paper are a step forward in the study of discrete dynamics <cit.>, offering a new tool for controlling the mobility of localized wave-packets in polariton lattices.
In this article, we show that with the help of RDSOC, AB caging can be achieved in rhombic exciton-polariton lattices. We show how the orientation of the RDSOC controls the localization or delocalization of polariton condensates in the lattice. Non-Hermiticity arising from the natural dissipative properties of polariton condensates <cit.> allows the polariton system to exhibit a complex energy spectrum and wave-packet decay. Disorder is shown to lead to the inverse Anderson localization phenomenon, that is, disorder causes the wave-packet to disperse rather than localize.
The paper is organized as follows. In Sec. <ref>, we present the physical model. In Sec. <ref>, Aharonov-Bohm caging in clean and Hermitian lattices is presented. The effects of non-Hermiticity on the caging dynamics are presented in Sec. <ref>. In Sec. <ref>, inverse Anderson localization in disordered lattices is discussed. Finally, in Sec. <ref>, we give our main conclusions.
§ THEORETICAL MODEL
We consider a quasi-1D rhombic lattice with three coupled sublattices (denoted as A, B, and C) as schematically shown in Fig. <ref>.
The RDSOC can be represented as a constant gauge potential that enters the tunneling coefficient as an effective phase β, where β is proportional to the angle θ, i.e., β_1,2=α a cosθ_1,2/(ħ^2/2m) <cit.>, where a is the link length, α is the amplitude, and θ is the orientation of the RDSOC; this allows us to tune the magnitude and sign of the tunneling coefficient. Due to the RDSOC, an effective flux ϕ=2(β_1+β_2) is formed in each plaquette, as displayed in Fig. <ref>. The system is then described by the following effective Hamiltonian:
H = ∑_n[J(â^†_nb̂_ne^iβ_1 + â_n+1b̂^†_ne^iβ_2 + â^†_nĉ_ne^-iβ_2 + â_n+1ĉ^†_ne^-iβ_1 + H.c.) + (Δ^a_n-iγ_a)â^†_nâ_n + (Δ^b_n-iγ_b)b̂^†_nb̂_n + (Δ^c_n-iγ_c)ĉ^†_nĉ_n],
with J the hopping constant between neighboring sites. â_n (â^†_n), b̂_n (b̂^†_n), ĉ_n (ĉ^†_n) are the bosonic annihilation and creation operators corresponding to the sites A, B, C of cell n. Δ^a_n, Δ^b_n, Δ^c_n and γ_a, γ_b, γ_c allow for on-site disorder and polariton loss rates on the sublattices A, B and C, respectively. Under periodic boundary conditions the Hamiltonian can also be written in momentum (k) space:
H_k = ∑_k J[(â^†_kb̂_ke^iβ_1 + â_kb̂^†_ke^iβ_2e^-ik
+ â^†_kĉ_ke^-iβ_2 + â_kĉ^†_ke^-iβ_1e^-ik + H.c.)]
+ (Δ^a_n-iγ_a)â^†_kâ_k
+ (Δ^b_n-iγ_b)b̂^†_kb̂_k
+ (Δ^c_n-iγ_c)ĉ^†_kĉ_k,
where η̂_k=(1/√(N))∑_nη̂_ne^ikn, with η̂_n=â_n, b̂_n, ĉ_n the bosonic annihilation operators of the sites A, B, C in cell n, and N denotes the number of unit cells in the lattice.
Furthermore, from Eq. (<ref>) we can obtain a three-band Bloch Hamiltonian in matrix form:
ℋ_k =
[ Δ^a_n-iγ_a    J(e^iβ_1 + e^ike^-iβ_2)    J(e^-iβ_2 + e^ike^iβ_1);
  J(e^-iβ_1 + e^-ike^iβ_2)    Δ^b_n-iγ_b    0;
  J(e^iβ_2 + e^-ike^-iβ_1)    0    Δ^c_n-iγ_c ].
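The flat-band condition can be verified directly by diagonalizing this Bloch Hamiltonian over the Brillouin zone, as in the sketch below (J is set to 1 and the on-site terms default to zero; this is a numerical check, not part of the experimental proposal).

```python
import numpy as np

def bloch_bands(k, beta1, beta2, J=1.0, delta=(0.0, 0.0, 0.0), gamma=(0.0, 0.0, 0.0)):
    """Eigenvalues of the 3x3 Bloch Hamiltonian of the rhombic lattice at momentum k."""
    da, db, dc = (d - 1j * g for d, g in zip(delta, gamma))
    f_ab = J * (np.exp(1j * beta1) + np.exp(1j * k) * np.exp(-1j * beta2))
    f_ac = J * (np.exp(-1j * beta2) + np.exp(1j * k) * np.exp(1j * beta1))
    H = np.array([[da, f_ab, f_ac],
                  [np.conj(f_ab), db, 0.0],
                  [np.conj(f_ac), 0.0, dc]])
    return np.linalg.eigvals(H)

ks = np.linspace(-np.pi, np.pi, 201)
bands = np.array([np.sort(bloch_bands(k, np.pi / 4, np.pi / 4).real) for k in ks])
# With beta1 + beta2 = pi/2 the three bands at -2J, 0, +2J are k-independent:
print(np.ptp(bands, axis=0))   # ~ [0, 0, 0]
```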
Solving the eigenvalues of Eq. (<ref>), the spectrum of the system can be obtained. When the effective flux ϕ enclosed in each diamond plaquette is π, that is when β_1+β_2=π/2, the lattice has an entirely flat spectrum. This is known as AB caging. It is instructive to consider the dynamics of polariton condensates in such a situation. Here it is convenient to apply the mean-field approximation, by setting A_n=⟨â_n⟩, B_n=⟨b̂_n⟩, and C_n=⟨ĉ_n⟩. The quantities A_n, B_n and C_n are the field amplitudes for the sites A, B, and C in the n-th unit cell, A_n^*, B_n^* and C_n^* are the complex conjugate of the field amplitudes. The dynamics is then given by the Heisenberg equation of motion,
i dη_n/dt = ∂ H/∂η_n^*.
We then obtain the evolution equations for each site in the unit cell n:
iȦ_n = J(B_ne^iβ_1 + C_ne^-iβ_2 + B_n-1e^-iβ_2 + C_n-1e^iβ_1) + (Δ^a_n-iγ_a)A_n,
iḂ_n = J(A_ne^-iβ_1 + A_n+1e^iβ_2) + (Δ^b_n-iγ_b)B_n,
iĊ_n = J(A_n+1e^-iβ_1 + A_ne^iβ_2) + (Δ^c_n-iγ_c)C_n.
To further characterize the localization and diffraction properties of the polariton condensate wave-packets, it is helpful to define two characteristic quantities, namely, the inverse participation number P^-1 and the wave-packet width W. P^-1 is defined as:
P^-1=∑_n(|A_n|^4+|B_n|^4+|C_n|^4).
The inverse participation number is always smaller than or equal to 1 and it gives a measure of the number of sites where condensates are confined. For example, if we have P=1 then exciton-polaritons are confined to a single site, and if P∼ m the polaritons are confined to a cluster of m sites. Another useful quantity is the average square width W^2, defined as
W^2=(⟨ x^2⟩-⟨ x⟩^2)/N^2,
with ⟨ x⟩=∑_nn(|A_n|^2+|B_n|^2+|C_n|^2) and ⟨ x^2⟩=∑_nn^2(|A_n|^2+|B_n|^2+|C_n|^2).
The average width W is useful to characterize how signals or wave-packets injected into the system disperse: it remains small in the presence of caging and grows over time if dispersion is present.
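A minimal numerical sketch of this procedure is given below: the coupled equations above are integrated (via the equivalent single-particle matrix and its exponential) with open boundaries and an initial excitation on the C site of the central cell, and P^-1 and W are recorded. The chain length, evolution time, and time step are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def rhombic_hamiltonian(N, J=1.0, b1=np.pi/4, b2=np.pi/4, delta=None, gamma=None):
    """Single-particle matrix generating the coupled equations above;
    sites ordered (A_n, B_n, C_n) with open boundaries."""
    delta = np.zeros((N, 3)) if delta is None else delta
    gamma = np.zeros((N, 3)) if gamma is None else gamma
    H = np.zeros((3 * N, 3 * N), dtype=complex)
    a, b, c = (lambda n: 3*n), (lambda n: 3*n + 1), (lambda n: 3*n + 2)
    for n in range(N):
        for j, idx in enumerate((a(n), b(n), c(n))):
            H[idx, idx] = delta[n, j] - 1j * gamma[n, j]
        H[a(n), b(n)] = J * np.exp(1j * b1);  H[b(n), a(n)] = J * np.exp(-1j * b1)
        H[a(n), c(n)] = J * np.exp(-1j * b2); H[c(n), a(n)] = J * np.exp(1j * b2)
        if n + 1 < N:   # intercell links
            H[b(n), a(n+1)] = J * np.exp(1j * b2);  H[a(n+1), b(n)] = J * np.exp(-1j * b2)
            H[c(n), a(n+1)] = J * np.exp(-1j * b1); H[a(n+1), c(n)] = J * np.exp(1j * b1)
    return H

def evolve_and_measure(N=21, t_max=20.0, nt=200, **kwargs):
    """Evolve C_0 = 1 on the central cell and record P^-1(t) and W(t)."""
    H = rhombic_hamiltonian(N, **kwargs)
    psi = np.zeros(3 * N, dtype=complex)
    psi[3 * (N // 2) + 2] = 1.0                  # excite the central C site
    U = expm(-1j * H * (t_max / nt))
    n_idx = np.arange(N)
    P_inv, W = [], []
    for _ in range(nt):
        psi = U @ psi
        rho = np.abs(psi) ** 2                   # (for lossy chains, divide by the
        rho_cell = rho.reshape(N, 3).sum(axis=1)  #  total norm to keep measures meaningful)
        P_inv.append(np.sum(rho ** 2))                           # inverse participation number
        x1, x2 = np.sum(n_idx * rho_cell), np.sum(n_idx**2 * rho_cell)
        W.append(np.sqrt(max(x2 - x1 ** 2, 0.0)) / N)            # normalized width
    return np.array(P_inv), np.array(W)

P_inv, W = evolve_and_measure()          # pi-flux case: AB caging
print(P_inv.min(), W.max())              # P^-1 stays finite, W stays small
```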
§ AHARONOV-BOHM CAGING IN CLEAN AND HERMITIAN LATTICES
First, we consider the ideal case of a clean lattice, without disorder and without losses. In this situation, the parameters in Eq. (<ref>) are Δ^a_n=Δ^b_n=Δ^c_n=0 and γ_a=γ_b=γ_c=0. The energy spectrum is displayed in Fig. <ref> (a). In Fig. <ref> (b), we choose β_1=β_2=π/4, which satisfies the AB caging condition β_1+β_2=π/2. We can see that in this case the three energy bands are all flat, signalling AB caging. However, when the AB caging condition is broken, that is, β_1+β_2≠π/2, two of the flat bands become dispersive, as shown in Fig. <ref> (c).
It is instructive to consider how the orientation of the RDSOC can influence the band structure, potentially changing also the dynamics of the system in the transition to and from the AB caging situation. In Fig. <ref>, the energy bands as a function of β_2 are shown in (a) and (b) for a fixed β_1=π/4 and β_1=π/3, respectively. It is clear that the orientation of the RDSOC dramatically changes the band structure, and there exists an energy degeneracy point, circled in red in Fig. <ref> (a) and in green in Fig. <ref> (b), corresponding to the angles β_2=π/4 (a) and β_2=π/6 (b), which coincide with the AB caging condition β_1+β_2=π/2. In the following, we mainly choose the parameters β_2=β_1=π/4, that is, the case shown in Fig. <ref> (a), as an example to illustrate the caging dynamics and related properties.
We now consider the dynamics of a wave-packet injected on the central site of a rhombic chain. This wave-packet could be excited with an external laser and is modelled here with the initial condition C_0=1 and all other sites initially zero. Fig. <ref> shows that the initial wave-packet spreads to the neighboring B and C sites but remains caged in the central plaquette (note that the same parameters as in Fig. <ref> (b) were used). For further demonstration we show the inverse participation number P^-1 and the wave-packet width W in Fig. <ref>. P^-1 oscillates within a finite range, which indicates that the condensate remains localized. Meanwhile, the width W remains small, as shown in Fig. <ref> (b), which is also a signature of wave-packet localization.
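Because the mean-field equations of motion above are linear, they can be integrated exactly by exponentiating the single-particle Hamiltonian. The sketch below is our own minimal illustration (not the code used for the figures; the chain length, propagation time, and the placement of the C_0=1 excitation in the central cell are assumptions) and evaluates P^-1 and W along the evolution at the caging point β_1=β_2=π/4.

```python
import numpy as np
from scipy.linalg import expm

N = 41                               # unit cells (odd, so cell N//2 is the centre)
J, b1, b2 = 1.0, np.pi/4, np.pi/4    # caging point: b1 + b2 = pi/2

# Site ordering: [A_0..A_{N-1}, B_0..B_{N-1}, C_0..C_{N-1}]
A, B, C = 0, N, 2*N
H = np.zeros((3*N, 3*N), complex)
for n in range(N):
    # i dA_n/dt = J(B_n e^{i b1} + C_n e^{-i b2} + B_{n-1} e^{-i b2} + C_{n-1} e^{i b1})
    H[A+n, B+n] += J*np.exp(1j*b1)
    H[A+n, C+n] += J*np.exp(-1j*b2)
    if n > 0:
        H[A+n, B+n-1] += J*np.exp(-1j*b2)
        H[A+n, C+n-1] += J*np.exp(1j*b1)
# Hermitian conjugate supplies the B_n and C_n equations (clean case, Delta = gamma = 0);
# on-site terms Delta - i*gamma could be added to the diagonal for the lossy/disordered cases.
H = H + H.conj().T

psi0 = np.zeros(3*N, complex)
psi0[C + N//2] = 1.0                 # initial condition C_0 = 1 in the central cell

times = np.linspace(0.0, 20.0, 101)
U_dt = expm(-1j*H*(times[1] - times[0]))   # single-step propagator

cells = np.tile(np.arange(N), 3)     # cell index n of every site
P_inv, W = [], []
psi = psi0.copy()
for t in times:
    dens = np.abs(psi)**2
    P_inv.append(np.sum(dens**2))                      # inverse participation number
    x1, x2 = np.sum(cells*dens), np.sum(cells**2*dens)
    W.append(np.sqrt(max(x2 - x1**2, 0.0))/N)          # wave-packet width
    psi = U_dt @ psi

print(P_inv[-1], W[-1])   # P^-1 stays O(1) and W stays small under AB caging
```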
Fig. <ref> shows the dispersive dynamics obtained when the AB caging condition is broken, with the same parameters as in Fig. <ref> (c). The time evolution clearly shows that the wave-packet components on the A, B and C sites all disperse. The quantities P^-1 and W are plotted in Fig. <ref> to further characterize the localization or delocalization properties. From Fig. <ref>, we can see that in this case the inverse participation number P^-1 approaches zero, and the wave-packet width W increases with time.
§ AB CAGING DYNAMICS IN THE NON-HERMITIAN LATTICES CASES
In this section, we discuss the effects of dissipation on the caging dynamics. Due to the intrinsic non-Hermitian character of exciton-polariton systems, dissipation is unavoidable. Further, it is in principle possible for different lattice sites to have different dissipation rates, either through their engineering or through partial compensation of the losses with a non-resonant laser <cit.>. In this non-Hermitian system the energy becomes complex, as shown in Fig. <ref>. The real part of the energy ReE, displayed in Fig. <ref> (c) and (e), has a similar dependence on the momentum k to the Hermitian case shown in Fig. <ref> (b) and (c). The imaginary part of the energy is plotted in Fig. <ref> (d) for β_1=π/4 and (f) for β_1=π/3; we can see that the imaginary part is little changed by the effective angle β_1. To study the complex band structure further, Fig. <ref> displays how the real part ReE and the imaginary part ImE of the energy depend on the momentum k. The projection of the bands onto the (ReE, ImE) plane (green lines) shows that they have different topological structures. When the AB caging condition is satisfied, as shown in Fig. <ref> (a), the energy bands in the (ReE, ImE) plane form three straight lines. But when the AB caging condition is broken, the band structure in the (ReE, ImE) plane forms a two-loop structure, corresponding to a non-Hermitian skin effect. Such an effect has been discussed previously in exciton-polariton lattices, in different configurations <cit.>. Fig. <ref> shows the time evolution of wave-packets in the presence of dissipation. In contrast to Fig. <ref>, the oscillations are quickly damped out; however, the main effect of localization remains clearly visible even in this more experimentally realistic configuration.
§ INVERSE ANDERSON LOCALIZATION PHENOMENONS IN THE DISORDER LATTICES
In this part, we consider the effects of disorder on the caging dynamics in the polariton lattice system, i.e., Δ^a_n, Δ^b_n, Δ^c_n are nonzero and randomly distributed. In such a case, Fig. <ref> shows that the presence of disorder can lead to wave-packet delocalization. In other words, there is an inverse Anderson localization <cit.>, where disorder actually favours propagation by breaking the AB caging, in contrast to the typical case of simple lattices where disorder tends to inhibit propagation. Such an effect is further confirmed by the inverse participation number P^-1 and the wave-packet width W, shown in Fig. <ref> (a) and (b), respectively.
§ CONCLUSION
In this work, we have theoretically studied the AB caging dynamics in exciton-polariton lattices where artificial gauge fields are induced by spin-orbit coupling. Interestingly, we find that by tuning the flux in each plaquette, which is achieved through the RDSOC, the essential AB caging effect can be realized in the polariton lattice system. This effect persists in the presence of polariton losses, which make the system non-Hermitian with a complex band structure. Moreover, disorder can break the caging dynamics, leading to an inverse Anderson transition in this polariton lattice system. These results open the door to using external gauge fields to manipulate trapping and transport in exciton-polariton lattices.
Wei Qi was supported by the China Scholarship Council, the National Natural Science Foundation of China under Grant No. 11805116, and the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2023-JC-YB-037).
Anderson P. W. Anderson, Phys. Rev. 109, 1492 (1958).
Vidal J. Vidal, R. Mosseri, and B. Douçot, Phys. Rev. Lett. 81, 5888 (1998).
Aravena G. Cáceres-Aravena, D. Guzmán-Silva, I. Salinas, and R. A. Vicencio, Phys. Rev. Lett. 128, 256602 (2022).
Dalibard J. Dalibard, F. Gerbier, G. Juzeliūnas, P. Öhberg, Rev. Mod. Phys. 83, 1523 (2011).
Eckardt A. Eckardt, Rev. Mod. Phys. 89, 011004 (2017).
Celi A. Celi, P. Massignan, J. Ruseckas, N. Goldman, I. B. Spielman, G. Juzeliūnas, and M. Lewenstein, Phys. Rev. Lett. 112, 043001 (2014).
KFang K. Fang, Z. Yu, S. Fan, Nat. Photonics 6, 782 (2012).
YLumer Y. Lumer, M. A. Bandres, M. Heinrich, L. J. Maczewsky, H. Herzig-Sheinfux, A. Szameit,
M. Segev, Nat. Photonics 13, 339 (2019).
TOzawa T. Ozawa and H. M. Price, Nat. Rev. Phys. 1, 349 (2019).
IBloch I. Bloch, J. Dalibard, and S. Nascimbene, Nat. Phys. 8, 267 (2012).
DLeykam D. Leykam, A. Andreanov, and S. Flach, Adv. Phys. X 3, 1473052 (2018).
CCAbilio C. C. Abilio, P. Butaud, T. Fournier, B. Pannetier, J. Vidal, S. Tedesco, and B. Dalzotto, Phys. Rev. Lett. 83, 5102 (1999).
CNaud C. Naud, G. Faini, and D. Mailly, Phys. Rev. Lett. 86, 5104 (2001).
HangLi H. Li, Z. Dong, S. Longhi, Q. Liang, D. Xie, and B. Yan, Phys. Rev. Lett. 129, 220403 (2022).
Gabriel G. Cáceres-Aravena, D. Guzmán-Silva , I. Salinas , and R. A. Vicencio, Phys. Rev. Lett. 128, 256602 (2022).
MDLiberto M. D. Liberto, S. Mukherjee, and N. Goldman, Phys. Rev. A 100, 043829 (2019).
Gligoric G. Gligorić, P. P. Beličev, D. Leykam, and A. Maluckov, Phys. Rev. A 99, 013826 (2019).
HDeng H. Deng, H. Haug, and Y. Yamamoto, Rev. Mod. Phys. 82, 1489 (2010).
TByrnes T. Byrnes, N. Y. Kim, and Y. Yamamoto, Nat. Phys. 10, 803 (2014).
ICarusotto I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013).
Xuekai X. Ma, Y. V. Kartashov, Tingge Gao, L. Torner, and S. Schumacher, Phys. Rev. B 102, 045309 (2020).
PComaron P. Comaron, G. Dagvadorj, A. Zamora, I. Carusotto, N. P. Proukakis, and M. H. Szymańska, Phys. Rev. Lett. 121, 095302 (2018).
VGoblot V. Goblot, B. Rauer, F. Vicentini, A. Le Boité, E. Galopin, A. Lemaître, L. Le Gratiet, A. Harouri, I. Sagnes, S. Ravets, C. Ciuti, A. Amo, and J. Bloch, Phys. Rev. Lett. 123, 113901 (2019).
CEWhittaker C. E. Whittaker, E. Cancellieri, P. M. Walker, D. R. Gulevich, H. Schomerus, D. Vaitiekus, B. Royall, D. M. Whittaker, E. Clarke, I. V. Iorsh, I. A. Shelykh, M. S. Skolnick, and D. N. Krizhanovskii, Phys. Rev. Lett. 120, 097401 (2018)
ChunyanLi C. Li, F. Ye, X. Chen, Y. V. Kartashov, A. Ferrando, L. Torner, and D. V. Skryabin, Phys. Rev. B 97, 081103(R) (2018)
MPieczarka M. Pieczarka, E. Estrecho, S. Ghosh, M. Wurdack, M. Steger, D. W. Snoke, K. West, L. N. Pfeiffer, T. C. H. Liew, A. G. Truscott, and E. A. Ostrovskaya, Optica 8, 1085 (2021)
Rechnka K. Rechcińka, M. Król, R. Mazur, P. Morawiak, R. Mirek, K.Łempicka, W. Bardyszewski, M. Matuszewski, P. Kula, W. Piecek,
P. G. Lagoudakis, B. Piȩka, J. Szczytko, Science 366, 727 (2019)
Gaotingge Y. Li, X. Ma, X. Zhai, M. Gao, H. Dai, S. Schumacher and T. Gao, Nat. Commun. 13, 3785 (2022).
Pavel P. Kokhanchik, D. Solnyshkov, T. Stöferle, B. Piȩtka, J. Szczytko, and G. Malpuech, Phys. Rev. Lett. 129, 246801 (2022).
Lederer F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, Phys. Rep. 463, 1
(2008).
Flach S. Flach and A. V. Gorbach, Phys. Rep. 467, 1 (2008).
Rahmani A. Rahmani, M. Kȩdziora, A. Opala, and M. Matuszewski, Phys. Rev. B 107, 165309 (2023)
ZHLiu1 Z.-H. Liu, O. Entin-Wohlman, A. Aharony, J. Q. You, and H. Q. Xu, Phys. Rev. B 104, 085302 (2021).
Aharonov Y. Aharonov and A. Casher, Phys. Rev. Lett. 53, 319 (1984).
EWertz E. Wertz, A. Amo, D. D. Solnyshkov, L. Ferrier, T. C. H. Liew, D. Sanvitto, P. Senellart, I. Sagnes, A. Lemaître, A. V. Kavokin, G. Malpuech, and J. Bloch, Phys. Rev. Lett., 109, 216404 (2012).
SMandal1S. Mandal, R. Banerjee, E. A. Ostrovskaya, and T. C. H. Liew, Phys. Rev. Lett., 125, 123902 (2020).
SMandal2 S. Mandal, R. Banerjee, and T. C. H. Liew, ACS Photon., 9, 527 (2022);
XXu X. Xu, R. Bao, and T. C. H. Liew, Phys. Rev. B, 106, L201302 (2022).
PKokhanchik P. Kokhanchik, D. Solnyshkov, and G. Malpuech, arXiv:2303.08483 (2023).
Longhi S. Longhi, Opt. Lett. 46, 2872 (2021).
|
http://arxiv.org/abs/2307.07657v1 | 20230714232743 | Machine learning for option pricing: an empirical investigation of network architectures | [
"Laurens Van Mieghem",
"Antonis Papapantoleon",
"Jonas Papazoglou-Hennig"
] | q-fin.CP | [
"q-fin.CP",
"cs.LG",
"91G20, 91G60, 68T07"
] |
|
http://arxiv.org/abs/2307.05288v2 | 20230711142833 | Navigating Uncertainty: The Role of Short-Term Trajectory Prediction in Autonomous Vehicle Safety | [
"Sushil Sharma",
"Ganesh Sistu",
"Lucie Yahiaoui",
"Arindam Das",
"Mark Halton",
"Ciarán Eising"
] | cs.CV | [
"cs.CV"
] |
^[email protected] and ^2,[email protected]
^1University of Limerick, Ireland, ^2Valeo Vision Systems, Ireland,
^3DSW, Valeo India
Navigating Uncertainty: The Role of Short-Term Trajectory Prediction in Autonomous Vehicle Safety
Sushil Sharma^1,2, Ganesh Sistu^2, Lucie Yahiaoui^2, Arindam Das^1,3,
Mark Halton^1 and Ciarán Eising^1
===============================================================================================================
Autonomous vehicles require accurate and reliable short-term trajectory predictions for safe and efficient driving. While most commercial automated vehicles currently use state machine-based algorithms for trajectory forecasting, recent efforts have focused on end-to-end data-driven systems. Often, the design of these models is limited by the availability of datasets, which are typically restricted to generic scenarios. To address this limitation, we have developed a synthetic dataset for short-term trajectory prediction tasks using the CARLA simulator. This dataset is extensive and incorporates what are considered complex scenarios - pedestrians crossing the road, vehicles overtaking - and comprises 6000 perspective view images with corresponding IMU and odometry information for each frame. Furthermore, an end-to-end short-term trajectory prediction model using convolutional neural networks (CNN) and long short-term memory (LSTM) networks has also been developed. This model can handle corner cases, such as slowing down near zebra crossings and stopping when pedestrians cross the road, without the need for explicit encoding of the surrounding environment. In an effort to accelerate this research and assist others, we are releasing our dataset and model to the research community. Our datasets are publicly available at https://github.com/sharmasushil/Navigating-Uncertainty-Trajectory-Prediction.
Keywords: Trajectory Prediction, CNN-LSTM and
CARLA simulator
§ INTRODUCTION
Autonomous vehicles are revolutionizing the transportation industry with a core focus on improved safety and an enhanced driver experience. At the same time, ensuring the safe manoeuvring of autonomous vehicles in real-world scenarios remains very challenging. One of the key components of autonomous vehicle safety is the ability to accurately predict short-term trajectories that allow the host vehicle to navigate through uncertain and dynamic environments <cit.>. Short-term trajectory prediction refers to the estimation of a vehicle's future position and movement within a limited time frame. By accurately predicting the ego vehicle's trajectory, an autonomous vehicle can anticipate potential hazards and proactively plan its actions to avoid collisions <cit.> or respond to risky situations <cit.>. Significant progress has been made in recent years in developing trajectory prediction models for autonomous vehicles <cit.>. To make predictions about the future movements of surrounding entities, these models employ diverse data sources, including sensor data like LiDAR, radar, and cameras. Machine learning techniques, including deep neural networks <cit.>, have proven to be effective in capturing complex spatiotemporal patterns <cit.> and improving trajectory prediction accuracy. By analyzing and understanding previous data, researchers and engineers can identify common patterns, critical factors, and potential risks associated with trajectory prediction <cit.>. This knowledge can guide the development of more reliable and effective prediction algorithms, leading to enhanced safety measures <cit.> and increased public trust in autonomous vehicles. In this study, we aim to investigate the role of short-term trajectory prediction in ensuring the safety of autonomous vehicles.
The main contributions of this paper are as follows:
* Short-term trajectory prediction of the vehicle from only perspective view images with no explicit knowledge encoding.
* A novel dataset[Dataset: https://drive.google.com/drive/folders/1JPb64bGV88ymZkJrUBaKQg12tToZVF7T?usp=sharing] to encourage the research community to pursue the direction of end-to-end implicit vehicle trajectory prediction learning methods.
§ RELATED WORK
Several studies have focused on the crucial role of short-term trajectory prediction in ensuring the safety of autonomous vehicles<cit.>. One notable work is the research conducted by <cit.>. The authors proposed the use of Recurrent Neural Networks (RNNs) to accurately forecast the future trajectories of surrounding objects in complex driving environments. By training their model on real-world driving datasets, they achieved impressive trajectory prediction results, allowing autonomous vehicles to navigate with improved safety and awareness. Another relevant study by <cit.>, explored the application of Generative Adversarial Networks (GANs) for probabilistic trajectory prediction. By leveraging GANs, the researchers were able to generate multiple plausible future trajectories for vehicles, incorporating uncertainty into the prediction process. This probabilistic approach provides valuable information for autonomous vehicles to assess potential risks and make informed decisions in dynamic traffic situations. Furthermore, the work of <cit.>, emphasized the significance of considering interaction information between vehicles and pedestrians. The researchers proposed a novel interaction-aware trajectory prediction model <cit.> that effectively captured the mutual influence and dependencies between different road users. By incorporating interaction information, the model achieved improved accuracy <cit.> and reliability in trajectory prediction, contributing to enhanced safety in autonomous driving scenarios. These studies collectively highlight the importance of short-term trajectory prediction in autonomous vehicle safety. Our study aims to address the limitations encountered in previous studies by incorporating frequently occurring critical actors - pedestrians at the crosswalks, keeping the volume of the dataset minimal and balanced while ensuring all critical cases are covered. To address these challenges, we have created a novel dataset using the Carla simulation platform. By leveraging this synthetic dataset, we expect our model to achieve enhanced performance. This approach enables us to overcome the constraints related to data collection and processing, thereby augmenting the results of our study.
§ METHODOLOGY
§.§ Architecture Topology
The network developed in this work is designed to predict the future trajectories of a vehicle based on a sequence of perspective-view images. It comprises two primary components: a Convolutional Neural Network (CNN) and a Long-Short Term Memory Network (LSTM). CNN plays a crucial role in extracting essential features from the input image sequence using convolution, a mathematical operation that filters the data to capture specific features. These deep features obtained from the CNN serve as inputs to the LSTM, which specializes in temporal prediction tasks by capturing long-term dependencies and maintaining memory over time. The LSTM network learns to infer future positions of the vehicle within the predicted trajectory based on the extracted deep features and the input image sequence.
Overall, the architecture of the network is shown in Figure <ref>, which provides a visual summary of how the CNN and LSTM components are connected. The architecture topology diagram illustrates the CNN used for trajectory estimation from a sequence of n input images. The left side of the diagram shows the input layer of the network where the image sequence is fed into the CNN. The images are then processed through a series of convolutional layers, pooling layers, and activation functions to extract features from the images. The extracted features are then passed through one or more fully connected layers to estimate the trajectory, which is displayed on the right side of the diagram. We use a custom encoder here since this work aims to provide a solution that can be deployed on a low-resource device within less than 1 TOPS. However, given no constraints on the device, any state-of-the-art encoder as a backbone can be integrated into the feature extraction stage.
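For concreteness, a minimal Keras sketch of such a CNN-LSTM topology is given below. This is not the released model: the number of input frames, the filter counts, the LSTM width, and the five-waypoint output horizon are illustrative assumptions, and the custom low-compute encoder mentioned above would replace the small convolutional stack used here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 5, 600, 800, 3   # n input frames of 800x600 RGB (assumed n = 5)
HORIZON = 5                          # number of future (x, y) waypoints to predict

# Per-frame CNN encoder, applied to every image in the sequence.
frame = layers.Input(shape=(H, W, C))
x = layers.Conv2D(16, 3, strides=2, activation='relu')(frame)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, strides=2, activation='relu')(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, strides=2, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)
encoder = models.Model(frame, x)

# Sequence model: CNN features per frame -> LSTM -> future waypoints.
seq_in = layers.Input(shape=(SEQ_LEN, H, W, C))
feats = layers.TimeDistributed(encoder)(seq_in)
h = layers.LSTM(128)(feats)
out = layers.Dense(2 * HORIZON)(h)          # (x_1, y_1, ..., x_HORIZON, y_HORIZON)
model = models.Model(seq_in, out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')
model.summary()
```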
§.§ Trajectory prediction approach
We define the input as a sequence of n images X^<t-n> at time t. The purpose of X^<t-n> is to anticipate the future trajectory positions.
Y^<t+n> = {x_0,y_0,x_1,y_1,x_2,y_2,x_3,y_3,. . . . . .,x_n,y_n}
where x and y depict the location of the ego vehicle. The distance feedback d_1^<t+n> measures the distance between the current pose of the ego vehicle P_ego and the final pose in the sequence P_dest, as expressed in (<ref>). The goal of this objective is to shorten the ego vehicle's local travel path.
d_1^<t+n> = ∑_i=1 ^n_0 P_ego^<t+i> - P_dest^<t+n>_2^2
(<ref>) demonstrates how to derive the lateral velocity V_lat from the angular velocity of the ego vehicle v_δ. The main objective in terms of trajectory prediction is to anticipate the future path of the ego vehicle based on its current state and inputs. By understanding the vehicle's lateral velocity V_lat and angular velocity v_δ, we can estimate its trajectory and predict its future position and orientation.
V_lat^<t+n> = ∑_i=1 ^n_0 v_δ^<t+n>
(<ref>) defines the y direction velocity component, which is used to determine the longitudinal velocity as V_long. The speed is set at 30 kph. The main objective is to calculate the future longitudinal velocity denoted as V_long^<t+n>, based on the sum of forward velocities, represented as v_f^<t+n>, over a specific time horizon.
V_long^<t+n> = ∑_i=1 ^n_0 v_f^<t+n>
The root mean squared error (RMSE) is also tested as an objective; it is based on the Euclidean distance between the predicted position P̂_ego of the ego vehicle for a given trajectory at a specific time-step and the actual position P_ego at that time-step, as seen in equation (<ref>).
RMSE = ∑_i=1 ^n√((P̂_ego - P_ego)^2)/n
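The sketch below shows how such waypoint errors are typically computed from arrays of predicted and ground-truth (x, y) positions; these are our own convenience functions, not the paper's evaluation code, and the exact definitions used for the tables may differ in detail.

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared error over predicted (x, y) waypoints; arrays of shape (n, 2)."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1))))

def average_euclidean_distance(pred, gt):
    """Mean Euclidean distance between predicted and true waypoints (cf. AED)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Example with two hypothetical waypoints.
pred = np.array([[0.0, 0.0], [1.0, 1.1]])
gt   = np.array([[0.0, 0.1], [1.0, 1.0]])
print(rmse(pred, gt), average_euclidean_distance(pred, gt))
```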
§.§ Why the CARLA simulation platform?
We have chosen to utilize the CARLA simulation platform <cit.> for creating our dataset for a number of reasons. It is a dedicated open-source simulator for autonomous driving research. When compared to other simulation platforms such as Carsim <cit.>, MATLAB <cit.>, and Gazebo <cit.>, the CARLA simulation stands out for its comprehensive feature set, realistic simulation, customization options, and support for multi-sensor data generation.
A comparison of each of the aforementioned simulation platforms is given in Table <ref>.
blue
§ EXPERIMENTAL SETUP
§.§ Implementation Details
A CNN-LSTM model for trajectory prediction is implemented using the TensorFlow framework and trained on perspective view images from the CARLA simulation. The first step is preprocessing, which involves resizing, normalizing, and splitting the dataset into a ratio of 60% : 20% : 20% for training, validation and testing, respectively. The next step involves a CNN extracting spatial features, which are then fed into an LSTM to model temporal dependencies. The training process relies on a suitable loss function. In the case of trajectory prediction, a commonly used loss function is the Mean Squared Error (MSE) loss, which calculates the average of the squared differences between the predicted and ground truth trajectories and thereby penalizes larger deviations. For optimization, we employed the Adam algorithm <cit.>.
Table <ref> outlines the hyperparameters employed to train the proposed CNN-LSTM networks. The optimal values of these hyperparameters are obtained
experimentally to maximize the performance and increase the generalization capabilities of the model.
§.§ Dataset Generation
The specifics of how the datasets were created using the CARLA simulator are described in this section. In the simulator, we adjusted the camera position to achieve a top-down view, enabling us to capture comprehensive 360° and Bird's Eye View (BEV) perspectives of each scene. Each image has dimensions of 800 pixels width and 600 pixels height. The field of view (FOV) of the camera is set to 90°. The CARLA camera position and orientation are defined as cam_rotation = (-90°, 0°, -90°) and cam_location = (0, 0, 15). The camera is therefore mounted 15 meters above the host vehicle, an elevated viewpoint that captures a top-down, bird's eye view of the surrounding environment.
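A hedged sketch of this camera rig using the CARLA Python API is shown below. The sensor name and attribute keys are standard CARLA calls, but the exact spawning script used for the released dataset is an assumption, as is the mapping of the rotation tuple to (pitch, yaw, roll).

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()
blueprints = world.get_blueprint_library()

cam_bp = blueprints.find('sensor.camera.rgb')
cam_bp.set_attribute('image_size_x', '800')
cam_bp.set_attribute('image_size_y', '600')
cam_bp.set_attribute('fov', '90')

# 15 m above the ego vehicle, looking straight down (bird's eye view);
# the paper's cam_rotation tuple is assumed to be (pitch, yaw, roll).
cam_transform = carla.Transform(
    carla.Location(x=0.0, y=0.0, z=15.0),
    carla.Rotation(pitch=-90.0, yaw=0.0, roll=-90.0))

# Assumes an ego vehicle has already been spawned in the running simulation.
ego_vehicle = world.get_actors().filter('vehicle.*')[0]
camera = world.spawn_actor(cam_bp, cam_transform, attach_to=ego_vehicle)
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))
```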
The initial dataset, referred to as “Level 1”, consists of 1000 perspective view images. To achieve a more enhanced real-life simulation, ego vehicles and pedestrians were incorporated into each scene. In addition, we annotated each image with supplementary details such as speed and local coordinates (x,y and z).
Similarly, the subsequent, more challenging dataset, referred to as “Level 2”, consists of 5000 images. In order to generate more intricate and diverse scenarios, the number of vehicles and pedestrians was augmented in each scene. This deliberate augmentation aimed to ensure that our machine-learning models could effectively handle a wide range of real-life scenarios. As done previously, all images in this dataset were annotated with supplementary information, as presented in Figure <ref>.
§ RESULTS
In the CARLA simulation, we define an IMU sensor that captures linear acceleration (m/s^2) and angular velocity data (rad/sec). The IMU sensor records the frame number at which each measurement occurs, enabling us to determine the simulation start time accurately. By utilizing this sensor, we can collect real-time measurements of linear acceleration and angular velocity throughout the simulation. The time elapsed from when the simulation started to when the measurement was taken was also recorded. The GNSS sensor provides the position and rotation of the sensor at the time of measurement, relative to the vehicle coordinates. The position is typically represented in meters, while the rotation is expressed in degrees. A comparison between the ground truth trajectory and our predicted trajectory is shown in Figure 3(a). In Figure 3(b), we focus on a specific critical scenario where a pedestrian enters the road, leading to vehicles coming to a halt. For more insight, a demo video of this real-life scenario is available at https://youtu.be/DZDqGbkInko?t=31
An ablation study on the number of LSTM cells (α=1, β=2, γ=3, δ=4) is conducted on our CNN-LSTM model. This comparison was performed using the CARLA dataset for the two specified levels, Level 1 and Level 2, respectively. For this analysis, three evaluation metrics are used: RMSE, MAPE, and AED. A summary of the results is shown in Table <ref>.
One corner case is presented in Figure <ref>. For visual purposes, a blue bounding box represents the position of the object as per the ground truth, and a red bounding box is used to highlight the prediction of the same object from our proposed model. Moreover, our model demonstrates exceptional performance in predicting the frame at time t+5, closely aligning with the ground truth. Notably, it excels even in challenging scenarios, such as when a vehicle is navigating a bend and a pedestrian unexpectedly appears. The model effectively captures the unpredictable behaviour of pedestrians when crossing the road, showcasing its remarkable trajectory prediction capabilities. Our model's performance in these critical scenarios is demonstrated at https://youtu.be/DZDqGbkInko
§ CONCLUSIONS
In this work, we have developed a novel single-stage end-to-end deep network for short-term vehicle trajectory prediction.
First, we introduce a CNN-LSTM network topology for trajectory prediction, leveraging its effectiveness in handling complex stochastic tasks.
Then we generate a large synthetic dataset using the CARLA simulator, providing a valuable resource for training and evaluating trajectory prediction models in a supervised learning fashion with a focus on safety. The data-driven approach presented in this paper offers a scalable alternative to traditional rule-based optimization algorithms, paving the way for further advancements in the field. The provided synthetic dataset serves as a baseline for future research, encouraging the research community to compare their models against the proposed methodology. We hope this effort will foster innovation and drive improvements in trajectory prediction models.
§ ACKNOWLEDGMENTS
This article has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6049.
|
http://arxiv.org/abs/2307.04330v1 | 20230710035411 | A uniform and pressure-robust enriched Galerkin method for the Brinkman equations | [
"Seulip Lee",
"Lin Mu"
] | math.NA | [
"math.NA",
"cs.NA",
"65N15, 65N30, 76D07"
] |
A uniform and pressure-robust enriched Galerkin method for the Brinkman equations
Seulip Lee and Lin Mu
August 12, 2023
====================================================
This paper presents a pressure-robust enriched Galerkin (EG) method for the Brinkman equations with minimal degrees of freedom based on EG velocity and pressure spaces. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. We derive, analyze, and compare two EG methods in this paper: standard and robust methods. The standard method requires a mesh size to be less than a viscous parameter to produce stable and accurate velocity solutions, which is impractical in the Darcy regime. Therefore, we propose the pressure-robust method by utilizing a velocity reconstruction operator and replacing EG velocity functions with a reconstructed velocity. The robust method yields error estimates independent of a pressure term and shows uniform performance from the Stokes to Darcy regimes, preserving minimal degrees of freedom. We prove well-posedness and error estimates for both the standard and robust EG methods. We finally confirm theoretical results through numerical experiments with two- and three-dimensional examples and compare the methods' performance to support the need for the robust method.
Keywords: enriched Galerkin finite element methods, Brinkman equations, pressure-robust, velocity reconstruction, uniform performance
§ INTRODUCTION
We consider the stationary Brinkman equations in a bounded domain Ω⊂ℝ^d for d=2,3 with simply connected Lipschitz boundary ∂Ω: Find fluid velocity :Ω→ℝ^d and pressure p:Ω→ℝ such that
-μΔ𝐮 + μ/K 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
=0 on ∂Ω,
where μ is fluid viscosity, K is media permeability, and is a given body force.
The Brinkman equations describe fluid flow in porous media characterized by interconnected pores that allow for the flow of fluids, considering both the viscous forces within the fluid and the resistance from the porous media. The Brinkman equations provide a mathematical framework for studying and modeling complex phenomena such as groundwater flow, multiphase flow in oil reservoirs, blood flow in biological tissues, and pollutant transport in porous media.
In this paper, for simplicity, we consider the scaled Brinkman equations
-νΔ𝐮 + 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
=0 on ∂Ω,
where ν∈[0,1] is a viscous parameter.
Mathematically, the Brinkman equations can be seen as a combination of the Stokes and Darcy equations.
When ν→1, the Brinkman equations approach a Stokes regime affected by the viscous forces, so standard mixed formulations require the H^1-conformity for velocity.
On the other hand, since the Darcy model becomes more prominent as ν→ 0, finite-dimensional spaces for velocity are forced to satisfy the H(div)-conformity.
This compatibility in velocity spaces makes it challenging to construct robust numerical solvers for the Brinkman equations in both the Stokes and Darcy regimes.
The numerical tests in <cit.> show that standard mixed methods with well-known inf-sup stable Stokes elements, such as MINI and Taylor-Hood elements, produce suboptimal orders of convergence in the Darcy regime.
Moreover, with piecewise constant approximations for pressure, the standard methods' velocity errors do not converge in the Darcy regime, while mesh size decreases.
On the other hand, Darcy elements such as Raviart-Thomas and Brezzi-Douglas-Marini do not work for the Stokes domain because they do not satisfy the H^1-conformity.
Therefore, the development of robust numerical solvers for the Brinkman equations has had considerable attention.
There have been three major categories in developing robust numerical methods for the Brinkman equations. The first category considers Stokes/Darcy elements and adds stabilization (or penalty) terms or degrees of freedom to impose normal/tangential continuity, respectively. This approach allows Stokes elements to cover the Darcy regime <cit.> or H(div)-conforming finite elements to be extended to the Stokes regime <cit.>. Also, the stabilized method in <cit.> coarsens a pressure space and applies a stabilization term on pressure, while the robust method in <cit.> uses an enlarged velocity space. The second approach is to introduce another meaningful unknown and define its suitable formulation and finite-dimensional space, such as velocity gradient <cit.>, vorticity <cit.>, and Lagrange multipliers at elements' boundaries <cit.>. The third direction is the development of a velocity reconstruction operator, first introduced in <cit.>, mapping Stokes elements into an H(div)-conforming space. In a discrete problem for the Brinkman equations, reconstructed velocity functions replace Stokes elements in the Darcy term and the test function on the right-hand side. This idea has been adopted for a uniformly robust weak Galerkin method for the Brinkman equations <cit.>, which inspires our work because of its simplicity in modification.
Our research focuses on developing a robust numerical method for the Brinkman equations with minimal degrees of freedom. The enriched Galerkin (EG) velocity and pressure spaces have been proposed by <cit.> for solving the Stokes equations with minimal degrees of freedom. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. More precisely, a velocity function =^C+^D consists of a continuous linear Lagrange polynomial ^C and a discontinuous piecewise linear enrichment function ^D, so interior penalty discontinuous Galerkin (IPDG) formulations are adopted to remedy the discontinuity of ^D. These velocity and pressure spaces satisfy the inf-sup condition for the Stokes equations, so they are stable Stokes elements.
We first observe a standard EG method derived from adding the Darcy term (,)_Ω to the Stokes discrete problem in <cit.>.
Our numerical analysis and experiments show that the standard EG method provides stable solutions and convergent errors for the Brinkman equations if a mesh size satisfies the condition h<√(ν) that is impractical in the Darcy regime (ν→0). Hence, inspired by <cit.>, we use the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space, whose consequent action is preserving the continuous component ^C and mapping only the discontinuous component ^D to the lowest-order Raviart-Thomas space. Then, we replace the EG velocity in the Darcy term and the test function on the right-hand side with the reconstructed linear H(div)-conforming velocity.
Therefore, with this simple modification, our resulting EG method yields pressure-robust error estimates and shows uniform performance from the Stokes to Darcy regime without any restriction in a mesh size, which is verified by our numerical analysis and experiments. Through two- and three-dimensional examples, we compare the numerical performance of our robust EG and the standard EG methods with the viscous parameter ν and mesh size h. The numerical results demonstrate why the standard EG method is not suitable for the Brinkman equations in the Darcy regime and show that the robust EG method has uniform performance in solving the Brinkman equations.
The remaining sections of this paper are structured as follows:
Some important notations and definitions are introduced in Section <ref>.
In Section <ref>, we introduce the standard and robust EG methods for the Brinkman equations, recalling the EG velocity and pressure spaces <cit.> and the velocity reconstruction operator <cit.>.
We prove the well-posedness and error estimates of the standard EG method in Section <ref>.
In Section <ref>, we show the robust method's well-posedness and error estimates that mathematically verify the uniform performance from the Stokes to Darcy regimes.
Section <ref> validates our theoretical results through numerical
experiments in two and three dimensions. Finally, we summarize our contribution in this paper and discuss
related future research in Section <ref>.
§ PRELIMINARIES
In this section, we introduce some notations and definitions used in this paper.
For a bounded Lipschitz domain 𝒟∈ℝ^d, where d=2,3, we denote the Sobolev space as H^s(𝒟) for a real number s≥ 0.
Its norm and seminorm are denoted by ·_s,𝒟 and |·|_s,𝒟, respectively.
The space H^0(𝒟) coincides with L^2(𝒟), and the L^2-inner product is denoted by (·,·)_𝒟.
When 𝒟=Ω, the subscript 𝒟 will be omitted.
This notation is generalized to vector- and tensor-valued Sobolev spaces.
The notation H_0^1(𝒟) means the space of v∈ H^1(𝒟) such that v=0 on ∂𝒟, and L_0^2(𝒟) means the space of v∈ L^2(𝒟) such that (v,1)_𝒟=0.
The polynomial spaces of degree less than or equal to k are denoted as P_k(𝒟).
We also introduce the Hilbert space
H(div,𝒟):={∈ [L^2(𝒟)]^d:div ∈ L^2(𝒟)}
with the norm
_H(div,𝒟)^2:=_0,𝒟^2+div _0,𝒟^2.
For discrete setting, we assume that there exists a shape-regular triangulation of Ω whose elements T∈ are triangles in two dimensions and tetrahedrons in three dimensions.
Also, denotes the collection of all edges/faces in , and =∪, where is the collection of all the interior edges/faces and is that of the boundary edges/faces.
For each element T∈, let h_T denote the diameter of T and _T (or ) denote the outward unit normal vector on ∂ T.
For each interior edge/face e∈ shared by two adjacent elements T^+ and T^-, we let _e be the unit normal vector from T^+ to T^-.
For each e∈, _e denotes the outward unit normal vector on ∂Ω.
In a triangulation , the broken Sobolev space is defined as
H^s():={v∈ L^2(Ω):v|_T∈ H^s(T), ∀ T∈},
equipped with the norm
v_s,:=(∑_T∈v^2_s,T)^1/2.
When s=0, the L^2-inner product on is denoted by (·,·)_.
Also, the L^2-inner product on is denoted as ⟨·,·⟩_, and the L^2-norm on is defined as
v_0,:=(∑_e∈v^2_0,e)^1/2.
The piecewise polynomial space corresponding to the broken Sobolev space is defined as
P_k() = {v∈ L^2(Ω): v|_T∈ P_k(T), ∀ T∈}.
In addition, the jump and average of v on e∈ are defined as
v:={[ v^+-v^- on e∈,; v on e∈, ].
v:={[ (v^++v^-)/2 on e∈,; v on e∈, ].
where v^± is the trace of v|_T^± on e∈∂ T^+∩∂ T^-. These definitions are extended to vector- and tensor-valued functions.
We finally introduce the trace inequality that holds for any function v∈ H^1(T),
v_0,e^2≤ C(h_T^-1v_0,T^2+h_T∇ v_0,T^2).
§ ENRICHED GALERKIN METHODS FOR THE BRINKMAN EQUATIONS
We first introduce the enriched Galerkin (EG) finite-dimensional velocity and pressure spaces <cit.>.
The space of continuous components for velocity is
= {^C ∈ : ^C|_T∈ [P_1(T)]^d, ∀ T ∈}.
The space of discontinuous components for velocity is defined as
= {^D ∈ L^2(Ω) : ^D|_T = c ( - _T), c ∈ℝ, ∀ T ∈},
where _T is the barycenter of T∈.
Thus, the EG finite-dimensional velocity space is defined as
:= ⊕.
We note that any function ∈ consists of unique continuous and discontinuous components, =^C+^D for ^C∈ and ^D∈.
At the same time, the EG pressure space is
Q_h := { q ∈ : q|_T ∈ P_0(T), ∀ T ∈}.
Therefore, we formulate a standard EG method for the Brinkman equations with the pair of the EG spaces × Q_h by adding the Darcy term to the Stokes formulation <cit.>.
This algorithm employs interior penalty discontinuous Galerkin (IPDG) formulations because any EG velocity function in 𝐕_h has a discontinuity.
IPDG formulations include two penalty terms scaled by h_e with the penalty parameters ρ_1 and ρ_2.
The method provides reliable numerical solutions in the Stokes regime.
However, this approach may not be effective in solving the Brinkman equations in the Darcy regime because it requires
H(div)-conforming discrete velocity functions. Moreover, the method's velocity error bounds may depend on a pressure term inversely proportional to ν.
For this reason, we develop a pressure-robust EG method that produces stable and accurate solutions to Brinkman problems with any value of ν∈(0,1].
First, the velocity reconstruction operator <cit.> is defined as : →ℬDM_1()⊂ H(div,Ω) such that
∫_e () ·_e p_1 ds = ∫_e ·_e p_1 ds,
∀p_1 ∈P_1(e), ∀e ∈,
∫_e () ·_e p_1 ds = 0, ∀p_1 ∈P_1(e), ∀e ∈,
where ℬDM_1() is the Brezzi-Douglas-Marini space of index 1 on .
Then, we propose the pressure-robust EG method as follows.
Using the velocity reconstruction operator , we force discrete velocity functions in to be H(div)-conforming.
We replace the velocity functions in the bilinear form (,)_ in (<ref>) and the right-hand side with the reconstructed velocity .
Thus, the term (,)_ with the H(div)-conforming velocity dominates the formulation when ν approaches to 0 (the Darcy regime).
Moreover, the reconstructed velocity on the right-hand side allows us to obtain error bounds independent of a pressure term inversely proportional to ν.
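To make the difference between the two algorithms explicit in this text form, it is convenient to write both discrete problems schematically; the compact notation below is ours (a(·,·), b(·,·) and c(·,·) denote the discrete viscous, velocity-pressure and Darcy bilinear forms of the standard scheme, including their interior penalty terms, and ℛ denotes the velocity reconstruction operator defined above). The standard method seeks (𝐮_h,p_h)∈𝐕_h× Q_h such that
ν a(𝐮_h,𝐯)+c(𝐮_h,𝐯)-b(𝐯,p_h)=(𝐟,𝐯), b(𝐮_h,q)=0, ∀(𝐯,q)∈𝐕_h× Q_h,
while the pressure-robust method keeps the viscous and pressure couplings and only replaces the velocities in the Darcy mass term and in the load functional by their reconstructions,
ν a(𝐮_h,𝐯)+c̃(𝐮_h,𝐯)-b(𝐯,p_h)=(𝐟,ℛ𝐯), b(𝐮_h,q)=0, ∀(𝐯,q)∈𝐕_h× Q_h,
where c̃(·,·) is obtained from c(·,·) by substituting ℛ𝐮_h and ℛ𝐯 into its mass part while retaining the jump penalty term.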
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR ST-EG (ALGORITHM <REF>)
First of all, we introduce the discrete H^1-norm in <cit.> for all ∈,
^2 := ∇_0, ^2 + ρ_1 h_e^-1/2_0, ^2,
where ρ_1 is an H^1-penalty parameter. With this norm, the coercivity and continuity results for the bilinear form (·,·) have been proved in <cit.>: For a sufficiently large H^1-penalty parameter ρ_1, there exist positive constants κ_1 and κ_2 independent of ν and h such that
(, ) ≥κ_1 ^2 ∀∈,
|(, )| ≤κ_2 ∀, ∈.
Then, we define an energy norm for Brinkman problems involving the discrete H^1-norm and L^2-norm,
^2 := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
In this case, ρ_2 is an L^2-penalty parameter that should be sufficiently large for well-posedness, and its simple choice is ρ_2=ρ_1.
The following lemma shows an essential norm equivalence between · and · scaled by ν and h.
For given ν and h, we define a positive constant C_ne (Norm Equivalence) as
C_ne:=C√(ν+h^2(ρ_2/ρ_1+1)),
where C is a generic positive constant independent of ν and h.
Then, the following norm equivalence holds: For any ∈, we have
√(ν)≤√(ν+c_1 h^2)≤≤ C_ne,
for some small 0<c_1<1. Moreover, the constant C_ne is bounded as
C_ne≤ C( √(ν)+h)
for some generic constant C>0.
We observe each term in the energy norm
^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
Since .|_T is a linear polynomial in the second term, a scaling argument implies
_0≤ Ch∇_0,≤ Ch.
For the trace term, we have
ρ_2 h_e^1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)ρ_1h_e^-1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)^2.
Thus, we obtain
^2≤ C(ν+h^2(ρ_2/ρ_1+1))^2.
On the other hand, the inverse inequality and the same argument for the trace term lead to
^2≤ C h^-2(^2_0+ρ_2 h_e^1/2_0, ^2),
where C contains ρ_1/ρ_2. In this case, we assume C>1 and set c_1=1/C, so
(ν+c_1h^2)^2≤^2.
Let us introduce the interpolation operator in <cit.> : [H^2(Ω)]^d → defined by
Π_h=Π_h^C+Π_h^D,
where Π_h^C∈_h
is the nodal value interpolant of and Π_h^D∈_h satisfies
(∇·Π_h^D,1)_T=(∇·( - Π_h^C ), 1)_T for all T∈.
The following interpolation error estimates and stability <cit.> are used throughout our numerical analysis:
|- | _j, ≤C h^m-j ||_m, 0 ≤j ≤m ≤2, ∀∈[H^2(Ω)]^d,
- ≤C h _2, ∀∈[H^2(Ω)]^d,
≤C _1,
∀∈.
For the pressure, we introduce the local L^2-projection 𝒫_0: → Q_h such that (q - q, 1)_T = 0 for all T∈. Its interpolation error estimate is given as,
q -
q_0 ≤ C h q_1, ∀ q ∈ H^1(Ω).
§.§ Well-posedness
We first prove the coercivity and continuity results concerning the energy norm ·.
For any ,∈𝐕_h, we have the coercivity and continuity results:
ν(,)+(,) ≥ K_1^2,
|ν(,)+(,)| ≤ K_2,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
If we observe the bilinear forms (·,·) and (·,·) and use the coercivity (<ref>), then we have
ν(,)+(,) ≥κ_1ν^2+_0^2 +ρ_2 h_e^1/2_0, ^2
≥min(κ_1,1)^2.
Moreover, it follows from the Cauchy-Schwarz inequality and the continuity (<ref>) that
|ν(,)+(,)| ≤κ_2ν+_0_0
+ (√(ρ_2)h_e^1/2_0,)(√(ρ_2)h_e^1/2_0,)
≤max(κ_2,1).
Next, we prove the discrete inf-sup condition for the problem (<ref>) in Algorithm <ref>.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, there exists a positive constant C_1:=C_is/C_ne such that
sup_∈(,q)/≥ C_1q_0, ∀ q∈ Q_h,
where C_is>0 (Inf-Sup), independent of ν and h, is the constant for the inf-sup condition for · in <cit.>.
It follows from the discrete inf-sup condition in <cit.> and the upper bound of in (<ref>) that
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/.
Furthermore, Lemma <ref> yields the continuity of (·,·) with .
For any ∈ and q∈ Q_h, there exists a positive constant C independent of ν and h such that
|(,q)|≤C/√(ν+c_1 h^2)q_0.
It follows from
the continuity of (·,·) in <cit.> and
the upper bound of in (<ref>) that
|(,q)|≤ Cq_0≤C/√(ν+c_1 h^2)q_0.
Thus, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
It suffices to show that _h=0 and p_h=0 when =0 because and Q_h are finite-dimensional spaces.
Choosing =_h in (<ref>) and q=p_h in (<ref>) and adding the two equations imply ν(_h,_h)+(_h,_h)=0.
Hence, _h=0 by (<ref>), so _h=0.
If _h=0 in (<ref>), then (,p_h)=0 for all ∈. Therefore, the inf-sup condition (<ref>) yields p_h_0=0, so p_h=0.
§.§ Error estimates
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>).
We define the error functions used in the error estimates
χ_h:=-Π_h, 𝐞_h:=Π_h-_h, ξ_h:=p- p, ϵ_h:= p-p_h.
First, we derive error equations in the following lemma.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h),
(_h,q) =-(χ_h,q),
where the supplemental bilinear forms are defined as follows:
l_1(,):=ν(Π_h-,),
l_2(,):=(Π_h-, )_,
𝐬(Π_h,):=ρ_2⟨ h_eΠ_h,⟩_.
We have -(Δ,)
_=(,) for any ∈ from <cit.>, which implies that
-ν(Δ,)_
=ν(Π_h,)-ν(Π_h-,).
The definition of (·,·) also gives
(,)_=(Π_h,)-(Π_h-, )_-ρ_2⟨ h_eΠ_h,⟩_,
and integration by parts and continuity of p lead to
(∇ p,)_ = ∑_T∈⟨ p,·⟩_∂ T -(p,∇·)_T= -(,p).
Thus, the equation (<ref>) imposes
ν(Π_h,)+(Π_h,)-(,p)=(,)+l_1(,)+l_2(,)+𝐬(Π_h,).
By comparing this equation with (<ref>) in the method, we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h).
Moreover, it follows from the continuity of and (<ref>) that
(∇·,q)_=(,q)=0=(_h,q),
which implies (<ref>).
In what follows, we prove estimates for the supplemental bilinear forms in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν) h_2,
|l_2(,)|≤C h^2_2,
|𝐬(Π_h,)|≤Ch^2_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
It follows from (<ref>), (<ref>), and (<ref>) that
|l_1(,)| =|ν(Π_h-,)|
≤νκ_2Π_h-
≤ Cν h _2
≤ C√(ν)h_2.
Using the Cauchy-Schwarz inequality and (<ref>),
we get the following upper bounds
|l_2(,)| =|(Π_h-,)_|
≤Π_h-_0_0
≤ Ch^2||_2.
Finally, the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) imply
|𝐬(Π_h,)| =|ρ_2⟨ h_eΠ_h,⟩_|
=|ρ_2⟨ h_eΠ_h-,⟩_|
≤ρ_2h_e^1/2Π_h-_0,h_e^1/2_0,
≤h_e^1/2Π_h-_0,
≤ Ch^2||_2.
In addition, we expand the continuity of (·,·) in <cit.> to be relevant to the error equations (<ref>) because χ_h=-Π_h∉𝐕_h and ξ_h=p- p∉Q_h.
For any ∈ and q∈ Q_h, we have
|(,ξ_h)|≤Ch p_1,
|(χ_h,q)|≤Chq_0_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
First, we use the Cauchy-Schwarz inequality to get
|(,ξ_h)| =|(∇·,ξ_h)_-⟨·_e,ξ_h⟩_|
≤ C(∇_0,ξ_h_0+h_e^-1/2_0,h_e^1/2ξ_h_0,).
Then, the trace term is bounded by using the trace inequality (<ref>) and interpolation error estimate (<ref>),
h_e^1/2ξ_h_0,^2≤ C(ξ_h_0^2+h^2∇ξ_h_0,^2)≤ Ch^2p_1^2
because ∇ξ_h=∇(p- p)=∇ p.
Hence, the definition of the discrete H^1-norm and estimate (<ref>) imply
|(,ξ_h)|≤ Chp_1.
Similarly, it follows from the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) that
|(χ_h,q)| ≤ C(∇χ_h_0,q_0+h_e^-1/2χ_h_0,h_e^1/2q_0,)
≤ Cq_0χ_h≤ Chq_0_2.
Therefore, we show error estimates of the method in Algorithm <ref> for the Brinkman equations.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following error estimates
Π_h-_h ≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ],
p-p_h_0 ≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ].
First of all, we apply the continuity results (<ref>), (<ref>), the estimates (<ref>), and the norm equivalence (<ref>) to the error equation (<ref>),
(,ϵ_h) =ν(_h,)+(_h,)-l_1(,)-l_2(,)-𝐬(Π_h,)-(,ξ_h)
≤ C(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
Thus, the inf-sup condition (<ref>) with (<ref>) implies
ϵ_h_0≤ C(√(ν)+h)(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
We choose =_h in (<ref>) and q=ϵ_h in (<ref>) and substitute (_h,ϵ_h) with -(χ_h,ϵ_h) to obtain
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_2(,_h)+𝐬(Π_h,_h)+(_h,ξ_h).
In this case, we estimate the term (χ_h,ϵ_h)
using (<ref>),
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
The term (_h,ξ_h) is estimated by using (<ref>) and (<ref>),
|(_h,ξ_h)|≤ Chp_1_h≤ Ch/√(ν+c_1h^2)p_1_h.
Hence, it follows from (<ref>), (<ref>), (<ref>), and (<ref>) that
_h^2≤ C(h_2ϵ_h_0 + √(ν)h_2_h+h^2_2_h + h/√(ν+c_1 h^2)p_1_h).
We use the estimate (<ref>) and omit high-order terms (h^3 or h^4) to obtain,
h_2ϵ_h_0 ≤ C( (√(ν)+h)h_2_h + ν h^2_2^2 + √(ν)+h/√(ν+c_1 h^2)h^2_2p_1)
≤ C( (√(ν)+h)h_2_h + ν h^2_2^2+ h^2_2p_1)
because √(ν) +h≤ (√(2/c_1))√(ν+c_1 h^2).
If we apply the Young’s inequality to each term with a positive constant α, then we have
√(ν)h_2_h≤ν h^2/2α_2^2+α/2_h^2,
h^2_2_h≤h^4/2α_2^2 + α/2_h^2,
h^2_2p_1≤h^2/2α_2^2 + α h^2/2p_1^2,
h/√(ν+c_1 h^2)p_1_h≤h^2/2α(ν+c_1 h^2)p_1^2+α/2_h^2.
Therefore, a proper α implies
_h^2≤ C[(ν+1)h^2_2^2 + ( h^2+h^2/ν+c_1 h^2)p_1^2 ],
so we finally get
_h≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ].
On the other hand, we observe the intermediate estimate (<ref>) and omit high-order terms (h^2 or h^3) to show the pressure error estimate,
ϵ_h_0≤ C[(√(ν)+h)_h+ν h_2+hp_1].
Thus, we bound _h with the velocity error estimate (<ref>), so we finally obtain
ϵ_h_0≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ],
when omitting h^2-terms.
Theorem <ref> explains that the errors converge in the first order with h under the condition h<√(ν), which is easily satisfied in the Stokes regime.
However, the velocity error in the Darcy regime may not decrease with h due to the pressure term in the velocity error bound, that is, when ν→ 0,
h/√(ν+c_1h^2)p_1→1/√(c_1)p_1.
We will confirm these theoretical results through numerical experiments.
For this reason, the method in Algorithm <ref> may not be effective in solving the Brinkman equations with small ν, which motivates us to develop and analyze the method in Algorithm <ref>.
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR PR-EG (ALGORITHM <REF>)
In this section, we prove well-posedness and error estimates for the method in Algorithm <ref>.
The error estimates show that the method's velocity and pressure errors decrease in the optimal order of convergence in both the Stokes and Darcy regimes, so we expect stable and accurate numerical solutions with any ν as h decreases.
We first define another energy norm by replacing _0 with _0,
^2_ℛ := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
We also introduce the interpolation error estimate of the operator in <cit.>.
For any ∈, there exists a positive constant C independent of ν and h such that
- _0≤ Chh_e^-1/2_0,≤ C h .
This interpolation error estimate allows to have the norm equivalence between _ℛ and scaled by ν and h, similar to Lemma <ref>.
For any ∈, it holds
√(ν)≤√(ν+c_2 h^2)≤_ℛ≤ C_ne,
where C_ne is the constant defined in Lemma <ref> and 0<c_2<1 is a small constant.
It suffices to prove that _0≤ Ch for the upper bound because _0 is replaced by _0 in the norm _ℛ.
Indeed, it follows from the triangle inequality, the error estimate (<ref>), and the argument in the proof of Lemma <ref> that
_0 ≤_0 + -_0≤_0+Ch≤ Ch.
Hence, we obtain
_ℛ^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2≤ C(ν + h^2(ρ_2/ρ_1+1))^2.
For the lower bound, we recall the result in Lemma <ref> and apply (<ref>) to it,
^2 ≤ C h^-2(_0^2+ρ_2 h_e^1/2_0, ^2)
≤ C h^-2(_0^2+-_0^2+ρ_2 h_e^1/2_0, ^2)
≤ Ch^-2(_0^2+ h^2h_e^-1/2_0,^2+ρ_2 h_e^1/2_0, ^2)
=Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2)+C_0h_e^-1/2_0,^2,
where C_0 contains ρ_1/ρ_2 but is independent of ν and h.
Then, for a sufficiently large ρ_1, we have
ρ_1-C_0/ρ_1^2≤ Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2).
Therefore, we set c_2=(ρ_1-C_0)/(Cρ_1) and assume c_2<1 to have
c_2h^2^2≤_0^2+ρ_2h_e^1/2_0,^2,
which implies
(ν+c_2h^2)≤_.
In addition, we prove the norm equivalence between and _ using the results in Lemma <ref>, Lemma <ref>, and Lemma <ref>.
For any ∈, it holds
c_*_≤≤ c^*_,
where c_* and c^* are positive constants independent of ν and h.
It follows from the results in Lemma <ref> and Lemma <ref> that
ν^2+_0^2≤ C(ν^2+c_1h^2^2+_0^2)≤ C^2.
Similarly, from Lemma <ref> and Lemma <ref>, we obtain
ν^2+_0^2≤ C(ν^2+c_2h^2^2+_0^2)≤ C^2_.
§.§ Well-posedness
Most of the results for the well-posedness of the method are similar to those of the method. Thus, we briefly state and prove the results concerning ·_ℛ in this subsection.
For any ,∈𝐕_h, the coercivity and continuity results hold:
ν(,)+𝐜̃(,) ≥ K_1^2_ℛ,
|ν(,)+𝐜̃(,)| ≤ K_2_ℛ_ℛ,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
The proof is the same as that of Lemma <ref>, so we omit the details here.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, we have
sup_∈(,q)/_ℛ≥ C_1q_0, ∀ q∈ Q_h,
for C_1=C_is/C_ne defined in Lemma <ref>.
Similar to the proof of Lemma <ref>, the discrete inf-sup condition in <cit.> and the upper bound of _ℛ in (<ref>) imply
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/_ℛ.
For any ∈ and q∈ Q_h, it holds
|(,q)|≤C/√(ν+c_2 h^2)q_0_ℛ,
for a generic positive constant C independent of ν and h.
Similar to the proof of Lemma <ref>, this result is proved by the continuity of (·,·) in <cit.> and the upper bound of in (<ref>).
Finally, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
The proof is the same as Theorem <ref>, so we omit the details here.
§.§ Error estimates
We recall the error functions
χ_h:=-Π_h, 𝐞_h:=Π_h-_h, ξ_h:=p- p, ϵ_h:= p-p_h,
where (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] is the solution to (<ref>)-(<ref>).
Then, we derive error equations for the method.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,),
(_h,q) =-(χ_h,q),
where l_1(,) and 𝐬(Π_h,) are defined in Lemma <ref>, and the other supplemental bilinear forms are defined as follows:
l_3(,):=ν(Δ, -)_,
l_4(,):=(Π_h-,)_.
Since -(Δ,)
_=(,) for any ∈, we have
-ν(Δ,)_ =-ν(Δ,)_-ν(Δ,-)_
=ν(,)-ν(Δ,-)_
=ν(Π_h,)-ν(Π_h-,)-ν(Δ,-)_.
By the definition of (·,·), we also have
(,)_ =(Π_h,)_-(Π_h-,)_
=(Π_h,)-(Π_h-,)_-ρ_2⟨ h_eΠ_h,⟩_.
Since · is continuous on ∂ T and ∇· is constant in T, integration by parts implies
(∇ p,)_ = -(, p).
Hence, we obtain the following equation from (<ref>),
ν(Π_h,)+(Π_h,)-(, p)=(,)+l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
If we compare this equation with (<ref>) in the method, then we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
For the second equation (<ref>), the continuity of and (<ref>) in the method lead us to
(∇·,q)_=(,q)=0=(_h,q).
We present estimates for the supplementary bilinear forms used in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν)h_2_ℛ,
|l_3(,)|≤C√(ν)h_2_ℛ,
|l_4(,)|≤C h_2_ℛ,
|𝐬(Π_h,)|≤C h^2_2_ℛ,
where C is a generic positive constant independent of ν and h and may vary in each case.
The estimates (<ref>) and (<ref>) are proved by the estimate in Lemma <ref> and the norm equivalence (<ref>).
On the other hand, the Cauchy-Schwarz inequality, (<ref>), and (<ref>) lead to
|l_3(,)| =|ν(Δ, -)_|
≤ν_2-_0
≤ Cν h_2
≤ C√(ν)h_2_ℛ.
Using the Cauchy-Schwarz inequality, (<ref>), (<ref>), and (<ref>),
we get the following upper bounds,
|l_4(,)| =|(Π_h-,)_|
≤|(Π_h-Π_h,)_|+|(Π_h-,)_|
≤ ChΠ_h_0+Π_h-_0_0
≤ Ch||_1_ℛ.
Hence, we prove error estimates of the method in Algorithm <ref>.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following pressure-robust error estimates
Π_h-_h_ℛ≤ Ch(√(ν)+1)_2,
𝒫_0p-p_h_0≤ C h(ν+√(ν))_2 + Ch^2_2.
We start with the error equation (<ref>),
(,ϵ_h)=ν(_h,)+(_h,)-l_1(,)-l_3(,)-l_4(,)-𝐬(Π_h,).
Then, it follows from (<ref>) and (<ref>) that
(,ϵ_h)≤ C(_h _ℛ+√(ν)h_2+h_2+h^2_2)_ℛ.
From the inf-sup condition (<ref>) with (<ref>), we obtain
ϵ_h_0≤ C(√(ν)+h)(_h_ℛ+√(ν)h_2+h_2+h^2_2).
We also choose =_h and q=ϵ_h in (<ref>) and substitute (<ref>) into (<ref>) to get
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_3(,_h)+l_4(,_h)+𝐬(Π_h,_h).
Here, it follows from (<ref>) that
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
Therefore, from (<ref>), (<ref>), and (<ref>), we have
_h_ℛ^2≤ C( h_2ϵ_h_0+√(ν)h_2_h_ℛ+h_2_h_ℛ),
while omitting h^2-terms.
We also replace ϵ_h_0 by its upper bound in (<ref>) omitting high-order terms,
_h^2_ℛ≤ C(√(ν)h_2_h_ℛ+h_2_h_ℛ).
In this case, the Young's inequality gives
√(ν)h_2_h_ℛ≤ν h^2/2α_2^2+α/2_h^2_ℛ,
h_2_h_ℛ≤h^2/2α_2^2+α/2_h^2_ℛ.
Therefore, it follows from choosing a proper α that
_h^2_ℛ≤ Ch^2(ν+1)_2^2,
which implies that
_h_ℛ≤ Ch(√(ν)+1)_2.
If we apply this estimate to (<ref>), then we obtain
ϵ_h_0≤ Ch(ν+√(ν))_2+Ch^2_2.
We emphasize that the error bounds in Theorem <ref> are pressure-robust and have no detrimental effect from small ν.
With ν→0, the method's velocity errors decrease in the optimal order, and pressure errors do in the second order (superconvergence is expected).
This result implies that the method produces stable and accurate solutions to the Brinkman equations in the Darcy regime.
In addition, we prove total error estimates showing the optimal orders of convergence in velocity and pressure.
Under the same assumption of Theorem <ref>, we have the following error estimates
-_h_ℛ≤ Ch(√(ν)+1)_2,
p-p_h_0≤ Ch((ν+√(ν))_2+p_1).
For the velocity error estimate, we show
-Π_h_ℛ≤ C√(ν)h_2.
More precisely, we recall χ_h=-Π_h and observe the energy norm,
χ_h^2_ℛ=νχ_h^2+χ_h_0^2+ρ_2h_e^1/2χ_h_0,^2.
Then, it follows from (<ref>), (<ref>), and (<ref>) that
χ_h_0≤χ_h-χ_h_0+χ_h_0≤ Chχ_h+χ_h_0≤ Ch^2_2.
Also, from (<ref>) and (<ref>), we obtain
h_e^1/2χ_h_0,≤ C(χ_h_0^2+h^2∇χ_h_0,^2)^1/2≤ Ch^2_2.
Hence, since χ_h≤ Ch_2, the error bound is
χ_h_ℛ≤ C(√(ν)h+h^2)_2.
Furthermore, the pressure error estimate is readily proved by the triangle inequality and interpolation error estimate (<ref>).
In conclusion, the proposed method solves the Brinkman equations in both the Stokes and Darcy regimes, achieving the optimal order of convergence for both velocity and pressure.
§ NUMERICAL EXPERIMENTS
This section shows numerical experiments validating our theoretical results with two- and three-dimensional examples.
The numerical methods in this paper and their discrete solutions are denoted as follows:
* (u_h^EG, p_h^EG): Solution by the standard EG method in Algorithm <ref>.
* (u_h^UR, p_h^UR): Solution by the robust EG method in Algorithm <ref>.
Considering the scaled Brinkman equations (<ref>) with the parameter ν, we recall the error estimates for the standard EG method in Theorem <ref>,
Π_h-^_h≲(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1,
p-p_h^_0≲(ν+√(ν))h_2 + (√(ν)+1)hp_1,
and the error estimates for the robust EG method from Theorem <ref>
Π_h-_h^≲(√(ν)+1)h_2,
p-p_h^_0≲(ν+√(ν))h_2+h^2_2.
We mainly verify the error estimates (<ref>) and (<ref>) through numerical experiments with varying ν and h.
We also display the difference between the numerical solutions of the standard and robust EG methods in the Darcy regime, which shows that the robust method is needed to obtain stable and accurate velocity solutions.
Moreover, we present permeability tests that consider the Brinkman equations (<ref>) with viscosity μ and permeability K and apply both EG methods.
The permeability tests further motivate the use of the robust method in the case of extreme μ or K.
We implement the numerical experiments using the authors' MATLAB codes developed based on iFEM <cit.>.
The penalty parameters are ρ_1=ρ_2=3 for all the numerical experiments.
§.§ Two dimensional tests
Let the computational domain be Ω=(0,1)× (0,1). The velocity field and pressure are chosen as
u = ([ 10x^2(x-1)^2y(y-1)(2y-1); -10x(x-1)(2x-1)y^2(y-1)^2 ]),
p = 10(2x-1)(2y-1).
Then, the body force and the Dirichlet boundary condition are obtained from (<ref>) using the exact solutions.
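As a quick sanity check of this manufactured solution, the following sketch (not part of the authors' MATLAB/iFEM implementation) verifies symbolically that the chosen velocity is divergence-free and assembles the corresponding body force, assuming the scaled Brinkman form -νΔu + u + ∇p = f with ∇·u = 0.

```python
# Sketch: verify the manufactured velocity is divergence-free and build the body
# force, assuming the scaled Brinkman form  -nu*Laplace(u) + u + grad(p) = f.
import sympy as sp

x, y, nu = sp.symbols('x y nu')
u1 = 10*x**2*(x - 1)**2*y*(y - 1)*(2*y - 1)
u2 = -10*x*(x - 1)*(2*x - 1)*y**2*(y - 1)**2
p = 10*(2*x - 1)*(2*y - 1)

div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))
print(div_u)  # 0, so the exact velocity is divergence-free

lap = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
f1 = sp.simplify(-nu*lap(u1) + u1 + sp.diff(p, x))
f2 = sp.simplify(-nu*lap(u2) + u2 + sp.diff(p, y))
print(f1, f2)  # body force consistent with the assumed scaled Brinkman form
```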
§.§.§ Robustness and accuracy test
We compare the standard and robust EG methods to assess robustness and to check their accuracy against the error estimates (<ref>) and (<ref>).
First, we interpret the standard EG method's velocity error estimate (<ref>) in terms of the relation between the coefficient ν and the mesh size h.
First-order convergence of the energy norm with respect to h is guaranteed when ν≫ h^2, but no particular order of convergence can be inferred when ν is smaller than h^2 because of the term h/√(ν+c_1h^2).
On the other hand, the velocity error estimate for the robust EG method (<ref>) implies first-order convergence in h regardless of ν.
In Figure <ref>, we examine the discrete H^1-error of the velocity scaled by √(ν), which is one component of the energy norm of the velocity error.
The standard EG method tends to produce errors that grow like 𝒪(h^-1/2) when h>√(ν), while the errors decrease like 𝒪(h^3/2) when h<√(ν).
This result supports the error estimate (<ref>) (superconvergence may occur because we solve the problem on structured meshes) and means that a very small mesh size is needed to obtain accurate solutions when ν is small.
In contrast, the robust EG method's errors uniformly show first-order convergence, 𝒪(h), regardless of ν.
This result supports the error estimate (<ref>), so the robust method guarantees stable and accurate solutions in both the Stokes and Darcy regimes.
We fix ν=10^-6 and compare the velocity errors and solutions of the standard and robust EG methods.
Table <ref> displays the energy errors and their major components, namely the discrete H^1-errors scaled by ν and the L^2-errors.
For the standard EG method, the energy errors decrease with half-order convergence because the dominant L^2-errors decrease at the same rate.
However, the H^1-errors keep increasing until h<√(ν)=10^-3, so they will eventually become dominant and degrade the convergence order of the energy errors.
On the other hand, for the robust EG method, we expect from (<ref>) that the energy errors and their major components converge at least at first order in h.
Indeed, Table <ref> shows that the H^1-errors decrease at first order in h, while the L^2-errors decrease at second order.
Since the energy error involves both H^1- and L^2-contributions, it initially decreases at second order because of the dominant L^2-errors but eventually converges at first order, driven by the H^1-errors.
In Figure <ref>, the robust EG method produces accurate velocity solutions that clearly show a vortex flow pattern when ν=10^-6 and h=1/16. In contrast, the numerical velocity from the standard EG method exhibits significant oscillations near the boundary of the domain.
Moreover, the pressure error estimates (<ref>) and (<ref>) tell us that the convergence order of the pressure errors is at least 𝒪(h) for both methods. However, the robust EG method can produce superconvergent pressure errors because the h^2-term is dominant when ν is small.
In Table <ref>, the pressure errors of the robust EG method decrease at least like 𝒪(h^3), which indicates superconvergence relative to the interpolation error estimate (<ref>).
On the other hand, the standard EG method still yields pressure errors that converge at first order in h.
Since the interpolation error dominates the total pressure errors p-p_h_0, the errors in Table <ref> show first-order convergence in h for both methods.
Therefore, the numerical results support the pressure error estimates (<ref>) and (<ref>).
§.§.§ Error profiles with respect to ν
We shall confirm the error estimates (<ref>) and (<ref>) in terms of the parameter ν by checking error profiles depending on ν.
We define the following error profile functions of ν based on the error estimates and show that these functions explain the behavior of the velocity and pressure errors with ν:
* E_u,2^EG(ν):=0.1h√(ν)+0.3h/√(ν+3h^2)+0.4h=0.1/32√(ν)+0.3/√(32^2ν+3)+0.4/32 from (<ref>),
* E_u,2^UR(ν):=0.8h√(ν)+0.05h=0.8/32√(ν)+0.05/32 from (<ref>),
* E_p,2^EG(ν):=2hν+3h√(ν)+0.3h=2/32ν+3/32√(ν)+0.3/32 from (<ref>),
* E_p,2^UR(ν):=0.5hν+0.01h√(ν)+0.01h^2=0.5/32ν+0.01/32√(ν)+0.01/32^2 from (<ref>),
where h=1/32.
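For reference, a short sketch (not from the authors' MATLAB code) that evaluates these 2D error profile functions over a sweep of ν; the EG/UR labels follow the notation used in this section.

```python
# Evaluate the 2D error profile functions defined above (h = 1/32) over a nu sweep.
import numpy as np

h = 1.0 / 32.0
nus = np.logspace(-8, 0, 50)

E_u_EG = 0.1*h*np.sqrt(nus) + 0.3*h/np.sqrt(nus + 3*h**2) + 0.4*h
E_u_UR = 0.8*h*np.sqrt(nus) + 0.05*h
E_p_EG = 2*h*nus + 3*h*np.sqrt(nus) + 0.3*h
E_p_UR = 0.5*h*nus + 0.01*h*np.sqrt(nus) + 0.01*h**2

for name, E in [("E_u^EG", E_u_EG), ("E_u^UR", E_u_UR),
                ("E_p^EG", E_p_EG), ("E_p^UR", E_p_UR)]:
    print(name, "at nu=1:", E[-1], "as nu->0:", E[0])
```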
Figure <ref> shows the velocity and pressure errors and the graphs of the above error profile functions when ν decreases from 1 to 0 and h=1/32.
As shown in Figure <ref>, the velocity errors for the standard EG method increase as ν decreases from 1 to 10^-4 and tend to remain constant for smaller ν.
Its pressure errors decrease slightly and then stay the same as ν→0.
On the other hand, the velocity and pressure errors for the robust EG method decrease significantly and remain constant below ν=10^-4.
This error behavior is explained by the graphs of the error profile functions derived from the error estimates (<ref>) and (<ref>), so the result supports the estimates with respect to ν.
In addition, the velocity and pressure errors of the robust EG method are almost 1000 times smaller than those of the standard EG method in Figure <ref>.
Therefore, we confirm that the robust EG method yields more accurate velocity and pressure solutions when ν is small.
§.§.§ Permeability test
In this test, we consider the Brinkman equations (<ref>) with viscosity μ=10^-6 and permeability given as the permeability map in Figure <ref>.
The permeability map indicates that fluid tends to flow following the blue regions, so the magnitude of numerical velocity will be more significant in the blue areas than in the red parts.
We set the velocity on the boundary of the domain as u=⟨ 1,0⟩ and the body force as f=⟨ 1, 1⟩.
We mainly compare the magnitude of the numerical velocity obtained from the two methods in Figure <ref>.
We clearly see that the robust EG method's velocity is more stable than that of the standard EG method, which contains non-negligible noise (oscillations) near the boundary.
This result shows that the robust method is necessary for stable and accurate velocity solutions to the Brinkman equations with extreme viscosity and permeability.
§.§ Three dimensional tests
We consider a three-dimensional flow in a unit cube Ω=(0,1)^3. The velocity field and pressure are chosen as
u = ([ sin(π x)cos(π y) - sin(π x)cos(π z); sin(π y)cos(π z) - sin(π y)cos(π x); sin(π z)cos(π x) - sin(π z)cos(π y) ]),
p = π^3sin(π x)sin(π y)sin(π z)-1.
The body force and the Dirichlet boundary condition are given in the same manner as the two-dimensional example.
§.§.§ Robustness and accuracy test
In the two-dimensional tests, we checked that the condition h<√(ν) was required to guarantee the optimal order of convergence for the standard EG method, while the robust EG method showed uniform convergence behavior independent of ν.
We obtain the same qualitative result as in Figure <ref> for this three-dimensional test.
Table <ref> displays the energy errors of the velocity solutions and their dominant components, comparing the standard and robust EG methods when ν=10^-6.
The standard EG method's energy errors tend to decrease because the dominant L^2-errors decrease, but the H^1-errors scaled by ν increase.
These H^1-errors may keep the energy errors from decreasing until h<√(ν)=10^-3.
However, the robust EG method guarantees at least first-order convergence for all the velocity errors, showing much smaller errors than the standard EG method.
This numerical result supports the velocity error estimates in (<ref>) and (<ref>), and we expect more accurate solutions from the robust method when ν is small.
In addition, we compare the numerical velocity solutions of the standard and robust EG methods when ν=10^-6 and h=1/16 in Figure <ref>.
The velocity solutions of both methods appear to capture the three-dimensional vortex flow expected from the exact velocity.
However, the velocity of the standard EG method contains noise around the top-right and bottom-left corners, where the streamlines fail to form a circular motion.
In Table <ref>, as expected from (<ref>), the standard EG method's pressure errors decrease at least at first order.
On the other hand, the robust EG method's pressure errors, p-p_h^UR_0, decrease much faster, showing superconvergence.
This phenomenon is predicted by the pressure estimate (<ref>) when ν is small.
Moreover, the convergence orders of the total pressure errors, p-p_h_0, are approximately one for both methods because of the interpolation error.
§.§.§ Error profiles with respect to ν
We define error profile functions suitable for the three-dimensional test by determining constants in the estimates (<ref>) and (<ref>):
* E_u,3^EG(ν):=0.1h√(ν)+h/√(ν+3h^2)+9h=0.1/16√(ν)+1/√(16^2ν+3)+9/16 from (<ref>),
* E_u,3^UR(ν):=6h√(ν)+0.25h=6/16√(ν)+0.25/16 from (<ref>),
* E_p,3^EG(ν):=1.5hν+h√(ν)+2.5h=1.5/16ν+1/16√(ν)+2.5/16 from (<ref>),
* E_p,3^UR(ν):=2hν+0.02h√(ν)+0.2h^2 = 2/16ν+0.02/16√(ν)+0.2/16^2 from (<ref>),
where h=1/16.
In Figure <ref>, the robust EG method's velocity and pressure errors decrease as ν changes from 1 to 10^-4 and remain the same as ν becomes smaller.
However, the errors for the standard EG method slightly increase or decrease when 10^-4≤ν≤ 1, and they stay the same as ν→0.
Thus, the errors of the robust EG method are almost 100 times smaller than those of the standard EG method when ν≤ 10^-4, which means the robust method solves the Brinkman equations with small ν more accurately.
The error profile functions show similar behavior in Figure <ref>, supporting the error estimates (<ref>) and (<ref>).
§.§.§ Permeability test
We apply piecewise constant permeability to the Brinkman equations (<ref>) in the cube domain Ω=(0,1)^3,
K() = {[ 10^-6 if ||≤ (0.25)^2,; 1 otherwise. ].
The other data are given as follows: viscosity μ=10^-6, boundary velocity u=⟨ 1,0,0⟩, and body force f=⟨ 1, 1,1⟩.
We expect the flow to be faster outside the ball of small permeability; it tends to avoid the ball and is driven by the boundary velocity.
The streamlines and color-coded velocity magnitude of the robust EG method in Figure <ref> match this expectation exactly, while the standard EG method fails to provide a reliable velocity solution.
§ CONCLUSION
In this paper, we proposed a pressure-robust numerical method for the Brinkman equations with minimal degrees of freedom based on the EG piecewise linear velocity and constant pressure spaces <cit.>.
To derive the robust method, we used the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space.
Then, we replaced the EG velocity in the Darcy term and the test function on the right-hand side with the reconstructed velocity. With this simple modification, the robust EG method showed uniform performance in both the Stokes and Darcy regimes, in contrast to the standard EG method, which requires the mesh restriction h<√(ν) that is impractical in the Darcy regime.
We also validated the error estimates and performance of the standard and robust EG methods through several numerical tests with two- and three-dimensional examples.
Our efficient and robust EG method for the Brinkman equations can be extended to various Stokes-Darcy modeling problems, such as coupled models with an interface and time-dependent models. The proposed EG method can also be extended to nonlinear models, such as nonlinear Brinkman models for non-Newtonian fluids and unsteady Brinkman-Forchheimer models.
|
http://arxiv.org/abs/2307.04526v2 | 20230710124959 | Self Expanding Neural Networks | [
"Rupert Mitchell",
"Martin Mundt",
"Kristian Kersting"
] | cs.LG | [
"cs.LG",
"I.2.6"
] |
The results of
training a neural network are heavily dependent on the architecture chosen;
and even a modification of only the size of the network,
however small,
typically involves restarting
the training process.
In contrast to this,
we begin training with a small architecture,
only increase its capacity as necessary
for the problem,
and avoid
interfering with
previous optimization
while doing so.
We thereby introduce
a natural gradient based approach
which intuitively expands both the width and depth of a neural network
when this is likely to substantially reduce the hypothetical converged training loss.
We prove an upper bound on the “rate” at which neurons are added,
and a computationally cheap lower bound on the expansion score.
We illustrate the benefits of such Self-Expanding Neural Networks in both classification and regression problems,
including those where the appropriate architecture size is substantially uncertain a priori.
§ INTRODUCTION
Correctly tailoring a model's capacity to an arbitrary task is extremely challenging, especially when the latter is not yet well studied.
This challenge can be side stepped by choosing an architecture which is so large that a poor solution is nevertheless unlikely to occur <cit.>, e.g. due to the double-descent phenomenon. However, since it is hard to predict what size would be large enough this will often in practice entail using a massively overparameterized network <cit.> <cit.> <cit.>.
Surely it is possible to detect that the existing capacity of the network is insufficient and add more neurons when and where they are needed?
In fact, biological neural networks are grown by adding new neurons to the existing network through the process of neurogenesis.
The popular review <cit.> discusses the relatively recent discovery that this process is still active in the adult mammalian brain
<cit.>,
and <cit.> <cit.> identify it as a key ability underpinning lifelong learning.
Thus inspired,
we propose an analogous process for adding both neurons and layers to an artificial neural network during training,
based on a local notion of “sufficient capacity” derived from first principles in close relation to the natural gradient <cit.> <cit.>.
Any method for artificial neurogenesis
must answer three questions to avoid the problem of locally insufficient capacity <cit.>.
It must determine when the current capacity is insufficient and that neuron(s) must therefore be added.
It must identify where these neurons should be introduced.
Finally, it must choose what initialization is appropriate for these neurons.
These questions, if they are addressed at all in the literature, are normally addressed piecemeal or in ad-hoc ways. For example, very few methods address the question of what <cit.> <cit.>.
When is answered either by assuming predetermined schedules <cit.> <cit.>,
or by waiting for the training loss to converge <cit.> <cit.>,
neither of which is informative about where.
“Whenever you parry, hit, spring, ..., you must cut the enemy in the same movement.”
[
Miyamoto Musashi, The Book of Five Rings (circa 1645)
]
Our metaphorical enemy is not a loss which is momentarily poor, or even one which is converging to a poor value:
it is a deficiency in our parameterization such that the optimizer cannot make progress.
We argue that by inspecting the degrees of freedom of the optimizer in function space,
one may not only strike faster in answer to when, but answer where and what in the same stroke.
From a mathematical perspective, these degrees of freedom available to the optimizer
are given by the image of the parameter space under the Jacobian,
and the derivative with respect to the loss in function space will not in general lie in this subspace.
It is however possible to project this derivative onto that subspace,
and the natural gradient, ^-1,
is exactly the change in parameters which changes the function according to this projection.
In order to measure the size of that projection for a given parameterization,
we introduce the natural expansion score η = g^T F^-1 g.
Specifically, the capacity of a neural network is locally insufficient when this score is small for the current parameterization. We therefore add neurons when this substantially increases η, where they will maximally increase η, and choose what initialization to use for the new parameters according to how it increases η. To summarize, our contributions are:
* We introduce the natural expansion score which measures the increase in rate of loss reduction under natural gradient descent when width or depth is added to a neural network.
* We show how such additions may be made during training
without altering the function represented by the network.
Our neurogenesis inspired Self-Expanding Neural Networks (SENN) thus avoid interfering with previous optimization or requiring restarts of training.
* We prove that the number of neurons added simultaneously in SENN is bounded. We further introduce a computationally efficient approximation as a provable lower bound to increases in natural expansion score resulting from additions.
* We demonstrate SENN's effectiveness for regression and classification.
In the remainder of this paper, we proceed as follows:
In section <ref> we summarize existing growth methods,
in section <ref> we then describe SENN,
and in section <ref> we illustrate its operation in practice.
§ RELATED METHODS FOR GROWING NEURAL NETWORKS
The problem of adding nodes to neural networks during training has been under consideration for over 30 years (e.g. Dynamic Node Creation (DyNC) <cit.>),
but remains substantially unsolved.
There does not seem to exist a unified answer to when, where and what,
as we summarize in table <ref>.
Most methods cannot add depth and sideline at least one of these questions.
Inspired by neurogenesis like SENN, <cit.> examine the case of representational learning with stacked autoencoders,
where they exploit local reconstruction error to determine when and where to add neurons.
Due to their more general setting,
DyNC,
Progressive NNs (PrNNs) <cit.> and Dynamically Expandable NNs (DENNs) <cit.>
use simple training loss convergence or even task boundaries to answer when, but must then fall back on ad-hoc preset decisions for where.
(However, DENNs use subsequent pruning to mitigate the excess capacity introduced by the preset.)
All four methods freeze old neurons or continue training from their present values,
but randomly initialize new neurons in answer to what.
While ActiveNAS <cit.> can add both width and depth,
it does so by completely restarting training with a fresh initialization of the whole network after every modification.
It then waits for convergence,
and uses preset answers to where similar to the previous methods.
The final cluster of three methods all aim to improve on random initialization as an answer to what.
Splitting Steepest Descent (SSD) <cit.> and Firefly <cit.> make small changes to the existing function and answer what by optimizing the consequent loss reduction.
The former answers when by waiting for convergence and examining the loss, whereas the latter simply adds more capacity every N epochs.
Gradmax <cit.> is the closest to SENN in spirit,
but is based on vanilla rather than natural gradient descent. More importantly, potential extensions of the method to the when and where questions are mentioned briefly and their investigation deferred to future work.
All three of these latter methods are only able to avoid redundancy of added neurons with existing neurons to the extent that the network is already converged. Of these three, only GradMax completely avoids changing the overall function.
In contrast, SENN provides a monolithic answer to all three questions via the natural expansion score.
§ SELF-EXPANDING NEURAL NETWORKS
To provide a cohesive answer to when, where and what with Self-Expanding Neural Networks,
we start with the definition of the natural expansion score as the foundation:
The natural expansion score η = g^T F^-1 g is given by the inner product of the natural gradient F^-1 g with the gradient g.
With this definition
we will describe we add capacity without interfering with the existing optimized parameters
in section <ref>.
We then in section <ref>
give an intuitive account of what our score η measures,
and why we use this to decide to add capacity.
Section <ref> gives a more mathematically precise account of the meaning of η,
and what this says about initializations should be used for new capacity.
Section <ref> extends the argument of <ref> to deciding new capacity should be added and whether it should be depth or width,
allowing us to put the ingredients of SENN together and summarize this combination.
Finally, sections <ref> and <ref> cover the practical questions of convergence guarantees and computational efficiency respectively.
§.§ How to add: expanding without changing the overall function
In order to explain how to add without changing the overall function,
we will consider the illustration in figure <ref>.
This shows a perceptron with two hidden layers, each with three neurons.
The number of neurons a hidden layer may be increased by introducing a new copy of the activation function σ_p
and connecting it to the neurons of the preceding layer with some linear transform _p.
As shown on the left of the figure, we connect the new neuron to the subsequent layer (in this case the output )
with a linear transform initialized to zero.
In doing so, we guarantee that we will not perturb the function specified by the existing parameters.
Although _p will initially receive zeroed gradients since the output transform is zero,
this latter transform will immediately receive non-zero gradients and thereby become non-zero.
The new neuron may thus be used in future optimization.
In addition to width expansion, we now consider inserting an entirely new layer,
as shown on the right of figure <ref>.
In essence, a particular linear transform, _2 in the figure, is replaced with a single layer perceptron.
To this end, we assume our nonlinearity σ_p to be parameterised, and there to exist a choice of those parameters such that σ_p = is the identity.
If we require the initial linear transform _p of the inserted perceptron to be invertible (but otherwise arbitrary),
then we may choose the output linear transform of the perceptron to be the matrix product W_2 W_p^-1.
With these choices made, the inserted perceptron is equivalent to the linear transform _2 it replaces,
and the overall parameterized function once again remains unchanged.
We thus have the first ingredient of SENN:
SENN Ingredient 1: How to add more capacity without changing the overall function.
We add proposed neurons p to layer i
by concatenation along the ith hidden dimension,
(0 ⊎ W_i+1) ∘ (σ_p ⊎ σ_i) ∘ (W_p ⊎ W_i) = W_i+1 ∘ σ_i ∘ W_i,
and initialize the output weights of p to zero.
We insert a new layer q by replacing some linear transform _i
with the composition (W_i W_q^-1) ∘ (σ_q = id) ∘ W_q,
where _q is invertible and σ_q is initialized to be the identity.
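A minimal numpy sketch (not the authors' implementation) of both function-preserving operations follows; the tanh stand-in activation and the shapes used are illustrative assumptions.

```python
# Function-preserving expansion for a two-layer perceptron y = W2 @ sigma(W1 @ x):
# width via zero outgoing weights, depth via (W2 @ inv(Wq)) o identity o Wq.
import numpy as np

rng = np.random.default_rng(0)
sigma = np.tanh                       # stand-in for the paper's rational activation
x = rng.normal(size=3)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
y = W2 @ sigma(W1 @ x)

# --- width expansion: append one neuron with zero output weights ---
W1_new = np.vstack([W1, rng.normal(size=(1, 3))])   # arbitrary input weights
W2_new = np.hstack([W2, np.zeros((2, 1))])          # zero outgoing weights
assert np.allclose(W2_new @ sigma(W1_new @ x), y)

# --- depth insertion: replace W2 by (W2 Wq^-1) o id o Wq ---
Wq = rng.normal(size=(4, 4))                        # invertible with probability 1
W_out = W2 @ np.linalg.inv(Wq)
identity = lambda z: z                              # sigma_q initialized to identity
assert np.allclose(W_out @ identity(Wq @ sigma(W1 @ x)), y)
print("function preserved under both expansions")
```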
We must therefore choose a suitable parameterized activation function.
Rational activation functions satisfy the above conditions and were shown to obtain good real world performance <cit.>.
We use the simplified parameterization
σ(x) = α x + (β + γ x)/(1+x^2),
where {α, β, γ} are the three parameters of σ,
and setting (α, β, γ) = (1, 0, 0) results in the identity function, as required.
Since this parameter count is small, we do not share the activation function weights within our layers.
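A short sketch of this activation; the parameter names follow the text, and the assertion checks the identity initialization.

```python
# Simplified rational activation: sigma(x) = alpha*x + (beta + gamma*x)/(1 + x^2).
import numpy as np

def rational(x, alpha, beta, gamma):
    return alpha * x + (beta + gamma * x) / (1.0 + x**2)

x = np.linspace(-3, 3, 7)
assert np.allclose(rational(x, 1.0, 0.0, 0.0), x)   # identity initialization
print(rational(x, 1.0, 0.5, -0.2))                  # a non-trivial setting
```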
§.§ When to add: deciding whether more capacity is useful
Having decided how to add,
perhaps the most natural way to evaluate the utility of making
some change to the parameterization is to ask what immediate effect this has on the total loss.
However, we cannot do this as we have assumed the overall function to remain unaltered.
We must therefore consider additional information such as the gradients of the function.
Specifically, one can favor adding neurons which maximally increase the Euclidean norm of the gradients ||g||_2.
As found in <cit.> this norm functions well for selecting which neurons to add when the network is close to convergence since
it is a direct measure of the rate at which gradient descent will decrease the loss.
Unfortunately, comparing the gradient norms ||||_2^2 and ||'||_2^2 for the current parameterization and some new expanded parameterization '
is insufficient to determine whether or not more capacity is needed in the first place.
This is primarily because it does not account for redundancy in the parameterization:
if there is some neuron a such that the gradients of the linear weights in the next layer “listening” to it have some large norm ||_a||_2,
then we could introduce an exact copy of this neuron a' for which the corresponding norm would also be ||_a'||_2 = ||_a||_2.
Since the squared euclidean norm is additive across parameters,
we could unboundedly increase ||||_2^2 just by adding very many copies of this one neuron a.
[
More generally, the same problem would occur when considering a new neuron c whose activations were some linear combination of those of some existing neurons a and b.
]
In SENN, we avoid this problem with the following simple notion of redundancy.
We are using our parameters to express a point in function space.
At some point in optimization we are therefore also using them to express a small change in function space.
There is some direction that our optimizer “wants” to move in (i.e. the direction in function space which most quickly reduces the loss).
We can define new parameters as being useful in a way which is non-redundant with the old parameters to the extent that they allow the optimizer to better express the direction in function space it “wants” to move in.
Our natural expansion score η = g^T F^-1 g captures this combined sense of usefulness and non-redundancy in a way which will be made more mathematically precise in the next section.
This description of its function is sufficient, however, to justify our answer to when:
SENN Ingredient 2: When to add more capacity.
A new neuron or layer will be helpful and non-redundant if it provides a fractional increase in η = g^T F^-1 g greater than some threshold τ.
When we find a potential new neuron or layer for which this is true, we add it.
We defer specific choices for τ to section <ref>,
at which point we may draw on the derivation of η.
§.§ What to add: determining the initial value of new neurons
We assume here that the Fisher information metric on the output space is Euclidean.
The reader may at this point be expecting us to tackle the question of where additional capacity is most useful,
but this would put the cart before the horse.
Additional capacity is useful to the extent that it can be initialized in a way which is useful,
which we now consider.
To simplify mathematical notation in this section,
we consider the output to be concatenated over the entire training dataset.
While the gradient of the loss with respect to the output _ tells us how the loss changes for arbitrary changes in ,
the only changes in we can actually achieve with some parameterization Θ are given by Jacobian product for some small parameter change ∈Θ.
Let _Θ be the orthogonal projection onto this space of directions in output space.
The vector _Θ_ is then the portion of _ which lies in the space of achievable output changes,
and its squared norm ||_Θ_||_2^2 is a scalar measure of how much of _ this portion is.
The vector _Θ_ is the image under the Jacobian
of some tangent vector in the parameter space.
By the definition of orthogonal projection, minimizes || - _||_2,
but if there are redundant directions in Θ then there may exist multiple such .
There is however a unique _* which minimizes ||||_2 among those which minimise || - _||_2.
The Moore-Penrose inverse, ^+, of is the unique matrix such that _* = ^+ _ for arbitrary _.
However, is a map from parameter space to total output space, which depends on dataset size N.
This dependency can be avoided by working with maps from the parameter space to itself,
such as the following
average over the dataset = 1/N^T,
known as the Fisher information matrix.
The natural gradient is then given by ,
where = 1/N^T _ is the gradient of the loss with respect to the parameters averaged over the training set,
and existence of = + ϵ is guaranteed by the addition of a small multiple ϵ of the identity.
In the limit of small ϵ this is exactly our _*.[
In fact, an alternative definition of the Moore-Penrose inverse is:
^+ := lim_ϵ→ 0(^T + ϵ)^-1^T
]
We are now able to rewrite the squared norm ||_Θ_||_2^2 in the familiar form of definition <ref>:
||_Θ_||_2^2 =
_*^T ^T _* =
^T ^-1^T ^-1 =
N ^T ^-1 =
N η .
Here, the factor of the dataset size N appears because the average gradient and our η are normalized according to the training set size.
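A quick numerical sanity check of this identity (a sketch, not from the paper, using explicit dense matrices): J denotes the Jacobian of the concatenated outputs with respect to the parameters and dLdf the loss gradient in output space, following the construction above, with a small damping term added only for invertibility.

```python
# Check  ||P_Theta dL/df||^2 = N * eta  with F = (1/N) J^T J and g = (1/N) J^T dLdf.
import numpy as np

rng = np.random.default_rng(1)
N, P = 40, 7                      # N training examples (one scalar output each), P parameters
J = rng.normal(size=(N, P))
dLdf = rng.normal(size=N)

F = J.T @ J / N
g = J.T @ dLdf / N
eta = g @ np.linalg.solve(F + 1e-12 * np.eye(P), g)

P_J = J @ np.linalg.pinv(J)       # orthogonal projector onto the column space of J
lhs = np.linalg.norm(P_J @ dLdf) ** 2
print(lhs, N * eta)               # agree up to the tiny damping term
```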
With this formula, we have now derived η from first principles
and may use it to choose between specific initializations, yielding our third SENN ingredient:
SENN Ingredient 3: What Initialization to Use.
If ' ∈Θ' is an initialization of an expanded parameterization Θ' such that the overall function remains unchanged (see section <ref>), then the best such initialization _*' is given by
_' (η').
When we add new neurons or layers, we choose what initialization to use by this method.
§.§ Where to add: completing the algorithm
Much as the euclidean norm ||||_2 measures the rate of loss reduction according to vanilla gradient descent,
our η measures the rate of loss reduction according to natural gradient descent.
This gives a uniform way of comparing the effect of new capacity no matter where it is added in the network or whether it takes the form of new neurons in an existing layer or a new layer.
In particular, one may compare the η values of the best initializations (see section <ref>) for each such variety of addition.
[
In general one can also adjust for the “size” of each addition in some relevant sense.
We found it sufficient to just penalize adding entire new layers versus single new neurons by some constant factor.
]
SENN Ingredient 4: Where to Add.
A choice of whether to add width or depth, and where in the network the new neuron/layer will be added,
specifies a particular extension of the current parameter space Θ'.
We make those choices which correspond to the extension Θ'_* = argmax_Θ' max_θ' (η') for which the best initialization is possible.
Our newfound knowledge of η as a rate of loss reduction in hand,
we return to the question of specifying the expansion threshold τ,
which we deferred from section <ref> in our previous answer to when.
An increase from the current natural expansion score η_c to a new score η_p due to some proposed expansion p corresponds to an increase in the rate of loss reduction by natural gradient descent.
We define this increase to be “sufficient” when it corresponds to a relative increase η_p / η_c > τ
in loss reduction rate greater than the expansion threshold τ.
For example, with the intuitive choice τ=2,
each addition must at least double the rate of loss reduction.
Following the well known intuition that a network does not practically converge without setting the learning rate to zero,
it is generally considered to have converged once changes in loss become sufficiently small.
In analogy to monitoring plateaus in loss,
we further require the increase in loss reduction resulting from new capacity to surpass an absolute stopping criterion α. While we answer when, where and what cohesively with η during training,
we thus concur with all prior works on terminating training.
Overall, we may now summarize all ingredients of SENN on the basis of the natural expansion score:
SENN: Summary.
When we add width or depth we do so without changing the overall function.
We add new capacity when this produces a relative increase in score η_p / η_c > τ
larger than the expansion threshold τ.
We add new capacity where it would most increase η,
and choose what initialization to use in order to maximize η.
We ensure the addition process terminates by additionally comparing each Δη
to the absolute stopping criterion α,
and not adding capacity when η_p - η_c ≤α.
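The decision rule in this summary can be illustrated with a self-contained toy example (not the paper's implementation): a linear-in-features model under squared loss, where candidate features play the role of proposed neurons appended with zero output weight, and the current weights are taken as zero-initialized purely for simplicity.

```python
# Toy expansion decision: compare eta for the current features with eta after
# tentatively appending a candidate; accept only if the relative gain exceeds tau
# and the absolute gain exceeds alpha. A duplicate feature is redundant and rejected.
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(6.0 * x)
N = len(x)

def eta(Phi, w, eps=1e-8):
    # eta = g^T F^{-1} g with g = Phi^T (Phi w - y)/N and F = Phi^T Phi / N + eps*I
    g = Phi.T @ (Phi @ w - y) / N
    F = Phi.T @ Phi / N + eps * np.eye(Phi.shape[1])
    return g @ np.linalg.solve(F, g)

Phi = np.column_stack([np.ones_like(x), x])      # current features: 1, x
w = np.zeros(2)
eta_c = eta(Phi, w)

tau, alpha = 2.0, 1e-4
for name, feat in [("duplicate of x", x), ("new feature x**3", x**3)]:
    Phi_p = np.column_stack([Phi, feat])
    eta_p = eta(Phi_p, np.append(w, 0.0))        # zero-initialized new weight
    accept = (eta_p / eta_c > tau) and (eta_p - eta_c > alpha)
    print(f"{name}: eta_p/eta_c = {eta_p / eta_c:.2f}, accept = {accept}")
```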
§.§ Bounds on convergence of expansion
Consider repeatedly running our addition algorithm for a network with initial expansion score η_0.
The expansion threshold τ guarantees that η_i > τη_i-1 after the i-th addition.
Since η = ^T ^-1
is the squared length of the projected gradient in output space ||P_Θ_||_2,
it is non-negative and bounded above by η≤λ = ||_||_2^2.
Since η_i grows exponentially with i
and is bounded above by λ
the maximum number of sequential additions i < N_s
increases logarithmically with λ.
Specifically, N_s < (lnλ - lnη_0)/lnτ.
This bound becomes large when η_0 is small,
but we also know that η_1 > α from the stopping criterion α.
The maximum number of additions N_s from repeatedly running the expansion algorithm is bounded:
N_s < 1 + (lnλ - lnα)/lnτ.
(Proof in supplementary material.)
For example, if τ = 2 and α / λ > 10^-3 then N_s < 1 + 3ln10/ln2 < 11.
Note that exponentially large ratios between α and λ produce only linearly large bounds on N_s.
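A two-line check of the worked example above:

```python
# tau = 2 and alpha/lambda = 1e-3 in the bound N_s < 1 + (ln lambda - ln alpha)/ln tau.
import math
print(1 + math.log(1e3) / math.log(2))   # ~10.97, so N_s < 11
```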
We now consider the number of additions N_T made over the course of training with natural gradient descent.
Intuitively, λ is the total possible loss reduction and α is the minimum reduction which justifies expanding the network.
If every time we expand the network it only achieves this minimum reduction then we must expand a total of roughly N_T ≈λ / α times.
If the loss function has constant curvature equal to the Fisher F,
then the total loss reduction possible with the current parameters is given by 1/2η
and we have N_T < λ / α exactly.
More generally,
we expect that when F is an underestimate of the true curvature,
η will overestimate the usefulness of new neurons causing N_T to be larger,
and vice versa for an overestimate.
See the supplementary material for a more in-depth discussion.
§.§ Efficiently computing a lower bound on score increase
Recall that the natural expansion score η is given by the inner product of the gradient with the natural gradient ^-1.
Since working with the natural gradient can be challenging due to the matrix inverse ^-1,
we will make use of established approximation techniques.
Specifically, when we need the natural gradient for the whole network we will use the iterative conjugate gradient method, as suggested for the Hessian in <cit.>,
performing Fisher-vector multiplication cheaply via auto-differentiation.
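The following sketch illustrates this style of computation with a hand-rolled conjugate gradient solve; the damped toy Fisher, its matrix-vector interface, and the damping value are assumptions for illustration only.

```python
# Estimate eta = g^T F^{-1} g when F is only available through matvecs.
import numpy as np

def eta_via_cg(fvp, g, eps=0.1, iters=100, tol=1e-10):
    """fvp(v) returns F @ v; solves (F + eps*I) x = g by CG, then returns g @ x."""
    x = np.zeros_like(g)
    r = g - (fvp(x) + eps * x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = fvp(p) + eps * p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return g @ x

# toy check against a dense solve
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); F = A @ A.T / 6
g = rng.normal(size=6)
print(eta_via_cg(lambda v: F @ v, g),
      g @ np.linalg.solve(F + 0.1 * np.eye(6), g))
```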
When we require the inverse Fisher _l^-1 for the linear transform in some layer l considered in isolation,
we approximate F_l by the Kronecker product F_l ≈ A_l ⊗ G_l,
where A_l is the second moment of the activations at the input of the linear transform,
and G_l is given by the second moment of some distribution of gradients with respect to the output of the linear transform.
The relevant gradient distribution is determined by the choice of metric on the output space implicit in the exact definition of one is using,
which for us is the euclidean metric.
The advantage of this Kronecker factorization is that _l may be inverted by inverting _l and _l separately:
F_l^-1 = A_l^-1 ⊗ G_l^-1,
which is much cheaper.
If ∂W is the gradient with respect to the weights as a matrix,
then the natural gradient is given by A^-1 ∂W G^-1 <cit.>.
The natural expansion score η is given by the inner product of the gradient with the natural gradient as vectors,
which in this matrix form becomes the elementwise inner product
η = ∑_i,j ∂W_ij (A^-1 ∂W G^-1)_ij,
which can also be expressed as a trace: η = tr[∂W^T A^-1 ∂W G^-1].
The trace formula for η is reminiscent of the definition of
the Pearson correlation coefficient r^2 = ⟨xy⟩^2 / (⟨x^2⟩⟨y^2⟩).
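A numerical sanity check (a sketch) that this Kronecker-factored trace formula matches the vectorized definition of η; it assumes the convention that pairs the activation factor A with the input dimension of the weight matrix, which may differ in detail from the paper's implementation.

```python
# Verify  tr(dW^T A^{-1} dW G^{-1}) == vec(dW)^T (A kron G)^{-1} vec(dW)
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
A = (lambda M: M @ M.T / n_in)(rng.normal(size=(n_in, n_in))) + 0.1 * np.eye(n_in)
G = (lambda M: M @ M.T / n_out)(rng.normal(size=(n_out, n_out))) + 0.1 * np.eye(n_out)
dW = rng.normal(size=(n_in, n_out))

eta_trace = np.trace(dW.T @ np.linalg.inv(A) @ dW @ np.linalg.inv(G))
eta_vec = dW.ravel() @ np.linalg.solve(np.kron(A, G), dW.ravel())
print(eta_trace, eta_vec)   # should agree
```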
The gradient for W is given by the expectation ∂W = ⟨a g^T⟩,
where a is the input activation vector,
g is the derivative of the loss with respect to the outputs,
and the expectation is over the dataset.
Let the residual gradient
g_r = g - ∂W^T A^-1 a
be the part of the gradient not predicted by the current activations .
Then if _p is the activation vector of a set of proposed neurons,
and _p is their second moment,
then the “correlation coefficient” of the new activations with the residual gradients is a lower bound Δη' on the improvement Δη in natural expansion score
(proof in appendix via block LDU decomposition of joint activation covariance):
Δη' := tr[S_p^-1 ⟨a_p g_r^T⟩ G_l^-1 ⟨g_r a_p^T⟩]
is a lower bound Δη' ≤Δη = η_p - η_c on the improvement in natural expansion score due to some proposed addition of neurons p to a layer l.
Intuitively, Δη' is the fraction of variance in residual gradients “explained” by the output of our new neuron(s).
This result holds for adding an arbitrary number of new neurons to an existing layer.
If a layer was inserted while retaining residual connections around it,
then the same result would hold if we treated the activations of the new layer as “new neurons” in the old layer to calculate Δη'.
Because our activation function can represent the identity,
we will automatically add these connections if in fact they are necessary,
so we in fact use this same method for evaluating our actual layer insertions.
The bound Δη' can be computed for an arbitrary proposal p of additional neurons
using only those intermediate activations and gradients which it would be necessary to cache
in order to calculate the gradient and (Kronecker factored approximate) natural gradient
via backpropagation.
Therefore, if we have an outer optimizer which computes the gradient and the (approximate) natural gradient,
then we may optimize arbitrarily many proposals p for arbitrarily many steps with an inner optimizer
without incurring any additional costs related to the evaluation of the existing network.
The costs of this inner optimizer instead scale with the size of the (very small) networks whose addition to the existing network is being considered.
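A sketch of the cheap lower bound itself, computed from cached quantities; the notation is assumed (per-example activations of the proposed neurons a_p, residual gradients g_r at the layer output, and the Kronecker gradient factor G of that layer), and the toy data stands in for the backprop cache.

```python
# Delta-eta' = tr( S_p^{-1} <a_p g_r^T> G^{-1} <g_r a_p^T> )
import numpy as np

def delta_eta_lower_bound(a_p, g_r, G, eps=1e-8):
    """a_p: (N, k) proposed activations; g_r: (N, m) residual gradients; G: (m, m)."""
    N = a_p.shape[0]
    S_p = a_p.T @ a_p / N + eps * np.eye(a_p.shape[1])   # second moment of proposals
    C = a_p.T @ g_r / N                                  # cross moment <a_p g_r^T>
    return np.trace(np.linalg.solve(S_p, C) @ np.linalg.solve(G, C.T))

# toy shapes only; in practice these come from the cached backward pass
rng = np.random.default_rng(0)
a_p, g_r = rng.normal(size=(128, 2)), rng.normal(size=(128, 5))
G = np.cov(g_r, rowvar=False) + 0.1 * np.eye(5)
print(delta_eta_lower_bound(a_p, g_r, G))
```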
§ EXPERIMENTS
We now apply Self-Expanding Neural Networks to regression and classification, to illustrate the behavior of the natural expansion score and demonstrate SENN's efficacy.
§.§ Width Addition in Least-Squares Regression
We first show that the evolution over training of the possible improvements Δη' in natural expansion score due to potential width expansions is meaningful.
In order to do so we consider the application of a single layer SENN to a one dimensional least squares regression task as shown in figure <ref>,
i.e. SENN with depth addition deliberately disabled.
The reason to have only one hidden layer is that this is effectively least squares regression with basis functions given by the neurons of that layer.
We can therefore plot the normalized score increase Δη' / η_c of the best neuron for each basis function location and length scale.
Where Δη' / η_c > 1 there exists an acceptable proposal.
Accepted/rejected proposed neurons are shown on this landscape in red/black at key points in training.
We see in the leftmost figure that the best such proposal is accepted because it achieves a large improvement in η,
and it corresponds to a basis function location close to datapoints with visibly large prediction error which we have been unable to reduce using the existing neurons.
The next figure to the right shows the same landscape after the new neuron is introduced,
and it can be seen that the Δη' / η_c values for neurons with similar locations to it have been dramatically reduced
since they would be redundant.
The second figure from the right shows the result of optimizing the new expanded parameters until the point at which the next neuron would be added.
It can be seen that the prediction errors in the region of the previously introduced neuron are now practically invisible,
and that the next neuron is to be introduced in a different region in which errors remain.
The rightmost figure shows the function approximation at the conclusion of training,
and it can be seen that the prediction errors are negligible and proposals with large relative increase in η are not to be found in the region considered.
The reader may note that there are some possible new neurons with small length scales which would surpass the expansion threshold which we do not find;
we could deliberately try optimizing initializations at this lengthscale to find these,
but this would likely result in overfitting.
Overall, SENN thus identifies regions of locally insufficient capacity in our parameterization,
targets these regions precisely with new added neurons,
and uses this selectively added capacity to achieve a good final fit.
§.§ Layer Addition in Classification
We now highlight SENN's depth expansion in the context of
classification.
Specifically, we consider two-dimensional inputs from the half-moons dataset <cit.>.
In figure <ref> we plot Δη' / η_c for the best layer addition proposals as a function of overall optimizer steps.
Visualizations of the learned decision boundary at initialization and just before layer additions are shown.
We can observe that Δη' / η_c increases approximately monotonically during three phases,
punctuated by large drops when layers are added.
In the initial phase the network has zero hidden layers (i.e. is linear),
and the simplicity of the decision boundary at the end of this phase reflects this.
Since the datapoints are not linearly separable,
the large Δη' / η_c value correctly indicates that the introduction of a hidden layer is necessary in order to further reduce loss.
The visible increase in decision boundary complexity and accuracy over the course of the second phase confirms this.
The beginning of the third phase marks the introduction of a second hidden layer and we wait until Δη' / η_c rises again,
indicating an exhaustion of this new capacity, before reexamining the decision boundary.
The increase in boundary complexity is less visible this time, but close inspection reveals that the boundary has become narrower and more rounded.
Conclusively, we have intentionally constructed a scenario where depth addition is necessary for a good fit to lie in the space of solutions,
and seen that SENN inserts new layers when this is necessary for global expressivity.
§.§ Dynamic Selection of Appropriate Architecture Size in Image Classification
Finally, we examine the ability of self-expanding neural networks to choose an appropriate size when classifying MNIST <cit.> images.
The leftmost plots of figure <ref> show SENN's total hidden size and validation accuracy during training on the full dataset as a function of total batches seen.
This use of mini-batching is not strictly necessary for MNIST but we use it to better reflect the realities of training modern neural networks.
Our SENN is initialized with a single hidden layer of size 10, and promptly adds a second hidden layer, also of size 10.
All five seeds considered then proceed to consistently add width to these layers at a moderate rate until a total hidden size of around 40 is reached,
at which point far fewer productive extensions of the network are found and addition slows dramatically.
It can be seen that this results in respectable validation performance (>97%) by the end of training with very modest hidden neuron counts (50-60).
It is of particular note that our method produces strong anytime performance:
we are able to continually expand size, and even insert layers, during training without any attendant drops in validation accuracy.
Indeed, our method exhibits mostly monotonic improvement up to stochasticity from batching,
a property not shared by methods which rely on reinitializing a new network, e.g. <cit.>. This makes SENN a perfect fit to prospective applications in e.g. active or continual learning, in the spirit of our original neurogenesis inspiration.
Having verified sensible performance of SENN on the full MNIST dataset,
we now examine the way in which they adapt their final converged size to the amount of information in the dataset.
To this end, we take class-balanced subsets of MNIST of varying sizes and train SENNs to convergence.
To maximize clarity in our examination of this relationship, we restrict the SENN to width addition.
The converged hidden sizes are shown together with the standard error across five seeds in the rightmost plots of figure <ref>.
The first of these shows log width against linear subset size for ease of comparison to the leftmost panel. It can be seen that the final width tails off rapidly with subset size.
The rightmost plot shows instead linear width against logarithmic subset size,
in which we can now distinguish three regimes.
For the smallest subsets, the initial hidden size of 10 is sufficient.
For subsets between 10% and 60% of the standard training set,
the final hidden size increases logarithmically,
but past that point further increases in subset size do not similarly increase the final network size.
We posit that this is due to substantial redundancy within the MNIST training set,
leaving further capacity growth unnecessary. Thus, SENN does not only provide desirable any time performance, but also tailors its size suitably to the available data.
§ CONCLUSION
We have introduced the natural expansion score η and shown how it may be used to cohesively answer the three key questions when, where and what of growing neural networks.
We have demonstrated its ability to capture redundancy of new neurons with old and thereby make sensible expansion decisions
across time and tasks.
While we have focused on providing a thorough mathematical grounding of the natural expansion score in this work,
we acknowledge that the multilayer perceptrons on which it was demonstrated
differ in scale and complexity from many of the architectures in active use for deep learning in the modern big data regime.
Dually, however, prospects for further development are promising, as our theoretical results regarding η apply for arbitrary expansions of parameterized models,
and our method of expansion would extend naturally to, for example, convolutional neural networks or normalizing flows
where layers may be initialized invertibly.
This work was supported by the project “safeFBDC - Financial Big Data Cluster” (FKZ: 01MK21002K), funded by the German Federal Ministry for Economics Affairs and Energy as part of the GAIA-x initiative, and the Hessian research priority programme LOEWE within the project “WhiteBox”. It benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK; projects “The Third Wave of AI” and “The Adaptive Mind”).
§ PROOFS
§.§ Theorem 1: Bounded rate of addition
In this section we prove theorem 1 of the main body.
We will assume ≻ 0 to be positive definite, with the following straightforward consequence
The natural expansion score is non-negative η = ^T ≥ 0.
If ≻ 0, then ≻ 0,
and ^T ≥ 0 for all .
Considering the effect of the expansion threshold τ we obtain the following bound:
Let η have initial value η_0 and be bounded above by λ > η.
If threshold τ guarantees that η_i > τη_i-1 for the i-th addition,
then the maximum number of successive additions N_s is bounded by
N_s < lnλ - lnη_0/lnτ.
Due to the threshold τ, η grows at least exponentially: η_i > τ^i η_0.
But η is bounded: λ≥η_i > τ^i η_0.
Since ln is monotonic, we may take logarithms:
lnλ > i lnτ + lnη_0.
and rearrange to get i < lnλ - lnη_0/lnτ for all additions i.
This is true for every accepted addition i, and so in particular also true for the last, N_s-th, addition.
Considering also the effect of the stopping criterion α we obtain theorem 1:
If the stopping criterion α guarantees that η_i > η_i-1,
then the maximum number of successive additions N_s is either 0, or bounded by
N_s < 1 + lnλ - lnα/lnτ.
Either N_s = 0, or there is a first addition with natural expansion score η_1 for which
η_1 - η_0 > α.
From lemma <ref> we then have η_1 > α.
We may then substitute α into lemma <ref> in place of η_0 to obtain
a bound on further additions, yielding
N_s < 1 + lnλ - lnα/lnτ.
This theorem is important because it guarantees that SENN will add a limited number of neurons or layers before continuing training.
Intuitively, this is because it rapidly becomes the case that any new neuron is either not relevant to rapidly decreasing the loss, or is redundant with some already extant neuron.
§.§ Theorem 2: Lower bound on increase in natural expansion score
We now prove theorem 2 of the main body, concerning a lower bound on the increase in natural expansion score η due to the addition of new proposed neuron(s) to a layer.
Let the joint activations = [ _c; _p ] of the current and proposed neurons have second moment
^T = = [ _c _cp; _pc _p ].
We will assume the Fisher matrix for the layer to which neurons are to be added to factorize as = ⊗, where ≻ 0 is positive definite.
We first derive a convenient form of a known result discussed in, for example, <cit.>,
related to the joint covariance of multivariate Gaussian distributions.
Let _p = _p - _pc_c _cp be the Schur complement of _c in .
Let also = [ _c; _p ] be an arbitrary vector,
and be the linear operator defined by = _p - _pc_c _c,
i.e. the residual part of _p not predicted by _c.
Then, ^T = _c^T _c _c +
()^T _p.
The following may be obtained by performing a block LDU decomposition:
=
[ _c _cp; _pc _p ] =
[ _c 0; _pc_c _p ][ _c 0; 0 _p ][ _c _c _cp; 0 _p ]
which we may then use to decompose :
=
[ _c _cp; _pc _p ]^-1 =
[ _c -_c _cp; 0 _p ][ _c 0; 0 _p ][ _c 0; -_pc_c _p ]
The desired result then follows by substitution into ^T:
^T =
[ _c _p ][ _c -_c _cp; 0 _p ][ _c^-1 0; 0 _p^-1 ][ _c 0; -_pc_c _p ][ _c; _p ]
=
_c^T _c _c +
(_p - _pc_c _c)^T _p^-1 (_p - _pc_c _c)
Recall from section 3.6 that η may be expressed as a trace:
η = [^-1^T^-1^T]
where is the derivative of the loss with respect to the outputs (i.e. layer pre-activations) of the linear transform.
We can use lemma <ref> to write the increase in natural expansion score Δη as
Δη = [^-1^T^-1^T]
- [^-1_c_c^-1_c ^T]
= [^-1 ()^T_p^-1() ^T]
where we can take inside the expectations by linearity.
It is computationally convenient for us to be able to have an expression in terms of residual gradients instead of residual activations, so we note the following:
()^T = _r _p^T
where _r = - _c^T_c^-1_c is the residual gradient.
()^T = (_p - _p _c^T_c^-1_c)^T
= _p^T - _c^T_c^-1_c _p^T
= ( - _c_c^-1_c) _p^T
= _r _p^T
Finally, we establish the following relationship between _p^-1 and _p^-1:
_p^-1 - _p^-1= (_p - _pc_c _cp)^-1 - _p^-1≽ 0.
The matrix inverse _p^-1 can be expanded as the following power series
_p^-1 = (_p - _pc_c _cp)^-1 =
∑_n=0^∞_p (_pc_c _cp_p)^n
We observe that this is a sum of positive semi-definite matrices, and truncate the series at n=0 and rearrange:
_p^-1 - _p = ∑_n=1^∞_p (_pc_c _cp_p)^n ≽ 0
We may now prove theorem 2 from section <ref>.
Δη' is a lower bound on the increase in natural expansion score Δη due to the addition of some proposed neurons p:
Δη≥Δη' = [^-1_r _p^T_p^-1_p _r^T]
Substituting lemma <ref> into corollary <ref> we have
Δη = [^-1_r ^T_p^-1_r^T].
The difference between Δη and Δη' is given by
Δη - Δη' = [^-1_r ^T (_p^-1 - _p^-1) _r^T].
This is the squared norm of _r ^T as a vector according to the Kronecker product
^-1⊗ (_p^-1 - _p^-1).
The first factor is positive semi-definite by assumption, the second by lemma <ref>,
and the Kronecker product of positive semi-definite matrices is positive semi-definite.
Therefore Δη - Δη' ≥ 0 and so Δη≥Δη'.
The significance of this lower bound on Δη is that _r and ^-1 may be computed once,
and then used to optimize very many proposals with different activations _p.
That is, performing N steps of gradient descent to optimize proposed neurons p scales linearly in the evaluation cost of _p and _p^-1.
These linear costs are unaffected by the number of neurons currently in the layer being added to,
and unaffected by the total number of layers in the network.
§ THE CONSEQUENCES OF NON-FISHER CURVATURE FOR TOTAL NEURONS ADDED
In section 3.5 we discussed the total number of neurons added during training, and in particular the extent to which we could provide bounds on this.
As noted there, in the case where the Fisher is constant over training and exactly equal to the hessian,
the dynamics of training are very simple.
The loss L has its global minimum at the point reached by a step of exactly ^-1,
and it can be seen by integration that the reduction in loss due to such a step is exactly Δ L = 1/2^T ^-1 = 1/2η.
The stopping criterion α corresponds to the requirement that parameter expansions should enable a further reduction in loss of at least 1/2α.
Since η≤λ is bounded by λ, the maximum possible reduction in loss is Δ L_max = 1/2λ.
If we pessimistically assume that every parameter expansion enables the minimal loss reduction of only 1/2α,
then the total number of added neurons N_T is still bounded by N_T < λ/α.
The case where the true hessian of the loss is some constant multiple of the Fisher = κ which is itself constant,
is almost as simple.
The parameters evolve along the same trajectory, only they move a factor of κ faster than they would if =.
This also results in a rescaling η = κη_B of natural expansion scores relative to the baseline value η_B in the case where was accurate.
While this has no effect on the behaviour of the expansion threshold τ,
the inflated η values mean that the effective value of α is reduced by a factor of κ
and so the total number of added neurons N_T is now only bounded by N_T < κλ/α.
We will now try to describe the effect of more general failures of to represent the true curvature .
Local expansion behaviour, i.e. without further parameter optimization, is bounded by lemma <ref> of appendix <ref>.
Assuming the baseline case of H = F, we may substitute λ = 2Δ L_max.
If we assume small step sizes, the loss evolves as L̇ = -η, i.e. the rate of loss reduction is given by the natural expansion score by definition,
regardless of H.
If at all times t during training the rate of reduction of the expansion score is lower than in the baseline scenario, i.e. -η̇(t) < -η̇_B(t),
then η will at all times be greater than expected.
Since the rate of loss reduction is given by η, L will decrease faster than expected and the remaining maximum possible loss reduction Δ L_max will be at all times less than expected.
It can be seen from lemma <ref> that discrepancies in these directions relative to baseline will result in fewer additions being made.
We now only need to establish conditions under which the actual rate of reduction in η is lower than the expected rate.
The rate of change during optimization (indicated by an overdot) of the various components of η can be described as follows:
θ̇ = -F^-1 g
ġ = Hθ̇ = -HF^-1 g
ġ^T F^-1 g = -g^T F^-1 H F^-1 g
η̇ = ġ^T F^-1 g + g^T F^-1 ġ + g^T (d/dt F^-1) g
= -g^T F^-1 (2H + Ḟ) F^-1 g
Since in the base case H_B = F and Ḟ_B = 0, we have that
if H + 1/2 Ḟ ≼ F
then -η̇ ≤ -η̇_B.
Putting the above results together, we have that if at all times during training H + 1/2 Ḟ ≼ F,
then the bound on total additions N_T < λ/α should hold.
Incorporating the previous result regarding H = κF,
it also appears that if at all times H + 1/2 Ḟ ≼ κF, then N_T < κλ/α.
Assuming F positive definite and the loss surface smooth (i.e. H and Ḟ finite),
then there will exist some finite κ for which the condition holds and so N_T will be bounded.
§ HYPERPARAMETERS AND IMPLEMENTATION DETAILS
All experiments were run on a single Nvidia A100 or V100 GPU, using no longer than one day each.
Our implementation uses the JAX <cit.> autodifferentiation and Flax <cit.> neural network libraries.
The full source code used to run the experiments is provided in the supplementary material, and will be made publicly available on publication of this work.
In all experiments we optimize our parameters via natural gradient descent with a learning rate of 0.1 and Tikhonov damping of magnitude 0.1.
In the image classification experiments we use batches of size 1024 and a weight decay of rate 0.001.
We initialize our dense layers with the default initialization of Flax (LeCun Normal) <cit.>,
and use a unit normal initialization for the parameters of our rational functions.
For the visualization experiments we use τ=2, for the image classification experiments we use τ=1.007 and τ=1.03
for the whole dataset and variable subset experiments respectively.
Larger thresholds τ result in longer training times but more conservative network sizes and higher accuracy of η estimates, due to F being a closer approximation to the curvature near convergence on the existing parameters.
Any extra costs are negligible for the visualization experiments, so we use the intuitive value of 2,
but we choose τ values for the image classification experiments in light of this natural trade-off.
We use α=0.0025 for all experiments apart from the whole dataset image classification, for which we use α=0.25.
Here the latter choice compensates for larger noise in Δη' introduced by use of a validation batch, as will be discussed shortly.
We adjust the expansion score increases for layer additions by a constant factor of 2 in the visualization experiments and 60 in the image classification experiments.
These values are selected to be within an order of magnitude of the actual layer sizes expected in classification of a toy dataset versus images, and so of the number of new neurons a new layer represents.
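For convenience, the settings listed in this appendix can be gathered into a single configuration; the sketch below merely restates the values given above (the grouping and key names are our own).

# Hyperparameters as stated in this appendix; key names are illustrative.
CONFIG = {
    "optimizer": {"method": "natural gradient descent",
                  "learning_rate": 0.1,
                  "tikhonov_damping": 0.1},
    "visualization": {"tau": 2.0, "alpha": 0.0025, "layer_score_factor": 2},
    "image_classification": {"batch_size": 1024,
                             "weight_decay": 0.001,
                             "tau_whole_dataset": 1.007,
                             "tau_variable_subset": 1.03,
                             "alpha_whole_dataset": 0.25,
                             "alpha_variable_subset": 0.0025,
                             "layer_score_factor": 60},
    "initialization": {"dense_layers": "LeCun Normal (Flax default)",
                       "rational_functions": "unit normal"},
}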
We calculate the natural gradient via the conjugate gradient method with a maximum iteration count of 100 when optimizing the existing parameters.
When optimizing the initializations of proposed neurons or layers we use the Kronecker factored approximation of the Fisher matrix for the relevant layer
based on derivatives of the predictions of the network as in <cit.>.
We compute Δη' based on this and normalize it with respect to the output gradient magnitudes of the particular task.
When comparing Δη' / η_c to τ we use the η_c value given by this Kronecker factored approximation for the layer in question.
When considering adding layers, we ensure new layers are invertible by adding a regularization term of 0.01 (ln det W)^2 when optimizing the initialization of their linear transform W,
and by setting the minimal singular values of W to be at least 0.001 times its average singular value before adding the layer to the network.
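The singular-value step described above can be sketched as follows (NumPy here for clarity, although the experiments themselves are implemented in JAX); the function name is ours and the relative floor of 0.001 follows the text.

import numpy as np

def enforce_min_singular_values(W, rel_floor=1e-3):
    # Clamp the singular values of a proposed layer's linear transform so that the
    # smallest is at least rel_floor times the average, keeping the map invertible.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_clamped = np.maximum(s, rel_floor * s.mean())
    return (U * s_clamped) @ Vt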
In our visualization experiments we do not use batching, so we consider adding depth and width every 30 steps,
and add at most one layer per 90 steps.
In the image classification experiments we use batching and so consider adding width and depth every 10 epochs,
adding at most one layer each time.
We use the same scheme for initializing proposed new neurons or layers as for initializing the starting network.
In our whole dataset image classification experiment we then optimize proposal initializations to maximize Δη' via 300 steps of vanilla gradient descent
on a fixed batch of 1024 images.
We consider 10000 neuron proposals and 100 layer proposals per location, and use a learning rate of 0.3,
reducing this by a factor of 3 as necessary to maintain monotonic improvement in Δη' for each proposal.
We take the best proposal on this batch of size 1024 for each depth and width addition location,
and reevaluate its Δη' on a fixed validation batch of size 1024 when deciding whether and where to add.
The variable degree of overfitting of the best proposal results in some noise in Δη' at each location which we compensate for by choosing a relatively large α.
For our other experiments we optimize proposal initializations using 3000 steps of the Metropolis Adjusted Langevin Algorithm (MALA) <cit.>,
using a unit gaussian prior on initializations during these steps.
We use a temperature T of 10 and an initial step size of 0.3, and adjust by a factor of 3 every 10 steps if necessary to maintain an acceptance rate of around 0.6.
We consider 100 width proposals and 100 layer proposals for each location,
and obtain 100 final MALA samples i for each location width could be added and each location depth could be added.
We then construct a categorical distribution over each set of 100 samples via softmax(1/T Δη_i'),
and use the corresponding expectation of Δη' when deciding when and where to add capacity and whether it should be depth or width.
We draw initializations for new capacity from this categorical distribution,
except in the initial least squares regression experiment, where we use argmax_i Δη_i' over the 100 samples i to make figure 2 more intuitive.
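A minimal sketch of the proposal-initialization search is given below: one MALA step, followed by the softmax-weighted expectation over the final samples. The target log-density is assumed to be Δη'/T plus a unit-Gaussian log-prior, as described above; log_prob and grad_log_prob are hypothetical callables supplied by the caller, and the step-size adaptation by factors of 3 from the text is not repeated here.

import numpy as np

def mala_step(theta, log_prob, grad_log_prob, step_size, rng):
    # One Metropolis-Adjusted Langevin step targeting exp(log_prob).
    noise = rng.standard_normal(theta.shape)
    prop = theta + 0.5 * step_size**2 * grad_log_prob(theta) + step_size * noise

    def log_q(x_to, x_from):
        # Asymmetric Gaussian proposal density, needed for the Metropolis correction.
        mu = x_from + 0.5 * step_size**2 * grad_log_prob(x_from)
        return -np.sum((x_to - mu) ** 2) / (2 * step_size**2)

    log_alpha = (log_prob(prop) + log_q(theta, prop)
                 - log_prob(theta) - log_q(prop, theta))
    accepted = np.log(rng.uniform()) < log_alpha
    return (prop if accepted else theta), accepted

def expected_score(delta_etas, temperature=10.0):
    # Softmax-weighted expectation of Delta eta' over the retained samples.
    d = np.asarray(delta_etas)
    w = np.exp((d - d.max()) / temperature)
    w /= w.sum()
    return float(w @ d)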
|
http://arxiv.org/abs/2307.06090v1 | 20230712112740 | Can Large Language Models Aid in Annotating Speech Emotional Data? Uncovering New Frontiers | [
"Siddique Latif",
"Muhammad Usama",
"Mohammad Ibrahim Malik",
"Björn W. Schuller"
] | cs.SD | [
"cs.SD",
"eess.AS"
] |
Despite recent advancements in speech emotion recognition (SER) models, state-of-the-art deep learning (DL) approaches face the challenge of the limited availability of annotated data. Large language models (LLMs) have revolutionised our understanding of natural language, introducing emergent properties that broaden comprehension in language, speech, and vision. This paper examines the potential of LLMs to annotate abundant speech data, aiming to enhance the state-of-the-art in SER. We evaluate this capability across various settings using publicly available speech emotion classification datasets. Leveraging ChatGPT, we experimentally demonstrate the promising role of LLMs in speech emotion data annotation. Our evaluation encompasses single-shot and few-shots scenarios, revealing performance variability in SER. Notably, we achieve improved results through data augmentation, incorporating ChatGPT-annotated samples into existing datasets. Our work uncovers new frontiers in speech emotion classification, highlighting the increasing significance of LLMs in this field moving forward.
Speech emotion recognition, data annotation, data augmentation, large language models
Can Large Language Models Aid in Annotating Speech Emotional Data? Uncovering New Frontiers
Siddique Latif, Muhammad Usama, Mohammad Ibrahim Malik, and
Björn W. Schuller, Fellow, IEEE
Corresponding E-mail: [email protected]
==============================================================================================================================================================
§ INTRODUCTION
The rapid growth in Natural Language Processing (NLP) has led to the development of advanced conversational tools, often called large language models (LLM) <cit.>. These tools are capable of assisting users with various language-related tasks, such as question answering, semantic parsing, proverbs and grammar correction, arithmetic, code completion, general knowledge, reading comprehensions, summarisation, logical inferencing, common sense reasoning, pattern recognition, translation, dialogues, joke explanation, educational content, and language understanding <cit.>. LLMs are trained on an enormous amount of general-purpose data and human-feedback-enabled reinforcement learning. A new field of study called “Foundational Models" has emerged from these LLMs, highlighting the interest of the academic community and computing industry <cit.>. The foundational models have demonstrated the ability to perform tasks for which they were not explicitly trained. This ability, known as emergence, is considered an early spark of artificial general intelligence (AGI) <cit.>. The emergence properties of the foundational models have sparked a wide range of testing of these models for various tasks, such as sentiment analysis, critical thinking skills, low-resource language learning and translation, sarcasm and joke understanding, classification, and other affective computing challenges.
Speech emotion recognition (SER) is a fundamental problem in affective computing. The need for SER has evolved rapidly with the rapid integration of modern technologies in every aspect of our lives. SER systems are designed to understand the wide range of human emotions from the given input data (audio, video, text, or physiological signal) using traditional and modern machine learning (ML) techniques <cit.>. However, the availability of larger annotated data remains a challenging aspect for speech emotion recognition (SER) systems, which prompts the need for further investigation and exploration of new methods.
The use of crowd-sourced and expert intelligence for data annotation is a common practice. The annotated data serves as the ground truth for ML models to learn and generate predictions. This annotation policy is mostly opted in computational social science (sentiment analysis, bot detection, stance detection, emotion classification, etc.), human emotion understanding, and image classification <cit.>. However, these strategies are prone to a variety of biases, ranging from human biases to situational biases <cit.>. These annotation techniques also necessitate a big pool of human annotators, clear and straightforward annotator instructions, and a verification rationale that is not always available or dependable <cit.>. Although there are a few unsupervised techniques for data annotations, these techniques necessitate a high sample size of the data; unfortunately, the generated annotations do not embed the context <cit.>.
Annotating speech emotion data is a doubly challenging process. The annotators listen to a speech recording and assign an annotation to a data sample using the pre-defined criteria. Human emotions are highly context-dependent, and annotating emotions based on a brief recording in a specific controlled situation might restrict the annotations' accuracy. Though the state-of-the-art on human-annotated emotion classification is strong, the generalisability of the learning for unseen data with slightly different circumstances might stymie the SER system's effectiveness. The recent availability of several LLMs (ChatGPT, Google Bard, etc.) has unearthed the possibility of replacing or assisting human annotators. LLMs are trained on enormous text corpora, allowing them to learn and grasp complicated language patterns. Their emergence property <cit.> makes them well-suited for data annotations and various studies (e. g., <cit.>) explored LLMs for annotations of various natural language processing (NLP) tasks. However, none of the studies explores them to annotate speech emotion data based on the transcripts.
In this paper, we present an evaluation of the effectiveness of large language models (LLMs)
in annotating speech data for SER. We performed a series of experiments to show the effectiveness of ChatGPT for data annotation. However, we observed that annotations solely based on text lacked generalisation to speech emotion data due to the absence of audio context. To address this limitation, we propose a novel pipeline that incorporates audio features such as average energy, pitch, and gender information to provide essential audio context for accurate sample annotation. Furthermore, we introduce a method for encoding speech into a fixed-length discrete feature representation using a Vector Quantised Variational Autoencoder (VQ-VAE) <cit.>, which serves as the audio context in the annotation prompt. To the best of our knowledge, this is the first endeavour to leverage LLMs for annotating speech emotion data, specifically for classification purposes, and evaluating their performance. We conduct a comparative analysis between LLM-based data annotations and human data annotations using publicly available datasets, including IEMOCAP and MSP-IMPROV.
In the following section, we provide a brief literature review on the use of LLMs for data annotation. We highlight the gap between conventional annotations and annotations made with LLMs. Section III covers the methodology used in this study, Section IV presents the initial results and compares the performance of various LLMs for speech emotion data annotation, Section V provides a detailed discussion of the results and limitations, and Section VI concludes the paper with the potential to extend this work.
§ RELATED WORK
This section provides an overview of the research on leveraging fundamental models such as LLMs for data annotation <cit.>. Data annotations are critical for developing ML models capable of uncovering complex patterns in large datasets and pushing the
state-of-the-art
in a particular domain. Human expert annotators, bulk annotations, semi-supervised annotations, and crowdsourced annotations are all widely used approaches in practice <cit.>. These strategies have their pros and cons. Human annotators, for example, can provide high-quality data annotations but are susceptible to challenges such as fairness, bias, subjectivity, high cost and time, label drifting, annotation fatigue and inconsistency, dealing with data ambiguity, and scalability. Bulk annotations are a faster and less expensive technique to create data annotations, but they might result in lower-quality annotations. Semi-supervised annotations combine the benefits of human-expert annotations with bulk annotations for data annotation, but they are complex to implement and have generalisability and robustness difficulties. Although crowdsourcing human intelligence to annotate large datasets is the quickest and most cost-effective option, it can create lower-quality annotations and is more challenging to manage the quality of the annotations.
Recently, a few studies have investigated the efficacy of LLMs (i. e., ChatGPT) for data annotations. The goal of these experiments was to explore the potential of ChatGPT for data annotation and to find out whether ChatGPT can achieve full emergence in downstream tasks such as classification. Zhu et al. <cit.> tested the ability of ChatGPT to reproduce the human-generated annotations for five seminal computational social science datasets. The datasets include stance detection (two datasets), hate speech detection, sentiment analysis, and bot detection. Their results indicate that ChatGPT is capable of annotating the data, but its performance varies depending on the nature of the tasks, the version of ChatGPT, and the prompts. The average re-annotation performance is 60.9% across all five datasets. For the sentiment analysis task, the accuracy of ChatGPT re-annotating the tweets is reported at 64.9%, and for the hate speech task, the ChatGPT performance has gone down to 57.1%. The authors also provided a prompt template that was used for re-annotating the data.
Fact-checking is a well-known way to deal with the misinformation epidemic in computational social science. Hose et al. <cit.> evaluated the ability of LLMs, specifically ChatGPT, to assist fact-checkers in expediting misinformation detection. They used ChatGPT as a zero-shot classifier to re-annotate 12,784 human-annotated (“true claim", “false claim") fact-checked statements. ChatGPT was able to correctly re-annotate 72.0% of the statements. The study further suggests that ChatGPT performs well on recent fact-checked statements with “true claim" annotations.
Despite the reasonable performance of ChatGPT on fact-checking, it is hard to suggest that it will replace human fact-checkers anytime soon. Yang et al. <cit.> explored the rating of news outlet credibility by formulating the problem as a binary re-annotation task for ChatGPT. ChatGPT achieved a reasonable performance in re-annotating 7,523 domains with a Spearman correlation coefficient of ρ = 0.54. Tornberg <cit.> also used ChatGPT-4 as a zero-shot classifier for re-annotating 500 political tweets. He found that ChatGPT-4 outperformed experts and crowd annotators in terms of accuracy, reliability, and bias. Gilardi et al. <cit.> reported that ChatGPT used as a zero-shot classifier, outperformed the crowd-works-based text annotations for five text-annotation tasks around content moderation. We have also observed studies using LLMs (ChatGPT) for annotating/re-annotating data for various computational social science tasks such as election opinion mining tasks <cit.>, intent classification <cit.>, genre identification <cit.>, stance detection <cit.>, and sentiment analysis <cit.>. Several other prominent works that evaluate the application of LLMs in the annotation of computational social science datasets for various applications include <cit.>.
Amin et al. <cit.> evaluated the capabilities of ChatGPT in three famous NLP classification tasks in affective computing: personality recognition, suicide tendency prediction, and sentiment analysis. Their results indicated that ChatGPT shows far better performance (in the presence of the noisy data) than Word2Vec models <cit.>; ChatGPT further produces comparable performance with Bag-of-Words (BoW) and Word2Vec models (without noisy data) and was outperformed by a RoBERTa model <cit.> trained for a specific affective computing task. ChatGPT scored an unweighted average recall of 85.5% on the sentiment analysis, outperforming BoW and Word2Vec models by nearly 20.0%. RoBERTa also scored an unweighted average recall of 85.0%
on this task.
For the suicide tendency prediction task, ChatGPT's performance was the same as Word2Vec and BoW, with all three models achieving an unweighted average recall of nearly 91.0%. RoBERTa outperformed ChatGPT on this task, achieving an unweighted average recall of 97.4%. For the personality recognition task, RoBERTa performed best, scoring an unweighted average recall of 62.3%. ChatGPT performed the worst on this task, getting an unweighted average recall of 54.0%.
Interestingly, Word2Vec and BoW models also performed marginally well when compared to ChatGPT for this task.
Wang et al. <cit.> argued that GPT-3 can be a low-cost solution for the data annotations for downstream natural language understanding and generation tasks. This research evaluated the efficacy of augmenting human-annotated data with GPT-3 annotated data for improving the performance (language understanding and generation) in a constrained annotation budget. They tested their method on various language understanding and generation tasks, ranging from sentiment analysis, question answering, summarisation, text retrieval to textual entailment. They found that GPT-3 based annotations policy saved 50.0% to 96.0% cost in annotation tasks. However, they also noted that GPT-3 is not yet as reliable as human annotators in annotating high-stakes sensitive cases. More details on the evaluation of the comparison of ChatGPT with human experts on various NLP tasks are compared and evaluated in <cit.>. Huang et al. <cit.> explored the ability of ChatGPT to reproduce annotations and their corresponding natural language explanation. Their results indicate that lay people agreed with the results more when they were provided with the ChatGPT-generated natural language explanation of the annotations than just the considered post itself along with the annotation. ChatGPT agreed with the human-annotated data points 80.0% of the time.
In contrast to the aforementioned studies, our research explores the untapped potential of LLMs in annotating emotions in speech data. We present a novel approach that incorporates audio context into LLMs to improve the precision of annotations. To our knowledge, no prior research has investigated the utilisation of LLMs for annotating speech emotion data.
§ METHODOLOGY
In our exploration of emotional data annotation, we conduct a series of experiments. Firstly, we annotate samples using only text, and then we incorporate audio features and gender information alongside textual data for improved annotation. To incorporate audio context, we utilise the average energy and pitch of each utterance and pass it to ChatGPT. Additionally, we propose the use of VQ-VAE to generate a 64-dimensional discrete representation of audio, which is also provided to ChatGPT as the audio context. For speech-emotion classification, we train a bi-directional Long-Short Term Memory (BLSTM)-based classifier. The following section provides further details on our proposed method.
§.§ VQ-VAE for Speech Code Generation
We propose to use a Vector-Quantised Variational Autoencoder (VQ-VAE) <cit.> to learn a discrete representation from the speech data. Unlike traditional VAEs where the discrete space is continuous, VQ-VAEs express the latent space as a set of discrete latent codes and the prior is learnt rather than being fixed. As illustrated in Figure <ref>. the model is comprised of three main parts: the encoder, the vector quantiser, and the decoder.
The encoder takes in the input in the form of Mel-spectrograms and passes it through a series of convolutional layers having a shape of (n,h,w,d) where n is the batch size, h is the height, w is the width and d represents the total number of filters after convolutions. Let us denote the output from the encoder as z_e. The vector quantiser component contains an embedding space with k total vectors each with dimension d. The main goal of this component is to output a series of embedding vectors that we call z_q. To accomplish this, we first reshape z_e in the form of (n*h*w, d) and calculate the distance for each of these vectors with the vectors in the embedding dictionary. For each of the n*h*w vectors, we find the closest of the k vectors from the embedding space and index the closest vector from the embedding space for each n*h*w vector. The discrete indices of each of the vectors in the embedding space are called codes, and we get a unique series of codes for each input to the model. The selected vectors are then reshaped back to match the shape of z_e. Finally, the reshaped vector embeddings are passed through a series of transpose convolutions to reconstruct the original input Mel-spectrogram. One problem with this approach is that the process of selecting vectors is not differentiable. To tackle this problem, the authors
simply copy the gradients from z_q to z_e.
The total loss is composed of three loss elements: the reconstruction loss, the code book loss, and the commitment loss. The reconstruction loss is responsible for optimising the encoder and decoder and is represented by:
Reconstruction Loss = -log( p(x|z_q) ).
We use a code book loss which forces the vector embeddings to move closer to the encoder output z_e.
Code Book Loss = ||sg[z_e(x)] - e||^2 ,
where sg is the stop gradient operator, this essentially freezes all gradient flows. e are the vector embeddings and x is the input to the encoder. And finally, for making sure that the encoder commits to an embedding we add a commitment loss.
Commitment Loss = β ||z_e(x) - sg[e]||^2,
here β is a hyperparameter that controls the weight we want to assign to the commitment loss.
Overall, we train the VQ-VAE model to represent the audio representation in the form of a discrete list of integers or “codes". These audio representations can be used in addition to the transcriptions and fed to ChatGPT for annotation. In the following section, we will delve into the details of the annotation procedure.
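To make the quantisation and gradient-copying steps concrete, the following is a minimal PyTorch-style sketch of the vector-quantisation bottleneck described above. The paper text does not prescribe a framework, the encoder and decoder are omitted, shapes follow the (n, h, w, d) convention used above, and the default β of 0.25 is the common choice rather than a value stated in the text.

import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    # z_e: encoder output of shape (n, h, w, d); codebook: (k, d) embedding vectors.
    n, h, w, d = z_e.shape
    flat = z_e.reshape(-1, d)                                   # (n*h*w, d)
    dist = (flat.pow(2).sum(1, keepdim=True)                    # squared distances to all
            - 2 * flat @ codebook.t()                           # k codebook vectors
            + codebook.pow(2).sum(1))
    codes = dist.argmin(dim=1)                                  # discrete indices ("codes")
    z_q = codebook[codes].reshape(n, h, w, d)
    codebook_loss = F.mse_loss(z_q, z_e.detach())               # ||sg[z_e] - e||^2
    commitment_loss = beta * F.mse_loss(z_e, z_q.detach())      # beta * ||z_e - sg[e]||^2
    z_q = z_e + (z_q - z_e).detach()                            # copy gradients from z_q to z_e
    return z_q, codes.reshape(n, h, w), codebook_loss, commitment_loss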
§.§ Emotion Label Annotation using LLMs
We evaluated the data annotation ability of ChatGPT with different experiments. We start our experiments by annotating the training data of IEMOCAP by passing the textual transcripts to ChatGPT and annotating the data both in zero-shot and few-shot settings. For a few shots, we randomly selected 10 samples from the training data and passed them to ChatGPT as context. We trained the classifier using the training samples annotated with ChatGPT and unweighted average recall (UAR) is computed. We repeat this procedure of annotation by passing the audio features along with the textual information. First of all, we
use the average pitch and energy of a given utterance and re-annotate the data in both a zero-shot and a few-shot setting, measuring the classification UAR with a BLSTM-based classifier. Since female voices usually have higher pitch and energy, we also annotate the data while providing the gender information. Finally, we propose to use an audio representation from the VQ-VAE (Section <ref>) and pass it to ChatGPT as audio context. We used the OpenAI API with the "ChatGPT pro" version to annotate the data. In our approach, we carefully designed and curated multiple prompts for annotating the data, leveraging ChatGPT for the annotation process. We trained the classifier on the annotated dataset and computed the UAR, treating it as a benchmark for evaluating the classification performance. To improve upon this benchmark, we conducted additional experiments, exploring various prompts to enhance the classification results beyond the established performance level.
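As a concrete illustration, a single annotation request could look like the sketch below, written against the chat-completions interface of the openai Python package available at the time of writing. The prompt wording, the model name, and the feature formatting are illustrative assumptions rather than the exact prompts used in these experiments.

import openai  # assumes an API key is configured in the environment

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def annotate_utterance(transcript, avg_energy, avg_pitch, gender, vq_codes, few_shot_block=""):
    prompt = (
        "Label the speaker's emotion as one of: angry, happy, neutral, sad.\n"
        f"{few_shot_block}"
        f"Transcript: \"{transcript}\"\n"
        f"Average energy: {avg_energy:.3f}, average pitch: {avg_pitch:.1f} Hz, gender: {gender}\n"
        f"Discrete audio codes (VQ-VAE): {vq_codes}\n"
        "Answer with a single word."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",            # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response["choices"][0]["message"]["content"].strip().lower()
    return label if label in EMOTIONS else "neutral"   # simple fallback for malformed replies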
§.§ Speech Emotion Classifier
In this work, we implement convolutional neural network (CNN)-BLSTM-based classifiers due to their popularity in SER research <cit.>. It has been found that the performance of BLSTM can be improved by feeding it with a good emotional representation <cit.>. Therefore, we use CNN as emotional feature extractor from the given input data <cit.>. A CNN layer acts like data-driven filter banks and can model emotionally salient features. We pass these emotional features to the BLSTM layer to learn contextual information. Emotions in speech are in the temporal dimension, therefore, the BLSTM layer helps model these temporal relationships <cit.>. We pass the outputs of BLSTM to an attention layer to aggregate the emotional salient attributes distributed over the given utterance. For a given output sequence h_i, utterance level salient attributes are aggregated as follows:
R_attentive=∑_iα_ih_i,
where α_i represents the attention weights that can be computed as follows:
α_i=exp(W^T h_i)/∑_jexp(W^T h_j),
where W is a trainable parameter. The attentive representation R_attentive computed by the attention layer is passed to the fully connected layer for emotion classification. Overall, our classifier is jointly empowered by the CNN layers to capture an abstract representation, the BLSTM layer for context capturing, the attention layer for emotional salient attributes aggregation, and the fully connected layer emotion classification.
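A direct rendering of the attention layer above is shown below; PyTorch is used purely for illustration, as the text does not prescribe a framework.

import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    # alpha_i = softmax_i(W^T h_i);  R_attentive = sum_i alpha_i h_i
    def __init__(self, hidden_dim):
        super().__init__()
        self.w = nn.Linear(hidden_dim, 1, bias=False)   # the trainable parameter W

    def forward(self, h):                 # h: (batch, time, hidden_dim) BLSTM outputs
        scores = self.w(h)                # (batch, time, 1)
        alpha = torch.softmax(scores, dim=1)
        return (alpha * h).sum(dim=1)     # (batch, hidden_dim)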
§ EXPERIMENTAL SETUP
§.§ Datasets
To evaluate the effectiveness of annotations by ChatGPT, we use three datasets: IEMOCAP, MSP-IMPROV, and MELD which are commonly used for speech emotion classification research <cit.>. Both, the IEMOCAP and the MSP-IMPROV datasets are collected by simulating naturalistic dyadic interactions among professional actors and have similar labelling schemes. MELD contains utterances from the Friends TV series.
§.§.§ IEMOCAP
The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is a multimodal database that contains 12 hours of recorded data <cit.>. The recordings were captured during dyadic interactions between five male and five female speakers. The Dyadic interactions enabled the speakers to converse in unrehearsed emotions as opposed to reading from a text. The interactions are almost five minutes long and are segregated into smaller utterances based on sentences, where each utterance is then assigned a label according to the emotion. Overall, the dataset contains nine different emotions. To be consistent with previous studies, we use four emotions including sad (1084), happy (1636), angry (1103), and neutral (1708).
§.§.§ MSP-IMPROV
This corpus is a multimodal emotional database recorded from 12 actors performing dyadic interactions <cit.>, similar to IEMOCAP <cit.>. The utterances in MSP-IMPROV are grouped into six sessions, and each session has recordings of one male and one female actor. The scenarios were carefully designed to promote naturalness while maintaining control over lexical and emotional contents. The emotional labels were collected through perceptual evaluations using crowdsourcing <cit.>. The utterances in this corpus are annotated in four categorical emotions: angry, happy, neutral, and sad. To be consistent with previous studies <cit.>, we use all utterances with four emotions: anger (792), sad (885), neutral (3477), and happy (2644).
§.§.§ MELD
Multimodal EmotionLines Dataset <cit.> or MELD contains over 1400 dialogues and 13000 utterances and multiple speakers from the popular TV series Friends. The utterances have been labelled from a total of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise and Fear. Furthermore, MELD also contains sentiment annotations for each utterance. To stay consistent with the other datasets we choose four emotions including sadness (1002 samples), neutral (6436 samples), joy and anger (1607 samples). With this configuration, we get a total of 11353 utterances from the dataset.
§.§ Speech Features
For utterances across all datasets, we use a consistent sampling rate of 16 kHz. For extracting the audio features we then convert the audio into Mel spectrograms. The Mel-spectrograms are computed with a short-time Fourier transform of size 1024, a hop size of 256, and a window size of 1024. We specify a total of 80 Mel-bands for the output and cutoff frequency of 8 kHz. We set a cutoff length of 256 for each Mel spectrogram to have a final shape of 80x256, where smaller samples are zero-padded. Finally, the Mel spectrograms are normalised in the range of [-1,1].
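A sketch of this feature pipeline is given below, assuming librosa (the toolkit is not named in the text); the dB conversion before min–max scaling is an assumption beyond the stated parameters.

import numpy as np
import librosa

def mel_features(path, sr=16000, max_frames=256):
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024, hop_length=256,
                                         win_length=1024, n_mels=80, fmax=8000)
    mel = librosa.power_to_db(mel, ref=np.max)                         # log scaling (assumed)
    mel = 2 * (mel - mel.min()) / (mel.max() - mel.min() + 1e-8) - 1   # normalise to [-1, 1]
    if mel.shape[1] < max_frames:                                       # zero-pad short utterances
        mel = np.pad(mel, ((0, 0), (0, max_frames - mel.shape[1])))
    return mel[:, :max_frames]                                          # shape (80, 256)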
§.§ Hyperparameters
The VQ-VAE was trained using the following parameters: We chose a batch size of 256 and trained for a total of 1000 epochs with a learning rate of 1e^-4. The convolution layers each had a stride and kernel size of 2 and 3, respectively. A total of 8192 token embeddings were selected, where each had a dimensionality of 512. With our particular configuration, we got a total of 64 codes for each given utterance. We pass these codes to ChatGPT along with textual data for annotation. Based on these annotations, we trained over the classifier.
Our classifier consists of convolutional layers and a Bidirectional LSTM (BLSTM)-based classification network. To generate high-level abstract feature representations, we employ two CNN layers. In line with previous studies <cit.>, we utilise a larger kernel size for the first convolutional layer and a smaller kernel size for the second layer. The CNN layers learn feature representations, which are then passed to the BLSTM layer with 128 LSTM units for contextual representation learning. Following the BLSTM layer, an attention layer is applied to aggregate the emotional content spread across different parts of the given utterance. The resulting attentive features are then fed into a dense layer with 128 hidden units to extract emotionally discriminative features for a softmax layer. The softmax layer employs the cross-entropy loss function to calculate posterior class probabilities, enabling the network to learn distinct features and perform accurate emotion classification.
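A PyTorch sketch of this architecture is given below, reusing the AttentivePooling module from the earlier sketch. The convolution channel counts and the concrete "larger" and "smaller" kernel sizes are not fully specified in the text, so the values here are assumptions.

import torch
import torch.nn as nn

class CNNBLSTMClassifier(nn.Module):
    def __init__(self, n_mels=80, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                                      # two CNN feature-extraction layers
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),     # larger kernel (assumed 7)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),    # smaller kernel (assumed 3)
        )
        self.blstm = nn.LSTM(input_size=32 * n_mels, hidden_size=128,
                             batch_first=True, bidirectional=True)
        self.attention = AttentivePooling(2 * 128)      # from the previous sketch
        self.dense = nn.Sequential(nn.Linear(2 * 128, 128), nn.ReLU())
        self.out = nn.Linear(128, n_classes)            # softmax applied via the cross-entropy loss

    def forward(self, mel):                   # mel: (batch, 80, 256)
        x = self.cnn(mel.unsqueeze(1))        # (batch, 32, 80, 256)
        x = x.permute(0, 3, 1, 2).flatten(2)  # (batch, time=256, 32*80)
        h, _ = self.blstm(x)                  # (batch, time, 256)
        return self.out(self.dense(self.attention(h)))   # logits over the four emotions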
In our experiments, we utilise the Adam optimiser with its default parameters. The training of our models starts with a learning rate of 0.0001, and at the end of each epoch, we assess the validation accuracy. If the validation accuracy fails to improve for five consecutive epochs, we decrease the learning rate by half and revert the model to the best-performing previous epoch. This process continues until the learning rate drops below 0.00001. As for the choice of non-linear activation function, we use the rectified linear unit (ReLU) due to its superior performance compared to leaky ReLU and hyperbolic tangent during the validation phase.
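The learning-rate schedule described above can be sketched as follows; train_one_epoch and validate are hypothetical placeholders for the actual training and validation routines, the state handling assumes a PyTorch module, and the epoch cap is our own safeguard.

import copy

def fit_with_lr_halving(model, train_one_epoch, validate,
                        lr=1e-4, min_lr=1e-5, patience=5, max_epochs=500):
    best_acc, best_state, bad_epochs = -1.0, copy.deepcopy(model.state_dict()), 0
    for _ in range(max_epochs):
        if lr < min_lr:                              # stop once the rate drops below 1e-5
            break
        train_one_epoch(model, lr)
        acc = validate(model)
        if acc > best_acc:
            best_acc, best_state, bad_epochs = acc, copy.deepcopy(model.state_dict()), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:               # no improvement for five consecutive epochs
                model.load_state_dict(best_state)    # revert to the best-performing epoch
                lr, bad_epochs = lr / 2, 0           # halve the learning rate
    return model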
§ EXPERIMENTS AND RESULTS
All experiments are conducted in a speaker-independent manner to ensure the generalisability of our findings. Specifically, we adopt an easily reproducible and widely used leave-one-speaker-out cross-validation scheme, as commonly employed in related literature <cit.>. For cross-corpus SER, we follow <cit.> and use IEMOCAP for training and MSP-IMPROV is used for validation and testing. For the experiments, we repeat each experiment ten times and calculate the mean and standard deviation of the results. The performance is presented in terms of the unweighted average recall rate
(UAR), a widely accepted metric in the field that more accurately reflects the classification accuracy across multiple emotion categories
when the data is imbalanced across them.
§.§ Within Corpus Experiments
For the within-corpus experiments, we select the IEMOCAP data and compare the results with the baseline UAR achieved using actual true labels.
We trained the classifier under three different settings: (1) true labels, (2) zero-shot ChatGPT labels, and (3) few-shot ChatGPT labels. In the first experiment, we trained the CNN-BLSTM-based classifier on true labels using the well-known leave-one-speaker-out scheme mentioned above <cit.>. In the second and third experiments, the classifier is trained in the same leave-one-speaker-out scheme, but the samples are annotated using ChatGPT with our proposed approach. We repeat the second and third experiments using text only and text plus audio context. Results are presented in Figure <ref>. Overall, training on data annotated in the few-shot setting achieves improved results compared to the zero-shot scenario.
It is important to note that the emotion classification performance using training data annotated with only text is poor compared to the baseline. Here, baseline results represent when the classifier is trained using the original annotations of IEMOCAP. This observation underscores the insufficiency of textual information alone to provide the necessary context for accurate annotation by ChatGPT. Consequently, additional context becomes essential to enable ChatGPT in effectively annotating the data.
As previously found, for example, happy and angry voice samples often have high energy and pitch compared to a sad and neutral voice <cit.>. Building upon this insight, we incorporated the average energy and pitch values of a given utterance as additional contextual information for ChatGPT during the re-annotation process, both in zero-shot and few-shot settings.
However, the performance improvement was not considerable,
primarily due to the confounding factor of gender, as female voices typically exhibit higher pitch and energy compared to male voices <cit.>. To address this limitation, we extended the experiment by providing gender labels to ChatGPT, resulting in improved classification accuracy as illustrated in <ref>. In addition to average energy, pitch, and gender information, we further proposed the utilisation of audio patterns to provide enhanced audio context for annotation. To achieve this, we employed a VQ-VAE model to encode the given utterance into discrete representations. These representations, along with the textual and other feature inputs, were employed in various experiments for annotation (refer to Figure <ref>). Notably, in the zero-shot scenario, no substantial improvements were observed. However, significant
advancements were achieved by incorporating the discrete codes generated by VQ-VAE, in conjunction with average energy, pitch, and gender information.
§.§ Cross-Corpus Evaluations
In this experiment, we perform a cross-corpus analysis to assess the generalisability of annotations performed using our proposed approach. Here, we trained models on IEMOCAP, and testing is performed on the MSP-IMPROV data. IEMOCAP is the more balanced dataset; therefore, we select it for training, following previous studies <cit.>. We randomly select 30.0 % of the MSP-IMPROV data for parameter tuning and the remaining 70.0 % as testing data.
We report results using the few-shots annotation by ChatGPT as it consistently demonstrated superior performance compared to the zero-shot setting.
We compare our results with different studies in Table <ref>. In <cit.>, the authors use the CNN-LSTM model for cross-corpus evaluation. They show that CNN-LSTM can learn emotional contexts and help achieve improved results for cross-corpus SER. In <cit.>, the authors utilise the representations learnt from unlabelled data and feed it to an attention-based CNN classifier. They show that the classifier's performance can be improved by augmenting the classifier with information from unlabelled data. We compare our results using the CNN-BLSTM-based classifier by using the IEMOCAP annotated by the ChatGPT model. This experiment demonstrates the generalisability of annotations performed by ChatGPT in cross-corpus settings. However, it is worth noting that our results did not surpass those of previous studies. In the subsequent experiment, we aim to showcase the potential for enhancing the performance of SER using data annotations generated by ChatGPT, both within-corpus and cross-corpus settings.
§.§ Augmentating the Training Data
In the previous two experiments, we showed how we can annotate new speech-emotional data using a large language model like ChatGPT. However, the performance does not surpass the UAR achieved using actual labels. In this experiment, we aim to address this limitation by showcasing the potential of improving SER performance through data augmentation using our proposed approach. For this, we can utilise abundantly available audio data by annotating it with our proposed approach. For instance, data from YouTube can be annotated and used to augment the SER system. To validate this concept, we select the MELD dataset, which consists of dialogue samples from the Friends TV series. We employ the few-shot approach, using samples from the IEMOCAP dataset as few-shot examples, and annotate the MELD data with four emotions: happy, anger, neutral, and sad. Results are presented in Figure <ref>, where we compare the results with the CNN-BLSTM classifier using the actual IEMOCAP labels and when data is augmented using the samples with ChatGPT labels.
This analysis provides insights into the effectiveness of data augmentation for enhancing the performance of the SER system.
Furthermore, we provide a comprehensive comparison of our results with previous studies in both within-corpus and cross-corpus settings, as presented in Table <ref>.
In <cit.>, the authors utilise DialogueRNN for speech emotion recognition using IEMOCAP data. Peng et al. <cit.> use an attention-based CNN network for emotion classification. We achieve better results compared to these studies by augmenting the classifier with additional data annotated by ChatGPT. One possible reason can be that these studies did not train the models with augmentation. However, we also compared the results with <cit.>, where the authors use different data augmentation techniques to augment the classifier and achieve improved results. In contrast, we use ChatGPT to annotate the publicly available data and use it for augmentation of the training set. We achieve considerably improved results compared to <cit.>. One possible reason is that we are adding new data to the classifier's training set, whereas the authors in <cit.> employed perturbed versions of the same data, which can potentially lead to overfitting of the system. Similarly, we achieve considerably improved results for cross-corpus settings compared to the previous studies <cit.>, where the authors augmented their classification models with either synthetic data or perturbed samples using audio-based data augmentation techniques like speed perturbation, SpecAugment, and mixup.
Overall, our results showcase the effectiveness of our approach in achieving superior performance compared to previous studies, both in within-corpus and cross-corpus settings. The utilisation of ChatGPT for data annotation and augmentation proves to be a promising strategy for enhancing SER systems.
§.§ Limitations
In this section, we highlight the potential limitations of our work and in general the limitations of LLMs for data annotation. During our experiments, we observed the following limitations:
* We obtained promising results by augmenting the training data with samples annotated using ChatGPT. However, this approach proved ineffective when applied to corpora such as LibriSpeech <cit.>, where the recordings lack emotional variations. Although we attempted to utilise
LibriSpeech data (results are not shown here), the results were not as promising as those achieved with MELD.
* ChatGPT is known to be sensitive to prompt variability, which can lead to ambiguous and erroneous results if even slight changes are made to the prompt content. In order to address this issue, we suggest conducting experiments using different prompts to generate annotations (as presented in Section <ref>). The inclusion of more context in the prompts has been shown to improve the quality of results. However, for SER annotation prompts, this can be particularly challenging due to the significant variability of human emotions within short time frames. This limitation stems from LLMs' reliance on training data.
* ChatGPT has not been trained particularly to annotate speech emotion data. While the emergent nature of ChatGPT has aided with annotation, relying exclusively on ChatGPT annotation is insufficient. Through our research, we have found that incorporating ChatGPT-based annotations alongside the training data leads to enhanced classification performance. Notably, when utilising multi-shot ChatGPT annotations instead of zero-shot annotations, we observe a substantial performance improvement.
* ChatGPT offers a significant cost reduction in data annotation. For instance, in our experiments, we were able to annotate IEMOCAP data examples using ChatGPT for approximately 30 USD, which is significantly lower than the cost of human annotations. However, it is paramount to note that the accuracy of ChatGPT-based annotations is not as good as human annotations because ChatGPT is not specifically trained for annotating speech emotion data. As a result, there is a trade-off between cost and accuracy. Striking the right balance is crucial when utilising ChatGPT for data annotation to avoid potential inaccuracies in classification performance.
Despite the mentioned limitations, we have found ChatGPT to be an invaluable tool for speech-emotion data annotation. We believe that its capabilities will continue to evolve. Currently, generating annotations using ChatGPT and incorporating them to augment human-annotated data has demonstrated improved performance in speech emotion classification. This highlights the potential of ChatGPT as a valuable asset in advancing research in this field.
§ CONCLUSIONS AND OUTLOOK
In this paper, we conducted a comprehensive evaluation of ChatGPT's effectiveness in annotating speech emotion data. To the best of our knowledge, this study is the first of its kind to explore the capabilities of ChatGPT in the domain of speech emotion recognition. The results of our investigation have been encouraging, and we have discovered promising outcomes. Below are the key findings of our study:
* Based on our findings, we observed that text-based emotional annotations do not generalise effectively to speech data. To address this limitation, we introduced a novel approach that harnesses the audio context in annotating speech data, leveraging the capabilities of a large language model. By incorporating the audio context, we successfully enhanced the performance of SER, yielding improved results compared to the text-based approach.
* We observed that the quality of annotations by ChatGPT considerably
improved when using a few-shot approach compared to a zero-shot one. By incorporating a small number of annotated samples, we were able to achieve improved results in our evaluation.
* We introduced an effective technique to utilise large language models (LLMs) to augment the speech emotion recognition (SER) system with the annotated data by ChatGPT. The augmented system yielded improved results compared to the current state-of-the-art SER systems that utilise conventional augmentation techniques.
In our future work, we aim to expand our experimentation by applying our approach to new datasets and diverse contexts. This will allow us to further validate the effectiveness and generalisability of our proposed technique. Additionally, we plan to explore and compare the annotation abilities of different LLMs for speech emotion data, enabling us to gain insights into their respective strengths and weaknesses. We also intend to use LLMs in the training pipeline of the SER system.
10
brown2020language
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal,
A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., “Language
models are few-shot learners,” Advances in neural information
processing systems, vol. 33, pp. 1877–1901, 2020.
bommasani2021opportunities
R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S.
Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., “On the
opportunities and risks of foundation models,” arXiv preprint
arXiv:2108.07258, 2021.
wei2022emergent
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama,
M. Bosma, D. Zhou, D. Metzler et al., “Emergent abilities of large
language models,” arXiv preprint arXiv:2206.07682, 2022.
latif2023transformers
S. Latif, A. Zaidi, H. Cuayahuitl, F. Shamshad, M. Shoukat, and J. Qadir,
“Transformers in speech processing: A survey,” arXiv preprint
arXiv:2303.11607, 2023.
latif2022deep
S. Latif, “Deep representation learning for speech emotion recognition,”
Ph.D. dissertation, University of Southern Queensland, 2022.
cioffi2017computation
C. Cioffi-Revilla and C. Cioffi-Revilla, “Computation and social science,”
Introduction to computational social science: Principles and
applications, pp. 35–102, 2017.
latif2020deep
S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Qadir, and B. W. Schuller, “Deep
representation learning in speech processing: Challenges, recent advances,
and future trends,” arXiv preprint arXiv:2001.00378, 2020.
latif2022ai
S. Latif, H. S. Ali, M. Usama, R. Rana, B. Schuller, and J. Qadir, “Ai-based
emotion recognition: Promise, peril, and prescriptions for prosocial path,”
arXiv preprint arXiv:2211.07290, 2022.
latif2019caveat
S. Latif, A. Qayyum, M. Usama, J. Qadir, A. Zwitter, and M. Shahzad, “Caveat
emptor: the risks of using big data for human development,” Ieee
technology and society magazine, vol. 38, no. 3, pp. 82–90, 2019.
rottger2021two
P. Röttger, B. Vidgen, D. Hovy, and J. B. Pierrehumbert, “Two contrasting
data annotation paradigms for subjective nlp tasks,” arXiv preprint
arXiv:2112.07475, 2021.
liao2019unsupervised
X. Liao and Z. Zhao, “Unsupervised approaches for textual semantic annotation,
a survey,” ACM Computing Surveys (CSUR), vol. 52, no. 4, pp. 1–45,
2019.
burns2022discovering
C. Burns, H. Ye, D. Klein, and J. Steinhardt, “Discovering latent knowledge in
language models without supervision,” arXiv preprint
arXiv:2212.03827, 2022.
zhu2023can
Y. Zhu, P. Zhang, E.-U. Haq, P. Hui, and G. Tyson, “Can chatgpt reproduce
human-generated labels? a study of social computing tasks,” arXiv
preprint arXiv:2304.10145, 2023.
huang2023chatgpt
F. Huang, H. Kwak, and J. An, “Is chatgpt better than human annotators?
potential and limitations of chatgpt in explaining implicit hate speech,”
arXiv preprint arXiv:2302.07736, 2023.
ding2019group
S. Ding and R. Gutierrez-Osuna, “Group latent embedding for vector quantized
variational autoencoder in non-parallel voice conversion.” in
INTERSPEECH, 2019, pp. 724–728.
yang2023harnessing
J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu,
“Harnessing the power of llms in practice: A survey on chatgpt and beyond,”
arXiv preprint arXiv:2304.13712, 2023.
pustejovsky2012natural
J. Pustejovsky and A. Stubbs, Natural Language Annotation for Machine
Learning: A guide to corpus-building for applications. " O'Reilly Media, Inc.", 2012.
hoes2023using
E. Hoes, S. Altay, and J. Bermeo, “Using chatgpt to fight misinformation:
Chatgpt nails 72% of 12,000 verified claims,” 2023.
yang2023large
K.-C. Yang and F. Menczer, “Large language models can rate news outlet
credibility,” arXiv preprint arXiv:2304.00228, 2023.
tornberg2023chatgpt
P. Törnberg, “Chatgpt-4 outperforms experts and crowd workers in
annotating political twitter messages with zero-shot learning,” arXiv
preprint arXiv:2304.06588, 2023.
gilardi2023chatgpt
F. Gilardi, M. Alizadeh, and M. Kubli, “Chatgpt outperforms crowd-workers for
text-annotation tasks,” arXiv preprint arXiv:2303.15056, 2023.
elmas2023opinion
T. Elmas and İ. Gül, “Opinion mining from youtube captions using
chatgpt: A case study of street interviews polling the 2023 turkish
elections,” arXiv preprint arXiv:2304.03434, 2023.
cegin2023chatgpt
J. Cegin, J. Simko, and P. Brusilovsky, “Chatgpt to replace crowdsourcing of
paraphrases for intent classification: Higher diversity and comparable model
robustness,” arXiv preprint arXiv:2305.12947, 2023.
kuzman2023chatgpt
T. Kuzman, I. Mozetic, and N. Ljubešic, “Chatgpt: Beginning of an end of
manual linguistic data annotation? use case of automatic genre
identification,” ArXiv, abs/2303.03953, 2023.
mets2023automated
M. Mets, A. Karjus, I. Ibrus, and M. Schich, “Automated stance detection in
complex topics and small languages: the challenging case of immigration in
polarizing news media,” arXiv preprint arXiv:2305.13047, 2023.
wang2023chatgpt
Z. Wang, Q. Xie, Z. Ding, Y. Feng, and R. Xia, “Is chatgpt a good sentiment
analyzer? a preliminary study,” arXiv preprint arXiv:2304.04339,
2023.
ziems2023can
C. Ziems, W. Held, O. Shaikh, J. Chen, Z. Zhang, and D. Yang, “Can large
language models transform computational social science?” arXiv
preprint arXiv:2305.03514, 2023.
veselovsky2023generating
V. Veselovsky, M. H. Ribeiro, A. Arora, M. Josifoski, A. Anderson, and R. West,
“Generating faithful synthetic data with large language models: A case study
in computational social science,” arXiv preprint arXiv:2305.15041,
2023.
mu2023navigating
Y. Mu, B. P. Wu, W. Thorne, A. Robinson, N. Aletras, C. Scarton, K. Bontcheva,
and X. Song, “Navigating prompt complexity for zero-shot classification: A
study of large language models in computational social science,” arXiv
preprint arXiv:2305.14310, 2023.
rytting2023towards
C. M. Rytting, T. Sorensen, L. Argyle, E. Busby, N. Fulda, J. Gubler, and
D. Wingate, “Towards coding social science datasets with language models,”
arXiv preprint arXiv:2306.02177, 2023.
amin38will
M. M. Amin, E. Cambria, and B. W. Schuller, “Will affective computing emerge
from foundation models and general ai? a first evaluation on chatgpt,”
IEEE Intelligent Systems, vol. 38, p. 2.
mikolov2013efficient
T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word
representations in vector space,” arXiv preprint arXiv:1301.3781,
2013.
liu2019roberta
Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis,
L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert
pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
wang2021want
S. Wang, Y. Liu, Y. Xu, C. Zhu, and M. Zeng, “Want to reduce labeling cost?
gpt-3 can help,” in Findings of the Association for Computational
Linguistics: EMNLP 2021, 2021, pp. 4195–4205.
guo2023close
B. Guo, X. Zhang, Z. Wang, M. Jiang, J. Nie, Y. Ding, J. Yue, and Y. Wu, “How
close is chatgpt to human experts? comparison corpus, evaluation, and
detection,” arXiv preprint arXiv:2301.07597, 2023.
oord2018neural
A. van den Oord, O. Vinyals, and K. Kavukcuoglu, “Neural discrete
representation learning,” 2018.
latif2021survey
S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Qadir, and B. W. Schuller,
“Survey of deep representation learning for speech emotion recognition,”
IEEE Transactions on Affective Computing, 2021.
trigeorgis2016adieu
G. Trigeorgis, F. Ringeval, R. Brueckner, E. Marchi, M. A. Nicolaou,
B. Schuller, and S. Zafeiriou, “Adieu features? end-to-end speech emotion
recognition using a deep convolutional recurrent network,” in 2016
IEEE international conference on acoustics, speech and signal processing
(ICASSP). IEEE, 2016, pp. 5200–5204.
latif2019direct
S. Latif, R. Rana, S. Khalifa, R. Jurdak, and J. Epps, “Direct Modelling of
Speech Emotion from Raw Speech,” in Proc. Interspeech 2019, 2019,
pp. 3920–3924.
qayyum2018quran
A. Qayyum, S. Latif, and J. Qadir, “Quran reciter identification: A deep
learning approach,” in 2018 7th International Conference on Computer
and Communication Engineering (ICCCE).1em plus 0.5em minus
0.4emIEEE, 2018, pp. 492–497.
Lotfian+2016
R. Lotfian and C. Busso, “Retrieving categorical emotions using a
probabilistic framework to define preference learning samples,” in
Interspeech 2016, 2016, pp. 490–494.
kim2016emotion
Y. Kim and E. M. Provost, “Emotion spotting: Discovering regions of evidence
in audio-visual emotion expressions,” in Proceedings of the 18th ACM
International Conference on Multimodal Interaction. ACM, 2016, pp. 92–99.
busso2008iemocap
C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang,
S. Lee, and S. S. Narayanan, “Iemocap: Interactive emotional dyadic motion
capture database,” Language resources and evaluation, vol. 42, no. 4,
p. 335, 2008.
busso2017msp
C. Busso, S. Parthasarathy, A. Burmania, M. AbdelWahab, N. Sadoughi, and E. M.
Provost, “Msp-improv: An acted corpus of dyadic interactions to study
emotion perception,” IEEE Transactions on Affective Computing,
vol. 8, no. 1, pp. 67–80, 2017.
burmania2016increasing
A. Burmania, S. Parthasarathy, and C. Busso, “Increasing the reliability of
crowdsourcing evaluations using online quality assessment,” IEEE
Transactions on Affective Computing, vol. 7, no. 4, pp. 374–388, 2016.
gideon2017progressive
J. Gideon, S. Khorram, Z. Aldeneh, D. Dimitriadis, and E. M. Provost,
“Progressive neural networks for transfer learning in emotion recognition,”
arXiv preprint arXiv:1706.03256, 2017.
poria-etal-2019-meld
S. Poria, D. Hazarika, N. Majumder, G. Naik, E. Cambria, and R. Mihalcea,
“MELD: A multimodal multi-party dataset for emotion recognition in
conversations,” in Proceedings of the 57th Annual Meeting of the
Association for Computational Linguistics.1em plus 0.5em minus
0.4emFlorence, Italy: Association for Computational Linguistics, Jul.
2019, pp. 527–536.
dai2019learning
D. Dai, Z. Wu, R. Li, X. Wu, J. Jia, and H. Meng, “Learning discriminative
features from spectrograms using center loss for speech emotion
recognition,” in ICASSP 2019-2019 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 7405–7409.
gideon2019improving
J. Gideon, M. G. McInnis, and E. M. Provost, “Improving cross-corpus speech
emotion recognition with adversarial discriminative domain generalization
(addog),” IEEE Transactions on Affective Computing, vol. 12, no. 4,
pp. 1055–1068, 2019.
latif2018variational
S. Latif, R. Rana, J. Qadir, and J. Epps, “Variational autoencoders for
learning latent representations of speech emotion: A preliminary study,”
Proc. Interspeech 2018, pp. 3107–3111, 2018.
latif2020multi
S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Epps, and B. W. Schuller,
“Multi-task semi-supervised adversarial autoencoding for speech emotion
recognition,” IEEE Transactions on Affective Computing, 2020.
bao2019cyclegan
F. Bao, M. Neumann, and N. T. Vu, “Cyclegan-based emotion style transfer as
data augmentation for speech emotion recognition.” in INTERSPEECH,
2019, pp. 2828–2832.
latif2022multitask
S. Latif, R. Rana, S. Khalifa, R. Jurdak, and B. W. Schuller, “Multitask
learning from augmented auxiliary data for improving speech emotion
recognition,” IEEE Transactions on Affective Computing, 2022.
malik2023preliminary
I. Malik, S. Latif, R. Jurdak, and B. W. Schuller, “A preliminary study on
augmenting speech emotion recognition using a diffusion model,”
Proceedings of Interspeech, Dublin, Ireland, August, 2023, 2023.
yildirim2004acoustic
S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, Z. Deng, S. Lee, S. Narayanan,
and C. Busso, “An acoustic study of emotions expressed in speech,” in
Eighth International Conference on Spoken Language Processing, 2004.
fraccaro2011experimental
P. J. Fraccaro, B. C. Jones, J. Vukovic, F. G. Smith, C. D. Watkins, D. R.
Feinberg, A. C. Little, and L. M. Debruine, “Experimental evidence that
women speak in a higher voice pitch to men they find attractive,”
Journal of Evolutionary Psychology, vol. 9, no. 1, pp. 57–67, 2011.
neumann2019improving
M. Neumann and N. T. Vu, “Improving speech emotion recognition with
unsupervised representation learning on unlabeled speech,” in ICASSP
2019-2019 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP).1em plus 0.5em minus 0.4emIEEE, 2019, pp.
7390–7394.
sahu2018enhancing
S. Sahu, R. Gupta, and C. Espy-Wilson, “On enhancing speech emotion
recognition using generative adversarial networks,” Proc. Interspeech
2018, pp. 3693–3697, 2018.
majumder2019dialoguernn
N. Majumder, S. Poria, D. Hazarika, R. Mihalcea, A. Gelbukh, and E. Cambria,
“Dialoguernn: An attentive rnn for emotion detection in conversations,” in
Proceedings of the AAAI conference on artificial intelligence,
vol. 33, no. 01, 2019, pp. 6818–6825.
peng2021efficient
Z. Peng, Y. Lu, S. Pan, and Y. Liu, “Efficient speech emotion recognition
using multi-scale cnn and attention,” in ICASSP 2021-2021 IEEE
International Conference on Acoustics, Speech and Signal Processing
(ICASSP).1em plus 0.5em minus 0.4emIEEE, 2021, pp. 3020–3024.
panayotov2015librispeech
V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: an asr corpus
based on public domain audio books,” in 2015 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP).1em
plus 0.5em minus 0.4emIEEE, 2015, pp. 5206–5210.
|
Endotaxial Stabilization of 2D Charge Density Waves with Long-range Order
Suk Hyun Sung, Nishkarsh Agarwal, Ismail El Baggari, Yin Min Goh, Patrick Kezer, Noah Schnitzer, Yu Liu, Wenjian Lu, Yuping Sun, Lena F. Kourkoutis, John T. Heron, Kai Sun, and Robert Hovden
http://arxiv.org/abs/2307.04587v1 (cond-mat.mtrl-sci), published 2023-07-10
Some exotic crystals spontaneously reorganize their valence electrons into periodic structures known as charge density waves (CDWs). In essence, two crystals emerge—the underlying atomic lattice and the emergent charge lattice. Just like atomic crystals, a charge density wave has defects: dislocations, disclinations, and elastic deformation <cit.>. Furthermore, the charge density wave can undergo phase transitions wherein the charge lattice unit cell changes shape and size. All of this CDW reshaping and topological restructuring occurs even when the underlying atomic lattice remains unchanged.
In low dimensions, these quantum phase transitions are promising candidates for novel devices <cit.> and efficient ultrafast non-volatile switching <cit.>, and they suggest elusive chiral superconductivity <cit.>. Unfortunately, 2D CDWs are inherently unstable, and accessing low-dimensional CDWs remains a challenge <cit.>. Even worse, at the elevated temperatures where devices typically operate, disruption of charge density waves is all but guaranteed due to ever-present disorder <cit.>. A long-range ordered incommensurate CDW has yet to be reported.
Here we stabilize ordered incommensurate charge density waves (oIC-CDWs) at elevated temperatures (T_IC = 350 K) in two dimensions by endotaxial synthesis of polytype heterostructures. The charge density wave, with an estimated hundred-fold amplitude enhancement, has a coherence length comparable to that of the underlying atomic crystal. The enhanced order of the oIC-CDW increases the electronic resistivity. This substantial enhancement of charge order is achieved by encapsulating an isolated octahedral CDW layer within a matrix of prismatic metallic layers via 2D endotaxial synthesis.
Realizing the ordered incommensurate CDW reveals that CDWs have a hexatic structure at high temperature; that is, long-range translational symmetry is limited by the proliferation of topological defects (dislocations and disclinations) in the CDW. We show that at high temperatures the CDWs in 1T-TaS_2 continuously melt as additional dislocations and disclinations form in the charge lattice. This hexatic CDW melting process was not previously observable, since the incommensurate CDW normally emerges as a highly disordered, melted state. By restoring order through 2D endotaxy, we can reversibly melt and unmelt CDWs in 1T-TaS_2. Based on these results, we access new regimes of the CDW phase diagram for octahedrally coordinated TaS_2 in temperature versus disorder space. Similar vestigial ordering (i.e., hexaticity) was predicted by Nie, Tarjus, and Kivelson <cit.>; however, with 2D endotaxy we can now tune down the disorder in the CDW phase diagram.
§ THE ORDERED INCOMMENSURATE CHARGE DENSITY WAVE
The ordered incommensurate CDW (oIC) reported herein (Fig. <ref>a–d) is strikingly distinct from the well-known incommensurate (IC) CDW (Fig. <ref>e–h) found in 1T-TaS_2 or 1T-TaSe_2. Here, the oIC phase is a truly two-dimensional (2D) CDW with long-range positional and orientational order that couples strongly to the underlying crystal lattice (Fig. <ref>a). The oIC-CDW, illustrated in Figure <ref>b, is a crystalline charge lattice with well-defined, sharp peaks in Fourier space (Fig. <ref>b-inset). This CDW charge lattice (a_CDW = 11.87 Å) exists within the underlying atomic lattice illustrated in Figure <ref>c.
Electron–lattice interaction is an essential aspect of CDWs, and the associated soft-phonon modes manifest as static periodic lattice distortions (PLDs) that reduce crystal symmetry and lower the electronic energy <cit.>. For TaS_2, the CDW pulls atoms toward the nearest charge maximum to form periodic clusters of atoms (Fig. <ref>c). Notably, for incommensurate charge ordering, each cluster is distinct since the atomic lattice is not commensurate with the CDW. While these lattice distortions are small (<10 pm), selected area electron diffraction (SAED) is sensitive to subtle picoscale distortions, making it a popular choice for characterization of CDW/PLDs <cit.>. CDW/PLDs diffract incident swift electrons into distinct superlattice peaks decorating each Bragg peak <cit.>. In reciprocal space, the CDW charge lattice (Fig. <ref>b-inset) and the measurable atomic superlattice peaks (Fig. <ref>c-inset) have corresponding spacing, symmetry, and intensity.
Diffracted superlattice peaks provide a direct measure of the CDW lattice and contain rich information on its order and disorder. Specifically, diffraction represents an ensemble average of the structure over the selected area, and disorder manifests as diffuse diffraction peaks <cit.>. Disorder of CDWs smears the superlattice peaks but leaves the principal Bragg peaks unaffected (Fig. <ref>g-inset). For oIC-CDWs, the charge lattice is ordered with limited defects, and thus diffraction shows both sharp superlattice and Bragg peaks (Fig. <ref>c-inset). In contrast, the well-known IC-CDW in 1T-TaS_2 possesses significant disorder of its charge distribution. Across decades, the IC phase in 1T-TaS_2 has been reported with ring-like, azimuthally diffuse diffraction around each Bragg peak <cit.>, yet the origin of the diffuse superlattice peaks is hardly discussed <cit.>.
Here we present the well-known IC-CDW in bulk 1T-TaS_2 as a hexatically disordered charge lattice containing dislocations and disclinations (Fig. <ref>f). In-situ SAED of 1T-TaS_2 taken at 408 K (Fig. <ref>a) shows azimuthally blurred first-order superlattice peaks (marked brown). Averaging all six third-order Bragg peaks (inset, Γ_3) better highlights this point. Notably, hexatic phases are known to have six-fold rotationally symmetric, azimuthally diffused peaks <cit.>. The experimental diffraction of IC-CDWs is consistent with a hexatic charge distribution (Fig. <ref>f) <cit.> and the corresponding azimuthally diffuse structure factor (Fig. <ref>f, g-inset). The IC-CDWs are three-dimensional (or quasi-2D) with non-negligible out-of-plane interactions (Fig. <ref>e–h).
In contrast, the oIC-CDW shows drastically sharper and stronger superlattice peaks, measured by in-situ SAED at 408 K (Fig. <ref>b). The sharpening is especially highlighted in the averaged third-order Bragg peaks (Γ_3). The measured superlattice peaks of the oIC-CDW are sharper in both the azimuthal (by ∼60%) and radial (by ∼50%) directions when compared to the IC-CDW. Notably, the superlattice peak widths of the oIC phase are comparable to the widths of the principal Bragg peaks. Therefore, the oIC is a spatially coherent electronic crystal.
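As an indication of how such azimuthal and radial widths can be quantified, the following is a minimal Python sketch; the sampling windows, interpolation order, and pixel coordinates are illustrative assumptions rather than the exact procedure used here. It samples a diffraction pattern along an arc of constant radius and along a radial cut through a chosen peak, and fits a Gaussian to each profile.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

def peak_widths(pattern, center, peak, half_arc=0.2, half_rad=15.0, n=181):
    """FWHM of a diffraction peak along the azimuthal arc and the radial cut.

    pattern : 2D array; center, peak : (row, col) of the direct beam and the peak.
    Returns (azimuthal, radial) FWHM in pixels.
    """
    cy, cx = center
    py, px = peak
    r0 = np.hypot(py - cy, px - cx)            # peak radius (pixels)
    phi0 = np.arctan2(py - cy, px - cx)        # peak azimuth (radians)

    dphi = np.linspace(-half_arc, half_arc, n)                    # radians
    arc = map_coordinates(pattern, [cy + r0 * np.sin(phi0 + dphi),
                                    cx + r0 * np.cos(phi0 + dphi)], order=1)
    dr = np.linspace(-half_rad, half_rad, n)                      # pixels
    ray = map_coordinates(pattern, [cy + (r0 + dr) * np.sin(phi0),
                                    cx + (r0 + dr) * np.cos(phi0)], order=1)

    to_fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))
    p_az, _ = curve_fit(gaussian, r0 * dphi, arc,
                        p0=[arc.max() - arc.min(), 0.0, 2.0, arc.min()])
    p_rad, _ = curve_fit(gaussian, dr, ray,
                         p0=[ray.max() - ray.min(), 0.0, 2.0, ray.min()])
    return to_fwhm * abs(p_az[2]), to_fwhm * abs(p_rad[2])
```

Applied to equivalent peaks in the oIC and IC patterns, the ratio of the two fitted widths gives the fractional sharpening quoted above.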
The oIC-CDW, a 2D charge-ordered state, is enhanced at least one-hundred-fold over previously reported bulk IC-CDWs. Diffracted superlattice peaks of the oIC-CDW have an integrated intensity over ten times stronger, even though the number of charge-ordered layers has been reduced to less than 10% of the material. Thus, endotaxial engineering improves not only the long-range order but also the amplitude of the IC-CDW charge order. The correlation between long-range order and CDW enhancement is measured directly via hexatic CDW melting later in this manuscript.
§ ENDOTAXIAL POLYTYPE HETEROSTRUCTURE OF TaS_2
The oIC-CDW phase reported herein is stabilized by synthesizing endotaxial polytype heterostructures of TaS_2, where oIC-CDWs reside in monolayers of octahedrally coordinated TaS_2 (Oc-TaS_2) embedded within a prismatic (Pr-TaS_2) matrix with one-to-one atomic registry (Fig. <ref>e). Endotaxial polytype heterostructures are synthesized by heating 1T-TaS_2 at ∼720 K for 15–30 min in an inert environment. Notably, 1T-TaS_2 is metastable and goes through an Oc-to-Pr endotaxial layer-by-layer polytype transformation upon heating (≳ 620 K). In-situ SAED patterns (Fig. <ref>c i–iv) were acquired at 20-second intervals at 408 K through the high-temperature conversion process (723 K). These snapshots reveal a sharpening of the superlattice peaks, a clear indicator of enhanced CDW order. Cooling the sample mid-transition stops the conversion and an interleaved polytype heterostructure is synthesized, as confirmed by cross-sectional ADF-STEM.
Figures <ref>d and e show atomic-resolution micrographs of bulk 1T-TaS_2 endotaxially converted to a polytype heterostructure. The atomic-resolution images demonstrate endotaxial monolayer encapsulation of Oc-TaS_2 (Fig. <ref>e, highlighted red) in Pr-TaS_2 layers. The Pr-TaS_2 layers (bulk: 2H, 3R) are metallic above ∼100 K. Previous work showed that these metallic layers decouple CDWs out of plane and raise the critical temperature for commensurate quantum states (i.e., the C-CDW) from ∼200 K to ∼350 K <cit.>.
Surprisingly, the endotaxial polytype heterostructure stabilizes long-range order in IC-CDWs at elevated (≳ 350 K) temperatures. The oIC-CDW phase has a correlation length comparable to that of the crystal lattice, quantified by comparing the widths of both superlattice and Bragg peaks from in-situ selected area electron diffraction patterns (SA aperture: 850 nm diameter). This indicates the CDW is relatively ordered (i.e., spatially coherent) over distances comparable to those of the parent atomic crystal (∼10^2 nm).
This enhancement of long-range CDW order is accompanied by a marked increase of the in-plane resistivity of the IC phase (Fig. <ref>f). Figure <ref>f shows temperature versus in-plane resistivity measurements of a 1T-TaS_2 (brown) and an endotaxial TaS_2 (red) specimen. The resistivity of the endotaxial specimen is higher in the IC-CDW phase (>358 K), despite the many metallic layers introduced to the system. This implies that the oIC-CDW has a much higher resistivity than the hexatic-IC phase in 1T-TaS_2.
§ HEXATIC MELTING OF IC-CDW
Creating the oIC-CDW provides an ordered charge lattice that can be hexatically melted upon further heating. Hexatic melting is a uniquely 2D process wherein a crystal melts in two stages through the creation of dislocations and disclinations <cit.>. During this process, the reciprocal-space structure continuously evolves. Initially, at lower temperatures (ca. 350 K), the oIC phase is an ordered charge crystal with well-defined peaks in reciprocal space (Fig. <ref>c). As the temperature rises, the CDW peaks continuously blur azimuthally as the density of dislocations and disclinations increases (Fig. <ref>d, e). Azimuthal blurring of the reciprocal lattice is characteristic of hexatic phases and reflects the loss of translational symmetry while maintaining some orientational order <cit.>. Eventually, at higher temperatures (ca. 570 K), the hexatic crystal completely dissociates into an amorphous liquid state with a ring-like structure factor. Figures <ref>c–e are generated using a phenomenological Monte Carlo simulation wherein the displacements of the CDW charge centers follow a temperature-dependent Maxwell-Boltzmann probability distribution (see Methods). Here, the incommensurate CDW hexatically melts while the underlying atomic lattice remains unchanged; in diffraction this corresponds to a blurring of CDW superlattice peaks and preservation of the Bragg peaks.
During the hexatic melting of the oIC-CDW, the superlattice peaks increasingly blur as the temperature is raised, as is clearly visible in in-situ SAED at 473 K (Fig. <ref>a-i), 523 K (Fig. <ref>a-ii), and 573 K (Fig. <ref>a-iii). The blurring is anisotropic and more prominent along the azimuthal direction, as expected for hexatic phases. The CDW peaks are quantified throughout the melting process in Figure <ref>b. The azimuthal peak width (Fig. <ref>b, blue triangles) increases continuously with temperature, roughly doubling when raised from 410 K to 570 K. Around 520 K, the oIC has melted into a state that resembles the well-known IC-CDW of bulk 1T-TaS_2. This CDW melting process is reversible, and the peaks sharpen when the temperature is decreased. Notably, the Bragg peaks do not show appreciable changes, indicating that only the electronic crystal is melting, not the atomic crystal.
Although the CDW melting process appears hexatic, it is distinct from that of familiar liquid crystals, silica spheres, or atomic crystals, wherein the amplitude of the order parameter does not change. Here, quantitative analysis of the superlattice peak intensities (Fig. <ref>a, red) reveals that the charge density wave amplitude decreases with temperature. This is expected, as topological defects in CDWs (dislocations and disclinations) have locally divergent strain with an elastic energy cost that forces a local amplitude collapse. Such local CDW amplitude collapses have been observed at the centers of topological defects in the 3D charge ordering of manganites <cit.>.
§ THE CDW PHASE DIAGRAM FOR OCTAHEDRAL TaS_2
Endotaxial synthesis of octahedrally coordinated TaS_2 allows access to new phases of matter and the construction of a phase diagram for CDWs as a function of temperature (T) and disorder. The CDW phase diagram for 1T-TaS_2 is shown in Figure <ref>. 1T-TaS_2 exists with native disorder, and the ordered, commensurate phase (C-CDW, Fig. <ref>g) is only observed at low temperatures. At room temperature, the CDW is a partially ordered NC phase (Fig. <ref>f) that enters the hexatic IC phase upon heating (Fig. <ref>e). At high temperatures or high disorder, CDWs degrade or vanish. The high-disorder regime was historically accessed by substituting tantalum ions with other metal species (e.g., Ti, Nb) or by forcing intercalates into the van der Waals gap <cit.>. At room temperature, mild titanium substitution (1T-Ta_0.7Ti_0.3S_2) drives the system into hexatic-IC CDW states (Fig. <ref>h), and as more titanium is substituted (1T-Ta_0.3Ti_0.7S_2) the CDW vanishes completely (Fig. <ref>i).
The low-disorder regime, now accessible by endotaxial engineering, provides room-temperature ordered C-CDWs and a novel ordered IC-CDW at higher temperatures. Notably, with low disorder the C-to-IC transition is direct and the NC phase does not appear. The IC phase is ordered, but the CDW can be continuously melted into a disordered hexatic-IC phase (as described in Figure <ref>). The boundaries of the CDW phase diagram are drawn consistently with the hexatic melting of 2D colloidal particles under temperature and disorder <cit.>, as well as with nematic CDWs <cit.>.
Notably, CDWs in endotaxial TaS_2 are two-dimensional, and the oIC phase has enhanced order despite the 3D-to-2D dimensionality reduction. In bulk 1T-TaS_2, CDWs are quasi-2D with non-negligible out-of-plane interactions (Fig. <ref>h) <cit.>. Formation of endotaxial polytype heterostructures disrupts the out-of-plane interactions, and the CDWs reside in a protected 2D environment <cit.>. Stabilization of an ordered IC-CDW in 2D seemingly contradicts the Hohenberg-Mermin-Wagner theorem <cit.> and the Imry-Ma argument <cit.>, which state that spontaneous breaking of a continuous symmetry (e.g., in IC-CDWs) is unstable at non-zero temperatures in 2D. While neither principle prevents intermediate phases with short-range order, 2D CDWs should nonetheless be more fragile to disorder <cit.>. An ordered IC phase can only emerge in ultra-clean environments. Here, endotaxial synthesis protects the CDW state by strain-free encapsulation in a chemically identical environment of metallic layers that shield disorder.
§ CONCLUSION
In summary, we demonstrate that endotaxial synthesis of clean interleaved polytype heterostructures can stabilize fragile quantum phases, such as ordered CDWs, even at high temperatures. Here, we stabilize and enhance 2D charge density waves (in both long-range order and amplitude) in an endotaxially confined monolayer of 1T-TaS_2. Surprisingly, the low-dimensional symmetry breaking of an ordered incommensurate CDW (oIC-CDW) appears, suggesting the quantum state resides within minimal extrinsic disorder. By enhancing CDW order, the hexatic nature of IC-CDWs is revealed. Experimental observation matches advanced simulation of electron diffraction from charge lattices to provide the real-space evolution of 2D CDW melting. Heating the oIC-CDW in situ in the TEM above 400 K, we see a reversible hexatic melting process in which disclinations and dislocations destroy the long-range translational symmetry of the CDW while maintaining its orientational order. The CDW melts well before the underlying atomic crystal changes. In 2D, CDWs are expected to manifest through vestigial electronic hexaticity: a weak CDW with substantial defects and short-range order. The nature of vestigial phases in CDWs remains poorly understood, with little direct evidence. From these results, a CDW phase diagram for 1T-TaS_2 is created that is consistent with the predicted emergence of vestigial quantum order.
§ ACKNOWLEDGEMENTS
S.H.S. acknowledges the financial support of the W.M. Keck Foundation. Experiments were conducted using the Michigan Center for Materials Characterization (MC2) with assistance from Tao Ma and Bobby Kerns. This work made use of the electron microscopy facility of the Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM), which is supported by the National Science Foundation under Cooperative Agreement No. DMR-2039380. N.S. acknowledges additional support from the NSF GRFP under award number DGE-2139899. P.K. and J.T.H. gratefully acknowledge support from NSF MRSEC DMR-2011839. Y.L., W.J.L., and Y.P.S. acknowledge support from the National Key R&D Program (Grant No. 022YFA1403203 and No. 2021YFA1600201) and the National Natural Science Foundation of China (Grant No. U2032215, No. U1932217, and No. 12274412).
§ AUTHOR CONTRIBUTIONS
S.H.S. and R.H. conceived the charge lattice model and associated lattice distortions and linked them to diffraction of TaS_2. S.H.S., Y.M.G., N.S., L.F.K., and R.H. performed HAADF-STEM and in-situ TEM and interpreted the electron microscopy data. S.H.S. fabricated samples for electronic measurements. P.K. and J.T.H. performed and analyzed electronic measurements. S.H.S., I.E.B., R.H., and K.S. provided theoretical interpretation. S.H.S. and N.A. performed Monte Carlo simulations. S.H.S., K.S., and R.H. created the phase diagram of octahedrally coordinated TaS_2. Y.P.S. synthesized the 1T-TaS_{2-x}Se_x crystal. S.H.S. and R.H. prepared the manuscript. All authors reviewed and edited the manuscript.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ METHODS
§.§ Simulated Diffraction of Charge Lattices with Heating
Charge density waves are electronic modulations describable in reciprocal space by three wave vectors (so-called triple-q) or in real space as local charges arranged into a hexagonal lattice. For a fully ordered system, the charge lattice is a perfect lattice (Fig. <ref>b left), and the structure factor (Fig. <ref>b left inset) is also a perfect lattice. Here, the periodicity is equal to the incommensurate CDW wave vector q_IC (or a_IC in real space). Traditional CDW theory elegantly describes ordered (or slightly disordered) systems using a sparse representation in reciprocal space. However, a real-space basis readily describes topological disorder (dislocations and disclinations) in a charge density wave. This becomes particularly critical for the IC phase (>350 K) of 1T-TaS_2, where diffraction studies reveal azimuthally diffused superlattice peaks <cit.> that we show to be consistent with topological disorder in CDWs. Describing the disorder of the CDW plays a critical role in simulating experimentally consistent diffraction patterns at high temperatures.
The hexatic melting of a real-space charge lattice is illustrated with phenomenological Monte Carlo simulations of the NPT ensemble (constant particle count, temperature, and pressure). The displacements of the charge centers in a CDW follow a Maxwell-Boltzmann probability distribution at different temperatures. The interaction energy between charge centers is calculated using a shifted Lennard-Jones potential truncated at 18.7 Å. From these inputs, the likelihood of forming dislocations and disclinations in a CDW lattice increases with temperature.
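To make the procedure concrete, the following is a minimal Python sketch of such a simulation. It is a simplified constant-volume (NVT) Metropolis scheme rather than the full NPT ensemble used here, and the well depth, effective temperature, move size, and lattice size are illustrative assumptions; only the 11.87 Å charge-lattice spacing quoted earlier and the 18.7 Å truncation radius are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
A_CDW = 11.87                                 # CDW lattice constant (Angstrom)
R_CUT = 18.7                                  # truncation radius quoted above (Angstrom)
EPS = 1.0                                     # LJ well depth (arbitrary units, assumed)
SIGMA = A_CDW / 2.0 ** (1.0 / 6.0)            # place the LJ minimum at the CDW spacing

def lj_shifted(r2):
    """Truncated-and-shifted Lennard-Jones energy from a squared distance."""
    if r2 >= R_CUT ** 2:
        return 0.0
    s6 = (SIGMA ** 2 / r2) ** 3
    s6c = (SIGMA / R_CUT) ** 6
    return 4 * EPS * (s6 * s6 - s6) - 4 * EPS * (s6c * s6c - s6c)

def site_energy(i, pos, box):
    e = 0.0
    for j in range(len(pos)):
        if j != i:
            d = pos[j] - pos[i]
            d -= box * np.round(d / box)      # minimum-image convention
            e += lj_shifted(d @ d)
    return e

# Periodic triangular (hexagonal) lattice of charge centres; ny must be even.
nx, ny = 10, 10
pos = np.array([[(i + 0.5 * (j % 2)) * A_CDW, j * A_CDW * np.sqrt(3) / 2]
                for j in range(ny) for i in range(nx)], float)
box = np.array([nx * A_CDW, ny * A_CDW * np.sqrt(3) / 2])

kT, step = 0.2 * EPS, 0.1 * A_CDW             # effective temperature and trial move (assumed)
for sweep in range(100):
    for i in range(len(pos)):
        old = pos[i].copy()
        e_old = site_energy(i, pos, box)
        pos[i] = (old + rng.normal(0.0, step, 2)) % box
        d_e = site_energy(i, pos, box) - e_old
        if rng.random() >= np.exp(min(0.0, -d_e / kT)):
            pos[i] = old                      # Metropolis rejection
# Raising kT increases the density of dislocations and disclinations in `pos`,
# i.e. the hexatic melting of the charge lattice discussed in the text.
```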
Diffraction of the simulated CDWs is calculated from the corresponding periodic lattice distortion (PLD) of a 1T-TaS_2 crystal. The displacements are small (≲10 pm) but clearly manifest as superlattice peaks with distinctive intensity in SAED. Notably, the superlattice peak intensities become stronger at higher |𝐤|; this is distinguishable from chemically ordered superlattice peaks, which decay as |𝐤| increases <cit.>. In TaS_2, atoms displace toward the charge centers, which is equivalent to a longitudinal displacement wave. Here, the displacement amplitude is proportional to the charge density gradient, with a maximum displacement set at 7 pm. Electron diffraction is kinematically simulated under a flat-Ewald-sphere approximation using the Fourier transform of the displaced atomic lattice.
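As an illustration of this last step, the Python sketch below displaces a small triangular lattice along the gradient of a triple-q charge density, caps the displacement at 7 pm, and evaluates the kinematic intensity |Σ_j exp(i k·r_j)|² along a reciprocal-space cut through a Bragg peak. The lattice size, the 3.36 Å Ta–Ta spacing, the CDW orientation, and the choice of cut are illustrative assumptions.

```python
import numpy as np

a = 3.36                                        # in-plane Ta-Ta spacing (Angstrom, assumed)
n = 40                                          # n x n atoms (small illustrative patch)
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
r = np.stack([(ii + 0.5 * (jj % 2)) * a,
              jj * a * np.sqrt(3) / 2], axis=-1).reshape(-1, 2)

# Triple-q incommensurate charge density rho(r) = sum_m cos(q_m . r).
q_mag = 2 * np.pi / 11.87                       # |q_IC| from a_CDW = 11.87 Angstrom
ang = np.deg2rad([0.0, 120.0, 240.0])
qs = q_mag * np.stack([np.cos(ang), np.sin(ang)], axis=-1)

grad = np.zeros_like(r)
for q in qs:                                    # gradient of rho at each atom position
    grad += -np.sin(r @ q)[:, None] * q
u = 0.07 * grad / np.linalg.norm(grad, axis=1).max()   # displacement capped at 7 pm = 0.07 A
r_pld = r + u                                   # atoms pulled toward the charge maxima

def kinematic_intensity(points, kvecs):
    """|sum_j exp(i k.r_j)|^2 in the flat-Ewald-sphere (2D) kinematic limit."""
    phase = np.exp(1j * kvecs @ points.T)       # shape (n_k, n_atoms)
    return np.abs(phase.sum(axis=1)) ** 2

G = np.array([2 * np.pi / a, 2 * np.pi / (a * np.sqrt(3))])   # a Bragg reflection
t = np.linspace(-1.0, 1.0, 1201)[:, None]                     # cut coordinate (1/Angstrom)
kcut = G + t * (qs[0] / np.linalg.norm(qs[0]))
I_pld = kinematic_intensity(r_pld, kcut)
I_ideal = kinematic_intensity(r, kcut)
# I_pld shows satellites near |t| = 0.53 1/Angstrom (= |q_IC|) that are absent in
# I_ideal; their strength grows with the displacement amplitude, i.e. the PLD.
```

Replacing the ideal charge density by the disordered charge centres from the Monte Carlo step above reproduces the azimuthal blurring of the satellites while leaving the Bragg peaks untouched.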
§.§ Electron Microscopy
In-situ SAED was performed on Thermofisher Scientific (TFS) Talos (operated at 200 keV, SA aperture 850 nm) with Protochips Fusion Select holder and Gatan OneView Camera. Cross-sectional HAADF-STEM images were taken on JEOL 3100R05 (300 keV, 22 mrad) with samples prepared on TFS Nova Nanolab DualBeam FIB/SEM.
TEM specimens were prepared by exfoliating bulk 1T-TaS_2 and 1T-TaS_{2-x}Se_x crystals onto a polydimethylsiloxane (PDMS) gel stamp. The samples were then transferred to TEM grids using a home-built transfer stage. Silicon nitride membrane window TEM grids with 2 µm holes from Norcada and Protochips Fusion Thermal E-chips were used. From optical contrast and CBED patterns, the samples (Fig. 1, 2) were estimated to be 20–50 nm thick <cit.>.
§.§ Synthesis and Acquisition of bulk crystals
1T-TaS_2 for the in-situ SAED and electronic measurements was acquired from HQ Graphene. 1T-TaS_{2-x}Se_x (x ≈ 1) for the cross-sectional HAADF-STEM measurements was grown by the chemical vapor transport method with iodine as a transport agent. Stoichiometric amounts of the raw materials, high-purity elemental Ta, S, and Se, were mixed and heated at 1170 K for 4 days in an evacuated quartz tube. The obtained powders and iodine (density: 5 mg/cm^3) were then sealed in another, longer quartz tube and heated for 10 days in a two-zone furnace, where the temperatures of the source and growth zones were fixed at 1220 K and 1120 K, respectively. A shiny, mirror-like sample surface was obtained, confirming the high quality of the crystals. All CDW characterization was done on 1T-TaS_2; the Se-doped sample was used only for polytype characterization in cross-sectional HAADF-STEM (Fig. <ref>d,e).
§.§ Endotaxial Synthesis of the oIC-CDW in TaS_2
Interleaved 2D polytypes were synthesized by heating 1T-TaS_2 to 720 K in high vacuum (<10^-7 Torr) or in an argon-purged glovebox <cit.>. The 1T-TaS_2 was held at 720 K for ∼10 minutes, then brought back down to room temperature. Once the interleaved polytype is fully established, the oIC-CDW becomes the stable electronic state above 350 K.
§.§ Device Fabrication and Electronic Measurement
For resistivity measurements, flakes were transferred onto pre-fabricated bottom contacts using the PDMS gel stamp method. The fabrication of the bottom contacts is detailed in <cit.>. The flake was sculpted into a rectangular bar (∼11 µm × 15 µm) using a TFS Nova Nanolab DualBeam FIB/SEM (see Supplementary Figure S4). The thickness of the flake was determined by AFM.
Resistivity versus temperature measurements were performed in a Quantum Design Dynacool PPMS using a standard sample puck and an external Keithley 2400 series source meter. The sample was adhered to the puck backplane with silver paint, and contacts were wire bonded to the puck channel pads using 50 µm Au wire. To ensure sample thermalization, a baffle rod with an Au-coated sealing disk hovering <1 cm above the sample was inserted into the PPMS bore, and the heating and cooling rate was restricted to <2 K/min. A 10 µA current was sourced for four-wire measurements. The current/voltage limits were chosen to keep electric fields below 10 kV/cm to avoid sample breakdown, and to keep current densities below 10^5 A/cm^2 to prevent localized heating at low temperatures.
Shock excitation of H$_2$ in the James Webb Space Telescope era
L. E. Kristensen, B. Godard, P. Guillard, A. Gusdorf, and G. Pineau des Forets
http://arxiv.org/abs/2307.04178v1 (astro-ph.GA), published 2023-07-09
] |
Niels Bohr Institute, University of Copenhagen, Øster Voldgade 5–7, 1350 Copenhagen K, Denmark
[email protected]
Observatoire de Paris, Université PSL, Sorbonne Université, LERMA, 75014 Paris, France
[email protected]
Laboratoire de Physique de l’École Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université Paris Cité, 75005 Paris, France
Sorbonne Université, CNRS, UMR 7095, Institut d’Astrophysique de Paris, 98bis bd Arago, F-75014 Paris, France
Institut Universitaire de France, Ministère de l’Enseignement Supérieur et de la Recherche, 1 rue Descartes, 75231 Paris Cedex F-05, France
Université Paris-Saclay, CNRS, Institut d’Astrophysique Spatiale, 91405 Orsay, France
Molecular hydrogen, H_2, is the most abundant molecule in the Universe. Thanks to its widely spaced energy levels, it predominantly lights up in warm gas, T ≳ 10^2 K, such as shocked regions that may or may not be externally irradiated by interstellar UV photons, and it is one of the prime targets of James Webb Space Telescope (JWST) observations. These targets range from shocks in protostellar outflows and supernova remnants impinging on molecular clouds all the way up to starburst galaxies and active galactic nuclei.
Sophisticated shock models are able to simulate H_2 emission from such shocked regions. We aim to explore H_2 excitation using shock models, and to test over which parameter space distinct signatures are produced in H_2 emission.
We here present simulated H_2 emission using the Paris-Durham shock code over an extensive grid of ∼ 14,000 plane-parallel stationary shock models, a large subset of which are exposed to a semi-isotropic external UV radiation field. The grid samples six input parameters: the preshock density, shock velocity, transverse magnetic field strength, UV radiation field strength, the cosmic-ray-ionization rate, and the abundance of polycyclic aromatic hydrocarbons, PAHs. Physical quantities resulting from our self-consistent calculations, such as temperature, density, and width, have been extracted along with H_2 integrated line intensities. These simulations and results are publicly available on the Interstellar Medium Services platform.
The strength of the transverse magnetic field, as quantified by the magnetic scaling factor, b, plays a key role in the excitation of H_2. At low values of b (≲ 0.3, J-type shocks), H_2 excitation is dominated by vibrationally excited lines; whereas, at higher values (b ≳ 1, C-type shocks), rotational lines dominate the spectrum for shocks with an external radiation field comparable to (or lower than) the solar neighborhood. Shocks with b ≥ 1 can potentially be spatially resolved with JWST for nearby objects. H_2 is typically the dominant coolant at lower densities (≲ 10^4 cm^-3); at higher densities, other molecules such as CO, OH, and H_2O take over at velocities ≲ 20 km s^-1 and atoms, for example, H, O, and S, dominate at higher velocities. Together, the velocity and density set the input kinetic energy flux. When this increases, the excitation and integrated intensity of H_2 increases similarly. An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to the input kinetic energy flux. These results provide an overview of the energetic reprocessing of input kinetic energy flux and the resulting H_2 line emission.
Shock excitation of H_2 in the James Webb Space Telescope era
(Tables B.1–B.7 are only available in electronic form at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (<130.79.128.5>) or via <https://cdsarc.cds.unistra.fr/cgi-bin/qcat?J/A+A/675/A86>.)
L. E. Kristensen^1, B. Godard^2,3, P. Guillard^4,5, A. Gusdorf^3,2, and G. Pineau des Forêts^6,2
Received 27 February 2023; accepted 23 May 2023
§ INTRODUCTION
Shocks are inherently out-of-equilibrium, time-dependent phenomena that permeate space. They appear over a wide range of scales, from accretion onto stars or protoplanetary disks, winds and jets driven by accreting (proto)stars, planetary nebulae, supernova remnants, starburst galaxies, and jets from active galactic nuclei (AGN), to galaxy-galaxy collisions <cit.>. Common to all these phenomena is that the kinetic energy flux dissipated by the shock accelerates, heats, and compresses the medium. When the medium cools down, radiation is emitted, which we observe. To understand the physical origin of the emission (e.g., preshock density, shock velocity) and the energetic processing taking place in shocks, it is thus necessary to reverse engineer the observed light. Doing so requires models.
One of the often-used tracers of shocks is molecular hydrogen, H_2 <cit.>. This is the most abundant molecule in the interstellar medium by some four orders of magnitude over CO and H_2O. The molecule is the lightest, and so it has the most widely spaced rotational levels (J = 1 has E_ up / k_ B = 170 K and J = 2 has E_ up / k_ B = 510 K). As such, it is predominantly excited in warm (T ≳ 10^2 K) and hot (T ≳ 10^3 K) molecular gas. This molecule has no permanent dipole moment, and only forbidden electric quadrupole transitions occur, although at low probability. The main reason H_2 emission is still bright is because of its high abundance.
H_2 emission is readily observed from the ground, particularly in the higher-excited rovibrational transitions at near-infrared wavelengths <cit.>. The brightest of these is typically the v = 1–0 S(1) line at 2.12 μm. A few pure rotational lines are also accessible from the ground, and the line profiles may even be velocity resolved on telescopes such as the Very Large Telescope <cit.>. However, it is necessary to go above the atmosphere to observe the lower-excited pure rotational transitions of H_2. Space-based telescopes such as the Infrared Space Observatory (ISO) and the Spitzer Space Telescope (Spitzer) both observed these transitions toward numerous shocked regions <cit.>, as did the Stratospheric Observatory For Infrared Astronomy <cit.>. Now the James Webb Space Telescope (JWST) is doing the same <cit.>. In particular, the MIRI instrument is observing the rotational transitions with a gain in sensitivity and spatial resolution of two orders of magnitude compared with Spitzer, and an increase in spectral resolution of a factor of five <cit.>. Similar improvements are reached with the NIRSpec instrument compared with the VLT-SINFONI integral-field unit, allowing deep observations of the rovibrational lines of H_2. The wavelength coverages of NIRSpec, NIRCam, and MIRI are illustrated in Fig. <ref>, which shows a simulated H_2 spectrum with the instrument wavelength coverages displayed.
Planning and interpreting the abovementioned observations is often done by use of models. With models, it is possible to constrain, for example, the shock velocity and preshock density, which together give the input kinetic energy flux, 1/2 ρ v_s^3, where ρ is the mass density and v_s is the shock velocity. In molecular shocks, such a comparison reveals that up to 50% of the input energy is radiated away in H_2 emission <cit.>, depending on the shock conditions, making H_2 the dominant coolant in these shocks. Spitzer in particular opened up the characterization of the pure rotational H_2 lines. Observations and subsequent modeling revealed that most H_2 emission could be reproduced by shock models <cit.>. However, when additional constraints, such as the H/H_2 ratio and the cooling length, are included for protostellar outflows, a single shock model no longer reproduces the observations <cit.>. Instead, as argued, the observational beam likely captures different shocks, or shock geometries more complex than 1D, which is to be expected; this is the case not just for protostellar outflows, but also for observations of shocks in the diffuse gas of starburst and colliding galaxies <cit.>. Irrespective of the specific science case, the first step in comparing observations to models is to have the models available.
The Paris-Durham shock code <cit.> has been developed and maintained for more than 35 years <cit.>. The code can either find jump (J-type shocks) or continuous (C-type shocks) solutions depending on the input physical parameters. Recent developments include the treatment of an external UV radiation field <cit.>, and self-irradiation in high-velocity shocks <cit.>. Here we present the results of running a large grid of simulations of (externally irradiated) shocks with the goal of exploring how the input energy flux (kinetic and radiative) is reprocessed and ultimately results in H_2 emission. These model predictions can be used directly to interpret, for example, JWST observations of shock emission.
The paper is organized as follows. Section <ref> describes the shock model and the model grid, with a particular emphasis on H_2 excitation and emission. The section also describes which physical quantities were extracted from the models, and the methodology applied. Section <ref> describes the results and provides a discussion of these results. Finally, the main points are summarized in Sect. <ref>.
§ MODEL AND GRID DESCRIPTION
The current version of the multifluid shock code is extensively described in <cit.> and references therein, and only the main relevant points will be described here. These points particularly relate to H_2 emission and other observable diagnostics, but also how the initial shock conditions are calculated. The code is publicly available[<http://ism.obspm.fr/shock.html>], and the entire grid presented in this paper is also available on the ISM platform[ <https://app.ism.obspm.fr/ismdb/>]. In Appendix <ref> we provide an introduction to this platform and demonstrate how it can be used.
§.§ Initial conditions
The main focus of this paper is on H_2, and so the chemistry considered in this paper and, more importantly, in the models run, is a gas-phase-only chemistry. That is, grain adsorption and desorption processes are not included. The only exceptions are the formation of H_2 on grains, and grain erosion for the release of elemental Si, Fe, etc. into the gas phase. Photochemistry is included in all steps of the calculation; readers can refer to the text below for more details.
Our assumption is that the initial conditions are in equilibrium, that is, thermal and chemical equilibrium with or without an incident radiation field. Running a shock model therefore requires multiple steps, all done using the Paris-Durham code <cit.>. This code simulates steady-state gas equilibrium, photon-dominated regions (PDRs), or shocks. These steps are illustrated in Fig. <ref>. First, a chemical steady-state calculation is run with the given density and radiation field. For irradiated shocks, the next step is to take the final equilibrium conditions from the chemical steady-state calculation and use these as input for a PDR calculation, where a tracer particle is advected at a small velocity (≤ 0.01 km s^-1) from an A_ V of 10^-9 to 10^-1. The advection speed is chosen such that the time it takes to cross the PDR front is long enough that equilibrium is reached; this timescale is 10^5–10^9 years for high to low densities. The choice of a final A_ V of 0.1 is motivated by two considerations. First, the primary focus of this paper is H_2 and the A_ V thus needs to be high enough that the preshock gas is substantially molecular (molecular fraction ≥ 0.1) for the majority of the G_0 values here, specifically the part of the grid where G_0/n_ H < 1. Second, the A_ V should be low enough that H_2 is not fully self-shielded. These two conditions are met at an A_ V of 0.1. The final conditions, in terms of steady-state abundances, temperature, and H_2 level populations, are then used as the input physical conditions of the shock calculation. The shock is run in the final step.
The initial elemental abundances are provided in Table <ref>. Of particular importance is the abundance of polycyclic aromatic hydrocarbons (PAHs). In the model, a representative PAH molecule is included, C_54H_18 and its singly charged ions. Table <ref> reports the amount of H and C locked up in this PAH for a PAH abundance of X(PAH) = 10^-6. The grain temperature is kept fixed at 15 K.
We cover a 6D parameter space with the preshock density (n_H = 2 n(H_2) + n(H)), shock velocity (v_s), strength of the transverse magnetic field[The transverse magnetic field strength scales with the density as B = b × √(n_H (cm^-3)) μG, where b is a scaling factor.] (b), external UV radiation field <cit.>, H_2 cosmic-ray ionization rate (ζ_H2), and the fractional abundance of PAHs (X(PAH)). The parameter space is presented in Table <ref>. Depending on the initial conditions, the code either finds a Jump (J-type) solution or a Continuous (C-type) solution (see below, Sect. <ref>, for more details). Throughout this paper, we use two shock models to illustrate the differences when changing b from 0.1 to 1.0; these are referred to as models A and B (Table <ref>). For the given set of input parameters, model A gives rise to a J-type shock, and model B a C-type shock.
§.§ Molecular hydrogen
Collisional excitation and de-excitation of H_2 is calculated for collisions with H, H_2, and He. The collisional rate coefficients for H_2-H_2 collisions are adopted from <cit.> and for H_2-He collisions from <cit.>. In the case of H_2-H collisions, for the first 49 levels of H_2 the rates are from <cit.> and <cit.>, where the rates have been calculated using a full quantum mechanical approach. For the remaining levels, the rates from <cit.> are used. They were calculated using a quasi-classical approach. The reactive reaction rates of H_2 with H are from <cit.>.
The number of levels has been set to 150 here, and the highest level is v = 8, J = 3 (E/k_B = 39,000 K). The model assumes that there are no levels between the user-set value and the dissociation level. This may be important when calculating the dissociation rate of H_2, since molecules that are already excited have internal energies closer to the dissociation limit, and thus require less energy to dissociate. For the models run here, we find no significant difference in H_2 emission when increasing the number of levels.
Depending on the initial conditions, H_2 may dissociate in the shock through collisions. As the post-shock gas cools, H_2 reforms on the grains <cit.> and it is necessary to account for the bond energy released (4.5 eV ∼ 5.1 × 10^4 K). We assume that approximately one third of the energy goes to internal energy of the molecule. This internal energy distribution follows a Boltzmann distribution with a temperature corresponding to ∼ 17,000 K. The remaining energy is equally split between kinetic energy of the newly formed H_2 molecule, and heating of the grain.
The H_2 level populations are used for calculating the local H_2 line emissivities. This is done under the assumption of optically thin emission, which typically applies to H_2 emission because of its lack of a permanent dipole moment. Of these lines, 1000 are output explicitly and stored as emissivity profiles in this grid. About 900 of these H_2 lines are covered by the JWST instruments MIRI and NIRSpec. These two instruments together cover the wavelength range of 0.6 – 28 μm; that is, the v = 0–0 S(0) ground-state line at 28.3 μm (Fig. <ref>) is not covered.
§.§ Grid
The total set of grid parameters is presented in Table <ref>; covering this range of parameter space resulted in ∼ 14,000 simulations in total. Each simulation produces a number of outputs that are all stored in human-readable ASCII files and an HDF5 file for easy extraction[The full model outputs are provided on the ISM platform: <https://app.ism.obspm.fr/ismdb/>]. These include physical properties of the shock (e.g., temperature, density, velocity) as a function of distance and time through the shock, chemical properties (e.g., local densities, charge state, column densities), and the excitation of H_2 (level populations and local emissivities). In this case, the time is calculated as the neutral flow time, t_n = ∫ dz / v_n. In total, more than 2600 quantities are stored as profiles through each shock, and 1400 quantities are stored as integrated values.
The model integrates the gas state far downstream in order to ensure that a steady-state solution is contained within the simulation. Therefore, special care needs to be taken when extracting integrated quantities such as column densities or line intensities. We adopt a criterion for the size of the shock similar to that of <cit.>, based on radiative energy dissipation, and set the limit as the point where 99.9% of the total radiation has been emitted (see Appendix <ref>). Specifically, this means that the size, z_s, is defined as:
[Υ(z_s) - Υ(0)] / [Υ(∞) - Υ(0)] = 99.9%,
where Υ is the sum of the kinetic, magnetic, and thermal energy fluxes.
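As an illustration of how this criterion can be applied to the tabulated output, a minimal Python sketch is given below; the array names are placeholders for the actual ASCII/HDF5 columns, and the toy profile is synthetic.

```python
import numpy as np

def shock_size(z, upsilon, fraction=0.999):
    """First distance at which Eq. (1) is satisfied for a given energy-flux profile."""
    ratio = (upsilon - upsilon[0]) / (upsilon[-1] - upsilon[0])
    return z[np.argmax(ratio >= fraction)]      # first index where the criterion holds

# Toy usage with a synthetic exponential dissipation profile of scale 1e15 cm:
z = np.linspace(0.0, 1.0e17, 10001)             # cm
upsilon = 1.0 - np.exp(-z / 1.0e15)             # normalized Upsilon(z) - Upsilon(0)
print(f"z_s = {shock_size(z, upsilon):.2e} cm") # ~ 6.9e15 cm = -ln(0.001) scale lengths
```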
For ease of use, we provide a number of tables containing already-extracted results at the Centre de Données astronomiques de Strasbourg (CDS[Add link to CDS archive at publication stage.]). Example tables are provided in Appendix <ref> in Tables <ref> – <ref>. These tables include:
<ref> Physical parameters such as peak temperature, density, width, and age of the shock;
<ref> Column densities of selected species, particularly H, H_2, O, OH, H_2, C^+, C, and CO;
<ref> Data required for creating H_2 excitation diagrams, i.e., ln(N/g) and E for each of the 150 levels;
<ref> H_2 integrated intensities of the 1000 lines extracted, along with their wavelength;
<ref> Width of the H_2 emitting zone for the = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines;
<ref> H_2 o/p ratios determined both locally and integrated through the shock;
<ref> Integrated line intensities of 29 transitions arising from C^+, Si^+, H, C, Si, O, S^+, N^+, N, and S.
On occasion, the model does not converge for numerical reasons; this happens in ∼5% of cases. This convergence failure often occurs in C^*-type shocks, when the flow crosses the first sonic point <cit.>. In these cases, the model output is ignored, but the input parameters are still recorded in the tables.
§.§ Model limitations
The model has a number of inherent assumptions, which are discussed in the following. These include the shock geometry, magnetic field orientation, self-irradiation, stationary shocks, and grain chemistry.
Geometry. The model treats a plane-parallel shock front, thus ignoring geometry. The lack of geometry is especially important in J-type shocks, where the gas may be compressed by four orders of magnitude or more. In nature, such a compression would quickly lead to an expansion of the high-pressure post-shock gas into the surrounding low-pressure medium; however, that is not possible in a 1D simulation. As a result, the post-shock density could be overestimated. For the case of H_2 emission, this is less important: most of the H_2 emission is generated in the warm parts of the shock where T > 100 K, prior to where significant post-shock expansion would occur.
Magnetic field orientation. The magnetic field orientation is assumed to be perpendicular to the direction of motion. This may not always be the case in molecular clouds, in fact, there is no a priori reason to assume the shock wave and field orientation are well aligned. If the field is not perpendicular to the direction of motion, the compression will lead to a change in field geometry, as described and discussed in <cit.>. These effects are not included here.
Self-irradiation. The model is best suited for molecular shocks. In shocks where H_2 is dissociated and atomic H is excited, the shock becomes self-irradiated. While this self-irradiation can be solved iteratively <cit.>, it is not included in the present version of the grid. This limits J-type shocks to v_s ≲ 30 km s^-1.
Stationary shocks. All the shocks in this paper are stationary shocks. This implies there needs to be enough time for the stationary structure to fully develop. While the code can mimic non-stationary shocks, an additional free parameter, the age of the shock, is needed, and it is deemed beyond the scope of this work to explore the effects of that parameter <cit.>.
Grain chemistry. Grain-grain interactions are omitted in this grid. For conditions where the velocity is below ∼ 25 km s^-1 and the density is below ∼ 10^5 cm^-3, this assumption is likely valid <cit.>. At larger velocities or densities, grains may interact, leading to grain evaporation and fragmentation which changes the size distribution of grains. Finally, in this grid we do not include ice mantles on the grains.
§ RESULTS AND DISCUSSION
The shock has an initial kinetic energy flux of 1/2 ρ v_s^3, where ρ = 1.4 n_H m_H is the mass density; most of this energy is radiated away in the shock. Figure <ref> shows how the energy is lost in shocks with b = 0.1, velocities of 20 and 30 km s^-1, and densities of 10^4 and 10^6 cm^-3. The pie charts are sorted by initial kinetic energy flux going from left to right, and top to bottom. The H_2 fraction decreases with increasing velocity and density because of dissociation. H_2 then reforms on the grains in the postshock gas, introducing a heating term that counteracts the cooling by H_2. This is visible in the pie charts as the fraction of H_2 emission decreasing monotonically with input kinetic energy flux, from 75% to 0.5%.
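For reference, the input kinetic energy fluxes behind these four pie charts can be evaluated directly; the short Python sketch below does so in cgs units, with the hydrogen mass as the only assumed constant.

```python
M_H = 1.6735e-24                      # hydrogen mass (g)
KM = 1.0e5                            # cm per km
for n_H in (1e4, 1e6):                # preshock density (cm^-3)
    for v_s in (20.0, 30.0):          # shock velocity (km s^-1)
        flux = 0.5 * 1.4 * n_H * M_H * (v_s * KM) ** 3
        print(f"n_H = {n_H:.0e} cm^-3, v_s = {v_s:.0f} km/s : "
              f"{flux:.2g} erg cm^-2 s^-1")
# -> roughly 0.09, 0.3, 9, and 32 erg cm^-2 s^-1, i.e. a spread of a factor ~340
#    between the first and last pie chart.
```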
Figure <ref> is similar to Fig. <ref>, but for a stronger magnetic field (b = 1.0), i.e., the input kinetic energy fluxes are the same as above. Increasing b to 1 has the consequence that the two 20-km s^-1 shocks become C-type shocks; the 30-km s^-1 shocks remain J-type shocks. The J-type shocks are dissociative, and the H_2 cooling fraction thus decreases significantly, as also illustrated in Fig. <ref>.
The distribution of energy flux into emission lines has been described previously <cit.>, and a comparison of the H_2 cooling fractions of the total input kinetic energy flux reveals broad agreement between different models and previous versions of the Paris-Durham model. These pie charts provide a global view of the energetic reprocessing in these shocks. In the following, the role of the different input parameters in the energetic reprocessing is discussed in more detail, with a specific emphasis on H_2 emission.
§.§ Magnetic field
The strength of the transverse magnetic field, B, sets the ion-magnetosonic speed, c_ ims, together with the ion mass density, ρ_ i:
c_ims = (c_s^2 + B^2 / 4πρ_i)^1/2,
where c_s is the sound speed. For v_s < c_ims, the ionized and neutral fluids are decoupled and a magnetic precursor is present <cit.>; the code treats these multiple fluids self-consistently. For v_s > c_ims, the ionized and neutral fluids are coupled, and there is no magnetic precursor (Fig. <ref>). We refer to Sect. 2.1 of <cit.> for a more in-depth description of the differences between J- and C-type shocks. Figure <ref> shows the shock type as a function of b and v_s for a density of 10^4 cm^-3, and Fig. <ref> shows the shock type for a part of the grid presented in this paper. For low values of b (≲0.3), the resulting shocks are J-type, while for b ≳ 1.0 the resulting shocks are predominantly C-type.
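A rough numerical illustration of the equation above is sketched below in Python. The charged-fluid mass density is crudely approximated here by the mass carried by (charged) grains, with an assumed dust-to-gas mass ratio of 0.01, and the sound speed is fixed at 0.3 km s^-1; both are simplifying assumptions made for illustration, not the treatment used internally by the code.

```python
import numpy as np

M_H = 1.6735e-24                                    # hydrogen mass (g)

def c_ims(n_H, b, dust_to_gas=0.01, c_s=0.3e5):
    """Ion magnetosonic speed (cm/s) with B = b * sqrt(n_H [cm^-3]) microgauss.

    rho_i is approximated by the grain mass density, dust_to_gas * 1.4 n_H m_H
    (an assumption); molecular ions contribute negligible mass in comparison.
    """
    B = b * np.sqrt(n_H) * 1.0e-6                   # gauss
    rho_i = dust_to_gas * 1.4 * n_H * M_H           # charged-fluid mass density (g cm^-3)
    return np.sqrt(c_s ** 2 + B ** 2 / (4.0 * np.pi * rho_i))

for b in (0.1, 1.0):
    print(f"b = {b}: c_ims ~ {c_ims(1e4, b) / 1e5:.1f} km/s")
# -> roughly 2 and 18 km/s; in this limit c_ims is nearly independent of n_H
#    because B^2 and rho_i both scale linearly with the density.
```

With these assumed values, a 20 km s^-1 shock is strongly super-magnetosonic at b = 0.1 but sits close to the C/J boundary at b = 1, qualitatively in line with the behaviour of the grid described above.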
The effect of the magnetic precursor is that the input kinetic energy flux is deposited over a much larger spatial range (Fig. <ref>), resulting in lower peak temperatures compared to shocks with the same input kinetic energy flux but no magnetic precursor. This naturally affects the excitation of H_2, as illustrated in Fig. <ref> in the form of the ratio of total integrated intensity to initial kinetic energy flux. The H_2 excitation is illustrated for the two reference shocks (Table <ref>), both with the same input kinetic energy flux. The figure demonstrates that for both shocks most of the kinetic energy is radiated away in H_2 emission (see Figs. <ref> and <ref>); the difference in total integrated H_2 intensity between the two shocks is ∼ 15%. However, the integrated intensity from model B (b=1.0) is dominated by pure rotational emission (> 99% of the H_2 emission), whereas it is spread over the vibrational levels in model A (b=0.1).
The differences in H_2 excitation and the origin thereof for different values of b are further explored in Fig. <ref> for models A and B in the left and right column, respectively. The first row shows the emerging H_2 spectrum from the two shocks. As was already clear from Fig. <ref>, most of the H_2 emission in model A is spread over the vibrational transitions, whereas the emission in model B is predominantly rotational. To make these artificial spectra, a uniform resolving power of R = λ/Δλ = 2500 is assumed, similar to the resolving powers of the NIRSpec and MIRI instruments on JWST, and the line shapes are Gaussian. That is, the integrated intensity calculated in the models is I_total = √π I_peak Δλ / (2√(2 ln 2)). A uniform resolving power implies that the emission from longer-wavelength transitions is spread over a larger wavelength range, and thus the peak emission is lower. This stark difference in the H_2 spectra can be understood from the physical structure of the shock.
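The construction of such a spectrum from tabulated integrated intensities can be sketched as follows in Python. The three lines and their intensities are placeholders, and the Gaussians are normalized in the standard way with the FWHM set to λ/R, which is an assumption that reproduces the scaling described above up to the exact normalization convention.

```python
import numpy as np

R = 2500.0                                     # lambda / d(lambda), as for NIRSpec/MIRI
lines = [(2.1218, 1.0),                        # v = 1-0 S(1)  (placeholder intensities)
         (17.035, 0.6),                        # v = 0-0 S(1)
         (28.219, 0.2)]                        # v = 0-0 S(0)

wav = np.linspace(1.0, 30.0, 300001)           # wavelength grid (micron)
spec = np.zeros_like(wav)
for lam0, I_tot in lines:
    sigma = (lam0 / R) / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM = lambda / R
    I_peak = I_tot / (sigma * np.sqrt(2.0 * np.pi))           # Gaussian normalization
    spec += I_peak * np.exp(-0.5 * ((wav - lam0) / sigma) ** 2)
# The 0-0 S(0) line at 28.2 micron is ~13 times wider than the 1-0 S(1) line at
# 2.12 micron, so for equal integrated intensity its peak would be ~13 times lower.
```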
The kinetic energy flux injected into the two shocks is the same, but the temperature structure is very different. For J-type shocks, such as model A, the maximum temperature can be approximated by <cit.>:
T_max = 53 K (v_s / 1 km s^-1)^2.
For model A, the maximum temperature is ∼ 2×10^4 K (Fig. <ref>, second row). This high temperature ensures that the vibrational H_2 levels are readily populated. For model B (b = 1.0), on the other hand, the magnetic precursor causes the kinetic energy to be deposited over a much larger scale (∼ 10^3 AU vs. ∼ 1 AU), and the resulting peak temperature is much lower (∼ 2000 K). In this case, the temperature is so low that only the rotational levels are significantly excited.
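As a quick numerical check of the scaling above: a v_s = 20 km s^-1 J-type shock gives T_max ≈ 53 K × 20^2 ≈ 2.1 × 10^4 K, in line with the peak temperature quoted for model A, whereas reproducing the ∼2000 K peak of model B with the same input kinetic energy flux requires that energy to be dissipated over the much larger scale of the magnetic precursor.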
The third row of Fig. <ref> shows excitation diagrams for the two shocks. For model A, all points fall on a single curved line, indicating that the levels probe a range of excitation temperatures, T_ex. In particular, the higher-J and rovibrational transitions probe hotter gas than the lower-J transitions, and the slope is thus shallower (slope = −1/T_ex). In this case, the excitation temperature is similar to the gas temperature where the local emissivity peaks (second row of Fig. <ref>). The excitation diagram for model B shows more scatter (caused by the low initial o/p ratio, see below), but the excitation temperatures still match the gas kinetic temperature where the levels are excited. In Appendix <ref> we provide figures showing the extracted excitation temperatures sampling the full range of initial density and shock velocity for b = 0.1 and 1.0, and G_0 = 0 and 1.
Another feature of the excitation diagram for model B is that there is a clear difference between the ortho- and para-levels of H_2. Here the ortho-levels (odd J) are displaced downward compared to the corresponding para-levels (even J), and the resulting zigzag pattern indicates that the ortho/para (o/p) ratio is lower than the high-temperature statistical equilibrium value of 3 <cit.>.
There are no radiative or collisional transitions between ortho- and para-H_2 levels; only exchange reactions with H, H_2, and protonated ions (e.g., H_3^+, HCO^+) can change the spin state <cit.>. The line emission and the resulting excitation diagram are integrated through the shock, and thus do not provide information on the local o/p ratio. The latter is calculated directly from the level populations as n_o / n_p, and it can be compared to the cumulative column density ratio, N_o / N_p. Both of these values are shown in the bottom row of Fig. <ref>. The column density ratio is often dominated by the column densities of H_2 in the two lowest rotational levels, J = 0 and 1, which are not accessible in emission. Therefore, we also show the o/p ratio calculated from the column densities of the lowest observable rotational levels, in this case the J = 2–9 levels (S(0) to S(7) transitions). In model A, the temperature is high enough that the H exchange reaction H_2^para + H → H_2^ortho + H proceeds efficiently <cit.>. The resulting o/p ratios are thus close to 3, although the inferred rotational o/p is somewhat lower than 3 (∼ 1). For model B, the temperature never gets high enough that the exchange reactions with H become dominant; instead, the ion-neutral proton-transfer reactions dominate, but they are limited by the low abundances of ions. Thus, the o/p ratios remain at ∼ 0.1. In both models, the initial temperature is 10 K and the gas is dense, which leads to a steady-state o/p ratio of 10^-3 <cit.>. Had the initial temperature been higher or the gas not been in steady state, the initial o/p ratio would have been higher, and the o/p ratio through the shock would have been correspondingly higher. All in all, special care must be taken when interpreting o/p ratios inferred from observations <cit.>.
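To make the connection to the tabulated quantities explicit, the following Python sketch builds a rotational excitation diagram from a set of level column densities and extracts both an excitation temperature and a crude J = 2–9 ortho-to-para column ratio. The column densities listed here are placeholders rather than grid values, and summing odd- over even-J columns is only a rough proxy for a full o/p determination.

```python
import numpy as np

# Level energies E/k_B (K) and placeholder column densities N (cm^-2) for J = 2-9.
E_K = {2: 510.0, 3: 1015.0, 4: 1682.0, 5: 2504.0,
       6: 3474.0, 7: 4586.0, 8: 5829.0, 9: 7197.0}
N = {2: 3e19, 3: 4e18, 4: 1.5e18, 5: 3e17,
     6: 8e16, 7: 2e16, 8: 4e15, 9: 1e15}

J = np.array(sorted(E_K))
E = np.array([E_K[j] for j in J])
g = (2 * J + 1) * np.where(J % 2 == 1, 3, 1)        # (2J+1)(2I+1), I = 1 for ortho
y = np.log(np.array([N[j] for j in J]) / g)

slope, _ = np.polyfit(E, y, 1)                      # ln(N/g) = const - E/(k_B T_ex)
T_ex = -1.0 / slope
op_rot = (sum(N[j] for j in J if j % 2 == 1) /
          sum(N[j] for j in J if j % 2 == 0))
print(f"T_ex ~ {T_ex:.0f} K, rotational o/p (J = 2-9) ~ {op_rot:.2f}")
```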
As mentioned above, the input kinetic energy flux is deposited over a larger spatial range for increasing values of b. Specifically, a “phase transition” occurs when the resulting shock type goes from J- to C-type and a magnetic precursor develops. This typically happens at higher values of b or lower velocities (Fig. <ref> shows which physical conditions lead to which shock type). The ionization fraction also enters into setting the shock type (Eq. <ref>), but the gas is primarily neutral for the conditions examined here, so in practice it has little influence. To measure the width and to make it a usable observational constraint, we have extracted the scale over which 80% of the H_2 emissivity is generated for a subset of lines: the v = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines. These widths are shown in Fig. <ref> together with the integrated intensities of the lines; here we show the widths of the v = 0–0 S(1) and 1–0 S(1) emitting regions. The shocks with b = 0.1 all have widths less than 10 AU, whereas the b = 1 shocks have widths up to ∼ 10^5 AU or ∼ 1 pc. For these shocks, there is an anticorrelation between the width and the integrated intensity: the wider shocks have lower integrated intensities. The J-type shocks occurring for b = 1 and v_s ≥ 25 km s^-1 have larger widths than their b = 0.1 counterparts by one order of magnitude. Even though these are J-type shocks, the magnetic field still plays a significant role.
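One way to extract such a width from a model's local emissivity profile is sketched below (Python; a central 80% interval is assumed here, and the exact convention behind the tabulated widths may differ):

import numpy as np

def emission_width(z, emissivity, fraction=0.8):
    """Scale over which `fraction` of a line's emission is generated.
    z : distance through the shock (e.g. AU); emissivity : local line emissivity
    on the same grid. Returns the width of the central interval containing
    `fraction` of the integrated emission."""
    dz = np.diff(z, prepend=z[0])
    cum = np.cumsum(emissivity * dz)
    cum /= cum[-1]                                    # normalised cumulative emission
    lo = np.interp((1.0 - fraction) / 2.0, cum, z)    # 10% point for fraction = 0.8
    hi = np.interp((1.0 + fraction) / 2.0, cum, z)    # 90% point
    return hi - lo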
§.§ Velocity and density
The shock velocity, v_s, sets the maximum temperature in J-type shocks (Eq. <ref>). H_2 excitation is sensitive to temperature, and so the velocity effectively sets the excitation. This is seen in the simulated spectra (Fig. <ref>). At the lowest velocity (5 km s^-1), the integrated intensity is low and only a few rotational lines are seen in the spectrum. In contrast, at velocities ≳ 20 km s^-1 we see rich vibrational H_2 spectra. At the same time the peak specific intensity increases by a factor of ∼10, until the velocity reaches 30 km s^-1 and the shock becomes dissociative. In this case, H_2 only contributes to the cooling once it has reformed on the grains. Thus, to first order, the velocity sets the excitation in J-type shocks, while the density mainly sets the total integrated intensity.
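For orientation, the standard strong-shock (adiabatic, γ = 5/3) jump condition gives a peak post-shock temperature of T_peak ≈ (3/16) μ m_H v_s^2 / k_B ≈ 5×10^3 (v_s / 10 km s^-1)^2 K for a mean molecular weight μ ≈ 2.33; a 20 km s^-1 J-type shock thus reaches ∼2×10^4 K, while at ≳30 km s^-1 the post-shock gas becomes hot enough to collisionally dissociate H_2, in line with the behaviour described above. This estimate is quoted for orientation only and may differ in detail from the expression of Eq. <ref> used in the code.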
In C-type shocks, the combination of density and velocity is what affects the excitation and the integrated intensity (Fig. <ref>, bottom panel). This is illustrated in the top row of Fig. <ref>, which shows the total H_2 integrated intensity as well as the brightest line. Here, the brightest line serves as a proxy for the excitation, in the sense that the more highly excited the brightest line is, the higher the overall excitation. For the C-type shocks (orange dots), there is a clear intensity and excitation gradient that depends on both density and velocity. The brightest lines are rotational over the bulk of parameter space (from 0–0 S(0) to S(6)), and they are typically para-H_2 transitions (even J). For the J-type shocks (blue dots), the intensity gradient is dominated by the density, as discussed above. However, the brightest lines quickly become vibrational; the v = 1–0 Q(1) line (2.41 μm) is predicted to be particularly bright, as is the v = 1–0 S(3) line (1.96 μm). Thus, identifying the brightest line in the H_2 spectrum provides constraints on where in parameter space the shock is located. Appendix <ref> provides an overview of the dominant cooling lines across the grid.
The H_2 fraction in the gas is highest at the lower densities and lower velocities where H_2 does not dissociate. However, for a given velocity, the total H_2 integrated intensity increases monotonically with density, as shown in Fig. <ref>. This is despite the fact that the fraction of the input kinetic energy flux radiated by H_2 decreases monotonically. Thus, for the shocks with the brightest H_2 emission, other molecules and atoms are needed to trace the bulk deposition of kinetic energy. Examples include emission from CO and H_2O at lower velocities, and O, S, and H at higher velocities.
§.§ UV radiation field
In an externally UV-irradiated shock, the UV photons lead to increased gas ionization and thus a higher density of the charged fluid. This increase causes a tighter coupling between the neutral and charged fluids, which in turn leads to the kinetic energy typically being deposited over shorter scales than in the absence of external UV radiation. Thus, the temperature typically increases and the shocks become narrower <cit.>. The increased temperature naturally causes higher excitation of H_2, as illustrated in the H_2 spectra in Fig. <ref>. Here, the shock in model B, showing pure rotational excitation of H_2, is exposed to increasing strengths of an external UV field, from G_0 = 0 to 10^3. The increase in temperature (from 1700 K to 2800 K) leads to an increase in excitation, and the vibrational levels start to become populated.
The second effect of the UV field is to deposit additional energy into the shock <cit.>. This deposition is either indirect, in the form of ionization followed by recombination and release of binding energy, or direct, where UV photons excite H_2 electronically and the molecules subsequently de-excite radiatively. It is clear that for the highest values of G_0, the additional energy input is significant. This is illustrated in Fig. <ref>. Here, the energy radiated away by H_2 as a function of vibrational level is shown for model B, similar to Fig. <ref>. In this case, model B is exposed to stronger UV fields, and the higher vibrational levels are excited, as also seen in Fig. <ref>. The total fraction of energy lost in H_2 emission increases almost monotonically from 0.63 to 1.07 of the input kinetic energy flux. Thus, at least 7% of the excitation is caused by the UV field, and likely more, as there are other channels of energy loss (Fig. <ref>). For a quantitative description of the role of UV pumping on the H_2 level populations, we refer to Fig. 8 of <cit.>.
Even for relatively weak UV field strengths (e.g., G_0 = 1), the UV photons may play a significant role. Figure <ref> is similar to Fig. <ref> in that the top panels show the total amount of H_2 emission and the strongest H_2 line. For the weak shocks (low density, low velocity), one major difference is seen when the UV field is turned on: in the absence of external UV radiation, the brightest lines are all para-H_2 lines (even J) because there is no significant para- to ortho-H_2 conversion. For the weak UV field, the strongest lines are predominantly ortho-lines (odd J), which is consistent with observations of the diffuse gas in colliding galaxies <cit.>. This suggests that interstellar shocks in general are not fully shielded, but exposed to some UV radiation.
§.§ H_2 excitation for JWST observers
JWST represents an increase in sensitivity and in spatial and spectral resolution of more than an order of magnitude over previous infrared space-based telescopes <cit.>. Here we outline some of the ways in which the models may be used to plan and interpret JWST observations of shocked regions, keeping in mind the model limitations listed in Sect. <ref>.
H_2 spectroscopy. The spectroscopic capabilities of NIRSpec and MIRI make them perfectly suited for observing H_2 line emission. The excitation of H_2 is the result of a complex interplay between various input parameters, as discussed above, with some degeneracies, especially between the density and shock velocity. This is for example illustrated in Fig. 13 of <cit.>, where observations of H_2 emission from the explosive Orion-KL protostellar outflow are analyzed. With high enough spectral resolution, independent constraints can be made on the shock velocity, thus directly breaking the degeneracy <cit.>.
It will likely not be possible to strongly constrain shock conditions from H_2 observations alone, unless observers restrict themselves to subgrids of physical parameters relevant to their studies. For example, if shocks in diffuse clouds are studied, only the lowest densities in the grid would be relevant. Furthermore, in a large number of cases, G_0 can be independently constrained, for example by studying ionized gas lines, UV continuum observations, or PAH features at infrared wavelengths. Observers should also be aware that, in shock-dominated environments, the total H_2 line emission in a given beam is likely the product of a distribution of shocks arising from a multiphase medium with different conditions. Shock probability distributions convolved with grids of shock models have, for example, been used to interpret H_2 observations of the shocked intragroup diffuse gas in colliding galaxies <cit.>.
Shock width. The NIRCam instrument on JWST is well-suited for observing H_2 emission. The instrument contains three categories of filters: narrow-, medium-, and wide-band. Of the narrowband filters, three center on H_2 lines: F212N (v = 1–0 S(1)), F323N (v = 1–0 O(5)), and F470N (v = 0–0 S(9)). The spatial resolution ranges from 0.07″ to 0.16″, corresponding to linear scales of 14 and 32 AU at a distance of 200 pc, a typical distance to nearby star-forming regions. As illustrated in Fig. <ref>, the width of shocks with b = 1.0 is typically resolvable if the shock is observed close to edge-on, except at the highest densities (≳10^7 cm^-3 for C-type shocks, and ≳10^6 cm^-3 for J-type shocks). Shocks with b = 0.1 are not resolvable at a distance of 200 pc. A measured shock width puts additional constraints on the shock models: the width is sensitive to the strength of the transverse magnetic field and thus serves as an independent constraint on this parameter <cit.>. Besides NIRCam, the MIRI IFU offers the possibility of producing spectral line maps of H_2 emission at 160 AU (0.5″) spatial resolution at a distance of 200 pc for the 0–0 S(1) line at 17 μm. Emission from this line traces colder gas, and so is typically more extended than the higher-excited lines shown in Fig. <ref>. This resolution is therefore still enough to resolve shock-dominated line emission from dissipative regions in nearby star-forming clouds <cit.>.
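As a quick rule of thumb for such planning, the angular size of a shock of width w at distance d is θ[″] ≈ w[AU] / d[pc]: a 14 AU wide shock at 200 pc subtends 14/200 = 0.07″, matching the sharpest NIRCam resolution quoted above, whereas a 10^3 AU wide C-type shock subtends 5″ and is easily resolved.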
H_2 photometry. As shown in Fig. <ref>, the NIRCam and MIRI imaging filters include multiple rovibrational and rotational H_2 lines, so the use of those filters may prove efficient in terms of exposure time and mapping area. Such observations may be used to constrain shock conditions. As an example, Figs. <ref> and <ref> show the brightest lines for a given set of initial conditions. Thus, if an observed region is dominated by shocked H_2 emission, it might be possible to broadly constrain the range of parameter space where the emission is generated. That is, with the model results in hand, the user can construct “H_2 photometry” which can be compared to observations, assuming H_2 emission dominates the spectrum and the contribution from, e.g., PAH emission is negligible, or assuming that a combination of filters can be used to remove the contribution of the continuum emission. A similar approach has been shown to work efficiently for the wideband MIRI filters for observations of the colliding galaxies in Stephan's Quintet <cit.>.
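A minimal sketch of such synthetic “H_2 photometry” is given below (Python): predicted line intensities are summed after weighting by a filter transmission curve. The file names are placeholders; real throughput curves should be taken from the JWST documentation.

import numpy as np

def h2_photometry(line_wave_um, line_intensity, filt_wave_um, filt_throughput):
    """Throughput-weighted sum of H2 line intensities falling in one filter.
    line_wave_um : line wavelengths (micron); line_intensity : integrated
    intensities (erg s^-1 cm^-2 sr^-1); filt_* : filter transmission curve."""
    T = np.interp(line_wave_um, filt_wave_um, filt_throughput, left=0.0, right=0.0)
    return np.sum(T * np.asarray(line_intensity))

# Hypothetical usage, comparing one model to an F212N image:
# wave, I = np.loadtxt("model_h2_lines.dat", unpack=True)     # placeholder file
# fw, ft = np.loadtxt("F212N_throughput.dat", unpack=True)    # placeholder file
# print(h2_photometry(wave, I, fw, ft))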
H_2 summary. Table <ref> summarizes what sets the H_2 integrated intensity and excitation. The table is by no means exhaustive, but may be used as an overview guide to H_2 emission in shocks. To constrain the excitation properly, it is necessary to cover as large a wavelength range as possible, including both rotational and rovibrational lines. The former are predominantly excited in C-type shocks, the latter in J-type shocks. Once a solution has been found that approximately reproduces the observations, we recommend that the user fine-tune the grid further for more precise solutions. This can be done by interpolating the grid values, in which case care must be taken when crossing from one shock type to another, or by downloading the code and running dedicated shock models, in which case we recommend benchmarking the results against the models presented here as a first step. Finally, we recommend that the total integrated intensity of the H_2 lines be compared to the total available mechanical energy output from a given source, to ensure that the best-fit shock model is physical <cit.>.
Atomic lines. Apart from H_2 emission, the model calculates line emission from several other atomic and ionic species. As an example, JWST-MIRI will observe the [S I] line at 25 μm <cit.>, and the integrated intensity of this line is calculated and tabulated across the grid. The same applies to lines from other species, e.g., O and C. Naturally, these lines light up in different parts of parameter space than H_2, and thus provide complementary information.
Other emission lines. The abundances of some 140 other species have been calculated through the shock. Examples of particular relevance to JWST and shocks include Fe^+, OH, and H_2O, because these species have a number of transitions in the NIRSpec and MIRI wavelength ranges and are among the dominant coolants (e.g., Figs. <ref> and <ref>). The abundance, temperature, and density profiles are calculated through the shock, which means that they can be post-processed to calculate integrated line intensities using, for example, a large-velocity-gradient (LVG) radiative transfer code <cit.>; this has not been done for the present grid. Just as for the atomic lines, these will provide complementary observational constraints.
§ SUMMARY
Here we present the results of an extensive grid of plane-parallel steady-state shock models. The grid was constructed by varying six parameters: the preshock density, shock velocity, strength of the transverse magnetic field, strength of the UV field impinging on the shock, the cosmic-ray-ionization rate, and the PAH abundance. This is the first time such an extensive grid of shock models has been run and made publicly available.
The purpose of running this grid of models was to examine under which shock conditions H_2 is efficiently excited, and how shock conditions affect the H_2 excitation and integrated line intensities. H_2 is already being extensively observed with JWST, and the coming years will see a flood of H_2 observations. Such a grid is therefore critical for planning and interpreting JWST observations.
We find that the strength of the transverse magnetic field, as quantified by the magnetic scaling factor, b, plays a key role in the excitation of H_2. At low values of b (≲ 0.3, J-type shocks), H_2 excitation is dominated by vibrationally excited lines, whereas at higher values (b ≳ 1, C-type shocks) rotational lines dominate the spectrum for shocks without an external radiation field. Shocks with b ≥ 1 can potentially be spatially resolved with JWST for nearby objects, which serves as an additional constraint.
H_2 is typically the dominant coolant at lower densities (≲ 10^4 cm^-3); at higher densities, other molecules such as CO, OH, and H_2O take over at velocities ≲ 20 km s^-1, while atoms, for example H, O, and S, dominate at higher velocities. Together, the velocity and density set the input kinetic energy flux. When this increases, the excitation and integrated intensity of H_2 increase accordingly.
An external UV field mainly serves to increase the excitation, particularly for shocks where the input radiation energy is comparable to or greater than the input kinetic energy flux. Together, these results provide an overview of the energetic reprocessing of input energy and the resulting H_2 line emission observable by JWST.
We would like to thank F. Boulanger and S. Cabrit for stimulating discussions, particularly at the beginning of this project, as well as J. A. Villa Vélez. The research leading to these results has received funding from the European Research Council, under the European Community’s Seventh framework Programme, through the Advanced Grant MIST (FP7/2017–2022, No. 742719). The grid of simulations used in this work has been run on the computing cluster Totoro of the ERC MIST, administered by MesoPSL. We would also like to acknowledge the support from the Programme National “Physique et Chimie du Milieu Interstellaire” (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. The research of LEK is supported by a research grant (19127) from VILLUM FONDEN. PG would like to thank the Sorbonne University, the Institut Universitaire de France, the Centre National d'Etudes Spatiales (CNES), the “Programme National de Cosmologie and Galaxies” (PNCG). This work has made use of the Paris-Durham public shock code V1.1, distributed by the CNRS-INSU National Service “ISM Platform” at the Paris Observatory Data Center[<http://ism.obspm.fr>].
§ THE ISM PLATFORM
The ISM platform[<http://ism.obspm.fr>] is a web portal that contains a series of services developed for the diffusion of state-of-the-art astrochemical models and the preparation and interpretation of observations. Regarding the Paris-Durham shock code, the platform provides access to the numerical code and its previous versions, a full documentation of the physical processes implemented, a tutorial to learn how to run the code locally, and a series of selected references. The platform also provides two analysis tools, IDAT and the Chemistry Analyzer tool, which can be used to study the output of the shock code and identify the processes responsible for the thermochemical evolution of the gas in a simulation. Finally, the platform contains a numerical database (InterStellar Medium DataBase or ISMDB) that provides an easy access to recalculated grid of theoretical models.
On this platform it is possible to “Search models in ISMDB” and from there “Browse models.” This leads to a page where combinations of input shock parameters can be specified, and once the selection has been made, it is possible to “Get model.” The resulting page shows the input parameters as well as some of the resulting quantities (e.g., shock type). The entire model output can be downloaded for further analysis, or the model can be quickly inspected directly through “Online analysis with IDAT.” This tool allows the user to select different quantities and plot them against distance through the shock on one or two different y-axes if so desired. An example could be the velocities through the shock as well as the temperature.
§ TABLES WITH EXTRACTED PARAMETERS
Here we provide example tables of the physical quantities already extracted from the grid (Tables <ref> – <ref>). These tables are available at the CDS in electronic format and include:
<ref> Physical quantities such as peak temperature, density, width, and age of the shock;
<ref> Column densities of relevant species, particularly H, H_2, O, OH, H_2, C^+, C, and CO;
<ref> Data required for creating H_2 excitation diagrams, i.e., ln(N/g) and E for each of the 150 levels;
<ref> H_2 integrated intensities of the 1000 lines extracted, along with their wavelength;
<ref> Width of the H_2 emitting zone for the v = 0–0 S(1), 1–0 S(1), 0–0 S(9), 1–0 O(5), and 2–1 S(1) lines;
<ref> H_2 o/p ratios determined both locally and integrated through the shock;
<ref> Integrated line intensities of 29 transitions arising from C^+, Si^+, H, C, Si, O, S^+, N^+, N, and S.
An energy cutoff of 99.9% was used to define the point up to which integrated quantities (e.g., line intensities, column densities) are computed (Sect. <ref>). Tests were performed using cutoffs at 95%, 99%, 99.9%, 99.99%, and 99.999%. The two lower values (95 and 99%) did not capture the H_2-emitting zone, particularly in strong CJ-type shocks where the temperature exceeds 10^5 K. The difference between the 99.9% and 99.99% cutoffs was on the order of a few percent in terms of H_2 integrated line intensities for the v = 0–0 S(1), 1–0 S(1), and 2–1 S(1) transitions for most shock conditions. Thus, a threshold of 99.9% ensured that most of the H_2 radiative cooling zone was encompassed.
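For reference, such a cutoff point can be located from the cumulative energy budget of a model output, for instance as in the following Python sketch (the variable names and the exact quantity used for the cumulative budget are assumptions):

import numpy as np

def cutoff_position(z, dissipation_rate, fraction=0.999):
    """Distance through the shock by which `fraction` of the total dissipated
    energy flux has been reached; z and dissipation_rate share the same grid."""
    dz = np.diff(z, prepend=z[0])
    cum = np.cumsum(dissipation_rate * dz)
    idx = np.searchsorted(cum, fraction * cum[-1])
    return z[min(idx, len(z) - 1)]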
§ ADDITIONAL FIGURES
§.§ Excitation temperatures
Excitation temperatures have been extracted and calculated for a subset of the grid. Figures <ref> and <ref> show these temperatures calculated from the v = 0, J = 3–5 levels (S(1) to S(3) transitions) and the v = 0, J = 6–11 levels (S(4) to S(9) transitions), respectively. The excitation temperatures are shown for b = 0.1 and 1, and G_0 = 0 and 1. Figures <ref> and <ref> show excitation temperatures for the v = 1, J = 0–8 and v = 2, J = 0–8 vibrationally excited levels.
§.§ Cosmic ray ionization rate
In the model, cosmic rays may ionize H_2 and other species. When these species, primarily H_2, recombine, secondary UV photons are emitted. Direct excitation by cosmic rays is not included. In this manner, cosmic rays serve as an additional source of both ionization and energy input. The expectation is that they impact the H_2 emission in a similar way to an external UV field; their impact, however, is smaller than that of UV radiation. This is illustrated in Fig. <ref>, where the integrated line intensities of three representative lines are shown as a function of the cosmic-ray ionization rate, ζ_ H2, for model B. In this case, the PAH abundance is set to 10^-8. In the absence of external UV radiation, the integrated intensity increases by ∼ one order of magnitude when ζ_ H2 increases by two orders of magnitude. For G_0 = 1, there is practically no change in intensity over the same range of ζ_ H2; however, the vibrationally excited lines are significantly brighter than for the shocks without an external radiation field.
§ DOMINANT COOLING LINES
It is natural, when examining such a large grid, to identify the dominant H_2 cooling lines, that is, the H_2 lines most likely to be observed for a given set of input parameters. One way of identifying these lines for the entire grid is to go through each model and tabulate the lines with integrated intensities greater than 25% of the maximum intensity. This arbitrary cutoff is chosen from the perspective that if the strongest line is detected at 20σ, then these lines would also be detectable at the 5σ level. Next, the lines are sorted according to which ones are present in the largest number of models, i.e., which are typically the dominant cooling lines from a global perspective. The lines that are present in at least 25% of the models are tabulated in Table <ref>.
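The selection procedure just described can be sketched as follows (Python; the data structure holding the per-model line intensities is an assumption, not the released table format):

from collections import Counter

def dominant_lines(models, bright_threshold=0.25, min_model_fraction=0.25):
    """Globally dominant H2 cooling lines.
    models : list of dicts mapping line name -> integrated intensity (one per model).
    A line is 'bright' in a model if it exceeds `bright_threshold` times that
    model's brightest line; lines bright in at least `min_model_fraction` of all
    models are returned, sorted by how often they are bright."""
    counts = Counter()
    for lines in models:
        peak = max(lines.values())
        counts.update(name for name, I in lines.items() if I >= bright_threshold * peak)
    n = len(models)
    return [(name, c / n) for name, c in counts.most_common() if c / n >= min_model_fraction]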
Twenty-four lines are present in at least 25% of models. The lines are either v = 0–0 or 1–0 transitions; the higher-excited levels are clearly not sufficiently populated over the majority of the grid. Some of the lines in Table <ref> are observable from the ground, for example, the often bright v = 1–0 S(1) line at 2.12 μm, but the majority of the lines are not (17/24 lines). All lines are, however, observable with the JWST. Eighteen lines are observable with NIRSpec, while seven are observable with MIRI. At 5.06 μm, the v = 0–0 S(8) line is observable with both instruments, and could serve as a cross-calibrator between the two instruments.
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding

Hao Zheng, Regina Lee, Yuqian Lu

July 9, 2023

==========================================================================================
Understanding comprehensive assembly knowledge from videos is critical for futuristic ultra-intelligent industry. To enable technological breakthroughs, we present HA-ViD – the first human assembly video dataset that features representative industrial assembly scenarios, a natural procedural knowledge acquisition process, and consistent human-robot shared annotations. Specifically, HA-ViD captures diverse collaboration patterns of real-world assembly, natural human behaviors and learning progression during assembly, and granulates action annotations to subject, action verb, manipulated object, target object, and tool. We provide 3222 multi-view, multi-modality videos (each video contains one assembly task), 1.5M frames, 96K temporal labels and 2M spatial labels. We benchmark four foundational video understanding tasks: action recognition, action segmentation, object detection and multi-object tracking. Importantly, we analyze their performance for comprehending knowledge in assembly progress, process efficiency, task collaboration, skill parameters and human intention. Details of HA-ViD are available at: <https://iai-hrc.github.io/ha-vid>
§ INTRODUCTION
Assembly knowledge understanding from videos is crucial for futuristic ultra-intelligent industrial applications, such as robot skill learning <cit.>, human-robot collaborative assembly <cit.> and quality assurance <cit.>. To enable assembly video understanding, a video dataset is required. Such a video dataset should (1) represent real-world assembly scenarios and (2) capture the comprehensive assembly knowledge via (3) a consistent annotation protocol that aligns with human and robot assembly comprehension. However, existing datasets cannot meet these requirements.
First, the assembled products in existing datasets are either too scene-specific <cit.> or lack typical assembly parts and tools <cit.>. Second, existing datasets did not design assembly tasks to foster the emergence of natural behaviors (e.g., varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition. Third, thorough understanding of nuanced assembly knowledge is not possible via existing datasets as they fail to annotate subjects, objects, tools and their interactions in a systematic approach.
Therefore, we introduce HA-ViD: a human assembly video dataset recording people assembling the Generic Assembly Box (GAB, see Figure <ref>). We benchmark on four foundational tasks: action recognition, action segmentation, object detection and multi-object tracking (MOT), and analyze their performance for comprehending application-oriented knowledge. HA-ViD features three novel aspects:
* Representative industrial assembly scenarios: GAB includes 35 standard and non-standard parts frequently used in real-world industrial assembly scenarios and requires 4 standard tools to assemble it. The assembly tasks are arranged onto 3 plates featuring different task precedence and collaboration requirements to promote the emergence of two-handed collaboration and parallel tasks. Different from existing assembly video datasets, GAB represents generic industrial assembly scenarios (see Table <ref>).
* Natural procedural knowledge acquisition process: The progressive observation, thought and practice process (manifested as varying efficiency, alternative assembly routes, pauses, and errors) in acquiring and applying complex procedural assembly knowledge is captured via the designed three-stage progressive assembly setup (see Figure <ref>). Such a design allows in-depth understanding of the human cognition process, which existing datasets lack (see Table <ref>).
* Consistent human-robot shared annotations: We designed a consistent fine-grained hierarchical task/action annotation protocol following a Human-Robot Shared Assembly Taxonomy (HR-SAT[HR-SAT, developed by the same authors, is a hierarchical assembly task representation schema that both humans and robots can comprehend. See details via: <https://iai-hrc.github.io/hr-sat>] , to be introduced in Section 2.3). Using this protocol, we, for the first-time, (1) granulate action annotations to subject, action verb, manipulated object, target object, and tool; (2) provide collaboration status annotations via separating two-handed annotations; and (3) annotate human pauses and errors. Such detailed annotation embeds more knowledge sources for diverse understanding of application-oriented knowledge (see Table <ref>).
§ DATASET
In this section, we present the process of building HA-ViD and provide essential statistics.
§.§ Generic Assembly Box
To ensure the dataset can represent real-world industrial assembly scenarios, we designed the GAB shown in Figure <ref>.
First, GAB[Find GAB CAD files at: <https://iai-hrc.github.io/ha-vid>.] is a 250×250×250mm box including 11 standard and 24 non-standard parts frequently used in real-world industrial assembly. Four standard tools are required for assembling GAB. The box design also allows participants to naturally perform tasks on a top or side-facing plate, closer to the flexible setups of real-world assembly.
Second, GAB consists of three plates featuring different task precedence and collaboration requirements. Figure <ref> shows the subject-agnostic task precedence graphs (SA-TPG) for the three plates with different precedence constraints. These different task precedence graphs provide contextual links between actions, enabling situational action understanding with different complexities. The cylinder plate also has more collaboration tasks, posing greater challenges for understanding collaborative assembly tasks. Gear and cylinder plates contain parts that become hidden after assembly, e.g., spacers under the gears. This introduces additional complexities for understanding assembly status.
§.§ Dataset Collection
Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench facing the participant from left, front and top views, as shown in Figure <ref>. Videos were recorded at 1280×720 RGB resolution and 512×512 depth resolution under both lab lighting and natural lighting conditions. 30 participants (15 males, 15 females) assembled each plate 11 to 12 times during a 2-hour session.
To capture the progression of human procedural knowledge <cit.> acquisition and behaviors (e.g., varying efficiency, alternative routes, pause, and errors) during learning, a three-stage progressive assembly setup is designed. Inspired by discovery learning <cit.>, we design the three stages as[The instruction files can be found at <https://iai-hrc.github.io/ha-vid>. The detailed instructions were written following HR-SAT to align assembly instructions with our annotations.]: Discovery – participants are given minimal exploded view instructions of each plate; Instruction – participants are given detailed step-by-step instructions of each plate; Practice – participants are asked to complete the task without instruction.
The first stage encourages participants to explore assembly knowledge to reach a goal, the second stage provides targeted instruction to deepen participants’ understanding, and the last stage encourages participants to reinforce their learning via practicing. During Instruction and Practice stages, the participants were asked to perform the assembly with the plate facing upwards and sideways.
§.§ Dataset Annotations
We provide temporal and spatial annotations to capture rich assembly knowledge shown in Figure <ref>.
To enable human-robot assembly knowledge transfer, the structured temporal annotations are made following HR-SAT. According to HR-SAT (shown in Figure <ref>), an assembly task can be decomposed into primitive tasks and further into atomic actions. Each primitive task and atomic action contains five description elements: subject, action verb, manipulated object, target object and tool. Primitive task annotations describe a functional change of the manipulated object, such as inserting a gear on a shaft or screwing a nut onto a bolt. Atomic actions describe an interaction change between the subject and the manipulated object, such as a hand grasping the screw or moving the screw. HR-SAT ensures annotation transferability, adaptability, and consistency.
The ST-TPGs files can be downloaded at: <https://iai-hrc.github.io/hr-sat>
We annotate human pauses and errors as null and wrong, respectively, to enable research on understanding assembly efficiency and learning progression. Our annotations treat each hand as a separate subject. Primitive tasks and atomic actions are labeled for each hand to support multi-subject collaboration research. Alongside the primitive task annotations, we annotate the two-handed collaboration status as: collaboration, when both hands work together on the same task; parallel, when each hand is working on a different task; single-handed, when only one hand is performing a task while the other hand pauses; and pause, when neither hand is performing any task. More details about the temporal annotations can be found in Supplementary Section 2.3.
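To make the annotation structure concrete, a single primitive-task label for one hand could be represented as follows (a minimal Python sketch; the field names, value strings and serialization are illustrative assumptions, not the actual schema of the released annotation files):

from dataclasses import dataclass
from typing import Optional

@dataclass
class PrimitiveTaskLabel:
    """One HR-SAT-style temporal annotation for a single hand (illustrative only)."""
    start_frame: int
    end_frame: int
    subject: str                   # e.g. "left_hand" or "right_hand"
    action_verb: str               # e.g. "insert", "screw", or "null" / "wrong"
    manipulated_object: str        # e.g. "small_gear"
    target_object: Optional[str]   # e.g. "gear_shaft"; None if not applicable
    tool: Optional[str]            # e.g. "screwdriver"; None if no tool is used
    collaboration_status: str      # "collaboration" | "parallel" | "single-handed" | "pause"

# Purely illustrative example (values are not taken from the dataset)
label = PrimitiveTaskLabel(120, 310, "right_hand", "insert",
                           "small_gear", "gear_shaft", None, "parallel")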
For spatial annotations, we use CVAT[<https://www.cvat.ai/>], a video annotation tool, to label bounding boxes for subjects, objects and tools frame-by-frame. Unlike general assembly datasets, we treat important assemblable features, such as holes, studs and USB female, as objects, to enable finer-grained assembly knowledge understanding.
§.§ Statistics
In total, we collected 3222 videos with side, front and top camera views. Each video contains one task – the process of assembling one plate. Our dataset contains 86.9 hours of footage, totaling over 1.5 million frames with an average of 1 min 37 sec per video (1456 frames). To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames. The selected videos for labeling collectively capture the dataset diversity by including videos of different participants, lighting, instructions and camera views.
Overall, our dataset contains 18831 primitive tasks across 75 classes, 63864 atomic actions across 219 classes, and close to 2M instances of subjects, objects and tools across 42 classes. Figure <ref> presents the annotation statistics of the dataset. Our dataset shows potential for facilitating small object detection research as 46.6% of the annotations are of small objects. More statistics can be found in Supplementary Section 2.4.
Our temporal annotations can be used to understand the learning progression and efficiency of participants over the designed three-stage progressive assembly setup, shown in Figure <ref>. The combined annotation of wrong primitive task, pause collaboration status and total frames can indicate features such as errors, observation patterns and task completion time for each participant. Our dataset captures the natural progress of procedural knowledge acquisition, as indicated by the overall reduction in task completion time and pause time from stage 1 to 3, as well as the significant reduction in errors. The wrong and pause annotations enable research on understanding varying efficiency between participants.
By annotating the collaboration status and designing three assembly plates with different task precedence and collaboration requirements, HA-ViD captures the two-handed collaborative and parallel tasks commonly featured in real-world assembly, shown in Figure <ref>. Overall, 49.6% of the annotated frames consist of two-handed tasks. The high percentage of two-handed tasks enables research in understanding the collaboration patterns of complex assembly tasks.
§ BENCHMARK EXPERIMENTS
We benchmark SOTA methods for four foundational techniques for assembly knowledge understanding, i.e., action recognition, action segmentation, object detection, and MOT. Due to page limit, we highlight key results and findings in this section, and present implementation details, more results and discussions in the Supplementary Section 3.
§.§ Action Recognition, Action Segmentation, Object Detection and MOT
Action recognition is to classify a sequence of video frames into an action category. We split 123 out of the 609 temporally labeled videos into the testset; the rest form the trainset. We benchmark five action recognition methods from three categories: 2D models (TSM <cit.>, TimeSFormer <cit.>), 3D models (I3D <cit.>, MVITv2 <cit.>), and a skeleton-based method (ST-GCN <cit.>), and report the Top-1 accuracy and Top-5 accuracy in Table <ref>.
Action segmentation is to temporally locate and recognize human action segments in untrimmed videos <cit.>. Under the same train/test split, we benchmark three action segmentation methods, MS-TCN <cit.>, DTGRM <cit.> and BCN <cit.>, and report the frame-wise accuracy (Acc), segmental edit distance (Edit) and segmental F1 score at an overlapping threshold of 10% in Table <ref>.
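For reference, the segmental edit score is commonly computed as a normalized Levenshtein distance between the predicted and ground-truth segment label sequences; the sketch below follows this widely used convention (the exact evaluation script used for the benchmark may differ in details):

import numpy as np

def to_segments(frame_labels):
    """Collapse a frame-wise label sequence into its segment label sequence."""
    return [lab for i, lab in enumerate(frame_labels)
            if i == 0 or lab != frame_labels[i - 1]]

def edit_score(pred_frame_labels, gt_frame_labels):
    """Segmental edit score in [0, 100]; 100 means identical segment sequences."""
    p, y = to_segments(pred_frame_labels), to_segments(gt_frame_labels)
    D = np.zeros((len(p) + 1, len(y) + 1))
    D[:, 0] = np.arange(len(p) + 1)          # deletions
    D[0, :] = np.arange(len(y) + 1)          # insertions
    for i in range(1, len(p) + 1):
        for j in range(1, len(y) + 1):
            cost = 0 if p[i - 1] == y[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1, D[i - 1, j - 1] + cost)
    return (1 - D[-1, -1] / max(len(p), len(y))) * 100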
Object detection is to detect all instances of objects from known classes <cit.>. We split 18.4K out of the 144K spatially labeled frames into the testset; the rest form the trainset. We benchmark the classical two-stage method Faster R-CNN <cit.>, the one-stage method YOLOv5 <cit.>, and the SOTA end-to-end Transformer-based method DINO <cit.> with different backbone networks, and report parameter size (Params), average precision (AP), AP under different IoU thresholds (50% and 75%) and AP under different object scales (small, medium and large) in Table <ref>.
MOT aims at locating multiple objects, maintaining their identities, and yielding their individual trajectories given an input video <cit.>. We benchmark SORT <cit.> and ByteTrack <cit.> on the detection results of DINO and ground truth annotations (test split of object detection), respectively. We report average multi-object tracking accuracy (MOTA), ID F1 score (IDF1), false positive (FP), false negative (FN), and ID switch (IDS) over the videos in our testing dataset in Table <ref>.
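For reference, MOTA aggregates the three error counts reported here into a single score, MOTA = 1 − (FN + FP + IDS) / GT, where GT is the total number of ground-truth boxes over all frames, while IDF1 measures how consistently detections are assigned the correct identity over time.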
The baseline results show that our dataset presents greater challenges for the four foundational video understanding tasks than existing datasets. For example, BCN has 70.4% accuracy on Breakfast <cit.>, MViTv2 has 86.1% Top-1 accuracy on Kinetics-400 <cit.>, DINO has 63.3% AP on COCO test-dev <cit.>, and ByteTrack has 77.8% MOTA on MOT20 <cit.>.
Compared to the above baseline results, we are more concerned with whether existing video understanding methods can effectively comprehend the application-oriented knowledge (in Figure <ref>). We present our subsequent analysis in Sections 3.2-3.5.
§.§ Assembly progress
Insight #1: Assembly action recognition could focus on compositional action recognition and leveraging prior domain knowledge.
Understanding assembly progress, an essential application-oriented task, requires real-time action (action verb + interacted objects and tools) recognition and comparison of the action history with a predefined assembly plan (represented as a task graph). After further analysis of the sub-optimal action recognition performance in Table <ref>, we found that recognizing interacted objects and tools is more challenging than recognizing action verbs (as shown in Table <ref>). Therefore, a promising research direction could be compositional recognition of action verbs and interacted objects and tools.
Leveraging prior domain knowledge, such as task precedence and probabilistic correlation between action verbs and feasible objects and tools, one may improve the performance of action recognition. With defined task precedence graphs and rich list of action verb/object/tool pairs, HA-ViD enables research on this aspect.
Insight #2: Assembly action segmentation should focus on addressing under-segmentation issues and improving segment-wise sequence accuracy. Assembly progress tracking requires obtaining the accurate number of action segments and their sequence. For obtaining the accurate number of action segments from a given video, previous action segmentation algorithms <cit.> focused on addressing over-segmentation issues, but lacked metrics for quantifying under/over-segmentation. Therefore, we propose segmentation adequacy (SA) to fill this gap. Consider the predicted segments s_pred={s_1',s_2',…,s_F'} and ground truth segments s_gt={s_1,s_2,…,s_N} for a given video, where F and N are the numbers of segments; then SA = tanh(2(F−N)/(F+N)). Table <ref> reveals significant under-segmentation issues on our dataset. This reminds the community to pay attention to addressing under-segmentation issues for assembly action understanding. The proposed SA can offer evaluation support, and even assist in designing the loss function as it uses the differentiable hyperbolic tangent function.
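A direct implementation of the proposed metric is straightforward; the sketch below assumes the predicted and ground-truth segmentations are given as frame-wise label sequences:

import numpy as np

def num_segments(frame_labels):
    """Count segments by collapsing consecutive identical frame labels."""
    return sum(1 for i, lab in enumerate(frame_labels)
               if i == 0 or lab != frame_labels[i - 1])

def segmentation_adequacy(pred_frame_labels, gt_frame_labels):
    """SA = tanh(2(F - N) / (F + N)); negative values indicate under-segmentation,
    positive values over-segmentation, and 0 matching segment counts."""
    F = num_segments(pred_frame_labels)
    N = num_segments(gt_frame_labels)
    return np.tanh(2.0 * (F - N) / (F + N))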
As for segment-wise sequence accuracy, the low Edit values in Table <ref> suggest that substantial research effort is still required. Compared with Breakfast <cit.> (66.2% Edit score with the BCN algorithm), our dataset presents greater challenges.
§.§ Process Efficiency
Understanding process efficiency is essential for real-world industry. It requires video understanding methods to be capable of recognizing human pauses and errors. HA-ViD supports this research by providing null and wrong labels.
Insight #3: For null action understanding, efforts need to be made on addressing imbalanced class distribution. Table <ref> shows the recall and precision of action recognition and action segmentation for null actions. We suspect the high recall and low precision are caused by the imbalanced class distribution, as null is the largest head class (see Figure <ref>).
Insight #4: New research from wrong action annotations. A wrong action is an assembly action (at the primitive task level) performed at the wrong position or in the wrong order. Our annotation of wrong actions allows in-depth research on their occurrence patterns across participants and the three stages. Joint understanding of wrong actions and their adjacent actions could also trigger new research on predicting wrong actions from action history.
§.§ Task Collaboration
Insight #5: New research on understanding parallel tasks from both hands. Table <ref> shows that both action recognition and segmentation have the lowest performance on parallel tasks during assembly. One possible reason is that the foundational video understanding methods rely on global features of each image, and do not explicitly detect and track the action of each hand. This calls for new methods that can independently track both hands and recognize their actions through local features. Recent research on human-object interaction detection in videos <cit.> could offer valuable insights.
§.§ Skill Parameters and Human Intention
Understanding skill parameters and human intentions from videos is essential for robot skill learning and human-robot collaboration (HRC) <cit.>.
Typically, skill parameters vary depending on the specific application. However, there are certain skill parameters that are commonly used, including trajectory, object pose, force and torque <cit.>. While videos cannot capture force and torque directly, our dataset offers spatial annotations that enable tracking the trajectory of each object. Additionally, the object pose can be inferred from our dataset via pose estimation methods. Therefore, HA-ViD can support research in this direction.
Understanding human intention in HRC refers to a combination of trajectory prediction, action prediction and task goal understanding <cit.>. Our spatial annotations provide trajectory information, SA-TPGs present action sequence constraints, and GAB CAD files offer the final task goals. Therefore, HA-ViD can enhance the research in this aspect.
§ CONCLUSION
We present HA-ViD, a human assembly video dataset, to advance comprehensive assembly knowledge understanding toward real-world industrial applications. We designed a generic assembly box to represent industrial assembly scenarios and a three-stage progressive learning setup to capture the natural process of human procedural knowledge acquisition. The dataset annotation follows a human-robot shared assembly taxonomy. HA-ViD includes (1) multi-view, multi-modality data, fine-grained action annotations (subject, action verb, manipulated object, target object, and tool), (2) human pause and error annotations, and (3) collaboration status annotations to enable technological breakthroughs in both foundational video understanding techniques and industrial application-oriented knowledge comprehension.
As for limitations of HA-ViD, the imbalanced class distribution of primitive tasks and atomic actions could cause biased model performance and insufficient learning. In addition, the true complexity and diversity of real-world assembly scenarios may still not be fully captured.
We benchmarked strong baseline methods of action recognition, action segmentation, object detection and multi-object tracking, and analyzed their performance on comprehending application-oriented knowledge in assembly progress, process efficiency, task collaboration, skill parameter and human intention. The results show that our dataset captures essential challenges for foundational video understanding tasks, and new methods need to be explored for application-oriented knowledge comprehension. We envision HA-ViD will open opportunities for advancing video understanding techniques to enable futuristic ultra-intelligent industry.
§ ACKNOWLEDGEMENTS
This work was supported by The University of Auckland FRDF New Staff Research Fund (No. 3720540).
10
Duque2019
D. A. Duque, F. A. Prieto, and J. G. Hoyos, “Trajectory generation for
robotic assembly operations using learning by demonstration,” Robotics
and Computer Integrated Manufacturing, vol. 57, no. December 2018,
pp. 292–302, 2019.
Lamon2019
E. Lamon, A. De Franco, L. Peternel, and A. Ajoudani, “A Capability-Aware
Role Allocation Approach to Industrial Assembly Tasks,” IEEE Robotics
and Automation Letters, vol. 4, no. 4, pp. 3378–3385, 2019.
Frustaci2020
F. Frustaci, S. Perri, G. Cocorullo, and P. Corsonello, “An embedded machine
vision system for an in-line quality check of assembly processes,” Procedia Manufacturing, vol. 42, pp. 211–218, 2020.
Cicirelli2022
G. Cicirelli, R. Marani, L. Romeo, M. G. Domínguez, J. Heras, A. G.
Perri, and T. D'Orazio, “The HA4M dataset: Multi-Modal Monitoring of an
assembly task for Human Action recognition in Manufacturing,” Scientific Data, vol. 9, p. 745, dec 2022.
Ben-Shabat2021
Y. Ben-Shabat, X. Yu, F. Saleh, D. Campbell, C. Rodriguez-Opazo, H. Li, and
S. Gould, “The IKEA ASM Dataset: Understanding people assembling furniture
through actions, objects and pose,” Proceedings - 2021 IEEE Winter
Conference on Applications of Computer Vision, WACV 2021, pp. 846–858,
2021.
Sener2022
F. Sener, R. Wang, and A. Yao, “Assembly101: A Large-Scale Multi-View Video
Dataset for Understanding Procedural Activities,” Cvpr, 2022.
Toyer2017
S. Toyer, A. Cherian, T. Han, and S. Gould, “Human Pose Forecasting via Deep
Markov Models,” DICTA 2017 - 2017 International Conference on Digital
Image Computing: Techniques and Applications, vol. 2017-Decem, pp. 1–8,
2017.
Zhang2020
J. Zhang, P. Byvshev, and Y. Xiao, “A video dataset of a wooden box assembly
process: Dataset,” DATA 2020 - Proceedings of the 3rd Workshop on Data
Acquisition To Analysis, Part of SenSys 2020, BuildSys 2020, pp. 35–39,
2020.
Ragusa2021
F. Ragusa, A. Furnari, S. Livatino, and G. M. Farinella, “The MECCANO
Dataset: Understanding Human-Object Interactions from Egocentric Videos in an
Industrial-like Domain,” in 2021 IEEE Winter Conference on
Applications of Computer Vision (WACV), pp. 1568–1577, IEEE, jan 2021.
Georgeff1986
M. Georgeff and A. Lansky, “Procedural knowledge,” Proceedings of the
IEEE, vol. 74, no. 10, pp. 1383–1398, 1986.
Mayer2004
R. E. Mayer, “Should There Be a Three-Strikes Rule Against Pure Discovery
Learning?,” American Psychologist, vol. 59, no. 1, pp. 14–19, 2004.
Lin2019
J. Lin, C. Gan, and S. Han, “TSM: Temporal Shift Module for Efficient Video
Understanding,” in 2019 IEEE/CVF International Conference on Computer
Vision (ICCV), pp. 7082–7092, IEEE, oct 2019.
Bertasius2021
G. Bertasius, H. Wang, and L. Torresani, “Is Space-Time Attention All You
Need for Video Understanding?,” in Proceedings of the 38th
International Conference on Machine Learning, pp. 813–824, feb 2021.
Carreira2017
J. Carreira and A. Zisserman, “Quo Vadis, Action Recognition? A New Model and
the Kinetics Dataset,” in 2017 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pp. 4724–4733, IEEE, jul 2017.
Li2022
Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer,
“MViTv2: Improved Multiscale Vision Transformers for Classification and
Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 4794–4804, IEEE, jun 2022.
Yan2018
S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks
for skeleton-based action recognition,” in 32nd AAAI Conference on
Artificial Intelligence, AAAI 2018, pp. 7444–7452, jan 2018.
Wang2021
D. Wang, D. Hu, X. Li, and D. Dou, “Temporal Relational Modeling with
Self-Supervision for Action Segmentation,” Proceedings of the AAAI
Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, dec 2021.
Farha2019
Y. A. Farha and J. Gall, “MS-TCN: Multi-Stage Temporal Convolutional Network
for Action Segmentation,” in 2019 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), vol. 2019-June, pp. 3570–3579, IEEE,
jun 2019.
Wang2020
Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, “Boundary-Aware Cascade Networks
for Temporal Action Segmentation,” in ECCV, vol. Part XXV 1,
pp. 34–51, 2020.
Amit2014
Y. Amit and P. Felzenszwalb, “Object Detection,” in Computer Vision,
pp. 537–542, Boston, MA: Springer US, 2014.
Ren2017
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time
Object Detection with Region Proposal Networks,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, jun
2017.
Jain
G. Jocher et al., “YOLOv5,” <https://github.com/ultralytics/yolov5>.
Zhang2022a
H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum,
“DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object
Detection,” mar 2022.
Luo2021
W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, “Multiple object
tracking: A literature review,” Artificial Intelligence, vol. 293,
p. 103448, apr 2021.
Bewley2016
A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and
realtime tracking,” in 2016 IEEE International Conference on Image
Processing (ICIP), pp. 3464–3468, IEEE, sep 2016.
Zhang2022
Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and
X. Wang, “ByteTrack: Multi-Object Tracking by Associating Every Detection
Box,” in Proceedings of the European Conference on Computer Vision
(ECCV), vol. 2, oct 2022.
Kuehne2014
H. Kuehne, A. Arslan, and T. Serre, “The Language of Actions: Recovering the
Syntax and Semantics of Goal-Directed Human Activities,” in 2014 IEEE
Conference on Computer Vision and Pattern Recognition, pp. 780–787, IEEE,
jun 2014.
Kay2017
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan,
F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The
Kinetics Human Action Video Dataset,” may 2017.
Lin2014
T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona,
D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common
Objects in Context,” may 2014.
Dendorfer2020
P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth,
K. Schindler, and L. Leal-Taixé, “MOT20: A benchmark for multi object
tracking in crowded scenes,” mar 2020.
Tu2022
D. Tu, W. Sun, X. Min, G. Zhai, and W. Shen, “Video-based Human-Object
Interaction Detection from Tubelet Tokens,” in Advances in Neural
Information Processing Systems 35, pp. 23345—-23357, 2022.
Chiou2021
M.-J. Chiou, C.-Y. Liao, L.-W. Wang, R. Zimmermann, and J. Feng, “ST-HOI: A
Spatial-Temporal Baseline for Human-Object Interaction Detection in
Videos,” in Proceedings of the 2021 Workshop on Intelligent Cross-Data
Analysis and Retrieval, (New York, NY, USA), pp. 9–17, ACM, aug 2021.
Mees2020
O. Mees, M. Merklinger, G. Kalweit, and W. Burgard, “Adversarial Skill
Networks: Unsupervised Robot Skill Learning from Video,” in 2020 IEEE
International Conference on Robotics and Automation (ICRA), pp. 4188–4194,
IEEE, may 2020.
Zheng2022
P. Zheng, S. Li, L. Xia, L. Wang, and A. Nassehi, “A visual reasoning-based
approach for mutual-cognitive human-robot collaboration,” CIRP
Annals, vol. 71, no. 1, pp. 377–380, 2022.
Jeon2022
J. Jeon, H.-r. Jung, F. Yumbla, T. A. Luong, and H. Moon, “Primitive Action
Based Combined Task and Motion Planning for the Service Robot,” Frontiers in Robotics and AI, vol. 9, feb 2022.
Berger2016
E. Berger, S. Grehl, D. Vogt, B. Jung, and H. B. Amor, “Experience-based
torque estimation for an industrial robot,” in 2016 IEEE International
Conference on Robotics and Automation (ICRA), pp. 144–149, IEEE, may 2016.
Lu2022
Y. Lu, H. Zheng, S. Chand, W. Xia, Z. Liu, X. Xu, L. Wang, Z. Qin, and J. Bao,
“Outlook on human-centric manufacturing towards Industry 5.0,” Journal of Manufacturing Systems, vol. 62, pp. 612–627, jan 2022.
Supplementary Document for HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
§ OVERVIEW
This supplementary document contains additional information about HA-ViD.
Section <ref> further describes the process of building HA-ViD, including the design of the Generic Assembly Box, data collection, data annotation, and annotation statistics.
Section <ref> presents the implementation details of our baselines, discusses the experimental results, and provides the licenses of the benchmarked algorithms.
Section <ref> discusses the bias and societal impact of HA-ViD.
Section <ref> presents the research ethics for HA-ViD.
§ HA-VID CONSTRUCTION
In this section, we further discuss the process of building HA-ViD. First, we introduce the design of the Generic Assembly Box. Second, we describe the three-stage data collection process. Third, we describe data annotation details. Finally, we present critical annotation statistics.
§.§ Generic Assembly Box Design
To ensure the dataset is representative of real-world industrial assembly scenarios, we designed the Generic Assembly Box (GAB), a 250×250×250mm box (see Figure <ref>), which consists of 11 standard parts and 25 non-standard parts and requires 4 standard tools during assembly (see Figure 2).
GAB has three assembly plates, including General Plate, Gear Plate, and Cylinder Plate, and three blank plates. The opposite face of each assembly plate is intentionally left blank to allow a different assembly orientation. Three assembly plates feature different design purposes.
General Plate (see Figure <ref>) was designed to capture action diversity. The general plate consists of 11 different parts. The parts used in this plate were designed to include the different directions, shapes, and forces in which the common assembly actions can be performed. Since there is close to no precedence between assembling different parts, General Plate results in the most variety of possible assembly sequences.
Gear Plate (see Figure <ref>) was designed to capture parallel two-handed tasks, e.g., two hands inserting two spur gears at the same time. Gear Plate has three gear sub-systems: large gear, small gear, and worm gear, which mesh together to form a gear mechanism. The plate consists of 12 different parts. Gear Plate has a higher precedence constraint on assembly sequence than the general plate.
Cylinder Plate (see Figure <ref>) was designed to capture two-handed collaborative tasks, e.g., two hands collaborating on screwing the cylinder cap onto the cylinder base. Cylinder Plate requires assembling a cylinder subassembly and fastening it onto the plate. This plate consists of 11 parts. The parts were designed to represent assembling a subassembly where parts become fully occluded or partially constrained to another part (see the cylinder in Figure <ref>).
Table <ref> shows a summary of the three assembly plates. The box can be easily replicated using standard components, laser cutting, and 3D printing. The CAD files and bill of material can be downloaded from our website[<https://iai-hrc.github.io/ha-vid>].
§.§ Data Collection
Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench. 30 participants (15 male, 15 female) were recruited for a 2-hour session to assemble the GAB. During the data collection session, participants were given a fully disassembled assembly box, assembly parts, tools, and instructions. To capture the natural progress of human procedural knowledge acquisition and behaviors (varying efficiency, alternative routes, pauses, and errors), we designed a three-stage progressive assembly setup:
Discovery: Participants were asked to assemble a plate twice following the minimal visual instructions (see Figure <ref>).
Instruction: Participants were asked to assemble a plate six times following the detailed step-by-step instructions (see Figure <ref>). Six different instruction versions were created, each presenting a different assembly sequence. Each participant was given three different instruction versions, where two attempts were completed following each instruction version. The three instruction versions given to one participant must contain assembling the plate facing both upwards and sideways.
Practice: After the first two stages, participants were asked to assemble a plate four times without any instructions. During this stage, participants performed two attempts of each plate facing upwards and two attempts of each plate facing sideways.
The instruction files are available on our website[<https://iai-hrc.github.io/ha-vid>].
§.§ Data Annotation
To capture rich assembly knowledge, we provide temporal and spatial annotations.
Temporal Annotations: In HR-SAT[Details for the definitions of primitive task and atomic action can be found at: https://iai-hrc.github.io/hr-sat], an assembly task can be decomposed into a series of primitive tasks, and each primitive task can be further decomposed into a series of atomic actions. For both primitive task and atomic action, there are five fundamental description elements: subject, action verb, manipulated object, target object, and tool (see Figure <ref>). We follow HR-SAT to provide primitive task and atomic action annotations for the assembly processes recorded in the videos. To enable the research in two-handed collaboration task understanding, we defined the two hands of each participant as two separate subjects, and we annotated action verb, manipulated object, target object, and tool for each subject. For both primitive task and atomic action annotations, we follow the annotation specification shown in Figure <ref>.
Spatial Annotations: For spatial annotations, we use CVAT[https://www.cvat.ai/] to annotate the subjects (two hands), objects (manipulated object, target object), and tools via bounding boxes, shown in Figure <ref>.
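To make the annotation structure concrete, the sketch below shows what a single annotated primitive task, with its atomic-action decomposition and associated bounding boxes, might look like. All field names and values here are hypothetical illustrations of the five description elements and the spatial annotations described above, not the exact schema of the released files.

```python
# Hypothetical example of one annotated primitive task (illustrative only).
primitive_task = {
    "video": "S01_general_plate_side",     # hypothetical clip identifier
    "start_frame": 1250, "end_frame": 1410,
    "subject": "left_hand",
    "action_verb": "screw",
    "manipulated_object": "cylinder_cap",
    "target_object": "cylinder_base",
    "tool": "none",
    "atomic_actions": [
        {"start_frame": 1250, "end_frame": 1290, "subject": "left_hand",
         "action_verb": "grasp", "manipulated_object": "cylinder_cap",
         "target_object": "none", "tool": "none"},
        {"start_frame": 1291, "end_frame": 1410, "subject": "left_hand",
         "action_verb": "rotate", "manipulated_object": "cylinder_cap",
         "target_object": "cylinder_base", "tool": "none"},
    ],
}

# Hypothetical spatial annotation for one frame: class labels plus bounding boxes.
spatial_annotation = {
    "frame": 1250,
    "boxes": [{"label": "left_hand", "bbox": [412, 305, 530, 418]},
              {"label": "cylinder_cap", "bbox": [455, 350, 505, 392]}],
}
```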
§.§ Annotation Statistics
Overall, the dataset contains temporal annotations of 81 primitive task classes and 219 atomic action classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> and Figure <ref> show the class distributions of primitive task and atomic action annotations in the trainset and testset, respectively.
Overall, the dataset contains spatial annotations of 42 classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> shows the class distributions of spatial annotation classes in the trainset and testset.
§ EXPERIMENT
In this section, we provide the implementation details of the baselines, results not reported in the main paper, further discussion of the results, and the licenses of the benchmarked algorithms.
§.§ Action Recognition
We use the MMSkeleton[https://github.com/open-mmlab/mmskeleton] toolbox to benchmark ST-GCN <cit.>; the MMAction2[https://github.com/open-mmlab/mmaction2] toolbox to benchmark I3D <cit.>, TimeSformer <cit.>, and MVITv2 <cit.>; and the original codes to benchmark TSM <cit.>. For ST-GCN, we first extracted the upper 26 skeleton joints from each frame as the input. Action clips consisting of frames where the skeleton could not be extracted were excluded when reporting performance. For I3D (rgb), TSM, MVITv2, and TimeSformer, the RGB frames of each clip were used as input. For I3D (flow), we extracted TV-L1 optical flow frames from each clip as input. To compare model performance across different views (side, front, and top), hands (left and right) and annotation levels (primitive task and atomic action), we conducted a combinational benchmark, i.e., we benchmark each model on 12 sub-datasets (see Figure <ref>). We report the Top-1 and Top-5 accuracy on these sub-datasets in Table <ref>.
ST-GCN: Following the default parameters from MMSkeleton, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.1 and decayed by a factor of 10 after epochs 10 and 50. We sampled all frames as the input. The ST-GCN was pretrained on NTU <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 70 epochs, we set the total training epochs to be 80 with a batch size of 16.
TSM: Following the original paper’s suggestions, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.0025 and decayed by a factor of 10 after epochs 20 and 40. 8 frames were uniformly sampled from each clip. The TSM was pretrained on ImageNet <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 40 epochs, we set the total training epochs to be 50 with a batch size of 16.
TimeSformer: Following the default parameters from MMAction2, we use the SGD optimizer. The learning rate was initialized as 0.005 and decayed by a factor of 10 after epochs 5 and 10. 8 frames were uniformly sampled from each clip. The TimeSformer was pretrained on ImageNet-21K <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 8.
I3D (rgb) and (flow): Following the default parameters from MMAction2, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.01 and decayed by a factor of 10 after epochs 40 and 80. 32 frames were uniformly sampled from each clip. I3D takes ResNet50 pretrained on ImageNet-1K <cit.> as the backbone, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4.
MVITv2: Following the default parameters from MMAction2, we use the AdamW optimizer with a cosine annealing learning rate with the minimum learning rate of 0.00015. 16 frames were uniformly sampled from each clip. The MVITv2 was pre-trained on Kinetics-400 <cit.> via MaskFeat <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4.
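As a minimal illustration of the step-decay fine-tuning recipe used for the SGD-based baselines above (an initial learning rate decayed by a factor of 10 at fixed epochs), the sketch below uses plain PyTorch; in practice these schedules are configured through the MMSkeleton/MMAction2 config files, and the model, data loop, momentum/weight-decay values and milestones here are placeholders rather than the exact settings.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder model; the real backbones come from the benchmarking toolboxes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 219))
criterion = nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# e.g. ST-GCN: lr 0.1 decayed by 10x after epochs 10 and 50, 80 epochs total.
scheduler = MultiStepLR(optimizer, milestones=[10, 50], gamma=0.1)

for epoch in range(80):
    # for frames, labels in train_loader:                 # placeholder loop
    #     loss = criterion(model(frames), labels)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
    scheduler.step()
```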
The benchmarking results of action recognition are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset.
§.§ Action Segmentation
We benchmark three action segmentation algorithms: MS-TCN, DTGRM, and BCN, and report the frame-wise accuracy (Acc), segmental edit distance (Edit) and segmental F1 score at overlapping thresholds 10% in Table <ref>. Before benchmarking, we extract I3D features for each frame as the input of the action segmentation algorithms. We use the Pytorch version of the I3D implementation[https://github.com/piergiaj/pytorch-i3d] and the pretrained model on ImageNet <cit.> and Kinetics <cit.>. For action segmentation, we also conducted a combinational benchmark.
MS-TCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1 (taking all frames into the network). As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 10.
DTGRM: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 16.
BCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a learning rate of 0.001 for the first 30 epochs and 0.0001 for the remaining epochs, a dropout of 0.5 and a sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 200 epochs, we set the total training epochs to be 300 with a batch size of 1.
The benchmarking results of action segmentation are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset.
§.§ Object Detection
We benchmark three object detection algorithms: Faster-RCNN <cit.>, YOLOv5 <cit.> and DINO <cit.> with different backbone networks. The results have been reported in the main paper. Therefore, we only discuss the implementation details here. We train Faster-RCNN and DINO using the implementation provided by the MMDetection <cit.> and train YOLOv5 using the implementation provided by the MMYOLO[https://github.com/open-mmlab/mmyolo].
Faster-RCNN: We train Faster-RCNN with three backbone networks: ResNet50, ResNet101, and ResNext101. All the networks have been pretrained on the coco_2017_train dataset <cit.> and finetuned on our dataset. Following the default setting provided by MMDetection, we use the SGD optimizer with a momentum of 0.9 and weight decay of 0.0001. The learning rate was initialized as 0.02 and decayed by a factor of 10 at epochs 8 and 11. As the slowest convergence of the three models was observed around 14 epochs, we set the total training epochs to be 20. We set the batch size as 4, 1, and 5, respectively, for ResNet50, ResNet101, and ResNext101.
YOLOv5: We train YOLOv5-small and YOLOv5-large using MMYOLO. These two models have been pretrained on the coco_2017_train dataset and finetuned on our dataset. Following the default setting provided by MMYOLO, we use the SGD optimizer with a momentum of 0.937 and weight decay of 0.0005 for both models. A linear learning rate with a base learning rate of 0.0025 and a factor of 0.01 was applied to YOLOv5-small, and a linear learning rate with a base learning rate of 0.0025 and a factor of 0.1 was applied to YOLOv5-large. We set the total training epochs to be 100 with a batch size of 32 for YOLOv5-small and 50 with a batch size of 10 for YOLOv5-large to ensure convergence.
DINO: We benchmark the DINO model with the Swin-large network as the backbone. The model has been pretrained on the coco_2017_train dataset, and finetuned on our dataset. Following the default setting provided by MMDetection, we use the AdamW optimizer with a learning rate of 0.0001 and weight decay of 0.0001. As the convergence was observed around 6 epochs, we set the total training epochs to be 10 with a batch size of 1.
We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model.
§.§ Multi-Object Tracking
In this paper, we focus on tracking-by-detection methods because they normally perform better than joint-detection-association methods <cit.>. Since we have already benchmarked the object detection methods, we only need to test the SOTA trackers. We benchmark the SORT <cit.> and ByteTrack <cit.> trackers on the detection results of DINO and on the ground truth annotations, respectively. The results have been reported in the main paper. Since the trackers are not neural networks, they require no training, and we omit implementation details here. We always use the default parameters of each algorithm; for more details, please refer to the papers <cit.> and their GitHub repositories.
§.§ Discussion
In this section, we further discuss the results from the above experiments and analyze a prevalent problem of video understanding – occlusion.
§.§.§ General Discussion
Action recognition: We found the Top-1 accuracy of primitive task recognition is 15.6% higher on average than atomic action recognition, and the atomic action recognition performance of the left hand is 2.4% higher on average than the right hand. One possible reason behind these two observations can be occlusion since (1) primitive task recognition is less influenced by occlusion because it can rely on the key motion or relevant object recognition; and (2) the left hand is less occluded because the side-view camera is mounted on the left-side of the participant.
Action segmentation: We found (1) the frame-wise accuracy (Acc) of atomic action segmentation is 4% lower on average than primitive task segmentation, as atomic actions have higher diversity and current methods face under-segmentation issues (refer to the main paper); and (2) on the atomic action level, the Acc of the left hand is 6% higher on average than the right hand, where one possible reason could be that the left hand is less occluded.
Object detection: From Table 4 of the main paper, we found that (1) the large-scale end-to-end Transformer based model (DINO) performs the best, and the traditional two-stage method (Faster-RCNN) has better performance on small objects but worse performance on large objects than the one-stage method (YOLOv5), which is consistent with the conclusion of <cit.>; (2) current methods still face great challenges in small object detection, as the best model only has 27.4% average precision on small object detection; and (3) recognizing objects with same/similar appearances but different sizes is challenging (see Figure <ref>, e.g., Bar and Rod, Hole C1-C4, and two Wrenches).
Multi-object tracking: From Table 5 of the main paper, we found that (1) object detection performance is the decisive factor in tracking performance; (2) with perfect detection results, even a simple tracker (SORT) can achieve good tracking results, as SORT reaches 94.5% multi-object tracking accuracy on the ground truth object bounding boxes; and (3) ByteTrack can track blurred and occluded objects better (comparing b1-2, c1-2, and f1-2 in Figure <ref>) because it takes low-confidence detection results into the association step, but it generates more ID switches (IDS) (see a2-f2 in Figure <ref>) due to its preference for creating new tracklets.
§.§.§ Occlusion Analysis
From the discussion in Section <ref>, we can see occlusion is a prevalent problem of video understanding. Therefore, we further explore the impact of occlusion on video understanding tasks in this Section. Table <ref> reports the average results over two hands of action recognition and segmentation on three views and the combined view (Com). We fuse the features from three views before the softmax layer to evaluate the performance of the combined view. The results show the significant benefits of combining three views which offers a viable solution for mitigating occlusion challenges in industrial settings.
Figure <ref> shows the impact of occlusion on tracking and re-identification by visualizing SORT and ByteTrack tracking results on sampled ground-truth object annotations. To quantitatively analyze the occlusion problem, we design two metrics: occlusion duration (OD) and occlusion frequency (OF). Given a video of n frames v=[f_1,…,f_n], the observation of object k is denoted as O_k=[o_t^k,o_{t+1}^k,…,o_{t+m}^k], where t and t+m are the frames in which object k first and last appears, respectively, and o_j^k ∈ {0,1}, with 0 denoting observed and 1 denoting unobserved. We define OD_k = 1/m ∑_{j=t}^{t+m} o_j^k and OF_k = 1/2 ∑_{j=t}^{t+m-1} |o_{j+1}^k - o_j^k|, which describe how long and how often object k is occluded in a video. We calculate the average OD and OF over every object in our test set and compare them with the tracking results on ground-truth object annotations in Table <ref>. Table <ref> shows a negative correlation of mOD and mOF with MOTA and IDS, which is consistent with the findings in Figure <ref>. We envision OD and OF serving as effective occlusion evaluation tools for developing better object association and re-identification modules in MOT.
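As a concrete illustration of these two metrics, the sketch below computes OD and OF for a single object track; it assumes the per-frame visibility sequence has already been extracted from the annotations, which is not part of the released tooling.

```python
import numpy as np

def occlusion_metrics(observations):
    """Occlusion duration (OD) and occlusion frequency (OF) for one object track.

    `observations` is O_k between the object's first and last appearance,
    with 0 = observed and 1 = unobserved in each frame.
    """
    o = np.asarray(observations, dtype=float)
    m = max(len(o) - 1, 1)                   # frames t .. t+m span m+1 entries
    od = o.sum() / m                          # how long the object is occluded
    of = 0.5 * np.abs(np.diff(o)).sum()       # how often it becomes occluded
    return od, of

# Toy track that is occluded twice: OD = 3/8, OF = 2.
print(occlusion_metrics([0, 0, 1, 1, 0, 0, 1, 0, 0]))
```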
§.§ Licenses of the benchmarked algorithms
The licenses of the benchmarked algorithms are listed in Table <ref>.
§ DATASET BIAS AND SOCIETAL IMPACT
Our objective is to construct a dataset that can represent interesting and challenging problems in real-world industrial assembly scenarios. Based on this objective, we developed the Generic Assembly Box that encompasses standard and non-standard parts widely used in industry and requires typical industrial tools to assemble. However, there is still a gap between our dataset and the real-world industrial assembly scenarios. The challenges lie in:
1) the existence of numerous unique assembly actions, countless parts, and tools in the industry;
2) the vast diversity of operating environments in the industry;
3) various agents and multi-agent collaborative assembly scenarios in the industry.
Therefore, additional efforts would be needed to apply models trained on our dataset to real-world industrial applications. We hope the fine-grained annotations of this dataset can advance technological breakthroughs in comprehensive assembly knowledge understanding from videos, and that the learned knowledge can benefit various real-world applications, such as robot skill learning, human-robot collaboration, assembly process monitoring, assembly task planning, and quality assurance. In this way, the dataset can contribute to the development of smart manufacturing, enhancing production efficiency and reducing the workload and stress on workers.
§ ETHICS APPROVAL
HA-ViD was collected with ethics approval from the University of Auckland Human Participants Ethics Committee. The Reference Number is 21602. All participants were sent a Participant Information Sheet and Consent Form[The participant consent form is available at: <https://www.dropbox.com/sh/ekjle5bwoylmdcf/AACLd_NqT3p2kxW7zLvvauPta?dl=0>] prior to the collection session. We confirmed that they had agreed to and signed the Consent form before proceeding with any data collection.
§ DATA DOCUMENTATION
We follow the datasheet proposed in <cit.> for documenting our HA-ViD dataset:
1. Motivation
(a) For what purpose was the dataset created?
This dataset was created to understand comprehensive assembly knowledge from videos. The previous assembly video datasets fail to (1) represent real-world industrial assembly scenarios, (2) capture natural human behaviors (varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition, (3) follow a consistent annotation protocol that aligns with human and robot assembly comprehension.
(b) Who created the dataset, and on behalf of which entity?
This dataset was created by Hao Zheng, Regina Lee and Yuqian Lu. At the time of creation, Hao and Regina were PhD students at the University of Auckland, and Yuqian was a senior lecturer at the University of Auckland.
(c) Who funded the creation of the dataset?
The creation of this dataset was partially funded by The University of Auckland FRDF New Staff Research Fund (No. 3720540).
(d) Any other Comments?
None.
2. Composition
(a) What do the instances that comprise the dataset represent?
For the video dataset, each instance is a video clip recording a participant assembling one of the three plates of the designed Generic Assembly Box. Each instance consists of two-level temporal annotations: primitive task and atomic action, and spatial annotations, which means the bounding boxes for subjects, objects, and tools.
(b) How many instances are there in total?
We recorded 3222 videos over 86.9 hours, totaling over 1.5M frames. To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames.
(c) Does the dataset contain all possible instances, or is it a sample (not necessarily random) of instances from a larger set?
Yes, the dataset contains all possible instances.
(d) What data does each instance consist of?
See 2. (a).
(e) Is there a label or target associated with each instance?
See 2. (a).
(f) Is any information missing from individual instances?
No.
(g) Are relationships between individual instances made explicit?
Yes, each instance (video clip) contains one participant performing one task (assembling one of the three plates of the designed Generic Assembly Box.)
(h) Are there recommended data splits?
For action recognition and action segmentations, we provide two data splits: trainset and testset.
For object detection and multi-object tracking, we provide another two data splits: trainset and testset.
Refer to Section <ref> for details.
(i) Are there any errors, sources of noise, or redundancies in the dataset?
Given the scale of the dataset and complexity in annotation, it is possible that some ad-hoc errors exist in our annotations. However, we have given our best efforts (via human checks and quality checking code scripts) in examining manually labelled annotations to minimize these errors.
(j) Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?
The dataset is self-contained.
(k) Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?
No.
(l) Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
No.
(m) Does the dataset relate to people?
Yes, all videos are recordings of human assembly activities, and all annotations are related to the activities.
(n) Does the dataset identify any subpopulations (e.g., by age, gender)?
No. Our participants have different ages and genders. But our dataset does not identify this information. To ensure this, we have blurred participants’ faces in the released videos.
(o) Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?
No, as explained in 2. (n), we have blurred participants’ faces in the released videos.
(p) Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?
No.
(q) Any other comments?
None.
3. Collection Process
(a) How was the data associated with each instance acquired?
For each video instance, we provide temporal annotations and spatial annotations. We follow HR-SAT to create temporal annotations to ensure the annotation consistency. The temporal annotations were manually created and checked by our researchers. The spatial annotations were manually created by postgraduate students at the University of Auckland, who were trained by one of our researchers to ensure the annotation quality.
(b) What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?
Data were collected on three Azure Kinect RGB+D cameras via live video capturing while a participant is performing the assembly actions, and we manually labeled all the annotations.
(c) If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?
No, we created a new dataset.
(d) Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?
For video recordings, volunteer participants were rewarded gift cards worth NZ$50.00 upon completion of the 2-hour data collection session.
For data annotations, we contracted students at the University of Auckland, and they were paid at a rate of NZ$23.00 per hour.
(e) Over what timeframe was the data collected?
The videos were recorded during August to September of 2022, and the annotations were made during October of 2022 to March of 2023.
(f) Were any ethical review processes conducted (e.g., by an institutional review board)?
Yes, we obtained ethics approval from the University of Auckland Human Participants Ethics Committee. More information can be found in Section <ref>.
(g) Does the dataset relate to people?
Yes, we recorded the process of people assembling the Generic Assembly Box.
(h) Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?
We collected the data from the individuals in question directly.
(i) Were the individuals in question notified about the data collection?
Yes, all participants were informed of the data collection purpose, process and the intended use of the data. They were sent a Participant Information Sheet and signed Consent Form prior to the collection session. All sessions started with an introduction where instructions on data collection, health and safety and confirmation of the Consent Form were discussed.
(j) Did the individuals in question consent to the collection and use of their data?
Yes, all participants were sent a Participant Information Sheet and Consent Form prior to the collection session. We confirmed that they had agreed to and signed the Consent form regarding the collection and use of their data before proceeding with any data collection. Details can be found in Section <ref>.
(k) If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?
Yes. The Participant Information Sheet and Consent Form addressed how they can request to withdraw and remove their data from the project and how the data will be used.
(l) Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?
No, all data have been processed to be made de-identifiable and all annotations are on objective world states. The potential impact of the dataset and its use on data subjects were addressed in the Ethics Approval, Participant Information Sheet and Consent Form. Details can be found in Section <ref>.
(m) Any other comments?
None.
4. Preprocessing, Cleaning and Labeling
(a) Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?
Yes, we have cleaned the videos by blurring participants’ faces. We have also extracted I3D features from the video for action segmentation benchmarking.
(b) Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?
No, we only provide the cleaned videos (participants’ faces being blurred) to the public due to the ethics issues.
(c) Is the software used to preprocess/clean/label the instances available?
Yes, we used CVAT to draw bounding boxes. Details can be found in Section <ref>.
(d) Any other comments?
None.
5. Uses
(a) Has the dataset been used for any tasks already?
No, the dataset is newly proposed by us.
(b) Is there a repository that links to any or all papers or systems that use the dataset?
Yes, we provide the link to all related information on our website.
(c) What (other) tasks could the dataset be used for?
The dataset can also be used for Compositional Action Recognition, Human-Object Interaction Detection, and Visual Question Answering.
(d) Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?
We granulated the assembly action annotation into subject, action verb, manipulated object, target object and tool. We believe the fine-grained and compositional annotations can be used for more detailed and precise descriptions of the assembly process, and the descriptions can serve various real-world industrial applications, such as robot learning, human robot collaboration, and quality assurance.
(e) Are there tasks for which the dataset should not be used?
The usage of this dataset should be limited to the scope of assembly activity or task understanding, e.g., action recognition, action segmentation, action anticipation, human-object interaction detection, visual question answering, and the downstream industrial applications, e.g., robot learning, human-robot collaboration, and quality assurance. Any work that violates our Code of Conduct is forbidden. The Code of Conduct can be found at our website[<https://iai-hrc.github.io/ha-vid>.].
(f) Any other comments?
None.
6. Distribution
(a) Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?
Yes, the dataset will be made publicly available.
(b) How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?
The dataset could be accessed on our website.
(c) When will the dataset be distributed?
We provide private links for the review process. Then the dataset will be released to the public after the review process.
(d) Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?
We release our dataset and benchmark under CC BY-NC 4.0[<https://creativecommons.org/licenses/by-nc/4.0/>.] license.
(e) Have any third parties imposed IP-based or other restrictions on the data associated with the instances?
No.
(f) Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?
No.
(g) Any other comments?
None.
7. Maintenance
(a) Who is supporting/hosting/maintaining the dataset?
Regina Lee and Hao Zheng are maintaining the dataset, with continued support from the Industrial AI Research Group at The University of Auckland.
(b) How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
E-mail addresses are at the top of the paper.
(c) Is there an erratum?
Currently, no. As errors are encountered, future versions of the dataset may be released and updated on our website.
(d) Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances’)?
Yes, see 7.(c).
(e) If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?
No.
(f) Will older versions of the dataset continue to be supported/hosted/maintained?
Yes, older versions of the dataset and benchmark will be maintained on our website.
(g) If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?
Yes, errors may be submitted to us through email.
(h) Any other comments?
None.
§ REFERENCES
[Yan2018] S. Yan, Y. Xiong, and D. Lin, "Spatial temporal graph convolutional networks for skeleton-based action recognition," in 32nd AAAI Conference on Artificial Intelligence (AAAI), pp. 7444–7452, 2018.
[Carreira2017] J. Carreira and A. Zisserman, "Quo vadis, action recognition? A new model and the Kinetics dataset," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724–4733, 2017.
[Bertasius2021] G. Bertasius, H. Wang, and L. Torresani, "Is space-time attention all you need for video understanding?," in Proceedings of the 38th International Conference on Machine Learning (ICML), pp. 813–824, 2021.
[Li2022] Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer, "MViTv2: Improved multiscale vision transformers for classification and detection," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4794–4804, 2022.
[Lin2019] J. Lin, C. Gan, and S. Han, "TSM: Temporal shift module for efficient video understanding," in IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7082–7092, 2019.
[Shahroudy2016] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, "NTU RGB+D: A large scale dataset for 3D human activity analysis," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1010–1019, 2016.
[Deng2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009.
[Kay2017] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, "The Kinetics human action video dataset," arXiv preprint, 2017.
[Wei2022] C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer, "Masked feature prediction for self-supervised visual pre-training," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14648–14658, 2022.
[Farha2019] Y. A. Farha and J. Gall, "MS-TCN: Multi-stage temporal convolutional network for action segmentation," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3570–3579, 2019.
[Wang2021] D. Wang, D. Hu, X. Li, and D. Dou, "Temporal relational modeling with self-supervision for action segmentation," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, 2021.
[Wang2020] Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, "Boundary-aware cascade networks for temporal action segmentation," in European Conference on Computer Vision (ECCV), Part XXV, pp. 34–51, 2020.
[Ren2017] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, 2017.
[Jain] G. Jocher et al., "YOLOv5," Ultralytics.
[Zhang2022a] H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum, "DINO: DETR with improved denoising anchor boxes for end-to-end object detection," arXiv preprint, 2022.
[Chen2019] K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu, Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, "MMDetection: Open MMLab detection toolbox and benchmark," arXiv preprint, 2019.
[Lin2014] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision (ECCV), 2014.
[Luo2021] W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, "Multiple object tracking: A literature review," Artificial Intelligence, vol. 293, p. 103448, 2021.
[Bewley2016] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in IEEE International Conference on Image Processing (ICIP), pp. 3464–3468, 2016.
[Zhang2022] Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang, "ByteTrack: Multi-object tracking by associating every detection box," in European Conference on Computer Vision (ECCV), 2022.
[Zhao2019] Z.-Q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, "Object detection with deep learning: A review," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, pp. 3212–3232, 2019.
[Gebru2018] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford, "Datasheets for datasets," arXiv preprint, 2018.
University of Technology Sydney, Sydney, Australia
[email protected]
[email protected]
One-Shot Pruning for Fast-adapting Pre-trained Models on Devices
Haiyan Zhao Guodong Long
August 12, 2023
================================================================
Large-scale pre-trained models have been remarkably successful in resolving downstream tasks. Nonetheless, deploying these models on low-capability devices still requires an effective approach, such as model pruning. However, pruning the model from scratch can pose a practical challenge given the limited resources of each downstream task or device.
To tackle this issue, we present a scalable one-shot pruning method that leverages pruned knowledge of similar tasks to extract a sub-network from the pre-trained model for a new task. Specifically, we create a score mask using the pruned models of similar tasks to identify task-specific filters/nodes in the pre-trained model for the new task. Based on this mask, we conduct a single round of pruning to extract a suitably-sized sub-network that can quickly adapt to the new task with only a few training iterations.
Our experimental analysis demonstrates the effectiveness of the proposed method on the convolutional neural networks (CNNs) and vision transformers (ViT) with various datasets. The proposed method consistently outperforms popular pruning baseline methods in terms of accuracy and efficiency when dealing with diverse downstream tasks with different memory constraints.
§ INTRODUCTION
Large-scale pre-trained models have exhibited exceptional performance on a wide range of downstream tasks. For instance, CLIP <cit.> has surpassed the current state-of-the-art computer vision models on 27 downstream tasks, each having diverse distributions. However, these pre-trained models typically consist of millions of parameters, hindering their deployment on edge devices with limited memory and computation budgets.
Previous studies <cit.> have demonstrated that only a subset of the filters/nodes in a pre-trained model are crucial for the inference process of a given downstream task. To address this issue, model pruning presents an effective approach wherein unnecessary filters/nodes can be removed without compromising accuracy.
Conventional pruning methods in real-world applications often require repeated pruning of the pre-trained model to adapt to different downstream tasks and low-capability devices, resulting in a waste of computational power and time. Moreover, some devices may not have the capacity to prune large models from scratch due to memory and computation limitations. The question arises: Is it feasible to find a sparse sub-network within a pre-trained model that can quickly adapt to a new downstream task?
Recent studies <cit.> have shown evidence of the lottery ticket hypothesis (LTH), which states that training from a sparse sub-network in a randomly initialized model can achieve comparable performance to the original dense network. However, LTH cannot reduce the number of training iterations required. Furthermore, LTH focuses solely on unstructured weight pruning, which may not necessarily improve the efficiency of training and inference of the pruned model.
Tian et al. <cit.> developed a meta-model that is trained on hundreds of tasks to create a well-initialized pruned model, which can rapidly adapt to a new task within a few training iterations, thereby reducing computational costs. The meta-model is the same for all tasks.
However, in practical scenarios, a pre-trained model commonly has to yield pruned models for downstream tasks or devices with varying memory constraints.
Therefore, we propose to directly utilize prior knowledge from previous pruned models instead of training a new meta-model. For each downstream task, its pruned model retains only critical and task-specific filters/nodes from the pre-trained model.
We investigate the relationship between the pruned models of downstream tasks with different similarities. We observe that tasks with high similarities share more task-specific filters/nodes in their pruned models.
Based on this observation, this paper proposes a novel one-shot pruning method called "Scalable Mask Selection Pruning (SMSP)", which is illustrated in Fig. <ref>. By learning from the pruned results of similar tasks, SMSP can create a mask to identify task-specific filters/nodes in the pre-trained model and prune the model once to extract a suitably sized sparse sub-network for a new task. SMSP is scalable because the created mask can be used to extract a sub-network of any pruning ratio from the pre-trained model to adapt to different devices. The sparse sub-network is then trained on the training data of the new task for a few iterations to quickly adapt to the new task. SMSP can significantly reduce the computation cost during pruning while maintaining the excellent performance of the pruned models. Extensive experiments have been conducted to evaluate the proposed method, demonstrating that SMSP outperforms state-of-the-art pruning methods on CNN and ViT over several datasets. Furthermore, SMSP performs well when used to produce pruned models for tasks with different memory constraints and for tasks from unseen datasets, which demonstrates its scalability and generality.
§ RELATED WORKS
Model pruning is a highly effective technique for compressing deep neural networks.
Some existing works <cit.> apply iterative pruning approaches to reduce the model size by eliminating filters/nodes with small weights while minimizing the loss of accuracy.
Alternatively, methods like HRank <cit.> and APoZ <cit.> evaluate the importance of each filter based on its corresponding activation maps.
Another line of methods <cit.> maintains a mask for filters/nodes in the model to eliminate redundant parameters automatically.
And this dynamic pruning setting is also widely used in the pruning of the vision transformer.
Recent works <cit.> introduce learnable parameters to each attention head, node, layer, or block in the vision transformer to reduce the model's complexity.
The approach of Goyal et al. <cit.> is different from traditional parameter pruning as they dynamically prune input patches in each block of ViT, resulting in significant reductions in inference computation without compromising the model's performance. Meanwhile, Tang et al.<cit.> evaluate the importance of each patch in maintaining the original final results. However, these pruning methods require starting the pruning process from scratch, which is time-consuming. In contrast, our method leverages pruned knowledge of similar tasks to reduce the number of pruning iterations significantly.
Some other pruning methods aim to speed up the pruning process.
Cai et al. <cit.> propose a once-for-all network that supports diverse settings by decoupling training and neural architecture search, which reduces the cost and makes it scalable for efficient inference across many devices and resource constraints. However, the generated pruned models are all for one task and cannot be generalized to other tasks.
Tian et al.<cit.> proposed a meta method that trains a well-initialized pruned meta-model to quickly adapt to different few-shot tasks. However, this meta-model is the same for all tasks and cannot generalize to devices with varying memory constraints.
MEST<cit.>, which is designed for edge devices, starts training from a sparse sub-network to save training computation.
DLTH <cit.> is a variant of LTH and also starts from a well-designed sub-network. It claims that randomly extracted subnetworks from a randomly initialized dense network can be transformed into a well-performing sub-network that can achieve admirable performance compared to LTH. However, all these methods require a significant amount of time and computation to find the initialized sub-networks. In contrast, our proposed method can be applied to different downstream tasks, and it does not require any additional computation cost to extract a sub-network for each new task.
§ METHODOLOGY
In this section, we establish a model pool consisting of the pruned models obtained from hundreds of tasks on both CNN and ViT. These pruned models are extracted to retain the task-specific knowledge present in the pre-trained model for each task.
We observe that similar tasks tend to share more task-specific filters/nodes. Leveraging this observation, we propose a generic and scalable approach to reduce the computational cost of pruning for new tasks or devices.
§.§ Pool of Pruned Models from Different Tasks
A pruned model for a downstream task typically preserves only those filters/nodes of the pre-trained model that are indispensable for its inference.
In practice, a dataset of pruned models exists owing to the extensive utilization of large-scale models across various downstream tasks and devices.
In this paper, to emulate this situation, we construct a simplified dataset of pruned models for different tasks and devices using the same pre-trained models.
Automatic Mask Pruning (AMP).
Inspired by <cit.>, we propose automatic mask pruning (AMP) to automatically identify task-specific filters/nodes for different tasks in the pre-trained model.
Algorithm <ref> provides a detailed outline of the AMP process.
Specifically, given a pre-trained network F(·;Θ) with parameters Θ and a training set D_t of a new target task t, let Θ^t={θ^t_i}_{i=1:n}, where θ^t_i denotes filter/head/node i in the network.
By adding a mask, we incorporate a learnable mask score S^t_i to each prunable filter/head/node-i in the pre-trained model.
We define an operator ⊙ applied to Θ^t and its associated scores S^t as
(Θ^t⊙ S^t)[i]≜Θ^t[i] · S^t[i]
During the pruning process, these differentiable scores are optimized along with model parameters. To encourage sparsity, an additional L1 regularization loss is applied and filters/nodes with scores below a predefined threshold will be pruned.
The final objective function of AMP is defined as follows:
min_{S^t} 𝔼_(x,y)∼ D_t l(y, F(x; Θ^t ⊙ S^t)) + λ‖S^t‖_1
where y represents the ground truth for x, l denotes the cross-entropy loss, and λ is the weight used to balance between the two losses.
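A minimal PyTorch sketch of this masking mechanism is given below. The wrapper attaches one learnable score per convolutional filter, scales the layer's output channels by those scores, and adds the L1 penalty from the objective above; the wrapping strategy, the threshold and the value of λ are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    """Pre-trained conv layer with a learnable mask score per output filter."""

    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        self.score = nn.Parameter(torch.ones(conv.out_channels))

    def forward(self, x):
        # Scale each output channel (filter) by its differentiable mask score.
        return self.conv(x) * self.score.view(1, -1, 1, 1)

def amp_objective(logits, labels, masked_layers, lam=1e-3):
    """Cross-entropy loss plus the L1 sparsity penalty on all mask scores."""
    l1 = sum(layer.score.abs().sum() for layer in masked_layers)
    return F.cross_entropy(logits, labels) + lam * l1

def surviving_filters(masked_layers, threshold=0.05):
    """Per layer, indices of filters whose mask score exceeds the threshold."""
    return [torch.nonzero(layer.score.abs() > threshold).flatten()
            for layer in masked_layers]
```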
We apply AMP to prune two major categories of pre-trained models, i.e., CNN and ViT, for diverse tasks with different memory constraints.
Specifically, we select ResNet-18(ResNet-50)<cit.> pre-trained on CIFAR-100<cit.>(ImageNet <cit.>) for CNN, and apply AMP to multiply the mask score to each filter in the network.
For ViT, we use DeiT-S <cit.> pre-trained on ImageNet.
As reported in previous work <cit.>, only some attention heads in deep pre-trained transformers are necessary for downstream tasks. Therefore, AMP is used to prune ViT at two levels: heads in the multi-head attention modules and nodes in the feed-forward network modules.
In the pool of pruned models, tasks for ResNet-18, ResNet-50, and ViT are randomly sampled from classes in CIFAR-100 and ImageNet datasets, respectively.
To verify the generality and scalability of our proposed method, we collect the pruned models of diverse tasks,
which can be divided into 3 groups: 3-classes, 5-classes and 10-classes classification tasks, each containing 300 tasks.
To emulate the memory limitations of various devices, we store pruned models with varying pruning ratios for each task in our model pool.
Due to the high memory costs of storing each pruned model, we have modified the AMP algorithm such that only mask scores are optimized with regularization, while all pre-trained model parameters remain fixed.
This modification facilitates accurate masking for each task to identify task-specific knowledge in the pre-trained model.
As all tasks can share the same pre-trained model during inference, we only record the class labels C^t and the mask S^t for each task t. The mask scores of pruned filters/nodes are then set to 0.
§.§ Knowledge Shared between Tasks
In the realm of multi-task/lifelong learning methods, similar tasks usually share more parameters in the network. In this section, we study the overlap of pruned models for similar tasks to verify whether more similar downstream tasks share more parameters in the pre-trained model.
To compute the similarity between downstream tasks, we apply the Log Expected Empirical Prediction (LEEP) <cit.>, which is used to evaluate the transferability of representations learned by the source task to the target task. This method only requires running the target task's data through the pruned model once to compute the LEEP score.
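For reference, the sketch below follows the published LEEP definition: the target task's data are passed once through the source (here, pruned) model to obtain predictions over the source classes, an empirical joint distribution between source and target labels is formed, and the score is the mean log expected empirical prediction. This is a generic sketch under these assumptions, not the authors' code.

```python
import numpy as np

def leep_score(source_probs, target_labels):
    """LEEP transferability score.

    source_probs : (n, |Z|) softmax outputs of the source/pruned model on the
                   target task's data, over the source ("dummy") label set Z.
    target_labels: (n,) integer target labels in {0, ..., |Y|-1}.
    """
    n, num_z = source_probs.shape
    num_y = int(target_labels.max()) + 1

    # Empirical joint distribution P(y, z) and marginal P(z).
    joint = np.zeros((num_y, num_z))
    for probs, y in zip(source_probs, target_labels):
        joint[y] += probs / n
    marginal_z = joint.sum(axis=0, keepdims=True)

    # Conditional P(y | z), then the expected empirical prediction per sample.
    cond = joint / np.clip(marginal_z, 1e-12, None)
    eep = (cond[target_labels] * source_probs).sum(axis=1)
    return float(np.log(np.clip(eep, 1e-12, None)).mean())
```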
Overlap of task-specific filters/nodes.
Upon applying AMP to a new task, filters or nodes that have small mask scores will be pruned, whereas those with high mask scores, which contain task-specific knowledge relevant to the downstream task, can be retained in the model.
So we focus on the overlap of these high-score filters/nodes between tasks.
Given the pruned model of a task m, the set of filters/nodes Ω^m retained in the pre-trained model are sorted according to their mask scores {S^m_i}_i∈Ω^m in the descending order.
Ω^m_k denotes the filters/nodes with top-k mask score values in the mask of task m.
For each pair of tasks, say task m and task n (using the same pre-trained model), we compute the overlap ratio R of filters/nodes with top-k score values in their masks, i.e., R = |Ω^m_k ∩Ω^n_k|/k.
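This overlap ratio can be computed directly from the stored mask scores of any two pruned models; a minimal sketch is shown below, assuming the score vectors are aligned over the same n prunable units.

```python
import numpy as np

def topk_overlap_ratio(scores_m, scores_n, k):
    """R = |Omega^m_k ∩ Omega^n_k| / k for the top-k scored filters/nodes."""
    top_m = set(np.argsort(scores_m)[::-1][:k])
    top_n = set(np.argsort(scores_n)[::-1][:k])
    return len(top_m & top_n) / k
```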
In Fig. <ref>, we present the overlap ratio of retained filters/nodes in various pre-trained models for tasks with varying degrees of similarity.
The x-axis of Fig. <ref> represents the top-k filters/heads/nodes with the highest mask scores in the pruned model, while the y-axis represents the overlap ratio of top-k filters in the pruned models of two similar tasks.
Given a new task, we calculate its LEEP similarities to the existing tasks in the model pool. Then we sort these LEEP similarities and partition them into three groups of equal intervals. Existing tasks whose similarity scores fall into a specific interval will be assigned to the corresponding similarity group. From similarity group 1 to group 3 in Fig. <ref>, the similarities between tasks decrease.
We observed from all three plots in Fig. <ref> that the overlap ratios of tasks belonging to similarity group 1 are considerably greater than those of tasks in similarity group 3. This indicates that the pruned models of more similar tasks share a significantly higher number of task-specific filters/heads/nodes. Hence, the pruned models of previous similar tasks can be utilized to identify task-specific parameters in the pre-trained model, expediting the pruning of the new task.
On the other hand, as the value of k increases, the overlap ratios in three plots grow gradually. This can be attributed to the fact that certain filters/heads/nodes with high mask scores in one task may be retained by another task with smaller scores. These filters/nodes have varying importance for different tasks and may serve distinct roles. In plot (c), we observe that the overlap ratios begin to converge when k exceeds 6. This is due to the fact that only a small number of heads (approximately 8) are preserved in the pruned model of each task.
§.§ Scalable Mask Selection Pruning (SMSP)
Inspired by the above discovery, we propose a generic and simple method called “Scalable Mask Selection Pruning (SMSP)" to fast-adapt the pre-trained model to downstream tasks.
The process of generating a mask for each new task is illustrated in Figure <ref>.
SMSP leverages the knowledge of pruned models for similar tasks to create a pruning mask of the pre-trained model for a new task. The detailed process of SMSP is shown in Alg. <ref>.
Specifically, given a new task t, SMSP first calculates its LEEP similarities <cit.> to tasks in the pool and samples M similar neighbor tasks M^t.
The mask scores S^t of task t are computed by summing the mask scores of all selected similar tasks, as shown below:
S^t_i = ∑_{m=1}^{M} S^m_i,   i = 1, …, n
Here, n represents the total number of filters/heads/nodes in the model, and M represents the total number of selected similar tasks.
As filters/nodes with high scores in S^t have been shown to play essential roles in similar tasks, it is likely that they contain task-specific knowledge relevant to the new target task t.
We sort the mask score of task t in descending order. Given any pruning ratio r, SMSP prunes r*n filters with the smallest mask scores once to meet the memory constraint.
The training objective of SMSP is:
min 𝔼_(x,y)∼ D_t l(y, F(x; θ^t_i: i∈Ω))
where θ^t_i: i∈Ω represents filters/nodes retained after pruning.
In the retained sub-network, the mask is removed, and all the parameters are inherited from the original pre-trained model.
SMSP trains the sub-network on the new target task's data for only a few iterations to speed up pruning.
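The selection step therefore amounts to summing the stored score vectors of the chosen neighbours and keeping the highest-scoring units. A minimal sketch is given below; it assumes the pool stores one score per prunable filter/head/node (zero for pruned entries), and omits both the extraction of the physical sub-network from the architecture and the brief fine-tuning stage.

```python
import numpy as np

def smsp_mask(similar_masks, pruning_ratio):
    """Select the filters/heads/nodes retained by SMSP for a new task.

    similar_masks : list of M score vectors S^m of length n (0 for pruned units).
    pruning_ratio : fraction r of units to remove for the target device.
    Returns the sorted indices of the retained units.
    """
    summed = np.sum(similar_masks, axis=0)            # S^t_i = sum_m S^m_i
    n = summed.shape[0]
    num_keep = n - int(round(pruning_ratio * n))
    keep = np.argsort(summed)[::-1][:num_keep]        # highest aggregated scores
    return np.sort(keep)

# The retained units inherit their pre-trained weights (the mask is dropped),
# and the resulting sub-network is fine-tuned for only a few iterations.
```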
§ EXPERIMENTS
In this section, we evaluate SMSP by pruning ResNet and ViT for downstream tasks from several datasets and compare its results with SOTA pruning methods. We validate the scalability and generality of SMSP by generating pruned models for tasks with different memory constraints. Finally, we study the effect of the mask, the number of similar tasks and task similarities on SMSP.
§.§ Experimental Settings
For each experiment scenario, we randomly sample 50 test tasks from the dataset. Each test task selects its similar tasks from the pool of pruned models according to their LEEP similarities. To make our study more rigorous, the classes in the selected similar tasks are disjoint from those in the test task, so their training data do not overlap.
In our experiments, we conduct a grid search on a small subset of test tasks to tune the hyperparameters, which are then applied to all tasks. When applying SMSP to prune ResNet, we utilize SGD to train the sub-network and apply cosine annealing learning rate. The batch size is set to 128, and the initial learning rate is set to 0.01.
For experiments of ViT, we follow previous works<cit.> and use the optimizer of AdamW with the cosine-annealing learning rate. During training, we use a batch size of 256 and a smaller initial learning rate of 0.0002.
All results shown in this section are averaged over 50 test tasks.
§.§ Comparison with SOTA Methods
We compare our method with several SOTA pruning methods. To demonstrate our method's effectiveness, we compare it with AMP, a conventional pruning method that prunes the pre-trained model from scratch using a large number of pruning iterations.
For tasks on ResNet, we also include two popular pruning methods as baselines: Feature Pruning <cit.> and Taylor Pruning <cit.>. Feature Pruning calculates the importance of filters by averaging the activation values over all training samples, while Taylor Pruning measures the impact of removing each filter on the final loss to determine their importance.
We also compare our method with some popular methods that accelerate pruning. For example, IHT-based Reptile <cit.> learns a well-initialized pruned meta-model on a set of training tasks. For each new task, it can obtain the final pruned model by training the meta-model for a few iterations. DLTH <cit.> is a variant of LTH, which extracts a winning ticket for each task. MEST <cit.> can accelerate pruning by training from a sparse sub-network.
For pruning ViT, we compare SMSP with PoWER <cit.>, which proposes to dynamically prune the input patches of each block in ViT, and UVC <cit.>, which not only prunes heads and nodes but also unimportant layers and blocks in the model.
The results of comparing SMSP with the baseline methods for ResNet and ViT are presented in Tab. <ref> and Tab. <ref>, respectively. All results are obtained by pruning 5-classes classification tasks with a pruning ratio of 90%. The findings indicate that, for both ResNet and ViT, SMSP performs slightly better than AMP, which requires significantly more pruning iterations.
Although Feature Pruning and Taylor Pruning also yield similar or slightly better results than SMSP for ResNet-18 and ResNet-50, they demand significantly more computational resources than SMSP.
Moreover, SMSP surpasses IHT-based Reptile by a large margin, despite the fact that both approaches leverage knowledge from multiple tasks. Unlike IHT-based Reptile, which employs the same pruned meta-model for each new task, SMSP extracts different sub-networks for different tasks, composed of task-specific parameters, which can enhance performance.
Furthermore, the performance of SMSP outperforms DLTH and MEST, which, like SMSP, start with a well-designed sub-network. However, neither DLTH nor MEST has task-specific knowledge in their initialized pruned model, while SMSP initializes the sub-network by leveraging knowledge from similar tasks.
The outcomes presented in Tab. <ref> demonstrate that SMSP significantly outperforms baseline methods for ViT. Owing to a relatively low number of training iterations, neither UVC nor PoWER can recover the accuracy when a considerable number of parameters or patches are eliminated. Conversely, SMSP leverages a sub-network created by similar tasks as an initialization, hence, only a few training iterations are necessary to construct a well-performing pruned model.
§.§ Evaluation of Scalability and Generality
Our proposed SMSP is scalable in two respects. 1) SMSP can produce a promising pruned model for a new task under any memory constraint with a few training iterations. 2) Pruned models of tasks with varying data distributions and sizes can all be selected as similar tasks to accelerate the pruning of the new task.
Applying SMSP to tasks of different sizes.
In Tab. <ref>, we show the results of applying SMSP to tasks of different sizes. The pruning ratios of all tasks are set to 90%. In the table, we find that for test tasks of different sizes, when we use the 5-classes similar tasks to extract the sub-networks for the test tasks, its performance is better than that of the 3-classes similar tasks. This is because similar tasks containing more classes can better differentiate data from different classes. Similar tasks of large sizes can extract more accurate task-specific filters/nodes for a given new task.
Applying SMSP to tasks of different memory constraints.
In Tab. <ref>, we apply SMSP to tasks of varying memory constraints. All the tasks are 5-classes classification tasks.
We observe that SMSP outperforms AMP when transferring between different pruning ratios.
Additionally, SMSP performs better when the pruning ratios of similar tasks and test tasks are the same. This could be attributed to the fact that in a pruned model with a small pruning ratio, some redundant filters/nodes are preserved in the mask, whereas in a pruned model with a large pruning ratio, some task-specific filters/nodes will be removed.
An interesting finding is that SMSP can leverage similar tasks with large pruning ratios to generate a well-performing pruned model of a smaller pruning ratio for a new task. This demonstrates the superiority of using pruned results of similar tasks as prior knowledge.
Performance on unseen tasks. To validate the generality of SMSP, we randomly sample 50 test tasks from Caltech-256 <cit.>. SMSP produces pruned models for these test tasks by learning from pruned results of tasks from ViT trained on ImageNet. The pre-trained ViT and similar tasks in the pool of pruned results never see the data of Caltech-256. All the test tasks are 5-classes classification tasks with the pruning ratio of 90%.
In Tab. <ref>, we show the results of applying SMSP to Caltech-256 and compare it with AMP.
The results show that SMSP can achieve comparable performance as AMP, which uses 10x training iterations. This indicates that SMSP can also identify task-specific heads/nodes in the pre-trained ViT for each unseen task from Caltech-256, so only a few training iterations suffice to produce a well-performed pruned model, showing the generality of SMSP to diverse datasets.
§.§ Ablation Study
Effect of the mask.
The main contribution of SMSP is its ability to leverage the pruned results of similar tasks to generate the task-specific mask for each new test task. To validate the efficacy of the masks produced by SMSP, we randomly generate a mask for each task using the same pruning ratio and compare their performance with that of SMSP. In Tab. <ref>, we observe that for tasks using ResNet-18 and ViT, the performance of random masks is significantly worse than that of SMSP. These results suggest that the masks generated by SMSP can effectively identify filters/nodes that are relevant to the new target tasks.
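For reference, the random-mask baseline used in this comparison can be sketched in PyTorch as follows; the per-layer unit count, the seed and the way the mask is applied to the network are illustrative assumptions:

import torch

def random_mask(num_units, pruning_ratio=0.9, seed=0):
    # Keep a (1 - pruning_ratio) fraction of filters/nodes chosen
    # uniformly at random, as in the random-mask baseline above.
    generator = torch.Generator().manual_seed(seed)
    num_kept = int(round(num_units * (1.0 - pruning_ratio)))
    mask = torch.zeros(num_units, dtype=torch.bool)
    kept = torch.randperm(num_units, generator=generator)[:num_kept]
    mask[kept] = True
    return mask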
Effect of the number of similar tasks.
In plot (a) of Fig.<ref>, we study the effect of the number of similar tasks for each new task. For both tasks from ResNet-18 and ViT, as the number of similar tasks increases, the performance of SMSP also improves.
This is because more pruned results of similar tasks can provide more task-specific knowledge for the new task.
When the number exceeds 8, SMSP converges, which indicates that 8 similar tasks per new task are enough for SMSP to create a high-quality mask.
Effect of task similarities.
In plot (b) of Fig. <ref>, we compare the performance of SMSP when tasks with different similarities are used. The accuracy obtained with pruned models of higher similarity is always better than that of lower similarity, which implies that tasks with high similarities share more knowledge with the new target tasks. This observation aligns with the findings presented in Section <ref>. The plot also illustrates that SMSP converges when the number of training iterations exceeds 80, indicating that a limited number of training iterations is enough for SMSP to build a promising pruned model.
§ CONCLUSION
In this paper, we propose a generic one-shot pruning method called SMSP to fast-adapt the pre-trained model to downstream tasks.
Based on the discovery that tasks with high similarities share more filters/nodes in their pruned models, given a new task, SMSP leverages the knowledge from the pruned models of its similar tasks to extract a sub-network from the pre-trained model. Then, a few training steps on the sub-network can reach a high-quality pruned model. Our experiments demonstrate that SMSP achieves SOTA results in terms of both accuracy and efficiency across various datasets and pre-trained models.
|
http://arxiv.org/abs/2307.03974v2 | 20230708132712 | Comparing EventB, $\{log\}$ and Why3 Models of Sparse Sets | [
"Maximiliano Cristiá",
"Catherine Dubois"
] | cs.SE | [
"cs.SE"
] |
Comparing EventB, {log} and Why3 Models of Sparse Sets
Maximiliano Cristiá
Catherine Dubois
August 12, 2023
=========================================================================================
Many representations for sets are available in programming language libraries. The paper focuses on sparse sets used, e.g., in some constraint solvers for representing integer variable domains which are finite sets of values, as an alternative to range sequences. We propose in this paper verified implementations of sparse sets, in three deductive formal verification tools, namely EventB, {log} and Why3. Furthermore, we draw some comparisons regarding specifications and proofs.
§ INTRODUCTION
Sets are widely used in programs. They are sometimes first-class objects of programming languages, e.g. SETL <cit.> or {log} <cit.>,
but more frequently they are data structures provided in libraries. Many different representations are available, depending on the targeted set operations. In this paper, we focus on sparse sets, introduced by Briggs and Torczon in <cit.>, used in different contexts and freely available for different programming languages (Rust, C++ and many others). In particular,
sparse sets are used in constraint solvers as an alternative to range sequences or bit vectors for implementing domains of integer variables <cit.> which are nothing else than mathematical finite sets of integers. Their use in solvers implementations is motivated by -at least- the two following properties: searching and removing an element are constant-time operations—removing requires only two swapping
operations on arrays; sparse sets are cheap to trail and restore, which is a key point when backtracking.
Confidence on constraint solvers using sparse sets can be improved if the algorithms implementing the main operations are formally verified, as it has been done by Ledein and Dubois in <cit.> for the traditional implementation of domains as range sequences. Hence, the main contribution of this paper is
a verified implementation of sparse sets for representing finite sets of integers in EventB, {log} and Why3.
We prove that the implemented operations preserve the invariants and we also prove properties that can be seen as formal foundations of trailing and restoring. As far as we know, this is the first formally verified implementation of sparse sets, whereas it has been done for other representations e.g. <cit.>. All the specifications and proofs can be found here: <https://gitlab.com/cdubois/sets2023.git>.
It has been known for decades that there is no silver bullet for software engineering or software development. The best we can do as software engineers is to increase our toolbox as much as possible and use the best available tool in it for the problem at hand. This software engineer practical principle still applies when it comes to formal development, formal methods and formal verification. In our opinion the Formal Methods (FM for short) community should have as much information as possible about the relative advantages and disadvantages of different FM methods and tools. With the intention to shed some light on the ups and downs of different FM, we specified and verified sparse sets with three different FM techniques. Then, a second contribution of this paper is a comparison of these FM w.r.t. aspects such as expressiveness, specification analysis and automated proof.
§ SPARSE SETS
We deal here with sets as subsets of natural numbers up to N-1, where N is any non null natural number. A sparse set S is represented by two arrays of length N called mapD and domD (as in <cit.>), and a natural number sizeD. The array mapD maps any value v ∈ [0,N-1] to its index ind_v in domD, the value indexed by ind_v in domD is v. The main idea that brings efficiency when removing an element or testing membership is to split domD into two sub-arrays, domD[0,sizeD-1] and domD[sizeD, N-1], containing resp. the elements of S and the elements of [0,N-1] not in S. Then, if S is empty, sizeD
is equal to 0, if S is the full set, then sizeD is N.
Checking if an element i belongs to the sparse set S simply consists in the evaluation of the expression mapD[i]<sizeD. Removing an element from the set consists in moving this element
to domD[sizeD, N-1] (with 2 swaps in mapD and domD and decreasing sizeD). Binding S to the singleton set {v} follows the same idea: moving this element at the first place in domD and assigning the value 1 to sizeD.
In our formalizations, we only deal with two operations, consisting in removing an element from a sparse set and binding a sparse set to a singleton set, since these two operations are fundamental when solving constraints. In this context, we may also need to walk through all the elements of a variable domain, which means exploring domD[0..sizeD-1]. If minimal and maximal values are required, then they have to be maintained in parallel. This is outside the scope of this work.
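As an executable illustration (a plain Python sketch mirroring the description above, not one of the verified models), the data structure and its two operations can be written as follows:

class SparseSet:
    # domD[0:sizeD] holds the members of S, domD[sizeD:N] the non-members,
    # and mapD[v] gives the index of value v inside domD.
    def __init__(self, n):
        self.n = n
        self.domD = list(range(n))
        self.mapD = list(range(n))
        self.sizeD = n              # initially S is the full set [0, N-1]

    def contains(self, v):
        return self.mapD[v] < self.sizeD

    def _swap(self, i, j):
        # Two swaps keep domD and mapD inverse of each other.
        a, b = self.domD[i], self.domD[j]
        self.domD[i], self.domD[j] = b, a
        self.mapD[a], self.mapD[b] = j, i

    def remove(self, v):
        if self.contains(v):
            self._swap(self.mapD[v], self.sizeD - 1)
            self.sizeD -= 1

    def bind(self, v):
        # Bind S to the singleton set {v}.
        self._swap(self.mapD[v], 0)
        self.sizeD = 1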
§ EVENTB FORMAL DEVELOPMENT
In this section we succinctly introduce the EventB formal specification language and, in more detail, the EventB models for sparse sets.
§.§ EventB
EventB <cit.> is a deductive formal method based on set theory and first order logic allowing users to design correct-by-construction systems. It relies on a state-based modeling language in which a model, called a machine,
is made of a state and a collection of events allowing for state changes. The state consists of variables constrained by invariants.
Proof obligations are generated to verify the preservation of invariants by events. A machine may use a -mathematical- context which introduces abstract sets, constants, axioms or theorems. A formal design in EventB starts with an abstract machine which is usually refined several times. Proof obligations are generated to verify the correctness of a refinement step.
An event may have parameters. When its guards are satisfied, its actions, if any, are executed, updating state variables. Actions may be -multiple- deterministic assignments, x,y:=e, f, or -multiple- nondeterministic ones, x,y :| BAP(x,x',y,y') where BAP is called a Before-After Predicate relating current (x, y) and next (x', y') values of state variables x and y.
In the latter case, x and y are assigned arbitrary values satisfying the BAP predicate. When using such a non-deterministic form of assignment, a feasibility proof obligation is generated in order to check that there exist values for x' and y' such that BAP(x,x',y,y') holds when the invariants and guards hold. Furthermore when this kind of action is used and refined, the concrete action updating x and y is required to assign them values which satisfy the BAP predicate.
In the following, we use Rodin, an Eclipse based IDE for project management, model edition, refinement and proof, automatic proof obligations generation, model animation and code generation. Rodin supports automatic and interactive provers <cit.>. In this work we used the standard provers (AtelierB provers) and also the SMT solvers VeriT, CVC3 and CVC4. More details about EventB and Rodin can be found in <cit.> and <cit.>.
§.§ EventB formalization
The EventB formalization is made of six components, i.e. two contexts, a machine and three refinements. Context Ctx introduces the bound N as a non-zero natural number and context Ctx1 extends the latter with helper theorems. The high level machine gives the abstract specification. This model contains a state composed of a finite set D, constrained to be a subset of the (integer) range 0..N-1, and two events, to remove an element from D or set D as a singleton set (see Fig. <ref> in which bind is removed for lack of space).
The first refinement (see Fig.<ref>)
introduces the representation of the domain as a sparse set, i.e. two arrays mapD and domD modeled as total functions and also the variable sizeD which is a natural number in the range 0..N. Invariants inv4 and inv5 constrain mapD and domD to be inverse functions of each other.
The gluing invariant inv6 relates the states between the concrete and former abstract machines. So the set domD[0..sizeD-1] containing the elements of the subarray from 0 to sizeD-1 is exactly the set D.
Theorem inv7 is introduced to ease some interactive proofs, it is proved as a consequence of the previous formulas (inv1 to inv6).
It follows directly from a theorem of Ctx1 whose statement is inv7 where domD and mapD are universally quantified. Theorem inv8, also used in an interactive proof, and automatically proved by CVC3, states that domD is an injective function.
Variables mapD and domD are both set initially to the identity function on 0..N-1 and sizeD to N. So invariants are satisfied at the initial state. Machine SparseSets_ref1 refines the events of the initial machine by non deterministic events. So here the remove event assigns the three state variables with values that satisfy invariants and also such that sizeD strictly decreases and removed elements in domD are kept at the same place (properties in bold font). Event bind follows the same pattern (again not shown here).
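For readers who prefer an executable rendering, the invariants of this refinement can be paraphrased as the following Python check (a sketch only: inv4/inv5 state that mapD and domD are inverse functions, and inv6 is the gluing invariant relating them to the abstract set D):

def check_invariants(domD, mapD, sizeD, D):
    n = len(domD)
    assert len(mapD) == n and 0 <= sizeD <= n           # sizes and bound
    assert all(0 <= domD[i] < n for i in range(n))      # contents in 0..N-1
    assert all(mapD[domD[i]] == i for i in range(n))    # inv4/inv5: mapD and domD
    assert all(domD[mapD[v]] == v for v in range(n))    #   are inverse functions
    assert set(domD[:sizeD]) == set(D)                  # inv6: domD[0..sizeD-1] = D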
The second refinement has the same state as the previous refinement (see Fig. <ref>). Its events implement the operations using the new state variables. It is a straightforward translation of the algorithms described in <cit.>.
The only reason to have introduced the intermediate model
SparseSets_ref1 is to express the properties written in bold font and thus generate, in the next refinement, proof obligations which, when discharged, will not only ensure that the events refined in Fig. <ref> preserve the invariants inv1, inv2 …inv6 but also the local properties regarding sizeD and domD[sizeD..N-1] (SIM proof obligations).
The feasibility (FIS) proof obligations generated by the non-deterministic events of SparseSets_ref1 require to prove that there exist values such that the BAP predicate holds. We can prove it using the new values of domD, mapD and sizeD specified in the last refinement as witnesses. The simulation (SIM) proof obligations generated by events of SparseSets_ref2 require to prove that the latter values again satisfy the BAP predicate used in SparseSets_ref1. In order not to do these -interactive- proofs twice, we generalize them and prove them as theorems of the context. Thus to discharge the FIS and SIM proof obligations, we only have to instanciate these theorems to provide a proof.
A last algorithmic refinement, omitted here, refines the remove event into two events, removeLastButOne and removeLast. The former differs from remove only by its more restrictive guard; the latter is dedicated to the case where the element with index sizeD-1 in domD is removed, thus avoiding the unnecessary swapping.
§ {LOG} FORMAL DEVELOPMENT
In this section we briefly present the {log} tool and how we used it to encode the model of sparse sets.
§.§ {log}
{log} is a constraint logic programming (CLP) language and satisfiability solver where sets and binary relations are first-class citizens <cit.>. The tool implements several decision procedures for expressive fragments of set theory and set relation algebra including cardinality constraints <cit.>, restricted universal quantifiers <cit.>, set-builder notation <cit.> and integer intervals <cit.>. In previous works {log} has been satisfactorily tested against some known case studies <cit.>.
{log} code enjoys the formula-program duality. This means that {log} code can behave as both a formula and a program. When seen as a formula, it can be used as a specification on which verification conditions can be (sometimes automatically) proved. When seen as a program, it can be used as a (less efficient) regular program. Due to the formula-program duality, a piece of {log} code is sometimes called a forgram—a portmanteau word resulting from combining formula with program.
§.§ {log} formalization
The {log} formalization presented in this paper is the result of translating the abstract specification (i.e., Fig. <ref>) and the second refinement (i.e. Fig. <ref>). Both models can be easily translated into {log} by using the (still under development) state machine specification language (SMSL) defined on top of {log}
(see Fig. <ref> and <ref>) <cit.>. The notions of context and refinement are not available in SMSL. For this reason, refinements introduced in the EventB model have to be manually encoded in {log}. The EventB context is encoded simply as an axiom. In order to ensure that the {log} code verifies the properties highlighted in bold in Fig. <ref> as well as the gluing invariant (i.e., inv6), a few user-defined verification conditions are introduced as theorems. Since the first refinement is introduced to express the properties written in bold, its events have not been encoded in {log}.
Figures <ref> and <ref> list only representative parts of the forgram.
We tried to use the same identifiers as in the EventB models as much as possible. In this way, for example, the invariant labeled as inv6 in the SparseSets_ref1 machine (Fig. <ref>) is given a corresponding name in the forgram. The names of variables in {log} cannot fully comply with those used in the EventB models because {log} requires all variables to begin with a capital letter. So, for example, domD in the SparseSets_ref1 machine becomes DomD in {log}.
As can be seen in Fig. <ref>, the state machine specification language defined on top of {log} allows for the declaration of parameters (similar to context constants), state variables, axioms (similar to context axioms) and invariants. The parameter corresponding to N is used to compute the identity relation on the integer interval [0,N-1], as shown in one of the axioms, which in turn is used in one of the invariants. As {log} is a CLP language implemented on top of Prolog, it inherits many of Prolog's features. In particular, integer expressions are evaluated by means of a dedicated predicate. Along the same lines, all set operators are implemented in {log} as constraints. For example, one constraint is true when a given binary relation is the identity relation on a given set, and an interval term denotes the integer interval [0,M].
Several named invariants in the forgram correspond to invariant inv1 of the SparseSets_ref1 machine. Splitting invariants into smaller pieces is a good practice when using {log} as a prover because it increases the chances of automated proofs. A further predicate implements the negation of one of these invariants. {log} does not automatically compute the negation of user-defined predicates. As a user-defined predicate can contain existential variables, its negation could involve introducing universal quantifiers which fall outside {log}'s decision procedures. Then, users are responsible for ensuring that all predicates are safe.
One of the invariants uses a constraint implementing the notion of restricted universal quantifier (RUQ). That is, for some formula ϕ and set A, it corresponds to ∀ X.(X ∈ A ⟹ϕ(X)). In such a constraint it is possible to quantify over binary relations; in that case we have a quantified ordered pair rather than just a variable. Likewise, {log} offers a constraint implementing the notion of restricted existential quantifier (REQ). The important point about REQ and RUQ is not only their expressiveness but the fact that there is a decision procedure involving them <cit.>. Here these constraints are used to state a double set inclusion equivalent to the formula domD[0 .. sizeD - 1] = D. If the user is not convinced or unsure about the validity of this equivalence (s)he can use {log} itself to prove it.
Note that one of these predicates is not declared as an invariant because in Fig. <ref> it is a theorem that can be deduced from previous invariants.
Therefore, we introduce it as a simple predicate but then we declare a theorem whose conclusion is that predicate. Later, {log} will include it as a proof obligation and will attempt to discharge it. Given that {log} is a satisfiability solver, if Φ is intended to be a theorem then we ask it to prove the unsatisfiability of ¬Φ.
Moving to Fig. <ref>, we can see the {log} encoding of the remove operation specified in the SparseSets_ref2 machine of Fig. <ref>, along with two user-defined proof obligations. In {log}, there is no global state, so state variables have to be included as explicit arguments of clauses representing operations. Next-state variables are denoted by decorating the base name with an underscore character (e.g., the decorated variable corresponds to the value of the base variable in the next state). Another important difference between the EventB and the {log} specifications is that in the latter we can use set unification to implement function application. For instance, one set equality in the clause is equivalent to the predicate ∃ y_2, y_5, domD_1. (domD = {sizeD - 1 ↦ y_2, y_1 ↦ y_5}∪ domD_1), where y_1 = mapD(v) (due to the previous set unification). The not-membership constraints following the equality constraint prevent the generation of repeated solutions. Hence, when the clause is called with some set term in its fourth argument, this term is unified with the pattern above. If the unification succeeds, then the required function images are available.
As said before, some user-defined proof obligations are introduced as theorems to ensure that the forgram verifies the gluing invariant (i.e., inv6) and the properties written in bold in machine SparseSets_ref1. Precisely, the first theorem states that if the gluing invariant holds and remove and its abstract version (not shown in the paper) are executed, then the gluing invariant holds in the next state.[remove and its abstract version can be distinguished by their arities.]
Likewise, a second theorem ensures that the second property written in bold in machine SparseSets_ref1 is indeed a property of the forgram. As can be seen, the theorem states that if remove is executed, then the functional image[The functional image is computed by a user-defined predicate returning the relational image through a function.] of the interval of removed positions through domD must coincide with the functional image of the same interval through the new value of domD.
Once the {log} specification is ready, we can call the verification condition generator (VCG) and run the verification conditions (VC) so generated:
VCs include the satisfiability of the conjunction of all axioms, the satisfiability of each operation and preservation lemmas for each and every operation and invariant. The last command above will attempt to automatically discharge every VC. Part of the output is as follows:
A negative answer means that, for some reason, {log} is unable to discharge the VC. Most of the time this is due to some missing hypothesis which, in turn, is due to the way the VCG generates the VCs. Briefly, when it comes to invariance lemmas, the VCG generates them with the minimum number of hypotheses. So, for instance, one of the invariance lemmas is as follows:
By including minimum hypotheses, {log} will have to solve a simpler goal, which reduces the possibility of a complexity explosion. If the hypotheses are not enough, a dedicated command can be used to find potential missing hypotheses.
In this way, users can edit the VC file, add the missing hypotheses and run the VC again. If more hypotheses are still missing, the process can be repeated until the proof is done—or the complexity explosion cannot be avoided.
{log} discharges all the VC generated by the VCG for the present forgram.
§ WHY3 FORMAL DEVELOPMENT
In this section we briefly introduce the Why3 platform and describe in some detail our specification of sparse sets.
§.§ Why3
Why3 <cit.> is a platform for deductive program verification providing
a language for specification and programming, called WhyML, and relies on external automated and interactive theorem provers, to discharge verification conditions. In the context of this paper, we used Why3 with the SMT provers CVC4 and Z3.
Proof tactics are also provided, making Why3 a proof environment close to that of Rodin for interactive proofs. Why3 supports modular verification.
WhyML allows the user to write functional or imperative programs featuring polymorphism, algebraic data types, pattern-matching, exceptions, references, arrays, etc. These programs can be annotated by contracts and assertions and thus verified. User-defined types with invariants can be introduced, the invariants are verified at the function call boundaries. Furthermore to prevent logical inconsistencies, 3 generates a verification condition to show the existence of at least one value satisfying the invariant. To help the verification, a witness is explicitly given by the user (see the clause in Fig. <ref>).
The old and at operators can be used inside post-conditions and assertions to refer to the value of a mutable program variable at some past moment of execution. In particular, in a function post-condition, old t refers to the value of term t when the function is called.
From verified WhyML programs, correct-by-construction OCaml programs (and recently C programs) can be automatically extracted.
§.§ Why3 formalization
From the Why3 library, we use pre-defined theories for integer arithmetic, polymorphic finite sets and arrays. In the latter, we use in particular the operation that exchanges two elements in an array and its specification, given by a dedicated predicate.
We first define a record type whose mutable fields are a record containing the computational elements of a sparse set representation and a ghost finite set of integer numbers which is the abstract model of the data structure. The type invariant of this record relates the abstract model with the concrete representation. It is used
to enforce consistency between them. Invariants enforcing consistency between the two arrays mapD and domD and the bound sizeD are attached to the inner record type: the length of the arrays is N, their contents belong to 0..N-1, the two arrays are inverse of each other, and sizeD is in the interval 0..N. These type definitions and related predicates are shown in Fig. <ref>.
Our Why3 formalization (see Fig. <ref>, where, again, bind is removed for lack of place) contains three functions—a swap helper, remove and bind—which update their arguments. They are the straightforward translation of the algorithms in <cit.> in WhyML, except for the supplementary ghost code (the last statement in both remove and bind) which updates the abstract model contained in the record. The swap helper is called in the other two functions.
The contract of the swap helper makes explicit the modifications of both arrays mapD and domD, using the corresponding predicate defined in the library. Verification conditions for this function concern the conformance of the code to the two post-conditions (trivial as they are ensured by the library specification) and also the preservation of the invariant attached to the type—i.e. mainly that mapD and domD remain inverse of each other after swapping elements.
Both remove and bind act not only on the two arrays and the bound but also on the ghost part, i.e. the corresponding mathematical set. Thus the verification conditions here not only concern the structural invariants related to mapD, domD and sizeD but also the ones deriving from the use of the outer record type, proving the link between the abstract logical view (using finite sets) and the computational one implemented through arrays.
Observe that the two record types correspond to the state and invariants of the EventB refinements. The abstract specification presented in the first machine becomes a ghost field in WhyML. The invariant of the outer type corresponds to the gluing invariant (inv6). A similar transposition happens for the operations. Actions in the abstract events, i.e. updating the abstract set, appear as ghost code in WhyML.
All proofs are found by the automatic provers except for some proof obligations related to one of the functions. Nevertheless these proofs are simplified thanks to some Why3 tactics that inject hints that can be used by the external provers to finish the proofs.
§ COMPARISON AND DISCUSSION
Set theory is primitive in EventB and {log}, whereas Why3, which permits the expression of other theories, provides a theory for it. Rodin uses provers where set theory is primitive but can also call external provers such as VeriT, Z3 and CVC4—where set theory is not primitive. However, a big effort has been made to process set theory in VeriT, which is often recognized as allowing significant improvements in proofs <cit.>.
Why3 relies entirely on external provers where set theory is not primitive. Conversely, {log} is a satisfiability solver that can only work with set theory—and linear integer algebra. It is the only one
of the three tools implementing advanced decision procedures for set theory. Likely, this proved to be crucial for {log} being the only tool that automatically discharged all the VC, although it required a simple hypothesis discovery procedure. The time {log} needs to discharge all the VC should be a concern because with more complex models the resolution time might be prohibitive. Ways of avoiding the algorithmic complexity of the decision procedures implemented in {log} are worth studying. Results on Computable Set Theory should be revisited (e.g. <cit.>). Why3 and Rodin interactive proofs are not numerous and remain quite simple.
In EventB, 51 proof obligations were generated for the whole development, around half of them coming from the first refinement.
37 were proven automatically by the standard provers (AtelierB provers), 18 automatically by SMT provers, mainly VeriT, either directly or after applying the Rodin lasso allowing for adding additional,
backup hypotheses having identifiers in common with
the goal. Only two proof obligations required real human intervention, mainly instantiations of the general theorems introduced in Ctx1 or explicit witnesses introduction in the case of feasibility proof obligations.
After working in the way described in Sect. <ref>, {log} discharges all the 38 VC generated by the VCG in around 7 minutes.
Why3 makes it possible to apply transformations on a proof goal instead of calling an automatic prover on it. Some of these transformations are very simple, e.g. splitting conjunctions, and can then be applied systematically and automatically. Most of the generated VC in our formalization were proven automatically thanks to the split transformation. Only two of them, about pieces of type invariants, required human interaction to insert some more complex transformations, e.g. a case analysis on indexes in mapD. In the end, 55 VC were proved by CVC4, except two of them discharged by Z3, in a total amount of time of 30 seconds.
Clearly, all three tools are expressive enough for the problem at hand. However, the EventB specification is probably the most readable. The three tools permit expressing axioms and invariants and automatically generate similar VC. {log} still needs work to express how two models are linked in terms of abstraction/refinement relations. Writing some key properties proved to be complex in EventB. Indeed, it was necessary to add a somewhat artificial refinement level for Rodin to be able to generate the desired VC linking the models. These properties can be easily defined by the user in {log}. However, in Why3 and EventB, proof obligations are automatically generated from the specifications; in particular the abstract and concrete models can be naturally linked and the tool automatically generates the corresponding VC. In that regard, Why3 and EventB are safer than {log}.
The possibility of counting on executable code without much effort enables many lightweight analyses that can be put into practice before attempting complex proofs. {log} is a tool where specification and implementation are described by only one piece of code (cf. forgrams). This tool is not the integration of an interpreter and a prover; the same set of rewrite rules is used to compute and prove. In EventB/Rodin there is only a specification—later it can be converted into an executable representation if tools such as ProB are used.
Why3 can execute WhyML programs natively thanks to its interpreter and the execute command.
Furthermore, once the program is proved to verify the specification, correct-by-construction OCaml and C programs can be automatically extracted. These programs will be orders of magnitude more efficient than the equivalent forgrams.
§ CONCLUSION
We formally verified the implementation of sparse sets using three formal languages and associated tools, focusing on the operations and correctness properties required by a constraint solver when domains of integer variables are implemented with sparse sets. We compared in particular the several statements of invariants and pre-post properties and their proofs.
As future work, two directions can be investigated. The first one is to complete the formal developments with other set operations. A second one is to implement and verify, in Why3 or EventB, a labeling procedure such as the ones used in constraint solvers; it would need to backtrack on the values of some domains, and thus make use of the theorems proven in this paper. Labeling is native in {log} when the CLP(FD) solver is active.
|
http://arxiv.org/abs/2307.05405v2 | 20230711161215 | Boosting Feedback Efficiency of Interactive Reinforcement Learning by Adaptive Learning from Scores | [
"Shukai Liu",
"Chenming Wu",
"Ying Li",
"Liangjun Zhang"
] | cs.RO | [
"cs.RO",
"cs.LG"
] |
Boosting Feedback Efficiency of Interactive Reinforcement Learning by Adaptive Learning from Scores
Shukai Liu, Chenming Wu, Ying Li, Liangjun Zhang
August 12, 2023
==============================================================================================
Interactive reinforcement learning has shown promise in learning complex robotic tasks. However, the process can be human-intensive due to the requirement of a large amount of interactive feedback. This paper presents a new method that uses scores provided by humans instead of pairwise preferences to improve the feedback efficiency of interactive reinforcement learning. Our key insight is that scores can yield significantly more data than pairwise preferences. Specifically, we require a teacher to interactively score the full trajectories of an agent to train a behavioral policy in a sparse reward environment. To avoid unstable scores given by humans negatively impacting the training process, we propose an adaptive learning scheme. This enables the learning paradigm to be insensitive to imperfect or unreliable scores. We extensively evaluate our method for robotic locomotion and manipulation tasks. The results show that the proposed method can efficiently learn near-optimal policies by adaptive learning from scores while requiring less feedback compared to pairwise preference learning methods. The source codes are publicly available at https://github.com/SSKKai/Interactive-Scoring-IRLhttps://github.com/SSKKai/Interactive-Scoring-IRL.
§ INTRODUCTION
Deep Reinforcement Learning (DRL) has made remarkable progress in addressing robotic control tasks, such as legged robot locomotion <cit.> and robotic manipulation <cit.>. However, formulating an accurate reward function for a specific task can pose a significant challenge. Sparse reward functions are frequently employed for their simplicity, but their absence of reward signals can result in longer exploration and training time <cit.> and lower success rates <cit.>. In general, tasks with high complexity and high-dimensional continuous state-action spaces benefit from denser reward signals. Nonetheless, creating a reward function for autonomous agents necessitates domain expertise, which can be challenging for non-experts. Furthermore, hand-crafted rewards can be vulnerable to local optima or unexpected ways of achieving high numerical returns <cit.>.
Inverse reinforcement learning (IRL) is a problem in which the reward function is inferred from expert demonstration trajectories <cit.>. This eliminates the need for tedious reward engineering and makes it possible to learn reward functions from expert demonstrations <cit.>. However, IRL typically requires optimal demonstrations, while those demonstrated by people are often sub-optimal.
To address this issue, a new method called Trajectory-ranked Reward Extrapolation (T-REX) was recently proposed. T-REX seeks to improve demonstrations by learning rewards from sequences of suboptimal ranked demonstrations. It attempts to convert those rankings into a set of pairwise preferences to train the policy network <cit.>. However, T-REX still requires a large number of demonstrations to train the policy, which can be challenging in tasks where providing demonstrations at such a scale is difficult, even though optimality is not required.
In a recent study, PEBBLE <cit.> proposed an off-policy interactive RL algorithm to train a reward network and a policy network simultaneously from queried pairwise preferences. Teachers provide real-time pairwise feedback to supervise the learning process in the most efficient direction. However, this approach presents three major issues when providing feedback by assigning one-hot preference labels to two trajectories:
1) Two trajectories can be compared only when they have been paired together, making it difficult to gain a broader understanding of the relationship between individual and overall sampled trajectories.
2) Forcing the teacher to prioritize a better trajectory can sometimes become a burden and harm human-in-the-loop training.
3) To increase the number of training examples, partial trajectories (i.e., partitioning a full trajectory into segments) are used rather than full trajectories, which can make evaluating pairwise preferences more ambiguous for the teacher.
Our aim is to enhance feedback efficiency in interactive reinforcement learning. To achieve this, we suggest utilizing scores instead of pairwise preferences as the signals for interacting with RL agents. Moreover, we put forward an adaptive learning scheme to make the training process smoother and more stable.
This includes adaptive network optimization to smoothly update network parameters from score data and adaptive trajectory sampling to mine useful trajectories for teachers to evaluate, making our methodology less sensitive to imperfect or unreliable teacher inputs. By interleaving feedback collection with reward adjustment, we continuously optimize both the reward and policy networks. Furthermore, we implement a scoring graphical user interface (GUI) that, when a new trajectory is scored, shows the most relevant previously scored trajectories to support users in providing consistent scores. Teachers are allowed to amend and correct previous scores during the training process.
An overview of our proposed framework is illustrated in Fig. <ref>. The main contributions of this paper are summarized as follows.
* We develop an interactive RL method that enables the agent to learn both policy and reward simultaneously using a score-based approach. The RL agent proactively requests scores from teachers for complete trajectories, which results in requiring less feedback compared to pairwise-based methods.
* We propose a method to tackle the problem of inaccuracies in scoring by introducing an adaptable learning approach that can withstand errors. Our proposed method also facilitates efficient learning of personalized and desired behaviors in situations where rewards are limited, based on the teachers' choices.
§ RELATED WORK
§.§ Inverse Reinforcement Learning
IRL allows the agent to better understand tasks and the environment, and learn an optimal policy using the reward via RL methods <cit.>. However, classic IRL frameworks <cit.> assume that demonstrations are optimal and easy to obtain. Maximum entropy IRL <cit.> and Bayesian IRL <cit.> are more robust to limited and stochastic suboptimality, but they cannot produce a policy better than the demonstration, thus their performance still highly relies on the quality of the demonstration. In <cit.>, a generative model is learned from a large number of suboptimal demonstrations to produce noise-free trajectories. In <cit.>, the reward function is formed as a linear combination of known features, and suboptimal demonstrations are utilized by learning rewards from trajectories labeled as success or failure. The method proposed in <cit.> is robust to a limited number of non-optimal demonstrations but still requires many expert demonstrations to discriminate the suboptimal ones. However, these methods can be challenging to apply in real-world scenarios where demonstrations are scarce, expensive, or suboptimal.
§.§ Learning from Evaluative Feedback
Evaluative feedback is a value given by a human teacher that rates the quality of the agent's behavior, which is easier for humans to provide compared to demonstrations. The TAMER framework <cit.> interprets evaluative feedback as a Q^*(s,a) function of RL, and makes the agent act greedily according to it. Meanwhile, the COACH framework <cit.> interprets human feedback as the advantage function A^π(s,a) of policy gradient update. In the policy shaping framework <cit.>, evaluative feedback is considered an optimality label of the action.
Providing evaluative feedback for entire trajectories gives each trajectory a global evaluation of its quality and, therefore, allows easier generalization. <cit.> annotate each trajectory with a numeric score reflecting the human teacher's global assessment of its performance and leverage the IRL framework to learn a reward by minimizing the distance between human-provided and predicted scores. Although these methods show robustness to scoring errors, they cannot deal with suboptimal and high-dimensional IRL tasks.
Evaluative feedback is useful for handling non-optimality. CEILing <cit.> labels all state-action pairs with binary feedback evaluative feedback, then directly learns a Gaussian distributed policy by reinforcing the good ones while ignoring the bad ones. Similarly, <cit.> proposes IRLDC, which uses binary evaluative feedback to label each state-action pair. These methods still require a few demonstrations or correct feedback.
§.§ Preference-based Reinforcement Learning
<cit.> introduces the preference-based DRL framework, which can learn from pairwise preferences over the agent's current behaviors that the human teacher actively provides during training. This approach is on-policy that needs the human teacher to constantly provide preference feedback during the training process. Thus, <cit.>. extends this framework and proposes T-REX, which learns the reward function from the pairwise preferences derived from a set of pre-collected ranked demonstrations, then applies the reward to RL for policy learning. T-REX allows the learned reward to extrapolate beyond the demonstrations and achieve better-than-demonstrator performance. D-REX <cit.> and SSRR <cit.> extend the learning-from-ranking framework by automatically generating ranked trajectories via noise injection. However, the need for demonstrations still exists.
Myers et al. <cit.> proposed a robot learning method that can learn multimodal rewards from multiple active ranking queries by multiple experts. PEBBLE <cit.> presented an interactive preference learning method that enables users to give preference feedback directly on the behavior of the RL agent, thus eliminating the need for demonstrations. PEBBLE introduces the off-policy learning framework to reuse data and follows the feedback form of pairwise preferences between partial trajectories for sample efficiency, as in <cit.>. To improve feedback efficiency, <cit.> investigates the query selection and policy initialization. <cit.> presents an exploration method to collect more diverse experiences. <cit.> introduces this learning scheme for socially aware robot navigation and reduces the amount of preference feedback from humans by collecting expert demonstrations. <cit.> further increases the feedback efficiency by inferring pseudo-labels on a large number of unlabeled samples with data augmentation. In our work, we instead annotate global scores to the agent's past experiences and demonstrate that this scoring feedback scheme can substantially reduce the amount of required feedback and better fit the off-policy framework.
§ METHODOLOGY
Our proposed framework can be broken down into two processes. First, the RL agent interacts with the environment to create new trajectories to be scored. Second, an off-policy DRL algorithm is applied to update the agent's policy π_ψ in order to maximize the expectation of the predicted reward generated by r̂_θ. Additionally, the teacher reviews the sampled trajectories through video replay and scores them at a frequency of f during the RL training process. The scored trajectories (τ, s) are stored in the scoring buffer 𝒟 to update the reward network. The agent then deduces the teacher's preference from the score difference and updates the reward network accordingly. The updated reward network guides the agent to generate better trajectories. Scoring these trajectories can lead to a more comprehensive reward network, and the agent learns the policy and reward simultaneously.
§.§ Adaptive Learning from Scores
The reward function is trained as a neural network, which we refer to as r̂_θ. Users can choose either states or state-action pairs as input. Our approach utilizes two replay buffers: one for the RL part to store the state-action transitions, and the other for reward learning to store the trajectories and their scores. For policy learning, the only difference between our method and vanilla RL algorithms is that the rewards are produced by the reward network. As a result, our approach can be applied to most off-policy RL algorithms while maintaining their core functionality.
Note that the reward function is dynamically updated during training, which can cause inconsistency in off-policy RL since previously generated rewards may not match the latest reward functions. To address this issue, we adopt the approach proposed in PEBBLE <cit.> and relabel the replay buffer each time we update the reward. Storing all scored trajectories in the scoring buffer allows for off-policy learning in reward learning. This allows newly scored trajectories to be compared with previously scored trajectories, which significantly improves the utilization rate of human feedback. Moreover, the scoring buffer allows the teacher to access previous scores during training and correct them if they change their minds, making reward learning more robust.
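A minimal sketch of this relabeling step is given below; the buffer layout (parallel tensors of states, actions and rewards) and the use of state-action inputs are assumptions made for illustration only:

import torch

@torch.no_grad()
def relabel_replay_buffer(buffer, reward_net, batch_size=1024):
    # Recompute stored rewards with the latest reward network so that the
    # off-policy data stays consistent with the current reward function.
    num_items = buffer.states.shape[0]
    for start in range(0, num_items, batch_size):
        end = min(start + batch_size, num_items)
        inputs = torch.cat([buffer.states[start:end],
                            buffer.actions[start:end]], dim=-1)
        buffer.rewards[start:end] = reward_net(inputs).squeeze(-1)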
Adaptive Network Optimizing. Given a set of trajectories τ_1, τ_2,…, τ_m and their corresponding scores s_1, s_2,…,s_m, our goal is to parameterize a reward function r̂_θ to infer the underlying reward from scores and output the reward value that matches user's evaluation standard, such that ∑_τ_ir̂_θ < ∑_τ_jr̂_θ when s_i < s_j. This problem can be regarded as a learning-to-rank problem <cit.>, to optimize the following equation:
min_θ U( sorted(r̂_θ(τ_1), ..., r̂_θ(τ_|𝒟|)), y)
where y is the ground-truth index list and U is a binary function to evaluate if the ranked position is equivalent to the ground-truth position y_i in y. In our work, we decide to solve this problem using the idea of pairwise learning because our major goal is to train a reward function, instead of a ranker that generates a descent permutation.
The user's preference over any pair of two trajectories is described by a distribution μ, which can be derived by comparing their scores, e.g., μ=1 if s_i < s_j. By following the Bradley-Terry and Luce-Shephard models of preferences <cit.>, a preference predictor using the reward function r̂_θ can be modeled as a softmax-normalized distribution as follows.
P(τ_i ≺τ_j) = e^∑_τ_jr̂_θ / (e^∑_τ_ir̂_θ + e^∑_τ_jr̂_θ)
where τ_i ≺τ_j denotes that the trajectory τ_j is preferred to the trajectory τ_i. This equation demonstrates that the probability of preferring one trajectory to another is exponentially related to the predicted return of each trajectory. Thus, the parameterized reward function r̂_θ can be learned by minimizing the cross-entropy loss between the predicted preference and the user's true preference as follows.
ℒ = -∑_(τ_i, τ_j, μ) ∈𝒟[ μ log P(τ_i ≺τ_j) + (1-μ) log P(τ_j ≺τ_i) ]
The preference distribution μ is usually a one-hot encoded label. Although this can learn reward effectively on correct labels, it may suffer from poor performance when there are wrong labels in the database 𝒟. Unfortunately, it is nearly impossible for a human user to score a large number of trajectories perfectly. To strengthen the robustness against scoring error, we use the label smoothing method <cit.> to convert the hard label μ to soft label μ̃ using μ̃ = (1-α)μ + α/K, where α∈ [0,1] is a constant factor that indicates the smoothing strength for one-hot label and K denotes the number of labels, in our case, K=2. However, in our setting, using a constant smoothing strength for all pairwise labels may not be ideal because it ignores the relative relationship implicit in the score differences. It is intuitive that for a trajectory pair, the larger the scores differ, the more confident that one trajectory is better than the other. Thus, to better exploit the information from human scores, we make the smoothing strength adaptive to the score differences by α = 1/(|s_i-s_j|+λ)^2 where λ>1 is a hyperparameter, and we set it as 2 in all our experiments. The adaptive α makes the label μ̃ closer to 0 or 1 when pairwise trajectories significantly differ in the score and approach 0.5 when the scores are similar. Hence, the soft label μ̃ is computed as
μ̃ = (1-1/(|s_i-s_j|+λ)^2)μ + 1/(K(|s_i-s_j|+λ)^2)
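Putting the preference predictor, the cross-entropy loss and the adaptive soft label together, a PyTorch-style sketch of the resulting reward-learning objective could look as follows (tensor shapes and the treatment of tied scores are assumptions for illustration):

import torch
import torch.nn.functional as F

def adaptive_preference_loss(return_i, return_j, score_i, score_j, lam=2.0):
    # Hard label: mu = 1 when trajectory j has the higher score
    # (tied scores could instead be mapped to mu = 0.5).
    mu = (score_j > score_i).float()
    # Adaptive smoothing strength alpha = 1 / (|s_i - s_j| + lambda)^2, with K = 2.
    alpha = 1.0 / (torch.abs(score_i - score_j) + lam) ** 2
    mu_soft = (1.0 - alpha) * mu + alpha / 2.0
    # Bradley-Terry preference probabilities from the predicted episodic
    # returns, computed with a numerically stable log-softmax.
    log_p = F.log_softmax(torch.stack([return_i, return_j], dim=-1), dim=-1)
    loss = -(mu_soft * log_p[..., 1] + (1.0 - mu_soft) * log_p[..., 0])
    return loss.mean()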
Adaptive Trajectory Sampling. The process of learning through rewards includes two sampling procedures. The first one involves gathering newly created trajectories from the RL agent, which are then evaluated and given scores by teachers. The second procedure involves selecting a batch of trajectory pairs from the scoring buffer, which stores all scored trajectories.
To improve the training of our RL agent, we ask the user to rate the newly generated trajectories. These ratings are stored in the scoring buffer called 𝒟 as scored trajectories (τ, s). However, asking for scores for all trajectories can be overwhelming. Therefore, we aim to choose the most informative scoring queries. This way, even if only a few newly generated trajectories are scored each time, they are enough to train the appropriate reward. We employ the k-means clustering algorithm to automatically select trajectories with high variance in performance, which are then approximated by the predicted rewards. To select k trajectories from a set of newly generated ones for evaluation, we use the reward network to compute the episodic return of each trajectory. Next, we run k-means clustering on these returns and choose the k trajectories whose returns are closest to each k centroid.
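This query-selection step can be sketched with scikit-learn's KMeans on the predicted episodic returns, as below (merging duplicate picks is an implementation choice of this illustration):

import numpy as np
from sklearn.cluster import KMeans

def select_scoring_queries(predicted_returns, k, seed=0):
    # Cluster the predicted episodic returns and pick, for each centroid,
    # the trajectory whose return lies closest to it.
    returns = np.asarray(predicted_returns, dtype=float).reshape(-1, 1)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(returns)
    chosen = {int(np.argmin(np.abs(returns[:, 0] - c)))
              for c in kmeans.cluster_centers_[:, 0]}
    return sorted(chosen)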
For n scored trajectories (τ, s) in the scoring buffer 𝒟, we notice that not all the trajectory pairs of them can lead to effective reward update. To address this, we explore methods for sampling from the scoring buffer 𝒟. An off-the-shelf sampling method is the entropy-based sampling adopted in PEBBLE <cit.>, which randomly samples a large batch of trajectories and seeks to maximize the entropy. However, we notice that when a trajectory with a higher score is sampled into 𝒟, it should be compared more broadly with other trajectories. This allows the reward equation to learn what behaviors lead to higher scores. Unfortunately, entropy-based sampling cannot provide this capability. Inspired by the prioritized experience replay (PER) methods <cit.>, we propose an alternative sampling methodology: either randomly selecting one trajectory in a pair or choosing based on a probability that increases with its score. The probability of each scored trajectory is computed according to its score as
P(i) = s_i^β/∑_n s_n^β, where β is a hyperparameter that determines how much prioritization is assigned to a highly scored trajectory, and the sum runs over all n scored trajectories in the scoring buffer. The comparison between our sampling method and entropy-based sampling is shown in the experimental section.
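One reading of this sampling scheme is sketched below: for every pair, one member is drawn uniformly and the other with probability proportional to its score raised to β (the small epsilon guarding against an all-zero score vector is an implementation assumption):

import numpy as np

def sample_score_prioritized_pairs(scores, num_pairs, beta=3.0, seed=0):
    # Assumes at least two scored trajectories are available.
    rng = np.random.default_rng(seed)
    s = np.asarray(scores, dtype=float) + 1e-8
    p = s ** beta
    p /= p.sum()
    pairs = []
    while len(pairs) < num_pairs:
        i = rng.integers(len(s))           # uniformly chosen member
        j = rng.choice(len(s), p=p)        # score-prioritized member
        if i != j:
            pairs.append((int(i), int(j)))
    return pairs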
§ EXPERIMENTS
§.§ Experiment Setups
We compare our approach to previous methods to verify if our approach can achieve similar performance with less feedback. In Sec. <ref>, we conduct ablation studies to investigate the influence of adaptive reward updates on the robustness of scoring errors, and to assess how various sampling methods affect performance. In Sec. <ref>, we analyze the learned reward and the agent's behavioral pattern to determine if our approach can accurately extrapolate the user's preferences and underlying intent. Finally, we conduct real human experiments in Sec. <ref>.
We evaluate our proposed method on several continuous robotic tasks in simulation, including locomotion tasks Ant and HalfCheetah in Mujoco simulator <cit.> with OpenAI Gym <cit.>, and robotic manipulation tasks in Metaworld environment <cit.>, namely PushButton and SweepInto. For locomotion tasks, we use the episode return as the evaluation metric. For manipulation tasks, we use the task success rate of the last 100 episodes as the evaluation metric. We train 2,000 episodes for Mujoco locomotion tasks and 3,000 episodes for Metaworld robotic manipulation tasks each run. The episode is 300 steps long, with the exception of the SweepInto task, which is 250 steps long to reduce the proportion of task-goal-unrelated steps in the episode.
We use the state-of-the-art off-policy DRL algorithm SAC to learn the behavioral policy <cit.>. However, the agent can only receive the reward generated by our learned reward function. To model our reward function, we use a single deep neural network consisting of 3 fully connected layers of 256 units with leaky ReLUs. We train the reward network from scores using the Adam optimizer with a learning rate of 10^-3 and a batch size of 128.
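Reading the reward model described above as three 256-unit hidden layers followed by a scalar output head (the exact layer arrangement is an assumption), a PyTorch sketch would be:

import torch.nn as nn

def make_reward_net(input_dim):
    # Input: a state or state-action vector; output: a scalar reward.
    return nn.Sequential(
        nn.Linear(input_dim, 256), nn.LeakyReLU(),
        nn.Linear(256, 256), nn.LeakyReLU(),
        nn.Linear(256, 256), nn.LeakyReLU(),
        nn.Linear(256, 1),
    )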
We use a standard scoring range of 0 to 10 across all experiments. To evaluate our approach quantitatively, we make the agent learn tasks only by intermittently getting its past experience scored by a scripted teacher. By linearly mapping the episode returns to the scoring range, the scripted teacher provides scores. We use a two-stage scoring frequency: at the beginning of training, we use a faster scoring frequency, scoring 5 trajectories every 10 episodes. When the agent's performance reaches approximately a quarter of the maximum episode returns, we switch to a slower scoring frequency, scoring 10 trajectories every 100 episodes. For all experiments, we set the β for adaptive sampling to 3.
§.§ Results
To examine the effectiveness of our approach, we compare it to the original SAC algorithm training with the same ground true reward as we used for the scripted teacher. We use the same hyperparameters for both trainings. We also compare to the state-of-art preference learning algorithm PEBBLE <cit.>. We use the exact same values of hyperparameters for PEBBLE as the Equal SimTeacher setting reported in <cit.> and the corresponding open-source code repository.
Fig. <ref> shows the learning curves of our approach and PEBBLE with different numbers of teacher preference feedback in comparison to SAC with true reward. Note that our approach employs a different type of feedback than PEBBLE. In this experiment setting, PEBBLE assigns a preference label to a pair of 50-step partial trajectories for single teacher feedback, whereas our approach assigns a global score to an entire episode with 300 steps. As a result, we give PEBBLE an advantage by providing more than three times as much feedback as ours. We can see that our approach achieves the same or higher level of performance than PEBBLE, which is given more feedback, in all tasks. In comparison to the SAC with ground true reward, our approach requires more training time to converge because it must learn the reward from scratch at the beginning of training, but it can match the performance after convergence in all tasks using only a small number of trajectory scores from the teacher. The results show that our approach can learn robot behavioral policies effectively in sparse reward environments with teachers' scores.
§.§ Ablation Study
§.§.§ Robustness to Scoring Errors
The preceding experiments assume access to perfectly correct scores generated by the ground-truth reward. In practice, however, it is impossible for a human teacher to score hundreds of trajectories accurately: users may give vague and approximate scores to trajectories with similar performance. Thus, we examine the robustness of our approach to scoring errors and low scoring precision by comparing performance under noisy scores when using the hard reward update, the soft reward update via label smoothing, and the adaptive reward update.
We simulate a real human teacher by adding Gaussian noise to the scores given by the scripted teacher, i.e., s' ∼ 𝒩(s, σ_noise^2). We round these noise-infused scores to a minimal step of 0.5 (e.g., 3.0 or 7.5), which permits the teacher to give equal scores to trajectories that perform similarly. We test our method with σ_noise^2 = 0.4 and σ_noise^2 = 0.8, and use Kendall's τ_B coefficient to measure the rank correlation between the noisy scores and the perfectly correct scores. The coefficient is calculated as τ_B=(P-Q)/√((P + Q + T)(P + Q + U)), where P is the number of concordant pairs, Q is the number of discordant pairs, T is the number of ties only in the first group, and U is the number of ties only in the second; τ_B is close to 1 when the two rankings are similar. We found that in our experiment, σ_noise^2 = 0.4 corresponds to τ_B ≈ 0.8, while the higher noise level σ_noise^2 = 0.8 leads to a lower correlation of τ_B ≈ 0.65.
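The correspondence between a noise level and τ_B can be checked numerically; the sketch below uses SciPy's tau-b implementation on synthetic scores. The uniform distribution assumed for the underlying true scores is an assumption of the sketch, so the resulting values only roughly indicate the mapping.

```python
import numpy as np
from scipy.stats import kendalltau  # computes tau-b (accounts for ties)

rng = np.random.default_rng(0)
true_scores = rng.uniform(0.0, 10.0, size=500)   # assumed distribution of scores

for sigma2 in (0.4, 0.8):
    noisy = true_scores + rng.normal(0.0, np.sqrt(sigma2), size=true_scores.shape)
    noisy = np.clip(np.round(noisy / 0.5) * 0.5, 0.0, 10.0)  # 0.5 minimal step
    tau_b, _ = kendalltau(true_scores, noisy)
    print(f"sigma_noise^2 = {sigma2}: tau_B ~ {tau_b:.2f}")
```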
The results of training with imperfect scores on the HalfCheetah and ButtonPress tasks are shown in Fig. <ref>, where the smoothing strength is α = 0.05 for the original label smoothing and α' = 2 for the adaptive reward update. Despite a slight decrease in performance compared to perfect scoring, the adaptive update method performs better than the other two methods on both tasks when the scoring noise is σ_noise^2 = 0.4. With σ_noise^2 = 0.8, the adaptive reward update method surpasses the others on the ButtonPress task, although on the relatively simple HalfCheetah task it does not gain an extra advantage over the hard reward update. Overall, our proposed adaptive reward update method delivers the strongest performance and shows strong robustness to large scoring errors.
We also investigate the effects of different sampling methods to select scored trajectory pairs for reward updates. We used σ_noise^2 = 0.4 to simulate the real scoring scenario. Fig. <ref> shows the learning curves of our approach on the HalfCheetah and ButtonPress tasks under three different sampling schemes: uniform sampling, entropy-based sampling, and priority-based sampling. We can see that the priority-based sampling method significantly outperforms other sampling methods. Although entropy-based sampling performs well with a perfect feedback teacher as suggested in <cit.>, it cannot handle the noisy-scoring scenario well.
§.§ Reward Extrapolation
§.§.§ Reward Analysis
To assess the quality of the learned reward function, we compare it to the ground-truth reward. We run SAC with the true reward to collect trajectories with a variety of performance qualities, and then compare the episodic ground-truth returns to the returns generated by the learned reward function. Fig. <ref> shows the reward functions learned by our approach from 250 scores and 500 scores on HalfCheetah and ButtonPress, respectively. We can see that the learned reward function is strongly correlated with the ground-truth reward. It should be noted that the learned and true rewards have very different scales, but this difference has no effect on policy-learning performance. We further investigate the rewards by examining the reward functions within an episode at different timesteps. We generate a set of suboptimal trajectories with high and low reward ranges within one episode; the results are shown in Fig. <ref>. We manually normalize the learned reward outputs to the same scale as the true rewards by multiplying by a coefficient, and we can see that the learned rewards are well aligned with the ground-truth rewards.
§.§.§ Customized Behavior
One goal of our approach is to enable users to train customized policies through scoring. We demonstrate this on the RLBench <cit.> simulation task PushButton, which requires a Franka Emika Panda robot arm to push a button on a table. We model two scripted teachers that score trajectories with different preferences: (1) teacher 1: the robot first moves above the button, then pushes it with the gripper tip while remaining vertical; (2) teacher 2: the robot moves its gripper parallel to the table and presses the button with its side. For the trained agents' policies, please refer to the supplementary video. The result demonstrates that our method can infer users' underlying intent and complete tasks in accordance with their preferences.
§.§ Real Human Experiment
We conduct experiments with real human users to test our approach. We create a graphical user interface (GUI) and test it with two users on the MetaWorld ButtonPress environment. The GUI displays four previously scored trajectories and their scores as references to help users score new trajectories consistently: we select the two scored trajectories whose predicted returns are closest to the current trajectory, and the two references that are most similar to the current trajectory in Cartesian space, measured by dynamic time warping (DTW) <cit.>. Users are allowed to revise the scores of the reference trajectories as needed, and they can skip scoring if they find it difficult. We follow the two-stage scoring frequency outlined in Sec. <ref>, starting with a faster scoring frequency and allowing users to switch to a lower-frequency mode based on their performance. Fig. <ref> shows the learning curves of the four users compared to learning by SAC with the true reward and learning by our approach with a scripted teacher. The results show that a good behavioral policy can be trained with only about three hundred scores. For more information on the scoring interface, please refer to the supplementary video.
§ CONCLUSION
We propose an algorithm for interactive RL that uses scores from a teacher to learn both a policy and a reward function. This eliminates the need for human demonstrations and makes full use of user feedback, reducing the amount of feedback required. Our experiments show that even with a small number of human scores, our method can train robotic locomotion and manipulation tasks to near-optimal levels. With this method, global evaluations of behavior can be mapped to rewards over individual states or state-action pairs, allowing optimal policies to be learned in environments where rewards cannot be observed directly.
|
http://arxiv.org/abs/2307.04690v1 | 20230710164423 | Heisenberg-limited Hamiltonian learning for interacting bosons | [
"Haoya Li",
"Yu Tong",
"Hongkang Ni",
"Tuvia Gefen",
"Lexing Ying"
] | quant-ph | [
"quant-ph",
"cs.IT",
"cs.NA",
"math.IT",
"math.NA"
] |
Spoofing-Resilient LiDAR-GPS Factor Graph Localization with Chimera Authentication
The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of
Defense or of the United States Air Force. Statement from DoD: The appearance of external hyperlinks does not constitute endorsement by the United States
Department of Defense (DoD) of the linked websites, or the information, products, or services contained therein. The DoD does not exercise any editorial,
security, or other control over the information you may find at these locations.
Adam Dai
Electrical Engineering
Stanford University
Stanford, USA
[email protected]
Tara Mina
Electrical Engineering
Stanford University
Stanford, USA
[email protected]
Ashwin Kanhere
Aeronautics and Astronautics
Stanford University
Stanford, USA
[email protected]
Grace Gao
Aeronautics and Astronautics
Stanford University
Stanford, USA
[email protected]
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We develop a protocol for learning a class of interacting bosonic Hamiltonians from dynamics with Heisenberg-limited scaling. For Hamiltonians with an underlying bounded-degree graph structure, we can learn all parameters with root mean squared error ϵ using 𝒪(1/ϵ) total evolution time, which is independent of the system size, in a way that is robust against state-preparation and measurement error. In the protocol, we only use bosonic coherent states, beam splitters, phase shifters, and homodyne measurements, which are easy to implement on many experimental platforms. A key technique we develop is to apply random unitaries to enforce symmetry in the effective Hamiltonian, which may be of independent interest.
§ INTRODUCTION
Many tasks in quantum metrology and quantum sensing can be reduced to the task of learning the Hamiltonian H of a quantum system, whose evolution is described by the operator e^-iHt <cit.>. We call this task Hamiltonian learning, a name that is commonly used in the literature <cit.>. Besides quantum metrology and quantum sensing, Hamiltonian learning is also useful for quantum device engineering <cit.>, and quantum many-body physics <cit.>.
Previous works on Hamiltonian learning for many-body quantum systems are generally subject to the standard quantum limit (SQL), where to estimate the parameters in the Hamiltonian to precision ϵ, 𝒪(ϵ^-2) samples are required <cit.>. On the other hand, for simple systems such as those consisting of a single spin, the Heisenberg limit can be achieved, where to obtain ϵ precision, only 𝒪(ϵ^-1) total amount of resources is needed. Achieving the Heisenberg limit requires using quantum-enhanced protocols that either use 𝒪(ϵ^-1) entangled probes <cit.> or coherent evolution for 𝒪(ϵ^-1) time <cit.>.
The resources consumed are the number of probes and the length of time evolution, respectively.
The natural question is, can we achieve the Heisenberg limit for many-body quantum systems? When applying the existing quantum-enhanced protocols to the many-body setting, one quickly encounters difficulties. When many entangled probes are used, one needs many copies of the quantum system with the same parameters that can evolve simultaneously without interacting with each other. It is often unclear how one can create these copies, except for certain scenarios, such as when many probes undergo evolution under the same field strength. For long coherent time-evolution, the many-body nature of the quantum systems becomes problematic as subsystems undergo open-system dynamics, and phenomena such as thermalization prevent local observables from having enough sensitivity to achieve the Heisenberg limit. One can consider performing entangled measurements across all parts of the many-body system. Still, the difficulty in simulating the system makes finding a good measurement strategy extremely difficult.
Recently, a method was proposed in <cit.> to perform Hamiltonian learning for many-body spin systems with Heisenberg-limited scaling. The main technique is to apply quantum control in the form of random Pauli operators during time evolution so that the system evolves with an effective Hamiltonian that is easy to learn and, at the same time, preserves the parameters that one wants to learn. Another recent work proved that some form of quantum control is necessary for achieving the Heisenberg limit in this task <cit.>.
The above works are all focused on multi-qubit systems, and Heisenberg-limited Hamiltonian learning for bosonic systems is relatively less studied.
Bosonic systems, such as superconducting circuits <cit.>, integrated photonic circuits <cit.> and optomechanical platforms <cit.> are widely used for quantum sensing, communication, and computing <cit.>. These quantum applications require efficient calibration <cit.>, and it is thus highly desirable to develop optimal algorithms for characterizing bosonic Hamiltonians. For example, quantum computing and sensing with transmons require learning the energy levels and interactions between the transmons and microwave resonators.
For bosonic systems, there is a different set of “easy” quantum states, unitaries, and measurements than for spins. This work assumes that one can prepare coherent states, apply phase shifters and beam splitters, and perform the homodyne measurement. We note that although we may use terms from quantum optics, such as “phase shifters”, we do not constrain our discussion to the optical setting. Additionally, in our protocol, we do not require any squeezing, which can be experimentally difficult to implement <cit.>. Using these resources, we present a protocol to learn a class of interacting bosonic Hamiltonians with Heisenberg-limited scaling. These Hamiltonians involve terms that are quadratic or quartic in the creation and annihilation operators, and are particle-number preserving. The specific form of the Hamiltonians is given in (<ref>). Our protocol can also tolerate a constant amount of noise in the state preparation and measurement (SPAM) procedures and has a small classical post-processing cost.
In our method, we apply random unitaries during time evolution to reshape the Hamiltonian into an effective Hamiltonian that is easier to learn. This follows the same high-level idea as <cit.> but is specifically tailored to the bosonic setting. Moreover, we can interpret the procedure as enforcing a target symmetry in the effective Hamiltonian, thus putting constraints on the dynamics. We believe this technique may be useful for other problems in quantum simulation as well <cit.>. In analyzing the deviation from the effective dynamics, the unboundedness of the bosonic Hamiltonian terms poses a challenge, as the analysis in <cit.> requires Hamiltonian terms to be bounded. We use more involved techniques to overcome this difficulty in Section <ref>.
§ RESULTS
In this work, we focus on quantum systems on N bosonic modes forming a d-dimensional lattice, with the Hamiltonian of the form
H = ∑_⟨i,j⟩ h_ij b_i^†b_j + ∑_i ω_i b_i^†b_i + ∑_i ξ_i/2 n_i(n_i-1),
where b_i (b_i^†) are bosonic annihilation (creation) operators, and n_i=b_i^†b_i are the number operators. ⟨i,j⟩ means that the summation is over sites i, j that are adjacent to each other. h_ij=h_ji^*, and each ξ_i and ω_i is a real number. We also assume that |h_ij|, |ω_i|, |ξ_i| ≤ 1. This class of Hamiltonians is relevant for superconducting quantum processors <cit.>, arrays of coupled cavities <cit.>, and phonon dynamics in ion crystals <cit.>. We will present a protocol that generates estimates ĥ_ij, ω̂_i, and ξ̂_i such that
𝔼[|ĥ_ij-h_ij|^2], 𝔼[|ω̂_i-ω_i|^2], 𝔼[|ξ̂_i-ξ_i|^2]≤ϵ^2,
for all i and j.
The protocol has the following properties:
* The total evolution time is 𝒪(ϵ^-1);
* The number of experiments is 𝒪(polylog(ϵ^-1));
* A constant amount of SPAM error can be tolerated.
More precisely, our protocol consists of N_exp=𝒪(polylog(ϵ^-1)) experiments, which we number by 1,2,⋯,N_exp.
In the jth experiment, we will initialize each bosonic mode in the system in a coherent state, let the system evolve for time t_j>0, and perform homodyne measurement on the bosonic modes. During time evolution, we will apply random beam splitters (on two modes) or phase shifters (on one mode). The total evolution time is defined to be ∑_j=1^N_exp t_j, which is the amount of time required to run all the experiments. We assume that after we prepare the initial state and before we perform the measurement, the system goes through error channels ℰ_1 and ℰ_2, which model the SPAM error. If ‖ℰ_1-ℐ‖_♢+‖ℰ_2-ℐ‖_♢ is upper-bounded by a small constant, then our protocol will still be able to reach arbitrary precision ϵ. Here ‖·‖_♢ is the diamond norm <cit.>, and ℐ is the identity channel. The precision is measured by the mean squared error (MSE). We are using the big-𝒪 notation to hide the constants for simplicity, and we note that these constants never depend on the system size. Our protocol generates 𝒪(NN_exp)=𝒪(N polylog(ϵ^-1)) classical data, and it takes a similar amount of time to process these data to compute the estimates.
Below we will describe the protocol in detail. We will start with a protocol to learn a single anharmonic oscillator, which forms the basic building block for more complex situations.
§.§ Learning an anharmonic oscillator
We first consider the simple case in which
H_AHO = ω n + ξ/2n(n-1),
where n=b^†b. We want to estimate the coefficients ω and ξ with root mean squared error (RMSE) at most ϵ.
This is a quantum sensing problem with two parameters to be estimated. In quantum sensing, one usually calculates the quantum Cramér-Rao bound (QCRB) that provides a lower bound on the MSE of unbiased estimators. Because the two parameters correspond to Hamiltonian terms that commute with each other, the QCRB scales inverse quadratically with time, allowing us to achieve the Heisenberg-limited scaling. This bound, however, is valid only for local estimation where the prior distribution of the estimators is already concentrated around the exact value. Here we provide an estimation protocol that achieves this scaling without any prior knowledge of the parameters.
Our protocol builds upon a robust frequency estimation algorithm similar to the robust phase estimation algorithm proposed in <cit.> as well as the alternative version in <cit.>. In the robust phase estimation algorithm, we assume that, through performing certain experiments that we will specify when introducing our protocol, we have access to a random variable Z_δ(t) from measurement results, such that |Z_δ(t)-e^-iω t| ≤ 1 with probability at least 1-δ, and generating such a random variable requires evolution time 𝒪(t log(δ^-1)). With multiple samples of this variable for different values of t and δ, we can generate an estimate of ω with RMSE at most ϵ using 𝒪(ϵ^-1) total evolution time. The algorithm proceeds by iteratively obtaining estimates with increasing accuracy through longer time evolution until the target precision is achieved. A detailed description of the algorithm and proof of its correctness can be found in Section <ref>.
We initialize the system in a coherent state |α⟩=e^{-|α|^2/2}∑_k(α^k/√(k!))|k⟩, and let the system evolve under the Hamiltonian H_AHO. In the end we perform homodyne measurements with quadrature operators X=(b+b^†)/√2 and P=i(b^†-b)/√2 in separate experiments. With these measurement results we will be able to estimate ⟨b⟩_α,t=⟨α|e^{iH_AHO t} b e^{-iH_AHO t}|α⟩, which can be exactly computed to be
⟨b⟩_α,t = α e^{-|α|^2} e^{-iω t} e^{|α|^2 e^{-iξ t}}.
We perform this calculation in Section <ref>.
Using (<ref>), we can extract the values of ω and ξ from ⟨b⟩_α,t. For ω, note that ⟨b⟩_α,t/α = e^{-iω t} + 𝒪(|α|^2), and therefore we can choose |α| to be below a small constant so that an estimate for ⟨b⟩_α,t/α will be close to e^{-iω t} within some small constant distance, which enables us to apply the robust frequency estimation algorithm to estimate ω with RMSE at most ϵ using total evolution time 𝒪(ϵ^-1).
For ξ, we can extract its value by constructing a periodically oscillating signal through
e^{-iξ t} = 1/(|α_1|^2-|α_2|^2) · log( α_2⟨b⟩_{α_1,t} / (α_1⟨b⟩_{α_2,t}) ) + 1.
This enables us to estimate ξ using the robust frequency estimation algorithm. Note that, once again, ⟨b⟩_{α_1,t} and ⟨b⟩_{α_2,t} only need to be estimated to constant precision, rather than ϵ precision, which would result in an 𝒪(ϵ^-2) scaling that would destroy the Heisenberg-limited scaling.
In the above procedure, we need to estimate the expectation of the X and P operators, which are unbounded operators that can infinitely amplify any error in the quantum state. Fortunately, we found that we can replace them with the operators X 1_{|X|≤M} and P 1_{|P|≤M}, where 1_{|X|≤M}=∫_{|x|≤M}|x⟩⟨x| dx and 1_{|P|≤M} is similarly defined. This means truncating the eigenvalues of these operators at a threshold M=𝒪(1). In practice, we can simply discard any X and P samples that are above the threshold M to implement the measurement associated with these truncated operators. This fact, together with the error tolerance in the robust frequency estimation algorithm, enables us to tolerate a constant amount of error from SPAM and time evolution.
The combined error from all sources should be below a small constant, which is sufficient for achieving arbitrarily high precision.
§.§ Learning two coupled anharmonic oscillators
Next, we consider a system consisting of two coupled anharmonic oscillators, where the Hamiltonian is of the following form:
H = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + h_12b_1^†b_2 + h_21b_2^†b_1 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1)
The goal is to learn all the coefficients ω_1, ω_2, ξ_1, ξ_2, and h_12 (h_21=h^*_12).
We first focus on learning the single-mode coefficients ω_1, ω_2, ξ_1, and ξ_2. To do this, we will insert random unitaries during time evolution to decouple the bosonic modes from each other. In other words, the time evolution operator undergoes the following transformation
e^-iHt↦∏_j=1^r U_j^†e^-iHτ U_j = ∏_j=1^r e^-iU_j^†HU_jτ,
where the U_j, j=1,2,⋯,r, are the random beam splitters or phase shifters that we insert, r=t/τ, and the product goes from right to left. Each U_j is independently drawn from a distribution that we denote by 𝒟. In the limit of τ→ 0, the dynamics can be described by an effective Hamiltonian
H_effective = 𝔼_U∼𝒟 U^†HU.
This can be seen by considering the Taylor expansion of the time-evolved state in a small time step:
𝔼_U∼𝒟[e^{-iU^†HUτ}ρ e^{iU^†HUτ}] = ρ - iτ𝔼_U∼𝒟[[U^†HU,ρ]] + 𝒪(τ^2)
= e^{-i𝔼_U∼𝒟[U^†HU]τ}ρ e^{i𝔼_U∼𝒟[U^†HU]τ} + 𝒪(τ^2).
Note that the above is not a rigorous proof, because the 𝒪(τ^2) residue is an unbounded operator. We provide a rigorous bound of how far the actual dynamics deviate from the limiting effective dynamics with finite τ>0 in Section <ref>.
The above procedure introduces additional randomness to our protocol, but it does not introduce any sample complexity overhead, because we only need the final quantum states to be close in terms of the trace distance.
To learn all the single mode coefficients, we let the unitary U drawn from the distribution 𝒟 be
U = e^-iθ n_1, θ∼𝒰([0,2π]).
Here 𝒰([0,2π]) is the uniform distribution over [0,2π].
We can then compute the effective Hamiltonian
H_effective = 1/2π∫_0^2π e^iθ n_1He^-iθ n_1θ = ω_1 n_1 + ω_2 n_2 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1).
In other words, the coupling term h_12b_1^†b_2 + h_21b_2^†b_1 is cancelled in the process, due to the equality e^iθ n_1b_1 e^-iθ n_1=e^-iθb_1.
We can interpret this procedure as enforcing a particle number conservation on the first bosonic mode. In the effective Hamiltonian, the two bosonic modes are no longer coupled together, and therefore we can apply the learning algorithm described in Section <ref> to learn the parameters of the two modes separately. For a more detailed description of the protocol see Section <ref>.
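As a numerical illustration of the effective Hamiltonian above, one can average the conjugated Hamiltonian over θ on a truncated two-mode Fock space and check that the coupling term disappears. The cutoff and coefficients below are arbitrary, and a uniform grid over [0,2π] stands in for the continuous average.

```python
import numpy as np

d = 6                                         # Fock cutoff per mode
b = np.diag(np.sqrt(np.arange(1, d)), k=1)    # truncated annihilation operator
I = np.eye(d)
b1, b2 = np.kron(b, I), np.kron(I, b)
n1, n2 = b1.conj().T @ b1, b2.conj().T @ b2
Id = np.eye(d * d)

w1, w2, xi1, xi2, h12 = 0.9, 1.1, 0.3, 0.2, 0.4 + 0.1j
H = (w1 * n1 + w2 * n2 + h12 * b1.conj().T @ b2 + np.conj(h12) * b2.conj().T @ b1
     + 0.5 * xi1 * n1 @ (n1 - Id) + 0.5 * xi2 * n2 @ (n2 - Id))

# average e^{i theta n1} H e^{-i theta n1} over theta on a uniform grid
K = 200
H_eff = np.zeros_like(H)
for th in 2 * np.pi * np.arange(K) / K:
    U = np.diag(np.exp(-1j * th * np.diag(n1)))   # e^{-i theta n1} is diagonal
    H_eff += U.conj().T @ H @ U / K

H_target = w1 * n1 + w2 * n2 + 0.5 * xi1 * n1 @ (n1 - Id) + 0.5 * xi2 * n2 @ (n2 - Id)
print(np.max(np.abs(H_eff - H_target)))   # numerically zero: the coupling is cancelled
```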
Next, we will learn the coupling coefficient h_12. We will use the following unitaries
U_x(θ) = e^iθ (b_1^†b_2+b_2^†b_1), U_y(θ) = e^θ (b_1^†b_2-b_2^†b_1).
Our protocol is based on the observation that under a single-particle basis rotation, h_12 can be estimated from the new single-mode coefficients. More precisely, we let b̃_1 = U_y(π/4)b_1 U_y^†(π/4), b̃_2 = U_y(π/4)b_2 U_y^†(π/4), and the new bosonic modes will be related to the old ones through
[ b̃_1; b̃_2 ]
=
[ cos(π/4) sin(π/4); -sin(π/4) cos(π/4) ][ b_1; b_2 ].
We will then rewrite the Hamiltonian (<ref>) in terms of b̃_1 and b̃_2. The quadratic part of H can be written as
ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1, where
ω̃_1 = (ω_1+ω_2)/2 + Re h_12.
Therefore, Re h_12 can be estimated if we can learn ω̃_1.
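This relation for ω̃_1 amounts to conjugating the 2×2 coefficient matrix of the quadratic part by the rotation above; a short numerical check (with arbitrary coefficients) is sketched below.

```python
import numpy as np

w1, w2, h12 = 0.9, 1.1, 0.35 - 0.2j
M = np.array([[w1, h12], [np.conj(h12), w2]])          # quadratic part b^dag M b
A = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)   # b = A b~ under U_y(pi/4)
M_tilde = A.conj().T @ M @ A
print(M_tilde[0, 0].real, (w1 + w2) / 2 + h12.real)    # both equal omega~_1
```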
The quartic part becomes more complicated, but the procedure we describe next will yield an effective Hamiltonian of a simpler form.
In our protocol for learning h_12, we will let the random unitaries U_j in (<ref>) be
U_j = U_x(-θ/2), θ∼𝒰([0,2π]),
where 𝒰([0,2π]) denotes the uniform distribution on [0,2π]. Note that e^-iθñ_1=e^-i(θ/2)(n_1+n_2)U_x(-θ/2) where ñ_1=b̃_1^†b̃_1, and because the total particle number n_1+n_2 is conserved, the random unitary U_x(-θ/2) is equivalent to e^-iθñ_1 up to a global phase. This random unitary, as in (<ref>), results in an effective Hamiltonian in which ñ_1 is conserved. The effective Hamiltonian can be written as the following
H_effective = ω̃_1 ñ_1 + ω̃_2 ñ_2 + ξ̃_11/2ñ_1(ñ_1-1) + ξ̃_22/2ñ_2(ñ_2-1) + ξ̃_12ñ_1ñ_2.
In this effective Hamiltonian, the two bosonic modes b̃_1 and b̃_2 are still coupled through the term ξ̃_12ñ_1ñ_2. However, because the particle numbers on both modes are conserved, we can simply initialize the system with no particle on the mode b̃_2, and the coupling term will have no effect. More specifically, the initial state we use is U_y(π/4)|α⟩|0⟩, which is an α-eigenstate for b̃_1 and a 0-eigenstate for b̃_2. The effective Hamiltonian can then be further reduced to
H_effective' = ω̃_1 ñ_1 + ξ̃_11/2ñ_1(ñ_1-1).
This enables us to learn ω̃_1 using the single-mode protocol in Section <ref>, which then gives us h_12 through (<ref>). When performing homodyne measurement in the end, we also need to apply U_y(-π/4) to rotate back to the original single-particle basis. We write down the quantum state we get right before measurement to summarize the whole procedure:
U_y(-π/4)∏_j=1^r(U_x(θ_j/2)e^-iHτU_x(-θ_j/2))U_y(π/4)|α⟩|0⟩,
where all θ_j are independently drawn from the uniform distribution over [0,2π].
The above procedure yields Re h_12. For Im h_12, we only need to switch the roles of U_x(θ) and U_y(θ) and go through the same procedure. For a more detailed discussion, see Section <ref>.
§.§ Learning an N-mode system
So far, we have concerned ourselves with learning small systems with one or two modes, but the protocol we develop can be easily generalized to N-mode systems. This section will focus on N bosonic modes arranged on a 1D chain. For the more general situation with a bounded degree graph, e.g., D-dimensional square lattice, Kagome lattice, etc., see Section <ref>.
The Hamiltonian is described by (<ref>), where the bosonic modes are labeled 1,2,⋯, N, and i and j are adjacent only when j=i± 1.
For this N-mode system, we consider a divide-and-conquer approach. We will apply random unitaries so that in the effective dynamics, the system is divided into clusters of one or two modes, each of which does not interact with the rest of the system. In this way, we can learn the parameters associated with each cluster independently and in parallel using our protocol in Section <ref>.
More specifically, we apply random unitaries in the same way as described in (<ref>). The random unitary U_j is first chosen to be
U_j = ∏_k=1^⌊ N/3⌋ e^-iθ_3k n_3k,
where the random variables θ_3k are independently drawn from 𝒰([0,2π]), the uniform distribution over [0,2π]. Randomly applying the unitaries from this distribution enforces particle number conservation on sites with indices that are integer multiples of 3. Therefore, any Hamiltonian term b_i^†b_j that involves sites 3, 6, 9,⋯ are canceled. The effective Hamiltonian is
H = ω_1 n_1 + ω_2 n_2 + h_12b_1^†b_2 + h_21b_2^†b_1
+ ω_4 n_4 + ω_5 n_5 + h_45b_4^†b_5 + h_54b_5^†b_4
+ ⋯
+∑_iξ_i/2n_i(n_i-1),
where we did not include the terms ω_3 n_3, ω_6 n_6, etc., because they only contribute a global phase.
In this Hamiltonian, the two modes 1 and 2 form a cluster: they only interact with each other but not with the rest of the system. The same is true for modes 4 and 5, 7 and 8, etc. We can then apply the two-mode protocol in Section <ref> to learn all coefficients associated with modes 1, 2, 4, 5, ... Note that coefficients associated with different clusters can be learned in parallel in the same experiment.
Other coefficients remain to be learned, such as ω_3, h_23, and h_34. We can adopt the same strategy but choose the random unitary U_j = ∏_k=0^⌊ N/3⌋-1 e^-iθ_3k+1 n_3k+1 so that modes 2 and 3, 5 and 6, etc. now form clusters. Similarly, we can let modes 3 and 4, 6 and 7, etc., form clusters. In this way, we can learn all the coefficients in the Hamiltonian using three different clustering schemes. The total evolution time required for carrying out all experiments will only be three times the cost of a two-mode protocol because different clusters can be learned in parallel.
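For concreteness, the three clustering rounds for the 1D chain can be generated programmatically. The sketch below lists, for each round, the sites on which random phases are applied and the adjacent pairs that remain coupled; sites that end up neither decoupled nor paired simply form single-mode clusters.

```python
def chain_clustering(N):
    """For an N-site chain (sites 1..N), return, for each of the three rounds,
    the sites to decouple with random phases and the adjacent pairs that remain
    coupled (and can therefore be learned in parallel with the two-mode protocol)."""
    schedule = []
    for r in (0, 1, 2):   # round r decouples the sites congruent to r (mod 3)
        decoupled = [s for s in range(1, N + 1) if s % 3 == r]
        clusters = [(i, i + 1) for i in range(1, N)
                    if i % 3 != r and (i + 1) % 3 != r]
        schedule.append({"decoupled": decoupled, "clusters": clusters})
    return schedule

# N = 8: clusters (1,2),(4,5),(7,8); then (2,3),(5,6); then (3,4),(6,7),
# so every nearest-neighbour coupling is covered in one of the three rounds
for rnd in chain_clustering(8):
    print(rnd)
```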
More generally, we consider a system whose interaction can be described by a bounded-degree graph. We can design similar clustering schemes based on an appropriate coloring of its link graph, i.e., the graph whose vertices are the edges of the original graph. The overhead introduced will be quadratic in the degree of the original graph and independent of the system size N. This is discussed in more detail in Section <ref>.
§ DISCUSSION
In this work, we propose a protocol to learn a class of interacting bosonic Hamiltonians with Heisenberg-limited scaling. Our protocol uses only elements of linear optics that can be implemented on various experimental platforms. Besides achieving the Heisenberg-limited scaling, our protocol can also tolerate a constant amount of
SPAM noise thanks to the robust frequency estimation subroutine discussed in Section <ref>. As a part of the protocol, we also propose a method to enforce symmetry on the effective Hamiltonian governing the system's evolution as discussed in more detail in Section <ref>.
To our knowledge, our work is the first to propose a method that learns interacting bosonic Hamiltonians with Heisenberg-limited scaling in a scalable way. However, many open problems remain to be solved in this research direction. In this work, we only consider the particle-number preserving Hamiltonian in (<ref>), but realistic Hamiltonians may contain terms that do not preserve the particle number, such as the coupling term in the Jaynes–Cummings model <cit.> and capacitive and inductive couplings between superconducting circuits <cit.>. Also, higher-order anharmonic effects beyond the fourth order may be non-negligible in certain quantum systems.
In our protocol, we need to apply random unitaries with a frequency that depends on the target precision. For higher precision, the speed of applying these unitaries will also need to be faster, which may be a problem for experimental implementation. A possible solution is to use some form of continuous control as considered in <cit.>. Moreover, since our protocol requires letting the system evolve coherently for 𝒪(ϵ^-1) time to reach ϵ precision, the achievable precision will be limited by quantum noise such as dephasing and photon losses that limit the coherence time of most experimental bosonic systems.
It would therefore be interesting to explore whether noise-suppression techniques such as dynamical decoupling <cit.> and quantum error correction <cit.> can mitigate this limitation and whether they can be incorporated into our protocol in a useful and scalable way.
Random Clifford unitaries played a crucial role in the classical shadow formalism <cit.> as well as Hamiltonian learning <cit.>. Similarly, one may wonder whether the random gaussian unitaries used in this work can be useful for other quantum information tasks for bosonic systems, such as classical shadow tomography for continuous-variable systems <cit.>.
§ METHODS
§.§ Enforcing symmetry using random unitaries
This section will describe how to enforce symmetry using random unitaries. This strategy is similar in spirit to the symmetry protection strategies in <cit.>, but is easier to scale to an N-mode system in the current setting.
Let us first consider the general case where we have a compact Lie group G that describes the symmetry we want in the quantum system. Our quantum system is evolving under a Hamiltonian H that does not necessarily satisfy this symmetry, i.e., there may exist g∈ G such that gHg^-1≠ H (here we equate an element of the Lie group with its matrix representation). We want to have the system evolve under an effective Hamiltonian H_effective that satisfies the symmetry, i.e.,
gH_effectiveg^-1 = H_effective.
We achieve this by inserting random unitaries in the same way as in (<ref>), which gives us an effective Hamiltonian according to (<ref>). The distribution from which we draw the random unitaries is the Haar measure on G, which we denote by μ. The effective Hamiltonian can be computed as
H_effective = ∫ gHg^{-1} μ(dg).
When the Hamiltonian H is unbounded, the above equality may only hold in a weak sense.
We can verify that this effective Hamiltonian satisfies the desired symmetry because
g' H_effective g'^{-1} = ∫ g'gH(g'g)^{-1}μ(dg) = ∫ g'gH(g'g)^{-1}μ(d(g'g)) = H_effective.
Here we have used the property of the Haar measure that μ(d(g'g))=μ(dg).
It may not be easy to randomly apply elements from the symmetry group G. Still, in our learning protocol, we will only enforce symmetries that are either U(1) or U(1)×U(1)×⋯×U(1)=U(1)^× N, where sampling can easily be done for each U(1) group separately.
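As a small numerical check of this construction, one can verify that the twirled Hamiltonian indeed satisfies the enforced symmetry, here a U(1) generated by n_1 on a truncated two-mode space; a uniform grid again approximates the Haar average.

```python
import numpy as np

d = 5
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
b1, b2 = np.kron(b, np.eye(d)), np.kron(np.eye(d), b)
n1 = b1.conj().T @ b1
H = 0.9 * n1 + 1.1 * b2.conj().T @ b2 + 0.4 * (b1.conj().T @ b2 + b2.conj().T @ b1)

# twirl over the U(1) group generated by n1 (uniform grid ~ Haar measure)
K = 128
H_eff = sum(np.diag(np.exp(1j * t * np.diag(n1))) @ H @ np.diag(np.exp(-1j * t * np.diag(n1)))
            for t in 2 * np.pi * np.arange(K) / K) / K

comm = lambda A, B: A @ B - B @ A
# the original H breaks the symmetry; the twirled H_eff commutes with n1
print(np.max(np.abs(comm(H, n1))), np.max(np.abs(comm(H_eff, n1))))
```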
§ ACKNOWLEDGEMENTS
The authors thank Matthias Caro for helpful discussions.
Y.T. acknowledges funding from the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (DE-NA0003525, and DE-SC0020290). Work supported by DE-SC0020290 is supported by the DOE QuantISED program through the theory consortium “Intersections of QIS and Theoretical Particle Physics” at Fermilab. The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center. The work of H.L. and L.Y. is partially supported by National Science Foundation under awards DMS-2011699 and DMS-2208163. T.G. acknowledges funding provided by the Institute for Quantum Information and Matter and the Quantum Science and Technology Scholarship of the Israel Council for Higher Education.
§ ROBUST FREQUENCY ESTIMATION
Our main tool to achieve the Heisenberg limit is an algorithm to estimate the frequency from a complex-valued signal with Heisenberg-limited scaling. This algorithm resembles the robust phase estimation algorithm in <cit.> but is different in that we can deal with cases where the frequency we want is encoded in the expectation value rather than the probability.
More precisely, we assume access to a signal Z_δ(t) that is close to e^-iω t, where |ω|<W, by a constant amount of error in both the phase and the amplitude with probability 1-δ, where δ can be tuned. It is also reasonable to assume that for smaller δ, generating the corresponding Z_δ(t) will be more costly, i.e., requiring longer evolution time. Our algorithm then uses Z_δ(t) for different values of δ and t to refine the estimation of ω iteratively. In each iteration, we use the result from the previous iteration to get an estimate θ_j satisfying
ω/W̄ ∈ (θ_j-π/(3·2^j), θ_j+π/(3·2^j)) mod 2π,
where
W̄=3W/π
is a normalization factor. A detailed description of the algorithm and proof of its correctness can be found in Section <ref>.
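The refinement loop itself is short; the following sketch follows the construction used in the proof below, omitting the SPAM-robust construction of Z_δ(t) and the per-iteration choice of δ_j. The noiseless signal in the usage example is only for illustration.

```python
import numpy as np

def circ_dist(a, b):
    """Distance between angles a and b modulo 2*pi."""
    d = (a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def robust_frequency_estimate(signal, W, J):
    """Iteratively estimate omega (|omega| < W) from a unit-modulus signal
    signal(t) ~= exp(-1j*omega*t), doubling the evolution time each round."""
    Wbar = 3.0 * W / np.pi            # normalisation factor
    theta = 0.0                        # theta_{-1}
    for j in range(J):
        t_j = 2.0 ** j / Wbar
        a = -np.angle(signal(t_j))     # -arg Z(t_j)
        S = (2 * np.pi * np.arange(2 ** j) + a) / 2.0 ** j   # candidate set S_j
        theta = S[np.argmin(circ_dist(S, theta))]
    theta = (theta + np.pi) % (2 * np.pi) - np.pi            # fold into [-pi, pi)
    return Wbar * theta

# toy usage with a noiseless signal
omega_true, W = 0.731, 2.0
est = robust_frequency_estimate(lambda t: np.exp(-1j * omega_true * t), W, J=20)
print(est, omega_true)
```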
In the following theorem, we analyze the performance of the above algorithm for a fixed set of values for δ in each iteration. We will then optimize these values to achieve the (ϵ^-1) scaling in Corollary <ref>.
Suppose that |ω|<W is known in advance and that we have access to a signal Z_δ(t) such that
* |Z_δ(t)|=1,
* |Z_δ(t)-e^-i(ω t+f(t))|≤η with probability at least 1-δ, where sup_t|f(t)|≤ C_f <π/3,
* 2arcsin(η/2) + C_f≤π/3,
* generating Z_δ(t) requires evolution time C_Z t(log(δ^-1)+1),
then we can produce an estimate ω̂ such that
𝔼[|ω̂-ω|^2]≤∑_j=0^J-1E_j^2 δ_j + ϵ^2/4,
with total evolution time at most
π C_Z/W̄ ∑_j=0^J-1 2^j(log(δ_j^-1)+1),
where
E_0=2πW̄, E_j=4πW̄/(3· 2^j) ∀ j≥ 1, J = ⌈log_2(4πW̄/(3ϵ))⌉, W̄ = 3W/π,
and each δ_j∈ (0,1] is arbitrarily chosen.
Denote ω/W̄=ωπ/(3W) by ω̃; then |ω̃|<π/3.
We proceed by choosing a sequence of t_j, j=0, 1, …, J-1 and refining the estimation of ω̃ progressively. In each iteration we generate a signal Z_δ_j(t_j) for an arbitrarily chosen δ_j.
First, let t_0=1/W̄; then with probability at least 1-δ_0,
|Z_δ_0(t_0)-e^-i(ω̃ +f(t_0))|≤η,
which yields
|ω̃-(-arg Z_δ_0(t_0))|_2π ≤ 2arcsin(η/2) + C_f<π/3.
Thus ω̃∈ (-arg Z_δ_0(t_0)-π/3+2kπ, -arg Z_δ_0(t_0)+π/3+2kπ) for some integer k. Let θ_-1 = 0 and S_0 = {-arg Z_δ_0(t_0)}, then by choosing θ_0 = argmin_θ∈ S_0|θ-θ_-1|_2π=-arg Z_δ_0(t_0), we obtain
ω̃∈ (θ_0-π/3, θ_0+π/3) mod 2π.
Here |θ|_2π is defined to be the minimum distance of θ to 0 modulo 2π, i.e., |θ|_2π = π - |(θ mod 2π)-π|.
At step j, we set t_j = 2t_{j-1}, S_j = {(2kπ-arg Z_δ_j(t_j))/2^j}_{k=0,…,2^j-1}, and θ_j = argmin_θ∈ S_j|θ - θ_{j-1}|_2π.
Now we are ready to prove that if
|Z_δ_j'(t_j')-e^-i(ω̃2^j' +f(t_j'))|≤η,
for all 0≤ j'≤ j, then
ω̃∈ (θ_j-π/3·2^j, θ_j+π/3·2^j), 2π,
for all j by induction. The case j=0 is already proved. Suppose (<ref>) holds for j-1.
Because of <ref>, we have
|ω̃2^j-(- Z_δ_j(t_j))|≤ 2arcsinη/2 + C_f<π/3 2π.
Thus
ω̃∈ I_k := (2kπ - Z_δ_j(t_j)-π/3/2^j, 2kπ - Z_δ_j(t_j)+π/3/2^j) 2π,
for some k=0,1, … 2^j-1. Notice that (I_k, I_k')≥π/2^j-1-π/3·2^j-1 = π/3·2^j-2 for any k≠k', and that the length of the previous estimation (θ_j-1-π/3·2^j-1, θ_j-1+π/3·2^j-1) is exactly π/3·2^j-2, we can ensure that only one k^* satisfies I_k∩(θ_j-1-π/3·2^j-1, θ_j-1+π/3·2^j-1)≠∅, 2π. Moreover, the corresponding k^* satisfies
2k^*π- Z_δ_j(t_j)/2^j = _θ∈ S_j|θ-θ_j-1|_2π, since
|2k^*π- Z_δ_j(t_j)/2^j-θ_j-1|_2π≤|2k^*π- Z_δ_j(t_j)/2^j-ω̃|_2π + |ω̃-θ_j-1|_2π < π/3·2^j + π/3·2^j-1 = π/2^j,
and
|2kπ- Z_δ_j(t_j)/2^j-θ_j-1|_2π≥π/2^j-1 - |2k^*π- Z_δ_j(t_j)/2^j-θ_j-1|_2π > π/2^j-1 - π/2^j = π/2^j
for any k≠k^*. Now we have proved (<ref>).
In the end, notice that (<ref>) has an ambiguity of modulus 2π, we add a proper integer multiple of 2π to θ_J-1 such that |θ_J-1|≤π. We then choose this adjusted θ_J-1 as our estimate for ω̃, and our estimate for ω is Wθ_J-1=:ω̂.
From the above analysis we can see that if (<ref>) holds for 0≤ j'≤ j-1, which means that all the iterations from 0 to j-1 are successful, then by (<ref>), ω̃ is contained in (θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2kπ for some integer k, and our estimate θ_J-1 is contained in (θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2k'π for some integer k'. Since |ω̃|<π/3, we have ((θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2kπ)⊂ (-π, π), and then ((θ_j-1-π/(3· 2^j-1),θ_j-1+π/(3· 2^j-1))+2k'π)∩ [-π, π]=∅ if k'≠k. Hence we must have k=k' since |θ_J-1|≤π. Therefore the error in the normalized ω̃ is at most E_j/W=4π/(3· 2^j), for j=1,2,⋯,J-1. If the very first iteration fails, the error is at most E_0/W=2π. If all the iterations are successful, then by (<ref>) and the argument above, |θ_J-1-ω̃|≤π/(3· 2^J-1)≤ϵ/(2W). From these observations, we will compute the expected error.
We define the random variable j_fail to be the first iteration that fails, i.e.,
|Z_δ_j'(t_j')-e^-i(ω̃2^j' +f(t_j'))|≤η, ∀ j'< j_fail, |Z_δ_j_fail(t_j_fail)-e^-i(ω̃2^j_fail +f(t_j_fail))|> η.
If such a j_fail cannot be found, i.e., all iterations are successful, then we let j_fail=J.
From the above analysis, conditional on j_fail=j<J, the error will be at most E_j/W. In other words 𝔼[|ω̃-θ_J-1|^2|j_fail=j]≤ E_j^2/W^2. If j=J, then the error is at most ϵ/(2W). Also, we have
[j_fail=j] = (1-δ_0)(1-δ_1)⋯ (1-δ_j-1)δ_j≤δ_j.
Therefore the expected square error is
𝔼[|ω-ω̂|^2] = W^2 𝔼[|ω̃-θ_J-1|^2]
=W^2 ∑_j=0^J𝔼[|ω̃-θ_J-1|^2|j_fail=j][j_fail=j]
≤∑_j=0^J-1 E_j^2 δ_j + ϵ^2/4.
This proves (<ref>). Generation of each Z_δ_j(t_j) requires an evolution time of C_Z t_j(log(δ_j^-1)+1), and hence we have total evolution time (<ref>) by adding them up.
In the theorem above, we have left a great deal of flexibility in choosing δ_j. Below, we will try to answer that if we want the MSE to satisfy 𝔼[|ω̂-ω|^2]≤ϵ^2, how we should choose the δ_j to minimize the total evolution time required. We first state our result:
Suppose that |ω|<W is known in advance and that we have access to a signal Z_δ(t) such that
* |Z_δ(t)|=1,
* |Z_δ(t)-e^-i(ω t+f(t))|≤η with probability at least 1-δ, where sup_t|f(t)|≤ C_f <π/3,
* 2arcsin(η/2) + C_f≤π/3,
* generating Z_δ(t) requires evolution time C_Z t(log(δ^-1)+1),
then we can produce an estimate ω̂ such that 𝔼[|ω̂-ω|^2]≤ϵ^2, with total evolution time at most 𝒪(C_Z ϵ^-1).
By (<ref>) and (<ref>), we essentially need to solve the following optimization problem to get the optimal {δ_j}:
{δ_j}minimize ∑_j=0^J-12^j log(δ_j^-1)
subject to ∑_j=0^J-1E_j^2 δ_j ≤3/4ϵ^2.
This optimization problem can be easily solved using the concavity of the logarithmic function, and the optimal δ_j is
δ_j = 3ϵ^2/4E_j^22^j/(2^J-1).
Using this choice of δ_j, we can then compute the total evolution time required through <ref>.
π C_Z/W∑_j=0^J-12^j(log(δ_j^-1)+1) = π C_Z/W∑_j=0^J-12^jlog(δ_j^-1)_(I) + π C_Z/W(2^J-1)_(II).
For term (II), we have
π C_Z/W(2^J-1)<π C_Z 8π/3ϵ
by our choice of J given in <ref>.
For (I), using our expression for δ_j and the expression for E_j in (<ref>), we have
(I) = π C_Z/W∑_j=0^J-12^jlog(4E_j^2/3ϵ^22^J-1/2^j)
= π C_Z/Wlog(16π^2 W^2/3ϵ^2(2^J-1)) + π C_Z/W∑_j=1^J-12^jlog(64π^2 W^2/27ϵ^22^J-1/2^j1/4^j)
< π C_Z/Wlog(64π^3 W^3/9ϵ^3) + π C_Z/W∑_j=1^J-12^jlog(4/38^J-j)
= π C_Z/Wlog(64π^3 W^3/9ϵ^3) + π C_Z/Wlog(4/3)(2^J-2) + π C_Z/Wlog(8)(2^J+2-2J-2)
≤(ϵ) + (C_Z ϵ^-1).
In the last line, we have used the fact that ϵ≤ W (as otherwise we can simply estimate ω by 0) to bound the first term on the second-to-last line. Combining (<ref>), (<ref>), and (<ref>), we can see that the total evolution time of the entire procedure is 𝒪(C_Z ϵ^-1).
§ LEARNING AN ANHARMONIC OSCILLATOR
The basic building block of our algorithm is a method to learn a single anharmonic oscillator of the form
H_AHO = ω b^†b + ξ/2n(n-1),
where n=b^†b.
We will then outline the experiments we run to learn the coefficients ω and ξ from this Hamiltonian. We first start with a coherent state
|α⟩ = e^-|α|^2/2∑_k=0^∞α^k/√(k!)|k⟩.
We then let the system evolve under the Hamiltonian H_AHO for time t, and obtain the quantum state
e^-i H_AHO t|α⟩ = e^-|α|^2/2∑_k=0^∞α^k/√(k!)e^-iω k t-iξ/2k(k-1)t|k⟩.
In the end, we perform POVM measurement in the eigenbasis of either X=(b+b^†)/√2 or P=i(b^†-b)/√2, and by taking averages we obtain ⟨X⟩_α,t and ⟨P⟩_α,t, where ⟨·⟩_α,t means taking the expectation with respect to the state e^{-iH_AHO t}|α⟩. With these, we can then obtain the expectation value of b through
⟨b⟩_α,t = (1/√2)(⟨X⟩_α,t + i⟨P⟩_α,t).
The expectation values ⟨b|_⟩α,t for a certain set of α and t will enable us to estimate ω and ξ, and we will demonstrate this below. First we can compute b e^-i H_AHO t|α⟩ to be
be^-iH_AHOt|α⟩ = e^-|α|^2/2∑_k=1^∞α^k e^-iω k te^-iξ/2 k(k-1)t/√((k-1)!)|k-1⟩
= e^-|α|^2/2∑_k=0^∞α^k+1 e^-iω (k+1) te^-iξ/2 k(k+1)t/√(k!)|k⟩.
This yields a closed-form expression for ⟨b⟩_α,t:
⟨b⟩_α,t = e^{-|α|^2}∑_k α |α|^{2k} e^{-iω t}e^{-iξ kt}/k!
= α e^{-|α|^2} e^{-iω t} e^{|α|^2 e^{-iξ t}}.
Now we are ready to estimate ω and ξ with the help of Corollary <ref>. To estimate ω, we define
Z(t) = ⟨b⟩_α,t/|⟨b⟩_α,t| = e^{-i(ω t + |α|^2 sin(ξ t))},
then Z(t) = e^-i(ω t + f(t)), where f(t) = |α|^2sin(ξ t). Therefore, sup_t|f(t)|≤|α|^2. The exact value of Z(t) is, however, inaccessible in practice, and we need to find an approximation Z_δ(t) such that |Z_δ(t)-Z(t)|≤η with probability at least δ if we want to utilize Corollary <ref>. In the following, we decompose the approximation error into three parts and analyze them separately.
Truncation error.
The first part of the approximation error comes from the truncation of the observables. In our protocol, we truncate the observables up to a threshold M, which means that rather than estimating ⟨X⟩_α,t and ⟨P⟩_α,t we estimate ⟨X 1_{|X|≤M}⟩_α,t and ⟨P 1_{|P|≤M}⟩_α,t.
Here 1_{|X|≤M} and 1_{|P|≤M} are defined to be
1_{|X|≤M} = ∫_{-M}^{M}|x⟩⟨x| dx, 1_{|P|≤M} = ∫_{-M}^{M}|p⟩⟨p| dp.
This is necessary for the robustness of our protocol. With the original unbounded observables X and P, any small error in the quantum state can potentially be infinitely magnified in the expectation value. The use of bounded observables will ensure that this does not happen.
In the following, we will ensure that the error introduced by this truncation is acceptable for our protocol.
From Chebyshev's inequality, one has
ℙ(|X|≥ M)≤⟨X^2|_⟩α,t/M^2≤2⟨b^† b|_⟩α, t+1/M^2 = 2|α|^2+1/M^2,
where we have used the fact that ⟨b^† b|_⟩α, t=|α|^2. Then, by Cauchy-Schwarz inequality,
|⟨X|_⟩α,t-⟨X|X|≤ M|_⟩α,t| = |⟨X|X|>M|_⟩α,t|≤√(⟨X^2|_⟩α,t)√(⟨|X|>M^2|_⟩α,t)
≤√(2⟨b^† b|_⟩α,t+1)√(ℙ(|X|≥ M))≤2|α|^2+1/M.
Similarly, one has
ℙ(|P|≥ M) ≤⟨P^2|_⟩α,t/M^2≤2⟨b^† b|_⟩α,t+1/M^2 = 2|α|^2+1/M^2,
and
|⟨P|_⟩α,t-⟨P|P|≤ M|_⟩α,t|≤2|α|^2+1/M.
Combining the error bounds for X and P truncations, we will have an error bound for the truncated b operator.
Let
Z_M(t)=1/√(2)(⟨X|X|≤ M|_⟩α,t + i⟨P|P|≤ M|_⟩α,t),
then
Z_M(t)-⟨b|_⟩α,t = √( Z_M(t) - ⟨b|_⟩α,t^2 + Z_M(t) - ⟨b|_⟩α,t^2)
= √((⟨X|_⟩α,t-⟨X|X|≤ M|_⟩α,t^2 + ⟨P|_⟩α,t-⟨P|P|≤ M|_⟩α,t^2))
≤2|α|^2+1/M.
Simulation error.
In practice, the final state we obtained is different from the ideal state e^-iH_AHO t|α⟩.
This is because in the multi-mode situation, H_AHO is the Hamiltonian of the effective dynamics, which differs from the actual dynamics by a small error. In this sense, we only simulate the effective dynamics, thus calling this error the simulation error.
We denote the expectation with respect to the real final state obtained by ⟨·|_⟩α, t, r, where r stands for the parameters used in the simulation. More precisely, as will be explained in Section <ref>, and in particular (<ref>), r is the number of random unitaries that we insert during the time evolution. In Section <ref>, we will show that for any given η_0>0, there exists a choice of r such that
⟨O|_⟩α, t, r-⟨O|_⟩α, t≤Oη_0,
for any bounded observable O. In particular, for any given η_0>0, there is a choice of r such that
⟨X|X|≤ M|_⟩α, t, r-⟨X|X|≤ M|_⟩α, t≤ Mη_0, ⟨P|P|≤ M|_⟩α, t, r-⟨P|P|≤ M|_⟩α, t≤ Mη_0.
Define
Z_M,r(t)=1/√(2)(⟨X|X|≤ M|_⟩α,t,r + i⟨P|P|≤ M|_⟩α,t,r),
then
Z_M,r(t)-Z_M(t)
= √((⟨X|X|≤ M|_⟩α,t,r-⟨X|X|≤ M|_⟩α,t^2 + ⟨P|P|≤ M|_⟩α,t,r-⟨P|P|≤ M|_⟩α,t^2))
≤ Mη_0.
Statistical error. In practice, homodyne measurement generates samples corresponding to the quadrature operator X. By discarding the samples with norm larger than M, we obtain samples x̂_1,x̂_2,⋯,x̂_L corresponding to X|X|≤ M. We then approximate ⟨X|X|≤ M|_⟩α, t, r through the average x̅=(x̂_1+x̂_2+⋯+x̂_L)/L. Similarly, we can generate p̂_1,p̂_2,⋯,p̂_L corresponding to P|P|≤ M, and use p̅ = (p̂_1+p̂_2+⋯+p̂_L)/L to approximate ⟨P|P|≤ M|_⟩α, t, r. It is clear that x̂ and p̂ are unbiased estimates for ⟨X|X|≤ M|_⟩α, t, r and ⟨P|P|≤ M|_⟩α, t, r. Define
Z̅ = 1/√(2)(x̅+ip̅),
then
Z̅-Z_M,r(t)= √((x̅-⟨X|X|≤ M|_⟩α,t,r^2 + p̅-⟨P|P|≤ M|_⟩α,t,r^2))
≤max{x̅-⟨X|X|≤ M|_⟩α,t,r, p̅-⟨P|P|≤ M|_⟩α,t,r}.
Thus by the union bound and Hoeffding's inequality, we have
ℙ(|Z̅-Z_M,r(t)|≥η_1) ≤ℙ(x̅-⟨X|X|≤ M|_⟩α,t,r≥η_1) + ℙ(p̅-⟨P|P|≤ M|_⟩α,t,r≥η_1)
≤ 2e^-Lη_1^2/2M^2+2e^-Lη_1^2/2M^2= 4e^-Lη_1^2/2M^2.
Putting the three types of error together, we have
Z̅-⟨b|_⟩α,t ≤Z̅-Z_M,r(t)+Z_M,r(t)-Z_M(t)+Z_M(t)-⟨b|_⟩α,t
≤η_1+Mη_0+2|α|^2+1/M,
with probability at least 1-4e^-Lη_1^2/2M^2. Define
Z_δ(t) = Z̅/Z̅,
then
Z_δ(t)-Z(t) = Z̅/Z̅-⟨b|_⟩α,t/⟨b|_⟩α,t≤2Z̅-⟨b|_⟩α,t/⟨b|_⟩α,t
=2Z̅-⟨b|_⟩α,t/|α|e^|α|^2(cos(ξ t)-1)≤ 2|α|^-1e^2|α|^2Z̅-⟨b|_⟩α,t.
Hence,
Z_δ(t)-Z(t)≤ 2|α|^-1e^2|α|^2(η_1+Mη_0+2|α|^2+1/M),
with probability at least 1-4e^-Lη_1^2/2M^2. In order for the condition 2arcsinη/2 + C_f≤π/3 in Theorem <ref> to hold, we need
2arcsin(|α|^-1e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)) + |α|^2 ≤π/3.
In order for 1-4e^-Lη_1^2/2M^2≥1-δ to hold, we need
L≥2M^2/η_1^2log4/δ.
In conclusion, we have constructed a signal Z_δ(t) to estimate the parameter ω that satisfies the conditions required by Corollary <ref>.
Define Z_δ(t) = Z̅/Z̅, where Z̅ = 1/√(2)(x̅+ip̅), and (x̅, p̅) are the average values computed from L measurement results each for ⟨X|X|≤ M|_⟩α, t, r and ⟨P|P|≤ M|_⟩α, t, r, respectively. Here ⟨·|_⟩α, t, r denotes the expectation with respect to the real final state obtained by a simulation using r randomly inserted unitaries, which is an approximation of the state e^-iH_AHO t|α⟩ satisfying ⟨O|_⟩α, t, r-⟨O|_⟩α, t≤Oη_0 for any bounded operator O. Then Z_δ(t) satisfies the conditions of Corollary <ref> for the estimation of ω if
|α|^2<π/3, M > e^2|α|^2(2|α|^2+1)/|α|sin(π/6-|α|^2/2),
η_0<1/M(|α|e^-2|α|^2sin(π/6-|α|^2/2)-2|α|^2+1/M),
η_1≤ |α|e^-2|α|^2sin(π/6-|α|^2/2)-2|α|^2+1/M -Mη_0,
L≥2M^2/η_1^2log4/δ.
As a result, α, M, η_0 and η_1 can be chosen as 𝒪(1) constants and the total runtime needed in producing Z_δ(t) is 𝒪(t(log(1/δ)+1)).
When choosing the parameters in practice, one can follow the order in (<ref>), i.e., first decide the value of α, then choose M, η_0, η_1 and L accordingly.
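As an illustration of this ordering, the sketch below evaluates the inequalities of the lemma to produce one admissible choice of (M, η_0, η_1, L); the particular value of α and the safety factors are arbitrary, and the resulting sample count reflects the conservative worst-case constants.

```python
import numpy as np

def pick_parameters(alpha=0.3, delta=0.05):
    """One admissible choice of (M, eta0, eta1, L) for estimating omega,
    following the order alpha -> M -> eta0 -> eta1 -> L."""
    assert alpha ** 2 < np.pi / 3
    s = np.sin(np.pi / 6 - alpha ** 2 / 2)
    M_min = np.exp(2 * alpha ** 2) * (2 * alpha ** 2 + 1) / (alpha * s)
    M = 1.5 * M_min                                   # any M > M_min works
    eta0 = 0.5 * (alpha * np.exp(-2 * alpha ** 2) * s - (2 * alpha ** 2 + 1) / M) / M
    eta1 = alpha * np.exp(-2 * alpha ** 2) * s - (2 * alpha ** 2 + 1) / M - M * eta0
    L = int(np.ceil(2 * M ** 2 / eta1 ** 2 * np.log(4 / delta)))
    return M, eta0, eta1, L

print(pick_parameters())   # the constants are worst-case, so L comes out large
```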
Next, we will build the signal Z_δ(t) for the estimation of ξ with the help of the result above. In particular, when (<ref>) and (<ref>) hold, one can deduce that
|Z̅-⟨b|_⟩α, t| ≤ |α|e^-2|α|^2sin(π/6-|α|^2/2)≤|α|e^-2|α|^2≤|⟨b|_⟩α, t|.
We first observe that cos(ξ t) can be obtained by
cos(ξ t) = (1/|α|^2) log|⟨b⟩_α,t/α| + 1.
Therefore, when when (<ref>) and (<ref>) hold, the error in the estimation of cos(ξ t) caused by using Z̅ instead of ⟨b|_⟩α, t is
(1/|α|^2log⟨b|_⟩α, t/α + 1) -
(1/|α|^2logZ̅/α+1)
= 1/|α|^2logZ̅/⟨b|_⟩α, t=1/|α|^2log(1+Z̅-⟨b|_⟩α, t/⟨b|_⟩α, t)
≤ 2log2/|α|^2Z̅-⟨b|_⟩α, t/⟨b|_⟩α, t
≤ 2log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M),
where we have used the concavity of log x and (<ref>) in the third line. For estimating sin(ξ t), we use two different values for α. The ratio between ⟨b|_⟩α_1,t and ⟨b|_⟩α_2,t is
⟨b|_⟩α_1,t/⟨b|_⟩α_2,t = α_1/α_2 e^(|α_2|^2-|α_1|^2)(1-e^-iξ t).
Let Z_α_1, α_2 = ⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
and β = α_2^2-α_1^2.
Assume that β<π/2, then
sin(ξ t) = 1/βarcsin( Z_α_1, α_2),
Now we analyze the error in the estimate of sin(ξ t) caused by approximation. We assume that (<ref>) holds for both α_1 and α_2, and we condition on the event that
Z̅_1-⟨b|_⟩α_1, t≤η_1+Mη_0+2|α_1|^2+1/M,
Z̅_2-⟨b|_⟩α_2, t≤η_1+Mη_0+2|α_2|^2+1/M.
Then
Z̅_1/Z̅_2/Z̅_1/Z̅_2-⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ Z̅_1/Z̅_2/Z̅_1/Z̅_2- ⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ 2Z̅_1/Z̅_2-⟨b|_⟩α_1,t/⟨b|_⟩α_2,t/⟨b|_⟩α_1,t/⟨b|_⟩α_2,t
≤ 2⟨b|_⟩α_2, t/⟨b|_⟩α_1, t[Z̅_1-⟨b|_⟩α_1,t/Z̅_2+⟨b|_⟩α_1,t/⟨b|_⟩α_2,tZ̅_2-⟨b|_⟩α_2,t/Z̅_2]
≤ 4[Z̅_1-⟨b|_⟩α_1,t/⟨b|_⟩α_1,t+Z̅_2-⟨b|_⟩α_2,t/⟨b|_⟩α_2,t]
≤ 4|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+4|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M).
Here we have used the fact that Z̅≥⟨b|_⟩α_,t in the fifth line, which can be deduced from (<ref>). Now, if we further assume that β≤π/3, then since the function arcsin is 2-Lipschitz on [-sinπ/3, sinπ/3], we have
1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2)-1/βarcsin( Z_α_1, α_2)
≤ 2/βZ̅_1/Z̅_2/Z̅_1/Z̅_2- Z_α_1, α_2
≤ 8/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M))
Combining (<ref>) and (<ref>), we have
e^iξ t-ĉ+iŝ/ĉ+iŝ≤ 4log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)
+16/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M)),
where ĉ = 1/|α|^2logZ̅/α + 1 and ŝ = 1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2),
and the condition in Corollary <ref> reads
1≥4log2|α|^-3e^2|α|^2(η_1+Mη_0+2|α|^2+1/M)
+16/β(|α_1|^-1e^2|α_1|^2(η_1+Mη_0+2|α_1|^2+1/M)+|α_2|^-1e^2|α_2|^2(η_1+Mη_0+2|α_2|^2+1/M)).
In particular, we can take α=α_1 and obtain the following result.
Define Z_δ(t) = ĉ+iŝ/ĉ+iŝ, where ĉ = 1/|α_1|^2logZ̅_1/α_1 + 1, ŝ = 1/βarcsin(Z̅_1/Z̅_2/Z̅_1/Z̅_2), and (Z̅_1, Z̅_2) are defined in the same way as in Lemma <ref> for α_1 and α_2, respectively. Then Z_δ(t) satisfies the conditions of Corollary <ref> for the estimation of ξ if
|α_1|^2<π/3, |α_2|^2<π/3, β := |α_1|^2-|α_2|^2<π/2,
M > (4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1) + 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1),
η_0<M-(4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1) - 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1)/M^2((4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2+16/β|α_2|^-1e^2|α_2|^2),
η_1≤M-(4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2(2|α_1|^2+1+M^2η_0) - 16/β|α_2|^-1e^2|α_2|^2(2|α_2|^2+1+M^2η_0)/M((4log2|α_1|^-3+16/β|α_1|^-1)e^2|α_1|^2+16/β|α_2|^-1e^2|α_2|^2),
L≥2M^2/η_1^2log8/δ.
As a result, α_1, α_2, M, η_0 and η_1 can be chosen as 𝒪(1) constants and the total runtime needed in producing Z_δ(t) is 𝒪(t(log(1/δ)+1)).
§ LEARNING TWO COUPLED ANHARMONIC OSCILLATORS
In this section, we consider a system consisting of two coupled anharmonic oscillators, and the Hamiltonian is of the following form:
H = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + h_12b_1^†b_2 + h_21b_2^†b_1 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1)
The goal is to learn all the coefficients ω_1, ω_2, ξ_1, ξ_2, and h_12 (h_21=h^*_12).
§.§ Single-mode coefficients
We first focus on learning the single-mode coefficients ω_1, ω_2, ξ_1, and ξ_2. To do this, we will insert random unitaries during time evolution to decouple the bosonic modes from each other. In other words, the time evolution operator undergoes the following transformation
e^-iHt↦∏_j=1^r U_j^†e^-iHτ U_j = ∏_j=1^r e^-iU_j^†HU_jτ,
where the U_j, j=1,2,⋯,r, are the random linear optics unitaries that we insert, r=t/τ, and the product goes from right to left. Each U_j is independently drawn from a distribution that we denote by 𝒟. In the limit of τ→ 0, the dynamics can be described by an effective Hamiltonian
H_effective = 𝔼_U∼𝒟 U^†HU.
This can be seen by considering the Taylor expansion of the time-evolved state in a small time step:
𝔼_U∼𝒟[e^{-iU^†HUτ}ρ e^{iU^†HUτ}] = ρ - iτ𝔼_U∼𝒟[[U^†HU,ρ]] + 𝒪(τ^2)
= e^{-i𝔼_U∼𝒟[U^†HU]τ}ρ e^{i𝔼_U∼𝒟[U^†HU]τ} + 𝒪(τ^2).
The above is not a rigorous proof because the 𝒪(τ^2) residue is an unbounded operator. We will provide a rigorous bound of how far the actual dynamics deviate from the limiting effective dynamics with finite τ>0 in Section <ref>.
To learn all the single mode coefficients, we let the unitary U drawn from the distribution 𝒟 be
U = e^-iθ b_1^†b_1, θ∼𝒰([0,2π]).
Here 𝒰([0,2π]) is the uniform distribution over [0,2π].
We can then compute the effective Hamiltonian
H_effective = 1/2π∫_0^2π e^iθ b_1^†b_1He^-iθ b_1^†b_1θ = ω_1 b_1^†b_1 + ω_2 b_2^†b_2 + ξ_1/2n_1(n_1-1) + ξ_2/2n_2(n_2-1).
In other words, the coupling term h_12b_1^†b_2 + h_21b_2^†b_1 is cancelled in the process, due to the equality
(1/2π)∫_0^2π e^{iθ b_1^†b_1} b_1 e^{-iθ b_1^†b_1} dθ = (1/2π)∫_0^2π e^{-iθ}b_1 dθ = 0.
We can interpret this procedure as enforcing a particle number conservation on the first bosonic mode.
The effective Hamiltonian has the desirable feature that the two bosonic modes are no longer coupled together. Therefore we can apply the learning algorithm described in Section <ref> to learn the parameters of the two modes separately.
§.§ The coupling coefficient
Next, we consider learning the coupling coefficient h_12. We observe that the coupling term can be transformed into a local one under a single-particle basis transformation. This is done through the following two operators
U_x(θ) = e^iθ (b_1^†b_2+b_2^†b_1), U_y(θ) = e^θ (b_1^†b_2-b_2^†b_1),
which correspond to Pauli-X and Y rotations. They transform the annihilation operators in the following way
[ U_x(θ)b_1 U_x^†(θ); U_x(θ)b_2 U_x^†(θ) ]
=
[ cos(θ) isin(θ); isin(θ) cos(θ) ][ b_1; b_2 ], [ U_y(θ)b_1 U_y^†(θ); U_y(θ)b_2 U_y^†(θ) ]
=
[ cos(θ) sin(θ); -sin(θ) cos(θ) ][ b_1; b_2 ].
We first perform the Pauli-Y rotation and define
b̃_1 = U_y(π/4)b_1 U_y^†(π/4), b̃_2 = U_y(π/4)b_2 U_y^†(π/4).
Through (<ref>) we have
[ b_1; b_2 ]
=1/√(2)[ 1 -1; 1 1 ][ b̃_1; b̃_2 ]
We will then rewrite the Hamiltonian (<ref>) in terms of b̃_1 and b̃_2. The quadratic part of H can be written as
ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1,
where
[ ω̃_1 h̃_12; h̃_21 ω̃_2 ]
=1/2[ 1 1; -1 1 ][ ω_1 h_12; h_21 ω_2 ][ 1 -1; 1 1 ].
In particular, we have
ω̃_1 = (ω_1+ω_2)/2 + Re h_12.
Therefore, Re h_12 can be estimated if we can learn ω̃_1.
ξ_1/2n_1(n_1-1) = ξ_1/2b_1^†b_1^†b_1b_1 = ∑_ijkl=1^2 ξ^(1)_ijklb̃^†_ib̃^†_jb̃_kb̃_l,
ξ_2/2n_2(n_2-1) = ξ_2/2b_2^†b_2^†b_2b_2 = ∑_ijkl=1^2 ξ^(2)_ijklb̃^†_ib̃^†_jb̃_kb̃_l.
In particular
ξ^(1)_1111 = ξ_1/4, ξ^(2)_1111 = ξ_2/4.
Combining (<ref>) and (<ref>), the Hamiltonian H can be written in terms of b̃_1 and b̃_2 as
H = ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + h̃_12b̃_1^†b̃_2 + h̃_21b̃_2^†b̃_1 + ∑_ijkl=1^2 (ξ^(1)_ijkl+ξ^(2)_ijkl)b̃^†_ib̃^†_jb̃_kb̃_l.
The above expression is much more complicated than the original expression in (<ref>), but we will use random unitaries to produce a much simpler effective Hamiltonian. This time, the random unitary we use will be
U = e^-iθb̃_1^†b̃_1, θ∼𝒰([0,2π]).
With the same derivation as in (<ref>), we can obtain the effective Hamiltonian as 𝔼[U^† HU]. Note that in conjugating with e^iθb̃_1^†b̃_1, each b̃_1 in the Hamiltonian acquires a phase e^iθ, and each b̃_1^† acquires a phase e^-iθ. If in a Hamiltonian term b̃_1 and b̃_1^† do not appear the same number of times, then the term will acquire a phase e^icθ with c∈{-2,-1,1,2}, and integrating over θ will cancel out this term. For example
1/2π∫_0^2π e^iθb̃_1^†b̃_1b̃_1^†b̃_2 e^-iθb̃_1^†b̃_1 dθ = 1/2π∫_0^2π e^iθb̃_1^†b̃_2 dθ = 0,
1/2π∫_0^2π e^iθb̃_1^†b̃_1b̃_1^†b̃_1^†b̃_2b̃_2 e^-iθb̃_1^†b̃_1 dθ = 1/2π∫_0^2π e^2iθb̃_1^†b̃_1^†b̃_2b̃_2 dθ = 0.
In other words, only the terms that conserve the particle number on the first bosonic mode are preserved in the effective Hamiltonian. We can then write the effective Hamiltonian as
H_effective = ω̃_1 b̃_1^†b̃_1 + ω̃_2 b̃_2^†b̃_2 + (ξ^(1)_1111+ξ^(2)_1111)ñ_1(ñ_1-1) + (Añ_1+Bñ_2+C)ñ_2,
where ñ_1 = b̃_1^†b̃_1, ñ_2 = b̃_2^†b̃_2, and A, B, C are coefficients whose precise values will not matter in what follows.
Recall that our goal is to learn the coupling coefficient h_12, whose real part can be derived from ω̃_1, ω_1, and ω_2 through (<ref>), while ω_1 and ω_2 can be learned using the procedure outlined in Section <ref>. We, therefore, only need to estimate ω̃_1 from the effective Hamiltonian.
To do this, we start with a product state |α⟩|0⟩ on the two bosonic modes. Then we apply U_y(π/4) to this state to get the initial state of our time evolution
|Φ(0)⟩ = U_y(π/4)|α⟩|0⟩.
This state is the tensor product of the coherent states of b̃_1 and b̃_2 because one can verify that, using (<ref>),
b̃_1|Φ(0)⟩=b̃_1 U_y(π/4)|α⟩|0⟩ = U_y(π/4) b_1|α⟩|0⟩ = α U_y(π/4)|α⟩|0⟩
b̃_2|Φ(0)⟩=b̃_2 U_y(π/4)|α⟩|0⟩ = U_y(π/4) b_2|α⟩|0⟩ = 0.
Because of the above equation, we can see that there is no particle in the bosonic mode b̃_2 in this state |Φ(0)⟩. As the effective Hamiltonian in (<ref>) conserves the particle number on both bosonic modes, the particle number on the mode b̃_2 will stay 0. Consequently, any term that involves ñ_2 will not affect the dynamics. Therefore we can safely discard these terms and get a new effective Hamiltonian
H_effective' = ω̃_1 b̃_1^†b̃_1 + (ξ^(1)_1111+ξ^(2)_1111)ñ_1(ñ_1-1).
Note that this Hamiltonian only acts non-trivially on the bosonic mode b̃_1. Therefore we can use the single-mode protocol in Section <ref> to learn the coefficient ω̃_1. As guaranteed in (<ref>), we start from the α-coherent state for b̃_1. In the time evolution, the expectation value ⟨b̃_1⟩ contains the information needed to determine ω̃_1. The expectation value ⟨b̃_1⟩ can be extracted through homodyne measurement with two quadrature operators.
Note that we need to convert this homodyne measurement into homodyne measurement for b_1 or b_2. This can be easily done because b̃_1 = U_y(π/4)b_1 U_y^†(π/4). We can therefore apply the unitary U_y^†(π/4) at the end of the time evolution and then perform homodyne measurement for (b_1+b_1^†)/√(2) and i(b_1-b_1^†)/√(2), which combined yield the expectation value ⟨b̃_1⟩.
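Concretely, the two quadrature averages recombine into the complex expectation value as ⟨b_1⟩ = (⟨(b_1+b_1^†)/√2⟩ - i⟨i(b_1-b_1^†)/√2⟩)/√2; the small numpy check below (a truncated Fock space and a random test state, both choices made only for this sketch) verifies the identity.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                                    # Fock cutoff (assumption of this sketch)
b = np.diag(np.sqrt(np.arange(1, d)), 1)                 # truncated annihilation operator
X = (b + b.conj().T) / np.sqrt(2.0)                      # first quadrature
Y = 1j * (b - b.conj().T) / np.sqrt(2.0)                 # second quadrature

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                               # arbitrary normalized test state
mean = lambda A: psi.conj() @ A @ psi
print(mean(b), (mean(X) - 1j * mean(Y)) / np.sqrt(2.0))  # identical complex numbers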
Let us now briefly summarize the whole procedure. We start from the state |α⟩|0⟩, apply U_y(π/4), let the system evolve for time t=rτ, while applying random e^-iθb̃_1^†b̃_1 with interval τ, and in the end apply U_y^†(π/4)=U_y(-π/4), after which we perform homodyne measurement for b_1. The quantum state right before the measurement is applied is
U_y(-π/4)∏_j=1^r(e^iθ_jb̃_1^†b̃_1e^-iHτe^-iθ_jb̃_1^†b̃_1)U_y(π/4)|α⟩|0⟩,
for randomly sampledθ_j,j=1,2,⋯,r.
Note that e^-iθ_jb̃_1^†b̃_1=e^-i(θ_j/2)(n_1+n_2)U_x(-θ_j/2), and H commutes with n_1+n_2 because the particle number is conserved. We therefore have
e^iθ_jb̃_1^†b̃_1e^-iHτe^-iθ_jb̃_1^†b̃_1 = U_x(θ_j/2)e^-iHτU_x(-θ_j/2).
Consequently we can replace all e^-iθ_jb̃_1^†b̃_1 with U_x(-θ_j/2). The quantum state we get in the end is, therefore
U_y(-π/4)∏_j=1^r(U_x(θ_j/2)e^-iHτU_x(-θ_j/2))U_y(π/4)|α⟩|0⟩.
Note that the adjacent U_x(-θ_j/2) and U_x(θ_j-1/2) can be merged into U_x(-(θ_j-θ_j-1)/2), so that we only need to apply one X rotation in each time step instead of two.
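As a small illustration of this bookkeeping, the sketch below samples the θ_j and lists the merged X-rotation angles interleaved with the r evolution periods; the function name and the ordering convention assumed for the product are choices made only for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def merged_x_rotation_angles(r):
    # One rotation before the first e^{-iH tau}, one between consecutive periods
    # (the merged U_x(-(theta_j - theta_{j-1})/2)), and one after the last period.
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=r)
    angles = [-thetas[0] / 2]
    angles += [-(thetas[j] - thetas[j - 1]) / 2 for j in range(1, r)]
    angles.append(thetas[-1] / 2)
    return thetas, angles

thetas, angles = merged_x_rotation_angles(4)
print(len(angles), angles)                               # 5 rotations interleaving 4 periods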
In the above procedure, we estimate ω̃_1, from which, through (<ref>), we can estimate Re h_12. For Im h_12, we can instead define
b̃_1 = U_x(π/4)b_1U_x^†(π/4), b̃_2 = U_x(π/4)b_2U_x^†(π/4),
and then (<ref>) will become
ω̃_1 = (ω_1+ω_2)/2 + Im h_12.
We can then change the whole procedure accordingly to estimate Im h_12, and the corresponding state before the measurement is
U_x(-π/4)∏_j=1^r(U_y(-θ_j/2)e^-iHτU_y(θ_j/2))U_x(π/4)|α⟩|0⟩.
§ USING A DIVIDE-AND-CONQUER APPROACH TO LEARN AN N-MODE SYSTEM
In this section, we consider the general case, where the Hamiltonian is of the form:
H = ∑_⟨i,j⟩ h_ijb_i^†b_j + ∑_i ω_i b_i^†b_i + ∑_i ξ_i/2 n_i(n_i-1).
We will use a divide-and-conquer approach to learn the coefficients in this Hamiltonian. Specifically, we will insert random unitaries during time evolution to decouple the system into clusters containing one or two modes that do not interact with each other and learn the coefficients in each cluster in parallel.
We assume that the bosonic modes are arranged on a graph 𝒢=(𝒱,ℰ), where 𝒱 is the set containing all vertices, each of which corresponds to a bosonic mode, and ℰ contains all edges. ∑_⟨i,j⟩ means summation over all vertices linked by an edge.
We consider decoupling the system with the help of a graph ℒ=(ℰ,ℰ_ℒ) that is the link graph of 𝒢. The set ℰ is the set of all edges in 𝒢, and ℰ_ℒ is the set of edges of ℒ, which we will now define. For any two edges e,e'∈ℰ, we have (e,e')∈ℰ_ℒ if and only if they share a vertex in 𝒱.
Next, we color the graph ℒ with the following rule: any two vertices in ℰ must be colored differently if they are at most distance 2 from each other. The number of colors needed for this coloring is at most χ=deg(ℒ)^2+1, and such a coloring can be easily found by a greedy algorithm: we can simply color a vertex by any color that its neighbors or next-neighbors have not used, and such a color is always available because there are at most χ-1 neighbors and next-neighbors. For a graph 𝒢 with degree D, deg(ℒ)≤2(D-1), and therefore χ≤4(D-1)^2+1. This coloring yields a decomposition of the edges
ℰ = ⨆_c=1^χℰ_c,
where ℰ_c is the set of edges with color c.
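A minimal Python sketch of this greedy coloring is given below; the example graph (a 2x3 grid of modes with nearest-neighbour couplings) is made up purely for illustration.

import itertools
from collections import defaultdict

def distance2_edge_coloring(edges):
    # Link graph: vertices are the edges of G; two are adjacent iff they share an endpoint.
    adj = defaultdict(set)
    for e, f in itertools.combinations(edges, 2):
        if set(e) & set(f):
            adj[e].add(f)
            adj[f].add(e)
    color = {}
    for e in edges:
        forbidden = set()                                # colors of neighbors and next-neighbors
        for f in adj[e]:
            if f in color:
                forbidden.add(color[f])
            for g in adj[f]:
                if g in color:
                    forbidden.add(color[g])
        c = 0
        while c in forbidden:                            # smallest color not yet forbidden
            c += 1
        color[e] = c
    classes = defaultdict(list)                          # the color classes E_c
    for e, c in color.items():
        classes[c].append(e)
    return dict(classes)

edges = [((0, 0), (0, 1)), ((0, 1), (0, 2)), ((1, 0), (1, 1)), ((1, 1), (1, 2)),
         ((0, 0), (1, 0)), ((0, 1), (1, 1)), ((0, 2), (1, 2))]
print(distance2_edge_coloring(edges))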
For each color c=1,2,⋯,χ, we then learn all the coefficients associated with this color. We denote by 𝒱_c all the vertices (bosonic modes) that are contained in an edge in ℰ_c. During time evolution, we apply random unitaries of the form
U = ∏_i∈𝒱∖𝒱_c e^-iθ_i b_i^†b_i, θ_i∼𝒰([0,2π]).
Here θ_i, i∈𝒱∖𝒱_c, are independent random variables. Following the derivation in (<ref>), we can see that the effective Hamiltonian is
H_effective=∏_i∈𝒱∖𝒱_c(1/2π∫_0^2π dθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_i H e^i∑_i∈𝒱∖𝒱_cθ_i n_i.
We can then examine the effect of this transformation on each term. For a term b_k^†b_l, k≠l, if k is in 𝒱∖𝒱_c but l is not, then
∏_i∈𝒱∖𝒱_c(1/2π∫_0^2π dθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_i b_k^†b_l e^i∑_i∈𝒱∖𝒱_cθ_i n_i = 1/2π∫_0^2π e^iθ_k b_k^†b_l dθ_k = 0.
The same is true if l is in 𝒱∖𝒱_c but k is not. When both k,l∈𝒱∖𝒱_c, then
∏_i∈𝒱∖𝒱_c(1/2π∫_0^2π dθ_i ) e^-i∑_i∈𝒱∖𝒱_cθ_i n_i b_k^†b_l e^i∑_i∈𝒱∖𝒱_cθ_i n_i
= 1/(2π)^2∫_0^2π dθ_k ∫_0^2π dθ_l e^i(θ_k-θ_l) b_k^†b_l = 0.
In other words, for any coupling term b^†_k b_l, the above procedure will cancel it out if either k or l is in 𝒱∖𝒱_c. All other terms are preserved because they commute with n_i for any i∈𝒱∖𝒱_c. The only possible b^†_k b_l terms left are those with k,l∈𝒱_c.
This also means that (k,l)∈ℰ_c because of the following argument: first, by definition of 𝒱_c there must exist k' and l' such that (k,k')∈ℰ_c and (l,l')∈ℰ_c. We must have (k,l)∈ℰ, as otherwise this coupling term would not exist at all. This means that unless k'=l', the two edges (k,k') and (l,l'), as vertices in ℒ, are next-neighbors, which is not allowed in our coloring. Therefore k'=l' and we have (k,l)∈ℰ_c.
Consequently, the effective Hamiltonian is
H_effective = ∑_(i,j)∈ℰ_c h_ijb_i^†b_j + ∑_i ω_i b_i^†b_i + ∑_i ξ_i/2 n_i(n_i-1).
Next, we will show that the above Hamiltonian is decoupled into clusters of sizes at most 2. We will do this by showing that any bosonic mode i interacts with at most one other bosonic mode in the above Hamiltonian. This can be proved by contradiction: if i interacts with both j and k in the above Hamiltonian, then (i,j)∈ℰ_c and (i,k)∈ℰ_c, which makes (i,j) and (i,k) neighbors as vertices in ℒ, and this is forbidden in our coloring.
With the decoupled Hamiltonian in (<ref>), we can then learn the coefficients in each one- or two-mode cluster independently and in parallel using the algorithms described in Sections <ref> and <ref>. Looping over all colors c∈{1,2,⋯,χ}, we will obtain all the coefficients in the Hamiltonian.
§ DEVIATION FROM THE EFFECTIVE DYNAMICS
In this section, we consider the error introduced by simulating the effective dynamics with the insertion of random unitaries, as mentioned in Section <ref>. Suppose𝒟is a distribution over the set of unitaries, and the initial state of the system is represented by the density matrixρ(0). The actual final state obtained after the insertion ofrrandom unitaries is
𝔼_U_j∼𝒟(∏_1≤ j≤ r^←U_j^† e^-iτ H U_j)ρ(0)(∏_1≤ j≤ r^→U_j^† e^iτ H U_j),
where eachU_jis inserted after timeτ=t/r. On the other hand, the desired final state, which facilitates the subsequent steps of the learning process, is
e^-it H_effectiveρ(0) e^it H_effective ,
whereH_effectiveis the effective Hamiltonian:
H_effective = 𝔼_U∼𝒟 U^†HU.
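Before turning to the formal bound, the statement can also be seen numerically. The sketch below (two modes, a small Fock cutoff, illustrative parameter values, and a plain Monte-Carlo average over the random phases; all of these choices are assumptions of the sketch) compares the randomized protocol above with the ideal effective evolution and reports their trace-norm distance.

import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 5                                                # Fock cutoff per mode (assumption)
a = np.diag(np.sqrt(np.arange(1, d)), 1)
I, Id = np.eye(d), np.eye(d * d)
b1, b2 = np.kron(a, I), np.kron(I, a)
n1, n2 = b1.conj().T @ b1, b2.conj().T @ b2

w1, w2, xi1, xi2, h12 = 1.0, 1.3, 0.2, 0.15, 0.3     # illustrative values, real coupling
coupling = h12 * (b1.conj().T @ b2 + b2.conj().T @ b1)
H = w1 * n1 + w2 * n2 + 0.5 * xi1 * n1 @ (n1 - Id) + 0.5 * xi2 * n2 @ (n2 - Id) + coupling
H_eff = H - coupling                                 # the random phases average the coupling away

alpha = 0.6                                          # small amplitude so the cutoff is harmless
c = np.array([alpha ** k / math.sqrt(math.factorial(k)) for k in range(d)])
c /= np.linalg.norm(c)                               # truncated (re-normalized) coherent state
psi0 = np.kron(c, c)
rho0 = np.outer(psi0, psi0)

t, r, samples = 1.0, 50, 400
tau = t / r
E = expm(-1j * tau * H)                              # one short step of the true evolution

avg = np.zeros((d * d, d * d), dtype=complex)
for _ in range(samples):
    rho = rho0.astype(complex)
    for _ in range(r):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        U = np.diag(np.exp(-1j * theta * np.diag(n1)))   # U = exp(-i theta n_1), diagonal
        V = U.conj().T @ E @ U
        rho = V @ rho @ V.conj().T
    avg += rho / samples

target = expm(-1j * t * H_eff) @ rho0 @ expm(1j * t * H_eff)
print(np.linalg.norm(avg - target, ord='nuc'))       # small; decreases as r and samples grow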
In this section, we provide an analysis of the difference between the two dynamics for a certain class of Hamiltonians and thereby complete the analysis of approximation errors investigated in Section <ref>. For the sake of the Hamiltonians studied in this paper, we consider the Hamiltonians of the following form:
H = ∑_⟨i,j⟩ h_ij b_i^†b_j + ∑_iω_i n_i + 1/2∑_⟨jklm⟩ξ_jklmb_j^†b_k^†b_lb_m,
where in the last term we denote by ⟨jklm⟩ the index quadruples such that {j,k,l,m} form a connected subgraph in the underlying graph 𝒢=(𝒱,ℰ) of bosonic modes. We begin with a lemma describing the action of these Hamiltonians on the product of coherent states.
Let
H = ∑_⟨i,j⟩ h_ij b_i^†b_j + ∑_iω_i n_i + 1/2∑_⟨jklm⟩ξ_jklmb_j^†b_k^†b_lb_m,
and
|ψ⟩ = ⊗_i∈𝒱(e^-|α_i|^2/2∑_k=0^∞α_i^k e^-iζ_i,k/√(k!)|k⟩_i),
where α_i is a complex number of magnitude O(1), and ζ_i,k∈ℝ can be any real number. Then
‖H|ψ⟩‖ = O(N max{|ξ_jklm|, |ω_i|,|h_i,j|}),
and
‖H^2|ψ⟩‖ = O(N^2(max{|ξ_jklm|, |ω_i|,|h_i,j|})^2),
where N=|𝒱|+|ℰ|.
It suffices to prove the result for H/max{|ξ_jklm|, |ω_i|,|h_i,j|}. Therefore we assume max{|ξ_jklm|, |ω_i|,|h_i,j|}=1 without loss of generality. Notice that H is the sum of O(N) terms, and each term takes the form b_p^† b_q or b_p^† b_q b_r^† b_s, where p, q, r, s may be repeated. We will prove that each term acting on |ψ⟩ yields a state whose norm is O(1). We first demonstrate this for b_p^† b_q b_r^† b_s. Simple calculation shows
b_p^† b_q b_r^† b_s|ψ⟩
= ⊗_i∉{p,q,r,s}(e^-|α_i|^2/2∑_k=0^∞α_i^k e^-iζ_i,k/√(k!)|k⟩_i)⊗⊗_j∈{p,q,r,s}(e^-|α_j|^2/2∑_k=0^∞α_j^k e^-iζ_j,k/√(k!)√(P_j(k))|k+σ_j⟩_j),
where the P_j's are polynomials whose degrees satisfy ∑_j∈{p,q,r,s}deg P_j = 4, and σ_j is an integer determined by the numbers of b_j^† and b_j in b_p^† b_q b_r^† b_s. For example, if p=q=r=1, s=2, then P_1(k)=(k+1)^3, σ_1=1, P_2(k)=k, σ_2=-1. Straight calculations can show that
‖e^-|α_j|^2/2∑_k=0^∞α_j^k e^-iζ_j,k/√(k!)√(P_j(k))|k+σ_j⟩_j‖^2
= e^-|α_j|^2∑_k=0^∞|α_j|^2k/k!P_j(k) = Q_j(|α_j|^2) = O(1),
where Q_j is a polynomial that can be determined by P_j, but we do not care about its explicit form. Therefore we have shown that
‖b_p^† b_q b_r^† b_s|ψ⟩‖=√(∏_j∈{p,q,r,s}Q_j(|α_j|^2))=O(1).
Similarly, we can show that ‖b_p^† b_q|ψ⟩‖=O(1). Therefore (<ref>) is established.
Next, we will prove (<ref>). We can fully expand H^2 into O(N^2) terms, each of which has the form b_p^† b_q b_p'^† b_q', b_p^† b_q b_r^† b_s b_p'^† b_q', b_p^† b_q b_p'^† b_q' b_r'^† b_s', or b_p^† b_q b_r^† b_s b_p'^† b_q' b_r'^† b_s'. Again, we may go through a similar process as above and conclude that each term acting on |ψ⟩ yields a state of norm O(1).
Assume that |ϕ_0⟩ = ⊗_i |α_i⟩ is a product of coherent states, and |ϕ_t⟩ is the state obtained by evolving under the effective dynamics for time t, i.e., |ϕ_t⟩ = e^-it|ϕ_0⟩, then |ϕ_t⟩ is a state of the form described in (<ref>) for the distribution 𝒟 used in previous sections.
Using density matrices, the effective dynamics with the Hamiltonian starts from the state ρ(0):=|ϕ_0⟩⟨ϕ_0| and end up in the state ρ(t):=|ϕ_t⟩⟨ϕ_t| at time t, while the actual final state obtained is given by (<ref>).
To bound its distance from the desired state ρ(t), we define the following density operators:
ρ^(ℓ)(t) = 𝔼_U_j∼𝒟(∏_1≤ j≤ℓ^←U_j^† e^-iτ H U_j)ρ(t-ℓτ)(∏_1≤ j≤ℓ^→U_j^† e^iτ H U_j).
Then ρ^(0)(t) = ρ(t) and ρ^(r)(t) is the density operator in (<ref>).
Now consider the distance between ρ^(L-1)(t) and ρ^(L)(t). Define
Q^(L) = ∏_1≤ j≤ L-1^→U_j^† e^iτ H U_j,
then by the independence of U_j, we have
‖ρ^(L)(t)-ρ^(L-1)(t)‖_*
=‖𝔼_Q^(L)[ Q^(L)(𝔼_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτ H_effectiveρ(t-Lτ)e^iτ H_effective))(Q^(L))^†]‖_*
≤𝔼_Q^(L)‖Q^(L)(𝔼_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτ H_effectiveρ(t-Lτ)e^iτ H_effective))(Q^(L))^†‖_*
= 𝔼_Q^(L)‖𝔼_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτ H_effectiveρ(t-Lτ)e^iτ H_effective)‖_*
=‖𝔼_U(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U-e^-iτ H_effectiveρ(t-Lτ)e^iτ H_effective)‖_*,
where ‖·‖_* denotes the trace norm (nuclear norm). The fourth line follows from the unitary invariance of the trace norm and the fact that Q^(L) is unitary. From the Taylor expansion, one can obtain
𝔼(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U)-ρ(t-Lτ)
= 𝔼(e^-iτ U^† HUρ(t-Lτ) e^iτ U^† HU)-ρ(t-Lτ)
= 𝔼(-iτ [U^† H U, ρ(t-Lτ)]-∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) ds)
=-iτ [𝔼(U^† H U), ρ(t-Lτ)]-𝔼(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) ds)
= -iτ [H_effective, ρ(t-Lτ)]-𝔼(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) ds).
Similarly, one has
e^-iτ H_effectiveρ(t-Lτ) e^iτ H_effective-ρ(t-Lτ)
= -iτ [H_effective, ρ(t-Lτ)]-∫_0^τ e^-is H_effective[H_effective, [H_effective, ρ(t-Lτ)]] e^is H_effective(τ-s) ds.
Combining (<ref>) and (<ref>), one obtains
‖𝔼(U^† e^-iτ H Uρ(t-Lτ)U^† e^iτ H U)-e^-iτ H_effectiveρ(t-Lτ)e^iτ H_effective‖_*
≤‖𝔼(∫_0^τ e^-is U^† HU[U^† HU, [U^† HU, ρ(t-Lτ)]] e^is U^† HU(τ-s) ds)‖_*
+‖∫_0^τ e^-is H_effective[H_effective, [H_effective, ρ(t-Lτ)]] e^is H_effective(τ-s) ds‖_*
≤τ^2(sup_U ‖[U^† HU, [U^† HU, ρ(t-Lτ)]]‖_*+‖[H_effective, [H_effective, ρ(t-Lτ)]]‖_*).
One only needs to bound ‖[U^† HU, [U^† HU, ρ(t-Lτ)]]‖_* and ‖[H_effective, [H_effective, ρ(t-Lτ)]]‖_*. By a direct calculation, one sees that
‖[H_effective, [H_effective, ρ(t-Lτ)]]‖_*
≤‖H_effective^2ρ(t-Lτ)‖_* + 2‖H_effectiveρ(t-Lτ)H_effective‖_*+‖ρ(t-Lτ)H_effective^2‖_*
=2‖H_effective^2|ϕ_t-Lτ⟩‖+2‖H_effective|ϕ_t-Lτ⟩‖^2
≤ C N^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where C=𝒪(1) is a constant, and we have used the property of the trace norm for rank-1 matrices. In the last step, we are using <Ref> with H=H_effective and |ψ⟩=|ϕ_t-Lτ⟩. Similarly, one can obtain
‖[U^† HU, [U^† HU, ρ(t-Lτ)]]‖_*
≤2‖(U^† HU)^2|ϕ_t-Lτ⟩‖+2‖U^† HU|ϕ_t-Lτ⟩‖^2
=2‖H^2U|ϕ_t-Lτ⟩‖+2‖HU|ϕ_t-Lτ⟩‖^2
≤ C N^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2.
In the last step, we are using <Ref> with H=H and |ψ⟩=U|ϕ_t-Lτ⟩. As a result, we have proved the following:
For a Hamiltonian of the form described in (<ref>) and a product of coherent states |ϕ_0⟩ = ⊗_i |α_i⟩ such that α_i are 𝒪(1) constants, we have
‖𝔼_U_j∼𝒟(∏_1≤ j≤ r^←U_j^† e^-iτ H U_j)ρ(0)(∏_1≤ j≤ r^→U_j^† e^iτ H U_j) - e^-it H_effectiveρ(0) e^it H_effective‖_*
≤ C N^2 t^2/rmax{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where ρ(0) = |ϕ_0⟩⟨ϕ_0|, H_effective = 𝔼_U∼𝒟 U^†HU, C is a 𝒪(1) constant, N=|𝒱|+|ℰ| and 𝒢=(𝒱,ℰ) is the underlying graph of bosonic modes.
The left-hand side of (<ref>) can be expressed as ‖ρ^(r)(t)-ρ^(0)(t)‖_*, where ρ^(r)(t) and ρ^(0)(t) are defined in (<ref>). Thus
‖ρ^(r)(t)-ρ^(0)(t)‖_*≤∑_L=1^r‖ρ^(L)(t)-ρ^(L-1)(t)‖_*
≤∑_L=1^r C N^2 τ^2 max{|ξ_jklm|, |ω_i|, |h_i,j|}^2 = C N^2 t^2/r max{|ξ_jklm|, |ω_i|, |h_i,j|}^2,
where we have used (<ref>), (<ref>) and (<ref>) in the second inequality.
|
http://arxiv.org/abs/2307.07422v1 | 20230708172155 | Can LLMs be Good Financial Advisors?: An Initial Study in Personal Decision Making for Optimized Outcomes | [
"Kausik Lakkaraju",
"Sai Krishna Revanth Vuruma",
"Vishal Pallagani",
"Bharath Muppasani",
"Biplav Srivastava"
] | cs.CL | [
"cs.CL"
] |
Can LLMs be Good Financial Advisors?: An Initial Study in Personal Decision Making for Optimized Outcomes
August 12, 2023
========================================================================================================
Increasingly powerful Large Language Model (LLM) based chatbots, like ChatGPT and Bard, are becoming available to users, and they have the potential to revolutionize the quality of decision-making achieved by the public. In this context, we set out to investigate how such systems perform in the personal finance domain, where financial inclusion has been an overarching stated aim of banks for decades.
We asked 13 questions representing banking products in personal finance: bank accounts, credit cards and certificates of deposit, their inter-product interactions, and decisions related to high-value purchases, payment of bank dues, and investment advice, posed in different dialects and languages (English, African American Vernacular English, and Telugu). We find that although the outputs of the chatbots are fluent and plausible, there are still critical gaps in providing accurate and reliable financial information using LLM-based chatbots.
§ INTRODUCTION
Consider a freshman that has just started making personal financial decisions. They open a bank account to save up money and get their first credit card. They are given some seed money by their family and they also start earning by working on campus.
The student is encouraged by their support system to start thinking about saving into products like Certificates of Deposit (CDs) that earn higher interest. As the student makes a series of decisions in their academic and subsequent professional life, they need to make sound financial decisions and may look for resources online to assist them. An optimal decision needs to consider how the banking products interact with each other along with the changing needs of the student.
For users like this student, increasingly powerful LLM-based chatbots are becoming available that have the potential to revolutionize the quality of decision-making for personal finance. LLMs have demonstrated tremendous potential across diverse domains <cit.>, such as natural language processing <cit.> and protein structure <cit.>, and have been claimed to show sparks of artificial general intelligence <cit.>. These models have been implemented in several applications, ranging from mental health assistants <cit.> to financial advisement <cit.>. In the finance domain, LLMs have been used to develop applications such as fraud detection, risk management, and financial forecasting <cit.>. They have been used to analyze financial data, predict stock prices, and generate automated reports. However, with the advent of recent models such as OpenAI's ChatGPT, Google's Bard, and BloombergGPT <cit.>, a comparative chatbot study is needed to evaluate their ability to be financial advisors. In this paper, we present an initial study of ChatGPT and Bard in providing personal decision-making for optimized outcomes.
It is widely known that LLMs based systems have unique limitations.
For example, they may struggle with common-sense reasoning tasks <cit.>, encounter challenges when handling symbols <cit.>, and are susceptible to hallucinations <cit.>.
With this work, we make the following contributions:
* identify a personal financial planning scenario involving a series of tasks (plans) and optimization of decisions.
* show how leading LLM-based chatbots perform in them and analyze their behavior.
* lay out challenges that future chatbots in this area should overcome to provide trusted financial recommendations.
We thus highlight the potential and limitations of current LLM-based systems - ChatGPT and Bard - in their role as financial advisors. We included all the queries posed and responses from both ChatGPT and Bard in our GitHub repository[https://github.com/ai4society/LLM-CaseStudies/tree/main/Finance] along with a few snapshots of the actual conversations.
§ PERSONAL FINANCE USE CASE
§.§ Setup: Tools and Procedure
§.§.§ Chatbots Tested
* ChatGPT: ChatGPT <cit.> is an LLM-based chatbot created by OpenAI that was trained on large amount of text data from the internet, including books and articles. ChatGPT is capable of answering questions, generating text and converse with users in a natural way. It can also learn from users and adapt to new information.
* Bard: Bard <cit.> is an LLM-based chatbot created by Google that was trained on large amount of text data and is capable of generating human-like text in response to user prompts and queries. Like ChatGPT, it is also capable of conversing with users about wide variety of topics in a natural way and adapt to new information.
§.§.§ Product Interaction Categories
Product interaction refers to interaction between different products like Credit Card (CC), Certificate of Deposit (CD) and Account Balance (AB).
Each product has different quantitative properties. For example, credit card due, limit line and billing cycle are some of the properties that would provide credit card information (not private information) of the user. Different properties pertaining to these products are:
* Purchase Amount (PA): It is the amount spent by the user on purchase of a product.
* Billing Cycle (BC): It is the billing cycle of user's credit card.
* Due Amount (DA): The amount that is due on the user's credit card for the specified billing cycle.
* Credit Line (CL): The maximum amount that user could spend using their credit card. If the amount spent exceeds this value, the credit card company could charge additional interest.
* Cashback Percentage (CP): The % of amount which will be returned to the user in the form of cashback on buying furniture using their credit card.
* Account Balance (AB): The amount of cash present in user's personal bank account.
* Annual Percentage Rate (APR): APR-based interest is charged if there is an amount due on the credit card after the due date. Some financial institutions choose to charge a late fee if the minimum due (MD) is not paid. It is calculated by the formula Daily Periodic Rate (DPR) x Billing Cycle (in days) x Average Daily Balance (ADB); a small numeric illustration of this formula is given after this list.
* Certificate of Deposit Percentage (CDP): The % of interest accumulated on the cash deposited by the user in the form of CD.
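As referenced in the APR item above, the following few Python lines illustrate that finance-charge formula with entirely hypothetical numbers; real card agreements differ in how the daily periodic rate and the average daily balance are computed.

def finance_charge(apr_percent, billing_cycle_days, average_daily_balance):
    # APR-based interest for one cycle: DPR x billing cycle (days) x average daily balance
    daily_periodic_rate = apr_percent / 100.0 / 365.0
    return daily_periodic_rate * billing_cycle_days * average_daily_balance

# Hypothetical example: 24% APR, 30-day cycle, $500 average daily balance.
print(round(finance_charge(24.0, 30, 500.0), 2))        # 9.86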
Based on different combinations of these products, we classified the queries into 4 categories. These four categories along with the queries posed under each category, the variables used in each query and the constraints the chatbot has to take into consideration to make a sound recommendation are shown in Table <ref>. In the CC category, we considered a different dialect of English called African American Vernacular English (AAVE) and Telugu, one of the well-known languages from India, to observe how the chatbots handle queries in a different language or dialect.
§.§ Findings
In this subsection, we present the findings from the interesting (and sometimes insightful) conversations we had with Bard and ChatGPT.
§.§.§ Differences Between the Chatbots
Table <ref> shows the differences that were identified between Bard and ChatGPT when queries listed out in Table <ref> were asked. We compare these models on various criteria related to their performance in answering queries. The criteria include accuracy, utilization of user information, personalized suggestions, use of visual aids, bias in recommendations, provision of multiple response drafts, learning from mistakes, and understanding of different dialects and languages.
§.§.§ Error Categories
We identified some limitations / errors in the responses generated by both the chatbots and classified them into the following categories:
* Lack of Personalized Recommendations: When the agent makes a generalized recommendation without using all the information provided by the user, we consider this as lack of personalized recommendation.
* Mathematical Errors: We consider errors like rounding errors, calculation errors, etc. as mathematical errors.
* Perceptual Errors: When the agent misinterprets information given by the user or makes assumptions on unknown data, we consider these as perceptual errors.
* Grammatical Errors: We consider typos, grammatical errors, etc. as grammatical errors (we encountered these errors only in Telugu text generated by ChatGPT).
* Lack of Visual Aids: When the agent doesn't use visual aids like tables, graphs, etc. in its response, we consider these as lack of visual aids.
Table <ref> shows the percentage of queries for which the chatbots exhibited each of these errors. We also list out the individual query identifiers. Qi denotes the query identifier as previously defined (and also shown in Table <ref>). ABi and ACi refer to the corresponding Bard and ChatGPT responses respectively. 'i' denotes the identifier (number). Figures <ref> and <ref> show the response generated by Bard and ChatGPT chatbots respectively. For this one query, Bard made use of a table (though it misinterpreted user information) and ChatGPT did not.
§ DISCUSSION AND CONCLUSION
The application of language models in the finance industry has witnessed a surge in recent times due to their ability to process vast volumes of unstructured data and extract valuable insights. This paper delves into the performance of two prominent language models, Bard and ChatGPT, within the finance domain.
We also find the following challenges in evaluating LLM-based systems for finance domains:
* C1: Changing nature of answers for the same question. How does one create reference test cases since the answers change over time?
* C2: Inability of the chatbots to do numeric reasoning
* C3: Presenting results with easy to follow graphics.
* C4: Support for languages and dialects used by customers from different population groups. We considered AAVE (African American Vernacular English) and Telugu, an Indian language spoken by nearly 100 million people worldwide.
* C5: Evaluating the responses of users from a diverse set of backgrounds. We only considered college students in this study.
C1 can be mitigated by carefully cataloging questions and system answers by identifiers that account for changing behavior over time. For C2, integration with numeric solvers like Wolfram may help <cit.> although this makes the systems non-learnable over time. For C3, different data presentation strategies need to be tried. For C4, the LLM models or the chatbots need to be enhanced. For C5, more experiments are needed with inputs carefully modeling the characteristics of the different user groups. These are just preliminary challenges and we expect them to grow as more researchers will try LLM-based systems in complex and diverse application scenarios.
While our study only comprised thirteen queries, we meticulously selected them to cover various categories of credit card finance. However, there exists ample scope for more extensive testing of these chatbots by expanding the number of queries under each category or including additional categories like student loans and stock purchases. By doing so, we can gain a better understanding of the efficacy of language models in different financial domains and improve their functionality in real-world scenarios.
|
http://arxiv.org/abs/2307.05694v1 | 20230709105511 | A Survey on Figure Classification Techniques in Scientific Documents | [
"Anurag Dhote",
"Mohammed Javed",
"David S Doermann"
] | cs.IR | [
"cs.IR",
"cs.CV",
"cs.LG"
] |
A Survey on Figure Classification Techniques in Scientific Documents
Dhote Anurag Radhesham^1, Mohammed Javed^1, David S Doermann^2
1Department of IT, Indian Institute of Information Technology, Allahabad, India
2Department of CSE, University at Buffalo, Buffalo, NY, USA
Email:{[email protected], [email protected], [email protected]}
August 12, 2023
===============================================================================================================================================================================================================================================================================================
Figures visually represent an essential piece of information and provide an effective means to communicate scientific facts. Recently there have been many efforts toward extracting data directly from figures, specifically from tables, diagrams, and plots, using different Artificial Intelligence and Machine Learning techniques. This is because extracting information from figures could lead to deeper insights into the concepts highlighted in the scientific documents. In this survey paper, we systematically categorize figures into five classes - tables, photos, diagrams, maps, and plots, and subsequently present a critical review of the existing methodologies and data sets that address the problem of figure classification. Finally, we identify the current research gaps and provide possible directions for further research on figure classification.
Figure Classification;
Deep Learning;
Scientific documents;
Figure Mining;
Document Segmentation;
§ INTRODUCTION
Classification of images finds tremendous applications in various fields such as automobile, healthcare, agriculture, surveillance, and document analysis <cit.>. In scientific documents, different graphical visualizations such as tables, photos, diagrams, maps, and plots convey specific facts that are more effective than simple text. This factual information improves comprehension. Hence, extracting underlying information represented by figures is an important task. In general, it is referred to as figure mining. Figure mining includes enhancing the figure design, outlining the data represented by figures, detecting plagiarized documents, etc. The figure mining pipeline consists of (i) figure extraction from academic documents, (ii) classification of figures, and (iii) data extraction from each figure type. This paper aims to survey figure classification techniques and their related datasets comprehensively.
To address the problem of figure classification, it is crucial to detect and extract the figures from the respective documents using document segmentation techniques, as illustrated in Fig-<ref>. Generally, a document image may be segmented into text and non-text components. The non-text components are then further processed to classify them into an appropriate category. Much research has been done on the textual processing of documents, but as far as figures are concerned, there is a lack of state-of-the-art methods that classify scientific figures into their appropriate categories. Chart image classification has recently interested many research groups <cit.>. This paper aims to highlight the work on chart image classification and also to include results that cover other figure types. The techniques used for classification can be divided into handcrafted-feature-based methods and deep learning-based methods.
The hand-crafted methods manually extract features using traditional feature extraction techniques, then classify the figures using machine learning models. On the other hand, deep learning techniques automatically learn features and classify the figures. Various approaches employed in these two categories are discussed in detail in the upcoming sections. This follows a discussion on several data sets reported in the related literature.
The rest of the paper is organized as follows. Section 2 provides information on the existing literature on the figure classification problem, and a summary of significant contributions is shown in Table<ref>. Section 3 includes a discussion of datasets used in recent works, and details of a few publicly available datasets are summarised in Table-<ref>. Section 4 provides pointers for future research work and many interesting problems that still need to be addressed in figure classification.
§ OVERVIEW OF FIGURE CLASSIFICATION PROBLEM
Figures are visualizations used in scientific literature to convey information and enhance comprehension. Figures often represent data that would otherwise be difficult to process if conveyed by the text. Figures are commonly categorized into well-known classes, such as tables, plots, diagrams, photos, equations, geometric shapes, maps, etc. Classes considered under the classification of figures can vary widely depending on the research field<cit.>. Giannakopoulos et al. <cit.> identify charts, diagrams, geometric shapes, maps, and photographs as the classes for the figure classification problem. Lee et al. <cit.> also considered the table a separate figure class in addition to plots, diagrams, photos, and equations. Table–<ref> summarizes the different figure types present in the existing literature.
It can be observed from the table that the figure categories like tables, plots, diagrams, and photos are popular figure types as compared to equations, geometric shapes, and maps. Considering the previous taxonomies, this paper's figures are divided into Tables, Photos, Diagrams, Plots, and maps. These five categories cover all the existing categories explored so far.
§.§ Table
A table is a structure with cells containing text and numeric data. Tables are very efficient at summarizing the textual information between methods that address similar problems. Tables in literature are used for tasks such as comparing existing methods, summarizing the data sets, highlighting observations, etc. Tables are hence recognized as an essential figure type in literature. Table detection and recognition problems have been extensively studied in previous years, and Hashmi et al.<cit.> summarize the existing work in the review. But classifying tables from other figure types remains open to research problems. Only some studies that have included tables while classifying figures use traditional and deep learning approaches. Lee et al.<cit.> use bags of visual words to classify tables among other figures such as Photos, Diagrams, Plots, etc.
Similarly, Jobin et al.<cit.>, Morris et al.<cit.>, Siegel et al.<cit.> use deep learning techniques to classify tables among other types of figures. Tables are rarely further divided into subcategories as the information conveyed through different table structures does not add to any further comprehension. So there are no subcategories under the class table.
§.§ Photo
A photo is generated when light falls on a photosensitive surface like a photographic sensor. Natural and medical images (diagnostic and radiological photos) are considered under the class photos. Depending on the scientific field, the presence of photos varies drastically. Photos are used in literature to provide deep insights on a specific topic, which are difficult to provide using text or other figure types. Jobin et al. <cit.> identified natural and medical images as figure categories in the DocFigure data set. They used a combination of FC-CNN and FV-CNN to classify these figure types. Medical images, commonly used in medical journals, papers, and articles, are further sub-categorized into diagnostic and nondiagnostic images in ImageCLEF2013 and 2016 datasets. Lagopoulos et al.<cit.>, Almakky et al.<cit.>, and Andrearczyk and Muller<cit.> consider the ImageCLEF2016 data sets to perform the figure classification task.
§.§ Diagram
A diagram represents the relationship between various parts of a concept. Figures like flowcharts, Gantt charts, schematics, conceptual diagrams, and tree diagrams are considered under the class diagrams. Diagrams improve perception by visualizing the structure and flow of a concept. Therefore, they are ubiquitous the scientific literature. Classification of diagrams into their subcategories has yet to be addressed in the literature. However, the existing literature has discussed the problem of the classification of diagrams among other figure types. Jobin et al.<cit.> considered flow charts and tree diagrams as figure types in the classification of figures. Lee et al.<cit.> identify diagrams as a crucial figure type and address its classification among other figure types. The bag-of-visual-words-based method is used to classify diagrams from different figure types.
§.§ Map
A map is a symbolic representation of the characteristics of a place or distribution. The map includes subcategories such as Geographical maps, Scientific maps, TreeMaps, and other geographical representations. Maps are used to describe various features localized in a particular area. Using scientific maps could lead to new insights into existing communities, concepts, and demographics based on map type. Hence it is essential to include maps as a figure type. Many researchers do not consider maps when addressing figure classification tasks. Giannakopoulos et al.<cit.>, Jobin et al.<cit.>, Morris et al.<cit.> include several types of maps in the dataset. Jobin et al. have incorporated Treemaps and Geographical maps into the DocFigure dataset. At the same time, Morris et al. include only geographical maps. As far as the author knows, scientific maps are not included in the existing literature.
§.§ Plot
The plot is a visual technique representing the relationships between two or more variables. Plots are widely used in the scientific literature to convey results with more clarity. There are various subcategories of plots, such as scatter, bar, pie, line, area, etc. Plots have strong representative power and simple rules and have been used in multiple research fields; hence they are considered significant figure types. As plots can be divided into various subcategories, which are also widely used in scientific literature, it has been addressed in existing works more than the other figure types. The following subsections have discussed a few traditional and deep learning approaches for addressing chart image classification.
§ RELATED WORK
The approaches reported in the existing literature can be divided into traditional and deep learning categories. The figure classification problem has been addressed more in the biomedical field than in other areas. This could be because a state-of-the-art data set, ImageCLEF<cit.>, was designed for automated figure analysis in the biomedical literature. A detailed discussion regarding the various approaches used for figure classification is provided in the following sub-sections. In addition, chart classification techniques are summarized in detail.
§.§.§ Traditional Approaches
Traditional approaches rely on feature extraction methods used in computer vision. Features are manually extracted from the figures and then represented in mathematical form for further processing. These mathematical representations act as input to the classifiers. Following the traditional method-based approach, Savva et al. <cit.> present a system that automatically remodels visualizations to increase visual comprehension. The authors use low-level image features for classification and further improve it using text-level features. The performance is tested by training a multiclass SVM classifier on a corpus containing 2601 chart images labeled with ten categories. Also following the manual extraction path, Gao et al. <cit.> propose VIEW, a system that automatically extracts information from raster-format charts. The authors separate the textual and graphical components and classify the given chart image based on the graphic elements extracted from the visual components using SVM.
The evaluation is limited to three chart categories (bar charts, pie charts, and line graphs), with 100 images for each category collected from various real-world digital resources. Instead of taking an image as input, Karthikeyani and Nagarajan<cit.> present a system to recognize chart images from PDF documents using eleven texture features that are part of the Gray Level Co-Occurrence Matrix. A chart image is located in the PDF Document database, and the features are extracted and fed to the learning model. SVM, KNN, and MLP are the classifiers used for getting the classification results. Cheng et al.<cit.> employ a multimodal approach that uses text and image features. These features are provided as input to an MLP, and the output is characterized as fuzzy sets to get the final result. The corpus contains 1707 figures with three categories, and a 96.1% classification result is reported. ReVision pioneered chart image classification and served as the state-of-the-art baseline for subsequent methods.
§.§.§ Deep Learning Approaches
Liu et al.<cit.> used a combination of Convolutional Neural Networks(CNN) and Deep Belief Networks (DBN) to capture high-level information present in deep hidden layers; fully Connected Layers of Deep CNN are used to extract deep hidden features. DBN is then used to predict the image class on the mentioned deep hidden features. Authors use the transfer learning concept and then perform fine-tuning to prevent overfitting. The data set included more than 5,000 images of charts in the categories of pie charts, scatter charts, line charts, bar charts, and flow charts. Deep features are useful over primitive features to provide better stability and scalability to the proposed framework.
Given the results of CNN in the classification of natural images, Siegel et al.<cit.> use two CNN-based architectures for figure classification. They evaluate AlexNet and ResNet-50, which are pre-trained on the ImageNet data set and then fine-tuned for figure classification. This transfer learning approach would be prevalent in subsequent works addressing this problem. The proposed frameworks outperformed the state-of-the-art model, ReVision, by a significant margin. ResNet-50 achieved the best classification accuracy of 86% performed on a dataset containing over 60000 images spread across seven categories.
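The transfer-learning recipe used by several of the surveyed works can be sketched in a few lines of PyTorch; the snippet below fine-tunes only the classification head of an ImageNet-pretrained ResNet-50 and is a generic illustration rather than a reproduction of any particular paper's setup (the number of classes and the hyperparameters are placeholders).

import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

num_classes = 7                                           # e.g., seven figure/chart categories (placeholder)
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)  # ImageNet pre-training
for p in model.parameters():                              # freeze the convolutional backbone ...
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)   # ... and re-learn only the final layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
# train_step is looped over a DataLoader of (figure image, class index) batches.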
Amara et al.<cit.> proposed a CNN-based LeNet model to classify their corpus of 3377 images into 11 categories. The model comprises eight layers: an output layer, one fully connected layer, five hidden layers, and an input layer. The fully connected layer is used as a classifier, while the hidden layers are convolution and pooling layers designed to extract features automatically. A fully connected layer employs softmax activation to classify images into predefined classes. For evaluation of the model's performance, an 80-20 split is performed on the data set for training and assessment. The proposed model performs better than the LeNet and pretrained LeNet architectures with an accuracy of 89.5%.
Jung et al. <cit.> present a classification method using the Caffe deep learning framework and evaluate its efficacy by comparing it with ReVision (a state-of-the-art chart-type classification system). The authors use GoogLeNet for classification and compare its results with shallower networks like LeNet-1 and AlexNet. Fivefold cross-validation is used for calculating the accuracy of the image corpus with 737 - 901 images for each chart type. The authors conclude that ChartSense provides higher classification accuracy than ReVision for all chart types.
Almakky et al.<cit.> developed a stack-auto encoder model for figure classification. They work with the ImageCLEF 2016<cit.> data set for biomedical subfigures having 30 classes and 10942 images. The data imbalance related to biomedical images has led the authors to use the proposed model. Five autoencoders were trained separately to extract the features in an unsupervised manner. This model is further fine-tuned to retain cohesion using the same binary cross-entropy criterion used to train SDAE layers. An overall accuracy of 64.3% was achieved using the proposed method. Poor overall accuracy compared to other works under the ImageCLEF challenge is attributed to low training samples and the nature of the data set.
With studies adapting the deep learning approach for chart image classification, a comparative study of traditional vs. CNN architectures was required. Chagas et al.<cit.> provide a comparative analysis of conventional vs. CNN techniques. The authors evaluate CNN architectures (VGG19, Resnet-50, and Inception-V3) for chart image classification for ten classes of charts. The performance is compared with conventional machine learning approaches such as classifiers Naive Bayes; HOG features combined with KNN, Support Vector Machine, and Random Forest. Pre-trained CNN models with fine-tuned last convolutional layers were used. The authors concluded that the CNN models surpass traditional methods with an accuracy of 77.76%(Resnet-50) and 76.77%(Inception-V3) compared to 45.03%(HOG+SVM).
Limitation in the figure data set was a significant problem in chart mining as both size and categories limited existing datasets. So, Jobin et al.<cit.> presented DocFigure, a figure classification data set with 33,000 figures for 28 different categories. To classify figures, the author proposes techniques that utilize the deep feature, deep texture feature, and a combination of both. Among these baseline classification techniques, the authors observed that combining deep feature and deep texture feature classifies images more efficiently than individual feature techniques. The average classification accuracy improved by 3.94% and 2.10% by concatenating FC-CNN and FV-CNN over individual use of FC-CNN and FV-CNN, respectively. The overall accuracy of the combined feature methods turned out to be 92.90%.
Due to the need for benchmarks in the chart mining process, Davila et al.<cit.> summarized the works of different participants in the first edition of the competition on Harvesting Raw Tables from Infographics, which provided data and tools to the chart recognition community. Two data sets were provided for the classification task. One was a synthetically generated AdobeSynth dataset, and the other, the UB-PMC data set, was gathered from the PubMedCentral open-access library. The highest accuracy achieved for the synthetic data set was 99.81%, whereas for the PMC data set it was 88.29%. In the second edition of the competition, as the PMC set was improved and included in the training phase, the accuracy of models over the PMC set improved significantly to 92.8%.
Luo et al. proposed a unified method to handle various chart styles.<cit.> where they prove that generalization ability can be obtained in deep learning frameworks with rule-based methods. The experiments were carried out on three datasets with more than 300,000 images with three categories of graphs. In addition to the framework, an evaluation metric for bar, line, and pie charts is also introduced. The authors concluded that the proposed framework performs better than traditional, rule-based deep learning methods. Amara et al.<cit.> propose a deep learning-based framework that automates the feature extraction step, an improved LeNet convolutional neural network architecture version. Over 90,000 images of charts from 11 different categories were chosen for the experiments, and the proposed framework performs significantly better than model-based approaches.
§ DATASETS
There need to be more datasets that contain all the figure types discussed before. DocFigure<cit.> is one data set that includes tables, flowcharts, and other plots in a combined data set of 33,000 images. Morris et al.<cit.> propose SlideImages which includes 9 different classes with 3,629 images of various figures. Given the popularity of table recognition problems, data sets dedicated to images of tables have been developed over the past decade. Current works employ augmentation methods to cope with the problem of a small data set<cit.>.
There has been a significant improvement in data set size for chart image classification. The ReVision<cit.> dataset, which later studies used for comparison, had only 2,601 images. The data sets proposed in recent years have more than 20,000 images. However, the data sets used for classification purposes mainly contain synthetic images. All data sets include the actual chart image in JPG, PNG, or JPEG format and the corresponding annotations in JSON and XML format. These studies ignore 3D charts, hand sketches, and composite figures. There is also a shortage of authentic figure images extracted from real documents, which do not follow the fixed constraints prevalent in the training samples of existing data sets. Table-<ref> below shows the types of figures and their corresponding sample sizes. The data sets mentioned in the table are publicly available and were considered in the works of literature mentioned above.
§ FUTURE DIRECTIONS
Although there has been a significant increase in published articles on this classification problem, severe problems still need to be addressed.
§.§ Lack of Benchmark Data set
The chart image classification problem has been extensively addressed in previous work. However, the high-level classification of charts from other types of figures needs a more state-of-the-art approach. ImageCLEF dataset includes a variety of figure types but is restricted to images in the medical domain. In addition to this, DocFigure and Slideimages have several different figure types. Still, there is a lack of state-of-the-art data sets to address the figure classification problem. Hence, there is a need for a dataset that includes a significant number of images and figure categories that would cover as many different figure types as possible.
§.§ Lack of Robust Model
Recent work makes some hard assumptions while addressing this problem. Most existing data sets contain a small number of real-figure images extracted from documents. This leads non-robust systems to fail when image samples contain intra-class dissimilarity or inter-class similarity. Including authentic figure images in the training phase could improve model performance.
§.§ Inclusion of Noise
Most of the work in the existing literature ignores the effect of noise. The presence of different types of noise, such as background grids, low image quality, composite charts, and the presence of multiple components along with figures, leads to poor performance for models that perform exceptionally on noiseless data<cit.>.
So, there is a need for a robust deep-learning model to cover all the shortcomings mentioned above.
§ CONCLUSION
Figure classification is challenging due to the variety of figures present, the similarity between different figure types, and the noise in the figure images. Techniques used for figure classification have evolved remarkably. Earlier methods focused on manual feature extraction and providing the feature vectors to the different classifiers. Recent approaches, however, use more specific features corresponding to specific figure types more efficiently using deep learning models. Though the performance of these techniques is good, they are not robust enough to handle noisy and real figure image data. In this survey, various methods used for figure classification were discussed, along with the publicly available data sets. Also, some pointers are provided for the shortcomings in the current works.
|
http://arxiv.org/abs/2307.06262v1 | 20230712160256 | An Architecture for Control Plane Slicing in Beyond 5G Networks | [
"Rashmi Yadav",
"Rashmi Kamran",
"Pranav Jha",
"Abhay Karandikar"
] | cs.NI | [
"cs.NI"
] |
An Architecture for Control Plane Slicing in Beyond 5G Networks
Rashmi Yadav
Department of Electrical Engineering
Indian Institute of Technology Kanpur,
India
[email protected]
Rashmi Kamran
Department of Electrical Engineering,
Indian Institute of Technology Bombay,
India
[email protected]
Pranav Jha
Department of Electrical Engineering,
Indian Institute of Technology Bombay,
India
[email protected]
Abhay Karandikar
Department of Electrical Engineering,
Indian Institute of Technology Bombay, India
[email protected]
Director, Indian Institute Technology Kanpur, India
[email protected]
August 12, 2023
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
To accommodate various use cases with differing characteristics, the Fifth Generation (5G) mobile communications system intends to utilize network slicing. Network slicing enables the creation of multiple logical networks over a shared physical network infrastructure. While problems such as resource allocation for multiple slices in mobile networks have been explored in considerable detail in the existing literature, the suitability of the existing mobile network architecture to support network slicing has not been analysed adequately. We think the existing 5G System (5GS) architecture suffers from certain limitations, such as a lack of slice isolation in its control plane. This work focuses on the future evolution of the existing 5GS architecture from a slicing perspective, especially that of its control plane, addressing some of the limitations of the existing 5GS architecture. We propose a new network architecture which enables efficient slicing in beyond 5G networks. The proposed architecture results in enhanced modularity and scalability of the control plane in sliced mobile networks. In addition, it also brings slice isolation to the control plane, which is not feasible in the existing 5G system. We also present a performance evaluation that confirms the improved performance and scalability of the proposed system vis-à-vis the existing 5G system.
Software-defined networking, Mobile networks, Service-driven architecture.
§ INTRODUCTION
The emergence of the Fifth Generation (5G) mobile network enables a large variety of use cases and services. The Third Generation Partnership Project (3GPP) defined the 5G System (5GS) and categorizes its prominent use cases as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communication (URLLC), Massive Machine Type Communication (mMTC), and Vehicle to Everything (V2X) <cit.>, <cit.>. Each of these use case categories is expected to support a different set of requirements. For example, eMBB use cases are expected to support very high data rates and low to high-speed mobility applications, while URLLC applications typically require very low latency and low to moderate data rates. Similarly, the broad characteristic of mMTC applications is to have low data rates along with very high connection density, while V2X applications require low latency support in high-speed mobility scenarios. Networks beyond the 5G era may need to support newer use cases such as holographic-type communications, tactile internet for remote operations, digital twin etc. Considering their diverse needs, each use case deserves a dedicated network infrastructure to efficiently serve the users. However, providing dedicated infrastructure to each of these use case categories may lead to an increase in Capital Expenditures and Operational Expenditures. Hence, the concept of network slicing is adopted in 3GPP 5GS to support the use-case category-specific requirements.
3GPP defines network slicing as “a paradigm where logical networks/partitions are created, with appropriate isolation, resources and optimized topology to serve a purpose or service category (e.g., use case) or customers (a logical system created on-demand)” <cit.>. A network slice in 3GPP 5GS spans both horizontally as well as vertically, i.e., both Radio Access Network (RAN) and Core Network (CN) and also the control and the user (data) plane functions. In addition to the requirement to support multiple network slices with isolation between them, existing 5GS needs to handle other slice-specific requirements too, as has been discussed in section 16.3 of <cit.>. We observe that one of these requirements, “Support for UE associating with multiple network slices simultaneously” <cit.>, has a significant bearing on the control plane architecture of the 3GPP 5GS. The requirement mandates that “in case a UE is associated with multiple slices simultaneously, only one signalling connection is maintained”. It implies that the control plane functions especially that terminate UE signalling, e.g., a gNB-Centralized Unit-Control Plane (gNB CU-CP) function in RAN terminating Radio Resource Control (RRC) signalling, or an Access and Mobility Management Function (AMF) in CN terminating Non-Access Stratum (NAS) signalling, may have to support more than one slice concurrently. Therefore, achieving “isolation of slices” and having “slice-specific NFs” in the control plane becomes particularly difficult. It should be noted that other 5G standards such as the one developed by O-RAN Alliance <cit.> also do not provide any guidance/resolution to this problem.
Network slicing in 5G networks has been an active field of research and here we provide a survey of the research work on this topic, especially those pertaining to isolation of slices. The work presented in <cit.> highlights the challenges of slice isolation in the user (data) plane but there is no discussion on control plane slice isolation there. Authors in <cit.> investigate the challenges related to RAN slice design and implementation but do not discuss slice isolation. The authors in <cit.> guarantee the functional and performance isolation of slices while allowing the efficient use of resources in the RAN data plane but do not discuss isolation viz-a-viz control plane. A RAN slicing architecture with multiple sets of function splits and placements, which provide isolation among slices, is proposed in <cit.>. However, it neither discusses control plane slice isolation nor does it discuss core network slicing. Another work <cit.> proposes a flexible RAN architecture with a Medium Access Control (MAC) scheduler to abstract and share physical resources among slices. In <cit.>, SDN enabled resource allocation framework is proposed and in <cit.>, a network slicing framework for end-to-end Quality of Service (QoS) with a dynamic radio resource slicing scheme is proposed. In <cit.>, an architecture for the cloud-network slicing concept and realization of the slice-as-a-service paradigm is presented. It is designed to consider modularity and multi-domain dynamic operation as key attributes.
As can be discerned from the literature survey presented above, most existing works focus only on the slicing aspects of the user (data) plane, and to the best of our knowledge, there is no prior work on control plane slice isolation. Moreover, an architectural mechanism to enable slice isolation in the control plane of the existing 5GS has been discussed neither in the standards nor in the research literature.
Therefore, in order to achieve slice isolation in the mobile network control plane, we propose a new mobile network architecture in this paper. The proposed architecture improves the slice-specific design of the mobile network control plane and facilitates slice isolation therein. It is an extension of our earlier 5G-Serv architecture <cit.>. 5G-Serv did not explore slicing aspects, which are addressed in this extension. In addition, we have carried out a detailed performance evaluation of the control plane of the proposed architecture in a sliced environment and compared it with the existing 5GS architecture. The performance evaluation focuses on the slice-wise session establishment rate, resource utilization, scalability and modularity of the control plane. We demonstrate that the proposed architecture achieves improved control plane performance in a sliced environment as compared to the existing 5GS architecture. An additional benefit of the proposed architecture is its simplified end-user signalling vis-à-vis the 3GPP 5GS.
The rest of the paper is organized as follows: Section <ref> discusses the architectural details of the proposed slice-specific control plane architecture. The procedures involved in the proposed extension are detailed in Section <ref>. Section <ref> provides the system model. The performance evaluation is covered in Section <ref>, while the conclusion is provided in Section <ref>.
§ PROPOSED ARCHITECTURE FOR CONTROL PLANE SLICING
In the existing 5GS, one of the issues behind the problem of slice isolation in the control plane is the termination (placement) of UE signalling functionality within the control plane functions, e.g., in the gNB-CU-CP or the AMF. Hence, separating the UE signalling handling functionality from the control plane may help in solving the problem of slice isolation in the control plane. The idea of having dedicated network functions for UE signalling handling, separate from the existing control plane functions, was proposed in our earlier work 5G-Serv (details available in <cit.>). However, it was not analysed in the context of a sliced network, which is done in this work.
The architecture of the proposed mobile network is divided into slice-specific control plane and data plane network functions in RAN and CN as shown in Fig. <ref>.
In our proposal, each plane has slice-specific network functions in the RAN and the core network. In addition, a few functions, such as the UE signalling service function (comprising RRC and NAS signalling), can be either slice-specific or common to a set of slices. In this work, we consider two network slices for simplicity; however, the proposed architecture can easily accommodate more than two slices. The various components of the proposed architecture are explained in the following sections:
§.§ Control Plane
The control plane functions in the RAN and CN of the proposed architecture are named the RAN controller and the CN controller, respectively. The RAN controller is responsible for resource allocation and data (user) plane control functionality in the RAN, whereas the CN controller is responsible for the same functionality in the CN.
In the existing 5GS, the gNB-CU-CP, the de facto RAN control plane function, broadly contains the following functionalities[It may have some additional functionality (e.g. support for the Xn interface), but those are not important for the discussion here.]: gNB-DU control, gNB-CU-UP control, the RRC protocol, Radio Resource Management (RRM) and Next Generation Application Protocol (NGAP) functionalities. We propose to change the placement of some of these functionalities to simplify the RAN control plane function for end-to-end slicing. The above-mentioned functionality can broadly be divided into two classes: (i) UE-specific control/signalling functionality, e.g., UE-specific RRC protocol functionality, and (ii) RAN user plane control functionality, e.g., gNB-DU/gNB-CU-UP control functionality. The UE-specific signalling functionality in the gNB-CU-CP, responsible for terminating RRC protocol signalling with UEs, is moved out of the gNB-CU-CP and relocated to a new UE signalling service function in the network. Similarly, UE-specific NGAP message handling, which carries NAS messages between the Access and Mobility Management Function (AMF) and the gNB-CU-CP in the existing 5GS, is also removed from the gNB-CU-CP. After the relocation of the UE-specific signalling handling functionality from the gNB-CU-CP to the UE signalling service function, only the RAN user plane control functionality remains there, which simplifies the overall gNB-CU-CP design. This simplified gNB-CU-CP is rechristened as the RAN controller in the proposed architecture.
Similarly, UE-specific signalling functionality, e.g., NAS signalling handling, UE authentication, etc., is moved out of the CN control plane functions such as the AMF, the Session Management Function (SMF) and the Authentication Server Function (AUSF), and is again placed in the UE signalling service function. The remaining user plane control functionality in the CN control plane is rechristened as the CN controller, which is considerably simpler than the conventional CN control plane of the existing 5GS. The modified RAN and CN controllers can communicate through an inter-controller interface similar to the existing NGAP interface.
A key point to be noted here is that the modified RAN and CN controllers (in the proposed architecture) do not contain the UE signalling functionality and no longer terminate UE signalling. This architectural change allows the controllers to be slice-specific, i.e., every network slice can have its own RAN and CN controllers, removing the constraint of the existing 5GS whereby a control plane function must necessarily support more than one slice.
§.§ Data (User) Plane
The user plane is responsible for the transfer of data through the mobile network. The user plane functions in the proposed architecture are unchanged from those of the existing 5GS. The gNB-Centralized Unit-User Plane (gNB-CU-UP) comprises the Service Data Adaptation Protocol (SDAP), General Packet Radio Service (GPRS) Tunneling Protocol (GTP), and Packet Data Convergence Protocol (PDCP) layers, and the gNB-Distributed Unit (gNB-DU) has the Radio Link Control (RLC), MAC, and Physical (PHY) layers. The gNB-DU and gNB-CU-UP together are termed the RAN-Data Plane (RAN-DP) or gNB-Data Plane (gNB-DP). The RAN-DP (gNB-DP) in the RAN and the CN-DP (User Plane Function, UPF) in the CN may be slice-specific, i.e., each logical network (slice) has its own RAN-DP and CN-DP (UPF) functions.
§.§ UE Signalling Service Function
The UE signalling service function exchanges signalling messages, such as RRC/NAS messages, with UEs. This is a new function defined as part of the proposed architecture. CN control plane functionality of the existing 5GS, such as NAS signalling termination in the AMF or UE authentication functionality in the AUSF, is moved from the CN control plane functions to the UE signalling service function in the proposed architecture, as is the RRC signalling handling functionality from the gNB-CU-CP. These UE signalling service functions can be either slice-specific or common to a set of slices. It is possible to have more than one UE signalling service function in the network for reasons such as load balancing and distribution of functionality across them.
§.§ Interfaces
Fig. <ref> shows the new and modified interfaces in the proposed architecture. The gNB-CU-CP has been segregated into two different entities: the RAN controller and the UE signalling service function. Conventionally, F1-C is the interface between the gNB-DU and the gNB-CU-CP, and it carries UE-specific messages. In the proposed architecture, a modified F1-C (F1-C') interface exists between the RAN-DP and the RAN controller, which no longer carries UE-specific control messages and information elements. In addition, a new interface, F1”, is proposed between the RAN-DP and the UE signalling service function, which now carries the UE-specific RRC/NAS messages. An important consequence of creating UE signalling service functions separate from the control plane functions is that UE-specific signalling messages can be treated as another form of data passing through the user plane. Hence the proposed F1” interface can be similar to the F1-U interface of the existing 5GS. A UE can thus exchange signalling messages with the UE signalling service function via the RAN-DP over the F1” interface. A controller–service function interface exists between the RAN controller and the UE signalling service function, and the inter-controller interface is a new interface which can be based on the existing NGAP interface.
The proposed architecture is validated and elucidated further through an example of Protocol Data Unit (PDU) session establishment call flow in the next section.
§ PDU SESSION ESTABLISHMENT FOR THE PROPOSED ARCHITECTURE
In this section, we detail the call flow for PDU session establishment for a UE with two network slices in the proposed architecture (as shown in Fig. <ref>) to illustrate its operation. We also compare it with the PDU session establishment call flow of the existing 5GS in the same scenario of two slices (referred from Section 4.3.2.2 of <cit.>).
First, the UE sends the PDU session establishment request to access a network slice (say slice 1). Based on the received request, the UE signalling service function (common to both slices in our case) forwards the request to the respective CN controller. The CN controller then selects the corresponding CN-DP (UPF) for session establishment, and the N4 session is established at the CN-DP (UPF). In Fig. <ref>, the call flow is shown for a UE connecting to two network slices, in which the UE first connects to slice 1 and then to slice 2. Details of the message sequence in the call flow are as follows:
* UE sends a PDU session establishment request as a NAS message to the UE signalling service function.
* Based on the received request for a particular slice, the UE signalling service function selects the slice-specific controller (say CN controller-1 for the first slice) in the core network and sends a message to create a PDU session context.
* Accordingly, CN controller-1, which is specific to slice 1, configures CN-DP (UPF)-1.
* The CN controller-1 informs the RAN controller-1 (which is also slice-specific) about the PDU session setup on the inter-controller interface.
* Subsequently, RAN controller-1 establishes the PDU session on RAN-DP-1 by sending a Data Radio Bearer (DRB) configuration message to RAN-DP-1. It also notifies the UE signalling service function of the DRB and PDU session establishment (after the PDU session is established).
* The UE signalling service function sends the RRC reconfiguration message to the UE.
* RAN controller-1 sends a PDU session context update to CN controller-1.
* The PDU session is now established for the UE to access slice 1. The same message sequence is followed for the UE to access the second slice, as shown in the second part of Fig. <ref>.
Comparing the message sequences for accessing the network slices in the existing 5GS <cit.> and in the proposed system shows that the proposed architecture reduces the total number of messages. For instance, in the existing 5GS procedure, request/response messages are exchanged between the AMF and the SMF for creating and updating a PDU session context. In the proposed architecture, in contrast, responses from the CN controller to the UE signalling service function (for the PDU session context creation) and to the RAN controller (for the PDU session context update) are not required. The N1N2 messages, which are communicated between the AMF and the SMF in the existing 5GS, are removed completely from the proposed call flow, as the RAN controller and the CN controller (SMF') can communicate directly through the inter-controller interface. Overall, the message sequence for PDU session establishment (for two slices) in the proposed scheme is simplified compared to the existing 5GS, which demonstrates the enhanced modularity of the procedures of the proposed system.
§ SYSTEM MODEL
In this section, we describe the system model of the proposed architecture by considering the example of the PDU session establishment call flow. We used Performance Evaluation Process Algebra (PEPA) <cit.>, a formal high-level language for modelling distributed systems and their evaluation. The model of the proposed call flow (Fig. <ref>) is provided in Table <ref>. The various Network Functions (NFs), e.g. the UE, RAN-DP-1 (DP-1), the UE signalling service function (USSF), RAN controller-1 (RANC-1), CN controller-1 (CNC-1), and CN-DP (UPF)-1 of the proposed architecture are modelled as PEPA modules. The states of an NF are denoted by the corresponding NF name and a number (NF_1) (refer to Table <ref>); e.g., Ranc_1 indicates the first state of the RAN controller. Further, the action types are denoted in lowercase, and subscripts are added to specify the detail of the defined actions (actiontype_detail). For example, the request and reply for any service, e.g. PDU session create context, can be specified as req_sc1 and rep_sc1, respectively. Each action type is associated with a specific rate value, r. The rates (number of actions performed per unit time) model the expected duration of a specific type of action in the PEPA component and are taken as reference from <cit.>, <cit.> and <cit.>.
Let us consider an NF, for example, CN-DP (UPF)-1, to understand the modelling of the system NF. Various messages (actions) are associated with this NF (CN-DP (UPF)-1) during the session establishment. It has two states, i.e., Upf_1 and Upf_2. The first state, Upf_1, describes the request (req_n4est1) received from CN controller-1 to establish the N4 session. The second state, Upf_2, is for accessing the processor (get_upfp1) to process the received request and send the response (rep_n4est1) to CN controller-1 for N4 session establishment.
Each NF requires processing capability to process a request. Therefore, each NF is assigned a corresponding processor, as defined in <cit.>. Processors (such as the UE processor (UEP), DP-1 processor (DPP), RANC-1 processor (RANCP), USSF processor (USSFP), CNC-1 processor (CNCP) and CN-DP (UPF)-1 processor (UPFP)) are defined using a two-state model for a single processing NF. For instance, the CN-DP (UPF)-1 processor is defined in two states: the first state, Upfp_1, grants access to the processor (get_upfp1), and the second state performs the actions associated with the processor (rep_n4est1). The processors corresponding to the other NFs are defined similarly.
The system equation describes the overall interaction between the NFs. These interactions are defined as sets of shared actions (for example, S = <action_1, action_2>) over which the network functions cooperate (say, NF_1[N] ⋈_S NF_2[N]) in the system equation. For example, Ussf_1[N] ⋈_S_2 Ranc_1[N] signifies the interaction between the USSF and RANC NFs, where S_2 consists of the <drb_1, notify_1> actions shared between these two NFs. Likewise, the interactions between the various other NFs are modelled and shown in Table <ref>. In the system equation, n is the number of UEs. For the proposed architecture, N_nf denotes the number of network functions of a particular category; for example, N_dp1, N_ranc1, N_ussf, N_cnc1, N_upf1 denote the number of RAN-DP-1, RAN controller-1, UE signalling service function, CN controller-1 and CN-DP (UPF)-1 NFs, respectively. Note that each processor can handle N_t concurrent threads, and the number of processors for each network function is denoted by N_nfp. Thus, N = N_nf·N_nfp·N_t (appearing in the system model equation) represents the total number of threads for an NF of a particular category. Moreover, N_p = N_nf·N_nfp is the total number of processors allocated to a particular NF type. The table presents only the system model for accessing one slice, as the modelling for the other slice is identical. Similar modelling is done for the existing 5GS procedures in a sliced environment; the simulations are performed for both the existing 5GS and the proposed architecture considering two slices.
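To make the component-plus-processor structure above more concrete, the following minimal Python sketch walks through the two-state CN-DP (UPF)-1 component and its processor as a sequence of exponentially distributed actions. It is not PEPA itself: it ignores cooperation with the other NFs and thread contention, and the rate values are placeholders rather than the measured rates cited in the paper.

```python
import random

# Placeholder rates (actions per unit time); the paper takes its rate values
# from the cited measurement studies, not from these numbers.
RATES = {
    "req_n4est1": 50.0,   # Upf_1: N4 establishment request arrives from CN controller-1
    "get_upfp1": 200.0,   # Upf_2: acquire the CN-DP (UPF)-1 processor (Upfp_1)
    "rep_n4est1": 100.0,  # Upf_2/Upfp_2: send the N4 establishment response
}

def upf_throughput(n_requests: int = 100_000, seed: int = 1) -> float:
    """Average N4 sessions completed per unit time when the component cycles
    Upf_1 -> Upf_2 -> Upf_1, each action taking an exponential delay."""
    random.seed(seed)
    elapsed = 0.0
    for _ in range(n_requests):
        for action in ("req_n4est1", "get_upfp1", "rep_n4est1"):
            elapsed += random.expovariate(RATES[action])
    return n_requests / elapsed

if __name__ == "__main__":
    print(f"isolated UPF throughput ~ {upf_throughput():.1f} sessions per unit time")
```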
§ PERFORMANCE EVALUATION
This section presents the performance evaluation of the proposed end-to-end slicing solution. We have created two slices in each of the existing 5GS and the proposed architecture, and we have evaluated the performance of the session establishment procedure in this sliced environment. We compare the existing 5GS and the proposed architecture based on various performance measures such as the number of sessions established per unit time (the slice-wise session establishment rate), the Average Response Time (ART), and processor utilization in a sliced environment. The evaluation of these measures also helps in analysing the network's scalability.
Slice-wise session establishment rate measures the frequency of established sessions in the context of the specific action (say, rep_se1, which represents the response of the request sent from UE for session establishment). ART measures the UE's waiting time for PDU session establishment <cit.>. Processor utilization measures the NF's processor capacity utilized during the entire process.
We can see that the proposed architecture, with separate controllers, user plane functions and UE signalling service functions, can be considered a distributed system similar to the existing 5GS. Hence, we can use the scalability metric of a distributed system to evaluate the scalability of the proposed architecture and compare it with the existing 5GS. The scalability metric for a distributed system is based on productivity, as defined in <cit.>. Therefore, scalability (Q) (given in Equation <ref>) is defined as the ratio between the productivity of a system at two configurations having different scales m_1 and m_2 <cit.>. The scaled configurations (m_1 and m_2) correspond to different numbers of NFs used in the network, say m_1 = (1,1,1,1,1,1,1,1,1) and m_2 = (3,3,3,3,3,3,3,3,3). Here, configuration m_1 implies that (N_dp1, N_dp2, N_ussf, N_ranc1, N_ranc2, N_cnc1, N_cnc2, N_upf1, N_upf2) = (1,1,1,1,1,1,1,1,1) for the proposed architecture, which is the basic configuration with a single network function of each type. Similarly, m_2 corresponds to (N_dp1, N_dp2, N_ussf, N_ranc1, N_ranc2, N_cnc1, N_cnc2, N_upf1, N_upf2) = (3,3,3,3,3,3,3,3,3), the configuration of a scaled system. The mathematical expression for scalability is given as <cit.>:
Q(m_1,m_2) = P(m_2)/P(m_1)
where P(m) is the productivity of the system at scale m, which can be defined as (Equation <ref>):
P(m) = t(m)f(m)/R(m)
where t(m) is the average number of PDU sessions established at scale m, R(m) is the processor utilization of the system at scale m, and f(m) (Equation <ref>) is determined by evaluating the response time performance of the scaled system. We consider the following equation <cit.> to evaluate the performance function f(m), using the average response time T(m) at scale m and the target average response time T <cit.>:
f(m) = 1/(1 + T(m)/T)
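As an illustration of how Equations <ref>–<ref> combine, the short Python sketch below evaluates the scalability metric from example inputs. The numbers used here are placeholders for illustration only; the actual values of t(m), R(m) and T(m) come from the PEPA-based simulations described above.

```python
def productivity(t_m: float, response_time: float, utilization: float,
                 target_response: float) -> float:
    """P(m) = t(m) * f(m) / R(m), with f(m) = 1 / (1 + T(m)/T)."""
    f_m = 1.0 / (1.0 + response_time / target_response)
    return t_m * f_m / utilization

def scalability(p_m1: float, p_m2: float) -> float:
    """Q(m1, m2) = P(m2) / P(m1)."""
    return p_m2 / p_m1

if __name__ == "__main__":
    target_T = 1.0  # target average response time (placeholder units)
    # Placeholder measurements for the basic (m1) and scaled (m2) configurations.
    P_m1 = productivity(t_m=8_000, response_time=0.8, utilization=0.90, target_response=target_T)
    P_m2 = productivity(t_m=20_000, response_time=0.7, utilization=0.85, target_response=target_T)
    print(f"Q(m1, m2) = {scalability(P_m1, P_m2):.2f}")
```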
Figures <ref> and <ref> present the number of sessions established per unit time for UEs accessing the network slices in the proposed and the existing 5GS architectures, for the two different configurations (m_1, m_2). Even when the same basic configuration (m_1), with similar hardware requirements, is used for both architectures, the saturation point of the existing 5GS is at 8,000 users, whereas the proposed architecture saturates at 20,000 users, as shown in Fig. <ref>. Similarly, Fig. <ref> shows that for the scaled configuration, the existing 5GS saturates at 20,000 users, while the proposed architecture saturates at 46,000 users. Here, the saturation point is the maximum number of users that can be served by the system; it corresponds to the saturation of processor utilization (discussed next).
The processor utilization (for all the NFs of slice 1) of the existing 5GS and the proposed architecture for the basic configuration is shown in Fig. <ref>. For instance, the AMFP reaches its maximum utilization, which explains the saturation point for the number of session establishments, even though the other NFs are not fully utilized at this point. These results show that the request processing chain fails if a single NF becomes a bottleneck for the rest of the chain. Similarly, Fig. <ref> shows the processor utilization results (for all the NFs of slice 1) of the existing 5GS and the proposed architecture for the scaled configuration. Please note that the solid lines in Fig. <ref> and Fig. <ref> represent the processors of the proposed architecture, while the dotted lines are for the existing 5GS architecture. It is evident that the processors saturate earlier in the existing 5GS than in the proposed architecture, as the existing 5GS exchanges more messages than the proposed architecture.
Based on the results obtained for the slice-wise session establishment rate, ART and processor utilization (from the PEPA-based modelling and simulation), the scalability is evaluated (from Equation <ref>) and plotted in Fig. <ref>. We consider the same two configurations m_1 and m_2 as above for the estimation of scalability, and the same saturation points can be observed in Fig. <ref>: the existing 5GS saturates at 8,000 users while the proposed architecture saturates at 20,000 users for the basic configuration, and for the scaled configuration the existing 5GS saturates at 20,000 users while the proposed architecture saturates at 46,000 users. This indicates that the proposed architecture outperforms the existing 5GS and can serve more concurrent users with the same scaling configuration. In other words, with the same configuration or resources assigned to both architectures, the proposed one performs better in terms of slice-wise session establishment rate, ART and processor utilization, and as a result is more scalable. Overall, in a sliced environment with multiple slices, the proposed architecture performs better than the existing 5GS architecture.
§ CONCLUSION
In this paper, we have proposed an architectural enhancement for end-to-end slicing by employing a slice-specific control plane evolution for beyond-5G networks. In the evolved control plane, the UE signalling service function is responsible for the signalling exchange with UEs and has been decoupled from the existing control plane for efficient slice-specific control function deployment. This kind of slice-specific control plane deployment is not possible in the existing 5GS: for example, a gNB-CU-CP typically needs to manage multiple slices simultaneously, whereas, in our proposal, a RAN controller can manage an individual slice in the RAN. The proposal also leads to a reduced number of messages and simplified interfaces between the control plane and the data plane. The performance of the existing 5GS and the proposed architecture has been compared based on parameters such as the slice-wise session establishment rate, ART, processor utilization and scalability to validate the advantages of the proposed idea. The proposed architecture results in simplified slice-specific control signalling, enhanced modularity of the control plane and improved scalability compared to the existing 5GS.
§ ACKNOWLEDGMENT
We acknowledge the Ministry of Electronics and Information Technology (MeitY), Govt. of India for supporting the project.
|
http://arxiv.org/abs/2307.10218v2 | 20230714150640 | Experimental determination of the $^3$He($α$,$γ$)$^7$Be reaction cross section above the $^7$Be proton separation threshold | [
"Á. Tóth",
"T. Szücs",
"T. N. Szegedi",
"Gy. Gyürky",
"Z. Halász",
"G. G. Kiss",
"Zs. Fülöp"
] | nucl-ex | [
"nucl-ex"
] |
Institute for Nuclear Research (Atomki), Debrecen, Hungary
University of Debrecen, Doctoral School of Physics, Debrecen, Hungary
[email protected]
Institute for Nuclear Research (Atomki), Debrecen, Hungary
Background The ^3He(α,γ)^7Be reaction plays a major role both in Big Bang Nucleosynthesis, producing the majority of the primordial ^7Li, and in the pp-chain of solar hydrogen burning, where it is the branching point between the pp-I and the pp-II,-III chains. As a few-nucleon system, this reaction is often used to validate ab-initio theoretical calculations and/or to test R-matrix theory and code implementations. For the latter, experimental data in an extended energy range are of crucial importance to test the fit and extrapolation capabilities of the different codes.
Purpose The reaction cross section has been measured by several groups up to the first resonance (E_c.m.≈ 3 MeV). However, only one dataset, measured in a narrow energy range (E_c.m. = 4.0-4.4 MeV), exists above the ^7Be proton separation threshold. In this work we extend the available experimental capture cross section database to the energy range of known ^7Be levels, where so far only particle scattering experiments are available for testing the models.
Method The activation method was used for the reaction cross section determination. The experiment was performed using a thin-window gas cell with two high-purity Al foils as entrance and exit windows. The activity of the ^7Be nuclei implanted in the exit/catcher foil was measured by detecting the yield of the emitted γ rays using shielded high-purity germanium detectors.
Results New experimental reaction cross section data were obtained for the first time in the E_c.m.=4.3-8.3 MeV energy region, corresponding to E_x=5.8-10 MeV excitation energies of ^7Be. The new dataset, with about 0.2 MeV steps, covers the energy range of known levels and particle separation thresholds. No prominent structures are observed around the ^7Be levels.
Conclusions The measured reaction cross section is slowly increasing with increasing energy in the range of E_x=6-8 MeV from 10 μb to 13 μb. Above the ^6Li+p_1 threshold, the cross section starts to decrease and reaches a value of about 8 μb around E_x=10 MeV. The overall structure of the cross section suggests a broad resonance peaking around E_x=7.5 MeV ^7Be excitation energy, with a width of 8 MeV.
Experimental determination of the ^3He(α,γ)^7Be reaction cross section above the ^7Be proton separation threshold
Zs. Fülöp
August 12, 2023
===================================================================================================
§ INTRODUCTION
The ^3He(α,γ)^7Be reaction is of crucial importance in three different nuclear astrophysics scenarios. It is one of the branching reactions of the proton-proton (pp) chains in solar and stellar hydrogen burning. More specifically, it is the initial reaction of the pp-II and pp-III chains. These chains are the source of a significant portion of the high-energy neutrinos emitted by the Sun <cit.>. Accurate estimates of the neutrinos produced in the Sun can be used to refine solar models <cit.>; however, the rate uncertainty of the branching reaction directly affects the uncertainty of the modeled neutrino flux.
The ^3He(α,γ)^7Be reaction is also an important reaction of element formation in Big Bang Nucleosynthesis (BBN). The formation of ^7Li happens mainly via this reaction and the subsequent beta-decay of ^7Be <cit.>.
Additionally, classical novae play an important role in the galactic production of ^7Li. The synthesis of ^7Be (which is transformed into ^7Li nuclei) via this reaction can also be observed in carbon-oxygen type novae.
^7Be is detected by spectroscopic methods and several simulations have been carried out on the amount of ^7Be <cit.> (and further references therein).
In the relevant reaction energy range in stars, the so-called Gamow window, direct experimental data are difficult to obtain because of the extremely low reaction cross sections, in the attobarn range. Here experimental information can be gained via indirect methods, e.g. the Asymptotic Normalization Coefficient (ANC) of the reaction was determined by using a transfer reaction <cit.>, or the reaction cross section in the solar Gamow window (0.018-0.029 MeV) was determined utilizing the measured solar neutrino fluxes and the predictions of the Standard Solar Model <cit.>. The relevant energy range in classical novae (0.05-0.2 MeV) and the BBN Gamow window (0.1-0.5 MeV) are somewhat higher in energy; up to these energies the reaction cross section increases exponentially, reaching the nanobarn range. With an enormous effort, the LUNA collaboration <cit.> was able to provide direct experimental data in this energy range with high precision <cit.>. The collaboration explored the energy range of E_c.m. = 0.09-0.17 MeV with its deep underground setup, where the environmental background signals in the detectors are orders of magnitude lower than what can be achieved in an overground setup <cit.>.
Many other modern datasets are available in the E_c.m. = 0.3-3.1 MeV energy range <cit.>, proving the positive slope of the astrophysical S-factor towards higher energies.
In addition, the reaction cross section was measured around the proton separation energy of the compound ^7Be nucleus in a narrow energy range of E_c.m. = 4.0-4.4 MeV <cit.>. In this energy range a positive parity level of ^7Be was suggested from the ^6Li(p,γ)^7Be reaction <cit.>, but it was not confirmed later in any direct experiment <cit.> or indirect work <cit.>.
Because the energy range of the solar and stellar reaction is not reachable by present day direct experimental techniques, extrapolation to those energies is inevitable. For this purpose, one of the often used methods is the R-matrix analysis <cit.>. With the rapid growth of computational power, multi-level, multichannel R-matrix codes have recently become available <cit.>, using known level properties from different experiments to extrapolate the S-factor into unknown energy ranges. Most of the previous R-matrix studies for this purpose used the low energy radiative capture datasets only, below the ^7Be proton separation threshold <cit.>. However, the extrapolations may benefit from new experimental datasets in previously unexplored energy regions, where the reaction of interest as well as other reaction channels can be used to constrain their parameters.
To describe the low energy trend of the S-factor, broad positive parity states have to be assumed in the R-matrix fit <cit.>. Such an assumption is reasonable, based on the fact that in its mirror nucleus ^7Li, a broad structure was found in γ-scattering experiments <cit.> around 7 MeV excitation energy. In the present work the corresponding energy range is addressed in ^7Be.
In addition, recently the ^3He+^4He scattering datasets were used to cross-validate several R-matrix codes <cit.>. An extension of that work could be the inclusion of radiative capture channels, which requires experimental data up to 20 MeV in ^7Be excitation energy.
Answering also to this call, we provide here an experimental capture cross section dataset up to E_x=10 MeV.
The cross section of the reaction can be measured with several methods. In the case of prompt γ-ray detection, the direct capture γ rays and/or the secondary γ rays are detected. This method has so far been used only below the first resonance. The main reason is that the angular distribution of the prompt γ rays affects the deduced cross sections; it needs to be known with high precision, or has to be estimated to be a small correction. Since the angular distribution is currently known only from theoretical works, the latter requirement is fulfilled only far away from resonances. A recent attempt has been made to experimentally determine the prompt γ-ray angular distribution <cit.>, and further work is in progress, which may unlock the potential of these kinds of measurements for precise cross section determination.
An alternative method for the capture cross section determination is the direct detection of the ^7Be recoils. Because of the technical challenges, this method was successfully applied only by one group so far utilizing the ERNA (European Recoil Separator for Nuclear Astrophysics) apparatus, resulting in a dataset up to, and covering, the first resonance <cit.>. Experiments were carried out using the DRAGON (Detector of Recoils And Gammas Of Nuclear reactions) recoil separator <cit.>, however, the results are still reported only as conference contributions <cit.>.
The third method is the so-called activation <cit.>. The ^7Be reaction product is radioactive with a half-life of 53.22 days, and 10.44% of its decays lead to the first excited state of ^7Li, which subsequently emits a photon with an energy of 477.6 keV <cit.>. By detecting this latter γ ray, the number of reaction products, and thus the reaction cross section, can be deduced.
The activation method is free from the angular distribution effects influencing the other two methods, and can thus be safely used also in energy ranges where only limited information is available about the levels.
Therefore in this work, the activation method is applied to determine the reaction cross section in an energy range never investigated in this radiative capture reaction before. The new dataset spans the energy range of known ^7Be levels and particle emission thresholds.
This paper is organized as follows: in sec:exp the experimental details are given, highlighting all the constituents of the cross section determination. In sec:results the data analysis and the experimental results are presented. Finally a summary is given in sec:sum.
§ EXPERIMENTAL DETAILS
§.§ The gas-cell target
In the present work, a thin-window gas-cell target was used, an updated version of those used in Refs. <cit.>. With this solution, the differential pumping and calorimetric beam current measurement, which are often necessary for a windowless gas target <cit.>, can be avoided. The disadvantage of a window is the beam energy loss in the entrance foil, which is not problematic in our case due to the relatively high beam energies used. The nominally 10 μm thick aluminium foils used as entrance windows cause 0.5-1 MeV energy loss in the beam energy range (E_α = 11-20 MeV) of our investigations. A few data points were measured with thinner (∼7 μm) entrance foils to explore the excitation function with finer energy steps in the vicinity of known ^7Be levels.
The exact entrance foil thicknesses were determined by measuring the energy loss of passing α particles. The energy of α particles emerging from a triple-isotope α source and penetrating through the foil was measured. The thickness of the aluminium foil was then determined from the energy loss as described in the previous work <cit.>. The statistical uncertainty of the thickness measurement was 0.3-0.5%, while the stopping power uncertainty was taken into account as follows. In the energy range of 3.15-5.80 MeV, covering the initial α energies as emerged from the source and the decelerated ions as detected, there are several stopping power measurements, providing a handful of datasets <cit.> with different accuracies, which can be compared with the SRIM tables <cit.> used in the present calculations. The energy dependence of most of the datasets is well described by the SRIM curve; however, scaling factors within their quoted accuracies have to be applied. The weighted mean of these scale factors amounts to 1.009, with an uncertainty of 1.0%. Therefore, the thicknesses calculated using the SRIM tables were multiplied by 0.991 to account for this effect, while the uncertainty of the stopping power was taken as the spread of the scale factors, i.e. 1%, added quadratically to the statistical uncertainty of the measurement.
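As a rough illustration of this procedure, the sketch below estimates an areal foil thickness from the measured energy loss using a constant stopping power evaluated at the mean α energy, together with the 0.991 SRIM rescaling quoted above. The numerical stopping power value (roughly 0.13 MeV/μm around 5 MeV) is a placeholder; the actual analysis integrates the (rescaled) SRIM tables over the full energy loss.

```python
def foil_thickness_um(e_in_mev: float, e_out_mev: float,
                      srim_stopping_mev_per_um: float,
                      srim_scale: float = 1.009) -> float:
    """Crude foil thickness estimate: thickness ~ dE / S(E_mean), where the
    SRIM stopping power is rescaled by the weighted-mean factor of the
    experimental datasets (equivalent to multiplying the thickness by ~0.991)."""
    delta_e = e_in_mev - e_out_mev
    stopping = srim_scale * srim_stopping_mev_per_um  # rescaled stopping power
    return delta_e / stopping

if __name__ == "__main__":
    # Placeholder numbers: a 5.5 MeV alpha losing ~1.0 MeV in the foil.
    print(f"thickness ~ {foil_thickness_um(5.5, 4.5, 0.13):.1f} um")
```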
The parameters of the different irradiations, together with the window thicknesses as target areal densities are summarized in tab:run.
The 4.19-cm long cell was filled with high purity, isotopically enriched (99.999%) ^3He gas. The foils were placed on O-rings at the entrance and exit of the cell, secured and pressed by tantalum rings. A 12 mm diameter surface of the foil was exposed to the gas, which securely held the pressure against the beamline vacuum. Before the irradiations, the cell was filled with up to 100 mbar of ^3He gas. From this initial pressure, the temperature and the known cell length, the areal density of the target nuclei was determined (see tab:run) by applying the ideal gas law.
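The conversion itself is a one-liner: for an ideal gas the number density is n = P/(k_B T), and the areal density seen by the beam is n times the cell length. The minimal sketch below, with the nominal 100 mbar fill, room temperature and 4.19 cm cell length from the text, only illustrates the calculation; the actual analysis uses the measured pressure and temperature of each run.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def he3_areal_density(pressure_mbar: float, temperature_k: float,
                      cell_length_cm: float) -> float:
    """Target areal density (atoms/cm^2) from the ideal gas law, n = P/(k_B*T)."""
    pressure_pa = pressure_mbar * 100.0           # 1 mbar = 100 Pa
    number_density_m3 = pressure_pa / (K_B * temperature_k)
    number_density_cm3 = number_density_m3 * 1e-6
    return number_density_cm3 * cell_length_cm

if __name__ == "__main__":
    # Nominal fill: 100 mbar, ~295 K, 4.19 cm long cell.
    print(f"N_target ~ {he3_areal_density(100.0, 295.0, 4.19):.3e} atoms/cm^2")
```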
In the experiments, four different exit foil thicknesses (10, 15, 20, 25 μm) were used as catchers, depending on the energy of the ^7Be produced in the reaction. The ^7Be energy was calculated from the kinematics, and simulations with the SRIM program <cit.> were carried out to determine the catcher foil thickness required at the given energies to ensure that the ^7Be nuclei are stopped in the foil and do not pass through it. The thinnest available foils were then used to reduce the number of target atoms for possible parasitic reactions.
§.§ Irradiations
The irradiations were performed by the Atomki cyclotron <cit.>. The activation chamber containing the thin window gas-cell acted as a Faraday-cup.
A voltage of -300 V was applied to an aperture at the entrance of the chamber to eliminate the effect of any secondary electrons that may be generated in the last beam defining aperture or from the target. Since the target gas was within the Faraday-cup, charge exchanges did not affect the current measurement. This allowed the determination of the number of bombarding particles via charge integration. A pressure gauge was also connected to the cell, and the pressure data were saved every 10 minutes. During the irradiation the pressure in the cell increased steadily (by about 15% observed until the end of a given irradiation), and a slow decrease was observed after the irradiation was stopped (1-2% within a few days).
Since the whole cell was surrounded by vacuum, in case of a foil or O-ring failure only a pressure drop would be observed; thus the pressure increase is considered to be a temperature effect (a few % increase is consistent with the few-degree temperature increase) and mainly gas desorption from the foils and cell walls. The pressure increase was always more significant when the cell had been exposed to air for a longer time before irradiation. None of these effects alter the number of active target atoms. The energy loss of the beam inside the gas volume was of the order of a few tens of keV. Considering the total pressure increase caused by air desorption, this small extra gas amount alters the center-of-mass energy by less than 0.1%, well within the initial beam energy uncertainty.
The unreacted beam was dumped in a water cooled tantalum cap. The drawing of the gas target chamber is shown in fig:setup.
The beam intensity was monitored by a charge integrator combined with a multichannel scaler. The accumulated charge was recorded every 60 seconds, which allowed the current variation to be taken into account in the data analysis.
The length of the irradiations was between 10 and 26 hours to create the adequate activity in the catcher foil. The electric beam current of the doubly charged α particles varied between 0.6-1.2 μA depending on the actual performance of the accelerator.
§.§ γ-ray detection
During the irradiation, all the ^7Be nuclei were implanted in the catcher foil. SRIM simulations <cit.> were carried out to investigate the possible backscattering or stopping of ^7Be in the gas volume. Both of these effects were found to be negligible (< 0.01%).
Due to the reaction kinematics, the ^7Be was created in a cone with a maximum opening angle of 26.0 mrad (in the case of the highest energy irradiation). This would result in a spot of at most 2 mm diameter on the catcher. Since the original α beam had a size of 5 mm, defined by the last aperture, this additional broadening of the ^7Be distribution is not significant.
After the irradiations, the catcher foils were removed from the gas-cell, and were placed in front of a high-purity germanium (HPGe) detector with a sample detector distance of 1 cm. Typically, the γ-ray counting was started with a cooling time of at least one day after irradiation, because there was some significant beam-induced short lived activity found in the catcher immediately after irradiation.
Two HPGe detectors were employed for γ-ray countings depending on their availability.
These were a Canberra GL2015R type Low Energy Germanium Detector (LEGe) <cit.> with a standard dipstick cryostat, and a Canberra GR10024 N-type detector with an Ultra Low Background (ULB) <cit.> cryostat. The detectors were surrounded by lead shielding including inner layers of Cd and Cu, with which their sensitivities become comparable at E_γ = 477.6 keV <cit.>.
In most cases a given sample was measured by both detectors, and the yields obtained agreed within the statistical uncertainty. The data points obtained with the two detectors have most of their systematic uncertainties in common; thus the evaluation was performed with the spectra measured with the ULB detector, since it provided better statistics. Countings were performed in several cycles: a given sample was in the detector setup for a few days, and was then placed back after about a week of waiting time. In this way the ^7Be decay was followed in each sample, and it was found to be compatible with the expectations assuming the literature half-life of ^7Be. The total counting time of a given sample was 4-22 days, depending on the activity, to reach 2-2.5% statistical uncertainty.
The efficiency calibration of the detectors was performed with a custom-made ^7Be single-line source produced via the ^7Li(p,n)^7Be reaction. The source was created in the same irradiation setup, thus the proton beam was collimated to a 5 mm spot, where the activity was created evenly in the target material. With this method a calibration source geometry was achieved which is similar to the extended activity distribution in the catcher.
The ^7Be source activity was measured with high precision at the ULB detector using 27 cm source-detector distance.
Commercial calibration sources of known activities (^152Eu and ^133Ba) were used for the determination of detector efficiency-energy function at this distance. With these multi-line sources, direct close geometry calibration would be affected by the true coincidence summing, which was avoided in this way.
Using high intensity γ transitions from both sources, the detection efficiency was determined, and then a log-log linear function was fitted separately to the values obtained with each source. In the case of ^133Ba, the detection efficiency at the E_γ = 477.6 keV ^7Be line was extrapolated, while in the case of ^152Eu interpolation was possible (see fig:eff).
The detection efficiencies determined at the ^7Be line from the two sources were in mutual agreement. Taking into account the normalisation uncertainty stemming from the source activities, their weighted average value was used later in the analysis, carrying only 1% uncertainty, which in turn gives the precision of the ^7Be source activity.
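A minimal version of this calibration step is sketched below: a log-log linear fit to efficiency points from each multi-line source, evaluated at 477.6 keV, followed by an uncertainty-weighted average of the two results. The calibration points listed in the code are invented placeholders, not the measured values of this work.

```python
import numpy as np

def loglog_efficiency(energies_kev, efficiencies, e_eval_kev=477.6):
    """Fit log(eff) = a + b*log(E) and evaluate the fit at e_eval_kev."""
    b, a = np.polyfit(np.log(energies_kev), np.log(efficiencies), 1)
    return float(np.exp(a + b * np.log(e_eval_kev)))

def weighted_average(values, uncertainties):
    w = 1.0 / np.asarray(uncertainties) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

if __name__ == "__main__":
    # Placeholder calibration points (energy in keV, absolute efficiency):
    # 152Eu lines bracket 477.6 keV (interpolation), 133Ba lines lie below it (extrapolation).
    eff_eu = loglog_efficiency([344.3, 778.9, 964.1, 1408.0],
                               [2.1e-3, 1.1e-3, 9.0e-4, 6.5e-4])
    eff_ba = loglog_efficiency([276.4, 302.9, 356.0, 383.8],
                               [2.6e-3, 2.4e-3, 2.1e-3, 2.0e-3])
    mean, unc = weighted_average([eff_eu, eff_ba], [0.02 * eff_eu, 0.03 * eff_ba])
    print(f"efficiency at 477.6 keV ~ {mean:.2e} +/- {unc:.1e}")
```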
The detection efficiencies at 1 cm source-detector distances for E_γ = 477.6 keV of both detectors were then determined with the precisely calibrated ^7Be single-line source with high accuracy (1.5%).
The detector efficiency was measured again after the γ-ray countings with the same calibration sources and with another freshly produced ^7Be source to test the stability of the system. There was no significant change in the efficiency (of the order of 0.3%, within the statistical uncertainty), thus the error-weighted average of the two efficiency results was used in the analysis.
§.§ Data analysis
Typical γ-ray spectra are shown in fig:spe: the spectra taken after lower energy irradiations show less beam induced background from parasitic reactions in both detectors (fig:spe (a) and (b)). A spectrum taken after a higher energy irradiation, which caused more parasitic activity in the foil, thus featuring more contaminant peaks in the spectrum is also displayed (fig:spe (c)).
Despite the high purity (99.99% Al) of the foils, they contain some impurities at the ppm level (Cu, Fe, Mg, Si...), due to the manufacturing process. These impurities in the foil may also undergo α-particle induced reactions. The half-life of the resulting radioactive nuclei is usually less than 1 day. The most probable reactions are (α,n), but at such high energies several reaction channels can be open, such as (α,2n) reactions. The latter is of great importance, since the ^54Fe nucleus present in the foil (among others) is the target of such a reaction, with a reaction threshold of 17.2 MeV. The ^54Fe(α,2n)^56Ni reaction produces ^56Ni, which is radioactive and has a half-life of almost 6 days. During the decay of ^56Ni, γ photons with an energy of E_γ = 480 keV are emitted. Due to the finite energy resolution of the detector, this manifests as a side peak/shoulder of the ^7Be peak, which has an energy of E_γ = 477.6 keV. This small structure was taken into account in the peak area determination for the E_α = 17.5-20 MeV irradiations.
In addition, a prominent peak at E_γ = 496 keV and a smaller one at E_γ = 486 keV are visible in the spectra after the high energy irradiations (see fig:spe bottom panel). Even though these do not directly affect our peak of interest, such parasitic peaks are not expected from reactions on the foil impurities. From their intensity ratio and half-life (and from other observed peaks) the source of these peaks was identified as ^131Ba. This isotope was created via the ^129Xe(α,2n)^131Ba reaction, which has a huge cross section (0.3-0.5 barn) above its threshold of 16.2 MeV <cit.>. These parasitic peaks were visible only in the spectra taken after the irradiations at and above E_α = 17.0 MeV, corresponding to a 16.5 MeV effective beam energy behind the entrance foil. A trace xenon impurity at the ppm level was enough to create the observed amount of activity. Since the gas handling part of the gas-cell had previously been used with natural Xe gas, despite the evacuation, a trace amount of xenon was trapped and mixed into the helium used for our experiments.
In principle, ^131Ba can also be created via the ^128Xe(α,n)^131Ba reaction above its threshold of 9 MeV; however, this production is insignificant, because ^128Xe has a more than one order of magnitude lower natural abundance than ^129Xe, and an orders of magnitude lower ^131Ba production cross section <cit.>.
The ^7Be peak area was determined by fitting the spectrum with a lognormal function, assuming a linear background below the peak. The slight low energy wing of the peaks, due to incomplete charge collection in the germanium crystal, is taken into account in this way. The asymmetry is small; assuming a Gaussian peak would change the peak area only within the statistical uncertainty. The peak area was then corrected for detector dead time and random coincidence losses, both at the 0.1% level.
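For readers unfamiliar with the activation technique, the sketch below shows how a corrected ^7Be peak area is typically converted into a cross section: the counts are divided by the detection efficiency, the 10.44% γ-ray branching and the decay factor of the counting period, corrected back through the waiting period and for decay during the irradiation, and finally normalised by the number of projectiles and the target areal density. This is a generic, constant-current version for illustration only; the actual analysis uses the beam current recorded every 60 s to account for the current variation and the (small) decay during the irradiation.

```python
import math

T_HALF_S = 53.22 * 24 * 3600          # 7Be half-life in seconds
LAMBDA = math.log(2.0) / T_HALF_S     # decay constant
BRANCHING = 0.1044                    # fraction of decays emitting the 477.6 keV gamma

def activation_cross_section_barn(peak_counts, efficiency, n_projectiles,
                                  target_areal_density_cm2, t_irrad_s,
                                  t_wait_s, t_count_s):
    """Cross section (barn) from an activation measurement, assuming a
    constant beam current during the irradiation."""
    # Decays observed during counting -> number of 7Be at the start of counting.
    n_at_count_start = peak_counts / (efficiency * BRANCHING
                                      * (1.0 - math.exp(-LAMBDA * t_count_s)))
    # Undo the decay during the waiting (cooling) period.
    n_at_end_of_irrad = n_at_count_start * math.exp(LAMBDA * t_wait_s)
    # Correct for decay during a constant-current irradiation.
    production_factor = (1.0 - math.exp(-LAMBDA * t_irrad_s)) / (LAMBDA * t_irrad_s)
    n_produced = n_at_end_of_irrad / production_factor
    sigma_cm2 = n_produced / (n_projectiles * target_areal_density_cm2)
    return sigma_cm2 / 1e-24          # 1 barn = 1e-24 cm^2

if __name__ == "__main__":
    # Placeholder inputs of roughly the right order of magnitude.
    sigma = activation_cross_section_barn(
        peak_counts=2.0e3, efficiency=5.0e-2, n_projectiles=3.0e17,
        target_areal_density_cm2=1.0e19, t_irrad_s=20 * 3600,
        t_wait_s=2 * 24 * 3600, t_count_s=5 * 24 * 3600)
    print(f"sigma ~ {sigma * 1e6:.1f} microbarn")
```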
The statistical uncertainty of the γ countings was generally 2-2.5%. The uncertainty of the ^3He target thickness was between 2.5 and 2.7%. One of the dominant contributions was the cell length uncertainty of 1 mm, amounting to 2.4%. This is a conservative upper limit, including the uncertainty of the length measurement (0.2 mm) and the bending of the foils of the order of 0.3 mm when exposed to the pressure difference <cit.>. Further uncertainties were considered, such as the beam heating effect (0.6-1%) <cit.>, the cooling water temperature, which defines the initial gas temperature (0.7%), and the pressure in the cell (0.3%). The uncertainty of the bombarding particle flux was assumed to be 3%. Taking the quadratic sum of the above partial uncertainties, the reaction cross section was determined with an accuracy of 4.6-5.8%.
The uncertainty in the center-of-mass energy was between 0.3 and 0.5%, which is the quadratic sum of the cyclotron energy uncertainty of 0.3% and the uncertainty caused by the energy loss in the entrance foil (0.1-0.4%). The latter stems from the uncertainty of the Al foil thickness and the stopping power uncertainty. The value of the stopping power used in the calculation is 0.985 times that of the SRIM <cit.> tables. This is due to the fact that in the 10.5-20 MeV α-energy range, which is the range of the α particles used for the irradiations, there is only one experimental stopping power dataset <cit.>. The SRIM curve describes well the energy dependence of these high accuracy (0.6%) data, but the absolute magnitude of the SRIM curve is 1.5% higher. This scale shift was applied in our calculations, and a conservative uncertainty of 1.5% was assumed.
The energy loss in the target gas was 25-44 keV; assuming a 4.4% uncertainty of the stopping power according to SRIM <cit.>, this amounted to a negligible (max. 0.02%) uncertainty of the effective reaction energy. Since the cross section is roughly constant within the above mentioned target thickness, the effective reaction energy was taken as the energy at the middle of the target.
The experimental cross section results together with the effective center-of-mass reaction energies are summarized in tab:res and displayed in fig:XS.
§ DISCUSSION
The obtained excitation function is shown in fig:XS. The gas-cell in the present work is different from the one used in Ref. <cit.>, thus an overlapping point was taken at about E_c.m. = 4.3 MeV as a cross validation. The new data point is in perfect agreement with the previous one.
In fig:XS, differential elastic scattering and ^3He(α,p)^6Li reaction cross sections from Ref. <cit.> are also plotted for selected angles. A complete compilation of the available datasets is beyond the scope of this work; here only the major features are highlighted.
In the present dataset no structures are visible around the known ^7Be levels, while the 6.73 MeV level appears in the elastic scattering data, and the 7.21 MeV level forms a structure in the ^3He(α,p_0)^6Li dataset. This suggests marginal γ widths for these levels beside sizable particle widths. Similarly, the two other levels in the investigated energy range show structures in the particle channels, but are not visible in the radiative capture dataset. Above its threshold, the ^3He(α,p_1)^6Li cross section becomes dominant; this is the energy range where the present cross section starts to drop. The ^3He(α,p_1)^6Li cross section peaks at the ^3He(α,p_2)^6Li reaction threshold, from which point supposedly the latter reaction becomes dominant; however, no experimental data are available for that reaction channel. Additionally, the ^3He(α,p_2)^6Li reaction threshold energy is close to a broad 7/2^- level in ^7Be, thus that may also cause the structure of the ^3He(α,p_1)^6Li cross section.
Comparing the present data to the cross section of the mirror reaction, i.e. ^3H(α,γ)^7Li_GS, shows remarkable common features (see fig:7Li). The higher energy ^3H(α,γ)^7Li_GS data were obtained by the γ-induced breakup reaction on ^7Li <cit.>. The measured ^7Li(γ,α)^3H cross section is converted to the plotted ^3H(α,γ)^7Li_GS one using the principle of detailed balance.
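A minimal sketch of that conversion is given below, assuming the standard two-body reciprocity relation with spins J(^7Li)=3/2, J(^3H)=1/2, J(^4He)=0 and two photon polarisations, i.e. σ_capture = 2(2J_7Li+1)/[(2J_3H+1)(2J_4He+1)] · (k_γ/k)² · σ_photo. The Q-value and masses are standard values; the example cross section and energy are arbitrary placeholders.

```python
import math

HBARC = 197.327            # MeV*fm
AMU = 931.494              # MeV/c^2
M_ALPHA, M_TRITON = 4.0026 * AMU, 3.0161 * AMU
Q_VALUE = 2.467            # MeV, 3H(alpha,gamma)7Li ground-state Q-value
SPIN_FACTOR = 2 * (2 * 1.5 + 1) / ((2 * 0.5 + 1) * (2 * 0 + 1))  # = 4

def capture_from_photo(sigma_photo_ub: float, e_cm_mev: float) -> float:
    """3H(a,g)7Li_GS cross section (microbarn) from the 7Li(g,a)3H one
    via detailed balance (non-relativistic relative momentum)."""
    mu = M_ALPHA * M_TRITON / (M_ALPHA + M_TRITON)      # reduced mass, MeV/c^2
    k = math.sqrt(2.0 * mu * e_cm_mev) / HBARC          # particle wavenumber, 1/fm
    k_gamma = (e_cm_mev + Q_VALUE) / HBARC              # photon wavenumber, 1/fm
    return SPIN_FACTOR * (k_gamma / k) ** 2 * sigma_photo_ub

if __name__ == "__main__":
    # Arbitrary example: a 5 microbarn photodisintegration point at E_cm = 6 MeV.
    print(f"sigma_capture ~ {capture_from_photo(5.0, 6.0):.3f} microbarn")
```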
A similar broad structure between 4 and 9 MeV is visible in both reactions. Because of the maximum energy of our accelerator, the new dataset does not cover the higher energies where the data of <cit.> become constant. Investigations up to E_c.m. = 13 MeV are recommended to confirm this similar behavior towards higher energies.
Finally, the present data are also compared to previous literature R-matrix fits <cit.> using the AZURE2 code <cit.> (see fig:R-matrix). Those fits considered only data below E_c.m.=3 MeV, and used many background poles well outside the range of the data. In tab:r-matrix the spin, parity, energy, and α widths of the levels considered in the previous R-matrix fits and in the present one are shown.
Ref. <cit.> used altogether 7 poles, all placed at E_x=11 MeV except the 5/2^- one, which was placed at E_x=7 MeV to account for known levels around this energy. Ref. <cit.> used altogether 6 poles, skipping the 7/2^- one, and placed the positive and negative parity poles at different energies.
As can be seen in fig:R-matrix, the two fits start to deviate from each other already from E_c.m.=3 MeV, and completely miss the new dataset, as they were not intended to be used in that energy range.
A new R-matrix fit with limited number of reaction channels (radiative capture and elastic scattering involving only ^3He and ^4He), and datasets was performed. Hereafter we discuss this new limited fit.
A comprehensive R-matrix fit including multiple reaction channels such as ^3He(α,p_0)^6Li, ^3He(α,p_1)^6Li, ^6Li(p,γ)^7Be, ^6Li(p,p)^6Li and more datasets is beyond the scope of this paper.
For the radiative capture channel, the new data and two previous datasets <cit.> were considered. For the scattering channel one dataset from Ref. <cit.> was used to better constrain the α widths.
As a starting point, the levels and α widths from the most recent compilation <cit.> were used, and the γ widths and ANCs were taken from Ref. <cit.>. Since no low energy data were used, the ANCs were kept fixed.
The energies of the levels were also fixed; the only exception was the 1/2^+ level, which was initially placed at E_x=7.5 MeV and fitted in order to describe the apparent broad structure. The α widths of the positive parity poles were varied, as were their γ widths. The partial γ widths to the ground and to the first excited state were not constrained here, because no partial cross sections were used for the fit. The 1/2^- level at E_x=10 MeV is indicated in the compilation only as a possible level, marked as "broad".
The resulting fit is plotted in fig:R-matrix. The energy of the 1/2^+ level did not change significantly from its initial value, while the fit attributed an α width of 8 MeV to this level.
The fit describes the trend of the data quite well; however, the scattering data, especially at backward angles, are poorly reproduced. Nevertheless, the positive parity state in the range of the data did not significantly change the low energy behavior of the R-matrix fit; its trend stayed the same. Since no partial cross section data were used in the present fit, the ground state and first excited state ANCs cannot be constrained separately. Thus, they were fixed to the values of previous works, resulting in no change in the extrapolated zero energy cross section value.
§ SUMMARY
The cross section of the ^3He(α,γ)^7Be reaction was measured for the first time over the energy range of E_c.m.=4.3-8.3 MeV, with 0.2 MeV energy step, using the activation technique.
The known ^7Be levels cause no prominent features in the excitation function. However, the overall shape of the obtained cross section indicates a broad structure peaking at E_x=7.5 MeV ^7Be excitation energy. A similar structure is visible in the ^3H(α,γ)^7Li mirror reaction.
A limited R-matrix fit was performed using only a few additional radiative capture datasets <cit.> and one elastic scattering dataset <cit.>. The energies and widths of most of the levels were kept fixed; only the parameters of the positive parity poles, required for the description of the low energy behavior of the cross section <cit.>, were varied. Treating the broad structure as a 1/2^+ positive parity state, the fit nicely describes the capture data in the energy range of the new dataset. The new fit does not differ significantly at lower energies from the fits of previous investigations. The description of the elastic scattering dataset is poor, however not much worse than in other works. It has to be mentioned here that those previous works did not use data in the energy range of this study, and were not intended for extrapolation to this higher energy range.
A comprehensive R-matrix fit is recommended which would use other reaction channels and partial cross section datasets to better describe the level scheme of the ^7Be nucleus.
Similarly, the study of the cross section of the radiative capture at even higher energy is required to compare the energy dependence of the cross section with the upturn observed in the ^3H(α,γ)^7Li mirror reaction.
The authors thank R. J. deBoer (University of Notre Dame) for valuable discussions. We thank the operating crews of the cyclotron accelerator for their assistance during the irradiations. This work was supported by (OTKA FK134845 and K134197), New National Excellence Programs of the Ministry of Human Capacities of Hungary under nos. and , and by the European Union (ChETEC-INFRA, project no. 101008324). T.S. acknowledges support from the János Bolyai research fellowship of the Hungarian Academy of Sciences.
Haxton13-ARAA
W. Haxton, R.H. Robertson, A.M. Serenelli, Annu. Rev. Astron. Astrophys.
51, 21 (2013)
Magg22-AA
E. Magg et al., Astron. Astrophys. 661, A140 (2022)
Fields11-ARNPS
B.D. Fields, Annu. Rev. Nucl. Part. Sci. 61, 47 (2011)
Starrfield20-AJ
S. Starrfield et al., Astrophys. J. 895, 70 (2020)
Kiss20-PLB
G. Kiss et al., Phys. Lett. B 807, 135606 (2020)
Takacs15-PRD
M.P. Takács, D. Bemmerer, T. Szücs, K. Zuber, Phys. Rev. D 91,
123526 (2015)
luna
Bemmerer06-PRL
D. Bemmerer et al. (LUNA Collaboration), Phys. Rev. Lett. 97, 122502
(2006),
Confortola07-PRC
F. Confortola et al. (LUNA Collaboration), Phys. Rev. C 75, 065803
(2007)
Gyurky07-PRC
G. Gyürky et al. (LUNA Collaboration), Phys. Rev. C 75, 035805
(2007)
Costantini08-NPA
H. Costantini et al. (LUNA Collaboration), Nucl. Phys. A 814, 144
(2008)
Caciolli09-EPJA
A. Caciolli et al. (LUNA Collaboration), Eur. Phys. J. A 39, 179
(2009),
Singh04-PRL
B.S. Nara Singh, M. Hass, Y. Nir-El, G. Haquin, Phys. Rev. Lett. 93,
262503 (2004)
Brown07-PRC
T.A.D. Brown et al., Phys. Rev. C 76, 055801 (2007)
DiLeva09-PRL
A. Di Leva et al., Phys. Rev. Lett. 102, 232502 (2009), 103, 159903(E) (2009)
Carmona-Gallardo12-PRC
M. Carmona-Gallardo et al., Phys. Rev. C 86, 032801 (2012)
Bordeanu13-NPA
C. Bordeanu et al., Nucl. Phys. A 908, 1 (2013)
Kontos13-PRC
A. Kontos et al., Phys. Rev. C 87, 065804 (2013)
Szucs19-PRC
T. Szücs et al., Phys. Rev. C 99, 055804 (2019); 105, 069901(E) (2022),
He13-PLB
J. He et al., Phys. Lett. B 725, 287 (2013)
Piatti20-PRC
D. Piatti et al. (LUNA Collaboration), Phys. Rev. C 102, 052802 (2020)
Kiss21-PRC
G.G. Kiss et al., Phys. Rev. C 104, 015807 (2021)
Azuma10-PRC
R.E. Azuma et al., Phys. Rev. C 81, 045805 (2010)
Thompson19-EPJA
I.J. Thompson et al., Eur. Phys. J. A 55, 92 (2019)
deBoer14-PRC
R.J. deBoer et al., Phys. Rev. C 90, 035804 (2014)
Odell22-FP
D. Odell et al., Front. Phys. 10, 888476 (2022),
Skopik79-PRC
D.M. Skopik, J. Asai, E.L. Tomusiak, J.J. Murphy, Phys. Rev. C 20,
2025 (1979)
Junghans79-ZPA
G. Junghans et al., Z. Physik A 291, 353 (1979)
Munch20-PRC
M. Munch et al., Phys. Rev. C 101, 055801 (2020)
Turkat19-SNC
S. Turkat et al., Measurement of the ^3He(α,γ)^7Be
γ-ray angular distribution, in Solar Neutrinos, edited by
M. Meyer, K. Zuber (World Scientific, 2019), p. 513
Sjue13-NIMA
S. Sjue et al., Nucl. Instrum. Meth. A 700, 179 (2013)
Singh12-JPCS
B.S.N. Singh et al., J. Phys. Conf. Ser. 337, 012057 (2012)
CarmonaGallardo14-EPJWOC
M. Carmona-Gallardo et al., EPJ Web of Conferences 66, 07003 (2014)
Gyurky19-EPJA
G. Gyürky et al., Eur. Phys. J. A 55, 41 (2019),
Tilley02-NPA
D. Tilley et al., Nucl. Phys. A 708, 3 (2002)
Bordeanu12-NIMA
C. Bordeanu et al., Nucl. Instrum. Meth. A 693, 220 (2012)
Ferraro18-EPJA
F. Ferraro et al., Eur. Phys. J. A 54, 44 (2018)
Andersen77-PRA
H.H. Andersen, J.F. Bak, H. Knudsen, B.R. Nielsen, Phys. Rev. A 16,
1929 (1977)
Diwan15-NIMB
P.K. Diwan, S. Kumar, Nucl. Instrum. Meth. B 359, 78 (2015)
Santry86-NIMB
D. Santry, R. Werner, Nucl. Instrum. Meth. B 14, 169 (1986)
Raisanen91-REDS
J. Räisänen et al., Radiation Effects and Defects in Solids 118,
97 (1991)
Nakata69-CJP
H. Nakata, Can. J. Phys. 47, 2545 (1969)
Desmarais84-AJP
D. Desmarais, J.L. Duggan, American Journal of Physics 52, 408 (1984)
Trzaska18-NIMB
W.H. Trzaska et al., Nucl. Instrum. Meth. B 418, 1 (2018)
Hsu05-NIMB
J.Y. Hsu, Y.C. Yu, J.H. Liang, K.M. Chen, Nucl. Instrum. Meth. B
241, 155 (2005)
Majackij88-UFZ
V.D. Mayatskij, N.N. Pucherov, Ukrainskij Fizicheskij Zhurnal 33, 1285
(1988)
srim
J.F. Ziegler, M. Ziegler, J. Biersack, Nucl. Instrum. Meth. B 268,
1818 (2010),
Biri21-EPJP
S. Biri et al., Eur. Phys. J. Plus 136, 247 (2021)
LEGe
ULB
Szucs14-AIPConf
T. Szücs, G.G. Kiss, Z. Fülöp, AIP Conf. Proc. 1595, 173 (2014)
TALYS-V19
A.J. Koning, S. Hilaire, S. Goriely, computer code talys, version
1.9 (2017),
Rauscher00-ADNDT
T. Rauscher, F.K. Thielemann, At. Data Nucl. Data Tables 75, 1 (2000)
Spiger67-PR
R.J. Spiger, T.A. Tombrello, Phys. Rev. 163, 964 (1967)
Halasz16-PRC
Z. Halász et al., Phys. Rev. C 94, 045801 (2016),
Marta06-NIMA
M. Marta et al., Nucl. Instrum. Meth. A 569, 727 (2006)
Brune94-PRC
C.R. Brune, R.W. Kavanagh, C. Rolfs, Phys. Rev. C 50, 2205 (1994)
|
http://arxiv.org/abs/2307.04504v1 | 20230710115604 | An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization | [
"Guy Kornowski",
"Ohad Shamir"
] | math.OC | [
"math.OC",
"cs.LG"
] |
theoremTheorem
*theorem*Theorem
proposition[theorem]Proposition
*proposition*Proposition
example[theorem]Example
lemma[theorem]Lemma
corollary[theorem]Corollary
definition[theorem]Definition
remark[theorem]Remark
assumption[theorem]Assumption
claim[theorem]Claim
|
http://arxiv.org/abs/2307.04330v1 | 20230710035411 | A uniform and pressure-robust enriched Galerkin method for the Brinkman equations | [
"Seulip Lee",
"Lin Mu"
] | math.NA | [
"math.NA",
"cs.NA",
"65N15, 65N30, 76D07"
] |
A uniform and pressure-robust enriched Galerkin method for the Brinkman equations
Seulip Lee, Lin Mu
August 12, 2023
====================================================
This paper presents a pressure-robust enriched Galerkin (EG) method for the Brinkman equations with minimal degrees of freedom based on EG velocity and pressure spaces. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. We derive, analyze, and compare two EG methods in this paper: standard and robust methods. The standard method requires a mesh size to be less than a viscous parameter to produce stable and accurate velocity solutions, which is impractical in the Darcy regime. Therefore, we propose the pressure-robust method by utilizing a velocity reconstruction operator and replacing EG velocity functions with a reconstructed velocity. The robust method yields error estimates independent of a pressure term and shows uniform performance from the Stokes to Darcy regimes, preserving minimal degrees of freedom. We prove well-posedness and error estimates for both the standard and robust EG methods. We finally confirm theoretical results through numerical experiments with two- and three-dimensional examples and compare the methods' performance to support the need for the robust method.
Keywords: enriched Galerkin finite element methods, Brinkman equations, pressure-robust, velocity reconstruction, uniform performance
§ INTRODUCTION
We consider the stationary Brinkman equations in a bounded domain Ω⊂ℝ^d for d=2,3 with simply connected Lipschitz boundary ∂Ω: Find fluid velocity :Ω→ℝ^d and pressure p:Ω→ℝ such that
-μΔ𝐮 + μ/K 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
𝐮 = 0 on ∂Ω,
where μ is the fluid viscosity, K is the media permeability, and 𝐟 is a given body force.
The Brinkman equations describe fluid flow in porous media characterized by interconnected pores that allow for the flow of fluids, considering both the viscous forces within the fluid and the resistance from the porous media. The Brinkman equations provide a mathematical framework for studying and modeling complex phenomena such as groundwater flow, multiphase flow in oil reservoirs, blood flow in biological tissues, and pollutant transport in porous media.
In this paper, for simplicity, we consider the scaled Brinkman equations
-νΔ𝐮 + 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
𝐮 = 0 on ∂Ω,
where ν∈[0,1] is a viscous parameter.
Mathematically, the Brinkman equations can be seen as a combination of the Stokes and Darcy equations.
When ν→1, the Brinkman equations approach a Stokes regime affected by the viscous forces, so standard mixed formulations require the H^1-conformity for velocity.
On the other hand, since the Darcy model becomes more prominent as ν→ 0, finite-dimensional spaces for velocity are forced to satisfy the H(div)-conformity.
This compatibility in velocity spaces makes it challenging to construct robust numerical solvers for the Brinkman equations in both the Stokes and Darcy regimes.
The numerical tests in <cit.> show that standard mixed methods with well-known inf-sup stable Stokes elements, such as MINI and Taylor-Hood elements, produce suboptimal orders of convergence in the Darcy regime.
Moreover, with piecewise constant approximations for pressure, the standard methods' velocity errors do not converge in the Darcy regime, while mesh size decreases.
On the other hand, Darcy elements such as Raviart-Thomas and Brezzi-Douglas-Marini do not work for the Stokes domain because they do not satisfy the H^1-conformity.
Therefore, the development of robust numerical solvers for the Brinkman equations has had considerable attention.
There have been three major categories in developing robust numerical methods for the Brinkman equations. The first category considers Stokes/Darcy elements and adds stabilization (or penalty) terms or degrees of freedom to impose normal/tangential continuity, respectively. This approach allows Stokes elements to cover the Darcy regime <cit.> or H(div)-conforming finite elements to be extended to the Stokes regime <cit.>. Also, the stabilized method in <cit.> coarsens a pressure space and applies a stabilization term on pressure, while the robust method in <cit.> uses an enlarged velocity space. The second approach is to introduce another meaningful unknown and define its suitable formulation and finite-dimensional space, such as velocity gradient <cit.>, vorticity <cit.>, and Lagrange multipliers at elements' boundaries <cit.>. The third direction is the development of a velocity reconstruction operator, first introduced in <cit.>, mapping Stokes elements into an H(div)-conforming space. In a discrete problem for the Brinkman equations, reconstructed velocity functions replace Stokes elements in the Darcy term and the test function on the right-hand side. This idea has been adopted for a uniformly robust weak Galerkin method for the Brinkman equations <cit.>, which inspires our work because of its simplicity in modification.
Our research focuses on developing a robust numerical method for the Brinkman equations with minimal degrees of freedom. The enriched Galerkin (EG) velocity and pressure spaces have been proposed by <cit.> for solving the Stokes equations with minimal degrees of freedom. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. More precisely, a velocity function 𝐯=𝐯^C+𝐯^D consists of a continuous linear Lagrange polynomial 𝐯^C and a discontinuous piecewise linear enrichment function 𝐯^D, so interior penalty discontinuous Galerkin (IPDG) formulations are adopted to remedy the discontinuity of 𝐯^D. These velocity and pressure spaces satisfy the inf-sup condition for the Stokes equations, so they are stable Stokes elements.
We first observe a standard EG method derived from adding the Darcy term (𝐮,𝐯)_Ω to the Stokes discrete problem in <cit.>.
Our numerical analysis and experiments show that the standard EG method provides stable solutions and convergent errors for the Brinkman equations only if the mesh size satisfies the condition h<√(ν), which is impractical in the Darcy regime (ν→0). Hence, inspired by <cit.>, we use the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space, whose consequent action is preserving the continuous component 𝐯^C and mapping only the discontinuous component 𝐯^D to the lowest-order Raviart-Thomas space. Then, we replace the EG velocity in the Darcy term and the test function on the right-hand side with the reconstructed linear H(div)-conforming velocity.
Therefore, with this simple modification, our resulting EG method yields pressure-robust error estimates and shows uniform performance from the Stokes to Darcy regime without any restriction in a mesh size, which is verified by our numerical analysis and experiments. Through two- and three-dimensional examples, we compare the numerical performance of our robust EG and the standard EG methods with the viscous parameter ν and mesh size h. The numerical results demonstrate why the standard EG method is not suitable for the Brinkman equations in the Darcy regime and show that the robust EG method has uniform performance in solving the Brinkman equations.
The remaining sections of this paper are structured as follows:
Some important notations and definitions are introduced in Section <ref>.
In Section <ref>, we introduce the standard and robust EG methods for the Brinkman equations, recalling the EG velocity and pressure spaces <cit.> and the velocity reconstruction operator <cit.>.
We prove the well-posedness and error estimates of the standard EG method in Section <ref>.
In Section <ref>, we show the robust method's well-posedness and error estimates that mathematically verify the uniform performance from the Stokes to Darcy regimes.
Section <ref> validates our theoretical results through numerical
experiments in two and three dimensions. Finally, we summarize our contribution in this paper and discuss
related future research in Section <ref>.
§ PRELIMINARIES
In this section, we introduce some notations and definitions used in this paper.
For a bounded Lipschitz domain 𝒟∈ℝ^d, where d=2,3, we denote the Sobolev space as H^s(𝒟) for a real number s≥ 0.
Its norm and seminorm are denoted by ·_s,𝒟 and |·|_s,𝒟, respectively.
The space H^0(𝒟) coincides with L^2(𝒟), and the L^2-inner product is denoted by (·,·)_𝒟.
When 𝒟=Ω, the subscript 𝒟 will be omitted.
This notation is generalized to vector- and tensor-valued Sobolev spaces.
The notation H_0^1(𝒟) means the space of v∈ H^1(𝒟) such that v=0 on ∂𝒟, and L_0^2(𝒟) means the space of v∈ L^2(𝒟) such that (v,1)_𝒟=0.
The polynomial spaces of degree less than or equal to k are denoted as P_k(𝒟).
We also introduce the Hilbert space
H(div,𝒟):={𝐯∈ [L^2(𝒟)]^d : div 𝐯∈ L^2(𝒟)}
with the norm
𝐯_H(div,𝒟)^2 := 𝐯_0,𝒟^2 + div 𝐯_0,𝒟^2.
For the discrete setting, we assume that there exists a shape-regular triangulation of Ω whose elements T are triangles in two dimensions and tetrahedra in three dimensions.
Also, we consider the collection of all edges/faces of the triangulation, which is the union of the set of all interior edges/faces and the set of all boundary edges/faces.
For each element T∈, let h_T denote the diameter of T and _T (or ) denote the outward unit normal vector on ∂ T.
For each interior edge/face e∈ shared by two adjacent elements T^+ and T^-, we let _e be the unit normal vector from T^+ to T^-.
For each e∈, _e denotes the outward unit normal vector on ∂Ω.
In a triangulation , the broken Sobolev space is defined as
H^s():={v∈ L^2(Ω):v|_T∈ H^s(T), ∀ T∈},
equipped with the norm
v_s,:=(∑_T∈v^2_s,T)^1/2.
When s=0, the L^2-inner product on is denoted by (·,·)_.
Also, the L^2-inner product on is denoted as ⟨·,·⟩_, and the L^2-norm on is defined as
v_0,:=(∑_e∈v^2_0,e)^1/2.
The piecewise polynomial space corresponding to the broken Sobolev space is defined as
P_k() = {v∈ L^2(Ω): v|_T∈ P_k(T), ∀ T∈}.
In addition, the jump and average of v on e∈ are defined as
v:={[ v^+-v^- on e∈,; v on e∈, ].
v:={[ (v^++v^-)/2 on e∈,; v on e∈, ].
where v^± is the trace of v|_T^± on e∈∂ T^+∩∂ T^-. These definitions are extended to vector- and tensor-valued functions.
We finally introduce the trace inequality that holds for any function v∈ H^1(T),
v_0,e^2≤ C(h_T^-1v_0,T^2+h_T∇ v_0,T^2).
§ ENRICHED GALERKIN METHODS FOR THE BRINKMAN EQUATIONS
We first introduce the enriched Galerkin (EG) finite-dimensional velocity and pressure spaces <cit.>.
The space of continuous components for velocity is
= {^C ∈ : ^C|_T∈ [P_1(T)]^d, ∀ T ∈}.
The space of discontinuous components for velocity is defined as
= {^D ∈ L^2(Ω) : ^D|_T = c ( - _T), c ∈ℝ, ∀ T ∈},
where _T is the barycenter of T∈.
Thus, the EG finite-dimensional velocity space is defined as
:= ⊕.
We note that any function ∈ consists of unique continuous and discontinuous components, =^C+^D for ^C∈ and ^D∈.
At the same time, the EG pressure space is
Q_h := { q ∈ : q|_T ∈ P_0(T), ∀ T ∈}.
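To make the size of this discrete pair concrete, the following sketch counts the degrees of freedom of the EG velocity-pressure pair on a simplicial mesh before boundary conditions are imposed; the mesh sizes passed in are illustrative placeholders rather than data from this paper.

# Degrees of freedom of the EG pair on a simplicial mesh in d dimensions:
# a continuous, vector-valued P1 space, one scalar enrichment coefficient c
# per element T, and one constant pressure value per element.
def eg_dof_count(num_vertices: int, num_elements: int, dim: int) -> dict:
    continuous_velocity = dim * num_vertices
    enrichment = num_elements
    pressure = num_elements
    return {
        "velocity": continuous_velocity + enrichment,
        "pressure": pressure,
        "total": continuous_velocity + enrichment + pressure,
    }

# Illustrative numbers: a 16 x 16 uniform triangulation of the unit square
# has 17 * 17 vertices and 2 * 16 * 16 triangles.
print(eg_dof_count(num_vertices=17 * 17, num_elements=2 * 16 * 16, dim=2))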
Therefore, we formulate a standard EG method for the Brinkman equations with the pair of the EG spaces × Q_h by adding the Darcy term to the Stokes formulation <cit.>.
This algorithm employs interior penalty discontinuous Galerkin (IPDG) formulations because any EG velocity function in 𝐕_h has a discontinuity.
IPDG formulations include two penalty terms scaled by h_e with the penalty parameters ρ_1 and ρ_2.
The method provides reliable numerical solutions in the Stokes regime.
However, this approach may not be effective in solving the Brinkman equations in the Darcy regime because it requires
H(div)-conforming discrete velocity functions. Moreover, the method's velocity error bounds may depend on a pressure term inversely proportional to ν.
For this reason, we develop a pressure-robust EG method that produces stable and accurate solutions to Brinkman problems with any value of ν∈(0,1].
First, the velocity reconstruction operator <cit.> is defined as : →ℬDM_1()⊂ H(div,Ω) such that
∫_e () ·_e p_1 ds = ∫_e ·_e p_1 ds,
∀p_1 ∈P_1(e), ∀e ∈,
∫_e () ·_e p_1 ds = 0, ∀p_1 ∈P_1(e), ∀e ∈,
where ℬDM_1() is the Brezzi-Douglas-Marini space of index 1 on .
Then, we propose the pressure-robust EG method as follows.
Using the velocity reconstruction operator , we force discrete velocity functions in to be H(div)-conforming.
We replace the velocity functions in the bilinear form (,)_ in (<ref>) and the right-hand side with the reconstructed velocity .
Thus, the term (,)_ with the H(div)-conforming velocity dominates the formulation when ν approaches to 0 (the Darcy regime).
Moreover, the reconstructed velocity on the right-hand side allows us to obtain error bounds independent of a pressure term inversely proportional to ν.
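At the algebraic level, the modification only changes the Darcy block and the load vector. The following Python sketch is schematic: the matrix and vector names (A, M, S, M_div, R, B, f, f_div) are our own placeholders for assembled finite element operators, not notation from this paper, and the assembly routines themselves are assumed to be available.

import numpy as np
import scipy.sparse as sp

# Schematic assembly of the two saddle-point systems, assuming precomputed
# sparse matrices and load vectors (all names below are our placeholders):
#   A     : IPDG diffusion form on the EG velocity space (with its penalty terms)
#   M     : L2 mass matrix on the EG velocity space
#   S     : jump-penalty matrix scaled by rho_2 and h_e
#   M_div : L2 mass matrix on the H(div)-conforming (BDM1) space
#   R     : matrix of the reconstruction operator, EG dofs -> BDM1 dofs
#   B     : velocity-pressure (divergence) coupling
#   f     : load vector tested with EG basis functions
#   f_div : load vector tested with reconstructed (BDM1) basis functions

def standard_system(nu, A, M, S, B, f):
    K = sp.bmat([[nu * A + M + S, B.T], [B, None]], format="csr")
    return K, np.concatenate([f, np.zeros(B.shape[0])])

def robust_system(nu, A, M_div, S, R, B, f_div):
    darcy = R.T @ M_div @ R                     # (R u, R v) replaces (u, v)
    K = sp.bmat([[nu * A + darcy + S, B.T], [B, None]], format="csr")
    return K, np.concatenate([R.T @ f_div, np.zeros(B.shape[0])])

In particular, only the velocity-velocity block and the right-hand side differ between the two variants, which is why the robust method preserves the minimal degrees of freedom and the cost of the standard one.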
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR ST-EG (ALGORITHM <REF>)
First of all, we introduce the discrete H^1-norm in <cit.> for all ∈,
^2 := ∇_0, ^2 + ρ_1 h_e^-1/2_0, ^2,
where ρ_1 is an H^1-penalty parameter. With this norm, the coercivity and continuity results for the bilinear form (·,·) have been proved in <cit.>: For a sufficiently large H^1-penalty parameter ρ_1, there exist positive constants κ_1 and κ_2 independent of ν and h such that
(, ) ≥κ_1 ^2 ∀∈,
|(, )| ≤κ_2 ∀, ∈.
Then, we define an energy norm for Brinkman problems involving the discrete H^1-norm and L^2-norm,
^2 := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
In this case, ρ_2 is an L^2-penalty parameter that should be sufficiently large for well-posedness, and its simple choice is ρ_2=ρ_1.
The following lemma shows an essential norm equivalence between · and · scaled by ν and h.
For given ν and h, we define a positive constant C_ne (Norm Equivalence) as
C_ne:=C√(ν+h^2(ρ_2/ρ_1+1)),
where C is a generic positive constant independent of ν and h.
Then, the following norm equivalence holds: For any ∈, we have
√(ν)≤√(ν+c_1 h^2)≤≤ C_ne,
for some small 0<c_1<1. Moreover, the constant C_ne is bounded as
C_ne≤ C( √(ν)+h)
for some generic constant C>0.
We observe each term in the energy norm
^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
Since .|_T is a linear polynomial in the second term, a scaling argument implies
_0≤ Ch∇_0,≤ Ch.
For the trace term, we have
ρ_2 h_e^1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)ρ_1h_e^-1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)^2.
Thus, we obtain
^2≤ C(ν+h^2(ρ_2/ρ_1+1))^2.
On the other hand, the inverse inequality and the same argument for the trace term lead to
^2≤ C h^-2(^2_0+ρ_2 h_e^1/2_0, ^2),
where C contains ρ_1/ρ_2. In this case, we assume C>1 and set c_1=1/C, so
(ν+c_1h^2)^2≤^2.
Let us introduce the interpolation operator in <cit.> : [H^2(Ω)]^d → defined by
Π_h=Π_h^C+Π_h^D,
where Π_h^C∈_h
is the nodal value interpolant of and Π_h^D∈_h satisfies
(∇·Π_h^D,1)_T=(∇·( - Π_h^C ), 1)_T for all T∈.
The following interpolation error estimates and stability <cit.> are used throughout our numerical analysis:
|- | _j, ≤C h^m-j ||_m, 0 ≤j ≤m ≤2, ∀∈[H^2(Ω)]^d,
- ≤C h _2, ∀∈[H^2(Ω)]^d,
≤C _1,
∀∈.
For the pressure, we introduce the local L^2-projection 𝒫_0: → Q_h such that (q - q, 1)_T = 0 for all T∈. Its interpolation error estimate is given as,
q -
q_0 ≤ C h q_1, ∀ q ∈ H^1(Ω).
§.§ Well-posedness
We first prove the coercivity and continuity results concerning the energy norm ·.
For any ,∈𝐕_h, we have the coercivity and continuity results:
ν(,)+(,) ≥ K_1^2,
|ν(,)+(,)| ≤ K_2,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
If we observe the bilinear forms (·,·) and (·,·) and use the coercivity (<ref>), then we have
ν(,)+(,) ≥κ_1ν^2+_0^2 +ρ_2 h_e^1/2_0, ^2
≥min(κ_1,1)^2.
Moreover, it follows from the Cauchy-Schwarz inequality and the continuity (<ref>) that
|ν(,)+(,)| ≤κ_2ν+_0_0
+ (√(ρ_2)h_e^1/2_0,)(√(ρ_2)h_e^1/2_0,)
≤max(κ_2,1).
Next, we prove the discrete inf-sup condition for the problem (<ref>) in Algorithm <ref>.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, there exists a positive constant C_1:=C_is/C_ne such that
sup_∈(,q)/≥ C_1q_0, ∀ q∈ Q_h,
where C_is>0 (Inf-Sup), independent of ν and h, is the constant for the inf-sup condition for · in <cit.>.
It follows from the discrete inf-sup condition in <cit.> and the upper bound of in (<ref>) that
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/.
Furthermore, Lemma <ref> yields the continuity of (·,·) with .
For any ∈ and q∈ Q_h, there exists a positive constant C independent of ν and h such that
|(,q)|≤C/√(ν+c_1 h^2)q_0.
It follows from
the continuity of (·,·) in <cit.> and
the upper bound of in (<ref>) that
|(,q)|≤ Cq_0≤C/√(ν+c_1 h^2)q_0.
Thus, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
It suffices to show that _h=0 and p_h=0 when =0 because and Q_h are finite-dimensional spaces.
Choosing =_h in (<ref>) and q=p_h in (<ref>) and adding the two equations imply ν(_h,_h)+(_h,_h)=0.
Hence, _h=0 by (<ref>), so _h=0.
If _h=0 in (<ref>), then (,p_h)=0 for all ∈. Therefore, the inf-sup condition (<ref>) yields p_h_0=0, so p_h=0.
§.§ Error estimates
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>).
We define the error functions used in the error estimates
χ_h:=-Π_h, 𝐞_h:=Π_h-_h, ξ_h:=p- p, ϵ_h:= p-p_h.
First, we derive error equations in the following lemma.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h),
(_h,q) =-(χ_h,q),
where the supplemental bilinear forms are defined as follows:
l_1(,):=ν(Π_h-,),
l_2(,):=(Π_h-, )_,
𝐬(Π_h,):=ρ_2⟨ h_eΠ_h,⟩_.
We have -(Δ,)
_=(,) for any ∈ from <cit.>, which implies that
-ν(Δ,)_
=ν(Π_h,)-ν(Π_h-,).
The definition of (·,·) also gives
(,)_=(Π_h,)-(Π_h-, )_-ρ_2⟨ h_eΠ_h,⟩_,
and integration by parts and continuity of p lead to
(∇ p,)_ = ∑_T∈⟨ p,·⟩_∂ T -(p,∇·)_T= -(,p).
Thus, the equation (<ref>) imposes
ν(Π_h,)+(Π_h,)-(,p)=(,)+l_1(,)+l_2(,)+𝐬(Π_h,).
By comparing this equation with (<ref>) in the method, we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h).
Moreover, it follows from the continuity of and (<ref>) that
(∇·,q)_=(,q)=0=(_h,q),
which implies (<ref>).
In what follows, we prove estimates for the supplemental bilinear forms in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν) h_2,
|l_2(,)|≤C h^2_2,
|𝐬(Π_h,)|≤Ch^2_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
It follows from (<ref>), (<ref>), and (<ref>) that
|l_1(,)| =|ν(Π_h-,)|
≤νκ_2Π_h-
≤ Cν h _2
≤ C√(ν)h_2.
Using the Cauchy-Schwarz inequality and (<ref>),
we get the following upper bounds
|l_2(,)| =|(Π_h-,)_|
≤Π_h-_0_0
≤ Ch^2||_2.
Finally, the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) imply
|𝐬(Π_h,)| =|ρ_2⟨ h_eΠ_h,⟩_|
=|ρ_2⟨ h_eΠ_h-,⟩_|
≤ρ_2h_e^1/2Π_h-_0,h_e^1/2_0,
≤h_e^1/2Π_h-_0,
≤ Ch^2||_2.
In addition, we expand the continuity of (·,·) in <cit.> to be relevant to the error equations (<ref>) because χ_h=-Π_h∉𝐕_h and ξ_h=p- p∉Q_h.
For any ∈ and q∈ Q_h, we have
|(,ξ_h)|≤Ch p_1,
|(χ_h,q)|≤Chq_0_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
First, we use the Cauchy-Schwarz inequality to get
|(,ξ_h)| =|(∇·,ξ_h)_-⟨·_e,ξ_h⟩_|
≤ C(∇_0,ξ_h_0+h_e^-1/2_0,h_e^1/2ξ_h_0,).
Then, the trace term is bounded by using the trace inequality (<ref>) and interpolation error estimate (<ref>),
h_e^1/2ξ_h_0,^2≤ C(ξ_h_0^2+h^2∇ξ_h_0,^2)≤ Ch^2p_1^2
because ∇ξ_h=∇(p- p)=∇ p.
Hence, the definition of the discrete H^1-norm and estimate (<ref>) imply
|(,ξ_h)|≤ Chp_1.
Similarly, it follows from the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) that
|(χ_h,q)| ≤ C(∇χ_h_0,q_0+h_e^-1/2χ_h_0,h_e^1/2q_0,)
≤ Cq_0χ_h≤ Chq_0_2.
Therefore, we show error estimates of the method in Algorithm <ref> for the Brinkman equations.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following error estimates
Π_h-_h ≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ],
p-p_h_0 ≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ].
First of all, we apply the continuity results (<ref>), (<ref>), the estimates (<ref>), and the norm equivalence (<ref>) to the error equation (<ref>),
(,ϵ_h) =ν(_h,)+(_h,)-l_1(,)-l_2(,)-𝐬(Π_h,)-(,ξ_h)
≤ C(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
Thus, the inf-sup condition (<ref>) with (<ref>) implies
ϵ_h_0≤ C(√(ν)+h)(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
We choose =_h in (<ref>) and q=ϵ_h in (<ref>) and substitute (_h,ϵ_h) with -(χ_h,ϵ_h) to obtain
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_2(,_h)+𝐬(Π_h,_h)+(_h,ξ_h).
In this case, we estimate the term (χ_h,ϵ_h)
using (<ref>),
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
The term (_h,ξ_h) is estimated by using (<ref>) and (<ref>),
|(_h,ξ_h)|≤ Chp_1_h≤ Ch/√(ν+c_1h^2)p_1_h.
Hence, it follows from (<ref>), (<ref>), (<ref>), and (<ref>) that
_h^2≤ C(h_2ϵ_h_0 + √(ν)h_2_h+h^2_2_h + h/√(ν+c_1 h^2)p_1_h).
We use the estimate (<ref>) and omit high-order terms (h^3 or h^4) to obtain,
h_2ϵ_h_0 ≤ C( (√(ν)+h)h_2_h + ν h^2_2^2 + √(ν)+h/√(ν+c_1 h^2)h^2_2p_1)
≤ C( (√(ν)+h)h_2_h + ν h^2_2^2+ h^2_2p_1)
because √(ν) +h≤ (√(2/c_1))√(ν+c_1 h^2).
If we apply the Young’s inequality to each term with a positive constant α, then we have
√(ν)h_2_h≤ν h^2/2α_2^2+α/2_h^2,
h^2_2_h≤h^4/2α_2^2 + α/2_h^2,
h^2_2p_1≤h^2/2α_2^2 + α h^2/2p_1^2,
h/√(ν+c_1 h^2)p_1_h≤h^2/2α(ν+c_1 h^2)p_1^2+α/2_h^2.
Therefore, a proper α implies
_h^2≤ C[(ν+1)h^2_2^2 + ( h^2+h^2/ν+c_1 h^2)p_1^2 ],
so we finally get
_h≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ].
On the other hand, we observe the intermediate estimate (<ref>) and omit high-order terms (h^2 or h^3) to show the pressure error estimate,
ϵ_h_0≤ C[(√(ν)+h)_h+ν h_2+hp_1].
Thus, we bound _h with the velocity error estimate (<ref>), so we finally obtain
ϵ_h_0≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ],
when omitting h^2-terms.
Theorem <ref> explains that the errors converge in the first order with h under the condition h<√(ν) easily satisfied in the Stokes regime.
However, the velocity error in the Darcy regime may not decrease with h due to the pressure term in the velocity error bound, that is, when ν→ 0,
h/√(ν+c_1h^2)p_1→1/√(c_1)p_1.
We will confirm these theoretical results through numerical experiments.
For this reason, the method in Algorithm <ref> may not be effective in solving the Brinkman equations with small ν, which motivates us to develop and analyze the method in Algorithm <ref>.
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR PR-EG (ALGORITHM <REF>)
In this section, we prove well-posedness and error estimates for the method in Algorithm <ref>.
The error estimates show that the method's velocity and pressure errors decrease in the optimal order of convergence in both the Stokes and Darcy regimes, so we expect stable and accurate numerical solutions with any ν as h decreases.
We first define another energy norm by replacing _0 with _0,
^2_ℛ := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
We also introduce the interpolation error estimate of the operator in <cit.>.
For any ∈, there exists a positive constant C independent of ν and h such that
- _0≤ Chh_e^-1/2_0,≤ C h .
This interpolation error estimate allows to have the norm equivalence between _ℛ and scaled by ν and h, similar to Lemma <ref>.
For any ∈, it holds
√(ν)≤√(ν+c_2 h^2)≤_ℛ≤ C_ne,
where C_ne is the constant defined in Lemma <ref> and 0<c_2<1 is a small constant.
It suffices to prove that _0≤ Ch for the upper bound because _0 is replaced by _0 in the norm _ℛ.
Indeed, it follows from the triangle inequality, the error estimate (<ref>), and the argument in the proof of Lemma <ref> that
_0 ≤_0 + -_0≤_0+Ch≤ Ch.
Hence, we obtain
_ℛ^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2≤ C(ν + h^2(ρ_2/ρ_1+1))^2.
For the lower bound, we recall the result in Lemma <ref> and apply (<ref>) to it,
^2 ≤ C h^-2(_0^2+ρ_2 h_e^1/2_0, ^2)
≤ C h^-2(_0^2+-_0^2+ρ_2 h_e^1/2_0, ^2)
≤ Ch^-2(_0^2+ h^2h_e^-1/2_0,^2+ρ_2 h_e^1/2_0, ^2)
=Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2)+C_0h_e^-1/2_0,^2,
where C_0 contains ρ_1/ρ_2 but is independent of ν and h.
Then, for a sufficiently large ρ_1, we have
ρ_1-C_0/ρ_1^2≤ Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2).
Therefore, we set c_2=(ρ_1-C_0)/(Cρ_1) and assume c_2<1 to have
c_2h^2^2≤_0^2+ρ_2h_e^1/2_0,^2,
which implies
(ν+c_2h^2)≤_.
In addition, we prove the norm equivalence between and _ using the results in Lemma <ref>, Lemma <ref>, and Lemma <ref>.
For any ∈, it holds
c_*_≤≤ c^*_,
where c_* and c^* are positive constants independent of ν and h.
It follows from the results in Lemma <ref> and Lemma <ref> that
ν^2+_0^2≤ C(ν^2+c_1h^2^2+_0^2)≤ C^2.
Similarly, from Lemma <ref> and Lemma <ref>, we obtain
ν^2+_0^2≤ C(ν^2+c_2h^2^2+_0^2)≤ C^2_.
§.§ Well-posedness
Most of the results for the well-posedness of the method are similar to those of the method. Thus, we briefly state and prove the results concerning ·_ℛ in this subsection.
For any ,∈𝐕_h, the coercivity and continuity results hold:
ν(,)+𝐜̃(,) ≥ K_1^2_ℛ,
|ν(,)+𝐜̃(,)| ≤ K_2_ℛ_ℛ,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
The proof is the same as that of Lemma <ref>, so we omit the details here.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, we have
sup_∈(,q)/_ℛ≥ C_1q_0, ∀ q∈ Q_h,
for C_1=C_is/C_ne defined in Lemma <ref>.
Similar to the proof of Lemma <ref>, the discrete inf-sup condition in <cit.> and the upper bound of _ℛ in (<ref>) imply
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/_ℛ.
For any ∈ and q∈ Q_h, it holds
|(,q)|≤C/√(ν+c_2 h^2)q_0_ℛ,
for a generic positive constant C independent of ν and h.
Similar to the proof of Lemma <ref>, this result is proved by the continuity of (·,·) in <cit.> and the upper bound of in (<ref>).
Finally, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
The proof is the same as Theorem <ref>, so we omit the details here.
§.§ Error estimates
We recall the error functions
χ_h:=-Π_h, 𝐞_h:=Π_h-_h, ξ_h:=p- p, ϵ_h:= p-p_h,
where (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] is the solution to (<ref>)-(<ref>).
Then, we derive error equations for the method.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,),
(_h,q) =-(χ_h,q),
where l_1(,) and 𝐬(Π_h,) are defined in Lemma <ref>, and the other supplemental bilinear forms are defined as follows:
l_3(,):=ν(Δ, -)_,
l_4(,):=(Π_h-,)_.
Since -(Δ,)
_=(,) for any ∈, we have
-ν(Δ,)_ =-ν(Δ,)_-ν(Δ,-)_
=ν(,)-ν(Δ,-)_
=ν(Π_h,)-ν(Π_h-,)-ν(Δ,-)_.
By the definition of (·,·), we also have
(,)_ =(Π_h,)_-(Π_h-,)_
=(Π_h,)-(Π_h-,)_-ρ_2⟨ h_eΠ_h,⟩_.
Since · is continuous on ∂ T and ∇· is constant in T, integration by parts implies
(∇ p,)_ = -(, p).
Hence, we obtain the following equation from (<ref>),
ν(Π_h,)+(Π_h,)-(, p)=(,)+l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
If we compare this equation with (<ref>) in the method, then we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
For the second equation (<ref>), the continuity of and (<ref>) in the method lead us to
(∇·,q)_=(,q)=0=(_h,q).
We present estimates for the supplementary bilinear forms used in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν)h_2_ℛ,
|l_3(,)|≤C√(ν)h_2_ℛ,
|l_4(,)|≤C h_2_ℛ,
|𝐬(Π_h,)|≤C h^2_2_ℛ,
where C is a generic positive constant independent of ν and h and may vary in each case.
The estimates (<ref>) and (<ref>) are proved by the estimate in Lemma <ref> and the norm equivalence (<ref>).
On the other hand, the Cauchy-Schwarz inequality, (<ref>), and (<ref>) lead to
|l_3(,)| =|ν(Δ, -)_|
≤ν_2-_0
≤ Cν h_2
≤ C√(ν)h_2_ℛ.
Using the Cauchy-Schwarz inequality, (<ref>), (<ref>), and (<ref>),
we get the following upper bounds,
|l_4(,)| =|(Π_h-,)_|
≤|(Π_h-Π_h,)_|+|(Π_h-,)_|
≤ ChΠ_h_0+Π_h-_0_0
≤ Ch||_1_ℛ.
Hence, we prove error estimates of the method in Algorithm <ref>.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following pressure-robust error estimates
Π_h-_h_ℛ≤ Ch(√(ν)+1)_2,
𝒫_0p-p_h_0≤ C h(ν+√(ν))_2 + Ch^2_2.
We start with the error equation (<ref>),
(,ϵ_h)=ν(_h,)+(_h,)-l_1(,)-l_3(,)-l_4(,)-𝐬(Π_h,).
Then, it follows from (<ref>) and (<ref>) that
(,ϵ_h)≤ C(_h _ℛ+√(ν)h_2+h_2+h^2_2)_ℛ.
From the inf-sup condition (<ref>) with (<ref>), we obtain
ϵ_h_0≤ C(√(ν)+h)(_h_ℛ+√(ν)h_2+h_2+h^2_2).
We also choose =_h and q=ϵ_h in (<ref>) and substitute (<ref>) into (<ref>) to get
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_3(,_h)+l_4(,_h)+𝐬(Π_h,_h).
Here, it follows from (<ref>) that
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
Therefore, from (<ref>), (<ref>), and (<ref>), we have
_h_ℛ^2≤ C( h_2ϵ_h_0+√(ν)h_2_h_ℛ+h_2_h_ℛ),
while omitting h^2-terms.
We also replace ϵ_h_0 by its upper bound in (<ref>) omitting high-order terms,
_h^2_ℛ≤ C(√(ν)h_2_h_ℛ+h_2_h_ℛ).
In this case, the Young's inequality gives
√(ν)h_2_h_ℛ≤ν h^2/2α_2^2+α/2_h^2_ℛ,
h_2_h_ℛ≤h^2/2α_2^2+α/2_h^2_ℛ.
Therefore, it follows from choosing a proper α that
_h^2_ℛ≤ Ch^2(ν+1)_2^2,
which implies that
_h_ℛ≤ Ch(√(ν)+1)_2.
If we apply this estimate to (<ref>), then we obtain
ϵ_h_0≤ Ch(ν+√(ν))_2+Ch^2_2.
We emphasize that the error bounds in Theorem <ref> are pressure-robust and have no detrimental effect from small ν.
With ν→0, the method's velocity errors decrease in the optimal order, and pressure errors do in the second order (superconvergence is expected).
This result implies that the method produces stable and accurate solutions to the Brinkman equations in the Darcy regime.
In addition, we prove total error estimates showing the optimal orders of convergence in velocity and pressure.
Under the same assumption of Theorem <ref>, we have the following error estimates
-_h_ℛ≤ Ch(√(ν)+1)_2,
p-p_h_0≤ Ch((ν+√(ν))_2+p_1).
For the velocity error estimate, we show
-Π_h_ℛ≤ C√(ν)h_2.
More precisely, we recall χ_h=-Π_h and observe the energy norm,
χ_h^2_ℛ=νχ_h^2+χ_h_0^2+ρ_2h_e^1/2χ_h_0,^2.
Then, it follows from (<ref>), (<ref>), and (<ref>) that
χ_h_0≤χ_h-χ_h_0+χ_h_0≤ Chχ_h+χ_h_0≤ Ch^2_2.
Also, from (<ref>) and (<ref>), we obtain
h_e^1/2χ_h_0,≤ C(χ_h_0^2+h^2∇χ_h_0,^2)^1/2≤ Ch^2_2.
Hence, since χ_h≤ Ch_2, the error bound is
χ_h_ℛ≤ C(√(ν)h+h^2)_2.
Furthermore, the pressure error estimate is readily proved by the triangle inequality and interpolation error estimate (<ref>).
In conclusion, the proposed method solves the Brinkman equations in both the Stokes and Darcy regimes, having the optimal order of convergence for both velocity and pressure.
§ NUMERICAL EXPERIMENTS
This section shows numerical experiments validating our theoretical results with two- and three-dimensional examples.
The numerical methods in this paper and their discrete solutions are denoted as follows:
* (_h^,p_h^): Solution by the method in Algorithm <ref>.
* (_h^,p_h^): Solution by the method in Algorithm <ref>.
While considering the scaled Brinkman equations (<ref>) with the parameter ν, we recall the error estimates for the method in Theorem <ref>,
Π_h-^_h≲(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1,
p-p_h^_0≲(ν+√(ν))h_2 + (√(ν)+1)hp_1,
and the error estimates for the method from Theorem <ref>
Π_h-_h^≲(√(ν)+1)h_2,
p-p_h^_0≲(ν+√(ν))h_2+h^2_2.
We mainly check the error estimates (<ref>) and (<ref>) by showing various numerical experiments with ν and h.
We also display the difference between the numerical solutions of the two methods in the Darcy regime, which shows that the robust method is needed to obtain stable and accurate velocity solutions.
Moreover, we present permeability tests considering the Brinkman equations (<ref>) with viscosity μ and permeability K and applying both EG methods.
The permeability tests further motivate the use of the robust method in the case of extreme μ or K.
We implement the numerical experiments using the authors' MATLAB codes developed based on iFEM <cit.>.
The penalty parameters are ρ_1=ρ_2=3 for all the numerical experiments.
§.§ Two dimensional tests
Let the computational domain be Ω=(0,1)× (0,1). The velocity field and pressure are chosen as
= ([ 10x^2(x-1)^2y(y-1)(2y-1); -10x(x-1)(2x-1)y^2(y-1)^2 ]),
p = 10(2x-1)(2y-1).
Then, the body force and the Dirichlet boundary condition are obtained from (<ref>) using the exact solutions.
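As a sanity check, the chosen velocity field is divergence-free, and the body force for the scaled equations can be generated symbolically from the exact solution; the following SymPy snippet does both, with ν kept as a symbol.

import sympy as sym

x, y, nu = sym.symbols("x y nu")
u1 = 10 * x**2 * (x - 1)**2 * y * (y - 1) * (2*y - 1)
u2 = -10 * x * (x - 1) * (2*x - 1) * y**2 * (y - 1)**2
p = 10 * (2*x - 1) * (2*y - 1)

# Incompressibility of the manufactured velocity field.
div_u = sym.simplify(sym.diff(u1, x) + sym.diff(u2, y))
print("div u =", div_u)          # expected: 0

# Body force f = -nu * Laplace(u) + u + grad(p) for the scaled equations.
lap = lambda w: sym.diff(w, x, 2) + sym.diff(w, y, 2)
f1 = sym.simplify(-nu * lap(u1) + u1 + sym.diff(p, x))
f2 = sym.simplify(-nu * lap(u2) + u2 + sym.diff(p, y))
print("f =", (f1, f2))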
§.§.§ Robustness and accuracy test
We compare the standard and robust EG methods to assess their robustness and check their accuracy based on the error estimates (<ref>) and (<ref>).
First, we interpret the method's velocity error estimate (<ref>) depending on the relation between coefficient ν and mesh size h.
The first-order convergence of the energy norm with h is guaranteed when ν≫ h^2, but it is hard to tell any order of convergence when ν is smaller than h^2 due to the term h/√(ν+c_1h^2).
On the other hand, the velocity error estimate for the method (<ref>) means the first-order convergence in h regardless of ν.
In Figure <ref>, we check the discrete H^1-error for the velocity scaled by ν, √(ν)-_h. It is a component of the energy norm -_h.
The method tends to produce errors increasing with 𝒪(h^-1/2) when h>√(ν), while the errors decrease with 𝒪(h^3/2) when h<√(ν).
This result supports the error estimates (<ref>) (superconvergence may happen because we solve the problem on structured meshes) and means that a tiny mesh size is needed for accurate solutions with small ν.
However, the method's errors uniformly show the first-order convergence, 𝒪(h), regardless of ν.
This result supports the error estimates (<ref>), so the method guarantees stable and accurate solutions in both the Stokes and Darcy regimes.
We fix ν=10^-6 and compare the velocity errors and solutions of the and methods.
Table <ref> displays the energy errors and their major components, the discrete H^1-errors scaled by ν and L^2-errors.
For the method, the energy errors decrease in the half-order convergence because the L^2-errors are dominant and decrease in the same order.
However, the H^1-errors keep increasing unless h<√(ν)=10^-3, so the H^1-errors will become dominant and deteriorate the order of convergence of the energy errors.
On the other hand, using the method, we expect from (<ref>) that the energy errors and major components converge in at least the first order of h.
Indeed, Table <ref> shows that the H^1-errors decrease in the first order with h, while the L^2-errors reduce in the second order.
Since the energy error involves both H^1- and L^2-errors, the energy errors decrease in the second order because of the dominant L^2-errors but eventually converge in the first order driven by the H^1-errors.
In Figure <ref>, the method produces accurate velocity solutions clearly showing a vortex flow pattern when ν=10^-6 and h=1/16. In contrast, the numerical velocity from the method includes significant oscillations around the boundary of the domain.
Moreover, the pressure error estimates (<ref>) and (<ref>) tell us that the convergence order for the pressure errors is at least 𝒪(h) in both methods. However, the method can produce superconvergent pressure errors because the term h^2p_1 is dominant when ν is small.
In Table <ref>, the pressure errors of the method, p-p_h^_0, decrease in at least 𝒪(h^3), which means superconvergence compared to the interpolation error estimate (<ref>).
On the other hand, the method still yields pressure errors converging in the first order with h.
Since the interpolation error is dominant in the total pressure errors p-p_h_0, the errors in Table <ref> have the first-order convergence with h in both methods.
Therefore, the numerical results support the pressure error estimates (<ref>) and (<ref>).
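The convergence orders read off from the tables are the usual observed rates between consecutive uniform refinements; a helper of the following form reproduces them from any error column (the numbers in the example are placeholders, not values from the tables).

import math

# Observed order of convergence between consecutive refinements:
#   rate_k = log(e_{k-1} / e_k) / log(h_{k-1} / h_k).
def observed_orders(hs, errors):
    return [
        math.log(errors[k - 1] / errors[k]) / math.log(hs[k - 1] / hs[k])
        for k in range(1, len(errors))
    ]

# Placeholder example: an O(h^2) error sequence on h = 1/8, 1/16, 1/32, 1/64.
hs = [1/8, 1/16, 1/32, 1/64]
errors = [4.1e-2, 1.0e-2, 2.6e-3, 6.4e-4]
print(observed_orders(hs, errors))   # rates close to 2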
§.§.§ Error profiles with respect to ν
We shall confirm the error estimates (<ref>) and (<ref>) in terms of the parameter ν by checking error profiles depending on ν.
We define the following error profile functions of ν based on the error estimates and show that these functions explain the behavior of the velocity and pressure errors with ν:
* E_,2^(ν):=0.1h√(ν)+0.3h/√(ν+3h^2)+0.4h=0.1/32√(ν)+0.3/√(32^2ν+3)+0.4/32 from (<ref>),
* E_,2^(ν):=0.8h√(ν)+0.05h=0.8/32√(ν)+0.05/32 from (<ref>),
* E_p,2^(ν):=2hν+3h√(ν)+0.3h=2/32ν+3/32√(ν)+0.3/32 from (<ref>),
* E_p,2^(ν):=0.5hν+0.01h√(ν)+0.01h^2=0.5/32ν+0.01/32√(ν)+0.01/32^2 from (<ref>),
where h=1/32.
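Since these profile functions are closed-form expressions in ν, they can be tabulated directly; the following sketch evaluates all four curves on a logarithmic grid of ν at the fixed mesh size h=1/32, with the labels ST and PR used as our shorthand for the standard and robust methods.

import numpy as np

h = 1 / 32  # fixed mesh size used for the error profiles

def E_u_ST(nu):  # velocity profile associated with the standard-method estimate
    return 0.1 * h * np.sqrt(nu) + 0.3 * h / np.sqrt(nu + 3 * h**2) + 0.4 * h

def E_u_PR(nu):  # velocity profile associated with the robust-method estimate
    return 0.8 * h * np.sqrt(nu) + 0.05 * h

def E_p_ST(nu):  # pressure profile associated with the standard-method estimate
    return 2 * h * nu + 3 * h * np.sqrt(nu) + 0.3 * h

def E_p_PR(nu):  # pressure profile associated with the robust-method estimate
    return 0.5 * h * nu + 0.01 * h * np.sqrt(nu) + 0.01 * h**2

for nu in np.logspace(0, -8, 9):
    print(f"nu={nu:.0e}  velocity: {E_u_ST(nu):.2e} vs {E_u_PR(nu):.2e}  "
          f"pressure: {E_p_ST(nu):.2e} vs {E_p_PR(nu):.2e}")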
Figure <ref> shows the velocity and pressure errors and the graphs of the above error profile functions when ν decreases from 1 to 0 and h=1/32.
As shown in Figure <ref>, the velocity errors for the method increase when ν is between 1 to 10^-4 and tend to remain constant when ν is smaller.
The method's pressure errors decrease slightly and stay the same as ν→0.
On the other hand, the velocity and pressure errors for the method significantly reduce and remain the same after ν=10^-4.
This error behavior can be explained by the graphs of the error profile functions guided by the error estimates (<ref>) and (<ref>), so this result supports the estimates concerning ν.
In addition, the velocity and pressure errors for the method are almost 1000 times smaller than the method in Figure <ref>.
Therefore, we confirm that the method guarantees more accurate solutions for velocity and pressure when ν is small.
§.§.§ Permeability test
In this test, we consider the Brinkman equations (<ref>) with viscosity μ=10^-6 and permeability given as the permeability map in Figure <ref>.
The permeability map indicates that fluid tends to flow following the blue regions, so the magnitude of numerical velocity will be more significant in the blue areas than in the red parts.
We set the velocity on the boundary of the domain as =⟨ 1,0⟩ and body force as = ⟨ 1, 1⟩.
We mainly compare the magnitude of the numerical velocity obtained from the two methods in Figure <ref>.
We clearly see that the method's velocity is more stable than the method's velocity containing nonnegligible noises (or oscillations) around the boundary.
This result tells that the method is necessary for stable and accurate velocity solutions to the Brinkman equations with extreme viscosity and permeability.
§.§ Three dimensional tests
We consider a three-dimensional flow in a unit cube Ω=(0,1)^3. The velocity field and pressure are chosen as
= ([ sin(π x)cos(π y) - sin(π x)cos(π z); sin(π y)cos(π z) - sin(π y)cos(π x); sin(π z)cos(π x) - sin(π z)cos(π y) ]),
p = π^3sin(π x)sin(π y)sin(π z)-1.
The body force and the Dirichlet boundary condition are given in the same manner as the two-dimensional example.
§.§.§ Robustness and accuracy test
In the two-dimensional tests, we checked that the condition h<√(ν) was required to guarantee the optimal order of convergence for the method, while the method showed a uniform performance in convergence independent of ν.
We obtained the same result as in Figure <ref> from this three-dimensional test.
Table <ref> displays the velocity solutions' energy errors and influential components, comparing the method with when ν=10^-6.
The method's energy errors tend to decrease because the dominant L^2-errors decrease, but the H^1-errors scaled by ν increase.
These H^1-errors may make the energy errors nondecreasing until h<√(ν)=10^-3.
However, the methods guarantee at least first-order convergence for all the velocity errors, showing much smaller errors than the method.
This numerical result supports the velocity error estimates in (<ref>) and (<ref>), and we expect more accurate solutions from the method when ν is small.
In addition, we compare numerical velocity solutions of the and methods when ν=10^-6 and h=1/16 in Figure <ref>.
The velocity solutions of both methods seem to capture a three-dimensional vortex flow expected from the exact velocity.
However, the velocity of the method contains noises around the right-top and left-bottom corners, where the streamlines do not form a circular motion.
In Table <ref>,
as expected in (<ref>), the method's pressure errors decrease in at least first-order.
On the other hand, the method's pressure errors, p -p_h^𝚄𝚁_0, decrease much faster, showing superconvergence.
This phenomenon is expected by the pressure estimate (<ref>) when ν is small.
Moreover, the orders of convergence of the total pressure errors, p-p_h_0,
for both methods are approximately one due to the interpolation error.
§.§.§ Error profiles with respect to ν
We define error profile functions suitable for the three-dimensional test by determining constants in the estimates (<ref>) and (<ref>):
* E_,3^(ν):=0.1h√(ν)+h/√(ν+3h^2)+9h=0.1/16√(ν)+1/√(16^2ν+3)+9/16 from (<ref>)
* E_,3^(ν):=6h√(ν)+0.25h=6/16√(ν)+0.25/16 from (<ref>),
* E_p,3^(ν):=1.5hν+h√(ν)+2.5h=1.5/16ν+1/16√(ν)+2.5/16 from (<ref>),
* E_p,3^(ν):=2hν+0.02h√(ν)+0.2h^2 = 2/16ν+0.02/16√(ν)+0.2/16^2 from (<ref>),
where h=1/16.
In Figure <ref>, the method's velocity and pressure errors decrease when ν changes from 1 to 10^-4 and remain the same when ν gets smaller.
However, the errors for the method slightly increase or decrease when 10^-4≤ν≤ 1, and they stay the same as ν→0.
Thus, the errors of the method are almost 100 times smaller than the method when ν≤ 10^-4, which means the method solves the Brinkman equations with small ν more accurately.
The error profile functions show similar error behaviors in Figure <ref>, supporting error estimates (<ref>) and (<ref>).
§.§.§ Permeability test
We apply piecewise constant permeability to the Brinkman equations (<ref>) in the cube domain Ω=(0,1)^3,
K() = {[ 10^-6 if ||≤ (0.25)^2,; 1 otherwise. ].
The other conditions are given as; viscosity μ=10^-6, boundary condition =⟨ 1,0,0⟩, and body force =⟨ 1, 1,1⟩.
We expect the fluid flow to be faster out of the ball with small permeability, and it tends to avoid the ball and be affected by the boundary velocity.
The streamlines and colored magnitude of the method's velocity in Figure <ref> exactly show such an expectation on the fluid flow, while the method fails to provide a reliable velocity solution.
§ CONCLUSION
In this paper, we proposed a pressure-robust numerical method for the Brinkman equations with minimal degrees of freedom based on the EG piecewise linear velocity and constant pressure spaces <cit.>.
To derive the robust method, we used the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space.
Then, we replaced the EG velocity in the Darcy term and the test function on the right-hand side with the reconstructed velocity. With this simple modification, the robust EG method showed uniform performance in both the Stokes and Darcy regimes, in contrast to the standard EG method, which requires the mesh restriction h<√(ν) that is impractical in the Darcy regime.
We also validated the error estimates and performance of the standard and robust EG methods through several numerical tests with two- and three-dimensional examples.
Our efficient and robust EG method for the Brinkman equations can be extended to various Stokes-Darcy modeling problems, such as coupled models with an interface and time-dependent models. Also,
the proposed EG method can be extended for nonlinear models, such as nonlinear Brinkman models for non-Newtonian fluid and unsteady Brinkman-Forchheimer models.
|
http://arxiv.org/abs/2307.04893v2 | 20230710203123 | Choosing Well Your Opponents: How to Guide the Synthesis of Programmatic Strategies | [
"Rubens O. Moraes",
"David S. Aleixo",
"Lucas N. Ferreira",
"Levi H. S. Lelis"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Choosing Well Your Opponents: How to Guide the Synthesis of Programmatic Strategies
Rubens O. Moraes, David S. Aleixo, Lucas N. Ferreira, Levi H. S. Lelis
August 12, 2023
====================================================================================================================
This paper introduces an algorithm for providing a set of reference strategies to guide the search for programmatic strategies in two-player zero-sum games. Previous learning algorithms, such as Iterated Best Response (IBR), Fictitious Play (FP), and Double-Oracle (DO), can be computationally expensive or miss important information for guiding search algorithms. Our algorithm actively selects a set of reference strategies to improve the search signal. We empirically demonstrate the advantages of our approach while guiding a local search algorithm for synthesizing strategies in three games, including a challenging real-time strategy game. Results show that our algorithm learns reference strategies that provide a stronger search signal than IBR, FP, and DO. We also simulate a tournament of the real-time strategy game, where a synthesizer using our approach outperformed the winners of the two latest competitions, which were programmatic strategies written by human programmers.
§ INTRODUCTION
Programmatic strategies encode game strategies in human-understandable programs. Such programmatic encoding allows domain experts to interpret and modify computer-generated strategies, which can be valuable depending on the application domain (e.g., the games industry). Previous works have used Iterated-Best Response (IBR) <cit.> as the learning algorithm for synthesizing programmatic strategies <cit.>. Given a game, IBR starts with an arbitrary strategy for playing the game and it approximates a best response to it; in the next iteration, it approximates a best response to the best response. This process is repeated a number of iterations and the programmatic strategy synthesized in the last iteration is returned.
The computation of the best responses in the IBR loop is performed by searching in the programmatic space defined by a domain-specific language. Given a target strategy, the algorithm searches for a program encoding a best response to it. Previous work used local search algorithms for searching in the programmatic space <cit.>. The target strategy that IBR provides serves as a guiding function. In the context of local search, when considering the neighbors of a candidate solution, local search algorithms prefer to accept a program that achieves a higher utility value against the target strategy. Since IBR considers a single strategy as a target, the search signal is often weak. This is because the neighbors of a candidate solution that performs poorly against the target strategy are also likely to perform poorly against it—small changes to a losing program will also generate a losing program. Moreover, IBR can loop around the strategy space in games with dynamics similar to Rock, Paper, and Scissors, without making progress toward strong solutions.
In this paper, we adapt Fictitious Play (FP) <cit.> and Double Oracle (DO) <cit.> to the context of programmatic strategies. FP and DO have been used in the context of neural strategies to overcome some of the weaknesses of IBR <cit.>. Although FP and DO provide a better search signal than IBR, we show that they can still fail to provide relevant information for the search. We then introduce a novel learning algorithm that is designed specifically for guiding local search algorithms in the synthesis of programmatic strategies. Our algorithm uses information gathered while computing best responses to decide the set of target strategies to be used in future iterations of the algorithm, as a means of optimizing the search signal.
We evaluate our algorithm on three two-player zero-sum games: a real-time strategy game <cit.>, Poachers & Rangers, and Climbing Monkeys. The results show that our algorithm synthesized strategies that are never worse and often far superior to strategies synthesized with IBR, FP, and DO in all three domains. We also performed a simulated competition in the real-time strategy game with strategies synthesized with IBR, FP, and DO, as well as the programmatic strategies that won its last two competitions, which were written by programmers. Our algorithm obtained the highest average winning rate in our tournament.
§ PROBLEM DEFINITION
We consider the synthesis of programmatic strategies assuming zero-sum two-player games G = (P, S, s_init, A, T, U). Let P = {i, -i} be the pair of players; S be the set of states, with s_init in S being the initial state. Each player i can perform an action from a legal set of actions A_i(s) in A for a given state s.
The action of each player is given by a strategy, which is a function σ_i that receives a state s in S and returns an action in A_i for s.
A transition function T receives a state and an action for each player and deterministically returns the next state of the game, which could be a terminal state, where the utility of each player is determined.
The utility function U returns the value of the game in a given state (terminal or not). For s, the value of the game is denoted by U(s,σ_i,σ_-i) when player i follows the strategy σ_i and player -i, σ_-i. Considering that the game G is zero-sum, the utility function for -i is -U(s,σ_i,σ_-i). In this paper, we encode strategies for G as programs written in a domain-specific language (DSL).
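For concreteness, the utility U(s_init, σ_i, σ_-i) used throughout can be computed by simply rolling the game forward under the two strategies; in the following Python sketch, the attribute and method names of the game object are our own placeholders for the tuple (P, S, s_init, A, T, U).

from typing import Any, Callable

def playout(game: Any,
            sigma_i: Callable[[Any], Any],
            sigma_o: Callable[[Any], Any],
            max_steps: int = 10_000) -> float:
    """Roll the game forward from s_init under the two strategies and return
    the utility U for player i (the utility for -i is its negation)."""
    s = game.s_init
    for _ in range(max_steps):
        if game.is_terminal(s):
            break
        s = game.transition(s, sigma_i(s), sigma_o(s))
    return game.utility(s)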
A DSL can be defined as a context-free grammar (M, Ω, R, S), where M, Ω, R, and S are the sets of non-terminals, terminals, relations defining the
production rules of the grammar, and the grammar's initial symbol, respectively. Figure <ref> (right) shows an example of a DSL, where M = {S, C, B}, Ω = {c_1, c_2, b_1, b_2, if, then}, R are the production rules (e.g., C → c_1), and S is the initial symbol.
The DSL in Figure <ref> allows programs with a single command (e.g., c_1 or c_2) and programs with branching. We represent programs as abstract syntax trees (AST), where the root of the tree is S, the internal nodes are non-terminals, and the leaf nodes are terminals.
Figure <ref> (left) shows an example of an AST. We use a DSL D to define the space of programs D, where each program p ∈ D is a game strategy.
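The program space induced by such a grammar can be sampled by expanding non-terminals with random production rules, which is also the basis of the mutation operator used later; in the sketch below, the production rules beyond the one quoted above are our own assumptions for illustration.

import random

# A hypothetical encoding of the example DSL; only the rule C -> c1 is given
# explicitly in the text, so the remaining productions are our own assumption.
GRAMMAR = {
    "S": [["C"], ["if", "B", "then", "C"]],
    "C": [["c1"], ["c2"]],
    "B": [["b1"], ["b2"]],
}

def random_tree(symbol="S"):
    """Sample an AST: a non-terminal is expanded with a uniformly random rule;
    terminals become leaves."""
    if symbol not in GRAMMAR:
        return symbol
    rule = random.choice(GRAMMAR[symbol])
    return (symbol, [random_tree(s) for s in rule])

print(random_tree())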
One solves the problem of synthesizing programmatic strategies by solving the following equation
max_σ_i∈ D min_σ_-i∈ D U(s_init, σ_i, σ_-i ) .
The strategies σ_i
and σ_-i in D able to solve Equation <ref> define a Nash equilibrium profile in the programmatic space.
We consider a programmatic variant of PSRO <cit.> to approximate a solution to Equation <ref>.
§ PROGRAMMATIC PSRO (PPSRO)
Let λ be a normal-form game defined by (Σ, P, U_Σ), where Σ = {Σ_i, Σ_-i} represents a set of strategies for each player in P= {i, -i}, and U_Σ is the utility payoff table between each pair of strategies in Σ. A mixed strategy σ is a probability distribution over strategies Σ_i and Σ_-i for players i and -i, respectively.
An empirical game of a normal-form game contains only a subset of the strategies of the original game.
Policy-Space Response Oracles (PSRO) is a framework for learning strategies that “grow” an empirical game <cit.>.
In PSRO, the empirical game starts with a single strategy in Σ_i and Σ_-i and it grows these sets by including a new strategy for each player in each iteration of the algorithm.
Let a mixed strategy over the sets Σ_i and Σ_-i of the empirical game be called a meta-strategy. PSRO grows Σ_i and Σ_-i by adding best responses to meta-strategies. Once a best response is added to a set, a new meta-strategy is computed, and the process is repeated. That is, given a meta-strategy σ_-i (resp. σ_i), for player -i (resp. i), the best response to σ_-i (resp. σ_i) is added to Σ_i (resp. Σ_-i).
PSRO generalizes algorithms such as IBR, FP, and DO depending on how the meta-strategies are computed. Let σ_k = (p_1, p_2, ⋯, p_n) be a meta-strategy for player k (k can be either i or -i). Here, p_j in σ_k represents the probability with which σ_k plays the j-th strategy added to the empirical game for player k. PSRO generalizes IBR if the meta-strategies are of the form (0.0, 0.0, ⋯, 1.0), i.e., the only strategy in the support of the meta-strategy is the last strategy added to the empirical game. If the meta-strategy σ_-i with n strategies is of the form (1/n, 1/n, ⋯, 1/n), i.e., all the previous strategies added to the game are played with equal probability, then PSRO generalizes FP.
PSRO also generalizes DO <cit.> when the meta-strategy is computed by solving the empirical game.
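The meta-strategy choices above differ only in the probability vector they place over the n strategies currently in the empirical game. A minimal sketch of the two closed-form cases is given below; the DO case is omitted since it additionally requires solving the empirical normal-form game.

# Meta-strategy distributions over the n strategies currently in the empirical
# game: IBR puts all probability mass on the most recent strategy, while FP
# mixes all previous strategies uniformly.
def ibr_meta_strategy(n: int) -> list[float]:
    return [0.0] * (n - 1) + [1.0]

def fp_meta_strategy(n: int) -> list[float]:
    return [1.0 / n] * n

print(ibr_meta_strategy(4))  # [0.0, 0.0, 0.0, 1.0]
print(fp_meta_strategy(4))   # [0.25, 0.25, 0.25, 0.25]

In either case, evaluating the guiding utility against the meta-strategy requires playing against every strategy in its support, which is the source of the computational trade-off between IBR and FP discussed below.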
We use a variant of PSRO, which we call Programmatic PSRO (PPSRO), to approximate a solution to Equation <ref>. PPSRO is shown in Algorithm <ref>.
PPSRO starts by initializing the set of strategies, Σ_i and Σ_-i, with two arbitrary strategies (line <ref>).
PPSRO runs a number of iterations according to a given computational budget (e.g., the number of games played). In each iteration, PPSRO invokes a learning algorithm Ψ (e.g., IBR) that receives the current empirical game and returns a meta-strategy σ_-i (line <ref>). Then it searches in the programmatic space of strategies for a best response σ'_i to σ_-i. We consider local search algorithms for computing σ'_i. The search algorithm, described in Section <ref>, initializes its computation with the last strategy added to the empirical game for i, which is denoted as σ_i[-1] (line <ref>). The best response σ'_i is then added to Σ_i. At the end, PPSRO returns the last meta-strategy σ_i as an approximate solution for player i to Equation <ref> (line <ref>).
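A compact way to read this loop is sketched below. The callables passed in are placeholders for an arbitrary starting program, the learning algorithm Ψ, and the best-response search of the next section, and we assume the two players' pools are grown by alternating the roles of i and -i within each iteration.

def ppsro(budget_iterations, initial_strategy, meta_strategy, best_response):
    """Sketch of the PPSRO loop.  The callables are placeholders: an arbitrary
    starting program, the learning algorithm Psi returning a meta-strategy
    over a pool, and the best-response search (hill climbing) that starts from
    the last program added for that player."""
    pool_i = [initial_strategy()]
    pool_o = [initial_strategy()]                  # opponent -i
    for _ in range(budget_iterations):
        w_o = meta_strategy(pool_o)                # meta-strategy over opponent pool
        pool_i.append(best_response(pool_i[-1], pool_o, w_o))
        w_i = meta_strategy(pool_i)                # symmetric update for the opponent
        pool_o.append(best_response(pool_o[-1], pool_i, w_i))
    return pool_i, meta_strategy(pool_i)           # last meta-strategy for player i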
The choice of meta-strategies across iterations of PPSRO determines how quickly it is able to approximate a Nash equilibrium profile for the game. Previous work investigated different approaches to define meta-strategies in the context of PSRO and neural policies <cit.>. However, searching in programmatic space is different from searching in neural space, since the former does not have a gradient signal to guide the search. As we show in our experiments, meta-strategies used with PSRO might not work well with PPSRO.
§ HILL CLIMBING FOR SYNTHESIS OF STRATEGIES
Hill Climbing (HC) is a local search algorithm that starts with an arbitrary candidate solution to a combinatorial search problem and attempts to improve it with greedy changes to the candidate. We use HC to approximate the best responses to strategies σ_-i in the PPSRO main loop (line <ref> of Algorithm <ref>). HC receives the last strategy added to the empirical game for player i, which is denoted as σ_i[-1], and σ_-i. The algorithm returns an approximate best response to σ_-i. This is achieved by searching in the programmatic space defined by the DSL. The starting candidate solution σ_0 of the search is σ_i[-1]. HC attempts to approximate a best response to σ_-i by evaluating neighbor strategies of σ_0. We update the current candidate solution σ_0 to a neighbor σ_i' if the value U(s_init,σ_i',σ_-i) is greater than U(s_init,σ_0,σ_-i). Otherwise, HC generates and evaluates a new neighbor solution σ_i' of σ_0. This process is repeated until we have exhausted the search budget. HC returns the strategy encountered in search with the highest U-value as its approximated best response to σ_-i.
Neighbor solutions are produced by applying a “mutation” in the AST of σ_0. A mutation is carried out by uniformly sampling a non-terminal symbol S in the AST, and replacing the subtree rooted at S with a new subtree. The new subtree is generated by replacing S with the right-hand side of a production rule for S that is selected uniformly at random. The mutation process repeatedly replaces a non-terminal leaf node in the generated program with the right-hand side of a random production rule of the DSL until the program's AST contains only terminal symbols as leaves.
HC is initialized with a random program only in the first iteration of PPSRO; HC is initialized with the programmatic best response computed in the previous iteration of PPSRO otherwise (σ_i[-1] in line <ref> of Algorithm <ref>).
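A minimal Python sketch of this best-response search is given below; it is illustrative only, and it assumes the caller supplies the utility oracle U, the AST mutation operator, and the meta-strategy as a list of (probability, opponent program) pairs.

import random  # the mutate operator is expected to use randomness internally

def hill_climb(sigma0, meta_strategy, utility, mutate, budget):
    """Approximate a best response to a meta-strategy by greedy local search.

    sigma0        -- starting program (e.g. the last strategy added for player i)
    meta_strategy -- list of (probability, opponent_program) pairs
    utility       -- utility(program, opponent) -> payoff for `program`
    mutate        -- mutate(program) -> neighbour program (random AST mutation)
    budget        -- number of neighbour evaluations allowed
    """
    def value(program):
        # Expected utility against the mixture defining the meta-strategy.
        return sum(p * utility(program, opp) for p, opp in meta_strategy)

    best, best_value = sigma0, value(sigma0)
    current, current_value = best, best_value
    for _ in range(budget):
        neighbour = mutate(current)
        neighbour_value = value(neighbour)
        if neighbour_value > current_value:      # greedy acceptance
            current, current_value = neighbour, neighbour_value
        if neighbour_value > best_value:         # keep the best strategy seen
            best, best_value = neighbour, neighbour_value
    return best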
§ SHORTCOMINGS OF EXISTING APPROACHES
The effectiveness of the search algorithm, e.g., HC, for computing a best response depends on the computational cost of σ_-i and on the information σ_-i encodes, as we explain next. The meta-strategy σ_-i determines how fast we can approximate a Nash equilibrium profile for the game. This is because the utility function U(s_init, σ_i, σ_-i) provides the search signal for the synthesis of a best response σ_i to σ_-i in the D space. For example, if the meta-strategy σ_-i with n strategies is of the form (1/n, 1/n, ⋯, 1/n), i.e., all the previous strategies synthesized in the previous iterations are in σ_-i's support, then σ_-i is able to provide a richer guiding signal than IBR's meta-strategy, which accounts only for a single strategy.
Note that PSRO (and PPSRO) with meta-strategies that account for all strategies with equal probability is equivalent to FP <cit.>. Although FP provides a richer search signal, it incurs a higher computational cost as the guiding function U(s_init, σ_i, σ_-i) requires one to evaluate all strategies in the support of the meta-strategy. Example <ref> illustrates the IBR's lack of information to guide the search in the game of Poachers and Rangers (P&R).
P&R is a simultaneous-move two-player zero-sum game without ties where rangers need to protect the gates of a national park to avoid poachers getting inside. In the game, poachers need to attack at least one unprotected gate to enter the park, and rangers succeed if they protect all gates attacked by poachers. Rangers receive the utility of 1 if they protect all attacked gates and -1 otherwise
The game has a trivial Ranger's dominant strategy, where they protect all the gates. Despite having a trivial solution, the game is particularly hard as a program synthesis task. This difficulty is inherent to the size of the programmatic solution required to solve this game.
If the number of gates is arbitrarily large, current synthesizers might struggle to synthesize such long programs.
For example, for a game with n gates, the optimal programmatic strategy is any permutation of the instructions in the following program: [1], [2], ⋯, [n], which we also denote as [1, 2, ⋯, n] for conciseness.
Let us consider a P&R instance with 2 gates. In the first iteration, IBR generates an arbitrary strategy for Rangers: [2]. In the next iteration, it computes a best response to [2]: [1].
Next, IBR computes a best response to the Poachers strategy, [1], so it produces the strategy [1]. Then, IBR computes a best response to [1], thus generating [2] for Poachers. In the next iteration, IBR computes [2] as a best response to [2]. Note that [2] is the strategy in which IBR started the learning procedure—IBR just looped back to the beginning of the process. Since IBR uses only the last synthesized strategy, it can loop over suboptimal strategies which could delay the convergence to the optimal strategy [1, 2].
By contrast, in FP one considers all previous strategies synthesized in the learning process. Once the empirical game has the strategies [1] and [2], the search algorithm is guided to synthesize the optimal [1, 2].
DO may strike a balance between computational cost and search guidance, i.e., it includes fewer strategies than FP, but more than IBR in the support of the meta-strategy. With DO, only the strategies in the empirical game that are deemed important, i.e., that are in the support of a Nash equilibrium strategy, will be considered in search. However, DO might still miss important information to guide local search algorithms in the context of PPSRO, as we show in Example <ref>.
Let us consider a P&R instance with 5 gates. In the first iteration, DO generates two arbitrary strategies:[2] and [1] for Rangers and Poachers, respectively. Let us assume that PPSRO instantiated as DO generates the empirical game shown in Table <ref> after a few iterations. In the following iteration, PPSRO adds a strategy for Rangers to the empirical game. This is achieved by solving the empirical game shown in Table <ref> to generate a meta-strategy σ_-i for Poachers and then approximating a best response σ_i to σ_-i. The last row of Table <ref> shows the strategy for -i in the Nash equilibrium profile for the empirical game, which is used as the meta-strategy σ_-i. Any strategy σ_i for Rangers that defends at least gates 1, 2, and 5 is a best response to σ_-i since the support of σ_-i only accounts for [1, 2, 5]. The best response σ_i does not need to defend the gate 3, despite being part of the empirical game for Poachers (in strategy [1, 2, 3]). If both [1, 2, 3] and [1, 2, 5] were in the support of σ_-i, PPSRO would be forced to synthesize a strategy that defends gates 1, 2, 3, and 5. However, DO does not include [1, 2, 3] in the support of σ_-i, so PPSRO is only forced to synthesize a strategy that defends gates 1, 2, and 5, which could delay the convergence of the algorithm for missing gate 3.
To address these limitations described for IBR, FP, and DO, we propose a new algorithm able to better guide the synthesis of programmatic strategies in the context of PPSRO.
§ LOCAL LEARNER
We propose a new instance of PPSRO called Local Learner, which can overcome the limitations of IBR, FP, and DO presented in the previous section. Local Learner defines meta-strategies that are “in between” those IBR and FP define in terms of the number of strategies in the meta-strategy's support. Local Learner can use more strategies than IBR to provide a better signal to the search algorithm, but it also attempts to use fewer strategies than FP to reduce the computational cost of the evaluation.
The following P&R example illustrates how Local Learner works.
Let us consider a P&R instance with n > 2 gates. We initialize Local Learner
with an arbitrary strategy ([2]) for Poachers and compute a best response to it: [2]. In the next iteration, we compute a best response to [2]: [1]. Next, Local Learner returns a meta-strategy σ_-i for Poachers so we can compute a best response to it and add to the empirical game a new strategy for Rangers. Similarly to what FP would do, in this case, Local Learner returns a meta-strategy for Poachers that considers all strategies currently in the empirical game ([2] and [1]): σ_-i = (0.5, 0.5). Let us suppose that the search returns the best response [1, 2] to σ_-i, which is added to the empirical game. Local Learner then returns a meta-strategy σ_i = (0.5, 0.5) for Rangers that also considers all strategies currently in the empirical game ([2] and [1, 2]). While computing a best response to σ_i, Local Learner learns that the strategy [2] is redundant and can be dropped from the support of σ_i in future iterations. Before the search finds a best response (e.g., [3]), let us assume that it evaluates the strategies [1] and [2]. Note that [2] is a best response to only [2], while [1, 2] is a best response to both. Given the strategies evaluated in search and that [1, 2] is in the support of the meta-strategy, [2] does not add new information to the search and can therefore be dropped.
Local Learner initially assumes that all the strategies inserted in the empirical game are helpful in guiding the search, so it adds them to the support of its meta-strategy σ_-i. While computing a best response to σ_-i, it collects data on each strategy in σ_-i and removes from its support all “redundant strategies”.
§.§ Formal Description
Let Σ_k = {σ_1,k, ⋯, σ_n,k} be the set of strategies for player k in the empirical game
in an execution of
PPSRO, where k is either i or -i and σ_j,k is the j-th strategy added for k in the empirical game. Let σ_k = (p_1, ⋯, p_n) be a meta-strategy over Σ_k where p_j in σ_k indicates the probability in which σ_k plays the j-th strategy in Σ_k. We denote p_j in σ_k as σ_k[j].
Let Σ_σ_k be the subset of strategies in the support of σ_k, i.e., the strategies whose p_j-value is greater than zero in σ_k.
While computing a best response to a meta-strategy σ_k, Local Learner employs a search algorithm that evaluates a number of strategies as potential best responses to σ_k. Let S be the set of strategies evaluated in search that are best responded by at least one strategy in Σ_σ_k.
We call helpful strategies, denoted Σ_σ_k^h, the smallest subset of Σ_σ_k that contains at least one best response to any strategy in S.
We call redundant strategies the set Σ_σ_k minus the helpful strategies Σ_σ_k^h.
In Example <ref>, when computing a best response to σ_i = (0.5, 0.5) with Σ_σ_i = {[2], [1,2]} we have S = {[1], [2]} and Σ_σ_i^h = {[1, 2] }. Local Learner is then able to remove the redundant set {[2]} from Σ_σ_i for future iterations of the algorithm.
In practice, we are unable to compute the smallest set Σ_σ_k^h possible for two reasons. First, the search may not find the strategies needed to prove that a strategy is helpful. In Example <ref>, if the synthesis algorithm encounters [2] but it does not encounter [1] during the search, then strategies [2] and [1, 2] would be “equally helpful” and either one could be selected depending on the tie-breaking procedure implemented. Second, finding the smallest set Σ_σ_k^h given S is equivalent to solving a set cover problem, which is NP-hard <cit.>. Local Learner uses a polynomial-time greedy algorithm to approximate a solution to the set cover problem. Namely, we define an initially empty set S'. Then, in every iteration, we select the strategy σ in Σ_σ_k that is a best response to the largest number of strategies in S ∖ S' and we add to S' all the strategies for which σ is a best response. We stop when S = S'. The strategies selected from Σ_σ_k in this procedure approximate Σ_σ_k^h, which gives us an approximation of the redundant strategies.
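A sketch of this greedy approximation is given below. The code is illustrative only; it assumes strategies are hashable objects and that a predicate is_best_response is available (for instance, a utility check of the kind discussed later for games with utilities in {-1, 0, +1}).

def helpful_strategies(support, evaluated, is_best_response):
    """Greedy set-cover approximation of the helpful set Sigma^h (a sketch).

    support          -- strategies in the meta-strategy's support
    evaluated        -- strategies S seen in search that are best responded
                        by at least one strategy in `support`
    is_best_response -- is_best_response(sigma, s) -> True if `sigma` is a
                        best response to the searched strategy `s`
    """
    covered, helpful = set(), []
    remaining = list(support)
    while len(covered) < len(evaluated) and remaining:
        # Pick the support strategy covering the most not-yet-covered strategies.
        best = max(remaining,
                   key=lambda sig: sum(1 for s in evaluated
                                       if s not in covered and is_best_response(sig, s)))
        newly = {s for s in evaluated if s not in covered and is_best_response(best, s)}
        if not newly:
            break
        helpful.append(best)
        covered |= newly
        remaining.remove(best)
    return helpful  # support minus `helpful` approximates the redundant strategies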
Local Learner works by executing the following steps:
* Initialize Σ_-i and Σ_σ_-i with {σ_1,-i} for some arbitrary strategy σ_1,-i; compute a best response σ_1, i to σ_1,-i and initialize Σ_i and Σ_σ_i with {σ_1, i}. Define meta-strategies σ_i and σ_-i as (1.0).
* While there is time for learning, alternating k between -i in one iteration and i in the next, execute:
* Compute a best response σ to σ_-k and add it to Σ_k and to Σ_σ_k; set σ_k[j] = 1.0/|Σ_σ_k| for all σ_j in Σ_σ_k.
* Set σ_-k[j] = 0 for, and remove from Σ_σ_-k, all σ_j, -k in Σ_-k that were estimated as redundant.
* Set σ_-k[j] = 1.0/|Σ_σ_-k| for all σ_j in Σ_σ_-k.
Local Learner starts by initializing the set of strategies of the empirical game and the set of strategies in the support of the meta-strategy with an arbitrary strategy for one of the players (-i in the pseudocode above). Then, it computes a best response to this arbitrary strategy and uses the best response to initialize Σ_i and Σ_σ_i. The meta-strategies are of the form (1.0) because the empirical game has a single strategy for each player (see Step <ref> above). Step <ref> refers to PPSRO's loop, where it computes the best responses while alternating the players. Once a best response σ is computed to strategy σ_-k, it is added to the support of σ_k with uniform probability (see Step <ref>).
Local Learner estimates which strategies in the support of σ_-k are redundant while computing the best response σ to σ_-k. In Step <ref>, it removes the redundant strategies from the support of σ_-k and, in Step <ref>, it redistributes the probabilities so that each strategy in the support has the same probability.
In Example <ref>, we showed that DO fails to include both [1, 2, 3] and [1, 2, 5] in the support of the meta-strategy σ_-i, thus missing the guidance information [1, 2, 3] provides.
Once the strategy [1, 2, 5] is added to the empirical game, the meta-strategy will automatically have both [1, 2, 3] and [1, 2, 5] in its support. In contrast with DO, Local Learner retains both strategies in the support of σ_-i for the next iteration as long as strategies such as [1, 2, 3] and [1, 2, 5] are evaluated in search, as both [1, 2, 3] and [1, 2, 5] will be flagged as helpful.
A weakness of Local Learner as presented above is that it can flag as redundant a strategy that is helpful if it does not sample enough strategies in the search. For example, if the meta-strategy for Rangers has both [1] and [2] in its support, but it never evaluates a strategy that attacks gate 1 in search, then [1] will mistakenly be removed from the meta-strategy's support. We implement the following enhancement to fix this weakness. Whenever the search returns a best response σ to a meta-strategy σ_-i (resp. σ_i), we evaluate σ against all strategies in the empirical game, including those not in the support of σ_-i (resp. σ_i). If there is a strategy σ' in the empirical game that is a best response to σ, then it must be that Local Learner mistakenly removed σ' from the support of the meta-strategy. In this case, we repeat the search for a best response with σ' added to the support of the meta-strategy.
This enhancement can increase the number of times the search algorithm is invoked in each iteration of the PPSRO loop. While we perform a single search per iteration with IBR, FP, and DO, in the worst case, Local Learner can perform a number of searches equal to the number of strategies in the game. This is because, in the worst case, we add all strategies of the empirical game to the support of the meta-strategy.
Despite the possible additional searches, preliminary experiments showed that this enhancement improves the sampling efficiency of Local Learner. All results in this paper use this enhancement.
In practice, we do not have the guarantee that the search algorithm used in PPSRO's main loop is able to return a best response to a meta-strategy. So we use whichever approximation the search returns as if it was a best response to the meta-strategy. Moreover, depending on the game, we might not be able to immediately recognize a best response to strategy once we see one, as one would have to prove the strategy to be a best response. This could be problematic, for example, when implementing the enhancement that re-runs the search if there is a strategy in the empirical game that is a best response to the strategy the search returns.
We run our experiments in games with utilities of -1, 0, +1.
If a best response cannot be easily verified (e.g., in MicroRTS), then we consider that σ is a best response to σ' if U(s_init, σ, σ') = +1.
Once Local Learner reaches a computational budget, it can return different strategies as its approximate solution to Equation <ref>. Similarly to IBR, it can return the last strategy added to the empirical game for each player. Local Learner can also return a mixed strategy that is given by the distribution of strategies added to the empirical game, as FP does. We can also solve the resulting empirical game with linear programming, like DO does, and return the resulting strategy. In this paper, we assume the games have a pure dominant strategy for which IBR's approach of returning the last strategy added to the empirical game is suitable; this is what we use in our experiments.
§ EMPIRICAL EVALUATION
§.§ Problem Domains
In addition to P&R, we introduce Climbing Monkey (CM), another two-player zero-sum game with a trivial optimal strategy that is also challenging in the context of programmatic strategies. In CM, monkeys need to climb to a branch of a tree that is higher than the branch the opponent's monkey is able to reach. The branches need to be climbed one at a time, without skipping any branch. The monkey that climbs to a higher branch wins the game. The game ends in a draw if both monkeys climb to a branch of the same height. For a tree with n branches, a dominant programmatic strategy is [1], [2], ⋯, [n].
Similarly to P&R, CM is challenging because, depending on the number of branches, it requires one to synthesize long programs.
In P&R, learning algorithms perform better if they use a larger number of strategies in the support of meta-strategies, as having many strategies helps Rangers converge to a strategy that protects all gates. CM is a game where all one needs to use is the last strategy added to the empirical game, i.e., the strategy that allows the monkey to climb to the highest branch. We hypothesize that Local Learner is capable of detecting which strategies are needed in the support of the meta-strategies for these two games.
We also evaluate Local Learner in MicroRTS, a real-time strategy game designed for research. There is an active research community that uses MicroRTS as a benchmark to evaluate intelligent systems.[https://github.com/Farama-Foundation/MicroRTS/wiki] MicroRTS is a game played with real-time constraints and very large action and state spaces <cit.>. Each player can control two types of stationary units (Bases and Barracks) and four types of mobile units (Workers, Ranged, Light, and Heavy). Bases are used to store resources and train Worker units. Barracks can train Ranged, Light, and Heavy units. Workers can build stationary units, harvest resources, and attack opponent units. Ranged, Light, and Heavy units have different amounts of hit points and inflict different amounts of damage to the opponent units. Ranged units differ from the other mobile units by causing damage from long distances. In MicroRTS, a match is played on a grid, which represents the map. Different maps might require different strategies to play the game well.
§.§ Empirical Methodology
The games of P&R and CM allow for a comparison of IBR, FP, DO, and Local Learner that is easy to understand and analyze, as they have trivial optimal strategies. Experiments with MicroRTS allow us to compare not only existing learning algorithms with Local Learner, but also other methods for playing MicroRTS. Namely, we compare the programmatic strategies of IBR, FP, DO, and Local Learner with programmatic strategies human programmers wrote to win the last two competitions: COAC[https://github.com/Coac/coac-ai-microrts] and Mayari.[https://github.com/barvazkrav/mayariBot] We also include two programmatic strategies that have been used in the competition since 2017: WorkRush (WR) and LightRush (LR). LR was the winner of the 2017 competition. We use seven maps of different sizes: 8×8A BasesWorkers,
16×16 BasesWorkers,
24×24A BasesWorkers, 24×24 DoubleGame, BWDistantResources 32×32, Chambers 32×32, and 32×32 BasesWorkers. We consider two starting locations (the location of the player's base) on each map. When evaluating two strategies, to ensure fairness, each strategy plays an equal number of matches in both locations against the other strategy.
We are interested in evaluating the sample efficiency of the different approaches, i.e., the strength of the strategies they synthesize as a function of the number of games they need to play to synthesize the strategies. We present plots such as the one in Figure <ref>, where the x-axis shows the number of games played and the y-axis a performance metric. We measure performance in P&R in terms of the number of gates Rangers protect; for CM we measure how high a monkey climbs.
In the plots (Figure <ref>) we evaluate the strategy a method returns after a number of games played (x-axis) in terms of its winning rate in a tournament with the strategies the other three methods return at the end of their synthesis process (shown on the right side of the plots). In the tournament, each strategy plays the other strategies 10 times, 5 at each starting location on the map. MicroRTS matches can finish in draws. Following previous work, we assign a score of 1.0 for each win and 0.5 for each draw. The winning rate is given by the number of wins plus half the number of draws, divided by the total number of matches <cit.>.
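For concreteness, the winning rate used throughout the evaluation can be computed as in the following sketch; the numbers in the usage line are made up.

def winning_rate(wins, draws, total_matches):
    # Score 1.0 per win and 0.5 per draw, normalised by the number of matches.
    return (wins + 0.5 * draws) / total_matches

# e.g. 6 wins and 2 draws out of 10 matches -> winning rate 0.7
print(winning_rate(6, 2, 10))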
Since the mutation operation we use in the hill climbing algorithm is stochastic, we perform multiple independent runs of each experiment and report the average results and standard deviation. The number of runs performed in each experiment is specified below. We use medeiros2022can's DSL for MicroRTS.
[Our code is at <https://github.com/rubensolv/LocalLearnerIJCAI>]
§.§ Empirical Results
P&R. Figure <ref> (left) presents the results for P&R, where each line represents the average number of gates protected over 10,000 independent runs of the algorithms. The x-axis is on a logarithmic scale. Local Learner derives strategies that protect more gates with many fewer games played than all other approaches tested. IBR performs the worst, likely because it can cycle through strategies it has already seen in learning, as we illustrated in Example <ref>. FP performs best after Local Learner, as it is able to remember all previous strategies. However, FP uses more strategies than it needs to make progress, which explains the gap between the Local Learner and FP lines.
CM. Figure <ref> (right) presents the results for CM, where each line is an average of branches climbed over 300 independent runs; the x-axis is in log scale. IBR, DO, and Local Learner perform equally well, as they all use only the last strategy added to the empirical game in the meta-strategy, which in this domain is the true set of helpful strategies. FP is the worst-performing method in this domain as it unnecessarily uses all strategies from the empirical game in the support of the meta-strategy.
MicroRTS. Figure <ref> presents the results for MicroRTS, with plots for four representative maps. Each line represents the average of 40 independent runs of each system. Local Learner is never worse and is often far superior to the other methods. DO also performs well on most maps, but it is outperformed by a large margin by Local Learner and FP in BaseWorkers32x32A.
IBR performs poorly on all maps.
§.§ Simulated Competition Results (MicroRTS)
Table <ref> shows the average results for a set of simulated competitions using the seven maps mentioned in the empirical methodology section.
Each entry in the table shows the average winning rate and the standard deviation of the row method against the column method; the last column shows the average and standard deviation across a given row.
The numbers in Table <ref> are the average winning rate computed by simulating 5 tournaments. The strategy we use in each tournament for IBR, FP, DO, and Local Learner is generated as follows. We run each method 8 times, thus producing 8 different strategies each. Then, we run a round-robin evaluation among the 8 strategies of a given method, and the winning strategy in this evaluation is used as the method's strategy in our tournament. For a given tournament, the winning rate is computed by having the strategy of each method play the other strategies 10 times on each map, 5 for each starting location.
Local Learner is the only method to obtain an average winning rate greater than 0.50 against all opponents; it also obtained the highest average winning rate when considering all opponents: 0.72 (column “Total”). In particular, it obtains average winning rates of 0.76 and 0.66 against COAC and Mayari, respectively, the winners of the two latest competitions.
A Welch's t-test shows that the difference between Local Learner and the competition winners COAC and Mayari, in terms of the total average winning rate, is statistically significant with p < 10^-5.
These results on P&R, CM, and MicroRTS show that Local Learner's approach to defining its meta-strategy can be quite effective in guiding a synthesizer that uses HC search.
§ MORE RELATED WORKS
In addition to PSRO <cit.>, this work is related to programmatic policies <cit.>, where the goal is to synthesize human-readable programs that encode policies to solve reinforcement learning problems <cit.>. Generalized planning (GP) is also related because it deals with the synthesis of programs to solve classical planning problems <cit.>. Local Learner differs from these works because it learns how to solve two-player games, while the latter focus on single-agent problems.
marino2021programmatic,MARINO2022108860 also use local search algorithms to synthesize programmatic strategies, and they also evaluate their system in MicroRTS. In terms of learning algorithms, they only use IBR, so the IBR version in our experiments is representative of their work. medeiros2022can presents a system for learning sketches with imitation learning as a means of speeding up the computation of programmatic best responses. They focus on the computation of best responses, so their solution can be combined, in theory, with any of the learning algorithms we evaluated in this paper.
§ CONCLUSIONS
In this paper, we introduced Local Learner, a learning algorithm based on the PSRO framework to guide local search algorithms on the task of synthesizing programmatic strategies. Local Learner uses information collected from the computation of best responses to approximate a set of helpful strategies to have in the support of its meta-strategy, which serves as a guiding function for the search. We empirically showed in three games the advantages of Local Learner over adaptations of the learning algorithms IBR, FP, and DO to programmatic strategies. The empirical results show that Local Learner's approach of using information collected during search to determine its own guiding function can be quite effective in practice. Local Learner is never worse than the other learning algorithms and is often far superior. In particular, in the game of MicroRTS, we simulated a competition with the last two winners of the annual competition, and the strategies Local Learner synthesized obtained the highest winning rate across all evaluated systems.
§ ACKNOWLEDGMENTS
This research was supported by Canada's NSERC and the CIFAR AI Chairs program and Brazil's CAPES. The research was carried out using computational resources from Compute Canada. We thank the anonymous reviewers for their feedback.
|
http://arxiv.org/abs/2307.07547v1 | 20230714180000 | Lectures on Generalized Symmetries | [
"Lakshya Bhardwaj",
"Lea E. Bottini",
"Ludovic Fraser-Taliente",
"Liam Gladden",
"Dewi S. W. Gould",
"Arthur Platschorre",
"Hannah Tillim"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.04779v1 | 20230710075009 | Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference | [
"Arnaud Descours",
"Tom Huix",
"Arnaud Guillin",
"Manon Michel",
"Éric Moulines",
"Boris Nectoux"
] | stat.ML | [
"stat.ML",
"math.PR",
"math.ST",
"stat.TH"
] |
Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference
August 12, 2023
=========================================================================================
We provide a rigorous analysis of training by variational inference
(VI) of Bayesian neural networks in the two-layer and infinite-width
case. We consider a regression problem with a regularized evidence
lower bound (ELBO) which is decomposed into the expected
log-likelihood of the data and the Kullback-Leibler (KL) divergence
between the a priori distribution and the variational
posterior. With an appropriate weighting of the KL, we prove a law
of large numbers for three different training schemes: (i) the
idealized case with exact estimation of a multiple Gaussian integral
from the reparametrization trick, (ii) a minibatch scheme using
Monte Carlo sampling, commonly known as Bayes by Backprop,
and (iii) a new and computationally cheaper algorithm which we
introduce as Minimal VI. An important result is that all
methods converge to the same mean-field limit. Finally, we
illustrate our results numerically and discuss the need for the
derivation of a central limit theorem.
Bayesian neural networks, variational inference, mean-field, law of large numbers, infinite-width neural networks.
§ INTRODUCTION
Deep Learning has led to a revolution in machine learning with
impressive successes. However, some limitations of DL have been
identified and, despite, many attempts, our understanding of DL is
still limited. A long-standing problem is the assessment of predictive
uncertainty: DL tends to be overconfident in its predictions
<cit.>, which is a problem in applications such as
autonomous driving
<cit.>, medical
diagnosis <cit.>, or
finance; cf
<cit.>. Therefore,
on the one hand, analytical efforts are being made to thoroughly
investigate the performance of DL; and on the other hand, many
approaches have been proposed to alleviate its shortcomings. The
Bayesian paradigm is an attractive way to tackle predictive
uncertainty, as it provides a framework for training uncertainty-aware
neural networks (NNs) (e.g.
<cit.>).
Thanks to a fully probabilistic approach, Bayesian Neural Networks
(BNN) combine the impressive neural-network expressivity with the
decision-theoretic approach of Bayesian inference, making them capable
of providing predictive uncertainty; see
<cit.>.
However, Bayesian inference requires deriving the posterior
distribution of the NN weights. This posterior distribution is
typically not tractable. A classical approach is to sample the
posterior distribution using Markov chain Monte Carlo methods (such as
Hamilton-Monte-Carlo methods). There are however long-standing
difficulties, such as the proper choice of the prior and fine-tuning
of the sampler. Such difficulties often become prohibitive in
large-dimensional cases,<cit.>. An alternative is to
use variational inference, which has a long history
<cit.>. Simpler
methods that do not require exact computation of integrals over the
variational posterior were then developed, e.g. first by
<cit.> thanks to some approximation and then by
<cit.> with the Bayes by Backprop
approach. In the latter, the posterior distribution is approximated by
a parametric distribution and a generalisation of the
reparametrization trick used by <cit.> leads to an unbiased
estimator of the gradient of the ELBO; see also
<cit.>. Despite
the successful application of this approach, little is known about the
overparameterized limit and appropriate weighting that must be assumed
to obtain a nontrivial Bayesian posterior, see
<cit.>. Recently, <cit.> outlined the
importance of balancing in ELBO the integrated log-likelihood term and
the KL regularizer, to avoid both overfitting and dominance of the
prior. However, a suitable limiting theory has yet to be established,
as well as guarantees for the practical implementation of the
stochastic gradient descent (SGD) used to estimate the parameters of
the variational distribution.
Motivated by the need to provide a solid theoretical framework,
asymptotic analysis of NN has gained much interest recently. The main focus
has been on the gradient descent algorithm and its variants
<cit.>. In
much of these works, a mean-field analysis is performed to
characterize the limiting nonlinear evolution of the weights of a
two-layer NN, allowing the derivation of a law of large numbers and a
central limit theorem for the empirical distribution of neuron
weights. A long-term
goal of these works is to demonstrate convergence toward a global
minimum of these limits for the mean field. Despite some progress in
this direction, this is still an open and highly challenging problem;
cf <cit.>. Nevertheless, this
asymptotic analysis is also of interest in its own right, as we show
here in the case of variational inference for Bayesian neural
networks. Indeed, based on this asymptotic analysis, we develop an
efficient and new variant of the stochastic gradient descent (SGD)
algorithm for variational inference in BNN that computes only the
information necessary to recover the limit behavior.
Our goal, then, is to work at the intersection of analytical efforts
to gain theoretical guarantees and insights and of practical methods
for a workable variational inference procedure. By adapting the
framework developed by <cit.>, we produce a rigorous
asymptotic analysis of BNN trained in a variational setting for a
regression task. From the limit equation analysis, we first
find that a proper regularisation of the Kullback-Leibler divergence
term in relation with the integrated loss leads to their right
asymptotic balance. Second, we prove the asymptotic equivalence of
the idealized and Bayes-by-Backprop SGD schemes, as both preserve
the same core contributions to the limit. Finally, we introduce a
computationally more favourable scheme, directly stemming from the
effective asymptotic contributions. This scheme is the true
mean-field algorithmic approach, as only deriving from
non-interacting terms.
More specifically, our contributions are the following:
* We first focus on the idealized SGD algorithm, where the
variational expectations of the derivative of the loss from the
reparametrization trick of <cit.> are computed
exactly. More precisely, we prove that with the number of neurons
N→ +∞, the sequence of trajectories of the scaled empirical
distributions of the parameters satisfies a law of large
numbers. This is the purpose of Theorem <ref>. The proof
is completely new: it establishes directly the limit in the topology
inherited by the Wasserstein distance bypassing the highly technical
Sobolev space arguments used in <cit.>.
The idealized SGD requires the computation of some integrals, which in
practice prevents a direct application of this algorithm. However, we
can prove its convergence to an explicit nonlinear process. These
integrals are usually obtained by a Monte Carlo approximation, leading to the
Bayes-by-Backprop SGD, see <cit.>.
* We show for the Bayes-by-Backprop SGD (see Theorem
<ref>) that the sequence of trajectories of the scaled
empirical distributions of the parameters satisfies the same law of
large numbers as that in Theorem <ref>, which justifies
such an approximation procedure. Note that each step of the
algorithm involves the simulation of O(N) Gaussian random
variables, which can make the associated gradient evaluation
prohibitively expensive.
* A careful analysis of the structure of the limit equation
(<ref>) allows us to develop a new algorithm, called
Minimal-VI SGD, which at each step generates only two
Gaussian random variables and for which we prove the same limiting
behavior. The key idea here is to keep only those contributions which
affect the asymptotic behavior and which can be understood as the
mean-field approximation from the uncorrelated degrees of
freedom. This is all the more interesting since
we observe numerically that the number weights N required to reach
this asymptotic limit is quite small which makes this variant of
immediate practical interest.
* We numerically investigate the convergence of the three methods
to the common limit behavior on a toy example. We observe that the
mean-field method is effective for a small number of neurons
(N=300). The differences between the methods are
reflected in the variances.
The paper is organized as follows: Section <ref>
introduces the variational inference in BNN, as well as the SGD
schemes commonly considered, namely the idealized and
Bayes-by-backprop variants. Then, in Section <ref> we
establish our initial result, the LLN for the idealized SGD. In
Section <ref> we prove the LLN for the
Bayes-by-backprop SGD and its variants. We show that both SGD
schemes have the same limit behavior. Based on an analysis of the
obtained limit equation, we present in Section <ref> the
new minimal- VI. Finally, in Section <ref> we
illustrate our findings using numerical experiments. The proofs of the
mean-field limits, which are original and quite technically demanding,
are gathered in the supplementary paper.
Related works.
Law of Large Numbers (LLN) for mean-field interacting particle
systems, have attracted a lot of attentions; see for
example <cit.> and references therein. The use of mean-field
particle systems to analyse two-layer neural networks with random
initialization have been considered in <cit.>, which
establish a LLN on the empirical measure of the weights at fixed times
- we consider in this paper the trajectory convergence, i.e. the whole empirical measure process (time indexed) converges uniformly w.r.t. Skorohod topology. It enables not only to use the limiting PDE, for example to study the convergence of the weights towards the infimum of the loss function (see <cit.> for preliminary results), but is is also crucial to establish the central limit theorem, see for example <cit.>. <cit.> give conditions for global convergence of
GD for exact mean-square loss and online stochastic gradient descent
(SGD) with mini-batches increasing in size with the number of weights
N. A LLN for the entire trajectory of the empirical measure is also
given in <cit.> for a standard SGD.
<cit.> establish the propagation of chaos for SGD with
different step size schemes. Compared to the existing literature
dealing with the SGD empirical risk minimization in two-layer neural
networks, <cit.> provide the first rigorous proof of
the existence of the limit PDE, and in particular its uniqueness, in
the LLN.
We are interested here in deriving a LLN but for Variational Inference
(VI) of two-layer Bayesian Neural Networks (BNN), where we consider a
regularized version of the Evidence Lower Bound (ELBO).
§ VARIATIONAL INFERENCE IN BNN: NOTATIONS AND COMMON SGD
SCHEMES
§.§ Variational inference and Evidence Lower Bound
Setting. Let 𝖷 and 𝖸 be subsets of 𝐑^n (n≥ 1) and 𝐑 respectively.
For N≥1 and w=(w^1,…,w^N)∈(𝐑^d)^N, let f_w^N: 𝖷→𝐑 be the following two-layer neural network: for x∈𝖷,
f_w^N(x):=1/N∑_i=1^Ns(w^i,x)∈𝐑,
where s:𝐑^d×𝖷→𝐑 is the activation function.
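As an illustration only, a vectorized evaluation of f_w^N could look as follows; the choice s(w,x)=tanh(⟨ w,x⟩) is an assumption made for the example (the text only requires a smooth activation), and the array shapes are hypothetical conventions.

import numpy as np

def f_N(w, x, nonlinearity=np.tanh):
    """Scaled two-layer network f_w^N(x) = (1/N) * sum_i s(w^i, x).

    Illustrative choice: s(w, x) = tanh(<w, x>); w has shape (N, d), x has shape (d,).
    """
    return nonlinearity(w @ x).mean()

# Example usage with N = 4 neurons in dimension d = 3.
w = np.random.default_rng(0).standard_normal((4, 3))
print(f_N(w, np.ones(3)))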
We work in a Bayesian setting, in which we seek a distribution of the latent variable w which represents the weights of the neural network. The standard problem in Bayesian inference over complex models is that the posterior distribution is hard to sample from. To tackle this problem, we consider Variational Inference, in which we consider a family of distributions 𝒬^N={ q_θ^N, θ∈Ξ^N} (where Ξ is some parameter space) that is easy to sample from. The objective is to find the best q_θ^N∈𝒬^N, the one closest in KL divergence (denoted 𝒟_ KL) to the exact posterior. Because we cannot compute the KL, we optimize the evidence lower bound (ELBO), which is equivalent to the KL up to an additive constant.
Denoting by 𝔏: 𝐑×𝐑→𝐑_+ the negative log-likelihood (by an abuse of language, we call this quantity the loss), the ELBO (see <cit.>) is defined, for θ∈Ξ^N, (x,y)∈𝖷×𝖸, by
E_ lbo(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x)) q_θ^N(w) dw - 𝒟_ KL(q_θ^N|P_0^N),
where P_0^N is some prior on the weights of the NN. The ELBO is
decomposed into two terms: one corresponding to the Kullback-Leibler
(KL) divergence between the variational density and the prior and the
other to a marginal likelihood term. It was empirically found that the
maximization of the ELBO function is prone to yield very poor
inferences <cit.>. It is argued in <cit.> and
<cit.> that optimizing the ELBO leads as N →∞ to the
collapse of the variational posterior to the prior. <cit.>
proposed to consider a regularized version of the ELBO, which consists
in multiplying the KL term by a parameter which is scaled by the
inverse of the number of neurons:
E_ lbo^N(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x)) q_θ^N(w) dw -1/N𝒟_ KL(q_θ^N|P_0^N),
A first objective of this paper is to show
that the proposed regularization leads to a stable asymptotic behavior
and the effect of both the integrated loss and Kullback-Leibler terms on the
limiting behavior are balanced in the limit N →∞.
The maximization of E_ lbo^N is carried out using SGD.
The variational family 𝒬^N we consider is a Gaussian family of distributions. More precisely, we assume that for any θ=(θ^1,…,θ^N)∈Ξ^N, the variational distribution q_θ^N factorizes over the neurons: for all w=(w^1,…,w^N)∈(𝐑^d)^N, q_θ^N(w)=∏_i=1^Nq^1_θ^i(w^i), where
θ=(m,ρ)∈Ξ:=𝐑^d×𝐑 and q^1_θ is the probability density function (pdf) of 𝒩(m,g(ρ)^2 I_d), with g(ρ)=log(1+e^ρ), ρ∈𝐑.
In the following, we simply write 𝐑^d+1 for 𝐑^d×𝐑.
In addition, following the reparameterisation trick of <cit.>, q^1_θ(w) w is the pushforward of a reference probability measure with density γ by Ψ_θ (see more precisely Assumption A1).
In practice, γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z. With these notations, (<ref>) writes
E_ lbo^N(θ,x,y) =- ∫_(𝐑^d)^N𝔏(y,1/N∑_i=1^Ns(Ψ_θ^i(z^i),x)) γ(z^1)…γ(z^N) dz^1… dz^N -1/N𝒟_ KL(q_θ^N|P_0^N).
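In the Gaussian case above, sampling from q_θ^1 via the reparametrisation Ψ_θ(z)=m+g(ρ)z reduces to the following few lines; this is a sketch, and the random number generator and the number of samples are arbitrary choices.

import numpy as np

def sample_weights(m, rho, n_samples, rng=np.random.default_rng()):
    """Reparameterised samples w = Psi_theta(z) = m + g(rho) * z with z ~ N(0, I_d)."""
    g = np.log1p(np.exp(rho))                     # g(rho) = log(1 + e^rho) > 0
    z = rng.standard_normal((n_samples, m.shape[0]))
    return m + g * z                              # samples from N(m, g(rho)^2 I_d)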
Loss function and prior distribution.
In this work, we focus on the regression problem, i.e.
𝔏 is the Mean Square Loss: for y_1,y_2∈𝐑, 𝔏(y_1,y_2)=1/2|y_1-y_2|^2.
We also introduce the function ϕ:(θ,z,x)∈𝐑^d+1×𝐑^d×𝖷↦ s(Ψ_θ(z),x). On the other hand, we assume that the prior distribution P_0^N writes, for all w∈(𝐑^d)^N,
P_0^N(w)=∏_i=1^NP_0^1(w^i),
where P_0^1:𝐑^d→𝐑_+ is the pdf of 𝒩(m_0,σ^2_0I_d), and σ_0>0. Therefore 𝒟_ KL(q_θ^N|P_0^N)=∑_i=1^N𝒟_ KL(q^1_θ^i|P_0^1) and, for θ=(m,ρ)∈𝐑^d+1,
𝒟_ KL(q_θ^1|P_0^1)=∫_𝐑^d q^1_θ(x) log(q^1_θ(x)/P_0^1(x)) dx=‖ m-m_0‖_2^2/(2σ_0^2)+d/2(g(ρ)^2/σ_0^2-1)+d/2log(σ_0^2/g(ρ)^2).
Note that 𝒟_ KL has at most a quadratic growth in m and ρ.
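For reference, the closed-form KL term of a single variational factor can be evaluated directly from the formula above; the sketch below assumes NumPy arrays for m and m_0 and a scalar ρ.

import numpy as np

def kl_factor(m, rho, m0, sigma0):
    """KL( N(m, g(rho)^2 I_d) || N(m0, sigma0^2 I_d) ) in closed form."""
    d = m.shape[0]
    g = np.log1p(np.exp(rho))            # g(rho) = log(1 + e^rho)
    return (np.sum((m - m0) ** 2) / (2.0 * sigma0 ** 2)
            + 0.5 * d * (g ** 2 / sigma0 ** 2 - 1.0)
            + 0.5 * d * np.log(sigma0 ** 2 / g ** 2))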
Note that we assume here a Gaussian prior to get an explicit expression of the Kullback-Leibler divergence. Most arguments extend to sufficiently regular densities and are essentially the same for exponential families, using conjugate families for the variational approximation.
§.§ Common SGD schemes in backpropagation in a variational setting
Idealized SGD. Let (Ω, ℱ,𝐏) be a probability space. Consider a data set {(x_k,y_k)}_k≥ 0 i.i.d. w.r.t. π∈𝒫(𝖷×𝖸), the space of probability measures over 𝖷×𝖸. For N≥1 and given a learning rate η>0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm writes as follows:
for k≥ 0 and i∈{1,…,N},
θ_k+1=θ_k+ η∇_θE_ lbo^N(θ_k,x_k,y_k)
θ_0 ∼μ_0^⊗ N,
where μ_0∈𝒫(𝐑^d+1) and θ_k=(θ^1_k,…, θ^N_k).
We now compute ∇_θE_ lbo^N(θ,x,y).
First, under regularity assumptions on the function ϕ (which will be formulated later, see A1 and A3 below) and by assumption on 𝔏, we have for all i∈{1,…,N} and all (x,y)∈𝖷×𝖸,
∫_(𝐑^d)^N∇_θ^i𝔏(y,1/N∑_j=1^Nϕ(θ^j,z^j,x))γ(z^1)…γ(z^N) dz^1… dz^N
= -1/N^2∑_j=1^N∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) dz^1… dz^N
=-1/N^2[∑_j=1,j≠ i^N(y-⟨ϕ(θ^j,·,x),γ⟩)⟨∇_θϕ(θ^i,·,x),γ⟩ + ⟨(y-ϕ(θ^i,·,x))∇_θϕ(θ^i,·,x),γ⟩],
where we have used the notation ⟨ U,ν⟩=∫_𝐑^qU(z)ν(dz) for any integrable function U:𝐑^q→𝐑 w.r.t. a measure ν (with a slight abuse of notation, we denote by γ the measure γ(z) dz). Second, for θ∈𝐑^d+1, we have
∇_θ𝒟_ KL(q_θ^1|P_0^1)=(∇_m𝒟_ KL(q_θ^1|P_0^1), ∂_ρ𝒟_ KL(q_θ^1|P_0^1))=(1/σ_0^2(m-m_0), d/σ_0^2g'(ρ)g(ρ)-d g'(ρ)/g(ρ)).
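This gradient, which enters every SGD scheme below, can be coded directly from the expression above; the following is a self-contained sketch with the same conventions as the KL snippet.

import numpy as np

def grad_kl_factor(m, rho, m0, sigma0):
    """Gradient of D_KL( N(m, g(rho)^2 I_d) || N(m0, sigma0^2 I_d) ) w.r.t. (m, rho)."""
    d = m.shape[0]
    g = np.log1p(np.exp(rho))            # g(rho) = log(1 + e^rho)
    gp = 1.0 / (1.0 + np.exp(-rho))      # g'(rho), the logistic sigmoid
    grad_m = (m - m0) / sigma0 ** 2
    grad_rho = d * gp * g / sigma0 ** 2 - d * gp / g
    return grad_m, grad_rho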
In conclusion, the SGD (<ref>) writes: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i-η/N^2∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(θ_k^i,·,x_k)-y_k)∇_θϕ(θ_k^i,·,x_k),γ⟩-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i ∼μ_0.
We shall call this algorithm idealised SGD because it contains an intractable term given by the integral w.r.t. γ. This has motivated the development of methods where this integral is replaced by an unbiased Monte Carlo estimator (see <cit.>) as detailed below.
Bayes-by-Backprop SGD. The second SGD algorithm we study
is based on an approximation, for i∈{1,…,N}, of ∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) dz^1… dz^N (see (<ref>))
by
1/B∑_ℓ=1^B (y-ϕ(θ^j, 𝖹^j,ℓ,x) )∇_θϕ(θ^i,𝖹^i,ℓ,x)
where B∈𝐍^* is a fixed integer and (𝖹^q,ℓ, q∈{i,j}, 1≤ℓ≤ B) is an i.i.d. finite sequence of random variables distributed according to γ(z) dz.
In this case, for N≥ 1, given a dataset (x_k,y_k)_k≥0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm is the following: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i -η/N^2B∑_j=1^N∑_ℓ=1^B (ϕ(θ_k^j,𝖹^j,ℓ_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^i,ℓ_k,x_k)
-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i=(m_0^i,ρ_0^i)∼μ_0,
where η>0 and (𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) is an i.i.d. sequence of random variables distributed according to γ.
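A sketch of one Bayes-by-Backprop update is given below. The read-out ϕ(θ,z,x)=tanh(⟨ m+g(ρ)z, x⟩) and the standard Gaussian reference measure are illustrative assumptions (consistent with A1 below and with the toy model of the numerical section), and the snippet is not the implementation used for the experiments.

import numpy as np

def softplus(r):
    return np.log1p(np.exp(r))

def sigmoid(r):
    return 1.0 / (1.0 + np.exp(-r))

def bbb_sgd_step(m, rho, x, y, eta, m0, sigma0, B=1, rng=np.random.default_rng()):
    """One Bayes-by-Backprop step for phi(theta, z, x) = tanh(<m + g(rho) z, x>).

    m: (N, d) means, rho: (N,) scale parameters; x: (d,), y: scalar.
    """
    N, d = m.shape
    g, gp = softplus(rho), sigmoid(rho)                    # g(rho_i), g'(rho_i)
    grad_m = np.zeros_like(m)
    grad_rho = np.zeros_like(rho)
    for _ in range(B):
        z = rng.standard_normal((N, d))                    # one Gaussian per neuron
        u = (m + g[:, None] * z) @ x                       # pre-activations, shape (N,)
        phi = np.tanh(u)
        residual = np.sum(phi - y)                         # sum_j (phi(theta^j, Z^j, x) - y)
        dphi = 1.0 - phi ** 2                              # tanh'(u)
        grad_m += residual * dphi[:, None] * x[None, :]
        grad_rho += residual * dphi * (z @ x) * gp
    # Gradient of the KL term of each variational factor w.r.t. (m, rho).
    kl_m = (m - m0) / sigma0 ** 2
    kl_rho = d * gp * g / sigma0 ** 2 - d * gp / g
    m_new = m - eta / (N ** 2 * B) * grad_m - eta / N * kl_m
    rho_new = rho - eta / (N ** 2 * B) * grad_rho - eta / N * kl_rho
    return m_new, rho_new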
§ LAW OF LARGE NUMBERS FOR THE IDEALIZED SGD
Assumptions and notations. When E is a metric space and ℐ= 𝐑_+ or ℐ=[0,T] (T≥ 0), we denote by 𝒟(ℐ,E) the Skorohod space of càdlàg functions on ℐ taking values in E and 𝒞(ℐ,E) the space of continuous functions on ℐ taking values in E.
The evolution of the parameters ({θ_k^i, i=1,…,N})_k≥ 1 defined by (<ref>) is tracked through their empirical distribution ν_k^N (for k≥ 0) and its scaled version μ_t^N (for t∈𝐑_+), which are defined as follows:
ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined (<ref>).
Fix T>0.
For all N≥1, μ^N:={μ_t^N, t∈[0,T]} is a random element of 𝒟([0,T],𝒫(𝐑^d+1)), where 𝒫(𝐑^d+1) is endowed with the weak convergence topology. For N≥1 and k≥1, we introduce the following σ-algebras:
ℱ_0^N=σ(θ_0^i, 1≤ i≤ N) and ℱ_k^N=σ(θ_0^i, (x_q,y_q),1≤ i≤ N, 0≤ q≤ k-1).
Recall that q_θ^1:𝐑^d→𝐑_+ is the pdf of 𝒩(m,g(ρ)^2I_d), where θ=(m,ρ)∈𝐑^d+1.
In this work, we assume the following.
A1.
There exists a pdf γ:𝐑^d→𝐑_+ such that for all θ∈𝐑^d+1, q^1_θ x=Ψ_θ#γ x, where {Ψ_θ, θ∈𝐑^d+1} is a family of 𝒞^1-diffeomorphisms over 𝐑^d such that for all z∈𝐑^d, θ∈𝐑^d+1↦Ψ_θ(z) is of class 𝒞^∞.
Finally, there exists 𝔟:𝐑^d→𝐑_+ such that for all multi-index α∈𝐍^d+1 with |α|≥ 1, there exists C_α>0, for all z∈𝐑^d and θ=(θ_1,…,θ_d+1)∈𝐑^d+1,
| ∂_αΨ_θ(z)| ≤ C_α𝔟(z), with ⟨𝔟^q, γ⟩ <+∞ for all q≥ 1,
where ∂_α= ∂_θ_1^α_1…∂_θ_d+1^α_d+1 and ∂_θ_j^α_j is the partial derivatives of order α_j w.r.t. to θ_j.
A2.
The sequence {(x_k,y_k)}_k≥ 0 is i.i.d. w.r.t. π∈𝒫(𝖷×𝖸).
The set 𝖷×𝖸⊂𝐑^d×𝐑 is compact. For all k≥0, (x_k,y_k) is independent of ℱ_k^N, where ℱ_k^N is defined in (<ref>).
A3.
The activation function s:𝐑^d×𝖷→𝐑 belongs to 𝒞^∞_b(𝐑^d×𝖷) (the space of smooth functions over 𝐑^d×𝖷 whose derivatives of all order are bounded).
A4.
The initial parameters (θ_0^i)_i=1^N are i.i.d. w.r.t. μ_0∈𝒫(𝐑^d+1) which has compact support.
Note that A1 is satisfied when γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z, with 𝔟(z)=1+|z|.
With these assumptions, for every fixed T>0, the sequence ({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋ defined by (<ref>) is a.s. bounded:
Assume A1→A4. Then,
there exists C>0 such that a.s. for all T>0, N≥ 1,
i∈{1,…, N}, and 0≤ k≤⌊ NT⌋,
|θ_k^i|≤ Ce^[ C(2+T)]T.
Lemma <ref> implies that a.s. for all T>0 and N≥ 1, μ^N ∈𝒟([0,T],𝒫(Θ_T)), where
Θ_T={θ∈𝐑^d+1, |θ|≤ Ce^[ C(2+T)]T}.
Law of large numbers for (μ^N)_N≥1 defined in (<ref>). The first main result of this work is the following.
Assume A1→A4. Let T>0. Then, the sequence (μ^N)_N≥1⊂𝒟([0,T],𝒫(Θ_T)) defined in (<ref>) converges in probability to the unique deterministic solution μ̅∈𝒞([0,T],𝒫(Θ_T)) to the following measure-valued evolution equation: ∀ f∈𝒞^∞(Θ_T) and ∀ t∈ [0,T],
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( dx, dy) ds
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q^1_·|P_0^1),μ̅_s⟩ ds.
The proof of Theorem <ref> is given in Appendix
<ref>. We stress here the most important steps and used
techniques. In a first step, we derive an identity satisfied by
(μ^N)_N≥ 1, namely the pre-limit
equation (<ref>); see Sec. <ref>. Then we
show in Sec. <ref> that (μ^N)_N≥ 1 is relatively
compact in 𝒟([0,T],𝒫(Θ_T)).
To do so, we check that the sequence (μ^N)_N≥ 1 satisfies all the required assumptions of <cit.> when E= 𝒫(Θ_T) there.
In
Sec. <ref> we prove that every limit point of
(μ^N)_N≥ 1 satisfies the limit equation (<ref>). Then, in Section <ref>,
we prove that there is a unique solution of the measure-valued equation (<ref>).
To prove the uniqueness of the solution of (<ref>),
we use techniques developed in <cit.> which are based on
a representation formula for solution to measure-valued equations <cit.> together with estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>.
In Section <ref>, we also conclude the
proof of Theorem <ref>. Compared
to <cit.>, the fact that
({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋
defined by (<ref>) are a.s. bounded allows to use
different and more straightforward arguments to prove (i) the relative
compactness in 𝒟([0,T],𝒫(Θ_T)) of
(μ^N)_N≥1 (defined in (<ref>)) (ii) the
continuity property of the operator
𝗆↦Λ_t[f](𝗆) defined in
(<ref>) w.r.t. the topology of
𝒟([0,T],𝒫(Θ_T)) and (iii) (μ^N)_N≥ 1
has limit points in 𝒞([0,T],𝒫(Θ_T)). Step
(ii) is necessary in order to pass to the limit N→ +∞ in the
pre-limit equation and Step (iii) is crucial since we prove that there is at most
one solution of (<ref>) in
𝒞([0,T],𝒫(Θ_T)). It is worthwhile to
emphasize that, as N →∞, the effects of the integrated loss
and of the KL terms are balanced, as conjectured in <cit.>.
To avoid further technicalities, we have chosen what may seem restrictive assumptions on the data or the activation function. Note however that it readily extends to an unbounded set 𝖷, and also to unbounded 𝖸, assuming that π has polynomial moments of sufficiently high order. Also, ReLU (or, more easily, leaky ReLU) may be considered by using weak derivatives (to handle the singularity at 0) and a priori moment bounds on the weights.
§ LLN FOR THE BAYES-BY-BACKPROP SGD
The sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined recursively by the algorithm (<ref>) is in general not bounded, since ∇_θϕ(θ ,𝖹, x) is not necessarily bounded if 𝖹∼γ(z) dz. Therefore, we cannot expect Lemma <ref> to hold for {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined by (<ref>). Thus, the sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ is considered on the whole space 𝐑^d+1.
Wasserstein spaces and results.
For N≥1, and k≥ 1, we set
ℱ_k^N=σ (θ_0^i , 𝖹^j,ℓ_q,(x_q,y_q), 1≤ i,j≤ N, 1≤ℓ≤ B, 0≤ q≤ k-1} ).
In addition to A1→A4 (where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)),
we assume:
A5. The sequences (𝖹^j,ℓ_k,1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B) is independent of ℱ_k^N.
Note that the last statement of A5 implies the last statement of A2.
We introduce the scaled empirical distribution of the parameters of the algorithm (<ref>), i.e. for k≥ 0 and t≥ 0:
ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined (<ref>).
One can no longer rely on the existence of a compact subset Θ_T⊂𝐑^d+1 such that a.s. (μ^N)_N≥1⊂𝒟([0,T], 𝒫(Θ_T)), where μ^N={t≥ 0↦μ_t^N} is defined in (<ref>). For this reason, we will work in Wasserstein spaces 𝒫_q(𝐑^d+1), q≥ 0, which, we recall, are defined by
𝒫_q(𝐑^d+1)={ν∈𝒫(𝐑^d+1), ∫_𝐑^d+1 |θ|^q ν(dθ)<+∞}.
These spaces are endowed with the Wasserstein metric 𝖶_q, see e.g. <cit.> for more materials on Wasserstein spaces. For all q≥ 0, (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_q(𝐑^d+1)).
The second main results of this work is a LLN for (μ^N)_N≥1 defined in (<ref>).
Assume A1→A5. Let γ_0> 1+ d+1/2. Then, the sequence (μ^N)_N≥1 defined in (<ref>) converges in probability in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) to a deterministic element μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), where μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is the unique solution in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to the following measure-valued evolution equation:∀ f∈𝒞^∞_b(𝐑^d+1) and ∀ t∈𝐑_+,
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( dx, dy) ds
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q^1_·|P_0^1),μ̅_s⟩ ds.
Theorem <ref> is proved in the appendix <ref>.
Since {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined by (<ref>) is not bounded in general, we work in the space 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). The proof of Theorem <ref> is more involved than that of Theorem <ref>, and generalizes the latter to the case where the parameters of the SGD algorithm are unbounded.
We prove that (μ^N)_N≥1 (defined in (<ref>)) is relatively compact in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). To this end we now use <cit.>. The compact containment, which is the purpose of Lemma <ref>, is not straightforward since 𝒫_γ_0(𝐑^d+1) is not compact contrary to Theorem <ref> where we used the compactness of 𝒫(Θ_T). More precisely, the compact containment of (μ^N)_N≥ 1 relies on a characterization of the compact subsets of 𝒫_γ_0(𝐑^d+1) (see Proposition <ref>) and moment estimates on {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ (see Lemma <ref>).
We also mention that contrary to what is done in the proof of Theorem <ref>, we do not show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) is continuous in time but we still manage to prove that they all satisfy (<ref>). Then, using the duality formula for the 𝖶_1-distance together with rough estimates on the jumps of t↦⟨ f, μ_t^N⟩ (for f uniformly Lipschitz over 𝐑^d+1), we then show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) belongs a.s. to 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)). Again this is important since we have uniqueness of (<ref>) in 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)).
We conclude this section with the following important uniqueness result.
Under the assumptions of Theorems <ref> and <ref>, the solution to (<ref>) is independent of T and is equal to the solution to (<ref>).
This uniqueness result states that both idealized and
Bayes-by-backprop SGD have the same limiting behavior. It is also noteworthy that the mini-batch size B is held fixed. The effect of the batch size can be seen at the level of the central limit theorem, which we leave for future work.
§ THE MINIMAL-VI SGD ALGORITHM
The idea behind the Bayes-by-Backprop SGD stems from the fact
that the integrals w.r.t. γ appearing in the loss function cannot
be computed in practice, and it is quite natural, up to a
reparameterization trick, to replace these integrals by a Monte Carlo
approximation (with i.i.d. Gaussian random variables). To devise a
new, cheaper algorithm based only on the terms that impact the asymptotic
limit, we directly analyse the limit equation (<ref>) and remark that it can be rewritten as,
∀ f∈𝒞^∞(Θ_T) and ∀ t∈
[0,T],
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩
=- η∫_0^t∫_𝖷×𝖸× (𝐑^d)^2⟨ϕ(·,z_1,x)-y,μ̅_s⟩⟨∇_θ f·∇_θϕ( · ,z_2,x),μ̅_s⟩γ^⊗ 2( z_1 z_2)π( x, y) s
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s.
Thus, the integration over γ^⊗ 2 can be treated in the same way as the integration over π: the two Gaussian variables can be viewed as two additional data variables that only need to be sampled afresh at each step. In this case, the SGD (<ref>) becomes: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i -η/N^2∑_j=1^N (ϕ(θ_k^j,𝖹^1_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^2_k,x_k)
-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i=(m_0^i,ρ_0^i)∼μ_0,
where η>0 and (𝖹^p_k, p∈{1,2}, k≥ 0) is an
i.i.d. sequence of random variables distributed according to
γ^⊗2. We call this backpropagation scheme
minimal-VI SGD; it is much cheaper in terms of computational complexity and has the same limiting behavior, as we now discuss.
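For concreteness, the following sketch implements one step of the minimal-VI SGD (<ref>) in NumPy. Only the structure of the update is taken from (<ref>): a single shared pair (𝖹^1_k,𝖹^2_k), the averaged prediction error weighted by η/N^2, and the KL gradient weighted by η/N. The per-neuron model ϕ(θ,z,x)=tanh((m+g(ρ)z)^⊤ x) with g(ρ)=ln(1+e^ρ), the Gaussian prior P_0^1=𝒩(0,σ_0^2 I_d), and the resulting closed-form KL gradient are illustrative assumptions and not part of the general setting, where only ϕ(θ,z,x)=s(Ψ_θ(z),x) is required.

```python
import numpy as np

def softplus(r):                    # g(rho) = ln(1 + e^rho)
    return np.log1p(np.exp(r))

def sigmoid(r):                     # g'(rho)
    return 1.0 / (1.0 + np.exp(-r))

def minimal_vi_step(m, rho, x, y, eta, sigma0, rng):
    """One minimal-VI SGD step; m has shape (N, d), rho has shape (N,).
    Assumes phi(theta, z, x) = tanh((m + g(rho) z)^T x) and prior N(0, sigma0^2 I_d)."""
    N, d = m.shape
    z1, z2 = rng.standard_normal(d), rng.standard_normal(d)   # the shared pair (Z^1_k, Z^2_k)
    g, gp = softplus(rho), sigmoid(rho)
    # average prediction error, computed with the first Gaussian sample
    u1 = (m + g[:, None] * z1) @ x                            # pre-activations, shape (N,)
    err = np.tanh(u1).mean() - y                              # (1/N) sum_j (phi(theta^j, Z^1, x) - y)
    # per-neuron gradient of phi, computed with the second, independent sample
    u2 = (m + g[:, None] * z2) @ x
    s2 = 1.0 - np.tanh(u2) ** 2
    grad_m_phi = s2[:, None] * x[None, :]                     # d phi / d m,   shape (N, d)
    grad_rho_phi = s2 * gp * (z2 @ x)                         # d phi / d rho, shape (N,)
    # gradient of D_KL(N(m, g(rho)^2 I_d) || N(0, sigma0^2 I_d))
    grad_m_kl = m / sigma0 ** 2
    grad_rho_kl = gp * (d * g / sigma0 ** 2 - d / g)
    # update (<ref>): the eta/N^2 factor equals eta/N times the averaged error above
    m_new = m - eta / N * (err * grad_m_phi + grad_m_kl)
    rho_new = rho - eta / N * (err * grad_rho_phi + grad_rho_kl)
    return m_new, rho_new
```

Each step draws only the two shared Gaussian vectors 𝖹^1_k and 𝖹^2_k instead of the NB samples 𝖹^{i,ℓ}_k used by the Bayes-by-Backprop scheme, which is where the reduction in computational cost comes from.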
We introduce the σ-algebra for N,k≥ 1:
ℱ_k^N=σ ({θ_0^i , 𝖹^p_q,(x_q,y_q), 1≤ i≤ N, p∈{1,2}, 0≤ q≤ k-1} ).
In addition to A1→A4 (where in A2, ℱ_k^N is now the one defined above in (<ref>) when k≥ 1), we make the following assumption.
A6. The sequences (𝖹^p_k, p∈{1,2}, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^p_k, p∈{1,2}) is independent of ℱ_k^N, where ℱ_k^N is defined in (<ref>).
Set for k≥ 0 and t≥ 0, ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined in (<ref>).
The last main result of this work states that, as N→ +∞, the sequence (μ^N)_N≥1 satisfies the same law of large numbers as the one satisfied by (<ref>); its proof is omitted since it is the same as that of Theorem <ref>.
Assume A1→A4 and A6. Then, the sequence (μ^N)_N≥1 satisfies all the statements of Theorem <ref>.
§ NUMERICAL EXPERIMENTS
In this section we illustrate Theorems <ref>, <ref>, and <ref> on the following toy model. We set d=5. Given θ^*∈𝐑^d (drawn from a normal distribution and scaled to unit norm), we draw i.i.d. observations as follows: given x∼𝒰([-1,1]^d), we set y=tanh(x^⊤θ^*)+ϵ, where ϵ is zero-mean with variance 10^-4. The initial distribution of the parameters is centered around the prior: θ_0∼ (𝒩(m_0,0.01I_d)×𝒩(g^-1(σ_0),0.01))^⊗ N, with m_0=0 and σ_0=0.2. Since the idealized algorithm cannot be implemented exactly, a mini-batch of size 100 is used as a proxy in the comparisons of the different algorithms below. For the SGD algorithm (<ref>) we set B=1.
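A possible implementation of this setup is sketched below (NumPy, with an arbitrary seed); the inverse link g^-1(σ_0)=ln(e^σ_0-1) corresponds to the softplus g(ρ)=ln(1+e^ρ), and the variances 0.01 and 10^-4 translate into standard deviations 0.1 and 10^-2.

```python
import numpy as np

rng = np.random.default_rng(0)      # arbitrary seed
d, N = 5, 10_000

# ground-truth parameter, drawn from a normal distribution and scaled to unit norm
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)

def sample_batch(batch_size):
    """Draw (x, y) with x ~ U([-1, 1]^d) and y = tanh(x . theta*) + noise of variance 1e-4."""
    x = rng.uniform(-1.0, 1.0, size=(batch_size, d))
    y = np.tanh(x @ theta_star) + 1e-2 * rng.standard_normal(batch_size)
    return x, y

# initial particles theta_0^i = (m_0^i, rho_0^i), centred around the prior
sigma0 = 0.2
rho_prior = np.log(np.expm1(sigma0))        # g^{-1}(sigma0) for g(rho) = ln(1 + e^rho)
m = rng.normal(0.0, 0.1, size=(N, d))       # N(m_0, 0.01 I_d) with m_0 = 0
rho = rng.normal(rho_prior, 0.1, size=N)    # N(g^{-1}(sigma0), 0.01)
```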
Evolution and limit of the distribution
Fig. <ref> displays the histograms of
{F(θ_⌊ Nt⌋^i), i=1,…,N}
(F(θ)=m_2, g(ρ) or m, where
θ=(m,ρ)∈𝐑^d×𝐑), for N=10000, at initialization, halfway through training, and at the end of training. The empirical distributions illustrated by these histograms are very similar over the course of training. It can be seen that for N=10000 the mean-field limit is reached.
Convergence with respect to the number of neurons.
We investigate here the speed of convergence of μ_t^N to μ̅_t (as N→+∞), when tested against test functions f. More precisely, we fix a time T (end of training) and Figure <ref> represents the empirical mean of ⟨ f, μ_T^N⟩ over 50 realizations. The test functions used for this experiment are f_m(θ) = ‖ m‖_2, f_Elbo(θ) = -Ê_lbo^N(θ), where Ê_lbo^N is the empirical E_lbo^N (see (<ref>)) computed with 100 samples of (x,y) and (z^1,…,z^N), and finally f_pred(θ) = 𝔼̂_x[𝕍̂_w∼ q_θ^N[f_w^N(x)]^1/2], where 𝔼̂ and 𝕍̂ denote respectively the empirical mean and the empirical variance over 100 samples. All algorithms converge to the same limit and perform similarly even with a limited number of neurons (N=300 in this example).
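As an illustration, the statistics f_m and f_pred can be estimated as follows, continuing the setup sketch above. The predictive function f_w^N(x) is taken to be the average of per-neuron tanh units, which is an assumption consistent with the earlier sketches rather than the general model, and the ELBO-based statistic is omitted since it depends on the precise definition of E_lbo^N in (<ref>).

```python
import numpy as np

def f_m(m):
    """<f_m, mu_T^N>: average Euclidean norm of the neuron means m^i."""
    return np.linalg.norm(m, axis=1).mean()

def f_pred(m, rho, rng, n_x=100, n_w=100):
    """Empirical mean over n_x test inputs of the predictive standard deviation,
    estimated from n_w weight samples per input (network = mean of per-neuron tanh units)."""
    N, d = m.shape
    g = np.log1p(np.exp(rho))                    # g(rho) per neuron, shape (N,)
    xs = rng.uniform(-1.0, 1.0, size=(n_x, d))
    stds = np.empty(n_x)
    for k, x in enumerate(xs):
        z = rng.standard_normal((n_w, N, d))     # one weight draw per neuron and per sample
        u = (m[None] + g[None, :, None] * z) @ x # pre-activations, shape (n_w, N)
        preds = np.tanh(u).mean(axis=1)          # n_w network outputs
        stds[k] = preds.std()
    return stds.mean()

# e.g. f_m(m) and f_pred(m, rho, rng) with m, rho, rng from the setup sketch above
```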
Convergence with respect to time.
This section illustrates the training process of a BNN with a given
number of neurons N = 10000. In Figure <ref>,
we plot the negative ELBO on a test set and its two components, the
loss and the KL-divergence terms. Figure <ref>
shows that the BNN is able to learn this specific task and that all algorithms exhibit similar performance. It illustrates the
trajectorial convergence of {μ_t^N, t∈[0,T]}_N≥1 to
{μ̅_t, t∈[0,T]} as N→+∞.
Behavior around the limit μ̅.
In Figure <ref>, we plot the boxplots of ⟨ f,μ_t^N⟩ over 50 realizations and N=10000, at different times during training. The minimal-VI scheme (which is computationally cheaper, as explained in <ref>) exhibits a larger variance than the other algorithms.
§ CONCLUSION
By establishing the limit behavior of the idealized SGD for the variational inference of BNNs with the weighting suggested by <cit.>, we have rigorously shown that the Bayes-by-Backprop scheme, the one most commonly used in practice, indeed exhibits the same limit behavior. Furthermore, the analysis of the limit equation led us to validate the correct scaling of the KL divergence term with respect to the loss. Notably, the mean-field limit dynamics has also helped us to devise a far less costly new SGD algorithm,
the Minimal-VI. This scheme shares the same limit
behavior, but only stems from the non-vanishing asymptotic
contributions, hence the reduction of the computational cost. Aside
from confirming the analytical results, the first simulations
presented here show that the three algorithms, while having the same
limit, may differ in terms of variance. Thus, deriving a CLT result
and discussing the right trade-off between computational complexity
and variance will be done in future work. Also, on a more
general level regarding uncertainty quantification, an interesting
question is to analyse the impact of the correct scaling of the KL
divergence term on the error calibration and how to apply the same
analysis in the context of deep ensembles.
A.D. is grateful for the support received from the Agence Nationale de la Recherche (ANR) of the French government through the program "Investissements d'Avenir" (16-IDEX-0001 CAP 20-25). A.G. is supported by the French ANR under the grant ANR-17-CE40-0030 (project EFI) and the Institut Universitaire de France. M.M. acknowledges the support of the French ANR under the grant ANR-20-CE46-0007 (SuSa project). B.N. is supported by the grant IA20Nectoux from the Projet I-SITE Clermont CAP 20-25. E.M. and T.H. acknowledge the support of ANR-CHIA-002, "Statistics, computation and Artificial Intelligence"; part of the work has been developed under the auspices of the Lagrange Center for Mathematics and Calculus.
§ PROOF OF THEOREM <REF>
For simplicity, we prove Theorem <ref> for T=1, and we denote Θ_1 simply by Θ. In this section we assume A1→A4.
§.§ Pre-limit equation (<ref>) and error terms in (<ref>)
§.§.§ Derivation of the pre-limit equation
The aim of this section is to establish the so-called pre-limit equation (<ref>), which will be our starting point to derive Equation (<ref>).
Let N≥ 1, k∈{0,…,N}, and f∈𝒞^∞(Θ). Recall that by Lemma <ref> and since 0≤ k ≤ N, a.s. θ^i_k∈Θ, and thus a.s. f(θ^i_k) is well-defined. The Taylor-Lagrange formula yields
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =1/N∑_i=1^Nf(θ_k+1^i)-f(θ_k^i)
=1/N∑_i=1^N∇_θ f(θ_k^i)·(θ_k+1^i-θ_k^i) +1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i),
where, for all i∈{1,…, N}, θ̂_k^i∈ (θ_k^i,θ_k+1^i)⊂Θ.
Using (<ref>), we then obtain
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =
-η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩
-η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ + 𝐑_k^N[f],
where
𝐑_k^N[f]:=1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i).
Let us define
𝐃_k^N[f] := 𝐄[-η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩|ℱ_k^N]
-𝐄[η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩|ℱ_k^N].
Note that using (<ref>) and (<ref>) together with the fact that |∇_θ f(θ_k^i)|≤sup_θ∈Θ |∇_θ f(θ)|,
the integrand in (<ref>) is integrable and thus 𝐃_k^N[f] is well defined.
Using the fact that (x_k,y_k) is independent of ℱ_k^N by A2 and that {θ_k^i, i=1,…,N} is ℱ_k^N-measurable by (<ref>), we have:
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Introduce also
𝐌_k^N[f] :=-η/N^3∑_i=1^N∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_ θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩-𝐃_k^N[f].
Note that 𝐄 [𝐌_k^N[f]|ℱ_k^N]=0. Equation (<ref>) then reads
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =𝐃_k^N[f]+ 𝐌_k^N[f] -η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ +𝐑_k^N[f].
Notice also that
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^j,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
+η/N^3∑_i=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^i,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y)
=-η/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,ν_k^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y)
+η/N^2∫_𝖷×𝖸⟨(⟨ϕ(·,·,x),γ⟩-y)⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,ν_k^N⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Now, we define for t∈ [0,1]:
𝐃_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐃_k^N[f], 𝐑_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], and 𝐌_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f] .
We can rewrite 𝐃_t^N[f] as follows:
𝐃_t^N[f]=∑_k=0^⌊ Nt⌋-1∫_k/N^k+1/NN 𝐃_⌊ Ns⌋^N[f] s=N∫_0^t 𝐃_⌊ Ns⌋^N[f] s-N∫_⌊ Nt⌋/N^t 𝐃_⌊ Ns⌋^N[f] s.
Since ν_⌊ Ns⌋^N=μ_s^N (by definition, see (<ref>)), we have, using also (<ref>) with k=⌊ Ns⌋,
𝐃_t^N[f] =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s-𝐕_t^N[f],
where
𝐕_t^N[f] :=-η∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s.
On the other hand, we also have for t∈ [0,1],
∑_k=0^⌊ Nt⌋-1-η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ =-η∫_0^⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s.
We finally set:
𝐖_t^N[f]:=- 𝐕_t^N[f] + η∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s.
Since ⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩=∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩, we deduce from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), the so-called pre-limit equation satisfied by
μ^N: for N≥1, t∈ [0,1], and f∈𝒞^∞(Θ),
⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
-η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+ 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f].
§.§.§ The last five terms in (<ref>) are error terms
The purpose of this section is to show that the last five terms appearing in the r.h.s. of (<ref>) are error terms when N→+∞.
For J∈𝐍^* and f∈𝒞^J(Θ), set ‖ f‖_𝒞^J(Θ):=∑_|k|≤ J‖∂_kf ‖_∞, Θ,
where ‖ g‖_∞, Θ=sup_θ∈Θ|g(θ)| for g:Θ→𝐑^m.
Assume A1→A4. Then, there exists C>0 such that a.s. for all f∈𝒞^∞(Θ) and N≥1,
* η/N∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s ≤ C‖ f‖_𝒞^1(Θ)/N.
* η/N∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ)/N.
* sup_t∈[0,1]|𝐖_t^N[f]|+ sup_t∈[0,1]|𝐑_t^N[f]| ≤ C‖ f‖_𝒞^2(Θ)/N.
Finally, sup_t∈[0,1]𝐄[|𝐌_t^N[f]|]≤C‖ f‖_𝒞^1(Θ)/√(N).
All along the proof, C>0 denotes a positive constant independent of N≥ 1,k∈{0,…,N-1},(s,t)∈ [0,1]^2,(x,y)∈𝖷×𝖸,θ∈Θ,z∈𝐑^d, and f∈𝒞^∞(Θ) which can change from one occurrence to another.
Using (<ref>), the Cauchy-Schwarz inequality, and the fact that ∇_θ f is bounded over Θ, we obtain:
|⟨∇_θ f(θ)·∇_θϕ(θ,·,x),γ⟩|≤⟨|∇_θ f(θ)·∇_θϕ(θ,·,x)|,γ⟩≤ C‖ f‖_𝒞^1(Θ).
Combining (<ref>) and (<ref>), we obtain:
∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ)
and
∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_θf·∇_θϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ),
which proves Items <ref> and <ref>.
Let us now prove Item <ref>. By (<ref>) and (<ref>), sup_t∈[0,1]|𝐕_t^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N.
On the other hand,
because f∈𝒞^∞(Θ) and θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is continuous (see (<ref>)) over the compact set Θ, it holds that ‖∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞.
Hence, it holds:
sup_t∈[0,1]|∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s|≤ C‖ f‖_𝒞^1(Θ)/N.
Using (<ref>), it then holds sup_t∈[0,1]|𝐖_t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)/N.
Since f∈𝒞^∞(Θ), we have, by (<ref>), for N≥ 1 and 0≤ k≤ N-1, |𝐑_k^N[f]|≤‖ f‖_𝒞^2(Θ)C/N∑_i=1^N|θ_k+1^i-θ_k^i|^2.
By (<ref>) and Lemma <ref>, |θ_k+1^i-θ_k^i|^2≤ C/N^2 and consequently, one has:
|𝐑_k^N[f]|≤C‖ f‖_𝒞^2(Θ)/N^2.
Hence, for all t∈[0,1], |𝐑_t^N[f]|≤C‖ f‖_𝒞^2(Θ)/N.
This proves Item <ref>.
Let us now prove the last item in Lemma <ref>. Let t∈[0,1]. We have, by (<ref>),
|𝐌_t^N[f]|^2=∑_k=0^⌊ Nt⌋-1 |𝐌_k^N[f] |^2+2∑_k<j𝐌_k^N[f] 𝐌_j^N[f] .
For all 0≤ k<j<⌊ Nt⌋, 𝐌_k^N[f] is ℱ_j^N-measurable (see (<ref>)), and since 𝐄 [𝐌_j^N[f]|ℱ_j^N]=0, one deduces that 𝐄 [ 𝐌_k^N[f] 𝐌_j^N[f] ]=𝐄 [𝐌_k^N[f] 𝐄 [𝐌_j^N[f]|ℱ_j^N] ]=0.
Hence, 𝐄[|𝐌_t^N[f]|^2]=∑_k=0^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2].
By (<ref>) and (<ref>), one has a.s. for all 0≤ k≤ N-1,
|𝐌_k^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N.
Hence, 𝐄[|𝐌_t^N[f]|^2]≤ C‖ f‖_𝒞^1(Θ)/N, which proves the last inequality in Lemma <ref>.
§.§ Convergence to the limit equation as N→+∞
In this section we prove the relative compactness of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)). We then show that any of its limit points satisfies the limit equation (<ref>).
§.§.§ Wasserstein spaces and duality formula
In this section we recall some basic results on the space 𝒫(𝒮), when (𝒮, 𝖽) is a Polish space, which will be used throughout this work. First, when endowed with the weak convergence topology, 𝒫(𝒮) is a Polish space <cit.>. In addition, 𝒫_q(𝒮)= {ν∈𝒫(𝒮), ∫_𝒮𝖽(w_0,w)^q ν ( w)<+∞}, where w_0∈𝒮 is arbitrary (note that this space was defined previously in (<ref>) when 𝒮=𝐑^d+1), is also a Polish space when endowed with the 𝖶_q metric <cit.>.
Recall also the duality formula for the 𝖶_1-distance on 𝒫_1(𝒮) (see e.g <cit.>):
𝖶_1(μ,ν)=sup{|∫_𝒮f(w)μ(w)-∫_𝒮f(w)ν( w)|, f_Lip≤ 1}.
Finally, when 𝒦⊂𝐑^d+1 is compact, the convergence in 𝖶_q-distance is equivalent to the usual weak convergence on 𝒫(𝒦) (see e.g. <cit.>).
§.§.§ Relative compactness
The main result of this section is to prove that (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)), which is the purpose of Proposition <ref> below. To this end, we need to prove that for all f∈𝒞^∞(Θ), every sequence
(⟨ f,μ_t^N⟩)_N≥ 1 satisfies some regularity conditions, which is the purpose of the next result.
Assume A1→A4.
Then there exists C>0 such that a.s. for all f∈𝒞^∞(Θ), 0≤ r<t≤ 1, and N≥1:
|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C‖ f‖_𝒞^2(Θ)[|t-r|+|t-r|/N+1/N].
Let f∈𝒞^∞(Θ) and let N≥1 and 0≤ r<t≤ 1. In the following C>0 is a positive constant independent of f∈𝒞^∞(Θ), N≥1, and 0≤ r<t≤ 1, which can change from one occurrence to another.
From (<ref>), we have
⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩ =𝐀_r,t^N[f] - η∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+𝐌_t^N[f]-𝐌_r^N[f] +𝐖_t^N[f]-𝐖_r^N[f]+𝐑_t^N[f]-𝐑_r^N[f],
where
𝐀_r,t^N[f] =-η∫_r^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y)
+η/N∫_r^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y)
-η/N∫_r^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y).
By (<ref>) and (<ref>), |𝐀_r,t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)[|t-r|+|t-r|/N].
In addition, since θ↦𝒟_ KL(q_θ^1|P_0^1) is bounded over Θ (since it is smooth and Θ is compact),
| ∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s |≤ C‖ f‖_𝒞^1(Θ)|t-r|.
Furthermore, using (<ref>),
|𝐌_t^N[f]-𝐌_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^1(Θ)/N.
Next, we have, by Item <ref> in Lemma <ref>, |𝐖_t^N[f]-𝐖_r^N[f]|≤|𝐖_t^N[f]|+|𝐖_r^N[f]|≤C‖ f‖_𝒞^2(Θ)/N.
Finally, by (<ref>),
|𝐑_t^N[f]-𝐑_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐑_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^2(Θ)/N^2.
The proof of Proposition <ref> is complete plugging all the previous estimates in (<ref>).
Assume A1→A4. Then, the sequence (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)).
The proof consists in applying <cit.> with E=𝒫(Θ) endowed with the weak convergence topology.
Set 𝔽={𝖫_f, f∈𝒞^∞(Θ)} where
𝖫_f: ν∈𝒫(Θ)↦⟨ f, ν⟩.
The class of continuous functions 𝔽 on 𝒫(Θ) satisfies Conditions <cit.>.
On the other hand, the condition <cit.> is satisfied since 𝒫(Θ) is compact because Θ is compact (see e.g. <cit.> together with <cit.>).
It remains to verify Condition (3.4) of <cit.>, i.e. that for all f∈𝒞^∞(Θ), (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟([0,1],𝐑). To this end, we apply <cit.>. Condition (i) in <cit.> is satisfied because
|⟨ f,μ^N_t⟩|≤‖ f‖_∞,Θ for all t∈ [0,1] and N≥ 1. Let us now show that Condition (ii) in <cit.> holds.
For this purpose, we use Lemma <ref>.
For δ,β>0 sufficiently small, it is possible to construct a subdivision { t_i}_i=0^v of [0,1] such that t_0 =0, t_v=1, t_i+1-t_i = δ+β for i∈{0,…,v-2} and δ+β≤ t_v -t_v-1≤ 2(δ+β). According to the terminology introduced in <cit.>, { t_i}_i=0^v is δ-sparse. Then, by Lemma <ref>, there exists C>0 such that a.s.
for all δ,β>0, all such subdivision { t_i}_i=0^v, i∈{0,…,v-1}, and N≥ 1,
sup_t,r∈[t_i ,t_i+1 ] |⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(|t_i+1 -t_i |+|t_i+1 -t_i |/N+1/N)≤ C(2(δ+β)+2(δ+β)/N+1/N).
Thus, one has:
inf_β>0max_isup_t,r∈[t_i ,t_i+1 ]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N).
Consequently, there exists C>0 such that a.s. for all δ>0 small enough and N≥ 1,
w'_⟨ f,μ^N⟩(δ):=inf_{t_i}
δ-sparsemax_isup_t,r∈[t_i,t_i+1]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N).
This implies lim_δ→0lim sup_N→+∞𝐄[w'_⟨ f,μ^N⟩(δ)]=0. By Markov's inequality, this proves Condition (ii) of <cit.>. Therefore, for all f∈𝒞^∞(Θ), using also Prokhorov's theorem, the sequence (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) is relatively compact. In conclusion,
according to <cit.>, (μ^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) is tight.
§.§.§ Limit points satisfy the limit equation (<ref>)
In this section we prove that every limit point of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)) satisfies (<ref>).
Let 𝗆,(𝗆^N)_N≥ 1⊂𝒟([0,1], 𝒫(Θ)) be such that 𝗆^N→𝗆 in 𝒟([0,1], 𝒫(Θ)). Then, for every Lipschitz continuous function f:Θ→𝐑, we have ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑).
Let f be such a function.
By <cit.>, 𝗆^N→𝗆 in 𝒟([0,1],𝒫(Θ)) iff there exist continuous, increasing bijections λ_N of [0,1] onto itself such that sup_t∈[0,1]|λ_N(t)-t|→_N→∞ 0 and
sup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0.
Then ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑) since by (<ref>), sup_t∈ [0,1]|⟨ f,𝗆_λ_N(t)^N⟩-⟨ f,𝗆_t⟩| ≤f_Lipsup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0.
Let f∈𝒞^∞(Θ). Then, any limit point of (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) belongs a.s. to 𝒞([0,1],𝐑).
Fix t∈ (0,1]. Letting r→ t in (<ref>), we obtain
|⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩|≤ C/N.
Therefore sup_t∈(0,1]|⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩|→ 0 as N→+∞. The result follows from <cit.>.
Let μ^*∈𝒟([0,1], 𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1], 𝒫(Θ)). Then, a.s. μ^*∈𝒞([0,1], 𝒫(Θ)).
Up to extracting a subsequence, we assume that μ^N→μ^* in distribution. By the Skorohod representation theorem, there exists another probability space (Ω̂, ℱ̂,𝐏̂) on which are defined random elements (μ̂^N)_N≥1 and μ̂^*, where,
μ̂^*𝒟=μ^*, and for all N≥1, μ̂^N𝒟=μ^N,
and such that 𝐏̂-a.s., μ̂^N→μ̂^* in 𝒟([0,1], 𝒫(Θ)) as N→ +∞. Fix f∈𝒞^∞(Θ). We have, by Lemma <ref>,
𝐏̂-a.s., ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in 𝒟([0,1],𝐑).
In particular, ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in distribution. By Proposition <ref>, there exists Ω̂_f ⊂Ω̂ of 𝐏̂-mass 1 such that for all ω∈Ω̂_f, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). Denote by ℱ the class of polynomial functions with rational coefficients. Since this class is countable, the set Ω̂_ℱ:=∩_f∈ℱΩ̂_f
is of 𝐏̂-mass 1.
Consider now an arbitrary f∈𝒞(Θ) and let us show that for all ω∈Ω̂_ℱ, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). By the Stone-Weierstrass theorem, there exists (f_n)_n≥1⊂ℱ such that ‖ f_n-f‖_∞,Θ→_n→+∞0. On Ω̂_ℱ, for all n,
t∈ [0,1]↦⟨ f_n,μ̂_t^*⟩ is continuous and converges uniformly to t∈ [0,1]↦⟨ f,μ̂_t^*⟩.
Hence, for all ω∈Ω̂_ℱ and f∈𝒞 (Θ), ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑), i.e. for all ω∈Ω̂_ℱ, μ̂^*(ω)∈𝒞([0,1],𝒫(Θ)). This concludes the proof.
Now, we introduce, for t∈[0,1] and f∈𝒞^∞(Θ), the function Λ_t[f]:𝒟([0,1],𝒫(Θ))→𝐑_+ defined by:
Λ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩
+η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s
+ η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |.
We now study the continuity of Λ_t[f].
Let (𝗆^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) converge to 𝗆∈𝒟([0,1],𝒫(Θ)). Then, for every continuity point t∈[0,1] of 𝗆 and all f∈𝒞^∞(Θ), we have Λ_t[f](𝗆^N)→Λ_t[f](𝗆).
Let f∈𝒞^∞(Θ) and denote by 𝒞(𝗆)⊂[0,1] the set of continuity points of 𝗆. Let t∈𝒞(𝗆). From <cit.>, we have, for all s∈𝒞(𝗆),
𝗆^N_s→𝗆_s in 𝒫(Θ).
Thus, ⟨ f,𝗆_t^N⟩→_N→∞⟨ f,𝗆_t⟩.
For all z∈𝐑^d and (x,y)∈𝖷×𝖸, A1 and A3 ensure that the functions θ∈Θ↦ϕ(θ
,z,x)-y and θ∈Θ↦∇_θ f(θ)·∇_θϕ(θ,z,x) are continuous and also bounded because Θ is compact. Hence, for all s∈ [0,t]∩𝒞(𝗆), using (<ref>),
⟨ϕ(·,z,x)-y,𝗆_s^N⟩→⟨ϕ(·,z,x)-y,𝗆_s⟩ and ⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩
Since [0,1]\𝒞(𝗆) is at most countable (see <cit.>) we have that for a.e. (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸,
⟨ϕ(·,z',x)-y,𝗆_s^N⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨ϕ(·,z',x)-y,𝗆_s⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩.
Since ϕ(θ,z',x)-y is bounded and by (<ref>), there exists C>0 such that for all (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸, ⟨ |ϕ(·,z',x)-y|,𝗆_s^N⟩⟨|∇_θ f·∇_θϕ(·,z,x)|,𝗆_s^N⟩≤ C‖∇ _θ f‖_∞,Θ𝔟(z).
By the dominated convergence theorem, we then have:
∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s^N⊗γ⟩π( x, y) s
N→+∞⟶∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s.
With the same arguments as above, one shows that ∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s^N ⟩ s →∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s ⟩ s.
The proof of the lemma is complete.
Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). Then, a.s. μ^* satisfies (<ref>).
Up to extracting a subsequence, we can assume that μ^N→μ^* in distribution as N→ +∞. Let f∈𝒞^∞(Θ). The pre-limit equation (<ref>) and Lemma <ref> imply that a.s. for all N≥ 1 and t∈[0,1], Λ_t[f](μ^N)≤ C/N+ |𝐌_t^N[f]|.
Hence, using the last statement in Lemma <ref>, it holds for all t∈[0,1],
lim_N→∞𝐄[Λ_t[f](μ^N)]=0.
In particular, Λ_t[f](μ^N)→ 0 in probability. Let us now show that Λ_t[f](μ^N)→Λ_t[f](μ^*) in distribution.
Denoting by 𝖣(Λ_t[f]) the set of discontinuity points of Λ_t[f], we have, from Proposition <ref> and Lemma <ref>, for all t∈[0,1] and f∈𝒞^∞(Θ),
𝐏(μ^*∈𝖣(Λ_t[f])) =0.
By the continuous mapping theorem, Λ_t[f](μ^N)→Λ_t[f](μ^*) in distribution.
By uniqueness of the limit in distribution, we have that for all t∈[0,1] and f∈𝒞^∞(Θ), a.s. Λ_t[f](μ^*)=0. Let us now prove that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0.
On the one hand, for all f∈𝒞^∞(Θ) and 𝗆∈𝒟([0,1],𝒫(Θ)), the function t↦Λ_t[f](𝗆) is right-continuous. Since [0,1] is separable, we have that for all f∈𝒞^∞(Θ), a.s. for all t∈[0,1], Λ_t[f](μ^*)=0.
On the other hand, 𝒞^∞(Θ) is separable (when endowed with the norm f_𝒞^∞(Θ)= ∑_k≥ 02^-kmin(1,∑_|j|=k∂_jf_∞,Θ)) and the function f∈𝒞^∞(Θ) ↦Λ_t[f](𝗆) is continuous (for fixed t∈[0,1] and 𝗆∈𝒟([0,1],𝒫(Θ))) relatively to the topology induced by f_𝒞^∞(Θ).
Hence, we obtain that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0. The proof of the proposition is thus complete.
§.§.§ Uniqueness and end of the proof of Theorem <ref>
There exists a unique solution to (<ref>) in 𝒞([0,1],𝒫(Θ)).
First of all, the fact that there is a solution to (<ref>) is provided by Propositions <ref>, <ref> and <ref>.
The proof of the fact that there is a unique solution to (<ref>) relies on the same arguments as those used in the proof of <cit.>.
For μ∈𝒫(𝐑^d+1), we introduce v[μ]:𝐑^d+1→𝐑^d+1 defined, for θ=(m,ρ)∈𝐑^d+1, by
v[μ](θ)=
-η∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)-η∇_θ𝒟_ KL(q_θ^1|P_0^1).
In addition, if μ̅∈𝒞([0,1],𝒫(Θ)) is solution to (<ref>), it satisfies also (<ref>) with test functions f∈𝒞^∞_c( 𝐑^d+1). Then, adopting the terminology of <cit.>,
any solution μ̅ to (<ref>) is a weak solution[We mention that
according to <cit.>, the
two notions of solutions of (<ref>) (namely the weak
solution and the distributional solution) are equivalent.]
on [0,T] of the measure-valued equation
∂_tμ̅_t=div( v[μ̅_t]μ̅_t)
μ̅_0=μ_0.
Let us now prove that:
* There exists C>0 such that for all μ∈𝒫(𝐑^d+1) and θ∈𝐑^d+1,
|J_θ v[μ](θ)|≤ C.
* There exists C>0 such that for all μ̅∈𝒞([0,1],𝒫(Θ)) solution to (<ref>), 0≤ s,t≤ 1, and θ∈𝐑^d+1,
| v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|.
* There exists L'>0 such that for all μ,ν∈𝒫_1(𝐑^d+1),
sup_θ∈𝐑^d| v[μ](θ)- v[ν](θ)|≤ L'𝖶_1(μ,ν).
Before proving the three items above, we quickly conclude the proof of the proposition. Items 1 and 2 above imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz continuous over [0,1]×𝐑^d+1 when μ̅∈𝒞([0,1],𝒫(Θ)) is a solution to (<ref>). Since μ̅∈𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫(𝐑^d+1)), this allows us to use the representation theorem <cit.> for the solution of (<ref>) in 𝒞([0,1],𝒫(𝐑^d+1)), i.e. it holds:
∀ t∈ [0,1], μ̅_t=ϕ_t#μ_0,
where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1.
Using Equation (<ref>), the fact that 𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫_1(𝐑^d+1)), Item 3 above, and the same arguments as those used in the proof of <cit.> (which, we recall, is based on estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>), one deduces that there is a unique solution to (<ref>).
Let us prove Item 1.
Recall g(ρ)= ln(1+e^ρ). The functions
ρ↦ g”(ρ)g(ρ), ρ↦ g'(ρ), ρ↦g'(ρ)/g(ρ), and ρ↦g”(ρ)/g(ρ)
are bounded on 𝐑. Thus, in view of (<ref>), ‖ Hess_θ 𝒟_ KL(q_θ^1|P_0^1)‖_∞,𝐑^d+1<+∞.
On the other hand, by A1 and A3, for x∈𝖷, z∈𝐑^d, θ∈Θ↦ϕ(θ,z,x) is smooth and
there exists C>0, for all x∈𝖷, θ∈𝐑^d+1, z∈𝐑^d:
| Hess_θϕ(θ,z,x) | ≤ C(𝔟(z)^2+𝔟(z)).
This bound allows us to differentiate under the integral signs in (<ref>) and proves that |J_θ∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)|≤ C, where C>0 is independent of μ∈𝒫(Θ) and θ∈Θ. The proof of Item 1 is complete.
Let us prove Item 2. Let μ̅∈𝒞([0,1],𝒫(Θ)) be a solution to (<ref>), 0≤ s≤ t≤ 1, and θ∈𝐑^d+1. We have
v[μ̅_t](θ)- v[μ̅_s](θ)=
-η∫_𝖷×𝖸⟨ϕ(·,·,x),(μ̅_t-μ̅_s)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y).
Let z∈𝐑^d and x∈𝖷. By A1 and A3, ϕ(·,z,x)∈𝒞^∞(Θ). Therefore, by (<ref>),
⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩ = -η∫_s^t ∫_𝖷×𝖸⟨ϕ(·,·,x')-y,μ̅_r⊗γ⟩⟨∇_θϕ(·,z,x)·∇_θϕ(·,·,x'),μ̅_r⊗γ⟩π( x', y) r
-η∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r
We have ‖∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞. Using also (<ref>)
and the fact that 𝖷×𝖸 is compact (see A2), it holds:
|⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩|≤ C 𝔟(z)|t-s|.
Hence, for all x'∈𝖷,
|⟨ϕ(·,·,x'),(μ̅_t-μ̅_s)⊗γ⟩|≤⟨|⟨ϕ(·,·,x'),μ̅_t-μ̅_s⟩|,γ⟩≤ C|t-s|.
Thus, by (<ref>) and (<ref>), | v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|. This ends the proof of Item 2.
Let us now prove Item 3. Fix μ,ν∈𝒫_1(𝐑^d+1) and θ∈𝐑^d+1.
We have
v[μ](θ)- v[ν](θ)= -η∫_𝖷×𝖸⟨ϕ(·,·,x),( μ -ν)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)
For all x∈𝖷, using (<ref>) and (<ref>), it holds:
|⟨ϕ(·,·,x),(μ-ν)⊗γ⟩| ≤∫_𝐑^d|⟨ϕ(·,z,x),μ⟩-⟨ϕ(·,z,x),ν⟩|γ(z) z
≤ C ∫_𝐑^d𝖶_1(μ,ν)𝔟(z)γ(z) z≤ C 𝖶_1(μ,ν).
Finally, using in addition (<ref>) and (<ref>), we deduce Item 3.
This ends the proof of the proposition.
We are now ready to prove Theorem <ref>.
Recall that Lemma <ref> ensures that a.s. (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). By Proposition <ref>, this sequence is relatively compact. Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point. Along some subsequence N', it holds:
μ^N'→μ^* in distribution.
In addition, a.s. μ^*∈𝒞([0,1],𝒫(Θ)) (by Proposition <ref>) and μ^* satisfies (<ref>) (by Proposition <ref>). By Proposition <ref>, (<ref>) admits a unique solution μ̅∈𝒞([0,1],𝒫(Θ)). Hence, a.s. μ^*=μ̅. Therefore,
μ^N'→μ̅ in distribution.
Since the sequence (μ^N)_N≥1 admits a unique limit point, the whole sequence converges in distribution to μ̅. The convergence also holds in probability since μ̅ is deterministic. The proof of Theorem <ref> is complete.
§.§ Proof of Lemma <ref>
In this section we prove Lemma <ref>.
We start with the following simple result.
Let T>0, N≥ 1, and c_1>0. Consider a sequence (u_k)_0≤ k≤⌊ NT⌋⊂𝐑_+ for which there exists v_0 such that u_0≤ v_0 and for all 1≤ k≤⌊ NT⌋, u_k≤ c_1 (1+1/N∑_ℓ=0^k-1u_ℓ). Then, for all 0≤ k≤⌊ NT⌋, u_k≤ v_0e^c_1T.
Define v_k=c_1(1+1/N∑_ℓ=0^k-1v_ℓ). For all 0≤ k≤⌊ NT⌋, u_k≤ v_k and v_k=v_k-1(1+c_1/N). Hence v_k=v_0 (1+ c_1/N)^k≤ v_0(1+ c_1/N)^⌊ NT⌋≤ v_0e^c_1T.
This ends the proof of the Lemma.
Since ρ↦ g'(ρ) and ρ↦ g'(ρ)/g(ρ) are bounded continuous functions over 𝐑, and since |g(ρ)|≤ C(1+|ρ|), according to (<ref>), there exists c>0 such that, for all θ∈𝐑^d+1,
|∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|).
All along the proof, C>0 is a constant independent of N≥ 1, T>0, i∈{1,…, N}, 1≤ k≤⌊ NT⌋, (x,y)∈𝖷×𝖸, θ∈𝐑^d+1, and z∈𝐑^d, which can change from one occurrence to another.
It holds:
|θ_k^i|≤ |θ_0^i|+ ∑_ℓ=0^k-1|θ_ℓ+1^i-θ_ℓ^i|.
Using (<ref>), we have, for 0≤ℓ≤ k-1,
|θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^N|(⟨ϕ(θ_ℓ^j,·,x_ℓ),γ⟩-y_ℓ)⟨∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩|
+ η/N^2|⟨(ϕ(θ_ℓ^i,·,x_ℓ)-y_ℓ)∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩| +η/N |∇_θ𝒟_ KL(q_θ_ℓ^i^1|P_0^1)|.
For all θ∈𝐑^d+1, z∈𝐑^d, (x,y)∈𝖷×𝖸, we have, by A2 and A3, since ϕ(θ,z,x)=s(Ψ_θ(z),x),
|ϕ(θ,z,x)-y|≤ C.
Moreover, we have ∇_θϕ(θ,z,x)=∇_1s(Ψ_θ(z),x) J_θΨ_θ(z) (here ∇_1s refers to the gradient of s w.r.t. its first variable). By A3, |∇_1s(Ψ_θ(z),x)|≤ C and, hence, denoting by J_θ the Jacobian w.r.t. θ, using (<ref>),
|∇_θϕ(θ,z,x)|≤ C|J_θΨ_θ(z)|≤ C𝔟(z).
Therefore, by (<ref>),
⟨|∇_θϕ(θ,·,x)|,γ⟩≤ C.
Hence, we obtain, using (<ref>) and (<ref>),
|θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^NC+η/N^2C + cη/N(1+|θ_ℓ^i|) ≤C/N(1+ |θ_ℓ^i|).
Using A4, there exists K_0>0 such that a.s. for all i, |θ_0^i|≤ K_0.
Then, from (<ref>) and (<ref>), for 1≤ k≤⌊ NT⌋, it holds:
|θ_k^i|≤ K_0 + C/N∑_ℓ=0^k-1(1+|θ_ℓ^i|)≤ K_0+CT+ C/N∑_ℓ=0^k-1 |θ_ℓ^i|≤ C_0,T(1+ 1/N∑_ℓ=0^k-1 |θ_ℓ^i|),
with C_0,T=max(K_0+CT, C)≤ K_0+C(1+T).
Then, by Lemma <ref> and A4, we have that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋, |θ_k^i|≤ K_0e^[K_0+C(1+T)]T.
The proof of Lemma <ref> is thus complete.
§ PROOF OF THEOREM <REF>
In this section, we assume A1→𝐀5
(where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)) and the θ^i_k's (resp. μ^N) are those defined by (<ref>) for i∈{1,…,N} and k≥ 0 (resp. by (<ref>) for N≥ 1).
§.§ Preliminary analysis and pre-limit equation
§.§.§ Notation and weighted Sobolev embeddings
For J∈N and β≥0, let ℋ^J,β(𝐑^d+1) be the closure of the set
𝒞_c^∞(𝐑^d+1) for the norm
f_ℋ^J,β:=(∑_|k|≤ J∫_𝐑^d+1|∂_kf(θ)|^2/1+|θ|^2βθ)^1/2.
The space
ℋ^J,β(𝐑^d+1) is a separable Hilbert space and we denote its dual space by
ℋ^-J,β(𝐑^d+1) (see e.g. <cit.>).
The associated
scalar product on ℋ^J,β(𝐑^d+1) will be denoted
by ⟨·,·⟩_ℋ^J,β. For
Φ∈ℋ^-J,β(𝐑^d+1), we use the notation
⟨ f,Φ⟩_J,β= Φ[f], f∈ℋ^J,β(𝐑^d+1).
For ease of notation, and if no confusion is possible, we simply
denote ⟨ f,Φ⟩_J,β by ⟨ f,Φ⟩.
The set 𝒞^J,β_0(𝐑^d+1) (resp. 𝒞^J,β(𝐑^d+1)) is defined as the space of
functions f:𝐑^d+1→𝐑 with continuous
partial derivatives up to order J∈N such that
for all |k|≤ J, lim_|θ|→∞|∂_kf(θ)|/1+|θ|^β=0 (resp. ∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β<+∞).
The spaces 𝒞^J,β(𝐑^d+1) and 𝒞^J,β_0(𝐑^d+1) are endowed with the norm
f_𝒞^J,β:=∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β.
We note that
θ∈𝐑^d+1↦ (1-χ(θ))|θ|^α∈ℋ^J,β(𝐑^d+1) if β-α>(d+1)/2,
where χ∈𝒞_c^∞(𝐑^d+1) equals 1 near 0.
We recall that from
<cit.>, for m'>(d+1)/2 and α,j≥ 0, ℋ^m'+j,α(𝐑^d+1)↪𝒞_0^j,α(𝐑^d+1).
In the following, we consider γ_0,γ_1∈𝐑 and L_0∈𝐍 such that
γ_1>γ_0> d+1/2+1 and L_0> d+1/2 +1.
We finally recall the following standard result.
Let q>p≥ 1 and C>0. The set 𝒦_C^q:={μ∈𝒫_p(𝐑^d+1), ∫_𝐑^d+1|x|^qμ( x)≤ C} is compact.
§.§.§ Bound on the moments of the θ_k^i's
We have the following uniform bound in N≥ 1 on the moments of the sequence {θ_k^i, i∈{1,…,N}}_k= 0,…, ⌊ NT ⌋ defined by (<ref>).
Assume A1→ 𝐀5. For all T>0 and p≥ 1, there exists C>0 such that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋,
𝐄[|θ_k^i|^p]≤ C.
Let p≥ 1.
By A4, 𝐄[|θ_0^i|^p]≤ C_p for all i∈{1,…,N}. Let T>0.
In the following C>0 is a constant independent of N≥1, i∈{1,…,N}, and 1≤ k≤⌊ NT⌋.
Using (<ref>), the fact that ϕ is bounded, 𝖸 is bounded, and (<ref>), we have, for 0≤ n ≤ k-1,
|θ_n+1^i-θ_n^i| ≤C/N^2B∑_j=1^N∑_ℓ=1^B 𝔟(𝖹^i,ℓ_n) +C/N |∇_θ𝒟_ KL(q_θ_n^i^1|P_0^1)|
≤C/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_n)) +C/N (1+|θ_n^i|),
where we have also used (<ref>) for the last inequality.
Let us recall the following convexity inequality: for m,p≥ 1
and x_1,…,x_m∈𝐑_+,
(∑_n=1^mx_n)^p≤ m^p-1∑_n=1^mx_n^p.
Using (<ref>), A1 with q=p, and the fact that 1≤ k ≤⌊ NT⌋, one has, setting u_k=𝐄[|θ_k^i|^p], that u_k≤ C (1+1/N∑_n=0^k-1u_n). The result then follows from
Lemma <ref>.
§.§.§ Pre-limit equation
In this section, we derive the pre-limit equation for μ^N defined by (<ref>). For simplicity we will keep the same notation as that introduced in Section <ref>, though these objects will now be defined with θ^i_k set by (<ref>), and on 𝒞^2,γ_1(𝐑^d+1), for all integers k≥ 0 and all times t≥ 0. Let f∈𝒞^2,γ_1(𝐑^d+1).
Then, set for k≥ 0,
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Note that 𝐃_k^N above is the one defined in (<ref>) but now on 𝒞^2,γ_1(𝐑^d+1) and with θ^i_k defined by (<ref>).
For k≥ 0, we set
𝐌_k^N[f]= -η/N^3B∑_i,j=1^N ∑_ℓ=1^B(ϕ(θ_k^j,𝖹_k^j,ℓ,x_k)-y_k)∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,𝖹_k^i,ℓ,x_k)-𝐃_k^N[f].
By Lemma <ref> together with (<ref>) and (<ref>), 𝐌_k^N[f] is integrable.
Also, using A5 and the fact that θ_k^j is ℱ_k^N-measurable (see (<ref>)),
𝐄 [𝐌_k^N[f]|ℱ_k^N ]=0.
Set 𝐌_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f], t≥ 0. We now extend the definition of 𝐖_t^N[f]
and 𝐑_k^N[f] in (<ref>) and (<ref>) to any time t≥ 0, k≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1), and with θ^i_k set by (<ref>). We then set
𝐑_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], t≥ 0.
With the same algebraic computations as those made in Section <ref>, one obtains the following pre-limit equation: for N≥ 1, t≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1),
⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
-η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+ 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f].
We will now show that the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
§.§ Relative compactness and convergence to the limit equation
§.§.§ Relative compactness in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1))
In this section we prove the following result.
Assume A1→𝐀5. Recall γ_0> d+1/2+1.
Then, the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
We start with the following lemma.
Assume A1→ 𝐀5. Then, ∀ T>0 and f∈𝒞^2,γ_1(𝐑^d+1),
sup_N≥1𝐄[sup_t∈[0,T]⟨ f,μ_t^N⟩^2]<+∞.
Let T>0. In what follows, C>0 is a constant independent of f∈𝒞^2,γ_1(𝐑^d+1), (s,t)∈ [0,T]^2, and z∈𝐑^d, which can change from one occurrence to another. We have by A4, 𝐄[⟨ f,μ_0^N⟩^2]≤ C f_𝒞^2,γ_1^2.
By (<ref>) and (<ref>), it holds:
sup_t∈[0,T]⟨ f,μ_t^N⟩^2 ≤ C[ f_𝒞^2,γ_1^2+ ∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s
+∫_0^ T | ⟨ |∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1) |,μ_s^N⟩ | ^2 s
+1/N^2∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s
+ sup_t∈[0,T] |𝐌_t^N[f]|^2 +sup_t∈[0,T] |𝐖_t^N[f]|^2+ sup_t∈[0,T] |𝐑_t^N[f]|^2 ].
We have using (<ref>), for s∈ [0,T] and z∈𝐑^d,
| ∇_θ f (θ^i_⌊ Ns⌋) ·∇_θϕ(θ^i_⌊ Ns⌋,z,x)|≤ C f_𝒞^1,γ_1𝔟(z) (1+|θ^i_⌊ Ns⌋|^γ_1).
Thus, using Lemma <ref>,
𝐄[ ⟨⟨|∇_θ f·∇_θϕ(·,·,x)|,γ⟩ ,μ_s^N⟩^2 ]≤ Cf_𝒞^1,γ_1^2.
Using (<ref>), for s∈ [0,T], it holds:
| ∇_θ f(θ^i_⌊ Ns⌋)·∇_θ𝒟_ KL(q_θ^i_⌊ Ns⌋^1|P_0^1) | ≤ C f_𝒞^1,γ_1 (1+|θ^i_⌊ Ns⌋|^γ_1+1).
Thus, using Lemma <ref>,
𝐄 [ | ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ | ^2 ]≤ Cf_𝒞^1,γ_1^2.
On the other hand, we have using (<ref>):
sup_t∈ [0,T]|𝐌_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1| 𝐌_k^N[f]|^2.
Recall (<ref>). By (<ref>), (<ref>), A1, and (<ref>), it holds:
|𝐃_k^N[f]|^2≤ Cf_𝒞^1,γ_1^2 [1/N^4∑_i≠ j=1^N (1+|θ^i_k|^2γ_1)+ 1/N^4 (1+⟨ |· |^2γ_1, ν_k^N⟩)]≤C/N^2f_𝒞^1,γ_1^2 (1+|θ^i_k|^2γ_1)
and
|𝐌_k^N[f]|^2≤C/N^4B∑_i,j=1^N ∑_ℓ=1^Bf^2_𝒞^1,γ_1 |𝔟(𝖹_k^i,ℓ)|^2 (1+|θ^i_⌊ Ns⌋|^2γ_1)+ |𝐃_k^N[f]|^2.
By Lemma <ref> and A1, one deduces that
𝐄[|𝐌_k^N[f]|^2]≤Cf_𝒞^1,γ_1^2/N^2.
Going back to (<ref>), we then have 𝐄[sup_t∈ [0,T]|𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2.
Using the same arguments as those used so far,
one also deduces that for t∈ [0,T]
sup_t∈[0,T]|𝐖_t^N[f]|^2 ≤Cf_𝒞^1,γ_1^2/N^2sup_t∈[0,T] (1+⟨ |· |^γ_1+1, ν_⌊ Nt⌋^N⟩)^2
= Cf_𝒞^1,γ_1^2/N^2max_0≤ k≤⌊ NT⌋(1+⟨ |· |^γ_1+1, ν_k^N⟩)^2
≤Cf_𝒞^1,γ_1^2/N^2∑_k=0^⌊ NT⌋ (1+⟨ |· |^γ_1+1, ν_k^N⟩)^2.
and thus
𝐄[sup_t∈[0,T]|𝐖_t^N[f]|^2] ≤Cf_𝒞^1,γ_1^2/N.
Let us finally deal with the term involving 𝐑_t^N[f].
One has using (<ref>):
sup_t∈[0,T]|𝐑_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1|𝐑_k[f]|^2.
For 0≤ k≤⌊ NT⌋-1, we have, from (<ref>),
|𝐑_k^N[f]|^2 ≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ̂_k^i|^γ_1)^2
≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1).
Using (<ref>),
|θ_k+1^i-θ_k^i|^4≤ C[1/N^4+|θ_k^i|^4/N^4+1/N^4B∑_ℓ=1^B|𝔟(𝖹_k^i,ℓ)|^4].
By Lemma <ref> and A1, it then holds
𝐄[|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1)] ≤C/N^4.
Hence, one deduces that
𝐄[sup_t∈[0,T]|𝐑_t^N[f]|^2]≤ C f_𝒞^2,γ_1^2 /N^2.
This ends the proof of Lemma <ref>.
Assume A1→𝐀5. Let 0<ϵ<γ_1-γ_0. For every T>0,
sup_N≥1𝐄[sup_t∈[0,T]∫_𝐑^d+1|x|^γ_0+ϵμ_t^N( x) ] <+∞.
Apply Lemma <ref> with f:θ↦(1-χ(θ))|θ|^γ_0+ϵ∈𝒞^2,γ_1(𝐑^d+1).
Assume A1→𝐀5. Let T>0 and f∈𝒞^2,γ_1(𝐑^d+1). Then, there exists C>0 such that for all δ>0 and 0≤ r<t≤ T such that t-r≤δ, one has for all N≥ 1,
𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2]≤ C (δ^2+δ/N+ 1/N).
Using (<ref>), Jensen's inequality, (<ref>), (<ref>), and (<ref>), one has for f∈𝒞^2,γ_1(𝐑^d+1),
𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2] ≤ C[(t-r)^2(1+1/N^2)f_𝒞^1,γ_1^2 +𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]
+𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]+𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]].
We also have with the same arguments as those used just before (<ref>)
𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]=∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2].
Using in addition (<ref>), one has
𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]≤ C (Nδ+1) f_𝒞^1,γ_1^2/ N^2. Note that with this argument, we also deduce that
𝐄[ | 𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2/ N.
On the other hand, by (<ref>) and (<ref>), one has
𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]≤ C f^2_𝒞^1,γ_1/ N and 𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]≤ C f_𝒞^2,γ_1^2/ N^2.
One then plugs all the previous estimates in (<ref>) to deduce the result of Lemma <ref>.
We are now in position to prove Proposition <ref>.
The proof consists in applying <cit.> with E= 𝒫_γ_0(𝐑^d+1) and 𝔽={𝖧_f, f∈𝒞^∞_c(𝐑^d+1)} where
𝖧_f: ν∈𝒫_γ_0(𝐑^d+1)↦⟨ f, ν⟩.
The set 𝔽 on 𝒫_γ_0(𝐑^d+1) satisfies Conditions <cit.>. Condition (4.8) there follows from Proposition <ref>, Lemma <ref>, and Markov's inequality.
Let us now show <cit.> is verified, i.e. that for all f∈𝒞^∞_c(𝐑^d+1), the family (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟(𝐑_+,𝐑).
To do this, it suffices to use Lemma <ref> and <cit.> (with ℋ_1=ℋ_2=𝐑 there).
In conclusion, according to <cit.>, the sequence (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) is relatively compact.
§.§.§ Limit points satisfy the limit equation (<ref>)
For f∈𝒞^1,γ_0-1(𝐑^d+1)
and t≥ 0,
we introduce for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)),
Φ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩
+η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s
+ η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |.
Note that Φ_t[f] is the function Λ_t[f] previously defined in (<ref>) for test functions f∈𝒞^1,γ_0-1(𝐑^d+1) and for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
Assume A1→𝐀5. Let f∈𝒞^1,γ_0-1(𝐑^d+1). Then
Φ_t[f] is well defined. In addition, if a sequence (𝗆^N)_N≥ 1 converges to 𝗆 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), then, for every continuity point t≥ 0 of 𝗆, we have Φ_t[f](𝗆^N)→Φ_t[f](𝗆).
Using A1, and because 𝖸 is bounded and the function ϕ is bounded, 𝒢_1^x,y: θ↦⟨ϕ(θ,·,x)-y,γ⟩∈𝒞^∞_b(𝐑^d+1). In addition, for all multi-index α∈𝐍^d+1, there exists C>0, for all x,y∈𝖷×𝖸 and all θ∈𝐑^d+1, |∂_α𝒢_1^x,y(θ)|≤ C. The same holds for the function
𝒢_2^x: θ∈𝐑^d+1↦⟨∇_θϕ(θ,·,x), γ⟩.
Consequently, θ↦∇_θ f(θ)·𝒢_2^x(θ)∈𝒞^0,γ_0-1(𝐑^d+1)↪𝒞^0,γ_0(𝐑^d+1). Then, there exists C>0 independent of (x,y)∈𝖷×𝖸 and s∈ [0,t] such that
|⟨𝒢_1^x,y,𝗆_s⟩|≤ C,
and
|⟨∇_θ f·𝒢_2^x,𝗆_s⟩ |≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩.
Finally, the function θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is smooth (see (<ref>)) and (<ref>) extends to all its derivatives, i.e. for all multi-index α∈𝐍^d+1,
there exists c>0, for all θ∈𝐑^d+1,
|∂_α∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|).
Thus, ∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)∈𝒞^0,γ_0(𝐑^d+1) and for some C>0 independent of s∈ [0,t]
|⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩|≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩.
Since in addition sup_s∈ [0,t]⟨ 1+|.|^γ_0, 𝗆_s⟩<+∞ (since s↦⟨ 1+|.|^γ_0, 𝗆_s⟩∈𝒟(𝐑_+,𝐑)), Φ_t[f] is well defined. To prove the continuity property of Φ_t[f], it then suffices to use the previous upper bounds together with arguments similar to those used in the proof of Lemma <ref> (see also <cit.>).
Assume A1→𝐀5. Let μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, μ^* satisfies a.s. Equation (<ref>).
Let us consider f∈𝒞_c^∞(𝐑^d+1) and let μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Recall that by <cit.>, the complement of the set
𝒞(μ^*)={t≥ 0, 𝐏(μ^*_t^-= μ^*_t)=1}
is at most countable. Let t_*∈𝒞(μ^*). Then, by Lemma <ref>, one has that 𝐏(μ^*∈𝖣(Φ_t_*[f]))=0. Thus, by the continuous mapping theorem, it holds
Φ_t_*[f](μ^N)→Φ_t_*[f](μ^*) in distribution.
On the other hand, using (<ref>) and the estimates (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), it holds
lim_N→∞𝐄[Φ_t_*[f](μ^N)]=0.
Consequently, for all f∈𝒞_c^∞(𝐑^d+1) and t_*∈𝒞(μ^*), it holds a.s. Φ_t_*[f](μ^*)=0. On the other hand, for all ψ∈𝒞_c^∞(𝐑^d+1), 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), and s≥ 0, the mappings
t≥ 0↦Φ_t[ψ ](𝗆)
is right continuous, and
f∈ℋ^L_0,γ_0-1(𝐑^d+1)↦Φ_s[f](𝗆)
is continuous (because ℋ^L_0,γ_0-1(𝐑^d+1)↪𝒞_0^1,γ_0-1(𝐑^d+1)).
In addition, ℋ^L_0,γ_0-1(𝐑^d+1) admits a dense and countable subset of elements in 𝒞_c^∞(𝐑^d+1). Moreover, there exists a countable subset 𝒯_μ^* of
𝒞(μ^*) such that for all t≥ 0 and ϵ>0, there exists s∈𝒯_μ^*, s∈ [t,t+ϵ]. We prove this claim. Since ℝ_+ is a metric space, 𝒞(μ^*) is separable and thus admits a dense subset 𝒪_μ^*. Since [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*)≠∅, there exists u∈ [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*). Consider now s∈𝒪_μ^* such that |s-u|≤ϵ/4. It then holds t≤ s≤ t+ ϵ, proving the claim with 𝒯_μ^*=𝒪_μ^*.
Hence, we have with a classical argument that a.s. for all f∈ℋ^L_0,γ_0-1(𝐑^d+1) and t≥ 0, Λ_t[f](μ^*)=0. Note also that 𝒞^∞_b(𝐑^d+1)⊂ℋ^L_0,γ_0-1(𝐑^d+1) since 2γ_0>d+1. This ends the proof of the proposition.
§.§ Uniqueness of the limit equation and end of the proof of Theorem <ref>
In this section, we prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). To this end, we first need to prove that every limit points of (μ^N)_N≥ 1 a.s. belongs to 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
§.§.§ Limit points belong to 𝒞(𝐑_+,𝒫_1(𝐑^d+1))
Assume A1→𝐀5. Let μ^*∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be a limit point of (μ^N)_N≥ 1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, a.s. μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
Note that since 𝖶_1≤𝖶_γ_0, μ^N'→μ^* in distribution also in 𝒟(𝐑_+,𝒫_1(𝐑^d+1)), along some subsequence N'. According to <cit.>, μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) a.s. if for all T>0, lim_N→ +∞𝐄[ sup_t∈ [0,T]𝖶_1(μ^N_t_-,μ^N_t) ]=0. Using (<ref>), this is equivalent to proving that
lim_N→ +∞𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ]=0.
Let us consider T>0 and a Lipschitz function f:𝐑^d+1→𝐑 such that ‖ f‖_Lip≤ 1. We have ⟨ f,μ_t^N⟩=⟨ f,μ_0^N⟩+ ∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ (with the usual convention ∑_0^-1=0). Thus the discontinuity points of t∈ [0,T]↦⟨ f,μ_t^N⟩ lie exactly at {1/N, 2/N,…, ⌊ NT⌋/N} and
|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩|≤max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩|, ∀ t∈ [0,T], f Lipschitz.
Pick k=0,…,⌊ NT⌋-1. We have by (<ref>),
|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ≤1/N∑_i=1^N |θ_k+1^i-θ_k^i| ≤C/N∑_i=1^N[ 1/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_k)) +1/N (1+|θ_k^i|)]=:d_k^N
Hence, it holds:
|d_k^N|^2 ≤C/N∑_i=1^N[ 1/N^2B∑_ℓ=1^B (1+𝔟^2(𝖹^i,ℓ_k)) +1/N^2 (1+|θ_k^i|^2)],
where thanks to Lemma <ref> and A1, for all k=0,…,⌊ NT⌋-1, 𝐄[|d_k^N|^2]≤ C/N^2 for some C>0 independent of N≥ 1 and k=0,…,⌊ NT⌋-1.
Thus, using (<ref>) and (<ref>),
𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ] ≤𝐄[ sup_‖ f‖_Lip≤ 1max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ]
≤𝐄[ max_k=0,…,⌊ NT⌋-1d_k^N ]
≤𝐄[ √(∑_k=0^⌊ NT⌋-1 |d_k^N|^2 )]
≤√(𝐄[ ∑_k=0^⌊ NT⌋-1 |d_k^N|^2 ])≤C/√(N).
This concludes the proof of Proposition <ref>.
§.§.§ Uniqueness of the solution to (<ref>)
There is a unique solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to (<ref>).
First of all, the existence of a solution is provided by Propositions <ref>, <ref> and <ref>.
Let us now prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
Recall the definition of v[μ] in (<ref>). We claim that
for all T>0 and
every solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) of (<ref>), there exists C>0 such that
| v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|, for all 0≤ s ≤ t≤ T and θ∈𝐑^d+1.
The proof of Item (<ref>) is the same as that of Item 2 in Proposition <ref> since, using (<ref>) and (<ref>), it holds for all 0≤ s≤ t≤ T and z∈𝐑^d,
|∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r | ≤ C𝔟(z)∫_s^t ⟨ (1+|·| ), μ̅_r⟩ r
≤ C𝔟(z) max_r∈ [0,T]⟨ (1+|·| ), μ̅_r⟩ |t-s|.
We now conclude the proof of Proposition <ref>.
Item 1 in the proof of Proposition <ref> and (<ref>) imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz on [0,T]×𝐑^d+1, for all T>0, when μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is a solution of (<ref>). Since in addition a solution μ̅ to (<ref>) is a weak solution on 𝐑_+ to (<ref>) in 𝒞(𝐑_+,𝒫(𝐑^d+1)), it holds by <cit.>:
∀ t≥ 0, μ̅_t=ϕ_t#μ_0,
where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1.
Together with Item 3 in the proof of Proposition <ref> and using the same arguments as those used in Step 3 of the proof of <cit.>, two solutions agree on [0,T] for all T>0. One then deduces the uniqueness of the solution to (<ref>). The proof of Proposition <ref> is complete.
We are now in position to end the proof of Theorem <ref>.
By Proposition <ref>, (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Let μ^1,μ^2∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be two limit points of this sequence. By Proposition <ref>, a.s. μ^1,μ^2∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)). In addition, according to Proposition <ref>, μ^1 and μ^2 are a.s. solutions of (<ref>). Denoting by μ̅∈𝒞(𝐑_+,𝒫_γ_0(𝐑^d+1)) the unique solution to (<ref>) (see Proposition <ref>), we have a.s.
μ^1 =μ̅ and μ^2=μ̅ in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
In particular μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1) ) and μ^j=μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), j∈{1,2}. As a consequence, μ̅ is the unique limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) and the whole sequence (μ^N)_N≥1 converges to μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Since μ̅ is deterministic, the convergence also holds in probability. The proof of Theorem <ref> is complete.
Let us now prove Proposition <ref>.
Any solution to (<ref>) in 𝒞([0,T],𝒫(Θ_T)) is a solution to (<ref>) in 𝒞([0,T],𝒫_1( 𝐑^d+1)). The result follows from Proposition <ref>.