arXiv: http://arxiv.org/abs/2307.05881v1 | published 2023-07-12
Title: Dynamic Prediction using Time-Dependent Cox Survival Neural Network
Authors: Lang Zeng, Jipeng Zhang, Wei Chen, Ying Ding
Primary category: stat.ML | Categories: stat.ML, cs.LG
The target of dynamic prediction is to provide individualized risk predictions over time, which can be updated as new data become available. Motivated by establishing a dynamic prediction model for the progressive eye disease age-related macular degeneration (AMD), we propose a time-dependent Cox model-based survival neural network (tdCoxSNN) to predict its progression on a continuous time scale using longitudinal fundus images. tdCoxSNN extends the time-dependent Cox model by utilizing a neural network to model the non-linear effect of the time-dependent covariates on the survival outcome. Additionally, by incorporating a convolutional neural network (CNN), tdCoxSNN can take longitudinal raw images as input. We evaluate and compare our proposed method with joint modeling and landmarking approaches through comprehensive simulations using two time-dependent accuracy metrics, the Brier Score and dynamic AUC. We apply the proposed approach to two real datasets. One is a large AMD study, the Age-Related Eye Disease Study (AREDS), in which more than 50,000 fundus images were captured over a period of 12 years for more than 4,000 participants. The other is a public dataset of primary biliary cirrhosis (PBC) disease, in which multiple lab tests were longitudinally collected to predict the time to liver transplant. Our approach achieves satisfactory prediction performance in both simulation studies and the two real data analyses. tdCoxSNN is implemented in PyTorch, TensorFlow, and R-TensorFlow.
Cox model; dynamic prediction; neural network; survival analysis; time-dependent covariate.
§ INTRODUCTION
For many chronic progressive diseases, the prognosis and severity of the disease change over time. A dynamic prediction model that can forecast the longitudinal disease progression profile addresses a crucial and unmet need <cit.>. The unstructured observation times and the varying number of observations across subjects make it challenging to build a dynamic prediction model. Moreover, the growing collection of high-dimensional longitudinal data requires novel dynamic prediction models that can handle various inputs such as images.
Joint modeling and landmarking are the two dominant techniques for dynamic prediction. The first approach jointly models the longitudinal and time-to-event data through a longitudinal sub-model and a survival sub-model <cit.>. However, joint modeling is computationally demanding <cit.> and struggles to directly model large-scale longitudinal data. In contrast, landmarking is a more pragmatic approach that avoids directly modeling the longitudinal covariate process. It estimates the effect of predictors through a survival model over all subjects at risk at a given landmark time point <cit.>. <cit.> and <cit.> compared the prediction accuracy of the two approaches and found that joint modeling performs better than landmarking when the longitudinal process is correctly modeled. In cases where the longitudinal process is misspecified or difficult to estimate, such as with sparse longitudinal data, the landmarking method provides a good enough prediction <cit.>.
Recently, there have been extensions to joint modeling methods aimed at addressing the nonlinear patterns in longitudinal outcomes.
<cit.> proposed the functional JM model to treat multiple longitudinal outcomes as multivariate sparse functional data. <cit.> applied the functional JM model to predict the progression of Alzheimer's disease using pre-specified MRI voxels. However, this approach heavily relies on image registration/pre-processing and disregards the correlation between voxels.
With the development of machine learning and its success in survival analysis <cit.>, new methods have been developed to integrate the dynamic prediction models with machine learning techniques to expand their application and enhance the prediction accuracy in more complex settings.
<cit.> proposed using a neural network to jointly model the survival and longitudinal processes. <cit.> combined landmarking with machine learning ensembles to integrate predictions from standard methods. However, these two approaches cannot be directly applied to settings with high-dimensional longitudinal variables. <cit.> proposed different deep-learning models for dynamic prediction under the discrete-time scenario. Although discretizing the time scale does not necessarily diminish prediction accuracy, the number of time intervals used for discretization significantly impacts accuracy and therefore needs to be carefully tuned in practice <cit.>. In summary, no existing dynamic prediction model can directly handle high-dimensional longitudinal variables collected at irregular observation times.
The time-dependent Cox model <cit.> is a straightforward continuous-time method for incorporating the relationship between the longitudinal and time-to-event processes. This approach has received numerous criticisms as it may not accurately reflect the longitudinal process <cit.>. Nonetheless, we found that it can be easily combined with neural network techniques to create a dynamic prediction method capable of handling complex longitudinal markers (e.g., images) and their nonlinear relationship with survival outcomes. This paper proposes a dynamic prediction method on a continuous time scale when the longitudinal covariates may be high-dimensional and measured at unstructured time points. Specifically, we combine the time-dependent Cox model with a Cox survival neural network. The proposed method can incorporate structured inputs (e.g., images, texts) through an additional neural network architecture. The rest of the article is organized as follows: Section <ref> describes the AREDS dataset which motivates this study. Section <ref> introduces notation and the standard dynamic prediction techniques. Section <ref> presents the proposed model. Section <ref> introduces the accuracy metrics for the evaluation of prediction performance. Sections <ref> and <ref> present the simulation and two real data analysis results. Finally, we conclude with a discussion in Section <ref>.
§ AMD PROGRESSION PREDICTION AND EXISTING WORKS
This research is motivated by the need to develop a dynamic prediction model for the progressive eye disease age-related macular degeneration (AMD) using longitudinal fundus images. AMD is a polygenic and progressive neurodegenerative eye disease and a leading cause of blindness in the older population, especially in developed countries. It has been reported that by 2040, AMD will affect about 288 million people worldwide <cit.>.
Once the disease reaches the late stage, it is typically not curable. Therefore, accurate models for predicting the risk of progressing to late-AMD at an early stage are needed. This will allow clinicians to identify individuals at high risk of late-AMD at their subclinical stage so that preventive interventions can be initiated for them.
Colored fundus photographs have been routinely used to examine and document the presence and severity of AMD in clinical practice and trials. Our motivating study is the Age-related Eye Disease Study (AREDS), a large multi-center, controlled, randomized clinical trial of AMD and age-related cataract <cit.>. It was designed to assess the clinical course and risk factors related to the development and progression of AMD and cataract. The participants were followed for up to 12 years; fundus photography was performed every six months during the first six years and annually thereafter.
Many models have been proposed in recent years to characterize and predict AMD progression. <cit.> built a survival neural network to predict progression risk with baseline demographic and genotype data. <cit.> used both genotype and fundus image data to predict the risk of progression to late-AMD at discretized given time points, where images were processed through a convolutional neural network (CNN). <cit.> used the deep features of baseline fundus images obtained from DeepSeeNet <cit.>, along with demographic and genotype data, to generate individualized progression curves. <cit.> used the deep features of the fundus images from baseline, year 2, and year 3 through a recurrent neural network (RNN). <cit.> trained an image generation model on all consecutive time-point data to predict the fundus image at the next visit. These prediction models either rely solely on baseline predictors or on predictors from specific years.
§ NOTATION AND EXISTING APPROACHES FOR DYNAMIC PREDICTION
§.§ Notations
Let {T_i,δ_i,{𝒴_i(t), 0≤ t ≤ T_i },X_i; i = 1,…,n} denote n samples. T_i = min(T_i^*,C_i^*) denotes the observed event time for subject i, with T_i^* and C_i^* denoting the underlying true event time and censoring time, and δ_i = I(T_i^* ≤ C_i^*) is the event indicator. X_i is the time-invariant measurement for subject i. {𝒴_i(t), 0≤ t ≤ s } denotes the longitudinal measurements in the interval [0,s]. In practice, the history often cannot be fully measured and we only observe {𝒴_i(0), 𝒴_i(t_i1), …, 𝒴_i(t_in_i)}, with t_in_i≤ s. We focus on the scenario where t is continuous and allow the number of longitudinal observations n_i+1 to differ across individuals.
For dynamic prediction, we are interested in predicting the probability that a new patient j, with time-varying measurements up to time s, will survive up to time u for u>s, denoted as
π_j(u|s)=Pr(T_j^*> u|T_j^*>s,{𝒴_j(t), 0≤ t ≤ s },X_j).
In contrast to static predictions, dynamic models allow predictions to be updated, yielding π̂_j(u|s') when new information becomes available at a later time s'>s.
§.§ Existing approaches
Joint modeling is a popular approach for dynamic prediction. It consists of a sub-model for the longitudinal process and a sub-model for the survival process, linked through shared random effects <cit.> or some functional form of the longitudinal outcomes <cit.>. Estimation of the joint model is typically performed with a Bayesian approach <cit.>. It is usually computationally expensive and especially challenging with high-dimensional longitudinal features such as images. After estimating the parameters, the prediction of the survival probability π̂_j(u|s) is obtained from the posterior predictive distribution of the survival process <cit.>.
Another popular dynamic prediction approach is the landmarking method (LM). Different from joint modeling, LM does not model the longitudinal process. It predicts π_j(u|s) through a model fitted on subjects still at risk at time s, which is called the landmark time. Administrative censoring at u is then applied, so the estimated effect β̂_LM(s,u) of the predictors at the landmark time approximates the effect of the time-varying covariates on the survival outcome within the time window (s,u] <cit.>. The predictors at landmark time s are usually a summary of the history. For example, one may use the mean or maximum of {𝒴_j(t), 0≤ t ≤ s} as a summary. The choice of the time-dependent covariate depends on the research problem and is discussed in <cit.>. The idea of generating a summarized time-dependent covariate can also be applied to JM. For simplicity, we use the instantaneous measurements 𝒴_i(s) as time-varying covariates in this work.
§.§ Using time-dependent Cox model for dynamic prediction
Before diving into the technical details, we highlight the difference between the time-dependent Cox method and the standard LM. The standard LM directly models the relationship between predictor variables at landmark time s and the corresponding residual survival time, with administrative censoring at u, through a survival model (e.g., the time-independent Cox model), disregarding all observations after s. Both s and u need to be prespecified, which makes the model less flexible. Moreover, using β̂_LM(s,u) to approximate the true effect of time-dependent covariates can be inaccurate when u is large. As u moves further from s, the effect estimated by LM attenuates <cit.> compared to the estimate from the time-dependent Cox model, which uses the observations after the landmark time. For these reasons, we do not consider LM with time-independent models such as <cit.> and <cit.> in our simulations and analyses. Instead, following the spirit of LM, we fit the time-dependent Cox model on the subjects still at risk at a given landmark time to evaluate the performance of the LM + time-dependent Cox model when a landmark time of interest does exist, in simulation 1 (section <ref>).
Using the time-dependent Cox model is a trade-off between JM and the standard LM. Similar to JM, it fully uses the available longitudinal variables and is flexible (without pre-specifying the landmark time s and administrative censoring time u), while keeping the simplicity of LM (without modeling the longitudinal process). We propose to construct a survival neural network under the time-dependent Cox model to incorporate the non-linear effect of the time-varying predictors, which can further take high-dimensional longitudinal predictors (e.g., images) as input.
§ DYNAMIC PREDICTION USING TIME-DEPENDENT COX SURVIVAL NEURAL NETWORK
§.§ Cox model with time-dependent covariates
The time-dependent Cox model takes the form
h_i(t) = h_0(t)exp[g_β,γ(X_i,𝒴_i(t))] = h_0(t)exp[β^TX_i+γ ^T𝒴_i(t)]
and estimate (β,γ) through the partial likelihood
pL = ∏_i^n[exp[g_β,γ(X_i,𝒴_i(T_i))]/∑_j:T_j≥ T_iexp[g_β,γ(X_j,𝒴_j(T_i))]-E_i(g_β,γ)]^δ_i.
With Efron's approximation for handling the tied events <cit.>, we have
E_i(g) = ∑_j1(j>i,T_j= T_i)/∑_j1(T_j= T_i)∑_j:T_j=T_iexp[g(X_j,𝒴_j(T_i))].
In the partial likelihood for the time-dependent Cox model, at each event time T_i, 𝒴(T_i) is required for all the subjects still at risk at T_i, which is not always available. Therefore, interpolation between longitudinal measurements is required. The last observation carried forward (LOCF) method is commonly used <cit.>, which assumes the values of the longitudinal predictors stay constant until the next measurement is available.
To predict π_j(u|s), similarly, we assume 𝒴_j(t) = 𝒴_j(s) for all s<t≤ u. Therefore, the predicted survival probability is
π̂_j(u|s)= exp{-(Ĥ_0(u)-Ĥ_0(s))exp[g_β̂,γ̂(X_j,𝒴_j(s))]},
where Ĥ_0(t)=∑_i=1^nI(T_i≤ t)δ_i/∑_j:T_j≥ T_iexp[g_β̂,γ̂(X_j,𝒴_j(T_i))]-E_i(g_β̂,γ̂)
is the Breslow estimator of the cumulative baseline hazard function <cit.> with Efron's approximation.
§.§ Cox survival neural network with time-dependent covariates
To establish a dynamic prediction model using high-dimensional image data, we consider the use of the neural network to augment the time-dependent Cox model. A neural network is an architecture that models the relationship between the input 𝐱∈ℝ^p_0 and output f(𝐱)∈ℝ^p_L+1 through the recursive layer structures
f(𝐱)=W_L×σ_𝐕_L(W_L-1×σ_𝐕_L-1(… W_1×σ_𝐕_1(W_0×𝐱)) ).
L is the total number of hidden layers (depth of the neural network) with p_k nodes (width) in each layer k (k=1,…,L). The activation function σ with the shift vector 𝐕_k is a nonlinear transformation that operates componentwise, σ_𝐕_k((y_1,…,y_p_k)^T) = (σ(y_1-v_1),…,σ(y_p_k-v_p_k))^T. The depth L, width 𝐩=(p_1,…,p_L), and activation function σ should be pre-specified before model fitting.
The weight matrix W_k ∈ℝ^p_k+1× p_k and the shift vector 𝐕_k∈ℝ^p_k are the parameters that are estimated by minimizing the loss function.
We apply the neural network to model the nonlinear effect of time-dependent covariates and to incorporate the high-dimensional longitudinal data. Instead of assuming the unknown risk score function g_0(X_i,𝒴_i(t)) takes the linear form g_β,γ(X_i,𝒴_i(t)) = β^TX_i+γ^T𝒴_i(t), we leave the form of g_0(X_i,𝒴_i(t)) unspecified and model it through a neural network g_θ(X_i,𝒴_i(t)) parameterized by the weight matrices and shift vectors of the neural network, θ = (𝐖,𝐕).
The time-dependent Cox survival neural network (Cox SNN) is a feed-forward neural network that models the effect of time-dependent covariates on the hazard function. The input is the covariates (X_i,𝒴_i(t)) and the output g_θ(X_i,𝒴_i(t)) is a single node with a linear activation function so that g_θ(X_i,𝒴_i(t)) ∈ℝ (Figure <ref>). θ is estimated by minimizing the negative log partial likelihood, θ̂= argmin_θ l(θ|g_θ), where
l(θ|g_θ) = -1/n∑_i=1^nδ_i[g_θ(X_i,𝒴_i(T_i))-log (∑_j:T_j≥ T_iexp{g_θ(X_j,𝒴_j(T_i))}-E_i(g_θ) )].
Following the prediction formula (<ref>) for the time-dependent Cox model, the predicted probability under the Cox SNN is given by
π̂_j(u|s)=exp{-(Ĥ_0(u)-Ĥ_0(s))exp[g_θ̂(X_j,𝒴_j(s))]},
where Ĥ_0(t)=∑_i=1^nI(T_i≤ t)δ_i/∑_j:T_j≥ T_iexp[g_θ̂(X_j,𝒴_j(T_i))]-E_i(g_θ̂). π̂_j can be updated when new information of subject j is available (Figure <ref>).
The minimizer of the loss (<ref>) is not unique. For a given minimizer θ̂ and any constant c, one can find θ such that g_θ = g_θ̂+c; such θ is also a minimizer since l(θ|g_θ) = l(θ̂|g_θ̂). However, a constant shift c of the estimated risk function does not change the predicted probability in (<ref>), because the factor exp(c) cancels between the Breslow estimator Ĥ_0 and the exponentiated risk score. Therefore, π̂_j(u|s) in (<ref>) is robust to a constant shift of the neural network output.
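To make the loss above concrete, the following is a minimal PyTorch sketch of the negative log partial likelihood for records stored in counting-process (start, stop] format. It uses the Breslow convention for ties rather than the Efron correction described above, averages over event records (a constant rescaling of the loss), and all names are illustrative rather than taken from our released implementation.

```python
import torch

def neg_log_partial_likelihood(g, start, stop, event):
    """Negative log partial likelihood for a time-dependent Cox model.

    g     : (n,) risk scores g_theta, one per (start, stop] record
    start : (n,) interval start times
    stop  : (n,) interval stop times (event or censoring time of the record)
    event : (n,) 1 if the record ends with an observed event, 0 otherwise
    Ties are handled with the Breslow convention (Efron correction omitted).
    """
    event = event.bool()
    event_times = stop[event]                     # observed event times
    log_risk_sums = []
    for t in event_times:
        # risk set at time t: all records whose interval (start, stop] contains t
        at_risk = (start < t) & (stop >= t)
        log_risk_sums.append(torch.logsumexp(g[at_risk], dim=0))
    log_risk_sums = torch.stack(log_risk_sums)
    return -(g[event] - log_risk_sums).mean()     # averaged over event records
```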
Besides the ability to model nonlinear effects, another benefit of using the neural network structure is that a pre-trained neural network that can process a specific data type, such as images or text, can be readily combined with the SNN (transfer learning). For example, ResNet50 <cit.> is a CNN for image classification whose weights have been trained on over one million images. Adding a pre-trained CNN on top of the SNN allows the SNN to take raw images as input, which is the approach we take in this study.
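As an illustration of this transfer-learning idea, the sketch below stacks a pretrained torchvision ResNet50 backbone on a small risk-score head. It is a generic example under the assumption of a recent torchvision release, not the exact architecture used in our analyses (which uses DeepSeeNet for the AREDS images); layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models  # assumes a recent torchvision with the weight-enum API

class ImageCoxSNN(nn.Module):
    """Pretrained CNN backbone + small head producing a scalar risk score g_theta."""

    def __init__(self, n_tabular=3, hidden=30, dropout=0.2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        n_feat = backbone.fc.in_features      # 2048 for ResNet50
        backbone.fc = nn.Identity()           # keep the learned image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(n_feat + n_tabular, hidden),
            nn.SELU(),
            nn.BatchNorm1d(hidden),
            nn.Dropout(dropout),
            nn.Linear(hidden, 1),             # linear output node: g_theta in R
        )

    def forward(self, image, tabular):
        feat = self.backbone(image)           # (batch, 2048)
        return self.head(torch.cat([feat, tabular], dim=1)).squeeze(-1)
```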
§ PROSPECTIVE ACCURACY METRICS
Methods for assessing the predictive performance of survival models concentrate either on calibration, i.e., how well the model predicts the observed data, or on discrimination, i.e., how well the model separates subjects who experience the event from those who do not. We consider both calibration and discrimination metrics for model performance evaluation under the time-dependent setting.
Similar to previous studies, we compare dynamic prediction models on a time window (t,t+Δ t], where the landmark time t and the length of the time window Δ t are pre-specified <cit.>. That is, for subjects in a separate test set who survived to time t and whose time-dependent covariates 𝒴_i(t) are collected up to t, we evaluate how well the predicted π̂_i(s|t) (t<s≤ t+Δ t) agrees with the observed data. The censoring-free probability G(t) = P(C > t) is used to account for censoring as a weight in the metrics.
§.§ Calibration metric: time-dependent Brier Score
The time-dependent Brier Score measures the mean squared error between the observed survival status and the predicted survival probability weighted by the inverse probability of censoring (IPCW) <cit.>. A lower Brier Score indicates a higher prediction accuracy. For a given landmark time t, the estimated Brier Score at time t+Δ t is
B̂S(t,Δ t;π̂) = 1/∑_i1(T_i>t)∑_i=1^n( 1(T_i>t)Ŵ_i(t,Δ t) {1(T_i>t+Δ t)-π̂_i(t+Δ t|t)}^2 ),
where Ŵ_i(t,Δ t) = {1(T_i>t+Δ t)/Ĝ(t+Δ t|t)+1(T_i≤ t+Δ t)δ_i/Ĝ(T_i^-|t)} is the IPCW weight and Ĝ(s|t) = Ĝ(s)/Ĝ(t) is the Kaplan-Meier estimate of the conditional censoring distribution.
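A plain NumPy sketch of this estimator is given below. It assumes the conditional censoring survival function Ĝ(·|t) has already been estimated (e.g., by a Kaplan-Meier fit on the censoring indicator) and is supplied as a vectorized callable, so only the weighting and averaging are shown; the function and argument names are illustrative.

```python
import numpy as np

def td_brier_score(T, delta, pi_hat, t, dt, G_cond):
    """IPCW time-dependent Brier Score BS(t, dt).

    T      : (n,) observed times;  delta : (n,) event indicators
    pi_hat : (n,) predicted P(T* > t + dt | T* > t) for each subject
    G_cond : vectorized callable, G_cond(s) ~ Ghat(s | t) = Ghat(s) / Ghat(t)
    """
    at_risk = T > t
    observed = (T > t + dt).astype(float)          # survival status at t + dt
    w = np.zeros_like(pi_hat, dtype=float)
    # event-free at t + dt: weight 1 / Ghat(t + dt | t)
    w[T > t + dt] = 1.0 / G_cond(t + dt)
    # event observed within (t, t + dt]: weight 1 / Ghat(T_i^- | t)
    idx = at_risk & (T <= t + dt) & (delta == 1)
    w[idx] = 1.0 / G_cond(T[idx])
    sq_err = (observed - pi_hat) ** 2
    return np.sum((w * sq_err)[at_risk]) / at_risk.sum()
```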
§.§ Discrimination metric: time-dependent AUC
The area under the receiver operating characteristic curve (AUC) measures the discrimination of the prediction model. It ranges from 0 to 1, with 0.5 indicating discrimination no better than random guessing and values further from 0.5 suggesting better discrimination. We use the cumulative sensitivity and dynamic specificity AUC (cdAUC) <cit.> to evaluate the discrimination performance of the models at different time points. Given a threshold b and predictor X, the cumulative sensitivity is defined as Se^C(b,Δ t)=P(X_i>b|T_i≤Δ t) while the dynamic specificity is Sp^D(b,Δ t)=P(X_i≤ b|T_i>Δ t). The term “cumulative" differentiates this sensitivity from the incident sensitivity Se^I(b,Δ t)=P(X_i>b|T_i=Δ t), which assesses the sensitivity for the population whose survival time exactly equals Δ t. With the cumulative sensitivity and the dynamic specificity, the corresponding cdAUC(t,Δ t) = P(X_i > X_j|t < T_i≤ t+Δ t,T_j> t+Δ t), i ≠ j. Specifically, for a given time interval (t,t+Δ t], the IPCW estimator of cdAUC is
cdÂÛĈ(t,Δ t) = ∑_i=1^n∑_j=1^n 1_(π̂_i(t+Δ t|t)<π̂_j(t+Δ t|t))δ_i1_(t<T_i≤ t+Δ t)1_(T_j> t+Δ t) W_i(t,Δ t)W_j(t,Δ t)/∑_i=1^n∑_j=1^n δ_i1_(t<T_i≤ t+Δ t)1_(T_j> t+Δ t)W_i(t,Δ t)W_j(t,Δ t).
It computes the IPCW weighted percentage of comparable subject pairs (i,j) whose predicted survival probabilities are consistent with their observed data for the given time interval (t, t+Δ t]. A comparable pair (i,j) consists of two subjects in which subject i experiences the event within the time interval (t, t + Δ t] and subject j is event-free by t+Δ t.
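The IPCW cdAUC estimator can be coded directly from its definition. The sketch below uses the same assumptions and weight convention as the Brier Score sketch above and compares all case-control pairs, so it scales quadratically with the test-set size.

```python
import numpy as np

def cd_auc(T, delta, pi_hat, t, dt, G_cond):
    """IPCW cumulative/dynamic AUC over the window (t, t + dt]."""
    at_risk = T > t
    # cases: event within (t, t + dt]; controls: event-free at t + dt
    case = at_risk & (T <= t + dt) & (delta == 1)
    ctrl = at_risk & (T > t + dt)
    w_case = 1.0 / G_cond(T[case])                      # weights of the cases
    w_ctrl = np.full(ctrl.sum(), 1.0 / G_cond(t + dt))  # weights of the controls
    # a pair is concordant when the case has the lower predicted survival probability
    concordant = (pi_hat[case][:, None] < pi_hat[ctrl][None, :]).astype(float)
    w_pair = w_case[:, None] * w_ctrl[None, :]
    return np.sum(concordant * w_pair) / np.sum(w_pair)
```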
§ NUMERICAL IMPLEMENTATION AND SIMULATIONS
§.§ Numerical implementation
For the time-dependent Cox SNN, we implemented the log partial likelihood function with the Efron tie approximation (<ref>) using TensorFlow <cit.>. It is implemented through matrix operations so the calculation is fast (details can be found in the Appendix). The PyTorch <cit.> and R-TensorFlow <cit.> versions can also be found at https://github.com/langzeng/tdCoxSNN. The neural network was optimized with the Adam optimizer <cit.>. We used the following survival neural network structure in all simulations and real data analyses: input layer → hidden layer → batch normalization layer → dropout layer → output layer. The batch normalization layer <cit.> accelerates the neural network training and the dropout layer <cit.> protects the neural network from over-fitting. Hyper-parameters were also fixed in all analyses: 30 nodes in the hidden layer, Scaled Exponential Linear Unit (SeLU) as the hidden layer activation function, batch size 50, epoch size 20, learning rate 0.01, and dropout rate 0.2.
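For concreteness, the sketch below assembles the network structure and hyper-parameters listed above in PyTorch. The input dimension and variable names are illustrative, and the tie-corrected partial likelihood loss itself is omitted (see the earlier loss sketch).

```python
import torch
import torch.nn as nn

def build_td_cox_snn(n_inputs):
    """input -> hidden (30, SeLU) -> batch norm -> dropout (0.2) -> scalar output."""
    return nn.Sequential(
        nn.Linear(n_inputs, 30),
        nn.SELU(),
        nn.BatchNorm1d(30),
        nn.Dropout(0.2),
        nn.Linear(30, 1),          # risk score g_theta
    )

model = build_td_cox_snn(n_inputs=5)   # e.g., four baseline covariates + one marker
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# training: 20 epochs, mini-batches of 50 records, minimizing the negative
# log partial likelihood sketched earlier
```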
§.§ Simulation
§.§.§ Simulation 1: Low dimensional predictors
We carried out simulation studies to empirically compare the dynamic prediction performance of the proposed time-dependent Cox SNN with the time-dependent Cox model and joint modeling in a low-dimensional setting. Data were generated from joint models.
For all simulations, for sample i, a one-dimensional longitudinal covariate 𝒴_i(t) was generated through 𝒴_i(t) = y_i(t)+ϵ, where y_i(t) is the true longitudinal trajectory over time and ϵ∼ N(0,0.3^2) represents measurement error. We considered the true value of the time-varying covariate to be given by
y_i(t) = β_0+β_1t+β_2t^2+b_i0+b_i1t+b_i2t^2 and 𝐛_i = (b_i0,b_i1,b_i2)^T∼ N(0,Σ_3× 3)
where β_0 = 3.2, β_1=-0.07. Σ denotes a 3 by 3 inter-subject variance matrix with Σ_11 = 1.44, Σ_22=0.6, and we assume the covariances Σ_ij (i≠ j) are zero in all simulations. (β_2, Σ_33) capture the non-linearity of the trajectory, and we considered the trajectory of the longitudinal measurement to be linear (β_2 = 0, Σ_33 = 0) or nonlinear (β_2 = 0.004, Σ_33 = 0.09). y_i(t) was measured regularly per time unit at t=0,1,2,…,14. The survival time T_i^* was obtained through a Weibull model h_i(t) =λρ t^ρ-1exp{g(X_i,y_i(t))} with ρ=1.4 and λ = 0.1. The censoring time C_i^* was generated from an exponential distribution exp(2/14). The observed survival time T_i = min(T^*_i,C_i^*) and the event indicator δ_i = 1(T^*_i≤ C_i^*) were then calculated. Longitudinal values 𝒴_i(t) with t≥ T_i were disregarded; only 𝒴_i(t) measured before T_i were kept as the observed longitudinal measurements for subject i. The baseline covariates x_k (k=1,2,3,4) were independently generated from the continuous uniform distribution on [-0.5, 1.5]. Models were fitted using the variables (x_1,x_2,x_3,x_4,𝒴_i(t)). We considered four different cases (see below; a code sketch of this data-generating procedure is given after the case list) with a linear or nonlinear risk function g(X,y(t)) and a linear or quadratic trend (in time) of the longitudinal measurement y(t). We added the intercept term -10 to make the censoring rate close to that of the AREDS data (around 80%) in each simulation.
* Case 1: g(X,y(t)) = x_1+2x_2+3x_3+4x_4+0.3y(t)-10
y(t)=(3.2+b_0)-(0.07+b_1)t
* Case 2: g(X,y(t)) = x_1+2x_2+3x_3+4x_4+0.3y(t)-10
y(t)=(3.2+b_0)-(0.07+b_1)t+(0.004+b_2)t^2
* Case 3: g(X,y(t)) = ({x_1^2x_2^3+log(x_3+1)+(0.3y(t)x_4+1)^1/3+exp(x_4/2)+0.3y(t)}^2/3)-10
y(t)=(3.2+b_0)-(0.07+b_1)t
* Case 4: g(X,y(t)) = ({x_1^2x_2^3+log(x_3+1)+(0.3y(t)x_4+1)^1/3+exp(x_4/2)+0.3y(t)}^2/3)-10
y(t)=(3.2+b_0)-(0.07+b_1)t+(0.004+b_2)t^2
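As referenced above, one way to generate event times under a hazard with a continuously time-varying covariate is to numerically invert the cumulative hazard. The sketch below does this for Case 1 using SciPy; the parameter values are those stated in the text, and the interpretation of exp(2/14) as a rate (mean 7) is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(0)
rho, lam = 1.4, 0.1                                      # Weibull shape and scale

def case1_g(x, yt):
    # Case 1 risk score: linear in x and y(t), with the -10 intercept shift
    return x[0] + 2 * x[1] + 3 * x[2] + 4 * x[3] + 0.3 * yt - 10

def simulate_subject(risk_fn, t_max=200.0):
    """Draw one true event time T* by solving H(T*) = -log(U)."""
    x = rng.uniform(-0.5, 1.5, size=4)                   # baseline covariates x1..x4
    b = rng.normal(0.0, np.sqrt([1.44, 0.6, 0.0]))       # random effects, linear trajectory
    y = lambda t: (3.2 + b[0]) - (0.07 + b[1]) * t       # true longitudinal trajectory
    hazard = lambda t: lam * rho * t ** (rho - 1) * np.exp(risk_fn(x, y(t)))
    H = lambda t: quad(hazard, 0.0, t)[0]                # cumulative hazard
    target = -np.log(rng.uniform())
    if H(t_max) < target:                                # no event within the horizon
        return t_max, x
    return brentq(lambda t: H(t) - target, 1e-8, t_max), x

T_star, x = simulate_subject(case1_g)
C = rng.exponential(14 / 2)              # censoring time, assuming exp(2/14) is a rate
T, delta = min(T_star, C), int(T_star <= C)
```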
The time-dependent Cox model was fitted with the R function {survival::coxph} <cit.>. For joint modeling, we used the R package {JMBayes} <cit.> and included the linear effect of time t in the longitudinal sub-model. The time-dependent Brier Score and time-dependent AUC were calculated with the R packages {pec} <cit.> and {timeROC} <cit.>, respectively.
In each setting, we performed 100 simulation runs. Models were fitted on the training set and the prediction metrics were evaluated over the separate test samples. We compared n_train= 500 and 1000 to evaluate the sample size effect on the performance of SNN, since a large sample size is usually required to train a neural network well. Separate test samples with n_test = 200 were generated in each run to evaluate the fitted models.
We set landmark time t=1 in all simulations. Comparisons were made across seven models including the landmark time-dependent Cox model (LM-CoxPH), the landmark time-dependent Cox SNN (LM-CoxSNN), the time-dependent Cox model (CoxPH), the time-dependent Cox SNN (CoxSNN), the Joint modeling (JM), the Kaplan-Meier (KM), and the true conditional survival curve (Truth). The LM-CoxPH and LM-CoxSNN were fitted among training samples still at risk at the landmark time. CoxPH and CoxSNN were fitted over all training subjects. For the longitudinal sub-model in JM, we included the main effect of time in the fixed-effects part and an intercept and a time term in the random-effect design matrix. Predictions using the KM and the true model represented the worst and the best prediction we could obtain. Calibration metric B̂Ŝ (t,Δ t) and discrimination metric ÂÛĈ(t,Δ t) for the seven methods were calculated at Δ t=1,2,3,4 from the landmark time.
§.§.§ Simulation 2: High-dimensional predictors
We also evaluated the performance of the time-dependent Cox SNN with high-dimensional predictors. The simulation mechanism is the same as simulation 1 in section <ref>. After the data were generated, at each visit time, we mapped the true risk score g(X,y(t)) to handwritten digit images from MNIST <cit.>. Specifically, we standardized g(X,y(t)) to [0,0.99] and rounded it to 2 decimal places. Then two handwritten digit images representing the tenths and hundredths digits were randomly sampled from the corresponding digit classes of MNIST. The two 28×28×1 images were treated as the observed predictor at time t for model training and prediction.
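The mapping from a risk score to a pair of digit images can be written compactly as follows. Here mnist_by_digit, a dictionary from digit class to an array of its 28×28 MNIST images, is an assumed pre-built lookup, and the scaling to [0,0.99] uses the minimum and maximum risk scores of the training set.

```python
import numpy as np

def risk_to_digit_images(g, g_min, g_max, mnist_by_digit, rng):
    """Map a risk score to two 28x28x1 MNIST images encoding its first two decimals."""
    z = 0.99 * (g - g_min) / (g_max - g_min)        # standardize to [0, 0.99]
    z = round(float(np.clip(z, 0.0, 0.99)), 2)      # keep two decimal places
    tenths, hundredths = int(z * 10) % 10, int(round(z * 100)) % 10
    images = []
    for d in (tenths, hundredths):
        pool = mnist_by_digit[d]                    # all MNIST images of digit class d
        images.append(pool[rng.integers(len(pool))][..., None])
    return images                                   # two 28x28x1 arrays
```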
To deal with the images, convolutional layers and max-pooling layers were added on top of the SNN structure introduced in section <ref>. Details can be found in the Appendix. The time-dependent Cox SNN (td-CoxSNN) was fitted directly using the longitudinal images. As a comparison, the baseline Cox SNN (Base-CoxSNN) was fitted on the baseline images only. The baseline Cox model (Oracle-BaseCoxPH) and time-dependent Cox model (Oracle-tdCoxPH) were fitted on the true g(X,y(t)) to represent the best performance that CoxSNN can achieve with image predictors.
* Case 5: g(X,y(t)) = x_1+2x_2+3x_3+4x_4+0.3y(t)-5
y(t)=(3.2+b_0)-(0.07+b_1)t
* Case 6: g(X,y(t)) = x_1+2x_2+3x_3+4x_4+0.3y(t)-5
y(t)=3.2+b_0
Two simulation cases were considered. Case 5 evaluates the performance of the proposed method with time-varying longitudinal high-dimensional predictors (images). Case 6 is a scenario without time-varying covariates, but the longitudinal images still vary since the mapping of risk scores to images was performed separately at each visit. Therefore, the only difference between Base-CoxSNN and td-CoxSNN is that td-CoxSNN sees more images. We performed 100 simulation runs for each case with n_train = 2,000 and 10,000. The prediction accuracies B̂Ŝ (t,Δ t) and ÂÛĈ(t,Δ t) were evaluated on separate test samples (n_test=200) at t = 1 and Δ t=1,2,3,4.
§.§ Simulation results
§.§.§ Simulation 1: Low dimensional predictors
Figure <ref> and Figure <ref> display the boxplots of cdAUC and BS at times 1,2,3,4 after the landmark time over 100 simulation runs for each case. The plots indicate that a model with better discrimination ability (higher cdAUC) tends to have better calibration ability (lower BS) as well. In general, CoxSNN is strongly competitive at the different time points after the landmark time. Under the complex settings of cases 3 and 4, CoxSNN outperformed CoxPH, and LM-CoxSNN outperformed LM-CoxPH. In the simpler cases 1 and 2, where the effects of predictors are linear, CoxSNN and LM-CoxSNN performed similarly to CoxPH and LM-CoxPH. This demonstrates that the time-dependent Cox SNN is able to learn nonlinear effects in complex settings while maintaining performance similar to the time-dependent Cox model when the effect is linear. Besides, JM performed the best when both the effect of the longitudinal predictors and the sub-model for the longitudinal process were correctly specified (case 1). In case 3, JM correctly modeled the longitudinal process but did not outperform the neural network models as it failed to capture the nonlinear effect of the predictors. Moreover, JM performed the worst in cases 2 and 4, when the longitudinal sub-model was incorrectly specified. This implies that the performance of joint modeling depends heavily on the correctness of the longitudinal sub-model.
We further evaluated the choice of training samples for the neural network fitting. In cases 1 and 2, the prediction accuracy of models fitted with all subjects (CoxPH and CoxSNN) is close to the accuracy of landmarking models fitted with those still at risk at the landmark time (LM-CoxPH and LM-CoxSNN). In settings where the effect of predictors is complex (cases 3 and 4), the landmarking methods demonstrated slightly better performance, suggesting that landmarking can potentially enhance prediction accuracy, particularly when there is a landmark time of interest. The landmark individuals can serve as an appropriate representation of the target population who survived to the landmark time.
Lastly, when n_train decreased from 1000 to 500, the performance of the SNN remained relatively stable. Note that when n_train = 500 with an 80% censoring rate, there are only about 100 observed events in the training data, suggesting that the SNN can perform well with moderate sample sizes.
Overall, our simulation shows that time-dependent Cox survival neural network can achieve satisfactory prediction performance across diverse settings.
§.§.§ Simulation 2: High-dimensional predictors
Figure <ref> presents the results of the simulation with longitudinal images as predictors in case 6. In this scenario, the risk score g(X,𝒴(t)) is constant over time while the longitudinal images mapped to the risk score are time-dependent (the information embedded in the images is time-independent). The oracle-CoxPH models use the true risk score as the predictor, which is time-independent, and therefore achieved the same prediction accuracy as predictions from the true survival curve. When using the images as predictors, td-CoxSNN outperformed baseline-CoxSNN, especially when the training sample size (n_train=2,000) is small. This demonstrates the advantage of td-CoxSNN over baseline-CoxSNN even when the longitudinal images are uninformative.
When the image embeddings are time-varying (case 5), the baseline models were less accurate than the time-dependent models, as the baseline models ignore the longitudinal measurements and hence fail to capture the effects of the time-varying predictors. Additionally, the baseline-CoxSNN may also suffer from seeing fewer images, as demonstrated above. The boxplots of cdAUC and BS for case 5 are provided in the appendix.
§ REAL DATA ANALYSIS
§.§ Application to AMD progression prediction
Table: Characteristics of the participants in the AREDS data and Cox regression results

| Characteristics | AREDS Data | Hazard Ratio | 95% CI | p-value |
|---|---|---|---|---|
| Subject-level variables | 4,335 subjects | | | |
| Baseline age, year (mean ± s.d.) | 69.3 ± 5.1 | 1.05 | [1.04, 1.07] | 0.001 |
| Female (N, %) | 2426 (56.0) | 1.03 | [0.91, 1.16] | 0.68 |
| Educational level at least high school (N, %) | 2814 (64.9) | 0.87 | [0.77, 0.98] | 0.02 |
| Baseline smoking status (N, %) | | | | |
| - Never smoked | 1942 (44.8) | | | |
| - Former smoker | 2059 (47.5) | 1.16 | [1.02, 1.31] | 0.02 |
| - Current smoker | 334 (7.8) | 1.97 | [1.59, 2.43] | 0.001 |
| Eye-level variables | 7,865 eyes | | | |
| Follow-up time*, year (mean ± s.d.) | 8.2 ± 3.3 | | | |
| Baseline manually extracted image features | | | | |
| - Maximum Drusen Size (mean ± s.d.) | 2.9 ± 1.3 | 1.19 | [1.07, 1.32] | 0.001 |
| - Pigmentary Abnormalities (N, %) | 1123 (14.3) | 1.77 | [1.55, 2.01] | 0.001 |
| - Soft Drusen (N, %) | 4016 (51.1) | 2.47 | [1.84, 3.32] | 0.001 |
| - Calcified Drusen (N, %) | 126 (1.6) | 1.97 | [1.55, 2.50] | 0.001 |
| - Reticular Drusen (N, %) | 131 (1.7) | 1.24 | [0.99, 1.55] | 0.06 |
| - Drusen area (mean ± s.d.) | 2.6 ± 2.4 | 1.48 | [1.42, 1.55] | 0.001 |
| - Increased Pigment (N, %) | 2077 (26.4) | 1.84 | [1.62, 2.09] | 0.001 |

* For eyes which developed late-AMD, this is the time to diagnosis.
We applied the proposed time-dependent Cox SNN to the AREDS data and built a dynamic prediction model using longitudinal fundus images. In the AREDS data, at each follow-up for each eye, there is a fundus image along with multiple image features manually graded by a medical center, for example, the size of the abnormal area (drusen) in the fundus image. After removing eyes with late-AMD at baseline and images with low quality (i.e., image features not gradable or missing), our working data included 53,076 eye-level observations from 7,865 eyes of 4,335 subjects. The median follow-up time is 9.9 years, and 20.5% of the eyes progressed to late-AMD by the end of the study (Table <ref>). From the prediction perspective, we excluded observations measured after the diagnosis of late-AMD and conducted an eye-level analysis without considering the correlation between the two eyes of the same individual.
We performed a 5-fold cross-validation as follows. The data were split into five folds; models were trained on four folds (training set) and evaluated on the remaining fold (test set). The split was done at the subject level to ensure the two eyes from the same subject were included in the same fold. To fit the time-dependent Cox model and the time-dependent Cox SNN, longitudinal data were formatted into multiple intervals [tstart, tstop), where each interval represents the time window between one visit and the next (Figure <ref>). To evaluate the prediction accuracy, because the fundus images were taken at different time points across subjects, we chose individualized landmark times t_i on the test set as follows. For each subject i in the test set, t_i was chosen randomly from their longitudinal measurement time points prior to the last observed time T_i. This mimics the real-world scenario in which a new subject i arrives at time t_i and we use their measurements up to t_i to predict their survival at t_i+Δ t. Prediction metrics (BS and cdAUC) were evaluated on the Δ t = 1,…, 7 windows across all subjects (Figure <ref>).
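To illustrate the interval format used for model fitting, below is a toy example of how one eye's longitudinal records are expanded into (tstart, tstop] rows under LOCF; the column names are illustrative.

```python
import pandas as pd

# One eye with visits at years 0, 0.5, and 1.0, progressing to late-AMD at year 1.6.
# Each row carries the covariates measured at tstart, carried forward to tstop (LOCF).
records = pd.DataFrame({
    "eye_id":      [101, 101, 101],
    "tstart":      [0.0, 0.5, 1.0],
    "tstop":       [0.5, 1.0, 1.6],
    "event":       [0,   0,   1],     # late-AMD diagnosed at the end of the last interval
    "drusen_area": [1.2, 1.9, 2.6],   # example longitudinal image feature
    "age_bl":      [68,  68,  68],    # baseline covariate repeated on every row
})
```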
We compared three dynamic prediction approaches: joint modeling (JM), the time-dependent Cox model (CoxPH), and the time-dependent Cox SNN (CoxSNN). The predictors include baseline demographic variables (age at enrollment, educational level, and smoking status) and seven longitudinal manually graded image features, which were found significant in a multivariable baseline Cox regression model (Table <ref>). We also fitted the time-dependent Cox SNN directly on the longitudinal fundus images, where the images were modeled by a CNN. Specifically, on top of the survival neural network, we added DeepSeeNet, a well-trained CNN for grading late-AMD using fundus images <cit.>. The 256 nodes from the DeepSeeNet hidden layer and the baseline demographic nodes together formed the input layer of the time-dependent Cox SNN. The details of the survival neural network structure and hyper-parameters can be found in section <ref>.
Figure <ref> shows that the Brier Scores are lowest for the time-dependent Cox SNN either using the seven manually graded image features or directly using the fundus images across all prediction time points. As for the cdAUC, the time-dependent Cox SNN using the longitudinal images is comparable to the three dynamic prediction models fitted on the seven manually graded image features. In practice, grading the image features is labor intensive and requires special medical image expertise. This prediction model (time-dependent Cox SNN with fundus images) can directly handle the raw colored fundus images without any further input from clinicians. Overall, joint modeling performed the worst in terms of the two accuracy metrics except for the long-term prediction (Δ t = 7).
For the interpretation of the SNN model fitted with longitudinal images, we generated saliency maps to visualize the regions with the most significant impact on the risk score for a given subject. Figure <ref> displays a fundus image from a participant's left eye at year 2.3 (since entering the AREDS study), where the eye showed multiple large drusen (i.e., the yellow spots). Our model predicts that this eye will develop late-AMD with a high probability (60.1% within Δ t=2 years and 70.5% within Δ t=2.8 years), and in truth this eye developed the disease 2.7 years later. The saliency map successfully detects the pathological areas that are predictive of the disease progression.
After completing the model training, it is also possible to identify individuals at higher risk of developing the disease through the estimated risk score ĝ, the output of tdCoxSNN. For illustrative purposes, we used the training and test data from the first cross-validation split. The baseline risk score ĝ for the 1,582 eyes in the test data was estimated using their baseline fundus images and demographics. We identified two subgroups from a Gaussian mixture model and compared their survival curves (Figure <ref>). During the follow-up period, a majority of the individuals in the high-risk group developed the disease, whereas those in the low-risk group maintained better health. We further compared the seven baseline manually extracted image features between the two groups (Table <ref>).
The baseline fundus images in the high-risk group exhibited a greater number of higher-risk image features (p<0.001 for each feature), which are significantly associated with late-AMD (Table <ref>). The difference between the two groups in subject-level characteristics (excluding those with two eyes of differing risk) was small. This suggests that the identification of subgroups was primarily based on the baseline fundus image.
§.§ Application to PBC2 data
We also applied the proposed method (a continuous-time model) to a publicly available dataset with low-dimensional longitudinal predictors and compared it with three discrete-time deep learning models. The data were collected between 1974 and 1984 for research on primary biliary cirrhosis (PBC) disease <cit.>. The dataset consists of 312 subjects (1,912 visits) with the event of interest being time to liver transplant and a censoring rate of 55%. The predictors include 12 longitudinal variables (7 continuous lab tests, such as albumin, and 5 categorical, such as liver enlargement) and 3 baseline variables (gender, age at start of study, and treatment indicator).
We followed the data processing steps from <cit.> for a fair comparison. The LM-tdCoxSNN (fitted for each landmark time) and tdCoxSNN (using all subjects) were fitted on the discrete-time scale. The prediction accuracies for the three discrete-time deep learning models were obtained directly from <cit.>. Each entry represents an average of accuracy calculated at 2,4,6, and 8 months after the landmark times.
Table <ref> presents the prediction accuracy of the five methods across different landmark times. There is no universally optimal deep learning method, and our method is comparable to the existing discrete-time methods when applied to the discrete-time data. We observed that tdCoxSNN outperformed LM-tdCoxSNN in terms of both BS and dynamic C-Index, suggesting that the proposed method could benefit from retaining more samples during training, especially given the relatively small total sample size (1,912 visits in total).
§ DISCUSSION
We combined the time-dependent Cox model with a survival neural network to establish a dynamic prediction model on a continuous time scale. The proposed approach not only provides a powerful tool to model the non-linear effect of predictors on the risk but also allows users to directly incorporate longitudinal high-dimensional features such as images without modeling the longitudinal process. Due to the neural network nature of the SNN, existing neural networks can be added on top of the time-dependent Cox SNN to take advantage of well-developed deep learning structures for complex data, for example, RNNs for sequential data <cit.> and CNNs for images <cit.>. With the availability of more and more high-dimensional biomarkers with non-linear effects and unstructured longitudinal trends, our approach makes it possible to build dynamic prediction models using complex longitudinal biomarkers (e.g., MRI, metabolomics data) in future research.
A limitation of the proposed method is the Last Observation Carried Forward (LOCF) assumption between visits, which may not accurately reflect reality. One potential direction for future research involves integrating joint modeling with machine learning techniques, although this could be computationally demanding. Additionally, our method was only compared to a limited number of discrete-time machine learning dynamic prediction approaches using a single dataset (section <ref>). Further efforts are necessary to thoroughly benchmark these methods across various settings.
Compared with discrete-time dynamic prediction models <cit.>, our model does not require the selection of time intervals for discretization, which makes it easier to process the longitudinal data for model fitting. Our model achieves satisfactory prediction performance in both the simulations and the real data analyses. It is worth noting that the same SNN structure and hyper-parameters introduced in section <ref> worked well across all analyses. The training procedure with 20 epochs makes fitting the SNN very fast. Additional tuning of the hyper-parameters may further improve the prediction accuracy of the model.
In this work, we used the saliency map to help interpret the fitted SNN model. One alternative solution for model interpretation is to use a partially linear Cox model where the risk score consists of a parametric component for predictors of interpretation interest and a nonparametric component modeled through the neural network. <cit.> proved the semiparametric efficiency of the parametric estimator which allows the method to make inferences along with using the SNN model for nuisance covariates. For future research, one may consider the partially linear structure in the time-dependent Cox SNN to improve model interpretability while maintaining the flexibility for modeling complex non-linear effects.
arXiv: http://arxiv.org/abs/2307.04449v1 | published 2023-07-10
Title: Graph Convolutional Networks for Simulating Multi-phase Flow and Transport in Porous Media
Authors: Jiamin Jiang, Bo Guo
Primary category: physics.comp-ph | Categories: physics.comp-ph, cs.LG
Corresponding author: [email protected]
Affiliations: Chevron Technical Center; Hydrology and Atmospheric Sciences, The University of Arizona
Numerical simulation of multi-phase fluid dynamics in porous media is critical for many subsurface applications. Data-driven surrogate modeling provides computationally inexpensive alternatives to high-fidelity numerical simulators. While the commonly used convolutional neural networks (CNNs) are powerful in approximating partial differential equation solutions, it remains challenging for CNNs to handle irregular and unstructured simulation meshes. However, subsurface simulation models often involve unstructured meshes with complex mesh geometries, which limits the application of CNNs. To address this challenge, here we construct surrogate models based on Graph Convolutional Networks (GCNs) to approximate the spatial-temporal solutions of multi-phase flow and transport processes. We propose a new GCN architecture suited to the hyperbolic character of the coupled PDE system, to better capture the saturation dynamics. Results of 2D heterogeneous test cases show that our surrogates predict the evolutions of the pressure and saturation states with high accuracy, and the predicted rollouts remain stable for multiple timesteps. Moreover, the GCN-based models generalize well to irregular domain geometries and unstructured meshes that are unseen in the training dataset.
§ INTRODUCTION
Dynamics of multiple fluid phases in porous media are critical for many applications in Earth's subsurface, including oil and gas recovery, groundwater remediation, geological CO_2 sequestration, and subsurface hydrogen storage. Numerical simulations play an increasingly important role in understanding, quantifying, and controlling these multi-phase flow processes. Predicting the evolution of subsurface fluid dynamics requires solving partial differential equations (PDEs) governing the multi-phase flow and transport processes. These PDEs are often highly nonlinear and exhibit an intricate mixture of elliptic and hyperbolic characteristics, posing challenges to numerical methods. Moreover, significant uncertainties are present in the model parameters due to data scarcity in the subsurface. As a result, many model simulation runs (e.g., thousands) are required to quantify the uncertainties propagated from the parameters to the predictions. Therefore, computationally efficient simulation techniques are critical for subsurface applications.
The deep learning revolution (LeCun et al. 2015; Krizhevsky et al. 2017) has dramatically changed scientific fields such as computer vision and natural language processing. More recently, deep learning algorithms have been extended towards constructing data-driven surrogate models to approximate the solutions of PDEs, particularly in the context of fluid dynamics (Guo et al. 2016; Kutz 2017; Long et al. 2018; Bar-Sinai et al. 2019; Santos et al. 2020; Li et al. 2020; Lu et al. 2021; Vinuesa and Brunton 2022). Compared to high-fidelity numerical simulators, a learned simulator can provide much faster predictions, especially for high-dimensional nonlinear systems.
A number of studies have applied image-based approaches and snapshots of simulation data over a spatially discretized input domain for surrogate modeling of subsurface flow and transport problems. Most of these works leverage convolutional neural networks (CNNs) to learn the nonlinear mappings from the input properties (e.g., permeability) to the output states (pressure and saturation) on regular Cartesian meshes (Mo et al. 2019; Tang et al. 2020; Wang and Lin 2020; Wen et al. 2021; Zhang et al. 2021; Jiang et al. 2021; Yan et al. 2022; Maldonado-Cruz and Pyrcz 2022). While CNNs are powerful in approximating PDE solutions, they are restricted to a specific discretization of the physical domain in which they are trained. Due to the inherent limitations of standard convolution operations, it remains challenging for CNNs to handle irregular and unstructured simulation meshes. However, driven by the need to accurately characterize complex geological features and heterogeneity, subsurface simulation models often involve corner-point and unstructured meshes with skewed and degenerate mesh geometries. These complexities limit the application of CNN-based models for subsurface problems. Note that Maucec and Jalali (2022) recently applied the interaction networks (Battaglia et al. 2016) for surrogate modeling of a two-phase incompressible flow problem, but their surrogate leads to large prediction errors of the pressure field.
Graph Neural Networks (GNNs) have successfully been employed to learn the dynamic evolutions of PDEs, under mesh-based simulation frameworks (Pfaff et al. 2020; Belbute-Peres et al. 2020; Iakovlev et al. 2020; Chen et al. 2021; Brandstetter et al. 2022; Pilva and Zareei 2022). In contrast to CNNs, GNNs naturally enable operating on unstructured meshes with complex domain boundaries. A simulation mesh can be viewed as a graph composed of nodes, and a set of edges representing the connectivity between the nodes. The key idea of GNNs is to aggregate and propagate the local information of system states from their neighborhoods into node representations, through multiple message passing layers (Kipf and Welling 2016; Gilmer et al. 2017).
In the present work, we apply Graph Convolutional Networks (GCNs) to learn surrogate models for predicting the spatial-temporal solutions of multi-phase flow and transport in porous media. We separately design two GCN architectures that are suited to the elliptic and hyperbolic characteristics of the coupled PDE system, to better capture the pressure and saturation dynamics. The GCN-based models are trained by supervising on the per-node output states. We evaluate the prediction performance of the trained surrogates using 2D heterogeneous cases. The results show that our surrogates predict the dynamic evolutions with high accuracy, and the predicted rollouts remain stable for multiple timesteps. Moreover, our GCN models generalize well to irregular domain geometries and unstructured meshes that are not present in the training dataset.
§ MATHEMATICAL MODEL AND DISCRETIZATION
§.§ Immiscible multi-phase flow in porous media
We consider compressible and immiscible flow and transport in porous media with n_p number of phases. The mass-conservation equation for phase l (l ∈{ 1,...,n_p }) can be written as
∂/∂ t ( ϕρ_l s_l ) + ∇· (ρ_lv_l ) - ρ_l q_l = 0,
where t is time. ϕ is rock porosity. q_l is the volumetric injection or pumping rate of wells (source or sink term). ρ_l is phase density. s_l is phase saturation, which is constrained by
∑_l s_l = 1,
The Darcy phase velocity, v_l, is expressed as
v_l = -k λ_l ( ∇ p_l - ρ_l g ∇ z ).
where k is rock permeability. p_l is phase pressure. g is gravitational acceleration and z is depth (assuming positive downward). λ_l = k_rl/μ_l is phase mobility, where k_rl and μ_l are relative permeability and fluid viscosity, respectively.
For oil-water flow that only involves two fluid phases, Eq. (<ref>) can be simplified to
∂/∂ t ( ϕρ_o s_o ) + ∇· ( ρ_ov_o ) - ρ_o q_o = 0,
∂/∂ t ( ϕρ_w s_w ) + ∇· ( ρ_wv_w ) - ρ_w q_w = 0,
with the saturation constraint as s_o + s_w - 1 = 0.
§.§ Fully-implicit discretization
To solve the PDE system in Eq. (<ref>), we apply a finite volume method that discretizes the simulation domain into a mesh consisting of n_b cells, together with the fully-implicit scheme for the time discretization
| Ω_i |/Δ t ( ( ϕ_i ρ_l,i s_l,i )^n+1 - ( ϕ_i ρ_l,i s_l,i )^n ) - ∑_j∈ adj(i) ( ρ_l,ijυ_l,ij )^n+1 - Q_l,i^n+1 = 0,
where i ∈{ 1,...,n_b } is cell index, | Ω_i | is cell volume, (ij) corresponds to the interface between cells i and j. Superscripts represent timesteps, and Δ t is timestep size.
The discrete phase flux based on the two-point flux approximation can be written as
υ_l,ij = T_ijλ_l,ijΔΦ_l,ij,
where ΔΦ_l,ij = Δ p_l,ij - g_l,ij is the phase-potential difference with the discrete weights g_l,ij = ρ_l,ij g Δ z_ij. The phase mobility λ_l,ij is evaluated using the Phase-Potential Upwinding (PPU) scheme (Sammon 1988; Brenier and Jaffré 1991). In PPU, the mobility of each phase is treated separately according to the sign of the phase-potential difference. The upwinding criterion is given as
λ_l,ij = λ_l(s_i) if ΔΦ_l,ij≥ 0, and λ_l,ij = λ_l(s_j) otherwise,
where s_i = { s_l,i}_l ∈{ 1,...,n_p } denotes the saturations of cell i.
The total face transmissibility T_ij combines the two half-transmissibilities as half of their harmonic average
T_ij = T_i T_j/T_i + T_j , T_i = k_i A_ij/d_i.
where A_ij denotes the interface area, k_i is the permeability of cell i, and d_i is the length from the cell centroid to the interface.
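A small sketch of how the discrete phase flux across one interface is assembled from these pieces (harmonic face transmissibility plus phase-potential upwinding) is given below; the variable names are illustrative.

```python
def half_transmissibility(k, area, dist):
    """T_i = k_i * A_ij / d_i for one cell adjacent to the interface."""
    return k * area / dist

def face_flux(p_i, p_j, z_i, z_j, rho_ij, mob_i, mob_j, T_i, T_j, g=9.81):
    """Discrete phase flux v_l,ij = T_ij * lambda_l,ij * dPhi_l,ij with PPU upwinding."""
    T_ij = T_i * T_j / (T_i + T_j)                  # harmonic combination of half-transmissibilities
    dphi = (p_i - p_j) - rho_ij * g * (z_i - z_j)   # phase-potential difference
    mob = mob_i if dphi >= 0.0 else mob_j           # phase-potential upwinding (PPU)
    return T_ij * mob * dphi
```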
In the finite volume formulation, the discrete source (or sink) term for a mesh cell containing a well (referred to as well cell) is written as
Q_l,i = WI_i ( ρ_lλ_l )_i ( p_l - p^W )_i,
which represents the well flux for phase l in cell i. p_l,i is well-cell pressure, p^W_i is wellbore pressure, and WI_i is well index.
The discretized nonlinear system, written in a residual format, has the following form
ℛ(u^n+1) = 0
where u represents the state variables (pressure and saturation) of mesh cells. The nonlinear system is often solved using the Newton method, which performs multiple iterations until convergence. For each timestep, with the solution u^n, and a chosen timestep size Δ t, the new state u^n+1 is obtained.
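For reference, a bare-bones Newton loop for advancing one timestep of such a residual system could look like the following; the residual and Jacobian callables are assumed to be supplied by the discretization, and globalization strategies (damping, line search, timestep cuts) are omitted.

```python
import numpy as np

def newton_step(residual, jacobian, u_old, tol=1e-6, max_iter=20):
    """Advance one timestep by solving R(u^{n+1}) = 0 starting from u^n."""
    u = u_old.copy()
    for it in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:            # converged
            return u, it
        du = np.linalg.solve(jacobian(u), -r)  # Newton update direction
        u = u + du
    raise RuntimeError("Newton did not converge")
```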
§ SURROGATE MODELS
A simulator ℍ maps the current state of mesh cells to the next timestep state. We denote a simulated rollout trajectory as ( u^0, u^1, ..., u^n_t ), which is computed iteratively by u^n+1 = ℍ ( u^n ) for every timestep.
The goal of our surrogate learning task is to replace the computationally expensive high-fidelity simulator with surrogate simulators that predict the next state
u^n+1 ≈ û^n+1 = ℕ ( u^n; Θ ),
where ℕ is a next-step prediction model based on GNN, whose parameters Θ can be optimized for some training objective. û^n+1 indicates the predicted state from the surrogate model. Given the initial state u^0, ℕ ( · ; Θ ) can rapidly produce a rollout trajectory of states ( u^0, û^1, ..., û^n_t ), where n_t is the number of timesteps.
The coupled multi-phase system (<ref>) has an intricate mixture of elliptic and hyperbolic characteristics. It is beneficial to employ specialized GNN architectures suited to these specific characteristics. Therefore, in the present work, we separately design and train two models that compute the solutions of pressure and saturation in a sequential manner as
p^n+1 = ℕ_p ( p^n, s^n; Θ_p ),
s^n+1 = ℕ_s ( p^n+1, s^n; Θ_s ),
where ℕ_p and ℕ_s represent the pressure and saturation models, respectively. At each time step, the saturation model takes as input the pressure predicted by the pressure model.
The above process can be written using a compact operator as
[ p^n+1, s^n+1 ] = ℕ_s ∘ℕ_p ( p^n, s^n; Θ_p, Θ_s ).
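In practice, the rollout alternates the two surrogates at every step. A minimal sketch is given below, with `net_p` and `net_s` standing for the trained pressure and saturation models and graph-connectivity arguments omitted for brevity.

```python
import torch

@torch.no_grad()
def rollout(net_p, net_s, p0, s0, n_steps):
    """Autoregressive rollout of the sequential pressure/saturation surrogates."""
    p, s = p0, s0
    trajectory = [(p, s)]
    for _ in range(n_steps):
        p = net_p(p, s)   # pressure model N_p consumes the previous state
        s = net_s(p, s)   # saturation model N_s consumes the freshly predicted pressure
        trajectory.append((p, s))
    return trajectory
```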
§ GRAPH NEURAL NETWORKS
We leverage the power of GNNs to construct data-driven surrogate simulators to approximate the PDE solutions (Equation (<ref>)). GNNs provide a flexible and efficient way to operate over data that are structured as graphs, naturally fitting mesh-based simulations (Pfaff et al. 2020; Pilva and Zareei 2022).
Let G = (X, E) be the graph representation (Fig. <ref>) of a simulation mesh, with nodes X (blue dots), where x_i denotes the cell centroid, and undirected edges E (red line segments), where e_ij connects the neighboring cells at x_i and x_j. 𝒩(i) is the set of adjacent nodes around node i. We further denote the node and edge features by u_i and e_ij, respectively.
§.§ Message Passing Framework
A GNN-based model consists of a stack of neural network layers, each aiming to aggregate local neighborhood information, i.e., features of neighbors, around each node and then passes this aggregated information on to the next layer (Kipf and Welling 2016). The fundamental operation in GNNs is the message-passing procedure, which updates the feature vector of each node based on the features of its neighboring nodes. The message-passing rule is generally formulated as (Gilmer et al. 2017)
u'_i = γ ( u_i, ⊕_j∈𝒩(i) ψ ( u_i, u_j, e_ij ) ),
where ⊕ denotes a differentiable, permutation-invariant function (e.g., summation, mean, or maximum), and γ and ψ are differentiable neural networks such as MultiLayer Perceptrons (MLPs). Each subsequent message-passing layer contains a separate set of network parameters, and operates on the output u'_i of the previous layer.
In our work, we consider weighted graphs and employ the GCN operator, GraphConv, from Morris et al. (2019). At layer m, the new node features are updated as
u^(m+1)_i = σ ( W_1 u^(m)_i + W_2 ∑_j∈𝒩(i) w_ij·u^(m)_j ),
where W_1 and W_2 are parameter matrices, w_ij denotes the edge weight, and σ denotes a non-linear activation function, e.g., ReLU or Tanh.
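For concreteness, a plain-PyTorch version of this weighted update (sum aggregation, ReLU) might look as follows; `edge_index` lists directed edges as (source, destination) pairs.

```python
import torch
import torch.nn as nn

class WeightedGraphConv(nn.Module):
    """u_i' = ReLU(W1 u_i + W2 sum_j w_ij u_j), cf. the GraphConv update above."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)              # W1
        self.lin_nbr = nn.Linear(in_dim, out_dim, bias=False)   # W2

    def forward(self, x, edge_index, edge_weight):
        src, dst = edge_index                                    # shape [2, num_edges]
        msg = x[src] * edge_weight.unsqueeze(-1)                 # weighted neighbour features
        agg = torch.zeros_like(x).index_add_(0, dst, msg)        # sum over neighbours
        return torch.relu(self.lin_self(x) + self.lin_nbr(agg))
```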
We additionally utilize the edge convolution operator, EdgeConv, from Wang et al. (2019). EdgeConv exploits local geometric structures by constructing a local graph and applying convolution operations on the edges connecting neighboring pairs of nodes. The layer output can be computed by
u^(m+1)_i = max_j∈𝒩(i) Ψ ( u^(m)_i, u^(m)_j - u^(m)_i ),
where Ψ denotes an MLP. As can be seen, the max aggregation operation is used on the edge features associated with all the edges emanating from a node.
§ MODEL ARCHITECTURES
In this section, we present the detailed surrogate models which can predict the next-step dynamic states of the coupled PDE system. Our GCN models have an Encoder-Processor-Decoder structure. Schematic of a general GCN model architecture is plotted in Fig. <ref>. The node features are first encoded into latent vectors of size n_H. The input features u_i^n of mesh node i for each timestep contain the dynamic variables (pressure and saturation), permeability, and pore volume. A one-hot vector indicating node type (reservoir, production, and injection nodes), and well index are also added. We assign the transmissibility T_ij of each cell interface as edge weight. Each feature is scaled individually to [0, 1] using the min-max normalization method. The Decoder extracts one output state (p^n+1 or s^n+1) from the latent node features after the final processing layer. The Encoder and Decoder are two-layer MLPs with ReLU nonlinearities except for the output layer of the Decoder, after which we do not apply any nonlinearity.
The Processor of the pressure model ℕ_p is constructed by stacking 7 identical GraphConv layers with the mean aggregation operation and ReLU nonlinearities, to obtain a sequence of updated latent features. For the ℕ_s model, we propose a combined architecture (3 EdgeConv followed by 5 GraphConv layers with max aggregation), which is found to be quite effective for capturing the hyperbolic (saturation) solution. The Tanh activation function is applied. The sizes of hidden units for ℕ_p and ℕ_s are n_Hp = 32 and n_Hs = 128, respectively.
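A sketch of the saturation model's Encoder-Processor-Decoder layout is given below, assuming PyTorch Geometric's GraphConv and EdgeConv operators with their usual interfaces; the layer counts and hidden size follow the description above, while other details (initialization, feature scaling) are omitted.

```python
import torch.nn as nn
from torch_geometric.nn import GraphConv, EdgeConv

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(), nn.Linear(d_out, d_out))

class SaturationGCN(nn.Module):
    """Encoder -> 3 EdgeConv + 5 GraphConv (max aggregation) -> Decoder."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.encoder = mlp(in_dim, hidden)
        self.edge_convs = nn.ModuleList(
            [EdgeConv(mlp(2 * hidden, hidden), aggr='max') for _ in range(3)])
        self.graph_convs = nn.ModuleList(
            [GraphConv(hidden, hidden, aggr='max') for _ in range(5)])
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))   # next-step saturation
        self.act = nn.Tanh()

    def forward(self, x, edge_index, edge_weight):
        h = self.encoder(x)
        for conv in self.edge_convs:
            h = self.act(conv(h, edge_index))                 # local geometric features
        for conv in self.graph_convs:
            h = self.act(conv(h, edge_index, edge_weight))    # transmissibility-weighted
        return self.decoder(h)
```

The pressure model ℕ_p would follow the same pattern with 7 GraphConv layers (mean aggregation, ReLU nonlinearities) and a hidden size of 32, as described above.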
§ TRAINING PROCEDURE
We train the GCN models using the dynamic state pairs ( u^n; u^n+1 ) from n_Y simulated rollout trajectories. We employ a mean squared error loss between predictions û_y^n and their corresponding ground truth values u_y^n (simulator reference). The L_2 loss function is minimized through
Θ^* = argmin_Θ 1/(n_Y n_t) ∑_y=1^n_Y ∑_n=1^n_t ‖ û_y^n - u_y^n ‖_2^2,
where n_t is the number of timesteps, and u_y^n denotes either pressure or saturation of every mesh node, at time t_n, for training sample y.
Modeling a complex time-dependent PDE system requires the model to mitigate error accumulation over long rollout trajectories (Sanchez-Gonzalez et al. 2020). Because we only train our surrogates on ground-truth one-step data, we corrupt the input saturation states with normal noise N_s ( 0, σ_s = 0.02 ). In this way, the rollouts of multiple timesteps from trained models become robust to their own noisy, previous predictions as input.
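A compact sketch of this training step is shown below; the saturation column index `SAT_COL`, the batch object (holding node features, targets, and graph connectivity), and the optimizer (e.g., Adam with learning rate 1e-4, as used in the experiments below) are assumptions of the example.

```python
import torch

SAT_COL = 1  # assumed index of the saturation feature in the node-feature matrix

def train_step(model, batch, optimizer, noise_std=0.02):
    """One next-step training step with noise injected into the input saturation."""
    x, y = batch.x.clone(), batch.y
    x[:, SAT_COL] += noise_std * torch.randn_like(x[:, SAT_COL])   # corrupt saturation
    pred = model(x, batch.edge_index, batch.edge_weight)
    loss = torch.nn.functional.mse_loss(pred, y)                   # L2 training objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```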
§ SURROGATE MODEL EVALUATIONS
We explore the prediction performance of the surrogate models and their generalization capabilities on out-of-training domain shapes and meshes. As an example, we consider 2D reservoir models in the x-y domain containing two wells (one injector and one producer) that operate under constant bottom-hole pressure (BHP). No-flow boundary condition is specified at the reservoir boundaries. The set-up of the base model is summarized in Table <ref>. Quadratic relative permeabilities are used. Capillary pressure is neglected. Total simulation time is 100 days, with a number of 20 timesteps.
There are 160 high-fidelity simulation runs as training data samples with random well locations and rock properties. The realizations of heterogeneous permeability and porosity fields are generated using a Gaussian distribution. The surrogate models are trained on a NVIDIA Tesla V100 GPU using the Adam optimizer (Kingma and Ba 2014) with learning rate 1e-4. The training loss (MSE) curves are plotted in Fig. <ref>.
It takes around 2 and 4 hours to train the pressure and saturation models, respectively. Note that the training times can be reduced by optimizing the hyperparameters of GCNs, and the learning rate schedule of the optimizer. Moreover, the large numbers of training epochs currently used are actually not necessary to reach reasonably low prediction errors. The trained models can predict a rollout trajectory in 0.1 seconds, achieving a significant reduction of computational time compared with the high-fidelity simulator, which requires about 22 seconds for a simulation run.
§.§ Regular Cartesian mesh
We first present the predictions of three representative testing samples on a regular 60 × 60 Cartesian mesh. The rock property fields of the three cases are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>, respectively.
We only compare the solution profiles (pressure and water saturation) at the end of the simulation between the surrogate (prediction) and high-fidelity (ground truth) simulators, because the solutions at the final time of a rollout trajectory should exhibit the largest accumulated errors. The pressure and water saturation profiles of the three cases are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>, respectively. As can be seen, the GCN-based surrogate models accurately capture the evolutions of the pressure and saturation states, with very low mean errors in all the three cases. It is important to note that even though our models were trained on the next-step predictions, the rollouts remain stable for multiple steps.
We can also see that the pressure model is capable of providing physically smooth pressure solutions. The saturation fields are strongly impacted by the well locations and heterogeneous rock properties. The saturation model based on our new GCN architecture incorporating EdgeConv (<ref>) can reproduce both the shapes and heterogeneous details of the discontinuous saturation fronts quite well. Note that relatively large saturation differences are mainly observed near the water fronts.
To demonstrate the improvement due to EdgeConv, here we additionally show the results from a GCN model with only 7 GraphConv layers (without EdgeConv) and the max aggregation operation. The water saturation profiles of the three cases are shown in Fig. <ref>. Although the model (GraphConv) captures the overall shapes of the saturation fields with reasonable accuracy, greater errors are evident near the water fronts. Moreover, some heterogeneous details inside the water plume are smeared out, compared with the model with the combined architecture (EdgeConv+GraphConv). The mean relative errors of the invaded region (saturation bigger than the residual value) from the two saturation models are reported in Table <ref>.
§.§ Irregular Cartesian mesh
We further evaluate the generalization ability of the trained surrogate simulators, using two test samples with irregular mesh geometries. The rock property fields of the two cases on irregular Cartesian mesh are shown in Fig. <ref> and Fig. <ref>, respectively. The solution profiles of the two cases are shown in Fig. <ref> and Fig. <ref>, respectively. We can see that the surrogates predict the state evolutions with high accuracy. There is no significant saturation error from the predictions, except within certain regions near the domain boundaries. The results demonstrate that the Graph Convolutional Networks can generalize well to unseen domain geometries, even though our models were trained using only the data samples on a regular square domain.
§.§ PEBI mesh
Furthermore, we perform testing on a perpendicular bisector (PEBI) mesh with homogeneous permeability of 700 md and porosity of 0.3. Because the neighboring node numbers are different from the previous Cartesian meshes, we add 5 training samples with different rock properties and well locations on the PEBI mesh to improve prediction accuracy, adding up to a total of 165 samples in the new training dataset. A schematic of the PEBI mesh is plotted in Fig. <ref>. The solution profiles are shown in Fig. <ref>. Again, we can observe that the solutions from the surrogates closely match the high-fidelity simulation. Our GCN models generalize well to unstructured meshes, suggesting that the networks learn a general understanding of the physical processes of the multi-phase flow and transport PDE system.
§ SUMMARY
We apply GCNs for surrogate modeling to approximate the spatial-temporal solutions of multi-phase flow and transport in porous media. We propose a new GCN architecture suited to the hyperbolic character of the coupled PDE system, to better capture the saturation dynamics. Our surrogate models provide significant speedups (roughly 220×, i.e., more than two orders of magnitude) compared to the high-fidelity simulator.
The prediction performance of the trained surrogates and their generalization capabilities on out-of-training domain shapes and meshes are evaluated using 2D heterogeneous test cases. The results show that our surrogates accurately predict the evolutions of the pressure and saturation states. Even though the models were trained on the next-step predictions, the rollouts remain stable for multiple timesteps. The saturation model based on the GCN architecture incorporating EdgeConv can reproduce both the shapes and heterogeneous details of the discontinuous saturation fronts with high accuracy. Moreover, we demonstrate that the GCN-based models generalize well to unseen domain geometries and unstructured meshes.
§ ACKNOWLEDGEMENTS
We thank Sidian Chen at The University of Arizona for constructive discussions.
§ REFERENCES
Brenier, Y. and Jaffré, J., 1991. Upstream differencing for multiphase flow in reservoir simulation. SIAM journal on numerical analysis, 28(3), pp.685-696.
Battaglia, P., Pascanu, R., Lai, M. and Jimenez Rezende, D., 2016. Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems, 29.
Bar-Sinai, Y., Hoyer, S., Hickey, J. and Brenner, M.P., 2019. Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences, 116(31), pp.15344-15349.
Belbute-Peres, F.D.A., Economon, T. and Kolter, Z., 2020, November. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. In international conference on machine learning (pp. 2402-2411). PMLR.
Brandstetter, J., Worrall, D. and Welling, M., 2022. Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376.
Chen, J., Hachem, E. and Viquerat, J., 2021. Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids, 33(12), p.123607.
Guo, X., Li, W. and Iorio, F., 2016, August. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 481-490).
Gilmer, J., Schoenholz, S.S., Riley, P.F., Vinyals, O. and Dahl, G.E., 2017, July. Neural message passing for quantum chemistry. In International conference on machine learning (pp. 1263-1272). PMLR.
Iakovlev, V., Heinonen, M. and Lähdesmäki, H., 2020. Learning continuous-time pdes from sparse data with graph neural networks. arXiv preprint arXiv:2006.08956.
Jiang, Z., Tahmasebi, P. and Mao, Z., 2021. Deep residual U-net convolution neural networks with autoregressive strategy for fluid flow predictions in large-scale geosystems. Advances in Water Resources, 150, p.103878.
Kingma, D.P. and Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kipf, T.N. and Welling, M., 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2017. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp.84-90.
Kutz, J.N., 2017. Deep learning in fluid dynamics. Journal of Fluid Mechanics, 814, pp.1-4.
LeCun, Y., Bengio, Y. and Hinton, G., 2015. Deep learning. nature, 521(7553), pp.436-444.
Long, Z., Lu, Y., Ma, X. and Dong, B., 2018, July. Pde-net: Learning pdes from data. In International conference on machine learning (pp. 3208-3216). PMLR.
Morris, C., Ritzert, M., Fey, M., Hamilton, W.L., Lenssen, J.E., Rattan, G. and Grohe, M., 2019, July. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 4602-4609).
Mo, S., Zhu, Y., Zabaras, N., Shi, X. and Wu, J., 2019. Deep convolutional encoder‐decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media. Water Resources Research, 55(1), pp.703-728.
Maldonado-Cruz, E. and Pyrcz, M.J., 2022. Fast evaluation of pressure and saturation predictions with a deep learning surrogate flow model. Journal of Petroleum Science and Engineering, 212, p.110244.
Maucec, M. and Jalali, R., 2022. GeoDIN-Geoscience-Based Deep Interaction Networks for Predicting Flow Dynamics in Reservoir Simulation Models. SPE Journal, 27(03), pp.1671-1689.
Peaceman, D.W., 1983. Interpretation of well-block pressures in numerical reservoir simulation with nonsquare grid blocks and anisotropic permeability. Society of Petroleum Engineers Journal, 23(03), pp.531-543.
Pfaff, T., Fortunato, M., Sanchez-Gonzalez, A. and Battaglia, P.W., 2020. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409.
Pilva, P. and Zareei, A., 2022. Learning time-dependent PDE solver using Message Passing Graph Neural Networks. arXiv preprint arXiv:2204.07651.
Sammon, P.H., 1988. An analysis of upstream differencing. SPE reservoir engineering, 3(03), pp.1-053.
Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J. and Battaglia, P., 2020, November. Learning to simulate complex physics with graph networks. In International conference on machine learning (pp. 8459-8468). PMLR.
Santos, J.E., Xu, D., Jo, H., Landry, C.J., Prodanović, M. and Pyrcz, M.J., 2020. PoreFlow-Net: A 3D convolutional neural network to predict fluid flow through porous media. Advances in Water Resources, 138, p.103539.
Tang, M., Liu, Y. and Durlofsky, L.J., 2020. A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems. Journal of Computational Physics, 413, p.109456.
Vinuesa, R. and Brunton, S.L., 2022. Enhancing computational fluid dynamics with machine learning. Nature Computational Science, 2(6), pp.358-366.
Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M. and Solomon, J.M., 2019. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5), pp.1-12.
Wang, Y. and Lin, G., 2020. Efficient deep learning techniques for multiphase flow simulation in heterogeneous porous media. Journal of Computational Physics, 401, p.108968.
Wen, G., Hay, C. and Benson, S.M., 2021. CCSNet: a deep learning modeling suite for CO2 storage. Advances in Water Resources, 155, p.104009.
Yan, B., Harp, D.R., Chen, B., Hoteit, H. and Pawar, R.J., 2022. A gradient-based deep neural network model for simulating multiphase flow in porous media. Journal of Computational Physics, 463, p.111277.
Zhang, K., Wang, Y., Li, G., Ma, X., Cui, S., Luo, Q., Wang, J., Yang, Y. and Yao, J., 2021. Prediction of field saturations using a fully convolutional network surrogate. SPE Journal, 26(04), pp.1824-1836.
|
http://arxiv.org/abs/2307.06148v2 | 20230712131008 | NetGPT: A Native-AI Network Architecture Beyond Provisioning Personalized Generative Services | [
"Yuxuan Chen",
"Rongpeng Li",
"Zhifeng Zhao",
"Chenghui Peng",
"Jianjun Wu",
"Ekram Hossain",
"Honggang Zhang"
] | cs.LG | [
"cs.LG"
] |
NetGPT: An AI-Native Network Architecture for Provisioning Beyond Personalized Generative Services
Yuxuan Chen, Rongpeng Li, Zhifeng Zhao, Chenghui Peng, Jianjun Wu, Ekram Hossain, and Honggang Zhang
Y. Chen and R. Li are with Zhejiang University, Hangzhou 310027, China, (email: {cyx00, lirongpeng}@zju.edu.cn).
C. Peng and J. Wu are with Huawei Technologies Co., Ltd., Shanghai 210026, China (email: {pengchenghui,wujianjun}@huawei.com).
Z. Zhao and H. Zhang are with Zhejaing Lab, Hangzhou 310012, China as well as Zhejiang University, Hangzhou 310027, China (email: {zhaozf,honggangzhang}@zhejianglab.com).
E. Hossain is with University of Manitoba, Winnipeg, Manitoba, Canada (email: [email protected]).
August 12, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Large language models (LLMs) have achieved tremendous success in empowering our daily life with generative information. The personalization of LLMs could further contribute to their applications due to better alignment with human intents. Towards personalized generative services, a collaborative cloud-edge methodology is promising, as it facilitates the effective orchestration of heterogeneous distributed communication and computing resources. In this article, after discussing the pros and cons of several candidate cloud-edge collaboration techniques, we put forward NetGPT to capably synergize appropriate LLMs at the edge and the cloud based on their computing capacity. In addition, edge LLMs could efficiently leverage location-based information for personalized prompt completion, thus benefiting the interaction with the cloud LLM. After deploying representative open-source LLMs (e.g., the GPT-2-base model and the LLaMA model) at the edge and the cloud, we present the feasibility of NetGPT on the basis of low-rank adaptation-based light-weight fine-tuning. Subsequently, we highlight the essential changes required for an artificial intelligence (AI)-native network architecture towards NetGPT, with emphasis on deeper integration of communications and computing resources and careful calibration of the logical AI workflow. Furthermore, we demonstrate several benefits of NetGPT, which come as by-products, as the edge LLMs' capability to predict trends and infer intents promises a unified solution for intelligent network management & orchestration. We argue that NetGPT is a promising AI-native network architecture for provisioning beyond personalized generative services.
§ INTRODUCTION
With the remarkable success of deep learning spanning from decision-making in AlphaGo to human-level interaction like ChatGPT, it is anticipated that artificial intelligence (AI) will be embodied in 6G networks. Along with the enhanced edge computing capabilities, AI could benefit effective orchestration of network resources and improve the quality of service (QoS). Correspondingly, investigation on efficient AI-based service provisioning has attracted an intense research interest. On the other hand, application of one AI model is often limited to certain scenarios or tasks.
In this context, large language models (LLMs) (e.g., the generative pre-trained transformer, GPT <cit.>) perform well in various natural language processing (NLP) and computer vision tasks. However, fine-tuning is still a prerequisite to align pre-trained LLMs with human intents <cit.> and yield personalized outputs. Therefore, it might be cost-ineffective to deploy LLMs on centralized clouds only, as this inevitably requires multiple copies of complete model parameters to support different purposes.
In order to boost the personalization of LLMs, a collaborative cloud-edge methodology is essential <cit.>. Compared to the cloud-only LLM deployment, such a cloud-edge collaboration enjoys multi-folded merits. Firstly, it provides more freedom to allow edge servers to deploy various fine-tuned LLMs and adapt to environmental differences, thus making the service personalization and customization possible. Meanwhile, it contributes to bridging data-abundant generative devices with more adjacent servers. Therefore, it could reduce the latency and save the communication overhead to upload data to more remote cloud servers.
Incorporating generative LLMs into the edge networks promises to facilitate the effective utilization of communication and computing (C&C) resources.
As illustrated in Fig. <ref>, there are several distinctive ways to implement the cloud-edge collaboration for deployment of LLMs (e.g., local fine-tuning, model splitting). Specifically, by offloading cloud-trained LLMs, local edge servers tailor the cloud-trained LLMs to accommodate personalized and customized services based on the user preference and specified scenarios. In this regard, federated learning or parallel training can be leveraged as a complement to facilitate the tuning <cit.>. However, such an approach might face severe implementation issues in practice, as repetitive fine-tuning of complete LLMs implies significant computational burden, and also distributed deployment of proprietary LLMs might raise intellectual property concerns from model developers. Meanwhile, force-fitting an entire LLM on edge possibly strains the limited computing resources of edge servers and makes the cost of edge computing unacceptable, while offloading the LLMs incurs significant communication overhead. Alternatively, splitting LLMs to cloud and edge servers <cit.>, by deploying some layers of large-scale deep neural network (DNNs) at the edge while leaving the remaining layers to the cloud, can effectively balance the disproportionate computing resources of edge and cloud servers. Within the model splitting, how to effectively partition the DNNs between the edge and the cloud belongs to one of the most challenging issues, as it should minimize the end-to-end latency while maintaining a sufficiently small model size for the edge servers <cit.>. Such a model partitioning can be even more intricate, given billions of parameters in a typical LLM. Moreover, the wide adoption of residual links in LLMs possibly limits the choice of suitable partitioning points. Besides, the LLMs might leak private details from the data for training <cit.>. In other words, it might be challenging to directly adopt both local fine-tuning and model splitting as an implementation means of collaborative cloud-edge methodology.
In this article, we put forward NetGPT, which aims to respect the cloud-edge resource imbalance and synergize different sizes of functional LLMs at the edge and the cloud. Specifically, in apparent contrast to an AI-exogenous network with decoupled C&C resources, NetGPT could leverage converged C&C to deploy smaller LLMs at the edge and larger ones at the cloud, and meaningfully realize collaborative cloud-edge computing to provision personalized content-generation services. Besides, NetGPT incorporates a logical AI workflow that could be developed to determine performance-consistent communication links. For example, in NetGPT, the performance-driven communication link could terminate at the edge to accelerate the response, assuming the availability of satisfactory edge LLM-induced content. Otherwise, inspired by the idea of prompt learning <cit.>, the LLMs at the edge can infer the context and actively append (or fill in) some local or personalized information, so as to acquire a more comprehensive result at the cloud. Furthermore, as a by-product, the edge LLMs contribute to a unified solution for intelligent network management & orchestration (e.g., user intent inference and popularity prediction). Therefore, NetGPT coincides with the trend to deeply integrate C&C, and represents an LLM-empowered AI-native architecture.
§ IMPLEMENTATION SHOWCASE OF NETGPT
As illustrated in Fig. <ref>, we present a synergistic cloud-edge framework to accomplish personalized generative services, by leveraging distinctive pre-trained LLMs for cloud and edge (e.g., base stations [BSs]) deployment. In particular, limited by the availability of open-source LLMs, we select and deploy the LLaMA-7B model <cit.> and the GPT-2-base model, which consist of approximately 6.7 and 0.1 billion parameters, at the cloud and the edge, respectively. However, it should be noted that NetGPT allows the utilization of other LLMs as well. On this basis, we delve into the implementation details of cloud-edge LLM synergy towards NetGPT in an incremental manner. In particular, we start with a quick and general overview of the transformer, the basis of LLMs, and enumerate the detailed DNN structures of the two LLMs (i.e., the LLaMA-7B model and the GPT-2-base model). Then, we discuss effective means to fine-tune these LLMs on computation-limited devices, and demonstrate their effectiveness for location-based personalized generative services.
§.§ An Overview of Transformer
Transformer has been extensively adopted as the foundation model to constitute a multi-layer decoder in LLMs. The fundamental concept behind the transformer involves the construction of the DNN structure through the use of multi-layer self-attention and feed-forward neural networks (FNNs). In particular, self-attention relies on an attention head parameterized by query, key and value matrices (i.e., Q, K and V) to compute the internal correlation within an input sequence, by deriving and assigning different weights to different positions within the sequence. Meanwhile, the FNN applies a non-linear transformation to the representation of each position. Additionally, techniques such as layer normalization are employed to mitigate the issue of vanishing gradients.
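For illustration, a single attention head boils down to the following computation (the causal masking and multi-head split used in GPT-style decoders are omitted for brevity):

```python
import math
import torch

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one head; x has shape [seq_len, d_model]."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / math.sqrt(K.shape[-1])   # pairwise correlations between positions
    weights = torch.softmax(scores, dim=-1)     # attention weights per position
    return weights @ V                          # weighted sum of value vectors
```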
§.§ DNN Structure of LLMs at the Edge and Cloud
§.§.§ DNN structure of GPT-2-base model:
The GPT-2-base model, which is the smallest version of the GPT-2 series, encompasses 12 stacked layers of the original transformer structure (i.e., an 8-head self-attention sublayer and an FNN sublayer). A fixed absolute positional encoding of sine and cosine positions is employed to pre-transform the input sequence. In addition, GPT-2 leverages a rectified linear unit (ReLU) activation function (i.e., f_ReLU(x) = max(0,x)). Due to its relatively exceptional performance and minimal computational requirement, it can be appropriate to be deployed at the network edge.
§.§.§ DNN structure of LLaMA model:
LLaMA, which is trained on a large set of unlabeled data and is ideal for fine-tuning for downstream tasks, features various parameter versions as well <cit.>. Compared to GPT-3, LLaMA incorporates several specific enhancements to maintain similar performance while significantly reducing the number of parameters <cit.>. For example, in order to enhance training stability, LLaMA normalizes the input of each sub-layer instead of normalizing the output. Moreover, it adopts the root mean square layer normalization (RMSNorm) function <cit.> as a simplified replacement for layer normalization, by employing the root mean square (RMS) rather than the standard deviation. Additionally, RMSNorm introduces a learnable scaling factor that enables adaptive feature scaling. Thus, it contributes to improving normalization effects for diverse features that have distinctive value ranges. Secondly, LLaMA replaces the ReLU activation function with Swish-gated linear unit (SwiGLU) <cit.>, which combines the Swish function (i.e., f_Swish(x)=x·σ (β x) with σ (x) = 1/1+e^- x and a trainable parameter β) with GLU (i.e., f_GLU(x) = x ·σ(Wx + b) with trainable parameters W and b), thereby possibly activating neurons according to the input in a more selective manner and imparting smoothness to effectively capture intricate non-linear relationships. Lastly, LLaMA introduces rotary position embedding (RoPE) <cit.>, which encodes positional information with a pre-defined rotation matrix and naturally incorporates explicit relative position dependency in the self-attention formulation. Compared to absolute position encoding which assigns a distinct encoded representation to each position in the sequence, the taken form of relative position encoding in RoPE enables a more effective modeling of long-range dependencies within the contextual information. Thereby, RoPE could align with intuitive understanding and it exhibits superior performance in practice.
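The first two of these ingredients can be summarized by the simplified sketch below (bias terms and the exact placement inside the transformer block are omitted):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square normalization with a learnable scaling factor."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.scale * x / rms

def swiglu(x, W, V, beta=1.0):
    """SwiGLU(x) = Swish(xW) * (xV), with Swish(z) = z * sigmoid(beta * z)."""
    z = x @ W
    return z * torch.sigmoid(beta * z) * (x @ V)
```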
§.§ Fine-Tuning Techniques
§.§.§ Low-rank adaptation-based light-weight fine-tuning:
As LLaMA lacks the capability to generate responsive text <cit.>, an extra fine-tuning is still required. However, a direct fine-tuning of LLMs such as a LLaMA still requires significant computational resources. For example, it demands 112 GB video random access memory (VRAM) to fine-tune the LLaMA-7B model, far more than the capacity of NVIDIA A100 Tensor Core GPU. Therefore, we leverage a low-rank adaptation (LoRA) technique <cit.> to achieve parameter-efficient fine-tuning on a consumer-level hardware. In particular, in order to fine-tune a complete parameter matrix W∈ℝ^d_in× d_out, LoRA specially adds a bypass pathway to simulate the matrix update ΔW by using two downstream parameter matrices A∈ℝ^d_in× r and B∈ℝ^r× d_out with the intrinsic rank r. In other words, under the condition that r ≪min(d_in,d_out), LoRA successfully transforms large parameter matrix ΔW into lower-rank dimensional ones with ΔW≈AB. Our experiment shows that it only costs 28 GB VRAM to fine-tune the LLaMA-7B model, without significantly elongating the training duration. Additionally, the required storage space for fine-tuning could be greatly reduced from 12.55 GB to 50 MB[Such statistics are obtained under the configuration that r=8 and a learning rate-related scalar factor equals 16.]. On the basis of LoRA, we can utilize the Stanford Alpaca dataset <cit.> to fine-tune LLaMA-7B model and obtain a qualified responsive LLaMA-7B model.
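In code, the idea reduces to adding a trainable low-rank bypass next to a frozen weight, as in the illustrative layer below (r = 8 and the scalar factor of 16 follow the configuration mentioned in the footnote):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update A @ B."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_in, d_out), requires_grad=False)  # frozen
        self.A = nn.Parameter(torch.randn(d_in, r) * 0.01)                    # d_in x r
        self.B = nn.Parameter(torch.zeros(r, d_out))                          # r x d_out
        self.scaling = alpha / r

    def forward(self, x):
        return x @ self.W + self.scaling * (x @ self.A @ self.B)              # W + Delta W
```

Only A and B are updated during fine-tuning, which is why the stored adapter remains tiny compared with the full parameter matrix.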
§.§.§ LLM-instructed data collection:
In order to implement personalized edge LLM, it is crucial to grant the GPT-2-base model the capability to extend a “concise” prompt by appending location-based information. Basically, the positioning information can be conveniently obtained according to the locations of affiliated BSs stored in the 5G access and mobility management function (AMF). Meanwhile, in order to complement more comprehensive information, we take the self-instruct approach <cit.> and leverage OpenAI's Text-Davinci-003 model to generate useful text samples. In particular, as for each location, we use a set of manually-written location-related prompts to interact with the OpenAI's Text-Davinci-003 model, and leverage the generative response texts as the “comprehensive prompt”. Correspondingly, a series of mappings between the “concise prompt" and a “comprehensive prompt" can be collected. Considering the size and task complexity of the edge LLM, we collect a dataset comprising approximately 4,000 samples for directly fine-tuning the GPT-2-base model towards a prompt-completion model. Notably, for scenarios where stronger generality is required, edge LLMs can be enhanced with a larger-scale LLM, and fine-tuning techniques such as LoRA can be employed as well.
§.§ Performance Showcase
Fig. <ref> demonstrates the performance of NetGPT. In particular, as illustrated in Fig. <ref>, the edge LLM is capable of complementing the “concise prompt" according to the chart at the left part of Fig. <ref>, which highlights the most frequently used words for generating each corresponding “comprehensive prompt”. Furthermore, different BSs add location-based personalized information so as to satisfy distinctive requirements. Subsequently, the edge LLM processes the user-submitted “concise prompt”, and feeds the complemented prompt to the cloud. Next, a more complete generative response could be anticipated. It can be observed from the right part of Fig. <ref> that NetGPT could yield different location-based responses, which manifests the capability to handle personalized generative services through effective cloud-edge synergy.
§ AI-NATIVE NETWORK ARCHITECTURE TOWARDS NETGPT
We argue that NetGPT provides the opportunity to transform cellular networks into an AI-native networking architecture, which provisions personalized, networked and inclusive intelligence for end users and grants users more privilege to access generative services anytime and anywhere. Nevertheless, such a transformation does come at a cost. It requires substantial changes, far more than installing racks of servers at the edge location and locally breaking out the traffic for edge processing. In particular, compared with conventional connectivity-oriented communications systems, wherein a typical service establishes connections between two specific terminals and the communication source and destination are clearly defined by end users, NetGPT requires establishing generative performance-driven connections in a more implicit manner. Moreover, as NetGPT involves more frequent data collection and processing modules for training personalized LLM models, computing resources shall be consistently scheduled to accomplish it. In other words, as shown in Fig. <ref>, NetGPT necessitates the design of deeply converged C&C in radio access networks (RANs). On top of these novel features, a logical AI workflow shall be devised to establish (beyond) generative service orchestration.
§.§ Converged C&C in RAN
In order to effectively organize heterogeneous resources, which may simultaneously cover terrestrial and non-terrestrial communications, the RAN for NetGPT has to first provide a control plane (CP) for seamless connectivity control to ubiquitously support prompting and generative information transmission in the user plane (UP). Such elements can be developed in accordance with the 5G and 5G-Advanced techniques. In addition, it is worthwhile to introduce an independent computing plane (CmP) to coordinate computing resources and perform AI-related functionalities, so as to facilitate the deployment and update of generative services.
§.§ Data Processing and Privacy Protection
As discussed in Section <ref>, data processing (e.g., data collection and fine-tuning) is heavily leveraged to lay the very foundation for producing generative LLM models. Besides collecting and storing data, it is essential to introduce data desensitization modules as key data processing services, so as to avoid privacy risks and protect the privacy embedded in the data. Meanwhile, data policy enforcement modules, which handle data according to regulatory as well as non-regulatory rules (e.g., geographic restrictions), will be executed by default to ensure the integrity and legitimation of data processing. Moreover, contingent on the regulation and data usage policy, it is also feasible to devise some data processing model libraries and expose the capabilities with appropriate access control for entities to utilize the data services.
§.§ Personalized Profiling
In order to create a highly customized NetGPT, location-oriented profiling shall be significantly enhanced to support the definition and operation of personalized generative AI services. For example, local venue and facility information can be specially gathered to train edge LLMs. On the other hand, user service nodes (USN) can contain customized services at the end-user level as well, so as to meet diversified customer requirements. Meanwhile, it could further support establishing user profiles and characterizing connected terminals.
§.§ C&C Resource Management
As part of the provisioned services in future cellular networks, the orchestration of resources for NetGPT shares some similarities with that for other network services, including establishing connectivity and allocating computing resources. However, it also poses additional challenges, since the scope of resources spans distributed nodes from the cloud to the terminal. Therefore, a novel protocol stack needs to be carried on radio control signaling (e.g., RRC) or radio data protocols (e.g., PDCP or SDAP) to transmit AI-generative messages and implement model update & distribution.
§.§ Logical AI Workflow
In order to effectively provision AI services, it is critical to develop logical AI workflows to parse and orchestrate services. Notably, a logical AI workflow, which facilitates a set of network functions physically distributed at both the edge and the cloud to coherently deliver the “concise prompt”, “comprehensive prompt” and “generative responses”, regulates data processing and profiling to train personalized LLMs at the CmP. Furthermore, logical AI workflows are mapped to physical resources during service deployment, so as to take into account the QoS requirements of related services. Notably, as the workflow covers a wide scope of network functions, the processing may be serial or directed acyclic graph-based, and thus involves comprehensive optimization techniques beyond the scope of this article. On the other hand, the logical AI workflow is not limited to generative services. As discussed later in Section <ref>, the logical AI workflow significantly contributes to the improvement of QoS in a more customizable manner.
§ LLM-BASED UNIFIED SOLUTION FOR NETWORK MANAGEMENT & ORCHESTRATION
Apart from providing personalized generative services, NetGPT and the AI-native architecture could provide a unified solution for intelligent network management & orchestration, on top of the deployed edge LLMs.
§.§ Popularity Prediction
Popularity prediction could significantly contribute to improving networking efficiency by adapting the C&C resources to predicted demands <cit.>. Considering the underlying principles of its DNN structure, GPT-2 promises the ability to interpret users' preferences from the historical visiting records of affiliated terminals at the RAN. Furthermore, by incorporating location-specific data, the edge LLMs can differ across areas so as to better capture the personalized characteristics unique to each area.
In order to test the prediction capability of the edge LLM (i.e., the GPT-2-base model), we take the Netflix audience behavior dataset as a showcase. To mitigate data sparsity, the time range is first divided into intervals based on a 6-hour cycle, each tagged with a number. Subsequently, the 20 movies with the highest frequency are selected and labeled according to the presence or absence of each movie in a particular interval. Later, benefiting from the data formatting capability of the CmP, the related historical information is converted into natural language conforming to a specific template. For example, “In interval 1, movie `Iron man 2' appear :1" indicates that the movie “Iron man 2” appears in Interval 1, which corresponds to the specific date-time given in the bottom-left part of Fig. <ref>. Meanwhile, special tokens are added to create a prompt template that aids the language model in information comprehension and response generation. After direct fine-tuning, the edge LLM could generate labels following the prompt template format, i.e., whether the movie appears in the interval. Furthermore, to enhance model universality, we specifically utilize data from the last half year of the dataset for experimentation, and divide the dataset into a training set and a test set in a 95%/5% split. Fig. <ref> finally presents the prediction accuracy of the edge LLM. It can be observed that GPT-2 exhibits an acceptable level of accuracy on this task, and significantly outperforms other classical algorithms (e.g., LSTM, GRU).
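The conversion from a (interval, movie, appeared) record to a training sample can be sketched as below; the helper and dictionary layout are illustrative, with the template following the example given above.

```python
def make_sample(interval_id, movie, appeared):
    """Turn one record into a prompt/label pair matching the natural-language template."""
    prompt = f"In interval {interval_id}, movie '{movie}' appear :"
    return {"prompt": prompt, "label": "1" if appeared else "0"}

# e.g. make_sample(1, "Iron man 2", True)
# -> {"prompt": "In interval 1, movie 'Iron man 2' appear :", "label": "1"}
```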
§.§ Intent Inference
Intent-based networking aims to tackle the increased difficulty of applying template-based services to vertical businesses, and it needs to perceive the real-time requirements of customers before replacing the manual processes of configuring networks and reacting to network issues <cit.>. In that regard, how to precisely understand the intent of customers and translate it into a feasible network configuration is one of the most fundamental issues. Coincidentally, edge LLMs are well suited to such an intent-recognition process <cit.> and benefit the accurate understanding of more verbal statements. In particular, by adopting a similar fine-tuning approach as before, the edge LLMs acquire the capability to understand and extract keywords from arbitrary natural language input. Notably, such fine-tuning can be easily accomplished, as only 4,000 input-keyword pairs are used in our experiment. Fig. <ref> presents the corresponding capability of the GPT-2-base model to identify customer intent. It can be observed from Fig. <ref> that if one user wants to establish a 10 Gbps connection from Access 1 to Cloud 2 with traffic protection, accurate keywords can be conveniently extracted by the GPT-2-base model regardless of how the statement is phrased. Therefore, it avoids the cumbersome template design and customer learning process. In this regard, compared to conventional NLP tools, LLMs manifest a stronger capability towards fulfilling intent-driven networking.
§ CONCLUSION
In this article, based on LLMs, we have advocated an AI-native network architecture, namely NetGPT, for provisioning network services beyond personalized generative content. In particular, through the effective cloud-edge LLM synergy, we have demonstrated the feasibility of NetGPT for location-based personalized services by deploying representative open-source LLMs (e.g., the GPT-2-base model and the LLaMA model) at the edge and the cloud, and evaluating their coherent performance with the adoption of low-rank adaptation-based parameter-efficient fine-tuning techniques. On top of that, we have highlighted some substantial architectural changes (e.g., deep C&C integration and a logical AI workflow) that NetGPT will require. As a by-product, we have presented a possible unified AI solution for network management & orchestration empowered by edge LLMs through exemplifying the performance for popularity prediction and intent inference.
While NetGPT is a promising AI-native network architecture for provisioning beyond personalized generative services, in this article, we have not discussed all of the major research challenges. For the successful deployment of NetGPT, the following questions will need to be answered.
* Given the success of LLaMA to shrink model sizes through effective algorithmic and structural updates, how to implement the inference and fine-tuning at the terminals, so as to satisfy the limited computing capability in cost-limited devices?
* Considering the continual evolution of knowledge, how to emulate new Bing[New Bing refers to GPT-empowered search engine available at <https://www.bing.com/new>.] and implement online learning-based LLMs to adapt to the dynamicity of wireless environment at the edge?
* Due to the limited sensitivity for numerical inference and possible deception effects, how to further improve the rigorousness of LLMs and what lessons can be learned from the latest LLM? Meanwhile, how to incorporate the evaluation metric of LLM to derive a suitable logical AI workflow?
* Besides network optimization at higher layers, how to develop LLM-based physical and radio link layers to unleash the power of LLMs? How to leverage the LLMs to meet the requirements for low-latency and ultra-reliable communications? Also, how to optimize wireless communications systems for efficient deployment and operation of LLMs in future networks?
openai_gpt4_2023
OpenAI, “GPT-4 technical report,” Mar. 2023. [Online]. Available:
<http://arxiv.org/abs/2303.08774>
zhang_building_2023
J. Zhang, S. Vahidian, et al., “Towards building the federated GPT:
Federated instruction tuning,” May 2023, arXiv:2305.05644 [cs, eess].
[Online]. Available: <http://arxiv.org/abs/2305.05644>
xu_unleashing_2023
M. Xu, H. Du, et al., “Unleashing the power of edge-cloud generative
AI in mobile networks: a survey of AIGC services,” Mar. 2023,
arXiv:2303.16129 [cs]. [Online]. Available:
<http://arxiv.org/abs/2303.16129>
wu_split_2023
W. Wu, M. Li, et al., “Split learning over wireless networks:
Parallel design and resource management,” IEEE J. Sel. Area. Comm.,
vol. 41, no. 4, pp. 1051–1066, Apr. 2023.
kandpal_deduplicating_2022
N. Kandpal, E. Wallace, et al., “Deduplicating training data mitigates
privacy risks in language models,” in Proc. ICML 2022, Baltimore,
USA, Jun. 2022.
liu_pretrain_2023
P. Liu, W. Yuan, et al., “Pre-train, prompt, and predict: A
systematic survey of prompting methods in natural language processing,”
ACM Comput. Surv., vol. 55, no. 9, pp. 195:1–195:35, Jan. 2023.
touvron_llama_2023
H. Touvron, T. Lavril, et al., “LLaMA: Open
and efficient foundation language models,” Feb. 2023, arXiv:2302.13971
[cs]. [Online]. Available: <http://arxiv.org/abs/2302.13971>
zhang-sennrich-neurips19
B. Zhang and R. Sennrich, “Root mean square layer normalization,” in
Proc. NeurIPS 2019, Vancouver, Canada, Dec. 2019.
shazeer_glu_2020
N. Shazeer, “GLU variants improve transformer,”
Feb. 2020, arXiv:2002.05202 [cs, stat]. [Online]. Available:
<http://arxiv.org/abs/2002.05202>
su_roformer_2022
J. Su, Y. Lu, et al., “RoFormer: Enhanced
transformer with rotary position embedding,” Aug. 2022, arXiv:2104.09864
[cs]. [Online]. Available: <http://arxiv.org/abs/2104.09864>
hu_lora_2022
E. J. Hu, Y. Shen, et al., “LoRA: Low-rank adaptation of large
language models,” in Proc. ICLR 2022, Virtual Edition, Jan. 2022.
alpaca
R. Taori, I. Gulrajani, et al., “Stanford alpaca: An
instruction-following llama model,”
<https://github.com/tatsu-lab/stanford_alpaca>, 2023.
selfinstruct
Y. Wang, Y. Kordi, et al., “Self-instruct: Aligning language model with
self generated instructions,” in Proc. ACL 2023, Toronto, Canada,
Jul. 2023.
9978680
J. Zhu, R. Li, et al., “AoI-based temporal attention graph neural
network for popularity prediction and content caching,” IEEE Trans.
Cogn. Commun. Netw., vol. 9, no. 2, pp. 345–358, 2023.
8968429
L. Pang, C. Yang, et al., “A survey on intent-driven networks,”
IEEE Access, vol. 8, pp. 22 862–22 873, 2020.
§ AUTHOR BIOGRAPHIES
Yuxuan Chen is a PhD Candidate in Zhejiang University, Hangzhou, China. His research interests currently focus on Large Language Model in communication.
Rongpeng Li is an associate professor in Zhejiang University, Hangzhou, China. His research interests currently focus on networked intelligence for communications evolving.
Zhifeng Zhao is the Chief Engineer with Zhejiang Lab, Hangzhou, China. His research area includes collective intelligence and software-defined networks.
Chenghui Peng is a Principal Researcher of Huawei Technologies. His current research interests focus on 6G native AI architecture design.
Jianjun Wu is the Chief Researcher and Director of Future Network Lab, Huawei Technologies. He is leading the future network architecture design in Huawei.
Ekram Hossain is a Professor and the Associate Head (Graduate Studies) in the Department of Electrical and Computer Engineering at University of Manitoba, Canada.
Honggang Zhang is a principal researcher with Zhejiang Lab, Hangzhou, China. He is currently involved in research on cognitive green communications.
|
http://arxiv.org/abs/2307.04244v1 | 20230709184758 | Reinforcement Learning for Joint Design and Control of Battery-PV Systems | [
"Marine Cauz",
"Adrien Bolland",
"Bardhyl Miftari",
"Lionel Perret",
"Christophe Ballif",
"Nicolas Wyrsch"
] | math.OC | [
"math.OC"
] |
§ INTRODUCTION
§.§ Background and related work
The current transition to renewable energy sources requires rethinking new energy systems, characterized by decentralized and intermittent production. The development of these systems typically occurs in two distinct steps, namely the design and the control of the system. The design problem involves identifying the design variables, which are the optimal sizes of the energy system components. The control problem aims to determine the control variables, which are the optimal actions to operate the energy system components. Both the design and control problems should jointly minimize a cost function, yet they are typically solved sequentially. This paper explores the value of solving the design and control tasks jointly, using a reinforcement learning (RL) method, as an appropriate design is intrinsically linked to the subsequent operation. To evaluate the effectiveness of this approach, its performance is compared with that of the Mixed Integer Linear Programming (MILP) method.
On the one hand, RL is a data-driven approach where an agent learns to make decisions in a dynamic environment through trial-and-error experience. It involves an agent interacting with an environment and receiving feedback in the form of rewards or penalties based on its actions, with the goal of maximizing its cumulative reward over time. On the other hand, Mixed Integer Linear Programming (MILP) is a mathematical optimization technique used to solve problems with linear constraints and integer variables. It involves formulating a mathematical model of the problem and using an optimization algorithm to find the best solution. Both RL and MILP methods will be used to benchmark the results of a one-year time series.
As highlighted in a recent review <cit.>, RL-based approaches have significant potential, yet not fully exploited, in the energy field. Specifically, the review points out that energy systems are typically designed using either MILP or heuristic methods, with RL approaches dedicated to their control. Integrating RL beyond energy flow control would open new interesting research questions. In <cit.>, RL is used to support distributed energy system design due to its flexibility and model-free nature, which allows it to be adapted to different environments at different scales. However, they did not simultaneously address the dispatch and design problem as a distributed reward problem, as done in this work. Instead, they used a cooperative coevolution algorithm (COCE) to assist the optimization process. Jointly addressing the design and operation of energy systems is a key issue, especially for multi-energy systems, as discussed in <cit.>, where multi-objective evolutionary algorithms (EMOO) and MILP are used to integrate biomass technologies in a multi-energy system. In <cit.>, the focus is on evolution algorithms and their comparison with deep reinforcement learning strategies. After clarifying the fundamental differences between the two approaches, the discussion revolves around their ability to parallelize computations, explore environments, and learn in dynamic settings. The potential of hybrid algorithms combining the two techniques is also investigated, along with their real-world applications.
RL-based frameworks are successfully applied to the operation of energy systems <cit.>, although these methods have not, to the authors' knowledge, been extended to solve real-world design problems in energy systems. As reviewed in <cit.>, RL-based frameworks are popular for addressing electric vehicle (EV) charging management, mostly with variants of the DQN algorithm, and outperform other traditional methods. In <cit.>, various deep RL algorithms are benchmarked against rule-based control, model predictive control, and deterministic optimization in the presence of PV generation. The study, which aims to increase PV self-consumption and state-of-charge at departure, demonstrates the potential of RL for real-time implementation. For solving V2G control under price uncertainty, <cit.> modeled the problem with a Markov Decision Process (MDP) <cit.>, a mathematical framework for modeling systems where stochasticity is involved. Additionally, a linear MDP formulation is also used in <cit.> to address the coordination of multiple charging points at once. Finally, in <cit.>, a data-driven approach is defined and evaluated for coordinating the charging schedules of multiple EVs using batch reinforcement learning with a real use case. In conclusion, these studies provide valuable insights and tools for optimizing and improving energy systems, demonstrating the potential of RL to tackle the operation of complex energy systems.
§.§ Contribution
This work aims to evaluate the relevance of jointly designing and controlling an energy system using a deep RL approach. To achieve this purpose, two methods are benchmarked to address jointly the design and control problem of a real-world PV-battery system. The first method, MILP, computes the optimal design and control solution over a sequence of historical data. The second method, RL, computes the optimal design and a control policy through interactions with a simulator by trial and error. The specific RL algorithm used in this study is referred to as Direct Environment and Policy Search (DEPS) <cit.>. DEPS extends the REINFORCE algorithm <cit.> by combining policy gradient with model-based optimization techniques to parameterize the design variables. In this framework, an agent looks for the design and control variables that jointly maximize the expected sum of rewards collected over the time horizon of interest. The outcomes of both methods are discussed in the subsequent sections of this paper.
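As a rough schematic of the joint-optimization idea (not the DEPS algorithm itself, which includes further refinements), both a distribution over the design variables (here, the PV nominal power and battery capacity) and the policy can be updated with a score-function gradient of the expected return; `simulate` stands for an environment rollout returning the episode return and the summed action log-probabilities, and the optimizer is assumed to hold both sets of parameters.

```python
import torch

def joint_reinforce_step(design_dist, policy, simulate, optimizer, n_traj=16):
    """One REINFORCE-style update of design parameters and policy parameters together."""
    loss = 0.0
    for _ in range(n_traj):
        design = design_dist.sample()                 # e.g. [PV nominal power, battery capacity]
        episode_return, action_logp = simulate(design, policy)
        logp = design_dist.log_prob(design).sum() + action_logp
        loss = loss - logp * episode_return           # log-derivative trick on E[return]
    (loss / n_traj).backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice, variance-reduction baselines and constraints on the admissible design ranges would be added on top of this bare-bones estimator.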
This paper is structured as follows. Section <ref> provides two formulations of the energy system, one designed for MILP and the other for RL, and discusses the methodology used to benchmark the results. In Section <ref>, the outcomes of the study are presented, and these results are discussed in Section <ref>, with a focus on the potential of RL for joint design and control of energy systems. Finally, the paper concludes with a summary in Section <ref>.
§ METHOD
§.§ Problem statement
The study is carried out for the energy system illustrated in Figure <ref>, whose components are detailed in the subsections below. Overall, the system refers to an office building that has been fitted with a PV installation and a stationary lithium-ion battery to meet its own electricity consumption. Additionally, the building is connected to the electricity grid.
The objective of the study is to jointly propose a design of the PV and battery components, as well as a control strategy for the described energy system, in order to minimize the total cost of its ownership. In the following Subsection <ref>, the system is expressed as a mathematical program made up of constraints and objectives; more precisely, it is tackled as a Mixed-Integer Linear Program. Subsection <ref> formulates a surrogate environment as an MDP. The latter represents the same dynamics and rewards as the original problem, but the objective is to maximize the sum of rewards gathered over one week, in expectation over the 52 weeks of the yearly data set. This formulation allows the use of the RL algorithm, with the expectation that its optimal solution is close to the solution of the original problem. Results are discussed in Section <ref>. Finally, for both methods, the energy system is studied over a finite time horizon T, on which all costs are evenly distributed across each time step t. The methodology and the context of the experiments conducted are specified in Subsection <ref>.
§.§ Energy system
This subsection describes the physical constraints that apply to the components of the energy system. These components, in sequential order, consist of the PV panels, the battery, the electrical load and the power grid. The set of design and control variables and the parameters of the whole system, which is modeled as a discrete-time system, are gathered in Tables <ref> and <ref>, respectively.
§.§.§ PV system
The objective of the PV installation is to generate electricity on-site to fulfill the local electricity demand. The design of this component is one of the two design variables that will result from the optimization process. The range of the suitable nominal power P^nom, corresponding to its design variable, is set in Eq. (<ref>) and the production at time t is directly proportional to this nominal design variable as shown in Eq. (<ref>). The normalized annual curve p_t^prod corresponds to the actual hourly averaged PV production power of the building.
P^nom_min ≤ P^nom≤ P^nom_max
P^prod_t = P^nom·p^prod_t
The capex and opex of the installation, which are respectively the initial investment and the annual maintenance cost, are each made up of a fixed and a variable part to account for potential scale effects.
cx_pv = cx_pv^fix + cx_pv^var· P^nom
ox_pv = ox_pv^fix + ox_pv^var· P^nom
§.§.§ Battery
To maximize the potential for on-site self-consumption, a stationary lithium-ion battery is available. The design of this component, corresponding to its capacity B, is the second design variable to determine during the optimization process. This battery capacity can vary in the range of Eq. (<ref>).
B_min ≤ B ≤ B_max
The state of charge variable, soc_t, changes as a function of the power exchanged with the battery denoted P^B_t. This power is constrained, for charging, by the nominal capacity, Eq. (<ref>), and, for discharging, by the energy stored, Eq. (<ref>). Additionally, the battery efficiency, denoted η^b, is assumed identical for both the charging and the discharging processes.
P^B_t ≤ (B - soc_t)/Δ t if P^B_t ≥ 0
P^B_t ≥ -soc_t/Δ t if P^B_t ≤ 0
Knowing the power exchanged with the battery, the state of charge can be updated:
soc_t+1 = soc_t + P^B_t ·η^b·Δ t if P^B_t ≥ 0
soc_t+1 = soc_t + (P^B_t/η^b) ·Δ t if P^B_t < 0
At the beginning of the optimization, i.e., t=0, the battery soc is set to half of its capacity value, to initialize the model. Moreover, to avoid any artificial benefit, the final soc is constrained to be equal to the initial value, as formulated in Eq. (<ref>).
soc_t=0 = B/2
soc_t=0 = soc_t=T
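As an illustration, the following Python sketch implements the battery model defined by the equations above; the efficiency and time-step values are illustrative assumptions, and the feasibility clipping follows the charging and discharging bounds as written.

def battery_step(soc, p_b, capacity, eta=0.9, dt=1.0):
    """One-step state-of-charge update (energies in kWh, powers in kW),
    with the same efficiency eta assumed for charging and discharging."""
    # Clip the requested battery power to the feasible range of the equations above.
    p_b = min(p_b, (capacity - soc) / dt)     # charging bound
    p_b = max(p_b, -soc / dt)                 # discharging bound
    if p_b >= 0:
        soc_next = soc + p_b * eta * dt
    else:
        soc_next = soc + (p_b / eta) * dt
    return soc_next, p_b

# Example: start at half capacity (soc_0 = B/2) and charge at 10 kW for one hour.
soc, applied = battery_step(soc=50.0, p_b=10.0, capacity=100.0)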
Similar to the PV plant, the capex and opex of the battery consist of both fixed and variable parts.
cx_B = cx_B^fix + cx_B^var·B
ox_B = ox_B^fix + ox_B^var·B
§.§.§ Electrical load
The electrical load used in this project consists of real data from an office building in Switzerland. This consumption is monitored on an hourly basis and reflects the consumption patterns of office days. The building load power, P^load_t, is provided as input and corresponds to an actual measurement sampled hourly over one year.
§.§.§ Electrical grid
To absorb excess solar production or to meet the electricity consumption in the absence of local production, the system is connected to the low-voltage electrical grid. This connection is modeled here as a single balance equation, called the conservation of electrical power, shown in Eq. (<ref>). The power imported from the grid is referred to as P^imp_t and the power injected is referred to as P^exp_t.
P^prod_t + P^imp_t = P^load_t + P^B_t + P^exp_t
The grid power value at each time t is derived from Eq. <ref>, and the power limit can be described as follows.
0 ≤ P^imp_t ≤ P^max_grid
0 ≤ P^exp_t ≤ P^max_grid
Based on the import and export power, the total cost of supplying electricity through the network C_grid can be computed.
C_grid = ∑_t=0^T-1 C_grid,t = ∑_t=0^T-1( P^imp_t · C^imp_grid,t - P^exp_t · C^exp_grid,t )
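The following sketch shows, under hourly time steps, how the grid import and export can be derived from this power balance and how the corresponding grid cost is accumulated; it is an illustrative helper rather than the GBOML model used later.

def grid_exchange(p_prod, p_load, p_batt, p_max_grid=float("inf")):
    """Derive grid import/export (kW) from P_prod + P_imp = P_load + P_B + P_exp."""
    net = p_load + p_batt - p_prod           # positive: deficit, negative: surplus
    p_imp = min(max(net, 0.0), p_max_grid)
    p_exp = min(max(-net, 0.0), p_max_grid)
    return p_imp, p_exp

def grid_cost_step(p_imp, p_exp, c_imp, c_exp):
    """Per-step grid cost C_grid,t (hourly steps, so kW and kWh coincide numerically)."""
    return p_imp * c_imp - p_exp * c_exp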
§.§.§ Objective function
The objective of this study is to propose a design for the PV and battery components, along with their dispatch, with the aim of minimizing the total cost of ownership. This objective function, of minimizing the overall cost of the system, can be formulated as follows.
min totex
The total cost of the system, denoted totex, is composed of the capex and opex of both PV and battery components, as well as the grid cost.
totex = opex + capex + C_grid
opex = ox_pv + ox_B
capex = cx_pv· R_pv + cx_B· R_B
The opex and grid cost are computed over a finite time period T. However, the capex is an investment cost that is independent of T. To enable the adaptation of the investment cost to the project duration, an annuity factor R adjusts the capex for the finite time horizon T. This annuity factor is computed according to Eq. (<ref>), by taking into account the values of T, the annual discount rate r, and the lifetime L of the component. This formula includes a scaling factor T/8760 to adapt R to the period T, based on the assumption that T is expressed in hours since 8760 is the number of hours in a year.
R = [ r · (1 + r)^L / ( (1 + r)^L - 1 ) ] ·T/8760
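As a quick numerical illustration, the annuity factor can be computed as follows; the discount rate and lifetime values in the example are assumptions, not the ones used in the study.

def annuity_factor(r, lifetime_years, horizon_hours):
    """Annuity factor R: capital recovery factor scaled to a horizon given in hours."""
    crf = r * (1 + r) ** lifetime_years / ((1 + r) ** lifetime_years - 1)
    return crf * horizon_hours / 8760.0

# Example with assumed values: 4% discount rate, 20-year lifetime, one-year horizon.
R_pv = annuity_factor(0.04, 20, 8760)    # ~0.074, i.e. about 7.4% of the capex per year
capex_contribution = R_pv * 100_000.0    # annualized share of a hypothetical 100 kEUR capex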
§.§ MDP formulation
This section presents an alternative formulation of the problem as a Markov Decision Process (MDP), which is a well-established framework for modeling sequential decision-making problems. This alternative formulation is required for applying DEPS. More precisely, an MDP(S, A, P, R, T), as presented below, consists of the following elements: a finite set of states S, a finite set of actions A, a transition function P, a reward function R, and a finite time horizon T.
§.§.§ State Space
The state of the system can be fully described by
s_t = (h_t, d_t, soc_t, P^prod_t, P^load_t)
∈ S = {0, ..., 23}×{0, ..., 364}× [0, B] ×ℝ_+ ×ℝ_+
* h_t ∈{0, ..., 23} denotes the hour of the day at time t. The initial value is set to 0.
* d_t ∈{0, ..., 364} denotes the day of the year at time t. The initial value is set randomly.
* soc_t is the state of charge of the battery at time t; it is upper bounded by the nominal capacity of the installed battery B. The value is initialized to a random value during the training process and to half of the battery capacity during the validation process.
* P^prod_t represents the expected PV power at time t. This value is obtained by scaling normalized historical data p^prod_t with the total installed PV power (P^nom) and considering h_t and d_t values.
* P^load_t denotes the expected value of the electrical load at time t. The load profile is determined using historical data that corresponds to the same hour and day as the PV power.
§.§.§ Action Space
The action of the system corresponds to the power exchanged with the battery.
a_t = (P^B_t)
After projecting the action to fall within the acceptable range specified by Eqs. (<ref>) and (<ref>), the resulting value is used as a_t, as shown in Eq. (<ref>). This corresponds to the power exchanged with the battery, denoted P^B_t, which is positive when the battery is being charged and negative when it is being discharged.
P^B_t =
(B - soc_t)/Δ t if P^B_t > (B - soc_t)/Δ t
-soc_t/Δ t if P^B_t < -soc_t/Δ t
P^B_t otherwise
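This projection amounts to clipping the raw policy output onto the feasible battery-power range, as in the short sketch below (Δt is assumed to be one hour).

def project_action(p_b_raw, soc, capacity, dt=1.0):
    """Clip the raw battery-power action onto the feasible range of the equation above."""
    upper = (capacity - soc) / dt       # maximum charging power
    lower = -soc / dt                   # maximum discharging power (negative)
    return max(lower, min(p_b_raw, upper))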
§.§.§ Transition Function
Each time step t in the system corresponds to one hour, which implies the evolution of the state variable h specified in Eq. (<ref>); every 24 time steps, the day is incremented by 1.
h_t+1 = (h_t + 1) mod 24
d_t+1 = d_t + Int( (h_t + 1)/24 )
where the function Int takes the integer value of the expression in Eq. (<ref>).
The soc_t of the battery is updated as in Eq. (<ref>), based on the projected action value, and all other state variables are taken from the input data.
P^prod_t+1 = p^prod_h_t+1, d_t+1· P^nom
P^load_t+1 = p^load_h_t+1, d_t+1
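Put together, one transition of this surrogate MDP can be sketched as follows; the battery power is assumed to be already projected onto its feasible range, and the round-trip efficiency value is an illustrative assumption.

def transition(state, p_b, p_prod_profile, p_load_profile, pv_nominal, eta=0.9):
    """One-hour MDP transition; `state` is (hour, day, soc, p_prod, p_load) and the
    normalized profiles are indexed as profile[day][hour]."""
    h, d, soc, _, _ = state
    h_next = (h + 1) % 24
    d_next = (d + (h + 1) // 24) % 365                       # day advances when the hour wraps
    soc_next = soc + (p_b * eta if p_b >= 0 else p_b / eta)  # Delta t = 1 h
    p_prod_next = p_prod_profile[d_next][h_next] * pv_nominal
    p_load_next = p_load_profile[d_next][h_next]
    return (h_next, d_next, soc_next, p_prod_next, p_load_next)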
§.§.§ Reward Function
The reward signal used to optimize the agent's actions in RL serves a similar aim as the objective function in the MILP formulation. Therefore, the reward here is the negative of the totex defined in Eq. (<ref>). This cost is composed of (i) the investment cost, (ii) the operating cost and (iii) the cost from the purchase and resale of electricity from the grid defined in Eq. (<ref>).
r_t = - totex_t
= - capex - opex - C_grid,t
= - capex - opex - P^imp_t · C^imp_grid,t + P^exp_t · C^exp_grid,t
where the grid cost is the only time-dependent factor, while capex and opex are fixed values for a specific value of P^nom and B.
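A per-step version of this reward can be written as below, with the investment and maintenance costs spread evenly over the horizon as described earlier; the argument names are illustrative.

def step_reward(p_imp, p_exp, c_imp, c_exp, capex_per_step, opex_per_step):
    """r_t = -(capex + opex + C_grid,t), with capex/opex already divided by the horizon."""
    c_grid_t = p_imp * c_imp - p_exp * c_exp
    return -(capex_per_step + opex_per_step + c_grid_t)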
§.§ Methodology
This subsection discusses the fundamental differences between the two methods (i.e., MILP and RL), along with the experimental protocol that was employed to compare the results. As briefly discussed earlier, although both methods aim to solve the same problem, they are fundamentally different.
MILP is a method for solving optimization problems with a linear objective over variables that are partly integer and partly continuous, subject to linear constraints, such as the problem described in Subsection <ref>. The MILP algorithm solves the optimization problem by iteratively adjusting the values of the design and control variables, subject to the constraints, until it finds the optimal solution that maximizes or minimizes the objective function, depending on the problem's goal. This method is applied to the problem described in Subsection <ref> over a one-year time horizon (T=8760). The solution is said to be computed with perfect foresight, meaning that all variables are selected accounting for the future realization of (normally unknown) events in the time series, providing an optimistic upper bound on the true performance of the control and design. Concretely, the MILP problem is here encoded in the Graph-Based Optimization Modelling Language (GBOML) <cit.> paired with the Gurobi solver <cit.>.
In contrast, RL is a stochastic optimization method that learns from experience through trial and error. In this study, we use DEPS <cit.>, an algorithm that optimizes design and control variables in a finite-horizon MDP such as the one described in Subsection <ref>. The agent receives feedback in the form of rewards when it selects a particular design and performs specific actions. The objective of the agent is to maximize the expected cumulative reward, which drives it to learn a design and a control policy. Ideally, as with MILP, the time horizon should be annual, or cover the entire lifetime of the system, taking into account seasonal production and consumption fluctuations and/or equipment aging. However, such extended time horizons are unsuitable for this RL approach. Therefore, to strike a balance between a horizon that is short enough for DEPS and long enough to observe the consequences of decisions on the system, a horizon of T=168 hours, i.e., 7 days, is defined. Additionally, for each simulation, the initial day is sampled uniformly from the year-long data set and the initial state of charge of the battery is also sampled uniformly at random. As the reward is optimized in expectation over all days, the resulting design and control policy is expected to account for seasonality and the other hazards present in the historical data. The DEPS algorithm is trained on a predetermined number of iterations. The PV power and battery capacity values obtained from the last iteration of the algorithm are then taken as the values of the design variables, and the final policy is used for the control.
Unlike MILP, the RL method does not guarantee optimality; therefore, the experimental protocol aims to compare both results to see how far the RL solution is from the optimal one. The experimental protocol is conducted in two distinct scenarios to differentiate the impact of adding the design variables to the joint problem. The first, control-only scenario (CTR) assesses the control variables when the design variables are fixed. The second scenario, considering both the control and design (CTR & DES) problem, allows for flexibility in designing the battery capacity and PV power, the two design variables. To benchmark the performance of both methods in each scenario (i.e., CTR and CTR & DES), the reward and income values are reported. The reward value is computed according to Eq. (<ref>) for the RL method. To estimate the average reward value for the MILP method, all reward values r_t are averaged over time horizons of T=168. Comparing the average cumulative reward value of the MILP method to that of the RL method provides a first benchmark for evaluating the performance of both approaches. However, as shown in Eq. (<ref>), only the grid cost is time-dependent, while the capex and opex depend solely on investment decisions. Therefore, the income value is defined as the average reward value restricted to the grid cost, and can be computed as follows:
Income = ∑_t=0^T-1( - P^imp_t · C^imp_grid,t + P^exp_t · C^exp_grid,t )
Finally, the experiments are performed in two steps. First, to perform a simple comparative study, working on a same finite time horizon T=168, both methods are conducted using data from a single summer week. Second, the data set is extended to include the one-year data set.
§ RESULTS
The energy system presented in Section <ref> is solved using the RL and MILP approaches with parameter values listed in Table <ref>. To differentiate the performance of the DEPS algorithm for control and design aspects, the study is conducted in two distinct scenarios. The first control-only scenario (CTR) assessed the control aspect for fixed design variables, meaning that the PV power and battery capacity are fixed. The second scenario, considering both the control and design (CTR & DES) aspects, allows for flexibility in designing the battery capacity and PV power. The two following subsections describe the results of the study performed in two steps, over the one-week and one-year data set, respectively.
§.§ A one-week toy example
In order to perform a simple comparative study, both CTR and CTR & DES analyses were conducted using data from a single summer week. This enables both methods to be optimized on the same time horizon. This means training the RL algorithm on the same 168 time steps, with an initial day uniformly randomly selected over the week but an initial hour fixed at midnight. Additionally, during the training phase, the battery's initial soc is uniformly sampled so that the RL algorithm is presented with a large variety of scenarios, improving the quality of the learned policy. The results for both the CTR scenario, where the design variables (i.e., the PV power and battery capacity) are fixed, and the CTR & DES scenario, where the PV and battery design variables are optimized in addition to control, are presented in Table <ref>.
§.§.§ RL and MILP optimal objective values are similar in both scenarios but with different designs in the control and design scenario.
Table <ref> shows that in the CTR scenario, the results of the RL approach are similar to those of MILP. This confirms that the DEPS algorithm is able to converge to the optimal solution of this specific problem. In the CTR & DES scenario, the RL design variables differ from the MILP solution, resulting in an unexpectedly higher reward value (-40) than the MILP optimum (-46). A detailed analysis reveals that this unexpectedly high value is due to Eq. (<ref>), which is not imposed in the MDP. In order to validate this analysis, the additional grid cost needed to fulfill Eq. (<ref>) has been computed, taking into account the battery's final soc obtained with the RL approach. As a result, the corrected reward value drops to -67 (instead of -40). This clearly highlights the importance that Eq. (<ref>) plays in terms of the overall objective.
§.§.§ The CTR & DES scenario highlights differences in RL and MILP strategies.
It is seen from Table <ref> that in the second scenario, the optimal design variables of the RL and MILP solutions differ. Finding different values of the design variables shows that the DEPS algorithm is able to identify solutions with comparable reward but using different design strategies. In order to study the sensitivity of the optimal solution, the MILP method was applied with the design variable values obtained with RL imposed, as can be seen in the last column of Table <ref>. This indicates that the RL design solution is suboptimal (-53) compared to the MILP one (-46).
§.§ A one-year case study
Optimal solutions of RL and MILP methods in both scenarios are now computed using data from a full year. The time horizon for the RL algorithm is still equal to T=168, but the starting days are uniformly randomly selected over the year. The RL algorithm is trained over a pre-determined number of 100'000 iterations and the values of the RL design variables considered are the ones from the final iteration. The results are shown in Table <ref>.
§.§.§ The difficulty of generalizing a policy under stochasticity in the model and in the estimation of the expectation
It can be seen from Table <ref> that in both the CTR and CTR & DES scenarios, the optimal reward obtained by the RL method is worse than the MILP optimal rewards. Furthermore, as depicted in Fig. <ref>, due to the significant variations in the input data, the reward and income values exhibit substantial fluctuations across iterations.
During training in the CTR scenario (Fig. <ref>, left), the RL model achieved maximum reward and income values of -180 and -131, respectively, which are significantly better than the final results obtained from both methods in Table <ref>. This could suggest that depending on the set of weeks that are averaged at each iteration, it is possible to obtain a better or worse reward. Therefore, it seems important to work with a sufficiently representative number of weeks throughout the year. A similar observation can be made in the CTR & DES scenario, where the maximum reward and income values achieved were -195 and -148, respectively (Fig. <ref>, right).
§.§.§ The RL method seems to promote lower design variable values
From Table <ref> it is also seen that the RL approach seems to promote solutions involving lower values of design variables. To further investigate the reasons underlying this result, the design variables for the evolution of the battery capacity and PV power, during the training process, are reported in Fig. <ref> in the CTR & DES scenario.
As indicated in Table <ref>, the design variable values can range from 0 to 200. However, it can be seen that higher values are not explored by the RL method. The latter resulted, at the last iteration, in design variables of 44 kWh for battery capacity and 57 kWp for PV power. During the training phase, the maximum values reached were 59 kWh and 90 kWp for battery capacity and PV power, respectively. This maximal explored battery capacity value is lower than the optimal one found by the MILP approach (95 kWh). Thus, the RL solution for the PV power value is expected to be lower. Indeed, the reward value is penalized if the RL agent injects PV production into the grid, since the cost of exported energy into the grid (C_grid^exp) is defined as a negative value in Table <ref>. Consequently, a limited battery capacity intrinsically leads to a lower PV power value.
§ DISCUSSION
This section discusses the main observations that can be drawn from solving a battery-PV system with both RL and MILP approaches using the one-week and one-year data set.
§.§ The promises of RL for joint design and control of energy systems
The motivation for this study was to explore the potential of RL to enable joint control and design of energy systems. Tables <ref> (one-week data) and <ref> (one-year data) show that RL provides a solution that is close to the optimal MILP one. This is encouraging, as it suggests that despite RL relying on a different optimization strategy, it is able to identify a meaningful solution in a simple case. However, the difference in reward value between MILP and RL increases when integrating design variables into the optimization problem, i.e., the CTR & DES scenarios in Tables <ref> and <ref>. Interestingly, the solutions for the design variables are consistently smaller in RL as compared to MILP. Furthermore, from Fig. <ref>, it can be seen that the RL algorithm did not explore higher design variable values in the one-year case study. This observation can be explained by two possibilities. First, DEPS is a local-search method and is thus subject to converging towards local extrema: once the control policy becomes too specialized to the investment parameters (which are themselves under optimization), these parameters end up locally optimal and the algorithm gets stuck. Second, the RL algorithm is subject to many hyperparameters to which the final results are sensitive; it is possible that a different policy architecture, learning rate, or simply more iterations would improve the performance of the method. Supporting the first explanation is the similarity between the reward values of the RL approach (-250) and the MILP approach based on the same investments (-247) for the CTR & DES scenario with T=8760 (Table <ref>). Hence, in this specific energy system case study, it is likely that the RL algorithm did not deem it advantageous to increase the value of the design variables, for one or both of the reasons stated.
Overall, these results show that RL provides realistic control and design strategies. Based on this, RL could be used to define new real-time control strategies that integrate design constraints and are less sensitive to linearization inaccuracies <cit.>. Given the differences in how uncertainties are accounted for by the two methods, RL could also be a better candidate to integrate resources that come with high levels of uncertainty, such as electric mobility.
§.§ Technical challenges and future directions
The main technical challenges encountered in this study are essentially the ones inherent to RL methods. First, various parameters need to be tuned: the neural network architecture for the policy, the batch size for the optimization, the learning rate, and the different scalings, among others. These parameters were tuned by trial and error and would need to be adapted to each new application. For example, the number of layers required in the one-year case study was larger than for the one-week toy example. Second, convergence of the RL method is not guaranteed, and when convergence happens, the solution is not guaranteed to be globally optimal. Third, as illustrated above for the results of Figure <ref>, determining the number of iterations (set to 100,000 for the training phase in all our experiments) is also crucial and might affect RL solutions. Therefore, comparing RL and MILP solutions is not trivial, because it is difficult to compare perfect foresight with policy-based decisions. This should be accounted for when analyzing results from Tables <ref> and <ref>.
From a technical point of view, future work will aim at using more advanced RL methods. In particular, the RL algorithm used here is a modified version of the REINFORCE algorithm <cit.>, which was developed in 1992 and is one of the earliest RL algorithms. Today, more advanced algorithms are available for control problems, which can converge more rapidly or account for infinite time horizons, such as actor-critic algorithms (e.g., PPO <cit.> and GAE <cit.>), but are yet to be adapted to joint design and control. In terms of applications, future work will aim to better evaluate the added value of RL by assessing the long-term performance of real-time sized systems. For example, a control framework could be developed to establish an operation strategy for the MILP-sized system. The framework would then be evaluated using several years of real-time data from the same system used for design. The same exercise would be applied to the trained model of the DEPS algorithm and performance obtained from several years of system control would be benchmarked, and the impact of design decisions could be discussed with more perspective.
§ CONCLUSIONS
In most studies, MILP is used for the design of energy systems and RL for their control. On the one hand, MILP assumes perfect foresight of the future and is difficult to generalize to new data. On the other hand, RL methods have proved to be efficient in other tasks linked to design and control, but not yet on energy systems. In this study, we assessed the potential of DEPS, an RL algorithm shown to be effective for jointly designing and controlling complex systems, for the joint design and control of energy systems.
The energy system studied is a PV-battery system used to meet a real-life electricity demand while minimizing the overall cost. In order to assess the efficiency of the RL method, we compared its outcomes with those obtained with MILP. As these two approaches are fundamentally different, the optimization problem was formulated in two distinct ways: first as a MILP and second as an MDP. The methodology and experimental context were clarified to facilitate the discussion of results and to ensure a fair comparison. Both approaches are discussed in terms of their strengths and weaknesses.
The findings show that RL can produce control strategies that are close to optimal, while using different values of the design variables. This highlights the potential of RL for joint design and control of energy systems, particularly in scenarios where stochasticity is a key factor. However, the study also highlights the difficulty of tuning and using these methods. Moving forward, there are several challenges to address, including the need to ensure that the RL solution converges to a global optimum. Nevertheless, the promising results obtained in this study suggest that RL has the potential to be a valuable tool for jointly designing and controlling energy systems.
|
http://arxiv.org/abs/2307.04194v1 | 20230709145658 | A Search for AGN sources of the IceCube Diffuse Neutrino Flux | [
"K. McDonough",
"K. Hughes",
"D. Smith",
"A. G. Vieregg"
] | astro-ph.HE | [
"astro-ph.HE"
] |
§ INTRODUCTION
The spectrum of IceCube's high-energy neutrino flux, first detected in 2013 <cit.>, is consistent with a power-law extending from tens of TeV to a few PeV, and with flavor ratios consistent with a pion decay origin <cit.>. While searches for point sources contributing to this neutrino flux have largely produced null results, there have been significant excesses from two specific sources. Evidence for the first source, TXS 0506+056, includes a neutrino event coincident in time and space with the flaring blazar detected by Fermi and MAGIC with a significance of 3.5 σ <cit.>. More recently, NGC 1068, a nearby non-blazar active galactic nuclei (AGN), was identified as a potential neutrino source with a significance of 4.2 σ <cit.>. However, these sources alone do not explain the majority of the diffuse neutrino flux, indicating that neutrinos in this energy range are produced by a large number of extragalactic sources <cit.>.
There have been various proposals for possible sources of the diffuse neutrino flux. Gamma-ray bursts (GRBs) <cit.>, star-forming galaxies <cit.>, both blazar and non-blazar AGN <cit.>, and fast radio bursts <cit.> have all been considered. However, various models have been ruled out as the primary source for IceCube's neutrino flux. In particular, there is an observed lack of correlation with blazar AGN, effectively eliminating flaring blazars as the primary source of the diffuse neutrino flux <cit.>. Starburst and other star-forming galaxies are also unable to account for the entirety of this signal without exceeding the measured intensity of the isotropic gamma-ray background <cit.>.
Our work in this paper consists of three analyses conducted using the publicly available data set from IceCube, containing 10 years of muon track neutrino events <cit.>. First, we present an update of an earlier analysis that used IceCube's three-year public data set <cit.> to investigate whether a significant fraction of the neutrino flux could originate from blazar or non-blazar AGN. In this updated analysis, we compare IceCube neutrino events from the 10 year catalog to sources in the Fourth Catalog of AGN detected by the Fermi Large Area Telescope (the 4LAC catalog) <cit.>, and our results are consistent with both the original work and an updated work <cit.>.
We then go on to incorporate neutrino energy information, new in IceCube's 10 year catalog, and perform an energy-dependent analysis with a likelihood formulation that includes the reconstructed energy of the neutrino, focusing on the 4LAC sources in the Northern sky using the approach outlined in <cit.>. We restrict this analysis to the Northern sky because the IceCube data set is dominated by the atmospheric muon background at declination angles less than 10^∘. Without a dedicated cosmic ray simulation to characterize the energy dependence of the muon background, we choose to only look at the Northern sky where this background is negligible. While this analysis only focuses on the Northern sky, the addition of information about the neutrino energy distribution increases the sensitivity of the search overall; previous work found that with energy information included, the total number of events needed for a 5 σ significance is reduced by a factor of two <cit.>. The same study also outlines a process that can be repeated in the Southern sky with a more robust energy reconstruction simulation that characterizes the muon background.
The third analysis presented here is a time-dependent threshold analysis of the MOJAVE XV radio catalog <cit.>. The MOJAVE XV catalog consists of observations in the 15 GHz band of 437 AGN over the course of 20 years, between 1996 and 2016. The fluctuations seen in the radio emission from these galaxies could be correlated with high-energy neutrino emission; p-γ interactions in the sources could create both neutrinos at high energies and photons at lower energies after the initial
high-energy photon is lost through a chain of pair production <cit.>.
Radio AGN in the MOJAVE radio catalog have previously been investigated as a possible source class for the IceCube neutrino flux in a time-independent analysis <cit.>, and our addition of a time-dependent likelihood function decreases the background and thus increases the sensitivity of such a search. The multiple epochs present in the MOJAVE data make it an ideal catalog for investigating a possible correlation between neutrinos and radio emission.
We apply the process outlined in <cit.> for an individual source to the entire MOJAVE catalog.
§ METHOD OVERVIEW
Each analysis presented here uses data from IceCube's data release consisting of muon track events from April 2008 to July 2018 <cit.>. The data is from the 40 string detector in 2008, the 59 string detector in 2009, the 79 string detector in 2010, and the completed 86 string detector for the remaining 7 years. The entire data set contains 1,134,450 muon track events. Each event has a reported direction, angular resolution, time of event, and “energy proxy,” which is related to the energy deposited in the detector. Each year of this data set includes an effective area for the detector as determined by simulation, as a function of declination and neutrino energy.
To test for evidence of a neutrino signal from an individual point source, we follow the approach outlined in <cit.>. The likelihood that a given source results in n_s events, out of a total N recorded in the detector, is given by:
ℒ (n_s) = ∏_i=1^N[ n_s/N S_i (|x_s-x_i|)+( 1 - n_s/N) B_i (sinδ_i ) ],
where S_i and B_i are the signal and background probability distribution functions (PDFs), respectively. The signal PDF consists of individual spatial, energy, and time PDFs:
S_i = P^sig_i(σ_i, x_i|x_s) ·ϵ^sig_i(E_i, δ_i|γ) · T^sig_i,
where P^sig_i, ϵ^sig_i and T^sig_i are the respective spatial, energy, and time components for the signal PDF. The background PDF is defined similarly by:
B_i = P^bkg_i(σ_i, x_i|x_s) ·ϵ^bkg_i(E_i, δ_i|γ) · T^bkg_i,
where P^bkg_i, ϵ^bkg_i, and T^bkg_i are the respective spatial, energy, and time components of the background PDF. For the work presented in this paper, the spatial component is included in all three analyses, while the energy and time components are only used in the energy-dependent analysis and the MOJAVE catalog study, respectively. In the following sections, the PDFs used in each analysis will be described.
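For concreteness, the per-event likelihood of Eq. (<ref>) can be evaluated as in the short Python sketch below, where the signal and background PDF values are assumed to have been pre-computed for each event (with the unused energy and time factors set to one).

import numpy as np

def per_event_signal(spatial, energy=1.0, time=1.0):
    """S_i as the product of its spatial, energy, and time factors."""
    return spatial * energy * time

def log_likelihood(n_s, S, B):
    """ln L(n_s) for arrays S, B of per-event signal/background PDF values."""
    N = len(S)
    return np.sum(np.log((n_s / N) * S + (1.0 - n_s / N) * B))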
§ A SEARCH FOR CORRELATIONS WITH BLAZAR AND NON-BLAZAR AGN USING A SPATIALLY-DEPENDENT LIKELIHOOD
We first search for correlations with blazar and non-blazar AGN using a spatially-dependent likelihood.
We define the spatial PDF as:
P^sig = 1/(2πσ^2_i) e^- |x_s-x_i|^2/(2 σ^2_i)
P^bkg = 𝒫_B (sinδ_i)/(2π),
where x_s is the direction to the source, x_i is the reported direction of the event, and σ_i is the uncertainty associated with each event, given in terms of angular resolution provided by the data set. The function 𝒫_B is equal to the fraction of events in the data set averaged across a band of ± 6^∘ in declination, δ, around a given source. We only consider sources with declination between ± 87^∘ due to the limited amount of solid angle near the poles with which to characterize the background PDF. To find the value of n_s for a given source, the likelihood function is maximized with respect to the free parameter n_s, as outlined in <cit.>.
The statistical significance of a neutrino point source over a background-only hypothesis is calculated using the following test statistic:
2Δℒ(n_s) = 2 [lnℒ(n_s) - lnℒ(0)],
where ℒ(0) is the likelihood for the background-only hypothesis in which there are no signal events from a given direction. From this, the p-value can be calculated by performing an integral over a χ^2 distribution with one degree of freedom.
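In practice, maximizing the likelihood over n_s and converting the test statistic into a pre-trial p-value can be done as sketched below; this spatial-only illustration omits the trials-factor correction discussed later, and the array inputs S and B are the per-event PDF values defined above.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def spatial_signal(ang_sep, sigma):
    """Gaussian spatial signal PDF of the equation above (ang_sep, sigma in radians)."""
    return np.exp(-ang_sep**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def fit_point_source(S, B):
    """Maximize Eq. (<ref>) over n_s; return (n_s_hat, test statistic, pre-trial p-value)."""
    N = len(S)
    def neg_logL(n_s):
        return -np.sum(np.log((n_s / N) * S + (1.0 - n_s / N) * B))
    res = minimize_scalar(neg_logL, bounds=(0.0, N), method="bounded")
    ts = 2.0 * (neg_logL(0.0) - res.fun)      # 2 [ln L(n_s_hat) - ln L(0)]
    return res.x, ts, chi2.sf(ts, df=1)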
We go on to constrain the fraction of IceCube's diffuse neutrino flux that originates from known classes of astrophysical objects using three different weighting hypotheses for the expected neutrino emission from a given source class:
* Gamma-Ray Scaling: The neutrino flux from a given source is proportional to the gamma-ray flux from that source. The gamma-ray flux in the 4LAC catalog is in units of photons between 1-100 GeV per area, per time. This hypothesis would be expected if the gamma-ray emission is primarily produced from hadronic interactions, resulting in a fixed ratio of photons and neutrinos.
* Geometric Scaling: The neutrino flux of a given source is proportional to 1/D_L^2. This hypothesis assumes that the neutrino luminosity of a source is only correlated to distance between the source and the detector. This calculation can only be done with sources with measured redshifts, so some sources are excluded in the analysis under this hypothesis.
* Flat Scaling: The neutrino flux from a given source is uncorrelated to any other information.
These hypotheses remain unchanged from our previous two studies <cit.>.
§.§ Results
In Fig. <ref>, we show an all-sky map of the likelihood of a neutrino point source, √(2Δlnℒ), in terms of right ascension and declination. The scan was performed in steps of 0.2^∘, and at each point we show the value of n_s that maximizes the test statistic defined in Eq. <ref>. In Fig. <ref> we show the distribution of this test statistic across the sky. When excluding the points containing NGC 1068, a 4.2 σ source discovery by IceCube <cit.>, the distribution is consistent with a normal distribution. In Table <ref>, we present the most significant locations in the sky other than NGC 1068, along with the associated likelihoods and pre- and post-trials p-values.
The 2863 sources in the Fermi 4LAC catalog are unchanged from our previous analysis <cit.> and consist of 2796 blazars and 63 non-blazars. The blazars can be further broken down into 658 flat spectrum radio quasars (FSRQs), 1067 BL Lacs, and 1071 “blazars of unknown origin.” We do not consider sources within 3^∘ of the poles.
The process follows that outlined in <cit.>. When considered individually, no source has a statistically significant likelihood after accounting for the appropriate trials factor. When comparing the qualities of the sources with the highest likelihood, we again find no clear features that set these sources apart from other sources in the 4LAC catalog more generally.
Finding no statistically significant individual sources, we place an upper limit on the neutrino flux from these source classes. In Fig. <ref>, the limits from this analysis on the contribution of blazar and non-blazar AGN to the IceCube diffuse neutrino flux are shown. We separately show results for sources identified as “non-variable” by applying cuts to sources with a variability index below 18.48, as reported by the Fermi Collaboration. Of the original 2796 blazars and 63 non-blazars, we identify 1674 and 47 “non-variable” sources, respectively.
In all cases, we apply the appropriate completeness factors to account for the incompleteness of the source catalog <cit.>. For blazars, we apply a completeness factor of 1.4. For non-blazars, we apply a completeness factor of 50.6 (154.6) for non-blazar (non-variable) AGN. We place limits using the three hypotheses outlined in Section <ref>, and for two choices of spectral index: α=2.5, based on the spectral flux measured by IceCube, and α=2.0, the value predicted assuming Fermi acceleration.
From the upper limits presented, we conclude that blazars can produce no more than 13% of the diffuse neutrino flux, which is consistent with our previous analysis <cit.> and a publication earlier this year <cit.>. This is a slightly more restrictive constraint than our previous result, a direct result of the increased livetime of the 10-year IceCube data. Additionally, we find that under the flat scaling hypothesis, non-variable, non-blazar AGN could produce the entirety of the IceCube diffuse neutrino flux, and that in the case of the other two hypotheses, non-variable, non-blazar AGN could still produce a majority of the flux.
§ AN ENERGY-DEPENDENT CONSTRAINT ON THE NEUTRINO FLUX FROM BLAZAR AND NON-BLAZAR AGN
In addition to the spatial-only likelihood analysis described above, we have also performed an analysis using an energy-dependent likelihood on data from the Northern hemisphere alone, which, unlike the Southern hemisphere, is not background-limited by atmospheric muons.
We use the “energy proxy” for the muon track events in the IceCube data set, and adopt the smearing matrices from the IceCube data release and a spectral index of α=2 to create a function that describes the likelihood of obtaining the reconstructed neutrino energy given the assumed spectral index, as outlined in <cit.>. This likelihood is normalized to one and its distribution as a function of energy can be seen in the left panel of Fig. <ref>. To speed up the computational process, only events with an energy proxy greater than 100 GeV were included.
Most of the events in this data set are not astrophysical neutrinos; the Northern hemisphere data set is dominated by atmospheric neutrinos, which are a background for this analysis. Therefore, we use the data set itself to construct the energy-dependent component of the likelihood, dependent on the declination of the point in the sky being tested.
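The construction of the energy-dependent likelihood terms can be sketched as follows; this is a structural illustration only, since the actual analysis also folds the detector effective area into the signal weighting and works per declination band.

import numpy as np

def signal_energy_pdf(smearing, e_true_centers, gamma=2.0):
    """Signal energy-proxy PDF: weight each true-energy row of the smearing
    matrix P(E_reco | E_true) by an E^-gamma spectrum, then normalize."""
    e_true_centers = np.asarray(e_true_centers, float)
    weights = e_true_centers ** (-gamma)          # spectral weight per true-energy bin
    pdf = weights @ smearing                      # shape: (n_reco_bins,)
    return pdf / pdf.sum()

def background_energy_pdf(e_proxy_events, reco_bin_edges):
    """Background energy-proxy PDF estimated from the data themselves in the
    declination band around the tested point (atmospheric-dominated)."""
    hist, _ = np.histogram(e_proxy_events, bins=reco_bin_edges)
    return hist / hist.sum()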
§.§ Results
We create a test statistic based on the new likelihood formulation and find the updated distribution of the test statistic shown in the right panel Fig. <ref>.
This distribution falls along a normal distribution, consistent with no significant source excess. The most significant locations on the sky in this search are presented in Table <ref>. NGC 1068 falls just below the horizon and is thus excluded from this analysis.
We again compare against the Fermi 4LAC catalog with the same source hypotheses outlined in Section <ref>.
Because we only analyze half the sky, the number of sources in our catalog is decreased by roughly 50%; however, we also expect the background to decrease by more than 50%, as the atmospheric events in the Southern hemisphere contribute significantly to the overall background rate at the highest energies.
The limits set by this energy-dependent coincident search in the Northern hemisphere with the 10-year data set are even more restrictive than those of the spatial-only analysis presented in Section <ref>.
We use the same completeness factor as in the spatial-only analysis with an additional factor of 2 that accounts for only looking in the Northern hemisphere.
The resulting 95% upper limits are summarized in Fig. <ref> for each of the neutrino flux scaling hypotheses. At most, non-variable blazar AGN could contribute up to 9% of the neutrino flux. The improved limits for blazar AGN again suggests that the majority of the IceCube neutrino flux is likely originating from a different source class.
The limits set for non-blazar AGN are also slightly improved after folding in the event energy. Even still, non-blazar, non-variable AGN could contribute up to 95% of the IceCube diffuse neutrino flux under the flat scaling hypothesis. This suggests that non-blazar AGN are still a viable candidate source class.
§ A SEARCH FOR CORRELATIONS WITH MOJAVE RADIO AGN
We apply a triggered time-dependent analysis to the MOJAVE catalog, which contains measurements of the radio emission from various AGN over a period of two decades, including one decade of overlap with the IceCube observatory. The time-dependent component of the likelihood T^sig is created by calculating the mean value of the flux density for all sources in the MOJAVE catalog after being scaled appropriately with respect to their luminosity distance from Earth. Then, sources are identified as flaring if their flux density at a given time is greater than 2 σ above the average flux density.
We perform the analysis using 436 radio-loud AGN from the MOJAVE catalog. Due to detector location and limits, the catalog only consists of sources higher than -30^∘ in declination. Each source in the catalog has a minimum of 5 flux density recordings at different times.
Two sources are eliminated from the analysis, MOJAVE source 0415+379 and source 2200+420, as all of their flux density recordings were above the 2 σ threshold and can thus be categorized as always flaring. These two sources could be analyzed independently of this temporal analysis. This methodology was chosen to fairly assess all of the sources in the MOJAVE catalog, which can individually have as few as five data points and as many as 30 over the two decades of data.
Once the threshold on the catalog is set, any flux density point measured under the threshold is given a time likelihood value of 0. Above the threshold, we bin the data into 0.1 year bins, as outlined in <cit.> as the smallest reasonable bin size for Radio AGN. This was selected to ensure that there is minimal likelihood overlap, as larger time bins could result in neutrinos and flaring sources being erroneously correlated.
After binning the flux density, the total flux density in all the bins is normalized to one. This creates the time-dependent portion of the signal PDF, where the likelihood for each muon track neutrino event is assigned based on the trigger time of the IceCube event. This likelihood is used in combination with the spatially-dependent likelihood described in Section <ref>. The delay between the arrival of the neutrino and the photons from the source is on the scale of seconds and can be ignored, as it is absorbed in the 0.1 year binning.
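The construction of this time-dependent signal PDF for a single source can be sketched as follows; the catalog-wide mean and standard deviation of the distance-scaled flux densities are assumed to have been computed beforehand, and the binning choices mirror the description above.

import numpy as np

def flaring_time_pdf(times, fluxes, catalog_mean, catalog_std,
                     t_min, t_max, bin_years=0.1):
    """Normalized time PDF for one source: epochs below the catalog-wide 2-sigma
    threshold get zero weight; flaring epochs are binned in 0.1-yr bins and the
    binned flux density is normalized to one."""
    times, fluxes = np.asarray(times, float), np.asarray(fluxes, float)
    flaring = fluxes > catalog_mean + 2.0 * catalog_std
    edges = np.arange(t_min, t_max + bin_years, bin_years)
    hist, _ = np.histogram(times[flaring], bins=edges, weights=fluxes[flaring])
    total = hist.sum()
    pdf = hist / total if total > 0 else hist
    return pdf, edges   # a neutrino at time t is assigned pdf[np.searchsorted(edges, t) - 1]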
The time-dependent component of the background PDF is independent of seasonal variations <cit.>, but is dependent on the declination and year of the neutrino event. Due to the construction of IceCube during the recording of data, the first three years of data have a higher fraction of atmospheric muons than later years. Thus, there are more points in the data set at <0^∘ declination in the first three years than in the rest of the catalog. For this reason, neutrino events in the first three years of the catalog are given a background PDF dependent on this difference, then divided by the number of 0.1 year time bins covered by the analysis (86).
§.§ Results
In Fig. <ref>, we show the likelihood distribution of the MOJAVE sources tested. After accounting for trials factors, no statistically significant sources are identified.
We also consider each source individually to set independent limits on the expected flux of neutrinos from each source. These limits are shown in Fig. <ref>.
The list of these sources with their reference name, coordinates, and upper limit neutrino flux density can be found in Appendix <ref>. Because we only consider individual sources, rather than source classes, no completeness factor is required.
The MOJAVE catalog includes sources in four different subclasses: quasars, non-blazar AGN, blazar AGN, and Seyfert galaxies. Across each of these subclasses, there is no obvious relationship or trend between source type and neutrino flux limit.
§ CONCLUSION
The analyses presented here utilize spatial, energy-dependent, and time-dependent likelihood functions to compare the IceCube diffuse neutrino flux to the Fermi 4LAC and MOJAVE catalogs. We confirm that blazar AGN are unlikely to explain the entirety of the IceCube neutrino flux, while non-blazar AGN are not ruled out. Adding an energy dependence to the previously described spatial analysis tightened constraints on the flux fraction for all source classes tested. The time-dependent analysis with the MOJAVE catalog sets neutrino flux limits for flaring sources broken down by source class.
While we are unable to definitively confirm a specific origin for the diffuse neutrino flux, we are hopeful that future analyses will utilize increased livetime, more complete source catalogs, and a substantive cosmic ray simulation to improve these results further. Additionally, considering specific theoretical models for source neutrino production as a function of neutrino energy could be included to further constrain possible sources. The time-dependent analysis framework could be used in tandem with other time-dependent catalogs as they become available.
§ APPENDIX: MOJAVE SOURCE LIMITS
Source Number Flux Upper Limit [s^-1cm^-2] MOJAVE Reference Source Class
1 2.12e-11 1145-071 QSO
2 1.82e-11 0106+013 QSO
3 1.75e-11 1329-049 QSO
4 1.34e-11 1149-084 QSO
5 1.29e-11 0414-189 QSO
6 1.21e-11 0203-120 QSO
7 9.37e-12 0847-120 QSO
8 8.75e-12 0752-116 QSO
9 8.72e-12 1236+077 QSO
10 7.51e-12 0859-140 QSO
11 6.58e-12 2345-167 QSO
12 5.9e-12 0027+056 QSO
13 5.69e-12 1047+048 QSO
14 5.61e-12 0607-157 QSO
15 3.96e-12 1908-201 QSO
16 3.92e-12 1504-166 QSO
17 3.48e-12 1127-145 QSO
18 2.85e-12 1341-171 QSO
19 2.8e-12 1622-253 QSO
20 2.42e-12 2328-220 QSO
21 2.36e-12 1920-211 QSO
22 2.28e-12 1244-255 QSO
23 1.93e-12 0142-278 QSO
24 1.78e-12 0256+075 QSO
25 1.76e-12 1034-293 QSO
26 1.75e-12 0742+103 QSO
27 1.65e-12 2331+073 QSO
28 1.58e-12 1551+130 QSO
29 1.54e-12 0119+115 QSO
30 1.18e-12 0229+131 QSO
31 8.45e-13 2136+141 QSO
32 7.11e-13 0710+196 QSO
33 6.03e-13 1441+252 QSO
34 4.15e-13 2113+293 QSO
35 3.84e-13 2223+210 QSO
36 3.34e-13 1324+224 QSO
37 3.33e-13 1049+215 QSO
38 2.96e-13 1611+343 QSO
39 2.96e-13 2209+236 QSO
40 2.92e-13 0716+332 QSO
41 2.76e-13 1633+382 QSO
42 2.51e-13 0109+351 QSO
43 2.36e-13 0650+453 QSO
44 2.35e-13 1638+398 QSO
45 2.31e-13 0110+318 QSO
46 2.22e-13 1417+385 QSO
47 2.12e-13 1758+388 QSO
48 1.94e-13 1015+359 QSO
49 1.76e-13 1030+415 QSO
50 1.66e-13 0917+449 QSO
51 1.42e-13 2005+403 QSO
52 1.36e-13 0954+556 QSO
53 1.18e-13 0859+470 QSO
54 1.18e-13 1716+686 QSO
55 1.1e-13 0850+581 QSO
56 1.09e-13 0804+499 QSO
57 1.02e-13 1602+576 QSO
58 9.27e-14 0646+600 QSO
59 8.73e-14 0224+671 QSO
60 7.52e-14 1642+690 QSO
61 7.43e-14 1532+016 QSO
62 6.92e-14 0113-118 QSO
63 6.85e-14 0837+012 QSO
64 6.8e-14 1128-047 QSO
65 6.44e-14 0906+015 QSO
66 6.43e-14 0805-077 QSO
67 6.4e-14 0015-054 QSO
68 6.37e-14 0420-014 QSO
69 6.32e-14 2258-022 QSO
70 6.26e-14 1406-076 QSO
71 5.45e-14 1118-056 QSO
72 4.51e-14 1741-038 QSO
73 4.31e-14 0440-003 QSO
74 4.15e-14 2320-035 QSO
75 4.02e-14 0336-019 QSO
76 3.15e-14 0539-057 QSO
77 2.26e-14 0743-006 QSO
78 3.35e-11 0636+680 Non-Blazar AGN
79 2.5e-11 0212+735 Non-Blazar AGN
80 2.42e-11 1959+650 Non-Blazar AGN
81 2.36e-11 0954+658 Non-Blazar AGN
82 2.29e-11 1803+784 Non-Blazar AGN
83 1.96e-11 0238+711 Non-Blazar AGN
84 1.82e-11 0454+844 Non-Blazar AGN
85 1.81e-11 1027+749 Non-Blazar AGN
86 1.48e-11 1542+616 Non-Blazar AGN
87 1.31e-11 1557+565 Non-Blazar AGN
88 1.05e-11 1726+455 Non-Blazar AGN
89 9.4e-12 0846+513 Non-Blazar AGN
90 8.49e-12 0128+554 Non-Blazar AGN
91 8.41e-12 0708+506 Non-Blazar AGN
92 8.08e-12 0806+524 Non-Blazar AGN
93 7.58e-12 1656+482 Non-Blazar AGN
94 6.75e-12 1011+496 Non-Blazar AGN
95 6.55e-12 2344+514 Non-Blazar AGN
96 5.9e-12 1738+476 Non-Blazar AGN
97 5.41e-12 1206+416 Non-Blazar AGN
98 4.89e-12 1652+398 Non-Blazar AGN
99 4.86e-12 1722+401 Non-Blazar AGN
100 4.85e-12 0603+476 Non-Blazar AGN
101 4.65e-12 1250+532 Non-Blazar AGN
102 4.39e-12 1823+568 Non-Blazar AGN
103 4.39e-12 0613+570 Non-Blazar AGN
104 4.21e-12 0814+425 Non-Blazar AGN
105 3.98e-12 1418+546 Non-Blazar AGN
106 3.96e-12 0912+297 Non-Blazar AGN
107 3.7e-12 0321+340 Non-Blazar AGN
108 3.43e-12 1308+326 Non-Blazar AGN
109 3.31e-12 0621+446 Non-Blazar AGN
110 2.92e-12 2023+335 Non-Blazar AGN
111 2.79e-12 1040+244 Non-Blazar AGN
112 2.42e-12 1215+303 Non-Blazar AGN
113 2.15e-12 0133+388 Non-Blazar AGN
114 1.95e-12 1404+286 Non-Blazar AGN
115 1.84e-12 0518+211 Non-Blazar AGN
116 1.56e-12 0619+334 Non-Blazar AGN
117 1.36e-12 0235+164 Non-Blazar AGN
118 1.32e-12 2201+171 Non-Blazar AGN
119 1.12e-12 0722+145 Non-Blazar AGN
120 1.05e-12 1741+196 Non-Blazar AGN
121 9.2e-13 1514+197 Non-Blazar AGN
122 9e-13 1717+178 Non-Blazar AGN
123 8.84e-13 2013+163 Non-Blazar AGN
124 7.88e-13 0859+210 Non-Blazar AGN
125 5.86e-13 1228+126 Non-Blazar AGN
126 4.65e-13 0446+112 Non-Blazar AGN
127 4.36e-13 2247-283 Non-Blazar AGN
128 4.22e-13 1940+104 Non-Blazar AGN
129 4.02e-13 0754+100 Non-Blazar AGN
130 2.85e-13 1811+062 Non-Blazar AGN
131 2.67e-13 1038+064 Non-Blazar AGN
132 2.62e-13 0823-223 Non-Blazar AGN
133 1.72e-13 0403-132 Non-Blazar AGN
134 1.6e-13 2128-123 Non-Blazar AGN
135 1.45e-13 0301-243 Non-Blazar AGN
136 1.32e-13 0823+033 Non-Blazar AGN
137 1.28e-13 0903-088 Non-Blazar AGN
138 1.26e-13 0845-068 Non-Blazar AGN
139 8.9e-14 0111+021 Non-Blazar AGN
140 7.62e-14 0808+019 Non-Blazar AGN
141 6.39e-14 0939-077 Non-Blazar AGN
142 6.38e-14 0723-008 Non-Blazar AGN
143 6.19e-14 1514+004 Non-Blazar AGN
144 5.18e-14 0422+004 Non-Blazar AGN
145 4.58e-14 2131-021 Non-Blazar AGN
146 3.12e-11 1849+670 Seyfert_1
147 2.91e-11 1458+718 Seyfert_1
148 2.57e-11 2043+749 Seyfert_1
149 2.56e-11 0106+612 Blazar
150 1.73e-11 2021+614 Seyfert_2
151 1.69e-11 1700+685 Seyfert_1
152 1.45e-11 1030+611 Seyfert_1
153 1.44e-11 0241+622 Seyfert_1
154 1.37e-11 0831+557 Seyfert_2
155 1.29e-11 1637+574 Seyfert_1
156 5.25e-12 1828+487 Seyfert_1
157 5.17e-12 1957+405 Seyfert_2
158 5.16e-12 0309+411 Seyfert_1
159 4.47e-12 0010+405 Seyfert_1
160 3.11e-12 0415+379 Seyfert_1
161 2.46e-12 1607+268 Seyfert_2
162 2.4e-12 1901+319 Seyfert_1
163 2.35e-12 0738+313 Seyfert_1
164 1.2e-12 0202+149 Blazar
165 9.49e-13 1345+125 Seyfert_2
166 6.77e-13 0838+133 Seyfert_1
167 5.37e-13 2141+175 Seyfert_1
168 4.48e-13 1622-297 Blazar
169 3.93e-13 0528+134 Blazar
170 2.88e-13 1502+106 Blazar
171 1.76e-13 0430+052 Seyfert_1
172 1.29e-13 1302-102 Seyfert_1
173 9.81e-14 1510-089 Blazar
174 5.87e-14 1502+036 Seyfert_1
175 4.23e-14 0946+006 Seyfert_1
|
http://arxiv.org/abs/2307.05081v1 | 20230711072918 | Argumentative Segmentation Enhancement for Legal Summarization | [
"Huihui Xu",
"Kevin Ashley"
] | cs.CL | [
"cs.CL"
] |
2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Proceedings of the Sixth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023), June 23, 2023, Braga, Portugal.
Huihui Xu^1,3 ([email protected], corresponding author)
Kevin Ashley^1,2,3 ([email protected])
^1 Intelligent Systems Program, University of Pittsburgh, PA, USA
^2 School of Law, University of Pittsburgh, PA, USA
^3 Learning Research and Development Center, University of Pittsburgh, PA, USA
We use the combination of argumentative zoning <cit.> and a legal argumentative scheme to create legal argumentative segments. Based on the argumentative segmentation, we propose a novel task of classifying argumentative segments of legal case decisions. GPT-3.5 is used to generate summaries based on argumentative segments. In terms of automatic evaluation metrics, our method generates higher quality argumentative summaries while leaving out less relevant context as compared to GPT-4 and non-GPT models.
legal summarization natural language processing neural networks argument mining
Argumentative Segmentation Enhancement for Legal Summarization
August 12, 2023
==============================================================
§ INTRODUCTION
Automatic text summarization is a process of automatically generating shorter texts that convey important information in the original documents <cit.>. There are in general two different approaches for automatic text summarization: extractive summarization and abstractive summarization <cit.>. Extractive summarization can be conceptualized as a sentence classification task, where the algorithm selects important sentences from the original document directly <cit.>. Abstractive summarization can be a more natural way of summarizing in terms of novel words and expressions <cit.>. Authors of <cit.> have experimented with several extractive summarization methods in domains like law and science.
Abstractive summarization is flourishing in recent years because of the rise of large pre-trained language models, like BART <cit.>, T5 <cit.>, and Longformer <cit.>. However, those models still require sizable training datasets to tackle a new task. For example, a language model trained on a Wikipedia text corpus requires fine-tuning on a legal dataset. In addition, unlike news articles, legal case decisions are longer and contain argumentative structures <cit.>.
While some summarization approaches are beginning to take the argumentative structure of a legal case decision into account (e.g., <cit.>), none do so in a zero-shot setting.
In this paper, we conduct a study of summarizing argumentative segments extracted from a legal document using the latest GPT-3.5 model (text-davinci-003) and the GPT-4 <cit.> model. The new GPT-3.5 version is based on InstructGPT <cit.>, which is also trained with reinforcement learning from human feedback (RLHF). Despite its potential for generating high quality summaries, the GPT-3.5 model has a 4,097-token input limitation. This is a disadvantage for summarizing long legal documents. Our work employs a method of cutting the long documents into shorter segments while still preserving argumentative components. GPT-4 is also trained with RLHF like GPT-3.5 but has greater capability. For example, GPT-4 can handle 8,192[There is another version of the model that supports 32,768 tokens.] tokens as input, which doubles GPT-3.5's input limit. Even though GPT-4 can handle longer documents, some legal documents still exceed the input limitation. Moreover, we believe that legal argument mining and argumentative zoning can extract argumentative segments that help models generate better legal summaries.
In order to extract argumentative segments from legal decisions, we propose a novel task for automatically classifying segments as argumentative or non-argumentative segments. This task stems from Argumentative Zoning (AZ) addressed in <cit.>. Teufel et al. define the task of AZ as a sentence level classification with mutually exclusive categories given an annotation scheme. AZ divides a paper into zones on the basis of the content knowledge claim in the corresponding segment <cit.>. We adopt the reasoning behind AZ and divide textual segments into argumentative or non-argumentative segments by examining if any argumentative sentences exist in the corresponding segment. The identified argumentative segments are then fed into the model for generating summaries.
Figure <ref> illustrates the summarization pipeline of our approach. The pipeline comprises three stages. First, the document, a full-text legal opinion, is segmented into several parts. Then, every segment is assigned a label, based on the presence of argumentative sentences, using a classifier. Finally, the predicted argumentative segments are fed into the model, which summarizes each segment; the segment summaries are concatenated to form the final summary for the decision.
Our contributions in this work are: (1) We propose a novel task of predicting argumentative segments in the legal context. (2) We show that our approach for using argumentative segments to guide summarizing is effective. (3) We overcome the token limitation of GPT-3.5 when applied to long document summarization and show a promising result in a zero-shot setting.
§ RELATED WORK
§.§ Argumentative Zoning
Teufel et al. <cit.> first proposed and defined the task of AZ as a sentence level classification with mutually exclusive categories given a certain annotation scheme for scientific papers. The earliest scheme includes seven categories of zones, such as Aim and Background. The annotation scheme is based on the rhetorical roles employed by authors. For example, one can identify the sections of a technical paper that cover the background of the scientific research, among other sections. Later, <cit.> made attempts toward discipline-independent argumentative zoning in two different domains. The idea of AZ is to extract the structure of research components that follows authors' knowledge claims. As a result, there are different AZ schemes for different domains, such as <cit.> for chemistry research articles and <cit.> for physical sciences and engineering and life and health sciences. AZ was later adopted for legal documents in <cit.>. Since AZ classifies sentences into different categories, it is helpful for generating summaries of long documents. <cit.> proposed a tool for AZ annotation and summarization. However, AZ annotation for legal documents can be expensive. We propose to leverage our sentence level annotation for AZ in the context of argumentative segmentation classification.
§.§ Legal Argument Mining
Legal argument mining aims to extract legal argumentative components from legal documents. Most argument mining work consists of three sub-tasks: identifying argumentative units, classifying the roles of the argumentative units, and detecting the relationships between the argumentative units. <cit.> explored the argumentative characteristics of legal documents. <cit.> identified rhetorical roles that sentences play in a legal context. Early work in legal argument mining relies on word patterns and syntactic features <cit.>. Recently, contextual embeddings have been used for legal argument mining <cit.>, such as Sentence-BERT <cit.> and LegalBERT <cit.> embeddings.
<cit.> have proposed a legal argument triples scheme to classify sentences for summarizing legal opinions in terms of Issues, Reasons, and Conclusions.
§.§ Summarization Methods and GPTs
As noted, the automatic summarization methods can be categorized as extractive or abstractive.
Most ML approaches for learning to extract sentences for summarizing documents
are unsupervised <cit.>. They are based on learning sentence importance scores for selecting sentences to form summaries. The development of better sentence representations, like Sentence-BERT, has led to improvements in generating better summaries <cit.>.
Recent research applying sequence-to-sequence neural models to summarization is gaining more attention. <cit.> proposed a pointer generator architecture for generating higher quality abstractive summaries. Transformer-based sequence-to-sequence models, like BART (Bidirectional and Auto-Regressive Transformer), T5 (Text-to-Text Transfer Transformer) and Longformer, have been used for generating abstractive summaries. <cit.> incorporate legal argumentative structures into sequence-to-sequence models to further enhance the quality of summaries. In this work, Longformer Encoder-Decoder (LED), T5 and BART serve as the baselines for our experiments.
The mainstream transformer-based models, however, require a curated training set to adapt to a new domain. The success of prompt-based models provides a new way of solving the domain adaptation problem by learning from a large unlabelled dataset. GPT-3.5 and GPT-4, developed by OpenAI, are examples of prompt-based models.
<cit.> investigated how zero-shot learning with GPT-3 compares with fine-tuned models on the news summarization task. Their results show that GPT-3 summaries are preferred by humans. Our work focuses instead on legal summarization and takes argumentative structure into account. The results show higher performance in terms of automatic evaluation metrics when the argument structure is considered. We further experimented with GPT-4 on legal summarization, since it has a larger context window compared to GPT-3.5. Our findings demonstrate that considering the argumentative structure leads to improved summaries.
§ LEGAL DECISION SUMMARIZATION DATASET
We use the legal decision summarization dataset provided by the Canadian Legal Information Institute (CanLII).[<https://www.canlii.org/en/>] The summaries are prepared by attorneys, members of legal societies, or law students. The basic statistics of the annotated dataset are listed in Table <ref>. The court decisions involve a wide variety of legal claims.
The average length of the court decisions is 4,382 tokens, which exceeds the token limitation of GPT-3.5 (4,097 tokens). This motivates us to explore argumentative segmentation to reduce the input document length.
In prior work, researchers conceptualized a type system for annotating sentences in legal case decisions and summaries, which includes:
Issue – Legal question which a court addressed in the case;
Conclusion – Court's decision for the corresponding issue;
Reason – Sentences that elaborate on why the court reached the Conclusion <cit.>.
Those sentences are referred to as IRC triples. We have accumulated 1,049 annotated legal case decision and summary pairs. <cit.> use the same dataset for legal summarization tasks. <cit.> use the IRC annotations as markers to inform models with argumentative information. <cit.> explored the structure of legal decisions and used the annotated dataset as the basis for domain-specific evaluation of summaries.
In this work, we use the idea of argumentative zoning to further expand the use of IRC triples. The documents in the dataset have already been split at a sentence level. They have not yet been split into paragraphs or annotated in terms of explicit rhetorical zones. We adopt C99 <cit.>, a domain-independent linear text segmentation algorithm, to further segment the legal case decisions on a higher level. This algorithm measures the similarity between all sentence pairs to generate a similarity matrix. The similarity between a pair of sentences x, y is calculated using cosine similarity. Sentence-BERT is used for representing all sentences in the same space before computing the similarity scores. Then we cluster the neighboring sentences into groups based on the similarity scores.
Here, we propose a novel task – argumentative segmentation classification. For each group of sentences, we assign the label “argumentative segment (1)” if the group contains one or more IRC sentences, or “non-argumentative segment (0)” otherwise. This combines the idea of argumentative zoning with semantic segmentation. Table <ref> shows an example of an argumentative segment. As the example shows, segment no. 9 is labeled as an argumentative segment because of the existence of a conclusion sentence.
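To make this concrete, the following Python sketch mirrors the segment construction and labeling just described. The sentence encoder checkpoint, the similarity threshold, and the greedy grouping rule are illustrative assumptions: the grouping is only a simplified stand-in for C99, not a faithful implementation of it.

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def segment_and_label(sentences, irc_flags, sim_threshold=0.45):
    """sentences: list of str; irc_flags[i] is True if sentence i is an Issue/Reason/Conclusion."""
    model = SentenceTransformer("all-MiniLM-L6-v2")        # assumed encoder checkpoint
    emb = model.encode(sentences)                          # (n_sentences, dim) embeddings
    sim = cosine_similarity(emb)                           # full sentence-pair similarity matrix

    # Greedy grouping of neighbouring sentences (simplified stand-in for C99).
    segments, current = [], [0]
    for i in range(1, len(sentences)):
        if sim[i - 1, i] >= sim_threshold:
            current.append(i)
        else:
            segments.append(current)
            current = [i]
    segments.append(current)

    # A segment is argumentative (1) iff it contains at least one IRC sentence.
    labels = [int(any(irc_flags[i] for i in seg)) for seg in segments]
    return segments, labels

In our pipeline the IRC flags come from the manual annotations on the training data, while at inference time the argumentativeness of a segment is predicted by the classifier described in the next section.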
We split our data into 80% training, 10% validation and 10% test datasets.
§ EXPERIMENTS AND RESULTS
§.§ Argumentative Segment Classification
Every legal case decision in our dataset has been split into segments using the C99 algorithm. Table <ref> shows the results of C99 segmentation. From the table, the average number of argumentative segments in a legal decision is 6, while the average number of non-argumentative segments is 59. Thus, argumentative segments are much rarer than non-argumentative segments in legal decisions. We performed a segment-level classification using the data split mentioned above. We conducted experiments with different transformer models, BERT <cit.> and LegalBERT <cit.>, using those models to predict the argumentativeness of segments (i.e., argumentative segment or non-argumentative segment). Figure <ref> shows the results of the binary classification: LegalBERT achieved a better classification result than BERT, with an F1 score of 80.14% compared to 78.24%. As a result, we chose to use LegalBERT's predictions to select input segments for the following summarization task.
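A minimal sketch of such a segment classifier, using the Hugging Face transformers library, is given below. The checkpoint identifier and tokenization settings are assumptions rather than the exact configuration used in our experiments, and the classification head still needs to be fine-tuned on the labeled segments before its predictions are meaningful.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "nlpaueb/legal-bert-base-uncased"   # assumed LegalBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def predict_argumentative(segment_text):
    """Return 1 if the segment is predicted to be argumentative, 0 otherwise."""
    inputs = tokenizer(segment_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())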
§.§ Baselines
We use two different types of baselines for our proposed argumentative segmentation enhanced summarization method. One is non-GPT abstractive summarization models, like LED, T5, and BART. The other is vanilla GPT-3.5 and GPT-4, both developed by OpenAI.
The GPT-3.5 model is an auto-regressive language model that can generate high quality news summaries in a zero-shot setting according to <cit.>. We used the latest version, text-davinci-003, released in November 2022. There is little or no work, however, measuring how well the model performs on legal documents. GPT-4 is a multi-modal large language model that is more capable than GPT-3.5. GPT-4 was released in March 2023, and it is by far the most advanced large language model in the field.
§.§ Prompting for GPT-3.5 and GPT-4
As mentioned, GPT-3.5 and GPT-4 are both prompt-based models. In order to use GPT-3.5 and GPT-4 to summarize a chunk of text, we have to inform the model of the type of task to perform. In our experiment, we add a short text “TL;DR” immediately after the input text. “TL;DR” is an abbreviation for “Too Long; Didn't Read”, and \n denotes a new line. “TL;DR” instructs GPT-3.5 and GPT-4 to summarize the text in fewer words. The example prompt is listed below:
{{Text}} + \n TL;DR
We only need to control the max output tokens and temperature without fine-tuning on our dataset. This is a zero-shot setting because the model does not see any human-written summaries before generating summaries. We noticed that the lengths of generated
summaries are consistent. The average lengths of model-generated summaries are reported in Table <ref>, Table <ref> and Table <ref>.
For the baseline GPT-3.5 model, we chunk the original document into lengths which the model accepts. We tried different lengths, and finally settled on 2,500 tokens to avoid an “over token request limitation error.” The argumentative segmentation enhanced GPT-3.5 model does not have this problem because the argumentative segments are shorter than GPT-3.5's token limitation. It also helps GPT-3.5 to focus on the chunks of text that have important argument-related information. Even though GPT-4 has much longer context length, it still falls short for dealing with some long documents. We set 7,500 tokens as the limit of prompt length to avoid “over token request limitation error.”
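The chunking and prompting logic can be sketched as follows. Here call_gpt is a placeholder for the actual API request, and the tiktoken encoding is an assumed tokenizer whose counts only approximate those enforced by the API; the 2,500-token limit follows the choice described above.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")      # assumed tokenizer; only approximates the API's count

def chunk_by_tokens(text, max_tokens):
    ids = enc.encode(text)
    return [enc.decode(ids[i:i + max_tokens]) for i in range(0, len(ids), max_tokens)]

def summarize_segments(segments, call_gpt, max_prompt_tokens=2500):
    """Summarize each (predicted argumentative) segment and concatenate the results."""
    parts = []
    for seg in segments:
        for chunk in chunk_by_tokens(seg, max_prompt_tokens):   # usually a single chunk per segment
            prompt = chunk + "\nTL;DR"                          # prompt format described above
            parts.append(call_gpt(prompt))                      # placeholder API call (e.g., temperature=0)
    return " ".join(parts)

For GPT-4, the same routine applies with max_prompt_tokens set to 7,500.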
§.§ Results
Rouge-1, Rouge-2, Rouge-L, BLEU, METEOR, and BERTScore are used to measure the performance.
Rouge stands for Recall-oriented Understudy for Gisting Evaluation <cit.>. Rouge-based evaluation metrics examine lexical overlap between generated and reference summaries. BLEU stands for Bilingual Evaluation Understudy <cit.>; it measures word overlap taking order into account. It is often used to measure the quality of machine translation. METEOR <cit.> computes the similarity between generated and reference sentences by mapping unigrams. BERTScore <cit.> uses contextual token embedding to compute similarity scores between generated and reference summaries on a token level.
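For reference, the Rouge and BERTScore components of this evaluation can be computed as in the following sketch; the rouge-score and bert-score packages are an assumed tooling choice rather than necessarily the one behind the reported numbers, and BLEU and METEOR can be obtained analogously (e.g., from nltk).

from rouge_score import rouge_scorer
from bert_score import score as bert_score

def evaluate(generated, reference):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    rouge = {k: v.fmeasure for k, v in scorer.score(reference, generated).items()}
    P, R, F1 = bert_score([generated], [reference], lang="en")
    return {**rouge, "bertscore_f1": float(F1.mean())}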
Table <ref> shows the test set results of the different summarization models in different experimental settings. We first experimented with the non-GPT models in a zero-shot setting, and the results are shown in parentheses. Since the zero-shot performance is not good, we further fine-tuned those models on the training set. We adopt some of the training hyperparameters from <cit.>: learning rates of 2e^-5 for LED and BART and 1e^-4 for T5; training all models for 10 epochs; a maximum input length of 6,144 words for LED and T5 and 1,024 for BART; and a maximum output length of 512 tokens for all models. LED, T5 and BART outperform baseline GPT-3.5 and GPT-4 in terms of automatic evaluation metrics. We also find that LED, T5 and BART produce longer summaries than GPT-3.5 and GPT-4 on average, which might directly contribute to the higher scores across some of the metrics.
Table <ref> shows different combinations of two important control parameters in GPT-3.5: temperature and max_tokens. According to the official website,[<https://platform.openai.com/playground/p/default-tldr-summary?model=text-davinci-003>] temperature ranges between 0 and 1 and controls the randomness of the generated text. With a temperature of 0, GPT-3.5 will select the most deterministic response, while a temperature of 1 is the most random. The max_tokens parameter controls the number of generated tokens. We found that the model generally performs better at a lower temperature. For example, when max_tokens is fixed at 128, the Rouge and BLEU scores decrease when the temperature rises from 0 to 0.8. We also notice that max_tokens affects the performance as well: when the temperature is set to 0, the model with max_tokens of 128 achieves the best scores across all metrics except the BERTScore. We control GPT-4 with the same parameters, and the results are presented in Table <ref>.
Table <ref> shows the comparison between a reference summary and a GPT generated summary when the input does not exceed either the GPT-3.5 or the GPT-4 token limitation. We observe that the generated summaries provide similar information regarding the case facts. However, the argumentative segmentation enhanced GPT-3.5 generated summary provides additional information about the judge's considerations.
Since GPT-3.5 imposes the token request limitation, any input text longer than the limit should be chunked before submitting to the model. In our test dataset, almost half of the cases exceed the token limitation. For these longer opinions, segmenting them using our implementation of argument zoning would seem to be a reasonable step, possibly increasing the likelihood that GPT-3.5's summaries would include useful argument-related information. Table <ref> shows an example of generated summaries when the original case decision substantially exceeds GPT-3.5's token limit. As a result, we need to shorten the document first before feeding it to the model. Meanwhile, GPT-4 can handle the length of the original case decision. We noticed that the baseline GPT-4 summary lacks some necessary details as compared to the argumentative segmentation enhanced approach. The latter included a more detailed presentation of the issue and conclusion and more of the reasons. The result was expected, since the input was shortened for the baseline. Despite the richness of information that a GPT-3.5 summary provides, GPT-4 generates smoother summaries. The main reason is that GPT-4 has a longer context span than GPT-3.5.
In terms of cost, we consider the current pricing scheme for both GPT-3.5 and GPT-4 based on the number of tokens
submitted to and generated by the model. The pricing of GPT-3.5 is set to $0.02 per 1,000 tokens in both prompt and completion, while the pricing for GPT-4 is set to $0.03 per 1,000 tokens in prompt and $0.06 per 1,000 tokens in completion. The cost of using GPT-3.5 with argumentative segmentation to generate a summary is approximately $0.19 on average. In comparison, the average cost for using GPT-4 is about $1.31. This means that GPT-4 is roughly seven times more expensive than GPT-3.5 for the summarization task.
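The per-summary costs follow directly from these token prices; a small helper makes the arithmetic explicit (prices as quoted above, at the time of the experiments).

def api_cost(prompt_tokens, completion_tokens, model="gpt-3.5"):
    """Dollar cost for one request, using the per-1,000-token prices quoted in the text."""
    if model == "gpt-3.5":
        return 0.02 * (prompt_tokens + completion_tokens) / 1000.0
    # gpt-4: prompt and completion tokens are priced differently
    return (0.03 * prompt_tokens + 0.06 * completion_tokens) / 1000.0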
We also examined some of the summaries generated by the non-GPT models. Their quality is clearly lower than that of the GPT-generated summaries. One possible reason is that large language models are trained on a much larger corpus and have more extensive model architectures, which makes them better few-shot or even zero-shot learners <cit.>.
§ LIMITATIONS
In this study, we focus on the effect of using argumentative segmentation on legal summarization. While we observed improvements in the model performance of legal summarization with argumentative segmentation, we also observed some coherency issues in the generated summaries. For example, “Yes, I agree with Mr. Stobie” interrupts the information flow of the summary in Table <ref>. Thus, a systematic human evaluation of generated summaries is needed to further examine the performance of the models and address these coherency issues.
Furthermore, reproducing our results may be challenging due to the proprietary nature of the OpenAI GPT models used in our experiments. In particular, the different combinations of control parameters employed in our experiments further reduce reproducibility. Additionally, any updates or changes to the GPT models by OpenAI may result in changes to performance and results, so it is crucial to develop methods to increase the reproducibility of the results.
§ CONCLUSION AND FUTURE WORK
We have proposed a novel task of extracting argumentative segments that include the main points of legal case decisions. We further proposed to utilize these argumentative segments to guide a summarizer. Our experiments with GPT-3.5, GPT-4 and other models showed that the argumentative segmentation enhanced method can improve the automatic evaluation scores of generated summaries. This method also overcomes the request token limitation imposed by GPT-3.5. Our findings reveal a boost in performance across all types of automatic evaluation scores using the predicted argumentative segments. Additionally, we observed that GPT-4 tends to produce more coherent summaries compared to GPT-3.5.
For future work, we will further explore methods to ensure more reliable performance of the proprietary models. Furthermore, we plan to investigate alternative prompt engineering techniques for the summarization task. Due to the nature of generative models, a systematic human evaluation of the generated summaries is much needed in the future.
This work has been supported by grants from the Autonomy through Cyberjustice Technologies Research Partnership at the University of Montreal Cyberjustice Laboratory and
the National Science Foundation, grant no. 2040490, FAI: Using AI to Increase Fairness by Improving Access to Justice. The Canadian Legal Information Institute provided
the corpus of paired legal cases and summaries. This work was supported in part by the University of Pittsburgh Center for Research Computing through the resources provided. Specifically, this work used the H2P cluster, which is supported by NSF award number OAC-2117681.
|
http://arxiv.org/abs/2307.04922v1 | 20230710220908 | Programmable XY-type couplings through parallel spin-dependent forces on the same trapped ion motional modes | [
"Nikhil Kotibhaskar",
"Chung-You Shih",
"Sainath Motlakunta",
"Anthony Vogliano",
"Lewis Hahn",
"Yu-Ting Chen",
"Rajibul Islam"
] | quant-ph | [
"quant-ph",
"cond-mat.other",
"cond-mat.soft",
"cond-mat.supr-con",
"physics.atom-ph"
] |
Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
We propose and experimentally demonstrate an analog scheme for generating XY-type (J_ij^x σ_x^i σ_x^j + J_ij^y σ_y^i σ_y^j) Hamiltonians on trapped ion spins with independent control over the J_ij^x and J_ij^y terms.
The Ising-type interactions σ_x^i σ_x^j and σ_y^i σ_y^j are simultaneously generated by employing two spin-dependent forces operating in parallel on the same set of normal modes.
We analytically calculate the region of validity of this scheme, and provide numerical and experimental validation with two trapped ions.
This scheme inherits the programmability and scalability of the Ising-type interactions with trapped ions that have been explored in numerous quantum simulation experiments.
Our approach extends the capabilities of existing trapped ion quantum simulators to access a large class of spin Hamiltonians relevant for exploring exotic quantum phases such as superfluidity and spin liquids.
Programmable XY-type couplings through parallel spin-dependent forces on the same trapped ion motional modes
Rajibul Islam
August 12, 2023
=============================================================================================================
Trapped ions are ideal quantum simulators of interacting spin systems <cit.> due to their tunable long-range interactions <cit.>, long coherence times <cit.> and high fidelity quantum state preparation and measurement <cit.>.
Interacting spin models illuminate a large variety of many-body phenomena such as quantum magnetism and phase transitions, spin liquids, superconductivity, and superfluidity <cit.>.
Spins encoded in the internal degrees of freedom (such as hyperfine states) of individual trapped ions can interact via off-resonant excitation of their collective phonon modes through laser-driven spin-dependent dipole forces.
By varying the laser parameters, long-range and tunable Ising-type interactions have been experimentally demonstrated and used in a large number of quantum simulations to explore both equilibrium and dynamic phenomena.
Further, it has been proposed that the inherent long-range Coulomb interactions can be used to realize an arbitrarily programmable, all-to-all connected Ising spin system <cit.>.
Existing proposals to simulate models that capture symmetries and phenomena beyond the reach of Ising models, such as XY and Heisenberg models, are either experimentally challenging or unfeasible, or limited in their applicability and tunability.
For example, non-local propagation of spin correlations <cit.> and many-body localization <cit.> were studied, on an effective XY-Hamiltonian, by applying a large transverse magnetic field to the Ising Hamiltonian.
The transverse field restricts the Hilbert space contributing to the spin dynamics and results in an effective XY Hamiltonian only at discrete times <cit.>.
The above approach also breaks down for long evolution times <cit.>, and does not allow the simulation of the anisotropic XY model (i.e., interactions of the form J^x_i,jσ_x^i σ_x^j + J^y_i,jσ_y^i σ_y^j with J^x_i,j≠ J^y_i,j, where σ_x(y)^i are the usual spin-1/2 Pauli matrices).
Alternate proposals make use of orthogonal sets <cit.> of motional modes, with each set mediating an independent Ising term (such as σ_x^i σ_x^j or σ_y^i σ_y^j).
However, exciting multiple sets of orthogonal normal modes require additional laser beams, electronic controls, and complex optical design beyond the scope of current experimental setups.
Further, this scheme may not produce the same form and range of interactions along different spin axes <cit.>, limiting the usefulness of such simulations.
Here, we demonstrate the creation of XY-type interactions, including the anisotropic XY-model, that simulates the equivalent spin dynamics in continuous time (limited by the coherence time of the system), and is readily implemented in existing experimental setups.
Our key insight is that the same set of motional modes can mediate both and interactions and the error in the evolution can be made negligible with the proper choice of the applied spin-dependent forces.
We experimentally demonstrate the dynamics of two ion spins under the simulated XY Hamiltonian and numerically show that the scheme is easily scaled for larger systems.
Spin-spin interactions can be induced <cit.> between ions by employing spin-dependent forces (SDF) that off-resonantly excite their collective vibrational modes.
Such SDFs can be applied using Raman transitions from lasers that are far detuned from unwanted atomic excitation.
The Mølmer-Sørensen scheme <cit.> for generating Ising-type interactions uses an SDF at a frequency μ that is generated by simultaneously applying Raman `beatnote' frequencies ω_hf±μ (the so-called `blue' and `red' sidebands) <cit.>.
Here, ω_hf is the frequency splitting between the two spin states.
Under the rotating wave (ω_hf≫μ) and the Lamb-Dicke approximations <cit.>,
the spin-phonon Hamiltonian is:
H = ∑_iΩ_i cos( μ t + ψ_i ) ( δk⃗·x⃗_i ) σ_θ_i^i
where, δk⃗·x⃗_i = ∑_m η_im ( â_m e^-i ω_m t + â_m^† e^ i ω_m t ).
Here, Ω_i is the Rabi frequency at the i^th ion position, â_m and â_m^† are the phonon annihilation and creation operators for the m^th motional mode at frequency ω_m.
The Lamb-Dicke parameters η_im = b_im |δk⃗| √(ħ/2Mω_m) include the normal mode transformation matrix element b_im of the i^th ion and m^th normal mode <cit.>, where ∑_i |b_im|^2 = ∑_m |b_im|^2 = 1 and M is the mass of the ion.
Here, σ_θ_i^i = σ_x^i cosθ_i + σ_y^i sinθ_i.
The spin phase θ_i and the motional phase ψ_i are determined from the relative phases of the red and blue sidebands <cit.>.
The evolution operator for this Hamiltonian can be found using the Magnus expansion, which terminates after the second term,
U(τ) = exp(-i ∫_0^τdt H(t) - 1/2∫_0^τ dt_1 ∫_0^t_1dt_2 [ H(t_1), H(t_2) ] )
= exp( ∑_i ϕ̂_i(τ)σ_θ_i^i
+ i ∑_i<jχ_i,j(τ) σ_θ_i^i σ_θ_j^j ).
In the `slow' regime (|μ - ω_m| ≫η_imΩ_i), ϕ̂_̂î(τ) is negligible <cit.> and χ_i,j(τ) in Eq. (<ref>) is dominated by a `secular' term, which grows linearly with t, giving rise to an effective Hamiltonian,
H_eff = ħ∑_i<j J_i,jσ_θ_i^i σ_θ_j^j
where, the Ising coupling,
J_i,j = Ω_i Ω_j ħ ( δk⃗ )^2/2M∑_m b_im b_jm/μ^2 - ω_m^2.
Note that, the unitary evolution in Eq. (<ref>) will, in general, include additional AC Stark shifts such as from off-resonant excitation of the `carrier' spin transition from the SDFs (which we did not include in Eq. (<ref>) and (<ref>), as in Refs. <cit.> for simplicity, but they must be accounted for in experiments).
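As a rough numerical illustration of the expressions above, the following Python sketch evaluates the Lamb-Dicke parameters η_im, the slow-regime margin |μ - ω_m|/(η_imΩ_i), and the coupling matrix J_i,j for a two-ion chain. The mode frequencies, detuning and Rabi frequency follow the values quoted in the experimental section below, while the ion mass, |δk⃗| and the normal-mode matrix are assumed placeholders.

import numpy as np

hbar = 1.054571817e-34
M = 171 * 1.66054e-27                                   # assumed ion mass [kg]
dk = np.sqrt(2) * 2 * np.pi / 355e-9                    # assumed |delta k| [1/m]
omega_m = 2 * np.pi * np.array([1.135e6, 1.117e6])      # COM and tilt mode frequencies [rad/s]
b = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # assumed normal-mode matrix b_im (2 ions)
Omega = 2 * np.pi * np.array([15e3, 15e3])              # Rabi frequencies [rad/s]
mu = omega_m[1] - 2 * np.pi * 8e3                       # SDF frequency, 8 kHz below the tilt mode

eta = b * dk * np.sqrt(hbar / (2 * M * omega_m))        # eta_im = b_im |dk| sqrt(hbar / (2 M omega_m))
margin = np.min(np.abs(mu - omega_m) / (np.abs(eta) * Omega[:, None]))   # >> 1 in the slow regime

# J_ij = Omega_i Omega_j hbar dk^2 / (2M) * sum_m b_im b_jm / (mu^2 - omega_m^2)
J = np.outer(Omega, Omega) * hbar * dk**2 / (2 * M) * ((b / (mu**2 - omega_m**2)) @ b.T)
print(margin, J[0, 1] / (2 * np.pi))                    # slow-regime margin and J_12 in Hz

With these placeholder inputs the nearest-neighbor coupling evaluates to roughly 2π× 80 Hz, of the same order as the couplings reported in the experiment below.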
Multiple SDFs operating in parallel have been theoretically suggested <cit.> for creating more complex interactions.
Recent experiments have used SDFs along two orthogonal modes to generate parallel quantum gates <cit.>.
In our protocol, we apply two SDFs at frequencies μ_1 and μ_2 (Fig. 1(a)), with both of them exciting (off-resonantly) the same motional modes.
We choose the red and blue sideband phases to generate a spin phase of 0 (corresponding to σ_θ_i = σ_x in Eq. (<ref>) ) for the first SDF and π/2 (corresponding to σ_θ_i = σ_y) for the second SDF.
The resulting spin-phonon Hamiltonian becomes,
H = H_x + H_y,
where,
H_x = ∑_i,mη_imΩ_i^x cos( μ_1 t + ψ^x_i ) ( â_m e^-i ω_m t + h.c.) σ_x^i,
H_y = ∑_i,mη_imΩ_i^y cos( μ_2 t + ψ^y_i ) ( â_m e^-i ω_m t + h.c.) σ_y^i.
Here, Ω^x_i and Ω^y_i are Rabi frequencies, and ψ_i^x, ψ_i^y are motional phases corresponding to the 2 SDFs respectively.
Again, if we operate each of the two forces in the slow regime (|μ_1 - ω_m| ≫η_imΩ_i^x, |μ_2 - ω_m| ≫η_imΩ_i^y ), the first term in the Magnus expansion is negligible <cit.> and the evolution operator arising from the second term in the Magnus expansion becomes,
U(τ) = exp( - 1/2∫_0^τ dt_1 ∫_0^t_1dt_2 [ H(t_1), H(t_2) ] )
= exp( -iτ∑_i<j J_ij^x σ_x^i σ_x^j -iτ∑_i<jJ_ij^y σ_y^i σ_y^j
+ ∑_i<jΛ_ij(τ)σ_x^i σ_y^j + ∑_i ζ̂_i(τ) σ_z^i ).
Where,
J_ij^x = Ω_i^x Ω_j^x ħ ( δk⃗ )^2/2M∑_m b_im b_jm/μ_1^2 - ω^2_m,
J_ij^y = Ω_i^y Ω_j^y ħ ( δk)^2/2M∑_m b_im b_jm/μ_2^2 - ω^2_m.
The first two terms in the exponent in Eq. (<ref>) come from [ H_x(y)(t_1),H_x(y)(t_2) ], and result in the desired spin-spin interactions.
The last two terms come from the cross terms, i.e. [ H_x(y)(t_1),H_y(x)(t_2) ], and lead to an undesirable spin-phonon coupling.
For μ_1=μ_2, the two SDFs combine into a single SDF with a different spin phase, as can be seen from Eq. (<ref>), and therefore the resulting effective spin-spin Hamiltonian is of Ising type (σ_θ^i σ_θ^j) (Fig. <ref>(b)).
For μ_1 ≠μ_2, there is no secular term in Λ_ij(τ) and ζ̂_i(τ), however, they may have non-trivial oscillatory terms (see appendix).
As |μ_1 - μ_2| is increased beyond zero, the contribution of the oscillatory terms diminishes (Fig. <ref>(c)), and the evolution becomes consistent with an effective XY-type Hamiltonian (Fig. 1(d)),
H_eff = ħ∑_i<j J_ij^x σ_x^i σ_x^j + ħ∑_i<j J_ij^y σ_y^i σ_y^j,
when,
|μ_1-μ_2| ≫max_i,j(| J_ij^x| ), |μ_1-μ_2| ≫max_i,j( |J_ij^y| ).
In the following section, we provide experimental validation of the above analysis.
The experiments are performed on two ^171Yb^+ ions in a four-rod Paul trap with trap frequencies
ω_X ≈ 2π× 1.135 MHz, ω_Y ≈ 2π× 0.920 MHz and ω_Z ≈ 2π× 201 kHz.
The spins are encoded in the two hyperfine `clock' states, S_1/2|F=0,m_F=0⟩ (|↓⟩) and S_1/2|F=1,m_F=0⟩ (|↑⟩), of the ions, separated in energy by the hyperfine splitting ω_hf/2π = 12.643 GHz.
Here, F and m_F are quantum numbers representing the total atomic angular momentum and its projection along a weak magnetic field of around 3.5 G.
We perform coherent operations on the ions, through 2-photon Raman transitions with a 355 nm pulsed laser with a repetition rate of 80 MHz <cit.>.
The wave-vector difference of the two Raman beams, δk⃗, is oriented such that we can excite phonon modes along both transverse trap axes, X' and Y' (Fig. <ref>(a)) .
We modulate the frequency of the Raman 1 beam with four harmonic tones to create four beatnotes driving SDFs at two frequencies μ_1 = ω^X'_TILT - 2π× 8 kHz, μ_2 = ω^X'_TILT - 2π× 5 kHz respectively (Fig. <ref>(b)).
Here, ω^X'_COM=ω_X and ω^X'_TILT= 2π× 1.117 MHz are the frequencies of the COM and TILT modes in the X' direction respectively.
The SDF detunings are chosen to be smaller than the separation between the modes to minimize the contribution from all modes except the X' TILT mode.
The experimental sequence is as follows.
We apply 1.5 ms of Doppler cooling and 8 ms of Raman sideband cooling to cool all transverse modes to n < 1, so as to be in the Lamb-Dicke regime, followed by global optical pumping for 20 μs to initialize the spins in the |↓↓⟩ state.
An optional π-pulse driven by microwave radiation (at frequency ω_hf) and a site-selective optical pumping pulse (that maintains the coherence of the neighboring ion with ∼ 99.9% fidelity <cit.>) can alternatively prepare the initial states |↑↑⟩ and |↑↓⟩.
The spin-dependent forces are then applied (Eq. (<ref>)) for a pulse duration τ.
We finally measure the spin states by state-dependent fluorescence on a photo-multiplier tube (PMT) for 1.5 ms.
We calibrate the fluorescence counts by preparing the spins in |↑↑⟩ with a microwave π-pulse in a separate experiment and obtain approximately 80 PMT counts for this state.
Since global fluorescence measurements cannot distinguish between the |↑↓⟩ and |↓↑⟩ states, we apply a local optical pumping pulse on the first ion just before the measurement to convert it to a single ion measurement, when necessary.
Figure <ref>(c) shows the spin dynamics when initialized in the |↓↓⟩ state.
As in the numerical simulations Fig. <ref>, we tune the Rabi frequencies to achieve J^x_12≈ J^y_12 to get an effective XY model when the interactions are mediated simultaneously.
We set the Rabi frequencies, Ω^x_i = 2 π× 15 kHz and Ω^y_i = 2 π× 11.5 kHz (approximately equal between the two ions).
With H_x and H_y applied separately, we observe oscillations between |↓↓⟩ and |↑↑⟩, as expected.
We estimate J^x_12 = 2π×77(2) Hz and J^y_12 = 2π×80(3) Hz.
When applied simultaneously, we find no observable oscillations, which is the signature of the XY Hamiltonian, since ⟨↑↑| (σ_x^1 σ_x^2 + σ_y^1 σ_y^2 ) |↓↓⟩ = 0.
The slow increase in the fluorescence counts for the XY Hamiltonian is likely due to the decoherence in the system and slow drifts in the intensity of the laser and the trap frequency (estimated to contribute <15% drift in J^x_12 or J^y_12) over the duration of the data run.
To further validate that the σ_x^1σ_x^2 and the σ_y^1σ_y^2 couplings are present simultaneously, we initialize the ions in the |↑↓⟩ state and repeat the experiment.
Here, we expect oscillations between the |↑↓⟩ and the |↓↑⟩ states, at frequency J^x_12 + J^y_12 (see appendix).
With a global detection, we expect the fluorescence counts to be flat as observed in Fig. <ref>(d).
However, with an individual detection on ion 1 (i.e. by pumping one ion before detection), we observe oscillations in the fluorescence counts, as expected from oscillations between |↑↓⟩ and |↓↑⟩.
From the data in Fig. <ref>(d), we observe oscillations at 2π× 178(2) Hz, which is within the expected 10% fluctuations of the extracted J^x_12 + J^y_12 from previous experiments.
The inherent full-connectivity of trapped ions allows for the scheme to be readily scalable to a large number of ions like in the case of the tunable Ising interactions <cit.>.
For example, an interaction profile with a power law decay that has been widely studied for quantum Ising models <cit.> can be extended to XY interactions.
To achieve this interaction profile, first the Rabi frequencies (Ω_i^x) and the detuning (μ_1) can be chosen to obey an approximate power law in the coupling matrix J^x in Eq. (<ref>).
Then, μ_2 can be chosen to satisfy Eq. (<ref>) while keeping it close to μ_1, to maintain the same form for J^x and J^y.
Further, if J^x=J^y is desired, then the Ω_i^y can be calculated by applying a global scaling to the Ω_i^x to compensate for the unequal μ_1 and μ_2.
Figure 3 shows that the interaction profiles along x and y-directions match to better than 99% with the global scaling of Rabi frequencies, even when |μ_1-μ_2| is chosen to be 30 times higher than max(J^x).
We find that this approach of scaling the Rabi frequencies works well whenever μ_1(μ_2) is parked close to a motional mode since the contribution to the J_ij^x (J_ij^y) is dominated by a single motional mode.
Note that, Eq. (<ref>) is weaker than the constraint for applying each of the forces in the slow regime (|μ_1(2) - ω_m| ≫η_imΩ_i^x(y) ).
This is because, in the slow regime, individual matrix elements J_ij^x (J_ij^y) are an order of magnitude smaller than |μ_1(2) - ω_m| and hence leave enough freedom to satisfy Eq. (<ref>).
Thus, by simultaneously applying a pair of SDFs near each motional mode, the full spin-spin interaction profile can be engineered arbitrarily <cit.>.
It should be noted, however, that the calculation of the coupling matrix should take into account any AC Stark shift induced from the spin-dependent forces when the relative scale of the couplings needs to be precisely matched.
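The bookkeeping described in this paragraph can be sketched as follows; all physical inputs are illustrative placeholders (ion mass and |δk⃗| assumed), AC Stark shifts are ignored, and for chains longer than two ions the single global rescaling matches the two profiles only approximately.

import numpy as np

hbar, M = 1.054571817e-34, 171 * 1.66054e-27            # assumed ion mass [kg]
dk = np.sqrt(2) * 2 * np.pi / 355e-9                    # assumed |delta k| [1/m]
omega_m = 2 * np.pi * np.array([1.135e6, 1.117e6])      # placeholder mode frequencies [rad/s]
b = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # two-ion normal-mode matrix

def couplings(Omega, mu):
    """J_ij = Omega_i Omega_j hbar dk^2/(2M) sum_m b_im b_jm / (mu^2 - omega_m^2)."""
    J = np.outer(Omega, Omega) * hbar * dk**2 / (2 * M) * ((b / (mu**2 - omega_m**2)) @ b.T)
    np.fill_diagonal(J, 0.0)                            # keep only pairwise couplings
    return J

Omega_x = 2 * np.pi * np.array([15e3, 15e3])            # placeholder Rabi frequencies [rad/s]
mu1 = omega_m[1] - 2 * np.pi * 8e3
mu2 = omega_m[1] - 2 * np.pi * 5e3                      # |mu1 - mu2| >> max|J^x|, max|J^y|
Jx = couplings(Omega_x, mu1)
scale = np.sqrt(np.abs(Jx[0, 1] / couplings(Omega_x, mu2)[0, 1]))
Omega_y = scale * Omega_x                               # one global rescaling so that J^y ~ J^x
mismatch = np.max(np.abs(couplings(Omega_y, mu2) - Jx)) / np.max(np.abs(Jx))
print(mismatch)                                         # exactly zero for two ions, small for longer chains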
In summary, we have demonstrated tunable long-range XY-type couplings (J_ij^x σ_x^i σ_x^j + J_ij^y σ_y^i σ_y^j) by the parallel application of two spin-dependent forces on the same motional modes.
Our approach allows for analog quantum simulations of the XY and anisotropic XY models, as the effective Hamiltonian (Eq. (<ref>)) is valid in continuous time.
This opens possibilities to study ground state order of frustrated XY-type models, in principle on programmable lattice geometries <cit.>, and investigate exotic quantum phases, such as spin liquids <cit.>.
Further, evolution under the XY Hamiltonian can be interspersed with single spin quantum gates in analog-digital hybrid quantum simulations <cit.> to investigate dynamical phase transitions, Hamiltonian quenches, and quantum transport.
Our demonstration of parallel SDFs on the same motional modes can further be extended to simulate XYZ-type Hamiltonians by adding a σ_z-SDF (readily implemented using the light-shift gate schemes <cit.>).
We thank Jingwen Zhu for helping us on the experimental setup. We acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada Discovery (RGPIN-2018-05250) program, Ontario Early Researcher Award, Canada First Research Excellence Fund (CFREF), New Frontiers in Research Fund (NFRF), University of Waterloo, and Innovation, Science and Economic Development Canada (ISED).
§ DISTINGUISHING BETWEEN XX AND XY HAMILTONIAN (2 IONS)
Consider the Hamiltonian H = J^x_12σ_x^1 σ_x^2 + J^y_12σ_y^1 σ_y^2 and the evolution operator U(τ) = exp(-iHτ).
It is easy to show that:
* U |↑↑⟩ = cos(τ(J^x_12 - J^y_12) ) |↑↑⟩ - i sin(τ(J^x_12 - J^y_12) ) |↓↓⟩
* U |↓↓⟩ = cos(τ(J^x_12 - J^y_12) ) |↓↓⟩ - i sin(τ(J^x_12 - J^y_12) ) |↑↑⟩
* U |↑↓⟩ = cos(τ(J^x_12 + J^y_12) ) |↑↓⟩ - i sin(τ(J^x_12 + J^y_12) ) |↓↑⟩
* U |↓↑⟩ = cos(τ(J^x_12 + J^y_12) ) |↓↑⟩ - i sin(τ(J^x_12 + J^y_12) ) |↑↓⟩
For the case of the XY Hamiltonian, we have that J^x_12 = J^y_12.
When initialized in |↑↑⟩ or |↓↓⟩, we do not expect to see oscillations between (|↑↑⟩, |↓↓⟩).
But when initialized in |↑↓⟩, we expect to see oscillations between (|↑↓⟩, |↓↑⟩) at a frequency of (J^x_12 + J^y_12).
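These relations are straightforward to verify numerically; the sketch below builds the two-spin Hamiltonian explicitly and checks the oscillation frequencies (J^x_12, J^y_12 and τ are arbitrary illustrative values).

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
up, dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

Jx, Jy, tau = 2 * np.pi * 77.0, 2 * np.pi * 80.0, 1e-3            # rad/s, s
H = Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy)
U = expm(-1j * H * tau)

uu, dd = np.kron(up, up), np.kron(dn, dn)
ud, du = np.kron(up, dn), np.kron(dn, up)
assert np.isclose(uu.conj() @ U @ uu, np.cos((Jx - Jy) * tau))     # |uu> -> |uu| amplitude
assert np.isclose(dd.conj() @ U @ uu, -1j * np.sin((Jx - Jy) * tau))
assert np.isclose(ud.conj() @ U @ ud, np.cos((Jx + Jy) * tau))     # |ud> -> |ud| amplitude
assert np.isclose(du.conj() @ U @ ud, -1j * np.sin((Jx + Jy) * tau))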
§ DETAILED DERIVATION OF CONSTRAINT IN EQ. (<REF>)
After RWA to Eq. (<ref>) and with δ_xm = ω_m - μ_1 and δ_ym = ω_m - μ_2 we have,
H_x = ∑_i,mη_imΩ_i^x ( a_m e^-i (δ_xm t + ψ_x) + h.c.) σ_x^i,
H_y = ∑_i,mη_imΩ_i^y ( a_m e^-i (δ_ym t + ψ_y) + h.c.) σ_y^i
The first two terms in the exponent of (<ref>) come from calculations already described in <cit.>.
The last two terms come from the cross commutators,
[H_x(t_1),H_y(t_2)] + [H_y(t_1),H_x(t_2)] = [∑_i,mη_imΩ_i^x ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.) σ_x^i,
∑_j,nη_jnΩ_j^y ( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) σ_y^i ]
+ [∑_i,mη_imΩ_i^y ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.) σ_y^i,
∑_j,nη_jnΩ_j^x ( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) σ_x^i ]
= ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.)
( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) [σ_x^i,σ_y^i]
+ ∑_i,m,nη_imη_inΩ_i^y Ω_i^x ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.)
( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) [σ_y^i,σ_x^i]
+ ∑_i,j,mη_imη_jmΩ_i^x Ω_j^y [( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.), ( a_m e^-i (δ_ym t_2 + ψ_y) + h.c.)] σ_x^i σ_y^j
+ ∑_i,j,mη_imη_jmΩ_i^y Ω_j^x [( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.), ( a_m e^-i (δ_xm t_2 + ψ_x) + h.c.)] σ_y^i σ_x^j
= 2 i ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_xm t_1 + ψ_x) + h.c.)
( a_n e^-i (δ_yn t_2 + ψ_y) + h.c.) σ_z^i
- 2 i ∑_i,m,nη_imη_inΩ_i^x Ω_i^y ( a_m e^-i (δ_ym t_1 + ψ_y) + h.c.)
( a_n e^-i (δ_xn t_2 + ψ_x) + h.c.) σ_z^i
+ 2i ∑_i,j,mη_imη_jmΩ_i^x Ω_j^y σ_x^i σ_y^j (sin( δ_ym t_2 - δ_xm t_1 + ψ_y - ψ_x) - sin( δ_ym t_1 - δ_xm t_2 - ψ_y + ψ_x) )
Using the above expressions and performing the integrals in Eq. (<ref>), we find that for μ_1 ≠μ_2 there is no secular term in Λ_ij(τ) and ζ̂_i(τ); however, the oscillatory term in Λ_ij(τ) could become unbounded due to the presence of (δ_xm - δ_ym) in the denominator.
Let us define Λ_ij to be the coefficient of the largest oscillatory term in Λ_ij(τ).
It can then be shown that
|Λ_ij| ≤4 ×max( |J^x_ij| , |J^y_ij| )/ |μ_1 - μ_2| .
When Eq. (<ref>) is satisfied, Λ_ij remains small, the contribution of the cross terms in Eq. (<ref>) becomes negligible, and only the XY-type couplings remain in the effective Hamiltonian.
|
http://arxiv.org/abs/2307.07632v1 | 20230714210348 | A fast surrogate cross validation algorithm for meshfree RBF collocation approaches | [
"Francesco Marchetti"
] | math.NA | [
"math.NA",
"cs.NA"
] |
A fast surrogate cross validation algorithm for meshfree RBF collocation approaches
Francesco Marchetti
August 12, 2023
=============================================================================================
§ ABSTRACT
Cross validation is an important tool in the RBF collocation setting, especially for the crucial tuning of the shape parameter related to the radial basis function. In this paper, we define a new efficient surrogate cross validation algorithm, which computes an accurate approximation of the true validation error with much less computational effort with respect to a standard implementation. The proposed scheme is first analyzed and described in details, and then tested in various numerical experiments that confirm its efficiency and effectiveness.
§ INTRODUCTION
Radial Basis Function (RBF) collocation methods are widely-investigated approaches for solving various Partial Differential Equations (PDEs). They are kernel-based meshfree methods, flexible and potentially highly accurate, which deal with PDEs in their strong form by interpolating at a set of collocation points. For this reason, they can be intended and studied in a generalized interpolation framework <cit.>.
For both collocation and interpolation tasks, a crucial aspect for the construction of an effective approximant is an appropriate choice of the RBF, whose form and behavior very often depend on a positive shape parameter. Although a variable shape parameter is employed in some cases <cit.>, in the literature greater attention has been paid to the selection of a global value, and many strategies have been proposed for its fine tuning <cit.>, including some theoretical observations and rules of thumb <cit.>. In particular, many of these strategies take advantage of Cross Validation (CV) approaches <cit.>, which are empirical schemes that are widely employed in numerous fields in order to estimate the generalization capability of a model. Historically, in the RBF interpolation setting, the most considered CV approach is the Leave-One-Out CV (LOOCV) method, mainly because of its efficient computation provided by an algorithm originally proposed by S. Rippa in <cit.> and modified by some authors later on <cit.>. In <cit.>, the Extended Rippa's Algorithm (ERA) generalized Rippa's scheme to the wider k-fold CV framework, still including in its formulation the original LOOCV case.
Because of its efficiency, Rippa's approach has been extensively employed beyond the interpolation setting, with the idea of obtaining an inexact but effective CV tool. In <cit.>, the interpolation matrix used in Rippa's LOOCV scheme is directly substituted by the collocation matrix, with applications to elliptic and time-dependent PDEs, also on irregular domains. Many other works share a similar spirit; see, e.g., <cit.>. In <cit.> the authors adapted Rippa's approach to obtain a cost indicator in the context of RBF-PseudoSpectral (RBF-PS) methods and approximate moving least squares, and a modified scheme was designed in the partition of unity interpolation framework in <cit.>. However, a concrete and justified generalization of the ERA to the collocation setting is still missing.
In this paper, our purpose is to fill this gap and to provide a generalization of the ERA scheme for PDEs, by analyzing to what extent the original algorithm can be adapted from RBF interpolation to collocation. The result is a surrogate CV scheme that retains the efficiency of the ERA, but computes an approximated validation error. Nevertheless, differently from the adaptations already suggested in the literature, the discrepancy between the validation error and its approximation can be clearly formalized. In fact, maybe quite surprisingly, we show that this approximation gap involves the reconstruction provided by the RBF-PS method. Various numerical experiments show that the proposed technique approximates exact CV well, being more accurate than the empirical adaptation of Rippa's approach proposed for the collocation framework in the literature. Moreover, in some situations our scheme can also be employed when collocation points do not coincide with centers.
The paper is organized as follows. In Section <ref> we recall the main characteristics of the generalized RBF interpolation setting, and how classical collocation approaches can be framed into it. In Section <ref> we present and analyze in details the proposed Surrogate CV scheme, both from a theoretical and a computational perspective. Various numerical tests are discussed in Section <ref>. Finally, in Section <ref> we draw some conclusions.
§ GENERALIZED INTERPOLATION AND COLLOCATION
Let Ω⊂ℝ^d and let κ:Ω×Ω⟶ℝ be a strictly positive definite kernel that is radial, i.e., there exists a univariate Radial Basis Function (RBF) φ:[0,∞)⟶ℝ so that
κ(x̅,y̅)=φ(‖x̅-y̅‖ ), x̅,y̅∈Ω.
The RBF φ usually depends on a positive shape parameter ε>0. However, to keep a simpler notation, we avoid to make this dependency explicit until Section <ref>, where ε will be of interest in our numerical experiments. We can introduce the framework of kernel-based collocation as a generalized interpolation setting. Let 𝒢={γ_1,…,γ_m} and ℒ={λ_1,…,λ_n} be sets of distinct linear functionals acting on real-valued functions defined on Ω, m,n∈ℕ. Given h:Ω⟶ℝ, our purpose is to find s:Ω⟶ℝ so that
γ_i(h)=γ_i(s) i=1,…,m.
If we restrict to the kernel-based ansatz
s(x̅)=∑_j=1^n c_jλ_j(κ(x̅,y̅)),
where λ_j acts on κ seen as a function of its second input and c_j∈ℝ, then (<ref>) becomes
γ_i(h)=∑_j=1^n c_j(γ_i∘λ_j)(κ(x̅,y̅)), i=1,…,m,
with γ_i acting on κ seen as a function of its first input. Therefore, the vector of coefficients c̅=(c_1,…,c_n)^⊺ needs to satisfy
𝖦c̅=g̅,
being 𝖦_i,j=(γ_i∘λ_j)(κ(x̅,y̅)) the collocation matrix and g̅=(γ_1(h),…,γ_m(h))^⊺.
In this context, the considered functionals are usually related to sets of points in Ω. We then associate the set of collocation points X={x̅_1,…,x̅_m}⊂Ω and centers Y={y̅_1,…,y̅_n}⊂Ω to 𝒢 and ℒ, respectively. More precisely, γ_i=γ_x̅_i and λ_j=λ_y̅_j for each i=1,…,m and j=1,…,n. We point out that 𝒢 and ℒ may be related to the same set of points, even if they do not coincide.
Another relevant tool in our analysis is then the evaluation matrix 𝖫_i,j=(δ_x̅_i∘λ_y̅_j)(κ(x̅,y̅)), where δ_x̅_i(f)=f(x̅_i), i=1,…,m, are point evaluation functionals at the collocation points in X. By defining s̅=(s(x̅_1),…,s(x̅_m))^⊺, note that we can write
s̅=𝖫c̅.
In general, there is no guarantee that (<ref>) admits one or more solutions, and therefore the representation of s̅ provided in (<ref>) might not exist, or it might be non-unique.
Let us outline two well-known collocation approaches that fall into the presented framework. We will consider them in our numerical experiments in Section <ref>.
§.§ Kansa's method: γ≠λ, λ=δ
In the approach originally proposed by Ed Kansa in <cit.>, the functionals in ℒ are chosen as λ_y̅_j=δ_y̅_j, j=1,…,n. The model (<ref>) then takes the form
s(x̅)=∑_j=1^n c_jκ(x̅,y̅_j),
which is the same form used in function interpolation and regression settings. Therefore, this approach takes advantage of a simple ansatz. On the other hand, since the functionals in 𝒢 are employed to apply the differential operators that characterize the PDE, the resulting collocation matrix 𝖦 is typically non-symmetric (see e.g. <cit.>). Even if it has been shown that it is unlikely for 𝖦 to not have full rank <cit.>, this represented a relevant obstacle for the theoretical analysis of this approach <cit.>. Aside from these issues, Kansa's approach has been tailored to various differential problems and largely employed by the relevant scientific community <cit.>.
§.§ Hermite method: γ= λ≠δ
In the Hermite approach, both the functionals in 𝒢 and in ℒ are used to represent the operators involved in the PDE (see, e.g., <cit.>). A great advantage of this method is that the collocation matrix is always symmetric and invertible in the case where collocation points and centers coincide, and therefore (<ref>) is well posed (refer to e.g. <cit.>). On the other hand, applying the differential operators twice, on the first and second input of the kernel, may lead to a solution that is twice as regular as needed, and this can represent a drawback. Similarly to Kansa's method, the Hermite approach caught the interest of many authors in the last years <cit.>.
§ EFFICIENT SURROGATE CROSS VALIDATION
§.§ Case X=Y
It is convenient to start from the case where collocation points and centers coincide, i.e., X=Y and m=n. In this setting both 𝖦 and 𝖫 are square matrices, and 𝖦 is symmetric in the case of the Hermite method.
Before presenting the proposed algorithm in detail, we recall the main characteristics of classical Cross Validation (CV). Let k∈ℕ, k≤ m. In the k-fold CV setting, the set X is split into k disjoint non-empty subsets X_1,…,X_k. For a simpler presentation, we assume m to be a multiple of k, so that we can consider subsets of equal cardinality v=m/k∈ℕ, but this is not required by the scheme. Then, for every ℓ=1,…,k we proceed as follows.
* We construct the approximant s by employing only the nodes in ⋃_i=1, i≠ℓ^k X_i.
* We assess the validation performance of s on the remaining set X_ℓ by computing, e.g., the L_2-error with respect to the ground truth h.
At the end of this process, we computed and evaluated k independent models exploiting the information in X only. The global performance on X obtained by putting together the k validation errors is then employed as an indicator of the overall accuracy of the model when constructed on the full set of collocation points X. In this sense, k-fold CV is a fair and concrete procedure that does not assume knowledge at unknown data sites, differently with respect to, e.g., trial-and-error approaches.
The computational complexity required by the whole process is 𝒪(k(m-v)^3)≈𝒪(m^3k): we need to solve k different (m-v)× (m-v) linear systems (we assume the usage of non-specific techniques such as LU decomposition to solve the linear systems). In RBF applications it is very common to set v≪ m, and in particular when v=1 we get the so-called Leave-One-Out CV (LOOCV). Consequently, the computational cost required by a straightforward application of LOOCV is approximately 𝒪(m^4).
In the framework of kernel-based interpolation, the Extended Rippa's Algorithm (ERA) computes exact k-fold CV at the cost of 𝒪(m^3)+𝒪(m^3/k^2) operations, which is indeed really advantageous especially when v is small.
Inspired by the ERA, our purpose is to construct an algorithm for validating RBF collocation schemes and enhancing the efficiency of a standard k-fold CV approach.
Let us consider a single validation step of the CV procedure (a certain ℓ∈{1,…,m}), and let us fix some notations. We define as p̅=(p_1,…,p_v)^⊺, p_i∈{1,…,n} the vector of distinct validation indices that identifies the elements of the validation set X_ℓ={x̅_p_1,…,x̅_p_v}. Furthermore, given a m-dimensional vector z̅ and a m× m matrix 𝖠, we denote as:
* z̅_p̅ the v-dimensional vector obtained by restricting to the elements whose index is in p̅, and z̅^p̅ the (m-v)-dimensional vector obtained by restricting to the elements whose index is not in p̅.
* 𝖠_p̅,p̅ the v× v matrix obtained by restricting to the rows and columns whose index is in p̅, and 𝖠^p̅,p̅ the (m-v)× (m-v) matrix obtained by restricting to the rows and columns whose index is not in p̅.
The introduced notation is helpful for presenting the following result.
Assume that the m× m matrices 𝖦 and 𝖫 in (<ref>) and (<ref>) are invertible. Let s be the approximant (<ref>) constructed at the full set of collocation points/centers X by solving the linear system (<ref>), and let s^(p̅) be the approximant built by excluding the functionals related to the nodes in X_ℓ, i.e.,
s^(p̅)(x̅)=∑_x̅_j∉ X_ℓ c_j^(p̅)λ_x̅_j(κ(x̅,y̅)),
where the vector of coefficients c̅^(p̅)=(c_1^(p̅),…,c_m-v^(p̅))^⊺ solves the linear system 𝖦^p̅,p̅c̅^(p̅)=g̅^p̅. Moreover, let h̅=(δ_1(h),…,δ_m(h))^⊺ be the vector of evaluations related to the underlying function h. Then, the vector of signed validation errors e̅_p̅=h̅_p̅-s̅^(p̅)(X_ℓ), with s̅^(p̅)(X_ℓ)=(s^(p̅)(x̅_p_1),…,s^(p̅)(x̅_p_v))^⊺, can be approximated as
e̅_p̅≈ϵ̅_p̅=((𝖫^-1)_:,p̅)^+(𝖦^-1)_:,p̅((𝖦^-1)_p̅,p̅)^-1(𝖦^-1g̅)_p̅-f̅_p̅+h̅_p̅,
where 𝖠^+ denotes the Moore-Penrose inverse (or pseudoinverse) of the matrix and f̅=𝖫𝖦^-1g̅ is the RBF-PS solution at X. Precisely, e̅_p̅=ϵ̅_p̅ if the residual
‖((𝖫^-1)_:,p̅((𝖫^-1)_:,p̅)^+-𝖨)(𝖦^-1)_:,p̅((𝖦^-1)_p̅,p̅)^-1(𝖦^-1g̅)_p̅‖_2
is equal to zero, being 𝖨 the m× m identity matrix.
Let f̅=𝖫c̅∈ℝ^m, which implies f̅= 𝖫𝖦^-1g̅. Moreover, let b̅=(b_1,…,b_m)^⊺∈ℝ^m be such that:
(P1) b̅_p̅≡0̅.
(P2) 𝖫b̅ = f̅ - ∑_j=1^vα_j 𝖨_:,p_j, where 𝖨_:,p_j denotes the p_j-th column of the m× m identity matrix 𝖨, and α̅=(α_1,…,α_v)^⊺∈ℝ^v.
(P3) 𝖦b̅ = g̅ - ∑_j=1^vβ_j 𝖨_:,p_j, where β̅=(β_1,…,β_v)^⊺∈ℝ^v.
Note that for any m× m matrix 𝖠 and vector z̅ such that 𝖠b̅=z̅, (P1) implies 𝖠^p̅,p̅b̅^p̅=z̅^p̅. By taking into account also (P2) and recalling (<ref>), we obtain
s^(p̅)(x̅_p_i)=∑_x̅_j∉ X_ℓ c_j^(p̅)(δ_x̅_p_i∘λ_x̅_j)(κ(x̅,y̅))=∑_j=1^m b_j (δ_x̅_p_i∘λ_x̅_j)(κ(x̅,y̅))=(𝖫b̅)_p_i=f̅_p_i-α_p_i,
which implies
α̅=f̅_p̅-s̅^(p̅)(X_ℓ)=(𝖫𝖦^-1g̅)_p̅-s̅^(p̅)(X_ℓ).
Now, from (P3) we get
b̅ = 𝖦^-1(g̅ - ∑_j=1^vβ_j 𝖨_:,p_j)= c̅- ∑_j=1^vβ_j(𝖦^-1)_:,p_j,
and then 0̅≡b̅_p̅= c̅_p̅ - (𝖦^-1)_p̅,p̅β̅, from which we can calculate β̅=((𝖦^-1)_p̅,p̅)^-1c̅_p̅.
The next step consists in analyzing α̅ in terms of β̅. Therefore, we combine (P2) and (P3) and get
𝖫^-1(f̅ - ∑_j=1^vα_j 𝖨_:,p_j)=𝖦^-1(g̅ - ∑_j=1^vβ_j 𝖨_:,p_j),
from which it follows
∑_j=1^vα_j (𝖫^-1)_:,p_j=∑_j=1^vβ_j(𝖦^-1)_:,p_j.
Therefore, the sought vector α̅ has to satisfy (𝖫^-1)_:,p̅α̅=(𝖦^-1)_:,p̅((𝖦^-1)_p̅,p̅)^-1(𝖦^-1g̅)_p̅. This linear system is likely to be overdetermined, and to not admit an exact solution. Thus, we take the least squares solution
α̅^⋆=min_α̅‖ (𝖫^-1)_:,p̅α̅-(𝖦^-1)_:,p̅((𝖦^-1)_p̅,p̅)^-1(𝖦^-1g̅)_p̅‖_2.
By employing the pseudoinverse, we can write
α̅^⋆= ((𝖫^-1)_:,p̅)^+(𝖦^-1)_:,p̅((𝖦^-1)_p̅,p̅)^-1(𝖦^-1g̅)_p̅.
Finally, we observe that standard CV yields to
e̅_p̅=h̅_p̅-s̅^(p̅)(X_ℓ),
therefore
e̅_p̅-α̅^⋆=h̅_p̅-f̅_p̅,
and
ϵ̅_p̅=α̅^⋆-f̅_p̅+h̅_p̅.
In the case of LOOCV where p̅=p∈{1,…,m}, we get
ϵ̅_p=((𝖫^-1)_:,p)^+(𝖦^-1)_:,p(𝖦^-1g̅)_p/(𝖦^-1)_p,p - f̅_p + h̅_p.
The result is achievable by some simple manipulations, and by observing that (𝖦^-1)_p,p∈ℝ.
Let us comment on the presented theorem. In general, the assumption regarding the invertibility of the collocation and evaluation matrices is not theoretically justified. Nevertheless, as also outlined in Subsection <ref>, it is a common practice to assume such hypothesis in the context of RBF collocation. The Surrogate CV requires the calculation of the inverses of 𝖦 and 𝖫, but in each validation step ℓ=1,…,k the required computational effort is reduced with respect to the classical approach. We better analyze this important aspect in Subsection <ref>.
As far as the discrepancy between the error vectors computed by exact and Surrogate CV is concerned, the inexactness lies in the fact that ϵ̅_p̅ is likely to only approximate the validation error between the true underlying function h and the approximation computed by means of a RBF-PseudoSpectral (RBF-PS) approach that takes advantage of the full set X (cf. e.g. <cit.>). Nevertheless, we point out that the RBF-PS approximation can be calculated explicitly in our setting without adding a relevant computational cost (see Algorithm <ref>).
If we choose γ_x̅_i=λ_x̅_i=δ_x̅_i, i=1,…,m, then 𝖦=𝖫. As a consequence, in the proof of Theorem <ref> we get f̅=g̅ and we recover the exact ERA from the RBF interpolation framework. In this sense, the proposed Surrogate CV algorithm can be considered as a generalization of the ERA for generalized interpolation.
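For concreteness, a single validation step of formula (<ref>) can be transcribed directly into Python/NumPy. The sketch below assumes the case X=Y with invertible 𝖦 and 𝖫 and zero-based indices, and it is not optimized: in an actual implementation 𝖦^-1, 𝖫^-1 and f̅ are computed once and reused across all folds, which is where the efficiency gain comes from.

import numpy as np

def surrogate_cv_fold(G, L, g, h, p):
    """G, L: m x m collocation and evaluation matrices; g: data vector; h: values of the
    underlying function at the collocation points; p: zero-based indices of the current fold."""
    Ginv = np.linalg.inv(G)
    Linv = np.linalg.inv(L)
    c = Ginv @ g
    f = L @ c                                              # RBF-PS solution at the collocation points
    rhs = Ginv[:, p] @ np.linalg.solve(Ginv[np.ix_(p, p)], c[p])
    alpha = np.linalg.pinv(Linv[:, p]) @ rhs               # least-squares alpha of the proof
    return alpha - f[p] + h[p]                             # approximation of h_p - s^(p)(X_l)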
§.§ Case X≠ Y
In the following, we investigate whether the proposed Surrogate CV scheme can be extended to the more general case where collocation points differ from centers. This setting is more difficult to treat from a formal viewpoint than the X=Y case, both in interpolation and in collocation, but considering centers that do not exactly correspond to collocation points may lead to an increased numerical stability in certain situations, as observed, e.g., in <cit.>. In order to apply our scheme, the validation partition X_1,…,X_k has to be relatable to both the rows and the columns of 𝖦 and 𝖫. This is clear from the first part of the proof of Theorem <ref>, up to (<ref>), where the validation set X_ℓ and the corresponding vector of indices p̅ have influence on both δ_i and λ_j. Therefore, we can say something in the case where the set of centers consists of the collocation points plus some other additional centers.
Let Y=X∪ Z, where Z={z̅_1,…,z̅_ν}⊂ℝ^d, ν∈ℕ, is an additional set of centers that are not considered as collocation points. Then, the Surrogate CV that relies on the collocation points set X leads to
ϵ̅_p̅= ((𝖫^+)_:,p̅)^+(𝖦^+)_:,p̅((𝖦^+)_p̅,p̅)^-1(𝖦^+g̅)_p̅-f̅_p̅+h̅_p̅.
The matrices 𝖦 and 𝖫 are now (m× n)-dimensional with n=m+ν. We can construct them in such a way that the last columns correspond to the functionals related to Z, i.e.,
𝖦_i,j=(γ_x̅_i∘λ_x̅_j)(κ(x̅,y̅)), j=1,…,m,
𝖦_i,j=(γ_x̅_i∘λ_z̅_j-m)(κ(x̅,y̅)), j=m+1,…,n,
and doing the same for 𝖫. Consequently, we can proceed as in the proof of Theorem <ref>, keeping in mind that:
* Classical matrix inversions are replaced by pseudoinverses, with the effect that we choose the minimal norm solutions among the infinitely many admissible ones (the related linear systems are now underdetermined).
* The validation procedure does not involve the additional centers in Z, that are therefore always employed as centers in every model and never excluded during the process.
§.§ Formulating the algorithm
We can exploit the findings of the previous subsections to finalize the Surrogate CV method, which is detailed in Algorithm <ref>. In order to present a single version of the scheme, we consider the more general assumptions of Corollary <ref>, observing that in the case X=Y the pseudoinverses reduce to classical matrix inverses where appropriate.
The overall computational cost of the scheme is analyzed in the following.
Assume that p̅_1,…,p̅_k are all v-dimensional vectors and v=m/k. Then, the computational cost required by Algorithm <ref> is 𝒪(mn^2)+𝒪((m^3+m^2nk+mn^2k^2)/k^2).
By using, e.g., the singular value decomposition, a cost of 𝒪(mn^2) is required for computing the pseudoinverses of 𝖦 and 𝖫. Then, by taking into account matrix multiplications and (pseudo)inversions, for each ℓ=1,…,k we have a cost of 𝒪(v^3+v^2n+vn^2). Putting these together, we get 𝒪(mn^2)+k𝒪(v^3+v^2n+vn^2). Finally, using v=m/k, the claim follows.
To compare this with the computational cost of a standard CV implementation, which is outlined in Subsection <ref> and detailed in Algorithm <ref>, note that when m=n the result in Proposition <ref> becomes 𝒪(m^3)+𝒪((m^3+m^3k+m^3k^2)/k^2). Therefore, if k≈ m, the computational effort of the Surrogate CV algorithm is 𝒪(m^3), more efficient than the 𝒪(m^4) required by the standard approach.
In our numerical tests, we will also compare our Surrogate CV to the empirical Rippa-like strategy that has been employed in some previous works, which is detailed in Algorithm <ref>. In contrast to Algorithms <ref> and <ref>, we point out that such an empirical scheme is applicable in the LOOCV case only, and with X=Y. On the other hand, its computational cost is limited to 𝒪(m^3).
A Matlab implementation of the proposed Surrogate CV Algorithm <ref> is available at
.
§ NUMERICS
In the following, we test the discussed Surrogate CV algorithm by carrying out different numerical experiments that aim to show the benefits of the proposed scheme: restricted computational times and accurate approximation of the exact validation error. To this end, we consider the following elliptic problem (see <cit.>), which will be addressed via different validation strategies and in various settings.
Δ u(x̅) =-(5/4)π^2sin(π x_1)cos(π x_2/2), x̅=(x_1,x_2)∈Ω=[0,1]^2,
u(x̅)= sin(π x_1), x̅∈Γ_1,
u(x̅)= 0, x̅∈Γ_2,
where Γ_1={x̅∈Ω | x_2=0} and Γ_2=∂Ω∖Γ_1. The exact solution of this problem is
u(x̅)=sin(π x_1)cos(π x_2/2).
Moreover, we will consider the following RBFs, that depend on a shape parameter that needs to be tuned,
φ_M,ε(r)= e^-ε r(1+ε r) , Matérn C^2,
φ_I,ε(r)= 1/√(1+(ε r)^2), Inverse Multiquadrics C^∞.
To construct the collocation schemes, we also consider
Δφ_M,ε(r)= ε^2e^-ε r(ε r - 2),
Δφ_I,ε(r)= ε^2((ε r)^2-2)/(1+(ε r)^2)^5/2, ΔΔφ_I,ε(r)= 3ε^4(3(ε r)^4-24(ε r)^2+8)/(1+(ε r)^2)^9/2
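For illustration, the kernels above and the differentiated versions entering the collocation matrices can be implemented in a few lines. The following Python/NumPy sketch (not the Matlab code used for the experiments) collects them as functions of the distance r and of the shape parameter ε.

import numpy as np

def phi_matern(r, eps):                  # Matern C^2
    return np.exp(-eps * r) * (1.0 + eps * r)

def lap_phi_matern(r, eps):              # 2D Laplacian of the Matern C^2 kernel
    return eps**2 * np.exp(-eps * r) * (eps * r - 2.0)

def phi_imq(r, eps):                     # Inverse Multiquadric
    return 1.0 / np.sqrt(1.0 + (eps * r)**2)

def lap_phi_imq(r, eps):
    s2 = (eps * r)**2
    return eps**2 * (s2 - 2.0) / (1.0 + s2)**2.5

def bilap_phi_imq(r, eps):               # biharmonic term used by the Hermite approach
    s2 = (eps * r)**2
    return 3.0 * eps**4 * (3.0 * s2**2 - 24.0 * s2 + 8.0) / (1.0 + s2)**4.5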
The experiments have been carried out in Matlab on an Intel(R) Core(TM) i7-1165G7 [email protected] processor.
§.§ Test 1: computational times and validation accuracy with Kansa's approach
Let μ̅=(4^2,8^2,…,28^2,32^2). For each μ∈μ̅, we perform the following test.
* We consider a set of internal collocation Halton points H_μ <cit.>, along with a set of √(μ) boundary collocation points B_√(μ) that are equispaced on the boundary of Ω (see Figure <ref>). Therefore, we have m=μ + √(μ) collocation points X=H_μ∪ B_√(μ).
* We let the centers coincide with the collocation points (case X=Y) and we implement Kansa's approach by setting
λ_x̅_j(u) = δ_x̅_j(u)=u(x̅_j) j=1,…,m,
and
γ_x̅_i(u)=(Δ u)_|x̅=x̅_i, x̅_i∈Ω,
γ_x̅_i(u)=δ_x̅_i(u), x̅_i∈∂Ω.
* Let ε̅ be a vector of 100 shape parameter values between 2^-5 and 2^5 discretized in log-form. We want to choose ε∈ε̅ so that the L_2-norm of the LOOCV error vector computed on X is minimized.
* Therefore, for each ε∈ε̅ we compute the LOOCV (k=m) error via three different strategies (the overall setup of steps 1.–4. is sketched in the code snippet reported after this list).
* Exact LOOCV: we compute the classical LOOCV error vector e̅.
* Surrogate LOOCV: we calculate an approximate value of the vector of LOOCV error ϵ̅ by using the proposed Algorithm <ref>.
* Empirical LOOCV: we compute an inexact LOOCV error vector η̅ by using the empirical Algorithm <ref>.
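A minimal Python sketch of this setup (steps 1.–4.) is reported below. It is meant only as an illustration, not as the Matlab code used for the figures: we rely on SciPy's Halton generator for H_μ, take 𝖫 to be the plain kernel evaluation matrix (since λ=δ here), and leave the assembly of g̅, f̅, h̅ and the call to the surrogate_cv_errors helper of the earlier sketch as indicated in the final comment.

import numpy as np
from scipy.stats import qmc

mu = 16**2                                                  # one entry of the vector mu_bar
interior = qmc.Halton(d=2, seed=0).random(mu)               # Halton points H_mu in (0,1)^2

def square_boundary(n):
    # n points equispaced (in arclength) on the boundary of the unit square
    s = np.linspace(0.0, 4.0, n, endpoint=False)
    pts = np.empty((n, 2))
    for i, si in enumerate(s):
        edge, t = int(si), si - int(si)
        pts[i] = [(t, 0.0), (1.0, t), (1.0 - t, 1.0), (0.0, 1.0 - t)][edge]
    return pts

boundary = square_boundary(int(np.sqrt(mu)))                # B_sqrt(mu)
X = np.vstack([interior, boundary])                         # collocation points = centers (X = Y)
inside = np.r_[np.ones(len(interior), bool), np.zeros(len(boundary), bool)]

def kansa_matrices(X, inside, phi, lap_phi, eps):
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.where(inside[:, None], lap_phi(r, eps), phi(r, eps))  # gamma: Laplacian inside,
                                                                 # point evaluation on the boundary
    L = phi(r, eps)                                              # evaluation matrix (lambda = delta)
    return G, L

eps_grid = np.logspace(-5, 5, 100, base=2)                  # 100 log-spaced shape parameters
folds = [np.array([i]) for i in range(len(X))]              # LOOCV corresponds to k = m
# for each eps in eps_grid: build G, L, assemble g, f, h, call surrogate_cv_errors(...),
# and keep the eps minimizing the l2-norm of the returned error vector.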
In Figure <ref> we report the results obtained by considering φ_I,ε, while in Figure <ref> we use as basis function φ_M,ε.
First, we observe that the proposed Surrogate LOOCV is, as expected, definitely more efficient than Exact LOOCV, and only slightly slower than Empirical LOOCV. Moreover, the central plots in Figures <ref> and <ref> show that Surrogate LOOCV computes a much more accurate validation error than the empirical scheme. As a consequence, Surrogate LOOCV is more likely than Empirical LOOCV to select a shape parameter that coincides with the choice of the exact scheme.
§.§ Test 2: a numerical experiment with the Hermite method
In the following, we set μ=256 and take X=H_μ∪ B_√(μ) as the set of collocation points and centers. However, differently from the previous section, here we restrict ourselves to φ_I,ε and we test our algorithm on the Hermite method, which is characterized by the choice
λ_x̅_i(u)=γ_x̅_i(u)=(Δ u)_|x̅=x̅_i, x̅_i∈Ω,
λ_x̅_i(u)=γ_x̅_i(u)=δ_x̅_i(u), x̅_i∈∂Ω.
Then, we perform steps 3. and 4. of Subsection <ref> and we focus on the obtained validation errors and best shape parameter values. The results are reported in Table <ref>, where we highlight the L_2-norm of the obtained validation error vector related to the chosen shape parameter value.
Also in this Hermite case, Surrogate LOOCV outperforms Empirical LOOCV in terms of accuracy.
§.§ Test 3: dealing with a non-square collocation matrix
Here, we set again μ=256 and X=H_μ∪ B_√(μ), but we choose a different set of centers Y. Precisely, we add √(μ) centers that lie outside Ω, as depicted in Figure <ref>. This is a well-known strategy adopted to improve the accuracy of the collocation scheme near the boundary of the domain <cit.>. Then, we consider Kansa's approach, with basis function φ_M,ε, and we proceed as in the previous subsection. However, note that in this case the matrices 𝖦 and 𝖫 are not square, therefore the empirical LOOCV of Algorithm <ref> is not applicable.
We observe that our proposed Surrogate LOOCV is fairly accurate in this non-square collocation setting too, with ‖ϵ̅‖_2 close to ‖e̅‖_2.
§.§ Test 4: varying k in k-fold CV
In this subsection, our purpose is to analyze the behavior of the proposed Surrogate CV as the number of folds k varies, in terms of both computational time and accuracy. To do this, we again set μ=256 and X=Y=H_μ∪ B_√(μ) and employ Kansa's approach with basis function φ_M,ε. Here, however, steps 3. and 4. of Subsection <ref> are repeated for k∈{⌊ m/2^i⌋ | i=0,…,7}. We report the achieved results in Figure <ref>.
As far as the computational times are concerned, as expected Exact CV is faster than Surrogate CV only for very small values of k. Regarding the accuracy of the proposed scheme as k varies, the right-hand plot in Figure <ref> provides an interesting insight: the approximation of the exact validation error gets worse as k becomes smaller. This fact can be related to (<ref>), from which the inexactness of our Surrogate CV originates. Indeed, a small value of k implies a large number v of validation samples, and therefore the model is built on a restricted number of data. Consequently, it is natural for the result to deviate from the RBF-PS solution. Nevertheless, we observe that the scheme retains good accuracy for any k.
§ CONCLUSIONS
In this work, we proposed a new surrogate k-fold CV scheme for the RBF collocation setting, which is inspired by the extended Rippa's algorithm employed in the interpolation framework. The proposed method is more efficient than a straightforward calculation of exact k-fold CV, especially when k is large, and can also be used when additional centers besides the collocation points are taken into consideration. Moreover, in the case of LOOCV, it provides a more accurate approximation of the exact validation error with respect to the Rippa-like empirical LOOCV approach that is used in many previous works. The discussed Surrogate CV Algorithm <ref> has first been analyzed in detail, and then tested in various numerical experiments. Future work consists of studying the proposed scheme in other collocation settings, also with conditionally positive definite kernels, and of evaluating possible modifications to further enhance its efficiency <cit.>.
99
Campagna20
R. Campagna, S. Cuomo, S. De Marchi, E. Perracchione, G. Severino,
A stable meshfree PDE solver for source-type flows in porous media, Appl. Numer. Math 149 (2020), pp. 30–42.
Cavoretto22
R. Cavoretto,
Adaptive LOOCV-based kernel methods for solving time-dependent BVPs, Appl. Math. Lett. 429 (2022), 127228.
Cavoretto20a
R. Cavoretto, A. De Rossi,
A two-stage adaptive scheme based on RBF collocation for solving elliptic PDEs, Comput. Math. Appl. 79 (2020), pp. 3206-3222.
Cavoretto20b
R. Cavoretto, A. De Rossi,
An adaptive LOOCV-based refinement scheme for RBF collocation methods over irregular domains, Appl. Math. Lett. 103 (2020), 106178.
Cavoretto18
R. Cavoretto, A. De Rossi, E. Perracchione, Optimal Selection of Local Approximants in RBF-PU Interpolation, J Sci Comput 74 (2018), pp. 1–22.
Chen20
C. S. Chen, A. Karageorghis, H. Zheng, Improved RBF Collocation Methods for Fourth Order Boundary Value Problems, Commun. Comput. Phys. 27 (2020), pp. 1530–1549.
Chen22
Y.-T. Chen, C. Li, L.-Q. Yao, Y. Cao, A Hybrid RBF Collocation Method and Its Application in the Elastostatic Symmetric Problems, Symmetry 14(7) (2022), 1476.
Chen22a
M. Chen, L. Ling, Y. Su, Solving interpolation problems on surfaces stochastically and greedily, Dolomites Res. Notes Approx. 15(3) (2022), pp. 26–36.
Chiappa20
A. Chiappa, C. Groth, A. Reali, M. Evangelos Biancolini, A stress recovery procedure for laminated composite plates based on strong-form equilibrium enforced via the RBF Kansa method, Compos. Struct. 244 (2020), 112292.
Chiu20
S.N. Chiu, L. Ling, M. McCourt, On variable and random shape Gaussian interpolations, Appl. Math. Comput. 377 (2020), 125159.
Chu14
F. Chu, L. Wang, Z. Zhong, J. He, Hermite radial basis collocation method for vibration of functionally graded plates with in-plane material inhomogeneity, Comput. Struct. 142 (2014), pp. 79–89.
Dehghan15
M. Dehghan, M. Abbaszadeh, A. Mohebbi, An implicit RBF meshless approach for solving the time fractional nonlinear sine-Gordon and Klein–Gordon equations, Eng. Anal. Bound. Elem. 50 (2015), pp. 412–434.
Dehghan14
M. Dehghan, V. Mohammadi, The numerical solution of Fokker-Planck equation with radial basis functions (RBFs) based on the meshless technique of Kansa's approach and Galerkin method, Eng. Anal. Bound. Elem. 47 (2014), pp. 38–63.
Fasshauer07
G.E. Fasshauer,
Meshfree Approximations Methods with Matlab,
World Scientific, Singapore, 2007.
Fasshauer15
G.E. Fasshauer, M.J. McCourt,
Kernel-based Approximation Methods Using Matlab,
World Scientific, Singapore, 2015.
Fasshauer07a
G.E. Fasshauer, J.G. Zhang, On choosing optimal shape parameters for RBF approximation, Numer. Algorithms 45 (2007), pp. 345–368.
Fornberg07
B. Fornberg, J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, Comput. Math. Appl. 54(3) (2007), pp. 379–398.
Gherlone12
M. Gherlone, L. Iurlaro, M. Di Sciuva, A novel algorithm for shape parameter selection in radial basis functions collocation method, Compos. Struct. 94(2) (2012), pp. 453–461.
Golbabai15
A. Golbabai, E. Mohebianfar, H. Rabiei, On the new variable shape parameter strategies for radial basis functions, J. Comput. Appl. Math. 34 (2015), pp. 691–704.
Golub79
G. H. Golub, M. Heath, G. Wahba, Generalized cross-validation as a method
for choosing a good ridge parameter, Technometrics 21(2) (1979), pp. 215–223.
Halton60
J. H. Halton, On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals, Numer. Math. 2 (1960), pp. 84–90.
Hon01
Y. C. Hon, R. Schaback, On nonsymmetric collocation by radial basis functions, Appl. Math. Comput. 119 (2001), pp. 177–186.
Kansa90
E. J. Kansa,
Multiquadrics–A scattered data approximation scheme with applications to computational fluid-dynamics–II solutions to parabolic, hyperbolic and elliptic partial differential equations, Comput. Math. Appl. 19 (1990), pp. 147–161.
Kansa92
E. J. Kansa, R. Carlson,
Improved accuracy of multi-quadric interpolation using variable shape parameters, Comput. Math. Appl. 24 (1992), pp. 20–99.
Karageorghis21
A. Karageorghis, D. Tappoura, C.S. Chen,
The Kansa RBF method with auxiliary boundary centres for fourth order boundary value problems, Math. Comput. Simul. 181 (2021), pp. 581–597.
Katsiamis20
A. Katsiamis, A. Karageorghis,
Kansa radial basis function method with fictitious centres for solving nonlinear boundary value problems, Eng. Anal. Bound. Elem. 119 (2020), pp. 293–301.
Kazem12
S. Kazem, J.A. Rad, K. Parand,
Radial basis functions methods for solving Fokker–Planck equation, Eng. Anal. Bound. Elem. 36(2) (2012), pp. 181–189.
Krowiak19
A. Krowiak, J. Podgórski,
On choosing a value of shape parameter in radial basis function collocation methods, AIP Conference Proceedings 2116(1) (2019), 450020.
LaRocca05
A. La Rocca, A. Hernandez Rosales, H. Power,
Radial basis function Hermite collocation approach for the solution of time dependent convection–diffusion problems, Eng. Anal. Bound. Elem. 29(4) (2005), pp. 359–370.
LaRocca06
A. La Rocca, H. Power,
A Hermite radial basis function collocation approach for the numerical simulation of crystallization processes in a channel, Commun. Numer. Meth. Engng. 22 (2006), pp. 119–135.
Ling22
L. Ling, F. Marchetti,
A stochastic extended Rippa’s algorithm for LpOCV, Appl. Math. Lett. 129 (2022), 107955.
Liu15
XY. Liu, A. Karageorghis, C.S. Chen,
A Kansa-Radial Basis Function Method for Elliptic Boundary Value Problems in Annular Domains, J. Sci. Comput. 65 (2015), pp. 1240–1269.
Ma21
X. Ma, B. Zhou, S. Xue,
A meshless Hermite weighted least-square method for piezoelectric structures, Appl. Math. Comput. 400 (2021), 126073.
Marchetti21
F. Marchetti,
The extension of Rippa’s algorithm beyond LOOCV, Appl. Math. Lett. 120 (2021), 107262.
Mongillo11
M. Mongillo, Choosing Basis Functions and Shape Parameters for Radial Basis Function Methods, SIAM SIURO publications 4 (2011).
Rippa99
S. Rippa, An algorithm for selecting a good value for the parameter c in radial basis function interpolation, Adv. Comput. Math. 11 (1999), pp. 193–210.
Roque10
C.M.C. Roque, A.J.M. Ferreira, Numerical Experiments on Optimal Shape Parameters for Radial Basis Functions, Numer. Meth. Partial Diff. Eqs. 26 (2010), pp. 675–689.
Schaback07
R. Schaback, Convergence of Unsymmetric Kernel-Based Meshless Collocation Methods, SIAM J Numer Anal 45(1) (2007), pp. 333–351.
Scheuerer11
M. Scheuerer, An alternative procedure for selecting a good value for the parameter c in RBF-interpolation, Adv. Comput. Math. 34 (2011), pp. 105–126.
Trahan03
C.J. Trahan, R.E. Wyatt, Radial basis function interpolation in the quantum trajectory method: optimization of the multi-quadric shape parameter, J. Comput. Phys. 185 (2003), pp. 27–49.
Uddin14
M. Uddin, On the selection of a good value of shape parameter in solving time-dependent partial differential equations using RBF approximation method, Appl. Math. Model. 38 (2014), pp. 135–144.
Yang18
F. Yang, L. Yan, L. Ling, Doubly stochastic radial basis function methods, J. Comput. Phys. 363 (2018), pp. 87–97.
Wang18
F. Wang, W. Chen, C. Zhang, Q. Hua, Kansa method based on the Hausdorff fractal distance for Hausdorff derivative Poisson equations, Fractals 26(4) (2018), 1850084.
Wendland05
H. Wendland,
Scattered Data Approximation,
Cambridge Monogr. Appl. Comput. Math., vol. 17, Cambridge Univ. Press, Cambridge, 2005.
|
http://arxiv.org/abs/2307.04750v1 | 20230710175544 | Quantum oscillations with topological phases in a kagome metal CsTi$_3$Bi$_5$ | [
"Yongkang Li",
"Hengxin Tan",
"Binghai Yan"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
Quantum oscillations can reveal Fermi surfaces and their topology in solids and provide a powerful tool for understanding transport and electronic properties. It is well established that the oscillation frequency maps the Fermi surface area by Onsager's relation. However, the topological phase accumulated along the quantum orbit remains difficult to estimate in calculations, because it includes multiple contributions from the Berry phase, orbital and spin moments, and also becomes gauge-sensitive for degenerate states. In this work, we develop a gauge-independent Wilson loop scheme to evaluate all topological phase contributions and apply it to CsTi_3Bi_5, an emerging kagome metal. We find that the spin-orbit coupling dramatically alters the topological phase compared to the spinless case.
Especially, oscillation phases of representative quantum orbits demonstrate a strong 3D signature despite their cylinder-like Fermi surface geometry.
Our work reveals the Fermi surface topology of CsTi_3Bi_5 and paves the way for the theoretical investigation of quantum oscillations in realistic materials.
Quantum oscillations with topological phases in a kagome metal CsTi_3Bi_5
Binghai Yan
August 12, 2023
============================================================================
§ INTRODUCTION
The kagome lattice, a 2D corner-sharing triangle lattice, has attracted much interest due to its geometric frustration and non-trivial band geometry. Among various materials containing such a 2D lattice structure, the kagome material family AV_3Sb_5 (A = K, Rb, Cs) <cit.> receives special attention since it exhibits many exotic quantum phenomena, including ℤ_2 topology and flat bands <cit.>, possible unconventional superconductivity <cit.>, and density wave order <cit.>. However, because of the interplay and competition between different correlated states, the origin of these physical properties and their relation to the unique electronic structure remain elusive.
Recently, a new Ti-based kagome material ATi_3Bi_5 (A = K, Rb, Cs), isostructural to AV_3Sb_5, has been synthesized <cit.> and investigated <cit.>. Unlike in the V-based AV_3Sb_5 family, the charge density wave (CDW) order is absent in the ATi_3Bi_5 family, as shown in transport and scanning tunneling microscopy (STM) experiments <cit.>. First-principles calculations also show the absence of lattice structural instability <cit.>. Hence, ATi_3Bi_5 could serve as a complementary system to AV_3Sb_5, in which the origin of these exotic phenomena and their relation to the electronic properties can be investigated without the complication of lattice effects. For example, the observed two-fold rotational symmetry and orbital selectivity in the electronic structure of ATi_3Bi_5 <cit.> may signal a pure electronic nematic phase, similar to that in Fe-based high-temperature superconductors <cit.>. Understanding the band structure and Fermi surface of ATi_3Bi_5 is crucial for further investigating these correlated properties.
Quantum oscillation measurements are one way to probe the Fermi surface topology as well as its associated properties, such as the cyclotron mass and carrier mobility <cit.>. More importantly, the phase of the fundamental oscillation is related to the band topology. Usually, a π phase shift in the oscillation is regarded as a π Berry phase, which indicates a topological band structure <cit.>. Quantum oscillation analysis from this perspective has been carried out in AV_3Sb_5 <cit.> and also recently in ATi_3Bi_5 <cit.>, claiming nontrivial band topology based on this π Berry phase.
The topological phase actually has other contributions entangled with the Berry phase <cit.>. Especially in the degenerate case with strong spin-orbit coupling (SOC), such a π phase may mainly come from the orbital or spin magnetic moment rather than the Berry phase, as revealed recently in CsV_3Sb_5 <cit.>. Hence, the analysis of topological properties based on the phase shift in quantum oscillations should consider all contributions. Apart from experiment, this phase can be independently evaluated from ab-initio band structures. However, such a calculation has to deal with the gauge-fixing problem in the presence of degeneracy, which is common for centrosymmetric nonmagnetic materials. A numerical evaluation of all phase contributions free of gauge ambiguities has not been carried out in detail before.
In this work, we develop a Wilson loop method to determine the quantum oscillation phase and apply it to CsTi_3Bi_5. We first detail the method, which is explicitly gauge independent and can be implemented conveniently in the case of degenerate bands. Then, combining this method with first-principles calculations, we resolve the Fermi surface of CsTi_3Bi_5 and determine the total oscillation phase for all quantum orbits. Its relation to the Fermi surface geometry and band topology is clarified at the end. The 3D nature of several representative quantum orbits is imprinted in the topological phase, although the related Fermi surfaces show a cylinder-like shape.
Our work provides a useful theoretical tool to investigate the Fermi surfaces and topological electronic properties in materials.
§ OVERVIEW ON THE QUANTUM OSCILLATION PHASE
In the presence of a strong magnetic field, the physical quantities (e.g., resistance and magnetization) show oscillation with respect to a magnetic field (B) due to the formation of quantized Landau levels (LLs). Under the semiclassical limit in which the scale of k-space orbit is much larger than the inverse of magnetic length l_B^-1 (l_B=√(ħ /eB)), the oscillation is periodic with respect to 1/B and can be expanded as a sum of Fourier series in general:
δ A = ∑_i∑_r A_i,rcos[r(l_B^2 S_F,i+θ_i+ϕ_M,i) + δ_i + φ_A ].
Here, A is the physical quantity being measured, which is usually the magnetization M or the longitudinal resistivity ρ_xx, δ A is its oscillatory part, and A_i,r is the oscillation amplitude of the r-th harmonic of the i-th extremal orbit. S_F,i is the momentum-space area of the i-th extremal orbit on the Fermi surface and determines the i-th oscillation frequency. The total oscillation phase is decomposed into four parts: θ_i is the first-order correction to the dynamical phase, including the geometric phase and the (orbital and spin) magnetic moment phase. ϕ_M,i is the Maslov correction, which equals π for a simple closed orbit. δ_i is the dimension-related phase resulting from the integration over k_z when a 3D solid is measured (suppose B is along the z direction). The last term φ_A is the phase related to the measured quantity A (see the following discussion).
All phases except φ_A depend only on the Fermi surface properties and are universal for any oscillatory quantity. Below we show that each phase can be determined from first-principles calculations to understand experiments. We note that a comprehensive theory on quantum oscillations was established in Refs. <cit.>. We first overview this theory and then introduce the Wilson loop method to compute the topological phase.
§.§ Phase θ
The first two phases θ and ϕ_M (below we focus on a single orbit and ignore the subscript i) are related to LLs. In general, there are no simple rules to determine the exact LL for arbitrary band structure. However, in the semiclassical limit, approximate LL can be determined from Bohr-Sommerfeld-like quantization rules. For a group of D-fold degenerate bands, the j-th LLs can be obtained up to leading order in l_B^-1 as,
l_B^2 S(E_a,j) +λ_a + ϕ_M = 2π j + O(l_B^-2/3).
a ∈ℤ_D:={1, …, D} is the band index among the D degenerate bands and λ_a is the phase that we are interested in.
λ_a is equivalent to θ if there is no degeneracy, i.e., D=1. ϕ_M is the Maslov correction, which can be determined from the topology of the orbit and equals π for a simple closed orbit.
Because of the degeneracy, the D LLs create D oscillation terms with the same frequency F = ħ S_F/(2π e) by Onsager's relation but different phase shifts λ_a. Together they amount to a single oscillation term with reduced amplitude C and effective phase shift θ,
∑^D_a=1cos[r(l_B^2 S_F+λ_a+ϕ_M)] = Ccos[r(l_B^2 S_F+θ+ϕ_M)].
For example, all bands are doubly degenerate (D=2) in the presence of the combined inversion and time-reversal (𝒫𝒯) symmetry, which is the case for the kagome metals CsV_3Sb_5 and CsTi_3Bi_5. We restrict λ_1,2 to the range [-π,π]; 𝒫𝒯 symmetry then leads to λ_1 = - λ_2. Hence, summing the two cosine functions in Eq. (<ref>) leads to
θ =
0, if |λ_1| < π/2
π, if |λ_1| > π/2
C = |cos(λ_1)| ,
One can see that θ is a quantized topological invariant (0 or π) <cit.>, insensitive to the orbit details.
In general, phase λ_a can be determined from the spectrum {e^iλ_a}_a=1^D of propagator<cit.>
𝒜[𝔬]=exp[i ∮_𝔬{(A+R) · d k+Z(σ^z / v^⊥) d k}].
Here exp means path-ordered product, A(k)_m n=i⟨ u_m k| ∇_k u_n k⟩ is non-Abelian Berry connection and
R_m n· d k =∑_l ∉ℤ_DA_m l^x Π_l n^y d k_x / 2 v_y+(x ↔ y)
=-iħ∑_l ∉ℤ_DΠ_m l^x Π_l n^y/ε_m k-ε_l kd k_x/2 v_y + (x ↔ y)
=-(M_z/ev^⊥)dk,
is the Roth term and represents the orbital correction (-M_z B_z) to the band energy. Π(k)_l n=⟨ u_l k|(1/ħ)∇_kĤ(k)| u_n k⟩ is the velocity matrix element and v=Π_n n is the group velocity. ϵ_mk is the band energy and v^⊥ is the velocity in the xy plane. M_z=i(eħ/2)∑_l ∉ℤ_DΠ_m l^xΠ_l n^y/(ε_m k-ε_l k) - (x ↔ y) is the self-rotation part of the orbital magnetic moment <cit.>. Furthermore,
σ_z,mn = ⟨ u_m k|σ̂_z| u_n k⟩ (σ̂_z is the spin Pauli matrix) and Z=g_0 ħ/(4m). The last term is the spin Zeeman term. Once the propagator 𝒜[𝔬] is known, the phases λ_a can be easily obtained by diagonalizing it.
Though its formulation is clear in theory, the numerical calculation of this propagator needs to deal with the derivatives in the Berry connection. Besides, the multi-band magnetic moment (including orbital and spin) is a gauge-covariant quantity whose matrix elements depend on the gauge. If a random gauge is chosen, the magnetic moment transforms independently at each point along the orbit, rendering Eq. (<ref>) meaningless. To deal with these problems, one can choose a smooth gauge by finding the maximally localized Wannier functions <cit.>. Alternatively, the Wilson loop method <cit.> can be applied to avoid the choice of any specific gauge. Below, we shall use the Wilson loop method for the calculation of λ_a.
In this way, the quantum orbit is discretized into N segments (Fig. <ref>) and the propagator is written as a product over the segments. If each segment is small enough, the exponent can be split into the Berry connection and magnetic moment parts.
𝒜[𝔬] = ∏_i=1^Nexp{i[(A(k_i) + R(k_i))· dk_i+Zσ^z/v^⊥|dk_i|]}
≈∏_i=1^Nexp[iA(k_i) · dk_i] exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|].
For the numerical calculation, the Berry connection part is usually expressed by an overlap matrix M^i =exp[iA(k_i) · dk_i]. M^i is a D× D matrix with M^i_mn=⟨ u_m k_i+1|u_n k_i⟩.
The last ingredient for the propagator is an expression for the Roth term that shows explicit gauge covariance. It can be written as a summation of velocity matrix elements over all other states as in (<ref>).
Instead, we propose another method that considers only D degenerate states on the Fermi surface using the covariant derivative<cit.>. The covariant derivative is defined as
|D_α u_n k⟩ =Q_k|∂_α u_n k⟩,
Q_k := I-∑_a ∈ℤ_D|u_a k⟩⟨ u_a k|.
In numerical calculation, it can be evaluated as an appropriate finite difference<cit.>
|D_α u_n k⟩=1/(2|q_α|)(|u̅_n, k+q_α⟩-|u̅_n, k-q_α⟩),
where the dual state |u̅_n, k+q⟩ is a linear combination of the states |u_n^', k+q⟩ and has the property ⟨u_m k| u̅_n k+q⟩=δ_m n. This ensures the orthogonality between the covariant derivative and the states in the degenerate space, i.e. ⟨ u_m k| D_α u_n k⟩=0. Dual states are constructed as
|u̅_n, k+q⟩=∑_n^'(S_k, k+q^-1)_n^' n|u_n^', k+q⟩
and
(S_k, k+q)_n n^'=⟨ u_n k | u_n^', k+q⟩ .
Using covariant derivative, Eq. (<ref>) is expressed only by states inside the degenerate space
ℜ_m n· d k = -i/ħ∑_l ∉ℤ_DA_m l^x (ε_n k-ε_l k) A_l n^y d k_x / 2 v_y+(x ↔ y)
= -i/ħ∑_l ∉ℤ_D⟨ D_x u_m k| u_l k⟩ (ε_n k-ε_l k) ⟨ u_l k| D_y u_n k⟩ d k_x / 2 v_y
+ (x ↔ y)
= -i/ħ⟨ D_x u_m k| ε_n k-Ĥ(k) | D_y u_n k⟩ d k_x / 2 v_y+(x ↔ y).
In Appendix we show both Eq. (<ref>) and Eq. (<ref>) are gauge independent, which can be implemented easily in first-principles calculation. The Eq. (<ref>) is practical for tight-binding models with a small number of bands but quite tedious if the total number of bands is large. The Eq. (<ref>) avoids these problems and focuses only on the degenerate space and it is convenient when covariant derivatives can be easily calculated.
§.§ Phase δ
The above discussion about phase θ is for a single k-plane perpendicular to the magnetic field. For 3D material, one needs to integrate over k_z to get the contribution from the whole Fermi surface. Extremal orbits will dominate in the integration and this procedure will introduce another phase δ for each of them, which is generally ±π/4 (+ for minimum cross-section and - for maximum cross-section). δ = 0 for 2D material since there is only one k-plane. But for a nearly cylindrical Fermi surface (e.g., Fig.<ref>), δ lies between these two limits. Below we adopt a simple model from Refs. <cit.> to determine δ for every extremal orbit that lies in a mirror plane. Here, we assume 𝒫𝒯 symmetry for simplicity.
The oscillation of a 3D Fermi surface is first calculated for a 2D slab of thickness dk_z and then integrated with respect to k_z, i.e.
A_r = ∑_a ∫ dk_z A_r(k_z) cos[r (2πF(k_z)/B + λ_a(k_z)) + ϕ_M ]
∝∫ dk_z A_r(k_z) cos[r (2πF(k_z)/B + θ(k_z)) + ϕ_M ],
where A_r(k_z) is the oscillation amplitude of the 2D slab, which depends on k_z through the cyclotron frequency F(k_z) and the cyclotron mass m(k_z). The relative change of F(k_z) and m(k_z) in the interval where the integral is appreciable is usually small. Hence, in the integration of Eq. (<ref>), A_r(k_z) can be treated approximately as a constant, while F(k_z) in the cosine function cannot be treated as fixed because F(k_z)≫ B. The Maslov phase ϕ_M remains constant as long as the orbit on the Fermi surface does not change its topology. Moreover, 𝒫𝒯 symmetry causes the phase θ(k_z) to be quantized to 0 or π as in Eq. (<ref>). So the k_z dependence of θ can also be ignored and only the k_z-variation of F(k_z) needs to be considered.
We expand F(k_z) near its extremal value to the fourth order and all odd orders are zero due to mirror symmetry.
F(k_z) = F_0 + 1/2F_2k_z^2 + 1/24F_4k_z^4.
Introducing dimensionless variable x = (2r|F_2|/B)^1/2 k_z and α = sgn(F_2)F_4B/24 r |F_2|^2 then the integration can be calculated as
A_r ∝Re∫exp[i(2π rF(k_z)/B + rθ + ϕ_M)] dk_z
∝Re exp[i(2π rF_0/B + rθ + ϕ_M) ] ∫exp[sgn(F_2)iπ/2 x^2 (1+α x^2)]dx
∝cos[r (2πF_0/B + θ) + ϕ_M + δ].
where phase δ is the argument of the last integral
δ = arg{∫^x_m_x_mexp[sgn(F_2)iπ/2 x^2 (1+α x^2)] dx}.
δ is numerically determined by carrying out the integral for a given value of α <cit.>, where F_2 and F_4 are found from a polynomial fit of F(k_z) around the extremal orbit. The integration limit x_m can be taken as ∞ when α>0 because the main contribution comes from x≈ 0.
However, this argument does not apply when α<0 due to the two extra artificial extrema. Since the real cross-section varies monotonically on either side of x=0, x_m should be taken less than the turning point 1/√(2|α|) to avoid these artificial extrema. In calculation, the argument of the integral goes to a steady value before the turning point, which should be assigned as δ. It's obvious that δ= 0 from Eq. (<ref>) when F(k_z)=F_0.
For a general 3D material, if α→ 0 (i.e., F_4 → 0 and F_2 k_z^2 is the leading dispersion), one can get δ=±π/4. Otherwise, δ may take a value between 0 and ±π/4.
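For concreteness, the integral above can be evaluated numerically in a few lines. The sketch below (Python/NumPy, with an illustrative finite integration limit and a simple quadrature) is only meant to show the procedure; F_2 and F_4 are assumed to be obtained beforehand from a polynomial fit of F(k_z).

import numpy as np

def dimension_phase(F2, F4, B, r=1):
    # Phase delta of an extremal orbit from the expansion coefficients of F(k_z)
    if F2 == 0:                                        # flat F(k_z): purely 2D limit
        return 0.0
    alpha = np.sign(F2) * F4 * B / (24.0 * r * F2**2)
    # integration limit: large for alpha >= 0, below the artificial turning point otherwise
    x_m = 20.0 if alpha >= 0 else 0.9 / np.sqrt(2.0 * abs(alpha))
    x = np.linspace(-x_m, x_m, 200001)
    integrand = np.exp(1j * np.sign(F2) * 0.5 * np.pi * x**2 * (1.0 + alpha * x**2))
    return np.angle((x[1] - x[0]) * integrand.sum())   # tends to +/- pi/4 as alpha -> 0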
§.§ Phase φ_A
The last phase φ_A depends on the type of physical quantity A. When A is the density of states (DOS), this phase vanishes φ_DOS=0. For other quantities, φ_A represents the connection between the oscillation of A and the oscillation of DOS. For example, φ_M=π/2 if A is sample magnetization, and φ_χ=π if A is magnetic susceptibility. In four terminal devices, the longitudinal conductivity σ_xx oscillates in phase with DOS hence φ_σ=0. But since σ_xx=ρ_xx/(ρ_xx^2+ρ_xy^2), the resistivity ρ_xx can be in phase (if ρ_xx≪ρ_xy) or out of phase (if ρ_xx≫ρ_xy) with σ_xx, so φ_ρ = 0 if ρ_xx≪ρ_xy or φ_ρ = π if ρ_xx≫ρ_xy<cit.>.
To summarize, all the phases in the oscillation term Eq. (<ref>) have the following intuitive explanations. First, the magnetic-field-dependent term l_B^2 S_F is given by the combination of the de Broglie phase (determined by the number of wavelengths in an orbit) and the Aharonov–Bohm phase. Then there is a phase λ_a, associated with each orbit and each band, coming from geometric effects and the magnetic moment energy. The λ_a of the degenerate bands for the same orbit combine to give the phase θ. The reflection of the wave packet at the turning points of the orbit gives the phase ϕ_M. These phases constitute the total phase for a single orbit lying in the k_x-k_y plane. For 3D materials, the k_z integration needs to be carried out to incorporate the contribution of the whole Fermi surface, which gives the phase δ. Finally, depending on which quantity A is measured, there is another phase φ_A if the oscillation of A is not synchronized with the oscillation of DOS.
§ RESULTS AND DISCUSSIONS
The crystal structure of CsTi_3Bi_5 is fully relaxed within Density Functional Theory (DFT) as implemented in the Vienna Ab initio Simulation Package <cit.>. The cutoff energy for the plane-wave basis set is 300 eV. The force convergence criterion is 5 meV/Å.
The electronic structure is calculated with the full-potential local-orbital minimum-basis code (FPLO) <cit.>.
The default atomic basis sets are employed for the wave function expansion.
The generalized gradient approximation parameterized by Perdew, Burke, and Ernzerhof (PBE) <cit.> is employed to mimic the exchange-correlation interaction between electrons throughout.
The Brillouin zone is sampled by a k-mesh of 12×12×6.
The tight-binding Hamiltonian of CsTi_3Bi_5 is extracted via the maximally localized Wannier functions <cit.> as implemented in FPLO, which enforces all crystal symmetries.
The Wannier basis set is composed of the Ti d and Bi p orbitals.
The Fermi surface is calculated with the tight-binding Hamiltonian on a k-mesh of 300×300×100.
We mention that the above Wilson loop method for the total oscillation phase shift has been successfully applied to the 𝒫𝒯-symmetric kagome metal CsV_3Sb_5 <cit.>, where it predicted results consistent with experiments.
In the following, we will apply the Wilson loop method to the recently discovered kagome superconductor CsTi_3Bi_5 <cit.> to further demonstrate the reliability of this method.
We note here that the characterization of the dimensionality of the quantum orbit by the phase δ has not been discussed in our previous work on CsV_3Sb_5.
The band structure of CsTi_3Bi_5 with spin-orbit coupling is plotted in Fig.<ref>(a), which contains rich topological properties.
Due to the 𝒫𝒯 symmetry in CsTi_3Bi_5, each band is doubly degenerate.
Characteristic features of the kagome lattice, such as Dirac points at K/H points away from the Fermi level which are gapped by SOC, van Hove singularities at M/L, and flat bands along M-K/L-H lines <cit.>, are shown. There are also type II Dirac crossings on the Γ-M and A-L lines, which form a Dirac nodal line <cit.> in the Γ-M-A plane. Besides, both the experiment and theory have shown that CsTi_3Bi_5 has topological Dirac surface states at the Γ point on the (001) surface <cit.>.
The band structure on the k_z = 0 plane looks similar to the band structure on the k_z = 0.5 plane (in units of 2π/c, c is the lattice constant), which indicates the quasi-two-dimensional feature of the electronic structure of CsTi_3Bi_5. Indeed, the 3D Fermi surface shown in Fig.<ref>(b) shows a good cylindrical shape for all pieces.
In total, four bands cross the Fermi level, creating five pieces of the Fermi surface. By sweeping k_z, all extremal quantum orbits perpendicular to the z-direction are found to be located in the two mirror planes k_z=0 and k_z=0.5, as shown in Fig. <ref>(c) and (d). The initial experiment reported an oscillation frequency of 200 T <cit.>.
A more recent transport experiment <cit.> reported a series of oscillation frequencies, ranging from 217 to 1013 T.
Our calculations show agreement with the experiments in the low-frequency region. For example, the calculated frequencies of 213, 336, and 542 T might correspond to the observed frequencies of 200/217, 281, 498 or 594 T, respectively.
We notice that our calculated frequencies are slightly different from the calculations in Ref. Dong2023CTB, which might be induced by the mismatch of Fermi energy and/or different calculation parameters employed.
The cyclotron masses m^* of all calculated quantum orbits are summarized in Table <ref>.
Except for the two small pockets (336 and 213 T) around M/L points, all other orbits are electron pockets, whose cyclotron masses are defined as positive.
The two largest hexagonal orbits centered around the Γ point (7488 and 8111 T) have the largest cyclotron masses (1.6∼1.7) while others have relatively small cyclotron masses.
The different quantum oscillation phases of all orbits, as introduced above, are calculated and listed in Table <ref>.
Here every cyclotron orbit is a simple closed curve; thus the Maslov correction ϕ_M=π is omitted in the table.
The phase λ_a is calculated by Eq. (<ref>) with random gauge choices to test the gauge invariance; all choices yield the same results.
We also confirm the relation λ_1 = -λ_2 for any two degenerate quantum orbits imposed by the 𝒫𝒯 symmetry.
Thus only the positive one λ_1 is listed. The Berry phases without (ϕ_B0) or with SOC (ϕ_B) are also listed for comparison.
According to our previous discussion of Eq. (<ref>), the final phase shift of the quantum orbit θ must be quantized to either 0 or π, depending on the magnitude of λ_1, as listed in Table <ref>.
From these phases, it's clear that phase λ_1 is in general different from the Berry phase ϕ_B due to the orbital and spin magnetic moment contribution. Also for the 𝒫𝒯-symmetric system, the topology of the quantum orbit is not equivalent to the band topology of the individual Fermi surface. For example, the quantum orbits of 336 T (around M) and 8111 T (around Γ) have Berry phases ϕ_B close to 0 but the oscillation phase shifts are π.
On the contrary, the quantum orbit of 4907 T (around A) has a Berry phase close to π but a zero oscillation phase shift.
We note here that the strong SOC is important because these orbits have only a trivial Berry phase in the spinless case.
Therefore, the incorporation of the magnetic moment contribution in the oscillation phase by SOC is crucial and the quantum phase shift extracted from the Landau fan diagram should be interpreted more carefully, rather than just interpreting it as the Berry phase.
The recent experiment <cit.> finds that the quantum orbit of 281 T is non-trivial with a π phase shift (θ=π), which is consistent with our calculated non-trivial quantum orbit of 336 T.
Because the 3D Fermi surface is nearly cylindrical, the dimension-related phase δ should be determined by considering higher order terms in the expansion of F(k_z) in Eq. <ref>. From numerical calculations, the frequency F and cyclotron mass m^* of all extremal orbits have a small relative change on the Fermi surface (less than 5% in the interval |Δ k_z| ≤ 0.1).
Since CsTi_3Bi_5 has 𝒫𝒯 symmetry and all extremal orbits lie in mirror planes, Eq. (<ref>) applies and is used to calculate the phase δ.
The phase δ is calculated with the magnetic field B varying from 5 T to 40 T, covering the range of B in general oscillation experiments <cit.>.
The variation of δ is very small in the considered B range. Thus the δ can be approximately treated as a constant, whose average value is listed in Table <ref>. It shows that all quantum orbits except for the 213 and 802 T ones have a phase δ quite close to ±π/4.
Therefore, most orbits should be classified as 3D cases in quantum oscillation, even though the Fermi surfaces in Fig. <ref>(b) show a strong quasi-2D feature.
On the other hand, the Fermi surface around A is almost dispersionless along k_z, so the δ for the quantum orbit of 802 T is closer to zero than others. As a result, this quantum orbit is 2D. However, the quantum orbit of 713 T which comes from the same Fermi surface as the 802 T orbit but on the k_z=0 plane, has a δ=π/4.
Consequently, the character (2D or 3D) of a quantum orbit should not be simply determined from the appearance of the related Fermi surface in the 3D k space.
§ CONCLUSION
We theoretically studied the quantum oscillations of CsTi_3Bi_5 by revealing their frequencies and topological phases through a Wilson loop method.
We revealed three quantum orbits with θ = π phase shift.
Although most Fermi surfaces are quasi-2D,
the dimension-related phase δ, beyond the angle-dependent frequency, clearly indicates the 3D nature of the corresponding quantum orbits.
Our method can be applied to other quantum materials and provides a general way to study quantum oscillations assisted by first-principles calculations.
§ ACKNOWLEDGEMENT
B.Y. acknowledges the financial support by the European Research Council (ERC Consolidator Grant “NonlinearTopo”, No. 815869) and the ISF - Personal Research Grant (No. 2932/21).
§ APPENDIX
The most general gauge transformation is a U(D) basis transformation among the degenerate bands
|u_n k⟩ →∑_m=1^D U(k)_m n|u_m k⟩ , U^-1=U^†,
It has already been shown that the propagator 𝒜[𝔬] is gauge covariant under such a transformation <cit.>, provided that the same wave function is used at the initial and final points, i.e., | u(k_N+1) ⟩=| u(k_1) ⟩. Here we follow the same approach to show that our numerical formulas inherit this property, so that they are appropriate for calculation.
First, covariant derivatives transform as states under the U(D) gauge transformation
|u_n, k+q⟩ →∑_n^'(U(k)^† S_k, k+q U(k+q))^-1_n^' n|u_n^', k+q⟩
= ∑_n^',m,l,m^'U(k+q)^-1_n^'m (S_k, k+q^-1)_m l U(k)_l n
U(k+q)_m^'n^'|u_m^', k+q⟩
= ∑_m,l (S_k, k+q^-1)_m l U(k)_l n|u_m, k+q⟩
= ∑_l U(k)_l n|u_l, k+q⟩
which makes the covariant-derivative expression of the Roth term (<ref>) transform covariantly. This is also true for the matrix-element expression of the Roth term (<ref>) and for the spin matrix σ_z, meaning that
R(k_i)_m n· d k_i
→ U(k_i)^-1R(k_i)_m n· d k_i U(k_i)
σ_z(k_i)_m n· d k_i
→ U(k_i)^-1σ_z(k_i)_m n· d k_i U(k_i)
Therefore, the second term in (<ref>) is also gauge covariant
exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]
→exp{i U(k_i)^-1[R(k_i) · dk_i+Zσ^z/v^⊥|dk_i|]U(k_i)}
= U(k_i)^-1exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|] U(k_i)
Besides, the overlap matrix M^i_mn=⟨ u_m k_i+1|u_n k_i⟩ transforms like
M^i→ U(k_i+1)^-1 M^i U(k_i)
Hence, the covariance of discretized propagator (<ref>) follows from the transformation properties of the two separate terms as
𝒜[𝔬] →∏_i=1^N U(k_i+1)^-1 M^i U(k_i) · U(k_i)^-1
exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]U(k_i)
= U(k_N+1)^-1
{∏_i=1^N M^i·exp[iR(k_i) · dk_i+i Zσ^z/v^⊥|dk_i|]} U(k_1)
= U(k_1)^-1𝒜[𝔬] U(k_1)
Since the propagator 𝒜[𝔬] transforms covariantly, its spectrum {e^iλ_a}_a=1^D is gauge invariant. In other words, the phase λ_a obtained through these numerical formulas is uniquely determined (modulo 2π), independent of the gauge choice in the calculation.
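The telescoping cancellation used above can also be verified numerically with synthetic matrices. In the short sketch below (Python/NumPy), random D×D matrices stand in for the overlap matrices M^i and random Hermitian matrices for the exponents of the Roth/Zeeman factors; the spectrum of the discretized propagator is checked to be unchanged under a random U(D) gauge that is identical at the initial and final points. This only illustrates the algebra, not a real band structure.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D, N = 2, 6

def rand_complex():
    return rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

def rand_unitary():
    q, _ = np.linalg.qr(rand_complex())
    return q

M = [rand_complex() for _ in range(N)]                              # stand-ins for the overlaps
H = [0.5 * (a + a.conj().T) for a in (rand_complex() for _ in range(N))]  # Hermitian exponents
U = [rand_unitary() for _ in range(N)]
U.append(U[0])                                                      # same gauge at k_1 and k_{N+1}

def propagator(Ms, Hs):
    A = np.eye(D, dtype=complex)
    for Mi, Hi in zip(Ms, Hs):
        A = Mi @ expm(1j * Hi) @ A                                  # later segments on the left
    return A

A0 = propagator(M, H)
A1 = propagator([U[i + 1].conj().T @ M[i] @ U[i] for i in range(N)],
                [U[i].conj().T @ H[i] @ U[i] for i in range(N)])
# A1 = U(k_1)^{-1} A0 U(k_1): trace and determinant, hence the spectrum, coincide
print(np.allclose([np.trace(A0), np.linalg.det(A0)], [np.trace(A1), np.linalg.det(A1)]))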
|
http://arxiv.org/abs/2307.04092v1 | 20230709042603 | Coupled-channel $D^\ast K^\ast -D_s^\ast ρ$ interactions and the origin of $T_{c\bar{s}0}(2900)$ | [
"Man-Yu Duan",
"Meng-Lin Du",
"Zhi-Hui Guo",
"En Wang",
"Dian-Yong Chen"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
[addref]
|
http://arxiv.org/abs/2307.04961v1 | 20230711015143 | Still Waters Run Deep: Extend THz Coverage with Non-Intelligent Reflecting Surface | [
"Chong Han",
"Yuanbo Li",
"Yinqin Wang"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Still Waters Run Deep: Extend THz Coverage with Non-Intelligent Reflecting Surface
Chong Han, Member, IEEE,, Yuanbo Li, Yiqin Wang
Chong Han, Yuanbo Li, and Yiqin Wang are with the Terahertz Wireless Communications (TWC) Laboratory, Shanghai Jiao Tong University, Shanghai, China (e-mail: {chong.han, yuanbo.li, wangyiqin}@sjtu.edu.cn).
August 12, 2023
=================================================================================================================================================================================================================================================================
Large reflection and diffraction losses in the Terahertz (THz) band give rise to degraded coverage abilities in non-line-of-sight (NLoS) areas. To overcome this, a non-intelligent reflecting surface (NIRS) can be used, which is essentially a rough surface made by metal materials. NIRS is not only able to enhance received power in large NLoS areas through rich reflections and scattering, but also costless and super-easy to fabricate and implement. In this article, we first thoroughly compare NIRS with the lively discussed intelligent reflecting surface (IRS) and point out the unique advantages of NIRS over IRS. Furthermore, experimental results are elaborated to show the effectiveness of NIRS in improving coverage. Last but not least, open problems and future directions are highlighted to inspire future research efforts on NIRS.
Non-intelligent reflecting surface, Terahertz communications, Coverage extension.
§ INTRODUCTION
Over the past few decades, wireless communication networks have experienced revolutionary developments, from the first generation (1G) to the most recent fifth generation (5G).
Nonetheless, looking towards 2030, the mobile communication network will further evolve to the sixth generation (6G), where the internet-of-everything (IoE) is expected to be achieved with ubiquitous network coverage and massive connectivity <cit.>. Abundant intelligent devices, such as smartphones, mixed reality (MR) headsets, as well as sensors and machines, will generate a large amount of traffic and data for wireless communications. To support them, ultra-high data rates (e.g., up to 1 Terabit per second) are needed, which cannot be fulfilled by current spectrum resources and thus motivate the exploration of the Terahertz (THz) band. Spanning the frequencies between 0.1 THz and 10 THz, the THz band is envisioned as a key technology to address the spectrum scarcity and capacity limitations of current wireless systems <cit.>, thanks to its broad contiguous bandwidth (from tens up to hundreds of GHz).
Wonderful as THz communication is, it has its own drawbacks. Among others, one key problem is the weak coverage ability of THz communications in non-line-of-sight (NLoS) areas. At high frequencies, reflection, diffraction, and penetration losses worsen <cit.>. As a result, when line-of-sight (LoS) transmission is blocked, drastic degradation of link quality may occur. To address the LoS blockage problem, one natural solution is to add more active nodes to the network, such as base stations (BSs), access points (APs), and active relays, which, however, come with extra hardware and energy costs. By contrast, energy-efficient solutions are preferred, such as the intelligent reflecting surface (IRS), which is a passive tunable metasurface able to redirect propagating THz waves <cit.>.
However, even though the theoretical performance of IRS is extraordinary, the realization of IRS in the THz band might be difficult and far from practical, for the following reasons. First, due to the high frequencies of THz waves, thousands of elements are required to compensate for the large path loss from the IRS to the receiver. The fabrication of such a large number of IRS elements and the corresponding control circuits might be very difficult and costly. Second, owing to the small wavelength of THz waves, tiny antenna elements on the order of sub-millimeters can be fabricated and densely placed to form an ultra-massive multiple-input-multiple-output (UM-MIMO) system, resulting in improved spectral efficiency and coverage capability. Integrating UM-MIMO and IRS, the concatenated channel from the transmitter (Tx) to the IRS to the receiver (Rx) is expressed as a channel tensor with dimensions N_t× N_IRS× N_r, with N_t, N_IRS, and N_r denoting the numbers of elements of the transmitter array, the IRS, and the receiver array, respectively. The massive number of IRS elements would make accurate channel estimation of this large-scale channel tensor computationally complex, so that the joint optimization of the UM-MIMO and the IRS is hard to achieve. Therefore, there is still a long and thorny path ahead before IRS can be practically implemented in the THz band.
By contrast, a more realistic and easier way is to use a non-intelligent reflecting surface (NIRS), which is essentially a rough surface simply made of metal materials. Compared to IRS, NIRS loses the ability to adapt to mobile users or to suppress interference from neighbouring BSs, while gaining advantages such as nearly zero cost, no fabrication, and super-easy deployment. We hereby note that NIRS is different from the frequently mentioned reflectors in the cmWave and mmWave bands <cit.>, as follows. NIRS is rough and requires no specific design, while a reflector is a smooth surface acting as an electromagnetic mirror. The reason that NIRS is preferred over reflectors is two-fold. On one hand, due to the small wavelength, THz waves are more sensitive to surface roughness, resulting in a stricter requirement for a reflector to be smooth considering the sub-millimeter wavelength. Lower fabrication difficulty is the key advantage of NIRS compared to reflectors. On the other hand, the high sensitivity of THz waves leads to strong scattering, i.e., non-specular reflections, especially when interacting with rough metal surfaces. Even though NIRS performs worse than reflectors in specular directions, it can simultaneously enhance the signal strength in non-specular directions, thus covering wider NLoS areas. In summary, even though NIRS appears more clumsy than IRS or reflectors, its outstandingly low cost, low utilization difficulty, and wide coverage make it a good technique for coverage extension in THz networks.
Fig. <ref> illustrates several typical use cases of NIRS, in both indoor and outdoor scenarios. For instance, in L-shaped corridors, the LoS path is blocked by walls, which results in significant link performance degradation. In light of this, NIRS can be deployed on the walls near the turning corner to provide single-scattering or higher-order reflection paths that enhance the coverage in the NLoS region. Moreover, objects in indoor rooms, such as bookshelves in libraries or server racks in data centers, can also shelter the receivers from the access points. In this regard, the NIRS can be deployed on walls or ceilings to bypass the blocking objects. Similarly, the blockage by high-rise buildings in urban areas can also be addressed by deploying NIRS on building surfaces. Last but not least, for pedestrians communicating with lamppost base stations, the blockage by the human body and foliage can be severe, due to the weak penetration ability of THz waves. Therefore, NIRS deployed on nearby walls or the ground can help redirect a reliable link.
In summary, NIRS is a promising technique for THz communications to address the LoS blockage problem by exploiting the benefits of rough-surface scattering. Motivated by this, we provide an overview of NIRS, including its main advantages and disadvantages compared to IRS in terms of flexibility, fabrication and design difficulty, and compatibility with UM-MIMO systems. Moreover, to show the efficacy of NIRS, a preliminary experiment in an indoor corridor scenario is elaborated, in which coverage and capacity are improved with the deployment of NIRS.
Furthermore, open problems and future challenges are highlighted to inspire future research, including NIRS channel modeling, reliable design, deployment and coordination optimization, and joint communication and sensing enhancement.
§ IRS V.S. NIRS: TRADE-OFF OF PERFORMANCE AND IMPLEMENTATION DIFFICULTY
As an overview, the comparison of IRS and NIRS is shown in Table <ref>. In short, the attractive flexibility and intelligence of beam control via IRS come at the price of high hardware and computation costs, which may prevent its realization in the THz band. On the contrary, the design and usage of NIRS are much easier in practice, while still providing noticeable coverage enhancement.
§.§ Beam Control Ability
High flexibility and adaptability are the key advantages of IRS compared to NIRS, and they are what make IRS intelligent. IRS is composed of a large number of reflecting elements, which can manipulate the reflection amplitude and phase shift of the impinging THz waves. Enabled by them, IRS has the so-called passive beamforming ability, i.e., the scattering pattern can be electrically controlled to realize certain purposes. For instance, when used for coverage extension, IRS can intelligently concentrate the signal power towards the directions of users. As the user moves, the IRS beam constantly steers to follow, maintaining reliable communication links. Moreover, just like active beamforming with antenna arrays, IRS can also create nulls (zero directions) to suppress interference from neighbouring BSs in the downlink or towards them in the uplink.
In this regard, NIRS is much more clumsy and thus non-intelligent. It can neither change the scattering pattern on the fly to track mobile users, nor cancel interference in multi-user or multi-BS scenarios. In fact, NIRS may cause more interference since all NLoS signals are enhanced, whether they come from the targeted BS/AP or from other BSs/APs. Therefore, before deployment in realistic communication networks, it is important to appropriately design NIRS and place it in positions maximizing the signal-to-interference-plus-noise ratio (SINR).
§.§ Link Path Loss
A key factor in evaluating link performance is the path loss, which determines the signal-to-noise ratio (SNR) at the receiver side. Due to the different design considerations, the path losses of NIRS- and IRS-aided communication links are fundamentally different. Particularly, for IRS, the communication channel is a concatenation of two segments, namely the Tx-IRS channel and the IRS-Rx channel. Moreover, to steer the IRS beams toward any user location, IRS elements are designed to scatter the incident signal omnidirectionally. In other words, the reflected signal from a single IRS element behaves as if it were radiated by an omnidirectional antenna. As a result, the path loss of an IRS-aided communication link grows with the product of the distance from Tx to IRS and the distance from IRS to Rx, as explained by the product-distance path loss model in <cit.>. Considering only the LoS path and using Friis' formula, the free-space path loss at 300 GHz exceeds 80 dB for a distance of only 1 m. As a result, the overall path loss of the Tx-IRS-Rx link would surpass 160 dB if the Tx-IRS and IRS-Rx distances are both 1 m. To overcome the severe path loss, high beamforming gains and thus a very large number of IRS elements are needed. For example, as analyzed in <cit.>, to outperform a direct link, the number of IRS elements needs to exceed 4096 in the THz band.
Unlike the omnidirectional IRS elements that spread the signal energy uniformly in all directions, the reflected/scattered signals from NIRS are concentrated in several directions, such as the specular direction. Therefore, the path loss of the Tx-NIRS-Rx link consists of two parts, namely a spreading loss part and an additional reflection loss part. The spreading loss part depends on the overall link distance, i.e., it grows with the sum of the Tx-NIRS and NIRS-Rx distances, following the sum-distance path loss model <cit.>. Moreover, the additional reflection loss part depends on the reflection angle of the NIRS-Rx direction. In strong reflection/scattering directions, such as the specular direction, the additional reflection loss can be as low as several dB, while in other directions it can increase up to tens of dB. Generally speaking, an overall path loss of 100-140 dB can occur for NIRS-aided links, depending on the propagation distance and receiver location <cit.>. Nonetheless, compared to the situation in NLoS areas without NIRS, the path loss values are mitigated by 3-17 dB.
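To make the contrast between the two scaling laws explicit, a back-of-the-envelope comparison at 300 GHz is sketched below in Python. The 10 dB additional reflection loss assigned to the NIRS branch is an illustrative placeholder within the several-dB-to-tens-of-dB range discussed above, not a measured value.

import numpy as np

C = 3e8                      # speed of light, m/s
F = 300e9                    # carrier frequency, Hz

def fspl_db(d):
    # Friis free-space path loss in dB at distance d (m)
    return 20.0 * np.log10(4.0 * np.pi * d * F / C)

d1, d2 = 1.0, 1.0                                    # Tx-surface and surface-Rx distances, m
irs_pl = fspl_db(d1) + fspl_db(d2)                   # product-distance model, single IRS element
nirs_pl = fspl_db(d1 + d2) + 10.0                    # sum-distance model + reflection loss

print(f"IRS single-element link: {irs_pl:.1f} dB")   # about 164 dB
print(f"NIRS-aided link:         {nirs_pl:.1f} dB")  # about 98 dB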
§.§ Fabrication and Control Difficulty
Extremely easy fabrication and control are the key strengths of NIRS over IRS. As mentioned above, to exploit the potential of IRS in the THz band, thousands of elements are needed to compensate for the significant path loss, each of which is electrically controlled. Moreover, due to the small wavelength of THz waves, IRS elements need to be rather small to scatter the signals omnidirectionally, e.g., a fifth of the wavelength (tens of micrometers). Such ultra-massive yet ultra-tiny IRS elements, and more importantly their corresponding control circuits, would be extremely costly and difficult to fabricate. By contrast, NIRS requires no specific design, and its fabrication is much easier and cheaper. As reported in <cit.>, NIRS can be made simply with very cheap aluminium foils, yet still realizes considerable coverage extension for THz communications.
§.§ Joint Usage with UM-MIMO
To exploit high spectral efficiency through spatial multiplexing, UM-MIMO is a critical technology in the THz band <cit.>. When embedding IRS in UM-MIMO systems, the joint optimization requires knowledge of the concatenated Tx array-IRS-Rx array channel, which is computationally complex due to its high dimensionality. By contrast, the joint usage of NIRS and UM-MIMO is natural, since NIRS needs no real-time control and thus adds no additional signal processing burden. Moreover, since NIRS creates rich reflection and scattering, it improves the spatial degrees of freedom, which are otherwise limited in THz UM-MIMO systems due to the sparse THz channel. Consequently, the rank of the UM-MIMO channel matrix and the spatial multiplexing gain increase, which further improves the channel capacity.
§ EXPERIMENTAL RESULTS FOR NIRS-AIDED THZ COVERAGE EXTENSION
In this section, we elaborate on an experiment that uses NIRS to extend THz coverage in a corridor scenario. The experiment set-up, deployment, and results are explained in the following subsections.
§.§ Experiment Set-up and Deployment
Experiments are conducted with a vector network analyzer (VNA)-based channel sounder, which is introduced in detail in <cit.>. Specifically, the measured frequency bands are 306–321 GHz and 356–371 GHz. During the channel measurement, the transmitter is equipped only with a standard WR2.8 waveguide to provide wide-beam coverage, which has a 7 dBi antenna gain and a 30^∘ half-power beamwidth (HPBW). On the Rx side, to obtain omnidirectional channel observations, a direction-scan sounding (DSS) scheme is utilized. Particularly, equipped with a directional horn antenna with a 25 dBi antenna gain and an 8^∘ HPBW, the Rx scans the spatial domain in 10^∘ angle steps, from 0^∘ to 360^∘ in the azimuth plane and -20^∘ to 20^∘ in the elevation plane.
The measurement scenario is an indoor corridor on the second floor of the Longbin Building at Shanghai Jiao Tong University. Particularly, the transmitter is placed in the middle of the corridor near room a and remains static, while 9 Rx positions in NLoS areas in room d are selected, as shown in Fig. <ref>. To test the effectiveness of NIRS, a homemade NIRS with a size of 1.2 m × 1.2 m is glued near the turning corner. As shown in Fig. <ref>, the NIRS is a foam board overlaid by aluminium foils that are manually pasted, resulting in a rough and irregular metal surface.
§.§ Power Enhancement By Including NIRS
To observe the performance of NIRS, we compare the measured path loss with and without NIRS. Since only a limited number of Rx positions are measured, due to the high time cost of channel measurements, the path loss at positions between adjacent Rx locations is obtained through linear interpolation in order to analyze the coverage of the whole area. The power enhancement is then calculated as the difference in path loss before and after adding the NIRS; the results are shown in Fig. <ref> and several observations are made as follows.
First, for both the 306–321 GHz and 356–371 GHz bands, the received power is enhanced in most areas. Specifically, 63.6% of the area at 306–321 GHz and 51.6% of the area at 356–371 GHz obtain a power enhancement of more than 3 dB. Moreover, the maximum power enhancement is 12.56 dB and 9.56 dB at 306–321 GHz and 356–371 GHz, respectively. This proves the effectiveness of the NIRS. Second, the power enhancement from adding the NIRS is not uniform across the NLoS areas. At certain Rx positions, such as the top-middle receiver location, the path loss is decreased by nearly 10 dB, while at other Rx positions, especially those far away from the NIRS, the received power barely changes. Third, comparing the two frequency bands, the power enhancement shows different patterns. Therefore, it might be hard to control the NIRS to enhance the received power at specific Rx locations.
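A minimal sketch of the post-processing described above is given below: path loss sampled at a few Rx positions is linearly interpolated over the room, and the power enhancement is the per-position difference of path loss without versus with the NIRS. The Rx coordinates and path-loss values are hypothetical placeholders, not the measured data.

import numpy as np
from scipy.interpolate import griddata

rx_xy = np.array([[0.5, 0.5], [1.5, 0.5], [2.5, 0.5],
                  [0.5, 1.5], [1.5, 1.5], [2.5, 1.5]])       # measured Rx positions (m), assumed
pl_without = np.array([128., 131., 135., 130., 134., 138.])  # path loss without NIRS (dB), hypothetical
pl_with    = np.array([120., 126., 134., 122., 129., 137.])  # path loss with NIRS (dB), hypothetical

# Dense grid covering the NLoS area
gx, gy = np.meshgrid(np.linspace(0.5, 2.5, 50), np.linspace(0.5, 1.5, 25))
pl_wo_grid = griddata(rx_xy, pl_without, (gx, gy), method="linear")
pl_w_grid  = griddata(rx_xy, pl_with,    (gx, gy), method="linear")

enhancement = pl_wo_grid - pl_w_grid          # dB gained by adding the NIRS
frac_above_3db = np.mean(enhancement > 3.0)   # fraction of the area with >3 dB gain
print(f"max enhancement: {np.nanmax(enhancement):.2f} dB, "
      f"area with >3 dB: {100 * frac_above_3db:.1f}%")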
§.§ Channel Capacity With/Without NIRS
To clearly show the effectiveness of NIRS, the SNR is calculated based on the measured path loss results, assuming a realistic THz communication link with the reference parameters in <cit.>. Key parameters include a bandwidth of 15 GHz, a transmit power of 13 dBm, and Tx and Rx antenna gains of 25 dB. Based on the SNR results, the channel capacity is then evaluated, as shown in Fig. <ref>. The results show that by adding the NIRS, the channel capacity in NLoS areas is greatly increased, especially in the 306–321 GHz band. Specifically, the average channel capacity increases from 5.42 Gbps to 13.55 Gbps at 306–321 GHz, and from 3.46 Gbps to 7.97 Gbps at 356–371 GHz. Moreover, with NIRS, in the best ten percent of the area the channel capacity exceeds 27.08 Gbps and 15.85 Gbps at 306–321 GHz and 356–371 GHz, respectively, while the corresponding values without NIRS are only 8.96 Gbps and 4.73 Gbps. Therefore, by including the NIRS, the channel capacity doubles or even triples compared to the case without NIRS, proving its effectiveness.
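The following hedged sketch illustrates this link-budget evaluation with the stated reference parameters (15 GHz bandwidth, 13 dBm transmit power, 25 dB Tx/Rx antenna gains). The receiver noise figure and the example path-loss values are assumptions not given in the text, so the printed capacities are illustrative rather than a reproduction of the measured results.

import numpy as np

BW_HZ   = 15e9     # bandwidth
PTX_DBM = 13.0     # transmit power
G_TX_DB = 25.0     # Tx antenna gain
G_RX_DB = 25.0     # Rx antenna gain
NF_DB   = 10.0     # receiver noise figure (assumed)

def capacity_gbps(path_loss_db):
    # Thermal noise floor plus noise figure, then Shannon capacity
    noise_dbm = -174.0 + 10.0 * np.log10(BW_HZ) + NF_DB
    snr_db = PTX_DBM + G_TX_DB + G_RX_DB - path_loss_db - noise_dbm
    snr = 10.0 ** (snr_db / 10.0)
    return BW_HZ * np.log2(1.0 + snr) / 1e9   # Gbps

for pl in (110.0, 125.0, 140.0):   # hypothetical NLoS path loss values (dB)
    print(f"path loss {pl:.0f} dB -> capacity {capacity_gbps(pl):.2f} Gbps")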
§ OPEN PROBLEMS AND FUTURE DIRECTIONS
To effectively make use of NIRS, several open problems need to be addressed, including the channel modeling of the NIRS-aided communications, reliable design for site-specific coverage extension, optimal deployment and coordination of multiple NIRS, and possible joint communication and sensing enhancement.
§.§ NIRS Channel Modeling
Unlike IRS and reflectors, which involve either diffuse scattering or specular reflection, the scattering from a rough NIRS depends on multiple factors, including surface roughness, material, surface size, etc. To characterize NIRS channels and further evaluate the link performance of NIRS-aided communications, an accurate yet efficient channel model is necessary. Since the problem of wave scattering from rough surfaces has no closed-form solution, existing studies usually use approximate solutions. For instance, Kirchhoff scattering theory might be used to calculate the scattering coefficient, which in turn depends on the standard deviation of the rough-surface height <cit.>. However, since the fabrication and design of NIRS are casual, such quantitative characteristics might not be available. Therefore, other models, such as statistical models or empirical fitting results based on real measurements, may be preferred in practice.
Another key problem related to NIRS channel modeling is the assessment of the multipath richness and near-field effects resulting from NIRS. As mentioned in Sec. II-D, NIRS can be easily embedded into UM-MIMO systems. The spatial multiplexing gain and channel capacity of UM-MIMO links rely heavily on the number of significant paths in the communication channel. By introducing NIRS, the surrounding environment becomes more reflective to THz waves, so originally weak high-order reflection/scattering paths can become more significant. Meanwhile, far-field propagation might convert to near-field, or cross between near- and far-field, after NIRS scattering. Thus, the spatial multiplexing gain and channel capacity may increase. Extensive channel measurements are needed to characterize the multipath channel and analyze the channel capacity of NIRS-aided THz communications.
§.§ Reliable NIRS Design for Site-Specific Coverage Extension
With its low fabrication cost, NIRS can enhance THz coverage in NLoS areas, as discussed in Sec. III. However, the casual design of NIRS brings drawbacks such as a random scattering pattern. Without careful design, it is hard to control which NLoS area is enhanced, except for the specular directions, which always receive better coverage due to the smaller reflection loss. Yet the usage of NIRS in reality may be very site-specific, i.e., one usually has a target area or direction whose coverage needs to be improved. In such cases, the random scattering pattern of NIRS prevents its effective usage.
There are several possible research directions to address this issue. First, accurate modeling of the scattering pattern could enable reliable designs for practical use, which, however, might be difficult for the aforementioned reasons. Second, one way to obtain a desired scattering pattern is to control the roughness of the NIRS. For this purpose, special structures might be explored, such as placing polished metal cubes of different heights in a grid pattern. This scheme is promising, yet it increases the design difficulty. Third, a possibly cost-effective and simple solution is to add more NIRS in the NLoS area, similar to the use of double IRS for improved performance <cit.>. However, this may incur interference problems, for which the joint deployment and coordination of multiple NIRS need to be considered.
§.§ NIRS Deployment and Coordination Optimization
Since NIRS is unable to change the beam steering direction after placement, where to deploy the NIRS is a key question to investigate. Generally speaking, since the specular reflection produces the strongest reflection, it is beneficial to place the NIRS in the specular reflection points between transmitter and receiver. Nonetheless, since NIRS can also enhance high-order reflections and scattering, practical deployment is rather more complicated. Moreover, considering the mobility of users, it is usually preferred to obtain a good coverage enhancement in most of the NLoS area, rather than great improvement at several locations while neglecting others. Therefore, the NIRS deployment optimization is a question that needs to be answered.
Furthermore, as the NIRS scattering pattern is hard to control, it is intuitive to place more NIRS to fully cover the NLoS areas. Ideally, by placing multiple NIRS in appropriate positions, the coverage of THz communications can be greatly extended. The first NIRS, closest to the Tx, can cover part of the NLoS area via first-order reflection/scattering, while the second and subsequent NIRS can further extend the coverage into deep NLoS areas, i.e., areas that are far from the LoS region and barely receive enough signal strength. To achieve this, the coordination of multiple NIRS is a key problem to be solved. With more NIRS, the dimension of the optimization problem grows, for which a computationally efficient and effective method to find the global optimum is needed.
§.§ Joint Communication and Sensing Enhancement with NIRS
In 6G and beyond wireless systems, it is expected that high-level integration of sensing and communication (ISAC) will play an important role. This is even more enticing in the THz band, which promises unprecedented millimeter-level sensing accuracy. Although NIRS is proposed to extend the coverage of THz communications, it also has the potential to improve sensing. By including NIRS in the surrounding environment, the reflection and scattering losses are reduced. As a result, the back-scattered echo signal used for sensing is amplified; therefore, the sensing SNR increases and higher sensing accuracy can be achieved. However, it is also possible that, if placed in inappropriate positions, scattering from NIRS causes stronger interference to the sensing echo signals. Experiments are needed to verify whether NIRS brings gains or losses to THz sensing systems. Furthermore, due to the different metrics of communication and sensing systems, effective algorithms and methods are needed to jointly optimize communication and sensing performance, or to strike a good balance between them.
§ CONCLUSION
In this article, we provided an overview of the non-intelligent reflection surface (NIRS), which is a rough surface simply made of metal materials. The advantages and disadvantages of NIRS compared to IRS are presented. Still waters run deep - with almost nil-cost and extremely low fabrication difficulty, NIRS can effectively solve the LoS blockage problem, as well as enhance coverage, channel capacity and even sensing capabilities. Experimental results show that by using the NIRS, the channel capacity in the NLoS area could double on average. To shed light on studying THz NIRS, open problems and future directions are elaborated, including the NIRS channel modeling, reliable design of site-specific usage, deployment and coordination optimization, and joint communication and sensing enhancement.
IEEEtran
|
http://arxiv.org/abs/2307.04712v1 | 20230710172149 | Machine learning potentials with Iterative Boltzmann Inversion: training to experiment | [
"Sakib Matin",
"Alice Allen",
"Justin S. Smith",
"Nicholas Lubbers",
"Ryan B. Jadrich",
"Richard A. Messerly",
"Benjamin T. Nebgen",
"Ying Wai Li",
"Sergei Tretiak",
"Kipton Barros"
] | physics.app-ph | [
"physics.app-ph"
] |
[email protected]
Department of Physics, Boston University, Boston, Massachusetts 02215
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
NVIDIA Corp., Santa Clara, CA 95051, USA
Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87546
Methodologies for training machine learning potentials (MLPs) to quantum-mechanical simulation data have recently seen tremendous progress. Experimental data has a very different character than simulated data, and most MLP training procedures cannot be easily adapted to incorporate both types of data into the training process. We investigate a training procedure based on Iterative Boltzmann Inversion that produces a pair potential correction to an existing MLP, using equilibrium radial distribution function data. By applying these corrections to an MLP for pure aluminum based on Density Functional Theory, we observe that the resulting model largely addresses previous overstructuring in the melt phase. Interestingly, the corrected MLP also exhibits improved performance in predicting experimental diffusion constants, which are not included in the training procedure. The presented method does not require auto-differentiating through a molecular dynamics solver, and does not make assumptions about the MLP architecture. The results suggest a practical framework for incorporating experimental data into machine learning models to improve the accuracy of molecular dynamics simulations.
Machine learning potentials with Iterative Boltzmann Inversion: training to experiment
Kipton Barros
August 12, 2023
======================================================================================
§ INTRODUCTION
Molecular dynamics simulations are ubiquitous in chemistry <cit.> and materials modeling <cit.>. Several methods exist for calculating interatomic forces from first principles <cit.> underpinning ab initio molecular dynamics. The cost of such atomistic quantum-mechanical (QM) calculations grows rapidly with system size, and this limits the scale of molecular dynamics simulations. Machine learning potentials (MLPs) offer a path towards achieving the fidelity of QM calculations at drastically reduced cost <cit.>.
MLP performance strongly depends on the quality of training data. Active learning is commonly used to ensure diversity of structural configurations and wide coverage of the relevant chemical space <cit.>. MLPs trained on active learned data tend to yield more stable molecular dynamics simulations <cit.>. MLPs have been successfully applied to predicting potential energy surfaces <cit.>, and have been extended to charges <cit.>, spin <cit.>, dispersion coefficients <cit.>, and bond-order quantities <cit.>. Training data sets are typically obtained with Density Functional Theory (DFT), which serves as a reasonably accurate and numerically accessible reference QM approach. MLPs impose symmetry constraints (rotation, translation, permutation) and typically assume that energy can be decomposed as a sum of local atomic contributions (nearsightedness cutoff of order 10 Å) but are otherwise extremely flexible, and may involve up to 10^5 fitting parameters <cit.>. This is in stark contrast to classical force fields <cit.>, which involve simple and physically-motivated functional forms, with fewer fitting parameters. A limitation of classical force fields is that they can be system-specific, and may require tuning or even re-fitting for new applications.
In contrast to classical force-fields, MLPs trained to a sufficiently broad training dataset can exhibit remarkable accuracy and transferability <cit.>. A recent study of aluminum used an active learning procedure to train a MLP for bulk aluminum, without any hand-design of the training data <cit.>. The resulting model was capable of accurately reproducing low-temperature properties such as cold curves, defect energies, elastic constants and phonon spectra. Despite these successes, there is still room for improvement. The MLP predicts an overstructured radial distribution function (RDF) in the liquid phase <cit.> relative to experiment <cit.>, and the deviation grows with increasing temperature. Such overstructured RDFs have been previously reported for ab initio calculations <cit.>. This suggests that error in MLP-driven simulations may be due to limitations of the DFT reference calculations <cit.>, rather than training-set diversity. To test this hypothesis, in this work we verify that the overstructured RDFs appear for two distinct MLP architectures, providing evidence that the limitation is either in the DFT training data itself, or in a fundamental assumption of both MLP architectures [It seems possible that the nearsightedness assumption of traditional MLPs excludes information that would be important to make a determination about the electronic structure of the global many-body quantum state.]. Therefore, an important question is how to improve MLPs by explicitly including experimental liquid-phase data, such as RDFs, into the training procedure.
While MLPs trained to large datasets of high-fidelity QM calculations <cit.> have seen explosive growth, training to experimental data remains underutilized <cit.>. This is partly because there are well-established workflows, such as stochastic gradient descent, for training MLPs directly to their target output, i.e., the energy and forces of a microscopic atomic configuration. Such atomistic-level data cannot typically be accessed experimentally <cit.>, where measurements frequently provide information on quantities averaged over some characteristic length and time scales. Sparsity and frequently unknown errors (e.g. introduced by defects) in experimental data further complicate the problem. An alternative method for training to experimental observables would involve auto-differentiating through a molecular dynamics simulation that is used to measure statistical observables <cit.>. This direct automatic-differentiation approach may be impractical in various situations: it requires memory storage that grows linearly with the trajectory length, and will also exhibit exploding gradients when the dynamics is chaotic. To address the latter, Ref. thaler2021learning introduced the differentiable trajectory re-weighting method, which uses re-weighting to avoid costly automatic differentiation when training MLPs. An alternative approach is the inverse modeling methods of statistical mechanics, which optimize microscopic interactions to match macroscopic time-averaged targets such as equilibrium correlation functions <cit.>. Thus targets obtained from experiments can be readily utilized by inverse methods <cit.>, which do not require a differentiable molecular dynamics solver. Inverse methods have been successfully applied to both fluid <cit.> and solid state targets <cit.> as well as designing systems with specific self-assembly objectives <cit.>. In particular, Iterative Boltzmann Inversion (IBI) is a popular inverse method, which optimizes an isotropic pair potential to match target RDF data <cit.>.
In this paper, we use IBI to construct a corrective pair potential that is added to our MLP to match experimental RDF data. To highlight the generality of this approach, we report results for two distinct neural network models, namely the Accurate NeurAl networK engINe for Molecular Energies (ANI for short) <cit.>, which uses modified Behler-Parrinello Atom-Centered Symmetry Functions with nonlinear regression, and the Hierarchically-Interacting-Particle Neural Network (HIP-NN) <cit.>, a message-passing graph neural network architecture. Trained to the same aluminum data set <cit.>, the two MLPs behave qualitatively similarly. They accurately reproduce low-temperature properties such as cold curves and lattice constants in the solid phase. In the liquid phase, however, both MLPs predict overstructured RDFs and underestimate diffusion. To address these MLP errors, we use IBI to design temperature-dependent pair potentials that correct the MLP, such that simulated RDFs match the liquid-phase experimental targets. Although the IBI only trains to the RDF (a static quantity), the corrective pair potentials also improve predictions of the diffusion constant, which is a dynamical observable. We find that the IBI corrective potentials become smaller at lower temperatures, which is consistent with the fact that the uncorrected MLP is already accurate at low temperatures. An MLP with a temperature-dependent corrective potential leverages both atomistic DFT data and macroscopic experimental training targets to achieve high accuracy at given temperatures. Future work might consider interpolating between corrective, temperature-dependent potentials to achieve high accuracy over a continuous range of temperatures.
§ METHODS
We train MLPs on the condensed phase aluminum data set from Ref. smith2021automated. The data set was generated using an automated active learning framework, which ensures adequate coverage of the configurational space <cit.>. Active learning is an iterative procedure. At each stage, non-equilibrium molecular dynamics simulations are performed using the MLP under construction. A “query by committee” metric measures the disagreement between the predictions of an ensemble of MLPs to identify gaps in the training dataset. If an atomic configuration is identified for which there is large ensemble variance, then a new reference DFT calculation is performed, and the resulting energy and forces are added to the training data. The final active learned dataset consists of about 6,000 DFT calculations, over a range of non-equilibrium conditions, with periodic boxes that contain between 55 and 250 aluminum atoms. The dataset is available online <cit.>.
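A simplified sketch of the query-by-committee selection is shown below. The per-atom energy spread criterion and the 0.02 eV/atom threshold are assumptions for illustration, not the exact criterion of Ref. smith2021automated.

import numpy as np

def select_for_dft(ensemble_energies, n_atoms, threshold=0.02):
    # ensemble_energies: array of shape (n_models, n_configs) with predicted
    # total energies (eV); n_atoms: atoms per configuration.
    # Returns indices of configurations whose per-atom ensemble spread exceeds
    # the threshold (eV/atom, an assumed value) and thus warrant new DFT data.
    spread = np.std(ensemble_energies, axis=0) / n_atoms
    return np.where(spread > threshold)[0]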
Here, we use two different MLPs: ANI and HIP-NN. The ANI MLP <cit.> uses modified Behler-Parrinello atom-centered symmetry functions <cit.> to construct static atomic environment vectors from the input configurations. Feed-forward neural network layers map the atomic environment vectors to the output energy and forces predictions. HIP-NN uses a message passing graph neural network architecture <cit.>. In contrast to ANI, HIP-NN uses learnable atomic descriptors. Additionally, HIP-NN can use multiple message passing (interaction) layers to compute hierarchical contributions to the energy and forces <cit.>. Despite the striking differences between the two architectures, our results are consistent across both MLPs. This highlights the generality of this approach.
The ANI and HIP-NN MLPs are trained from data that is available online <cit.>. The hyper-parameters for both ANI and HIP-NN are discussed in Appendix <ref>. HIP-NN achieves an out-of-sample root mean-squared error of 4.1 meV/atom for energy, comparable to the ANI MLP with an error of 3.5 meV/atom <cit.>. Additionally, ANI and HIP-NN predict ground-state FCC lattice constants of 4.054 Å and 4.037 Å, respectively, which are consistent with the experimental value of 4.046 ± 0.004 Å <cit.>. The lattice constants are computed using the Lava package <cit.>.
After training, the molecular dynamics is performed using the Atomic Simulation Environment (ASE) codebase <cit.>. We initialize the system with 2048 atoms in the FCC lattice structure and use the NPT ensemble in all MD simulations with a time-step of 1 femtosecond. The initial melt and density equilibrations are performed for 200 ps. The RDFs are then computed by averaging over 100 snapshots, each 0.1 ps apart. The RDF data is collected in bins of width 0.05 Å.
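For illustration, a minimal NumPy sketch of the RDF averaging described above (cubic periodic box, minimum-image convention, 0.05 Å bins) might look as follows; it is not the authors' analysis code.

import numpy as np

def radial_distribution(positions, box_length, r_max=10.0, dr=0.05):
    # Histogram unique pair distances into bins of width dr and normalize by
    # the ideal-gas shell count to obtain g(r) for one snapshot.
    n = len(positions)
    rho = n / box_length**3
    bins = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)     # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=bins)[0]
    r_mid = 0.5 * (bins[1:] + bins[:-1])
    shell_vol = 4.0 * np.pi * r_mid**2 * dr
    return r_mid, 2.0 * hist / (n * rho * shell_vol)   # factor 2: each pair counted once

def averaged_rdf(snapshots, box_length):
    # Average g(r) over the saved frames (100 snapshots in the text above)
    curves = [radial_distribution(p, box_length)[1] for p in snapshots]
    return np.mean(curves, axis=0)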
§ ITERATIVE BOLTZMANN INVERSION
IBI builds a pair potential u(r) such that molecular dynamics simulations match a target RDF <cit.>. Distinct from previous works <cit.>, the present study uses IBI to generate a pair potential u(r) that is a correction on top of an existing MLP. Whereas the original MLP was trained to DFT energies and forces, the corrective potential is trained to experimental RDF data.
The corrective potential is built iteratively. At each iteration t, an updated pair potential is calculated using the IBI update rule,
u^t+1(r) = u^t(r) + α k_B T w(r) ln[g^t(r)/g_E(r)],
which we have modified to include an arbitrary weight function w(r). We select a relatively small learning rate α=2×10^-4, which aids the smoothness of the corrective potential. g_E(r) is the experimental RDF. g^t(r) is the simulated RDF, generated using the sum of the MLP and corrective potential u^t(r). At the zeroth generation there is not yet a correction, u^t=0=0, such that g^t=0 corresponds to the simulated RDF for the original MLP. For our numerical implementation of u(r), we use Akima splines <cit.>. The Akima interpolation method uses a continuously differentiable sub-spline built from piece-wise cubic polynomials so that both u(r) and its first derivative are continuous. At every iteration step t, after the corrective potential is updated, MD is performed in the NPT ensemble to allow the system to equilibrate to the new density. The RDF for the next iteration is then averaged over 100 configurations from a 10 ps trajectory to ensure smoothness.
In the original IBI method, the weight function is w(r)=1. In our variant of this method, which we call the weighted Iterative Boltzmann Inversion (wIBI), we select w(r)=g_E(r). In the limit t →∞, both IBI and wIBI should converge to the same corrective potential u(r) that yields a perfect simulated RDF <cit.>. At early iterations t, however, there can be significant differences. By design, the wIBI method effectively ignores errors in the RDF at very small r, which may be associated with experimental uncertainty <cit.>, and favors corrections at the RDF peaks. We further truncate u(r) beyond 10 Å because g_E(r)→ 1 for all temperatures considered. Other functional forms for the weight w(r) may be used, provided that w(r) is positive semi-definite and w(r)>0 for all r where the experimental RDF is non-zero.
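A minimal sketch of one (w)IBI generation, following the update rule above, is given below. The Akima-spline refit of u(r) and the NPT re-equilibration between generations are omitted; the grid arrays for g^t(r), g_E(r), and u^t(r) are assumed to share the same r values.

import numpy as np

K_B = 8.617333e-5   # Boltzmann constant, eV/K

def ibi_update(u_t, g_sim, g_exp, temperature, alpha=2e-4, weighted=True):
    # One generation of u^{t+1}(r) = u^t(r) + alpha * k_B T * w(r) * ln(g^t/g_E),
    # with w(r) = g_E(r) for the weighted (wIBI) variant and w(r) = 1 otherwise.
    w = g_exp if weighted else np.ones_like(g_exp)
    # Only update where both RDFs are nonzero so the log ratio is defined.
    mask = (g_sim > 0) & (g_exp > 0)
    du = np.zeros_like(u_t)
    du[mask] = alpha * K_B * temperature * w[mask] * np.log(g_sim[mask] / g_exp[mask])
    return u_t + du

In practice, the updated tabulated u^{t+1}(r) would be refit with an Akima spline (e.g., scipy.interpolate.Akima1DInterpolator) before being added to the MLP for the next MD run.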
Figure <ref> shows how the corrective potential u^t(r) generated using the wIBI method results in an improved match with the experimental RDF at 1023 K <cit.>. The overstructured ANI-MLP simulated RDF (zeroth generation) is evident in the inset of Fig. <ref>. By the 15th generation, the first shell peak matches the experimental results. Figure <ref> compares the wIBI (w(r)=g_E(r)) and IBI (w(r)=1). For r > 3 Å, u(r) is similar for both wIBI and IBI. The shape of u(r) reflects the initial differences between the original MLP RDF and the experimental one, Δ g(r) = g^t=0(r) - g_E(r).
Compared to the experimental RDFs, the ANI and HIP-NN simulated RDFs are overstructured for all temperatures from 943 K to 1323 K, as measured in different experiments <cit.>. The RDFs for 1023 K and 1323 K show overstructuring in the first shell, whereas for 1148 K and 1198 K, the second shell is overstructured as well. In Fig. <ref>, the corrective potentials u^t=15(r) at the 15th generation of wIBI for both ANI and HIP-NN for 1023 K, 1148 K, 1198 K, and 1323 K highlight that larger corrections are needed at higher temperatures. In Fig. <ref>, the shape of u(r) reflects the corresponding Δ g(r), namely, the differences between the MLP RDF and the experimental one. Given that the required correction is very similar for ANI and HIP-NN, we attribute the overstructured MLP-simulated RDFs to limitations of the DFT method used for the training data. Similar overstructured RDFs for DFT and other ab initio methods have been observed in metals <cit.> and water <cit.>. Typically, DFT functionals are well parameterized for near-equilibrium configurations and may perform poorly for the high-temperature liquid phase. While improved DFT functionals can potentially alleviate some of these issues, our work presents a data-driven correction, which may be readily applied to other systems <cit.>.
§ OUT OF SAMPLE VALIDATION
To validate our results, we compare the diffusion constants calculated for both the ANI and HIP-NN MLPs with and without the IBI corrective potential u(r) at different temperatures. We measure the diffusion constants by averaging over 30 trajectories of length 1 ps, each with 1000 snapshots. We fit the simulated trajectories to the Einstein relation to infer the diffusion constant, using the Atomic Simulation Environment codebase <cit.>.
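A hedged sketch of this estimate is shown below: the mean-squared displacement of an unwrapped trajectory is fit to the Einstein relation MSD(t) = 6Dt. In practice one would discard the short-time ballistic regime and average the result over the 30 independent trajectories; those details are omitted here.

import numpy as np

def diffusion_constant(positions, dt_fs):
    # positions: array of shape (n_frames, n_atoms, 3), unwrapped coordinates
    # in Angstrom; dt_fs: time between frames in femtoseconds.
    disp = positions - positions[0]                   # displacement from the first frame
    msd = np.mean(np.sum(disp**2, axis=-1), axis=-1)  # average over atoms
    t = np.arange(len(msd)) * dt_fs
    slope = np.polyfit(t, msd, 1)[0]                  # linear fit of MSD(t), A^2/fs
    return slope / 6.0                                # D in A^2/fs (Einstein relation)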
Figure <ref> shows that the ANI MLP diffusion constants are underestimated compared to the data from two different experiments <cit.> for all temperatures. The ANI and HIP-NN (not shown) MLPs underestimate the dependence of diffusion on temperature; the slope relating the diffusion constant D to temperature T differs from the experiment by approximately a factor of two. The underestimated diffusion constant is physically consistent with the MLPs' overstructured RDF predictions. As expected, the deviation between the DFT-based MLP prediction and the experimental diffusion constant decreases with temperature. For each temperature, u(r) improves the MLP-simulated overstructured RDF and underestimated diffusion constant. We find that u(r) improves the predictions of both MLPs. The ANI MLP's overestimate of equilibrium densities in the melt phase, as seen in Fig. <ref>, is also partially corrected by the corrective potential. Note that u(r) is trained only to the RDF, which is an equilibrium statistic, independent of dynamics. In contrast, the diffusion constant is a dynamical property. Noticeable improvement in the prediction of an `out-of-sample' dynamical observable is strong evidence that the IBI corrective potential is physically meaningful.
We find that extrapolating the corrective pair potential to temperatures beyond the experimental training data can lead to incorrect predictions. The high-temperature corrective potentials u(r; T=1023 K) and u(r; T=1323 K) are ineffective when extrapolated to the solid phase. Either of these corrections worsens MLP predictions of zero-temperature properties such as lattice constants and cold curves. Near the aluminum melting point of 933 K, the corrective potentials have a more neutral effect. The original ANI MLP predicts a melting temperature of 920±5 K, and adding the u(r; T=1023 K) corrective potential does not alter this. However, adding the u(r; T=1323 K) correction lowers the melting temperature to 905±5 K, which is further from the true experimental value. The corrections derived for the higher-temperature melt phase are not found to be helpful at lower temperatures. As such, care should be taken when applying the IBI corrections, which are only applicable to MLP simulations in the relevant temperature regime.
§ CONCLUSIONS
This study reports a method for generating a corrective pair potential for two distinct MLPs to match target experimental RDFs using the modified IBI technique. Compared to the traditional IBI method with a uniform w(r)=1 weight, our wIBI uses a distance-dependent weight w(r)=g_E(r), which avoids unphysical corrections at small distances. Trained on a DFT dataset alone <cit.>, both ANI and HIP-NN accurately reproduce DFT energies and forces, as well as cold curves and lattice constants in the low-temperature solid crystalline phases. Adding a temperature-dependent corrective pair potential fixes the overstructured RDFs in the high-temperature liquid phase. The improved predictions for diffusion constants indicate that the corrective potential is physically valid. Such out-of-sample validation tests, like the diffusion constant predictions in this case, are important for any framework that incorporates experimental results into MLPs.
Our work does not require auto-differentiation through an MD solver and can be applied to any MLP. Furthermore, the results are interpretable because the form of the pair potential relates to the differences between the RDFs from simulation and experiment. If more experimental data is collected, it can be readily incorporated to further improve our results. Here, the wIBI potential u(r) makes small corrections on top of an existing MLP. Future work could explore the efficacy of the method when there are significant deviations between the MLP and experimental RDFs. Another important consideration is that each wIBI corrective potential has been derived from experimental data for a specific temperature. Naive application of such a corrected MLP to new temperature regimes may yield poor accuracy. This is particularly evident when applying high-temperature corrections to simulations at low temperature, as was shown in the aluminum examples.
In the future, we will extend our methods by using other inverse methods such as relative entropy minimization <cit.>, or re-weighting techniques <cit.>. By combining the differentiable trajectory re-weighting technique <cit.> with our current methods, we may be able to avoid long MD simulations when learning from experimental targets. Additionally, training to the three-body angular distribution function would be relevant for MLPs of water <cit.>. We intend to explore how multi-state IBI methods <cit.> may be used to fit RDFs from different temperatures simultaneously and ideally provide continuous corrections to QM-based MLPs.
Ultimately, this work is an example of how the experimental data can complement MLPs trained on ab initio data. By incorporating temperature-dependent corrective pair potentials, the resulting models allow for accurate simulations of aluminum in the melt phase. The magnitude of the learned corrections decreases monotonically with decreasing temperature. We did not, however, find any benefit in extrapolating these corrections to the solid phase, where the original MLP is known to be accurate <cit.>.
§.§ Acknowledgments
We acknowledge support from the US DOE, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (“Triad”) contract Grant 89233218CNA000001 (FWP: LANLE3F2). The research is performed in part at the Center for Nonlinear Studies (CNLS) and the Center for Integrated Nanotechnologies (CINT), a U.S. Department of Energy, Office of Science user facility at Los Alamos National Laboratory (LANL). This research used resources provided by the LANL Institutional Computing (IC) Program and the CCS-7 Darwin cluster at LANL. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218NCA000001).
§ HYPER-PARAMETERS
The ANI MLPs are implemented in the NeuroChem C++/CUDA software packages. Pre-compiled binaries for the ensemble of ANI MLPs are available for download <cit.>. The loss function is
ℒ = w_energy (Ê-E)^2 + w_force^2 ∑_j=1^3N (f̂_j - f_j)^2,
where N is the number of atoms in a configuration or sample. Weights of 1.0 and 0.01 are used for the energy and force terms, respectively. A batch size of 128 was used. The ADAM update is used during training. The learning rate was initialized at 0.001 and ultimately converged to 0.00001, following the annealing schedule in Ref. smith2017ani.
All ANI-Al model symmetry function parameters are provided below:
Radial Cutoff (Radial): 7.0
Radial Cutoff (Angular): 5.0
Radial Eta: [43.9]
Radial Shift: [1.2500000, 1.4296875, 1.6093750, 1.7890625, 1.9687500, 2.1484375, 2.3281250, 2.5078125, 2.6875000, 2.8671875, 3.0468750, 3.2265625, 3.4062500, 3.5859375, 3.7656250, 3.9453125, 4.1250000, 4.3046875, 4.4843750, 4.6640625, 4.8437500, 5.0234375, 5.2031250, 5.3828125, 5.5625000, 5.7421875, 5.9218750, 6.1015625, 6.2812500, 6.4609375, 6.6406250, 6.8203125]
Angular Zeta: [69.4]
Angular Shift: [0.19634954, 0.58904862, 0.98174770, 1.3744468, 1.7671459, 2.1598449, 2.5525440, 2.9452431]
Angular Eta: [6.5]
Angular Radial Shift: [1.2500000, 1.7187500, 2.1875000, 2.6562500, 3.1250000, 3.5937500, 4.0625000, 4.5312500]
The HIP-NN <cit.> MLP is implemented in PyTorch software package and is available for download <cit.>. The loss function is
ℒ = 100 ×RMSE_energy-per-atom + 100 ×MAE_energy-per-atom + RMSE_forces + MAE_forces + 10^-6×∑_i w_i^2,
where the last term corresponds to the L2 regularization with respect to the weights of the network. We use a network with 1 interaction layer and 3 atom layers (feed-forward layers) with a width of 15 features. For the sensitivity functions, 20 radial basis functions are used with a soft minimum cutoff of 1.25, a soft maximum cutoff of 7.0, and a hard maximum cutoff of 5. We used the Adam optimizer with an initial learning rate of 0.001, which is halved with a patience of 25 epochs.
|
http://arxiv.org/abs/2307.04105v1 | 20230709055525 | Towards Assumption-free Bias Mitigation | [
"Chia-Yuan Chang",
"Yu-Neng Chuang",
"Kwei-Herng Lai",
"Xiaotian Han",
"Xia Hu",
"Na Zou"
] | cs.LG | [
"cs.LG",
"cs.CY"
] |
Texas A&M University
[email protected]
Rice University
[email protected]
Rice University
[email protected]
Texas A&M University
[email protected]
Rice University
[email protected]
Texas A&M University
[email protected]
Despite the impressive prediction ability, machine learning models show discrimination towards certain demographics and suffer from unfair prediction behaviors. To alleviate the discrimination, extensive studies focus on eliminating the unequal distribution of sensitive attributes via multiple approaches. However, due to privacy concerns, sensitive attributes are often either unavailable or missing in real-world scenarios. Therefore, several existing works alleviate the bias without sensitive attributes. Those studies face challenges, either in inaccurate predictions of sensitive attributes or the need to mitigate unequal distribution of manually defined non-sensitive attributes related to bias. The latter requires strong assumptions about the correlation between sensitive and non-sensitive attributes. As data distribution and task goals vary, the strong assumption on non-sensitive attributes may not be valid and require domain expertise.
In this work, we propose an assumption-free framework to detect the related attributes automatically by modeling feature interaction for bias mitigation. The proposed framework aims to mitigate the unfair impact of identified biased feature interactions.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors by considering biased feature interactions.
Our source code is available at: https://anonymous.4open.science/r/fairint-5567
CCS Concepts: Information systems → Collaborative filtering; Computing methodologies → Learning latent representations.
Towards Assumption-free Bias Mitigation
Na Zou
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
============================================================================
§ INTRODUCTION
Machine learning models have shown superiority in various high-stakes decision-making tasks <cit.>, and have been deployed in many real-world applications, such as credit scoring <cit.>, loan approval <cit.>, criminal justice <cit.>, and education opportunity <cit.>.
However, machine learning models show discrimination towards certain demographics and suffer from biased prediction behavior, which may negatively impact the minority groups in those application fields.
For example, COMPAS, a recidivism prediction system, shows discrimination towards African-American offenders with a higher possibility of becoming a recidivist two years after leaving prison <cit.>. Recent works focus on bias mitigation techniques to alleviate discrimination in machine learning models.
Existing works to tackle the fairness issues are generally based on two groups of assumptions, i.e., bias assumptions and correlation assumptions.
For the works based on bias assumptions, they mitigate bias with known distributions of sensitive attributes by fairness regularization <cit.>, contrastive learning <cit.>, adversarial learning <cit.>, disentanglement representations <cit.>, and representation neutralization <cit.>.
However, due to privacy concerns, sensitive attributes are often missing <cit.>. Therefore, existing works adopt clustering methods <cit.> and an auxiliary module <cit.> to simulate the sensitive attributes. However, they often suffer from the inaccuracy of the predicted sensitive attributes when adopting clustering algorithms <cit.>.
Thus, the work based on correlation assumptions, FairRF <cit.>, addresses the unfair issues with a strong assumption that the unfair model prediction actually comes from the relationship between sensitive attributes and a set of predefined related non-sensitive attributes.
In this paper, we argue that correlation assumptions between sensitive and non-sensitive attributes may not be valid as data distribution and task goals vary. For example, FairRF <cit.> predefines (inherently assumes) that the related features of gender in the Adult dataset are age, relationship, and marital status. To show this assumption is invalid, we conducted an experiment to explore the relationship between gender and all other features. This assumption is not consistent with the linear relationships between gender and all other features, as shown in Figure <ref>. Additionally, domain expertise and knowledge are required to predefine the related features.
Therefore, we raise the following question: Can we achieve fairness without assuming a predefined relation between sensitive and non-sensitive attributes?
To tackle the limitations of 1) the correlation assumption that unfair model prediction comes from the handcrafting predefined related attributes and 2) the further fairness problems caused by feature interactions, we aim to develop an assumption-free framework to automatically detect and integrate feature interactions for bias mitigation.
It is nontrivial to achieve our goal due to the following challenges.
First, in real-world scenarios, implicit biases in feature interactions are difficult to detect, especially when sensitive attributes are missing.
Specifically, it is hard to find the high-order statistical interactions that may lead to biased predictions of deep neural networks due to the complex model structures.
Thus, when sensitive attributes are unavailable and no strong correlation assumptions are made about related features, it becomes very challenging to identify the biased feature interactions.
For example, identifying the biased feature interactions among all combinations of features without the sensitive attributes may lead to numerous candidate feature interactions, which makes it extremely hard for models to learn the distribution of the actual biased feature interactions.
Second, it is challenging to mitigate bias in feature interactions due to the uneven distribution among feature interactions.
For example, without considering the potential uneven distribution of the feature interactions, trained prediction models may fail to detect and mitigate the bias in feature interactions.
To address the aforementioned challenges, we propose FairInt, an assumption-free framework to automatically identify and further mitigate the bias in feature interactions.
Specifically, we develop a sensitive attribute reconstructor for tackling a situation where sensitive attributes are unavailable during the inference stage.
By designing a sensitive-oriented attention score, we develop a biased interaction detection layer to automatically identify the biased feature interactions and then embed the biased interaction information into the latent representation.
It is different from traditional deep neural networks that model feature interactions among all possible feature combinations and cannot identify specific biased feature interactions.
To equalize the probability distribution of sensitive attributes, we design two bias regularizations for debiasing the latent representation that contains biased interaction information.
These two regularizations debias the feature interactions by minimizing the divergence of latent space and the model predictions between different sensitive attribute groups.
We evaluate our framework on four real-world datasets across three different application domains, which include finance, education, and healthcare.
Compared with baseline models, the experimental results demonstrate that the FairInt can successfully further mitigate the biased prediction behaviors while providing similar performances of downstream tasks by considering biased feature interactions.
Moreover, by observing the modeled feature interaction, the FairInt shows the ability to provide better explainability via the designed sensitive-oriented attention score. We highlight our contributions as follows:
* We argue that identifying related attributes with high correlations to sensitive attributes through prior knowledge is problematic, because the correlations between sensitive and non-sensitive attributes change with different models.
* We propose an assumption-free framework to automatically identify and further mitigate biased feature interactions. Our framework does not require handcrafting related attributes to mitigate the unfair model predictions that come from the interactions between sensitive and non-sensitive attributes. Instead, the proposed framework automatically identifies related attributes without prior knowledge during the inference stage.
* Experimental results on several real-world datasets demonstrate the effectiveness of the proposed FairInt framework.
Additionally, our framework provides better explainability via observing the attention weights between sensitive and non-sensitive attributes.
§ PRELIMINARIES
In this section, we introduce the existing bias mitigation strategies for deep neural networks and feature interaction modeling methods that inspire our proposed framework.
§.§ Bias Mitigation
To tackle the prejudicial decisions problem in deep learning models, there is increased attention to bias mitigation methods in recent studies <cit.>.
Many approaches apply regularization-based methods to the objective function of the proposed models, which require a priori assumptions to develop.
Existing alleviating techniques are generally based on two groups of assumptions.
Bias Assumptions.
Because machine learning models show discrimination towards certain demographics, people assume that machine learning models have biased behaviors against certain groups.
With a known distribution of a sensitive attribute set, there are several advancements proposed to mitigate bias, such as:
1) Fairness regularization: the objective function of bias-mitigated models generally includes fairness-related constraint terms <cit.>, which penalize the prejudiced behaviors of the prediction models. Another existing work <cit.> compares the distributions of model predictions across different sensitive attribute groups and then minimizes the KL-divergence between them.
2) Adversarial learning: adversarial learning alleviates the biased effects of the known sensitive attributes by building an Adversary alongside the Predictor of a machine learning model.
One previous work <cit.> proposes an adversarial learning strategy for bias alleviation given the distribution of sensitive attributes.
The model includes a Predictor, which accomplishes the downstream task predictions, and an Adversary, which predicts the target sensitive attributes.
The framework adopts adversarial training by minimizing the Predictor's loss and maximizing the Adversary's loss, which aims to debias the unfair behaviors of the Predictor.
3) Latent representation neutralization: one latent representation neutralization work <cit.> implicitly mitigates bias by adjusting the distribution of latent representations during model training.
Correlation Assumptions.
In real-world scenarios, it is hard to obtain the true distribution of sensitive attributes due to privacy concerns; the unfair model predictions are thus assumed to be caused by certain related attributes that have high correlations to sensitive attributes.
Specifically, when we face a fairness issue in model prediction, it is challenging to alleviate the model bias if we lack sensitive feature information.
Thus, there are some works that focus on eliminating prediction bias under the constraint of unknown sensitive attributes' distribution.
ARL <cit.> utilizes adversarial learning based on Rawlsian Max-Min fairness objectives. However, this approach could be too strict in enhancing fairness across groups, and it is hard to maintain the performance of downstream tasks.
FairRF <cit.> addresses the biased issues by leveraging the relatedness between a set of related non-sensitive attributes and sensitive attributes.
This work assumes that the bias of model prediction actually comes from the high correlation between non-sensitive attributes and sensitive features.
In this manner, a fair model can be achieved by the proposed objective function of alleviating the relatedness between non-sensitive attributes and sensitive attributes. Formally, the objective function of FairRF can be illustrated as follows:
Let f_i ∈ F_n, i = 1, …, K, be the predefined related non-sensitive attributes, where F_n is the set of non-sensitive features. FairRF applies a correlation regularization ℛ_related on each f_i to make the trained model fair toward the sensitive attribute s by minimizing the following function:
min_θℛ_related = ∑_i=1^Kλ_i ·ℛ(f_i, ŷ),
where λ_i is the weight for regularizing the correlation coefficient ℛ between f_i and ŷ.
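As an illustration, the regularizer could be instantiated with the squared Pearson correlation as ℛ, as in the PyTorch sketch below; the exact form of ℛ used by FairRF may differ, so this is only a sketch of the general idea.

import torch

def pearson_corr(a, b, eps=1e-8):
    # Pearson correlation between two 1-D tensors over a batch
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def related_feature_regularizer(related_feats, y_hat, lambdas):
    # related_feats: list of 1-D tensors f_i over the batch; y_hat: predictions;
    # lambdas: per-feature weights lambda_i from the equation above.
    return sum(lam * pearson_corr(f, y_hat) ** 2
               for lam, f in zip(lambdas, related_feats))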
However, this correlation assumption between sensitive and non-sensitive attributes may be sub-optimal, because it requires strong assumptions on feature dependencies.
In other words, data-specific knowledge and distributional similarity are necessary.
For example, if we define the related features of the sensitive feature Gender to be the three non-sensitive features Age, Relationship, and Marital-Status, an accompanying assumption is that these three non-sensitive features have the top-3 highest correlations with the given sensitive feature Gender.
Nevertheless, it is possible that the non-sensitive features with the truly highest correlations to the sensitive feature in a dataset are not obvious to humans, so we cannot define them correctly.
For instance, in a certain dataset the most related features of the sensitive feature gender might be eye color and sleeping quality, and it is hard for humans to associate these two features with gender.
In our work, instead of adopting assumptions on the distribution of the sensitive feature and its related features, we propose an assumption-free framework for automatically detecting the related features for bias mitigation.
§.§ Learning Feature Interactions
One major advantage of neural networks is their ability to model complex interactions between features by automatic feature learning.
In the field of click-through rate (CTR) prediction, feature interaction modeling has played a key role in improving downstream task performance by modeling different orders of feature combinations.
Instead of stacking multiple non-linear neural network layers, which are inefficient and lack good explanations of feature interactions <cit.>, popular approaches explicitly model different orders of feature combinations while offering good model interpretability.
One previous work models feature interactions by calculating the inner product between a feature embedding and a trainable matrix, and then taking the Hadamard product with another feature embedding <cit.>.
AutoInt <cit.> models feature interactions by adopting a key-value attention mechanism and using the resulting attention weights between all feature pairs to compute a weighted sum of the input feature embeddings.
AutoInt utilizes the inner product operator ψ(·, ·) to define the similarity between two feature embeddings e_j and e_c, and leverages it to compute the attention weights under a specific attention head h by the following equation:
a^(h)_j, c = exp(ψ^(h)(e_j, e_c))/∑_n=1^Nexp(ψ^(h)(e_j, e_n)),
where N represents the number of input features.
The classic self-attention-based approach considers all feature pairs for feature interaction learning; it is therefore difficult to specifically identify the bias in feature pairs containing the target sensitive attributes.
In our work, we consider only the feature pairs that treat the target sensitive attribute as the Query of the attention component, in order to identify the feature interactions between sensitive and non-sensitive attributes and further alleviate the biased interactions.
Our framework can automatically detect the related features for bias mitigation.
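The following PyTorch sketch contrasts the two schemes: AutoInt-style attention over all feature pairs (the equation above) versus a sensitive-oriented variant that uses only the (pseudo-)sensitive attribute as the query. Learnable query/key projections and multiple heads are omitted for brevity, so this is an illustration rather than the exact FairInt implementation.

import torch
import torch.nn.functional as F

def autoint_attention(feat_emb):
    # feat_emb: (batch, n_features, d). Returns attention weights of shape
    # (batch, n_features, n_features) over all feature pairs.
    scores = torch.einsum("bjd,bcd->bjc", feat_emb, feat_emb)   # inner products psi(e_j, e_c)
    return F.softmax(scores, dim=-1)

def sensitive_oriented_attention(sens_emb, nonsens_emb):
    # sens_emb: (batch, d) embedding of the (pseudo-)sensitive attribute;
    # nonsens_emb: (batch, n_nonsens, d). The sensitive attribute acts as the
    # Query, so only sensitive/non-sensitive pairs receive attention weights.
    scores = torch.einsum("bd,bcd->bc", sens_emb, nonsens_emb)
    return F.softmax(scores, dim=-1)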
§.§ Problem Definition
We first define the notations used in this work.
Let X be the input attribute set and Y be the ground-truth label set of the model output, where X = { x_1, …, x_p} is the set of p attributes and Y∈{0, 1} is the binary label set.
The input attribute set decomposes as X = S∪C, where S is the sensitive attribute set (e.g., gender, race, marital status) and C is the non-sensitive attribute set.
We observe that biased feature interactions are an influential factor affecting the fairness of predictive results.
Formally, we define the sensitive feature interaction set as ℐ_s = {ℐ(s, c_1), … , ℐ(s, c_p-1) | ∀ c_j ∈C}, where ℐ(·, ·) denotes an feature interaction between any two features, and s ∈S is a sensitive attribute.
For example, an interaction between a sensitive attribute gender and non-sensitive attribute job can be denoted as ℐ(gender, job).
When feature interactions are modeled by the prediction model, the biased interactions in ℐ_s eventually lead to bias in the prediction task.
Based on the definitions and the intuitions above, we consider the interaction bias from prediction model f(X, θ) ≡ p(g(X)), where θ is the model parameters and p(·) is a single-layer prediction head of d-dimensional feature embedding encoder g(·): X→ℝ^d.
In our work, let ℐ_s be the sensitive feature interaction set learned from prediction model f(·), we aim to identify the biased interaction that appears in ℐ_s such that the detected biased interactions are alleviated during the prediction model training.
§ METHODOLOGY
In this section, we introduce an assumption-free fair mitigation framework, FairInt, to alleviate the biased feature interactions.
Figure <ref> illustrates our FairInt framework with two components: Assumption-free Bias Detection, which includes Sensitive Attributes Reconstructor (SAR) and Bias Interaction Detection (BID) layer, and Interaction-wise Bias Mitigation, which includes the regularizations Fairness Constraint (FC) and Interaction Fairness Constraint (IFC).
Our goal is to encourage the classifier to disentangle the biased interaction between sensitive and non-sensitive attributes and instead focus more on learning task-relevant information.
Assumption-free Bias Detection aims at identifying bias within feature interactions without predefined related features, and Interaction-wise Bias Mitigation focuses on alleviating the identified feature interaction bias.
In the following sections, we give a comprehensive description of our FairInt framework.
We first illustrate the details of the proposed bias detection component (Sec. <ref>).
Then, we introduce our two bias mitigation components (Sec. <ref>).
Finally, we demonstrate how to learn the fair predictor through our FairInt framework (Sec. <ref>).
§.§ Assumption-free Bias Detection
Sensitive attributes s ∈S are generally unavailable during the inference stage in real-world scenarios. Many existing works mitigate the interaction bias under the assumption that the distribution of the sensitive attributes is known. However, in real-world settings the sensitive attributes are often unavailable for various reasons, such as legal restrictions, which makes most of the existing advances unworkable. To tackle these problems, we develop two corresponding components: the Sensitive Attributes Reconstructor (SAR), which removes the need to observe the sensitive attributes, and Bias Interaction Detection (BID), which removes the need for predefined feature interactions. Our assumption-free framework aims to disentangle the hand-crafted assumptions about the dependency between sensitive and specific non-sensitive attributes during the debiasing process.
Sensitive Attributes Reconstructor (SAR).
Since sensitive attributes s ∈S are generally unavailable during the inference stage in real-world scenarios, we design the Sensitive Attributes Reconstructor (SAR) to simulate the sensitive attributes, so that the implicit interaction bias contained in the non-sensitive attributes can be alleviated.
Specifically, we aim to generate a pseudo-sensitive attribute ŝ that imitates the distribution of the sensitive attributes s ∈S, which exposes the biased interactions between the sensitive attributes and all other non-sensitive features.
Let the input attribute set be x ∈X without the sensitive attributes s ∈S. The objective of SAR is to construct a reconstructor r that generates a pseudo-sensitive attribute ŝ for identifying the implicitly biased interactions toward non-sensitive features. The generating process of a pseudo-sensitive attribute can be written as follows:
ŝ = SAR(e_x/s; Θ_r),
where Θ_r is the trainable parameters of reconstructor r, and e_x/s denotes the latent representation set of input features x without sensitive attribute s.
Specifically, we leverage the embeddings of all non-sensitive attributes to generate a pseudo-sensitive attribute vector. This lets the reconstructor extract the correlated information between sensitive and non-sensitive features. During the training stage, the reconstructor loss ℒ_SAR is given by:
ℒ_SAR≡min_Θ_r∑_i=1^N (ŝ_i - s_i)^2,
where N is the number of training instances.
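The following is a minimal PyTorch sketch of SAR as a small MLP over the concatenated non-sensitive embeddings, trained with the squared reconstruction loss above; the network depth, hidden width, and the choice of a scalar pseudo-sensitive attribute are assumptions made only for illustration.

import torch
import torch.nn as nn

class SAR(nn.Module):
    """Sensitive Attributes Reconstructor: predicts a pseudo-sensitive
    attribute s_hat from the non-sensitive feature embeddings."""
    def __init__(self, num_nonsensitive, emb_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_nonsensitive * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # scalar pseudo-sensitive attribute
        )

    def forward(self, e_nonsensitive):
        # e_nonsensitive: (batch, num_nonsensitive, emb_dim)
        flat = e_nonsensitive.flatten(start_dim=1)
        return self.net(flat).squeeze(-1)

def sar_loss(s_hat, s_true):
    """Reconstruction loss L_SAR: squared error against the sensitive
    attribute, which is available only at training time."""
    return ((s_hat - s_true) ** 2).sum()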
The effectiveness of SAR was evaluated by predicting unavailable sensitive attributes using non-sensitive features from Adult and Law School datasets. SAR achieved 87% accuracy for predicting Sex in Adult and 94% for predicting Race in Law School.
The results show that SAR can achieve impressive performance by capturing the correlations between non-sensitive attributes and unobserved sensitive attributes.
Besides predicting the pseudo-sensitive attributes ŝ, SAR allows our FairInt to better capture the interactions between unobserved sensitive and non-sensitive attributes.
Bias Interaction Detection (BID) Layer.
Optimizing Eq. <ref> in SAR generates a pseudo-sensitive attribute ŝ that serves as a surrogate sensitive attribute, which allows our proposed FairInt to quantitatively analyze the interactions between pseudo-sensitive and non-sensitive attributes. We therefore propose Bias Interaction Detection (BID) to identify the potentially biased interactions involving the generated pseudo-sensitive attribute.
We first let all the input features form the p-attribute set X = {x_1, …, x_p}, which contains both categorical and numerical features.
Because categorical features are too sparse to learn from directly, we map all the input features into low-dimensional spaces with unique feature embeddings e_i, computed as e_i = M_i x_i,
where M_i is an embedding lookup matrix corresponding to feature x_i with dimension d.
Feature interactions are typically modeled by either the inner product similarity or attention scoring mechanism between two feature embeddings <cit.>. For instance, AutoInt <cit.> utilizes the multi-head self-attention mechanism to model high-order feature interactions for improving the downstream task's performance.
AutoInt learns feature interaction within a hierarchical representation structure, which has proven to be effective in several machine learning territories <cit.>.
Especially, self-attention-based mechanism has been utilized in several machine learning areas for capturing the importance within features of input instances <cit.>.
In our work, we exploit the self-attention mechanism <cit.> to model feature interactions. The main goal of our framework is to mitigate biased feature interactions in the model predictions, without relying on predefined assumptions.
Therefore, based on the ability of self-attention mechanism to identify important feature interactions, we design Bias Interaction Detection (BID) to point out the key biased interactions of pseudo-sensitive attributes.
Unlike the original self-attention mechanism, which calculates attention weights between all pairs of features, we model the feature interactions only between the pseudo-sensitive attribute ŝ and the non-sensitive features by computing their attention weights.
Specifically, we model the interactions between a pseudo-sensitive attribute ŝ and one non-sensitive features c ∈C with attention head h as a_ŝ, c, which can be calculated as follows:
a_ŝ, c = exp(ψ^h(ê_̂ŝ, e_c))/∑_c ∈Cexp(ψ^h(ê_̂ŝ, e_c)),
where ê_̂ŝ and e_c are the low-dimensional embedding of ŝ and c, and ψ^h(ê_̂ŝ, e_c) denotes as the scoring operator to evaluate the similarity between ê_̂ŝ and e_c.
In this paper, we adopt dot product as an example for ψ^h(ê_̂ŝ, e_c), which can be illustrated as follows:
ψ^h(ê_̂ŝ, e_c) = ⟨ W^h_Queryê_̂ŝ, W^h_Key e_c ⟩,
where ⟨· , ·⟩ is inner product operator, and W^h_Query and W^h_Key are embedding matrices for ê_̂ŝ and e_c. The biased interaction scores can now be defined as a_ŝ, c in this manner.
After obtaining the biased interaction scores between the sensitive and non-sensitive features, we generate the biased interaction embeddings ê^H_s to represent the biased interactions for bias mitigation. We formally define the biased interaction embeddings as following formula:
ê^H_s = ‖_h=1^|H|∑_c ∈C a_ŝ, c (W^h_value· e_c),
where W^h_value is a trainable embedding matrix, and ‖ denotes the concatenation operator over the biased interaction embeddings produced by the attention heads h ∈H.
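A minimal PyTorch sketch of one attention head of the BID layer is given below; the pseudo-sensitive embedding acts as the single Query and the non-sensitive embeddings act as Keys and Values, following the equations above. The class name, tensor shapes, and the use of linear layers for the projection matrices are illustrative choices, and the multi-head output would be obtained by concatenating the per-head embeddings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BIDHead(nn.Module):
    """One attention head of the Bias Interaction Detection layer."""
    def __init__(self, emb_dim):
        super().__init__()
        self.W_query = nn.Linear(emb_dim, emb_dim, bias=False)
        self.W_key = nn.Linear(emb_dim, emb_dim, bias=False)
        self.W_value = nn.Linear(emb_dim, emb_dim, bias=False)

    def forward(self, e_s_hat, e_nonsensitive):
        # e_s_hat: (batch, d) pseudo-sensitive embedding (the only Query)
        # e_nonsensitive: (batch, C, d) non-sensitive embeddings (Keys/Values)
        q = self.W_query(e_s_hat).unsqueeze(1)   # (batch, 1, d)
        k = self.W_key(e_nonsensitive)           # (batch, C, d)
        v = self.W_value(e_nonsensitive)         # (batch, C, d)
        scores = (q * k).sum(-1)                 # psi^h(e_s_hat, e_c), (batch, C)
        a = F.softmax(scores, dim=-1)            # biased interaction scores a_{s_hat,c}
        head_emb = (a.unsqueeze(-1) * v).sum(1)  # weighted sum of values, (batch, d)
        return head_emb, a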
§.§ Interaction-wise Bias Mitigation
After receiving the detected bias interaction embeddings ê^H_s, we focus on alleviating the bias from feature interactions.
Our goal is to equalize the conditional probability distribution of the biased interaction embeddings given different sensitive attributes s ∈S. However, the sensitive-attribute information in ê^H_s can easily be diluted because of the imbalance between the numbers of sensitive and non-sensitive attributes. This may hurt the bias mitigation performance, since the alleviation process requires an explicit sensitive attribute as a pivot. Hence, we adopt a residual connection in the style of ResNet <cit.> to enrich the pseudo-sensitive attribute information, which can be written as follows:
e_ŝ = ReLU(ê^H_s + W_Res·ê_̂ŝ),
where W_Res is the residual model weight and ê_̂ŝ is the embedding of pseudo-sensitive attributes.
In this work, we design two fairness constraints: Interaction Fairness Constraint and Fairness Constraint for biased interaction mitigation.
Interaction Fairness Constraint (IFC) Loss.
In order to mitigate the detected biased interactions across different sensitive attribute groups, we design the Interaction Fairness Constraint (IFC) loss, which minimizes the KL-divergence between the sensitive attribute groups. IFC thus encourages the feature interactions to carry equivalent information across groups.
Formally, IFC can be formulated as follows:
ℒ_IFC = ∑_i ∈S∑_j ∈S/iKL(e_[ŝ≈ i], e_[ŝ≈ j]),
where KL(·) denotes the KL-divergence, and e_[ŝ≈ i] is the subset of e_ŝ that is closest to the sensitive attribute value i ∈S. For convenience, we place the decision boundary at the expected value of a uniformly distributed S to determine which group in S a given ŝ belongs to.
The IFC loss mitigates the bias information in the latent representation by using the KL-divergence between each pair of pseudo-sensitive attribute groups as a bias score.
Therefore, by adding ℒ_IFC as a regularization term to our framework, the biased feature interactions in the latent representation e_ŝ can be alleviated.
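A hedged sketch of the IFC loss for a binary sensitive attribute is shown below. The paper leaves open how the group-wise embeddings are turned into distributions for the KL-divergence; here each group is summarised by its mean embedding and normalised with a softmax, which is one possible choice rather than the definitive implementation.

import torch
import torch.nn.functional as F

def ifc_loss(e_s, s_hat, boundary=0.5, eps=1e-12):
    """Interaction Fairness Constraint for a binary sensitive attribute.

    e_s:   (batch, d) debiased interaction embeddings e_{s_hat}
    s_hat: (batch,) pseudo-sensitive attribute from SAR; instances are split
           into two groups at `boundary` (the expected value of a uniform S).
    """
    group0 = e_s[s_hat < boundary]
    group1 = e_s[s_hat >= boundary]
    if len(group0) == 0 or len(group1) == 0:
        return e_s.new_zeros(())              # nothing to compare in this batch
    p = F.softmax(group0.mean(0), dim=0)      # group-wise "distributions"
    q = F.softmax(group1.mean(0), dim=0)
    kl_pq = (p * (torch.log(p + eps) - torch.log(q + eps))).sum()
    kl_qp = (q * (torch.log(q + eps) - torch.log(p + eps))).sum()
    return kl_pq + kl_qp                      # symmetric sum over the group pair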
Fairness Constraint (FC) Loss.
Although our proposed IFC mitigates most of the biased interaction information from the embedding aspect, the remaining biased interaction may be amplified by prediction models and generate unfair task predictions.
To alleviate the unfairness of model predictions on downstream tasks, we adopt the Fairness Constraint (FC) loss toward pseudo-sensitive attributes ŝ. In this work, we focus on classification tasks.
Our proposed FC aims to mitigate biased prediction behaviors in ŷ by computing the absolute differences of the cross entropy between every pair of pseudo-sensitive attribute groups (ŝ_i, ŝ_j) ∈S.
Formally, FC can be formulated as follows:
ℒ_FC = ∑_i ∈S∑_j ∈S/i |CE_[ŝ≈ i] - CE_[ŝ≈ j]|,
where CE_[ŝ≈ i] is the cross entropy computed over the instances assigned to sensitive attribute group i ∈S.
Since the cross entropy reflects the correctness of the classification prediction ŷ, the FC loss penalizes the discrepancy in this correctness between every pair of pseudo-sensitive attribute groups.
Thus, ℒ_FC can effectively alleviate prejudiced model predictions across the pseudo-sensitive attribute set.
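Below is a minimal sketch of the FC loss for a binary classification task and a binary pseudo-sensitive attribute; the grouping of instances by thresholding ŝ and the use of logits are illustrative assumptions.

import torch
import torch.nn.functional as F

def fc_loss(logits, y, s_hat, boundary=0.5):
    """Fairness Constraint: absolute difference between the group-wise
    cross entropies of the task prediction (Eq. above)."""
    ce = F.binary_cross_entropy_with_logits(logits, y.float(), reduction="none")
    mask0 = s_hat < boundary
    mask1 = ~mask0
    if mask0.sum() == 0 or mask1.sum() == 0:
        return logits.new_zeros(())
    return (ce[mask0].mean() - ce[mask1].mean()).abs()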
§.§ Fair Classifier with FairInt
Here we discuss how to incorporate the IFC loss ℒ_IFC and the FC loss ℒ_FC with a classifier to alleviate the biased interaction.
We add the reconstructor loss ℒ_SAR to our framework to train SAR to generate the pseudo-sensitive features.
Since the IFC loss can serve as a stand-alone optimization objective, it is capable of mitigating biased feature interactions in the latent representations of any kind of classification model.
In our work, we evaluate the effectiveness of our framework on a one-layer multi-layer perceptron as the classification model, which can be replaced by any deeper or more powerful models.
To train a classification model with our proposed FairInt framework, we optimize the cross entropy loss ℒ_0.
We then incorporate ℒ_0 with the Interaction Fairness Constraint (IFC) loss ℒ_IFC, Fairness Constraint (FC) loss ℒ_FC, and reconstructor loss ℒ_SAR as the final objective function to fair classifier training.
Our proposed IFC loss and FC loss help the classification models mitigate the bias feature interactions from the views of latent representations and alleviate the prejudiced model predictions with given different kinds of sensitive attributes during training.
Specifically, we optimize the proposed FairInt by illustrating the following joint loss function:
ℒ_FairInt = ℒ_0 + λ_IFCℒ_IFC + λ_FCℒ_FC + ℒ_SAR,
where ℒ_FairInt denotes the loss function of the proposed FairInt, and λ_IFC and λ_FC are weighting hyper-parameters that balance biased interaction mitigation against feature interaction modeling.
By optimizing ℒ_FairInt, we alleviate biased model predictions by mitigating the detected biased feature interactions, without defining any related and potentially biased feature interactions in advance.
In the inference stage, the trained FairInt framework can provide fair predictions without access to the sensitive attributes.
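Putting the pieces together, the following sketch assembles the joint objective of the equation above from the loss sketches given earlier (sar_loss, ifc_loss, fc_loss); the argument names and the binary task setting are assumptions for illustration.

import torch.nn.functional as F

def fairint_loss(logits, y, s_hat, s_true, e_s, lam_ifc=1.0, lam_fc=1.0):
    """Joint objective L_FairInt = L_0 + lam_IFC*L_IFC + lam_FC*L_FC + L_SAR.

    logits: task predictions from the single-layer head
    y:      ground-truth labels; s_true: sensitive attribute (training only)
    s_hat:  pseudo-sensitive attribute from SAR
    e_s:    debiased interaction embeddings from the BID layer with residual
    """
    l_task = F.binary_cross_entropy_with_logits(logits, y.float())  # L_0
    return (l_task
            + lam_ifc * ifc_loss(e_s, s_hat)       # latent-level mitigation
            + lam_fc * fc_loss(logits, y, s_hat)   # prediction-level mitigation
            + sar_loss(s_hat, s_true))             # reconstructor training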
§ EXPERIMENT
In this section, we empirically evaluate the proposed FairInt framework. We mainly focus on the following research questions:
* Compared with the existing baseline methods, can our assumption-free FairInt framework mitigate the unfair model prediction on the downstream tasks (Sec. <ref>)?
* Can our proposed Bias Interaction Detection layer identify the bias feature interaction and encode it in the latent representation (Sec. <ref>)?
* How do the proposed Interaction Fairness Constraint loss and Fairness Constraint loss in Eq. <ref> and Eq. <ref> impact the fairness of the classification model (Sec. <ref>)?
* How do the hyper-parameters impact the fairness performance of the proposed FairInt (Sec. <ref>)?
* How does our assumption-free FairInt framework automatically detect the related features and further mitigate the bias feature interaction (Sec. <ref>)?
§.§ Datasets
We consider four real-world tabular datasets <cit.> that are commonly used for studying fairness-aware classification, which include three application domains as shown in Table <ref>.
§.§ Baselines and Fairness Metrics
Besides the ARL <cit.> and FairRF <cit.> mentioned in Sec <ref>, we also leverage two fairness constraint regularization methods to train vanilla MLP models as baselines for comparing with our framework.
We adopt the Fair Constraint (FC) loss as a regularization to a vanilla MLP model as a baseline, where the FC loss is to mitigate biased behaviors toward model predictions, and it can be calculated as Eq. <ref>.
For another baseline, we apply a regularization-based mitigation method to a vanilla MLP model, Prejudice Remover <cit.>, which considers the mutual information for equalizing the distributions between two variables to alleviate biases.
For each dataset, both the two baselines are leveraged to the same vanilla MLP model which we will describe in the Sec. <ref>.
We also compare the proposed FairInt to two other baselines, a vanilla MLP classification model and the CTR prediction model AutoInt, which models feature interactions by adopting the key-value attention mechanism to improve performance on CTR prediction tasks.
We use two group fairness metrics to evaluate the fairness of prediction models: Demographic Parity (Δ DP) <cit.> and Equalized Odds (Δ EO) <cit.>.
Δ DP measures the difference in the probability of a positive outcome between different sensitive groups and it is better to be closer to 0, which can be calculated as follows:
Δ DP = p(ŷ = 1|s = s_i) - p(ŷ = 1|s = s_j),
where s_i and s_j represent different sensitive groups.
Equalized Odds require the probability of positive outcomes to be independent of the sensitive group s, conditioned on the ground truth label y.
Specifically, Δ EO calculates the summation of the True Positive Rate difference and False Positive Rate difference as follows:
Δ EO = |P(ŷ = 1|s = s_i, y = 1) - P(ŷ = 1|s = s_j, y = 1)|
+ |P(ŷ = 1|s = s_i, y = 0) - P(ŷ = 1|s = s_j, y = 0)|,
where values of Δ EO closer to 0 are better.
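For reference, a small NumPy sketch of the two group fairness metrics is given below for binary predictions and a binary sensitive attribute; unlike the equation for Δ DP above, the absolute value of the group difference is reported so that the metric is non-negative.

import numpy as np

def demographic_parity_gap(y_pred, s):
    """Delta DP: difference in positive prediction rates between the groups."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def equalized_odds_gap(y_pred, y_true, s):
    """Delta EO: TPR difference plus FPR difference between the two groups."""
    def positive_rate(group, label):
        sel = (s == group) & (y_true == label)
        return y_pred[sel].mean() if sel.any() else 0.0
    tpr_diff = abs(positive_rate(1, 1) - positive_rate(0, 1))
    fpr_diff = abs(positive_rate(1, 0) - positive_rate(0, 0))
    return tpr_diff + fpr_diff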
§.§ Implementation Details
In FairInt, we set the embedding dimension d=4 and the number of attention heads in BID layer as 1 among all four datasets.
For the Adult dataset, we use a two-layer MLP with 64 and 32 units of each hidden layer as the vanilla MLP model.
For the Law School dataset, we use a two-layer MLP with 64 and 32 units of each hidden layer as the vanilla MLP model.
For the Bank Marketing dataset, we use a two-layer MLP with 40 and 20 units of each hidden layer as the vanilla MLP model.
For the Diabetes dataset, we use a four-layer MLP with 512, 256, 64, and 64 units of each hidden layer as the vanilla MLP model.
As for the AutoInt, we set the embedding dimension of each feature to 4, the number of interaction layers to 2, and the number of attention heads in each interaction layer to 2 for all four datasets.
To prevent overfitting, we search dropout rate from the set {0.1, 0.3, 0.5, 0.7, 1.0} and search l_2 norm regularization from the set {5e-1, 1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5} for all the models.
To address the unavailability of sensitive attributes in the inference stage, we utilize a four-layer MLP as the SAR on both the vanilla MLP predictor and the AutoInt models.
§.§ (Q1) Performance Comparison on Real-world Datasets
In this section, we provide the results of classification prediction by using a binary classifier performance measure AUC <cit.> for evaluating imbalanced data.
The fairness testings are evaluated with the two aforementioned fairness metrics: Δ DP and Δ EO.
Table <ref> summarizes the performance of the FairInt and the baselines on the four real-world datasets, where FC and PR refer to two vanilla MLP models which are debiased by two regularization-based bias alleviation methods Fair Constraint proposed in Sec. <ref> and Prejudice Remover <cit.>.
We observe that our FairInt significantly and consistently mitigates the bias predictions with the lower Δ DP and Δ EO across all four datasets.
The best fairness results are highlighted in bold.
Given the limitations demonstrated by FairRF <cit.> and ARL <cit.> in balancing the trade-off between AUC and fairness performance as assessed in the Law School and Bank Marketing, a comparison of their DP and EO performance with other methods is not performed.
Compared with the best bias alleviated baselines between FC and PR, our FairInt improves Δ DP by 5.37%, 36.36%, 8.08%, and 19.61% on Adult, Law School, Bank Marketing, and Diabetes, respectively.
As for Δ EO, our FairInt improves Δ EO by 4.89%, 37.70%, 17.82% and 40.35% on the four datasets, respectively.
We also make the following observations of the experimental results.
First, AutoInt can slightly improve the AUC performance with the attention-based feature interaction modeling mechanism, but it also augments the biased prediction behaviors.
As we can see from Table <ref>, AutoInt can improve the AUC of vanilla MLP models by 0.44%, 0.15%, 0.36% and 0.74% on the four datasets, respectively.
However, it at the same time increases Δ DP and Δ EO on all four datasets.
Compared with the vanilla MLP models, AutoInt increases Δ DP on three out of four datasets and increases Δ EO on all four datasets.
The reason is that the modeled feature interactions not only improve the downstream task performances but also contain the biased feature interactions that will augment the biased behaviors of predictions.
Second, our FairInt can maintain the competitive classification performance compared with the other debiased baselines.
As we can see from Table <ref>, the fairness performances of our proposed FairInt are improved significantly while the classification performances are slightly decreased.
We compare our FairInt with the Vanilla MLP model, AutoInt, and the debiased baselines PR in Figure <ref>, which illustrates their fairness-AUC curves for the four datasets.
The hyper-parameter λ_IFC and λ_FC in Eq. <ref> controls the trade-off between AUC and fairness for FairInt.
For the debiased vanilla MLP with PR, the hyper-parameter in front of the regularization term also controls the trade-off.
From Figure <ref> we also can observe that our proposed FairInt can achieve the best Δ DP and Δ EO in all four datasets while remaining competitive AUC compared to PR.
§.§ (Q2) Analysis of Bias Interaction Detection Layer
We analyze the ability of the Bias Interaction Detection (BID) layer that can identify the biased feature interactions.
In Table <ref>, the Vanilla FairInt refers to the FairInt framework without the two interaction-wise bias mitigation regularization IFC and FC, and it keeps the Bias Interaction Detection (BID) layer which is designed to identify biased feature interactions.
Compared with the vanilla MLP models, the Vanilla FairInt significantly augments biased behaviors of model predictions.
For all four datasets, the Vanilla FairInt increases Δ DP by 2.99%, 16.06%, 22.22% and 38.67%, and it increases Δ EO by 33.10%, 48.13%, 37.24% and 55.29%, respectively.
The reason Vanilla FairInt can remarkably augment the biased predictions is that BID focuses on detecting the biased feature interactions and embedding them into the latent representation.
A similar scenario can be observed in AutoInt because it models all the interactions between all the feature pairs that include biased feature interactions.
Unlike the AutoInt, our proposed FairInt focuses on learning to model the biased feature interactions only among the feature pairs which contain a sensitive attribute.
By doing so, the latent representations in FairInt embed the biased feature interaction information without other noising knowledge.
§.§ (Q3) Analysis of Fairness Constraint Components
After the latent representations in FairInt embed the bias feature interaction information, we leverage the two fairness constraints to mitigate the embedded bias feature interactions.
To better understand the effects of the two fairness constraints, Interaction Fairness Constraint and Fairness Constraint, in the proposed FairInt, we conduct the ablation studies to analyze and verify their contributions to the FairInt framework.
In Table <ref>, the Vanilla FairInt refers to the FairInt framework without the two interaction-wise bias mitigation regularization IFC and FC, + FC refers to the Vanilla FairInt with Fairness Constraint, and + IFC refers to the Vanilla FairInt with Interaction Fairness Constraint.
Although the debiasing effects of + FC are not as significant as those of the full FairInt, it achieves the same level of Δ DP and Δ EO as the vanilla MLP models debiased by FC on all four datasets.
Compared with + FC, + IFC focuses more on improving Δ EO than Δ DP.
The reason is that the implicit mitigation regularization IFC focuses on optimizing the latent representation rather than directly mitigating biased behaviors in the model predictions.
Therefore, when FairInt combines IFC with FC, it markedly improves fairness, achieving lower Δ DP and Δ EO than when only one of the two bias regularization components is used.
§.§ (Q4) Analysis of Sensitive Hyper-parameter
In this section, we study the impact of the hyper-parameter λ_IFC and λ_FC in the Eq. <ref> to answer the research question Q4.
We conduct the sensitivity analysis for both the two hyper-parameters on the Adult and Bank Marketing datasets.
To analyze the influence of λ_FC, we fix the best λ_IFC to see the trend of AUC, Δ DP, and Δ EO when changing λ_FC on the two datasets, respectively.
As shown in Figure <ref>, the downstream task performance (AUC) of the proposed FairInt is not sensitive to λ_FC.
The two fairness metrics Δ DP and Δ EO improve as λ_FC increases, and the improvement gradually converges to a certain level.
To analyze λ_IFC, we fix the best λ_FC and observe the trend of AUC, Δ DP and Δ EO when changing λ_IFC on the Adult and Bank Marketing datasets, respectively.
According to the observations from Figure <ref>, the downstream task performance (AUC) of FairInt is likewise not sensitive to λ_IFC.
At the same time, the best λ_IFC can typically achieve the best Δ DP when reaching the best Δ EO on both Adult and Bank Marketing datasets.
§.§ (Q5) Key Observations on Interaction
One of the benefits of modeling feature interaction is that it provides better interpretability by observing the pattern of modeled feature interactions.
Therefore, in this section, we provide the key observations on the feature interactions, which refer to the attention weights a_s,k calculated by Eq. <ref> in our proposed FairInt.
Here, we show the feature interactions between the sensitive and non-sensitive attributes on the Adult dataset, and we treat the FairInt w/o Both as a biased model, the FairInt w/o FC as a slightly fair model, the FairInt w/o IFC as a fair model, and FairInt as a fairer model.
The feature interactions of FairInt w/o Both, FairInt w/o FC, FairInt w/o IFC and FairInt are shown in the Figure <ref>.
In the four figures, the yellow points represent the mean values of each attention weight between the sensitive attribute gender and a non-sensitive attribute.
By comparing the feature interactions between biased and fair models, we conclude with the two factors of the feature interactions, which are variance and mean value.
Fair models have a lower variance of each feature interaction between sensitive and non-sensitive attributes, and a mean value of one feature interaction represents the correlation between the sensitive and the non-sensitive attribute.
For example, comparing the attention weights of FairInt, the fairest one among the four models, with FairInt (w/o Both), the most unfair one among the four models, the feature interactions between gender and all other non-sensitive attributes have lower variances in the more fair model.
Also, the mean value of the feature interaction between gender and relationship is lower in the fairest model, which implies the fairer model treats relationship as a less relevant attribute against gender.
§ CONCLUSION AND FUTURE WORKS
In this paper, we proposed FairInt, an assumption-free framework that automatically identifies and mitigates biased feature interactions. Our framework does not require prior knowledge to identify the related attributes in advance for mitigating unfair model predictions. FairInt is composed of the Sensitive Attributes Reconstructor, Bias Interaction Detection, and Interaction-wise Bias Mitigation, which predict the pseudo-sensitive attributes, model the identified biased feature interactions, and mitigate the biased interactions with FC and IFC, respectively. Experiments on four real-world datasets demonstrate that FairInt alleviates unfair model predictions while maintaining competitive classification performance.
As a future direction, we will explore a novel fairness constraint that limits the variance of the feature interactions, which reflects the degree of fairness of the proposed FairInt.
|
http://arxiv.org/abs/2307.03915v1 | 20230708063742 | Galaxy-dark matter connection of photometric galaxies from the HSC-SSP Survey: Galaxy-galaxy lensing and the halo model | [
"Navin Chaurasiya",
"Surhud More",
"Shogo Ishikawa",
"Shogo Masaki",
"Daichi Kashino",
"Teppei Okumura"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
We infer the connection between the stellar mass of galaxies from the Subaru Hyper Suprime-Cam (HSC) survey, and their dark matter halo masses and its evolution in two bins of redshifts between [0.3, 0.8]. We use the measurements of the weak gravitational lensing signal of the galaxies using background galaxies from the Year 1 catalog of galaxy shapes from the HSC survey. We bin the galaxies in stellar mass with varying thresholds ranging from 8.6 ≤log[ M_*/(h^-2M_⊙)] ≤ 11.2 and use stringent cuts in the selection of source galaxies to measure the weak lensing signal. We present systematic and null tests to demonstrate the robustness of our measurements. We model these measurements of the weak lensing signal together with the abundance of galaxies in the halo occupation distribution framework. For each stellar mass threshold bin, we obtain constraints on the halo occupation parameters of central galaxies M_ min and σ_log M, which correspond to the halo mass at which central galaxies in the threshold sample reach half occupancy, and its scatter, respectively, along with parameters that describe the occupation of the satellite galaxies. The measurements of abundance and weak lensing individually constrain different degeneracy directions in the M_ min and σ_log M plane, thus breaking the degeneracy in these parameters. We demonstrate that the weak lensing measurements are best able to constrain the average central halo masses. We compare our measurements to those obtained using the abundance and clustering of these galaxies as well as the subhalo abundance matching measurements and demonstrate qualitative agreement. We find that the galaxy-dark matter connection does not vary significantly between the redshift bins we explore in this study. Uncertainties in the photometric redshift of the lens galaxies imply that more efforts are required to understand the true underlying stellar mass-halo mass relation of galaxies and its evolution over cosmic epoch.
galaxies: evolution – galaxies: haloes – (cosmology:) large-scale structure of Universe - gravitational lensing: weak - cosmology: observations
§ INTRODUCTION
In the standard cosmological model, structure formation in the Universe is governed by the interplay between dark matter, which enhances overdensities of matter distribution, and dark energy, which acts to hinder such growth. Dark matter halos form the basic unit of the large scale structure, and their abundance is highly sensitive to this interplay between the cosmological parameters <cit.>. The formation and evolution of galaxies in dark matter halos is a result of complex astrophysical processes related to the formation and evolution of stars, its effect on the gas, the feedback from supermassive black holes at their centers, as well as, the mergers of galaxies <cit.>. Direct inference of the connection between dark matter halos and galaxies is thus important to understand these astrophysical processes <cit.>. In turn, an accurate determination of this connection can help in the inference of cosmological parameters <cit.>.
The stellar mass contained within galaxies reflects the integrated star formation efficiency of dark matter halos of various masses. It is now well established that the star formation efficiency of halos peaks around intermediate mass halos of around 10^12 <cit.> and halos on either side of this are less efficient due to various forms of feedback associated with star formation at the low mass end and supermassive black holes at the high mass end. The evolution of the stellar mass-halo mass relation can thus provide insights into how this star formation efficiency changes with time <cit.>.
Various observational techniques have been used to probe the dark matter halos of galaxies. One of the techniques that directly probes the halo masses beyond few tens of is the inference of masses using the kinematics of satellite galaxies in dark matter halos <cit.>. Satellite kinematics however has to rely on the assumption of virial equilibrium, the anisotropy of the dispersion in the orbits of satellite galaxies in dark matter halos, velocity bias which could arise from the differences in the distribution of matter compared to satellite galaxies, and accurate determination of the interloper galaxies which could masquerade as satellites. Indirect techniques such as subhalo abundance matching <cit.> instead rely on the ansatz of a monotonous relation between the stellar mass and halo masses of galaxies, along with a scatter in addition to a fixed set of cosmological parameters which determines the (sub)halo abundances. The technique of matching these abundances to the abundance of galaxies measured as the stellar mass function, allows an inference of the stellar mass-halo mass relation <cit.>. The clustering of galaxies on large scales can also indirectly provide information about this relation <cit.> by utilizing the dependence of the large scale bias of halos on the mass of halos <cit.>.
The weak gravitational lensing signal <cit.> of galaxies provides another direct method to constrain the galaxy-dark matter connection. According to general theory of relativity, an overdensity of matter warps spacetime in its vicinity in a manner that distorts light bundles from distant background sources traveling toward us. In its weak form, gravitational lensing causes a coherent tangential distortions in the shapes of such background galaxies. The distortion in the shape of a single galaxy due to weak lensing is quite small and difficult to disentangle from the intrinsic elliptical shape of its isophotes. A statistical averaging of the shapes of many such background galaxies gets rid of the uncorrelated intrinsic shapes of galaxies and allows the measurement of the coherent shear imprinted on the background galaxies due to weak lensing. Measurements of shapes of galaxies from ground based imaging data is challenging (see e.g., <cit.>), as atmospheric light propagation and the telescope optics can also corrupt the measurements of shapes of galaxies. A number of tests need to be conducted for residual systematics in weak lensing measurements, but once modelled, the weak lensing signal can also provide constraints on the stellar mass-halo mass relation of galaxies <cit.>.
A number of ongoing weak lensing surveys cover large areas of sky with excellent quality imaging in order to map out the dark matter distribution in the Universe. The Dark Energy Survey (DES)[<http://darkenergysurvey.org>], the Kilo Degree Survey (KiDS)[<http://kids.strw.leidenuniv.nl>], and the Subaru Hyper Suprime-Cam survey (HSC)[<http://hsc.mtk.nao.ac.jp/ssp>] have covered areas that range from 1000 to 5000 sq. degree in this pursuit. Amongst these, the HSC is the deepest and thus allows us to carry out studies of evolution of the connection between galaxies and their dark matter halos that extend over a wide range of stellar masses. In this paper, we use galaxies from the HSC survey along with their stellar mass and photometric redshift estimates from their photometry in order to infer the stellar mass-halo mass relation in two redshift bins, [0.30-0.55] and [0.55-0.80].
In recent works, <cit.> and <cit.>, the clustering and abundance of galaxies have been used to constrain the galaxy-dark matter connection of the same sample of galaxies. The former amongst these studies, model their measurements of the clustering signal using an analytical halo occupation distribution (HOD) framework, while the latter use a modification to the traditional subhalo abundance matching method in order to explain the same observables. These different methodologies can explain the measurements equally well, even though they may not agree on the prescription of how galaxies occupy their dark matter halos and thus predict a different weak lensing signal. Our weak lensing signal (hereafter, WLS) measurement can thus be used as a discriminant for such theoretical models and the assumptions that they rely on.
This paper is organised as follows: We describe the lens and source data in section <ref>. Sec. <ref> describes the abundance data we use to constrain our HOD model and to study the impact of abundances on scaling relations. The formalism of stacked weak lensing signal computations and tests of survey systematics have been detailed in sec. <ref>. We summarise our theoretical HOD modelling formalism and model fitting details in sec. <ref>. Results and inferences are discussed in sec. <ref> and previous studies employing the same datasets have been compared in sec. <ref>. We finally discuss the issues and challenges associated with photometric datasets in inferring galaxy-halo connections and possible future directions of improvements in sec. <ref> and present the summary of the results from this paper in sec. <ref>.
In this paper, we assume a standard 6-parameter flat ΛCDM cosmology with cosmological parameters set by cosmic microwave background observations <cit.>. We use (Ω_m ,Ω_Λ ,Ω_b ,σ_8 , n_s ,h ) = ( 0.309, 0.691, 0.049, 0.816, 0.967, 0.677), where, Ω_m, Ω_Λ, Ω_b denote the matter, dark energy and baryonic density with respect to the critical density of the Universe, σ_8 is related to the variance of density fluctuations on scale of 8, n_ s is the power law index of the power spectrum of density fluctuations on large scales, and h is the dimensionless Hubble parameter given by h = H_0/ 100 kms^-1 Mpc^-1. All the distances are measured in comoving units of h^-1 Mpc and stellar, halo masses are expressed in units of h^-2 M_⊙ and h^-1 M_⊙ respectively. Throughout the paper, we use log to denote 10-based logarithms.
§ DATA
§.§ HSC-SSP survey
The Hyper Suprime-Cam instrument <cit.> is a wide field imaging camera (1.5 FoV diameter) mounted on the prime focus of the 8.2m Subaru Telescope located at the summit of Mauna kea in Hawaii. The Hyper Suprime-Cam survey, a Subaru Strategic Program <cit.> is a three-layered (wide, deep and ultra-deep), multi-band (grizy plus 4 narrow-band filters) imaging survey. The HSC survey has efficiently imaged ∼ 1200 sq. deg. of the sky in its wide layer, utilizing the excellent seeing conditions at the summit and the large FoV of the camera. The data is processed using a fork of the Rubin LSST science pipelines <cit.>. The processed data from the survey has been released publicly at regular intervals. The measurement of the weak lensing signal requires well calibrated measurements of the shapes of galaxies. In our work, we use the first year shape catalog made public by the HSC survey collaboration to measure the weak lensing signal.
§.§ First year HSC shape catalog
The first year HSC shape catalog is based on an internal data release of the HSC survey (S16A). It consists of wide layer data observed over a period of 90 nights between March 2014 - April 2016. It covers an area of ∼ 140 deg^2 spread over six disjoint fields - HECTOMAP, VVDS, WIDE12H, GAMA15H, GAMA09H, and XMM. The shape measurements are performed in the i-band. Therefore, the imaging in the i-band was carried out when the full width at half maximum (FWHM) for the seeing was better than ∼ 0.8. This results in the median i-band seeing FWHM of 0.58. The corresponding 5σ point-source depth of the survey is i∼26 averaged over the area covered by S16A.
The resulting data was processed with the HSC pipeline <cit.> and the shape catalog was curated by applying a number of quality flags and several selection criteria as described in <cit.>. The resultant catalog covers an area of ∼ 136.9 deg^2. The shapes of galaxies were measured using a moments based method which corrects for the effects of the PSF using the re-Gaussianization technique <cit.>. The two components of the ellipticities are given by,
(e_1, e_2) = [(1-r^2)/(1+r^2)] (cos 2ψ, sin 2ψ)
where r denotes the minor-to-major axis ratio and ψ the angle made by the major axis with respect to the equatorial coordinate system.
The final shape catalog consists of galaxies selected from the full depth-full color region in all five filters. Apart from some basic quality cuts related to pixel level information, the catalog includes extended objects with an extinction corrected cmodel magnitude i<24.5, i-band SNR≥ 10, resolution factor R_2≥0.3, >5σ detection in at least two bands other than i and a cut on the blendedness of the galaxy in the i-band. This conservative selection of galaxies results in an unweighted (raw) source number density of 24.6 arcmin^-2. When lensing related weights are taken in to consideration, the effective number density of sources is ∼ 21.8 arcmin^-2, with a sample of galaxies with median redshift of ∼ 0.8. The additive (c_1, c_2) and multiplicative biases (m) in the shape measurements, as well as the RMS intrinsic distortion of shapes (e_ rms) and the photon noise component (σ_ e) were calibrated using detailed image simulations <cit.> with the software GALSIM <cit.>. These image simulations account for the survey characteristics such as the variation in depth and seeing. The shape catalog is accompanied with inverse variance weights w_ s for each galaxy which is given by
w_ s = 1/(σ_ e^2 + e^2_ rms) .
The shape catalog satisfies a number of systematics and null tests, with residual systematics at the level of 0.01, sufficient to carry out cosmological analyses with the data.
The shape catalog is also supplemented with six different catalogs of photometric redshifts of galaxies as inferred by a number of methods, some relying on fitting the photometry using templates, while others use machine learning <cit.>. In our analysis we use the estimates of the redshifts provided by MIZUKI code <cit.>, which uses templates of galaxy spectral energy distributions (SEDs) and priors to fit the observed photometry of galaxies. It assumes an exponentially decaying star formation history with a variable decay time scale, along with a solar metallicity for the SED templates. It also assumes that the initial mass function is Chabrier <cit.> and that the dust attenuation is given by <cit.>. Finally nebular emission lines are also added to the SEDs. In addition to various point estimates (e.g. mean, median, mode, best) and the posterior distribution functions (PDFs) of the redshift for individual galaxies, the code also outputs several physical properties, such as stellar mass and specific star formation rate of these galaxies. We will use galaxies with reliable photometric redshifts and thus restrict our source galaxy sample to those galaxies which have photoz_risk_best < 0.5.
§.§ Lens galaxies
As our lens galaxies, we will use the galaxy samples presented by <cit.> in their HOD analysis of the clustering of these galaxies. In brief, our sample excludes galaxies centered on pixels at the edge of photometric images, or affected by cosmic rays, or have saturated pixels using the following flags: flags_pixel_edge, flags_pixel_interpolated_center, flags_pixel_saturated_center, flags_pixel_cr_center, and flags_pixel_bad. We also avoid galaxies with bad fits to the SED models and remove those with χ^2/ dof≥ 3 or photoz_risk_best ≥ 0.1 to use as our lens galaxies. In addition to the above cuts already mentioned in I20, we also apply the full-depth full color mask to the lens galaxy sample, to avoid selecting lenses from regions which were not observed in all bands to the nominal depth of the HSC survey. Finally, we also apply the same star mask <cit.> as that applied to the weak lensing shape catalog (S16A) which ensures full overlap of the lens galaxies spanning 125.7 deg^2 on the sky with the source catalog.
We will focus on the first two redshift bins presented in I20 and use galaxy samples with 0.30 ≤ z_ best < 0.55 (Bin z_1) and 0.55≤ z_ best < 0.80 (Bin z_2). These subsamples have redshifts that are smaller than the median of the redshifts of the source galaxies we use for the weak lensing signals. This allows us to get better signal-to-noise ratios in our measurements. In order to select lens galaxies that reliably lie in the redshift bins of our interest, we follow <cit.> and exclude galaxies which are within one standard deviation error (as reported by MIZUKI) from the bin edges that define the galaxy samples. The redshift distribution of the samples can be seen in Fig. 2 of I20 and Fig. <ref> after applying additional quality masks as mentioned above.
We will further divide the galaxy samples in each redshift bin using M_* - the median estimate of the stellar mass posterior distribution as provided by MIZUKI. We note that <cit.> uses h=0.7 to convert h-factors in M_*, and we do the same to change stellar mass units from M_⊙ to h^-2M_⊙ whenever required. We construct stellar mass threshold subsamples within each of the redshift bins. Given the flux limit of HSC, we do not use galaxies with stellar masses below 10^8.6 h^-2M_⊙ and 10^9 h^-2M_⊙ for the redshift bins z_1 and z_2, respectively. For bin z_1 (z_2) we make 13 (12) stellar mass threshold subsamples, whose statistics are listed in Table <ref>.
§ ABUNDANCE OF GALAXIES
We adopt the measurements of the abundance of galaxies as reported in I20 in order to adopt consistent abundances while comparing the results of the clustering analysis with those obtained from weak lensing. In their work, I20 compare their estimates of the SMF of photometric MIZUKI-HSC galaxies in bins of MIZUKI stellar masses and redshifts with those obtained using a multi-band, multi-survey data available in COSMOS/UltraVISTA field over 1.62 deg^2 sky area with a K_s-band limit of 23.4 mag (90% complete). This allows I20 to infer the completeness of the photometric HSC galaxy sample. They also computed the abundances of MIZUKI galaxies in stellar mass threshold bins[I20 and M13 abundances are available at: <https://github.com/0Navin0/galaxy_halo_connection_in_HSC/tree/main/abundances>].
Abundances of galaxies derived from photometric galaxy catalogs are prone to errors and systematics due to modelling uncertainties in their redshift and stellar mass estimates. These uncertainties in photometric redshifts are also expected to be correlated, a systematic error which results in a higher (lower) redshift for the galaxy, will also end up in systematic error which assigns a higher (lower) stellar mass to the galaxy. Errors in photometric redshifts also potentially translate into errors in the abundance. To reduce the systematics related to photometric redshifts on the abundance estimates, I20 carry out a `trimming' procedure in their section 2.3.2, which removes galaxies at the redshift bin edges with uncertain redshifts. This results in a loss of volume, but can improve the reliability of the lensing measurements, by keeping galaxies which have a higher probability of being in a given redshift bin. As the photometric measurement errors and the associated photometric redshift errors are expected to increase for fainter galaxies, this trimming method is nevertheless expected to systematically affect the abundances of fainter galaxies. The comparison with COSMOS/UltraVISTA in I20 is designed to keep a tab on such effects.
In order to study the impact of varying the abundances of galaxies, we will also carry out our analysis using the abundances that we compute from the best fit Schechter function models to the observed SMFs of galaxies from UltraVISTA in <cit.> and label them as M13 abundances[4] .
In their study, M13 provide single and double Schechter fitting functions for SMFs of galaxies in two redshift bins [0.20,0.50)z^'_1 and [0.50,1.00)z^'_2 which are closer to our original redshift bins z_1 and z_2, respectively. We plot and compare the I20 and M13 abundances as a function of the stellar mass thresholds in Fig. <ref>.
The abundance of central galaxies is related to the abundance of dark matter halos via their halo occupation distribution. In general, galaxies in a catalog do not necessarily come with a label of central or satellite. Although algorithms to group galaxies together exist, the large errors in photometric redshifts imply that it is increasingly difficult to do so in photometric surveys. Therefore, we use a relatively large 15% error on the abundance of galaxies for the I20 and M13 abundance measurements, so that they do not excessively drive the constraints on the halo occupation distributions. This has the effect of increasing the effective weight of our lensing signal to drive the halo occupation constraints. As mentioned before we will explore how the use of abundance changes the constraints of the stellar mass-halo mass relation we obtain.
§ WEAK GRAVITATIONAL LENSING
Weak gravitational lensing induces statistically coherent, tangential distortions in the shapes of background galaxies due to the intervening matter distribution along the line-of-sight towards the background galaxies. The tangential component of the shear γ_t imparted by an intervening matter distribution is related to its excess surface density (ESD) such that
ΔΣ(R) = Σ(<R) - ⟨Σ⟩ (R) = ⟨γ_t⟩(R) Σ_ crit(z_ l, z_ s) .
Here Σ(R) is the lens surface mass density at a projected separation R from the lens centre at redshift z_ l, Σ(<R) denotes the surface density averaged within a circular aperture R from the lens centre, and ⟨Σ⟩ (R) is the surface density averaged azimuthally at a distance R. The quantity Σ_ crit(z_ l, z_ s) is a geometrical factor dependent upon the physical angular diameter distances between us (observer) and the lens, D_ a(z_ l), us and the source, D_ a(z_ s), and between the lens and source, D_ a(z_ l, z_ s), and is given by,
Σ_ crit(z_ l, z_ s) = [c^2/(4π G)] D_ a(z_ s)/[D_ a(z_ l) D_ a(z_ l, z_ s) (1+z_ l)^2] ≡Σ_ crit, ls .
The factor of (1+z_ l)^2 in the denominator corresponds to our use of comoving coordinates. The intrinsic shapes of galaxies contribute to the noise in the determination of this shear from the measured ellipticity of galaxies. Therefore the signal has to be measured statistically by averaging the tangential ellipticity over a large sample of galaxies using weights that yield a minimal variance estimator for ΔΣ. For every lens-source pair, we use the weight w_ ls = w_ s⟨Σ^-1_ crit, ls⟩^2 while performing this average, where w_ s is the weight due to error in the shape measurement defined in equation (<ref>). The weight w_ ls defined above automatically down-weights lens-source pairs which are separated by a small distance from each other.
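As an illustration of the quantity above, the following sketch computes Σ_ crit for a single lens-source pair in comoving coordinates with astropy, using the cosmological parameters adopted in this paper; the conversion to the h-scaled units used in our measurements is omitted for brevity.

import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import constants as const, units as u

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.309)   # cosmology adopted in this paper

def sigma_crit_comoving(z_l, z_s):
    """Critical surface density Sigma_crit(z_l, z_s) for one lens-source pair,
    including the (1+z_l)^2 factor that accounts for comoving coordinates."""
    d_l = cosmo.angular_diameter_distance(z_l)
    d_s = cosmo.angular_diameter_distance(z_s)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    sigma = (const.c ** 2 / (4.0 * np.pi * const.G)
             * d_s / (d_l * d_ls * (1.0 + z_l) ** 2))
    return sigma.to(u.Msun / u.Mpc ** 2)

# e.g. sigma_crit_comoving(0.4, 1.0)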
We will use the full PDF of the redshift (z-PDF) of each source galaxy and the z_ best estimate of the redshift of each lens galaxy as provided by the photo-z estimation code, and compute the average of the inverse critical surface mass density for each lens-source pair, ⟨Σ^-1_ crit, ls⟩, given by,
⟨Σ^-1_ crit, ls⟩ = [4π G (1+z_ l)^2/c^2] ∫_z_ l^∞ [D_ a(z_ l) D_ a(z_ l, z_ s)/D_ a(z_ s)] p(z_ s) dz_ s .
The minimum variance estimator for ΔΣ is given by
ΔΣ(R) = 1/(1+m̂) { [∑_ ls w_ ls e_ t,ls⟨Σ^-1_ crit, ls⟩^-1] / [2ℛ∑_ ls w_ ls]
- [∑_ ls w_ ls c_ t,ls⟨Σ^-1_ crit, ls⟩^-1] / [∑_ ls w_ ls] } ,
where e_ t,ls and c_ t,ls are the tangential components of ellipticity and the additive bias for the source galaxy in a lens-source pair, respectively. The quantity m̂ is the sample-averaged multiplicative bias and is given by
m̂ = ∑_ ls w_ ls m_ ls/∑_ ls w_ ls .
The symbol ℛ denotes the ensemble responsivity of the measured distortions to a small shear <cit.> and can be computed using the RMS intrinsic shape distortions e_ rms provided in the catalog as,
ℛ = 1 - ∑_ ls w_ ls e^2_ rms,ls/∑_ ls w_ ls .
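A minimal NumPy sketch of this stacking estimator is given below, operating on per lens-source pair arrays; the function name, the binning convention and the assumption that the pair weights already include the ⟨Σ^-1_ crit, ls⟩^2 factor are illustrative.

import numpy as np

def stack_delta_sigma(e_t, c_t, w_ls, inv_sigma_crit, e_rms, m, r_bin, nbins=10):
    """Minimum-variance estimator of Delta Sigma(R) in radial bins.

    All inputs are per lens-source pair arrays: tangential ellipticity e_t,
    additive bias c_t, weight w_ls = w_s <Sigma_crit^-1>^2, the averaged
    inverse critical surface density, rms intrinsic distortion e_rms,
    multiplicative bias m, and the radial-bin index r_bin of each pair.
    """
    delta_sigma = np.zeros(nbins)
    for i in range(nbins):
        sel = (r_bin == i)
        w = w_ls[sel]
        sumw = w.sum()
        resp = 1.0 - np.sum(w * e_rms[sel] ** 2) / sumw      # responsivity R
        mhat = np.sum(w * m[sel]) / sumw                     # sample-averaged m
        tangential = np.sum(w * e_t[sel] / inv_sigma_crit[sel]) / (2.0 * resp * sumw)
        additive = np.sum(w * c_t[sel] / inv_sigma_crit[sel]) / sumw
        delta_sigma[i] = (tangential - additive) / (1.0 + mhat)
    return delta_sigma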
In addition, to minimize effects of the uncertainty in the photometric redshifts, we use only those source galaxies which satisfy
∫_z_ l, max + z_ diff^∞ p(z_ s) dz_ s > 0.99 ,
where z_ l, max is the maximum redshift in a lens galaxy sample, and we use z_ diff=0.1 in our work. This selection implies that, based on the posterior of the redshift from the photometry, the source galaxies we use have a >99% probability of having a redshift greater than the farthest galaxy in the lens sample. Thus they are more likely to be true background galaxies. Even after applying this photo-z filter, source galaxies can still be contaminated by structures correlated with the lenses if the posteriors p(z_ s) are biased. Therefore, we will quantify any such contamination by looking for source galaxies clustered with our lens galaxies.
The shape noise of galaxies constitutes a dominant component of the error budget on small separation scales between the lens and the source, as the number of lens-source pairs at such separations is small. The error on the weak lensing signal measured around a sample of galaxies at various projected radial bins can be expected to be correlated, as the same source galaxy may be used for the lensing signal around different lens galaxies in the sample. Such covariance between the measurements which arises due to shape noise can be quantified by randomly rotating the source galaxies and measuring the weak lensing signal around the lens galaxies. This preserves the number of pairs but presents a random realization of the source population ellipticities. However, on large scales we also expect the covariance due to the large scale structure. The large scale over-densities in which the lens galaxies reside can coherently shift the measurements up or down, leading to a larger covariance on such scales than that expected from just the shape noise.
We account for the above sources of noise together using the jackknife technique, where we divide the full survey area of the lens catalog in to 103 rectangular jackknife regions, each having an approximately equal area ∼ 1.22 deg^2, distributed contiguously in each survey field. We utilize the random catalog of points provided by the HSC survey, which have a uniform density of 100 random object per square arc minute, and where we can apply the same exact mask that we applied to our lens samples. Throughout this work, the jackknife sub-division of area remains identical for all the subsamples in each redshift bin. We then measure the lensing signals by excluding each region from the entire data at a time. We use these measurements to compute the covariance matrix 𝒞,
C_ij = [(N-1)/N] ∑_k=1^N[ ΔΣ(R_i,k) - ΔΣ̅(R_i) ] [ΔΣ(R_j,k) - ΔΣ̅(R_j) ] .
Here the indices i,j both vary from 1 to 10 for the 10 projected radial bins, ΔΣ(R_i,k) is the signal computed in the i^ th projected radial bin with the k^ th jackknife region removed, and the quantity with a bar on top, ΔΣ̅(R_i), is the average of the jackknife measurements in a particular radial bin.
We also define the cross-correlation matrix of the measurements between the i^ th and the j^ th projected radial bins to be given by
r_ij = 𝒞_ij/√(𝒞_ii𝒞_jj).
The cross-correlation matrix of the measurement for a representative set of stellar mass threshold samples in each of the redshift bins is shown in the different rows of Fig. <ref>. As expected we see that on small scales the off-diagonal components of this matrix are close to zero, however as we approach larger scales, neighbouring radial bins show enhancement in the cross-correlation of their errors.
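For concreteness, a short NumPy sketch of the jackknife covariance and cross-correlation matrices defined above is shown here; the array layout is an assumption.

import numpy as np

def jackknife_covariance(delta_sigma_jk):
    """Jackknife covariance C_ij and cross-correlation r_ij.

    delta_sigma_jk: (N_jk, N_R) array whose k-th row holds Delta Sigma(R)
    measured with the k-th jackknife region removed (N_jk = 103 here).
    """
    n_jk = delta_sigma_jk.shape[0]
    diff = delta_sigma_jk - delta_sigma_jk.mean(axis=0)
    cov = (n_jk - 1.0) / n_jk * diff.T @ diff
    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    return cov, corr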
Next, we present the results of two null-tests of survey systematics. First, we present the measurement of the weak lensing signal (ΔΣ_ rand) around random points which are distributed in the HSC footprint in the same manner as our lens galaxies. Second, we present the cross-component (ΔΣ_ rand, ×) around the random points and the lens galaxies (ΔΣ_ lens, ×); where ΔΣ_× averages the cross-component of the shear which is the ellipticity induced on circular objects with major/minor axes at 45^∘ compared to the line joining the two galaxies. In the absence of systematics, both these measurements should be consistent with zero within the statistical uncertainty.
In order to measure the signal around random galaxies, ΔΣ_ rand, for a given threshold subsample, we resample the photometric reshifts, z_ best, with replacement from the overall redshift distribution of galaxies in that subsample and assign them to the objects in the random catalog. We follow the procedure described by equations (<ref>) - (<ref>) and compute the tangential component of the weak lensing signal ΔΣ_ rand. We subtract this measured signal from the weak lensing signal around lenses from the true subsamples. Our tests indicate, however, that the measurements around random points for each of our subsamples is consistent with zero given the statistical fluctuations. The measurements ΔΣ_ rand and cross-components ΔΣ_ rand, × around random points, as well as the cross-components ΔΣ_ lens, × around lens galaxies along with their jackknife errors are shown in Fig. <ref> for the lowest, a middle and the highest stellar mass threshold, respectively. The p-values to exceed χ^2 for all of our subsamples for both the systematics tests are presented in Table <ref>.
In spite of our conservative sample selection cuts and quality filters (Section <ref> and equation <ref>) for the lens and source galaxies, the source sample can still be contaminated by structures correlated with the lenses. Such source galaxies may not even be down-weighted by the lensing weights if their p(z_ s) is biased to high redshifts. This effectively dilutes the lensing signal as a function of projected radius. However, the overall dilution can be estimated and corrected for by multiplying the signal by a boost factor (see e.g., <cit.>). The boost factor B(R_i) is defined as the ratio of the weighted number of lens-source (l-s) pairs per lens galaxy to that of random-source (r-s) pairs per random point, which can be written as,
B(R_i) = N_r ∑_ ls w_ ls/N_l ∑_ rs w_ rs.
We correct the randoms-subtracted signals by their corresponding boost factors for each of the ready-to-model signals and their jackknife covariances. The estimated boost factors for a few of the threshold bins are shown in Fig. <ref>. The errorbars on the B(R) values are computed with the jackknife technique outlined by equation (<ref>). Apart from the few smallest projected scales in the most massive galaxy samples that we probe, redshift bin z_1 shows boost factors consistent with unity, indicating the presence of a non-zero but small amount of source contamination close to the innermost radial bin, while redshift bin z_2 shows a consistent contamination of source galaxies at all scales, with B(R) ranging from ∼ 4% at the innermost radial bin to ∼ 1% at the outermost radii. The application of the boost factor scales the signal as a function of R and may affect the covariances; however, the relative error on the signal remains the same. The relative errors of the signals in bins z_1 and z_2 evolve slowly from ∼ 5% to ∼ 10% in subsamples of increasing threshold stellar mass within log M_ *, limit= (8.6 - 10.8) and (9.0 - 11.0), respectively. The most massive threshold subsamples in each redshift bin have ∼ 15% relative errors. Given this level of statistical tolerance, we confirmed that skipping the application of boost factors does not change our parameter constraints or the resulting inferences; however, to maintain uniformity throughout our current and future analyses, we apply boost factors to the weak lensing signal measurements for all subsamples.
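As an illustration of this correction, a minimal sketch is given below; the summed pair-weight arrays and the variable names are assumptions for illustration rather than the actual measurement pipeline interface.

```python
import numpy as np

def boost_factor(sum_w_ls, sum_w_rs, n_lens, n_rand):
    """B(R_i) = (N_r * sum_ls w_ls) / (N_l * sum_rs w_rs), per projected radial bin.

    sum_w_ls, sum_w_rs : summed lens-source and random-source pair weights per bin.
    n_lens, n_rand     : numbers of lens galaxies and of random points.
    """
    return (n_rand * np.asarray(sum_w_ls)) / (n_lens * np.asarray(sum_w_rs))

# The boost-corrected, randoms-subtracted signal is then
#   delta_sigma_corr = B * (delta_sigma_lens - delta_sigma_rand)
```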
The photometric redshifts of the galaxies may also have both statistical uncertainties and systematic biases. Such uncertainties could cause galaxies that are physically correlated with the lens samples to be included in our source samples, could cause source galaxies to be wrongly classified as lens galaxies, or could result in background galaxies being assigned wrong redshifts. The first of these errors is accounted for using the boost factors described in the paragraph above.
We mitigate the second error by using stringent cuts on the choice of source galaxies in this analysis, such that the fraction of source galaxies identified as lenses is small. Thus the bias in the lensing signals comes mostly from source galaxy photometric redshifts being inconsistent with their true redshifts. We examine this effect using the methodology outlined by <cit.> (see appendix). We find that the source photo-z biases in bins z_1 and z_2 are ∼ 1% and ∼ 4%, respectively, and we have confirmed that such levels of bias do not change our results or any of our conclusions in a statistically significant manner. Consequently, we have ignored the photo-z bias correction in our measurements and modelling of the weak lensing signals in this paper.
The weak lensing signals as measured using the above techniques can be seen in Figs. <ref> and <ref> for the two redshift bins we consider in this paper, respectively. The errors on the data points are based on the square root of the diagonal elements of the covariance matrix as defined in equation (<ref>).
§ THEORETICAL MODELLING
We use a halo occupation distribution (HOD) framework in order to model abundance and weak lensing signal. The HOD framework allows us to relate the theoretical predictions of the abundance of dark matter halos, their clustering and the dark matter distribution around these halos to the observed abundance and lensing of galaxies. The various parameters of the HOD of galaxies describe the average number of galaxies, N(>M_*, limit|M), in a particular sample of stellar mass threshold M_*, limit that reside in halos of mass M. In this work we only work with galaxy samples in thresholds of stellar masses, hence for ease of notation we denote it simply as N(M). We separate the total HOD of galaxies into a separate term for central galaxies and satellite galaxies, denoted by N_ c(M) and N_ s(M), respectively, such that,
N(M) = N_ c(M) + N_ s(M)
We use a 5-parameter model to describe these separate terms <cit.>,
N_ c(M) = 1/2[1+ erf( (log M - log M_ min)/σ_ log M) ] ,
N_ s(M) = N_ c(M) ( (M-M_0)/M_1)^α,
where M_ min, σ_ log M, M_0, M_1 and α are free parameters which are allowed to vary freely for each threshold subsample. Given that, apart from an unknown intrinsic scatter, the relation between central galaxies and their halos is also obscured by uncertainties in the measured signals, we include the total scatter in the host halo masses of the central galaxies via the stochastic model expressed in equation (<ref>). Assuming that each central halo hosts a single galaxy, the first equation denotes the probability that a halo of mass M hosts a galaxy belonging to the threshold subsample. According to this functional form, M_ min denotes the mass at which half of the halos are occupied by galaxies above the stellar mass threshold of the subsample under consideration. Asymptotically, the halo occupation of central galaxies tends to unity. The satellite galaxy halo occupation number is a power law in M-M_0, where M_0 is the mass scale below which there are no satellite galaxies; M_1 can be seen as the typical halo mass required to host a satellite galaxy, and the exponent α is an indicator of the accumulated star formation history of galaxies above the given mass threshold. The factor N_ c(M) in front of the satellite halo occupation number down-weights the satellite occupation in halos with low N_ c(M). Formally, we treat the two halo occupations as independent, given that there are cases in which central galaxies are not necessarily the brightest galaxies in their halos.
Further, we also need to specify the position of the central galaxies within the dark matter halos. We assume that the central galaxy resides at the center of the dark matter halo. In our fiducial model we assume that satellite galaxies are distributed according to the NFW profile,
n(r) ∝( r/r_ s)^-1( 1 + r/r_ s)^-2
where r_ s is the scale radius of the halo and is defined as r_ s = r_ 200m/c_ 200m. Here c_ 200m is the concentration of the dark matter within that halo and halo masses are defined to be the masses enclosed within an overdensity of 200 times the background matter density, denoted by M_200m.
The abundance of galaxies in the threshold sample can be computed from the HOD using
n_ gal = ∫ dM n(M) [N_ c(M) + N_ s(M)].
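A minimal sketch of these occupation functions and of the abundance integral is given below; the halo mass function dn/dM is assumed to be supplied externally (e.g. by a halo-model code such as aum or colossus), and the function and parameter names are illustrative.

```python
import numpy as np
from scipy.special import erf

def n_cen(M, logMmin, sigma_logM):
    """<N_c>(M) = 0.5 [1 + erf((log M - log M_min) / sigma_logM)]."""
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

def n_sat(M, logMmin, sigma_logM, M0, M1, alpha):
    """<N_s>(M) = <N_c>(M) ((M - M0)/M1)^alpha, set to zero for M <= M0."""
    M = np.atleast_1d(M).astype(float)
    ns = np.zeros_like(M)
    ok = M > M0
    ns[ok] = n_cen(M[ok], logMmin, sigma_logM) * ((M[ok] - M0) / M1) ** alpha
    return ns

def n_gal(M_grid, dndM, logMmin, sigma_logM, M0, M1, alpha):
    """n_gal = Int dM n(M) [N_c(M) + N_s(M)], with dndM tabulated on M_grid."""
    ntot = n_cen(M_grid, logMmin, sigma_logM) + n_sat(M_grid, logMmin, sigma_logM, M0, M1, alpha)
    return np.trapz(dndM * ntot, M_grid)
```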
We use the analytical framework presented in <cit.> in order to predict the weak lensing signal from the HOD. Here we briefly repeat the formalism for the sake of completeness.
The ESD profile, equation (<ref>), depends on the correlated surface density of matter which is a line-of-sight projection of the galaxy-matter cross-correlation function ξ_ gm at a halo-centric distance R such that
Σ(R) = ρ̅∫_0^∞ dz ξ_ gm([ R^2 + z^2]^1/2) .
Here, we have ignored the uniform density component in the computation of the surface density as it does not impact the weak lensing observables. We have also ignored any possible off-centering of central galaxies. Current modelling assumes that each halo hosts exactly one galaxy at its center
and that the dark matter contributions from subhalos of the satellite galaxies can be safely ignored. The cross-correlation is a Fourier transform of the cross power spectrum between galaxies and dark matter and can be computed using the analytical framework developed in <cit.>.
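Schematically, the line-of-sight projection can be evaluated as below; this is a numerical sketch with a finite integration limit standing in for infinity, not the aum implementation actually used in this paper.

```python
import numpy as np
from scipy.integrate import quad

def sigma_R(R, xi_gm, rho_bar, los_max=200.0):
    """Sigma(R) = rho_bar * Int_0^inf dz xi_gm(sqrt(R^2 + z^2)), truncated at los_max.

    xi_gm   : callable returning the galaxy-matter correlation at 3D separation r
    rho_bar : comoving mean matter density, in units consistent with R and xi_gm
    """
    val, _ = quad(lambda z: xi_gm(np.sqrt(R * R + z * z)), 0.0, los_max, limit=200)
    return rho_bar * val

# The excess surface density then follows from the standard definition
#   Delta Sigma(R) = (2/R^2) Int_0^R R' Sigma(R') dR'  -  Sigma(R)
# via a second, cumulative integral over the Sigma(R) profile.
```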
The total cross power spectrum between galaxies and dark matter can be divided in to 4 different terms, the one halo central and satellite terms, and the two halo central and satellite terms, such that,
P^ gm(k) = P^ 1h_ cm(k) + P^ 1h_ sm(k) + P^ 2h_ cm(k) + P^ 2h_ sm(k) .
Each of these terms can be expressed as
P^ 1h_ xm(k) = ∫ n(M) dM H_ x(k, M, z) M/ρ̅ u_ h(k| M, z) ,
P^ 2h_ xm(k) = ∫ n(M') dM' H_ x(k, M', z) ×
∫ n(M̃) dM̃ Q(k|M',M̃,z) M̃/ρ̅ u_ h(k| M̃, z) ,
and `x' stands for either central `c' or satellite `s', Q(k|M',M̃, z) describes the cross-power spectrum of halos of mass M' and M̃ at redshift z, and we use
H_ c(k|M, z) = N_ c(M)/n̅_ gal
H_ s(k|M, z) = N_ s(M)/n̅_ gal u_ s(k|M,z) .
In the equations above, u_ s/h(k|M,z) denotes the Fourier transform of the number density profile of the satellite galaxy (dark matter) distribution within the halo. As indicated previously we assume this to be given by the NFW profile. We allow the satellite and dark matter concentration to vary from the form given by <cit.> to allow for systematic uncertainties due to baryonic effects, as well as effects of averaging the dark matter profiles of halo of the same mass but varying concentrations <cit.>. We implement this with a multiplicative parameter c_ fac which alters the fiducial concentration-mass relation that we adopt in this paper. We include a Gaussian prior with unit mean and a variance of 0.2 for this parameter.
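To make the structure of the 1-halo terms concrete, a sketch of the central 1-halo integrand is given below, using the analytic Fourier transform of an NFW profile truncated at r_200m for u(k|M). This is only illustrative of the equations above (the actual modelling uses aum); the concentration passed in would come from the adopted c(M, z) relation rescaled by c_fac.

```python
import numpy as np
from scipy.special import sici

def u_nfw(k, M, c, rho_bar, delta=200.0):
    """Normalized Fourier transform u(k|M) of an NFW profile truncated at r_200m."""
    r200 = (3.0 * M / (4.0 * np.pi * delta * rho_bar)) ** (1.0 / 3.0)
    rs = r200 / c
    mu = np.log(1.0 + c) - c / (1.0 + c)
    si_hi, ci_hi = sici((1.0 + c) * k * rs)
    si_lo, ci_lo = sici(k * rs)
    return (np.sin(k * rs) * (si_hi - si_lo)
            - np.sin(c * k * rs) / ((1.0 + c) * k * rs)
            + np.cos(k * rs) * (ci_hi - ci_lo)) / mu

def p1h_cm(k, M_grid, dndM, Nc, ngal_bar, rho_bar, c_of_M):
    """P^1h_cm(k) = Int dM n(M) [N_c(M)/n_gal] (M/rho_bar) u_h(k|M)."""
    u = np.array([u_nfw(k, M, c_of_M(M), rho_bar) for M in M_grid])
    return np.trapz(dndM * (Nc / ngal_bar) * (M_grid / rho_bar) * u, M_grid)
```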
The baryonic component within the galaxy is expected to dominate the weak lensing signal at small projected separations. We model this component as a point mass contribution similar to how it has been modelled in previous studies <cit.>,
ΔΣ_b(R) = M̅_ bary/π R^2 ,
where M̅_ bary represents the average baryonic mass of all the galaxies in a given threshold subsample. We restrict our measurement of the lensing signal to scales above 100 h^-1 kpc, thus our measurements are not very sensitive to the baryonic component (10 percent of the signal at the innermost point for the largest stellar mass bin). Given this relative insensitivity of our results to the baryonic contribution, we simply model this term as the average of the stellar mass contribution of all galaxies within the bin of interest. The total modelled signal is then the sum of the ESD due to the dark matter halos and the central baryonic component.
§.§ HOD model fitting specifications
We carry out a Bayesian analysis to infer the posterior distribution of model parameters given the data, P(Θ|D, I), such that
P(Θ| D, I) ∝ P( D|Θ, I) P(Θ| I) ,
where I represents the choice of our model, the quantity P( D|Θ, I) is the likelihood of the data given the model parameters, and P(Θ| I) represents the priors on our model parameters. We assume the likelihood to be a multi-variate Gaussian, such that
ln P( D|Θ, I) ∝ -χ^2(Θ;𝒟, I)/2 ,
χ^2 = ∑_ i,j [ ΔΣ̃ - ΔΣ ]_ i [𝒞^-1]_ ij [ ΔΣ̃ - ΔΣ ]_ j + ( ñ_ gal - n_ gal)^2/σ^2_ gal ,
where the terms with a tilde on top are the modelled quantities while those without are the observed ones, the subscripts i,j stand for the i^ th and j^ th radial bins, and the covariance matrix 𝒞 is obtained from the jackknife technique discussed in Section <ref> (equation <ref>). We assume uniform priors on most of our parameters (see Table <ref>), unless mentioned otherwise.
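In code, this likelihood amounts to the following schematic, where the model prediction function and the data vectors are placeholders to be supplied by the measurement and modelling pipeline.

```python
import numpy as np

def log_likelihood(theta, ds_obs, cov_inv, ngal_obs, sigma_ngal, predict):
    """ln L = -0.5 chi^2, combining the lensing vector and the abundance.

    predict(theta) is assumed to return the modelled (Delta Sigma vector, n_gal).
    """
    ds_model, ngal_model = predict(theta)
    resid = ds_model - ds_obs
    chi2 = resid @ cov_inv @ resid + (ngal_model - ngal_obs) ** 2 / sigma_ngal ** 2
    return -0.5 * chi2
```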
We use the analytical HOD modelling framework from <cit.> as implemented by the software aum <cit.> in order to predict the abundance and galaxy-galaxy lensing predictions given the HOD parameters. We sample the posterior distribution of our parameters given the measurements using the affine invariant MCMC ensemble sampler of <cit.> as implemented in the publicly available package emcee v3.1.1 <cit.>. We use 256 walkers for a total of 10000 steps. We remove the first 2000 steps from each walker as a burn-in phase and verify the stationarity of our parameters of interest to confirm convergence.
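A corresponding sampling sketch with emcee is shown below; the flat prior ranges and the starting point are placeholders rather than the values of Table <ref>, while the Gaussian prior on c_fac uses the unit mean and variance of 0.2 quoted above.

```python
import numpy as np
import emcee

def log_prior(theta):
    logMmin, sigma_logM, logM0, logM1, alpha, c_fac = theta
    flat_ok = (10.0 < logMmin < 16.0 and 0.02 < sigma_logM < 2.0 and
               8.0 < logM0 < 16.0 and 10.0 < logM1 < 17.0 and 0.0 < alpha < 4.0)
    if not flat_ok:
        return -np.inf
    return -0.5 * (c_fac - 1.0) ** 2 / 0.2        # Gaussian prior on c_fac (variance 0.2)

def log_posterior(theta, *data):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, *data)

nwalkers, ndim = 256, 6
theta_start = np.array([12.5, 0.4, 12.0, 13.5, 1.0, 1.0])     # illustrative starting point
p0 = theta_start + 1e-3 * np.random.randn(nwalkers, ndim)
data = (ds_obs, cov_inv, ngal_obs, sigma_ngal, predict)       # assembled from the measurements
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=data)
sampler.run_mcmc(p0, 10000, progress=True)
chain = sampler.get_chain(discard=2000, flat=True)            # drop the burn-in phase
```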
§.§ Model predictions
In addition to modelling the observables, the ΔΣ and the abundances, we also compute predictions of satellite fractions,
f_ sat = ∫ dM n(M) N_ s(M)/N
and average central halo masses,
M_ cen = ∫ dM M n(M) N_ c(M)/N_ c
for each threshold subsample, accounting for the full sampled posterior distributions. Here N=N_ c+N_ s is the total number of galaxies computed for a given subsample and N_ x=∫ dM n(M) N_ x(M); `x' stands for either `c' or `s'.
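These derived quantities follow directly from the occupation functions sketched earlier; evaluating them for every posterior sample propagates the full parameter uncertainty. The snippet below reuses the illustrative n_cen and n_sat functions defined above.

```python
import numpy as np

def fsat_and_mcen(M_grid, dndM, logMmin, sigma_logM, M0, M1, alpha):
    """Satellite fraction and average central halo mass for one HOD parameter set."""
    Nc = n_cen(M_grid, logMmin, sigma_logM)
    Ns = n_sat(M_grid, logMmin, sigma_logM, M0, M1, alpha)
    N_c = np.trapz(dndM * Nc, M_grid)
    N_s = np.trapz(dndM * Ns, M_grid)
    f_sat = N_s / (N_c + N_s)
    M_cen = np.trapz(dndM * M_grid * Nc, M_grid) / N_c
    return f_sat, M_cen

# Loop over the MCMC samples and quote e.g. the 16th/50th/84th percentiles
# of f_sat and M_cen for each threshold subsample.
```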
§ RESULTS AND DISCUSSION
We measure the weak gravitational lensing signal for stellar masses log M_* ≥ 8.6 in z ∈ [0.3, 0.55] and log M_* ≥ 9.0 in z ∈ [0.55, 0.80]. Our measurements for the different threshold bins at the two epochs are shown as black circles in Figs. <ref> and <ref>, respectively. The errors on the points are the square root of the diagonal of the error covariance matrix for each measurement. The figures show R ΔΣ as a function of the projected separation from the lens galaxies, and we list the SNR of the measurements in the lower right boxes of each subpanel. The weak lensing measurements in each redshift bin clearly show that the lensing signal increases in strength for lens galaxies with a higher threshold in stellar mass. The lensing signals also show deviations from ΔΣ ∝ R^-1, as would be expected for a simple isothermal profile.
§.§ HOD modelling of the abundance and lensing signal
We fit the analytical HOD model to each of the measurements described above and obtain the posterior distribution of the parameters of our model given the measurements. The priors that we use on the parameters for our analysis are listed in Table <ref>. The solid magenta lines in Figs. <ref> and <ref> and the associated grey shaded regions indicate the best fit model and the 68 and 95 percentile credible intervals of the joint fit to the lensing and the I20 abundance measurements in the two photometric redshift bins z_1 and z_2, respectively. The best fit χ^2 values obtained from our measurements, along with the number of degrees of freedom computed following the formalism of <cit.>, are also indicated in the boxes on the lower right of each subpanel.
We decompose the best fit model into the 1-halo central and 1-halo satellite terms, in addition to the 2-halo term, indicated by the solid red, solid orange and dotted green lines, respectively. The baryonic contribution to the lensing signals is quite small, and for clarity we have artificially boosted it by a factor of ten and shown it with a dashed line. The 1-halo central component dominates in the innermost regions up to a few hundred kiloparsecs, followed by the rising 1-halo satellite component as we move further out.
The increasing amplitude of the observed lensing signal can be fit with a consistently rising 1-halo central component. Statistically, this indicates that central galaxies with higher stellar masses live in more massive dark matter halos. The satellite component corresponds to halos which are more massive than those hosting the centrals. These measurements and our modelling allow us to infer the stellar mass-halo mass relation for the central galaxies together with the satellite fractions in each of our subsamples, and these are a reflection of the scale dependence of the measured weak lensing signal.
Our results indicate that a simple dark-matter-only HOD model in ΛCDM cosmology is flexible enough to describe the observed lensing and abundance measurements in each of the threshold stellar mass bins. The best fit χ^2 values corresponding to joint fits of the weak lensing with either the I20 or the M13 abundances are listed in Table <ref>. We obtain similar values of χ^2 despite the large differences in abundance between I20 and M13, which hints at a potential degeneracy among the HOD parameters when fitting weak lensing and abundances. Even though they appear statistically consistent, we see some evidence that I20 is better fit than M13 for the low threshold mass subsamples in the z_1 bin, while M13 is better fit for the high and low mass thresholds in the z_1 and z_2 bins, respectively.
The two-dimensional marginalized posterior distributions[The posterior distributions for two stellar mass thresholds chosen to be representative at each redshift bin have been made available online in the appendix.]
of the free parameters show familiar degeneracies between the central halo occupation parameters log M_ min and σ_ log M, where an increase in one parameter can be compensated by a corresponding increase in the other. We will discuss the dependence of these degeneracies on our different observables in the following subsection. The satellite parameters are often ill-constrained, with a wide variety of satellite parameters leading to similar observables. The constraints on the free parameters of the HOD model, along with the inferred satellite fractions, abundances and M_ cen for each threshold bin in the two redshift bins, are listed in Tables <ref> and <ref>, respectively.
§.§ Degeneracy among central HOD parameters and abundance
Using the posterior distribution of the HOD parameters in our fiducial analysis, we examine the degeneracy between the central HOD parameters and its dependency on the weak lensing and the abundance, separately. The estimates of the abundance of galaxies differ between I20 and M13, and therefore can lead to different constraints on the HOD parameters. Therefore, we fit the HOD model to these observables individually and in combination to demonstrate the impact of using either of these abundance measurements. In Fig. <ref>, we present the resulting degeneracy contours between central HOD parameters corresponding to each of these observables. The 68 percent credible regions from the weak lensing only fit, the I20 abundance only fit and the M13 abundance only fit are shown with black, blue and red contours, respectively. The orange and the green correspond to the joint fits between weak lensing and the abundances from I20 and M13, respectively. The different subpanels correspond to lens subsamples with different stellar mass thresholds for bin z_1, chosen for illustrative purposes.
The central HOD parameters log M_ min and σ_ log M are degenerate with each other for each of the observables individually: a positive change in one can be compensated by a positive change in the other. This can be understood as follows. The abundance of halos is a decreasing function of halo mass, so increasing M_ min would in general lead to a smaller abundance. However, this can be compensated by increasing the scatter σ_ log M, which allows galaxies to populate the more numerous, less massive halos, thus satisfying the observed abundance. The relative shift between the degeneracy contours corresponding to I20 and M13 reflects the smaller abundance of galaxies inferred by I20 compared to M13. The weak lensing signal, on the other hand, is sensitive to the average halo masses of the lens samples. Thus an increase in M_ min, which would nominally increase the average halo masses, can be compensated by increasing the scatter, which brings in lower halo masses. At the highest stellar mass threshold, the degeneracy contours become flatter due to the exponential decrease in the halo abundance at the high mass end. Even though the degeneracies in the log M_ min - σ_ log M plane are qualitatively similar, the different dependencies of the halo abundance and of the average halo mass on the HOD parameters imply that the quantitative degeneracy directions differ. The combination of the abundance and weak lensing, shown by the orange/green contours, thus results in tighter constraints on each of the central HOD parameters, which otherwise cannot be constrained by either of the observables on their own.
We also observe that weak lensing prefers somewhat higher values of log M_ min than the abundance information alone, up to stellar mass thresholds of 10^10.2 (10^10.4) in redshift bin z_1 (z_2), where the lensing and abundance contours start to cross over. The exact location of this stellar mass depends upon which study we take the abundance information from. In general, we expect that adding the abundance information will lead to lower values of log M_ min than weak lensing alone at the low stellar mass threshold end. At the high stellar mass threshold end, the inclusion of abundances can have a non-negligible impact on the inferred central HOD parameters, as can be seen from the relatively flat degeneracy contours in the log M_ min - σ_ log M plane in the right hand subpanel of Fig. <ref>.
§.§ Galaxy-halo connection
The galaxy-halo scaling relation that we obtain from our joint analysis of weak lensing and the abundance of galaxies can be summarized by the dependence of M_ min on the stellar mass threshold log M_ *, limit. The parameter M_ min corresponds to the mass at which half of the halos are occupied by galaxies in a given stellar mass threshold sample. This implies that the scaling relation between M_ min and log M_ *, limit can be interpreted as the median of the stellar mass of galaxies at fixed halo mass. We show these constraints for our two redshift bins in different panels of Fig. <ref>. Our fiducial constraints are shown as credible intervals using light (95 percent) and dark grey (68 percent) shaded regions, corresponding to the use of our weak lensing measurements combined with the abundance measurements presented in I20. If instead we combine with the abundance measurements from M13, we obtain the constraints shown as blue points (median) with errors (68 percent credible interval). As discussed in the previous section, the smaller abundance inferred in I20 compared to M13 leads to a higher M_ min when we use the abundance from I20 for redshift bin z_1. In contrast, for redshift bin z_2 the abundances of galaxies inferred in I20 and M13 are roughly equivalent (see Fig. <ref>) and thus the inferred M_ min is similar irrespective of which abundances we combine with the weak lensing signal.
In both redshift bins, we observe a scaling relation which shows that dark matter halos of ∼10^12 M_⊙ are the most efficient at forming stars, becoming increasingly inefficient as we move away from this mass. On the low mass side, the inefficiency of star formation manifests itself in stellar masses dropping precipitously to smaller values, while at the high mass end it is seen in the rapid rise in halo mass required to form more and more massive galaxies. Qualitatively, this picture is consistent with previous studies. We present the comparison of the parameters log M_ min and σ_ log M obtained from our analysis when combining our weak lensing measurements with the two different abundance estimates in the first two panels of Fig. <ref>. If taken at face value, our results in the left panel do not indicate a large evolution of the scaling relation between the two redshift bins, especially if we consider the abundance measurements from M13. However, the abundance measurements from I20 indicate that halos of the same mass at lower redshift host galaxies with a median stellar mass which is lower by about 0.2 dex.
The scatter σ_ log M in our HOD parameterization captures the scatter in the halo masses of galaxies that have stellar masses at the threshold chosen for our sample. We observe in the middle panel of Fig. <ref> that this scatter increases as we raise our threshold to include only massive galaxies. In models which have a fixed scatter in the stellar mass of galaxies at fixed halo mass, such behaviour is expected: the slope of the log M_*-M relation is quite shallow at the high mass end, and thus a constant scatter in stellar mass at fixed halo mass results in a scatter in halo masses that continues to increase with the stellar mass. Our results are therefore qualitatively consistent with studies that indicate such a constant log-normal scatter in stellar masses at fixed halo mass, σ_log M_* <cit.>. These trends are observed consistently irrespective of which abundances we use and of the redshift bin under consideration.
Previously, we have shown that the parameter log M_ min is degenerate with σ_ log M and that the posterior constraints on log M_ min depend strongly on the choice of the abundance measurements, especially in the first redshift bin. The weak lensing signal is expected to be sensitive to the average mass of the halos occupied by galaxies in our sample. Given that the small scale weak lensing signal is well measured and is dominated by the 1-halo central term, we expect the average central halo masses to be well determined by the lensing signal for every threshold stellar mass bin. In Fig. <ref>, the blue (orange) shaded region with slanting lines shows our constraints on M_ cen from the weak lensing measurements only. The solid blue (orange) shaded region corresponds to the 68 percent credible intervals derived from a joint fit between lensing and the abundance from I20 for redshift bin 1 (2), while the blue (orange) solid points with errors correspond to a similar joint fit but using the abundance measurements from M13. While both log M_ min and σ_ log M have physical meaning, it is clear that M_ cen better reflects the results of our weak lensing measurements and is relatively insensitive to the exact choice of abundance.
We compare the M_ cen obtained for the two redshift bins in Fig. <ref>. When compared in this manner, we see very small differences in the redshift evolution of the scaling relation. The differences seen in log M_ min and σ_ log M compensate each other, resulting in a scaling relation between M_ cen and the stellar mass threshold that shows very little evolution over the two redshift bins we use.
§.§ Satellite fraction
The weak lensing signal in the innermost regions is dominated by the dark matter halo of the central galaxies in each of our stellar mass threshold subsamples. Some of the galaxies in our subsample are also expected to be satellites. These satellites on average are expected to reside in more massive parent halos than halos hosting centrals of similar stellar mass. However these satellite galaxies do not reside at the centers of their parent dark matter halos, but are distributed within the halo. This signal from the satellite galaxies is thus expected to be a result of convolution of the weak lensing signal expected around the centers of their parent halos with the projected number density of satellite galaxies within the halo. The weak lensing signal at intermediate scales is thus sensitive to the fraction of satellite galaxies within the stellar mass threshold sample as well as the halo occupation distribution of the satellite galaxies in the subsample.
In Fig. <ref>, we show the fraction of satellite galaxies as a function of the stellar mass threshold of our subsamples. The solid blue (orange) shaded region shows the 68 percent credible region for the satellite fraction for redshift bin 1 (2) when using the weak lensing measurements along with the abundance measurements from I20. The regions shaded with slanted lines in the same colors correspond to the case where the lensing measurements alone are used as constraints. To maintain clarity, we do not show the results obtained using the M13 abundances here, as they are essentially similar within the errors.
Overall, the observations suggest that the satellite fractions decrease as a function of the stellar mass threshold above 10^10 for both redshift bins. There is tentative evidence of a flattening of the satellite fractions at lower stellar mass threshold for redshift bin 1. We do not find significant evidence for the evolution of the satellite fractions with redshift given the large errors in our inference, nor do we find a significant difference depending upon which abundance constraints we use.
§ COMPARISON WITH PREVIOUS STUDIES
As mentioned in Section <ref>, we examine the results from I20 and M22, two studies of the clustering and abundance of galaxies based on the same samples used in this paper, against our inferences, which are driven by the measured weak lensing signals and the abundance estimates from I20. This comparison is well suited even in the photometric observable plane due to the use of the same dataset. To briefly summarize the results and approaches of these two studies: I20 modelled the measured projected two-point correlation functions ω(θ) and the measured abundances of galaxies using the same HOD parameterization as our modelling scheme.
On the other hand, these same measurements were modelled by M22 using a modified subhalo abundance matching (SHAM) technique based on cosmological simulations from the Uchuu suite <cit.>, namely mini-Uchuu and shin-Uchuu. Of the two, mini-Uchuu has the larger box size of 400 h^-1 Mpc with a particle mass of 3.27× 10^8 h^-1 M_⊙, while shin-Uchuu has the higher mass resolution of 8.97× 10^5 h^-1 M_⊙ with a box size of 140 h^-1 Mpc. Comparison between the two simulations allows us to test the effect of resolution. In their paper, M22 explore two different proxies of halo mass which monotonically correlate with the stellar mass of galaxies (albeit with a scatter). The first approach uses the traditional peak maximum circular velocity (V_ peak), while the second one utilizes the halo mass of the progenitor of the subhalo at a prior redshift (M_ prog). The constraints presented by M22 correspond only to the first redshift bin.
We utilize the best fit HOD parameters from I20 and predict the expected weak lensing signal for each of the stellar mass threshold lens samples in redshift bins 1 and 2, using the framework prescribed in Section <ref>. These best-fit predictions are shown as blue lines in Figs. <ref> and <ref>, which correspond to redshift bins 1 and 2, respectively. In redshift bin 1, we find that the I20 predictions underestimate the measured lensing signal (by about 10-30%) at small projected radii corresponding to the 1-halo regime for threshold mass bins up to 10^10.0. For higher threshold bins, the predictions overestimate the measured weak lensing signal by up to 50-60%. Although we see qualitatively similar differences in redshift bin 2, their magnitude is much smaller than in redshift bin 1. For redshift bin 1, we also show the best fit predictions for the weak lensing signal from the V_ peak and M_ prog models of M22 using light green and blue dotted lines, respectively. Both SHAM models are able to explain the lensing signals relatively well for galaxies in mass thresholds below 10^10 compared to the more massive thresholds, especially when compared to the fits from I20. In this stellar mass range, one of the two models appears to fit the measurements slightly better than the other; however, we have checked that this is a resolution-dependent statement, and with the higher resolution shin-Uchuu run these differences largely disappear. For higher threshold stellar masses, both models fare poorly. However, we see evidence that at least one of the two models is able to capture the large scale lensing signal beyond 1 h^-1 Mpc well. For these bins, we see appreciable differences between the measurements and the predictions on small scales for both models.
The weak lensing signal in the 1-halo regime is driven by the average central halo masses M_ cen. Therefore, in Fig. <ref> we compare our inference of M_ cen for each threshold sample with that inferred from the results of I20 and M22. The best-fit predicted average central halo masses from I20 are shown as blue (left panel) and red (right panel) lines for redshift bins 1 and 2, respectively. The comparison shows that the inferences from I20 are statistically larger than ours for M_*, limit>10^10.0, consistent with the expectation based on the comparison of the predicted and measured weak lensing signals. However, for stellar mass thresholds below 10^10.0, the I20 best fit predictions appear consistent with our constraints. This implies that the corresponding differences in the weak lensing signal are likely absorbed by the difference in satellite fractions between our model and that of I20. This is visible in the comparison of the satellite fractions from I20 with our results shown in Fig. <ref>: when compared with the lensing-only results, I20 prefer larger satellite fractions in both redshift bins.
In the left hand panel of Fig. <ref>, the results from M22 for the V_ peak and M_ prog models are shown with open squares and open circles with errors, respectively. We distinguish the results from the two simulations used in M22 with two different colors: green corresponds to the mini-Uchuu simulation and magenta to the shin-Uchuu simulation. We see that both models infer results consistent with our constraints from the weak lensing and the abundance from I20 up to a stellar mass threshold of 10^10, consistent with the comparison of the weak lensing signals. At higher stellar mass thresholds, the differences seen in the weak lensing signal are a result of the higher average central masses in these models. The results of M22 are much more consistent with those of I20 in these threshold bins, suggesting that the combination with the clustering is driving the larger halo masses. In the comparison of the satellite fractions we also observe that the models from M22 always prefer higher satellite fractions than either I20 or our results, with the exact difference depending upon the resolution of the simulation. While comparing these results, it is worth keeping in mind that the scales over which the lensing measurements are carried out (≲ 5 h^-1 Mpc) are smaller than the length scales over which the clustering signal was measured by I20 (≲ 25-30 h^-1 Mpc at the median redshifts of the samples). The inferences from clustering are thus expected to be more sensitive to the large scale bias of the dark matter halos, i.e. the 2-halo term, while our inferences rely more significantly on the 1-halo term. The signal on large scales can potentially be affected by the presence of galaxy assembly bias <cit.>, and thus appropriate caution is warranted.
The I20 best fit predictions of the SMHM relation for each redshift bin are shown as red circles with errors in the two separate panels of Fig. <ref> for comparison with other studies. Despite the overestimate of M_ cen for thresholds greater than 10^10, the halo masses M_ min are underestimated. As discussed in Section <ref>, such a relation between M_ cen and M_ min is made possible by a choice of small values of the halo mass scatter σ_ log M, and we verify in the middle panel of Fig. <ref> that this is indeed the case. Additionally, the deviations of their M_ min and σ_ log M from our constraints increase as we move towards more massive galaxy thresholds. Partly this could also be due to the clustering information probing a different direction in the degeneracy space of the central HOD parameters. We highlight a contrasting feature between lensing- and clustering-based studies: the I20 study of galaxy clustering, despite using abundance information which puts strong constraints on the central HOD parameters, is unable to strongly constrain the halo mass scatter parameter at high stellar mass thresholds, whereas lensing is able to unveil the large ambiguity in the scatter parameter. This lack of constraint could be driving the disagreement between the two studies for thresholds beyond 10^10.0. Even though the high-mass slope of the SMHM relation makes the stellar mass a poor tracer of its host halo mass <cit.>, lensing is clearly more effective than clustering in probing the scatter in the SMHM relation. In the left hand panel for redshift bin 1 of Fig. <ref>, we observe that the results of M22 (shown with the same color scheme as described before) for both the V_ peak and M_ prog models are consistent with our results. We do see a difference between the results depending upon the resolution, and it appears that the two simulation boxes can trade between log M_ min and the scatter σ_ log M so as to maintain a similar value of M_ cen. This can be seen in the right panel of Fig. <ref>, where we compare the scatter from M22 in the two different simulations with our results.
The best-fit constraints on the halo mass and scatter parameters from I20 are shown as points with 1-σ errors in the left and middle panels of Fig. <ref>, where blue and red correspond to redshift bins 1 and 2, respectively. The underestimation of the weak lensing signal and of the average central halo mass at the lowest mass threshold of the z_1 bin (see Figs. <ref> and <ref>) is caused by the correspondingly larger best fit value of the scatter σ_ log M. In redshift bin z_2, however, their best fit scatter value is in line with our expectation, and the underestimate of the weak lensing signal is instead driven by the lower value of M_ min preferred by the clustering signal when combined with the abundance.
While we use the same cosmology as I20, we note that differences in their modelling ingredients may have a non-negligible impact on this comparison. To be more specific, I20 use a large scale halo bias function and a halo mass function calibrated on different simulations, that is, the bias from <cit.> but the halo mass function of Sheth & Tormen <cit.>. I20 also use a different halo mass-concentration relation than we do, although we have an extra free parameter c_ fac which can absorb such differences. Similarly, M22 use a halo mass definition based on the mass within the virial radius, M_ vir. We convert M_ vir to M_200m using colossus <cit.> when making a direct comparison of halo masses.
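For reference, such a conversion can be done with colossus as sketched below. The concentration-mass model named here is an assumption chosen for illustration (the exact choice is not critical at the precision of this comparison), and masses are in the h-scaled units used by colossus.

```python
from colossus.cosmology import cosmology
from colossus.halo import mass_adv

cosmology.setCosmology('planck15')                # an assumed Planck-like cosmology

def mvir_to_m200m(M_vir, z, c_model='diemer19'):
    """Convert M_vir (Msun/h) to M_200m assuming an NFW profile and a c(M) model."""
    M200m, R200m, c200m = mass_adv.changeMassDefinitionCModel(
        M_vir, z, 'vir', '200m', profile='nfw', c_model=c_model)
    return M200m

# e.g. mvir_to_m200m(1e13, 0.43)
```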
§ CHALLENGES AND FUTURE WORK: PHOTOMETRIC DATA AND ASTROPHYSICAL INFERENCES
We have inferred the galaxy stellar mass - halo mass scaling relation from a joint analysis of the abundance and weak lensing signal in this paper. The inferred relation assumes the lens galaxy properties given by the photometric redshift and stellar mass estimates from the template fitting method MIZUKI <cit.>. However, it is important to note that the presence of statistical or systematic errors in photometric redshifts can propagate into the selection of our sample, as well as into the measured abundances and the weak lensing signal, in a non-linear manner. As discussed in Section <ref>, the errors in photometric redshifts are expected to positively correlate with those in the stellar masses <cit.>. Such correlated errors, even if they are only statistical, can cause a number of lower mass galaxies to scatter into our stellar mass threshold sample and some of the high mass galaxies to scatter out. Similar effects can also be at play at the boundaries of our redshift bins. In the presence of such errors, the stellar mass bin thus does not represent a true stellar mass threshold. Moreover, such errors are also expected to affect the true average redshift of the sample, as well as the abundance measurements. The abundance measurements are further complicated by issues in the determination of the volume associated with the galaxies due to the quality cuts on photometric redshift applied in I20. If such volume determination uncertainties affected galaxies at all stellar masses equally, one could correct for them by comparing against prior determinations of the abundance in the literature. In general, however, the selection effects are often much more nuanced than simple volume misestimates, and are not entirely straightforward to correct.
Even though we explored the effect of the photometric redshifts of the source galaxies on the weak lensing signal, these measurements can also be affected by the uncertainties in the photometric redshifts of the lens galaxies. The lens galaxy redshift is used to assign the projected comoving impact parameter at which the light from a background galaxy passes the lens before it reaches us. The critical surface density estimate used to convert the shear into the excess surface density also depends upon the redshift of the lens galaxies. Thus, the interpretation of the weak lensing signal can also be affected by the use of photometric redshifts for the lenses. Therefore, each of the above mentioned measurements can impact the inferred HOD parameters in a variety of ways.
Given these uncertainties, we refrain from making direct comparisons between these results and those present in the literature on the stellar mass-halo mass relation. We restrict our comparison to studies which use the same sample of galaxies and make similar assumptions, in order to allow a fair comparison between their results and ours. In order to enable comparison with the broader literature, in a future study we will use a forward modelling approach and ascertain the level of systematic bias by making use of mock galaxy catalogs that have the errors in photometric redshifts expected from the photometry of the HSC survey.
The Subaru HSC survey can map out galaxies to even higher redshifts than those considered in this study. However, beyond the median redshift of the survey we become increasingly sensitive to potential systematic biases due to the use of photometric redshift estimates for the source galaxies. We also expect magnification bias to start to play a role by correlating the lens and source number densities, especially for galaxies that lie at the steep end of the luminosity function <cit.>. Eventually, once we have control over all the above systematics, it will become interesting to model the clustering, the lensing and the abundance of galaxies as a function of stellar mass in multiple redshift bins.
§ SUMMARY AND CONCLUSIONS
We have investigated the galaxy-dark matter connection and its evolution using samples of photometric galaxies from the HSC survey with stellar mass thresholds spanning 8.6 ≤ log M_* ≤ 11.2 in the redshift ranges [0.30,0.55) and [0.55,0.80). Our results are based on the weak lensing signal measured for these samples using the Year 1 catalog of source galaxy shapes from the HSC survey, together with measurements of the abundance of galaxies. We carry out a Bayesian analysis to infer the posterior distribution of the parameters that describe the halo occupation distribution of these galaxies. The key results and findings of our study are summarized as follows.
* We present high signal-to-noise ratio measurements (SNR ranging from 30-50) of the lensing signals in both redshift bins for all of our samples. We show the robustness of the measured lensing signals with multiple null tests, such as the tangential and cross components of the lensing signal around random points and the cross component around lens galaxies. We also find that the boost factors for our signals are statistically insignificant and that the biases due to the use of photometric redshifts for the source galaxies are ∼ 1% and ∼ 4% for redshift bins 1 and 2, respectively. These systematics tests indicate that our measurements are not heavily affected by contamination from either the foreground or the background galaxies.
* We fit these weak lensing measurements together with the abundances of galaxies with a simple 5 parameter HOD model per sample in the context of the Planck cosmological model and show that the model provides a reasonable description of the data. We infer the posterior distribution of these parameters given the measurements.
* We show that the weak lensing measurements and the abundances on their own constrain the central HOD parameter log M_ min and the scatter σ_ log M in a degenerate manner. However, these degeneracy directions are different for each of the observables, and hence a combination of the two helps break the degeneracy. We also show the impact of using the different abundances available in the literature, and find that the average halo masses of central galaxies are well constrained irrespective of which abundances are used.
* We find that the average halo masses of central galaxies increase with the stellar mass threshold of the subsample for both redshift bins 1 and 2. Comparison between these scaling relations at the two different redshifts shows very mild evolution, if any.
* We also compare our results with the constraints obtained by the study of I20, who jointly model the abundance and clustering of the same sample of galaxies. We show that the best fit model of I20 underestimates the observed lensing signals, by 10%-30% in the 1-halo central term regime and by 50%-60% at larger radii, for mass thresholds up to 10^10, and overestimates the lensing signal for more massive threshold samples. Nevertheless, we find excellent agreement between the constraints on the average halo masses of central galaxies for thresholds up to 10^10, while the results from I20 overestimate these average halo masses for the higher threshold samples.
* We also compare our results with the subhalo abundance matching method of M22, which uses the abundance and clustering measurements of I20 as constraints. We find that their models, which use a monotonic relation between V_ peak or M_ prog of the subhalos and the stellar mass of galaxies, are able to predict lensing signals consistent with our measurements for stellar mass thresholds up to 10^10. Both models fail to explain the lensing signal, especially within the 1-halo regime, for the higher stellar mass threshold samples.
* Finally, we find that the satellite fractions predicted by our fiducial analysis are consistent with the clustering study of I20 given the statistical errors. However, we find that the models from M22 based on subhalo abundance matching predict satellite fractions up to 15% higher than our constraints.
The paper demonstrates the great potential of large imaging surveys such as the HSC to infer the galaxy-dark matter connection over a large range of redshifts using multiple observational probes such as the abundance of galaxies, their clustering and their galaxy-galaxy lensing signal. An accurate inference of the true underlying scaling relations between stellar mass and halo mass, however, will depend upon quantitative estimates of how the photometric redshift errors in the lens galaxy population affect the underlying stellar mass threshold samples. Assessment of the extent of such biases will be subject of our work in the near future.
§ ACKNOWLEDGEMENTS
We thank Divya Rana, Amit Kumar, Preetish K. Mishra, Susmita Adhikari, Arka Banerjee, Supranta S. Boruah and Priyanka Gawade for useful discussions and their comments on the draft version of the paper. We also thank our research advisory committee members Aseem Paranjape, Masamune Oguri and Anupreeta More for useful discussions on the current project along with comments on the draft version of this paper. NC is thankful for the financial support provided by the University Grants Commission (UGC) of India. He is also thankful to IUCAA for its amicable environment and the hospitality offered to students.
We acknowledge the use of Pegasus, the high performance computing facility at IUCAA. The calculations in part were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
This work was supported in part by JSPS KAKENHI Grant Numbers JP23K13145 (SI), JP19H00677, JP21H05465, JP22K03644 (S. Masaki) and JP21K13956 (DK).
TO acknowledges support from the Ministry of Science and Technology of Taiwan under Grant Nos. MOST 111-2112-M-001-061- and NSTC 112-2112-M-001-034- and the Career Development Award, Academia Sinica (AS-CDA-108-M02) for the period of 2019 to 2023.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
We also thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1, DR2 and DR3 in the Skies & Universes site for cosmological simulations. The Uchuu simulations were carried out on Aterui II supercomputer at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu Data Releases efforts have made use of the skun@IAA_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).
We have used <cit.> to create degeneracy plots and <cit.> to create triangle/corner plots.
§ DATA AVAILABILITY
The weak lensing signal measurements after applying all correction as mentioned in Section <ref> for our stellar mass threshold lens samples along with the measured covariance matrices and abundances are made available in a public github repository, <https://github.com/0Navin0/galaxy_halo_connection_in_HSC>. This repository also contains our modelling constraints from Tables <ref> and <ref> along with additional relevant plots for interested readers.
Measuring the Cosmic X-ray Background accurately
Hancheng Li^1 ([email protected]), Roland Walter^1 ([email protected]), Nicolas Produit^1 ([email protected]), Fiona Hubert^2

^1 Department of Astronomy, University of Geneva, 16 Chemin d'Ecogia, Versoix, CH-1290, Switzerland
^2 EPF-Ecole d'ingénieur-e-s, 55 Av. du Président Wilson, Cachan, FR-94230, France
Synthesis models of the diffuse Cosmic X-ray Background (CXB) suggest that it can be resolved into discrete sources, primarily Active Galactic Nuclei (AGNs). Measuring the CXB accurately offers a unique probe to study the AGN population in the nearby Universe. Current hard X-ray instruments suffer from the time-dependent background and cross-calibration issues. As a result, their measurements of the CXB normalization have an uncertainty of the order of ∼15%. In this paper, we present the concept and simulated performances of a CXB detector, which could be operated on different platforms. With a 16-U CubeSat mission running for more than two years in space, such a detector could measure the CXB normalization with ∼1% uncertainty.
August 12, 2023
§ INTRODUCTION
The diffuse Cosmic X-ray Background (CXB) was discovered during a rocket flight <cit.> together with Sco X-1. A Moon observation with ROSAT <cit.> showed the bright side of the Moon reflecting Solar X-rays whereas the dark side revealed a shadow of the CXB, demonstrating its extrasolar origin <cit.>. The high isotropy of the CXB, measured by Uhuru, confirmed an extragalactic origin <cit.>.
Thanks to the focusing capabilities of soft X-ray instruments and to the deep field surveys performed with XMM-Newton <cit.> and Chandra <cit.>, up to 93% of the extragalactic CXB below 10keV has been resolved into point-like Active Galactic Nuclei (AGNs) <cit.>. The sensitivity of current hard X-ray instruments is not sufficient to resolve the CXB at hard X-rays, where the bulk of its emission lies. As a result, only bright AGNs could be detected, and the fraction of the CXB resolved at ∼30keV remains below 39% <cit.>.
An additional Galactic component is observed below ∼2keV <cit.>, with an extension of ∼10^∘ around the Galactic plane <cit.>, and is explained by the emission of faint cataclysmic variables <cit.>, thermal emission from ionized gas in the local bubble beyond the neutral interstellar medium <cit.>, and the scattering of X-rays on illuminated diffuse gas clouds <cit.>.
X-ray instruments suffer from time-dependent backgrounds and sensitivity changes caused by solar activity (at soft X-rays), cosmic particles, instrument aging and inaccurate in-orbit calibration, resulting in systematic uncertainties. As a result, the CXB intensities measured by different experiments disagree with each other by up to 10%-15%, even though the CXB spectral shape is rather well constrained.
Extrapolating the stacked AGN spectra that account for the CXB spectrum below 10keV does not reproduce the CXB at hard X-rays. A large population of Compton thick sources (where the density of obscuring material, N_H > 10^24 cm^-2, is high enough for Compton scattering to dominate) has been hypothesized to fill the gap <cit.>; however, such a population was detected neither in deep X-ray surveys <cit.> nor at hard X-rays <cit.>.
The comparison of the observed CXB with the results of synthesis models puts constraints on the fraction of AGNs with different degrees of obscuration and reflection <cit.>. The relation between reflection and absorption contains important information on the average AGN inner geometry. The systematic error in the CXB normalization is however a major source of uncertainty in these models.
To measure the CXB accurately, the instrumental background modelling and the energy and detection efficiency calibrations must be performed meticulously. In this context, the MVN (Monitor Vsego Neba) instrument was proposed by the Space Research Institute of the Russian Academy of Sciences <cit.>. A cylindrical multi-layer collimator protects an inner spectrometer from off-axis photons up to a certain energy threshold, and a rotating obturator periodically shields the aperture of the collimator to modulate the Field of View (FoV), allowing the CXB flux to be discriminated from the other components and backgrounds.
In this work, we present an improved detector concept which mainly consists of an array of collimated spectrometers with rotating obturators on top of the apertures. The detector could be operated on a space station, on a small satellite or even on a CubeSat. The science goals of this detector are discussed in Sect. <ref>; the instrument concept, calibration, integration and simulated performance are presented in Sects. <ref>-<ref>, respectively. The summary and discussion are given in Sect. <ref>.
§ SCIENCE GOALS
§.§ Isotropic CXB measurement
The CXB flux is roughly isotropic over the sky <cit.>. This isotropic flux can be measured, on average, by an instrument able to collect X-ray photons from different fields (preferably blank sky) while filtering out the non-X-ray background and known discrete sources. Previous measurements, however, remain affected by large uncertainties on the CXB spectral shape and normalization, which ultimately limit our knowledge of the accretion power and of the fraction of heavily obscured AGNs in the Universe.
The CXB flux and spectrum have been measured by ASCA/SIS <cit.>, ROSAT <cit.>, RXTE/PCA <cit.>, XMM-Newton <cit.>, Chandra <cit.> and Swift/XRT <cit.> at soft X-rays, and by HEAO1 <cit.> and more recently by Beppo-SAX <cit.>, INTEGRAL <cit.> and Swift/BAT <cit.> at hard X-rays. The measurements are in agreement at a level of 10-15% throughout the full energy range <cit.>.
The method to perform the CXB synthesis has been developed in the seminal works of <cit.> and improved in the following works. To synthesize the CXB spectrum three main “ingredients” must be known: an accurate description of the broadband spectra of the various AGN classes (including reflection effects), the X-ray luminosity function (XLF) which gives the number density of AGN per comoving volume as a function of luminosity and redshift, and the distribution of AGN as a function of absorbing column density (N_ H), the so-called N_ H distribution.
AGN spectra are provided as spectral templates for the various AGN classes: previous works set the parameters of their spectral templates to values representative of observations <cit.> or of models <cit.>. <cit.> used spectral templates derived by stacking Swift/BAT data for various types of AGN. The AGN X-ray luminosity function is derived from deep surveys <cit.> and hence is available only in the soft X-ray range. The N_ H distribution can be derived from data <cit.> (although such data are biased against the detection of highly absorbed sources) or from models <cit.>.
The XLF is directly proportional to the CXB normalization, while the spectral templates and the N_ H distribution are mostly determined by the CXB spectral shape, in particular by the spectral slopes below and above 30keV and the break in between. An uncertainty of 10%-15% on the spectrum of the CXB at soft and hard X-rays corresponds to an uncertainty of up to 0.1 on the spectral slopes and of 0.2 on the spectral break. Considering that spectral templates derived from Swift/BAT share a similar calibration with the BAT-derived CXB spectrum, the above uncertainties could be divided by a factor of two when such templates are used to compare the CXB synthesis to the BAT observations.
The contribution of Compton thick (CTK) sources to the CXB flux at 30keV is estimated to be 4%-6% based on the CXB synthesis. As this is less than half of the 10%-15% uncertainty, this contribution is highly dependent on the CXB spectral uncertainties. A determination of the CXB spectral shape at the few-percent level is therefore required to better estimate the fraction of CTK AGNs.
Our proposed detector attempts to measure the CXB spectrum in the 10-100keV band with 1% precision, to significantly improve the study of the above topics. This instrument could also extend the CXB measurement up to 1MeV (although with less accuracy). Current data are scarce in this range and an improvement is of interest. The integrated Hawking radiation (HR) of primordial black holes (PBHs) with different mass scales could leave a signature in the isotropic CXB at energies above 100keV. <cit.> and <cit.> derived upper limits to the PBH density using the diffuse X/γ-ray background. Extending the CXB spectral measurement to 1MeV, together with a better estimate of the contribution of AGNs derived from lower energies, will significantly improve these limits <cit.>.
§.§ Anisotropic CXB measurement
The CXB is characterized by small-scale fluctuations and a large-scale anisotropy. The CXB is a superposition of numerous discrete sources, including undetected/unresolved faint ones, whose numbers statistically fluctuate from field to field on small scales. This imprints fluctuations on the CXB fluxes with a scale of Ω_e^-0.5 S_ min^0.25 <cit.>, where Ω_e is the field size for evaluation, and S_ min is the detection threshold of the instrument. Detecting such fluctuations is out of the scope of our instrument; however, a dipolar anisotropy of the CXB is expected. First, our proper motion with respect to the distant Universe, where the bulk of the CXB is emitted, should result in a dipolar anisotropy with an amplitude of 0.42% due to the Compton-Getting (CG) effect <cit.>, in a direction matching that of the Cosmic Microwave Background (CMB) dipole <cit.>. Second, the distribution of AGNs in the local Universe could produce a small additional anisotropy, the amplitude of which has been estimated at 0.23-0.85% by <cit.>. HEAO-1 A2 <cit.> and RXTE <cit.> have measured a dipole amplitude of < 3% and of ∼2% respectively after subtraction of the CG effect. <cit.> further found that most of the observed CXB anisotropy (CG effect subtracted) can be attributed to low-luminosity AGNs. Better observational constraints are needed to improve these estimates. This requires however an accuracy of 0.1-0.5% on the average CXB intensity.
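As a rough cross-check of the quoted dipole amplitude, the CG amplitude for a power-law spectrum can be estimated as (3+α)β, where α is the energy spectral index and β=v/c. The velocity and spectral index used in the sketch below are standard literature values assumed for illustration, not numbers taken from this proposal.

```python
# Illustrative Compton-Getting amplitude estimate (assumed inputs: the CMB
# dipole velocity v ~ 370 km/s and a CXB energy spectral index alpha ~ 0.4).
v_kms = 370.0            # km/s, solar motion w.r.t. the CMB (assumption)
c_kms = 2.998e5          # km/s, speed of light
alpha = 0.4              # CXB energy index, I_E ~ E^-alpha (assumption)

beta = v_kms / c_kms
cg_amplitude = (3.0 + alpha) * beta
print(f"CG dipole amplitude ~ {100 * cg_amplitude:.2f}%")   # ~0.4%
```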
§.§ Secondary goal
Gamma-Ray Bursts (GRBs) are among the most energetic explosions since the Big Bang, with an average rate of one event per day at cosmological distances <cit.>. Assuming an isotropic GRB occurrence, we do not expect to see serendipitous GRBs in the FoV of our instrument (only one in 5 years). However, the prompt emission of short Gamma-Ray Bursts (sGRBs) is generally hard, with peak energies reaching ∼490keV <cit.>. Photons of such sGRBs could therefore penetrate the platform structure/instrument housing and reach the detector. Missions like INTEGRAL and Insight-HXMT have successfully employed their internal anticoincidence detectors to monitor GRBs with a nearly omnidirectional FoV <cit.>. Based on the log N-log P relationship (where P is the 50-300keV peak flux) of the 4th BATSE GRB Catalog <cit.> and the anticipated performance of our instrument (see Sect. <ref>; on average 36cm^2 of off-axis effective area and a background level of 76.5cnts/s in 100-300keV), we expect a detection rate of roughly ∼4 GRBs (about 3 long and 1 short) per year with a detection significance >5σ. The brightest GRB(s) could be localized with an accuracy of a few degrees thanks to the direction-dependent instrumental response <cit.>.
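The significance quoted above follows from simple counting statistics; a minimal sketch of the estimate, with a hypothetical burst peak flux and integration window (neither taken from the catalog), is:

```python
import math

# Rough signal-to-noise estimate for a GRB seen off-axis, using the
# instrumental numbers quoted above (36 cm^2 effective area, 76.5 cts/s
# background in 100-300 keV). Peak flux and duration are hypothetical.
area_cm2 = 36.0       # average off-axis effective area
bkg_rate = 76.5       # cts/s in 100-300 keV
peak_flux = 2.0       # ph/cm^2/s (hypothetical burst)
duration = 10.0       # s, integration window (hypothetical)

signal = peak_flux * area_cm2 * duration
background = bkg_rate * duration
significance = signal / math.sqrt(signal + background)
print(f"~{significance:.0f} sigma")   # compared against the 5 sigma threshold
```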
Luminous gamma-ray pulsars like the Crab pulsar and PSR B1509-58 will be detected each time they pass through the FoV, so they can be monitored over the duration of the mission. The timing resolution (at the μ s level) and the effective area will be good enough to detect their pulsations. Even when the Crab pulsar is out of the FoV, high-energy photons can penetrate the collimators and be detected using phase folding, even in the case of a very unfavorable signal-to-background ratio, as demonstrated with the POLAR detector <cit.>. Detection of the pulsar(s) can be used to calibrate the absolute time stamping and to perform pulsar navigation <cit.>. Dedicated simulations will be performed later to evaluate such secondary goals.
§ INSTRUMENT CONCEPT
§.§ Overview
The diffuse nature of the CXB makes it difficult to be separated from the instrument background. Bright sources, additional sources of diffuse background and instrumental background need to be filtered out. An accurate method to distinguish these different components is crucial.
Experiments like ASCA/SIS <cit.>, Beppo-SAX <cit.> and RXTE/PCA <cit.> obtained deep exposures of high-galactic-latitude blank-sky regions and measured the CXB by subtracting an equivalent exposure obtained on the dark side of the Earth. INTEGRAL <cit.> and Swift/BAT <cit.> used Earth occultations, during which the Earth transits the field of view and modulates the CXB and other components. HEAO-1 <cit.> used an onboard obturator to separate the CXB from other components.
The detector proposed here utilizes passive collimators and onboard obturators to model the fluxes registered in the detector. Collimators (see Sec. <ref> & <ref>), with an energy-dependent transparency, are used to block surrounding emission and reduce the contamination from outside the FoV. Obturators (Sec. <ref>) will periodically shield the aperture of the collimators and introduce a modulation of the in-FoV components to separate them from the instrumental background. To drive these obturators, a compact wheel system has been developed (Sect. <ref>).
The science goals mentioned in Sect. <ref> require covering an energy range of 10-511 keV with a sensitive spectrometer (Sec. <ref>). We propose to use a new generation of CeBr_3 scintillating crystals (Sect. <ref>), which have been studied to assess their suitability as spectrometer modules for space missions <cit.>.
Overall, the detector consists of an array of collimated spectrometers with rotating obturators on top of the collimators. Hereafter, we present the concepts and mechanical designs of the aforementioned components of the detector.
§.§ Collimator
The collimator is a cylindrical tube made of four metal layers, which are Aluminium (Al), Tin (Sn), Copper (Cu) and Al from outer to inner. This Al-Sn-Cu-Al multi-layer shields off-axis X-ray photons. The innermost layer emits K-shell fluorescence (< 2keV) below the energy threshold of the detector. With thicknesses of 1-1-1-2mm for the Al-Sn-Cu-Al layers (the effective thickness is larger by a projection factor of 1/sinθ, where θ is the incident angle), photons below ∼100keV are expected to have a <0.1% transparency through the collimator tube. The length and inner diameter of the tube are 250 mm and 25 mm respectively, which result in a FoV of around 26 square degrees (Full Width at Half Maximum, FWHM).
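A rough geometric estimate reproduces the quoted FoV; the exact value depends on the full aperture response, so the calculation below is only indicative.

```python
import math

# Simple geometric estimate of the FoV for a tube of length L = 250 mm and
# inner diameter d = 25 mm: a detector element at the bottom centre sees the
# sky within a half-angle ~ arctan((d/2)/L), giving a solid angle ~ pi*theta^2
# in the small-angle approximation.
L_mm, d_mm = 250.0, 25.0
half_angle_deg = math.degrees(math.atan(0.5 * d_mm / L_mm))   # ~2.9 deg
fov_sq_deg = math.pi * half_angle_deg**2                      # ~26 square degrees
print(f"half-angle ~ {half_angle_deg:.1f} deg, FoV ~ {fov_sq_deg:.0f} sq. deg")
```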
§.§ Spectrometer
§.§.§ Crystal
The CeBr_3 crystal will be used as the scintillation material of the spectrometer module, with a diameter and thickness of 25 mm and 20 mm respectively. CeBr_3 provides improved detection performance with a high light output (60 photons per keV), excellent energy resolution (∼ 4% at 662keV, FWHM) and a fast fluorescence decay time (17 ns) <cit.>, which makes it suitable for SiPM readout. With a thickness of 20mm, the CeBr_3 will keep ∼100% detection efficiency up to 200keV and still reach a 60% efficiency at 511keV, the emission line from a β^+ decay calibration source. <cit.> and <cit.> have tested CeBr_3 crystals with increasing proton fluences ranging from 10^9 to 10^12 protons cm^-2: the light yield was barely affected and the measured FWHM energy resolution at 662 keV degraded by only 0.1%. This demonstrates a high radiation hardness against proton-induced damage, which guarantees a stable performance without severe degradation in space.
The CeBr_3 crystal will be sealed in a 0.1mm thick Aluminum frame with an internal optical reflector on the top and sides. The top will additionally be coated by a 0.1 mm thick beryllium window to stop low-energy charged particles. The resulting low energy threshold for X-ray photons will be ∼10keV. At the bottom, an optical interface will connect the crystal to a Quartz window and to the Silicon photomultiplier.
§.§.§ Silicon PhotoMultiplier
Silicon PhotoMultipliers (SiPMs) are increasingly used in space-borne detectors. They have a large Photo Detection Efficiency (PDE), which, combined with the high light yield of the crystal, lowers the energy threshold. Other properties, such as good quantum efficiency, low bias voltage, compactness, robustness and insensitivity to magnetic fields, relax the detector design constraints.
A disadvantage of SiPMs is their high dark current, which is strongly temperature-dependent and increases as the radiation dose accumulates in space. The temperature will therefore have to be kept as low as possible, so that its variation can be measured and calibrated. The radiation damage is more problematic and unavoidable: it results in an increase of the energy threshold of the detector. The SiPM degradation was studied[<https://indico.cern.ch/event/1093102>] and constrained with an irradiation campaign <cit.>, showing that the CeBr_3 crystal efficiently protects the SiPM and that an order of magnitude increase of the dark current can be expected after one year in space, which would result in a threshold increase of a few keV per year at around -20^∘C <cit.>.
§.§.§ Electronics
Suitable electronics have already been developed for the 64-channel SiPM array of POLAR-2 <cit.>. A Front-End Electronics (FEE) board (based on CITIROC-1A ASICs) equips each polarimeter module of POLAR to power (with temperature regulation) and read out the SiPM channels, define the trigger logic, and communicate with a Back-End Electronics (BEE) board connected behind it. The BEE board takes care of the power supply, overall data acquisition, communication with the platform and mission control, etc.
§.§.§ Bottom Lead chamber
A Lead (Pb) chamber will additionally be placed at the bottom to reduce the effective radiation dose received by the scintillators and SiPMs from the back side of the instrument. An 8-mm thick Pb layer is able to stop 100 keV photons at a level of more than 99%.
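The quoted absorption can be checked with a standard exponential-attenuation estimate; the mass attenuation coefficient below is an assumed tabulated value, not a number given in this proposal.

```python
import math

# Transmission through 8 mm of lead at 100 keV: T = exp(-(mu/rho) * rho * x).
mu_over_rho = 5.5    # cm^2/g at 100 keV for Pb (assumed tabulated value)
rho_pb = 11.35       # g/cm^3
x_cm = 0.8           # 8 mm

transmission = math.exp(-mu_over_rho * rho_pb * x_cm)
print(f"transmitted fraction ~ {transmission:.1e}")   # absorption well above 99%
```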
§.§ The obturator and wheel system
§.§.§ Obturator
The obturator consists of the same sandwich of layers as the collimator but twice as thick (10mm, 0.75kg). There are two obturators configured as a propeller, which counter-rotate to compensate each other's angular momentum (moment of inertia of 0.002kg m^2 each). The rotation rate of the obturators is set to one rotation per minute (rpm); over this timescale the space environmental background does not change significantly, so the background levels registered in the tubes during the transit of the obturator are constant. An angular encoder is included in the wheel system to constantly record the rotation phase of the obturators.
A schematic drawing of the obturator is shown in Fig. <ref>. Each obturator is individually symmetrical to avoid shifting the center of mass. The opening angles of the inner and outer sectors are set to 75^∘ and 45^∘, such that there are always a few tubes closed by the obturators. Furthermore, the closures by one or two obturator(s) offer a chance to study the induced radioactivity of the obturator and collimator (similar layers) in space, thus helping to understand the internal background of the instrument.
§.§.§ Wheel system
The obturators will be driven by a compact wheel system, housed in a crankcase with a length of 150 mm and a diameter of 40 mm. The combination of the obturators and wheel system is mushroom-like: its stem will be inserted at the center of the collimator array (see Sect. <ref>), such that the obturators cover the apertures of the collimator tubes. Fig. <ref> shows a cutaway view of the wheel system. The gearing provides two counter-rotating coaxial shafts from the unidirectional shaft driven by the motor. A model of this complete system has been built using a Maxon motor[<https://www.maxongroup.net.au/maxon/view/content/overview-BL-DC-Motoren>] and underwent some preliminary space qualification. Tagged radioactive sources will be attached beneath the lower obturator (see Sect. <ref>). The wiring of the tagged sources requires standard slip rings in the wheel system for signal connection.
Mechanisms with moving parts are challenging to operate in space. First, the gears should be robust and, especially, the ball bearings have to overcome the grouping, vibration and frictional torque problems described in <cit.>. Second, the motor is required to work in vacuum and over a wide temperature range <cit.>. Thermal-vacuum tests need to be carried out for space qualification to reduce the risk of failure. In the worst-case scenario, if the propeller is stopped, the detector loses the flux modulation, but the partially or entirely closed and open tubes still enable background modeling.
§.§ Thermal constraints
In order to reduce the impact of the SiPM dark noise and to maintain a relatively low energy threshold, a cooling system is needed to reach, ideally, below -20^∘C. The solar radiation power received by the instrument is orbit dependent, as the Sun incident angle varies. The average power over one year can be estimated at 1361W m^-2 beyond the Earth's atmosphere <cit.>, and the planetary albedo at about 35% <cit.>. The whole instrument, with the dimensions given in Sect. <ref>, will receive an average heating power of 13W. Taking into account ∼30W of additional heating from the electronics, the total cooling power needed is <43W. The tubes of the instrument (with a total area of 0.21m^2 for 18 tubes) could serve as passive radiators for dissipation. The calculated thermal equilibrium temperature of the instrument is 252K (-21^∘C), with several degrees of variation because of the modulation of the solar power along a 90-minute orbit.
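The quoted equilibrium temperature follows from a grey-body radiative balance; a minimal sketch, assuming an emissivity of ~0.9 for the tube surfaces (a value not given in the text), is:

```python
# Radiative equilibrium of the tube radiators: P = eps * sigma * A * T^4.
sigma = 5.67e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
power_w = 43.0       # W, total heat load (solar + electronics)
area_m2 = 0.21       # m^2, radiating area of the 18 tubes
emissivity = 0.9     # assumed grey-body emissivity

T_eq = (power_w / (emissivity * sigma * area_m2)) ** 0.25
print(f"equilibrium temperature ~ {T_eq:.0f} K")   # ~252 K, i.e. about -21 C
```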
§ CALIBRATION
Accurate knowledge of the detector's spectral properties, including the Energy-Channel (E-C) relationship (linearity), energy resolution, detection efficiency, etc., is crucial to measure the CXB normalization. Some of the calibrations will be carried out on ground and some need to be performed during operations.
The absolute detection efficiencies (versus energy and time) depend on the geometry (FoV), the photopeak-to-Compton ratio, the light yield of CeBr_3, the efficiency of scintillation-light transmission and collection, the quantum efficiency of the SiPM, etc. The geometry needs to be characterized very precisely on the flight model before launch and is not expected to change afterwards. During the mission, gradual changes of some parameters are expected: i) the SiPM performance varies with temperature, but this can be adjusted (the SiPM biasing voltage is automatically adjusted at all times by the electronics); ii) the SiPM performance can change due to radiation damage; iii) all the CeBr_3 characteristics can change due to aging and radiation damage. The latter two will have an impact on the E-C relationship, energy resolution and detection efficiency, which need to be monitored in orbit.
§.§ On ground
Before launch, the detector will be fully assessed and its spectral responses measured using standard radioactive sources, such as ^241 Am (half-life 432.2 years, specific activity 126.8 GBq/g) and ^22 Na (2.6 years, 1580 TBq/g). The α decay of ^241 Am generates gamma rays at 13.9, 17.8, 26.4 and 59.6keV. ^22 Na undergoes β^+ decay, emitting a positron that immediately annihilates and releases two 511keV gamma-ray photons, in coincidence with a 1.2 MeV photon. The energy-channel relationship and the energy resolution at the different energy peaks will be obtained by measuring the channel spectra of these sources with the detector.
The absolute detection efficiencies versus energy can be measured by recording the single and coincident photon counts of tagged radioactive sources peaking at different energies. For example, the decays of a ^241 Am source, continuously monitored by a tagged scintillator, can be marked by the triggering of the scintillator above a relatively high threshold (α decay at MeV energies). A detector module can then be placed near the source to collect photons. The expected number of photons arriving at the detector is directly proportional to the counts of the tag scintillator, by the ratio of the collecting solid angle to the allowed emission angle (4 π assuming an isotropic decay). Finally, the ratio of the coincident photon counts to this expected number gives the efficiency. The accuracy of the efficiency calibration is purely limited by statistics, and is approximately one over the square root of the coincident counts. At 511keV, an alternative calibration approach is to use the ^22 Na source placed between two detector modules; the single and coincident events then provide a set of equations to resolve the unknown efficiencies of both detector modules.
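A minimal numerical sketch of this tag-and-coincidence estimate (all counts and the solid-angle fraction below are hypothetical placeholders) is:

```python
import math

# Tagged-source efficiency: eff = N_coincident / (N_tagged * Omega/4pi),
# with a purely statistical relative error of 1/sqrt(N_coincident).
n_tagged = 1.0e6          # decays flagged by the tag scintillator (hypothetical)
solid_angle_frac = 0.03   # fraction of 4*pi covered by the module (hypothetical)
n_coincident = 1.8e4      # photons detected in coincidence (hypothetical)

n_expected = n_tagged * solid_angle_frac
efficiency = n_coincident / n_expected
rel_error = 1.0 / math.sqrt(n_coincident)
print(f"efficiency ~ {efficiency:.2f} +/- {100 * rel_error:.1f}% (relative)")
```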
Finally, a synchrotron radiation facility could generate a collimated broadband X-ray beam with a precisely known spectrum. An X-ray continuum (covering 10-100keV) irradiating the instrument allows the spectral response to be calibrated as a function of energy, cross-checking and continuously filling the gaps between the discrete energies provided by the radioactive sources.
A Monte-Carlo simulator of the instrument based on Geant4 <cit.> will be developed to verify all the aforementioned calibration procedures. Additionally, as the thermal conditions will change in orbit, we will evaluate all the temperature-regulated parameters, in particular those of the SiPM and electronics. The degradation of the SiPM and of the crystal under radiation dose will be evaluated through irradiation with protons (the major background in space, see <ref> for details).
§.§ In orbit
There are two specific challenges in orbit. The first is to cross-check all the parameters calibrated on ground. The second is to monitor the aging and degradation of the detectors due to radiation. To cope with them, two tagged ^241 Am sources will be attached beneath the lower obturator (as shown in Fig <ref>, one source will be attached to each of the inner and outer sectors of the lower obturator, passing over the centers of the inner and outer tubes). As the calibration sources periodically transit through the FoV, the calibrations can be done independently for every detector tube (they should be uniform at the beginning). The first in-orbit calibration will validate the spectral response characterized on ground.
The E-C relationship and energy resolution can be calibrated in the same way in orbit as on ground (one energy point is sufficient). This will be done constantly to monitor the aging and degradation of the detector. The E-C relationship is closely linked to the performance of the SiPMs, which are sensitive to radiation dose and temperature. Their influence will be checked periodically. The energy resolution is not anticipated to change dramatically, as CeBr_3 has a high radiation hardness (Sect. <ref>). Even a slight change will be monitored in the form of time-dependent responses for the scientific analysis.
The detection efficiency calibration in orbit is slightly different from the one on ground, since the ground configuration is static whereas the in-orbit one evolves. The rotation phase of the obturator allows the exposure of the calibration sources to be binned for every tube. The detection efficiency of every tube can then be calibrated as was done on ground (<ref>).
A 200Bq ^241 Am source will create about 10^4 events per day in the photopeak region of the 59.6keV line of the tagged spectrum of each tube (0.99 tagging efficiency, 0.36 branching ratio of the decay at 59.6keV, 0.031 solid angle ratio, 0.13 effective exposure time, 0.6 photopeak efficiency, 0.8-1 deadtime ratio). Statistically, the efficiency at 59.6keV of each tube can therefore be measured at the 1% level each day. Other lines of ^241 Am can be used to calibrate the efficiency, energy-channel relationship and energy resolution at lower energies, but with a lower accuracy limited by statistics. Besides, the induced instrumental background will include a 511keV feature in the accumulated spectra. This line can also be used for calibrating the E-C relationship and the energy resolution. Furthermore, since the inner and outer detector tubes have different radiation acceptance, the evolution of their correlated spectral properties will allow a further characterization of their degradation.
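Multiplying out the factors quoted above reproduces the daily statistics; the sketch below takes the deadtime ratio at 0.9, in the middle of the quoted 0.8-1 range.

```python
import math

# Daily 59.6 keV photopeak statistics for a 200 Bq 241Am calibration source.
activity_bq = 200.0
seconds_per_day = 86400.0
factors = [0.99,   # tagging efficiency
           0.36,   # branching ratio at 59.6 keV
           0.031,  # solid angle ratio
           0.13,   # effective exposure time
           0.6,    # photopeak efficiency
           0.9]    # deadtime ratio (taken within the quoted 0.8-1 range)

events = activity_bq * seconds_per_day
for f in factors:
    events *= f
print(f"~{events:.0f} photopeak events/day")                           # ~1.3e4
print(f"daily statistical accuracy ~ {100 / math.sqrt(events):.1f}%")  # ~1%
```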
The selection of coincident events can be done offline or in real time with a coincidence time resolution of 100 ns, to reduce telemetry. We expect 3 random coincident events per day per detector, which is completely negligible compared to the 10^4 real coincidences.
These calibrations will be performed every day to monitor aging and radiation dose. Additionally, the broadband spectrum of the Crab will allow the shape of the energy response of the instrument to be cross-checked and monitored. The Geant4 simulator will model the Crab observations with respect to the on-ground calibrations and allow the gradual change of the spectral response to be modeled.
§ DETECTOR INTEGRATION AND PLATFORM REQUIREMENTS
As the space environmental background is highly orbit-dependent (Sect. <ref>), a suitable orbit is important to achieve the science goals. The best orbit is an equatorial one that never enters the South Atlantic Anomaly (SAA), but such flight opportunities are not frequent. The Sun-synchronous orbit, very popular for CubeSats, is not well suited for our detector as it passes frequently through the SAA and suffers from continuous solar radiation.
A typical low Earth orbit (LEO) has an altitude of 300-500km and an inclination of a few tens of degrees. Lower orbits (<400km) are preferable, but the payload would deorbit rapidly; 500km is the minimum altitude for a free flyer without propulsion. In the following we assume an orbital altitude of 500km and an inclination of 42^∘. Since our instrument needs to continuously scan the sky, pointing towards the zenith is required.
As the mission platform will constrain the available resources (size, mass and power consumption) and determine the performance of the detector, we considered a 12U CubeSat implementation and a station-based modular design.
§.§ CubeSat version
We have integrated our detector into a 12-Unit (U) CubeSat payload, translating to 2*2*3 U (one U corresponds to 10*10*10 cm^3). Such a configuration is shown in Fig. <ref>, where the transparent pink box symbolizes 12 U as a reference. It allows the placement of 18 tubes (each includes a collimator and a spectrometer), all of which have the same dimensions: 28cm in height and 35mm in external diameter.
The tubes are placed along a dual-ring structure. The 6 inner tubes will be shaded by the 12 outer ones and receive less background from the sides, allowing the systematics of the background to be studied. Compact electronics (Sect. <ref>) will be placed beneath the wheel system in the center to power and communicate with the motor and spectrometers; the total power consumption is ∼30W.
The necessary platform modules (e.g., power, communication and orbit control) could occupy the corners and sides, or another 4 U. This very conservative configuration with 18 tubes is chosen to provide substantial redundancy against systematic effects. A smaller number of tubes (as low as 4, as seen in Fig. <ref>) is possible if only statistical effects are considered. A smaller CubeSat (4U, 8U) could probably reach the scientific goals at the cost of redundancy, but could require longer exposures to understand the systematic effects due to the space environment.
§.§ Station-based version
Another configuration, for a station-based platform, is shown in Fig. <ref>. It is based on four groups, each containing four tubes that are twice the size of the CubeSat version, with a height of 500mm and an inner diameter of 50mm. This provides a collecting area four times bigger than that of the CubeSat version. Four groups of obturators are placed on top, each including a symmetrical sector (opening angle of 90^∘). Neighboring obturators counter-rotate to compensate for the angular momentum. The wheel system is simplified as it only needs unidirectional rotation.
§.§ Technical Readiness
The readout electronics are expected to be ready in early 2023 by adapting those of POLAR-2. A prototype of the detector unit will be built soon after, to characterize its spectral response and detection efficiency. The overall design will then be finalized, integrating a test model of the detector for comprehensive qualification tests (motor, thermal, vibration, radiation and geometry) lasting until the end of 2023. As soon as a launch opportunity is determined, a flight model will be built, integrated with the platform interfaces and tested. Overall, a flight model can be expected to be delivered within 24-36 months.
§ SIMULATED PERFORMANCE
§.§ Detector spectral responses
In this section, we consider the CubeSat configuration (Sec. <ref>) to evaluate the performance (the station-based version (Sec. <ref>) has better statistics thanks to a bigger photon collecting area). The spectral responses and the background are generated using the Geant4 simulation package (version 10.6.2, <cit.>). Geant4 integrates comprehensively the relevant physics processes.
The mass model of a single tube unit (obturator, collimator, and spectrometer) and the related physical processes have been implemented in Geant4. Monoenergetic photons (covering 10-1000keV with reasonable steps) have been injected with different incident angles and shield coverages of the obturators. Fig. <ref> shows the effective areas for open and closed tubes as a function of incident angle θ and energy. The right panel indicates that the on-axis low-energy threshold of the closed tubes is ∼100keV. The left panel indicates that the opening angle of the open tubes below 100keV is ∼6^∘. The energy response matrices are also obtained as a function of incident angle and energy.
Spectral responses for the full detector (made of 18 tubes) were also calculated. Tubes of the inner ring are shielded by those of the outer ring, resulting in a non-uniform (or more direction-dependent) response. The instrument registers more triggers on the outward-facing sides of the outer-ring tubes, which provides a localization capability for luminous transient sources like GRBs. Such an approach has been widely applied by multiple instruments (e.g., we have done so for POLAR <cit.>) and will be developed for the instrument presented here in the future. A conservative estimate of the localization accuracy is a few degrees. For the CXB, in contrast, the instrument response remains symmetrical about the zenith.
§.§ Expected count rate
The Burst Alert Telescope (BAT) onboard the Swift observatory has successfully carried out an all-sky hard X-ray survey at 14-195keV and detected 1632 sources in 105 months <cit.>. The source catalog is available online[<https://heasarc.gsfc.nasa.gov/W3Browse/all/xray.html>]. We used that catalog to predict the expected count rates from the sources. The CXB count rate was calculated by convolving spectral templates <cit.> with the simulated detector responses.
The derived count rate (10-100keV) distribution over the sky is shown in Fig. <ref> in units of the CXB rate (0.129 counts/s/tube), with a bin size corresponding to the FoV of a tube. On average, one source contributes per sky bin. The many bright point sources on the galactic plane could easily be filtered out.
We also plot the count spectra of the CXB and of five luminous sources in Fig.<ref>, for a 2-year exposure time and a CubeSat mission with 18 tubes (note that only the open tubes, i.e. half of them, contribute to these counts). The CXB will be detected with about 100 times more counts than the brightest sources (which could easily be excluded).
§.§ Background estimation
We assume that the instrument will fly in a Low-Earth Orbit (LEO), where the space environmental background is the main concern. In our background studies, we have included the standard Shielding Physics List in Geant4, including electromagnetic physics, hadronic physics and radioactive decay physics (delayed background).
§.§.§ Cosmic rays
When approaching the Earth, low-energy cosmic rays (CRs) are deflected by the geomagnetic field. Higher-energy ones interact with the atmosphere, creating secondary particles that produce noise in the detector. The South Atlantic Anomaly (SAA), caused by the offset of the Earth's magnetic center from the geographic center, features a weaker geomagnetic field and an enhanced particle environment (mainly protons and electrons) <cit.>.
Even though the instrument will be switched off in the SAA to protect the detector, the delayed background originating from the radioactivity induced by the SAA particles must be taken into account. A light flying unit (a single CubeSat) will develop less induced background than, e.g., an instrument on a space station.
We estimated the primary particle background spectra, based on the works by <cit.> and <cit.>, for an orbit at a typical altitude of 500km and inclination of 42^∘. The majority of the primary CRs are protons with a spectrum that can be approximated by a power-law <cit.>. We used an orbital averaged spectrum extracted from ESA's SPace ENVironment Information System [<https://www.spenvis.oma.be>] (SPENVIS). Other significant particles are electrons and positrons and we used a spectral model developed by <cit.>, often used for background simulation of space instruments.
The spectra of secondary particles depend on the geomagnetic latitude <cit.>. By averaging results obtained by AMS-01 <cit.>, an averaged spectrum of secondary protons could be extracted. The spectra of secondary electrons and positrons are provided by <cit.>. Only a very small fraction of them could be mistaken for photons, as the low-energy ones are stopped by the beryllium window and cannot reach the detector.
§.§.§ Albedo gamma rays
The Earth's atmosphere radiates albedo gamma rays outwards, produced either by the reflection of the CXB (a minor contribution) or by the interaction between CRs and the atmosphere. Their spectra are provided by <cit.>. Since they come from below the instrument, their influence is highly correlated with the angle between the Earth and the FoV, and this can be used to filter them out. These photons are rather soft and only the high-energy ones will be able to go through the tube shield.
§.§.§ Delayed background
The delayed background originates from the radioactivity mainly induced by the trapped particles in the SAA. Geant4 is able to simulate the amount of induced radiation affecting the instrument from the fraction of time spent in the SAA. The radioactive isotope production and decay properties can be characterized. The level of delayed background increases with time and gradually saturates. This background will develop characteristic lines (mainly at 511 keV). The materials of the detector and of the platform have to be chosen to minimize radiogenic content.
§.§.§ Background rates
The spectra of the different components aforementioned, either adapted from the AMS measurements or from SPENVIS, are shown in Fig. <ref>. They are all considered to be isotropic inputs to the instrument simulator, the mass model of which is constructed by taking into account the geometry defined in Sect. <ref>. The anticipated background rates are shown in Fig. <ref>. Table <ref> lists the rate of different background components in two energy ranges.
§.§ CXB measurement
We evaluate here the accuracy of the CXB normalization determination in the 10-100 keV energy band for an increasing number of tubes and exposure time.
For data selection purposes the sky is divided into 3072 pixels (healpix binning with N_ side = 16). Sky pixels with |b|<10^∘, covering the Magellanic clouds, or including (non-AGN) sources of the Swift-BAT catalog <cit.> are excluded. Most remaining sources at high galactic latitude are nearby CVs, stars and unidentified sources, which are the bright components of the GRXE population. Pixels with a count rate significantly above that of their neighbors will also be excluded. As shown in Fig. <ref>, globally about 23% of the sky will be disregarded.
Photons from unresolved galactic X-ray sources and from the instrumental background need to be subtracted from the observations. The statistical accuracy of the CXB measurement can be estimated as P_ sta = (√(C+B)+U)/C, where C, B and U represent the total numbers of counts detected in all open tubes from all the considered sky pixels from the CXB, from the instrumental background, and from the remaining galactic X-ray sources together with the unresolved Galactic Ridge X-ray Emission (GRXE), respectively.
The GRXE is made of numerous unresolved faint sources (coronally active binaries, cataclysmic variables, etc.) and extends up to a galactic latitude of |b|=20^∘. It has a constant spectral shape, softer than the CXB, and an intensity (measured by INTEGRAL) scaling with the stellar mass <cit.> and in particular with the COBE/DIRBE Zodi-Subtracted Mission Average Maps at 4.9μ m [<https://lambda.gsfc.nasa.gov/product/cobe/dirbe_zsma_data_get.html>]. Using this map for the non-discarded pixels and the GRXE spectral template, and convolving with the response of the detector, indicates U∼ 0.291 counts/s/tube in total for 10^∘ < |b| < 20 ^∘, which corresponds on average to 0.47% of the CXB rate. The GRXE count rate is very low (< 0.067 count/s/tube) for |b|>20^∘, allowing a very good separation of the CXB and of the GRXE when fitting the full sky.
As the detector tubes switch between open and closed states thanks to the obturators, the instrumental background (about B∼ 4.0 counts/s/tube in the range 10-100 keV, see Sec. <ref>) can be determined and its evolution precisely modeled (at least to within 0.1% in the 10-100keV band).
Fig. <ref> gives the resulting statistical accuracy of the CXB normalization as a function of the number of detector modules and mission time.
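For illustration, P_ sta can be evaluated with the rates quoted above; the effective number of open tubes, the usable sky fraction and the residual GRXE level used below are simplifying assumptions of this sketch.

```python
import math

# Statistical accuracy P_sta = (sqrt(C + B) + U) / C for a 2-year mission.
cxb_rate = 0.129       # cts/s per open tube (10-100 keV)
bkg_rate = 4.0         # cts/s per tube, instrumental background
grxe_frac = 0.005      # residual GRXE taken as ~0.5% of the CXB counts (assumption)

n_open_tubes = 9                           # half of the 18 tubes open (assumption)
exposure_s = 2 * 365.25 * 86400 * 0.77     # two years, ~77% of the sky kept (assumption)

C = cxb_rate * n_open_tubes * exposure_s
B = bkg_rate * n_open_tubes * exposure_s
U = grxe_frac * C
P_sta = (math.sqrt(C + B) + U) / C
print(f"P_sta ~ {100 * P_sta:.2f}%")
```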
The CXB normalization measurement also depends on systematic uncertainties in the various spectral components considered above and on the absolute flux calibration of the instrument. As shown in Sect. <ref>, the latter will be calibrated in orbit by tagged radioactive sources with an accuracy reaching 1% every day. Accumulating calibration events over time improves this accuracy for longer effective exposure times. Ideally, a 100-day exposure to the calibration source will result in 10^6 calibration events, i.e. a statistical accuracy of 0.1%. In orbit, the spectral responses (especially the absolute detection efficiency) could change because of the SAA or Solar activity, resulting in time-dependent responses. The dual-ring placement of the detector tubes offers a window to study the systematics of such changes day by day and tube by tube. Changes in the response on orbital timescales, for instance following SAA passages, will be calibrated. Data with significant changes (in both the time interval and the tube array) will be discarded, resulting in a loss of exposure time (or collecting area). Therefore the accuracy of the absolute calibration is limited by the accuracy that can be achieved on ground and by the systematic variations in orbit. The very large redundancy provided by the 18 tubes, and a data set representing orders of magnitude more events than what the statistical error requires, will allow us to study and correct all the possible new systematics due to the space environment. The final uncertainty will therefore be limited by the ground calibration performance, which we estimate at 1%.
Given these simulations, we expect that a CubeSat mission with 18 detector tubes operating for more than two years would allow the CXB normalization to be measured with an accuracy of ∼1%.
In the 100-1000 keV band, the subtraction of the instrumental background would leave a 2% uncertainty on the CXB normalization for an 18-tube mission running for two years (based on Table <ref>). However, the collimators gradually become transparent above 100 keV; therefore all the known non-AGN sources and unresolved galactic components would contaminate the CXB measurement, each category introducing an uncertainty comparable to that of the instrumental background. The detection efficiency for the 511 keV line will be very well calibrated on ground using a tagged β^+ source. In space, however, each detector will develop a very strong 511 keV background line due to activation. This line can be used for energy calibration, but it will be very difficult to maintain an absolute efficiency calibration of every tube at high energy. The uncertainty on the CXB normalization in the 100-1000 keV range is therefore estimated at a level of ∼10%.
§ SUMMARY AND DISCUSSION
The CXB is made of the superposition of the emission of celestial sources, mostly AGN. Numerous space missions have measured the CXB spectrum, and a few of them have specifically surveyed the AGN population. As a result, the CXB is nearly (>93%) resolved into point-like AGN at soft X-rays (below 10 keV), a percentage that decreases with increasing energy. An accurate measurement of the CXB spectrum and normalization is crucial to study the population of AGN, their obscuration, reflection and average spectra, and ultimately the history of accretion in the Universe. The uncertainty on the CXB normalization (∼15%) is one of the main sources of difficulty affecting the CXB modeling.
We propose a detector to determine the CXB normalization with a per cent level accuracy. The detector consists of an array of tubes with collimated spectrometers and rotating obturators modulating the signals and allowing to precisely extract the CXB photons from the background. We present here a preliminary design of the detector which could be accommodated on various platforms (16-U CubeSat, small satellite, space station).
The 16-U CubeSat option has been used to simulate the instrument performance with Geant4, taking into account the point sources and the instrumental background to assess their respective count rates and the resulting accuracy on the CXB normalization. In two years, the CubeSat mission is able to measure it with an accuracy of ∼1% in the range 10-100keV, ultimately limited by the quality of the calibration performed before launch. This is a significant improvement compared to the current measurements.
§ DECLARATIONS
§.§ Author's Contribution
R. Walter and N. Produit initialized this project. H. Li performed the simulation and analysis. F. Hubert contributed to the design of the wheel system and CAD drawings. All authors contributed to the instrumental design, manuscript drafting, and reviewing.
§.§ Funding
We acknowledge the support of the Swiss National Science Foundation.
§.§ Conflicts of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
§.§ Consent to participate
Not applicable.
§.§ Consent for publication
Not applicable.
§.§ Code availability
Not applicable.
|
http://arxiv.org/abs/2307.04541v2 | 20230710130942 | Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis | ["Mingyuan Liu", "Lu Xu", "Jicong Zhang"] | cs.CV | ["cs.CV", "cs.AI"] |
Mingyuan Liu, Lu Xu, Jicong Zhang.
^1School of Biological Science and Medical Engineering,
Beihang University, Beijing, China
^2Hefei Innovation Research Institute, Beihang University, Hefei, Anhui, China
{liumingyuan95, xulu181221, jicongzhang}@buaa.edu.cn
Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis
Mingyuan Liu1 Lu Xu1 Jicong Zhang1,2,*
August 12, 2023
======================================================================
Fueled by deep learning, computer-aided diagnosis has achieved huge advances.
However, out of controlled lab environments, algorithms could face multiple challenges.
Open set recognition (OSR), as an important one, states that categories unseen in training could appear in testing.
In medical fields, this can arise from incompletely collected training datasets and from constantly emerging new or rare diseases.
OSR requires an algorithm to not only correctly classify known classes, but also recognize unknown classes and forward them to experts for further diagnosis.
To tackle OSR, we assume that known classes could densely occupy small parts of the embedding space and the remaining sparse regions could be recognized as unknowns.
Following it, we propose Open Margin Cosine Loss (OMCL) unifying two mechanisms.
The former, called Margin Loss with Adaptive Scale (MLAS), introduces angular margin for reinforcing intra-class compactness and inter-class separability, together with an adaptive scaling factor to strengthen the generalization capacity.
The latter, called Open-Space Suppression (OSS), opens the classifier by recognizing sparse embedding space as unknowns using proposed feature space descriptors.
Besides, since medical OSR is still a nascent field, two publicly available benchmark datasets are proposed for comparison.
Extensive ablation studies and feature visualization demonstrate the effectiveness of each design.
Compared with state-of-the-art methods, OMCL achieves superior performance, measured by ACC, AUROC, and OSCR.
§ INTRODUCTION AND RELATED WORK
Deep learning achieves great success in image-based disease classification.
However, the computer-aided diagnosis is far from being solved when considering various requirements in real-world applications. As an important one, open set recognition (OSR) specifies that diseases unseen in training could appear in testing <cit.>. It is practical in the medical field, caused by the difficulties of collecting a training dataset exhausting all diseases, and by the unpredictably appearing new or rare diseases. As a result, an OSR-informed model should not only accurately recognize known diseases but also detect unknowns and report them. Clinically, these models help construct trustworthy computer-aided systems. By forwarding unseen diseases to experts, not only the misdiagnosis of rare diseases could be avoided, but an early warning of a new disease outbreak could be raised.
There are many fields related to OSR, but they are essentially different.
In classification with reject options <cit.>, samples with low confidence are rejected to avoid misclassification. However, due to its closed-set nature, unknown classes could still be misclassified confidently <cit.>.
Anomaly detection, novelty detection, and one-class classification <cit.> aim at recognizing unknowns but do not categorize the known classes.
In outlier detection or one-/few-shot learning <cit.>, samples of novel classes appear in training.
In zero-shot learning <cit.>, semantic information about novel classes can be accessed. For example, zebras, an unknown class, could be identified given the idea that they are striped horses, together with abundant samples of horses and stripe patterns.
In contrast, OSR knows nothing about novel classes and should achieve high classification accuracy on the known classes while recognizing unknowns, as illustrated in Fig. <ref> a). Due to limited space, some reviews <cit.> are recommended for more comprehensive conceptual distinctions.
Most OSR research focuses on natural images, while medical OSR is still in its infancy. In medical fields, representative work like T3PO <cit.> introduces an extra task to predict the input image augmentation, and samples with low probabilities are regarded as unknowns.
CSL <cit.> uses generative adversarial neural networks (GAN) to generate proxy images and unknown anchors.
As for natural images, a line of work tries to simulate unknowns using adversarial or counterfactual samples generated by GANs <cit.>. However, whether unknown patterns can be generated by learning only from the known is unclear.
Some works learn descriptive feature representations. They enhance the feature separation between unknowns and knowns, or assume that the known features follow certain distributions so that samples far from the distributional centers can be recognized as unknowns <cit.>.
In contrast, this work categorizes densely distributed known features and recognizes sparse embedding space as unknown, regardless of the specific distribution.
This work tackles OSR under the assumption that known features could be assembled compactly in feature embedding space, and remaining sparse regions could be recognized as unknowns.
Inspired by this, the Open Margin Cosine Loss (OMCL) is proposed, merging two components: Margin Loss with Adaptive Scale (MLAS) and Open-Space Suppression (OSS).
The former enhances known feature compactness and the latter recognizes sparse feature space as unknown.
Specifically, MLAS introduces the angular margin to the loss function, which reinforces the intra-class compactness and inter-class separability. Besides, a learnable scaling factor is proposed to enhance the generalization capacity.
OSS generates feature space descriptors scattered across a bounded feature space. By categorizing them as unknowns, it opens the classifier, recognizing sparse feature space as unknown and suppressing the overconfidence of the known classes.
An embedding space example is demonstrated in Fig. <ref> b), showing that OMCL learns more descriptive features and a more distinct known-unknown separation.
Considering that medical OSR is still a nascent field, besides OMCL, we also propose two publicly available benchmark datasets. One consists of microscopic images of blood cells, and the other of optical coherence tomography (OCT) images of the eye fundus. OMCL shows good adaptability to different image modalities.
Our contributions are summarized as follows.
Firstly, we propose a novel approach, OMCL for OSR in medical diagnosis. It reinforces intra-class compactness and inter-class separability, and meanwhile recognizes sparse feature space as unknowns.
Secondly, an adaptive scaling factor is proposed to enhance the generalization capacity of OMCL.
Thirdly, two benchmark datasets are proposed for OSR. Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. The superiority over state-of-the-art methods indicates the effectiveness of our method and the adaptability of OMCL to different image modalities.
§ METHOD
In Section <ref>, the open set problem and the formulation of the cosine Softmax are introduced. The two mechanisms, MLAS and OSS, are elaborated in Sections <ref> and <ref> respectively, followed by the overall formulation of OMCL in Section <ref>.
§.§ Preliminaries
Problem setting:
Both closed set and open set classifiers learn from the training set 𝒟_train={(x_i, y_i)}_i=1^N with N image-label pairs (x_i, y_i), where y_i∈𝒴={1, 2, ..., C} is a class label.
In testing, the closed set testing data 𝒟_test shares the same label space 𝒴 with the training data. In the open set problem, however, an unseen class y_i=C+1 could appear in testing, i.e. y_i∈𝒴_open={1, 2, ..., C, C+1}.
Cosine Loss:
The cosine Softmax is used as the basis of the OMCL. It transfers feature embeddings from the Euclidian space to a hyperspherical one, where feature differences depend merely on their angular separation rather than spatial distance.
Given an image x_i, its vectorized feature embedding z_i, and its label y_i, the derivation progress of the cosine Softmax is
S_{\cos}
= \underbrace{\frac{e^{W_{y_i}^{T} z_i}}{\sum_{j=1}^{C} e^{W_j^{T} z_i}}}_{\text{conventional form}}
= \frac{e^{\lVert W_{y_i}\rVert\, \lVert z_i\rVert \cos(\theta_{y_i,i})}}{\sum_{j=1}^{C} e^{\lVert W_j\rVert\, \lVert z_i\rVert \cos(\theta_{j,i})}}
= \underbrace{\frac{e^{s\cdot \cos(\theta_{y_i,i})}}{\sum_{j=1}^{C} e^{s\cdot \cos(\theta_{j,i})}}}_{\text{cosine form}}
where W_j denotes the weights of the last fully-connected layer (the bias is set to 0 for simplicity). ∥W_j∥=1 and ∥z_i∥=s are fixed by L2 normalization, with s named the scaling factor. cos(θ_j,i) denotes the cosine of the angle between W_j and z_i. By doing so, the direction of W_j can be regarded as the prototypical direction of class j, as shown in Fig. <ref> a). Samples with large angular differences from their corresponding prototype are penalized, and the class-wise prototypes are pushed apart in the angular space.
Compared with the standard Softmax, the cosine form has a more explicit geometric interpretation, promotes more stable weight updates, and learns more discriminative embeddings <cit.>.
Moreover, the L2 normalization constrains features to a bounded feature space, which allows us to generate feature space descriptors for opening the classifier (further discussed in Section <ref>).
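A minimal PyTorch-style sketch of these cosine logits is given below; the class name, feature dimension and the initial value of the learnable scale are illustrative choices, not specifications from the paper.

```python
import torch
import torch.nn.functional as F

# Cosine-Softmax logits: L2-normalize both features and class weights so that
# the logits reduce to s * cos(theta_{j,i}).
class CosineClassifier(torch.nn.Module):
    def __init__(self, feat_dim, num_classes, s_init=16.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s = torch.nn.Parameter(torch.tensor(s_init))   # adaptive scaling factor

    def forward(self, z):
        cos_theta = F.linear(F.normalize(z, dim=1),            # ||z_i|| -> 1
                             F.normalize(self.weight, dim=1))  # ||W_j|| -> 1
        return self.s * cos_theta                              # s * cos(theta_{j,i})

# Standard cross-entropy on these logits corresponds to the cosine form above.
logits = CosineClassifier(feat_dim=512, num_classes=4)(torch.randn(8, 512))
loss = F.cross_entropy(logits, torch.randint(0, 4, (8,)))
```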
§.§ Margin Loss with Adaptive Scale (MLAS)
MLAS serves three purposes.
1) By applying angular margin, the intra-class compactness and the inter-class separability are strengthened.
2) The threshold can represent the potential probability of the unknowns, which not only prepares for the open set setting but also encourages more confident probabilities for the known classes.
3) A trainable scaling factor is designed to strengthen the generalization capacity.
MLAS is:
S_{MLAS}
= \frac{e^{s\cdot(\cos(\theta_{y_i,i})-m)}}{e^{s\cdot(\cos(\theta_{y_i,i})-m)} + e^{s\cdot t} + \sum_{j=1,\, j\neq y_i}^{C} e^{s\cdot \cos(\theta_{j,i})}}
m, t, and s respectively denote margin, threshold, and learnable scaling factor, with corresponding geometric interpretation demonstrated in Fig. <ref> b).
By using the angular margin, the decision boundary could be more stringent.
Without it, the decision boundary is cos(θ_1,i)>cos(θ_2,i) for the i-th sample of class 1.
It becomes cos(θ_1,i)>cos(θ_2,i)+m when using the margin, which leads to stronger intra-class compactness. Moreover, the angular similarities with other classes are punished in the denominator to increase inter-class separability.
The threshold t could be regarded as an extra dimension that prepares for unknown classes. Given the conventional input of Softmax as [q_i^1, q_i^2, ..., q_i^C]∈ℝ^C, ours could be understood as [q_i^1, q_i^2, ..., q_i^C, t]∈ℝ^C+1. Since t is added, the class-wise output q_i^c before Softmax is forced to have a higher value to avoid misclassification (at least larger than t). It reinforces more stringent learning and hence increases the feature compactness in the hyperspherical space.
A small s makes the predicted distribution more uniform, while a large s makes it collapse to a point mass.
In this work, s is learnable, with a learning rate 0.1× that of the model. This theoretically offers a stronger generalization capacity across datasets; experimentally, s is observed to converge to different values in different data trials and to boost performance.
LMCL <cit.> and NMCL <cit.> are the works most similar to ours. However, from the task perspective, these designs were proposed for closed-world problems. From the method perspective, our OSS mechanism is designed to tackle OSR by leveraging generated pseudo-unknown features for discriminative learning. Moreover, an adaptive scaling factor is introduced to increase generalization.
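The MLAS term can be written as an ordinary cross-entropy over modified cosine logits; the sketch below (function name and tensor shapes are illustrative) subtracts the margin from the target-class cosine and appends the threshold as an extra logit.

```python
import torch

# MLAS logits: (batch, C) cosine similarities -> (batch, C+1) logits where the
# target-class cosine is shifted by the margin m and a fixed threshold t acts
# as an extra "unknown" channel. Cross-entropy on these logits with the known
# label y_i gives the S_MLAS term.
def mlas_logits(cos_theta, labels, s, m=-0.1, t=0.1):
    one_hot = torch.zeros_like(cos_theta).scatter_(1, labels.unsqueeze(1), 1.0)
    margined = cos_theta - m * one_hot                 # cos(theta_{y_i,i}) - m
    thresh = torch.full_like(cos_theta[:, :1], t)      # extra channel fixed at t
    return s * torch.cat([margined, thresh], dim=1)
```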
§.§ Open-Space Suppression (OSS)
OSS generates feature space descriptors covering the bounded feature space. By categorizing them into an extra (C+1)-th class, samples falling in sparse feature space can be recognized as unknown and the overconfidence of the known classes is suppressed.
OSS selects points scattered over the entire feature space, named descriptors, which represent pseudo-unknown samples. Different from existing works that generate pseudo-unknowns by learning from known samples, OSS guarantees that the whole space is considered when simulating potential unknowns. By competing with the known features, regions with densely distributed samples are classified as known, while the sparse space, represented by the descriptors, is recognized as unknown.
In this work, the corresponding descriptor set, with M samples, is 𝒟_desc={(z_i, C+1)}_i=1^M, where z_i ∈𝕌[-s,s]^d subject to ∥z_i∥=s. 𝕌[-s,s] denotes random continuous uniform distribution ranges between -s to s, and d is the dimension of feature embeddings.
s is trainable and the descriptors are dynamically regenerated during training.
Fig. <ref> c) demonstrates the geometric interpretation. During training, descriptors are concatenated with the training samples at the input of the last fully-connected layer, to equip the last layer with the discrimination capacity of known and unknown samples. The OSS is
S_{OSS} = \frac{e^{s\cdot t}}{e^{s\cdot t} + \sum_{j=1}^{C} e^{s\cdot \cos(\theta_{j,i})}}
where t and s follow the same definitions as in MLAS.
The most similar existing approach, AL <cit.>, attempts to reduce misclassification by abandoning ambiguous training images. Differently, we focus on OSR and exploit a novel discriminative loss with feature-level descriptors.
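A minimal sketch of the descriptor sampling (the helper name and the choice M equal to the batch size are illustrative) is:

```python
import torch
import torch.nn.functional as F

# Feature-space descriptors: M pseudo-unknown points drawn uniformly in
# [-s, s]^d and rescaled onto the hypersphere of radius s, all labelled as the
# extra class C+1 (index C).
def sample_descriptors(M, d, s, num_known_classes):
    z = (torch.rand(M, d) * 2.0 - 1.0) * s                 # z_i ~ U[-s, s]^d
    z = F.normalize(z, dim=1) * s                          # enforce ||z_i|| = s
    labels = torch.full((M,), num_known_classes, dtype=torch.long)
    return z, labels
```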
§.§ Open Margin Cosine Loss (OMCL)
OMCL unifies MLAS and OSS into one formula, which is
L_{OMCL} = -\frac{1}{N+M}\sum_{i=1}^{N+M}\Big[\, \mathbb{I}_i \log(S_{\cos}) + \lambda\, \mathbb{I}_i \log(S_{MLAS}) + \lambda\, (1-\mathbb{I}_i) \log(S_{OSS}) \,\Big]
𝕀_i equals 1 if the i-th sample is training data, and 0 if it belongs to the feature space descriptors. λ is a weight factor. Since the output of channel C+1 is fixed at t, no extra weights W_C+1 are trained in the last fully-connected layer; as a result, OMCL does not increase the number of trainable weights of the neural network. During testing, just as in other works <cit.>, the maximum probability over the known classes is taken as the indicator of unknowns, where a lower maximum known-class probability indicates a higher possibility of being unknown.
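Putting the pieces together, one OMCL training step could look like the hedged sketch below; it reuses the hypothetical helpers `CosineClassifier`, `mlas_logits` and `sample_descriptors` from the earlier sketches and replaces the exact 1/(N+M) normalization by per-term batch means.

```python
import torch
import torch.nn.functional as F

# One OMCL training step (sketch). `feats` are backbone embeddings of known
# samples, `labels` their class ids in {0, ..., C-1}; lam is the weight lambda.
def omcl_loss(classifier, feats, labels, lam=0.5, m=-0.1, t=0.1):
    s = classifier.s
    desc, desc_labels = sample_descriptors(feats.size(0), feats.size(1),
                                           s.item(), classifier.weight.size(0))
    z = torch.cat([feats, desc.to(feats.device)], dim=0)
    cos_theta = F.linear(F.normalize(z, dim=1), F.normalize(classifier.weight, dim=1))

    n = feats.size(0)
    # Known samples: cosine cross-entropy plus the margin/threshold (MLAS) term.
    loss_cos = F.cross_entropy(s * cos_theta[:n], labels)
    loss_mlas = F.cross_entropy(mlas_logits(cos_theta[:n], labels, s, m, t), labels)
    # Descriptors: pushed towards the extra class C+1 through the OSS term.
    open_logits = torch.cat([s * cos_theta[n:],
                             (s * t) * torch.ones_like(cos_theta[n:, :1])], dim=1)
    loss_oss = F.cross_entropy(open_logits, desc_labels.to(feats.device))
    return loss_cos + lam * loss_mlas + lam * loss_oss
```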
§ RESULT
§.§ Datasets, Evaluation Metrics, and Implementation Details
Two datasets are adapted as new benchmarks for evaluating the OSR problem. Following protocols for natural images <cit.>, half of the classes are selected as known and the remainder as unknown. Since the grouping affects the results, it is randomly repeated K times, leading to K independent data trials. The average results over the K trials are used for evaluation. The specific groupings are listed in the supplementary material, so that future works can follow them for fair comparisons.
BloodMnist contains 8 kinds of individual normal cells with 17,092 images <cit.>. Our setting is based on the closed set split and preprocessing from <cit.>. Classes are selected in 5 rounds (K=5). In each trial, images belonging to the 4 chosen classes are used for training and closed-set evaluation, while test images belonging to the other 4 classes are used for open set evaluation.
OCTMnist has 109,309 optical coherence tomography (OCT) images <cit.>, preprocessed following <cit.>. Among the 4 classes, 1 is healthy and the other 3 are retinal diseases. In the data trial splitting, the healthy class is always in the known set, which is consistent with real circumstances, and the number of trials is 3 (K=3).
Metrics: Following previous works <cit.>, accuracy (ACC_c) validates the closed set classification. The Area Under the Receiver Operating Characteristic curve (AUROC_o), a threshold-independent value, measures the open set performance. The Open Set Classification Rate (OSCR_o) <cit.> considers both open set recognition and closed set accuracy, where a larger OSCR indicates better performance.
Implementation Details:
The classification network is ResNet18 <cit.>, optimized by Adam with an initial learning rate of 1e-3 and a batch size 64. The number of training epochs is 200 and 100 for BloodMnist and OCTMnist respectively because the number of training samples in BloodMnist is smaller. Margin m, threshold t, λ are experimentally set to -0.1, 0.1, and 0.5 respectively. Images are augmented by random crop, random horizontal flip, and normalization.
§.§ Comparison with State-of-the-art Methods
As demonstrated in Table <ref>, the proposed OMCL surpasses state-of-the-art models, including the typical discriminative methods baseline <cit.>, GCPL <cit.>, and RPL <cit.>; the latest generative model DIAS <cit.>; and ARPL+CS <cit.>, which hybridizes both. All methods are implemented based on their official code, and their best results after hyperparameter fine-tuning are reported. The results show that OMCL maintains the closed-set accuracy while effectively recognizing unknowns.
§.§ Ablation Studies
Effectiveness of MLAS and OSS: Table <ref> demonstrates the respective contributions of MLAS and OSS in OMCL. Each of them enhances the performance, and they work complementarily to further improve it.
Ablation Study of the Adaptive Scaling Factor: Fig. <ref> a) demonstrates the effectiveness of the adaptive scaling factor. Quantitatively, the adaptive design surpasses a fixed one. Moreover, Fig. <ref> b) shows that the scaling factor converges to different values in different training trials. Both results demonstrate the effectiveness and the generalization capacity of the adaptive design.
Ablation Study of Hyperparameters t, m, and λ: Fig. <ref> a), b), and c) respectively show the influence on results when using different hyperparameters. t and m are the threshold and angular margin, presented in equation <ref>, and λ is the trade-off parameter in equation <ref> .
Ablation Study of M: Fig. <ref> d) illustrates the effect of the number of feature space descriptors on the results. A ratio of 1:1 is experimentally validated as a proper choice, because a randomly generated descriptor could lie extremely close to a known feature point while being labeled as the novel category, which may disturb the training. If the number of descriptors is far larger than that of the training samples (the 5× case shown in Fig. <ref> d)), the performance decreases.
Feature Visualization: Fig. <ref> b) visualizes the t-SNE results of the features z of both known and unknown classes after dimension reduction. For each class, 200 samples are visualized and the perplexity of the t-SNE is set to 30. The figure shows that OMCL learns better intra-class compactness and inter-class separability. Moreover, samples of unknown classes tend to be pushed away from known classes, indicating the effectiveness of our designs.
§ CONCLUSION
In this paper, two publicly available benchmark datasets are proposed for evaluating the OSR problem in the medical field. In addition, a novel method called OMCL is proposed, under the assumption that known features assemble compactly in feature space while sparse regions can be recognized as unknowns.
OMCL combines two mechanisms, MLAS and OSS, into a unified formulation. The former reinforces intra-class compactness and inter-class separability of samples in the hyperspherical feature space, and an adaptive scaling factor is proposed to strengthen the generalization capability.
The latter opens the classifier by categorizing sparse regions as unknown using feature-space descriptors.
Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. Compared to recent state-of-the-art methods, the proposed OMCL performs favorably, as measured by ACC, AUROC, and OSCR.
|
http://arxiv.org/abs/2307.04796v1 | 20230710180007 | Towards $gg\to HH$ at next-to-next-to-leading order: light-fermionic three-loop corrections | [
"Joshua Davies",
"Kay Schönwald",
"Matthias Steinhauser"
] | hep-ph | [
"hep-ph"
] |
P3H-23-043, TTP23-024, ZU-TH 34/23
Towards gg→ HH at next-to-next-to-leading order: light-fermionic
three-loop corrections
Joshua Davies^a,
Kay Schönwald^b,
Matthias Steinhauser^c
(a) Department of Physics and Astronomy, University of Sussex,
Brighton BN1 9QH, UK
(b) Physik-Institut, Universität Zürich, Winterthurerstrasse 190,
8057 Zürich, Switzerland
(c) Institut für Theoretische Teilchenphysik,
Karlsruhe Institute of Technology (KIT),
Wolfgang-Gaede Straße 1, 76128 Karlsruhe, Germany
============================================================================================================================================================================================================================================================================================================================================================================================================================
We consider light-fermion three-loop corrections to gg→ HH using forward
scattering kinematics in the limit of a vanishing Higgs boson mass, which
covers a large part of the physical phase space. We compute
the form factors and discuss the technical challenges. The approach outlined
in this letter can be used to obtain the full virtual corrections to
gg→ HH at next-to-next-to-leading order.
§ INTRODUCTION
The simultaneous production of two Higgs bosons is a promising process to
obtain information about their self-coupling in the scalar sector of the
Standard Model and beyond. Its study will be of primary importance after the
high-luminosity upgrade of the Large Hadron Collider and thus it is important
that there are precise predictions from the theory side.
The cross section for Higgs boson pair production is dominated by the gluon-fusion
process, which is loop-induced <cit.>. Thus, at
next-to-leading order (NLO) the virtual corrections require the computation of
two-loop four-point functions with massive internal top quarks. There are
numerical results which take into account the full dependence of all mass
scales <cit.>. Furthermore,
there are a number of analytic approximations which are valid in various
limits, which cover different parts of the phase space. Particularly appealing approaches
have been presented in Refs. <cit.> where
the expansion around the forward-scattering kinematics has been combined with
the high-energy expansion and it has been shown that the full phase space can
be covered. Thus, these results are attractive alternatives to computationally
expensive purely numerical approaches.
Beyond NLO, current results are based on expansions for large top quark
masses. Results in the infinite-mass limit are available at
NNLO <cit.> and
N^3LO <cit.> and finite 1/m_t corrections have
been considered at NNLO in Refs. <cit.>.
In Ref. <cit.> the renormalization scheme dependence on the top
quark mass has been identified as a major source of uncertainty of the NLO
predictions. In general, such uncertainties are reduced after including
higher-order corrections, i.e., virtual
corrections at NNLO including the exact dependence on the top quark mass. This
requires the computation of 2→ 2 scattering amplitudes at three-loop order
with massive internal quarks; this is a highly non-trivial problem. Current
analytic and numerical methods are not sufficient to obtain results with full
dependence on all kinematic variables, as is already the case at two loops.
However, after an expansion in the
Mandelstam variable t (see
Refs. <cit.>) and the
application of the “expand and match” <cit.>
method to compute the master integrals, one
obtains semi-analytic results which cover a large part of the phase space.
Such a result allows the study of the renormalization scheme dependence at
three-loop order. In this letter we outline a path to the three-loop
calculation and present first results for the light-fermionic corrections.
Let us briefly introduce the kinematic variables describing the 2→ 2 process, with
massless momenta q_1 and q_2 in the initial state and massive momenta
q_3 and q_4 in the final state. It is convenient to introduce the
Mandelstam variables as
s = (q_1+q_2)^2 , t = (q_1+q_3)^2 , u = (q_1+q_4)^2 ,
where all momenta are incoming. For gg → HH we have
q_1^2=q_2^2=0 , q_3^2=m_H^2 , q_4^2=m_H^2 ,
and the transverse momentum of the final-state particles is given by
p_T^2 = (u t - m_H^4)/s .
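As a quick numerical cross-check of these conventions, the short Python script below builds explicit 2→2 momenta in the partonic centre-of-mass frame (all momenta incoming, so the physical outgoing momenta enter with a sign flip) and verifies s+t+u = 2m_H^2 together with the p_T^2 relation above; it merely illustrates the kinematics and is not part of the actual calculation.

import math

def dot(p, q):
    # Minkowski product with metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def mandelstam(sqrt_s, m_H, cos_theta):
    E = sqrt_s / 2.0
    k = math.sqrt(E**2 - m_H**2)               # Higgs three-momentum
    sin_theta = math.sqrt(1.0 - cos_theta**2)
    q1 = (E, 0.0, 0.0,  E)                     # incoming gluons
    q2 = (E, 0.0, 0.0, -E)
    p3 = (E,  k*sin_theta, 0.0,  k*cos_theta)  # outgoing Higgs bosons
    p4 = (E, -k*sin_theta, 0.0, -k*cos_theta)
    q3 = tuple(-x for x in p3)                 # all-incoming convention
    q4 = tuple(-x for x in p4)
    s = dot(add(q1, q2), add(q1, q2))
    t = dot(add(q1, q3), add(q1, q3))
    u = dot(add(q1, q4), add(q1, q4))
    return s, t, u, (k*sin_theta)**2

s, t, u, pT2 = mandelstam(500.0, 125.1, 0.3)
assert abs(s + t + u - 2*125.1**2) < 1e-6
assert abs((u*t - 125.1**4)/s - pT2) < 1e-6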
For Higgs boson pair production one can identify two linearly independent
Lorentz structures
A_1^μν = g^μν - (1/q_12) q_1^ν q_2^μ ,
A_2^μν = g^μν + 1/(p_T^2 q_12) ( q_33 q_1^ν q_2^μ - 2 q_23 q_1^ν q_3^μ - 2 q_13 q_3^ν q_2^μ + 2 q_12 q_3^μ q_3^ν ) ,
where q_ij = q_i· q_j, which allows us to introduce two
form factors in the amplitude
M^ab = ε_1,με_2,ν M^μν,ab = ε_1,με_2,νδ^ab X_0 s
( F_1 A_1^μν + F_2 A_2^μν)
.
Here a and b are adjoint colour indices
and X_0 = G_F/(2√(2)) × T_F α_s(μ)/(2π)
with T_F=1/2. G_F is Fermi's constant and α_s(μ) is the strong
coupling constant evaluated at the renormalization scale μ.
We write the perturbative expansion of the form factors
as
F = F^(0) + (α_s(μ)/π) F^(1)
+ (α_s(μ)/π)^2 F^(2)
+ ⋯ ,
and decompose F_1 and F_2 into “triangle” and “box” form
factors
F_1^(k) = 3 m_H^2/(s-m_H^2) F^(k)_ tri + F^(k)_ box1 ,
F_2^(k) = F^(k)_ box2 .
In this notation F^(k)_ box1 and F^(k)_ box2 contain both
one-particle irreducible and reducible contributions. The latter
appear for the first time at two-loop order; exact results for
the so-called “double-triangle” contributions can be found in <cit.>.
Analytic results for the leading-order form factors are available
from <cit.> and the two-loop triangle form factor
has been computed in
Refs. <cit.>. The main
focus of this letter is on the light-fermionic contribution to the three-loop quantities
F^(2)_ box1 and F^(2)_ box2 for t=0 and m_H=0.
Expansions around the large top quark mass limit of F^(2)_ tri,
F^(2)_ box1 and F^(2)_ box2 can be found in
Ref. <cit.> and results for F^(2)_ tri valid for all
s/m_t^2 have been computed in
Refs. <cit.>.
We decompose the three-loop form factors as
F^(2) = n_l T_F F^(2),n_l = n_l T_F (C_F F^FL + C_A F^AL) + … ,
where the ellipses stand for further colour factors which we do not consider here.
Sample Feynman diagrams contributing to F^FL and F^AL
are shown in Fig. <ref>.
In this letter we consider t=0 and m_H=0, i.e. the leading term in an
expansion around t→ 0 and m_H→ 0. This constitutes a crude approximation,
however, in a large part of the phase space it contributes a
major part of the corrections.
For example, choosing t=0 and m_H=0 at two loops (NLO),
at a transverse momentum of p_T=100 GeV the form factor F_ box1
deviates from its exact value by at most
30%, depending on the value of √(s) considered. This means that
more than two thirds of the form factor value are covered by the t=0, m_H=0
approximation. Furthermore,
we concentrate on the one-particle irreducible contributions.
We note that F_ box2 vanishes for t=0.
More details are given below in Section <ref>.
We present here results for the light-fermionic (“n_l”) terms and show that this
approach can be used to obtain the three-loop virtual corrections to gg→ HH. The
remaining contributions contain many more integral topologies and more complicated
integrals, which have to be integration-by-parts (IBP) reduced to master integrals.
In the next section we outline the techniques used for the calculations
and discuss the results in Section <ref>.
In Section <ref> we conclude and provide an outlook
for the computation of the full corrections.
§ TECHNICAL DETAILS
The basic philosophy of our calculation has already been outlined in
Ref. <cit.>, where the two-loop amplitude for gg→ HH has
been considered in the small-t and high-energy limit and it has been shown
that the combination of both expansions covers the whole phase-space. The
starting point for both expansions is the amplitude expressed in terms of
the same master integrals, which are obtained from a reduction problem involving
the dimensional variables s, t and m_t.[A Taylor expansion in
m_H in a first step eliminates the Higgs boson mass from the reduction
problem.] Using currently available tools such a reduction is not possible
at three loops.
To avoid such an IBP reduction, one can try to expand the unreduced amplitude in the
respective limit. The high-energy expansion is obtained via a complicated
asymptotic expansion which involves a large number of different regions. On
the other hand, the limit t→ 0 leads to a simple Taylor expansion which
can be easily realized at the level of the integrands.
Furthermore, the expansion around forward-scattering kinematics covers
a large part of the physically relevant phase space <cit.>.
Our computation begins by generating the amplitude with qgraf <cit.>, and then using tapir <cit.> and exp <cit.> to map the diagrams onto
integral topologies and convert the output to FORM <cit.>
notation. The diagrams are then computed with the in-house “calc”
setup, to produce an amplitude in terms of scalar Feynman integrals. These
tools work together to provide a high degree of automation. We perform
the calculation for general QCD gauge parameter which drops out once the
amplitude is expressed in terms of master integrals. This is a welcome check
for our calculation.
The scalar integrals can be Taylor expanded in m_H at this point, as done at
two loops in Refs. <cit.>,
however at three loops in this letter we keep only the leading term in this
expansion, i.e., set m_H=0.
The next step is to expand the amplitude around the forward kinematics
(t→ 0) at the integrand level. This is implemented in FORM by
introducing q_δ = q_1+q_3 in the propagators and expanding in
q_δ to the required order. Note that q_δ^2=t. After treating
the tensor integrals, where q_δ appears contracted with a loop momentum,
we need to perform a partial-fraction decomposition to eliminate linearly
dependent propagators. The partial fractioning rules are produced automatically
by tapir when run with the forward kinematics (q_3=-q_1)
specified[In an alternative approach, we have also used
LIMIT <cit.> to generate the partial fractioning rules.].
Note that although for the present publication we compute the “t=0 contribution”,
we must properly expand in q_δ to produce the amplitude to order t^0
due to inverse powers of t appearing in the projectors. These
inverse powers ultimately cancel in the final result.
This procedure yields amplitudes
for F_ box1 and F_ box2 in terms of scalar Feynman integrals
which belong to topologies which depend only on s and m_t (and not on t).
At this point the amplitudes are written in terms of 60 integral
topologies, however these are not all independent; they can be reduced to a smaller
set by making use of loop-momentum shifts and identification of common sub-sectors.
In one approach we find these rules with the help of LiteRed <cit.>,
which identifies a minimal set of 28 topologies.
In a second approach we use Feynson <cit.> to generate these maps
and end up with 53 topologies.
The difference in the number of topologies is due to LiteRed mapping
topology sub-sectors, while we used Feynson only at the top level.
When considering the full amplitude, i.e., not just the light-fermionic corrections,
only the Feynson approach is feasible for performance reasons. It is also
possible to use Feynson to find sub-sector mappings, which we will also use
when considering the full amplitude (which is written initially in terms of
522 integral topologies).
The amplitude is now ready for a reduction to master integrals using
Kira <cit.> and
<cit.>.
The most complicated integral topology took about a week on a 16-core node,
using around 500GB of memory. After minimizing the final set of master integrals
across the topologies with Kira, we are left with 177 master integrals
to compute. Comparing results obtained via the LiteRed and
Feynson topology-mapping approaches reveals one additional relation within
this set which is missed by Kira, however, we compute the set of 177
master integrals which was first identified.
To compute the master integrals, we first establish a system of differential
equations w.r.t. x=s/m_t^2. Boundary conditions are provided in the
large-m_t (x→ 0) limit: we prepare the three-loop integrals in the
forward kinematics, and pass them to exp which automates the asymptotic
expansion in the limit that m_t^2 ≫ s. This leads to three-loop vacuum
integrals, as well as products of one- and two-loop vacuum integrals with
two- and one-loop massless s-channel 2→ 1 integrals, respectively.
This expansion leads to tensor vacuum integrals, which our “calc” setup can
compute up to rank 10. We compute the first two expansion terms in s/m_t^2 for
each of the 177 master integrals. To fix the boundary constants for the differential
equations we only need about half of the computed coefficients; the
rest serve as consistency checks.
The differential equations are then used to produce 100 expansion terms for
the forward-kinematics master integrals in the large-m_t limit which we use
to compute F_ box1. Since these results are analytic in the large-m_t
limit we can compare with the results obtained in Ref. <cit.>
in the limit t=0, and find agreement.
The final step is to use the “expand and match” approach <cit.>
to obtain “semi-analytic” results which cover the whole s range. Note that
this approach properly takes into account the threshold effects at the point
s = 4m_t^2. “Semi analytic” means that our final results consist of expansions
around a set of x values, where the expansion coefficients are available
only numerically. Starting from the (analytic) expansion around x=0, each
expansion provides numeric boundary conditions to fix the coefficients of the
subsequent expansion. Each expansion is only ever evaluated within its radius
of convergence.
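To illustrate the logic of the "expand and match" procedure in isolation, the toy Python script below applies it to a single ordinary differential equation with a known solution: a truncated Taylor expansion is built around the current point, evaluated inside its radius of convergence, and the resulting value serves as the boundary condition for the next expansion. The equation, expansion depth, and step size are arbitrary choices for illustration and bear no relation to the actual system of 177 master integrals.

import math

def taylor_coeffs(y0, order):
    # Taylor coefficients of the solution of y' = y around any point,
    # given the boundary value y0 there: c_{k+1} = c_k / (k+1)
    coeffs = [y0]
    for k in range(order):
        coeffs.append(coeffs[-1] / (k + 1))
    return coeffs

def evaluate(coeffs, dx):
    return sum(c * dx**k for k, c in enumerate(coeffs))

order, step = 20, 0.5
x, y = 0.0, 1.0                      # analytic boundary condition at x = 0
while x < 3.0:
    coeffs = taylor_coeffs(y, order) # expand around the current point
    y = evaluate(coeffs, step)       # match: value at the next point
    x += step

print(y, math.exp(3.0))              # agree to machine precision in this toy case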
§ THREE-LOOP LIGHT-FERMIONIC CONTRIBUTIONS TO F_ BOX1
In this section we present the light-fermionic three-loop corrections to
the form factor F_ box1 for Higgs boson pair production. We note again
that in our t=0, m_H=0 approximation, F_ box2 vanishes; we observe this
after IBP reduction and writing the result in terms of the minimal set of master
integrals.
We obtain the renormalized form factors after the renormalization of the
parameters α_s and m_t and the wave functions of the gluons in the
initial state. We then express our results in terms of α_s^(5) and
treat the remaining infrared divergences following
Ref. <cit.>.[For more details see Section 4 of
Ref. <cit.> where analytic large-m_t results for F_ box1 and
F_ box2 have been computed at three-loop order.] This leads to finite results
for F_ box1.
In the following we present numerical results. For the top quark and Higgs
boson masses, we use the values m_t = 173.21 GeV and m_H=125.1 GeV.
Let us first discuss the one- and two-loop results. In
Fig. <ref>
we show the real part of F_ box1 for p_T=100 GeV.
In red, we show the approximation that we use at three loops, i.e., t=0 and m_H=0.
In black, we show curves with the full dependence on t and m_H. At one loop
this is the fully exact result, but at two loops this is an expansion to order
t^5 and m_H^4; we have shown in Ref. <cit.> that this provides
an extremely good approximation of the (unknown) fully exact result.
We observe that the t=0, m_H=0 curves approximate the “exact” results with an
accuracy of about 30% in the region below about √(s)=500 GeV.
For higher energies the approximation works better.
In Fig. <ref> we also show blue curves which include
expansion terms up to t^5, but still only the leading term in the m_H
expansion. These curves lie very close to the red t=0, m_H=0 curves, which
show that for p_T≈ 100 GeV it is more important to incorporate
additional terms in the m_H expansion than in the t expansion.
For higher values of p_T we expect that higher t expansion terms
become more important. This can be seen in Fig. <ref> where results
of the two-loop form factor are shown for various values of p_T.
The panels also show that a large portion of the cross section
is covered by the t=0 approximation, even for
p_T=200 GeV where, for lower values of √(s), about 50% is
captured by the red curve.
In Fig. <ref> we show the new results
obtained in this letter. The plots show both the real (in red) and imaginary
(in green) parts
of the light-fermionic part of F_ box1, both separated into the C_F
and C_A colour factor contributions, and their combination.
We observe a strong variation of the form factor around the
top quark pair threshold region. This behaviour is not caused by a loss
of precision of our semi-analytic expansions around this threshold;
indeed F_ box1 is finite in the limit s → 4 m_t^2. However,
whereas at two loops the leading logarithmic contributions behave like
v log v, where v = √(1-4m_t^2/s), at three loops we find an
additional power of log v, which is responsible for the large variation
around this point.
The numerical value of the light-fermionic contribution to F_ box1 at
three-loops exceeds the size of the two-loop form factor by almost an
order of magnitude. Although this is compensated by the additional factor
of α_s/π, this hints at sizeable three-loop corrections. However,
for a final conclusion, the remaining diagrams need to be computed. The
full computation will also allow a study of the top quark mass scheme
dependence. These issues will be addressed in a future publication.
§ CONCLUSIONS
The computation of three-loop corrections to 2→2 scattering processes with
massive internal particles is a technically challenging
task. Currently-available techniques are most likely not sufficient to obtain
analytic or numerical results without applying any approximation. In this
letter we apply the ideas of
Refs. <cit.> to gg→ HH
and show that three-loop corrections can be obtained. We concentrate on the
light-fermionic three-loop contributions which is a well-defined and
gauge-invariant subset. The obtained results are valid for t=0 and m_H=0
which approximates the full result to 30% or better for p_T≈ 100 GeV.
The approach outlined in this letter can also be used to compute the remaining
colour factor contributions, which are needed to study the overall impact of the
three-loop virtual corrections and also the top quark mass renormalization scheme
dependence.
In addition to the remaining colour factors, we ultimately aim to compute
the t^1 m_H^2 approximation which would address the 30% error discussed in
Section <ref>, improve the approximation for higher values of p_T,
and provide a non-zero value for F_ box2. To compute these terms will
require significantly more CPU time and, most likely, improvements to IBP reduction
software in order to efficiently reduce the large numbers of integrals produced by
the expansions.
§ ACKNOWLEDGEMENTS
We would like to thank Go Mishima for many useful discussions and Fabian
Lange for patiently answering our questions concerning Kira.
This research was supported by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762
— TRR 257 “Particle Physics Phenomenology after the Higgs Discovery”
and has received funding from the European Research Council (ERC) under
the European Union's Horizon 2020 research and innovation programme grant
agreement 101019620 (ERC Advanced Grant TOPUP).
The work of JD was supported by the Science and Technology Facilities Council (STFC) under
the Consolidated Grant ST/T00102X/1.
99
Glover:1987nx
E. W. N. Glover and J. J. van der Bij,
Nucl. Phys. B 309 (1988), 282-294
Borowka:2016ehy
S. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk, U. Schubert and T. Zirke,
Phys. Rev. Lett. 117 (2016) no.1, 012001
[erratum: Phys. Rev. Lett. 117 (2016) no.7, 079901]
[arXiv:1604.06447 [hep-ph]].
Borowka:2016ypz
S. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk and T. Zirke,
JHEP 10 (2016), 107
[arXiv:1608.04798 [hep-ph]].
Baglio:2018lrj
J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, M. Spira and J. Streicher,
Eur. Phys. J. C 79 (2019) no.6, 459
[arXiv:1811.05692 [hep-ph]].
Bellafronte:2022jmo
L. Bellafronte, G. Degrassi, P. P. Giardino, R. Gröber and M. Vitti,
JHEP 07 (2022), 069
doi:10.1007/JHEP07(2022)069
[arXiv:2202.12157 [hep-ph]].
Davies:2023vmj
J. Davies, G. Mishima, K. Schönwald and M. Steinhauser,
JHEP 06 (2023), 063
[arXiv:2302.01356 [hep-ph]].
deFlorian:2013jea
D. de Florian and J. Mazzitelli,
Phys. Rev. Lett. 111 (2013), 201801
[arXiv:1309.6594 [hep-ph]].
deFlorian:2013uza
D. de Florian and J. Mazzitelli,
Phys. Lett. B 724 (2013), 306-309
[arXiv:1305.5206 [hep-ph]].
Grigo:2014jma
J. Grigo, K. Melnikov and M. Steinhauser,
Nucl. Phys. B 888 (2014), 17-29
[arXiv:1408.2422 [hep-ph]].
Chen:2019lzz
L. B. Chen, H. T. Li, H. S. Shao and J. Wang,
Phys. Lett. B 803 (2020), 135292
[arXiv:1909.06808 [hep-ph]].
Chen:2019fhs
L. B. Chen, H. T. Li, H. S. Shao and J. Wang,
JHEP 03 (2020), 072
[arXiv:1912.13001 [hep-ph]].
Grigo:2015dia
J. Grigo, J. Hoff and M. Steinhauser,
Nucl. Phys. B 900 (2015), 412-430
[arXiv:1508.00909 [hep-ph]].
Davies:2019djw
J. Davies and M. Steinhauser,
JHEP 10 (2019), 166
[arXiv:1909.01361 [hep-ph]].
Davies:2021kex
J. Davies, F. Herren, G. Mishima and M. Steinhauser,
JHEP 01 (2022), 049
[arXiv:2110.03697 [hep-ph]].
Baglio:2020wgt
J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, J. Ronca and M. Spira,
Phys. Rev. D 103 (2021) no.5, 056002
[arXiv:2008.11626 [hep-ph]].
Degrassi:2022mro
G. Degrassi, R. Gröber, M. Vitti and X. Zhao,
JHEP 08 (2022), 009
doi:10.1007/JHEP08(2022)009
[arXiv:2205.02769 [hep-ph]].
Fael:2021xdp
M. Fael, F. Lange, K. Schönwald and M. Steinhauser,
SciPost Phys. Proc. 7 (2022), 041
[arXiv:2110.03699 [hep-ph]].
Fael:2022miw
M. Fael, F. Lange, K. Schönwald and M. Steinhauser,
Phys. Rev. D 106 (2022) no.3, 034029
[arXiv:2207.00027 [hep-ph]].
Degrassi:2016vss
G. Degrassi, P. P. Giardino and R. Gröber,
Eur. Phys. J. C 76 (2016) no.7, 411
[arXiv:1603.00385 [hep-ph]].
Plehn:1996wb
T. Plehn, M. Spira and P. M. Zerwas,
Nucl. Phys. B 479 (1996), 46-64
[erratum: Nucl. Phys. B 531 (1998), 655-655]
[arXiv:hep-ph/9603205 [hep-ph]].
Harlander:2005rq
R. Harlander and P. Kant,
JHEP 12 (2005), 015
[arXiv:hep-ph/0509189 [hep-ph]].
Anastasiou:2006hc
C. Anastasiou, S. Beerli, S. Bucherer, A. Daleo and Z. Kunszt,
JHEP 01 (2007), 082
[arXiv:hep-ph/0611236 [hep-ph]].
Aglietti:2006tp
U. Aglietti, R. Bonciani, G. Degrassi and A. Vicini,
JHEP 01 (2007), 021
[arXiv:hep-ph/0611266 [hep-ph]].
Davies:2019nhm
J. Davies, R. Gröber, A. Maier, T. Rauh and M. Steinhauser,
Phys. Rev. D 100 (2019) no.3, 034017
[erratum: Phys. Rev. D 102 (2020) no.5, 059901]
[arXiv:1906.00982 [hep-ph]].
Davies:2019roy
J. Davies, R. Gröber, A. Maier, T. Rauh and M. Steinhauser,
PoS RADCOR2019 (2019), 079
[arXiv:1912.04097 [hep-ph]].
Harlander:2019ioe
R. V. Harlander, M. Prausa and J. Usovitsch,
JHEP 10 (2019), 148
[erratum: JHEP 08 (2020), 101]
[arXiv:1907.06957 [hep-ph]].
Czakon:2020vql
M. L. Czakon and M. Niggetiedt,
JHEP 05 (2020), 149
doi:10.1007/JHEP05(2020)149
[arXiv:2001.03008 [hep-ph]].
Nogueira:1991ex
P. Nogueira,
J. Comput. Phys. 105 (1993), 279-289.
Gerlach:2022qnc
M. Gerlach, F. Herren and M. Lang,
Comput. Phys. Commun. 282 (2023), 108544
[arXiv:2201.05618 [hep-ph]].
Harlander:1997zb
R. Harlander, T. Seidensticker and M. Steinhauser,
Phys. Lett. B 426 (1998) 125
[hep-ph/9712228].
Seidensticker:1999bb
T. Seidensticker,
hep-ph/9905298.
Ruijl:2017dtg
B. Ruijl, T. Ueda and J. Vermaseren,
[arXiv:1707.06453 [hep-ph]].
Davies:2018ood
J. Davies, G. Mishima, M. Steinhauser and D. Wellmann,
JHEP 03 (2018), 048
[arXiv:1801.09696 [hep-ph]].
Davies:2018qvx
J. Davies, G. Mishima, M. Steinhauser and D. Wellmann,
JHEP 01 (2019), 176
[arXiv:1811.05489 [hep-ph]].
Herren:2020ccq
F. Herren,
“Precision Calculations for Higgs Boson Physics at the LHC - Four-Loop
Corrections to Gluon-Fusion Processes and Higgs Boson Pair-Production at
NNLO,” PhD thesis, 2020, KIT.
Lee:2013mka
R. N. Lee,
J. Phys. Conf. Ser. 523 (2014), 012059
[arXiv:1310.1145 [hep-ph]].
Magerya:2022esf
V. Magerya,
“Semi- and Fully-Inclusive Phase-Space Integrals at Four Loops,”
Dissertation, University of Hamburg, 2022.
Klappert:2020nbg
J. Klappert, F. Lange, P. Maierhöfer and J. Usovitsch,
Comput. Phys. Commun. 266 (2021), 108024
[arXiv:2008.06494 [hep-ph]].
Klappert:2020aqs
J. Klappert, S. Y. Klein and F. Lange,
Comput. Phys. Commun. 264 (2021), 107968
[arXiv:2004.01463 [cs.MS]].
Klappert:2019emp
J. Klappert and F. Lange,
Comput. Phys. Commun. 247 (2020), 106951
[arXiv:1904.00009 [cs.SC]].
Catani:1998bh
S. Catani,
Phys. Lett. B 427 (1998), 161-171
[arXiv:hep-ph/9802439 [hep-ph]].
|
http://arxiv.org/abs/2307.05809v1 | 20230711211503 | Excitements and Concerns in the Post-ChatGPT Era: Deciphering Public Perception of AI through Social Media Analysis | [
"Weihong Qi",
"Jinsheng Pan",
"Hanjia Lyu",
"Jiebo Luo"
] | cs.SI | [
"cs.SI"
] |
Excitements and Concerns in the Post-ChatGPT Era: Deciphering Public Perception of AI through Social Media Analysis
Weihong Qi
Department of Political Science
University of Rochester
Rochester, USA
[email protected]
Jinsheng Pan, Hanjia Lyu, Jiebo Luo
Department of Computer Science
University of Rochester
Rochester, USA
{jpan24, hlyu5}@ur.rochester.edu, [email protected]
============================================================================================================================================================================================================================================================================================
As AI systems become increasingly prevalent in various aspects of daily life, gaining a comprehensive understanding of public perception towards these AI systems has become increasingly essential for several reasons such as ethical considerations, user experience, fear, disinformation, regulation, collaboration, and co-creation. In this study, we investigate how mass social media users perceive the recent rise of AI frameworks such as ChatGPT. We collect a total of 33,912 comments in 388 unique subreddits spanning from November 30, 2022 to June 8, 2023 using a list of AI-related keywords. We employ BERTopic to uncover the major themes regarding AI on Reddit. Additionally, we seek to gain deeper insights into public opinion by examining the distribution of topics across different subreddits. We observe that technology-related subreddits predominantly focus on the technical aspects of AI models. On the other hand, non-tech subreddits show greater interest in social issues such as concerns about job replacement or furlough. We leverage zero-shot prompting to analyze the sentiment and perception of AI among individual users. Through a comprehensive sentiment and emotion analysis, we discover that tech-centric communities exhibit greater polarization compared to non-tech communities when discussing AI topics. This research contributes to our broader understanding of public opinion surrounding artificial intelligence.
public opinion, generative AI, GPT, Reddit, topic modeling, zero-shot prompting, sentiment analysis, social media
§ INTRODUCTION
Artificial Intelligence (AI) has become increasingly pervasive in our lives, transforming various sectors and shaping the future of technology. As AI continues to advance and integrate into society, it is essential to gain insights into how the general public perceives this transformative technology. Public perception plays a crucial role in the adoption, acceptance, and ethical considerations surrounding AI. The recent surge in discussions about generative AI, exemplified by models like ChatGPT, highlights the growing interest and excitement surrounding this technology. There was a substantial surge in online discussions about AI during the month of April 2023, coinciding with the presumed release of GPT-4. The development of advanced language models has paved the way for generating human-like text, enabling realistic and interactive conversations with AI systems. ChatGPT was estimated to have reached 100 million monthly active users in January, 2023, just two months after its launch <cit.>. In spite of its widespread popularity, ChatGPT has also raised concerns regarding various issues, including but not limited to data privacy. The use and adoption of ChatGPT have sparked discussions and debates surrounding potential risks associated with the handling and storage of user data. For instance, OpenAI's introduction of privacy controls played a pivotal role in Italy's decision to lift the ban on ChatGPT due to privacy concerns <cit.>. Prior to OpenAI's announcement, Italy had maintained restrictions on the use of ChatGPT over apprehensions about privacy implications <cit.>.
As of May 2023, while a majority of Americans have become aware of ChatGPT, only a limited number have actually engaged with the technology themselves <cit.>. The public perception of artificial intelligence is still in the process of being shaped and evolving. Therefore, it becomes crucial to comprehend and analyze public opinion surrounding AI. By understanding the viewpoints, attitudes, and concerns of the general public, we can gain valuable insights into the current state of public perception, bridge knowledge gaps, and ensure that the development and deployment of AI technologies align with societal expectations and values.
In order to gain insights into public perception of artificial intelligence, particularly with regard to the emerging field of generative AI, we conduct a data collection from Reddit. Our objective is to explore the thematic and sentiment attributes of online discussions surrounding this topic. We aim to answer two research questions:
* RQ 1: What specific topics characterize the discussions of AI on Reddit? How do the topics vary across subreddits?
* RQ 2: What is the prevailing sentiment surrounding the most discussed topics, and do these sentiments differ among subreddits?
Our approach consists of BERTopic topic modeling <cit.>, zero-shot prompting for sentiment analysis <cit.>, Linguistic Inquiry and Word Count (LIWC) text analysis <cit.>, and regression analysis. By delving into these topic distributions within various subreddits, we identify differing areas of emphasis and concerns among different user communities. In order to gauge the sentiments expressed in the comments, we infer and compare sentiments both at the topic level and across different subreddits. Our findings shed light on the prominent themes, the varying concerns across different subreddits, and the sentiments expressed in relation to these topics.
§ RELATED WORK
Several studies have explored public perceptions and discussions surrounding artificial intelligence, with a particular focus on generative AI and ChatGPT. Miyazaki <cit.> investigated users' perceptions of generative AI on Twitter, especially focusing on their occupation and usage. The findings reveal that a significant interest in generative AI extends beyond IT-related occupations to encompass individuals across various professional domains. Leiter <cit.> analyzed over 300,000 tweets and more than 150 scientific papers to investigate how ChatGPT is perceived and discussed. The general consensus regarding ChatGPT is that it is perceived as a high-quality system, with positive sentiment prevailing and emotions of joy dominating social media discussions. Furthermore, recent scientific papers portray ChatGPT as a promising opportunity in diverse fields, including the medical domain. However, ethical concerns surrounding ChatGPT's capabilities are also acknowledged, highlighting its potential as a double-edged sword. In the context of education, assessments of ChatGPT are mixed, with varying opinions on its impact and efficacy. These findings align with Tlili <cit.> and Sullivan <cit.>.
While other social platforms have their own unique advantages and characteristics, Reddit's combination of diverse communities, long-form discussions, user anonymity, and data availability make it a valuable source for conducting text analysis and gaining deeper insights into public opinions and discussions. More specifically, Reddit hosts a vast array of communities called subreddits, each focused on specific topics or interests. This diversity allows researchers to analyze discussions within dedicated communities, providing more focused and specialized insights. In addition, unlike platforms that primarily rely on short-form content like tweets, Reddit facilitates in-depth discussions with longer posts and comments. This allows for more detailed and nuanced analysis of user opinions, arguments, and perspectives <cit.>. Hence, we choose to conduct our study using Reddit data, leveraging its unique attributes to gain deeper insights into the multifaceted landscape of AI discussions.
§ METHOD
§.§ Data Collection and Preprocessing
In the context of the Reddit platform, subreddits function as individual communities covering a variety of topics, interests, and themes. Each subreddit operates under a unique set of guidelines and regulations, designed to steer the conduct and content within that specific community. Within these individual subreddits, users can create posts and comment under the posts to participate in specific discussions.[<https://support.reddithelp.com/hc/en-us/categories/200073949-Reddit-101>]
To facilitate data collection and analysis of public perceptions regarding AI, we employ the Python Reddit API Wrapper (PRAW) provided by Reddit. Through the API, we are able to crawl subreddits, posts, comments, their UTC time stamp of creation and author information from Reddit. Our first step involves the formulation of a list of keywords which encompass the prevalent terminologies frequently discussed since the launch of ChatGPT. The list includes: [“AIGC”, “ChatGPT”, “GPT”, “OpenAI”, “Bard”, “LLM”, “large language model”, “Midjourney”, “diffusion model”, “stability AI”, “AI”, “artificial intelligence”, “artificial intelligence generated content”, “dalle 2”]. Next, we conduct a comprehensive search across all subreddits, identifying those containing any of the keywords in the list. The keyword search is not case sensitive. Subsequently, we extract posts, comments, author information, and the timestamps associated with these comments from each subreddits. To narrow our focus specifically to discussions emerging after the launch of ChatGPT, we impose a temporal constraint, limiting our investigation to the period between November 30, 2022, and June 8, 2023. Bot-generated content and duplicate entries are eliminated from our dataset by identifying the repeated patterns in the comments. Upon completion of this process, we collect a total of 33,912 comments distributed across 388 subreddits, establishing the corpus for our analysis. Table <ref> presents the summary statistics of the ten subreddits that have the most comments regarding AI in our dataset. When it comes to discussions surrounding AI, the subreddit “r/singularity” takes the lead among all other subreddits. This subreddit's unparalleled popularity can be attributed to its staggering membership count of over 955,000 individuals who actively engage in conversations revolving around the concept of technological singularity and its associated subjects.[<https://www.reddit.com/r/singularity/>] Before we use specific models to investigate the research questions, we further process the corpus with the Natural Language Toolkit (NLTK) library to lemmatize the corpus and remove the links, numbers, emojis, punctuation and stop words.
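A minimal Python sketch of this cleaning step is shown below; the specific regular expressions, the ASCII-based emoji removal, and the NLTK stop-word list are our own illustrative choices and may differ in detail from the original pipeline.

import re
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean_comment(text):
    text = text.lower()
    text = re.sub(r"http\S+|www\.\S+", " ", text)        # links
    text = re.sub(r"\d+", " ", text)                      # numbers
    text = text.encode("ascii", "ignore").decode()        # crude emoji removal
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [lemmatizer.lemmatize(tok) for tok in text.split()
              if tok not in stop_words]
    return " ".join(tokens)

print(clean_comment("GPT-4 is amazing!! See https://openai.com"))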
§.§ Topic Modeling and Cross-subreddit Analysis
To identify the topics that characterize the discussions of AI, we leverage BERTopic <cit.>. This approach demonstrates proficiency in extracting topic representations by employing the class-based TF-IDF procedure, integrating Sentence-BERT <cit.> for embedding, as well as HDBSCAN <cit.> for clustering within its framework. While alternative methods for topic modeling such as Latent Dirichlet Allocation (LDA) <cit.>, Non-negative Matrix Factorization (NMF) <cit.>, and Top2Vec <cit.> are also widely used, BERTopic stands out for its exceptional efficiency in analyzing social media text data <cit.>. Furthermore, BERTopic is specifically chosen for this study due to its capability in grasping and portraying context, which is a critical element in our research.
To capture more detailed insights into user attitudes towards AI, comments are particularly valuable due to their longer and more comprehensive nature compared to posts. Therefore, we focus our topic analysis exclusively on the comment corpus. The embedding model for BERTopic is all-MiniLM-L6-v2. We identify 232 topics in total, and 16 of them are considered outlier topics. For each comment in the outlier topic, we select the most frequent topic in that comment based on topic distributions. The most frequent topic is then assigned as the topic for that comment.
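One way to realize this step is sketched below with the BERTopic and sentence-transformers libraries; the hyperparameters are library defaults rather than the ones used in this study, and load_cleaned_comments is a hypothetical helper that returns the preprocessed comment strings.

import numpy as np
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = load_cleaned_comments()                      # hypothetical helper

embedder = SentenceTransformer("all-MiniLM-L6-v2")
topic_model = BERTopic(embedding_model=embedder)
topics, _ = topic_model.fit_transform(docs)

# Reassign comments in the outlier topic (-1) to their most probable topic,
# based on the per-document topic distribution.
topic_distr, _ = topic_model.approximate_distribution(docs)
topics = [int(np.argmax(topic_distr[i])) if t == -1 else t
          for i, t in enumerate(topics)]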
After identifying the key topics related to AI on Reddit, we further exploit the hierarchical structure of the Reddit platform, encompassing subreddits, posts, and comments to study the perceptions across different communities. Firstly, we examine the similarity and variation of topics across different subreddits, uncovering the topics that are discussed among multiple groups. Next, we delve into the disparities between tech-centric and non-tech communities. We employ the Linguistic Inquiry and Word Count (LIWC) <cit.> to capture the linguistic and psychological characteristics within the comments and implement a linear regression analysis to quantify the differences between the two groups. The linear regression is specified as follows:
LIWC Attribute = α_0 + α_1 · Tech + ϵ
where LIWC Attribute is the continuous measurement generated by LIWC. LIWC operates by categorizing words into different linguistic and psychological dimensions. It includes a comprehensive dictionary containing words that are associated with specific categories or dimensions. The software can analyze the frequency and distribution of these words in a given text and provide information about the psychological, emotional, and cognitive aspects reflected in the text <cit.>. In our analysis, we use the Tone, Emotion, Prosocial and Conflict metrics as the outcomes. Tone represents the degree of positive/negative tones of the corpus, while Emotion captures “true emotion labels, as well as words that strongly imply emotions.” Prosocial is the category of behaviors or indicators that demonstrate assistance or empathy towards others, specifically at an interpersonal level. Lastly, Conflict refers to concepts that indicate or embody conflict in a broad sense <cit.>.
Tech is a binary variable that indicates whether the comment belongs to a tech-centric subreddit, and ϵ is the error term. Among the ten subreddits that have the most comments, we define the subreddits “r/singularity”, “r/technology”, “r/Artificialintelligence”, “r/artificial”, “r/aiwars”, “r/OpenAI”, and “r/ChatGPT” as tech-centric subreddits in accordance with their names and descriptions. We assign a value of 1 to the Tech variable for the comments from these subreddits. The other subreddits, including “r/Futurology”, “r/wallstreetbets” and “r/slatestarcodex”, are regarded as non-tech subreddits, and the Tech variable for their comments is assigned a value of 0.
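The regression itself can be run per LIWC attribute with ordinary least squares, for instance as in the sketch below; the data-frame and column names are assumptions, with df holding one row per comment, its LIWC scores, and the binary Tech indicator.

import statsmodels.formula.api as smf

def run_liwc_regressions(df, attributes=("Tone", "Emotion", "Prosocial", "Conflict")):
    # Fit LIWC_attribute = alpha_0 + alpha_1 * Tech + eps for each attribute
    results = {}
    for attr in attributes:
        fit = smf.ols(f"{attr} ~ Tech", data=df).fit()
        results[attr] = (fit.params["Tech"], fit.pvalues["Tech"])
    return results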
§.§ Sentiment Analysis
We perform sentiment analysis to discern the attitudes and perception of each user towards AI.
§.§.§ Modeling
ChatGPT (gpt-3.5-turbo) <cit.> is applied to our collected data to classify user comments into three sentiment categories: positive, neutral, and negative. We use zero-shot prompting and an example prompt is demonstrated in Figure <ref>. Temperature is set at 0 to encourage the response from ChatGPT to be more focused and deterministic. Next, the final sentiment inference generated by ChatGPT is attained by employing a keyword matching approach. Upon identifying the presence of positive, neutral or negative within ChatGPT's response, we assign the corresponding sentiment label to the comment as positive, neutral, or negative. Following the sentiment predictions, we proceed to analyze the individual contributions of each topic to both positive and negative sentiments. This analysis involves calculating the following ratio: p = n_t/N_s, where N_s represents the total count of positive or negative sentiment comments, while n_t denotes the count of positive or negative sentiment comments associated with a particular topic or subreddit.
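A sketch of this step is given below; the prompt wording is paraphrased (the exact prompt appears in the figure), and the legacy openai.ChatCompletion interface reflects the Python client available at the time of the study, so the call syntax may differ for newer library versions.

import openai   # assumes openai.api_key has been set

def classify_sentiment(comment):
    prompt = ("Classify the sentiment of the following Reddit comment about AI "
              "as positive, neutral, or negative.\n\n" + comment)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                          # focused, deterministic output
    )
    answer = response["choices"][0]["message"]["content"].lower()
    for label in ("positive", "negative", "neutral"):   # keyword matching
        if label in answer:
            return label
    return "neutral"

def topic_share(labels, topics, topic, polarity):
    # p = n_t / N_s: share of all positive (or negative) comments in a topic
    selected = [t for s, t in zip(labels, topics) if s == polarity]
    return selected.count(topic) / len(selected) if selected else 0.0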
§.§.§ Performance Verification
To evaluate the performance of ChatGPT, we randomly sample 200 comments and two researchers independently annotate these comments into the three sentiment categories. The final annotation is derived through a consensus reached between the two researchers. The Cohen's Kappa is 0.63, which suggests a substantial level of agreement <cit.>. The F1 score of ChatGPT is 0.7, indicating a commendable performance in sentiment classification on our collected dataset.
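Both quantities can be reproduced directly with scikit-learn, as in the short sketch below; the toy label lists and the weighted F1 average are assumptions for illustration.

from sklearn.metrics import cohen_kappa_score, f1_score

annotator_1 = ["positive", "negative", "neutral", "negative"]   # toy examples
annotator_2 = ["positive", "negative", "negative", "negative"]
consensus   = ["positive", "negative", "negative", "negative"]
chatgpt     = ["positive", "neutral",  "negative", "negative"]

kappa = cohen_kappa_score(annotator_1, annotator_2)   # inter-annotator agreement
f1 = f1_score(consensus, chatgpt, average="weighted")
print(f"Cohen's kappa = {kappa:.2f}, F1 = {f1:.2f}")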
§ RESULTS
§.§ RQ1 Results
§.§.§ BERTopic Modeling Results
Table <ref> presents the ten most frequently discussed topics, which comprise 29% of all discussions. The contribution of each keyword to its topic is listed in descending order. Although these keywords are provided by the BERTopic model, we adhere to the convention in topic modeling research <cit.> and manually assign a label to each topic based on its associated keywords. Among all the identified topics, those relating to the Consciousness and Intelligence of AI have the highest number of comments. To provide an intuitive sense of the discussions under this prevalent topic, we present an example comment as follows:
“I am not sure if you meant to imply that is the current state of AI? If so, then that is incorrect. Humans have not developed self-aware AI programs (yet). We don't really know if that's possible to realize yet. Also, self-aware AI is pretty hard to define.”[The example comments of all top 10 topics are listed in the Appendix.]
The prevalence of the topic suggests that the potential for AI to possess or develop human-like awareness gains substantial attention on Reddit.
Other topics attracting significant attention include AI development and model training (AI Model and Prompt Engineering), AI in business (OpenAI and AI Industry), the creativity engendered by AI (Art and Creativity of AI), and the potential societal influence of AI (AI and Job Automation). These findings indicate that Reddit users prioritize both the technical progression of AI and its social implications as significant areas of interest and attention.
§.§.§ Cross-subreddit Analysis
To uncover whether different groups of people focus on different topics, we further conduct a cross-subreddit analysis and investigate the topic and user base difference across diverse subreddits. Figure <ref> illustrates the distribution of the top 10 topics across the ten subreddits that elicited the most comments. While Art and Creativity of AI and AI in Gaming and Strategy are dominant in “r/aiwars” and “r/wallstreetbets”, respectively, other subreddits reveal more evenly distributed topics. Remarkably, three topics emerge as particularly prevalent, each accounting for at least 20% of the discussions in at least two subreddits: Consciousness and Intelligence of AI, OpenAI and AI Industry, and AI and Job Automation. Furthermore, while technology-centric subreddits such as “r/ArtificialIntelligence” and “r/technology” are primarily focused on AI's technical advancements, non-technology subreddits such as “r/Futurology” and “r/wallstreetbets” exhibit a higher level of interest in the social implications of AI.
§.§ RQ2 Results
§.§.§ Sentiment across Topics
To further investigate the sentiment differences in terms of topics, we compute the percentages of positive and negative comments of each topic. Figure <ref> displays the percentage of positive/negative comments. The topics of AI in Gaming and Strategy, AI model and Prompt Engineering, GPT Models and Applications, The Potential Impact of AI on Society, and AGI and GPT Models exhibit a higher percentage of positive sentiment compared to negative sentiment. In contrast, the topics of Consciousness and Intelligence of AI, OpenAI and AI Industry, Large Language Model Training, AI and Creativity of AI, and AI and Job Automation exhibit higher percentages of negative sentiment. Based on the data presented in Table <ref>, it is evident that the topics with higher percentages of positive sentiment primarily emphasize the benefits and conveniences offered by AI. Conversely, the topics exhibiting higher percentages of negative sentiment tend to focus on the drawbacks and future development of AI.
The aforementioned findings indicate a positive reception of AI applications and a general comprehension of the potential uses of AI models. The public holds the belief that AI can contribute to the betterment of society, particularly when employed as an assistant in decision-making processes, such as gaming and education. To illustrate, here is an example comment reflecting a positive sentiment from the topic The Potential Impact of AI on Society:
“For what it’s worth, this method of research has existed for a long time. It’s called High Throughput Testing. It’s basically a `throw everything at the wall and see what sticks' approach. I think using AI to test drugs we wouldn’t have otherwise thought of, to test a higher quantity of drugs, and to analyze the efficacy of those drugs is overall a great idea. Of course it will always require humans to verify the results and make final clinical decisions.”
In this particular example, AI is expected to assist in conducting drug tests due to its impressive capabilities. However, given the current limitations of the AI system, human verification remains necessary. Despite this limitation, the overall sentiment towards AI remains positive.
The negative comments highlight a lack of trust in current AI technology and apprehension regarding the future trajectory of AI development. Particularly within topics like Consciousness and Intelligence of AI and AI and Creativity of AI, the public expresses concerns regarding potential issues arising from AI. Keywords from Table <ref> reveal problems such as regulation, hallucination, and job replacement, while security and privacy are recurring themes in these negative comments. Here is an example comment that reflects the sentiment of mistrust towards AI:
“Yes, but there is a difference between understanding and relaying information. GPT 4 can relay information well, but that doesn’t mean it actually understands what it is doing. It’s just tossing around words in an organized manner based on the the prompt that you give it. So basically, it isn’t making it’s own thoughts, it’s just re-engineering words and sequences to make it seem like it’s making new thoughts. Until we learn about the actual nature of consciousness (if there even is one) A.I. is just another marketing buzzword.”
In this specific example, the sentiment expressed is a lack of trust in AI, as people question whether AI (specifically GPT-4) truly embodies real intelligence. Other studies have arrived at similar conclusions. For instance, Beets <cit.> highlighted that in the healthcare domain, individuals reap the benefits of AI advancements, but they exercise caution when AI is involved in making critical personal health decisions. Additionally, Zhang and Dafoe <cit.> proposed that the public supports the development of AI due to its promising potential, yet they also express the belief that AI should be subject to careful management.
§.§.§ Sentiment across Subreddits
Figure <ref> shows the percentage of positive/negative comments of each subreddit. It is noticeable that among the analyzed subreddits, namely “r/technology”, “r/Futurology”, “r/aiwars”, “r/artificial”, and “r/slatestarcodex”, which have relatively uniform topic distributions, there is a higher proportion of negative sentiment compared to positive sentiment. The prevalent discussion of AI within these subreddits indicates that Reddit users thoroughly engage in diverse AI-related topics, expressing their viewpoints on the current limitations of AI, including concepts like “misinformation” and “stochastic parrot”. Additionally, the societal implications of AI are of significant concern to the public. This is exemplified by the vibrant online community of “r/aiwars”. As for the topic of Art and Creativity of AI, conversations primarily revolve around AI-generated text and images, which have emerged as dominant subjects of interest. However, the surge in AI creativity has also given rise to derivative issues, such as unemployment and copyright concerns, stimulating thoughtful deliberations among community members.
Conversely, “r/singularity”, “r/ArtificialIntelligence”, “r/OpenAI”, “r/wallstreetbets”, and “r/ChatGPT” demonstrate the opposite pattern. According to Figure <ref>, “r/ChatGPT” accounts for approximately 46% of comments associated with the topics characterized by positive sentiment. People generally hold a positive appreciation for AI, recognizing its capabilities and convenience. A prime example is ChatGPT, which serves as a valuable writing tool, assisting users in generating high-quality text. As a result, business professionals can foresee the potential of AI enhancing their productivity through its intelligent capabilities. This positive sentiment reflects the recognition of AI's strengths and its potential to positively impact various aspects of human endeavors. These findings suggest that users perceive and engage with various topics in distinct ways, leading to differing sentiments across subreddits.
§.§.§ Tech-centric vs. Non-tech Subreddits Analysis
Another question that we aim to explore is whether perceptions of AI differ between tech-centric and non-tech communities. To evaluate the emotional and social attributes of comments, we leverage LIWC (Linguistic Inquiry and Word Count), a software tool for analyzing word use that can be applied to a single individual or to groups of people, utilizing its built-in dimensions related to psychological and social processes. More specifically, we examine six key dimensions: positive tone, negative tone, positive emotion, negative emotion, social interactions, and interpersonal conflict. It is worth noting that tone is distinct from emotion in that the tone dimensions only capture sentiment, while the emotion dimensions are restricted to words that strongly imply emotions <cit.>. Table <ref> shows the regression results regarding each dimension between tech-centric and non-tech subreddits.
The outcomes displayed in columns (1) and (2) in Table <ref> show the discrepancies in the use of positive and negative tones in comments. Based on these results, it can be deduced that the positive tone scores of the comments from the tech-centric communities are, on average, 0.173 higher than those of their non-tech counterparts. However, no similar disparity is found in the regression results regarding the negative tone. This suggests that tech-centric communities exhibit a greater level of optimism in tone regarding AI advancements compared to non-tech communities.
In terms of emotional expression, as depicted in columns (3) and (4), tech-centric communities reveal higher scores in both positive and negative emotional expressions. These findings suggest that tech-centric communities are more polarized in their sentiments compared to non-tech-centric communities. This disparity can be attributed to the familiarity with technology and the resulting tendency for more distinct and definitive expressions within the tech-centric communities.
Furthermore, the tech-centric communities demonstrate a higher score in prosocial behaviors, suggesting that these communities exhibit a greater inclination towards expressing signals of “helping or caring about others” in their discussions about AI. This observation is based on the LIWC psychometric measurements <cit.>. On one hand, the prosocial expressions could stem from the inherent collaborative spirit found within the tech-centric communities, such as the open-source software culture. On the other hand, they could be a result of concerns regarding the ethical implications and potential impact on societal well-being stemming from AI advancements.
§ DISCUSSION AND CONCLUSION
In this study, we delve into understanding the public perception of artificial intelligence, utilizing a dataset of 33,912 comments from 388 unique subreddits, spanning from the launch of ChatGPT on November 30, 2022, to June 8, 2023. Employing BERTopic, we uncover a wide range of diverse topics discussed on Reddit, surpassing the findings of existing literature on public perception of AI <cit.>. The most frequent topics include the discussions about the consciousness and intelligence of AI, AI development and model training, AI in business, the creativity engendered by AI, and the potential societal influence.
The results from our sentiment analysis reveal nuanced variations in sentiment across different subreddits and topics. Overall, the public tends to perceive AI as a beneficial force that can contribute to societal improvement, particularly when used as an assistant in decision-making processes like gaming and education. However, negative comments highlight a lack of trust in current AI technologies and apprehension about the future trajectory of AI development, aligning with the findings of Leiter <cit.>. Furthermore, LIWC is employed to examine the more fine-grained differences in sentiment between the tech-centric and non-tech communities. We find that tech-centric communities exhibit higher polarization in their sentiments compared to non-tech-centric communities. While our analysis uncovers differences across subreddits, which serve as proxies for distinct social groups, further investigations could explore the underlying factors contributing to these differences.
In conclusion, this study on understanding public perception of AI has shed light on the multifaceted landscape of opinions, attitudes, and concerns surrounding artificial intelligence. Through various research methodologies, we have discovered the prevalent topics, sentiments, and thematic variations across different communities. The findings emphasize the importance of considering public opinion in shaping AI policies, addressing ethical considerations, driving user acceptance, promoting education and awareness, and guiding the design and development of AI technologies. This comprehensive understanding of public perception serves as a valuable foundation for fostering responsible and beneficial AI innovations that align with societal expectations and values. By bridging the gap between AI development and public sentiment, we can work towards building a future where AI technologies are embraced, trusted, and utilized in a manner that positively impacts individuals and society as a whole.
§.§ Example Comments of the Top 10 Most Prevalent Topics
Consciousness and Intelligence of AI: “I am not sure if you meant to imply that is the current state of AI? If so, then that is incorrect. Humans have not developed self-aware AI programs (yet). We don't really know if that's possible to realize yet. Also, self-aware AI is pretty hard to define.”
AI in Gaming and Strategy: “The purpose of human brains is not to play chess. The game of chess is just one of uncountable activities they can learn to do, because of their extreme flexibility. The complexity of analysis that every conscious brain performs every second outperforms any AI to incredible extents and it's going to stay that way for a long time. Of course a specialized AI may be better at specialized tasks, like playing a certain game. But it's still very limited machine. Machines are often better at their specialized task, than humans, but a single machine won't be able to do a fraction of activities, than a human is able to do. AI trained to play chess is just that, machine to play chess - it want be able to consciously adapt to any other task.”
AI Model and Prompt Engineering: “It's not really in a usable state currently. Basic prompt to ChatGPT or GPT-4 usually gives better results as compared to autogpt. So it's not worth the hassle to read so many prompts, give it human feedbacks and also spend money when you can just get better output for free in a much easier way. However, this experiment does have a potential to become useful in future. One of the ways this could be done is by using adding more agents where each agent is specialized in a single task instead of something general like ChatGPT. Also, I heard that someone is working on re-implementation of AutoGPT as a python package which is a good idea in my opinion as it would allow AutoGPT to be used in actual projects.”
OpenAI and AI Industry: “OpenAI was always going to do that, and the Google memo is wishful thinking since Google search profits will shrink if everyone has a quality free LLM. It could make a lot of sense for some other corporations to co-operate on a free LLM, but that's only likely if OpenAI is asleep at the wheel when it comes to undercutting competition. OA has a big lead on quality and the funding to drive prices low. Even if OA R&D falters the new winner will have the same plan. Outside of LLMs it's quite not so dire for competitive local AI, but even there we're still totally dependent on charity for expensive base models.”
GPT Models and Applications: “Gotcha, I just gave it a try. It got farther than normal restricted mode, but it still stopped once the story was going to get good, LOL. Hard to say if a pinned prompt would help or not. I also feel like ChatGPT can maybe build story elements better for now... I just get the feeling GPT-4 is too concise, but well, once I get API access it'd be fun to play around with for sure. Anyway, I will see when I have time to work on more of those additions for the app. I'll ping you when I have an update to push. For now, I should get back to work, lol.”
The Potential Impact of AI on Society: “AI means the end of reliably documented events. There's just no way around it, sadly. There is no longer any objective reason for anyone to believe anything anybody tells them from now on. The "post-truth" era is real and it's here. What that really means is we can no longer afford to assume any organized system of authority has our best interests in mind.”
Large Language Model Training: “having 95% ChatGPT performance is a strong claim to have and easy to disspell. Also, I have strong doubts about the propietary model that's a delta weight from LLaMA. I think the licence from LLaMA makes that illegal but I'm not that much in the know. Logic problems are a metric for finding how useful these models are. If you want to test your model against others, instead of making wild claims submit it to huggingface's LLM leaderboard”
Art and Creativity of AI: “The difference with using stock photos and such is you either already had the free rights to use them or payed to use them. Stock assets are there to be used. AI generated media is based on training sets composed of stolen and uncredited art, many times without an original artists consent. While its amazing technology its got a very shaky ethically questionable foundation. The very least that could have been done is noting the piece as AI assisted / generated”
AGI and GPT Models: “I wonder if anyone knows and can say. When they would create lets say GPT-5 would it be a completely new system from the ground up? Or would it be a sort of upgrade building on the previous version? I mean obviously no one knows. I guess i am just asking if it is common to `upgrade' systems in this way, or are usually just new models made? I know they have talked about upgrading GPT like in 0.1 increments etc. I guess it depends and if they also make some changes in architecture.”
AI and Job Automation: “the positions in the industry will already be consolidated and companies will be operating with a skeleton crew. This was said with respect to every disruptive technology ever. The reality is that people will move into roles where they manage the workload of AI rather than doing the work themselves. Employment isn't some magic thing that fell from the sky. It's the thing we invent and re-invent constantly in order to demonstrate value to others. If all of the jobs everywhere go away, we'll just invent more of them because it's what we do. Suggesting otherwise is like saying that once AI shows up, children won't play in the yard because AI can do that.”
|
http://arxiv.org/abs/2307.07240v1 | 20230714092647 | MaxSR: Image Super-Resolution Using Improved MaxViT | [
"Bincheng Yang",
"Gangshan Wu"
] | cs.CV | [
"cs.CV"
] |
[email protected]
0000-0002-3903-5425
[email protected]
0000-0003-1391-1762
State Key Laboratory for Novel Software Technology, Nanjing University
163 Xianlin Avenue, Qixia District
Nanjing
Jiangsu
P.R.China
210023
While transformer models have been
demonstrated to be
effective for natural language processing tasks and high-level vision tasks,
only a few attempts have been made to use powerful transformer models for single image super-resolution.
Because transformer models have powerful representation capacity and the in-built self-attention mechanisms in transformer models
help to
leverage self-similarity prior in input low-resolution image to improve performance for single image super-resolution,
we present a single image super-resolution model based on the recent hybrid vision transformer MaxViT, named MaxSR.
MaxSR consists of four parts, a shallow feature extraction block, multiple cascaded adaptive MaxViT blocks to extract deep hierarchical features and model global self-similarity from low-level features efficiently, a hierarchical feature fusion block, and finally a reconstruction block.
The key component of MaxSR, i.e., adaptive MaxViT block, is based on MaxViT block which mixes MBConv with squeeze-and-excitation,
block attention and grid attention.
In order to achieve better global modelling of self-similarity in input low-resolution image, we improve
block attention and grid attention
in MaxViT block to adaptive
block attention and adaptive grid attention
which do self-attention inside each window across all grids and each grid across all windows respectively in the most efficient way.
We instantiate proposed model for classical single image super-resolution (MaxSR) and lightweight single image super-resolution (MaxSR-light).
Experiments show that our MaxSR and MaxSR-light establish new state-of-the-art performance efficiently.
[500]Computing methodologies Neural networks
[300]Computing methodologies Reconstruction
[300]Computing methodologies Image processing
MaxSR: Image Super-Resolution Using Improved MaxViT
Gangshan Wu
August 12, 2023
===================================================
§ INTRODUCTION
Single image super-resolution (SISR) is an ill-posed inverse problem because the input low-resolution (LR) image can be mapped to many high-resolution (HR) undegraded images with different details. It has a lot of applications, such as security and surveillance
<cit.>,
medical image
<cit.>
and satellite and aerial imaging <cit.>.
How to use different image priors such as
self-similarity prior <cit.> is the key to tackle challenging ill-posed SISR problem.
Deep convolutional neural networks (CNNs) have been successful to learn a direct mapping between input LR image and output HR image for SISR in recent years <cit.>.
However, while transformer models have been shown to be
effective
for natural language processing tasks and high-level vision tasks,
only a few attempts have been made to use transformer models for SISR <cit.>.
Due to their content-dependent filtering ability and effective long-range dependency modelling ability in contrast to deep CNNs,
transformer models have powerful representation capacity
which can be used to model complex non-linear mappings between input LR image and output HR image.
Moreover,
the inbuilt self-attention (SA) mechanisms in transformer models
help to
leverage self-similarity prior <cit.>
in input LR image to
improve
performance for SISR.
Nevertheless, because the feature maps in SISR network models have very large spatial sizes and the computational complexity of self-attention is quadratic in the number of attention locations, there is a dilemma between long-range dependency/self-similarity modelling ability and efficiency.
Therefore, how to utilize the powerful representation capacity and self-similarity modelling ability of transformer models effectively and efficiently to improve performance for SISR is an interesting challenge.
In order to
utilize powerful representation capacity and self-similarity modelling ability of transformer models for SISR effectively and efficiently,
we present a novel single image super-resolution
model based on the recent hybrid vision transformer MaxViT <cit.>, named MaxSR, which achieves state-of-the-art performance efficiently, as shown in Figure <ref>.
MaxSR is shown in Figure <ref>, which consists
of four parts: a shallow feature extraction block (SFEB), multiple cascaded adaptive MaxViT blocks, a hierarchical feature fusion block (HFFB), and finally a reconstruction block (RB).
The SFEB uses convolution layers to extract low-level features.
The key component of MaxSR, the adaptive MaxViT block, is based on the MaxViT block, which mixes MBConv with squeeze-and-excitation (SE), block attention and grid attention.
In order to achieve better global modelling of self-similarity in input
LR image, we improve block attention and grid attention in MaxViT block to
adaptive block attention and adaptive grid attention
to form adaptive MaxViT block,
which
has
optimal sub-quadratic complexity
with respect to the spatial size of input feature map
to integrate information from all windows for each grid and integrate information from all grids for each window.
Multiple cascaded adaptive MaxViT blocks are used to extract deep hierarchical features and model global self-similarity in input image
in an efficient way.
The HFFB fuses hierarchical feature maps in the model for reconstruction of output HR image.
The RB uses pixel shuffle layers and a convolutional layer to reconstruct output HR image.
In summary, our main contributions are as follows:
* We present a novel single image super-resolution model based on the recent hybrid vision transformer MaxViT, which utilizes the powerful representation capacity and self-similarity modelling ability of transformer models to improve performance for SISR efficiently.
* We improve
MaxViT block to
adaptive MaxViT block in order to model self-similarity globally in input LR image in a better way, which can integrate information from all windows for each grid
and integrate information from all grids for each window
with an optimal sub-quadratic complexity with respect to the spatial size of input feature map to be efficient and scalable.
* Ablation study demonstrates effectiveness of adaptiveness of MaxViT block, relative position embedding, and adaptive block attention or adaptive grid attention for SISR.
* Experiments show our model establishes new state-of-the-art performance for classical and lightweight single image super-resolution tasks efficiently.
§ RELATED WORK
Since the pioneer work of Dong et al. <cit.> uses a three layer convolutional neural network (SRCNN)
to learn a direct mapping from an input LR image to an output HR image for SISR,
a lot of deep
CNN based methods <cit.> have been proposed to boost performance and efficiency for SISR by utilizing various network design strategies, especially
residual connections and dense connections.
Attention mechanisms are used by deep CNN based methods to focus on more important information to improve performance. Channel attention mechanism was first introduced into a
CNN (RCAN) by Zhang et al. <cit.> to solve SISR. Dai et al. design a second-order attention network (SAN) <cit.> to utilize second-order information to compute attention scores for more powerful feature correlation and feature expression learning, which also utilizes non-local attention to model long-range dependencies. To dynamically select appropriate kernel size to adjust the size of receipt field, Zhang et al. design a kernel attention network <cit.> for SISR. A holistic attention network (HAN) was proposed by Niu et al. <cit.> to model holistic interdependencies between layers, channels and positions.
The re-occurrences of small patches in the same image are demonstrated as a strong image prior in natural images <cit.>, which can be utilized by non-local attention to improve reconstruction performance for SISR <cit.>.
Non-local operations were first introduced by Liu et al. <cit.> into a recurrent neural network (NLRN) to improve parameter efficiency and capture feature correlation for image restoration. To utilize local and non-local attention blocks to attend to challenging parts and capture long-range dependencies, Zhang et al. <cit.> propose residual non-local attention network (RNAN). To utilize cross-scale non-local self-similarity of patches in input image for SISR, Mei et al. <cit.> propose a cross-scale non-local attention network (CSNLN), Zhou et al. <cit.> propose a cross-scale internal graph neural network (IGNN).
To boost reconstruction performance and efficiency, a deep network (NLSN) combining non-local operations and sparse representation was proposed by Mei et al. <cit.>.
Xia et al. <cit.> propose an efficient non-local contrastive attention for image super-resolution (ENLCN) to improve performance for SISR by adopting kernel method to approximate exponential function efficiently and contrastive learning to
separate relevant and irrelevant features.
Because of their content-based filtering ability and long-range dependency modelling ability in contrast to CNNs, transformer models have been used to achieve impressive performance for natural language processing tasks and high-level vision tasks.
Besides the powerful representation capacity of transformer models, the inbuilt self-attention mechanisms of transformer models help to leverage self-similarity prior in nature images to improve performance for SISR.
A pre-trained image processing transformer (IPT)
based on transformer model
was proposed by Chen et al. <cit.> to jointly process different kinds of low-level vision tasks in one framework. Liang et al. propose image restoration using swin transformer (SwinIR) <cit.> based on swin transformer which utilizes
self-attention with shifted windows to enhance long-range dependency modelling ability.
Zhang et al. <cit.> propose an efficient long-range attention network for image super-resolution (ELAN), which utilizes shift convolution and group-wise multi-scale self-attention to improve efficiency and performance for SISR.
§ METHOD
§.§ Network Architecture
The proposed MaxSR, as shown in Figure <ref>, consists of four parts: a shallow feature extraction block (SFEB),
multiple cascaded adaptive MaxViT blocks
(AMTBs),
a hierarchical feature fusion block (HFFB), and finally a reconstruction block (RB).
Let's denote x as the LR input and ŷ as the HR output of the network.
First, we use two 3×3 convolution layers in SFEB to extract low-level features from network input
x,
F_-1 = Conv_3(x),
F_0 = f_SFEB(x) = Conv_3(F_-1),
where f_SFEB denotes the function of SFEB, F_0 denotes the extracted features to be sent to the first adaptive MaxViT block.
Second, we use multiple cascaded adaptive MaxViT blocks (AMTBs) to learn deep hierarchical features and model global self-similarity from low-level features efficiently. Suppose B blocks are stacked, we have
F_b = f_AMTB,b(f_AMTB,b-1(...(f_AMTB,1(F_0))...)),
where
b∈{1,2,...,B},
f_AMTB,b denotes the function of b-th AMTB, F_b denote the output feature maps of the b-th AMTB.
Afterwards, we fuse hierarchical feature maps extracted by the AMTBs and SFEB using HFFB.
In order to fuse hierarchical feature maps extracted by the AMTBs efficiently,
we divide the cascaded B AMTBs evenly into S sequential stages, so each stage has B//S AMTBs (we ensure B is divisible by S and let L=B//S). Then we fuse the feature maps output by the last AMTB of each stage, i.e., we only fuse feature maps every L AMTBs,
H = f_HFFB(F_-1, F_L, F_2L, ..., F_B),
H = Conv_3(Conv_1(Concat(F_L, F_2L, ..., F_B)))+F_-1,
where F_-1 denotes the output of first convolution layer of SFEB, f_HFFB denotes the function of HFFB and H denotes the output of HFFB.
Finally, we use fused features output by HFFB as input to RB to reconstruct the HR image y,
ŷ = g(x) = f_RB(H) = Conv_3(PS(H)),
where PS denotes the function of pixel shuffle layers, f_RB denotes the function of the RB, and g denotes the function of MaxSR.
Given a set of training image pairs {x^(m), y^(m)}^M_m=1, the network is used to minimize the following Mean Absolute Error (MAE) loss:
L(Θ)=1/M∑^M_m=1||y^(m)-ŷ^(m)||_1,
where Θ is the parameters of MaxSR, which includes parameters in SFEB, AMTBs, HFFB and RB.
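To make the data flow just described concrete, the following PyTorch sketch mirrors the SFEB, the cascaded AMTBs with stage-wise feature collection, the HFFB fusion with the skip connection from F_-1, and the pixel-shuffle reconstruction block. It is an illustrative skeleton under the paper's MaxSR setting (W=128, B=16, S=4), not the authors' implementation; the adaptive MaxViT block is passed in as a user-supplied module factory (amtb_block), and a single pixel-shuffle stage is used for simplicity.

```python
import torch
import torch.nn as nn

class MaxSRSketch(nn.Module):
    """Illustrative skeleton of MaxSR: SFEB -> B cascaded AMTBs -> HFFB -> RB."""
    def __init__(self, amtb_block, width=128, num_blocks=16, num_stages=4, scale=4):
        super().__init__()
        assert num_blocks % num_stages == 0
        self.L = num_blocks // num_stages                      # AMTBs per stage
        # SFEB: two 3x3 convolutions producing F_{-1} and F_0
        self.conv_first = nn.Conv2d(3, width, 3, padding=1)
        self.conv_second = nn.Conv2d(width, width, 3, padding=1)
        # Cascaded adaptive MaxViT blocks (module factory supplied by the user)
        self.amtbs = nn.ModuleList([amtb_block(width) for _ in range(num_blocks)])
        # HFFB: 1x1 conv fusing the S stage outputs, then a 3x3 conv, plus skip from F_{-1}
        self.fuse = nn.Sequential(
            nn.Conv2d(width * num_stages, width, 1),
            nn.Conv2d(width, width, 3, padding=1),
        )
        # RB: pixel-shuffle upsampling followed by a 3x3 conv back to RGB
        self.upsample = nn.Sequential(
            nn.Conv2d(width, width * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.conv_last = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, x):
        f_m1 = self.conv_first(x)                              # F_{-1}
        feat = self.conv_second(f_m1)                          # F_0
        stage_outputs = []
        for b, block in enumerate(self.amtbs, start=1):
            feat = block(feat)
            if b % self.L == 0:                                # keep F_L, F_2L, ..., F_B
                stage_outputs.append(feat)
        h = self.fuse(torch.cat(stage_outputs, dim=1)) + f_m1  # HFFB with skip connection
        return self.conv_last(self.upsample(h))                # RB reconstructs the HR image

# Training minimizes the mean absolute error:  loss = nn.L1Loss()(model(x_lr), y_hr)
```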
§.§ Adaptive MaxViT Block
Adaptive MaxViT block is shown in bottom-left of Figure <ref>, which is based on MaxViT block proposed in MaxViT <cit.>. MaxViT block consists of a MB-Conv block <cit.> with squeeze-and-excitation (SE) module <cit.>, a block attention (block-SA)
with fixed attention footage such as 8×8 and a grid attention (grid-SA)
with fixed attention footage such as 8×8 sequentially.
The combination of block attention and grid attention sequentially is called Max-SA in MaxViT <cit.>.
Because Max-SA adopts fixed size of attention locations for block attention and grid attention, it can not integrate information from all windows of block attention for each grid of grid attention and can not integrate information from all grids of grid attention for each window of block attention when the spatial size of input feature map is large enough, which will impair global modelling ability of self-similarity of Max-SA degrading performance for SISR.
In order to integrate information from all windows of block attention
for each grid of grid attention and integrate information from all grids of grid attention
for each window of block attention, we need
a fixed-size block attention
and an adaptive-size grid attention
with respect to the size of input feature map,
e.g., a block attention with a fixed footage of 8×8 and a grid attention with an adaptive size of ⌈H/8⌉×⌈W/8⌉ after padding the input feature map,
or
an adaptive-size block attention
and a fixed-size grid attention
with respect to the size of input feature map,
e.g., a block attention with an adaptive size of ⌈H/8⌉×⌈W/8⌉ and a grid attention with a fixed footage of 8×8 after padding the input feature map,
or
an adaptive-size block attention
and an adaptive-size grid attention
with respect to the size of input feature map,
e.g., a block attention with an adaptive size of
⌈H^1/3⌉×⌈W^1/3⌉
and a grid attention with an adaptive size of
⌈H^1/3⌉^2×⌈W^1/3⌉^2
after padding the input feature map.
If we use an adaptive-size block attention
and a fixed-size grid attention,
or a fixed-size block attention and an adaptive-size grid attention, the complexity of Max-SA will be of quadratic order with respect to the spatial size of the input feature map, which is inefficient. So the choice is to utilize an adaptive-size block attention and an adaptive-size grid attention for Max-SA, and to balance the complexity of the two kinds of self-attention to achieve an optimal sub-quadratic complexity.
In order to integrate information from all windows of block attention for each grid of grid attention and integrate information from all grids of grid attention for each window of block attention meanwhile achieve the lowest complexity, the attention footage of block attention and grid attention need all to be approximately ⌈√(H)⌉×⌈√(W)⌉ adaptive to the size of input feature map.
Based on this analysis,
we propose to utilize
adaptive Max-SA in
adaptive Max-ViT block
to improve performance for SISR, which adopts
balanced adaptive size of attention locations for block attention and grid attention to improve global modelling ability of Max-SA in the most efficient way
inspired by the strategy proposed in HiT <cit.>.
More specifically, given an input feature map F ∈ℝ^H× W× C,
adaptive Max-SA first pads the input feature map into
a tensor of shape [⌈√(H)⌉^2,⌈√(W)⌉^2, C]
by zeros (padding the input feature map into a tensor of shape [⌈√(H)⌉×⌈H/⌈√(H)⌉⌉, ⌈√(W)⌉×⌈W/⌈√(W)⌉⌉, C] will be an efficient approximation),
and divides the padded input feature map into non-overlapping windows of
spatial size of
⌈√(H)⌉×⌈√(W)⌉.
Then adaptive Max-SA performs adaptive block attention within each window of spatial size of ⌈√(H)⌉×⌈√(W)⌉ and adaptive grid attention within each uniform grid of spatial size of ⌈√(H)⌉×⌈√(W)⌉ (performing grid attention within each uniform grid of spatial size of ⌈H/⌈√(H)⌉⌉×⌈W/⌈√(W)⌉⌉ will be an efficient approximation corresponding to above approximate padding).
In this way, adaptive Max-SA can integrate information from all windows of adaptive block attention for each grid
of adaptive grid attention and integrate information from all uniform grids of adaptive grid attention for each window
of adaptive block attention to improve information propagation and global modelling ability of similarity which is beneficial to performance for SISR.
The complexity of adaptive Max-SA is thus of optimal sub-quadratic order, O[(⌈√(H)⌉×⌈√(W)⌉)^2×(⌈√(H)⌉×⌈√(W)⌉)], for our purpose.
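As a rough illustration of the adaptive partitioning described above (using the approximate padding and the approximate ⌈H/⌈√H⌉⌉×⌈W/⌈√W⌉⌉ grid mentioned in parentheses), the sketch below pads a feature map so that its sides split into ⌈√H⌉×⌈√W⌉ windows, runs block attention inside each window, and runs grid attention over the tokens occupying the same within-window position across all windows. The attend callable stands in for any multi-head self-attention over token sequences and is an assumption of this sketch, not part of the paper.

```python
import math
import torch
import torch.nn.functional as F

def adaptive_max_sa(x, attend):
    """x: (B, H, W, C) feature map.  attend: callable mapping (N, L, C) -> (N, L, C).
    Applies adaptive block attention followed by (approximate) adaptive grid attention."""
    B, H, W, C = x.shape
    wh, ww = math.isqrt(H - 1) + 1, math.isqrt(W - 1) + 1   # ceil(sqrt(H)), ceil(sqrt(W))
    nh, nw = -(-H // wh), -(-W // ww)                       # number of windows per side
    # zero-pad so the padded map splits exactly into nh x nw windows of size wh x ww
    x = F.pad(x, (0, 0, 0, nw * ww - W, 0, nh * wh - H))

    # adaptive block attention: tokens inside each wh x ww window
    b = x.reshape(B, nh, wh, nw, ww, C).permute(0, 1, 3, 2, 4, 5)
    b = attend(b.reshape(B * nh * nw, wh * ww, C))
    x = b.reshape(B, nh, nw, wh, ww, C).permute(0, 1, 3, 2, 4, 5).reshape(B, nh * wh, nw * ww, C)

    # adaptive grid attention: one token per window at each within-window position
    g = x.reshape(B, nh, wh, nw, ww, C).permute(0, 2, 4, 1, 3, 5)
    g = attend(g.reshape(B * wh * ww, nh * nw, C))
    x = g.reshape(B, wh, ww, nh, nw, C).permute(0, 3, 1, 4, 2, 5).reshape(B, nh * wh, nw * ww, C)

    return x[:, :H, :W, :]                                  # crop the padding away
```

Since wh·ww ≈ nh·nw ≈ √(HW), each attention call costs on the order of (HW)·√(HW), matching the sub-quadratic complexity stated above.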
Compared to original fixed-size Max-SA proposed in MaxViT <cit.>, which can not integrate information from all windows of fixed size of block attention for each grid of grid attention and can not integrate information from all uniform grids of fixed size of grid attention for each window of block attention when the input image is large enough, adaptive Max-SA has better global modelling ability of self-similarity to improve performance for SISR.
Compared to shifted window based self-attention in SwinIR <cit.>, which can only integrate information from neighbor windows in each execution of self-attention after shifting windows, adaptive Max-SA has
more direct and faster information propagation between different windows or grids
which is helpful to global modelling of self-similarity
to improve performance for SISR.
§ EXPERIMENTS
§.§ Datasets and Evaluation Metrics
We use DIV2K <cit.> and Flickr2K as training sets, and five standard benchmark datasets (Set5, Set14, BSD100, Urban100 and Manga109) for performance evaluation.
For classical image SR, we train our models using DIV2K or DIV2K plus Flickr2K following the different settings in literature.
For lightweight image SR, we train our models using DIV2K plus Flickr2K following literature <cit.>.
HR images and corresponding LR images, downsampled using bicubic interpolation, are provided in the DIV2K and Flickr2K datasets. Two kinds of data augmentation are used during training: 1) rotating the images by 90, 180 or 270 degrees; 2) flipping the images vertically. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used as evaluation metrics. The networks are trained using all three RGB channels and tested on the Y channel of the YCbCr color space of the images.
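Since evaluation is done on the Y channel while training uses RGB, a small helper along the following lines illustrates the metric computation; the BT.601 conversion coefficients and the border cropping (often set to the scale factor) are standard practice assumed here rather than details taken from the paper.

```python
import numpy as np

def rgb_to_y(img):
    """img: float RGB array in [0, 1], shape (H, W, 3); returns the BT.601 luma in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (65.481 * r + 128.553 * g + 24.966 * b + 16.0) / 255.0

def psnr_y(sr, hr, border=4):
    """PSNR between the super-resolved and ground-truth images on the Y channel."""
    sr_y, hr_y = rgb_to_y(sr), rgb_to_y(hr)
    if border > 0:
        sr_y = sr_y[border:-border, border:-border]
        hr_y = hr_y[border:-border, border:-border]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)
```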
§.§ Implementation Details
The MaxSR for classical image SR in experiments has (B=16) adaptive MaxViT blocks, which are divided into (S=4) stages, each stage has (L=4) adaptive MaxViT blocks. The width of network is set to (W=128), except the last 3×3 convolutional layer outputs reconstructed HR color image.
The number of attention heads in adaptive block attention and adaptive grid attention is set to (H=4).
All layers use 3×3 convolutional kernels except 1×1 kernels in adaptive MaxViT blocks and HFFB.
The MaxSR-light for lightweight image SR in experiments reduces adaptive MaxViT blocks to (B=8), the number of adaptive MaxViT blocks in each stage to (L=2), and the width of network to (W=48). The other setting of MaxSR-light is the same as MaxSR.
Mini-batch size is set to 32 following the setting in SwinIR <cit.>. We use 64×64 LR image patches and corresponding HR image patches for 2×, 3×, 4× and 8× scale factors to train our models. At each iteration of training MaxSR and MaxSR-light for each scale factor, we first sample an image pair from all pairs of corresponding HR and LR images uniformly, then sample a training patch pair from the sampled image pair uniformly.
Adam optimizer <cit.> is used to train the network weights.
For the 2× scale factor, the learning rate is initialized to 2×10^-4, and the ×2 model is trained for 500K or 1500K iterations when using DIV2K or DIV2K plus Flickr2K, respectively. The learning rate is step-decayed by a factor of 2 after [250K, 400K, 450K, 475K, 500K] iterations or [750K, 1200K, 1350K, 1425K, 1500K] iterations when using DIV2K or DIV2K plus Flickr2K for training, respectively.
For the 3×, 4× and 8× scale factors, the models are fine-tuned from the 2× model, and the learning rate, the training iterations and the decay iterations are halved.
We implement our MaxSR with PyTorch framework.
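The ×2 training schedule described above (Adam, initial learning rate 2×10^-4, step decay by a factor of 2 at the listed milestones) maps directly onto standard PyTorch utilities; the sketch below assumes the DIV2K-only setting, an L1 objective, and a data loader that cycles indefinitely over random 64×64 LR patches, all of which are illustrative assumptions rather than the authors' code.

```python
import torch

# model and train_loader are assumed to be defined elsewhere
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5)
criterion = torch.nn.L1Loss()

for iteration, (lr_patch, hr_patch) in enumerate(train_loader, start=1):
    optimizer.zero_grad()
    loss = criterion(model(lr_patch), hr_patch)
    loss.backward()
    optimizer.step()
    scheduler.step()              # halves the learning rate at each milestone iteration
    if iteration == 500_000:      # total iterations for the x2 model trained on DIV2K
        break
```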
§.§ Ablation Study
We use models trained using DIV2K for classical image SR to do ablation study.
§.§.§ Effect of Adaptive Block Attention or Adaptive Grid Attention
As shown in Table <ref>, if we replace adaptive block attention by adaptive grid attention in MaxSR and denote the model as Ada-Grid-Attention
or replace adaptive grid attention by adaptive block attention in MaxSR and denote the model as Ada-Block-Attention,
the performance of the models degrades, which demonstrates that adaptive block attention and adaptive grid attention are both indispensable for achieving good performance for SISR.
§.§.§ Effect of Adaptiveness of MaxViT Block and Relative Position Embedding
In this subsection, the models are trained using a big patch size of 128×128 to differentiate adaptive MaxViT block and original fixed-size MaxViT block during training, training batch size is reduced to 8 correspondingly to maintain the same amount of data used in training.
When we adopt the strategy of original fixed-size MaxViT block, the fixed attention footage is set to 8×8.
The model trained and tested using original MaxViT block
is denoted as BP-Fix-SA-Tr-Fix-SA-Te.
The model trained and tested using proposed adaptive MaxViT block is
denoted as BP-Ada-SA-Tr-Ada-SA-Te, i.e., BP-MaxSR.
As shown in Table <ref>, BP-MaxSR has achieved better performance than BP-Fix-SA-Tr-Fix-SA-Te across all datasets,
which demonstrates the effectiveness of adaptive Max-ViT block compared to original fixed-footage Max-ViT block <cit.> for SISR.
We also train a model with the original fixed-footage MaxViT block but test it with the adaptive MaxViT block, denoted as BP-Fix-SA-Tr-Ada-SA-Te. Its performance increases compared to that of BP-Fix-SA-Tr-Fix-SA-Te, demonstrating the effectiveness of the adaptiveness of the MaxViT block even when used only in testing.
As shown in Table <ref>, when we add relative position embedding into the adaptive MaxViT block of BP-MaxSR and denote the model as BP-MaxSR-RPE, it achieves better performance than BP-MaxSR across all datasets. Although utilizing relative position embedding in the adaptive MaxViT block further boosts performance for SISR,
we don't use it for MaxSR in this paper to trade performance for efficiency.
§.§ Results on Classical Image SR
We compare our method with
EDSR <cit.>,
RDN <cit.>, D-DBPN <cit.>, RCAN <cit.>,
RRDB <cit.>,
NLRN <cit.>,
RNAN <cit.>,
SAN <cit.>,
OISR-RK3 <cit.>,
IGNN <cit.>,
HAN <cit.>,
NLSN <cit.>,
IPT <cit.>,
SwinIR <cit.>,
ENLCN <cit.>,
ELAN <cit.>
to validate the effectiveness of MaxSR for classical image SR. We also adopt the self-ensemble strategy <cit.> to improve MaxSR (MaxSR+), following the literature.
The peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) evaluation metrics on five benchmark datasets are shown in Table <ref>.
MaxSR has established new state-of-the-art performance on different combinations of scale factors and benchmark datasets.
Figure <ref> shows visual comparisons. As shown in image barbara from Set14 and image img092 from Urban100, our method reconstructs clearer edges, more textures, fewer artifacts, and less blurring and ringing than other methods.
§.§ Results on Lightweight Image SR
We compare our method with
DRRN <cit.>,
CARN <cit.>,
IDN <cit.>,
IMDN <cit.>,
FALSR-A <cit.>,
FALSR-C <cit.>,
AAF-SD <cit.>,
MAFFSRN <cit.>,
PAN <cit.>,
LAPAR-A <cit.>,
RFDN <cit.>,
AAF-L <cit.>,
A-CubeNet <cit.>,
LatticeNet <cit.>,
SwinIR-light <cit.>,
ELAN-light <cit.>
to validate the effectiveness of MaxSR-light for lightweight image SR.
The performance metrics on five benchmark datasets are shown in Table <ref>.
MaxSR-light has established new state-of-the-art performance
on different combinations of scale factors and benchmark datasets.
Figure <ref> shows that our MaxSR-light achieves better visual results compared to other methods.
§ CONCLUSION
In this paper, we propose a novel single image super-resolution model, named MaxSR, based on the recent hybrid vision transformer MaxViT, to utilize the powerful representation capacity and self-similarity modelling ability of transformer models to boost performance for single image super-resolution.
In order to achieve better long-range dependency modelling ability/self-similarity modelling ability, we further improve MaxViT block to adaptive MaxViT block
which can integrate information from all windows
for each grid
and integrate information from all grids
for each window
to improve information propagation and global modelling of self-similarity
in an efficient way
which is helpful to performance
for SISR.
Experiments demonstrate that the proposed model establishes new state-of-the-art performance for classical image super-resolution (MaxSR) and lightweight image super-resolution (MaxSR-light) efficiently, even without adopting relative position embedding, which can further increase performance for SISR.
|
http://arxiv.org/abs/2307.04060v1 | 20230708233916 | Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory | [
"Yun Soo Myung"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
Double instability of Schwarzschild black holes
in Einstein-Weyl-scalar theory
Yun Soo Myung^a[e-mail address: [email protected]]
^aInstitute of Basic Sciences and Department of Computer Simulation, Inje University,
Gimhae 50834, Korea
We study the stability of Schwarzschild black
hole in Einstein-Weyl-scalar (EWS) theory with a quadratic scalar coupling to the Weyl term.
Its linearized theory admits the Lichnerowicz equation for Ricci tensor as well as scalar equation.
The linearized Ricci tensor carries a regular mass term (m^2_2), whereas the linearized scalar has a tachyonic mass term (-1/m^2_2).
It turns out that the double instability of Schwarzschild black hole in EWS theory is given by Gregory-Laflamme and tachyonic instabilities.
In the small mass regime of m_2<0.876, the Schwarzschild black hole becomes unstable against Ricci-tensor perturbations,
while tachyonic instability is achieved for m_2<1.174. The former would provide a single branch of scalarized black holes, whereas the latter would induce infinite branches of scalarized black holes.
§ INTRODUCTION
Recently, black hole solutions with scalar hair obtained from Einstein-Gauss-Bonnet-scalar (EGBS) theories <cit.> and Einstein-Maxwell-scalar theory <cit.> have received much attention
because they have uncovered easily an evasion of the no-hair theorem <cit.> by introducing a non-minimal (quadratic) scalar coupling function f(ϕ) to Gauss-Bonnet and Maxwell terms.
We note that these scalarized black hole solutions are closely related to the appearance of tachyonic instability for bald black holes.
In these linearized theories, the instability of Schwarzschild black hole is determined solely by the linearized scalar equation where the Gauss-Bonnet term acts as an effective mass term <cit.>, while
the instability of Reissner-Nordström (RN) black hole is given just by the linearized scalar equation where the Maxwell term plays the role of an effective mass term <cit.>.
This is allowed because their linearized Einstein and Einstein-Maxwell equations reduce to those for the linearized Einstein theory around Schwarzschild black hole and the Einstein-Maxwell theory around RN black hole, which turned out to be stable against tensor (metric) and vector-tensor perturbations.
It was well known that a higher curvature gravity (Einstein-Weyl theory) with a mass coupling parameter m^2_2 has provided the non-Schwarzschild black hole solution which crosses the Schwarzschild black hole solution at the bifurcation point of m_2=0.876 <cit.>.
This solution indicates the black hole with non-zero Ricci tensor (R̅_μν≠0), comparing to zero Ricci tensor (R̅_μν=0) for Schwarzschild black hole.
We note that the trace no-hair theorem for Ricci scalar played an important role in obtaining the non-Schwarzschild black hole solution.
It is worth noting that the instability of Schwarzschild black hole was found in the massive gravity theory <cit.> since the Schwarzschild black hole was known to be dynamically stable against tensor perturbations in Einstein theory <cit.>.
In the linearized Einstein-Weyl theory, the instability bound of Schwarzschild black hole was found as m_2<0.876 with r_+=1 when solving the Lichnerowicz equation for the linearized Ricci tensor <cit.>, which is the same equation as the linearized Einstein equation around a (4+1)-dimensional black string where the Gregory-Laflamme (GL) instability appeared firstly <cit.>.
A little difference is that the instability of Schwarzschild black hole arose from the massiveness of m_2≠0 in the Einstein-Weyl theory, whereas the GL instability appeared from the geometry of an extra z dimension in (4+1)-dimensional black string theory. This means that the mass m_2 trades for the extra dimension z.
In the present work, we wish to study two instabilities of Schwarzschild black holes simultaneously by introducing the Einstein-Weyl-scalar theory with a quadratic scalar coupling to Weyl term, instead of Gauss-Bonnet term. In this case, the linearized Ricci-tensor δ R_μν has a regular mass term m^2_2, whereas the linearized scalar δϕ possesses a tachyonic mass term (-1/m^2_2).
The linearized scalar equation around Schwarzschild black hole undergoes tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor reveals GL instability for m_2<0.876.
We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits a single branch (m_2≠0) of scalarized black holes.
This means that their role of the mass term are quite different for producing scalarized black holes.
§ EINSTEIN-WEYL-SCALAR (EWS) THEORY
We introduce the EWS theory defined by
S_ EWS=1/16 π∫ d^4 x√(-g)[ R-2∂_μϕ∂^μϕ-f(ϕ)/2m^2_2 C^2],
where f(ϕ)=1+ϕ^2 is a quadratic scalar coupling function, m_2^2 denotes a mass coupling parameter, and C^2 represents the Weyl term (Weyl scalar invariant) given by
C^2(≡ C_μνρσC^μνρσ)=2(R_μνR^μν-R^2/3)+ R_ GB^2
with the Gauss-Bonnet term R_ GB^2=R^2-4R_μνR^μν+R_μνρσR^μνρσ. In the limit of m_2^2→∞, the Weyl term decouples and the theory reduces to the tensor-scalar theory.
We wish to emphasize that scalar couplings to Gauss-Bonnet term were mostly used to find black holes with scalar hair within EGBS theory because it provides an effective mass term for a linearized scalar without modifying metric perturbations <cit.>. This is so because the Gauss-Bonnet term is a topological term in four dimensions.
Actually, the Weyl term is similar to the Maxwell term (F^2) because both they are conformally invariant and their variations with respect to g_μν are traceless.
From the action (<ref>), we derive the Einstein equation
G_μν=2∂ _μϕ∂ _νϕ -(∂ϕ)^2g_μν+2(1+ϕ^2)B_μν/m^2_2-Γ_μν/m^2_2,
where G_μν=R_μν-(R/2)g_μν is the Einstein tensor.
Here, B_μν (B^μ _μ=0) coming from the first part of (<ref>) is the Bach tensor defined as
B_μν = R_μρνσR^ρσ-g_μν/4 R_ρσR^ρσ- R/3(R_μν-g_μν/4R)
+ 1/2(∇^2R_μν-g_μν/6∇^2 R-1/3∇_μ∇_ν R)
and Γ_μν is given by
Γ_μν = -4/3R∇_(μΨ_ν)-∇^αΨ_α(3R_μν-4g_μν/3R)+ 6R_(μ|α|∇^αΨ_ν)
- 3 R^αβ∇_αΨ_β g_μν
+4R^β_ μαν∇^αΨ_β
with
Ψ_μ= 2ϕ∂_μϕ.
Its trace is not zero as Γ^μ _μ=R∇^ρΨ_ρ-2R^ρσ∇_ρΨ_σ.
Importantly, the scalar equation is given by
∇^2 ϕ +C^2/4m^2_2ϕ=0 .
Considering ϕ̅=0, the Schwarzschild solution is found from Eqs.(<ref>) and (<ref>) as
ds^2_ SBH= g̅_μνdx^μ dx^ν=-(1-r_+/r)dt^2+dr^2/(1-r_+/r)+r^2dΩ^2_2
with horizon radius r_+=2M. This Schwarzschild background gives us R̅_μνρσ≠0, R̅_μν=0, and R̅=0.
In this case, one finds easily that C̅^2=R̅_μνρσR̅^μνρσ=12r_+^2/r^6=R̅^2_ GB.
§ DOUBLE INSTABILITY FOR SCHWARZSCHILD BLACK HOLE
For the stability analysis of Schwarzschild black hole, we need the two linearized equations which describe the metric perturbation h_μν in (g_μν=g̅_μν+h_μν) and scalar perturbation δϕ in (ϕ=0+δϕ) propagating around (<ref>). They are obtained by linearizing Eqs.(<ref>) and (<ref>) as
∇̅^2δ G_μν+2R̅_μρνσδ G^ρσ-1/3(∇̅_μ∇̅_ν-g̅_μν∇̅^2)δ R-m^2_2 δ
G_μν=0 ,
(∇̅^2+ 3r_+^2/m^2_2r^6)δϕ= 0
with δ G_μν=δ R_μν-δ R g̅_μν/2 the linearized Einstein tensor. Here, we note that `m^2_2' in Eq.(<ref>) is regarded as a regular mass term, while `3r_+^2/m^2_2r^6' in Eq.(<ref>) corresponds to a tachyonic mass term for m^2_2>0.
Taking the trace over Eq.(<ref>) leads to
m^2_2 δ R=0,
which implies the non-propagation of a linearized Ricci scalar as
δ R=0.
We confirm Eq.(<ref>) by linearizing R=2(∂ϕ)^2+Γ^μ _μ/m^2_2.
This non-propagation of linearized scalar plays an important role in obtaining a linearized theory of the EWS theory.
Plugging Eq.(<ref>) into Eq.(<ref>), one finds
the Lichnerowicz-Ricci tensor equation for the traceless and transverse Ricci tensor δ R_μν as
(Δ̅_ L+m^2_2 ) δ R_μν=0,
where the Lichnerowicz operator on the Schwarzschild background is given by
Δ̅_ Lδ R_μν=-∇̅^2δ R_μν-2R̅_μρνσδ R^ρσ.
Here, we consider m^2_2>0 for non-tachyonic case.
Actually, Eq.(<ref>) describes a massive spin-2 mode (δ R_μν) with mass m_2 propagating on the Schwarzschild black hole background.
Let us solve the Lichnerowicz-Ricci tensor equation (<ref>) by adopting δ R_μν(t, x)=e^Ω tδR̃_μν( x).
Its s(l=0)-mode in polar sector satisfies the Schrödinger-type equation when introducing a tortoise coordinate r_*=∫[dr/(1-r_+/r)]
d^2δR̃^l=0_μν/dr^2_*-[Ω^2+V_ Z(r)]δR̃^l=0_μν=0,
where the Zerilli potential V_ Z(r) is given by <cit.>
V_ Z(r)=(1-r_+/r)[m^2_2 +r_+/r^3-12m^2_2r_+(r-0.5r_+)+6m^4_2r^3(2r_+-r)/(r_++m^2_2r^3)^2].
As is shown in (Left) Fig. 1, all potentials with m_2≠0 induce negative region near the horizon, while their asymptotic forms are given by m^2_2>0.
The negative region becomes wide and deep as the mass parameter m_2 decreases, implying GL instability of the Schwarzschild black hole.
In case of m_2=0, however, there is no GL instability because its potential V_ Z(r) is positive definite outside the horizon.
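The behaviour of the potential can be checked directly by evaluating the expression for V_Z above; the short numpy sketch below (with r_+ = 1) locates the negative well near the horizon for m_2 ≠ 0 and confirms that the potential is positive definite when m_2 = 0. It is only an illustration of the statement above, not a stability analysis by itself.

```python
import numpy as np

def V_Z(r, m2, rp=1.0):
    """Zerilli-type potential for the s-mode of the linearized Ricci tensor."""
    f = 1.0 - rp / r
    num = 12.0 * m2**2 * rp * (r - 0.5 * rp) + 6.0 * m2**4 * r**3 * (2.0 * rp - r)
    return f * (m2**2 + rp / r**3 - num / (rp + m2**2 * r**3) ** 2)

r = np.linspace(1.001, 10.0, 4000)
for m2 in (0.5, 0.876, 1.2):
    v = V_Z(r, m2)
    print(f"m2 = {m2}: min V_Z = {v.min():.4f} at r = {r[v.argmin()]:.3f}")
# For m2 = 0 the potential reduces to (1 - r_+/r) * r_+/r**3, which is positive for r > r_+.
```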
Solving Eq.(<ref>) numerically with appropriate boundary conditions, one finds the GL instability bound from (Left) Fig. 2 as
0<m_2<m_2^ th=0.876, for r_+=1,
where m_2^ th denotes threshold of GL instability. It is important to note that this bound is found in the EWS theory, but there is no such bound in the EGBS theory.
In the study of the instability for the Euclidean Schwarzschild black hole together with Einstein gravity, Gross, Perry, and Yaffe have found that there is just one normalizable negative-eigenvalue mode of the Lichnerowicz
operator [(Δ^ E_ L-λ_ GPY)h_μν=0] <cit.>. This connection could be realized from Eq.(<ref>) because when one considers δ R_μν=Δ̅_ Lh_μν/2
for ∇̅^μ h_μν=0 and h^μ _μ=0, Eq.(<ref>) implies that Δ̅_ Lh_μν=0 or (Δ̅_ L+m^2_2)h_μν=0.
Its eingenvalue is given by λ_ GPY[=-(m_2^ th)^2]=-0.768/r_+^2 which was noted in the early study of Schwarzschild black hole within higher curvature gravity <cit.>. Indeed, λ_ GPY is related to the thermodynamic instability of negative heat capacity C=-2π r_+^2 for Schwarzschild black hole in canonical ensemble.
On the other hand, we focus on the linearized scalar equation (<ref>) which is the same form as found in the linearized EGBS theory.
Considering
δϕ(t,r,θ,φ)=u(r)/re^-iω tY_lm(θ,φ),
the radial equation for s(l=0)-mode scalar leads to the Schrödinger-type equation
d^2u/dr_*^2+[ω^2-V_ S(r)]u(r)=0,
where the scalar potential V_ S(r) is given by
V_ S(r)=(1-r_+/r)[r_+/r^3-3r_+^2/m^2_2r^6],
where the last term corresponds to a tachyonic mass term.
Considering ∫^∞_r_+ dr [V_ S(r)/(1-r_+/r)]<0,
one may introduce a sufficient condition of tachyonic instability for a mass parameter m_2 <cit.>
m^2_2r_+^2<12/10⇒ m_2<m_2^ sc=1.095/r_+.
However, Eq.(<ref>) is not a necessary and sufficient condition for tachyonic instability.
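The quoted sufficient condition follows from an elementary integral; a short symbolic check (sympy, an illustrative choice) that ∫_{r_+}^∞ dr V_S/(1-r_+/r) < 0 reduces to m^2_2 r_+^2 < 12/10 is sketched below.

```python
import sympy as sp

r, rp, m2 = sp.symbols('r rp m2', positive=True)
integrand = rp / r**3 - 3 * rp**2 / (m2**2 * r**6)   # V_S / (1 - r_+/r)
I = sp.integrate(integrand, (r, rp, sp.oo))
print(sp.simplify(I))                  # 1/(2*rp) - 3/(5*m2**2*rp**3)
# I < 0  <=>  m2**2 * rp**2 < 6/5, i.e. m2 < 1.095/rp as quoted above.
print(sp.solve(sp.Eq(I, 0), m2))       # threshold value of m2 for a given rp
```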
Observing (Right) Fig. 1, one finds that the negative region becomes wide and deep as the mass parameter m_2 decreases, implying tachyonic instability of the Schwarzschild black hole.
To determine the threshold of tachyonic instability, one has to solve the second-order differential equation (<ref>) with ω=iΩ numerically,
which may allow an exponentially growing mode of e^Ω t as an unstable mode.
In this case, we choose two boundary conditions: a normalizable
solution of u(∞)∼ e^-Ω r_* at infinity and
a solution of u(r_+)∼(r-r_+)^Ω r_+ near the horizon.
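A minimal shooting-method sketch of this numerical step is given below: the radial equation is integrated in r (using d/dr_* = f d/dr with f = 1 - r_+/r), starting from the near-horizon behaviour u ∝ (r - r_+)^{Ω r_+}, and Ω is bisected on the sign of u at a large radius so that only the decaying solution survives at the eigenvalue. The outer radius, tolerances, and bisection bracket are illustrative choices, not the settings used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

rp = 1.0                                        # horizon radius r_+

def V_S(r, m2):
    return (1.0 - rp / r) * (rp / r**3 - 3.0 * rp**2 / (m2**2 * r**6))

def shoot(Omega, m2, r_max=60.0, eps=1e-4):
    """Integrate f d/dr(f du/dr) = (Omega^2 + V_S) u outward from r_+ + eps."""
    def rhs(r, y):
        u, v = y                                # v = f du/dr
        f = 1.0 - rp / r
        return [v / f, (Omega**2 + V_S(r, m2)) * u / f]
    r0 = rp + eps
    u0 = eps**(Omega * rp)                      # u ~ (r - r_+)^(Omega r_+) near the horizon
    v0 = Omega * rp * u0 / r0
    sol = solve_ivp(rhs, (r0, r_max), [u0, v0], rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]                         # u(r_max): changes sign at an eigenvalue

def growth_rate(m2, lo=1e-3, hi=1.0):
    """Bisect Omega of the unstable mode; returns None if no sign change is bracketed."""
    if np.sign(shoot(lo, m2)) == np.sign(shoot(hi, m2)):
        return None
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.sign(shoot(mid, m2)) == np.sign(shoot(lo, m2)):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for m2 in (0.5, 1.0, 1.5):
    print(m2, growth_rate(m2))    # a positive Omega is expected only below the threshold
```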
By observing (Right) Fig. 2 together with r_+=1, we read off the
bound for tachyonic instability as
m_2<m_2^ sth=1.174
which implies that the threshold of tachyonic instability is 1.174, which is greater than 1.095 (the sufficient-condition bound for tachyonic instability).
This corresponds to a bifurcation point between Schwarzschild and n=0 branch of scalarized black holes. In the limit of m^2_2 → 0, one has an infinitely negative potential which implies a large Ω as seen from (Right) Fig. 2.
Finally, we obtain an inequality bound for threshold of GL and tachyonic instabilities as
m_2^ th<m_2^ sth.
However, we remind the reader that the linearized Ricci tensor δ R_μν carries a regular mass term (m^2_2), whereas the linearized scalar δϕ has a tachyonic mass term (-1/m^2_2).
In this sense, the GL instability is quite different from the tachyonic instability <cit.>.
§ DISCUSSIONS
In this work, we have investigated two instabilities of Schwarzschild black holes simultaneously by introducing the EWS theory with a quadratic scalar coupling to Weyl term. Here, the linearized Ricci-tensor has a regular mass term (m^2_2), whereas the linearized scalar possesses a tachyonic mass term (-1/m^2_2).
The linearized scalar equation around black hole indicates tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor shows GL instability for m_2<0.876.
This suggests that their mass terms play different roles for generating scalarized black holes because the GL instability is quite different from the tachyonic instability.
We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits single branch (m_2>0) of scalarized black holes.
Now, we would like to mention the non-Schwarzschild black hole solutions obtained from the Einstein-Weyl theory (ϕ=0 EWS theory with m_2^2>0). This solution can be obtained numerically by requiring the no-hair theorem for Ricci scalar (R=0) <cit.>.
Actually, it corresponds to single branch of non-Schwarzschild black holes with Ricci-tensor hair <cit.>. Recently, it was shown that the long-wave length instability bound for non-Schwarzschild black holes is given by m_2<0.876 <cit.>, which is the same bound as the GL instability for Schwarzschild black hole <cit.>, but it contradicts to the conjecture from black hole thermodynamics addressed in <cit.>. We expect that a single branch of non-Schwarzschild black holes with Ricci-tensor and scalar hairs would be found from the EWS theory with f(ϕ)=1+ϕ^2.
On the other hand, we consider the scalar equation (<ref>) with tachyonic mass. From its static equation with ω=0, we obtain an infinite spectrum of parameter m_2 : m_2∈ [1.174=m_2^ sth, 0.453, 0.280, 0.202, · · ·], which defines infinite branches of scalarized black holes: n=0((0,1.174]), n=1((0,0.453]), n=2((0,0.28]), n=3((0,0.202]),⋯. Also, n=0, 1, 2, 3,⋯ are identified with the number of nodes for δϕ(z) = u(z)/z profile.
Thus, it is expected that infinite branches (n=0, 1, 2, 3,⋯) of black hole with scalar hair would be found when solving Eqs.(<ref>) and (<ref>) numerically.
However, this computation seems not to be easy because Eq.(<ref>) includes fourth-order derivatives and its Ricci scalar is not zero (R=2(∂ϕ)^2+Γ^μ _μ/m^2_2).
We wish to introduce a conventional case of f(ϕ)=ϕ^2 quadratic coupling function. In this case, there is no GL instability because the Bach tensor-term does not contribute to the linearized Einstein equation (<ref>).
Here, the linearized EWS theory reduces to the linearized EGBS theory which provides n=0 band with bandwidth
of 1.174 < m_2 < 1.272 <cit.>. This band of black holes with scalar hair is unstable against radial perturbations <cit.>. This is reason why we choose the EWS theory with the quadratic coupling function f(ϕ)=1+ϕ^2.
Finally, for the EWS theory with a quartic coupling function f(ϕ)=(1-e^-κϕ^4)/4κ <cit.>, the linearized scalar equation leads to ∇̅^2δϕ=0, which implies that there is no tachyonic instability. Also, its linearized Einstein equation is given by δ G_μν=0 which indicates that there is no GL instability. In this quartic coupling case, the linearized EWS theory reduces to the linearized EGBS theory, showing tachyonic stability. Without tachyonic instability, one expects to have a single branch of nonlinearly scalarized black holes but not infinite branches of scalarized black holes.
Acknowledgments
The author thanks De-Cheng Zou for helpful discussions.
Antoniou:2017acq
G. Antoniou, A. Bakopoulos and P. Kanti,
Phys. Rev. Lett. 120, no.13, 131102 (2018)
doi:10.1103/PhysRevLett.120.131102
[arXiv:1711.03390 [hep-th]].
Doneva:2017bvd
D. D. Doneva and S. S. Yazadjiev,
Phys. Rev. Lett. 120, no.13, 131103 (2018)
doi:10.1103/PhysRevLett.120.131103
[arXiv:1711.01187 [gr-qc]].
Silva:2017uqg
H. O. Silva, J. Sakstein, L. Gualtieri, T. P. Sotiriou and E. Berti,
Phys. Rev. Lett. 120, no.13, 131104 (2018)
doi:10.1103/PhysRevLett.120.131104
[arXiv:1711.02080 [gr-qc]].
Herdeiro:2018wub
C. A. R. Herdeiro, E. Radu, N. Sanchis-Gual and J. A. Font,
Phys. Rev. Lett. 121, no.10, 101102 (2018)
doi:10.1103/PhysRevLett.121.101102
[arXiv:1806.05190 [gr-qc]].
Bekenstein:1995un
J. D. Bekenstein,
Phys. Rev. D 51, no.12, R6608 (1995)
doi:10.1103/PhysRevD.51.R6608
Myung:2018iyq
Y. S. Myung and D. C. Zou,
Phys. Rev. D 98, no.2, 024030 (2018)
doi:10.1103/PhysRevD.98.024030
[arXiv:1805.05023 [gr-qc]].
Myung:2018vug
Y. S. Myung and D. C. Zou,
Eur. Phys. J. C 79, no.3, 273 (2019)
doi:10.1140/epjc/s10052-019-6792-6
[arXiv:1808.02609 [gr-qc]].
Lu:2015cqa
H. Lu, A. Perkins, C. N. Pope and K. S. Stelle,
Phys. Rev. Lett. 114, no.17, 171601 (2015)
doi:10.1103/PhysRevLett.114.171601
[arXiv:1502.01028 [hep-th]].
Babichev:2013una
E. Babichev and A. Fabbri,
Class. Quant. Grav. 30, 152001 (2013)
doi:10.1088/0264-9381/30/15/152001
[arXiv:1304.5992 [gr-qc]].
Brito:2013wya
R. Brito, V. Cardoso and P. Pani,
Phys. Rev. D 88, no.2, 023514 (2013)
doi:10.1103/PhysRevD.88.023514
[arXiv:1304.6725 [gr-qc]].
Regge:1957td
T. Regge and J. A. Wheeler,
Phys. Rev. 108, 1063-1069 (1957)
doi:10.1103/PhysRev.108.1063
Zerilli:1970se
F. J. Zerilli,
Phys. Rev. Lett. 24, 737-738 (1970)
doi:10.1103/PhysRevLett.24.737
Myung:2013doa
Y. S. Myung,
Phys. Rev. D 88, no.2, 024039 (2013)
doi:10.1103/PhysRevD.88.024039
[arXiv:1306.3725 [gr-qc]].
Gregory:1993vy
R. Gregory and R. Laflamme,
Phys. Rev. Lett. 70, 2837-2840 (1993)
doi:10.1103/PhysRevLett.70.2837
[arXiv:hep-th/9301052 [hep-th]].
Lu:2017kzi
H. Lü, A. Perkins, C. N. Pope and K. S. Stelle,
Phys. Rev. D 96, no.4, 046006 (2017)
doi:10.1103/PhysRevD.96.046006
[arXiv:1704.05493 [hep-th]].
Gross:1982cv
D. J. Gross, M. J. Perry and L. G. Yaffe,
Phys. Rev. D 25, 330-355 (1982)
doi:10.1103/PhysRevD.25.330
Whitt:1985ki
B. Whitt,
Phys. Rev. D 32, 379 (1985)
doi:10.1103/PhysRevD.32.379
Held:2022abx
A. Held and J. Zhang,
Phys. Rev. D 107, no.6, 064060 (2023)
doi:10.1103/PhysRevD.107.064060
[arXiv:2209.01867 [gr-qc]].
Stelle:2017bdu
K. S. Stelle,
Int. J. Mod. Phys. A 32, no.09, 1741012 (2017)
doi:10.1142/S0217751X17410123
Blazquez-Salcedo:2018jnn
J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev,
Phys. Rev. D 98, no.8, 084011 (2018)
doi:10.1103/PhysRevD.98.084011
[arXiv:1805.05755 [gr-qc]].
Doneva:2021tvn
D. D. Doneva and S. S. Yazadjiev,
Phys. Rev. D 105, no.4, L041502 (2022)
doi:10.1103/PhysRevD.105.L041502
[arXiv:2107.01738 [gr-qc]].
Blazquez-Salcedo:2022omw
J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev,
Phys. Rev. D 105, no.12, 124005 (2022)
doi:10.1103/PhysRevD.105.124005
[arXiv:2203.00709 [gr-qc]].
Lai:2023gwe
M. Y. Lai, D. C. Zou, R. H. Yue and Y. S. Myung,
[arXiv:2304.08012 [gr-qc]].
|
http://arxiv.org/abs/2307.04443v1 | 20230710095228 | Search-time Efficient Device Constraints-Aware Neural Architecture Search | [
"Oshin Dutta",
"Tanu Kanvar",
"Sumeet Agarwal"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
Indian Institute of Technology
{oshin.dutta,sumeet}@ee.iitd.ac.in, [email protected]
Search-time Efficient Device Constraints-Aware Neural Architecture Search
Oshin Dutta Tanu Kanvar Sumeet Agarwal
=========================================================================
Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Creating manual architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints such as model size and floating-point operations. It incorporates weight sharing and channel bottleneck techniques to speed up the search time. Based on our experiments, we see that DCA-NAS outperforms manual architectures for similar sized models and is comparable to popular mobile architectures on various image classification datasets like CIFAR-10, CIFAR-100, and Imagenet-1k. Experiments with search spaces—DARTS and NAS-Bench-201 show the generalization capabilities of DCA-NAS. On further evaluating our approach on Hardware-NAS-Bench, device-specific architectures with low inference latency and state-of-the-art performance were discovered.
§ INTRODUCTION
In recent years, there has been significant progress in developing Deep Neural Network (DNN) architectures <cit.> for edge and mobile devices. However, designing DNN architectures for specific hardware constraints and tasks is a time-consuming and computationally expensive process <cit.>. To address this, Neural Architecture Search (NAS) <cit.> has become popular, as it discovers optimal architectures given a task and a set of network operations. Despite its success, traditional NAS techniques cannot guarantee optimal architectures for specific devices with hardware constraints such as storage memory and maximum supported FLOPs.
To address this concern, researchers have developed hardware-aware algorithms <cit.> that find optimal device architectures with low resource training overhead and search time. These methods often use inference latency <cit.>, FLOPs <cit.> or a combination of hardware metrics <cit.> as constraints scaled by a tunable factor. However, the time to tune the scaling factor is often not considered within the NAS search time and can be ten times the reported search time.
To address these issues, we propose the Device Constraints-Aware NAS (DCA-NAS), a principled differentiable NAS method that introduces the total allowable model size or floating-point operations (FLOPs) as constraints within the optimization problem, with minimal hyper-parameter tuning. Unlike inference latency, which is task-dependent, FLOPs and memory are specified for a given hardware device and are thus appropriate for our generic method. The approach is adaptable to other hardware metrics such as energy consumption or inference latency using additional metric-measuring functions.
The paper makes the following significant contributions:
* It introduces a fast method that uses weight sharing among operations in the search space and channel bottleneck, along with a differentiable resource constraint, for continuous exploration of the search space.
* A training pipeline that allows a user to input device memory or FLOPs and search for optimal architecture with minimal hyper-parameter tuning.
* Our extensive experiments on vision datasets (CIFAR-10, CIFAR-100, TinyImagenet, Imagenet-1k) and inference-latency comparisons of trained models on Hardware-NAS-Bench demonstrate the efficiency of our method. The generalization of our method to different search spaces is shown with experiments on DARTS and NAS-Bench.
§ RELATED WORK
Neural Architecture Search
Popular approaches <cit.> designed architectures for high performance on specific tasks or datasets with the traditional deep learning perspective that bigger is better, resulting in computationally and memory-intensive inference on edge devices. Network pruning <cit.>, channel removal <cit.> and weight/activation quantization <cit.> can compress architectures, but require pre-training and hyperparameter tuning, and often lack transferability. Neural Architecture Search (NAS) methods such as Reinforcement Learning <cit.>, Evolutionary Learning <cit.> and Differentiable Neural Architecture Search (DNAS) <cit.> can automatically search for architectures without user intervention, and can transfer across similar tasks. DNAS with surrogate metrics <cit.> has also been used to explore the architecture search space. However, architectures found by DNAS methods are not optimized for deployment on edge devices, and smaller models obtained by reducing layers or channels are often sub-optimal.
Hardware-aware Neural Architecture search
Certain NAS methods optimize <cit.> for constraints such as latency, inference speed <cit.>, FLOPS <cit.>, memory usage <cit.>. Some use a separate DNN to predict constraint metrics and evolutionary search to obtain hardware-aware optimal models <cit.>, while others consider real-time latencies of edge devices or provide specific architectures for specific devices <cit.>. However, these methods require significant search time and tuning of scaling factors controlling the trade-off between the performance and the constraint, and do not always account for optimal architectures. In contrast, we use a differentiable hardware-aware objective function with generic hardware metrics, and do not require a tunable scaling factor.
Certain methods <cit.> train a supernet first and then search for a smaller architecture, but this is only efficient when there are more than fifteen different edge devices with different limitations or deployment scenarios <cit.>, as training the supernet requires substantial resources (about 1,200 GPU hours on 32 V100s). A search stage followed by evaluation, as done in our approach, is more efficient when the number of possible edge devices is fewer than fifteen.
§ DCA-NAS: DEVICE CONSTRAINTS AWARE FAST NEURAL ARCHITECTURE SEARCH
We present the preliminary gradient-based NAS objective function in section <ref> and then formulate the problem of incorporating the hardware-awareness in NAS as a constrained optimization problem in section <ref> followed by techniques to reduce the search time in section <ref>. The framework of our approach is illustrated in Figure <ref>.
Notation
α_o^i, j: the architecture parameter for operation o between a pair of nodes (i, j).
b(o): the number of learnable parameters or the FLOPs required by the operation o.
w: the learnable weights of the operations.
K_d: the resource constraint of the device given as input to the algorithm.
K_d^': the constraint metric derived from the look-up graph.
λ: the Lagrange multiplier for solving the constrained optimization that incorporates model size or FLOPs as constraints.
§.§ Gradient-based NAS Objective Function
Popular DNAS techniques <cit.> have two stages, the search phase and the evaluation phase. During the search phase, given a task or a dataset the techniques search for a network of cells, which are directed acyclic graphs with N nodes. The edges of the graph are network layers, whose operations are to be selected from a pre-defined set 𝒪 containing operations such as 3x3 separable convolution and identity operations with trainable weights w_o.
The search is made differentiable by making the choice of a particular operation to be a softmax of architecture weights α of all operations. Thus, the intermediate output z_j at node j is given by,
z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,j,𝐳_i)
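As an illustration of this continuous relaxation, the following PyTorch sketch evaluates the softmax-weighted mixture of candidate operations on a single edge (i, j). It is a minimal sketch rather than the released implementation: the candidate operation set, channel count, and tensor sizes are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_ops(channels):
    # Hypothetical candidate set O for one edge; the real search space is larger.
    return nn.ModuleList([
        nn.Identity(),                                            # skip connection
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # 3x3 convolution
        nn.MaxPool2d(3, stride=1, padding=1),                     # 3x3 max pooling
    ])

class MixedEdge(nn.Module):
    """Softmax-weighted mixture of candidate operations on edge (i, j)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_ops(channels)
        # One architecture parameter alpha_o^{i,j} per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, z_i):
        weights = F.softmax(self.alpha, dim=0)   # exp(alpha_o) / sum_o' exp(alpha_o')
        return sum(w * op(z_i) for w, op in zip(weights, self.ops))

edge = MixedEdge(channels=16)
z_i = torch.randn(2, 16, 8, 8)      # feature map coming from node i
print(edge(z_i).shape)              # contribution of edge (i, j) to node j
```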
§.§ DCA-NAS formulation
Previous DNAS approaches <cit.> did not focus on searching architectures specifically for inference on resource-constrained devices. In contrast, we formulate the DNAS objective function as a constrained optimization problem by incorporating device resource constraints (memory or FLOPs) in the search objective function. The constrained bi-level optimization problem is written as,
min_α ℒ_val(w^*(α), α)
s.t. w^*(α) = argmin_w ℒ_train(w, α)
     k_s(α) ≤ K_d
where the training dataset is split into train and val to optimize w and α simultaneously in each iteration, subject to the constraint that the architecture's number of parameters or FLOPs k_s must be less than or equal to the device resource constraint K_d. The following equation calculates the architecture's number of parameters or FLOPs during search given the number of cells c_n. Our method can also be adapted to use other metrics such as latency and energy consumption with additional metric-measuring functions.
k_s(α)= c_n∑_(i,j)∈ N∑_o ∈𝒪exp{α_o^i, j} * b(o)/∑_o^'∈𝒪exp{α_o^'^i, j}
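The expected model size k_s(α) can be evaluated differentiably as a softmax-weighted sum of per-operation costs, scaled by the number of cells. The sketch below is illustrative only; the edge count, candidate set, and cost values b(o) are assumed for the example and do not correspond to the actual search space.

```python
import torch
import torch.nn.functional as F

def expected_size(alphas, op_costs, num_cells):
    """Differentiable estimate k_s(alpha).

    alphas:   (num_edges, num_ops) architecture parameters
    op_costs: (num_ops,) cost b(o) of each operation (parameters or FLOPs)
    """
    weights = F.softmax(alphas, dim=-1)           # softmax over operations on each edge
    per_edge = (weights * op_costs).sum(dim=-1)   # expected cost of each edge
    return num_cells * per_edge.sum()             # scaled by the number of stacked cells

# Illustrative numbers: 14 edges per cell and 3 candidate operations with made-up costs.
alphas = torch.zeros(14, 3, requires_grad=True)
op_costs = torch.tensor([0.0, 2304.0, 0.0])       # e.g. identity, 3x3 conv, pooling
k_s = expected_size(alphas, op_costs, num_cells=8)
k_s.backward()                                    # gradients flow back into the alphas
print(float(k_s), alphas.grad.shape)
```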
§.§.§ Tackling the difference in search and evaluation networks
The size of the architecture in the search phase k_s is different from the architecture size in evaluation phase due to the softmax weighting factor in equation <ref> (demonstration can be found in the appendix). To address this, we introduce a tighter bound on the search constraint K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be made for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running the searches in parallel. Thus, on incorporating the tighter constraint by looking-up the graph for the given device resource constraint K_d along with the trainable Lagrange multiplier λ in Equation <ref>, the objective function is re-written as,
ℒ = ℒ_val(w^*(α), α) + λ (k_s(α) - LUG(K_d))
s.t. w^*(α) = argmin_w ℒ_train(w, α)
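One possible implementation of the constrained architecture update is sketched below. Treating λ with a gradient-ascent step and clamping it to be non-negative is our assumption about how a trainable Lagrange multiplier could be handled; the LUG look-up is replaced by a precomputed constant and the validation loss by a stand-in value.

```python
import torch
import torch.nn.functional as F

def expected_size(alphas, op_costs, num_cells):
    """Differentiable k_s(alpha): softmax-weighted operation costs summed over edges."""
    return num_cells * (F.softmax(alphas, dim=-1) * op_costs).sum()

alphas   = torch.zeros(14, 3, requires_grad=True)   # architecture parameters (14 edges, 3 ops)
op_costs = torch.tensor([0.0, 2304.0, 0.0])         # assumed per-operation costs b(o)
lam      = torch.zeros(1, requires_grad=True)       # trainable Lagrange multiplier
lug_kd   = torch.tensor(10000.0)                    # tighter bound K_d' taken from the look-up graph
opt      = torch.optim.Adam([alphas, lam], lr=6e-4)

def architecture_step(val_loss):
    """One architecture update: descend on alpha, ascend on the multiplier lambda."""
    loss = val_loss + lam * (expected_size(alphas, op_costs, num_cells=8) - lug_kd)
    opt.zero_grad()
    loss.backward()
    lam.grad.neg_()                 # flip the sign so that the optimizer ascends on lambda
    opt.step()
    with torch.no_grad():
        lam.clamp_(min=0.0)         # keep the multiplier non-negative
    return loss.item()

# Stand-in for L_val(w*(alpha), alpha) computed on the validation split.
print(architecture_step(val_loss=torch.tensor(0.7)))
```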
§.§ Techniques to reduce search time
Channel Bottleneck. We use convolutional layers with 1x1 kernels to reduce the depth of output channels of operations in the search space, saving computation time and memory overhead.
Derived Cell and Weight Sharing. During architecture search, only one cell with trainable α is used to optimize architecture parameters. The target network for inference is built by stacking cells with architectures derived from the highest-weighted operations. This can be done during search by deriving the other cell architectures from the first at each iteration <cit.>. The arrangement of the cells for search is given in the appendix. This derived cell saves computation and memory overhead. A weight-sharing strategy <cit.> among identical operations originating from the same node i to all nodes i<j<N has been applied within a cell. This is motivated by the observation that non-parametric operations acting on the representation of a node produce the same feature map irrespective of the output node; the same sharing is then extended to parametric operations. Thus, Equation <ref> may be re-written as follows,
z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,𝐳_i)
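The effect of this weight-sharing scheme is that operation weights are indexed by the source node only. The sketch below, again an illustration rather than the authors' code, reuses one module per (source node, operation) pair for every outgoing edge, so the produced feature map is identical for all destination nodes.

```python
import torch
import torch.nn as nn

C, N = 16, 6                        # channels and nodes per cell (assumed values)
op_names = ["skip", "conv3x3"]

def build_op(name):
    return nn.Identity() if name == "skip" else nn.Conv2d(C, C, 3, padding=1, bias=False)

# Shared weights w_o^i: one module per (source node i, operation o), reused for every j > i.
shared = nn.ModuleDict({f"{i}_{name}": build_op(name)
                        for i in range(N) for name in op_names})

def op_output(i, j, name, z_i):
    # The same module is used regardless of the destination node j.
    return shared[f"{i}_{name}"](z_i)

z_0 = torch.randn(1, C, 8, 8)
print(torch.equal(op_output(0, 2, "conv3x3", z_0),
                  op_output(0, 3, "conv3x3", z_0)))   # True: identical feature maps
```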
§ EXPERIMENTAL RESULTS
Our approach is evaluated on two search spaces, DARTS and NAS-Bench-201, with the vision datasets CIFAR-10, TinyImagenet, Imagenet-16-120 and Imagenet-1k. The details of the search spaces and implementation are given in the appendix.
§.§ Results on DARTS search space
§.§.§ Transferability- learning of coarse features during search.
We transfer the architecture searched on CIFAR-10 to train and evaluate the model weights on TinyImagenet in Table <ref> and ImageNet-1k in Table <ref>. This transferred model yields higher performance than manually designed architectures <cit.> for the target dataset. The performance of the transferred model is comparable to that of an architecture searched on the target dataset itself, which can be attributed to the architecture learning coarse features rather than object-specific features during search.
§.§.§ Performance versus Device-Constraints trade-off
DCA-NAS discovers 2 to 4% better-performing architectures than manual designs with a memory constraint of 3.5 million parameters on CIFAR-10 and similar performance on TinyImagenet as in Table <ref>.
On Imagenet-1k, DCA-NAS yields models with similar performance to other NAS methods <cit.> under a constraint of 5.5 million parameters (chosen to yield models of similar size to other NAS methods), as shown in Table <ref>. We vary the input device resource constraint and plot the performance of the searched models against the number of parameters in Figure <ref>. As observed, DCA-NAS can yield models 15x smaller than manual architectures like PyramidNet-272 <cit.> with at most a 1% reduction in accuracy on CIFAR-10. On TinyImagenet, DCA-NAS yields models similar in performance but 6x smaller in size than the manual Resnet variant. In comparison to ProxylessNAS <cit.> on Imagenet-1k, DCA-NAS yields a model with 32% fewer parameters at similar accuracy. In comparison to DNAS methods <cit.> on each of the three datasets, we observe that the performance of the DCA-NAS searched models is retained to a certain extent as resources are further limited, after which the model performance degrades. On Imagenet-1k, a DCA-NAS model of similar size outperforms the manually designed MobileNet-v2 <cit.> by 1% while being discovered automatically.
§.§.§ Search time comparison
For evaluation on TinyImagenet in Table <ref>, the architecture searched on CIFAR-10 with DCA-NAS yields a model with the lowest search time, which indicates the search-time efficiency of the transferability property. Our method requires about 4x lower search cost than SGAS <cit.>, which performs the best among the other transferred architectures, and 16x lower search time than the other resource-constrained approach <cit.> for similar performance, as seen in Table <ref>. Moreover, ProxylessNAS <cit.> takes about 4x more search time than DCA-NAS, whereas PC-DARTS takes about 2x more search time with no capability to constrain model size.
§.§ Results on NAS-Bench-201 search space
§.§.§ Performance and Latency comparisons on different devices
Our method reports the mean by averaging over five runs with different random seeds. Figure <ref> compares the performance of models searched with DCA-NAS and PC-DARTS by varying the latency constraints. It shows that, unlike PC-DARTS, DCA-NAS can search for more efficient models which have lower inference latency for similar test accuracy. Moreover, we observe that models with similar performance have lower latency when tested on Pixel 3 than on Raspberry Pi 4 due to a faster RAM in Pixel 3.
DCA-NAS takes the lowest search time among all the NAS methods due to the addition of search-time-efficient techniques while being at-par in terms of performance across all datasets.
§ ABLATION STUDY
Effectiveness of various algorithmic augmentations for faster search: We analyze the effectiveness of the algorithmic augmentations described previously in Section <ref> in reducing search cost. We sequentially add weight sharing, channel bottleneck, and derived cells to the baseline DARTS <cit.> method and measure search time and accuracy. Weight sharing, channel bottleneck, and derived cells were observed to significantly reduce search memory overhead, enabling us to use larger batch sizes and reducing overall search cost, as seen in Figure <ref>. Adding the resource constraint in the final DCA-NAS method negligibly increases search cost while maintaining performance.
Stability of the approach:
We test stability by running the search algorithm independently five times with different initial seeds and the same constraints and hyperparameters. The architectures found during each run have similar performance when re-trained and evaluated as shown in Fig. <ref>. Smaller models have lower performance due to restrictions in model complexity compared to larger models.
§ CONCLUSION
We present DCA-NAS, a device constraints-aware neural architecture search framework which discovers architectures optimized to the memory and computational constraints of an edge device in a time-efficient manner. It does so by incorporating a constraint in terms of the number of parameters or floating point operations (FLOPs) in the objective function with the help of a Lagrange multiplier. DCA-NAS in essence searches for a Pareto-optimal solution given the edge device memory or FLOPs constraint. Moreover, it enables architecture search with a search cost 4 to 17 times lower than previous state-of-the-art hardware-aware NAS approaches. DCA-NAS can discover models about 10 to 15 times smaller than manually designed architectures for similar performance. In comparison to DARTS and its other NAS variants, DCA-NAS can discover models up to 3x smaller in size with similar performance. This hardware-aware approach can be generalized to any future updates to differentiable neural architecture search and possibly to training-free methods of NAS with some adaptation.
§ ACKNOWLEDGEMENT
We thank the anonymous reviewers; Profs. Surendra Prasad and Brejesh Lall of IIT Delhi; and colleagues at Cadence India for their valuable feedback and inputs. This research is supported by funding from Cadence India; the first author is also supported by a fellowship from the Ministry of Education, India.
§ APPENDIX
§ DERIVING CELL ARCHITECTURES
The searched cells are stacked to form the network whose weights are trained and evaluated. The number of layers of this network during the evaluation phase is varied from 4 to 20. Models searched with DARTS using only 2 cells perform as well as those from an 8-cell search when the target model has more than 10 layers. Hence, in our experiments, instead of training architecture parameters for all 8 cells, we train only 2 cells, one normal and one reduction cell. The architectures of the other 6 cells stacked to form the network during search are derived from either the normal or the reduction cell, as shown in Figure <ref>.
§ CALCULATION OF SEARCH-STAGE ARCHITECTURE SIZE
The size of the architecture in the search phase k_s is different from the architecture size in evaluation phase due to the softmax weighting factor in equation <ref> (demonstrated in Figure <ref>). To address this, we introduce a tighter bound on the search constraint K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be made for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running the searches in parallel.
§ ALGORITHM
The practical implementation of our resource-constrained gradient-descent-based approach is illustrated in Algorithm <ref>.
§ IMPLEMENTATION DETAILS
The experiments with the smaller vision datasets (MNIST, FashionMNIST, CIFAR-10, Imagenet-16-120 and TinyImagenet) were run on a single Tesla V100 GPU. Training and evaluation on Imagenet-1k was performed on a cluster containing eight V100 GPUs.
The super-net used for search with smaller vision datasets except Imagenet-1k consists of 8 cells, with 6 normal cells and 2 reduction cells, and an initial number of channels set to 16. Each cell has 6 nodes, with the first 2 nodes in cell k serving as input nodes. The super-net is trained for 50 epochs with a batchsize of 512, and optimized using SGD with a momentum of 0.9 and weight decay of 3e-4. The learning rate is initially set to 0.2 and gradually reduced to zero using a cosine scheduler. Architecture parameters α are optimized using Adam optimizer, with a learning rate of 6e-4, a momentum of (0.5, 0.999), and a weight decay of 1e-3. The search is run 5 times, and the architecture with the highest validation accuracy is chosen. For evaluation, the target-net has 20 cells, with 18 normal cells and 2 reduction cells, and an initial number of channels set to 36. The target-net is trained for 600 epochs with a batchsize of 96, optimized using SGD with a momentum of 0.9, weight decay of 3e-4, and gradient clipping of 5. The initial learning rate is set to 0.025 and gradually reduced to zero using a cosine scheduler. Additional settings include a cutout length of 16, dropout rate of 0.2, and use of an auxiliary head.
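For reference, the optimizer configuration described above could be set up along the following lines; the placeholder network and architecture-parameter tensor are assumptions standing in for the actual super-net, and only the hyperparameter values are taken from the text.

```python
import torch
import torch.nn as nn

# Placeholders for the super-net weights w and the architecture parameters alpha.
supernet = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.Linear(16 * 32 * 32, 10))   # assumes 32x32 inputs
alphas = [torch.zeros(14, 8, requires_grad=True)]                     # e.g. 14 edges x 8 ops

epochs = 50
w_opt = torch.optim.SGD(supernet.parameters(), lr=0.2, momentum=0.9, weight_decay=3e-4)
w_sched = torch.optim.lr_scheduler.CosineAnnealingLR(w_opt, T_max=epochs)  # anneal 0.2 -> 0
a_opt = torch.optim.Adam(alphas, lr=6e-4, betas=(0.5, 0.999), weight_decay=1e-3)

# Each search epoch: update w on the train split with w_opt, update alpha on the val
# split with a_opt, then call w_sched.step() once per epoch.
```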
For Imagenet-1k, we reduce the input size from 224 × 224 to 28 × 28 using three convolution layers with a stride of 2. The super-net for search has 8 cells starting with 16 channels, and the target-net for evaluation has 14 cells starting with 48 channels. Both search and evaluation use a batch size of 1,024. In search, we train for 50 epochs with a learning rate of 0.5 (annealed down to zero using a cosine scheduler), and a learning rate of 6e-3 for architecture parameters. In evaluation, we train for 250 epochs using the SGD optimizer with a momentum of 0.9 and a weight decay of 3e-5, and adopt an auxiliary head and the label smoothing technique.
§ MODEL PERFORMANCE BY VARYING FLOPS CONSTRAINT ON CIFAR10, TINYIMAGENET AND IMAGENET-1K
Instead of model parameters, we also experiment with FLOPs as the constraint in our objective function. As shown in Figure <ref>, our method DCA-NAS retains performance till a certain FLOPs constraint, after which it degrades. In comparison to manual architectures, our NAS approach yields models which require much smaller FLOPs and hence would have lower latency.
|
http://arxiv.org/abs/2307.05163v1 | 20230711104141 | A Mapping Study of Machine Learning Methods for Remaining Useful Life Estimation of Lead-Acid Batteries | [
"Sérgio F Chevtchenko",
"Elisson da Silva Rocha",
"Bruna Cruz",
"Ermeson Carneiro de Andrade",
"Danilo Ricardo Barbosa de Araújo"
] | cs.LG | [
"cs.LG"
] |
Sérgio F. Chevtchenko (corresponding author), [email protected] (SENAI Institute of Innovation for Information and Communication Technologies (ISI-TICs), Recife, Brazil)
Elisson da Silva Rocha, [email protected] (ISI-TICs, Recife, Brazil)
Bruna Cruz, [email protected] (ISI-TICs, Recife, Brazil)
Ermeson Carneiro de Andrade, [email protected] (Department of Computing at the Rural Federal University of Pernambuco (UFRPE), Recife, Brazil)
Danilo Ricardo Barbosa de Araújo, [email protected] (UFRPE, Recife, Brazil)
Energy storage solutions play an increasingly important role in modern infrastructure and lead-acid batteries are among the most commonly used in the rechargeable category. Due to normal degradation over time, correctly determining the battery's State of Health (SoH) and Remaining Useful Life (RUL) contributes to enhancing predictive maintenance, reliability, and longevity of battery systems. Besides improving cost savings, correct estimation of the SoH can lead to reduced pollution through reuse of retired batteries. This paper presents a mapping study of the state-of-the-art in machine learning methods for estimating the SoH and RUL of lead-acid batteries.
These two indicators are critical in the battery management systems of electric vehicles, renewable energy systems, and other applications that rely heavily on this battery technology. In this study, we analyzed the types of machine learning algorithms employed for estimating SoH and RUL, and evaluated their performance in terms of accuracy and inference time.
Additionally, this mapping identifies and analyzes the most commonly used combinations of sensors in specific applications, such as vehicular batteries.
The mapping concludes by highlighting potential gaps and opportunities for future research, which lays the foundation for further advancements in the field.
Machine Learning Lead-acid battery State-of-Health Remaining Useful Life Mapping study
§ INTRODUCTION
The increasing demand for reliable and efficient energy storage systems has prompted significant advancements in battery technologies. Among them, lead-acid batteries have been widely used for decades due to their affordability, reliability, and high current outputs <cit.>. They play a key role in many applications, including electric vehicles, renewable energy systems, telecommunications, backup systems, and energy storage in remote areas <cit.>. However, like all energy storage systems, lead-acid batteries degrade over time, impacting their ability to store and deliver energy <cit.>. Thus, accurately estimating the SoH and RUL of batteries is vital for preventing unexpected failures and ensuring optimal performance.
The State of Charge (SoC) and SoH are interrelated indicators of battery performance. Both SoC and SoH play crucial roles in assessing and monitoring the performance and longevity of a battery. The SoC refers to the amount of electrical energy stored in the battery at a given time, expressed as a percentage of the battery's full capacity. A fully charged battery is indicated as 100%, while a fully discharged battery is denoted as 0%.
On the other hand, SoH estimates the battery's overall condition compared to when it was new, considering factors like charging cycles, age, and operating conditions. Similarly to SoC, SoH can be expressed as a percentage of the battery's rated capacity <cit.>.
RUL is a critical predictive maintenance metric of a lead-acid battery. It is an estimate of the time a battery can continue operating while meeting performance requirements, considering factors like SoH, environmental conditions, and aging mechanisms. Accurately predicting RUL is challenging due to nonlinear battery degradation and the influence of factors like temperature, discharge rate, and depth of discharge <cit.>.
The premature failure of lead-acid batteries can have serious consequences, such as power loss, disruption of essential services, and substantial costs for repairs and replacements <cit.>. Moreover, replacing batteries prematurely, despite being in good health, leads to resource wastage and negative environmental impacts <cit.>. Hence, the development of accurate methods for estimating the SoH and RUL is crucial in order to optimize the reliability, lifespan, and efficiency of these batteries.
Advanced methods, including machine learning (ML) algorithms, are employed to integrate various factors and significantly enhance the accuracy of SoH and RUL estimates for lead-acid batteries <cit.>. Therefore, the objective of this paper is to provide a comprehensive mapping study that offers an overview of recent advancements in ML methods for estimating the SoH and RUL of lead-acid batteries.
The mapping study was conducted between March and June 2023, encompassing 17 selected studies published from 2013 to 2023. Our focus was to present the ML algorithms employed, along with the associated error rates and inference times of the models. Additionally, we examined the data acquisition methods and sensors used, as well as the approaches employed to simulate or achieve battery degradation.
By analyzing the main methodologies, strengths, and limitations of these techniques, our aim is to identify gaps and opportunities for future research in this field. The findings of this study will contribute to advancing the understanding of ML-based approaches for estimating SoH and RUL in lead-acid batteries and guide future research directions.
The remainder of the paper is organized as follows.
Section <ref> presents existing reviews in the field of lead-acid batteries.
Section <ref> introduces the protocol used to conduct the research.
Section <ref> presents the findings for the established research questions.
Section <ref> presents the open challenges.
Limitations of this review and corresponding mitigation strategies are presented in Section <ref>.
Finally, Section <ref> presents the conclusions, the limitations, and the future work for this study.
§ RELATED WORK
While there is an abundance of literature that provides mapping studies of methods for estimating battery life <cit.>, there is a limited emphasis on SoH and/or RUL estimation methods specifically for lead-acid batteries.
<cit.> conducted a review on the state of health estimation methods of lead-acid batteries. The review classified the estimation methods into four categories: direct measurement-based, model-based, data-driven, and other methods. Promising results were found with a combination of Kalman filter (KF) and data-driven methods for SoH estimation. However, accurately estimating SoH during irregular charging and discharging scenarios remained a challenge.
It is worth noting that data-driven methods include, but are not limited to, ML-based techniques. Thus, the authors provided insights into a wide range of methodologies applied in the field, but there was no specific focus on the application of machine learning techniques.
The work by <cit.> delved into battery monitoring and prognostics optimization techniques. This review provided a broad understanding of model-based, data-driven and hybrid methods employed in battery health monitoring and prognostics. The analysis encompassed lead-acid, Ni-MH and lithium-ion batteries, with a significant focus on the latter. The authors proposed three dimensions of analysis: battery performance, approaches, and criteria to fulfil. Similar to <cit.>, data-driven methods included non-ML approaches, such as rule-based and coulomb-counting methods. The review concluded by identifying the most common model-based and data-driven approaches. A combination of both methods was suggested to compensate for the need for expert knowledge in model-based solutions and for the sensitivity to the quality and quantity of data in data-driven approaches.
<cit.> presented an important review on battery management strategies, with a special emphasis on battery state of health monitoring techniques. While this review provided a comprehensive study of various battery management strategies, its focus on machine learning methods for lead-acid batteries was limited, with the majority of the reviewed works focusing on lithium-ion batteries. Additionally, machine-learning methods were considered as a subset of data-driven approaches.
Each of these reviews provides valuable information to the field of battery health and lifespan estimation. However, the present review stands out for specifically focusing on the use of machine learning methods in estimating the state of health and remaining useful life of lead-acid batteries. It is also worth noting that the present study analyses thirteen new works that are not present in the related reviews above <cit.>. These additional works contribute to a more comprehensive and up-to-date understanding of the recent advancements and emerging trends in machine learning-based estimation methods for lead-acid batteries.
§ SYSTEMATIC MAPPING STUDY PROCESS
Our mapping study (MS) followed the methodology outlined by Petersen et al. <cit.> to guide our research. This methodology allowed us to systematically identify relevant works concerning the application of machine learning techniques in estimating the remaining useful life of lead-acid batteries.
Figure <ref> details the adapted methodology, with the steps described as follows.
Definition of Research Questions. Our research began by identifying important questions to investigate the current state of estimating the remaining lifespan of lead-acid batteries. We formulated the following research questions (RQs) for our study:
* RQ1: Which ML algorithms are used for estimation of RUL or SoH in lead-acid batteries?
* RQ2: What are the average error rates and inference times for SoH or RUL estimations?
* RQ3: Which types of sensors are commonly used?
* RQ4: What type of lead-acid batteries are typically monitored?
* RQ5: How is the battery degradation simulated or achieved?
Conduct Search for Primary Studies. To conduct the search for primary studies aiming to map and address the research questions, we developed a search string for automated searches across databases. The formulation of this string involved several steps: defining relevant keywords related to the topic, exploring alternative synonyms, and combining them using logical operators such as AND and OR. The final search string used was as follows: (("lead-acid") AND ("battery") AND ("predictive maintenance" OR "remaining useful life" OR "state of health") AND ("machine learning" OR "deep learning" OR "neural network")).
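The same boolean query can also be applied programmatically when screening exported records, for example over the concatenated title and abstract of each entry. The snippet below re-implements the search string; the example records are invented placeholders.

```python
def matches(text):
    """Re-implementation of the boolean search string over a record's text."""
    t = text.lower()
    has_any = lambda terms: any(term in t for term in terms)
    return ("lead-acid" in t and "battery" in t
            and has_any(["predictive maintenance", "remaining useful life", "state of health"])
            and has_any(["machine learning", "deep learning", "neural network"]))

records = [
    "State of health estimation of lead-acid battery packs with deep learning",
    "Lithium-ion battery remaining useful life prediction via neural network",
]
print([matches(r) for r in records])   # [True, False]
```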
The databases considered for the search of the primary studies were the ACM Digital Library[https://dl.acm.org], IEEE Xplore[IEEExplore.ieee.org/Xplore/home.jsp], Web of Science[https://www.webofscience.com], Science Direct[https://www.sciencedirect.com/ ] and Scopus [https://www.scopus.com/ ].
Screening of Papers for Inclusion and Exclusion. Given the large number of articles unrelated to our research topics, we established specific inclusion and exclusion criteria. These criteria are intended to narrow down our search and ensure that the identified literature is relevant to our research questions.
Our inclusion criteria were as follows: we selected studies that propose or apply ML algorithms for the estimation of RUL in lead-acid batteries. Additionally, we chose to include only peer-reviewed original studies published between 2013 and 2023.
On the other hand, our exclusion criteria consisted of: excluding articles that do not utilize ML algorithms, studies not written in English, studies with unclear results or findings, duplicated studies, articles that are not original research papers, research that does not involve the use of lead-acid batteries, as well as secondary or tertiary articles.
By applying these criteria, we aimed to ensure the relevance and quality of the studies included in our research.
Keywording of Abstracts. After executing the search string in the selected databases, we proceeded with the analysis of the abstracts and metadata of the articles to verify their correspondence with the established criteria. This allowed us to determine the primary studies to be included in our review.
Data Extraction. With the definition of the primary studies, we extracted relevant information by carefully reading the entire articles and addressing the RQs accordingly.
Mapping of Studies. Finally, we presented an overview of all selected primary studies and provided responses to each proposed RQ. This allowed us to gain insights into the current state of the literature regarding the utilization of machine learning for estimating the remaining lifespan of lead-acid batteries.
§ RESULTS AND DISCUSSION
In this section, we are conducting a descriptive analysis and addressing the research questions related to the estimation RUL and SoH in lead-acid batteries. We explore the ML algorithms used in this context, examine the average error rates and speeds of SoH or RUL estimations, investigate the data acquisition methodology and commonly used sensors in the estimation process. Additionally, we discuss the type of lead-acid batteries typically monitored and how battery degradation is simulated or achieved.
§.§ Descriptive analysis
In May 2023, our search resulted in a total of 79 papers (See Figure <ref>).
These papers were obtained from various sources, including 7 from ACM Digital Library, 27 from Web of Science, 2 from Science Direct, 34 from Scopus, and 9 from IEEE Xplore.
After removing duplicate papers, 53 unique papers remained, which underwent a thorough screening process based on inclusion and exclusion criteria using their abstracts.
Following this initial screening, 23 papers were selected for full-text reading and extraction of relevant information.
Out of these, 6 papers were excluded as they did not provide direct answers to the research questions. Consequently, a total of 17 primary studies were included in the mapping process.
Out of the 17 papers selected as primary studies, 10 were obtained from the 27 articles retrieved from Web of Science, 6 from the 34 articles retrieved from Scopus, and 1 from Science Direct.
The 9 articles retrieved from IEEE Xplore were considered duplicates, as they had already been included from other sources.
The 7 articles from the ACM Digital Library were rejected due to their focus on batteries other than lead-acid.
Figure <ref> provides an overview of the distribution of articles in across different years.
The first identified article was published in 2013, which was the starting point for this research. However, there was a notable increase in the number of publications in 2016, with the inclusion of three articles. The subsequent years, 2018 and 2019, had only two articles each, but in 2022, the number increased again, with five articles meeting the inclusion criteria.
§.§ Which ML algorithms are used for estimation of RUL or SoH in lead-acid batteries?
The choice of an ML algorithm, as well as its set of hyperparameters, can have a significant impact on both prediction accuracy and inference time. Therefore, in this research question, we aim to identify the most commonly used algorithms for the estimation of RUL or SoH in lead-acid batteries. An overview of ML approaches is presented in Table <ref>.
Table: ML Approaches, SoH Estimation Error, and Inference Time.

Article | ML algorithm | Reported SoH or RUL error (%) | Inference time
<cit.> | MLP, PSO | 0.4900 |
<cit.> | MLP | 1.8000 |
<cit.> | MLP | 7.6000 |
<cit.> | MLP | 1.7000 | 10 min
<cit.> | RSF, LSTM | |
<cit.> | MLP | |
<cit.> | MLP | | 25s
<cit.> | MLP | 0.0005 |
<cit.> | MLP | | 100 ms
<cit.> | MLP | |
<cit.> | MLP | | 1s
<cit.> | MLP, LSTM | 2.8000 | 30 min
<cit.> | KNN | |
<cit.> | BiLSTM | 3.0000 |
<cit.> | LSTM, MLP, RNN | 0.5800 |
<cit.> | RBFNN, Fuzzy logic | 1.6000 | 2 hours
<cit.> | MLP | 4.9000 | 35s
The multilayer perceptron (MLP) is by far the most common choice, being used in 13 of the considered works (see Figure <ref>).
Interestingly, 11 of these works focused on estimating the SoH. Overall, only <cit.> and <cit.> propose a method for direct estimation of RUL. The emphasis on SoH estimation rather than RUL estimation may be due to the nonlinear relationship between the two. Battery health can degrade at varying rates throughout its lifespan, making RUL estimation a more complex task that requires additional information compared to SoH estimation.
Rather than estimating RUL or SoH, <cit.> explored different versions of MLPs to estimate a reliability function. This function represented the probability that a battery would last for more than a given amount of time, denoted as t. They found that using an ensemble of five neural networks, each with two hidden layers, led to better prediction performance. This approach outperformed single network architectures, regardless of whether they had more or fewer hidden layers.
Another study conducted by <cit.> compared the performance of MLPs against the Particle Swarm Optimization (PSO) algorithm and a software-assisted human expert. They focused on identifying parameters in lead-acid batteries. The results showed that the neural network method, in this case, MLPs, achieved the highest accuracy and also demanded less computational resources compared to the other methods.
Long Short-Term Memory (LSTM) networks, the second most commonly used algorithm, are a type of Recurrent Neural Network (RNN) often used for analyzing data with time dependencies.
They are specifically designed to handle temporal sequences and capture long-range dependencies more effectively compared to traditional RNNs. LSTM stands out due to its unique structure, which includes a cell state and three types of gates: input, forget, and output. These elements enable LSTM to selectively retain or discard information, making it highly effective for tasks involving time-series data <cit.>.
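As a concrete illustration of how such a recurrent model maps battery measurements to a health estimate, the sketch below defines a small LSTM regressor over sequences of (current, voltage, temperature) samples. The layer sizes and sequence length are assumptions and are not taken from any of the reviewed studies.

```python
import torch
import torch.nn as nn

class LSTMSoH(nn.Module):
    """Sequence of sensor readings -> scalar SoH estimate (fraction of rated capacity)."""
    def __init__(self, n_features=3, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        last = out[:, -1]                  # hidden state after the final time step
        return torch.sigmoid(self.head(last)).squeeze(-1)   # SoH in [0, 1]

model = LSTMSoH()
batch = torch.randn(8, 120, 3)             # 8 batteries, 120 time steps, 3 sensors
print(model(batch).shape)                  # torch.Size([8])
```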
In their work, <cit.> extended their earlier research <cit.> by incorporating an LSTM model and comparing it with the Random Survival Forest (RSF) method.
The aim was to estimate the potential failure time of batteries using irregular and sparse operational data obtained from vehicles, which was collected during workshop visits or through remote readouts. Due to the absence of a consistent health indicator in the data and its sparsity, they represented battery survival as a probability, referred to as a lifetime function. The authors found that an ensemble of LSTM networks yielded the best results in this particular context.
LSTM networks were also compared to MLPs for SoH estimation in <cit.>. Due to the emphasis on retired lead-acid batteries, the initial conditions were unknown, and a specific current pattern was applied for model training. Subsequent SoH estimation relied on measurements of discharge voltage and current for approximately 30 minutes. The average performance of LSTM networks was found to be slightly superior to that of MLPs, although both results were reported as being equivalent. Furthermore, an LSTM model was proposed by <cit.> for estimating both the SoC and SoH specifically of gelled-electrolyte batteries used in golf carts. The study revealed that LSTM outperformed both RNNs and MLPs when estimating SoC. Subsequently, only the LSTM model is used to evaluate SoH estimation.
Finally, the work conducted by <cit.> stood out for employing a deep learning approach in the prediction of SoH. In their model, they first utilized a Convolutional Neural Network (CNN) to extract features from the data and reduce its dimensions. These processed features were then fed into a Bidirectional Long Short-Term Memory (BiLSTM) network. To enhance the learning from the time-series data, an attention mechanism was incorporated. However, it is important to note that this approach may have a drawback in terms of computational cost, as deep learning models typically require more processing power and memory compared to traditional machine learning algorithms.
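A schematic version of such a pipeline is sketched below: a 1D CNN reduces the input sequence, a BiLSTM models temporal dependencies, and a simple attention layer pools the hidden states before the SoH head. The layer sizes and the linear attention scoring are assumptions for illustration and may differ from the cited architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBiLSTMAttention(nn.Module):
    """1D CNN feature extractor -> BiLSTM -> attention pooling -> SoH estimate."""
    def __init__(self, n_features=3, conv_ch=16, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_ch, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                          # halves the sequence length
        )
        self.bilstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # scores each time step
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                             # x: (batch, time, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time/2, conv_ch)
        h, _ = self.bilstm(h)                              # (batch, time/2, 2*hidden)
        a = F.softmax(self.attn(h), dim=1)                 # attention weights over time
        context = (a * h).sum(dim=1)                       # weighted sum of hidden states
        return torch.sigmoid(self.head(context)).squeeze(-1)

print(CNNBiLSTMAttention()(torch.randn(4, 128, 3)).shape)   # torch.Size([4])
```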
§.§ What are the average error rates and inference times for SoH or RUL estimations?
SoH and RUL of a battery are important factors to consider in the field of predictive maintenance. Accurately estimating the precision and inference time of these battery indicators is essential for optimizing battery performance, extending their lifespan, and ensuring their efficiency.
In this RQ, we examine the inference time and average error rates reported in the reviewed papers.
If a paper reported error rates for multiple ML models or configurations, we highlighted the most successful results. It is worth noting that, regarding the predictions of RUL, two out of the three articles did not provide quantitative information on precision or inference time <cit.>.
There is a substantial variation in the estimation (inference) times across different studies, ranging from seconds to hours (see Table <ref>). For instance, <cit.> reported a notably quick estimation time of roughly 100 milliseconds, although they did not numerically report the corresponding SoH estimation error. Conversely, <cit.> reported an estimation time of approximately 2 hours for charging or discharging in their study. However, the proposed method demonstrated relatively good accuracy (1.6%) in estimating the SoH of a battery with unknown parameters.
Most studies utilized the Mean Absolute Percentage Error (MAPE) metric to report estimation errors. However, six out of the seventeen papers reviewed did not specify their estimated errors for SoH or RUL.
Note that in some cases the error rates are reported only for SoC estimation.
Additionally, some studies, like <cit.>, reported errors for discrete intervals of a battery's SoH, making it challenging to include their findings in an overall comparison due to differing methodologies.
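For reference, the MAPE metric mentioned above can be computed in a few lines; the sample SoH values below are purely illustrative.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative true SoH values (percent of rated capacity) and model estimates.
print(round(mape([92.0, 85.0, 78.0], [91.5, 86.0, 77.0]), 2))   # about 1% error
```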
Low reported error percentages are consistently observed in the analysed works. For instance, <cit.> reported a MAPE of around 0.58%.
It was noted that the error was closely related to operating temperature, with higher temperatures resulting in increased estimation errors.
Similarly, <cit.> reported a low average estimation error of 0.49%. While these studies reported the lowest error percentages in SoH estimation, it is important to note that <cit.> only used data on current, temperature, and voltage.
In contrast, <cit.>'s methodology was based on Electrochemical Impedance Spectroscopy (EIS). This requires the use of more expensive sensors <cit.>, in addition to application of currents under specific frequencies, which can generate voltage ripples at the output of power converters <cit.>.
§.§ Which types of sensors are commonly used?
The sensor types commonly used in the analyzed studies for estimating SoH or RUL of batteries are summarized in Table <ref>.
The last column is reserved for sensors not associated with the three main types: voltage, current, and temperature.
For instance, in the study by <cit.>, additional features such as capacity (Ah) and total energy (Wh) were employed. However, these features were not listed as extra sensors since they were derived from voltage and current measurements. In another case, <cit.> used the battery's state of charge for estimating SoH, which was calculated using the ampere-hour counting technique. This method only required a current sensor with a sufficient acquisition rate.
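The ampere-hour (coulomb) counting technique mentioned above amounts to integrating the measured current over time. The sketch below illustrates the idea; the sampling interval, rated capacity, and coulombic efficiency are assumed values.

```python
import numpy as np

def soc_coulomb_counting(current_a, dt_s, capacity_ah, soc0=1.0, efficiency=1.0):
    """Integrate current (positive = discharge) to track the state of charge."""
    drawn_ah = np.cumsum(current_a) * dt_s / 3600.0          # ampere-hours drawn so far
    return np.clip(soc0 - efficiency * drawn_ah / capacity_ah, 0.0, 1.0)

# One hour of a constant 6 A discharge sampled every second on an assumed 60 Ah battery.
current = np.full(3600, 6.0)
soc = soc_coulomb_counting(current, dt_s=1.0, capacity_ah=60.0)
print(round(float(soc[-1]), 3))   # ~0.9: about 10% of the rated capacity consumed
```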
Table <ref> suggests that the most common method, utilized in 11 out of 17 reviewed studies, relies on a combination of current and voltage sensors. However, temperature fluctuations have a significant influence on SoH estimation of lead-acid batteries. As a result, five studies <cit.> opted to use a trio of sensors: voltage, current, and temperature, aiming at a more robust SoH estimation.
Specifically for the vehicle fleet management application, a large collection of categorical and numerical data was used in <cit.>. In this case, the failure time of the vehicle's battery was estimated based on maintenance records and expressed as a probability function.
The study by <cit.> incorporated battery density and charging time along with voltage and temperature measurements.
However, the specific impact of each individual feature on the results was not clearly identified, indicating the need for an additional ablation study to analyze the proposed model in more detail. Lastly, <cit.> adopted a model-based approach using the Electrochemical Impedance Spectroscopy (EIS) technique for battery parameter estimation. This approach involves characterizing the equivalent circuit parameters for a specific battery type and the electrochemical interpretation of these values allows for accurate inference of battery health status or failure modes. A drawback of this method is that it involves more costly sensors and is more commonly designed for laboratory conditions <cit.>.
Table: A comparison of sensors used in the selected works.

Article | Current | Voltage | Temperature | Other
<cit.> | | | | Electrochemical impedance spectroscopy (EIS)
<cit.> | | | |
<cit.> | | | |
<cit.> | | | |
<cit.> | | | | Vehicle fleet data
<cit.> | | | | Vehicle fleet data
<cit.> | | | | Vehicle fleet data
<cit.> | | | |
<cit.> | | | |
<cit.> | | | |
<cit.> | | | |
<cit.> | | | |
<cit.> | | | |
<cit.> | | | | Density, Charging time
<cit.> | | | | Discharging time
<cit.> | | | |
<cit.> | | | |
§.§ What type of lead-acid batteries are typically monitored?
Lead-acid batteries have been widely used in various applications due to their low cost and high reliability.
With the increasing demand for efficient energy storage systems, it becomes crucial to monitor the performance and condition of different types of lead-acid batteries.
Therefore, the objective of this research question is to investigate the particular types of lead-acid batteries that have been predominantly monitored in the reviewed studies.
While a large portion of studies focused on generic types of lead-acid batteries without specifying the model, it is worth noting that other types, such as OPzS (tubular flooded battery) <cit.>, valve-regulated lead–acid (VRLA) <cit.>, and automotive batteries <cit.> have also been investigated.
Figure <ref> illustrates the distribution of the different types of batteries identified in this mapping study.
Most approaches that studied automotive lead-acid batteries used vehicle maintenance data. For instance, in the study by <cit.>, the batteries of approximately 2,000 vehicles were monitored for the validation/test set, and battery failures were inferred when a workshop engineer replaced them. As an exception, <cit.> focused tests on a single automotive lead-acid battery, introducing a method for estimating battery health based on cranking current.
Only two studies in our review used a retired type of lead-acid battery <cit.>, which brings its own set of challenges. Notably, the initial parameters of the battery under test are unknown. As a result, both the estimation of the battery's SoH and the training of the model occur offline using a dedicated testing rig. By examining the behavior and degradation patterns of 70 retired batteries, <cit.> aimed to gain insight into the performance characteristics and remaining capacity of lead-acid batteries reaching the end of their operational lifespan.
The battery capacities reported in these papers range between 2.5 to 226 Ah, with an average evaluated capacity of approximately 120 Ah. However, it is worth noting that in six out of the seventeen papers, the capacity values of the investigated batteries were not explicitly stated.
Additionally, both real and simulated batteries were employed in these studies for monitoring purposes. While some works utilized real lead-acid batteries to obtain accurate and reliable data, others employed simulated batteries to model various operating conditions and evaluate the performance of monitoring systems.
Notably, <cit.> was the only study that exclusively focused on a simulated battery, with the goal of estimating the battery's SoH. This approach, however, relies heavily on knowing the battery's initial capacity to ensure accurate model training.
§.§ How is the battery degradation simulated or achieved?
The simulation or achievement of battery degradation is of significant importance in the field of predictive maintenance. In this context, battery degradation is simulated or achieved using various methodologies and techniques. Experimental techniques, such as EIS and cycle tests, are employed to characterize and quantify battery degradation. To address this question, we analyzed the degradation method reported in each primary paper. Figure <ref> illustrates the distribution of articles using simulated and real batteries, along with the different degradation methods employed.
The majority of the reviewed papers utilized batteries at varying stages of health for their analysis.
Typically, these methods tested a limited number of batteries.
For instance, <cit.> presented different estimation techniques for analyzing battery degradation factors during charging or discharging cycles.
However, the approach was evaluated on just 3 VRLA batteries with varying SoH levels.
On the other hand, the model proposed by <cit.> acquired the complete charge and discharge data from batteries at different SoH levels and a total of 70 lead-acid batteries retired from communication base stations were used in this work.
Methods that utilized accelerated life experiments provided a more detailed view of the algorithm's performance throughout the battery's lifespan. For example, <cit.> conducted experiments on three groups of large-capacity, flooded lead-acid batteries. Similarly, <cit.> implemented a six-month aging test with six VRLA batteries, subjecting them to various cycling and temperature operating conditions. While providing more information and robust evaluation of the proposed models, accelerated life experiments require hundreds of charge-discharge cycles, which adds to the cost of conducting such experiments.
Finally, <cit.> relies solely on a simulated battery for experimental validation. This allows for easily changing the simulation temperature, which would present a challenge in a controlled real environment. Combinations of simulated and real battery models are found in <cit.> and <cit.>. In <cit.>, simulation is used for training and preliminary testing of the MLP model, while real-world data are collected for additional validation on a hybrid energy system. On the other hand, <cit.> used simulation of a hybrid energy system on a maritime vessel for estimation of energy saving when using a battery in combination with a diesel generator.
§.§ Discussion
This section provides key insights from the data collected while addressing the research questions mentioned earlier.
Most of the studies analyzed in this mapping study utilized current, voltage, and temperature data, which indicates that these are essential variables for SoH and RUL estimation models.
However, certain studies, like <cit.> and <cit.> achieved successful results by incorporating additional data inputs such as EIS, battery density and charging time, respectively. It is interesting to note that some studies, including <cit.>, <cit.>, and <cit.>, utilized vehicle fleet data that comprised both categorical and numerical entries.
The inclusion of such data adds another layer of complexity to the models due to the variations in driving and charging behaviors observed across a fleet.
However, it is essential to note that not all studies have reported all the necessary information. For instance, some studies have omitted crucial details, such as inference time or the error rates associated with SoH and RUL estimation. This lack of uniform reporting presents a significant challenge when it comes to comparing results across different studies. As discussed in the Open Challenges section (see Section <ref>), the field would greatly benefit from the existence of comprehensive and publicly available datasets.
In addition to addressing the initial research questions, we also evaluated the articles in terms of three parameters, namely the specific protocol for model training, online estimation of parameters, and specific current for online estimation. The results of this evaluation can be found in Table <ref>. The specific protocol for model training indicates whether controlled experiments were conducted to generate a consistent dataset specifically for training the ML models. The online estimation of parameters evaluates the capability of the trained ML models to perform estimation of the battery's SoH or RUL without the need to disconnect the battery terminals. Lastly, the specific current for online estimation focuses on the requirement of a specific current pattern or load condition for the estimation process to take place, even though the battery does not need to be disconnected.
Table: Specific Protocols and Online Estimation Parameters. Columns: Article | Specific protocol for model training | Online estimation of parameters | Specific current for online estimation (one row per reviewed study <cit.>).
Most of the analysed studies opted for building a dataset in a controlled environment, such as constant discharging under a fixed temperature <cit.>.
In contrast, <cit.>, <cit.> and <cit.> relied on data collected in the field from a large fleet of vehicles.
Online estimation was typically desirable for SoH or RUL calculation, as it eliminated the need to disconnect the battery during inference. However, this requirement was found to be less of a concern when processing retired batteries <cit.>.
Furthermore, while most of the analyzed methods can operate online, some require specific current patterns <cit.>, which can be challenging when connected loads are unpredictable.
§ OPEN CHALLENGES
While a number of recent studies have utilized machine learning for SoH and RUL estimation of lead-acid battery, there are still several open challenges that need to be addressed. Overcoming these challenges has the potential to enhance the limitations of the existing body of work and result in more efficient algorithms. Some of the key challenges in applying machine learning to lead-acid battery SoH and RUL estimation are detailed as follows.
Evaluation of Machine Learning Algorithms. MLPs are commonly found in the analyzed works, likely due to their simplicity and effectiveness.
However, there exists a wide range of other machine learning algorithms that could be explored. Algorithms such as support vector machines (SVMs), decision trees, random forests, and more complex deep learning architectures could be better suited to certain datasets or applications.
In particular, deep learning models such as LSTMs and transformers have achieved state-of-the-art performance in various applications involving sequential data learning.
A comprehensive comparison of these algorithms on the same dataset would be valuable in determining the best algorithm for battery SoH and RUL prediction. It is worth noting that a publicly available dataset was not found during the present research.
Automatic Hyperparameter Optimization.
Another challenge is the optimization of hyperparameters. Machine learning models, including MLPs, have numerous hyperparameters that need to be tuned for optimal performance. Current practices often rely on manual tuning, which is time-consuming and usually does not guarantee the optimal configuration <cit.>. Automated hyperparameter optimization methods, such as Bayesian optimization and evolutionary algorithms, could significantly improve the performance of the models and should be explored in future works.
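As a lightweight alternative to manual tuning, even a plain random search over a small hyperparameter space can be automated in a few lines, as sketched below. The search space and the stand-in objective are assumptions; in practice the objective would be a cross-validated SoH or RUL estimation error.

```python
import random

search_space = {
    "hidden_units":  [16, 32, 64, 128],
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "num_layers":    [1, 2, 3],
}

def sample_config(space):
    return {name: random.choice(values) for name, values in space.items()}

def evaluate(config):
    # Stand-in for training a model with `config` and returning its validation MAPE.
    return random.uniform(0.5, 8.0)

random.seed(0)
trials = [(evaluate(c), c) for c in (sample_config(search_space) for _ in range(20))]
best_error, best_config = min(trials, key=lambda t: t[0])
print(best_error, best_config)
```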
Inference Time Evaluation.
In real-world applications, especially in embedded systems, the speed at which a model can make a prediction could be critical. A model that takes too long to process a data stream may not be suitable for certain applications, regardless of its accuracy.
While the evaluation of inference time was generally not conducted in the analyzed studies, future research should consider not only the accuracy of the models but also their inference time on specific hardware platforms. Assessing and optimizing inference time can be essential for ensuring the practical applicability of the models in real-time scenarios.
Ablation Study.
Conducting an ablation study involves systematically removing or modifying components of the model to assess their individual impact. In the context of lead-acid battery SoH and RUL estimation, an ablation study could be highly beneficial. Specifically, investigating the influence of individual sensors on prediction accuracy could provide valuable insights. This analysis would help identify the sensors that contribute most significantly to accurate predictions, potentially leading to the development of simpler and more cost-effective battery monitoring systems.
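A simple way to carry out such a study is a leave-one-sensor-out loop, sketched below on synthetic data. The estimator, scoring metric, and data are placeholders for whatever model and dataset a given study adopts.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sensors = ["current", "voltage", "temperature"]
X = rng.normal(size=(200, 3))                    # toy feature matrix, one column per sensor
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)   # synthetic SoH target

def score(columns):
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    return cross_val_score(model, X[:, columns], y, cv=3,
                           scoring="neg_mean_absolute_error").mean()

baseline = score([0, 1, 2])
for i, name in enumerate(sensors):
    kept = [j for j in range(3) if j != i]
    print(f"dropping {name}: score change {score(kept) - baseline:+.3f}")
```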
Hybrid Accelerated Life Experiments.
As mentioned in Section <ref>, while providing more information and robust evaluation of the proposed models, accelerate life experiments require hundreds of charge-discharge cycles, which adds to the cost of conducting such experiments. However, future research could consider adopting a hybrid evaluation strategy that combines accelerated life degradation experiments with simulated models and testing on a set of pre-aged batteries. This approach would provide a more cost-effective alternative while still allowing for comprehensive evaluation of battery performance.
Multiobjective Optimization and Feature Selection.
There is often a trade-off between precision and computational cost when designing models. Models with higher accuracy may require more computational resources, which can be limited in embedded applications. Multiobjective optimization techniques could be used to find a balance between precision and computational cost. Additionally, feature selection techniques could be employed to identify the most important features, reducing the dimensionality of the problem and potentially further reducing computational costs. Incorporating these strategies can help achieve optimal performance while managing computational constraints in practical applications.
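For the precision versus computational cost trade-off, candidate models can be compared through their non-dominated (Pareto) set, as sketched below; the listed models and numbers are invented for illustration.

```python
def pareto_front(candidates):
    """Keep candidates not dominated in (error, cost); lower is better for both."""
    front = []
    for name, err, cost in candidates:
        dominated = any(e <= err and c <= cost and (e, c) != (err, cost)
                        for _, e, c in candidates)
        if not dominated:
            front.append((name, err, cost))
    return front

# (model, SoH error in %, inference cost in ms) -- invented numbers for illustration.
models = [("MLP-small", 2.1, 5), ("MLP-large", 1.8, 40), ("LSTM", 1.6, 120), ("KNN", 2.0, 90)]
print(pareto_front(models))
```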
§ LIMITATIONS OF THIS REVIEW
Systematic mappings are subject to risks and limitations <cit.>. This section presents the most common limitations and our mitigation strategies:
Research question formulation: The formulation of research questions plays a crucial role in guiding the review process. However, there is a possibility of unintentionally excluding relevant studies or overlooking important aspects due to the specific wording or scope of the questions. To mitigate this, we carefully designed and refined the research questions through discussions with authors and external experts to ensure their adequacy.
The conduct of the search: Despite employing a comprehensive search strategy, it is possible that some relevant studies may have been missed. This could be due to limitations in the selected databases, language restrictions, or the exclusion of certain publication types. To mitigate this limitation, we adapted our search strings for each digital database while maintaining the same terms.
Publication and selection bias: The inclusion and exclusion criteria applied during the study selection process can introduce bias, as studies meeting those criteria may differ in their characteristics from those excluded. Additionally, a preference for published studies may introduce bias towards positive or significant results.
To mitigate this limitation, we developed clear and objective inclusion and exclusion criteria. Additionally, multiple authors assessed the eligibility of the selected works to minimize subjective bias.
Inaccuracy in data extraction: Misclassification or inaccuracies in data extraction refer to the potential for different reviewers to interpret the information from studies in varying ways <cit.>. While the classification of studies is based on our judgment, there remains a possibility of incorrect categorization. To mitigate this potential issue, the classification process involved multiple author-researchers, and any discrepancies were resolved through consensus discussions.
By acknowledging these limitations and implementing appropriate strategies, we aim to provide a comprehensive and unbiased evaluation of the literature within the scope of this review.
§ CONCLUSIONS
This mapping study provided an in-depth discussion on the utilization of machine learning methods for the estimation of SoH and RUL in lead-acid batteries. The review included the examination of several novel research works that have not been previously considered in similar reviews. The study was guided by five specific research questions, which yielded insights into the commonly used ML algorithms and battery types, corresponding error rates and inference time, and identified typical experimental setups. These insights contributed to a better understanding of the current landscape of machine learning techniques for SoH and RUL estimation in lead-acid batteries.
The findings highlighted the widespread usage of MLP and LSTM networks among the machine learning algorithms employed in this field.
These algorithms demonstrated effectiveness in addressing the complexities of battery health estimation, although their performance significantly depends on sensor combination and hyperparameter tuning. The review identified a clear need for more comprehensive evaluations of the ML algorithms, including factors such as inference time and computational cost. Among sensors used for SoH or RUL estimation, current, voltage and temperature were identified as the most relevant. However, the impact of individual sensors is often overlooked, suggesting the need for more detailed studies in future works, including ablation and feature selection analyses.
Furthermore, special attention was given to the online estimation capacity of each approach.
Works that did not require the battery to be disconnected or interrupted during its normal operation were considered suitable for a wider range of applications. Consequently, we identified the most promising works that addressed diverse applications, such as fleet monitoring and retired battery identification.
As part of our future work, we aim to build a simulated dataset to evaluate the performance of state-of-the-art methods for SoH and RUL. This analysis will encompass the consideration of multiple conflicting parameters, including precision and computational cost. Additionally, we envision further research efforts dedicated to the development of an energy-efficient approach for inference on embedded devices.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Sérgio F. Chevtchenko: Writing - original draft, Conceptualization, Methodology, Investigation. Bruna Cruz: Methodology, Investigation, Writing - original draft.
Elisson da Silva Rocha: Methodology, Writing - original draft.
Ermeron Carneiro de Andrade: Methodology, Writing - review and editing, Supervision.
Danilo Ricardo Barbosa de Araujo: Methodology, Writing - review and editing, Supervision.
|
http://arxiv.org/abs/2307.04636v1 | 20230710152529 | On the importance of illustration for mathematical research | [
"Rémi Coulon",
"Gabriel Dorfsman-Hopkins",
"Edmund Harriss",
"Martin Skrodzki",
"Katherine E. Stange",
"Glen Whitney"
] | math.HO | [
"math.HO",
"01-02, 01A65, 01A67, 00-02"
] |
In the last decade, it has become increasingly possible for researchers to augment their experience of abstract mathematics with immersive sensory experience: 3D-printed or CNC-milled models, the ability to walk through impossible physical spaces with virtual reality, and the potential to explore high-dimensional mathematical spaces through computer visualisation, to name a few. Now much more than simply an aid to understanding, these tools have reached a level of sophistication that makes them indispensable to many frontiers of mathematical research. To preview one particular case recounted below, the tantalizing structure visible in Figure <ref> (and many others like it) led to conjectures and proofs that would likely otherwise have been inaccessible.
The list of examples of research driven by illustration is rapidly expanding in recent years. We use the term illustration to encompass any way one might bring a mathematical idea into physical form or experience, including hand-made diagrams or models, computer visualization, 3D printing, or virtual reality, among many others. We will discuss instances of this interplay in fields ranging from representation theory to geometry and many others. Many readers will also be aware of the recent and celebrated solution of the einstein problem with the hat monotile and its chiral version, the spectre <cit.>.
Illustration is beginning to find a home at programs like the special semester in Illustrating Mathematics in Fall 2019 at the Institute for Computational and Experimental Research in Mathematics (ICERM) and the Institute for Advanced Study (IAS)/Park City Math Institute (PCMI) virtual program in Summer 2021,[See <http://illustratingmath.org/> for links to these two programs, along with many other resources .] and a community is forming around many
modern tools.
Of course, the importance of illustration to research is not new: abstraction was linked to plane diagrams in the work of the ancients, including Euclid's Elements or the Chinese treatise The Nine Chapters on the Mathematical Art. Precise three-dimensional models were produced by skilled artisans in the 19th century, notable examples of which remain in the collections at the Institute Henri Poincaré[<https://patrimoine.ihp.fr/>]
or Göttingen University[<https://sammlungen.uni-goettingen.de/sammlung/slg_1017/>], among many other institutions. When computer visualization first became widely available in the 1980's, the Geometry Center was founded at the University of Minnesota, with a mission to exploit these new tools on behalf of mathematics.
But we are now at another cusp: modern technological tools have suddenly made 3D models and virtual reality widely available, and computation and computer visualization is more accessible and more powerful than ever. We can now collect huge mathematical datasets and examples, and it has become urgent to develop ways to interact immersively with this data.
Making full use of modern tools is not without its challenges: beyond the obvious technical challenges and software learning curves, there are important questions about how an illustration, much like a statistic or an experiment, can subtly mislead the researcher, or miss the essential mathematical pattern sought. Researchers often individually reinvent the necessary skill sets as they seek to advance their own projects, and these projects are pushing the boundaries of the possible.
But by building a discipline around this enterprise, we can develop its full potential to advance mathematical research.
§ SOME HIGHLIGHTS FROM THE HISTORY OF MATHEMATICAL ILLUSTRATION
Illustration of mathematics goes back as far as mathematical ideas themselves. In fact, some of the earliest evidence we have for abstract thinking comes from human-made designs, for example the cross-hatched carvings in Blombos Cave in South Africa, potentially from 73,000 years ago <cit.>. A little more recently, the middle-eastern tradition of geometry presented in Euclid's Elements
provides a structural link between statements deduced from axioms and figures made with straight edge and compass. These two tools provide a physical realization of the two key objects (straight lines and circles) described by the axioms. Euclid's diagrams give a map to help follow (or discover!) the chain of deduction in a proof. Conversely, the proof validates the image (which could otherwise mislead by error or the selection of a non-generic example).
This approach leads at the conclusion of Book 1 to a proof of the Pythagorean theorem; see Figure <ref>.
In Chinese mathematics, this theorem is the 勾股 (Gougu) theorem. In the classic Nine Chapters on the Mathematical Art (九章算術), it plays a key role in applying the arithmetical mathematics of the text to geometric problems, for example in
measuring altitude. The Chinese tradition also gives an elegant visual proof of the result by rearranging triangles, as in Figure <ref>.
Although the Chinese proof is not considered rigorous by modern standards, Euclid was also criticized by
Bertrand Russell when he wrote “A valid proof retains its demonstrative force when no figure is drawn, but very many of Euclid’s earlier proofs fail before this test.” <cit.>. This criticism reveals one of the challenges of mathematical illustration.
A powerful example comes from a well-known “proof” that all triangles are equilateral, wherein a slightly misleading diagram (shown in Figure <ref>) can be used together with an otherwise correct proof. Disallowing these particular subtle errors requires axioms capturing the meaning of “between,” that took considerable work by David Hilbert to formulate <cit.>.
A related pitfall – when a good illustration, overused, can become a pair of blinders – is illustrated by the following example.
In the Elements, the concept of number is based on the concept of length. So the squares in the Pythagorean theorem are actual squares (the area of which are equal), not squared numbers. In the 11th century algebra treatise of Omar Khayyam, although he gives solutions to equations with higher powers than three, he also states: “Square-square, which, to the algebraists, is the product of the square by itself, has no meaning in continuous objects. This is because how can one multiply a square, which is a surface, by itself? Since the square is a two-dimensional object … and two-dimensional by two-dimensional is a four dimensional object. But solids cannot have more than three dimensions.” <cit.>. The relation between number and length was also an important factor in the European reluctance to consider negative numbers. A line, after all, cannot have negative length. In contrast, negative quantities are used freely in the Nine Chapters, where arithmetic is the foundational idea, with geometry built from it. In Europe the development of the number line, starting with John Wallis, gave an alternative illustration of number (see Figure <ref>) with the capacity to include negative quantities as numbers in their own right
.
Powers and negative numbers are but two examples of a productive pattern of mathematics developing from the tension between illustration and symbolic idea. The study of complex numbers advanced significantly with the concept of the complex plane, and then allowed a new algebraic approach to the geometry of the plane. Both quaternions and matrices were developed to try to extend that understanding to higher dimensions.
In the case of real numbers, although the symbolic ideas would refine the illustrations needed, it was not until the late nineteenth century when fully symbolic definitions were developed, such as Dedekind cuts and Cauchy sequences. At that point the need for illustrations as foundational objects was removed, although the potential for developing intuition and challenging what might be done with the concepts remained
.
Projective geometry, first developed (as perspective) by artists as a tool to create realistic images, provided one such challenge. These ideas were explored mathematically by Johannes Kepler, Gérard Desargues and Blaise Pascal. In the early nineteenth century, perspective was developed by Gaspard Monge into “descriptive geometry” for the training of engineers in constructing forts and later developed and axiomatized in the foundational work by Jean-Victor Poncelet <cit.>.
In turn this work would be key in establishing models for non-euclidean geometry, explored axiomatically by Nikolai I. Lobachevsky and János Bolyai <cit.>. In this case it was such models, themselves illustrations, that convinced mathematicians of the existence and interest of the non-euclidean geometries.
Projective geometry also spurred the study of algebraic geometry. In the late nineteenth century an industry
emerged to reveal the surfaces constructed in this field and their properties, such as cone singularities and embedded straight lines. One pioneer
was Alexander Brill, a student of Alfred Clebsch with a degree in architecture. Following the work of Peter Henrici (another student of Clebsch), Brill made sliceform paper models of surfaces. He later worked with Felix Klein in Munich to set up a laboratory for the design and production of mathematical
objects.
This lab grew into a company that, when it was taken over by Martin Schilling in 1911, had a catalogue of over 400 models. His work combined deep mathematical understanding with a knowledge of printing and construction from his family business <cit.>.
The need to combine mathematical knowledge with fabrication techniques is also highlighted by a story of missed opportunity: how to make physical patches of hyperbolic planes. In addition to his disk model (often called the Poincaré disk model), Eugenio Beltrami also attached together strips of paper to approximate the surface. Other examples used paper polygons connected to make a sort of hyperbolic “soccer ball.” These paper models are often fragile, and the rigidity of the paper means that it cannot change its local geometry; thus such models are crude approximations. Roughly a century later, Daina Taimiņa realised that crocheting could produce far more resilient surfaces, with local stretching that meant the negative curvature was more smoothly distributed <cit.>. An example of this medium of representation is shown in Figure <ref>. In fact, similar techniques had been used to create ruffles in scarves and skirts for decades. If the methods of fiber arts had earlier been considered seriously and not dismissed as “work for women,” researchers could have had the opportunity to handle robust hyperbolic planes far sooner.
§ THE INCREDIBLE POTENTIAL FOR MATHEMATICAL ILLUSTRATION
Turning to recent developments, the work of Lionel Levine, Wesley Pegden, and Charles K. Smart provides an excellent example of the value of illustration as a research tool. Their Annals of Mathematics paper The Apollonian structure of integer superharmonic matrices <cit.> was motivated by the study of Abelian sandpiles on ℤ^2: place a large number N of sand grains at the origin, and allow any position with at least four grains to distribute those grains equally to its four neighbours. The stable configuration that results from this simple system displays impressive large-scale structure that can be discovered through computer visualization (see Figure <ref>). Especially striking is the vivid visual impression that the structure continues to refine at larger N toward a continuum scaling limit, which was proven earlier by Pegden and Smart
. To describe the PDE governing this process, the individual periodic tilings in the regions of the limit must be understood. They are each governed by an integer superharmonic matrix. Levine, Pegden, and Smart generated a picture of the set of integer superharmonic matrices, and were astonished to see the familiar fractal structure of an Apollonian circle packing (Figure <ref>).
Each circle of the packing was associated to a periodic pattern appearing in the scaling limit. Through extensive computer investigation, the authors were able to determine the intricate recursive relationships between the patterns for circles generated from one another (`ancestors' overlap and merge to form `descendent' patterns according to complicated rules). These recursions led to a difficult inductive proof that the set did indeed have the Apollonian structure evident in experiments. The development of these results provide a perfect example of the role illustration can play in the cycle of conjecture, theorem, and proof. Without the data available through large-scale computer experimentation and the ability to explore it visually, the question of the scaling limit may not have been raised at all, and the recursive proof of their main result would likely not have been discovered.
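The toppling rule described above is simple enough to reproduce directly. The following Python sketch is only an illustrative reimplementation (the grid half-width and grain count are our own choices, not those used by Levine, Pegden, and Smart), but colouring the sites of the resulting array by their final number of grains (0 to 3) already produces the kind of patterned regions discussed here:

    import numpy as np

    def stabilize_sandpile(n_grains, half_width):
        # Drop n_grains at the origin of a (2*half_width+1)^2 patch of Z^2 and
        # topple until every site holds at most 3 grains (illustrative sketch;
        # half_width must be large enough that the pile never reaches the edge).
        size = 2 * half_width + 1
        grid = np.zeros((size, size), dtype=np.int64)
        grid[half_width, half_width] = n_grains
        while (grid >= 4).any():
            topple = grid // 4               # how many times each site topples in this sweep
            grid -= 4 * topple
            grid[1:, :] += topple[:-1, :]    # each toppling sends one grain to each
            grid[:-1, :] += topple[1:, :]    # of the four nearest neighbours
            grid[:, 1:] += topple[:, :-1]
            grid[:, :-1] += topple[:, 1:]
        return grid

    # e.g. plot with matplotlib: plt.imshow(stabilize_sandpile(2**14, 100))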
Another area where research is intertwined with illustration is the study of William Thurston's geometrization conjecture, proved by Grigori Perelman.
This key tool in our understanding of 3-manifolds implies, for instance, the famous Poincaré conjecture.
Geometrization states that any compact topological 3-manifold can be cut into finitely many pieces, each of which carries a geometric structure.
There are eight possible such structures, known as Thurston geometries.
Some of them are rather familiar to mathematicians, such as the 3-dimensional euclidean and hyperbolic spaces or the 3-sphere.
Despite the fact that Thurston's geometries have been intensively studied, the more exotic geometries such as Nil and Sol still defy our “Euclidean-grown” spatial intuition. Keeping in mind the well-established power of our physical and visual intuition to aid geometrical research, Rémi Coulon, Elisabetta Matsumoto, Henry Segerman, and Steve Trettel developed virtual reality software to immerse the user in any of the eight Thurston geometries <cit.> (see Figure <ref>).
Besides building the much-needed intuition for these spaces, the development of the software itself raised mathematical questions. The meshes used in most animations must be replaced with raymarching techniques, which require computation of distances between objects. But, for example, there is no closed formula for the distance in Nil or Sol! Thus, the development of the algorithms themselves becomes a mathematical result in its own right. Work on Thurston's geometries has very often been closely tied with illustration. For example, the study of Spheres in Sol by Matei P. Coiculescu and Rich Schwartz in Geometry and Topology (positively) answers an old open question, whether metric spheres in Sol are homeomorphic to S^2 <cit.>. Each step of the proof was found after numerous graphical experiments, and 3D printing brings yet another perspective (see Figure <ref>).
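For readers unfamiliar with the technique, the core of raymarching by sphere tracing fits in a few lines. The sketch below is a Euclidean toy version of our own (with an arbitrary example scene); the non-Euclidean renderers mentioned above replace the distance estimate and the straight-line step by their analogues in the respective geometry:

    import numpy as np

    def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=50.0):
        # March along the ray origin + t*direction using a signed-distance function:
        # nothing in the scene can be closer than sdf(p), so stepping by that amount is safe.
        t = 0.0
        for _ in range(max_steps):
            d = sdf(origin + t * direction)
            if d < eps:
                return t          # hit: distance to the visible surface along this ray
            t += d
            if t > max_dist:
                break
        return None               # the ray escapes the scene

    # example scene: a unit sphere centred at the origin
    unit_sphere = lambda p: np.linalg.norm(p) - 1.0
    hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), unit_sphere)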
For an example at the intersection of algebraic geometry and number theory, a few key illustrations have helped drive developments in the field of p-adic analytic geometry. At the same time, illustrating the p-adic analogs of complex analytic manifolds presents unique challenges, not the least of which is the fact that the p-adic numbers themselves are topologically a Cantor set. Nevertheless, clever and meaningful illustrations of p-adic analogs to the complex upper half-plane and complex unit disk have proved incredibly fruitful.
An illustration of Vladimir Drinfeld's p-adic upper half plane as tubular neighborhoods of Bruhat-Tits trees (Figure <ref>) clarified the behavior of the action of GL_2(ℚ_p) by Möbius transformations. Understanding this action was instrumental
in the construction of p-adic analytic uniformization of elliptic curves (reflecting the famous complex analytic uniformization of elliptic
curves as quotients of the complex upper half plane). Similarly, Peter Scholze's illustrations of the adic unit ball (Figure <ref>) provide access to the foundational geometric construction in his theory of perfectoid spaces <cit.>. The act of illustrating the central geometric objects of p-adic analysis has proven both beneficial and uniquely challenging, demanding a systematic and critical approach.
An example arising somewhat further afield of geometry is the work of Allen Knutson, Terence Tao, and Christopher Woodward in representation theory <cit.>.
Knutson and Tao introduced the notion of honeycombs (subsets of the plane as in Figure <ref>) to solve a longstanding open problem: Alfred Horn's conjectured shape of the polyhedral cone (sometimes called the Littlewood-Richardson cone) of triples of eigenvalue spectra (λ, μ, ν) for Hermitian matrices A, B, C which satisfy A + B+ C = 0.
This sum-of-Hermitian-matrices problem has applications to perturbation theory, quantum measurement theory, and the spectral theory of self-adjoint operators. Knutson and Tao were able to show that there exist such Hermitian matrices with the specified spectra if and only if there exist honeycombs with a specified boundary. They used this correspondence to prove Horn's conjecture. The honeycomb formalism also led naturally to a polynomial time algorithm to decide whether a triple of spectra can be realized by Hermitian matrices. In a follow-up, Knutson, Tao, and Woodward extended the study of honeycombs to define puzzles (Figure <ref>), which they described as replacing the Schubert calculus in past approaches to the Hermitian matrices problem, and used geometric arguments to give a complete characterization of the facets of the cone <cit.>. Puzzles and honeycombs provide an example of the power of rephrasing an algebraic problem as one about visual objects, where we can draw on other types of intuition. In what circumstances can we expect these sort of insightful geometric versions to exist for algebraic problems? When a geometric analog exists, it naturally exhibits additional features – can we then find new corresponding objects in the original problem? For example, what do the vertices of a honeycomb actually represent?
There are, of course, many more examples. Among these, the most famous may be the computer exploration of the Mandelbrot set and fractal geometry in the 1980's
(Figure <ref>). In the 1990's, Jeffrey Weeks created SnapPea (which now exists as SnapPy under the guidance of Marc Culler and Nathan Dunfield[<http://snappy.computop.org>]) as part of his doctoral thesis <cit.>, to explore the cusp structures of hyperbolic 3-manifolds. Its use inspired David Gabai, Robert Meyerhoff, and Peter Milley to invent mom structures to answer questions of the volumes of hyperbolic 3-manifolds <cit.>. In the same decade, the Geometry Center founded by Al Marden was focused on the use of computer visualization in mathematics.[<http://www.geom.uiuc.edu/>] It hosted mathematicians such as Eugenio Calabi, John Horton Conway, Donald E. Knuth, Mumford, and Thurston, among others, and produced the GeomView software
used to create some famous early computer visualizations, including the sphere eversion[Outside In, (1994), <http://www.geom.uiuc.edu/docs/outreach/oi/>] and illustrations for knot theory.[Not Knot, (1991), <http://www.geom.uiuc.edu/video/NotKnot/>]
Illustration has shown its importance in virtually all areas of mathematics, from random tilings in combinatorics,
to diagrammatic approaches to algebra,
to Apollonian circle packings and Schmidt arrangements in number theory,
and their higher dimensional analogs,
to mention just a few.
The examples above focus on pure mathematics, which is poised to join a great many other areas of scientific endeavour embracing illustration. In applied mathematics, illustration has already made great strides. Consider, for instance, how Alan H. Schoen described the gyroid decades before it was mathematically proven to be a minimal surface.
He worked with both a sculpture of the surface and various models in Computer-Aided Design / Modelling (CAD/CAM), which ultimately led to the structure being found in various lipid and liquid crystalline systems <cit.>.
Other fields, like mathematical geometry processing, rely equally on quantitative measures and qualitative visualizations for judging the quality of their results <cit.>.
Still, a back-and-forth between the development of mathematical procedures and their application to real-world data yields results that are well-grounded in mathematical quality guarantees, yet efficient and relevant for their applications.
In the field of exploratory data analysis, visualizations even form the main tool for finding research results.
Here, large, possibly high-dimensional, datasets are investigated for patterns by embedding them, e.g., as 2D scatter plots that can then be inspected by domain experts.
With this technique, in 2020, a novel type of anti-tumor cell was discovered <cit.>.
None of these research results would have been possible without the utilization of illustrations.
Furthermore, this last example utilized non-linear dimensionality reduction techniques for the visualization of high-dimensional data. These techniques were themselves the result of research driven by the desire for better illustrations.
The very closely allied field of computation in mathematics is a little ahead of illustration in its maturity as a tool for mathematical research. To give just one significant example in number theory, much recent activity has centered around the multi-million-dollar Simons Collaboration on Arithmetic Geometry, Number Theory, and Computation,[<https://simonscollab.icerm.brown.edu/>] whose mission states: “Our common perspective is that advances in computational techniques accelerate research in arithmetic geometry and number theory, both as a source of data and examples, and as an impetus for effective results. The dynamic interplay between experiment, theory, and computation has historically played a pivotal role in the development of number theory.” The work supported by the collaboration is rapidly expanding the L-Functions and Modular Forms Database,[<http://www.lmfdb.org>] an online database of mathematical objects (including visualizations) that is at the center of much modern progress in number theory.[See the extensive list of publications arising from the collaboration: <https://simonscollab.icerm.brown.edu/publications/>.] The discipline of mathematical computation is supported by a number of journals[Consider for instance “Advances in Computational Mathematics”, <https://www.springer.com/journal/10444>, the “Journal for Computational and Applied Mathematics”, <https://www.sciencedirect.com/journal/journal-of-computational-and-applied-mathematics>, or the “Journal of Computational Mathematics”, <https://www.jstor.org/journal/jcompmath>.] and has engendered areas of research in their own right, such as computational geometry.
Illustration appears to be following a similar trajectory. As it becomes more accessible and pervasive it demands rigorous and careful study, leading to the development of mathematical illustration as a discipline in its own right.
§ ILLUSTRATION AS A DISCIPLINE
Thurston once said, “mathematicians usually have fewer and poorer figures in their papers and books than in their heads” <cit.>.
Although the power of good illustrations to advance mathematical knowledge is clear, they are not simple to produce.
The challenges to creating powerful and trustworthy illustrations come on many levels.
On the one hand, some challenges are technical and concern rather practical questions regarding the production of mathematical illustrations.
Especially with newer technologies like virtual reality or 3D modeling, the learning curves are steep and while there are general tutorials available, just a handful are targeting issues specific to the illustration of mathematics.[A noteworthy example for introductory material, aimed at illustration of mathematics, is the Processing tutorial of Roger Antonsen, to be found online: <https://rant.codes/pcmi/>.]
See, for instance, <cit.> for a nice discussion of some of the challenges of 3D printing for mathematical illustration.
On the other hand, there are challenges within the mathematics itself.
The objects to be illustrated do not necessarily come with a description that lends itself to a suitable illustration.
Thus, a necessary initial step is the translation of the underlying mathematical object into a form that allows illustration in the first place.
However, this transformation is usually not enough by itself.
Subsequent steps aim at making the illustration effective, which can entail bridging the gap between the theoretical and the computational,
crafting a responsive and immersive experience, or ensuring the illustration actually imparts the desired aspects of the mathematical object.
In particular, the last step raises important theoretical questions: What exactly do we want to illustrate?
And how do we do so faithfully, i.e., without creating wrong impressions of the mathematical object illustrated?
Mathematics is not the first field of research to tackle these difficulties.
There are parallels to be found in the development of the scientific method and statistical methods for the natural sciences: Which experimental designs and statistics can be relied upon for developing conjectures and conclusions?
Cornerstones of the scientific method were laid down, such as the important notion of the falsifiability of a scientific theory.
Similarly, statistical methods amplified their usefulness and trustworthiness when they expanded from purely descriptive statistics to inferential statistics and statistical tests that assess the validity of results.
So in fact, all scientific fields have progressed by examining head-on some of the questions raised by their methodologies.
The question of illustrating well has been asked in statistics and data visualization, as explored in Darrell Huff's best-selling book How to Lie with Statistics, which became a standard college text.
The pioneering and richly illustrated books of Edward Tufte and Tamara Munzner on data visualization established that field in its own right. Every year, new research in data visualization is discussed at various venues, such as the Institute for Electrical and Electronics Engineers (IEEE) VIS meeting or the EuroVis conference, and published in outlets like the IEEE Transactions on Visualization and Computer Graphics. As it matures, the data visualization community addresses meta-questions on its research, such as where “the value of visualization” lies <cit.> or “Are we making progress in visualization research?” <cit.>.
Thus, the example of data visualization provides a pattern of development that the field of mathematical illustration might follow.
However, in comparison, mathematical illustration is just taking the first steps on its journey towards being a research field.
It is still facing basic challenges with regard to creating and evaluating the illustrations it produces.
As an example of these challenges, consider the images in Figure <ref> showing polynomial roots near i in the complex plane.
The leftmost is an image of all roots of polynomials of degree 3 with integer coefficients between -N and N, here with N=10 <cit.>.
The rightmost is an image of all roots of polynomials with coefficients from {-1,1} and degree no more than D, where in this case D = 13.
In both, in the region around i, there appears to be a hole shaped like two ellipses overlapping at right angles.
How to interpret this shape?
It turns out that at left it is very much an artifact of the algorithm for creating these images. If you consider the picture as an approximation of all cubic roots (by allowing N to tend to infinity), there are infinitely many such polynomials. By limiting N, we are looping through them in a growing hypercube in the coefficient space.
The corners of this cube are the corners jutting in toward i, and as the cube expands in the coefficient space, this hole will get filled in.
If instead of looping through coefficient space in a growing cube, we choose a different ordering, the limiting shape changes.
This is shown in the center image of Figure <ref>.
On the right, however, we can think of approximating the set of all roots of polynomials with coefficients ± 1 by allowing D to tend to infinity. In this case, the size and shape of the void remains essentially fixed, no matter how large D is taken.
So this hole `really exists' in the picture!
The shapes one sees at the boundaries of the limiting set of roots are explained in terms of fractal geometry and certain symmetries of this set.[These features are beautifully described by John Baez on his personal website: < https://math.ucr.edu/home/baez/roots/>.]
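Experiments of this kind are easy to repeat. The sketch below is our own code, not the code used for Figure <ref>; it gathers the two families of roots with N=10 and D=13 as in the text, after which a scatter plot of a window around i shows the hole discussed above (or, for the cubic family with larger N, shows it filling in):

    import itertools
    import numpy as np

    def cubic_roots(N):
        # roots of all degree-3 polynomials with integer coefficients in [-N, N]
        roots = []
        rng = range(-N, N + 1)
        for a3, a2, a1, a0 in itertools.product(rng, repeat=4):
            if a3 != 0:
                roots.extend(np.roots([a3, a2, a1, a0]))
        return np.array(roots)

    def littlewood_roots(D):
        # roots of all polynomials with coefficients +/-1 and degree at most D
        roots = []
        for d in range(1, D + 1):
            for coeffs in itertools.product((-1, 1), repeat=d + 1):
                roots.extend(np.roots(coeffs))
        return np.array(roots)

    # e.g. r = littlewood_roots(13); plt.plot(r.real, r.imag, ',')
    #      plt.xlim(-0.4, 0.4); plt.ylim(0.6, 1.4)   # window around i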
As another example of the challenges discussed above, the virtual reality versions of Thurston's geometries of <cit.> are a profound way to experience these spaces, but can feel overwhelming and nearly psychedelic, as our brains struggle to make sense of what we are seeing.
As an alternative, for several of the geometries, it is possible to place the geodesics of the geometry into familiar euclidean space as curves (see Figure <ref>).
The interplay between these two methods of illustration can be much more enlightening than either one alone.
The mathematical arguments that are developed to explain how one view can predict the other can end up as the basis of a mathematical proof.
Conjectures and mathematical arguments about the space can quickly be evaluated by predicting their effect on these illustrations.
§ LOOKING FORWARD
Illustrations have been used both historically and in recent state-of-the-art research projects to expand the boundaries of knowledge in pure mathematics.
Other fields of research, such as statistics and microbiology, have systematized visualization, and studied it in its own right.
However, as our gallery of examples shows, the quality of illustrations in pure mathematics varies, and there is no common framework to create, discuss, or evaluate them.
To further the possibilities that illustrations provide, there needs to be a dedicated community to tackle the next important problems.
These include, among others:
* How to identify illustrations that have rich potential to provide insight?
* How to identify (and mitigate) the ways that illustrations can mislead and distract?
* How to measure the fidelity of an illustration; are perceived patterns a result of its construction or the underlying mathematics?
* How can we harness the processing power and pattern-recognition capabilities of the human visual system?
* How can we empower a next generation of mathematical illustrators to create and leverage sophisticated illustrations?
* And how do we increase professional recognition of the illustration of mathematics?
Exploring these questions will lay the foundation of a discipline built around the illustration of mathematics, providing powerful tools for the advancement of mathematical research.
plain
|
http://arxiv.org/abs/2307.04423v1 | 20230710085537 | Density asymmetry and wind velocities in the orbital plane of the symbiotic binary EG Andromedae | [
"N. Shagatova",
"A. Skopal",
"E. Kundra",
"R. Komžík",
"S. Yu. Shugarov",
"T. Pribulla",
"V. Krushevska"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Astronomical Institute, Slovak Academy of Sciences,
059 60 Tatranská Lomnica, Slovakia
[email protected]
Main Astronomical Observatory of National Academy of Sciences of Ukraine, 27 Akademika Zabolotnoho St., 031 43, Kyiv, Ukraine
Non-dusty late-type giants without a corona and large-scale pulsations represent objects that do not fulfil the conditions under which standard mass-loss mechanisms can be applied efficiently. Despite the progress during the past decades, the driving mechanism of their winds is still unknown.
One of the crucial constraints on aspiring wind-driving theories can be provided by the measured velocity and density fields of the outflowing matter. The main goal of this work is to match the radial velocities of the absorbing matter with the depth of its origin in the red giant (RG) atmosphere of the S-type symbiotic star EG And.
We measured fluxes and radial velocities of ten Fe I absorption lines from spectroscopic observations with a resolution of ≈ 30 000. At selected orbital phases, we modelled their broadened profiles, including all significant broadening mechanisms.
The selected Fe I absorption lines at 5151 - 6469 Å originate at a radial distance ≈ 1.03 RG radii from its centre. The corresponding radial velocity is typically ≈ 1 km s^-1, which represents a few percent of the terminal velocity of the RG wind. The high scatter of the radial velocities of several km s^-1 in the narrow layer of the stellar atmosphere points to the complex nature of the near-surface wind mass flow.
The average rotational velocity of 11 km s^-1 implies that the rotation of the donor star can contribute to the observed focusing of the wind towards the orbital plane.
The orbital variability of the absorbed flux indicates the highest column densities of the wind in the area between the binary components, even though the absorbing neutral material is geometrically more extended from the opposite side of the giant. This wind density asymmetry in the orbital plane region can be ascribed to gravitational focusing by the white dwarf companion.
Our results suggest that both gravitational and rotational focusing contribute to the observed enhancement of the RG wind towards the orbital plane, which makes mass transfer by the stellar wind highly efficient.
Density asymmetry and wind velocities in the orbital plane of the symbiotic binary EG Andromedae
N. Shagatova <ref>
A. Skopal <ref>
E. Kundra <ref>
R. Komžík <ref>
S. Yu. Shugarov <ref>
T. Pribulla <ref>
V. Krushevska <ref>,<ref>
Received / Accepted
============================================================================================================================================================================================
§ INTRODUCTION
The atmospheres of late-type giant stars include slow and dense winds reaching terminal velocities lower than 100 km s^-1, with decreasing values for later spectral types <cit.>. For the asymptotic giant branch (AGB) evolutionary stage, the driving mechanism of the outflow is thought to be based on a combination of the dust-forming levitation of the wind by stellar pulsations and of the acceleration by radiation pressure on dusty envelopes <cit.>. On the other hand, the lack of dust in the atmospheres of normal red giant stars (RGs) and the inefficiency of other known driving mechanisms represent a complication for the understanding of their winds. Since the late 20th century, the dissipation of magnetic waves has been considered a key ingredient in their mass-loss process. A review of attempts to resolve the mechanism behind RG winds can be found in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
Recently, <cit.> investigated the wind properties of Arcturus (K1.5 III) using the Wentzel–Kramers–Brillouin Alfvén wave-driven wind
theory <cit.>. They found that the wave periods that are required to match the observed damping rates correspond to hours to days, consistent with the photospheric granulation timescale.
The late-type giants play the role of donor star in the symbiotic stars (SySts), which are long-period (P of the order of years) binary systems with a mass transfer of the giant wind towards a compact companion, usually a white dwarf <cit.>. The donor star supplying the dense wind matter is either a normal RG star (S-type SySts) or an AGB star (D-type SySts). The RGs in S-type systems have experienced the first dredge-up, which was confirmed by their low ^12C/^13C ratio in the range 5 - 23 <cit.>.
The white dwarf as a source of ultraviolet radiation enables us to probe the cool wind from the giant at different directions. For example, the continuum depression around the Ly-α line as a function of the orbital phase has shown a very slow wind velocity up to 1-2 RG radii, R_ g, above the donor surface and a steep increase to the terminal velocity afterwards in S-type SySts <cit.>. For D-type systems, this way of deriving the wind velocity profile is complicated by very long (P≈ 10-100 yr) and often poorly known orbital periods. However, for single O-rich AGB stars, the expansion velocities of the wind were determined by <cit.> as a measure of the half-width of the molecular lines at the baseline level. The majority of the stars in their sample has a distinct low-velocity region in front of the velocity jump to the terminal value, but in a few cases, the wind reaches terminal velocity already within the innermost parts. In one case, the authors found a deceleration of the gas as it moves away from the star (R Dor). The low-velocity region close to the star and a steep increase to the terminal velocity in O-rich AGB stars was also indicated by molecular line modelling with a non-local thermal equilibrium radiative transfer code <cit.>. In C-rich AGB stars, the wind velocity profile can be steeper because the opacity of the dust grains is higher <cit.>. For the C-rich AGB star CW Leo, a steep increase in the wind velocity was found to start at a distance of ≈ 5 stellar radii <cit.>.
The presence of the hot white dwarf <cit.> accompanied by the cool giant <cit.> in S-type SySts leads to a complex ionization structure of the circumbinary material.
During quiescent phases when there is no ongoing eruptive burning on the surface of the white dwarf, a fraction of the surrounding RG wind is photoionized by energetic radiation from the hot component. As a result, the neutral area around the RG is cone-shaped, with the RG near its apex facing the white dwarf <cit.>, where a thin boundary between the neutral and ionized zone is determined by the balance between the flux of ionizing photons from the white dwarf and the flux of neutral particles from the RG.
EG And is an S-type SySt with no recorded outburst of its white dwarf. The effective temperature of the white dwarf is ≈ 7.5× 10^4 K <cit.> and its mass is 0.4± 0.1 M_⊙ <cit.>. The system is eclipsing <cit.> with an orbital inclination of ≈ 80^∘ <cit.> and an orbital period of 483 days <cit.>.
The donor star is an RG of spectral class M2-3 III <cit.> with an effective temperature ≈ 3700 K <cit.>, luminosity ≈(1-2)× 10^3 L_⊙ <cit.>, and metallicity [Fe/H]≈ 0 <cit.>. Its mass is estimated to be 1.5± 0.6 M_⊙ and its radius is estimated to be 75± 10 R_⊙ <cit.>, corresponding to log g ≈ 0.5 - 1.1. The slow and dense wind of the RG is assumed to have a terminal velocity v_∞≈ 30 km s^-1 <cit.>.
The velocity profile of the wind suggests an almost steady wind up to around 1.5 R_ g from the RG centre and subsequent rapid acceleration towards the terminal velocity, as derived from hydrogen column density values measured from the Lyα-line attenuation <cit.>. This approach accounts for the wind density distribution at the near orbital plane due to the point-like relative size of the white dwarf as a source of the probing radiation.
The giant wind in this system is distributed asymmetrically, with denser parts concentrated at the orbital plane and diluted areas located around the poles <cit.>.
The geometric distribution and radial velocity (RV) profile of the RG wind are essential components for exploring the physical mechanism driving the outflow and shaping the RG wind.
In this work, we analyse the orbital variability of fluxes and RVs of Fe I absorption lines of EG And (Sect. <ref>). We intend to match the resulting RVs of individual lines with the depth of their origin in the atmosphere by modelling their profile using a semi-empirical model atmosphere (Sect. <ref>) and including several broadening mechanisms (Sect. <ref>). The results are given in Sect. <ref>. The discussion and conclusions can be found in Sects. <ref> and <ref>, respectively.
§ OBSERVATIONS
In the optical wavelength range, the main source of the continuum radiation in EG And is the RG companion <cit.>. Its spectrum is superposed with dominant Balmer emission lines arising in the symbiotic nebula and many absorption lines of molecules and atoms originating in the cool giant wind <cit.>.
We collected 53 spectroscopic observations from Skalnaté Pleso Observatory (SP) from 2016 - 2023 in the wavelength range 4200 - 7300 Å (Table <ref> or <ref>). The observatory is equipped with a 1.3 m Nasmyth-Cassegrain telescope (f/8.36) with a fibre-fed échelle spectrograph (R∼30 000) similar to the MUSICOS design <cit.>.
The spectra were reduced with the Image Reduction and
Analysis Facility (IRAF; <cit.>) using specific scripts and programs
<cit.>. The spectra were wavelength-calibrated using the ThAr hollow-cathode lamp. The achieved accuracy for our set of spectra corresponds to the systematic error of the RV measurements, which is typically in the range 0.2 - 0.6 km s^-1.
Our spectra were dereddened with E_ B-V = 0.05 mag
<cit.> using the extinction curve of <cit.>.
We determined the orbital phase φ of EG And using
the ephemeris of the inferior conjunction of the RG
(φ = 0) given as <cit.>
JD_ sp. conj. =
2 450 683.2(± 2.4) + 482.6(± 0.5)× E .
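For completeness, converting a Julian date to the orbital phase used throughout this paper amounts to one line; the helper below is our own illustrative sketch rather than part of the reduction pipeline:

    def orbital_phase(jd, jd_conj=2450683.2, period=482.6):
        # phase in [0, 1), with phi = 0 at the inferior conjunction of the red giant
        return ((jd - jd_conj) / period) % 1.0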
We assumed a systemic velocity of
v_ sys = -94.88 kms^-1 <cit.>.
Similar values were determined by <cit.>, <cit.>, <cit.>, and <cit.>.
We converted the spectra from relative into absolute fluxes by scaling them to the closest-date photometric fluxes using a fourth-degree polynomial function. We used the UBVR_ C photometry of EG And published by <cit.> together with new photometric observations obtained at the G2 pavilion of the Stará Lesná Observatory, which is equipped with a 60 cm, f/12.5 Cassegrain telescope <cit.>. To complement our dataset during 2022, we used photometric observations available in the International Database of the American Association of Variable Star Observers (AAVSO[<https://aavso.org>]).
We converted the photometric magnitudes into fluxes according to the calibration in Table 2.2 of <cit.>.
§ ANALYSIS AND RESULTS
To investigate the velocity distribution in the RG atmosphere of EG And, we selected ten Fe I absorption lines between 5151 and 6469 Å that were not severely blended.
We measured their orbital variability and modelled their absorption profiles to track the density conditions and dynamics of the corresponding part of the wind area.
§.§ Orbital variations of the Fe I absorption lines
The selected absorption lines of neutral iron show the orbital variability in RVs and absorbed fluxes. To measure these changes along the orbit, we fitted the lines with a Gaussian profile superimposed on a fourth-order polynomial function representing the continuum radiation of the spectrum (Sect. <ref>) using the curve-fitting program Fityk[<https://fityk.nieto.pl>] <cit.>.
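A schematic stand-in for these Fityk fits, written with scipy, could look as follows; the wavelength window, initial guesses, and any barycentric or systemic-velocity corrections are left as assumptions of this sketch:

    import numpy as np
    from scipy.optimize import curve_fit

    C_KMS = 299792.458  # speed of light [km/s]

    def line_model(wvl, area, center, sigma, *poly):
        # fourth-order polynomial pseudo-continuum minus a Gaussian absorption component
        continuum = np.polyval(poly, wvl)
        gauss = area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((wvl - center) / sigma) ** 2)
        return continuum - gauss

    def measure_line(wvl, flux, lab_wvl):
        # initial guesses are purely illustrative; the five trailing values are the polynomial coefficients
        p0 = [0.1, lab_wvl, 0.2, 0.0, 0.0, 0.0, 0.0, np.median(flux)]
        popt, _ = curve_fit(line_model, wvl, flux, p0=p0)
        area, center = popt[0], popt[1]
        rv = C_KMS * (center - lab_wvl) / lab_wvl   # raw RV; further velocity corrections applied separately
        return rv, area                             # absorbed flux = area of the Gaussian component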
The resulting variability in RV values, v_r, is plotted in Fig. <ref> together with the RV curve of the RG according to the solution of <cit.>. Shifts up to ≈ -5 km s^-1 in the RVs of individual Fe I absorption lines relative to the RG curve are measured. This is consistent with a slow outflow of the absorbing material. However, around orbital phases φ≈ 0 - 0.2, the RV values, especially of the Fe I 5340 Å line, suggest a slow inflow.
The orbital variability of the fluxes (Fig. <ref>) shows the strongest absorption around the orbital phase ≈ 0.6, while the weakest absorption is tentatively indicated at φ≈ 0.1, although the data around this orbital phase are sparse. When the conical shape of the neutral wind area around the RG is taken into account <cit.>, this result points to the highest densities of the wind between the RG and the apex of the neutral area cone (Fig. <ref>). This agrees with the orbital variability of the absorption and the core-emission component of the Hα line, which suggests that high-density matter lies in the area between the binary stellar components <cit.>. The complete list of measured RV and flux values is given in Tables <ref> and <ref> in the appendix.
§.§ Model atmosphere grid
The spread of RV values of the Fe I absorption lines within ≈ -5/+3 km s^-1 around the RV curve of the RG, v_r^ g (Fig. <ref>), suggests that these lines originate in the vicinity of the stellar surface. To match the velocities with a depth in the RG atmosphere through modelling the profiles of Fe I lines (Sect. <ref>), we constructed a semi-empirical model atmosphere. This model is based on a simplified extension of the MARCS model atmosphere <cit.> up to a distance of 150 R_ g from the stellar centre. We defined the distribution of three physical parameters in the atmosphere as a logarithmically spaced grid: the neutral hydrogen density N_ H [cm^-3], temperature T [K], and electron pressure P_ e [Ba] over the required range of radial distance r [R_ g].
The MARCS model atmosphere extends up to a distance of 1.1 R_ g from the stellar centre. From the available database,[<https://marcs.astro.uu.se>] we selected the model with parameters closest to those of the RG in EG And (Sect. <ref>), a moderately CN-cycled model with ^12C/^13C=20, with a spherical geometry, effective temperature T_ eff=3700 K, mass M=1.0 M_⊙, log g = 0.5, metallicity [Fe/H] =0, and microturbulence parameter of 2 km s^-1, which is a typical value for RGs in S-type SySts <cit.>. The selected model atmosphere corresponds to a star with a radius R = 93 R_⊙ and a luminosity L = 1478 L_⊙.
Beyond the radial distances covered by the MARCS atmosphere, we set the extrapolation up to r=150 R_ g, where the wind density is sufficiently low to have a negligible impact on the Fe I line absorption profile. At this outer edge of the atmosphere model, we estimated values of N_ H and T from the hydrodynamical simulation of the M-giant γ Eri wind by <cit.>. We assessed the corresponding value of P_ e for a representative value of the ionization fraction ≈ 10^-6 for dense interstellar medium clouds <cit.>.
We defined the values of the physical parameters N_ H, T and P_ e between a radial distance 1.1 and 150 R_ g by interpolating the corresponding functions (Table <ref>). The selection of the N_ H(r) interpolation function has a crucial effect on the Fe I absorption line profile. We used the form corresponding to the model of measured H^0 column densities of EG And by <cit.>,
N_ H(r) = n_1/(2λ_1 R_ g) · (1 + ξ r^(1-K))/r^2,
where n_1, ξ [this parameter is given as ξ=n_Kλ_1/(n_1λ_K), where n_K is a model parameter, and λ_K is the Kth eigenvalue of the Abel operator] and K are the model parameters, and λ_1 = π/2 is the Abel operator eigenvalue <cit.>.
Since the column density model is most reliable at distances of r of several R_ g, we applied the condition on the interpolation function (<ref>) that N_ H(r=3 R_ g)=1.6× 10^10 cm^-3, that is, it equals the value of model J (i=80^∘) from <cit.>.
This approach led to smooth profiles of the atmosphere parameters over the required range of radial distances (Fig. <ref>).
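A minimal sketch of this normalization step is given below, with K and ξ set to placeholder values (the fitted parameters are not restated here) and n_1 fixed by the condition at r = 3 R_ g:

    import numpy as np

    LAMBDA_1 = np.pi / 2.0    # first eigenvalue of the Abel operator

    def n_H(r, n1, xi, K, R_g_cm):
        # wind density N_H(r) [cm^-3], with r in units of R_g (equation above)
        return n1 / (2.0 * LAMBDA_1 * R_g_cm) * (1.0 + xi * r ** (1.0 - K)) / r ** 2

    def n1_from_anchor(xi, K, R_g_cm, r_anchor=3.0, n_anchor=1.6e10):
        # fix n1 so that N_H(r_anchor) matches the anchor density
        return n_anchor * 2.0 * LAMBDA_1 * R_g_cm * r_anchor ** 2 / (1.0 + xi * r_anchor ** (1.0 - K))

    R_g_cm = 75 * 6.957e10                 # stellar radius in cm
    K, xi = 2.5, 10.0                      # placeholder values, not the fitted ones
    n1 = n1_from_anchor(xi, K, R_g_cm)
    r = np.geomspace(1.1, 150.0, 200)      # radial grid in units of R_g
    density = n_H(r, n1, xi, K, R_g_cm)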
Finally, we took the asymmetric conical shape of the neutral wind zone into account. For orbital phases when the line of sight crosses the boundary between neutral and ionized wind, we estimated its distance from the RG surface from Fig. 6 in <cit.>. We assumed that only the neutral wind contributes to the absorption in Fe I lines. Therefore, we limited the radial size of the model atmosphere to the H^0/H^+ area border at these orbital phases. At the rest of the orbital phases, the radial length of the neutral area was assumed to be 150 R_ g.
§.§ Line profile of the Fe I absorption lines
To reproduce the spectral profiles of ten Fe I absorption lines from 5151 to 6469 Å at all orbital phases with a step of 0.1, we considered several broadening mechanisms that we incorporated into a custom Python code. We used the mass absorption coefficient including natural, pressure, thermal, and microturbulence broadening in the form given by <cit.>.
The values of the Ritz wavelengths, the inner quantum numbers J, the oscillator strengths, and the excitation potentials were acquired from the National Institute of Standards and Technology (NIST) database[<https://www.nist.gov/pml/atomic-spectra-database>] and the natural damping constants from the Vienna Atomic Line Database[< http://vald.astro.uu.se>] (VALD). The values of the partition functions for Fe I, Fe II, and Fe III were interpolated through the atmosphere grid from the tables of <cit.> and <cit.>. For the atmosphere layers with temperatures below 1000 K, we assumed constant partition functions.
We calculated the values of the Hjerting function as the real part of the Faddeeva function with the wofz function within the scipy.special library[<https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.wofz.html>].
The pressure broadening was treated as caused by the collisions with neutral hydrogen, using the impact approximation with line-broadening cross sections computed as a function of the effective principal quantum numbers <cit.> with the tabulated values of the broadening cross-section σ and velocity parameter α given by <cit.>.
Furthermore, we included rotational broadening using the Python rotBroad function that is part of the PyAstronomy.pyasl library [ <https://pyastronomy.readthedocs.io/en/latest/pyaslDoc/aslDoc/rotBroad.html>]. Since the projected rotational velocity v_ rotsin (i) can be dependent on tidal forces in the outer regions of the RG <cit.>, we allowed it to be a free parameter. After first fitting trials with a free linear limb-darkening coefficient ε, most of the fits converged to ε=1. As this is a reasonable value <cit.>, we kept ε=1 in all line-profile fits.
As the typical value of the macroturbulence velocity in RGs is ≈ 3 km s^-1 <cit.>, it adds to the broadening of the absorption-line profile. Often, the radial-tangential (RT) anisotropic macroturbulence is the preferred broadening model in a spectroscopic analysis <cit.>.
On the other hand, <cit.> showed that the RT macroturbulence model is not adequate at least for solar-type stars because it overestimates the turbulent velocity dispersion. They obtained preferable results for the Gaussian anisotropic macroturbulence model. The resolution of our spectra and the relatively low macroturbulent velocity do not allow us to distinguish between different macroturbulence models. Generally, there is agreement that neglecting macroturbulence as a source of line broadening leads to overestimated values of v_ rotsin (i), and, on the other hand, including a simple isotropic Gaussian macroturbulence model provides severely underestimated values of v_ rotsin (i) <cit.>. Therefore, we decided to include the isotropic Gaussian model with two values of macroturbulence velocity, 0 and 3 km s^-1, to obtain lower and upper limits of the v_ rotsin (i) values.
Finally, we included the instrumental broadening using a Gaussian kernel. The width of the Gaussian profile used in the convolution is given by the resolution R, which depends on the wavelength and was estimated using ThAr lines.
For the wavelength range of the selected lines 5151-6469 Å, the spectral resolution of our spectra ranges from ≈ 39100 to 24000. We used the broadGaussFast function from the PyAstronomy.pyasl library[ <https://pyastronomy.readthedocs.io/en/latest/pyaslDoc/aslDoc/broad.html>] to include macroturbulent and instrumental broadening. The line profiles depicted at the right bottom panel of Fig. <ref> compare the strength of individual broadening mechanisms.
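The full broadening chain can be assembled from the standard tools named above. The sketch below is our paraphrase rather than the authors' code, with an intrinsic Voigt profile built from the Faddeeva function and purely illustrative line parameters:

    import numpy as np
    from scipy.special import wofz
    from PyAstronomy import pyasl

    def voigt_absorption(wvl, wvl0, doppler_width, a, depth):
        # Hjerting function H(a, u) = Re[wofz(u + i*a)], normalized so that `depth` is the central depth
        u = (wvl - wvl0) / doppler_width
        H = wofz(u + 1j * a).real
        return 1.0 - depth * H / H.max()

    wvl = np.linspace(5339.0, 5341.0, 400)                # evenly spaced grid [Angstrom]
    flux = voigt_absorption(wvl, 5340.0, 0.08, 0.05, 0.6)

    # rotational broadening: linear limb darkening epsilon = 1, v*sin(i) = 11 km/s
    flux = pyasl.rotBroad(wvl, flux, 1.0, 11.0)

    # one Gaussian convolution covers macroturbulent (3 km/s) and instrumental (R ~ 30000) broadening,
    # since the convolution of two Gaussians is again a Gaussian
    sigma = 5340.0 * np.sqrt((3.0 / 299792.458) ** 2 + (1.0 / (30000.0 * 2.3548)) ** 2)
    flux = pyasl.broadGaussFast(wvl, flux, sigma)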
We performed line profile modelling of Fe I lines at nine different orbital phases. Example fits at orbital phase φ=0.1 are depicted in Fig. <ref>.
We evaluated the goodness of fit using the reduced χ-square, χ_ red^2.
Its value is often > 2 due to the low value of the degrees of freedom and the uncertain value of the standard observational error, which can vary from one observation to the next. We adopted a rather strict value of a 2% standard deviation of the flux values for all observations to avoid overestimating the errors for the best-quality spectra.
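In code, the adopted statistic reduces to a couple of lines; the snippet below is a schematic illustration of the definition rather than an excerpt from the fitting script:

    import numpy as np

    def reduced_chi_square(observed, model, n_free_params, rel_error=0.02):
        # 2% of the observed flux is adopted as the standard deviation of every point
        sigma = rel_error * observed
        dof = observed.size - n_free_params
        return np.sum(((observed - model) / sigma) ** 2) / dof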
The errors due to the simplified model of macroturbulence are relatively small (Fig. <ref> and <ref>) and have practically no effect on the resulting maximum depth of the origin of the spectral line in the atmosphere. The values of the v_ rotsin (i) parameter could be affected by unresolved blending of an absorption line.
An important source of systematic error can be introduced by the simplifications in our model, namely the symmetry of the wind distribution, which except for the shape of the neutral area, does not reflect the asymmetry of the egress/ingress and orbital-plane/pole-region in the distribution of the physical quantities (Fig. 4 of <cit.> and Fig. 8 of <cit.>). Moreover, the particular shape of the neutral area itself represents a further source of systematic error. We estimated the distance from the RG centre to the ionization boundary by adapting the shape of the neutral area computed for the symbiotic binary SY Mus <cit.>. This system comprises a white dwarf that is more luminous than the hot companion in EG And by two orders of magnitude <cit.>. On the other hand, the mass loss from its RG is probably higher <cit.>, leading to a denser wind zone. These characteristics affect the shape of the ionization boundary, and the actual shape for EG And can therefore deviate from the one in SY Mus. However, the similar measured egress values of the H^0 column densities and the practically identical asymptote to the egress ionization boundary in the orbital plane, which is located at φ≈ 0.17 <cit.>, strongly suggest that the ionization boundaries in these systems are similar.
The lack of measured H^0 column densities at ingress orbital phases for EG And precludes us from modelling the full shape of the ionization boundary. To estimate the sensitivity of the resulting physical parameters on the location of the ionization boundary, we performed fits for a shifted radial distance of the ionization boundary by -0.5R_ g and +0.5R_ g for a subset of modelled spectra with all Fe I absorption lines and orbital phases with the finite radial size of the neutral area represented. For the ionization boundary closer to the RG by -0.5R_ g, we obtained the same values of column densities or lower values by up to 6.0%, and for the boundary that is more distant by +0.5R_ g, the values were the same or higher by up to 1.1%. In both cases, higher values of the errors of n_ H correspond to orbital phases ≈ 0.4 - 0.7, where the position of the ionization boundary is closer to the RG. The corresponding values of the projected rotational velocity v_ rotsin (i) remained unchanged for all fits, as did the values of the minimum distance from the RG centre r (Sect. <ref>). This confirms the dominant role of the densest parts of the RG atmosphere in the formation of Fe I absorption line profiles. Given the rather low magnitude of the errors of n_ H due to the uniform shifts and the most probably similar shape of the ionization structure for both systems, which is supported by similar profiles of the measured H^0 column densities <cit.>, an uncertain precise location of the apex of the neutral zone will probably not seriously affect the ratios of n_ H values at individual orbital phases yielded by the line-profile modelling.
Another source of systematic error comes from the uncertain level of the continuum, which is mainly due to the spread in the photometric data. In our dataset, the typical deviation of the continuum values from the average relative to the flux ranges from 3% to 9% at the positions of individual Fe I absorption lines. This leads to errors in the n_ H values with a magnitude of typically ≈ 10 - 20%, v_ rotsin (i) of ≈ 1 - 7% and a minimum distance r of ≈ 0.1 - 0.5%. Therefore, the uncertainty in the level of the continuum represents a more significant source of error than the uncertainty in the position of the ionization boundary. Still, these systematic errors are of lower magnitude than the values of the standard deviations of the resulting values from the set of modelled spectra.
§.§ Distribution of the physical parameters within the atmosphere
§.§.§ The height above the photosphere
Our models provided us with the total columns of the wind material that form the spectral profiles of individual Fe I lines in our set. From now on, the values of r are understood as the distances of the lowest layers of the atmosphere model, corresponding to the resulting neutral columns from the line-profile fits. In other words, a particular value r represents the maximum depth within the model atmosphere where the integration of the line-profile stops, and it corresponds to the deepest layer of the origin of the spectral line.
The maximum depths of the Fe I line profile fits correspond to
a relatively small height, ≈0.02 to ≈0.06 R_ g, above the RG photosphere.
Figure <ref> shows this result with the corresponding column
densities. The resulting physical parameters averaged over the orbital
phases are presented in Table <ref>.
There is no sign
of significant variations in the column density with orbital phase, but a
slightly higher average value is measured at φ=0.5-0.6 (Fig. <ref>).
§.§.§ Radial velocities
The total average and standard deviation over ten modelled spectra and ten Fe I absorption lines corresponds to an RV of -0.89 ± 1.26 km/s at a radial distance of 1.03 ± 0.01 R_g (Fig. <ref>). Assuming a terminal velocity of 30 km/s, we compared our RV values with velocity profiles obtained for EG And from modelling the measured column densities by <cit.>. As shown in Fig. <ref>, our results support very slow wind velocities close to the RG surface before the acceleration of the wind starts.
§.§.§ Rotational velocities
The orbit-averaged values of the projected rotational velocities of all modelled lines fall within 9.6 - 12.8 km/s with standard deviations of 4 - 22% (Table <ref>), except for the Fe I 6469 Å line with v_rot sin(i) = 8.5 km/s and a significantly higher standard deviation of 36%.
While it is reasonable not to expect the same rotational velocity at any depth in the RG atmosphere, the measured differences can in part be caused by errors due to the blending of the lines. Moreover, the reliability of the v_rot sin(i) determination is affected by the comparable strength of the instrumental broadening.
The average and standard deviation over the whole sample of ten line-profile models per ten fitted lines corresponds to v_rot sin(i) = 10.9 ± 2.0 km/s, which is in the typical range of ≈ 5 - 11 km/s determined for RGs in S-type SySts <cit.>. There are also much faster rotators in this group of stars, with v_rot sin(i) up to ≈ 50 km/s <cit.>.
Assuming an orbital inclination of i = 80^∘± 10^∘, we obtained v_rot = 11.1_-2.2^+2.6 km/s. Then, for an RG radius of R_g = 75 ± 10 R_⊙, the ratio of the orbital to the rotational period is P_orb/P_rot = 1.4_-0.4^+0.6. Therefore, it is possible that the rotation of the RG is bound to its orbital motion.
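For illustration, the period ratio can be reproduced with the short Python sketch below; the orbital period of EG And (≈ 482.6 d) is not quoted in this section and is assumed here from the literature, as is the solar-radius conversion constant.
import math

R_SUN_KM = 6.957e5                      # assumed solar radius in km
R_g_km = 75 * R_SUN_KM                  # giant radius adopted above
v_rot = 11.1                            # km/s, de-projected rotational velocity
P_rot_d = 2 * math.pi * R_g_km / v_rot / 86400.0
P_orb_d = 482.6                         # assumed literature orbital period of EG And (days)
print(f"P_rot ~ {P_rot_d:.0f} d, P_orb/P_rot ~ {P_orb_d / P_rot_d:.1f}")
# -> P_rot ~ 342 d and P_orb/P_rot ~ 1.4, matching the central value quoted above.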
§ DISCUSSION
For our sample of ten Fe I absorption lines, we determined
the absorbed flux and RV from their Gaussian fits
(Sect. <ref>). Both quantities show the relative
displacements for individual lines along the orbit
(Figs. <ref> and <ref>).
The largest average shift in RVs, by -3.8 km/s with respect to the v_r^g(φ) curve, is shown by the Fe I 6469 Å line (Fig. <ref>, dotted line).
Around φ = 0.1, the RVs of many lines indicate a slow flow of absorbing material towards the RG, especially the Fe I 5340 Å line with an average RV shift of +0.5 km/s.
The line-profile models accounting for several broadening
mechanisms at the selected ten orbital phases (Sect. <ref>)
enabled us to match their RV values with the deepest layer of
the atmosphere, where the absorption line is predominantly
created, characterized by r, T, N_ H and P_ e
values. For the resulting depths in the range of
1.02 - 1.06 R_ g, all averaged RV values are low.
Specifically, the outflow values lie within the interval from -0.2 to -3.7 km/s (Table <ref>). This represents 0.7 - 12.3% of the estimated terminal wind velocity of 30 km/s. While the typical RV at r ≈ 1.03 R_g is ≈ 1 km/s (≈ 3% of v_∞), there is
a considerable dispersion in individual RV values (Fig. <ref>, bottom).
The highest range of RV values is measured at the shortest
distances of ≈ 1.02 - 1.03 R_ g. This variability
can be a result of the highly complex flows of matter in the close
surroundings of cool evolved stars <cit.>.
In the light of our results, the orbital phase ≈ 0.6
seems to be exceptional in several ways. First, most of
the Fe I lines from our set reach the maximum absorbed flux
at this orbital phase (Fig. <ref>), pointing to a higher column density in the
neutral zone between the apex of its cone and the RG, that is, in the direction towards the white dwarf companion (Fig. <ref>). In the same way,
we could interpret the local maxima in the resulting column densities
of the line-profile models (Fig. <ref>).
Simultaneously, a higher dispersion of the RV values and
the overall highest outflow velocities were measured around
this orbital phase, suggesting enhanced outflow of the wind.
The same feature was observed for the core-emission and
absorption components of the Hα line at orbital
phases 0.6-0.7 <cit.>.
Higher densities and, at the same time, higher velocities
of the neutral matter may
represent a challenge for hydrodynamical simulations of
outflows from evolved cool stars in binary systems.
In our previous work, we investigated the geometrical
distribution of the RG wind in EG And. By
modelling H^0 column densities, we found that the
wind from the RG is focused towards the orbital
plane <cit.>. On the other hand, the RV orbital
variability of the [OIII] 5007 Å line, which coincides with the v_r^ g(φ) curve in both phase and amplitude, indicates a dilution of the
wind around the poles of the RG <cit.>. However, the underlying mechanism that focuses wind in
this system remains unclear. <cit.>
applied the wind-compression disk model proposed by <cit.> to RGs in S-type symbiotic systems with rotational velocities of 6-10 km/s and found that the wind
focusing occurs at the equatorial plane with a factor
of 5–10 relative to the spherically symmetric wind.
The average value of v_rot = 11.1 km/s (Sect. <ref>) is therefore sufficiently high for rotation-induced compression of the wind from the giant in EG And.
The wind focusing can also potentially explain the higher
densities of the neutral wind between the binary components, in contrast to the lower densities in the opposite direction, even though the neutral zone is more extended there.
However, the wind compression by the RG rotation cannot explain this asymmetry because this mechanism acts equally strongly in all outward directions in the plane perpendicular to the rotational axis.
Therefore, the gravitational effect of the white dwarf companion is the more natural explanation for this measured asymmetry.
In a recent 3D hydrodynamical simulation of the accretion process for representative parameters of S-type symbiotic systems by <cit.>, the centre of the oblique region with highest densities around the RG is shifted towards the white dwarf, and the wind enhancement in the area of the orbital plane is also visible in their Fig. 2. For S-type system, recurrent nova RS Oph, the simulations of <cit.> showed a dense equatorial outflow in the system as a result of the interaction of a slow wind with a binary companion. Therefore, gravitational focusing likely shapes the circumstellar matter in S-type SySts, as well as in D-type systems <cit.>.
Often, the analysis of spectral lines in stellar atmospheres is focused on the determination of elemental abundances and basic stellar parameters by comparing synthetic and observational spectra <cit.>. In our work, we aimed to assess the physical conditions at different heights in the RG atmosphere of an interacting binary star from the Fe I absorption line profiles. In principle, this approach can also be used for isolated non-dusty RG stars, which can potentially have different wind velocity profiles.
The presence of a companion to a mass-losing star affects the flow of matter in the wind region. Its gravitational pull can support the wind outflow from the RG, and we cannot exclude that in the case of single RGs, the low-velocity region is more extended and the velocities are lower. To form an idea about the relative gravitational forces of the two stellar components in EG And, we compared the values of the gravitational force of the white dwarf and of the RG at several distances r on the line joining the two stars.
When we assume the separation between the two components of 4.5R_ g from the interval given by <cit.>, the magnitude of the white dwarf force at r = 1.02 - 1.06 R_ g, where the Fe I absorption lines are predominantly created, is small but not negligible. It is about 2% of the value of the RG gravitational force. At r = 1.5R_ g, where the acceleration of the wind starts (Fig. <ref>, top), this value is ≈ 7%, and at r = 2R_ g in the acceleration region, it is ≈ 17%. At the location at ≈ 3R_ g, where the terminal velocity of the wind is reached, the gravitational forces from the two stars are already comparable. Close to the RG surface, where most of the absorption in Fe I lines occurs, the gravitational effect of the white dwarf is small, and we do not observe any tendency in the wind RVs as a function of orbital phase (Fig. <ref>), that is, at different distances of the near-surface regions from the white dwarf
companion. Therefore, the RVs near the surface of the RG in EG And are probably comparable to those in isolated giants with similar evolutionary and physical characteristics. In the future, modelling of the Fe I absorption line-profiles for single late-type giants can be used to probe this assumption.
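The force-ratio estimates above follow from simple arithmetic, as in the sketch below. The separation a = 4.5 R_g is taken from the text, while the component masses are not quoted in this section; M_WD = 0.4 M_⊙ and M_RG = 1.5 M_⊙ are assumed here as representative values consistent with the quoted percentages.
def wd_to_rg_force_ratio(r, a=4.5, m_wd=0.4, m_rg=1.5):
    # Ratio of the white-dwarf to red-giant gravitational pull at a distance r
    # (in units of R_g) from the RG centre, on the line joining the two stars.
    return (m_wd / m_rg) * (r / (a - r)) ** 2

for r in (1.04, 1.5, 2.0, 3.0):
    print(f"r = {r:.2f} R_g: F_WD/F_RG ~ {100 * wd_to_rg_force_ratio(r):.0f}%")
# -> roughly 2%, 7%, 17% and ~100%, in line with the percentages quoted above.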
§ CONCLUSIONS
The RVs of the investigated Fe I absorption lines trace the orbital motion of the giant in the binary star EG And. They are displaced from the RV curve of the giant by 0.1 to 3.8 km/s (i.e. up to 13% of the terminal wind velocity), which indicates a slow outflow of mass from the RG (Fig. <ref>).
Modelling of their profiles showed that they are formed at maximum depths from ≈ 0.02 to ≈ 0.06 R_ g above the photosphere.
The typical value of the RV at these distances is around 1 km/s, which is consistent with the previously determined wind velocity profile from measured values of H^0 column densities (Fig. <ref>).
It is interesting to note that several Fe I lines, especially the 5340 Å line, showed a slow inflow of the absorbing matter towards the RG around orbital phase 0.1. Together with the dispersion of the RV values of several lines, this may be a sign that the nature of the near-surface mass flows in the RG atmosphere is complex (Fig. <ref> and <ref>, bottom).
The orbital variations of the Fe I absorption line fluxes (Fig. <ref>) indicate that higher-density matter resides in the region between the binary components than in other directions from the RG at the near-orbital plane area. This asymmetry can be the result of gravitational interaction of the white dwarf with the RG wind, as was indicated by numerical simulations of gravitationally focused winds in interacting binaries.
The measured rotational velocity of the RG, ≈ 11.1 km/s, suggests an additional compression of the wind from the giant towards the orbital plane due to its rotation. Our results therefore support the contribution of both mechanisms to the observed RG wind enhancement and its asymmetry in the orbital plane of EG And.
The results of measuring the wind density asymmetry in the near-orbital plane region are consistent with our previous results on the wind focusing <cit.>. Our direct observational finding shows a wind density enhancement between the binary components. This confirms the high efficiency of the wind mass transfer in SySts.
We wish to thank Zoltán Garai, Andrii Maliuk, Matej Sekeráš and Peter Sivanič for obtaining 1-2 spectral/photometric observations each, which were used in this work.
We acknowledge with thanks the variable star observations from
the AAVSO International Database contributed by observers
worldwide and used in this research.
This work was supported by the Slovak Research and Development
Agency under the contract No. APVV-20-0148 and by a grant of
the Slovak Academy of Sciences, VEGA No. 2/0030/21.
VK acknowledges the support from the Government Office of the Slovak Republic within NextGenerationEU programme under project No. 09I03-03-V01-00002.
Reproduced with permission from Astronomy & Astrophysics, ESO.
§ RADIAL VELOCITIES AND FLUXES OF SELECTED FE I ABSORPTION LINES
|
http://arxiv.org/abs/2307.05193v1 | 20230711115904 | Membership Inference Attacks on DNNs using Adversarial Perturbations | [
"Hassan Ali",
"Adnan Qayyum",
"Ala Al-Fuqaha",
"Junaid Qadir"
] | cs.LG | [
"cs.LG",
"cs.CR"
] |
Membership Inference Attacks on DNNs using Adversarial Perturbations
Hassan Ali^1, Adnan Qayyum^1, Ala Al-Fuqaha^2, Junaid Qadir^3
^1IHSAN Lab, Information Technology University, Lahore, Pakistan ([email protected], [email protected])
^2Hamad Bin Khalifa University, Qatar ([email protected]).
^3Corresponding author ([email protected]). Qatar University, Doha, Qatar.
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================
Several membership inference (MI) attacks have been proposed to audit a target DNN. Given a set of subjects, MI attacks tell which subjects the target DNN has seen during training. This work focuses on the post-training MI attacks emphasizing high confidence membership detection—True Positive Rates (TPR) at low False Positive Rates (FPR). Current works in this category—likelihood ratio attack (LiRA) and enhanced MI attack (EMIA)—only perform well on complex datasets (e.g., CIFAR-10 and Imagenet) where the target DNN overfits its train set, but perform poorly on simpler datasets (0% TPR by both attacks on Fashion-MNIST, 2% and 0% TPR respectively by LiRA and EMIA on MNIST at 1% FPR).
To address this, firstly, we unify current MI attacks by presenting a framework divided into three stages—preparation, indication and decision.
Secondly, we utilize the framework to propose two novel attacks: (1) Adversarial Membership Inference Attack (AMIA) efficiently utilizes the membership and the non-membership information of the subjects while adversarially minimizing a novel loss function, achieving 6% TPR on both Fashion-MNIST and MNIST datasets; and (2) Enhanced AMIA (E-AMIA) combines EMIA and AMIA to achieve 8% and 4% TPRs on Fashion-MNIST and MNIST datasets respectively, at 1% FPR.
Thirdly, we introduce two novel augmented indicators that positively leverage the loss information in the Gaussian neighborhood of a subject. This improves TPR of all four attacks on average by 2.5% and 0.25% respectively on Fashion-MNIST and MNIST datasets at 1% FPR.
Finally, we propose a simple, yet novel, evaluation metric, the running TPR average (RTA) at a given FPR, that better distinguishes different MI attacks in the low FPR region.
We also show that AMIA and E-AMIA are more transferable to the unknown DNNs (other than the target DNN) and are more robust to DP-SGD training as compared to LiRA and EMIA.
Membership inference attacks, Deep Learning, Deep Neural Networks, Data privacy
§ INTRODUCTION
Deep Learning (DL) algorithms, particularly Deep Neural Networks (DNNs), are now being used for decision-making in several sensitive domains, such as IoT devices, healthcare <cit.>, and purchase recommendations <cit.>, where an individual's data privacy is the first and the foremost concern <cit.>. However, several previous works have shown that DNNs can leak sensitive information about their training data. Privacy attacks <cit.>, most notably membership inference (MI) attacks <cit.>, exploit this leakage to quantify the privacy of a target DNN. Given a trained target DNN and a subject (x:input to DNN, y:ground truth), an MI attack infers the training membership of the subject—whether or not the target DNN has seen the subject during training. This work focuses on the inference time MI attacks on the target DNN with emphasis on high confidence membership detection—True Positive Rates (TPR) at low False Positive Rates (FPR) (see Section <ref> for definitions).
Carlini et al. <cit.> note that most of the previous attacks are only good at inferring the non-membership of a subject, and therefore, should instead be called non-membership inference (non-MI) attacks. non-MI attacks are not suitable for the threat setting where it is desired to infer the membership of a subject with high confidence. Therefore, Carlini et al. <cit.> propose the Likelihood Ratio Attack (LiRA) in two variations—online LiRA (n-LiRA) and offline LiRA (f-LiRA)—along with a novel quantitative metric—TPR-FPR curve (in log scale)—to outperform previous attacks in membership inference, specifically in the low FPR region.
However, n-LiRA is computationally inefficient—given k subjects, the membership of which is to be inferred, n-LiRA requires training 2kN shadow DNNs, where N and k are typically in the hundreds <cit.>—making it almost impossible for future researchers to reproduce the attack, and for practitioners to audit a target DNN in limited computational settings, which is often the only feasible option. f-LiRA is computationally more efficient—requiring N shadow DNNs to be trained for k subjects—but notably less effective <cit.>.
To address this, Ye et al. <cit.> propose Enhanced Membership Inference Attack (EMIA) that uses soft labels from the target DNN to train the shadow DNNs. EMIA performs best in the high FPR region and comparably to n-LiRA in the low FPR region while being significantly computationally efficient—requiring N shadow DNNs to be trained for k subjects.
Limitations and Challenges:
Despite being effective on complex CIFAR-10, CIFAR-100, and Imagenet datasets, both f-LiRA and EMIA do not perform well on MNIST <cit.> and Fashion-MNIST datasets (Section <ref>). One possible reason is that the experimental setup of MI attacks only allows training the target DNN on half of the training set that causes the DNN to overfit, yielding a large train-test accuracy gap for complex datasets <cit.>. The overfitted target DNN leaks private information to even relatively weaker MI attacks. This is a major limitation, as in practical scenarios, the target DNN is less likely to overfit its training set. Previous works have also observed the overfitting of the target DNN <cit.>, but neither of them specifically relates it to MI attacks under-performing on MNIST and Fashion-MNIST, while being effective on complex datasets. We hypothesize two main reasons for this limitation detailed below.
Both f-LiRA and EMIA only leverage the non-membership information of the subject (how does the absence of the subject from the training set affect the behavior of the target DNN?) while ignoring its membership information (how does the inclusion of the subject in the training set affect the behavior of the target DNN?) in order to gain the computational advantage. As noted by Carlini et al. <cit.>, the membership information plays a key role in inferring the membership of the subjects that are hard-to-fit.
Both f-LiRA and EMIA use vanilla membership indicators that examine the loss of the target DNN precisely over the subjects without considering the loss landscape in the neighborhood (e.g., the Gaussian neighborhood) of the subjects. However, local loss landscape around the subject might be helpful for membership inference <cit.>, highlighting the need for augmented membership indicators. However, Carlini et al. <cit.> observe that simply ensembling the loss in the Gaussian neighborhood of the subject does not improve MI attacks' performance.
These limitations pose the following research challenges:
* How to utilize the membership information of the subjects while being computationally efficient?
* How to positively leverage the loss information in the Gaussian neighborhood of a given subject?
Findings and Contributions: In this paper, we first present a unified framework based on the working of current MI attacks by dividing their algorithms into three stages. The preparation stage prepares the variables 𝒱 that may include transformation functions or shadow DNNs used to guide the subsequent stages. The indication stage defines a membership indicator ℐ that, guided by 𝒱, indicates the likelihood of a subject being the training set member. The decision stage applies a threshold τ to ℐ to finally decide whether the subject is a member or not. For example, LiRA and EMIA respectively train 2kN and N shadow DNNs as 𝒱, and use the likelihood ratio as ℐ.
Building upon the proposed framework, we contribute towards a better 𝒱 by developing an Adversarial Membership Inference Attack (AMIA) that utilizes both, the membership and the non-membership information of a subject while being computationally efficient (only requiring 2N shadow DNNs to be trained for k subjects). Given a target DNN and a set of k subjects, AMIA first trains 2N shadow DNNs, where each shadow DNN is trained with k/2 randomly selected subjects augmented with the original training set as a batch for computational efficiency (Challenge <ref>), thereby, holding the (batch-wise) membership information of these k/2 subjects and the non-membership information of the remaining k/2 subjects. AMIA then adversarially computes small magnitude perturbations to the input of each subject that maximizes the difference between the loss of the member and the non-member shadow DNNs. This lets AMIA exploit the loss landscape of the shadow DNNs in the local neighborhood of the subject (Challenge <ref>) optimally for the membership inference purpose. AMIA notably outperforms both f-LiRA and EMIA on Fashion-MNIST and MNIST datasets and performs on par with f-LiRA (while outperforming EMIA) on the CIFAR-10 dataset.
We also propose Enhanced AMIA (E-AMIA) that exploits soft labels of EMIA to train the shadow DNNs of AMIA. E-AMIA shows comparable performance to AMIA on Fashion-MNIST and MNIST datasets, while notably outperforming others on the CIFAR-10 dataset.
We contribute towards a better ℐ by extending the likelihood ratio, a vanilla indicator proposed by Carlini et al. <cit.>, to define two augmented indicators that exploit the loss landscape in the Gaussian neighborhood of a subject to infer its membership (Challenge <ref>). Our proposed augmented indicators notably improve the performance of all the MI attacks considered in this paper, i.e., the previously proposed attacks (f-LiRA and EMIA) and the newly proposed attacks (AMIA and E-AMIA).
We also present a better evaluation metric of MI attacks in the low FPR region. More specifically, we find that the running average of TPR (RTA) at a given FPR (in log scale) better distinguishes between stronger and weaker attacks in the low FPR region as compared to the TPR at a given FPR (in log scale) proposed by Carlini et al. <cit.>.
Finally, we study how well do the variables 𝒱 prepared for k subjects by an MI attack for the target DNN transfer to the unknown DNNs trained on the same dataset. We find that EMIA is less transferable to the unknown DNNs as compared to f-LiRA even in the high FPR region where EMIA outperfroms others on the target DNN. This is because EMIA uses soft labels generated by the target DNN which makes it more customized to the target DNN. Interestingly, our proposed AMIA and E-AMIA, in addition to performing better than LiRA and EMIA on the target DNN, transfer well to the unknown DNNs. We attribute this to the transferability property of the adversarial perturbations used by AMIA and E-AMIA.
Our findings and contributions are summarized below.
* We unify MI attacks in the literature under a framework divided into three stages—preparation, indication, and decision stages (which are formalized in Section <ref>).
* We propose a compute-efficient methodology that leverages subject membership information and adversarial perturbations to effectively distinguish members from non-members on MNIST and Fashion-MNIST datasets.
* We propose improvements over the likelihood ratio indicator (used by both f-LiRA and EMIA) by leveraging the local loss information in the Gaussian neighborhood of the subject. More specifically, we perturb each subject with the Gaussian noise and compute the likelihood ratio of each perturbed sample separately, which notably improves the effectiveness of the attack.
* We study the transferability of MI attacks by using 𝒱 prepared for a target DNN to perform membership inference of the subjects on unknown DNNs. Our results indicate that AMIA is the most transferable, respectively followed by E-AMIA and f-LiRA (which give comparable performance) while EMIA is the least transferable.
The source code will be made available at: https://github.com/hassanalikhatim/AMIAhttps://github.com/hassanalikhatim/AMIA.
§ RELATED WORK
Recent works have extensively highlighted several limitations of DNNs in real-world safety-critical scenarios. For example, DNNs are vulnerable to explainability attacks <cit.>, adversarial attacks <cit.>, bias <cit.>, data imperfections <cit.> and privacy attacks <cit.>. In this section, we present a brief review of research on the MI attack (the most popular type of privacy attacks) and their connection to the adversarial perturbations.
§.§ Membership Inference Attacks on DNNs
MI attacks on DNNs can be mainly divided into two categories. Training time MI attacks assume that an attacker is capable of influencing the DNN during training, in addition to querying the DNN at test time <cit.>. For example, Tramer et al. <cit.> show that carefully poisoning a DNN during training can cause the DNN to leak greater membership information at test time. Training-time MI attacks are most relevant to frameworks such as federated learning, where a DNN is trained on broad data collected through untrustworthy sources. Inference time MI attacks assume that the attacker can only query the DNN at test-time without influencing it during the training phase <cit.>. Therefore, inference time MI attacks are relatively more practical because of the limited attacker capabilities. Inference time MI attacks can further be divided into two categories. Attacks that estimate the behavior of the DNNs on given subjects by training several shadow DNNs <cit.> are computationally expensive but notably stronger than the attacks that do not train the shadow DNNs but rely on other membership indicator functions <cit.>.
Our work in this paper falls under the category of inference-time MI attacks on black-box target DNN classifier by training shadow DNNs to estimate the behavior of the target DNN. To the best of our knowledge, LiRA <cit.> and EMIA <cit.> are two of the most popular and strongest MI attacks in this category. Later works on MI attacks have either largely focused on specific scenarios and applications (network pruning <cit.>, multiple models <cit.>, federated learning <cit.>, distillation <cit.>) or fall under a different category of attacks.
Here we briefly discuss both attacks and provide algorithmic details in Section <ref> after presenting a unified framework of MI attacks.
LiRA has two variations. Online LiRA trains thousands of member (subjects included in the training set) and non-member (subjects not included in the training set) shadow DNNs and learns the distribution of the ϕ-processed confidences (defined later in eq-(<ref>)) of member and non-member shadow DNNs on the given subjects. Online LiRA then computes the likelihood of the ϕ-processed confidences of the target DNN on the given subjects to fall within the member or the non-member distribution learned previously. Finally, online LiRA uses thresholding to label each subject as a member or a non-member.
Offline LiRA works the same as online LiRA but does not train the non-member shadow DNNs.
EMIA follows a similar algorithm as offline LiRA, but uses soft labels generated by the target DNN to train the non-member shadow DNNs.
Countermeasures against MI attacks: Two of the most popular defenses against MI attacks in the current literature are differentially private stochastic gradient descent (DP-SGD) training and L_2 regularization <cit.>. We analyze the effects of both of these approaches on the privacy leakage of the target DNN.
§.§ Adversarial perturbations in MI attacks
DNNs have been shown to be vulnerable to adversarial perturbations—small magnitude perturbations carefully crafted through iterative optimization to achieve a targeted goal. The goal of standard adversarial attacks is to change the output of a target DNN f to some target t. Formally,
δ = argmin_δ (f(x+δ) = t)
Several algorithms have been proposed to achieve the goal in eq-(<ref>) under numerous threat models resulting in a range of adversarial attacks <cit.> and countermeasures <cit.>.
To the best of our knowledge, only two recent works have leveraged the power of adversarial perturbations to perform MI attacks on DNNs. Del et al. <cit.> use the magnitude of adversarial perturbations computed for the target DNN to estimate the likelihood of the membership. Jalalzai et al. <cit.> use the loss values of the target DNN along the adversarial path computed using multiple perturbation magnitudes to estimate the likelihood of the membership. However, both works aim to compute a general threshold, not specifically customized to the subjects (following Attack S methodology by Ye et al. <cit.>). Additionally, both of these techniques compute adversarial perturbations over the target DNN, assuming a white-box access to compute the gradients, and perform membership inference based on how adversarial perturbations affect the behavior of the target DNN on the subject. In contrast, we introduce a novel objective function well-suited to the MI attacks framework (instead of conventional adversarial attacks objective used by prior works <cit.>) and adversarially optimize the novel objective function over the shadow DNNs (instead of over the target DNN), thereby using both the membership and the non-membership information of the subject, and only use the output probabilities of the target DNN to perform membership inference.
§ PRELIMINARIES
In this section, we first introduce the notations and problem setup, and then present a framework typically followed by current MI attacks. We then briefly explain the algorithm of f-LiRA, n-LiRA and EMIA based on the identified framework.
§.§ Notations and Problem Setup
We consider a DNN f: 𝒳→ [0,1]^m that, when given an input x ∈ 𝒳, outputs a vector f(x) of length m containing the probability of class i at the ith index f(x)[i]. We use f_t ← 𝒯(f, D_t) to denote that f is trained on some training data D_t = {(x^i, y^i)}_i=0^S-1 of size S, yielding f_t as the trained DNN. D_t is assumed to be randomly sampled from the real-world data distribution D_r, denoted in the following as D_t ∼ D_r, i.e., ∀ (x,y) ∈ D_t, (x,y) ∼ D_r.
Following <cit.>, for any dataset (e.g. CIFAR-10), we take the available training data as D_r and randomly sample D_t from D_r. The training phase of the target DNN f is detailed as follows:
* The available training data is assumed to be the real-world data D_r of size |D_r|.
* Randomly sample training dataset D_t ∼ D_r of size |D_t|=|D_r|/2.
* Output f_t ←𝒯(f, D_t).
§.§ General Membership Inference (MI) Attack Framework
Given query access to a target DNN f_t ←𝒯(f, D_t) trained on the dataset D_t, and a set of k subjects D_e = {( x^i, y^i ) }_i=0^k-1 the membership of which is to be inferred, an MI attack 𝒜(·) aims to find out whether each of the given subjects is a member of D_t or not. An MI attack is based on a commonly-observed behavior of DNNs: for any (x,y) ∼ D_r, the loss of f_t is typically smaller if (x,y) ∈ D_t, than the loss of f_t if (x,y) ∉ D_t.
MI attacks assume that the attacker can collect samples from the real-world data D_r, from which the training data D_t is sampled. For experimental simulation, MI attacks typically assume that the size of D_t is half that of D_r, i.e., |D_t| = |D_r|/2. An MI attack generally follows three stages—preparation, indication and decision.
* At the preparation stage, given D_e, an MI attack prepares the variables 𝒱 (e.g., training the models on the attacker dataset D_a ∼ D_r sampled from the real-world data) that guide the quantification of membership at the indication stage.
* At the indication stage, ∀ (x,y) ∈ D_e an MI attack uses an indicator function ℐ(f_t, (x,y); 𝒱) guided by 𝒱 to compute a value that indicates the membership of the subject (x,y) ∈ D_t.
The MI attack algorithm 𝒜 can thus be characterized by 𝒱 and ℐ, such that
𝒜 (f_t, (x,y)) = ℐ (f_t, (x,y), 𝒱)
Because the objective of an MI attack is to differentiate between (x,y) ∈ D_t and (x,y) ∉ D_t, any choice of 𝒱 and ℐ that satisfies the following condition can be used for membership inference.
𝒱, ℐ, such that
𝔼_(x,y) ∈ D_t[ ℐ (f_t, (x,y), 𝒱) ] > 𝔼_(x,y) ∈ D_r ∖ D_t[ ℐ (f_t, (x,y), 𝒱) ]
* At the decision stage, a threshold τ is decided depending upon the tolerable false positive rate such that 𝒜(f_t, (x,y)) ≥τ, means (x,y) ∈ D_t, and 𝒜(f_t, (x,y)) < τ means (x,y) ∉ D_t. Formally,
b =
0, 𝒜(f_t, (x,y)) < τ
1, 𝒜(f_t, (x,y)) ≥τ
A number of algorithms 𝒜(·) have been proposed to effectively infer the membership of a data sample yielding several MI attacks with diverse threat settings <cit.>. In this work, we consider two of the strongest and the most recent MI attacks: the Likelihood Ratio Attack (LiRA) proposed by Carlini et al. <cit.>, and an Enhanced Membership Inference Attack (EMIA) proposed by Ye et al. <cit.>.
Likelihood Ratio Attack (LiRA):
LiRA may work in two different modes: offline LiRA and online LiRA.
Offline LiRA (f-LiRA): Given (x^i,y^i) ∈ D_e, f-LiRA algorithm is shown in Algorithm <ref>, and summarized below:
* Preparation (Steps 4 to 9):
∀ j ∈ [0..N], f^j_n ← 𝒯(D^j_a ∼ D_r ∖ D_e)
𝒱: μ^i_n = 𝔼_j[ ϕ( x^i,y^i|f^j_n ) ], σ^i_n = 𝕊_j[ ϕ( x^i,y^i|f^j_n ) ]
where 𝔼_j is the expectation over j, 𝕊_j is the standard deviation over j, and ϕ(x^i,y^i|f) is defined as follows,
ϕ( x^i,y^i|f ) = log( f(x^i)[y^i] / (1 - f(x^i)[y^i]) )
* Indication (Steps 10 to 15): ∀ (x^i,y^i) ∈ D_e,
ℐ: LR_f (f_t, (x^i,y^i), 𝒱) =
1 - p( ϕ(x^i,y^i|f_t) | 𝒩(μ^i_n, σ^i_n) )
where LR_f denotes the offline likelihood ratio and p is the probability.
* Decision Stage: The threshold τ is computed based on the tolerable false positives.
Online LiRA (n-LiRA): Online LiRA (n-LiRA) customizes f-LiRA to each subject in D_e, and is notably more effective than f-LiRA (Algorithm <ref>).
* Preparation:
∀ j ∈ [0..N], D^j_a ∼ D_r ∖ D_e, f^j_n ← 𝒯(D^j_a)
∀ (x^i,y^i) ∈ D_e, f^i,j_m ← 𝒯(D^j_a ∪ {(x^i,y^i)})
𝒱: μ^i_n = 𝔼_j[ ϕ (x^i,y^i|f_n^j) ], σ^i_n = 𝕊_j[ ϕ (x^i,y^i|f_n^j) ]
μ^i_m = 𝔼_j[ ϕ (x^i,y^i|f_m^i,j) ], σ^i_m = 𝕊_j[ ϕ (x^i,y^i|f_m^i,j) ]
* Indication: ∀ (x^i,y^i) ∈ D_e,
ℐ: LR_n (f_t, (x^i,y^i), 𝒱) = p( ϕ(x^i,y^i|f_t) | 𝒩(μ^i_m, σ^i_m) ) / p( ϕ(x^i,y^i|f_t) | 𝒩(μ^i_n, σ^i_n) )
where LR_n denotes the online likelihood ratio.
* Decision: Finally, the threshold τ is computed based on the tolerable false positives.
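A minimal numerical sketch of the ϕ-scaled confidence and of the online likelihood ratio defined above is given below; reading p(· | 𝒩(μ, σ)) as a Gaussian density evaluated with scipy.stats.norm.pdf is one interpretation of the text, and the small clipping constants are added only for numerical stability.
import numpy as np
from scipy.stats import norm

def phi(probs, y):
    # phi-scaled (logit-scaled) confidence of the true class y, as defined above.
    p = np.clip(probs[y], 1e-7, 1 - 1e-7)
    return np.log(p / (1.0 - p))

def online_likelihood_ratio(phi_target, phi_member_shadows, phi_nonmember_shadows):
    # LR_n: Gaussian density under the member distribution divided by the density
    # under the non-member distribution, both fitted to the shadow-DNN confidences.
    mu_m, sd_m = np.mean(phi_member_shadows), np.std(phi_member_shadows) + 1e-12
    mu_n, sd_n = np.mean(phi_nonmember_shadows), np.std(phi_nonmember_shadows) + 1e-12
    return norm.pdf(phi_target, mu_m, sd_m) / (norm.pdf(phi_target, mu_n, sd_n) + 1e-30)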
Enhanced Membership Inference Attack (EMIA): The working of EMIA is given in Algorithm <ref>, and summarized below.
* Preparation (Steps 2 to 7):
∀ j ∈ [0..N], D^j_a ∼ D_r ∖ D_e s.t. ∀ (x_a,y_a) ∈ D^j_a, y_a = f_t(x_a), f^j_n ← 𝒯(D^j_a)
𝒱: μ^i_n = 𝔼_j[ ϕ( x^i,y^i|f^j_n ) ], σ^i_n = 𝕊_j[ ϕ( x^i,y^i|f^j_n ) ]
* The indication and decision stages of EMIA are the same as those of f-LiRA.
§ PROPOSED MEMBERSHIP INFERENCE ATTACKS
In this section, we present our algorithm to prepare the variables for membership inference with two novel improvements—firstly, our attack utilizes both the membership and the non-membership information of the subjects at a significantly higher computational efficiency compared to n-LiRA; secondly, our attack optimally leverages the loss landscape around the subjects by computing small magnitude input perturbations, adversarially optimized to maximize the expected loss of non-member shadow DNNs and minimize that of the member shadow DNNs on the subjects. We then present two novel augmented indicators that examine the loss values in the Gaussian neighborhood of the subjects, in addition to the loss over each subject, to infer the membership of the subjects.
§.§ Adversarial Membership Inference Attack (AMIA)
Motivated by eq-(<ref>), for each subject (x^i,y^i) ∈ D_e, we compute a perturbation Δ x^i such that (x^i + Δ x^i,y^i) is more vulnerable to MI attack as compared to (x^i, y^i). More formally, we aim to achieve the following,
𝒜(f_t, (x^i+Δ x^i,y^i)) ≥𝒜(f_t, (x^i,y^i)), if (x^i,y^i) ∈ D_t
𝒜(f_t, (x^i+Δ x^i,y^i)) ≤𝒜(f_t, (x^i,y^i)), if (x^i,y^i) ∉ D_t
However, it is impossible to directly achieve eq-(<ref>) because an MI attacker does not have access to D_t.
To address this, we leverage the transferability property of adversarial perturbations. It has been shown that the adversarial perturbation to an input computed to fool a deep learning model ℱ effectively transfers (fools) to other DNNs <cit.>. Additionally, using an ensemble of multiple DNNs as ℱ significantly improves the transferability of the adversarial perturbations. We leverage the transferability property to transfer the adversarial perturbations computed over member and non-member shadow DNNs to f_t as detailed in Algorithm <ref>.
* Preparation Stage: Given a set of subjects D_e for membership inference, at every iteration j ∈ [0..N-1], we first randomly sample a dataset D^j_a∼ D_r (Step 3) and randomly bisect D_e into two distinct subsets D^j_e1 and D^j_e2 (Step 4). We then train a pair of shadow DNNs f^j_m1 and f^j_m2 on D^j_a augmented with D^j_e1 and D^j_e2 respectively (Step 5). This lets us train the member and the non-member shadow DNNs simultaneously—∀ (x^i,y^i) ∈ D_e, if (x^i,y^i) ∈ D^j_e1, f^j_m1 is the member shadow DNN and f^j_m2 is the non-member shadow DNN, and vice versa if (x^i,y^i) ∈ D^j_e2. The process is repeated for j ∈ [0..N-1].
To make the attack stronger, ∀ j ∈ [0..N-1], ∀ (x^i,y^i) ∈ D_e, we first create lists of member shadow DNNs f^i,j_m and non-member shadow DNNs f^i,j_n (Steps 10 to 13). We then adversarially learn Δ x^i ∈ [-ϵ, ϵ], using the iterative Fast Gradient Sign Method (i-FGSM) to minimize the expected loss of (x^i,y^i) on the member shadow DNNs, while maximizing the loss of (x^i,y^i) on non-member shadow DNNs (Step 16).
Formally,
g^j(Δ x^i) = ϕ( x^i+Δ x^i, y^i|f^i,j_n ) - ϕ( x^i+Δ x^i, y^i|f^i,j_m )
Δ x^i = argmin_Δ x^i ∈ [-ϵ, ϵ] 𝔼_j ∈ [0..N-1][ g^j(Δ x^i) ]
The prepared variables 𝒱 are then,
𝒱: Δ x^i,
μ^i_n = 𝔼_j[ ϕ( x^i+Δ x^i,y^i|f^i,j_n ) ], σ^i_n = 𝕊_j[ ϕ( x^i+Δ x^i,y^i|f^i,j_n ) ],
μ^i_m = 𝔼_j[ ϕ( x^i+Δ x^i,y^i|f^i,j_m ) ], σ^i_m = 𝕊_j[ ϕ( x^i+Δ x^i,y^i|f^i,j_m ) ]
* Indication Stage: ∀ (x^i,y^i) ∈ D_e and corresponding i ∈ [0..|D_e|-1], AMIA computes the ϕ-scaled confidences of the non-member and member shadow DNNs on the subject (x^i,y^i) and computes ℐ using the online likelihood ratio LR_n as follows,
ℐ: LR_n (f_t, (x^i,y^i), 𝒱) = p( ϕ(x^i+Δ x^i,y^i|f_t) | 𝒩(μ^i_m, σ^i_m) ) / p( ϕ(x^i+Δ x^i,y^i|f_t) | 𝒩(μ^i_n, σ^i_n) )
* Decision Stage: Finally, the threshold τ is computed based on the tolerable false positives.
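A sketch of the adversarial perturbation step of AMIA is given below, assuming TensorFlow/Keras shadow models that output softmax probabilities. The step size ε/steps, the number of iterations, and the clipping of inputs to [0, 1] are illustrative assumptions; the algorithm above only fixes the l_∞ bound ε and the i-FGSM optimizer.
import tensorflow as tf

def amia_perturbation(x, y, member_models, nonmember_models, eps=0.02, steps=10):
    # i-FGSM on the AMIA objective: minimize the expected gap between the
    # non-member and member phi-scaled confidences of the subject (x, y).
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    delta = tf.zeros_like(x)
    alpha = eps / steps

    def phi(model, x_adv):
        p = tf.clip_by_value(model(x_adv)[0, y], 1e-7, 1.0 - 1e-7)
        return tf.math.log(p / (1.0 - p))

    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(delta)
            x_adv = tf.clip_by_value(x + delta, 0.0, 1.0)
            g = (tf.add_n([phi(m, x_adv) for m in nonmember_models]) / len(nonmember_models)
                 - tf.add_n([phi(m, x_adv) for m in member_models]) / len(member_models))
        grad = tape.gradient(g, delta)
        delta = tf.clip_by_value(delta - alpha * tf.sign(grad), -eps, eps)  # descend on g
    return delta.numpy()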
Enhanced Adversarial Membership Inference Attack.
It may be possible to further increase the effectiveness of AMIA by reducing the data uncertainty inspired by EMIA. To achieve this, we propose the enhanced adversarial membership inference attack (E-AMIA). E-AMIA follows the same algorithm as AMIA, except that it replaces the data sampling step <ref> of Algorithm <ref> with the data sampling step <ref> of Algorithm <ref>.
§.§ Augmented Membership Indicators
Both LiRA and EMIA use vanilla indicators that examine the loss of the target DNN precisely upon the subject (x^i,y^i). However, loss values in the local neighborhood (e.g., the Gaussian neighborhood) of the subject might be helpful for membership inference. In order to utilize the Gaussian neighborhood information of a subject, we create a set of augmented inputs X^i by concatenating the original input x^i with its Gaussian-noise-augmented versions, computed by adding to x^i the p-1 random Gaussian noise vectors ⋃_l=1^p-1 n^l ∼ 𝒩(0, σ_n) with mean 0 and standard deviation σ_n.
X^i = ⋃_l=0^p-1( x^i + n^l ∼𝒩(0, σ_n) )
where n^0 = 0. This lets us analyze the loss of f_t at and around (x^i,y^i), allowing us to model local trends that are helpful in membership inference (see Fig. <ref>).
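A minimal sketch of this augmentation step, assuming numpy arrays for the inputs; the values of p and σ_n here are illustrative only.
import numpy as np

def gaussian_augment(x, p=8, sigma_n=0.02, rng=np.random.default_rng(0)):
    # Build X^i: the original input (n^0 = 0) plus p-1 Gaussian-perturbed copies.
    noises = [np.zeros_like(x)] + [rng.normal(0.0, sigma_n, size=x.shape) for _ in range(p - 1)]
    return np.stack([x + n for n in noises], axis=0)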
§.§.§ Likelihood Ratio with Perturbation:
For any (x^i,y^i) ∈ D_e, perform the following steps:
* ∀ l ∈ [0..p-1], compute
F_n^l = {∀ j ∈ [0..N-1], ϕ( x^i + n^l,y|f^j_n ) }
F_m^l = {∀ j ∈ [0..N-1], ϕ( x^i + n^l,y|f^j_m ) }
* ∀ l ∈ [0..p-1], compute
μ_n^l = mean( F_n^l ), σ_n^l = std( F_n^l )
μ_m^l = mean( F_m^l ), σ_m^l = std( F_m^l )
* Finally, compute LR_p (f_t, (x^i,y^i), 𝒱), the likelihood ratio of (x^i,y^i) being a member of D_t with perturbation, defined as the expected likelihood ratio of (x^i+n^l,y^i) over l for f_t.
LR_p (f_t, (x^i,y^i), 𝒱) = 𝔼_l ∈ [0..p-1][ LR_n ( f_t, ( x^i+n^l, y^i ), F_n^l, F_m^l ) ]
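A sketch of LR_p operating on precomputed ϕ-scaled confidences; the (p, N) array layout and the Gaussian-density reading of p(· | 𝒩) are choices made here for illustration.
import numpy as np
from scipy.stats import norm

def lr_with_perturbation(phi_target, phi_member, phi_nonmember):
    # LR_p: mean over the p noisy copies x^i + n^l of the per-copy likelihood ratio.
    # phi_target has shape (p,); phi_member and phi_nonmember have shape (p, N),
    # one column per member / non-member shadow DNN.
    def lr(t, m, n):
        return (norm.pdf(t, np.mean(m), np.std(m) + 1e-12)
                / (norm.pdf(t, np.mean(n), np.std(n) + 1e-12) + 1e-30))
    return float(np.mean([lr(phi_target[l], phi_member[l], phi_nonmember[l])
                          for l in range(len(phi_target))]))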
§.§.§ Likelihood Ratio with the Optimized Perturbation:
For any (x^i,y^i) ∈ D_e, perform the following steps:
* Follow steps 1 and 2 of the likelihood ratio with perturbation.
* Set O = {},
Repeat z times: O := O ∪ { argmax_l ∈ [0..p-1], l ∉ O( F_m^l - F_n^l ) }
O is a set of z integers denoting the indices l ∈ [0..p-1] of the noise that produce the maximum expected difference between the ϕ-scaled confidences of the member and the non-member shadow DNNs.
* Finally, compute LR_o (f_t, (x^i,y^i)), the likelihood ratio of (x^i,y^i) being a member of D_t with the optimized perturbation, defined as the expected likelihood ratio of (x^i+n^l,y^i) over l ∈ O for f_t.
LR_o (f_t, (x^i,y^i)) = 𝔼_l ∈ O[ LR_n ( f_t, ( x^i+n^l, y^i ), F_n^l, F_m^l ) ]
Note that if z = p, LR_o(·) = LR_p(·).
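The optimized variant only differs in the selection of the noise indices; a sketch under the same assumptions and array layout as the LR_p sketch above:
import numpy as np
from scipy.stats import norm

def lr_with_optimized_perturbation(phi_target, phi_member, phi_nonmember, z):
    # LR_o: keep the z noise indices with the largest mean gap between member and
    # non-member shadow confidences (the set O), then average their likelihood ratios.
    gap = np.mean(phi_member, axis=1) - np.mean(phi_nonmember, axis=1)
    keep = np.argsort(gap)[::-1][:z]
    def lr(t, m, n):
        return (norm.pdf(t, np.mean(m), np.std(m) + 1e-12)
                / (norm.pdf(t, np.mean(n), np.std(n) + 1e-12) + 1e-30))
    return float(np.mean([lr(phi_target[l], phi_member[l], phi_nonmember[l]) for l in keep]))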
§ PERFORMANCE EVALUATION
§.§ Evaluation Methodology
Our evaluation methodology is similar to that used by the previous works <cit.> as summarized below.
§.§.§ Datasets
We evaluate MI attacks on three commonly used machine learning datasets—CIFAR-10, MNIST and Fashion-MNIST. All our datasets are divided into two sets: (1) the original training set of the data is assumed to be our real-world data D_r, and (2) the original test set is denoted as D_s.
The training data D_t ∼ D_r for the target DNN is sampled from D_r. For coherence with Carlini et al. <cit.>, we use |D_t| = |D_r|/2, where |·| denotes the size of the dataset. The size of the attacker data D_a ∼ D_r is |D_a|=|D_t|/2 <cit.>. We construct the evaluation dataset D_e of size |D_e| = k, where each instance is sampled uniformly either from D_t or D_r ∖ D_t. We use k=200 in our experiments.
§.§.§ Model architecture
Following Ye et al. <cit.>, we chose a deep Convolutional Neural Network (CNN)-based model as our target DNN. More specifically, we use the following CNN: Input() - {Conv2D() - ReLU() - BatchNorm()}× 3 - MaxPool() - Dropout(.) - Flatten() - Dense(# classes) - softmax(). Unless stated otherwise, we always train the CNN using L_2 regularization because it makes the CNNs notably more robust to MI attacks <cit.>. We train N=50 non-member shadow DNNs for f-LiRA and EMIA, and an equal number of non-member and member shadow DNNs for AMIA and E-AMIA, for each experiment.
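A Keras sketch of this architecture is given below; the filter counts, kernel size and dropout rate are left unspecified in the text, so the values used here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(input_shape=(32, 32, 3), n_classes=10, filters=(32, 64, 128), drop_rate=0.3):
    # Input - {Conv2D - ReLU - BatchNorm} x 3 - MaxPool - Dropout - Flatten - Dense - softmax
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same")(x)
        x = layers.ReLU()(x)
        x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Dropout(drop_rate)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)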
We do not implement n-LiRA due to its high computational overhead—n-LiRA requires training 10000 shadow DNNs for a single experiment on each dataset in our experimental setup.
§.§.§ Evaluation Metrics
Here we first define the TPR-FPR curve typically used by prior works <cit.>, and then present the running TPR average (RTA), a novel metric to test the performance of MI attacks. Although AMIA and E-AMIA generally perform better than f-LiRA and EMIA on both evaluation metrics, we believe that RTA better distinguishes different attacks as compared to the TPR-FPR curve.
§.§.§ TPR-FPR curve
Following previous works, we evaluate the effectiveness of an MI attack by computing its TPR at a given FPR in log scale (emphasizing the low FPR region) <cit.>. Given τ, TPR is the ratio of the number of samples correctly marked as members by 𝒜 to the total number of member samples. Similarly, FPR is the ratio of the number of samples incorrectly marked by 𝒜 as members to the total number of non-member samples. Both TPR and FPR are defined below:
TPR (τ) = |∀ (x,y) ∈ D_t ∩ D_e, s.t. 𝒜(f_t, (x,y)) > τ|/|∀ (x,y) ∈ D_t ∩ D_e|
FPR (τ) = |∀ (x,y) ∈ {D_r ∖ D_t} ∩ D_e, s.t. 𝒜(f_t, (x,y)) > τ| / |∀ (x,y) ∈ {D_r ∖ D_t} ∩ D_e|
where |·| denotes the total number of samples satisfying the condition.
§.§.§ Running TPR Average (RTA)
We propose a new metric called the running TPR average (RTA) defined at a given value t ∈ [0,1] as the average value of TPR for FPR less than t. Formally,
RTA (t) = 𝔼_τ s.t. FPR(τ) ≤ t[ TPR(τ) ]
Stated simply, RTA(t) quantifies the success rate of membership detection with more than (1-t) × 100% confidence. We believe that RTA better quantifies the membership inference efficacy of an MI attack in the low FPR region. Nevertheless, we show that our proposed attacks—AMIA and E-AMIA—outperform the state-of-the-art on both of the evaluation metrics.
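Both metrics can be computed from the attack scores with a few lines of numpy; the sketch below sweeps the unique score values as thresholds, which is one reasonable discretization (is_member holds the ground-truth membership of each subject in D_e).
import numpy as np

def tpr_fpr_curve(scores, is_member):
    # Sweep thresholds over the attack scores and return the (FPR, TPR) arrays.
    scores, is_member = np.asarray(scores), np.asarray(is_member, dtype=bool)
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr = np.array([np.mean(scores[is_member] > t) for t in thresholds])
    fpr = np.array([np.mean(scores[~is_member] > t) for t in thresholds])
    return fpr, tpr

def running_tpr_average(scores, is_member, t):
    # RTA(t): average TPR over all thresholds whose FPR does not exceed t.
    fpr, tpr = tpr_fpr_curve(scores, is_member)
    mask = fpr <= t
    return float(np.mean(tpr[mask])) if mask.any() else 0.0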
§.§ Accuracy of the Models
On the CIFAR-10 dataset, the CNN achieves an accuracy of 96.7% on the training dataset D_t and 64.6% on the test set D_s, as shown in Fig. <ref>, which is the same as that achieved by Carlini et al. <cit.> using a wide ResNet. For F-MNIST and MNIST, Fig. <ref> shows that the CNN respectively achieves 98.2% and 99.9% accuracy on D_t, and 90.6% and 98.6% accuracy on D_s. Like Carlini et al. <cit.>, we achieve a significantly smaller accuracy on the test set as compared to the state-of-the-art for CIFAR-10 and F-MNIST. This is because the experimental setup of LiRA (and EMIA) further divides the original training set D_r into two equal halves, and only one of these halves, D_t, is used to train the CNN.
§.§ Evaluating MI Attacks
This section compares the state-of-the-art MI attacks—f-LiRA and EMIA—with the newly proposed AMIA and E-AMIA. Throughout the section (and the following sections), we denote an MI attack using the 𝒱, ℐ format. For example, f-LiRA,LR_f denotes that f-LiRA algorithm is used to prepare the variables and LR_f is used as the indicator function.
§.§.§ Comparing MI attacks
Comparison of f-LiRA,LR_f <cit.>, EMIA,LR_f <cit.>, AMIA,LR_n (ours) and E-AMIA,LR_n (ours) respectively for the CIFAR-10, Fashion-MNIST and MNIST datasets is provided in Fig. <ref>(a)-(c) based on the TPR-FPR metric, and in Fig. <ref>(a)-(c) based on the RTA metric. We note that f-LiRA,LR_f is more effective than EMIA,LR_f in the low FPR region—Ye et al. <cit.> also observe that Attack R (f-LiRA,LR_f in this paper) performs better than Attack D (EMIA,LR_f in this paper) in the low FPR region on the CIFAR-10 dataset. In the high FPR region, EMIA,LR_f typically outperforms the other attacks, including the proposed AMIA and E-AMIA—in their work, Ye et al. <cit.> also observe that EMIA,LR_f is more effective in the high FPR region on the CIFAR-10 dataset. We also note that, although our general observations and conclusions from the RTA and TPR-FPR curves are consistent with each other, RTA is able to better distinguish the effectiveness of different attacks and appears more stable as compared to the TPR-FPR curve.
As we are mostly interested in membership detection with high confidence (low FPR region) <cit.>, our results show that AMIA,LR_n and E-AMIA,LR_n consistently outperform both f-LiRA,LR_f and EMIA,LR_f for low FPRs (e.g., ≤ 3%) with a significant margin for all the three datasets. The only exception is CIFAR-10 at FPR 1%, where f-LiRA,LR_f performs slightly better. For example, on the Fashion-MNIST dataset, f-LiRA,LR_f and EMIA,LR_f exhibit 0% TPR, while AMIA,LR_n and E-AMIA,LR_n respectively achieve 6% and 8% TPR at 1% FPR. This is concerning as it implies that around 8% of the training dataset members can be identified with 99% confidence (1% FPR). We attribute this increased effectiveness to two reasons. Firstly, AMIA,LR_n is guided by both the membership and the non-membership properties of the subject (x^i,y^i) ∈ D_e in contrast to EMIA,LR_f, which is only informed by the non-membership properties. Secondly, AMIA,LR_f leverages small adversarial perturbations (or adversarial features) that have been carefully optimized to be indicative of the membership of (x,y).
§.§.§ Comparing indicator functions
We compare the performance of different indicator functions ℐ with f-LiRA, EMIA, AMIA and E-AMIA on the three datasets considered in this paper. More specifically, for f-LiRA and EMIA, we compare two indicators LR_f and LR_p, and for AMIA, we compare three indicators LR_n, LR_p and LR_o, because LR_o is only compatible with the algorithms that leverage the membership information of (x,y) (See eq-(<ref>)) in Table <ref>.
Carlini et al. <cit.> observe that Gaussian augmented indicator proposed by Jayaraman et al. <cit.> does not improve the MI attack performance. On the contrary, our proposed augmented indicators LR_p and LR_o typically outperform vanilla indicators LR_f and LR_n for all the baseline attacks, specifically on Fashion-MNIST and MNIST datasets. This is because our novel augmented indicators compute the membership likelihood of each noisy sample independently, and then ensemble the results as defined in eq-(<ref>) and (<ref>). These observations are also consistent with our initial hypothesis that analyzing local loss at and around (x,y) will improve the effectiveness of MI attacks.
Table <ref> also validates our previous finding that AMIA outperforms both f-LiRA and EMIA in the low FPR region, while EMIA is most effective for the high FPR region (see AUC(1) in the table).
§.§ Analyzing Hyperparameters
§.§.§ Effects of ϵ on the attack performance
Fig. <ref>(a)-(c) show the effects of varying ϵ, the l_∞ norm of the adversarial perturbations, on the performance of AMIA on CIFAR-10, Fashion-MNIST and MNIST datasets respectively. TPR of f-LiRA,LR_f and EMIA,LR_f are also provided for comparison.
For all the three datasets and the indicator functions, we observe that the performance of AMIA initially increases as ϵ is increased from 0.01 to 0.02. This is because a greater l_∞ norm allows AMIA to analyze and process more information about the local loss around (x,y). TPRs of AMIA reach a maximum value at ϵ = 0.02, and then start decreasing as ϵ increases. This decrease in TPR is because a too large ϵ only sub-optimally captures the local loss trends around (x,y).
§.§.§ Effects of σ_n on the attack performance
Fig. <ref>(a)-(c) show the effects of varying σ_n, the standard deviation of Gaussian noise in eq-(<ref>), on the performance of AMIA on CIFAR-10, Fashion-MNIST and MNIST datasets respectively. TPR of f-LiRA,LR_f and EMIA,LR_f are also provided for comparison.
For all the three datasets and the indicator functions, we observe that the performance of AMIA typically increases slightly as σ_n is increased from 0.01 to 0.02, and then starts decreasing as σ_n increases. The observed trend is interestingly similar to that observed when ϵ is increased in Fig. <ref>. We attribute this to the same reasons as above—the initial slight increase is caused by the greater σ_n better capturing the local loss around (x,y), and a too high σ_n sub-optimally captures the local loss around (x,y) causing a decrease in the running average.
However, interestingly, the decrease in the RTA due to an increase in σ_n is significantly less drastic than that caused by an increase in ϵ. We hypothesize that because σ_n is the standard deviation of the Gaussian noise, n^l ∼ 𝒩(0, σ_n) is not bound by l_∞(σ_n), unlike the adversarial noise, which is bound by l_∞(ϵ). Therefore, even if σ_n is large, several noise elements have a notably smaller l_∞ norm due to the Gaussian distribution. Therefore, a suboptimally large value of σ_n better captures local loss trends than a suboptimally large value of ϵ.
§.§.§ Effects of N on the attack performance
Fig. <ref>(a)-(c) show the effects of varying N, the number of shadow DNNs trained by the attacker on the sampled datasets D^j_a, on the performance of AMIA on the CIFAR-10, Fashion-MNIST and MNIST datasets respectively. TPR of f-LiRA,LR_f and EMIA,LR_p are also provided for comparison. We observe that the effectiveness of the attacks increases as N is increased. In MI attacks, shadow DNNs are used by the attacker to study the behavior of the DNN on the subject, given that the subject is or is not part of its training set. Therefore, training a greater number of shadow DNNs enables the attacker to more precisely approximate the expected behavior of the target DNN, ultimately improving the attack performance.
It can be observed in the figure that AMIA outperforms EMIA,LR_f on all three considered datasets. On the CIFAR-10 dataset, f-LiRA,LR_f outperforms AMIA by a significant margin, except at N=50, where AMIA performs slightly better than f-LiRA,LR_f. We believe the reason for the increased effectiveness of f-LiRA,LR_f on CIFAR-10 is the overfitting of the target CNN on the CIFAR-10 training set D_t. On the Fashion-MNIST and MNIST datasets, where the target CNN is more generalized over the test set, AMIA is typically more effective than f-LiRA,LR_f. For MNIST at N=20, f-LiRA,LR_f shows surprisingly improved effectiveness—we regard it as an outlier, as we were unable to achieve a similar effectiveness with f-LiRA,LR_f when we re-ran the experiment.
§ DISCUSSION
§.§ Transferability of MI Attacks
We study the transferability of MI attacks in Fig. <ref>(a)-(c) respectively for CIFAR-10, Fashion-MNIST and MNIST datasets. We define transferability as the ability of MI attack variables optimized for a target CNN to transfer to the unknown CNNs trained on the same dataset that the variables have not been optimized for. Fig. <ref> reports RTA of different attacks averaged over 10 unknown CNNs. We observe that generally in the low FPR region AMIA,LR_f most effectively transfers to unknown CNNs, with E-AMIA,LR_n and f-LiRA,LR_f showing comparable performance. On the other hand, EMIA,LR_f is least transferable to unknown CNNs even in the high FPR region where it performs best on the target CNN. This is because EMIA,LR_f is more customized to the target DNN because of using soft labels. For the CIFAR-10 dataset however, f-LiRA,LR_f outshines both AMIA,LR_f and E-AMIA,LR_n due to the overfitting of the unknown CNNs as discussed in detail in Section <ref>.
Not surprisingly, the transferability of E-AMIA,LR_n seems correlated with the transferability of EMIA,LR_f. More specifically, when EMIA,LR_f shows worse transferability than AMIA,LR_f—for example on CIFAR-10 and Fashion-MNIST datasets in Fig. <ref>(a),(b)—E-AMIA,LR_n consistently performs slightly worse than AMIA,LR_f. Likewise, when EMIA,LR_f shows better transferability than AMIA,LR_f—for example on MNIST dataset in high TPR region—E-AMIA,LR_n performs slightly better than AMIA,LR_f. This is simply because E-AMIA,LR_n algorithm uses the data annotation approach used by EMIA,LR_f to modify AMIA,LR_f.
§.§ Evaluating Differentially Private Training
Fig. <ref>(a)-(b) compares the transferability of f-LiRA,LR_f and EMIA,LR_f with that of AMIA,LR_n and E-AMIA,LR_n to the unknown CNNs trained with a differential privacy algorithm (instead of with L_2 regularization as in Fig. <ref>) for the Fashion-MNIST and MNIST datasets respectively. Due to the large computational overhead, we refrain from optimizing new 𝒱 directly over DP-SGD CNNs and reuse the 𝒱 already prepared for the target CNN in previous experiments. More specifically, each attack reuses 𝒱, prepared for the target CNN in previous experiments, to infer the membership of given subjects for 10 unknown CNNs (that 𝒱 has not been optimized for) trained with (ε=1.56, δ=10^-5)-DP-SGD gradient updates.
We make two key observations. (1) CNNs trained with DP-SGD are notably more robust to MI attacks as compared to those trained with L_2 regularization as illustrated by the reduced RTA in Fig. <ref>. This observation is consistent with that of Choquette et al. <cit.> who note that only L_2 regularization needs to be notably stronger in order to match the robustness of DP-SGD to privacy attacks. (2) AMIA,LR_n and E-AMIA,LR_n are significantly more effective against DP-SGD as compared to f-LiRA,LR_f and EMIA,LR_f. This is because (a) AMIA,LR_n and E-AMIA,LR_n are more transferable to unknown CNNs as compared to f-LiRA,LR_f and EMIA,LR_f, and (b) AMIA,LR_n and E-AMIA,LR_n are significantly more effective against well-performing DNNs that do not overfit their training data.
§.§ Limitations and Future Work
AMIA, similar to LiRA and EMIA, assumes access to the real data distribution D_r, from which the training data D_t is sampled. This assumption is commonly shared by several standard MI attacks. Although it might be challenging to meet this assumption in certain real-world scenarios, there exist other (more complex) approaches, such as the model inversion and the model stealing attacks, that can be used to estimate the training dataset D_t, given that f_t is vulnerable to MI attacks under the aforementioned assumption. For example, an attacker may formulate the optimization problem to maximize 𝒜(·) over the subject (similar to model inversion attacks <cit.>). Therefore, MI attack is believed to be a fundamental privacy attack, and understanding the vulnerabilities of f_t against MI attack is both critical and insightful <cit.>.
While our proposed attacks, AMIA and E-AMIA, to the best of our knowledge, are currently the most computationally efficient methods for utilizing subject membership information, they still require training multiple shadow DNNs. To further enhance efficiency, we suggest incorporating universal adversarial perturbations <cit.> that optimize the loss of member shadow DNNs and maximize the loss of non-member shadow DNNs. Our future work aims to develop a universal adversarial membership inference attack (U-AMIA) building upon this approach.
§ CONCLUSIONS
We first present a unified framework of current MI attacks, dividing them into three stages—preparation, indication and decision. We note several limitations of the Likelihood Ratio Attack (LiRA) and the Enhanced Membership Inference Attack (EMIA) at the preparation stage. Online LiRA is computationally highly inefficient, while offline LiRA and EMIA sacrifice the membership information of subjects for computational efficiency. This makes LiRA and EMIA perform poorly on the MNIST and Fashion-MNIST datasets, where the target DNN is well-trained, unlike complex datasets (CIFAR-10, CIFAR-100, and ImageNet) where the target DNN typically overfits its training set due to the experimental setup of MI attacks.
To address this, we propose the Adversarial Membership Inference Attack (AMIA), which efficiently utilizes the membership information by training member shadow DNNs on subject batches. AMIA also optimally leverages the loss landscape around subjects by adversarially minimizing a novel loss function. We also propose the Enhanced AMIA (E-AMIA), which combines EMIA and AMIA to further improve the attack effectiveness. We experiment with a range of hyperparameters and observe that E-AMIA and AMIA notably outperform both LiRA and EMIA in the low FPR region—e.g., LiRA and EMIA showed 0% TPR at 1% FPR (99% confidence), while AMIA and E-AMIA showed 6% and 8% TPR, respectively, on Fashion-MNIST. We also study the transferability of each MI attack and show that AMIA is the most transferable, followed by E-AMIA and LiRA, while EMIA is the least transferable.
§ ACKNOWLEDGMENT
This publication was made possible by NPRP grant # [13S-0206-200273] from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
|
http://arxiv.org/abs/2307.04744v1 | 20230710175321 | Behavioral Analysis of Pathological Speaker Embeddings of Patients During Oncological Treatment of Oral Cancer | [
"Jenthe Thienpondt",
"Kris Demuynck"
] | eess.AS | [
"eess.AS"
] |
In this paper, we analyze the behavior of speaker embeddings of patients during oral cancer treatment. First, we found that pre- and post-treatment speaker embeddings differ significantly, indicating a substantial change in voice characteristics. However, a partial recovery to pre-operative voice traits is observed at 12 months post-operation. Secondly, the same-speaker similarity at distinct treatment stages is similar to that of healthy speakers, indicating that the embeddings can capture characterizing features of even severely impaired speech. Finally, a speaker verification analysis shows a stable false positive rate and a variable false negative rate when combining speech samples of different treatment stages. This indicates robustness of the embeddings towards other speakers, while still capturing the changing voice characteristics during treatment. To the best of our knowledge, this is the first analysis of speaker embeddings during oral cancer treatment of patients.
Index Terms: pathological speaker embeddings, oral cancer treatment, speaker recognition
§ INTRODUCTION
Oral cancer is a type of cancer that can develop in various locations within the oral cavity, predominantly originating in the tissues of the mouth <cit.>. It is a serious and potentially life-threatening condition that can cause significant damage to the affected tissues and spread to other parts of the body. Common risk factors for oral cancer include tobacco usage and excessive alcohol consumption <cit.>. Treatment options for oral cancer typically include surgery, radiation therapy and chemotherapy, which may be used in isolation or in conjunction with each other, depending on the stage and location of the cancer.
Prior research has shown that oncological treatment of oral cancer can be accompanied by impaired speech capabilities, including reduced articulation and intelligibility <cit.>. Subsequent research found reduced speech abilities even after extensive recovery periods of up to 12 months after surgical intervention <cit.>. Another study <cit.> showed a significant decrease in tongue function during oral cancer treatment, which can potentially be an important contributor to post-intervention speech impairment. Other studies <cit.> observed a significant decrease in speech recognition transcription accuracy when comparing healthy speakers to a group of patients diagnosed with oral cancer at various treatment stages.
However, to the best of our knowledge, there is no prior research on the behavior of speaker embeddings of patients treated for oral cancer. Speaker embedding similarity, in contrast to conventional intelligibility rating systems, could provide an objective and text-independent measurement of changing voice characteristics without relying on any human perceptual evaluation of pathological speech.
In recent years, speaker verification has gained significant performance increases due to the availability of large and labeled datasets <cit.>, a significant increase in computational power and the advent of specialized deep learning models, including the x-vector architecture <cit.>, ECAPA-TDNN <cit.> and fwSE-ResNet <cit.>. Low-dimensional speaker embeddings can be extracted from these models and have shown to capture a wide variety of speaker characteristics, including gender, age, spoken language and emotional state <cit.>.
In this paper, we analyze the behavior of speaker embeddings at different stages during oral cancer treatment with respect to multiple properties. First, we examine how the speaker characteristics, according to the speaker embeddings, evolve between the pre- and post-intervention stages; subsequently, we compare this to previous research results and establish the feasibility of potentially using speaker embeddings during the oral cancer treatment procedure of a patient. Secondly, we assess the intra-session robustness of speaker embeddings of patients based on speech samples recorded in the same session during oral cancer treatment and compare this to a cohort of non-pathological speakers. Finally, we perform a speaker verification analysis combining utterances from several steps in the intervention trajectory of the patients, with the goal of analyzing the robustness of the pathological embeddings towards other speakers.
§ PATHOLOGICAL SPEAKER EMBEDDINGS
The speech samples in the analysis of this paper were collected from 57 Dutch patients with primary oral carcinoma taken at the University Medical Center Utrecht (UMC Utrecht) and the Radboud University Medical Center (Radboudumc) in the Netherlands between January 2007 and August 2009. The study protocol (study ID: NL1200604106) was approved by the Ethics Committees of the UMC Utrecht and Radboudumc. All participants received written information and provided their signed informed consent. The oncological treatment of the patients consists of surgery and subsequent radiotherapy. In addition, samples were also collected from 60 healthy speakers, matched for age and gender, as the control group <cit.>. Speech samples of patients were taken within 4 weeks before oncological intervention, 4 to 6 weeks after both surgery and radiotherapy and 6 and 12 months after surgery during the recovery phase. The healthy control group has speech samples only taken once. At each sampling session, two speech utterances are collected from the speakers by reading two short, phonetically diverse texts which will be referred to as text1 and text2 in this paper, respectively. The texts and recording equipment is kept consistent across all sampling sessions. The average duration of all collected speech samples is 49.6 seconds.
In addition, the tumor stage, as indicated by T of the commonly used TNM cancer staging system <cit.> of the patients were also collected during the pre-intervention period. The T variable ranges from T1, indicating small tumors, to T4, indicating large tumors which have potentially invaded nearby structures, known as metastasis. Furthermore, the reconstruction type of the oral cancer surgical procedure is also collected, existing of primary closure, free flap, local flap, and bone flap reconstruction. Primary closure refers to the immediate closure of the incision after the removal of cancerous tissue. Local flap reconstruction uses adjacent oral cavity tissue to reconstruct the affected area after tumor removal, while free flap reconstruction uses tissue from another body part. Bone flap reconstruction is used to rebuild bone structures inside the oral cavity after removal of the cancer tumors. The composition of the speaker characteristics of the patients in the dataset is given in Table <ref>.
The speaker embeddings are extracted from the state-of-the-art speaker verification fwSE-ResNet34 model presented in <cit.>. This architecture extends the popular ResNet <cit.> backbone with a speech-adapted version of Squeeze-Excitation (SE) <cit.> and incorporates positional encodings to extend the spatial invariance of the 2D convolutional kernels with a notion of frequency positional information. The model is optimized using the Additive Angular Margin (AAM) softmax loss function <cit.>, resulting in the cosine distance being the similarity metric between speaker embeddings. More information about the architecture and training procedure can be found in the accompanying paper <cit.>. We note that this includes using the same training set, which solely exists of the development part of VoxCeleb2 <cit.>, with no form of subsequent domain adaptation to pathological speech.
§ PATHOLOGICAL SPEAKER ANALYSIS
It is shown that various functions related to the oral cavity are impacted by surgical and radiotherapy interventions, including the masticatory, swallowing and speech capabilities <cit.>. To analyze the evolution of the speaker identifying characteristics of patients undergoing oral oncological treatment, we calculate the cosine similarities speaker-wise between the pre-operative text1 embedding and all text2 embeddings at different stages in the treatment trajectory. We also calculate the cosine similarities between the text1 and text2 embeddings of the healthy speakers to be compared to the pre-operative embedding behavior of the patients.
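The comparison described here reduces to cosine similarities between fixed-length embedding vectors. A minimal sketch is given below; the variable names and the per-speaker/per-stage dictionary layout are our own assumptions, not the authors' code.

import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarities_vs_preop(pre_text1, text2_by_stage):
    # pre_text1: {speaker_id: pre-operative text1 embedding}
    # text2_by_stage: {stage_name: {speaker_id: text2 embedding at that stage}}
    out = {}
    for stage, embeddings in text2_by_stage.items():
        out[stage] = [cosine(pre_text1[spk], emb)
                      for spk, emb in embeddings.items() if spk in pre_text1]
    return out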
Figure <ref> depicts a box plot describing the evolution of the speaker similarity relative to the pre-operative speaker embeddings. We observe no significant difference between the pre-operative speaker similarity of the pathological group in comparison to the healthy set of speakers. While pre-operative speech impairment is usually limited for patients diagnosed with oral cancer in comparison to the post-intervention condition <cit.>, it is encouraging to observe similar behavior of pre-operative pathological speakers and the healthy control group. Section <ref> analyzes the intra-session robustness of the speaker embeddings in more detail.
A significant decrease in pre-operative speaker similarity is observed after surgical treatment of the patients. It is previously shown that surgical intervention in oral cancer treatment has a significant negative impact on a wide variety of oral function abilities, including self-reported speech capability <cit.>. Those findings are reinforced by observing a comparable degradation between the pre-operative and post-operative speaker embedding similarity, which provides an objective and robust measurement of changing voice characteristics.
Radiotherapy during oral oncological treatment can potentially impact important tissues related to speech production <cit.>. However, the cumulative effect on oral function of post-operative radiotherapy strongly depends on variables such as tumor location, tumor stage and reconstruction type <cit.>. In our results, an additional significant change in voice characteristics is discerned after the post-operative radiotherapy stage in the treatment trajectory. We also observe a substantial increase in variability between pre-operative and post-radiotherapy speaker similarity, suggesting the final extent of change in voice characteristics is highly dependent on some underlying variables.
Both an increased pre-operative speaker similarity and decreased variability is noted after the 6-month recovery period, relative to the post-radiotherapy stage, with a similar trend in the following 6 months. This indicates that voice characteristics tend to return to the pre-operative state to a certain extent for at least a 1-year period post-intervention.
§.§ Tumor stage impact on voice characteristics
Figure <ref> depicts the change in pre-operative voice characteristics for each subgroup of patients based on tumor stage determined before intervention. The figure shows the mean cosine similarity between the pre-operative text1 and pre- and post-operative text2 embeddings for each subgroup. The number of speakers in each group is given in Table <ref>.
We notice an inversely proportional relationship between the tumor size and the pre-operative speaker similarity at the post-intervention stages. This corroborates previous research which suggests that late-stage tumors were associated with poorer post-operative speech outcomes, including reduced speech intelligibility and decreased vocal quality <cit.>. Notably, this is accompanied with a more pronounced recovery towards pre-operative speaker characteristics in the T3 and T4 groups after the 1-year post-intervention period. This suggests that the additional severity of post-radiotherapy changes in speaker characteristics in the late-stage tumor groups is partially or even completely offset after sufficient recovery time.
§.§ Reconstruction type impact on voice characteristics
Likewise, Figure <ref> shows the evolution of the pre-operative mean speaker similarity according to the type of reconstructive surgery performed. We observe that primary closure has the least significant impact on post-intervention voice characteristics in comparison to flap-based reconstruction. This supports previous research in which patients treated with primary closure were rated higher in speech intelligibility <cit.>. Notable is the significantly more severe change of voice characteristics of patients undergoing restorative local flap surgery in comparison to free flap surgery. This can possibly be attributed to the removal of tissue from the oral cavity during local flap surgery, as opposed to tissue removal from other parts of the body in free flap surgery. The removal of tissue in the oral cavity can potentially devise an additional degree of voice transformation in the patient in the case of local flap restoration. However, we note that the number of local flap surgeries in our dataset is limited.
§.§ Intra-session robustness of pathological embeddings
State-of-the-art speaker embeddings have shown to robustly capture speaker characteristics in a variety of challenging conditions, including severe background noise, short sampling duration and language switching <cit.>. However, it is an open question how well these embeddings can identify speakers who have had severe medical intervention in the oral cavity region. Surgery related to oral cancer treatment can have a severe impact on the structural composition of the vocal tract, which could potentially both limit or enhance the identifying characteristics captured by the speaker embeddings. In this section, we analyze the intra-session robustness of the speaker embeddings at all stages during oral cancer treatment.
To establish the intra-session robustness of the speaker embeddings, we calculate the cosine similarity between the text1 and text2 embedding of each patient at all sampling sessions during the treatment trajectory. The session-wise mean and standard deviation of the same-speaker cosine similarities is shown in Figure <ref>. As a reference, the mean similarity between the embeddings from the same speakers in the healthy group is indicated by the dotted line. For comparison, we also plotted the mean and standard deviation of the speaker-wise similarities between the pre-operative text1 and post-intervention text2 embeddings.
We can observe that the mean intra-session similarity is very consistent during the complete oral cancer treatment trajectory, even slightly exceeding the healthy control group. This indicates that the speaker embeddings are able to capture robust and distinguishing voice characteristics of speakers, even after substantial oncological intervention in the oral cavity, given the changed voice characteristics are temporally stable. This is notable due to the training set of the speaker embedding extractor not containing any comparable pathological speakers. This implies no domain-specific adaption of the training procedure of the speaker embedding extractor is needed, which greatly alleviates the potential medical usage of speaker embeddings in oral cancer treatment.
§.§ Pathological speaker verification analysis
In this section we want to analyze the behavior of pathological speaker embeddings in a speaker verification setting. Speaker verification attempts to solve the task if two utterances are spoken by the same person. We create three groups of speaker verification trials based on speech samples from patients: pre-operative, pre-operative combined with post-operative and pre-operative combined with post-radiotherapy utterances. To increase the number of trials, we create consecutive, non-overlapping crops of 5 seconds of each utterance and subsequently extract the speaker embeddings as described in Section <ref>. Each trial consists of a text1 embedding paired with a text2 embedding for text-independency and we balance the amount of positive and negative trials. Results are reported using the equal error rate (EER) and a breakdown of the false positive rate (FPR) and false negative rate (FNR).
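For reference, the relation between EER, FPR and FNR on these trial lists can be computed as in the following sketch (our own helper; sweeping thresholds over the observed scores is a simplification of standard EER interpolation).

import numpy as np

def eer_fpr_fnr(scores, labels):
    # scores: cosine similarities; labels: 1 for same-speaker trials, 0 for different-speaker trials
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thresholds = np.unique(scores)
    fnr = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    fpr = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(fnr - fpr)))
    return {"eer": (fnr[i] + fpr[i]) / 2, "threshold": thresholds[i], "fpr": fpr, "fnr": fnr}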
As shown in Table <ref>, the overall EER sharply increases by the subsequent addition of post-operative and post-radiotherapy samples. However, as Figure <ref> indicates, the FPR of all trial groups remains almost identical, independent of the chosen speaker verification threshold. The degradation of EER can exclusively be attributed by an increase in FNR in the groups combining pre-operative and post-intervention embeddings. The implications of a stable FPR and variable FNR are desirable from an oral cancer treatment viewpoint. A stable FPR signifies a robust behavior of the speaker embeddings towards other speakers, while simultaneously still being able to capture the change in voice characteristics of the same speaker during the treatment trajectory.
§ FUTURE WORK
As shown in this paper, the use of speaker embeddings has the potential to improve our understanding of changing voice characteristics during oral cancer treatment. Using speaker embeddings to analyze individual treatment trajectories proves viable due to a combination of intra-session robustness, objective and text-independent metrics for changing voice characteristics and no reliance on human perceptual evaluation in the process. In future work, we will attempt to investigate the feasibility of using speaker embeddings to identify potential complications or challenges that may arise during the recovery process.
§ CONCLUSION
In this paper, we analyzed the behavior of speaker embeddings of patients diagnosed with oral cancer at different stages during oncological treatment. First, we found that pre-operative and post-intervention speaker similarity significantly diminishes. However, we observe an evolution of the voice characteristics towards the pre-operative stage in the following 12-month post-operative period. Secondly, we establish the intra-session robustness of current state-of-the-art speaker embeddings on speakers with oral cancer treatment. This indicates that the embeddings can successfully capture pathological speaker characteristics, given the pathology is temporally stable. Finally, we observe a stable false positive rate and variable false negative rate in a speaker verification analysis when speech samples are used from different stages in oral cancer treatment. This signifies a stable behavior of the embeddings towards other speakers while still being able to capture the change in voice characteristics during oral oncological treatment.
|
http://arxiv.org/abs/2307.04936v1 | 20230710230605 | Quarkonia pair production as a tool for study of gluon GPDs | [
"Marat Siddikov",
"Ivan Schmidt"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Quarkonia pair production as a tool for study of gluon GPDs
Marat Siddikov, Ivan Schmidt
Departamento de Física, Universidad Técnica Federico Santa María, y Centro Científico - Tecnológico de Valparaíso, Casilla 110-V, Valparaíso, Chile
In these proceedings we present our results on the exclusive photoproduction of J/ψ η_c pairs in the collinear factorization framework. We argue that the process might be used as a complementary channel for studying the generalized parton distributions (GPDs) of gluons. We provide numerical estimates for the cross-section in the kinematics of the future Electron Ion Collider.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION
Nowadays the understanding of partonic and multipartonic distributions in the proton, and in particular of the so-called Generalized Parton Distributions (GPDs) <cit.>
remains one of the open problems within hadronic physics. The phenomenological extraction of these distributions is challenging for technical (mathematical) reasons. Moreover, it relies on different assumptions and sometimes provides only limited information about
the partonic distributions. For these reasons, it is always desired
to extend the number of channels used for phenomenological studies <cit.>. Recently it has been suggested that 2→3 exclusive processes might be used as a new tool for the study of the GPDs and complement existing phenomenological
research in 2→2 channels <cit.>.
We argue that it is possible to extend these studies and use the exclusive photoproduction of heavy quarkonia pairs as additional probe of the gluon GPDs. From previous research on single-quarkonia production it is known that the heavy mass of quarkonia may be used as a natural hard scale in the problem, justifying the use of perturbative methods, even in the photoproduction regime.
Due to the different structure of the coefficient function, the quarkonia
pair production allows one to obtain additional constraints on the gluon GPDs,
especially outside the classical x=±ξ line.
Since, due to C-parity constraints, the production of J/ψ J/ψ pairs
is not related to GPDs of the target, in our
study we focus on the production of J/ψ η_c pairs, and
analyze the kinematics of the low-energy runs at the future Electron-Ion
Collider <cit.>.
For high-energy runs, as well as other future accelerators, it might
be more appropriate to use the evaluations in the color dipole picture,
which incorporates saturation effects <cit.>.
This contribution is structured as follows. Below, in Section <ref>,
we briefly discuss the framework and the structure of the cross-section
of quarkonia pair production (the detailed derivation of these results
might be found in <cit.>). At the end of this section
we present some numerical estimates for the cross-section in the EIC kinematics and
draw conclusions.
§ EXCLUSIVE PHOTOPRODUCTION OF MESON PAIRS
The production of light meson pairs was analyzed previously
in Refs. <cit.>,
in Bjorken kinematics. However, for heavy quarkonia this analysis has limited applicability, since in the kinematics of very large photon virtualities
Q^2=-q^2≫ M_1,2^2 (quarkonia masses), the cross-section is vanishingly small. In our studies we consider that both Q^2 and M_1,2^2 are large scales, although eventually we will consider the photoproduction
limit Q→0.
The cross-section of the photoproduction of heavy quarkonia pairs is given by
dσ_γ p→ M_1M_2p^(L,T)=dy_1dp_1⊥^2dy_2dp_2⊥^2dϕ|𝒜_γ p→ M_1M_2p^(L,T)|^2/4(2π)^4√((W^2+Q^2-m_N^2)^2+4Q^2m_N^2)δ((q+P_1-p_1-p_2)^2-m_N^2)
where y_1,y_2 are the quarkonia rapidities, p_1⊥, p_2⊥
are their corresponding momenta, ϕ is the azimuthal angle between
p_1⊥,p_2⊥; W^2=(q+P_1)^2
is the energy of the γ^*p pairs, and the δ-function
in the right-hand side stems from the onshellness of the recoil proton.
This δ-function introduces cumbersome constraints on the kinematics
of the produced quarkonia pairs for the fixed-energy photons
(see <cit.> for details), however might be trivially
taken into account if we treat the quarkonia momenta p_1⊥, p_2⊥
and rapidities y_1,y_2 as independent variables, and fix the
photon energy W from the onshellness condition.
The evaluation of the amplitudes 𝒜_γ p→ M_1M_2p^(L,T)
was done in the collinear factorization framework, assuming the quarkonia
pairs and the recoil proton are kinematically well-separated from
each other. In the leading order, the dominant contribution to the
amplitudes of quarkonia production comes from the gluon GPDs. In our
evaluations we will disregard the contributions of the poorly known
transversity gluon GPDs H_T^g, E_T^g, H̃_T^g, Ẽ_T^g,
since existing experimental bounds suggest that they should be negligibly
small (see e.g. explanation in <cit.>).
The contribution of the remaining (chiral even) GPDs to the square
of amplitude is given by
∑_ spins|𝒜_γ p→ M_1M_2p^(𝔞)|^2 =1/(2-x_B)^2[4(1-x_B)(ℋ_𝔞ℋ_𝔞^*+ℋ̃_𝔞ℋ̃_𝔞^*)-x_B^2(ℋ_𝔞ℰ_𝔞^*+ℰ_𝔞ℋ_𝔞^*+ℋ̃_𝔞ℰ̃_𝔞^*+ℰ̃_𝔞ℋ̃_𝔞^*)-(x_B^2+(2-x_B)^2t/4m_N^2)ℰ_𝔞ℰ_𝔞^*-x_B^2t/4m_N^2ℰ̃_𝔞ℰ̃_𝔞^*], 𝔞=L,T
where we introduced the shorthand notations for convolutions
ℋ_𝔞 =∫_-1^1dx c_𝔞(x, y_1, y_2)H_g(x,ξ,t), ℰ_𝔞=∫_-1^1dx c_𝔞(x, y_1, y_2)E_g(x,ξ,t),
ℋ̃_𝔞 =∫_-1^1dx c̃_𝔞(x, y_1, y_2)H̃_g(x,ξ,t), ℰ̃_𝔞=∫_-1^1dx c̃_𝔞(x, y_1, y_2)Ẽ_g(x,ξ,t),
x is the average light-cone momentum fraction of the proton carried
by the gluon before and after interaction, and ξ is the standard
skewness variable (it might be related to quarkonia rapidities y_1,y_2).
The partonic amplitudes c_𝔞, c̃_𝔞
might be evaluated perturbatively (see details in <cit.>). For the case in which the quarkonia are well-separated from each other kinematically,
it is possible to express the amplitudes c_𝔞, c̃_𝔞
in terms of the nonperturbative long-distance matrix elements (LDMEs)
of Non-Relativistic QCD (NRQCD) <cit.>, multiplied by a rational function,
c_𝔞, c̃_𝔞∼∑_ℓ𝒫_ℓ(x)/∏_k=1^n_ℓ(x-x_k^(ℓ)+i0)
where 𝒫_ℓ(x) is a smooth polynomial of
the variable x, and the denominator of each term in the sum (<ref>)
might include a polynomial with up to n_ℓ=5 nodes x_k^(ℓ)
in the region of integration. The position of the poles x_k^(ℓ)
depends on all kinematic variables y_1, y_2, Q, and for this reason, varying the rapidities y_1,y_2 of the observed quarkonia, it is possible to probe the gluon GPDs in the full kinematic range (x, ξ).
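To make the pole structure concrete, each term of the convolutions above can be evaluated numerically with the Sokhotski–Plemelj split: the +i0 prescription gives a principal-value integral minus iπ times the residue at the pole. The toy snippet below assumes user-supplied functions H(x) for the GPD and P(x) for the smooth polynomial, and a single pole x0 in (-1,1); this is a deliberate simplification of the full coefficient functions, which may have up to five poles.

import numpy as np
from scipy.integrate import quad

def convolve_with_pole(H, P, x0):
    # Evaluates ∫_{-1}^{1} dx P(x) H(x) / (x - x0 + i0)
    #         = PV ∫ P(x)H(x)/(x - x0) dx  -  i*pi * P(x0) H(x0).
    f = lambda x: P(x) * H(x)
    f0 = f(x0)
    def subtracted(x):
        return (f(x) - f0) / (x - x0) if abs(x - x0) > 1e-12 else 0.0
    pv, _ = quad(subtracted, -1.0, 1.0, points=[x0])          # regular part
    pv += f0 * np.log((1.0 - x0) / (1.0 + x0))                # analytic PV of 1/(x - x0) on [-1, 1]
    return pv - 1j * np.pi * f0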
Due to space limitations, here we omit the full expressions for the
amplitudes c_𝔞, c̃_𝔞 (see <cit.> for details). However, in
Figure <ref>
we show the density plot which illustrates the behavior of the coefficient
function c_T(x, y_1, y_2) as a function of its
arguments, and allows to see the dependence of the poles on the variable ξ.
The typical values of the cross-sections in the EIC kinematics range
between a few dozens to a few hundreds of picobarns, depending on
the kinematics and chosen parametrization of the gluon GPDs.
In the right panel of Figure <ref> for the
sake of illustration we show the cross-section for the lowest-energy
electron-proton beam as a function of the invariant momentum transfer
t, for several parametrizations of the gluon GPDs.
More detailed predictions for the cross-section at various
energies might be found in our recent article <cit.>.
To summarize, our findings demonstrate that the exclusive photoproduction
of J/ψ η_c mesons (as well as other heavy quarkonia pairs
with opposite C-parities) potentially could be used as a viable
gateway for the analysis of the gluon GPDs of the target. The amplitude
of this process obtains the dominant contribution from the unpolarized
gluon GPD H_g; however, in contrast to classical 2→2 processes,
it has sensitivity to the behavior of the GPDs outside the x=±ξ
line, and thus could complement information extracted from DVCS and
single-quarkonia production. Numerically, the evaluated cross-sections are on par with similar
estimates for 2→3 processes suggested recently in the literature <cit.>.
§ ACKNOWLEDGEMENTS
We thank our colleagues at UTFSM university for encouraging discussions.
This research was partially supported by Proyecto ANID PIA/APOYO AFB220004
(Chile) and Fondecyt (Chile) grants 1220242 and 1230391. “Powered@NLHPC:
This research was partially supported by the supercomputing infrastructure
of the NLHPC (ECM-02)”.
Goeke:2001tz K. Goeke, M. V. Polyakov and M. Vanderhaeghen,
Prog. Part. Nucl. Phys. 47, 401 (2001).
Diehl:2003ny M. Diehl, Phys. Rept. 388, 41
(2003).
Guidal:2013ryaM. Guidal, H. Moutarde and M. Vanderhaeghen,
Rept. Prog. Phys. 76 (2013), 066202.
Pire:2017ygeB. Pire and L. Szymanowski, Phys. Rev. D
96 (2017) no.11, 114008.
Pire:2021dadB. Pire, L. Szymanowski and J. Wagner, Phys.
Rev. D 104 (2021) no.9, 094002.
GPD2x3:9 G. Duplančić, S. Nabeebaccus, K. Passek-Kumerički,
B. Pire, L. Szymanowski and S. Wallon, JHEP 03 (2023) 241; JHEP 11 (2018) 179.
GPD2x3:7 R. Boussarie, B. Pire, L. Szymanowski and S. Wallon,
JHEP 02 (2017) 054.
GPD2x3:6 W. Cosyn and B. Pire, Phys. Rev. D 103 (2021)
114002.
GPD2x3:5 A. Pedrak, B. Pire, L. Szymanowski and J. Wagner,
Phys. Rev. D 101 (2020) 114027.
GPD2x3:4 B. Pire, L. Szymanowski and S. Wallon, Phys. Rev.
D 101 (2020) 074005.
Boussarie:2016qopR. Boussarie, B. Pire, L. Szymanowski
and S. Wallon, JHEP 02 (2017), 054 [erratum: JHEP 10
(2018), 029].
GPD2x3:10 J.-W. Qiu and Z. Yu, JHEP 08 (2022)
103; Phys. Rev D 107 (2023) 1, 014007.
Brambilla:2010cs N. Brambilla et al.,
Eur. Phys. J. C71, 1534 (2011).
Accardi:2012qutA. Accardi et al., Eur. Phys. J. A
52, no. 9, 268 (2016).
AbdulKhalek:2021gbh R. Abdul Khalek et al., Nucl.
Phys. A 1026 (2022) 122447.
Andrade:2022rbnS. Andrade, M. Siddikov and I. Schmidt,
Phys. Rev. D 105 (2022) 7, 076022.
Siddikov:2022bku M. Siddikov and I. Schmidt, Phys. Rev.
D 107 (2023) no.3, 034037 [arXiv:2212.14019 [hep-ph]].
LehmannDronke:2000hloB. Lehmann-Dronke, A. Schafer, M. V. Polyakov
and K. Goeke, Phys. Rev. D 63 (2001), 114001.
Clerbaux:2000hbB. Clerbaux and M. V. Polyakov, Nucl.
Phys. A 679 (2000), 185-195.
Goloskokov:2013mbaS. V. Goloskokov and P. Kroll, Eur.
Phys. J. C 74 (2014), 2725; Eur. Phys. J. A
47, 112 (2011).
|
http://arxiv.org/abs/2307.07270v1 | 20230714105145 | Characterizing current noise of commercial constant-current sources by using of an optically-pumped rubidium atomic magnetometer | [
"Ni Zhao",
"Lulu Zhang",
"Yongbiao Yang",
"Jun He",
"Yanhua Wang",
"Tingyu Li",
"Junmin Wang"
] | physics.atom-ph | [
"physics.atom-ph"
] |
Characterizing current noise of commercial constant-current sources by using of an optically-pumped rubidium atomic magnetometer
State Key Laboratory of Quantum Optics and Quantum Optics Devices and Institute of Opto-Electronics, Shanxi University, Taiyuan 030006, China
Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, China
College of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan 030024, China
corresponding author. E-mail: [email protected]
This paper introduces a method for characterizing the current noise of commercial constant-current sources(CCSs) using a free-induction-decay(FID) type optically-pumped rubidium atomic magnetometer driven by a radio-frequency(RF) magnetic field.
We convert the sensitivity of the atomic magnetometer into the current noise of CCS by calibrating the coil constant. At the same time, the current noise characteristics of six typical commercial low-noise CCSs are compared. The current noise level of the KeySight Model B2961A is the lowest among the six tested CCSs, which is 36.233±0.022 nA / Hz^1/2 at 1 ∼ 25 Hz and 133.905±0.080 nA / Hz^1/2 at 1 ∼ 100 Hz respectively. The sensitivity of atomic magnetometer is dependent on the current noise level of the CCS. The CCS with low noise is of great significance for high-sensitivity atomic magnetometer. The research provides an important reference for promoting the development of high precision CCS, metrology and basic physics research.
§ INTRODUCTION
Optically pumped atomic magnetometers extract magnetic field information based on the interaction between light and atoms^1-3. They have been widely used in the military, medicine, space magnetic measurement, atomic gyroscopes and basic physics research due to their outstanding advantages such as high sensitivity, fast response speed and portability^4-8. According to their working principles, optically pumped atomic magnetometers mainly include the spin-exchange relaxation-free (SERF) atomic magnetometer^9, the nonlinear magneto-optical rotation (NMOR) atomic magnetometer^10, the coherent population trapping (CPT) atomic magnetometer^11, the Mx magnetometer^12, the Mz magnetometer^13, etc.
CCSs with low noise and excellent stability have important applications in metrology, quantum precision measurement, search for neutron electric dipole moment^14-15 and basic physics research.
Current noise levels can be used to evaluate the characteristics of CCSs. Traditionally, the current noise of a CCS is characterized indirectly based on Ohm's law: a constant current is applied to a high-precision resistor through the CCS, and by analyzing the voltage noise across the resistor over a period of time, the current noise of the CCS can be deduced from Ohm's law. However, the resistivity may be affected by temperature fluctuations, which can affect the measurement results. Atoms are the most sensitive measurement media in nature. Based on a high-sensitivity atomic magnetometer, the magnetic field can be measured accurately and the current noise of CCSs can then be characterized.
In recent years, significant progress has been made in characterizing and suppressing current noise. Shifrin et al.^16realized high precision DC current measurement using a single-layer quartz two-zone solenoid and high precision differential current-frequency converter based on a He-Cs atomic magnetometer. Miao et al.^17measured the frequency, amplitude and phase of sinusoidal alternating current using a pump-probe atomic magnetometer. Li. et al. ^18developed a high-precision DC current sensor based on the optically pumped Mz atomic magnetometer. Chen. et al. ^19characterized the current noise based on a pump-probe atomic magnetometer. Shen. et al. ^20 measured and suppressed the current noise of commercial CCS with a potassium atomic magnetometer, which is 600 pA / Hz^1/2 at 1 Hz. Zheng et al. ^21measured and suppressed the low-frequency noise of CCS based on double resonance alignment magnetometers and current noise level was 100 nA / Hz^1/2 at 0.001 Hz.
FID atomic magnetometers can operate over a large range of terrestrial magnetic fields and have a relatively wide dynamic measurement range and high sensitivity^22-26. In our previous work^27, the fundamental principle and classical physical picture of a FID atomic magnetometer driven by a RF magnetic field were described in detail, and the sensitivities for different optical paths of the interaction between the probe beam and the atomic ensemble were compared. Here, we present a method for characterizing the current noise of commercial CCSs by using a FID atomic magnetometer driven by a RF magnetic field. In this method, we calibrate the coil constant inside a magnetic field shield based on a high-precision commercial CCS. The measured magnetic field noise is converted to the current noise of commercial CCSs via the coil constant. We select six typical commercial CCSs (KeySight Model B2961A, ThorLabs Model LDC205C, SRS Model LDC501, SRS Model CS580, a home-made CCS and GWInstek Model 2303S) to characterize and compare their current noise characteristics within the bandwidth ranges of 1 ∼ 25 Hz and 1 ∼ 100 Hz. The current noise characteristics of the different commercial CCSs are analyzed and discussed in detail. We also experimentally demonstrate the dependence between sensitivity and current noise.
§ EXPERIMENTAL SETUP
Figure 1 shows the experimental setup. A 15×15×15 mm^3 vapor cell containing isotopically enriched ^87Rb is used in our experiment; it is filled with 100 Torr of N_2 gas, which serves as buffer gas and fluorescence-quenching gas. To maintain even heating, the vapor cell is positioned at the center of a boron nitride ceramic oven. A specially designed square flexible-film electric heater made of twisted-pair wires is attached to the outer surface of the oven and is used to heat and control the temperature of the atomic vapor cell. The heater is driven by a 477 kHz alternating current, set to be much higher than the measurement bandwidth and the Larmor frequency to ensure that the heating system does not interfere with the measurement. A non-magnetic PT100 thermistor is used as the temperature sensor without introducing magnetic interference. The temperature of the atomic vapor cell is set at 85℃. A four-layer cylindrical µ-metal magnetic shield is used to suppress environmental magnetic field noise. The inner dimension of the magnetic shield is ϕ250×500 mm, and the shielding factor is greater than 50000. Commercial CCSs apply current to produce a static magnetic field B_0 along the y direction and an RF magnetic field B_RF along the z direction. The pump laser is emitted from a distributed Bragg reflector (DBR) laser, which is tuned to the ^87Rb D1 transition line at 795 nm (from 5^2S_1/2 F=2 to 5^2P_1/2 F'=1). The pump beam passes through an acousto-optic modulator (AOM), is expanded by a telescope system, and is converted into a circularly polarized beam by a λ / 4 wave plate before entering the atomic vapor cell along the y direction. The diameter of the expanded beam is about 10 mm, and the pump beam power is 5 mW. The linearly polarized probe laser, originating from a 780 nm DBR laser, is blue-detuned by 6 GHz from the ^87Rb D2 transition line at 780 nm (from 5^2S_1/2 F=1 to 5^2P_3/2 F'=2). The probe beam has a diameter of 2 mm and a power of 30 μW, and its direction is perpendicular to both the pump beam and the RF magnetic field. The probe beam passes through the atomic vapor cell and enters a polarimeter composed of a λ / 2 wave plate, a Wollaston prism and a balanced differential photodetector (common-mode noise rejection ratio ∼ 50 dB). The Faraday rotation angle is obtained with a data acquisition system composed of an NI data acquisition card (NI-USB6363) and LabVIEW.
The relationship between the static magnetic field measured by the FID atomic magnetometer and the Larmor precession frequency can be expressed as:
B=ω/γ
where γ represents the gyromagnetic ratio of the ground-state atoms, which is about 6.99583 Hz/nT for the ground state (F = 2) of ^87Rb.
The timing sequence of the control system is shown in Figure 2. First, we turn on the pump laser from t_0 to t_1 to prepare the spin-polarized state of the ^87Rb atomic ensemble. The macroscopic magnetic moment of the polarized ^87Rb atoms points along the y direction at the end of t_1. Then, an RF magnetic field with angular frequency equal to the Larmor precession frequency is applied, and the atomic macroscopic magnetic moment precesses into the xoz plane after a π / 2 pulse. Finally, the RF magnetic field is switched off and the probe laser is turned on. The atomic macroscopic magnetic moment evolves freely at the Larmor frequency until it reaches the thermal equilibrium state. The pump laser, RF magnetic field and probe laser are separated in the time domain by the timing sequence control to avoid crosstalk affecting the measurement signal and sensitivity, and hence the current noise characterization.
We apply a static magnetic field of 6.3 μT along the y direction. The heating temperature of the atomic vapor cell is set at 85℃, and the atomic number density is about 2.2 × 10^12 cm^-3. Figure 3(a) shows a typical FID signal in one period. The transverse relaxation time T_2 of ^87Rb is 2.5 ms, obtained by exponential fitting. Figure 3(b) shows the Fast Fourier Transform (FFT) of the FID signal. The full width at half maximum (FWHM) is about 292.4±2.9 Hz.
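The conversion from a single FID record to a field value via Eq. (1) can be summarized by a short numpy sketch. The function name and the windowing choice are ours; in practice a peak fit of the resonance would typically be used rather than the raw FFT bin.

import numpy as np

GAMMA_RB87 = 6.99583   # Hz/nT, ^87Rb ground state F = 2 (see Eq. (1))

def field_from_fid(fid, fs):
    # fid: one FID record; fs: sampling rate in Hz
    sig = fid - np.mean(fid)
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    larmor = freqs[np.argmax(spectrum)]       # Larmor frequency estimate
    return larmor / GAMMA_RB87                # magnetic field in nT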
§ MEASUREMENT RESULTS AND DISCUSSION
§.§ Calibration of the coil constant
In the experiment, the low-noise, high-stability CCS KeySight Model B2961A and the four-layer cylindrical µ-metal magnetic shield provide good conditions for calibrating the coil constant. The coil constant can be described by^28-29:
C_coil=B_total/I
where I is the current and B_total is the total magnetic field, which can be measured by using an atomic magnetometer.
Figure 4 shows the result of the coil constant calibration along the y direction. First, the CCS B2961A applies a known current to the coils. Then we record the FID signal for 240 s with a period T of about 50 ms. The Larmor frequency is obtained by FFT, and the magnetic field value is obtained by calculation and statistical averaging. The CCS B2961A applies currents in the range of 2-250 mA, and a series of magnetic field values is measured by using the FID magnetometer. The linear fitting result is:
B=126.956I-4.914
The measured magnetic field is actually composed of the magnetic field generated by the CCS applying current to the coils and the residual magnetic field. According to the fitted linear equation, the calibrated coil constant is approximately 126.956±0.076 nT/mA and the residual magnetic field is about 4.914 nT.
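The fit in the equation above is an ordinary least-squares line. The snippet below reproduces it on synthetic data generated from the quoted numbers; the number of points and the noise level are arbitrary choices of ours for illustration.

import numpy as np

rng = np.random.default_rng(0)
currents_mA = np.linspace(2.0, 250.0, 25)                                          # applied currents, 2-250 mA
fields_nT = 126.956 * currents_mA - 4.914 + rng.normal(0.0, 0.1, currents_mA.size)  # synthetic FID-measured fields
coil_const, residual_field = np.polyfit(currents_mA, fields_nT, 1)
print(f"C_coil ~ {coil_const:.3f} nT/mA, residual field ~ {residual_field:.3f} nT")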
§.§ Sensitivity analysis
Sensitivity is an important index for evaluating the performance of an atomic magnetometer. Taking the B2961A as an example, we calculate and analyse the sensitivity. The B2961A applies a 100 mA current to the coils along the y direction, and the static magnetic field is about 12.6 μT. We record FID signals for 6000 periods with the data acquisition system and calculate the sensitivity. Figure 5(a) shows part of the repeatedly measured FID signal with a sampling period of 5 ms. As shown in Figure 5(b), we obtained about 6000 DC magnetic field values by converting the Larmor frequencies into magnetic field values using Eq.(1). According to the statistical average of the magnetic field value distribution, the static magnetic field is about 12.64154 μT. Figure 5(c) is the power spectral density (PSD) calculated from the magnetic field values, which shows a magnetic field sensitivity of 17.0 pT/Hz^1/2 within a bandwidth of 1 ∼ 100 Hz. Here, we mainly measure and characterize the current noise of CCSs. Considering that the ambient magnetic field noise (for example, 1/f noise, etc.) is comparatively high at lower frequencies, in order to minimize the interference of various noise sources at low frequencies, we choose a bandwidth range of 1-100 Hz.
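A hedged sketch of this sensitivity and current-noise extraction is given below (our own function; the Welch segment length and the simple band averaging are illustrative simplifications of the PSD analysis). With the 5 ms sampling period, the record of per-period field values is effectively sampled at 200 Hz.

import numpy as np
from scipy.signal import welch

def current_noise_nA(B_nT, fs=200.0, coil_const_nT_per_mA=126.956, fmin=1.0, fmax=100.0):
    f, psd = welch(B_nT - np.mean(B_nT), fs=fs, nperseg=1024)   # PSD of the field record, nT^2/Hz
    asd = np.sqrt(psd)                                          # field noise, nT/Hz^(1/2)
    band = (f >= fmin) & (f <= fmax)
    field_noise = asd[band].mean()                              # band-averaged noise floor
    return field_noise / coil_const_nT_per_mA * 1e6             # mA/Hz^(1/2) -> nA/Hz^(1/2)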
Figure 6 shows the electronic noise from the DAQ, the electronic noise from the photodetector and DAQ, and the intensity noise from the probe laser. The analysis shows that the DAQ data acquisition system has the lowest noise. The intensity noise from the probe laser is close to the electronic noise from the photodetector and DAQ, because the differential detection method can effectively suppress the common-mode noise of the measurement system. In addition, the sensitivity of the atomic magnetometer is also limited by the current noise from the CCSs. Analyzing the sensitivity obtained with different commercial CCSs is therefore meaningful.
§.§ Characterization current noise of commercial CCSs
Six typical commercial CCSs (KeySight Model B2961A, ThorLabs Model LDC205C, SRS Model LDC501, SRS Model CS580, Home-made CCS and GWInstek
Model 2303S) apply the same current of 100 mA (corresponding to a static magnetic field of 12.6 μT) to the coils along the y direction. We obtain 6000 DC magnetic field values and analyze the sensitivity. Figures 7(a) and (b) show the PSD of the atomic magnetometer with the various CCSs at 1∼ 25 Hz and at 1 ∼ 100 Hz, respectively. The peaks are caused by 50-Hz electronic noise and its harmonics. For all CCSs, the same experimental conclusion holds: the sensitivity of the atomic magnetometer is significantly better at 1 ∼ 25 Hz. Whether the frequency bandwidth is 1 ∼ 25 Hz or 1 ∼ 100 Hz, the difference in magnetometer sensitivity produced by each CCS at the same current is clearly visible. The B2961A has the best sensitivity among the six tested CCSs, which is 4.6 pT /Hz^1/2 at 1 ∼ 25 Hz and 17.0 pT/Hz^1/2 at 1 ∼ 100 Hz, respectively. The reason may be that the CCS B2961A has ultra-low current noise and high stability.
In our experiment, the magnetic field of the atomic magnetometer is generated by the CCSs, so the current noise of the CCSs can be inferred from the magnetic field noise power spectral density. The sensitivity and current noise for the different commercial CCSs are shown in Table 1. The current noise is obtained by dividing the sensitivity by the coil constant. The sensitivity of the FID magnetometer is 17.0 pT/Hz^1/2 at 1 ∼ 100 Hz; dividing this value by the coil constant, the current noise is 133.905 ± 0.080 nA/Hz^1/2 when the CCS B2961A outputs a current of 100 mA. The sensitivity is 4.6 pT/Hz^1/2 at 1 ∼ 25 Hz, and the corresponding current noise is 36.233 ± 0.022 nA/Hz^1/2 when the CCS B2961A outputs a current of 100 mA. It can also be clearly seen from the figure that different commercial CCSs have different current noise levels. The KeySight Model B2961A has the lowest current noise level. The ThorLabs Model LDC205C and SRS Model LDC501 have similar current noise levels. The SRS Model CS580 and the home-made CCS have higher current noise levels. The GWInstek Model 2303S has the highest current noise level. The CCSs have lower current noise levels at 1 ∼ 25 Hz. The current noise level clearly reflects the output current fluctuations of the CCSs.
§ DISCUSSION AND CONCLUSION
We present a method to characterize the current noise of different commercial CCSs, based on the calibration of the coil constant, by using a FID atomic magnetometer. The sensitivity of the atomic magnetometer and the current noise of the CCSs are interdependent, and the current noise of a CCS can be estimated from the sensitivity of the atomic magnetometer. We characterize and compare the current noise characteristics within the bandwidth ranges of 1 ∼ 25 Hz and 1 ∼ 100 Hz. Bandwidth and sensitivity are mutually constrained: increasing the bandwidth comes at the expense of sensitivity. The longer the sampling period of the FID signal, the more accurate the Larmor frequency obtained after the FFT, the smaller the magnetic field fluctuation over a period of time, and the higher the calculated sensitivity when a small bandwidth range is selected. As a result, we characterize the current noise more accurately. A CCS with low noise and high stability is of great significance for improving the sensitivity of atomic magnetometers.
In addition, the sensitivity of an optically pumped magnetometer is limited by various factors, such as the photon shot noise (PSN), the spin-projection noise (SPN), the fluctuations of the residual magnetic field, the intensity noise from the probe laser, the electronic noise, etc. The noise sources mentioned above are all included in the collected FID signal, as well as in the calculated PSD and the measured current noise. Therefore, our measurement results are actually an upper limit on the current noise of the CCSs. It is also very important to further improve the sensitivity of the magnetometer. Among the factors affecting the sensitivity of the magnetometer, spin-exchange (SE) collisions among alkali-metal atoms have a significant effect on the transverse spin-relaxation rate and the linewidth of the magnetic resonance spectrum^30, which leads to a decrease in the sensitivity of atomic magnetometers. SE collisions can be suppressed by filling the cell with buffer gas at an appropriate pressure. Spin-destruction collisions can be suppressed by filling with buffer gas or by coating the atomic vapor cell wall with an anti-relaxation film. PSN and SPN can be suppressed by squeezed states of the light field^31-32 and by spin squeezing^33.
In the future, we can improve the sensitivity using the following two methods. (i) We can perform active magnetic field stabilization^34 based on CCSs with high stability and low noise, which can further compensate and shield the ambient magnetic field noise. (ii) We can also introduce Stokes operator S_2 polarization-squeezed light to further suppress the PSN, so that the sensitivity of the atomic magnetometer can go beyond the PSN limit and realize quantum-enhanced measurement.
§ ACKNOWLEDGMENTS
This research was financially supported by the National Natural Science Foundation of China (11974226).
§ AUTHOR DECLARATIONS
§.§ Conflict of Interest
The authors have no conflicts to disclose.
§.§ Author Contributions
Ni Zhao: Data curation (equal); Formal analysis(equal); Writing–original draft(equal). Lulu Zhang: Data curation (equal); Formal analysis(equal); Software (equal). Yongbiao Yang: Data curation (equal). Jun He: Investigation (supporting). Yanhua Wang: Investigation (supporting); Funding acquisition(equal). Tingyu Li: Investigation (supporting); Funding acquisition(equal). Junmin Wang: Project administration (leading); Resources (leading); Funding acquisition (leading); Writing–review& editing (leading); Supervision (leading).
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ REFERENCES
^1 F. Bitter, “The optical detection of radiofrequency resonance,” Phys. Rev. 76, 833-835 (1949).
^2 W. E. Bell and A. L. Bloom, “Optical detection of magnetic resonance in alkali metal vapor,” Phys. Rev. 107, 1559-1565 (1957).
^3 D. Budker and M. Romalis, “Optical magnetometry,” Nature Phys. 3, 227-234 (2007).
^4 M. E. Limes, E. L. Foley, T. W. Kornack, S. Caliga, S. McBride, A. Braun, W. Lee, V. G. Lucivero, and M. V. Romalis, “Portable magnetometry for detection of biomagnetism in ambient environments,” Phys. Rev. Applied. 14, 011002 (2020).
^5 T. H. Sander, J. Preusser, R. Mhaskar, J. Kitching, L. Trahms, and S. Knappe, “Magnetoencephalography with a chip-scale atomic magnetometer,” Biomed. Opt. Express. 3, 981–990 (2012).
|
http://arxiv.org/abs/2307.05626v1 | 20230711062225 | Poincaré generators at second post-Minkowskian order | [
"Hojin Lee",
"Kanghoon Lee",
"Sangmin Lee"
] | hep-th | [
"hep-th"
] |
Poincaré generators at second post-Minkowskian order
Hojin Lee, Kanghoon Lee, Sangmin Lee
August 12, 2023
==============================================================
§ INTRODUCTION
Ever since the pioneering work <cit.> by Damour, Jaranowski and Schäfer,
global Poincaré generators have provided stringent consistency checks for post-Newtonian (PN)
approach to the effective Hamiltonian mechanics of a gravitating binary system.
At each order of the perturbative Hamiltonian, constructing the Poincaré generators (the boost generator in particular)
can reconfirm the validity of the Hamiltonian and may even fix undetermined coefficients or detect errors.
In the context of post-Minkowskian (PM) expansion strongly influenced by scattering amplitudes (see e.g. <cit.> for comprehensive reviews), the construction of the Poincaré generators
was initiated only recently in <cit.>, where the Hamiltonian H^[1] and the boost generator G⃗^[1], including all spin multipole moments, were constructed at the first post-Minkowskian order (1PM).
The present paper takes a step further in the same direction.
To pinpoint the novelties of the 2PM computation, we first review the 1PM results <cit.>.
At 0PM (a pair of free particles), it is well known that
H^[0] = E_1 + E_2 ,
G⃗^[0] = E_1 x⃗_1 + E_2 x⃗_2 ,
E_a = √(p⃗_a^2 + m_a^2)
(a=1,2) .
The 1PM Hamiltonian is
H^[1] = γ_c [ - Gm_1^2 m_2^2(2γ^2 -1)/E_1 E_2 r] ,
γ = - p_1 · p_2/m_1 m_2 = E_1E_2 - p⃗_1 ·p⃗_2/m_1m_2 ,
γ_c = [ 1-u⃗_c^2 + (n̂·u⃗_c)^2 ]^-1/2 ,
u⃗_c = p⃗_1 + p⃗_2/E_1+E_2 ,
r⃗ = x⃗_1 - x⃗_2 ,
n̂ = r⃗/r .
The 1PM boost generator is
G⃗^[1] = H^[1]X⃗^[1] , X⃗^[1] = z_2 x⃗_1 + z_1 x⃗_2 ,
z_a = E_a/E_1+E_2 .
Since the boost generator takes one inertial frame to another, it is necessary to generalize the Hamiltonian to a form valid in
an arbitrary “lab" frame. At 1PM, the change of frame results in the “dressing" factor γ_c which is a measure for the deviation from the COM frame. The fact that G⃗^[1] has a simple factorized form is another main finding of <cit.>.
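As a quick numerical illustration (not part of the original derivation), the following Python sketch evaluates the lab-frame 1PM Hamiltonian above for arbitrary sample masses, momenta, and positions (all numbers, including G set to unity, are placeholders chosen by us) and confirms that the dressing factor γ_c reduces to 1 when the total spatial momentum vanishes, i.e., in the COM frame.

import numpy as np

# A minimal sketch with illustrative values: evaluate the 1PM lab-frame
# Hamiltonian and check the COM-frame limit of the dressing factor gamma_c.
G = 1.0
m1, m2 = 1.0, 2.0
x1, x2 = np.array([1.0, 0.0, 0.0]), np.array([-2.0, 1.0, 0.0])
r_vec = x1 - x2
r = np.linalg.norm(r_vec)
n_hat = r_vec / r

def H1(p1, p2):
    E1 = np.sqrt(p1 @ p1 + m1**2)
    E2 = np.sqrt(p2 @ p2 + m2**2)
    gamma = (E1 * E2 - p1 @ p2) / (m1 * m2)
    u_c = (p1 + p2) / (E1 + E2)
    gamma_c = 1.0 / np.sqrt(1.0 - u_c @ u_c + (n_hat @ u_c)**2)
    return gamma_c * (-G * m1**2 * m2**2 * (2 * gamma**2 - 1) / (E1 * E2 * r))

p = np.array([0.3, -0.1, 0.2])
print(H1(p, -p))                  # COM frame: gamma_c = 1, bare 1PM value
print(H1(p + 0.4, -p + 0.4))      # generic lab frame: value dressed by gamma_c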
The 2PM Hamiltonian was first derived in the center of momentum (COM) frame in <cit.>; see also <cit.>.
It consists of four terms,
H^[2]|_COM = H^[2]_1,b + H^[2]_2,b + H^[2]_3,b + H^[2]_4,b ,
where the subscript b stands for “bare".
From a diagrammatic point of view, H^[2]_1
originates from one-loop triangle diagrams. The other three are not directly produced by one-loop diagrams.
Rather, they are remainders from an iteration of the 1PM Hamiltonian which cancels out the so-called super-classical term
from the one-loop box diagrams.
One of the two main results of this paper is the general form of the Hamiltonian.
Going to the lab frame amounts to dressing the four terms in a few distinct ways,
H^[2] = H^[2]_1 + H^[2]_2 + H^[2]_3 + H^[2]_4 , γ_o = (1-u⃗_c^2)^-1/2 ,
H^[2]_1 = γ_c^2 γ_o^-1 H^[2]_1,b
H^[2]_(2,3) = γ_c^2 H^[2]_(2,3),b ,
H^[2]_4 = (3γ_c^2 - 2 γ_c^4) H^[2]_4,b .
Understanding these dressing factors takes a large part of this paper. Here, we describe briefly the origin of the last term in (<ref>) which has no 1PM counterpart.
As emphasized in e.g. <cit.>, scattering amplitudes are gauge invariant, on-shell quantities
while potentials are gauge dependent, off-shell quantities.
When deducing the form of a potential from an amplitude, the off-shell extension of the potential should be carefully spelled out, as discussed in the COM frame in the original work <cit.>. As we will explain in the main body of this paper, the iteration process in the lab frame produces an extra contribution to H^[2]_4 that is not visible in the COM frame.
Once the general form of the Hamiltonian is given,
the search for the boost generator is a matter of straightforward (but lengthy) computation.
There are three types of constraints, which we call H/P/J-conditions.
Following <cit.>, we propose an ansatz satisfying the H condition
and involving a small number of unknown functions, and then solve the P/J-conditions to determine the functions.
The result is slightly more complicated than what one may naively expect from the 0PM (<ref>) and the 1PM (<ref>) expressions:
G⃗^[2] = H^[2]X⃗^[1] + γ_c^2 H^[2]_4,b z_12r⃗ ,
z_12 = z_1 - z_2 .
The reappearance of X⃗^[1] and the “misalignment" of
H^[2]_4 with respect to the other three terms are the two most notable features of this result.
The main body of this paper consists of two sections. In section <ref>, we explain how to derive the general 2PM Hamiltonian (<ref>). In section <ref>, we explain how to construct the boost generator (<ref>).
We conclude in section <ref> with a few possible future directions.
The two appendices provide some technical details for section <ref> and <ref>.
§ HAMILTONIAN
The 2PM COM Hamiltonian was first constructed in <cit.>; see also <cit.>.
The goal of this section is to find the general form of the 2PM Hamiltonian valid in an arbitrary lab frame.
1PM The 2PM Hamiltonian is intimately related to the 1PM Hamiltonian.
To see the connection clearly, and to establish our notations, we quickly review the 1PM Hamiltonian.
The 0PM Hamiltonian is the Minkowskian kinetic term,
H^[0] = E_1 + E_2 , E_a = √(p⃗_a^2 + m_a^2) .
The 1PM Hamiltonian in a lab frame is <cit.>
H^[1] = γ_c [ - Gm_1^2 m_2^2(2γ^2 -1)/E_1 E_2 r] ,
γ = - p_1 · p_2/m_1 m_2 = E_1E_2 - p⃗_1 ·p⃗_2/m_1m_2 ,
γ_c = [ 1-u⃗_c^2 + (n̂·u⃗_c)^2 ]^-1/2 ,
u⃗_c = p⃗_1 + p⃗_2/E_1+E_2 ,
r⃗ = x⃗_1 - x⃗_2 ,
n̂ = r⃗/r .
The transverse Lorentz factor γ_c originates from the Fourier integral,
4π∫_q⃗ e^iq⃗·r⃗/q⃗^2- (q^0)^2 =
4π∫_q⃗ e^iq⃗·r⃗/q⃗^2- (u⃗_c·q⃗)^2 = γ_c/r .
For later convenience, we introduce a few more shorthand notations,
u⃗_a ≡p⃗_a/E_a ,
z_a ≡E_a/E_1+E_2 ,
u⃗_- ≡u⃗_1 - u⃗_2 ,
z_12≡ z_1 - z_2 ,
ξ≡ z_1 z_2 .
2PM
The 2PM Hamiltonian in the COM frame <cit.> is given by
H^[2]_b = H^[2]_1,b + H^[2]_2,b + H^[2]_3,b + H^[2]_4,b = -G^2M m_1 m_2/4 r^2[ F_1 + F_2 + F_3 + F_4 ] ,
F_1 = 3 ( m_1m_2/E_1E_2) (5γ^2 -1 ) ,
F_2 = -16 ( m_1m_2/E_1E_2)^2 E/Mγ(2γ^2-1) ,
F_3 = 2 ( m_1m_2/E_1E_2)^3 E/M (2γ^2-1)^2 ,
F_4 = -2 (m_1m_2)^3/(E_1 E_2)^21/E M (2γ^2-1)^2 ,
M = m_1 + m_2 ,
E = E_1 + E_2 .
We pulled out an overall factor so that F_1,2,3,4 are dimensionless. The subscript b stands for “bare". As we move to a lab frame, each term in the first line of (<ref>) will be “dressed" by a factor that depends on u⃗_c and n̂. Before we discuss the dressing, we digress to give a quick review of how the four terms are determined by one-loop scattering amplitudes.
The diagrammatic origin of the four terms in (<ref>)
is depicted schematically in Figure <ref>.
As far as the classical Hamiltonian is concerned,
the only relevant diagrams are the triangle and box diagrams.
Bubble and other diagrams may only affect quantum corrections.
The sum of the two triangle diagrams directly determines H^[2]_1. The other three terms, H^[2]_2,3,4, are produced through an indirect route.
In four space-time dimensions, the sum of the box and crossed box diagrams does not directly contribute to H^[2]. Instead,
it produces an infrared divergent “super-classical" term,
which should be cancelled by an iteration of the 1PM Hamiltonian:
a process commonly called “Born subtraction". In the notation of <cit.>, the subtraction process reads
⟨ p^'|V| p ⟩=ℳ(p^', p )-∫_k⃗ℳ(p^', k) ℳ(k, p)/E_p-E_k+i ϵ+⋯ ,
where ℳ is the on-shell amplitude from the quantum field theory and V is the interacting part of the Hamiltonian.
After cancelling the unphysical divergence, the Born subtraction produces H^[2]_2,3,4. To emphasize this indirect origin,
and to prepare for later computations,
we record the relation between H^[2]_2,3,4 and (H^[1])^2 before the dressing:
H^[2]_2,b = E/m_1 m_24γ/2γ^2-1 ( H^[1]_b)^2 ,
H^[2]_3,b = -E/2E_1 E_2 ( H^[1]_b)^2 ,
H^[2]_4,b = 1/2E ( H^[1]_b)^2 .
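As a sanity check of these relations (our own numerical verification, with arbitrary sample values for G, the masses, the momenta, and r), the following Python sketch builds H^[2]_(2,3,4),b from the prefactor and F_2,3,4 quoted above and compares them against the stated multiples of (H^[1]_b)^2.

import numpy as np

# Cross-check of the iteration relations between H2_{2,3,4,b} and (H1_b)^2.
G, r = 1.0, 10.0
m1, m2 = 1.0, 2.3
p1, p2 = np.array([0.7, -0.2, 0.1]), np.array([-0.1, 0.4, 0.5])
E1, E2 = np.sqrt(p1 @ p1 + m1**2), np.sqrt(p2 @ p2 + m2**2)
E, M = E1 + E2, m1 + m2
gamma = (E1 * E2 - p1 @ p2) / (m1 * m2)

pref = -G**2 * M * m1 * m2 / (4 * r**2)
F2 = -16 * (m1 * m2 / (E1 * E2))**2 * (E / M) * gamma * (2 * gamma**2 - 1)
F3 = 2 * (m1 * m2 / (E1 * E2))**3 * (E / M) * (2 * gamma**2 - 1)**2
F4 = -2 * (m1 * m2)**3 / (E1 * E2)**2 / (E * M) * (2 * gamma**2 - 1)**2

H1b = -G * m1**2 * m2**2 * (2 * gamma**2 - 1) / (E1 * E2 * r)
print(np.isclose(pref * F2, (E / (m1 * m2)) * 4 * gamma / (2 * gamma**2 - 1) * H1b**2))
print(np.isclose(pref * F3, -E / (2 * E1 * E2) * H1b**2))
print(np.isclose(pref * F4, H1b**2 / (2 * E)))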
Dressing factor
The dressing factor of H^[2]_1 is easy to determine.
The triangle diagrams are proportional to |q|^-1.
In the lab frame, as noted in the 1PM setting <cit.>, q is the 4-vector satisfying q·(p_1+p_2)=0
or q^0 = u⃗_c·q⃗.
The Fourier integral of |q|^-1 is then given by
[See appendix <ref> for a collection of all Fourier transforms needed in this paper.]
2 π^2 ∫_q⃗e^i q⃗·r⃗/√(q⃗^2- (u⃗_c·q⃗ )^2) =γ_c^2 γ_o^-1/r^2 ,
γ_o = (1-u⃗_c^2)^-1/2 .
It determines the dressing factor for H^[2]_1:
H^[2]_1 = γ_c^2γ_o^-1 H^[2]_1, b .
The iteration structure of H^[2]_2,3,4
translates to the fact that the r^-2 potential may come from a convolution integral. In the COM frame, the relevant integral is
1/r^2
= (4π)^2 ∫_q⃗∫_l⃗e^i q⃗·r⃗/l⃗^2(q⃗-l⃗)^2 .
Each factor in the denominator is a copy of the graviton propagator part of the tree amplitude.
To generalize the integral to an arbitrary lab frame, it would be natural to modify the denominators as in (<ref>) and obtain
γ_c^2/r^2 = (4π)^2 ∫_q⃗∫_l⃗e^i q⃗·r⃗/(l⃗^2)_c (q⃗-l⃗)^2_c , (a⃗·b⃗)_c ≡a⃗·b⃗ - (u⃗_c ·a⃗)(u⃗_c ·b⃗) .
If there were no extra contributions, we would be led to the conclusion that
H^[2]_(2,3,4) = γ_c^2 H^[2]_(2,3,4),b (?) .
As it turns out, the argument above gives the correct answer for H^[2]_2,3 but not for H^[2]_4.
To recover the missing piece, we have to take a closer look at the Born subtraction.
Off-shell extension
Let (p⃗_1, p⃗_2) be the incoming momenta and (p⃗'_1 , p⃗'_2) be the outgoing momenta.
The Born subtraction for the 2PM potential introduces an intermediate off-shell state with momenta (k⃗_1, k⃗_2).
The off-shell state strictly obeys the momentum conservation,
p⃗_1 + p⃗_2 = P⃗ = k⃗_1 + k⃗_2 .
But, by definition, it violates energy conservation,
E_1(p) + E_2(p) = E_p ≠ E_k = E_1(k) + E_2(k) .
A major intermediate step of the Born subtraction (<ref>) involves the integral,
∫_k⃗c_1(p',k) c_1(k,p)/(E_p-E_k+i ϵ)(p⃗'-k⃗)^2_c (k⃗-p⃗)^2_c ,
c_1 ∝2γ^2-1/E_1 E_2 .
We recognize two copies of the 1PM potential as well as the “propagator" 1/(E_p - E_k) from the non-relativistic quantum mechanics to be matched with the full quantum field theory
in the classical limit.
As explained in <cit.> for the COM Hamiltonian,
for each numerator factor, we need to prescribe how to interpolate between the two momenta. For example, using the fact that E_1, E_2 and γ are all functions of p⃗^2 = p⃗_1^2 = p⃗_2^2 only in the COM frame, we may assign
c_1(k,p)_COM = c_1( k^2 +p^2/2) = c_1(p^2) + k^2-p^2/2 c_1'(p^2) + ⋯ .
As we move to a lab frame, some complications arise. Most notably, the level surface of E = E_1 + E_2 for a fixed P⃗ = p⃗_1 + p⃗_2 is no longer a sphere but an ellipsoid. Nevertheless, the process of extracting contributions
from c_1 to H^[2]_2,3,4 remains largely unchanged aside from the dressing by γ_c^2 as in (<ref>).
The extra contribution to H^[2]_4 comes from the denominator factors:
D(k,p) = 1/l⃗^2 - (u⃗_c(k,p)·l⃗)^2 ,
l⃗ = p⃗ - k⃗ .
It is similar but not equal to the naive denominator factors in (<ref>).
What exactly is u⃗_c(k,p) in (<ref>)? The vector u⃗_c was originally defined for on-shell momenta,
u⃗_c = P⃗/E = p⃗_1 + p⃗_2/E_1 + E_2 .
As we mentioned in (<ref>) and (<ref>), the total momentum is always conserved but the total energy is not.
So, we need an off-shell extension for u⃗_c(k,p) in D(k,p)
just as we extended the numerator factor c_1. One natural prescription is
u⃗_c(k,p) = P⃗( E_p + E_k/2)^-1≈(1 - E_p - E_k/2E_p) u⃗_c ,
u⃗_c = P⃗/E_p .
(To the leading order in Δ E ≡ E_p - E_k, which is all there is to contribute to the 2PM potential, all prescriptions
treating k and p on an equal footing give equivalent results.)
Expanding the interpolated denominator with respect to the one fixed at external momenta,
D(k,p) ≈1/l⃗^2 - (1-Δ E/2E_p)^2 (u⃗_c·l⃗)^2
≈1/(l⃗^2)_c + (Δ E/E_p) (u⃗_c·l⃗)^2≈1/(l⃗^2)_c( 1 - Δ E/E_p(u⃗_c·l⃗)^2/(l⃗^2)_c) .
The Δ E term leads to the convolution between γ_c/r in (<ref>) and a new Fourier integral,
γ_c - γ_c^3/r = - 4π∫_q⃗2(u⃗_c·q⃗)^2 /(q⃗)^4_c e^iq⃗·r⃗ ,
which contributes to H^[2]_4 an extra term proportional to (γ_c^2 - γ_c^4).
After fixing the constant of proportionality, we
find that the final result for H^[2]_4 is
H^[2]_4 = γ_c^2 H^[2]_4,b + 2(γ_c^2 - γ_c^4) H^[2]_4,b = (3γ_c^2 - 2γ_c^4) H^[2]_4,b .
We refer the readers to appendix <ref> for further discussions and computations.
§ BOOST
We proceed to construct the boost generator compatible with the Hamiltonian obtained in the previous section.
Before we begin, we make some technical notes.
It is useful to introduce a notation to distinguish the two parts of
H^[2]_4:
H^[2]_4 = H^[2]_4α + H^[2]_4β = (3γ_c^2 - 2 γ_c^4) H^[2]_4,b .
The relations between H^[2]_2,3,4 and (H^[1])^2, which generalize (<ref>) to include the dressing factors, will play a crucial role:
H^[2]_2 = E/m_1 m_24γ/2γ^2-1 ( H^[1])^2
,
H^[2]_3 = -E/2E_1E_2 (H^[1])^2 ,
H^[2]_4α = 3/2E (H^[1])^2 ,
H^[2]_4β = -γ_c^2/E (H^[1])^2 .
It is convenient to take components with respect to u⃗_c and u⃗_- in the u⃗-space,
u⃗_c = z_1 u⃗_1 + z_2 u⃗_2 ,
u⃗_- = u⃗_1 - u⃗_2 ,
u⃗_1 = u⃗_c + z_2 u⃗_- ,
u⃗_2 = u⃗_c - z_1 u⃗_- .
Poincaré algebra up to 1PM
For a 2-body dynamics without spin, the translation and rotation generators are simply
P⃗ = p⃗_1 + p⃗_2 ,
J⃗ = x⃗_1 ×p⃗_1 + x⃗_2 ×p⃗_2 ,
{ x_i, p_j } = δ_ij .
The complete Poincaré algebra reads (K⃗ = G⃗ - t P⃗)
{P_i, P_j}= 0 , {P_i, H}=0 , {J_i, H}=0 ,
{J_i, J_j}= ϵ_i j k J_k , {J_i, P_j}=ϵ_i j k P_k , {J_i, G_j}=ϵ_i j k G_k ,
{G_i, P_j}=δ_i j H , {G_i, H}=P_i , {G_i, G_j}=-ϵ_i j k J_k .
The non-trivial conditions are all in (<ref>),
which we call “H/P/J-conditions", respectively.
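The free-particle part of this algebra is easy to verify symbolically. The following sympy sketch (an illustration added here, not taken from the references) checks a representative subset of the brackets for H^[0], P⃗, J⃗, and G⃗^[0]; each printed expression should reduce to zero.

import sympy as sp

# Symbolic check of the 0PM Poincare algebra under {x_i, p_j} = delta_ij.
x1 = sp.Matrix(sp.symbols('x1_1 x1_2 x1_3', real=True))
x2 = sp.Matrix(sp.symbols('x2_1 x2_2 x2_3', real=True))
p1 = sp.Matrix(sp.symbols('p1_1 p1_2 p1_3', real=True))
p2 = sp.Matrix(sp.symbols('p2_1 p2_2 p2_3', real=True))
m1, m2 = sp.symbols('m1 m2', positive=True)

E1 = sp.sqrt(p1.dot(p1) + m1**2)
E2 = sp.sqrt(p2.dot(p2) + m2**2)
H0 = E1 + E2
P = p1 + p2
J = x1.cross(p1) + x2.cross(p2)
G0 = E1 * x1 + E2 * x2

xs, ps = list(x1) + list(x2), list(p1) + list(p2)

def pb(A, B):  # canonical Poisson bracket
    return sum(sp.diff(A, x) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, x)
               for x, p in zip(xs, ps))

print(sp.simplify(pb(G0[0], H0) - P[0]))      # {G_1, H0} = P_1
print(sp.simplify(pb(G0[0], P[0]) - H0))      # {G_1, P_1} = H0
print(sp.simplify(pb(G0[0], P[1])))           # {G_1, P_2} = 0
print(sp.simplify(pb(G0[0], G0[1]) + J[2]))   # {G_1, G_2} = -J_3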
In <cit.>, it was shown that at 0PM and 1PM,
G⃗^[0] = H^[0]X⃗^[0] , X⃗^[0] = z_1 x⃗_1 + z_2 x⃗_2 ,
G⃗^[1] = H^[1]X⃗^[1] , X⃗^[1] = z_2 x⃗_1 + z_1 x⃗_2 .
and it was conjectured that a similar pattern will persist at higher orders,
G⃗^[n] = H^[n]X⃗^[n] ,
X⃗^[n] = α^[n]_1 x⃗_1 + α^[n]_2 x⃗_2 ,
α^[n]_1 + α^[n]_2 = 1 .
The restriction on the sum of the two coefficients is sufficient to satisfy the H-condition.
We find in this paper that the ansatz (<ref>) is too restrictive and should be modified to allow for more possibilities.
With hindsight,
we write our extended ansatz as
G⃗^[2] = H^[2]X⃗^[1] + Q r⃗ .
The H-condition still holds if Q is translation invariant.
The appearance of X⃗^[1] in the ansatz for G⃗^[2] may look unnatural.
But, since the difference between any two vectors X⃗, X⃗' of the form in (<ref>)
is proportional to r⃗, we can choose any such X⃗ as a reference and absorb the difference
in the definition of Q. To resolve this ambiguity in splitting between X⃗ and Qr⃗,
we demand that Q takes the simplest form possible.
As we will show shortly, taking X⃗^[1] as a reference for G⃗^[2] coincides with the minimal choice for Q.
Before solving the 2PM P/J-conditions, we recall some of the main features of the 1PM computation.
In <cit.>, the P-condition is reorganized as
(X⃗^[0]-X⃗^[1] ) {H^[0], H^[1]} +H^[0]{X⃗^[0], H^[1]}+H^[1]{X⃗^[1], H^[0]}=0 .
The resulting vector can be decomposed into the “basis" (r⃗, u⃗_c ,u⃗_-). Each component should vanish.
As for the J-condition, the use of the P-condition to reorganize the J-condition
simplifies the computation,
{G⃗^[0] , G⃗^[1]}_× = H^[1][ {H^[0], X⃗^[1]}×X⃗^[1] + {G⃗^[0] , X⃗^[1]}_×] ,
{A⃗ , B⃗}_×|_i ≡ϵ_ijk{ A_j , B_k } .
P-condition
Plugging the ansatz (<ref>) into the P-condition,
𝒫⃗≡{G⃗^[0] , H^[2]} + {G⃗^[1] , H^[1]} + {G⃗^[2] , H^[0]} = 0 ,
and rearranging some terms using (<ref>), we get
𝒫⃗ = { H^[0] , H^[2]} (X⃗^[0] - X⃗^[1]) + H^[0]{X⃗^[0] , H^[2]} + {G⃗^[1] , H^[1]}
+ H^[2]{X⃗^[1] , H^[0]} + { Q , H^[0]}r⃗ + Q {r⃗ , H^[0]}
To decompose the vector 𝒫⃗ into components along u⃗_c, u⃗_- and n̂, we recall a few facts,
X⃗^[0] - X⃗^[1] = z_12r⃗ ,
{X⃗^[1] , H^[0]}
= u⃗_c - z_12u⃗_- ,
{r⃗ , H^[0]} = u⃗_- .
We also import a few less obvious facts
from appendix <ref>.
First, we rewrite the bracket of the 1PM generators in a suggestive form.
{G⃗^[1] , H^[1]} = A_c u⃗_c + A_- u⃗_- + A_n n̂ ,
A_c = - E/E_1 E_2 (1-3ξ) (H^[1])^2 = 2 H^[2]_3 + 2H^[2]_4α ,
A_- = z_12[- (E/m_1m_2)4γ/2 γ^2-1 + E/E_1E_2 + γ_c^2 -2/E] (H^[1])^2
= z_12( - H^[2]_2 -2 H^[2]_3 -4/3 H^[2]_4α - H^[2]_4β) ,
A_n = -(2γ_c^2 (n̂·u⃗_c ) - z_12( γ_c^2(u⃗_-·u⃗_c)_⊥n̂·u⃗_c + n̂·u⃗_- ) ) (H^[1])^2/E
= z_12/3{ H^[0] , H^[2]_4α} +2 (n̂·u⃗_c)H^[2]_4β .
We can write the second term of (<ref>) in a similar way,
H^[0]{X⃗^[0] , H^[2]} = B_c u⃗_c + B_- u⃗_- + B_n n̂ ,
B_c = -H^[2]_1 - H^[2]_2- 3H^[2]_3- 3H^[2]_4α - H^[2]_4β ,
B_- = z_12( H^[2]_1 + 2 H^[2]_2 + 3H^[2]_3+2 H^[2]_4α +2 H^[2]_4β) .
Instead of computing B_n n̂ explicitly, we cancel a large part of it against a neighboring term using the identity,
r { H^[0] , H^[2]} z_12 + B_n = - (u⃗_c ·∇⃗_r) H^[2] + (D⃗_p H^[2] )_n .
Here, (D⃗_p F)_n is defined by the decomposition,
(E_1 ∇⃗_p_1 + E_2 ∇⃗_p_2) F
= (D⃗_p F)_c u⃗_c + (D⃗_p F)_- u⃗_- + (D⃗_p F)_n n̂ .
Remarkably, for F ∝γ_c^2/r^2, the RHS of (<ref>) vanishes, so H^[2]_1,2,3 and
H^[2]_4α all drop out.
The contribution from H^[2]_4β is
- (u⃗_c ·∇⃗_r) H^[2]_4β + (D⃗_p H^[2]_4β )_n = -2(n̂·u⃗_c) H^[2]_4β ,
which cancels against a similar term from (<ref>).
In summary, the P-condition boils down to
𝒫⃗ = P_c u⃗_c + P_- u⃗_- + P_n n̂ ,
P_c = 0 ,
P_- = -1/3 z_12 H^[2]_4α + Q ,
P_n = z_12/3{ H^[0] , H^[2]_4α}- { H^[0] , Q } .
Clearly, the unique solution to the P-condition is
Q = z_12/3 H^[2]_4α = z_12γ_c^2 H_4,b = z_12(H^[1])^2/2E .
J-condition
While we solve the J-condition,
𝒥⃗ ≡{G⃗^[2] , G⃗^[0]}_× + 1/2{G⃗^[1] , G⃗^[1]}_× = 0 ,
we keep track of the two terms in our main ansatz (G⃗^[2] = H^[2]X⃗^[1] + Q r⃗) separately.
For the first term, we note that
{G⃗^[0], H^[2]X⃗^[1]}_×
= {G⃗^[0], H^[2]}×X⃗^[1] + H^[2]{G⃗^[0], X⃗^[1]}_×
= - [ {G⃗^[1], H^[1]} + {G⃗^[2], H^[0]}]×X⃗^[1]
- H^[2]{ H^[0], X⃗^[1]}×X⃗^[1]
= - H^[1]{X⃗^[1] , H^[1]}×X⃗^[1]
- { Q r⃗ , H^[0]}×X⃗^[1] .
To reach the second line, we used the 2PM P-condition as well as the 1PM J-condition (<ref>). In the last step, we used (<ref>) once again and also used X⃗^[1]×X⃗^[1] = 0.
The sum of all Q-independent terms gives
- H^[1]{X⃗^[1] , H^[1]}×X⃗^[1] + 1/2{G⃗^[1], G⃗^[1]}_×
= 1/2 (H^[1])^2 {X⃗^[1], X⃗^[1]}_× .
The sum of all Q-dependent terms gives
{G⃗^[0], Q r⃗}_×- { Q r⃗ , H^[0]}×X⃗^[1] = z_12 Q {r⃗ , H^[0]}×r⃗ + H^[0]{ Qr⃗ , X⃗^[0]}_× .
The full J-condition is then
𝒥⃗ = 1/2 (H^[1])^2 {X⃗^[1], X⃗^[1]}_× + z_12 Q (u⃗_- ×r⃗) + H^[0]{ Qr⃗ , X⃗^[0]}_×
≡𝒥⃗_A + 𝒥⃗_B + 𝒥⃗_C .
Using the fact that
{X⃗^[1] , X⃗^[1]}_× = -2/E (z_2^2 u⃗_1 - z_1^2 u⃗_2) ×r⃗
= 2/E(z_12u⃗_c - (1-3ξ) u⃗_- ) ×r⃗ ,
and the last equality in (<ref>), we get
𝒥⃗_A = 2Q (u⃗_c - 1/z_12 (1-3ξ) u⃗_- ) ×r⃗ .
It remains to compute
𝒥⃗_C = H^[0]{X⃗^[0] , Qr⃗}_×
= H^[0]{X⃗^[0] , Q}×r⃗ + Q H^[0]{X⃗^[0] , r⃗}_× .
Further computations show
H^[0]{X⃗^[0] , r⃗}_× = (u⃗_c -z_12u⃗_-) ×r⃗ ,
H^[0]{X⃗^[0] , Q}_u = (-3u⃗_c + 2( 1- ξ)/z_12) Q .
Combining everything, and checking the coefficients of (u⃗_c ×r⃗) and (u⃗_- ×r⃗) separately, we confirm that the J-condition, 𝒥⃗ = 0, holds.
§ DISCUSSION
In the spinless point particle limit, we succeeded in constructing the 2PM Hamiltonian in an arbitrary lab frame and the 2PM boost generator, thereby explicitly verifying the global Poincaré algebra up to the 2PM order. We did not attempt to compare our result with the overlapping result in the PN expansion in <cit.> and many subsequent works. Since the PM computation is exact in velocities, the expressions for the PM generators tend to be more compact than their PN counterparts.
One obvious extension of this work would be to proceed to the 3PM order.
At 2PM, we had to separately keep track of the four terms comprising the Hamiltonian. For all but one term,
the interplay between the 2PM Hamiltonian and the iteration of the 1PM Hamiltonian played a vital role.
A cursory look at the 3PM COM Hamiltonian <cit.> shows that the number of terms is at least doubled and that the iteration includes many terms such as c_1^3, c_1(c_1')^2, c_1^2 c_1”, c_2c_1, c_2c_1' and so on. Here, c_1 and c_2 are the numerator factors of H^[1] and H^[2]_1, respectively.
The appearance of a second order derivative as in c_1” would require a more precise prescription for the interpolation during the off-shell extension. At 2PM, we needed at most c_1' and
the difference between different prescriptions was immaterial.
Another important extension would be to study N-body dynamics.
Take a 3-body Hamiltonian for an example.
As explained in <cit.>, while the 1PM Hamiltonian is merely the sum of pairwise interactions, the 2PM Hamiltonian includes a genuine 3-body interaction term. How this term affects the global Poincaré algebra is an interesting question.
Even for a binary system at 2PM, including the spin effects is a challenging open problem. The 2PM amplitudes to all orders in spin were recently proposed in <cit.>. But, extra steps are needed to map the amplitudes to the Hamiltonian. In particular, the Thomas-Wigner rotation factor among an incoming momentum, an outgoing momentum and the common reference frame for the binary should be included. At 1PM, the complete form of the rotation factor was first given in the COM frame in <cit.> and was generalized to lab frames in <cit.>.
It would be interesting to see how the attempt to construct the boost generator may constrain the form of the rotation factor at 2PM.
The work of HL and SL is supported by the National Research Foundation of Korea grant NRF-2019R1A2C2084608.
The work of KL is supported by an appointment to the JRG
Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government.
KL is also supported by the National Research Foundation
of Korea grant funded by the Korean government(MSIT) No.RS-2023-00249451 and the Korean Local Governments of Gyeongsangbuk-do Province and Pohang City.
We are grateful to Sungjay Lee for discussions.
HL and SL thank the Asia Pacific Center for Theoretical Physics for hospitality where a part of this work was done.
HL and KL thank Korea Institute for Advanced Study for hospitality where a part of this work was done.
§ MORE ON THE HAMILTONIAN
In this appendix, we review the Born subtraction contributing to the 2PM Hamiltonian in the COM frame,
and see what new features arise as we move to a lab frame.
COM frame
We follow <cit.>.
An intermediate step involves the integral,
ℐ_B =
∫_k⃗c_1^2(p^2+k^2/2)/(E_p-E_k+i ϵ)(p⃗-k⃗)^2 (p⃗'-k⃗)^2 .
A key idea in the next step is that the super-classical and classical terms all come from near the pole at E_p - E_k = 0.
For the time being, let us focus on the factor
ℱ = c_1^2(p^2+k^2/2)/E_p-E_k+i ϵ ,
c_1(p^2) ∝2γ^2-1/E_1 E_2 .
The references suggest that we use k^2 as an independent variable and expand both the denominator and the numerator,
E_p - E_k = . d E/d (k^2)|_k^2 = p^2 (p^2 -k^2) - 1/2. d^2 E/d (k^2)^2|_k^2 = p^2 (p^2 -k^2)^2 + ⋯ ,
c_1^2(p^2+k^2/2) = c_1^2(p^2) + (k^2-p^2) c_1(p^2) . d c_1(k^2)/dk^2|_k^2 = p^2 + ⋯ .
The super-classical term arises from the leading 𝒪(ħ^-1) term,
ℱ_-1 = ( 1/E') c_1(p^2)^2/p^2 - k^2 + iϵ ,
f' ≡. df/d(k^2)|_k^2 =p^2 .
More important to us is the classical term, which comes from the next-to-leading terms.
ℱ_0 = - c_1 c_1'/E' + E”/2(E')^2 (c_1)^2 .
Using the facts,
E' = 1/2( 1/E_1 + 1/E_2) = 1/2Eξ ,
E” = -E^3/4E_1^3 E_2^3 (1 - 3ξ) ,
c_1' = 4 γγ'/E_1 E_2 - c_1 E^2/2E_1^2 E_2^2 (1-2ξ) , γ' = 1/2 m_1m_2 ξ ,
we obtain
- c_1 c_1'/E' = E/E_1E_2( -4 γ c_1 /m_1 m_2 + (1-2ξ) (c_1)^2 ) ,
E”/2(E')^2 (c_1)^2 = - E/2E_1E_2 (1-3ξ)(c_1)^2 ,
and finally,
ℱ_0 = E/2E_1E_2[ -8 γ c_1/m_1m_2 + (1-ξ)(c_1)^2 ] .
The γ c_1, (c_1)^2, ξ(c_1)^2 terms correspond to H^[2]_2,b, H^[2]_3,b, H^[2]_4,b in (<ref>), respectively.
Lab frame
In the COM frame, p⃗_1 = p⃗ = - p⃗_2 implies that E_1 and E_2 depend only on p⃗^2. The level surface of E = E_1 +E_2 is a sphere. In the lab frame, the situation is more complicated.
Let us first examine the Born denominator (E_p-E_k)^-1.
As we vary p⃗_1 and p⃗_2 while keeping P⃗ = p⃗_1 + p⃗_2 fixed,
the condition for the total energy staying constant is
√(p⃗_1^2 +m_1^2) + √(p⃗_2^2 +m_2^2)
= √((p⃗_1-l⃗)^2 +m_1^2) + √((p⃗_2+l⃗)^2 +m_2^2) .
Squaring both sides and simplifying a bit, we find an equation for an ellipsoid,
f_p⃗(l⃗) ≡l⃗^2 - (u⃗_c·l⃗)^2 -2E_1 E_2/E ( u⃗_- ·l⃗ )= 0 .
This expression will be useful in extracting the super-classical and classical terms from the Born subtraction in the lab frame.
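A quick numerical check (with placeholder masses and momenta and an arbitrary direction in l⃗-space chosen by us) confirms that a nonzero l⃗ lying on the ellipsoid f_p⃗(l⃗)=0 indeed conserves the total energy at fixed total momentum.

import numpy as np

# Pick the nontrivial root of f(t*d) = 0 along a direction d, then verify
# that the shifted momenta (p1 - l, p2 + l) have the same total energy.
m1, m2 = 1.0, 2.3
p1, p2 = np.array([0.7, -0.2, 0.1]), np.array([-0.1, 0.4, 0.5])
E1, E2 = np.sqrt(p1 @ p1 + m1**2), np.sqrt(p2 @ p2 + m2**2)
E = E1 + E2
u_c = (p1 + p2) / E
u_minus = p1 / E1 - p2 / E2

d = np.array([0.6, -0.3, -0.1])                      # arbitrary direction in l-space
t = 2 * E1 * E2 / E * (u_minus @ d) / (d @ d - (u_c @ d)**2)
l = t * d

E1k = np.sqrt((p1 - l) @ (p1 - l) + m1**2)
E2k = np.sqrt((p2 + l) @ (p2 + l) + m2**2)
print(np.isclose(E1k + E2k, E))                      # True: energy conserved on the surface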
Expanding the Born denominator in l⃗ = p⃗ - k⃗, we find
E_p - E_k
= -1/2Eξ f(l⃗) ( 1 - z_12/E ξ (u⃗_c·l⃗) ) + (1/2Eξ)^3 (1 -3 ξ ) f(l⃗)^2 + 𝒪(l^3) .
The numerator c_1 consists of two distinct factors, (2γ^2-1) and 1/(E_1E_2). For the first factor, we note that the following relation holds in any frame:
E^2 - P⃗^2 = m_1^2 + m_2^2 + 2m_1 m_2 γ .
If we vary E while keeping P⃗ fixed, we find
E dE = m_1 m_2 dγ ⟹ dγ/dE = E/m_1 m_2 .
When restricted to the COM frame, it agrees with γ'/E' in (<ref>), but we stress that (<ref>) is valid in any frame.
This simple argument proves that H^[2]_2 receives nothing more than the simple dressing by γ_c^2.
The 1/(E_1E_2) factor in c_1 deserves more attention.
To distinguish the variance through the change in the total energy E from the variance independent of the change in E,
we propose a double-expansion in f(l⃗) and (u⃗_c·l⃗):
E_1E_2/E_1(p⃗_1 - l⃗)E_2(p⃗_2 + l⃗)
= 1 - z_12/Eξ ( u⃗_c·l⃗) + (1-3ξ)/(Eξ)^2 ( u⃗_c·l⃗)^2
- 1/2(Eξ)^2f(l⃗)[ (1-2ξ) - (3-4ξ) z_12/Eξ (u⃗_c ·l⃗) ]
+ 1/8(Eξ)^4f(l⃗)^2 (3-12ξ+8ξ^2) + 𝒪(l^3) .
At each order in f(l⃗), the leading coefficient independent of (u⃗_c·l⃗) should agree with that computed in the COM frame.
In (<ref>) and (<ref>), we observed factors depending on (u⃗_c·l⃗), possibly
signaling a new feature as we move away from the COM frame.
However, since l⃗ scales as 𝒪(ħ) in the classical limit, most of them become irrelevant.
The only possible exception is the one contributing to the super-classical term (<ref>).
Fortunately, the leading (u⃗_c·l⃗) corrections to (<ref>) and (<ref>) cancel out precisely,
so even the super-classical term remains unaffected.
Box diagram
According to <cit.>, in D=4-2ϵ dimensions, the sum of the box and the crossed-box diagrams gives (neglecting quantum corrections)
ℐ_⊠ = 1/|q|^2+2ϵ(4π)^ϵ/32m_1 m_2( -Γ(1+ϵ) Γ(-ϵ)^2/πΓ(-2 ϵ) √(γ^2-1) + i(m_1+m_2)|q|/m_1 m_2(γ^2-1)Γ(1/2-ϵ)^2 Γ(1/2+ϵ)/π^3/2Γ(-2 ϵ)) .
The first term is the infrared-divergent super-classical term and the second term is the finite classical term.
A key feature of (<ref>) is Lorentz invariance. In the lab frame, the |q| in (<ref>) should be understood as
|q|^2 = (q⃗)^2_c = q⃗^2 - (u⃗_c·q⃗)^2.
Another important fact is that in four space-time dimensions (ϵ = 0), the classical term vanishes:
Γ(1/2-ϵ)^2 Γ(1/2+ϵ)/π^3/2Γ(-2 ϵ)
= 0 - 2ϵ + 𝒪(ϵ^2) .
In summary, (<ref>) is equally valid in any reference frame.
Nevertheless, we find it instructive to compute the super-classical term in a seemingly non-relativistic way.
The relevant loop integral is the scalar box integral,
ℐ_□ = ∫d^D ℓ/(2π)^D1/((p_1 -ℓ)^2 + m_1^2 -iε)((p_2 +ℓ)^2 +m_2^2 -iε) ℓ^2 (q -ℓ)^2
= ∫d^D ℓ/(2π)^D1/(ℓ^2 -2p_1·ℓ -iε)(ℓ^2 + 2p_2 ·ℓ -iε) ℓ^2 (q -ℓ)^2 .
We work in the (-+++) metric signature. Evaluating this integral, even approximately, is a great challenge, but it is relatively easy to isolate the leading super-classical term.
One simply replaces the two massive propagators by delta functions in view of the relation
1/x-iε = p.v.(1/x) + π i δ(x) .
Then the integral becomes
ℐ_□ ≈π i ∫d^D ℓ/(2π)^Dδ (ℓ^2 -2p_1·ℓ) δ(ℓ^2 + 2p_2 ·ℓ)/ℓ^2 (q -ℓ)^2
= π i/2∫d^D ℓ/(2π)^Dδ ((p_1+p_2)·ℓ) δ(ℓ^2 -(p_1-p_2) ·ℓ)/ℓ^2 (q -ℓ)^2
= i/4E∫. d^d l⃗/(2π)^dδ(ℓ^2 -(p_1-p_2) ·ℓ)/ℓ^2 (q -ℓ)^2|_ℓ^0 = u⃗_c·l⃗ .
One of the delta functions led to the replacement ℓ^0 = u⃗_c·l⃗, which in turn implies
ℓ^2 → (l⃗^2)_c ,
(q- ℓ)^2 → (q⃗-l⃗)^2_c ,
ℓ^2 -(p_1-p_2) ·ℓ→ f_p⃗(l⃗) .
Hence the super-classical term from the box integral takes the same form as the corresponding term
in the Born integral (<ref>) not only in the COM frame but in any lab frame.
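The delta-function replacement used above rests on the distributional identity 1/(x-iε) = p.v.(1/x) + π i δ(x). As a standalone numerical illustration (the Gaussian test function and the value of ε below are our own choices), the following sketch checks that the regulated integral of f(x)/(x-iε) approaches iπ f(0); the principal-value part vanishes here by parity.

import numpy as np
from scipy.integrate import quad

eps = 1e-3
f = lambda x: np.exp(-x**2)
# 1/(x - i*eps) = (x + i*eps)/(x^2 + eps^2): split into real and imaginary parts.
re = quad(lambda x: x * f(x) / (x**2 + eps**2), -50, 50, points=[0], limit=400)[0]
im = quad(lambda x: eps * f(x) / (x**2 + eps**2), -50, 50, points=[0], limit=400)[0]
print(re, im, np.pi)   # re ~ 0 (principal value), im ~ pi * f(0)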
Fourier transform
We use the shorthand notations,
∫_r⃗≡∫ d^3r⃗ ,
∫_q⃗≡∫d^3q⃗/(2π)^3 .
Before deformation from the sphere to the ellipsoid in the
q-space, the Fourier transform of the (1/r) potential is well-known:
1/r = 4π∫_q⃗e^iq⃗·r⃗/q^2 ⟺ 4π/q^2 = ∫_r⃗e^-iq⃗·r⃗/r .
The transform of the (1/r^2) potential comes for free. We simply switch the labels between r⃗ and q⃗ and adjust
the factors of (2π). The result is
1/r^2 = 2π^2 ∫_q⃗e^iq⃗·r⃗/q ⟺ 2π^2/q = ∫_r⃗e^-iq⃗·r⃗/r^2 .
Alternatively, we can approach the (1/r^2) potential via a convolution,
1/r^2 = (4π)^2 ∫_q⃗∫_l⃗e^iq⃗·r⃗/l⃗^2 (q⃗-l⃗)^2 .
The two approaches agree through the relation
∫_l⃗1/l⃗^2 (q⃗-l⃗)^2 = 1/8q .
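This convolution identity can be spot-checked numerically. In the sketch below (our own verification, with an arbitrary value of q), the angular integral is carried out analytically first, leaving a one-dimensional radial integral that is evaluated with scipy and compared against 1/(8q).

import numpy as np
from scipy.integrate import quad

# int d^3l/(2pi)^3 1/(l^2 (q-l)^2) = 1/(8|q|); the cos(theta) integral gives
# (1/(q l)) log((l+q)/|l-q|), so only the radial integral remains.
q = 1.7
integrand = lambda l: np.log((l + q) / abs(l - q)) / l / (4 * np.pi**2 * q)
val = (quad(integrand, 0, 2 * q, points=[q], limit=400)[0]
       + quad(integrand, 2 * q, np.inf, limit=400)[0])
print(val, 1 / (8 * q))   # both ~ 0.0735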
The ellipsoid deformation of the lab frame induces an effective metric in the q-space:
(g_c)_ij = δ_ij - (u_c)_i (u_c)_j
⟺
(g_c)^ij = δ^ij + (u_c)^i (u_c)^j/1-u⃗_c^2 .
The deformed Fourier transforms can be performed with the help of orthonormal frames associated with the metric.
The results for the (1/r) and (1/r^2) potentials are
γ_c/r = 4π∫_q⃗e^iq⃗·r⃗/(q⃗^2)_c ,
γ_c^2/r^2 = 2π^2 γ_o ∫_q⃗e^iq⃗·r⃗/(q⃗^2)_c^1/2 .
The convolution argument is deformed accordingly,
∫_l⃗1/(l⃗^2)_c (q⃗-l⃗)^2_c = γ_o/8 (q⃗^2)_c^1/2 .
Finally, to compute the extra contribution to H^[2]_4, we also need
γ_c - γ_c^3/r = - 4π∫_q⃗2(u⃗_c·q⃗)^2 /(q⃗^2)_c^4 e^iq⃗·r⃗ .
Again, one can verify it using the orthonormal frames and direct integration.
§ MORE ON THE BOOST GENERATOR
Recall that the P-condition is
{G⃗^[2] , H^[0]} + {G⃗^[1] , H^[1]} + {G⃗^[0] , H^[2]} = 0 .
Each term will produce terms proportional to u⃗_1,2 or x⃗_1,2.
(1-1) part
This part does not rely on the 2PM ansatz:
{G⃗^[1] , H^[1]} = { H^[1]X⃗^[1] , H^[1]}
= H^[1]{X⃗^[1] , H^[1]} = 1/2{X⃗^[1] , (H^[1])^2 } .
It is useful to separate the γ_c factor,
H^[1] = γ_c H^[1]_b .
The bare part is easy to compute:
{X⃗^[1] , H^[1]_b} =
[ -E/E_1E_2 (z_2^2u⃗_1 + z_1^2 u⃗_2) - 4γ/2γ^2-1Ez_12/m_1m_2u⃗_-
- n̂/E (w⃗·n̂) ] H^[1]_b ,
w⃗ ≡ z_2 u⃗_1 + z_1 u⃗_2 = {X⃗^[1] , H^[0]} = E {r⃗ , z_1 } .
The γ_c factor requires more work. The result is
{X⃗^[1], γ_c} = -γ_c^3/E( n̂ (n̂·u⃗_c) (1 + ( u⃗_c ·w⃗)_⊥) + (u⃗_c^2)_⊥ w⃗ - u⃗_c ) ,
(a⃗·b⃗)_⊥ ≡a⃗·b⃗ - (a⃗·n̂)(n̂·b⃗) .
Combining everything and simplifying a bit, we find
{X⃗^[1] , H^[1]}
= z_12u⃗_- (γ_c^2 /E- (E/m_1m_2)4γ/2 γ^2-1) H^[1]
-(z_2^2/E_1u⃗_1+z_1^2/E_2u⃗_2)H^[1]
- n̂(γ_c^2 (n̂·u⃗_c ) (1 + ( u⃗_c ·w⃗)_⊥) + n̂·w⃗) H^[1]/E .
Switching to the u⃗_c, u⃗_- basis, we obtain
{X⃗^[1] , H^[1]}
= z_12u⃗_- [γ_c^2 /E- (E/m_1m_2)4γ/2 γ^2-1 + E/E_1E_2(1-2ξ) ] H^[1]
- u⃗_c E/E_1E_2(1-3ξ) H^[1]
- n̂(2γ_c^2 (n̂·u⃗_c ) - z_12( γ_c^2(u⃗_-·u⃗_c)_⊥n̂·u⃗_c + n̂·u⃗_- ) ) H^[1]/E .
(0-2) part
When we compute H^[0]{X⃗^[0],H^[2]},
the terms proportional to u⃗_1,2 come from
H^[0]{X⃗^[0],H^[2]}_u = . D⃗_p H^[2]|_u ≡. ( E_1 ∇⃗_p_1 + E_2 ∇⃗_p_2 ) H^[2]|_u .
The four pieces of H^[2] depend on p⃗_1,2 either through
powers of γ_c and γ_o or through F_1,2,3,4.
The γ factor does not contribute since {G⃗^[0] , γ} = 0
as discussed in <cit.>.
Applying chain rules and projecting onto u⃗_c and u⃗_-,
we find an appealing intermediate expression:
H^[0]{X⃗^[0],H^[2]}_u =
u⃗_c (γ_c ∂/∂γ_c +γ_o ∂/∂γ_o + E_1 ∂/∂ E_1 + E_2 ∂/∂ E_2) H^[2]
+ u⃗_- ( E_1 E_2/E) ( ∂/∂ E_1 - ∂/∂ E_2) H^[2] .
To compute the x⃗ terms from H^[0]{X⃗^[0],H^[2]}, we derive an identity that holds
for any function of p⃗_1,2 and r⃗:
H^[0]{X⃗^[0] , F }_x = E { z_1 , F }x⃗_1 + E { z_2 , F }x⃗_2 + (D⃗_p F)_x
= E { z_1 , F }r⃗ + (D⃗_p F)_x
= - [ E (∇⃗_p_1 z_1 - ∇⃗_p_2 z_1) ·∇⃗_r F ] r⃗ + (D⃗_p F)_x
= - [ (z_2 u⃗_1 + z_1 u⃗_2) ·∇⃗_r F ]r⃗ + (D⃗_p F)_x
= - [ (u⃗_c ·∇⃗_r) F - z_12 (u⃗_- ·∇⃗_r) F ] r⃗ + (D⃗_p F)_x
= - [ (u⃗_c ·∇⃗_r) F ] r⃗ - z_12{H^[0] , F }r⃗ + (D⃗_p F)_x .
The Poisson bracket with H^[0] acts as a derivative in the r⃗-space:
{ H^[0], Z } = - u⃗_- ·∇⃗_r Z ≡ - D_- Z .
As such, it does not affect functions of p⃗_1,2 only.
{ H^[0], γ_c^2 } = 2γ_c^4/r (n̂·u⃗_c) (u⃗_- ·u⃗_c)_⊥ ,
{ H^[0], r^-2} = 2n̂·u⃗_-/r^3 .
Since H^[2]_1,2,3,4α are all proportional to γ_c^2/r^2, we have
{ H^[0], H^[2]_i} = 2 H^[2]_i/r(γ_c^2 (u⃗_-·u⃗_c)_⊥ (n̂·u⃗_c) +n̂·u⃗_-)
(i=1,2,3,4α) .
As for H^[2]_4β, we find
{ H^[0], H^[2]_4β} = 2 H^[2]_4β/r(2γ_c^2 (u⃗_-·u⃗_c)_⊥ (n̂·u⃗_c) +n̂·u⃗_-) .
JHEP
|
http://arxiv.org/abs/2307.04002v1 | 20230708160353 | Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems | [
"Jiaqi Zou",
"Songlin Sun",
"Christos Masouros",
"Yuanhao Cui",
"Yafeng Liu",
"Derrick Wing Kwan Ng"
] | eess.SP | [
"eess.SP"
] |
Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems
Jiaqi Zou, Graduate Student Member, IEEE, Songlin Sun, Senior Member, IEEE, Christos Masouros, Senior Member, IEEE, Yuanhao Cui, Member, IEEE,
Ya-Feng Liu, Senior Member, IEEE, and Derrick Wing Kwan Ng, Fellow, IEEE
Part of this work has been submitted to the IEEE Global Communications Conference (GLOBECOM 2023) for possible presentation <cit.>.
Jiaqi Zou is with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China, and also with the Department of Electrical and Electronic Engineering, University College London, London WC1E 7JE, UK (e-mail: [email protected]).
Songlin Sun and Yuanhao Cui are with Beijing University of Posts and Telecommunications (BUPT), Beijing, China (e-mail: [email protected], [email protected]).
Christos Masouros is with the Department of Electrical and Electronic Engineering, University College London, WC1E 7JE, UK (e-mail: [email protected]).
Ya-Feng Liu is with the State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected])
Derrick Wing Kwan Ng is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: [email protected]).
August 12, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, we investigate the design of energy-efficient beamforming for an ISAC system, where the transmitted waveform is optimized for joint multi-user communication and target estimation simultaneously.
We aim to maximize the system energy efficiency (EE), taking into account the constraints of a maximum transmit power budget, a minimum required signal-to-interference-plus-noise ratio (SINR) for communication, and a maximum tolerable Cramér-Rao bound (CRB) for target estimation.
We first consider communication-centric EE maximization.
To handle the non-convex fractional objective function, we propose an iterative quadratic-transform-Dinkelbach method, where Schur complement and semi-definite relaxation (SDR) techniques are leveraged to solve the subproblem in each iteration.
For the scenarios where sensing is critical, we propose a novel performance metric for characterizing the sensing-centric EE and optimize the metric adopted in the scenario of sensing a point-like target and an extended target.
To handle the nonconvexity, we employ the successive convex approximation (SCA) technique to develop an efficient algorithm for approximating the nonconvex problem as a sequence of convex ones.
Furthermore, we adopt a Pareto optimization mechanism to articulate the tradeoff between the communication-centric EE and sensing-centric EE. We formulate the search of the Pareto boundary as a constrained optimization problem and propose a computationally efficient algorithm to handle it.
Numerical results validate the effectiveness of our proposed algorithms compared with the baseline schemes, and the obtained approximate Pareto boundary shows that there is a non-trivial tradeoff between communication-centric EE and sensing-centric EE, where the number of communication users and the EE requirements have significant effects on the achievable tradeoff.
Integrated sensing and communication (ISAC), energy efficiency, fractional programming.
§ INTRODUCTION
Integrated sensing and communications (ISAC) is anticipated to be a viable enabling technology for unlocking the potential of next-generation wireless networks, as the two kinds of systems tend to share common devices, signal processing techniques, and even hardware circuitry. Rather than the conventional parallel development of the two systems, joint designs advocating their coexistence and cooperation have attracted extensive research interest in recent years. For instance, the coexistence of communication and radar systems focuses on spectrum sharing or physical integration design, which mainly aims to mitigate the mutual interference and efficiently manage the limited wireless resources <cit.>. Indeed, since communication and radar systems may transmit independent signals superimposed in the time/frequency domains, their mutual interference should be minimized to facilitate their individual functionalities. In such cases, numerous approaches have been proposed, such as cooperative spectrum sharing <cit.> and beamforming design <cit.>. Nevertheless, the inevitable mutual interference still limits the achievable spectral efficiency.
Meanwhile, compared with the coexistence design approaches that generate communication and sensing signals separately, ISAC employs a common transmitted signal for realizing communication and sensing simultaneously. In such a case, the crux of ISAC is how to design a specialized waveform for effectively transmitting data and sensing potential targets.
In particular, the waveform design can be categorized into the communication-centric, radar-centric, and joint design according to the design goals <cit.>. Specifically, the radar-centric design aims to modulate the communication data onto the radar pulses, where the radar probing signals can be regarded as an information carrier <cit.>. On the other hand, communication-centric approaches utilize existing communication signals to sense the environment, such as cellular signals <cit.> and Wi-Fi signals <cit.>. In particular, various environmental conditions can be extracted from the received echoes of the communication signals, as the target's existence or movement inevitably affects the signal's propagation. Nevertheless, the integration performance is limited in the above two approaches, as the communication/sensing functionality is often carried out as ancillary tasks. In contrast, the joint ISAC design studies the co-design of signaling methodologies enabling both communications and sensing, which is the research content of this work.
§.§ Related Works
Related works of joint waveform design focus on striking a balance between the tradeoff of communication and sensing. For example, <cit.> investigated the tradeoff between the multi-user interference minimization and the appropriate radar beampattern formulation. Besides, a recent work in <cit.> considered the Cramér-Rao bound (CRB) minimization with guaranteed signal-to-interference-plus-noise ratio (SINR) for each communication user. Furthermore, as widely-used performance metrics, the fundamental tradeoff between the CRB for target parameter estimation and the data rate for communication was also investigated in <cit.> under various system settings, to unveil the potential of ISAC.
Although the above approaches can achieve favorable performance tradeoffs between the estimation performance and spectral efficiency <cit.>, the energy efficiency (EE) optimization of the joint waveform has not been fully investigated. Currently, the energy consumption of the state-of-the-art fifth-generation (5G) wireless networks is extremely high, resulting in expensive operational costs <cit.>.
It is anticipated that the upcoming ISAC will pave the way for developing a perceptive wireless network requiring a much higher energy consumption than the current one, since the wireless signals are expected to achieve the dual purposes of environment sensing and information transmission simultaneously.
This could hinder the long-term development of sustainable and environmentally friendly wireless communication technologies.
Hence, there is a pressing need to investigate the energy efficiency design of ISAC for establishing
a perceptive-efficient and spectrally-efficient cellular network.
Actually, energy-aware optimization has been a hot topic in the past decade for conventional cellular networks,
e.g., <cit.>.
Specifically, EE is defined as the ratio of the achieved data rate and the required power consumption, capturing the energy consumption per bit in communication, which has been widely studied for various communication networks <cit.>.
However, these approaches for maximizing the communication EE cannot be directly applied to ISAC, as they do not take into consideration of sensing functionalities.
Recently, the EE optimization for radar-communication spectrum sharing has been studied in <cit.>, and the results cannot be applied to ISAC systems either due to the separated signal waveform design.
On the other hand, a few works have studied ISAC beamforming for maximizing communication-centric EE. For instance, the work of <cit.> investigated the communication EE maximization under the required radar beampattern constraint. Yet, it does not consider the sensing EE and the performance of target parameter estimation. Besides, the work of <cit.> focused on energy minimization under the sensing and communication constraints. In particular, the algorithm designed in <cit.> cannot handle the EE optimization due to the intrinsic challenges brought by fractional programming in the resource allocation design.
More importantly, to the best of our knowledge, the sensing-centric EE that characterizes the EE of target sensing has been rarely studied in the literature.
In particular, to fulfill the increasing demand for sensing services, it is natural for the base station (BS) to transmit waveforms with high power to improve the detection and estimation performance. However, this operation will inevitably bring prohibitive energy costs, which contradicts the emerging requirements of carbon neutrality and environmental sustainability for future wireless networks <cit.>.
Therefore, there is an urgent need for the design an energy-efficient sensing performance metric for ISAC.
§.§ Contributions
Against this background, this work considers the EE optimization for the waveform design of ISAC, where the communication-centric EE, sensing-centric EE, and their tradeoffs are investigated.
Specifically, for the ISAC systems wherein communication serves as the primary objective, we study the ISAC waveform design for maximizing the communication-centric EE, i.e., the ratio of the achievable rate and the corresponding power consumption, while guaranteeing both the target estimation and communication performance in terms of the CRB and SINR, respectively.
As for the sensing-centric ISAC systems, for the first time, we propose the performance metric to measure the sensing-centric EE for target parameter estimation.
Then, we optimize the ISAC waveform to maximize the sensing-centric EE, considering the constraints of SINR, CRB, and the maximum transmission power budget. Then, we study the Pareto boundary of communication-centric EE and sensing-centric EE for characterizing their tradeoffs. The main contributions of this paper are summarized as follows.
* We optimize the communication-centric EE considering the two scenarios having a point-like target estimation and an extended target estimation, respectively, under the constraints of CRB, SINR, and transmission power limitations. For the case of point-like target, the nonconvexity of the objective function and CRB constraint hinder the communication-centric EE optimization. For handling these challenges, we first adopt the quadratic-transform-Dinkelbach method to reformulate the nonconvex fractional objective function as a tractable formulation. Then, we adopt the semi-definite relaxation and linear matrix inequality to convert the nonconvex optimization problem into a sequence of convex optimization problems. Finally, we generalize the proposed algorithm to an extended target case.
* We propose a performance metric for capturing the notion of sensing-centric EE for the first time, which adopts the ratio of the reciprocal of the CRB to the transmit energy for measuring “information-per-Joule’’. Then, based on the proposed metric, we consider the sensing-centric EE maximization for point-like/extended targets by optimizing the transmit beamforming. Although the considered problem is nonconvex, we adopt the Schur complement to reformulate the problem into a tractable formulation, facilitating the development of a successive convex approximation (SCA)-based algorithm to effectively acquire the solution to the design problem.
* We adopt the Pareto optimization technique to characterize the tradeoff between the communication-centric EE and the sensing-centric EE. In particular, we formulate a constrained optimization problem that maximizes the communication-centric EE under the constraint of sensing-centric EE. To handle the nonconvexity of the considered optimization problem, we propose an SCA-based iterative algorithm for addressing the nonconvexity. Then, by varying the threshold of the sensing-centric EE, the approximate Pareto boundary can be obtained by solving a sequence of constrained problems. Simulation results present the Pareto boundary to demonstrate the tradeoff between the two EE metrics.
The remainder of this paper is organized as follows. Section II introduces the system model, including the communication model and the sensing model. In Section III, we study the optimization of the communication-centric EE under the sensing and communication constraints. The sensing-centric EE is studied in Section IV. Section V investigates the tradeoff between the communication-centric and the sensing-centric EE. Simulation results are provided in Section VI. Finally, we conclude the paper in Section VII.
Notations: The normal plain text (i.e., t), bold lowercase letters (i.e., 𝐰) and uppercase letters (i.e., 𝐖) represent scalars, vectors, and matrices, respectively. tr(·), rank(·), (·)^H, and (·)^T denote the trace operator, the rank operator, the Hermitian transpose, and the transpose operator, respectively. ℂ^n × n stands for an n × n complex-valued matrix. · represents the L_2 norm of a matrix. The inequality 𝐀≽0 means that 𝐀 is Hermitian positive semi-definite. Re(·) denotes the real part of the argument. We adopt 𝔼(·) for the stochastic expectation. ḟ(x) denotes the first derivative of function f(x). The notation ≜ is used for definitions.
§ SYSTEM MODEL
As depicted in Fig. <ref>, we consider an ISAC multiple-input multiple-output (MIMO) system, where the BS equipped with M transmit antennas serves K single-antenna UEs for communication with K ≤ M. Let k ∈𝒦≜{1,2, ⋯,K} denote the communication user set. As for radar estimation, the environmental information is simultaneously extracted from the reflected echoes with N receiving antennas implemented at the BS.
Without loss of generality, the number of transmit antennas is less than that of receive antennas, i.e., M ≤ N. As for target sensing, both the point-like target and the extended target cases are considered separately covering various practical scenarios. In particular, the former case denotes the unstructured point that is far away from the BS, such as unmanned aerial vehicles (UAVs). On the other hand, for the extended target, it acts as a reflecting surface with a large number of distributed scatterers, such as a vehicle or a pedestrian <cit.>. The detailed model is given as follows.
§.§ Communication Model
We denote the beamforming vector and the channel from the BS to the k-th user as 𝐰_k∈ℂ^M× 1 and 𝐡_k∈ℂ^M× 1, respectively. Then, the data symbol intended for the k-th user at time slot l is denoted as s_k[l], with unit power 𝔼( |s_k[l]|^2) =1. Left multiplying 𝐬[l] = [s_1[l], s_2[l], ⋯, s_k[l]]^T ∈ℂ^K × 1 with the beamforming matrix 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_k] ∈ℂ^M × K, the transmitted signal vector of the BS is given by 𝐱[l]= 𝐖𝐬[l].
Then, the transmitted ISAC waveform over L time slots can be denoted as 𝐗 = [ x[1], x[2], ⋯, x[L] ] ∈ℂ^M × L. Then, the received signal at the k-th user during the l-th time slot, l ∈{1, 2, ⋯, L}, is given as follows
y_k[l] = h_k^H 𝐰_k s_k[l] + ∑_j ∈𝒦, j ≠ k h_k^H 𝐰_j s_j[l] + z_c[l],
where z_c[l] is the additive white Gaussian noise (AWGN) with zero mean and variance σ_c^2. The received SINR at the k-th user can be calculated as
SINR_k( W) = | h_k^H w_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2 ),
and the corresponding achievable rate is R_k( W) = log_2(1+SINR_k ( W)).
It is well known that communication-centric EE is defined as a ratio of the transmission sum rate ∑_k R_k( W) to the total power consumption P. Following <cit.>, the power consumption can be calculated as
P = 1/ϵP_d + P_0,
where the power amplifier efficiency ϵ∈ [0,1] and P_0 denotes the constant circuit power consumed by circuitries in RF chains, power supply, cooling system, etc. Besides, the total transmit power is given by P_d = ∑_k w_k_2^2. Hence, the communication-centric EE, measuring the required “bits-per-Joule" <cit.>, can be calculated as
EE_C = ∑_k R_k(𝐖)/P = ∑_k log_2( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) / ( 1/ϵ∑_k w_k_2^2 + P_0 ).
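For concreteness, the following Python sketch (with randomly generated channels and beamformers and placeholder values for σ_c^2, ϵ, and P_0) evaluates the per-user SINR and the resulting communication-centric EE defined above; it is meant only as an illustration of the metric, not as part of the proposed design.

import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 3
sigma_c2, eps_amp, P0 = 1.0, 0.35, 1.0           # noise power, PA efficiency, circuit power
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M)

def ee_comm(H, W):
    G = np.abs(H.conj().T @ W)**2                # G[k, j] = |h_k^H w_j|^2
    sig = np.diag(G)
    interf = G.sum(axis=1) - sig
    rate = np.log2(1.0 + sig / (sigma_c2 + interf)).sum()
    power = np.linalg.norm(W)**2 / eps_amp + P0
    return rate / power                          # bits per Joule (per channel use)

print(ee_comm(H, W))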
§.§ Sensing Model
For radar sensing, the BS exploits the echo signals collected in L time slots to estimate the target parameter.
This work considers the two cases with either a point-like target or an extended target, respectively.
For notational simplicity, we consider the same angle of departure (AOD) and angle of arrival (AOA) of the target, i.e., θ_t=θ_r=θ <cit.>. Then,
for the point-like target that locates in the far field, the target response matrix can be denoted as
𝐀 = α𝐚_r(θ)𝐚^H_t(θ),
where 𝐚_x(θ), x∈{t,r}, is the steering vector for the transmit signal at angle θ. Following the existing works on ISAC, e.g., <cit.>, we assume that the BS employs a uniform linear antenna with a half-wavelength spacing between the adjacent antennas. Then, the transmit and receive steering vectors are given by
𝐚_t(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (M -1) cosθ]^T,
𝐚_r(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (N -1) cosθ]^T.
For the extended target that locates in the near field, we follow <cit.> to model it as a reflecting surface with N_s point-like scatters. Then, the target response matrix can be represented as
𝐀 = ∑_i=1^N_sα_i 𝐚_r(θ_i)𝐚_t^H(θ_i),
where α_i is the reflection coefficient of the i-th scatterer.
Therefore, the received target echoes 𝐘_R from the point-like or the extended targets can both be denoted as
𝐘_R = 𝐀𝐗 + 𝐙_s,
where 𝐙_s is the zero-mean AWGN with variance σ_s^2 in each element.
Since the CRB is a lower bound on the variance of any unbiased estimator of an unknown parameter and thus characterizes the achievable sensing performance <cit.>, we adopt the CRB as the sensing metric to design the energy-efficient ISAC waveform in the following.
§ COMMUNICATION-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Point-Like Target Case
Since the CRB of α has a similar form as the one of θ, for conciseness,
this work only considers the CRB of θ for the design of the ISAC beamforming. For the point-like target, the CRB of θ is given as follows <cit.>
CRB(θ)= σ_s^2/( 2L|α|^2 ( M𝐚̇^H(θ)𝐑_𝐱^T𝐚̇(θ)+ 𝐚^H(θ)𝐑_𝐱^T𝐚(θ)‖𝐚̇(θ)‖^2-M|𝐚^H(θ)𝐑_𝐱^T𝐚̇(θ)|^2/𝐚^H(θ)𝐑_𝐱^T𝐚(θ) ) ),
where 𝐑_𝐱 is the sample covariance matrix of 𝐗. Since 𝔼( |s_k[l]|^2) =1, for a large L, we have the asymptotic result
R_𝐱 = 1/L X X^H ≈ W W^H = ∑_k=1^K w_k w_k^H <cit.>.
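As an illustration of how the CRB expression above is evaluated (all array sizes, the target angle, α, and σ_s^2 below are placeholder values, and a(θ) is taken to be the transmit steering vector appearing in the expression), the following sketch computes CRB(θ) for a half-wavelength ULA with R_x = W W^H.

import numpy as np

M, K, L = 8, 3, 64
theta, alpha, sigma_s2 = np.deg2rad(60.0), 0.8 + 0.3j, 1.0
m = np.arange(M)
a = np.exp(-1j * np.pi * m * np.cos(theta))          # a(theta)
da = 1j * np.pi * m * np.sin(theta) * a              # d a(theta) / d theta

rng = np.random.default_rng(1)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M)
RT = (W @ W.conj().T).T                              # R_x^T

bracket = (M * (da.conj() @ RT @ da)
           + (a.conj() @ RT @ a) * np.linalg.norm(da)**2
           - M * np.abs(a.conj() @ RT @ da)**2 / (a.conj() @ RT @ a))
crb = sigma_s2 / (2 * L * np.abs(alpha)**2 * bracket.real)
print(crb)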
The communication-centric energy efficient design is to maximize the EE_C defined in (<ref>), under the constraints of multiple users’ required SINR and maximal CRB(θ), whose optimization problem can be formulated as follows
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) / ( 1/ϵ∑_k w_k_2^2 + P_0 )
s.t. ∑_k=1^K w_k _2^2 ≤ P_max,
CRB(θ) ≤ρ ,
| h_k^H w_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2 ) ≥γ_k, ∀ k,
where P_max denotes the power budget of the BS and (<ref>) is the transmit power constraint.
Besides, ρ and γ_k are the required CRB threshold for sensing and the required SINR for the k-th communication user, respectively.
In general, it is challenging to solve problem (<ref>) directly, due to the nonconvexity of the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For addressing the nonconvex optimization problem, we first adopt Dinkelbach's method <cit.> to reformulate problem (<ref>) as
max_{𝐰_k}_k=1^K f_1(𝐰_k) - λ f_2(𝐰_k)
s.t. (<ref>), (<ref>), (<ref>),
where f_1(𝐰_k) ≜∑_k=1^K log_2 ( 1+| h_k^H w_k |^2/σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2),
f_2(𝐰_k) ≜1/ϵ∑_k=1^K w_k_2^2 + P_0, and λ≥ 0 is the auxiliary variable to be iteratively updated by
λ = f_1(𝐰_k)/f_2(𝐰_k).
With (<ref>) and (<ref>), an efficient solution to problem (<ref>) can be obtained by updating 𝐰_k and λ alternately.
Nevertheless, problem (<ref>) is still difficult to handle due to the following issues: 1) the objective function (<ref>) is still non concave over {𝐰_k } due to the fractional function f_1(𝐰_k); 2) nonconvex constraints (<ref>) and (<ref>).
Since the function log_2(·) is concave and non-decreasing, the nonconvexity of (<ref>) can be addressed if the term inside log_2(·) can be reformulated as an equivalent concave formulation.
Bearing this in mind, since f_1(𝐰_k) belongs to the general multiple-ratio concave-convex fractional programming problem, we adopt the quadratic transform method <cit.> to reformulate f_1(𝐰_k) as
f_1(𝐰_k) = max_{t_k}∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ),
where B_k(𝐰_k) = σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2 and t_k is an introduced auxiliary variable that is iteratively updated by
t_k = | h_k^H w_k |( σ_c^2 +∑_j=1,j ≠ k^K| h_k^H w_j |^2)^-1.
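In an implementation, the auxiliary variables t_k are refreshed with this closed form once per inner iteration. A minimal NumPy sketch (our own illustration; the channel matrix H and beamformer matrix W are assumed to stack 𝐡_k and 𝐰_k column-wise):

```python
import numpy as np

def update_t(H, W, sigma_c2):
    """Closed-form update of the quadratic-transform auxiliaries t_k."""
    K = H.shape[1]
    t = np.empty(K)
    for k in range(K):
        hk = H[:, k]
        signal = np.abs(hk.conj() @ W[:, k])
        interference = sum(np.abs(hk.conj() @ W[:, j]) ** 2
                           for j in range(K) if j != k)
        t[k] = signal / (sigma_c2 + interference)
    return t
```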
Based on the above reformulations, problem (<ref>) can be recast as
max_{𝐰_k, t_k}_k=1^K, λ ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) - λ( 1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0) s.t. (<ref>),
where {𝐰_k, t_k}_k=1^K and λ can be updated alternatively.
In the following, we focus on handling the nonconvex constraints (<ref>) and (<ref>). Specifically, constraint (<ref>) can be reformulated as
Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2 - M| a^H(θ) R_ x^Tȧ(θ)|^2/ a^H(θ) R_ x^T a(θ) - σ_s^2/2Lρ|α|^2 ≥ 0.
Then, for notational conciseness, denoting ℱ( R_X) ≜ Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2, (<ref>) can be reformulated as the following linear matrix inequality by leveraging the Schur complement <cit.>.
[ ℱ( R_x) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ) R_ x^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 .
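The LMI above is what a convex solver would receive; as a quick numerical sanity check (our own sketch, not a solver), one can assemble the 2x2 block matrix for a candidate covariance 𝐑_𝐱 and test positive semidefiniteness through its eigenvalues. All parameter values are placeholders.

```python
import numpy as np

def crb_lmi_feasible(Rx, theta, sigma_s2, L, rho, alpha, tol=1e-9):
    """Check the Schur-complement LMI for the CRB constraint at a given Rx."""
    M = Rx.shape[0]
    m = np.arange(M)
    a = np.exp(-1j * np.pi * m * np.cos(theta))
    a_dot = 1j * np.pi * m * np.sin(theta) * a
    RT = Rx.T
    F = (M * np.real(a_dot.conj() @ RT @ a_dot)
         + np.real(a.conj() @ RT @ a) * np.linalg.norm(a_dot) ** 2)
    off = np.sqrt(M) * (a.conj() @ RT @ a_dot)
    S = np.array([[F - sigma_s2 / (2 * L * rho * np.abs(alpha) ** 2), off],
                  [np.conj(off), np.real(a.conj() @ RT @ a)]])
    return float(np.min(np.linalg.eigvalsh(S))) >= -tol
```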
Next, for handling the nonconvex constraint (<ref>), we introduce an auxiliary optimization variable matrix 𝐖_k and reformulate constraint (<ref>) into
tr(𝐐_k 𝐖_k) - γ_k ∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) ≥γ_k σ_c^2,
W_k =w_k w_k^H,
where 𝐐_k = h_k h_k^H. Then, problem (<ref>) can be equivalently reformulated as
max_{𝐰_k,𝐖_k, t_k}_k=1^K ∑_k=1^K log_2 ( 1+ 2 t_k ·Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. [ [ ℱ(∑_k=1^K𝐖_k) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ)∑_k=1^K W_k^Tȧ(θ); √(M)ȧ^H(θ)∑_k=1^K W_k^T a(θ) a^H(θ)∑_k=1^K W_k^T a(θ) ] ]≽0 ,
(<ref>), (<ref>), (<ref>),
where B_k(𝐖_k) ≜∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2. However, constraint (<ref>) is a nonconvex equality constraint which is difficult to handle. Therefore, we introduce the following lemma to transform constraint (<ref>) into equivalent inequality constraints.
W_k =w_k w_k^H can be equivalently reformulated as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , 𝐖_k ≽0, ∀ k,
tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤ 0, ∀ k.
The proof is given in Appendix A.
Although the equality constraint in (<ref>) has been reformulated as the equivalent inequality constraints, constraint (<ref>) is still nonconvex.
For handling this, we adopt the SCA technique that establishes an inner convex approximation of constraint (<ref>) given as
tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ 0, ∀ k,
where 𝐰^(i-1)_k is the solution obtained at the (i-1)-th iteration of the SCA.
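The linearization above is a standard inner approximation: since ‖𝐰_k - 𝐰_k^(i-1)‖^2 ≥ 0, the linearized left-hand side always upper-bounds tr(𝐖_k) - 𝐰_k^H 𝐰_k, so any point satisfying the linearized constraint is feasible for the original one. A quick numerical check of this bound (our own illustration with random complex vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)
w_prev = rng.standard_normal(M) + 1j * rng.standard_normal(M)
W_mat = np.outer(w, w.conj()) + 0.3 * np.eye(M)        # any Hermitian PSD stand-in for W_k

original = np.trace(W_mat).real - np.vdot(w, w).real
linearized = (np.trace(W_mat).real + np.vdot(w_prev, w_prev).real
              - 2 * np.real(np.vdot(w_prev, w)))
assert linearized >= original - 1e-9                    # inner (conservative) approximation
print(original, linearized)
```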
Therefore, at the i-th iteration, the convex approximation of problem (<ref>) can be reformulated as
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>),(<ref>).
Algorithm <ref> summarizes the iterative algorithm for handling problem (<ref>), where f̂_1(𝐰_k, 𝐖_k) = ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) and f̂_2(𝐖_k) =1/ϵ∑_k=1^K tr( W_k)+ P_0. Although we cannot guarantee that the optimal solution of problem (<ref>) can be obtained, the proposed Algorithm <ref> follows the inexact Dinkelbach-type algorithm adopted in <cit.>, whose convergence can be guaranteed by the following lemma.
Let {𝐰_k^i,𝐖_k^i} be the solution sequence generated by solving problem (<ref>). The sequence {λ^(i)} generated by Algorithm 1 is non-decreasing and convergent.
Since
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))
=(λ^(i+1)-λ^(i))f̂_2(𝐖^(i)),
we have λ^(i+1)≥λ^(i) if f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥ 0.
Obviously, f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))=0. At the i-th iteration, we approximate problem (<ref>) as
problem (<ref>) around 𝐰_k^(i-1). Since 𝐰_k^(i-1) is definitely a feasible solution of problem (<ref>), we have
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))= 0.
Therefore, we can conclude that the sequence {λ^(i)} is non-decreasing and Algorithm 1 converges due to the finite power budget.
Complexity Analysis:
The computational complexity of Algorithm <ref> is dominated by solving problem (<ref>). Problem (<ref>) involves linear matrix inequality (LMI) constraints that dominate the computation complexity. We notice that the problem contains one LMI constraint of size 2M, K LMI constraints of size M+1, and K LMI constraints of size M.
Given the required accuracy ϵ_0 > 0, the ϵ_0-optimal solution can be achieved after a sequence of iterations. Then, the computational complexity can be given as 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ) by reserving the highest order term, where I_iter denotes the number of iterations <cit.>.
Due to the stringent requirement introduced by (<ref>), it is generally non-trivial to directly obtain a feasible solution as an initial point. Alternatively, we can adopt the penalty SCA <cit.> and introduce auxiliary variables ρ̅_k to transform problem (<ref>) into
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) - p̅∑_k=1^K ρ̅_k
s.t. tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ρ̅_k, ∀ k,
(<ref>), (<ref>), (<ref>), (<ref>),
where p̅ and ∑_k=1^K ρ̅_k denote the weight coefficient and the penalty term, respectively. To obtain the initial point of (<ref>), we can solve problem (<ref>) as an initial warm-up phase by gradually raising p̅ to induce a reduction in the penalty term to a smaller value. When the penalty term decreases to zero, problem (<ref>) reduces to problem (<ref>), whose solution serves as the feasible initial point of (<ref>).
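The warm-up phase described in this remark can be organized as a short outer loop that inflates the penalty weight until the penalty term is numerically zero. The skeleton below is our own schematic; `solve_penalized` stands in for the convex penalized subproblem solver, and its interface is an assumption made for illustration.

```python
def penalty_warmup(solve_penalized, p_bar=1.0, growth=5.0, tol=1e-6, max_rounds=30):
    """Return a feasible initial point by driving the penalty term to zero.
    solve_penalized(p_bar, warm_start) -> (solution, penalty_value)."""
    solution = None
    for _ in range(max_rounds):
        solution, penalty = solve_penalized(p_bar, solution)
        if penalty < tol:        # rank-1-related constraints are now (numerically) satisfied
            break
        p_bar *= growth          # raise the penalty weight and re-solve
    return solution
```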
§.§ Extended Target Case
For estimating the extended target, we follow <cit.> to consider the CRB of the target response matrix 𝐀 instead of the angle. Since K ≤ M, transmitting K signal streams is not always sufficient for recovering the rank-M matrix. To address this issue, the BS generates additional signals that are dedicated to target probing. As such, the augmented transmit signal at the l-th time slot is 𝐱̃[l]≜[𝐖, 𝐖̃][𝐬[l];𝐬̃[l]], where 𝐬̃[l] ∈ℂ^M × 1 is the dedicated probing signal and 𝔼( 𝐬[l] 𝐬̃^H[l] ) = 0.
Note that in the augmented signal, the beamforming 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_K] ∈ℂ^M × K broadcasts the information data to the K users and the beamforming 𝐖̃ = [𝐰_K+1, ⋯, 𝐰_K+M] ∈ℂ^M × M is employed to generate probing signals for enabling the estimation of the target response matrix. However, the introduced probing signals 𝐬̃[l] inevitably generate undesired interference to the served multiple users that introduces non-trivial tradeoff between sensing and communication. In particular, the SINR received at the k-th user is given by
S̃ĨÑR̃_k = | 𝐡_k^H 𝐰_k|^2/( ∑_i = 1,i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2 ),
where ‖𝐡_k^H𝐖̃‖^2_2 is the additional interference due to the probing signals.
In such a case, the CRB for the extended target estimation can be derived as
CRB_extended= σ_s^2 M/Ntr(𝐑_𝐱^ - 1),
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H .
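Both quantities introduced here are easy to evaluate for a candidate design. The sketch below (our own illustration) computes the per-user SINR under the augmented transmission and the trace-inverse CRB; we write the time-slot normalization as L here, and all array shapes are assumptions made for the example.

```python
import numpy as np

def sinr_augmented(H, W, W_tilde, sigma_c2):
    """Per-user SINR with dedicated probing beams: W_tilde adds ||h_k^H W_tilde||^2 interference."""
    K = H.shape[1]
    out = np.empty(K)
    for k in range(K):
        hk = H[:, k]
        sig = np.abs(hk.conj() @ W[:, k]) ** 2
        interf = sum(np.abs(hk.conj() @ W[:, i]) ** 2 for i in range(K) if i != k)
        probe = np.linalg.norm(hk.conj() @ W_tilde) ** 2
        out[k] = sig / (interf + probe + sigma_c2)
    return out

def crb_extended(W, W_tilde, sigma_s2, L):
    """Trace-inverse CRB for the extended target, with R_x = W W^H + W_tilde W_tilde^H."""
    M = W.shape[0]
    Rx = W @ W.conj().T + W_tilde @ W_tilde.conj().T
    return sigma_s2 * M / L * np.real(np.trace(np.linalg.inv(Rx)))
```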
Based on the discussions above, the problem of communication-centric EE optimization for estimating an extended target can be formulated as
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2(1+S̃ĨÑR̃ _k)/( 1/ϵ∑_k=1^K+M ‖𝐰_k‖_2^2 + P_0 )
s.t. ∑_k=1^K+M ‖𝐰_k‖_2^2 ≤ P_max,
CRB_extended= σ_s^2 M/Ltr(𝐑_𝐱^ - 1) ≤τ ,
S̃ĨÑR̃_k ≥γ_k, ∀ k.
Obviously, although constraints (<ref>) and (<ref>) are both convex, the fractional objective function (<ref>)
is still nonconvex.
Following Section <ref>, we first adopt Dinkelbach’s transformation to handle the nonconvex fractional programming and reformulate the problem as follows
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2 (1+S̃ĨÑR̃ _k) - λ( 1/ϵ∑_k=1^K+M ‖𝐰_k‖_2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
Then, by exploiting the equality -log a = max_{b>0} (log b - ab) + 1 <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K+M, {b_k}_k=1^K, λ ∑_k=1^K log_2 ( | 𝐡_k^H 𝐰_k|^2 + ∑_i = 1,i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2)
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H 𝐖̃‖_2^2 + σ _C^2 ) )
- λ( 1/ϵ∑_k=1^K+M ‖𝐰_k‖_2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
For obtaining a tractable formulation, by introducing auxiliary variables 𝐖_k ≜𝐰_k 𝐰_k^H, k ∈ [1, 2, ⋯, K] and 𝐑_𝐖̃ = 𝐖̃𝐖̃^H, problem (<ref>) can be reformulated as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i ≠ k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
- λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0)
,
s.t. tr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ≤ P_max,
σ_s^2 M/Ntr( ( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ^-1) ≤τ ,
𝐡_k^H 𝐖_k 𝐡_k - γ_k ( ∑_i = 1,i ≠ k^K𝐡_k^H 𝐖_i 𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k ) ≥γ_k σ_c^2,
𝐖_k ≽0, ∀ k, 𝐑_𝐖̃≽0,
rank(𝐖_k) = 1, ∀ k.
After inspecting problem (<ref>), we can find that all constraints are convex, except for constraint (<ref>). Besides, the objective function in (<ref>) includes three sets of optimization variables: {λ}, {b_k}, and {{𝐖_k}_k=1^K, 𝐑_𝐖̃}. Moreover, when fixing the other two sets, the objective function is convex with respect to the remaining one. Therefore, we first adopt the rank relaxation to remove constraint (<ref>) and then employ an alternating optimization (AO) algorithm to optimize three sets of optimization variables alternately.
The detailed algorithm is summarized in Algorithm 2, where we denote
f̃_1(𝐖_k, 𝐑_𝐖̃ ) = ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i ≠ k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
f̃_2(𝐖_k, 𝐑_𝐖̃ ) = 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0.
In the following theorem, we will show that the rank-1 solution of problem (<ref>) can be recovered from the solution generated by Algorithm 2.
Given the optimal solution obtained by Algorithm <ref> as {𝐖_k^∗, 𝐑^∗_𝐖̃}. When K = 1,
𝐖̂^∗ = 𝐖^∗𝐡_k 𝐡_k^H 𝐖^∗/𝐡_k^H 𝐖^∗𝐡_k, 𝐑̂^∗_𝐖̃= 𝐑^∗_𝐖̃
is the optimal rank-1 solution that achieves identical performance as {𝐖_k^∗, 𝐑^∗_𝐖̃}.
When K > 1, one can always construct an optimal solution that satisfies the rank-1 constraint while achieving the same performance.
The proof is given in Appendix B.
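For K = 1, the reconstruction in the theorem can be verified numerically: the constructed 𝐖̂^∗ is rank one, preserves 𝐡_k^H 𝐖^∗𝐡_k, and satisfies 𝐖^∗ - 𝐖̂^∗≽0. A small self-contained check (our own illustration, with a randomly generated PSD matrix standing in for the solver output):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
W_star = B @ B.conj().T                               # generic PSD stand-in for W*

W_hat = (W_star @ np.outer(h, h.conj()) @ W_star) / np.real(h.conj() @ W_star @ h)

print(np.linalg.matrix_rank(W_hat))                                   # 1
print(np.allclose(h.conj() @ W_hat @ h, h.conj() @ W_star @ h))       # same received power
print(np.min(np.linalg.eigvalsh(W_star - W_hat)) >= -1e-9)            # W* - W_hat is PSD
```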
Complexity Analysis:
We provide the computational complexity of Algorithm <ref> as follows. Similarly, the problem (<ref>) is a semidefinite program that can be solved by the standard interior-point algorithm. We note that the problem involves K+1 LMI constraints of size M. We consider the highest order term and express the computational complexity as 𝒪( √(MK+M+K+1) M^6 K^3 I_iterlog(1/ϵ_0) ) for an ϵ_0-optimal solution, where I_iter represents the number of iterations <cit.>.
§ SENSING-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Performance Metric for Sensing-Centric EE
It is well known that CRB is the inverse of Fisher information for the unbiased estimator <cit.>. In fact, Fisher information is the statistical expected value of the observed information about an observable random variable. Considering these, we adopt the reciprocal ratio of the CRB to the transmit power, further normalized by the total time slot length. In this context, we arrive at a novel sensing-centric EE metric that measures the average sensing information per Joule, defined as
EE_S≜CRB^-1/( L ( 1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 ) ) .
In this manner, both the sensing-centric EE and communication-centric EE measure the “information” per Joule, but the “information” has different meanings.
Based on the above metric, we study the waveform design to maximize the sensing-centric EE considering the point-like target and the extended target in Sections <ref> and <ref>, respectively.
§.§ Point-Like Target Case
Considering the point-like target, with the CRB of estimating θ given in (<ref>), the sensing-centric EE optimization problem can be formulated as
max_{𝐰_k}_k=1^K CRB^-1(θ)/( L ( 1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 ) )
s.t. ∑_k=1^K ‖𝐰_k‖_2^2 ≤ P_max,
CRB(θ) ≤ρ ,
| h_k^H w_k |^2/σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2≥γ_k, ∀ k.
Obviously, problem (<ref>) is also intractable due to the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For handling the fractional objective function (<ref>), with the introduced auxiliary optimization variables ω, t,ϕ, and ζ, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K, ω, ϕ, ζ ω
s.t. CRB(θ) ≤1/t,
1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 ≤ϕ, t ≥ζ^2,
ω≤ζ^2/ϕ,
(<ref>), (<ref>), (<ref>).
The equivalence between (<ref>) and (<ref>) is obvious, since constraints
(<ref>), (<ref>), and (<ref>) should be active at the optimal solution. We note that (<ref>) shares the same form as (<ref>). Therefore, with the Schur complement, constraint (<ref>) can be reformulated as
[ ℱ(∑_k=1^K𝐖_k) - t σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0,
where ℱ(∑_k=1^K𝐖_k) ≜ Mȧ^H(θ)∑_k=1^K𝐖_k^Tȧ(θ)+ a^H(θ)∑_k=1^K𝐖_k^T a(θ)‖ȧ(θ)‖^2 and 𝐖_k = 𝐰_k 𝐰_k^H. Furthermore, Lemma <ref> presents an equivalent formulation of the equality 𝐖_k = 𝐰_k 𝐰_k^H whose convex approximation has been given in (<ref>) and (<ref>).
Then, for handling the fractional constraint (<ref>), we introduce auxiliary variables {τ_k, ψ_k, ∀ k} to reformulate (<ref>) as
τ^2_k / ψ_k ≥γ_k,
τ_k = 𝐡_k^H 𝐰_k,
ψ_k ≥σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2,
where (<ref>) and (<ref>) are convex constraints. Then, problem (<ref>) can be reformulated as
max_Θ ω
s.t. ω≤ζ^2/ϕ , γ_k ≤τ^2_k/ψ_k , ∀ k
(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>), (<ref>),
where Θ≜{{𝐖_k, 𝐰_k}_k=1^K, ω, t,ϕ, ζ, τ_k, ψ_k } denotes the set of optimization variables. Obviously constraint (<ref>) is convex. Therefore, the challenge for handling problem (<ref>) lies in the nonconvexity of constraint (<ref>). To deal with this, we adopt the SCA techniques to establish a convex approximation of constraint (<ref>). Since function ζ^2/ϕ is jointly convex with respect to ζ and ϕ, its convex lower approximation can be established as
ζ^2/ϕ ≥(ζ^(n))^2/ϕ^(n) + 2 ζ^(n)/ϕ^(n) (ζ - ζ^(n) ) - ( ζ^(n)/ϕ^(n)) ^2 (ϕ - ϕ^(n) ) = 2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ ,
where ζ^(n) and ϕ^(n) are the feasible points obtained at the n-th iteration of the SCA. Consequently, the inner convex approximation of ω≤ζ^2/ϕ is
ω≤2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ.
Similarly, the inner convex approximation of γ_k ≤τ^2_k/ψ_k, ∀ k is
γ_k ≤2 τ_k^(n)/ψ_k^(n)τ_k - ( τ_k^(n)/ψ_k^(n)) ^2 ψ_k , ∀ k ,
where τ_k^(n) and ψ_k^(n) are the feasible points obtained at the n-th iteration.
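Because ζ^2/ϕ is jointly convex for ϕ>0, its first-order expansion is a global under-estimator, which is what makes the two linearized constraints inner approximations of the originals. The short check below (our own illustration) verifies the bound at random points.

```python
import numpy as np

def quad_over_lin_lb(zeta, phi, zeta0, phi0):
    """First-order lower bound of zeta^2/phi around (zeta0, phi0)."""
    return 2 * zeta0 / phi0 * zeta - (zeta0 / phi0) ** 2 * phi

rng = np.random.default_rng(3)
for _ in range(1000):
    zeta, zeta0 = rng.uniform(-5, 5, size=2)
    phi, phi0 = rng.uniform(0.1, 5, size=2)
    assert zeta ** 2 / phi >= quad_over_lin_lb(zeta, phi, zeta0, phi0) - 1e-9
print("first-order lower bound verified on random samples")
```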
Finally, a convex approximation of problem (<ref>) is formulated as
max_Θ ω
s.t. (<ref>), (<ref>), (<ref>).
In this way, problem (<ref>) can be solved with off-the-shelf numerical convex program solvers such as CVX Toolbox <cit.>. We summarize the proposed iterative method in Algorithm <ref>, where its initial feasible solution can be obtained by following the penalty SCA method given in Remark 1.
In the following, we analyze the convergence of Algorithm <ref>. We can note that in the iterative procedure of Algorithm <ref>, Θ^(n-1) is always feasible in problem (<ref>) at n-th iteration owing to the adopted first-order Taylor approximation. We note that (<ref>) can be optimally solved and the optimal value of its objective function serves as a lower bound on that of (<ref>).
Therefore, it can be guaranteed that the optimal value of (<ref>) at the n-th iteration, denoted as p_∗^(n), always satisfies p_∗^(n)≥ p_∗^(n-1). Hence, Algorithm <ref> produces a non-decreasing sequence of objective values for problem (<ref>).
Similar to Algorithm <ref>, the computational complexity of Algorithm <ref> is 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ).
§.§ Extended Target Case
For the case of the extended target, following the discussion in Section <ref>, we choose 𝐀 as the parameter to be estimated and adopt the formulation of CRB in (<ref>).
Then, we have the sensing-centric EE for sensing an extended target as
EE_S = ( σ_s^2 M/Ltr(𝐑_𝐱^-1) )^-1/ L ( 1/ϵtr(𝐑_𝐱) + P_0 ) = ( tr(𝐑_𝐱^ - 1) )^-1/σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ) ,
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H = ∑_k=1^K 𝐰_k 𝐰_k^H + 𝐑_𝐖̃. Then, we formulate the problem as
max_{𝐰_k}_k=1^K,𝐑_𝐖̃ ( tr(𝐑_𝐱^ - 1) )^-1/σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 )
s.t. tr(𝐑_𝐱) ≤ P_max,
σ_s^2 M/Ntr(𝐑_𝐱^ - 1) ≤ϕ ,
S̃ĨÑR̃_k ≥γ_k, ∀ k,
where S̃ĨÑR̃_k is given in (<ref>) and can be recast as a convex form in (<ref>) by letting 𝐖_k = 𝐰_k 𝐰_k^H.
We notice that in (<ref>), the numerator is the reciprocal of a convex function and the denominator is strictly positive and convex. To handle its nonconvexity, we introduce auxiliary optimization variables p_e,q_e and equivalently transform the problem into
max_{𝐰_k}_k=1^K,𝐑_𝐖̃, q_e, p_e 1/p_e q_e
s.t. p_e ≥σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ), q_e ≥tr(𝐑_𝐱^ - 1),
(<ref>), (<ref>),(<ref>).
Then, the problem can be further transformed into its equivalent form as
min_{𝐖_k}_k=1^K,𝐑_𝐖̃, q_e, p_e ln(p_e) + ln(q_e) s.t. (<ref>), (<ref>),
where the objective function is still not convex, but can be approximated based on the first order Taylor series expansion given by
ln(p_e) + ln(q_e) ≤ln( p^(n)_e ) + ln( q_e^(n)) + 1/p_e^(n)( p_e-p_e^(n)) + 1/q^(n)_e( q_e-q^(n)_e) ,
where p_e^(n) and q_e^(n) are the feasible solutions obtained at the n-th iteration. Following the techniques detailed in Section <ref>, a convex approximation of problem (<ref>) at the n-th iteration can be established as
min_{𝐖_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e ln(p^(n)_e) + ln(q_e^(n)) + 1/p_e^(n) (p_e-p_e^(n)) + 1/q^(n)_e (q_e-q^(n)_e)
s.t. (<ref>), (<ref>),(<ref>),(<ref>), (<ref>).
The computational complexity is 𝒪( √(MK+M+K+1) M^6 K^3 I_iterln(1/ϵ_0) ) for an ϵ_0-optimal solution.
Based on the optimal solution of (<ref>), denoted as {𝐖_k^∗, 𝐑^∗_𝐖̃}, the optimal rank-1 solutions can always be reconstructed.
The proof can be achieved by following the proof of Theorem 2 and the details are omitted for brevity.
§ APPROXIMATE PARETO BOUNDARY OF ENERGY-EFFICIENT ISAC SYSTEMS
In this section, we aim to investigate the Pareto boundary of the achievable EE performance region built on the communication-centric EE and the sensing-centric EE.
Considering the point-like target case, we follow <cit.> to formulate the search of the Pareto boundary as a constrained optimization problem that maximizes the communication-centric EE under the sensing-centric EE constraint. It is worth noting that the proposed algorithm can be adapted to the extended target case directly. Now, we aim to solve
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| 𝐡_k^H 𝐰_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| 𝐡_k^H 𝐰_j |^2) ) /( 1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 )
s.t. CRB^-1(θ)/( L ( 1/ϵ∑_k=1^K ‖𝐰_k‖_2^2 + P_0 ) ) ≥ℰ,
∑_k=1^K ‖𝐰_k‖_2^2 ≤ P_max,
where ℰ denotes the required minimum sensing-centric EE threshold.
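In practice the approximate Pareto boundary is traced by sweeping the threshold ℰ over a grid and re-solving; the loop below is our own schematic, with `solve_ee_c` standing in for Algorithm 4 (its interface and the warm-start convention are assumptions made for illustration).

```python
def pareto_boundary(solve_ee_c, es_grid):
    """Sweep the minimum sensing-EE threshold and record the achieved EE_C.
    solve_ee_c(es_threshold, warm_start) -> (ee_c_value, solution)."""
    points, warm = [], None
    for es in es_grid:
        ee_c, warm = solve_ee_c(es, warm)      # warm-start from the previous threshold
        points.append((es, ee_c))
    return points
```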
Obviously, problem (<ref>) is a nonconvex fractional program, which is challenging to solve directly.
To handle fractional objective function (<ref>) and nonconvex constraint (<ref>), we follow <cit.> to find the approximate optimal Pareto boundary for characterizing the tradeoff between the communication-centric EE and sensing-centric EE.
In particular, we first apply the Dinkelbach algorithm to reformulate fractional function (<ref>) as
max_λ ∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) - λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0 )
s.t. (<ref>), (<ref>),
where B_k(𝐖_k) = ∑^K_j=1, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2.
Furthermore, by introducing auxiliary variables b_k, k=1,…,K, the intractable fractional terms in (<ref>) can be equivalently formulated as
∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) = max_b_k ( ∑_k=1^Klog_2 (1+ b_k) - ∑_k=1^K b_k + ∑_k=1^K(1+b_k)| h_k^H w_k |^2 /B_k(𝐖_k)),
which has an analytical solution b_k = | h_k^H w_k |^2/B_k(𝐖_k).
Finally, by applying the quadratic transform <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>).
The convex approximation of nonconvex constraint (<ref>) is constraint (<ref>), as mentioned in Section <ref>. For handling nonconvex constraint (<ref>),
we introduce an auxiliary variable ℰ̃ and employ the Schur complement to obtain the convex approximation of problem (<ref>) given by
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. [ ℱ(∑_k=1^K𝐖_k) - ℰ̃σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 ,
ℰ̃≥ℰ N (1/ϵ∑_k=1^Ktr( W_k)+ P_0),
(<ref>), (<ref>), (<ref>).
(<ref>) is convex whose optimum can be obtained by the interior point method. Therefore, an efficient solution of problem (<ref>) can be obtained by solving a sequence of problem (<ref>). Algorithm <ref> summarizes the iterative algorithm, where f̆_1(𝐰_k, 𝐖_k) = β/ℛ∑_k=1^K( log_2 (1+ b_k) - b_k + 2t_k √(1+b_k)Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) + (1-β) ϕ̃/L 𝒞, f̆_2(𝐖_k) = λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0).
§ NUMERICAL RESULTS
In this section, we provide simulation results of the proposed energy-efficient waveform design. Numerical analysis is presented to evaluate the performance of communication-centric EE (EE_C), sensing-centric EE (EE_S), and their approximate Pareto boundary.
Unless stated otherwise, we consider a dual-functional BS equipped with N = 20 receiving antennas, and the frame length is set to 30. The maximum transmission power P_max is set to 30 dBm with the power amplifier efficiency ϵ = 0.35. The circuit power consumption is set to P_0 = 33 dBm. For the target estimation of radar, the target angle is θ = 90 ^∘.
§.§ EE_C Optimization
We first examine the performance of Algorithm <ref> for maximizing EE_C considering the existence of a point-like target. The convergence rate of Algorithm 1 is given in Fig. <ref>. Obviously, it enjoys a fast convergence rate, whose objective function value converges within 12 iterations on average.
Furthermore, the convergence rate of Algorithm 1 is almost the same for
different system parameters, e.g., different M and CRB constraints, which confirms the scalability of Algorithm 1.
Fig. <ref> investigates the EE_C performance versus the root-CRB threshold for different M. The EE_C increases with the increasing Root-CRB threshold, indicating that EE_C can achieve a higher level when the sensing performance requirement is less stringent. Indeed, increasing the number of antennas can improve EE_C, since more spatial degrees-of-freedom can be utilized for designing an efficient ISAC waveform. On the other hand, the baseline scheme only maximizes the communication sum rate under the same constraints of problem (<ref>).
Obviously, the EE_C of the baseline scheme is unsatisfactory, since it only considers spectral efficiency maximization instead of EE_C maximization. In such a case, the baseline scheme encourages the ISAC BS to use as much power as possible to increase the communication sum rate.
Fig. <ref> and Fig. <ref> plot the EE_C of the point-like target and extended target cases with the increasing SINR constraint of the multiple users, γ_k, respectively. With the increasing γ_k, EE_C first remains unchanged and then decreases due to the shrunken feasible region. Therefore, increasing the downlink communication rate does not necessarily improve EE_C. Furthermore, as the root-CRB threshold decreases, i.e., the sensing requirement becomes more stringent, EE_C decreases, since more power is allocated to radar sensing. A similar trend can also be found in Fig. <ref> for the extended target case.
§.§ EE_S Optimization
In this subsection, we investigate the performance of EE_S optimization for both the point-like target sensing and extended target cases. In Fig. <ref>, we first consider the point-like target to show the EE_S versus the increasing power budget, for different SINR levels. As expected, EE_S increases with the increasing P_T, since the increasing power improves the estimation accuracy and increases EE_S. Besides, lowering the SINR requirement also improves
EE_S, since relaxing the SINR constraint enlarges the feasible region and improves EE_S.
For demonstrating the performance gain obtained by our proposed Algorithm 3,
we perform the performance comparison with two other baselines, namely BA_1 and BA_2. In particular, BA_1 aims to minimize the transmission power while BA_2 aims to maximize the communication sum rate under the same constraints as our proposed method (γ_k = 5 dB, the root-CRB threshold is set to 0.15 deg, P_max = 30 dBm). The results indicate that EE_S of BA_1 is significantly low due to the insufficient power for improving the CRB performance. Additionally, EE_S of BA_2 is also inferior to the proposed method and exhibits a further decline as the transmission power increases, since most of the power is utilized for maximizing the sum rate instead of sensing target.
Fig. <ref> further demonstrates the EE_S versus the SINR requirement, where the root-CRB threshold is set to 0.15 deg. It can be observed that EE_S decreases as the SINR requirement and the number of communication users increase, since more stringent communication requirements deteriorate the sensing performance.
As for the scenario of sensing an extended target, Fig. <ref> shows the EE_S versus communication SINR under different numbers of users and different CRB.
It is worth noting that the performance metric for the extended target sensing EE_S is different from the point-like target case.
Similar to the scenario of sensing a point-like target, EE_S decreases with the increasing requirements of communication SINR, especially when the number of users is larger. Besides, increasing CRB requirements improves EE_S, due to the improved estimation performance.
§.§ Approximate Pareto Boundary of Energy-Efficient ISAC.
Fig. <ref> plots the approximate Pareto boundary of energy-efficient ISAC, which demonstrates the tradeoff between EE_C and EE_S. With the more stringent EE_S constraint, the EE_C decreases.
In particular, when the required minimum sensing-centric EE threshold ℰ is small, strengthening the requirement of EE_S only affects EE_C mildly.
However, when the required EE_S goes beyond a certain threshold, further increasing the EE_S constraint brings a sharp decline in EE_C.
This phenomenon shows that there is a non-trivial tradeoff between EE_S and EE_C, which should be given serious consideration.
Besides, we can find that the area spanned by the Pareto boundary is sensitive to the number of communication users, K, since the increasing number of served communication users consumes the available spatial degrees of freedom which cannot compensate for the performance loss due to the increasingly stringent EE_S constraint.
Therefore, it is more challenging to balance EE_S and EE_C for a large K.
On the other hand, after the required EE_S surpasses some threshold, EE_C decreases sharply. This is because most of the available resources are allocated for satisfying the stringent EE_s constraint, such that the remaining resources are insufficient for guaranteeing the EE_C performance.
§ CONCLUSION
In this paper, we addressed the problem of maximizing energy efficiency for MIMO ISAC systems. We first studied the communication-centric EE adopting the conventional definition of EE in both the point-like target and extended target cases. We reformulated the objective function using the quadratic-transform-Dinkelbach method and solved the sub-problem by leveraging the Schur complement and semi-relaxation techniques. In the second part, we introduced a novel performance metric for measuring sensing-centric EE. We iteratively approximated the objective function as a convex program exploiting SCA to address this problem. Finally, we investigated the tradeoff between the two EE metrics and provided an effective solution. Numerical results showed an improvement compared to the benchmark on both communication-centric EE and sensing-centric EE performance, and we also demonstrated the tradeoff between communication-centric and sensing-centric EE.
§ APPENDIX A
First, we provide the matrix inequality
𝐖_k ≽𝐰_k 𝐰_k^H,
which satisfies either of the following cases:
Case I: 𝐖_k ≻𝐰_k 𝐰_k^H. Then, we have tr(𝐖_k) > tr(𝐰_k 𝐰^H_k).
Case II: 𝐖_k = 𝐰_k 𝐰_k^H. In this case, we have tr(𝐖_k) = tr(𝐰_k 𝐰^H_k).
By combining 𝐖_k ≽𝐰_k 𝐰_k^H, with an additional LMI constraint, given as tr(𝐖_k) ≤tr(𝐰_k 𝐰^H_k), we can guarantee that Case II always holds.
We remark that tr(𝐰_k 𝐰_k^H) = tr(𝐰^H_k 𝐰_k) =𝐰^H_k 𝐰_k. Further applying the Schur complement, W_k =w_k w_k^H can be equivalently transformed into the following LMI, given as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , ∀ k, tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤0, ∀ k,
which completes the proof.
§ APPENDIX B
For K = 1, we can derive that 𝐡_k^H 𝐖̂^∗𝐡_k = 𝐡_k^H 𝐖^∗𝐡_k. Hence, the received SNR and the transmission rate at the user do not decrease. Besides, we have
𝐖^∗ - 𝐖̂^∗ = ( 𝐖^∗)^1/2( 𝐈 - (𝐖^∗)^1/2𝐡_k 𝐡_k^H (𝐖^∗)^1/2/𝐡_k^H 𝐖^∗𝐡_k) ( 𝐖^∗)^1/2≽0,
indicating that the power constraint is satisfied due to 𝐖^∗≽𝐖̂^∗. Additionally, replacing 𝐖^∗ by 𝐖̂^∗ would not decrease the transmission rate or increase the total power, showing that 𝐖̂^∗ is the optimum to the objective function.
Then, we discuss the case of K > 1. We introduce r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 -1 and equivalently reformulate (<ref>) as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ ∑_k=1^K log( 1+r ) - λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0)
+ ∑_k=1^K( log b_k - b_k ( ∑_i = 1,i ≠ k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
s.t. r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 -1 ,
(<ref>),(<ref>), (<ref>), (<ref>), (<ref>) .
We note that with the fixed λ, problem (<ref>) is jointly convex of variables {𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃. Thus, it can be proved that Slater's condition holds such that strong duality holds. By introducing the Lagrange multipliers ϖ_k,1≤ 0, ϖ_k,2≤ 0, μ≤ 0 and Ψ_k ≽0, we provide the Lagrangian function of 𝐖_k as
ℒ(𝐖_k) = - ϖ_k,1𝐡_k^H 𝐖_k 𝐡_k + ∑_i = 1,i ≠ k^Kϖ_i,1𝐡_i^H 𝐖_k 𝐡_i + ϖ_k,2𝐡_k^H 𝐖_k 𝐡_k - ∑_i = 1,i ≠ k^Kϖ_i,2γ_k 𝐡_i^H 𝐖_k 𝐡_i
- tr(𝐖_k Ψ_k)+ μtr(𝐖_k) + ξ ,
where ξ represent the terms that do not involve 𝐖_k. Then, the KKT conditions of (<ref>) is given as
ℒ̇(𝐖^∗_k) = 0 , 𝐖^∗_k Ψ_k = 0.
Then, we have Ψ^∗_k = 𝐀_k^∗ - ϖ_k,1𝐡_k 𝐡_k^H and
𝐀_k^∗ = ∑_i = 1,i ≠ k^Kϖ_i,1𝐡_i 𝐡_i^H + ϖ_k,2𝐡_k 𝐡_k^H - ∑_i = 1,i ≠ k^Kϖ_i,2γ_k 𝐡_i 𝐡_i^H + μ𝐈_M.
Next, we discuss the rank of 𝐀_k^∗ under the following cases.
1) Case I: rank( 𝐀_k^∗) = M.
In this case, we have rank( Ψ^∗_k) ≥ M-1 with the inequality rank( 𝐗 + 𝐘 ) ≥rank( 𝐗 ) - rank( 𝐘 ) <cit.>. For rank(Ψ^∗_k ) = M, the first condition in (<ref>) implies 𝐖^∗_k = 0.
For rank(Ψ^∗_k ) = M - 1, we have rank( 𝐖^∗_k )= 1.
2) Case II: rank( 𝐀_k^∗) = r_a < M.
In this case, we exploit <cit.> to construct a rank-1 solution 𝐖^∗_k. We use {𝐪_k,i^∗}_i=1^M-r_a to denote the columns of an orthonormal basis of Ω_k^∗, which represents the nullspace of 𝐀_k^∗. As Ψ^∗_k ≽0, we have (𝐪_k,i^∗)^H Ψ^∗_k 𝐪_k,i^∗ = - ϖ_k,1 |𝐡_k^H 𝐪_k,i^∗ |^2 ≥ 0. Since (<ref>) should be active at the optimum, indicating ϖ_k,1≥ 0, we have 𝐡_k^H 𝐪_k,i^∗ = 0 and Ψ^∗_k Ω_k^∗ = 0. Thus, M - r_a dimensions of Ψ^∗_k's null space can be represented by Ω_k^∗. Further denoting Ω̅_k^∗ as the null space of Ψ^∗_k, we have rank(Ω̅_k^∗) ≥ M - r_a. Additionally, since rank( 𝐀_k^∗) = r_a, we have rank( Ψ^∗_k) ≥ r_a - 1, which shows that rank(Ω̅_k^∗) ≤ M - r_a + 1. Then, it can be readily noted that rank(Ω̅_k^∗) = M - r_a or rank(Ω̅_k^∗) = M - r_a + 1. When rank(Ω̅_k^∗) = M - r_a, we have 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H with λ_k,i^∗≥ 0. In such a case, 𝐡_k^H 𝐖_k^∗𝐡_k = 0, which contradicts the optimality. Hence, we conclude that rank(Ω̅_k^∗) = M - r_a + 1. Denoting Ω̅_k^∗ = [Ω_k^∗, 𝐩_k^∗], the optimal solution 𝐖^∗_k can be given as 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H + λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H with λ̃^∗_k ≥ 0. Therefore, a rank-1 solution can be constructed as
𝐖̂_k^∗ = 𝐖^∗_k - ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H = λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H , 𝐑̂^∗_𝐖̃ = 𝐑^∗_𝐖̃ + ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H.
In the following, we show that the reconstructed solution, 𝐖̂_k^∗ and 𝐑̂^∗_𝐖̃ satisfy the constraints. Firstly, we have
𝐡_k^H 𝐖_k^∗𝐡_k = 𝐡_k^H 𝐖̂_k^∗𝐡_k, 𝐡_k^H (∑_i = 1,i ≠ k^K𝐖^∗_i + 𝐑^∗_𝐖̃) 𝐡_k = 𝐡_k^H (∑_i = 1,i ≠ k^K𝐖̂^∗_i + 𝐑̂^∗_𝐖̃) 𝐡_k.
Therefore, the right-hand side term in (<ref>) and the left-hand side term in (<ref>) remain unchanged.
Besides, it can be readily verified that constraints (<ref>) and (<ref>) hold, since 𝐖_k^∗ + 𝐑^∗_𝐖̃ = 𝐖̂^∗_k + 𝐑̂^∗_𝐖̃, which completes the proof.
IEEEtran
|
http://arxiv.org/abs/2307.05894v1 | 20230712035118 | On Maximal Functions Associated to Families of Curves in the Plane | [
"Joshua Zahl"
] | math.CA | [
"math.CA"
] |
Deep learning-based estimation of whole-body kinematics from multi-view images
[
Received: date / Accepted: date
==============================================================================
We consider the L^p mapping properties of maximal averages associated to families of curves, and thickened curves, in the plane. These include the (planar) Kakeya maximal function, the circular maximal functions of Wolff and Bourgain, and their multi-parameter analogues. We propose a framework that allows for a unified study of such maximal functions, and prove sharp L^p→ L^p operator bounds in this setting. A key ingredient is an estimate from discretized incidence geometry that controls the number of higher order approximate tangencies spanned by a collection of plane curves. We discuss applications to the Fässler-Orponen restricted projection problem, and the dimension of Furstenberg-type sets associated to families of curves.
§ INTRODUCTION
In this paper, we study the L^p mapping properties of maximal functions associated to families of curves in the plane. The prototypical example is the (planar) Kakeya maximal function
K_δ f (e) = 1/δsup_ℓ || e∫_ℓ^δ|f|, e∈ S^1.
In the above expression, δ>0 is a small parameter; the supremum is taken over all unit line segments ℓ parallel to the vector e; and ℓ^δ denotes the δ neighborhood of ℓ. Cordoba <cit.> obtained the estimate ‖ K_δ f‖_p ≤ C (log 1/δ)^1/2‖ f‖_p for p≥ 2. This is the sharp range of Lebesgue exponents, and the dependence of the operator norm on δ is also best possible (up to the choice of constant C). In particular, the existence of measure zero Besicovitch sets (compact sets in the plane that contain a unit line segment pointing in every direction) shows that for p<∞ the operator K_δ cannot be bounded in L^p with operator norm independent of δ.
A second Kakeya-type maximal function was introduced by Wolff <cit.>. Let C^δ(x,y,r) denote the δ-neighborhood of the circle centered at (x,y) of radius r, and define
W_δ f (r) = 1/δsup_(x,y)∈^2∫_C(x,y,r)^δ|f|, r∈ [1,2].
Wolff <cit.> obtained the estimate ‖ W_δ f‖_p ≤ C_εδ^-ε‖ f‖_p for p≥ 3. This is the sharp range of Lebesgue exponents, and the existence of measure zero Besicovitch-Rado-Kinney sets (compact sets in the plane that contain a circle of every radius r∈[1,2]) shows that for p<∞, the operator W_δ cannot be bounded in L^p with operator norm independent of δ.
A second class of maximal functions contains the Bourgain circular maximal function and its generalizations. For (x,y)∈^2, let
Bf(x,y) = sup_1≤ r≤ 2∫_C(x,y,r)|f|.
Bourgain <cit.> proved that B is bounded from L^p→ L^p for p>2. This is the sharp range of Lebesgue exponents for L^p→ L^p bounds (the full range of exponents for which B is bounded from L^p→ L^q is slightly more complicated; see <cit.> for details). As a consequence, if K⊂^2 has positive measure and if X⊂^2 contains a circle centered at every point of K, then |X|>0, i.e. there are no analogues of measure-zero Besicovitch sets or Besicovitch-Rado-Kinney sets in this setting.
Finally, we recall the Erdoğan elliptic maximal function
Ef(x,y) = sup_W∫_W|f|,
where the supremum is taken over all ellipses centered at (x,y) whose semi-major and semi-minor axes have lengths in [-1/2, 2].
This is a multi-parameter generalizations of the Bourgain circular maximal function. Erdoğan <cit.> conjectured that E should be bounded from L^p→ L^p for p>4. Prior to this work, the best-known bound is p>12 by Lee, Lee, and Oh <cit.>.
§.§ The Setup
The above maximal functions can be described as follows: We have a family of plane curves 𝒞 (i.e. lines, circles, ellipses) and a projection Φ𝒞→^d (i.e. the map sending a line to its slope, a circle to its radius, a circle to its center, etc.). For each z∈^d, the maximal function Mf(z) is a maximal average of f taken over all (possibly thickened) curves γ∈𝒞 with Φ(γ)=z; this is a subvariety of 𝒞 of codimension d.
The above maximal functions exhibit two phenomena. First, when d=1, we have examples of measure zero Besicovitch-type sets (and hence no operator norm bounds that are independent of δ), while for d>1 we have not seen such examples. Second, the dimension of the fibers Φ^-1(z) determine the range of Lebesgue exponents for which L^p→ L^p bounds can hold.
Our first task is to describe the family of curves associated to our maximal function. It will be convenient to describe such curves as the graphs of functions.
Let 𝒞 be an m-dimensional manifold and let I⊂ℝ be an interval. Let h:𝒞× I→ℝ and define
F^h_t(u)=(h(u;t), ∂_t h(u;t),…,∂_t^m-1 h(u;t)).
We say that h parameterizes an m-dimensional family of cinematic curves if F^h_t:𝒞→ℝ^m is a local diffeomorphism for each t∈ I.
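As a quick numerical illustration (our own addition, not part of the original text), one can check that the polynomial family h(u;t) = u_0 + u_1 t + ⋯ + u_m-1t^m-1, which reappears in the examples below, is cinematic: F^h_t is linear in u and its Jacobian is upper triangular with nonzero diagonal, hence nonsingular for every t.

```python
import math
import numpy as np

def jacobian_F(m, t):
    """u-Jacobian of F_t^h(u) = (h, d_t h, ..., d_t^{m-1} h) for h(u;t) = sum_j u_j t^j."""
    J = np.zeros((m, m))
    for i in range(m):                  # row i: gradient of d_t^i h with respect to u
        for j in range(i, m):
            J[i, j] = math.factorial(j) / math.factorial(j - i) * t ** (j - i)
    return J

m = 4
for t in (0.0, 0.3, 0.7):
    print(np.linalg.det(jacobian_F(m, t)))   # always prod_{i<m} i! = 12, so F_t^h is a diffeomorphism
```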
Next we will discuss a transversality condition the controls the behavior of the fibers Φ^-1(z). Let 1≤ s < m. For (u,t)∈𝒞× I, define
V_u;t = {u'∈𝒞 : ∂_t^j h(u';t)=∂_t^j h(u;t), j = 0,…, s}.
The restriction of V_u;t to a small neighborhood of u is a (m-s-1)-dimensional manifold.
We say a smooth function Φ:𝒞→ℝ^m-s is transverse to h if for each (u,t)∈𝒞× I, the derivative of Φ|_V_u;t has maximal rank (i.e. rank m-s-1) at u. Note that this condition is vacuously satisfied if s=m-1.
With these definitions, we can now describe our class of maximal functions.
Let 1≤ s<m, let h:𝒞× I →ℝ parameterize an m-dimensional family of cinematic curves, and let Φ:𝒞→ℝ^m-s be transverse to h. Fix a compact set 𝒞_0⊂𝒞, and a compact interval I_0⊂ I. Abusing notation, we restrict h and Φ to 𝒞_0× I_0 and 𝒞_0, respectively. For each u∈𝒞_0, define the curve
γ_u = {(t, h(u;t)) t∈ I_0}.
We define the maximal functions M_δ and M by
M_δ f(v) = 1/δsup_u ∈Φ^-1(v)| ∫_γ_u^δ f|,
Mf(v) = sup_u ∈Φ^-1(v)|∫_γ_u f|.
We call these s-parameter maximal functions associated to a m-dimensional family of cinematic curves.
We remark that the L^p mapping properties of these operators remain unchanged if we replace the integrand f by |f|, but for technical reasons (see Section <ref>) we adopt the formulation above. The Kakeya, Wolff, Bourgain, and Erdoğan maximal functions can be re-written in the above framework, with (m,s) equal to (2,1), (3,2), (3,1), (5,3), respectively. This is a straightforward computation, which is described in Appendix <ref>.
§.§ Kakeya-type maximal functions
Our main result is a sharp L^p→ L^p bound for the Kakeya-type maximal function M_δ.
Let m>s≥ 1 be integers, and let M_δ be an s-parameter maximal function associated to an m-dimensional family of cinematic curves. Let ε>0. Then for all δ>0 sufficiently small, we have
‖ M_δ f‖_p ≤δ^-ε‖ f‖_p, p≥ s+1.
Previous work in this setting has focused on the cases m=2,s=1 <cit.>; m=3,s=1 <cit.>; and m=3,s=2 <cit.>. The most interesting case is when s=m-1; the case s<m-1 can be reduced to s=m-1 by slicing. The stated range of p in (<ref>) is sharp. This can be seen by selecting 𝒞=ℝ^m, h(u;t)=(1, t, t^2,…,t^m-1)· u; Φ the projection to the first m-s coordinates; and f the characteristic function of the Knapp rectangle [0, δ^1/s]× [0,δ].
When s=m-1, the existence of measure-zero Besicovitch sets shows that for p<∞ the operator M_δ cannot in general be bounded in L^p with operator norm independent of δ . This can be seen by choosing 𝒞 and h as above; Φ(u_0,u_1,…,u_m-1) = u_1; and f the characteristic function of the δ-thickening of a measure-zero Besicovitch set. More generally, Besicovitch and Rado <cit.> describe a procedure for constructing a measure-zero set that contains a translated copy of every algebraic curve from a one-parameter family.
§.§ Bourgain-type maximal functions
In certain circumstances, Theorem <ref> can be used to obtain sharp L^p→ L^p bounds for the maximal function Mf from Definition <ref>.
For f:ℝ^2→ℝ, let P_kf denote the Littlewood-Paley projection to the frequency annulus of magnitude ∼ 2^k. We say that a sublinear operator M has high frequency decay if there exists p<∞ and C,c>0 so that
‖ M (P_k f)‖_p < C 2^-ck‖ f‖_p, f∈ L^p(ℝ^2).
Bourgain <cit.> (see also <cit.>) observed that if a maximal function M has high frequency decay, then the estimate (<ref>) can be interpolated with an estimate of the form (<ref>) to obtain L^p→ L^p operator norm bounds for M, for all p strictly larger than the range in (<ref>). Bourgain <cit.> followed this strategy (with slightly different notation) to obtain sharp L^p bounds for his circular maximal function, and Chen, Guo, and Yang <cit.> followed this strategy to obtain sharp L^p bounds for the axis-parallel elliptic maximal function (see also <cit.> for previous results on this operator).
These maximal functions are translation invariant, in the sense that for each point (x,y)∈^2, the operator is a maximal average over a fixed family of curves that have been translated to the point (x,y). We formalize this as follows:
Let M be an s-parameter maximal function associated to an (s+2)-dimensional family of cinematic curves. Let h:𝒞× I→ℝ and Φ:𝒞→ℝ^2 be the associated parameterization and projection functions. We say that M is translation invariant if in a neighborhood of each point of 𝒞× I, we can choose local coordinates u=(x,y,w_1,…,w_s) so that Φ has the form Φ(u) = (x,y) and h has the form h(u; t) = g(w_1,…,w_s; t-x) + y.
The Bourgain circular maximal function and the elliptic maximal function are translation invariant according to this definition.
Lee, Lee, and Oh <cit.> recently proved a sharp local smoothing estimate for the elliptic and axis-parallel elliptic maximal functions, and in doing so they showed that these maximal functions have high frequency decay. Shortly thereafter, Chen, Guo, and Yang <cit.> proved that every translation invariant maximal function (in the sense of Definition <ref>) has high frequency decay (their result uses slightly different notation and applies to a slightly modified form of the maximal function (<ref>); see Proposition <ref> and the surrounding discussion for a precise statement). The Lee-Lee-Oh and Chen-Guo-Yang result has the following consequence.
Let s≥ 1 be an integer and let M be an s-parameter translation invariant maximal function associated to a (s+2)-dimensional family of cinematic curves. Then
‖ M f‖_p ≤ C_p‖ f‖_p, p> s+1.
The stated range of p is sharp, as can be seen by modifying an example due to Schlag <cit.>; see Appendix <ref> for details. In particular, Theorem <ref> resolves Erdoğan's conjecture by showing that the elliptic maximal operator is bounded from L^p→ L^p in the sharp range p>4. Previously, Lee, Lee, and Oh <cit.> (in the elliptic and axis-parallel elliptic case) and Chen, Guo, and Yang <cit.> (in the general case) proved a variant of Theorem <ref> for p>s(s+1).
We conjecture that when m=s+2, every maximal function of the form (<ref>) has high frequency decay. This was proved by Sogge <cit.> when s=1. If true, such a result could be combined with Theorem <ref> and a slicing argument (see Section <ref>) to yield the analogue of Theorem <ref> for all m≥ s+2 and all s-parameter maximal functions associated to a m-dimensional family of cinematic curves.
It is natural to ask about analogues of Theorems <ref> and <ref> for curves in ^d, in the spirit of the helical maximal function and its generalizations <cit.>. This appears to be rather difficult at present, since our proof of Theorem <ref> uses Theorem <ref>, and the latter is at least as difficult as the Kakeya conjecture, which is open in dimension 3 and higher.
§.§ A L^p estimate for collections of plane curves
To prove Theorem <ref>, we begin by establishing (<ref>) when s=m-1. This will be a consequence of a slightly more general maximal function estimate associated to collections of thickened curves in the plane. The setting is as follows
We say that a set ℱ⊂ C^k(I) forbids k–th order tangency if there exists a constant c>0 so that for all f,g∈ℱ, we have
inf_t ∑_i=0^k |f^(i)(t)-g^(i)(t)| ≥ c ‖ f-g‖_C^k(I).
Examples.
* On a compact interval, linear functions forbid 1st order tangency. More generally, polynomials of degree ≤ k forbid k-th order tangency.
* A m-dimensional family 𝒞 of cinematic curves restricted to a sufficiently small compact set forbid (m-1)-st order tangency.
Recall that a set ℱ⊂ C^∞(I) is uniformly smooth if sup_f∈ F‖ f^(i)‖_∞<∞ for each i≥ 0. The functions in Example 2 are uniformly smooth. The functions in Example 1 are uniformly smooth if we restrict the coefficients to a bounded set. With this definition, we can now state the main technical result of the paper.
Let k≥ 1, let I be a compact interval, and let ℱ⊂ C^∞(I) be uniformly smooth and forbid k–th order tangency. Let ε>0. Then the following is true for all δ>0 sufficiently small. Let F⊂ℱ satisfy the non-concentration condition
#(F∩ B_r)≤ r/δ for all balls B_r⊂ C^k(I) of radius r.
Then
‖∑_f∈ Fχ_f^δ‖_(k+1)/k≤δ^-ε(δ#F)^k/(k+1),
where f^δ is the δ neighborhood of the graph of f.
The bound (<ref>) is a Kakeya-type estimate for families of curves that forbid k-th order tangency. The range of p is best-possible, and the existence of measure zero Besicovitch sets shows that the δ^-ε term (or at least some quantity that becomes unbounded as δ↘ 0) is also necessary.
We will prove a slightly more technical version of Theorem <ref>, where the ball condition (<ref>) is replaced by a Frostman-type condition, and the sets f^δ are replaced by subsets that satisfy a similar Frostman-type condition. This more technical version will be called Theorem <ref>'. Theorem <ref>' implies Theorem <ref> in the special case s=m-1. The result is also connected to questions in geometric measure theory. We discuss some of these connections below.
§.§ Applications to geometric measure theory
Restricted projections
In <cit.>, Käenmäki, Orponen, and Venieri discovered a connection between maximal function estimates for families of plane curves, and Marstrand-type results for projections in a restricted set of directions; the latter question was first investigated by Fässler and Orponen in <cit.>. Accordingly, Theorem <ref> is closely related to the following Kaufman-type estimate for the restricted projection problem. In what follows, “dim” refers to Hausdorff dimension.
Let γ:[0,1]→ℝ^n be smooth and satisfy the non-degeneracy condition
det(γ(t),γ'(t),…,γ^(n-1)(t))≠ 0, t∈ [0,1].
Let E⊂ℝ^n be Borel and let 0≤ s≤min(dim E, 1). Then
dim{ t∈ [0,1] : dim(γ(t)· E)< s }≤ s.
We will comment briefly on this history of this problem. In <cit.>, Fässler and Orponen introduced the non-degeneracy condition (<ref>), and they conjectured that if a smooth curve γ [0,1]→^3 satisfied (<ref>), then (γ(t)· E)=min(1, E) for a.e. t; they made partial progress towards this conjecture. In <cit.>, Käenmäki, Orponen, and Venieri used circle tangency bounds proved by Wolff to resolve this conjecture in the special case where γ(t) = (1, t, t^2). In <cit.>, Pramanik, Yang, and the author used a more general curve tangency bound (corresponding to k=2) to prove a mild generalization of Theorem <ref> when n=3; the result in <cit.> only requires that the curve γ be C^2. In <cit.>, Gan, Guth, and Maldague proved an estimate in a similar spirit to (<ref>) (sometimes referred to as a “Falconer-type” exceptional set estimate) using techniques related to decoupling. Finally, in <cit.>, Gan, Guo, and Wang proved a Falconer-type exceptional set estimate for general n, again using decoupling.
Furstenberg sets
As noted above, a consequence of Cordoba's Kakeya maximal function bound is that Besicovitch sets in the plane must have Hausdorff dimension 2. Similarly, Wolff's circular maximal function bound implies that Besicovitch-Rado-Kinney sets must have Hausdorff dimension 2. Theorem <ref> has a similar consequence; in fact a slightly stronger statement is true in the spirit of the Furstenberg set conjecture. We first define a Furstenberg set of curves.
Let α,β≥ 0 and let ℱ⊂ C^k(I). We say a set E⊂ℝ^2 is an (α,β) Furstenberg set of curves from ℱ if there is a set F⊂ℱ with dim(F)≥β (here “dim” refers to Hausdorff dimension in the metric space C^k(I)) so that dim(graph(f) ∩ E)≥α for each f ∈ F.
Let k≥ 1, let I be a compact interval, and let ℱ⊂ C^∞(I) be uniformly smooth and forbid k–th order tangency. Let 0≤β≤α≤ 1. Then every (α,β) Furstenberg set of curves from ℱ has Hausdorff dimension at least α+β.
We will comment briefly on this history of this problem. In <cit.>, Wolff defined a class of Besicovitch-type sets, inspired by the work of Furstenberg <cit.>, which he called Furstenberg sets. In brief, for 0≤α≤ 1, an α-Furstenberg set is a compact set E⊂^2 with the property that for each direction e∈ S^1, there is a line ℓ parallel to e with (E∩ℓ)≥α. Wolff proved that every set of this type must have dimension at least max{2α,α+1/2}, and he constructed examples of such sets that have dimension 3α/2 + 1/2. He conjectured that the latter bound is sharp. In <cit.>, Molter and Rela introduced the related notion of an (α,β)-Furstenberg set. In the plane, their definition coincides with Definition <ref>, where ℱ is the set of linear functions. See <cit.> and the references therein for an up-to-date survey of progress on problem, and <cit.> for variants in higher dimensions.
Recently, Fässler, Liu, and Orponen <cit.> considered the analogous problem where lines are replaced by circles; they formulated the analogous definition of a Furstenberg set of circles, and they proved that if 0≤α≤β≤ 1, then every (α,β) Furstenberg set of circles must have dimension at least α+β. Theorem <ref> generalizes the Fässler-Liu-Orponen result from circles to a larger class of curves. Theorem <ref> is clearly sharp in the stated range 0≤β≤α≤ 1. When α<β, it is not obvious what dimension bounds should hold for (α,β) Furstenberg sets of curves.
§.§ Curve tangencies, and tangency rectangles
The main input to Theorem <ref> is a new estimate in discretized incidence geometry that controls the number of approximate higher-order tangencies spanned by a collection of plane curves; this is Theorem <ref> below. Theorem <ref> requires several technical definitions. We will give an informal explanation of these definitions and then state an informal version of Theorem <ref>.
A (δ;k) tangency rectangle R is the δ-neighborhood of the graph of a function with C^k norm at most 1, above an interval I of length δ^1/k (we are abusing notation slightly, since the set R need not be rectangle in the usual geometric sense). If f is a function, we say that f is tangent to R (denoted f∼ R) if the graph of f, restricted to I, is contained in R. If F is a set of functions and μ≥ 1, we say a tangency rectangle is μ-rich with respect to F if it is tangent to at least μ functions f∈ F. We say two (δ;k) tangency rectangles R_1,R_2 are comparable if they are contained in a common (2^kδ;k) tangency rectangle. Otherwise they are incomparable (the factor 2^k simplifies certain parts of the proof, but any constant larger than 1 would suffice).
Observe that if two functions f_1,f_2 with C^k norm at most 1 are both tangent to a common (δ;k) tangency rectangle R above the interval [a, a+δ^1/k], then we have
|f_1(a+t)-f_2(a+t)|≲ t^k+δ.
We say that R is broad if for most pairs of functions f_1,f_2∈ F that are tangent to R, the inequality (<ref>) is almost tight, i.e. there is a matching lower bound |f_1(a+t)-f_2(a+t)|≳ t^k for all t. The precise definition of broadness involves additional quantifiers; see Definition <ref> for details. With these (informal) definitions, we can now state an informal version of Theorem <ref>
Let k,μ≥ 1 and let δ>0. Let F be a set of low degree polynomials, and let ℛ be a set of pairwise incomparable (δ;k) tangency rectangles, each of which are μ-rich and broad with respect to F. Provided δ>0 is sufficiently small, we have
#ℛ≤δ^-ε(#F/μ)^(k+1)/k.
Remarks.
* The requirement that the rectangles in ℛ are broad (or some analogous requirement) is necessary. Without this assumption, we could construct a counter-example to Theorem <ref> as follows. Let F be a set of functions with #F = μ, each of which is an infinitesimal perturbation of the same function f_0; and let ℛ be a set of δ^-1/k pairwise incomparable tangency rectangles arranged along the graph of f_0.
* When k=1, the bound (<ref>) follows from double-counting triples (f_1, f_2, R), where f_1,f_2 are functions whose graphs transversely intersect inside R. When k=2 and the graphs of the functions in F are (arcs of) circles, a bilinear variant of (<ref>) was proved by Wolff <cit.> using techniques from computational geometry originating from <cit.>. This was generalized by the author in <cit.> for more general curves (again with k=2). Recently, Pramanik, Yang, and the author <cit.> proved a variant of Theorem <ref> for k=2 that works for C^2 functions.
* The exponent k+1/k follows from the numerology inherent in the polynomial method. For k=2, there are at least three independent proofs of this same bound, using different techniques (see Item 2 above). However, it is not clear whether the exponent k+1/k in (<ref>) is sharp. For k=2 the current best construction comes from Szemerédi-Trotter and yields a lower bound with exponent 4/3.
§.§ Main ideas, and a sketch of the proof
In this section, we will sketch the proofs of Theorems <ref> and <ref>. We begin with Theorem <ref>. For simplicity during this proof sketch, we will suppose that μ has size close to 1 and I=[0,1]. When writing or describing inequalities, we will ignore constants that are independent of δ and #F. We will prove the result by induction on the cardinality of F.
The induction step proceeds as follows. For each curve f∈ F in F, we consider the (k-1)-st order “jet lift”
ζ_f = {(t, f(t), f'(t), …, f^(k-1)(t)) : t∈ [0,1]}⊂ℝ^k+1.
For each tangency rectangle R∈ℛ, we consider the corresponding “tangency prism,” R̂⊂ℝ^k+1 which is a (curvilinear) prism of dimensions roughly δ^1/k×δ^k/k×δ^(k-1)/k×…×δ^1/k. If f∈ F is tangent to a rectangle R∈ℛ, then ζ_f intersects R̂ in a curve of length roughly δ^1/k; if this happens then we say ζ_f is incident to R̂.
We have transformed the problem of estimating the number of robustly broad tangency rectangles in the plane into a problem about incidences between curves and tangency prisms in ℝ^k+1. To attack this latter problem, we use the Guth-Katz polynomial partitioning theorem. Let E be a large number, and let Q∈ℝ[t, x_0,…,x_k-1] be a polynomial of degree at most E, so that ℝ^k+1∖{Q=0} is a union of about E^k+1 “cells” (open connected regions), with the property that at most (#ℛ) E^-k-1 prisms are contained in each cell (if a prism intersects more than one cell, it is not counted here).
We handle the cellular case as follows. Using our induction hypothesis, we conclude that since a typical cell Ω intersects roughly (#F) E^-k curves, there are at most ((#F)E^-k)^k+1/k=(#F)^k+1/kE^-k-1 rectangles R∈ℛ with R̂⊂Ω. thus the total contribution from all of the cells is at most E^k+1· (#F)^k+1/kE^-k-1 = (#F)^k+1/k. With some care (and a slight weakening of exponents, which introduces the δ^- term in (<ref>)), the induction closes. It is this argument (and the associated numerology) that determines the shape of the bound (<ref>).
The ideas described above to handle the cellular case are not new; they were inspired by similar arguments in <cit.>. To handle the algebraic case, however, new ideas are needed. This is the main innovation in this paper. We now sketch the proof of the algebraic case. We begin with several simplifying assumptions. Simplifying Assumption (A): the surface {Q=0} can be written as a graph {x_k-1 = L(t, x_0,…, x_k-2)}, where L is 1-Lipschitz. As a consequence of Assumption (A), if a tangency prism R̂ intersects {Q=0}, then R̂ is contained in a thin neighborhood of the graph of L, i.e. R̂⊂ Z^*, where
Z^* = {(t, x_0,…, x_k-1)∈[0,1]^k+1 : |x_k-1 - L(t, x_0,…,x_k-2)|≤δ^1/k}.
Next we make Simplifying Assumption (B): each curve ζ_f is contained in Z^*. This means that f almost satisfies the ODE f^(k-1)(t) = L(t, f(t), f'(t), …, f^(k-2)(t)). More precisely, we have
|f^(k-1)(t) - L(t, f(t), f'(t), …, f^(k-2)(t))|≤δ^1/k, t∈ [0,1].
If ζ_f and ζ_g are both incident to a common prism R̂, then a straightforward calculus exercise shows that there must exist some t_0 for which the first k-1 derivatives of f and g almost agree, in the sense that
|f^(i)(t_0) - g^(i)(t_0)|≤δ^1/k, i=0,…,k-1.
(<ref>) (and its analogue for g) say that f and g almost satisfy the same ODE, and (<ref>) says that f and g almost have the same initial conditions, and hence f and g almost satisfy the same initial value problem. Since L is 1-Lipschitz, we can use a quantitative version of Gronwall's inequality to conclude that |f(t)-g(t)| is small for all t∈ [0,1]. We conclude that all of the curves tangent to a common rectangle R∈ℛ must remain close for all time t∈ [0,1]; but this contradicts the requirement that the rectangles in ℛ are broad. This implies ℛ must be empty. Thus we have established Theorem <ref>, except that we have not yet justified Simplifying Assumptions (A) and (B).
First, we will explain how to remove Simplifying Assumption (B); this is mostly a technical matter. While the curves ζ_f need not be contained in Z^*, each curve intersects Z^* in a small number of curve segments, and the curve-prism incidences occur within these segments. Thus we can find a typical length ℓ, so that most curve-prism incidences occur within segments that have length roughly ℓ. After partitioning space into rectangular prisms of the appropriate dimensions and re-scaling, we reduce to the case where ℓ=1.
Next, we will explain how to remove Simplifying Assumption (A); this issue is more serious. In general, we may suppose that each tangency prism is contained in the δ^1/k neighborhood of the variety {Q=0}. This is a semi-algebraic set, and after restricting to [-1,1]^k+1, this set has volume roughly δ^1/k. We prove a new structure theorem which says that any semi-algebraic set in [0,1]^k+1 with small (k+1)-dimensional volume can be decomposed into a union of pieces, each of which is the thin neighborhood of a Lipschitz graph (with controlled Lipschitz constant), plus a final piece whose projection to the first k coordinates has small k-dimensional volume. If the majority of prisms and curves are contained in one of the Lipschitz graph pieces, then (a slight weakening of) Simplifying Assumption (A) holds, and we can argue as above. If instead the majority of prisms and curves are contained in the final piece, then we project from ^k+1 to the first k–coordinates. The Tarski–Seidenberg theorem says that the image under this projection is a semi-algebraic subset of [0,1]^k, and thus we can apply the same decomposition again. After iterating this procedure at most k times, we arrive at a situation where Simplifying Assumption (A) holds, and we can apply the arguments described above.
From Tangency Rectangles to Maximal Functions
We now sketch the proof of Theorem <ref>. The proof is complicated by the fact that the collection of curves F can be arranged in many different ways. To begin, we will examine three specific arrangements that will give the reader a sense of the range of possibilities. For clarity when writing inequalities, we will ignore constants that are independent of δ, and we will sometimes omit terms of the form δ^-.
Arrangement 1. Suppose that for a typical pair of functions f,g∈ F for which f^δ∩ g^δ is non-empty, we have that the graphs of f and g intersect transversely. This means that |f^δ∩ g^δ| typically has size about δ^2, and thus we might expect
‖∑_f∈ Fχ_f^δ‖_2 ≤(∑_f,g∈ F|f^δ∩ g^δ|)^1/2≤δ (#F).
On the other hand, we have
‖∑_f∈ Fχ_f^δ‖_1 ≤δ(#F).
Interpolating (<ref>) and (<ref>), we obtain
‖∑_f∈ Fχ_f^δ‖_k+1/k≤δ(#F).
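To spell out the interpolation step (a routine application of Hölder's inequality): since k/(k+1) = (k-1)/(k+1)·1 + (2/(k+1))·(1/2), we have
‖∑_f∈ Fχ_f^δ‖_k+1/k≤‖∑_f∈ Fχ_f^δ‖_1^(k-1)/(k+1)‖∑_f∈ Fχ_f^δ‖_2^2/(k+1)≤(δ#F)^(k-1)/(k+1)(δ#F)^2/(k+1)=δ(#F).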
Note that this is stronger than (<ref>), since the ball condition (<ref>) implies that δ(#F)≲ 1.
Arrangement 2. Suppose that for a typical pair of functions f,g∈ F for which f^δ∩ g^δ is non-empty, we have that the graphs of f and g are tangent to order k-1. This means that |f^δ∩ g^δ| is a curvilinear rectangle of dimensions roughly δ×δ^1/k. In this situation, we can find a number 1≤μ≤#F and a set ℛ of μ-rich, broad (δ;k) rectangles, so that
‖∑_f∈ Fχ_f^δ‖_k+1/k^k+1/k≤∑_R∈ℛ∫_R (∑_f∈ F: f∼ Rχ_f^δ)^k+1/k.
By Theorem <ref>, #ℛ≤(#F/μ)^k+1/k, and the contribution from each R∈ℛ to the RHS of (<ref>) is at most (μδ)^k+1/k. Thus we again have the bound
‖∑_f∈ Fχ_f^δ‖_k+1/k≤((#F/μ)^k+1/k· (μδ)^k+1/k)^k/k+1= δ(#F).
Arrangement 3. Suppose that #F={f}. Then
‖∑_f∈ Fχ_f^δ‖_k+1/k = δ^k/k+1 = (δ#F)^k/k+1.
Note that our bounds (<ref>) and (<ref>) for Arrangements 1 and 2 are stronger than the corresponding estimate (<ref>) from Theorem <ref>. In this direction, we will first prove a variant of Theorem <ref>, where the non-concentration condition (<ref>) is replaced by a (local) two-ends type non-concentration condition on the set of curves passing through each point. This is Proposition <ref> below. Informally, the statement is as follows
Let k≥ 1 and let ,δ>0. Let F be a set of functions that come from a uniformly smooth family of curves. Suppose that for a typical point x∈^2, a typical pair of curves from F whose δ-neighborhoods contain x diverge at speed at least t^k in a neighborhood of x. Then
‖∑_f∈ Fχ_f^δ‖_k+1/k≤δ^1-(#F).
Note that if (<ref>) is established for some value of k, then the analogous result immediately follows for all larger k by interpolation with the trivial L^1 estimate (<ref>). This observation will play an important role in the proof. We prove Proposition <ref> by induction on k. In the inequalities that follow, we will ignore all constants independent of δ, and all factors of the form δ^-.
The base case k=1 is essentially the estimate (<ref>). For the induction step, we select the smallest ρ∈[δ,1] so that the intersection of a typical pair of curves is localized to a ρ×ρ^1/k curvilinear rectangle. This allows us to find a set ℛ of (ρ;k) rectangles, each of which have roughly the same richness μ, so that
∫(∑_f∈ Fχ_f^δ)^k+1/k≤∑_R∈ℛ∫_R (∑_f∈ F: f ∼ Rχ_f^δ)^k+1/k.
Furthermore, the rectangles in ℛ are broad, and hence by Theorem <ref> we have #ℛ≤(#F/μ)^k+1/k.
If ρ has size roughly δ, then we are in the situation of Arrangement 2 and we can immediately apply (<ref>). If instead ρ is substantially larger than δ (and hence δ/ρ is small), then our definition of ρ has the following consequence: If we re-scale a rectangle R∈ℛ to the unit square, then the images of the functions {f∈ F f ∼ R} under this re-scaling satisfy the hypothesis of (the informal version of) Proposition <ref>, with k-1 in place of k. Denote the image of f under this re-scaling by f̃, and let δ̃=δ/ρ. Then we have
‖ h‖_k+1/k^k+1/k≤‖ h‖_1^1/k‖ h‖_k/k-1≤(δ̃μ)^1/k( δ̃μ) =((δ/ρ)μ)^k+1/k, where h = ∑_f∈ F: f ∼ Rχ_f̃^δ̃.
In the above inequality, we used (<ref>) to obtain a L^1 estimate, and we used the induction hypothesis to obtain a L^k/k-1 estimate. Note that the re-scaling from R to the unit square distorts volumes by a factor of ρ^1+1/k, and thus (<ref>) says that for each R∈ℛ we have
∫_R (∑_f∈ F: f ∼ Rχ_f^δ)^k+1/k≤(δμ)^k+1/k.
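For the record, the volume-distortion step can be checked directly (a sketch, with constants suppressed): the re-scaling sending R to the unit square contracts the horizontal direction by ρ^1/k and the vertical direction by ρ, so its Jacobian is comparable to ρ^-1-1/k. Pulling back, and using that f^δ∩ R is sent to (roughly) f̃^δ̃, we get
∫_R (∑_f∈ F: f ∼ Rχ_f^δ)^k+1/k≈ρ^1+1/k∫(∑_f∈ F: f ∼ Rχ_f̃^δ̃)^k+1/k≤ρ^1+1/k((δ/ρ)μ)^k+1/k=(δμ)^k+1/k.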
Inserting the estimate (<ref>) into (<ref>) and using our bound on the size of ℛ, we obtain
∫(∑_f∈ Fχ_f^δ)^k+1/k≤∑_R∈ℛ(δμ)^k+1/k≤ (δ#F)^k+1/k,
which is (<ref>). This closes the induction. The details of this argument are discussed in Section <ref>.
Finally, we remark that Arrangements 1 and 2 satisfy the hypotheses of Proposition <ref>, and thus are amenable to the above argument. Arrangement 3 does not satisfy the hypotheses of Proposition <ref>, and indeed the conclusion of Proposition <ref> is false for Arrangement 3. The final step in the proof of Theorem <ref> is to reduce an arbitrary arrangement of curves that forbid k-th order tangency and satisfy the non-concentration condition (<ref>) to a collection of (re-scaled) non-interacting sub-arrangement, each of which satisfy the hypotheses of Proposition <ref>. This is a standard “two-ends rescaling” type argument.
§.§ Paper organization
In Sections <ref> and <ref>, we execute the proof sketch described in Section <ref> in order to prove Theorem <ref>. In Section <ref>, we will continue following the proof sketch to show how Theorem <ref> implies Proposition <ref>. The remaining Sections <ref>, <ref>, <ref>, and <ref> are devoted to the proofs of Theorems <ref>, <ref>, <ref>, and <ref> + <ref>, respectively.
§.§ Thanks
The author would like to thank Young-Heon Kim for helpful conversations and discussions about Gronwall's inequality, which helped shape Section <ref>. The author would like to thank Shaoming Guo for helpful conversations and discussions about local smoothing and its implications for maximal functions over curves, which helped shape Section <ref>. The author would like to thank Jonathan Hickman and Sanghyuk Lee for suggestions and corrections to an earlier version of this manuscript. The author was supported by a NSERC Discovery grant.
§.§ Notation
We use A≲ B or A = O(B) or B = Ω(A) to mean A≤ KB, where K is a quantity that may depend on the parameter k from the statement of Theorem <ref>. If K is allowed to depend on an additional parameter , then we denote this by A≲_ B or A = O_(B) or B=Ω_(A).
Unless otherwise specified, all functions will be assumed to have domain [0,1] and co-domain . We abbreviate C^k([0,1]) as C^k, and ‖ f ‖_C^k([0,1]) as ‖ f ‖_C^k.
§ CURVES AND TANGENCY RECTANGLES
In this section we will state the precise version of Theorem <ref> and begin its proof. We start with precise versions of the informal definitions from Section <ref>.
Let δ>0, k≥ 1, and T≥ 1. A (δ;k;T) tangency rectangle is the vertical δ neighborhood of a function with C^k norm at most 1, above an interval of length (Tδ)^1/k. When T=1, we abbreviate this to (δ;k) tangency rectangle, or (δ;k) rectangle.
If R is a (δ;k;T) tangency rectangle above an interval I, and f [0,1]→, we say f is tangent to R if the graph of f above I is contained in R. We denote this by f∼ R.
Next, we will describe what it means for two tangency rectangles to be distinct.
We say two (δ;k;T) rectangles are comparable if there is a (2^kδ;k;T) rectangle that contains them both. Otherwise they are incomparable.
The factor 2^k in the above definition was chosen to make the following true: if R_1,R_2 are incomparable (δ; k; T) rectangles above intervals I_1 and I_2 respectively, and if R_1 and R_2 are both tangent to a common function f with C^k norm at most 1, then I_1 and I_2 are disjoint.
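One way to see this (a sketch, ignoring boundary effects near the endpoints of [0,1]): if I_1∩ I_2≠∅ and f∼ R_1, f∼ R_2, then each R_j lies in the vertical 2δ neighborhood of f above I_j, and hence in the vertical 2^kδ neighborhood of f above the interval I_1∪ I_2, which has length at most 2(Tδ)^1/k=(T· 2^kδ)^1/k. Both rectangles are therefore contained in a common (2^kδ;k;T) rectangle built from f, i.e. they are comparable; equivalently, incomparable rectangles tangent to a common f have disjoint base intervals.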
If R is a (δ;k) rectangle and F is a set of functions from [0,1] to , we say that R is μ-rich and -robustly broad with error at most B if there is a set F(R)⊂{f∈ F f∼ R} with #F(R)≥μ that has the following property: For every ρ∈ [δ,1], every T∈ [1, ρ^-1], and every (ρ; k; T) rectangle R' containing R, we have
#{f∈ F(R) f ∼ R'}≤ B T^-#F(R).
During informal discussion, we will say that R is robustly broad if we do not wish to emphasize the role of μ, , or B.
When k=1, a (δ;1) rectangle R is robustly broad if many of the pairs f_1,f_2∈ F(R) have graphs that intersect transversely. If k>1 then all of the functions in F(R) will intersect (almost) tangentially, but if R is robustly broad then many pairs of functions will diverge outside of R at speed roughly t^k—this is the fastest possible speed of divergence that is allowed by the geometry of R and the constraint that the functions have C^k norm at most 1.
With these definitions, we can now precisely state our incidence bound.
Let k≥ 1 and >0. Then there exists η,δ_0>0 so that the following holds for all δ∈(0,δ_0].
Let F be a set of (univariate) polynomials of degree at most δ^-η, each of which has C^k-norm at most 1. Let μ≥ 1 and let ℛ be a set of pairwise incomparable (δ,k) rectangles that are μ-rich and -robustly broad with error at most δ^-η with respect to F. Then
#ℛ≤δ^-(# F/μ)^k+1/k.
§.§ Initial reductions
We will begin the proof of Theorem <ref>, following the outline discussed in Section <ref>. We first reduce Theorem <ref> to a version that is weaker in two respects. First, the hypotheses are strengthened: we only need to consider the case where μ has size roughly 1. Second, the conclusion is weakened: the exponent k+1/k is weakened to k+1/k+.
Let k≥ 1, >0. Then there exist (large) constants B=B(k) and C= C(k,) and a small constant η=η(k,)>0 so that the following holds. Let F be a set of polynomials of degree at most δ^-η, each of which has C^k norm at most 1. Let ℛ be a set of pairwise incomparable (δ,k) rectangles that are robustly broad with error at most δ^-η with respect to F. Then
#ℛ≤ C δ^-B (# F)^k+1/k+.
To reduce to the case where μ has size roughly 1, we will refine the set F by randomly keeping each element with probability roughly μ^-1. To ensure that the resulting refinement satisfies the hypotheses of Proposition <ref>, we will use the following special case of Chernoff’s inequality.
Let X_1,…,X_n be independent random variables taking value 1 with probability p and value 0 with probability 1-p. Let X denote their sum. Let A≥ 2. Then
ℙ(X ≤ pn/2 ) < e^-pn/8; ℙ(X ≥ Apn ) < e^-Apn/6.
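For completeness, one way to obtain these bounds is from the standard multiplicative forms of Chernoff's inequality, ℙ(X≤(1-θ)pn)≤ e^-θ^2pn/2 and ℙ(X≥(1+θ)pn)≤ e^-θ^2pn/(2+θ): taking θ=1/2 gives the first estimate, and taking θ=A-1≥ 1 gives
ℙ(X≥ Apn)≤ e^-(A-1)^2pn/(A+1)≤ e^-Apn/6, since 6(A-1)^2≥ A(A+1) for A≥ 2.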
We can now explain the reduction from Proposition <ref> to Theorem <ref>.
Suppose that Proposition <ref> is true. Let k≥ 1, >0, δ>0, μ≥ 1, F, and ℛ satisfy the hypotheses of Theorem <ref>. Our goal is to show that if η>0 and δ_0>0 are selected appropriately (depending on k and ), then (<ref>) holds.
First, we may suppose that μ≤#F, since otherwise ℛ=∅ and we are done. Second, we may suppose that # F≤δ^-k. If not, then (<ref>) follows from the observation that any set of pairwise incomparable (δ;k)-rectangles has cardinality O(δ^-k-1).
Step 1: Random sampling.
Let _1=_1()>0 be a small quantity to be chosen below. If μ≤δ^-2_1, define F'=F and proceed to the computation (<ref>) below. Otherwise, after dyadic pigeonholing the set ℛ and increasing μ if necessary, we can suppose that for each R∈ℛ, there is a set F(R)⊂{f∈ F f∼ R} that satisfies (<ref>), with μ≤#F(R) < 2μ.
Let p = (δ^2_1μ)^-1 (since μ>δ^-2_1, we have 0<p<1). Let F'⊂ F be obtained by randomly selecting each f∈ F with probability p. F' has expected cardinality p(#F)≥δ^-2_1.
Step 2: Robust broadness with respect to F'.
We claim that with probability at least 1/2, the following is true
* #F' ≤ 2p(#F)=2δ^-2_1μ^-1(#F).
* Each rectangle in ℛ is 1/4pμ-rich and _1-robustly broad with error at most O(δ^-η) with respect to F'.
The first item holds with probability at least 3/4 (in fact much higher probability!) by Theorem <ref>.
We will show that the second item also holds with probability at least 3/4. Fix R∈ℛ with an associated set F(R). By Theorem <ref>, we have
ℙ[#(F(R)∩ F')≤1/2p(#F(R)) ]≤ e^-p(#F(R))/8, ℙ[#(F(R)∩ F')≥ 2p(#F(R)) ]≤ e^-p(#F(R))/3,
and hence the probability that at least one of these events occurs is at most e^-δ^-_1. Suppose that neither of these events occur, and hence
pμ/4 ≤#(F(R)∩ F') ≤ 4pμ.
Let ρ∈[δ,1], let T∈[1,1/ρ], and let R'⊃ R be a (ρ; k; T) rectangle. We would like to show that with high probability,
#{f∈ (F(R)∩ F') f∼ R'}= O(δ^-η T^-_1μ p).
We will estimate the probability that (<ref>) fails. First, we will estimate the probability that
#{f∈ (F(R)∩ F') f∼ R'} > 2δ^-η T^-_1μ p.
Define n= #{f∈ F(R) f∼ R'}. By hypothesis, the rectangles in ℛ are μ rich and robustly broad with error at most δ^-η with respect to F, and hence n≤δ^-η T^-μ≤δ^-η T^-_1μ. Write 2δ^-ηT^-_1μ p = Apn, i.e. A=2δ^-ηT^-_1μ/n≥ 2. Applying Theorem <ref> with n, p, and A as above and using the fact that T≤ρ^-1≤δ^-1, we conclude that the probability that (<ref>) occurs is at most
e^-Apn/6= e^-2δ^-ηT^-_1(μ p)/6 ≤ e^-2δ^_1-η(δ^-2_1)/6 ≤ e^-δ^-_1.
Our goal is to show that with high probability, (<ref>) holds for all ρ∈ [δ,1]; all T∈ [1,1/ρ], and all (ρ;k;T) rectangles R'. We claim that it suffices to show that with high probability, (<ref>) fails when we consider rectangles with the following three properties: (i) ρ is of the form δ 2^j for j≥ 0 an integer; (ii) T is of the form 2^ℓ for ℓ≥ 0 an integer; (iii) R' is the vertical neighborhood of the graph of a function from F. Indeed, by the triangle inequality, if there is a rectangle R' for which (<ref>) fails with constant C_0, then there is a rectangle R” satisfying Properties (i), (ii), and (iii), for which (<ref>) fails with constant C_0/O(1). Conversely, if (<ref>) fails with high probability for every rectangle R'⊃ R satisfying Properties (i), (ii), and (iii), then (<ref>) holds with high probability for every rectangle R'⊃ R, provided the implicit constant has been chosen appropriately.
We have shown that (<ref>) fails with high probability for a particular rectangle R'. Since there are δ^-O(1) rectangles that satisfy Properties (i), (ii), and (iii), we use the union bound to conclude that the probability that (<ref>) holds for any rectangle satisfying Properties (i), (ii), and (iii) is at most e^-δ^-_1δ^-O(1), i.e. the probability that (<ref>) fails for a fixed rectangle R is at most e^-δ^-_1δ^-O(1). Since the rectangles in ℛ are incomparable, we have #ℛ=O(δ^-k-1), and hence the probability that (<ref>) fails for at least one rectangle in ℛ is at most e^-δ^-_1δ^-O(1). If δ_0 (and hence δ) is selected sufficiently small depending on k and _1 (recall that _1 in turn depends on k and ), then the probability that (<ref>) holds for every rectangle in ℛ is at least 3/4. This completes the proof of our claim.
Step 3: Applying Proposition <ref>.
Next, let _2=_2()<_1 be a quantity to be determined below, and let η_1 = η_1(k, _2) be the quantity from the statement of Proposition <ref> (with k as above and _2 in place of ). If η≤η_1/2 and δ_0 is sufficiently small, then the rectangles in ℛ are _2 robustly broad with error at most δ^-η_1 with respect to F'. Thus we can apply Proposition <ref> (with _2 in place of and η_1 in place of η) to conclude that
#ℛ ≤ C δ^-B _2 (#F')^_2 (# F')^k+1/k≤ C δ^-B _2δ^-k_2(δ^-2_1#F/μ)^k+1/k.
The result now follows by selecting _1</10; _2</(10(B_1+k)); and δ_0 sufficiently small.
§.§ Tangency Rectangles and Tangency prisms
We now turn to the proof of Proposition <ref>. We begin by analyzing the structure of tangency rectangles. Recall that a (δ;k) rectangle is the vertical δ-neighborhood of a function f with C^k norm at most 1, above an interval I of length δ^1/k. For notational convenience, We will write this as R^f(I) or R(I). The next says that the tangency rectangle R^f(I) is accurately modeled by the (k-1)-st order Taylor expansion of f.
Let R=R^f(I) be a (δ,k) tangency rectangle, with I = [a, a+δ^1/k]. Let g(t) = f(a)+ ∑_j=1^k-1f^(j)(a)/j!(t-a)^j be the (k-1)-st order Taylor expansion of f around a. Then R is contained in the vertical 2δ neighborhood of the graph of g above I.
This is a consequence of Taylor's theorem. We now define the “tangency prisms” introduced in Section <ref>.
A (δ;k) tangency prism is a set P of the form
{(t, y_0,…,y_k-1)∈^k+1 t∈ [a, a+δ^1/k],
| y_j - ∑_i=j^k-1(t-a)^i-j/(i-j)!b_i|≤ Kδ^1-j/k, j=0,…,k-1}.
In the above expression, a, b_0,…,b_k-1∈ [-1,1] are parameters that define the tangency prism, and K is a constant depending on k; the specific choice of K will be fixed in Lemma <ref> below. We call I = [a, a+δ^1/k] the interval associated to P.
Let P⊂^k+1 be a (δ;k) tangency prism with associated interval I⊂[0,1], and let h [0,1]→^k. We say h∼ R if the graph of h above I is contained in R.
Let f∈ C^k and let 0≤ j≤ k. We define the j-th order jet lift of f, denoted 𝒥_jf to be the function 𝒥_jf(t) = (f(t),f'(t),…,f^(j)(t)). When j=0, we have 𝒥_0f(t)=f(t).
Let R=R^f(I) be a (δ,k) tangency rectangle. We define the tangency prism R̂ to be a set of the form (<ref>), where a is the left endpoint of I, and b_i = f^(i)(a) for i=0,…,k-1.
If the quantity K=O(1) from Lemma <ref> is chosen appropriately (depending on k), then the following is true. Let R be a (δ,k) tangency rectangle, and let f be a function with C^k norm at most 1, with f∼ R. Then 𝒥_k-1f∼R̂.
Write R=R^g(I), with I = [a, a+δ^1/k]. Since f∼ R, we have |f(t)-g(t)|≤δ on I, and thus by Lemma <ref> there exists a constant K_1=K_1(k) so that |f^(j)(t)-g^(j)(t)|≤ K_1 δ^1-j/k for j=0,…,k-1. On the other hand, by Taylor's theorem, for each index j< k and each t∈ I, there exists t_1 between a and t so that
g^(j)(t) = ∑_i=j^k-1(t-a)^i-j/(i-j)! g^(i)(a) + (t-a)^k-j/(k-j)!g^(k)(t_1).
If we define b_i= g^(i)(a) for j=0,…,k-1, we conclude that for each j=0,…,k-1 and each t∈ I, we have
| f^(j)(t) - ∑_i=j^k-1(t-a)^i-j/(i-j)!b_i| ≤ |f^(j)(t)-g^(j)(t)| + | g^(j)(t) - ∑_i=j^k-1(t-a)^i-j/(i-j)! b_i|
≤ |f^(j)(t)-g^(j)(t)| + | (t-a)^k-j/(k-j)!g^(k)(t_1)|
≤ K_1 δ^1-j/k + δ^1-j/k,
where the final line used the assumption that ‖ g ‖_C^k≤ 1. Thus the lemma holds with K=K_1+1.
§.§ Tangency and re-scaling
In this section, we will explore how re-scaling a tangency rectangle R induces a re-scaling of functions tangent to R, and also induces a re-scaling of (smaller) tangency rectangles contained in R.
Let 0<δ<ρ≤ 1. Let R be a (ρ;k) tangency rectangle, and let S be a (δ;k) tangency rectangle. We say R covers S, denoted R≻ S or S≺ R, if Ŝ⊂R̂.
Let ρ>0 and let R=R^g(I) be a (ρ;k) rectangle; here ‖ g ‖_C^k≤ 1 and I=[a, a+ρ^1/k]. Let K=K(k)≥ 1 be the constant from Definition <ref>, and let c=1/(k+1)K. For x∈ I, define
ϕ^R(x,y) = ( ρ^-1/k(x-a), cρ^-1(y-g(x-a))).
For f [0,1]→, define f_R to be the function whose graph is ϕ^R(graphf|_I), and define
ψ^R(x,y_0,…,y_k-1) = ( ρ^-1/k(x-a), cρ^-1(y_0-g(x-a)), cρ^-1+1/k(y_1-g'(x-a)),
cρ^-1+2/k(y_2-g”(x-a)),…, cρ^-1/k(y_k-1-g^(k-1)(x-a))).
Let R be a (ρ;k) rectangle, let ‖ f‖_C^k≤ 1, and suppose f∼ R. Then ‖ f_R ‖_C^k≤ 1.
By the chain rule,
graph(𝒥_k-1(f_R)) = ψ^R(graph𝒥_k-1 f|_I).
As a consequence, if f∼ R and ‖ f ‖_C^k≤ 1, then by Lemma <ref> we have 𝒥_k-1f∼R̂, and hence the set (<ref>) is contained in ψ^R(R̂)⊂ [0,1]×[-(k+1)^-1,(k+1)^-1]^k. In particular, we have
sup_x∈[0,1] |f_R^(j)(x)|≤ (k+1)^-1, j=0,…,k-1.
If R = R^g(I), then we can also use the chain rule and the fact that ‖ f‖_C^k≤ 1 and ‖ g‖_C^k≤ 1, to compute sup_x∈[0,1] |f_R^(k)(x)|≤ (k+1)^-1. We conclude that ‖ f_R ‖_C^k≤ 1.
Motivated by the above computation, we introduce the following definition.
Let R be a (ρ;k) tangency rectangle. If S≺ R is a (δ; k) tangency rectangle, then ϕ^R(S) is the vertical c δ/ρ neighborhood of a function h (which has C^k norm at most 1) above an interval J of length (δ/ρ)^1/k. Define S_R to be the (δ/ρ; k) tangency rectangle given by the vertical δ/ρ neighborhood of h above J.
The next lemma says that our definitions of f_R and S_R preserve broadness.
Let R≻ S be tangency rectangles. Let F be a set of functions with C^k norm at most 1, all of which are tangent to R. Let F(S)⊂{f∈ F f∼ S} satisfy (<ref>). Then the functions {f_R f∈ F(S)} are tangent to S_R, and satisfy the analogue of (<ref>) with B replaced by O(B).
Suppose there exists a (τ;k;T)-rectangle R^h(J) ⊃ S_R that is tangent to M functions from {f_R f∈ F(S)}; denote this set of functions F_1. Our goal is to show that
M = O(B) T^-#F(S).
Fix a function g_R∈ F_1. By the triangle inequality, the graph of each f_R∈ F_1 above J is contained in the vertical 2τ neighborhood of g_R above J; denote this latter set by R_1 (note that R_1⊃ S_R ).
We have that (ϕ^R)^-1(R_1) is the vertical (2τ)(c^-1ρ) neighborhood of g (a function of C^k norm at most 1), above an interval of length (Tρτ)^1/k, and this set contains S. In summary, we have constructed a (2/cτρ; k; cT/2) tangency rectangle that is tangent to at least M functions from F(S). Comparing with (<ref>), we conclude that
M ≤ B (cT/2)^#F(S) ≤ (2B/c) T^#F(S).
Since c>0 depends only on k, this establishes (<ref>).
§.§ Proof of Proposition <ref> Part 1: Space curves, partitioning, and induction
We are now ready to begin the proof of Proposition <ref>. Our basic strategy is as follows. We lift each function f∈ F to its (k-1)-st order jet 𝒥_k-1f, and we lift each rectangle R∈ℛ to its corresponding tangency prism R̂. Proposition <ref> then becomes an incidence theorem between (polynomial) curves and prisms in ^k+1. Roughly speaking, the statement is as follows: given a set of n polynomial curves in ^k+1 that come from the jet lifts of plane curves, there can be at most n^k+1/k+ prisms that are (broadly) incident to these curves. We prove this statement by induction on n. For the induction step, we use the Guth-Katz polynomial partitioning theorem to divide ^k+1 into cells, most of which interact with only a small fraction of the (lifted) curves from F. The precise statement is a consequence of the following two theorems. The first is the celebrated Guth-Katz polynomial partitioning theorem <cit.>.
Let 𝒫⊂^d be a finite set of points. Then for each E≥ 1, there is a polynomial Q∈[x_1,…,x_d] so that ^d\{Q=0} is a union of O_d(E^d) open connected sets, and each of these sets intersects O_d(E^-d#𝒫) points from 𝒫.
The second is a variant of Bézout's theorem for real varieties. This is a special case of the main result from <cit.>.
Let ζ⊂^d be a one-dimensional real variety defined by polynomials of degree at most D. Let Q∈[x_1,…,x_d] be a polynomial of degree E≥ D. Then ζ intersects O_d(D^d-1E) connected components of ^d\{Q=0}.
We apply the induction hypothesis inside each cell, and sum the resulting contributions. The exponent k+1/k+ was chosen so that the induction closes. There is also a contribution from the boundary of the partition. This will be described in greater detail (and dealt with) later. We now turn to the details.
Fix k and . We will prove the result by induction on # F. The induction will close, provided B,C, and η have been chosen appropriately. When F=∅, there is nothing to prove.
Step 1. Polynomial partitioning.
Suppose that #F = n, and that the result has been proved for all sets of curves F' of cardinality less than n. To each (δ;k) tangency rectangle R^f(I)∈ℛ, associate the point p_R = (a, f(a), f'(a),…, f^(k-1)(a))∈^k+1, where a is the left endpoint of I. Observe that p_R∈R̂. It is easy to verify that distinct (and hence incomparable) rectangles in ℛ give rise to distinct (in fact ≳δ separated) points. Let 𝒫={p_R R∈ℛ}.
Let E≥ 1 be a number to be specified below. Use Theorem <ref> to select a polynomial Q∈[t, y_0,…,y_k-1] of degree at most E, so that ^k+1\{Q=0} is a union of O(E^k+1) open connected components, each of which contain O(E^-k-1#𝒫) points from 𝒫. Let 𝒪 denote the set of connected components.
Define Z=Z(Q), and define Z^* to be the union of all (δ;k) tangency prisms that intersect Z. We claim that for each R∈ℛ, at least one of the following must hold
* There is a cell Ω∈𝒪 so that R̂⊂Ω.
* R̂⊂ Z^*.
Indeed, if the second item does not hold then R̂ is disjoint from Z. Since R̂ is connected, we must have R̂⊂Ω for some Ω∈𝒪.
For each Ω∈𝒪, define
ℛ_Ω={R∈ℛR̂⊂Ω}.
We have #ℛ_Ω≤#(𝒫∩Ω) =O_k(E^-k-1#ℛ). If R∈ℛ_Ω and f∈ F with f∼ R, then graph(𝒥_k-1f)∩R̂≠∅, and hence graph(𝒥_k-1f)∩Ω≠∅.
Define ℛ_Z=ℛ\⋃_Ω∈𝒪ℛ_Ω. We say we are in the cellular case if #⋃_Ω∈𝒪ℛ_Ω≥1/2#ℛ. Otherwise we are in the algebraic case. We remark that if E^k+1 is substantially larger than #ℛ, then the bound O(E^-k-1#𝒫) from the application of Theorem <ref> might be smaller than 1, i.e. each cell contains fewer than one point from 𝒫. If this happens, then 𝒫⊂ Z(Q), and we are most certainly in the algebraic case.
Step 2. The cellular case.
Suppose we are in the cellular case. Then we may select a set 𝒪'⊂𝒪 so that ∑_Ω∈𝒪'#ℛ_Ω≥1/4#ℛ, and
#ℛ_Ω≥ c_1(k) E^-k-1#ℛ for each Ω∈𝒪',
where c_1(k)>0 is a quantity depending only on k. To simplify notation, write ζ_f for graph(𝒥_k-1f). Note that if f is a polynomial of degree D, then ζ_f is a one-dimensional real variety defined by polynomials of degree at most D.
By Proposition <ref>, since each polynomial in F has degree at most δ^-η, there are ≤ K_1(k) δ^-kη E# F pairs (Ω, f)∈𝒪'× F with ζ_f∩Ω≠∅ (here K_1(k) is a constant depending only on k). Thus there is a cell Ω∈𝒪' with
#{f∈ F ζ_f∩Ω≠∅}≤ K_1(k) δ^-kη E^-k#F.
Denote the above set by F_Ω. If we choose E sufficiently large (E ≥ K_1(k)δ^-η will suffice), then #F_Ω<n, and thus we may apply the induction hypothesis to conclude that
#ℛ_Ω≤ C δ^-B(# F_Ω)^k+1/k+.
Combining (<ref>), (<ref>), and (<ref>), we conclude that
#ℛ ≤(c_1(k)^-1 E^k+1)( C δ^-B(# F_Ω)^k+1/k+)
≤(c_1(k)^-1 E^k+1)(C δ^-B(K_1(k)δ^-kη E^-k#F)^k+1/k+)
≤(c_1(k)^-1 K_1(k)^k+1/k+δ^-(k+1)η+kη E^-k) ( C δ^-B (#F)^k+1/k+).
If we select E≳_δ^-3η/ sufficiently large and B=B(k), C=C(k,) appropriately, then
#ℛ≤ C δ^-B (#F)^k+1/k+,
and the induction closes. This completes the proof of Proposition <ref> when we are in the cellular case.
Step 3. The algebraic case.
Next we consider the algebraic case. Observe that the tangency prisms associated to rectangles in ℛ_Z are contained in a thin neighborhood of the variety Z. The following theorem of Wongkew <cit.> controls the volume of the thin neighborhood of a variety.
Let Z=Z(Q)⊂^d, where Q is a non-zero polynomial. Let B⊂^d be a ball of radius r. Then there exists a constant C(d) depending only on d so that for all ρ>0, we have
|B ∩ N_ρ(Z)|≤ C(d) (deg Q)^dρ^d-1 r.
The set in (<ref>) is described by a boolean combination of polynomial (in)equalities. Sets of this form are called semi-algebraic; we give a precise definition below.
A set S ⊂^d is called a semi-algebraic set of complexity at most M if there exists N≤ M; polynomials P_1,…,P_N, each of degree at most M; and a Boolean formula Φ{0,1}^N→{0,1} such that
S = {x∈^d Φ( P_1(x)≥ 0, …, P_N(x)≥ 0 )=1}.
The next result controls the number of tangencies that can be contained in a semi-algebraic set of small volume.
Let k≥ 1, >0. Then there exist positive numbers c=c(k), η=η(k,), and δ_0=δ_0(k,) so that the following holds for all δ∈(0,δ_0].
Let F be a set of polynomials of degree at most δ^-η, each of which has C^k norm at most 1. Let ℛ be a set of pairwise incomparable (δ;k) rectangles. For each R∈ℛ, let F(R)⊂{f∈ F f∼ R}. Define the dual relation ℛ(f) = {R∈ℛ f∈ F(R)}. Suppose that for each f∈ F, the rectangles in ℛ(f) satisfy the following “two-ends” type non-concentration condition: for each interval J⊂[0,1], we have
#{R=R(I)∈ℛ(f) I⊂ J}≤δ^-η |J|^#ℛ(f).
Let S⊂ [0,1]^k+1 be a semi-algebraic set of complexity at most δ^-η and volume |S|≤δ^. Suppose that R̂⊂ S for each R∈ℛ.
Then there exists R∈ℛ, τ∈ [δ,δ^c], and a (τ; k+1) rectangle R_1⊃ R with
#{f∈ F(R) f∼ R_1}≳#F(R).
We defer the proof of Proposition <ref> to the next section. Using Proposition <ref>, we will handle the algebraic case.
Step 3.1 A two-ends reduction.
Recall that for each R∈ℛ, there is a set F(R)⊂ F that satisfies the non-concentration condition (<ref>) from Definition <ref>. This necessarily implies that #F(R)≳δ^η-. After dyadic pigeonholing, we can find a set ℛ_1⊂ℛ with #ℛ_1≥|logδ|^-1#ℛ and a number μ so that μ≤#F(R)<2μ for each R∈ℛ_1. Define ℐ_1 = { (f,R) R∈ℛ_1, f∈ F(R)}. We have
μ#ℛ_1≤#ℐ_1 <2μ#ℛ_1.
For each f∈ F, the curve ζ_f intersects Z^* in a union of O( (δ^-ηE)^O(1))=O_(δ^-O(η/)) intervals. Let _1>0 be a small quantity to be determined below. For each f∈ F, apply a two-ends reduction (see <cit.> for an introduction to this topic) with exponent _1; this allows us to select an interval I_f⊂ [0,1] so that the restriction of ζ_f to the interval I_f is contained in Z^*, and we have the following re-scaled analogue of (<ref>) inside I_f: For each interval J⊂ I_f, we have
#{R=R(I) (f,R)∈ℐ_1, I⊂ J}≤ 2 (|J|/|I_f|)^_1#{R (f,R)∈ℐ_1}.
Define ℐ_2 to be those pairs (f;R)∈ℐ_1 where R=R(I) satisfies I⊂ I_f; we have #ℐ_2≥δ^_1#ℐ_1. After further dyadic pigeonholing, we can select a set F_3⊂ F, a multiplicity ν, and a length ℓ, so that the following conditions hold.
(a)
ℓ≤ |I_f| <2ℓ for each f∈ F_3.
(b) Each f∈ F_3 satisfies
ν≤#{R (f,R)∈ℐ_2}< 2ν.
Define ℐ_3=ℐ_2∩ (F_3×ℛ). We have the following bounds on the size of ℐ_3
|logδ|^-2δ^_1+O(η/)μ#ℛ_1≤#ℐ_3 <2μ#ℛ_1, and ν#F_3 ≤#ℐ_3 <2ν#F_3.
Note that (<ref>) continues to hold with ℐ_3 in place of ℐ_2.
Step 3.2 Graph refinement.
At this point, the functions f∈ F_3 satisfy a re-scaled analogue of (<ref>). Unfortunately, while all of the rectangles R∈ℛ satisfied the robust broadness condition (<ref>) with respect to ℐ_1, some of them might not satisfy this condition with respect to ℐ_3. We can fix this by applying the following graph refinement lemma from <cit.>.
Let G = (A⊔ B, E) be a bipartite graph. Then there is a sub-graph G'=(A'⊔ B', E') so that #E'≥#E/2; each vertex in A' has degree at least #E/4#A; and each vertex in B' has degree at least #E/4#B.
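The proof is a standard iterative pruning argument, which we sketch for the reader's convenience: repeatedly delete any vertex of A whose current degree is smaller than #E/(4#A), and any vertex of B whose current degree is smaller than #E/(4#B). Each deletion from A destroys fewer than #E/(4#A) edges, and there are at most #A such deletions, so the deletions from A destroy fewer than #E/4 edges; the same holds for B. Hence at least #E/2 edges survive, and every surviving vertex has the required degree.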
Applying Lemma <ref> to ℐ_3, we obtain sets F_4, ℛ_4 and ℐ_4, with the following properties
* #ℐ_4≥1/2#ℐ_3, and hence (<ref>) continues to hold, with the LHS weakened by a factor of 1/2.
* Each f∈ F_4 satisfies an analogue of (<ref>) with ℐ_4 in place of ℐ_2, except the LHS is weakened to ν/8.
* Each R∈ℛ_4 is incident (under the incidence relation ℐ_4) to at least (#ℐ_3)/(4#ℛ_1)≥ (2|logδ|)^-2δ^-_1μ functions f∈ F_4. Since each R∈ℛ_4 is incident to at most 2μ functions, we have
#ℛ_4 ≥#ℐ_4/(2μ)≳ |logδ|^-2δ^_1+O(η/)#ℛ_1 ≳ |logδ|^-3δ^_1+O(η/)#ℛ.
Step 3.3 Rescaling.
If ℓ≤δ^1/k-, then for each f∈ F_4 there are at most δ^- rectangles R∈ℛ with (f,R)∈ℐ_4 (see the comment after Definition <ref>). We conclude that
#ℛ≤#ℐ≤ |logδ|^-3δ^-_1#ℐ_4≤ |logδ|^-3δ^--_1#F,
and hence (<ref>) holds and we are done (provided we select _1≤, B≥ 3, and C sufficiently large).
Next, suppose that
ℓ≥δ^1/k-.
Our goal is to obtain a contradiction, and thereby finish the proof.
Let ρ=ℓ^k≥δ^1-k, and let 𝒮 be a maximal set of pairwise non-close (ρ;k) tangency rectangles. For each f∈ F_4, the restriction of f to the interval I_f is tangent to at least one, and at most O(1) of these rectangles (recall that I_f is an interval of length roughly ℓ, and each curvilinear rectangle in 𝒮 has dimensions ρ×ρ^1/k=ρ×ℓ). Each rectangle R∈ℛ_4 is covered (in the sense of Definition <ref>) by at least one, and at most O(1) of these rectangles. Furthermore, if (f,R)∈ℐ_4, then there is a (ρ;k) rectangle S so that the restriction of f to the interval I_f is tangent to S, and S≻ R.
This induces a decomposition of the incidence arrangement (ℐ_4, F_4, ℛ_4) into sub-arrangements, which we will denote by {(ℐ_S, F_S, ℛ_S)}_S∈𝒮, with the following properties:
* Each f∈ F_4 is contained in O(1) sets {F_S}_S∈𝒮. If f∈ F_S then the restriction of f to the interval I_f is tangent to S.
* Each R∈ℛ_4 is contained in O(1) sets {ℛ_S}_S∈𝒮. If R∈ℛ_S then R≺ S.
* ℐ_4=⋃_S∈𝒮ℐ_S.
Fix a tangency rectangle S∈𝒮 for which ℐ_S (and hence ℛ_S and F_S) is non-empty. After applying the re-scaling f↦ f_S and R↦ R_S from Definitions <ref> and <ref>, we have sets F̃_S and ℛ̃_S, and an incidence relation ℐ̃_S.
If we define F̃_S(R̃)={f̃∈F̃_S (f̃,R̃)∈ℐ̃_S }, then the sets F̃_S and ℛ̃_S, and the sets {F̃_S(R̃)} obey the two-ends non-concentration condition (<ref>) from Proposition <ref> at scale τ=δ/ρ, with _1 in place of and a number Ω(1) in place of δ^-η.
Before we can apply the proposition, however, we must show that the prisms {R̂̃̂R̃∈ℛ̃_S} are contained in a semi-algebraic set S of controlled complexity and small volume.
First, observe that every such prism R̂̃̂ is contained in ψ^R(S∩ Z^*) (recall that ψ^R is defined in Definition <ref>), which in turn is contained in the union of all (τ;k) tangency prisms that intersect ϕ^R(S∩ Z^*)⊂ ([0,1]×[-1,1]^k) ∩ψ^R(Z). This in turn is contained in the set
S=([0,1]×[-1,1]^k) ∩ N_τ^1/k(ψ^R(Z)).
ψ^R(Z) is an algebraic variety of degree at most deg Q≤ E, so by Theorem <ref> we have
|S| ≲ E^k+1(τ^1/k)^k=E^k+1(τ) ≲_δ^-O(η/)τ≤τ^1-O(η/^2),
where we used the bound ρ≥δ^1-k (and thus τ≤δ^k) to replace δ^-O(η/) with τ^-O(η/^2).
It is straightforward to show that S has complexity at most E^O(1)≲_δ^O(η/)≲τ^O(η/^2). We wish to apply Proposition <ref> with τ in place of δ, _1 in place of , and B = O(1). Let c=c(k)>0, η_1, and δ_0 be the corresponding quantities from Proposition <ref>. If η>0 is selected sufficiently small depending on η_1,k, and (recall that η_1 depends on k and _1, and _1 in turn depends on k and ), then the hypotheses of Proposition <ref> are satisfied. We conclude that there is a rectangle R̃∈ℛ̃_S; a scale τ_1∈ [τ,τ^c]; and a (τ_1;k+1) rectangle R̃_1⊃R̃ with
#{f̃∈F̃_S(R̃)f̃∼R̃_1}≳#F̃_S(R̃) ≳ |logδ|^-2δ^_1μ.
Undoing the re-scaling, we have a curvilinear rectangle of dimensions τ_1ρ×τ_1^1/(k+1)ρ^1/k=τ_1ρ×τ_1^-1/k(k+1)(τ_1ρ)^1/k; i.e. we have a (τ_1ρ; k; τ_1^-1/k(k+1)) tangency rectangle R_1⊃ R, with
#{ f∈ F_S( R) f∼ R_1}≳ |logδ|^-2δ^_1μ.
Finally, define ρ_1 =τ_1ρ and define T=τ_1^-1/k(k+1)≥τ^-c/k(k+1)≥δ^-c /k+1. Since the rectangles in ℛ are μ-rich and -robustly broad with error δ^-η, by (<ref>) we have
#{ f∈ F_S( R) f∼ R_1}≤δ^-ηT^#F(R) ≲δ^-η+c ^2/k+1μ.
Comparing (<ref>) and (<ref>), we obtain a contradiction provided we select _1 sufficiently small depending on and c (recall that c in turn depends on k), and provided δ>0 is sufficiently small.
This contradiction shows that (<ref>) cannot hold. This completes the proof of Proposition <ref>, except that we still need to prove Proposition <ref>. This will be done in the next section.
§ TANGENCIES INSIDE A SEMI-ALGEBRAIC SET OF SMALL VOLUME
In this section we will prove Proposition <ref>. We begin by establishing a decomposition theorem for semi-algebraic sets with small volume.
§.§ Covering semi-algebraic sets with thin neighborhoods of Lipschitz graphs
In this section, we will show that a semi-algebraic set W⊂[0,1]^n+1 with small volume can be covered by a small number of thin neighborhoods of Lipschitz graphs, plus a set that has small projection to [0,1]^n. The precise statement is as follows. Throughout this section, all implicit constants may depend on the dimension n. We write A = (B) to mean A≤ C B^C, where the constant C may depend on the ambient dimension n.
Let S⊂[0,1]^n+1 be a semi-algebraic set of complexity at most W. Let 0<u≤ 1 and L≥ 1. Then we can cover S by a collection of sets,
S ⊂⋃_i=0^N S_i,
with the following properties:
* N = (W).
* S_0 = T_0× [0,1], where T_0⊂[0,1]^n is semi-algebraic with complexity (W), and
|T_0|≤(W)(L^-1 + |S|/u).
* For each index i≥ 1, S_i is of the form
S_i = {(x, x_n+1)x∈ T_i, f_i(x)< x_n+1 < f_i(x)+u},
where T_i⊂[0,1]^n is semi-algebraic with complexity (W), and f_i [0,1]^n→ is L-Lipschitz.
One of the main tools we will use is the cylindrical algebraic decomposition. This is a technique from real algebraic geometry that was originally developed in the context of quantifier elimination. The cylindrical algebraic decomposition decomposes an arbitrary semi-algebraic set into simpler sets, which are called cells[these are not to be confused with the connected components of ^k+1\ Z(Q) from Section <ref>, which are also called cells.]. See Chapter 5 from <cit.> for an introduction to the topic. We will require a version of this result where both the number of cells and their complexity are controlled by the complexity of the input.
Let S⊂^n+1 be a semi-algebraic set of complexity at most W (see Definition <ref>). Then there exists a decomposition S = ⊔_i=0^N S_i with N=(W), where the sets S_i have the following properties.
* Each S_i is semi-algebraic of complexity (W).
* The projection of S_0 to the first n coordinates is a semi-algebraic set of measure 0 and complexity (W).
* For each i=1,…,N, the set S_i is of one of the following two forms:
S_i = {(x, x_n+1)x∈ T_i, f_i(x) < x_n+1 < g_i(x)},
or
S_i = {(x, x_n+1)x∈ T_i, x_n+1=f_i(x)}.
In the above, T_i⊂^n is a semi-algebraic set of complexity (W); f_i T_i→ is smooth; and there is a nonzero polynomial F_i^n+1→ of degree (W) so that
F_i(x, f_i(x))=0 and ∂_x_n+1F_i(x, f_i(x))≠ 0 for all x∈ T_i.
The function g_i T_i→ satisfies the analogous conditions.
We now begin the process of proving Proposition <ref>. To start, we will study structural properties of the cells arising from the cylindrical algebraic decomposition.
Let L>0 and let S⊂ [0,1]^n+1 be a set of the form
{(x, x_n+1)∈ [0,1]^n+1x∈ T, f(x)< x_n+1 < g(x)},
where
* T⊂[0,1]^n is semi-algebraic of complexity at most W.
* f T→[0,1] is differentiable.
* There is a nonzero polynomial F of degree at most W so that F(x, f(x))=0 and ∂_x_n+1F(x, f(x))≠ 0 for all x∈ T.
* |∇ f(x)|≥ L for all x∈ T.
Then
|T|≲ n^2 W^2/L.
Write T = ⋃_i=1^n T_i, where | d/d x_if|≥ L/n on T_i. Each set T_i has complexity at most 2W, since by the implicit function theorem we have
T_i = {xx∈ T, |∂_x_i F(x)/∂_x_n+1 F(x)|≥ L/n}={xx∈ T, (∂_x_i F(x))^2 ≥L^2/n^2(∂_x_n+1 F(x))^2}.
Fix an index i, and let ℓ⊂^n be a line pointing in the x_i direction, with |ℓ∩ T_i|≥ |T_i| (here we use the fact that T_i⊂[0,1]^n; the |·| on the LHS denotes one-dimensional Lebesgue measure, while the |·| on the RHS denotes n-dimensional Lebesgue measure). Since T_i has complexity at most 2W, ℓ∩ T_i contains at most 4W^2 connected components. Let ℓ'⊂ℓ∩ T_i be an interval of length ≥ |T_i|/(4W^2). But since |d/dx_if|≥ L/n on T_i, we have |f(a)-f(b)|≥ L|T_i|/(4nW^2), where a and b are the endpoints of ℓ'.
L|T_i|/4nW^2≤ |f(a)-f(b)|≤ 1.
Re-arranging we have |T_i|≤ 4nW^2/L. Summing over i we obtain (<ref>).
Let S⊂ [0,1]^n+1 be a set of the form
{(x, x_n+1)∈ [0,1]^n+1x∈ T, f(x)< x_n+1 < g(x)},
where
* T⊂[0,1]^n is semi-algebraic of complexity at most W.
* f T→[0,1] is smooth.
* there is a polynomial F of degree at most W so that F(x, f(x))=0 and ∂_x_n+1F(x, f(x))≠ 0 for all x∈ T.
Let L>0. Then we can write T = T'∪ T”, where
* T' and T” are semi-algebraic of complexity O_n(W).
* |T'|= O_n(W^2/L).
* f is differentiable on T”, and |∇ f|≤ L on T”.
Let
T' = {x∈ T∑_i=1^n (∂_x_i F /∂_x_n+1 F)^2≥ L^2 },
T” =T\ T'.
By construction, f is differentiable and satisfies |∇ f|≤ L on T”. By Lemma <ref> we have |T'|=O_n(W^2/L).
We are now ready to prove Proposition <ref>.
Apply Theorem <ref> to S, and let A_1,…,A_N be the corresponding cells. For each cell A_i of the form
A_i = {(x, x_n+1)x∈ B_i, f_i(x) < x_n+1 < g_i(x)},
apply Lemma <ref> to decompose B_i = B_i'∪ B_i”, with L as above; we have that |B_i'|≤ L^-1(W), and hence
| ⋃_i=1^N B_i' | ≤ L^-1(W).
For each index i, write B_i” = T_i⊔ C_i, where
T_i = {x∈ B_i” |g_i(x) - f_i(x)| ≤ u }.
We have u|C_i|≤ |A_i|≤ |S|, and hence
∑ |C_i|≤(W)|S|/u.
Thus if we define
T_0 = ⋃ C_i ∪ ⋃ B_i',
then T_0 satisfies (<ref>). Finally, we have
{(x, x_n+1)x∈ T_i, f_i(x)< x_n+1< g_i(x)}⊂{(x, x_n+1)x∈ T_i, f_i(x)< x_n+1< f_i(x)+u},
and hence (<ref>) holds. We have now proved Proposition <ref>, except that our functions f_i are only defined on T_i, rather than [0,1]^n. Since |∇ f_i|<L on T_i, each f_i is L-Lipschitz on T_i. By the Kirszbraun-Valentine Lipschitz extension theorem, we can extend each f_i to a L-Lipschitz function on [0,1]^n.
§.§ Jet lifts in thin neighborhoods of Lipschitz graphs
Our goal in this section is to prove the following result. In what follows, recall that the jet lift 𝒥_jf was given in Definition <ref>.
For each d≥ 0, >0, there are constants A = A(d) and B=B(d,) so that the following holds. Let D,W≥ 1 and let ρ>0. Let S⊂[0,1]^d+2 be a semi-algebraic set of complexity at most W and volume at most ρ. Then for each polynomial f of degree at most D, there is a “bad” set B_f⊂[0,1], which is a union of at most A(DW)^A intervals and has measure at most A (DW)^A ρ^1/B, so that the following holds.
Let f,g be polynomials of degree at most D. Suppose there is a point t_0∈ [0,1]\(B_f ∪ B_g) that satisfies
(t_0, 𝒥_df(t_0))∈ S, (t_0, 𝒥_dg(t_0))∈ S,
|𝒥_d f(t_0)-𝒥_d g(t_0)|≤ρ.
Then there is a number τ∈ [ρ, ρ^1/B] so that
|f(t)-g(t)|≤ A τ, t ∈ [t_0-τ^, t_0 + τ^].
Furthermore, the value of τ can be selected from a set X⊂[ρ, ρ^1/B] (the set X depends only on d, , and ρ) that has cardinality d+1.
Gronwall's inequality will play an important role in the proof of Proposition <ref>. We will use the following formulation. See e.g. <cit.> for a discussion and proof of this version.
Let I be an interval, let F,G I×^d→, let t_0∈ I, let x̃, ỹ∈^d, and let f,g I→ satisfy the initial value problems
f^(d)(t) = F(t, 𝒥_d-1f(t)), 𝒥_d-1f(t_0) = x̃,
g^(d)(t) = G(t, 𝒥_d-1g(t)), 𝒥_d-1g(t_0) = ỹ.
Suppose that for t fixed, F is L-Lipschitz in x, i.e.
|F(t, x)-F(t, x')|≤ L|x-x'|, t∈ I, x, x'∈^d.
Let ρ>0. Suppose that |x̃-ỹ|≤ρ, and
|F(t, x) - G(t, x)|≤ρ, t∈ I, x∈^d.
Then
|f(t)-g(t)|≲_d e^L|I|ρ, t∈ I.
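For orientation, here is a sketch of the d=1 case of this bound: set h(t)=|f(t)-g(t)|, so that h(t_0)=|x̃-ỹ|≤ρ, and h is Lipschitz with, at almost every t,
h'(t) ≤ |F(t,f(t))-F(t,g(t))| + |F(t,g(t))-G(t,g(t))| ≤ L h(t)+ρ.
The differential form of Gronwall's inequality then gives h(t)≤ e^L|t-t_0|ρ + ρ(e^L|t-t_0|-1)/L ≤ (1+|I|)e^L|I|ρ, which is the stated bound since the intervals we consider satisfy |I|≤ 1.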
The following result is a variant of Theorem <ref>. Instead of requiring that f and g satisfy “nearby” initial value problems, in the sense of (<ref>), we require that f satisfies the initial value problem f^(d)=F(t,𝒥_d-1f), and g almost satisfies this same initial value problem, in the sense that |g^(d)-F(t,𝒥_d-1g)| is small. The precise statement is as follows.
* Let d≥ 1, let I be an interval, and let g∈ C^d(I).
* Let F I×^d → be L-Lipschitz, let ρ>0, and suppose that
|g^(d)(t) - F(t, 𝒥_d-1g(t))| ≤ρ, t∈ I.
* Let t_0∈ I, let ỹ=𝒥_d-1g(t_0), and let x̃∈^d, with |x̃-ỹ |≤ρ.
* Let f I→ be a solution to the initial value problem
f^(d)(t) = F(t, 𝒥_d-1f(t)), 𝒥_d-1f(t_0)=x̃.
Then
|g(t)-f(t)|≲_d e^L|I|ρ, t∈ I.
Define
G(t, x) = F(t, x)+e(t),
e(t) = g^(d)(t) - F(t, 𝒥_d-1g(t)).
The quantity e(t) is intended to measure the error between the initial value problems F and G. Inequality (<ref>) says that |e(t)|≤ρ for t∈ I, and thus
|G(t, x)-F(t, x)|≤ρ, t∈ I, x∈^d.
But g I→ is the solution to the initial value problem
g^(d)(t) = G(t, 𝒥_d-1g(t)), 𝒥_d-1g(t_0)=ỹ.
Thus by Theorem <ref>, we have
|f(t)-g(t)|≲_d e^L|I|ρ for all t∈ I.
* Let d≥ 0, L≥ 1, let I be an interval of length at most L^-1, and let f,g ∈ C^d(I).
* Let F [0,1]×^d → be L-Lipschitz, let ρ>0, and suppose that
|f^(d)(t) - F(t, 𝒥_d-1f(t))| ≤ρ, t∈ I,
|g^(d)(t) - F(t, 𝒥_d-1g(t))| ≤ρ, t∈ I.
* Suppose there is t_0∈ I so that
|𝒥_d f(t_0) - 𝒥_d g(t_0)|≤ρ.
Then
|f(t)-g(t)|≲_d ρ, t∈ I.
If d=0 then (<ref>) follows from (<ref>) and the triangle inequality.
Suppose instead that d≥ 1. Define
z = 1/2[ 𝒥_d-1f(t_0) + 𝒥_d-1g(t_0) ].
By (<ref>), we have
|z - 𝒥_d-1f(t_0)|≤ρ/2.
We now apply Lemma <ref>. Let h I → be the solution to the initial value problem
h^(d)(t) = F(t, 𝒥_d-1 h(t)), 𝒥_d-1 h( t_0)=z.
By Lemma <ref>, we have |h(t) - f(t)|≲_dρ for t∈ I. But note that the construction of h is symmetric in the functions f and g, and thus we also have |h(t) - g(t)|≲_d ρ for t∈ I. The conclusion (<ref>) now follows from the triangle inequality.
With these tools, we are now ready to prove the main result in this section.
Without loss of generality, we may suppose that ε≤ 1; otherwise we can replace ε by 1 and the conclusion remains valid. Let L_0=ρ^-ε/2, and for each i=1,…,d, define L_i = L_i-1^ε/2. For each index i, define ρ_i = L_i^-1/ε. We will select the quantity B(d,ε) sufficiently large so that ρ_d≤ρ^1/B. With B selected in this way, we have
ρ_i∈ [ρ, ρ^1/B], 0≤ i ≤ d.
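For the record, here is a quick check of this numerology, writing ε for the parameter fixed above and assuming (as we may) that ρ≤ 1: unwinding the recursion gives L_i = ρ^-(ε/2)^i+1, so that ρ_i = L_i^-1/ε = ρ^e_i with e_i = ε^i/2^i+1∈ (0,1/2]. Hence ρ_i≥ρ, and ρ_i≤ρ^1/B as soon as B≥ 1/e_i = 2^i+1ε^-i; in particular any choice of B=B(d,ε)≥ (2/ε)^d+1 is admissible for every 0≤ i≤ d.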
We will define an iterative decomposition of S, which will take d steps; “L_i” will be the allowable Lipschitz constant, and ρ_i will be the allowable thickness at stage i of the construction.
For the first step, apply Proposition <ref> to S with L_0 in place of L, and u=ρ_0; we obtain sets S^0_1,…, S^0_N_0; and a set S^0_0; each set S^0_i is contained in the ρ_0 neighborhood of an L_0-Lipschitz graph, and S^0_0 ⊂ T^0_0×[0,1], with |T^0_0|≲ L_0^-1+|S|ρ_0^-1. Since |S|≤ρ and ε≤ 1, our choice of L_0 and ρ_0 ensures that |S|ρ_0^-1≤ L_0^-1, and hence |T^0_0|≲ L_0^-1.
For the i-th step of our decomposition, we apply Proposition <ref> to T^i-1_0 with L_i in place of L, and u=ρ_i; we obtain sets S^i_1,…, S^i_N_i; and a set S^i_0; each set S^i_j is contained in the ρ_i neighborhood of an L_i-Lipschitz graph, and S^i_0 ⊂ T^i_0×[0,1], with
|T^i_0| ≲ L_i^-1+|T^i-1_0| ρ_i^-1≲ L_i^-1+L_i-1^-1ρ_i^-1 =L_i^-1+L_i^-2/εL_i^1/ε≤ 2 L_i^-1,
where the first inequality is the conclusion of Proposition <ref>; the second inequality used (<ref>) with i-1 in place of i; and the third equality used the definition of L_i-1. Note that the (iterated) use of implicit constants in these inequalities is harmless, since the iteration only occurs d times.
After this process is complete, we have a covering of S of the form
S ⊂⋃_i=0^d⋃_j=1^N_i( S^i_j × [0,1]^i),
where S^i_j⊂ [0,1]^d+2-i is the vertical ρ_i neighborhood of a L_i-Lipschitz graph (we denote the associated Lipschitz function by G^i_j) above a set T^i_j⊂[0,1]^d+1-i.
We would like to claim that if f and g are two polynomials that satisfy (<ref>) at some point t_0∈[0,1], then the corresponding points (t_0, 𝒥_d f(t_0)) and (t_0, 𝒥_d g(t_0)) must be contained in a common set of the form S^i_j × [0,1]^i from the decomposition (<ref>). Unfortunately this need not be true, since even though (<ref>) guarantees that the points 𝒥_d f(t_0) and 𝒥_d g(t_0) are nearby, they might nonetheless be contained in different sets from (<ref>).
To handle this annoyance, we will expand each set S^i_j slightly: define (S^i_j)^* to be the vertical ρ_i+ρ neighborhood of the Lipschitz graph G^i_j above the ρ-neighborhood of T^i_j. Then, if f and g satisfy (<ref>) at t_0, and if (t_0, 𝒥_d f(t_0))∈ S^i_j × [0,1]^i, then (t_0, 𝒥_d g(t_0))∈ (S^i_j)^* × [0,1]^i. We will see below why this is useful.
Our next task is to define the “bad” set B_f from the statement of Proposition <ref>. Let f be a polynomial of degree ≤ D. Let
J_f={t∈ [0,1](t, 𝒥_d f(t))∈ S}.
J_f is semi-algebraic of complexity (DW)^O(1), and hence is a union of O(DW)^O(1) intervals. We will further sub-divide these intervals into a collection of intervals ℐ_f with the following properties. First, J_f = ⋃_J∈ℐ_fJ. Second, for each J∈ℐ_f and each set of the form X = S^i_j × [0,1]^i from the decomposition (<ref>), we either have (t, 𝒥_d f(t))∈ X for all t∈ J, or (t, 𝒥_d f(t))∉X for all t∈ J. Third, the same holds true for each set of the form X = (S^i_j)^* × [0,1]^i, where (S^i_j)^* is the expansion of the set S^i_j described in the previous paragraph.
The set ℐ_f has cardinality O(DW)^O(1). For each closed interval J=[a,b]∈ℐ_f, if |J|≤ L_d^-1 then define Ends(J)=J. If |J|> L_d^-1, then define Ends(J)=[a, a+L_d^-1] ∪ [b-L_d^-1, b]. Define Ends(J) analogously for intervals of the form (a, b], [a, b) and (a,b).
Define
B_f=⋃_J∈𝒥_fEnds(J).
If we define the quantity A=A(d) appropriately, then B_f is a union of at most A(DW)^A intervals, and has measure at most A(DW)^AL_d^-1≤ A(DW)^Aρ^1/B.
Our final task is to show that B_f satisfies the conclusion of Proposition <ref>. Let f,g be polynomials of degree at most D, and suppose there is t_0∈ [0,1]\(B_f ∪ B_g) that satisfies (<ref>). By (<ref>), there is an index i and a set S^i_j×[0,1]^i that contains (t_0, 𝒥_d f(t_0)). But (<ref>) implies that the expanded set (S^i_j)^*×[0,1]^i must contain both (t_0, 𝒥_df(t_0)) and (t_0, 𝒥_dg(t_0)). Furthermore, since t_0∉B_f∪ B_g, if we define I = [t_0 - L_i^-1, t_0+L_i^-1]⊂ [t_0 - L_d^-1, t_0+L_d^-1], then
(t, 𝒥_df(t))∈ (S^i_j)^*×[0,1]^i and (t, 𝒥_dg(t))∈ (S^i_j)^*×[0,1]^i, t∈ I.
Since (S^i_j)^* is contained in the vertical ρ_i+ρ≤ 2ρ_i (recall (<ref>)) neighborhood of the L_i-Lipschitz function G^i_j, (<ref>) implies that
|f^(d-i)(t) - G(t, 𝒥_d-i-1f(t))|≤ 2ρ_i, t∈ I,
|g^(d-i)(t) - G(t, 𝒥_d-i-1g(t))|≤ 2ρ_i, t∈ I.
Apply Lemma <ref> with 2ρ_i in place of ρ, and L_i in place of L; the function G from (<ref>) has Lipschitz constant at most L_i, and the interval I has length L_i^-1. The conclusion (<ref>) of Lemma <ref> says that
|f(t)-g(t)|≲_d ρ_i, t∈ I.
This is exactly conclusion (<ref>), provided we select A = A(d) sufficiently large.
§.§ Proof of Proposition <ref>
Let F=⋃_R∈ℛF(R). Apply Proposition <ref> to S, with d = k-1; =1/(k+1); and ρ=max(δ^, C_0 δ^1/k). If we choose the constant C_0=C_0(k) appropriately, then by Lemma <ref> we can ensure that whenever f,g∈ F are tangent to a common rectangle R=R(I) from ℛ, there is a point t_0∈ I with |𝒥_k-1f(t_0) - 𝒥_k-1g(t_0)| ≤ρ.
For each f∈ F, we obtain a bad set B_f, which is a union of at most A(DW)^A intervals and has measure at most A (DW)^A ρ^1/B≤ A (DW)^A δ^/B. Note that the quantities A and B only depend on k (B depends on the quantity ε from the statement of Proposition <ref>, but we have selected ε = 1/(k+1)). Thus if η=η(ε,k) is selected sufficiently small, then by (<ref>) we have
#{R=R(I)∈ℛ(f) I ∩ B_f≠∅}≤1/2#ℛ(f).
Thus by pigeonholing, there exists a rectangle R=R(I)∈ℛ so that
#{f∈ F(R) I ∩ B_f = ∅}≥1/2#F(R).
Fix such a rectangle R, and let F'(R) denote the set on the LHS of (<ref>). For each f,g∈ F'(R), there is a point t_0∈ I and a scale τ=τ(f,g)∈ X (recall that X⊂[ρ, ρ^1/B] has cardinality at most d+1=k) so that
|f(t)-g(t)|≤ Aτ, t∈ [t_0-τ^1/(k+1), t_0+τ^1/(k+1)]⊂ I_τ,
where I_τ is the interval of length τ^1/(k+1) with the same midpoint as I.
By pigeonholing, we can select a choice of τ∈ X; a choice of f∈ F'(R); and a set F”(R)⊂ F'(R) with #F”(R)≥ k^-1(#F'(R)), so that (<ref>) holds for this choice of τ, for all g∈ F”(R).
To finish the proof, define R_1=R^f(I_τ); then R_1⊃ R is a (τ; k+1) rectangle; g∼ R_1 for all g∈ F”(R); and #F”(R)≳#F(R).
§ FROM RECTANGLE TANGENCIES TO MAXIMAL FUNCTIONS
In this section we will use Theorem <ref> to prove Proposition <ref>. Our proof will follow the outline sketched in Section <ref>. The next result helps us find the scale “ρ” discussed in the proof sketch (in what follows, this quantity will be called δ'). Our proofs will involve repeated use of dyadic pigeonholing, which will induce refinements by factors of (log 1/δ)^O(1) (recall that O(1) denotes a quantity that may depend on k). To simplify notation, we will write A⪅ B if A≲ (log 1/δ)^O(1)B.
Let k≥ 2 and let ,η>0. Then there exists δ_0>0 such that the following is true for all δ∈ (0,δ_0]. Let F be a set of functions, each of which has C^k norm at most 1. For each f∈ F, let Y(f)⊂ f^δ be a shading of f. Suppose that for every x∈[0,1]^2, every ρ≥δ, every T∈ [1, 1/ρ] and every (ρ; k; T) rectangle R containing x, we have
#{f∈ F x∈ Y(f), f∼ R}≤δ^-ηT^-#{f∈ F x∈ Y(f)}.
Then there exists a sub-shading Y'(f)⊂ Y(f) for each f∈ F; a scale δ'∈ [δ,1]; a set ℛ of pairwise incomparable (δ'; k) rectangles; a number μ; and for each R∈ℛ, a set F(R)⊂{f∈ F f∼ R} of size μ, such that the following holds
(A)
∫_[0,1]^2(∑_f∈ Fχ_Y(f))^k+1/k≤δ^-O(η)∑_R∈ℛ∫_R(∑_f∈ F(R)χ_Y'(f))^k+1/k.
(B)
Either #ℛ=1, or for every R∈ℛ, every ρ∈[δ',1], every T∈ [1,1/ρ], and every (ρ; k; T) rectangle R'⊃ R, we have
#{f∈ F(R) f∼ R'}≤ (δ')^-2ηT^-μ.
(C)
For every R=R(I)∈ℛ, let F̃(R)={f_R f∈ F(R)} and let Ỹ'(f̃)=ϕ^R(Y'(f)∩ R) (recall Definition <ref>). Then for every point x, every ρ≥δ/δ', every T∈[1,1/ρ], and every (ρ; k-1; T) rectangle R' containing x, we have
#{f̃∈F̃(R) x∈Ỹ'(f̃), f̃∼ R'}⪅_ T^-η/2#{f̃∈F̃(R) x∈Ỹ'(f̃)}.
In brief, Item (A) says that the L^k+1/k norm of ∑χ_Y(f) can be broken into pieces localized to the rectangles in ℛ. Item (B) says that the rectangles in ℛ are μ-rich and -robustly broad with error at most (δ')^-2η. Item (C) says that for each R∈ℛ, the functions in F(R) satisfy a (re-scaled) version of the hypotheses (<ref>) from Proposition <ref>, except ε has been replaced by η/2, and δ^-η has been replaced by O_ε((log 1/δ)^O(1)).
Step 1: A two-ends reduction. Let A=O(1) be a constant to be specified below (the reason for introducing the constant A will be explained at the beginning of Step 2). For each x∈[0,1]^2, let t(x) be the infimum of all numbers t≥δ such that there exists a (t;k; A) rectangle R containing x with
#{f x∈ Y(f), f∼ R}≥ t^η/2#{f x∈ Y(f)}.
This set of numbers is non-empty, since it contains t=1, and it is bounded below by δ. For each x∈[0,1]^2 there exists a (t(x); k; A) rectangle R containing x that satisfies a variant of (<ref>) where the RHS has been weakened by a factor of 2. Denote this rectangle by R(x).
After dyadic pigeonholing, we can select a number t∈[δ,1], an integer ν≥ 1, and a sub-shading Y_1(f)⊂ Y(f) for each f∈ F such that the following holds
* For each x∈⋃_f Y_1(f), we have ν≤#{f x∈ Y_1(f)}<2ν; #{f x∈ Y(f)}≤δ^-η/2ν; and t≤ t(x)<2t.
* If f∈ F and x∈ Y_1(f), then f∼ R(x).
*
∫_[0,1]^2(∑_f∈ Fχ_Y(f))^k+1/k⪅δ^-η/2(k+1/k)∫_[0,1]^2(∑_f∈ Fχ_Y_1(f))^k+1/k.
* For each x∈ [0,1]^2, each ρ∈ [δ,t], and each (ρ;k) rectangle R containing x, we have
#{f∈ F x∈ Y_1(f), f∼ R}≤ 2(ρ/t)^η/2#{f∈ F x∈ Y_1(f)}.
By (<ref>), for every x∈⋃_f Y_1(f), every ρ≥δ, every T∈ [1, 1/ρ], and every (ρ; k; T) rectangle R containing x, we have
#{f∈ F x∈ Y_1(f), f∼ R} ≤#{f∈ F x∈ Y(f), f∼ R}
≤δ^-η/2T^-#{f∈ F x∈ Y(f)}≤ 2δ^-3/2ηT^-ν.
Step 2: Clustering into rectangles.
Our goal in this step is to find a set of rectangles ℛ_0 so that Item (A) is satisfied. The idea is as follows: We choose δ'∼ t and select a (maximal) set of pairwise incomparable (δ';k) rectangles ℛ_0. For each point x∈^2, the rectangle R(x) from Step 1 will be comparable to some rectangle R∈ℛ_0. If x∈ Y_1(f), then f∼ R(x) and thus (one might hope!) f∼ R. Thus we would have the pointwise inequality
(∑_f∈ Fχ_Y_1(f)(x))^p≲(∑_f∈ F: f∼ Rχ_Y_1(f)(x))^p,
and (<ref>) would follow. The only problem with the above argument is that if f∼ R(x) and R(x)∼ R, it is almost true that f∼ R, but not quite—we only have that f is tangent to a slight thickening of R. It is to deal with this technical annoyance that we introduced the number A = O(1) above.
Let δ'=(At)^1/k, i.e. a (δ';k) rectangle can be thought of as a (t;k;A) rectangle that has been thickened by a (multiplicative) factor of A^1/k in the vertical direction. If the constant A = O(1) is selected appropriately, then we can find a set ℛ_0 of (δ';k) rectangles with the following properties: (i) for each R∈ℛ_0, at most O(1) rectangles from ℛ_0 are comparable to R. (ii) Let x∈^2, and let R(x) be a (t;k;A) rectangle from Step 1. Write R(x) = R^g(I), and let R'(x)⊃ R(x) be the (δ';k) rectangle obtained by taking the vertical δ' neighborhood of g above I (i.e. R'(x) is the rectangle obtained by thickening R(x) in the vertical direction). Let f∈ F with x∈ Y_1(f), and hence f∼ R(x). Then there exists a rectangle R_0∈ℛ_0 that is comparable to R'(x), and satisfies f∼ R_0.
Item (i) implies that for each f∈ F, the sets {f^δ∩ R R∈ℛ_0, f∼ R} are pairwise O(1)-overlapping. Item (ii) says that (<ref>) holds if we replace the term on the RHS by a sum over the O(1) rectangles in ℛ_0 that are comparable to R'(x). Thus we have
∫_[0,1]^2(∑_f∈ Fχ_Y_1(f))^k+1/k≲∑_R∈ℛ_0∫_R ( ∑_f∈ F
f ∼ Rχ_Y'(f))^k+1/k∼∑_R∈ℛ_0ν^k+1/k|R ∩⋃_f∈ F
f ∼ R Y_1(f) |.
After refining ℛ_0 by a O(1) factor, we can ensure that for each f∈ F, the sets {f^δ∩ R R∈ℛ_0, f∼ R} are disjoint, and (<ref>) remains true (with a new implicit constant) for this refined collection ℛ_0.
After dyadic pigeonholing, we can select numbers λ>0 and μ≥ 1, and a set ℛ_1⊂ℛ_0 so that if we define
Y_2(f) = Y_1(f)∩⋃_R∈ℛ_0
λ≤ |R∩ Y_1(f)|<2λR,
then the following three items are true.
* For each R∈ℛ_1 we have
∫_R ∑_f∈ F
f∼ Rχ_Y_1(f)⪅∫_R ∑_f∈ F
f∼ Rχ_Y_2(f),
and since
ν≤∑_f∈ F
f∼ Rχ_Y_1(f)(x) < 2ν for x∈⋃_f∈ F
f∼ RY_1(f),
by (<ref>) and Hölder we have
∫_R (∑_f∈ F
f∼ Rχ_Y_1(f))^k+1/k⪅∫_R (∑_f∈ F
f∼ Rχ_Y_2(f))^k+1/k.
* For each R∈ℛ_1 we have
μ≤#{f∈ F f∼ R, Y_2(f)∩ R≠∅}<2μ.
*
∑_R∈ℛ_0∫_R (∑_f∈ F
f∼ Rχ_Y_1(f))^k+1/k⪅∑_R∈ℛ_1∫_R (∑_f∈ F
f∼ Rχ_Y_2(f))^k+1/k.
For each R∈ℛ_1, let
F(R) ⊂{f∈ F f∼ R, Y_2(f)∩ R≠∅}
be a set of size μ. We now (briefly) divide into cases.
Case 1: If δ'≤δ^η then define ℛ=ℛ_1. This is the main case.
Case 2: If δ'>δ^η, then since #ℛ_1≤#ℛ_0≲ (δ')^-O(1)≲δ^-O(η), we can select R_1∈ℛ_1 with
∑_R∈ℛ_1∫_R (∑_f∈ F
f∼ Rχ_Y_2(f))^k+1/k≤δ^-O(η)∫_R_1(∑_f∈ F
f∼ R_1χ_Y_2(f))^k+1/k.
Define ℛ={R_1}.
We will show that in both Case 1 and Case 2, Item (B) is satisfied. In Case 2, we have #ℛ=1 and Item (B) is satisfied. Suppose instead we are in Case 1, i.e. δ'∈ [δ, δ^η]; we will establish (<ref>). Let ρ∈ [δ', 1], T∈ [1,1/ρ], let R∈ℛ, and let R'⊃ R be a (ρ; k; T) rectangle. Suppose
#{f∈ F(R) f∼ R'} = ωμ,
for some ω>0. To show that Item (B) is satisfied, we need to prove that
ω≤ (δ')^-2ηT^-ε.
We have
∫_R ∑_f∈ F(R)
f∼ R'χ_Y_2(f)≥ωλμ≥1/2ω∫_R ∑_f∈ F(R)χ_Y_2(f)⪆ω∫_R ∑_f∈ F(R)χ_Y_1(f)⪆ων| R ∩⋃_f∈ F(R) Y_1(f)|,
where the first two inequalities used the definition of λ and the shading Y_2 from (<ref>), and the third inequality used (<ref>). The integral on the LHS of (<ref>) is supported on the set
W=R ∩⋃_f∈ F(R)
f∼ R'Y_2(f) ⊂ R ∩⋃_f∈ F(R)Y_1(f).
Thus comparing the left and right sides of (<ref>), we conclude that there exists a point x∈ W with
∑_f∈ F(R)
f∼ R'χ_Y_2(f)(x) ⪆ων.
For this point x, we have
#{f∈ F x∈ Y_1(f), f∼ R'}≥#{f∈ F x∈ Y_2(f), f∼ R'}⪆ων.
Comparing (<ref>) and (<ref>), we obtain (<ref>), provided δ_0=δ_0(ε,η) is selected sufficiently small—here we use the assumption that δ'≤δ^η to dominate the implicit constant (log 1/δ)^O(1) in (<ref>) by (δ')^-η/4. At this point, we have established Item (B).
Step 3: Refining the shading. Our next task is to establish Item (C). After dyadic pigeonholing, there exists a number ν_1≤ν so that if we define
Y'(f) = ⋃_R f∈ F(R){x∈ Y_2(f) ∩ R ν_1 ≤∑_g∈ F(R)χ_Y_2(g)(x)<2ν_1},
then
∑_R∈ℛ∫_R(∑_f∈ F(R)χ_Y_2(f))^k+1/k⪅∑_R∈ℛ∫_R(∑_f∈ F(R)χ_Y'(f))^k+1/k.
We have ν_1⪆ν, and thus by (<ref>), for each x∈ [0,1]^2 each ρ∈ [δ/δ', 1], and each (ρδ'; k) rectangle R' containing x, we have
#{f∈ F(R) x∈ Y'(f), f∼ R'}≤ρ^η/2ν⪅ρ^η/2ν_1⪅ρ^η/2#{f∈ F(R) x∈ Y'(f)}.
We will see how this establishes Item (C). Fix a rectangle R=R(I)∈ℛ, and let F̃(R) and {Ỹ'(f̃) f̃∈F̃(R)} be the sets defined in Part (C) of the statement of Proposition <ref>. Under this re-scaling, the inequality (<ref>) becomes the following: For each x∈ [0,1]^2 each ρ∈ [δ/δ', 1], and each (ρ; k) rectangle R' containing x, we have
#{f̃∈F̃(R) x∈Ỹ'(f), f̃∼ R'}⪅ρ^η/2#{f̃∈F̃(R) x∈Ỹ'(f̃)}.
To show that Item (C) is satisfied, let x∈ [0,1]^2, ρ∈ [δ/δ', 1], T∈ [1, 1/ρ], and let R' be a (ρ; k-1; T) rectangle containing x. By Lemma <ref>, the functions {f̃∈F̃(R) x∈Ỹ'(f), f̃∼ R'} are all tangent to a curvilinear rectangle of dimensions Aτ×τ^1/k, where A = O(1) and τ = min(ρ, T^-k). Thus by Lemma <ref>, at least a ≳ 1 fraction of these functions are tangent to a common (τ;k) rectangle R”, which contains x, i.e.
#{f̃∈F̃(R) x∈Ỹ'(f), f̃∼ R'}≲#{f̃∈F̃(R) x∈Ỹ'(f), f̃∼ R”}.
The size of the latter set is controlled by (<ref>). Thus we have
#{f̃∈F̃(R) x∈Ỹ'(f), f̃∼ R'} ⪅τ^η/2#{f̃∈F̃(R) x∈Ỹ'(f̃)}
≲max(ρ^η/2, T^-kη/2)#{f̃∈F̃(R) x∈Ỹ'(f̃)}
≤ T^-η/2#{f̃∈F̃(R) x∈Ỹ'(f̃)},
where the second inequality used the fact that T≤ρ^-1 and k≥ 1. This establishes Item (C).
Finally, by chaining inequalities (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), we see that Item (A) is satisfied.
Next, we will show how Proposition <ref> and Theorem <ref> can be combined to prove Proposition <ref>, which is a variant of Theorem <ref> where the non-concentration condition on F is replaced by a (local) two-ends type non-concentration condition on the set of curves passing through each point. Before stating the result, we recall the following definition from <cit.>.
Let (M,d) be a metric space. Let α,δ,C>0. A set E⊂ M is called a (δ,α;C)-set if for all r≥δ and all metric balls B of radius r, we have
ℰ_δ(E ∩ B)≤ C(r/δ)^α,
where ℰ_δ(X) denotes the δ-covering number of the set X. In informal settings, we will sometimes abbreviate this to (δ,α)-set.
Our proof below involves an anisotropic rescaling, which sends a (ρ;k) rectangle to the unit square. Such a rescaling distorts (δ,α)-sets into slightly more complicated objects. The next definition describes a class of set that is preserved (as a class) under this type of rescaling.
Let δ,τ,α>0 and C≥ 1. Let f[0,1]→. We say a set Y(f)⊂ f^δ is a δ-thick shading striped by a (τ;α;C)-set if Y(f) is contained in a set of the form f^δ∩ (E×), where E⊂[0,1] is a (τ;α;C)-set.
The shadings Y(f) from the statement of Theorem <ref> are δ-thick shadings striped by (δ,α;δ^-η)-sets, i.e. τ=δ. However, we gain some additional flexibility by allowing δ and τ to differ. We exploit this flexibility as follows. Suppose Y(f)⊂ f^δ is a δ-thick shading striped by a (τ;α;C)-set. Suppose furthermore that R=R(I) is a (δ';k) rectangle, and f∼ R. Let f̃=f_R in the sense of Definition <ref>, and let Ỹ(f̃)=ϕ^R(Y(f)∩ R). Since this rescaling is anisotropic, it affects δ and τ differently—Ỹ(f̃) is a δ/δ'-thick shading striped by a (τ/δ^1/k, α; C)-set. This observation will play an important role in the proof below.
Let k≥ 1, 0≤α≤ 1, and let ε>0. Then there exists η>0 such that the following is true for all δ,τ>0. Let F be a set of polynomials of degree at most δ^-η, each of which has C^k norm at most 1. For each f∈ F, let Y(f) be a δ-thick shading striped by a (τ,α;δ^-η)-set. Suppose that for all x∈ [0,1]^2, all ρ∈[δ,1], all T∈ [1, 1/ρ], and all (ρ; k; T) rectangles R containing x, we have
#{f∈ F x∈ Y(f), f∼ R}≤δ^-ηT^-ε#{f∈ F x∈ Y(f)}.
Then
‖∑_f∈ Fχ_Y(f)‖_k+1/k≲_α,εδ^-εδ^k+α/k+1τ^k(1-α)/k+1(#F).
We prove the result by induction on k.
The base case. We begin with the base case k=1. After dyadic pigeonholing and replacing each shading Y(f) by a refinement Y_1(f), we may suppose that there is a number μ so that
μ≤∑_f∈ Fχ_Y_1(f)(x)< 2μ for all x∈ X, X= ⋃_f∈ F Y_1(f).
(<ref>) remains true with Y_1 in place of Y, and it suffices to establish (<ref>) (with ε/2 in place of ε) for the shading Y_1. By (<ref>) with ρ=2δ and T=2^1/εδ^-η/ε, we have that for each x∈ X, there are ≳μ^2 pairs f,g∈ F with the following two properties: (i) x∈ Y_1(f)∩ Y_1(g), and (ii): the connected component of f^δ∩ g^δ containing x projects to an interval of length at most 2^1/εδ^1-η/ε on the x_1-axis; denote this interval by I(f,g). Let 𝒯 be the set of triples (x, f,g)∈ X× F^2, where the pair f,g satisfy items (i) and (ii). Then |𝒯|∼μ^2 |X|, where |·| denotes the product of two-dimensional Lebesgue measure on X and counting measure on F^2.
For each f,g∈ F, we have
|{x∈ X (x, f,g)∈𝒯}| ≤ |Y_1(f)∩ Y_1(g)| ≤ |Y_1(f) ∩ (graphf|_I(f,g))^δ|,
where (graphf|_I(f,g))^δ denotes the δ neighborhood of the graph of f, restricted to the interval I(f,g); recall that this interval has length O_ε(δ^1-η/ε). Since Y_1(f) is a δ-thick shading striped by a (τ,α;δ^-η)-set, we have
|Y_1(f) ∩ (graphf|_I(f,g))^δ|≲_ε{[ (δτ) (δ^-η (δ^1-η/ε/τ)^α), δ^1-η/ε≥τ; δ^2-η/ε, δ^1-η/ε≤τ ]}≤δ^1+α-η/ετ^1-α.
Since f and g are polynomials of degree at most δ^-η, we have that f^δ∩ g^δ is a union of O(δ^-η) connected components, and thus each pair f,g∈ F can contribute to 𝒯 above at most O(δ^-η) intervals. Hence
|𝒯|≲_εδ^1+α-2η/ετ^1-α(#F)^2.
On the other hand, we have |X|≤μ^-2|𝒯|, and thus
|X|≲δ^1+α-2η/ετ^1-α(#F/μ)^2.
If we select η≤ε^2/4, then (<ref>) implies (<ref>) (with ε/2 in place of ε, as required above).
The induction step Suppose that k≥ 2 and that the result has been established for k-1. Fix 0≤α≤ 1 and ε>0. Let η>0 be a quantity to be specified below, let δ,τ>0, and let F and Y(f) satisfy the hypotheses of Proposition <ref> with this value of η. First, let δ_0>0 be a small quantity to be chosen below, which depends on k and ε. We may suppose that δ≤δ_0, since otherwise (<ref>) is trivial, provided we choose the implicit constant sufficiently large.
Let ε_1=ε/2. Let η_1=η_1(k,ε_1) be the output from Theorem <ref>, with ε_1 in place of ε. We will select η>0 sufficiently small so that η≤η_1. Thus the shadings {Y(f) f∈ F} satisfy Hypothesis (<ref>) of Proposition <ref> with ε_1 in place of ε and η_1 in place of η. Applying Proposition <ref>, we get a sub-shading Y'(f)⊂ Y(f); a scale δ'∈ [δ,1]; a set ℛ of (δ', k) rectangles; sets F(R), R∈ℛ; and a multiplicity μ≤#F.
By Item (B), either #ℛ=1, or (provided we choose δ_0 sufficiently small depending on k and ε_1) we can apply Theorem <ref> (recall that we selected η_1 sufficiently small to ensure that Theorem <ref> can be applied) to conclude that
#ℛ≤δ^-ε_1(# F/μ)^k+1/k.
We next explore the consequences of Item (C) from Proposition <ref>. We first consider the case where δ'≤δ^1-ε_1. By Item (A) from Proposition <ref>, we have
‖∑_f∈ Fχ_Y(f)‖_k+1/k^k+1/k ≤δ^-O(η_1)∑_R∈ℛ∫_R(∑_f∈ F(R)χ_Y'(f))^k+1/k
≲{[ δ^-O(η_1)(#ℛ) μ^k+1/k (δτ)( δ^-η(δ^1/k/τ)^α), τ≤δ^1/k; δ^-O(η_1)(#ℛ) μ^k+1/kδ^k+1/k, τ>δ^1/k ].
≲δ^-ε/2 -O(η_1)δ^k+α/kτ^1-α (#F)^k+1/k,
and we have established (<ref>) and completed the proof, provided we select η_1 sufficiently small depending on ε and k, and provided we select the implicit constant in (<ref>) sufficiently large depending on ε and k.
For the remainder of the proof we will consider the case where δ'>δ^1-ε_1, so in particular the implicit constant O_ε_1(log 1/δ)^O(1) from (<ref>) is bounded by O_ε_1(log(δ'/δ))^O(1). Recall that by Item (C), the sets F(R) and Y'(f), f∈ F(R) satisfy (<ref>). For each R∈ℛ, let F̃(R) and Ỹ'(f) be as defined in Item (C) of Proposition <ref>.
If δ_0 (and thus δ/δ') is sufficiently small, then F̃(R) and Ỹ'(f) will satisfy the induction hypothesis (<ref>) with the parameters changed as follows:
* k is replaced by k-1.
* δ is replaced by δ̃= δ/δ'.
* τ is replaced by τ̃= τ/(δ')^1/k
* The functions f∈ F are polynomials of degree δ^-η≤δ̃^-η/ε_1, each of which has C^k norm at most 1.
* The shadings Ỹ'(f) are δ̃-thick shadings striped by a (τ̃;α;δ̃^-η/ε_1)-set.
* The shadings Ỹ'(f) satisfy (<ref>), with T^-η_1/2 in place of T^-ε, and O_ε_1(log 1/δ̃)^O(1) in place of δ^-η.
It is now time to apply the induction hypothesis: we apply Proposition <ref> with k-1 in place of k; α unchanged; and η_1/2 in place of ε. Let η_2 be the output from this proposition. If η>0 is selected sufficiently small, then η/ε_1≤η_2. This means that the functions f∈ F are polynomials of degree at most δ̃^-η_2, and the shadings Ỹ'(f) are striped by (τ̃;α;δ̃^-η_2)-sets. If δ_0 and thus δ̃ are sufficiently small, then the quantity O_ε_1(log 1/δ̃)^O(1) from the final item above is at most δ̃^-η_2. Thus we can use the induction hypotheses to conclude that
‖∑_f∈F̃(R) χ_Ỹ'(f)‖_k/k-1≲δ^-η_1/4δ̃^k-1+α/kτ̃^(1-α)(k-1)/k (#F(R)).
We also have the L^1 bound
‖∑_f∈F̃(R) χ_Ỹ'(f)‖_1≤δ^-ηδ̃τ̃^1-α(#F(R)).
Interpolating (<ref>) and (<ref>) and recalling the definition of δ̃ and τ̃ (and the fact that η≤η_1), we have
‖∑_f∈F̃(R) χ_Ỹ'(f)‖_k+1/k^k+1/k≤‖∑_f∈F̃(R) χ_Ỹ'(f)‖_1^1/k‖∑_f∈F̃(R) χ_Ỹ'(f)‖_k/k-1≤δ^-η_1(δ')^-k+1/kδ^k+α/kτ^1-α(#F(R))^k+1/k.
Undoing the scaling that mapped R to the unit square (this scaling distorted volumes by a factor of (δ')^k+1/k), and recalling that #F(R)=μ for each R∈ℛ, we conclude that
∫_R ( ∑_f∈ F(R) χ_Y'(f))^k+1/k≲δ^-2η_1δ^k+α/kτ^1-αμ^k+1/k.
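The exponent bookkeeping in the last two displays can be checked mechanically. The following short script is only an illustrative sanity check (it is not part of the argument, and the numerical values of k, α, δ, δ', τ are arbitrary): it substitutes δ̃=δ/δ' and τ̃=τ/(δ')^1/k into the interpolated bound and confirms that the result matches (δ')^-(k+1)/kδ^(k+α)/kτ^1-α.

```python
import math

# Arbitrary positive test values with delta <= delta1 <= 1.
k, alpha = 3, 0.4
delta, delta1, tau = 1e-4, 1e-2, 5e-4

dt = delta / delta1               # delta tilde
tt = tau / delta1 ** (1.0 / k)    # tau tilde

# (L^1 bound)^(1/k) times the L^{k/(k-1)} bound, keeping only powers of the main parameters.
lhs = (dt * tt ** (1 - alpha)) ** (1.0 / k) \
    * dt ** ((k - 1 + alpha) / k) * tt ** ((1 - alpha) * (k - 1) / k)
# Claimed form after undoing the anisotropic rescaling.
rhs = delta1 ** (-(k + 1) / k) * delta ** ((k + alpha) / k) * tau ** (1 - alpha)

print(abs(math.log(lhs / rhs)))   # ~0, so the exponents agree
```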
Combining Item (A) from Proposition <ref> and (<ref>), we conclude that
‖∑_f∈ Fχ_Y(f)‖_k+1/k^k+1/k ≲_εδ^-O(η)∑_R∈ℛ∫_R(∑_f∈ F(R)χ_Y'(f))^k+1/k
≲δ^-O(η)-2η_1(#ℛ)δ^k+α/kτ^1-αμ^k+1/k
≤δ^-O(η)-2η_1-ε/2δ^k+α/kτ^1-α (# F)^k+1/k.
If we select η_1 and η sufficiently small (depending on ε and k), then the term δ^-O(η)-2η_1-ε/2 has size at most δ^-ε. This establishes (<ref>) and closes the induction.
§ PROOF OF THEOREM <REF>
In this section, we will prove the following slightly more technical variant of Theorem <ref>.
Let k≥ 1, let I be a compact interval, and let ℱ⊂ C^∞(I) be uniformly smooth and forbid k–th order tangency. Let 0<β≤α≤ 1, and let ε>0.
Then there exists η>0 and δ_0>0 so that the following is true for all δ∈ (0,δ_0]. Let F⊂ℱ be a (δ, β; δ^-η)-set (here ℱ is given the usual metric on C^k(I)). For each f∈ F, let Y(f)⊂ f^δ be a (δ, α;δ^-η)-set (here we use the usual Euclidean metric on ^2). Then
‖∑_f∈ Fχ_Y(f)‖_k+1/k≤δ^-ε( δ^2-α#F)^k/k+1.
If instead 0≤α≤ 1 and β>α then the result remains true, except the bound becomes
‖∑_f∈ Fχ_Y(f)‖_k+1/k≤δ^-ε( δ^2-α-β-α/k#F)^k/k+1.
Theorem <ref> is the special case where α=β=1 and Y(f)=f^δ for each f∈ F.
Before proving Theorem <ref>', let us examine how it differs from Proposition <ref>. First, the functions in Proposition <ref> are polynomials, while those in Theorem <ref>' are uniformly smooth; moving between these two conditions will not pose any difficulties. More care, however, is needed to move between the different non-concentration hypotheses imposed by Theorem <ref>' versus Proposition <ref>. In brief, if F is a family of curves that violates the non-concentration hypothesis (<ref>) from Proposition <ref>, then for a typical point x∈ [0,1]^2, the curves whose δ-neighborhoods contain x will be concentrated inside a small ball (in the metric space C^k(I)) in ℱ. Thus the curves in F can be partitioned into non-interacting pieces, each of which is localized to a small ball in ℱ. Since F is a (δ, β; δ^-η)-set, each of these pieces only contains a small fraction of the total collection of curves. Each of these pieces can then be re-scaled to create an arrangement of curves that satisfies the non-concentration hypothesis (<ref>). We now turn to the details
Step 1: Polynomial approximation.
First, after a harmless rescaling we may suppose that I=[0,1] and sup_f∈ℱ‖ f ‖_C^k+1≤ 1/2. Let η be a quantity to be chosen below. By Jackson's approximation theorem (see e.g. <cit.>), for each f∈ F there exists a polynomial P_f of degree at most Kδ^-η/2, so that ‖ f-P_f‖_C^k+1≤δ/4. The quantity K depends only on the numbers sup_f∈ℱ‖ f^(i)‖_∞ for i=0,…, i_0, with i_0∼ 1/η. Crucially, K is independent of δ and the specific choice of F⊂ℱ. In particular, if δ_0>0 and hence δ is sufficiently small depending on η and ℱ, then the degree of each polynomial P_f is at most δ^-η. Define F_1={P_f f∈ F}. For each P_f∈ F_1, define the shading Y(P_f) = P_f^2δ∩ N_δ(Y(f)). Abusing notation slightly, we will replace δ by 2δ, so Y(P_f)⊂ P_f^δ. It suffices to prove Theorem <ref> with F_1 in place of F, i.e. we must show that
‖∑_f∈ F_1χ_Y(f)‖_k+1/k≤δ^-ε( δ^2-α-max(0, β-α/k)#F)^k/k+1.
Note that since the set F is a (δ,β;δ^-η)-set, we may suppose after a (harmless) refinement by a factor of δ^η that the points in F are δ-separated. Hence the set F_1 is 3/4δ-separated, and is a (δ, β; 2δ^-η)-set. The set F_1 also satisfies the “forbidding k-th order tangencies” condition (<ref>), with ω replaced by ω/2.
Step 2: A two-ends reduction. Let ε_1>0 be a small quantity to be specified below. For each x∈^2, let t(x) be the infimum of all t≥δ for which there exists a ball B⊂ C^k([0,1]) of radius t satisfying
#{f∈ F_1 ∩ B x∈ Y(f)}≥ t^ε_1#{f∈ F_1 x∈ Y(f)}.
After dyadic pigeonholing, we can select a radius t and a shading Y_1(f)⊂ Y(f), f∈ F_1 with the following properties.
* t/2≤ t(x)<t for each x∈⋃_f∈ F_1Y_1(f).
* For each x∈⋃_f∈ F_1Y_1(f), there is a ball B(x)⊂ C^k([0,1]) of radius t that contains every f∈ F_1 with x∈ Y_1(f).
*
‖∑_f∈ F_1χ_Y(f)‖_k+1/k⪅δ^-ε_1‖∑_f∈ F_1χ_Y_1(f)‖_k+1/k.
* For each x∈⋃_f∈ F_1Y_1(f), each r∈[δ,t], and each ball B⊂ C^k([0,1]) of radius r, we have
#{f∈ F_1 ∩ B x∈ Y_1(f) }≤ (r/t)^ε_1#{f∈ F_1 x∈ Y_1(f)}.
Let ℬ_0 be a maximal set of pairwise non-overlapping balls in C^k([0,1]) of radius t that intersect ℱ. For each B∈ℬ_0, let 4B denote the ball with the same center and radius 4t; denote this new set of balls by ℬ_1. Then for every x∈⋃_f∈ F_1 Y_1(f), the ball B(x) is contained in at least one of the balls from ℬ_1, and hence we have the pointwise bound
∫(∑_f∈ F_1χ_Y_1(f))^k+1/k≤∑_B∈ℬ_1∫(∑_f∈ F_1∩ Bχ_Y_1(f))^k+1/k.
We claim that each f∈ℱ is contained in O(c^-O(1)) balls from ℬ_1, where c>0 is the quantity from (<ref>) associated to the family ℱ. From this it follows that
∑_B∈ℬ_1#(F_1∩ B)≲ c^-O(1)#F.
To verify the above claim, suppose that f∈ F is contained in ℓ distinct balls with centers g_1,…,g_ℓ. Since the points g_1,…,g_ℓ are t-separated in C^k([0,1]), by (<ref>) we have that the vectors v_j = (g_j(0), g_j'(0),…, g_j^(k)(0)), j=1,…,ℓ are at least c t/2-separated in ^k+1 with the L^1 metric. But since ‖ f - g_j‖_C^k≤ 4t for each index j, by the triangle inequality the vectors {v_j}_j=1^ℓ are contained in a ball of radius 8t. We conclude that ℓ≲ c^-O(1), as desired.
Step 3: Rescaling and Applying Proposition <ref>.
For each B∈ℬ_1, with center g_B and each f∈ F_1∩ B, define f̃_B(x) =(4t)^-1(f(x) - g_B(x)). Then ‖f̃_B‖_C^k≤ 1 for each f∈ F_1∩ B. Define δ̃= δ/(2t) and let Ỹ_1(f̃_B) be the image of Y_1(f) under the map ϕ_B(x,y)= (x, (4t)^-1(y - g_B(x))). Then Ỹ_1(f̃_B)⊂f̃_B^δ̃. The shading Ỹ_1(f̃_B) now satisfies the hypotheses of Proposition <ref>, with δ̃ in place of δ and τ = δ. Define F̃_B = {f̃_B f∈ F_1∩ B}.
The non-concentration estimate (<ref>) now has the following consequence. For each r∈[δ̃, 1] and each ball B'⊂ C^k([0,1]) of radius r, we have
#{f̃_B∈F̃_B ∩ B' x∈Ỹ_1(f̃_B) }≤ 4 r^ε_1#{f̃_B∈F̃_B x∈Ỹ_1(f̃_B)}.
The consequence of (<ref>) is the following: for each x∈⋃_f̃_B ∈F̃_BỸ_1(f̃_B), each T≥ 1, each ρ≥δ̃, and each (k; ρ; T)-rectangle R containing x, we have
#{f̃_B ∈F̃_B x∈Ỹ_1(f̃_B), f̃_B∼ R}≲ T^-ε_1#{f̃_B ∈F̃_B x∈Ỹ_1(f̃_B)}.
Indeed, by Lemma <ref>, the set of functions f̃_B in the set on the LHS of (<ref>) are localized to a ball B' centered at g_B of diameter O(T^-1) (recall that a (ρ; k; T)-rectangle has length (T ρ)^1/k. ) Comparing with (<ref>), we obtain (<ref>).
Applying Proposition <ref> with ε_1 in place of ε; δ̃ in place of δ; and τ=δ, we conclude that if η>0 is sufficiently small, then for each ball B∈ℬ_1 we have (provided δ̃ is sufficiently small)
∫( ∑_f̃_B∈F̃_Bχ_Ỹ_1(f̃_B))^k+1/k≲_α,εδ̃^-ε_1δ̃^1+α/kτ^1-α(#F̃_B)^k+1/k≤δ^-ε_1 t^-1-α/kδ^2-α+α/k(#F̃_B)^k+1/k.
Undoing the scaling ϕ_B (which distorted volumes by a factor of 4t) and using the fact that #F_B≲δ^-η(t/δ)^β (since F is a (δ, β; δ^-η)-set), we have
∫( ∑_ f∈ F_Bχ_ Y_1(f))^k+1/k ≲_α,εδ^-ε_1 t^-α/kδ^2-α+α/k(# F_B)^k+1/k
≲δ^-ε_1-η t^β-α/kδ^2 - α + α-β/k(# F_B)
= δ^-ε_1-η (δ/ t)^α-β/kδ^2 - α(# F_B).
Combining (<ref>), (<ref>), and (<ref>), we conclude that
‖∑_f∈ F_1χ_Y(f)‖_k+1/k^k+1/k⪅_ε,αδ^-2ε_1-η∑_B∈ℬ_1 (δ/ t)^α-β/kδ^2 - α(# F_B)
≲δ^-2ε_1-η(δ/ t)^α-β/kδ^2 - α(# F).
If α≥β, then the worst case occurs when t=δ. This is unsurprising, in light of the behavior of Arrangements 1, 2, and 3 from Section <ref>. If instead β>α, then the worst case occurs when t=1. Regardless, we obtain (<ref>), provided we select η,ε_1≤ε/3, and choose δ_0>0 sufficiently small so that the implicit constant O_ε,α(log(1/δ))^O(1) in inequality (<ref>) is at most δ^-ε/3.
§ FROM THEOREM <REF>' TO THEOREM <REF>
In this section we will prove Theorem <ref>. We begin with the case s=m-1. Let h:𝒞× I→ℝ, Φ:𝒞→ℝ^m-s, 𝒞_0⊂𝒞, and I_0⊂ I be as in the statement of Theorem <ref>. Since 𝒞_0 and I_0 are compact and h,Φ are smooth, it suffices to consider the case where 𝒞=N(u_0,r) is a small neighborhood of a point u_0, and I_0 is a short interval. Since h parameterizes an m-dimensional family of cinematic curves, if the neighborhood 𝒞 and the interval I_0 are chosen sufficiently small, then there exists c>0 so that
∑_j=0^m-1 |∂_t^j h(u;t) - ∂_t^j h(u';t)|≥ c|u-u'|, u,u'∈𝒞, t∈ I_0,
i.e. the family ℱ = {h(·, u) u∈𝒞_0} is uniformly smooth and forbids (m-1)–st order tangency.
The reduction from Theorem <ref> to Theorem <ref> now proceeds by a standard L^p duality argument. We will briefly sketch the proof, and refer the reader to Lemma 10.4 from <cit.> for further details. Let {v_i} be a maximal δ-separated subset of Φ(𝒞_0). If |v-v'|<δ, then M_δ f(v)≤ A M_δ f(v'), where the constant A depends on h,Φ, 𝒞 and I. Thus
‖ M_δ f‖_p ≲(δ∑_j |M_δ f(v_j)|^p)^1/p.
By the duality of ℓ^p and ℓ^p', there exists a sequence {y_j} with δ∑_j y_j^p'=1, so that
(δ∑_j |M_δ f(v_j)|^p)^1/p=δ∑_j y_j|M_δ f(v_j)|,
and thus
‖ M_δ f‖_p ≲δ∑_j y_j 1/δ∫_g_j^δ|f|=∫(∑_j y_jχ_g_j^δ)|f|,
where g_j∈ℱ is a function that comes within a factor of 1/2 of achieving the supremum M_δ f(v_j). We now use Hölder's inequality to bound
∫(∑_j y_jχ_g_j^δ)|f| ≤‖∑_j y_jχ_g_j^δ‖_p'‖ f‖_p.
We would like to apply Theorem <ref>, but we must first deal with the weights {y_j}. Since we do not care about factors of log(1/δ), this can be handled using dyadic pigeonholing. We divide ‖∑_j y_jχ_g_j^δ‖_p' into log (1/δ) pieces based on the dyadic value of y_j (there are only O(log (1/δ)) dyadic ranges for y_j, since each y_j has size at most 1, and values smaller than δ^100m can be ignored, since the total contribution from such weights is at most O(δ^100)), and apply Theorem <ref> with p' = m/(m-1) to each piece. Summing the resulting contributions, we obtain the estimate ‖∑_j y_jχ_g_j^δ‖_p'≤δ^-ε, provided δ>0 is selected sufficiently small.
The conclusion (<ref>) of Theorem <ref> holds for all δ>0 sufficiently small. More precisely, the conclusion holds for all δ∈ (0,δ_0], where δ_0 depends on the following quantities:
* m and s (so far, we have only considered the case m=s+1).
* ε.
* The infimum of | DF_t(u)| from Definition <ref>, for (u,t)∈𝒞_0× I_0; this quantifies the property that h parameterizes an m-dimensional family of cinematic curves.
* The infimum of | D Φ|_V_u;t| from Definition (<ref>), for (u,t)∈𝒞_0× I_0; this quantifies the property that Φ is transverse to h.
* sup|∇Φ|; in order for F to be a (δ, β; δ^-η)-set, we need this supremum to be at most δ^-η.
* The C^N-norm of h, where N=N(ε) is a large integer depending on ε. More precisely, we can cover 𝒞_0⊂𝒞 by a finite (independent of δ) set of coordinate charts, and our choice of δ_0 will depend on the maximum of the C^N-norm of h in these coordinate charts.
Next, we consider the case s<m-1. The reduction from s<m-1 to s=m-1 is a “slicing” argument. Again, since 𝒞_0 and I_0 are compact and h,Φ are smooth, it suffices to consider the case where 𝒞=N(u_0,r) is a small neighborhood of a point u_0, and I is a short interval. In particular, we can suppose that there is a unit vector e∈^m-s so that for each u∈𝒞 and each t∈ I, if we consider the manifold V_u;t given by (<ref>),
then Φ(V_u;t) is a codimension-1 manifold (i.e. dimension (m-s-1)) in ^m-s, and at each point p∈Φ(V_u;t), the tangent plane T_p Φ(V_u;t) has normal vector that makes angle ≤ 1/100 with e.
After a harmless rotation, we may suppose that e = e_1 is the first standard basis vector. After further restricting 𝒞 and translating, we may suppose that Φ(𝒞) is the cube Q=[0,r]^m-s for some small r>0. Writing v = (v̄, v_m-s)∈ℝ^m-s-1×ℝ and Q=Q̄× [0,r], we have
‖ M_δ f‖_L^s+1(Q) =(∫_Q̄∫_0^r (M_δ f(v̄, v_m-s))^s+1dv_m-sdv̄)^1/s+1≤(sup_v̄∈Q̄∫_0^r(M_δ f(v̄, v_m-s))^s+1dv_m-s)^1/s+1
=sup_v̄∈Q̄‖ M_δ f(v̄, ·)‖_L^s+1([0,r])=sup_v̄∈Q̄‖ M^v̄_δ f‖_L^s+1([0,r]),
where
M^v̄_δ f(v_m-s)=1/δsup_u∈Φ^-1(v̄, v_m-s)∫_γ_u f.
The purpose of the above computation is that M^v̄_δ is a maximal operator in the sense of Definition <ref>, with s+1 in place of m. Thus we can apply Theorem <ref> with s+1 in place of m to conclude (see Remark <ref>) that there exists a choice of δ_0>0 (which is uniform in our choice of v̄) so that ‖ M^v̄_δ‖_L^s+1→ L^s+1≤δ^-ε for all v̄ and all δ∈ (0,δ_0]. This means that for δ∈ (0,δ_0], we have
sup_v̄∈Q̄‖ M^v̄_δ f‖_L^s+1([0,r])≤δ^-ε‖ f‖_s+1.
Combining (<ref>) and (<ref>), we obtain (<ref>).
§ FROM THEOREM <REF> TO THEOREM <REF>
In this section we will prove Theorem <ref>. The main new input is a local smoothing estimate by Chen, Guo, and Yang <cit.>. As noted in the introduction, Chen, Guo, and Yang prove sharp L^p→ L^p bounds for the axis-parallel elliptic maximal function by combining their local smoothing theorem with an estimate similar to (<ref>). We will follow a similar strategy. We begin by recalling the setup from <cit.>.
§.§ Local smoothing: The Chen-Guo-Yang framework
Let s≥ 2, w = (w_1,…, w_s). Let ζ(w;t):ℝ^s×ℝ→ℝ be smooth, let ϕ(w, t) be a smooth bump function supported near the origin. Define
A_ζ,ϕf(x,y; w)=∫_ℝf(x-t, y - ζ(w; t))ϕ(w, t)dt,
and define
G_ζ,ϕf(x,y) = sup_w∈^s| A_ζ,ϕf(x,y;w) |.
Next, we define an analogue of Sogge's cinematic curvature condition from <cit.> in this setting. Let
T^ζ(w;t) = (∂_tζ(w; t), ∂^2_tζ(w;t),…,∂^s+1_tζ(w;t))^T.
We say that G_ζ,ϕ satisfies the s parameter curvature condition at the origin if
det[∂_tT^ζ, ∂_w_1T^ζ, …, ∂_w_sT^ζ]|_(w,t)=(0,0)≠ 0.
By continuity, if (<ref>) is satisfied, then the determinant continues to be nonzero for (w,t) in a small neighborhood of the origin. The bump function ϕ will be selected so that this determinant will be uniformly bounded away from 0 on the support of ϕ.
Now we can state Proposition 3.2 from <cit.>. In what follows, P_kf denotes the Littlewood-Paley projection to the frequency annulus of magnitude ∼ 2^k.
Let ζ(w,t):ℝ^s×ℝ→ℝ satisfy the s parameter curvature condition at the origin. Then there exists p_s=p(s)<∞ so that for all ε>0 and all smooth bump functions ϕ(w,t) whose support is contained in a sufficiently small neighborhood of the origin, there is a constant C = C(ε,ζ, ϕ) so that
‖ A_ζ,ϕ(P_kf)‖_L^p(^2×^s)≤ C 2^-(s+1/p+ε)k‖ f ‖_L^p(^2).
In <cit.>, Proposition 3.2 is stated with the additional hypothesis that ζ(w,t) is a “normal form” at the origin (this is defined in Definition 3.1 from <cit.>). However, the argument immediately following Proposition 4.2 shows how an arbitrary ζ(w,t) can be reduced to the case where ζ is a normal form. We also remark that the analogue of (<ref>) from <cit.> has the expression A_ζ,ϕf rather than A_ζ,ϕ(P_kf), but the latter is what is intended.
Note that
‖ G_ζ,ϕ(P_kf)(x,y))‖_L^p_xy=
‖ A_ζ,ϕ(P_k f)(x,y;w))‖_L^p_xy(L^∞_w),
where L^p_xy denotes L^p(^2) in the variables (x,y) and L^∞_w denotes L^∞(^s) in the variable w. Thus by Sobolev embedding, (<ref>) implies
‖ G_ζ,ϕ(P_kf) ‖_L^p(^2)≤ C 2^-(1/p+ε)k‖ f ‖_L^p(^2),
with p=p(s) as above, and for a (possibly different) constant C = C(ε,γ,χ). I.e. the sublinear operator G_ζ,ϕ has high frequency decay, in the sense of Definition <ref>.
§.§ From local smoothing to maximal averages over curves
Our next task is to relate the maximal operator G_ζ,ϕf from (<ref>) to the operator M from Definition <ref>. By compactness, it suffices to consider the case where 𝒞 is a small neighborhood of a point u_0, and I is a small interval. Since we restrict to the case where M is translation invariant, we may choose local coordinates of the form u=(x,y,w_1,…,w_s) so that the parameterization and projection functions h:𝒞× I→ℝ and Φ:𝒞→ℝ^2 can be expressed in the form h(u; t) = ζ(w_1,…,w_s; t-x) + y and Φ(u) = (x,y); we can choose these coordinates so that u_0 = 0 and I is an interval centered at 0. Let G = G_ζ,ϕ, where ϕ is a bump function chosen so that Proposition <ref> holds. We will further restrict 𝒞 and I so that ϕ is identically 1 on 𝒞× I. With these restrictions, we have
Mf(x,y) = sup_w (x,y,w)∈𝒞_0∫_γ_wf ≤ sup_w∈^sA_ζ,ϕf(x,y; w)≤ G_ζ,ϕf(x,y),
for every non-negative function f^2→.
Let us suppose for the moment that ζ(w,t):ℝ^s×ℝ→ℝ satisfies the s parameter curvature condition at the origin. Theorem <ref> says that for each ε>0, there exists a constant C_ε so that
‖ G_ζ,ϕP_kf ‖_L^s+1(^2)≤ C_ε 2^ε k‖ f ‖_L^s+1(^2).
Indeed, the quantity G_ζ,ϕ(P_kf)(x,y) is comparable to M_δ f(x,y) for δ=2^-k, where M is the maximal operator from (<ref>) associated to h. The conclusion of Theorem <ref> holds for all δ>0 sufficiently small (depending on ε, 𝒞, h, and Φ), but this may be extended to all δ>0 by selecting a sufficiently large constant C_ε.
Let p>s+1. If we select ε>0 sufficiently small depending on p and the Lebesgue exponent p(s) from (<ref>), then by interpolating (<ref>) and (<ref>), we conclude that there exist constants η>0 (small) and C (large) so that
‖ G_ζ,ϕP_kf ‖_L^p(^2)≤ C 2^-η k‖ f ‖_L^p(^2),
and hence there is a constant C_p so that
‖ G_ζ,ϕf ‖_L^p(^2)≤ C_p ‖ f ‖_L^p(^2).
Since it suffices to prove Theorem <ref> for non-negative functions, the theorem now follows from (<ref>).
It remains to verify that ζ(w,t):ℝ^s×ℝ→ℝ satisfies the s parameter curvature condition at the origin. By hypothesis, h parameterizes an (s+2)-dimensional family of cinematic curves, in the sense of Definition <ref>. To slightly simplify notation below, we will write coordinates u = (y, x, w_1,…,w_s) rather than (x,y, w_1,…,w_s). We have
DF^h_0(0)=
(
[ 1 ∂_t h ∂_w_1h ⋯ ∂_w_sh; 0 ∂_t∂_t h ∂_t ∂_w_1h ⋯ ∂_t ∂_w_sh; ⋮ ⋮ ⋮ ⋱ ⋮; 0 ∂_t^s+1∂_t h ∂_t^s+1∂_w_1h ⋯ ∂_t^s+1∂_w_sh ])
But the bottom-right (s+1)× (s+1) minor of the above matrix is precisely T^ζ(w;t), and hence these matrices have the same determinant. Since h parameterizes an (s+2)-dimensional family of cinematic curves, this determinant is non-vanishing at (u;t)=(0;0). We conclude that ζ(w,t):ℝ^s×ℝ→ℝ satisfies the s parameter curvature condition at the origin.
§ FROM THEOREM <REF>' TO THEOREMS <REF> AND <REF>
In this section we will briefly discuss the reduction from Theorem <ref>' to Theorems <ref> and <ref>. Reductions of this type are already present in the literature, so we will just provide a brief sketch and refer the reader to the appropriate sources for further details.
§.§ Restricted Projections
The connection between exceptional set estimates for projections in restricted sets of projections, and maximal function estimates for curves was first explored by Käenmäki, Orponen, and Venieri in <cit.>. We will follow the framework from Section 2 of <cit.>. We will only briefly sketch the numerology of the problem, and refer the reader to <cit.> for details.
Let γ:[0,1]→^n and E⊂^n be as in the statement of Theorem <ref>; after re-scaling and replacing E by a subset, we may suppose that E⊂[-1,1]^n and dim E≤ 1. Suppose for contradiction that there exists some 0≤ q< dim E so that the set
S = {t∈ [0,1] dim(E·γ(t))<q}
satisfies dim S>q. After possibly replacing S and E by subsets, we may suppose that q< dim S = dim E. Let α= dim S. Let ℱ = {t↦ z·γ(t) z∈ [-1,1]^n}. Since γ is smooth, the set ℱ is uniformly smooth, and the nondegeneracy condition (<ref>) implies that ℱ forbids (n-1)-st order tangency. Define ℱ_E = {t↦ z·γ(t) z∈ E}.
Let η,δ_0>0. In Section 2 of <cit.>, the authors explain how to extract a (δ, α; δ^-η)-set F⊂ℱ_E, for some δ∈(0,δ_0], and how to construct a shading Y(f)⊂ f^δ for each f∈ F, where each set Y(f) is a (δ,α;δ^-η)-set (in the metric space ^2), with the property that the union ⋃_f∈ FY(f) is contained in a (δ, α; δ^-η)× (δ, q; δ^-η) quasi-product, i.e. a set X⊂^2 whose projection to the x-axis is a (δ, α; δ^-η)-set, and every fiber above this projection is a (δ, q; δ^-η)-set. In particular, such a quasi-product has measure at most δ^2-α-q-2η. Since ∑_f∈ f|Y(f)|≳δ^2 - 2α+2η, by Hölder's inequality we have
‖∑_f∈ Fχ_Y(f)‖_n/n-1^n/n-1≥δ^-2α + q-α/n-1+O(η).
On the other hand, by Theorem <ref> with k=n-1, for each >0 we have
‖∑_f∈ Fχ_Y(f)‖_n/n-1^n/n-1≤δ^-2α-ε,
provided δ is sufficiently small. Since q<α, we obtain a contradiction provided ,η, and δ_0 are chosen sufficiently small. We refer the reader to Section 2 of <cit.> for details.
§.§ Furstenberg sets of curves
In this section we will briefly discuss the proof of Theorem <ref>. In <cit.>, Héra, Shmerkin, and Yavicoli obtained new bounds for the dimension of (α, 2α) Furstenberg sets. They did this by first introducing the notion of a discretized (α,β) Furstenberg set, and then showing that covering number bounds on the size of such discretized Furstenberg sets imply Hausdorff dimension bounds for the corresponding (α,β) Furstenberg sets. An identical strategy will work here. The corresponding notion of a discretized Furstenberg set of curves is as follows.
Let ℱ⊂ C^k([0,1]). For α,β,δ, C>0, we say a set E⊂[0,1]^2 is a discretized (δ, α, β) Furstenberg set of curves (with error C) from the family ℱ, if E=⋃_f∈ FA_f, where
* The set F⊂ℱ is a (δ, β; C)-set (in the metric space C^k([0,1])), with #F ≥ C^-1δ^-β.
* For each f∈ F, the set A_f is a (δ,α;C)-set (in the metric space ^2), with |A_f|≥ C^-1δ^2-α, which is contained in f^2δ.
Definition <ref> is modeled off of Definition 3.2 from <cit.>. The definitions are very similar, with the following two differences: in <cit.>, the authors consider lines in ^n rather that a family ℱ of curves, and the authors use the notation “⪅” to suppress the role of the constant C.
In Lemma 3.3 from <cit.>, the authors prove the following: Let α,β,s≥ 0. Suppose that for every ε>0, there exists η>0 so that every (δ, α,β) Furstenberg set of lines (with error δ^-η) has measure at least δ^2-s+ε. Then every (α,β) Furstenberg set has Hausdorff dimension at least s.
An identical proof yields the analogous result for Furstenberg sets of curves: Fix a family ℱ⊂ C^k([0,1]), and fix α,β,s≥ 0. Suppose that for every ε>0, there exists η>0 so that every (δ, α,β) Furstenberg set of curves (with error δ^-η) from the family ℱ has measure at least δ^2-s+ε. Then every (α,β) Furstenberg set of curves from ℱ has Hausdorff dimension at least s. Thus in order to prove Theorem <ref>, it suffices to obtain the corresponding bound on the volume of discretized (δ, α, β) Furstenberg sets of curves from ℱ.
To this end, fix k≥ 1 and 0≤β≤α≤ 1, and fix a family ℱ of uniformly smooth curves that forbid k-th order tangency. Fix ε>0, and let η>0 be a small quantity to be specified below. Let E⊂[0,1]^2 be a discretized (δ, α, β) Furstenberg set of curves (with error δ^-η) from the family ℱ, and let F⊂ℱ and {Y(f) f∈ F} be as in Definition <ref>. Then if η is sufficiently small, we can use Theorem <ref> and Hölder's inequality to compute
δ^2-α-β=‖χ_E ∑_f∈ Fχ_Y(f)‖_1 ≤‖χ_E‖_k+1‖∑_f∈ Fχ_Y(f)‖_k+1/k≤ |E|^1/k+1δ^-ε/k+1(δ^2-α-β)^k/k+1.
Re-arranging, we conclude that |E|≥δ^2-α-β+ε, as desired.
§ EXAMPLES
In this section, we will show that the maximal functions discussed in the introduction can be expressed in the framework described in Section <ref>. The Kakeya maximal function is straightforward: select 𝒞_0=[0,1]^2, I_0 = [0,1], 𝒞 a neighborhood of 𝒞_0, and I a neighborhood of I_0. Let h(m,b;t) = mt+b, and let Φ(m,b)= m. Then F_t(m, b) = (mt+b, m), and DF_t = ([ t 1; 1 0 ]), which is invertible. Since s=m-1, Φ is automatically transverse to h.
For the Wolff and Bourgain circular maximal functions, we can use translation and rotation symmetry to reduce to the case where r takes values in a neighborhood of 1 and the centers (x,y) take values in a neighborhood of (0,0). Finally, we may restrict the integral (<ref>) (resp. (<ref>)) to the upper arc of C(x,y, r) above the interval [-ρ, ρ], for ρ>0 a small (fixed) quantity. With these reductions, define 𝒞 to be a neighborhood of (0,0,1); I a neighborhood of 0; h(x,y,r;t) = √(r^2-(t-x)^2)-y. Then it suffices to verify that D^hF_0 has full rank at (x,y,r) = (0,0,1); this is a straightforward computation.
For the Wolff circular maximal function, define Φ(x,y,r)= r. we have m=s+1, and hence Φ is automatically transverse to h. For the Bourgain circular maximal function we have Φ(x,y,r)=(x,y), and thus we must verify that DΦ restricted to the manifold
V_(0,0,1;0) = {(x',y',r')∈𝒞 h(x',y',r'; 0) = 1, ∂_t h(x',y',r'; t)|_t=0 = 0}
= {(x',y',r')∈𝒞 y' = 0 r' = 1- x'}
has rank 1 at (0,0,1). But this is evidently the case, since we can write this manifold as { (t, 0, 1-t) } for t in a neighborhood of 0.
Finally, we discuss the Erdoğan elliptic maximal function. Given an ellipse with semi-major axis a, semi-minor axis b, center (x,y), and rotation angle θ, define
A= a^2sin^2θ + b^2cos^2θ, B=2(b^2-a^2)sinθcosθ, C=a^2cos^2θ+b^2sin^2θ,
D=-2Ax-By, E=-Bx-2Cy, F=Ax^2+Bxy+Cy^2-a^2b^2.
Then the corresponding ellipse is the locus of points (X,Y) satisfying
AX^2+BXY+CY^2+DX+EY+F=0.
In light of the above, define
h(a,b,x,y,θ;t) = ( -(Bt+E) + √((Bt+E)^2 - 4C(At^2+Dt+F)) )/(2C).
Again, after translation, rotation, and anisotropic re-scaling, it suffices to consider the case where 𝒞 is a neighborhood of (1,1,0,0,0), i.e. the semi-major and semi-minor axes have lengths close to 1, and origin is close to (0,0), and the rotation is close to 0. With A,…,F as given by (<ref>), h is a function from 𝒞 to . The graph of t↦ h(a,b,x,y,θ;t) is the (upper half) of the ellipse with major axis a, minor axis b, center (x,y), and rotation θ. A direct computation shows that DF^h_0(1,1,0,0,0) has non-zero determinant.
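As a quick numerical cross-check of the parametrization (purely illustrative; the sample parameters below are arbitrary), one can verify that the points (t, h(a,b,x,y,θ;t)) produced by the quadratic formula above satisfy the conic equation AX^2+BXY+CY^2+DX+EY+F=0 to machine precision.

```python
import numpy as np

def conic_coeffs(a, b, x, y, th):
    A = a**2*np.sin(th)**2 + b**2*np.cos(th)**2
    B = 2*(b**2 - a**2)*np.sin(th)*np.cos(th)
    C = a**2*np.cos(th)**2 + b**2*np.sin(th)**2
    D = -2*A*x - B*y
    E = -B*x - 2*C*y
    F = A*x**2 + B*x*y + C*y**2 - a**2*b**2
    return A, B, C, D, E, F

def h(a, b, x, y, th, t):
    A, B, C, D, E, F = conic_coeffs(a, b, x, y, th)
    disc = (B*t + E)**2 - 4*C*(A*t**2 + D*t + F)
    return (-(B*t + E) + np.sqrt(disc)) / (2*C)

a, b, x, y, th = 1.1, 0.9, 0.05, -0.02, 0.1      # arbitrary ellipse near (1,1,0,0,0)
t = np.linspace(-0.3, 0.3, 7)
Y = h(a, b, x, y, th, t)
A, B, C, D, E, F = conic_coeffs(a, b, x, y, th)
print(np.max(np.abs(A*t**2 + B*t*Y + C*Y**2 + D*t + E*Y + F)))   # ~1e-15
```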
Next, we have Φ(a,b,x,y,θ) = (x,y). We must verify that DΦ restricted to the manifold
V_(1,1,0,0,0;0) = {(a', b', x',y',θ')∈𝒞 h(a', b', x',y',θ';t) = 1/4, ∂_t h(a', b', x',y',θ';t)|_t=0 = 0,
∂^2_t h(a', b', x',y',θ';t)|_t=0=-√(2), ∂^3_t h(a', b', x',y',θ';t)|_t=0=0}
has rank 1 at (1,1,0,0,0). But in a neighborhood of (1,1,0,0,0), this manifold can be written as (1+a_1(t), 1+a_2(t), 0, b(t), a_3(t)), where a_1,a_2,a_3 are smooth and satisfy a_i(0)=0, and ∂_t b(t)|_t=0∼ 1. Since Φ(a,b,x,y,θ)=(x,y), we conclude that DΦ restricted to V_(1,1,0,0,0;0) has rank 1, as desired.
§.§ The range of p in Theorem <ref> is sharp
In this section we will give an example showing that the range of p in Theorem <ref> is sharp. Define
h(x,y,w_1,…,w_s;t)= y + w_1(t-x)^2 + w_2(t-x)^3 + w_3(t-x)^4 + … + w_s(t-x)^s+1.
It is straightforward to show that every polynomial (in t) of degree ≤ s+1 can be uniquely expressed as h(x,y,w_1,…,w_s;t) for an appropriate choice of x,y,w_1,…,w_s.
For ρ>0 small, define
f(x,y) = (y+ρ)^-1/(s+1)χ_[-1,1]×[0,1].
Then ‖ f ‖_s+1∼ (log 1/ρ)^1/(s+1). On the other hand, for (x,y) in a neighborhood of (0,1), we can select w so that the curve t↦ h(x,y,w_1,…,w_s;t) is tangent to the x-axis to order s, and hence
∫_γ_uf ∼log(1/ρ),
where γ_u is the graph of t↦ h(x,y,w_1,…,w_s;t) over [-1,1]. Letting ρ↘ 0, we conclude that the operator M from (<ref>) cannot be bounded from L^p→ L^p for p=s+1. To show that no L^p→ L^p bound is possible for p<s+1 is straightforward: let h be as above, and let f be the characteristic function of a 1×ρ rectangle.
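The blow-up in this example can also be seen numerically. The sketch below is only an illustration: it models the order-s tangency by the distance |t|^s+1 to the x-axis, evaluates the curve integral by a Riemann sum and ‖f‖_s+1 in closed form, and shows their ratio growing (like a power of log(1/ρ)) as ρ↘0.

```python
import numpy as np

s = 2
t = np.linspace(-1.0, 1.0, 400_001)
dt = t[1] - t[0]

for rho in [1e-2, 1e-4, 1e-6, 1e-8]:
    # integral of (y + rho)^(-1/(s+1)) along a curve at height |t|^(s+1): ~ c*log(1/rho)
    curve_integral = np.sum((np.abs(t)**(s + 1) + rho)**(-1.0/(s + 1))) * dt
    # exact L^{s+1} norm of f on [-1,1]x[0,1]: (2*log((1+rho)/rho))^(1/(s+1))
    f_norm = (2.0 * np.log((1.0 + rho) / rho))**(1.0/(s + 1))
    print(f"rho={rho:.0e}  ratio={curve_integral / f_norm:.2f}")
```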
§ GEOMETRIC LEMMAS
In this section we will record several computations that explore some of the consequences of curve-rectangle tangency. Our main tool will be Taylor's theorem with remainder. In a typical argument in this section, we will approximate a function f by its k-th order Taylor polynomial, which we denote by f_k. To show that the function f cannot be small on a large set, we will need the analogous result for f_k. The following inequalities will be useful for this purpose.
Let I⊂ be a finite interval, let E⊂ I be measurable, and let P be a polynomial of degree at most D. Then
sup_x∈ I|P(x)|≤(4|I|/|E|)^Dsup_x∈ E|P(x)|.
Let P be a polynomial of degree at most D, with leading coefficient a∈ℝ. Then for λ>0, we have
|{x∈ℝ |P(x)|≤λ}|≤ 4(λ/2|a|)^1/D.
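For concreteness, both inequalities can be spot-checked numerically. The snippet below is an illustration only (it plays no role in the proofs): it tests them on the Chebyshev polynomial T_5, with E=[-1/2,1/2] for the first inequality and λ=0.3 for the second.

```python
import numpy as np

D = 5
cheb = np.polynomial.chebyshev.Chebyshev.basis(D)        # T_5 on [-1, 1]
I = np.linspace(-1.0, 1.0, 20001)

# Remez-type inequality: sup_I |P| <= (4|I|/|E|)^D * sup_E |P|, with E = [-1/2, 1/2].
E = I[np.abs(I) <= 0.5]
print(np.max(np.abs(cheb(I))) <= (4 * 2.0 / 1.0)**D * np.max(np.abs(cheb(E))))   # True

# Polya-type sub-level set bound: |{|P| <= lam}| <= 4 * (lam / (2|a|))^(1/D),
# where a = 2^(D-1) is the leading coefficient of T_D.
lam, a = 0.3, 2.0**(D - 1)
measure = np.mean(np.abs(cheb(I)) <= lam) * 2.0          # approximate length of the sub-level set
print(measure <= 4 * (lam / (2 * a))**(1.0 / D))         # True
```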
The next inequality says that if f is small on a long interval, then the derivatives of f must also be small on this interval.
Let k≥ 1, δ>0, and let f∈ C^k with ‖ f ‖_C^k≤ 1. Let I⊂[0,1] be a closed interval of length at most δ^1/k, and suppose that |f(x)|≤δ for x∈ I.
Then
sup_x∈ I|f^(i)(x)|≲δ|I|^-i, i=0,…,k.
First, we may suppose that δ^1/(k-1)<|I|≤δ^1/k, since otherwise (<ref>) for i=k follows from the assumption ‖ f ‖_C^k≤ 1, and we may replace k by k-1.
Let K=2· 8^k^2k. We will prove that (<ref>) holds with implicit constant K. Suppose not; then there exists an index 0≤ i≤ k so that
sup_x∈ I|f^(i)(x)| > K 8^-ikδ|I|^-i.
Let j be the largest index for which (<ref>) holds. Since sup_x∈ I|f(x)|≤δ, ‖ f‖_C^k≤ 1, and |I|≤δ^1/k, we must have 1≤ j≤ k-1.
Select x_0∈ I with |f^(j)(x_0)|=sup|f^(j)|. Define
Q(x) = f(x_0) + ∑_i=1^j (x-x_0)^i/i! f^(i)(x_0).
By Pólya's sub-level set inequality (Theorem <ref>) with λ = K8^-(k+1)jδ/j!, we have
|{x∈ I |Q(x)| ≤λ}| ≤ 4( λ/2 |f^(j)(x_0)|/j! )^1/j≤ 4( K8^-(k+1)jδ/j!/ 2· K8^-jkδ|I|^-j/j! )^1/j≤1/2|I|.
In particular, there exists a point x∈ I with |Q(x)|≥ K8^-(k+1)jδ/j!. On the other hand, by Taylor's theorem there is a point x_1 between x_0 and x so that
f(x) = Q(x) + (x-x_0)^j+1/(j+1)!f^(j+1)(x_1),
and hence
|f(x)| ≥ |Q(x)| - |x-x_0|^j+1/(j+1)!|f^(j+1)(x_1)|
≥K8^-(k+1)jδ/j! - |I|^j+1/(j+1)!( K8^-(j+1)kδ|I|^-j-1)
≥ K8^-jkδ( 1/8^jj! - 1/8^k(j+1)!)
≥K/8^k^2kδ
>δ.
This contradicts the assumption that |f(x)|≤δ on I. We conclude that (<ref>) holds.
The next result says that if f is tangent to a (δ; k-1; T) rectangle R, then there is a corresponding value of ρ≥δ (which depends on δ, k, and T) so that f is tangent to a (ρ;k) rectangle associated to R.
Let k≥ 1, δ>0, and let f∈ C^k with ‖ f ‖_C^k≤ 1. Let I = [a,a+(Tδ)^1/k-1] ⊂[0,1], with T≥ 1. Suppose that |f(x)|≤δ for x∈ I.
Let ρ = max(δ, T^-k). Then |f(x)|≲ρ for x∈ [a, a+ρ^1/k].
If T≥δ^-1/k then (Tδ)^1/k-1≥δ^1/k and the conclusion is immediate.
Suppose instead that T<δ^-1/k, so ρ=T^-k. Since ‖ f ‖_C^k≤ 1 and |I|≤δ^1/k, we can apply Lemma <ref> to conclude that
sup_x∈ I|f^(i)(x)|≲δ|I|^-i = δ^1-i/k-1ρ^i/k(k-1), i=0,…,k-1.
Since ‖ f ‖_C^k≤ 1, we can apply Taylor's theorem to conclude that for each x∈ [a, a+ρ^1/k], there is a point x_1 between a and x so that
f(x) = ∑_i=0^k-1f^(i)(a)/i!(x-a)^i + f^k(x_1)/(k)!(x-a)^k,
and hence by (<ref>) (and noting that ρ≥δ),
|f(x)|
≲∑_i=0^k-1δ^1-i/k-1ρ^i/k(k-1)ρ^i/k + ρ^k/k
= ∑_i=0^k-1δ(ρ/δ)^i/k-1 + ρ≲ρ.
The next result says that if a set of functions are all tangent to a common curvilinear rectangle of dimensions Aδ×δ^1/k, then a large fraction of these functions must be tangent to a common curvilinear rectangle of dimensions δ×δ^1/k. The proof is an application of pigeonholing, and is omitted.
Let k≥ 1, A≥ 1. Then there exists ε>0 so that the following holds. Let F be a set of functions with C^k norm at most 1. Suppose that there is an interval I⊂[0,1] of length δ^1/k so that sup_x∈ I|f(x)-g(x)|≤ A δ for all f,g∈ F. Then there is a set F'⊂ F of cardinality at least ε(#F) so that sup_x∈ I|f(x)-g(x)|≤δ for all f,g∈ F'.
The next result records a useful property of families of curves that forbid k-th order tangency: if two functions from this family are both tangent to a common (δ; k; T) rectangle for T≥ 1, then these functions must be close in C^k norm. The proof is similar to that of Lemma <ref>, and is omitted.
Let k≥ 2, δ>0, K≥ 1. Let I be a compact interval and let f∈ C^k(I) with ‖ f ‖_C^k(I)≤ 1. Let J⊂ I be a closed interval of length at most 1. Suppose that
sup_x∈ J|f(x)|≤δ,
and
‖ f ‖_C^k(I)≤ Kinf_x∈ I∑_j=0^k|f^(j)(x)|.
Then
‖ f ‖_C^k(I)≲ (K/|J|)^k δ.
plain
|
http://arxiv.org/abs/2307.04557v1 | 20230710134648 | Exploring Non-Standard Quark Interactions through Solar Neutrino Studies | [
"Ilídio Lopes"
] | hep-ph | [
"hep-ph",
"astro-ph.SR",
"hep-ex"
] |
Centro de Astrofísica e Gravitação - CENTRA,
Departamento de Física, Instituto Superior Técnico - IST,
Universidade de Lisboa - UL, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
[email protected]
We investigate the effects of a Non-Standard Interaction (NSI) extension of the standard model of particle physics on solar neutrino flavour oscillations. This NSI model introduces a U_Z^'(1) gauge symmetry through a Z^'
boson that mixes with the photon, creating a neutral current between active neutrinos and matter fields via a unique coupling to up and down quarks. The interaction is defined by a
single parameter, ζ_o, which is related to the Z^' boson's mass m_Z^'
and coupling constant g_Z^'. Notably, this model relaxes the bounds on Coherent Elastic Neutrino-Nucleus
Scattering experiments and fits the experimental values of the anomalous magnetic dipole moment of the muon.
In this study, we use solar neutrino measurements and an up-to-date standard solar model to
evaluate the neutrino flavour oscillations and assess the constraints on ζ_o.
Our study indicates that the NSI model aligns with the current solar neutrino data when ζ_o is between -0.7 and 0.002. These models have χ^2_ν values equal to or better than the standard neutrino flavor oscillation model, which stands at a χ^2_ν of 3.12. The best NSI model comes with a ζ_o value of -0.2 and a χ^2_ν of 2.96. Including extra data from the Darwin experiment in our analysis refines the range of ζ_o values from -0.7 to 0.002, down to -0.5 to -0.002.
These results hint at the possible existence of novel interactions, given that NSI models achieve a comparable or superior fit to the solar neutrino data when contrasted with the prevailing standard model of neutrino flavour oscillation.
Exploring Non-Standard Quark Interactions through Solar Neutrino Studies
Ilídio Lopes
August 12, 2023
========================================================================
§ INTRODUCTION
Neutrinos are widely regarded as one of the most valuable probes for studying the Standard Model (SM) of elementary particles and fundamental interactions, thanks to their unexpected behavior when compared to other elementary particles <cit.>. This insight has been derived from extensive experimental data sets from detectors around the world. Our knowledge of neutrinos spans many different physical contexts and energy scales, from detecting astrophysical neutrinos with energies ranging from MeV to PeV, to producing them in nuclear reactors and accelerators with energies above MeV and GeV, respectively <cit.>.
Astrophysical neutrinos have been historically at the heart of some of the most compelling challenges to modern physics and astrophysics. This sphere of exploration includes groundbreaking discoveries, such as the detection of solar neutrinos, as evidenced by <cit.>, and the identification of neutrino production in remarkable events like Supernova 1987A, as reported by <cit.> and <cit.>.
Additionally, the recent discovery of high-energy neutrinos sourced from distant celestial entities, as chronicled by the <cit.>, highlights the substantial advancements unfolding within the specialized field of neutrino astronomy.
A historical perspective on the critical role of astrophysical neutrinos within modern physics can be gleaned from comprehensive reviews, such as those by <cit.>. These phenomena have collectively designated neutrinos as the ultimate messengers of novel physics extending beyond the Standard Model's boundaries.
Despite the SM providing the framework for how neutrinos interact with leptons and quarks through weak interactions, many fundamental questions remain unanswered, such as the mechanism for neutrino mass generation or whether neutrinos are Dirac or Majorana particles. For a more detailed account, please refer to the comprehensive reviews by <cit.> and
<cit.>.
These questions provide solid motivation for thoroughly testing the standard picture of the three-neutrino flavour oscillation <cit.>. Specifically, neutrino oscillations over the years have presented compelling evidence for novel physics surpassing the boundaries of the Standard Model, as evidenced by <cit.> and <cit.>. Consequently, they function as a highly effective tool for examining the possible presence of novel particles and their interactions. With the increasing sensitivity of neutrino experiments <cit.>, it is timely to investigate whether there are any new interactions between neutrinos and matter.
The particle physics community has proposed many alternative neutrino physics models to address these questions, including simple extensions to the SM and models addressing the origin of dark matter, dark energy, and experimental neutrino anomalies <cit.>.
These models encompass the introduction of novel particles, including new types of fermions and bosons, such as sterile neutrinos and axion-like particles <cit.>.
In this article, we delve into the impact of a new quark neutrino interaction on the three neutrino flavour oscillation model <cit.>, which is predicted by the current standard solar model <cit.>. This Non-Standard Interaction (NSI) model, developed by <cit.>, provides a compelling explanation for some of the unsettled experimental data, including the Coherent Elastic Neutrino-Nucleus Scattering (CEν NS) experiments <cit.> and the anomalous magnetic dipole moment of the muon (g-2)_μ <cit.>. This model is based on a U(1) gauge symmetry, incorporating a light gauge boson that mixes with the photon <cit.>.
The coupling of neutrinos with up (u-) and down (d-) quarks leads to a ratio that nullifies the contribution to the CEν NS amplitude, relaxing the constraint on the NSI model with the CEν NS experimental measurements <cit.>. Furthermore, the constraints imposed on the parameter space of this model through experimental and observational bounds lead to a solution that is compatible with the (g-2)_μ anomaly.
Here, we present novel constraints on the NSI model using state-of-the-art solar neutrino data and an up-to-date standard solar model <cit.>. Furthermore, we determine the parameter range that is consistent with solar neutrino experimental measurements and predict potential constraints that could be derived from future neutrino experiments.
The article is organized as follows: Section <ref> provides a summary of the Non-Standard quark-neutrino model used in this work. In Section <ref>, we calculate the survival probability of electron neutrinos. Next, Section <ref> presents the constraints obtained from the standard solar model. Finally, Section <ref> provides a summary and draws conclusions.
§ NEUTRINOS AND NON-STANDARD INTERACTION WITH QUARKS
Here, we consider an extension to the standard model of elementary particles and fundamental interactions with a new
interaction between active neutrinos and up and down quarks <cit.>.
Accordingly, we consider that our model's Lagrangian density
L corresponds to the sum of the standard model's Lagrangian L_ST plus a Non-Standard Interaction (NSI) Lagrangian L_NSI. Hence,
L= L_ST+ L_NSI,
where
L_NSI is the effective Lagrangian that describes the NSI contribution resulting from the neutrino propagation in matter <cit.>.
In this study, we focus on an extension of the standard model by a new local group U_Z^'(1). Z^' denotes the gauge boson of the U_Z^'(1) symmetry group. We also assume that Z^' has a mass m_Z^' and couples to matter with a coupling constant g_Z^'. The L_NSI
corresponds now to a NSI vector-like interaction <cit.>, such that L_NSI≡ L_Z^', where
L_Z^' is defined as
L_Z^' =2√(2)G_Fϵ_αβ^f(ν̅_αγ_μ(1-γ_5)/2ν_β) (f̅γ^μ f ),
where α and β refer to neutrino flavours e, μ and τ; and f and f̅ correspond to the fermions or anti-fermions:
up quarks, down quarks and electrons.
The previous Lagrangian (equation <ref>)
corresponds to an NSI model with an arbitrary ratio of NSI couplings to the u — and d — quarks <cit.>.
Since we are interested in only the contribution of the NSI interaction for the neutrino oscillation experiments, only the vector part contributes to the interaction ϵ_αβ^f. Consequently, the coherent forward scattering of neutrino in the matter is unpolarized <cit.>.
In the case where |ϵ_αβ^f|∼ 1, the contribution of the NSI becomes as strong as the weak interaction. We notice that, in the limit ϵ_αβ^f=0, we obtain the standard case for which
L= L_ST (L_NSI=0).
Here, we describe the propagation of neutrinos through vacuum and matter employing the three-flavour neutrino oscillation model
<cit.>.
As usual, we follow the standard convention, (ν_e,ν_τ,ν_μ), (ν_1,ν_2,ν_3) and (m_1,m_2,m_3) correspond to the neutrino flavours, neutrino mass eigenstates and the associated neutrino masses. Accordingly, the neutrino evolution equation reads
i dΨ/dr= H_νΨ
=( H_ vac
+ H_ mat) Ψ
where r (distance to the centre of the Sun) is the coordinate along the neutrino trajectory, H_ν is the Hamiltonian and
Ψ=(ν_e,ν_τ,ν_μ)^T.
Conveniently, we can decompose this H_ν in a vacuum and matter components: H_ vac=𝐔 M^2 𝐔^ †/(2E) and H_ mat≡ V, where E is the energy of the neutrino, M^2= diag(0,Δ m_21^2,Δ m_31^2)
is the neutrino mass matrix, 𝐔 is a unitary matrix describing the mixing of neutrinos in vacuum, V is a diagonal matrix of Wolfenstein potentials.
Δ m^2_21 and Δ m^2_31 are the mass-squared differences between neutrinos of different mass eigenstates, such as Δ m^2_21= m_2^2-m_1^2 and Δ m^2_31= m_3^2-m_1^2. Moreover, we decompose V into two additional components <cit.>, one related to the standard matter interactions and another one to NSI interactions:
V= V^SM+ V^NSI,
where V^SM is the standard matter Wolfenstein potential defined as V^SM= diag(V^SM_e,0,0), and V^NSI is
the NSI matter Wolfenstein potential defined as
V^NSI= diag(0,V^NSI_μ,V^NSI_τ).
Therefore, the Non-Standard Interactions matrix, symbolized as V^NSI, is characterized as a diagonal 3× 3 matrix, mirroring the structure of the standard Wolfenstein potential denoted as V^SM.
This process corresponds to a generalisation of the well-known Mikheyev-Smirnov-Wolfenstein effect <cit.>. The standard Wolfenstein potential for neutrino propagation <cit.> is conveniently defined as
V^SM_e= √(2)G_Fn_e(r),
where G_F is the Fermi constant and n_e(r) is the number density of electrons inside the Sun.
In this study we focus on the NSI model proposed by
<cit.>. They have opted to impose in this NSI model the additional condition: the lepton numbers L_μ and L_τ, the baryon numbers B_i with flavour i (such that i=1,2,3 corresponding to the three generations) and any arbitrary real value of c_o, fulfil the following rule:
L_μ+L_τ-c_o(B_1+B_2)-2B_3(1-c_o), which accommodates the B meson anomalies observed at LHC <cit.>, under which the model is anomaly-free <cit.>. The relationship established earlier shows that if we consider an arbitrary real number, such as c_o≠ 2/3, then the U_Z^'(1) charges of the third generation of quarks will differ from those of the first and second generations. In the model calculated by <cit.>, the Non-Standard Interaction contribution to the potential, which relates to neutrino propagation in matter, assumes a straightforward form: V^NSI_μ=V^NSI_τ=V_Z^'. Here, V_Z^' is defined as
V_Z^'= 2√(2)G_F n_e(r) ϵ_Z'(r),
demonstrating the relationship between the NSI potential, the Fermi constant (G_F), electron density (n_e), and the NSI strength parameter (ϵ_Z').
In the previous equation, ϵ_Z'(r) estimates the contribution of the NSI Lagrangian. Here, ϵ_Z'(r) is given by
ϵ_Z'(r) = ζ_on_n(r)+n_p(r)/n_e(r),
where ζ_o= - c_o g_Z^'^2/(2√(2) G_F m_Z^'^2), and n_n(r) and n_p(r) are the number density of neutrons and protons or u — quarks and d — quarks
inside the Sun. We notice that ζ_o, like c_o, can take positive or negative values. A detailed account of this model is available in <cit.>, and additional information is available in other related articles <cit.>.
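For orientation, the definition of ζ_o can be evaluated directly from g_Z^' and m_Z^'. The helper below is a minimal illustrative sketch; the parameter values in the example call are hypothetical and are not taken from this work.

```python
import math

G_F = 1.1663787e-5   # Fermi constant in GeV^-2

def zeta_o(c_o, g_Zprime, m_Zprime_GeV):
    """zeta_o = -c_o * g_Z'^2 / (2*sqrt(2)*G_F*m_Z'^2)."""
    return -c_o * g_Zprime**2 / (2.0 * math.sqrt(2.0) * G_F * m_Zprime_GeV**2)

# Hypothetical parameter point (illustrative only):
print(zeta_o(c_o=1.0, g_Zprime=5.0e-5, m_Zprime_GeV=0.010))   # about -0.76
```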
Furthermore, we will assume that the Z^' boson's mass is sufficiently large, and there is no need to consider the size of the medium in the computation of the Wolfenstein potentials <cit.>.
The standard three-flavour neutrino oscillation model features a universal term, denoted as V_e^SM, that applies to all active neutrino flavors and does not alter the flavor oscillation pattern. This allows us to simplify the model by setting V= V^SM≡ diag(V^SM_e,0,0)
Now, the inclusion of NSI interaction in the model alters V (see Equation <ref>) by incorporating a new interaction with u — and d — quarks, as a consequence
V= diag(V^SM_e,V_Z^',V_Z^').
Now, if we subtract the common term V_Z^' (equation <ref>) from the diagonal matrix V <cit.>, the latter takes the simple form V= diag(V_ eff,0,0) with V_ eff≡ V^SM_e-V_Z^' defined as:
V_ eff= √(2)G_F n_ eff(r)
and n_ eff(r) is the effective number density given by
n_ eff= n_e(r)
[1-2 ϵ_Z'(r)
],
where ϵ_Z' is given by equation
(<ref>).
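To make the role of ζ_o concrete, the following minimal sketch shows how ϵ_Z'(r) and the effective density n_eff(r) of the last two equations would be evaluated. It is not the code used in this work, and the density profiles below are made-up placeholders rather than a standard solar model.

```python
import numpy as np

def epsilon_Zprime(zeta_o, n_p, n_n, n_e):
    """eps_Z'(r) = zeta_o * (n_n(r) + n_p(r)) / n_e(r)."""
    return zeta_o * (n_n + n_p) / n_e

def n_effective(zeta_o, n_e, n_p, n_n):
    """n_eff(r) = n_e(r) * (1 - 2*eps_Z'(r)); then V_eff = sqrt(2)*G_F*n_eff."""
    return n_e * (1.0 - 2.0 * epsilon_Zprime(zeta_o, n_p, n_n, n_e))

# Toy illustrative profiles (NOT a standard solar model):
r = np.linspace(0.0, 0.99, 100)          # radius in units of the solar radius
n_e = 6.0e25 * np.exp(-10.5 * r)         # electrons per cm^3, rough exponential shape
n_p = 0.85 * n_e                         # assumed proton fraction (hypothetical)
n_n = 0.25 * n_e                         # assumed neutron fraction (hypothetical)

print(n_effective(-0.2, n_e, n_p, n_n)[:3])   # n_eff near the centre for zeta_o = -0.2
```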
§ SOLAR NEUTRINOS: SURVIVAL PROBABILITY OF ELECTRON NEUTRINOS
We compute the survival probability of electron neutrinos P_e(E) of several NSI models with different ζ_o (equation <ref>) values and compare them with the data from recent solar neutrino experiments.
Several groups have shown that, at a reasonable approximation, the neutrino flavour oscillations are adiabatic <cit.>. As such, we can compute a full analytical P_e(E) expression that agrees with the current solar neutrino data <cit.>. Moreover, many authors opted to include a second-order non-adiabatic contribution in P_e(E) by modifying the original adiabatic P_e(E) expression
<cit.>.
The reader can find a detailed discussion about non-adiabatic neutrino flavour oscillations in many articles, among others, the following ones: <cit.>.
Here, we follow a recent review of particle physics on this topic <cit.>, specifically in the computation described in the "Neutrino Masses, Mixing, and Oscillations" section <cit.>. The survival probability of electron neutrinos P_e(E) is given by
P_e(E)≈cos^4(θ_13)P_e^2ν_e+sin^4(θ_13)
and
P_e^2ν_e(E)=1/2+(1/2-P_γ)cos(2θ_12)cos(2θ_m).
In the previous expression, P_e^2ν_e(E) gives the survival probability of electron neutrinos in the two neutrino flavour model (θ_13=0), P_γ computes the probability jumps coming from the non-adiabatic correction, and θ_m=θ_m(r_s) is the matter mixing angle <cit.>. θ_m is evaluated in the neutrino production (source) region located at a distance r_s from the Sun's centre <cit.>.
The jump probability P_γ reads
P_γ=e^-γsin^2θ_12-e^-γ/1-e^-γ P_ H
where γ=2π h_γΔ m_21^2/2E, h_γ is the scale height <cit.> and
P_ H is a regular step function. The matter mixing angle <cit.> θ_m is given by
cos(2θ_m)=A_m/√(A_m^2 +sin^2(2θ_12))
where A_m reads
A_m=cos(2θ_12)-V_m/Δ m^2_21.
In the standard case <cit.>, it corresponds to V_m=2V_e^SMcos^2(θ_13)E, where V_e^SM is given by equation (<ref>). In this study, however, V_e^SM(r) is replaced by the new effective potential V_ eff(r) given by equation (<ref>), with n_ eff(r) given by equation (<ref>).
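For illustration only, the following sketch (ours) implements the adiabatic limit of the survival probability above; the matter term V_m = 2 V_eff(r_s) cos^2(θ_13) E must be supplied in the same units as Δ m^2_21, and the mixing parameters are the central values quoted below.

import numpy as np

SIN2_T12, SIN2_T13 = 0.318, 0.02250   # central values of the global fit quoted in the text
DM2_21 = 7.50e-5                      # eV^2

def survival_probability(V_m, P_gamma=0.0):
    # V_m = 2 * V_eff(r_s) * cos^2(theta_13) * E, in the same units as DM2_21
    cos_2t12 = 1.0 - 2.0 * SIN2_T12                 # cos(2 theta_12)
    sin2_2t12 = 4.0 * SIN2_T12 * (1.0 - SIN2_T12)   # sin^2(2 theta_12)
    A_m = cos_2t12 - V_m / DM2_21
    cos_2tm = A_m / np.sqrt(A_m**2 + sin2_2t12)
    P_2nu = 0.5 + (0.5 - P_gamma) * cos_2t12 * cos_2tm
    return (1.0 - SIN2_T13)**2 * P_2nu + SIN2_T13**2

print(survival_probability(0.0))   # vacuum limit, roughly 0.54 for these parameters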
We remind the reader that we use standard parametrization for the neutrino flavour oscillations: mass square splitting and angle between neutrinos of different flavours <cit.>.
Hence, we adopt the recent values obtained by the data analysis
of the standard three-neutrino flavour oscillation model obtained by
<cit.>.
Accordingly, for a parameterisation with a normal ordering of neutrino masses the mass-square difference and the mixing angles have the following values <cit.>:
Δ m^2_21= 7.50^+0.22_-0.20× 10^-5 eV^2,
sin^2θ_12=0.318± 0.016,
and sin^2θ_13=0.02250^+0.00055_-0.00078.
Similarly Δ m^2_31= 2.55^+0.02_-0.03× 10^-3 eV^2 and sin^2θ_23=0.574± 0.014.
The maximum production of neutrinos in the Sun's core occurs in a region between 0.01 and 0.25 solar radius, with the neutrino nuclear reactions of the proton-proton chain and carbon-nitrogen-oxygen cycle occurring at different locations <cit.>. These neutrinos, produced at various values of r_s, follow paths of different lengths when travelling towards the Sun's surface. Moreover, neutrinos experience varying plasma conditions during their travel, including a rapid decrease of the electron density from the centre towards the surface. In general, we expect non-adiabatic corrections to average out and be negligible along the trajectory of the neutrinos, except at the boundaries (layers of rapid potential variation) of the neutrino path, typically around the neutrino production point or at the surface of the Sun.
Therefore, we could expect equation (<ref>) to be very different when considering such effects.
Nevertheless, this is not the case: <cit.> analysed in detail the contribution to P_e
(equation <ref>) coming from non-adiabaticity corrections and variation on the locations of neutrino production, i.e., r_s, and they found that the impact is minimal.
Generally, P_γ=0 (equation <ref>) corresponds to an adiabatic flavour conversion and P_γ≠ 0 to a non-adiabatic one. For reference, the conversion is called non-adiabatic only if P_γ takes a non-negligible value.
We notice that inside the Sun, the number densities of electrons, protons, and neutrons vary considerably among the different neutrino paths.
Accordingly, n_e(r) , n_p(r) and n_n(r) decrease monotonically from the centre towards the surface. As the neutrinos produced in the core propagate towards the surface, a fraction is converted to other flavours. The magnitude of this conversion depends on the neutrino's energy and the coupling constant to electrons, up quarks and down quarks. We remember that in the standard neutrino flavour oscillation model with ζ_o=0, only the n_e(r) contributes to the matter flavour conversion. However, in our NSI model with ζ_o 0, the n_p(r) and n_n(r) also participate in the flavour conversion.
Neutrinos in their path will cross a layer where A_m=0 (equation <ref>). This layer is defined by the resonance condition:
V_m= Δ m^2_21cos(2θ_12).
We compute the effective number density associated with the resonance condition by matching equations (<ref>) and (<ref>). Therefore, the n_ eff in the resonance layer reads
n^o_ eff≡ n_ eff(r_o)= Δ m^2_21cos(2θ_12)/[2√(2) G_F E cos^2(θ_13)],
where r= r_o (h_γ) is defined as the layer where the resonance condition n_ eff(r_o)= n_ res (E) occurs.
We observe that in the previous equation n^o_ eff corresponds to the quantity defined in equation (<ref>); note that in the classic case (ϵ_Z^'=0) the effective number density reduces to the electron number density in the resonance layer, n_ eff(r_o)=n^o_e(r_o). In general, the adiabatic or non-adiabatic nature of the neutrino oscillations depends on the neutrino's energy E and on the value of the resonance density n_ res(E) (equation <ref>) relative to n_ eff. For instance, for a neutrino of energy E: (i) if n^o_ eff(E) ≫ n_ eff, neutrinos oscillate practically as in vacuum; (ii) if n^o_ eff(E) ≪ n_ eff, oscillations are suppressed in the presence of matter <cit.>.
In our models, most cases correspond to adiabatic transitions, for which P_γ≈ 0. Nevertheless, it is possible to compute the contribution of the non-adiabatic component P_γ to P_e(E) by using equation (<ref>) and the following prescription: (i) compute the value of n^o_ eff (using equation <ref>) for each value of E (with fixed values of Δ m^2_21, θ_12 and θ_13); (ii) calculate the scale height h_γ =|n_ eff/(d n_ eff/dr)|_r_o at the point r_o defined by n_ eff(r_o)=n^o_ eff(E); (iii) calculate γ and P_γ for this value of h_γ. The scale height h_γ can equivalently be written as h_γ =|(d ln n_ eff/dr)^-1|_r_o.
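A numerical version of this prescription could look as follows (our sketch); r and n_eff are radial profile arrays in consistent natural units, and the helper resonance_density, introduced here for illustration, implements the resonance condition of the previous equation.

import numpy as np

def resonance_density(E, dm2_21=7.50e-5, sin2_t12=0.318, sin2_t13=0.02250, G_F=1.1663787e-5):
    # n_eff at resonance: Dm^2_21 cos(2 theta_12) / [2 sqrt(2) G_F E cos^2(theta_13)]
    return dm2_21 * (1.0 - 2.0 * sin2_t12) / (2.0 * np.sqrt(2.0) * G_F * E * (1.0 - sin2_t13))

def jump_probability(r, n_eff, E, dm2_21=7.50e-5, sin2_t12=0.318):
    n_res = resonance_density(E)
    if n_res >= n_eff.max():                  # resonance layer never reached: treat as adiabatic (P_H = 0)
        return 0.0
    i_res = np.argmin(np.abs(n_eff - n_res))  # index of the resonance layer r_o
    dlnn_dr = np.gradient(np.log(n_eff), r)
    h_gamma = np.abs(1.0 / dlnn_dr[i_res])    # scale height |(d ln n_eff / dr)^-1| at r_o
    gamma = 2.0 * np.pi * h_gamma * dm2_21 / (2.0 * E)
    return (np.exp(-gamma * sin2_t12) - np.exp(-gamma)) / (1.0 - np.exp(-gamma))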
Conveniently, to properly take into account the non-adiabatic correction into equations (<ref>) and (<ref>), we included the step function P_ H, defined as P_ H (V_m - Δ m^2_21cos(2θ_12) ). This function is one for
Δ m^2_21cos(2θ_12)≤ V_m, and is 0 otherwise
<cit.>.
Figure <ref> shows P_e (E) for the standard neutrino flavour oscillation model. In this study we focus on the solar neutrino energy window (0.1 up to 20 MeV), where the P_γ contribution to P_e(E) is negligible.
Numerous studies <cit.> have highlighted that the nuclear reactions occurring in the Sun's core produce a significant amount of electron neutrinos. Due to their extensive mean free path, these neutrinos interact minimally with the solar plasma as they travel towards Earth. During their journey, these particles, whose energies span the range from 0.1 to 100 MeV, undergo flavour oscillations: lower-energy neutrinos experience flavour transformations due to vacuum oscillations, while higher-energy neutrinos undergo additional flavour conversion through the MSW effect, i.e., matter flavour oscillations <cit.>. This additional oscillation mechanism is significantly influenced by both the location of the neutrino-emitting nuclear reactions and the energy of the produced neutrinos.
Here, we will investigate the influence of these revised NSI neutrino models on the flux variation of different neutrino flavours. Specifically, we will consider how these variations are affected by the local alterations in the distributions of protons and neutrons. This new flavour mechanism will affect all electron neutrinos produced in the proton-proton (PP) chain reactions and carbon-nitrogen-oxygen (CNO) cycle
<cit.>.
Therefore, the survival probability of electron neutrinos associated with each nuclear reaction will depend on the location of the neutrino source in the solar interior. A detailed discussion of how the location of solar neutrino sources affects P_e(E) (equation <ref>) can be found on <cit.>. The average survival probability of electron neutrinos for each nuclear reaction in the solar interior, i.e., P_e,k (≡⟨ P_e (E)⟩_k) is computed
as
P_e,k (E) = A_k^-1∫_0^R_⊙ P_e (E,r) ϕ_k (r) 4πρ(r) r^2 dr,
where A_k (=∫_0^R_⊙ϕ_k (r) 4 πρ (r) r^2 dr, in which ϕ_k (r) is the electron neutrino emission function
for the k solar nuclear reaction) is a normalization constant, and k corresponds to the following solar neutrino sources: pp, pep, ^8B, ^7Be, ^13N, ^15O and ^17F.
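The emission-weighted average above can be approximated by simple quadrature; in the sketch below (ours), r, phi_k and rho are radial arrays from a solar model and P_e_of_E_r is any callable returning P_e(E, r), for instance built from the survival probability discussed earlier.

import numpy as np

def averaged_survival(E, r, phi_k, rho, P_e_of_E_r):
    # <P_e(E)>_k = A_k^-1 * integral of P_e(E, r) * phi_k(r) * 4 pi rho(r) r^2 dr
    weight = phi_k * 4.0 * np.pi * rho * r**2
    A_k = np.trapz(weight, r)                       # normalisation constant A_k
    P_vals = np.array([P_e_of_E_r(E, ri) for ri in r])
    return np.trapz(P_vals * weight, r) / A_k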
The probability of electron-neutrinos changing flavour is influenced by variables tied to both vacuum and matter oscillations and the intrinsic physics of the Sun's interior. In particular, matter flavour conversion significantly relies on the local plasma conditions. Consequently, the quantity of electron neutrinos detected on Earth for each 'k' species, as indicated by Φ_⊗,k(E), diverges markedly from the electron neutrinos generated by each neutrino-producing nuclear reaction, denoted as Φ_⊙,k(E). These quantities are related as follows:
Φ_⊗,k (E) =P_e,k (E) Φ_⊙,k (E),
where P_e,k (E) (equation <ref>) is the electron-neutrino survival probability of a neutrino of energy E. In this study k is equal to ^8B or ^7Be.
§ CONSTRAINTS TO NSI NEUTRINO MODEL
We now turn our attention to the impact of the Non-Standard Interactions model on neutrino flavour oscillations, as explored in previous sections. Specifically, we calculate the survival probability of electron neutrinos for varying values of the NSI parameter ζ_o (as per equation <ref>). This analysis applies to an updated standard solar model characterized by low metallicity, or 'low-Z.' A comprehensive explanation of the origins of low-Z solar models is presented in the review article of <cit.>. For further exploration of the impact of low metallicity on solar modelling, we refer to the articles by <cit.>,
<cit.> and <cit.>.
We obtain the present-day Sun's internal structure using an up-to-date standard solar model that agrees relatively well with current neutrino fluxes and helioseismic data sets. To that end, we use a one-dimensional stellar evolution code that follows the star's evolution from the pre-main sequence phase until the present-day solar structure: age, luminosity and effective temperature of 4.57 Gyr, 3.8418× 10^33 erg s^-1, and 5777 K, respectively. Moreover, our solar reference model has the following observed abundance ratio at the Sun's surface: (Z_s/X_s)_⊙=0.01814, where Z_s and X_s are the metal and hydrogen abundances at the star's surface <cit.>. The details about the physics of this standard solar model, in which we use the AGSS09 (low-Z) solar abundances <cit.>, are described in <cit.>.
Figure <ref> compares our predictions with current solar neutrino data. Each data point illustrated herein represents the measured survival probabilities of electron-neutrinos, as captured by three solar neutrino detectors: SNO, Super-Kamiokande, and Borexino.
In detail: Borexino data includes measurements from pp reactions (yellow diamond), ^7Be reactions (red upward-triangle), pep reactions (blue downward-triangle), and ^8B reactions in the High-Energy Region (HER), presented in salmon (HER), orange (HER-I), and magenta (HER-II) circles. SNO's ^8B measurements are denoted by a cyan square, while the joint KamLAND/SNO ^7Be measurements are represented by a green square. Refer to <cit.> and included references for additional insight into this experimental data.
The lowest neutrino energy data point relates to the anticipated precision of the Darwin experiment in measuring P_e±Δ P_e (ζ_o=0). Here, Δ P_e has the potential to be as small as 0.017, as suggested by <cit.>.
Here, we compute P_e for several NSI models as given by equation (<ref>).
It shows P_e for the standard three neutrino flavour model (continuous red curve) and different NSI models (other continuous coloured curves).
Only a restricted set of NSI models with relatively low ζ_o agree with all the neutrino data. Notably, the NSI models with lower ζ_o have an explicit agreement with the ^8B measurements for neutrino energies just below 10 MeV (as depicted in Figure <ref>).
For illustration, we present a selection of NSI models that significantly diverges from the standard flavour oscillation model in their impact on P_e. The degree of effect in these NSI models depends on the value of ζ_o, the location of neutrino emission, and the energy spectrum of neutrinos from each nuclear reaction. We illustrate this impact in Figures <ref> and <ref>, demonstrating how the parameter ζ_o influences neutrino flavour oscillation (refer to equation <ref>) and modulates the ^8B spectrum (see equation <ref>).
To exemplify the influence of the neutrino source location on P_e, Figure <ref> displays curves based on the presumption that neutrinos originate from the Sun's center, indicated as 'Ref'. These curves are then juxtaposed with those derived from neutrinos generated by the ^8B nuclear reaction for a variety of ζ_o values.
To enhance the robustness of our analysis, we opt to calculate a chi-squared-like test (χ_ν^2 test). This test leverages the inherent dependence of P_e on the solar background structure. Therefore, we define this chi-squared-like test as follows:
χ^2_ν= ∑_i,k(P_e,k^obs(E_i)-P_e,k^th(E_i)/σ_obs(E_i))^2.
This function compares our theoretical predictions with the empirical data collected by various neutrino experiments, evaluated at different energy values, E, used to calculate the survival probability function P_e,k(E), as defined in equation (<ref>).
Here, the subscript 'obs' and 'th' signify the observed and theoretical values, respectively, at the neutrino energy E_i. The subscript i points to specific experimental measurements (refer to Figure <ref>), and k corresponds to the source of solar neutrino (see equation <ref>). The term σ_obs(E_i) represents the error in measurement i. The data points, P_e,k^obs(E_i), are measurements derived from solar neutrino experiments, as cited in <cit.>.
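In code, the statistic reduces to a few lines; in the sketch below (ours), data_points is a list of tuples (E_i, P_obs, sigma, k) built from the experimental values shown in the figure, and P_th is any callable returning the model prediction for source k at energy E_i.

def chi2_nu(data_points, P_th):
    # chi^2_nu = sum_i [(P_obs_i - P_th(E_i, k_i)) / sigma_i]^2
    return sum(((P_obs - P_th(E, k)) / sigma) ** 2 for E, P_obs, sigma, k in data_points)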
Figure <ref> presents the experimental data points, P_e,k^obs(E_i), juxtaposed with the curves of select NSI models. The corresponding χ^2_ν values for these models are explicitly listed in the figure's caption.
In the χ_ν^2 test, as described by equation (<ref>), the standard neutrino flavor model yields a χ_ν^2 value of 3.12.
For comparison, when the ζ_o values are at -2 and 2, the corresponding χ_ν^2 values are 5.26 and 111.6, respectively. Our study reveals that a χ_ν^2 value of 3.12 or less is achieved when ζ_o lies between -0.7 and 0.002. This result is visually demonstrated in Figure <ref> with a dashed horizontal line intersecting the blue curve, which connects the series of red circles at the points -0.7 and 0.002. According to this preliminary analysis, an NSI neutrino model with ζ_o=-0.2 yields a χ_ν^2 value of 2.96, suggesting a better fit to the solar neutrino data than the standard neutrino flavour model.
§ CONCLUSION
Currently, a new class of models based on flavour gauge symmetries with a lighter gauge boson is being proposed in the literature to resolve some of the current particle anomalies in the standard model of physics. These new interactions lead to non-standard neutral current interactions between neutrinos and quarks. Specifically, we focus on studying and testing an NSI model proposed by <cit.> that incorporates a new U(1) gauge symmetry through a light gauge boson Z^', which mixes with the photon. The interaction leads to a neutral current between active neutrinos and matter fields, with an arbitrary coupling to the up and down quarks. This model has some intriguing features, as it relaxes the bound on the Coherent Elastic Neutrino-Nucleus Scattering experiments and fits the measured value of the anomalous magnetic dipole moment of the muon.
In this paper, we analyze the impact of the NSI model proposed by <cit.> on neutrino flavour oscillations, using an up-to-date standard solar model that is in good agreement with helioseismology and neutrino flux data sets.
Specifically, we examine the impact of this non-standard interaction model on the survival probability of electron neutrinos, with a focus on the PP-chain nuclear reactions taking place in the Sun's core. Our results show that the shapes of the neutrino spectra vary with the location of the nuclear reactions in the core, depending on the algebraic value of ζ_o. The effect is particularly visible in the ^8B neutrino spectrum.
We find that the NSI models with -0.7 ≤ζ_o ≤ 0.002 fit the solar neutrino data equal or better than the standard neutrino flavour model. The best NSI model corresponds to ζ_o=-0.2. From equation (<ref>), we can derive a relationship between the mass of the Z^' boson m_Z^', the gauge coupling g_Z^', and the quark charge c_o: ζ_o= - c_o g_Z^'^2/(2√(2) G_F m_Z^'^2)=-0.2.
In essence, our research underscores the significance of neutrino oscillation analyses in assessing NSI models. Our findings reveal the potential of these neutrino models to refine the parameters of NSI models. This methodology provides a robust and independent means to confirm this class of NSI models, especially as they address certain existing experimental data anomalies, such as those observed in Coherent Elastic Neutrino-Nucleus Scattering experiments and in measurements of the muon's anomalous magnetic dipole moment.
In the future, the validation or exclusion of such a class of NSI models can be achieved more efficiently with new solar neutrino detectors that can obtain much more accurate measurements <cit.>.
For instance, the Darwin experiment <cit.> is set to generate data that can better calculate the survival rate of low-energy electron neutrinos (see Figure <ref>). The figure shows that by factoring in the predicted precision from Darwin and presuming the P(E) value to be standard at E=0.150 MeV (with ζ_o=0.0), we anticipate a P_e±Δ P_e where Δ P_e= 0.017. This additional data point from Darwin, when included in the χ^2 analysis, narrows down the set of NSI models that perform equal or better than the standard case in terms of χ^2. Specifically, it shifts the ζ_o interval from -0.7 to 0.002 to a tighter range of -0.5 to -0.002. Furthermore, the addition of this data point also decreases the χ^2/ d.o.f. value. For reference, in Figure <ref>, the models with a d.o.f. of 7 display χ_ν^2/ d.o.f. values that vary from 0.50 to 0.53 within the ζ_o range of -1 to 0.2, and hit a local minimum of χ_ν^2/ d.o.f.=0.4 at ζ_o=-0.2. Adding one more data point increases the d.o.f to 8 and adjusts the χ_ν^2/ d.o.f. range to 0.48 to 0.47. The local minimum remains at ζ_o=-0.2, but its value reduces to χ_ν^2/ d.o.f.=0.37.
This work emphasizes the significance of NSI models in defining the fundamental properties of particles and their interactions, driving theoretical progress in this research field. As research in experimental neutrino physics continues to advance at a rapid pace, studies of this nature will be critical for comprehensive analysis of neutrino properties <cit.>. We anticipate that the innovative approach outlined in this paper will offer a fresh perspective for exploring new particle physics interactions using the standard solar model combined with a comprehensive analysis of neutrino flavour oscillation experimental data.
§ ACKNOWLEDGMENTS
The author thanks the anonymous referee for the invaluable input which significantly enhanced the quality of the manuscript. I.L. would like to express gratitude to the Fundação para a Ciência e Tecnologia (FCT), Portugal, for providing financial support to the Center for Astrophysics and Gravitation (CENTRA/IST/ULisboa) through Grant Project No. UIDB/00099/2020 and Grant No. PTDC/FIS-AST/28920/2017.
§ REFERENCES
[Bilenky and Petcov(1987)] S. M. Bilenky and S. T. Petcov, Rev. Mod. Phys. 59, 671 (1987).
[Maltoni et al.(2004)] M. Maltoni, T. Schwetz, M. Tórtola, and J. W. F. Valle, New J. Phys. 6, 122 (2004), arXiv:hep-ph/0405172.
[Sajjad Athar et al.(2023)] M. Sajjad Athar, A. Fatima, and S. K. Singh, Prog. Part. Nucl. Phys. 129, 104019 (2023).
[Balantekin and Kayser(2018)] A. B. Balantekin and B. Kayser, Annu. Rev. Nucl. Part. Sci. 68, 313 (2018).
[Davis et al.(1968)] R. Davis, D. S. Harmer, and K. C. Hoffman, Phys. Rev. Lett. 20, 1205 (1968).
[Hirata et al.(1987)] K. Hirata et al., Phys. Rev. Lett. 58, 1490 (1987).
[Bionta et al.(1987)] R. M. Bionta et al., Phys. Rev. Lett. 58, 1494 (1987).
[IceCube Collaboration et al.(2018)] IceCube Collaboration, M. G. Aartsen et al., Science 361, 147 (2018), arXiv:1807.08794.
[Zuber(2011)] K. Zuber, Neutrino Physics, Second Edition (2011).
[Gerbino and Lattanzi(2017)] M. Gerbino and M. Lattanzi, Front. Phys. 5, 70 (2017).
[Fuller and Haxton(2022)] G. M. Fuller and W. C. Haxton, arXiv:2208.08050 (2022).
[Nakahata(2022)] M. Nakahata, Prog. Theor. Exp. Phys. 2022, 12B103 (2022), arXiv:2202.12421.
[Mohapatra et al.(2007)] R. N. Mohapatra et al., Rep. Prog. Phys. 70, 1757 (2007), arXiv:hep-ph/0510213.
[Athar et al.(2022)] M. S. Athar et al., Prog. Part. Nucl. Phys. 124, 103947 (2022), arXiv:2111.07586.
[Lesgourgues and Pastor(2006)] J. Lesgourgues and S. Pastor, Phys. Rep. 429, 307 (2006), arXiv:astro-ph/0603494.
[Gonzalez-Garcia and Maltoni(2008)] M. C. Gonzalez-Garcia and M. Maltoni, Phys. Rep. 460, 1 (2008), arXiv:0704.1800.
[Fukuda et al.(1998)] Y. Fukuda et al., Phys. Rev. Lett. 81, 1562 (1998), arXiv:hep-ex/9807003.
[Ahmad et al.(2002)] Q. R. Ahmad et al., Phys. Rev. Lett. 89, 011301 (2002), arXiv:nucl-ex/0204008.
[Argüelles et al.(2023)] C. A. Argüelles et al., Eur. Phys. J. C 83, 15 (2023), arXiv:2203.10811.
[Giunti et al.(2012)] C. Giunti, M. Laveder, Y. F. Li, Q. Y. Liu, and H. W. Long, Phys. Rev. D 86, 113014 (2012), arXiv:1210.5715.
[Giunti et al.(2013)] C. Giunti, M. Laveder, Y. F. Li, and H. W. Long, Phys. Rev. D 87, 013004 (2013), arXiv:1212.3805.
[Capozzi et al.(2017)] F. Capozzi, I. M. Shoemaker, and L. Vecchi, JCAP 2017, 021 (2017), arXiv:1702.08464.
[Capozzi et al.(2018)] F. Capozzi, I. M. Shoemaker, and L. Vecchi, JCAP 2018, 004 (2018), arXiv:1804.05117.
[Dentler et al.(2017)] M. Dentler, Á. Hernández-Cabezudo, J. Kopp, M. Maltoni, and T. Schwetz, JHEP 2017, 99 (2017), arXiv:1709.04294.
[Lopes(2018)] I. Lopes, Eur. Phys. J. C 78, 327 (2018), arXiv:1804.08344.
[Lopes(2020)] I. Lopes, Astrophys. J. 905, 22 (2020), arXiv:2101.00210.
[Heeck et al.(2019)] J. Heeck, M. Lindner, W. Rodejohann, and S. Vogl, SciPost Phys. 6, 038 (2019), arXiv:1812.04067.
[Alves Batista et al.(2021)] R. Alves Batista et al., arXiv:2110.10074 (2021).
[Capelo and Lopes(2020)] D. Capelo and I. Lopes, Mon. Not. R. Astron. Soc. 498, 1992 (2020), arXiv:2010.01686.
[Lopes and Silk(2013)] I. Lopes and J. Silk, Mon. Not. R. Astron. Soc. 435, 2109 (2013), arXiv:1309.7571.
[Turck-Chieze and Lopes(1993)] S. Turck-Chieze and I. Lopes, Astrophys. J. 408, 347 (1993).
[Bernal and Farzan(2023)] N. Bernal and Y. Farzan, Phys. Rev. D 107, 035007 (2023), arXiv:2211.15686.
[Esteves Chaves and Schwetz(2021)] M. Esteves Chaves and T. Schwetz, arXiv:2102.11981 (2021).
[Abi et al.(2021)] B. Abi et al., Phys. Rev. Lett. 126, 141801 (2021), arXiv:2104.03281.
[Farzan(2015)] Y. Farzan, Phys. Lett. B 748, 311 (2015), arXiv:1505.06906.
[Coloma et al.(2022)] P. Coloma, M. C. Gonzalez-Garcia, M. Maltoni, J. P. Pinheiro, and S. Urrea, JHEP 2022, 138 (2022), arXiv:2204.03011.
[Xu et al.(2022)] X.-J. Xu, Z. Wang, and S. Chen, arXiv:2209.14832 (2022).
[Farzan and Heeck(2016)] Y. Farzan and J. Heeck, Phys. Rev. D 94, 053010 (2016), arXiv:1607.07616.
[Farzan and Tórtola(2018)] Y. Farzan and M. Tórtola, Front. Phys. 6, 10 (2018), arXiv:1710.09360.
[Coloma et al.(2021)] P. Coloma, M. C. Gonzalez-Garcia, and M. Maltoni, JHEP 2021, 114 (2021), arXiv:2009.14220.
[Esteban et al.(2018)] I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, I. Martinez-Soler, and J. Salvado, JHEP 2018, 180 (2018), arXiv:1805.04530.
[Farzan and Shoemaker(2016)] Y. Farzan and I. M. Shoemaker, JHEP 2016, 33 (2016), arXiv:1512.09147.
[Farzan(2020)] Y. Farzan, Phys. Lett. B 803, 135349 (2020), arXiv:1912.09408.
[Kuo and Pantaleone(1989a)] T. K. Kuo and J. Pantaleone, Rev. Mod. Phys. 61, 937 (1989).
[Gonzalez-Garcia and Maltoni(2013)] M. C. Gonzalez-Garcia and M. Maltoni, JHEP 2013, 152 (2013), arXiv:1307.3092.
[Wolfenstein(1978)] L. Wolfenstein, Phys. Rev. D 17, 2369 (1978).
[Mikheyev and Smirnov(1985)] S. P. Mikheyev and A. Y. Smirnov, Yad. Fiz. 42, 1441 (1985).
[Aaij et al.(2013)] R. Aaij et al., Phys. Rev. Lett. 111, 191801 (2013), arXiv:1308.1707.
[Crivellin et al.(2015)] A. Crivellin, G. D'Ambrosio, and J. Heeck, Phys. Rev. D 91, 075006 (2015), arXiv:1503.03477.
[Feldman et al.(2007)] D. Feldman, Z. Liu, and P. Nath, Phys. Rev. D 75, 115001 (2007), arXiv:hep-ph/0702123.
[Amaral et al.(2021)] D. W. P. Amaral, D. G. Cerdeno, A. Cheek, and P. Foldenauer, arXiv:2104.03297 (2021).
[Smirnov and Xu(2019)] A. Y. Smirnov and X.-J. Xu, JHEP 2019, 46 (2019), arXiv:1909.07505.
[Bahcall and Peña-Garay(2004)] J. N. Bahcall and C. Peña-Garay, New J. Phys. 6, 63 (2004), arXiv:hep-ph/0404061.
[Beacom et al.(2017)] J. F. Beacom et al., Chin. Phys. C 41, 023002 (2017).
[Kumaran et al.(2021)] S. Kumaran, L. Ludhova, Ö. Penek, and G. Settanta, Universe 7, 231 (2021), arXiv:2105.13858.
[Lopes(2013)] I. Lopes, Phys. Rev. D 88, 045006 (2013), arXiv:1308.3346.
[Haxton(1986)] W. C. Haxton, Phys. Rev. Lett. 57, 1271 (1986).
[Parke(1986)] S. J. Parke, Phys. Rev. Lett. 57, 1275 (1986), arXiv:2212.06978.
[Haxton et al.(2013)] W. C. Haxton, R. G. Hamish Robertson, and A. M. Serenelli, Annu. Rev. Astron. Astrophys. 51, 21 (2013), arXiv:1208.5723.
[Gonzalez-Garcia and Nir(2003)] M. C. Gonzalez-Garcia and Y. Nir, Rev. Mod. Phys. 75, 345 (2003), arXiv:hep-ph/0202058.
[Fantini et al.(2018)] G. Fantini, A. Gallo Rosso, F. Vissani, and V. Zema, arXiv:1802.05781 (2018).
[Tanabashi et al.(2018)] M. Tanabashi et al., Phys. Rev. D 98, 030001 (2018).
[Patrignani et al.(2016)] C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016).
[de Gouvêa(2003)] A. de Gouvêa, Nucl. Instrum. Methods Phys. Res. A 503, 4 (2003), arXiv:hep-ph/0109150.
[Kuo and Pantaleone(1989b)] T. K. Kuo and J. Pantaleone, Phys. Rev. D 39, 1930 (1989).
[Bruggen et al.(1995)] M. Bruggen, W. C. Haxton, and Y. Z. Qian, Phys. Rev. D 51, 4028 (1995).
[Gouvêa et al.(2000)] A. d. Gouvêa, A. Friedland, and H. Murayama, Phys. Lett. B 490, 125 (2000), arXiv:hep-ph/0002064.
[Gando et al.(2011)] A. Gando et al., Phys. Rev. D 83, 052002 (2011).
[Gonzalez-Garcia et al.(2016)] M. C. Gonzalez-Garcia, M. Maltoni, and T. Schwetz, Nucl. Phys. B 908, 199 (2016), arXiv:1512.06856.
[de Salas et al.(2021)] P. F. de Salas et al., JHEP 2021, 71 (2021), arXiv:2006.11237.
[Lopes and Turck-Chièze(2013)] I. Lopes and S. Turck-Chièze, Astrophys. J. 765, 14 (2013), arXiv:1302.2791.
[de Holanda et al.(2004)] P. C. de Holanda, W. Liao, and A. Y. Smirnov, Nucl. Phys. B 702, 307 (2004), arXiv:hep-ph/0404042.
[Casini et al.(2000)] H. Casini, J. C. D'olivo, and R. Montemayor, Phys. Rev. D 61, 105004 (2000), arXiv:hep-ph/9910407.
[Lopes(2017)] I. Lopes, Phys. Rev. D 95, 015023 (2017), arXiv:1702.00447.
[Serenelli et al.(2009)] A. M. Serenelli, S. Basu, J. W. Ferguson, and M. Asplund, Astrophys. J. 705, L123 (2009), arXiv:0909.2668.
[Vinyoles et al.(2017)] N. Vinyoles et al., Astrophys. J. 835, 202 (2017), arXiv:1611.09867.
[Bahcall et al.(2006)] J. N. Bahcall, A. M. Serenelli, and S. Basu, Astrophys. J. Suppl. Ser. 165, 400 (2006), arXiv:astro-ph/0511337.
[Asplund et al.(2009)] M. Asplund, N. Grevesse, A. J. Sauval, and P. Scott, Annu. Rev. Astron. Astrophys. 47, 481 (2009), arXiv:0909.0948.
[Borexino Collaboration et al.(2018)] Borexino Collaboration, M. Agostini et al., Nature 562, 505 (2018).
[Agostini et al.(2019)] M. Agostini et al., Phys. Rev. D 100, 082004 (2019), arXiv:1707.09279.
[Bellini et al.(2010)] G. Bellini et al., Phys. Rev. D 82, 033006 (2010), arXiv:0808.2868.
[Abe et al.(2011)] S. Abe et al., Phys. Rev. C 84, 035804 (2011), arXiv:1106.0861.
[Abe et al.(2016)] K. Abe et al., Phys. Rev. D 94, 052010 (2016).
[Aharmim et al.(2013)] B. Aharmim et al., Phys. Rev. C 88, 025501 (2013), arXiv:1109.0763.
[Cravens et al.(2008)] J. P. Cravens et al., Phys. Rev. D 78, 032002 (2008), arXiv:0803.4312.
[Aalbers et al.(2020)] J. Aalbers et al., arXiv:2006.03114 (2020).
[Capozzi et al.(2019)] F. Capozzi, S. W. Li, G. Zhu, and J. F. Beacom, Phys. Rev. Lett. 123, 131803 (2019), arXiv:1808.08232.
[Dutta et al.(2020)] B. Dutta, R. F. Lang, S. Liao, S. Sinha, L. Strigari, and A. Thompson, JHEP 2020, 106 (2020), arXiv:2002.03066.
[Goldhagen et al.(2022)] K. Goldhagen, M. Maltoni, S. E. Reichard, and T. Schwetz, Eur. Phys. J. C 82, 116 (2022), arXiv:2109.14898.
[Baudis et al.(2022)] L. Baudis, J. Hall, K. T. Lesko, and J. L. Orrell, arXiv:2211.13450 (2022).
|
http://arxiv.org/abs/2307.06065v2 | 20230712102940 | Operational Support Estimator Networks | [
"Mete Ahishali",
"Mehmet Yamac",
"Serkan Kiranyaz",
"Moncef Gabbouj"
] | cs.CV | [
"cs.CV"
] |
Operational Support Estimator Networks
Mete Ahishali, Mehmet Yamac, Serkan Kiranyaz, Moncef Gabbouj
Mete Ahishali, Mehmet Yamac, and Moncef Gabbouj are with the Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland (email: [email protected]).
Serkan Kiranyaz is with the Department of Electrical Engineering, Qatar University, Doha, Qatar (email: [email protected]).
August 12, 2023
==============================================================================================================================================================================================================================================================================================================================================================================================
In this work, we propose a novel approach called Operational Support Estimator Networks (OSENs) for the support estimation task. Support Estimation (SE) is defined as finding the locations of non-zero elements in a sparse signal. By its very nature, the mapping between the measurement and sparse signal is a non-linear operation. Traditional support estimators rely on computationally expensive iterative signal recovery techniques to achieve such non-linearity. Contrary to the convolution layers, the proposed OSEN approach consists of operational layers that can learn such complex non-linearities without the need for deep networks. In this way, the performance of the non-iterative support estimation is greatly improved. Moreover, the operational layers comprise so-called generative super neurons with non-local kernels. The kernel location for each neuron/feature map is optimized jointly for the SE task during the training. We evaluate the OSENs in three different applications: i. support estimation from Compressive Sensing (CS) measurements, ii. representation-based classification, and iii. learning-aided CS reconstruction where the output of OSENs is used as prior knowledge to the CS algorithm for an enhanced reconstruction. Experimental results show that the proposed approach achieves computational efficiency and outperforms competing methods, especially at low measurement rates by a significant margin. The software implementation is publicly shared at https://github.com/meteahishali/OSENhttps://github.com/meteahishali/OSEN.
Support estimation, sparse representation, operational layers, compressive sensing, machine learning
§ INTRODUCTION
Sparse Representation of a signal 𝐲∈ℝ^m means that the signal is represented as a linear combination of only a small subset of k atoms from the entire dictionary with n elements, where k is significantly smaller than the total number of atoms, i.e., k<<n. Mathematically speaking, let 𝐃∈ℝ^m × n be an underdetermined matrix with m < n; the representation 𝐲 = 𝐃𝐱 is considered sparse if there are only a few non-zero coefficients in the representation vector 𝐱. Finding the indices of these non-zero coefficients forms the task of (Sparse) Support Estimation (SE) <cit.>. Alternatively, SE can be defined as the localization of the smallest subset of basis atoms whose linear combination with the corresponding representation coefficients forms the original signal. There is a fundamental difference between SE and traditional (Sparse) Signal Recovery/Reconstruction (SR). In the latter, the aim is to find the exact values of 𝐱. Hence, SR is a more challenging task than SE, as it also includes the estimation of the representation coefficients in addition to their localization.
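As a toy illustration (ours, with arbitrary dimensions), the following lines build a k-sparse code, its measurement and the corresponding support set:

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 256, 5
D = rng.standard_normal((m, n)) / np.sqrt(m)     # dictionary with m << n
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)   # ground-truth support set Lambda
x[support] = rng.standard_normal(k)
y = D @ x                                        # measurement y = D x
target_indices = np.flatnonzero(x)               # SE aims to recover these indices from y and D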
In general, many tasks can be formulated as an SR task. For example, consider the Compressive Sensing (CS) problem <cit.> 𝐲 = 𝐀𝐬 for a signal 𝐬∈ℝ^d, where 𝐀∈ℝ^m × d is the measurement matrix. If the signal 𝐬 is sparsely coded in a proper domain Φ∈ℝ^d × n, i.e., 𝐬 = Φ𝐱, then the linear measurement system can be expressed as 𝐲 = 𝐃𝐱, where 𝐱 is the sparse representation code and 𝐃∈ℝ^m × n is called the equivalent dictionary matrix, with 𝐃 = 𝐀Φ and m << n. Another example is the representation-based classification task, where the dictionary is formed by collecting training samples column-wise and the aim is to represent a query sample by a linear combination of the columns (atoms) in the dictionary. In particular, this representation of the query sample 𝐲 in the dictionary 𝐃 is expressed as 𝐲 = 𝐃𝐱, and solving it for 𝐱 provides the estimated representation coefficients 𝐱. Then, the corresponding indices of the non-zero entries in 𝐱 determine the predicted class label of the query sample. Representation-based classification approaches fall into two groups: Sparse Representation-based Classification (SRC) <cit.> and Collaborative Representation-based Classification (CRC) <cit.>. In the first group, approaches focus on computing sparse solutions 𝐱, where the query sample is represented sufficiently well with a few non-zero coefficients in the estimated solution. One major drawback of the SRC methods is that they are based on ℓ_1-minimization, which requires iterative computations and makes them computationally complex. In the CRC approach, the solution is computed by the regularized least-squares estimation 𝐱 = ( 𝐃^T 𝐃 + λ𝐈)^-1𝐃^T𝐲 with the regularization parameter λ. A group selection procedure is then used to determine the predicted label of the query sample: the estimated coefficients are substituted back into the representation, and the group yielding the smallest representation error determines the predicted class. This direct mapping for representation vector estimation is computationally more efficient than SRC approaches and, in certain applications such as face recognition <cit.>, it can still provide comparable results.
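A minimal CRC-style classifier along the lines described above can be sketched as follows (ours); class_ids is an integer array giving the class of each dictionary column, and the residual-based group selection follows the description in the text.

import numpy as np

def crc_classify(D, y, class_ids, lam=0.01):
    n = D.shape[1]
    x_hat = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)   # (D^T D + lam I)^-1 D^T y
    best_class, best_res = None, np.inf
    for c in np.unique(class_ids):
        mask = (class_ids == c)
        res = np.linalg.norm(y - D[:, mask] @ x_hat[mask])        # class-wise representation error
        if res < best_res:
            best_class, best_res = c, res
    return best_class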
Contrary to existing literature focusing widely on the SR task e.g., <cit.>, there are only a few studies <cit.> concentrating on the SE problem. In fact, traditional methods for SE are based on first performing SR and then the support sets are estimated over the reconstructed signals by applying certain thresholding techniques. Considering the complexity of SR compared to the SE task, it is more feasible to directly estimate support sets. Moreover, in many applications, there might be no need to perform SR at all if the support sets are already obtained. For example, estimating the occupied spectrum in a CS task with cognitive radio systems <cit.> is actually a SE task and it is substantially important as the spectrum interval is available only for a given time frame. Next, detection of an observed target in ground-penetrating radar systems <cit.> is also a SE task since it only involves localization. Finally, in a representation-based classification task, it is important to locate active representation coefficients than their exact values since their locations solely determine the predicted class labels.
Because of the aforementioned reasons, for certain applications, it is more practical and efficient to develop the SE technique directly rather than a prior SR application. For this purpose, we proposed Convolutional Support Estimator Networks (CSENs) <cit.> combining traditional model-based SE with a learning-based approach. The CSENs are able to learn a mapping from a proxy signal which is a rough estimation for the sparse signal to be reconstructed, i.e., 𝐱 = 𝐁𝐲 where the denoiser matrix is defined as 𝐁 = 𝐃^ T or 𝐁 = ( 𝐃^T𝐃 + λ𝐈)^-1𝐃^T. The CSENs can achieve state-of-the-art performance levels with minimum computational complexity. CSENs are designed to work with compact network architectures compared to deep SR networks such as ReconNet <cit.>, Learned Approximate Message Passing (LAMP) <cit.>, Learned Iterative Shrinkage Thresholding Algorithm (LISTA) <cit.>, and Learned Vector AMP (LVAMP) <cit.>. Therefore, CSENs demonstrated that not only the compact network models are sufficient to learn a direct mapping for the support sets; they are also reliable over different measurement rates and robust against corruption in measurements, e.g., robust under different noise levels. Their extensions have been designed for COVID-19 classification <cit.>, and recently for object distance estimation <cit.> as a regressor.
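For intuition, a compact CSEN-style mapping from the reshaped proxy x̃ = 𝐁𝐲 to a support probability map can be written in a few Keras lines; the layer widths and input size below are illustrative choices of ours and do not reproduce the exact architecture of <cit.>.

import tensorflow as tf

def build_csen_like(input_shape=(28, 28, 1)):
    inp = tf.keras.Input(shape=input_shape)   # proxy x~ = B y, reshaped to an image
    h = tf.keras.layers.Conv2D(48, 3, padding='same', activation='relu')(inp)
    h = tf.keras.layers.Conv2D(24, 3, padding='same', activation='relu')(h)
    out = tf.keras.layers.Conv2D(1, 3, padding='same', activation='sigmoid')(h)  # support probability map
    return tf.keras.Model(inp, out)

model = build_csen_like()
model.compile(optimizer='adam', loss='binary_crossentropy')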
However, especially in a compact configuration, the learning performance of CNNs is limited due to their homogeneous network structure with a linear neuron model. Operational Neural Networks (ONNs) have recently been proposed <cit.> as a superset of CNNs. ONNs have not only outperformed CNNs significantly, they are even capable of learning problems where CNNs entirely fail. On the other hand, ONNs, like their ancestors, Generalized Operational Perceptrons (GOPs) <cit.>, exhibited certain drawbacks such as a strict dependence on the operators in the operator set of each layer/neuron, and the need for setting (fixing) the operator sets of the output layer neuron(s) in advance. Such drawbacks yield a limited network heterogeneity and divergence that eventually cause certain issues in learning performance and computational efficiency. As a solution, Self-organized ONNs (Self-ONNs) with the generative neuron model have been proposed <cit.>, which can address all these drawbacks without any prior (operator) search or training and with elegant computational complexity. During the training of the network, to maximize the learning performance, each generative neuron in a Self-ONN can customize the nodal operators of each kernel connection. This yields an ultimate heterogeneity level that is far beyond what ONNs can offer, and thus the traditional "weight optimization" of conventional CNNs turns into an "operator generation" process.
Compact CNNs have another severe limitation, namely the limited receptive field size, and this is actually one of the main reasons for their limited learning capabilities. Compact Self-ONNs suffer from it as well due to their localized and fixed-size kernels. To address this drawback, a recent study <cit.> has introduced superior generative neurons, or "Super Neurons" in short, that can increase the receptive field size substantially with non-localized kernels. Since the kernel locations are jointly optimized along with the kernel parameters, each super neuron is able to learn the best kernel transformation function for its simultaneously optimized kernel position. Hence, during the inference stage, super neurons have approximately the same computational complexity, with only additional shifting operations and kernel shift parameters. It is shown in <cit.> that compact Self-ONNs with super neurons can achieve significantly higher performance levels in several regression and classification problems.
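One common way to emulate a generative-neuron (operational) layer with standard primitives is to feed the first Q powers of the input through an ordinary convolution, which realises a learnable Maclaurin-series-like nodal operator; the sketch below (ours) illustrates only this idea and omits the learnable non-local kernel shifts that distinguish super neurons.

import tensorflow as tf

def operational_conv(x, filters, q=3, kernel_size=3):
    # concatenate x, x^2, ..., x^q along the channel axis and convolve
    powers = [tf.keras.layers.Lambda(lambda t, p=i + 1: t ** p)(x) for i in range(q)]
    cat = tf.keras.layers.Concatenate()(powers)
    return tf.keras.layers.Conv2D(filters, kernel_size, padding='same', activation='tanh')(cat)

inp = tf.keras.Input(shape=(28, 28, 1))
out = operational_conv(inp, filters=24, q=3)
model = tf.keras.Model(inp, out)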
The CSENs have naturally inherited the aforementioned limitations of compact CNNs. To address these drawbacks, in this study, we propose novel Operational Support Estimator Networks (OSENs) based on the operational layers of Self-ONNs with super neurons. OSENs can learn a direct mapping for the SE and can significantly outperform the state-of-the-art CSENs and other traditional SR-based approaches. The novel and significant contributions of this work can be summarized as follows:
* We propose a novel OSEN approach for a non-iterative SE. The proposed approach has achieved state-of-the-art performance levels in simulated SE problems over the MNIST dataset. Moreover, a hybrid loss penalizing both SE estimation and classification errors is proposed in the representation-based classification framework using OSENs. It has achieved superior classification accuracy on Yale-B dataset.
* A Non-linear Compressive Learning (NCL) approach is proposed for joint optimization of the proxy mapping and SE parts. Consequently, NCL-OSENs can directly compute a mapping from measurement signals to support locations, whereas conventional CSENs need a proxy computation for their input: 𝐱 = 𝐁𝐲.
* We introduce Self-Organized Generalized Operational Perceptrons (Self-GOPs) in the NCL module of the proposed approach. This is the first study in the literature introducing self-organizing GOP models in fully connected layers. The Self-GOPs can learn non-linear mappings in dense layers and it is shown that their learning capability is superior compared to conventional Multi-Layer Perceptrons (MLPs).
* OSENs produce probability maps indicating the probability of signal coefficients being non-zero in a sparse representation vector. This prior likelihood can be used as prior information in signal reconstruction algorithms to further enhance CS performance. We show that the reconstruction accuracy is improved using proposed learning-aided recovery framework in Magnetic Resonance Imaging (MRI) CS problem with a total-variation (TV) based minimization scheme.
* Thanks to the operational layers with super neurons, compact OSENs have an elegant learning capability with a limited amount of training data. Furthermore, it is computationally efficient because of more compact and shallow network architectures compared to the competing methods.
* Finally, the number of studies focusing on solely SE is only a few compared to the well-studied SR applications. Hence, this work with state-of-the-art performance levels achieved in numerous applications will stir the attention and eventually draw more interest in SE.
The rest of the paper is organized as follows, background and prior work are provided in Section <ref>, then the proposed methodology will be presented in Section <ref>, and experimental setup and an extensive set of comparative evaluations for different applications have been presented in Section <ref>. Finally, Section <ref> concludes the paper and suggests topics for future research.
§ BACKGROUND AND PRIOR WORK
In this section, we first introduce the notations that are used in this study and provide a brief background related to sparse representation, SE, and representation-based classification.
The ℓ_p-norm of a vector 𝐱∈ℝ^n is defined as 𝐱_ℓ_p^n = ( ∑_i=1^n | x_i |^p )^1/p for p ≥ 1, whereas the ℓ_0-norm is 𝐱_ℓ_0^n = lim_p → 0∑_i=1^n | x_i |^p = #{ j: x_j ≠ 0 } and ℓ_∞-norm is 𝐱_ℓ_∞^n = max_i=1,...,n ( | x_i | ). A signal 𝐬 is called strictly k-sparse if it can be represented in a proper domain, i.e., 𝐬= Φ 𝐱 by less than k+1 non-zero representation coefficients such that 𝐱_0 ≤ k. The location information of these non-zero coefficients forms the support set Λ := { i: x_i ≠ 0 }. In other words, Λ⊂{1,2,3,...,n } is a set of indices corresponding to active basis vectors in the representation of the signal 𝐬 in the domain Φ.
In CS theory, a signal 𝐬 is sensed with a few number of measurements,
𝐲 = 𝐀𝐬 = 𝐀Φ𝐱 = 𝐃𝐱,
where 𝐀∈ℝ^m × d and 𝐃∈ℝ^m × n are called measurement matrix and equivalent dictionary, respectively, and we define the measurement rate (MR) as m/n. This system is an underdetermined linear system of equations as m < < n; therefore, a priori assumption about the solution is needed to find a unique solution for this ill-posed problem. It is shown in <cit.> that the sparse representation of 𝐱 satisfying 𝐱 _0≤ k is unique in the solution,
min_𝐱 𝐱 _0 subject to 𝐃𝐱 = 𝐲,
if 𝐃 has more than 2k linearly independent columns and m ≥ 2k. That is to say, at least k-sparse signal pairs are distinguishable or they can be separately represented in the dictionary 𝐃. However, the problem in (<ref>) is NP-hard and non-convex because of ℓ_0-norm. Its closest norm relaxation can be the following so-called Basis Pursuit <cit.>:
min_𝐱𝐱_1 s.t. 𝐱∈℧ ( 𝐲 ),
where ℧ ( 𝐲 ) = {𝐱: 𝐃𝐱=𝐲}. The CS theory claims that in cases where the exact recovery of 𝐬 is not possible, a tractable solution is still achievable if m>k(log(n/k)) and Restricted Isometry Property <cit.> is satisfied for 𝐃. If these conditions are satisfied, the stable recovery from corrupted noisy query sample is also possible via Basis Pursuit Denoising (BPDN) <cit.> using a relaxed version of (<ref>), i.e., min_𝐱𝐱 s.t. 𝐲 - 𝐃𝐱≤ϵ, where a small ϵ constant is set according to the noise power.
§.§ Sparse Support Estimation (SE)
In various applications, the SE task is more important than performing a complete SR. The complete SR procedure includes finding the support set, magnitude, and their corresponding signs. For example, in an anomaly detection problem using distributed CS surveillance systems, location of non-zero indices (support set λ) is satisfactory enough to locate the anomaly such as anomaly detection in sensor data streams <cit.> and HSI images <cit.>. Similarly, active user detection in NOMA <cit.> and CDMA <cit.> systems can be considered SE tasks with a significant role in 5G communication. Additionally, considering a query sample in a representation-based classification problem <cit.>, finding the support set already provides class information of the query sample; hence, there is no need to perform computationally expensive estimation for the exact 𝐱.
A support estimator ℰ(.) is defined as follows,
Λ = ℰ (𝐲,𝐃 ),
where 𝐲= 𝐃𝐱 +𝐳 is the measurement with additive noise and Λ is the estimated support set. Traditionally, SE methods were based on recovering the exact signal, which makes their estimation performance dependent on the SR. These conventional methods can be grouped into three categories: i. iterative estimators using ℓ_1-minimization <cit.>, ii. least-square sense approximations including LMMSE <cit.> and Maximum Correlation (MC) <cit.>, given by 𝐱^LMMSE = ( 𝐃^T 𝐃 + λ𝐈_n × n )^-1𝐃^T 𝐲 and 𝐱^MC = 𝐃^T 𝐲, respectively, and iii. Deep Neural Networks <cit.>.
The approaches in (i) are computationally complex, iterative methods, which limits their efficiency in support recovery. Although the approaches in (ii) use a closed-form solution and are non-iterative, their accuracy might be limited in challenging cases
such as low MRs <cit.>. In the last group (iii), various deep learning approaches have been proposed, mainly for SR. Deep unfolding models in <cit.> consist of many dense layers with millions of trainable parameters; ReconNet in <cit.> has several fully convolutional layers in a deep network configuration. Overall, these methods aim to compute a direct mapping from a measurement to the original signal using complex architectures with many layers. Therefore, their usability is limited when the available training data is scarce. Their generalization capability is only achieved with massive amounts of data, and it is shown in <cit.> that they are also sensitive to noisy measurements.
In our recent work, CSENs <cit.> are proposed for the SE task as a non-iterative and computationally efficient approach. These networks are designed to compute support sets without performing a prior SR. It is shown that CSENs can outperform the traditional methods which often produce noisy estimations due to the uncertainty in the prior recovery. Thanks to their proposed compact network configuration, it is possible to achieve satisfactory performance levels using limited training data. Furthermore, the compact architecture provides improved generalization capability and further enhances the robustness of CSENs to measurement noises. The readers are referred to <cit.> for more detailed evaluations and technical discussions where limitations of classical support estimators are discussed and compared against non-iterative network-based estimators.
§.§ Representation-based Classification
As introduced earlier, traditional approaches proposed for representation-based classification can be grouped into two categories: SRC and CRC methods. In both categories, the estimation of the support set is the main goal. Generally, a dictionary is built by stacking training samples column-wise and, when a test sample is introduced, the aim is to represent it by a linear combination of the columns of the representative dictionary 𝐃. The predicted label of the test sample is assigned according to the location of the estimated non-zero representation coefficients. Specifically, it is expected that the solution vector 𝐱 carries sufficient information to represent the query sample 𝐲 = 𝐃𝐱 + 𝐳 within a small error margin, and that the query sample has the same label as the dictionary samples whose corresponding coefficients are non-zero.
§.§.§ Sparse Representation-based Classification (SRC)
There are various existing SRC approaches that are proposed for different applications, for example, early coronavirus disease 2019 (COVID-19) detection <cit.>, COVID-19 recognition <cit.>, hyper-spectral image classification <cit.>, face recognition <cit.>, and human action recognition <cit.>. Generally, these approaches try to represent query sample 𝐲 using only a few coefficients. It is expected that these non-zero components of 𝐱 should have the same class label as the query. More specifically, the recovery of sparse signal is obtained using, e.g., Lasso formulation <cit.> for 𝐲 =𝐃𝐱 + 𝐳:
min_𝐱{𝐃𝐱-𝐲_2^2 + λ𝐱_1 },
where κ is a small constant such that a stable solution satisfies 𝐱- 𝐱≤κ𝐳. This ℓ_1-minimization is used in a four-step classification procedure consisting of (i) normalization of 𝐲 and the columns of 𝐃 to have unit ℓ_2-norm, (ii) estimation of 𝐱 = min_𝐱 𝐱_1 s.t. 𝐲 - 𝐃𝐱_2 ≤ϵ, (iii) computation of 𝐞_i = 𝐲 - 𝐃_i 𝐱_i _2, where 𝐞_i is the residual for class i and 𝐱_i are the estimated coefficients of that class, and (iv) assignment of the label by Class ( 𝐲 ) = min_i ( 𝐞_i ). The four-step approach and its variants are widely used in the abovementioned SRC studies <cit.>. This residual-based approach is well suited to real-life classification problems where there is a correlation between samples.
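As an illustration, the four-step SRC procedure can be sketched as follows, with an off-the-shelf Lasso solver standing in for the ℓ_1-minimization step (the regularization value and the variable names are illustrative; labels is a numpy array of per-column class labels):

import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, lam=0.01):
    """Four-step SRC: normalize, sparse code, per-class residuals, argmin."""
    D = D / np.linalg.norm(D, axis=0)                  # (i) unit l2-norm columns
    y = y / np.linalg.norm(y)
    x_hat = Lasso(alpha=lam, fit_intercept=False,
                  max_iter=10000).fit(D, y).coef_      # (ii) sparse code of y
    residuals = {}
    for c in np.unique(labels):                        # (iii) class-wise residuals
        mask = (labels == c)
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x_hat[mask])
    return min(residuals, key=residuals.get)           # (iv) label with min residual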
§.§.§ Collaborative Representation-based Classification (CRC)
Although SRC approaches achieve considerable performance levels, they have one major limitation: the ℓ_1-minimization in their SR scheme requires iterative computation, which increases the computational complexity. To address this drawback, the CRC approach in <cit.> proposes that, instead of using (<ref>), a traditional ℓ_2-based minimization can be followed,
𝐱= min_𝐱{𝐲 - 𝐃𝐱 _2^2 + λ𝐱_2^2 }.
The above problem has a closed-form solution, i.e., 𝐱 = ( 𝐃^T 𝐃 + λ𝐈_n × n )^-1𝐃^T 𝐲, which can be replaced in the second step of the four-step classification approach.
The CRC approach focuses on finding the minimum energy solution instead of computing the sparsest 𝐱. Hence, query signal 𝐲 is represented using relatively small coefficients where a collaborative representation is searched within the atoms of the dictionary. It is discussed in <cit.> that such collaborative representation might be preferred in the cases where dictionary 𝐃 is unable to satisfy the properties required by the exact or stable recovery due to correlation between images/signals. Especially when MR is large enough, ℓ_2-minimization provides similar or even better recognition performances compared to SRC approaches. Additionally, the CRC approach is computationally efficient since it computes a direct-mapping by the closed-form solution.
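For illustration, the CRC classification rule with the closed-form solution above can be sketched as follows (the regularization value is illustrative; labels is a numpy array of per-column class labels):

import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """CRC: closed-form collaborative code, then the usual class-residual rule."""
    D = D / np.linalg.norm(D, axis=0)
    y = y / np.linalg.norm(y)
    n = D.shape[1]
    x_hat = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)   # (D^T D + lam I)^{-1} D^T y
    classes = np.unique(labels)
    res = [np.linalg.norm(y - D[:, labels == c] @ x_hat[labels == c]) for c in classes]
    return classes[int(np.argmin(res))]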
§ PROPOSED METHODOLOGY
In this section, we introduce the proposed OSENs equipped with self-organized operational layers and the NCL-OSEN approach, where the compression matrix is jointly optimized with the support estimator in the training stage. Finally, a learning-aided SR technique is presented where the output of the OSENs is used as side information in exact SR tasks.
§.§ Operational Support Estimator Networks (OSENs)
Given 𝐲 and 𝐃, the proposed SE approach learns to produce a binary mask 𝐯∈{ 0,1 }^n indicating active support elements:
v_i = 1 if i ∈Λ, and v_i = 0 otherwise.
The estimation of such a binary mask 𝐯 is equivalent to finding the support set as Λ = { i ∈{ 1,2,..,n} : v_i =1 }. In this manner, we train the OSEN network for this segmentation problem to provide the following mapping: 𝒫 ( 𝐱 ): ℝ^n ↦ [ 0,1 ]^n, where 𝐱 is called the proxy estimation, computed by LMMSE as 𝐱 = ( 𝐃^T 𝐃 + λ𝐈 )^-1𝐃^T𝐲 or by MC as 𝐱 = 𝐃^T𝐲. Then, the proxy is reshaped to a 2-D plane and fed as the input to the OSEN.
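A minimal sketch of this proxy computation and reshaping step could read as follows (the target shape, e.g. 28 × 28 for MNIST, and the regularization value are placeholders):

import numpy as np

def make_proxy(D, y, lam=1e-2, mode="lmmse", shape=None):
    """Coarse proxy fed to the estimator network (LMMSE or MC denoiser)."""
    if mode == "lmmse":
        x_proxy = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    else:                                    # maximum-correlation proxy
        x_proxy = D.T @ y
    return x_proxy if shape is None else x_proxy.reshape(shape)   # 2-D input map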
The proposed OSEN approach consists of self-organized operational layers where non-linear kernel transformation function of each super neuron is approximated using Taylor series expansion. Accordingly, the Q^th order approximation near the origin can be written as a finite sum of derivatives for a function g at given point x as follows,
g(x)^(Q) = ∑_q=0^Qg^(q)(0)/q!(x)^q.
Assuming w_q = g^(q)(0)/q! as the q^th coefficient of the approximation, the set of coefficients 𝐰∈ℝ^Q is learned during training. For a given single channel input 𝐱^(k)∈ℝ ^ P × R, the k^th filter in an operational layer having shared weights would have the following set of trainable parameters: 𝐖^(k) = [𝐰_1^(k), 𝐰_2^(k), …, 𝐰_Q^(k)] ∈ℝ^f_s × f_s × Q, where 𝐰_q^(k)∈ℝ^f_s × f_s is the q^th coefficient and f_s is the filter size.
The intermediate output of the k^th filter at pixel (p, r) would be
g(𝐱^(k))_p, r = ∑_q=0^Q ∑_i=0^f_s-1∑_j=0^f_s-1 w_q, i, j^(k)(x_p + i + α, r + j + β^(k))^q,
where α and β are the shift (bias) parameters optimized during training. Since the summations in (<ref>) are commutative, a more computationally efficient implementation can be obtained by expressing the operational layers as a summation of repeated convolutions,
𝐱^(k) = σ( ∑_q=1^Q (𝐱^(k)_t )^⊙ q * 𝐰_q^(k) + b_q^(k)),
where * is the convolution operation, 𝐱^⊙ q is the Hadamard power providing component-wise power raise x ↦ x^q, b_q^(k) is bias, and 𝐱^(k) is the final output of the generative neuron after activation function σ(.) is applied. Note that 𝐱^(k)_t is obtained by shifting the input such that 𝐱^(k)_t = T^(k)_α, β𝐱^(k) where T^(k)_α, β is the shifting operator for the k^th kernel. In this implementation, we use real-valued shifts α, β∈ℝ, where the shifted feature maps are computed by bilinear interpolation.
Overall, the k^th feature map of l^th layer is computed using the following trainable parameters Θ^(k)_l = {𝐖^(k)_l ∈ℝ^f_s × f_s × Q × N_l-1, 𝐛^(k)_l ∈ℝ^Q, α^(k)_l ∈ℝ, β^(k)_l ∈ℝ} where N_l-1 is the number of feature maps in the previous layer. As α_l,β_l are scalar values, shift parameters are shared for the previous layer connections of the k^th feature map. Finally, total trainable parameters in a L-layer OSEN will be Θ_OSEN={{Θ^(k)_1}_k=1^N_1, {Θ^(k)_2}_k=1^N_2, ... , {Θ^(k)_L}_k=1^N_L}. Shifting parameters of the kernels are jointly learned with kernel parameters, which determine the transformation function of each kernel element.
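To make (<ref>) concrete, the following Keras sketch implements an operational layer as a sum of Q standard convolutions applied to Hadamard powers of the input. The learnable kernel shifts (α, β) are omitted, so it corresponds to the localized-kernel configuration, and the stacking into a compact OSEN_1-like model (including the plain convolutional output layer) is only illustrative:

import tensorflow as tf

class OperationalConv2D(tf.keras.layers.Layer):
    """Sum over q = 1..Q of Conv2D applied to the Hadamard power x**q, as in (<ref>)."""
    def __init__(self, filters, q_order=3, kernel_size=3, activation="tanh"):
        super().__init__()
        self.convs = [tf.keras.layers.Conv2D(filters, kernel_size, padding="same")
                      for _ in range(q_order)]
        self.act = tf.keras.layers.Activation(activation)

    def call(self, x):
        out = 0.0
        for q, conv in enumerate(self.convs, start=1):
            out = out + conv(tf.pow(x, q))        # (x^{(k)})^{⊙q} * w_q + b_q
        return self.act(out)

# compact OSEN_1-like network: two hidden operational layers with 48 and 24 neurons
osen1 = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    OperationalConv2D(48),
    OperationalConv2D(24),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),  # support mask
])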
During training, Mean-Square Error (MSE) is computed for training sample pairs of (𝐱, 𝐯) as,
ℒ(Θ_OSEN, 𝐱, 𝐯) = 𝒫_Θ_OSEN (𝐱 )- 𝐯_2^2.
In specific to representation-based classification, instead of using (<ref>), a group ℓ_2-minimization based optimization can be followed:
ℒ_𝒢(𝐯, 𝐯) = 𝐯 - 𝐯_2^2 + λ_g ∑_i=1^N_c𝐯_G,i_2.
where 𝐯 = 𝒫_Θ_OSEN (𝐱 ) is the predicted mask and 𝐯_G,i is the group of supports for class i. In the previous work <cit.>, this cost function is approximated by average-pooling which is followed by a SoftMax operation at the output. Hence, it was possible to produce class probabilities directly from the input proxy. On the other hand, such an approximation may reduce the classification performance since the MSE between the predicted and actual binary masks indicating support sets is ignored for the training sample pairs. Therefore, in this study, we propose a novel approach that can produce support maps and estimated class labels after a single inference. Accordingly, the proposed approach provides the following mapping: 𝒫_Θ_OSEN (𝐱 ) = {𝐯, 𝐜_y }, where 𝐜_y ∈ℝ^N_C is the estimated class label vector for the query sample 𝐲. The following hybrid loss function is proposed to train the network:
ℒ_ℛ(𝐯, 𝐯, 𝐜_y, 𝐜_y) = 𝐯 - 𝐯_2^2 - λ_c ∑_i=1^N_C c_y,ilog(c_y,i).
In this way, the estimated support masks and class labels are optimized jointly during the training of OSENs in representation-based classification task.
Overall, if the SE problem is different from classification, the OSEN model is trained using (<ref>) as its cost function with the following input-output pairs: (𝐱^train, 𝐯^train), as illustrated in Fig. <ref>. If categorical class label information is available, then it is trained using (𝐱^train, 𝐯^train, 𝐜^train_y) triplet samples with the hybrid loss in (<ref>), as presented in Fig. <ref>.
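A direct TensorFlow sketch of this hybrid objective (the mean reductions over pixels and batch are an implementation choice of the sketch) is:

import tensorflow as tf

def hybrid_loss(v_true, v_pred, c_true, c_pred, lam_c=1.0):
    """MSE on the predicted support mask plus cross-entropy on the class vector."""
    mse = tf.reduce_mean(tf.square(v_pred - v_true))
    cce = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(c_true, c_pred))
    return mse + lam_c * cce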
§.§ Non-linear Compressive Learning (NCL)
The OSEN approach takes the proxy signal 𝐱 = 𝐁𝐲 as input, where 𝐁 is the denoiser matrix obtained by LMMSE or MC. This rough estimation may limit the potential of the non-linear neuron model in the proposed approach. To address this drawback, we propose to jointly optimize the proxy-mapping stage with the support estimator part.
We introduce the Self-GOP model with the generative neuron topology to approximate different transformation functions that are not possible to learn with a traditional MLP approach. Similar to operational layers, a Self-GOP layer can learn highly non-linear mappings with less number of neurons. These neurons are named as generative perceptrons in Self-GOP model since they are formed to perform the following transformation during the training process:
g(x, 𝐰) = w_0 + xw_1 + x^2w_2 + … + x^Qw_Q,
where 𝐰∈ℝ^Q is the trainable weight for a scalar input x. The proposed NCL scheme has one fully-connected layer consisting of Self-GOPs. Given a measurement signal 𝐲 as input, it tries to learn the following mapping, ϕ(𝐲): ℝ^m ↦ℝ^n for n > m. Since summations are commutative, one can write the output for input 𝐲 as,
𝐱 = ϕ(𝐲) = σ( ∑_q=1^Q 𝐰_q 𝐲^⊙ q + 𝐛_q ),
where 𝐰_q ∈ℝ^n × m and 𝐛_q ∈ℝ^n. Then, overall trainable parameters would be: Θ_NCL = {𝐖∈ℝ^n × m × Q, 𝐛∈ℝ^n × Q}. Note that the first order weights in (<ref>) are initialized with 𝐰_1 = 𝐁^T where 𝐁=( 𝐃^T𝐃+λ𝐈)^-1𝐃^T for the representation-based classification problem and 𝐁 = 𝐃^T for other SE task. Finally, the output of the proposed NCL scheme is connected to the OSEN. Therefore, output 𝐱 = ϕ(𝐲) is reshaped to be fed to the first operational layer of the OSEN.
In this work, we name this novel end-to-end approach NCL-OSEN, where the proxy mapping from the low- to the high-dimensional space is learned using Self-GOPs during training. It is observed that the joint optimization of Θ_NCL and Θ_OSEN improves the SE performance. As illustrated in Fig. <ref>, the input-output training pairs of the NCL-OSEN approach are ( 𝐲^train, 𝐯^train), and in the case of classification they become ( 𝐲^train, 𝐯^train, c_y^train).
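The learned proxy-mapping layer of (<ref>) can be sketched in Keras as follows; the row-vector convention for batched inputs, the tanh activation and the zero initialization of the higher-order weights are assumptions of this sketch, while the first-order weights may be initialized from the LMMSE/MC denoiser matrix:

import tensorflow as tf

class NCLLayer(tf.keras.layers.Layer):
    """Non-linear compressive learning layer phi: R^m -> R^n, as in (<ref>)."""
    def __init__(self, n, q_order=2, B_init=None):
        super().__init__()
        self.n, self.q_order, self.B_init = n, q_order, B_init

    def build(self, input_shape):
        m = int(input_shape[-1])
        self.W = [self.add_weight(shape=(m, self.n), name=f"W{q+1}",
                                  initializer="glorot_uniform" if q == 0 else "zeros")
                  for q in range(self.q_order)]
        self.b = [self.add_weight(shape=(self.n,), initializer="zeros", name=f"b{q+1}")
                  for q in range(self.q_order)]
        if self.B_init is not None:              # first-order weights from the denoiser
            self.W[0].assign(tf.constant(self.B_init, dtype=self.W[0].dtype))

    def call(self, y):
        out = 0.0
        for q in range(1, self.q_order + 1):     # sum_q y^{⊙q} W_q + b_q
            out = out + tf.pow(y, q) @ self.W[q - 1] + self.b[q - 1]
        return tf.tanh(out)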
§.§ Learning-aided Signal Reconstruction via Total-Variation Minimization
When the aim is to recover the exact signal, the probability maps produced by OSENs can be used as side information in ℓ_1-minimization based approaches. In this case, the following weighted minimization approach can be followed:
min_𝐱{𝐃𝐱-𝐲_2^2 + λΓ⊙𝐱_1 },
where Γ∈ℝ^n is the weight vector of the non-zero cost and ⊙ is the component-wise multiplication operator. The weight is formed from prior information about each component of the sparse signal 𝐱. In the modified-CS literature <cit.>, the weight of the i^th element is defined as γ_i = 1/(p_i + ϵ), where ϵ is a small positive constant and p_i is the i^th element of 𝐩, e.g., the prior likelihood <cit.> of Λ such that the probability of x_i being non-zero is p_i.
In the learning-aided CS framework, we choose gradient as the sparsifying domain: Φ = ∇ in the aforementioned CS scheme, i.e., 𝐲 = 𝐀𝐬 = 𝐀Φ𝐱 = 𝐃𝐱. The gradient-domain is a convenient choice since natural images/signals are generally sparse in ∇, which can also preserve boundary details and edges as a preferred way of reconstruction in many inverse imaging systems <cit.>. Let 𝐒∈ℝ^P × R be an image that will be compressively sensed, then the following total-variation minimization problem is defined:
min_𝐒{𝐲 -𝐀vec(𝐒) _2^2 + λ∇𝐒_TV},
where vec(.) is the vectorization operation and ∇𝐒_TV is defined as follows,
∇𝐒_TV = ∇_x 𝐒_1 + ∇_y 𝐒_1
= ∑_p,r | S_p+1,r - S_p,r | + ∑_p,r | S_p,r+1 - S_p,r |,
i.e., the anisotropic total-variation. We solve the problem in (<ref>) using a Total-Variation (TV) minimizer based on the Alternating Direction Method of Multipliers (ADMM) <cit.>.
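For reference, the anisotropic TV defined above amounts to a few lines of numpy:

import numpy as np

def anisotropic_tv(S):
    """Anisotropic total variation: sum of absolute forward differences."""
    return (np.abs(np.diff(S, axis=0)).sum()      # sum |S_{p+1,r} - S_{p,r}|
            + np.abs(np.diff(S, axis=1)).sum())   # sum |S_{p,r+1} - S_{p,r}|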
The OSEN approach for learning-aided SR has two input and output channels. Given a measurement 𝐲 = 𝐀vec(𝐒), first the rough estimation 𝐒 = 𝐀^T𝐲 is computed, then it is reshaped to the 2-D plane. Consequently, the training input samples consisting of the two-channel proxy will be {𝐗_x^train = ∇_x 𝐒^train, 𝐗_y^train = ∇_y 𝐒^train} and the two-channel ground-truth mask will be {𝐯_x^train, 𝐯_y^train}, which corresponds to the sparse-codes Λ = { i,j ∈{ 1,2,..,n_1}×{ 1,2,..,n_2} : |∇ S_i,j | > τ_1 }. In the CS-aided recovery approach, when a test sample proxy is introduced to the OSEN, the output probability map produced by the network is used as a likelihood measure of the corresponding support set. This is achieved as follows: given the produced probability maps (𝐩_x, 𝐩_y ), we first compute the weights of the cost Γ_x = 1/(𝐩_x + ϵ) and Γ_y = 1/(𝐩_y + ϵ). Similar to (<ref>), the weighted total-variation minimization problem can be expressed as follows:
min_𝐒{𝐲 -𝐀vec(𝐒) _2^2 + λΓ⊙∇𝐒_TV},
where the second term is defined as,
Γ⊙∇𝐒_TV = Γ_x ⊙∇_x𝐒_1 + Γ_y ⊙∇_y 𝐒_1.
The traditional ADMM solver can still be used to solve the proposed minimization scheme in (<ref>) by modifying the soft-thresholding step of the solver to a weighted soft thresholding. In this way, thanks to the proposed hybrid approach integrating a model-based optimization procedure with the data-driven OSEN, we aim to improve the SR performance.
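The only change inside the ADMM iterations is this element-wise shrinkage; a sketch of the weighted soft-thresholding operator reads as follows, where kappa stands for the usual ADMM threshold (e.g. λ/ρ) and is an assumption of this sketch:

import numpy as np

def weighted_soft_threshold(z, gamma, kappa):
    """Element-wise shrinkage: component i is shrunk by kappa * gamma_i
    instead of a uniform threshold, implementing the weighted l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - kappa * gamma, 0.0)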
§ EXPERIMENTAL RESULTS
The proposed approach is extensively evaluated in three different applications including the use-case scenarios where the signal/image to be sensed is sparse in spatial domain, representation-based classification, and CS/reconstruction of Magnetic Resonance Imaging (MRI). In this section, we will first give details about the experimental setup and we will present the results for different applications of the proposed OSEN approaches.
§.§ Experimental Setup
There are two compact configurations used in the proposed approach. The first configuration (OSEN_1) model has only two hidden operational layers with 48 and 24 neurons, respectively. The OSEN_2 architecture has max-pooling and transposed operational layers connected to the first hidden layer. Both networks have 3 × 3 kernel sizes and their activation functions are set to hyperbolic tangent except for the output layers which are set to sigmoid and softmax for segmentation and classification, respectively.
The proposed approach is trained using Adam optimizer <cit.> with its default learning parameters: learning rate α=10^-3, β_1 = 0.9, and β_2 = 0.999 for 30 epochs in the representation-based classification and 100 epochs in the other applications. The regularization parameter λ is searched empirically in log-scale over validation sets within the range of λ^* ∈ [10^-10, 10^2], then it is fine-tuned with slight adjustments as λ = λ^* ± 10^log(λ^*). The proposed approach has been implemented using Python with the Tensorflow library <cit.> on a workstation having Intel ® i9-7900X CPU and NVidia ® 1080 Ti GPU with 128 GB system memory.
In the results, it is important to show performance improvements obtained by the proposed approach over the baseline model, that is the least-square sense solution with the CRC approach <cit.>. The competing SRC approaches used in this study are Primal and Dual Augmented Lagrangian Methods, (PALM and DALM) <cit.>, ℓ_1-regularized Least Squares (ℓ_1-LS) <cit.>, ADMM <cit.>, Orthogonal Matching Pursuit (OMP) <cit.>, Homotopy <cit.>, Gradient Projection for Sparse Reconstruction (GPSR) <cit.>, and ℓ_1-magic <cit.>. For a fair comparison, dictionary samples of the competing CRC and SRC approaches are enlarged by adding training samples used by the proposed approach. Since the SE literature is limited, in the comparisons, we modify several state-of-the-art SR approaches including ReconNet <cit.>, LAMP <cit.>, LISTA <cit.>, and LVAMP <cit.> by training these reconstruction networks for the SE task using proposed input-output training pairs. Finally, we compare the proposed OSEN approach with our previous support estimator CSEN <cit.>. Above-mentioned deep learning approaches are implemented using the same Tensorflow environment following their proposed training configurations, whereas the SRC and CRC approaches have been tested with MATLAB version 2019a. For the learning-aided SR application, TV minimization via ADMM is performed on Python with the following parameter values: λ = 0.01, ρ = 1.0, α = 0.7, abs_tol=10^-4, rel_tol=10^-2, max_it=2000.
The results are reported using the following metrics: in representation-based classification, accuracy is computed as follows,
Accuracy = (TP + TN) / (TP + TN + FP + FN),
where the number of true positive and negative samples are TP and TN; false positive and negative samples are FP and FN, respectively. Next, SE performance is evaluated between the true binary mask 𝐯 and estimated mask 𝐯:
Precision = TP / (TP + FP),
Specificity = TN / (TN + FP),
Sensitivity = TP / (TP + FN).
Moreover, F_1 and F_2 scores are obtained by setting β = 1 and β = 2 in the following,
F_β = (1 + β ^ 2) Precision×Sensitivity/β ^ 2 ×Precision + Sensitivity.
Therefore, the F_2 score puts more emphasis on improved TP detection than on TN. After computing the metrics pixel-wise for each test sample, we report the averaged values using the macro-average method. Finally, averaged PSNR and normalized MSE (NMSE) values are reported for the learning-aided CS experiments. Note that we repeat all the reported experiments five times and present the final averaged values.
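These mask-level metrics can be computed directly from the ground-truth and estimated binary masks, e.g. (assuming non-degenerate masks so that no denominator vanishes):

import numpy as np

def mask_metrics(v_true, v_pred, beta=2):
    """Pixel-wise SE metrics between a ground-truth and an estimated binary mask."""
    v_true, v_pred = v_true.astype(bool), v_pred.astype(bool)
    tp = np.sum(v_true & v_pred); tn = np.sum(~v_true & ~v_pred)
    fp = np.sum(~v_true & v_pred); fn = np.sum(v_true & ~v_pred)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_beta = (1 + beta**2) * precision * sensitivity / (beta**2 * precision + sensitivity)
    return dict(precision=precision, sensitivity=sensitivity,
                specificity=specificity, f_beta=f_beta)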
§.§ Support Estimation from CS measurements
The proposed approach is first evaluated over the MNIST dataset of handwritten digit samples to show its ability to perform support recovery when the signal itself is already sparse in the spatial domain. The dataset has 70 000 samples and it is divided into train, validation, and test splits with the ratio of 5:1:1, respectively. Each sample image in the dataset has a size of 28 × 28 with intensities in [0, 1]. A sample from the MNIST dataset is an example of a sparse signal since the background is predominant compared to the digit (foreground) pixels. In fact, its averaged sparsity ratio is computed as ρ = k/n = 0.2 for the vectorized samples 𝐱∈ℝ^784. Since the samples are sparse in the spatial domain, or canonical basis, the sparsifying domain is set to Φ =𝐈 and (<ref>) can be written as,
𝐲 = 𝐀𝐱 = 𝐃𝐱,
with 𝐀 = 𝐃∈ℝ^m × n. The elements of the measurement matrix, A_i,j, are i.i.d. drawn from 𝒩 ( 0,1/m ).
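A minimal sketch of this sensing setup (measurement matrix, measurements and ground-truth support masks; the random seed is arbitrary) is given below:

import numpy as np

def sense_sparse_images(X, mr=0.05, seed=0):
    """Compressively sense vectorized images that are sparse in the spatial domain:
    y = A x with A_{i,j} ~ N(0, 1/m) and m = round(mr * n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]                       # X: (num_samples, n), n = 784 for MNIST
    m = int(round(mr * n))
    A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    Y = X @ A.T                          # measurements, one row per sample
    V = (X != 0).astype(np.float32)      # ground-truth support masks
    return A, Y, V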
In the MNIST experiments, for a measurement test sample 𝐲, the input proxy is computed as 𝐱 = 𝐁𝐲 where 𝐁 = 𝐃^T. Then, the reshaped proxy with size of 28 × 28 is given as input to the OSENs. In the results, we change MR from 0.05 to 0.25 to observe its effect on the performance. The results are presented in Table <ref> for the competing methods and proposed approach using different configurations. For a better comparison, the number of linear transformation layers (T) has been varied also for LAMP, LISTA, and LVAMP methods. It is seen that the proposed OSEN approaches outperform other approaches with significant performance margins in F_1 and F_2-scores. For example, the gap in F_1-scores reaches approximately 8% and 5% comparing OSEN_1 with CSEN_1 and OSEN_2 with CSEN_2, respectively, when the polynomial order is set to Q = 3. Moreover, the proposed NCL-OSEN framework equipped with the NCL scheme further increases the estimation performance achieving the state-of-the-art F_1-score larger than the previous competitor CSEN_1 and CSEN_2 approaches approximately by 11.5% and 7%, respectively. In Table <ref>, even if we increase the number of neurons for CSENs, they are still outperformed by the proposed approach. Note the fact that LAMP method is designed for Gaussian measurement matrix, but it still lacks the performance which has been achieved by the proposed approach. The number of trainable parameters and elapsed times are given in Table <ref> when MR is set to 0.05. It is shown that OSENs have computational efficiency compared to other approaches. We increase the number of parameters in CSEN+ configuration to have approximately similar computational complexity with the OSENs. As shown in Table <ref>, CSEN+ still cannot reach the same performance level with the OSENs.
The robustness of the methods to measurement noise is evaluated by introducing Gaussian additive noise to the measurement, 𝐲 = 𝐃𝐱 + 𝐳. Achieved F_1-Scores are presented for varied noise levels in Fig. <ref>. Accordingly, it has been observed that the proposed approach still achieves state-of-the-art performance levels even under severe noisy conditions. Among the competing methods, the CSEN approach shows some robustness to the noise. Even though, especially for low MR values, it has a similar decaying trend, the proposed OSEN and NCL-OSEN approaches have an overall robustness to the measurement noise for all MR values compared to the competing methods. This is not surprising considering the superior learning capabilities of the super-neurons and generative perceptrons.
§.§ Face Recognition using Representation-based Classification
Face recognition can be considered a representation-based classification problem and it is well-suited to evaluate the proposed approach because of the limited number of available samples from each identity. We use the Yale-B <cit.> dataset consisting of 2414 samples with 32 × 32 pixel sizes belonging to 38 different identities. Note that since all experiments are repeated five times, the splits for dictionary, train, validation, and test are different in each run. To evaluate the competing CRC and SRC methods, 48 and 16 samples per identity are randomly selected for the dictionary and test set, respectively. The learning-based methods, OSEN and CSEN, use 25% of the dictionary samples for training, from which 25% of the samples are separated for the validation set. Consequently, their dictionaries are constructed using only 32 samples per identity for a fair comparison against CRC and SRC approaches. More specifically, the collected dictionary with vectorized samples would have a size of 1024 × 1216. Then, we use a PCA matrix 𝐀 for dimensionality reduction to obtain the equivalent dictionary 𝐃 in (<ref>). In face recognition experiments, for a given query sample 𝐲, the coarse estimation is computed using the denoiser matrix as 𝐱 = 𝐁𝐲 where 𝐁 = ( 𝐃^T 𝐃 + λ𝐈 )^-1𝐃^T. Next, the input proxy 𝐱∈ℝ^1216 is reshaped to obtain a 2-D proxy image with the size of 16 × 76. This reshaping is performed by following a specific ordering to make sure that the dictionary samples belonging to the same class are grouped together. Hence, the kernel size of the average-pooling operation is set to the size of grouped pixels from the same class (i.e., 8 × 4).
Achieved classification accuracies are presented in Fig. <ref> along with required computational times in the log-scale. We only report accuracies obtained by the initial network structures, OSEN_1 and CSEN_1, since no significant performance improvements have been observed by using transposed operational and convolutional layers in the second network configuration. As shown in Fig. <ref>, CSENs have similar computational time with the proposed approach; and hence, we consider the CSEN approach as previous state-of-the-art since we aim for a maximum classification accuracy with a minimum computational time complexity. The CRC-light approach has the same number of samples in the dictionary as the proposed approach. It is observed that especially for low MRs, the proposed approach significantly improves the classification accuracy. For instance, the accuracy gap is around 25% between the NCL-OSENs and CSENs when MR = 0.01. On the other hand, when MR is large enough, the classification problem becomes less complicated and all methods can achieve higher than 90% accuracy. Overall, the proposed approach is able to achieve comparable or better performance levels than computationally complex and iterative SRC methods. This demonstrates that the proposed solution is highly robust across all measurement rates but especially for lower rates, it has a significant superiority. On the other hand, the performance of different SR algorithms is inconsistent and varies significantly from one setup to another <cit.>. Note also the fact that contrary to the previous results in Table <ref>, small values for the polynomial order Q are chosen in Yale-B dataset.
§.§ Learning-aided Signal Reconstruction (SR) for Compressively Sensed (CS) MRI
As the third application of the proposed approach, we consider CS MRI reconstruction using learning-aided SR. Assuming an MRI image to be reconstructed is 𝐒∈ℝ^P × R, the sensing model is defined as 𝐲 = 𝐀𝐅vec(𝐒) where 𝐅 is 2-D DFT and 𝐀∈ℝ^m × n is the sampling matrix to select spectral indices from the DFT. Choosing the sparsifying domain as Φ = ∇, the proposed weighted-TV minimization in (<ref>) can be modified for the CS MRI reconstruction as follows,
min_𝐒{𝐲 -𝐀𝐅vec(𝐒) _2^2 + λΓ⊙∇𝐒_TV},
where 𝐲∈ℂ^m is the measurement. Computational complexity for computing 𝐀vec(𝐒) and 𝐀^T 𝐲 increases significantly when the image size is large. Therefore, it is more feasible to use a structural measurement matrix for sampling indices of the Fourier domain and it is also the hardware requirements of the current MRI devices. The structural measurement matrix is constructed by semi-random sampling <cit.> using Gaussian and ℓ_2 ball. More specifically, let a measurement set be Ω = Ω_1∪Ω_2⊆{-n/4, ..., n/4}^2 with measurements m_1 + m_2 = m. Accordingly, the subset Ω_1 contains m_1 number of frequencies whose indices are drawn from standard normal distribution on {-n/4, ..., n/4 }^2 ∖Ω_2. The second set of frequencies are Ω_2 = {ω_i, j} where i^2 + j^2 < r^2 with r = √(m/3/π) to make sure that a third of measurements are from the ℓ_2-ball, since π r^2 = m/3. This proposed semi-random sampling scheme is illustrated in Fig. <ref>.
In the experiments, we use the Diencephalon Challenge (Mid-brain) dataset <cit.> consisting of already partitioned 35 and 12 patients for training and testing, respectively. For both sets, we remove empty beginning and ending slices and include the slices where there are clearly visible parts of the anatomy. After the preprocessing, there are collected 6045 training and 2048 testing samples with the size of 256 × 256. From the training set, we separate another around 1000 samples for validation. Next, two-channel proxies are constructed as previously described: {𝐗_x^train = ∇_x 𝐒^train, 𝐗_y^train = ∇_y 𝐒^train}, where the gradient operation is applied to rough estimations obtained by the zero-padded inverse DFT which is equivalent to mapping with 𝐀^T. Corresponding ground-truth masks are obtained by thresholding the gradient magnitudes of the original images by setting τ_1 = 0.04. When proxies are fed into the OSENs, probability maps (𝐩_x, 𝐩_y ) are obtained by the networks, which are then used in computation of the cost weights in (<ref>) for the proposed weighted TV reconstruction by setting ϵ = 0.2.
Reconstruction performances obtained by the proposed approach and compared methods are presented in Table <ref>. It is shown that the proposed approach outperforms CSENs and the reconstruction accuracy is significantly improved. Especially for MR = 0.1, the improvement over the baseline approach is 2 dB in PSNR using OSEN_2 with Q = 2. It is also crucial to point out that the introduced learning-aided CS recovery approach improves the performance without bringing additional computational complexity to the SR algorithm. Because, the computational complexity of performing direct mapping via OSENs is negligible compared to iterative reconstruction. For instance, TV-based (baseline) recovery has taken 412.73, 386.71, and 378.88 seconds for MR = 0.05, 0.1, and 0.25, respectively, whereas OSEN_2 (Q=3) has taken only 13 milliseconds. The reconstructed MRI images are shown in Fig. <ref>. Accordingly, it is observed that learning-aided reconstructions both with CSEN and OSEN improve the overall reconstructed image quality and this shows that the output probability maps indeed can be used in the recovery of CS signals to enhance the performance of traditional model-based approach such as TV-based ℓ_1-minimization. Moreover, investigating the zoomed regions, one can say that the details are better preserved in the proposed OSEN approach compared to the CSEN. Recovery performances of support locations in gradient domain are presented in Table <ref>. It is clear that the proposed approach outperforms CSENs in the SE by more than 10% and 15% considering F_1 and F_2 scores, respectively. Produced output probability maps by the methods are illustrated in Fig. <ref>. Accordingly, estimated support maps by the OSENs have sharper details and supports at the edges are accurately estimated, whereas CSENs produce significantly higher false negatives yielding lower sensitivity levels in the SE.
§.§ Ablation Study: Localized versus Non-localized Kernels
In Table <ref>, we compare localized and non-localized kernel configurations in the proposed OSENs. Accordingly, it is shown that the usage of non-localized kernels often improves the performance of OSENs in three different applications. Especially for the OSEN_1 over MNIST dataset, the improvement is significant using smaller Q values. It is worth mentioning that even though the highest classification accuracy is obtained using the non-local kernels in representation-based classification problem using Yale-B dataset, for Q=1 and Q=3 settings, OSENs with localized kernel configurations can obtain comparable accuracies.
§ CONCLUSION
In this study, we have proposed novel support estimator approaches called OSENs. First, an OSEN can learn a direct mapping for support sets which eliminates the need for performing a prior SR task, unlike traditional methods. Next, the proposed OSEN approach consisting of operational layers with super neurons has several improvements over the traditional convolutional layers of a CSEN. The super (generative) neuron model can learn non-linear transformation functions for each kernel element and the kernel locations are jointly optimized with the self-organized transformation functions. An extended set of experiments has shown that non-localized kernels and operational layers have significantly improved the SE performance compared to their traditional convolutional layers/kernels. Moreover, thanks to the introduced learned and non-linear proxy mapping layer using Self-GOPs, it is possible to directly estimate the support set or class labels from measurement samples using end-to-end NCL-OSENs. In this way, the proposed NCL module can further enhance the performance by fine-tuning the proxy mapping layer together with the SE part of the network.
We have evaluated the proposed approach considering three different applications: i. SE from CS measurements for the cases where the signal is sparse in the spatial domain, ii. face recognition using representation-based classification, and iii. learning aided CS MRI reconstruction. In these applications, it has been shown that the proposed approach can achieve state-of-the-art performance levels requiring minimum computational complexity. Especially in low MRs, the performance gap widens significantly. Because of the proposed direct SE scheme, it is sufficient to use compact networks in OSEN architectures with maximized learning ability thanks to the operational layers with super neurons. A crucial advantage is that compact OSENs can operate under limited training data with an elegant performance, and therefore, besides their computational efficiency compared to deep learning techniques and traditional iterative SRC approaches, they do not suffer from well-known “over-fitting” problems. Note further the fact that the applications of the proposed approach presented in this work are not limited only to, for example, MRI reconstruction or face recognition, but its usability for different applications is possible and that will be our future research topic.
§ ACKNOWLEDGMENT
This work has been supported by NSF-Business Finland Center for Visual and Decision Informatics (CVDI) under the project Advanced Machine Learning for Industrial Applications (AMaLIA).
|
http://arxiv.org/abs/2307.04424v1 | 20230710085906 | About the algebraic closure of formal power series in several variables | ["Michel Hickel", "Mickaël Matusinski"] | math.AC | ["math.AC", "math.AG", "13J05, 13F25, 14J99, 12-08"] |
Michel Hickel and Mickaël Matusinski, Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400 Talence, France
2020 Mathematics Subject Classification: 13J05, 13F25, 14J99, 12-08.
Let K be a field of characteristic zero. We deal with the algebraic closure of the field of fractions of the ring of formal power series K[[x_1,…,x_r]], r≥ 2. More precisely, we view the latter as a subfield of an iterated Puiseux series field 𝒦_r. On the one hand, given y_0∈𝒦_r which is algebraic, we provide an algorithm that reconstructs the space of all polynomials which annihilate y_0 up to a certain order (arbitrarily high). On the other hand, given a polynomial P∈ K[[x_1,…,x_r]][y] with simple roots, we derive a closed-form formula for the coefficients of a root y_0 in terms of the coefficients of P and a fixed initial part of y_0.
About the algebraic closure of formal power series in several variables.
Michel Hickel and Mickaël Matusinski
August 12, 2023
========================================================================
§ INTRODUCTION.
Let K be a field of characteristic zero and K its algebraic closure. Let x:=(x_1,…,x_r) be an r-tuple of indeterminates where r∈, r≥ 2. Let K[x] and K[[x]] denote respectively the domains of polynomials and of formal power series in r variables with coefficients in K, and K(x) and K((x)) their fraction fields. Both fields embed naturally into K((x_r))((x_r-1))⋯((x_1)), the latter being naturally endowed with the lexicographic valuation in the variables (x_1,…,x_r) (see Section <ref>).
By iteration of the classical Newton-Puiseux theorem (see e.g. <cit.> and <cit.>), one can derive a description of an algebraic closure of K((x_r))((x_r-1))⋯((x_1)) in terms of iterated fractional Laurent series (see <cit.><cit.>):
The following field, where L ranges over the finite extensions of K in K:
ℒ_r:= _p∈ℕ^*_L L((x_r^1/p))((x_r-1^1/p))⋯ ((x_1^1/p))
is the algebraic closure of K((x_r))((x_r-1))⋯((x_1)).
Within this framework, there are several results concerning those iterated fractional Laurent series which are solutions of polynomial equations with coefficients either in K(x) or K((x)). More precisely, the authors provide necessary constraints on the supports of such a series (see <cit.>, <cit.>, <cit.> <cit.>, <cit.>). More recently, Aroca, Decaup and Rond study more precisely the support of Laurent-Puiseux power series which are algebraic over K[[x]] (with certain results for K of positive characteristic) <cit.>. As asserted in <cit.>, one can prove the following result (see the proof in Section <ref>),
which could also be derived from the methods in <cit.> or <cit.>:
The following field 𝒦_r, where L ranges over the finite extensions of K in K, is an algebraically closed extension of K(x) and K((x)) in ℒ_r:
𝒦_r := _(p,q)∈ℕ^*×ℕ^r-1_L
L(( ( x_1/x_2^q_1)^1/p,…, ( x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)).
Let ỹ_0∈𝒦_r and f̃,g̃∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let α be the lexicographic valuation of g̃ (where it is understood that the valuation of x_i^1/p is equal to 1/p times the valuation of x_i). Denote g̃=ax^α(1-ε) with ε having positive valuation. We expand:
ỹ_0=f̃/g̃=f̃ a^-1x^-α∑_k∈ε^k
as a generalized power series ∑_n∈(^r,≤_lex) c_n/px^n/p (the latter is well defined by <cit.>). We set:
Supp(∑_n∈(^r,≤_lex) c_n/px^n/p):={1/pn∈(1/p^r,≤_lex) | c_n/p≠ 0}.
Let us call the elements of 𝒦_r rational polyhedral Puiseux series (since one can observe that the support with respect to the variables x_i's of such a series is included in the translation of some rational convex polyhedral cone). We are interested in those rational polyhedral Puiseux series that are algebraic over K((x)), say the rational polyhedral Puiseux series which verify a polynomial equation P̃(x,y)=0 with coefficients which are themselves formal power series in x: P̃(x,y)∈ K[[x]][y]∖{0}. Let us call such a series algebroid. If such a series ỹ_0 admits a vanishing polynomial of degree at most d in y, we will say that ỹ_0 is algebroid of degree bounded by d.
More precisely, we extend our previous work on algebraic (over K(x)) Puiseux series in several variables <cit.>, by dealing with the following analogous questions:
∙ Reconstruction of pseudo-vanishing polynomials for a given algebroid rational polyhedral Puiseux series.
In this part, for simplicity reasons, we will assume that K is algebraically closed. For Q̃(x,y)∈ K[[x]][y] a nonzero polynomial, the (x)-adic order of Q̃ is the maximum of the integers k such that Q̃(x,y)∈ (x)^kK[[x]][y] where (x) denotes the ideal of K[[x]] generated by x_1,…,x_r.
We consider ỹ_0=f̃/g̃ with f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] algebroid of degree bounded by d. For an arbitrarily large valuation l∈ℕ, we provide an algorithm which computes polynomials Q̃(x,y)∈ K[[x]][y] such that the expansion of Q̃(x,ỹ_0)∈𝒦_r as a rational polyhedral Puiseux series has valuation greater than l.
More precisely, let us denote ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, and ζ_r:=x_r^1/p. We suppose that for any k∈ℕ, one can compute all the coefficients of ζ^n with n_1+⋯+n_r≤ k in f̃ and g̃. Moreover, we assume that the lexicographic valuations with respect to ζ of f̃ and g̃ are given.
Let d∈ℕ^* and ν̃_0∈ℕ. Let ỹ_0∈𝒦_r be algebroid of degree bounded by d. We assume that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. We consider formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] such that ỹ_0=f̃/g̃. Let β=(β_1,…,β_r) be the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p, ζ_r:=x_r^1/p, and q_i':=q_i+β_i+1+1 for i=1,…,r-1.
We set:
[ L̃: ^r → ; (n_1,…,n_r) ↦ n_r+q'_r-1n_r-1+q'_r-1q'_r-2n_r-2+⋯+q'_r-1q'_r-2⋯ q'_1n_1. ]
The algorithm described in Section <ref> provides for any ν∈ℕ a parametric description of the space of all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with deg_yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has:
L̃(n)≥ν.
Note that the condition L̃(n)≥ν for 1/pn∈Supp Q̃_ν(x,ỹ_0) implies that infinitely many coefficients of Q̃_ν(x,ỹ_0) vanish since n∈^r. With more information on ỹ_0, we can use other linear forms L̃, see Theorem <ref>.
∙ Description of the coefficients of an algebroid rational polyhedral Puiseux series in terms of the coefficients of a vanishing polynomial.
Now, let a polynomial P̃(x,y)∈ K[[x]][y] with only simple roots and a root ỹ_0∈𝒦_r be given. Up to a change of coordinates (see Section <ref>), we reduce to the case of a polynomial P(u,y)∈ K[[u]][y] whose support has constraints (see Lemma <ref>), and a simple root y_0∈ L[[u]] (where [L:K]<∞). In Theorem <ref> and Corollary <ref>, we provide a closed form formula for the coefficients of y_0 in terms of the coefficients of P and the coefficients of a fixed initial part of y_0. This is obtained as a consequence of a generalization of the multivariate Flajolet-Soria formula for Henselian equations (<cit.>), see Theorem <ref>.
Our article is organized as follows. In Section <ref>, we prove a monomialization lemma (Lemma <ref>) which is a key to reduce to the case of formal power series annihilating a polynomial whose support has constraints (Lemma <ref>). This is done by a change of variable (<ref>) corresponding to the lexicographic valuation. Moreover, we distinguish two sets s and t of variables and we show that our series y_0 can be expanded as y_0=∑_nc_n(s) t^n where the c_n(s)∈ K[[s]] are algebraic power series (see Lemma <ref>) of bounded degree (see Lemma <ref>). Section <ref> is devoted to the proof of the nested depth lemma (Theorem <ref>). It is used in the subsequent sections to ensure the finiteness of the computations. We use elementary properties on Bézout's identity and the resultant of two polynomials. In Section <ref>, we show how to reconstruct all the polynomials of given bounded degrees which vanish at given several algebraic power series. This is based on Section <ref> and our previous work on algebraic multivariate power series <cit.>. In Section <ref>, we prove our first main result, Theorem <ref> and its variant Theorem <ref>. Sections <ref> and <ref> are devoted to our second question. In Section <ref>, we study what we call strongly reduced Henselian equations (see Definition <ref>) and prove a generalisation of the multivariate Flajolet-Soria formula (see Theorem <ref>). In Section <ref>, we prove how to reduce to the case of a strongly reduced Henselian equation (see Theorem <ref>) and, in the case of an equation with only simple roots, we derive a closed form formula for the coefficients of a solution y_0 in terms of the coefficients of the equation and of a bounded initial part of y_0 (see Corollary <ref>).
§ PRELIMINARIES
Let us denote ℕ:=ℤ_≥ 0 and ℕ^*:=ℕ∖{0}=ℤ_>0.
For any set ℰ, we denote by |ℰ| its cardinal. We systematically write the vectors using underlined letters, e.g. x:=(x_1,…,x_r), n:=(n_1,…,n_r), and in particular 0:=(0,…,0). Moreover, x^n:=x_1^n_1⋯ x_r^n_r. The floor function will be denoted by ⌊ q ⌋ for q∈ℚ.
For a polynomial P(y)=∑_i=0^d a_iy^i with coefficients a_i in a domain and a_d≠ 0, we consider that its discriminant Δ_P is equal to the resultant of P and ∂ P/∂ y (instead of the more usual convention Δ_P=(-1)^d(d-1)/2/a_dRes(P,∂ P/∂ y)).
For any sequence of nonnegative integers m=(m_i,j)_i,j with finite support and any sequence of scalars a=(a_i,j)_i,j indexed by i∈ℤ^r and j∈ℕ, we set:
* m!:=∏_i,jm_i,j!;
* a^m:=∏_i,ja_i,j^m_i,j;
* |m|:=∑_i,jm_i,j, ||m||:= ∑_i,jm_i,j j∈ℕ and g(m) := ∑_i,jm_i,j i∈ℤ^r.
In the case where k=(k_0,…,k_l), we set
||k|| :=∑_j=0^lk_j j. In the case where k=(k_i)_i∈Δ where Δ is a finite subset of ℤ^r, we set g(k):=∑_i∈Δk_i i.
We will consider the following orders on tuples in ℤ^r:
The lexicographic order n≤_lexm :⇔ n_1<m_1 or (n_1=m_1 and n_2<m_2) or ⋯ or (n_1=m_1, n_2=m_2, … and n_r<m_r).
The graded lexicographic order n≤_grlexm :⇔ |n |<|m| or (|n |=|m| and n≤_lexm).
The product (partial) order n≤m :⇔ n_1≤ m_1 and n_2≤ m_2 ⋯ and n_r≤ m_r.
Note that we will apply also the lexicographic order on ℚ^r. Similarly, one has the anti-lexicographic order denoted by ≤_alex.
Considering the restriction of ≤_grlex to ^r (for which ^r has order type ω), we denote by S(k) (respectively A(k) for k≠ 0), the successor element (respectively the predecessor element) of k in (ℕ^r,≤_grlex).
Given a variable x and a field K, we call Laurent series in x with coefficients in K any formal series ∑_n≥ n^0c_nx^n for some n^0∈ℤ and c_n∈ K for any n. They form a field, which is identified with the fraction field K((x)) of K[[x]].
To view the fields K(x) and K((x)) as embedded into K((x_r))((x_r-1))⋯((x_1)) means that the rational fractions or formal meromorphic fractions can be represented as iterated formal Laurent series, i.e. Laurent series in x_1 whose coefficients are Laurent series in x_2, whose coefficients... etc. This corresponds to the following approach. As in <cit.>, we identify K((x_r))((x_r-1))⋯((x_1)) with the field of generalized power series (in the sense of <cit.>, see also <cit.>) with coefficients in K and exponents in ℤ^r ordered lexicographically, usually denoted by K((X^ℤ^r))^lex. By definition, such a generalized series is a formal expression s=∑_n∈ℤ^rc_nX^n (say a map ℤ^r→ K) whose support (s):={n∈ℤ^r | c_n≠ 0} is well-ordered. The field K((X^ℤ^r))^lex comes naturally equipped with the following valuation of rank r:
[ v_x: K((X^ℤ^r))^lex → (ℤ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞ ]
The identification of K((X^ℤ^r)) and K((x_r))((x_r-1))⋯((x_1)) reduces to the identification
X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r.
By abuse of terminology, we call K((X^ℤ^r))^lex or K((x_r))((x_r-1))⋯((x_1)) the field of (iterated) multivariate Laurent series.
Note also that this corresponds to the fact that the power series in the rings K[x] and K[[x]] are viewed as expanded along (ℤ^r,≤_lex).
Similarly, the field ℒ_r is a union of fields of generalized series L((X^(ℤ^r)/p))^lex and comes naturally equipped with the valuation of rank r:
[ v_x: ℒ_r → (ℚ^r∪{∞},≤_lex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ]
We will need another representation of the elements in K(x) and K((x)), via the embedding of these fields into the field K((X^ℤ^r))^grlex with valuation:
[ w_x: K((X^ℤ^r))^grlex → (ℤ^r∪{∞},≤_grlex); s≠ 0 ↦ min((s)); 0 ↦ ∞. ]
and the same identification:
X^(1,0,…,0)=x_1 , X^(0,1,…,0)=x_2 , … , X^(0,…,0,1)=x_r.
For a polynomial P(y)=∑_j=0^da_jy^j∈ K((X^ℤ^r))^grlex[y], we denote:
w_x(P(y)):=min_j=0,…,d{w_x(a_j)}.
We will also use the following notations to keep track of the variables used to write the monomials. Given a ring R, we denote by R((x_1^ℤ,…,x_r^ℤ))^lex and R((x_1^ℤ,…,x_r^ℤ))^grlex the corresponding rings of generalized series ∑_n∈ℤ^rc_nx^n with coefficients c_n in R. Accordingly, let us write R((x_1^ℤ,…,x_r^ℤ))^lex_Mod and R((x_1^ℤ,…,x_r^ℤ))^grlex_Mod the subrings of series whose actual exponents are all bounded from below by some constant for the product order. Note that these subrings are both isomorphic to the ring ⋃_n∈ℤ^rx^nR[[x]].
Let us write also R((x_1^ℤ,…,x_r^ℤ))^lex_≥_lex0 and R((x_1^ℤ,…,x_r^ℤ))^grlex_≥_grlex0 the subrings of series s with v_x(s)≥_lex0, respectively w_x(s)≥_grlex0.
Let f be non zero in K[[ξ_1,…,ξ_r]]. There exists ρ_1,…,ρ_r-1∈ℕ such that, if we set
{[ η_1 := ξ_1/ξ_2^ρ_1; ⋮; η_r-1 := ξ_r-1/ξ_r^ρ_r-1; η_r := ξ_r ].
then f(ξ_1,…,ξ_r)=η^αg(η_1,…,η_r) where α∈ℕ^r and g is an invertible element of K[[η_1,…,η_r]]. Moreover, for all i=1,…,r-1, ρ_i≤ 1+β_i+1 where β:=v_ξ(f).
Let us write f=ξ^β h where β=v_ξ(f) and h∈ K((ξ_1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h)=0. Note that h can be written as h=h_0+h_1 where h_0∈ K((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v_ξ(h_0)=0, and h_1∈ξ_1K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_Mod. If h_1∈ K[[ξ_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod, then we set ρ_1=0. Otherwise, let ρ_1 be the smallest positive integer such that:
ρ_1≥sup{1 ; (1-m_2)/m_1, m∈supp h_1}.
Note that, since m_1≥ 1 and m_2≥ -β_2, we have that ρ_1≤ 1+β_2. We also remark that the supremum is achieved for 0≥ m_2≥ -β_2 and 1+β_2 ≥ m_1≥ 1.
Let η_1:=ξ_1/ξ_2^ρ_1. For every monomial in h_1, one has ξ_1^m_1ξ_2^m_2…ξ_r^m_r=η_1^m_1ξ_2^m_2+ρ_1m_1…ξ_r^m_r. Hence, m_2+ρ_1m_1≥ 1 by definition of ρ_1. So (m_2+ρ_1m_1,…,m_r)>_lex0, meaning that h_1∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h_1)>_lex0 where here v is the lexicographic valuation with respect to the variables (η_1,ξ_2,…,ξ_r). So h∈ K[[η_1]]((ξ_2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0. Note that the exponents m_3,…, m_r remain unchanged in the support of h.
Suppose now that we have obtained h∈ K[[η_1,…,η_p]]((ξ_p+1^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and that v(h)=0 where v is now the lexicographic valuation with respect to the variables
(η_1,…,η_p,ξ_p+1,…,ξ_r). The induction step is similar to the initial one. As before, let us write h=h_0^(p+1)+h_1^(p+1) where h_0^(p+1)∈ K[[η_1,…,η_p]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod with v(h_0^(p+1))=0, and
h_1^(p+1)∈ξ_p+1K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_Mod.
If
h_1^(p+1)∈ K[[η_1,…,η_p,ξ_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod,
then we set ρ_p+1=0. Otherwise, let ρ_p+1 be the smallest positive integer such that:
ρ_p+1≥sup{1 ; (1-m_p+2)/m_p+1, m∈supp h_1^(p+1)}.
Note that, since m_p+1≥ 1 and m_p+2≥ -β_p+2 (since these exponents m_p+2 remained unchanged until this step), we have that ρ_p+1≤ 1+β_p+2. If we set η_p+1:=ξ_p+1/ξ_p+2^ρ_p+1, then h∈ K[[η_1,…,η_p+1]]((ξ_p+2^ℤ,…,ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_p+1,ξ_p+2,…,ξ_r)).
By iteration of this process, we obtain that h ∈ K[[η_1,…,η_r-1]]((ξ_r^ℤ))^lex_≥_lex0, Mod and v(h)=0 (where v is now the lexicographic valuation with respect to the variables (η_1,…,η_r-1, ξ_r)), which means that h∈ K[[η_1,…,η_r-1,ξ_r]] with h invertible. Since ξ^β=η^α for some α∈^r, the lemma follows.
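As a simple illustration of the lemma (the example is ours), take r=2 and f(ξ_1,ξ_2)=ξ_1+ξ_2^2. Then β=v_ξ(f)=(0,2) and f=ξ_2^2(1+ξ_1ξ_2^-2), so h_1=ξ_1ξ_2^-2 and the smallest admissible exponent is ρ_1=3=1+β_2. Setting η_1:=ξ_1/ξ_2^3, we get f=ξ_2^2(1+η_1ξ_2), that is, a monomial times an invertible element of K[[η_1,ξ_2]].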
(i) Let ỹ_0:=f̃/g̃∈𝒦_r. There exist (p,q)∈ℕ^*×ℕ^r-1 and L with [L:K]<+∞ such that ỹ_0∈ L(((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p)). We note that we can rewrite
ỹ_0 as a monomial (with integer exponents) times an invertible power series in other variables (( x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q_r-1')^1/p ,x_r^1/p).
Indeed, let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p). So ỹ_0=f̃/g̃ for some f̃,g̃∈ L[[ξ]].
By the preceding lemma, we can monomialize the product f̃.g̃, so f̃ and g̃ simultaneously, by a suitable transformation (<ref>). Note that this transformation maps L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]] into some L[[(x_1/x_2^q_1')^1/p,…, (x_r-1/x_r^q'_r-1)^1/p ,x_r^1/p]]. Indeed, a monomial in ξ is transformed into a monomial in η, and one has that:
η_1^i_1/p⋯η_r-1^i_r-1/pη_r^i_r/p= (x_1/x_2^q_1/(x_2/x_3^q_2)^ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1/x_r^ρ_r-1)^i_r-1/px_r^i_r/p =
(x_1/x_2^q_1+ρ_1)^i_1/p⋯(x_r-1/x_r^q_r-1+ρ_r-1)^i_r-1/p x_r^i_r/p(x_3^q_2ρ_1)^i_1/p(x_4^q_3ρ_2)^i_2/p⋯(x_r^q_r-1ρ_r-2)^i_r-2/p
and we write (x_3^q_2ρ_1)^i_1/p= (x_3/x_4^q_3+ρ_3)^q_2ρ_1i_1/px_4^(q_3+ρ_3)q_2ρ_1i_1/p and so on. Thus we obtain a monomial in the variables ((x_1/x_2^q_1+ρ_1)^1/p,…, (x_r-1/x_r^q_r-1+ρ_r-1)^1/p, x_r^1/p).
(ii) Let f∈ K[[ξ]], ρ_1,…,ρ_r-1∈ℕ, and η be as in the Monomialization Lemma <ref>. Let β=v_ξ(f). If we replace ρ_1,…,ρ_r-1 by ρ_1',…,ρ_r-1' with ρ_i'≥ρ_i for all i, and we proceed to the corresponding change of variables η' as in (<ref>), then we still have f(ξ)=(η')^αg'(η') for some invertible g'∈ K[[η']]. So Lemma <ref> holds true if we take 1+β_i+1 instead of ρ_i whenever ρ_i>0.
𝒦_r is an algebraically closed extension of K((x)).
This is a consequence of Abhyankar-Jung Theorem <cit.>, see <cit.>, and our Monomialization Lemma <ref>. Let
P(y)=∑_i=0^da_iy^i∈ L[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]][y]
where [L:K]<+∞, p∈ℕ^*, q_i∈ℕ for i=1,..r-1 and a_d≠ 0. We want to show that P has a root in 𝒦_r. Up to multiplication by a_d^d-1 and change of variable z=a_dy, we may assume that P is monic. Let us denote ξ=(ξ_1,…,ξ_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p) and P(y)=P(ξ,y).
Up to replacing L by a finite algebraic extension of it, we may also suppose that
P(0,y)=(y-c_1)^α_1⋯ (y-c_m)^α_m
with c_i∈ L. By Hensel's Lemma (see Raynaud, Proposition 5 (4), and Lafon, Algèbre locale, Chap. 12, Theorem 12.5, p. 166), there exist polynomials P_1(ξ,y),…,P_m(ξ,y) such that P_i(0,y)=(y-c_i)^α_i (i=1,..,m) and P=P_1⋯ P_m. It is enough to show that P_1 has a root in 𝒦_r. By a change of variable y=z-c_1, we are led to the case of a polynomial
P(ξ,y)=y^d+∑_i=0^d-1a_i(ξ)y^i
with a_i(0)=0, i=0,..,d-1. By our Monomialization Lemma <ref> and Remark <ref>(i), we may assume that the discriminant of P is monomialized. Hence, Abhyankar-Jung Theorem applies. Note that this last step may require to replace L by a finite algebraic extension.
Let ỹ_0∈𝒦_r be a non zero rational polyhedral Puiseux series. Let us show that the existence of a nonzero polynomial P̃(x,y) cancelling ỹ_0 is equivalent to the one of a polynomial P(u,y) cancelling y_0∈ L[[u]], but with constraints on the support of P.
Indeed, by our Monomialization Lemma <ref> and Remark <ref>(i), there are (p,q)∈ℕ^*×ℕ^r-1 such that, if we set:
(u_1,…,u_r-1,u_r):=((x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p),
then we can rewrite ỹ_0 =∑_n≥ñ^0c̃_nu^n, c̃_ñ^0≠ 0. Let us denote c_n:=c̃_n+ñ^0, and:
ỹ_0=u^ñ^0∑_n≥0 c_nu^n=u^ñ^0 y_0 with c_0≠ 0.
Hence, y_0 is a formal power series in u with coefficient in a finite algebraic extension L of K.
By the change of variable (<ref>), we have:
x_k=u_k^pu_k+1^pq_ku_k+2^pq_kq_k+1⋯ u_r^pq_kq_k+1⋯ q_r-1, k=1,…,r
The rational polyhedral Puiseux series ỹ_0 is a root of a polynomial
P̃(x,y)=∑_j=0^d∑_i∈^rã_i,jx^iy^j ∈ K[[x]][y]
of degree d in y if and only if the power series y_0=∑_n∈^r c_nu^n∈ L[[u]] is a root of
u^m̃^0P̃( u_1^pu_2^pq_1⋯ u_r^pq_1q_2⋯ q_r-1 , … , u_r^p , u^ñ^0y),
the latter being a polynomial P(u,y) in K[[u]][y] for m̃^0 such that
m̃^0_k=max{0 ; -ñ_k^0d}, k=1,…,r
.
Note that the transformation is uniquely defined by p,q,d and ñ^0.
In the following lemma, we clarify the constraints on the support of the polynomial P.
With the notations of (<ref>), we set u=( t_0, s_1, t_1,…, s_σ, t_σ) where t_0 might be empty, such that u_i∈ s_k if and only if q_i≠ 0 (and, so u_i∈ t_k if and only if q_i=0). Moreover, we write s:=( s_1,…, s_σ) and t:=( t_0, t_1,…, t_σ).
Hence, a polynomial P̃(x,y) ∈ K[[x]][y] is changed by the transformation induced by (<ref>) and (<ref>) into a polynomial:
P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]]
with
for any i such that u_i∈s_k,
deg_u_i(P_l,j(s))-(m̃^0_i+jñ_i^0) ≤[ deg_u_i+1 (P_l,j(s) t^l)-(m̃^0_i+1+jñ_i+1^0) ]/q_i, j=0,..,d.
Conversely, any polynomial
P(s,t,y)=∑_l≥0∑_j=0^dP_l,j(s)y^j t^l∈ K[s,y][[t]]
comes from a unique polynomial P̃(x,y) ∈ K[[x]][y] by the transformation induced by (<ref>) and (<ref>) if and only if each monomial u^αy^j in the support of P satisfies the following conditions:
(i) α≥m̃^0+jñ^0;
(ii) ∀ i=1,…,r, α_i-(m̃^0_i+jñ_i^0)≡ 0 (p) ;
(iii) For any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i.
Let us collect the variables x_i according to the distinction between t_j and s_k among the variables u_l. We set x_k for the sub-tuple of variables x_i corresponding to t_k, and ξ_k for s_k respectively.
Let us consider a general monomial:
x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j.
where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when t_0 is not empty, we denote x_0= t_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1.
By the change of variable (<ref>), for each k=1,…,σ, we obtain that:
ξ_k^ m_k x_k^ n_k= ((x_i_k/x_i_k+1^q_i_k)^1/p)^pn_i_k( (x_i_k+1/x_i_k+2^q_i_k+1)^1/p)^p(n_i_k+1+q_i_kn_i_k)⋯
((x_j_k-1/x_j_k^q_j_k-1)^1/p)^p(n_j_k-1+q_j_k-2n_j_k-2 +q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k)
×( x_j_k^1/p)^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k)
×( x_j_k+1^1/p)^pn_j_k+1⋯(x_i_k+1-1^1/p)^pn_i_k+1-1
= u_i_k^pn_i_ku_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯u_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k)
u_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) u_j_k+1^pn_j_k+1⋯u_i_k+1-1^pn_i_k+1-1
[ = s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q_j_k-2n_j_k-2+q_j_k-2q_j_k-3n_j_k-3+⋯+ q_j_k-2q_j_k-3⋯ q_i_kn_i_k); t_j_k^p(n_j_k+q_j_k-1n_j_k-1+q_j_k-1q_j_k-2n_j_k-2+⋯+ q_j_k-1q_j_k-2⋯ q_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ]
Moreover, y^j is transformed into
u^m̃^0+jñ^0y^j.
For u_i∈ s_k, we denote by c_i its exponent in Formula (<ref>). If i<j_k-1, then u_i+1∈ s_k and its exponent is c_i+1=p(n_i+1+q_in_i+⋯ +q_iq_i-1⋯ q_i_kn_i_k) =pn_i+1+q_ic_i. The total exponent of u_i in the transform of x^ ny^j is c_i+m̃^0_i+jñ_i^0. So,
deg_u_i+1 (P_l,j(s) y^j t^l)-(m̃^0_i+1+jñ_i+1^0)
= deg_u_i+1 (P_l,j(s))-(m̃^0_i+1+jñ_i+1^0)
≥ q_i(deg_u_i (P_l,j(s))-(m̃^0_i+jñ_i^0)).
If i=j_k-1, then u_i+1=t_j_k∈ t_k. Likewise, its exponent in (<ref>) is pn_j_k+q_j_k-1c_j_k-1. We obtain that
deg_u_i+1 (P_l,j(s) y^j t^l)-(m̃^0_j_k+jñ_j_k^0) = deg_t_j_k t^l-(m̃^0_j_k+jñ_j_k^0) ≥ q_j_k-1(deg_u_j_k-1 P_l,j(s)-(m̃^0_j_k-1+jñ_j_k-1^0) ).
Conversely, we consider a monomial s_k^λ t_k^μ. It is of the form (<ref>), that is, it comes from a monomial ξ_k^ m_k x_k^ n_k, if and only if
deg_u_i s_k^λ ≤ [deg_u_i+1 (s_k^λ t_k^μ)]/q_i
and λ_i≡μ_j≡ 0 (p), which are equivalent to the conditions (ii) and (iii). Taking into account the transformation (<ref>), this gives the converse part of the lemma.
Note that, if x^ny^j≠x^n'y^j', the transformation applied to these monomials gives u^αy^j≠u^α'y^j'.
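As a simple illustration of conditions (i), (ii), (iii) (only an example, not used later): take r=2, p=1, q_1=2 and ñ^0=m̃^0=0, so that s=(u_1), t=(u_2) and x_1=u_1u_2^2, x_2=u_2. A monomial x_1^i_1x_2^i_2y^j is sent to u_1^i_1u_2^2i_1+i_2y^j, and indeed α=(i_1,2i_1+i_2) satisfies (i) and (ii) trivially, and (iii) since i_1≤ (2i_1+i_2)/2.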
For the rest of this section, and also for Sections <ref>, <ref> and <ref>, we assume that the field K is algebraically closed, hence K̄=K and L=K.
If for all i, q_i=0, namely if u_i=x_i^1/p, then any ỹ_0=f/g with f,g∈ K[[u]]
is algebroid. Indeed, let θ_p denote a primitive pth root of unity. We set:
P̃(u,y) := ∏_i=1,…,r∏_k_i=0,…,p-1g(θ_p^k_1u_1,…,θ_p^k_ru_r) (y-ỹ_0(θ_p^k_1u_1,…,θ_p^k_ru_r))
= ∏_i=1,…,r∏_k_i=0,…,p-1[g(θ_p^k_1u_1,…,θ_p^k_ru_r) y-f(θ_p^k_1u_1,…,θ_p^k_ru_r)].
Note that P̃(u,ỹ_0)=0. Moreover, since P̃(u_1,…,θ_pu_i,…,u_r,y)=P̃(u,y) for any i=1,…,r, we conclude that P̃∈ K[[x]][y].
Consequently, from now on, we consider the case where q_i≠ 0 for at least one i∈{1,…,r}.
Let us denote by τ the number of variables in s, and so r-τ is the number of variables in t. We consider y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n such that c_0,0≠ 0 which satisfies an equation
P(s,t,y)=0, where P satisfies conditions (i), (ii) and (iii) of Lemma <ref>.
The series c_n(s)∈ K[[s]], n∈ℕ^r-τ, are all algebraic over K(s), and lie in a finite extension of K(s).
We consider y_0 =∑_n∈ℕ^r-τ c_n(s) t^n root of a non-trivial polynomial
P(s,t,y)=∑_l∈ℕ^r-τ P_l(s,y) t^l∈ K[s,y][[t]]
which satisfies conditions (i), (ii) and (iii). We proceed by induction on ℕ^r-τ ordered by ≤_ grlex. Given some n∈ℕ^r-τ, we set
y_0=z̃_n+c_nt^n+y_n
with
z̃_n=∑_β<_grlexn c_βt^β, y_n=∑_β>_grlexn c_βt^β,
(and z̃_0:=0, which corresponds to the initial step of the induction). We assume that the coefficients c_β of z̃_n belong to a finite extension L_n of K(s). We set
Q_n(t,y):=P(s,t,z̃_n+y)∈ L_n[y][[t]]
and we denote it by:
Q_n(t,y)=∑_l≥0Q_n,l(y) t^l.
We claim that
w_t(P)=w_t(Q_n).
This is clear if n=0.
For n>_ grlex0, let l_0:=w_t(P). We have
Q_n(t,y)=P_l_0(s, z̃_n+y)t^l_0+⋯ =( ∑_j=0^d 1/j!∂^j P_l_0/∂ y^j(s,y)z̃_n^j )t^l_0+⋯
Let d_l_0 := deg_y P_l_0: the coefficient of y^d_l_0 in the previous parenthesis is not zero for j=0 but zero for j≥ 1. Namely, it is the leading coefficient of P_l_0(s,y), and the corresponding term is of the form a(s)y^d_l_0t^l_0, which therefore cannot overlap with other terms.
By Taylor's formula, we have that:
Q_n(t,Ct^n+y)=∑_l≥_ grlexl_0∑_j=0^d 1/j!∂^j Q_n,l/∂ y^j(0) (Ct^n+y)^j t^l.
Recall that y_n∈ K[[s]][[t]] with w_t(y_n)>_grlexn. Then Q_n(t,Ct^n+y_n)≠ 0 as a polynomial in C (otherwise P would have more than d roots). Necessarily, w_t( Q_n(t,Ct^n+y_n)) is of the form ω=l_1+j_1 n. Indeed, let us consider ω:=min_l,j{l+j n | ∂^j Q_n,l/∂ y^j(0)≠ 0}, and among the (l,j)'s which achieve this minimum, consider the term with the biggest j. This term cannot be cancelled. The corresponding coefficient of t^ω in Q_n(t,Ct^n+y_n) is a nonzero polynomial in C of the form:
∑_l_k+j_k n=ω1/j_k!∂^j_k Q_n,l_k/∂ y^j_k(0) C ^j_k.
Since y_0 is a root of P, this polynomial needs to vanish for C=c_n, which proves by the induction hypothesis that c_n is itself algebraic over K(s).
Without loss of generality, we may assume that y_0 is a simple root of P, hence, ∂ P/∂ y(s,t,y_0) ≠ 0. With the same notations as above, we consider n_0:= w_t(∂ P/∂ y(s,t,y_0)) ∈ℕ^r-τ. For any n>_grlexn_0, ∂ Q_n/∂ y(t,0)=∂ P/∂ y(s,t,z̃_ n) and
w_t(∂ Q_n/∂ y(t,0)-∂ P/∂ y(s,t,y_0))=w_t(∂ P/∂ y(s,t,z̃_ n)-∂ P/∂ y(s,t,y_0))≥_grlexn>_grlexn_0.
So w_t(∂ Q_n/∂ y(t,0))=n_0.
By Taylor's formula:
Q_n(t,Ct^n+y_n)=∑_j=0^d 1/j!∂^j Q_n/∂y_n^j(t,0) (Ct^n+y)^j.
We have:
w_t(∂ Q_n/∂ y(t,0) (Ct^n+y_n))= n+n_0,
and for any j≥ 2:
w_t(∂^j Q_n/∂ y^j(t,0) (Ct^n+y_n)^j)≥_grlex 2n>n+n_0.
We deduce by (<ref>) that w_t(Q_n(t,0)) ≥_grlexn+n_0 since, otherwise, Q_n(t,Ct^n+y_n) could not vanish at C=c_n.
Let us prove by induction on n∈ℕ^r-τ ordered by ≤_ grlex, n≥_ grlexn_0, that the coefficients c_l of t^l in z̃_n all belong to L_ n_0=K(s,c_0,…,c_n_0). The initial case is clear. Assume that the property holds for less than some given n. Let us denote ∂ Q_n/∂ y(t,0)=a_n_0t^n_0+R(t) with w_t(R(t)) >_grlexn_0, a_n_0≠ 0, and Q_n(t,0)=b_n+n_0t^n+n_0+S(t) with w_t(S(t)) >_grlexn+n_0. By (<ref>) and the induction hypothesis, a_n_0 and b_n+n_0 belong to L_n_0. Looking at the coefficient of t^n+n_0 in (<ref>) evaluated at C=c_n, we get:
a_n_0c_n +b_n+n_0=0.
Hence we obtain that c_n∈ L_n_0=K(s,c_0,…,c_n_0) for all n>_ grlexn_0.
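For instance (an illustration not needed in the sequel, and assuming char K≠ 2), with τ=r-τ=1, the series y_0=√(1+s)/(1-t)=∑_n≥0 √(1+s) t^n is a root of P(s,t,y)=(1-t)^2y^2-(1+s): every coefficient c_n(s)=√(1+s) is algebraic over K(s), and all of them lie in the single finite extension K(s,√(1+s)), as predicted by the two previous lemmas.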
Let us recall that A(n) denotes the predecessor element of n in (ℕ^r,≤_grlex). The following lemma will be used in Section <ref> in order to apply the results of Section <ref>.
Let d, m̃^0, ñ^0, q, p and P be as above (see (<ref>) and (<ref>)). As in the proof of the previous lemma, we set l_0:=w_t(P). We resume the notations of Lemma <ref>. For k=1,…,σ, with s_k=(u_i_k,…,u_j_k-1), we denote
e_s_k:=1/q_i_kq_i_k+1⋯ q_j_k-1+1/q_i_k+1⋯ q_j_k-1+⋯ + 1/ q_j_k-1,
and ñ^0,s_k (respectively m̃^0,s_k), the multi-index obtained from ñ^0 (respectively m̃^0), by restriction to the components corresponding to the variables in s_k.
Likewise, we set ñ^0,t_k and m̃^0,t_k corresponding to the variables in t_k for k=0,…, σ. Let n∈^r-τ, then there exists T_n∈ K[s,( C_β)_β≤_grlexn]∖{0} such that T_n(s,c_0,…,c_A(n),c_n)=0, T_n(s,c_0,…,c_A(n),C_n)≢0 with
deg_C_β T_n≤ d,
deg_s T_n≤( |l_0|+d |n| )a+b,
where
a:=∑_k=1^σ e_s_k,
b:=ε(∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k)+∑_k=1^σ |m̃^0,s_k|-∑_k=1^σm̃^0,t_k_j_k e_s_k,
with ñ^0,t_k_j_k (respectively m̃^0,t_k_j_k) the first component of ñ^0,t_k (respectively m̃^0,t_k), and
ε:={[ 0 if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k≤ 0,; d if ∑_k=1^σ |ñ^0,s_k|-∑_k=1^σñ^0,t_k_j_k e_s_k> 0. ].
Resuming the notations and computations of the previous lemma (see (<ref>) to (<ref>)), c_n is a root of a nonzero polynomial in C of the form:
∑_l_k+p_k n=ω1/p_k!∂^p_k Q_ n,l_k/∂ y^p_k(0) C ^p_k
where ω:=w_t( Q_n(t,Ct^n+y_n))=l_1+p_1 n≤_grlexl_1+d n. Let us denote by T_n the polynomial obtained from the preceding expression by substituting C_n for C and C_β for c_β for β<_grlexn. More precisely, if we set
H_n(s,t,(C_β)_β≤_grlexn,y)= P(s,t,∑_β≤_grlexn C_βt^β+y )
=∑_l∈ℕ^r-τ H_n,l(s,(C_β)_β≤_grlexn,y)t^l
then T_n(s,(C_β)_β≤_grlexn):=H_n,ω(s,(C_β)_β≤_grlexn,0).
Since w_t(Q_ n)=w_t(P) by (<ref>), we observe that l_0=min_≤_grlex{l | ∃ p, ∂^p Q_ n,l/∂ y^p(0)≠ 0 }. Let p_0 = min{p | ∂^p Q_ n,l_0/∂ y^p(0)≠ 0 }. Then the coefficient of C^p_0t^l_0 +p_0n in the expansion of Q_ n(t,C t^n+y_n) is not zero. Since we have that:
Q_ n(t,Ct^n+y_n)=∑_l≥0∑_j=0^d 1/j!∂^j Q_ n,l/∂ y^j(0) (Ct^n+y_n)^j t^l,
the term 1/p_0!∂^p_0 Q_ n,l_0/∂ y^p_0(0) C ^ p_0 t^l_0+p_0n cannot overlap with other terms since the latter will necessarily be of the form 1/(p-p_0)!p_0!∂^p Q_ n,l/∂ y^p(0) C ^ p_0 t^l+p_0ny_n^p-p_0 with l≥_grlexl_0, p≥ p_0 and w_t(y_n)>_grlexn. (see (<ref>)). So, ω≤_grlexl_0+p_0n≤_grlexl_0+dn.
Let us detail the expression of the connection between P and Q_ n. We denote P(s,t,y)=∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^ky^j) t^l, and we get:
Q_ n(s,t,y) =P(s,t,z̃_n+y )
=∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^k(∑_β<_grlexn c_βt^β+y)^j) t^l
=∑_l∈ℕ^r-τ(∑_k∈^τ∑_j=0^d a_k,l,js^k(∑_|j|=jj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^g(j)-j_nn)) t^l
=∑_l∈ℕ^r-τ∑_k∈^τ∑_j=0^d∑_|j|=j a_k,l,js^kj!/j!(∏_β<_grlexn c_β^j_β) y^j_nt^l+g(j)-j_nn
where j=(j_0,…,j_n) and g(j) is as in Notation <ref>. Next, we evaluate y at C t^n+y_n and we consider the (l,j)'s such that l+g(j)=ω for which the coefficient of t^ω is the non-trivial polynomial of which c_n is a root. Then, the multi-indices l involved are such that l≤_grlexl_0+dn. Consider such a monomial s^ kt^ly^j written as u^αy^j as in (<ref>). Recall that the elements of the support of P satisfy Condition (iii) of Lemma <ref>: for any k=1,…,σ, for any u_i∈s_k, α_i-(m̃^0_i+jñ_i^0)≤α_i+1-(m̃^0_i+1+jñ_i+1^0)/q_i. For s_k=(u_i_k,…,u_j_k-1) and t_k=(u_j_k,…,u_i_k+1-1), we claim that for any i=i_k,…,j_k-1,
α_i≤α_j_k/q_iq_i+1⋯ q_j_k-1+j ( ñ^0_i-ñ^0_j_k/q_iq_i+1⋯ q_j_k-1)+m̃^0_i-m̃^0_j_k/q_iq_i+1⋯ q_j_k-1.
The case i=j_k-1 is given by Condition (iii). Suppose that the formula holds until i+1, i.e.
α_i+1≤α_j_k/q_i+1⋯ q_j_k-1+j ( ñ^0_i+1-ñ^0_j_k/q_i+1⋯ q_j_k-1)+m̃^0_i+1-m̃^0_j_k/q_i+1⋯ q_j_k-1.
Since, by Condition (iii), we have α_i≤α_i+1/q_i+j(ñ_i^0-ñ_i+1^0/q_i)+m̃_i^0-m̃_i+1^0/q_i, we obtain the formula for α_i as expected.
Now, we consider the sum for i=i_k,…,j_k-1 of these inequalities (<ref>):
∑_i=i_k^j_k-1α_i≤α_j_ke_s_k+j(|ñ^0,s_k|-ñ^0 _j_ke_s_k)+|m̃^0,s_k|-m̃^0 _j_ke_s_k.
Note that ñ^0 _j_k=ñ^0,t_k_j_k and m̃^0 _j_k=m̃^0,t_k_j_k. Moreover, α_j_k is equal to some l_γ component of l, so α_j_k≤ |l_0|+d|n|. So,
∑_i=i_k^j_k-1α_i≤(|l_0|+d|n|)e_s_k+j(|ñ^0,s_k|-ñ^0,t_k_j_ke_s_k)+|m̃^0,s_k|-m̃^0,t_k_j_ke_s_k.
Taking the sum for k=1,…,σ, we obtain:
|k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+j(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k.
Since 0≤ j≤ d, we finally obtain:
|k|≤(|l_0|+d|n|)∑_i=1^σe_s_k+ε(∑_i=1^σ|ñ^0,s_k|-∑_i=1^σñ^0,t_k_j_ke_s_k)+∑_i=1^σ|m̃^0,s_k|-∑_i=1^σm̃^0,t_k_j_ke_s_k.
From the previous proof, we observe that, for any monomial s^ kt^ly^j in the support of a polynomial P which satisfies the conditions of Lemma <ref>, one has that:
| k|≤ a | l|+b,
where a and b are as in Lemma <ref>. To see this, use α_j_k≤ |l| in place of α_j_k≤ |l_0|+d|n| in (<ref>).
For r=2, let p,q∈ℕ^* and ñ^0=(ñ^0_1,ñ^0_2)∈ℤ^2.
* Let us consider:
ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p∑_i,j=0^p-1(1/1-x_2x_2^q/x_2^q-x_1) (x_1/x_2^q)^i/p x_2^j/p∈𝒦_2.
The series ỹ_0 is algebroid, and even algebraic, since it is a finite sum and product of algebraic series. Here, (u_1,u_2)=( (x_1/x_2^q)^1/p, x_2^1/p)=(s,t). Moreover, it has full support:
{1/pñ^0+(k/p, l-qk/p) | (k,l)∈^2 }.
* Let us consider
ỹ_0=(x_1/x_2^q)^ñ^0_1/px_2^ñ^0_2/p(1/1-x_2^1/p) exp((x_1/x_2^q)^1/p) ∈𝒦_2.
The series ỹ_0 is transcendental over K[[x_1,x_2]]. Indeed, with the same notations as above, ỹ_0=s^ñ^0_1/pt^ñ^0_2/p1/1-texp(s) is algebroid if and only if exp(s) is algebraic by Lemma <ref>. This is clearly not the case. Moreover, ỹ_0 has the same support as above.
In <cit.>, the authors ask whether K((x)) is a Rayner field. The above example with p=1 provides us with two series having same support, the first belonging to K((x)), and the second not. Following the argument after <cit.>, this shows that K((x)) is not a Rayner field.
§ A NESTED DEPTH LEMMA.
Let d_x, d, δ_x, δ∈ℕ^*.
Given two polynomials P∈ K[x,y]∖{0}, deg_x P≤ d_x, deg_y P≤ d, and Q∈ K[x,y]∖{0}, deg_x Q≤δ_x, deg_y Q≤δ, we denote by R∈ K[x] their resultant with respect to y. It satisfies deg_x R≤ dδ_x+δ d_x. Moreover, in the Bézout identity
AP+BQ=R,
one can choose the polynomials A, B∈ K[x,y] which satisfy:
deg_x A≤ d_x(δ-1)+δ_x d, deg_y A≤δ-1;
deg_x B≤ d_xδ+δ_x(d-1), deg_y B≤ d-1.
We consider the following linear map:
φ: K(x)[y]_δ× K(x)[y]_d → K(x)[y]_d+δ, (A,B) ↦ AP+BQ,
where K(x)[y]_n denotes the K(x)-vector space of polynomials of degree less than n in y.
The matrix M of φ in the standard basis {(y^i,0)}∪{(0,y^j)} and {y^k} is the Sylvester matrix of P and Q.
The polynomial R∈ K[x] is its determinant. So, deg_x R≤ dδ_x+δ d_x. Let M' be the matrix of cofactors of M. From the relation M·^tM'=R Id_d+δ, one deduces the Bézout identity AP+BQ=R, the coefficients of A and B being minors of M of maximal order minus 1.
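The degree bound above can also be checked numerically on random instances. The following sketch (in Python with the sympy library, for r=1 only; the helper rand_poly and the chosen degrees are ad hoc assumptions, not part of the text) merely verifies deg_x res_y(P,Q) ≤ dδ_x+δ d_x on one example.
\begin{verbatim}
# Sanity check of the resultant degree bound for r = 1 (one variable x).
import random
from sympy import symbols, Poly, resultant

x, y = symbols('x y')
random.seed(0)

def rand_poly(deg_x, deg_y):
    """Random element of K[x,y] with the prescribed degree bounds."""
    return sum(random.randint(-3, 3) * x**i * y**j
               for i in range(deg_x + 1) for j in range(deg_y + 1))

d_x, d, delta_x, delta = 2, 3, 1, 2
P, Q = rand_poly(d_x, d), rand_poly(delta_x, delta)
R = resultant(P, Q, y)                  # a polynomial of K[x]
if R != 0:
    assert Poly(R, x).degree() <= d * delta_x + delta * d_x
\end{verbatim}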
Let 𝔄 be a domain and 𝔎 its field of fractions. Given n∈ℕ, n≥ 2, we consider an n× n matrix M=(m_i,j) with coefficients in 𝔄. We suppose that M (as a matrix with coefficients in 𝔎) has rank n-p for some 1≤ p<n. Then there exists a vector V∈𝔄^n∖{0} whose nonzero coefficients are equal, up to sign ±, to minors of order n-p of M and such that M.V=0.
Without loss of generality, we can suppose that the minor of order n-p, say Δ, given by the first n-p rows and columns is not zero. Denote V:=(Δ_1,…, Δ_n). For k>n-p+1, set Δ_k:=0. For k=n-p+1, set Δ_k:=(-1)^n-p+1Δ≠ 0. For k< n-p+1, we set Δ_k equal to (-1)^k times the minor of M given by the first n-p rows and all but the k-th of the first n-p+1 columns. Denote M.V:=(c_1,…,c_n). We claim that M.V=0. Indeed, c_1= ∑_j=1^n-p+1 m_1,jΔ_j, which is the determinant of the (n-p+1)×(n-p+1)-matrix (δ_i,j) with δ_i,j=m_i,j for 1≤ i≤ n-p and 1≤ j≤ n-p+1, and δ_n-p+1,j=m_1,j for 1≤ j≤ n-p+1. This determinant vanishes since it has two identical rows. Similarly, we have that c_2=⋯=c_n-p=0.
Now, c_n-p+1=∑_j=1^n-p+1 m_n-p+1,jΔ_j, which is equal to a minor of order n-p+1 of M. It vanishes since M has rank n-p. Similarly, c_n-p+2=…=c_n=0.
Let 𝔄 be a domain and 𝔎 its field of fractions.
Let P_1,P_2∈𝔄[y]∖{0} of positive degrees d_1≥ d_2 respectively.
The Sylvester matrix of P_1 and P_2 has rank at least d_1.
Moreover,
it has rank d_1 if and only if aP_1=BP_2 for some a∈𝔄 and B∈𝔄[y]∖{0}.
In this case, one can take a=q_d_2^d_1-d_2 + 1 (where q_d_2 is the coefficient of y^d_2 in P_2) and the coefficients of such a polynomial B can be computed as homogeneous polynomial formulas in the coefficients of P_1 and P_2 of degree d_1-d_2+1, each monomial consisting of d_1-d_2 coefficients of P_2 times 1 coefficient of P_1.
As in the proof of Lemma <ref>, we denote by M_P_1,P_2 the Sylvester matrix of P_1 and P_2. By definition, its d_1 columns corresponding to the coefficients of y^lP_2, l=0,…,d_1-1, being upper triangular are linearly independent (and the same holds for the d_2 columns corresponding to the coefficients of y^kP_1). Hence, M_P_1,P_2 has rank at least max{d_1,d_2}=d_1.
Moreover, an equality aP_1=BP_2 translates exactly into a linear relation between the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. In this case, the linear relation repeats mutatis mutandi between the column corresponding to y^k P_1 and the columns corresponding to y^lP_2 for l=k,…,d_1-d_2+k, corresponding to an equality ay^kP_1=y^kBP_2.
Let us consider the submatrix N_P_1,P_2 of M_P_1,P_2 consisting of the column corresponding to P_1 and the columns corresponding to y^lP_2 for l=0,…,d_1-d_2. It has rank d_1-d_2+1. By the previous lemma, there exists a nonzero vector in the kernel of N_P_1,P_2, given by minors of order d_1-d_2+1. More precisely, we are in the case of a Cramer system encoding an equality BP_2 = aP_1, with in particular a=q_d_2^d_1-d_2+1 corresponding to the determinant of the matrix of the linear map B↦ BP_2. By Cramer's rules, the coefficients of B are computed as determinants which indeed give homogeneous polynomial formulas with monomials consisting of d_1-d_2 coefficients of P_2 and 1 coefficient of P_1.
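For instance (a toy case): over 𝔄=K[x], the polynomials P_1=y^2-x^2 and P_2=y-x satisfy P_1=(y+x)P_2, so their 3×3 Sylvester matrix has determinant Res(P_1,P_2)=0 and rank d_1=2; here a=q_d_2^d_1-d_2+1=1 and B=y+x.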
Let d_x, d, δ_x, δ∈ℕ^* and P, Q∈ K[x,y]∖{0}, deg_x P≤ d_x, deg_y P≤ d, deg_x Q≤δ_x, deg_y Q≤δ. For any series c_0
∈ K[[x]] such that P(x,c_0)=0 and Q(x,c_0)≠ 0, one has that ord_x Q(x,c_0)≤δ_x d+ d_xδ.
Let c_0 be a series as in the statement of Lemma <ref>. We consider the prime ideal ℑ_0:={R(x,y)∈ K[x,y] | R(x,c_0)=0}.
Since ℑ_0≠ (0), dim(K[x,y]/ℑ_0)=trdeg_K Frac(K[x,y]/ℑ_0)≤ r. But, in Frac(K[x,y]/ℑ_0), the elements x_1,…,x_r are algebraically independent (if not, we would have T(x_1,…,x_r)=0 for some non-trivial T∈ K[X], i.e. T(x_1,…,x_r)∈ℑ_0, a contradiction). Thus, ℑ_0 is a height one prime ideal of the factorial ring K[x,y]. It is generated by an irreducible polynomial P_0(x,y)∈ K[x,y]. We set d_x,0:=deg_x P_0 and d_y,0:=deg_y P_0. Note also that, by factoriality of K[x,y], P_0 is also irreducible as an element of K(x)[y].
Let P be as in the statement of Lemma <ref>. One has that P=SP_0 for some S∈ K[x,y]. Hence d_x,0≤ d_x and d_y,0≤ d. Let Q∈ K[x,y] be such that Q(x,c_0)≠ 0 with deg_x Q≤δ_x, deg_y Q≤δ. So P_0 and Q are coprime in K(x)[y]. Their resultant R(x) is nonzero. One has the following Bézout relation in K[x][y]:
A(x,y)P_0(x,y)+B(x,y)Q(x,y)=R(x).
We evaluate at y=c_0:
0+B(x,c_0)Q(x,c_0)=R(x).
But, by Lemma <ref>, deg_x R ≤ d_y,0δ_x+ δ d_x,0≤ dδ_x+ δ d_x. Hence, one has that:
ord_x Q(x,c_0)≤ ord_x R ≤ deg_x R≤ dδ_x+ δ d_x.
Let i, d_x, d, δ_x, δ∈ℕ, d≥ 2, δ≥ 1. There exists ω(i,d_x, d, δ_x, δ)∈ℕ minimal such that:
for any j=0,…,i, given c_j=∑_n∈^r c_j,nx^n∈ K[[x]] power series
satisfying some equations P_j(x,c_0,…,c_j)=0
where P_j∈ K[x,z_0,z_1,…,z_j ]∖{0}, deg_x P_j≤ d_x,
deg_z_k P_j≤ d for k=0,…,j, and P_j(x,c_0,…,c_j-1,z_j)≢0, and given Q_i∈ K[x,z_0,z_1,…,z_i ]∖{0}, deg_x Q_i≤δ_x, deg_z_j Q_i≤δ for j=0,…,i a polynomial
such that Q_i(x,c_0,c_1,…,c_i)≠ 0, one has that ord_x Q_i(x,c_0,c_1,…,c_i) ≤ω(i,d_x, d, δ_x, δ). Moreover, for δ≥ 3:
ω(i,d_x, d, δ_x, δ) ≤ (2·3^d^i-1+⋯+d^2+d+1 -2^i 3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x δ^d^i + 2^i·3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 δ_x δ^d^i-1.
So, for d≥ 3:
ω(i,d_x, d, d_x, d)≤ 2·3^d^i-1+⋯+d^2+d+1 d_x d^d^i+⋯+d^2+d+1.
Finally, for any ε>0, there is δ_ε such that, for δ≥δ_ε:
ω(i,d_x, d, δ_x, δ) ≤ (2·(2+ε)^d^i-1+⋯+d^2+d+1 - (1+ε)^i·(2+ε)^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x δ^d^i + (1+ε)^i·(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 δ_x δ^d^i-1,
and for d≥δ_ε:
ω(i,d_x, d, d_x, d) ≤ 2·(2+ε)^d^i-1+⋯+d^2+d+1 d^d^i+d^i-1+⋯+d^2+d+1 d_x.
We proceed by induction on i∈ℕ, the case i=0 being Lemma <ref> where we set d^i-1+⋯+d^2+d+1:=0, d^i-1+⋯+d^2+d+2:=d^i-1+⋯+d^2+d+1+1=1 and d^i-1+⋯+d^2+d-(i-1):=0 and where we get:
ord_x Q_0(x,c_0)≤δ_x d+ d_xδ.
Suppose that the property holds until some rank i-1≥ 0, and consider polynomials P_i and Q_i as in the statement of the theorem. Let R_1 be the resultant of P_i and Q_i with respect to z_i, and the following Bézout identity according to Lemma <ref> (where x there stands for x or z_j, j=0,..,i-1, here):
A_1P_i+B_1Q_i=R_1.
There are two cases. If R_1(x,c_0,…,c_i-1)≠ 0, since R_1∈ K[x,z_0,…,z_i-1] with deg_x R_1≤ d_xδ+δ_x d, deg_z_j R_1≤ 2dδ for j=1,…,i-1, we deduce from the induction hypothesis that ord_x R_1(x,c_0,…,c_i-1)≤ω(i-1,d_x,d,d_xδ+δ_x d, 2dδ).
So, by the Bézout identity: ord_x Q_i(x,c_0,…,c_i)≤ ord_x R_1(x,c_0,…,c_i-1) ≤ω(i-1,d_x,d,d_xδ+δ_x d, 2dδ). If R_1(x,c_0,…,c_i-1)=0, then B_1(x,c_0,…,c_i-1,c_i)=0. There are several sub-cases.
If R_1(x,c_0,…,c_i-1)=0, then there exist A,B∈ K[x,z_0,…,z_i] such that B(x,c_0,…,c_i-1,c_i)=0, B(x,c_0,…,c_i-1,z_i)≢0 and
A(x,c_0,…,c_i-1,z_i)P_i(x,c_0,…,c_i-1,z_i)+B(x,c_0,…,c_i-1,z_i) Q_i(x,c_0,…,c_i-1,z_i)=0
with deg_x B≤ d_xδ+δ_x(d-1), deg_z_j B≤ (2d-1)δ for j=1,…,i-1, and deg_z_i B≤ d-1.
If B_1(x,c_0,…,c_i-1,z_i)≢0, we take A=A_1 and B=B_1, noticing by Lemma <ref> that deg_x B_1≤ d_xδ+δ_x(d-1), deg_z_j B_1≤ (2d-1)δ for j=1,…,i-1, and deg_z_i B_1≤ d-1.
IfB_1(x,c_0,…,c_i-1,z_i)≡ 0, necessarilyA_1(x,c_0,…,c_i-1,z_i)≡ 0.
Let us denote P̃_i:=P_i(x,c_0,…,c_i-1,z_i) and Q̃_i:=Q_i(x,c_0,…,c_i-1,z_i), hence P̃_i,Q̃_i∈ K[x,c_0,…,c_i-1][z_i], with degrees d̃ and δ̃ in z_i respectively. Note that d̃≥ 1 and δ̃≥ 1 (if not, R_1(x,c_0,…,c_i-1)≠ 0). Let M_P̃_i,Q̃_i be the Sylvester matrix of P̃_i and Q̃_i, and d̃+δ̃-p its rank. Hence, p≥ 1. Suppose that p=1. Let us denote by M'_P̃_i,Q̃_i the matrix of cofactors of M_P̃_i,Q̃_i, and by ^tM'_P̃_i,Q̃_i its transpose. At least one of the columns of ^tM'_P̃_i,Q̃_i is not zero. Since we have that M_P̃_i,Q̃_i·^tM'_P̃_i,Q̃_i=0, this column determines a non-trivial relation ÃP̃_i+B̃Q̃_i=0 where the coefficients of Ã,B̃ are given by the coefficients of this column. Moreover, B̃(x,c_0,…,c_i-1,c_i)=0 since P̃_i(x,c_0,…,c_i-1,c_i)=0 and Q̃_i(x,c_0,…,c_i-1,c_i)≠ 0, and B̃(x,c_0,…,c_i-1,z_i)≢0 (if not, we would have Ã(x,c_0,…,c_i-1,z_i)≡ 0 since P̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients of B̃ are homogeneous polynomial formulas in δ̃ coefficients of P̃_i and d̃-1 coefficients of Q̃_i. Lifting these formulas to K[x,z_0,…,z_i-1,z_i] by replacing the c_j's by the z_j's, we obtain A and B with deg_x B≤ d_xδ̃+δ_x(d̃-1), deg_z_j B≤ dδ̃+δ(d̃-1) for j=1,…,i-1, and deg_z_i B≤ d̃-1. We conclude since δ̃≤δ and d̃≤ d.
Suppose that p≥ 2. The columns corresponding to the coefficients of the z_i^k P̃_i's, k=0,…,δ̃-1, are linearly independent (since they form an upper triangular system). We complete them with d̃-p columns corresponding to the coefficients of the z_i^k Q̃_i to a maximal linearly independent family. There is a non-zero minor, say Δ, of maximal order δ̃+d̃-p of this family. Proceeding as in Lemma <ref>, there is a non-zero vector V in the kernel of M_P̃_i,Q̃_i whose coefficients are minors of order δ̃+d̃-p. More precisely, except for Δ, the other minors are obtained by replacing a column of Δ by the corresponding part of another column of M_P̃_i,Q̃_i. Hence, they consist of either d̃-p+1 columns with coefficients of Q̃_i and δ̃-1 columns with coefficients of P̃_i, or d̃-p columns with coefficients of Q̃_i and δ̃ columns with coefficients of P̃_i. We translate the relation M_P̃_i,Q̃_i.V=0 to a non-trivial relation
ÃP̃_i+B̃Q̃_i=0
where the coefficients of Ã,B̃ are given by the coefficients de V. Moreover,
B̃(x,c_0,…,c_i-1,c_i)=0 since P̃_i(x,c_0,…,c_i-1,c_i)=0 and Q̃_i(x,c_0,…,c_i-1,c_i)≠ 0, and B̃(x,c_0,…,c_i-1,z_i)≢0 (if not, we would have Ã(x,c_0,…,c_i-1,z_i)≡ 0 since
P̃_i(x,c_0,…,c_i-1,z_i)≢0). The coefficients of B̃ are homogeneous polynomial formulas in at most δ̃ coefficients of P̃_i and d̃-p+1 coefficients of Q̃_i. Lifting these formulas to K[x,z_0,…,z_i-1,z_i] by replacing the c_j's by the z_j's, since p≥ 2, we obtain A and B with deg_x B≤ d_xδ̃+δ_x(d̃-1), deg_z_j B≤ dδ̃+δ(d̃-1) for j=1,…,i-1, and deg_z_i B≤ d̃-1. We conclude since δ̃≤δ and d̃≤ d.
We denote by B_1 the polynomial B of the previous lemma. In any case, we are in position to replace P by B_1, with deg_x B_1≤ d_xδ+δ_x(d-1), deg_z_j B_1≤ (2d-1)δ for j=1,…,i-1, and deg_z_i B_1≤ d-1. We obtain another Bézout identity:
A_2B_1+B_2Q_i=R_2
with R_2 the resultant of B_1 and Q_i with respect to z_i, deg_x R_2≤ (d_xδ+δ_x(d-1))δ+δ_x(d-1) = d_xδ^2+δ_x((d-1)δ+(d-2)+1), likewise, for j=1,…,i-1, deg_z_j R_2≤ dδ^2+δ((d-1)δ+(d-2)+1).
Moreover,
deg_x B_2 ≤ (deg_x B_1)δ+δ_x(deg_z_i B_1 -1) ≤ (d_xδ+δ_x(d-1))δ+δ_x(d-1-1)=d_xδ^2+δ_x(δ(d-1)+d-2),
and likewise, for j=1,…,i-1,
deg_z_j B_2 ≤ (deg_z_j B_1)δ+δ(deg_z_i B_1-1) ≤ (2d-1)δ^2+(d-2)δ = dδ^2+δ(δ(d-1)+d-2),
and
deg_z_i B_2≤ deg_z_i B_1-1≤ d-2.
If R_2(x,c_0,…,c_i-1)≠ 0, we proceed as before Lemma <ref>, and we obtain:
ord_x Q_i(x,c_0,…,c_i)≤ ord_x R_2(x,c_0,…,c_i-1)≤ω(i-1,d_x, d, d_xδ^2+δ_x((d-1)δ+(d-2)+1), dδ^2+δ((d-1)δ+(d-2)+1)).
Note that this new bound for ord_x Q_i(x,c_0,…,c_i-1,c_i) has increased with respect to the previous one, since d≤ (d-1)(δ+1)=(d-1)δ+(d-2)+1 for any d≥ 2, δ≥ 1. At worst, one can have repeatedly the second case with successive Bézout identities:
A_kB_k-1+B_kQ_i=R_k
with R_k(x,c_0,…,c_i-1)=0 where for j=0,…,i-1,
deg_x R_k ≤ d_xδ^k+δ_x(δ^k-1(d-1)+δ^k-2(d-2)+⋯+δ(d-(k-1))+(d-k)+1),
deg_z_j R_k ≤ dδ^k+δ(δ^k-1(d-1)+δ^k-2(d-2)+⋯+δ(d-(k-1))+(d-k)+1),
and with
deg_x B_k ≤ d_xδ^k+δ_x(δ^k-1(d-1)+δ^k-2(d-2)+⋯+δ(d-k+1)+(d-k)),
deg_z_j B_k ≤ dδ^k+δ(δ^k-1(d-1)+δ^k-2(d-2)+⋯+δ(d-k+1)+(d-k)),
deg_z_i B_k ≤ d-k.
The greatest bound is obtained for k=d-1, for which B_d-1 has deg_z_i B_d-1= 1. In this case, B_d-1 has c_i as unique root and Q_i(x,c_0,…,c_i-1,c_i)≠ 0, so R_d(x,c_0,…,c_i-1)≠ 0. We set for n,m∈ℕ^*:
ϕ(n,m) := (n-1)m^n-1+(n-2)m^n-2+⋯+m +1
= ((n-1)m^n-2+(n-2)m^n-3+⋯+2m +1)m+1
= [(n-1)m^n+1-nm^n+m^2-m+1]/(m-1)^2 for m≠ 1.
We have for j=0,…,i-1:
deg_x R_d ≤ d_xδ^d+δ_x ϕ(d,δ),
deg_z_j R_d ≤ dδ^d+δ ϕ(d,δ).
By the induction hypothesis, ord_x R_d(x,c_0,…,c_i-1) is bounded by
ω(i-1,d_x,d, d_xδ^d+δ_xϕ(d,δ), dδ^d+δϕ(d,δ)).
We get the corresponding expected bound:
ord_x Q_i(x,c_0,…,c_i-1,c_i)≤ω(i-1,d_x,d, d_xδ^d+δ_xϕ(d,δ), dδ^d+δϕ(d,δ)),
which proves the existence of ω(i,d_x,d, δ_x, δ) with
ω(i,d_x,d,δ_x,δ)≤ω(i-1,d_x,d, d_xδ^d+δ_xϕ(d,δ), dδ^d+δϕ(d,δ)).
To bound ω(i,d_x,d,δ_x,δ), we need to find estimates for ϕ.
First step: for n,m≥ 2,
ϕ(n,m)≤ (n-1)m^n.
Indeed, ϕ(n,m)=[(n-1)m^n+1-nm^n+m^2-m+1]/(m-1)^2. For n≥ 2, -nm^n+m^2-m+1≤ 0, so ϕ(n,m)≤(n-1)m^n+1/(m-1)^2, and (n-1)m^n+1/(m-1)^2≤ (n-1) m^n⇔ m/(m-1)^2≤ 1 ⇔ m^2-3m+1≥ 0, whose discriminant is Δ=5 and whose larger root is (3+√(5))/2< 3. This holds for m≥ 3. For m=2, we compute:
ϕ(n,2)=(n-1)2^n+1-n2^n+3≤ (n-1)2^n⇔ 3≤ 2^n
This holds for n≥ 2. On the other hand, this does not hold for m=1 and n≥ 3.
Second step: for n≥ 3, m≥ 2,
ϕ(n,m)≤ (2n-3)m^n-1
Indeed, from the first step:
[ ϕ(n,m):=(n-1)m^n-1+(n-2)m^n-2+⋯+m +1 = (n-1)m^n-1+ϕ(n-1,m); ≤ (n-1)m^n-1+(n-2)m^n-1; ≤ (2n-3)m^n-1 ]
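(For example, ϕ(3,2)=2· 4+2+1=11, while the first and second estimates give respectively (n-1)m^n=16 and (2n-3)m^n-1=12.)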
Let ε>0. For n≥ 2, since -nm^n+m^2-m+1≤ 0, the inequality
ϕ(n,m)≤ (1+ε)(n-1)m^n-1
is implied by (n-1)m^n+1/(m-1)^2≤ (1+ε)(n-1)m^n-1⇔m^2/(m-1)^2≤ 1+ε.
This holds for m large enough, say for m≥ m_ε, since m^2/(m-1)^2 decreases to 1.
Now, let us prove the estimates for ω(i,…) by induction on i. For i=0, ω(0,…)≤ dδ_x+ δ d_x by Lemma <ref>. Suppose that the estimates (<ref>), (<ref>), (<ref>) and (<ref>) hold until some i≥ 0. By (<ref>):
ω(i+1,d_x,d,δ_x,δ)≤ω(i,d_x,d, d_xδ^d+δ_xϕ(d,δ), dδ^d+δϕ(d,δ))
≤ω(i,d_x,d, d_xδ^d+δ_x(2d-3)δ^d-1, dδ^d+δ(2d-3)δ^d-1)
≤ω(i,d_x,d, d_xδ^d+2δ_x dδ^d-1, dδ^d+2δ dδ^d-1)
≤ω(i,d_x,d, d_xδ^d+2δ_x dδ^d-1, 3dδ^d)
≤ (2.3^d^i-1+⋯+d^2+d+1 -2^i3^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x(3d^̣d)^d^i+ 2^i.3^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x
2d^̣d-1) (3d^̣d)^d^i-1
≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
2^i.3^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1
d_x^̣d^i+1+
2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
1/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1
d_x^̣d^i+1+
2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2/32^i3^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.3^d^i+d^i-1+⋯+d^2+d+1 -2^i+13^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
2^i+1.3^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1.
This proves (<ref>), and also (<ref>) by letting δ≤ d and δ_x≤ d_x.
Similarly, given ε>0, we use (<ref>) and (<ref>) with δ≥δ_ε and, since d-1<d, we get:
ω(i+1,d_x,d,δ_x,δ)≤ω(i,d_x,d, d_xδ^d+δ_x(1+ε)dδ^d-1, (2+ε)dδ^d)
≤ (2.(2+ε)^d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i-1+⋯+d^2+d-(i-1)) d^d^i-1+⋯+d^2+d+1 d_x((2+ε)d^̣d)^d^i+ (1+ε)^i.(2+ε)^d^i-1+⋯+d^2+d-(i-1) d^d^i-1+⋯+d^2+d+2 (d_x^̣d+_̣x (1+ε)d^̣d-1) ((2+ε)d^̣d)^d^i-1
≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)-1 d^d^i+d^i-1+⋯+d^2+d+1
d_x^̣d^i+1+
(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
1/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)/(2+ε)(1+ε)^i(2+ε)^d^i+d^i-1+⋯+d^2+d-(i-1)) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
(1+ε)^i+1.(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1
≤ (2.(2+ε)^d^i+d^i-1+⋯+d^2+d+1 -(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i) d^d^i+d^i-1+⋯+d^2+d+1 d_x^̣d^i+1+
(1+ε)^i+1(2+ε)^d^i+d^i-1+⋯+d^2+d-i d^d^i+d^i-1+⋯+d^2+d+2_̣x^̣d^i+1-1.
This proves (<ref>), and also (<ref>) by letting δ≤ d and δ_x≤ d_x.
§ TOTAL RECONSTRUCTION OF VANISHING POLYNOMIALS FOR SEVERAL ALGEBRAIC SERIES.
In the present section, we provide several improvements of <cit.>.
§.§ Total reconstruction in the algebraic case.
* Let ℱ' and 𝒢' be two strictly increasing finite sequences of pairs (k,j)∈(ℕ^τ×ℕ)_alex* ordered anti-lexicographically:
(k_1,j_1) ≤_alex* (k_2,j_2)⇔ j_1 < j_2 or (j_1 = j_2 and k_1 ≤_grlexk_2).
We suppose additionally that (k_1,j_1) ≥_alex*(0,1)>_alex*(k_2,j_2) for any (k_1,j_1)∈ℱ' and (k_2,j_2)∈𝒢' (thus the elements of 𝒢' are ordered pairs of the form (k_2,0), and those of ℱ' are of the form (k_1,j_1), j_1≥ 1).
We denote d_y'':=max{j, (k,j)∈ℱ'} and d_ s':=max{|k|, (k,j)∈ℱ'∪𝒢'}.
* We say that a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] is algebraic relatively to (ℱ',𝒢') if there exists a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0.
* Let d_y'', d_ s'∈, d_y''≥ 1. We say that a series y_0' ∈ K[[s]] is algebraic of degrees bounded by d_y'' and d_ s' if it is algebraic relatively to (ℱ',𝒢') where ℱ' and 𝒢' are the complete sequences of indices (k,j)∈(ℕ^τ×ℕ)_alex* with j≤ d_y'' and |k|≤ d_ s'.
Let us consider a series Y_0'=∑_m∈ℕ^τ C_ms^m∈ K[(C_m)_m∈ℕ^τ][[s]] where s and the C_m's are variables. We denote the multinomial expansion of the jth power Y_0'^j of Y_0' by:
Y_0'^j=∑_m∈ℕ^τ C_m^(j)s^m.
where C_m^(j)∈ K[(C_m)_m∈ℕ^τ].
For instance, one has that C_0^(j)=C_0^j. For j=0, we set Y_0'^0:=1. More generally, for any m and any j≤ |m|, C_m^(j) is a homogeneous polynomial of degree j in the C_k's for k∈ℕ^τ, k≤m, with coefficients in ℕ^*.
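For example, for τ=1 and j=2, one has C_m^(2)=∑_k=0^m C_kC_m-k, e.g. C_2^(2)=2C_0C_2+C_1^2, which is indeed homogeneous of degree 2 with coefficients in ℕ^*.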
Now suppose we are given a series y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}. For any j∈ℕ, we denote the multinomial expansion of y_0'^j by:
y_0'^j=∑_m∈ℕ^τ c_m^(j)s^m.
So, c_m^(j)=C_m^(j)(c_0,…,c_m).
Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0}.
*
Given a pair (k,j)∈ℕ^τ×ℕ, we call Wilczynski vectorV_k,j (associated to y_0') the infinite vector with components γ_m^k,j with m∈ℕ^τ ordered with ≤_grlex:
- if j≥ 1:
V_k,j:=(γ_m^k,j)_m∈ℕ^τ with γ_m^k,j={[ =c_m-k^(j) if m≥k; =0 otherwise ].
- otherwise: 1 in the kth position and 0 for the other coefficients,
V_k,0:=(0,…,1,0,0,…,0,…).
So γ_m^k,j is the coefficient of s^m in the expansion of s^ky_0'^j.
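For instance (τ=1), if y_0'=1+s, then V_0,1=(1,1,0,0,…), V_1,1=(0,1,1,0,…) (the coefficients of s y_0'), V_0,2=(1,2,1,0,…) (the coefficients of (1+s)^2=y_0'^2), while V_2,0=(0,0,1,0,…).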
* Let ℱ' and 𝒢' be two sequences as in Definition <ref>. We associate to ℱ', 𝒢' and y_0' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,j:
M_ℱ',𝒢':=(V_k,j)_(k,j)∈ℱ'∪𝒢' ,ℱ'∪𝒢' being ordered by ≤_alex* as in Definition <ref>.
We also define the reduced Wilczynski matrix, M_ℱ',𝒢'^red: it is the matrix obtained from M_ℱ',𝒢' by removing the columns indexed in 𝒢', and also removing the corresponding rows (suppress the kth row for any (k,0)∈𝒢'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in 𝒢'. For (i,j)∈ℱ', we also denote by V_i,j^red the corresponding vectors obtained from V_i,j by suppressing the kth row for any (k,0)∈𝒢' and we call them reduced Wilczynski vectors.
The following result is <cit.>:
The series y_0' is algebraic relatively to (ℱ',𝒢') if and only if all the minors of order |ℱ'∪𝒢'| of the Wilczynski matrix M_ℱ',𝒢' vanish, or also if and only if all the minors of order |ℱ'| of the reduced Wilczynski matrix M_ℱ',𝒢'^red vanish.
Let us give an outline of the reconstruction process of <cit.>. Let ℱ' and 𝒢' be two sequences as in Definition <ref> and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]]∖{0} be algebraic relatively to (ℱ',𝒢'). Our purpose is to describe the K-vector space whose non-zero elements are the polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=0. The components of the infinite vector computed as M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢' are exactly the coefficients of the expansion of P(s,y_0') in K[[s]].
Let us now remark that, in the infinite vector M_ℱ',𝒢'· (a_k,j)_(k,j)∈ℱ'∪𝒢', if we remove the components indexed by k for (k,0)∈𝒢', then we get exactly the infinite vector M_ℱ',𝒢'^red· (a_k,j)_(k,j)∈ℱ'. The vanishing of the latter means precisely that the rank of M_ℱ',𝒢'^red is less than |ℱ|.
Conversely, if the columns of M_ℱ',𝒢'^red are dependent for certain ℱ' and 𝒢', we denote by (a_k,j)_(k,j)∈ℱ' a corresponding sequence of coefficients of a nontrivial vanishing linear combination of the column vectors. Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are uniquely determined as follows:
a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) .
We consider a maximal family ℱ”⊊ℱ' such that the corresponding reduced Wilczynski vectors are K-linearly independent. Proceeding as in Lemma 3.7 in <cit.>, ℱ” is such a family if and only if, in the reduced Wilczynski matrix M_ℱ',𝒢'^red, there is a nonzero minor det(A), where A has columns indexed in ℱ” and lowest row with index m such that |m|≤ 2d_s'd_y'', and ℱ” is maximal with this property. Moreover, among such A's, we take one whose lowest row has an index minimal for ≤_grlex, and we denote the latter index by p̂.
For any (k_0,j_0)∈ℱ'∖ℱ”, the family of reduced Wilczynski vectors (V_k,j^red) with (k,j)∈ℱ”∪{(k_0,j_0)} is K-linearly dependent. There is a unique relation:
V_k_0,j_0^red =∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red with λ_k,j^k_0,j_0∈ K.
We consider the restriction of M_ℱ',𝒢'^red to the rows of A. For these rows, by Cramer's rule, we reconstruct the linear combination (<ref>).
The coefficients λ_k,j^k_0,j_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of this restricted matrix, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2d_s'd_y''.
Let P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0}. One has P(s,y_0')=0 if and only if (<ref>) holds as well as:
∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0 V_k_0,j_0^red=0
⇔∑_(k,j)∈ℱ”a_k,j V_k,j^red+∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0(∑_(k,j)∈ℱ”λ_k,j^k_0,j_0 V_k,j^red)=0
⇔∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red=0
⇔∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0,
Let ℱ',𝒢',d_s',d_y'', y_0',ℱ” be as above. Then, the K-vector space of polynomials P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y'] such that P(s,y_0')=0 is the set of polynomials such that
∀ (k,j)∈ℱ”, a_k,j =-∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0,
and
∀ (k,0)∈𝒢', a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j) ,
where the λ_k,j^k_0,j_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2d_s'd_y''.
Note that the set of polynomials P(s,y')∈ K[s,y'] with support in ℱ'∪𝒢' such that P(s,y_0')=0 is a K-vector space of dimension |ℱ'|-|ℱ”|≥ 1.
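In practice, the reconstruction above amounts to linear algebra on a sufficiently deep truncation of the reduced Wilczynski matrix. The following sketch (Python with the sympy library, in one variable s only, so τ=1 and ≤_grlex is the usual order; the helper col, the truncation order N and the chosen supports are illustrative assumptions, not data from the text) recovers, up to a scalar, the relation (1-s)y'-1=0 for y_0'=1/(1-s).
\begin{verbatim}
# Toy reconstruction: y0' = 1/(1-s) = 1 + s + s^2 + ...,
# F' = {(0,1), (1,1)}, G' = {(0,0)}.
from sympy import Matrix, Rational

N = 8                                   # number of rows kept in the truncation
y0 = [Rational(1)] * N                  # coefficients c_m of y0'

def col(k, j):
    """Coefficients of s^k * y0'^j up to order N (powers by convolution)."""
    c = [Rational(1)] + [Rational(0)] * (N - 1)          # y0'^0 = 1
    for _ in range(j):
        c = [sum(c[i] * y0[m - i] for i in range(m + 1)) for m in range(N)]
    return [c[m - k] if m >= k else Rational(0) for m in range(N)]

F = [(0, 1), (1, 1)]
G = [(0, 0)]
keep = [m for m in range(N) if (m, 0) not in G]          # delete rows indexed in G'
M_red = Matrix([[col(k, j)[m] for (k, j) in F] for m in keep])
a = M_red.nullspace()[0]                # (a_{0,1}, a_{1,1}) up to scaling
a00 = -sum(a[i] * col(k, j)[0] for i, (k, j) in enumerate(F))
# Up to a scalar, the reconstructed relation is a00 + a_{0,1} y' + a_{1,1} s y' = 0,
# i.e. (1 - s) y' - 1 = 0, which indeed vanishes at y0' = 1/(1-s).
print(a.T, a00)
\end{verbatim}
In the general multi-index case one would enumerate the rows m∈ℕ^τ in the ≤_grlex order up to |m|≤ 2d_s'd_y'', as in the discussion above.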
§.§ Total algebraic reconstruction in the non-homogeneous case.
Let ℱ',𝒢', d_y'',d_ s' be as in Definition <ref>.
§.§.§ First case.
Let y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic relatively to (ℱ',𝒢').
Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]]
which satisfy some equations P_j(s,y'_0,…,y'_j)=0
where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m.
Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_i ]∖{0 } with _sR≤ d_s,_z_kR≤ d' for k=0,…,i.
We want to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z' and, subsequently, to reconstruct all such possible P's.
Let V be the infinite vector with components the coefficients of z', and V^red the corresponding reduced vector as in Definition <ref>. For ℱ” as in the previous section, we have P(s,y_0')=z' if and only if:
∑_(k,j)∈ℱ”( a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0)V_k,j^red= V^red.
We want to examine when the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent. Let N^red be the infinite matrix with columns (V_k,j^red)_(k,j)∈ℱ” and V^red.
The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of maximal order of N^red up to the row p with: |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish.
The vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent if and only if all the minors of N^red of maximal order vanish: see <cit.>.
Conversely, we suppose that the vectors are linearly independent. So, there is a minor of N of maximal order which is nonzero. Let p be the smallest multi-index for ≤_grlex such that there is such a nonzero minor of N^red of maximal order with lowest row of index p. Hence, there is a subminor of it based on the columns indexed in ℱ” which is nonzero, say (B). The lowest row of B is at most p. So, by minimality of p̂ (see before (<ref>) in the previous section), p≥_grlexp̂. If p=p̂, then | p|≤ 2d_s'd' and we are done. If p>_grlexp̂, let us denote by p̃ the predecessor of p for ≤_grlex. Then p̃≥_grlexp̂.
For any multi-index m∈^r, denote by N_m^red, V_k,j,m^red,V_m^red the truncations up to the row m of N^red,V_k,j^red,V^red respectively. By definition of p, the rank of the matrix N^red_p is |ℱ”|+1, whereas the rank of N^red_p̃ is |ℱ”|. There exists a nonzero vector ((a_i,j)_(i,j)∈ℱ”,-a) of elements of K such that
N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -a ])= 0,
where a can be chosen to be 1 since the vectors (V_k,j,p̃^red)_(k,j)∈ℱ” are independent. The components of the resulting vector N_p̃^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ]) are exactly the coefficients e_k, (k,0)∉𝒢' and k≤_grlexp̃, of the expansion of ∑_(i,j)∈ℱ”a_i,j s^i (y_0')^j-z'.
By computing the coefficients a_k,0 for (k,0)∈𝒢' as:
a_k,0=-∑_(i,j)∈ℱ”, k>i a_i,jc_k-i^(j)+f_k,
where f_k denotes the coefficient of s^k in z',
we obtain the vanishing of the first terms of Q(s,y_0',…,y'_i):=∑_(i,j)∈ℱ”∪𝒢' a_i,js^i(y_0')^j-z' up to p̃. So, w_s(Q(s,y_0',…,y'_i))≥_grlexp and, therefore, ord_s(Q(s,y_0',…,y'_i))≥ |p|.
On the contrary,
we have:
N_p^red · ( [ (a_i,j)_(i,j)∈ℱ”; -1 ])≠ 0.
From (<ref>) and (<ref>), we deduce that the coefficient e_p of s^p in the expansion of
∑_(i,j)∈ℱ”a_i,j x^i (y_0')^j-z' is nonzero.
Observe that this term of the latter series does not overlap with the terms of ∑_(i,0)∈𝒢'a_i,0 s^i since (p,0)∉𝒢'. Therefore, w_s(Q(s,y_0',…,y'_i))=p. In particular, Q(s,y_0',…,y'_i)≠ 0, so the bound (<ref>) in Theorem <ref> applies:
|p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1 .
Let us return to (<ref>). Let A be the square matrix defined after (<ref>). For any (k,j)∈ℱ”, we denote by A_k,j the matrix deduced from A by substituting the corresponding part of V^red instead of the column indexed by (k,j). Equality (<ref>) holds if and only if the vectors (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent, and by Cramer's rule, one has:
∀ (k,j)∈ℱ”, a_k,j +∑_(k_0,j_0)∈ℱ'∖ℱ”a_k_0,j_0λ_k,j^k_0,j_0= det(A_k,j) /det( A).
Recall that one determines that (V_k,j^red)_(k,j)∈ℱ” and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to Lemma <ref>. Finally, the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows:
a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k ,
where f_k denotes the coefficient of s^k in z'.
As a conclusion, we obtain the affine space of P(s,y')∈ K[s,y']∖{0} such that P(s,y_0')=z' as a parametric family of its coefficients with free parameters the a_k_0,j_0's for (k_0,j_0)∈ℱ'∖ℱ”.
§.§.§ Second case.
Let δ_s'∈ℕ and y_0'=∑_m∈ℕ^τ c_ms^m∈ K[[s]] be algebraic of degrees d_y'' and δ_s', but not algebraic relatively to (ℱ',𝒢').
Let i, d_s, d' ∈ℕ, d'≥ 3, d_ s'≤ d_ s and d_y''≤ d'. For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]]
which satisfy some equations P_j(s,y'_0,…,y'_j)=0
where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j. In particular, c_m=c_0,m for any m.
Let z'=R(s,y'_0,…,y'_i)∈ K[[s]]∖{0}, where R∈ K[s,z_0,z_1,…,z_i ]∖{0} with deg_s R≤ d_s, deg_z_k R≤ d' for k=0,…,i.
As in the previous section, our purpose is to determine when there is a polynomial P(s,y')=∑_(k,j)∈ℱ'∪𝒢' a_k,js^ky'^j∈ K[s,y']∖{0} such that P(s,y_0')=z'. Note that such a polynomial is necessarily unique, since y_0' is not algebraic relatively to (ℱ',𝒢').
We consider the corresponding reduced Wilczynski matrix M_ℱ',𝒢'^red. Proceeding as in Lemma 3.7 in <cit.> and using Lemma <ref>, there is a nonzero minor det(B) of maximal order where the lowest row of B is indexed by m such that |m|≤(δ_s'+ d_s')d_y''.
We resume the notations of the previous section. There is a polynomial P such that P(s,y_0')=z' if and only if the vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are K-linearly dependent, since the vectors (V_k,j^red)_(k,j)∈ℱ' are independent. One determines that (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent by examining the dependence of the finite truncation of these vectors according to the following lemma.
The vectors (V_k,j^red)_(k,j)∈ℱ' and V^red are linearly dependent if and only if, in the corresponding matrix denoted by N^red, all the minors of maximal order up to the row p with |p| ≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1 vanish.
The proof is analogous to that of Lemma <ref>, also using Theorem <ref>.
We proceed as in the previous section. For any (k,j)∈ℱ', we denote by B_k,j the matrix deduced from B by substituting the corresponding part of V^red instead of the column indexed by (k,j). If the condition of the previous lemma holds, by Cramer's rule, one has:
∀ (k,j)∈ℱ', a_k,j = det(B_k,j) /det( B).
Then it suffices to note that the remaining coefficients a_k,0 for (k,0)∈𝒢' are each uniquely determined as follows:
a_k,0=-∑_(i,j)∈ℱ', i≤k a_i,jc_k-i^(j)+f_ k ,
where f_k denotes the coefficient of s^k in z'.
§.§ Total algebraic reconstruction with several algebraic series.
Let i, d_s, d' ∈ℕ, d'≥ 3.
For any j=0,…,i, we consider power series y_j'=∑_m∈^τ c_j,ms^m∈ K[[s]]
which satisfy some equations P_j(s,y'_0,…,y'_j)=0
where P_j∈ K[s,z_0,z_1,…,z_j ]∖{0 }, P_j(s,y_0',…,y_j-1',z_j)≢0, _sP_j≤ d_s,_z_kP_j≤ d' for k=0,…,j.
Let 𝒦' and ℒ', 𝒦'≠∅, be two strictly increasing finite sequences of pairs (k,l)∈(ℕ^τ×ℕ^i+1) ordered anti-lexicographically:
(k_1,l_1) ≤_alex* (k_2,l_2)⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2).
We suppose additionally that 𝒦'≥_alex*(0,(0,…,0,1))>_alex*ℒ' (thus the elements of ℒ' are ordered tuples of the form (k,0), and those of 𝒦' are of the form (k,l), |l|≥ 1).
We set d_y'_j':=max{l_j, (k,l)∈𝒦'} for j=0,…,i, and d_ s':=max{|k|, (k,l)∈𝒦'∪ℒ'}. We assume that d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s.
Let us set z=(z_0,…,z_i) and y'=(y_0',…,y_i'). We assume that y'≠0. We want to determine when there is a polynomial P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0} such that P(s,y')=0 and, subsequently, to reconstruct all such possible P's. It is a generalization of Section <ref>.
For any j=0,…,i, for any l_j∈ℕ, we denote the multinomial expansion of y_j'^l_j by:
y_j'^l_j=∑_n_j∈ℕ^τ c_j,n_j^(l_j)s^n_j.
So the coefficient of s^m in y'^l=y_0'^l_0⋯y_i'^l_i is equal to: c_m^(l):=∑_n_0∈^τ,…,n_i∈^τ, n_0+⋯+n_i=m c_0,n_0^(l_0)⋯ c_i,n_i^(l_i).
*
Given an ordered pair (k,l)∈ℕ^τ×ℕ^i+1, we call Wilczynski vectorV_k,l the infinite vector with components γ_m^k,l with m∈ℕ^τ ordered with ≤_grlex:
- if l≥_grlex (0,…,0,1):
V_k,l:= (γ_m^k,l)_m∈ℕ^τ with γ_m^k,l={[ =c_m-k^(l) if m≥k; =0 otherwise ].
- otherwise: 1 in the kth position and 0 for the other coefficients,
V_k,0:=(0,…,1,0,0,…,0,…).
So γ_m^k,l is the coefficient of s^m in the expansion of s^ky'^l.
* Let 𝒦' and ℒ' be two sequences as above. We associate to 𝒦' and ℒ' the (infinite) Wilczynski matrix whose columns are the corresponding vectors V_k,l:
M_𝒦',ℒ':=(V_k,l)_(k,l)∈𝒦'∪ℒ' ,𝒦'∪ℒ' being ordered by ≤_alex* as above.
We also define the reduced Wilczynski matrix, M_𝒦',ℒ'^red: it is the matrix obtained from M_𝒦',ℒ' by removing the columns indexed in ℒ', and also removing the corresponding rows (suppress the kth row for any (k,0)∈ℒ'). This amounts exactly to remove the rows containing the coefficient 1 for some Wilczynski vector indexed in ℒ'. For (i,l)∈𝒦', we also denote by V_i,l^red the corresponding vectors obtained from V_i,l by suppressing the kth row for any (k,0)∈ℒ' and we call them reduced Wilczynski vectors.
There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of order |𝒦'∪ℒ'| of the Wilczynski matrix M_𝒦',ℒ' vanish, or also if and only if all the minors of order |𝒦'| of the reduced Wilczynski matrix M_𝒦',ℒ'^red vanish.
By construction of the Wilczynski matrix M_𝒦',ℒ', the existence of such a polynomial is equivalent to the fact that the corresponding Wilczynski vectors are K-linearly dependent. This is in turn equivalent to the vanishing of all the minors of maximal order of M_𝒦',ℒ'.
Suppose that we are given a nonzero vector (a_k,l)_(k,l)∈𝒦'∪ℒ' such that
M_𝒦',ℒ'·(a_k,l)_(k,l)∈𝒦'∪ℒ'=0.
Observe that, necessarily, the vector (a_k,l)_(k,l)∈𝒦' is also nonzero (since the vectors V_k,0 for (k,0)∈ℒ' are independent). Let us remark that:
M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0
since the latter vector is deduced from the former one by deleting the rows corresponding to (k,0)∈ℒ'. So, the columns of M_𝒦',ℒ'^red are linked, which is equivalent to the vanishing of its minors of maximal order. Conversely, suppose that there exists a nonzero (a_k,l)_(k,l)∈𝒦' such that M_𝒦',ℒ'^red·(a_k,l)_(k,l)∈𝒦'=0.
Then, we can complete the list of coefficients (a_k,l)_(k,l)∈𝒦'∪ℒ' by setting:
a_k,0=- ∑_(i,l)∈𝒦', i≤k a_i,l c_k-i^(l).
There exists a nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y' if and only if all the minors of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m with:
|m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1,
vanish.
The direct part follows from the previous lemma. Suppose that there is no nonzero polynomial with support included in 𝒦'∪ℒ' which vanishes at y'. So there is a nonzero minor of the reduced Wilczynski matrix M_𝒦',ℒ'^red of order |𝒦'| and with lowest row indexed by m that we assume to be minimal for ≤_grlex. Reasoning as in the proof of Lemma <ref>, we obtain a nonzero polynomial Q(s,z_0,…,z_i) with Supp(Q)⊆𝒦'∪ℒ', such that Q(s,y')≠ 0, and with _s(Q(s,y'))≥ |m|. Since d_y'_j'≤ d' for j=0,…,i, and d_ s'≤ d_s, by Theorem <ref>, we obtain that:
_s(Q(s,y'))≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1,
which gives the expected result.
Let us suppose that there is a nonzero polynomial P with support included in 𝒦'∪ℒ' which vanishes at y'. Our purpose is to determine the space of all such polynomials.
For this, we consider a maximal family 𝒦”⊊𝒦' such that the corresponding reduced Wilczynski vectors are K-linearly independent. This is equivalent to the fact that, for the matrix consisting of the (V_k,l^red) with (k,l)∈𝒦”, there is a nonzero minor det(A) of maximal order and with lowest row indexed by m with
|m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1.
For any (k_0,l_0)∈𝒦'∖𝒦”, the corresponding family of reduced Wilczynski vectors (V_k,l^red) with (k,l)∈𝒦”∪{(k_0,l_0)} is K-linearly dependent. There is a unique relation:
V_k_0,l_0^red =∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red with λ_k,l^k_0,l_0∈ K.
which can be computed by Cramer's rule based on det(A).
The coefficients λ_k,l^k_0,l_0 of such a linear combination are quotients of homogeneous polynomials with integer coefficients in terms of the entries of these restricted matrices, hence quotients of polynomials in the corresponding c_m's, |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1.
Let z=(z_0,…,z_i), and P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z]∖{0}. One has P(s,y')=0 if and only if (<ref>) holds as well as:
∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0 V_k_0,l_0^red=0
⇔∑_(k,l)∈𝒦”a_k,l V_k,l^red+∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0(∑_(k,l)∈𝒦”λ_k,l^k_0,l_0 V_k,l^red)=0
⇔∑_(k,l)∈𝒦”( a_k,l +∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0)V_k,l^red=0
⇔∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0.
Let 𝒦',ℒ',d_s,d', y',𝒦” be as above. Then, the set of polynomials P(s,z)=∑_(k,l)∈𝒦'∪ℒ' a_k,ls^kz^l∈ K[s,z] such that P(s,y')=0 is the set of polynomials such that
∀ (k,l)∈𝒦”, a_k,l =-∑_(k_0,l_0)∈𝒦'∖𝒦”a_k_0,l_0λ_k,l^k_0,l_0,
and
∀ (k,0)∈ℒ', a_k,0=-∑_(i,l)∈𝒦', i≤k a_i,lc_k-i^(l) ,
where the λ_k,l^k_0,l_0's are computed as in (<ref>) as quotients of polynomials with integer coefficients in the c_m's for |m|≤ 2.3^(d')^i-1+⋯+(d')^2+d'+1
d_s (d')^(d')^i+⋯+(d')^2+d'+1.
Note that the set of polynomials P(s,z)∈ K[s,z] with support in 𝒦'∪ℒ' such that P(s,y')=0 is a K-vector space of dimension |𝒦'|-|𝒦”|≥ 1.
§ RECONSTRUCTION OF AN EQUATION FOR AN ALGEBROID SERIES.
§.§ The reconstruction algorithm
We resume the notations of Section <ref>, in particular Lemma <ref> and after. In particular, recall that τ is the number of variables in s, and so r-τ is the number of variables in t.
Let ℱ and 𝒢 be two strictly increasing sequences of triples (k,l,j)∈ℕ^τ×ℕ^r-τ×ℕ ordered as follows:
(k_1,l_1,j_1) ≤_*alex* (k_2,l_2,j_2):⇔ j_1 < j_2 or (j_1 = j_2 and (k_1,l_1) ≤_alex* (k_2,l_2))
with
(k_1,l_1) ≤_alex* (k_2,l_2):⇔l_1 <_grlexl_2 or (l_1 = l_2 and k_1 ≤_grlexk_2).
We suppose additionally that (k_1,l_1,j_1)≥_*alex*(0,0,1)>_*alex*(k_2,l_2,j_2) for any (k_1,l_1,j_1)∈ℱ and (k_2,l_2,j_2)∈𝒢 (thus the elements of 𝒢 are ordered triples of the form (k_2,l_2,0), and those of ℱ are of the form (k_1,l_1,j_1), j_1≥ 1). Moreover, we assume that there is d∈, d≥ 1, such that j≤ d for any (k,l,j)∈ℱ∪𝒢, and we set d:= max{j | ∃ (k,l,j)∈ℱ∪𝒢}. We say that a series y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n∈ K[[s,t]], c_0,0≠ 0, is algebroid relatively to (ℱ,𝒢) if there exists a polynomial P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j∈ K[[s, t]][y]∖{0} such that P(s,t,y_0)=0.
For any ℱ,𝒢 satisfying Conditions (i), (ii), (iii) of Lemma <ref>, let us denote by
(K[s][[t]][y])_ℱ,𝒢 the subset of polynomials in K[s][[t]][y]∖{0} with support in ℱ∪𝒢.
The purpose of the following discussion is to make more explicit the conditions in Lemma <ref> for the vanishing of a polynomial P∈(K[s][[t]][y])_ℱ,𝒢 for some ℱ,𝒢 corresponding to (i), (ii), (iii) in Lemma <ref>, at a formal power series y_0∈ K[[s]][[t]]. As we have seen in Section <ref>, one can always assume that y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n is such that c_0,0≠ 0.
Let us consider a series Y_0=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,ns^m)t^n = ∑_n∈ℕ^r-τC_n(s) t^n∈ K[(C_m , n)_m∈ℕ^τ, n∈ℕ^r-τ][[s]][[t]] where s, t and the C_m,n's are variables. We denote the multinomial expansion of the jth power Y_0^j of Y_0 by:
Y_0^j=∑_n∈ℕ^r-τ (∑_m∈ℕ^τ C_m,n^(j)s^m)t^n = ∑_n∈ℕ^r-τC_n^(j)(s) t^n
where C_m,n^(j)∈ K[(C_k,l)_k≤m, l≤n] and C_n^(j)(s)∈ K[(C_l(s))_l≤n]⊆ K[(C_k,l)_k≤m, l≤_grlexn][[s]].
We also set Y_0^0:=1.
Now, suppose we are given a series y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m, ns^mt^n∈ K[[s,t]] with c_0,0≠ 0. For any j∈ℕ, we denote the multinomial expansion of y_0^j by:
y_0^j=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m,n^(j)s^mt^n= ∑_n∈ℕ^r-τc_n^(j)(s) t^n.
So, c_m,n^(j)=C_m,n^(j)(c_0,0,…,c_m,n) and c_n^(j)(s)=C_n^(j)(c_0(s),…,c_n(s)). We also set y_0^0:=1.
For a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0}, we denote P(s,t,y)=∑_(k,l,j)∈ℱ∪𝒢 a_k,l,js^kt^ly^j =∑_l∈ℕ^r-τ, j=0,..,d a_l,j(s)t^ly^j.
A series y_0∈ K[[s]][[t]], y_0=∑_m∈ℕ^τ, n∈ℕ^r-τ c_m ,ns^mt^n =∑_n∈ℕ^r-τ c_n(s) t^n, is a root of P if and only if the following polynomial relations hold when evaluated at the series c_0(s),…, c_n(s):
∀l∈ℕ^r-τ, ∑_j=0,..,d a_l,j(s) C_0^j(s)=- ∑_i<l, j=0,..,d a_i,j(s) C_l-i^(j)(s) .
Let us compute:
P(s,t,y_0)=∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^iy_0^j
=∑_i∈ℕ^r-τ, j=0,..,d a_i,j(s)t^i(∑_n∈ℕ^r-τc_n^(j)(s) t^n)
=∑_l∈ℕ^r-τ(∑_i≤l, j=0,..,d a_i,j(s)c_l-i^(j)(s))t^l.
So, y_0 is a root of P if and only if, in the latter formula, the coefficient of t^l for each l vanishes, which is equivalent to the vanishing of (<ref>) (noticing that C_0^(j)= C_0^j for all j).
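For instance, for r-τ=1 and d=1, the relation for a given l∈ℕ reads a_l,0(s)+a_l,1(s)C_0(s)=-∑_i<l a_i,1(s)C_l-i(s), since C_n^(1)=C_n and C_n^(0)=0 for n≠ 0.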
Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let P∈(K[s][[t]][y])_ℱ,𝒢∖{0} be a polynomial such that P(s,t,y_0)=0.
We notice that w_t(P) is the index of the first non-trivial relation (<ref>), for ℕ^r-τ ordered with ≤_grlex. Let l̂_0∈^r-τ be such that w_t(P)≤_grlexl̂_0. If w_t(P) is known, then one can take l̂_0=w_t(P).
§.§.§ First step
For any l∈^r-τ, we denote by ℱ_l' and 𝒢_l' the corresponding sets of tuples (k,j)∈^τ× where (k,l,j)∈ℱ and (k,l,0)∈𝒢 respectively. We denote d'_s,l:=max{|k| | (k,j)∈ℱ_l'∪𝒢_l' } (which is well-defined thanks to Condition (iii) of Lemma <ref>). By (<ref>) in Remark <ref>, we have that:
d'_s,l≤ a|l|+b,
where a and b are as in Lemma <ref>.
Let l≤_grlexl̂_0 (or directly l=w_t(P) if known). As we are interested in the first non trivial relation in (<ref>), we consider its following instance:
∑_j=0,..,d a_l,j(s) C_0^j=∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kC_0^j=0 .
By Lemma <ref>, there is l≤_grlexl̂_0 such that c_0 satisfies the latter relation, i.e. c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). In particular, c_0 is algebraic relatively to (⋃_l≤_grlexl̂_0ℱ'_l,⋃_l≤_grlexl̂_0𝒢'_l). We denote d'_s:=max_l≤_grlexl̂_0(d'_s,l). Let us now describe the reconstruction method for this first step:
* We determine the multi-indices l≤_grlexl̂_0 such that ℱ'_l∪𝒢'_l≠∅.
* For each l≤_grlexl̂_0 as above,
we determine whether c_0 is algebraic relatively to (ℱ'_l,𝒢'_l) by computing the first minors of maximal order of the corresponding Wilczynski matrix M_ℱ'_l,𝒢'_l^red. Proceeding as in <cit.> or Lemma <ref>, it suffices to compute them up to the row indexed by the biggest m∈ℕ^τ such that | m|≤ 2 d d'_s.
* Let l≤_grlexl̂_0 such that c_0 is algebraic relatively to (ℱ'_l,𝒢'_l). We reconstruct the K-vector space of polynomials corresponding to Equation (<ref>) according to the method in Section <ref>, in particular Lemma <ref>, applied to (ℱ'_l,𝒢'_l) and c_0. We denote by E_l this space.
* For each l'<_grlexl, we set a_k, l',j:=0 for (k, l',j)∈ℱ∪𝒢.
§.§.§ Second step
With the notations of the previous section, let l be such that E_l≠{0}. Let us consider the instances of (<ref>) corresponding to the l' such that:
l<_grlexl'<_grlexl+(0,…,0,1),
For such l', we claim that the set of indices i such that i<l' and i≥_grlexl is empty. Indeed, by (<ref>), note that | l'|=| l|. For such i, one necessarily has | i|<| l'|=| l|, but also | i|≥ | l|: a contradiction.
According to (4) at the end of First Step above and to the previous claim, the right hand sides of such instances are equal to 0. Hence, they also are of the same form as (<ref>):
∑_j=0,..,d a_l',j(s) C_0^j=∑_(k,j)∈ℱ'_l'∪𝒢'_l' a_k,l',js^kC_0^j=0 .
We perform the same method of reconstruction as in the First Step <ref> to determine E_l' the K-vector space of polynomials corresponding to this equation. Note that E_l' might be equal to {0}.
At this step, for each l≤_grlexl̂_0 such that E_l≠{0} from the First Step, we have built the vector spaces E_l' (possibly {0}) of all the coefficients a_k,l',j for (k, l',j)∈ℱ∪𝒢 satisfying the instances of (<ref>) for
l'<_grlexl+(0,…,0,1).
§.§.§ Third step
Let l≤_grlexl̂_0 such that E_l≠{0} as in the First Step <ref>. We consider the instance of (<ref>) corresponding to l+(0,…,0,1). Note that for i< l+(0,…,0,1), we have that i≤_grlexl. Applying (4) from the end of the First Step, we obtain:
∑_j=0,..,d a_l+(0,…,0,1),j(s) C_0^j=- ∑_j=0,..,d a_l,j(s) C_(0,…,0,1)^(j) .
Noticing that
C_(0,…,0,1)^(j)=j C_0^j-1C_(0,…,0,1), we get:
∑_(k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) a_k,l+(0,…,0,1),js^kC_0^j=-
(∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj C_0^j-1) C_(0,…,0,1) .
There is l≤_grlexl̂_0 such that c_0 and c_(0,…,0,1) satisfy the latter relation, and c_0 satisfies the relations (<ref>) and (<ref>).
If c_(0,…,0,1)=0, then there are two cases. Either ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅ i.e. there is no coefficient a_k,l+(0,…,0,1),j to reconstruct. Or else, we obtain an equation like (<ref>) and we derive E_l+(0,…,0,1) as in the first and second step.
If c_(0,…,0,1)≠ 0, let us denote θ_s,(0,…,0,1):= (| l̂_0|+d)a +b where a and b are as in Lemma <ref>.
By this lemma,
there are non-trivial polynomial relations P_0(s,z_0)=0 and P_1(s,z_0,z_1)=0 satisfied by c_0 and c_(0,…,0,1) with deg_sP_j≤θ_s,(0,…,0,1), deg_z_0P_j≤ d and deg_z_1P_1≤ d. There are several cases.
∙ Suppose that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)=∅. Equation (<ref>) reduces to:
∑_(k,j)∈ℱ'_l∪𝒢'_l a_k,l,js^kj c_0^j-1=
∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1=0,
which means that c_0 is at least a double root of (<ref>). We resume the notations of Section <ref>. Let us denote by ℱ”_l the family corresponding to ℱ” for (<ref>), and λ_l, k,j^k_0,j_0 the coefficients corresponding to λ_ k,j^k_0,j_0. Formula (<ref>) of Lemma <ref> becomes:
∀ (k,j)∈ℱ”_l, a_k,l,j =-∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0 .
Substituting this formula in (<ref>) gives:
∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0s^k_0j_0 c_0^j_0-1 + ∑_(k,j)∈ℱ”_l( -∑_(k_0,j_0)∈ℱ'_l∖ℱ”_la_k_0,l,j_0 λ_l, k,j^k_0,j_0)
s^kj c_0^j-1 =0 ,
which is:
∑_(k_0,j_0)∈ℱ'_l∖ℱ”_l a_k_0,l,j_0( s^k_0j_0 c_0^j_0-1 - ∑_(k,j)∈ℱ”_l λ_l, k,j^k_0,j_0s^kj c_0^j-1) =0 .
Either, the latter relation is trivial, i.e. for all (k_0,j_0)∈ℱ'_l∖ℱ”_l, the contents of the parenthesis are all 0. In this case, the space E_l of possible equations for c_0 remains unchanged. Or, the dimension of E_l drops. Since the contents of these parenthesis are polynomials in s and c_0, by Lemma <ref>, the s-adic order of the non-vanishing ones is at most 2d'_sd. The vanishing of (<ref>) follows from the vanishing of the terms of s-adic order up to 2d'_sd. This gives linear relations (with at least one that is nontrivial) between the a_k_0,l,j_0's for (k_0,j_0)∈ℱ'_l∖ℱ”_l. Accordingly, we derive a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices.
⋆ Suppose now that ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)^red up to the lowest row of order 2d'_s,l+(0,…,0,1)d. There are two subcases.
⋆∙ If c_0 is algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), according to Equation (<ref>), we set z'=-
(∑_(k,j)∈ℱ'_l a_k,l,js^kj c_0^j-1) c_(0,…,0,1). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). We consider as in Section <ref>, a subfamily ℱ”_l+(0,…,0,1) of ℱ'_l+(0,…,0,1), the vectors (V_l+(0,…,0,1), k,j^red)_(k,j)∈ℱ”_l+(0,…,0,1) and V^red_l+(0,…,0,1) for z', and the corresponding matrix N^red_l+(0,…,0,1). According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_l+(0,…,0,1) of maximal order up to the row p with |p| ≤ 2.3.
θ_s,(0,…,0,1)d^d+1. Let us consider one of these minors, say (D). For (k,j)∈ℱ'_l, we denote by W_k,j^red the infinite vector corresponding to s^kj c_0^j-1 c_(0,…,0,1). Hence, we have:
V^red_l+(0,…,0,1)= -∑_(k,j)∈ℱ'_l a_k,l,j W_k,j^red.
For each (k,j)∈ℱ'_l, we set D_k,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_l+(0,…,0,1), the corresponding part of the W_k,j^red. By multilinearity of the determinant, one obtains:
(D)=-∑_(k,j)∈ℱ'_l(D_k,j)a_k,l,j.
So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,l,j's for (k,j)∈ℱ'_l. Considering the linear relations for all these D's, we derive from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices.
If E_l≠{0}, for each a_l:=(a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l list of coefficients of a polynomial in E_l, we perform the method in Section <ref> and we reconstruct the space Φ_l+(0,…,0,1)(a_l) of coefficients (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_l+(0,…,0,1)(a_l) + F_l+(0,…,0,1) where ϕ_l+(0,…,0,1)(a_l) is a point and F_l+(0,…,0,1) a vector space. Note that ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and
that its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ”_l+(0,…,0,1).
Also, we have that F_l+(0,…,0,1) is independent of a_l. Finally, we observe that, for a given l, the set of admissible
((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space.
⋆⋆ If c_0 is not algebraic relatively to (ℱ'_l+(0,…,0,1),𝒢'_l+(0,…,0,1)), we have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1). Note that in this case, such a polynomial P is necessarily unique for a given z'. We proceed as above with ℱ'_l+(0,…,0,1) instead of ℱ”_l+(0,…,0,1) and as in
Section <ref>, in particular Lemma <ref> with 2.3.
θ_s,(0,…,0,1)d^d+1 as bound for the depth of the minors involved.
This determines from E_l a new space of possible equations for c_0, that we still denote by E_l for simplicity. In the particular case where E_l={0}, we exclude l from the list of admissible multi-indices. Also, if E_l≠{0}, for each a_l∈ E_l≠{0}, we reconstruct the list of coefficients ϕ_l+(0,…,0,1)(a_l):= (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1) for a relation (<ref>). By (<ref>) and (<ref>), ϕ_l+(0,…,0,1)(a_l) depends linearly on a_l and
its computation is done by computing a finite number of minors of matrices given by the W_k',j'^red's, (k',j')∈ℱ'_l , and the V_k”,j”^red's, (k”,j”)∈ℱ'_l+(0,…,0,1). Again, we observe that, for a given l, the set of admissible ((a_k,l,j)_(k,j)∈ℱ'_l∪𝒢'_l , (a_k,l+(0,…,0,1),j)_ (k,j)∈ℱ'_l+(0,…,0,1)∪𝒢'_l+(0,…,0,1))'s is a nonzero K-vector space.
To sum up Sections <ref> to <ref>, we have reconstructed a finite number of multi-indices l (i.e. possible initial steps l_0:=w_t(P)) and, for each of these l's, the nonzero K-vector space E_l,l+(0,…,0,1) of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l≤_grlexl'≤_grlexl+(0,…,0,1) for the initial part of a possible vanishing polynomial for y_0.
§.§.§ Induction step.
For each l≤_grlexl̂_0 possible initial step as above, we assume that up to some l̃≥_grlexl+(0,…,0,1) we have reconstructed the nonzero K-vector space, say E_l,l̃, of coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃ for the initial part of a possible vanishing polynomial for y_0. Recall that, for λ∈^r, S(λ) (respectively A(λ) for λ≠ 0) denotes the successor (respectively the predecessor) for ≤_grlex of λ in ^r. Equation (<ref>) gives:
∑_j=0,..,d a_S(l̃),j(s) C_0^j=- ∑_i<S(l̃), j=0,..,d a_i,j(s) C_S(l̃)-i^(j) ,
which we write as:
∑_(k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) a_k,S(l̃),js^kC_0^j=- ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k C_S(l̃)-i^(j)) .
Let us denote θ_s,S(l̃):= (|l̂_0|+d |S(l̃)|)a +b where a and b are as in Lemma <ref>. By this lemma, there exist polynomials (P_λ(s,z_0,…,z_λ))_λ= 0,…,S(l̃) such that P_λ(s,c_0,…,c_λ)=0, P_λ(s,c_0,…,c_A(λ),z_λ)≢0, _sP_λ≤θ_s,S(l̃), _z_μP_λ≤ d for μ≤_grlexλ. Let us denote
i_S(l̃):=([ |S(l̃)|+r-τ; |S(l̃)| ])-1.
Note that i_S(l̃)+1 is at most the number of multi-indices λ such that λ≤_grlexS(l̃).
∙ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)=∅. Equation (<ref>) evaluated at c_0,…,c_S(l̃) reduces to:
∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))=0 .
Let us expand c_n^(j) in (<ref>):
y_0^j= ∑_n∈ℕ^r-τc_n^(j) t^n= (∑_γ∈ℕ^r-τc_γ t^γ)^j,
so,
c_n^(j)=∑_j / |j|=j g(j)=nj!/j!c^j
where j:=(j_0,…,j_n) and c^j:= c_0^j_0⋯ c_n^j_n (and where g is as in Notation <ref>).
Let us expand the left hand side of (<ref>):
∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j))= ∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k ∑_j / |j|=j g(j)=S(l̃)-ij!/j!c^j)
(where j:=(j_0,…,j_S(l̃)) and c^j:= c_0^j_0⋯ c_S(l̃)^j_S(l̃)).
We set 𝒦'_S(l̃) the set of (k,j) where k∈^τ and j:=(j_0,…,j_S(l̃)), j≠0, such that j:=|j|∈{0,…,d} and there exists i∈^r-τ with i<S(l̃), (k,j)∈ℱ'_i∪𝒢'_i, g(j)=S(l̃)-i.
Equation (<ref>) becomes:
∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃)j!/j!a_k,S(l̃)-g(j),j s^kc^j=0
.
Thanks to Remark <ref>, for any (k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃), we have that |k|≤ a |S(l̃)|+b≤θ_s,S(l̃). We are in position to apply the method of reconstruction of Section <ref> to all the polynomials such that
∑_(k,j)∈𝒦'_S(l̃)∪ℒ'_S(l̃)
b_k,j s^kc^j=0.
This requires computations of minors of the corresponding Wilczynski matrix up to a finite depth bounded by
2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) d^d^i_S(l̃)+⋯+d^2+d+1
(see Lemma <ref>). By Lemma <ref>, the formulas (<ref>) and (<ref>) give us with a vector space B_S(l̃) (possibly zero) of coefficients b_k,j, hence a corresponding vector space A_S(l̃) of coefficients a_k,S(l̃)-g(j),j=j!/j!b_k,j.
We take the intersection of A_S(l̃) with E_l,l̃ and we obtain another vector space of admissible coefficients that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices.
⋆ Suppose that ℱ'_S(l̃)∪𝒢'_S(l̃)≠∅. We determine whether c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)). For this, we examine the vanishing of the minors of maximal order of M_ℱ'_S(l̃),𝒢'_S(l̃)^red up to the lowest row of order 2d'_s,S(l̃)d (see Section <ref> for the notation). There are two subcases.
⋆∙ If c_0 is algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z':=-
∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We have to determine whether there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). We consider as in Section <ref>, a subfamily ℱ”_S(l̃) of ℱ'_S(l̃), the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ”_S(l̃) and V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃).
According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with
|p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1
Let us consider one of these minors, say (D). For i<S(l̃), for (k,j)∈ℱ'_i∪𝒢'_i, we denote by W_k,i,j^red the infinite vector corresponding to s^k c_S(l̃)-i^(j).
We set D_k,i,j the matrix obtained from D by substituting to its last column, i.e. the part of V^red_S(l̃), the corresponding parts of the W_k,i,j^red's. Since V^red_S(l̃)= ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,j.W_k,i,j^red), one has:
(D)=- ∑_i<S(l̃) ( ∑_(k,j)∈ℱ'_i∪𝒢'_i(D_k,i,j) a_k,i,j).
So, the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices.
If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the space Φ_S(l̃)(a_l̃) of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). By (<ref>) and (<ref>), it is an affine space ϕ_S(l̃)(a_l̃) + F_S(l̃) where ϕ_S(l̃)(a_l̃) is a point and F_S(l̃) a vector space. Note that ϕ_S(l̃)(a_l̃) depends linearly on a_l̃ and
that its computation is done by computing a finite number of minors of matrices given by the W_k',i,j'^red's, i<S(l̃), (k',j')∈ℱ'_i∪𝒢'_i, and the V_k”,j”^red's, (k”,j”)∈ℱ”_S(l̃).
Also, we have that F_S(l̃) is independent of a_l̃. Finally, we observe that the set of admissible ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃))'s, for a given l, is a nonzero K-vector space which we denote by E_l,S(l̃).
⋆⋆ If c_0 is not algebraic relatively to (ℱ'_S(l̃),𝒢'_S(l̃)), according to Equation (<ref>), we set z'=-
∑_i<S(l̃) (∑_(k,j)∈ℱ'_i∪𝒢'_i a_k,i,js^k c_S(l̃)-i^(j)). We want to determine if there exists a relation P( s, c_0)=z' with P having support in ℱ'_S(l̃)∪𝒢'_S(l̃). As in Section <ref>, we consider the vectors (V_S(l̃), k,j^red)_(k,j)∈ℱ'_S(l̃), V^red_S(l̃) for z', and the corresponding matrix N^red_S(l̃).
According to Lemma <ref>, the existence of such a polynomial P is equivalent to the vanishing of the minors of N^red_S(l̃) of maximal order up to the row p with
|p| ≤ 2.3^d^i_S(l̃)-1+⋯+d^2+d+1θ_s,S(l̃) . d^d^i_S(l̃)+⋯+d^2+d+1
where i_S(l̃) is defined by (<ref>).
As previously, for any of such minors, say (D), the vanishing of (D) is equivalent to the vanishing of a linear form in the a_k,i,j's for i<S(l̃) and (k,j)∈ℱ'_i∪𝒢'_i. Considering these linear relations, we derive from E_l,l̃ a new space of possible coefficients (a_k,l',j)_(k,l',j)∈ℱ∪𝒢 , l'≤_grlexl̃, that we still denote by E_l,l̃ for simplicity. In the particular case where the projection of E_l,l̃ on E_l is {0}, we exclude l from the list of admissible multi-indices.
If this projection is not {0}, so in particular E_l≠{0}, for each a_l̃:=(a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ list of coefficients of a polynomial in E_l,l̃, we perform the method in Section <ref> and we reconstruct the unique list of coefficients (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃) for a relation (<ref>). Note that this list depends linearly on (a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ by relations (<ref>) and (<ref>). Finally, we denote by E_l,S(l̃) the K-vector space of ((a_k,l',j)_(k,l',j)∈ℱ∪𝒢, l'≤_grlexl̃ , (a_k,S(l̃),j)_ (k,j)∈ℱ'_S(l̃)∪𝒢'_S(l̃)) admissible.
As a conclusion, we obtain:
Let ñ^0∈^r, p∈^*, q∈^r-1∖{0}, d∈^* be given. Let ℱ,𝒢 be as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>. Let y_0=∑_(m,n)∈ℕ^τ×ℕ^r-τ c_m,ns^mt^n=∑_n∈ℕ^r-τc_n(s) t^n∈ K[[s,t]], c_0,0≠ 0, be a series algebroid relatively to (ℱ,𝒢). Let l̂_0∈^r-τ be given. Assume that there exists a polynomial P∈(K[s][[t]][y])_ℱ,𝒢∖{0} such that P(s,t,y_0)=0 and w_t(P)≤_grlexl̂_0.
For any l≤_grlexl̂_0, for any l̃≥_grlexl, Sections <ref> to <ref> provide the vector space E_l,l̃ of all the polynomials Q_l,l̃∈(K[s][[t]][y])_ℱ,𝒢 such that:
w_t(Q_l,l̃)=l and w_t(Q_l,l̃(s,t,y_0) )>_grlexl̃.
§.§ Proof of Theorem <ref>
Theorem <ref> will be a corollary of the following result:
Let d∈^* and ν̃_0∈. Let ỹ_0∈𝒦_r, more precisely ỹ_0=f̃/g̃ for some formal power series f̃,g̃∈ K[[(x_1/x_2^q_1)^1/p,…, (x_r-1/x_r^q_r-1)^1/p ,x_r^1/p]]. We assume that ỹ_0 is algebroid of degree bounded by d, and that there is a vanishing polynomial P̃ of degree bounded by d and of (x)-adic order bounded by ν̃_0. Let q_i'≥ q_i, i=1,…,r-1, be such that the transform fg of f̃g̃ under the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, is monomialized with respect to the u_i's:
(fg)(u):=(f̃g̃)( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r-1^pu_r^p q'_r-1 , u_r^p , y)
We resume the notations of (<ref>), (<ref>), (<ref>), in particular, x_i∈ξ_k if and only if q_i'>0, and otherwise x_i ∈x_k for some k:
x^ ny^j = x_0^ n_0 ξ_1^ m_1 x_1^ n_1⋯ξ_σ^ m_σ x_σ^ n_σy^j.
where n=( n_0, m _1, n_1,…, m_σ, n_σ). For k=1,…,σ, we denote ξ_k=(x_i_k,…,x_j_k-1) and x_k=(x_j_k,…,x_i_k+1-1), and accordingly m_k=(n_i_k,…,n_j_k-1) and n_k=(n_j_k,…,n_i_k+1-1) with i_σ+1:=r+1. For k=0 when x_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1. When x_0 is empty, we set n_0=0.
We set:
[ L̃_k: ^i_k+1-i_k → ; (m_k,n_k)=(n_i_k,…,n_i_k+1-1) ↦ L̃_k(m_k,0)+ |n_k| ]
where:
L̃_k(m_k,0):=q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k+⋯+q'_j_k-1q'_j_k-2n_j_k-2 + q'_j_k-1n_j_k-1.
Moreover, let
L̃(n):=|n_0|+∑_k=1,…,σL̃_k(m_k,n_k).
The algorithm described in Section <ref> provides for any ν∈ℕ all the polynomials Q̃_ν(x,y)∈ K[[x]][y] with deg_yQ̃_ν≤ d and of (x)-adic order bounded by ν̃_0 such that, for any 1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), one has:
L̃(n)≥ν.
Recall that, by the Monomialization Lemma <ref> and by Remark <ref>, if β=(β_1,…,β_r) is the lexicographic valuation of f̃g̃ with respect to the variables ζ_i:=(x_i/x_i+1^q_i)^1/p for i=1,…,r-1, ζ_r:=x_r^1/p, then the assumptions of Theorem <ref> are satisfied with q_i':=q_i+β_i+1+1. Therefore, Theorem <ref> follows.
Let us now deduce Theorem <ref> from Theorem <ref>. Suppose that _xP̃≤ν̃_0. Let ℱ,𝒢 be as in Definition <ref> and such that ℱ∪𝒢 is the total family of multi-indices (α,j) satisfying Conditions (i), (ii), (iii) of Lemma <ref> with q_i' instead of q_i. By the transformations described in (<ref>), (<ref>) and (<ref>) associated to the change of variables u_i:=(x_i/x_i+1^q_i')^1/p, i=1,…,r-1, u_r=x_r^1/p, we obtain a polynomial
P(u,y):=u^m̃^0P̃( u_1^pu_2^pq'_1⋯ u_r^pq'_1q'_2⋯ q'_r-1 , … , u_r^p , u^ñ^0y)∈(K[[u]][y])_ℱ,𝒢.
Recall that we denote by x_k, ξ_k the sub-tuple of variables x_i corresponding to t_k, s_k respectively. For k=0 when t_0 is not empty, we denote x_0=(x_j_0,…,x_i_1-1), t_0=(u_j_0,…,u_i_1-1)=(x_j_0^1/p,…,x_i_1-1^1/p) and n_0=(n_j_0,…,n_i_1-1) with j_0:=1.
According to (<ref>), (<ref>), (<ref>), a monomial x^ n is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have:
[ ξ_k^ m_k x_k^ n_k= s_i_k^pn_i_ks_i_k+1^p(n_i_k+1+q'_i_kn_i_k)⋯s_j_k-1^p(n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k); t_j_k^p(n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k) t_j_k+1^pn_j_k+1⋯t_i_k+1-1^pn_i_k+1-1. ]
Hence, a monomial x^ ny^j of P̃(x,y) gives a monomial u^αu^m̃^0+jñ^0y^j=s^βt^γu^m̃^0+jñ^0y^j of P(u,y).
Since Supp(P̃) contains a monomial x^ ny^j such that
|n|= |n_0|+∑_k=1^σ(|m_k|+|n_k|)≤ν̃_0,
we have that:
ord_tP≤ p|n_0|+ ∑_k=1^σ(pq'_j_k-1q'_j_k-2⋯ q'_i_k|m_k|+p|n_k|) + |(m̃^0+jñ^0)_|t|
≤ p.κ.ν̃_0 + d.ρ
where n_|t denotes the components of n corresponding to the exponents of the variables t in u^n, κ:=max_k=1,..,σ(q'_j_k-1q'_j_k-2⋯ q'_i_k) and ρ:=∑_k=0^σ( |ñ^0_j_k|+⋯+|ñ^0_i_k+1-1|). We set
l̂_0:= (p.κ.ν̃_0 + d.ρ,0,…,0)∈^r-τ,
so that w_t(P)≤_grlexl̂_0.
Given Q̃_ν(x,y) as in Theorem <ref>, let us denote by Q_ν(u,y) its transform via (<ref>), (<ref>), (<ref>) as recalled between P̃ and P above. One gets Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0). According to (<ref>), (<ref>), (<ref>), a monomial x^ n/p of Q̃_ν(x,ỹ_0) is transformed into a monomial u^α=s^βt^γ such that, for k=1,…,σ, we have:
[ ξ_k^ m_k/p x_k^ n_k/p= s_i_k^n_i_ks_i_k+1^n_i_k+1+q'_i_kn_i_k⋯s_j_k-1^n_j_k-1+q'_j_k-2n_j_k-2+q'_j_k-2q'_j_k-3n_j_k-3+⋯+ q'_j_k-2q'_j_k-3⋯ q'_i_kn_i_k; t_j_k^n_j_k+q'_j_k-1n_j_k-1+q'_j_k-1q'_j_k-2n_j_k-2+⋯+ q'_j_k-1q'_j_k-2⋯ q'_i_kn_i_k t_j_k+1^n_j_k+1⋯t_i_k+1-1^n_i_k+1-1. ]
So the monomials of Q_ν(u,y_0) are of the form u^α-m̃^0. As in the computation of (<ref>), ord_xQ̃_ν(x,y)≤ν̃_0 implies that ord_tQ_ν(u,y)≤ p.κ.ν̃_0 + d.ρ, so w_t(Q_ν(u,y))≤_grlexl̂_0.
Moreover, since Q̃_ν(x,ỹ_0)=u^m̃^0Q_ν(u,y_0), the condition such that for any
1/pn=1/p(n_1,…,n_r)∈Supp Q̃_ν(x,ỹ_0), L̃(n)≥ν, is equivalent to ord_t(Q_ν(u,y_0))+|m̃^0_ |t|≥ν. This is in turn equivalent to w_t(Q_ν(u,y_0))≥(0,…,0,ν-|m̃^0_ |t|). We set
l̃_ν:= (0,…,0,ν-|m̃^0_ |t|), and l:=w_t(Q_ν(u,y)).
A polynomial Q̃_ν(x,y) satisfying the conditions of Theorem <ref> comes from a polynomial Q_ν(u,y) as above satisfying
w_t(Q_ν(u,y))≤_grlexl̂_0 and w_t(Q_ν(u,y_0))≥l̃_ν.
The construction of such polynomials Q_ν(u,y)=Q_l,l̃_ν(u,y) is given by Theorem <ref>.
This achieves the proofs of Theorems <ref> and <ref>.
§.§ Plan of the algorithm and example
For the convenience of the reader, we now give several flowcharts in order to describe the algorithm. The first one provides the plan of the algorithm. The others consist of the details of the corresponding steps.
[Flowcharts: the overall plan of the algorithm, followed by the details of the corresponding steps.]
The purpose of the present example is to illustrate the various points of our Theorem <ref>. For r=d=p=2 and q_1=ν̃_0=1, let us consider
ỹ_0=f̃/g̃∈𝒦_2 with f̃,g̃∈ K[[(x_1/x_2)^1/2,x_2^1/2]] a root of the following equation:
P̃(x_1,x_2,y) := sin(x_1+x_2)y^2+e^x_1x_1x_2y-x_2^2cos(x_1x_2) = 0.
For instance,
ỹ_0 := - e^x_1x_1x_2+ √( e^2x_1x_1^2x_2^2+4 x_2^2cos( x_1x_2 ) sin( x_1+x_2 ) )/2 sin( x_1+x_2 )
= - e^x_1/x_2x_2x_1/x_2x_2+ x_2^1/2√( e^2x_1/x_2x_2(x_1/x_2)^2x_2+4 cos( x_1/x_2x_2^2 ) sin( x_1/x_2x_2+x_2)/x_2)/2 sin( x_1/x_2x_2+x_2) / x_2
and therefore:
f̃ := [ 2+x_1/x_2-1/4(x_1/x_2)^2+1/8(x_1/x_2)^3-5/64(x_1/x_2)^4+7/128(x_1/x_2)^5] x_2^1/2 -x_1/x_2x_2
+[ 1/4(x_1/x_2)^2-1/8(x_1/x_2)^3+3/32(x_1/x_2)^4-5/64(x_1/x_2)^5]x_2^3/2 -(x_1/x_2)^2x_2^2
+[ -1/6-5/12x_1/x_2-5/16(x_1/x_2)^2+43/96(x_1/x_2)^3-199/768(x_1/x_2)^4+107/512(x_1/x_2)^5] x_2^5/2
-1/2(x_1/x_2)^2x_2^3+⋯
g̃ := [2+2 x_1/x_2]-[ 1/3+x_1/x_2+(x_1/x_2)^2+1/3(x_1/x_2)^3]x_2^2
+ [1/60+1/12x_1/x_2+1/6(x_1/x_2)^2+1/6(x_1/x_2)^3+1/12(x_1/x_2)^4 +1/60(x_1/x_2)^5]x_2^4
-1/2520[∑_k=0^7 7!/k!(7-k)! (x_1/x_2)^k
]x_2^6+⋯
In this case, note that the transform fg of f̃g̃ under the change of variables u_1:=(x_1/x_2)^1/2, u_2=x_2^1/2, is monomialized with respect to (u_1,u_2), so that q_1'=q_1=1 and (u_1,u_2)=(s,t). Hence, r-τ=τ=1. Therefore, one can expand ỹ_0 as a monomialized power series in (s,t): ỹ_0=ty_0 with
y_0 = 1-1/2s^2+3/8s^4-5/16s^6+35/128s^8-63/256s^10+⋯
+ ( -1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯)t
+ (1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯)t^2
+(-1/2s^4+1/2s^6-1/2s^8+1/2s^10+⋯)t^3
+( 1/12+1/8s^2+1/32s^4+47/192s^6-195/512s^8+499/1024s^10+⋯)t^4
+( -1/12s^2-1/12s^4-1/4s^6+1/4s^8-1/4s^10+⋯)t^5 +⋯
= ∑_n∈ℕc_n(s) t^n with c_0,0=1≠ 0
As described after (<ref>), now we are in position to apply the algorithm as stated in Theorem <ref> with ñ^0=(0,1) and m̃^0=(0,0) and
l̂_0:= p.κ.ν̃_0 + d.ρ=2× 1× 1+2×1=4.
The corresponding support of the vanishing polynomial P belongs to some ℱ∪𝒢 as in Definition <ref> and satisfying Conditions (i), (ii), (iii) of Lemma <ref>, namely for any (k,l,j)∈ℱ∪𝒢:
(i) (k,l)≥ (0,j);
(ii)k and l-j are even;
(iii)k≤ l-j.
For the first step of the algorithm (Section <ref>), the list of plausible indices to begin with are all the non-negative integers l≤l̂_0=4. We resume the notations of Section <ref> (see also the method in Section <ref>). For simplicity, let us write c_0 for c_0(s).
Step 1.
If l=0 then j=0 and therefore l=k=0, so ℱ'_0=∅ and 𝒢'_0={(0,0,0)}. Equation (<ref>) translates as a_0,0,0=0, which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=0 from the list of admissible indices.
If l=1 then j=0 or 1. But l-j has to be even, so j=1 and l-j=0=k. Thus, ℱ'_1={(0,1,1)} and 𝒢'_1=∅. Equation (<ref>) translates as
a_0,1,1.s.C_0=0⇔ a_0,1,1=0,
which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=1 from the list of admissible indices.
If l=2 then j∈{0,1,2}. But l-j has to be even, so j=0 or 2. Since k is even, in the former case, k=0 or 2, and in the latter case k=0. Thus, ℱ'_2={(0,2,2)} and 𝒢'_2={(0,2,0), (2,2,0)}. Equation (<ref>) translates as
a_0,2,2.C_0^2+a_0,2,0+a_2,2,0.s^2=0.
However, since c_0^2=1-s^2+s^4-s^6+s^8-s^10+⋯ is not a polynomial of degree at most 2, the only possibility is a_0,2,2=a_0,2,0=a_2,2,0=0 which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=2 from the list of admissible indices.
If l=3 then j∈{0,1,2} (recall that deg_y P=2≤ d=2). But l-j has to be even, so j=1. Since k is even, k=0 or 2. Thus, ℱ'_3={(0,3,1), (2,3,1)} and 𝒢'_3=∅. Equation (<ref>) translates as
(a_0,3,1+a_2,3,1.s^2).C_0=0⇔ a_0,3,1=a_2,3,1=0,
which contradicts the assumption that such an equation should be non-trivial. Hence, we exclude l=3 from the list of admissible indices.
If l=4, again since l-j has to be even, we have that j=0 or 2. Since k is even, in the former case, k∈{0,2,4}, and in the latter case k∈{0,2}. Thus, ℱ'_4={(0,4,2), (2,4,2)} and 𝒢'_4={(0,4,0), (2,4,0),(4,4,0)}. Equation (<ref>) translates as
(a_0,4,2+a_2,4,2.s^2).C_0^2+a_0,4,0+a_2,4,0.s^2+a_4,4,0.s^4=0.
Let us consider the corresponding Wilczynski matrices, where for simplicity the lines consist only of the coefficients of 1, s^2, s^4, etc.
M_ℱ'_4,𝒢'_4 :=[[ 1 0 0 1 0; 0 1 0 -1 1; 0 0 1 1 -1; 0 0 0 -1 1; 0 0 0 1 -1; 0 0 0 -1 1; ⋮ ⋮ ⋮ ⋮ ⋮; ]] and
M_ℱ'_4,𝒢'_4^red :=[[ -1 1; 1 -1; -1 1; 1 -1; -1 1; ⋮ ⋮; ]]
(Recall that here the reduced matrix is obtained by removing the first 3 rows and columns.) One can easily check that all the minors of maximal order vanish up to order 2d'_sd=2× 4× 2=16: as expected, c_0 is algebraic relatively to (ℱ'_4,𝒢'_4). Moreover, a first non-zero minor of order 1 in M_ℱ'_4,𝒢'_4^red is obtained e.g. with the coefficient 1 of the second column (this is the coefficient of s^6 in the expansion of s^2.c_0^2). Using Cramer's rule, we identify it, up to a multiplicative constant λ∈ K, with a_2,4,2, and we also get a_0,4,2=λ. According to (<ref>), we derive a_0,4,0=-λ and a_2,4,0=a_4,4,0=0.
As a conclusion, the K-vector space E_4 of polynomials corresponding to Equation (<ref>) is
E_4:={λ[(1+s^2)y^2-1]t^4+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 5}.
Here, the linear form L̃ of Theorem <ref> is given by:
L̃(n_1,n_2)=1n_1+n_2=n_1+n_2.
We go back to the variables (x_1,x_2) by the following transformation:
Q(s, t,y)=Q̃(s^2t^2,t^2,ty).
The space E_4 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form:
λ[(x_1+x_2)y^2-x_2^2]+ R̃(x_1,x_2,y)
with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that:
R̃=ã_0+ã_1y+ã_2y^2
with ord_x(ã_0)≥ 3, ord_x(ã_1)≥ 2 and ord_x(ã_2)≥ 2.
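For the convenience of the reader, the minors computation of this first step can be cross-checked with a short Python/SymPy sketch (an illustration of ours, not part of the algorithm itself); the closed form c_0=(1+s^2)^-1/2 is read off from the expansion of c_0(s) displayed above, and the depth 16=2d'_sd is the bound used above.

import sympy as sp

s = sp.symbols('s')
c0 = (1 + s**2)**sp.Rational(-1, 2)   # closed form read off from the expansion of c_0(s) above
depth = 16                            # 2*d'_s*d = 2*4*2

def coeffs(expr):
    # truncated coefficient vector of expr in powers of s, up to s^depth
    ser = sp.expand(sp.series(expr, s, 0, depth + 1).removeO())
    return [ser.coeff(s, i) for i in range(depth + 1)]

# columns indexed by F'_4 = {(0,4,2), (2,4,2)} and G'_4 = {(0,4,0), (2,4,0), (4,4,0)}
cols = [coeffs(c0**2), coeffs(s**2 * c0**2), [1] + [0]*depth, coeffs(s**2), coeffs(s**4)]
M = sp.Matrix(cols).T
print(M.nullspace())   # a single vector, proportional to (1, 1, -1, 0, 0),
                       # i.e. (1 + s^2)*c_0^2 - 1 = 0, recovering E_4 up to the factor lambda

The one-dimensional kernel confirms both that the maximal minors vanish up to the prescribed depth and that the space E_4 is one-dimensional.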
Step 2.
Here, there is no l' with 4<_grlexl'<_grlex5 as in (<ref>).
Step 3.
We consider the case where l+1=5 corresponding to Third Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain:
ℱ'_5={(0,5,1),(2,5,1),(4,5,1) } and 𝒢'_5=∅.
The instance of (<ref>) is:
[ (a_0,5,1+a_2,5,1.s^2+a_4,5,1.s^4).C_0 = -( a_0,4,2+a_2,4,2.s^2)2C_0C_1; = -λ(1+s^2) 2C_0C_1. ]
Here, c_1≠ 0, and c_0 is not algebraic relatively to (ℱ'_5,𝒢'_5) since 𝒢'_5=∅, so we are in the case ⋆⋆ of Third Step <ref>. Note that θ_s,1=(4+2)a+b with a=1, b=0 (see Lemma <ref>), so θ_s,1=6. According to Lemma <ref>, we are assured to find a non zero reconstruction minor at depth at most 2.3.θ_s,(0,…,0,1)d^d+1=2× 3× 6× 2^3=288. However, here, the Wilczynski matrices (where again for simplicity we only consider the lines consisting of the coefficients of 1, s^2, s^4, etc.) are triangular with non zero diagonal coefficients:
M_ℱ'_5,𝒢'_5=M_ℱ'_5,𝒢'_5^red =[[ 1 0 0; -1/2 1 0; 3/8 -1/2 1; -5/16 3/8 -1/2; 35/128 -5/16 3/8; ⋮ ⋮ ⋮; ]].
A first nonzero minor is obtained with the first three lines, and is equal to 1. But we notice that, here, Equation (<ref>) can be divided by C_0 (since c_0≠ 0) and we get:
a_0,5,1+a_2,5,1.s^2+a_4,5,1.s^4= -λ(1+s^2) 2 C_1.
By evaluating at c_1=-1/2s^2+1/2s^4-1/2s^6+1/2s^8-1/2s^10+⋯, we see that:
-λ(1+s^2) 2 c_1=λ s^2
and therefore a_0,5,1= a_4,5,1=0 and a_2,5,1=λ. As a conclusion, the K-vector space E_4,5 of polynomials corresponding to Third Step <ref> is
E_4,5:=
{λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5+R(s,t,y) | λ∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 6}.
As before, we go back to the variables (x_1,x_2) by the following transformation:
Q(s, t,y)=Q̃(s^2t^2,t^2,ty).
The space E_4,5 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form:
λ[(x_1+x_2)y^2+ x_1x_2 y-x_2^2]+ R̃(x_1,x_2,y)
with λ∈ K, R̃∈ K[[x_1,x_2]][y] such that:
R̃=ã_0+ã_1y+ã_2y^2
with ord_x(ã_0)≥ 3, ord_x(ã_1)≥ 3 and ord_x(ã_2)≥ 2.
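Before moving on to Step 4, the evaluation of -λ(1+s^2) 2 C_1 above can be checked directly (a small Python/SymPy sketch of ours; the closed form c_1=-s^2/(2(1+s^2)) is read off from the expansion of c_1(s) displayed before Step 1, the common factor C_0 having already been cancelled as above).

import sympy as sp

s = sp.symbols('s')
c1 = -s**2 / (2*(1 + s**2))                 # closed form read off from the expansion of c_1(s) above
print(sp.series(c1, s, 0, 8))               # -s**2/2 + s**4/2 - s**6/2 + O(s**8), as displayed above
print(sp.simplify(-(1 + s**2) * 2 * c1))    # s**2, hence a_0,5,1 = a_4,5,1 = 0 and a_2,5,1 = lambda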
Step 4.
We consider the case where S(l̃)=6 corresponding to Induction Step <ref>. By applying Conditions (i), (ii), (iii) of Lemma <ref> as before, we obtain:
ℱ'_6={(0,6,2),(2,6,2),(4,6,2) } and 𝒢'_6={(0,6,0),(2,6,0),(4,6,0),(6,6,0) }.
The instance of (<ref>) is:
[ (a_0,6,2+a_2,6,2.s^2+a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6; =-(( a_0,4,2+a_2,4,2.s^2)(2C_0C_2+ C_1^2) +(a_0,5,1+a_2,5,1.s^2+a_4,5,1.s^4).C_1); = -λ[(1+s^2) (2C_0C_2+ C_1^2) + s^2 C_1]. ]
Note that we are in the case ⋆∙ of Induction Step <ref> since c_0 is algebraic relatively to (ℱ'_6,𝒢'_6). Moreover, when evaluating at c_0, c_1 and c_2=1/8s^4-3/16s^6+15/64s^8-35/128s^10+⋯, we obtain that the right-hand side of (<ref>) vanishes. So we get:
(a_0,6,2+a_2,6,2.s^2+a_4,6,2.s^4).C_0^2+ a_0,6,0+a_2,6,0.s^2+a_4,6,0.s^4+a_6,6,0.s^6=0
which is of the same type as (<ref>). The corresponding Wilczynski matrices (where again for simplicity the lines consist only of the coefficients of 1, s^2, s^4, etc.) are
M_ℱ'_6,𝒢'_6 :=[[ 1 0 0 0 1 0 0; 0 1 0 0 -1 1 0; 0 0 1 0 1 -1 1; 0 0 0 1 -1 1 -1; 0 0 0 0 1 -1 1; 0 0 0 0 -1 1 -1; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; ]] and
M_ℱ'_6,𝒢'_6^red :=[[ -1 1 -1; 1 -1 1; -1 1 -1; 1 -1 1; -1 1 -1; ⋮ ⋮ ⋮; ]]
We apply the reconstruction method of Section <ref> with maximal subfamily ℱ”_6={(2,6,2)}. According to Lemma <ref>, we obtain:
a_2,6,2= a_0,6,2λ_2,6,2^0,6,2+ a_4,6,2λ_2,6,2^4,6,2
where here λ_2,6,2^0,6,2=-1 is the coefficient relating the column (0,6,2) to the column (2,6,2). Likewise, λ_2,6,2^4,6,2=-1. Let us consider a_0,6,2 and a_4,6,2 as parameters α,β∈ K, so a_2,6,2=-α-β. Moreover, we compute the coefficients of 𝒢'_6 according to (<ref>) in Lemma <ref>:
[ a_0,6,0 = -a_0,6,2. 1 = -α; a_2,6,0 = a_0,6,2. 1 -a_2,6,2.1 = 2α+β; a_4,6,0 = - a_0,6,2. 1 +a_2,6,2.1 -a_4,6,2.1 = -2α-2β; a_6,6,0 = a_0,6,2. 1 -a_2,6,2.1 +a_4,6,2.1 = 2α+2β ]
As a conclusion, the K-vector space E_4,6 of polynomials corresponding to Induction Step <ref> is
[ E_4,6:={λ[(1+s^2)y^2-1]t^4+(λ s^2 y) t^5 +.; [ (α - (α +β )s^2 +β s^4) y^2 -α +(2α+β)s^2- 2(α +β)s^4+ 2(α +β)s^6 ]t^6
+R(s,t,y) |; .λ,α,β∈ K, R∈(K[s][[t]][y])_ℱ,𝒢, w_t(R)≥ 7}. ]
As before, we go back to the variables (x_1,x_2) by the following transformation:
Q(s, t,y)=Q̃(s^2t^2,t^2,ty).
The space E_4,6 corresponds to the space of polynomials in K[[x_1,x_2]][y] of the form:
[ (λ x_1+λ x_2+ αx_2^2- (α +β )x_1x_2 +β x_1^2)y^2+ λ x_1x_2 y; -λx_2^2 -αx_2^3 +(2α+β)x_1x_2^2- 2(α +β)x_1^2x_2+ 2(α +β)x_1^3
+ R̃(x_1,x_2,y) ]
with λ,α,β∈ K, R̃∈ K[[x_1,x_2]][y] such that:
R̃=ã_0+ã_1y+ã_2y^2
with ord_x(ã_0)≥ 4, ord_x(ã_1)≥ 3 and ord_x(ã_2)≥ 3.
Note that we recover the beginning of the analytic expansion of P̃ at 0 in (<ref>) for λ=1 and α=β=0.
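This last point can be checked mechanically (an illustrative Python/SymPy sketch of ours, using the truncation orders imposed on R̃ above):

import sympy as sp

x1, x2, y, eps = sp.symbols('x1 x2 y eps')
Ptilde = sp.sin(x1 + x2)*y**2 + sp.exp(x1)*x1*x2*y - x2**2*sp.cos(x1*x2)

def jet(expr, order):
    # total-degree Taylor jet of expr in (x1, x2), keeping the terms of total degree < order
    scaled = expr.subs({x1: eps*x1, x2: eps*x2})
    return sp.expand(sp.series(scaled, eps, 0, order).removeO().subs(eps, 1))

print(jet(Ptilde.coeff(y, 2), 3))   # x1 + x2   -> lambda = 1, alpha = beta = 0
print(jet(Ptilde.coeff(y, 1), 3))   # x1*x2
print(jet(Ptilde.coeff(y, 0), 4))   # -x2**2

The three jets are exactly the coefficients of the polynomial (x_1+x_2)y^2+x_1x_2y-x_2^2 obtained for λ=1 and α=β=0, the remaining terms of P̃ contributing only to R̃.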
§ A GENERALIZATION OF THE FLAJOLET-SORIA FORMULA.
In the monovariate context, let Q(x,y)=∑_i,ja_i,jx^iy^j ∈ K[x,y] with Q(0,0)=∂ Q/∂ y(0,0)=0 and Q(x,0)≠ 0.
In <cit.>, P. Flajolet and M. Soria give the following formula for the coefficients of the unique formal solution y_0=∑_n≥ 1c_nx^n of the implicit equation y=Q(x,y):
[Flajolet-Soria's Formula <cit.>] c_n=∑_m=1^2n-11/m∑_|k|=m, ||k||=m-1, g(k)=nm!/∏_i,jk_i,j!∏_i,ja_i,j^k_i,j,
where k=(k_i,j)_i,j, |k|=∑_i,jk_i,j, ||k|| = ∑_i,jj k_i,j and g(k) = ∑_i,ji k_i,j.
Note that in the particular case where the coefficients of Q verify a_0,j=0 for all j, one has m≤ n in the summation.
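To illustrate the formula, here is a small self-contained Python sketch of ours (the equation y=x+y^2 is chosen for the illustration and is not taken from <cit.>); its unique solution y_0=(1-√(1-4x))/2 has the Catalan numbers as coefficients.

from itertools import product
from math import factorial, prod
from fractions import Fraction

# support and coefficients of Q(x,y) = x + y^2
A = {(1, 0): Fraction(1), (0, 2): Fraction(1)}

def flajolet_soria_coeff(A, n):
    support = list(A)
    total = Fraction(0)
    for m in range(1, 2*n):                                               # m = 1, ..., 2n-1
        for ks in product(range(m + 1), repeat=len(support)):             # k = (k_{i,j})
            if sum(ks) != m:                                              # |k| = m
                continue
            if sum(j*k for (i, j), k in zip(support, ks)) != m - 1:       # ||k|| = m - 1
                continue
            if sum(i*k for (i, j), k in zip(support, ks)) != n:           # g(k) = n
                continue
            term = Fraction(factorial(m) // prod(factorial(k) for k in ks), m)
            for ij, k in zip(support, ks):
                term *= A[ij]**k
            total += term
    return total

print([flajolet_soria_coeff(A, n) for n in range(1, 7)])   # 1, 1, 2, 5, 14, 42: the Catalan numbers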
One can derive immediately from Theorems 3.5 and 3.6 in <cit.> a multivariate version of the Flajolet-Soria Formula in the case where Q(x,y)∈ K[x,y]. The purpose of the present section is to generalize the latter result to the case where Q(x,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y].
We will need a special version of Hensel's Lemma for multivariate power series elements of K((x_1^ℤ,…,x_r^ℤ))^grlex. Recall that the latter denotes the field of generalized series (K((X^ℤ^r))^grlex, w) where w is the graded lexicographic valuation as described in Section <ref>. Generalized series fields are known to be Henselian <cit.>. For the convenience of the reader, we give a short proof in our particular context.
We call strongly reduced Henselian equation any equation of the following type:
y=F(u,y) with F(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod,
such that w(F(u,y))>_grlex0 and F(u,0)≢0.
[Hensel's lemma]
Any strongly reduced Henselian equation admits a unique solution y_0= ∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex.
Let
y=F(u,y)
be a strongly reduced Henselian equation and let y_0=∑_n>_grlex0c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex. For n∈ℤ^r, n>_grlex0, let us denote z̃_n:= ∑_m<_grlexn c_mu^m.
We get started with the following key lemma:
The following are equivalent:
* a series y_0 is a solution of (<ref>);
* for any n∈ℤ^r, n>_grlex0,
w(z̃_n-F(u,z̃_n))=w(y_0-z̃_n);
* for any n∈ℤ^r, n>_grlex0,
w(z̃_n-F(u,z̃_n))≥_grlexn.
For n>_grlex0, let us denote ỹ_n:=y_0-z̃_n=∑_m≥_grlexn c_mu^m. We apply Taylor's Formula to G(u,y):=y-F(u,y) at z̃_n:
G(u,z̃_n+y) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))y +y^2H(u,y),
where H(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y] with w(H(u,y))>_grlex0. The series
y_0 is a solution of (<ref>) iff for any n, ỹ_n is a root of G(u,z̃_n+y)=0, i.e.:
z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))ỹ_n+ ỹ_n^2H(u,ỹ_n)=0.
Now consider y_0 a solution of (<ref>) and n∈ℤ^r, n>_grlex0. Either ỹ_n=0, i.e. y_0=z̃_n: (2) holds trivially. Or ỹ_n≠ 0, so we have:
n≤_grlex w((1-∂ F/∂ y(u,z̃_n))ỹ_n) =w(ỹ_n)<_grlex 2w(ỹ_n)<_grlex w(ỹ_n^2H(u,ỹ_n)).
So we must have w(z̃_n-F(u,z̃_n))=w(ỹ_n).
Now, (2) ⇒ (3) since w(ỹ_n)≥_grlexn.
Finally, suppose that for any n, w(z̃_n-F(u,z̃_n))≥_grlexn. If y_0-F(u,y_0)≠ 0, denote n_0:= w(y_0-F(u,y_0)). For n>_grlexn_0, one has n_0=w(z̃_n-F(u,z̃_n))≥_grlexn. A contradiction.
Let us return to the proof of Theorem <ref>. Note that, if y_0 is a solution of (<ref>), then its support needs to be included in the monoid 𝒮 generated by the i's from the nonzero coefficients a_i,j of F(u,y). If not, consider the smallest index n for ≤_grlex which is not in 𝒮. Property (2) of Lemma <ref> gives a contradiction for this index.
𝒮 is a well-ordered subset of (ℤ^r)_≥_grlex0 by <cit.>.
Let us prove by transfinite induction on n∈𝒮 the existence and uniqueness of a sequence of series z̃_n as in the statement of the previous lemma. Suppose that for some n∈𝒮, we are given a series z̃_n with support included in 𝒮 and <_grlexn, such that w(z̃_n-F(u,z̃_n))≥_grlexn. Then by Taylor's formula as in the proof of the previous lemma, denoting by m the successor of n in 𝒮 for ≤_grlex:
G(u,z̃_m)=G(u,z̃_n+c_nu^n) =z̃_n-F(u,z̃_n)+(1-∂ F/∂ y(u,z̃_n))c_nu^n +c_n^2u^2nH(u,z̃_n).
Note that w(H(u,z̃_n))≥_grlex0 since w(z̃_n)>_grlex0 and w(F(u,y))>_grlex0.
Therefore, one has:
w(G(u,z̃_m))=w(z̃_m-F(u,z̃_m))≥_grlexm>_grlexn
if and only if c_n is equal to the coefficient of u^n in F(u,z̃_n). This determines z̃_m in a unique way as desired.
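The proof is effective: c_n is the coefficient of u^n in F(u,z̃_n). For r=1 this gives the following minimal Python/SymPy sketch (the equation y=u+uy+u^2y^2 is an illustrative choice of ours satisfying Definition <ref>).

import sympy as sp

u, y = sp.symbols('u y')
F = u + u*y + u**2*y**2      # a strongly reduced Henselian equation: w(F) > 0 and F(u,0) = u is nonzero

N = 8
z = sp.Integer(0)            # the partial sum z̃_n
for n in range(1, N + 1):
    cn = sp.expand(F.subs(y, z)).coeff(u, n)   # c_n = coefficient of u^n in F(u, z̃_n)
    z += cn * u**n

print(z)                                                  # u + u**2 + u**3 + 2*u**4 + ...
print(sp.expand(z - F.subs(y, z)).series(u, 0, N + 1))    # O(u**9): the truncation solves y = F(u,y) up to order N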
We prove now our generalized version of the Flajolet-Soria Formula <cit.>. Our proof, as the one in <cit.>, uses the classical Lagrange Inversion Formula in one variable. We will use Notation <ref>.
[Generalized multivariate Flajolet-Soria Formula]
Let y=F(u,y)=∑_i,ja_i,ju^iy^j be a strongly reduced Henselian equation. Define ι_0=(ι_0,1,…,ι_0,r) by: -ι_0,k:=min{0, i_k / a_i,j≠ 0, i = (i_1,…,i_k,…,i_r)}, k=1,…,r.
Then the coefficients c_n of the unique solution y_0=∑_n>_grlex0 c_nu^n∈ K((u_1^ℤ,…,u_r^ℤ))^grlex are given by:
c_n=∑_m=1^μ_n1/m∑_|M|=m, ||M||=m-1, g(M)=nm!/M!A^M
where μ_n is the greatest integer m such that there exists an M with |M|=m, ||M||=m-1 and g(M)=n. Moreover, for n=(n_1,…,n_r), μ_n≤∑_k=1^rλ_k n_k with:
λ_k={[ ∏_j=k+1^r-1(1+ι_0,j)+∏_j=1^r-1(1+ι_0,j) if k<r-1;; 1+∏_j=1^r-1(1+ι_0,j) if k=r-1;; ∏_j=1^r-1(1+ι_0,j) if k=r. ].
* In (<ref>), note that the second sum is finite. Indeed, let M=(m_i,j) be such that |M|=m, ||M||=m-1, g(M)=n. Since F∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], if i has a component negative enough, then a_i,j=0. On the other hand, since |M|=m and g(M)=n, the positive components of i are bounded.
* By <cit.>, 1/m·m!/M!∈ℕ. If we set m_j:=∑_im_i,j and N=(m_j)_j, then |N|=m, ||N||=m-1 and:
1/m·m!/M!= 1/m·m!/N!·N!/M!,
where N!/M! is a product of multinomial coefficients and 1/m·m!/N! is an integer again by <cit.>.
Thus, each c_n is the evaluation at the a_i,j's of a polynomial with coefficients in ℤ.
For a given strongly reduced Henselian equation y=F(u,y), one can expand:
f(u,y):=y/F(u,y)=∑_n≥ 1b_n(u)y^n ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]] with b_1≠ 0,
which admits a unique formal inverse in K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]]:
f̃(u,y)= ∑_m≥ 1d_m(u) y^m.
The Lagrange Inversion Theorem (see e.g. <cit.> with ℱ=K((u_1^ℤ,…,u_r^ℤ))^grlex and P=f(u,y)) applies: for any m, d_m(u) is equal to the coefficient of y^m-1 in [F(u,y)]^m, divided by m. Hence, according to the multinomial expansion of [F(u,y)]^m=[∑_i,ja_i,ju^iy^j]^m:
d_m(u)=1/m∑_|M|=m, ||M||=m-1m!/M!A^Mu^g(M).
Note that the powers n of u that appear in d_m are nonzero elements of the monoid generated by the exponents i of the monomials u^iy^j appearing in F(u,y), so they are >_grlex0.
Now, it will suffice to show that, for any fixed n, the number ∑_k=1^rλ_k n_k is indeed a bound for the number μ_n of m's for which d_m can contribute to the coefficient of u^n. Indeed, this will show that f̃(u,y)∈ K[y]((u_1^ℤ,…,u_r^ℤ))^grlex. But, by definition of f̃, one has that:
f̃(u,y)=y F(u,f̃(u,y)) ∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[[y]].
Hence, both members of this equality are in fact in K[y]((u_1^ℤ,…,u_r^ℤ))^grlex.
So, for y=1, we get that f̃(u,1)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex is a solution with w(f̃(u,1))>_grlex0 of the equation: f(u,y)=y/F(u,y)=1 ⇔ y=F(u,y).
It is equal to the unique solution y_0 of Theorem <ref>:
y_0=f̃(u,1)= ∑_m≥ 1d_m(u).
We consider the relation:
g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1;; ⋮; ∑_i,jm_i,j i_r = n_r. ].
Let us decompose m=|M|=∑_i,jm_i,j as follows:
|M|=∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j.
So, the relation g(M)=n can be written as:
{[ ∑_|i|>0m_i,j i_1+∑_|i|=0, i_1>0m_i,j i_1 = n_1;; ⋮; ∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k = n_k;; ⋮; ∑_i,jm_i,j i_r = n_r. ].
Firstly, let us show by induction on k∈{0,…,r-1} that:
[ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_q=1^k-1[ι_0,k(∏_p=q+1^k-1(1+ι_0,p) + ∏_p=1^k-1(1+ι_0,p) )]n_q; +[1+ι_0,k∏_p=1^k-1(1+ι_0,p) ]n_k; +[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_k+1
+⋯+[ι_0,k∏_p=1^k-1(1+ι_0,p)]n_r , ]
the initial step k=0 being:
∑_|i|>0m_i,j≤ n_1+…+n_r.
This case k=0 follows directly from (<ref>), by summing its r relations:
∑_|i|>0m_i,j≤∑_|i|>0m_i,j|i|≤ n_1+…+n_r.
Suppose that we have the desired property until some rank k-1. Recall that for any i, i_k≥ -ι_0,k. By the k'th equation in (<ref>), we have:
[ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j ≤ ∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j i_k ][ ≤ n_k-(
∑_|i|>0m_i,j i_k+∑_|i|=0, i_1>0m_i,j i_k+⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j i_k); ≤ n_k+ι_0,k(
∑_|i|>0m_i,j +∑_|i|=0, i_1>0m_i,j +⋯+ ∑_|i|=0=i_1=⋯=i_k-2, i_k-1>0m_i,j). ]
We apply the induction hypothesis to these k sums and obtain an inequality of type:
∑_|i|=0=i_1=⋯=i_k-1, i_k>0m_i,j≤α_k,1 n_1+⋯+α_k,r n_r.
For q>k, let us compute:
[ α_k,q = ι_0,k( 1+ ι_0,1+ ι_0,2(1+ι_0,1)+ι_0,3(1+ι_0,1)(1+ι_0,2)+⋯ + ι_0,k-1∏_p=1^k-2(1+ι_0,p) ); = ι_0,k∏_p=1^k-1(1+ι_0,p). ]
For q=k, we have the same computation, plus the contribution of the isolated term n_k. Hence:
α_k,k=1+ι_0,k∏_p=1^k-1(1+ι_0,p).
For q<k, we have a part of the terms leading again by the same computation to the formula ι_0,k∏_p=1^k-1(1+ι_0,p). The other part consists of terms starting to appear at the rank q and whose sum can be computed as:
ι_0,k( 1+ ι_0,q+1+ ι_0,q+2(1+ι_0,q+1)+⋯ + ι_0,k-1∏_p=q+1^k-2(1+ι_0,p) )
= ι_0,k∏_p=q+1^k-1(1+ι_0,p).
So we obtain as desired:
α_k,q= ι_0,k[ ∏_p=q+1^k-1(1+ι_0,p)+ ∏_p=1^k-1(1+ι_0,p)].
Subsequently, we obtain an inequality for m=|M|=∑_i,jm_i,j of type:
[ m = ∑_|i|>0m_i,j+∑_|i|=0, i_1>0m_i,j+⋯+ ∑_|i|=0=i_1=⋯=i_r-2, i_r-1>0m_i,j; ≤ α_1 n_1+⋯ +α_r n_r, ]
with α_k= 1+∑_l=1^r-1α_l,k for any k. For k=r, let us compute in a similar way as before for α_k,q:
[ α_r = 1+ι_0,1+ι_0,2(1+ι_0,1)+⋯ +ι_0,k∏_p=1^k-1(1+ι_0,p)+⋯ +ι_0,r-1∏_p=1^r-2(1+ι_0,p); = ∏_p=1^r-1(1+ι_0,p)=λ_r. ]
For k=r-1, we have the same computation plus 1 coming from the term α_r-1,r-1. Hence:
α_r-1=1+ ∏_p=1^r-1(1+ι_0,p)=λ_r-1.
For k∈{1,…,r-2}, we have a part of the terms leading again by the same computation to the formula ∏_p=1^r-1(1+ι_0,p). The other part consists of terms starting to appear at the rank k and whose sum can be computed as:
1+ι_0,k+1+ι_0,k+2(1+ι_0,k+1)+⋯+ι_0,r-1∏_p=k+1^r-2(1+ι_0,p)=∏_p=k+1^r-1(1+ι_0,p)
Altogether, we obtain as desired:
α_k=∏_p=k+1^r-1(1+ι_0,p)+∏_p=1^r-1(1+ι_0,p)=λ_k.
* Note that for any k∈{1,…,r-1}, λ_k=λ_r(1/(1+ι_0,1)⋯(1+ι_0,k)+1), so λ_1≥λ_k>λ_r. Thus, we obtain that:
μ_n≤λ_1|n|.
Moreover, in the particular case where ι_0=0– i.e. when Q(x,y)∈ K[[x]][y] and y_0∈ K[[x]] as in <cit.>– we have λ_k=2 for k∈{1,…,r-1} and λ_r=1. Thus we obtain:
μ_n≤ 2|n|-n_r≤ 2|n|.
Note that :
|n| ≤ 2|n|-n_r≤ 2|n|
which can be related in this context with the effective bounds 2|n|-1 (case
w_x(Q(x,y))≥_grlex0) and |n| (case w_x(Q(x,y))>_grlex0) given in <cit.>.
* With the notation from Theorem <ref>, any strongly reduced Henselian equation y=Q(x,y) can be written:
x^ι_0y=Q̃(x,y)with Q̃(x,y)∈ K[[x]][y] and w_x(Q̃(x,y))>_grlexι_0.
Any element n of Supp y_0, being in the monoid 𝒮 of the proof of Theorem <ref>, is of the form:
n=m-k ι_0 with m∈ℕ^r, k∈ℕ and k |ι_0|≤ |m|.
Let us consider the following example of strongly reduced Henselian equation:
[ y = a_1,-1,2x_1x_2^-1 y^2 + a_-1,2,0x_1^-1x_2^2 +a_0,1,1x_2y+ a_-1,3,0x_1^-1x_2^3 +a_0,2,1x_2^2y; +(a_1, 1, 0+ a_1,1,2y^2)x_1 x_2 +a_1,2,0 x_1x_2^2+a_2,1,1yx_1^2x_2; + a_1,3,0 x_1x_2^3 +a_2,2,1 yx_1^2x_2^2+a_3,1,2y^2x_1^3x_2. ]
The support of the solution is included in the monoid 𝒮 generated by the exponents of (x_1,x_2), which is equal to the pairs n=(n_1,n_2)∈ℤ^2 with n_2=-n_1+ l and n_1≥ -l for l∈ℕ. We have ι_0=(1,1), so (λ_1,λ_2)=(3,2) and μ_n≤ 3n_1+2n_2=n_1+2l. We are in position to compute the first coefficients of the unique solution y_0. Let us give the details for the computation of the first terms, for l=0. In this case, to compute c_n_1,-n_1, n_1>0, we consider m such that 1≤ m≤μ_n_1,-n_1≤ n_1, and M=(m_i,j)_i,j such that:
{[ |M|=m ⇔ ∑_i,jm_i,j=m≤ n_1;; ||M||=m-1 ⇔ ∑_i,jm_i,jj=m-1≤ n_1-1;; g(M)=n ⇔ {[ ∑_i,jm_i,j i_1 = n_1>0;; ∑_i,jm_i,j i_2 = -n_1<0. ]. ].
The last condition implies that m_1,-1,2≥ n_1.
But, according to the second condition, this gives n_1-1≥||M||≥ 2 m_1,-1,2≥ 2 n_1, a contradiction. Hence, c_n_1,-n_1=0 for any n_1>0.
In the case l=1, we consider the corresponding conditions to compute c_n_1,-n_1+1 for n_1≥ -1. We obtain that 1≤ m≤μ_n_1,-n_1+1≤ n_1+2. Suming the two conditions in g(M)=(n_1,-n_1+1), we get m_-1,2,0+m_0,1,1=1 and m_i,j=0 for any i such that i_1+i_2≥ 2. So we are left with the following linear system:
{[ (L_1) m_1,-1,2 + m_-1,2,0 + m_0,1,1 = m ≤ n_1+2; (L_2) 2 m_1,-1,2 + m_0,1,1 = m-1 ≤ n_1+1; (L_3) m_1,-1,2 - m_-1,2,0 = n_1; (L_4) -m_1,-1,2 + 2 m_-1,2,0 + m_0,1,1 = -n_1+1; ].
By comparing (L_2)-(L_3) and (L_1), we get that m=m-1-n_1, so n_1=-1. Consequently, by (L_1), m=1, and by (L_2), m_1,-1,2=m_0,1,1=0. Since m_-1,2,0+m_0,1,1=1, we obtain m_-1,2,0=1 which indeed gives the only solution. Finally, c_n_1,-n_1+1=0 for any n_1≥ 0 and:
c_-1,2=1/11!/1!0!a_-1,2,0^1=a_-1,2,0.
Similarly, we claim that one can determine that:
[ c_-2,4 = 0, μ_n≤ 2;; c_-1,3 = a_-1,3,0+a_0,1,1a_-1,2,0+a_1,-1,2a_-1,2,0^2, μ_n≤ 3;; c_0,2 = 0, μ_n≤ 4;; c_1,1 = a_1,1,0, μ_n≤ 5;; c_n_1,-n_1+2 = 0 for n_1≥ 0, n_1≠ 1 μ_n≤ n_1+4;; c_n_1,-n_1+3 = 0 for -3≤ n_1≤ -2, μ_n≤ n_1+6;; c_-1,4 = a_0,2,1a_-1,2,0+a_0,1,1a_-1,3,0+2 a_1,-1,2a_-1,2,0a_-1,3,0; +a_0,1,1^2a_-1,2,0+3 a_0,1,1a_1,-1,2a_-1,2,0^2+2 a_1,-1,2^2a_-1,2,0^3, μ_n≤ 5;; ⋮ ]
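These first values can be cross-checked by brute force (an illustrative Python/SymPy computation of ours, based on substituting the right-hand side into itself rather than on Formula (<ref>)).

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
support = [(1, -1, 2), (-1, 2, 0), (0, 1, 1), (-1, 3, 0), (0, 2, 1), (1, 1, 0),
           (1, 1, 2), (1, 2, 0), (2, 1, 1), (1, 3, 0), (2, 2, 1), (3, 1, 2)]
a = {k: sp.Symbol('a(%d,%d,%d)' % k) for k in support}

def F(y):
    # right-hand side of the strongly reduced Henselian equation above
    return sum(c * x1**i * x2**j * y**m for (i, j, m), c in a.items())

def coeff(expr, e1, e2):
    # coefficient of x1**e1 * x2**e2 in an expanded Laurent polynomial
    out = 0
    for term in sp.Add.make_args(sp.expand(expr)):
        powers = term.as_powers_dict()
        if powers.get(x1, 0) == e1 and powers.get(x2, 0) == e2:
            out += term / (x1**e1 * x2**e2)
    return sp.expand(out)

y2 = F(F(0))              # two substitutions already determine the coefficients extracted below
print(coeff(y2, -1, 2))   # a(-1,2,0)
print(coeff(y2, -1, 3))   # a(-1,3,0) + a(0,1,1)*a(-1,2,0) + a(1,-1,2)*a(-1,2,0)**2
print(coeff(y2, 1, 1))    # a(1,1,0)

The three outputs agree with the values of c_-1,2, c_-1,3 and c_1,1 given above.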
§ CLOSED-FORM EXPRESSION OF AN ALGEBROID MULTIVARIATE SERIES.
The field K of coefficients has still characteristic zero. Our purpose is to determine the coefficients of an algebroid series in terms of the coefficients of a vanishing polynomial. We consider the following polynomial of degree in y bounded by d_y and satisfying the conditions (i) to (iii) of Lemma <ref>:
[ P(u,y) = ∑_i∈^r∑_j=0^d_ya_i,ju^iy^j , with P(u,y)∈ K[[u]][y]∖{0}; = ∑_i∈^rπ_i^P(y)u^i; = ∑_j=0^d_ya_j^P(u)y^j, ]
and a formal power series:
y_0=∑_n≥_grlex0c_ nu^n, with y_0∈ K[[u]], c_0≠ 0.
The field K((u)) is endowed with the graded lexicographic valuation w.
For any k∈ℕ^r and for any Q(u,y)=∑_j=0^da_j^Q(u)y^j∈ K((u_1^ℤ,…,u_r^ℤ))^grlex[y], we denote:
* S(k) the successor element of k in (ℕ^r,≤_grlex);
* w(Q):=min{w (a_j^Q(u)), j=0,..,d};
* For any k∈^r, z_k:=∑_n=0^kc_nu ^n;
* y_k:=y_0-z_k=∑_n≥_grlexS( k)c_nu^n;
* Q_k(u,y):=Q(u,z_k+u^S(k)y) =∑_i≥_grlexi_kπ^Q_k,i(y)u^i where i_k:=w( Q_k).
Note that the sequence (i_k)_k∈ℕ^r is nondecreasing since Q_S(k)(u,y)=Q_k(u,c_S(k)+u^ny) for n=S^2(k)-S(k)>_grlex0, n∈ℤ^r.
As for the algebraic case <cit.>, we consider y_0 solution of the equation P=0 via an adaptation in several variables of the algorithmic method of Newton-Puiseux, also with two stages:
* a first stage of separation of the solutions, which illustrates the following fact: y_0 may share an initial part with other roots of P. But, if y_0 is a simple root of P, this step concerns only finitely many of the first terms of y_0 since w(∂ P/∂ y (u,y_0)) is finite.
* a second stage of unique "automatic" resolution: for y_0 a simple root of P, once it has been separated from the other solutions, we will show that the remaining part of y_0 is a root of a strongly reduced Henselian equation, in the sense of Definition <ref>, naturally derived from P and an initial part of y_0.
(i) The series y_0 is a root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r where i_k:=w( P_k) is strictly increasing.
(ii) The series y_0 is a simple root of P(u,y) if and only if the sequence (i_k)_k∈ℕ^r is strictly increasing and there exists a lowest multi-index k_0 such that i_S(k_0)=i_k_0-S(k_0)+S^2(k_0). In that case, one has that i_S(k)=i_k-S(k)+S^2(k)=i_k_0-S(k_0)+S^2(k) for any k≥_grlexk_0.
(i) Note that for any k∈ℕ^r, i_k≤_grlex w(P_k(u,0))=w(P(u,z_k)). Hence, if the sequence (i_k)_k∈ℕ^r is strictly increasing in (ℕ^r,≤_grlex), it tends to +∞ (i.e. ∀n∈ℕ^r, ∃k_0∈ℕ^r, ∀k≥_grlexk_0, i_k≥_grlexn), and so does w(P(u,z_k)). The series y_0 is indeed a root of P(u,y). Conversely, suppose that there exist k<_grlexl such that i_k≥_grlexi_l.
Since the sequence (i_n)_n∈ℕ^r is nondecreasing, one has that i_l≥i_k, so i_l=i_k.
We apply the multivariate Taylor's formula to P_j(u,y) for j>_grlexk:
[ P_j(u,y) = P_k(u,c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+c_ju^j-S(k)+u^S(j)-S(k)y); = ∑_i≥_grlexi_kπ^P_k,i(c_S(k)+ c_S^2(k)u^S^2(k)-S(k) +⋯+u^S(j)-S(k)y) u^i; = π^P_k,i_k(c_S(k))u^i_k+b_S(i_k)u^S(i_k)+ ⋯. ]
Note that b_S(i_k)= π^P_k,S(i_k)(c_S(k)) or b_S(i_k)= (π^P_k,i_k )'(c_S(k)) c_S^2(k)+π^P_k,S(i_k)(c_S(k)) depending on whether S(i_k)<_grlexi_k+S^2(k)-S(k) or S(i_k)=i_k+S^2(k)-S(k).
For j=l, we deduce that π^P_k,i_k(c_S(k))≠ 0. This implies that for any j>_grlexk, i_j=i_k
and w(P_j(u,0))=w(P(u,z_j))=i_k. Hence w(P(u,y_0))=i_k≠ +∞.
(ii) The series y_0 is a double root of P if and only if it is a root of P and ∂ P/∂ y. Let y_0 be a root of P. Let us expand the multivariate Taylor's formula (<ref>) for j=S(k):
[ [ P_S(k)(u,y) = π^P_k,i_k(c_S(k))u^i_k+ π^P_k,S(i_k)(c_S(k))u^S(i_k)+⋯; +[(π^P_k,i_k)'(c_S(k)) y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+⋯ + ]; [(π^P_k,i_k)”(c_S(k))/2 y^2+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k)) y+π^P_k,i_k+2(S^2(k)-S(k))(c_S(k))]u^i_k+2(S^2(k)-S(k))+⋯ ]
Note that if S(i_k)=i_k+S^2(k)-S(k), then there are no intermediary terms between the first one and the one with valuation i_k+S^2(k)-S(k).
We have by definition of P_k:
∂ P_k/∂ y(u,y)=u^S(k)(∂ P/∂ y)_k(u,y)=∑_i≥_grlexi_k(π^P_k,i)'(y)u^i
One has that π^P_k,i_k(y)≢0 and π^P_k,i_k(c_S(k))=0 (see the point (i) above), so (π^P_k,i_k)'(y)≢0. Thus:
w((∂ P/∂ y)_k)=i_k-S(k).
We perform the Taylor's expansion of (∂ P/∂ y)_S(k):
[ (∂ P/∂ y)_S(k)(u,y) = (∂ P/∂ y)_k(u,c_S(k)+u^S^2(k)-S(k)y); = ( π^P_k,i_k)'(c_S(k))u^i_k-S(k)+⋯; + [(π^P_k,i_k)”(c_S(k)) y+(π^P_k,i_k+S^2(k)-S(k))'(c_S(k))]u^i_k+S^2(k)-2S(k)+⋯. ]
By the point (i) applied to ∂ P/∂ y, if y_0 is a double root P, we must have (π^P_k,i_k)'(c_S(k))=0. Moreover, if π^P_k,i(c_S(k))≠ 0 for some i∈{S(i_k), … , i_k+S^2(k)-S(k)}, by Formula (<ref>) we would have i_S(k)≤_grlexi_k+S^2(k)-S(k) and even i_j≤_grlexi_k+S^2(k)-S(k) for every j>_grlexk according to Formula (<ref>): y_0 could not be a root of P. So, π^P_k,i(c_S(k))= 0 for i=S(i_k),..,i_k+S^2(k)-S(k), and, accordingly, i_S(k)>_grlexi_k+S^2(k)-S(k).
If y_0 is a simple root of P, from the point (i) and its proof there exists a lowest k_0 such that the sequence (i_k-S(k))_k∈ℕ^r is no longer strictly increasing, that is to say, such that (π^P_k_0,i_k_0)'(c_S(k_0))≠ 0. For any k≥_grlexk_0, we consider the Taylor's expansion of (∂ P/∂ y)_S(k)=(∂ P/∂ y)_k_0(c_S(k_0)+⋯+u^S^2(k)-S(k_0 )y):
[ (∂ P/∂ y)_S(k)(u,y) = (π^P_k_0,i_k_0)'(c_S(k_0))u^i_k_0-S(k_0)+⋯; +[(π^P_k_0,i_k_0)”(c_S(k_0))c_S^2(k_0)+(π^P_k_0, i_k_0+S^2(k_0)-S(k_0))' (c_S(k_0))]u^i_k_0+ S^2(k_0)-S(k_0) +⋯ ]
and we get that:
w(∂ P/∂ y(z_S(k),0) )=w((∂ P/∂ y)_S(k)(u,0))=w((∂ P/∂ y)_S(k))=i_k_0-S(k_0).
By Equation (<ref>), we obtain that w((∂ P/∂ y)_S(k))=i_S(k)-S^2(k). So, i_S(k)=i_k_0-S(k_0)+S^2(k). As every k>_grlexk_0 is the successor of some k'≥_grlexk_0, we get that for every k≥_grlexk_0, i_k-S(k)=i_k_0-S(k_0). So, finally, i_S(k)=i_k-S(k)+S^2(k) as desired.
Resuming the notations of Lemma <ref>, the multi-index k_0 represents the length of the initial part in the stage of separation of the solutions. In the following lemma, we bound it using the discriminant Δ_P of P (see just before Notation <ref>).
Let P(u,y) be a nonzero polynomial with deg_y(P)≤ d_y and with only simple roots. Let y_0=∑_n∈ℕ^rc_ nu^n, c_0≠ 0 be one of these roots.
The multi-index k_0 of Lemma <ref> verifies that:
|k_0|≤ord_u(Δ_P(u)).
By definition of k_0 and by Formula (<ref>), for any k≥_grlexk_0, w( ∂ P/∂ y(u,z_S(k)))=w(∂ P/∂ y(u,z_S(k_0)))=i_k_0-S(k_0). So, w(∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0))).
Moreover, by minimality of k_0, the sequence (i_k-S(k))_k is strictly increasing up to k_0, so by Formula (<ref>): w( ∂ P/∂ y(u,y_0))=w(∂ P/∂ y(u,z_S(k_0)))=w((∂ P/∂ y)_S(k_0)(u,0))≥_grlex w((∂ P/∂ y)_S(k_0))≥_grlexk_0.
So:
|k_0|≤|w( ∂ P/∂ y(u,y_0))|=ord_u∂ P/∂ y(u,y_0).
Since P has only simple roots, its discriminant Δ_P is nonzero and one has a Bezout identity:
A(u,y)P(u,y)+B(u,y)∂ P/∂ y(u,y)=Δ_P(u)
with A,B∈ K[[u]][y].
By evaluating this identity at y=y_0, we obtain that ord_u(∂ P/∂ y(u,y_0) )≤ord_u(Δ_P(u)), so |k_0|≤ord_u(Δ_P(u)) as desired.
Resuming Notation <ref> and the content of Lemma <ref>, we set: ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0)).
By Formula (<ref>), we note that (∂ P/∂ y)(u,y_0)=ω_0 u^i_k_0-S(k_0)+⋯.
Thus, ω_0 is the initial coefficient of (∂ P/∂ y)(u,y_0) with respect to ≤_grlex, hence ω_0≠ 0.
Consider the following nonzero polynomial in K[[u]][y] of degree in y bounded by d_y:
P(u,y)=∑_i∈^r∑_j=0^d_ya_i,ju^iy^j = ∑_i≥_grlex0π^P_i(y)u^i,
and a formal power series which is a simple root:
y_0=∑_n≥_grlex0c_nu^n ∈ K[[u]], c_0≠ 0.
Resuming Notations <ref> and <ref> and the content of Lemma <ref>, recall that
ω_0:=(π^P_k_0,i_k_0)'(c_S(k_0))≠ 0.
Then, for any k>_grlexk_0:
* either the polynomial z_S(k)=∑_n=0^S(k)c_nu^n is a solution of P(u,y)=0;
* or _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] defines a strongly reduced Henselian equation:
y= _kQ(u,y)
as in Definition <ref> and satisfied by:
t_S(k):=y_0-z_S(k)/u^S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯.
We show by induction on k∈(ℕ^r,≤_grlex), k>_grlexk_0, that _kR(u,y)=-y+ _kQ(u,y) with _kQ(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] is
such that w( _kQ(u,y)) >_grlex0. Let us apply Formula (<ref>) with parameter k=k_0. Since i_S(k_0)=i_k_0+S^2(k_0)-S(k_0), we have that π^P_k_0,i(c_S(k_0))=0 for i_k_0≤_grlexi<_grlexi_k_0+S^2(k_0)-S(k_0), and accordingly:
P_S(k_0)(u,y)=[ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))]u^i_k_0+S^2(k_0)-S(k_0)+ _S(k_0)T(u,y)
where _S(k_0)T(u,y)∈ K[[u]][y] with w( _S(k_0)T(u,y))>_grlexi_k_0+S^2(k_0)-S(k_0).
Since i_S^2(k_0)=i_k_0+S^3(k_0)-S(k_0)>_grlexi_k_0+S^2(k_0)-S(k_0), we obtain that: π^P_S(k_0),i_k_0+S^2(k_0)-S(k_0)(y)=ω_0 y+π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0)) vanishes at c_S^2(k_0), which implies that c_S^2(k_0)= -π^P_k_0,i_k_0+S^2(k_0)-S(k_0)(c_S(k_0))/ω_0. Computing _S(k_0)R(u,y), it follows that:
_S(k_0)R(u,y)=-y+ _S(k_0)Q(u,y),
with _S(k_0)Q(u,y)=_S(k_0)T(u,y +c_S^2(k_0))/-ω_0u^i_k_0+S^2(k_0)-S(k_0).
So _S(k_0)Q(u,y)∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] with w( _S(k_0)Q(u,y))>_grlex0.
Now suppose that the property holds true at a rank k≥_grlexS(k_0), which means that _kR(u,y):=P_k(u,y+c_S(k))/-ω_0u^i_k=-y+ _kQ(u,y). Therefore, for _kQ̌(u,y)=-ω_0 _kQ(u,y-c_S(k))∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y] which is such that w( _kQ̌(u, y)) >_grlex0, we can write:
[ P_k(u,y) = ω_0(y-c_S(k))u^i_k+ u^i_k· _kQ̌(u,y); = π^P_k,i_k(y)u^i_k+π^P_k,S(i_k)(y)u^S(i_k)+ ⋯. ]
Since P_S(k)(u,y)= P_k(u,c_S(k)+u^S^2(k)-S(k)y) and i_S(k)=i_k+S^2(k)-S(k) by Lemma <ref>, we have that:
P_S(k)(u,y)=[ω_0 y+π^P_k,i_k+S^2(k)-S(k)(c_S(k))]u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯.
But, again by Lemma <ref>, i_S^2(k)=i_S(k)+S^3(k)-S^2(k) >_grlexi_S(k)=i_k+S^2(k)-S(k). So we must have π^P_S(k),i_S(k)(c_S^2(k))=0, i.e. c_S^2(k)=-π^P_k,i_k+S^2(k)-S(k)(c_S(k))/ω_0. It follows that:
P_S(k)(u,y)=ω_0(y-c_S^2(k))u^i_k+S^2(k)-S(k)+π^P_S(k),S(i_S(k))(y)u^S(i_S(k))+⋯,
Since, by definition, _S(k)R(u,y):=P_S(k)(u,y+c_S^2(k))/-ω_0u^i_S(k)=-y+ _S(k)Q(u,y), we get that:
[ _S(k)R(u,y) = -y-
π^P_S(k),S(i_S(k))(y+c_S^2(k))/ω_0u^S(i_S(k))-i_S(k)+ ⋯; = -y+ _S(k)Q(u,y), _S(k)Q∈ K((u_1^ℤ,…,u_r^ℤ))^grlex_Mod[y], ]
with w( _S(k)Q(u,y)) >_grlex0 as desired.
To conclude the proof, it suffices to note that the equation _kR(u,y)=0 is strongly reduced Henselian if and only if _kQ(u,0)≢0, which is equivalent to z_S(k) not being a root of P.
We will need the following lemma:
Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree deg_y(P)≤ d_y with only simple roots. Assume that y_0, y_1∈ K[[u]] are two distinct roots. One has that:
ord_u (y_0-y_1)≤ord_u(Δ_P(u)).
Note that the hypotheses imply that d_y≥ 2. Let us write y_1-y_0=δ_1,0 and k:=w(y_1-y_0)=w(δ_1,0)∈ℕ^r. By Taylor's Formula, we have:
[ P(u,y_0+δ_1,0) = 0; = P(u,y_0)+∂ P/∂ y(u,y_0) δ_1,0+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y; = δ_1,0(∂ P/∂ y(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-1). ]
Since δ_1,0≠ 0 and ∂ P/∂ y(u,y_0)≠ 0, one has that:
∂ P/∂ y(u,y_0)=-δ_1,0(1/2∂^2 P/∂ y^2(u,y_0)+⋯+1/d_y!∂^d_y P/∂ y^d_y(u,y_0)δ_1,0^d_y-2)
The valuation of the right hand side being at least k, we obtain that:
w(∂ P/∂ y(u,y_0))≥_grlexk.
But, by Lemma <ref>, we must have ord_u(∂ P/∂ y(u,y_0))≤ord_u(Δ_P(u)). So |k|≤ord_u(Δ_P(u)).
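For r=1, the bound of the lemma can be observed on a toy example (an illustrative Python/SymPy check of ours).

import sympy as sp

u, y = sp.symbols('u y')
P = (y - 1)**2 - u**2         # simple roots 1 + u and 1 - u, which share the initial coefficient c_0 = 1
print(sp.discriminant(P, y))  # 4*u**2, so ord_u(Delta_P) = 2
print(sp.solve(P, y))         # the two roots 1 - u and 1 + u
# ord_u(y_0 - y_1) = ord_u(2*u) = 1 <= 2 = ord_u(Delta_P), in accordance with the lemma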
For the courageous reader, in the case where y_0 is a series which is not a polynomial, we deduce from Theorem <ref> and from the generalized Flajolet-Soria's Formula <ref> a closed-form expression for the coefficients of y_0 in terms of the coefficients a_i,j of P and of the coefficients of an initial part z_k of y_0 sufficiently large, in particular for any k∈ℕ^r such that |k|≥ord_u(Δ_P(u))+1. Recall that i_k=w( P_k(u,y)). Note that for such a k, since y_0 is not a polynomial, by Lemma <ref>, z_S(k) cannot be a root of P.
Let P(u,y)∈ K[[u]][y]∖{0} be a polynomial of degree deg_y(P)≤ d_y with only simple roots.
Let k∈ℕ^r be such that |k|≥ord_u(Δ_P(u))+1. For any p>_grlex S(k), consider n:=p-S(k). Then:
c_p=c_S(k)+n=∑_q=1^μ_n1/q(-1/ω_0)^q∑_|S|=q, ||S||≥ q-1A^S(∑_|T_S|=||S||-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S),
where μ_n is as in Theorem <ref> for the equation y= _kQ(u,y) of Theorem <ref>, S=(s_i,j)_i∈^r, j=0,…,d_y with finite support, and as in Notation <ref>, A^S=∏_i, ja_i,j^s_i,j, T_S=(t_S,i), C^T_S=∏_i=0^S(k)c_i^t_S,i,
and e_T_S∈ℕ is of the form:
e_T_S=
∑_(n^l,m_i,j,L)q!/∏_l =S(i_k)-i_k,…, d_yS(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-in^l,m_i,j,L!∏_l=S(i_k)-i_k,…, d_y S(k)+(d_u,0,…,0)-i_k m=0,…,m_l∏_|i|=0,…,d_u j=m,…,d_y∏_|L|=j-m g(L)=l+i_k-mS(k)-i(j!/m! L!)^n^l,m_i,j,L,
where we denote m_l:=min{d_y, max{m∈ℕ / mS(k)≤_grlexl +i_k}},
L=L_i,j^l,m=(l_i,j,0^l,m,…,l_i,j,S(k)^l,m),
and where the sum is taken over the set of tuples
(n^l,m_i,j,L)_l= S(i_k)-i_k,…,d_yS(k)+(d_u,0,…,0)-i_k, m=0,…,m_l |i|=0,…,d_u, j=m,…,d_y, |L|=j-m, g(L)=l+i_k-mS(k)-i such that:
∑_l,m∑_L n^l,m_i,j,L=s_i,j, ∑_l,m∑_i,j∑_Ln^l,m_i,j,L=q and ∑_l,m∑_i,j∑_Ln^l,m_i,j,LL= T_S.
Note that the coefficients e_T_S are indeed natural numbers, since they are sums of products of multinomial coefficients because ∑_l,m∑_i,j∑_L n^l,m_i,j,L=q and m+|L|=j. In fact, 1/qe_T_S∈ℕ by Remark <ref> as we will see along the proof.
We get started by computing the coefficients of ω_0u^i_k _kR, in order to get those of _kQ:
[ -ω_0u^i_k _kR = P_k(u, y+c_S(k)); = P(u,z_S(k)+u^S(k)y); = ∑_i∈^r , j=0,…,d_ya_i,ju^i(z_S(k)+u ^S(k)y)^j; = ∑_i∈^r , j=0,…,d_ya_i,ju^i∑_m=0^jj!/m! (j-m)!z_S(k)^j-mu^mS(k)y^m. ]
For L=(l_0,⋯,l_S(k)), we denote
C^L:=c_0^l_0⋯ c_S(k)^l_S(k). One has that:
z_S(k)^j-m=∑_|L|=j-m(j-m)!/L!C^Lu^g(L).
So:
-ω_0u^i_k _kR=∑_m=0^d_y∑_i∈^r j=m,…,d_ya_i,j∑_|L|=j-mj!/m! L!C^Lu^g(L) +mS(k)+i y^m.
We set l̂=g(L)+mS(k)+i; it satisfies l̂≥ mS(k). Thus:
-ω_0u^i_k _kR=∑_m=0,…,d_y ∑_l̂ ≥ mS(k)∑_i ≤ l̂- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l̂-mS(k)-ij!/m! L!C^Lu^l̂y^m.
Since _kR(u,y)=-y+ _kQ(u,y) with w( _kQ(u,y))>_grlex0, the coefficients of _kQ are obtained for l̂≥_grlexS(i_k).
We set l:=l̂-i_k
and m_l:=min{d_y, max{m∈ℕ / mS(k)≤l +i_k}}.
We obtain: _kQ(u,y)=∑_l ≥_grlex S(i_k)-i_k m=0,…,m_lb_l,mu^ly^m,
with:
b_l,m=-1/ω_0∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L.
According to Lemma <ref>, Theorem <ref> and Lemma <ref>, we are in position to apply the generalized Flajolet-Soria's Formula of Theorem <ref> in order to compute the coefficients of the solution t_S(k)=c_S^2(k)u^S^2(k)-S(k)+c_S^3(k)u^S^3(k)-S(k)+⋯. Thus, denoting B:=(b_l,m), Q:=(q_l,m) with finite support and B^Q:=∏_l,m b_l,m^q_l,m for l≥_grlexS(i_k)-i_k and m=0,…,m_l, we obtain for n>_grlex0:
c_S(k)+n=∑_q=1^μ_n1/q∑_|Q|=q, Q=q-1 , g(Q)=nq!/Q!B^Q.
As in Remark <ref> (1), the previous sum is finite, and as in Remark <ref> (2), we have 1/q·q!/Q!∈ℕ.
Let us compute:
[ [ b_l,m^q_l,m = (-1/ω_0)^q_l,m(∑_i ≤ l+i_k- mS(k) j=m,…,d_ya_i,j∑_|L|=j-m g(L)=l +i_k-mS(k)-ij!/m! L!C^L)^q_l,m; = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mq_l,m!/M_l,m!A^M_l,m∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)- ij!/m! L!C^L)^m^l,m_i,j ]; where M_l,m=(m^l,m_i,j) for i≤l+i_k- mS(k) , j=0,…,d_y and m^l,m_i,j=0 for j<m. ]
Note that, in the previous formula, (-ω_0)^q_l,mb_l,m^q_l,m is the evaluation at A and C of a polynomial with coefficients in ℕ. Since 1/q·q!/Q!∈ℕ, the expansion of (-ω_0)^q1/q·q!/Q!B^Q as a polynomial in A and C will only have natural numbers as coefficients.
Let us expand the expression ∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j.
For each (l,m,i,j), we enumerate the terms j!/m! L!C^L with h=1,…,α_i,j^l,m. Subsequently:
[ (∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j = (∑_h=1^α_i,j^l,mj!/m! L_i,j,h^l,m!C^L_i,j,h^l,m)^ m^l,m_i,j; = ∑_|N^l,m_i,j|=m^l,m_i,jm^l,m_i,j!/N^l,m_i,j!( ∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^ n^l,m_i,j,h) C^∑_h=1^α^l,m_i,j n^l,m_i,j,hL_i,j,h^l,m, ]
where N^l,m_i,j= (n^l,m_i,j,h)_h=1,…,α_i,j^l,m,
N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h!.
Denoting
H_l,m=(h^l,m_0,…,h^l,m_S(k)):= ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m,
one computes:
[ |H_l,m| = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,h|L_i,j,h^l,m|; = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(j-m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(j-m); = M_l,m-m q_l,m. ]
Likewise, one computes:
[ g(H_l,m) = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hg(L_i,j,h^l,m); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_h=1^α_i,j^l,m n^l,m_i,j,h)(l+i_k-mS(k)-i); = ∑_i ≤ l+i_k- mS(k) j=m,…,d_ym^l,m_i,j(l+i_k-mS(k)-i); = q_l,m[l+i_k-mS(k)]-g(M_l,m). ]
So, according to Formula (<ref>) and the new way of writing the expression
∏_i ≤ l+i_k- mS(k) j=m,…,d_y(∑_|L|=j-m g(L)=l+i_k-mS(k)-ij!/m! L!C^L)^m^l,m_i,j, we obtain:
[ b_l,m^q_l,m = (-1/ω_0)^q_l,m∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,m g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) d_H_l,mC^H_l,m; with d_H_l,m:=∑_(N^l,m_i,j)q_l,m!/∏_i ≤ l+i_k- mS(k) j=m,…,d_yN^l,m_i,j!∏_i ≤ l+i_k- mS(k) j=m,…,d_y∏_h=1^α_i,j^l,m(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h, ]
where the sum is taken over {(N^l,m_i,j)_i ≤ l+i_k- mS(k) j=m,…,d_y such that |N^l,m_i,j|=m^l,m_i,j and ∑_i ≤ l+i_k- mS(k) j=m,…,d_y∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m=H_l,m}.
Note that, if the latter set is empty, then d_H_l,m=0.
Recall that we consider Q:=(q_l,m) with finite support and such that |Q|=q, Q=q-1 and g(Q)=n. We deduce that:
[ B^Q = ∏_l ≥_grlexS(i_k)-i_k m=0,…,m_lb_l,m^q_l,m; = (-1/ω_0)^q∏_l,m[∑_|M_l,m|=q_l,mA^M_l,m∑_|H_l,m|=M_l,m-m q_l,mH_l,m=q_l,m(l+i_k-mS(k))-g(M_l,m)d_H_l,mC^H_l,m]. ]
Now, in order to expand the latter product of sums, we consider the corresponding sets:
𝒮_Q:={∑_l,mM_l,m / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k)}
and, for any S∈𝒮_Q,
ℋ_Q,S:={(H_l,m) / ∃ (M_l,m) s.t. |M_l,m|=q_l,m and ∀l,m, m^l,m_i,j=0 for j<m or i ≰ l+i_k- mS(k), .
. ∑_l,mM_l,m=S, |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m(l+i_k-mS(k))-g(M_l,m) /}
and
𝒯_Q,S:={∑_l,mH_l,m / (H_l,m)∈ℋ_Q,S}.
We have:
[ B^Q = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,S(∑_(H_l,m)∈ℋ_Q,S∑_l,mH_l,m=T_S∏_l,m d_H_l,m) C^T_S; = (-1/ω_0)^q∑_S∈𝒮_QA^S∑_T_S∈𝒯_Q,Se_Q,T_SC^T_S. ]
where : e_Q,T_S:= ∑_(N^l,m_i,j)∏_l,mq_l,m!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h
and where the previous sum is taken over:
ℰ_Q,T_S:={( N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y / ∀i,j, ∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=s_i,j, .
. ∀l,m, ∑_i,j|N^l,m_i,j|=q_l,m, and ∑_l,m∑_i, j∑_h=1^α_i,j^l,m n^l,m_i,j,hL_i,j,h^l,m =T_S}.
Note that, if the latter set is empty, then e_Q,T_S=0.
Observe that 1/qq!/Q!e_Q,T_S lies in ℕ as a coefficient of (-ω_0) ^q1/qq!/Q!B^Q as seen before.
Note also that, for any Q and for any S∈𝒮_Q, |S|=∑_l,mq_l,m=q and S≥∑_l,mmq_l,m=Q=q-1. Moreover, for any T_S∈𝒯_Q,S:
[ |T_S| = ∑_l,mM_l,m-m q_l,m; = S-Q; = S-q+1 ] and:
[ g(T_S) = ∑_l,mq_l,m(l+i_k-mS(k))-g(M_l,m); = g(Q)+|Q| i_k-Q S(k)-g(S); = n+q i_k-(q-1) S(k)-g(S). ] Let us show that:
[ ∑_|Q|=q, Q=q-1, g(Q)=nq!/Q!B^Q = (-1/ω_0)^q∑_|S|=q, S≥ q-1A^S∑_|T_S|=S-q+1 g(T_S)=n+qi_k-(q-1)S(k)-g(S)e_T_SC^T_S, ]
where e_T_S:=∑_(N^l,m_i,j)q!/∏_l,m∏_i,jN^l,m_i,j!∏_l,m∏_i,j∏_h(j!/m! L_i,j,h^l,m!)^n^l,m_i,j,h and where the sum is taken over
ℰ_T_S:={(N^l,m_i,j)_l ≥_grlexS(i_k)-i_k, m=0,…,m_li ≤ l+i_k- mS(k), j=m,…,d_y s.t. ∑_l,m∑_h n^l,m_i,j,h=s_i,j, ∑_l,m∑_i,j|N^l,m_i,j|=q.
. and ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m=T_S}.
Note that, if the latter set is empty, then e_T_S=0.
Recall that N^l,m_i,j!= ∏_h=1^α_i,j^l,m n^l,m_i,j,h! and that the L^l,m_i,j,h's enumerate the L's such that |L|=j-m and g(L)=l+i_k-m S(k)-i for given l,m,i,j.
Let us consider S and T_S such that |S|=q, S≥ q-1, |T_S|=S-q+1, g(T_S)=n+qi_k-(q-1)S(k)-g(S) and such that ℰ_T_S≠∅. Take an element ( n^l,m_i,j,h)∈ℰ_T_S. Define m^l,m_i,j:=∑_h=1^α_i,j^l,m n^l,m_i,j,h for each i, j, l, m with j≥ m, and m^l,m_i,j:=0 if j<m or i ≰ l+i_k- mS(k). Set M_l,m:=(m^l,m_i,j)_i,j for each l, m. So, ∑_l,mm^l,m_i,j=∑_l,m∑_h=1^α_i,j^l,m n^l,m_i,j,h=s_i,j, and S=∑_l,mM_l,m. Define q_l,m:=∑_i,jm^l,m_i,j=|M_l,m| for each l, m, and Q:=(q_l,m). Let us show that |Q|=q, g(Q)=n and Q=q-1. By definition of ℰ_T_S, |Q|:=∑_l,mq_l,m= ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h=q.
Recall that Q:=∑_l,mmq_l,m. We have:
[ |T_S|= |∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m|=S -q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h|L_i,j,h^l,m|=∑_i,jjs_i,j-q+1; ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(j-m)= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j j∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h- ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h= ∑_i,jjs_i,j-q+1; ⇔ ∑_i,j js_i,j-∑_l,mmq_l,m =∑_i,jjs_i,j-q+1; ⇔ Q=q-1. ]
Recall that g(Q):=∑_l,mq_l,ml. We have:
[ g(T_S)= g(∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hg(L_i,j,h^l,m)=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h(l+i_k -mS(k) -i)= n+q i_k -(q-1)S(k) - g(S); ⇔ ∑_l,ml∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h + i_k∑_l,m∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h - S(k) ∑_l,mm∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,h - ∑_i,ji∑_l,m∑_h=1^α_i,j^l,mn^l,m_i,j,h=n+q i_k -(q-1)S(k) -g(S); ⇔ ∑_l,mq_l,ml+q i_k-S(k) ∑_l,mm q_l,m-∑_i,j s_i,ji= n+q i_k -(q-1)S(k) -g(S); ⇔ g(Q)+q i_k-QS(k)-g(S)=n+q i_k -(q-1)S(k) -g(S). ]
Since Q=q-1, we deduce that g(Q)=n as desired. So, S∈𝒮_Q for Q as in the left-hand side of (<ref>).
Now, set H_l,m:=∑_i,j∑_h=1^α_i,j^l,mn^l,m_i,j,hL_i,j,h^l,m, so ∑_l,mH_l,m=T_S. Let us show that (H_l,m)∈ℋ_Q,S, which implies that T_S∈𝒯_Q,S as desired. The existence of (M_l,m) such that |M_l,m|=q_l,m and m^l,m_i,j=0 for j<m and ∑_l,mM_l,m=S follows by construction. Conditions |H_l,m|=M_l,m-m q_l,m and g(H_l,m)=q_l,m[l+i_k-mS(k)]-g(M_l,m) are obtained exactly as in (<ref>) and (<ref>). This shows that (n^l ,m_i,j,h) ∈ℰ_Q,T_S, so:
ℰ_T_S⊆_|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S.
The reverse inclusion holds trivially since |Q|=q, so:
ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S.
We deduce that:
e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S.
We conclude that any term occurring in the right-hand side of (<ref>) comes from a term of the left-hand side.
Conversely, for any Q as in the left-hand side of Formula (<ref>), S∈𝒮_Q and T_S∈𝒯_Q,S verify the following conditions:
|S|=q, S≥ q-1, |T_S|=S-q+1 , T_S=n+q i_k-(q-1)S(k)-g(S)
and
ℰ_T_S = _|Q|=q, g(Q)=n, Q=q-1ℰ_Q,T_S, e_T_S=∑_|Q|=q, g(Q)=n, Q=q-1q!/Q! e_Q,T_S.
Hence, any term occurring in the expansion of B^Q contributes to the right-hand side of Formula (<ref>).
Thus we obtain Formula (<ref>) from which the statement of Corollary <ref> follows. Note also that:
1/qe_T_S=∑_|Q|=q, g(Q)=n, Q=q-11/qq!/Q! e_Q,T_S,
so 1/qe_T_S∈.
We have seen in Theorem <ref> and its proof (see Formula (<ref>) with k=k_0) that ω_0=(π^P_k_0,i_k_0)'(c_S(k_0)) is the coefficient of the monomial u^i_S(k_0)y in the expansion of P_S(k_0)(u,y)=P(u,c_0u_r+⋯+c_S(k_0)u^S(k_0)+ u^S^2(k_0)y), and that c_S^2(k_0)=-π^P_k_0,i_S(k_0)(c_S(k_0))/ω_0 where π^P_k_0,i_S(k_0)(c_S(k_0)) is the coefficient of u^i_S(k_0) in the expansion of P_S(k_0)(u,y). Expanding P_S(k_0)(u,y), having done the whole computations, we deduce that:
{[ ω_0 = ∑_i ≤ l+i_k- mS(k), j=1,..,d_y ∑_|L|=j-1, g(L)=i_k_0-S(k_0)-ij!/L!a_i,jC^L ;; c_S^2(k_0) = -1/ω_0∑_i ≤ l+i_k- mS(k), j=0,..,d_y ∑_|L|=j, g(L)=i_S(k_0)-i j!/L!a_i,jC ^L, ].
where C:=(c_0,…,c_S(k_0)) and L:=(l_0,…,l_S(k_0)).
|
http://arxiv.org/abs/2307.05425v2 | 20230711164741 | Axions and Cosmic Magnetic Fields | [
"George B. Field",
"Sean M. Carroll"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
|
http://arxiv.org/abs/2307.04718v1 | 20230710173013 | On the randomized Euler algorithm under inexact information | [
"Marcin Baranek",
"Andrzej Kałuża",
"Paweł M. Morkisz",
"Paweł Przybyłowicz",
"Michał Sobieraj"
] | math.NA | [
"math.NA",
"cs.NA"
] |
On the randomized Euler algorithm under inexact information
Marcin Baranek
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Andrzej Kałuża
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Paweł M. Morkisz
NVIDIA Corp. and AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Paweł Przybyłowicz
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected], corresponding author
Michał Sobieraj
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
This paper focuses on analyzing the error of the randomized Euler algorithm when only noisy information about the coefficients of the underlying stochastic differential equation (SDE) and the driving Wiener process is available. Two classes of disturbed Wiener processes are considered, and the dependence of the algorithm's error on the regularity of the disturbing functions is investigated. The paper also presents results from numerical experiments to support the theoretical findings.
Key words: stochastic differential equations, randomized Euler algorithm, inexact information, Wiener process, lower bounds, optimality
MSC 2010: 65C30, 68Q25
§ INTRODUCTION
We investigate the strong approximation of solutions of the following SDEs
{[ X(t) = a(t,X(t)) t + b(t,X(t)) W(t), t∈ [0,T],; X(0)=η, ].
where T >0, W is an m-dimensional Wiener process,
and η∈ℝ^d. Our analysis is performed under the assumption that only standard noisy information about (a,b,W) is available. This means that we have access to a,b,W only through its inexact values at finite number of discretization points.
Our interest lies in approximating the values of X(T) using the inexact information about the coefficients (a,b) and the driving Wiener process W. We consider algorithms that are based on values of a,b, and W corrupted by noise. This noise can arise from measurement errors, rounding procedures, etc. The inspiration for considering such inexact information comes from various sources, such as numerically solving SDEs on GPUs and understanding the impact of low precision in computations (when switching from double to float and half, see <cit.>), as well as modeling real-world phenomena that are described by SDEs, such as energy demand/production forecasting (where exact information is rarely available).
The study of inexact information has been explored in the literature for various problems, including function approximation and integration (<cit.>, <cit.>, <cit.>, <cit.>), approximate solving of ODEs (<cit.>) and PDEs (<cit.>, <cit.>); see also the related monograph <cit.>. In the context of stochastic integration and approximation of solutions of stochastic differential equations, inexact information about the integrands or coefficients of the underlying SDEs has been considered in <cit.>, <cit.>, <cit.>. However, it is important to note that in <cit.> and <cit.> the information considered about the process W was exact. We also refer to the article <cit.>, where noisy information induced by the approximation of normally distributed random variables is considered. However, the computational setting there (devoted to weak approximation of SDEs) is different from the one considered in this paper (established in the context of strong approximation of the solution X).
In this paper, we mainly extend the proof technique known from <cit.> and <cit.>. Namely, we cover the case when the information about the Wiener process W is also inexact. This assumption leads to a significant change in the proof technique. It allows us to investigate the error behavior for the randomized Euler scheme under inexact information about the tuple (a,b,W) with precision parameters δ_1,δ_2,δ_3∈ [0,1] for a,b,W, respectively. (See also, for example, <cit.>, <cit.>, <cit.>, <cit.>, where other randomized algorithms for approximation of solutions of ODEs and SDEs have been defined and investigated under exact information.) Roughly speaking, we show that the L^r(Ω)-error of the randomized Euler scheme, which uses O(n) noisy evaluations of (a,b,W), is O(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3), provided that the corrupting functions for W are sufficiently regular (see Theorem <ref> (i)). In the case of less regular corrupting functions for W (assuming only Hölder continuity) the error might increase due to the presence of informational noise (see Theorem <ref> (ii)).
The main contributions of this paper are as follows:
* Upper error bounds on the randomized Euler algorithm in two classes of corrupting functions for the Wiener process W (Theorem <ref>),
* Lower error bounds and optimality of the randomized Euler algorithm (Theorem <ref>),
* Results of numerical experiments that confirm our theoretical findings (Section 5).
The structure of the paper is as follows. Section 2 provides basic notions and definitions, along with a description of the computation model used when dealing with inexact information for drift and diffusion coefficients, as well as for the driving Wiener process. In Section 3, we analyze the upper bounds for the error of the randomized Euler algorithm. Lower bounds and some optimality results are stated in Section 4. Section 5 contains the results of numerical experiments conducted to validate our theoretical findings. Finally, the Appendix provides auxiliary results used in the paper.
§ PRELIMINARIES
We denote by ℕ={1,2,…}. Let W = {W(t)}_t≥ 0 be a standard m-dimensional Wiener process defined on a complete probability space (Ω,Σ, ℙ). By {Σ_t}_t≥ 0 we denote a filtration, satisfying the usual conditions, such that W is a Wiener process with respect to {Σ_t}_t≥ 0. We set Σ_∞=σ(⋃_t≥ 0Σ_t).
We denote by · the Frobenius norm in ℝ^m or ℝ^d× m respectively, where we treat a column vector in ℝ^m as a matrix of size m× 1. For x∈ℝ^m and α∈ℝ, by x·α or α· x we mean a componentwise scalar-by-vector multiplication and for a matrix y∈ℝ^d× m by y· x we mean a standard matrix-by-vector multiplication. For a sufficiently smooth function f:[0,T]×ℝ^m→ℝ we denote by ∂ f(t,y)/∂ y its gradient, while by ∂^2 f(t,y)/∂ y^2 its Hessian matrix of size m× m. Moreover, for a smooth function f:[0,T]×ℝ^m→ℝ^m we denote by ∂ f(t,y)/∂ y its Jacobi matrix of size m× m, computed also with respect to the space variable y. For r∈ [2,+∞) by the L^r(Ω)-norm, either for a random vector or a random matrix, we mean
‖ Y‖_r := (𝔼‖ Y‖^r)^1/r for Y:Ω→ℝ^m or Y:Ω→ℝ^d× m.
We also make us of the following second order differential operator
ℒ=∂/∂ t+1/2∑_k=1^m∂^2/∂ y_k^2.
We now define classes of drift and diffusion coefficients.
Let T>0, K>0, ϱ∈ (0,1]. A function a:[0,T]×ℝ^d →ℝ^d belongs to 𝒜_K if
* the mapping a:[0,T]×ℝ^d→ℝ^d is Borel measurable,
* for all t∈[0,T]
a(t,0)≤ K,
* for all t∈[0,T], x,y ∈ℝ^d
a(t,x) - a(t,y)≤ K x - y.
Note that if a∈𝒜_K then for all (t,y)∈ [0,T]×ℝ^d we have
a(t,y)≤ K(1+y).
A mapping b:[0,T]×ℝ^d→ℝ^d× m belongs to ℬ^ϱ_K if
* b is bounded in the origin (0,0),
b(0,0)≤ K,
* for all t∈[0,T], x,y ∈ℝ^d
b(t,x) - b(t,y) ≤ K x-y,
* for all t,s∈[0,T], x ∈ℝ^d
b(t,x) - b(s,x) ≤ K (1+x)· |t-s|^ϱ.
The above conditions imply that for all (t,y)∈ [0,T]×ℝ^d
b(t,y)≤K̅ (1+ y),
where K̅=K·max{1,T^ϱ}. We also consider the following class of initial values
𝒥_K={η∈ℝ^d | η≤ K}.
The class of all admissible tuples (a,b,η) is defined as
ℱ(ϱ,K)=𝒜_K×ℬ^ϱ_K×𝒥_K.
Let
δ_1,δ_2, δ_3,δ_4 ∈ [0,1].
We refer to δ_1, δ_2, δ_3, and δ_4 as precision parameters.
We now describe what we mean by corrupted values and information about a, b, W.
Let us set
𝒦^s = {p:[0,T]×ℝ^d→ℝ^d× s | p(·,·)-Borel measurable,
p(t,y)≤ 1+y for all t∈ [0,T], y∈ℝ^d},
for s∈{1,m}. The classes 𝒦^1, 𝒦^m are nonempty and contain constant functions. Let
V_c(γ)={c̃ | ∃_p_c∈𝒦^s: c̃=c+γ· p_c},
where c∈{a,b}, (γ,s)=(δ_1,1) if c=a and (γ,s)=(δ_2,m) if c=b. By ã and b̃ we mean any functions ã∈ V_a(δ_1) and b̃∈ V_b(δ_2), respectively. We have that {a}=V_a(0)⊂ V_a(δ_1)⊂ V_a(δ'_1) for 0≤δ_1≤δ'_1≤ 1 and {b}=V_b(0)⊂ V_b(δ_2)⊂ V_b(δ'_2) for 0≤δ_2≤δ'_2≤ 1.
In order to introduce perturbed information about the Wiener process W, we introduce the following classes of corrupting functions for W
𝒦_0={p:[0,T]×ℝ^m→ℝ^m | p^j∈ C^1,2([0,T]×ℝ^m;ℝ), |p^j(0,0)|≤ 1,
max{|∂ p^j/∂ t(t,y)|,∂ p^j/∂ y(t,y),∂^2 p^j/∂ y^2(t,y)}≤ 1
for all t∈ [0,T], y∈ℝ^m, j=1,2,…,m},
and
𝒦_α,β =
{
p:[0,T]×ℝ^m→ℝ^m
| ‖ p(t,x)-p(s, y)‖≤| t-s|^α
+ ‖ x-y‖^β,
for all t,s∈ [0,T], x,y∈ℝ^m}.
We consider the following classes of disturbed Wiener processes
𝒲_0(δ_3)={W̃ | ∃_p∈ 𝒦_0:∀_(t,ω)∈ [0,T]×Ω W̃(t,ω) = W(t,ω)+δ_3· p(t,W(t,ω))},
and
𝒲_α,β(δ_3)={W̃ | ∃_p∈𝒦
_α,β:∀_(t,ω)∈ [0,T]×Ω W̃(t,ω) = W(t,ω)+δ_3· p(t,W(t,ω))}.
We have that {W}=𝒲_0(0)⊂𝒲_0(δ_3)⊂𝒲_0(δ'_3) for 0≤δ_3≤δ'_3≤ 1, and similarly for 𝒲_α,β. As in <cit.> the classes defined above allow us to model the impact of regularity of noise on the error bound.
We assume that the algorithm is based on discrete noisy information about (a,b,W) and exact information about η. Hence, a vector of noisy information has the following form
𝒩(ã, b̃, W̃, η) = [ã (ξ_0,y_0),ã(ξ_1,y_1),…,ã(ξ_i_1-1,y_i_1-1),
b̃(t_0,z_0), b̃(t_1,z_1),…, b̃(t_i_1-1,z_i_1-1),
W̃(u_0), W̃(u_1),…,W̃(u_i_2-1),η],
where i_1,i_2∈ℕ and (ξ_0,ξ_1,…,ξ_i_1-1) is a random vector on (Ω,Σ,ℙ) which takes values in [0,T]^i_1. We assume that the σ-fields σ(ξ_0,ξ_1,…,ξ_i_1-1) and Σ_∞ are independent. Moreover, t_0,t_1,…,t_i_1-1∈ [0,T] and u_0,u_1,…,u_i_2-1∈ [0,T] are fixed time points. The evaluation points y_j, z_j for the spatial variables y,z of a(·,y) and b(·,z) can be computed in adaptive way with respect to (a,b,η) and W. Formally, it means that there exist Borel measurable mappings ψ_0:ℝ^i_2× m×ℝ^d→ℝ^2d, ψ_j:ℝ^d× j×ℝ^d× m× j×ℝ^i_2× m×ℝ^d→ℝ^2d, j=1,2,…,i_1-1, such that the successive points y_j,z_j are given as follows
(y_0,z_0)=ψ_0(W̃(u_0),W̃(u_1),…,W̃(u_i_2-1),η),
and for j=1,2,…, i_1-1
(y_j,z_j) = ψ_j(ã(ξ_0,y_0), ã(ξ_1,y_1),…, ã(ξ_j-1,y_j-1),
b̃(t_0,z_0), b̃(t_1,z_1),…, b̃(t_j-1,z_j-1),
W̃(u_0),W̃(u_1),…, W̃(u_i_2-1),η).
The total number of noisy evaluations of (a,b,W) is l = 2 i_1 + i_2.
The algorithm 𝒜 that uses the noisy information 𝒩(ã, b̃, W̃, η) and computes approximation of X(T) is defined as
𝒜(ã,b̃, W̃,η)=φ(𝒩(ã,b̃, W̃,η)),
for some Borel measurable function φ:ℝ^i_1× d×ℝ^i_1× d× m×ℝ^i_2× m×ℝ^d→ℝ^d. For a fixed n∈ℕ by Φ_n we denote a class of all algorithms (<ref>) for which the total number of evaluations l is at most n.
Let r∈ [2,+∞). The L^r(Ω)-error of 𝒜∈Φ_n for the fixed tuple (a,b,η)∈𝒢 is given by
e^(r)(𝒜,a,b,η,𝒲,δ_1,δ_2,δ_3)
=sup_(ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲(δ_3)X(a,b,W,η)(T)-𝒜(ã,b̃, W̃,η)_r,
where 𝒲∈{𝒲_0,𝒲_α,β} and 𝒢 is a subclass of ℱ(ϱ,K). The worst case error of 𝒜 in 𝒢 is
e^(r)(𝒜,𝒢,𝒲,δ_1,δ_2,δ_3)=sup_(a,b,η)∈𝒢 e^(r)(𝒜,a,b,η,𝒲,δ_1,δ_2,δ_3).
Finally, we look for (essentially) sharp bounds for the nth minimal error, defined as
e^(r)_n(𝒢,𝒲,δ_1,δ_2,δ_3)=inf_𝒜∈Φ_ne^(r)(𝒜,𝒢,𝒲,δ_1,δ_2,δ_3).
In (<ref>) we define the minimal possible error among all algorithms of the form (<ref>) that use at most n noisy evaluations of a,b and W . Our aim is to find possibly sharp bounds on the nth minimal error, i.e., lower and upper bounds which match up to constants. We are also interested in defining an algorithm for which the infimum in e^(r)_n(𝒢,𝒲,δ_1,δ_2,δ_3) is asymptotically attained. We call such an algorithm the optimal one.
Unless otherwise stated, all constants appearing in this paper (including those in the 'O', 'Ω', and 'Θ' notation) will only depend on the parameters of the class ℱ(ϱ,K), α,β and r. Furthermore, the same symbol may be used in order to denote different constants.
§ ERROR OF THE EULER SCHEME UNDER INEXACT INFORMATION
We investigate the error of the randomized Euler scheme in the case of inexact information about a, b, and the driving Wiener process W.
Fix n∈ℕ, t_i=iT/n for i=0,1,…,n. Let {ξ_i}_i=0^n-1 be independent random variables on (Ω,Σ,ℙ), such
that the σ-fields σ(ξ_0, ξ_1,…, ξ_n-1) and Σ_∞ are independent, with ξ_i being uniformly distributed on [t_i,t_i+1]. Let us fix (a,b,W)∈ℱ(ϱ,K) and take any (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲(δ_3), where 𝒲∈{𝒲_0,𝒲_α,β}. The randomized Euler scheme under inexact information is defined by taking
X̅^RE_n(0)=η,
and
X̅^RE_n(t_i+1) = X̅^RE_n(t_i) + ã(ξ_i, X̅^RE_n(t_i)) ·T/n + b̃(t_i, X̅^RE_n(t_i)) ·ΔW̃_i,
for i=0,1, …, n-1, where ΔW̃_i = W̃(t_i+1) - W̃(t_i). The randomized Euler algorithm 𝒜̅^RE_n is defined as
𝒜̅^RE_n(ã,b̃,W̃,η):=X̅^RE_n(T).
The informational cost of the randomized Euler algorithm is O(n) noisy evaluations of a,b,W. By X^RE_n we denote the randomized Euler algorithm X̅^RE_n under the case when information is exact, i.e., when δ_1=δ_2=δ_3=0.
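For illustration, the scheme above can be sketched in a few lines of Python; the interface below (callables a_tilde and b_tilde for the perturbed coefficients, W_tilde for the perturbed Wiener path, and a NumPy random generator rng for the randomization points) is our own assumption and not part of the formal model.

```python
import numpy as np

def randomized_euler(a_tilde, b_tilde, W_tilde, eta, T, n, rng):
    # Randomized Euler scheme under inexact information: returns an
    # approximation of X(T) from O(n) noisy evaluations of (a, b, W).
    h = T / n
    x = np.asarray(eta, dtype=float).copy()
    for i in range(n):
        t_i = i * h
        xi_i = rng.uniform(t_i, t_i + h)        # randomization point in [t_i, t_{i+1}]
        dW = W_tilde(t_i + h) - W_tilde(t_i)    # increment of the disturbed path
        x = x + a_tilde(xi_i, x) * h + b_tilde(t_i, x) @ dW
    return x
```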
Let 𝒢^n=σ(ξ_0,ξ_1,…,ξ_n-1) and Σ̃_t^n=σ(Σ_t∪𝒢^n), t≥ 0. Since the σ-fields Σ_∞ and 𝒢^n are independent, the process W is still the m-dimensional Wiener process on (Ω,Σ,ℙ) with respect to {Σ̃_t^n}_t≥ 0.
Let r∈ [2,+∞), ϱ∈ (0,1].
(i) There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
max_0≤ i≤ nX^RE_n(t_i)-X̅^RE_n(t_i)_r≤ C(δ_1+δ_2+δ_3).
(ii) Let α,β∈ (0,1]. There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K), α,β, and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
max_0≤ i≤ nX^RE_n(t_i)-X̅^RE_n(t_i)_r
≤ C(δ_1+δ_2+δ_3· n^1-γ)· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
where γ=min{α,β/2}.
Firstly, we prove (i). For W̃∈𝒲_0(δ_3) we have that
W̃(t)=W(t)+δ_3· Z(t),
with Z(t)=p_W(t,W(t)) and p_W∈𝒦_0. Then, by the Itô formula we get that
Z(t)=Z(0)+M(t)+V(t), t∈ [0,T],
where M(t)=[M^1(t),M^2(t),…, M^
m(t)], V(t)=[V^1(t),V^2(t),…, V^m(t)] and
V^j(t) = ∫_0^tℒp^j_W(z,W(z)) z,
M^j(t) = ∑_i=1^m∫_0^t∂ p^j_W/∂ y_i(z,W(z)) W^i(z),
for j=1,2,…,m. We stress that {V(t)}_t∈ [0,T] is a continuous process of bounded variation that is adapted to {Σ̃_t^n}_t≥ 0.
Moreover, since (M(t),Σ̃_t^n)_t∈ [0,T] is a martingale, Z is still a continuous semimartingale with respect to the extended filtration {Σ̃_t^n}_t≥ 0. In the sequel we will consider stochastic integrals, with respect to the semimartingales W and Z, of processes that are adapted to the filtration {Σ̃_t^n}_t≥ 0.
From (<ref>), (<ref>) for i=0,1,…,n we can write that
X̅^RE_n(t_i)=η+T/n∑_j=0^i-1ã(ξ_j,X̅^RE_n(t_j))+ ∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ W_j
+δ_3∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ Z_j,
and
X^RE_n(t_i)=η+T/n∑_j=0^i-1 a(ξ_j,X^RE_n(t_j))+ ∑_j=0^i-1 b(t_j, X^RE_n(t_j))·Δ W_j.
Therefore
e̅_i := X^RE_n(t_i)-X̅^RE_n(t_i)=∑_j=0^i-1∫_t_j^t_j+1(a(ξ_j, X^RE_n(t_j))-ã(ξ_j,X̅^RE_n(t_j))) s
+ ∑_j=0^i-1∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b̃(t_j,X̅^RE_n(t_j))) W(s)
+ (-δ_3)∑_j=0^i-1∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) Z(s)
= ∑_j=0^i-1(A_j+B_j+C_1,j+C_2,j+C_3,j),
where
A_j=∫_t_j^t_j+1(a(ξ_j,X^RE_n(t_j))-a(ξ_j,X̅^RE_n(t_j))) s
B_j=∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b(t_j,X̅^RE_n(t_j))) W(s)
C_1,j=(-δ_1)∫_t_j^t_j+1p_a(ξ_j,X̅^RE_n(t_j)) s
C_2,j=(-δ_2)∫_t_j^t_j+1p_b(t_j,X̅^RE_n(t_j)) W(s),
C_3,j=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) Z(s)
=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) M(s)+(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) V(s)
=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s)
+(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s.
Then for all i=0,1,…,n
e̅_i≤∑_j=0^i-1A_j+∑_j=0^i-1B_j+∑_j=0^i-1C_1,j+∑_j=0^i-1C_2,j+∑_j=0^i-1C_3,j,
where
∑_j=0^i-1C_3,j≤δ_3·(C^1_3,i+C^2_3,i),
with
C^1_3,i= ∑_j=0^i-1∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s),
C^2_3,i=∑_j=0^i-1∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s.
Hence, for i=0,1,…,n
e̅_i≤∑_j=0^i-1A_j+∑_j=0^i-1B_j+∑_j=0^n-1C_1,j
+max_1≤ i ≤ n∑_j=0^i-1C_2,j+δ_3·max_1≤ i ≤ n C^1_3,i+δ_3 · C^2_3,n,
and for all k=0,1,…,n
𝔼(max_0≤ i≤ ke̅_i^r)≤ c_r𝔼(∑_j=0^k-1A_j)^r+c_r𝔼(max_1≤ i≤ k∑_j=0^i-1B_j^r)+c_r𝔼(∑_j=0^n-1C_1,j)^r
+c_r𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)+c_rδ_3^r·𝔼(max_1≤ i ≤ n (C^1_3,i)^r)+c_rδ_3^r ·𝔼(C^2_3,n)^r.
By the Jensen inequality we have for k=0,1,…,n
(∑_j=0^k-1e̅_j)^r≤ k^r-1∑_j=0^k-1e_j^r≤ n^r-1∑_j=0^k-1e_j^r,
which implies that
(1/n∑_j=0^k-1e̅_j)^r≤1/n∑_j=0^k-1e_j^r.
Moreover
A_j≤KT/ne̅_j,
and hence
(∑_j=0^k-1A_j)^r≤ K^rT^r(1/n∑_j=0^k-1e̅_j)^r≤K^rT^r/n∑_j=0^k-1e̅_j^r.
It holds that (∑_j=0^k B_j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is a discrete-time martingale. To see this, let us denote M_k:=∑_j=0^k B_j. By the basic properties of the Itô integral we get for k=0,1,…,n-1 that σ ( M_k ) ⊂Σ̃^n_t_k+1,
𝔼(M_k+1 - M_k | Σ̃^n_t_k+1) = 𝔼(B_k+1 | Σ̃^n_t_k+1) = 0, k=0,1,…,n-2,
and for j=0,1,…,n-1
𝔼B_j^r≤ C(T/n)^r/2𝔼e̅_j^r<+∞.
Hence, by the Burkholder and Jensen inequalities we get for k=0,1,…,n
𝔼(max_1≤ i≤ k∑_j=0^i-1B_j^r)=𝔼(max_0≤ i≤ k-1∑_j=0^iB_j^r)≤ C_r^r𝔼(∑_j=0^k-1B_j^2)^r/2
≤ C_r^r k^r/2-1∑_j=0^k-1𝔼B_j^r≤C/n∑_j=0^k-1𝔼e̅_j^r.
From (<ref>), (<ref>), (<ref>), and the fact that e̅_0=0 we get for k=0,1,…,n that
𝔼(max_0≤ i≤ ke̅_i^r)≤C/n∑_j=0^k-1𝔼e̅_j^r+c_r R_n≤C/n∑_j=0^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R_n
=C/n∑_j=1^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R_n,
where
R_n=𝔼(∑_j=0^n-1C_1,j)^r+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)
+ δ_3^r·𝔼(max_1≤ i ≤ n (C^1_3,i)^r)+δ_3^r ·𝔼(C^2_3,n)^r.
By the discrete version of the Gronwall's lemma (see, for example, Lemma 2.1 in <cit.>) we get
𝔼(max_0≤ i≤ ne̅_i^r)≤ KR_n.
By the Jensen inequality and Lemma <ref> we get
𝔼(∑_j=0^n-1C_1,j)^r≤δ_1^r T^r n^-1∑_j=0^n-1𝔼p_a(ξ_j,X̅^RE_n(t_j))^r
≤ Cδ_1^r (1+max_0≤ j≤ n𝔼X̅^RE_n(t_j)^r)≤ K_1δ_1^r.
The process (∑_j=0^k C_2,j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is a discrete-time martingale - this can be justified in analogous way as for (∑_j=0^k B_j,Σ̃^n_t_k+1)_k=0,1,…,n-1. Hence, again by the Burkholder and Jensen inequalities, we obtain
𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)≤ C_r^r n^r/2-1∑_j=0^n-1𝔼C_2,j^r
≤ K_1 (1+max_0≤ j≤ n𝔼X̅^RE_n(t_j)^r)δ_2^r
≤ K_2δ_2^r.
Let us denote by
D_j=∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s),
then (∑_j=0^k D_j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is also a discrete-time martingale. Therefore, by the Burkholder and Jensen inequalities, we obtain
𝔼(max_1≤ i ≤ n (C^1_3,i)^r)=𝔼(max_1≤ i≤ n∑_j=0^i-1D_j^r)≤ C_r^r n^r/2-1∑_j=0^n-1𝔼D_j^r,
where, by (<ref>) and submultiplicativity of the Frobenius norm, we get
𝔼D_j^r=𝔼∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s)^r
≤ C(T/n)^r/2-1𝔼∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s))^r s
≤ C(T/n)^r/2-1𝔼∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))^r·∂ p_W/∂ y(s,W(s))^r s≤ K_3 n^-r/2,
for j=0,1,…,n-1. This implies that
𝔼(max_1≤ i ≤ n (C^1_3,i)^r)≤ K_4.
Finally, from (<ref>) and Lemma <ref>
𝔼(C^2_3,n)^r=𝔼(∑_j=0^n-1∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s)^r
≤ n^r-1∑_j=0^n-1𝔼∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s^r
≤ n^r-1∑_j=0^n-1𝔼(∫_t_j^t_j+1b̃
(t_j,X̅^RE_n(t_j))·ℒp_W(s,W(s)) s)^r
≤C/n∑_j=0^n-1𝔼b̃
(t_j,X̅^RE_n(t_j))^r≤ K_5.
Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) we obtain
𝔼(max_0≤ i≤ ne̅_i^r)≤ K_6(δ_1^r+δ_2^r+δ_3^r),
which proves (i).
We now show (ii). Let W̃∈𝒲_α,β(δ_3). Note that in this case Z might not be a semimartingale nor even a process of bounded variation. Hence, we have that
e̅_i = X^RE_n(t_i)-X̅^RE_n(t_i)=∑_j=0^i-1(A_j+B_j+C_1,j+C_2,j+C_3,j),
where
A_j=∫_t_j^t_j+1(a(ξ_j,X^RE_n(t_j))-a(ξ_j,X̅^RE_n(t_j))) s
B_j=∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b(t_j,X̅^RE_n(t_j))) W(s)
C_1,j=(-δ_1)∫_t_j^t_j+1p_a(ξ_j,X̅^RE_n(t_j)) s
C_2,j=(-δ_2)∫_t_j^t_j+1p_b(t_j,X̅^RE_n(t_j)) W(s),
and
C_3,j=(-δ_3)·b̃(t_j,X̅^RE_n(t_j))·Δ Z_j.
Using similar arguments as for the proof of (<ref>) we have
𝔼(max_0≤ i≤ ke̅_i^r)≤C/n∑_j=1^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R̅_n,
where this time we get from (<ref>), (<ref>), and Lemma <ref> that
R̅_n=𝔼(∑_j=0^n-1C_1,j)^r+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r) + 𝔼(max_1≤ i ≤ n∑_j=0^i-1C_3,j^r)
≤ C(δ_1^r+δ_2^r)(1+δ_3^r n^r(1-γ))· e^C(1+δ_3^r n^r(1-γ))+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_3,j^r).
Again, by the discrete version of the Gronwall's lemma we get
𝔼(max_0≤ i≤ ne̅_i^r)≤ KR̅_n.
Moreover,
max_1≤ i ≤ n∑_j=0^i-1C_3,j_r≤∑_j=0^n-1C_3,j_r≤∑_j=0^n-1C_3,j_r,
and, since X̅^RE_n(t_j) and Δ W_j are independent, we have by Lemma <ref>
C_3,j_r≤ Cδ_31+X̅^RE_n(t_j)_r·(T/n)^α+Δ W_j^β_r
≤ Cδ_3· n^-γ·(1+max_0≤ i ≤ nX̅^RE_n(t_i)_r)
≤ C δ_3 n^-γ· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
and
max_1≤ i ≤ n∑_j=0^i-1C_3,j_r≤ C δ_3 n^1-γ· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r).
From (<ref>), (<ref>), and (<ref>) we get the thesis of (i).
Let r∈ [2,+∞), ϱ∈ (0,1].
(i) There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤ C(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3).
(ii) Let α,β∈ (0,1]. There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤ Cn^-min{ϱ,1/2}
+C(δ_1+δ_2+δ_3· n^1-γ)· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
where γ=min{α,β/2}.
By Proposition 1 in <cit.> (for the case δ_1=δ_2=δ_3=0) and from Proposition <ref> (i) we get
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤X(a,b,W,η)(T)-X_n^RE(a,b,W,η)(T)_r
+X_n^RE(a,b,W,η)(T)-X̅_n^RE(ã,b̃,W̃,η)(T)_r≤ C_1n^-min{ϱ,1/2}+C_2(δ_1+δ_2+δ_3),
which implies (i). Similarly, by using the above error decomposition together with Proposition <ref> (ii) we obtain the thesis of (ii).
§ LOWER BOUNDS AND OPTIMALITY OF THE RANDOMIZED EULER ALGORITHM
In this section, we investigate lower error bounds for an arbitrary method (<ref>) from the class Φ_n. We focus only on the class 𝒲_0 of disturbed Wiener processes W̃. Essentially sharp lower bounds in the class 𝒲_α,β are left as an open problem. For some special cases we also show optimality of the randomized Euler algorithm 𝒜̅^RE_n.
The following result follows from Lemma 3 in <cit.> and Theorem <ref>.
Let r∈ [2,+∞), K∈ (0,+∞), ϱ∈ (0,1]. Then there exist C_1,C_2∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1] it holds
C_1(n^-min{ϱ,1/2}+δ_1+δ_2)≤e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,δ_3)≤C_2(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3).
In particular, the nth minimal error satisfies
e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,0)=Θ(n^-min{ϱ,1/2}+δ_1+δ_2),
and
e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,max{δ_1,δ_2})=Θ(n^-min{ϱ,1/2}+δ_1+δ_2),
as n→+∞, max{δ_1,δ_2}→ 0^+. In both cases (<ref>), (<ref>) an optimal algorithm is the randomized Euler algorithm 𝒜̅^RE_n.
In general we have a gap between the upper and lower bounds, and sharp bounds appear only in special cases (i.e., when δ_3=0 or δ_3=max{δ_1,δ_2}). However, in the particular case of the randomized Euler algorithm we have the following bounds for its worst-case error (the proof follows from Proposition 1 in <cit.>).
Let r∈ [2,+∞), K∈ (0,+∞), ϱ∈ (0,1]. Then for the randomized Euler algorithm 𝒜̅^RE_n it holds
e^(r)(𝒜̅^RE_n,ℱ(ϱ,K),𝒲_0,δ_1,δ_2,δ_3)=Θ(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3),
as n→+∞, max{δ_1,δ_2,δ_3}→ 0+.
§ NUMERICAL EXPERIMENTS
Let us consider the following linear SDE that describes the well-known multidimensional Black-Scholes model
{[ dX(t)=[ μ_1 X_1(t); μ_2 X_2(t); ⋮; μ_d X_d(t) ] dt+[ σ^1,1X_1(t) σ^1,2X_1(t) ⋯ σ^1,m X_1(t); σ^2,1X_2(t) σ^2,2X_2(t) ⋯ σ^2,m X_2(t); ⋮ ⋮ ⋱ ⋮; σ^d,1X_d(t) σ^d,2X_d(t) ⋯ σ^d,m X_d(t) ] dW(t), ; X(0)=x_0, t∈ [0,T], ].
where σ^i,j>0 for i∈{1,…, d}, j∈{1,…, m}, μ_i∈ℝ, x_0∈ℝ_+^d.
Functions a and b take the following forms
a(t,x)= (μ_1 x_1,…,μ_d x_d)^T,
b(t,x)=[ σ^1,1x_1 σ^1,2x_1 ⋯ σ^1,m x_1; σ^2,1x_2 σ^2,2x_2 ⋯ σ^2,m x_2; ⋮ ⋮ ⋱ ⋮; σ^d,1x_d σ^d,2x_d ⋯ σ^d,m x_d ].
The exact solution of problem (<ref>) has the following form
X_i (t)=X_i(0) ·exp((μ_i-1/2∑_j=1^m(σ^i,j)^2)t+∑_j=1^m σ^i,j W_j(t))
for i=1,…,d.
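For reference, the exact solution above can be evaluated coordinatewise as in the following sketch, where mu is the drift vector of shape (d,), sigma the volatility matrix of shape (d,m), and W_t the value W(t) of the driving Wiener process; the helper name is illustrative.

```python
import numpy as np

def black_scholes_exact(x0, mu, sigma, t, W_t):
    # Coordinatewise exact solution X_i(t) of the linear SDE above.
    drift = (mu - 0.5 * np.sum(sigma ** 2, axis=1)) * t
    return x0 * np.exp(drift + sigma @ W_t)
```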
To perform numerical experiments, we choose two examples.
Example 1
a(t,x)= [ 0.5 x_1 ,0.7 x_2; ] ^ T,
b(t,x)=[ 0.5 x_1 , 0.7 x_1 , 0.2 x_1; -0.5 x_2 , -0.7 x_2 , -0.2 x_2; ],
x_0 = (1,2)^T, T=1.
Example 2
a(t,x)=[ 0.5 x_1 ,0.7 x_2, 0.4 x_3; ] ^ T,
b(t,x)=[ 0.5 x_1 , 0.7 x_1 , 0.2 x_1; 0.1 x_2 , 0. x_2 , 0.013x_2; 0. x_3 , 0.75 x_3, 0.013 x_3; ],
x_0 = (1,0.1,0.4)^T, T=1.
We use the following estimator of the error ‖ X(T)-X̅_n^RE(T)‖_2:
ε_K(X̅^RE_n(T))=(1/K∑_j=1^K ‖ X_(j)(T)-X̅^RE_(j),n(T)‖^2)^1/2.
We also conduct numerical experiments for an equation for which the exact solution is unknown (Example 3). In this case, to estimate the error ‖ X(T)-X̅_n^RE(T)‖_2, the exact solution X(T) is approximated by X̅^RE_n computed under exact information for n=1310720=10·2^17, and then
ε_K(X̅^RE_n(T))=(1/K∑_j=1^K ‖X̅^RE_(j),1310720(T)-X̅^RE_(j),n(T)‖^2)^1/2.
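A possible Monte Carlo implementation of these estimators is sketched below; simulate_pair is an assumed user-supplied routine that, for one realization of the Wiener path and of the randomization, returns the reference value (the exact solution, or the fine randomized Euler approximation with n=1310720 in the case of Example 3) together with the output of the scheme under study.

```python
import numpy as np

def estimate_error(simulate_pair, K=20000):
    # eps_K = sqrt( (1/K) * sum_j || X_(j)(T) - X_bar_(j),n(T) ||^2 )
    total = 0.0
    for _ in range(K):
        x_ref, x_num = simulate_pair()
        total += np.sum((np.asarray(x_ref) - np.asarray(x_num)) ** 2)
    return np.sqrt(total / K)
```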
Example 3
a(t,x)= 0.5[ t sin (10 x_1); cos (7x_2) ] ^ T,
b(t,x)=[ tx_1 tx_2 sin(x_2); t cos (x_1) x_2 -x_1 ],
x_0 = (1,2)^T, T=1.
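For instance, the coefficients of Example 3 can be encoded as the following callables (a sketch; x is a length-2 array and the diffusion takes values in the 2×3 matrices):

```python
import numpy as np

def a3(t, x):
    return 0.5 * np.array([t * np.sin(10.0 * x[0]), np.cos(7.0 * x[1])])

def b3(t, x):
    return np.array([[t * x[0],         t * x[1], np.sin(x[1])],
                     [t * np.cos(x[0]), x[1],     -x[0]]])
```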
For Examples 1 - 3 we used K=20000.
§.§ Linear disturbing function
To obtain results in the numerical simulations related to Theorem <ref> (ii) we propose the following disturbing functions
p_a(t,x)= U_1 · a(t,x)
and
p_b(t,x)= U_2 · b(t,x),
where U_1 and U_2 are uniformly distributed over the interval [-1,1].
As a corrupting function for the Wiener process W in Examples 1 - 3 we take
p_w(t,x) =U_3· x,
where U_3 is a random variable with a uniform distribution over the interval [-1,1]. We also assume that U_1, U_2, U_3 are independent of W and 𝒢^n. We use these uniform distributions to better approximate the worst-case error setting.
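In a simulation, these linear disturbances can be realized as in the sketch below, where a, b, a sampled Wiener path W, and the precision parameters delta1, delta2, delta3 are assumed to be available; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(12345)
U1, U2, U3 = rng.uniform(-1.0, 1.0, size=3)    # drawn once, independently of W

def a_tilde(t, x):
    return a(t, x) + delta1 * U1 * a(t, x)      # p_a(t, x) = U1 * a(t, x)

def b_tilde(t, x):
    return b(t, x) + delta2 * U2 * b(t, x)      # p_b(t, x) = U2 * b(t, x)

def W_tilde(t):
    return W(t) + delta3 * U3 * W(t)            # p_w(t, x) = U3 * x, evaluated at x = W(t)
```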
§.§ Nonlinear disturbing function
To conduct illustrative numerical simulations, as per Theorem <ref> (ii), we propose the following disruptive functions
p_a(t,x)= U_1 · a(t,x)
and
p_b(t,x)= U_2 · b(t,x),
where U_1 and U_2 are random variables with a uniform distribution over the interval [-1,1].
As a corrupting function for the Wiener process, we consider
p_w, β(t,x) =U_3 ·sgn(sin(100‖ x‖))·|sin(100‖ x‖)|^β,
where U_3 is a random variable with a uniform distribution over the interval [-1,1]. We also assume that U_1, U_2, U_3 are independent of W and 𝒢^n.
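The nonlinear corruption above can be sketched analogously; beta∈(0,1] is the Hölder exponent, the disturbance is evaluated at x=W(t), and the resulting scalar is broadcast to every coordinate of the path.

```python
import numpy as np

def p_w_beta(x, U3, beta):
    # sgn(sin(100*||x||)) * |sin(100*||x||)|^beta, scaled by U3
    s = np.sin(100.0 * np.linalg.norm(x))
    return U3 * np.sign(s) * np.abs(s) ** beta

def W_tilde(t):
    # scalar disturbance, broadcast to every coordinate of W(t)
    return W(t) + delta3 * p_w_beta(W(t), U3, beta)
```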
In Figures <ref> and <ref>, we present results for (<ref>) as the disruptive function for the Wiener process in Example 3.
In this case, the logarithmic error exhibits exponential growth, necessitating the use of a doubly logarithmic y-axis. Notably, such error behavior was not observed for disruptive functions from the class 𝒦_0 for the Wiener process.
§ CONCLUSIONS
We have investigated the error and optimality of the randomized Euler scheme in the case when we have access only to noisy standard information about the coefficients a, b, and driving Wiener process W. We considered two classes of disturbed Wiener processes for which we derived upper error bounds for the randomized Euler algorithm. These bounds indicate that the error significantly depends on the regularity of the disturbing functions.
The numerical experiments demonstrate that beyond a certain value of n, which depends on the size of the disturbance, the error of the randomized Euler algorithm stabilizes, and increasing the number of discretization points n does not lead to a reduction in error.
One particularly interesting observation is depicted in Figure <ref>. When using function (<ref>) as a perturbation for the Wiener process with sufficiently high δ, we observe an exponential increase in error as n increases.
In future research, we plan to investigate the error of (multilevel) Monte Carlo method under inexact information for the weak approximation of solutions of SDEs.
§ APPENDIX
The proof of the following fact is straightforward and, therefore, omitted.
If p∈𝒦_0 then for all t∈ [0,T],x,y∈ℝ^m it holds
p(t,x)≤ m^1/2(1+x), p(t,x)-p(t,y)≤ m^1/2x-y,
∂ p/∂ y(t,y)≤ m^1/2,
ℒp(t,y)≤( 2m+m^2/2)^1/2.
In order to estimate absolute moments of X̅^RE_n(t_i), in the case when the disturbed Wiener process W̃ belongs to the class 𝒲_0(δ_3), we use the following time-continuous randomized Euler process
{[ X̃̅̃^RE_n(0) = η,; X̃̅̃^RE_n(t) = X̃̅̃^RE_n(t_i) + ã(ξ_i, X̃̅̃^RE_n(t_i)) · (t-t_i) + b̃(t_i, X̃̅̃^RE_n(t_i)) · (W̃(t)-W̃(t_i)) , ].
for t∈ [t_i,t_i+1], i=0,1, …, n-1, where W̃(t)= W(t)+δ_3· Z(t), Z(t)=p_W(t, W(t)), p_W∈𝒦_0. It is easy to see that X̃̅̃^RE_n(t_i)=X̅^RE_n(t_i) for i=0,1,…,n. Moreover, the process (X̃̅̃^RE_n(t))_t∈ [0,T] is adapted to (Σ̃^n_t)_t∈ [0,T], which can be shown by induction.
Let r∈ [2,+∞). There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
sup_0≤ t ≤ T𝔼X̃̅̃^RE_n(t)^r≤ C(1+δ_1^r+δ_2^r+δ_3^r)e^CT(1+δ_1^r+δ_2^r+δ_3^r).
We denote by
V̅_i=(ξ_i, X̃̅̃^RE_n(t_i)), U̅_i=(t_i, X̃̅̃^RE_n(t_i)).
Firstly, we show by induction that
max_0≤ i≤ n𝔼X̃̅̃^RE_n(t_i)^r<+∞.
Let us assume that there exists l∈{0,1,…,n-1} such that max_0≤ i≤ l𝔼X̃̅̃^RE_n(t_i)^r<+∞. (This obviously holds for l=0.) Due to the fact that σ(U̅_l)⊂Σ̃^n_t_l and, by (<ref>),
V(t_l+1)-V(t_l)≤∫_t_l^t_l+1ℒp_W(s,W(s)) s≤ c(m)· (t_l+1-t_l),
we get
𝔼X̃̅̃^RE_n(t_l+1)^r≤ C 𝔼X̃̅̃^RE_n(t_l)^r+C(T/n)^r·𝔼ã(U̅_l)^r
+ C𝔼b̃(U̅_l)^r·𝔼W(t_l+1)-W(t_l)^r
+ C δ_3𝔼∫_t_l^tb̃(U̅_l)∂ p_W/∂ y(s,W(s)) W(s)^r
+ δ_3𝔼(b̃(U̅_l)^r·V(t_l+1)-V(t_l)^r)≤ K(1+𝔼X̃̅̃^RE_n(t_l)^r) <∞.
Hence, max_0≤ i≤ l+1𝔼X̃̅̃^RE_n(t_i)^r=max{max_0≤ i≤ l𝔼X̃̅̃^RE_n(t_i)^r,𝔼X̃̅̃^RE_n(t_l+1)^r}<+∞ and the inductive step is completed. Hence, we have shown (<ref>). Moreover, by (<ref>) we get
sup_0≤ t≤ T𝔼X̃̅̃^RE_n(t)^r≤ C(1+max_0≤ i≤ n-1𝔼X̃̅̃^RE_n(t_i)^r )<+∞
The constant in (<ref>) depends on n. In the second part of the proof we will
show that we can obtain the bound (<ref>) with C that does not depend on n.
Let for t∈ [0,T]
ϕ_n(t)=∑_i=0^n-1ã(V̅_i)·1_(t_i,t_i+1](t),
ψ_n(t)=∑_i=0^n-1b̃(U̅_i)·1_(t_i,t_i+1](t).
Note that {ψ_n(t)}_t∈ [0,T] is {Σ̃^n_t}_t≥ 0-progressively measurable simple process. Hence, we have for all t∈ [0,T] that
X̃̅̃^RE_n(t)=η+Ã̅̃^RE_n(t)+B̃̅̃^RE_n(t)+C̃̅̃^RE_n(t),
where
Ã̅̃^RE_n(t)=∫_0^tϕ_n(s) s,
B̃̅̃^RE_n(t)=∫_0^tψ_n(s) W(s),
and
C̃̅̃^RE_n(t)=δ_3·∫_0^tψ_n(s) Z(s)
=δ_3·∫_0^t ψ_n(s)∂ p_W/∂ y(s,W(s)) W(s)+δ_3·∫_0^tψ_n(s)ℒp_W(s,W(s)) s.
and all above stochastic integrals are well-defined. Hence, we have for t∈ [0,T]
X̃̅̃^RE_n(t)=η+∫_0^t[ϕ_n(s)+δ_3·ψ_n(s)·ℒp_W(s,W(s))] s
+∫_0^tψ_n(s)·[I+δ_3·∂ p_W/∂ y(s,W(s))] W(s),
where I is an identity matrix of size m× m. Hence, by (<ref>), (<ref>)
𝔼X̃̅̃^RE_n(t)^r≤ C_1η^r+C_2𝔼∫_0^tϕ_n(s)^r s+C_3(1+δ_3^r)𝔼∫_0^tψ_n(s)^r s
≤ K_1·(1+η^r)· (1+δ_1^r+δ_2^r+δ_3^r)
+K_2·(1+δ_1^r+δ_2^r+δ_3^r)·∫_0^t∑_i=0^n-1𝔼X̃̅̃^RE_n(t_i)^r·1_(t_i,t_i+1](s) s
and therefore for all t∈ [0,T]
sup_0≤ u≤ t𝔼X̃̅̃^RE_n(u)^r≤ K_1·(1+η^r)· (1+δ_1^r+δ_2^r+δ_3^r)
+K_2· (1+δ_1^r+δ_2^r+δ_3^r)∫_0^tsup_0≤ u≤ s𝔼X̃̅̃^RE_n(u)^r s
where K_1 and K_2 depends only on the parameters of the class ℱ(ϱ,K) and r. Since the function [0,T]∋ t→sup_0≤ u≤ t𝔼X̃̅̃^RE_n(u)^r is bounded (by (<ref>)) and Borel measurable (as a nondecreasing function), by using the Gronwall's lemma we get (<ref>).
In the case of the class 𝒲_α,β(δ_3) we have the following absolute moments estimate for X̅_n^RE(t_i). The proof technique is different from the one used in the proof of Lemma <ref>, since for p_W∈𝒦_α,β the process Z(t)=p_W(t,W(t)) might not be a semimartingale nor a process of bounded variation.
Let r∈ [2,+∞). There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
𝔼[max_0≤ i ≤ nX̅_n^RE(t_i)^r]≤ C(1+δ_3^r n^r(1-γ))· e^C(1+δ_3^r n^r(1-γ)),
where γ=min{α,β/2}.
We have that for i=0,1,…,n we can write that
X̅^RE_n(t_i)=η+T/n∑_j=0^i-1ã(ξ_j,X̅^RE_n(t_j))+ ∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ W_j
+δ_3∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ Z_j,
and we have that
X̅_n^RE(t_i)≤ K+∑_j=0^i-1Ã_j+∑_j=0^i-1B̃_j+∑_j=0^i-1C̃_j,
where
Ã_j=T/nã(ξ_j,X̅^RE_n(t_j)),
B̃_j = b̃(t_j,X̅^RE_n(t_j))·Δ W_j,
C̃_j = δ_3·b̃(t_j,X̅^RE_n(t_j))·Δ Z_j.
Hence, for all k=0,1,…, n
𝔼(max_0≤ i≤ kX̅_n^RE(t_i)^r)≤ c_rK^r+c_r𝔼(∑_j=0^k-1Ã_j)^r
+c_r𝔼[max_1≤ i ≤ k∑_j=0^i-1B̃_j^r]+c_r𝔼(∑_j=0^k-1C̃_j)^r.
By Jensen inequality we have that
𝔼(∑_j=0^k-1Ã_j)^r≤ C_1+C_1/n∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
From Burkholder and Jensen inequality we obtain that
𝔼[max_1≤ i ≤ k∑_j=0^i-1B̃_j^r]≤ C_2+C_2/n∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
Finally, since X̅^RE_n(t_j) and Δ W_j are independent, and
Δ W_j^β_r≤ c (T/n)^β/2,
we get that
𝔼C̃_j^r≤K̅_1δ_3^r𝔼[(1+X̅^RE_n(t_j))^r·((T/n)^α+Δ W_j^β)^r]
≤K̅_2 δ_3^r (1+𝔼X̅^RE_n(t_j)^r)·(T/n)^α+Δ W_j^β_r^r
≤K̅_3δ_3^r n^-rγ+K̅_4δ^rn^-rγ𝔼X̅^RE_n(t_j)^r,
and hence
𝔼(∑_j=0^k-1C̃_j)^r≤ n^r-1∑_j=0^k-1𝔼C̃_j^r≤ C_3δ_3^r n^r(1-γ)
+C_4δ_3^r n^r(1-γ)-1∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
Combining (<ref>), (<ref>), (<ref>), (<ref>) we arrive at
𝔼(max_0≤ i≤ kX̅_n^RE(t_i)^r)≤ C_5(1+δ_3^rn^r(1-γ))
+C_6(δ_3^rn^r(1-γ)-1+n^-1)∑_j=1^k-1𝔼(max_0≤ i≤ jX̅_n^RE(t_i)^r).
By the discrete version of the Gronwall's lemma we get the thesis.
Acknowledgements
This research was realized as part of a joint research project between AGH UST and NVIDIA.
22
JenNeuen
A. Jentzen, A. Neuenkirch, A random Euler scheme for Carathéodory differential equations, J. Comp. and Appl. Math. 224 (2009), 346–359.
GSM2022
M. Giles, O. Sheridan-Methven, Analysis of nested Multilevel Monte Carlo using approximate normal random variables, SIAM J. Uncert. Quant., 10 (2022),
Hein1
S. Heinrich, Lower complexity bounds for parametric stochastic Itô integration, J. Math. Anal. Appl., 476 (2019), 177–195.
KaPl90
B. Kacewicz, L. Plaskota, On the minimal cost of approximating linear problems based on information with deterministic noise, Numer. Funct. Anal. and Optimiz. 11 (1990), 511-528.
KaPr16
B. Kacewicz, P. Przybyłowicz, On the optimal robust solution of IVPs with noisy information, Numer. Algor. 71 (2016), 505–518.
AKAPhD
A. Kałuża, Optimal algorithms for solving stochastic initial-value problems with jumps, PhD thesis, AGH University of Science and Technology, Kraków, 2020.
AKPMPP
A. Kałuża, P. M. Morkisz, P. Przybyłowicz, Optimal approximation of stochastic integrals in analytic noise model, Appl. Math. and Comput., 356 (2019), 74–91.
KRWU_0
R. Kruse, Y. Wu, Error analysis of randomized Runge-Kutta methods for differential equations
with time-irregular coefficients, Comput. Methods Appl. Math., 17 (2017), 479–498.
KRWU
R. Kruse, Y. Wu, A randomized Milstein method for stochastic differential equations with non-differentiable drift coefficients, Discrete Contin. Dyn. Syst. Ser B, 24 (2019), 3475–3502.
Mao11
X. Mao, Stochastic differential equations and applications 2nd edition, Woodhead Publishing, Cambridge, 2011.
milvic
M. Milanese, A. Vicino, Optimal estimation theory for dynamic systems with set membership uncertainty: an overview, Automatica 27 (1991), 997–1009.
MoPl16
P. M. Morkisz, L. Plaskota, Approximation of piecewise Hölder functions from inexact information, J. Complex. 32 (2016), 122–136.
PMPP14
P. M. Morkisz, P. Przybyłowicz,
Strong approximation of solutions of stochastic differential equations with time-irregular coefficients via randomized Euler algorithm, Appl. Numer. Math. 78 (2014), 80–94.
PMPP17
P. M. Morkisz, P. Przybyłowicz,
Optimal pointwise approximation of SDE's from inexact information, Journal of Computational and Applied Mathematics 324 (2017), 85–100.
PMPP19
P. M. Morkisz, P. Przybyłowicz, Randomized derivative-free Milstein algorithm for efficient approximation of solutions of SDEs under noisy information, J. Comput. Appl. Math. 383 (2021), 1–22.
NOV
E. Novak, Deterministic and Stochastic Error Bounds in Numerical Analysis, Lecture Notes in Mathematics, vol. 1349, New York, Springer–Verlag, 1988.
Pla96
L. Plaskota, Noisy Information and Computational Complexity,
Cambridge Univ. Press, Cambridge, 1996.
Pla14
L. Plaskota, Noisy information: optimality, complexity, tractability,
in Monte Carlo and quasi-Monte Carlo Methods 2012,
J. Dick, F.Y. Kuo, G.W. Peters, I.H. Sloan (Eds.), Springer 2013, 173–209.
Protter
P. Protter, Stochastic Integration and Differential Equations, second ed., Springer-Verlag Berlin Heidelberg, 2005.
TWW88
J.F. Traub, G.W. Wasilkowski, H. Woźniakowski,
Information-Based Complexity, Academic Press, New York, 1988.
Wer96
A.G. Werschulz, The complexity of definite elliptic problems with noisy data. J. Complex. 12 (1996), 440-473.
Wer97
A.G. Werschulz, The complexity of indefinite elliptic problems with noisy data. J. Complex. 13 (1997), 457-479.
|
http://arxiv.org/abs/2307.04661v1 | 20230710155909 | On the power of graph neural networks and the role of the activation function | [
"Sammy Khalife",
"Amitabh Basu"
] | cs.LG | [
"cs.LG"
] |
On the power of graph neural networks and the role of the activation function
Johns Hopkins University, Department of Applied Mathematics and Statistics
[email protected]
[email protected]
MSC 2020: 68T07, 68Q19, 05D10, 11J85
Amitabh Basu
August 12, 2023
In this article we present new results about the expressivity of Graph Neural Networks (GNNs). We prove that for any GNN with piecewise polynomial activations, whose architecture size does not grow with the graph input sizes, there exists a pair of non-isomorphic rooted trees of depth two such that the GNN cannot distinguish their root vertex up to an arbitrary number of iterations. The proof relies on tools from the algebra of symmetric polynomials. In contrast, it was already known that unbounded GNNs (those whose size is allowed to change with the graph sizes) with piecewise polynomial activations can distinguish these vertices in only two iterations. Our results imply a strict separation between bounded and unbounded size GNNs, answering an open question formulated by <cit.>.
We next prove that if one allows activations that are not piecewise polynomial, then in two iterations a single neuron perceptron can distinguish the root vertices of any pair of nonisomorphic trees of depth two (our results hold for activations like the sigmoid, the hyperbolic tangent and others). This shows how the power of graph neural networks can change drastically if one changes the activation function of the neural networks. The proof of this result utilizes the Lindemann-Weierstrass theorem from transcendental number theory.
§ INTRODUCTION
Graph Neural Networks (GNNs) form a popular framework for a variety of computational tasks involving network data, with applications ranging from analysis of social networks, structure and functionality of molecules in chemistry and biological applications, computational linguistics, simulations of physical systems, techniques to enhance optimization algorithms, to name a few. The interested reader can look at <cit.>, which is a small sample of a large and actively growing body of work.
Given the rise in importance of inference and learning problems involving graphs and the use of GNNs for these tasks, significant progress has been made in recent years to understand their computational capabilities. See the excellent recent survey <cit.> for an exposition of some aspects of this research. One direction of investigation is on their so-called separation power, which is the ability of GNNs to distinguish graphs with different structures. In this context, it becomes natural to compare their separation power to other standard computation models on graphs, such as different variants of the Weisfeiler-Lehman algorithm <cit.>, and the very closely related color refinement algorithm <cit.>. These investigations are naturally connected with descriptive complexity theory, especially to characterizations in terms of certain logics; see <cit.> for excellent introductions to these different connections. A closely related line of work is to investigate how well general functions on the space of graphs can be approximated using functions represented by GNNs; see <cit.> for a sample of work along these lines. Our work in this paper focuses on improving our understanding of the separation power of GNNs.
At a high level, the computational models of GNNs, Weisfeiler-Lehman/color refinement type algorithms and certain logics in descriptive complexity are intimately connected because they all fall under the paradigm of trying to discern something about the global structure of a graph from local neighborhood computations. Informally, these algorithms iteratively maintain a state (a.k.a. “color”) for each vertex of the graph and, in every iteration, the state of a vertex is updated by performing some predetermined set of operations on the set of current states of its neighbors (including itself). The different kinds of allowed states and allowed operations determine the computational paradigm. For instance, in GNNs, the states are typically vectors in some Euclidean space and the operations for updating the state are functions that can be represented by deep neural networks. As another example, in the color refinement algorithm, the states are multisets of some predetermined finite class of labels and the operations are set operations on these multisets. A natural question then arises: Given two of these models, which one is more powerful, or equivalently, can one of the models always simulate the other? Several mathematically precise answers to such questions have already been obtained. For instance, it has been proved independently by <cit.> and <cit.> that the color refinement algorithm precisely captures the expressiveness of GNNs in the sense that there is a GNN distinguishing two nodes of a graph (by assigning them different state vectors) if and only if color refinement assigns different multisets to these nodes. Such a characterization holds for unbounded GNNs, i.e., GNNs for which the underlying neural network sizes can grow with the size of the input graphs. This implies a characterization of the distinguishability of nodes by GNNs as being equivalent to what is known as Graded Modal Counting Logic (GC2); see <cit.> for some recent, quantitatively precise results in this direction.
Reviewing these equivalences in a recent survey <cit.>, Grohe emphasizes the fact that the above-mentioned equivalence between the separation power of GNNs and the color refinement algorithm has only been established for unbounded GNNs, whose neural network sizes are allowed to grow as a function of the size of the input graphs. Question 1 on his list of important open questions in this topic asks what happens if one considers bounded GNNs, i.e., GNNs where the size of the neural networks is fixed a priori and cannot change as a function of the size of the input graphs. Do bounded GNNs have the same separation power as unbounded GNNs and color refinement? We answer this question in the negative, by constructing, for any given bounded GNN, two non-isomorphic rooted trees of depth two such that their root nodes cannot be distinguished by the GNN. Interestingly, only the sizes of the trees depend on the GNN, but their depth does not. This result is stated formally in Theorem <ref> and it holds for bounded GNNs with piecewise polynomial activations (this includes, e.g., ReLU activations). We prove a second result that shows how the activation function dramatically impacts the expressivity of bounded size GNNs: if one allows activation functions that are not piecewise polynomial, all root nodes of rooted trees of depth two can be distinguished by a single neuron perceptron. This result is formally stated in Theorem <ref>.
The rest of this article is organized as follows. In Section <ref> we present the main definitions and formal statement of our results. In Section <ref> we give an overview of the proofs. Sections <ref> and <ref> fill in the technical details.
§ FORMAL STATEMENT OF RESULTS
We assume graphs to be finite, undirected, simple, and vertex-labelled:
a graph is a tuple G = (V(G),E(G),P_1(G),...,P_ℓ(G)) consisting of a finite vertex set V(G), a binary edge relation E(G) ⊂ V(G)^2 that is symmetric and irreflexive, and unary relations P_1(G),⋯ ,P_ℓ(G) ⊂ V(G) representing ℓ > 0 vertex labels. In the following, the number ℓ of labels, which we will also call colors, is assumed to be fixed and does not grow with the size of the input graphs. When there is no ambiguity about which graph G is being considered, N(v) denotes the set of neighbors of v in G, not including v, and | G | denotes the number of vertices of G. We use simple curly brackets {·} for sets and double curly brackets {{·}} for multisets. For a set X, | X| is its cardinality. When m is a positive integer, 𝔖_m is the set of permutations of {1, ⋯, m}.
Let m be a positive integer. A function f: ℝ^m→ℝ is piecewise polynomial iff there exist multivariate polynomials P_1, ⋯, P_r ∈ℝ[X_1, ⋯, X_m] such that for any x ∈ℝ^m, there exists i∈{1, ⋯, r} such that f(x)=P_i(x). The degree of a piecewise polynomial function f is 𝖽𝖾𝗀(f):=max{𝖽𝖾𝗀(P_1), ⋯, 𝖽𝖾𝗀(P_r)}. The number of polynomial pieces of a piecewise polynomial f is the smallest r such that f can be represented as above.
For any positive integer m, a polynomial P ∈ℝ[X_1, ⋯, X_m] is said to be symmetric if for any permutation π∈𝔖_m of {1, ⋯, m} and any v_1, …, v_m ∈ℝ, P(v_π(1), ⋯, v_π(m)) = P(v_1, ⋯, v_m) . For any k ∈{1, ⋯, m}, the elementary symmetric polynomial s_k is given by s_k(X_1, ⋯,X_m):=∑_1≤ j_1 < j_2 < ⋯ < j_k ≤ m X_j_1… X_j_k.
Given a set X, an embedding ξ is a function that takes as input a graph G and a vertex v∈ V(G), and returns an element ξ(G,v)∈ X for each vertex v of the graph. An embedding is equivariant if and only if for any pair of isomorphic graphs G, G', and any isomorphism f from G to G', it holds that
ξ(G,v) = ξ(G',f(v)). We say that ξ refines ξ' if and only if for any graph G and any v, v' ∈ V(G), ξ(G, v) = ξ(G, v') ⟹ ξ'(G, v) = ξ'(G,v').
Given a graph G, and v ∈ V(G), let (G, v) ↦𝖼𝗈𝗅(𝖦,𝗏) be the function which returns the color of the node v. The color refinement refers to a procedure that returns a sequence of equivariant embeddings cr^t, computed recursively as follows:
- cr^0(G,v) = 𝖼𝗈𝗅(G,v)
- For t≥ 0, 𝖼𝗋^t+1(G,v) := ( 𝖼𝗋^t(G,v), {{𝖼𝗋^t(G,w): w ∈ N(v) }})
In each round, the algorithm computes a coloring that is finer than the one computed in the previous round, that is, 𝖼𝗋^t+1 refines 𝖼𝗋^t.
For some t ≤ n := | G|, this procedure stabilises: the coloring does not become strictly finer anymore.
We refer the reader to the seminal work <cit.> for comments about the history and connections between the color refinement and Weisfeiler-Lehman algorithms.
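As a concrete illustration (a sketch, not taken from the paper), one refinement round can be written in a few lines of Python; states are relabelled by canonical integers after every round, which keeps them small while preserving the induced partition of the vertices:

```python
def color_refinement(adj, colors, rounds):
    """adj: dict mapping each vertex to the list of its neighbours;
    colors: dict mapping each vertex to its initial (integer) label."""
    cr = dict(colors)
    for _ in range(rounds):
        # new state = (old state, multiset of neighbour states), encoded as a sorted tuple
        raw = {v: (cr[v], tuple(sorted(cr[w] for w in adj[v]))) for v in adj}
        # relabel states by canonical integers; this preserves which vertices share a
        # state, which is all that matters for distinguishability
        canon = {state: i for i, state in enumerate(sorted(set(raw.values())))}
        cr = {v: canon[raw[v]] for v in adj}
    return cr
```

Running two rounds on the rooted trees considered later separates the two roots precisely when the multisets of depth-one degrees differ.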
A GNN recursively computes an embedding of the vertices of a labelled graph, relying on the underlying adjacency information and the node features.
Each vertex v is attributed an indicator vector ξ^0(v) of size ℓ, encoding the color of the node v: the colors being indexed by the palette {1, ⋯, ℓ}, ξ^0(v)=e_i (the i-th canonical vector) if the color of the vertex v is i. The GNN is fully characterized by:
∘ A combination function 𝖼𝗈𝗆𝖻: ℝ^2ℓ⟶ℝ^ℓ which is a feedforward neural network with given activation function σ:ℝ⟶ℝ.
∘ The update rule of the GNN at iteration t ∈ℕ for any labelled graph G and vertex v ∈ V(G), is given as follows:
ξ^0(v) is the indicator vector of the color of v, ξ^t+1(v) = 𝖼𝗈𝗆𝖻 (ξ^t(v), ∑_w ∈ N(v)ξ^t(w) )
This type of GNN is sometimes referred to as a recurrent GNN. The general definition (cf. for instance <cit.>) usually considers a sequence of combine and aggregation functions which may depend on the iteration t. The aggregation function replaces the sum over the neighborhood, i.e. at each iteration 𝖼𝗈𝗆𝖻(ξ^t(v), 𝖺𝗀𝗀({{ξ^t(w) : w ∈ N(v) }})) is applied. It has been proved in <cit.> that for any 𝖺𝗀𝗀 function, there is a GNN (of potentially larger size) whose aggregation function is the summation and which refines any GNN with this aggregation function. The results of this article extend to GNNs whose combination and aggregation functions are allowed to be different in different iterations, but are multivariate piecewise polynomials. For ease of presentation, we restrict to
recurrent GNNs.
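A minimal numerical sketch of this update rule, with the combine function instantiated as a single dense layer with ReLU activation (one admissible piecewise polynomial choice); the weight matrix W of shape ℓ × 2ℓ and the bias b are assumptions of the sketch, not part of the definition above:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def recurrent_gnn(adj, xi0, W, b, iterations):
    """xi0: dict vertex -> initial feature vector (indicator of the colour);
    update: xi^{t+1}(v) = relu(W @ concat(xi^t(v), sum_{w in N(v)} xi^t(w)) + b)."""
    xi = {v: np.asarray(xi0[v], dtype=float) for v in adj}
    for _ in range(iterations):
        xi = {
            v: relu(W @ np.concatenate([xi[v], sum((xi[w] for w in adj[v]),
                                                   np.zeros_like(xi[v]))]) + b)
            for v in adj
        }
    return xi
```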
Given these definitions, we can now formally state the previously known results about the expressivity of unbounded GNNs (Theorems <ref> and <ref>). Namely, in Theorem <ref>, the size of the GNN is allowed to grow with n.
<cit.>
Let d ≥ 1, and let ξ^d be a vertex invariant computed by a GNN after d iterations. Then 𝖼𝗋^d refines ξ^d, that is, for all graphs G, G' and vertices v ∈ V(G), v' ∈ V(G'), 𝖼𝗋^d(G,v)=𝖼𝗋^d(G',v') ⟹ ξ^d(G,v)=ξ^d(G',v').
<cit.><cit.> Let n ∈ℕ. Then there is a recurrent GNN such that for all t=0, ⋯, n, the vertex invariant ξ^t computed in the t-th iteration of the GNN refines 𝖼𝗋^t on all graphs of order at most n.
In contrast, we prove Theorems <ref> and <ref> for bounded GNNs:
For any GNN, i.e., choice of combination function, represented by a feedforward neural network with piecewise polynomial activation, and any natural number I ∈ℕ, there exists a pair of rooted trees T and T' (unicolored, i.e. ℓ=1) of depth two with root nodes s and s' respectively such that:
* 𝖼𝗋^2(T,s) ≠𝖼𝗋^2(T',s'), i.e. s and s' can be distinguished with color refinement in two iterations.
* ξ^t(T,s) = ξ^t(T',s') for all t ≤ I, i.e. s and s' cannot be distinguished by the GNN until iteration I+1.
In two iterations, a single neuron perceptron with an activation that is not piecewise polynomial such as σ∈{exp, sigmoid, cosh, sinh, tanh} can distinguish the root nodes of any pair of non-isomorphic rooted trees of depth two.
§ OVERVIEW OF THE PROOFS
To establish our first result, we will use rooted trees of the form shown in Figure <ref> which is a tree of depth two whose depth one vertices have prescribed degrees k_1, …, k_m, with k_1, …, k_m ≥ 1. Given a GNN with piecewise polynomial activation and a natural number I∈ℕ, we will show that there exist two sets of integers k_1, ⋯, k_m and k'_1, ⋯, k'_m that are not the same up to permutations, such that for the corresponding rooted trees T[k_1, ⋯, k_m] and T[k'_1,⋯,k'_m], the GNN cannot distinguish s and s' for the first I iterations, i.e. ξ^t(T,s) = ξ^t(T',s') for any t ∈{1, ⋯, I}.
Note that the natural numbers m, and k_1, ⋯, k_m and k'_1, ⋯, k'_m will depend on I, the activation and the size of the neural network considered.
The proof of the first result is structured as follows. Since the trees are parameterized by m-tuples of integers k_1, …, k_m, the embedding of the root node computed by the GNN at any iteration is a function of these m integers. Since the activations are piecewise polynomials, these embeddings of the root node are also piecewise multivariate symmetric polynomial functions of k_1, …, k_m (Lemma <ref>). Then, we show that there exists a large enough region of ℝ^m on which this piecewise polynomial function is evaluated by the same polynomial. This region is large enough in the following sense: we prove that it contains more integral vectors than the number of possible values a symmetric polynomial with bounded degree can take on these vectors, even after identifying vectors up to permutations of the coordinates.
This implies that the polynomial will take the same value on two distinct integral vectors whose coordinates are not identical up to permutations.
When translating this result back to the world of GNNs, this implies that the two embeddings of the root nodes of the trees corresponding to these two vectors will coincide. To conclude a separation between bounded and unbounded GNNs, we justify that the unbounded ones can separate these two vertices. This is based on the previous result (Theorem <ref>) stating that unbounded GNNs refine color refinement.
Our second result states that for activations that are not piecewise polynomial, a one neuron perceptron GNN can distinguish the root nodes of any pair of nonisomorphic trees of depth two.
In particular, we prove this when the activation function is the exponential, the sigmoid, or the hyperbolic sine, cosine or tangent function. This is done by showing that the condition ξ^2(s)=ξ^2(s') corresponds to a relation between the exponentials of the integers k_1, ⋯, k_m and k'_1, ⋯, k'_m. Applying the Lindemann-Weierstrass Theorem in transcendental number theory (Lemma <ref> and Theorem <ref>) leads to the conclusion that k'_1, …, k'_m must be a permutation of k_1, …, k_m, showing that the trees are isomorphic.
§ COLLISION WITH PIECEWISE POLYNOMIAL ACTIVATIONS
The following statement is a reformulation of the fundamental theorem of symmetric polynomials, where we added a constraint on the degree of the decomposition. We provide a proof for completeness.
Let P be a symmetric multivariate polynomial of m variables of degree q with q ≤ m. Then P can be written as a polynomial of degree q of the elementary symmetric polynomials s_1, ⋯, s_q.
For α=(α_1,⋯, α_m)∈ℕ^m, define the multidegree of a monomial:
𝗆𝖽𝖾𝗀(X_1^α_1⋯ X_m^α_m) := ∑_i=1^m α_i (q+1)^m-i.
By definition, the leading term of a polynomial is the monomial with greatest multidegree. We present a proof by induction on the multidegree of P.
We first need the following claim.
Claim: Let P ∈ℝ[X_1, ⋯, X_m] be a symmetric polynomial. Let c_αX^α=c_αX_1^α_1⋯ X_m^α_m be the leading term of P (c_α≠ 0). Then c_αX^π(α) is also a monomial in P, for every permutation π of {1, ⋯, m}.
Let ϕ^π : ℝ[X_1, ⋯, X_m] →ℝ[X_1, ⋯, X_m] be the linear map between polynomials which maps the variable X_i to X_π(i). Since P is symmetric, ϕ^π(P) = P (we use here the equivalence between equality of ϕ^π(P) and P as functions and as formal algebraic objects). Since c_αX^α is a term in P and c_αX^π(α) is a monomial in ϕ^π(P) for any permutation π, we must have c_αX^π(α) as a monomial in P.
Base case. The property is true for any polynomial of multidegree 0 (constant polynomial).
Induction step. Let c_αX^α be the leading term of P. Then c_αX^π(α) is a monomial in P for every π∈𝔖_m by the previous claim. Therefore, the leading term's exponent α =(α_1, ⋯, α_m) must satisfy α_1 ≥α_2 ≥⋯≥α_m. Since P has degree q then α_i = 0 for any i ≥ q+1.
Let d_m = α_m, d_m-1= α_m-1 -α_m, ⋯, d_i = α_i - α_i+1, ⋯, d_1 = α_1 - α_2.
Define Q'(X_1, ⋯, X_m):=s_1^d_1⋯ s_m^d_m. Q' is a polynomial of s_1, ⋯, s_q and the leading term of Q' is c'_αX^α where c'_α≠ 0. In particular, 𝖽𝖾𝗀(Q')≤ q.
Now, let P':=P-c_α/c'_αQ'.
Then 𝗆𝖽𝖾𝗀(P') < 𝗆𝖽𝖾𝗀(P), and P' is symmetric because P and Q' are. Applying the induction hypothesis to P', we get Q” such that P'=Q”(s_1, ⋯, s_q). Define Q := c_α/c'_αQ' + Q”. Then Q is a polynomial of degree at most q in s_1, ⋯, s_q and P = Q(s_1, ⋯, s_q), which completes the induction step.
Let q be a positive integer. Then, there exists m∈ℕ and two integral vectors (k_1, ⋯, k_m) ∈ℕ^m and (k'_1, ⋯, k'_m)∈ℕ^m that are not equal up to a permutation such that for any sequence of symmetric piecewise polynomial functions (f_p: ℝ^p→ℝ )_p∈ℕ satisfying 𝖽𝖾𝗀(f_p) ≤ q for all p∈ℕ (bounded degree condition on each polynomial piece for any p), f_m(k_1, ⋯, k_m) = f_m(k'_1, ⋯, k'_m).
Let m be any natural number such that m > max{q^2, 2q}. For any natural number M, let F_M be the box { (k_1, ⋯, k_m) ∈ℤ^m: ∀ i 1 ≤ k_i ≤ M}.
Let Ω_M:= {{{ x_1, ⋯, x_m }}: (x_1, ⋯, x_m) ∈ F_M }; in other words, Ω_M is the set of multisets of size m whose elements, when arranged as vectors, are in the box F_M[Another way to define Ω_M is as the orbits of the action of the symmetric group 𝔖_m on F_M.].
Consider
Φ: Ω_M⟶ℤ^q
S ↦ ( s_1(S), s_2(S), ⋯, s_q(S) )
which is well-defined because of the symmetry of the elementary symmetric polynomials s_1, …, s_q. Note that for any i = 1, …, q, s_i is a sum of \binom{m}{i} monomials whose maximum value on F_M is M^i. Therefore, |𝖨𝗆(Φ)| ≤ ( ∑_i=1^q \binom{m}{i} M^i )^q ≤ M^q^2 ( q \binom{m}{q} )^q,
where the last inequality follows from \binom{m}{i} ≤ \binom{m}{q}, because m > 2q.
On the other hand, |𝖣𝗈𝗆(Φ)| = \binom{M+m-1}{m} (the number of multisets of size m whose elements are taken from a set of size M).
Now, let f_m: ℝ^m→ℝ be the m-th function in the given sequence of symmetric piecewise polynomial functions where each polynomial piece has degree at most q. Let r be the number of pieces of f_m. Then, there is a subset of 𝖣𝗈𝗆(Φ) with at least (1/r) \binom{M+m-1}{m} elements on which f_m is a symmetric polynomial P of degree at most q. Lemma <ref> tells us that P can be expressed as a polynomial of degree at most q of the elementary symmetric polynomials s_1, ⋯, s_q. Due to the pigeonhole principle, any such polynomial will be equal on two distinct multisets {{k_1, ⋯, k_m}} and {{k'_1, ⋯, k'_m}} in Ω_M as soon as:
(1/r) \binom{M+m-1}{m} (the number of points) > M^q^2 ( q \binom{m}{q} )^q ≥ |𝖨𝗆(Φ)| (the number of values P can take at most)
Such a value for M can be found by noticing that \binom{M+m-1}{m} is a polynomial in M of degree m, whereas M^q^2 ( q \binom{m}{q} )^q is a polynomial in M of degree q^2. Since we chose m to be greater than q^2, there exists M ∈ℕ such that Equation <ref> is true. Hence there exist k and k' whose coordinates are not equal up to any permutation and such that s_i(k_1, ⋯, k_m)=s_i(k'_1, ⋯, k'_m) for any i∈{1, ⋯, q}. In turn, f_m(k_1, ⋯, k_m)=f_m(k'_1, ⋯, k'_m).
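The collision phenomenon can be observed numerically (this brute-force search is only an illustration, not part of the proof): already for m=3 and q=2 there are distinct integral triples sharing s_1 and s_2, on which every symmetric polynomial of degree at most 2 must therefore agree.

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def elem_sym(vals, k):
    """k-th elementary symmetric polynomial evaluated at vals."""
    return sum(prod(c) for c in combinations(vals, k))

def find_collision(m, q, box):
    """Search the box {1, ..., box}^m (up to permutations) for two distinct
    multisets that agree on s_1, ..., s_q."""
    seen = {}
    for tup in combinations_with_replacement(range(1, box + 1), m):
        key = tuple(elem_sym(tup, k) for k in range(1, q + 1))
        if key in seen and seen[key] != tup:
            return seen[key], tup
        seen.setdefault(key, tup)
    return None

print(find_collision(m=3, q=2, box=8))
# one such collision: (1, 5, 6) and (2, 3, 7) both have s_1 = 12 and s_2 = 41
```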
Let ξ^t(T[k_1, …, k_m],s) be the embedding obtained via a GNN with piecewise polynomial activation functions after t iterations, where ξ^0(v)=1 for all vertices v ∈ V(T[k_1, …, k_m]).
Then, for any iteration t, there exists a symmetric multivariate piecewise polynomial function F such that ξ^t(T[k_1,…,k_m],s)=F(k_1, ⋯, k_m).
Furthermore, the degree of F does not depend on m, but only on the underlying neural network and t.
We first prove by induction on t that, for any vertex v ∈ V(T[k_1, ⋯, k_m]), ξ^t(T[k_1, …, k_m],v) is a piecewise polynomial function of the k_i's.
Base case: for t=0 this is trivial since all vertices are initialised with the constant polynomial 1, whose degree does not depend on m.
Induction step: Suppose the property is true at iteration t, i.e. for each node w, ξ^t(T[k_1, …, k_m],w) is a piecewise multivariate polynomial of the k_i's. Since
ξ^t+1(T[k_1, …, k_m],v) = ϕ(ξ^t(T[k_1, …, k_m],v), ∑_w∈ N(v)ξ^t(T[k_1, …, k_m],w))
where ϕ is a piecewise multivariate polynomial function, by composition ξ^t+1(T[k_1, …, k_m],v) is a piecewise multivariate polynomial of k_1, ⋯, k_m. By induction, the degree of ξ^t+1(T[k_1, …, k_m],v) depends only on t and the degree of ϕ, which does not depend on m.
Finally, we know from <cit.> that the color refinement algorithm refines any GNN at any iteration. Since the tuple obtained by color refinement for the vertex s is invariant with respect to permutations of the k_i's, ξ^t(T[k_1, …, k_m],s) is also invariant with respect to permutations of the k_i's.
We already know <cit.> that color refinement refines any recurrent GNN (even with an architecture of unbounded size). We prove the existence of pairs of graphs that can be separated by the color refinement algorithm, but cannot be separated by a recurrent GNN of fixed (but arbitrary) size.
We use T[k_1, ⋯, k_m] to refer to the tree illustrated in Figure <ref>. This tree has depth two, a root node s, and contains m nodes at depth one. Each vertex i at depth 1 has exactly k_i-1 “children” at depth two (and therefore k_i neighbors, where k_i is a positive integer). In the following, all vertices have color label 1.
Claim: Let T[k_1, ⋯, k_m] and T'[k'_1, ⋯, k'_m] be two rooted trees given by Figure <ref>. If the k_i's and k'_i's are not equal up to a permutation, the color refinement distinguishes s and s' after two iterations, i.e. 𝖼𝗋^2(s) ≠𝖼𝗋^2(s').
Simply note that
𝖼𝗋^2(s) = ( 𝖼𝗋^1(s), {{𝖼𝗋^1(x_1), ⋯, 𝖼𝗋^1(x_m) }} )
𝖼𝗋^1(s) = ( 𝖼𝗋^0(s), {{ 1, ⋯, 1 }} ) = ( 1, {{ 1, ⋯, 1 }} ), where the multiset contains m ones,
∀ i ∈{1, ⋯, m }: 𝖼𝗋^1(x_i) = ( 𝖼𝗋^0(x_i), {{ 1, ⋯, 1 }} ) = ( 1, {{ 1, ⋯, 1 }} ), where the multiset contains k_i ones,
hence 𝖼𝗋^2(s) is uniquely determined by the multiset {{k_1, ⋯, k_m}}.
Let I > 0 be a positive integer, and for 0 ≤ t ≤ I, let f_t(k_1, ⋯, k_m):=ξ^t(T[k_1, …, k_m],s) be the value returned by a GNN with piecewise polynomial activation after t iterations (note that the embeddings are one-dimensional because only one color is used). Due to Lemma <ref>, for any t∈{0, ⋯, I}, ((k_1, ⋯, k_m) ↦ f_t(k_1, ⋯, k_m))_m∈ℕ is a sequence of symmetric piecewise multivariate polynomials with bounded degrees (the degree of f_t does not depend on m). Lemma <ref> tells us that there exists m∈ℕ, and two vectors k∈ℕ^m and k'∈ℕ^m whose coordinates are not equal up to permutations, such that for any t∈{0, ⋯, I}, f_t(k_1, ⋯, k_m) =f_t(k'_1, ⋯, k'_m).
Note that in Theorem <ref>, depth two is minimal: for any pair of non-isomorphic rooted trees of depth one, a one-neuron perceptron GNN with an injective activation function, weights set to one, and zero bias distinguishes their root vertices in one iteration. Indeed, in that case, ξ^1(s) = σ(1+𝖽𝖾𝗀(s)) if the GNN is recurrent with a combine function given by ϕ: ℝ^2→ℝ, (x_1, x_2) ↦σ(x_1 + x_2). Hence, ξ^1(s)≠ξ^1(s') as soon as σ is injective and s and s' have distinct degrees.
§ ACTIVATIONS THAT ARE NOT PIECEWISE POLYNOMIAL
In this Section we present a proof of Theorem <ref>. We prove that any pair of non-isomorphic rooted trees of depth two, i.e. trees of the form T[k_1, ⋯, k_m] and T'[k'_1, ⋯, k'_n] (here the k_i's and k'_i's are all greater than or equal to 1, cf. Figure <ref>), can be distinguished by a bounded GNN with any of the following activation functions: exponential, sigmoid, or a hyperbolic sine, cosine or tangent function. Consider the following 1-neuron perceptron ϕ with activation function σ, ϕ: ℝ^2→ℝ, ϕ(x_1,x_2) = σ( x_1 + x_2). Then it is easy to see that:
∀ v ∈ V(T[k_1, ⋯, k_m]) ξ^1(v) = σ( ξ^0(v) + ∑_w∈ N(v)ξ^0(w)) = σ(1 + 𝖽𝖾𝗀(v))
ξ^2(v) = σ(σ(1 + 𝖽𝖾𝗀(v)) + ∑_w∈ N(v)σ(1 + 𝖽𝖾𝗀(w)))
In particular ξ^2(s) = σ( σ( 1 + m) + ∑_i=1^mσ(k_i+1))
Now suppose σ is either injective on ℝ, or nonnegative and injective on ℝ^+ (this is the case for the exponential, the sigmoid, the hyperbolic tangent, and the hyperbolic cosine and sine), and let s and s' be the roots of two trees with potentially different numbers m and n of depth-one vertices; then
ξ^2(s) = ξ^2(s') ⟺ ∑_i=0^mσ(k_i+1) = ∑_i=0^nσ(k'_i+1)
where k_0:=m and k'_0:=n. The goal of the remainder of this section is to prove that the right-hand side equality of Statement <ref> implies that m=n and that the k_i's are equal to the k'_i's up to a permutation, for the activation functions σ of Theorem <ref>.
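Before turning to the number-theoretic argument, a small numerical illustration of the closed form of ξ^2(s) above (the two trees chosen here are arbitrary): a single sigmoid neuron already separates two non-isomorphic depth-two trees on which a single ReLU neuron collides.

```python
import math

def xi2_root(ks, sigma):
    """Depth-two embedding of the root of T[k_1, ..., k_m] under the one-neuron
    perceptron phi(x1, x2) = sigma(x1 + x2), starting from xi^0 = 1 everywhere."""
    m = len(ks)
    return sigma(sigma(1 + m) + sum(sigma(k + 1) for k in ks))

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
relu = lambda x: max(x, 0.0)

for ks in [(1, 5, 6), (2, 3, 7)]:          # same number of children and same degree sum
    print(ks, xi2_root(ks, sigmoid), xi2_root(ks, relu))
# the sigmoid values differ, while the single ReLU neuron returns 19.0 for both trees
```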
If α_1, ⋯, α_n are distinct algebraic numbers, then the exponentials e^α_1, ⋯, e^α_n are linearly independent over the algebraic numbers.
Let n and m be positive integers, and α_1, ⋯, α_n and α'_1, ⋯, α'_m be algebraic numbers. Then ∑_i=1^n e^α_i = ∑_i=1^m e^α'_i if and only if m=n and the α_i's and α'_i's are equal up to a permutation.
(⟸) is clear. For (⟹), suppose by contradiction that the α_i's and α'_i's are not equal up to a permutation. First, if the α_i's (resp. α'_i's) are not distinct, one can group them by their number of occurrences in both sums. Then, we would obtain a nontrivial linear dependence with integer coefficients among exponentials of distinct algebraic numbers. This contradicts Theorem <ref> (Lindemann-Weierstrass).
Without loss of generality, suppose the k_i's and k'_i's are ordered in increasing order. For ease of notation, let α and α' be the vectors defined as α_i = k_i+1 for all i∈{1, ⋯, m} and α'_i=k'_i+1 for all i∈{1, ⋯, n}. We will now prove that Statement <ref> implies α=α' in each case.
- σ∈{𝗌𝗂𝗀𝗆𝗈𝗂𝖽, 𝗍𝖺𝗇𝗁}.
In the case of the sigmoid, Statement <ref> yields the following equation after multiplication by the product of the denominators:
( ∑_i=1^m e^α_i∏_j=1, j≠ i^m (1+e^α_j) ) ∏_i=1^n (1+e^α'_i) = ( ∑_i=1^n e^α'_i∏_j=1, j≠ i^n (1+e^α'_j) ) ∏_i=1^m (1+e^α_i)
After expanding and grouping each side into a linear combination of exponentials we obtain an equation of the form:
∑_S ⊆{1, ⋯, m}, T ⊆{1, ⋯, n}γ_S,Texp(α_S + α'_T) = ∑_S ⊆{1, ⋯, n}, T ⊆{1, ⋯, m}γ_S,Texp(α'_S + α_T)
where for S ⊆{1, ⋯, m}, α_S:=∑_i ∈ S α_i (resp. for T⊆{1, ⋯, n}, α'_T:=∑_i ∈ T α'_i). Note that γ_∅, T=0 for all subsets T ⊆{1, …, n}. We will prove by induction on m (the size of the vector α) that under these conditions, Equation <ref> implies that m=n and α= α'.
Base case:
If α has size one and α' has size n > 0, then the equation boils down to exp(α_1)=∑_i=1^nexp(α'_i) which is true if and only if n=1 and α_1=α'_1 using Lemma <ref>.
Induction step:
We suppose the following property true for some nonnegative integer m: For any nonnegative integers α_1, ⋯, α_m and α'_1, ⋯, α'_n, ∑_i=1^mσ(α_i)=∑_i=1^nσ(α'_i) ⟹ m=n and α=α'.
Suppose now that ∑_i=1^m+1σ(α_i)=∑_i=1^nσ(α'_i). Since γ_∅,T = 0 for all T⊆{1, …, n}, the smallest term on the left hand side of (<ref>) is exp(α_1) and the smallest term on the right hand side is exp(α'_1). Using Lemma <ref>, this implies that α_1 = α'_1. Therefore ∑_i=2^m+1σ(α_i) =∑_i=2^nσ(α'_i). We can apply the induction assumption to the vector (α_2, ⋯, α_m+1) of size m to obtain that m=n-1 and (α_2, ⋯, α_m+1)=(α'_2, ⋯, α'_n). This proves that m+1=n and α=α', which ends the induction.
If σ=𝗍𝖺𝗇𝗁=(exp(2·)-1)/(exp(2·)+1), Equation <ref> becomes, after multiplication by the product of the denominators:
( ∑_i=1^m (e^2α_i-1) ∏_j=1, j ≠ i^m(e^2α_j +1) ) ∏_j=1^n (1+e^2α'_j) = ( ∑_i=1^n (e^2α'_i-1) ∏_j=1, j ≠ i^n(e^2α'_j +1) ) ∏_j=1^m (1+e^2α_j)
After developing into a linear combination of exponentials on each side, the arguments containing α_T with T≠∅ on the left hand side and α'_T with T≠∅ on the right hand side have positive algebraic coefficients. There are also arguments of the form α'_T on the left hand side and α_T on the right hand side (in other words, γ_∅, T≠ 0, unlike the sigmoid case). However, note that the coefficients corresponding to these terms are (algebraic and) negative. Hence, as a consequence of Lemma <ref>, the arguments with negative coefficients in front of the exponentials must match up on each side, and we are left with an equation similar to Equation <ref> (the arguments have a factor 2), where again γ_∅, T = 0. We can apply the same reasoning by induction as for the sigmoid case, to prove that α=α'.
- σ∈{sinh, cosh}. If σ = cosh, then Equation <ref> becomes:
( ∑_j=1^mexp(α_j) - ∑_j=1^nexp(α'_j) ) + ( ∑_j=1^mexp(-α_j) - ∑_j=1^nexp(-α'_j) ) = 0
Due to Lemma <ref>, this can only happen if m=n and, for all j ∈{1, ⋯, n}, α_j = α'_j, because the α_j's and α'_j's are algebraic, ordered and positive, so the positive exponents can only cancel against positive exponents and the negative ones against negative ones. We conclude that α=α'. The case σ=sinh can be treated similarly.
|
http://arxiv.org/abs/2307.03875v2 | 20230708014222 | Large Language Models for Supply Chain Optimization | [
"Beibin Li",
"Konstantina Mellou",
"Bo Zhang",
"Jeevan Pathuri",
"Ishai Menache"
] | cs.AI | [
"cs.AI",
"cs.CL",
"cs.DM",
"cs.LG"
] |
Supply chain operations traditionally involve a variety of complex decision making problems. Over the last few decades, supply chains greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial efforts in explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design OptiGuide – a framework that accepts as input queries in plain text, and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
§ INTRODUCTION
Modern supply chains are complex, containing multiple tiers of suppliers, customers, and service providers <cit.>. Optimization tools have been widely utilized for decision making in such supply chains. These tools not only automate some of the decision making processes, but also result in efficiency gains and substantial cost reductions across many industries <cit.>. However, some of the automated processes require involving business operators, for understanding and explaining certain decisions, providing what-if analysis, and even overriding some optimization outcomes. In many cases, these operators are not equipped with the necessary background in optimization, resulting in time-consuming back-and-forth interactions with program managers, data scientists and engineers.
Large language models (LLMs) have recently emerged as a promising tool for assisting humans with a wide variety of tasks, such as writing documents, presenting work, coding and health diagnosis <cit.>. Generative multimodal LLMs, such as OpenAI's GPT-4, are being rapidly integrated within co-pilots, for answering questions and increasing productivity through simple, language based interactions with technology <cit.>.
In this paper, we study how state-of-the-art LLMs can be applied for reasoning about supply chain optimization. Using LLMs in our context is challenging.
First, the underlying optimization problems are often large scale combinatorial optimization problems, and solving them directly is currently out of reach for LLMs <cit.>. Second, one needs to align the large foundation models to answer the domain-specific questions. Due to the large scale, fully training these models is not possible, and even middle-ground solutions such as fine-tuning LLMs require substantial compute and engineering investments <cit.>. Last but not least, any use of LLMs in business-critical operations, should have solutions when “things go wrong", including diagnosing of and recovering from mistakes and hallucinations <cit.>.
In view of these challenges, we design and implement OptiGuide – a framework that employs LLMs to interpret supply chain optimization solutions. A key idea behind OptiGuide is not to replace optimization technology by LLMs, but rather use optimization solvers in tandem with LLMs. In our design (see Figure <ref> for system architecture), the LLM is responsible for translating the human query to “optimization code", which is in turn used by an optimization solver to produce the necessary output; the output then passes through the LLM for producing the answer in human language (English). This architecture is used both for textual explanations and visualizations of the optimization solution, as well as for answering what-if queries. To address what-if queries, OptiGuide uses the LLM to appropriately modify the input to the optimization solver, and then reruns the solver under the hood to produce an answer.
To enable OptiGuide, we solve multiple technical challenges. First, we circumvent all forms of costly training, by applying in-context learning, namely “teaching" the LLM about the domain directly through the query's prompt (i.e., as part of the inference). This requires careful co-design of the optimization code and the prompt, with the understanding that the prompt can be space constrained. For example, we write the code in a certain functional form that can be efficiently mapped to questions asked by humans.
We also design a simple safeguard mechanism that confronts output mistakes.
To evaluate the effectiveness of OptiGuide, we introduce an evaluation benchmark that includes (i) a variety of common supply chain scenarios, and (ii) an evaluation methodology that incorporates new metrics for quantifying accuracy, generalizability within a scenario, and extrapolation capability to unseen scenarios. We test OptiGuide on five different scenarios and obtain 93% accuracy on average using GPT-4. We view the benchmark and methodology as contributions that stand on their own, and can be used to evaluate future approaches. We are in the process of open-sourcing our benchmark. Finally, we deploy OptiGuide for the server deployment optimization used in Microsoft Azure's supply chain. We discuss some of the engineering challenges, and report initial promising results from our evaluation.
We believe that this paper sets important foundations, which can be used by other organizations for explaining optimization outcomes through LLMs. There are several future directions that emerge from our study, for example, using smaller models that can be trained with modest resources. As a longer-term goal, it is natural to expand the scope of LLMs beyond explainability, to facilitate interactive optimization (e.g., “please provide a more load-balanced solution", “please use at most two suppliers"). With the constant advances of LLM technology, it will be fascinating to examine whether LLMs can be utilized not only as translators, but also for refining and improving optimization outcomes.
The rest of the paper is organized as follows. In Section <ref>, we provide the necessary background on supply chain optimization and current LLM technology. In Section <ref>, we describe the design of OptiGuide.
Section <ref> describes our evaluation benchmark and OptiGuide's evaluation results. In Section <ref>,
we outline our findings from OptiGuide's deployment in Azure's supply chain. We discuss future perspectives in Section <ref>.
§ BACKGROUND AND MOTIVATION
In this section, we provide brief background on decision making in supply chain operations, and elaborate on the notion of explainability. We then describe current capabilities and limitations of LLMs, and conclude with a simple supply chain example, which will be useful for explaining our solution approach.
§.§ Decision Making in Supply Chains
A supply chain may be defined as “an integrated network of facilities and transportation options for the supply, manufacture, storage, and distribution of materials and products” <cit.>. A simple supply chain may consist of a company (e.g., a service provider) and the set of its suppliers and customers <cit.>. However, most supply chains nowadays contain multiple tiers with suppliers of suppliers, customers of customers, and hierarchies of service providers <cit.>.
This results in highly complex global networks where decisions must be optimized across multiple layers to satisfy customer demand while guaranteeing operational efficiency.
Decision making in supply chains spans different time-scales: starting from the design of the supply chain network (e.g., location of factories), planning (e.g., procurement of supply), and execution (e.g., transportation of goods). This leads to many types of decisions; a few examples:
0em
* How many factories should we open, where, and with what manufacturing capacity?
* What suppliers should we use?
* How much inventory should we keep in stock and at which locations?
* How should we transport intermediate and finished goods efficiently?
The complexity of the decision-making often requires the design of optimization approaches that can incorporate a multitude of constraints and objectives, and still generate good quality solutions in plausible running times. To this end, different aspects of the supply chain (facility location, inventory planning, routing) may be optimized separately or considered jointly (e.g., inventory planning integrated with routing <cit.>). Common solution approaches for these optimization problems include Mixed Integer Programming based techniques and heuristics that can tackle the large scale of the problem.
§.§ Explainability
Business operators and planners involved in decision-making need to maintain a good understanding of the optimization outcomes. This allows them to not only address customer questions, but also react to unexpected events, and resolve inefficiencies and bottlenecks. However, the understanding is often challenging due to the complexity of the decision process (e.g., large scale, solution obtained by “black-box" algorithm, etc.) and lack of optimization expertise.
For concreteness, we provide below some examples of questions that operators may wish to answer.
0em
* What is the cost breakdown for each fulfilled demand?
* How much excess inventory have I had per month in the past year?
* What would happen if the demand at a particular location increased by 10%?
* Can I reduce a factory's manufacturing capacity by 5% and still meet the demand?
* Why was a particular supplier selected for a demand?
* How would selecting a different transportation option affect the delivery timelines and the overall cost?
These and other questions aim at explaining the outcome of supply chain decisions. They include analyzing the current solution (input and output), investigating historical trends, and exploring what-if scenarios.
Obtaining insights on optimization decisions may require involving multiple professionals with different roles. Suppose that planners may wish to understand why a demand has not been fulfilled on time. They often surface the concern to the program managers, who involve domain experts, such as data scientists or the engineers that developed the optimization system. The domain experts in turn may need to write additional code and often rerun the optimization to extract the relevant insights. This overall process might be very time-consuming for all parties involved and can cause significant delays in the decision making process.
In some applications, teams maintain some custom tools that allow decision makers to reason about certain decisions. For example, application dashboards can provide visualizations or even allow enforcing some actions (e.g., fix a specific supplier for a demand). However, given the engineering overhead of maintaining
the tools, they are typically limited to the most common use cases.
The notion of explainability is certainly not novel, and has drawn attention in both academia and industry. There have been numerous studies on explaining ML/AI <cit.>. In the optimization context, IBM Decision Optimization <cit.> provides answers to a fixed set of queries that the user may choose to activate. See also <cit.> and references therein.
§.§ Large Language Models
Overview.
A large language model (LLM) is a foundation model <cit.> trained on extensive text data using deep learning techniques, such as Transformer neural networks; ELMo <cit.>, BERT <cit.>, Turing NLG <cit.>, GPT-3 <cit.>, GPT-4 <cit.>, PaLM <cit.>, PaLM-E <cit.>, LLaMA <cit.>, and Vicuna <cit.> are some examples of widely used LLMs.
In the training phase, a LLM learns statistical patterns, word relationships, and contextual information from diverse sources, such as books, articles, websites, and code repositories. LLMs are used for a variety of tasks in the inference phase <cit.>, including chatbots, translation, writing assistance, coding <cit.>, planning <cit.>, poem and story composition.
Using LLMs in applications. Multiple strategies can be employed to adapt LLMs for a specific application. The most common approaches are fine-tuning and in-context learning.
Fine-tuning is a classic approach for “transfer learning" aimed at transferring knowledge from a pre-trained LLM to a model tailored for a specific application <cit.>. Typically, this process involves tweaking some weights of the LLM. While fine-tuning approaches can be made efficient <cit.>, they still necessitate model hosting in GPUs. This requirement can prove excessively costly for many applications. In-context learning <cit.> is an alternative cheaper approach, which involves incorporating a few training examples into the prompt (or query). The idea here is to append the prompt with domain-specific examples and have the LLM learn from these “few-shot" examples. A key advantage of this approach is that it does not require model parameter updates.
Prompt engineering. In a production setting, developers often send prompts (aka, queries) to the model, which can be appended with domain-specific examples for obtaining higher-quality answers. A collection of prompt management tools, such as ChatGPT Plugin <cit.>, GPT function API call <cit.>, LangChain <cit.>, AutoGPT <cit.>, and BabyAGI <cit.>, have been designed to help engineers integrate LLMs in applications and services. The prompt size is measured in the number of tokens, which is proportional to the query size. LLMs can only process a limited number of tokens because of resource limitations, which is a strict constraint that developers and tools need to find workarounds for.
Privacy. Using domain-specific information in the prompt may involve proprietary data, which users may prefer not to reveal to LLM hosts. Even if LLM providers offer service level agreements (SLAs) for privacy, passive eavesdropping attackers might still intercept the data. Therefore, many organizations would prefer utilizing LLMs in a privacy-preserving way, namely keeping the proprietary data in-house.
Mistakes.
Naturally, LLMs might provide sub-optimal outcomes, such as inaccuracies and even hallucinations <cit.>.
There are generic tools that tackle this problem <cit.>, however one may need domain specific tools for better outcomes. One example is fixing code generated by LLMs <cit.>.
§.§ A Simple Example
We now describe a simple supply chain example that will be useful for illustrating our approach.
The supply chain. Consider a coffee roasting company that roasts two types of coffee (light and dark roast). The company sources coffee beans from three different suppliers, it roasts them in one of its two roasting facilities, and then ships them to one of its three retail locations for selling to customers. The goal is to fulfill the demand in each retail location, while minimizing the total cost. The total cost consists of the cost of purchasing the coffee from the suppliers, the roasting cost in each facility, and the shipping cost of the end product to the retail locations. An illustration is given in Figure <ref>.
Model formulation. We can model this problem as a Mixed Integer Program. Let x_s,r denote the number of units purchased from supplier s for roasting facility r, and y^L_r,ℓ and y^D_r,ℓ the amount of light and dark roast sent to retail location ℓ from roasting facility r. Each supplier s has a capacity C_s, and each retail location ℓ has demand D^L_ℓ and D^D_ℓ for light and dark roast respectively. There is a cost c_s,r for each unit purchased from supplier s for roasting facility r, a shipping cost of g_r,ℓ for each unit sent to retail location ℓ from roasting facility r, and a roasting cost h_r^L and h_r^D per unit of light roast and dark roast respectively in facility r.
The optimization problem is the following:
minimize ( ∑_s,r x_s,r· c_s,r +
∑_r,ℓ y^L_r,ℓ· h^L_r+
∑_r,ℓ y^D_r,ℓ· h^D_r + ∑_r,ℓ (y^L_r,ℓ + y^D_r,ℓ) · g_r,ℓ) (Objective)
subject to ∑_r x_s,r≤ C_s ∀ s (Supplier capacity constraint)
∑_s x_s,r = ∑_ℓ (y^L_r,ℓ + y^D_r,ℓ) ∀ r (Conservation of flow constraint)
∑_r y^L_r,ℓ≥ D^L_ℓ ∀ℓ (Light coffee demand constraint)
∑_r y^D_r,ℓ≥ D^D_ℓ ∀ℓ (Dark coffee demand constraint)
x_s,r, y^L_r,ℓ, y^D_r,ℓ∈ℤ^+ ∀ s,r,ℓ (Integrality constraint)
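As a rough illustration (a sketch with made-up data, not the authors' implementation), this model can be written for a MIP solver such as Gurobi as follows; all numeric values below are placeholders:

```python
import gurobipy as gp
from gurobipy import GRB

suppliers, roasteries, retails = ["s1", "s2", "s3"], ["r1", "r2"], ["l1", "l2", "l3"]
capacity = {"s1": 150, "s2": 50, "s3": 100}                      # C_s
buy_cost = {(s, r): 5 for s in suppliers for r in roasteries}    # c_{s,r}
roast_light = {"r1": 3, "r2": 5}                                 # h^L_r
roast_dark = {"r1": 5, "r2": 4}                                  # h^D_r
ship_cost = {(r, l): 2 for r in roasteries for l in retails}     # g_{r,l}
demand_light = {"l1": 20, "l2": 30, "l3": 40}                    # D^L_l
demand_dark = {"l1": 20, "l2": 20, "l3": 100}                    # D^D_l

m = gp.Model("coffee")
x = m.addVars(suppliers, roasteries, vtype=GRB.INTEGER, name="x")
yL = m.addVars(roasteries, retails, vtype=GRB.INTEGER, name="y_light")
yD = m.addVars(roasteries, retails, vtype=GRB.INTEGER, name="y_dark")

m.setObjective(
    gp.quicksum(buy_cost[s, r] * x[s, r] for s in suppliers for r in roasteries)
    + gp.quicksum(roast_light[r] * yL[r, l] + roast_dark[r] * yD[r, l]
                  + ship_cost[r, l] * (yL[r, l] + yD[r, l])
                  for r in roasteries for l in retails),
    GRB.MINIMIZE)

m.addConstrs((x.sum(s, "*") <= capacity[s] for s in suppliers), "supplier_capacity")
m.addConstrs((x.sum("*", r) == yL.sum(r, "*") + yD.sum(r, "*") for r in roasteries), "flow")
m.addConstrs((yL.sum("*", l) >= demand_light[l] for l in retails), "light_demand")
m.addConstrs((yD.sum("*", l) >= demand_dark[l] for l in retails), "dark_demand")
m.optimize()
```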
Explainability. Let us now zoom into the example from Figure <ref>. The optimal solution is depicted in Figure <ref>. We see that in the optimal plan, both roasteries produce light and dark coffee; the first roastery sources its beans from supplier 3, while the second from suppliers 1 and 2. The first two retail locations then obtain all their coffee from the first roastery, while the third retail location is supplied by both roasteries. A user may ask the following questions:
0em
* What would happen if the demand at retail location 1 increased by 10%?
* What would happen if the demands at all retail locations doubled?
* Why are we using supplier 3 for roasting facility 1?
* Can I use roasting facility 1 only for retail location 2?
* What if supplier 3 can now provide only half of the quantity?
* The per-unit cost from supplier 3 to roasting facility 1 is now $5. How does that affect the total cost?
* Why does Roastery 1 produce more light coffee than Roastery 2?
* Why does supplier 1 ship more to Roastery 2 than Roastery 1?
* Why not only use one supplier for Roastery 2?
§ THE LLM FRAMEWORK
Large-scale supply chain management entails multiple functions, such as extensive data gathering, data processing and analysis, optimization processes and communication and enforcement of decisions across multiple stakeholders. While LLMs and supporting tools may handle part of these functions, there is a need for an end-to-end framework that will address the underlying challenges in a systematic way. In this section, we describe the design of our framework, OptiGuide.
§.§ System Overview
The framework, depicted in Figure <ref>, consists of three sets of entities: agents, LLMs, and application-specific components. When a user poses a question (1), the coder takes the question and formulates it as an in-context learning (ICL) question (2) for the LLM. The LLM then generates code (3) to answer the question. The safeguard checks the validity of the code and aborts the operation in case of a mistake; otherwise the safeguard feeds the code to an application specific component (4), such as a database engine or an optimization solver (depending on the query). The component processes the code and produces results, which are logged in a file (5). We note that obtaining the final result
may involve multiple iterations (2 to 5) where the query is automatically refined until the desired output is achieved. Finally, the output logs from the component are fed back into the LLM (6). The LLM analyzes the logs and generates a human-readable answer (7) that is sent back to the user (8).
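Schematically, this loop can be sketched as follows (control flow only, not the actual implementation; llm and run_sandboxed stand for the LLM endpoint and the application-specific execution environment, and the numbered comments refer to the steps above):

```python
def build_prompt(question, examples):
    shots = "\n\n".join(f"Question: {q}\nAnswer code:\n{c}" for q, c in examples)
    return f"{shots}\n\nQuestion: {question}\nAnswer code:\n"

def answer(question, llm, run_sandboxed, examples, max_attempts=3):
    prompt = build_prompt(question, examples)          # (1)-(2) coder builds the ICL query
    for _ in range(max_attempts):
        code = llm(prompt)                             # (3) LLM writes code
        ok, logs = run_sandboxed(code)                 # (4)-(5) safeguard runs it on the solver/database
        if ok:
            return llm("Explain this output to the user:\n" + logs)   # (6)-(7) interpreter
        prompt += "\nThe previous code failed with:\n" + logs + "\nPlease fix it."
    return "Sorry, I could not answer this question."  # abort after repeated failures
```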
We now provide an overview of the different entities and components. More details can be found in Appendix <ref>.
§.§.§ Agents
Agents facilitate the interaction between users, the LLM, and application-specific components. The coder converts raw user questions into specific ICL queries. The conversion includes supplying the application context, providing ample training examples, and restructuring the user's query, as exemplified in Figure <ref>. The safeguard operates as a quality control checkpoint. It scrutinizes the code for potential discrepancies and initiates self-debugging upon encountering failures. When OptiGuide cannot successfully address a query, the safeguard would either initiate a new iteration with a proposed fix, or generate an error message for the user. The interpreter takes the output logs, tables, graphs, etc., and generates a human-friendly response to the user's query.
§.§.§ Application Specific Components
Different applications may have different types of components; we provide an overview of the most common ones. OptiGuide is designed in a modular way, so that using it for a different application requires only switching to a new set of components.
The database is a systematically arranged collection of data in various formats, such as CSV, SQL, JSON, Parquet, which are queried to extract answers. The solver can be a commercial integer programming solver, such as Gurobi. OptiGuide can query the solver output directly, or the output can be stored and queried from the database. If a question demands profound domain knowledge or historical context, OptiGuide consults documents to enhance the depth and relevance of the response. The helper is an optional component. It consists of a set of functions written by application engineers, for simplifying the code produced by LLMs. For example, a complex data analysis workflow can be simplified to a single helper function call.
§.§ A Running Example
We illustrate OptiGuide's data flow via the user question, “What if we prohibit shipping from supplier 1 to roastery 2? Show me the new plan and compare with the previous result". First, the coder converts this question into an in-context learning query for the LLM; see Figure <ref> for the prompt. In addition to the question itself, the prompt contains (i) training examples, namely pairs of questions and code answers, and (ii) a documentation of the helper functions. Intuitively, (ii) supplements (i) by providing additional context into what the code does.
Subsequently, the LLM generates code that adds a new constraint (green region in Figure <ref>). The safeguard then extracts the code from the LLM's response, and calls the optimization solver to re-solve the planning problem, yielding a result depicted in the yellow region in Figure <ref>. This result is then fed into the LLM by the interpreter, which produces a response. Finally, OptiGuide presents the response to the user alongside a visualization of the plan (green region in Figure <ref>) and a comparison with the original cost. Note that OptiGuide preserves privacy, since the domain-specific data remains in either the solver or the database, and is never transferred to the LLM. Additional examples are provided in Figure <ref>.
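In terms of the coffee sketch shown earlier, code of this kind might amount to adding a single constraint and re-solving (hypothetical variable names, shown only for illustration; the actual generated code appears in the paper's figure):

```python
# prohibit shipping from supplier s1 to roastery r2, then re-optimize
m.addConstr(x["s1", "r2"] == 0, name="prohibit_s1_r2")
m.optimize()
```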
§ EVALUATION BENCHMARK
In this section, we develop a benchmark for evaluating the performance of our framework on a variety of supply chain optimization problems. The benchmark and the methodology around it can guide future efforts for using LLMs in supply chain optimization.
§.§ Scenarios and Data
To evaluate our framework, we selected a variety of optimization problems that capture multiple types of decisions that may be relevant in different supply chain settings. Specifically, our dataset includes a facility location scenario, a multi-commodity network flow for distribution of products, workforce assignment optimization, the traveling salesman problem, as well as the coffee distribution scenario from Section <ref>. The code for all problems is in Python and the Gurobi optimization solver <cit.> is used to obtain the optimal solution; Appendix <ref> provides the code for the coffee distribution problem as an example.
Our next step is to generate a repository of questions and code answers for each scenario. Some of these question-answer pairs will be used as examples for in-context learning, while others for evaluating OptiGuide's performance.
To create a large set of questions, we write macros for each question, which results in generating question sets of closely related question-answer pairs. An example of a macro for a question set is the following:
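(The paper's own macro listing is not included in this extract; the following is a hypothetical macro in the same spirit, with made-up slot values and an invented update_demand helper.)

```python
from itertools import product

# Hypothetical question-set macro: each combination of slot values yields one
# question/ground-truth-code pair, producing a set of closely related test cases.
MACRO = {
    "question": "What if the demand for light roast at {retail} increased by {pct}%?",
    "answer_code": "model = update_demand(model, '{retail}', 'light', 1 + {pct} / 100)",
    "slots": {"retail": ["l1", "l2", "l3"], "pct": [5, 10, 20]},
}

def expand(macro):
    """Yield one (question, ground-truth code) pair per combination of slot values."""
    keys = list(macro["slots"])
    for values in product(*(macro["slots"][k] for k in keys)):
        subst = dict(zip(keys, values))
        yield macro["question"].format(**subst), macro["answer_code"].format(**subst)
```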
In order to increase the diversity in the question sets, we also ask GPT to rephrase the questions while preserving their meaning. For instance, GPT might rephrase the generated question “Why would we ship beans from Supplier 1 to Roastery 2” to “What benefits are associated with the choice of shipping beans from Supplier 1 to Roastery 2?”.
We note that the question sets for all problems that are used in the benchmark were created from scratch and kept in house, so that the LLMs have not observed these data as part of their training.
§.§ Evaluation Methodology
The goal of our evaluation is to assess the accuracy of LLMs in answering user questions for supply chain optimization problems. Unfortunately, existing metrics, such as pass@k which is used for analyzing coding accuracy <cit.>, are not well suited for explainability through code (intuitively, the metrics are “too forgiving"). We therefore propose a different methodology which is inspired by the unit-test approach used in software development.
Our evaluation proceeds as follows. For each scenario we run R experiments. Each experiment consists of T question sets. Each question set consists of Q test questions and answers.
The LLM is asked to write the code and answer for a test question; it is given three chances to produce a response in case of an evident error (runtime or syntax). We then evaluate the correctness of the final answer. Note that we do not necessarily evaluate whether the generated code matches exactly with our ground-truth code, as there are different ways to obtain the correct response. The following example demonstrates a scenario where the generated code is quite different, but the optimization outcome would be the same.
Accuracy. We define the accuracy metric AC as the average success rate across all scenarios, experiments and question sets. Formally,
AC = 1/(SR) ∑_s=1^S ∑_r=1^R (1/T_s) ∑_t=1^T_s 1(q_t),
where q_t is the question set, and 1(q_t) is the indicator of whether it passed successfully. The LLM passes a question set if and only if it successfully answers all questions in the question set.
In-distribution and out-of-distribution evaluation.
As is common practice, we evaluate our framework in both `in-distribution' and `out-of-distribution' <cit.> settings.
For in-distribution evaluation (Figure <ref>), the test question and the examples used in the prompt are from the same question set. In contrast, for out-of-distribution evaluation (Figure <ref>), the example questions are extracted from different question sets.
Example selection. As the number of tokens that can be provided as input to the LLMs is limited, we explore different approaches for selecting the training examples for each query. The approaches can be evaluated both for in-distribution and out-of-distribution evaluation. One approach is random selection, where a fixed number of example questions is selected uniformly at random. Another approach is based on nearest neighbors, where we select examples that are similar to the test question; similarity is based on the text embedding <cit.> of the questions as determined by the model text-embedding-ada-002 <cit.>. We also experiment with different sizes of the example set (0, 1, 3, 5, or 10 examples).
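A sketch of the nearest-neighbor selection (the embed callable stands for an embedding endpoint such as text-embedding-ada-002 and is not implemented here):

```python
import numpy as np

def nearest_examples(question, pool, embed, k=3):
    """Return the k (question, code) pairs from the pool whose questions are most
    similar to the test question under cosine similarity of their embeddings."""
    q = np.asarray(embed(question), dtype=float)
    def sim(pair):
        e = np.asarray(embed(pair[0]), dtype=float)
        return float(q @ e) / (np.linalg.norm(q) * np.linalg.norm(e))
    return sorted(pool, key=sim, reverse=True)[:k]
```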
§.§ Performance
Setup. For each scenario s, we run R=10 experiments. In each experiment we evaluate T_s≥ 10 question sets. Each question set q_t usually contains 10-30 questions and answers.
We use both text-davinci-003 <cit.> and GPT-4 <cit.> for our evaluation.
Performance results across different LLMs, example selection approaches, and example set sizes are summarized in Table <ref>.
Observations. GPT-4 consistently outperforms text-davinci-003 in both in-distribution and out-of-distribution evaluation.
As expected, both models show higher accuracy on in-distribution compared to out-of-distribution evaluation. GPT-4 performs relatively much better in out-of-distribution evaluation, demonstrating its stronger reasoning and generalization capabilities; another sign for these capabilities is the 59% accuracy even without any training examples. Increasing the number of examples results in improved accuracy across the board. We also note that the gap between text-davinci-003 and GPT-4 decreases with the size of the example set.
The nearest neighbor selection approach yields slight performance improvements for in-distribution evaluation. Interestingly, when the size of the example set is greater than one, random selection outperforms nearest neighbor for out-of-distribution evaluation. One explanation here is that selecting examples based on text similarity results in overfitting, and random selection results in more diverse training examples.
§ OPTIGUIDE FOR AZURE'S SUPPLY CHAIN
In this section, we demonstrate OptiGuide's capabilities on the server fulfillment supply chain of Microsoft Azure. We start with providing the necessary details for the decisions involved in Azure's supply chain. We then outline the steps for deploying OptiGuide in production, and provide examples of user interactions and early feedback we obtained. We conclude this section by describing preliminary performance results.
§.§ The Supply Chain
The rapid growth of the cloud industry requires cloud providers to continuously deploy additional capacity to keep up with the demand. This is achieved by acquiring new clusters of servers and deploying them in the data centers. The Microsoft Azure supply chain encompasses a broad array of processes including demand forecasting, strategic foresight, hardware semantic search, fulfillment planning, and document management. Due to complexity and large scale, the optimization of Azure's supply chain is assigned to different subsystems. We focus here on one such subsystem called Intelligent Fulfillment System (IFS), which deals with assigning and shipping servers from the warehouse to the data centers.
Main decisions. For each demand for cloud capacity, the main decisions consist of (i) the hardware supplier that will be used to fulfill the demand, (ii) the timeline of the deployment - in particular, the cluster's dock-date (which determines the date of shipping from the warehouse), and (iii) the cluster's deployment location in the data center (selection of a row of tiles to place the cluster on). The goal is to minimize the total cost that consists of multiple components, such as delay/idle cost of the clusters compared to their ideal dock-date and shipping costs, while respecting a multitude of constraints. Examples of constraints include capacity constraints on the suppliers and the data centers, location preferences for demands and compatibility constraints. The underlying optimization problem is formulated as a Mixed Integer Program (MIP) where the total input data size is around 500 MB.
The optimal solution is obtained hourly using Gurobi. More details about the optimization problem can be found in Appendix <ref>.
Stakeholders. The main consumers of IFS are planners. These are professionals that have the business context, so when they receive the outcome of the optimization, they can confirm that it meets business needs (or override decisions otherwise) and ensure the execution of the decisions is completed as planned. However, the increased complexity of the underlying optimization problem in combination with the global scale of decision making (hundreds of data centers) prevents immediate clarity in the reasoning behind each decision. Consequently, planners often reach out to the engineers (including data scientists) that develop the optimization system for obtaining additional insights.
Oftentimes, planners and engineers have multiple rounds of interaction around understanding an issue or exploring what-if scenarios.
Common questions. We summarize below the main types of questions that are raised by planners:
0em
* [Management] Does the system support a particular region, resource, or supplier?
* [Availability] Is a resource available or allocated?
* [Decisions] Why did the system make decision `x' related to supplier/demand selection, time, and location?
* [Details of shipments] What are the details related to cross-geographical shipments and expected dock counts on a specific date?
* [Historical data analysis] What is the standard deviation of the supplier's inventory in the last month?
* [Visualization] Can you visualize the dock capacity, availability, dates, or delays at a given location?
§.§ Deploying OptiGuide for Azure Supply Chain
Our current deployment of OptiGuide consists of (i) a front-end service for multiple-user interaction; (ii) an agent service, which is connected to Azure OpenAI for LLM access; (iii) multiple virtual machines (VMs) which host IFS and the application-specific components to support multiple users at the same time.
We preload the VMs' memory with the input data and the solver's solutions to speed up code execution for users. The input data for the optimization problem are updated periodically (hourly), and the VMs load the updated data in a round-robin fashion so that there are always some VMs available to support users. We use GPT-4 as the LLM.
§.§ Preliminary Feedback and Results
Figure <ref> provides examples of interactions between users and OptiGuide.
The preliminary feedback we obtained from both planners and engineers has been positive. Users expressed excitement, noting the potential of OptiGuide to help them understand the underlying optimization logic. Users especially emphasized the benefits of supporting key what-if scenarios, which gives planners more autonomy and may substantially reduce the engineering on-call burden. For example, before OptiGuide, answering one what-if question would require more than three operators to coordinate the investigation and one on-call engineer to inspect the plan output.
Our preliminary evaluation indicates that OptiGuide can achieve more than 90% accuracy for our in-distribution evaluation. This result is consistent with the ones obtained in Section <ref>.
§ CONCLUDING REMARKS
We conclude this paper by discussing current limitations, and highlighting intriguing directions for future work.
§.§ Current Limitations
Users need to be specific. The user needs to ask precise questions. For instance, “Can we dock demand xc132 fifteen days earlier?" is ambiguous, because “earlier" can mean “15 days before today", “15 days before the currently planned date", or “15 days before the deadline". Consequently, the LLM might misunderstand the user and yield the wrong code.
Dependency on application-specific components.
relies on proper design of application-specific components, such as the schema of the database and the helper functions. Some of these components might require non-negligible engineering efforts. While there has been progress in automating some of these components <cit.>, there are still gaps in using them in some production settings.
Undetected mistakes. We observed cases where the LLM writes code that runs smoothly, but it may be totally wrong (e.g., due to string matching mistakes). We expect that things will improve in the future with more advances in LLMs and supporting tools.
Generalize to new questions. While the LLM performs well on seen questions, it still struggles when presented with questions that do not appear in the examples (see, e.g., Table <ref>). We believe that future models will have better generalizability.
Benchmark. Our current evaluation quantifies performance only for quantitative questions; for example, we exclude visualization queries from our analysis. Furthermore, the evaluation is based on a specific programming language (Python) and optimization solver (Gurobi).
§.§ Future Directions
We see our work as a cornerstone for future research in the area.
One interesting direction is incorporating human feedback (e.g., from supply chain planners) which could lead to significant performance improvements <cit.>. Another direction that we are currently examining is using smaller models (see, e.g., <cit.> and references therein) for the specific tasks of supply chain optimization; using such models allows for more affordable hosting and fine-tuning of the model. In particular, we are examining whether fine-tuning can help with interpreting unseen questions. On a related note, it is of interest to consider a hybrid framework that combines the strengths of different AI models, for example combining large LMs with smaller ones. A natural longer-term goal is to go beyond explainability and facilitate interactive optimization, where the user directly influences the optimization outcomes; this will require designing more comprehensive safeguards, to prevent costly mistakes.
§.§ Acknowledgements
We thank Sébastien Bubeck, Yin Tat Lee, Chi Wang, Erkang Zhu, Leonardo Nunes, Srikanth Kandula, Adam Kalai, Marco Molinaro, Luke Marshall, Patricia Kovaleski, Hugo Barbalho, Tamires Santos, Runlong Zhou, Ashley Llorens, Surajit Chaudhuri, and Johannes Gehrke from Microsoft Research for useful discussions. We also thank Brian Houser, Matthew Meyer, Ryan Murphy, Russell Borja, Yu Ang Zhang, Rojesh Punnath, Naga Krothapalli, Navaneeth Echambadi, Apoorav Trehan, Jodi Larson, and Cliff Henson from the Microsoft Cloud Supply Chain for their advice and support.
§ INTELLIGENT FULFILLMENT SYSTEM
In this section, we present a partial formulation of the optimization in the Intelligent Fulfillment System that assigns and ships servers from the warehouse to the data centers.
§.§ Main Decisions
We introduce the following variables:
* z_dt∈{0,1}: equals 1 if demand d docks on day t, and 0 otherwise
* u_dr∈{0,1}: equals 1 if demand d docks on row r, and 0 otherwise
* w_ds∈{0,1}: equals 1 if d is fulfilled using supplier s, and 0 otherwise
* y_d,dc,t∈{0,1}: equals 1 if d docks at datacenter dc on day t, and 0 otherwise.
* v_d,s,t≥ 0 : continuous auxiliary variable indicating whether demand d docks on day t using supplier s (the linking constraints below force it to act as a 0/1 indicator)
§.§ Constraints
This section describes some of the constraints in the formulation.
Docking day. The docking for each demand takes place on a single day.
∑_t z_dt≤ 1 ∀ d
Datacenter dockings. For each demand d, we dock at a datacenter dc on a specific day t only if the selected row belongs to that datacenter dc and the selected day is that particular day t.
∑_dc y_d,dc,t≤ z_dt ∀ d, t
∑_t y_d,dc,t = ∑_r ∈ rows(dc) u_dr ∀ d,dc
Datacenters' daily capacities. There are restrictions restr on the daily amount of dockings that sets of datacenters can handle. Let R_d denote the number of racks required for demand d.
∑_d, dc ∈ DC(restr) y_d,dc,t· R_d ≤DockRestrAvailCap(restr,t) ∀ restr ∈ Restrictions, t
Single supplier. Each demand must be fulfilled by a single supplier. A row is selected for a demand only if a supplier has been found.
∑_s w_ds≤ 1 ∀ d
u_dr≤∑_s w_ds ∀ d,r
Auxiliary supplier variables. Connecting variables v_dst with the rest of the variables.
z_dt = ∑_s v_dst ∀ d,t
w_ds = ∑_t v_dst ∀ d,s
Supply availability. We have a set of supply pools with a certain capacity (amount of available supply) evaluated at times ct. We need to make sure that the supply s we consume from each supply pool sp is available at the time t that we consume it. The time where each supply becomes available depends on its lead time.
∑_d, s ∈ sp, t ≤ leadtime(ct, d, s) v_dst≤Available_Supply(sp,ct) ∀ sp, ct
Overrides. Some demand-supply combinations might be undesirable or disallowed for some reason. These can be explicitly blocked. Let B denote the set of blocked pairs.
w_ds = 0 ∀ (d,s) ∈ B
§.§ Objective
Our goal is to minimize the total cost which is the aggregate of multiple components, including the cost of docking too early or too late compared to the ideal dock-date of each demand, the cost of not fulfilling demands, and the shipping cost, among others.
DockCost = ∑_d,t z_dt·Demand_Day_DockCost(d,t)
NoDockCost = ∑_d (1-∑_t z_dt) ·Unsatisfied_Cost(d)
ShippingCost = ∑_d,s w_ds·Transit_Ship_Cost(d,s)
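To make the structure above concrete, the following is a minimal gurobipy sketch of a toy instance of this formulation. The demand/supplier sets, cost tables, and the subset of constraints shown are made up for illustration; the production IFS model is far larger and also includes the datacenter, capacity, supply-availability, and override constraints listed above.

import gurobipy as gp
from gurobipy import GRB

# Toy sets and data (illustrative only).
demands = ["d1", "d2"]
suppliers = ["s1", "s2"]
days = list(range(1, 6))
dock_cost = {(d, t): abs(t - 3) for d in demands for t in days}   # ideal dock day = 3
unsat_cost = {d: 100 for d in demands}
ship_cost = {(d, s): 5 for d in demands for s in suppliers}

m = gp.Model("ifs_toy")
z = m.addVars(demands, days, vtype=GRB.BINARY, name="z")          # demand d docks on day t
w = m.addVars(demands, suppliers, vtype=GRB.BINARY, name="w")     # demand d uses supplier s
v = m.addVars(demands, suppliers, days, lb=0.0, name="v")         # auxiliary linking variable

m.addConstrs((z.sum(d, "*") <= 1 for d in demands), name="dock_day")
m.addConstrs((w.sum(d, "*") <= 1 for d in demands), name="single_supplier")
m.addConstrs((z[d, t] == v.sum(d, "*", t) for d in demands for t in days), name="link_z")
m.addConstrs((w[d, s] == v.sum(d, s, "*") for d in demands for s in suppliers), name="link_w")

m.setObjective(
    gp.quicksum(dock_cost[d, t] * z[d, t] for d in demands for t in days)
    + gp.quicksum(unsat_cost[d] * (1 - z.sum(d, "*")) for d in demands)
    + gp.quicksum(ship_cost[d, s] * w[d, s] for d in demands for s in suppliers),
    GRB.MINIMIZE,
)
m.optimize()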
§ ENGINEERING DETAILS
Figure <ref>, at the end of this document, presents a detailed screenshot of with IFS, including intermediate results for illustration purposes.
§.§ Useful Tricks
SQL: Many LLMs are trained on large amounts of SQL, so saving the optimization input and output data into a SQL database can make the system easier to use and more explainable (a minimal sketch follows the items below).
Logical simplification:
If the prompt is not designed carefully, the LLM can make simple logical mistakes (e.g., “not use" vs. “use", before vs. after, etc.).
Intermediate outputs. When dealing with complex prompts, providing intermediate outputs can help keep the LLM on track. By returning intermediate results or steps, the LLM can check the consistency of its process, making it easier to debug and refine.
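As a minimal sketch of the SQL trick above (assuming the plan is already available as a pandas DataFrame; the table and column names here are invented for illustration):

import sqlite3
import pandas as pd

# Illustrative slice of an optimizer output; the real plan has many more fields.
plan = pd.DataFrame({
    "demand_id": ["d1", "d2"],
    "supplier": ["s1", "s2"],
    "dock_date": ["2023-07-01", "2023-07-03"],
})

conn = sqlite3.connect("plan.db")
plan.to_sql("dock_plan", conn, if_exists="replace", index=False)

# The LLM can now answer many questions by emitting plain SQL instead of bespoke code.
query = "SELECT demand_id, dock_date FROM dock_plan WHERE supplier = 's1'"
print(pd.read_sql_query(query, conn))
conn.close()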
§.§ Failed Attempts
Chain of thought (CoT) failures. Unlike many recent studies <cit.> that have found that LLMs have strong CoT abilities, we found CoT is not helpful for writing complex code. This is another reason why we integrated the helper functions in the application-specific tools, which outperformed CoT. Our hypothesis is that if the LLM makes one mistake in the thinking chain, then the whole response would be wrong because correcting its own mistakes is hard.
Overuse of prompt engineering: While prompt engineering can often lead to improved results, overdoing it can sometimes lead to worse outcomes. When the prompts become too complex or too specific, the LLM might not understand them correctly or might overfit to the specific prompt structure, limiting its ability to handle a variety of questions.
§ COFFEE DISTRIBUTION EXAMPLE
§.§ Code
§.§ Question and Ground Truth Macros
|
http://arxiv.org/abs/2307.04484v1 | 20230710111419 | Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging | [
"Raziye Kubra Kumrular",
"Thomas Blumensath"
] | cs.LG | [
"cs.LG",
"physics.app-ph",
"physics.comp-ph"
] |
Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging
Raziye Kubra Kumrular and Thomas Blumensath
R. K. Kumrular and T. Blumensath are with the ISVR Signal Processing and
Audio Hearing Group, University of Southampton, Southampton SO17 1BJ, U.K.
(e-mail: [email protected] )
X-ray interaction with matter is an energy-dependent process that is contingent on the atomic structure of the constituent material elements. The most advanced models to capture this relationship currently rely on Monte Carlo (MC) simulations. Whilst these are very accurate models, in many problems in spectral X-ray imaging, such as data compression, noise removal, spectral estimation, and the quantitative measurement of material compositions, they are of limited use, as these applications typically require the efficient inversion of the model, that is, they require the estimation of the best model parameters for a given spectral measurement. However, current models that can be easily inverted typically only work when modelling spectra in regions away from their K-edges, so they have limited utility when modelling a wider range of materials. In this paper, we thus propose a novel, non-linear model that combines a deep neural network autoencoder with an optimal linear model based on the Singular Value Decomposition (SVD). We compare our new method to other alternative linear and non-linear approaches, a sparse model and an alternative deep learning model. We demonstrate the advantages of our method over traditional models, especially when modelling X-ray absorption spectra that contain K-edges in the energy range of interest.
Convolutional neural network (CNN), Denoising autoencoder, K-edge, Singular value decomposition (SVD), X-ray absorption spectrum
§ INTRODUCTION
X-ray Computed Tomography (XCT), which generates volumetric images based on measurements of X-ray transmission through an object, is a versatile imaging technique with applications in industry, security, medicine, and scientific investigation <cit.>. X-ray transmission is a function of X-ray energy, and the measurement of this dependency can be of significant importance in many applications. We are here interested in building models of this dependency that can help achieve this by allowing us to remove measurement noise, compress measurement data and constrain the estimation from limited measurements. In these applications, using models that both constrain the measurement whilst also allowing for easy estimation of the model parameters is crucial.
Whilst the physical interaction between photons and material can be modelled explicitly via very accurate Monte Carlo (MC) simulations<cit.>, these models do not allow for simple model inversion. We thus here develop models with few parameters (so-called low-dimensional models) that are easy to invert, that is, that allow us to easily compute optimal parameters for a given X-ray spectral observation. These models can then be used to create a parameterised function as a computational tool for spectral data analysis that has a range of significant applications in X-ray imaging. For example, traditional XCT reconstruction algorithms that ignore energy dependence produce image artefacts which can be removed when using invertible low-dimensional models <cit.>. Furthermore, low-dimensional models are crucial to remove measurement noise or constrain the ill-conditioned inverse problems that arise in several spectral imaging methods <cit.>.
Our work here is particularly motivated by our interest in measuring the spatial distribution of X-ray absorption spectra using commonly available lab-based X-ray tomography systems. There are several approaches to this. X-ray sources found in these systems generate X-ray photons with a range of energies (the X-ray source spectrum I_0(E)), though the X-ray detector does not normally differentiate different energy levels. To estimate absorption spectra, Dual-Energy CT uses two source spectra to allow spectral estimation <cit.> by utilising a two-parameter linear absorption spectral model. In Multi-Energy computed tomography (MECT), also called spectral X-ray tomography, spectrally resolved measurements are taken using photon counting detectors (PCD), though this comes at the cost of additional hardware requirements, a significant decrease in measurement speed, a significant increase in measurement noise as well as an increase in computational loads associated with the increase in measured data <cit.>. In all of these applications, a more accurate invertible low-dimensional model of the X-ray absorption spectra is of significant interest, especially when imaging a wide range of materials.
The attenuation of an X-ray beam with photons of a single energy (E) travelling along a path through an object is often modelled using the Beer-Lambert law:
I(E)=I_0(E)e^-∫μ(x,E) dx
where I(E) is the X-ray intensity measured by the detector, and I_0(E) is the X-ray intensity that would be measured by the detector without an object present. μ(x,E) is the energy-dependent X-ray linear attenuation coefficient (LAC) at position x along the X-ray beam and the integration is along the line of the X-ray path through the object.
For X-ray energies below about 1.02 MeV, X-ray material interactions are due to three primary phenomena: Rayleigh scattering (μ_R(E)); Compton scattering (μ_C(E)); and the Photoelectric effect (μ_P(E)) <cit.>. The total linear attenuation coefficient μ(E) can thus be written as:
μ(E)=μ_R(E)+ μ_C(E) +μ_P(E)
Figure <ref> shows the total linear attenuation of Aluminum (atomic-number(Z)=13) and Iodine (Z=53) and the contribution of each of these interactions.
We here show the LAC as a function of energy, focusing on the energy range between 20 keV and 150 keV commonly used in lab-based X-ray systems. Of particular interest for our paper will be the step in the LAC due to the Photoelectric effect (as seen in Fig <ref>b for Iodine at 33.17 keV), which appears at the K-shell binding energy of the atom. This step is known as the K-edge of the element and is unique for each element <cit.>.
The intrinsic dimensionality of the LACs using a linear principal component model has been studied previously in different settings <cit.>. We here instead investigate non-linear low-dimensional models of the X-ray absorption spectrum that work for all elements (Z≤ 92) and over energy ranges found in typical lab-based tomography systems (20keV≤ E≤150keV).
§ STATE OF THE ART ABSORPTION SPECTRUM MODELLING
§.§ Representation of Linear Attenuation Coefficient with Linear Models
Different X-ray absorption models have been proposed in the literature. These models typically assume absorption spectra to be a linear combination of two or more basis functions, which are assumed to be independent of the material <cit.>.
§.§.§ Photoelectric-Compton Basis (PCB) model
The first model is based on (<ref>), where Rayleigh scattering is assumed to be negligible <cit.>. The Photoelectric-Compton Basis (PCB) model thus only uses basis functions to represent the Photoelectric effect and Compton scattering, which are assumed to be material invariant.
This holds for energies far from the K-edge energy of a material (see Fig. <ref>a), where the Photoelectric absorption and Compton scattering phenomenon can be approximated as <cit.>:
μ(E)=a_pf_p(E) + a_cf_KN(E)
where f_p(E) and f_KN(E) are functions of energy only and capture the energy dependence of the Photoelectric absorption and Compton scattering. a_p and a_c, on the other hand, are parameters that are independent of energy and instead only vary with the material (they are functions of the electron density (ρ_e) and the atomic number of the material). a_p and a_c are thus two parameters that can be used to fit this linear two-dimensional model to data <cit.>. For a single material, they can be derived as functions of ρ_e and Z as:
a_p=ρ_eC_PZ^m
a_c=ρ_e,
where C_P = 9.8×10^-24 <cit.> is a constant, and m = 3.8 was determined experimentally.
The energy dependence of the Photoelectric effect is approximated by f_p(E)=1/E^n and it is possible to approximate the energy dependence of Compton scattering using the Klein-Nishina function (<ref>)<cit.>.
f_KN(E) = (1+α)/α^2 [ 2(1+α)/(1+2α) - (1/α)ln(1+2α) ] + (1/(2α))ln(1+2α) - (1+3α)/(1+2α)^2
where α is E/E_e and E_e ≈ 511 keV denotes the rest mass energy of an electron. This two-dimensional linear model is suitable for low atomic number materials (Z<18) that do not have a K absorption edge in the range of energies considered, though the approximation error increases close to the K-edge as well as for higher energies <cit.>.
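As a small illustration of the PCB model above, the following numpy sketch evaluates f_KN(E) and the two-parameter approximation. The photoelectric exponent n is not fixed above, so n = 3 is used here as an illustrative assumption, and the coefficients a_p and a_c are made up.

import numpy as np

E_e = 511.0  # electron rest mass energy in keV

def f_KN(E_keV):
    # Klein-Nishina energy dependence of Compton scattering, as given above.
    a = np.asarray(E_keV, dtype=float) / E_e
    term1 = (1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    term2 = np.log(1 + 2 * a) / (2 * a)
    term3 = -(1 + 3 * a) / (1 + 2 * a) ** 2
    return term1 + term2 + term3

def f_p(E_keV, n=3.0):
    # Photoelectric energy dependence 1/E^n; n = 3 is an assumed, commonly used value.
    return 1.0 / np.asarray(E_keV, dtype=float) ** n

def pcb_model(E_keV, a_p, a_c):
    # Two-parameter Photoelectric-Compton basis approximation of the LAC.
    return a_p * f_p(E_keV) + a_c * f_KN(E_keV)

E = np.linspace(20, 150, 26)                   # energy bins used later in the paper
mu_approx = pcb_model(E, a_p=2.0e3, a_c=0.2)   # made-up coefficients for illustration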
§.§.§ Material-basis (MB) model
The second linear model uses material-basis (MB) functions, which are two or more LAC functions taken from previously chosen reference materials <cit.>. This model is particularly popular in medical imaging, where the imaged object can be modelled using a basis function for the LAC of bone and one for soft tissue (or water) <cit.>. However, it is hard to express a wider range of materials with just two reference materials in the MB model, and this model does not provide a direct estimate of the electron density and effective atomic number <cit.>.
§.§.§ Learned linear representations
Basis functions for low-dimensional modelling of X-ray absorption spectra can also be learned from training data. This can be done using the singular value decomposition (SVD), which also gives an estimate of the approximation error that can be achieved. The SVD computes the best linear approximation to a given training dataset in the mean squared error sense for a given size of subspace. There is a close relationship between SVD and principal component analysis (PCA) <cit.>, which has been used several times to derive low-dimensional linear models <cit.>.
For materials without K-edge in the energy range, it has been found that SVD models provide good approximations to LACs using two basis functions <cit.>. Furthermore, the learned basis functions are very similar to μ_p(E) and μ_c(E) <cit.>.
However, these models no longer work close to a K-edge <cit.>, though increasing the number of basis functions naturally has been found to increase performance also in these cases <cit.>.
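A minimal numpy sketch of such a learned linear model is given below, assuming the training spectra are available as a (samples × energy bins) matrix; random data stand in for the NIST-derived LACs here.

import numpy as np

rng = np.random.default_rng(0)
X_train = rng.random((1000, 26))     # stand-in for training spectra (samples x 26 energy bins)

# The leading right-singular vectors form the learned energy basis.
_, _, Vt = np.linalg.svd(X_train, full_matrices=False)
B = Vt[:2]                            # two-component basis, shape (2, 26)

def svd_approx(spectrum, basis):
    # Least-squares projection onto the learned subspace (basis rows are orthonormal).
    coeffs = basis @ spectrum
    return basis.T @ coeffs, coeffs

x = rng.random(26)
x_hat, coeffs = svd_approx(x, B)
residual = x - x_hat                  # this residual is what the non-linear model targets later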
§.§ Non-linear models
Given the inability of low-dimensional linear models to capture K-edges in absorption spectra, non-linear models might be suitable alternatives. As there are no known analytic models that capture the change in absorption around the K-edge of all materials in a succinct parameterization, learned non-linear models are a viable alternative.
§.§.§ Sparse Model
We have already introduced the idea of using a material basis function model. It is possible to include a basis function for each material in the periodic table, but as a linear model, this would require us to fit many coefficients. Instead, to derive models with few non-zero coefficients when using a larger set of material basis functions, sparse models can be used <cit.>. Whilst the generative model here is still linear (i.e. a spectrum is modelled as a linear computation of basis function), the estimation of the weights now becomes a non-linear process. To get a low-dimensional model, the basic assumption then is that a given spectrum represents a material that is a combination of a few elements.
Sparse models have been suggested as a complement to traditional regression methods for better identification of spectra in Raman spectroscopy <cit.>, though to the best of our knowledge, have not yet been used to model X-ray absorption spectra.
§.§.§ Neural Network based Models
Recent advances in deep learning now allow the estimation of complex non-linear relationships in complex data.
A suitable model for our purpose is an autoencoder, which is a deep neural network that can learn a non-linear low-dimensional representation <cit.>.
The network consists of two main components; a non-linear encoder, which compresses the input into a latent space representation, and a non-linear decoder which reconstructs the data from the low-dimensional representation <cit.>. For a single material, autoencoders have already demonstrated the ability to capture fine detail in the absorption spectrum around K-edge energies <cit.>.
To increase robustness and to incorporate noise suppression, an autoencoder is often trained as a denoising autoencoder (DAE), where the difference in training is that the input to the encoder is corrupted by noise during training.
§ MATERIAL AND METHODS
In this paper, we hypothesize that a single non-linear low-dimensional latent representation will allow us to model the X-ray absorption spectra of all elements, including those that have a K-edge in the energy range of interest. As low Z materials without a K-edge in the energy range under investigation are already well approximated using two linear basis functions, we propose to model the difference between a given spectrum and an optimal two-dimensional linear approximation.
§.§ Proposed Non-linear Model
Our proposed model combines a non-linear autoencoder with a two-dimensional SVD-based representation as shown in Fig. <ref>.
The SVD learns the effect of the Photoelectric effect and Compton scattering for materials with low atomic numbers, where we do not have a K-edge in the energy range of interest. The autoencoder then uses a 3-node latent representation to try and model the deviation of the true spectrum from the linear model of materials that have a K-edge in the energy range.
§.§ Low Dimensional Representation of X-ray Absorption Spectrum with the Autoencoder
There are different network architectures that can be used as the autoencoder in our model. We here compare two convolutional neural networks (CNNs) and three fully connected neural networks (FCNNs). The most basic FCNN simply consists of a single input layer, the hidden (code) layer and an output layer, with ReLU non-linearities in the input and output layers. The other four network architectures are shown in Fig. <ref> and Fig. <ref>. These architectures all have 3 nodes in the latent space when used jointly with the two-component SVD model, or 5 nodes if used without an initial SVD approximation.
The main difference between FCNN2 and FCNN3 is the layer structure. In the FCNN3, the number of nodes shrinks gradually in the encoder and expands gradually in the decoder, which is a regular layer structure for the autoencoder, whilst, for FCNN2, the number of nodes in two consecutive layers first shrinks by about half before slightly expanding again in the next layer, a pattern that is repeated in the encoder and inverted in the decoder. Batch normalization is used to prevent overfitting. The main difference between the two convolutional networks is that CNN1 uses strided convolutions, whilst in CNN2 we use max pooling. To apply our idea to datasets sampled at different energy levels (131 energy levels), the CNN2 architecture is modified by creating much deeper layers but using the same node number in the latent space.
§.§ A Sparse Regularized Model for X-ray Absorption Spectrum
As a comparison, we also implement a sparse model using a material basis function matrix. Let Y ∈ ℝ^N be the X-ray absorption spectrum of a chemical mixture, and A=[a_1,a_2,...,a_M] ∈ ℝ^N×M a matrix whose columns are the material basis functions of all elements of interest. To compute a sparse representation X, we solve the lasso problem:
min_X ‖𝐘 - AX‖_2 + λ‖X‖_1
where we use the FISTA algorithm for optimisation.
We here generate the material basis function matrix by using the LAC values for the 92 elements provided by the National Institute of Standards and Technology (NIST) database <cit.>. As the solution to the above lasso problem does potentially provide approximations of the data with more than 5 basis functions, for consistency with our 5 parameter model, we restrict the solution by selecting the 5 largest elements (in magnitude) of X and then fitting these values by computing a least squares solution using only the selected 5 material basis functions.
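The following numpy sketch implements this sparse model: FISTA applied to the standard squared-ℓ2 lasso objective 1/2‖Y-AX‖_2^2 + λ‖X‖_1 (the form FISTA handles directly), followed by the top-5 least-squares refit described above. A random matrix stands in for the NIST material-basis matrix, and the mixture weights are invented.

import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_lasso(A, y, lam, n_iter=500):
    # Minimize 0.5*||y - A x||_2^2 + lam*||x||_1 with FISTA.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.random((26, 92))                    # stand-in for the 26 x 92 material-basis matrix
y = A[:, [10, 52]] @ np.array([0.7, 0.3])   # spectrum of a made-up two-element mixture
x_sparse = fista_lasso(A, y, lam=1e-3)

# Keep the 5 largest coefficients (in magnitude) and refit them by least squares.
support = np.sort(np.argsort(np.abs(x_sparse))[-5:])
coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)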
§.§ Traditional methods
To compare the two non-linear models above to traditional linear and non-linear models, we furthermore implemented an SVD-based method, where we selected the largest 5 components to provide a 5-dimensional linear model. We also implemented our three autoencoder models without the initial 2-dimensional SVD model by extending the dimension of the hidden layer to 5. Thus, all our models could be used to fit 5 parameters into a spectrum.
§.§ Dataset of the simulated X-ray absorption spectrum
X-ray absorption spectra have been simulated using the linear attenuation coefficients of the 92 chemical elements with Z≤ 92. LAC values were obtained by multiplying MAC (Mass attenuation coefficient) with average mass densities obtained from the NIST <cit.>.
The energy range of interest was chosen to be between 20 keV to 150 keV, which is the available source energy range found in many lab-based X-ray tubes. For computational efficiency, spectra were re-sampled into 26 equally sized energy bins, though similar results can be achieved with a finer energy resolution.
We generated a range of different datasets, consisting of combinations of between 1 and 5 different elements, with some datasets having pre-specified numbers of elemental spectra with K-edges. All datasets are summarised in Table <ref>. Each mixture is generated by randomly choosing the elements (possibly with restrictions on the required numbers of K-edges) and then combining them by multiplying each by the standard elemental density for that material as well as a random scalar drawn from a uniform distribution in the range between 0 and 1. To obtain consistent data, the combined LACs were standardized after generation. To train the de-noising autoencoders, Gaussian noise with zero mean and a standard deviation of 0.1 was added to generate a noisy version of each dataset.
We created various datasets to evaluate the proposed method and compare it with other methods. Table <ref> shows the names of the generated datasets, where the subscript indicates the number of elements in each mixture in that dataset, e.g. each mixture in D_2E consists of two randomly selected elements, as well as the number of the elements in each mixture that have K-edges, e.g. each mixture in D_2E,2K contains two elements with K-edges in the energy range of interest (i.e. (Z>42)). The dataset containing 131 energy levels (D_2E,131) was generated the same way as the other datasets, the only difference being that it is sampled at every energy level.
Example spectra are shown in Figure <ref>a where we show noisy and noise-free spectra from D_2E, and Fig. <ref>b, where we show two example spectra from D_2E,0K.
§.§ Loss function
To evaluate the performance of the different methods, we use the normalised mean squared error (NMSE):
NMSE = ‖Y-Ŷ‖^2/‖Y‖^2,
where Y is the true X-ray absorption spectrum and Ŷ is the predicted X-ray absorption spectrum. ‖Y-Ŷ‖ is the l_2 norm of the error between the true and the predicted spectrum, while ‖Y‖ is the l_2 norm of the true spectrum.
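In code, this metric is simply (numpy sketch):

import numpy as np

def nmse(y_true, y_pred):
    # Normalised mean squared error between true and predicted spectra.
    return np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)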
§ EXPERIMENTS AND RESULTS
We test the sparse and machine learning-based non-linear models and compare them to the linear methods. In the rest of the paper, we referred to the proposed hybrid models as SVD/autoencoder and the autoencoder models with 5 nodes in the latent space layer as 5-dimensional autoencoders. We also fit an SVD model using the largest 5 components, which we call the 5-dimensional SVD. The sparse model, where we fit the largest 5 components after sparse decomposition is called the Fista model.
All models that include one of the autoencoders were trained using the same parameters, using an Adam optimiser with a batch size of 64 and running for 300 epochs with a mean squared error loss function.
Unless otherwise stated, all autoencoder-based models were trained on the data-set D_2E, which was divided by random training (72%), validation (20%) and test (8%). The validation set was used to validate the model performance during training.
For the SVD/autoencoder model, we trained the SVD and the autoencoder separately, starting by fitting the SVD using data without K-edges, namely D_2E,0K. We then trained the autoencoder in the SVD/autoencoder model with the training data from D_2E, which also included simulated absorption spectra with K-edges. For the training of the autoencoder part of the SVD/autoencoder model, each spectrum was first projected onto the SVD subspace and the residual error was used to train the autoencoder. The output of the autoencoder was then added back to the approximation computed with the SVD model to provide the spectral approximation (as shown in Fig. <ref>).
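A minimal PyTorch sketch of this two-stage training is given below. Random data stand in for D_2E,0K and D_2E, a small fully connected autoencoder with a 3-node latent space stands in for CNN2 (whose exact hyperparameters are not reproduced here), and full-batch updates replace the batch size of 64 used in the paper; the noise is added to the autoencoder input as in denoising-autoencoder training.

import numpy as np
import torch
from torch import nn

rng = np.random.default_rng(0)
X_noK = rng.random((2000, 26)).astype(np.float32)   # stand-in for D_2E,0K (no K-edges)
X_all = rng.random((4000, 26)).astype(np.float32)   # stand-in for D_2E (with K-edges)

# Stage 1: two-component SVD basis fitted on K-edge-free spectra only.
_, _, Vt = np.linalg.svd(X_noK, full_matrices=False)
B = torch.tensor(Vt[:2])                             # (2, 26), orthonormal rows

def svd_part(x):                                     # x: (batch, 26)
    return (x @ B.T) @ B                             # projection onto the SVD subspace

# Stage 2: autoencoder trained on the projection residual, 3-node latent space.
ae = nn.Sequential(
    nn.Linear(26, 16), nn.ReLU(), nn.Linear(16, 3),  # encoder
    nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 26),  # decoder
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.tensor(X_all)
residual = X - svd_part(X)
noisy = residual + 0.1 * torch.randn_like(residual)  # denoising-autoencoder input

for epoch in range(300):
    opt.zero_grad()
    loss = loss_fn(ae(noisy), residual)
    loss.backward()
    opt.step()

# Full model prediction: linear SVD part plus the learned non-linear correction.
with torch.no_grad():
    X_hat = svd_part(X) + ae(noisy)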
We also used the same training dataset from D_2E to fit the 5-dimensional SVD model. For the FISTA model, the sparsity parameter (λ) was optimised for optimal performance on the same dataset. As the SVD is known to provide the best linear low-dimensional approximation in the mean squared error sense, we do not report results for other linear models.
After the training step, all models were tested on the test dataset of D_2E, and each model was evaluated with by plotting Box-whisker plots of the MMSE for each spectrum in the test data. Fig. <ref> shows results for the SVD/autoencoder models, the 5-dimensional autoencoder models, the 5-dimensional SVD model as well as the Fista model. From these results, we see that CNN2 performs better as the non-linear model, both on its own or in conjunction with the initial SVD projection. (Similar results were found when analysing other datasets (results not shown for brevity).) For the remainder of this paper, we thus only report the results for the CNN2-based models, the 5-dimensional SVD model and the Fista model.
To assess the performance of our approach on the dataset with finer energy resolution, we followed the same training and testing steps with modified architectures. In this experiment, we focused on two architectures that are extended versions of CNN2 (which performed better than the others). Figure <ref> shows the results of the modified versions of SVD/CNN2 and CNN2, along with the Fista and 5-dimensional SVD results, for the D_2E,131 dataset. The average NMSE performance here is similar to that found for the D_2E dataset.
To further demonstrate this, the modelling performance of our approach was also tested using the D_3E,0K (see Fig. <ref>) of 3 element mixtures without K-edge. For this dataset without K-edges, we again found that the SVD/autoencoder model no longer outperforms all other methods, and in fact, the 5-dimensional CNN2 now performed slightly better in terms of the mean of the NMSE errors. Crucially, the 5-dimensional SVD and Fista models showed almost the same performance. Of interest here is also the fact that the 5-dimensional SVD does not work as well as the non-linear models, which is likely due to the fact that the linear approximation used is not valid in energy ranges close to K-edges.
To see how the performance of our methods changes when the data has more materials with K-edges in their absorption spectra, we plot the NMSE of the datasets (D_3E,1K, D_3E,2K, D_3E,3K, and D_5E,5K) in Fig. <ref> for the SVD/CNN2, the 5-dimensional CNN2, the 5-dimensional SVD and Fista models. Whilst there is a decrease in the performance of the non-linear models if we increase the number of elements with K-edges, their relative performance is consistent, with Fista, SVD/CNN2 and CNN2 working better than the 5-dimensional SVD model in general.
We also trained our best architectures (SVD/CNN2 and CNN2), the 5-dimensional SVD and Fista with two different datasets of 5-element mixtures to see whether their performance depends on the training set. We here used D_5E and D_5E,0K. The training in this experiment is the same as in the previous one; the only difference is the dataset used for training. After training, these architectures were tested with D_2E,2K and D_5E,5K. Figures <ref>a and <ref>b show the NMSE results of this experiment, and our models (SVD/CNN2, CNN2 and Fista) still have lower errors than the traditional models.
§ DISCUSSION AND CONCLUSIONS
Accurate and precise modelling of the X-ray absorption spectrum of objects has been important for reducing image artefacts <cit.>, estimating material distributions within the object<cit.>, and constraining the ill-conditioned inverse problems <cit.> that arise in
several spectral imaging methods. In this paper, we considered non-linear models of the energy-dependent X-ray absorption spectrum for all possible materials. We introduced a novel non-linear model, consisting of a linear SVD and a deep learning-based approach, that accurately represents the LACs of K-edge-containing materials using several parameters. Furthermore, we evaluated the performance of different deep learning architectures, traditional linear models, and a sparse model for various simulated objects.
As seen in Fig. <ref>, all complex architectures (except SVD/FCNN1 and FCNN1), and the Fista model have a lower approximation error than the best linear model (5-dimensional SVD). Crucially, the traditional linear model has almost the same error, which is 5% higher than the SVD/FCNN1 and FCNN1 models. This primarily shows that a non-linear model is useful for modelling K-edges. Furthermore, this result suggests that more layers should be used while designing the deep learning architectures for modelling. The last and most important result is that the SVD/CNN2 and CNN2 architectures showed the best performance compared to other architectures in the experiment with the D_2E test dataset.
The experiments with the D_2E,131 dataset showed that our models retain the same sensitivity even at finer energy resolution. Interestingly, they show that if objects with finer resolution have a K-edge in their absorption spectra, the 5-dimensional SVD approach cannot capture it. As can be seen in Fig. <ref>, the SVD model has a higher error (10%) than all other models. For computational efficiency, we did not conduct any further experiments with the 131 energy level dataset, even though our models achieved better performance.
For objects whose K-edge in the X-ray absorption spectrum lies outside of the considered energy range, there is some loss in the SVD/autoencoder approach, as can be seen in Fig.<ref>. The main reason for this is that we have not trained the autoencoder part in the SVD/autoencoder model with non-K-edge materials. We trained the non-linear step in this model with the residual error (i.e. to model the K-edges), whilst the linear step is trained to model the non-K-edge X-ray absorption spectrum. Since the training methods used to model the non-K-edge X-ray absorption spectrum are not the same (such as non-linear and linear), this is likely to affect the performance of our approach. However, the errors are lower than the best linear model in the SVD/autoencoder, the 5-dimensional autoencoder and the Fista model (error value below 2% for CNN2, below 4% for SVD/CNN2 and Fista). Interestingly, the best linear model has a higher error than the other model, even for objects that do not contain K-edges in the X-ray absorption spectrum. Although traditional models are used to model the X-ray absorption spectra of non-K-edge materials in the selected energy range in the literature, these results show that our models can also be used for these spectra.
All experiments with objects with various numbers of K-edges in the X-ray absorption spectrum suggest that our models can be more accurate than the traditional model.
Furthermore, the errors of the SVD/autoencoder and the 5-dimensional autoencoder models increase when the number of K-edges in the X-ray absorption spectrum increases, as seen in Fig. <ref>a, <ref>b, <ref>c and <ref>d. Interestingly, the errors of the Fista model for all experiments stayed nearly the same. The reason for this is that there is no training step in the Fista model (apart from fitting the sparsity parameter). Crucially, with the five-element dataset test (as shown in Fig. <ref>), we found that our models work better than the traditional model, even when trained with more complex datasets.
Our experimental results indicate that using the SVD/autoencoder model approach has significant advantages in the representation of the X-ray absorption spectrum of high atomic number materials compared to the linear model. In addition, the 5-dimensional autoencoder method has been experimentally shown to work better than traditional linear methods for non-K-edge materials (low atomic number materials) and also complex datasets. Whilst the Fista model did not show good performance for objects that don't have a K-edge, it has good accuracy for objects that have K-edges. The overall utility of our approach lies in that exploring the so-called low-dimensional representation of the X-ray absorption spectrum can be a valuable tool for analyzing the information on the scanned material.
|
http://arxiv.org/abs/2307.04330v1 | 20230710035411 | A uniform and pressure-robust enriched Galerkin method for the Brinkman equations | [
"Seulip Lee",
"Lin Mu"
] | math.NA | [
"math.NA",
"cs.NA",
"65N15, 65N30, 76D07"
] |
This paper presents a pressure-robust enriched Galerkin (EG) method for the Brinkman equations with minimal degrees of freedom based on EG velocity and pressure spaces. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. We derive, analyze, and compare two EG methods in this paper: standard and robust methods. The standard method requires a mesh size to be less than a viscous parameter to produce stable and accurate velocity solutions, which is impractical in the Darcy regime. Therefore, we propose the pressure-robust method by utilizing a velocity reconstruction operator and replacing EG velocity functions with a reconstructed velocity. The robust method yields error estimates independent of a pressure term and shows uniform performance from the Stokes to Darcy regimes, preserving minimal degrees of freedom. We prove well-posedness and error estimates for both the standard and robust EG methods. We finally confirm theoretical results through numerical experiments with two- and three-dimensional examples and compare the methods' performance to support the need for the robust method.
10pt
Keywords: enriched Galerkin finite element methods, Brinkman equations, pressure-robust, velocity reconstruction, uniform performance
§ INTRODUCTION
We consider the stationary Brinkman equations in a bounded domain Ω⊂ℝ^d for d=2,3 with simply connected Lipschitz boundary ∂Ω: Find fluid velocity 𝐮:Ω→ℝ^d and pressure p:Ω→ℝ such that
-μΔ𝐮 + μ/K 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
𝐮=0 on ∂Ω,
where μ is fluid viscosity, K is media permeability, and 𝐟 is a given body force.
The Brinkman equations describe fluid flow in porous media characterized by interconnected pores that allow for the flow of fluids, considering both the viscous forces within the fluid and the resistance from the porous media. The Brinkman equations provide a mathematical framework for studying and modeling complex phenomena such as groundwater flow, multiphase flow in oil reservoirs, blood flow in biological tissues, and pollutant transport in porous media.
In this paper, for simplicity, we consider the scaled Brinkman equations
-νΔ𝐮 + 𝐮 + ∇p = 𝐟 in Ω,
∇·𝐮 = 0 in Ω,
𝐮=0 on ∂Ω,
where ν∈[0,1] is a viscous parameter.
Mathematically, the Brinkman equations can be seen as a combination of the Stokes and Darcy equations.
When ν→1, the Brinkman equations approach a Stokes regime affected by the viscous forces, so standard mixed formulations require the H^1-conformity for velocity.
On the other hand, since the Darcy model becomes more prominent as ν→ 0, finite-dimensional spaces for velocity are forced to satisfy the H(div)-conformity.
This compatibility in velocity spaces makes it challenging to construct robust numerical solvers for the Brinkman equations in both the Stokes and Darcy regimes.
The numerical tests in <cit.> show that standard mixed methods with well-known inf-sup stable Stokes elements, such as MINI and Taylor-Hood elements, produce suboptimal orders of convergence in the Darcy regime.
Moreover, with piecewise constant approximations for pressure, the standard methods' velocity errors do not converge in the Darcy regime, while mesh size decreases.
On the other hand, Darcy elements such as Raviart-Thomas and Brezzi-Douglas-Marini do not work for the Stokes domain because they do not satisfy the H^1-conformity.
Therefore, the development of robust numerical solvers for the Brinkman equations has had considerable attention.
There have been three major categories in developing robust numerical methods for the Brinkman equations. The first category considers Stokes/Darcy elements and adds stabilization (or penalty) terms or degrees of freedom to impose normal/tangential continuity, respectively. This approach allows Stokes elements to cover the Darcy regime <cit.> or H(div)-conforming finite elements to be extended to the Stokes regime <cit.>. Also, the stabilized method in <cit.> coarsens a pressure space and applies a stabilization term on pressure, while the robust method in <cit.> uses an enlarged velocity space. The second approach is to introduce another meaningful unknown and define its suitable formulation and finite-dimensional space, such as velocity gradient <cit.>, vorticity <cit.>, and Lagrange multipliers at elements' boundaries <cit.>. The third direction is the development of a velocity reconstruction operator, first introduced in <cit.>, mapping Stokes elements into an H(div)-conforming space. In a discrete problem for the Brinkman equations, reconstructed velocity functions replace Stokes elements in the Darcy term and the test function on the right-hand side. This idea has been adopted for a uniformly robust weak Galerkin method for the Brinkman equations <cit.>, which inspires our work because of its simplicity in modification.
Our research focuses on developing a robust numerical method for the Brinkman equations with minimal degrees of freedom. The enriched Galerkin (EG) velocity and pressure spaces have been proposed by <cit.> for solving the Stokes equations with minimal degrees of freedom. The velocity space consists of linear Lagrange polynomials enriched by a discontinuous, piecewise linear, and mean-zero vector function per element, while piecewise constant functions approximate the pressure. More precisely, a velocity function 𝐯=𝐯^C+𝐯^D consists of a continuous linear Lagrange polynomial 𝐯^C and a discontinuous piecewise linear enrichment function 𝐯^D, so interior penalty discontinuous Galerkin (IPDG) formulations are adopted to remedy the discontinuity of 𝐯^D. These velocity and pressure spaces satisfy the inf-sup condition for the Stokes equations, so they are stable Stokes elements.
We first observe a standard EG method derived from adding the Darcy term (𝐮_h,𝐯_h)_𝒯_h to the Stokes discrete problem in <cit.>.
Our numerical analysis and experiments show that the standard EG method provides stable solutions and convergent errors for the Brinkman equations only if the mesh size satisfies the condition h<√(ν), which is impractical in the Darcy regime (ν→0). Hence, inspired by <cit.>, we use the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space, whose consequent action is preserving the continuous component 𝐯^C and mapping only the discontinuous component 𝐯^D to the lowest-order Raviart-Thomas space.
Therefore, with this simple modification, our resulting EG method yields pressure-robust error estimates and shows uniform performance from the Stokes to Darcy regime without any restriction in a mesh size, which is verified by our numerical analysis and experiments. Through two- and three-dimensional examples, we compare the numerical performance of our robust EG and the standard EG methods with the viscous parameter ν and mesh size h. The numerical results demonstrate why the standard EG method is not suitable for the Brinkman equations in the Darcy regime and show that the robust EG method has uniform performance in solving the Brinkman equations.
The remaining sections of this paper are structured as follows:
Some important notations and definitions are introduced in Section <ref>.
In Section <ref>, we introduce the standard and robust EG methods for the Brinkman equations, recalling the EG velocity and pressure spaces <cit.> and the velocity reconstruction operator <cit.>.
We prove the well-posedness and error estimates of the standard EG method in Section <ref>.
In Section <ref>, we show the robust method's well-posedness and error estimates that mathematically verify the uniform performance from the Stokes to Darcy regimes.
Section <ref> validates our theoretical results through numerical
experiments in two and three dimensions. Finally, we summarize our contribution in this paper and discuss
related future research in Section <ref>.
§ PRELIMINARIES
In this section, we introduce some notations and definitions used in this paper.
For a bounded Lipschitz domain 𝒟∈ℝ^d, where d=2,3, we denote the Sobolev space as H^s(𝒟) for a real number s≥ 0.
Its norm and seminorm are denoted by ·_s,𝒟 and |·|_s,𝒟, respectively.
The space H^0(𝒟) coincides with L^2(𝒟), and the L^2-inner product is denoted by (·,·)_𝒟.
When 𝒟=Ω, the subscript 𝒟 will be omitted.
This notation is generalized to vector- and tensor-valued Sobolev spaces.
The notation H_0^1(𝒟) means the space of v∈ H^1(𝒟) such that v=0 on ∂𝒟, and L_0^2(𝒟) means the space of v∈ L^2(𝒟) such that (v,1)_𝒟=0.
The polynomial spaces of degree less than or equal to k are denoted as P_k(𝒟).
We also introduce the Hilbert space
H(div,𝒟):={𝐯∈ [L^2(𝒟)]^d: div 𝐯∈ L^2(𝒟)}
with the norm
‖𝐯‖_H(div,𝒟)^2:=‖𝐯‖_0,𝒟^2+‖div 𝐯‖_0,𝒟^2.
For the discrete setting, we assume that there exists a shape-regular triangulation 𝒯_h of Ω whose elements T∈𝒯_h are triangles in two dimensions and tetrahedrons in three dimensions.
Also, ℰ_h denotes the collection of all edges/faces in 𝒯_h, and ℰ_h=ℰ_h^0∪ℰ_h^∂, where ℰ_h^0 is the collection of all the interior edges/faces and ℰ_h^∂ is that of the boundary edges/faces.
For each element T∈𝒯_h, let h_T denote the diameter of T and 𝐧_T (or 𝐧) denote the outward unit normal vector on ∂ T.
For each interior edge/face e∈ℰ_h^0 shared by two adjacent elements T^+ and T^-, we let 𝐧_e be the unit normal vector from T^+ to T^-.
For each e∈ℰ_h^∂, 𝐧_e denotes the outward unit normal vector on ∂Ω.
In a triangulation 𝒯_h, the broken Sobolev space is defined as
H^s(𝒯_h):={v∈ L^2(Ω):v|_T∈ H^s(T), ∀ T∈𝒯_h},
equipped with the norm
‖v‖_s,𝒯_h:=(∑_T∈𝒯_h‖v‖^2_s,T)^1/2.
When s=0, the L^2-inner product on 𝒯_h is denoted by (·,·)_𝒯_h.
Also, the L^2-inner product on ℰ_h is denoted as ⟨·,·⟩_ℰ_h, and the L^2-norm on ℰ_h is defined as
‖v‖_0,ℰ_h:=(∑_e∈ℰ_h‖v‖^2_0,e)^1/2.
The piecewise polynomial space corresponding to the broken Sobolev space is defined as
P_k(𝒯_h) = {v∈ L^2(Ω): v|_T∈ P_k(T), ∀ T∈𝒯_h}.
In addition, the jump and average of v on e∈ℰ_h are defined as
⟦v⟧:= v^+-v^- on e∈ℰ_h^0, ⟦v⟧:= v on e∈ℰ_h^∂,
{v}:= (v^++v^-)/2 on e∈ℰ_h^0, {v}:= v on e∈ℰ_h^∂,
where v^± is the trace of v|_T^± on e∈∂ T^+∩∂ T^-. These definitions are extended to vector- and tensor-valued functions.
We finally introduce the trace inequality that holds for any function v∈ H^1(T),
‖v‖_0,e^2≤ C(h_T^-1‖v‖_0,T^2+h_T‖∇ v‖_0,T^2).
§ ENRICHED GALERKIN METHODS FOR THE BRINKMAN EQUATIONS
We first introduce the enriched Galerkin (EG) finite-dimensional velocity and pressure spaces <cit.>.
The space of continuous components for velocity is
𝐂_h = {𝐯^C ∈ [C^0(Ω)]^d : 𝐯^C|_T∈ [P_1(T)]^d, ∀ T ∈𝒯_h}.
The space of discontinuous components for velocity is defined as
𝐃_h = {𝐯^D ∈ [L^2(Ω)]^d : 𝐯^D|_T = c ( 𝐱 - 𝐱_T), c ∈ℝ, ∀ T ∈𝒯_h},
where 𝐱_T is the barycenter of T∈𝒯_h.
Thus, the EG finite-dimensional velocity space is defined as
𝐕_h := 𝐂_h ⊕ 𝐃_h.
We note that any function 𝐯∈𝐕_h consists of unique continuous and discontinuous components, 𝐯=𝐯^C+𝐯^D for 𝐯^C∈𝐂_h and 𝐯^D∈𝐃_h.
At the same time, the EG pressure space is
Q_h := { q ∈ L_0^2(Ω) : q|_T ∈ P_0(T), ∀ T ∈𝒯_h}.
Therefore, we formulate a standard EG method for the Brinkman equations with the pair of the EG spaces 𝐕_h× Q_h by adding the Darcy term to the Stokes formulation <cit.>.
This algorithm employs interior penalty discontinuous Galerkin (IPDG) formulations because any EG velocity function in 𝐕_h has a discontinuity.
IPDG formulations include two penalty terms scaled by h_e with the penalty parameters ρ_1 and ρ_2.
The method provides reliable numerical solutions in the Stokes regime.
However, this approach may not be effective in solving the Brinkman equations in the Darcy regime because it requires
H(div)-conforming discrete velocity functions. Moreover, the method's velocity error bounds may depend on a pressure term inversely proportional to ν.
For this reason, we develop a pressure-robust EG method that produces stable and accurate solutions to Brinkman problems with any value of ν∈(0,1].
First, the velocity reconstruction operator <cit.> is defined as ℛ: 𝐕_h→ℬDM_1(𝒯_h)⊂ H(div,Ω) such that
∫_e ℛ(𝐯) ·𝐧_e p_1 ds = ∫_e {𝐯}·𝐧_e p_1 ds,
∀p_1 ∈P_1(e), ∀e ∈ℰ_h^0,
∫_e ℛ(𝐯) ·𝐧_e p_1 ds = 0, ∀p_1 ∈P_1(e), ∀e ∈ℰ_h^∂,
where ℬDM_1(𝒯_h) is the Brezzi-Douglas-Marini space of index 1 on 𝒯_h.
Then, we propose the pressure-robust EG method as follows.
Using the velocity reconstruction operator ℛ, we force discrete velocity functions in 𝐕_h to be H(div)-conforming.
We replace the velocity functions in the bilinear form (𝐮_h,𝐯)_𝒯_h in (<ref>) and on the right-hand side with the reconstructed velocity ℛ(𝐯).
Thus, the term (ℛ(𝐮_h),ℛ(𝐯))_𝒯_h with the H(div)-conforming velocity dominates the formulation when ν approaches 0 (the Darcy regime).
Moreover, the reconstructed velocity on the right-hand side allows us to obtain error bounds independent of a pressure term inversely proportional to ν.
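Since the two discrete problems are referenced above only through (<ref>), the following display is a schematic sketch of their structure, written in the flattened notation of this text and inferred from the discussion above and the bilinear forms used in the analysis below; it is not the authors' exact statement, and the precise definitions of the IPDG form 𝐚(·,·) and the divergence form 𝐛(·,·) follow the cited EG Stokes work. Find (𝐮_h,p_h)∈𝐕_h× Q_h such that, for all (𝐯,q)∈𝐕_h× Q_h,
ST-EG: ν𝐚(𝐮_h,𝐯) + (𝐮_h,𝐯)_𝒯_h + ρ_2⟨ h_e⟦𝐮_h⟧,⟦𝐯⟧⟩_ℰ_h - 𝐛(𝐯,p_h) = (𝐟,𝐯)_𝒯_h, 𝐛(𝐮_h,q)=0,
PR-EG: ν𝐚(𝐮_h,𝐯) + (ℛ(𝐮_h),ℛ(𝐯))_𝒯_h + ρ_2⟨ h_e⟦𝐮_h⟧,⟦𝐯⟧⟩_ℰ_h - 𝐛(𝐯,p_h) = (𝐟,ℛ(𝐯))_𝒯_h, 𝐛(𝐮_h,q)=0.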
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR ST-EG (ALGORITHM <REF>)
First of all, we introduce the discrete H^1-norm in <cit.> for all ∈,
^2 := ∇_0, ^2 + ρ_1 h_e^-1/2_0, ^2,
where ρ_1 is an H^1-penalty parameter. With this norm, the coercivity and continuity results for the bilinear form (·,·) have been proved in <cit.>: For a sufficiently large H^1-penalty parameter ρ_1, there exist positive constants κ_1 and κ_2 independent of ν and h such that
(, ) ≥κ_1 ^2 ∀∈,
|(, )| ≤κ_2 ∀, ∈.
Then, we define an energy norm for Brinkman problems involving the discrete H^1-norm and L^2-norm,
^2 := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
In this case, ρ_2 is an L^2-penalty parameter that should be sufficiently large for well-posedness, and its simple choice is ρ_2=ρ_1.
The following lemma shows an essential norm equivalence between · and · scaled by ν and h.
For given ν and h, we define a positive constant C_ne (Norm Equivalence) as
C_ne:=C√(ν+h^2(ρ_2/ρ_1+1)),
where C is a generic positive constant independent of ν and h.
Then, the following norm equivalence holds: For any ∈, we have
√(ν)≤√(ν+c_1 h^2)≤≤ C_ne,
for some small 0<c_1<1. Moreover, the constant C_ne is bounded as
C_ne≤ C( √(ν)+h)
for some generic constant C>0.
We observe each term in the energy norm
^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
Since .|_T is a linear polynomial in the second term, a scaling argument implies
_0≤ Ch∇_0,≤ Ch.
For the trace term, we have
ρ_2 h_e^1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)ρ_1h_e^-1/2_0, ^2≤ Ch^2(ρ_2/ρ_1)^2.
Thus, we obtain
^2≤ C(ν+h^2(ρ_2/ρ_1+1))^2.
On the other hand, the inverse inequality and the same argument for the trace term lead to
^2≤ C h^-2(^2_0+ρ_2 h_e^1/2_0, ^2),
where C contains ρ_1/ρ_2. In this case, we assume C>1 and set c_1=1/C, so
(ν+c_1h^2)^2≤^2.
Let us introduce the interpolation operator in <cit.>, Π_h : [H^2(Ω)]^d →𝐕_h, defined by
Π_h𝐯=Π_h^C𝐯+Π_h^D𝐯,
where Π_h^C𝐯∈𝐂_h
is the nodal value interpolant of 𝐯 and Π_h^D𝐯∈𝐃_h satisfies
(∇·Π_h^D𝐯,1)_T=(∇·( 𝐯 - Π_h^C𝐯 ), 1)_T for all T∈𝒯_h.
The following interpolation error estimates and stability <cit.> are used throughout our numerical analysis:
|- | _j, ≤C h^m-j ||_m, 0 ≤j ≤m ≤2, ∀∈[H^2(Ω)]^d,
- ≤C h _2, ∀∈[H^2(Ω)]^d,
≤C _1,
∀∈.
For the pressure, we introduce the local L^2-projection 𝒫_0: L^2(Ω) → Q_h such that (𝒫_0 q - q, 1)_T = 0 for all T∈𝒯_h. Its interpolation error estimate is given as,
‖q - 𝒫_0 q‖_0 ≤ C h ‖q‖_1, ∀ q ∈ H^1(Ω).
§.§ Well-posedness
We first prove the coercivity and continuity results concerning the energy norm ·.
For any ,∈𝐕_h, we have the coercivity and continuity results:
ν(,)+(,) ≥ K_1^2,
|ν(,)+(,)| ≤ K_2,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
If we observe the bilinear forms (·,·) and (·,·) and use the coercivity (<ref>), then we have
ν(,)+(,) ≥κ_1ν^2+_0^2 +ρ_2 h_e^1/2_0, ^2
≥min(κ_1,1)^2.
Moreover, it follows from the Cauchy-Schwarz inequality and the continuity (<ref>) that
|ν(,)+(,)| ≤κ_2ν+_0_0
+ (√(ρ_2)h_e^1/2_0,)(√(ρ_2)h_e^1/2_0,)
≤max(κ_2,1).
Next, we prove the discrete inf-sup condition for the problem (<ref>) in Algorithm <ref>.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, there exists a positive constant C_1:=C_is/C_ne such that
sup_∈(,q)/≥ C_1q_0, ∀ q∈ Q_h,
where C_is>0 (Inf-Sup), independent of ν and h, is the constant for the inf-sup condition for · in <cit.>.
It follows from the discrete inf-sup condition in <cit.> and the upper bound of in (<ref>) that
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/.
Furthermore, Lemma <ref> yields the continuity of (·,·) with .
For any ∈ and q∈ Q_h, there exists a positive constant C independent of ν and h such that
|(,q)|≤C/√(ν+c_1 h^2)q_0.
It follows from
the continuity of (·,·) in <cit.> and
the upper bound of in (<ref>) that
|(,q)|≤ Cq_0≤C/√(ν+c_1 h^2)q_0.
Thus, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
It suffices to show that _h=0 and p_h=0 when =0 because and Q_h are finite-dimensional spaces.
Choosing =_h in (<ref>) and q=p_h in (<ref>) and adding the two equations imply ν(_h,_h)+(_h,_h)=0.
Hence, _h=0 by (<ref>), so _h=0.
If _h=0 in (<ref>), then (,p_h)=0 for all ∈. Therefore, the inf-sup condition (<ref>) yields p_h_0=0, so p_h=0.
§.§ Error estimates
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>).
We define the error functions used in the error estimates
χ_h:=𝐮-Π_h𝐮, 𝐞_h:=Π_h𝐮-𝐮_h, ξ_h:=p- 𝒫_0 p, ϵ_h:= 𝒫_0 p-p_h.
First, we derive error equations in the following lemma.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h),
(_h,q) =-(χ_h,q),
where the supplemental bilinear forms are defined as follows:
l_1(,):=ν(Π_h-,),
l_2(,):=(Π_h-, )_,
𝐬(Π_h,):=ρ_2⟨ h_eΠ_h,⟩_.
We have -(Δ,)
_=(,) for any ∈ from <cit.>, which implies that
-ν(Δ,)_
=ν(Π_h,)-ν(Π_h-,).
The definition of (·,·) also gives
(,)_=(Π_h,)-(Π_h-, )_-ρ_2⟨ h_eΠ_h,⟩_,
and integration by parts and continuity of p lead to
(∇ p,)_ = ∑_T∈⟨ p,·⟩_∂ T -(p,∇·)_T= -(,p).
Thus, the equation (<ref>) imposes
ν(Π_h,)+(Π_h,)-(,p)=(,)+l_1(,)+l_2(,)+𝐬(Π_h,).
By comparing this equation with (<ref>) in the method, we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_2(,)+𝐬(Π_h,)+(,ξ_h).
Moreover, it follows from the continuity of and (<ref>) that
(∇·,q)_=(,q)=0=(_h,q),
which implies (<ref>).
In what follows, we prove estimates for the supplemental bilinear forms in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν) h_2,
|l_2(,)|≤C h^2_2,
|𝐬(Π_h,)|≤Ch^2_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
It follows from (<ref>), (<ref>), and (<ref>) that
|l_1(,)| =|ν(Π_h-,)|
≤νκ_2Π_h-
≤ Cν h _2
≤ C√(ν)h_2.
Using the Cauchy-Schwarz inequality and (<ref>),
we get the following upper bounds
|l_2(,)| =|(Π_h-,)_|
≤Π_h-_0_0
≤ Ch^2||_2.
Finally, the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) imply
|𝐬(Π_h,)| =|ρ_2⟨ h_eΠ_h,⟩_|
=|ρ_2⟨ h_eΠ_h-,⟩_|
≤ρ_2h_e^1/2Π_h-_0,h_e^1/2_0,
≤h_e^1/2Π_h-_0,
≤ Ch^2||_2.
In addition, we expand the continuity of (·,·) in <cit.> to be relevant to the error equations (<ref>) because χ_h=-Π_h∉𝐕_h and ξ_h=p- p∉Q_h.
For any ∈ and q∈ Q_h, we have
|(,ξ_h)|≤Ch p_1,
|(χ_h,q)|≤Chq_0_2,
where C is a generic positive constant independent of ν and h and may vary in each case.
First, we use the Cauchy-Schwarz inequality to get
|(,ξ_h)| =|(∇·,ξ_h)_-⟨·_e,ξ_h⟩_|
≤ C(∇_0,ξ_h_0+h_e^-1/2_0,h_e^1/2ξ_h_0,).
Then, the trace term is bounded by using the trace inequality (<ref>) and interpolation error estimate (<ref>),
h_e^1/2ξ_h_0,^2≤ C(ξ_h_0^2+h^2∇ξ_h_0,^2)≤ Ch^2p_1^2
because ∇ξ_h=∇(p- p)=∇ p.
Hence, the definition of the discrete H^1-norm and estimate (<ref>) imply
|(,ξ_h)|≤ Chp_1.
Similarly, it follows from the Cauchy-Schwarz inequality, trace inequality (<ref>), and (<ref>) that
|(χ_h,q)| ≤ C(∇χ_h_0,q_0+h_e^-1/2χ_h_0,h_e^1/2q_0,)
≤ Cq_0χ_h≤ Chq_0_2.
Therefore, we show error estimates of the method in Algorithm <ref> for the Brinkman equations.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following error estimates
Π_h-_h ≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ],
p-p_h_0 ≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ].
First of all, we apply the continuity results (<ref>), (<ref>), the estimates (<ref>), and the norm equivalence (<ref>) to the error equation (<ref>),
(,ϵ_h) =ν(_h,)+(_h,)-l_1(,)-l_2(,)-𝐬(Π_h,)-(,ξ_h)
≤ C(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
Thus, the inf-sup condition (<ref>) with (<ref>) implies
ϵ_h_0≤ C(√(ν)+h)(_h+√(ν)h_2+h^2_2+h/√(ν+c_1 h^2)p_1).
We choose =_h in (<ref>) and q=ϵ_h in (<ref>) and substitute (_h,ϵ_h) with -(χ_h,ϵ_h) to obtain
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_2(,_h)+𝐬(Π_h,_h)+(_h,ξ_h).
In this case, we estimate the term (χ_h,ϵ_h)
using (<ref>),
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
The term (_h,ξ_h) is estimated by using (<ref>) and (<ref>),
|(_h,ξ_h)|≤ Chp_1_h≤ Ch/√(ν+c_1h^2)p_1_h.
Hence, it follows from (<ref>), (<ref>), (<ref>), and (<ref>) that
_h^2≤ C(h_2ϵ_h_0 + √(ν)h_2_h+h^2_2_h + h/√(ν+c_1 h^2)p_1_h).
We use the estimate (<ref>) and omit high-order terms (h^3 or h^4) to obtain,
h_2ϵ_h_0 ≤ C( (√(ν)+h)h_2_h + ν h^2_2^2 + √(ν)+h/√(ν+c_1 h^2)h^2_2p_1)
≤ C( (√(ν)+h)h_2_h + ν h^2_2^2+ h^2_2p_1)
because √(ν) +h≤ (√(2/c_1))√(ν+c_1 h^2).
If we apply Young's inequality to each term with a positive constant α, then we have
√(ν)h_2_h≤ν h^2/2α_2^2+α/2_h^2,
h^2_2_h≤h^4/2α_2^2 + α/2_h^2,
h^2_2p_1≤h^2/2α_2^2 + α h^2/2p_1^2,
h/√(ν+c_1 h^2)p_1_h≤h^2/2α(ν+c_1 h^2)p_1^2+α/2_h^2.
Therefore, choosing a suitable α yields
_h^2≤ C[(ν+1)h^2_2^2 + ( h^2+h^2/ν+c_1 h^2)p_1^2 ],
so we finally get
_h≤ C[(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1 ].
On the other hand, we return to the intermediate estimate (<ref>) and omit high-order terms (h^2 or h^3) to obtain the pressure error estimate,
ϵ_h_0≤ C[(√(ν)+h)_h+ν h_2+hp_1].
Thus, we bound _h with the velocity error estimate (<ref>), so we finally obtain
ϵ_h_0≤ C[ (ν+√(ν))h_2 + (√(ν)+1)hp_1 ],
when omitting h^2-terms.
Theorem <ref> shows that the errors converge at first order in h under the condition h<√(ν), which is easily satisfied in the Stokes regime.
However, the velocity error in the Darcy regime may not decrease with h due to the pressure term in the velocity error bound; that is, as ν→ 0,
h/√(ν+c_1h^2)p_1→1/√(c_1)p_1.
We will confirm these theoretical results through numerical experiments.
For this reason, the method in Algorithm <ref> may not be effective in solving the Brinkman equations with small ν, which motivates us to develop and analyze the method in Algorithm <ref>.
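To make this degeneration concrete, the short Python sketch below (ours, not from the paper; the value of c_1 and the seminorm are hypothetical placeholders set to 1) tabulates the factor h/√(ν+c_1 h^2) from the velocity error bound for several mesh sizes and viscosities.

```python
import numpy as np

# Minimal sketch (not from the paper): evaluate the pressure-dependent factor
# h / sqrt(nu + c1*h^2) appearing in the EG velocity error bound, assuming a
# hypothetical constant c1 = 1 and |p|_1 = 1 for illustration.
c1 = 1.0
for nu in [1e0, 1e-2, 1e-4, 1e-6]:
    factors = [h / np.sqrt(nu + c1 * h**2) for h in [1/8, 1/16, 1/32, 1/64]]
    print(f"nu = {nu:.0e}:", ["%.3f" % f for f in factors])
# For large nu the factor shrinks as h decreases (first-order behavior), while
# for nu -> 0 it stalls near 1/sqrt(c1), so refining the mesh does not reduce
# this contribution to the velocity error bound in the Darcy regime.
```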
§ WELL-POSEDNESS AND ERROR ANALYSIS FOR PR-EG (ALGORITHM <REF>)
In this section, we prove well-posedness and error estimates for the method in Algorithm <ref>.
The error estimates show that the method's velocity and pressure errors decrease in the optimal order of convergence in both the Stokes and Darcy regimes, so we expect stable and accurate numerical solutions with any ν as h decreases.
We first define another energy norm by replacing _0 with _0,
^2_ℛ := ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2.
We also introduce the interpolation error estimate of the operator in <cit.>.
For any ∈, there exists a positive constant C independent of ν and h such that
- _0≤ Chh_e^-1/2_0,≤ C h .
This interpolation error estimate allows us to obtain the norm equivalence between _ℛ and scaled by ν and h, similar to Lemma <ref>.
For any ∈, it holds
√(ν)≤√(ν+c_2 h^2)≤_ℛ≤ C_ne,
where C_ne is the constant defined in Lemma <ref> and 0<c_2<1 is a small constant.
It suffices to prove that _0≤ Ch for the upper bound because _0 is replaced by _0 in the norm _ℛ.
Indeed, it follows from the triangle inequality, the error estimate (<ref>), and the argument in the proof of Lemma <ref> that
_0 ≤_0 + -_0≤_0+Ch≤ Ch.
Hence, we obtain
_ℛ^2=ν^2 + _0^2 +ρ_2 h_e^1/2_0, ^2≤ C(ν + h^2(ρ_2/ρ_1+1))^2.
For the lower bound, we recall the result in Lemma <ref> and apply (<ref>) to it,
^2 ≤ C h^-2(_0^2+ρ_2 h_e^1/2_0, ^2)
≤ C h^-2(_0^2+-_0^2+ρ_2 h_e^1/2_0, ^2)
≤ Ch^-2(_0^2+ h^2h_e^-1/2_0,^2+ρ_2 h_e^1/2_0, ^2)
=Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2)+C_0h_e^-1/2_0,^2,
where C_0 contains ρ_1/ρ_2 but is independent of ν and h.
Then, for a sufficiently large ρ_1, we have
ρ_1-C_0/ρ_1^2≤ Ch^-2(_0^2+ρ_2 h_e^1/2_0, ^2).
Therefore, we set c_2=(ρ_1-C_0)/(Cρ_1) and assume c_2<1 to have
c_2h^2^2≤_0^2+ρ_2h_e^1/2_0,^2,
which implies
(ν+c_2h^2)≤_.
In addition, we prove the norm equivalence between and _ using the results in Lemma <ref>, Lemma <ref>, and Lemma <ref>.
For any ∈, it holds
c_*_≤≤ c^*_,
where c_* and c^* are positive constants independent of ν and h.
It follows from the results in Lemma <ref> and Lemma <ref> that
ν^2+_0^2≤ C(ν^2+c_1h^2^2+_0^2)≤ C^2.
Similarly, from Lemma <ref> and Lemma <ref>, we obtain
ν^2+_0^2≤ C(ν^2+c_2h^2^2+_0^2)≤ C^2_.
§.§ Well-posedness
Most of the results for the well-posedness of the method are similar to those of the method. Thus, we briefly state and prove the results concerning ·_ℛ in this subsection.
For any ,∈𝐕_h, the coercivity and continuity results hold:
ν(,)+𝐜̃(,) ≥ K_1^2_ℛ,
|ν(,)+𝐜̃(,)| ≤ K_2_ℛ_ℛ,
where K_1=min(κ_1,1) and K_2=max(κ_2,1).
The proof is the same as that of Lemma <ref>, so we omit the details here.
Assume that the penalty parameters ρ_1 and ρ_2 are sufficiently large.
Then, we have
sup_∈(,q)/_ℛ≥ C_1q_0, ∀ q∈ Q_h,
for C_1=C_is/C_ne defined in Lemma <ref>.
Similar to the proof of Lemma <ref>, the discrete inf-sup condition in <cit.> and the upper bound of _ℛ in (<ref>) imply
C_isq_0≤sup_∈(,q)/≤ C_nesup_∈(,q)/_ℛ.
For any ∈ and q∈ Q_h, it holds
|(,q)|≤C/√(ν+c_2 h^2)q_0_ℛ,
for a generic positive constant C independent of ν and h.
Similar to the proof of Lemma <ref>, this result is proved by the continuity of (·,·) in <cit.> and the upper bound of in (<ref>).
Finally, we obtain the well-posedness of the method in Algorithm <ref>.
There exists a unique solution (,)∈× Q_h to the method.
The proof is the same as Theorem <ref>, so we omit the details here.
§.§ Error estimates
We recall the error functions
χ_h:=-Π_h, 𝐞_h:=Π_h-_h, ξ_h:=p- p, ϵ_h:= p-p_h,
where (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] is the solution to (<ref>)-(<ref>).
Then, we derive error equations for the method.
For any ∈ and q∈ Q_h, we have
ν(_h,)+(_h,)-(,ϵ_h) =l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,),
(_h,q) =-(χ_h,q),
where l_1(,) and 𝐬(Π_h,) are defined in Lemma <ref>, and the other supplemental bilinear forms are defined as follows:
l_3(,):=ν(Δ, -)_,
l_4(,):=(Π_h-,)_.
Since -(Δ,)_=(,) for any ∈, we have
-ν(Δ,)_ =-ν(Δ,)_-ν(Δ,-)_
=ν(,)-ν(Δ,-)_
=ν(Π_h,)-ν(Π_h-,)-ν(Δ,-)_.
By the definition of (·,·), we also have
(,)_ =(Π_h,)_-(Π_h-,)_
=(Π_h,)-(Π_h-,)_-ρ_2⟨ h_eΠ_h,⟩_.
Since · is continuous on ∂ T and ∇· is constant in T, integration by parts implies
(∇ p,)_ = -(, p).
Hence, we obtain the following equation from (<ref>),
ν(Π_h,)+(Π_h,)-(, p)=(,)+l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
If we compare this equation with (<ref>) in the method, then we arrive at
ν(_h,)+(_h,)-(,ϵ_h)=l_1(,)+l_3(,)+l_4(,)+𝐬(Π_h,).
For the second equation (<ref>), the continuity of and (<ref>) in the method lead us to
(∇·,q)_=(,q)=0=(_h,q).
We present estimates for the supplementary bilinear forms used in Lemma <ref>.
Assume that ∈[H^2(Ω)]^d and ∈. Then, we have
|l_1(,)|≤C√(ν)h_2_ℛ,
|l_3(,)|≤C√(ν)h_2_ℛ,
|l_4(,)|≤C h_2_ℛ,
|𝐬(Π_h,)|≤C h^2_2_ℛ,
where C is a generic positive constant independent of ν and h and may vary in each case.
The estimates (<ref>) and (<ref>) are proved by the estimate in Lemma <ref> and the norm equivalence (<ref>).
On the other hand, the Cauchy-Schwarz inequality, (<ref>), and (<ref>) lead to
|l_3(,)| =|ν(Δ, -)_|
≤ν_2-_0
≤ Cν h_2
≤ C√(ν)h_2_ℛ.
Using the Cauchy-Schwarz inequality, (<ref>), (<ref>), and (<ref>),
we get the following upper bounds,
|l_4(,)| =|(Π_h-,)_|
≤|(Π_h-Π_h,)_|+|(Π_h-,)_|
≤ ChΠ_h_0+Π_h-_0_0
≤ Ch||_1_ℛ.
We are now ready to prove error estimates of the method in Algorithm <ref>.
Let (,p)∈ [H_0^1(Ω)∩ H^2(Ω)]^d× [L_0^2(Ω)∩ H^1(Ω)] be the solution to (<ref>)-(<ref>), and (,p_h)∈× Q_h be the discrete solution from the method. Then, we have the following pressure-robust error estimates
Π_h-_h_ℛ≤ Ch(√(ν)+1)_2,
𝒫_0p-p_h_0≤ C h(ν+√(ν))_2 + Ch^2_2.
We start with the error equation (<ref>),
(,ϵ_h)=ν(_h,)+(_h,)-l_1(,)-l_3(,)-l_4(,)-𝐬(Π_h,).
Then, it follows from (<ref>) and (<ref>) that
(,ϵ_h)≤ C(_h _ℛ+√(ν)h_2+h_2+h^2_2)_ℛ.
From the inf-sup condition (<ref>) with (<ref>), we obtain
ϵ_h_0≤ C(√(ν)+h)(_h_ℛ+√(ν)h_2+h_2+h^2_2).
We also choose =_h and q=ϵ_h in (<ref>) and substitute (<ref>) into (<ref>) to get
ν(_h,_h)+(_h,_h)=-(χ_h,ϵ_h)+l_1(,_h)+l_3(,_h)+l_4(,_h)+𝐬(Π_h,_h).
Here, it follows from (<ref>) that
|(χ_h,ϵ_h)|≤ Ch_2ϵ_h_0.
Therefore, from (<ref>), (<ref>), and (<ref>), we have
_h_ℛ^2≤ C( h_2ϵ_h_0+√(ν)h_2_h_ℛ+h_2_h_ℛ),
while omitting h^2-terms.
We also replace ϵ_h_0 by its upper bound in (<ref>) omitting high-order terms,
_h^2_ℛ≤ C(√(ν)h_2_h_ℛ+h_2_h_ℛ).
In this case, Young's inequality gives
√(ν)h_2_h_ℛ≤ν h^2/2α_2^2+α/2_h^2_ℛ,
h_2_h_ℛ≤h^2/2α_2^2+α/2_h^2_ℛ.
Therefore, it follows from choosing a proper α that
_h^2_ℛ≤ Ch^2(ν+1)_2^2,
which implies that
_h_ℛ≤ Ch(√(ν)+1)_2.
If we apply this estimate to (<ref>), then we obtain
ϵ_h_0≤ Ch(ν+√(ν))_2+Ch^2_2.
We emphasize that the error bounds in Theorem <ref> are pressure-robust and have no detrimental effect from small ν.
As ν→0, the method's velocity errors decrease at the optimal order, and the pressure errors at second order (superconvergence is expected).
This result implies that the method produces stable and accurate solutions to the Brinkman equations in the Darcy regime.
In addition, we prove total error estimates showing the optimal orders of convergence in velocity and pressure.
Under the same assumption of Theorem <ref>, we have the following error estimates
-_h_ℛ≤ Ch(√(ν)+1)_2,
p-p_h_0≤ Ch((ν+√(ν))_2+p_1).
For the velocity error estimate, we show
-Π_h_ℛ≤ C√(ν)h_2.
More precisely, we recall χ_h=-Π_h and observe the energy norm,
χ_h^2_ℛ=νχ_h^2+χ_h_0^2+ρ_2h_e^1/2χ_h_0,^2.
Then, it follows from (<ref>), (<ref>), and (<ref>) that
χ_h_0≤χ_h-χ_h_0+χ_h_0≤ Chχ_h+χ_h_0≤ Ch^2_2.
Also, from (<ref>) and (<ref>), we obtain
h_e^1/2χ_h_0,≤ C(χ_h_0^2+h^2∇χ_h_0,^2)^1/2≤ Ch^2_2.
Hence, since χ_h≤ Ch_2, the error bound is
χ_h_ℛ≤ C(√(ν)h+h^2)_2.
Furthermore, the pressure error estimate is readily proved by the triangle inequality and interpolation error estimate (<ref>).
In conclusion, the proposed method solves the Brinkman equations in both the Stokes and Darcy regimes, having the optimal order of convergence for both velocity and pressure.
§ NUMERICAL EXPERIMENTS
This section shows numerical experiments validating our theoretical results with two- and three-dimensional examples.
The numerical methods in this paper and their discrete solutions are denoted as follows:
* (_h^,p_h^): Solution by the method in Algorithm <ref>.
* (_h^,p_h^): Solution by the method in Algorithm <ref>.
While considering the scaled Brinkman equations (<ref>) with the parameter ν, we recall the error estimates for the method in Theorem <ref>,
Π_h-^_h≲(√(ν)+1)h_2 + ( h+h/√(ν+c_1 h^2))p_1,
p-p_h^_0≲(ν+√(ν))h_2 + (√(ν)+1)hp_1,
and the error estimates for the method from Theorem <ref>
Π_h-_h^≲(√(ν)+1)h_2,
p-p_h^_0≲(ν+√(ν))h_2+h^2_2.
We mainly verify the error estimates (<ref>) and (<ref>) through various numerical experiments varying ν and h.
We also display the difference between the numerical solutions for and in the Darcy regime, which shows that the method is needed to obtain stable and accurate velocity solutions.
Moreover, we present permeability tests that consider the Brinkman equations (<ref>) with viscosity μ and permeability K and apply both EG methods.
These permeability tests further motivate the use of the method in the case of extreme μ or K.
We implement the numerical experiments using the authors' MATLAB codes developed based on iFEM <cit.>.
The penalty parameters are ρ_1=ρ_2=3 for all the numerical experiments.
§.§ Two dimensional tests
Let the computational domain be Ω=(0,1)× (0,1). The velocity field and pressure are chosen as
= ([ 10x^2(x-1)^2y(y-1)(2y-1); -10x(x-1)(2x-1)y^2(y-1)^2 ]),
p = 10(2x-1)(2y-1).
Then, the body force and the Dirichlet boundary condition are obtained from (<ref>) using the exact solutions.
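As a quick illustrative check (not part of the authors' MATLAB/iFEM codes), one can verify symbolically that this manufactured velocity is divergence-free and that the pressure has zero mean over Ω:

```python
import sympy as sp

# Sanity check (illustrative only): verify that the manufactured 2D solution
# is divergence-free and that p has zero mean over the unit square.
x, y = sp.symbols('x y')
u1 = 10*x**2*(x - 1)**2*y*(y - 1)*(2*y - 1)
u2 = -10*x*(x - 1)*(2*x - 1)*y**2*(y - 1)**2
p = 10*(2*x - 1)*(2*y - 1)

div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))
mean_p = sp.integrate(p, (x, 0, 1), (y, 0, 1))
print(div_u, mean_p)   # both 0, so u is solenoidal and p lies in L^2_0(Omega)
```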
§.§.§ Robustness and accuracy test
We compare the and methods to assess their robustness and accuracy based on the error estimates (<ref>) and (<ref>).
First, we interpret the method's velocity error estimate (<ref>) in terms of the relation between the coefficient ν and the mesh size h.
First-order convergence in the energy norm with h is guaranteed when ν≫ h^2, but no order of convergence can be guaranteed when ν is smaller than h^2 due to the term h/√(ν+c_1 h^2).
On the other hand, the velocity error estimate for the method (<ref>) means the first-order convergence in h regardless of ν.
In Figure <ref>, we check the discrete H^1-error for the velocity scaled by ν, √(ν)-_h. It is a component of the energy norm -_h.
The method tends to produce errors that increase like 𝒪(h^-1/2) when h>√(ν), while the errors decrease like 𝒪(h^3/2) when h<√(ν).
This result supports the error estimates (<ref>) (superconvergence may happen because we solve the problem on structured meshes) and means that a tiny mesh size is needed for accurate solutions with small ν.
However, the method's errors uniformly show the first-order convergence, 𝒪(h), regardless of ν.
This result supports the error estimates (<ref>), so the method guarantees stable and accurate solutions in both the Stokes and Darcy regimes.
We fix ν=10^-6 and compare the velocity errors and solutions of the and methods.
Table <ref> displays the energy errors and their major components, the discrete H^1-errors scaled by ν and L^2-errors.
For the method, the energy errors decrease in the half-order convergence because the L^2-errors are dominant and decrease in the same order.
However, the H^1-errors keep increasing unless h<√(ν)=10^-3, so the H^1-errors will become dominant and deteriorate the order of convergence of the energy errors.
On the other hand, using the method, we expect from (<ref>) that the energy errors and major components converge in at least the first order of h.
Indeed, Table <ref> shows that the H^1-errors decrease in the first order with h, while the L^2-errors reduce in the second order.
Since the energy error involves both the H^1- and L^2-errors, the energy errors decrease in the second order because of the dominant L^2-errors but eventually converge in the first order owing to the H^1-errors.
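For readers reproducing such tables, the observed convergence orders can be estimated from errors on successively halved meshes; the following sketch uses hypothetical error values, not the paper's data.

```python
import math

# Illustrative helper (not part of iFEM): estimate observed convergence orders
# log2(e_H / e_h) from errors on successively halved meshes, as reported in
# the tables. The error values below are placeholders, not the paper's data.
def observed_orders(errors):
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

errors = [2.1e-2, 5.4e-3, 1.4e-3, 3.4e-4]   # hypothetical L2-type errors on h=1/8..1/64
print(observed_orders(errors))               # values near 2 indicate O(h^2) convergence
```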
In Figure <ref>, the method produces accurate velocity solutions clearly showing a vortex flow pattern when ν=10^-6 and h=1/16. In contrast, the numerical velocity from the method includes significant oscillations around the boundary of the domain.
Moreover, the pressure error estimates (<ref>) and (<ref>) tell us that the convergence order for the pressure errors is at least 𝒪(h) in both methods. However, the method can produce superconvergent pressure errors because the term h^2p_1 is dominant when ν is small.
In Table <ref>, the pressure errors of the method, p-p_h^_0, decrease in at least 𝒪(h^3), which means superconvergence compared to the interpolation error estimate (<ref>).
On the other hand, the method still yields pressure errors converging in the first order with h.
Since the interpolation error is dominant in the total pressure errors p-p_h_0, the errors in Table <ref> have the first-order convergence with h in both methods.
Therefore, the numerical results support the pressure error estimates (<ref>) and (<ref>).
§.§.§ Error profiles with respect to ν
We shall confirm the error estimates (<ref>) and (<ref>) in terms of the parameter ν by checking error profiles depending on ν.
We define the following error profile functions of ν based on the error estimates and show that these functions explain the behavior of the velocity and pressure errors with ν:
* E_,2^(ν):=0.1h√(ν)+0.3h/√(ν+3h^2)+0.4h=0.1/32√(ν)+0.3/√(32^2ν+3)+0.4/32 from (<ref>),
* E_,2^(ν):=0.8h√(ν)+0.05h=0.8/32√(ν)+0.05/32 from (<ref>),
* E_p,2^(ν):=2hν+3h√(ν)+0.3h=2/32ν+3/32√(ν)+0.3/32 from (<ref>),
* E_p,2^(ν):=0.5hν+0.01h√(ν)+0.01h^2=0.5/32ν+0.01/32√(ν)+0.01/32^2 from (<ref>),
where h=1/32.
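The following Python sketch evaluates these profile functions over a range of ν; the `_std` and `_pr` suffixes are our own labels distinguishing the two methods, not the paper's notation.

```python
import numpy as np

# Sketch reproducing the 2D error-profile functions defined above (h = 1/32).
h = 1 / 32
E_u_std = lambda nu: 0.1 * h * np.sqrt(nu) + 0.3 * h / np.sqrt(nu + 3 * h**2) + 0.4 * h
E_u_pr  = lambda nu: 0.8 * h * np.sqrt(nu) + 0.05 * h
E_p_std = lambda nu: 2 * h * nu + 3 * h * np.sqrt(nu) + 0.3 * h
E_p_pr  = lambda nu: 0.5 * h * nu + 0.01 * h * np.sqrt(nu) + 0.01 * h**2

for nu in np.logspace(0, -8, 5):
    print(f"nu={nu:.0e}  E_u: {E_u_std(nu):.2e} vs {E_u_pr(nu):.2e}"
          f"  E_p: {E_p_std(nu):.2e} vs {E_p_pr(nu):.2e}")
```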
Figure <ref> shows the velocity and pressure errors and the graphs of the above error profile functions when ν decreases from 1 to 0 and h=1/32.
As shown in Figure <ref>, the velocity errors for the method increase as ν decreases from 1 to 10^-4 and tend to remain constant when ν is smaller.
The method's pressure errors decrease slightly and stay the same as ν→0.
On the other hand, the velocity and pressure errors for the method significantly reduce and remain the same after ν=10^-4.
This error behavior can be explained by the graphs of the error profile functions guided by the error estimates (<ref>) and (<ref>), so this result supports the estimates concerning ν.
In addition, the velocity and pressure errors for the method are almost 1000 times smaller than those of the method in Figure <ref>.
Therefore, we confirm that the method guarantees more accurate solutions for velocity and pressure when ν is small.
§.§.§ Permeability test
In this test, we consider the Brinkman equations (<ref>) with viscosity μ=10^-6 and permeability given as the permeability map in Figure <ref>.
The permeability map indicates that the fluid tends to flow along the blue regions, so the magnitude of the numerical velocity will be larger in the blue areas than in the red parts.
We set the velocity on the boundary of the domain as =⟨ 1,0⟩ and body force as = ⟨ 1, 1⟩.
We mainly compare the magnitude of the numerical velocity obtained from the two methods in Figure <ref>.
We clearly see that the method's velocity is more stable than the method's velocity, which contains non-negligible noise (or oscillations) around the boundary.
This result shows that the method is necessary for obtaining stable and accurate velocity solutions to the Brinkman equations with extreme viscosity and permeability.
§.§ Three dimensional tests
We consider a three-dimensional flow in a unit cube Ω=(0,1)^3. The velocity field and pressure are chosen as
= ([ sin(π x)cos(π y) - sin(π x)cos(π z); sin(π y)cos(π z) - sin(π y)cos(π x); sin(π z)cos(π x) - sin(π z)cos(π y) ]),
p = π^3sin(π x)sin(π y)sin(π z)-1.
The body force and the Dirichlet boundary condition are given in the same manner as the two-dimensional example.
§.§.§ Robustness and accuracy test
In the two-dimensional tests, we checked that the condition h<√(ν) was required to guarantee the optimal order of convergence for the method, while the method showed a uniform performance in convergence independent of ν.
We obtained the same result as in Figure <ref> from this three-dimensional test.
Table <ref> displays the energy errors of the velocity solutions and their major components, comparing the method with when ν=10^-6.
The method's energy errors tend to decrease because the dominant L^2-errors decrease, but the H^1-errors scaled by ν increase.
These H^1-errors may prevent the energy errors from decreasing until h<√(ν)=10^-3.
However, the method guarantees at least first-order convergence for all the velocity errors, showing much smaller errors than the method.
This numerical result supports the velocity error estimates in (<ref>) and (<ref>), and we expect more accurate solutions from the method when ν is small.
In addition, we compare numerical velocity solutions of the and methods when ν=10^-6 and h=1/16 in Figure <ref>.
The velocity solutions of both methods seem to capture a three-dimensional vortex flow expected from the exact velocity.
However, the velocity of the method contains noise around the top-right and bottom-left corners, where the streamlines do not form a circular motion.
In Table <ref>, as expected from (<ref>), the method's pressure errors decrease at least at first order.
On the other hand, the method's pressure errors, p -p_h^𝚄𝚁_0, decrease much faster, showing superconvergence.
This phenomenon is expected by the pressure estimate (<ref>) when ν is small.
Moreover, the orders of convergence of the total pressure errors, p-p_h_0,
for both methods are approximately one due to the interpolation error.
§.§.§ Error profiles with respect to ν
We define error profile functions suitable for the three-dimensional test by determining constants in the estimates (<ref>) and (<ref>):
* E_,3^(ν):=0.1h√(ν)+h/√(ν+3h^2)+9h=0.1/16√(ν)+1/√(16^2ν+3)+9/16 from (<ref>)
* E_,3^(ν):=6h√(ν)+0.25h=6/16√(ν)+0.25/16 from (<ref>),
* E_p,3^(ν):=1.5hν+h√(ν)+2.5h=1.5/16ν+1/16√(ν)+2.5/16 from (<ref>),
* E_p,3^(ν):=2hν+0.02h√(ν)+0.2h^2 = 2/16ν+0.02/16√(ν)+0.2/16^2 from (<ref>),
where h=1/16.
In Figure <ref>, the method's velocity and pressure errors decrease when ν changes from 1 to 10^-4 and remain the same when ν gets smaller.
However, the errors for the method slightly increase or decrease when 10^-4≤ν≤ 1, and they stay the same as ν→0.
Thus, the errors of the method are almost 100 times smaller than those of the method when ν≤ 10^-4, which means the method solves the Brinkman equations with small ν more accurately.
The error profile functions show similar error behaviors in Figure <ref>, supporting error estimates (<ref>) and (<ref>).
§.§.§ Permeability test
We apply piecewise constant permeability to the Brinkman equations (<ref>) in the cube domain Ω=(0,1)^3,
K() = {[ 10^-6 if ||≤ (0.25)^2,; 1 otherwise. ].
The other conditions are given as follows: viscosity μ=10^-6, boundary condition =⟨ 1,0,0⟩, and body force =⟨ 1, 1,1⟩.
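A hedged sketch of this permeability field is given below; since the argument of the norm in the definition of K is not explicit in the text, we assume for illustration only that the low-permeability ball is centered at the domain center (0.5, 0.5, 0.5).

```python
import numpy as np

# Hedged sketch of the piecewise-constant permeability: the argument of the
# norm is not recoverable from the text, so we *assume* the low-permeability
# ball is centered at (0.5, 0.5, 0.5) with the stated threshold (0.25)^2
# applied to the squared distance. This is a guess, not the paper's setup.
center = np.array([0.5, 0.5, 0.5])

def K(xx):
    r2 = np.sum((np.asarray(xx) - center) ** 2)   # squared distance (assumed)
    return 1e-6 if r2 <= 0.25**2 else 1.0

print(K([0.5, 0.5, 0.5]), K([0.9, 0.9, 0.9]))     # 1e-06 inside, 1.0 outside
```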
We expect the fluid flow to be faster outside the ball of small permeability, to tend to avoid the ball, and to be affected by the boundary velocity.
The streamlines and colored velocity magnitude of the method in Figure <ref> show exactly this expected flow behavior, while the method fails to provide a reliable velocity solution.
§ CONCLUSION
In this paper, we proposed a pressure-robust numerical method for the Brinkman equations with minimal degrees of freedom based on the EG piecewise linear velocity and constant pressure spaces <cit.>.
To derive the robust method, we used the velocity reconstruction operator <cit.> mapping the EG velocity to the first-order Brezzi-Douglas-Marini space.
Then, we replaced the EG velocity in the Darcy term and the test function on the right-hand side with the reconstructed velocity. With this simple modification, the robust EG method showed uniform performance in both the Stokes and Darcy regimes, in contrast to the standard EG method, which requires the mesh restriction h<√(ν) that is impractical in the Darcy regime.
We also validated the error estimates and performance of the standard and robust EG methods through several numerical tests with two- and three-dimensional examples.
Our efficient and robust EG method for the Brinkman equations can be extended to various Stokes-Darcy modeling problems, such as coupled models with an interface and time-dependent models. The proposed EG method can also be extended to nonlinear models, such as nonlinear Brinkman models for non-Newtonian fluids and unsteady Brinkman-Forchheimer models.
|
http://arxiv.org/abs/2307.05782v1 | 20230711202102 | Large Language Models | [
"Michael R. Douglas"
] | cs.CL | [
"cs.CL",
"hep-th",
"math.HO",
"physics.comp-ph",
"68T01",
"I.2.7"
] |
|
http://arxiv.org/abs/2307.04523v1 | 20230710124620 | 1D non-LTE corrections for chemical abundance analyses of very metal-poor stars | [
"L. Mashonkina",
"Yu. Pakhomov",
"T. Sitnova",
"A. Smogorzhevskii",
"P. Jablonka",
"V. Hill"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Detailed chemical abundances of very metal-poor (VMP, [Fe/H] < -2) stars are important for better understanding the First Stars, early star formation and chemical enrichment of galaxies. Big on-going and coming high-resolution spectroscopic surveys provide a wealth of material that needs to be carefully analysed. For VMP stars, their elemental abundances should be derived based on the non-local thermodynamic equilibrium (non-LTE = NLTE) line formation because low metal abundances and low electron number density in the atmosphere produce the physical conditions favorable for the departures from LTE. The galactic archaeology research requires homogeneous determinations of chemical abundances. For this purpose, we present grids of the 1D-NLTE abundance corrections for lines of Na, Mg, Ca, Ca, Ti, Fe, Zn, Zn, Sr, and Ba in the range of atmospheric parameters that represent VMP stars on various evolutionary stages and cover effective temperatures from 4000 to 6500 K, surface gravities from = 0.5 to = 5.0, and metallicities -5.0 ≤ [Fe/H] ≤ -2.0. The data is publicly available, and we provide the tools for interpolating in the grids online.
line: formation – stars: abundances – stars: atmospheres.
§ INTRODUCTION
Very metal-poor (VMP, [Fe/H][In the classical notation, where [X/H] = log(N_ X/N_ H)_star - log(N_ X/N_ H)_⊙.] < -2) stars are fossils of the early epochs of star formation in their parent galaxy.
Their detailed elemental abundances are of extreme importance for understanding the nature of the First Stars, uncovering the initial mass function and the metallicity distribution function of the galaxy, and testing the nucleosynthesis theory predictions and the galactic chemical evolution models <cit.>.
Since the 1980s, the number of discovered VMP star candidates has grown tremendously thanks to wide-angle spectroscopic and photometric surveys, such as HK <cit.>, HES <cit.>, RAVE <cit.>, SMSS <cit.>, SEGUE/SDSS <cit.>, and LAMOST <cit.>.
The Pristine survey has been specially designed for efficiently searching for VMP stars <cit.>. Using the narrow-band photometric filter centered on the Ca H & K lines makes it possible to successfully predict stellar metallicities <cit.>.
The number of confirmed VMP stars is substantially lower than the number of candidates because the verification of very low metallicity requires high-resolution follow-up observations. The SAGA (Stellar Abundances for Galactic Archaeology) database <cit.> includes about 1390 Galactic stars with [Fe/H] ≤ -2, whose metallicities were derived from R = λ /Δλ≥ 20 000 spectra. Of these, 470 stars have [Fe/H] ≤ -3, and 28 stars are ultra metal-poor (UMP, [Fe/H] ≤ -4). A burst in the number of VMP stars with detailed elemental abundances is expected with the launch of the WEAVE (WHT Enhanced Area Velocity Explorer) project <cit.>. A vast amount of spectral data will also be taken with the coming 4-metre Multi-Object Spectroscopic Telescope <cit.>.
Abundance ratios among the elements of different origin, such as Mg and Fe, for stellar samples covering broad metallicity ranges serve as the observational material for the galactic archaeology research.
The simplest and widely applied method to derive elemental abundances is based on using one-dimensional (1D) model atmospheres and the assumption of local thermodynamic equilibrium (LTE), see, for example, the abundance results from the high-resolution spectroscopic survey APOGEE <cit.>.
In metal-poor atmospheres, in particular, of cool giants, low total gas pressure and low electron number density lead to departures from LTE that grow towards lower metallicity due to decreasing collisional rates and increasing radiative rates as a result of dropping ultra-violet (UV) opacity. The non-local thermodynamic equilibrium (non-LTE = NLTE) line formation calculations show that the NLTE effects for lines of one chemical species and for different chemical species are different in magnitude and sign, depending on the stellar parameters and element abundances. Ignoring the NLTE effects leads to a distorted picture of the galactic abundance trends and thus to wrong conclusions about the galactic chemical evolution.
The NLTE abundance from a given line in a given star can be obtained by adding the theoretical NLTE abundance correction, which corresponds to the star's atmospheric parameters, to the LTE abundance derived from the observed spectrum: NLTE = LTE + Δ_ NLTE. For a number of chemical species, Δ_ NLTE can be taken online from the websites
* INSPECT (<http://www.inspect-stars.com>) for lines of Li, Na, Mg, Ti, Fe-, and Sr,
* NLTE_MPIA (<http://nlte.mpia.de/>) for lines of O, Mg, Si, Ca-, Ti-, Cr, Mn, Fe-, and Co,
* <http://spectrum.inasan.ru/nLTE/> for lines of Ca, Ti-, and Fe.
Extensive grids of the NLTE abundance corrections are provided by <cit.>, <cit.>, and <cit.>.
The NLTE abundance corrections for the selected lines of S and Zn in the limited set of atmospheric models were computed by <cit.>. <cit.> report the NLTE to LTE equivalent width ratios for lines of Mg, Ca, and Ca in the grid of model atmospheres representing cool giants.
A different approach is to determine the NLTE abundance directly, by using the synthetic spectrum method and the precomputed departure coefficients, b_i = n_i^ NLTE/n_i^ LTE, for the chemical species under investigation. Here, n_i^ NLTE and n_i^ LTE are the statistical equilibrium and the Saha-Boltzmann number densities, respectively, for the energy level i.
<cit.> provide the grids of b_i
for 13 chemical species (neutral H, Li, C, N, O, Na, Mg, Al, Si, K, Ca, Mn; singly ionized Mn, Ba)
across a grid of the classical one-dimensional (1D) MARCS model atmospheres <cit.>.
This approach is based on using the 1D-NLTE spectral synthesis codes, such as SME <cit.>, synthV_NLTE <cit.>, and Turbospectrum <cit.>.
An approach based on three-dimensional (3D) model atmospheres combined with the NLTE line formation is extremely time consuming and, to date, has been applied to only a few chemical species in the Sun <cit.> and the benchmark VMP stars <cit.>. Grids of the 3D-NLTE abundance corrections were computed for lines of O <cit.> and Fe- <cit.> using the STAGGER grid of model atmospheres for a limited range of effective temperatures ( = 5000-6500 K), surface gravities ( = 3.0-4.5), and metallicities ([Fe/H] = 0 to -3). For the Li lines, grids of the 3D-NLTE abundance corrections were computed by <cit.> and <cit.> with the CO^5BOLD and STAGGER model atmospheres, respectively.
The 3D-NLTE calculations are available for a small number of the chemical elements observed in VMP stars, and they cover only part of the range of relevant atmospheric parameters. Furthermore, as shown by <cit.> for Fe, the abundance differences between 3D-NLTE and 1D-NLTE are generally less severe than the differences between 3D-NLTE and 1D-LTE and reach 0.2 dex at maximum (see Figs. 5-7 in their paper). Therefore, calculations of the 1D-NLTE abundance corrections for extended linelists across the stellar parameter range that represents VMP stars make sense, and they are useful for galactic archaeology research. The availability and comparison of Δ_ NLTE from different independent studies increase confidence in the spectroscopic NLTE analyses.
This paper presents the 1D-NLTE abundance corrections for lines of 10 chemical species in the grid of MARCS model atmospheres with = 4000-6500 K, = 0.5-5.0, and -5 ≤ [Fe/H] ≤ -2.
We provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters by interpolating in the precomputed grids.
Our data offer potential users the following advantages over the grids of the 1D-NLTE abundance corrections available in the literature.
* Only this study provides extended grids of the NLTE abundance corrections for lines of Zn and Ba.
* For Ca and Ca, the NLTE calculations were performed with advanced treatment of the Ca + H and Ca + H collisions, following <cit.> and <cit.>, respectively.
* For Zn and Sr, our results are based on advanced treatment of collisions with H, following <cit.> and <cit.>. Our grids cover the broader range of , , and [Fe/H] compared to that for Zn in <cit.> and for Sr in the INSPECT database.
* For Ca–Ca, Fe–Fe, and Na, the developed 1D-NLTE methods have been verified with spectroscopic analyses of VMP stars and have been shown to yield reliable results.
The paper is organised as follows. Section <ref> describes our NLTE methods and their verification with observations of VMP stars. New grids of the NLTE abundance corrections are presented in Sect. <ref>. In Sect. <ref>, we compare our calculations with those from other studies. Our recommendations and final remarks are given in Sect. <ref>.
§ NLTE METHODS AND THEIR VERIFICATION
The present investigation is based on the NLTE methods developed and tested in our earlier studies.
Details of the adopted atomic data and the NLTE line formation for Na, Mg, Ca-, Ti-Ti, Fe-, Zn-, Sr, and Ba can be found in the papers cited in Table <ref>.
It is important to note that collisions with hydrogen atoms were treated with the data based on quantum-mechanical calculations.
The exceptions are Ti and Fe-, for which we adopted the Drawinian rates <cit.> scaled by an empirically estimated factor of = 1 <cit.> and = 0.5 <cit.>, respectively.
The code detail <cit.> with the revised opacity package <cit.> was used to solve the coupled radiative transfer and statistical equilibrium (SE) equations. The obtained LTE and NLTE level populations were then implemented in the code linec <cit.> that, for each given spectral line, computes the NLTE curve of growth and finds the shift in the NLTE abundance, which is required to reproduce the LTE equivalent width. Such an abundance shift is referred to as the NLTE abundance correction, Δ_ NLTE = NLTE-LTE.
All the calculations were performed using the classical LTE model atmospheres with the standard chemical composition <cit.>, as provided by the MARCS website[<http://marcs.astro.uu.se>].
Below we provide evidence for a correct treatment of the NLTE line formation for Fe-Fe, Ca-Ca, and Na in the atmospheres of VMP stars.
§.§ Spectroscopic versus Gaia eDR3 distances
Iron is represented in the VMP stars by the two ionization stages, which are used in many studies to determine spectroscopic surface gravities (g_ Sp) from the requirement that abundances from lines of Fe and Fe in a given star must be equal. The surface gravity can also be derived from distance; this is the distance-based surface gravity, g_ d. If g_ Sp based on the NLTE calculations and g_ d are obtained to be consistent within the error bars, this means that the calculations for Fe-Fe are correct.
<cit.> and <cit.> derived the surface gravities for the two Galactic stellar samples using photometric effective temperatures and the NLTE analysis of the Fe and Fe lines. Using the Gaia eDR3 parallaxes corrected according to
<cit.>, we calculated distances from the maximum
of the distance probability distribution function, as recommended by
<cit.>, and then
_ d from the relation
log g_ d = -10.607 + log M + 4 log T_ eff - 0.4 [4.74 - (V + BC + 5 - 5 log d - A_V)]
Here, M is the star's mass, A_V is the interstellar extinction in the V-band, and BC is the bolometric correction, which was calculated by interpolation in the grid of <cit.>[<https://wwwuser.oats.inaf.it/castelli/colors/bcp.html>]. The atmospheric parameters and A_V were taken from <cit.> and <cit.>. Stellar masses and V magnitudes for the <cit.> sample are listed in their Tables 5 and 2, respectively. For the stellar sample of <cit.>, the V magnitudes are listed in their Table 5. For each VMP giant, we adopt M = 0.8 M_⊙.
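The relation above translates directly into a small helper; the following Python sketch (ours, with made-up input values in the example call) evaluates log g_d from M, T_eff, V, BC, d, and A_V.

```python
import math

# Sketch of the distance-based gravity from the relation above; inputs are
# M in solar masses, T_eff in K, apparent V magnitude, bolometric correction
# BC, distance d in pc, and extinction A_V, as defined in the text.
def logg_d(M, Teff, V, BC, d_pc, A_V):
    M_bol = V + BC + 5 - 5 * math.log10(d_pc) - A_V   # absolute bolometric magnitude
    return -10.607 + math.log10(M) + 4 * math.log10(Teff) - 0.4 * (4.74 - M_bol)

# Illustrative call with made-up values (a 0.8 Msun giant at 500 pc):
print(round(logg_d(0.8, 4800.0, 9.5, -0.35, 500.0, 0.05), 2))
```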
Statistical error of the distance-based surface gravity was computed as the quadratic sum of errors of the star's distance, effective temperature, mass, visual magnitude, and BC. We assumed the stellar mass error as σ_M = 0.1 M_⊙ and took the effective temperature errors, σ_T, from <cit.> and <cit.>. The total error is dominated by σ_M for the nearby stars and by the distance error, σ_ d, for the distant objects.
Table <ref> lists the obtained Gaia eDR3 distances and _ d values, as well as the spectroscopic surface gravities from <cit.> and <cit.>.
The differences log g_ Sp – _ d are shown in Fig. <ref>. The majority of our stars lie within 631 pc of the Sun, and their spectroscopic surface gravities are found to be consistent within the error bars with the distance-based ones. A clear outlier is HD 8724, with log g_ Sp – _ d = -0.48. We note that the discrepancy between log g_ Sp and _ d has decreased compared with the -0.76 dex obtained for HD 8724 by <cit.> using the Gaia DR1 parallax <cit.>. However, it is still greater than the error of the spectroscopic surface gravity, σ_log g (sp) = 0.24 dex. A formal calculation of σ_log g (d) leads to 0.07 dex (Table <ref>); however, astrometric_excess_noise_sig = 6.005 and astrometric_chi2_al = 419.84, indicated by <cit.> for HD 8724, suggest an unreliable solution for the Gaia eDR3 parallax.
For 15 distant stars, with d > 2 kpc, the errors of _ d grow. Nevertheless, the spectroscopic surface gravities are consistent, on average, with the distance-based ones.
Thus, our NLTE method for Fe/Fe is reliable and can be used for determinations of surface gravities, in particular, for distant stars with large distance errors.
§.§ Ca versus Ca
A firm argument for a correct treatment of the NLTE line formation for Ca-Ca can be obtained from a comparison of the NLTE abundances from lines of the two ionization stages. <cit.> report the LTE and NLTE abundances from lines of Ca and Ca 8498 Å for five reference stars with well-determined atmospheric parameters in the -2.7 < [Fe/H] < -1.3 metallicity range and find fairly consistent NLTE abundances, while the LTE abundance difference between Ca and Ca 8498 Å grows in absolute value towards lower metallicity and reaches -0.45 dex for [Fe/H] = -2.62, see their Fig. 6.
<cit.> studied the UMP stars and improved their atmospheric parameters using an extensive method based on the colour- calibrations, NLTE fits of the Balmer line wings, and Gaia DR2 trigonometric parallaxes. For each star, the derived effective temperature and surface gravity were checked by inspecting the Ca/Ca NLTE ionization equilibrium and by comparing the star's position in the - plane
with the theoretical isochrones of 12 and 13 Gyr.
The abundance differences between the two ionization stages from the NLTE and LTE calculations of <cit.> and <cit.> are displayed in Fig. <ref>. The NLTE abundance difference Ca – Ca nowhere exceeds 0.15 dex, while the LTE abundances from lines of Ca are systematically lower than those from Ca, by up to 0.85 dex. Thus, the NLTE results obtained using our NLTE method for Ca- <cit.> can be trusted.
§.§ Na resonance lines in VMP stars
Figure <ref> displays the [Na/Mg] abundance ratios over a wide range of metallicities from the LTE and NLTE calculations of <cit.> and <cit.>. For [Fe/H] > -1, both the LTE and NLTE data form a well-defined upward trend, with a small star-to-star scatter for stars of close metallicity. The situation is very different in LTE and NLTE for [Fe/H] < -1. In LTE, the [Na/Mg] ratios reveal a large scatter, which is substantially reduced in the NLTE calculations. The explanation lies mostly in the NLTE effects for lines of Na. For Mg, the differences between the NLTE and LTE abundances do not exceed 0.1 dex.
For [Fe/H] > -1, the Na abundances were derived by <cit.> from the Na 5682, 5688, 6154, 6160 5895 Å subordinate lines, which are slightly affected by NLTE, with negative Δ_ NLTE of ≾0.1 dex, in absolute value.
In the lower metallicity stars, sodium is observed only in the Na 5889, 5895 Å resonance lines. They are subject to strong NLTE effects, with Δ_ NLTE depending on the atmospheric parameters and the Na abundance itself. For different stars, Δ_ NLTE varies between -0.1 and -0.6 dex <cit.>. Removing the star-to-star scatter of the [Na/Mg] NLTE abundance ratios for [Fe/H] < -1 can serve as circumstantial evidence that the line formation is treated correctly.
Taking advantage of the obtained Galactic NLTE [Na/Mg] trend, we found that the modern nucleosynthesis and Galactic chemical evolution (GCE) calculations, which are represented in Fig. <ref> (right panel) by the GCE model of <cit.>, correctly predict the contributions from core-collapse supernovae (SNeII) and asymptotic giant branch (AGB) stars to the production of Mg and Na during the Galaxy's history.
§ GRIDS OF THE NLTE ABUNDANCE CORRECTIONS
By request of the Pristine collaboration <cit.>, the NLTE abundance corrections were computed for the lines which can be detected in spectra of VMP stars, that is, for the [Fe/H] ≤ -2 range. We focused, in particular, on the spectral ranges observed by WEAVE[https://ingconfluence.ing.iac.es/confluence/display/WEAV/Science], that is 4040-4650 Å, 4750-5450 Å, and 5950-6850 Å for the high-resolution (R = λ/Δλ = 20 000) observations and 3660-9590 Å for the R = 5000 observations, and 4MOST [https://www.4most.eu/cms], that is 3926-4350 Å, 5160-5730 Å, and 6100-6790 Å for the high-resolution spectrograph (HRS, R ≃ 20 000) and 3700-9500 Å for the low-resolution spectrograph (LRS, R ≃ 4000-7500). We selected
4 / 15 / 28 / 4 / 54 / 262 / 7 / 2 / 2 / 5 lines of Na / Mg / Ca / Ca / Ti / Fe / Zn / Zn / Sr / Ba.
The range of atmospheric parameters was selected to represent metal-poor stars on various evolutionary stages, from the main sequence to the red giant branch (RGB); see the isochrone of 12 Gyr, [Fe/H] = -2, and [α/Fe] = 0.4 from <cit.> in Fig. <ref>. The NLTE calculations were performed in the following ranges of effective temperature and surface gravity:
= 4000 to 4750 K for = 0.5 to 2.5;
= 5000 K for = 0.5 to 5.0;
= 5250 to 5500 K for = 2.0 to 5.0;
= 5750 to 6500 K for = 3.0 to 5.0.
Metallicity range is -5.0 ≤ [Fe/H] ≤ -2.0.
The nodes of the NLTE abundance correction grids correspond to the nodes of the MARCS model grid. Therefore, varies with a step of 250 K, with a step of 0.5, and [Fe/H] with a step of 0.5. The MARCS website does not provide models with [Fe/H] = -3.5 and -4.5. The missing models were calculated by interpolating between the [Fe/H] = -3 and -4 and between the [Fe/H] = -4 and -5 models. We applied the FORTRAN-based interpolation routine written by Thomas Masseron and available on the MARCS website.
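Conceptually, the missing metallicity nodes are obtained by interpolating the model structures of the neighbouring grid points; the sketch below is only a schematic linear interpolation in [Fe/H] with toy numbers and is not the Masseron routine actually used.

```python
import numpy as np

# Loud assumption: this is NOT the Masseron FORTRAN routine, only a schematic
# linear interpolation in [Fe/H] of two model-atmosphere structures (e.g. T and
# log Pg on a common depth grid) to mimic the missing [Fe/H] = -3.5 node.
def interpolate_feh(struct_m3, struct_m4, feh_target=-3.5):
    w = (feh_target - (-4.0)) / ((-3.0) - (-4.0))     # weight of the [Fe/H] = -3 model
    return {key: w * struct_m3[key] + (1 - w) * struct_m4[key] for key in struct_m3}

# Toy structures on a 3-point depth grid (hypothetical numbers):
m3 = {"T": np.array([4200., 4800., 5600.]), "logPg": np.array([2.1, 3.0, 3.9])}
m4 = {"T": np.array([4150., 4750., 5550.]), "logPg": np.array([2.0, 2.9, 3.8])}
print(interpolate_feh(m3, m4))
```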
For Fe- and Zn-, the SE calculations were performed with [Element/Fe] = 0.0;
for Mg and Ti with [Element/Fe] = 0.4 and 0.3, respectively.
For Na, Ca, Ca, Sr, and Ba, the NLTE effects are sensitive to not only //[Fe/H], but also the element abundance used in the SE calculations. Therefore, the grids of the NLTE corrections are 4-dimensional where [Element/Fe] takes the following numbers:
[Na/Fe] = -0.6, -0.3, 0.0, 0.3, 0.6;
[Ca/Fe] = 0.0 and 0.4;
[Sr/Fe] = -1.0, -0.5, 0.0, 0.5, 1.0 for the dwarf model atmospheres,
[Sr/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres;
[Ba/Fe] = -1.0, -0.5, 0.0, 0.5 for the dwarf model atmospheres,
[Ba/Fe] = -1.5, -1.0, -0.5, 0.0, 0.5 for the giant model atmospheres.
The website INASAN_NLTE[<http://spectrum.inasan.ru/nLTE2/>] provides the tools for calculating online the NLTE abundance correction(s) for given spectral line(s) and atmospheric parameters , , [Fe/H], [Element/Fe] by an interpolation in the NLTE correction grids.
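For users who prefer to work offline, the interpolation performed by the online tool can be mimicked as follows; this Python sketch fills the grid with random placeholder values purely for illustration, and the extra [Element/Fe] dimension needed for Na, Ca, Sr, and Ba is omitted.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Minimal sketch of what the online tool does conceptually: interpolate a
# precomputed Delta_NLTE grid at given (Teff, logg, [Fe/H]). The grid values
# below are random placeholders; real grids come from the INASAN_NLTE site.
teff = np.arange(4000., 6750., 250.)          # K, 4000-6500
logg = np.arange(0.5, 5.5, 0.5)               # 0.5-5.0
feh  = np.arange(-5.0, -1.5, 0.5)             # -5.0 to -2.0
grid = np.random.uniform(-0.3, 0.3, (teff.size, logg.size, feh.size))

interp = RegularGridInterpolator((teff, logg, feh), grid)
delta_nlte = interp([[5250., 3.0, -3.5]])[0]  # correction for a bRGB-like star
print("A(NLTE) = A(LTE) + %.3f" % delta_nlte)
```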
§.§ NLTE corrections depending on atmospheric parameters
Figure <ref> displays the NLTE abundance corrections predicted for representative lines of different chemical species in VMP stars on different evolutionary stages, namely, the turn-off (TO, / = 6250/4.0), the bottom red giant branch (bRGB, 5250/3.0), and the RGB (4500/1.5). For each line, Δ_ NLTE depends on , and [Fe/H]. Therefore, neglecting the NLTE effects distorts the galactic abundance trends.
In the same atmosphere, different lines have NLTE corrections of different magnitude and sign. Therefore, the star's element abundance pattern derived under the LTE assumption does not correctly reflect the relative contributions of different nucleosynthesis sources.
The sign of Δ_ NLTE is determined by the mechanisms that produce the departures from LTE for lines of a given species in given physical conditions.
In the stellar parameter range considered here, Mg, Ca, and Fe are the minority species in the line formation layers, and they are subject to ultra-violet (UV) overionization, resulting in depleted atomic level populations, weakened lines, and positive NLTE abundance corrections <cit.>. The intensity of the ionizing UV radiation increases with decreasing metallicity, resulting in growing departures from LTE.
Na is also a minority species; however, due to the low photoionization cross-sections of its ground state, the main NLTE mechanism is a "photon suction" process <cit.> which produces an overpopulation of the neutral stage, resulting in strengthened Na lines and negative NLTE abundance corrections. Photon suction is connected with collisional processes that couple the high-excitation levels of Na with the singly ionized stage. In contrast to the radiative processes, the influence of collisional processes on the statistical equilibrium of Na weakens with decreasing metallicity, and Δ_ NLTE for Na 5895 Å decreases in absolute value and even becomes slightly positive for [Fe/H] ≤ -4.5 in the 4500/1.5 models.
The NLTE effects for the majority species Ca, Ti, Sr, and Ba are driven by the bound-bound (b-b) transitions. For an individual line, the sign and magnitude of Δ_ NLTE depend on the physical conditions and the transition where the line arises. Ca 8498 Å arises in the transition 3d2D3/2 – 4p2P∘3/2. The upper level is depopulated in the atmospheric layers where the core of Ca 8498 Å forms via photon loss in the wings of the Ca 3933, 3968 Å resonance lines and the 8498, 8542, 8662 Å infra-red (IR) triplet lines. The Ca 8498 Å line core is strengthened because the line source function drops below the Planck function, resulting in negative Δ_ NLTE <cit.>. In the [Fe/H] = -2 models, Ca 8498 Å is very strong, with the total absorption dominated by the line wings that form in deep atmospheric layers where the NLTE effects are small. With decreasing [Fe/H] (and Ca abundance, too), the line wings are weakened, and Δ_ NLTE grows in absolute value. In the 6250/4.0 and 5250/3.0 models, Δ_ NLTE decreases in absolute value for [Fe/H] ≤ -3.5 because the formation depth of Ca 8498 Å shifts to deep atmospheric layers.
Owing to its complex atomic term structure, the levels of Ti are tightly coupled to each other and to the ground state via radiative and collisional processes, and the NLTE corrections for the Ti lines are slightly positive in the stellar parameter range considered here <cit.>: Δ_ NLTE≾ 0.1 dex for Ti 4395 Å.
<cit.> and <cit.> predicted theoretically that NLTE may either strengthen or weaken the lines of Sr and Ba, depending on the stellar parameters and elemental abundance. For example, in the 6250/4.0 models, Δ_ NLTE is positive for Ba 4554 Å over full range of [Fe/H] = -2 down to -4.5, while, for Sr 4215 Å, Δ_ NLTE is negative when [Fe/H] ≥ -2.5 and positive for the more metal-deficient atmospheres. In the RGB atmospheres, both Sr 4215 Å and Ba 4554 Å are very strong until metallicity decreases to [Fe/H] = -3.5, and the NLTE corrections are small. For the lower metallicity, Δ_ NLTE is positive for both lines and grows with decreasing [Fe/H].
For lines of Zn, the NLTE abundance corrections depending on atmospheric parameters are discussed by <cit.>.
§.§ NLTE corrections depending on elemental abundances
The stars of close metallicity in the [Fe/H] < -2 range reveal a substantial scatter in the Na, Sr, and Ba abundances <cit.>. It is precisely for Na, Sr, and Ba that the NLTE effects depend strongly not only on the atmospheric parameters, but also on the element abundance. Therefore, in order to interpret the chemical evolution of Na, Sr, and Ba correctly, abundance analyses of VMP samples should be based on the NLTE abundances.
Figure <ref> shows that, for the TO and bRGB stars, the LTE analysis overestimates the Na abundances, by an amount which is greater for a Na-enhanced than for a Na-poor star. The difference in Δ_ NLTE exceeds 0.4 dex for [Fe/H] = -2.5 and reduces towards the lower [Fe/H]. The same is true for the RGB stars with [Fe/H] ≤ -3.5, but the situation is more complicated for the higher metallicities. For [Fe/H] > -3, the Na 5895 Å line is very strong in the Na-enhanced cool atmospheres, and the total line absorption is dominated by the line wings that form in deep atmospheric layers affected only weakly by NLTE. Accounting for the NLTE effects for the Na lines substantially reduces the abundance discrepancies found for stellar samples in LTE, as is well illustrated by Fig. <ref>.
Using the same atmospheric parameters, LTE may either overestimate, or underestimate abundances of Sr and Ba depending on the elemental abundances, as shown in Fig. <ref>. For [Fe/H] < -2, the NLTE abundance corrections for Sr 4215 Å and Ba 4554 Å are positive in the Sr- and Ba-poor atmospheres, while they can be negative for the Sr- and Ba-enhanced atmospheres. Accounting for the NLTE effects can reduce the abundance discrepancies found for stellar samples in LTE, by more than 0.4 dex for Sr in the TO [Fe/H] = -2.5 stars and for Ba in the bRGB [Fe/H] = -2.5 stars.
§.§ NLTE corrections for different type model atmospheres
The model atmospheres computed with different codes produce, as a rule, very similar atmospheric structures and spectral energy distributions for common atmospheric parameters. We checked how different types of model atmospheres influence the magnitudes of the NLTE abundance corrections. Taking the ATLAS9-ODFNEW models from R. Kurucz's website[<http://kurucz.harvard.edu/grids/gridm40aodfnew/>], we performed the NLTE calculations for Ca-, Fe-, and Ba with the models 6250/4.0/-4.0 and 4500/1.5/-4.0. For these atmospheric parameters, the selected lines reveal the greatest NLTE effects. The results are presented in Table <ref>.
For 6250/4.0/-4.0, the MARCS and ATLAS9-ODFNEW model atmospheres provide NLTE abundance corrections consistent within 0.036 dex. Slightly larger differences of up to 0.058 dex are obtained for the strong lines, Ca 4226 Å and Ca 8498 Å, in the cool giant atmosphere. We remind the reader that the MARCS models with ≤ 2 were computed as spherically-symmetric, and the difference in temperature stratification between the spherically-symmetric and plane-parallel (ATLAS9-ODFNEW) models can explain, in part, the differences in Δ_ NLTE for strong spectral lines.
§ COMPARISONS WITH OTHER STUDIES
The NLTE methods based on comprehensive model atoms and the most up-to-date atomic data have been developed in the literature for many chemical species observed in spectra of the Sun and F-G-K type stars because the NLTE results are in demand in chemical abundance analyses of, in particular, VMP stars. For a common chemical species, the model atoms in different NLTE studies can differ in the treatment of inelastic collisions with electrons and hydrogen atoms and in the sources of transition probabilities and photoionization cross-sections. Different NLTE studies use different NLTE codes, with a different treatment of background opacity, and different model atmospheres. We compared our NLTE calculations with the NLTE abundance corrections from other studies.
§.§ Lines of Fe
As shown in Fig. <ref>, our results for lines of Fe agree well with the NLTE abundance corrections from the NLTE_MPIA database, which were computed using the model atom of <cit.> and the same treatment of collisions with H, as in our calculations, namely, the formulas of <cit.> with a scaling factor of = 0.5. The differences in Δ_ NLTE between this study (TS) and NLTE_MPIA mostly do not exceed 0.02 dex, with the maximal (TS – NLTE_MPIA) = 0.06 dex for Fe 5506 Å in the 6350/4.09/-2.18 model and Fe 5041 Å in the 4630/1.28/-2.99 model.
<cit.> provide the NLTE abundance corrections computed with the 1D and 3D model atmospheres. The 3D-NLTE calculations were performed for a limited atmospheric parameter range ( = 5000–6500 K, = 4.0 and 4.5, [Fe/H] = 0 to -3) and a limited number of Fe lines. We selected Fe 5232 Å for a comparison. Amarsi22 computed more positive NLTE corrections compared with ours (Fig. <ref>), by 0.07 to 0.27 dex in the 1D case and by 0.14 to 0.39 dex in the 3D case. The difference between the 1D-NLTE corrections is most probably due to a different treatment of the Fe + H collisions in this and Amarsi22's studies. For H impact excitation and charge transfer, Amarsi22 apply the asymptotic model of <cit.> complemented by the free electron model of <cit.> for the b-b transitions. We showed earlier <cit.> that, compared with the <cit.> formulas with = 0.5, using the data of <cit.> leads to stronger NLTE effects. For example, Δ_ NLTE = 0.08 dex and 0.35 dex, respectively, for Fe 5232 Å in the 6350/4.09/-2.18 model atmosphere. In the 3D model atmospheres, the NLTE effects for Fe are stronger than in the 1D models, and notable departures from LTE appear for lines of Fe, in contrast to the 1D case, such that, for two benchmark VMP stars, Amarsi22 (see their Table 5) obtain similar abundance differences between Fe and Fe in the 1D-NLTE and 3D-NLTE calculations. To remind the reader, our 1D-NLTE approach for Fe- makes the spectroscopic distances of the VMP stellar sample consistent with the Gaia eDR3 ones (Sect. <ref>).
§.§ Lines of Na, Mg, Ca, Ca, and Sr
We selected Mg 5528 Å in order to compare our NLTE calculations with the 1D-NLTE corrections provided by the NLTE_MPIA database and by <cit.>. The model atoms used <cit.> are similar to ours, including the treatment of collisions with H atoms. As seen in Fig. <ref>, our calculations agree very well with those of Lind22. The differences in Δ_ NLTE do not exceed 0.01 dex and 0.02 dex for the = 4.0 and 2.5 models, respectively. The exception is the 4000/2.5/-3 model, for which we obtained a 0.065 dex more negative Δ_ NLTE. NLTE_MPIA provides more positive NLTE corrections compared with ours, by 0.03–0.05 dex. The difference is 0.12 dex for the 4000/2.5/-3 model.
Similar model atoms of Na were used in this study and by Lind22. The differences in Δ_ NLTE for Na 5895 Å are very small (∼0.01 dex) for the coolest and the hottest temperatures in Fig. <ref>. It is difficult to explain why TS – Lind22 = 0.07 dex for the 5000/2.5/-3 model, but TS – Lind22 = 0.00 for 5000/4.0/-3.
For lines of Sr, the 1D-NLTE corrections are provided by the INSPECT database. Their NLTE calculations were performed with the model atom developed by <cit.> and did not take into account collisions with H atoms. This is in contrast to this study, which is based on quantum-mechanical rate coefficients for the Sr + H collisions. The atmospheric parameter range is narrower in INSPECT than in this study, namely: 4400 K ≤≤ 6400 K, 2.2 ≤≤ 4.6, -3.9 ≤ [Fe/H] ≤ 0. The differences in Δ_ NLTE for Sr 4077 Å are small except for the models 5500/2.5/-3 and 6000/4.0/-3, where TS – INSPECT = -0.07 dex and +0.05 dex, respectively (Fig. <ref>).
The 1D-NLTE corrections for the Ca lines at the NLTE_MPIA database were computed with the model atom developed by <cit.> and using <cit.> formulas with = 0.1 for calculating hydrogen collision rates. In this study, we applied the same model atom, however, the Ca + H collisions were treated using quantum-mechanical rate coefficients from <cit.>. As seen in Fig. <ref>, NLTE_MPIA provides systematically greater NLTE corrections for Ca 6162 Å compared with our data, by 0.08 to 0.20 dex, probably due to a simplified treatment of hydrogenic collisions.
Ignoring the Ca + H collisions in the SE calculations resulted in stronger NLTE effects for the Ca triplet lines in the <cit.> study compared with ours. For example, <cit.> report NLTE/LTE equivalent width ratios of 1.28 and 1.16 for Ca 8498 and 8542 Å, respectively, in the 4250/1.5/-4.0 model, while our corresponding values are 1.22 and 1.12.
§.§ Lines of Ba
Finally, we compared our results with the 1D-NLTE corrections calculated by <cit.> for lines of Ba. <cit.> provide the data for the -2 ≤ [Fe/H] ≤ 0.5 metallicity range. Therefore, Δ_ NLTE comparisons are presented in Fig. <ref> for the same temperatures and surface gravities, as in Fig. <ref>, but for [Fe/H] = -2. The differences in Δ_ NLTE for Ba 6496 Å do not exceed 0.02 dex except the coolest and the hottest giant atmospheres, where TS – K15 = -0.05 dex and +0.05 dex, respectively.
To summarise this section, the situation with the 1D-NLTE corrections for lines of Na, Mg, and Fe looks good. For each of these chemical species, there are at least two independent NLTE studies that predict NLTE corrections consistent within 0.01-0.02 dex and provide grids which cover the full range of atmospheric parameters of VMP stars. For Sr and Ba, the NLTE corrections predicted by the independent studies agree reasonably well in the overlapping atmospheric parameter range.
§ FINAL REMARKS
This study presents grids of the 1D-NLTE abundance corrections for the Na, Mg, Ca I, Ca II, Ti, Fe, Zn I, Zn II, Sr, and Ba lines, which are used in Galactic archaeology research. The range of atmospheric parameters represents VMP stars at various evolutionary stages and covers 4000 K ≤ T_eff ≤ 6500 K, 0.5 ≤ log g ≤ 5.0, and -5.0 ≤ [Fe/H] ≤ -2.0. The NLTE corrections for Zn I, Zn II, Sr, and Ba have been calculated for the first time for such a broad range of atmospheric parameters. Compared to the data available in the literature, our NLTE corrections for lines of Ca I, Ca II, Zn I, Zn II, Sr, and Ba are based on an accurate treatment of collisions with H atoms in the statistical equilibrium calculations.
In the same model atmosphere, the NLTE abundance corrections may have different magnitudes and signs for lines of the same chemical species, for example, Δ_ NLTE = 0.092 dex (Mg 5528 Å) and Δ_ NLTE = -0.083 dex (Mg 5172 Å) in the 4500/1.5/-3.5 model. Accounting for the NLTE effects in stellar abundance determinations is expected to improve the accuracy of the obtained results.
In the same model atmosphere, the NLTE abundance corrections may have different magnitudes and signs for lines of different chemical species, for example, Δ_ NLTE = -0.222 dex (Na 5895 Å) and Δ_ NLTE = 0.092 dex (Mg 5528 Å) in the 4500/1.5/-3.5 model. Therefore, an appropriate treatment of the line formation is obligatory for studies based on the analysis of stellar element abundance patterns.
For all spectral lines and chemical species, the NLTE corrections depend on metallicity. Neglecting the NLTE effects in stellar abundance determinations leads to distorted galactic abundance trends and incorrect conclusions on the Galactic chemical evolution.
We show that, for common spectral lines and the same atmospheric parameters, independent NLTE studies of Na, Mg, and Fe predict consistent 1D-NLTE abundance corrections, with differences of 0.01–0.02 dex in Δ_ NLTE.
The obtained results are publicly available. At the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>), we provide the tools for calculating online the NLTE abundance correction(s) for given line(s) and given atmospheric parameters.
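Such corrections are applied additively to the LTE abundances. As a minimal illustration of how a pre-computed correction grid can be used in practice, the sketch below interpolates Δ_NLTE onto a star's atmospheric parameters; the three-column file layout, file name, and function names are assumptions made only for this example and do not reflect the actual format of the INASAN_NLTE output.

```python
# Minimal sketch: interpolate a Delta_NLTE grid and correct an LTE abundance.
# Assumed (hypothetical) grid format: columns Teff [K], logg, [Fe/H], Delta_NLTE [dex].
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def load_nlte_grid(path):
    """Read a (Teff, logg, feh, Delta_NLTE) table and build a linear interpolator."""
    teff, logg, feh, delta = np.loadtxt(path, unpack=True)
    return LinearNDInterpolator(np.column_stack([teff, logg, feh]), delta)

def nlte_abundance(a_lte, teff, logg, feh, interp):
    """Return A(NLTE) = A(LTE) + Delta_NLTE at the requested atmospheric parameters."""
    delta = float(interp(teff, logg, feh))
    if np.isnan(delta):
        raise ValueError("Requested parameters fall outside the grid.")
    return a_lte + delta

# Hypothetical usage:
# interp = load_nlte_grid("delta_nlte_grid.dat")
# print(nlte_abundance(a_lte=0.45, teff=5780, logg=4.4, feh=-2.5, interp=interp))
```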
§ ACKNOWLEDGEMENTS
This research has made use of the data from the European Space Agency (ESA) mission Gaia[<https://www.cosmos.esa.int/gaia>], processed by the Gaia Data Processing and Analysis Consortium (DPAC[<https://www.cosmos.esa.int/web/gaia/dpac/consortium>]).
This research has made use of the MARCS and ADS[<http://adsabs.harvard.edu/abstract_service.html>] databases. L.M. thanks the Russian Science Foundation (grant 23-12-00134) for a partial support of this study (Sections 1, 2, 4, 5). T.S. acknowledges a partial support (Section 3) from the MK project, grant 5127.2022.1.2.
§ DATA AVAILABILITY
All our results are publicly available at the website INASAN_NLTE (<http://spectrum.inasan.ru/nLTE2/>).
mnras
|
http://arxiv.org/abs/2307.05860v2 | 20230712005243 | Conservative (failed)-tail effects at the fifth post-Newtonian order | [
"Quentin Henry",
"François Larrouturou"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
[email protected]
Max Planck Institute for Gravitational Physics
(Albert Einstein Institute), D-14476 Potsdam, Germany
[email protected]
Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
DESY-23-100
This work deals with the tail and “failed” tail sectors of the conservative dynamics for compact binary systems at the 5PN order.
We employ the Fokker Lagrangian method with dimensional regularization, and our results for the tail sector are perfectly consistent with the previous EFT computations.
As for the “failed” tail sector, we have good hopes that this new computation will help resolve the current discrepancy in the literature.
Conservative (failed)-tail effects at the fifth post-Newtonian order
François Larrouturou
August 12, 2023
====================================================================
§ INTRODUCTION
The post-Newtonian (PN) approximation scheme is a very efficient framework that relies on weak-field and slow-velocity approximations to perturbatively solve Einstein's equation.
It has been notably implemented through a large class of methods to resolve the dynamics of bound compact binaries system, see e.g. <cit.> for reviews.
In the conservative sector,[This paper focuses on the conservative sector, i.e. the study of the (conserved) dynamics of the system.
Nevertheless, those PN frameworks are also in use to solve the dissipative sector, i.e. to derive the waveform.
Notably, using matched post-Newtonian and multipolar-post-Minkowskian methods <cit.>, the gravitational flux at 4PN and phase at 4.5PN were recently obtained <cit.>.
Note also that the 2PN sector of the gravitational flux has been confirmed by EFT means <cit.>.] the current accuracy is the fourth PN order (i.e. the (v/c)^8 correction to the Newtonian energy and angular momentum), which was obtained by means of the canonical Hamiltonian formalism <cit.>, the Fokker method <cit.>, and the effective field theory (EFT) approach <cit.>.
Starting at this 4PN precision, the conservative dynamics can be split between an “instantaneous” sector and a “hereditary” one.
The latter takes into account the back-reaction of emitted radiation onto the dynamics of the binary, which induces non-local in time effects (thus the name).
The computation of the instantaneous dynamics has been completed at 5PN by a large variety of methods <cit.>, and pushed up to the 6PN precision <cit.>.
As for the hereditary sector, due to the very subtle nature of the computations, only partial results exist.
For instance, the tail sector (due to the scattering of waves onto the static curvature induced by the ADM mass) has been computed by means of EFT <cit.>.
As for the “failed” tail[We borrow this nomenclature from <cit.>, where it has been dubbed “failed” since, although it arises as a hereditary effect, this interaction fails to induce a non-local-in-time sector.] (due to the scattering of waves onto the static curvature induced by the ADM angular momentum), it was also derived within the EFT framework. However, there is a discrepancy between previous results <cit.> and the recent work of <cit.>.
Note also that the logarithmic dependencies in the binding energy (due to this hereditary sector) are known up to the 7PN order <cit.>.
The aim of this work is to derive the (failed) tail effects by means of the Fokker method using dimensional regularization.
We thus work with d=3 + ε space-like dimensions and the d-dimensional gravitational strength, G, is linked to the usual Newton constant G_N by a new length scale ℓ_0 as G = ℓ_0^ε G_N (this regularization constant is directly related to the scale μ used in EFT framework <cit.> as ℓ_0 = μ^-1).
The tail interactions entering at 5PN involve the constant ADM mass M, the mass octupole _ijk, and the current quadrupole _i| jk.
As for the failed tail, it describes the interaction between the constant angular momentum _i| j and the mass quadrupole _ij.
Note that, as we work in d dimensions, we use the notations of <cit.> for current moments, notably _i| j≡ε_ijkL^k, where L^i is the usual angular momentum.
Our result reads
𝒮_tail =
- G^2 M/c^10∫ dt∫_0^∞ dτ{1/189( - 82/35) _ijk^(4)(t) _ijk^(5)(t-τ)
+16/45( - 49/20) _k| ji^(3)(t) _k| ji^(4)(t-τ)}
+ G^2/30 c^10∫ dt _i| j _ik^(3)(t) _jk^(4)(t) ,
where superscripts in parentheses denote time derivatives and we have dressed the pole as
≡1/ε -2 ln(c√() τ/2 ℓ_0) ,
with ≡ 4π e^γ_E ,
where γ_E is the Euler constant.
The first line of (<ref>), corresponding to the tail interactions, is in perfect agreement with Eqs. (5) and (9) of <cit.>.
As for the failed tail (the second line), we fully agree with the recent result of <cit.>, obtained by an independent method.
The plan of this paper is as follows.
Sec. <ref> describes the method employed to derive the (failed) tail effects, namely the Fokker method with dimensional regularization.
This method is then applied to each interaction separately in Sec. <ref>.
Finally, Sec. <ref> concludes our work.
§ GENERAL METHOD
In order to perform the computation of the conservative (failed) tail sector at 5PN, we follow the method that was used for the lowest-order tail ×_ij×_ij, entering at 4PN <cit.>.
The following section briefly recalls and discusses its main steps.[The conventions employed throughout this work are as follows: we work with a mostly plus signature; Greek letters denote spacetime indices and Latin ones, purely spatial indices; bold font denotes d-dimensional vectors, e.g. y_A = y_A^i; we use the multi-index notations of <cit.> (coming from Young tableaux), i.e. _L = _i_1i_2… i_ℓ and _i| L = _i| i_ℓ… i_2i_1; hats and angular brackets denote a symmetric and trace-free operator, x̂_L = x_⟨ L⟩ = STF[x_L]; the d'Alembertian operator is defined with respect to the flat Minkowski metric, □≡η^μν∂_μν = Δ - c^-2∂_t^2; (anti-)symmetrizations are weighted, e.g. A_(ij) = (A_ij + A_ji)/2; the Lagrangian and Lagrangian density are denoted as 𝒮 = ∫ dt ℒ = ∫ dt d^d x L, and we will henceforth refer to the Lagrangian density simply as the “Lagrangian”; finally, and as usual, we dub “nPN” a quantity of order 𝒪(c^-2n).]
§.§ Tail effects in the action
The starting point of the method is naturally an action composed of two sectors: the gravitational kinetic term and the matter description.
For the first one, we work with the usual Landau-Lifschitz Lagrangian, together with a gauge-fixing term (see e.g. <cit.>)
𝒮_g
= c^4/16π G∫ dt d^dx √(-g) [g^μν(Γ_μρ^λΓ_νλ^ρ-Γ_μν^λΓ_ρλ^ρ)- 1/2 g_μνΓ^μΓ^ν] ,
where Γ^μ_νρ are the Christoffel symbols and the last term enforces the gauge Γ^μ≡ g^αβΓ_αβ^μ = 0.
In terms of the so-called “gothic metric” 𝔤^μν≡√(-g) g^μν, this action becomes
𝒮_g = c^4/32π G∫ dt d^dx [
𝔤_αβ(∂_μ𝔤^αν ∂_ν𝔤^βμ-∂_μ𝔤^αμ ∂_ν𝔤^βν)
- 1/2𝔤^αβ𝔤_μν𝔤_στ(
∂_α𝔤^μσ ∂_β𝔤^ντ
- 1/d-1 ∂_α𝔤^μν ∂_β𝔤^στ)] .
As for the matter sector, we consider structureless, non-spinning point-particles, thus described by the action
𝒮_pp = -c ∑_A m_A ∫ dτ_A
= -c^2∑_A m_A∫ dt d^dx δ_A/u_A^0 ,
where m_A is the mass of the particle A, τ_A its proper time, u_A^0 ≡ [-(g_μν)_A v_A^μ v_A^ν/c^2]^-1/2 is the associated Lorentz factor, v_A^μ = (c,v_A^i) (with v_A^i the usual velocity), and the d-dimensional Dirac distribution δ_A ≡δ[x - y_A(t)] locates the Lagrangian on the world-line of the particles.
As we are interested by the dynamics of binary systems, we will run A only over two values.
From the gothic metric, we define the exact perturbation
h^μν≡𝔤^μν - η^μν ,
for which the gauge condition Γ^μ = 0 translates into the usual harmonic gauge ∂_ν h^μν = 0.
This perturbation obeys a wave equation sourced by the Landau-Lifschitz pseudo-tensor τ^μν
□ h^μν = τ^μν = 16π G/c^4 | g| T^μν + Λ^μν ,
where T^μν is the stress-energy tensor of the matter distribution and Λ^μν encrypts the non-linearities intrinsic to GR (its expression is given e.g. in Eq. (175) of <cit.>).
We are interested here in the near-zone behavior of the metric, i.e. we aim at solving □ h^μν_NZ = τ̅^μν, where τ̅^μν is the PN expansion of τ^μν.
The solution to such wave equation can be split in two sectors, as
h^μν_NZ = h̅^μν + ℋ^μν .
The first sector, h̅^μν, is a particular solution of the wave equations, corresponding to the potential modes of the EFT framework.
It is computed by applying the PN-expanded, regularized Green function on τ̅^μν, see e.g. Eq. (2.5) of <cit.>, and its expression is known up to 4PN <cit.>. Due to PN expansions and regularization of the Green function, the metric h̅^μν is not the correct prescription, and we have to add an homogeneous solution, ℋ^μν.
This solution is a consequence of the matching equation linking the near- and far-zone behaviors of the metric <cit.>, and its construction is the purpose of the next section.
As will be clear there, it corresponds to the conservative sector of the waves radiated by the source, and thus one can assimilate it to the radiative modes of the EFT framework.
Following the spirit of the Fokker method, we inject the near-zone metric (<ref>) into the conservative action (<ref>)–(<ref>), in order to obtain a resulting Lagrangian depending only on the matter variables (which accounts to integrating out the gravitational modes).
This yields an action mixing potential and radiative modes.
The sector free from any ℋ^μν is the usual, instantaneous action, computed at 5PN by EFT means in e.g. <cit.>, and we leave its re-computation within the Fokker framework for future studies.
What interests us here is the linear-in-ℋ^μν sector of the action, encompassing the leading order (failed) tail effects.[As the constant (ADM) masses and angular momentum do not radiate, the quadratic-in-ℋ^μν sector of the action cannot contribute to tail effects at leading order. Note however that, at 5PN, this quadratic sector can contribute to the memory effect and to the 1PN corrections to the ×_ij×_ij tail effect. The study of both those effects are let for future works.]
This linear sector can be interpreted as the backreaction of the scattered wave, ℋ^μν, onto the dynamics of the binary, thus indeed describing a tail effect.
This point of view corresponds to the closure of radiative Feynman diagrams, performed in <cit.>.
As will be explicit hereafter, the different components of the radiative metric at a given PN order will follow ℋ^μν = 𝒪(c^-2n-2,c^-2n-1,c^-2n) with ℋ^kk = 𝒪(c^-2n-2) (in particular, n=5 for this work). The leading PN order of the linear-in-ℋ^μν sector of the action reads
𝒮^tails_LO
= -∫ dt d^dx {m_1 c^2/8[
ℋ^00ii
- 4 v_1^i/c ℋ^0i
+ 2 v_1^ij/c^2 ℋ^ij] δ_1
+ (d-1) ℋ^ij/64(d-2)π G ∂_iV∂_jV} + (1 ↔2) ,
where we have shortened ℋ^00ii≡2/d-1[(d-2)ℋ^00+ℋ^kk].
From the compact terms (proportional to the Dirac distribution), one will be able to reconstruct the Newtonian value of the moments.[For example, if ℋ^00ii = x̂^ijkF_ijk(t), where F_ijk only depends on time, then ∫ d^dx m_1 ℋ^00iiδ_1 + (1↔ 2) = O_ijk F_ijk, where O_ijk = m_1 ŷ_1^ijk + m_2 ŷ_2^ijk is the Newtonian mass octupole moment.]
As for the non-compact term (the last piece), it is treated by using the generalized Riesz formulae displayed in App. A of <cit.>.
§.§ Computation of the radiative metric
The radiative metric ℋ^μν corresponds to an homogeneous solution of the wave equations (<ref>), regular in the source (when r → 0).
This means that it has the structure
ℋ^μν = ∑_j,ℓ≥ 0 Δ^-jx̂_L(∂/c ∂ t)^2j f_L^μν(t) ,
with Δ^-jx̂_L = Γ(d/2+ℓ)/Γ(d/2+ℓ+j)r^2jx̂_L/2^2j j! ,
where the functions f_L^μν(t) are determined by the matching equation <cit.>, i.e. by imposing that the near- and far-zone expansions of the metric agree in some overlapping region.
Therefore it is clear that ℋ^μν encodes the fact that the dynamics of the system is sensitive to the gravitational waves radiated at spatial infinity.
This fact is even more evident when looking at the practical computation of ℋ^μν.
As derived in <cit.>, the matching equation imposes that ℋ^μν is nothing but an homogeneous solution of the far-zone expansion of the wave equations (<ref>).
It is thus sourced by the expansion of Λ^μν at spatial infinity[We consider compact binaries, and thus a compactly-supported matter stress-energy tensor: at spatial infinity, τ^μν reduces to Λ^μν.] and can be computed by means of the usual d-dimensional multipolar-post-Minkowskian algorithm.
This algorithm starts with the d-dimensional generalization of Thorne's linearized metric <cit.>, namely <cit.>
h^00_1=
- 4/c^2∑_ℓ⩾ 0(-)^ℓ/ℓ ! ∂̂_L ℳ_L ,
h^0i_1 =
4/c^3∑_ℓ⩾ 1(-)^ℓ/ℓ ! [∂̂_L-1 ℳ_iL-1^(1)+ℓ/ℓ+1∂̂_L𝒮_i| L] ,
h^ij_1 =
- 4/c^4∑_ℓ⩾ 2(-)^ℓ/ℓ ! [
∂̂_L-2 ℳ_ijL-2^(2)+2ℓ/ℓ+1∂̂_L-1𝒮^(1)_(i|L-1 j)+ℓ-1/ℓ+1∂̂_L𝒦_ij| L] .
Underlined indices are excluded from symmetrization, and we have introduced the notation
ℳ_L(r,t) ≡k̃/r^d-2∫_1^+∞ dy γ_1-d/2(y) _L(t- yr/c) ,
where
k̃≡Γ(d-2/2)/π^d-2/2 = 1 - ε/2 ln + 𝒪(ε^2) and γ_k(z) ≡2√(π)/Γ(1+k)Γ(-1/2-k) (z^2-1)^k
is such that lim_d→3ℳ_L(r,t) = _L(t-r/c)/r.
Note the presence of the additional set of moments K_ij| L, which are a pure artifact of working in d ≠ 3 dimensions.
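As a quick numerical sanity check of the limit quoted above, note that for a time-independent moment the limit requires ∫_1^∞ γ_1-d/2(z) dz = 1 (together with k̃ → 1 and r^d-2 → r). The snippet below verifies that normalization numerically for a few values of ε = d - 3; it is purely an illustrative check, not part of the actual computation.

```python
# Check that \int_1^\infty gamma_k(z) dz = 1 for k = 1 - d/2 with d = 3 + eps,
# consistent with lim_{d->3} M_L(r,t) = F(t - r/c)/r in the static case.
from mpmath import mp, gamma, sqrt, pi, quad, inf

mp.dps = 15  # working precision

def gamma_k(z, k):
    """gamma_k(z) = 2 sqrt(pi) / [Gamma(1+k) Gamma(-1/2-k)] * (z^2 - 1)^k."""
    return 2 * sqrt(pi) / (gamma(1 + k) * gamma(-mp.mpf(1) / 2 - k)) * (z**2 - 1)**k

for eps in (0.2, 0.4, 0.6):
    k = -mp.mpf(1) / 2 - mp.mpf(eps) / 2                  # k = 1 - d/2, d = 3 + eps
    val = quad(lambda z: gamma_k(z, k), [1, 2, inf])      # split at z = 2; endpoint singularity at z = 1
    print(f"eps = {eps}: integral = {val}")               # should be ~1 in each case
```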
For our practical purpose, we will only consider interactions between a static moment (either the ADM mass or angular momentum _i| j) and a propagating one.
Therefore, injecting in Λ^μν the sectors of the linear metric (<ref>) that are of interest for us, the quadratic sources are of the form
N(𝐱,t) = n̂_L ℓ_0^qε/r^p+qε∫_1^+∞ dz γ_1-d/2(z) z^k F(t-zr/c)
where F(t) represents a product of {,_i| j} with (temporal derivatives of) one of the moments {_ij, _ijk, _i| jk}.
Following the computation performed in <cit.>, we then define the homogeneous solution 𝒰^μν corresponding to the source (<ref>) as
𝒰 = (-)^p+ℓ/d+2ℓ-2 PF_B=0 Γ(qε-B)/Γ(p+ℓ-1+qε-B) C^k,p,q_ℓ ∑_j∈ℕΔ^-jx̂_L∫_0^∞ dτ τ^B-qε/r_0^B F^(2j+ℓ+p-1)(t-τ)/c^2j+ℓ+p+qε-B ,
where the PF operator corresponds to the finite part operation when B→ 0 <cit.>, and C^k,p,q_ℓ reads
C^k,p,q_ℓ≡∫_1^+∞ dy γ_1-d/2-ℓ(y)∫_1^+∞ dz γ_1-d/2(z) z^k(y+z)^ℓ-2+p+qε-B .
These coefficients are generalizations for q ∈ℤ of the ones introduced in <cit.>, and can be computed following the lines of App. D of that work.[In the case of the memory interaction, the two moments under consideration are propagating, and thus the sources are of the form N ∝∫ dy γ_1-d/2(y) y^k F(t-yr/c)∫ dz γ_1-d/2(z) z^m G(t-zr/c). In such cases, we were not able to write the homogeneous solution in a form as simple as (<ref>), notably because there is no factorization similar to (<ref>).]
If 𝒰^μν is of the form (<ref>), namely an homogeneous solution regular in the source, it is not yet the homogeneous solution ℋ^μν that we seek. At this stage, 𝒰^μν has no reason to be divergenceless and thus does not, in general, satisfy the harmonicity condition. So, to construct the correct solution, we add to 𝒰^μν a suitable homogeneous solution, 𝒱^μν, which cancels its divergence, following the standard procedure described e.g. in <cit.>
𝒰^μν ⟶ 𝒱^μν = H(∂_μ𝒰^μν) ⟶ ℋ^μν≡𝒰^μν+𝒱^μν .
In this method, 𝒱^μν is uniquely determined via the harmonicity algorithm given by Eqs. (47)–(48) in <cit.> and dubbed H here.
Note that a similar removal of the divergence was employed in the EFT computation of <cit.>, and was in fact a crucial step to obtain the failed tail.
Once the divergenceless ℋ^μν is known, one can inject it in the action (<ref>) and compute the integrals to obtain the desired effects.
In order to simplify the procedure, one can also implement a gauge transformation to “push” the ij components of the metric to higher PN orders, and thus only have compact integrals to perform.
The gothic metric transforms under the gauge transformation with vector ξ^μ as
ℋ^μν→ℋ'^μν = ℋ^μν + ∂^μξ^ν + ∂^νξ^μ - ∂_ρξ^ρ η^μν + 𝒪(ξ^2) .
So, by choosing ξ^μ adequately, one can cancel the leading order of ℋ^ij.
In the next section, both raw and shifted metrics are displayed for each interaction, and we have naturally verified that they give the same result.
Note that we have also performed another consistency check on the results presented in the next section.
By employing the method developed in <cit.>, we have first derived the metric ℋ^μν by using a three-dimensional, purely Hadamard regularization procedure.
Then, we have computed the contribution induced by the difference between the d-dimensional regularization scheme and the Hadamard one, using techniques elaborated in <cit.>.
By summing those two contributions, we recover the metric computed directly in d dimensions.
§ RESULTS AT 5PN
Let us implement the method described in the previous section (with extensive use of the xAct library from the Mathematica software <cit.>) in the cases of the tails appearing at 5PN in the conservative action, composed of the ×_ijk×_ijk, ×_i| jk×_i| jk and _i| j×_ij×_ij interactions.
We recall that the pole is dressed as in Eq. (<ref>).
By summing the separate results, we obtain our main result (<ref>).
§.§ Mass octupole tail
The divergenceless metric for the ×_ijk interaction reads at the leading order
ℋ^00ii_×_ijk = 4 G^2M x̂^ijk/315 c^12∫_0^∞τ( - 199/70)_ijk^(9)(t-τ)
+ (c^-14) ,
ℋ^0i_×_ijk = -4 G^2M x̂^jk/45 c^11∫_0^∞τ( - 1189/420)_ijk^(8)(t-τ)
+ (c^-13) ,
ℋ^ij_×_ijk = 4 G^2M x̂^k/9 c^10∫_0^∞τ( - 113/42)_ijk^(7)(t-τ)
+ (c^-12) .
By applying the following shift
ξ^0_×_ijk = -G^2M x̂^ijk/189 c^11∫_0^∞τ( - 149/70)_ijk^(8)(t-τ) ,
ξ^i_×_ijk =-G^2M x̂^ijk/9 c^10∫_0^∞τ( - 113/42)_ijk^(7)(t-τ) ,
the metric becomes of order ℋ'^ μν_×_ijk = (c^-12,c^-13,c^-12) and reads
ℋ'^ 00ii_×_ijk = 8 G^2M x̂^ijk/189 c^12∫_0^∞τ( - 82/35)_ijk^(9)(t-τ)
+ (c^-14) ,
which, injected in the action (<ref>), yields (upon integrations by parts)
𝒮_×_ijk = -G^2M/189 c^10∫ dt ∫_0^∞ dτ( - 82/35)_ijk^(4)(t) _ijk^(5)(t-τ)
+ 𝒪(c^-12) .
§.§ Current quadrupole tail
The divergenceless metric for the ×_i| jk interaction reads at the leading order
ℋ^00ii_×_i| jk =
(c^-14) ,
ℋ^0i_×_i| jk = 8 G^2M x̂^jk/45 c^11∫_0^∞τ( - 71/30)_i| jk^(7)(t-τ)
+ (c^-13) ,
ℋ^ij_×_i| jk = -16 G^2M x̂^k/9 c^10∫_0^∞τ( - 73/30)_(i|kj)^(6)(t-τ)
+ (c^-12) .
By applying the following shift
ξ^0_×_i| jk =0 ,
ξ^i_×_i| jk =8 G^2M x̂^jk/9 c^10∫_0^∞τ( - 73/30)_i| jk^(6)(t-τ) ,
the metric becomes of order ℋ'^ μν_×_i| jk = (c^-14,c^-11,c^-12), and reads
ℋ'^ 0i_×_i| jk = -32 G^2M x̂^jk/45 c^11∫_0^∞τ( - 49/20)_i| jk^(7)(t-τ)
+ (c^-13) ,
which, injected in the action (<ref>), yields (upon integrations by parts)
𝒮_×_i| jk = -16 G^2M/45 c^10∫ dt∫_0^∞ dτ( - 49/20)_i| jk^(3)(t) _i| jk^(4)(t-τ)
+ 𝒪(c^-12) .
§.§ Angular momentum failed tail
Finally, the divergenceless metric for the _i| j×_ij interaction reads at the leading order
ℋ^00ii__i| j×_ij = 4 G^2/45 c^12 x̂^jk _i| k^ _ij^(7)(t)
+ (c^-14) ,
ℋ^0i__i| j×_ij = 4 G^2/9 c^11 x̂^k _j| k^ _ij^(6)(t)
-G^2/9 c^11 x̂^k _i| j^ _jk^(6)(t)
+ (c^-13) ,
ℋ^ij__i| j×_ij = -22 G^2/15 c^10 _k| (i^ _j)k^(5)(t)
+ (c^-12) .
By applying the shift
ξ^0__i| j×_ij = 4 G^2/45 c^11 x̂^jk _i| k^ _ij^(6)(t) ,
ξ^i__i| j×_ij = 8 G^2/15 c^10 x̂^k _j| k^ _ij^(5)(t)
-3 G^2/15 c^10 x̂^k _i| j^ _jk^(5)(t) .
the metric becomes of order ℋ'^ μν__i| j×_ij = (c^-12,c^-13,c^-12), and reads
ℋ'^ 00ii__i| j×_ij = - 4 G^2/15 c^12 x̂^jk _i| k^ _ij^(7)(t)
+ (c^-14) ,
which, injected in the action (<ref>), yields (upon integrations by parts)
𝒮__i| j×_ij = G^2/30 c^10 _i| j^ ∫ dt _ik^(3)(t) _jk^(4)(t)
+ 𝒪(c^-12) .
§ SUMMARY AND CONCLUSION
In this work, we have derived the leading order tail and “failed” tail sectors appearing at the 5PN order in the conservative dynamics of compact binaries, by employing the Fokker Lagrangian framework.
Making use of dimensional regularization, we have computed the homogeneous solution of the near-zone metric, and have integrated it out in the action.
Our result, given in Eq. (<ref>), is consistent with previous works performed within the EFT framework: the tail sector agrees with <cit.>, and the failed tail one, with <cit.>.
With this new computation at hand, we hope that the current discrepancy in EFT results for the failed tail sector will be fully understood and resolved.
The last step towards completion of the 5PN conservative dynamics is the memory effect, i.e. the interaction of three mass quadrupoles.
In order to compute it within the Fokker Lagrangian framework, the method presented in this work has to be enhanced, as briefly discussed in the footnote <ref>.
This subtle computation is thus left for future work.
It is a pleasure to thank G. Luz Almeida, S. Foffa, A. Müller and R. Sturani for enlightening discussions at a late stage of this work, and Luc Blanchet at an early stage.
F.L. received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 817791).
|
http://arxiv.org/abs/2307.06171v1 | 20230712135454 | exoMMR: a New Python Package to Confirm and Characterize Mean Motion Resonances | [
"Mariah G. MacDonald",
"Michael S. Polania Vivas",
"Skylar D'Angiolillo",
"Ashley N. Fernandez",
"Tyler Quinn"
] | astro-ph.EP | [
"astro-ph.EP"
] |
exoMMR
MacDonald et al.
Mariah G. MacDonald
[email protected]
0000-0003-2372-1364]Mariah G. MacDonald
Department of Astronomy & Astrophysics, Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802, USA
Department of Physics, The College of New Jersey, 2000 Pennington Road, Ewing, NJ 08628, USA
0009-0001-2321-7865]Michael S. Polania Vivas
Department of Physics, The College of New Jersey, 2000 Pennington Road, Ewing, NJ 08628, USA
0000-0001-5592-6220]Skylar D'Angiolillo
Department of Physics, The College of New Jersey, 2000 Pennington Road, Ewing, NJ 08628, USA
0009-0003-6956-4066]Ashley N. Fernandez
Department of Physics, The College of New Jersey, 2000 Pennington Road, Ewing, NJ 08628, USA
0000-0002-8974-8095]Tyler Quinn
Department of Astronomy & Astrophysics, Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802, USA
The study of orbital resonances allows for the constraint of planetary properties of compact systems. We can predict a system's resonances by observing the orbital periods of the planets, as planets in or near mean motion resonance have period ratios that reduce to a ratio of small numbers. However, a period ratio near commensurability does not guarantee a resonance; we must study the system's dynamics and resonant angles to confirm resonance. Because resonances require in-depth study to confirm, and because two-body resonances require a measurement of the eccentricity vector which is quite challenging, very few resonant pairs or chains have been confirmed. We thus remain in the era of small number statistics, not yet able to perform large population synthesis or informatics studies. To address this problem, we build a python package to find, confirm, and analyze mean motion resonances, primarily through N-body simulations. We then analyze all near-resonant planets in the Kepler/K2 and TESS catalogues, confirming over 60 new resonant pairs and various new resonant chains. We additionally demonstrate the package’s functionality and potential by characterizing the mass-eccentricity degeneracy of Kepler-80g, exploring the likelihood of an exterior giant planet in Kepler-80, and constraining the masses of planets in Kepler-305. We find that our methods overestimate the libration amplitudes of the resonant angles and struggle to confirm resonances in systems with more than three planets. We identify various systems that are likely resonant chains but that we are unable to confirm, and highlight next steps for exoplanetary resonances.
§ INTRODUCTION
Two planets are in mean motion resonance (MMR) with one another when their conjunctions repeatedly occur at the same location, allowing them to exchange both energy and angular momentum. MMRs allow otherwise unstable configurations to persist and act as a potential well, resisting change from small perturbations.
We define the critical resonant angle for two bodies as:
Θ_b,c = j_1λ_b + j_2λ_c + j_3ω_b + j_4ω_c + j_5Ω_b + j_6Ω_c
where λ_p is the mean longitude of planet p, ω_p is the argument of periapsis, Ω_p is the longitude of the ascending node, j_i are coefficients which sum to zero, and planet b orbits interior to planet c.
If a system contains more than two planets in resonance, the planets can be in a resonant chain, either a chain of two-body resonances or in a three-body resonance. The zeroth-order three-body resonance can be defined as the difference between two consecutive two-body resonances:
ϕ_b,c,d = Θ_c,d - Θ_b,c = mλ_d - (m + n)λ_c + nλ_b
where λ_p is the mean longitude of planet p, and m and n are integers.
For planets in resonance, the two-body and/or three-body resonant angle will librate about some center with some amplitude. From this libration amplitude, we can learn additional information about the system's formation history. Small libration amplitudes indicate low energy of the resonance and overall a close proximity to exact resonance, achieved by smooth and dissipative formation <cit.>, whereas large libration amplitudes could be a consequence of perturbations from an additional planet <cit.>, overstable librations <cit.>, or stochastic forcing <cit.>.
If two planets are in resonance with one another, such a configuration requires a specific region of parameter space. Because of this, we are able to constrain planetary masses and orbits to those that allow for resonance <cit.>.
The two bodies in resonance will have orbital periods whose ratio reduces to a ratio of small integers, providing a straightforward method for identifying potential resonances; however, the two planets need not be at exact commensurability, nor does exact commensurability guarantee resonance, as the resonant configuration depends on other factors as well, such as the planets' masses and eccentricities. Because of this complexity, we cannot simply assume that any two adjacent planets near commensurability are resonant and instead must study their dynamics to confirm a resonance. Such a study is tedious and sometimes not possible, since the eccentricity vector is challenging to constrain, and so most systems remain classified as “near-resonant.”
Traditionally, mean motion resonance in exoplanets is confirmed by integrating forward the solutions to the system's radial velocities <cit.> or transit timing variations <cit.>. Such a process, however, requires that the system has detectable RVs or TTVs and that the signal from the planet-planet perturbations is sufficiently large to favor non-Keplerian orbits. One additional method of confirming resonance is modeling all possible solutions to the system <cit.>. Although computationally intensive, this brute-force method can confirm resonance if all solutions lead to resonance.
Following the methods of <cit.> and <cit.>, we create a python package, exoMMR <cit.>, to identify, confirm, and characterize additional and known resonances in exoplanetary systems.
In Section <ref>, we discuss the structure and performance of exoMMR, and we provide numerous examples of verification and utility of the software in Section <ref>, including identifying new resonant systems. We discuss the limitations of this software and our methods in Section <ref> and summarize and conclude in Section <ref>.
§ OVERVIEW OF THE CODE STRUCTURE
exoMMR performs various functions associated with finding, confirming, and characterizing mean motion resonances in exoplanetary systems. Each of these functions, summarized below, can be performed separately from one another.
§.§ Identifying resonance
We can quantify a planet pair's proximity to mean motion resonance, sometimes referred to as the resonance offset, by taking the difference between the observed motions and the mean-motion commensurability:
Δ_(p+q)/p = n_i/n_(i+1) - (p+q)/p
where p and q describe the resonance, n is the mean motion, and i is the planet closer to the primary. Typically, a proximity of 0.01 or less is associated with a resonant pair, although this proximity can be shifted by tidal dissipation and perturbations from an additional resonant pair <cit.>.
In practice, exoMMR requires an array of orbital periods to calculate the proximity to resonance; it will test all first- and second-order two-body mean motion resonances, returning the resonant angle and the pair's proximity to that resonance.
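To make the calculation concrete, the sketch below implements the proximity test in the same spirit; the function name and output format are illustrative only and are not the exoMMR interface itself.

```python
# Sketch: proximity to first- and second-order MMRs for adjacent planet pairs.
import numpy as np

def proximity_to_resonance(periods, max_order=2, j_max=6):
    """For each adjacent pair, return the closest (p+q):p commensurability
    (q = 1 or 2) and the offset Delta = n_i/n_{i+1} - (p+q)/p."""
    periods = np.sort(np.asarray(periods, dtype=float))
    results = []
    for i in range(len(periods) - 1):
        ratio = periods[i + 1] / periods[i]          # = n_i / n_{i+1} > 1
        best = None
        for q in range(1, max_order + 1):            # resonance order
            for p in range(1, j_max + 1):
                delta = ratio - (p + q) / p
                if best is None or abs(delta) < abs(best[2]):
                    best = (f"{p + q}:{p}", q, delta)
        results.append({"pair": (i, i + 1), "resonance": best[0],
                        "order": best[1], "Delta": best[2]})
    return results

# Example with Kepler-80-like periods (days):
print(proximity_to_resonance([3.07, 4.64, 7.05, 9.52, 14.65]))
```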
§.§ Creating suite of N-body simulations
Most of the functions of exoMMR require numerous models of the system. Although this requirement can be satisfied with posteriors from radial velocity, transit timing variations, or photodynamical fitting, it can also be met with a suite of N-body simulations. exoMMR will create and run a suite of rebound simulations <cit.>, pulling planetary, stellar, and simulation parameters from an input file. The software is structured to run the suite of simulations as a SLURM job array, but the jobs can be run in series or with another resource manager.
exoMMR will default to the following options:
Stellar masses will be fixed to the value provided. Planet masses, orbital periods, eccentricities, and inclinations will be drawn from independent, normal distributions centered on the nominal values with standard deviations equal to the uncertainties. Each planet is initialized with a longitude of the ascending node drawn from a uniform distribution U[0,π] and a mean longitude calculated from the given transit epoch or mid-transit time associated with the orbital period fit. Each suite of simulations will consist of 500 simulations, integrated for 1e6 years, with the WHFast integrator <cit.> and an integration timestep of 5% the smallest orbital period.
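For concreteness, the sketch below builds a single rebound simulation following these defaults; the input format (a list of planet dictionaries) and function name are assumptions for the example and are not the exoMMR input-file interface.

```python
# Sketch of one realization drawn from the defaults described above (illustrative;
# not the exoMMR input-file interface). Units: yr, AU, Msun; angles in radians.
import numpy as np
import rebound

def build_sim(mstar, planets, seed=None):
    """planets: list of dicts with keys m, m_err, P, P_err, e, e_err, inc, inc_err, l."""
    rng = np.random.default_rng(seed)
    sim = rebound.Simulation()
    sim.units = ("yr", "AU", "Msun")
    sim.add(m=mstar)                                    # stellar mass held fixed
    for p in planets:
        sim.add(m=abs(rng.normal(p["m"], p["m_err"])),  # draw mass, period, e, inc
                P=rng.normal(p["P"], p["P_err"]),
                e=abs(rng.normal(p["e"], p["e_err"])),
                inc=rng.normal(p["inc"], p["inc_err"]),
                Omega=rng.uniform(0.0, np.pi),          # node ~ U[0, pi]
                l=p["l"])                               # mean longitude from transit epoch
    sim.move_to_com()
    sim.integrator = "whfast"
    sim.dt = 0.05 * min(sim.particles[i].P for i in range(1, sim.N))
    return sim
```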
§.§ Confirming resonance and resonant chains
Given a rebound simulation, exoMMR will calculate the center of a resonant angle as the median and the amplitude of the libration as twice the standard deviation of the angle over a few years. The angles will be wrapped between [0, 360] and between [-180, 180], and the angle with the smallest calculated amplitude will be taken. A simulation is marked as resonant if the amplitude of libration is less than 150°.
exoMMR can then calculate the statistics of the angle across all simulations, including the percentage of simulations with that angle librating, then the median and uncertainties of the libration center and amplitude. Once the individual angles are characterized, exoMMR is able to study the possibility of resonant chains; here, we define a resonant chain as either two or more consecutive librating two-body angles or three-body angles. exoMMR will return the percentage of simulations that result in a three-body, four-body, etc., resonant chain, as well as the percentage of simulations where each planet is dynamically decoupled.
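The following sketch mirrors that procedure for a single angle time series and then aggregates a suite; it is a simplified illustration rather than the exoMMR implementation.

```python
# Sketch of the libration test: wrap the angle two ways, keep the wrapping with the
# smaller 2-sigma amplitude, and call the angle librating if that amplitude < 150 deg.
import numpy as np

def libration_stats(theta_deg):
    """Return (center, amplitude, librating) for one resonant-angle time series."""
    wrapped_0_360 = np.mod(theta_deg, 360.0)
    wrapped_pm180 = np.mod(np.asarray(theta_deg) + 180.0, 360.0) - 180.0
    best = None
    for angles in (wrapped_0_360, wrapped_pm180):
        center = np.median(angles)
        amplitude = 2.0 * np.std(angles)
        if best is None or amplitude < best[1]:
            best = (center, amplitude)
    center, amplitude = best
    return center, amplitude, amplitude < 150.0

def summarize_suite(all_theta):
    """all_theta: list of angle time series, one per simulation."""
    stats = [libration_stats(t) for t in all_theta]
    librating = np.array([s[2] for s in stats])
    centers = np.array([s[0] for s in stats])[librating]
    amps = np.array([s[1] for s in stats])[librating]
    return {"frac_librating": librating.mean(),
            "center_16_50_84": np.percentile(centers, [16, 50, 84]) if librating.any() else None,
            "amplitude_16_50_84": np.percentile(amps, [16, 50, 84]) if librating.any() else None}
```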
For this work, we confirm resonance if 90% or more of the simulations result in librating angles; although this number is fairly arbitrary, we caution against reducing it. We discuss the potential limitations of this cut-off more in Section <ref>.
§.§ Constraining parameters with resonance
Using statistical tests, we can confirm whether there is a significant difference between solutions that lead to resonance and those that do not. Following <cit.> and <cit.>, we compare two samples of a parameter, split by whether or not a resonant angle is librating, using both a Kolmogorov–Smirnov test and an Anderson-Darling test. Both of these tests explore the null hypothesis that the two samples are drawn from the same population, so a resulting p-value less than α=5% allows us to reject this hypothesis.
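In practice, these two tests can be run with scipy, as in the hedged sketch below; the variable names and return format are illustrative only.

```python
# Compare a parameter between resonant and non-resonant solutions with the
# two-sample KS test and the k-sample Anderson-Darling test (illustrative sketch).
import numpy as np
from scipy import stats

def compare_populations(values, librating, alpha=0.05):
    values = np.asarray(values, dtype=float)
    lib = np.asarray(librating, dtype=bool)
    res_sample, non_sample = values[lib], values[~lib]
    ks_stat, ks_p = stats.ks_2samp(res_sample, non_sample)
    ad = stats.anderson_ksamp([res_sample, non_sample])
    # anderson_ksamp caps its reported significance level to the [0.001, 0.25] range
    return {"KS_p": ks_p,
            "AD_p": ad.significance_level,
            "distinct": (ks_p < alpha) and (ad.significance_level < alpha)}
```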
§.§ Exploring chain formation
Resonant chains are often seen as the hallmarks of convergent migration: planets migrate until they lock into resonance, and the resonant pair then migrates together, locking additional planets into the chain <cit.>. However, resonant chains do not require such long-distance migration; dissipation from a disk, tides, or planet-planet scattering can damp a planet's eccentricity and cause slight migration, also resulting in chains of resonances <cit.>.
Following the methods outlined in <cit.>, exoMMR creates and runs suites of N-body simulations. Each simulation initializes the planets out of resonances and then damps the semi-major axes and eccentricities of the planets, following the prescription in <cit.> and using the implementation <cit.> in REBOUNDx 3.1.0 <cit.>; for the migration simulations, these forces are applied only to the outermost planet under the assumption that it has a shorter migration timescale than the other planets in the system. For simulations with only eccentricity damping, the damping is applied to each planet. The migration and eccentricity damping timescales are drawn from independent log-normal distributions whose bounds are user-defined.
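A minimal sketch of such a damping setup with REBOUNDx is shown below; the star and planet parameters, the timescale draws, and the use of the "modify_orbits_forces" effect are assumptions chosen to mimic the prescription described above, not the exoMMR migration module itself.

```python
# Sketch: migrate the outermost planet of a placeholder three-planet system toward
# resonance by damping its semi-major axis and eccentricity with REBOUNDx.
import numpy as np
import rebound
import reboundx

rng = np.random.default_rng(42)
sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.add(m=0.7)                                   # placeholder star
for P in (0.010, 0.0153, 0.0235):                # placeholder periods, just wide of 3:2 pairs
    sim.add(m=3e-6, P=P, e=0.01)
sim.move_to_com()
sim.integrator = "whfast"
sim.dt = 0.05 * sim.particles[1].P

rebx = reboundx.Extras(sim)
force = rebx.load_force("modify_orbits_forces")  # exponential a- and e-damping
rebx.add_force(force)

# Damping applied only to the outermost planet; placeholder timescale draws (yr).
outer = sim.particles[sim.N - 1]
outer.params["tau_a"] = -10 ** rng.uniform(5, 7)  # negative tau_a -> inward migration
outer.params["tau_e"] = -10 ** rng.uniform(3, 5)  # eccentricity damping timescale

sim.integrate(1.0e5)                              # let the outer pair migrate toward resonance
```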
§ TEST PROBLEMS AND UTILITY
§.§ Recovering known resonances
To verify the usability of exoMMR, we study two well-studied resonant chain systems: Kepler-223 and Kepler-80. For each system, we run 500 N-body simulations, drawing the planet masses and orbital elements from independent normal distributions, centered around the nominal values with widths of the uncertainties constrained by <cit.> and <cit.>. We integrate with a timestep of 5% of the innermost planet's orbital period using the WHFast integrator <cit.>.
After 1 Myr, we study the two- and three-body resonant angles that correspond to the orbital period commensurabilities. We estimate the angle amplitude as twice the standard deviation of the angle over a period of 20 years[2σ resulted in an amplitude that was least biased by long-term linear changes to the libration center and by cycling in and out of resonance], and constrain the number of simulations in which each angle is librating. We summarize our results in Table <ref>.
Table: Known Resonant Systems
System | % librating | Center | Amplitude
Kepler-80
Θ_1,2 85.00% -0.06 _- 0.42 ^+ 0.61 94.66 _- 44.32 ^+ 38.63
Θ_2,3 40.00% -0.95 _- 6.77 ^+ 3.66 131.26 _- 10.54 ^+ 33.38
Θ_3,4 63.00% -0.81 _- 8.46 ^+ 5.97 120.50 _- 20.59 ^+ 20.29
ϕ_1 16.00% 176.38 _- 5.95 ^+ 7.91 126.03 _- 15.51 ^+ 37.52
ϕ_2 30.00% 56.07 _- 31.10 ^+ 132.47 100.78 _- 39.21 ^+ 23.81
Kepler-223
Θ_1,2 39.81% -0.04 _- 24.26 ^+ 26.87 122.85 _- 19.73 ^+ 30.27
Θ_2,3 31.52% 0.51 _- 24.73 ^+ 27.61 127.39 _- 17.75 ^+ 28.02
Θ_3,4 29.62% -0.81 _- 40.31 ^+ 24.68 128.80 _- 12.93 ^+ 22.70
ϕ_1 69.19% 52.90 _- 130.27 ^+ 87.42 84.48 _- 43.69 ^+ 23.46
ϕ_2 32.94% 67.49 _- 102.48 ^+ 126.51 116.13 _- 25.73 ^+ 34.01
The results of exoMMR for two known resonant chain systems. We show the percentage of simulations where each angle is librating, along with the center and amplitude of the libration.
Both Kepler-223 and Kepler-80 have well-studied and confirmed four-body resonant chains. However, exoMMR is unable to confirm such a chain. Although each two-body angle librates in a significant percentage of simulations, we are not able to confirm any of these angles since no angle librates in more than 90% of the simulations. In addition, the three-body resonant angles librate in a fair fraction of simulations for both systems, but no angle librates in a large enough fraction to consider the system in resonance.
We discuss the implications of our inability to recover these resonant chains in Section <ref>.
§.§ Systems without resonance
In addition to studying systems with known resonances, we validate the effectiveness of exoMMR by studying Kepler-11. Kepler-11 is a G-type star hosting six super-Earths. While five of these planets all orbit their star within 50 days, making this system one of the first compact and dynamically cold systems discovered <cit.>, the planets in this system are well-studied and confirmed to not be in resonance with one another <cit.>. We run 500 N-body simulations of the system and its five inner planets. For each planet, we draw its mass and orbital elements from independent normal distributions centered on the nominal values and with widths equal to the uncertainties in <cit.>. We integrate for 1 Myr with a timestep of 5% the innermost planet's orbital period using the WHFast integrator <cit.>. We assume a stellar mass of 1.04M_⊙ <cit.>.
Of the 500 simulations, exoMMR marks six simulations (1.2%) as containing resonances. In each of these simulations, the two-body angle Θ_c,d= 5λ_d - 4λ_c - ω_c and the three-body angle ϕ_d,e,f = 4λ_f-7λ_e+3λ_d librate with large amplitudes of 76.74^+32.38_-32.38 and 69.23^+2.44_-2.44, respectively. Although we only have these six simulations, we find no statistical evidence of preferred masses or orbits that lead to this resonance and instead find it likely that these resonant angles are switching between librating and circulating.
§.§ Constraining outer companions in Kepler-80
Kepler-80 is a K-dwarf that hosts six known transiting exoplanets, with orbital periods ranging between 1.0 and 14.7 days and radii between 1.2 and 2.2 R_⊕ <cit.>. Four of these planets are locked in a chain of three-body mean motion resonances, and the outermost planet is likely also in resonance <cit.>. The two-body angles associated with the commensurabilities in this system do not librate <cit.>, making Kepler-80 a relatively unique system and a useful test ground for planetary formation and evolution.
Since the resonances in this system are well-studied, we are able to leverage the dynamics to constrain any undetected outer companions. We model the five outermost planets and an injected theoretical planet. We draw the orbital period of this injected planet from a uniform distribution spanning 15 to 60 days and its mass from 0.5M_⊕–5M_J, and initialize the planet with a dynamically cold orbit (e = 0.0, i=90°). We draw the masses and orbital elements of the known planets from independent normal distributions as described in Section <ref> and integrate for 5×10^5 years at a timestep of 5% the inner planet's orbital period.
We constrain the feasibility of the injected planet with two criteria, the system's stability and the known resonance. If the new planet results in orbital evolution and a close encounter or ejection, it could not possibly exist. Similarly, if the new planet disrupts or breaks the known three-body resonances and causes them to circulate, it could not exist. We therefore restrict the ranges of possible masses and orbital resonances with these criteria.
We summarize our results as heat maps in Figure <ref>. We find that any planet close to Kepler-80g (P<20 days), regardless of mass, is likely to cause instability. We also find that any planet with P>50 days would not disrupt the resonances or cause system instability. Curiously, a massive planet (M_p>0.25M_J) with 20<P<50 days would need to participate in the resonant chain to avoid instability or breaking the existing resonances.
§.§ Exploring the mass-eccentricity degeneracy of Kepler-80g
Discovered via neural nets by <cit.>, Kepler-80g is the outermost known planet orbiting its K-type host. Due to its low signal-to-noise ratio of 8.6 <cit.>, the planet's orbital period and radius are constrained to a much lower precision than is typical for transiting planets, with P=14.65±0.001 days and R_p=1.05^+0.22_-0.24 R_⊕.
Kepler-80g's orbital period suggests that it likely continues the chain of MMRs seen in the rest of the system. To confirm this resonance and further characterize the planet, <cit.> photodynamically fit the system. They recover a radius of R_p=1.05^+0.22_-0.24 R_⊕, an orbital period of P=14.65±0.001 days, and a mass of M_p=0.065^+0.044_-0.038 M_⊕. This mass in conjunction with the radius estimate suggests a low density planet that is atypical of terrestrial-size planets. Combined with the high precision of the mass estimate, it is likely that this planet was overfit. In addition, <cit.> find an eccentricity of e=0.13, significantly greater than the eccentricities of the other planets in the systems and greater than most resonant, small planets. We find it likely, then, that <cit.> report a mass that is far too low. We aim to constrain the possible ranges of mass and eccentricity that this planet must need to exist, as well as to not disrupt the resonance of its neighboring planets within Kepler-80.
To break this mass-eccentricity degeneracy, we run a total of 1200 N-body simulations. We draw Kepler-80g's mass and eccentricity from independent uniform distributions of U[0.0,1.0] M_⊕ and U[0.0,0.1], respectively, draw its orbital period from a Gaussian distribution of N[14.651,0.001] days, and initialize its inclination at 88.26. We draw the masses and orbital parameters for the other planets in the system from independent normal distributions centered around the nominal values from <cit.> and fix the stellar mass to 0.73 M_⊙ <cit.>.
We integrate for 1 Myr with a timestep of 5% the inner planet's orbital period using the WHFast integrator <cit.>. We stop integrating if any planet experiences a close encounter or if any planet's eccentricity exceeds 0.9. We then analyze each simulation for resonance, looking for libration of the two three-body resonant angles ϕ_1=3λ_b-5λ_e+2λ_d and ϕ_2=2λ_c-3λ_b+λ_e.
We show our results in Figure <ref>. We find that Kepler-80g must be relatively low mass with low eccentricity for the system to remain stable with its resonances intact. If the planet is more massive than 0.5 M_⊕, corresponding to a minimum bulk density of ρ=2.38 g/cm^3 with an assumed radius of R_p=1.05 R_⊕, we find the eccentricity must be small, e<0.005. Eccentricities larger than this result in a disruption of the known three-body resonances, causing one or both angle to circulate instead of librate. Specifically, we constrain the mass and eccentricity to 0.20^+0.25_-0.14 M_⊕ and 0.01^+0.03_-0.007, respectively.
The results for mass and eccentricity that we derive from dynamics are still far from realistic. Assuming a planetary radius of R_p=1.05 R_⊕, Kepler-80g would have a bulk density of 0.927^+1.19_-0.65 g/cm^3, significantly less dense than most planets. We do, however, recover an eccentricity for Kepler-80g that is much smaller than the estimate from <cit.> of e=0.13 and is more inline with other compact systems.
Recently, <cit.> re-visited the Kepler-80 system, performing an analysis similar to <cit.> but including Kepler-80g in their fits. When they allow the eccentricity vectors of the five planets to float, they recover a mass of 0.8^+0.8_-0.6 M_⊕ and an eccentricity of 0.02^+0.03_-0.02. This eccentricity is consistent with our dynamically-derived estimate of 0.01^+0.03_-0.007, suggesting this to likely be accurate. This mass estimate is larger than our mass estimate, although consistent within 1σ, suggesting that we are, still, underestimating the mass.
We have ultimately constrained the mass and eccentricity of Kepler-80g using the system's dynamics. We use this example as a proof-of-concept that other less-studied systems can have their parameters dynamically constrained in the absence of detectable TTVs or RVs.
§.§ Confirming new resonances
We explore all Kepler, K2, and TOI systems for mean motion resonances.
For each consecutive planet pair, we calculate the proximity to resonance, using the periods reported in the Exoplanet Archive <cit.>. We then study systems with at least one planet pair that is wide of a resonance with a proximity to resonance less than 0.2.[We do not study systems if the only near-resonant pair is inside the resonance, as these are unlikely to be resonant.] We then down-select this target list, removing systems with known resonances.
For each system, we run a suite of 500 N-body simulations. We assume a stellar mass from that reported in each catalog and draw the planetary parameters from independent normal distributions that are centered on the nominal values reported in the respective reference. For planets without mass estimates, we use the mass-radius relationship from <cit.>[Although the resonant state indeed depends on planetary mass, the uncertainty in mass that results from assuming a mass-radius relationship is typically smaller than the resonance width for small planets; therefore, our results are not sensitive to this mass-radius relationship for the majority of the planets included in this study.]. Otherwise, we use the mass estimates from the primary reference on the Exoplanet Archive. We summarize our starting conditions in Table <ref>.
We use the WHFast integrator with a timestep set to 5% of the inner planet's orbital period and integrate for 1 Myr[We select 1 Myr to help ensure long-term stability while reducing computational costs. We have found no statistically significant difference in results between 1.0 and 10 Myr <cit.>.] or until instability (close encounter). We then study each simulation for resonant behaviour; we look for libration of each resonant angle based on a libration amplitude that is less than 150, and we confirm a resonance if the resonant angle librated in 90+% of our N-body simulations.
Overall, we confirm 66 new resonances in 60 systems. We summarize these new resonances in Table <ref>. For completeness, we summarize resonances that we explored but cannot confirm in the Appendix in Table <ref>[Although we classify systems as resonant, we caution against labeling the other systems as “nonresonant.” Instead, we say we are unable to confirm resonance and mark some as potentially resonant.].
Table: New Mean Motion Resonances
System | Planets | Resonance | % librating | Center | Amplitude | Notes
HD 28109 02, c 3:2 100.0 -1.63 ^+ 17.97 _- 14.35 120.83 ^+ 5.95 _- 10.85 2, 4
HIP 41378 b, c 2:1 100.0 -0.012 ^+ 0.42 _- 0.39 69.61 ^+ 28.78 _- 22.08 3, 4
HD 191939 c, d 4:3 100.0 0.21 ^+ 5.26 _- 5.73 114.23 ^+ 5.00 _- 3.86 3
HD 260655 b, c 2:1 100.0 0.047 ^+ 0.56 _- 0.57 96.67 ^+ 8.90 _- 8.71
K2-80 b, d 3:2 100.0 0.29 ^+ 3.57 _- 3.58 118 ^+ 6.20 _- 7.47
K2-178 02, b 2:1 91.2 0.011 ^+ 0.42 _- 0.45 73.4 ^+ 47.97 _- 29.87 2
b, 03 3:2 99.4 0.085 ^+ 4.13 _- 4.6 117.96 ^+ 11.20 _- 12.65 2
K2-239 b, c 3:2 93.6 179.98 ^+ 0.767 _- 0.75 99.89 ^+ 32.18 _- 48.06
K2-268 e, c 3:2 91.6 -0.072 ^+ 0.94 _- 0.89 105.18 ^+ 32.03 _- 37.14 1
K2-285 b, c 2:1 100.0 0.0083 ^+ 0.29 _- 0.3 71.89 ^+ 30.17 _- 27.00
Kepler-18 c, d 2:1 100.0 180.02 ^+ 1.00 _- 0.99 79.56 ^+ 16.24 _- 21.13 1, 3
Kepler-23 b, c 3:2 97.2 -0.03 ^+ 2.06 _- 1.97 132.43 ^+ 8.94 _- 8.11 3
Kepler-31 c, d 2:1 99.8 0.058 ^+ 2.06 _- 2.09 81.5 ^+ 31.67 _- 23.63 1, 3
Kepler-32 e, b 2:1 99.8 -0.043 ^+ 0.63 _- 0.53 77.55 ^+ 29.59 _- 18.81 3
b, c 3:2 98.6 180.04 ^+ 0.69 _- 0.75 129.56 ^+ 8.67 _- 14.08
Kepler-51 b, c 2:1 95.0 179.93 ^+ 2.41 _- 2.08 82.38 ^+ 41.42 _- 35.52 3
c, d 3:2 98.8 0.31 ^+ 14.53 _- 14.06 122.03 ^+ 13.21 _- 10.82
Kepler-53 b, c 2:1 98.8 -0.060 ^+ 2.33 _- 2.34 79.16 ^+ 33.32 _- 31.40 1, 3
Kepler-55 d, e 2:1 97.4 0.0078 ^+ 0.35 _- 0.40 71.03 ^+ 41.92 _- 28.90 3
b, c 3:2 100.0 0.96 ^+ 9.15 _- 11.02 116.10 ^+ 7.77 _- 7.45
Kepler-62 e, f 2:1 100.0 0.045 ^+ 2.70 _- 2.76 79.61 ^+ 29.49 _- 27.73 1
Kepler-83 b, c 2:1 99.8 0.035 ^+ 0.90 _- 0.91 78.33 ^+ 37.96 _- 37.64 1, 3
Kepler-102 d, e 3:2 99.8 0.07 ^+ 1.87 _- 1.87 89.23 ^+ 28.79 _- 27.22 1, 3
Kepler-104 b, c 2:1 99.8 -0.01 ^+ 0.57 _- 0.49 69.59 ^+ 38.54 _- 35.80 1, 3
Kepler-105 c, 03 4:3 98.4 180.02 ^+ 2.03 _- 1.81 123.16 ^+ 10.82 _- 17.93 1, 2, 3
Kepler-131 c, 03 3:2 93.8 -0.020 ^+ 0.61 _- 0.64 112.43 ^+ 22.34 _- 26.83 1, 2
Kepler-138 b, c 4:3 100.0 0.24 ^+ 3.97 _- 3.85 115.00 ^+ 5.98 _- 4.72 3
Kepler-154 f, d 2:1 99.2 0.040 ^+ 0.36 _- 0.40 85.57 ^+ 31.14 _- 23.08 1
Kepler-169 c, d 4:3 99.6 -0.051 ^+ 1.36 _- 1.25 95.63 ^+ 14.51 _- 19.83 1
Kepler-176 c, d 2:1 99.2 -0.45 ^+ 4.48 _- 4.13 76.03 ^+ 37.57 _- 35.55 1, 3
Kepler-176 d, e 2:1 89.0 180.28 ^+ 9.36 _- 10.12 92.76 ^+ 37.62 _- 41.72 1
Kepler-207 c, d 2:1 89.6 179.86 ^+ 3.35 _- 3.17 86.56 ^+ 35.64 _- 28.32 1
Kepler-208 c, d 3:2 95.8 179.91 ^+ 1.89 _- 1.64 112.74 ^+ 26.22 _- 31.69
Kepler-249 c, d 2:1 96.8 0.0078 ^+ 0.36 _- 0.37 72.28 ^+ 45.87 _- 30.94 1
Kepler-254 c, d 3:2 99.2 -0.15 ^+ 2.41 _- 2.22 116.40 ^+ 13.04 _- 14.18 1, 3
Kepler-305 b,c 3:2 99.6 0.011 ^+ 1.35 _- 1.24 89.8 ^+ 21.33 _- 24.5 1, 3
Kepler-327 b, c 2:1 100.0 0.00074 ^+ 0.16 _- 0.18 69.49 ^+ 32.38 _- 40.79 3
Kepler-332 b, c 2:1 99.2 0.007 ^+ 0.56 _- 0.58 67.97 ^+ 25.10 _- 22.9
c, d 2:1 95.4 -0.052 ^+ 4.48 _- 3.65 69.34 ^+ 38.03 _- 20.83
Kepler-339 c, d 3:2 99.6 0.0047 ^+ 1.21 _- 1.29 84.74 ^+ 18.27 _- 17.37 3
Kepler-341 b, c 3:2 95.4 0.017 ^+ 0.89 _- 0.99 128.18 ^+ 10.34 _- 17.30
c, d 3:2 99.8 -0.094 ^+ 1.09 _- 1.01 129.75 ^+ 10.57 _- 19.71
Kepler-363 b, c 2:1 99.2 0.0029 ^+ 0.22 _- 0.24 49.42 ^+ 33.52 _- 23.94 1, 3
Kepler-394 b, c 3:2 99.8 0.12 ^+ 0.83 _- 0.90 89.87 ^+ 22.48 _- 32.39
Kepler-968 c,d 4:3 96.6 -0.04 ^+ 1.04 _- 1.01 126.35 ^+ 10.07 _- 9.93 1, 3
Kepler-1518 b, 02 2:1 99.8 179.96 ^+ 3.15 _- 3.22 51.65 ^+ 21.79 _- 22.48 1
Kepler-1581 02,04 3:2 90.4 0.044 ^+ 0.432 _- 0.515 103.31 ^+ 27.34 _- 28.07 2
L 98-59 c, d 2:1 100.0 -0.028 ^+ 1.04 _- 0.95 65.94 ^+ 24.45 _- 27.28 4
LHS 1678 c, 03 4:3 100.0 -0.0071 ^+ 0.67 _- 0.66 119.29 ^+ 5.83 _- 5.89 2
TOI-178 97.2 51.9 ^+ 45.39 _- 147.1 59.64 ^+ 29.95 _- 35.34 1, known
TOI-270 c, d 2:1 100.0 -0.14 ^+ 3.49 _- 3.12 94.11 ^+ 7.44 _- 18.05
TOI-406 02, 01 2:1 100.0 180.06 ^+ 7.25 _- 7.45 96.71 ^+ 5.28 _- 5.11 2
TOI-561 c, f** 3:2 100.0 0.067 ^+ 1.07 _- 1.17 120.3 ^+ 9.11 _- 13.26 3
TOI-663 02, 03 3:2 98.0 0.045 ^+ 1.07 _- 1.17 126.81 ^+ 11.42 _- 14.45 2
TOI-1097 01, 02 3:2 100.0 0.28 ^+ 1.88 _- 2.46 118.40 ^+ 7.08 _- 10.94 2
TOI-1130 b, c 2:1 100.0 -0.027 ^+ 0.60 _- 0.62 103.17 ^+ 4.99 _- 4.28
TOI-1246 d, e 2:1 100.0 0.17 ^+ 2.25 _- 2.38 62.37 ^+ 24.83 _- 24.35 1, 3
TOI-1445 02, 01 2:1 100.0 0.015 ^+ 0.64 _- 0.68 78.77 ^+ 19.69 _- 23.13 2
TOI-1453 02, 01 3:2 100.0 0.000019 ^+ 0.32 _- 0.31 119.49 ^+ 9.19 _- 12.02 2
TOI-1730 01, 03 2:1 100.0 -0.14 ^+ 2.18 _- 1.92 96.8 ^+ 5.43 _- 7.73 2
TOI-1746 01, 02 3:2 99.6 0.00059 ^+ 0.43 _- 0.41 126.31 ^+ 10.11 _- 16.75 2
TOI-1749 b, c 2:1 100.0 0.15 ^+ 1.09 _- 1.31 94.00 ^+ 7.76 _- 10.69
TOI-1803 02, 01 2:1 100.0 -0.037 ^+ 0.92 _- 0.87 87.39 ^+ 13.63 _- 16.95 2
TOI-2086 02, 01 2:1 100.0 0.21 ^+ 2.12 _- 2.57 84.69 ^+ 17.40 _- 18.11 2
TOI-2096 01, 02 2:1 100.0 0.00090 ^+ 0.30 _- 0.24 87.38 ^+ 12.99 _- 17.05 2
TOI-2267 03, 01 3:2 100.0 0.0091 ^+ 0.39 _- 0.43 122.83 ^+ 8.51 _- 12.67 2
TOI-4495 02, 01 2:1 100.0 0.0016 ^+ 0.66 _- 0.68 100.39 ^+ 4.01 _- 5.26 2
For each new resonance, we include the system's name, the planets in resonance, the resonance, the percentage of simulations where this angle librates, the center of libration, and the amplitude of libration.
1: System contains potential additional resonant pair (see Table <ref>)
2: Resonant pair contains candidate planet
3: System has known TTVs
4: Pair previously studied for resonance
Table: Initial Conditions for Systems in Table <ref>
System, No. Planets | M_⋆ [M_⊙] | Planet | R_p [R_⊕] | P [d] | t_0 [d] | M_p [M_⊕] | i [deg] | Ref.
HD 28109, 3 (4) 1.26^+0.08_-0.08
b 2.2^+0.1_-0.1 22.89104^+0.00035_-0.00036 2458344.81772^+0.00757_-0.00757 18.5^+9.1_-7.6 87.725^+0.023_-0.012 [10]
c 4.23^+0.11_-0.11 56.00819^+0.00194_-0.00202 2458377.80109^+0.00724_-0.00733 7.9^+4.2_-3.0 89.543^+0.093_-0.086 [10]
d 3.25^+0.11_-0.11 84.25999^+0.00744_-0.00662 2458355.67324^+0.00432_-0.00432 5.7^+2.7_-2.1 89.682^+0.093_-0.082 [10]
02 2.01^+0.90_-0.09 31.32312^+0.00354_-0.00354 2459044.82593^+0.05923_-0.05923
HIP 41378, 5 1.26^+0.23_-0.16
b 2.90^+0.44_-0.44 15.572098^+0.000018_-0.000019 2457152.2818^+0.0012_-0.0012 88.8^+0.8_-1.4 [36], [3]
c 2.56^+0.40_-0.40 31.70648^+0.00024_-0.00019 2457163.1609^+0.0023_-0.0027 87.5^+2.2_-1.4 [36], [3]
d 3.96^+0.59_-0.59 156^+163_-78 2457166.2604^+ 0.0017_-0.0017 89.930^+0.025_-0.018 [36], [20], [3]
e 5.51^+0.77_-0.77 131^+61_-36 2457142.0194^+0.0010_-0.0010 89.910^+0.220_-0.045 [36], [3]
f 10.2^+1.4_-1.4 324^+121_-126 2457186.91423^+0.00039_-0.00038 89.980^+0.009_-0.006 [36], [20], [3]
… … … … … … … …
TOI-406, 0 (2) 0.38±0.02
02 1.27±3.3 6.61491±0.00003 2458385.388±0.002
01 1.96±0.45 13.17573±0.00003 2458388.567±0.001
[1] = <cit.>;
[3] = <cit.>;
[4] = <cit.>;
[5] = <cit.>;
[6] = <cit.>;
[7] = <cit.>;
[8] = <cit.>;
[9] = <cit.>;
[10] = <cit.>;
[11] = <cit.>;
[12] = <cit.>;
[13] = <cit.>;
[14] = <cit.>;
[15] = <cit.>;
[16] = <cit.>;
[17] = <cit.>;
[18] = <cit.>;
[19] = <cit.>;
[20] = <cit.>;
[21] = <cit.>;
[22] = <cit.>;
[23] = <cit.>;
[24] = <cit.>;
[25] = <cit.>;
[26] = <cit.>;
[27] = <cit.>;
[28] = <cit.>;
[29] = <cit.>;
[30] = <cit.>;
[31] = <cit.>;
[32] = <cit.>;
[33] = <cit.>;
[34] = <cit.>;
[35] = <cit.>;
[36] = <cit.>;
[37] = <cit.>;
[38] = <cit.>;
[39] = <cit.>.
For each system we study for resonance, the number of
planets (including planetary candidates) and the stellar mass
M_⋆ in solar masses. For each planet in these systems,
the planet's radius R_p in Earth radii, orbital period P in
days, the mid-transit time t_0 in BJD, the planet's mass M_p
in Earth masses, the sky-plane inclination i in degrees, and
the reference for these values.
Table <ref> is published in its
entirety in the machine-readable format. A portion is shown here
for guidance regarding its form and content.
§.§.§ Systems to follow-up
As reported in Tables <ref> and <ref>, we study numerous systems whose resonant angles we cannot confirm are librating, but whose planets could very well be in a resonant chain. When we apply our method of resonance confirmation to the resonant chains in Kepler-80 and Kepler-223 (see Section <ref>), we find that the resonant angles librate in only ∼50-90% of the simulations and do not all reach our 90% cut-off; we would therefore not be able to confirm such resonances with our method, and we would likewise be unable to confirm similar resonant chains, instead seeing their resonant angles librate in ∼50+% of our simulations. Following this logic, any system that we explore where a) the period ratios between adjacent planets suggest the planets could be in a chain of resonances and b) our methods result in some high (but not 90+%) percentage of N-body simulations with librating resonant angles could be resonant, even though we are not able to confirm that resonance in this work. In such cases, the proximity to resonance is likely to lead to large gravitational perturbations that would be detectable in the system's RVs or transit times as deviations from Keplerian orbits.
We list such systems below in Table <ref>. For each system, we estimate the planet's TTVs by subtracting a linear least square fit from the transit times from each of the N-body simulations. Such resonances could be confirmed through TTV fitting, transit+RV fitting, or photodynamic fitting <cit.>, and therefore these systems deserve follow-up.
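As a concrete sketch of this estimate (not the exact implementation used here), one can fit a linear ephemeris to each planet's simulated transit times and take the residuals:

```python
# Sketch: TTVs as the O-C residuals of a linear least-squares ephemeris, in minutes.
import numpy as np

def estimate_ttvs(transit_times_days):
    epochs = np.arange(len(transit_times_days))
    coeffs = np.polyfit(epochs, transit_times_days, 1)   # [period, t0]
    linear_ephemeris = np.polyval(coeffs, epochs)
    return (np.asarray(transit_times_days) - linear_ephemeris) * 24.0 * 60.0

# ttv = estimate_ttvs(times_from_nbody)   # e.g. peak-to-peak amplitude: ttv.ptp()
```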
llllc
5
Systems to follow up
System
m_V
Planets
Estimated TTVs (min)
Prior Dynamics Study
K2-243 10.971 b–01, 01–c 1–4, 2–15, 1–7
Kepler-104^† 14.266 c–d 1–3, 1–4, 0–1 [1]
Kepler-105^† 12.981 b–c 1–4, 1–15, 1–60
KOI-1358** 15.477 02–03, 03–04 1–12, 5–22, 1–2, 1–5 [1], [3], [4]
Kepler-79 14.036 b–c, c–d, d–e 1–20, 8–60, 6–24, 10–210 [1], [2], [5]
Kepler-416 14.166 b–c, 03–04 1–2, 1–4, 1–5, 1–6 [1]
Kepler-122 14.403 b–c, e–f 1–3, 1–2, 2–10, 6–65 [1]
Kepler-402 13.270 b–c, d–e 1–8, 1–3, 2–9, 2–8, 1–6
Kepler-31^† 15.496 d–04 1–5, 3–85, 12–64, 2–77 [1]
TOI-1136 9.534 02–01–04, 02–01, 01–04 ∼3800, ∼1300, ∼8400, ∼6000
TOI-178^† 11.955 b–c–d, b–c, c–d, d–e 1–14, 5–133, 16–825, 8–320
TOI-797 13.689 01–03, 03–02 1–1240, 1–6, 1–8
Kepler-154^† 14.646 b–c 0, 1–10, 1–5, 5–20, 1–6, 1–3
Kepler-169^† 14.424 b–c 0, 1–4, 1–6, 1–2, 0
Kepler-176^† 14.767 b–e 0, 2–30, 6–36, 2–74 [1]
Kepler-62^† 13.965 b–c 0, 1–9, 0, 1–6, 1–3
Kepler-224^† 15.801 b–c 1–2, 1–3, 1–4, 2–5
Kepler-226^† 15.563 b–c, c–d 2–29, 1–16, 1–14 [1], [6]
Kepler-254^† 16.012 b–c ∼1, 4–64, 2–64 [1], [6]
Kepler-374 14.701 c–d, d–04 1–1300, 1–3, 1–3, 1–7, 1–2
Kepler-1518^†** 13.374 02–04 1–2, 1–2, 2–3
Estimated TTV amplitudes, in minutes, for each planet in systems with possible resonant chains. We also include each star's V (Johnson) magnitude as recorded in the Exoplanet Archive. These planets are likely in a chain of resonances, but we are unable to confirm them with the methods we apply in this work. If the dynamics of the system have been studied before, we include a reference to the work.
^† System contains a resonance confirmed either by this work or by a previous work.
** Since our original study, the planets in KOI-1358 have been confirmed; the system is now Kepler-1987 <cit.>. KOI-3741.04 has been confirmed as Kepler-1518c <cit.>.
Ref: [1] <cit.>, [2] <cit.>, [3] <cit.>, [4] <cit.>, [5] <cit.>, [6] <cit.>
§.§.§ Shifted libration centers
Without additional perturbations, a two-body resonance should always librate about 0. We therefore identify systems below in Table <ref> whose angles librate about 180 instead, suggesting that an additional resonant angle might be librating. These systems are interesting, but an in-depth study of their dynamics is beyond the scope of this work.
lll
3
Confirmed Resonant Angles Librating About 180
System
Planets
Additional Res.
K2-239 b, c None, Θ_c,d at 6.6%
TOI-406 02, 01 None
Kepler-105 c, 03 None, Θ_b,c at 37%
Kepler-18 c, d None, Θ_b,c at 13%
Kepler-176 d, e Θ_c,d confirmed
Kepler-51 b, c Θ_c,d confirmed
Kepler-207 c, d Θ_b,c 22%
Kepler-208 c, d None
Kepler-32 b, c Θ_e,b confirmed
Kepler-1518 b, 02 None, Θ_02,04 79.4%
For each confirmed angle that librates about 180 instead of 0, we list the system name, the planets involved in the resonance, and information about additional librating angles in the system. For additional angles that are not confirmed, we list the percentage of simulations that resulted in libration.
§.§ Constraining the masses of Kepler-305
Kepler-305 is a K-type star hosting three super-Earths and one mini-Neptune, with orbital periods ranging between 3.2 and 16.7 days. The orbital periods of the three outer planets suggest a chain of mean motion resonances of 1:2:3. The inner planet Kepler-305e sits just wide of the 5:3 resonance with Kepler-305b; although not a strong resonance, the larger mass of planet b <cit.> might be sufficient to lock the pair into resonance.
Kepler-305b and Kepler-305c exhibit anti-correlated TTVs which <cit.> used to confirm the planet pair and constrain their masses to 10.5^+2.6_-2.0 M_⊕ and 6.0^+2.4_-2.2 M_⊕, respectively. More recently, <cit.> studied all four planets in the system, including the then-candidate Kepler-305e, fitting the system's TTVs to recover planetary masses and orbits. Despite robustly constraining the masses of both Kepler-305 c and Kepler-305 d and noting how close the three planets are to perfect commensurability, they do not find any of their fits to be resonant.
We model the Kepler-305 system via N-body simulations using REBOUND. We initialize each planet with an orbital period, mid-transit time, eccentricity, and inclination drawn from independent, normal distributions centered on the nominal values from <cit.> and with widths equal to the uncertainties. We assume a stellar mass of 0.76M_⊙ <cit.>. We use a timestep of 5% the inner planet's orbital period and integrate the system for 2 Myr using the WHFast integrator <cit.>.
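The sketch below illustrates one such realization with REBOUND; it is not our exact production setup, and the nominal periods, masses, eccentricities, and inclinations (and their widths) are placeholders standing in for draws from the published posteriors:

import numpy as np
import rebound

rng = np.random.default_rng()

def one_realization(t_end_years=2.0e6):
    sim = rebound.Simulation()              # default units: G = 1, Msun, AU, yr/2pi
    sim.add(m=0.76)                         # host star, solar masses
    # placeholder (period [d], mass [M_earth], ecc, inc [deg]) for planets e, b, c, d
    for P_days, m_earth, ecc, inc_deg in [(3.2, 4.0, 0.02, 89.0),
                                          (5.5, 10.5, 0.03, 89.5),
                                          (8.3, 6.0, 0.03, 89.5),
                                          (16.7, 9.0, 0.03, 89.5)]:
        sim.add(m=m_earth * 3.0e-6,                    # Earth masses -> solar masses
                P=(P_days / 365.25) * 2.0 * np.pi,     # days -> code time units
                e=abs(rng.normal(ecc, 0.01)),
                inc=np.radians(rng.normal(inc_deg, 0.1)))
    sim.move_to_com()
    sim.integrator = "whfast"
    sim.dt = 0.05 * sim.particles[1].P      # 5% of the innermost planet's period
    sim.integrate(t_end_years * 2.0 * np.pi)
    return sim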
All of the 500 simulations we perform survive the 2 Myr integration without experiencing instability. We explore each simulation for resonance, looking for libration of the critical resonant angles
ϕ_1 = 2λ_c -3λ_b + λ_e,
ϕ_2 = λ_d -2λ_c + λ_b,
Θ_e,b = 5λ_b - 3λ_e - 2ω_b,
Θ_b,c = 3λ_c -2λ_b - ω_c, and
Θ_c,d = 2λ_d - λ_c - ω_c.
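For reference, these angles can be evaluated directly from the simulation output. The helper below is our own sketch, not the package's code; the names lam (mean longitudes) and pomega (longitudes of pericenter), both in degrees, are assumptions about how the output is stored:

import numpy as np

def wrap180(angle_deg):
    # wrap an angle (or array of angles) into [-180, 180)
    return (np.asarray(angle_deg) + 180.0) % 360.0 - 180.0

def theta_two_body(p, q, lam_inner, lam_outer, pomega):
    # (p+q) lam_outer - p lam_inner - q pomega, e.g. Theta_bc = 3 lam_c - 2 lam_b - pomega_c
    return wrap180((p + q) * lam_outer - p * lam_inner - q * pomega)

# examples matching the definitions above:
# theta_bc = theta_two_body(2, 1, lam_b, lam_c, pomega_c)
# theta_eb = theta_two_body(3, 2, lam_e, lam_b, pomega_b)
# phi_2    = wrap180(lam_d - 2.0 * lam_c + lam_b)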
We summarize our results in Table <ref>.
lccc
4
Kepler-305 Resonances
System
% librating
Center
Amplitude
Θ_e,b 0.0%
Θ_b,c 99.6% 0.011^+1.35_-1.24 89.8^+21.33_-24.5
Θ_c,d 20.4% -0.041^+2.24_-1.98 138.71^+7.37_-32.13
ϕ_1 0.00%
ϕ_2 1.6%
Resulting three-body and two-body angles from our REBOUND N-body simulations, including the libration centers and amplitudes in degrees. We find that the angle Θ_b,c is librating in nearly all of our simulations, so we confirm the two planets are resonant.
Since the resonant angle Θ_b,c librates in 99.6% of our N-body simulations, we conclude that planets b and c are in a 3:2 resonance. We use this confirmed resonance to constrain the masses of these two planets, reporting the median of the mass distribution in simulations with librating angles, with the 16th and 84th percentiles as the lower and upper uncertainties, respectively. We also estimate the masses of the other two planets, e and d, using the simulations with librating angles. We note that resonances must be confirmed before they can be used to constrain planetary parameters, so the mass estimates we report for planets e and d should not be seen as anything more than proof-of-concept. We summarize our mass estimates in Table <ref>.
lcc
3
Mass Estimates for Kepler-305
Planet
Angle
M_p [M_⊕]
e ϕ_2* 4.4^+0.22_-0.21
b Θ_b,c 10.3^+2.7_-2.3
b ϕ_2* 11.7^+2.5_-1.5
c Θ_b,c 6.1^+2.6_-2.5
c Θ_c,d* 5.8^+2.6_-2.5
c ϕ_2* 6.6^+2.2_-1.4
d Θ_c,d* 8.6^+7.1_-4.4
Mass estimates in M_⊕ for each planet of Kepler-305. These estimates are the median with 16th and 84th percentile uncertainties of the distribution of mass for simulations where each angle is librating.
* We are not able to confirm these resonances but include these mass estimates as proof-of-concept.
We estimate the bulk density of each planet by pairing each planet-mass sample from our simulations with librating angles with a radius drawn from a normal distribution centered on the nominal value from <cit.> and with a width equal to the published uncertainty. We find that planets b and c likely have inflated atmospheres, with bulk densities of 1.16^+1.72_-0.60 g/cm^3 and 0.82^+1.44_-0.44 g/cm^3, respectively, and that planet d could also be a mini-Neptune (ρ=2.29^+2.66_-1.27 g/cm^3). Since planet e was in resonance in only 8 N-body simulations, we do not estimate a density, although its size (R_p=1.7^+0.11_-0.08 R_⊕) and proximity to its host star (P=3.2 days) suggest it is terrestrial.
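The conversion behind these numbers is simple; the sketch below (ours, with the radius arguments standing in for the published nominal value and uncertainty) pairs each librating-angle mass sample with a radius draw and reports the 16th, 50th, and 84th percentiles of the density:

import numpy as np

M_EARTH_G = 5.972e27     # g
R_EARTH_CM = 6.371e8     # cm

def bulk_density_gcc(mass_samples_mearth, r_nominal_rearth, r_sigma_rearth,
                     rng=np.random.default_rng()):
    masses = np.asarray(mass_samples_mearth)
    radii = rng.normal(r_nominal_rearth, r_sigma_rearth, size=masses.size)
    rho = masses * M_EARTH_G / ((4.0 / 3.0) * np.pi * (radii * R_EARTH_CM) ** 3)
    return np.percentile(rho, [16, 50, 84])   # lower, median, upper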
§ KNOWN LIMITATIONS
§.§ Cut-off for confirmed resonances
In this work, we confirm a resonance if 90% or more of the simulations result in librating angles. This cut-off of 90% is arbitrary and intentionally high to avoid false positives. In Figure <ref>, we show the distribution of the percentage of simulations with librating angles across all of the suites of simulations in this work. This distribution is roughly bimodal,
with a minimum at roughly 53%. A cut-off of 90%, marked as a blue line in Figure <ref>, minimizes, but does not eliminate, the number of false positives, while likely failing to confirm some truly resonant systems. We discuss these potential false negatives above in Section <ref> and again stress the importance of additional, independent studies of these systems.
§.§ Measuring libration amplitudes
Our software quantifies the libration amplitude as twice the angle's standard deviation (2σ). We explored variations of this measurement, including the simpler, traditional measure of the difference between the minimum and maximum values over a window of time, as well as the median absolute deviation, but these methods struggled with slowly circulating angles, misidentifying them as librating. Of the three methods, we found the 2σ method to be better at flagging truly librating angles and at quantifying the amplitudes of angles that phase in and out of resonance; however, other works that have studied exoplanetary resonances use other measures, so a direct comparison of libration amplitudes is not feasible.
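Both measures can be written compactly; the sketch below (ours) assumes the angle time series is in degrees and already wrapped about its libration centre:

import numpy as np

def amplitude_two_sigma(theta_deg):
    # the 2-sigma measure used in this work
    return 2.0 * np.std(theta_deg)

def amplitude_peak_to_peak(theta_deg):
    # the traditional min-max measure we compared against
    return np.max(theta_deg) - np.min(theta_deg)

# note: both measures assume the series does not wrap across the +/-180 boundary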
§.§ Confirming resonances using libration
There are numerous other ways to study and confirm mean motion resonances, including verifying that the system lies within the separatrix of the system's phase space <cit.>. Our methods rely entirely on the libration of the resonant angles, and recent studies have suggested this technique might not be completely accurate; <cit.> found that many TTV systems might show librating angles, even if the system is formally nonresonant because the Hamiltonian has no separatrix.
In this work, we study 20 systems and 36 resonances that overlap with <cit.>. Of these systems, we confirm 40% of resonances that they also confirm. In addition, we confirm
nine resonances that <cit.> mark as not in resonance and fail to confirm twelve other resonances that they mark as resonant. Most of these twelve resonances are in multi-planet systems, with angles that librate in a large fraction of our simulations that we discuss in Section <ref>.
In addition, the two-body resonant angles that we study for libration might not fully describe the dynamics of the system. Depending on the apsidal angles of the two planets, a planet pair could be resonant with one or both resonant angles circulating <cit.>. Therefore, it is possible that some of the systems we were not able to confirm as resonant are in fact resonant, but with circulating two-body angles. We leave expanding our software to study the behaviour of Δω̅ = ω̅_1 - ω̅_2 and of the mixed resonant angle defined in Eq. 39 of <cit.> to future work.
§.§ Inflated libration amplitudes
The libration amplitudes of the resonant angles can provide information into a system's formation history and subsequent evolution; they are to some degree reliant on the eccentricities of the planets when they lock into resonance <cit.>, and the amplitudes can also grant insight into the system's stability and rate of migration <cit.>. Ideally, then, we would be able to study each of these newly confirmed resonances and place constraints on their stability and formation history, but unfortunately, the amplitudes we recover through this method appear artificially inflated. In fact, the inflation of libration amplitudes was first noted by <cit.> and later explored by <cit.>; they found that noisy data can lead to libration amplitudes that are systematically biased towards larger values.
We test whether this bias could be artificially inflating our libration amplitudes. We perform two additional suites of N-body simulations of Kepler-363. In the first suite, we reduce the uncertainties on the planet masses from 30% to 10%, and in the second suite, we inflate the uncertainties of the orbital elements by 100% of their measured values. In total, we explore three situations: 1) average mass uncertainty and average orbital uncertainty, 2) small mass uncertainty and average orbital uncertainty, and 3) average mass uncertainty and large orbital uncertainty. We then study these suites as we did the real systems, using our software to characterize the resonances. We recover inflated amplitudes in all three cases, with well-constrained masses and orbital parameters as well as with more poorly constrained parameters; a Kolmogorov–Smirnov
two-sample test between the recovered libration amplitudes from each pair of suites results in large p-values (p>0.05), failing to reject the null hypothesis that the samples are drawn from the same population. Because of this, we cannot claim that the uncertainties in our planetary masses or orbital parameters are the driving factors behind our inflated amplitudes.
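For completeness, this comparison is a standard two-sample test; a minimal sketch (ours, with assumed array names for the recovered amplitudes of two suites) is:

from scipy.stats import ks_2samp

def consistent_with_same_population(amplitudes_suite_a, amplitudes_suite_b, alpha=0.05):
    statistic, p_value = ks_2samp(amplitudes_suite_a, amplitudes_suite_b)
    # a large p-value means we cannot reject a common parent distribution
    return p_value > alpha, statistic, p_value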
We do, however, recover some interesting additional results that might provide insight:
* Suite 2 (reduced mass uncertainties) has a narrower eccentricity distribution than Suites 1 and 3.
* The libration amplitude of Θ_b,c is larger with smaller e_b (negatively correlated).
* When mass uncertainties are reduced (Suite 2), we see a positive correlation between the libration amplitude of Θ_b,c and e_b.
* The libration amplitude of Θ_c,d is larger with smaller e_c (negatively correlated) and larger e_d (positively correlated).
* Correlation strength between libration amplitudes and eccentricities depends on how eccentricities compare to one another. Lower mass uncertainties result in these correlations becoming much stronger.
* If e_c<e_d, then the libration amplitude of Θ_b,c is greater with larger e_b and e_c (positively correlated), and the libration amplitude of Θ_c,d is greater with larger e_c and e_d (positively correlated).
* If e_b>e_c, then the libration amplitude of Θ_b,c is greater with larger e_b and e_c (positively correlated), but the libration amplitude of Θ_c,d is greater with smaller e_c and e_d (negatively correlated).
§.§ Exoplanet Archive
As described above in Section <ref>, we pull the inputs to our analysis from the NASA Exoplanet Archive, and therefore the results we present in Section <ref> depend on those parameters being correct. The default parameters on the Exoplanet Archive span a range of quality and precision and result from various methods of measurement and estimation. Although we attempt to mitigate this variety by exploring a large number of simulations with parameters drawn from independent distributions, there likely still exist underlying biases that skew our results. By comparing the output of our software with dynamical integrations of RV or TTV fits, we can vet our results and determine how reliant they are on the accuracy and precision of the input parameters. We leave such a study to future work.
In addition to the planetary parameters, we pull the stellar mass from the Exoplanet Archive and keep it fixed for all simulations. Although the host mass does not directly impact the resonant nature of a system, no study[<cit.> find that existing resonant chains can be broken if the host loses mass.] has explored how much of an impact, if any, a different host mass could have on the resonant state or on the libration centers and amplitudes of librating resonant angles. We leave such a study to future work.
§.§ Resonant Chains
As noted above in Section <ref>, our software struggles to confirm resonant chains and resonances in systems with more than four planets. Because of this, we are not able to confirm many resonant chains that would likely add substantially to our tally of confirmed resonances.
We try to mitigate this limitation with our discussion in Section <ref> and Table <ref>; these systems will require additional follow-up, as our software cannot be the sole method of study.
§.§ Computational expense
The methods employed in this work are computationally intensive and only possible at this scale on a high-performance computing cluster. Ideally, we would individually characterize enough two-body resonances and resonant chains to perform more informed searches, making use of machine learning techniques to improve performance.
§ CONCLUSION
The study of mean motion resonances (MMRs) provides unique constraints on planetary formation and evolution, as well as on the planets' masses and orbital parameters. Although MMRs are information-rich, it can be challenging and computationally intensive to confirm that two planets are actually in resonance with one another. Because of this, relatively few resonant systems have been confirmed.
Following the methods of <cit.> and <cit.>, we create a Python package <cit.> to identify, confirm, and study new mean motion resonances in the exoplanetary population. Our methods rely on suites of N-body simulations in REBOUND, and we confirm a resonance if 90% or more of the simulations result in librating resonant angles. We recover the known resonances in Kepler-80 and Kepler-223, noting the shortcomings of this method for resonant chains. We demonstrate the software's capabilities by constraining the orbital parameters and masses of planets in known resonances and by constraining the parameter space of unknown large planets in well-studied resonant systems.
After verifying the software's abilities and demonstrating its use, we search the Kepler/K2 and TESS catalogues for new resonances. We identify 66 new resonant systems, including seven new resonant chains. We describe the limitations of the software, and include a list of resonances we were not able to confirm but are likely to librate. These systems deserve follow-up analysis, either through different methods or with additional data, since they are likely to demonstrate detectable TTV signals.
Our methods herein are computationally intensive and infeasible on personal machines with small numbers of CPUs. We intend to confirm additional resonances until we have a large enough population for more informed and AI-trained searches, but such an analysis is beyond the scope of this work. In addition, our methods for confirming resonance are limited as described in Section <ref>, potentially leading to a high false negative rate and missing resonances. We will explore how our methods of resonant confirmation (e.g., the use of librating angles, amplitude estimate) ultimately affect our ability to confirm resonance and compare our results to RV and TTV fitting in follow-up studies.
We thank the anonymous referee for the constructive feedback that improved this manuscript. The authors acknowledge use of the ELSA high performance computing cluster at The College of New Jersey for conducting the research reported in this paper. This cluster is funded in part by the National Science Foundation under grant numbers OAC-1826915 and OAC-1828163. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
Below, in Table <ref>, we include the resonances we explored but were not able to confirm. We include systems with planetary candidates and note potential resonances that lie in systems with confirmed resonances; we have found that resonant chains are challenging to confirm with this method, so it is possible that these systems host more than the single confirmed resonance. In addition, there are various reasons why our software could fail to confirm a known resonance. We will explore these reasons, outlined in Section <ref>, in follow-up studies.
lcccc
5
Other potential resonances
System
Planets
Resonance
% librating
Notes
K2-19 d, b 3:2 0.00
K2-19 b, c 3:2 67.21
K2-37 b, c 4:3 0.00
K2-37 c, d 2:1 37.40
K2-72 b, d 4:3 0.00 1
K2-72 d, c 2:1 0.00 1
K2-72 c, e 3:2 0.00 1
K2-178 04, 05 4:3 0.00 1, 2
K2-239 c, d 3:2 6.60 1
K2-243 b, 01 3:2 58.60 2
K2-243 01, c 4:3 44.20 2
K2-266 c, d 2:1 0.00
K2-266 d, e 4:3 32.68
K2-268 b, d 2:1 26.00 1
K2-268 d, e 4:3 9.20 1
K2-285 c, d 4:3 0.00 1
K2-285 d, e 4:3 0.00 1
K2-384 c, d 3:2 5.65
K2-384 d, e 4:3 0.00
K2-384 e, f 4:3 21.47
Kepler-18 b, c 2:1 13.00 1
Kepler-23 c, d 4:3 0.00 1
Kepler-31 b, c 2:1 5.40 1
Kepler-31 d, 04 2:1 70.60 1, 2
Kepler-32 c, d 5:2 0.0
Kepler-33 d, e 4:3 0.0
Kepler-33 e, f 5:4 1.2
Kepler-53 d, b 2:1 36.8 1
Kepler-62 b, c 2:1 40.4 1
Kepler-62 c, d 4:3 0.0 1
Kepler-79 b, c 2:1 59.20
Kepler-79 c, d 2:1 79.20
Kepler-79 d, e 3:2 77.40
Kepler-83 d, b 2:1 25.2 1
Kepler-84 d, b 2:1 63.8
Kepler-84 b, c 4:3 0.0
Kepler-84 c, e 2:1 0.6
Kepler-102 b, c 4:3 19.80 1
Kepler-102 c, d 4:3 0.00 1
Kepler-102 e,f 5:3 0.00 1
Kepler-104 c, d 2:1 60.80 1
Kepler-105 b, c 3:2 37.20 1
Kepler-114 b, c 3:2 3.60
Kepler-114 c, d 4:3 0.00
Kepler-122 b, c 2:1 83.40
Kepler-122 e, f 3:2 82.00
Kepler-131 b, c 5:3 28.20 1
Kepler-138 c, d 5:3 0.00 1
Kepler-154 e, f 5:2 0.00 1
Kepler-154 b, c 2:1 67.00 1
Kepler-169 b, c 2:1 50.80 1
Kepler-176 d, e 2:1 89.00 1
Kepler-184 b, c 2:1 75.80
Kepler-184 c, d 4:3 0.40
Kepler-207 b, c 2:1 22.40 1
Kepler-208 d, e 4:3 0.0 1
Kepler-215 b, c 3:2 18.0
Kepler-215 c, d 2:1 82.4
Kepler-217 d, b 4:3 33.80
Kepler-217 b, c 3:2 22.60
Kepler-224 b, c 2:1 72.0 1
Kepler-226 b, c 4:3 42.0
Kepler-226 c, d 3:2 45.8
Kepler-249 b, c 2:1 82.8 1
Kepler-254 b, c 2:1 42.4 1
Kepler-271 d,c 4:3 0.00
Kepler-271 c, d 4:3 17.80
Kepler-271 b, 04 5:3 0.00 2
Kepler-271 04, 05 5:4 0.00 2
Kepler-292 b, c 4:3 0.0
Kepler-292 c, d 2:1 86.2
Kepler-292 d, e 5:3 5.2
Kepler-305 e, b 5:3 0.00 1
Kepler-305 c, d 2:1 20.40 1
Kepler-326 b, c 2:1 67.0
Kepler-326 c, d 4:3 0.0
Kepler-327 c, d 5:2 0.0
Kepler-339 b, c 4:3 3.00 1
Kepler-350 b, c 3:2 10.8
Kepler-350 c, d 4:3 0.0
Kepler-352 04, d 4:3 0.00 2
Kepler-352 d, b 4:3 0.00
Kepler-363 c, d 3:2 68.4 1
Kepler-374 b, c 5:3 0.6
Kepler-374 c, d 3:2 79.6
Kepler-374 d, 04 3:2 74.2 2
Kepler-374 04, 05 3:2 0.0 2
Kepler-394 d, b 4:3 0.0
Kepler-402 b, c 3:2 69.80
Kepler-402 c, d 4:3 0.00
Kepler-402 d, e 5:4 87.60
Kepler-402 e, 05 4:3 0.00 2
Kepler-416 b, c 2:1 45.80
Kepler-416 c, 03 2:1 0.00 2
Kepler-416 03, 04 2:1 82.80 2
Kepler-431 b, c 5:4 25.20
Kepler-431 c, d 4:3 40.20
Kepler-968 b, c 3:2 13.80 1
Kepler-1073 c, 04 3:2 59.40 2
Kepler-1073 04, b 4:3 1.20 2
Kepler-1130 04, c 3:2 0.00 2
Kepler-1130 c, d 4:3 0.00
Kepler-1130 d, b 5:4 30.80
Kepler-1371 c, b 4:3 0.00
Kepler-1371 03, 04 5:4 0.00 2
Kepler-1371 04, 05 5:4 0.66 2
Kepler-1518 02, 04 2:1 79.4 1, 2
Kepler-1542 c, b 4:3 8.60
Kepler-1542 b, e 5:4 0.20
Kepler-1542 d, 05 5:4 0.00 2
Kepler-1581 b, 02 4:3 0.00 1, 2
Kepler-1693 c, 04 3:2 0.00 2
Kepler-1693 04, b 3:2 45.60 2
Kepler-1693 b, 03 3:2 0.00 2
KOI-1358 01, 02 3:2 6.20 2
KOI-1358 02, 03 3:2 48.00 2
KOI-1358 03, 04 3:2 75.60 2
KOI-3083 01, 02 4:3 0.00 2
KOI-3083 02, 03 5:4 0.00 2
TOI-178 b, c, d 1:2:3 63.20 1
TOI-178 b, c 2:1 68.00 1
TOI-178 c, d 3:2 76.40 1
TOI-178 d, e 2:1 79.20 1
TOI-270 b, c 5:3 0.40 1
TOI-421 b, c 3:1 0.20
TOI-700 04, d 4:3 85.40 2
TOI-797 01, 03 3:2 60.80 2
TOI-797 03, 02 3:2 50.00 2
TOI-1136 02, 01, 04 1:2:3 26.56 2
TOI-1136 02, 01 2:1 75.89 2
TOI-1136 01, 04 3:2 43.97 2
TOI-1136 04, 03 4:3 0.00 2
TOI-1246 b, c 4:3 31.40 1
TOI-1246 c, d 3:1 0.00 1
Each additional potential resonance explored, including system name, planet pair, resonance explored, and percentage of simulations with librating angle.
1: System contains confirmed resonant pair
2: Pair contains at least one candidate planet
|
http://arxiv.org/abs/2307.07490v1 | 20230714172244 | A novel family of finite automata for recognizing and learning $ω$-regular languages | [
"Yong Li",
"Sven Schewe",
"Qiyi Tang"
] | cs.FL | [
"cs.FL"
] |
Families of DFAs (FDFAs) have recently been introduced as a new representation of ω-regular languages.
They target ultimately periodic words, with acceptors revolving around accepting some representation u· v^ω.
Three canonical FDFAs have been suggested, called periodic, syntactic, and recurrent.
We propose a fourth one, limit FDFAs, which can be exponentially more succinct than periodic FDFAs and are at most as large as syntactic FDFAs, while being incomparable in size with (and dual to) recurrent FDFAs.
We show that limit FDFAs can easily be used to check not only whether ω-languages are regular, but also whether they are accepted by deterministic Büchi automata.
We also show that canonical forms can be left behind in applications: the limit and recurrent FDFAs can complement each other nicely, and it may be a good way forward to use a combination of both.
Using this observation as a starting point, we explore making more efficient use of Myhill-Nerode's right congruences in aggressively increasing the number of don't-care cases in order to obtain smaller progress automata. In pursuit of this goal, we gain succinctness, but pay a high price by losing constructiveness.
§ INTRODUCTION
The class of ω-regular languages has proven to be an important formalism to model reactive systems and their specifications, and automata over infinite words are the main tool to reason about them.
For example, the automata-theoretic approach to verification <cit.> is the main framework for verifying ω-regular specifications.
The first type of automata recognizing ω-regular languages was nondeterministic Büchi automata <cit.> (NBAs), where an infinite word is accepted if one of its runs meets the accepting condition infinitely many times.
Since then, other types of acceptance conditions and automata, such as Muller, Rabin, Streett, and parity automata <cit.>, have been introduced.
All the automata mentioned above are finite automata processing infinite words, widely known as ω-automata <cit.>.
The theory of ω-regular languages is more involved than that of regular languages.
For instance, nondeterministic finite automata (NFAs) can be determinized with a subset construction, while NBAs have to make use of tree structures <cit.>.
This is because of a fundamental difference between these language classes: for a given regular language R, the Myhill-Nerode theorem <cit.> defines a right congruence (RC) _R in which every equivalence class corresponds to a state of the minimal deterministic finite automaton (DFA) accepting R.
In contrast, there is no similar theorem defining minimal deterministic ω-automata for the full class of ω-regular languages[A simple extension of the Myhill-Nerode theorem for ω-regular languages only works for a small subclass <cit.>].
Schewe proved in <cit.> that finding the minimal deterministic ω-automaton is NP-complete, even when a deterministic ω-automaton is given as input.
Therefore, it seems impossible to easily define a Myhill-Nerode theorem for (minimal) ω-automata.
Recently, Angluin, Boker and Fisman <cit.> proposed families of DFAs (FDFAs) for recognizing ω-regular languages, in which every DFA can be defined with respect to a RC defined over a given ω-regular language <cit.>.
This tight connection is the theoretical foundation on which the state-of-the-art learning algorithms for ω-regular languages <cit.>, using membership and equivalence queries <cit.>, are built.
FDFAs are based on well-known properties of ω-regular languages <cit.>:
two ω-regular languages are equivalent if, and only if, they have the same set of ultimately periodic words.
An ultimately periodic word w is an infinite word that consists of first a finite prefix u, followed by an infinite repetition of a finite nonempty word v; it can thus be represented as a decomposition pair (u, v).
FDFAs accept infinite words by accepting their decomposition pairs:
an FDFA = (, ^q) consists of a leading DFA that processes the finite prefix u, while leaving the acceptance work of v to the progress DFA ^q, one for each state of .
To this end, intuitively tracks the Myhill-Nerode's RCs, and an ultimately periodic word u· v^ω is accepted if it has a representation x · y^ω such that x and x · y are in the same congruence class and y is accepted by the progress DFA ^x.
Angluin and Fisman <cit.> formalized the RCs of three canonical FDFAs, namely periodic <cit.>, syntactic <cit.> and recurrent <cit.>, and provided a unified learning framework for them.
In this work, we first propose a fourth one, called limit FDFAs (cf. Section <ref>).
We show that limit FDFAs are coarser than syntactic FDFAs.
Since syntactic FDFAs can be exponentially more succinct than periodic FDFAs <cit.>, so can our limit FDFAs.
We show that limit FDFAs are dual (and thus incomparable in size) to recurrent FDFAs, due to a symmetric treatment of don't-care words.
More precisely, the formalization of such FDFAs does not care whether a progress automaton ^x accepts or rejects a word v, unless reading v takes the leading DFA from the state reached by x back to that state.
Recurrent progress DFAs reject all of these don't-care words, while limit progress DFAs accept them.
We show that limit FDFAs (families of DFAs that use limit DFAs) have two interesting properties.
The first is on conciseness:
we show that this change in the treatment of don't-care words not only defines a dual to recurrent FDFAs, but also allows us to easily identify languages accepted by deterministic Büchi automata (DBAs).
It is only known that one can identify whether a given ω-language is regular by verifying whether the number of states in the three canonical FDFAs is finite.
However, if one wishes to identify DBA-recognizable languages with FDFAs, a straightforward approach is to first translate the input FDFA to an equivalent deterministic Rabin automaton <cit.> through an intermediate NBA, and then use the decision algorithm of <cit.>, which checks the transition structure of the Rabin automaton.
This approach is exponential in the size of the input FDFA because of the NBA determinization procedure <cit.>.
Our limit FDFAs are, to the best of our knowledge, the first type of FDFAs able to identify DBA-recognizable languages in polynomial time (cf. Section <ref>).
We note that limit FDFAs also fit nicely into the learning framework introduced in <cit.>, so that they can be used for learning without extra development.
We then discuss how to make more use of don't-care words when defining the RCs of the progress automata, leading to the coarsest congruence relations and therefore the most concise FDFAs, albeit at the expense of losing constructiveness (cf. Section <ref>).
Active learning framework:
In this framework, there are two roles, namely the learner and an oracle.
The task of the learner is to learn an automaton representation of an unknown language L from the oracle.
The learner can ask two types of queries about L, which will be answered by the oracle.
A membership query is about whether a word w is in L;
an equivalence query is to ask whether a given automaton recognizes the language L.
§ PRELIMINARIES
In the whole paper, we fix a finite alphabet Σ.
A word is a finite or infinite sequence of letters in Σ;
ε denotes the empty word.
Let Σ^* and Σ^ω denote the set of all finite and infinite words (or ω-words), respectively.
In particular, we let Σ^+ = Σ^* ∖ {ε}.
A finitary language is a subset of Σ^*;
an ω-language is a subset of Σ^ω.
Let ρ be a sequence;
we denote by ρ[i] the i-th element of ρ and by ρ[i..k] the subsequence of ρ starting at the i-th element and ending at the k-th element (inclusive) when i ≤ k, and the empty sequence when i > k.
Given a finite word u and a word w, we denote by u · w (uw, for short) the concatenation of u and w.
Given a finitary language L_1 and a finitary/ω-language L_2, the concatenation L_1 · L_2 (L_1 L_2, for short) of L_1 and L_2 is the set L_1 · L_2 = { uw : u ∈ L_1, w ∈ L_2 }, and L_1^ω denotes the infinite concatenation of L_1.
Transition system.
A (nondeterministic) transition system (TS) is a tuple = (, , ), where is a finite set of states, ∈ is the initial state, and : ×→ 2^ is a transition function.
We also lift to sets as
(S, σ) := ⋃_q∈ S(q, σ).
We also extend to words, by letting (S, ) = S and (S, a_0 a_1⋯ a_k) = ((S, a_0), a_1 ⋯ a_k), where we have k ≥ 1 and a_i∈ for i ∈0, ⋯ , k.
The underlying graph _ of a TS is a graph ⟨, E⟩, where the set of vertices is the set of states in and (q, q') ∈ E if q' ∈(q, a) for some a ∈.
We call a set C ⊆ a strongly connected component (SCC) of if, for every pair of states q, q' ∈ C, q and q' can reach each other in _.
Automata.
An automaton on finite words is called a nondeterministic finite automaton (NFA).
An NFA is formally defined as a tuple (, F), where is a TS and F⊆ is a set of final states.
An automaton on ω-words is called a nondeterministic Büchi automaton (NBA).
An NBA is represented as a tuple (, ) where is a TS and ⊆(q, a, q'): q,q'∈, a∈, q'∈(q,a) is a set of accepting transitions.
An NFA is said to be a deterministic finite automaton (DFA) if, for each q ∈ and a ∈, (q, a)≤ 1.
Deterministic Büchi automata (DBAs) are defined similarly; since the successor q' is determined by the source state and the input letter, the set of accepting transitions of a DBA can be given as a set of pairs (q, a) of a state q and a letter a ∈ Σ.
A run of an NFA on a finite word u of length n ≥ 0 is a sequence of states = q_0 q_1⋯ q_n∈^+ such that, for every 0 ≤ i < n, q_i+1∈(q_i, ui).
We write q_0u q_n if there is a run from q_0 to q_n over u.
A finite word u ∈ is accepted by an NFA if there is a run q_0⋯ q_n over u such that q_n∈ F.
Similarly, an ω-run of on an ω-word w is an infinite sequence of transitions = (q_0, w0, q_1) (q_1, w1, q_2)⋯ such that, for every i ≥ 0, q_i+1∈(q_i, w[i]).
Let inf() be the set of transitions that occur infinitely often in the run .
An ω-word w ∈ is accepted by an NBA if there exists an ω-run of over w such that inf() ∩≠∅.
The finitary language recognized by an NFA , denoted by , is defined as the set of finite words accepted by it.
Similarly, we denote by the ω-language recognized by an NBA , i.e., the set of ω-words accepted by .
NFAs/DFAs accept exactly regular languages while NBAs recognize exactly ω-regular languages.
Right congruences.
A right congruence (RC) relation is an equivalence relation over such that x y implies xv yv for all v ∈.
We denote by the index of , i.e., the number of equivalence classes of .
A finite RC is a RC with a finite index.
We denote by the set of equivalence classes of under .
Given x ∈, we denote by x the equivalence class of that x belongs to.
For a given RC of a regular language R, the Myhill-Nerode theorem <cit.> defines a unique minimal DFA D of R, in which each state of D corresponds to an equivalence class defined by over .
Therefore, we can construct a DFA [] from in a standard way.
Let be a right congruence of finite index.
The TS [] induced by is a tuple (S, s_0, ) where S =, s_0 =, and for each u ∈ and a ∈, (u, a) = ua.
For a given regular language R, we can define the RC _R of R as
x _R y if, and only if, ∀ v ∈. xv ∈ R ⟺ yv ∈ R.
Therefore, the minimal DFA for R is the DFA [_R] = ([_R], F__R) by setting final states F__R to all equivalence classes [u]__R such that u ∈ R.
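To make the two definitions above concrete, the following sketch (our own illustration, not taken from the paper) builds the induced DFA [_R] for the toy regular language R of words over {a, b} with an even number of a's; its RC _R has exactly two classes, and class_of maps a word to the name of its class:

from itertools import product

SIGMA = ("a", "b")
REPRESENTATIVES = {"even": "", "odd": "a"}   # one representative word per class of ~_R

def class_of(word):
    # canonical class name of [word] under ~_R, for R = "even number of a's"
    return "even" if word.count("a") % 2 == 0 else "odd"

def induced_dfa():
    # states are the classes of ~_R, the initial state is [epsilon],
    # and delta([u], a) = [u a] is computed on class representatives
    initial = class_of("")
    delta = {(s, c): class_of(rep + c)
             for (s, rep), c in product(REPRESENTATIVES.items(), SIGMA)}
    finals = {s for s, rep in REPRESENTATIVES.items() if rep.count("a") % 2 == 0}
    return initial, delta, finals

def accepts(dfa, word):
    initial, delta, finals = dfa
    state = initial
    for letter in word:
        state = delta[(state, letter)]
    return state in finals

assert accepts(induced_dfa(), "abba") and not accepts(induced_dfa(), "ab")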
Ultimately periodic (UP) words.
A UP-word w is an ω-word of the form uv^ω, where u ∈ and v ∈.
Thus w = uv^ω can be represented as a pair of finite words (u, v), called a decomposition of w.
A UP-word can have multiple decompositions:
for instance (u, v), (uv, v), and (u, vv) are all decompositions of uv^ω.
For an ω-language L, let UP(L) = { uv^ω ∈ L : u ∈ Σ^*, v ∈ Σ^+ } denote the set of all UP-words in L.
The set of UP-words of an ω-regular language L can be seen as the fingerprint of L, as stated below.
(1)
Every non-empty ω-regular language L contains at least one UP-word.
(2)
Let L and L' be two ω-regular languages.
Then L = L' if, and only if, UP(L) = UP(L').
Families of DFAs (FDFAs).
Based on Theorem <ref>, Angluin, Boker, and Fisman <cit.> introduced the notion of FDFAs to recognize ω-regular languages.
An FDFA is a pair = (, ^q) consisting of a leading DFA and of a progress DFA ^q for each state q in .
Intuitively, the leading DFA of = (, ^q) for L consumes the finite prefix u of a UP-word uv^ω∈L, reaching some state q and, for each state q of , the progress DFA ^q accepts the period v of uv^ω.
Note that the leading DFA of every FDFA does not make use of final states—contrary to its name, it is really a leading transition system.
Let A be a deterministic automaton with TS = (, q_0, ) and x ∈.
We denote by A(x) the state (q_0, x).
Each FDFA characterizes a set of UP-words by following the acceptance condition.
Let = (, ^q) be an FDFA and w be a UP-word.
A decomposition (u, v) of w is normalized with respect to if (u) = (uv).
A decomposition (u, v) is accepted by if (u, v) is normalized and we have v ∈^q where q = (u).
The UP-word w is accepted by if there exists a decomposition (u, v) of w accepted by .
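This acceptance check is straightforward to implement once the leading and progress DFAs are given. The sketch below is our own encoding (a DFA is a triple of initial state, total transition map, and set of final states): it first normalizes the decomposition by pumping the period until the leading DFA returns to a visited state, and then runs the period through the progress DFA of the reached state. For saturated FDFAs, which the canonical FDFAs discussed below are, this also decides acceptance of the UP-word u · v^ω:

def run_dfa(dfa, word, start=None):
    # dfa = (initial state, total transition map, set of final states)
    initial, delta, finals = dfa
    state = initial if start is None else start
    for letter in word:
        state = delta[(state, letter)]
    return state

def normalize(leading, u, v):
    # pump (u, v) to (u v^i, v^(j-i)) with leading(u v^i) = leading(u v^j);
    # terminates because the leading DFA has finitely many states (v must be nonempty)
    seen = {}
    prefix, state = u, run_dfa(leading, u)
    while state not in seen:
        seen[state] = prefix
        prefix, state = prefix + v, run_dfa(leading, v, start=state)
    repeats = (len(prefix) - len(seen[state])) // len(v)
    return seen[state], v * repeats

def fdfa_accepts(leading, progress, u, v):
    # progress maps each state of the leading DFA to its progress DFA;
    # the leading DFA's set of final states may simply be empty
    x, y = normalize(leading, u, v)
    prog = progress[run_dfa(leading, x)]
    return run_dfa(prog, y) in prog[2]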
Note that the acceptance condition in <cit.> is defined with respect to the decompositions, while ours applies to UP-words.
So, they require the FDFAs to be saturated for recognizing ω-regular languages.
Let be an FDFA and w be a UP-word in .
We say is saturated if, for all normalized decompositions (u, v) and (u', v') of w, either both (u, v) and (u', v') are accepted by , or both are not.
We will see in Section <ref> that under our acceptance definition the saturation property can be relaxed while still accepting the same language.
Let U, V ⊆ Σ^* be two languages such that UV^* = U and V^+ = V. Then, for every UP-word w ∈ U· V^ω, there exist words u ∈ U and v ∈ V such that w = u· v^ω.
In the remainder of the paper, we
fix an ω-language L unless stated otherwise.
§ LIMIT FDFAS FOR RECOGNIZING Ω-REGULAR LANGUAGES
In this section, we will first recall the definitions of three existing canonical FDFAs for ω-regular languages, and then introduce our limit FDFAs and compare the four types of FDFAs.
§.§ Limit FDFAs and other canonical FDFAs
Recall that, for a given regular language R, by Definition <ref>, the Myhill-Nerode theorem <cit.> associates each equivalence class of _R with a state of the minimal DFA [_R] of R.
The situation in ω-regular languages is, however, more involved <cit.>.
An immediate extension of such RCs for an ω-regular language L is the following.
For two u_1, u_2 ∈, u_1 _L u_2 if, and only if ∀ w ∈. u_1 w ∈ L ⟺ u_2 w ∈ L.
Since we fix an ω-language L in the whole paper, we will omit the subscript in _L and directly use in the remainder of the paper.
Assume that L is an ω-regular language.
Obviously, the index of is finite since it is not larger than the number of states in the minimal deterministic ω-automaton accepting L.
However, is only enough to define the minimal ω-automaton for a small subset of ω-regular languages; see <cit.> for details about such classes of languages.
For instance, consider the language L = (· aa)^ω over = a, b:
clearly, || = 1 because L is a suffix language (for all u ∈, w ∈ L ⟺ u · w ∈ L).
At the same time, it is easy to see that the minimal deterministic ω-automaton needs at least two states to recognize L.
Hence, alone does not suffice to recognize the full class of ω-regular languages.
Nonetheless, based on Theorem <ref>, we only need to consider the UP-words when uniquely identifying a given ω-regular language L with RCs.
Calbrix et al. proposed in <cit.> the use of the regular language L_$ = { u $ v : u ∈ Σ^*, v ∈ Σ^+, uv^ω ∈ L } to represent L, where $ ∉ Σ is a fresh letter[This makes it possible to learn L via learning the regular language L_$ <cit.>.].
Intuitively, L_$ associates a UP-word w in L by containing every decomposition (u, v) of w in the form of u$ v.
The FDFA representing L_$ is formally stated as below.
The leading RC ∼ is as defined in Definition <ref>.
Let [u]_ be an equivalence class of .
For x, y ∈, we define periodic RC as: x ^u_P y if, and only if, ∀ v ∈, u· (x · v)^ω∈ L ⟺ u· (y· v)^ω∈ L.
The periodic FDFA _P = (, ^u_P) of L is defined as follows.
The leading DFA is the tuple ([], ∅). Recall that [] is the TS constructed from by Definition <ref>.
The periodic progress DFA ^u_P of the state [u]_∈/_ is the tuple ([^u_P], F_u), where [v]_^u_P∈ F_u if uv^ω∈ L.
One can verify that, for all u, x, y, v ∈, if x ^u_P y, then xv ^u_P y v.
Hence, ^u_P is a RC.
It is also proved in <cit.> that L_$ is a regular language, so the index of ^u_P is also finite.
Angluin and Fisman in <cit.> showed that, for a variant of the family of languages L_n given by Michel <cit.>, its periodic FDFA has Ω(n!) states, while the syntactic FDFA obtained in <cit.> only has (n^2) states.
The leading DFA of the syntactic FDFAs is exactly the one defined for the periodic FDFA.
The two types of FDFAs differ in the definitions of the progress
DFAs ^u for some [u]_.
From Definition <ref>, one can see that ^u_P accepts the finite words in V_u = v ∈: u · v^ω∈ L.
The progress DFA ^u_S of the syntactic FDFA is not required to accept all words in V_u, but only a subset V_u,v = v ∈: u· v^ω∈ L, u u · v, over which the leading DFA can take a round trip from (u) back to itself.
This minor change makes the syntactic FDFAs of the language family L_n exponentially more succinct than their periodic counterparts.
Formally, syntactic FDFAs are defined as follows.
The leading RC ∼ is as defined in Definition <ref>.
Let [u]_∼ be an equivalence class of ∼.
For x, y ∈ Σ^*, we define the syntactic RC as: x ≈^u_S y if and only if u · x ∼ u · y and, for all v ∈ Σ^*, if u· x · v ∼ u, then u· (x · v)^ω ∈ L ⟺ u· (y· v)^ω ∈ L.
The syntactic FDFA _S = (, ^u_S) of L is defined as follows.
The leading DFA is the tuple ([], ∅) as defined in Definition <ref>.
The syntactic progress DFA ^u_S of the state [u]_∈/_ is the tuple ([^u_S], F_u) where [v]_^u_S∈ F_u if u · v u and uv^ω∈ L.
Angluin and Fisman <cit.> noticed that the syntactic progress RCs are not defined from the regular language V_u,v = { v ∈ Σ^* : u· v^ω ∈ L, u ∼ u · v } in the way that _R is defined from a regular language R. They proposed the recurrent progress RC ≈^u_R, which mimics the canonical RC of V_u,v in order to obtain a DFA accepting V_u,v, as follows.
The leading RC ∼ is as defined in Definition <ref>.
Let [u]_∼ be an equivalence class of ∼.
For x, y ∈ Σ^*, we define the recurrent RC as: x ≈^u_R y if and only if, ∀ v ∈ Σ^*, (u · x · v ∼ u ∧ u· (x · v)^ω ∈ L) ⟺ (u· y · v ∼ u ∧ u· (y · v)^ω ∈ L).
The recurrent FDFA _R = (, ^u_R) of L is defined as follows.
The leading DFA is the tuple ([], ∅) as defined in Definition <ref>.
The recurrent progress DFA ^u_R of the state [u]_∈/_ is the tuple ([^u_R], F_u) where [v]_^u_R∈ F_u if u · v u and uv^ω∈ L.
As pointed out in <cit.>, recurrent FDFAs may not be minimal because, according to Definition <ref>, FDFAs only care about normalized decompositions, i.e., whether a word in C_u = { v ∈ Σ^* : u · v ∼ u } is accepted by the progress DFA ^u_R.
However, there are don't care words that are not in C_u and recurrent FDFAs treat them all as rejecting
[Minimizing DFAs with don't care words is NP-complete <cit.>].
Our argument is that the don't care words are not necessarily rejecting and can also be regarded as accepting.
This idea allows the progress DFAs ^u to accept the regular language { v ∈ Σ^* : u · v ∼ u ⟹ u · v^ω ∈ L }, rather than { v ∈ Σ^* : u · v ∼ u ∧ u · v^ω ∈ L }.
This change allows a translation of limit FDFAs to DBAs with a quadratic blow-up when L is a DBA-recognizable language, as shown later in Section <ref>.
We formalize this idea as below and define a new type of FDFAs called limit FDFAs.
The leading RC ∼ is as defined in Definition <ref>.
Let [u]_∼ be an equivalence class of ∼.
For x, y ∈ Σ^*, we define the limit RC as: x ≈^u_L y if and only if, ∀ v ∈ Σ^*, (u · x · v ∼ u ⟹ u· (x · v)^ω ∈ L) ⟺ (u · y· v ∼ u ⟹ u · (y · v)^ω ∈ L).
The limit FDFA _L = (, ^u_L) of L is defined as follows.
The leading DFA is the tuple ([], ∅) as defined in Definition <ref>.
The progress DFA ^u_L of the state [u]_∼ ∈ Σ^*/_∼ is the tuple ([≈^u_L], F_u), where [v]_≈^u_L ∈ F_u if u · v ∼ u ⟹ u · v^ω ∈ L.
We need to show that ^u_L is a RC.
For u, x, y , v' ∈, if x ^u_L y, we need to prove that xv' ^u_L yv', i.e., for all e ∈, (u · xv' · e u u · (xv'· e)^ω∈ L) ⟺ (u · yv' · e u u · (yv'· e)^ω∈ L).
This follows immediately from the fact that x ^u_L y by setting v = v'· e for all e∈ in Definition <ref>.
Let L = Σ^* · (c^* · b)^ω be a language over Σ = { a, b, c }, where every word in L has only finitely many a's and infinitely many b's.
It is easy to see that L is not recognized by either DBAs or DCAs.
However, it can be recognized by an NBA.
We can construct the limit FDFA _L of L, depicted in Figure <ref>.
First, there is only one equivalence class in ∼, since ε · w ∈ L ⟺ u · w ∈ L holds for all u ∈ Σ^*, w ∈ Σ^ω.
Further, there are three equivalence classes of ≈^ε_L, namely [ε], [a] and [b].
First, observe that ε ∼ ε · v for all v ∈ Σ^*.
So, we can distinguish two words v_1 and v_2 by finding a word v ∈ Σ^* such that only one of ε · (v_1 · v)^ω ∈ L and ε · (v_2 · v)^ω ∈ L holds.
Thus, [a] can be distinguished from [ε] and [b] with v = b;
[ε] can be distinguished from [b] with v = ε, since ε · (ε · ε)^ω ∉ L while ε · (b · ε)^ω ∈ L.
Both ε and c belong to [ε], since ε · (ε · v)^ω ∈ L ⟺ ε · (c · v)^ω ∈ L holds for all v ∈ Σ^*.
We only have one final equivalence class, [b].
By Definition <ref> and Definition <ref>, one can observe that _L = L.
In fact, we show in Theorem <ref> that _L = L holds for every given ω-regular language L.
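The classes of this example can also be checked mechanically. The sketch below is our own brute-force illustration and only an approximation, since the universal quantification over v is replaced by an enumeration of all words of length at most three; it uses the facts that, for this L, a period v satisfies v^ω ∈ L exactly when v contains no a and at least one b, and that the index of ∼ is one, so the premise of the implication in the limit RC always holds:

from itertools import product

SIGMA = "abc"
TEST_WORDS = ["".join(p) for n in range(4) for p in product(SIGMA, repeat=n)]

def period_in_L(v):
    # epsilon . v^omega is in L iff v is nonempty, contains no 'a' and at least one 'b'
    return len(v) > 0 and "a" not in v and "b" in v

def limit_equivalent(x, y):
    # approximate check of x ~=^epsilon_L y (the premise of the implication always holds here)
    return all(period_in_L(x + v) == period_in_L(y + v) for v in TEST_WORDS)

assert not limit_equivalent("", "a")      # [epsilon] differs from [a]
assert not limit_equivalent("", "b")      # [epsilon] differs from [b]
assert limit_equivalent("", "c")          # c falls into [epsilon]
assert period_in_L("b")                   # [b] is the only final class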
Let L = a^ω + ab^ω be a language over = a, b.
Three types of FDFAs are depicted in Figure <ref>, where the leading
DFA is given in the column labeled with ”Leading” and the progress DFAs are in the column
labeled with “Syntactic", “Recurrent" and “Limit".
We omit the periodic FDFA here since we will focus more on the other three in this work.
Consider the progress DFA ^aa_L:
there are only two equivalence classes, namely [ε]_≈^aa_L and [a]_≈^aa_L.
We can use v = ε to distinguish ε from a word x ∈ Σ^+, since aa · ε ∼ aa ⟹ aa · (ε · ε)^ω ∈ L does not hold, while aa · x ∼ aa ⟹ aa · (x · ε)^ω ∈ L holds.
For all x, y ∈ Σ^+, x ≈^aa_L y, since both aa · x · v ∼ aa ⟹ aa · (x · v)^ω ∈ L and aa · y · v ∼ aa ⟹ aa · (y· v)^ω ∈ L hold for all v ∈ Σ^*.
One can also verify the constructions for the syntactic and recurrent progress DFAs.
We can see that the don't-care word b for the class [aa]_ is rejected in both _S^aa and ^aa_R, while it is accepted by ^aa_L.
Even though b is accepted in ^aa_L, one can observe that (aa, b) (and thus aa· b^ω) is not accepted by the limit FDFA, according to Definition <ref>.
Indeed, the three types of FDFAs still recognize the same language L.
When the index of is only one, then u holds for all u ∈.
Corollary <ref> follows immediately.
Let L be an ω-regular language with || = 1.
Then, periodic, syntactic, recurrent and limit FDFAs coincide.
We show in Lemma <ref> that the limit FDFAs are a coarser representation of L than the syntactic FDFAs.
Moreover, there is a tight connection between the syntactic FDFAs and limit FDFAs.
For all u, x, y ∈,
*
x ^u_S y if, and only if u· x u · y and x ^u_L y.
* |^u_L| ≤ |^u_S| ≤ || · |^u_L|; |^u_L| ≤ || · |^u_P|.
*
* Assume that ux uy and x ^u_L y.
Since x ^u_L y holds, then for all v ∈, (u x v u u · (xv)^ω∈ L) ⟺ (u y v u u · (yv)^ω∈ L).
Since u x u y holds, then u · x v u ⟺ u · y v u for all v ∈.
Hence, by Definition <ref>, if u xv u (and thus u yv u), it follows that x ^u_S y by definition of ^u_S;
otherwise we have both u xv u and u yv u hold, and also u· (xv)^ω∈ L⟺ u· (yv)^ω∈ L, following the definition of ^u_L.
It thus follows that x _S^u y.
* Assume that x ^u_S y.
First, we have ux u y by definition of ^u_S.
Since u x u y holds, then u · x v u ⟺ u · y v u for all v ∈.
Assume by contradiction that x ^u_L y. Then there must exist some v ∈ such that u · x v u · y v u holds but u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L does not hold.
By definition of ^u_S, it then follows that x ^S_u y, violating our assumption.
Hence, both ux uy and x ^u_L y hold.
* As an immediate result of the Item (1), we have that |^u_L| ≤ |^u_S| ≤ || · |^u_L|.
We prove the second claim by showing that, for all u, x,y ∈, if u x u y and x ^u_P y, then x ^u_S y (and thus x ^u_L y).
Fix a word v ∈.
Since ux uy holds, it follows that u x · v u ⟺ uy· v u.
Moreover, we have u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L because x ^u_P y holds.
By definition of _S^u, it follows that x _S^u y holds.
Hence, x ^u_L y holds as well.
We then conclude that | ^u_L | ≤ || · |^u_P|.
According to Definition <ref>, we have x y iff [](x) = [](y) for all x, y ∈.
That is, = ([], ∅) is consistent with , i.e., x y iff (x) = (y) for all x,y∈.
Hence, u · v u iff (u) = (u · v).
In the remaining part of the paper, we may therefore mix the use of and without distinguishing the two notations.
We are now ready to give our main result of this section.
Let L be an ω-regular language and
_L = ([], [_u]_u∈) be the limit FDFA of L.
Then (1) _L has a finite number of states,
(2) _L = L, and (3) _L is saturated.
Since the syntactic FDFA _S of L has a finite number of states <cit.> and _L is a coarser representation than _S (cf. Lemma <ref>), _L must have finite number of states as well.
To show _L⊆L,
assume that w ∈_L.
By Definition <ref>, a UP-word w is accepted by _L if there exists a decomposition (u, v) of w such that (u) = (u · v) (equivalently, u · v u) and v ∈^u_L where u = (u).
Here u is the representative word for the equivalence class [u]_.
Similarly, let v = ^u_L(v).
By Definition <ref>, we have u·vuu·v^ω∈ L holds as v is a final state of ^u_L.
Since v ^u_L v (i.e., ^u_L(v) = ^u_L(v)), u· v uu· v^ω∈ L holds as well.
It follows that u · v u u · v^ω∈ L since u u and u · v u· v (equivalently, (u · v) = (u· v)).
Together with the assumption that (u · v) = (u) (i.e, u u · v), we then have that u · v^ω∈ L holds.
So, _L⊆L also holds.
To show that L⊆_L holds, let w ∈L.
For a UP-word w ∈ L, we can find a normalized decomposition (u, v) of w such that w = u· v^ω and u · v u (i.e., (u) = (u · v)), since the index of is finite (cf. <cit.> for more details).
Let u = (u) and v = ^u_L(v).
Our goal is to prove that v is a final state of ^u_L.
Since u u and u· v^ω∈ L, then u· v^ω∈ L holds.
Moreover, u· v u holds as well because u = (u) = (u)= (u· v) = (u · v).
(Recall that is deterministic.)
Hence, u· v uu· v^ω∈ L holds.
Since v^u_L v, it follows that u·vuu·v^ω∈ L also holds.
Hence, v is a final state.
Therefore, (u, v) is accepted by _L, i.e., w ∈_L.
It follows that L⊆_L.
Now we show that _L is saturated.
Let w be a UP-word.
Let (u, v) and (x, y) be two normalized decompositions of w with respect to (or, equivalently, to ).
Assume that (u, v) is accepted by _L.
From the proof above, it follows that both u· v u and u· v^ω∈ L hold.
So, we know that u· v^ω = x· y^ω∈ L.
Let x = (x) and y = ^x_L(y).
Since (x, y) is a normalized decomposition, it follows that x · y x.
Again, since x x, x· y x and x· y^ω∈ L also hold.
Obviously, x· y xx· y^ω∈ L holds.
By the fact that y ^x_L y, x·yxx·y^ω∈ L holds as well.
Hence, y is a final state of ^x_L.
In other words, (x, y) is also accepted by _L.
The proof for the case when (u, v) is not accepted by _L is similar.
We can also define the dual of limit RCs as follows.
For two u_1, u_2 ∈, u_1 u_2 is the same as in Definition <ref>.
Let u be an equivalence class of .
For any v_1, v_2 ∈, v_1 ^C_u v_2 if and only if ∀ v ∈, (uv_1 v u ⟹ u(v_1 v)^ω∉ L) ⟺ (uv_2 v u ⟹ u(v_2 v)^ω∉ L).
The final equivalence classes are the equivalence classes vu such that uv u ⟹ uv^ω∉ L holds.
By Definition <ref>, one can construct an FDFA ^C_L = ([], [^C_u]_u∈) based on Definition <ref>, which we call the dual limit FDFA of L.
Let L be an ω-regular language.
Let ^C_L = ([], [^C_u]_u∈) be its dual limit FDFA of L.
Then (1) ^C_L = L and (2) ^C_L is saturated.
As an immediate consequence of _L being saturated (Theorem <ref>), we obtain the following corollary.
Let L be an ω-regular language.
Let _L = ([], [_u]_u∈) be the limit FDFA of L.
Then if (u, v) is accepted by _L, so is (u, v^k) for all k ≥ 1.
§.§ Size comparison with other canonical FDFAs
As aforementioned, Angluin and Fisman in <cit.> showed that for a variant of the family of languages L_n given by Michel <cit.>, its periodic FDFA has Ω(n!) states, while the syntactic FDFA only has (n^2) states.
Since limit FDFAs are smaller than syntactic FDFAs, it immediately follows that:
There exists a family of languages L_n such that its periodic FDFA has Ω(n!) states, while the limit FDFA only has (n^2) states.
Now we consider the size comparison between limit and recurrent FDFAs.
Consider again the limit and recurrent FDFAs of the language L = a^ω + ab^ω in Figure <ref>:
one can see that limit FDFA and recurrent FDFA have the same number of states, even though with different progress DFAs.
In fact, it is easy to see that limit FDFAs and recurrent FDFAs are incomparable regarding their number of states, even when only the ω-regular languages recognized by weak DBAs are considered.
A weak DBA (wDBA) is a DBA in which each SCC contains either all accepting transitions or non-accepting transitions.
If L is a wDBA-recognizable language, then its limit FDFA and its recurrent FDFA have incomparable size.
We fix u, x, y ∈ in the proof.
Since L is recognized by a wDBA, the TS [] of the leading DFA is isomorphic to the minimal wDBA recognizing L <cit.>.
Therefore, a state [u]_ of is either transient, in a rejecting SCC, or in an accepting SCC. We consider these three cases.
* Assume that [u]_ is a transient SCC/state.
Then for all v ∈, u · x · v u and u · y · v u.
By the definitions of ^u_R
and ^u_L, there are a non-final class []_^u_L and possibly a sink final class [σ]_^u_L for ^u_L where σ∈, while there is a non-final class []_^u_R for ^u_R.
Hence, x ^u_L y implies x ^u_R y.
* Assume that [u]_ is in a rejecting SCC.
Obviously, for all v ∈, we have that u · x · v u u· (x· v)^ω∉ L and u · y · v u u· (y · v)^ω∉ L.
Therefore, there is only one equivalence class []_^u_R for ^u_R.
It follows that x ^u_L y implies x ^u_R y.
* Assume that u is in an accepting SCC.
Clearly, for all v ∈, we have that both u · x · v u u· (x· v)^ω∈ L and u · y · v u u· (y· v)^ω∈ L hold.
That is, we have either u · x · v u u· (x· v)^ω∈ L hold, or u · x · v u.
If x ^u_R y holds, it immediately follows that (u · x · v u u· (x· v)^ω∈ L) ⟺ (u · y · v u u· (y· v)^ω∈ L ) holds.
Hence, x ^u_R y implies x ^u_L y.
Based on this argument, it is easy to find a language L such that its limit FDFA is more succinct than its recurrent FDFA and vice versa, depending on the size comparison between rejecting SCCs and accepting SCCs.
Therefore, the lemma follows.
Lemma <ref> reveals that limit FDFAs and recurrent FDFAs are incomparable in size.
Nonetheless, we still provide a family of languages L_n in Lemma <ref> such that the recurrent FDFA has Θ(n^2) states, while its limit FDFA only has Θ(n) states.
One can, of course, obtain the opposite result by complementing L_n.
Notably, Lemma <ref> also gives a matching lower bound for the size comparison between syntactic FDFAs and limit FDFAs, since syntactic FDFAs can be quadratically larger than their limit FDFA counterparts, as stated in Lemma <ref>.
The language which witnesses this lower bound is given as its DBA depicted in Figure <ref>.
We refer to Appendix <ref> for detailed proof.
Let _n = 0, 1,⋯, n.
There exists an ω-regular language L_n over _n such that its limit FDFA has Θ(n) states, while both its syntactic and recurrent FDFAs have Θ(n^2) states.
The family of languages L_n is defined as the language of the DBA = (, _n, q_0, ,) as shown in Figure <ref>, where _n = 0, 1, ⋯, n.
One can easily verify that the index of _L_n is n+2; here we add the subscript L_n to _L_n to distinguish it from for the language L.
In fact, the leading DFA induced by _L_n is exactly the TS of .
The limit FDFA of L_n has Θ(n) states, whereas its syntactic and recurrent FDFAs have Θ(n^2) states; we refer to Appendix <ref> for the detailed proof.
Finally, it is time to derive yet another “Myhill-Nerode" theorem for ω-regular languages, as stated in Theorem <ref>.
This result follows immediately from Lemma <ref> and a similar theorem about syntactic FDFAs <cit.>.
Let _L be the limit FDFA of an ω-language L.
Then L is regular if, and only if _L has a finite number of states.
For identifying whether L is DBA-recognizable with FDFAs, a straightforward way, as mentioned in the introduction, is to go through determinization, which is, however, exponential in the size of the input FDFA.
We show in Section <ref> that there is a polynomial-time algorithm using our limit FDFAs.
Let _n = a_1, ⋯, a_n.
There exists a language L_n such that the limit FDFA has Θ(2^n) states, while syntactic and recurrent FDFAs have Θ(2^n^2) states.
We let L_n = w∈: w does not contain all letters in.
In the leading DFA , there are 2^n states.
That is, there are 2^n equivalence classes for .
Each equivalence class of can be uniquely encoded with the set of letters that occur in its words.
For instance, a_1 a_2 and a_1 a_2 a_1 can be encoded as the set a_1, a_2.
Let u_1 and u_2 be two different words with different sets S_1 and S_2, respectively.
We can construct an infinite word w to distinguish u_1 and u_2:
We analyze the case when S_1 ∖ S_2 ≠∅.
The case for S_2 ∖ S_1 ≠∅ is symmetric.
Assume a ∈ S_1∖ S_2.
We let w be a word with the set of letters ∖ S_1 occurring infinitely often.
Obviously, u_1 · w ∉ L_n since u_1 · w contains all the letters in .
Moreover, u_2 · w ∈ L since a ∉ (∖ S_1) and a ∉ S_2.
Now we check for the progress DFA for limit FDFAs.
For the progress DFA _u where u ∈ and the set of letters occurring in u is S_u, we show that there are at most two states in _u.
These are the equivalence class of u and the class of all other words in .
For a word v ∈, we know that u · x · v u u · (x · v)^ω∈ L_n.
The only sink equivalence class of is u in which u contains all letters in _n.
So if u · x · v u, then u · (x · v)^ω must not contain all letters.
It follows that u · (x · v)^ω∈ L_n.
Therefore, the limit FDFA has (2^n - 1) × 2 + 1 + 2^n ∈Θ(2^n) states. (The progress DFA of the sink equivalence class of has only one state.)
Now we look at the progress DFA for recurrent FDFAs.
§ LIMIT FDFAS FOR IDENTIFYING DBA-RECOGNIZABLE LANGUAGES
Given an ω-regular language L, we show in this section how to use the limit FDFA of L to check whether L is DBA-recognizable in polynomial time.
To this end, we will first describe what the limit FDFA of L looks like in Section <ref> and then introduce the deciding algorithm in Section <ref>.
§.§ Limit FDFA for DBA-recognizable languages
Bohn and Löding <cit.> construct a type of family of DFAs _BL from a set S^+ of positive samples and a set S^- of negative samples, where the progress DFA accepts exactly the language V_u = x ∈: ∀ v ∈. if u· xv u, then u · (xv)^ω∈ L[Defining directly a progress RC ^u that recognizes V_u is hard since V_u is quantified over all v-extensions.].
When the samples S^+ and S^- uniquely characterize a DBA-recognizable language L, _BL recognizes exactly L.
The progress DFA ^u_L of our limit FDFA _L of L usually accepts more words than V_u.
Nonetheless, we can still find one final equivalence class that is exactly the set V_u, as stated in Lemma <ref>.
In this subsection, we show that when L is a DBA-recognizable language, there will be a co-safety language identified by the limit progress DFAs, as stated in Lemma <ref>.
A co-safety language is a regular language R ⊆ such that R · = R.
In other words, u ∈ R implies u · v ∈ R for all v ∈ if R is a co-safety language.
lemmadbaCoSafe
Let L be a DBA-recognizable language and _L = (, ^u_L_u∈/_) be the limit FDFA of L.
Then, for each progress DFA ^u_L with ^u_L≠∅, there must exist a final state x∈ F_u such that [x]_^u_L = x ∈: ∀ v ∈. u · (x · v) u u· (x· v)^ω∈ L.
In <cit.>, it is shown that for each equivalence class [u]_ of , there exists a regular language V_u = x ∈: ∀ v ∈. if u· xv u, then u · (xv)^ω∈ L.
We have also provided the proof of the existence of V_u in Appendix <ref>, adapted to our notations.
The intuition of V_u is the following.
Let = (, , ι, , ) be a DBA accepting L.
Then, [u]_ corresponds to a set of states S = q ∈: q = (ι, u'), u' ∈ [u]_ in .
For each q ∈ S, we can easily create a regular language V_q such that x ∈ V_q iff over the word x, ^q (the DBA derived from by setting q as its initial state) visits an accepting transition, ^q goes to an SCC that cannot go back to q, or ^q goes to a state that cannot go back to q unless visiting an accepting transition.
Then, V_u = ∩_q ∈ S V_q.
Now we show that V_u is an equivalence class of ^u_L as follows.
On one hand, for every two different words x_1, x_2 ∈ V_u, we have that x_1 ^u_L x_2, which is obvious by the definition of V_u.
On the other hand, it is easy to see that x' ^u_L x for all x' ∉ V_u and x ∈ V_u because there exists some v ∈ such that u · x' · v u but u· (x' · v)^ω∉ L.
Hence, V_u is indeed an equivalence class of ^u_L.
Obviously, V_u ⊆^u_L, as we can let v =, so for every word x ∈ V_u, we have that u · x u u · x^ω∈ L.
Let x = ^u_L(x) for a word x ∈ V_u.
It follows that x is a final state of ^u_L and we have [x]_^u_L = V_u.
This completes the proof.
Let L be a DBA-recognizable language and _S = (, ^u_[u]_∈/_) be the syntactic FDFA of L.
Let u ∈ and V_u = x ∈: ∀ v ∈. u · (x · v) u u· (x· v)^ω∈ L.
Then for every equivalence class [x]_^u_S∈/_^u_S, we have either [x]_^u_S∩ V_u = ∅ or [x]_^u_S⊆ V_u.
Fix an equivalence class [x]_^u_S.
For a word y ∈ [x]_^u_S, assume that there exists a word v ∈, such that u · y · v u and u · (y · v)^ω∉ L, i.e., y ∉ V_u.
Let y' ∈ [x]_^u_S, i.e., y' ^u_S y.
By definition of ^u_S, it follows that u· y' u · y, which implies that u · y' · v u · y · v u.
Therefore, we also have u · y' · v u and u · (y' · v)^ω∉ L.
Similarly, we can prove that if there exists a finite word y∈ such that y ∈ [x]_^u_S and y ∈ V_u, then [x]_^u_S⊆ V_u.
By Lemma <ref>, we can define a variant of limit FDFAs with fewer final states for DBA-recognizable languages.
This helps to reduce the complexity when translating FDFAs to NBAs <cit.>.
Let n be the number of states in the leading DFA and k be the number of states in the largest progress DFA.
Then the resultant NBA from an FDFA has (n^2k^3) states <cit.>.
However, if the input FDFA is _B as in Definition <ref>, the complexity of the translation will be (n^2k^2), as there is at most one final state, rather than k final states, in each progress DFA.
The limit FDFA _B = (, ^u_B) of L is defined as follows.
The transition systems of and ^u_B for each [u]_∈/_ are exactly the same as in Definition <ref>.
The set of final states F_u contains the equivalence classes [x]_^u_L such that, for all v ∈, u · xv u ⟹ u· (xv)^ω∈ L holds.
The change to the definition of final states would not affect the language that the limit FDFAs recognize, but only their saturation properties.
We say an FDFA is almost saturated if, for all u, v ∈, we have that if (u, v) is accepted by , then (u, v^k) is accepted by for all k ≥ 1.
According to <cit.>, if is almost saturated, then the translation algorithm from FDFAs to NBAs in <cit.> still applies (cf. Appendix <ref> for details of the NBA construction).
Let L be a DBA-recognizable language and _B be the limit FDFA induced by Definition <ref>.
Then (1) _B = L and (2) _B is almost saturated but not necessarily saturated.
The proof for _B⊆L is trivial, as the final states defined in Definition <ref> must also be final in Definition <ref>.
The other direction can be proved based on Lemma <ref>.
Let w ∈L and = (, , ι, , ) be a DBA accepting L.
Let ρ be the run of over w.
We can find a decomposition (u, v) of w such that there exists a state q with q = (ι, u) = (ι, u · v) and (q, v[0]) ∈.
As in the proof of Lemma <ref>, we are able to construct the regular language V_u = x ∈: ∀ y ∈, u· x · y u u · (x· y)^ω∈ L.
We let S = p ∈: ^q = ^p.
For every state p ∈ S, we have that v^ω∈^p.
For each p ∈ S, we select an integer k_p > 0 such that the finite run p (p, v^k_p) visits some accepting transition.
Then we let k = max_p ∈ S k_p.
By definition of V_u, it follows that v^k∈ V_u.
That is, V_u is not empty.
According to Lemma <ref>, we have a final equivalence class [x]_^u_L = V_u with v^k ∈ [x]_^u_L.
Moreover, u · v^k u since q = (ι, u) = (q, v).
Hence, (u, v^k) is accepted by _B, i.e., w ∈_B.
It follows that _B = L.
Now we prove that _B = (, ^u_B) is not necessarily saturated.
Let L = (· aa)^ω.
Obviously, L is DBA recognizable,
and has only one equivalence class, []_.
Let w = a^ω∈L.
Let (u= , v = a) be a normalized decomposition of w with respect to (thus, ).
We can see that there exists a finite word x (e.g., x=b is such a word) such that · a · x and · (a · x)^ω∉ L.
Thus, (, a) will not be accepted by _B.
Hence _B is not saturated.
Nonetheless, it is easy to verify that _B is almost saturated.
Assume that (u, v) is accepted by _B.
Let u = (u) and v = ^u_B(v).
Since v is a final state, according to Definition <ref>, we have for all e ∈ that u·v e uu· (v e)^ω∈ L.
Since v ^u_L v, u· v e uu· (v e)^ω∈ L also holds for all e ∈.
Let e = v^k· e' where e' ∈, k ≥ 0.
It follows that u· v^k e' uu· (v^k e')^ω∈ L holds for k ≥ 1 as well.
Therefore, for all e' ∈, k ≥ 1, (u·v e' uu· (v e')^ω∈ L ) ⟺ (u· v^k e' uu· (v^k e')^ω∈ L ) holds.
In other words, v^u_L v^k for all k ≥ 1.
Together with the fact that u v^k u, (u, v^k) is accepted by _B for all k ≥ 1.
Hence, _B is almost saturated.
§.§ Deciding DBA-recognizable languages
We show next how to identify whether a language L is DBA-recognizable with our limit FDFA _L.
Our decision procedure relies on the translation of FDFAs to NBAs/DBAs.
In the following, we let n be the number of states in the leading DFA and k be the number of states in the largest progress DFA.
We first give some previous results below.
Let be an (almost) saturated FDFA of L. Then one can construct an NBA with (n^2k^3) states such that = L.
Now we consider the translation from FDFA to DBAs.
By Lemma <ref>, there is a final equivalence class [x]_^u_L that is a co-safety language in the limit FDFA of L.
Co-safety regular languages are regular languages R ⊆ such that R · = R.
It is easy to verify that if x' ∈ [x]_^u_L, then x'v∈ [x]_^u_L for all v∈, based on the definition of ^u_L.
So, [x]_^u_L is a co-safety language.
The DFAs accepting co-safety languages usually have a sink final state f (such that f transitions to itself over all letters in ).
We therefore have the following.
If L is DBA-recognizable then every progress DFA ^u_L of the limit FDFA _L of L either has a sink final state, or no final state at all.
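The condition in Corollary <ref> can be checked directly on the transition table of each progress DFA. The following Python sketch is our own illustration (the dictionary encoding and the function name are illustrative assumptions, not taken from any library): it tests whether a progress DFA has no final state at all, or at least one final state that is a sink.

def passes_sink_check(delta, finals, alphabet):
    # Necessary condition of Corollary <ref>: no final state at all, or some
    # final state f that loops back to itself on every letter (a sink).
    if not finals:
        return True
    return any(all(delta.get((f, a)) == f for a in alphabet) for f in finals)

# A two-state progress DFA over {'a', 'b'} whose final state 'top' is a sink.
delta = {('s', 'a'): 'top', ('s', 'b'): 's', ('top', 'a'): 'top', ('top', 'b'): 'top'}
assert passes_sink_check(delta, {'top'}, {'a', 'b'})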
Our limit FDFA _B of L, as constructed in Definition <ref>, accepts the same co-safety languages in the progress DFAs as the FDFA obtained in <cit.>, although they may have different transition systems.
Nonetheless, we show that their DBA construction still works on _B.
To make the construction more general, we assume an FDFA = (, ^q_q ∈) where = (, , ι, ) and, for each q ∈, we have ^q = (_q, ,ι_q, _q, F_q).
Let = (, ^q_q ∈) be an FDFA.
Let [] be the TS constructed from defined as the tuple [] = (_, , ι_, _) and ⊆(q, σ): q ∈_, σ∈ be a set of transitions where
* Q_ := ×⋃_q∈_q;
* ι_ := (ι, ι_ι);
* For a state (m, q) ∈ Q_ and σ∈, let q' = _m(q, σ) where ^m is the progress DFA that q belongs to and let m' = (m, σ).
Then
((m, q), σ) =
(m', q') if q' ∉ F_m
(m', ι_m') if q' ∈ F_m
* ((m, q), σ) ∈ if q' ∈ F_m
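Operationally, the definition above runs the leading DFA and one progress DFA in parallel; the progress component keeps following the progress DFA it was started in, and whenever it enters a final state, it is reset to the progress DFA of the current leading state and the transition is marked accepting. The following Python sketch is our own reading of this definition; the dictionary encoding, the explicit owner component in the product state, and all names are illustrative assumptions rather than part of the text, and all DFAs are assumed complete.

def fdfa_to_dba(lead_init, lead_delta, progress, alphabet):
    # lead_delta[(m, a)] -> m'; progress[n] = (init_n, delta_n, finals_n).
    # A product state is (m, (n, q)): m is the current leading state and q is a
    # state of the progress DFA owned by leading state n.
    init = (lead_init, (lead_init, progress[lead_init][0]))
    delta, acc = {}, set()
    todo, seen = [init], {init}
    while todo:
        m, (n, q) = todo.pop()
        for a in alphabet:
            m2 = lead_delta[(m, a)]
            q2 = progress[n][1][(q, a)]              # step q in its own progress DFA
            if q2 in progress[n][2]:                 # a final progress state is entered:
                succ = (m2, (m2, progress[m2][0]))   # reset to the progress DFA of m2
                acc.add(((m, (n, q)), a))            # and mark the transition accepting
            else:
                succ = (m2, (n, q2))
            delta[((m, (n, q)), a)] = succ
            if succ not in seen:
                seen.add(succ)
                todo.append(succ)
    return init, delta, acc                          # a transition-based DBA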
Let be an FDFA with only sink final states and let [] = ([], ) be as given in Definition <ref>.
Then, []⊆.
Let w ∈[] and ρ be its corresponding accepting run.
Since w is a UP-word and [] is a DBA with finitely many states, we can find a decomposition (u, v) of w such that (m, ι_m) = [](u) = [](u · v), where ρ visits a -transition whose destination is (m, ι_m) infinitely many times.
It is easy to see that (u · v) =( u) since [](u) = [](u · v).
Moreover, we can show there must be a prefix of v, say v', such that v' ∈^m.
Since ^m is co-safety, we have that v ∈^m.
Thus, (u, v) is accepted by .
By Definition <ref>, w ∈.
Therefore, []⊆.
By Corollary <ref>, _B has only sink final states;
so, we have that [_B]⊆_B.
However, Corollary <ref> is only a necessary condition for L being DBA-recognizable, as explained below.
Let L be an ω-regular language over = 1, 2, 3, 4 such that a word w ∈ L iff the maximal number that occurs infinitely often in w is even.
Clearly, L has one equivalence class []_ for .
The limit FDFA = (, ^_L) of L is depicted in Figure <ref>.
We can observe that the equivalence class [4]_^_L corresponds to a co-safety language.
Hence, the progress DFA ^_L has a sink final state.
However, L is not DBA-recognizable.
If we ignore the final equivalence class [2]_^_L and obtain the variant limit FDFA _B as given in Definition <ref>, then we have _B≠L since the ω-word 2^ω is missing.
But then, by Theorem <ref>, this change would not lose words in L if L is DBA-recognizable, leading to a contradiction.
Therefore, L is shown to be not DBA-recognizable.
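For UP-words, membership in this example language is easy to decide: the letters occurring infinitely often in u · v^ω are exactly the letters of v, so the word is in L iff the maximal letter of v is even. The small sketch below is our own illustration of this check; the function name is ours.

def in_example_language(u, v):
    # u, v are strings over {'1', '2', '3', '4'} with v nonempty; u . v^omega
    # is in L iff the maximal letter occurring infinitely often, i.e. the
    # maximum over v, is even.
    return max(int(c) for c in v) % 2 == 0

assert in_example_language("13", "2")        # 1 3 2 2 2 ... is in L
assert not in_example_language("", "23")     # maximal recurring letter is 3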
So the key step of the decision algorithm is to check whether ignoring the other final states retains the language.
With Lemma <ref>, we guarantee that [_B] accepts exactly L if L is DBA-recognizable.
Let L be a DBA-recognizable language.
Let _B be the limit FDFA of L, as constructed in Definition <ref>.
Let [_B] = ([_B], ), where [_B] and are the TS and set of transitions, respectively, defined in Definition <ref> from _B.
Then _B = L⊆[_B].
We first assume for contradiction that some w ∈ L is rejected by [_B].
For this, we consider the run ρ = (q_0,w[0],q_1)(q_1,w[1],q_2)… of [_B] on w. Let i ∈ω be such that (q_i-1,w[i-1],q_i) is the last accepting transition in ρ, and i=0 if there is no accepting transition at all in ρ.
We also set u=w[0⋯ i-1] and w' = w[i⋯].
By Definition <ref>, this ensures that [_B] is in state ([u]_,ι_[u]_) after reading u and will not see accepting transitions (or leave 𝒩^[u]__B) while reading the tail w'.
Let 𝒟 = (Q',Σ,ι',δ',Γ') be a DBA that recognizes L and has only reachable states.
As 𝒟 recognizes L, it has the same right congruences as L; by slight abuse of notation, we refer to the states in Q' that are language equivalent to the state reachable after reading u by [u]_ and note that 𝒟 is in some state of [u]_ after (and only after) reading a word u' u.
As u· w', and therefore u' · w' for all u' u, are in L, they are accepted by 𝒟, which in particular means that, for all q ∈ [u]_, there is an i_q such that there is an accepting transition in the first i_q steps of the run of
^q (the DBA obtained from by setting the initial state to q) on w'. Let i_+ be maximal among them and v=w[i ⋯ i+i_+].
Then, for u' u and any word u' v v', we either have u' v v' u, or u' v v' u and u' · (v v')^ω∈ L. (The latter is because v is constructed such that a run of 𝒟 on this word will see an accepting transition while reading each v, and thus infinitely many times.)
Thus, 𝒩^[u]__B will accept any word that starts with v, and therefore be in a final sink after having read v.
But then [_B] will see another accepting transition after reading v (at the latest after having read uv), which closes the contradiction and completes the proof.
Alternatively, the inclusion L⊆[_B] can be shown via the following claim, whose proof we refer to Appendix <ref>.
Let L_m = w' ∈: m · w' ∈ L.
Claim 1. For every progress DFA ^m in _B of a DBA-recognizable language L and a word w ∈ L_m, there must be a prefix of w accepted by ^m. (The proof is similar to the one proving that L⊆_B in Theorem <ref> and relies on the proof of Lemma <ref>.)
Let w be an arbitrary word from L.
Assume that ρ = ((m_0, q_0), w[0], (m_1, q_1)) ⋯((m_k, q_k), w[k], (m_k+1, q_k+1))⋯ is the run of [_B] over w.
We want to prove that ρ visits infinitely many accepting transitions.
By construction, we have that m_i+1w0⋯ i for all i ≥ -1.
(Recall that w0 ⋯ -1 =.)
First, by Claim 1, there must be a prefix, say u_0, of w that is accepted by ^m_0_B since w ∈ L_m_0 where m_0 ∈ []_.
Recall that m_0 implies that m_0 · w ∈ L, so w ∈ L_m_0.
Thus, we will see a -transition after reading u_0 and ρ will arrive at some state, say (m_i_1, ι_m_i_1).
Let w_i_1∈Σ ^ω such that u_0w_i_1 = w.
It is easy to see that w_i_1∈ L_m_i_1, since u_0 m_i_1 implies that m_i_1· w_i_1∈ L.
Again, by Claim 1, ρ will visit a -transition after reading a prefix of w_i_1, say u_1.
Repeating this procedure, we prove that ρ sees infinitely many -transitions.
Hence L ⊆[_B], i.e., L⊆[_B].
So, our decision algorithm works as follows.
Assume that we are given the limit FDFA _L = (, ^q_L) of L.
* We first check whether there is a progress DFA ^q_L that has final states but no sink final state.
If it is the case, we terminate and return “NO".
* Otherwise, we obtain the FDFA _B by keeping the sink final state as the sole final state in each progress DFA (cf. Definition <ref>).
Let = NBA(_L) be the NBA constructed from _L (cf. Lemma <ref>) and = DBA(_B) be the DBA constructed from _B (cf. Definition <ref>).
Obviously, we have that = L and ⊆_B = L.
* Then we check whether ⊆ holds.
If so, we return “YES", and otherwise “NO".
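The three steps can be summarized as in the following Python-style sketch. The constructions NBA(_L) and DBA(_B) and the inclusion test are passed in as functions, since they stand for Lemma <ref>, Definition <ref>, and a standard polynomial-time inclusion check between an NBA and a DBA; the data-structure layout and all names are our own assumptions, not an existing API, and the progress DFAs are assumed complete.

def decide_dba_recognizable(limit_fdfa, nba_of, dba_of, included_in):
    # Step 1: every progress DFA must have a sink final state or none at all.
    for delta, finals, alphabet in limit_fdfa["progress"].values():
        if finals and not any(all(delta[(f, a)] == f for a in alphabet) for f in finals):
            return False                             # Corollary <ref> is violated
    # Step 2: keep only the sink final state in each progress DFA (the FDFA F_B).
    variant = dict(limit_fdfa)
    variant["progress"] = {
        m: (delta, {f for f in finals if all(delta[(f, a)] == f for a in alphabet)}, alphabet)
        for m, (delta, finals, alphabet) in limit_fdfa["progress"].items()
    }
    # Step 3: L is DBA-recognizable iff the language of NBA(F) is included in DBA(F_B).
    return included_in(nba_of(limit_fdfa), dba_of(variant))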
Now we are ready to give the main result of this section.
Deciding whether L is DBA-recognizable can be done in time polynomial in the size of the limit FDFA of L.
We first prove our decision algorithm is correct.
If the algorithm returns “YES", clearly, we have ⊆.
It immediately follows that L = ⊆⊆_B⊆_L = L according to Lemmas <ref> and <ref>.
Hence, = L, which implies that L is DBA-recognizable.
For the case that the algorithm returns “NO", we analyze two cases:
* has final states but no sink final state in some progress DFA. By Corollary <ref>, L is not DBA-recognizable.
* ⊈. It means that L⊈ (by Lemma <ref>).
It follows that L is not DBA-recognizable by Lemma <ref>.
The algorithm is therefore sound; its
completeness follows from Lemmas <ref> and <ref>.
The translations above are all in polynomial time.
Moreover, checking the language inclusion between an NBA and a DBA can also be done in polynomial time <cit.>.
Hence, the decision algorithm runs in time polynomial in the size of the limit FDFA of L.
Recall that, our limit FDFAs are dual to recurrent FDFAs.
One can observe that, for DBA-recognizable languages, recurrent FDFAs do not necessarily have sink final states in progress DFAs.
For instance, the ω-regular language L = a^ω + ab^ω is DBA-recognizable, but its recurrent FDFA, depicted in Fig. <ref>, does not have sink final states.
Hence, our deciding algorithm does not work with recurrent FDFAs.
Let L be an ω-regular language and _L be its limit FDFA.
Then, L is a DBA-recognizable language iff for every decomposition (u, v) accepted by _L, there is an integer k > 0 such that v^k · y is accepted by _u for all y ∈.
§ MINIMALITY OF TDCAS
For constructing tDCAs, one can define the following RCs.
Let L be an ω-regular language and ℱ𝒞_L = (, ^C_u).
We let [ℱ𝒞_L] be a tDCA.
Then,
* L ⊆[ℱ𝒞_L].
* If L is a DCA-recognizable language, then [ℱ𝒞_L] = L.
The proof of the second claim is more tricky.
Assume that L is DCA-recognizable.
Then we can consider a tDCA = (, , , , ) that accepts L.
Our goal is to prove that ⊆[ℱ𝒞_L].
Now we consider the ω-regular language L whose leading automaton has only one equivalence class.
That is, u for all u ∈.
It is easy to see that for L, the periodic, syntactic, recurrent and limit FDFA all coincide.
Let L be a DBA-recognizable language with || = 1 and _L be its limit FDFA.
Then [_L] is the minimal DBA of L.
§.§ Application to passive learning of DBAs
We add don't care states in the learned FDFA.
So, given a set of positive samples S^+ and a set of negative samples S^-, we want to construct a DBA such that S^+ ⊆ and ∩ S^- = ∅.
We can construct a language L with S^+ ⊆ L and L ∩ S^- = ∅ and first construct the canonical FDFA of L from the samples and then transform it to a DBA .
The main challenge here is to make sure that L is DBA-recognizable.
§ FROM LIMIT FDFA TO LDBAS
We can use the construction proposed in <cit.> to construct a DBA/DCA from the limit FDFA of a given ω-regular language L.
Let L be an ω-regular language and _L = ([], [_u]_u∈) be its limit FDFA.
Let [_L] be the TS constructed from _L defined as the tuple [_L] = (, , , ) and ⊆() be a set of transitions where
* Q := ⋃_u∈u×/__u;
* q_0 := (, );
* For a state (u, vu) ∈ Q and σ∈, let v'u = vσu and u' = uσ.
Then
((u, vu), σ) =
(u', v'u) if v'u∉ F__u
(u', u') if v'u∈ F__u
* ((u, vu), σ) ∈ if v'u∈ F__u
Let L be an ω-regular language.
We let [_L] = ([_L], ) and [_L] = ([_L], ) be a tDBA and a tDCA, respectively.
Then,
* [_L]⊆ L and L⊆[_L].
* If L is a DBA-recognizable language, then [_L] = L and [_L] = L.
[_L]⊆ L. Assume that w ∈[_L] and ρ is an accepting run of [_L] over w.
Then, we can find infinitely many integers i_0, i_1, ⋯ such that i_j > 0, ρi_j = (w[0⋯ i_j], v_i_jw[0⋯ i_j]) ρi_j+1 = (w[0⋯ (i_j + 1)], w[0⋯ (i_j+1)]) and v_i_j· w[i_j + 1]w[0⋯ i_j] is a final equivalence class of _w[0 ⋯ (i_j + 1)] for all j ∈.
Therefore, there must exist a subset of integers i_j_0, i_j_1, ⋯ such that u = w[0 ⋯ i_j_k] and v_j_k = w[i_j_k⋯ i_j_k+1] such that u · v_j_k u and uv_j_k^ω∈ L for all k ∈.
Therefore, we conclude that w ∈ L.
So, [_L]⊆ L.
L⊆[_L].
Let w ∈L.
We prove by contradiction.
Assume that the run ρ of [_L] over w is not accepting, that is, ρ traverses infinitely many -transitions.
Then, we can find infinitely many integers i_0, i_1, ⋯ such that i_j > 0, ρi_j = (w[0⋯ i_j], v_i_jw[0⋯ i_j]) ρi_j+1 = (w[0⋯ (i_j + 1)], w[0⋯ (i_j+1)]) and v_i_j· w[i_j + 1]w[0⋯ i_j] is a final equivalence class of ^C_w[0 ⋯ (i_j + 1)] for all j ∈.
Therefore, there must exist a subset of integers i_j_0, i_j_1, ⋯ such that u = w[0 ⋯ i_j_k] and v_j_k = w[i_j_k⋯ i_j_k+1] such that u · v_j_k u and uv_j_k^ω∈ L for all k ∈.
According to <cit.>, we conclude that w ∈ L, which violates the assumption.
Therefore, L⊆[_L], i.e., L⊆[_L].
§ CONSTRUCTIONS OF LDBAS AND DRAS
The ω-regular language L recognized by its limit FDFA _L can be written in the following form:
⋃_u∈, vu∈ F__uu·( v ∈vu: u · v u^ω).
§.§ NBA
The construction of an NBA from an FDFA is standard and has been given in previous works (cf. Appendix <ref>).
§.§ LDBA
We describe a construction for each fixed u and a final class vu.
Let [] = (_, , , _) be the DFA constructed from .
We collect all states of the progress DFA [_u] that cannot reach the final equivalence class vu in S_u,v.
Now we construct a DBA _u,v = (_u,v, , q_u,v, _u,v, _u,v) where
* Q_u, v := ⋃_u∈u×/__u∪;
* q_u,v := (u, u);
* For a state (u_1, v_1u_1) ∈ Q_u,v and σ∈, let u'_1 = u_1σ.
((u_1, v_1u), σ) =
(u'_1, u'_1) if u'_1≠u or u≠u_1
(u, v_1 σu') if u'_1 = u = u_1, and
v_1σu≠vu
(u, u) if u'_1 = u = u_1 and
v_1σu = vu
* (u_1, v_1u_1, σ) ∈ R_u,v if v_1σu = vu
Then we can define a LDBA _u,v as
(_∪__u,v, , , = _∪_j ∪__u,v, _u,v)
where
we have that (u, , (u, u)) ∈(_j);
Let L be an ω-regular language.
Let _u,v be the LDBA as constructed above.
Then, _u,v = u· v ∈vu: u · v u^ω.
Let L_u, v = u· v ∈vu: u · v u^ω.
We only need to consider all the ultimately periodic words, according to <cit.>.
That is, we only prove that = L_u,v.
First, we prove that L_u,v⊆_u,v.
Every UP-word w ∈L_u,v can be written as its decomposition (u', v') such that u'· v' u' u and w = u'· v'^ω∈ L.
By definition of vu, we also have that v' ∈vu.
So we only need to prove that v'^ω is accepted from the state (u, u).
Thus, L⊆.
The other direction is trivial.
is a good-for-MDP automaton.
This should be easy to prove, as the automaton is limit deterministic: a run can loop in [] before reaching a BSCC and then jump to the right state, since the finite word to be satisfied within the BSCC is known.
It has branching degree of only 2.
§.§ DRA
We describe a construction for each fixed u and a final class vu.
The construction idea is as follows:
* We fix one u and a final class vu.
* We extract the part that can reach vu in [_u];
* We construct a DRA for the language u· (vu)^ω with u·vuu.
This DRA only has one pair of Rabin condition.
* Then we do union product for those DRAs.
We collect all states of the progress DFA [_u] that cannot reach the final equivalence class vu in S_u,v.
Now we construct a deterministic Rabin automaton _u,v = (_u,v, , q_u,v, _u,v, _u,v) where
* Q_u, v := ⋃_u∈u×/__u;
* q_u,v := (, u);
* For a state (u_1, v_1u_1) ∈ Q_u,v and σ∈, let u'_1 = u_1σ.
((u_1, v_1u), σ) =
(u'_1, u'_1) if u'_1≠u or u≠u_1
(u, v_1 σu') if u'_1 = u = u_1 and v_1σu≠vu
(u, u) if u'_1 = u = u_1 and v_1σu = vu
* _u, v = (L_u, v, R_u, v) where
(u_1, v_1u_1, σ) ∈ R_u,v if v_1σu = vu and (u_1, v_1u_1, σ) ∈ L_u, v if v_1 σu∈ S_u, v.
Let L be an ω-regular language.
Let _u, v be the DRA as constructed above.
Then, _u, v = u· v ∈vu: u · v u^ω.
Let L_u, v = u· v ∈vu: u · v u^ω.
We first prove that _u, v⊆ L_u, v.
Let w ∈_u, v and ρ be its accepting run.
It follows that ρ can be written in the form q_u, v (u, v_1u) (u, u) (u, v_2u) (u, u) ⋯, where v_k σu = vu for all k ≥ 1, and the states in S_u, v are not visited for all k ≥ j with some j ≥ 0.
It is easy to see that w_1 σ_1 ⋯ w_k σ_k u for all k ≥ 1;
moreover, w_kσ_k _u v for all k ≥ 2.
Therefore, w ∈ L_u,v holds trivially by definition of L_u,v.
Hence, _u, v⊆ L_u, v.
Now we prove that L_u, v⊆_u, v.
Let w ∈L_u,v.
By definition, w can be written as w = u_1 · v_1 · v_2 · v_3 ⋯ where u_1 · v_i u_1 u and v_i ∈vu hold for all i ≥ 1.
We can construct an accepting run ρ for w as follows.
First, after reading u_1, _u, v will reach state (u, v'_1u).
Counterexample analysis
When EQ() returns a counterexample w ∈ L ⊖, we need to find a decomposition (u, v) of w to further refine the current hypothesis FDFA .
We analyze the counterexample to in the following cases.
* w ∈ L ∖.
Since w ∉, it follows that the run ρ of over w will get trapped in states of the form (m', q) where q belongs to one progress DFA, say _m.
(If ρ does not get trapped in any progress DFAs, then it will visit accepting transitions for infinitely many times.)
We can find a state (m, q) that occurs infinitely many times in ρ with (m, q) = (u), (m,q) = _((m, q), v) and w = u · v^ω.
Let u' = u · v and n = |u'|.
Let s_i = (u'[1⋯ i]) for all 0 ≤ i ≤ n.
Recall that s_0 = ι_ = and s_n = m.
We test whether u' indeed belongs to m by checking the results of a sequence of membership queries EQ(s_0 · u'[i⋯ n], v), ⋯, EQ(s_n · u'[n+1⋯ n], v).
If there exists a smallest integer j ≥ 1 such that EQ(s_j-1· u'[j⋯ n], v) ≠EQ(s_j· u'[j+1⋯ n], v), then we can find an experiment e = (u'[j+1 ⋯ n], v) to distinguish two states s_j-1· u'[j] and s_j.
Otherwise, u and m cannot currently be distinguished in .
Since ρ gets trapped inside _m and w ∈ L, we know that there must be a prefix of w that needs to be accepted by _m.
If there exists a state (m, q) that occurs in the loop part of ρ, i.e., there exist some x ∈, y ∈ such that (m, q) = (x) = (x · y) and w = x · y^ω, we can then find an experiment e to refine _m as follows.
First, we know that m· y^ω∈ L (since x · y ∈ L and x and m cannot be distinguished by y^ω.) and (m· y) = (m).
Thus, let s_i = _m(y[0⋯ i]) and n = |y|.
In particular, s_0 = and s_n is not an accepting state.
Then we can compute the sequence of results ((m) ?=(m· s_0 · y), m· (s_0 · y[1⋯ n])^ω?∈ L), ⋯ ((m) ?=(m· s_n ), m· (s_n · y[n+1⋯ n])^ω?∈ L).
Let y = s_n= _m(y), which is not a final equivalence class.
If is learning the syntactic FDFA, then there will be some experiment v ∈ such that (m) = (m·y· v)m· (y· v)^ω∉ L holds, since s_n is not a final state.
On the contrary, we have both (m) = (m· s_0 · y) and m· (s_0 · y[1⋯ n])^ω∈ L hold.
If (m) = (m·y) m· (y)^ω∈ L does not hold, then it is easy to find an experiment in this case.
If we have (m) = (m·y) m· (y)^ω∈ L hold, then we have either (m) ≠(m·y) or (m) = (m·y) m· (y)^ω∈ L hold.
In case that (m) ≠(m·y), we know that y and y do not belong to the same equivalence class by definition of ^S_m.
We can find a suffix of y to distinguish some intermediate states.
In case of (m) = (m·y) m· (y)^ω∈ L and (m·y· v) = (m) m· (y· v)^ω∉ L.
Then there must be some integer k > 0, such that m· (y· v)^k · (y)^ω∉ L, otherwise it will violate the fact that m· (y· v)^ω∉ L.
Here k is bounded by the number of states in the minimal DBA accepting L.
Therefore, we know that m and m· (y· v)^k can be distinguished by (y)^ω.
When is learning the limit FDFA, the case analysis is similar except for the case when (m) ≠(m·y).
If (m) = (m· y · v), then we analyze by two cases:
If m· ( y · v)^ω∈ L, then we get that (m) = (m· y · v) m· ( y · v)^ω∈ L, in contrast with (m) = (m·y· v) m· ( y· v)^ω∉ L.
We know that y· v and y· v can be distinguished by some experiment, in particular some suffix of y · v.
Otherwise, we have m· ( y · v)^ω∉ L (m) = (m· y · v) = (m· y).
Similarly, we can find an integer j > 0 such that m· ( y · v)^k · y^ω∉ L and thus discover new equivalence classes in .
If (m) ≠(m· y · v), we know that y· v and y · v can be distinguished since (m) = (m·y· v) m· (y· v)^ω∉ L.
Now consider the case that (m, ι_m) is not in the loop part.
We then let x be the word that leads to the last occurrence of (m, ι_m).
Let w = x · y · v^ω where u = x · y and (m, ι_m) = (x).
Recall that (m, q) = (u) = (u · v).
By assumption, we have that m ≠m, which implies that (x) ≠(x· y).
Let y = _m(y).
Obviously, y is not a final state of _m.
If |x| < |u|, we can let w = x· y · v^ω.
There must be a prefix of y · v^ω that needs to be accepted by _m.
More precisely, the shortest prefix that leads to another equivalence class other than m.
From the state (m, ι_m), the run ρ will get trapped in _m.
So, we can always find a normalized decomposition (u, v) of w such that (u, v) is not accepted by .
We reformulate the above analysis as follows.
If the run ρ over w gets stuck in some progress DFA, say _m, then we must identify a fragment of w that needs to be accepted by _m.
* Assume that there is a state (m, ι_m) that occurs in ρ infinitely often.
Since is deterministic, we can find a decomposition (u, v) of w such that (m, ι_m) = (u) = (u· v) and w = u· v^ω.
We then first check whether there is a misclassification of words in the leading DFA over the word u · v by the experiment v^ω.
If there is, then we refine , otherwise we have m · v^ω∈ L and (m · v) = (m).
Moreover, the run of _m over v starts from the initial state ι_m and stops at ι_m without visiting a final state.
Thus, we know that and v have been classified into the same equivalence class.
However, since for the experiment e =, we have that (m ··) = m and m · (·)^ω∉ L but (m · v · ) = m and m · (v·)^ω∈ L.
That is, we have (m ··) = m m · (·)^ω∉ L and (m · v · ) = m m · (v·)^ω∈ L.
* There is no state (m, ι_m) but some state (m, q) occurring in the loop part of ρ.
Let x be the longest word with (m, ι_m) = (x), y be the word with (m, q) = (x· y), v be the word with (m, q) = (x · y · v) and w = x · y · v^ω.
We then first check whether there is a misclassification of words in the leading DFA over the word x · y · v by the experiment v^ω.
If we can find descrepancies, we can refine the leading DFA , otherwise we have m · x · v^ω∈ L, m · v^ω∈ L and m = (x) = (x · y) = (x· y· v).
* There is no state (m, q) occurring in the loop part of ρ.
* w ∈∖ L.
So the run ρ of over w must visit infinitely many -transitions.
Similarly, we can find a decomposition (u,v) of w such that (m, ι_m) = (u) = (u · v) and w = uv^ω.
We then first check whether there is a misclassification of words in the leading DFA over the word u · v by the experiment v^ω.
If everything is fine with , then we know that there must be a prefix of v that is accepted by _m.
Since m · v^ω∉ L and (m) = (m · v), it follows that (m) = (m · v) m · v^ω∈ L does not hold.
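The scan over the run described above can be implemented as a simple linear pass, in the style of Rivest–Schapire counterexample analysis. The sketch below is our own illustration for the leading DFA: mq(prefix, period) is an assumed membership oracle for UP-words and rep(w) the representative word of the leading state reached on w; both, as well as the return convention, are assumptions made for this sketch only.

def analyze_leading(u_prime, v, rep, mq):
    # Replace longer and longer prefixes of u' by their representatives and
    # query membership of the resulting UP-word (. , v); the first position
    # where the answer flips yields a distinguishing experiment.
    n = len(u_prime)
    results = [mq(rep(u_prime[:i]) + u_prime[i:], v) for i in range(n + 1)]
    for j in range(1, n + 1):
        if results[j - 1] != results[j]:
            w1 = rep(u_prime[:j - 1]) + u_prime[j - 1]   # the word s_{j-1} . u'[j]
            w2 = rep(u_prime[:j])                        # the representative s_j
            return (w1, w2), (u_prime[j:], v)            # pair to split, experiment
    return None                                          # no refinement of the leading DFA found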
§ UNDERSPECIFYING PROGRESS RIGHT CONGRUENCES
Recall that recurrent and limit progress DFAs ^u either treat don't care words in C_u = v∈: u v u as rejecting or accepting, whereas it really does not matter whether or not they are accepted.
So why not keep this question open?
We do just this in this section; however, we find that treating the progress with maximal flexibility comes at a cost: the resulting right progress relation ^u_N is no longer an equivalence relation, but only a reflexive and symmetric relation over × such that x ^u_N y implies xv ^u_N yv for all u, x, y, v∈.
For this, we first introduce Right Pro-Congruences (RP) as relations on words that satisfy all requirements of an RC except for transitivity.
Let [u]_ be an equivalence class of .
For x, y ∈, we define the progress RP ^u_N as follows:
x ^u_N y iff ∀ v∈. (u xv u u yv u) (u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L).
Obviously, ^u_N is an RP, i.e., for x, y, v'∈, if x ^u_N y, then xv' ^u_N yv'.
That is, assume that x ^u_N y and we want to prove that, for all e ∈, (u · x v' e u u · y v' e u) (u · (x v' e)^ω∈ L ⟺ u · (yv'e)^ω∈ L).
This follows immediately by setting v = v' e in Definition <ref> for all e ∈ since x ^u_N y.
As ^u_N is not necessarily an equivalence relation[In the language L = a^ω + ab^ω from the example of Figure <ref>, for example, we have a ^ab_N and a ^ab_N b, but b ^ab_N.], we cannot argue directly with the size of its index.
However, we can start by showing that ^u_N is coarser than ^u_P, ^u_S, ^u_R, and ^u_L.
For u, x, y ∈, we have that if x ^u_K y, then x ^u_N y, where K ∈P, S, R, L.
First, if x ^u_P y, x ^u_N y holds trivially.
For syntactic, recurrent, and limit RCs, we first argue for fixed v ∈ that
* ux uy ⟹ uxv uyv, and therefore
ux uy ( u· x · v u ⟹ (u· (x · v)^ω∈ L ⟺ u· (y· v)^ω∈ L))
(u xv u u yv u) (u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L),
* (u · x · v u u· (x v)^ω∈ L) ⟺ (u· y v u u· (y · v)^ω∈ L)
(u xv u u yv u) (u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L), and
* (u · x · v u ⟹ u· (x · v)^ω∈ L) ⟺ (u · y· v u ⟹ u· (y · v)^ω∈ L)
(u xv u u yv u) (u· (xv)^ω∈ L ⟺ u· (yv)^ω∈ L),
which is simple Boolean reasoning.
As this holds for all v ∈ individually, it also holds for the intersection over all v ∈, so that the claim follows.
Assume that x ^u_R y.
It follows that, for all v ∈, (u x v u u · (xv)^ω∈ L) ⟺ (u y v u u · (yv)^ω∈ L).
If u xv u u · (xv)^ω∈ L holds, then u yv u u · (yv)^ω∈ L holds as well.
Hence, u xv u u y v u (u · (xv)^ω∈ L ⟺ u · (yv)^ω∈ L) also holds.
Otherwise, if u xv u u · (xv)^ω∈ L does not hold, it follows that u yv u u · (yv)^ω∈ L does not hold either.
In case of one of u xv u and u yv u holds, obviously u xv u u y v u (u · (xv)^ω∈ L ⟺ u · (yv)^ω∈ L) holds.
In case of both u xv u and u yv u hold, then we have u · (xv)^ω∉ L and u · (yv)^ω∈ L.
Hence, we also have u xv u u y v u (u · (xv)^ω∈ L ⟺ u · (yv)^ω∉ L).
Therefore, if x ^u_R y, x ^u_N y.
Since ^u_S refines ^u_R <cit.>, it immediately follows that if x ^u_S y, then x ^u_N y.
The proof for limit progress RCs is also similar and thus omitted.
Now, it is easy to see that we can take any RC that refines ^u_N and use it to define a progress DFA.
It therefore makes sense to define the set of RCs that refine ^u_N as 𝖱𝖢(^u_N) = { | ⊂^u_N is a RC}, and the
best index |^u_N| of our progress RP as |^u_N| = min{|| | ∈𝖱𝖢(^u_N)}.
With this definition,
Corollary <ref> follows immediately.
For u∈, we have that |^u_N| ≤ |^u_K| for all K ∈P, S, R, L.
We note that the restriction of ^u_N to C_u × C_u is still an equivalence relation, where C_u = v∈: u v u are the words the FDFA acceptance conditions really care about.
This makes it easy to define a DFA over each ∈𝖱𝖢(_N^u) with finite index: a quotient of C_u/__N^u is good if it contains a word v s.t. u· v^ω∈ L, and a quotient of Σ^*/_ is accepting if it intersects with a good quotient (note that it intersects with at most one quotient of C_u).
With this preparation, we now show the following.
Consider L = a^ω + ab^ω and u = ab in Figure <ref>:
We can see that and b can be distinguished by v = under ^u_N, since both ab · ab and ab · b ab hold, but ab · ()^ω∉ L and ab · (b)^ω∈ L.
However, one can verify that both a ^ab_N and a ^ab_N b hold.
The reason why a ^ab_N holds is that ab · a v ab holds for all v ∈, which obviously implies that a ^ab_N.
The proof for a ^ab_N b is similar.
By Example <ref>, we can see that the relation ^u_N is no longer an equivalence relation over ×, as it is no longer transitive.
Nonetheless, one can verify that ^u_N is an equivalence relation over C_u × C_u, where C_u = v∈: u v u are the words the FDFA acceptance conditions really care about.
Hence, the RP ^u_N can still help us to identify L correctly.
From above, we can see that a can be classified into both []_^ab_N and [b]_^ab_N.
We first show that ^u_N-related words in C_u agree on acceptance.
That is, for two words x, y ∈ C_u, it is impossible that both x ^u_N y and ux^ω∉ L ⟺ uy^ω∈ L hold at the same time.
Assume that x ^u_N y.
Since x, y ∈ C_u, i.e., ux u and uy u hold, then for v =, we must have u· (x)^ω∈ L ⟺ u· (y)^ω∈ L by Definition <ref>.
If x ^u_N y and y ^u_N z for x,y,z ∈ C_u, we have x ^u_N z.
That is, we want to prove that ^u_N is transitive over C_u × C_u.
For v ∈, we prove x ^u_N z by considering the following two cases:
* u y v u. It follows that both uxv u and uzv u hold since ux u uy uz.
Then, clearly u· (xv)^ω∈ L ⟺ u ·(yv)^ω∈ L and u·(yv)^ω∈ L ⟺ u · (zv)^ω∈ L hold since x ^u_N y and y ^u_N z.
It follows that u· (xv)^ω∈ L ⟺ u · (zv)^ω∈ L.
Hence, x ^u_N z holds by definition.
* u y v u.
Since x, y, z ∈ C_u, we have u x u uy uz.
It follows that u x v u and u zv u.
Hence, x ^u_N z holds as well.
So, by definition, x ^u_N z holds and thus ^u_N is transitive over C_u × C_u.
It is easy to see that ^u_N is also reflexive and symmetric.
Thus, ^u_N is an equivalence relation (and thus a RC) over C_u × C_u.
(Interestingly, we have ux uy x ^u_N y iff x ^u_S y.)
So, if an accepting cluster has one accepting word in C_u, then all words of C_u inside the cluster are accepting as well, since [x]_^u_N is a final cluster if there is y ∈ [x]_^u_N such that uy u uy^ω∈ L.
In Theorem <ref> below, we prove that the FNFA accepts exactly the given language L.
It follows that, from this RP, we can define a nondeterministic TS , as stated in Definition <ref>.
We abuse the notation [x]_ to denote one cluster that x ∈ is assigned to with respect to .
We denote by |_ the set of clusters of words in with respect to .
Let be a RP of finite index.
The nondeterministic TS [] induced by is a tuple (S, s_0, _N) where S = |_, s_0 = []_, and for each u ∈ and a ∈, [u']_∈_N([u]_, a) if u a ∈ [u']_.
Now we are ready to define the canonical FNFAs.
The FNFA _N = (, ^u_N) of L is defined as follows.
The transition system of is exactly the same as in Definition <ref>.
The progress NFA ^u_N is the tuple ([^u_N], F_u) where [^u_N] = (Q_u, s_u, _u) for [u]_ is constructed by Definition <ref> from ^u_N and the set of final states F_u contains the clusters [x]_^u_N such that there exists some v ∈ [x]_^u_N with u · v u and uv^ω∈ L.
We still use an acceptance condition similar to Definition <ref> and the saturation property of Definition <ref> for FNFAs.
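Definition <ref> can be read operationally: the successors of a cluster under a letter are all clusters containing some one-letter extension of its words. The sketch below is our own illustration over a finite sample of words whose cluster assignments are known; the sample-based setting, the encoding, and the names are assumptions made only for illustration.

def nondet_ts(clusters_of, alphabet):
    # clusters_of[w] is the set of cluster names that the finite word w belongs to.
    init = clusters_of[""]                       # the clusters of the empty word
    delta = {}                                   # (cluster, letter) -> set of clusters
    for u, clusters in clusters_of.items():
        for a in alphabet:
            if u + a not in clusters_of:
                continue                         # successor outside the sample
            for c in clusters:
                delta.setdefault((c, a), set()).update(clusters_of[u + a])
    return init, delta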
Our main results of this section are summarized below.
Let L be an ω-regular language and _N be its FNFA.
Then (1) _N has a finite number of states, (2) _N = L, and (3) _N is saturated.
Item (1) is a direct result from Corollary <ref>.
The proofs of Items (2) and (3) are quite similar to those for Theorem <ref>.
_N⊆L.
Let w ∈_N.
Then there must be a normalized decomposition (u, v) of w such that u = (u) = (uv) and v ∈^u_N.
Let v' = ^u(v) be the representative word.
Since v' is an accepting state of ^u_N, there must be a word v∈ [v']_^u_N with uvuuv^ω∈ L.
Since v ^u_N v, then we have (uvuu v u) (uv^ω∈ L ⟺uv^ω∈ L) hold.
First, from u u, we know that u v u holds too since u v u ( is consistent with ).
Hence, we have that uv^ω∈ L ⟺uv^ω∈ L holds, which implies that uv^ω∈ L.
Again, by the fact that u u, it follows that uv^ω∈ L, thus also w ∈L.
Therefore, _N⊆L.
L⊆_N.
Let w ∈L.
There must be a normalized decomposition (u, v) of w with respect to , i.e., uv u.
Let u = (u) = (uv).
It follows that u v u and uv^ω∈ L hold since u u.
Let v = ^u_N(v).
Hence, v ∈ [v]_^u_N.
By definition, v must be a final state since u v uuv^ω∈ L holds.
It directly follows that (u, v) is accepted by _N, i.e., w ∈_N.
Therefore, L⊆_N.
It follows that if L is regular, _N must have a finite number of states.
Moreover, if _N has an infinite number of states, so do the periodic, syntactic, recurrent and limit FDFAs.
Theorem <ref> thus follows.
Let _N be the FNFA of the ω-regular language L.
Then, L is regular if, and only if _N has a finite number of states.
theoremthmLastFdfaRep
Let L be an ω-regular language and
_L = ([], [_u]_u∈) be the limit FDFA of L s.t. _u ∈𝖱𝖢(_N^u) with finite index for all u.
Then (1) _L has a finite number of states,
(2) _L = L, and (3) _L is saturated.
The proof is similar to that of Theorem <ref> and proceeds as follows.
The first claim follows from the restriction to finite indices in the definition (we have seen that they exist, and that we can, e.g., choose limit RC).
To show _L⊆L,
assume that w ∈_L.
By Definition <ref>, a UP-word w is accepted by _L if there exists a decomposition (u, v) of w such that (u) = (u · v) (equivalently, u · v u) and v ∈^u where u = (u).
Here u is the representative word for the equivalence class [u]_.
Similarly, let v = ^u(v).
By Definition <ref>, we have u·vuu·v^ω∈ L holds as v is a final state of ^u.
Since v _uv (i.e., ^u(v) = ^u(v)), u· v uu· v^ω∈ L holds as well.
It follows that u · v u u · v^ω∈ L since u u and u · v u· v (equivalently, (u · v) = (u· v)).
Together with the assumption that (u · v) = (u) (i.e, u u · v), we then have that u · v^ω∈ L holds.
So, _L⊆L also holds.
To show that L⊆_L holds, let w ∈L.
For a UP-word w ∈ L, we can find a normalized decomposition (u, v) of w such that w = u· v^ω and u · v u (i.e., (u) = (u · v)), since the index of is finite (cf. <cit.> for more details).
Let u = (u) and v = ^u(v).
Our goal is to prove that v is a final state of ^u.
Since u u and u· v^ω∈ L, then u· v^ω∈ L holds.
Moreover, u· v u holds as well because u = (u) = (u)= (u· v) = (u · v).
(Recall that is deterministic.)
We now have that v ∈ C_u, so that C_u∩Σ^*/_^u_N is good (as u· v^ω∈ L).
We also have that v v, so that [v]_ is accepting.
Hence, v is a final state, and (u, v) therefore accepted by _L, i.e., w ∈_L.
It follows that L⊆_L.
Now we prove that _L is saturated.
Let w be a UP-word.
Let (u, v) and (x, y) be two normalized decompositions of w with respect to (or, equivalently, to ).
We have seen that (u, v) is accepted by _L iff u · v^ω = x · y^ω∈(L), which is the case iff (x, y) is accepted by _L with the same argument.
§ DISCUSSION AND FUTURE WORK
Our limit FDFAs fit nicely into the learning framework for FDFAs <cit.> and are already available for use in the learning library [<https://github.com/iscas-tis/roll-library>] <cit.>.
Since one can treat an FDFA learner as comprised of a family of DFA learners in which one DFA of the FDFA is learned by a separate DFA learner, we only need to adapt the learning procedure for progress DFAs based on our limit progress RCs, without extra development of the framework; see Appendix <ref> for details.
We leave the empirical evaluation of our limit FDFAs in learning ω-regular languages as future work.
We believe that limit FDFAs complement the existing set of canonical FDFAs for recognizing and learning ω-regular languages.
Since limit FDFAs make it easy to identify DBA-recognizable languages, they might be used in a learning framework for DBAs based on membership and equivalence queries.
We leave this to future work.
Finally, we have looked at retaining maximal flexibility in the construction of FDFAs by moving from progress RCs to progress RPs.
While this reduces size, it is no longer clear how to construct them efficiently, which we leave as a future challenge.
§.§.§ Acknowledgements
We thank the anonymous reviewers for their valuable feedback.
This work has been supported by the EPSRC through grants EP/X021513/1 and EP/X017796/1.
§ PROOF OF LEMMA <REF>
*
The language L_n is given as its DBA = (, _n, q_0, ,) depicted in Figure <ref>, where Σ_n = {0, 1, …, n}.
First, we show that the index of _L_n is n+2.
Here we add the subscript L_n to _L_n to distinguish it from for the language L.
In fact, the leading DFA induced by _L_n is exactly the TS of .
We then show that the limit FDFA of L_n has Θ(n) states, while its recurrent and syntactic FDFAs have Θ(n^2) states.
For every two words u_1, u_2 ∈, if u_1 _L_n u_2, then there exists a word w ∈ such that u_1 · w ∈ L_n ⟺ u_2 · w ∈ L_n does not hold.
That is, u^-1_1· L_n ≠ u^-1_2 · L_n where u^-1· L_n = w ∈: u · w ∈ L_n for a word u ∈.
Let L_q = ^q.
For every pair of different states q_i, q_j ∈ with i ≠ j, obviously L_q_i≠ L_q_j since L_q_i contains an infinite word i^ω, while L_q_j does not contain such a word.
So, if (u_1) ≠(u_2), then u^-1_1· L_n ≠ u^-1_2 · L_n.
Hence, |_L_n| ≥ n + 2.
It is trivial to see that |_L_n| ≤ n + 2 since the index of _L_n is always not greater than the number of states in a deterministic ω-automaton accepting L_n.
Therefore, |_L_n| = n + 2.
Now we fix a word u and consider the index of ^u_L.
Let x ∈.
Obviously, if q_ = (u), then for all v ∈, we have u · x · v _L_n u but u · (x· v)^ω∉ L_n.
Hence, |^u_L | = 1.
Now let q_i = (u) with 0 ≤ i ≤ n.
For all v ∈, if u · x · v _L_n u holds, it must be the case that u · (x · v)^ω∈ L_n except that x · v =.
Hence, |^u_L| = 2.
It follows that the limit FDFA of L_n has exactly 2 × (n+1) + 1 + n+2 ∈Θ(n) states.
Now we consider the index of ^u_R for a fixed u ∈.
Similarly, when q_ = (u), |^u_R| = 1 since for all v ∈, we have u · x · v _L_n u u · (x· v)^ω∉ L_n hold.
Now we consider that q_k = (u) with 0 ≤ k ≤ n.
Let x_1, x_2 ∈.
First, assume that (u · x_1 ) ≠(u · x_2).
W.l.o.g., let q_j = (u · x_2) with 0 ≤ j ≤ n and let q_i = (u · x_1) with either i < j or q_i = q_.
We can easily construct a finite word v such that q_k = (u) =(u · x_2 · v), i.e., u · x_2 · v _L_n u, and u · (x_2 · v)^ω∈ L_n.
For example, we can let v = (j+1) ⋯ n · 0 ⋯ k if j < k ≤ n.
Hence, u · x_2 · v _L_n u u · (x_2 · v)^ω∈ L_n holds.
On the contrary, it is easy to see that q_ = (u · x_1 · v) = (q_i, j+1) since either j +1 > i + 1 or q_i = q_.
In other words, we have u · x_1 · v _L_n u u · (x_1 · v)^ω∉ L_n holds.
By definition of ^u_R, x_1 ^u_R x_2.
Hence, |^u_R| ≥ n + 2.
Next, we assume that (u · x_1) = (u · x_2).
For a word v ∈, it is easy to see that u · x_1 · v _L_n u ⟺ u · x_2 · v _L_n u.
Moreover, since u · x_1 · v _L_n u implies u · (x_1 · v)^ω∈ L_n, we thus have that u · x_1 · v _L_n u u · (x_1 · v)^ω∈ L_n ⟺ u · x_2 · v _L_n u u · (x_2 · v)^ω∈ L_n.
In other words, x_1 ^u_R x_2, which implies that |^u_R| ≤ n + 2.
Hence |^u_R| = n + 2 when (u) ≠ q_.
It follows that the recurrent FDFA of L_n has exactly (n+2) × (n + 1) + 1 + (n+2) ∈Θ(n^2) states.
For the syntactic FDFA, since ^u_S refines ^u_R <cit.>, then |^u_S| ≥ |^u_R| for all u ∈.
The upper bound is proved similarly as for recurrent FDFAs.
Therefore, the syntactic FDFA of L_n also has Θ(n^2) states.
This completes the proof of the lemma.
§ TRANSLATIONS FROM FDFAS TO NBAS
It is possible to transform a canonical FDFA of L to an equivalent NBA <cit.>.
In the following, we only briefly describe how we construct an NBA from an FDFA.
Angluin and Fisman proved in <cit.> that every saturated FDFA can be polynomially translated to an equivalent NBA [].
In fact, the requirement for being saturated is somewhat strong;
we only need to be almost saturated.
The translation given in <cit.> works as follows.
Let = (, ^q) be an almost saturated FDFA, where = (, Q, ι, ), and for each state q ∈ Q, there is a progress DFA ^q = (, Q_q, ι_q, _q, F_q).
Recall that (A)^s_f denotes the DFA A where s is the initial state and f is the sole final state.
By Definition <ref>, we have that = α∈: α is accepted by, where α is accepted if there is a decomposition (u, v) of α, such that (u) = (uv), and ^q(v) ∈ F_q where q = (u).
This implies that a word α∈ can be decomposed into two parts u and v, such that u is accepted by the DFA ^ι_q and v by the DFA (^q)^ι_q_f where f = ^q(v).
Hence, = ⋃_q∈ Q, f ∈ F_qM^ι_q· N_(q, f), where
N_(q, f) = v^ω∈: v ∈, q = ^q_q(v), v ∈(^q)^ι_q_q is the set of all infinite repetitions of the finite words v accepted by (^q)^ι_q_f.
It is hard to construct an NBA that accepts exactly N_(q, f).
However, it suffices to underapproximate N_(q, f) with the DFA P_(q, f) = ^q_q × (^q)^ι_q_q × (^q)^f_f, where × stands for the intersection product between DFAs.
On one hand, the DFA ^q_q × (^q)^ι_q_q makes sure that for a word v ∈^q_q × (^q)^ι_q_q and u ∈^ι_q, it follows that q = (u) = (uv).
On the other hand, (^q)^f_f ensures that v, v^k ∈(^q)^ι_q_f for all k ≥ 1.
One can construct an NBA [] = ⋃_q ∈ Q, f ∈ F_q^ι_q· P^ω_(q,f) to underapproximate <cit.>.
It is worth noting that we can easily construct a DBA that accepts P^ω_(q,f) from the DFA P_(q,f) by redirecting all incoming transitions of final states to the initial state and marking them as -transitions.
This way, we obtain an LDBA [] that recognizes , which allows for an easier determinization algorithm <cit.>.
This construction of LDBAs is much easier than the one proposed in <cit.> where the acceptance condition is defined on states, rather than transitions.
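The two ingredients used above, the intersection product of DFAs and the construction of a DBA for P^ω_(q,f) by redirecting transitions entering final states back to the initial state, can be sketched as follows. The dictionary encoding of automata and all names are ours and purely illustrative; each DFA is assumed complete over a common alphabet.

def product_dfa(dfas, alphabet):
    # Intersection product of complete DFAs, each given as (init, delta, finals).
    init = tuple(d[0] for d in dfas)
    delta, finals = {}, set()
    todo, seen = [init], {init}
    while todo:
        state = todo.pop()
        if all(q in d[2] for q, d in zip(state, dfas)):
            finals.add(state)
        for a in alphabet:
            succ = tuple(d[1][(q, a)] for q, d in zip(state, dfas))
            delta[(state, a)] = succ
            if succ not in seen:
                seen.add(succ)
                todo.append(succ)
    return init, delta, finals

def omega_power_dba(dfa):
    # Redirect every transition that enters a final state back to the initial
    # state and mark it accepting: the transition-based DBA for P^omega
    # described above.
    init, delta, finals = dfa
    new_delta, acc = {}, set()
    for (q, a), q2 in delta.items():
        if q2 in finals:
            new_delta[(q, a)] = init
            acc.add((q, a))
        else:
            new_delta[(q, a)] = q2
    return init, new_delta, acc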
Since the four types of canonical FDFAs are all saturated, Corollary <ref> immediately follows.
Let L be an ω-regular language.
Then its periodic, syntactic, recurrent and limit FDFAs are almost saturated.
Let n be the number of states in the leading DFA and k be the largest number of states of the progress DFAs of .
For each pair q ∈, f ∈ F_q, the constructed NBA/DBA accepting P_(q,f) has nk^2 states, and there are at most nk such pairs.
So, all four types of canonical FDFAs can be polynomially translated to equivalent NBAs/LDBAs with (n^2 k^3) states.
For the variant limit FDFA _B, there is at most one final state in each progress DFA.
So, the equivalent NBA for _B has (n^2 k^2) states.
§ PROOF OF LEMMA <REF>
*
The proof is inspired by and adapted from the proof of <cit.>.
We let = (, ) be a DBA of L, where =( , , q_0, ) is the TS of and is the set of accepting transitions.
We assume that is complete in the sense that for every state q ∈ and σ∈, we have that (q, σ) ∈.
For two different states q_1, q_2 ∈, we define an equivalence relation _ where q_1 _ q_2 if and only if ^q_1 = ^q_2 where ^q is the DBA obtained from by setting the initial state to q ∈.
Let U_q = u ∈: (q_0, u) = q.
Let U_[q]__ = ∪_p ∈ [q]__ U_p where [q]__ is the equivalence class of _ that q belongs to.
Clearly, U_[q]__ is an equivalence class u of defined with respect to L where u∈ U_[q]__.
Now consider the periodic finite words for each state q ∈.
Let V_q = x ∈: ∀ v ∈. if q q. (x· v)^ω∈^q.
That is, a word x belongs to V_q iff for every v ∈, if takes a round trip from q back to itself over x· v, the run must go through a -transition.
We first prove that V_q is regular.
We can construct the DFA D_q of V_q from the TS by first removing all -transitions in , resulting in a TS ', and then collecting all transitions (p, σ, q) into a set β such that p and q are in different SCCs of the reduced TS '.
We then define D_q = (∪⊤, , q, _D, F = ⊤) where
(1) for a state p ∈, σ∈ and q = δ(p, σ), _D(p, σ) = q if (p, σ, q) ∉∪β and otherwise _D(p, σ) = ⊤;
and (2) _D(⊤, σ) = ⊤ for all σ∈.
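This construction is easy to implement. The Python sketch below is only illustrative (the state/transition encoding, the helper names, and the use of the networkx library for SCC computation are our own assumptions): it removes the accepting transitions, computes the SCCs of the reduced transition system, collects the cross-SCC transitions β, and redirects every transition in Γ ∪ β to a sink final state ⊤.

```python
import networkx as nx

TOP = "TOP"  # the sink final state

def build_d_q(states, alphabet, delta, gamma, q):
    """Sketch of the DFA D_q recognizing V_q.

    delta: dict (state, letter) -> state, the transition system T.
    gamma: set of accepting transitions (p, letter, r) of the DBA.
    q:     the chosen initial state.
    """
    # (i) remove all accepting transitions, yielding the reduced TS T'.
    reduced = {(p, a): r for (p, a), r in delta.items() if (p, a, r) not in gamma}

    # (ii) compute the SCCs of T' and collect the transitions beta that
    #      jump between two different SCCs of the reduced TS.
    g = nx.DiGraph()
    g.add_nodes_from(states)
    g.add_edges_from((p, r) for (p, _a), r in reduced.items())
    scc_id = {s: i for i, comp in enumerate(nx.strongly_connected_components(g))
              for s in comp}
    beta = {(p, a, r) for (p, a), r in reduced.items() if scc_id[p] != scc_id[r]}

    # (iii) in D_q, any transition in gamma or beta goes to the sink final
    #       state TOP, which loops on every letter.
    delta_d = {}
    for (p, a), r in delta.items():
        delta_d[(p, a)] = TOP if ((p, a, r) in gamma or (p, a, r) in beta) else r
    for a in alphabet:
        delta_d[(TOP, a)] = TOP
    return {"init": q, "delta": delta_d, "finals": {TOP}}
```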
Next we prove that D_q = V_q.
First, let x ∈D_q and we want to prove that x ∈ V_q.
Obviously, the last transition of over x from q will be either a -transition or a transition jumping between two SCCs in the reduced '.
If it is a -transition, obviously, we have that for all v ∈, if q q, then it must visit a -transition.
Hence, (xv)^ω∈^q.
If it is a transition jumping between different SCCs, then either the run does not go back to q over xv or it must visit a -transition, since in the reduced TS ', they cannot reach each other.
Therefore, x ∈ V_q.
Now let x ∈ V_q and we want to prove that x ∈D_q.
Let p = (q, x) in .
If p and q lie in two different SCCs of , then it is impossible to find a v ∈ such that p q; otherwise, p and q would belong to the same SCC of .
In this case, there will be a transition between different SCCs along the way from q to p over xv, which of course also separates these two SCCs in the reduced TS '.
Thus, there will be a prefix of x accepted by D_q, so x is also accepted by D_q as ⊤ is a sink final state.
Now assume that p and q are in the same SCC of .
At state p, for each v ∈ such that q p q, we have that (x · v)^ω∈^q.
There must be some -transition visited along the way from q back to itself.
It follows that in the reduced TS ', it is impossible to reach p from q.
In other words, q and p are not in the same SCC of '.
So, the run from q to p over x must visit some transition jumping between two different SCCs.
Again, this means that there will be a prefix of x accepted by D_q.
So x will also be accepted by D_q.
Therefore, V_q is a regular language.
Now, for an equivalence class [q]__, we define V_[q]__ = ⋂_p ∈ [q]__ V_p.
So, V_[q]__ is also a regular language.
Let u be a word in U_[q]__.
Let V_u = x ∈: ∀ v ∈. u · (x · v) u u· (x· v)^ω∈ L.
Next, we prove that V_u≡ V_[q]__.
Let p = (, u).
Let x ∈ V_[q]__ and we want to prove that x ∈ V_u.
That is, we need to prove that for all v ∈, we have that u · (x · v) u u · (x · v)^ω∈ L.
First, if u · (x · v) u, then x ∈ V_u holds trivially.
Otherwise we have that u · x · v u, which implies that (q_0,u · (x · v)^k) _(q_0, u) for all k ≥ 0.
Thus, we will have a run ρ = q_0 q_1 ⋯ of over u · (xv)^ω where q_i ∈ [q]__ for all i > 0.
There must be some state q that occurs for an infinite set of indices I = i ∈: q = q_i.
For each q_i ∈ [q]__, we have that x ∈ V_q_i.
First, x ∈ V_p for all states p ∈ [q]__, so for every two pairs of integers i, j ∈ I with i < j, there must be a -transition along the way from q_i to q_j.
It follows that u · (x · v)^ω∈^q holds.
Hence, x ∈ V_u holds as well, since u · x · v u u · (x · v)^ω∈ L holds for all v ∈.
Now assume that x ∉ V_[q]__ and we want to prove that x ∉ V_u holds.
Assume by contradiction that x ∈ V_u.
Since x does not belong to V_[q]__, then there exists a state r ∈ [q]__ such that x ∉ V_r.
That is, there exists a word v∈ such that r r and (x · v)^ω∉^r.
Since p _ r, i.e., ^p = ^r, (x · v)^ω∉^p as well.
It then follows that u · (x · v) u and u · (x· v)^ω∉ L, which contradicts that x ∈ V_u.
Therefore, x ∉ V_u.
Hence, V_u = V_[q]__.
Now we show that V_u is an equivalence class of ^u_L as follows.
On one hand, for every two different words x_1, x_2 ∈ V_u, we have that x_1 ^u_L x_2, which is obvious by the definition of V_u.
On the other hand, it is easy to see that x' ^u_L x for all x' ∉ V_u and x ∈ V_u because there exists some v ∈ such that u · x' · v u but u· (x' · v)^ω∉ L.
Hence, V_u is indeed an equivalence class of ^u_L.
Obviously, V_u ⊆^u, as we can let v =, so for every word x ∈ V_u, we have that u · x u u · x^ω∈ L.
Let x = ^u(x) for a word x ∈ V_u.
It follows that x is a final state of ^u and we have [x]_^u_L = V_u.
Thus, we complete the proof of the lemma.
Now we need to prove that if L is DBA-recognizable, [_u]⊆ V_u.
Assume that there exists a word x ∈[_u]∖ V_u.
That is, u · x u u · x^ω∈ L holds but there exists some v ∈ such that u · x · v u and u · (x· v)^ω∉ L.
We prove by breaking it into two cases.
Case 1: u · x u and u · x^ω∈ L.
We need to prove that x ∈ V_p for every p ∈ [(, u)]__.
First, it is easy to see that x^ω∈^p.
Let q = (p, x).
There must be an accepting run of over u · x^ω in the form of q_1 ⋯ where q_i _ q_j for all 0< i<j.
As described above, there will be one state q that occurs infinitely often, and a -transition is visited between its two occurrence.
Therefore, x ∈ V_q.
Let q = (, u).
Then there must be a state q such that x ∈ V_q.
§.§ Proof of Lemma <ref>
We need to prove the second claim of Lemma <ref>, as it is an easy but not immediate result from <cit.>.
Let L be a DBA-recognizable language.
Let _L and _S be, respectively, the limit FDFA and syntatic FDFA of L, as constructed in Definition <ref>.
Let ([], ) be the TS and set of transitions defined in Definition <ref> from .
Then [_L] = [_S] = L.
First, we need to give following claims without proof.
Let L_m = w' ∈: m · w' ∈ L.
Claim 1. For every progress DFA _m of _L and a word w ∈ L_m, there must be a prefix of w accepted by _m. (The proof is similar to the one proving that L⊆_L in Theorem <ref>.)
Claim 2.
Let V(L_m) = x ∈: ∀ v ∈. m · x · v m m · (x · v)^ω.
Then we have _m = V(L_m).
This is a direct consequence of Lemma <ref>.
First, we prove that L ⊆[_L].
Let w ∈ L and let ρ = (m_0, q_0) ⋯ (m_k, q_k) ⋯ be the run of [_L] over w.
First, there must be a prefix, say u_0, of w that is accepted by _m_0 since w ∈ L_m_0.
Thus, we will see a -transition after reading u_0 and ρ will arrive some state, say (m_i_1, ι_m_i_1).
Let w = u_0 w_i_1.
It is easy to see that w_i_1∈ L_m_i_1.
Again, ρ will visit a -transition after reading a prefix of w_i_1, say u_1.
Repeating this procedure, we prove that ρ sees infinitely many -transitions.
Hence L ⊆[_L].
Now we prove that [_L]⊆ L.
We prove this direction by proving that [_L]⊆L.
Let w ∈[_L] and ρ be its corresponding accepting run.
Since w is a UP-word and [_L] is a DBA of finite states, then we must be able to find a decomposition (u, v) of w such that (m, ι_m) = [_L](u) = [_L](u · v), where ρ will visit a -transition whose destination is (m, ι_m) for infinitely many times.
It is easy to see that u · v u since [_L](u) = [_L](u · v) (thus also (u ) = (u · v)).
Moreover, it is easy to see that there must be a prefix of v, say v', such that v' ∈_m.
By definition of _m, it follows that v ∈_m.
Thus, (u, v) is accepted by _L.
By Theorem <ref>, we then have w ∈ L since _L = L.
Therefore, [_L]⊆ L.
Thus, we conclude that [_L] = L.
When we consider the syntactic FDFA _S of L, the proof for the direction of L ⊆[_S] is similar.
For the other direction, we prove as follows.
Let w ∈[_S] and ρ be its corresponding accepting run.
Since w is a UP-word and [_S] is a DBA of finite states, then we must be able to find a decomposition (u, v) of w such that (m, ι_m) = [_S](u) = [_S](u · v), where ρ will visit a -transition whose destination is (m, ι_m) for infinitely many times.
It is easy to see that u · v u since [_S](u) = [_S](u · v) (thus also (u ) = (u · v)).
Moreover, it is easy to see that there must be a prefix of v, say v', such that v' ∈_m.
By definition of _m, a word v' is accepted by _m if for all y ∈, m m · v' y m · (v'· y)^ω∈ L.
It follows that v is also accepted by _m.
Since m m · v u u · v, (u, v) is accepted by _S.
By Theorem <ref>, we then have w ∈ L since _S = L.
§ PROOF OF CLAIM 1 FOR LEMMA <REF>
Claim 1. For every progress DFA ^m in _B of a DBA-recognizable language L and a word w ∈ L_m, there must be a prefix of w accepted by ^m. (The proof is similar to the one proving that L⊆_B in Theorem <ref> and relies on the proof of Lemma <ref>.)
Let _B be the variant of limit FDFA of L as defined in Definition <ref>.
In the following, we fix a progress DFA ^m of _B and a word w ∈ L_m.
Recall that L_m = w ∈: m· w ∈ L.
Let = (, , ι, , ) be a DBA accepting L.
Let S = q ∈: ^q = L_m.
Clearly, w ∈^q for all q ∈ S.
In the proof of Lemma <ref> (cf. Appendix <ref>), we already proved that for each state q ∈, we can construct a regular language V_q = x ∈: ∀ v ∈. if q q, then (xv)^ω∈^q.
Obviously, V_q is a co-safety language for every q ∈.
For every q ∈ S, we will prove below that there exists a prefix of w in V_q.
Let k_q be the smallest integer such that ^q visits an accepting transition over w0⋯ k_q.
We know that k_q must exist because w ∈^q.
By definition of V_q, it follows that w0⋯ k_q∈ V_q, since for all v ∈, if q q, we must have (w0⋯ k_q· v)^ω∈^q (^q takes a round trip from q back to itself over w0⋯ k_q· v while visiting at least one accepting transition).
Since V_q is a co-safety language, w0⋯ k_q∈ V_q implies that w0⋯ k_q· v ∈ V_q for all v ∈.
Hence, w0⋯ j∈ V_q for all j ≥ k_q.
Let k = max_q ∈ S k_q.
It follows that w0⋯ k∈⋂_q ∈ S V_q.
We already proved in the proof of Lemma <ref> that V_m = x∈: ∀ v∈. m · x · v m m · (xv)^ω∈ L = ⋂_q ∈ S V_q and V_m is the sink final state of _L.
Then, V_m is also the sink final state of _B according to Definition <ref>.
It follows that w0⋯ k∈^m.
That is, w0⋯ k will be accepted by ^m.
Hence, we have completed the proof.
§ PROOF OF THEOREM <REF>
*
The first claim follows from the restriction to finite indices in the definition (we have seen that they exist, and that we can, e.g., choose limit RC).
To show _L⊆L,
assume that w ∈_L.
By Definition <ref>, a UP-word w is accepted by _L if there exists a decomposition (u, v) of w such that (u) = (u · v) (equivalently, u · v u) and v ∈^u where u = (u).
Here u is the representative word for the equivalence class [u]_.
Similarly, let v = ^u(v).
By Definition <ref>, we have u·vuu·v^ω∈ L holds as v is a final state of ^u.
Since v _uv (i.e., ^u(v) = ^u(v)), u· v uu· v^ω∈ L holds as well.
It follows that u · v u u · v^ω∈ L since u u and u · v u· v (equivalently, (u · v) = (u· v)).
Together with the assumption that (u · v) = (u) (i.e, u u · v), we then have that u · v^ω∈ L holds.
So, _L⊆L also holds.
To show that L⊆_L holds, let w ∈L.
For a UP-word w ∈ L, we can find a normalized decomposition (u, v) of w such that w = u· v^ω and u · v u (i.e., (u) = (u · v)), since the index of is finite (cf. <cit.> for more details).
Let u = (u) and v = ^u(v).
Our goal is to prove that v is a final state of ^u.
Since u u and u· v^ω∈ L, then u· v^ω∈ L holds.
Moreover, u· v u holds as well because u = (u) = (u)= (u· v) = (u · v).
(Recall that is deterministic.)
We now have that v ∈ C_u, so that C_u∩Σ^*/_^u_N is good (as u· v^ω∈ L).
We also have that v^u_N v, so that [v]_^u_N is accepting.
Hence, v is a final state, and (u, v) is therefore accepted by _L, i.e., w ∈_L.
It follows that L⊆_L.
Now we prove that _L is saturated.
Let w be a UP-word.
Let (u, v) and (x, y) be two normalized decompositions of w with respect to (or, equivalently, to ).
We have seen that (u, v) is accepted by _L iff u · v^ω = x · y^ω∈L, which is the case iff (x, y) is accepted by _L with the same argument.
§ ACTIVE LEARNING OF LIMIT FDFAS
First, there are two roles, namely the learner and an oracle in the active learning framework <cit.>.
The task of the learner is to learn an automaton representation of an unknown language L from the oracle.
The learner can ask two types of queries about L, which will be answered by the oracle.
A membership query is about whether a word w is in L;
an equivalence query is to ask whether a given automaton recognizes the language L.
If the oracle returns a positive answer to the equivalence query, then the learner has completed the task and outputs the correct automaton;
otherwise, the learner receives a counterexample, which is then used to refine the current hypothesis.
Angluin and Fisman proposed a learning framework in <cit.> to learn the classical three types of FDFAs.
We show that our limit FDFA can easily fit into this learning framework.
The learner L^ω is described in the following framework.
We refer to <cit.> for details about the learning framework.
We mainly use the notations and description from <cit.> in the following.
As usual, the framework makes use of the notion of observation tables.
An observation table is a tuple = (S, S, E, T) where S is a prefix-closed set of finite words, E is a set of experiments trying to distinguish the strings in S, and T: S × E → D stores in each entry T(s, e) an element of some domain D (the result of a membership query), where s ∈ S and e ∈ E.
For our limit FDFAs, D is simply the Boolean domain ⊤,.
We usually determine when two strings s_1, s_2 ∈ S
should be considered not equivalent depending on the RC we are using.
The component S⊆ S is the subset considered as representatives of the equivalence classes, i.e., the state names of the constructed DFA.
A table is said to be closed if S is prefix closed and for every s ∈S and σ∈, we
have sσ∈ S.
The procedure CloseTable uses two sub-procedures ENT and DFR to make a given observation closed.
Here ENT is used to fill in the
entries of the table by means of asking membership queries.
The procedure DFR is used to determine which row (words) of the table should be distinguished.
A learning procedure usually begins by creating an initial observation table via membership queries, closing the table with the ENT and DFR procedures, and then constructing a hypothesis automaton to pose an equivalence query.
The learner should be able to use the counterexample to the equivalence query to find new experiments for discovering new equivalence classes.
We now give the subprocedures for learning our limit FDFAs.
We let MQ(x, y) be the result of the membership query ω-word x· y^ω to the oracle.
The procedures ENT_1 and DFR_1 and Aut_1 are the same for all four types of FDFAs.
More precisely, for u, x, y ∈, ENT_1(u,(x, y)) = MQ(u · x, y);
for two finite row words u_1, u_2 ∈ S, DFR_1(u_1, u_2)= ⊤ iff there exists (x, y) ∈ E such that T(u_1, (x,y)) ≠ T(u_2, (x, y)).
That is, we can use x· y^ω to distinguish the finite words u_1 and u_2 according to .
The procedure Aut_1 is simply to construct the leading DFA without final states from , by Definition <ref>.
When learning our limit FDFAs, for u, x, v ∈, we define ENT^u_2(x,v) = ⊤ if (u x· v) ≠(u) or MQ(u, x · v) = ⊤ holds, corresponding to whether ux· v u u · (xv)^ω∈ L holds in Definition <ref>;
for two finite row words, x_1, x_2 ∈ S_u, DFR^u_2(x_1, x_2) returns true if there exists v∈ E such that T_u(x_1,v) ≠ T_u(x_2, v).
The procedure Aut_u(_u) not only constructs the TS but also set a state x as accepting if T_u(x, ) = ⊤.
Note that here T_u(x, v) stores the result of whether ((u · xv) = (u)) MQ(u, xv).
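As a concrete (and purely illustrative) reading of these subprocedures, the Python sketch below encodes them with our own assumed interfaces: words are strings, `mq(u, v)` is the membership-query oracle for the ω-word u·v^ω, and `leading_state(w)` returns the representative word of the leading-DFA state reached on w; none of these names come from the paper.

```python
def ent1(mq, u, exp):
    """Entry of the leading table: a single membership query MQ(u·x, y)."""
    x, y = exp
    return mq(u + x, y)                      # is (u·x)·y^omega in L ?

def dfr1(table, experiments, u1, u2):
    """Rows u1, u2 are distinguished iff some experiment separates them."""
    return any(table[(u1, e)] != table[(u2, e)] for e in experiments)

def ent2_limit(mq, leading_state, u, x, v):
    """Entry of the progress table at leading state u for the limit FDFA:
    true iff u·x·v leaves the class of u, or u·(x·v)^omega is in L
    (i.e., the implication 'u·x·v ~ u  implies  u·(x·v)^omega in L' holds)."""
    return leading_state(u + x + v) != leading_state(u) or mq(u, x + v)

def dfr2(table_u, experiments_u, x1, x2):
    """Progress-table analogue of dfr1."""
    return any(table_u[(x1, v)] != table_u[(x2, v)] for v in experiments_u)
```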
To be consistent with the notations in <cit.>, we also denote by ik the subsequence of starting at the i-th element and ending at the k-th element (inclusively) when i ≤ k, and the empty sequence when i > k.
However, the first element will be [1] instead of [0] in the main content.
Now we provide more details in learning our limit FDFAs and also prove that the learner L^ω will make progress in every iteration.
We assume that the algorithm has received a counterexample (u,v) to the current hypothesis, and we prove that our limit FDFA learner is able to make use of (u,v) to refine the current FDFA.
Let (x, y) be the normalized decomposition of the counterexample u· v^ω with respect to and let x = (x).
If MQ(x,y) ≠MQ(x,y), then we know that xx.
So, we can find an experiment as follows:
let n = |x| and for 1 ≤ i ≤ n, let s_i = (x1⋯ i) be state/word that arrives after reading the first i letters of x.
Recall that s_i is also the representative word of (x1⋯ i).
In particular, s_0 = () = and s_n = (x) = x.
Thus, we can construct the sequence, MQ(s_0 ·x1⋯ n, y), MQ(s_1 ·x2⋯ n, y), MQ(s_2 ·x3⋯ n, y),⋯, MQ(s_n ·xn+1⋯ n, y).
Obviously, this sequence has different results for the first and last elements since MQ(s_0 ·x1⋯ n, y) ≠MQ(s_n, y), where s_n = x.
Therefore, there must exist a smallest j∈ [1⋯ n] such that MQ(s_j-1·xj⋯ n, y) ≠MQ(s_j ·xj+1⋯ n, y).
It follows that we can use the experiment e = (x[j + 1 ⋯ n], y) to distinguish s_j-1·xj and s_j.
Otherwise if MQ(x,y) = MQ(x,y), we need to similarly refine current _x.
Similarly, we let n = |y| and s_i = _x(y[1⋯ i]).
We also consider a sequence (m_0, c_0), ⋯, (m_n, c_n) where m_i = ⊤ iff x = (x· s_i ·yi+1 ⋯ n) and c_i = ⊤ iff x· (s_i ·yi+1 ⋯ n)^ω∈ L.
First, we know that m_0 = ⊤ and m_n = ⊤ since (x, y) is a normalized decomposition of u· v^ω, i.e., x = (x) = (x· y) = (x· y).
Since (x, y) is a counterexample to current hypothesis ℋ, we know that either the normalized decomposition (x, y) is not accepted by ℋ and xy^ω∈ L or (x, y) is accepted by ℋ and xy^ω∉ L.
Therefore, one out of (m_0, c_0) and (m_n, c_n) must be (⊤, ⊤) and the other is not.
That is, either m_0 c_0 or m_n c_n holds.
There must be a smallest j ∈ [1⋯ n] such that m_j-1 c_j-1 and m_j c_j differ.
W.l.o.g., we let m_j-1 c_j-1 hold.
In this case, we can set the experiment e = yj+1 ⋯ n to distinguish s_j-1·yj and s_j since we have x = (x· s_j-1·yj⋯ n) x· (s_j-1·yj· n)^ω∈ L but x = (x· s_j·yj+1⋯ n) x· (s_j·yj+1⋯ n)^ω∈ L does not hold.
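Both refinement cases reduce to the same mechanical step: scan the sequence of results along the counterexample and return the first position where two consecutive results disagree. The Python sketch below illustrates this under our own assumptions (string-encoded words, an `mq` oracle for u·v^ω, and `leading_state` returning the representative word of a leading-DFA state); it is not the authors' implementation.

```python
def first_breakpoint(results):
    """Return the smallest j >= 1 with results[j-1] != results[j].
    The caller guarantees results[0] != results[-1]."""
    for j in range(1, len(results)):
        if results[j - 1] != results[j]:
            return j
    raise ValueError("first and last results must differ")

def refine_leading(mq, leading_state, x, y):
    """Derive a new experiment for the leading table from the normalized
    counterexample (x, y): s_i is the representative word of the state
    reached after the first i letters of x."""
    n = len(x)
    s = [leading_state(x[:i]) for i in range(n + 1)]        # s_0, ..., s_n
    results = [mq(s[i] + x[i:], y) for i in range(n + 1)]   # MQ(s_i · x[i+1..n], y)
    j = first_breakpoint(results)
    return (x[j:], y)    # experiment distinguishing s_{j-1}·x[j] and s_j
```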
We can see that every time we receive a counterexample from the oracle, either the leading DFA or the progress DFA _x gains at least one state.
Since the limit FDFA _L has a finite number of states, ℋ will, in the worst case, eventually become _L.
Limit FDFAs can be learned with membership and equivalence queries in time polynomial in the size of the canonical limit FDFAs.
|
http://arxiv.org/abs/2307.05545v2 | 20230708232436 | Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives | [
"Zhongliang Jiang",
"Septimiu E. Salcudean",
"Nassir Navab"
] | cs.RO | [
"cs.RO"
] |
Z. Jiang et al.
1]Zhongliang Jiangcor1
[cor1]Corresponding author at: Technische Universität München, Fakultät für Informatik – I16, Boltzmannstr. 3, 85748 Garching bei München
[email protected]
2]Septimiu E. Salcudean
1,3]Nassir Navab
[1]Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
[2]Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
[3]Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US System (RUSS) aims at overcoming this shortcoming by offering reproducibility, while also aiming at improving dexterity, and intelligent anatomy and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from the shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments, and clinical evaluations, respectively. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence present the key techniques, which enable intelligent patient and process-specific, motion and deformation-aware robotic image acquisition.
We also show that the research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process, the recovery of the “language of sonography". This side result of research on autonomous robotic US acquisitions could be considered as valuable and essential as the progress made in the robotic US examination itself.
This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying underlying techniques.
Additionally, we present the challenges that the scientific community needs to face in the coming years in order to achieve its ultimate goal of developing intelligent robotic sonographer colleagues. These colleagues are expected to be capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Ultrasound imaging, robotic ultrasound, telesonography, medical robotics, orientation optimization, path planning, visual servoing, compliant control, robotic US, robot learning, reinforcement learning, learning from demonstrations
§ INTRODUCTION
Today, medical imaging is one of the most crucial components of the entire healthcare industry, from wellness and screening to early diagnosis, treatment selection, and follow-up <cit.>.
Compared to the other three most common medical imaging modalities used in the current clinical practice [i.e., Radiography (X-ray), Computerized tomography (CT), and Magnetic resonance imaging (MRI)], Ultrasound (US) imaging has the advantage of being noninvasive, low-cost, portable, and free of ionizing radiation <cit.>.
These merits make it particularly suitable for some clinical needs, such as image-guided interventions <cit.> and obstetric applications <cit.>.
In October 2021, 0.79 million US examinations were performed in England, whereas there were 0.52 million CT scans and 0.31 million MRI scans <cit.>.
However, regarding traditional free-hand US examinations, substantial experience and visuo-tactile skills are required for achieving high-quality US images <cit.>. These factors limit the utilization of US in clinical applications requiring reliable biometric measurements or repeatable images for monitoring lesions. To obtain high-quality images, sonographers need to maintain the probe with proper pressure and adjust the probe orientation for optimal acoustic windows. To overcome intra- and inter-operator variations, the robotic US system (RUSS) has been gaining attention for two decades.
To illustrate the increasing interest in RUSS, the numbers of related peer-reviewed publications per year and accumulated over the years are depicted in Fig. <ref>. For individual years, the number of publications grew from 1,020 in 2001 to 15,500 in 2022.
The accumulated number of publications exponentially increased to 125,110 from 2001 to 2022.
This dramatic rise in interest can be attributed to three distinct communities: engineers, clinicians, and entrepreneurs <cit.>. The need from clinicians for high-quality images and efficient and easy-to-use RUSS stimulates the development of RUSS by engineers. Due to the considerable economic benefits, entrepreneurs are motivated to develop prototypes and market them [https://www.adechotech.com/],[https://en.mgi-tech.com/],[https://www.bkmedical.com/].
To assist in combating global pandemics (e.g., COVID-19 and Ebola), the demand for intelligent systems and robotics is boosted extensively in the fields of disease prevention, screening, diagnosis, treatment, home care, etc. <cit.>.
RUSS has been investigated to remotely or autonomously perform US tests for early detection and diagnosis <cit.>.
Deploying RUSS in hospitals enables the separation of patients and sonographers, hence lowering the risks of virus transmission between patients and medical staff.
This paper is motivated by the desire to assist both robotic US technicians and clinicians. For roboticists, we provide a comprehensive summary of enabling technologies (i.e., compliant force control and path planning) that are commonly needed for a variety of applications. In addition to the enabling technologies, the advanced solutions developed by integrating additional techniques (e.g., surface registration, visual servoing, and image segmentation) are summarized to demonstrate the potential of RUSS for addressing real-world challenges (e.g., tissue motion and deformation). Using these techniques, clinicians and technicians can further consider how RUSS can assist them in addressing particular clinical needs by sensibly integrating the different techniques together. This will help to bridge the gap between medical and technology research.
Prior to this survey, there were some reviews that summarized the development of RUSS <cit.>. Recently, Salcudean et al. discussed the roles robotics play in the acquisition of medical images, including US, endoscopy, X-ray, optical coherence tomography, and nuclear medicine <cit.>.
Specific to RUSS, Von Haxthausen et al. provided a systematic summary of recent publications between 2016 and 2020 <cit.>. Li et al. focused on the development of autonomous RUSS <cit.>. These two surveys categorize literature based on the level of automation; in contrast, this article emphasizes the connection between the potential clinical applications and enabling techniques. In addition, some novel concepts of application-oriented techniques (e.g., motion-aware <cit.> and deformation-aware <cit.> characteristics) have not been discussed before. However, they are important to further pave the way for applying RUSS in real scenarios. Due to the fast development of artificial intelligence (AI), learning-based RUSS is emerging to automatically perform specific US examinations <cit.>.
Li et al. also noted this trend and mentioned the AI-based RUSS as one of the future directions <cit.>.
Nevertheless, learning-based RUSS solutions have not been systematically discussed yet.
Therefore, a comprehensive survey article covering these new trends of RUSS will be helpful for roboticists to quickly and systematically learn the key knowledge of RUSS, as well as for clinicians to comprehend how the robot benefits their specific clinical needs.
Regarding the future development of RUSS, we discuss some open challenges and promising perspectives to inspire the research community and other stakeholders.
§ MATERIALS AND METHODS
§.§ Searching Policy
In order to provide an objective view of the development of robotic US imaging over the last two decades, we carried out an extensive search of RUSS on the Web of Science and google scholar. The search term was “(remote OR teleoperat*) AND (ultrasound OR US OR ultrasonography OR echography)", and “robot* AND (ultrasound OR US OR ultrasonography OR echography) AND (Imaging OR screening OR scan* OR acquisition* OR servoing)". To further narrow the most relevant and most impactful articles, the titles and abstracts were carefully reviewed to exclude the articles that were (a) not focusing on the medical domain, (b) not using robotic imaging adjustment or optimization, or (c) not employing traditional 2D/3D probes. This excludes papers using endocavitary probes <cit.> for cardiology and prostate applications. Finally, among similar articles, the most representative ones (the newest or most cited) were selected.
§.§ Technological Developments in RUSS
Skilled sonographers are often in short supply, particularly in rural areas.
To allow accurate adjustment of US acquisition parameters and address the unbalanced distribution of healthcare resources across nations and regions, teleoperated RUSS solutions have been developed over the past two decades (see Section <ref>). For such systems, the operations are fully carried out by experts via teleoperation techniques; thereby, remote experts take the responsibility of robotic acquisition.
To improve the level of autonomy of RUSS, a large number of RUSS solutions have been proposed for different applications in the past decades. To review the key characteristics of autonomous RUSS, we first summarize the existing articles in terms of enabling technologies, namely three key acquisition parameters: contact force (Section <ref>), probe orientation (Section <ref>), and scan path (Section <ref>). By precisely controlling these parameters, the accuracy and reproducibility of US imaging can be improved <cit.>.
In addition, more advanced techniques need to be developed to tackle additional practical complications occurring in clinical routines, e.g., patient movement and probe pressure-induced deformation.
In this article, we featured four advanced techniques: 1) motion-aware US imaging (Section <ref>), deformation-aware US imaging (Section <ref>), US visual servoing (Section <ref>), and elastography imaging (Section <ref>).
Sonographers often need to search for standard examination planes for biometric measurement and diagnosis. It is a time-consuming and non-repeatable process, even for experienced sonographers, due to the noisy US images and tissue motion. Benefiting from the development of artificial intelligence, and in particular deep learning, the area of medical image processing has achieved phenomenal success <cit.>. Learning-based image processing techniques lead to an accurate and robust understanding of US images, which further enables training RUSS to learn both manipulation skills and clinical knowledge directly from human sonographers. We summarize the most recent developments in learning-powered RUSS (Section <ref>), aiming to automatically search for specific anatomy or navigate a probe to visualize standard US planes. Finally, we discuss the open challenges and provide a few potential directions for future developments in Section <ref>. The important components of robotic US and the organizational structure of this article are depicted in Fig. <ref>.
By incorporating additional techniques into the fundamental enabling technologies, the level of technical complexity increases from Section <ref> to Section <ref>. In this way, we would like to highlight our strategy to inspire the community to achieve the ultimate goal of developing an intelligent robotic sonographer that can collaborate with human sonographers to improve diagnostic and intraoperative imaging in real scenarios.
§ TELEOPERATION IN RUSS
Teleoperation allows operators to remotely carry out certain tasks. Due to the development of networks, multimedia, and communication technologies in the past decades, teleoperation has become one of the most mature techniques for reforming modern medical procedures <cit.>. The main characteristic of teleoperation is that the robot's motion is controlled by operators. This is important for obtaining regulatory approval. The most successful representative is da Vinci from Intuitive Surgical, which has become the clinical standard of minimally invasive surgery for a wide range of surgical procedures <cit.>.
Regarding teleoperated RUSS, it has been seen as a solution for work-related musculoskeletal disorders of sonographers <cit.>. In addition, separating operators from patients reduces the risk of transmitting pandemics (e.g., Covid-19) <cit.>.
This section summarizes the technical and clinical contributions of remote RUSS, respectively.
§.§ Technical Developments
Teleoperated RUSS often consists of three individual components: 1) an expert console, 2) a patient-side manipulator (PSM) used to maneuver a US probe, and 3) a software control system mapping the movement made by experts to the PSM. The teleoperated RUSS allows sonographers to manually, unconstrainedly, and safely control the probe motion onto the patient via the PSM.
Teleoperated systems are also utilized on-site because robotic systems can overcome human limits in manipulation and perception by adding dexterity and precision. A common example is da Vinci, which is often employed on-site <cit.>.
§.§.§ Robotic Mechanism
In 1999, Salcudean et al. designed a six degree of freedom (DOF) lightweight mechanism with limited force capability for teleoperated RUSS <cit.>. Due to the need for a large orientation workspace, a parallelogram linkage was employed to decouple the orientation and translation in their final design, achieving the control resolution of 0.1 mm for translation and 0.09^∘ for rotation. Similarly, Lessard et al. designed the PSM in parallel structure in order to have enough workspace <cit.>.
Masuda et al. designed a 6-DOF mechanism consisting of gimbals, pantograph and slide mechanisms, which weighed 3.3 kg <cit.>.
To guarantee the safety of patients, there are four sensors symmetrically deployed around the probe to monitor real-time force.
In addition, a number of soft mechanisms were developed for force-sensitive applications, e.g., obstetric examinations, to strictly limit the maximum US probe pressure. Vilchis et al. proposed a cable-driven nonrigid remote robot <cit.>. This system has been used on 100 patients with abdominal aortic aneurysm (AAA) at a distance of 1125 km. Tsumura et al. designed a passive mechanism using springs for fetal examinations, which can prevent excessive contact force <cit.>. Besides, a portable and attachable robotic system has been designed by Ito et al. <cit.> [see Fig. <ref> (e)]. In the same direction, Vieyres et al. proposed a 4-DOF light mechanism with 3-DOF rotation and 1-DOF translation in probe centerline <cit.>. Then, they updated the design of the portable RUSS to allow all 6-DOF motions using serial mechanism <cit.>. The portable RUSS is easily used by paramedics, which makes it ideal for use in emergency medical circumstances. Nevertheless, owing to the need of the compact structure, portable RUSS typically have restricted working space.
Since mechanical design is beyond the scope of this survey's primary focus on imaging acquisition, we refer readers to two comprehensive review articles with mechanical designs for RUSS <cit.>.
To reduce the cost of RUSS, commercial robotic manipulators, e.g., Universal Robots (Universal Robots A/S, Denmark) and Franka Emika Panda (Franka Emika GmbH, Germany), are often used as PSMs <cit.> [see Fig. <ref> (b) and (c)].
It is noteworthy that another typical standard robotic arm KUKA LBR iiwa (KUKA Robotics GmbH, Germany), with integrated joint torque sensors, is also commonly employed as a PSM <cit.>.
HIPPOCRATE is a representative of teleoperated RUSS developed using a serial industrial robotic arm <cit.>.
§.§.§ Shared Autonomy in Teleoperated RUSS
To fully take advantage of the stability and accuracy of robotic techniques, Abolmaesumi et al. proposed a shared autonomy strategy between an expert and an image servo <cit.>. The in-plane three DOFs were controlled by visual servoing to automatically center the carotid artery in cross-sectional images, while the other three DOFs were teleoperated by an expert. In this case, the image servo can provide pixel-by-pixel control accuracy and further mitigate the negative influence of human tremor. To keep the tissue of interest always visible in the image and give more flexibility to the expert, Li et al. and Krupa et al. shared all four (in-plane and out-of-plane) DOFs of a lightweight body-mounted mechanism between the visual servoing algorithm and a human operator via teleoperation <cit.>. The visual servoing technique has also been widely used in autonomous RUSS to estimate and compensate for the motion of internal organs <cit.>, visualize and track the object of interest <cit.>, and improve the image quality by optimizing the acoustic windows <cit.>, etc. Please refer to Section <ref> for more details.
§.§.§ User Interface
Masuda et al. employed two joysticks to remotely control the three-dimensional rotation and translation of the PSM individually <cit.>.
Yet, this manner differs from how experts conduct conventional US examinations. To enhance the intuitiveness of the interaction, a dummy probe is frequently utilized to intuitively control PSM from the expert console <cit.>. A gyroscope was installed within the dummy probe so that it could track the motion of the expert <cit.>. To improve the accuracy of the motion estimation, some mature techniques, such as optical and electromagnetic tracking can be utilized.
As the use of a dummy probe allows experts to conduct US examinations as usual, RUSS can reduce training time and increase examination efficiency.
However, the lack of force feedback on the expert side may hinder clinical acceptance.
To tackle this problem, Martinelli et al. employed a haptic control system that rendered contact force in three dimensions <cit.>.
Conti et al. employed a commercial 6-DOF haptic device (Omega 6) to reflect the contact force in six dimensions <cit.> [see Fig. <ref> (a)].
Recently, Naceri et al. directly deployed two 7-DOF Franka Emika Panda <cit.>, one of which was used at expert console with force feedback, and the other one used at patient side to precisely reproduce the movements of the experts.
Benefiting from the development of virtual reality (VR) techniques, a VR simulator was designed as a new type of interface for teleoperated RUSS <cit.> [see Fig. <ref> (f)]. Compared to traditional joysticks or other haptic devices, an immersive experience can be achieved using VR simulators, which could intuitively visualize the remote scenes in 3D. The initial evaluation of a VR simulator has been performed by 12 experienced sonographers and the results suggest that the immersive simulator could be used for teleoperated RUSS <cit.>.
A deeper discussion of human-robot interaction studies is beyond the focus of this paper. To inspire further research incorporating novel human-machine interfaces to improve the efficiency, intuitiveness, and robustness of teleoperated RUSS, we refer readers to two comprehensive surveys on interface approaches <cit.>. Specific to medical applications, Abdelaal et al. provided a critical review of interfaces that have been used or tested in vivo <cit.>.
§.§ Clinical Feasibility Evaluation
Teleoperated RUSS can fully utilize the advanced knowledge of experts. Compared to autonomous RUSS, teleoperated RUSS is easier to certify for clinical use because all diagnostic decisions and the scan trajectory are made by experts. To achieve this objective, clinical studies have been performed using different teleoperated RUSS for a number of examinations. Clinical evaluations of existing teleoperated RUSS solutions are categorized according to their clinical applications in TABLE <ref>.
§.§.§ Abdominal Imaging
The abdomen is often examined using US images, which is one of the primary focuses of teleoperated RUSS. To validate the feasibility and diagnostic accuracy of such systems, Arbeille et al. evaluated a preliminary version of a teleoperated RUSS for general abdominal imaging on 20 patients <cit.>. The expert was in a room at some distance (20-50 km) from the patient's site. The time delay between experts and the PSM
was less than 0.1 s using ISDN (terrestrial) telephone lines and less than 0.5 s using satellite links. To evaluate the performance, the authors validated their approach on four different groups of organs. The results demonstrated that the expert could image the main views (longitudinal and transverse) of the liver, gallbladder, kidneys, aorta, pancreas, bladder, and uterus on the patient. Only the heart and spleen were not identified in two and four of the 20 cases, respectively. The experiments also showed that sonographers can master the teleoperated RUSS in less than 3 hours, while the examination time (27±7 min for three or four organs) was approximately 50% longer than the traditional US examination.
In a follow-up study, Arbeille et al. further compared the performance of robotized and conventional US examinations on 87 patients examined in the emergency department at the Tours University in France <cit.>. The results demonstrated that each organ (e.g., liver, gallbladder, pancreas, kidney) could be correctly imaged by the robotized system in 91% to 100% of cases compared with the conventional US examinations. In addition, the mean visualization score for the teleoperated RUSS was 87.4% for the abdomen, and no false diagnoses were made in this study <cit.>. In another clinical evaluation, Adams et al. also assessed the feasibility of performing adult abdominal US examinations using a remote RUSS on 18 patients at the University of Saskatchewan, Canada <cit.>. Telerobotic examinations were successful in 92% of the examinations on various abdominal organs (given the organs were sufficiently visualized on the conventional examination);
five pathological findings were identified on both modalities, three and two findings were only identified using conventional and telerobotic system, respectively. Furthermore, they reported that all participating patients were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>.
Martinelli et al. carried out a study on 58 patients with a focus on the aorta <cit.>. The examination results demonstrated that all aneurysm cases were correctly detected by both conventional scans and the teleoperated RUSS. Furthermore, the quantitative results show that the diameter of the patient's aorta can be accurately measured. The interobserver correlation coefficient was 0.98 and the difference in measurement was less than 4 mm in 96.3% cases. In addition, the examination duration (mean±SD) of the teleoperated system and traditional examinations are 17±8 min and 12±7 min, respectively. Finally, they also reported that the acceptability of patients was 84±18%, which is similar to the result in <cit.>.
§.§.§ Cardiovascular Imaging
Compared with general abdominal organs, cardiac examinations are considered more technically demanding procedures. Regarding echocardiography, the clinical needs include the visualization and evaluation of the four cardiac chambers, measurements of aortic flow, and the identification of mitral, tricuspid, or aortic valve leaks or aortic stenosis <cit.>. To successfully perform tele-echocardiography, the probe was held by a 3-DOF robotic arm providing three orthogonal rotations, and then, the robotic arm was fixed to a motorized plate for obtaining translational movements <cit.>. The results on 41 cardiac patients demonstrated that similar measurements can be achieved in most cases (93%-100%).
Among the 71 valve leaks or aortic stenosis patients, 61 (86%) were successfully detected using tele-echocardiography and there was no false-positive diagnosis reported.
Boman et al. also carried out a similar study on cardiovascular examination in Sweden <cit.>. The evaluations were carried out in three different stages. In stage 1, there were 27 patients in a different place than sonographers with a distance of 80 km. Regarding the other two stages, a total of 31 subjects were recruited in a place at 135 km from the experts. The results indicate that real-time echocardiographic examinations are possible <cit.>. Boman et al. compared the tele-echocardiography examination with the standard of care referral approach in terms of time and diagnosis <cit.>. 19 patients were randomized to remote consultation and imaging, and 19 to the standard of care consultation. The results demonstrated that the processing time was significantly reduced in the remote one (only 26.5 days vs 114 days for the standard one). Therefore, compared with the standard of care approach, patients were more satisfied with the remote consultation strategy, which offered an increased rapidity of diagnosis and the likelihood of receiving faster patient management <cit.>.
In 2007, Sekar et al. evaluated tele-echocardiography examinations for the diagnosis of congenital heart diseases in pediatric populations <cit.>. In this 3-year study, 102 pediatric telecardiology examinations were performed between a tertiary care cardiac center and a remote rural hospital located 193 km away. Pathology was ruled out in 50 children by tele-echocardiography. In addition, heart lesions were identified in 52 children, and 30 of them required surgery. By using teleoperation techniques, the total cost of such remote care can be kept under 90 USD, which makes it a viable option for most developing areas <cit.>. Sengupta et al. further validated the feasibility of long-distance (trans-Atlantic) telerobotic US scans for vascular examinations <cit.>. The results showed that the procedure to localize the remote probe along the short axis of the carotid artery took less than 60 s and an examination could successfully be conducted in 4 min. Avgousti et al. employed 4G wireless networks in order to reduce the time delay for live tele-echography <cit.>. However, it is also important to note that communication stability and potential signal interference may lead to uncertainty.
§.§.§ Obstetric Imaging
Obstetric imaging is also one of the most frequent applications of US examination in clinical practice. From the beginning phase to the birth of infants, more than five fetal examinations are carried out and such examinations are important to evaluate the health of both fetuses and pregnant women <cit.>. To assess the feasibility of teleoperating fetal US examinations in pregnant women, Arbeille et al. carried out a study on 29 pregnant women in an isolated hospital 1700 km away using both conventional and teleoperation examinations <cit.>. The results demonstrated that the biometric parameters, placental location, and amniotic fluid volume can be correctly measured in most cases (93.1%) using a teleoperated RUSS. Only in two cases, femur length could not be correctly measured. The mean duration of US examination of the remote examinations (18 min) was longer than that of conventional examinations (14 min).
Another study with a similar objective was presented by Adams et al. on 30 patients in Canada <cit.>. In this study, the results indicated that there was no statistically significant difference between teleoperated RUSS and conventional measurements of head circumference, biparietal diameter, or single deepest vertical pocket of amniotic fluid; however, there were slight differences in the measurements of abdominal circumference and femur length. In addition, 80% of the fetal structures could be sufficiently acquired by the telerobotic system (range, 57%–100% for each patient). Finally, a survey of participants showed that 92% of patients were willing to have another telerobotic examination in the future.
§.§.§ General Applications
Georgescu et al. reported the usability of a teleoperation system for general applications over one year <cit.>. In total 300 patients were involved: 138 supra-aortic vessels, 68 abdomen, 33 thyroid, 30 lower limb vein, 20 pelvis, 7 kidneys, 3 small parts, and 1 obstetrics. The reported average duration of a teleoperation examination was 24±5 min over all 300 examinations. In addition, the results showed that the use of teleoperation in the general medicine practice significantly reduced the waiting time (save several days) for patients, and similar information as conventional US examinations was achieved. It also contributed to saving costs for the healthcare system and facilitating earlier treatment of conditions, potentially leading to improved patient outcomes and less time in care facilities <cit.>. Most recently, a teleoperated RUSS was tested on 22 Covid-19 patients, and they concluded that teleoperated RUSS can be used to diagnose common abdominal, vascular, and superficial organ pathologies with acceptable accuracy <cit.>.
§ ENABLING TECHNOLOGIES FOR AUTONOMOUS RUSS
Recently, interest in autonomous RUSS has increased relative to teleoperated RUSS. Autonomous RUSS has the potential to achieve standardized and reproducible US acquisitions. RUSS solutions further release sonographers from burdensome manipulation tasks and allow them to focus on diagnosis, which requires deep anatomical and physiological knowledge.
The move of the research community toward autonomous RUSS has also raised novel scientific questions, which define important and exciting challenges. To develop autonomous RUSS, we first need to understand how human sonographers perform US scans. In this paper, we call this process the recovery of the “language of sonography". The community has not investigated this consciously, but this path can be traced throughout the analysis of the state of the art.
The adjustment of contact force, probe position and orientation for optimal image acquisition has often been the first focus. Then, it is also crucial to plan an appropriate path for covering the area of interest and to compensate for the potential motion and deformation of the target anatomy during imaging.
These points will be discussed explicitly in the following sections in more detail when we review some of the most relevant states of the art.
In this section, three fundamental techniques used in RUSS are elaborated: 1) compliant control used to apply and maintain a given contact force between US probe and patients, 2) orientation optimization to determine the appropriate probe orientation for a given scan (often orthogonal to the contacted surface) and 3) path planning to best localize and visualize the anatomy of interest.
§.§ Force Control Approaches
Due to the inherent characteristics of US imaging, a certain contact force between the US probe and human tissues is required to optimize acoustic coupling, thereby achieving high-quality US images. It is challenging for human operators to maintain a constant force during US scans. A varying force will result in non-homogeneously deformed US images. Thus, a dedicated force controller is needed to maintain the contact force during scans. Furthermore, such a controller is also crucial for guaranteeing the safety of patients by preventing excessive force.
Depending on the target tissues, the acceptable contact force is less than approximately 20 N <cit.>. Meanwhile, a small force (less than 1.2 N) is commonly considered to indicate incomplete contact with the skin <cit.>.
It is noteworthy that this subsection only summarizes the force control approaches (both software- and hardware-based) that have been used in developing RUSS. For a more general and comprehensive summary of force control, we refer readers to <cit.>.
§.§.§ Hybrid Force/Position Controller
Traditional hybrid force/position control approaches are implemented in two decoupled subspaces, applying a position control law and a force control law, respectively <cit.>. Both the force and position differences between current and desired values are fed into the robot dynamic model to update the manipulator's motion. To apply a constant contact force between a probe and subjects, Gilbertson et al. implemented a hybrid position/force controller for a 1-DOF hand-held RUSS <cit.>. In this study, they simplified the contact model as two interfaces (human-machine and probe-patient) using a set of masses, springs, and dampers. Thereby, the contact force can be dynamically related to the probe position and velocity by selecting proper interface parameters.
A similar hybrid position/force method based on an external 6-DOF force/torque (F/T) sensor was designed for 6-DOF RUSS <cit.>. Their approaches can automatically switch between velocity and force control modes according to the contact condition (free or contact space).
External hybrid force/position control is also often used in RUSS: the external controller first updates the position based on the force error; then, the positional error is regulated by an internal servo.
Pierrot et al. used a PI controller to maintain the contact force and a PID controller to continually run the joint position servo loop for a 7-DOF robotic US system <cit.>. Similarly, Ma et al. used a PID controller to actively compute the variation of the Cartesian position based on the force error, and then used a position controller (provided by the manufacturer) in the inner loop <cit.>. To limit the negative effect caused by potential force measurement errors, a low-pass filter and a moving filter were used to smooth the measured force. The authors claimed that the implementation of such an external force controller is simpler and can be adapted to any kind of robot <cit.>.
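As a minimal illustration of such an external (outer-loop) force controller, the Python sketch below applies a PID law to the force error and outputs a small position correction along the probe axis, which would then be handed to the robot's internal position servo. The gains, interfaces, and sign conventions are our own assumptions, not values from the cited works.

```python
class ExternalForceController:
    """Outer-loop PID on the contact-force error; the output is a position
    offset along the probe axis, to be executed by the internal position servo."""

    def __init__(self, kp=5e-4, ki=1e-4, kd=5e-5, max_step=1e-3):
        self.kp, self.ki, self.kd = kp, ki, kd   # illustrative gains [m/N]
        self.max_step = max_step                 # clamp each correction to 1 mm
        self.int_err = 0.0
        self.prev_err = 0.0

    def update(self, f_desired, f_measured, dt):
        err = f_desired - f_measured             # force error along the probe axis [N]
        self.int_err += err * dt
        d_err = (err - self.prev_err) / dt
        self.prev_err = err
        step = self.kp * err + self.ki * self.int_err + self.kd * d_err
        # positive error (too little contact force) -> move further along the probe axis
        return max(-self.max_step, min(self.max_step, step))

# hypothetical usage with an assumed robot/sensor interface:
#   dz = ctrl.update(f_desired=5.0, f_measured=ft_sensor.axial_force(), dt=0.01)
#   robot.translate_along_probe_axis(dz)   # tracked by the internal position servo
```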
§.§.§ Compliant Controller
Regarding the hybrid force/position controllers, a position controller is employed either in a subspace (for the traditional ones) or in the low-level servoing loop (for the external ones). Since the environment is unknown in real scenarios, the position controller may exert excessive force while moving to the computed positions. To ensure the safety of patients, two compliant control methods (impedance control and admittance control) are often used. The dynamic model of a compliant controller is described by Eq. (<ref>) <cit.>.
F + F_ext = K_m e + Dė + Më
where F is the applied force/torque in Cartesian space, e = (x_d - x_c) is the Cartesian position and orientation error between the current pose x_c and the target pose x_d, F_ext is the desired force/torque, K_m, D and M are the stiffness, damping and inertia matrices, respectively.
Based on Eq. (<ref>), compliant behavior can be achieved in all directions by assigning different K_m and D, which enables safe/soft interactions between RUSS and patients. Regarding Eq. (<ref>), there are two different interpretations, which refer to impedance control and admittance control, respectively. For the former, the pose error is taken as feedback and the computed force and torque are applied to achieve the expected force F_ext. For an admittance controller, on the other hand, the force F applied at the end-effector is measured as the input, while the output is the Cartesian movement. Since admittance control only requires the measurement of external force/torque, it is often used for low-cost robots without accurate joint torque sensors, e.g., Universal Robots <cit.>. In contrast, impedance control is more often used when robotic manipulators are equipped with accurate joint torque sensors, e.g., KUKA LBR iiwa <cit.> and Franka Emika Panda <cit.>.
When the stiffness of the environment diminishes, the performance of impedance control will decrease due to friction and unmodeled dynamics, while the performance of admittance control will increase <cit.>. Therefore, admittance control could achieve better performance on soft tissues, while impedance control could be more suitable for stiff tissues.
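To make the admittance interpretation concrete, the following Python sketch shows a 1-DOF discrete-time admittance controller along the probe axis: the measured force error drives a virtual mass-spring-damper system, and the integrated compliant displacement is added to the commanded pose. This is a generic textbook-style sketch under our own assumptions (parameter values, sign conventions, and Euler integration), not an implementation from any of the cited systems.

```python
class AdmittanceController1D:
    """Discrete-time admittance control along one Cartesian axis:
    M*e_dd + D*e_d + K*e = f_measured - f_desired; the integrated offset e
    is added to the nominal commanded position."""

    def __init__(self, m=1.0, d=60.0, k=300.0):
        self.m, self.d, self.k = m, d, k      # virtual inertia, damping, stiffness
        self.e = 0.0                          # compliant displacement [m]
        self.e_dot = 0.0                      # and its rate [m/s]

    def step(self, f_measured, f_desired, dt):
        f_err = f_measured - f_desired
        e_ddot = (f_err - self.d * self.e_dot - self.k * self.e) / self.m
        self.e_dot += e_ddot * dt             # explicit Euler integration
        self.e += self.e_dot * dt
        return self.e                         # offset applied to the commanded pose
```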
§.§.§ Spring-based Mechanism
Since some clinical applications, e.g., fetal examinations, are highly sensitive to the applied force during US examinations, Tsumura et al. proposed a spring-based mechanism to maintain the contact force and passively adjust the probe pose with respect to the constrained surface <cit.>. Compared to the aforementioned sensor-based controllers, the passive mechanism can apply a constant force quickly and safely, especially in unstructured environments. Wang et al. proposed a spring-loaded ball clutch to limit the maximum contact force <cit.>. In normal operation, the detent structure is in its engaged position with the ball restricted by a preloaded compressed spring. Once excessive force occurs, the ball comes out of the detent hole; thus, the clutch joint loses its ability to transmit torque <cit.>. With these designs, the maximum contact force of such mechanisms can be mechanically limited to 10 N <cit.> and 21.98±0.96 N <cit.>. Yet, such approaches cannot precisely and dynamically control the contact force.
To address this challenge, Housden et al. extended their work <cit.> by integrating a customized multi-axis F/T sensor to allow active adjustment of contact force <cit.>. The designed F/T sensor consists of two pieces with eight legs in total and the displacements of the legs were measured with eight optoelectronic sensors.
By using the measured force as feedback, this system can actively adjust the contact force toward the desired values <cit.>. Bao et al. designed a parallel, motor-spring-based end-effector to actively generate a certain force for US scanning <cit.>. The force is adjusted by changing the position of two sliders connected to a moving platform via springs. The symmetrical configuration constrains the contact force to act along the probe's centerline.
§.§.§ Others
Huang et al. attached two thin force sensors (IMS-Y-Z03, I-Motion Inc., China) to both sides of the front face of a linear probe <cit.>. Then, a simple rule was implemented to control the applied force: the probe moves downward 3.1 mm when the force is smaller than 1 N, the probe moves upward 3.1 mm when the force is larger than 8 N, and scans were only performed when both sensor measurements were in the range of [1, 8] N. Their team extended this work by replacing a 3-DOF linear stage with a 6-DOF robotic arm <cit.>. A robotic arm enables in-plane rotation; thereby, an updated rule was used to maintain a constant force: the probe moves downward 0.2 mm when both forces are smaller than the desired force, the probe moves upward 0.2 mm when both forces are larger than the desired one, and the probe rotates 0.2^∘ (in-plane) when the two forces differ. Compared with other force adjustment approaches, this method is easy to implement, while the handcrafted rule needs further refinement to adapt to inter-patient variations.
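The handcrafted rule above is simple enough to be written down directly. The sketch below restates it as code; the desired force value, the function interface, and the returned command format are hypothetical, while the step sizes are those reported in the text.

```python
def force_rule_step(f_left, f_right, f_des=4.0, step_mm=0.2, step_deg=0.2):
    """Return a (dz, rot) probe command from two tip force readings,
    following the handcrafted rule described above (values are illustrative)."""
    dz, rot = 0.0, 0.0
    if f_left < f_des and f_right < f_des:
        dz = -step_mm            # both forces too small: press down
    elif f_left > f_des and f_right > f_des:
        dz = +step_mm            # both forces too large: retract
    if abs(f_left - f_right) > 1e-6:
        # unbalanced contact: rotate in-plane towards the weaker side
        rot = step_deg if f_left < f_right else -step_deg
    return dz, rot               # mm along the probe axis, degrees in-plane
```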
§.§ Probe Orientation Optimization
The relative probe orientation with respect to the restricted surface is also a key factor dominating the image quality. For some applications like US imaging of bone, US probe orientation is often optimized to be orthogonal to the constraint surface <cit.>.
In certain applications, such as image-guided interventions, the US probe may need to be tilted away from the orthogonal direction in order to better visualize the targets and/or inserted instruments <cit.>. In this section, the articles discussing probe orientation adjustment are summarized under three subcategories: in-plane orientation, out-of-plane orientation, and full orientation optimization.
§.§.§ In-Plane Optimization
The in-plane orientation of a 2D probe represents the rotation around the short axis of the probe (see Fig. <ref>). In other words, in-plane motion only happens in the plane of US view.
In <cit.>, the in-plane rotation was optimized using the visual servoing technique to improve the general image quality. To quantitatively assess the image's quality and further use it as the input signal for servoing control, the US confidence map <cit.> was computed for individual images. The US confidence map provides a pixel-wise measure of signal loss based on a simplified model of wave propagation in tissues.
The computed confidence map is often used as a metric of image quality <cit.>. However, it is worth noting that the quality here refers only to the strength of the US signal. The best US images according to the confidence map may not be the best images expected by clinicians in examinations.
To obtain the US images leading to higher overall confidence values, the probe's orientation was often optimized to the orthogonal direction of the surface <cit.>.
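A confidence-driven in-plane adjustment can be realized with a simple greedy search, as in the hedged sketch below. The `get_image`, `rotate_in_plane`, and `mean_confidence` callables are placeholders for a RUSS image interface and a confidence-map computation; the cited works use visual-servoing control laws rather than this naive hill climbing.

```python
def optimize_in_plane(get_image, rotate_in_plane, mean_confidence,
                      step_deg=2.0, max_iters=20):
    """Greedy in-plane search that keeps rotating in the direction that
    increases the mean US confidence value (illustrative only)."""
    best = mean_confidence(get_image())
    direction = +1.0
    for _ in range(max_iters):
        rotate_in_plane(direction * step_deg)
        score = mean_confidence(get_image())
        if score > best:
            best = score                            # keep moving the same way
        else:
            rotate_in_plane(-direction * step_deg)  # undo the last step
            direction *= -1.0                       # try the other direction
            step_deg *= 0.5                         # and refine the step size
    return best
```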
In addition, Jiang et al. and Welleweerd et al. also employed US confidence map-based in-plane adjustments to improve sub-optimal contact conditions for limb and breast scans <cit.>, respectively.
Huang et al. adjusted in-plane orientation to balance the contact forces measured at two endpoints on the probe tip <cit.>.
Zettinig et al. proposed a 3D-to-3D volume registration to adapt the movement of target anatomy; then they further optimized the in-plane orientation to align the current needle guideline with the planned path on a preoperative CT or MR <cit.>.
§.§.§ Out-of-Plane Optimization
The out-of-plane motion is defined as the rotation around the probe's axial direction (see Fig. <ref>).
In <cit.>, the authors claimed that in-plane adjustment only marginally benefits axial aortic scans; therefore, they optimized the out-of-plane rotation to improve imaging quality in terms of overall US confidence values <cit.>. A fixed rotation angle interval was applied step by step. However, it is uncommon for existing articles to optimize only the out-of-plane orientation.
§.§.§ Full Orientation Optimization
To estimate the normal direction of a constrained surface, depth camera-based approaches are the most frequently used in the existing literature <cit.>. The advantage of these approaches is high computational efficiency, while the main limitation is the relatively low accuracy of the estimated normals.
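As an illustration of the depth-camera route, the sketch below estimates the surface normal at an intended contact point by principal component analysis of a local point-cloud neighborhood. This is a generic textbook formulation, not the exact pipeline of any cited work; the neighborhood radius and the camera-at-origin assumption are illustrative.

```python
import numpy as np

def estimate_normal(points, contact_point, radius=0.02):
    """Estimate the surface normal at `contact_point` (meters) from an Nx3
    point cloud: the eigenvector with the smallest eigenvalue of the local
    covariance matrix approximates the normal direction."""
    nbrs = points[np.linalg.norm(points - contact_point, axis=1) < radius]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered / max(len(nbrs) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # smallest-eigenvalue eigenvector
    # Orient the normal towards the camera origin (assumed at [0, 0, 0]).
    if np.dot(normal, -contact_point) < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)
```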
Recently, Ma et al. designed a probe holder with four laser distance sensors to actively adjust the probe's orientation to be normal to the surface <cit.>. The results demonstrated their adjustment can be computed in real-time.
In addition, Jiang et al. proposed a method to identify the normal direction of the restricted surface using contact force for out-of-plane optimization and US images for in-plane optimization <cit.> (see Fig. <ref>). The bone boundary was used to demonstrate the impact of probe orientation on imaging quality. In this study, Jiang et al. proposed a feature called the smooth derivative of contact force, which enabled accurate estimation of the out-of-plane orientation without requiring an expensive external F/T sensor <cit.>. To further improve the accuracy of the estimated normal direction, Jiang et al. deduced the underlying mechanical model based on the force measured during two orthogonal fan motions at a given contact point <cit.>. The upgraded method works for both convex and linear probes, and, due to its purely force-based nature, it is invariant to image noise. Yet, due to non-negligible deformations of soft tissue (e.g., breast), the force-based approaches are more suitable for orthopedic applications (e.g., limbs and back).
Besides, a number of studies optimized the probe's full orientation solely using US images. Welleweerd et al. proposed a framework for automatic breast scanning without requiring patient-specific models <cit.>. To achieve this, in-plane optimization was first carried out to ensure acoustic coupling between the probe and the examined breast. Once the mean confidence value <cit.> of the resulting image is inside the given range, the probe is moved tangentially to the breast. If the current mean confidence value falls outside the specified range, out-of-plane corrections are carried out to maintain constant confidence.
The mean error between the estimated normal directions and the ground truth at all points of the trajectory was 12.6^∘ out-of-plane and 4.3^∘ in-plane <cit.>. Chatelain et al. extended their preliminary work <cit.> from in-plane control of a 2D probe to full-orientation control of a 3D wobbler probe using the confidence map <cit.>. Recently, Osburg et al. used a Convolutional Neural Network (CNN) to compute the surface normal at the point of contact based on native 3D volumetric data <cit.>.
Instead of identifying the normal direction of constraint surfaces, Jiang et al. estimated the normal direction of a subcutaneous tubular structure directly based on the segmented vessels of the most recent images <cit.>. The vascular boundaries obtained at different positions contain the local geometrical information (radius and centerline) of the blood vessel; thus, the US probe can be oriented orthogonally to the estimated centerline of the local segment of the tubular structure.
§.§ Path Generation for Autonomous US Scanning
In order to accomplish US examinations, a proper path is essential to visualize the object or locate the lesion on human tissue, e.g., along a target blood vessel and covering a volume of interest. This section categorizes the existing path planning methods as 1) offline scan path generation methods and 2) online scan path generation methods.
§.§.§ Offline Scan Path Generation
To locate and evaluate the length and severity of stenosis for planning the treatment of peripheral arterial disease (PAD), Merouche et al. directly specified the scanning path by manually moving the robotic arm along the target artery <cit.>. To address potential visualization issues caused by small motions after the path planning procedure and to facilitate tracking of the artery during automatic scans, the probe's position was tuned to keep the cross-sectional lumen horizontally centered in the US view. Similarly, Jiang et al. manually drew a scan path on the surface of a vascular phantom and then extracted the path from RGB images <cit.>.
For autonomous path planning, scan trajectories can be determined on pre-scanned images (e.g., MRI and CT); the planned path is then transferred to the current setup by registering the live US or RGB-D image to the preoperative atlas.
Hennersperger et al. validated the feasibility of autonomously transferring a planned scan path from MRI to the current setup based on the registration between the MRI and 3D surface point clouds acquired by a Kinect camera (Microsoft Corporation, USA) <cit.>. Similarly, Langsch et al. computed the scanning trajectory of an aorta by registering 3D US volume to the patient's MRI <cit.>. However, due to the need for tomographic data (MRI or CT) of each patient, the advantage of these approaches is reduced in clinical practice. To further address this challenge, Virga et al. carried out non-rigid registration between the patient-specific 3D surface extract from a depth camera and a generic preoperative MRI template <cit.> [see Fig. <ref> (a)].
Specific to thorax examinations, Jiang et al. presented a skeleton graph-based non-rigid registration between the cartilage point clouds extracted from a tomographic template and US images of patients <cit.>. To further improve the registration accuracy, Jiang et al. introduced the dense skeleton graph to replace the manually designed key points of the skeleton <cit.> [see Fig. <ref> (b)].
Akbari et al. presented a complete US-based approach to find a proper trajectory for breast US imaging <cit.>. A manual prior scan is carried out in advance; then, the desired trajectory for the post scan is computed based on geometrical analysis of the target using the pre-scanned US images.
In addition, the scanning path is often planned solely on the surface extracted by an external camera directly <cit.>. Mustafa et al. extracted the patient's abdomen surface from an RGB image acquired using a web camera (2D) based on a preset HSV color filter; then, the position of the liver was estimated and a four-step acquisition protocol was applied <cit.>. Due to the lack of imaging depth information, the camera needed to be carefully configured anteriorly to subjects. Ma et al. used a Realsense SR305 RGB-D camera (Intel Corporation, USA) to extract the 3D surface data using a depth threshold and further planned the scanning path on the extracted 3D surface <cit.>.
Huang et al. extracted 2D skin surfaces of patients from an RGB image using the rule "Red > Green > Blue" <cit.> [see Fig. <ref> (c)]. They claimed this is more generic and robust than threshold-based approaches. Then, a "snake" trajectory was automatically generated to cover the area of interest. Suligoj et al. used the same logic to generate scan paths over a region manually annotated in an RGB image <cit.> [see Fig. <ref> (d)]. Recently, Ma et al. proposed a learning-based method to extract the human abdomen from a depth camera, and further divided the extracted region into four parts for autonomously generating scanning paths of the lung <cit.>.
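A "snake" (boustrophedon) trajectory over a segmented region can be generated with a few lines of code. The sketch below assumes a binary mask of the region of interest in image coordinates and an illustrative row spacing; mapping the waypoints onto the 3D skin surface is omitted.

```python
import numpy as np

def snake_path(mask, row_spacing=20):
    """Return an ordered list of (row, col) waypoints covering a binary mask
    with a boustrophedon ('snake') pattern: sweep every row_spacing-th row,
    alternating the sweep direction to avoid large jumps."""
    waypoints = []
    left_to_right = True
    for r in range(0, mask.shape[0], row_spacing):
        cols = np.flatnonzero(mask[r])
        if cols.size == 0:
            continue                       # no region of interest in this row
        start, end = cols.min(), cols.max()
        row_pts = [(r, start), (r, end)] if left_to_right else [(r, end), (r, start)]
        waypoints.extend(row_pts)
        left_to_right = not left_to_right  # reverse direction for the next row
    return waypoints
```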
The aforementioned path planning approaches for US scanning were directly determined on the patient's surface. However, the optimal coverage of an underlying volume of interest is not considered. To address this challenge, Graumann et al. proposed a method to automatically compute a suitable scanning path to cover a volume of interest easily selected in preoperative images <cit.>. Depending on the sizes of targeting volumes, one or multiple lines were automatically generated for full coverage. To automatically determine the optimal probe position on the skin to monitor the motion of the internal organ of interest, Bruder et al. computed patient-specific US image quality from a given CT scan <cit.>.
To further consider the full coverage of subcostal organs like liver and heart, Göbl et al. proposed a framework integrating both geometrical and physics-based constraints to estimate the best US scanning path with respect to the limited acoustic windows <cit.>. The poses maximizing the image quality (i.e., less acoustic attenuation) are finally selected. The results on both human and phantom data demonstrated that superior image quality was achieved using their method in comparison with a naive planning approach while maintaining the necessary coverage of the target.
§.§.§ Online Scan Path Generation
Although offline path planning is more often used in RUSS, some online planning approaches based on live US images have also been developed. Online approaches can generate more flexible trajectories than offline approaches, which can effectively guarantee the target's visibility inside the US view, even in the presence of unexpected motion. In <cit.>, Jiang et al. proposed a pipeline enabling a RUSS to automatically perform US screening of tubular structures based only on real-time US image feedback. The US probe was manually positioned on the tubular structure [see Fig. <ref> (e)]. Afterward, a U-Net was activated to continuously segment the cross-sectional vessel lumen from US images; thereby, a set of boundary point clouds was extracted and further used to estimate the geometry (centerline and radius) of the local artery section. To completely scan the whole artery, the US probe was moved forward in the direction of the estimated local vessel centerline in real-time. Similar work was accomplished by Huang et al. for automatic screening of the carotid artery based on US image feedback <cit.>. In <cit.>, Kim et al. employed a CNN as a classifier for real-time B-mode images to update the probe position for heart examinations. Since the next action is planned in real-time, online path planning facilitates robust tracking of the target during autonomous scans. To ensure scanning quality sufficient for clinical diagnosis, Jiang et al. first presented an online segmentation quality-aware method based on the Doppler signal <cit.>. Once the segmentation performance is considered low, the probe orientation is adjusted to enhance the Doppler signal and thereby improve the accuracy and completeness of the reconstructed 3D vessel. The significance of this study lies in its ability to inspire future research into quality-aware, closed-loop robotic scanning.
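The core online update in such vessel-following scans can be summarized as: fit the recent lumen centroids with a line and step the probe along it. The sketch below is a simplified illustration of that idea; the segmentation network, the transformation of centroids into the robot base frame, and the step size are all assumed to be available elsewhere.

```python
import numpy as np

def next_probe_step(recent_centroids_3d, step_size=0.005):
    """Estimate the local vessel direction from the 3D centroids of the last
    few segmented cross-sections (Nx3, in the robot base frame) and return
    the translation for the next probe step along the centerline."""
    pts = np.asarray(recent_centroids_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal direction of the centroid cloud approximates the local centerline.
    _, _, vT = np.linalg.svd(centered, full_matrices=False)
    direction = vT[0]
    # Keep moving forward, i.e., consistent with the previous scan direction.
    if np.dot(direction, pts[-1] - pts[0]) < 0:
        direction = -direction
    return step_size * direction / np.linalg.norm(direction)
```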
§ APPLICATION-ORIENTED ADVANCED TECHNOLOGIES FOR AUTONOMOUS RUSS
The aforementioned three enabling technologies (force control, orientation optimization, and scanning path generation) have been extensively studied in the existing literature.
However, the enabling technologies can only guarantee the quality of US acquisition in ideal cases. To further enable the implementation of extensive and autonomous RUSS screening programs, more advanced technologies tackling practical challenges in real scenarios should be considered. In this section, four distinctive techniques are discussed: 1) Motion-aware US imaging: during autonomous scanning of the anatomy of interest, potential body motion should be monitored and properly compensated to achieve an accurate and complete 3D anatomical geometry. 2) Deformation-aware US imaging: due to the inherent characteristics of US imaging, a certain contact force is necessary to properly visualize the underlying anatomy of interest; the resulting force-induced deformation hinders correct measurement of the target anatomy. 3) US visual servoing: providing pixel-level control to accurately move the probe toward desired cross-sectional images and guarantee the visibility of the object of interest in US views. 4) Elastography imaging: benefiting from accurate control over probe position and contact force between the probe and the tested object, the underlying tissue properties can be estimated for diagnosis using RUSS.
§.§ Motion-Aware US Imaging
§.§.§ Periodic Motion Detection and Compensation
In this context, periodic or quasi-periodic motions refer primarily to internal physiological motions such as respiration and pulsation. Owing to its non-invasive nature and real-time performance, US can be used to monitor internal tissue motion <cit.>. In free-hand mode, it is extremely difficult to compensate for such motions to achieve stable US images. To tackle this challenge, RUSS has been seen as a promising solution <cit.> because robots usually provide higher accuracy in terms of positioning and repeatability than humans <cit.>. Esteban et al. reported that RUSS can intrinsically compensate for small motions caused by breathing or human tremor using compliant force control <cit.>. Heunis et al. employed a 6-DOF Stewart platform to mimic the involuntary periodic movements that occur during scans, and further proposed a pipeline to create an effective scanning path that covers a surface while compensating for these motions and adhering to preset contact forces <cit.>. This movement was also compensated for by using force control. The results demonstrated that the reconstruction error of arteries was 1.9±0.3 mm in non-static scenarios. To actively compensate for respiration-induced motion in the liver or prostate, Ipsen et al. applied constant force control to accomplish continuous US scans in long-term monitoring <cit.>. Furthermore, visual servoing (Section <ref>) is another potential solution for compensating respiratory motion <cit.> and pulsation caused by the heartbeat <cit.>.
§.§.§ Non-Periodic Motion Detection and Compensation
Subjects are often repositioned by sonographers to better visualize the target during scans. Thus, the ability to compensate for a patient's non-periodic motion is crucial for the practical use of RUSS. A representative example of the influence caused by non-periodic motion of the imaged patient is shown in Fig. <ref>. The scanned results differ significantly depending on whether the same object is kept stationary or moved during scanning.
To obtain complete and accurate 3D US scans of a vascular phantom in the presence of rigid motion, Jiang et al. proposed a vision-based RUSS to actively compensate for such non-periodic motion <cit.>. In this study, five passive markers were rigidly attached to the imaged phantom surface and further used to monitor the potential target motion. Once the target is moved, the motion-aware RUSS automatically computes the transformation and updates the trajectory to recover the scanning from the breaking point. To eliminate the requirement for careful configuration of the passive markers in real scenarios, Jiang et al monitored the patient's motion based on the real-time segmentation of objects in RGB images and computed the compensation matrix using extracted surface point clouds acquired before and after the motion <cit.>. The results on a realistic arm phantom demonstrate the effectiveness of this marker-less compensation method. The advantages of robotic US (accuracy and stability) and free-hand US (flexibility) were combined by including active compensation for potential patient motion during scans. However, such systems only considered the rigid motion of objects. To further tackle non-rigid articulated joint motions, Jiang et al. proposed a vision-based framework, combining joint detection and non-rigid surface registration, to automatically update scanning trajectories from a template to individual volunteers with varying arm gestures <cit.>. The robustness and accuracy of the proposed system have been evaluated on multiple volunteers.
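When correspondences between surface (or marker) points captured before and after the motion are available, the rigid compensation transform can be computed with the standard SVD-based least-squares alignment, as in the sketch below. This is the generic Kabsch formulation, offered as an illustration rather than the exact implementation of the cited systems.

```python
import numpy as np

def rigid_transform(points_before, points_after):
    """Least-squares rigid transform (R, t) mapping points_before to
    points_after (both Nx3, with known correspondences), via SVD (Kabsch)."""
    P, Q = np.asarray(points_before, float), np.asarray(points_after, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t   # apply to the planned trajectory: x_new = R @ x_old + t
```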
§.§ Deformation-Aware US Imaging
Due to the probe-patient contact force, shape distortion of the visualized anatomy's geometry is inevitable, particularly for soft tissues such as superficial blood vessels (see Fig. <ref>). The force-induced deformation reduces the precision and repeatability of US images, and could thereby further limit diagnostic accuracy and consistency, especially for computer-assisted diagnosis.
To provide precise and reliable US images, pressure-induced image deformation needs to be properly corrected. Unlike human sonographers, robots/computers are not trained to make the diagnosis based on deformed images. Therefore, such corrections are particularly important for RUSS.
To achieve distortion-free images, Treece et al. combined non-rigid image-based registration with position sensing to correct pressure-induced deformations for free-hand 3D imaging <cit.>. Sun et al. computed 2D deformation fields based on estimated pixel displacements and the corresponding contact forces using polynomial regression models <cit.>. The pixel displacements were computed with flow techniques applied to radio-frequency (RF) data. Based on their experimental results, the parabolic polynomial regression model significantly outperforms the linear model. However, there was no significant performance difference between the 2nd-order and higher-order polynomial models. Burcher et al. built a model using the finite element method (FEM) to predict the deformation <cit.>. Nonetheless, the performance of the FEM-based approach heavily depends on prior knowledge of tissue properties, which is usually hard to measure in real scenarios. To overcome this challenge, Dahmani et al. employed a linear elastic model to approximate the personalized biomechanical properties of the involved tissues from the images <cit.>.
To alleviate the inter-variation of pressure-induced deformation between the acquired images along a scanning path, RUSS is often required to maintain a constant force during the screening.
To correct distorted images, Virga et al. built a 4th-order polynomial model to regress the pixel displacement with respect to contact force and further propagate the computed deformation field at sparse sampling points to the whole sweep direction <cit.>. The sampling points were selected manually on the first frame and this method took 186 s on average to compute a deformation field at one location. To speed up the process for compression-free 3D volume, Jiang et al. proposed a stiffness-based deformation correction approach, incorporating image pixel displacements, contact forces, and nonlinear tissue stiffness <cit.>. To obtain patient-specific stiffness models, robotic palpation was performed at sampling positions. Since tissue stiffness is the key factor dominating the deformation, the optimal deformation regression models at sampling positions can be propagated to other positions on the trajectory by interpolating the estimated local stiffness. However, the state of the art in the field of US image correction for force-induced deformation is not yet applicable to clinical practice. To further achieve this objective, a pixel-wise tissue properties estimator and anatomy-aware correction system should be developed to bridge the gap between different anatomy and different patients.
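A minimal sketch of the regression idea is given below: a polynomial maps contact force to an axial pixel displacement, which is then used to warp a compressed B-mode frame back towards its force-free appearance. The depth-uniform displacement, polynomial order, and interpolation scheme are simplifications; the cited methods estimate dense, depth-dependent deformation fields.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fit_displacement_model(forces, displacements, order=4):
    """Fit a polynomial mapping contact force (N) to axial pixel displacement."""
    return np.polyfit(forces, displacements, order)

def correct_frame(frame, force, coeffs):
    """Shift each image row back by the displacement predicted at `force`
    (a crude depth-uniform correction; real methods use per-pixel fields)."""
    disp = np.polyval(coeffs, force)
    rows, cols = np.indices(frame.shape, dtype=float)
    # Sample the compressed frame at the displaced axial coordinates.
    return map_coordinates(frame, [rows + disp, cols], order=1, mode='nearest')
```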
§.§ Ultrasound Visual Servoing
Understanding the interaction of sonographers with the patient and the US probe is of high importance when developing RUSS. In order to acquire B-mode images of the anatomy of interest, sonographers perform a rough positioning of the probe on the human body. Subsequently, the B-mode images are analyzed while adjusting the probe to obtain the final view with the anatomy of interest in focus. This dynamic image-based adjustment and exploration of the anatomy can be defined as "visual servoing". While this has been the subject of research in the last decades, we believe that the introduction of deep learning and the advances in reinforcement learning could allow the scientific community to further understand and solve this image-based optimization problem. Recent work published in this field <cit.> can be taken as an indicator that this will remain an interesting research topic in the coming years. In this section, we review some prior work on visual servoing that can be considered a development of the state of the art towards the goal of autonomous intelligent exploration of the particular anatomical and physiological views needed for examination and treatment.
§.§.§ Autonomous US Probe Guidance
To automatically rediscover a previously registered US imaging view, Bachta et al. developed an image-based visual servoing approach using boundary information and tested it in a simulator <cit.>. The target edge was retrieved using polynomial regression analysis, and the optimized coefficients were used as visual features to guide a robot-controlled probe to a desired image section. However, this method suffers from image noise and is limited to specific shapes. To overcome this challenge, Mebarki et al. employed image moments as visual features <cit.>, which are generic and robust with respect to measurement perturbations. To further achieve model-free servoing on unknown targets, they computed the interaction matrix in real-time using B-mode images <cit.>. Experiments on gelatin phantoms demonstrated promising results in terms of minimizing the visual-feature error; however, only local convergence can be guaranteed. In particular, in the case of a roughly symmetric object, similar geometric properties can be observed in different cross-sectional images. To overcome this shortcoming, Nadeau et al. defined a set of 2D features based on three-dimensional space using a motorized 3D probe <cit.>.
To accurately and actively navigate the probe to a given US plane using the visual servoing technique, Duflot et al. first used the subsampled shearlet coefficients as novel visual features as an input to the controller, instead of pure image signal information, i.e., point, lines, moments, etc. <cit.>. Since a set of noiseless and redundant features can be extracted using shearlet coefficients, promising performances of their approach in terms of accuracy, repeatability, and robustness could be achieved. A comprehensive comparison between shearlet-based and photometric-based visual servoing controllers was carried out in both simulator and physical phantom <cit.>.
§.§.§ Imaging Stabilization and Object Tracking
Visual servoing has also been used to track anatomies of interest and perform online compensation of the anatomy's motion to stabilize the real-time US images. Without compensation for potential motion such as breathing, the resulting images are degraded, leading to inaccuracies in estimating the precise location of the intervention's target tissues. US visual servoing technologies are developed to compute the corresponding probe adjustment against environmental dynamics based on real-time image feedback.
Nadeau et al. presented an intensity-based approach to maintain the view of an organ while compensating for the physiological motion of the patient <cit.>. Since the computation of image moments depends on object segmentation, image intensity values were directly used as visual features. In an extension work, they adapted their method for 3D probes and did first validations on soft animal tissues <cit.>. In 2015, Nadeau et al. applied a similar intensity-based visual servoing method to keep a target centered within a virtual imaging view in the context of intracardiac surgery <cit.>. Its effectiveness has been validated on in-vivo data. Besides cardiac applications, Nadeau et al. applied visual servoing to stabilize respiratory motion by compensating periodic disturbances with a predictive controller <cit.>.
In addition to intensity-based approaches, Krupa et al. employed US speckle information to estimate both in-plane and out-of-plane motion, thereby realizing the tracking of soft tissue movements in the US view <cit.>. Speckle is often considered noise; however, it conveys valuable information about the tissue of interest.
Speckle contains spatially coherent information between consecutive US images because it physically results from coherent reflections of small components in human tissue.
The preliminary experiments performed on a phantom with 2-DOF in-plane and out-of-plane motions demonstrated the potential of a speckle-based servoing approach. The validation for 6-DOF motion was further reported in <cit.>. To further consider soft tissues' deformation, Royer et al. developed a physics-based model to facilitate the accurate tracking of the target of interest in 3D US images <cit.>.
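The in-plane part of speckle tracking is essentially block matching between consecutive frames, as in the illustrative sketch below. Exhaustive normalized cross-correlation over a small search window is used here for clarity; the out-of-plane estimation from speckle decorrelation, which the cited works rely on, is not shown.

```python
import numpy as np

def track_block(prev_frame, curr_frame, top_left, size=32, search=8):
    """Estimate the in-plane displacement of a speckle block by exhaustive
    normalized cross-correlation within a small search window."""
    r0, c0 = top_left
    template = prev_frame[r0:r0 + size, c0:c0 + size].astype(float)
    template = (template - template.mean()) / (template.std() + 1e-9)
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + size > curr_frame.shape[0] or c + size > curr_frame.shape[1]:
                continue                       # candidate block falls outside the frame
            patch = curr_frame[r:r + size, c:c + size].astype(float)
            patch = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float(np.mean(template * patch))
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift   # (row, column) displacement of the tissue block
```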
§.§.§ Imaging Quality Optimization
Visual servoing techniques have also been investigated to improve imaging quality. Chatelain et al. first introduced the US confidence map as a new feature for visual servoing <cit.>. The authors claimed that the US imaging quality could be improved by optimizing the probe orientation to maximize the overall confidence value. An interesting extension using 3D probes instead of 2D probes has been reported in <cit.>. To evaluate the effect of the proposed method in real scenarios, in-vivo validations were performed on healthy volunteers. In addition, Patlan et al. directly employed elastography as the input of the visual servoing controller <cit.>. To optimize the quality of the resulting elastography, the probe was automatically actuated to image a soft tissue object from different views, and further fused to enhance the computed elastography.
§.§ Elastography Imaging
US elastography is a non-invasive technique aiming to estimate the mechanical properties (i.e., stiffness) of the underlying soft tissues. Elastography has gained great interest in applications such as differentiating tumors from healthy tissues (breast, prostate, liver, etc.) and guiding radiofrequency ablation surgeries <cit.>. Based on the underlying principles for producing US elastography, the currently available techniques can be mainly grouped into shear wave imaging and mechanical strain imaging. In shear wave imaging, the propagation speed of the shear wave is measured. In strain imaging, a mechanical compression is performed with the US probe on the patient's skin; the compression process can be accurately controlled and measured using robotic techniques. Thereby, accurate and standardized elastography can be expected.
Compared with shear wave imaging, strain imaging is more common for robotic elastography because it does not require specialized US hardware. Schneider et al. computed laparoscopic US elastography using an external vibrator positioned on the patient's skin, with the US probe remotely controlled by a da Vinci system (see Fig. <ref>) <cit.>. Patlan-Rosales et al. computed strain images using real-time radio-frequency (RF) signals to precisely locate subcutaneous tumors <cit.>. In this study, robot-assisted palpation was used instead of an external vibrator, and the resulting strain images were used to keep the object horizontally centered in the imaging view. To estimate the strain map of moving tissues, Patlan-Rosales et al. estimated and compensated the non-rigid motion using visual servoing on an abdominal phantom <cit.>. Instead of 2D elastography, the same team extended their work to create 3D elastography based on the pre- and post-compressed volumes obtained by a 3D US probe <cit.>.
To compute 3D elastography without using a 3D probe, Huang et al. designed a linear sliding track with a position sensor and a height-adjustable holder for conventional 2D probes <cit.>. In this study, the pre- and post-compression echo signals were recorded by manually adjusting the height of the probe holder. Then, paired frames of RF data from the pre- and post-compression sweeps were obtained by interpolation. 2D strain images were computed using the paired RF data; thereby, 3D strain maps were obtained by stacking the computed 2D strain images. To allow automatic acquisition of 3D strain maps, they replaced the linear track with a motorized 3-DOF linear stage <cit.> and a 6-DOF robotic arm <cit.>, respectively.
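For quasi-static strain imaging, the basic computation is to estimate the axial displacement between paired pre- and post-compression RF lines and take its gradient with depth. The sketch below shows this for a single RF line; the window sizes, the search range, and the absence of sub-sample interpolation are simplifications relative to the cited implementations.

```python
import numpy as np

def axial_strain(rf_pre, rf_post, win=64, hop=32, search=16):
    """Estimate axial displacement (in samples) per window along one RF line,
    then return strain as the gradient of displacement with depth."""
    displacements = []
    for start in range(0, len(rf_pre) - win - search, hop):
        ref = rf_pre[start:start + win]
        best, best_lag = -np.inf, 0
        for lag in range(0, search + 1):
            seg = rf_post[start + lag:start + lag + win]
            score = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if score > best:
                best, best_lag = score, lag     # keep the best-matching axial lag
        displacements.append(best_lag)
    # Strain is the axial derivative of displacement (window spacing = hop samples).
    return np.gradient(np.asarray(displacements, dtype=float), hop)
```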
§ AI-POWERED ROBOTIC US ACQUISITION
AI techniques have been seen as a promising way to further improve the automation level of RUSS by enhancing the understanding of US images and enabling the intuitive transfer of senior sonographers' advanced physiological knowledge. Such techniques have gained increasing attention recently. A diverse set of tasks, such as segmentation and classification of US images, has achieved great success. In the field of US image segmentation and classification, a large number of research articles have been published; more detailed techniques can be found in these survey articles <cit.>. In this article, we only focus on studies that aim to automatize and/or standardize US scanning using AI-based approaches. More specifically, these approaches try to automatically search for specific anatomical features or navigate a probe to display standard US planes needed for examinations. These tasks are challenging because RUSS must be able to properly interpret the current states (US image, contact force, probe pose) and the surrounding context.
Due to the potential tissue deformation and inconsistent acoustic artifacts of medical US images, guiding a probe to visualize target objects in desired planes is a highly sophisticated task that requires years of training <cit.>. However, such knowledge is not yet available to robots or computers. Due to their great advantage in feature representation over naive handcrafted features, CNNs have the potential to achieve superhuman performance in robustly and accurately locating standard planes in challenging US images. Chen et al. employed a deep CNN to identify the fetal abdominal standard plane from recorded US video <cit.>. Since data collection and manual labeling are time-consuming, a transfer learning strategy was used to guarantee the performance with limited training data. To achieve real-time performance, Baumgartner et al. proposed a deep CNN architecture called SonoNet to automatically detect 13 fetal standard planes as well as provide localization of the fetal structures using a bounding box <cit.>. SonoNet was trained in a weakly supervised mode with only image-level scan plane labels, which made it possible to prepare a large data set. These approaches aid sonographers in locating standard planes and can also improve efficiency, particularly for novices. Yet, these methods cannot automatically guide the probe towards target planes or anatomical structures of interest.
To enable the ability of RUSS to automatically perform US scans, Mylonas et al. proposed a learning-based approach allowing autonomous execution of US scanning according to expert demonstrations <cit.>. To achieve this objective, a Gaussian Mixture Modeling (GMM) was employed to model the demonstrations (trajectories) towards target objects in a probabilistic manner. However, since the real-time US image was not taken into consideration, all the demonstrations roughly started from the same initial position. This limitation severely impairs the usability of this method in real scenarios. To overcome this limitation and further provide real-time probe movement guidance for obtaining standard planes, Droste et al. proposed a behavioral cloning framework to mimic the process of sonographers searching for standard planes <cit.>. The proposed US-GuideNet consists of two fully connected layers and a gated recurrent unit (GRU) used to extract the sequential information. Due to hardware limitations, the predicted next movement of the probe and the estimated final standard planes only accounted for the rotational component, while the translational component remained unaccounted for.
The performance of the imitation-based approach heavily relies on the given demonstrations.
However, human US demonstrations are frequently and inherently sub-optimal, where the sonographers often need to adjust the probe around the desired pose to finally determine the optimal view. To tackle sub-optimal demonstrations, Burke et al. introduced a probabilistic temporal ranking model which assumes that the images shown in the later stage are more important than the earlier images <cit.>. The probabilistic ranking model can generate a large data set consisting of pair-wise images based on limited demonstrations; and then, a reward inference network was trained to assess individual B-mode images in self-supervised mode. To automatically navigate the probe to the viewpoint visualizing the mimicked tumor inside the gel phantom, an exploratory Bayesian optimization policy was employed. Nonetheless, due to safety concerns, it is impractical to interact richly with patients to gain enough experience to achieve the optimal searching policy in real scenarios.
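The pairwise ranking idea can be sketched as follows: within one demonstration, a frame sampled later is assumed to deserve a higher reward than an earlier one, and a small scoring network is trained with a Bradley-Terry style loss. The network architecture, tensor shapes, and sampling strategy below are placeholders, not the cited implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    """Tiny CNN that scores a single B-mode frame (1xHxW) with a scalar reward."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def ranking_loss(net, earlier, later):
    """Bradley-Terry style pairwise loss: frames sampled later in a
    demonstration should receive a higher reward than earlier ones."""
    return -F.logsigmoid(net(later) - net(earlier)).mean()

# Usage sketch with hypothetical tensors of shape (batch, 1, H, W):
# net = RewardNet(); loss = ranking_loss(net, earlier_frames, later_frames); loss.backward()
```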
The process of navigating a US probe to a proper viewpoint displaying standard planes can be seen as a series of probe motions performed in accordance with current observations (e.g., US images, force, probe pose). Therefore, the reinforcement learning (RL) architecture has been seen as a particularly suitable solution for this type of task. Milletari et al. presented an initial work using a deep Q-learning (DQN) architecture to guide sonographers towards the correct sonic window for cardiac examination <cit.>.
To avoid dynamic interaction with patients, a grid world environment was built over the chest using recorded videos to simulate the acquisition environment.
The results demonstrated that the DQN-based approach achieved better results (86.1% correct guidance) than a supervised approach (77.8% correct guidance) trained on the same data. A similar work also trained a DQN on a simulated 2D grid environment to navigate the probe towards the sacrum <cit.>. To automatically terminate the navigation process, a binary classifier (ResNet18) was employed to determine if the target object had been reached.
Since this method only considered 3-DOF translational movements, the probe orientation needs to be carefully initialized.
To further eliminate the requirement of manual initialization and automatically localize the paramedian sagittal oblique plane (a standard plane used in spine US examinations), Li et al. trained a DQN to predict potential actions in 5-DOF space (excluding translation along the probe centerline) <cit.>. In contrast to the grid world environment, this work built a simulator using 3D US volumes that cover the target anatomy of interest. The simulator can generate synthetic US images for arbitrary probe poses. The experimental results demonstrated that the method can repeatably navigate the probe to the target standard plane with an accuracy of 4.91 mm (translation) and 4.65^∘ (orientation) in the intra-patient setting. The authors then extended the work by adding a deep learning module (VGG-16) to recognize the target standard views from real-time US images <cit.>. Owing to the US simulator, a large amount of state-action data can be obtained for training the DQN agent. In addition, to learn a policy for guiding the probe to the position visualizing the kidney, Chen et al. used a supervised learning process to predict the next actions based on the current US image; an actor-critic RL module was developed to improve data utilization and enhance generalization <cit.>. Recently, to bridge the gap between simulation and real scenarios, Bi et al. proposed VesNet-RL to perform US standard plane (longitudinal view) searching for vascular structures <cit.>. To achieve high generalization capability, this study computed binary masks of real-time B-mode images and used the background-irrelevant binary masks as the input to train the RL agent.
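A skeletal sketch of such a DQN-style plane search is given below: a Q-network maps the current B-mode image to one value per discrete probe motion, and actions are chosen epsilon-greedily during training. The action set, network architecture, and state encoding are illustrative assumptions; reward design, replay buffer, and target network are omitted.

```python
import torch
import torch.nn as nn

ACTIONS = ["x+", "x-", "y+", "y-", "rot+", "rot-"]   # discrete probe motions (illustrative)

class QNet(nn.Module):
    """Maps the current B-mode image (1xHxW) to one Q-value per probe motion."""
    def __init__(self, n_actions=len(ACTIONS)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.q_head = nn.Linear(32, n_actions)

    def forward(self, image):
        return self.q_head(self.encoder(image))

def select_action(qnet, image, epsilon=0.1):
    """Epsilon-greedy action selection used during DQN training;
    `image` is a (1, 1, H, W) tensor of the current B-mode frame."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ACTIONS), (1,)).item()
    with torch.no_grad():
        return qnet(image).argmax(dim=1).item()
```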
Instead of performing validation in a simulated environment with a virtual probe, Ning et al. proposed a state representation model to encode the force and US images into the scene image space acquired using an RGB camera; an agent was then trained using the proximal policy optimization (PPO) method to control the robotic manipulator to automatically perform US scans in the real world <cit.>. Similarly, Deng et al. employed a deep neural network to encapsulate the scanning skill (the US images, the pose/position of the probe, and the contact force) into a high-dimensional multi-modal model; a policy was then trained based on expert demonstrations <cit.>. Due to the differences between the images in the given demonstrations and the real ones obtained during dynamic interactions, the trained model was further improved with guided explorations carried out by human operators. However, such manual correction is very expensive during clinical examinations and limits the efficiency of the RUSS.
Instead of directly learning a policy to search for standard planes, Jiang et al. proposed a machine learning framework (MI-GPSR) to understand the implicit physiological knowledge in expert demonstrations, implemented in a self-supervised fashion using a probability ranking approach <cit.>. To ensure the generalization capability of the method, the authors employed mutual information <cit.> to explicitly disentangle task-related features from domain features. The results on three types of phantoms [gel tubular structure, chicken heart, and lamb kidney phantoms (see Fig. <ref>)] demonstrated that MI-GPSR can properly predict the reward of individual US images from unseen demonstrations and unseen phantoms with the same anatomy <cit.>. Understanding and modeling the semantic reasoning and intention of expert sonographers can facilitate not only the development of autonomous intelligent RUSS but also the design of US education and training systems and advanced methods for grading and evaluating the performance of human and robotic sonography.
§ OPEN CHALLENGES AND FUTURE PERSPECTIVES
Medical robots have gained increased attention, in particular during the COVID-19 pandemic. The role of robotics in managing public health and infectious diseases has been widely discussed among the community <cit.>. In order to apply RUSS in clinical practice, there are still many open challenges, including both technological (e.g., deep understanding of the dynamic scene, and advanced sensing technologies) and nontechnological (e.g., regulatory affairs and financing) aspects <cit.>. Here, we highlight two aspects that will widely affect the roadmap for RUSS, particularly for clinical translation and commercialization: 1) the acceptance of RUSS, and 2) the ethical and legal issues. In addition, we discussed some promising research directions to inspire the future development of RUSS.
§.§ Acceptance by Patients and Clinicians
RUSS are designed to help both sonographers and patients in clinical practice. Besides demonstrating comparable or even better outcomes, the acceptance of RUSS is also important. Here, we first distinguish between the concepts of acceptance and trust. Trust is mostly based on how well RUSS performs in terms of technical performance, such as safety, clinical results, robustness, and repeatability. Yet, effective communication, friendly interaction, and mental development would also be necessary for improving acceptance.
Regarding teleoperated RUSS, Adams et al. indicated that all patients (18) were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. A similar result was reported by <cit.>, where 97% of 28 patients were willing to have another teleoperation scan. However, the number of participating patients in these two studies is limited. A more comprehensive survey about the patients' acceptance of RUSS should be carried out in the future. Furthermore, it is noteworthy that the clinicians' attitudes toward RUSS are still missing.
Teleoperation systems are controlled by human operators, and there are some very successful teleoperation surgical systems, e.g., da Vinci system. This fact contributes to the positive attitude of stakeholders for teleoperated RUSS <cit.>. In contrast, since autonomous RUSS are partially or fully out of the control of experts, non-negligible worries about safety arise, which stress both patients and experts during scans. Autonomous RUSS is still far from gaining widespread acceptance.
A standard evaluation metric that considers clinical practice will help improve the trustworthiness of emerging autonomous medical robotics <cit.>. Nagy et al. defined the concept of levels of Clinical Realism: 1) training tasks with rigid phantoms; 2) surgical tasks with simple phantoms; 3) surgical tasks with realistic phantoms, but little or no soft-tissue interaction; 4) surgical tasks with soft-tissue interaction; 5) surgical tasks with soft-tissue topology changes <cit.>.
To tackle the safety concerns of autonomous RUSS, robotic arms are often controlled in a compliant force mode, which results in soft interaction between the probe and the patient and prevents excessive contact force <cit.>. A force threshold is specified as a hard limit in the low-level controller to completely eliminate potential extreme situations. The RUSS stops instantly whenever the real-time force exceeds the predetermined threshold, which was 25 N in <cit.>. During robotic scans, two emergency buttons are often held by the clinical expert and the patient, respectively, to incorporate their observations into the safety-aware loop. Such a dedicated multi-layer safety-aware framework is beneficial for increasing the trust of clinicians and patients. By offering detailed audio explanations of the ongoing robotic US scan and engaging in straightforward interactions with patients, such as a "high five", Eilers et al. claimed that patient acceptance could be enhanced <cit.>.
To improve the acceptance of new medical devices in clinical practice, a robotic system with medical certification can speed up the process in both research and market-driven developments <cit.>. For example, the KUKA LBR iiwa has been widely used as the key component for developing RUSS <cit.>. Nevertheless, this comes with a high unit cost and may necessitate the assistance of an experienced engineer for image acquisition or routine system maintenance <cit.>. Since the fee will ultimately be paid by end-users, financial considerations become a practical factor hindering acceptance by patients. Most recently, Kosa et al. examined the role of robotics in intensive care medicine and its acceptability to patients and caregivers <cit.>. They concluded that robots directly handling patients are still immature and that close collaboration between roboticists and clinicians is required to advance robotics to benefit the ICU.
§.§ Ethical and Legal Issues
The ethical and legal issues regarding medical robotics are still not clearly defined, particularly for autonomous systems. The distribution of responsibility between experts and RUSS (or other surgical robotic systems) remains unclear. Clinical translation will also need regulatory acceptance.
In order to properly tackle the ethical, regulatory, and legal issues for RUSS, Yang et al. divided surgical robots into six subgroups in terms of autonomy levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy <cit.>. To further improve the concept of level of autonomy, Haidegger defined the term “situation awareness" as the operator’s perception, comprehension, and prediction of a robot’s behavior in its environment <cit.>. Then, “situation awareness" is used to distinguish the required level of human supervision.
Up to the time of writing this article, commercial surgical robots remain solidly at Level-0, while a very large number of high-autonomy surgical robotic systems are waiting for clinical translation <cit.>. Since the commercial surgical robot market is dominated by a few disproportionately large companies, they are in no rush to disrupt the status quo <cit.>.
Ethical and legal regulations are critical for clinical translation and further commercialization. The need for such a regulation has been highlighted by various senior researchers in multiple impactful publications recently <cit.>.
To establish such regulations for medical robots, O'Sullivan et al. defined three different responsibilities: (1) accountability: the capacity of a system to give an explanation for its actions; (2) liability: the legal liability for potential damages caused by a robot; and (3) culpability: whom to punish and how <cit.>. In addition, Vayena et al. discussed ethical and legal issues for digital health in terms of privacy and security, trust, and accountability <cit.>. As a large amount of data is often necessary for analysis, protecting privacy is undoubtedly important for avoiding misuse. Public trust is of paramount importance; Vayena et al. considered that the creation of a culture of trust will enable all stakeholders to benefit from the development of digital health <cit.>. Similarly, Yang et al. summarized five increasingly pressing topics in terms of ethics for robotics and AI <cit.>. Besides the aforementioned terms such as responsibility, this work further emphasized societal issues such as the potential influence on employment and human freedom. Due to the rapid evolution of medical robotics, a proper and comprehensive regulatory system will foster a prosperous market and gradually benefit all stakeholders.
To deal with the unsolved issues regarding the safety, transparency, and trustworthiness of modern medical devices with a certain level of autonomy, the two leading Standard Development Organizations International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created the first joint standardization document (IEC/TR 60601-4-1) regarding autonomy for technical developers <cit.>. Recently, Prestes et al. established the first global ontological standard for AI and robotics: IEEE 7007—Ontological Standard for Ethically Driven Robotics and Automation Systems <cit.>. For an in-depth review of the ongoing initiatives regarding regulations, we highly recommend that readers refer to these two articles <cit.>.
§.§ Future Perspectives
In addition to challenges, there are also numerous opportunities in the field of RUSS, particularly in light of the boom in both fundamental sensor development and advanced AI research. This survey will elaborate on future perspectives from these two aspects. By providing an understanding of the state of the art, we hope it can stimulate a number of exciting ideas. To clarify, the opportunities extend far beyond what are described below.
§.§.§ Fundamental Sensing Systems
Sensors are essential components of all intelligent systems. Generally, the development of new sensors has a substantial effect on existing systems in numerous ways. To achieve the ultimate goal of an autonomous RUSS, it is necessary to integrate multiple sensing systems mimicking the sophisticated human sensing system. By developing efficient data fusion techniques, redundancy, and multi-modality data would aid in achieving robust and reliable perception results. This applies not only to RUSS but to a vast array of autonomous systems.
Most recently, the concept and development of US patches have attracted considerable attention. Owing to their small size, stretchability, and the fact that they do not require US gel, they are highly desirable for continuous healthcare monitoring. Traditional US probes are rigid and bulky, making them unsuitable for imaging through nonplanar surfaces. To address this challenge, Hu et al. proposed a stretchable US probe that can conform to and detect nonplanar complex surfaces <cit.>. This soft probe consisted of a 10× 10 array of piezoelectric transducers covered by compliant silicone elastomers, and the results demonstrated that it could be stretched by more than 50%. Similarly, Wang et al. developed and tested a skin-conformal ultrasonic phased array to monitor physiological signals from tissues up to 14 cm deep <cit.>. To tackle the practical issue that image quality is highly affected by US gels, Wang et al. designed a bioadhesive US device consisting of a thin and rigid US probe robustly adhered to the skin via a couplant made of a soft, tough, anti-dehydrating, and bioadhesive hydrogel-elastomer hybrid <cit.>. Based on this device, continuous imaging of internal tissues over days becomes feasible. Most recently, Hu et al. demonstrated a wearable cardiac US imager providing direct cardiac function assessment <cit.>. Such fundamental changes in US probe design would open numerous opportunities for revolutionizing robot-assisted US imaging techniques.
§.§.§ Advanced AI-based RUSS
We consider AI-based RUSS to be another promising direction, where the core task is to improve the intelligence of RUSS. To this end, the research community first needs to improve the computer's understanding of dynamic environments through multi-modality signals. Only when the system possesses precise perception abilities can we further explore how to make proper decisions autonomously.
Several studies have demonstrated that AI-based approaches outperformed conventional image processing methods <cit.>.
Benefiting from the accurate segmentation of target objects (e.g., blood vessels), precise state representations will further facilitate the development of autonomous scanning <cit.> or autonomous exploration of standard US planes <cit.>.
In addition, advanced learning-based frameworks have the potential to transfer senior sonographers' physiological knowledge and experience to novices. Recent studies in the direction of learning from demonstrations <cit.> implicitly point to an attractive and influential new research topic: the recovery of the "language of sonography". Hands-on experience is very important and necessary for sonographers. Senior sonographers who can perform flawless US scans are still unable to directly parameterize and intuitively describe the acquisition requirements. However, US examinations are carried out based on their understanding of high-level physiological knowledge. Such knowledge is common among sonographers, although their comprehension may vary slightly with experience. The concept of recovering the "language of sonography" refers to this underlying understanding of high-level anatomical knowledge. We believe that efforts to extract the "language of sonography" from intuitive demonstrations with multiple signals, such as US images, RGB-D images, force information, probe movement, and gaze information, are as valuable and essential as the progress made in robotic sonography itself <cit.>.
§ DISCUSSION
Robotic technologies have demonstrated promising potential to extend the use of US imaging in the healthcare industry, such as remote examinations, and accurate and quantitative control of acquisition parameters.
Compared with conventional US examinations, although current RUSS cannot yet show superiority in terms of improving clinical outputs, a number of benefits have been demonstrated. From the perspective of patients, the waiting time for the healthcare intervention was significantly reduced from 144 to 26.5 days <cit.> and their cost was reduced as well <cit.>. As for sonographers, robots bring dexterity as well as reduce work-related musculoskeletal disorders <cit.>.
Additionally, RUSS has the potential to make a significant contribution in a variety of clinical scenarios, including performing trauma examinations in pre-hospital settings <cit.>, freeing up a clinician's hand during the intervention <cit.>, and performing routine PAD screening or monitoring without radiation <cit.>. When it comes to trauma scans, it is vital to spot life-threatening intracavitary hemorrhage as soon as possible because this will enable doctors to make prompt treatment decisions to save lives in emergency scenarios. RUSS could be used for reliable and accurate trauma scan identification in pre-hospital settings by fusing precise sensing devices with a cutting-edge learning-based semantic segmentation framework.
Continuing the current progress on RUSS requires a deep understanding of how its embedded technologies add value to healthcare practices. Intelligent robotic imaging systems could provide different benefits. On the one hand, they can democratize healthcare by making US examinations available at locations where patient populations do not currently have access to expert sonographers. On the other hand, to maximize the added value of RUSS, it is important to also focus on enabling new types of interventions or procedures that are impractical or impossible with traditional US examination, e.g., 3D or 4D visualization of the scanned anatomy that compensates for or embeds physiological breathing and heartbeat motion. Although there is not yet any fully autonomous system for US examinations, autonomy is one of the main objectives of the scientific community. Similar to surgical robotics, autonomous RUSS will be more challenging to commercialize <cit.>; however, because it offers images and visualization rather than decision making, cutting, and suturing of tissues, we believe autonomous RUSS is easier to certify and productize than autonomous surgical robotic solutions. On the other hand, compared to robotic X-ray and nuclear imaging, RUSS may be harder to certify because it requires direct interaction with patients. Researchers, therefore, need to continue their studies to guarantee the trust in and acceptance of autonomous RUSS by both doctors and patients.
The reported results on current autonomous RUSS are still far from mature, and such systems do not yet match, let alone outperform, clinicians.
Most existing research makes simplifying assumptions and often uses artificial setups for validation. For example, most US servoing approaches (Section <ref>) are validated on phantoms or in simulation rather than on human subjects, and the existing motion and deformation compensation approaches may not perform as well on patients within complex and dynamic clinical setups. Moving the field forward will require the community to address several open questions:
* Could advanced machine learning allow us to learn the “language of sonography" by observing expert sonographers?
* Could our RUSS systems understand the physics of imaging and its interaction with dynamic patient physiology?
* Could RUSS allow optimizing B-Mode, 3D and 4D image acquisition?
* Could advanced sensing and intelligent control allow for guaranteeing reproducibility and safety of scanning procedures?
* Could multimodal imaging and pretraining allow RUSS systems to observe and understand the specific anatomy and physiology of each patient?
* Could explainable AI enable RUSS systems to report and justify their actions and decisions to physicians?
* Could user-centric RUSS design allow smooth and friendly communication between sonographer robots, physician colleagues, and patients?
Answering each of these exciting and essential questions requires large multi-disciplinary scientific and engineering communities to gather, communicate and collaborate. The current review paper hopes to play a small role in gathering and highlighting some of the requirements and opening the path for the community to study and analyze the next crucial steps to take.
§ CONCLUSION
This survey has provided a brief picture of the rapidly evolving field of robot-assisted US imaging systems. Starting from the technical developments and clinical translations of various teleoperation systems in the first decade of the new millennium, in Section <ref>, the article summarizes the path the community took to arrive at its recent research focus on autonomous RUSS, in particular after the boom in machine learning and artificial intelligence over the last decade.
It is challenging to develop intelligent RUSS solutions, which require a number of advanced capabilities to understand dynamic environments, physics of US imaging, human anatomy and physiology, and thereby to tackle complex cases of diagnostic and interventional imaging.
To date, there are no such systems available. This paper aims at reviewing the state of the art and discussing the paths the community has taken or needs to take in the future.
The survey shows that recent progress has demonstrated that RUSS may improve image acquisition and 3D visualization, motion and deformation compensation, real-time geometric (including volumetric) measurements, and in particular their reproducibility. US handling habits vary among expert sonographers and cannot be well described using handcrafted features. We believe that in the near future, the development of advanced machine learning will allow the underlying “language of sonography” to be recovered from expert demonstrations. This would not only enable the development of autonomous, intelligent RUSS but also support the design of US education and training systems, and advanced methodologies for grading and evaluating the performance of human and robotic US examinations. In view of its speed of progress, RUSS has the potential to revolutionize not only US-based medical interventions themselves but also clinical screening, diagnosis, and robot-assisted surgery.
§ DECLARATION OF COMPETING INTEREST
The authors report no conflicts of interest.
§ ACKNOWLEDGMENTS
The authors would like to acknowledge the Editors and anonymous reviewers for their time, and implicit contributions to the improvement of the article's thoroughness, readability, and clarity.
|
http://arxiv.org/abs/2307.05979v1 | 20230712075112 | Transformers in Reinforcement Learning: A Survey | [
"Pranav Agarwal",
"Aamer Abdul Rahman",
"Pierre-Luc St-Charles",
"Simon J. D. Prince",
"Samira Ebrahimi Kahou"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
École de Technologie Supérieure/Mila
Montréal
Québec
Canada
[email protected]
École de Technologie Supérieure/Mila
Montréal
Québec
Canada
[email protected]
Mila, Applied ML Research Team
Montréal
Québec
Canada
[email protected]
University of Bath
Bath
United Kingdom
[email protected]
École de Technologie Supérieure/Mila/CIFAR
Montréal
Québec
Canada
[email protected]
Transformers have significantly impacted domains like natural language processing, computer vision, and robotics, where they improve performance compared to other neural networks. This survey explores how transformers are used in rl, where they are seen as a promising solution for addressing challenges such as unstable training, credit assignment, lack of interpretability, and partial observability. We begin by providing a brief domain overview of rl, followed by a discussion on the challenges of classical rl algorithms. Next, we delve into the properties of the transformer and its variants and discuss the characteristics that make them well-suited to address the challenges inherent in rl. We examine the application of transformers to various aspects of rl, including representation learning, transition and reward function modeling, and policy optimization. We also discuss recent research that aims to enhance the interpretability and efficiency of transformers in rl, using visualization techniques and efficient training strategies. Often, the transformer architecture must be tailored to the specific needs of a given application. We present a broad overview of how transformers have been adapted for several applications, including robotics, medicine, language modeling, cloud computing, and combinatorial optimization. We conclude by discussing the limitations of using transformers in rl and assess their potential for catalyzing future breakthroughs in this field.
<ccs2012>
<concept>
<concept_id>10010147.10010257.10010258.10010261</concept_id>
<concept_desc>Computing methodologies Reinforcement learning</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002944.10011122.10002945</concept_id>
<concept_desc>General and reference Surveys and overviews</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010257.10010293.10010294</concept_id>
<concept_desc>Computing methodologies Neural networks</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Computing methodologies Reinforcement learning
[500]General and reference Surveys and overviews
[500]Computing methodologies Neural networks
Transformers in Reinforcement Learning: A Survey
Samira Ebrahimi Kahou
August 12, 2023
================================================
§ INTRODUCTION
rl is a learning paradigm that enables sequential decision-making by learning from feedback obtained through trial and error. It is usually formalized in terms of an mdp, which provides a mathematical framework for modeling the interaction between an agent and its environment.
Most rl algorithms optimize the agent's policy to select actions that maximize the expected cumulative reward. In deep rl, neural networks are used as function approximators for mapping the current state of the environment to the next action and for estimating future returns. This approach is beneficial when dealing with large or continuous state spaces that make tabular methods computationally expensive <cit.> and has been successful in challenging applications <cit.>. However, standard neural network architectures like cnn and rnn struggle with long-standing problems in rl. These problems include partial observability <cit.>, inability to handle high-dimensional state and action spaces <cit.>, and difficulty in handling long-term dependencies <cit.>.
Partial observability is a challenge in rl <cit.>; in the absence of complete information, the agent may be unable to make optimal decisions. A typical way to address this problem is to integrate the agent's input <cit.> over time using cnn and rnn. However, rnn tend to forget information <cit.>, while cnn are limited in the number of past time steps they can process <cit.>. Various strategies have been proposed to overcome this limitation, including gating mechanisms, gradient clipping, non-saturating activation functions, and manipulating gradient propagation paths <cit.>.
Sometimes different data modalities, such as text, audio, and images are combined to provide additional information to the agent <cit.>. However, integrating encoders for different modalities increases the model's architectural complexity. With cnn and rnn, it is also difficult to determine which past actions contributed to current rewards <cit.>. This is known as the credit assignment problem. These challenges and others, such as training instability, limit the scope of most rl applications to unrealistic virtual environments.
The transformer was first introduced in 2017 <cit.> and has rapidly impacted the field of deep learning <cit.>, improving the state-of-the-art in nlp and cv tasks <cit.>. The key idea behind this neural network architecture is to use a self-attention mechanism to capture long-range relationships within the data. This ability to model large-scale context across sequences initially made transformers well-suited for machine translation tasks. Transformers have since been adapted to tackle more complex tasks like image segmentation <cit.>, visual question answering <cit.>, and speech recognition <cit.>.
This document surveys the use of transformers in rl. We begin by providing a concise overview of rl (Sec. <ref>) and transformers (Sec. <ref>) that is accessible to readers with a general background in machine learning. We highlight challenges that classical rl approaches face and how transformers can help deal with these challenges (Sec. <ref> and <ref>). Transformers can be applied to rl in different ways (Fig. <ref>). We discuss how they can be used to learn representations (Sec. <ref>), model transition functions (Sec. <ref>), learn reward functions (Sec. <ref>) and learn policies (Sec. <ref>). In Sec. <ref> and Sec. <ref>, we discuss different training and interpretation strategies, and in Sec. <ref>, we provide an overview of rl applications that use transformers, including robotics, medicine, language modeling, edge-cloud computing, combinatorial optimization, environmental sciences, scheduling, trading, and hyper-parameter optimization. Finally, we discuss limitations and open questions for future research (Sec. <ref>). With this work, we aim to inspire further research and facilitate the development of rl approaches for real-world applications.
§ BACKGROUND
This section introduces the fundamental concepts of rl and discusses associated challenges. We also provide an overview of transformers and their potential advantages in rl.
§.§ Reinforcement Learning
rl is a reward-based learning paradigm that enables agents to learn from their experience and improve their performance over time. This is commonly formulated in terms of mdp, in which the agent chooses an action (𝐚∈𝐀) based on the state (𝐬∈𝐒) of the environment and receives feedback in the form of rewards (r ∈ℝ). The mdp framework assumes that the environment satisfies the Markov property, which asserts that the next state is independent of the past states, given the present state and the most recent action. This allows the agent to make decisions based only on the current state without tracking the history of previous states and actions. After taking action, the agent receives feedback from the environment in the form of a reward. The state is updated to a new value (𝐬^'∈𝐒), as determined by the transition function p(𝐬'|𝐬,𝐚) describing how the environment responds to the agent's action. The overall goal of rl is to learn how to solve a multi-step problem by maximizing the return g, which is the total discounted reward:
g = ∑_t=0^Tγ^t · r_t,
where r_t is the reward received at each time step t, T is the total number of time steps in the given episode, and γ∈[0,1] is a discount factor, affecting the importance given to immediate rewards versus future rewards in a given episode.
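As a concrete illustration, the return can be computed from a recorded episode as in the minimal Python sketch below (the reward list and discount factor are made up for this example):

    def discounted_return(rewards, gamma=0.99):
        """Compute g = sum_t gamma^t * r_t for one episode."""
        g = 0.0
        for t, r in enumerate(rewards):
            g += (gamma ** t) * r
        return g

    # Example: a 5-step episode with a single sparse reward at the end.
    print(discounted_return([0.0, 0.0, 0.0, 0.0, 1.0], gamma=0.9))  # 0.9**4 = 0.6561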
There are multiple categories of rl algorithms, each with advantages and disadvantages <cit.>. Choosing the appropriate category relies on factors such as the problem's complexity, the size of the state and action spaces, and available computational resources. We now briefly review these categories.
Model-Based rl. In model-based rl, a transition function is learned using transitions (𝐬, 𝐚, r, 𝐬') generated by environmental interaction. This transition function models the probability distribution p(𝐬', r |𝐬, 𝐚) over the subsequent state 𝐬' and reward r given the current state 𝐬 and action 𝐚. By leveraging this learned model, the agent plans and selects actions that maximize the expected return g. However, this approach can be computationally expensive and may suffer from inaccuracies in the learned model, leading to sub-optimal performance <cit.>. Despite these drawbacks, this approach is sample efficient. In other words, it can achieve good performance using relatively few interactions with the environment compared to other methods <cit.>.
Model-Free rl. In model-free rl, optimal actions are learned by direct interaction with the environment. Methods from this category cannot model state-transition dynamics to plan actions, which can result in slower convergence and lower sample efficiency compared to model-based rl <cit.>.
However, model-free rl is more adaptable to environmental changes, making it more robust in complex or noisy environments <cit.>. Additionally, it is less computationally expensive as it does not need to learn a model of the environment.
Further, rl methods can be categorized as on-policy, off-policy, and offline rl depending on how the data collection relates to the policy being learned.
On-Policy rl. On-policy rl approaches use the current policy to gather transitions for updating the value function. For instance in SARSA <cit.>, the current policy is used to collect a tuple (𝐬, 𝐚, r, 𝐬', 𝐚'), consisting of the current state-action pair 𝐬, 𝐚, the immediate reward r, and the next state-action pair 𝐬', 𝐚'. This is then used for estimating the return of the current state-action pair Q_target(𝐬, 𝐚) and the next state-action pair Q_target(𝐬', 𝐚'). The value function Q is updated using the following td learning rule:
Q_target(𝐬,𝐚) ← Q_target(𝐬,𝐚) + α·[ r + γ· Q_target(𝐬',𝐚') - Q_target(𝐬,𝐚) ],
where α∈(0,1] represents the learning rate.
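A minimal tabular sketch of this update in Python follows (the grid-world size, action count, and transition values are illustrative assumptions, not from any cited benchmark):

    import numpy as np

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
        """One SARSA step: move Q(s, a) towards r + gamma * Q(s', a')."""
        td_target = r + gamma * Q[s_next, a_next]
        Q[s, a] += alpha * (td_target - Q[s, a])
        return Q

    Q = np.zeros((16, 4))                       # e.g. 16 states, 4 actions
    Q = sarsa_update(Q, s=0, a=1, r=1.0, s_next=4, a_next=2)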
Although on-policy methods are comparatively easy to implement, they have several drawbacks. They tend to be sample inefficient <cit.>, requiring significant interaction with the environment to achieve good performance. Additionally, they can be susceptible to policy oscillation and instability <cit.>, and they lack the flexibility to explore, slowing down the learning process and resulting in sub-optimal policies.
Off-Policy rl. Off-policy rl strategies use two policies — a behavior policy and a target policy. The behavior policy collects data that is subsequently used to estimate the expected return of an action under the given target policy. Since the behavior policy is used for data collection, it can explore different states and actions without affecting the current target policy. Thus, off-policy methods are well-suited for understanding the value of a given state and action. Usually, the target policy is updated using the behavior policy via importance sampling (IS). This adjusts the value estimates of the target policy based on the IS ratio between the behavior b(𝐚 | 𝐬) and target π(𝐚 | 𝐬) policies:
IS = π(𝐚|𝐬)/b(𝐚|𝐬),
and the final value estimate is given as:
Q_target(𝐬,𝐚) ← Q_target(𝐬,𝐚) + α·[ IS·(r + γ· Q_target(𝐬',𝐚')) - Q_target(𝐬,𝐚) ].
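A sketch of this importance-weighted update in Python is shown below (the uniform toy policies are arbitrary placeholders used only to make the snippet self-contained):

    import numpy as np

    def off_policy_update(Q, pi_target, pi_behavior, s, a, r, s_next, a_next,
                          alpha=0.1, gamma=0.99):
        """IS-corrected TD update mirroring the equation above."""
        rho = pi_target[s, a] / pi_behavior[s, a]      # importance-sampling ratio
        Q[s, a] += alpha * (rho * (r + gamma * Q[s_next, a_next]) - Q[s, a])
        return Q

    Q = np.zeros((16, 4))
    pi_t = np.full((16, 4), 0.25)                      # toy target policy
    pi_b = np.full((16, 4), 0.25)                      # toy behavior policy
    Q = off_policy_update(Q, pi_t, pi_b, s=0, a=1, r=1.0, s_next=4, a_next=2)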
Offline-RL. Offline rl or batch rl <cit.> uses a static dataset of transitions, denoted by 𝐃={𝐬_𝐭, 𝐚_𝐭, r_t, 𝐬_𝐭^'}_t=1^T, collected using a behavior policy, and so does not require interaction with the environment to collect trajectories. Offline-RL updates the state-action value function Q_target as:
Q_target(𝐬,𝐚) ← Q_target(𝐬,𝐚) + α·[ r + γ·max_𝐚' (Q_target(𝐬',𝐚')) - Q_target(𝐬,𝐚) ],
where the max operator estimates the maximum expected return over all possible actions in the next state. Offline rl is a more practical strategy for safety-critical applications as it does not require interactions with the environment <cit.>. However, the static nature of the dataset does not allow the agent to explore and adapt to new information, potentially limiting performance <cit.>.
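A sketch of this update applied to a static batch of transitions follows (the tuple layout of the toy dataset is an assumption for illustration):

    import numpy as np

    def offline_q_update(Q, dataset, alpha=0.1, gamma=0.99):
        """Apply the max-operator update over a fixed dataset of transitions."""
        for s, a, r, s_next in dataset:
            td_target = r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (td_target - Q[s, a])
        return Q

    Q = np.zeros((16, 4))
    dataset = [(0, 1, 0.0, 4), (4, 2, 1.0, 8)]         # (s, a, r, s') tuples
    Q = offline_q_update(Q, dataset)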
Multi-Agent Reinforcement Learning. Online, offline, and off-policy learning setups can be used to facilitate adaptive decision-making in dynamic environments with multiple interacting agents <cit.>. Each of the I agents has its own policy π_i(𝐚_i|𝐬_i), state space 𝐒_i, and action space 𝐀_i. The agents interact with each other and the environment, and their actions can affect the outcomes of other agents. The goal in marl is to learn a joint policy π(𝐚_1, 𝐚_2, ..., 𝐚_I|𝐬_1, 𝐬_2, ..., 𝐬_I) that maximizes the collective reward of all agents. Formally, the objective in marl can be expressed as maximizing the expected sum of discounted rewards of all the agents:
𝔼[∑_t=0^∞γ^t∑_i=1^I r_i^(t)],
where γ is the discount factor and r_i^(t) is the reward received by agent i at time t.
In general, there are two main approaches to marl: decentralized policies <cit.>, and centralized training <cit.>. Decentralized policies involve each agent independently learning its policy without explicit coordination or communication with other agents. In contrast, centralized training uses a shared value function that considers joint states and actions, enabling agents to communicate and coordinate their actions through a communication protocol. Communication protocols <cit.> facilitate information exchange and collaboration among agents.
Upside Down RL. Classical rl usually involves optimizing policies by estimating the expected future return. Upside-down rl <cit.> flips the traditional rl paradigm and uses the desired return g, the horizon h (i.e., the time remaining until the end of the current trial), and the state as inputs. This input acts as a command which is mapped to action probabilities. Upside-down rl offers improved stability compared to classical rl as it avoids the need to estimate the value function, which can introduce instabilities in traditional rl algorithms <cit.>. The loss function of upside-down rl can be defined as:
ℒ(θ) = ∑_t=0^T[ 𝐚_t - f(𝐬_t, g_t, h_t,θ) ]^2,
where θ contains the model parameters. The term 𝐚_t is the action at time step t, and f(𝐬_t, g_t, h_t,θ) is the predicted action when conditioned on state 𝐬_t, expected future return g_t, and horizon h_t.
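A minimal PyTorch sketch of this command-conditioned objective is given below (the network architecture, input sizes, and continuous-action assumption are ours, chosen only for illustration):

    import torch
    import torch.nn as nn

    class BehaviorFunction(nn.Module):
        """Maps (state, desired return, horizon) to a predicted action."""
        def __init__(self, state_dim, action_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + 2, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim))

        def forward(self, s, g, h):
            cmd = torch.cat([s, g, h], dim=-1)   # command = state, return, horizon
            return self.net(cmd)

    model = BehaviorFunction(state_dim=4, action_dim=2)
    s, g, h = torch.randn(8, 4), torch.rand(8, 1), torch.rand(8, 1)
    a_true = torch.randn(8, 2)                   # observed actions (continuous case)
    loss = ((a_true - model(s, g, h)) ** 2).sum()
    loss.backward()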
§.§ Challenges in Reinforcement Learning
In this section, we discuss the different challenges of classical rl algorithms.
Curse of Dimensionality. Real-world applications often involve high-dimensional state spaces, which makes it hard for classical rl algorithms to learn optimal policies <cit.>. This is because the required training data grows exponentially as the data dimensionality increases <cit.>. One way to mitigate this problem is to encode high-dimensional states into a lower-dimensional space. rl policies perform better when trained on encoded low-dimensional data <cit.>.
Partially Observable Environment. A partially observable environment presents a challenge for rl algorithms, as the agent cannot access observations that contain complete information about the environmental state at each time-step <cit.>. Without complete information, the algorithm may struggle to make the best decision, leading to uncertainty and compromises in performance. To address this, the policy must maintain an internal representation of the state, often in the form of memory, from which the actual state can be estimated <cit.>. Historically, this has often been done with rnn, but these cannot efficiently model long contexts <cit.>.
Credit Assignment. The term credit assignment refers to the problem of associating the actions taken by an agent with the reward it receives <cit.>. This is challenging for two reasons: First, the reward may be delayed; the agent may not be able to observe the consequences of its actions until several time steps into the future. Second, other factors or multiple actions may influence the received reward, making it challenging to identify which action led to that reward.
Inaccurate credit assignment can lead to slower training and sub-optimal policies <cit.>. Moreover, when the reward is sparse (i.e., when the agent receives little feedback for its actions), the credit-assignment problem becomes even more difficult <cit.>. One potential solution is to use models that integrate information across all time steps, which may be better suited for solving this issue <cit.>.
Recent studies have exploited transformers to tackle these three key challenges. Transformers have demonstrated success in modeling long-term dependencies in sequential data while showing promising results in promoting generalization and faster learning in domains such as nlp and cv. We now provide a brief overview of transformers and explore the various ways in which they have been applied to learning optimal rl policies.
§.§ Transformers
Transformers are a class of neural network architectures consisting of multiple layers, each containing a multi-head self-attention mechanism, parallel fully-connected networks, residual connections, and layer normalization. Given a sequence of N input embeddings, transformers produce a sequence of N output embeddings, each of which represents the relationship between the corresponding input embedding and the rest of the input sequence. In nlp, the input embeddings may represent words from a given sentence, while in rl, they may represent different states.
The self-attention mechanism allows each input embedding (each row of 𝐗) to simultaneously attend to all the other embeddings in the input sequence. It computes one attention score for each pair of inputs. This is done by projecting each input into a query 𝐐=𝐗𝐖_q and a key 𝐊=𝐗𝐖_k tensor. The attention scores are then computed by taking the dot product of each query vector (row of the query tensor) with every key vector (row of the key tensor), followed by a softmax operation that normalizes the resulting scores such that they add up to one for each query. The attention scores are then used to compute a weighted sum of value 𝐕=𝐗𝐖_v tensors (see Fig. <ref>):
Attention(𝐐, 𝐊, 𝐕) =Softmax( 𝐐𝐊^⊤/√(d_q)) ·𝐕.
To help stabilize gradients during training, the dot products are divided by √(d_q), where d_q is the dimension of the query vectors.
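The mechanism can be summarized in a short NumPy sketch (a single attention head with randomly drawn weight matrices, purely for illustration):

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_q)) V for one head."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
        return weights @ V

    N, d, d_q = 5, 16, 8                         # 5 input embeddings of dimension 16
    X = np.random.randn(N, d)
    out = self_attention(X, *(np.random.randn(d, d_q) for _ in range(3)))
    print(out.shape)                             # (5, 8)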
Transformers often compute multiple sets of attention scores in parallel (each with a different set of learned parameters 𝐖_k, 𝐖_q, 𝐖_v). This allows the model to attend to multiple aspects of the input sequence simultaneously and is known as multi-head self-attention. Each attention “head” output is concatenated and linearly transformed to produce the final output representation. For applications where the order of the inputs is important, a position encoding is added that allows the network to establish the position of each input.
In a transformer block (Fig. <ref>b), a residual connection is placed around the multi-head self-attention mechanism. This improves training stability by allowing the gradient to flow easily through the network. The output is then processed using layer normalization, which normalizes the activations of each layer across the feature dimension. Each output is processed in parallel by the same mlp. Once more, these are bypassed by a residual connection, and a second layer norm is subsequently added.
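Putting these components together, one transformer block can be sketched in PyTorch as follows (layer sizes are arbitrary; this is a generic post-norm block, not the exact block of any specific method discussed in this survey):

    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        def __init__(self, d_model=64, n_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                nn.Linear(4 * d_model, d_model))

        def forward(self, x):
            h, _ = self.attn(x, x, x)            # multi-head self-attention
            x = self.norm1(x + h)                # residual connection + layer norm
            x = self.norm2(x + self.mlp(x))      # position-wise MLP + residual + norm
            return x

    block = TransformerBlock()
    tokens = torch.randn(2, 10, 64)              # batch of 2 sequences of 10 embeddings
    print(block(tokens).shape)                   # torch.Size([2, 10, 64])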
Architectural Variations. The bert model <cit.> and the gpt <cit.> are two popular variants of the transformer architecture. bert (Fig. <ref>) is a transformer encoder in which each output receives information from every input in the self-attention mechanism (Fig. <ref>a). The goal is to process the incoming data to generate a latent representation that integrates contextual information. This can be particularly useful in rl, as it enables the agent to make informed decisions based on a more comprehensive understanding of the environment <cit.>.
Conversely, gpt (Fig. <ref>) uses a decoder architecture to auto-regressively generate a sequence of output tokens, considering only the past tokens. The use of masked self-attention (Fig. <ref>b) prevents it from cheating by looking ahead to tokens that it should not know yet during training by clamping the associated attention values to zero. In rl, this autoregressive nature can be used to implement an rl policy that is conditioned on a sequence of past states and actions <cit.>. gpt uses multiple blocks, each containing a multi-head attention mechanism. The original transformer <cit.> combined these two approaches in an encoder-decoder architecture for machine translation. An encoder architecture processes the incoming sentence, and the decoder auto-regressively produces the output sentence. In doing so, the decoder also considers the attention over the encoder's latent representation using a “cross-attention” block.
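The masked self-attention used in such decoders can be expressed as a causal mask applied to the attention scores before the softmax (a generic sketch, not tied to a particular gpt implementation):

    import torch

    def causal_mask(n):
        """Boolean mask: position i may not attend to positions j > i."""
        return torch.triu(torch.ones(n, n), diagonal=1).bool()

    scores = torch.randn(4, 4)                   # raw attention scores for 4 tokens
    scores = scores.masked_fill(causal_mask(4), float("-inf"))
    weights = torch.softmax(scores, dim=-1)      # future positions receive zero weight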
Vision Transformers. Inspired by the success of transformer-based architectures like bert and gpt in nlp, <cit.> proposed the vit architecture for processing images. vit architecture <cit.> is suited to a wide range of rl tasks where images must be used to learn a policy <cit.>. The vit architecture is a transformer encoder that processes patches of the image (Fig. <ref>). Each patch is combined with a positional encoding that provides knowledge about its original image location.
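A sketch of this patch embedding step in PyTorch is given below (the 84x84 input size, patch size, and embedding dimension are illustrative assumptions reminiscent of Atari-style observations):

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        """Split an image into non-overlapping patches and embed each one."""
        def __init__(self, img=84, patch=12, channels=3, d_model=64):
            super().__init__()
            self.proj = nn.Conv2d(channels, d_model, kernel_size=patch, stride=patch)
            self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, d_model))

        def forward(self, x):                    # x: (batch, channels, img, img)
            tokens = self.proj(x).flatten(2).transpose(1, 2)   # (batch, patches, d_model)
            return tokens + self.pos             # add learned positional encodings

    frames = torch.randn(2, 3, 84, 84)
    print(PatchEmbedding()(frames).shape)        # torch.Size([2, 49, 64])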
Transformer-XL. The computational complexity of the self-attention mechanism increases quadratically with input sequence length, since every input must be compared with every other input. Hence, the transformer architectures discussed so far typically partition long input sequences into shorter sequences to reduce memory demands. While this approach helps minimize memory usage, it makes capturing global context challenging. Moreover, the conventional transformer model is limited because it does not consider the boundaries of the input sequence when forming context. Instead, it selects consecutive chunks of symbols without regard for sentence or semantic boundaries. This can result in context fragmentation, where the model lacks the necessary contextual information to accurately predict the first few symbols in a sequence. The tr-xl architecture <cit.> addresses these issues by dividing the input into segments and incorporating segment-level recurrence (Fig. <ref>c) and relative positional encodings. By caching and reusing the representation computed for the previous segment during training, tr-xl can extend the context and better capture long-term dependencies. Additionally, tr-xl can process elements of new segments without recomputing the past segments, leading to faster inference.
§.§ Key Advantages of Transformers in RL
This section outlines the transformer characteristics that are important for rl applications.
Attention Mechanism. The attention mechanism is crucial in transformers for sequence modeling of states <cit.>. It enables the rl agent to focus selectively on relevant cues in the environment <cit.> and ignore redundant features, which leads to faster training. This is particularly useful in high-dimensional state spaces where there are a large number of input elements.
Multi-Modal Architectures. For complex tasks, rl agents may require additional information from different data modalities <cit.>. Past approaches have used different architectures to handle multiple modalities <cit.>. However, transformers can process multiple modalities of data (e.g., text, images) effectively <cit.> using the same architecture.
Parallel Processing. Learning a policy in rl can be computationally expensive, especially for complex tasks requiring many samples <cit.>. rnn require the sequential processing of inputs, which is inefficient. Transformers are well-suited for parallelization due to their self-attention mechanism, which considers all inputs simultaneously. rl algorithms can exploit transformers to learn more efficient policies in significantly less time.
Scalability. Current rl algorithms struggle to scale effectively to complex tasks that require the integration of multiple skills <cit.>. However, the performance of transformers has been shown to improve smoothly as the size of the model, dataset, and compute increases <cit.>. This ability can potentially be leveraged in rl to create generalist agents capable of performing various tasks in different environments and with different embodiments <cit.>.
These points highlight how the properties of transformers make them attractive for rl. In the following sections, we examine the use of transformers in each stage of the rl workflow, including representation learning (Sec. <ref>), policy learning (Sec. <ref>), learning the reward function (Sec. <ref>), and modeling the environment (Sec. <ref>). Additionally, we cover various training strategies (Sec. <ref>) and techniques for interpreting rl policies that use transformers (Sec. <ref>).
§ REPRESENTATION LEARNING
Concise and meaningful representations are critical for efficient decision-making <cit.> in rl. Empirical evidence has shown that training agents directly on high dimensional data, such as image pixels, is sample inefficient <cit.>. Therefore, good data representations are crucial for learning rl policies <cit.> since they can enhance performance, convergence speed, and policy stability <cit.>.
For instance, in self-driving cars, the raw sensory inputs 𝐨_t (e.g., camera images, LIDAR readings) are high dimensional and often contain redundant information. If these inputs 𝐨_t are mapped to a compact representation 𝐬_t, the rl agent can learn more efficiently. Similarly, in game-playing scenarios (Fig. <ref>), it is helpful to encode and extract relevant features from the raw pixels to be used as input to the rl algorithm learning the policy. Transformers can produce transferable and discriminative feature representations for diverse data modalities <cit.>.
§.§ Comparisons between Transformers, CNNs, and GNNs
Encoding high-dimensional representations using pre-trained cnn and transformer architectures is an active research area. Both approaches yield comparable performance in computer vision tasks <cit.>. However, several studies <cit.> have shown that transformers generate more expressive representations than cnn for tasks where the data distribution differs at training and test time. This advantage stems from cnn's inherent inductive bias towards local spatial features, which limits their ability to capture the global dependencies necessary for reasoning <cit.>. Transformers can encode the image as a sequence of patches without local convolution and resolution reduction (Fig. <ref>). Hence, they model the global context in every layer, leading to a stronger representation for learning efficient policies <cit.>. Transformers exhibit comparable generalization capabilities to gnn in graphs <cit.> and, in some instances, outperform them by capturing long-range semantics <cit.>.
mtrl is a learning paradigm where an agent is trained to perform multiple tasks simultaneously. It has traditionally relied on gnn to handle incompatible environments (i.e., differing state-action spaces) <cit.>. This is due to the ability of gnn to operate on graphs of variable sizes. However, <cit.> hypothesize that the restrictive nature of message passing in sparse graphs may adversely impact performance. They propose replacing gnn with transformers which obviates the need to learn multi-hop communication; the transformer can be considered a gnn applied to fully-connected graphs with attention as an edge-to-vertex aggregation operation <cit.>. This enables a dedicated message-passing scheme for each state and pass, effectively avoiding the requirement for multi-hop message propagation. This overcomes the challenges of gradient propagation and information loss arising from such multi-hop propagation. The transformer-based model of <cit.>, Amorpheus, learns better representations and improves performance without imposing a relational inductive bias.
§.§ Advanced Representation Learning using Transformers
Transformers, in combination with other attention mechanisms, enable the learning of expressive representations. SloTTAr <cit.> combines a transformer encoder-decoder architecture with slot attention <cit.>. The transformer encoder focuses on learning spatio-temporal features from action-observation sequences. Utilizing the slot attention mechanism, features are grouped at each temporal location, resulting in K slot representations. The decoder subsequently decodes these slot representations to generate action logits. Notably, this parallelizable process enables faster training compared to existing benchmarks.
In multi-agent reinforcement learning, transformers have proven effective in modeling relations among agents and the environment <cit.>. <cit.> proposed replacing rnn with a transformer encoder for robust temporal learning. Similarly, <cit.> used a visual feature extractor based on the vit architecture to obtain more robust representations for robotic visual exploration. Their network, utilizing self-attention, outperformed cnn backbones in robotic tasks.
Transformers have been widely adopted in scenarios involving the processing of multimodal information. <cit.> introduced scene-fusion transformers that fuse observed trajectories and scene information to generate expressive representations for trajectory prediction. To reduce computational complexity, they employ sparse self-attention. <cit.> utilize transformers to integrate visual and text features effectively.
§.§ Enhancing Transferability and Generalization
An inherent difficulty faced by rl is generalizing to new unseen tasks <cit.>. This difficulty results from the intrinsic differences between various rl tasks (e.g., autonomous driving and drug discovery). While meta-learning methods such as maml <cit.> have been developed to generalize to new tasks with different distributions using limited data, these methods are hard to use in rl due to poor sample-efficiency and unstable training <cit.>.
Transformers have shown great potential for meta-reinforcement learning (TrMRL), as demonstrated by <cit.>. Transformers are excellent at handling long sequences and capturing dependencies over a long period, which enables them to quickly adapt to new tasks using self-attention. In TrMRL, the proposed agent uses self-attention blocks to create an episodic memory representing a consensus of recent working memories. The transformer architecture encodes the working memory and tasks as a distribution over these memories. During meta-training, the agent learns to differentiate tasks and identify similarities in the embedding space. This approach performs comparably or better than PEARL <cit.> and maml <cit.>. It is particularly efficient in memory refinement and task association.
<cit.> introduced the state-action-reward transformer (StARformer) to model multiple data distributions by learning transition representations between individual time steps of the state, action, and reward. StARformer consists of the step transformer and the sequence transformer. The step transformer uses self-attention to capture a local representation that understands the relationship between state-action-reward triplets within a single time-step window. The sequence transformer combines these local representations with global representation in the form of pure state features extracted as convolutional features, introducing a Markovian-like inductive bias. This bias helps reduce model capacity while effectively capturing long-term dependencies.
§ TRANSITION FUNCTION LEARNING
The transition function p(𝐬', r|𝐬, 𝐚) describes how the environment transitions from the current state 𝐬 to the next state 𝐬' and issues rewards r in response to the actions 𝐚 taken by the agent. Learning this function (Fig. <ref>) and subsequently exploiting it to train an rl agent is known as model-based rl (Sec. <ref>). Model-based rl offers a significant advantage compared to model-free rl approaches <cit.>; it allows the agent to plan future trajectories for each action, improving robustness and safety. Interactions with the external environment can be computationally expensive, particularly when relying on simulations that mimic the real world <cit.>. If we learn the transition function, these interactions can be reduced.
A standard method in mbrl <cit.> involves training an end-to-end world model to represent the environment's dynamics accurately. For instance, TransDreamer <cit.> trains a single model that learns visual representations and dynamics using the evidence lower bound loss <cit.>. However, this approach can result in inaccuracies in the learned world model.
The masked world model (MWM) <cit.> addresses this by decoupling visual representation and dynamics learning. This framework utilizes an autoencoder with convolutional layers and vit to learn visual representations. The autoencoder reconstructs pixels based on masked convolutional features. A latent dynamics model is learned by operating on the representations from the autoencoder. An auxiliary reward prediction objective is
introduced for the autoencoder to encode task-relevant information. Importantly, this approach outperforms the strong rnn-based model, DreamerV2 <cit.> in terms of both sample efficiency and final performance on various robotic tasks.
Learning the dynamics of the world 𝐳_t+1∼ p_G (𝐳_t+1| 𝐳_≤ t, 𝐚_≤ t) has been formulated as a sequence modeling problem in iris <cit.>. This approach takes advantage of the transformer's ability to process sequences of discrete tokens. iris uses a discrete autoencoder to construct a language of image tokens, while a transformer models the dynamics over these tokens. By simulating millions of trajectories accurately, iris surpasses recent methods in the Atari 100k benchmark <cit.> in just two hours of real-time experience.
Building upon the auto-regressive nature of transformer decoders, <cit.> introduces the transformer-based world model (TWM). Based on the tr-xl architecture, TWM learns the transition function from real-world episodes while attending to latent states, actions, and rewards associated with each time step. By allowing direct access to previous states instead of viewing
them through a compressed recurrent state, the tr-xl architecture enables the world model to learn long-term dependencies while maintaining computational efficiency.
§ REWARD LEARNING
The reward function is crucial in rl as it quantifies the desirability of different actions 𝐚 for a given state 𝐬, guiding the learning process. Typically, reward functions are predefined by human experts who carefully consider relevant factors based on their domain knowledge. However, designing an appropriate reward function is challenging in real-world scenarios, requiring a deep understanding of the problem domain. Moreover, manually designing it introduces bias and may lead to sub-optimal behavior.
Recent research has explored different approaches for learning reward functions by integrating human data in various forms, such as real-time feedback, expert demonstrations, preferences, and language instructions. Transformers have proven valuable in these contexts. The transformer architecture is particularly advantageous with non-Markovian rewards, which are characterized by delays and dependence on the sequence of states encountered during an episode (e.g., when rewards are only provided at the end). Transformers efficiently capture dependencies across input sequences, making them well-suited to handle such scenarios.
The preference transformer <cit.> model captures human preferences by focusing on crucial events and modeling the temporal dependencies inherent in human decision-making processes; it effectively predicts non-Markovian rewards and assigns appropriate importance weights based on the trajectory segment. This approach reduces the effort required for designing reward functions and enables handling complex control tasks such as locomotion, navigation, and manipulation.
To train an rl policy for generating text that aligns with human-labeled ground truth, the bilingual evaluation understudy (BLEU) score <cit.> is often used as a reward function. However, BLEU may not consistently correlate strongly with human evaluation. In <cit.>, a bert-based reward function is introduced, demonstrating a higher correlation with human evaluation. This approach leverages a pre-trained bert model (Fig. <ref>) to assess the semantic similarity between the generated and reference sentences and update the policy accordingly.
§ POLICY LEARNING
Policy learning is central to rl; it involves learning the policy π(𝐬) which the agent uses to select actions 𝐚=π(𝐬) with the objective of maximizing the discounted cumulative reward g.
Transformers have been used for modeling π(𝐬) in various scenarios, including off-policy, on-policy, and offline rl.
§.§ Offline RL with the Decision Transformer
Offline rl trains a policy using a limited, static dataset of previously collected experiences. This is different from online or off-policy rl approaches (which continuously interact with the environment to update their policies) since the agent cannot collect experience beyond the fixed dataset, which limits its ability to learn, explore, and improve performance.
The dt <cit.> (Fig. <ref>) is an offline rl method that uses the upside-down rl paradigm (see Sec. <ref>). It uses a transformer-decoder to predict actions conditioned on past states, past actions, and expected return-to-go (the sum of the future rewards). The parameters are optimized by minimizing the cross-entropy (discrete) or mean square error (continuous) loss between the predicted and actual actions.
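To make this conditioning concrete, the return-to-go targets are obtained from the offline rewards before being interleaved with states and actions (the sketch below is a generic preprocessing step, not the exact pipeline of the cited work):

    def returns_to_go(rewards, gamma=1.0):
        """Return-to-go at step t: sum of (discounted) rewards from t onwards."""
        rtg, running = [], 0.0
        for r in reversed(rewards):
            running = r + gamma * running
            rtg.append(running)
        return list(reversed(rtg))

    rewards = [0.0, 0.0, 1.0, 0.0, 2.0]
    print(returns_to_go(rewards))                # [3.0, 3.0, 3.0, 2.0, 2.0]
    # The model consumes interleaved (return-to-go, state, action) tokens and is
    # trained to predict each action from the tokens that precede it.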
dt uses the gpt architecture to address the credit-assignment problem; the self-attention mechanism can associate rewards with the corresponding state-action pairs across long time intervals. This also allows the dt policy to learn effectively even in the presence of distracting rewards <cit.>. Empirical experiments demonstrate that the dt outperforms state-of-the-art model-free offline approaches on offline datasets such as Atari and Key-to-Door tasks.
The dt is a model-free approach that predicts actions based on past trajectories without forecasting new states, so it cannot plan future actions. This limitation is addressed by the tt <cit.>, an mbrl approach that formulates rl as a conditional sequence modeling problem. tt models past states, actions, and rewards to predict future actions, states, and rewards effectively. Using rewards as inputs prevents myopic behavior and enables the agent to plan future actions through search methods like beam search <cit.>.
This task-specific conditioning of agents offers flexibility in learning complex tasks. Prompt-based dt <cit.> enables few-shot adaptation in offline rl. The input trajectory, which acts as a prompt, contains segments of few-shot demonstrations, encoding task-specific information to guide policy generation. This approach allows the agent to exploit offline trajectories collected from different tasks and adapt to new scenarios, generalizing to unseen tasks. Similarly, the text decision transformer (TDT) <cit.> employs natural language signals to guide the policy via language instructions in the Atari-Frostbite environment.
However, dt face several challenges. They struggle to learn effectively from sub-optimal trajectories. In a stochastic environment, their performance tends to degrade, since the action taken may have been sub-optimal and the achieved outcome merely the result of random environment transitions. Insufficient distribution coverage of the environment is another challenge in offline rl approaches like dt. To overcome these challenges, solutions such as qdt <cit.> re-label the return-to-go using a more accurate learned Q-function. esper <cit.> addresses stochastic performance degradation by conditioning on the average return. Additionally, boot <cit.> incorporates bootstrapping to generate more offline data. By adopting these approaches, the learning capabilities of dt can be improved, enabling more effective and robust policies in various scenarios.
§.§ Online RL with Transformers
Transformers have also been applied to online rl, where the agent interacts with the environment while learning. In realistic environments, issues such as noisy sensors, occluded images, or unknown agents introduce the problem of partial observability. This makes it difficult for agents to choose the correct action (Sec. <ref>). Here, retaining recent observations in memory is crucial to help disambiguate the true state. Traditionally, this problem has been approached using rnn, but transformers can provide better alternatives.
The deep transformer Q-network (DTQN) <cit.> addresses the challenge of partially observable environments using a transformer decoder architecture. At each training step, it receives the agent's previous k observations and generates k sets of Q-values. This unique training strategy encourages the network to predict Q-values even in contexts with incomplete information, leading to a more robust agent. During evaluation, it selects the action with the highest Q-value from the last time step in its history (Fig. <ref>).
The DTQN incorporates a learned positional encoding, which enables the network to adapt to different domains by learning domain-specific temporal dependencies. This domain-specific encoding matches the temporal dependencies of each environment and allows the DTQN to adapt to environments with varying levels of temporal sensitivity. The DTQN demonstrates superior learning speed and outperforms previous recurrent approaches in various partially observable domains, including gym-gridverse, car flag, and memory cards <cit.>.
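Evaluation-time action selection can be sketched as follows (the model interface, i.e., a network mapping the last k observations to k sets of Q-values, is a generic assumption rather than the cited implementation):

    import torch

    @torch.no_grad()
    def select_action(model, obs_history):
        """obs_history: tensor of shape (1, k, obs_dim) holding the last k observations."""
        q_values = model(obs_history)            # assumed output shape: (1, k, n_actions)
        return q_values[:, -1, :].argmax(dim=-1).item()   # act greedily on the last step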
§.§ Transformers for Multi-Agent Reinforcement Learning
marl (Sec. <ref>) presents unique challenges as agents learn and adapt their behaviors through interactions with other agents and the environment. One such challenge stems from the model architecture's fixed input and output dimensions, which means that different tasks must be trained independently from scratch <cit.>. Consequently, zero-shot transfer across tasks is limited.
Another challenge arises from the failure to disentangle observations from different agents <cit.>. When all information from various agents or environments is treated equally, it can result in misguided decisions by individual agents. This challenge becomes particularly prominent when utilizing a centralized value function, which serves as a shared estimate of actions and state value across multiple agents to guide their behavior <cit.>. As a result, appropriately assigning credit to individual agents becomes difficult.
The universal policy decoupling transformer (UPDeT) <cit.> is designed to handle challenges in tasks with varying observation and action configuration requirements. It achieves this by separating the action space into multiple action groups, effectively matching related observations with corresponding action groups. UPDeT improves the decision-making process by employing a self-attention mechanism and optimizing the policy at the action-group level. This enhances the explainability of decision-making while allowing for high transfer capability to new tasks.
This characteristic is also observed in the multi-agent transformer (MAT) <cit.>. MAT (Fig. <ref>) transforms the joint policy search problem into a sequential decision-making process, allowing for parallel learning of agents' policies regardless of the number of agents involved. The encoder utilizes the self-attention mechanism to process a sequence of each agent's observations, capturing their interactions. This generates a sequence of latent representations that are then fed into the decoder. The decoder, in turn, produces each agent's optimal action in an auto-regressive and sequential manner. As a result, MAT possesses robust generalization capabilities, surpassing mappo <cit.>, and happo <cit.> in few-shot experiments on multi-agent MuJoCo tasks.
However, marl faces limitations in real-world applications due to the curse of many agents <cit.>, which stems from the exponentially growing state-action space as the number of agents increases. This presents challenges in learning the value functions and policies of the agents, leading to
inefficient relational reasoning among them and credit-assignment problems. Concatenating the state-action spaces of individual agents and treating them as a single-agent problem leads to exponential state and time complexity <cit.>. Additionally, independent learning of policies may struggle to converge without cooperation <cit.>.
TransMix <cit.> tackles the challenge through a centralized learning approach, enabling agents to exchange information during training. During policy execution, each agent relies on a partially observable map. The action space in the star-craft multi-agent challenge (SMAC) <cit.> encompasses various actions, including moving units, attacking enemies, gathering resources, constructing buildings, and issuing commands to control the game state. Utilizing transformers, TransMix captures global and local contextual interactions among agent Q-values, histories, and global state information, facilitating efficient credit assignment.
The transformer's ability to reason about relationships between agents improves results in both model-free marl (regardless of the number of agents) and model-based marl (with a logarithmic dependence on the number of agents) <cit.>. Notably, modeling the transformer's self-attention with other neural network types requires an impractically large number of trainable parameters, highlighting the significance of self-attention in capturing agent interactions <cit.>. Moreover, the transformer's performance remains stable across different agent counts, with accuracy impacted by neural network depth <cit.>, making it highly efficient for marl.
§ TRAINING STRATEGY
Training transformers poses challenges due to their reliance on residual branches, which amplify
minor parameter perturbations, disrupting model output <cit.>; specialized optimizers and weight initializers are needed for successful training. Likewise, the training of rl policies can be unstable <cit.> and require distinct strategies for achieving optimal performance. Hence, the integration of transformers into rl is particularly challenging. These challenges can manifest as sudden or extreme changes in performance during training, impeding effective learning and generalization.
The standard transformer architecture is difficult to optimize using rl objectives and needs extensive hyper-parameter tuning, which is time-consuming. Here, we review strategies for training transformers in rl. These include pre-training and transfer learning to expedite learning, improved weight initialization to mitigate gradient issues, and efficient layer utilization for capturing relevant information.
§.§ Pre-Training and Transfer Learning
Transformers can be pre-trained on large, reward-free datasets, providing opportunities to fine-tune when only small annotated datasets are available. <cit.> propose using dt to pre-train agents on large, reward-free offline datasets of prior interactions. During pre-training, reward tokens are masked, allowing the transformers to learn to predict actions based on the previous state and action content while extracting behavior from the dataset. This pre-trained model can then be fine-tuned with a small, reward-annotated dataset to learn the skills necessary to achieve the desired behavior based on the reward function.
Transfer learning is challenging when the environment dynamics change. A training method for dt <cit.> addresses this challenge by using counterfactual reasoning. It generates counterfactual trajectories in an alternative environment, which are used to train a more adaptable learning agent. This process aids in regularizing the agent's internal representation of the environment, enhancing its adaptability to structural changes. Moreover, unsupervised pre-training of vision and sequence encoders has also improved downstream few-shot learning performance <cit.>. By leveraging pre-trained models, the agent can quickly adapt to new, unseen environments and achieve higher performance with limited training data.
§.§ Stabilizing Training
In the rl setting, transformer models require learning rate warmup to prevent divergence caused by backpropagation through the layer normalization modules, which can destabilize optimization. To enhance stability, <cit.> proposes to use T-Fixup initialization <cit.>. This applies Xavier initialization <cit.> to all parameters except input embeddings, eliminating the need for learning rate warmup and layer normalization. It is crucial in environments where learned behavior guides exploration; it addresses instability during early training stages, when policies are more exploratory, and prevents convergence to sub-optimal policies.
The gtr-xl architecture <cit.> has demonstrated promising results in stabilizing rl training and improving performance. It improves upon the original tr-xl architecture by applying layer normalization exclusively to the input stream within the residual model rather than the shortcut stream. This modification allows the initial input to propagate through all the layers, promoting training stability. gtr-xl replaces the residual connection with a gru-style gating mechanism. This gating mechanism regulates information flow through the network controlling the amount of information passed via the shortcut. This added flexibility enhances the model's adaptability to rl scenarios and facilitates stable training.
§ INTERPRETABILITY
Interpretability of the learned rl policies is desirable in safety-critical applications like healthcare and autonomous driving <cit.>. This helps in building trust, facilitating debugging, and promoting ethical and fair decision-making. However, achieving interpretability has been a significant challenge and a bottleneck in the progress of rl <cit.>.
One way to interpret transformers is to visualize the attention weights using heatmaps <cit.>. This helps to understand which features are used to learn the particular task. In multi-agent scenarios, these visualizations reveal the localized areas of the input space where the individual agents focus their attention, facilitating coordinated and cooperative behavior. For instance, <cit.> introduce a multi-agent transformer deep Q-network (MAT-DQN) that integrates transformers into a deep Q-network. Using heatmaps, MAT-DQN provides insights into the important input information that influences the agent's decision-making process for cooperative behavior.
Analyzing attention heatmaps unveils the agent's ability to consider other agents, relevant objects, and pertinent tasks, allowing for a clear interpretation of the policy. Such visualization is critical in sparse reward settings, where understanding which past state had the most influence on decision-making is crucial. Attention-augmented memory (AAM) <cit.> exemplifies this by combining the current observation with memory. This enables the agent to understand “what” the agent observes in the current environment and “where” it directs its attention in its memory.
An interesting method for enhancing interpretability involves the use of transformers in neuro-symbolic policies <cit.>. Neuro-symbolic policies combine programs and neural networks to improve interpretability and flexibility in rl tasks. Specifically, a neuro-symbolic transformer is a variant of the traditional transformer model that incorporates programmatic policies into the attention mechanism. Instead of utilizing a neural network, the attention layer employs a program to determine the relevant inputs to focus on. These programmatic policies can take various forms, including decision trees, rule lists, and state machines. This approach improves interpretability by providing a more precise understanding and visualization of why agents attend to specific inputs.
However, it has been demonstrated that attention weights alone are unreliable predictors of the importance of intermediate components in nlp <cit.>, leading to inaccurate explanations of model decisions; learned attention weights often highlight less meaningful tokens and exhibit minimal correlation with other feature importance indicators like gradient-based measures. Furthermore, relying solely on attention weights can result in fragmented explanations that overlook most other computations. Recent work has introduced the assignment of a local relevancy score <cit.>. These scores are propagated through layers to achieve class-based separation and enhance the interpretability of the transformers. This approach holds promise for future research to improve the interpretability of rl policies.
§ APPLICATIONS
rl has traditionally been constrained to unrealistic scenarios in virtual environments. However, with modern deep neural network architectures, there has been a notable shift towards employing rl to address a broader range of practical challenges. The following section describes real-world applications where rl powered by transformers can make a substantial impact.
§.§ Robotics
In robotics, autonomous agents automate complex real-world tasks; a classic example is autonomous driving. Here, learning the rl policy for trajectory planning is essential: it involves forecasting the future positions of one or more agents in an environment while considering contextual information. This requires adequate planning and coordination among agents by modeling their spatial and temporal interactions.
Several studies have proposed to use transformers for processing sequences of high-dimensional scene observations for predicting actions. A recent study <cit.> uses vit to extract spatial representations from a bird's-eye view of the ego vehicle to learn driving policies. Compared with cnn, vit are more effective in capturing the global context of the scene. The attention mechanism used in vit allows the policy to discern the neighboring cars that are pivotal in the decision-making process of the ego vehicle. As a result, the vit-based DQN agent outperforms its cnn-based counterparts. <cit.> introduce a transformer architecture to encode heterogeneous information, including the historical state of the ego vehicle and candidate route waypoints, into the scene representation. This approach enhances sample efficiency and results in more diverse and successful driving behaviors during inference.
The object memory transformer <cit.> explores how long-term histories and first-person views can enhance navigation performance in object navigation tasks. An object scene memory stores long-term scene and object semantics, focusing attention on the most salient event in past observations. The results indicate that incorporating long-term object histories with temporal encoding significantly enhances prediction performance.
Transformers also excel in capturing both spatial relationships and intra-agent interactions, making them ideal for facilitating cooperative exploration and developing intelligent embodied agents. maans <cit.> addresses the challenge of cooperative multi-agent exploration <cit.>, where multiple agents collaborate to explore unknown spatial regions. This approach extends the single-agent active neural SLAM <cit.> method to the multi-agent setting and utilizes a multi-agent spatial planner with a self-attention-based architecture known as the Spatial-TeamFormer, which hierarchically integrates intra-agent interactions and spatial relationships using two layers: an individual spatial encoder that captures spatial features for each agent, and a team relational encoder that reasons about interactions among agents. To focus on spatial information, the individual spatial encoder performs spatial self-attention over each agent's spatial map independently, whereas the team relational encoder captures team-wise interactions without leveraging spatial information. This allows maans to outperform planning-based competitors in a photo-realistic environment, as shown in experiments on Habitat <cit.>.
§.§ Medicine
rl has the potential to assist clinicians; tasks involving diagnosis, report generation, and drug discovery can be considered sequential decision-making problems <cit.>.
Disease Diagnosis. Diagnosing a medical condition involves modeling a patient's information (e.g., treatment history, present signs, and symptoms) to accurately understand the disease. <cit.> propose a model for disease diagnosis called the DxFormer, which employs a decoder-encoder transformer architecture: the decoder inquires about implicit symptoms, while the encoder is responsible for disease diagnosis, modeling the input sequence of symptoms as a sequence classification task. To facilitate symptom inquiry, the decoder is formulated as an agent that interacts with a patient simulator in a serialized manner, generating possible symptom tokens that may co-occur with prior known symptoms and inquiring about them. The inquiry process proceeds until the confidence level in the predicted disease surpasses a selected threshold, thus enabling a more accurate and reliable diagnosis.
Clinical Report Generation. rl can generate medical reports from images by employing appropriate evaluation metrics such as human evaluations or consensus-based image description evaluation (CIDEr) <cit.> and BLEU metrics as rewards. Previous approaches to medical image captioning were constrained by their reliance on rnn for text generation, which often resulted in slow performance and incoherent reports, as highlighted by <cit.>. To address this limitation, their work introduces an rl approach based on transformers for medical image captioning. Initially, a pre-trained cnn is employed to identify the regions of interest in chest X-ray images. Then, a transformer encoder is utilized to extract the visual features from the identified regions. These features serve as input to the decoder, which generates sentences describing the X-ray scans. Similarly, <cit.> use a meshed-memory transformer (M2 Trans) <cit.> that generates radiology reports, proving more effective than traditional rnn and transformer models. M2 Trans incorporates a cnn to extract image regions, which are then encoded using a memory-augmented attention process: attention weights are assigned to image regions based on prior knowledge stored in memory matrices that capture relationships between different regions. The model is trained using rewards that aim to enhance the factual completeness and consistency of the generated reports.
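The core idea of memory-augmented attention can be sketched as follows: learned memory slots are appended to the keys and values so that attention can retrieve prior knowledge in addition to the encoded image regions. The module below is a simplified, single-head illustration and not the exact M2 Trans layer.

import torch
import torch.nn as nn

class MemoryAugmentedAttention(nn.Module):
    """Scaled dot-product attention whose keys and values are extended with
    learned memory slots encoding prior knowledge (meshed-memory style)."""
    def __init__(self, d_model: int, n_memory: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.mem_k = nn.Parameter(torch.randn(n_memory, d_model) / d_model ** 0.5)
        self.mem_v = nn.Parameter(torch.randn(n_memory, d_model) / d_model ** 0.5)
        self.scale = d_model ** -0.5

    def forward(self, x):  # x: (batch, regions, d_model) image-region features
        B = x.shape[0]
        q = self.q_proj(x)
        k = torch.cat([self.k_proj(x), self.mem_k.expand(B, -1, -1)], dim=1)
        v = torch.cat([self.v_proj(x), self.mem_v.expand(B, -1, -1)], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v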
Drug Discovery. rl has the potential to accelerate drug discovery efforts. It has been utilized to bias or fine-tune generative models, enabling the generation of molecules with more desirable characteristics, such as bioactivity. Traditional generative models for molecules, such as rnn or gan <cit.>, have limitations in satisfying specific constraints, such as synthesizability or desirable physical properties. Recent research <cit.> uses transformers as generative models for molecular generation. These approaches generate more plausible molecules with rich semantic features, and a discriminator grants rewards that guide the policy update of the generator. These works demonstrate that transformer-based methods significantly improve the capture and utilization of structure-property relations, leading to higher structural diversity and a broader range of scaffold types for the generated molecules.
§.§ Language Modeling
Language modeling involves understanding the sequential context of language to perform diverse tasks like recognition, interpretation, or retrieval. Large language models like gpt leverage pre-training on vast corpora, enabling them to generate fluent, natural language by sampling from the learned distribution, thus minimizing the need for extensive domain-specific knowledge engineering. However, these models face challenges in maintaining task coherence and goal-directedness. To address this issue, <cit.> use ppo to fine-tune an existing transformer-based language model specifically for story generation: the model takes a text prompt as input and generates a story based on the provided goal. The policy is updated using a reward mechanism that considers the proximity of the generated story to the desired input goal and the frequency of verb occurrence in the story compared to the goal.
Several studies use additive learning to benefit from pre-trained language models with limited data, incorporating a task-specific adapter over the frozen pre-trained language model. <cit.> use rl to selectively sample tokens between the general pre-trained language model and the task-specific adapter. The authors argue that this enables the adapter to focus solely on the task-relevant component of the output sequence, making the model more robust to over-fitting. <cit.> introduce a conversational bot powered by rl where pre-trained models encode conversation history. Given that the action space for dialogue systems can be very large, the authors propose limiting the action space to a small set of generated candidate actions at each conversation turn. They use Q-Learning-based rl to allow a dynamic action space at each stage of the conversation.
Increasing the size of language models alone does not necessarily mitigate the risk of toxic biases in the training data. Several rl-based approaches have been proposed to better align these models with the user's intended objectives. To align GPT-3 with the user's preferred intentions, <cit.> introduce InstructGPT. First, the authors collect a set of human-written demonstrations of desired output behavior and fine-tune GPT-3 with supervised learning. Next, a reward model is trained on model outputs ranked from best to worst. Using this reward model, the language model is further optimized with rl using ppo. Results demonstrate that InstructGPT, with 1.3B parameters, produces outputs preferable to those of much larger models such as GPT-3 with 175B parameters. <cit.> propose an alternative method to mitigate toxicity in language models via fine-tuning with ppo. They use a reward model based on multi-task learning to mitigate unintended bias in toxicity prediction related to various social identities.
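The reward-modeling stage can be summarized by the pairwise ranking objective sketched below, where the reward model should assign a higher scalar score to the human-preferred completion. The function and variable names are illustrative; the subsequent ppo fine-tuning (typically with a KL penalty towards the supervised model) is only indicated in the comment.

import torch.nn.functional as F

def preference_ranking_loss(r_preferred, r_rejected):
    """Pairwise ranking loss for a reward model trained on human-ranked outputs:
    the reward of the preferred completion should exceed that of the rejected one.
    r_preferred, r_rejected: tensors of scalar rewards, shape (batch,)."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# The fitted reward model then provides the scalar reward used to fine-tune the
# language-model policy with ppo, usually regularized by a KL penalty towards
# the supervised fine-tuned model to avoid reward over-optimization.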
§.§ Edge and Cloud Computing
rl is a valuable tool for optimizing the performance of decision-making systems that require real-time adaptation to changing conditions, such as those used in edge and cloud computing. In edge computing, rl can optimize resource-constrained iot devices' performance <cit.>. In cloud computing, rl can be used to optimize resource allocation and scheduling in large-scale distributed systems <cit.>. Integrating transformers with rl in these two settings can be particularly useful as they can handle high-dimensional sensory states <cit.> and sequences of symbolic states <cit.>.
A distributed deep rl algorithm proposed by <cit.> utilizes transformers to model the policy for optimizing offloading strategies in vehicular networks, which enable vehicle-to-vehicle communication. The input sub-task priorities and dependencies are represented by a dag, and the attention mechanism employed by the transformer allows for the efficient extraction of state information from this dag-based topology representation, facilitating informed offloading decisions. The reward function used in this algorithm optimizes for latency and energy consumption, providing valuable feedback. This approach enables faster convergence of the vehicular agent to equilibrium.
§.§ Combinatorial Optimization
Combinatorial optimization involves finding the values of a set of discrete parameters that minimize cost functions <cit.>. Recently, transformer-based models have shown promise in combinatorial optimization (e.g., for the traveling salesman and routing problems) due to their ability to handle sequential data and model complex relationships between entities.
Travelling Salesman. This problem is a classical combinatorial optimization problem commonly found in crew scheduling applications. It has been formulated as an rl problem by <cit.>. The problem is NP-hard, and the prohibitive complexity of exact brute-force algorithms necessitates the development of faster methods. This study uses a dt to solve this challenge. The dt is fed random walks as input and aims to find the optimal path among all nodes. The dt has the advantage of scaling with pseudo-linear time, as it only needs to predict once per node in the route. This is a significant improvement over previous methods, such as dynamic programming and simulated annealing, which have much higher computational complexity. However, the dt could not always accurately model the travelling salesman problem, leading to inconsistent performance.
Routing. Identifying the most efficient route between two nodes in a graph is important in industries such as transportation, logistics, and networking <cit.>. Traditional heuristic-based algorithms may not always yield the optimal solution, as adapting to changing conditions is challenging <cit.>. gnn have been used to address these challenges <cit.>. However, these may not be sufficient for handling data with complex inter-relationships and structures, motivating the use of transformers. In addition, routing often requires optimizing for multiple constraints, such as cost, time, or distance, which can be tackled using rl. A transformer-based policy has been proposed by <cit.> to tackle the routing problem using a standard transformer encoder with positional encoding, which ensures translation invariance of the input nodes. A gnn layer is used in the decoder, enabling consideration of the graph's topological structure formed by the node relationships. The policy is then trained using the REINFORCE algorithm <cit.>. This approach improves learning efficiency and optimization accuracy compared to traditional methods while providing better generalization in new scenarios.
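A schematic REINFORCE update for such a route-construction policy is sketched below. The interface of the policy (returning the sampled tour cost and its log-probability) and the rollout baseline are assumptions made for illustration only.

import torch

def reinforce_step(policy, optimizer, instances, baseline):
    """One REINFORCE update for a route-construction policy.
    `policy(instances)` is assumed to sample a tour and return its total length
    (cost) together with the log-probability of the sampled tour; `baseline`
    returns a cost estimate (e.g., from a greedy rollout) to reduce variance."""
    cost, log_prob = policy(instances)          # shapes: (batch,), (batch,)
    advantage = cost - baseline(instances)      # lower cost than baseline => negative advantage
    loss = (advantage.detach() * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()                             # descent increases log-prob of low-cost tours
    optimizer.step()
    return cost.mean().item()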
§.§ Environmental Sciences
rl algorithms can help address climate change by optimizing the behavior of systems and technologies for reducing greenhouse gas emissions and mitigating the impacts of climate change <cit.>. These algorithms can learn and adapt to multiple constraints, optimizing performance without compromising productivity. However, in such settings, rl algorithms must rely on past context stored in memory, and integrating prior knowledge is crucial for their success, suggesting the use of transformers.
<cit.> formulate closed-loop reservoir management as a pomdp. In subsurface flow settings, such as oil reservoirs, the goal is to extract as much oil as possible while minimizing costs and environmental impact. However, this requires making decisions about well pressure settings, often complicated by geological model uncertainties. This work models the rl policy with ppo, using temporal convolution and gated transformer blocks for an efficient and effective representation of the state. The framework is trained with data generated from flow simulations across an ensemble of prior geological models. After appropriate training, the policy instantaneously maps flow data observed at wells to the optimal pressure settings. This approach helps reduce computational costs and improve decision-making in subsurface flow settings.
<cit.> introduce a transformer-based multi-agent actor-critic framework (T-MAAC) that leverages marl algorithms to stabilize voltage in power distribution networks. This framework recognizes the need for coordination among multiple units in the grid to handle the rapid changes in power systems resulting from the increased integration of renewable energy. The proposed approach introduces a transformer-based actor that takes the grid state representation as input and outputs the maximum reactive power ratio that each agent in the power distribution network can generate. Subsequently, the critic approximates the global Q-values using the self-attention mechanism to model the correlation between agents across the entire grid. The policy is reinforced through feedback in the form of reward, aiming to control voltage within a safe range while minimizing power loss in the distribution network. This approach consistently enhances the effectiveness of the active voltage control task.
§.§ Scheduling
The scheduling problem involves determining the optimal arrangement of tasks or events within a specified time frame while considering constraints such as resource availability or dependencies between tasks <cit.>. This problem can arise in various contexts, such as scheduling jobs in manufacturing or optimizing computer resource usage, and can be approached using various techniques <cit.>. Transformers are now being used to solve scheduling problems.
The job-shop scheduling problem (JSSP) is a classical NP-hard problem that involves scheduling a set of jobs on a set of machines, where each job has to be processed on each machine exactly once, subject to various constraints. An rl approach for solving the JSSP using the dgerd transformer is proposed by <cit.>. This work uses the attention mechanism and disjunctive graph embedding to model the JSSP, which allows complex relationships between jobs and machines to be captured. In the context of the JSSP, the attention module learns to prioritize certain jobs or machines based on their importance or availability, and can thereby generate more efficient and robust schedules. The disjunctive graph embedding converts the JSSP instance into a graph representation that captures its structural properties, enabling better generalization and reducing over-fitting. This acts as input to the dgerd transformer, which consists of a parallel-computing encoder and a recurrent-computing decoder. The encoder takes the disjunctive graph embedding of the JSSP instance and generates a set of hidden representations that capture the relevant features of the input. These hidden representations are then fed to the decoder, which generates an output schedule sequentially. The policy is optimized using feedback from the environment in the form of makespan (the length of time that elapses from the start of work to the end) and tardiness penalties. This helps in generating schedules that are both fast and reliable.
§.§ Trading
Stock portfolio optimization involves choosing the optimal combination of assets to obtain the highest possible returns while minimizing risk <cit.>. This process can be challenging due to the various factors that can impact a portfolio's performance <cit.>, which include market conditions, economic events, and changes in the value of individual stocks. Various techniques can be used to optimize a portfolio, including modern portfolio theory and optimization algorithms <cit.>, and rl is one such approach to automate the trading process. For this application, rl involves training a model to make trading decisions based on historical data and market conditions to maximize the portfolio's return over time.
Although past performance may not indicate future results, data-driven approaches rely on past features to model a particular stock's expected future performance. This is because various historical data points, such as price trends, trading volumes, and market sentiment, may hint at a stock's future performance <cit.>. In portfolio optimization, it is necessary to consider both short-term and long-term trends <cit.>. The transformer architecture is well-suited for this task.
The first application of transformers in portfolio selection, as introduced by <cit.>, involved the rat. This uses an encoder-decoder transformer architecture to model the rl policy. The encoder takes in the sequential price series of assets, such as stocks and cryptocurrencies, as the input state. It performs sequential feature extraction, comprising a sequential attention layer for capturing patterns in asset prices and a relation attention layer for capturing correlations among assets. The decoder resembles the encoder, with an additional decision-making layer that incorporates leverage and enables accurate decisions, including short sales, for each asset. The final action is determined by combining the initial portfolio vector, the short sale vector, and the reinvestment vector. The agent then receives reward-based feedback, measured as the log return of the portfolio. To evaluate the proposed method, real-world cryptocurrency and stock datasets are used and compared against state-of-the-art portfolio selection methods. The results demonstrate a significant improvement over existing approaches.
§.§ Hyper-Parameter Optimization
hpo involves finding the optimal set of hyper-parameters for training a machine learning model. Commonly used hyper-parameters include the learning rate, batch size, number of hidden units in a neural network, and activation function. Finding the best combination of these hyper-parameters can be challenging for large models due to their correspondingly large search space <cit.>. Manually setting hyper-parameter values is fast but requires expertise and domain knowledge <cit.>. Automated techniques like random search, grid search, or Bayesian optimization can find good hyper-parameter combinations <cit.>, but minimizing the overall computational cost remains a challenge. Such auto-tuners rarely perform well for complex tasks and are prone to errors as model complexity increases <cit.>.
ame <cit.> is a transformer-based search algorithm that tackles these challenges by enhancing hyper-parameter selection. ame utilizes rl and addresses hpo without relying on distribution assumptions. The agent, or searcher, is modeled using gtr-xl and learns a series of state-to-action mappings based on rewards. In this context, the state refers to the combination of evaluated configurations, while an action corresponds to the new configuration chosen by the agent from the search space. Utilizing gtr-xl improves the ability to capture relationships among different configurations through memory mechanisms and multi-head attention, thereby enabling attentive sampling. The agent is trained using feedback in the form of rewards, which promotes the generation of high-performance configurations and penalizes those leading to reduced performance. Consequently, it effectively locates high-performance configurations within a vast search space. Results demonstrate that the ame algorithm surpasses other hpo methods such as Bayesian optimization, evolutionary algorithms, and random search in terms of adaptability to diverse tasks, efficiency, and stability.
§ LIMITATIONS
As discussed above, transformers are gradually being integrated into rl for various applications. Despite these advances, some limitations impede their widespread use. This section details these limitations and provides insights for future research.
Balancing Local & Global Context. In rl, global contextual information is required for efficient high-level planning <cit.>. This information is combined with additional nearby details, known as the local context, to predict low-level actions precisely. As detailed by <cit.>, transformers may not be as effective as other models in capturing local context. This limitation is mainly because of the self-attention mechanism, which compares queries and keys for all elements in a sequence using the dot product. This point-wise comparison does not directly consider the local context for each sequence position, which may lead to confusion due to noisy local points. Recent studies <cit.> inspired by cnn have proposed modifications to the original attention mechanism to balance the local and global context more effectively. These approaches include local window-based boundary-aware attention, allowing the model to focus on a small window of nearby details and the global context when making predictions.
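One such modification can be sketched as a banded attention mask that restricts each query to a local neighborhood of keys (optionally combined with a few global tokens). The snippet below follows the boolean attn_mask convention of torch.nn.MultiheadAttention, and the sequence length and window size are arbitrary examples.

import torch

def local_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean attention mask restricting each query to keys within a local window.
    True entries are disallowed, following nn.MultiheadAttention's attn_mask convention."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() > window

# Example: pass as `attn_mask` to torch.nn.MultiheadAttention to obtain banded
# (local) self-attention over a sequence of length 512 with a window of 16.
mask = local_window_mask(512, 16)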
Weak Inductive Bias. cnn and lstm <cit.> models have a strong inductive bias toward the dataset's structure, which helps narrow the search space and leads to faster training <cit.>. This makes them better suited for situations with less training data. However, transformers have a relatively weak inductive bias, making them more capable of finding general solutions <cit.> but more susceptible to over-fitting, especially when less data is available. This limitation can be a significant challenge in rl, where training a policy already requires millions of trajectories. Furthermore, learning models like the decision transformer require collecting trajectories from learned policies, which can be even more challenging. One approach to counter the weaker inductive bias in transformers is to use foundation models <cit.>. Foundation models are pre-trained on large and diverse datasets, which allows them to learn general patterns that can be applied to a wide range of downstream tasks. By fine-tuning the pre-trained model on a smaller task-specific dataset, a foundation model can achieve state-of-the-art results with less data.
Quadratic Complexity. The self-attention mechanism of transformers becomes more computationally expensive as input sequence length increases due to the quadratic increase in pairwise comparisons between tokens <cit.>. This limitation, along with hardware and model size constraints, restricts the ability of transformers to process longer input sequences, making them unsuitable for specific tasks that require substantial amounts of contextual information, like document summarization or genome fragment classification. This limitation can also pose challenges in rl for applications requiring extended temporal modeling. However, recent works <cit.> have provided methods to reduce this cost to linear or sub-quadratic, providing new possibilities for using transformers in applications that require longer input sequences.
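As an example of such a sub-quadratic variant, the sketch below implements a kernelized (linear-complexity) attention in the style of linear transformers, where the softmax is replaced by a feature map so that the cost grows linearly in the sequence length. This is only one of several proposed approximations and ignores causal masking.

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps: float = 1e-6):
    """Kernelized attention: softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V),
    so the cost is linear in sequence length. q, k, v: (batch, seq, dim)."""
    phi_q = F.elu(q) + 1.0                                # elu(x)+1 feature map
    phi_k = F.elu(k) + 1.0
    kv = torch.einsum("bsd,bse->bde", phi_k, v)           # (batch, dim, dim)
    normalizer = torch.einsum("bsd,bd->bs", phi_q, phi_k.sum(dim=1)) + eps
    return torch.einsum("bsd,bde->bse", phi_q, kv) / normalizer[..., None]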
§ CONCLUSION
This survey explored the diverse uses of transformers in rl, including representation learning, reward modeling, transition function modeling, and policy learning. While the original transformer architecture has limitations, it can be modified for many rl applications. We showcased the advances in transformers that have broadened the scope of rl to real-world problems in robotics, drug discovery, stock trading, and cloud computing. Finally, we discussed the current limitations of transformers in rl and ongoing research in this field. Given its versatility in addressing challenges such as partial observability, credit assignment, interpretability, and unstable training — issues commonly encountered in traditional rl — we anticipate that the transformer architecture will continue to gain popularity in the rl domain.
Acknowledgement. We thank CIFAR, Google, CMLabs for funding the project, and Vincent Michalski for the valuable feedback.
|
http://arxiv.org/abs/2307.04111v1 | 20230709070831 | Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication | [
"José Miguel Mateos-Ramos",
"Christian Häger",
"Musa Furkan Keskin",
"Luc Le Magoarou",
"Henk Wymeersch"
] | eess.SP | [
"eess.SP"
] |
Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication
José Miguel Mateos-Ramos, Student Member, IEEE,
Christian Häger, Member, IEEE,
Musa Furkan Keskin, Member, IEEE,
Luc Le Magoarou, Member, IEEE,
Henk Wymeersch, Senior Member, IEEE
This work was supported, in part, by a grant from the Chalmers AI Research Center Consortium (CHAIR), by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), the Swedish Foundation for Strategic Research (SSF) (grant FUS21-0004, SAICOM), Hexa-X-II, part of the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101095759., and Swedish Research Council (VR grant 2022-03007). The work of C. Häger was also supported by the Swedish Research Council under grant no. 2020-04718.
José Miguel Mateos-Ramos, Christian Häger, Musa Furkan Keskin and Henk Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Sweden (email: [email protected]; [email protected]; [email protected]; [email protected]).
Luc Le Magoarou is with INSA Rennes, CNRS, IETR - UMR 6164, F-35000, Rennes, France (email: [email protected]).
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We study model-based end-to-end learning in the context of integrated sensing and communication (ISAC) under hardware impairments.
A monostatic orthogonal frequency-division multiplexing (OFDM) sensing and multiple-input single-output (MISO) communication scenario is considered, incorporating hardware imperfections at the ISAC transceiver antenna array.
To enable end-to-end learning of the ISAC transmitter and sensing receiver, we propose a novel differentiable version of the orthogonal matching pursuit (OMP) algorithm that is suitable for multi-target sensing.
Based on the differentiable OMP, we devise two model-based parameterization strategies to account for hardware impairments: (i) learning a dictionary of steering vectors for different angles, and (ii) learning the parameterized hardware impairments.
For the single-target case, we carry out a comprehensive performance analysis of the proposed model-based learning approaches, a neural-network-based learning approach and a strong baseline consisting of least-squares beamforming, conventional OMP, and maximum-likelihood symbol detection for communication.
Results show that learning the parameterized hardware impairments offers higher detection probability, better angle and range estimation accuracy, lower communication symbol error rate (SER), and exhibits the lowest complexity among all learning methods.
Lastly, we demonstrate that learning the parameterized hardware impairments is scalable also to multiple targets, revealing significant improvements in terms of ISAC performance over the baseline.
Hardware impairments, integrated sensing and communication (ISAC), joint communication and sensing (JCAS), machine learning, model-based learning, orthogonal matching pursuit (OMP).
§ INTRODUCTION
Next-generation wireless communication systems are expected to operate at higher carrier frequencies to meet the data rate requirements necessary for emerging use cases such as smart cities, e-health, and digital twins for manufacturing <cit.>. Higher carrier frequencies also enable new functionalities, such as ISAC. ISAC aims to integrate radar and communication capabilities in one joint system, which enables hardware sharing, energy savings, communication in high-frequency radar bands, and improved channel estimation via sensing-assisted communications, among other advantages <cit.>.
ISAC has been mainly considered by means of dual-functional waveforms. For instance, radar signals have been used for communication <cit.>, while communication waveforms have proven to yield radar-like capabilities <cit.>. Furthermore, optimization of waveforms to perform both tasks simultaneously has also been studied <cit.>, where the results depend on the cost function to optimize and the ISAC optimization variables. However, conventional ISAC approaches degrade in performance under model mismatch, i.e., if the underlying reality does not match the assumed mathematical models. In particular at high carrier frequencies, hardware impairments can severely affect the system performance and hardware design becomes very challenging <cit.>. This increases the likelihood of model mismatch in standard approaches, and problems become increasingly difficult to solve analytically if hardware impairments are considered.
DL approaches based on large NN have proven to be useful under model mismatch or complex optimization problems <cit.>. DL does not require any knowledge about the underlying models as it is optimized based on training data, which inherently captures the potential impairments of the system.
DL has been investigated in the context of ISAC for a vast range of applications, such as predictive beamforming in vehicular networks <cit.>, waveform design <cit.> and channel estimation <cit.> in IRS-assisted ISAC scenarios, multi-target sensing and communication in THz transmissions <cit.>, or efficient resource management <cit.>.
However, most previous works on DL for ISAC consider single-component optimization, either at the transmitter or receiver. On the other hand, end-to-end learning <cit.> of both the transmitter and receiver has proven to enhance the final performance of radar <cit.> and communication <cit.> systems. End-to-end learning in ISAC was applied by means of an AE architecture in <cit.>, to perform single-target angle estimation and communication symbol estimation, under hardware impairments. This was recently extended to multiple targets in <cit.>, although without considering impairments, where the AE outperformed conventional ESPRIT <cit.> in terms of angle estimation for single- and dual-snapshot transmissions.
Nevertheless, DL approaches often lack interpretability and require large amounts of training data to obtain satisfactory performance.
To overcome the disadvantages of large DL models, MB-ML <cit.> instead parameterizes existing models and algorithms while maintaining their overall computation graph as a blueprint.
This allows training initialization from an already good starting point, requiring less training data to optimize, and typically also offers a better understanding of the learned parameters.
A popular example of MB-ML learning is deep unfolding <cit.>, where iterative algorithms are “unrolled” and interpreted as multi-layer computation graphs.
In the context of sensing, deep unfolding of the fixed-point continuation algorithm with one-sided l_1-norm was applied to angle estimation of multiple targets <cit.>, showing enhanced accuracy with respect to DL and model-based benchmark approaches. In <cit.>, the ISTA was unfolded to perform angle estimation in the presence of array imperfections.
Related to communications, deep unfolding has been applied to massive MIMO channel estimation in <cit.>, where classical steering vector models are used as a starting point and then optimized to learn the system hardware impairments, by unfolding the matching pursuit algorithm <cit.>. This approach was later refined to reduce the required number of learnable parameters in <cit.>.
Previous MB-ML approaches <cit.> exhibit three primary shortcomings that can limit their effectiveness in practical scenarios. Firstly, they focus only on receiver learning; however, end-to-end learning of transmitter and receiver, which holds great potential given its promising performance in model-free DL applications <cit.>, remains unexplored in MB-ML. Secondly, sensing works <cit.> only investigate angle estimation, although range estimation is also required to estimate target locations. Hence, end-to-end MB-ML for multi-target positioning has not been studied before. Finally, while MB-ML has been utilized to address individual challenges related to sensing and communications, its untapped potential to significantly improve system performance in ISAC applications remains undiscovered.
In view of the current literature on DL and MB-ML for ISAC, three questions arise: (i) How can efficient end-to-end MB-ML strategies be developed for multi-target positioning? (ii) What computational and performance benefits can be harnessed by employing MB-ML in ISAC systems compared to large DL models and model-based approaches? (iii) To what extent can ISAC trade-offs be improved under hardware impairments by employing MB-ML strategies compared to large DL models and model-based approaches?
This paper aims to answer the above questions by studying end-to-end MB-ML for ISAC, focusing on the effect of hardware impairments in the ISAC transceiver ULA.
Considering a MIMO monostatic sensing and MISO communication scenario (as depicted in Fig. <ref>), we propose novel end-to-end MB-ML strategies for joint optimization of the ISAC transmitter and sensing receiver, suitable for both single- and multi-target scenarios.
Building upon our preliminary analysis in <cit.>, the main contributions of this work can be summarized as follows:
* Multi-target position estimation via end-to-end learning of OFDM ISAC systems:
For the first time in the literature, we investigate end-to-end learning of OFDM ISAC systems under hardware impairments at the ISAC ULA. To combat these hardware imperfections, we introduce novel learning architectures to simultaneously optimize the ISAC beamformer and sensing receiver. OFDM transmission enables joint angle and range (and, hence, position) estimation of multiple targets, significantly extending the single-carrier models and methods in our previous work <cit.>, and the recent works <cit.>.
* MB-ML via differentiable OMP:
Expanding upon the foundation laid by <cit.>, we propose a differentiable version of the OMP algorithm that is suitable for single- and multi-target sensing.
This new algorithm allows for end-to-end gradient-based optimization, where we consider two different MB-ML parameterization approaches.
The first approach learns a dictionary of steering vectors at each OMP iteration, extending our results in <cit.> to joint range-angle estimation and multiple targets.
The second approach is new compared to <cit.> and directly learns the parameterized ULA impairments at each iteration.
This offers the advantage of drastically reducing the number of parameters to be learned.
* Single- and multi-target performance comparison and ISAC trade-off characterization:
We first consider the single-target case (corresponding to one OMP iteration) and compare different solutions based on the extent of model knowledge: (i) NNBL[Note that the neural-network architectures in <cit.> do not directly apply to the scenario considered here due to the use of OFDM signals.], representing no knowledge of the system model, (ii) the two MB-ML approaches, where model knowledge is utilized, but impairments are learned, and (iii) a strong baseline, which fully relies on the mathematical description of the system model under no hardware impairments.
Our results show that under hardware impairments, the new MB-ML ULA impairment learning outperforms all other approaches in terms of target detection and range-angle estimation, with fewer trainable parameters.
Lastly, we show that impairment learning scales smoothly also to multiple targets, where it achieves better sensing and communication performance than the baseline.
In the rest of this paper, we first describe the mathematical ISAC system model in Sec. <ref>. Then, we describe the two approaches to perform target positioning and communication:
the baseline in Sec. <ref>, and MB-ML in Sec. <ref>. The main ISAC results are presented and discussed in Sec. <ref> before the concluding remarks of Sec. <ref>.
Notation. We denote column vectors as bold-faced lower-case letters, a, and matrices as bold-faced upper-case letters, A. A column vector whose entries are all equal to 1 is denoted as 1. The identity matrix of size N× N is denoted as I_N. The transpose and conjugate transpose operations are denoted by (·)^ and (·)^, respectively. The i-th element of a vector and the (i,j)-th element of a matrix are denoted by [a]_i and []_i,j. The element-wise product between two matrices is denoted by ⊙, while ⊘ denotes element-wise division, and ⊗ denotes the Kronecker product. · denotes matrix vectorization operator. Sets of elements are enclosed by curly brackets and intervals are enclosed by square brackets. The set {x∈|x≥0} is denoted as _≥0. The cardinality of a set 𝒳 is denoted by 𝒳. The uniform distribution is denoted by , and denotes the circularly-symmetric complex distribution. The Euclidean vector norm is represented by ‖·‖_2, while the matrix Frobenius norm is denoted by ‖·‖_F. The indicator function is denoted by 𝕀{·}.
§ SYSTEM MODEL
This section provides the mathematical models for the received sensing and communication signals, the ISAC transmitted signal and the hardware impairments. In Fig. <ref>, a block diagram of the considered ISAC system is depicted.
§.§ Multi-target MIMO Sensing
We consider an ISAC transceiver consisting of an ISAC transmitter and a sensing receiver sharing the same ULA of K antennas, as shown in Fig. <ref>.
The transmitted signal consists of an OFDM waveform across S subcarriers, with an inter-carrier spacing of Hz. In the sensing channel, we consider at most possible targets. Then, the backscattered signal impinging onto the sensing receiver can be expressed over antenna elements and subcarriers as <cit.>
= 1/√(S)ψ_t (θ_t) ^(θ_t) [() ⊙(τ_t)]^ + W,
where ∈KS collects the observations in the spatial-frequency domains, T ∼{0,...,} is the instantaneous number of targets in the scene,
and ψ_t ∼(0,^2) represents the complex channel gain of the t-th target. The steering vector of the ISAC transceiver ULA for an angular direction θ is, under no hardware impairments, [(θ)]_k= exp(- 2 π (k-(K-1)/2) d sin (θ ) / λ), k=0,...,K-1, with d = λ / 2, λ = c/f_c, c is the speed of light in vacuum and f_c is the carrier frequency[In case of different ULAs for transmitting and receiving, different steering vector models should be used in (<ref>).]. The precoder ∈ℂ^K permits to steer the antenna energy into a particular direction. Target ranges are conveyed by (τ_t) ∈ℂ^S, with [(τ_t)]_s = exp(-j2π s τ_t), s=0,...,S-1, and where τ_t = 2R_t/c represents the round-trip time of the t-th target at R_t meters away from the transmitter. Moreover, the communication symbol vector () ∈ℂ^S conveys a vector of messages ∈^S, each uniformly distributed from a set of possible messages . Finally, the receiver noise is represented by W, with [W]_i,j∼(0,N_0). Note that if T=0, only noise is received. From the complex channel gain and the noise, we define the integrated sensing SNR across antenna elements as _r = K^2/N_0.
The angles and ranges of the targets are uniformly distributed within an uncertainty region, i.e., θ_t ∼[, ] and R_t ∼[, ]. However, uncertainty regions might change at each new transmission. The position of each target is computed from target angle θ_t and range R_t as
_t = [ R_tcos(θ_t); R_tsin(θ_t) ].
The transmitter and the sensing receiver are assumed to have knowledge of {, , , }. In the considered monostatic sensing setup, the receiver has access to communication data (), which enables removing its impact on the received signal (<ref>) via reciprocal filtering <cit.>
= ⊘^() = α_t (θ_t) ^(τ_t) + ,
where α_t=1/√(S)^(θ_t) ψ_t and = W⊘^().
The goal of the sensing receiver is to estimate the presence probability of each target in the scene, denoted as û∈ [0,1]^, which is later thresholded to provide a hard estimate of the target presence, t̂∈{0,1}^. For all detected targets, the sensing receiver estimates their angles, θ̂∈ [-π/2, π/2]^, and their ranges, R̂∈_≥ 0^, from which target positions can be estimated according to (<ref>).
§.§ MISO Communication
In the considered ISAC scenario, communication and sensing share the same transmitter. We assume that the communication receiver is equipped with a single antenna element. In this setting, the received OFDM signal at the communication receiver in the frequency domain is given by <cit.>
= [()⊙]^(φ) + ,
with ∈ℂ^S denoting the S-point DFT of the channel taps [β_0, β_1, ..., β_L-1,0,...,0], where each tap is distributed as β_l ∼(0,σ_l^2). Complex Gaussian noise ∼(0,N_0I_S) is added at the receiver side. The average communication SNR per subcarrier is defined as _c = ∑_l=1^Lσ_l^2/(SN_0).
The communication receiver is assumed to be always present at a random position, such that φ∼[, ]. The transmitter has also knowledge of {, }. The receiver is fed with the CSI = ^(φ).
The goal of the receiver is to retrieve the communication messages that were transmitted.
§.§ ISAC Transmitter
ISAC scenarios require the use of a radar-communication beamformer to provide adjustable trade-offs between the two functionalities. Using the multi-beam approach from <cit.>, we design the ISAC beamformer, based on a sensing precoder _r ∈ℂ^K and a communication precoder _c∈ℂ^K, as
(η,ϕ) = √(P)√(η)_r + √(1-η)e^ϕ_c/‖√(η)_r + √(1-η)e^ϕ_c ‖ ,
where P is the transmitted power, η∈ [0,1] is the ISAC trade-off parameter, and ϕ∈ [0,2 π) is a phase ensuring coherency between multiple beams.
By sweeping over η and ϕ, we can explore the ISAC trade-offs of the considered system. The sensing precoder _r points to the angular sector of the targets, {, }, whereas the communication precoder _c points to the angular sector of the communication receiver, {, }. In Secs. <ref> and <ref>, we detail how _r and _c are computed for the baseline and MB-ML, respectively. However, the same precoding function is applied for sensing and communication, as represented in Fig. <ref>.
§.§ Hardware Impairments
We study the effect of hardware impairments in the ULA in the ISAC transceiver, which affect the steering vectors of (<ref>), (<ref>), (<ref>). Impairments in the antenna array include mutual coupling, array gain errors, or antenna displacement errors, among others <cit.>. Following the impairment models of <cit.>, we consider two types of impairments:
* Unstructured impairments: In this case, the true steering vector (θ) is unknown for all angles θ, while the methods for beamforming design and signal processing assume the nominal steering vector (θ). If we consider a grid of possible angles with N_θ points, then the steering vectors require K× N_θ complex values to be described.
* Structured impairments: In this case, the steering vector model is known, conditional on an unknown perturbation vector . We can thus write (θ;), where the meaning and dimensionality of depend on the type of impairment. In contrast to the unstructured impairments, the impairments are often described with a low-dimensional vector, independent of N_θ.
[Impact of structured impairments]
Consider the example of inter-antenna spacing errors, where ∈ℂ^K and [(θ; )]_k = exp(- 2 π (k-(K-1)/2) []_k sin (θ ) / λ), k=0,...,K-1.
In Fig. <ref>, the angle-delay map (defined in Sec. <ref>) is depicted under ideal conditions (top) and hardware impairments (bottom), when T = 4 targets are present. The main effect of hardware impairments is to expand target lobes in the angle domain. In the example shown in Fig. <ref>, two targets become indistinguishable due to impairments, and the appearance of spurious lobes hinders the detection of the target at the highest range. Another effect of hardware impairments is that the magnitude of the target lobes is decreased, which makes them harder to differentiate from noise. These results highlight the relevance of addressing hardware impairments in our sensing scenario.
§ BASELINE
In this section, we derive the baseline method according to model-based benchmarks, which will later be compared with end-to-end learning approaches in Sec. <ref>.
§.§ ISAC Beamformer
We design the baseline for the precoding mapping in Fig. <ref>, which affects both the sensing precoder _r, and the communication precoder _c in (<ref>), by resorting to the beampattern synthesis approach in <cit.>.
We define a uniform angular grid covering [-π/2, π/2] with grid locations {θ_i}_i=1^. For a given angular interval (i.e., = [, ] for communications, and = [, ] for sensing),
we denote by ∈1 the desired beampattern over the defined angular grid, given by
[]_i =
K, if θ_i ∈θ_interval
0, otherwise.
The problem of beampattern synthesis can then be formulated as
min__bs‖ - ^_bs‖_2^2, where = [(θ_1) … (θ_)] ∈K denotes the transmit steering matrix evaluated at the grid locations. This least-squares (LS) problem
has a simple closed-form solution
_bs = (^^)^-1^,
which yields, after normalization according to the transmit power constraints, a communication-optimal beam _c or a radar-optimal beam _r, which can then be used to compute the joint ISAC beam in (<ref>).
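For illustration, a NumPy sketch of this baseline beamformer is given below, assuming the nominal (impairment-free) steering vector model, unit-norm precoders before the power scaling, and angles expressed in radians; the grid construction and function names are ours.

import numpy as np

def steering_vector(theta, K, d_over_lambda=0.5):
    """Nominal ULA steering vector for angle theta (radians), K antennas."""
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-2j * np.pi * k * d_over_lambda * np.sin(theta))

def ls_beamformer(theta_grid, interval, K):
    """Least-squares beampattern synthesis: the desired pattern equals K inside
    the angular interval and 0 outside, and the precoder is (A A^H)^{-1} A b."""
    A = np.stack([steering_vector(t, K) for t in theta_grid], axis=1)  # K x N_theta
    b = np.where((theta_grid >= interval[0]) & (theta_grid <= interval[1]), K, 0.0)
    f = np.linalg.solve(A @ A.conj().T, A @ b)
    return f / np.linalg.norm(f)                                       # unit-norm precoder

def isac_beamformer(f_r, f_c, eta, phi, P=1.0):
    """Multi-beam ISAC precoder combining the sensing and communication beams."""
    f = np.sqrt(eta) * f_r + np.sqrt(1 - eta) * np.exp(1j * phi) * f_c
    return np.sqrt(P) * f / np.linalg.norm(f)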
§.§ Multi-target Sensing Receiver
We propose to formulate the multi-target sensing problem based on the received signal in (<ref>) as a sparse signal recovery problem <cit.> and employ the OMP algorithm <cit.> to solve it, which represents our model-based benchmark.
To construct an overcomplete dictionary for OMP, we specify an angular grid {θ_i}_i=1^ and a delay grid {τ_j}_j=1^ depending on the region of interest for target detection (i.e., the a priori information {, , , }). Then, a spatial-domain and a frequency-domain dictionary covering angular and delay grids can be constructed as
_a = [ (θ_1) ⋯ (θ_) ] ∈K ,
_d = [ (τ_1) ⋯ (τ_) ] ∈S .
Using (<ref>), the problem of multi-target sensing based on the observation in (<ref>) becomes a sparse recovery problem
= ∑_i=1^∑_j=1^ []_i,j [_a]_:, i ([_d]_:, j)^ + ,
where ∈. Here, the goal is to estimate the T-sparse vector ∈1 under the assumption T ≪. The baseline OMP algorithm <cit.> to solve this problem is summarized in Algorithm <ref>, which will serve as a foundation to the proposed MB-ML approaches in Sec. <ref>.
§.§ Communication Receiver
We assume that the communication receiver has access to the CSI = ^(φ). Hence, the received signal can be expressed as = ⊙() +. Optimal decoding in this case corresponds to subcarrier-wise maximum likelihood estimation according to
_s = min_m_s ∈[]_s - []_s x(m_s)^2,
for s=0,...,S-1. Since communication decoding is already optimal, given the CSI, learning methods described in Sec. <ref> apply (<ref>) for communication message estimation.
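A NumPy sketch of this subcarrier-wise maximum-likelihood detector is shown below; the constellation array stands for the symbol mapping x(·), and the variable names are illustrative.

import numpy as np

def ml_detect_per_subcarrier(y_c, h, constellation):
    """Subcarrier-wise ML symbol detection given perfect CSI: for each subcarrier s,
    pick the message minimizing |y_s - h_s x(m)|^2.
    y_c, h: length-S complex arrays; constellation: array of |M| complex symbols."""
    distances = np.abs(y_c[:, None] - h[:, None] * constellation[None, :]) ** 2
    return np.argmin(distances, axis=1)          # estimated message index per subcarrier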
§ MODEL-BASED LEARNING
MB-ML is inspired by the baseline of Sec. <ref>, although we need to develop differentiable beamforming and estimation algorithms that permit end-to-end learning, as well as a suitable loss function for multiple targets. This section describes the two MB-ML methods developed for multi-target sensing: (i) dictionary learning, which learns a dictionary of steering vectors for different angles as in <cit.>, and is suitable for unstructured impairments, as defined in Sec. <ref>; (ii) impairment learning, which directly learns a parameterization of the hardware impairments and thus is suitable for structured impairments, also defined in Sec. <ref>. This section also defines the loss function to train them.
§.§ Beamformer
MB-ML follows the same operations (<ref>) and (<ref>) to compute the precoding vector _r or _c, given an angular interval . Dictionary learning considers ∈ℂ^K× N_θ from (<ref>) as a free learnable parameter to account for unstructured impairments, comprising KN_θ complex parameters.
The newly proposed impairment learning instead considers as a free learnable parameter the vector ∈ℂ^K, which represents a parameterization of the structured hardware impairments. From , the dictionary of steering vectors is computed as () = [(θ_1;) … (θ_;)], such that () is used in (<ref>) instead of . Impairment learning reduces the number of learnable parameters by taking into account the structured hardware impairments of Sec. <ref>. Indeed, it has only K complex parameters, which can be several orders of magnitude fewer than the dictionary learning approach, since the dictionary of steering vectors needs a relatively large number of columns N_θ to perform well. Note that the operation in (<ref>), which involves the learnable parameters of both MB-ML methods, is already differentiable.
§.§ Sensing Receiver
Range-angle estimation of targets is based on Algorithm <ref>.
However, the max operation in line <ref> of Algorithm <ref> is not differentiable, so no loss gradient could be backpropagated through it during MB-ML training.
To circumvent this issue, we develop a differentiable algorithm which is represented in Fig. <ref>. The difference with the conventional OMP in Algorithm <ref> is that we replace the operations of lines <ref>-<ref> by the following steps:
* max_i,j: We still perform this nondifferentiable operation as a temporary result to obtain the final estimation. Note that is based on an angular grid ={θ_i}_i=1^ and a delay grid ={τ_j}_j=1^. In line <ref> in Algorithm <ref>, this calculation yields the estimated angle-delay pair, which serves as foundation for the following step of the differentiable OMP algorithm.
* Mask the angle-delay map, , based on angle and range resolution: in order to consider elements of that solely correspond to a single target, we select the elements around the maximum of the angle-delay map that are within the angle and range resolution. This operation also helps to obtain a differentiable angle-delay estimation, similar to line <ref> in Algorithm <ref>.
We create the mask based on the angle and range resolution, since the resolution determines the minimum angular or range separation below which two targets are indistinguishable.
The angle and range resolutions in our case are
≈2/K ≈c/2B = c/2S,
with B the bandwidth of the transmitted signal. The resolutions are considered in terms of the number of pixels of the angle-delay map, depending on and .
* Softmax: We apply a softmax operation to the masked matrix from the previous operation, so that the sum of its elements is equal to 1. Unlike line <ref> in Algorithm <ref>, the softmax function is differentiable, enabling end-to-end learning.
* Weighted sum: A weighted sum of and is implemented, where each weight corresponds to the output of the previous softmax operation, and they represent an estimate of the probability that a certain angle-delay pair is the true value. From this interpolation operation, an angle-delay pair (θ̂_I, τ̂_I) is obtained, which may not be included in or . From this computation, the angle-delay pairs are updated, as in line <ref> in Algorithm <ref>. Note that these four first steps (center column of Fig. <ref>), amount to looking in the dictionary for the most correlated atoms with the input, and then estimating the angle-delay pair as a convex combination of the corresponding angle-delays on the grid. This kind of similarity-based learning has been applied to other tasks within MIMO systems <cit.>, and is reminiscent of the attention mechanism <cit.>.
* Compute estimated spatial-domain and frequency-domain vectors (θ̂_I), (τ̂_I): unlike line <ref> in Algorithm <ref>, we recompute the spatial-domain and frequency-domain vectors based on the estimated angle-delay pair of the previous step, since the estimated angle-delay pair (_I, _I) might not be contained in (, ). The sets _a and _d are updated with the new vectors, as represented in Fig. <ref>.
After the previous steps, differentiable OMP continues as lines <ref>-<ref> in Algorithm <ref> to obtain the new residual ^(I+1), as depicted in Fig. <ref>. This differentiable OMP algorithm still involves looking over a grid of possible angles. We utilize as the dictionary of angles _a the same matrices and () from the beamformer of Sec. <ref> to compute , which allows parameter sharing between the co-located transmitter and receiver. The gradient of the loss function does not flow through the max operation, as illustrated in Fig. <ref>. To further improve memory efficiency, gradient flow is also discarded when computing the new residual ^(I+1) from the estimates (_I, _I).
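The softmax-weighted interpolation (steps 3 and 4 above) can be sketched in PyTorch as follows for a single OMP iteration, assuming the peak does not lie at the border of the angle-delay map; the window half-widths (derived from the angle and range resolutions) and the temperature are illustrative, and gradients flow only through the softmax-weighted sum, not through the argmax.

import torch

def soft_angle_delay_estimate(adm, theta_grid, tau_grid, half_widths, temperature=1.0):
    # adm: non-negative angle-delay map of shape (N_theta, N_tau)
    # theta_grid: (N_theta,), tau_grid: (N_tau,), half_widths: window half-sizes in pixels
    flat_idx = int(torch.argmax(adm))                 # hard (non-differentiable) peak location
    i, j = flat_idx // adm.shape[1], flat_idx % adm.shape[1]
    di, dj = half_widths
    window = adm[i - di:i + di + 1, j - dj:j + dj + 1]
    w = torch.softmax(window.flatten() / temperature, dim=0).reshape(window.shape)
    theta_hat = (w.sum(dim=1) * theta_grid[i - di:i + di + 1]).sum()  # soft angle estimate
    tau_hat = (w.sum(dim=0) * tau_grid[j - dj:j + dj + 1]).sum()      # soft delay estimate
    return theta_hat, tau_hat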
§.§ Loss Function
As loss function for MB-ML multi-target sensing, we select the GOSPA loss from <cit.>. In our case, the GOSPA loss is defined as follows. Let γ>0, 0<μ≤2 and 1≤ p < ∞. Let 𝒫 = {p_1, ..., p_|𝒫|} and 𝒫̂ = {p̂_1, ..., p̂_|𝒫̂|} be the finite subsets of ℝ^2 corresponding to the true and estimated target positions, respectively, with 0≤|𝒫|≤ T_max and 0≤|𝒫̂|≤ T_max. Let d(p, p̂) = ‖p - p̂‖_2 be the distance between a true and an estimated position, and d̄(p, p̂) = min(d(p, p̂), γ) be the cut-off distance. Let Π_n be the set of all permutations of {1,...,n} for any n ∈ ℕ, with any element π ∈ Π_n being a sequence (π(1),...,π(n)). For |𝒫| ≤ |𝒫̂|, the GOSPA loss function is defined as
d_p^(γ,μ)(𝒫, 𝒫̂) = ( min_π∈Π_|𝒫̂| ∑_i=1^|𝒫| d̄(p_i, p̂_π(i))^p + γ^p/μ (|𝒫̂|-|𝒫|) )^1/p.
If |𝒫| > |𝒫̂|, we set d_p^(γ,μ)(𝒫, 𝒫̂) = d_p^(γ,μ)(𝒫̂, 𝒫). The parameter p is proportional to the penalization of outliers, and the value of γ dictates the maximum allowable distance error. The role of μ, together with γ, is to control the detection penalization. This loss function becomes suitable for multiple targets, since it considers the association between estimated and true positions that gives the minimum loss, tackling the data association problem of multiple targets. In terms of target detection, we follow the same principle as the baseline, i.e., we stop the OMP algorithm when the maximum of the angle-delay map drops below a threshold. Sweeping this threshold over different values yields a trade-off in terms of detection and false alarm rates.
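As an illustration, a possible NumPy/SciPy implementation of the GOSPA metric defined above is sketched below; it replaces the explicit minimum over permutations with an optimal assignment, which yields the same value. The function and argument names are ours, and the defaults mirror the parameter values used later in this paper. Note that this version is intended for evaluation; training with GOSPA as a loss requires a differentiable implementation of the per-pair distances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gospa(P, P_hat, gamma=90.0, mu=2.0, p=2.0):
    """GOSPA between true positions P and estimates P_hat.

    P, P_hat: arrays of shape (n, 2) and (m, 2) with target positions.
    Returns the scalar value d_p^(gamma, mu)(P, P_hat).
    """
    P = np.asarray(P, dtype=float).reshape(-1, 2)
    P_hat = np.asarray(P_hat, dtype=float).reshape(-1, 2)
    if len(P) > len(P_hat):          # definition is symmetric in this case
        P, P_hat = P_hat, P
    if len(P) == 0:                  # only the cardinality term remains
        return (gamma**p / mu * len(P_hat)) ** (1.0 / p)

    # Cut-off distances between every true/estimated pair.
    d = np.linalg.norm(P[:, None, :] - P_hat[None, :, :], axis=-1)
    d_bar = np.minimum(d, gamma)

    # Optimal assignment replaces the explicit minimum over permutations.
    row, col = linear_sum_assignment(d_bar**p)
    loc_term = (d_bar[row, col] ** p).sum()
    card_term = gamma**p / mu * (len(P_hat) - len(P))
    return (loc_term + card_term) ** (1.0 / p)
```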
§ RESULTS
This section details the simulation parameters and the results for single- and multi-target ISAC.[Source code to reproduce all numerical results in this paper will be made available at <https://github.com/josemateosramos/MBE2EMTISAC> after the peer-review process.]
Four methods will be evaluated and compared:
* The model-based baseline from Sec. <ref>, working under the mismatched assumption of no hardware impairments.
* A NNBL method, extending <cit.>, which replaces the precoding and sensing estimation mappings in Fig. <ref> by NN, and can operate in the absence of any knowledge of the ISAC system (including the hardware impairments). More details can be found in Appendix <ref>.
* Dictionary learning from Sec. <ref>, where the unstructured impaired steering vectors (as a function of the angle θ) are learned for both precoding and sensing.
* Impairment learning from Sec. <ref>, where the structured impairment vector d is learned for precoding and sensing.
§.§ Simulation Parameters
We consider a ULA of K=64 antennas, S=256 subcarriers, and a subcarrier spacing of Δ_f = 120 kHz. We set the maximum number of targets in the scene to T_max = 5. The transmitted power is P=1 and the carrier frequency is f_c = 60 GHz. The sensing SNR across antenna elements was set to K^2/N_0 = 15 dB, and the average communication SNR per subcarrier was fixed to ∑_l=1^L σ_c,l^2/(S N_0) = 20 dB. The number of channel taps in the communication channel is L=5, with an exponential power delay profile, i.e., σ_l^2 = exp(-l), l=0,...,L-1. The power delay profile is later normalized to obtain the desired average SNR. The number of grid points for angle and range is set to 720 and 200, respectively.
To train the learning methods for a wide range of angles, we randomly draw the target angular sector as in <cit.>, i.e.,
we draw a realization of the sector center uniformly in [-60°, 60°] and of the sector width Δ uniformly in [10°, 20°] for each new transmission. The target angular sector then extends from the center minus Δ/2 to the center plus Δ/2. The communication angular sector is set to [30°, 50°] and the range uncertainty region to [10, 190] m for all transmissions.
For hardware impairments, we consider the model of <cit.>, i.e., we assume structured hardware impairments where the antenna element spacings in the ULA are drawn as d ∼ 𝒩((λ/2) 1, σ^2 I_K). We select a standard deviation of σ = λ/25 = 0.2 mm. MB-ML is initialized with the same knowledge as the baseline, i.e., the steering vector models firstly assume that d=(λ/2) 1.
In the GOSPA loss, we set μ=2, as recommended in <cit.>, p=2, and γ = (190-10)/2 = 90 m, i.e., half the extent of the range uncertainty region. The cardinality mismatch term in (<ref>) implies the use of a threshold during training. However, our goal is to train the learning methods regardless of the threshold, and then explore sensing performance by changing the threshold. Hence, during training the actual number of targets T is assumed to be known, which means that |𝒫| = |𝒫̂| = T, and the GOSPA loss during training becomes
d_p^(γ,μ)(𝒫, 𝒫̂) = ( min_π∈Π_|𝒫̂| ∑_i=1^|𝒫| d̄(p_i, p̂_π(i))^p )^1/p.
However, there is no detection penalization term in (<ref>), which implies that the detection probability estimation NN of NNBL cannot be optimized. Hence, we adopt a two-step training approach for NNBL, as follows:
* We first train the precoding and position estimation networks based on the simplified GOSPA loss of (<ref>).
* While freezing the position estimation parameters ξ, we then train the precoding and detection networks by minimizing
d_u^(γ_u,μ)(𝒰, 𝒰̂) = ( min_π∈Π_|𝒰̂| ∑_i=1^|𝒰| d^(γ_u)(u_i, û_π(i))^p )^1/p,
where 𝒰 = {u_1, ..., u_|𝒰|} and 𝒰̂ = {û_1, ..., û_|𝒰̂|} are the true and estimated sets of target probabilities, d^(γ_u)(u_i, û_π(i)) = min(d(u_i, û_π(i)), γ_u), and d(u_i, û_π(i)) = -u_i log(û_π(i)) - (1-u_i) log(1-û_π(i)). That is, we replace the position distance error in (<ref>) with the BCE loss. Note that in (<ref>) we also assume that |𝒰| = |𝒰̂| = T.
The previous two-step training approach was observed to yield better performance, compared to joint training of all NN parameters ε, ξ, ζ based on the sum of the losses (<ref>) and (<ref>).
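A minimal PyTorch-style sketch of this two-step procedure is given below, assuming the network modules and the two loss callables are available; all names are placeholders, and the learning rates are the ones reported in the next paragraph.

```python
import torch

def two_step_training(precoder, pos_net, det_net,
                      batches_step1, batches_step2,
                      gospa_loss, bce_gospa_loss):
    """Two-step NNBL training sketch with placeholder names.

    Step 1 trains the precoder and position network with the simplified
    GOSPA loss; step 2 freezes the position network and trains the
    precoder and detection network with the BCE-based loss.
    """
    opt1 = torch.optim.Adam(
        list(precoder.parameters()) + list(pos_net.parameters()), lr=1e-2)
    for batch in batches_step1:
        opt1.zero_grad()
        gospa_loss(precoder, pos_net, batch).backward()
        opt1.step()

    for p in pos_net.parameters():   # freeze the position-estimation parameters
        p.requires_grad_(False)

    opt2 = torch.optim.Adam(
        list(precoder.parameters()) + list(det_net.parameters()), lr=1e-3)
    for batch in batches_step2:
        opt2.zero_grad()
        bce_gospa_loss(precoder, det_net, pos_net, batch).backward()
        opt2.step()
```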
Network optimization is performed using the Adam optimizer <cit.>, with a batch size of B=3000 and 100,000 training iterations. The learning rate of dictionary and impairment learning was set to 5·10^-3 and 10^-7, respectively. In the two-step training approach for NNBL, 100,000 training iterations are applied to each of the steps. Position estimation training used a learning rate of 10^-2, while target detection utilized 10^-3 as learning rate. The architecture of NNBL is described in Appendix <ref>. NNBL also benefited from using a scheduler, to reduce the learning rate when the loss function has reached a plateau. Details of the scheduler parameters can be found in Appendix <ref>.
§.§ Performance Metrics
Concerning testing, we compute as detection performance metrics a measure of the probability of misdetection and the probability of false alarm, for multiple targets. We use the same definitions as in <cit.>, which correspond to
P_md = 1 - ( ∑_i=1^B min{T_i, T̂_i} ) / ( ∑_i=1^B T_i ),
P_fa = ( ∑_i=1^B ( max{T_i, T̂_i} - T_i ) ) / ( ∑_i=1^B ( T_max - T_i ) ),
where T_i and T̂_i are the true and estimated number of targets in the i-th batch sample, respectively, and T_max is the maximum number of targets. The regression performance is measured via the GOSPA (for multi-target sensing) and the RMSE (for single-target sensing).
As communication performance metric, we use the average SER across subcarriers, computed as
SER = 1/(BS) ∑_i=1^B ∑_j=1^S 𝕀{[m_i]_j ≠ [m̂_i]_j},
with m_i and m̂_i the true and estimated message vectors at the i-th batch sample. All described methods in this paper (baseline of Sec. <ref>, MB-ML of Sec. <ref>, and NNBL) use a QPSK encoder and the message estimation rule in (<ref>).
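The following NumPy sketch illustrates how these detection and communication metrics could be computed from batched predictions; the variable names and the use of the maximum number of targets in the false-alarm normalization reflect our reading of the definitions above.

```python
import numpy as np

def detection_metrics(T_true, T_est, T_max=5):
    """Misdetection and false-alarm rates over a batch.

    T_true, T_est: integer arrays of shape (B,) with the true and
    estimated number of targets per batch sample.
    """
    T_true, T_est = np.asarray(T_true), np.asarray(T_est)
    p_md = 1.0 - np.minimum(T_true, T_est).sum() / T_true.sum()
    p_fa = (np.maximum(T_true, T_est) - T_true).sum() / (T_max - T_true).sum()
    return p_md, p_fa

def symbol_error_rate(msgs_true, msgs_est):
    """Average SER across a batch of (B, S) symbol-index arrays."""
    return np.mean(np.asarray(msgs_true) != np.asarray(msgs_est))

# Example usage with made-up numbers:
# p_md, p_fa = detection_metrics([2, 3, 1], [2, 4, 0])
# ser = symbol_error_rate([[0, 1, 2]], [[0, 3, 2]])
```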
§.§ Single-target ISAC
In single-target ISAC, the maximum number of targets is T_max = 1, which implies that the GOSPA loss function in (<ref>) reduces to the cut-off distance d̄(p, p̂) between the true and estimated positions.
However, in order to compare with our previous work <cit.>, we train MB-ML and the position estimation of NNBL using the MSE loss d(p, p̂)^p = ‖p - p̂‖_2^2, and the detection estimation of NNBL using the BCE loss d(u, û) = -u log(û) - (1-u) log(1-û). Position estimation is assessed by the angle RMSE, √(𝔼[(θ-θ̂)^2]), and the range RMSE, √(𝔼[(R-R̂)^2]).
ISAC performance results are represented in Fig. <ref>, where
we sweep η over [0,1] and ϕ over [0, 7π/4] in (<ref>), taking 8 uniformly spaced values for each. For testing, we fixed the target angular sector to [-40°, -20°][Unless otherwise stated, the authors also tested other values of the angular sector, and the results were qualitatively the same.]. The probability of false alarm was set to 10^-2.
Results show that under no complexity limitations (solid lines) and hardware impairments, the learning methods outperform the baseline in terms of misdetection probability, angle and range estimation, and SER, which implies that the learning methods have adapted to the hardware impairments. Communication performance, even in the case of optimal symbol estimation, is enhanced by the learning approaches, which suggests that the impairments have a significant impact on the optimal communication precoder. In addition, dictionary learning outperforms NNBL for range estimation, although the converse happens for misdetection probability. Impairment learning yields the best performance among all learning methods, and with fewer parameters, which usually implies less training time. Indeed, NNBL is composed of a total of 7.78 million real learnable parameters, while dictionary learning uses 40,080 complex parameters, and impairment learning consists of only K=64 complex parameters.
Under limited complexity, the number of parameters of dictionary learning and NNBL is restricted. We follow the approach of <cit.> and restrict the number of (complex) parameters of dictionary learning by reducing the angular grid size to 156 points, which reduces the dictionary to 9,984 complex parameters. The complexity constraints applied to NNBL are detailed in Appendix <ref>, which decreases the number of real parameters to 10,555. From Fig. <ref>, it is observed that while NNBL drops in performance, especially for angle and range estimation, dictionary learning still yields better results than the baseline. However, dictionary learning also decreased in performance compared to the unconstrained approach, which means that dictionary learning cannot achieve the same performance as impairment learning for the same number of parameters.
Lastly, we test all learning approaches on a scenario that was not encountered during training, to assess their generalization capabilities. Fig. <ref> depicts the performance of the learning methods for a target angular sector of [-20°, 20°], i.e., a span of the angular uncertainty region wider than expected during training. The complexity of the networks is not restricted. The performance of all learning approaches has dropped compared to Fig. <ref>. However, while NNBL performs worse than the baseline, and dictionary learning yields similar results to the baseline, impairment learning is the only approach that still outperforms the baseline. NNBL and dictionary learning appear to overfit to the training data and degrade for unexpected inputs. This means that for new testing scenarios, impairment learning is the learning approach that generalizes best in terms of performance. This is due to the fact that impairment learning is the only method for which parameters are shared between all directions (all columns of the dictionary are affected each time the parameters are updated). Dictionary learning does not exhibit this feature, since each column of the dictionary (corresponding to a direction) is considered an independent set of parameters.
§.§ Multi-target ISAC
Based on the results of Sec. <ref>, impairment learning performs the best among all considered learning methods for the simpler case of single-target ISAC.
Hence, we only consider impairment learning to compare against the baseline for multi-target sensing. The batch size for MB-ML is decreased to B=1500 due to memory restrictions. The number of iterations was also reduced to 25,000, since finding the association between estimated and true data that minimizes the GOSPA loss of (<ref>) increases training time. In addition, ISAC results perform very close to perfect knowledge of impairments, as observed in the following.
We first compare the performance of the differentiable OMP algorithm of Sec. <ref> with the baseline, when hardware impairments are perfectly known. In Fig. <ref>, the sensing performance of both approaches is depicted. Results show that differentiable OMP performs closely to the baseline. The difference in performance might be because the angle dictionary in the baseline only covers the target angular sector, while differentiable OMP uses a fixed dictionary that covers [-π/2, π/2]. However, this allows for efficient parameter sharing in MB-ML. Differentiable OMP takes a weighted sum of angles and ranges, which permits selecting an angle or range outside the predefined dictionaries, unlike the baseline.
The GOSPA loss in Fig. <ref> achieves a minimum for different false alarm probabilities, since it takes into account both position and detection errors. For high false alarm probability, OMP estimates a higher number of targets than the true value, and conversely for low false alarm probability.
Fig. <ref> shows the results of the baseline without impairment knowledge, differentiable OMP with perfect impairment knowledge, and impairment learning. Impairment learning outperforms the baseline, which illustrates the adaptability of impairment learning to antenna imperfections in multi-target sensing. Moreover, the performance is very close to perfect knowledge of the impairments, which suggests that the learned spacing is quite similar to the underlying reality.
In terms of ISAC trade-off, Fig. <ref> presents the ISAC trade-offs in the case of multiple targets when the false alarm probability is 10^-2. In this case, we sweep η in (<ref>) and fix ϕ = 0, since in Figs. <ref> and <ref> we observed that the effect of ϕ is not very significant. Compared to Fig. <ref>, it is observed that impairment learning also outperforms the baseline in terms of communication performance when the impairments are not known, due to the impact of the hardware impairments on the communication precoder.
§ CONCLUSIONS
In this work, we studied the effect of antenna spacing impairments in multi-target ISAC, and different learning approaches to compensate for such impairments. A new efficient MB-ML approach to perform end-to-end learning and impairment compensation was proposed, based on a differentiable OMP algorithm. Simulation results showed that the learning approaches outperform the baseline and can compensate for hardware impairments. Among the learning methods, the newly proposed impairment learning approach outperformed all other considered methods, also exhibiting better generalization capabilities to new testing data, with far fewer parameters to optimize. Simulation results verify that injection of system and impairment knowledge into learning methods improves their performance and reduces their complexity.
§ NNBL
Since the optimal detection and estimation rules might not be tractable, NNBL can be trained based on data to achieve optimality. Moreover, when no information about the impairments is available, NNBL can provide data-driven solutions to account for them. This appendix describes the principles and architecture of the considered NNBL approach.
§.§ Principles
NNBL replaces the precoding and sensing estimation mappings in Fig. <ref> by NNs. The precoding network maps an angular sector, described by its two edge angles, to a 2K-dimensional output representing the precoding vector, where ε denotes the learnable parameters. The NNs in this work operate on real-valued numbers; hence, the output dimension is doubled with respect to the K complex precoder entries. The same mapping is applied to both the sensing and communication angular sectors to obtain the corresponding precoders, which are later used to design the ISAC precoder according to (<ref>).
Sensing estimation is divided into two tasks, each corresponding to a different NN: (i) detection probability estimation, and (ii) position estimation. As input to both NNs, we use the angle-delay map defined in Sec. <ref>, instead of the raw received signal, since we observed a better sensing performance.
In addition to the angle-delay map, the input also includes the a priori information about the target angular sector and range uncertainty region, as shown in Fig. <ref>, to improve network performance.
The output of each NN is task-dependent. The detection probability network outputs a probability vector û with one entry per potential target, corresponding to the probability that each target is present in the scene, which is later thresholded to provide an estimate of the number of targets. The position estimation network outputs a matrix P̂ whose columns represent the position estimate of each potential target. The learnable parameters of the detection and position networks are ζ and ξ, respectively. Both NNs are trained based on the GOSPA loss function of Sec. <ref>.
§.§ NN Architectures
The precoding operation of Fig. <ref> was implemented as an MLP whose input is an angular sector (either the sensing or the communication sector), with 3 hidden layers of 8K neurons and an output layer of 2K neurons, where we recall that K=64 is the number of antennas in the ULA transceiver. The activation function after each layer is the ReLU function, except for the final layer, which contains a normalization layer to ensure a unit-norm precoder output.
For the receiver side, we resort to CNNs given the 2-dimensional nature of the input angle-delay map, as represented in Fig. <ref>. The receiver architecture repeats a set of layers, represented in Fig. <ref>, which we call a residual bottleneck block. This block was inspired by the ResNet architecture <cit.>. A convolutional layer is first introduced with some stride to decrease the number of pixels to process. Then, 2 bottleneck blocks with skipped connections similar to <cit.> follow. However, we reduce the number of activation functions and normalization layers, as suggested in <cit.>. Another residual connection is introduced from the beginning to the end of both bottleneck blocks to help with gradient computation.
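The exact layer dimensions are listed in Table <ref>; the PyTorch sketch below only illustrates the overall structure described above (a strided convolution followed by two bottleneck blocks with skip connections and an outer residual connection). Channel counts, kernel sizes, and normalization placement are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 -> 3x3 -> 1x1 convolutions with a skip connection."""
    def __init__(self, ch, mid):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(),
            nn.Conv2d(mid, ch, 1),
        )

    def forward(self, x):
        return x + self.body(x)

class ResidualBottleneckBlock(nn.Module):
    """Strided conv followed by two bottleneck blocks and an outer skip."""
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
        )
        self.blocks = nn.Sequential(
            Bottleneck(out_ch, out_ch // 2),
            Bottleneck(out_ch, out_ch // 2),
        )

    def forward(self, x):
        y = self.down(x)           # reduce spatial resolution
        return y + self.blocks(y)  # outer residual connection
```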
We observed that splitting position estimation into angle and range estimation, each involving its own CNN, yielded better results than using a single network. Angle and range estimates are later combined into a position vector following (<ref>). The common architecture for all CNNs (detection, angle, and range estimation) is shown in Table <ref>. Convolutional layers introduce zero-padding so that the number of pixels is preserved. After the first and last convolutional layers, a 2-dimensional batch normalization and a ReLU activation function are also applied. The resulting feature map of the CNN has a number of elements equal to the number of pixels of the input angle-delay map divided by 2^12. For NNBL, the angle-delay map is reduced to 320 × 128 pixels due to memory constraints. The resulting feature map from the convolutional layers, together with the a priori information about the target angular sector and range uncertainty region, is processed by an MLP. The angle estimation network only uses the angular sector limits, the range estimation network only the range limits, and the detection network utilizes both. The architecture of each MLP is described in Table <ref>. The activation function after each fully-connected layer is the ReLU function. Unless stated otherwise, all NN architectures were optimized to give the best ISAC performance, where we explored, for instance, kernel sizes up to 13x13, the number of residual bottleneck blocks from 3 to 7, and the width of the layers of the MLP of Table <ref>, from K to 64K neurons, among others.
When training NNBL, a scheduler is used to reduce the learning rate if the loss function plateaus. The patience of the scheduler was set as 10^4 iterations. If the loss function was regarded to plateau, the learning rate was decreased by half, with a minimum attainable learning rate of 10^-6.
When complexity limitations are considered, in the transmitter network the number of neurons in each hidden layer was reduced to 4. At the receiver side, the kernel size of the Maxpool layer is increased to 4x4, the number of residual bottleneck blocks is changed from 6 to 3, the number of channels in the network is reduced by a factor of 4, and the number of neurons in the hidden layer of the last MLP are constrained to 4.
IEEEtran
|
http://arxiv.org/abs/2307.04635v1 | 20230710152254 | Self-consistent Combined HST, K-band, and Spitzer Photometric Catalogs of the BUFFALO Survey Fields | [
"Amanda Pagul",
"F. Javier Sánchez",
"Iary Davidzon",
"Anton M. Koekemoer",
"Hakim Atek",
"Renyue Cen",
"Lukas J. Furtak",
"Mathilde Jauzac",
"Guillaume Mahler",
"Bahram Mobasher",
"Mireia Montes",
"Mario Nonino",
"Keren Sharon",
"Charles L. Steinhardt",
"John R. Weaver"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Amanda Pagul (ORCID: 0000-0002-6015-8614)
Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA
Department of Physics and Astronomy, University of California Riverside, Pierce Hall, Riverside, CA 92521, USA
Corresponding author: Amanda Pagul ([email protected])

F. Javier Sánchez (ORCID: 0000-0003-3136-9532)
Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA

Iary Davidzon (ORCID: 0000-0002-2951-7519)
Cosmic Dawn Center (DAWN)
Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, Copenhagen Ø 2100

Anton M. Koekemoer (ORCID: 0000-0002-6610-2048)
Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA

Hakim Atek
Institut d'astrophysique de Paris, CNRS UMR7095, Sorbonne Université, 98bis Boulevard Arago, F-75014 Paris, France

Renyue Cen
Department of Astrophysical Sciences, 4 Ivy Lane, Princeton, NJ 08544, USA

Lukas J. Furtak (ORCID: 0000-0001-6278-032X)
Physics Department, Ben-Gurion University of the Negev, P. O. Box 653, Be'er-Sheva, 8410501, Israel

Mathilde Jauzac (ORCID: 0000-0003-1974-8732)
Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK
Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK
Astrophysics and Cosmology Research Unit, School of Mathematical Sciences, University of KwaZulu-Natal, Durban 4041, South Africa

Guillaume Mahler (ORCID: 0000-0003-3266-2001)
Centre for Extragalactic Astronomy, Durham University, South Road, Durham DH1 3LE, UK
Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK

Bahram Mobasher
Department of Physics and Astronomy, University of California Riverside, Pierce Hall, Riverside, CA 92521, USA

Mireia Montes (ORCID: 0000-0001-7847-0393)
Instituto de Astrofísica de Canarias, c/ Vía Láctea s/n, E-38205 - La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna, E-38205 - La Laguna, Tenerife, Spain

Mario Nonino (ORCID: 0000-0001-6342-9662)
INAF-Trieste Astronomical Observatory

Keren Sharon (ORCID: 0000-0002-7559-0864)
Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA

Charles L. Steinhardt (ORCID: 0000-0003-3780-6801)
Cosmic Dawn Center (DAWN)
Niels Bohr Institute, University of Copenhagen, Lyngbyvej 2, Copenhagen Ø 2100

John R. Weaver (ORCID: 0000-0003-1614-196X)
Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA
BUFFALO Catalogs
This manuscript presents new astronomical source catalogs using data from the BUFFALO Survey. These catalogs contain detailed information for over 100,000 astronomical sources in the 6 BUFFALO clusters: Abell 370, Abell 2744, Abell S1063, MACS 0416, MACS 0717, and MACS 1149, spanning a total of 240 arcmin^2. The catalogs include positions and forced photometry measurements of these objects in the F275W, F336W, F435W, F606W, F814W, F105W, F125W, F140W, and F160W HST bands, the Keck-NIRC2/VLT-HAWKI Ks band, and the IRAC channel 1 and 2 bands. Additionally, we include photometry measurements in the F475W, F625W, and F110W bands for Abell 370. The catalogs also include photometric redshift estimates computed via template fitting using LePhare. When comparing to a spectroscopic reference, we obtain an outlier fraction of 9.2% and a scatter, measured as the normalized median absolute deviation (NMAD), of 0.062. The catalogs are publicly available for use by the community.
§ INTRODUCTION
The Hubble Frontier Fields (HFF) <cit.> is a multi-waveband program obtaining deep imaging observations of six massive clusters in a narrow redshift range, z∼ 0.308 - 0.545. Combining the sensitivity, resolving power, and multi-wavelength capability of the Hubble Space Telescope (HST) with the gravitational lensing effect introduced by the massive galaxy clusters selected for this study, one can reach unprecedented depths. Two HST instruments, the Advanced Camera for Surveys (ACS) and Wide-Field Camera 3 (WFC3), were used in parallel to simultaneously observe each cluster and parallel field. The parallel fields are separated by ∼ 6 arcmin from the cluster core, corresponding to > 1.8 projected co-moving Mpc for a z>0.3 cluster. The six parallel fields are comparable in depth to the Hubble Ultra Deep Field <cit.>, corresponding to m(AB) ∼ 29 mag. The area coverage and depth of the parallel fields provide a significant improvement in the volume covered and the statistics of faint galaxies.
The aims of the HFF observations were: (1) leverage gravitational lensing due to massive clusters <cit.> to magnify fluxes and hence detect very faint background galaxies at z ∼ 5 - 10 <cit.>. Strong lensing allows us to probe ∼ 2 magnitudes fainter than in blank fields. At the time of HFF observations, blank fields studies reached ∼-17 rest-frame UV magnitudes <cit.>; (2) study the stellar population of these faint galaxies at high redshifts and constrain the mass function of galaxies at early epochs. Stellar masses reach down to 10^8 M_⊙ in blank fields <cit.> and down to 10^6 M_⊙ in HFF lensed fields <cit.>; (3) study of the morphology and other observable properties of lensed galaxies at z > 8.
The Beyond Ultra-deep Frontier Fields and Legacy Observations (BUFFALO) is an HST treasury program with 101 prime orbits (and 101 parallel orbits) (GO-15117; PIs: Steinhardt and Jauzac), covering the immediate areas around the HFF clusters where deep Spitzer (IRAC channels 1 and 2) and multi-waveband coverage already exists <cit.>. BUFFALO extends the spatial coverage of each of the six HFF clusters by three to four times. Observing these fields in five filters (ACS: F606W, F814W and WFC3: F105W, F125W and F160W), BUFFALO aims at a factor of 2 improvement in the statistics of high-redshift galaxies <cit.>, mitigates cosmic variance, and allows a more accurate modeling of the dark matter distribution in the foreground clusters. The HST and Spitzer data for BUFFALO, combined with ground-based observations <cit.>, were specifically designed to expand the HFF to a sufficiently large area to encompass a full James Webb Space Telescope NIRSpec field of view, without the need for JWST/NIRCam pre-imaging. The program significantly improves the statistics of galaxies in the outskirts of clusters and of field samples.
In this paper, we present photometric and redshift catalogs for the BUFFALO galaxies. The catalogs presented in this work aim to extend and complement previous efforts in the HFF <cit.>. In section 2, we present the data used in this study. In section 3, we briefly outline the data reduction process, referring the reader to <cit.> for a more detailed description. In section 4, we describe our photometric validation procedure. Section 5, details the data products and results. Section 6 describes the photometric redshifts extracted. Finally, our conclusions are presented in section 7.
Throughout this paper we assume standard cosmology with Ω_M = 0.23, Ω_Λ = 0.76 and H_0 = 73 Km/sec/Mpc. Magnitudes are in the AB system.
§ THE DATA
We provide a brief summary of the dataset in the following subsections. For more details about the design, aims, and observations of BUFFALO we refer the reader to the BUFFALO overview paper <cit.>. All our data products are available at MAST as a High Level Science Product via DOI 10.17909/t9-w6tj-wp63.
§.§ HST observations
The BUFFALO images provide the deepest exposures of galaxy clusters by HST, second only to the HUDF with respect to depth. With 101 additional prime (and 101 parallel) orbits, they build on the existing HFF cluster and parallel field surveys. BUFFALO slightly increases the depth at the center of the HFF clusters while increasing their areal coverage three- to fourfold. As a result, it expands the radial coverage of cluster outskirts, providing observations of the global mass distribution of clusters to almost the virial radius, i.e., ∼ 3/4 × R_vir. The coverage was chosen to increase the high-z sample size, in particular for rare bright high-mass galaxies at z∼8-9. Furthermore, BUFFALO's footprint is chosen to be compatible with JWST's NIRSpec field of view, allowing multiwavelength programs with JWST[These were produced using the JWST_footprints module (<https://github.com/spacetelescope/JWST_footprints>).] (Figure <ref>), which is especially timely for planning robust observations with JWST.
In the HFF, the gravitational potential of the clusters' halos, besides binding together the galaxies in the system, produces a lensing magnification that allows the detection of background objects down to apparent magnitudes of 30–33 mag, i.e., 10–100 times fainter than previous surveys. With BUFFALO, we obtain magnifications of ∼ 4 on average. Details of the BUFFALO survey design are provided in <cit.>. In Table <ref>, we report the main characteristics of the six clusters, with a summary of the ancillary observations in Table <ref>. We use the official BUFFALO mosaics, with a pixel scale of 0.06"/pix, which have been produced following the procedures outlined in <cit.>; the full BUFFALO dataset is described further in <cit.>.
We complement this data with the available public F275W and F336W HFF data from the HFF-Deepspace campaign <cit.>, which uses observations from <cit.>.
§.§ Ancillary data
The large wealth of complementary legacy datasets and programs for the HFF clusters has contributed to its success. The Spitzer Space Telescope dedicated more than 1,000 hours of Director's Discretionary time to obtain Infrared Array Camera (IRAC) 3.6 μm (channel 1) and 4.5 μm (channel 2) imaging down to depths of 26.5 and 26.0 mag in the cluster and parallel fields, respectively (program IDs: Abell 2744: 83, 90275; MACS J0416.1-2403: 80168, 90258; MACS J0717.4+3745: 40652, 60034, 90009, 90259; MACS J1149.4+2223: 60034, 90009, 90260; Abell S1063 (RXC J2248.7-4431): 83, 10170, 60034; Abell 370: 137, 10171, 60034). These observations are especially important for redshift determination given that they help break the degeneracies between low-redshift interlopers and high-redshift galaxies, and are beneficial in constraining galaxy properties since they provide a good proxy for galaxy stellar mass.
The HFF clusters in the southern sky are also covered in the Ks band using the High Acuity Wide Field K-band Imager (HAWK-I) <cit.> at the Very Large Telescope (VLT), reaching a depth of 26.0 mag (5σ, point-like sources) for Abell 2744, MACS-0416, Abell S1063, and Abell 370 clusters. In the northern sky, this campaign used the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE) <cit.> at Keck to observe MACS-0717 and MACS-1149 to a K-band 5σ depth of 25.5 and 25.1 mag respectively. This data covers all of the cluster and parallel field centers, but not the entirety of the outer area observed by BUFFALO. Table <ref> summarizes the available ancillary data.
Frontier Field cluster and parallel field positions, along with clusters' mean redshift (z_clu), virial mass (M_vir), and X-ray luminosity (L_X) <cit.>
Field Cluster Center (J2000) Parallel Center (J2000) z_clu M_vir L_X
R.A., Decl. R.A., Decl.
Abell 370 02:39:52.9, -01:34:36.5 02:40:13.4, -01:37:32.8 0.375 ∼ 1×10^15 1.1×10^45
Abell 2744 00:14:21.2, -30:23:50.1 00:13:53.6, -30:22:54.3 0.308 1.8 × 10^15 3.1×10^45
Abell S1063 22:48:44.4, -44:31:48.5 22:49:17.7, -44:32:43.8 0.348 1.4×10^15 1.8×10^45
MACS J0416.1-2403 04:16:08.9, -24:04:28.7 04:16:33.1, -24:06:48.7 0.396 1.2 × 10^15 1.0×10^45
MACS J0717.5+3745 07:17:34.0 +37:44:49.0 07:17:17.0 +37:49:47.3 0.545 ∼ 2-3×10^15 3.3×10^45
MACS J1149.5+2223 11:49:36.3, +22:23:58.1 11:49:40.5, +22:18:02.3 0.543 2.5×10^15 1.8×10^45
Existing multi-wavelength HFF coverage from follow-up programs, as used in the present work. The 5-σ point-source depth was estimated by integrating the noise in a 2D Gaussian PSF aperture with the FWHM value set to the ones given in Table <ref>. The HFF <cit.> program is led by PIs T. Soifer and P. Capak; KIFF PI is G. Brammer <cit.>.
Field Observatory/Camera Central Wavelength Depth
Abell 370 VLT/HAWK-I 2.2μ m ∼ 26.18
Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.19, 25.09
MACS J0717.5+3745 Keck/MOSFIRE 2.2μ m ∼ 25.31
Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.04, 25.17
MACS J0416.1-2403 VLT/HAWK-I 2.2μ m ∼ 26.25
Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.31, 25.44
Abell S1063 VLT/HAWK-I 2.2μ m ∼ 26.31
Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.04, 25.04
Abell 2744 VLT/HAWK-I 2.2μ m ∼ 26.28
Spitzer IRAC 1,2 3.6μ m, 4.5μ m ∼ 25.32, 25.08
MACS J1149.5+2223 Keck/MOSFIRE 2.2μ m ∼ 25.41
Spitzer IRAC 1,2 3.5μ m, 4.5 μ m ∼ 25.24, 25.01
The Point Spread Function radius and effective wavelengths for different photometric bands used for the BUFFALO fields.
Band FWHM (arcsec) λ_pivot (Å)
F275W 0.11 2710
F336W 0.12 3354
F435W 0.13 4329
F606W 0.11 5922
F814W 0.10 8045
F105W 0.20 10551
F125W 0.20 12486
F140W 0.20 13923
F160W 0.20 15369
Ks 0.36 21524
I1 1.29 35634
I2 1.42 45110
Values were calculated for the cluster Abell 370.
§ DATA PROCESSING
The workflow followed for the data processing in this work is the same as the one in <cit.> (P21 hereafter). The main steps taken to obtain the data products presented here are summarized as follows:
* Error map correction: we compare the standard deviation of the values of the background pixels in the science image, with the reported root mean-square (rms) values as given by the error maps, and correct the latter so that the mean ratio in the background pixels are equal to 1.
* PSF extraction: we select unsaturated, unblended stars and perform median stacking to obtain an estimate of the PSF.
* Intracluster light (ICL) + bright galaxy modeling: Perform multi-object fits to Sérsic profiles, plus a local background using a combination of <cit.> and <cit.>.
* Bright galaxy photometry: we run Source Extractor <cit.> on HST bands PSF-matched to the reddest, F160W, band, and obtain photometric measurements.
* Background galaxy photometry: we subtract the bright galaxies and ICL, and run Source Extractor on the “cleaned” field for the PSF-matched HST images.
* Spitzer and K-band photometry: we use T-PHOT <cit.> to obtain self-consistent photometry measurements on the Spitzer and K-band images, using the HST images and segmentation maps as priors.
* Synthetic source injection: we inject synthetic sources and repeat the process to validate and correct the photometric measurements.
* Estimate photometric redshifts: the last step consists of using LePhare <cit.> to obtain photometric redshift estimates of the detected galaxies in these catalogs.
In the following subsections some of these steps are described in more detail. For a detailed description of all the steps, we refer the reader to P21.
§.§ Point Spread Function
A well-defined point spread function (PSF) as a function of wavelength is crucial to perform consistent photometry within a `panchromatic' baseline to correctly model galaxies and obtain galaxy fluxes in PSF-matched images. In order to perform multi-waveband photometry with accurate signal-to-noise and resolution for each aperture, we convolve images with a kernel generated by taking (in Fourier space) the ratio between their original and target PSFs, to match that of the reddest F160W PSF. In order to generate the PSFs for the HST and K-band images, we stack isolated and unsaturated stars in each individual image, taking the median of the stack. Up to this point, the procedure is identical to that followed in P21. We improve upon our previous work by creating PSFs for the representative inner (deeper) and outer (shallower) regions in both the cluster- and parallel-fields. Figure <ref> shows examples of the stacked PSFs derived in different regions, and Table <ref> gives the representative FWHM as a function of wavelength. We note that the full-width-half-max (FWHM) in both regions are compatible.
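As an illustration of this step, the following NumPy sketch shows a simple way to median-stack star cutouts into an empirical PSF and to build a PSF-matching kernel from the Fourier-space ratio of target and source PSFs. The regularization used here is a generic choice, and all names are illustrative rather than the exact procedure used for the BUFFALO mosaics.

```python
import numpy as np

def stack_psf(star_cutouts):
    """Median-stack normalized star cutouts into an empirical PSF."""
    cutouts = np.array([c / c.sum() for c in star_cutouts])
    psf = np.median(cutouts, axis=0)
    return psf / psf.sum()

def matching_kernel(psf_source, psf_target, eps=1e-4):
    """Kernel that degrades psf_source to psf_target via a Fourier ratio.

    eps regularizes the division to avoid amplifying noise at spatial
    frequencies where the source PSF has little power.
    """
    f_src = np.fft.fft2(np.fft.ifftshift(psf_source))
    f_tgt = np.fft.fft2(np.fft.ifftshift(psf_target))
    ratio = f_tgt * np.conj(f_src) / (np.abs(f_src) ** 2 + eps)
    kernel = np.fft.fftshift(np.real(np.fft.ifft2(ratio)))
    return kernel / kernel.sum()

# The image in a bluer band would then be convolved with this kernel
# (e.g., with scipy.signal.fftconvolve) to match the F160W resolution.
```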
Due to large spatial variations of the PSF in the mid-IR Spitzer channels [
See https://irsa.ipac.caltech.edu/data/Spitzer/docs/irac/calibrationfiles/psfprf/the Spitzer/IRAC handbook], we do not use the same approach to create our Spitzer PSF model. Furthermore, the individual pixel response functions (PRFs) are asymmetric and are thus dependent on the orientation of the camera. Moreover, the pixels on IRAC Ch 1 and 2 tend to under sample the PRF[More information in the https://irsa.ipac.caltech.edu/data/Spitzer/docs/files/Spitzer/simfitreport52_final.pdfthe Spitzer/IRAC handbook.]. Thus, instead of stacking stars and generating a single PSF per field, we use a synthetic pixel response function (PRF) that combines the information on the PSF, the detector sampling, and the intrapixel sensitivity variation in response to a point-like source, as done in P21. A PRF model for a given position on the IRAC mosaic is generated by the code (A. Faisst, private communication) by combining the single-epoch frames that contribute to that mosaic. To do so, stacks individual PRF models with the same orientation of the frames, resulting in a realistic, spatially-dependent PSF model.
§.§ Modeling the intra-cluster light
The deep potential well and high density of galaxy clusters make them rich laboratories to study galaxy dynamics and interactions. Due to these complex processes, stars and gas stripped from their constituent galaxies build up in the cluster core as intracluster light (ICL) <cit.>. This can bias the flux measurements of galaxies close, in angular space, to the cluster center. Following <cit.> and P21, in order to model the ICL in the BUFFALO clusters, we first generate 18×18 arcsecond (300×300 pixel) stamps centered on each galaxy with a magnitude brighter than 26 in each image/band. Using <cit.>, we fit all galaxies in each stamp with a single Sérsic profile, masking those that are fainter than magnitude 26. In case a given pixel with coordinates (x, y) is only included in one cutout, the ICL emission (F_ICL) is defined as the local background measurement reported by the fit. If there are overlapping cutouts at (x, y), we use the inverse-χ^2-weighted mean of their background measurements:
F_ICL(x,y) = ( ∑_i s_i(x,y)/χ^2_i(x,y) ) / ( ∑_i 1/χ^2_i(x,y) ),
where s_i is the fitted local background value of the i-th postage stamp and χ_i^2 is the corresponding goodness-of-fit value.
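A minimal NumPy sketch of this inverse-χ² weighting is given below, assuming the per-cutout background values and χ² maps have already been projected onto the full image grid, with NaN where a cutout does not cover a pixel; names are illustrative.

```python
import numpy as np

def combine_icl(backgrounds, chi2):
    """Inverse-chi^2 weighted mean of overlapping background estimates.

    backgrounds, chi2: arrays of shape (n_cutouts, ny, nx); pixels not
    covered by a given cutout are NaN in both arrays.
    """
    weights = 1.0 / chi2                      # inverse chi^2 weights
    num = np.nansum(backgrounds * weights, axis=0)
    den = np.nansum(weights, axis=0)
    icl = np.full(den.shape, np.nan)
    covered = den > 0
    icl[covered] = num[covered] / den[covered]
    return icl
```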
As described in P21, the resulting ICL map has unphysical sharp features, which are smoothed out using a Gaussian kernel with σ=4.32".
Similarly, for the K_s and Spitzer bands, we use to obtain the local background for each measured source, which is then merged into a single mosaic, and smoothed with a representative kernel.
As a caveat, though these maps primarily contain ICL emission, they also contain inhomogeneities in the background. This ensures a robust “background+ICL” subtraction in the individual images. Cleaning of these maps via color selection of the individual stamps will then be performed.
§.§ Modeling the brightest galaxies
The procedure to model bright galaxies (magnitude brighter than 19) is also unchanged from P21. We rely on GALAPAGOS-M <cit.> to fit Sérsic profiles simultaneously to galaxies in all bands, with the fitting parameters varying as a function of wavelength. We construct galaxy models for the relevant galaxies and also cross-check the fits with those in <cit.>. The results of the ICL and bright galaxy modeling and subtraction are illustrated in Figure <ref>.
Finally, we apply a median filter to the ICL+bright galaxy subtracted images. We use a filter with a box size of 1^∘ per side, applied only to pixels within 1σ of the background level to reduce the effects of over-subtraction in the residual. Figure <ref> shows the modeling and filtering process. The lower right panel shows the effect of median filtering. Note that this process does not significantly affect the
outskirts of the cluster.
§.§ Source Extraction
To detect galaxies and perform photometry, we use Source Extractor, focusing only on the "super hot" mode, rather than creating a dual run with hot and cold modes (see P21 for definition of "hot" and "cold" modes). This is one of the main differences with the procedure presented in P21 where a second “cold" mode run is performed. We find that this second run does not have a significant impact on the detection nor photometric performance (< 0.05 mag), especially after bright galaxy and ICL subtraction. This is a consequence of the cold mode focusing on extracting information about the brightest objects, which have already been removed by the bright galaxy subtraction. This is illustrated in Figure <ref>, where we compare a dual run with our new “super hot" run, finding similar magnitudes for the BUFFALO cluster Abell 370. The final Source Extractor configuration file is presented in Appendix <ref>.
We also show the magnitude distribution of sources in the F160W band for all clusters in Figure <ref>. The large number density (defined as the number of sources per square arcmin) and depth of these catalogs are indicated. We subdivided the catalogs into sources detected in the inner field regions (the overlap with HFF), which reach significant depth, and the outer regions (the extension), where the depth is noticeably lower. The differences between the distributions of the cluster and the parallel regions are apparent. The cluster regions typically contain an over-abundance of brighter galaxies, whereas the parallel fields contain fewer of these bright objects but reach slightly deeper levels.
§.§ Photometry in Ancillary images
Because the Ks and Spitzer images have lower angular resolution than the HST images, they are more affected by blending. In order to effectively deblend sources and maximize the information extracted in each image, we use T-PHOT as in P21 to perform forced photometry in the Ks and IRAC images on sources detected in the IR-weighted HST image. T-PHOT <cit.> is a software package that uses priors from high-resolution data in order to deblend and extract fluxes of the same objects in a lower-resolution image. We first use T-PHOT's built-in background routine to generate a local background for each source and remove the excess ICL light as well as inhomogeneities in the backgrounds. Then, as “real” galaxy priors, we use the IR-weighted segmentation map and flux measurements from the F160W-band image. Additionally, we use the galaxy models that have been created in the bright galaxy+ICL removal step as the “model” priors. Given the spatial variation of the PRF in the IRAC bands, we take advantage of T-PHOT's “multikernel” option, and use a separate PRF to model sources at each position. We emphasize that the flux (FitQty) provided by T-PHOT corresponds to the total flux emitted by a given source.
§ PHOTOMETRIC VALIDATION
In order to characterize the performance of our detection and measurement procedures, we proceed as in P21, injecting synthetic galaxies in the original BUFFALO images using <cit.> to render noiseless, realistic galaxies following the morphology measurements in COSMOS by <cit.>. This catalog only contains flux information in the F814W band. Thus, we match these sources to the COSMOS catalog <cit.> in order to obtain the fluxes in the rest of our bands of interest. We choose to keep the morphology and centroids fixed across bands in order to simplify data handling and bookkeeping. In this case, we generate 10 realizations of a set of 160 sources using the F160W image footprint as reference. Note that, since not all bands cover the same footprint, some sources will not be recovered after processing. We then insert these sources in the original images, run our pipeline on the resulting combined image (which is the sum of the original image and the noiseless synthetic sources), and compare the measured fluxes and positions to their inputs.
This provides valuable information about completeness and absolute zeropoint calibration. The two catalogs are matched using a nearest-neighbor matching routine included in the package of <cit.>. The results of this comparison are shown in Figure <ref>. We see that for all of the HST bands (F435W, F606W, F814W, F105W, F125W, F140W, F160W) the recovered magnitude is within 20 mmag of the input, and that the reconstruction of the fluxes is relatively stable across the considered range of magnitudes. We note that at the bright end, there is a small fraction of the flux missing, probably due to the extended tails of the sources not being captured by the aperture. This photometric bias becomes smaller with increasing magnitude up to the point where we start to lose sensitivity. We use these offsets to robustly correct the fluxes in each band. For Ks the performance is also excellent and we find a median value of Δmag=-0.05 mag. For the Spitzer IRAC channels, we find a small photometric offset Δmag = -0.12 and Δmag = -0.13 for I1 and I2, respectively.
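For readers who wish to reproduce this kind of injection test, a generic positional cross-match between injected and recovered sources can be written with a k-d tree, as sketched below; the matching radius and the flat-sky approximation are illustrative choices, not necessarily those used for these catalogs.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_catalogs(ra_in, dec_in, ra_out, dec_out, max_sep_arcsec=0.3):
    """Nearest-neighbor match of recovered sources to injected ones.

    Uses a small-angle, flat-sky approximation, adequate for the
    sub-arcsecond separations relevant here. Coordinates in degrees.
    """
    dec0 = np.deg2rad(np.median(dec_in))
    xy_in = np.column_stack([np.asarray(ra_in) * np.cos(dec0), dec_in])
    xy_out = np.column_stack([np.asarray(ra_out) * np.cos(dec0), dec_out])

    dist, idx = cKDTree(xy_out).query(xy_in, k=1)
    matched = dist * 3600.0 < max_sep_arcsec   # degrees -> arcsec
    return idx, matched

# delta_mag = mag_recovered[idx[matched]] - mag_injected[matched]
```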
We compare the mean uncertainty reported by the measurement pipeline to the standard deviation of Δmag as a function of magnitude. Again, for the HST bands the performance is excellent, and we find that the reported errors are in good agreement with the scatter measured using our synthetic sources. This is not the case for Ks nor IRAC, where we find that a correction is needed. In particular, we use a power-law correction:
Δ F_new = Δ F_old AF^B,
where Δ F_new is the corrected uncertainty estimate, Δ F_old is the reported uncertainty by the measurement software, F is the reported flux, and A, B are free parameters. We fit A, B and tabulate the results in Table <ref>.
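One possible way to fit A and B, sketched below with SciPy, is to regress the ratio of the empirical scatter to the reported uncertainty against flux; the binning strategy and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_error_correction(flux, err_reported, err_empirical):
    """Fit dF_new = dF_old * A * F**B from injection-test residuals.

    flux:          measured fluxes of the recovered synthetic sources
    err_reported:  uncertainties reported by the photometry software
    err_empirical: per-source empirical scatter of (measured - input)
                   flux, e.g. evaluated from the local magnitude bin
    """
    ratio = err_empirical / err_reported   # target multiplicative factor

    def power_law(f, a, b):
        return a * f**b

    (a, b), _ = curve_fit(power_law, flux, ratio, p0=[1.0, 0.0])
    return a, b

# Corrected uncertainties would then be: err_reported * a * flux**b
```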
§ DATA PRODUCTS AND RESULTS
In this section, we discuss the data products from this work and present some validation results. We produce several new data products from BUFFALO, including catalogs, models for the point spread function, and models for the ICL and bright galaxies. The final catalogs include properties of >100,000 sources in the 6 BUFFALO cluster and parallel fields, and extend the Frontier Fields footprint, covering a total of ∼240 square-arcminutes. These include positions, multi-waveband photometry, and photometric redshift estimates for the sources detected as provided by LePhare <cit.>.
Additional details about the information provided by these catalogs can be found in Appendix <ref>.
Point spread function (PSF) estimates are provided as as FITS images. Section <ref> describes the modeling of the PSFs. We summarize some of their properties in Table <ref>. Unsurprisingly, these results are very similar to those found by P21, as the BUFFALO fields are mostly extensions of the HFF.
The procedure to obtain models for the ICL and bright galaxies is described in Section <ref>. These models are also available as FITS images.
§.§ Photometric redshifts
In this section we present our redshift estimates based on the photometric measurements presented in previous sections. We run LePhare <cit.>, a template-based code that derives a redshift likelihood function for each source. As in P21, the fluxes used as inputs to LePhare are rescaled by a factor:
f_tot = ( ∑_i w_i (FLUX_AUTO/FLUX_ISO)_i ) / ( ∑_i w_i ),
i.e., the weighted mean of the AUTO-to-ISO flux ratio over the observed HST bands, where the weights, w_i, are the sum in quadrature of the Source Extractor errors: w_i= √(σ_i,AUTO^2 +σ_i,ISO^2). This is done in order to improve the accuracy of the colors. For the T-PHOT-based photometry (Ks and IRAC bands), as we do not have an equivalent to FLUX_ISO, we include our baseline fluxes.
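The rescaling factor can be computed per object along the following lines (a sketch with illustrative array names, one row per object and one column per observed HST band):

```python
import numpy as np

def total_flux_scale(flux_auto, flux_iso, err_auto, err_iso):
    """Weighted mean AUTO/ISO flux ratio per object.

    All inputs have shape (n_objects, n_hst_bands); bands with missing
    measurements can be set to NaN and are ignored.
    """
    weights = np.sqrt(err_auto**2 + err_iso**2)
    ratio = flux_auto / flux_iso
    num = np.nansum(weights * ratio, axis=1)
    den = np.nansum(np.where(np.isnan(ratio), np.nan, weights), axis=1)
    return num / den

# ISO fluxes passed to LePhare would then be multiplied by this factor.
```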
The redshift calibration procedure is similar to that presented in P21, which is based on spectroscopic data described in <cit.>. We obtain the best-fit template for each source and try to find a systematic offset in each band by comparing the predicted and observed flux for all sources that have a measured spectroscopic redshift with a spectroscopic quality flag >3. These magnitude offsets, when applied to the photometric baseline, compensate for a possible bias in the template library and/or for calibration issues in data reduction. We find these corrections to be below 9% for all the HST bands. For the K_s band, we find a correction of 0.883 while in the IRAC channels 1 and 2, the correction is a factor 1.117 and 1.182, respectively. These corrections are shown in Table <ref>.
Figure <ref> also shows the photometric redshift distribution for objects in each cluster, estimated from the SED fits with a reduced χ^2< 10.
Multiplicative factors applied to each band in the photo-z calibration step.
Band Multiplicative Factor
F275W 1.055
F336W 1.011
F435W 1.085
F475W 1.060
F606W 1.004
F625W 1.006
F814W 0.992
F105W 1.004
F110W 1.015
F125W 1.011
F140W 1.008
F160W 0.995
Ks 0.883
IRAC1 1.117
IRAC2 1.182
§ COMPARISON WITH THE HUBBLE FRONTIER FIELDS
By design, there is significant overlap between the HFF and the BUFFALO fields. This makes the HFF catalogs an exceptional reference to verify and validate the data presented in this work and to check for potential improvements, given the increased number of exposures. Here, we compare our BUFFALO data products with those presented in P21.
Figure <ref> compares the magnitude distribution of sources in the F160W band between the catalog presented here and the catalogs in P21 in the overlapping region of the MACS J1149 cluster. Here we show that our new BUFFALO catalogs reach fainter sources than those from the HFF. We also show the fraction of detected objects as a function of magnitude, finding that both catalogs have a similar completeness to magnitude ∼ 27.5 in the F160W band. This is in agreement with P21, where the completeness dropped below 100% at ∼ 27.5. Other bands and clusters show a similar behavior. We note that these completeness estimates do not take into account the effects of strong lensing.
§ SUMMARY
The wealth of deep (HST) observations and ancillary data in the HFF <cit.>, open a window to the high-redshift universe, and provides a complementary sample to the JWST. The BUFFALO survey <cit.> used these data and extended the observations in the 6 HFFs, to allow for follow-up spectroscopy. This work presents a new set of data products based on the BUFFALO observations. The data products include models for the point spread function (PSF), intra-cluster light (ICL), the bright galaxies, and catalogs of astronomical sources. The catalogs contain detailed information (including positions and photometry) of over 100,000 sources distributed across 6 separate cluster and parallel fields covering a total area of 240 arcmin^2.
The data products are obtained using a procedure similar to that outlined in <cit.>. First, a model of the bright galaxies and the ICL is created. These models are then subtracted from the original image in order to increase our sensitivity, allowing us to observe fainter sources, which are detected and measured using Source Extractor in the HST bands. We then use the IR-weighted segmentation map as a prior in the T-PHOT package to obtain forced photometry on ancillary data from the Keck Ks band and Spitzer IRAC channels 1 and 2. The photometric measurements are validated using synthetic source injection. Finally, LePhare is run to obtain redshift estimates based on our photometric measurements. The main change with respect to the procedure in P21 is the usage of a “super hot” mode Source Extractor run, which simplifies bookkeeping while not biasing the photometric estimates. As a sanity check, we plot the redshift histograms and note that the peaks of these histograms correspond to the redshift of each respective cluster.
This catalog represents one of the deepest views at galaxy clusters to date and a sample that lends itself well for JWST follow-up. All of the data products presented in this work will be made publicly available to the astronomical community through the usual astronomical archive databases (MAST and Vizier).
§ ACKNOWLEDGEMENTS
ID acknowledges the support received from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 896225. This work has made use of the CANDIDE Cluster at the Institut d'Astrophysique de Paris and made possible by grants from the PNCG and the DIM-ACAV. The Cosmic Dawn Center is funded by the Danish National Research Foundation under grant No. 140. LF acknowledges support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF).
Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-15117, WFC3/UV imaging (GO 13389, 14209; B. Siana), A370 HST/ACS additional imaging (GO 11507; K. Noll, 11582; A. Blain, 13790; S. Rodney, 11591; J.P. Kneib)
This work is based in part on data and catalog products from HFF-DeepSpace, funded by the National Science Foundation and Space Telescope Science Institute (operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555).
Support for HST Program GO-15117 was provided through a grant from the STScI under NASA contract NAS5-26555.
This work is based in part on observations made with the Spitzer Space Telescope, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme(s) 090.A-0458, 092.A-0472,
and 095.A-0533.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
§ CATALOG DETAILS
The catalogs presented in this work contain the following information:
* ID: Source number
* FLUX_FXXXW: Total scaled flux in cgs units of erg/cm^2/s/Hz
* FLUXERR_FXXXW: Corrected flux error in cgs units of erg/cm^2/s/Hz
* ZSPEC: reported spectroscopic redshift
* ZSPEC_Q: reported quality flag of spectroscopic redshift
* ZSPEC_REF: dataset from which spectroscopic redshift was obtained
* ALPHA_J2000_STACK: Right Ascension (J2000) in degrees using GAIA DR2 as reference.
* DELTA_J2000_STACK: Declination (J2000) in degrees using GAIA DR2 as reference.
* FIELD: denotes the field object belongs to
* ZCHI2: photometric redshift goodness of fit
* CHI2_RED: reduced chi square
* ZPDF: photometric redshift derived via maximum likelihood
* ZPDF_LOW: lower threshold for photometric redshift
* ZPDF_HIGH: upper threshold for photometric redshift
* MOD_BEST: galaxy model for best χ^2
* EXT_LAW: Extinction law
* E_BV: E(B-V)
* ZSECOND: secondary photometric redshift peak in maximum likelihood distribution
* BITMASK: Base 2 number to determine which bands were used. Calculated via bitmask=∑_n=good band index 2^n
* NB_USED: number of bands used
§ SOURCE EXTRACTOR CONFIGURATION
3
0.5
0.5
Y
gauss_4.0_7x7.conv
64
0.000005
Y
0.8
CORRECT
2.0, 3.5
0.5
0.17
goods_default.nnw
64
3
LOCAL
24
|
http://arxiv.org/abs/2307.05451v1 | 20230711172743 | Detection Threshold of Audio Haptic Asynchrony in a Driving Context | [
"Gyanendra Sharma",
"Hiroshi Yasuda",
"Manuel Kuehner"
] | cs.HC | [
"cs.HC",
"91E30",
"H.1.2"
] |
In order to provide perceptually accurate multimodal feedback during driving situations, it is vital to understand the threshold at which drivers are able to recognize asynchrony between multiple incoming stimuli. In this work, we investigate and report the detection threshold (DT) of asynchrony between audio and haptic feedback in the context of a force-feedback steering wheel. We designed the experiment to loosely resemble a driving situation where the haptic feedback was provided through a steering wheel (Sensodrive), while the accompanying audio was played through noise-cancelling headphones. Both feedback signals were designed to resemble rumble strips, which are generally installed on the side of major roadways as a safety tool. The results indicate that, for 50% of the participants, asynchrony was detectable outside the range of -75 ms to 110 ms, where the former relates to perceiving audio before haptic and the latter to perceiving haptic before audio. We were also able to corroborate previous studies, which state that latency is perceivable at a lower threshold when audio precedes the haptic stimulus.
§ INTRODUCTION
Rumble strips on the side of the road are a major safety tool to make sure drivers do not drift off the road during driving situations. These physical artifacts capture the driver's attention by providing both auditory and haptic feedback. Designing safety systems that translate similar phenomena in a synthetic manner, so that drivers can be warned in case of road departure, holds significant promise. Specifically, an interaction concept that offers a digitally created rumble-strip sensation through audio and haptic feedback can be used to warn drivers when they unintentionally depart the lane or the road itself. One key advantage of such a concept is the easy translation of a mental model from an already existing warning mechanism that is widely understood and familiar, especially in North America.
Various parameters characterize such feedback, and one of the major components is the timing at which the audio and haptic responses are delivered to the driver. While physical rumble strips, as shown in figure <ref>, are governed by the laws of physics, synthetic or electronically created feedback systems require each signal to propagate through a separate set of actuators, control units, and computation units before it reaches the driver. This causes a delay in the delivery of either the haptic or the audio stimulus. Figure <ref> shows the case where the haptic stimulus is delivered to the driver slightly later than the audio. A large asynchrony between stimuli representing the same feedback event can lead to confusing user feedback. So, for these feedback systems to be effective and to avoid annoyance, which has been shown to be a major reason for safety systems being turned off, it is pertinent that the audio and haptic signal onset times are perceptually in sync, even if asynchrony exists in the underlying system. Determining the detection threshold (DT) at which this asynchrony is perceivable to the driver is the major contribution of this work.
Human beings can only detect asynchrony outside of a certain time window. This is also referred to as the point of subjective simultaneity (PSS), where stimulus onset asynchronies (SOA) are most likely to yield responses affirming that the SOA was detectable <cit.>. Early works on this topic come from the field of psychology, where the SOA between audio and visual stimuli was studied <cit.>. More recently, haptics has also been included to study the perception of cross-modal simultaneity <cit.>. Beyond the reported DT numbers, these studies also demonstrated that latency is perceived at a lower threshold when audio precedes the haptic feedback.
While earlier studies focused on more physical setups, recent studies have investigated this topic in the context of modern applications such as specific computer interfaces, touchscreens, and force-feedback haptic devices. For instance, the work of Silva et al. <cit.> focused specifically on computer-based multimedia applications and showed that asynchrony was not detected by participants when audio was played up to 92 ms before haptic or up to 110 ms after haptic. With the rapid increase in the usage of AR and VR technologies in recent years, the application areas in which this topic is being investigated have expanded significantly. Tele-operation <cit.> and multi-sensory media (mulsemedia) <cit.> are a few examples.
Through this user study, we investigate the detection threshold (DT) at which drivers are able to perceive asynchrony between haptic and audio feedback. The DT is determined using a 3-alternative forced choice (3AFC) adaptive staircase procedure <cit.>. A threshold obtained in this manner corresponds to the 50th-percentile point on a psychometric function <cit.>.
In particular, we are interested in the following research questions.
* What is the time difference at which drivers begin to notice asynchrony between haptic and audio feedback?
* Is the previous finding that latency is perceived at a lower threshold when audio precedes haptic feedback applicable to a driving context?
§ SYSTEM CHARACTERIZATION
Before designing a system that is able to accurately provide desired offsets between audio and haptic feedback, we had to explore the role of system latency, i.e., the inherent latency within the computing environment of the workstation. We investigated this by recording repeated samples of audio-haptic feedback pairs, as shown in figure <ref>, and measuring the time difference through analysis of their respective frequencies.
We used a total of 58 audio-haptic pairs, analyzed each pair to determine its actual offset time, and compared it with the desired offset time to obtain a value for system latency. The audio had a variety of offsets ranging from 6 ms to 30 ms and was played at 1 kHz, while the vibration was played at 100 Hz. As shown in figure <ref>, a spectrogram of each feedback pair was analyzed to determine the actual offset. The audio onset time (ao), haptic onset time (ho), and desired offset (do) from all samples (N = 58) were used to calculate the system latency (sl) as follows.
sl = 1/N∑_α = 1^N ((ao_α - ho_α) - do_α)
Using equation <ref>, we calculated the average system latency to be +15.93 ms, with a median of 16 ms and a standard deviation of 4.61 ms. This showed that, due to system latency, the audio is on average delayed by approximately 15.93 ms even when both stimuli are triggered simultaneously. We incorporated this finding into our experimental design and setup to compensate for the system delay.
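The calculation in equation <ref> reduces to a simple average of per-sample discrepancies; a minimal sketch is given below, where the onset times and desired offsets are hypothetical placeholders rather than values from the actual recordings.
import statistics
# Hypothetical measurements (milliseconds); the real study used N = 58 pairs
# extracted from spectrograms of the recorded audio-haptic feedback.
audio_onsets   = [102.0, 251.5, 399.0]   # ao: audio onset per sample
haptic_onsets  = [80.0, 230.0, 378.0]    # ho: haptic onset per sample
desired_offset = [6.0, 6.0, 6.0]         # do: offset requested by the software
# System latency = mean over samples of (measured offset - desired offset)
per_sample = [(ao - ho) - do
              for ao, ho, do in zip(audio_onsets, haptic_onsets, desired_offset)]
system_latency = sum(per_sample) / len(per_sample)
print(f"mean system latency: {system_latency:.2f} ms")
print(f"median: {statistics.median(per_sample):.2f} ms, "
      f"std dev: {statistics.stdev(per_sample):.2f} ms")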
§ METHODOLOGY
In this study, participants held a Sensodrive steering wheel <cit.> connected to a Linux workstation while wearing a pair of noise-cancelling wired headphones. Participants were exposed to 3 sets of haptic and audio feedback pairs with a 3-second interval between each set. The haptic feedback appeared as vibration of the Sensodrive wheel, while the audio feedback was played through the headphones. Two of the audio-haptic feedback pairs were in sync, while the third was played out of sync with experimentally controlled offset values. The order in which these sets appeared was randomized. The pattern of both the sound and the haptic feedback was designed to resemble a rumble strip. After each trial, participants were asked the following question to probe their perception of the feedback: “Which audio-haptic pair do you think was NOT in sync?”
We employed a 3AFC (3-alternative forced choice) method, i.e., participants could respond with only one of the three given choices. In this particular case, they were instructed to respond by speaking one of the three numbers 1, 2, or 3, based on whether the first, second, or third feedback pair was out of sync. Their responses were recorded by the experimenter before the next trial commenced. The exposure to the stimulus followed the 2 step up, 1 step down staircase approach, as shown in figure <ref>. That is, the stimulus separation between audio and haptic for the out-of-sync feedback pair is determined by the participant's previous responses: successive correct responses decrease the stimulus separation, whereas a single incorrect response increases the latency between the audio-haptic pair. This leads each participant to move back and forth within a certain time window of stimulus separation. Each participant was continuously exposed in this manner until 8 reversals of responses were recorded. A representation of an actual participant response is shown in figure <ref>.
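A minimal sketch of this adaptive staircase logic is shown below. The starting offset and step sizes follow the values reported in the pilot study, while the details of when the step size switches from large to small, and the interpretation that two consecutive correct responses are required before decreasing the offset, are simplifying assumptions rather than the exact experimental implementation.
def run_staircase(get_response, start_offset=260, big_step=30, small_step=15,
                  max_reversals=8):
    """Adaptive staircase: two consecutive correct answers shrink the
    audio-haptic offset, a single incorrect answer enlarges it; the run
    stops once max_reversals reversals have been recorded."""
    offset, step = start_offset, big_step
    correct_streak = 0
    last_direction = None                   # 'down' or 'up'
    reversals, history = [], []
    while len(reversals) < max_reversals and len(history) < 200:
        history.append(offset)
        if get_response(offset):            # participant picked the odd pair out
            correct_streak += 1
            if correct_streak < 2:
                continue                    # wait for the second correct answer
            direction, correct_streak = 'down', 0
            new_offset = max(0, offset - step)
        else:
            direction, correct_streak = 'up', 0
            new_offset = offset + step
        if last_direction is not None and direction != last_direction:
            reversals.append(offset)        # record the reversal point
            step = small_step               # assumption: finer steps after a reversal
        offset, last_direction = new_offset, direction
    return reversals, history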
We designed a within-subjects study since we wanted to expose each participant to both latency conditions: audio preceding haptic and vice versa. Each participant underwent two experimental conditions: (i) negative audio-haptic latency, where audio is played before the haptic feedback, and (ii) positive audio-haptic latency, where audio is played after the haptic feedback. To overcome ordering effects, each participant was randomly assigned to one of the experimental conditions first and then moved to the other. Once the participants completed this stage, they were asked to fill out a post-task questionnaire gathering information on driving experience, gender, age, and other potentially relevant information.
§.§ Pilot Study
We conducted pilot studies with 4 participants who were employees of an automotive company, in order to explore and establish parameters that were later applied to the user study. These included the starting stimulus offset for both the haptic-first and audio-first setups, the number of reversal points, and the step size. We initially overestimated the starting stimulus offset for both setups at 350-400 ms; however, it became clear during the pilot studies that a more conservative estimate of approximately 260 ms was adequate. In addition, during the pilot studies we ended each session after 9 reversal points. However, based on the length of the experiment and the overall convergence of stimulus values, we realized that decreasing this to 8 reversals, excluding the first reversal in both cases, was more appropriate. In terms of step size, our initial estimates of 30 ms for the large (initial) step size and 15 ms for the small step size did not cause any issues, and we therefore kept the same values for the user study.
§.§ Participants
We recruited 15 external participants for the user study, of whom 7 were female and 8 male. In terms of age distribution, 7 were above the age of 40 and 8 below. In addition, all participants were experienced drivers: 11 participants indicated that they had more than 10 years of driving experience, and only 2 indicated less than 5 years. 3 participants indicated that they drove 4-6 times a week, while the rest indicated that they drove daily.
§.§ Experimental Procedure
All participants were recruited through the services of Fieldwork San Francisco. The entire experiment was conducted over a period of approximately one week. Once each participant arrived at the research facility, they went through the COVID-19 protocols put in place before being allowed inside. Each participant signed the consent form and verbally consented to being recorded before they were given a brief description of the experiment and their task. Before commencing the experiment, each participant went through a simple check to make sure that the sound played on the wired headset was at an acceptable level and that they were able to feel the vibration on the steering wheel. The physical setup is shown in figure <ref>.
As a safety protocol, we instructed the participants to hold the steering wheel in a 10-and-2 position and to keep their thumbs and fingers outside the wheel to avoid injury in the unlikely event that the wheel made sudden movements. At this time, participants were allowed to ask any questions they had before beginning the trials. After a session concluded, i.e., after 8 recorded reversals, participants were told that they could rest their hands for a minute before beginning the second session. Once this was over, they were instructed to fill out a post-task questionnaire, which marked the end of the experimental session. Any thoughts, questions, or concerns were verbally discussed and noted by the experimenter before the participant was escorted out of the research facility.
§ RESULTS
The results for each participant are shown in figures <ref> and <ref>. In the analysis of the overall results, participants 1 and 3 were discarded as outliers, as their responses did not follow even the minimal expected pattern, i.e., they consistently failed to notice stimulus offsets larger than the starting stimulus value in both the audio-first and the haptic-first scenario. The DT was derived using equation <ref> for the remaining 13 participants. The average DT was 123.41 ± 4.61 ms when haptic feedback preceded audio and 87.73 ± 4.61 ms when audio preceded haptic. We did not observe any effects of age or gender on the results.
DT = 1/7∑_α = 2^8 R_α
The cumulative distribution of participants based on their individual DTs is shown in figure <ref>. It shows that in 50% of cases the detection threshold lies between -75 ms and 100 ms. Similarly, extrapolating from the same figure, we can see that 20% of the participants/drivers would notice a latency of 50 ms when audio is played before haptic and 65 ms when audio is played after haptic feedback. The implications of these findings are briefly discussed below.
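Equation <ref> simply averages the stimulus offsets at the seven recorded reversal points after the first; a minimal sketch is given below, using hypothetical reversal values rather than data from any actual participant.
# Hypothetical reversal offsets (ms) for one participant and one condition;
# R_1 (the first reversal) is excluded, so seven values R_2..R_8 remain.
reversal_offsets = [140, 110, 125, 95, 110, 80, 95]
# Per-participant detection threshold: mean of the retained reversal points.
dt = sum(reversal_offsets) / len(reversal_offsets)
print(f"detection threshold: {dt:.2f} ms")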
§ DISCUSSION
The results indicate that the detection threshold for a haptic- and audio-based multimodal feedback system in a driving context closely follows past findings, i.e., asynchrony is observable at smaller absolute time differences when audio precedes haptic than the other way round.
There are also design implications that can be derived from the results reported above. For instance, in-car safety systems that rely on audio-haptic feedback can be tuned to accommodate the latency parameters suggested by our findings. If, for example, we want to design a feedback system in which only 20% of users would notice the asynchrony, a latency window of -50 ms to 65 ms could be used, based on extrapolation from the graph shown in figure <ref>. Similarly, an HMI interface that should allow no more than 20% of drivers to perceive the asynchrony will have to budget its computation so that the haptic and audio stimuli are delivered within the range of -50 ms to 65 ms. Depending on the computational cost as well as the design objective, this window can be tuned to cover more or less of the population. Our results provide a clear guideline for what these human-factors costs amount to.
§ LIMITATIONS AND FUTURE WORK
This study was conducted entirely within a research setting, without real driving situations or scenarios. This is a major limitation with regard to drawing accurate conclusions, as the primary task for the participants was not driving. The DT estimates derived from this user study should therefore be considered conservative. We expect the DT for the same haptic and audio feedback to be significantly higher if a similar setup were tested in a real driving context, as cognitive resources are expended on higher-priority tasks. To what extent such effects occur would be part of future research. The results presented in this study can be applied as baselines from which to draw meaningful extrapolations as needed.
The second major limitation concerns the Sensodrive wheel that was used as the primary component for providing haptic feedback. While the audio component is generated entirely digitally, that is not the case for the Sensodrive wheel, which uses physical motors to generate the vibration. This in turn creates sound artifacts that are clearly audible. The use of over-ear noise-cancelling headphones mitigates this factor but does not entirely eliminate the sound emanating from the wheel.
Overall, we obtained results that we believe will have significant implications for designing multimodal feedback systems as part of a vehicle safety framework. Our results, especially those presented in figure <ref>, can be applied to characterize system specifications when steering-based multimodal feedback systems are designed. Also, as evidenced by our results, particular care should be taken to ensure that audio is not played much earlier than other modalities, as the threshold of latency perception in such scenarios is lower.
15
urlstyle
[Stone et al.(2001)Stone, Hunkin, Porrill, Wood, Keeler, Beanland,
Port, and Porter]stone2001now
JV Stone, NM Hunkin, J Porrill, R Wood, V Keeler, M Beanland, M Port, and
NR Porter.
When is now? perception of simultaneity.
Proceedings of the Royal Society of London. Series B: Biological Sciences, 268(1462):31–38, 2001.
[Hershenson(1962)]hershenson1962reaction
Maurice Hershenson.
Reaction time as a measure of intersensory facilitation.
Journal of Experimental Psychology, 63(3):289, 1962.
[Woodworth et al.(1954)Woodworth, Barber, and
Schlosberg]woodworth1954experimental
Robert Sessions Woodworth, Bernard Barber, and Harold Schlosberg.
Experimental psychology.
Oxford and IBH Publishing, 1954.
[Adelstein et al.(2003)Adelstein, Begault, Anderson, and
Wenzel]adelstein2003sensitivity
Bernard D Adelstein, Durand R Begault, Mark R Anderson, and Elizabeth M Wenzel.
Sensitivity to haptic-audio asynchrony.
In Proceedings of the 5th international conference on
Multimodal interfaces, pages 73–76, 2003.
[Levitin et al.(2000)Levitin, MacLean, Mathews, Chu, and
Jensen]levitin2000perception
Daniel J Levitin, Karon MacLean, Max Mathews, Lonny Chu, and Eric Jensen.
The perception of cross-modal simultaneity (or “the greenwich
observatory problem” revisited).
In AIP Conference Proceedings, volume 517, pages 323–329.
American Institute of Physics, 2000.
[Altinsoy(2003)]altinsoy2003perceptual
M Ercan Altinsoy.
Perceptual aspects of auditory-tactile asynchrony.
In Proceedings of the Tenth International Congress on Sound and
Vibration, pages 3831–3838. Citeseer, 2003.
[Silva et al.(2013)Silva, Orozco, Cha, Saddik, and
Petriu]silva2013human
Juan M Silva, Mauricio Orozco, Jongeun Cha, Abdulmotaleb El Saddik, and Emil M
Petriu.
Human perception of haptic-to-video and haptic-to-audio skew in
multimedia applications.
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 9(2):1–16, 2013.
[Neumeier et al.(2019)Neumeier, Wintersberger, Frison, Becher, Facchi,
and Riener]neumeier2019teleoperation
Stefan Neumeier, Philipp Wintersberger, Anna-Katharina Frison, Armin Becher,
Christian Facchi, and Andreas Riener.
Teleoperation: The holy grail to solve problems of automated driving?
sure, but latency matters.
In Proceedings of the 11th International Conference on
Automotive User Interfaces and Interactive Vehicular Applications, pages
186–197, 2019.
[Liu et al.(2017)Liu, Kwak, Devarakonda, Bekris, and
Iftode]liu2017investigating
Ruilin Liu, Daehan Kwak, Srinivas Devarakonda, Kostas Bekris, and Liviu Iftode.
Investigating remote driving over the lte network.
In Proceedings of the 9th international conference on
automotive user interfaces and interactive vehicular applications, pages
264–269, 2017.
[Covaci et al.(2018)Covaci, Zou, Tal, Muntean, and
Ghinea]covaci2018multimedia
Alexandra Covaci, Longhao Zou, Irina Tal, Gabriel-Miro Muntean, and Gheorghita
Ghinea.
Is multimedia multisensorial?-a review of mulsemedia systems.
ACM Computing Surveys (CSUR), 51(5):1–35, 2018.
[Yuan et al.(2015)Yuan, Bi, Muntean, and Ghinea]yuan2015perceived
Zhenhui Yuan, Ting Bi, Gabriel-Miro Muntean, and Gheorghita Ghinea.
Perceived synchronization of mulsemedia services.
IEEE Transactions on Multimedia, 17(7):957–966, 2015.
[Prins et al.(2016)]prins2016psychophysics
Nicolaas Prins et al.
Psychophysics: a practical introduction.
Academic Press, 2016.
[Gescheider(1997)]gescheider1997psychophysics
GA Gescheider.
Psychophysics: The Fundamentals. Lawrence Erlbaum Associates, Inc., Publishers, pages 1–71, 1997.
[Wichmann and Hill(2001)]wichmann2001psychometric
Felix A Wichmann and N Jeremy Hill.
The psychometric function: I. fitting, sampling, and goodness of fit.
Perception & Psychophysics, 63(8):1293–1313, 2001.
[sen(2022)]sensodrive
Force feedback Sensodrive wheel.
<https://www.sensodrive.de/products/senso-wheel-sd-lc_e.php>,
2022.
[Online; accessed 02-Feb-2022].
|
http://arxiv.org/abs/2307.04907v1 | 20230710211646 | SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation | [
"Bhathiya Hemanthage",
"Christian Dondrup",
"Phil Bartie",
"Oliver Lemon"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
Planar Curve Registration using Bayesian Inversion
[
==================================================
SimpleMTOD is a simple language model which recasts several sub-tasks in
multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is built on a large-scale transformer-based auto-regressive architecture, which has already proven to be successful in uni-modal task-oriented dialogues, and effectively leverages transfer learning from pre-trained GPT-2. In order to capture the semantics of visual scenes, we introduce both local and de-localized tokens for objects within a scene. De-localized tokens represent the type of an object rather than the specific object itself and so possess a consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0 test-std dataset while performing on par in other multimodal sub-tasks: Disambiguation, Coreference Resolution, and Dialog State Tracking. This is despite taking a minimalist approach to extracting visual (and non-visual) information. In addition, the model does not rely on task-specific architectural changes such as classification heads.
§ INTRODUCTION
Multimodal conversational agents have witnessed a rapidly growing level of interest among the conversational AI community as well as within the computer vision community.
Most multimodal conversational datasets to-date are an extension of visual question answering (VQA) <cit.>. Consequently building upon the success of other visio-linguistic tasks such as VQA, state-of-the-art multimodal conversational agents commonly depend on non-autoregressive models <cit.> most of which are based on BERT <cit.>.
However, dialogues with such systems significantly differ from what the conversational AI community has typically viewed as a multi-turn dialogue. First, most of the current multimodal dialogue datasets are focused on querying the visual content, whereas external knowledge bases have been an integral part of traditional unimodal dialogue datasets <cit.>. Second, in traditional unimodal dialogues, co-reference resolution (explicitly or implicitly) plays a major role within the dialogues. Additionally, state-of-the-art unimodal conversational agents predominantly rely on GPT-based auto-regressive models <cit.> due to their proven language generation capabilities <cit.>. The SIMMC 2.0 <cit.> task-oriented dialogue dataset bridges this gap between multimodality and the more traditional view of a multi-turn dialogue. Due to the simultaneous presence of signals from multiple modalities, which a user can refer to at any point in the conversation, the multimodal task-oriented dialogues proposed in SIMMC 2.0 are challenging compared to both their text-only counterparts and image-querying dialogue datasets.
In spite of the inherent complexity of multimodal dialogues, we propose SimpleMTOD, recasting all sub-tasks into a simple language model. SimpleMTOD combines the idea of 'de-localized visual object representations' with a GPT-like auto-regressive architecture. The idea of de-localized representations stems from the analogous process of de-lexicalization that has been extensively used in task-oriented dialogues. In de-lexicalization <cit.>, slot-values such as vegan are replaced by a more general abstracted token such as food-type. Likewise, when de-localized, objects are represented by the catalogue type of the object instance rather than the instance itself. These de-localized tokens then possess a consistent meaning throughout the dataset.
Along with the dataset, <cit.> propose four benchmark tasks decomposing multi-modal task oriented dialogue into sub-tasks: Multimodal Disambiguation, Multimodal Co-reference Resolution, Multimodal Dialog State Tracking, and Response Generation. The first three tasks deal with the dialogue context understanding, analogous to NLU and DST in unimodal agents. The last task is similar to unimodal NLG, but expects the generated responses to be sensible within a multimodal context with visual signals and associated knowledge base.
The main objective of this work is to evaluate the effectiveness of de-localized object representations within SimpleMTOD. Despite its simplicity, SimpleMTOD achieves the state-of-the-art BLEU score of 0.327 for assistant response generation on the SIMMC 2.0 test-std [The testing dataset (test-std) is not publicly available and was part of the SIMMC 2.0 challenge used for scoring the submitted systems.] dataset. Furthermore, the model achieves an accuracy of 93.6% in Multimodal Disambiguation (MM-Disambiguation), an Object-F1 of 68.1% in Multimodal Co-reference Resolution (MM-Coref), and 87.7% (Slot-F1) and 95.8% (Intent-F1) in Multimodal Dialogue State Tracking (MM-DST). Beyond the proposed benchmark settings, we also evaluate SimpleMTOD in an end-to-end setting. The major contributions of our work are as follows:
* We formalise notion of multimodal task oriented dialogues as an end-to-end task.
* We propose a simple GPT-based language model, combined with visual object de-localization and token-based spatial information representation, that addresses four sub-tasks in multimodal task-oriented dialogue with a single architecture.
* We analyse the behaviour of our model using salience scores from the Ecco <cit.> framework, which provide an intuition of which previous tokens most influence the prediction of the next token.
§ BACKGROUND
Traditional task-oriented dialogue datasets consist of a dialogue corpus, a dialogue ontology with a pre-defined set of slot-value pairs, and annotations required for related sub-tasks in a set of domains <cit.>.
The SIMMC 2.0 dataset follows a similar structure and contains dialogues in both the fashion and the furniture domains. However, in the SIMMC 2.0 multimodal dialogue corpus, each dialogue is also associated with an image representing the scene where each dialogue takes place. A scene is made by re-arranging a known set of items (objects) in different configurations. Along with the raw-image, the dataset provides a file (scene JSON) containing details of the images such as objects and relationships between objects. Furthermore, a meta-data file contains visual and non-visual attributes of objects that recur within a scene.
§.§ Benchmark Tasks
Multimodal Disambiguation: In real-world conversations, references made by humans related to objects or entities can be ambiguous. For example, consider A: Blue trousers are priced at $149.99. U: What about the red ones?, in a setting where there are multiple red trousers. In these situations, there is insufficient information available for co-reference resolution. This task is aimed at identifying such ambiguous scenarios, given the dialogue history.
Multimodal Co-reference Resolution:
The goal of this task is to resolve any reference in a user utterance to canonical object ids of the object as defined per each scene (see image in Figure <ref>). Users may refer to 1) dialogue context 2) visual context, or 3) both.
Mutltimodal Dialogue State Tracking:
Similar to unimodal DST, this tracks the belief states of users across multiple turns. The belief state consists of an intent, slot-value pairs, and user requested slots.
Assistant Response Generation
Given the user utterance, ground-truth APIs, and ground-truth canonical object ids (with meta-data), the model needs to generate a natural language response describing objects as observed and understood by the user.
§ METHODS
In the first part of this section, we model multimodal task oriented dialogues as a sequence generation task. We define the problem in a more general setup and discuss some empirical limitations applied to the model.
§.§ Multimodal Task-Oriented Dialogues
Similar to unimodal setting, we view dialogue state (belief-state) tracking, action prediction, and response generation to be the core components of multi-modal task-oriented dialogues. However, outputs of each of the sub-tasks should be conditioned not only on the dialogue history, but also on the associated scene.
Multimodal dialogues consist of multiple turns. In a turn t, there exist an associated visual scene V_t, the user-provided input U_t, and the system-generated response S_t. Theoretically, the dialogue context can be denoted as C_t = [V_0, U_0, S_0|M_0, ..., S_t-1|M_t-1, V_t, U_t]. Here S_t-1|M_t-1 denotes that the statement S_t-1 is associated with the representation of multimodal information, such as objects viewed and mentioned to the user during that turn.
Given the context, C_t, SimpleMTOD generates the belief-state B_t:
B_t = SimpleMTOD(C_t)
B_t is a concatenation of intent, slot-values, requested slots, and resolved object references MRef_t.
However, it should be noted that SimpleMTOD models the context as C_t = [V_t, U_t-n, S_t-n|M_t-n, ..., S_t-1|M_t-1, U_t], where n is the context window. The major deviations from the theoretical representation of C_t are: 1) we ignore the history of visual signals and only consider the current visual scene; 2) we consider only the n previous turns instead of the entire dialogue.
Then, in a more general setting where the system has access to an external database that can be queried, B_t would be used to retrieve database results D_t. These D_t, along with the context and belief state, can be used to generate the system action A_t.
A_t = SimpleMTOD(C_t, B_t, D_t)
Action A_t is a triplet containing system intent, slot-value pairs, and details on requested slots. However, in our setup, no such database exists. Hence we model action A_t from B_t and C_t keeping D_t=∅.
Finally, the concatenation of the context, belief state, (database results), and action is used to generate system responses S_t.
S_t = SimpleMTOD(C_t, B_t, D_t, A_t)
§.§ De-localized Visual Representation
Here we discuss how visual information of a scene is represented within the SimpleMTOD as de-localized tokens and how V_t is derived from those tokens.
In the SIMMC 2.0 dataset, a scene is a spatial configuration of a set of object instances. From here on we will refer to these instances simply as objects. The possible types of these objects are pre-defined in two meta-data files, one for each domain. We will refer to these files as catalogues and to an entry of these catalogues as a catalogue-item. See Figure <ref> for an example catalogue-item with its visual and non-visual attributes defined. For the benchmark tasks, non-visual attributes can be used during inference while visual attributes are not allowed. However, we use neither of these attributes in the SimpleMTOD visual representation explained below.
In our setup, we assign a unique token (e.g., INV_278) to each catalogue-item. These catalogue-items are used as de-localized versions of the objects within a scene. While these catalogue-item tokens are consistent across the entire dataset, the spatial relationships associated with the objects are lost. Therefore, we encode the spatial details of objects as follows:
Each scene is divided into 9 regions, as shown in Figure <ref>. Every object is assigned to a region based on the center point of its bounding box. The concatenation of the catalogue-item token and the assigned region-description token (e.g., INV_278@TOP:LEFT) is then used as the object representation. A scene description is obtained by concatenating all such tokens representing every object within a scene. This is our V_t in SimpleMTOD.
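A minimal sketch of this de-localized scene encoding is shown below; the 3x3 region grid and token format follow the description above, while the function and field names (and the region labels for the middle rows/columns) are illustrative assumptions rather than the authors' actual implementation.
def region_token(bbox, scene_w, scene_h):
    """Map a bounding box (x, y, w, h) to one of 9 region tokens
    based on which third of the scene its center point falls in."""
    cx, cy = bbox[0] + bbox[2] / 2, bbox[1] + bbox[3] / 2
    col = ["LEFT", "CENTER", "RIGHT"][min(int(3 * cx / scene_w), 2)]
    row = ["TOP", "MIDDLE", "BOTTOM"][min(int(3 * cy / scene_h), 2)]
    return f"{row}:{col}"
def scene_description(objects, scene_w, scene_h):
    """Concatenate de-localized tokens (catalogue item @ region) for all objects."""
    return " ".join(f"{obj['catalogue_token']}@{region_token(obj['bbox'], scene_w, scene_h)}"
                    for obj in objects)
# Example with two hypothetical objects in a 1280x720 scene.
objs = [{"catalogue_token": "INV_278", "bbox": (100, 50, 120, 200)},
        {"catalogue_token": "INV_146", "bbox": (900, 500, 150, 180)}]
print(scene_description(objs, 1280, 720))
# -> "INV_278@TOP:LEFT INV_146@BOTTOM:RIGHT"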
§.§ SimpleMTOD Training and Inference
For training, we follow routine causal language modeling with teacher forcing. A training sequence X_t in SimpleMTOD is obtained by concatenating all the components: context, user belief state, database results (which are null in our case), system actions, and system utterance.
X_t = [C_t, B_t, D_t, A_t, S_t]
In terms of tokens, X_t can be denoted as X_t = (x^0_t, x^1_t, ..., x^n(t)_t), where n(t) represents the number of tokens in turn t. In general, the goal of the model is to learn ρ(X) given X = (x^0, x^1, ..., x^i, ..., x^n):
ρ(X) = Π_i=1^nρ(x^i|x^<i)
For this, we train the neural network with parameterization θ by minimizing the negative log-likelihood over the multimodal dialogue corpus MD, where MD = {X_1, X_2, ..., X_|MD|}. However, in our setup the tokens related to the scene description V are ignored during the loss calculation. With n(V) denoting the number of tokens related to the scene description:
L(MD) = -∑_t=1^|MD|∑_i=n(V)^n(t)logρ_θ(x^i_t|x^<i_t)
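The loss above is ordinary causal language modeling with the scene-description positions masked out of the objective; a minimal PyTorch-style sketch is shown below, where the random inputs and the way scene-token positions are located are illustrative assumptions rather than the exact training code.
import torch
import torch.nn.functional as F
def lm_loss_ignoring_scene(logits, input_ids, n_scene_tokens):
    """Causal LM loss over a batch of sequences X_t = [V_t, ..., S_t],
    ignoring the first n_scene_tokens positions (the scene description V_t)."""
    # Shift so that position i predicts token i+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:].clone()
    # Mask out targets that belong to the scene description.
    shift_labels[:, : max(n_scene_tokens - 1, 0)] = -100  # -100 is ignored by cross_entropy
    return F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                           shift_labels.reshape(-1),
                           ignore_index=-100)
# Example with random logits for a batch of 2 sequences of length 10,
# the first 4 tokens of each being scene-description tokens.
vocab = 50257
logits = torch.randn(2, 10, vocab)
input_ids = torch.randint(0, vocab, (2, 10))
print(lm_loss_ignoring_scene(logits, input_ids, n_scene_tokens=4))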
During inference, the learnt parameters θ are used to predict one token at a time. Unlike at training time, where ground-truth tokens are always used, the generated tokens become part of the left context. For inference, we stick to a simple greedy prediction approach with top-k = 1; that is, we always generate the token with the highest probability as the next token.
§ EXPERIMENTS
In Section <ref> we defined an end-to-end setting for SimpleMTOD. However, some of the benchmark tasks allow more ground-truth information to be utilized during training and inference time.
For the MM-Disambiguation task, we consider two setups. In the task-specific scenario, we train the model to predict YES or NO tokens directly from the context C_t. In the end-to-end setup, we consider the label to be YES only if the predicted system intent is to disambiguate. Two similar setups are considered for MM-Coref as well. It should be noted that the end-to-end version of SimpleMTOD predicts de-localized tokens with spatial information, and we obtain the canonical object id by reversing the de-localization process explained in Section <ref>. If multiple objects are found in the same region with the same catalogue-item token, the area of the object bounding box is used as a tie-breaker. In the case of assistant response generation, the benchmark task defined in SIMMC 2.0 allows the ground-truth system belief state to be used as an input. Therefore, we evaluate both response generation from ground-truth actions and the end-to-end setting.
§.§ Baselines
We consider 2 baselines which were provided as part of the SIMMC 2.0 challenge.
GPT-2: This extends <cit.> to multimodal task-oriented dialogues, encoding objects in a scene using canonical object ids concatenated with the token OBJECT_ID. For the MM-Disambiguation task, a classification head is used, while other tasks are modeled in a generative manner.
Multimodal Transformer Networks (MTN): Adapts <cit.> (only) for the MM-DST and Response Generation sub-tasks [MTN-SIMMC2 implementation <https://github.com/henryhungle/MTN/tree/simmc2>]. In contrast to the auto-regressive modeling of SimpleMTOD, MTN uses an encoder-decoder architecture.
§.§ Training and Evaluation
We follow the experimental setup of the SIMMC 2.0 challenge with the same dataset splits, inference-time limitations, and performance metrics. See Appendix <ref> for details. It should be noted that the test-std split of the SIMMC 2.0 dataset is not publicly available and is a held-out set for evaluating submissions to the SIMMC 2.0 challenge. Therefore, the final version of our model could only be evaluated on the devtest split. However, the prior version of the model, SimpleMTOD_Sub, which did not encode region or scene information, was submitted to the SIMMC 2.0 challenge.
§ RESULTS
MM-Disambiguation
As shown in Table <ref> and Column 2 of Table <ref>, SimpleMTOD_Sub achieves accuracy scores of 92.17% and 93.6% on devtest and test-std, respectively, when trained to predict YES/NO tokens. This is a 27% relative improvement over the GPT-2-based baseline with a classification head. Furthermore, we evaluate the model on the MM-Disambiguation task as part of the end-to-end model, based on the system intent predicted by the model. Here, we consider any INFORM:DISAMBIGUATE prediction as a YES. This approach demonstrates a very similar accuracy score of 92.12%. The best-performing model on test-std (94.5%: Team-6) ensembles two models trained on RoBERTa and BART [This is based on the description provided at: <https://github.com/NLPlab-skku/DSTC10_SIMMC2.0>].
MM-Coref
Table <ref> and the third column of Table <ref> show the MM-Coref Object-F1 scores on devtest and test-std, respectively. SimpleMTOD achieved 68.2 (a 54% relative gain over the baseline) on the test-std split and 67.6 (an 84% gain) on the devtest split. While there is no information available on Team-2's leading solution, the BART-based model of Team-4, which is trained end-to-end with task-specific heads, achieves 75.8% on this task.
MM-DST
Despite SimpleMTOD being a simple language model, both our Intent-F1 (95.8%) and Slot-F1 (87.7%) scores on the test-std split are comparable with those of complex visual-language models. Furthermore, as shown in Table <ref>, there is a significant improvement in the Joint Accuracy score, from 57.3% to 63.1%, when positional information is used.
Response Generation
A prior version of the model, SimpleMTOD_Sub, achieves a state-of-the-art BLEU score of 0.327 on the test-std split of the SIMMC 2.0 dataset, in comparison with models that rely on sophisticated feature-extraction processes. In our view, the simplified representation of visual information preserves and complements the generative capabilities of pre-trained models. Furthermore, as shown in Table <ref>, SimpleMTOD achieves a BLEU score of 0.49 on devtest when the ground-truth actions are used. The end-to-end version of SimpleMTOD also achieves a BLEU score of 0.45. It should be noted that this is an improvement over the SimpleMTOD_Sub score of 0.43, which indicates the importance of associating region-related information.
§ DISCUSSION
In order to understand the behaviour of SimpleMTOD, we use gradient-based salience <cit.> provided with the Ecco framework <cit.>. Using Ecco, we inspect salience scores for all the tokens to the left of the token of interest. In the heat-maps presented in this section, darker colors mean a higher salience score. It should also be noted that the model assigns high salience scores to separator tokens (such as <USB>, [ , ] ) that define the structure of the generation. While proper attention to the structure is of paramount importance, our discussion focuses on the salience scores assigned to the rest of the tokens, which represent the semantics of the multimodal conversations.
Effect of De-localization and Scene Descriptions:
The introduction of de-localized tokens significantly improves the Object-F1 of MM-coref and joint accuracy of MM-DST. Accordingly, we first analyse the behaviour of the model when predicting co-references. Figures <ref> and <ref> show example utterances with and without scene descriptions respectively.
In the case where the scene description is not provided, the model puts high salience on the tokens `yellow' and `shirt', and predicts the token INV_146, which represents a yellow shirt, as shown in Table <ref>. (It should be noted that none of the metadata shown in the diagram is provided to the model explicitly; the model figures this out from the globally consistent use of tokens.) However, in this case, the particular catalogue item INV_146 is not present in the scene. When we observe the confidence values of the prediction from the last layer (shown in Table <ref>), it can be seen that the model is not quite certain about the prediction, with 13.75 for INV_146 and 13.04 for INV_247, both of which represent yellow shirts. This indicates that even though the model has learnt to associate the object attributes necessary for co-reference resolution, it lacks the information needed to be certain about the prediction. To this end, we provide the model with a scene description as described in Section <ref>. When the scene description is provided, SimpleMTOD correctly predicts the token INV_247 with 92.63% confidence and a high salience score over the same token from the scene description, as well as the tokens `shirt' and `yellow'.
Additionally, from Figure <ref> it can be noted that INV_199 also shows a high salience score. From the metadata, we can see that it is a pink shirt. However, the significant salience score over the token `yellow' results in generating the correct token INV_247 over INV_199 (which is the second-ranked token, with a confidence of only 7.17). Extending the analysis, we modified the original utterance to “I need a pink shirt" and generated the next token; SimpleMTOD accordingly predicted the token INV_199 (with a high confidence of 99.79%), as observed in Figure <ref>.
Effect on Intent prediction:
Even though scene descriptions play a key role in overall belief tracking, as described earlier, the Intent-F1 score drops from 95.8% to 94.0% when the scene descriptions are encoded. In order to understand this effect, we inspect salience scores when predicting the user intent. It can be observed that when the scene descriptions are omitted, higher salience scores are assigned to the user utterance, suggesting more focus on it. However, when the scene information is included, the salience scores assigned to the utterance decrease to an extent, resulting in wrong predictions in certain cases. This indicates that scene descriptions are either redundant or act as a distractor for intent detection, which explains the reduction in score. Furthermore, this behaviour aligns with our intuition that the intent part of a user utterance is predominantly language-driven.
Figure <ref> shows an example where omitting the scene information produces the correct intent of REQUEST:COMPARE, whereas our final version of SimpleMTOD wrongly predicted the intent as ASK:GET.
§ RELATED WORK
<cit.> are closely related to our work, as they all model task-oriented dialogues in an end-to-end manner with GPT-2-like large-scale transformer-based architectures. However, all of those models focus on text-only task-oriented dialogues. The GPT-2 adaptation <cit.>, which is provided as a baseline along with the SIMMC 2.0 dataset, is also closely related to our work. However, this baseline represents visual objects by canonical ids and demonstrates subpar results compared to our model in all four tasks.
Generative encoder-decoder models <cit.> are a promising alternative to the decoder-only (GPT-2-based) dialogue models that have been extensively investigated in unimodal task-oriented dialogues. The MTN baseline <cit.>, which we compare to, is based on the encoder-decoder architecture. While inferior in performance on both of the tasks considered, this model involves a sophisticated feature-extraction process.
<cit.> coined the term `de-lexicalization' for abstraction in neural dialogue state tracking tasks. This idea has been extensively used in goal oriented dialogues. Our notion of de-localized object representation is influenced by this work.
§ CONCLUSION
We explore a simple, single generative architecture (SimpleMTOD) for several sub-tasks in multimodal task-oriented dialogues. We build on large-scale auto-regressive transformer-based language modeling, which has been effectively utilized in task-oriented dialogues, and formalize the multimodal task-oriented dialogue as a sequence prediction task. Our model employs a `de-localization' mechanism for visual object representation that ensures the consistency of those tokens throughout the dataset. Furthermore, we encode the spatial information of object instances with a very small number of special (globally consistent) tokens. Despite the simplicity in representing visual information, our model demonstrates comparable or better performance relative to models that heavily rely on visual feature extraction on the four multimodal sub-tasks of the SIMMC 2.0 challenge.
§ FUTURE DIRECTIONS
Most current vision-language research relies on fusing pixel-level vision information with token-level language representations. However, the applicability of such approaches to dialogues with sophisticated language remains sparsely studied. In contrast, we explore a symbolic approach for representing visual information and combining it with auto-regressive language models. While we rely on smaller-scale models (with 17 million parameters), our work is readily extendable to large language models (LLMs). Because special tokens representing visual information are more similar than pixel-level visual representations to the word tokens on which LLMs are trained, symbolic visual representation should facilitate effective transfer learning.
SimpleMTOD represents visual information using carefully designed input tokens. Capturing this information through semantic scene graphs, which would provide a richer representation, and fusing them with LLMs would be an interesting future direction of research for multimodal dialogues. Developments in knowledge-graph-based language grounding would complement this line of work.
§ ACKNOWLEDGEMENTS
This work is partially supported by the European Commission under the Horizon 2020 framework programme for Research and Innovation (H2020-ICT-2019-2, GA no. 871245), SPRING project, https://spring-h2020.eu
acl_natbib
§ SIMMC 2.0 DATASET
The SIMMC 2.0 dataset ( released under CC-BY-NC-SA-4.0 licence) [https://github.com/facebookresearch/simmc2 ] consists of three major components:
* Dialogue Data: Includes system and user utterances with relevant annotations. Figure <ref> provides the first 4 turns of a sample dialogue.
* Scene Data: A set of scenes representing the environments in which dialogues take place. Figure <ref> provides the scene related to the dialogue segment shown in Figure <ref>. In addition to the raw images, a JSON file associated with each image provides details of objects, such as bounding boxes and spatial relationships (left of, right of, over, under) among objects.
* Meta-data: Acts as a catalogue of items related to the dialogue corpus. Scene images are made up by positioning instances of catalogue items in different configurations. Entries contain both visual and non-visual attributes of each item. Visual attributes of items from the meta-data file are not allowed to be used during inference. Figure <ref> shows a single entry in the meta-data file.
§.§ Data Statistics
§ TRAINING AND EVALUATION
We conduct our experiments with the SIMMC 2.0 <cit.> dataset. Further, we follow the experimental setup of the SIMMC 2.0 challenge with the same dataset splits, inference time limitations, and performance metrics.
Implementation: We conduct our experiments using PyTorch and Huggingface's transformers <cit.>. All SimpleMTOD model variants were initialized with OpenAI GPT-2 pretrained weights and exhibit computational speed identical to OpenAI GPT-2. We use the Adam optimizer <cit.> with the default parameters of Huggingface's AdamW implementation (lr=1e-3, eps=1e-6, weight_decay=0).
We use the GPT-2 tokenizer for encoding user and system utterances. However, we noticed that the default tokenizer encoding mechanism chunks the special tokens introduced for visual object representation. Therefore, we implemented an encoding mechanism that selectively skips the default byte-pair encoding for object-tracking tokens.
Evaluation: We use the same evaluation metrics and evaluation scripts provided with the SIMMC2.0 challenge. Table <ref> shows metrics that are used for evaluating each benchmark task.
§ SALIENCE SCORES
For the discussion, we use the input X gradient (IG) method from <cit.>, as suggested in <cit.>. In the IG method of input saliency, attribution values are calculated across the embedding dimensions. The L2 norm of the values across the embedding dimensions is used to obtain a score per token. The resulting values are then normalized by dividing by the sum of the attribution scores for all the tokens in the sequence.
Here we provide actual salience scores for heat-maps provided in the discussion in Section: <ref>.
|
http://arxiv.org/abs/2307.03892v1 | 20230708035958 | Embedding Mental Health Discourse for Community Recommendation | [
"Hy Dang",
"Bang Nguyen",
"Noah Ziems",
"Meng Jiang"
] | cs.IR | [
"cs.IR",
"cs.CL"
] |
Feature selection simultaneously preserving both class and cluster structures
Suchismita Dasmycorrespondingauthor and Nikhil R. Pal
August 12, 2023
=============================================================================
*These authors contributed equally to this work
Our paper investigates the use of discourse embedding techniques to develop a community recommendation system that focuses on mental health support groups on social media. Social media platforms provide a means for users to anonymously connect with communities that cater to their specific interests. However, with the vast number of online communities available, users may face difficulties in identifying relevant groups to address their mental health concerns. To address this challenge, we explore the integration of discourse information from various subreddit communities using embedding techniques to develop an effective recommendation system. Our approach involves the use of content-based and collaborative filtering techniques to enhance the performance of the recommendation system. Our findings indicate that the proposed approach outperforms the use of each technique separately and provides interpretability in the recommendation process.
§ INTRODUCTION
The rise of social media as a platform has allowed people all over the world to connect and communicate with one another.
Further, these communities that exist online are able to keep their members anonymous from one another, allowing new communities to form which would have a hard time existing without anonymity.
Specifically, this new and robust anonymity has allowed an explosion of online communities with a focus on giving each other advice on health issues.
While being involved in seeking peer support in a community with people that have experienced similar issues can provide a significant positive impact on someone's ability to navigate their personal problems <cit.>, finding communities with relevant discourse is not trivial.
Often, the platforms which host these communities have a very large quantity of them.
There are over 100,000 different communities on Reddit alone.
Further, some communities are not easily found due to their inherently anonymous nature, so the only way a user can decide if they fit within the community is by spending time reading through the discourse happening within the community.
For these reasons, new users seeking others who have experienced similar situations may have a very hard time finding communities that would help them the most, even if they are familiar with the platform which hosts the communities.
Recently, embedding long sequences of text has received lots of interest both from the research community and from practitioners.
A number of studies have shown embeddings can be useful for measuring the similarity both between document pairs and between question-document pairs <cit.>, allowing for retrieval of the most similar documents given a new question or document.
However, little work has been done investigating how the discourse within a community, which represents the meaning of that community, can be captured in a single embedding. The discourse of a community in this context can be all users' posts in that specific community or the community's description.
This poses a unique challenge as discourse within these communities is often in the form of threads that, unlike documents, are not naturally represented as a single block of text.
The goal of this work is to develop a system to recommend support groups to social media users who seek help regarding mental health issues using embeddings to represent the communities and their discourse.
Specifically, we aim to leverage the text of a given user's posts along with the description and posts in each subreddit community to help recommend support groups that the user could consider joining.
Our main research questions are as follows:
* In representing online communities through discourse embeddings, what type of information can be used?
* To what degree do these representations improve the accuracy of predicting users' behaviors regarding their involvement in sharing experiences within groups or communities?
* Do different discourse embedding methods change the prediction capacity of our community recommendation model?
In exploring these research questions, we propose a hybrid recommendation approach that leverages both content-based and collaborative filtering to construct our community recommendation model. As shown in Fig. <ref>, the content-based filtering component investigates different methods of embedding discourse within a community to recommend similar communities to users. It is then combined with a matrix factorization model that learns user engagement behavior in a community to improve recommendation decisions. Utilizing users' past interactions as well as text-based information about the communities, we show that our model achieves promising accuracy while offering interpretability.
§ RELATED WORK
There are a number of studies related to our work.
<cit.> and <cit.> constructed discourse embeddings to find relations between short text segments.
While the two studies were similar in concept, they focused on short text segments, whereas this work instead focuses on constructing discourse embeddings for entire social media communities.
<cit.> showed NLP techniques could be used with electronic health records to predict mental health crises 4 weeks in advance.
While online communities are no replacement for professional medical help, this suggests that many who have looming mental health problems seek help before a crisis.
<cit.> experimented on the same dataset we used with Natural Language Processing techniques such as TF-IDF and sentiment analysis to understand the effects of COVID-19 on mental health.
Although it works on the same dataset, our work studies a different task: recommending mental health-related support communities to Reddit users.
<cit.> adopted a similar approach to ours in content-based filtering for recommendation.
Specifically, they mapped a Wikipedia page to each item and generate its corresponding vector representation using three feature-extraction methods - Latent Semantic Indexing, Random Indexing, and Word2Vec.
We extended this method by exploring more recent representations of text such as BERT <cit.> and OpenAI embeddings.
<cit.> recommended threads in health forums based on the topics of interest of the users.
Specifically, self-reported medical conditions and symptoms of treatments were used as additional information to help improve thread recommendations <cit.>.
While our work is also situated in the health domain, we are interested in recommending a broader support group to users rather than a specific thread.
<cit.> used sentiment and other features to automatically evaluate dialog, showing NLP techniques could be used to evaluate quality of discourse.
In doing so, they leveraged weak supervision to train a model on a large dataset without needing quality annotations.
§ PROBLEM DEFINITION
Suppose we have a Reddit's "who-posts-to-what" graph, which is denoted by G = (U, V, E) where U is the set of users, V is the set of subreddit communities, and E, a subset of U× V, is the set of edges.
The number of user nodes is m = |U| and the number of subreddit communities is n = |V|. So, U = {(u_1, P_1), (u_2, P_2) , ..., (u_m, P_m)} where P_i is the set of posts by user u_i and V = {(v_1, P^'_1), ..., (v_n, P^'_n)} where P^'_j is the set of all posts in subreddit v_j.
If a user u_i posts to subreddit v_j, there is an edge that goes from u_i to v_j, which is denoted by e_ij = e(u_i, v_j).
The problem is: given G, predict whether e_ij = e(u_i, v_j) exists.
In other words, will user u_i post to subreddit v_j?
§ METHODOLOGY
Figure <ref> illustrates our recommendation pipeline, which adopts a hybrid approach by incorporating both content-based filtering (CBF) and collaborative filtering, specifically matrix factorization (MF) strategies. The CBF model recommends new subreddits based on the average of a user's previous interactions, weighted by how similar the previous subreddits are to the new ones. Meanwhile, users and subreddits are represented in a k-dimensional joint latent space in the MF model. The distance between users and subreddits in this latent space is used to provide recommendations for new subreddits. The predictions from these two components are linearly combined to obtain the final recommendation of subreddits to users.
The collaborative filtering component of our solution leverages nonnegative matrix factorization to represent our users and subreddits in a lower-dimensional latent space. In this sense, we redefine the adjacency matrix 𝐀 in our problem definition so that it works with nonnegative factorization. More specifically, users' past interactions with items are represented by the adjacency matrix 𝐀∈{5, 1, 0}^m × n: A_ij = 5 if the user u_i has posted to subreddit v_j, A_ij = 1 if the user u_i has NOT posted to subreddit v_j, and A_ij = 0 is the missing connection that needs predicting. Given this adjacency matrix 𝐀, the task is to predict the missing elements where A_ij = 0. In the following sections, we elaborate on each component of our recommendation model and then discuss how they are combined to obtain our final solution.
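As a rough illustration of this collaborative filtering component, the sketch below builds the {5, 1, 0} adjacency matrix from hypothetical interactions and factorizes it with scikit-learn's nonnegative matrix factorization; the latent dimension and the way missing entries are handled here are illustrative assumptions, not the paper's reported configuration.
import numpy as np
from sklearn.decomposition import NMF
# Hypothetical interaction matrix for 4 users x 3 subreddits:
# 5 = posted, 1 = did not post, 0 = unknown (to be predicted).
A = np.array([[5, 1, 0],
              [1, 5, 5],
              [5, 0, 1],
              [0, 5, 1]], dtype=float)
# Factorize A ~= W @ H with k latent dimensions (k = 2 chosen arbitrarily here).
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(A)     # user factors, shape (m, k)
H = model.components_          # subreddit factors, shape (k, n)
A_mf = W @ H                   # reconstructed scores, used to fill A_ij = 0
print(np.round(A_mf, 2))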
§.§ Content-based Filtering
In recommending items to users based on their past interactions and preferences, content-based filtering methods represent each item with a feature vector, which can then be utilized to measure the similarity between items <cit.>. If an item is similar to another item with which a user interacted in the past, it will be recommended to that same user. Thus, in addition to the adjacency matrix 𝐀, we utilize another matrix 𝐂 of size n× n, where 𝐂_ab is the similarity between the embeddings for two subreddits with embedding vectors 𝐚 and 𝐛.
In this paper, we use cosine similarity as the similarity measure:
𝐂_ab = 𝐚·𝐛/(‖𝐚‖ ‖𝐛‖).
To predict the value of the missing element where A_ij = 0 (whether user u_i will post to subreddit v_j), we compute the average of user u_i's past interactions (which subreddits user u_i posted and did not post to), weighted by the similarity of these subreddits to subreddit v_j.
Mathematically,
A^'_ij = ∑_k=1^n A_ik C_kj/∑_k=1^n C_kj.
We can generalize the above formula to obtain the new predicted adjacency matrix using matrix-level operations:
𝐀^(CBF) = (𝐀𝐂) ⊙𝐃,
where
* 𝐃 is the element-wise reciprocal of 𝐈𝐂, i.e., 𝐃_ij = 1/(𝐈𝐂)_ij,
* 𝐈 is an indicator matrix such that I_ij = 1 if A_ij≠ 0, otherwise I_ij = 0,
* and ⊙ is the Hadamard product.
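A small NumPy sketch of this matrix-level CBF prediction (our own naming; the small constant guarding against empty denominators is our addition) could read:

import numpy as np

def cbf_predict(A, C, eps=1e-12):
    # A: (num_users, num_subreddits) interaction matrix; C: (num_subreddits, num_subreddits) similarity matrix
    I = (A != 0).astype(A.dtype)             # indicator of observed entries
    numerator = A @ C                        # similarity-weighted sum of past interactions
    denominator = I @ C                      # per-entry normalization, i.e. the matrix I C
    return numerator / (denominator + eps)   # element-wise division realizes the Hadamard product with D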
§.§.§ Representing Subreddit Discourse with Description and Posts
It is helpful to consider the specific domain of the application when representing each item as an embedding. In the context of our subreddit recommendation problem, we take advantage of two types of text-based information about a subreddit to construct the similarity matrix: (1) the posts within the subreddit itself and (2) the general description of the subreddit provided by its moderators.
We then use a feature extraction method to obtain two embeddings of a subreddit, one based on its description and the other based on its posts. As a subreddit contains many posts, each of which has a different embedding given the same feature-extraction method, we take the average of the embeddings across all posts within a subreddit to obtain one embedding for the subreddit.
§.§.§ Feature Extraction
In this paper, we consider three feature-extraction methods: Term Frequency-Inverse Document Frequency (TF-IDF), Bidirectional Encoder Representations from Transformers (BERT) <cit.>, and OpenAI.[OpenAI API Embeddings: <https://platform.openai.com/docs/guides/embeddings>]
TF-IDF: The TF-IDF algorithm represents a document as a vector, each element of which corresponds to the TF-IDF score of a word in that document.
The TF-IDF score for each word in the document is dictated by (1) the frequency of the word in the document <cit.>, and (2) the rarity of the word in the entire text corpus <cit.>.
That is, a term is important to a document if it occurs frequently in the document but rarely in the corpus.
We use the implementation from scikit-learn <cit.> to obtain the TF-IDF representations of our subreddits.
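As an illustration of this step, a possible scikit-learn sketch (the toy subreddit_texts list is ours; the actual preprocessing in the paper may differ) is:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

subreddit_texts = [
    "Peer support for anxiety and panic",        # toy stand-ins for real descriptions or concatenated posts
    "A place to ask simple legal questions",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(subreddit_texts)    # (n_subreddits, vocab) sparse TF-IDF matrix
C = cosine_similarity(X)                         # (n_subreddits, n_subreddits) similarity matrix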
BERT: We employ BERT to generate sentence embeddings as another feature extraction technique <cit.>.
BERT takes a sentence as input and generates a fixed-length vector representation of the sentence.
This representation is meant to capture the syntactic and semantic meaning of the input sentence in a way that can be used for various natural language processing tasks, such as sentence classification or semantic similarity comparison.
In the context of our problem, we can treat each subreddit description or each post as a sentence and feed it to a pre-trained BERT model to generate the embeddings that represent the subreddit. Long posts are truncated to fit within the context limits of pre-trained models. We experiment with 4 different variations of BERT embeddings:
* BERT base and large <cit.>
* Sentence-BERT, or SBERT <cit.>
* BERTweet <cit.>
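A hedged sketch of the post-averaging step with the sentence-transformers package (the checkpoint name is one possible choice, not necessarily the one used in the paper) might look like:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # one possible pre-trained SBERT checkpoint

def subreddit_embedding(posts):
    # average the per-post sentence embeddings into a single subreddit vector
    vectors = model.encode(posts)                 # shape: (num_posts, dim)
    return np.mean(vectors, axis=0)               # shape: (dim,)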
OpenAI: Similar to BERT embeddings, OpenAI embeddings take in a string of text and output an embedding that represents the semantic meaning of the text as a dense vector.
To do this, the input string is first converted into a sequence of tokens.
The tokens are then fed to a Large Language Model (LLM), which generates a single embedding vector of fixed size.
OpenAI's text-embedding-ada-002 can take strings of up to 8191 tokens and returns a vector with 1536 dimensions.
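For illustration, a minimal call to this embedding endpoint (the exact client syntax depends on the installed library version; the helper name is ours) could be:

import openai  # pre-1.0 client interface; newer versions expose openai.OpenAI().embeddings.create instead

def embed_text(text, model="text-embedding-ada-002"):
    # returns the 1536-dimensional embedding of one (possibly truncated) input string
    response = openai.Embedding.create(model=model, input=text)
    return response["data"][0]["embedding"]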
§.§ Nonnegative Matrix Factorization for Collaborative Filtering
Matrix factorization (MF) approaches map users and items (subreddits in this case) to a joint latent factor space of a lower dimension k <cit.>. The goal of this method is to recommend to a user the subreddits that are close to them in the latent space. More formally, MF involves the construction of a user matrix 𝐏 of dimension m× k and a subreddit matrix 𝐐 of dimension n× k. The resulting term 𝐩_i^⊤𝐪_j captures user u_i's interest in item v_j's characteristics, thereby approximating user u_i's rating of item v_j, denoted by A_ij.
This modeling approach learns the values in 𝐏 and 𝐐 by optimizing the loss function
min_𝐏,𝐐∑_A_ij∈𝐀 ( A_ij - 𝐩_i^⊤𝐪_j )^2 + λ ( ‖𝐩_i‖^2 + ‖𝐪_j‖^2).
Matrix factorization offers the flexibility of accounting for various data and domain-specific biases that may have an effect on the interaction between user u_i and subreddit v_j. In this paper, we consider three types of biases: global average μ, user bias b_i^(p), and subreddit bias b_j^(q). The updated loss function is given by:
min_𝐏,𝐐∑_A_ij∈𝐀 ( A_ij - μ - b_i^(p) - b_j^(q) - 𝐩_i^⊤𝐪_j )^2 +
λ ( ‖𝐩_i‖^2 + ‖𝐪_j‖^2 + (b_i^(p))^2 + (b_j^(q))^2).
After optimization, each element of the new predicted adjacency matrix 𝐀^(MF) is given by:
𝐀^(MF)_ij = 𝐩_i^⊤𝐪_j + μ + b_i^(p) + b_j^(q).
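A compact NumPy/SGD sketch of this bias-augmented factorization (our own; it optimizes the written loss over observed entries and does not enforce an explicit nonnegativity constraint) is:

import numpy as np

def train_mf(A, k=16, lr=0.01, lam=0.1, epochs=50, seed=0):
    # SGD for A_ij ≈ mu + b_i + b_j + p_i·q_j, fitted on observed (non-zero) entries only
    rng = np.random.default_rng(seed)
    m, n = A.shape
    P = 0.1 * rng.standard_normal((m, k))
    Q = 0.1 * rng.standard_normal((n, k))
    b_u, b_v = np.zeros(m), np.zeros(n)
    rows, cols = np.nonzero(A)
    mu = A[rows, cols].mean()                    # global average of observed ratings
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            e = A[i, j] - (mu + b_u[i] + b_v[j] + P[i] @ Q[j])
            b_u[i] += lr * (e - lam * b_u[i])
            b_v[j] += lr * (e - lam * b_v[j])
            P[i], Q[j] = P[i] + lr * (e * Q[j] - lam * P[i]), Q[j] + lr * (e * P[i] - lam * Q[j])
    return mu + b_u[:, None] + b_v[None, :] + P @ Q.T    # full predicted matrix A^(MF)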
§.§ Final Model: Hybrid Approach
Our main model leverages insights from both content-based filtering and matrix factorization by taking a linear combination of their predicted adjacency matrix. Specifically, the new adjacency matrix is given by:
𝐀^(MF+CBF) = β𝐀^(CBF) + (1 - β) 𝐀^(MF),
where β is a hyperparameter that controls how much the CBF model (vs MF model) contributes to the final prediction.
§ DATA AND EXPERIMENTAL SETUP
For the experimental setup, we use the dataset from <cit.>, which studies Reddit platforms in mental health domains, particularly health anxiety.
§.§ Data Description
The dataset is collected from 28 mental health and non-mental health subreddits.
The dataset is suitable for studying how subreddits and social media platforms correlate with individuals' mental health and behavior.
The original data comprises 952,110 Reddit posts from 770,176 unique users across 28 subreddit communities, which include 15 mental health support groups, 2 broad mental health subreddits, and 11 non-mental health subreddits. We also manually collect descriptions of the 28 subreddits and use that information along with the posts to construct the content similarity matrix.
§.§ Data Preprocessing
Although the original dataset has a large number of unique users, the majority of them only contribute posts to one or two different communities. This presents a challenge when evaluating our specific task. As our objective is to examine users' behavior over time and provide recommendations for engaging in suitable subreddits, we have implemented a filter to exclude users who post to fewer than three subreddits. After filtering, 16,801 users and 69,004 posts remain, while the number of subreddits remains 28.
We also seek to understand the distribution of interactions between users and different subreddits. The detailed distribution of post frequency across subreddits is visualized in Figure <ref>.
§.§ Experimental Setup
§.§.§ Data Splits
To construct our data splits, for each user in our dataset, we choose the most recent subreddit that the user first posted to as the test example.
For example, if the user post history is [subreddit1, subreddit2, subreddit3, subreddit1, subreddit2], then subreddit3 will be used as the test example.
For each positive training example, we pair it with a negative example randomly sampled from the subreddits the user has not posted to.
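A small helper illustrating this split and negative-sampling scheme (hypothetical function, assuming per-user post histories ordered in time) could be:

import random

def split_user(history, all_subreddits, seed=0):
    # history: the user's posts as an ordered list of subreddit names
    random.seed(seed)
    first_visits = list(dict.fromkeys(history))        # subreddits in order of first post
    test_item = first_visits[-1]                       # e.g. 'subreddit3' in the example above
    train_items = first_visits[:-1]
    unseen = [s for s in all_subreddits if s not in first_visits]   # assumes the user has not posted everywhere
    negatives = [random.choice(unseen) for _ in train_items]        # one sampled negative per positive
    return train_items, negatives, test_item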
§.§.§ Evaluation Metrics
In assessing the performance of our recommendation method and the baseline, we use the following evaluation metrics: Recall@K and Mean Reciprocal Rank (MRR).
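For reference, a minimal sketch of how these two metrics can be computed when there is one held-out subreddit per user (our own implementation, not the paper's evaluation code) is:

import numpy as np

def recall_at_k(ranked_items, true_item, k):
    # 1 if the held-out subreddit appears among the top-k recommendations, else 0
    return float(true_item in ranked_items[:k])

def mean_reciprocal_rank(all_ranked, all_true):
    # average of 1/rank of the held-out subreddit over all test users
    # (assumes every candidate subreddit appears somewhere in each ranking)
    ranks = [ranked.index(true) + 1 for ranked, true in zip(all_ranked, all_true)]
    return float(np.mean([1.0 / r for r in ranks]))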
§.§ Results
Table <ref> presents the performance of our hybrid recommendation system as well as its individual components (MF or CBF). For CBF, we report its performance on different types of embeddings constructed using different information (posts or description) and different feature extraction methods (TF-IDF, BERT, or OpenAI). Figure <ref> visualizes the Recall@K of representative models for easier comparison.
According to Table <ref>, all variants of our recommendation method outperform the random predictor. Among all the variants, the hybrid solution using the content similarity matrix generated from OpenAI embeddings achieves the highest performance in MRR (0.4244) and average Recall@K.
For CBF, operating a feature-extraction method on subreddit posts results in higher performance than operating the same method on descriptions. For example, the MRR for CBF - BERT base is 0.3140 when using posts and 0.3024 when using descriptions. It can also be observed that, given the same information (either posts or descriptions), deep-learning-based feature extraction methods like OpenAI and BERT bring about better performance for CBF than TF-IDF.
As our recommendation model combines both MF and CBF, we investigate the effect of the hyperparameter β, which dictates how much CBF contributes to the final prediction. Figure <ref> illustrates the performance of the hybrid models as β varies. When β = 0, the hybrid model's performance is the same as that of MF. When β = 1, the hybrid model's performance is the same as that of CBF. It can be seen from the peaks of these curves that this way of linearly combining MF and CBF brings about a significant improvement in MRR.
§.§ Case Studies
We perform a series of case studies to understand why certain information and methods are more helpful than others in recommending subreddits to users. We present our findings by comparing the behavior of the following models: (1) CBF models using TF-IDF and OpenAI Embedding on Subreddit Descriptions, (2) CBF models using OpenAI Embeddings on Subreddit Descriptions and Posts, and (3) MF model and Hybrid model.
§.§.§ CBF models using TF-IDF and OpenAI Embedding on Subreddit Descriptions
The objective of the first case study is to investigate the impact of different types of embedding methods on the performance of recommendations. To achieve this, we employ TF-IDF and OpenAI Embedding approaches to analyze subreddit descriptions and compare their predictions using content-based filtering (CBF) approaches, as illustrated in Figure <ref>. Specifically, we consider User A's historically interacted subreddits, which relate to depression, loneliness, and anxiety, respectively, with the ground truth of socialanxiety. For CBF models, the content similarity C between historically interacted and ground truth subreddits is crucial for accurate predictions. Hence, we evaluate the similarity scores between them. According to the result, the OpenAI Embedding technique outperforms TF-IDF in learning the representation of subreddits.
Based on the analysis of content similarity matrices of the two approaches, we observe that TF-IDF has low similarity scores among subreddits due to its bag-of-words (BOW) approach, which fails to capture semantic relationships in short texts <cit.>, such as subreddit descriptions. In contrast, OpenAI Embeddings, which can capture semantic meanings, performs better for encoding the meanings of subreddit descriptions for recommendation tasks.
§.§.§ CBF models using OpenAI Embeddings on Subreddit Descriptions and Posts
The second case study aims to investigate the impact of different types of information on the performance and recommendations of CBF models. To achieve this goal, we evaluate OpenAI Embeddings approaches on two types of information, subreddit descriptions, and posts. Figure <ref> illustrates the predictions using CBF approaches utilizing OpenAI Embeddings on posts and descriptions. Specifically, we examine User B's historical posts, which are in depression and personalfinance, and the ground truth label is legaladvice. To understand the behavior of CBF on these two types of information, we analyze the similarities between historical subreddit interactions of User B and how the ground truth label is correlated with these subreddits. Our analysis shows that using OpenAI Embeddings on subreddit posts can capture strong relationships between personalfinance and legaladvice, where many legaladvice posts are related to financial information. However, when only using subreddit descriptions of legaladvice, which is "A place to ask simple legal questions, and to have legal concepts explained.", the model fails to capture this relationship.
Furthermore, as shown in Table <ref>, the use of subreddit posts as representations for communities generally exhibits higher performance across most metrics when compared to using community descriptions. The reason is that subreddit descriptions capture only the general purpose of the subreddit and thus contain less information than its posts. In contrast, using subreddit posts can accurately learn the representations of the subreddits. Therefore, among the two types of information, using subreddit posts to represent subreddits helps models achieve better performance.
§.§.§ MF vs MF + CBF model using OpenAI Embeddings on Subreddit Discourses
The objective of the third study is to investigate the performance improvement achieved by combining MF and CBF. Specifically, we aim to explore how the use of discourse embeddings to generate content similarity matrices among subreddits can address challenges encountered by the MF approach. To this end, we evaluate the MF and MF + CBF approaches using OpenAI Embeddings on posts. The predictions generated by the two models are presented in Figure <ref>.
We further examine the construction of the scores produced by MF for this case study. The score values are generated using the latent features P, Q, μ, b^(p), and b^(q), representing user features, item features, the global average, user biases, and item biases, respectively. However, due to the imbalance in the dataset, some subreddits have far more posts than others, which leads to a cold start problem: the MF approach cannot accurately learn communities with a small number of examples. In this case study, MF fails to generate correct predictions for the divorce community due to the limited number of posts available. Additionally, MF is biased towards subreddits with more posts, as reflected by the b^(q) values, which have strong correlations with the number of posts in the subreddit communities, as depicted in Figure <ref>.
We demonstrate that the top three predictions generated by MF are the subreddits with the highest item biases compared to other subreddits, which are also the ones with the most posts. However, as divorce only accounts for 0.78% of the dataset, the performance of MF is limited. By utilizing OpenAI Embeddings on Subreddit Discourses to represent subreddit communities, we can integrate semantic information into the prediction process, thereby overcoming the cold start problem encountered by MF. Furthermore, this approach captures the relationships between the target recommended subreddit and the historically interacted communities via their semantic similarities. In this case, the most similar subreddits to personalfinance are legaladvice and divorce, while the most similar subreddits to parenting are autism and divorce.
Overall, we showcase that integrating semantic information into MF can address the cold start problem, and combining MF with CBF using discourse embeddings can make better recommendations.
§ CONCLUSION
This study aimed to investigate the effectiveness of different types of discourse embeddings when integrated into content-based filtering for recommending support groups, particularly in the mental health domain. Our findings showed that the hybrid model, which combined content-based filtering and collaborative filtering, yielded the best results. Moreover, we conducted an extensive case study to demonstrate the interpretability of our approach's predictions.
Previous studies have brought to light the use of past behaviors to make more accurate recommendations in mental health <cit.>. They also emphasize effective communication between the recommender system and the user as an essential factor for users' proper understanding of mental health in general as well as in their own journey <cit.>. Through promising prediction accuracy and interpretability, we believe that this method can serve as a valuable tool to support individuals, particularly those with mental health concerns, to share and seek help regarding their issues.
§ LIMITATIONS
In our current project, we have not taken into account the temporal information that treats the historical behavior of users as a sequence of actions. Thus, the model may not capture how user behaviors change over time. To ensure full support to users in need, we recommend that future work should address this limitation by considering users' historical behaviors as a sequence of actions. Moreover, although our pre-trained models achieved significant results without fine-tuning discourse embeddings, we suggest that fine-tuning these models can enhance performance by capturing the nuances of the datasets' distribution and contexts. Furthermore, conducting a detailed comparison of additional open-source Large Language Models (LLMs) would provide more comprehensive insights into their performance.
In addition to analyzing the efficiency of different models, it is also crucial to evaluate the cost associated with implementing them. Therefore, future work should consider both fine-tuning and evaluating additional LLMs, while also taking into account the costs of utilizing these models.
§ ACKNOWLEDGEMENT
This work was supported by NSF IIS-2119531, IIS-2137396, IIS-2142827, CCF-1901059, and ONR N00014-22-1-2507.
|
http://arxiv.org/abs/2307.05027v1 | 20230711060032 | Multimode resonance transition to collapsed snaking in normal dispersion Kerr resonators: Bright versus dark solitons | [
"Yifan Sun",
"Stefan Wabnitz",
"Pedro Parra-Rivas"
] | physics.optics | [
"physics.optics",
"nlin.PS"
] |
[email protected]
Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
We study the dynamics of Kerr cavity solitons in the normal dispersion regime, in the presence of an intracavity phase modulation. The associated parabolic potential introduces multimode resonances, which promote the formation of high-order bright solitons. By gradually reducing the potential strength, bright solitons undergo a transition into dark solitons. We describe this process as a shift from a multimode resonance to a collapsed snaking bifurcation structure. This work offers a comprehensive overview of cavity dynamics and may provide a potential pathway to access multi-stable states by effectively varying the phase modulation.
Multimode resonance transition to collapsed snaking in normal dispersion Kerr resonators: Bright versus dark solitons
Pedro Parra-Rivas
August 12, 2023
=====================================================================================================================
Temporal dissipative Kerr solitons (DKS) <cit.> have emerged as a significant research topic in the field of photonics over the past decade. In the frequency domain, DKS are associated with the generation and manipulation of coherent frequency combs <cit.>.
DKS have been effectively generated in passive ring Kerr resonators, by mainly using two platforms: microresonators <cit.> and macroscopic fiber rings <cit.>.
The formation of DKS relies on a delicate equilibrium that encompasses both the counter-balance between dispersion and nonlinearity, as well as the balance between cavity loss and external driving field.
The dynamics and stability of DKS have been the object of detailed analysis in a mean-field approximation. In this framework, the evolution of the field recirculating in a passive Kerr resonator is described by a driven and damped nonlinear Schrödinger equation (NLSE) <cit.>, also known as Lugiato-Lefever equation (LLE).
With increasing pump intensity, the intracavity field exhibits a variety of instabilities, resulting in intricate spatiotemporal dynamics, that can manifest as periodic phenomena, such as breathers, and chaotic behaviors <cit.>.
In Kerr resonators, it is well-known that in the anomalous dispersion regime, "bright" DKS manifest as light pulses superimposed on a relatively low-intensity continuous wave (CW) background, maintaining their shape and energy throughout as they circulate within the cavity. In this context, single-peak DKS solutions can be computed by applying standard perturbation analysis to the conservative soliton solution of the NLSE <cit.>.
In contrast, in a normal dispersion cavity, "dark" DKS, characterized by localized intensity dips embedded within a relatively high-intensity background, can be formed <cit.>.
Recent studies have shown that the presence of higher-order perturbations can bring about a fundamental change in the occurrence of bright and dark DKS, under various dispersion driving conditions.
Notably, the effects of third- and fourth-order dispersion <cit.>, the influence of the stimulated Raman effect <cit.>, and the interaction with frequency-dependent cavity losses <cit.> may result in the emergence of novel forms of localized states, or in the coexistence of bright and dark DKS. Furthermore, the occurrence of dark-bright DKS bound states have been observed by seeding two modes with dispersion of opposite signs <cit.>.
In addition, intracavity phase modulation [e.g., via an electro-optical modulator (EOM)] offers an additional degree of control over DKS dynamics and serves as a valuable tool for investigating synthetic dimensions <cit.>.
Our recent studies in this framework have shown that the parabolic potential introduced by the EOM can lead to soliton stabilization in the anomalous regime <cit.>, and the emergence of a host of new solutions, such as high-order DKS, chaoticons <cit.> and 3D DKS <cit.>.
In anomalous dispersion, DKS always drift along an increasing phase gradient <cit.>. Therefore, attractive localization of bright DKS occurs at the maximum of the parabolic potential.
In this letter, we show that, remarkably, bright DKS can be generated even in a normal dispersion cavity in the presence of a local minimum of the phase modulation.
Our theoretical investigation demonstrates the significant influence of the parabolic potential in the transition between bright and dark DKS, which occurs within a dissipative Kerr resonator operating under normal dispersion.
The introduction of the parabolic potential facilitates the generation of multimode bright DKS.
By reducing the potential strength, the temporal duration of high-order bright DKS grows, and they eventually transition towards dark DKS. This phenomenology is comprehensively analyzed by means of a systematic bifurcation analysis, which permits us to establish a clear connection between the multimodal bifurcation structure and the collapsed homoclinic snaking that appears in the absence of external phase modulation <cit.>.
In the mean-field approximation, the evolution of the light field in a coherently driven and phase-modulated passive cavity in the normal dispersion regime obeys the dimensionless equation <cit.>
∂_t A = -i∂_τ^2 A - iCτ^2A
+ i|A|^2A - (1+iδ)A+ P,
where A is the slowly varying field envelope, and τ and t are the fast and slow times, respectively. The term -i∂^2_τ A describes second-order normal dispersion, -(1+iδ)A accounts for the linear loss and the cavity phase detuning δ, i|A|^2A is the Kerr nonlinear term, and P is the driving pump field amplitude.
The synchronous parabolic temporal potential -iCτ^2 from the EOM is introduced, where C is the phase modulation curvature <cit.>.
It is worth noting that the potential shape can be readily adjusted by designing the voltage profile incorporated into the EOM <cit.>. This flexibility allows for the implementation of various potential shapes, such as a parabola, linear, sinusoidal, or other desired profiles.
By reinterpreting the meaning of the coordinates, Eq. (<ref>) also describes spatial bottle resonators <cit.>.
To perform the bifurcation analysis of the steady-state soliton solutions A_s (i.e., ∂_t A_s=0) of Eq. (1), we apply a combination of numerical techniques, including direct numerical simulations (DNS), path-continuation techniques through pde2path, and numerical linear stability analysis <cit.>.
To illustrate the essential characteristics arising from the parabolic potential and Kerr nonlinearity, Fig. <ref>(a)
compares the bifurcation structure of stationary solutions of Eq.(1) either in the absence or in the presence of the Kerr nonlinear term i|A|^2A, as a function of detuning δ, for the fixed
(P,C)=(2.5,-1).
Here, we plot the modification of the average field amplitude N=√(E/T), with E=∫|A|^2 dτ, as a function of δ for a time-domain window T=100.
The gray curve in Fig. <ref>(a) shows the modification of N in the absence of the Kerr nonlinearity. This curve consists of many peaks, equally spaced at δ=1, 5, 9, 13, 17,⋯, with a gap Δδ=4. This pattern already illustrates the presence of multimode resonances (MMR), indicating that the cavity can sustain higher energy levels at these specific resonance points.
These MMR find their precise determination in the eigenvalues δ_n and eigenmodes ψ_n of the linear system operator Ĥ_0=[-i∂_τ^2 - iCτ^2 ]. These quantities emerge as a result of the intricate interplay between second-order dispersion and the parabolic potential. The modes obey the linear eigenvalue equation δ_nψ_n=Ĥ_0ψ_n, enabling us to obtain the eigenvalues δ_n=2√(|C|)(n+1/2) and eigenmodes ψ_n, which are characterized by Hermite-Gaussian (HG) functions, namely
ψ_n(τ) = (2^nn!)^-1/2π^-1/4exp(-τ^2/(2a_τ^2)) H_n(τ/a_τ), where H_n represents the Hermite polynomial of order n, and a_τ=|C|^-1/4 is the scaling ratio.
The relation among eigenmodes ψ_n(τ), eigenvalues δ_n, and their parabolic potential τ^2 is illustrated in Fig. <ref>(c). Homogeneous pumping effectively suppresses resonances of asymmetric HG modes, resulting in a resonance gap of Δδ=4√(|C|) between consecutive resonances <cit.>.
Note that in this work, we focus on scenarios where the field is located at the minimal phase modulation point (C<0) within the normal dispersion regime (-i∂_τ^2A). The signs of dispersion and potential in Eq.(<ref>) are opposite, when compared with the model presented in Ref.<cit.>. As a consequence, the linear eigenvalues are positive, while the eigenfunctions exhibit the same profiles.
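For concreteness, the linear eigenvalues and mode profiles introduced above can be evaluated numerically; the following short Python sketch (ours, relying on SciPy's Hermite polynomials and omitting the overall normalization constant) reproduces the resonance positions quoted for C=-1:

import numpy as np
from math import factorial, pi
from scipy.special import eval_hermite

def eigenvalue(n, C):
    # linear resonance detuning delta_n = 2*sqrt(|C|)*(n + 1/2)
    return 2.0 * np.sqrt(abs(C)) * (n + 0.5)

def hg_mode(n, tau, C):
    # Hermite-Gauss profile psi_n(tau) with scaling ratio a_tau = |C|^(-1/4)
    a_tau = abs(C) ** (-0.25)
    norm = (2.0 ** n * factorial(n)) ** (-0.5) * pi ** (-0.25)
    return norm * np.exp(-tau ** 2 / (2.0 * a_tau ** 2)) * eval_hermite(n, tau / a_tau)

tau = np.linspace(-10.0, 10.0, 2001)
profiles = [hg_mode(n, tau, -1.0) for n in range(3)]   # psi_0, psi_1, psi_2 on a grid
print([eigenvalue(n, -1.0) for n in range(5)])         # [1.0, 3.0, 5.0, 7.0, 9.0]; even n give the peaks at 1, 5, 9, ...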
With the inclusion of the Kerr effect, noticeable changes occur in the resonance peaks [see the black curve in Fig. <ref>(a)]. This is because the Kerr effect introduces an intensity-dependent phase modulation, thus altering the effective detuning as i(|A|^2-δ).
For resonance peaks with higher intensity |A|^2, a larger detuning δ is preferred, in order to compensate for the phase modification caused by |A|^2, resulting in the tilting of the resonance peaks. These tilts lead to overlapping portions of the solutions on the right side of the resonance peaks. Consequently, fold bifurcations (FB) arise (see F_n,1 and F_n,2 marked at resonance peak n), leading to the emergence of branches of multi-stable states. The solid (dashed) curves in the bifurcation diagram represent stable (unstable) solutions, which have been verified by a linear stability analysis.
Figures <ref>(i,ii,iii) show three stable solutions for the same value of detuning δ=4.9. Each solution exhibits a different number of peaks: 1, 3, and 5, respectively. These differences originate from the dominance of a Hermite-Gaussian (HG) mode with index n=0,2,4.
To verify this, we may project the field onto the HG mode basis [see Fig. <ref>(b)]. This allows for determining the mode amplitudes C_n(t) = e^-iδ_nt∫_-∞^∞A(τ,t)ψ_n(τ) dτ, which are represented in Fig. <ref>(b) in terms of the averaged mode amplitude N_n=√(C_n^2/T).
As δ increases, the component of mode 2(n-1) exhibits a significant growth, particularly when δ approaches the corresponding resonance peak n. This indicates that each resonance is predominantly associated with a given mode.
Furthermore, we observe the existence of breather solutions within the two Hopf bifurcations (HB) H_2,1 and H_2,2 which occur at the second resonance peak [see Fig. <ref>(a)].
By increasing the pump strength, more energy gets coupled into the cavity, leading to potentially richer dynamics. To explore the impact of pump strength P on the system behavior, we constructed phase diagrams as in Fig. <ref>, by varying both P and the detuning δ. In Fig. <ref>(a), the FB-point-connected curves F_n for each resonance peak n are displayed, for C=-1. It is observed that increasing the pump P results in larger tilts for all resonance peaks, and a higher number of occurrences of FB at higher-order resonances. Additionally, larger regions for breathers are observed as the pump is increased (see the dashed purple curves representing the HB-point-connected curves H_2 and H_3 at resonances n=2,3). On the other hand, decreasing the pump strength eliminates multiple stable and unstable states, leading the cavity system to gradually return to its linear state.
The strength of the potential has a significant impact on the cavity dynamics: not only does the potential determine the eigenvalues δ_n, but it also dictates the scaling factor a_τ = |C|^-1/4 for the size of the HG modes. This implies that reducing the potential strength leads to a denser distribution of resonances in δ, while resulting in a wider field distribution in τ. To examine this effect, we have calculated the phase diagram for a potential strength of C=-0.25, corresponding to the linear resonance separation Δδ=2. As depicted in Fig. <ref>(b), the phase diagram exhibits dynamics similar to Fig. <ref>(a), but with a greater number of FB (solid curves) and HB (dashed purple curves) occurring at higher-order resonances within the same region.
At this point, a natural question arises regarding the bifurcation structures and DKS profiles when the potential strength |C| is reduced: do these high-order resonances vanish, and how are the high-order DKS modified? To address this question, in Fig. <ref>(a) we plot the bifurcation diagrams for progressively reduced potential strengths, namely C=-0.2, -10^-2, -10^-4, corresponding to the eigenvalue differences Δδ=0.89, 0.2, 0.02, respectively.
For C=-0.2, the bifurcation diagram exhibits a similar pattern to Fig. <ref>(a), but within a narrower detuning region.
Three individual solutions with δ = 6.5, marked (i-iii) in Fig. <ref>(a), are plotted in Fig. <ref>(i-iii). When compared with the solutions in Fig. <ref>(i-iii), it is evident that the solutions in Fig. <ref>(i-iii) have a wider temporal extension. For the reduced strength C=-10^-2, the close-up view of Fig. <ref>(a) shown in Fig. <ref>(b) exhibits a striking resemblance in the bifurcation structure to the previous scenarios. Now the linear eigenvalue separation becomes very small (Δδ=0.2), while the mode scale is very large, a_τ=|C|^-1/4=3.16.
As a result, the full width at half-maximum of the fundamental mode increases to 1.665a_τ=5.27, and the high-order modes become much wider. Figures <ref>(iv-vi) showcase examples of DKS states at various branches, specifically for δ=5.1. Note that, these states seem dark DKS emerging within a non-uniform domain, and may be similar to those arising in pulse-pumped cavities <cit.>.
By further reducing |C|, the bifurcation structure in Fig. <ref>(b) tends to rotate to the right, leading, eventually, to the collapsed snaking shown in Fig. <ref>(a),(c) for C=-10^-4 <cit.>. This structure consists of a sequence of DKS state branches that oscillate back and forth, in a damped fashion, around the Maxwell point of the system (δ=4.85). Around this point, dark DKS, like those depicted in Fig. <ref>(vii, viii, ix) for δ=5.1, originate due to uniform front locking <cit.>.
Looking at it from another perspective, when increasing the potential strength,
the uniform background field on the two sides of the dark DKS undergoes deformation, resulting in a smooth transition to bright DKS.
Such potential-induced confinement also applies to breather solutions. Breathers, marked by (x, xi, xii) in Fig. <ref>(a, b, c), are plotted in Fig. <ref> (x, xi, xii), showcasing the impact of the potential on their formation.
In the bifurcation diagram, the potential effectively separates all the multi-stable state branches of the collapsed snaking structure towards the high-detuning regime, leading to an MMR structure.
The transition dynamics between high and low potential strength can be effectively visualized through the √(|C|) vs. δ phase diagrams in Fig. <ref>. In Fig. <ref>(a), we plot the FB-point-connected curves F_n for each resonance peak n, while varying √(|C|) and δ. A zoomed region is shown in Fig. <ref>(b).
Increasing the potential strength √(|C|) weakens the influence of the Kerr nonlinearity, reducing the resonance tilts and causing the gradual disappearance of the FB-point-connected curves F_n [see F_5 in <ref>(a) and F_6 in <ref>(b)].
Conversely, reducing √(|C|) causes the resonances to converge towards the Maxwell point δ=4.85. Notably, lower potential strengths give rise to additional FB points (only 6 FB curves are plotted here). Further decreasing √(|C|), the higher-order resonances undergo an evolution that ultimately leads to the emergence of a collapsed snaking structure.
In this study, we utilized bifurcation analysis to uncover the transition from bright to dark DKS in a driven passive nonlinear cavity in the normal dispersion regime with a parabolic potential. Our analytical study of the linear eigenmodes demonstrated how the parabolic potential localizes the field even in the presence of normal cavity dispersion, leading to the emergence of high-order modes and resonances in the system. These resonances, affected by the Kerr nonlinearity, lead to the formation of high-order bright multimode DKS and breathers. Furthermore, by reducing the potential strength, the DKS bifurcation structure shows denser resonances and an increased temporal duration of the bright DKS. Eventually, the former converges to the well-known collapsed snaking structure, and the latter transform into dark DKS. Our study not only provides physical insight into the modification of the nonlinear dynamics from MMR to collapsed snaking, but also offers a pathway to realize multi-stable states in such cavity systems.
§ ACKNOWLEDGEMENTS
This work was supported by EU under the NRRP of NextGenerationEU, partnership on “Telecommunications
of the Future” (PE00000001 - “RESTART”), Marie Sklodowska-Curie Actions (101023717,101064614), Sapienza University of Rome Additional Activity for MSCA (EFFILOCKER).
|
http://arxiv.org/abs/2307.04567v1 | 20230710135910 | Homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary data | [
"Vishnu Raveendran",
"Ida de Bonis",
"Emilio N. M. Cirillo",
"Adrian Muntean"
] | math.AP | [
"math.AP",
"35B27, 35B45, 35Q92, 35A01"
] |
Homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary data
Vishnu Raveendran^a,*, Ida de Bonis^b, Emilio N.M. Cirillo^c, Adrian Muntean^a
^a Department of Mathematics and Computer Science, Karlstad University, Sweden
^b Dipartimento di Pianificazione, Design, Tecnologia dell'Architettura,
Sapienza Università di Roma, Italy
^c Dipartimento di Scienze di Base e Applicate per l’Ingegneria,
Sapienza Università di Roma, Italy
[email protected]
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================
We study the periodic homogenization of a reaction-diffusion problem with large nonlinear drift and Robin boundary condition posed in an unbounded perforated domain. The nonlinear problem is associated with the hydrodynamic limit of a totally asymmetric simple exclusion process (TASEP) governing a population of interacting particles crossing a domain with obstacle. We are interested in deriving rigorously the upscaled model equations and the corresponding effective coefficients for the case when the microscopic dynamics are linked to a particular choice of characteristic length and time scales that lead to an exploding nonlinear drift. The main mathematical difficulty lies in proving the two-scale compactness and strong convergence results needed for the passage to the homogenization limit. To cope with the situation, we use the concept of two-scale compactness with drift, which is similar to the more classical two-scale compactness result but it is defined now in moving coordinates. We provide as well a strong convergence result for the corrector function, starting this way the search for the order of the convergence rate of the homogenization process for our target nonlinear drift problem.
Keywords: Homogenization; reaction-diffusion equations with large nonlinear drift; two-scale convergence with drift; strong convergence in moving coordinates; effective dispersion tensors for reactive flow in porous media.
MSC2020: 35B27; 35B45; 35Q92; 35A01
§ INTRODUCTION
Our interest lies in performing the periodic homogenization limit for a reaction-diffusion problem with large nonlinear drift
and Robin boundary condition posed in an unbounded perforated domain; see the structure of our target equations (<ref>)–(<ref>). The particular nonlinear drift problem we have in mind is associated with the hydrodynamic limit of a totally asymmetric simple exclusion process
(TASEP) governing a population of interacting particles crossing a domain with obstacle <cit.>. In this framework, we aim to
derive rigorously the upscaled model equations and the corresponding effective coefficients for the case when the microscopic dynamics are linked to a particular choice of
characteristic length and time scales that lead to an exploding nonlinear drift. Ideally, we wish to get as well some insights into the structure of the corrector for the homogenization procedure.
The question of exploding drifts is a bit unusual in the context of homogenization asymptotics as only particular scalings leading to blow-up can be handled rigorously. Hence, besides a couple of relatively recent papers [we are going to mention a few of them briefly in the following], there are not many contributions in the area. The celebrated concept of two-scale convergence with drift meant to cope with at least one particular exploding scaling has been introduced in <cit.> and later developed in <cit.>, very much motivated by attempts to understand the so-called turbulent diffusion (see e.g. <cit.> where one speaks about convective microstructures).
Our own motivation is aligned with statistical mechanics-motivated approaches to modeling reactive transport in porous media <cit.>. For phenomenological approaches to describing the physics of porous media, we refer the reader for instance to <cit.>. Closely related large-drift upscaling questions appear e.g. in the design of filters to improve catalysis <cit.> and in the control of microstructure for an efficient smoldering combustion in solid fuels <cit.>. The classical Taylor dispersion topic belongs to this context as well, compare <cit.>. Another class of periodic homogenization problems in the same framework arise from suitably scaled SDEs descriptions with exploding "volatilities" that lead to Fokker-Plank type counterparts with correspondingly exploding drifts (cf. e.g. <cit.>). Other relevant work related to large drift homogenization problems posed for different geometries can be found e.g. in <cit.>.
This work is organized as follows: In section <ref>, we introduce our microscopic model and the geometry of the heterogeneous domain. Then we list the structural assumptions that we rely on to study the homogenization limit of our problem. In section <ref>, we show the existence and uniqueness of strong solutions to the target microscopic problem. The difficulty here is twofold: the type of nonlinearity and the unboundedness of the domain. We first study the model equations posed on a bounded domain and then treat the case of the unbounded domain, which is where the microscopic problem needs to be formulated. To this end, we use a suitable extension of the concept of solution and a comparison principle, jointly with a suitable monotonicity argument.
We conclude this section by showing ε-independent energy estimates and the positivity of the solution. These estimates are key ingredients for the passage to the homogenization limit. In section <ref>, we prove our main result, which is the rigorous upscaling of the microscopic problem posed in an unbounded periodically perforated domain. The next steps follow the path of the periodic homogenization technique. We first define an extension operator which preserves the energy estimates from the original problem. Finally, we employ the two-scale convergence with drift and related compactness results together with strong convergence-type arguments in moving coordinates. Using such results, we derive the upscaled model, which is a nonlinear reaction-dispersion equation coupled with an elliptic cell problem. In section <ref>, we study the difference between the solution to the micro-problem and the solution to the macro-problem in the H^1 norm with the help of a corrector function. A couple of brief remarks and some ideas for potential future work are the subject of the closing section.
§ THE MICROSCOPIC MODEL
Let Y⊂ℝ^2 be a unit square in ℝ^2. We define the standard cell Z as Y having as inclusion an impenetrable compact object Z_0, called the obstacle, that is placed inside Y (i.e. Z:=Y\ Z_0). We assume ∂ Z_0 has C^2 boundary regularity and ∂ Y∩∂ Z_0=∅. We denote ∂ Z_0 by Γ_N. To give our geometry a porous media description, we define the pore skeleton to be
Ω_0^ε:=⋃_(k_1,k_2)∈ℕ×ℕε(Z_0+Σ_i=1^2 k_i e_i),
where ε>0 and {e_1,e_2} is the orthonormal basis of ℝ^2. We can now describe the (open) total pore space and its total internal boundary via
Ω^ε:=ℝ^2∖Ω_0^ε,
and respectively,
Γ_N^ε:= ⋃_(k_1,k_2)∈ℕ×ℕε(Γ_N+Σ_i=1^2 k_i e_i).
We denote by n_ε, and respectively, n_y the unit normal vectors across the interfaces Γ_N^ε, and respectively, Γ_N; they are directed outward with respect to ∂Ω^ε.
As target system, we consider the following reaction-diffusion-convection problem
∂ u^ε/∂ t +div(-D^ε∇ u^ε+ 1/ε B^εP(u^ε)) =f^ε in Ω^ε× (0,T),
(-D^ε∇ u^ε+ 1/ε B^εP(u^ε))· n_ε = g_N(u^ε) on Γ_N^ε× (0,T),
u^ε(0) =g in Ω^ε.
Here f^ε:Ω^ε→ℝ and g_N:ℝ→ℝ are given functions, D^ε(x_1,x_2):=D(x_1/ε, x_2/ε) for (x_1,x_2)∈Ω^ε, where D is a Z-periodic 2× 2 matrix defined in the standard unit cell Z, B^ε(x_1,x_2):=B(x_1/ε, x_2/ε), where B is a Z-periodic 2× 1 vector with positive entries, and C^ε(x_1,x_2):=C(x_1/ε, x_2/ε), where C(·) is a Z-periodic function.
Concerning the nonlinear drift P(·):ℝ→ℝ, we set P(u^ε) as
P(u^ε)=u^ε(1-C^ε u^ε),
with
∫_Z BC dy=0.
If C^ε=1, then the structure of the nonlinear drift, i.e. B^εP(u^ε), is precisely what one gets as the mean-field limit of a suitable TASEP process (cf. <cit.>). It is worth noting that the homogenization question has already been posed for this kind of problem (see <cit.>). The novelty here is the treatment of the exploding scaling of the nonlinear drift.
§.§ Structural assumptions
We consider the following restrictions on data and model parameters. We summarize them in the list of assumptions <ref>–<ref>, viz.
*
For all η∈ℝ^2 there exist θ, θ̄>0 such that
θ|η|^2≤ Dη·η≤θ̄|η|^2,
D∈ C_#^2,β(Z)^2× 2,
where β∈ (0,1).
*
B∈ C_#^1,β(Z)^2, C∈ C_#^1,β(Z) satisfies
divB=0 (0,T)× Z
div(BC)=0 (0,T)× Z
B· n_y =0 (0,T)×Γ_N.
.
* f^ε:ℝ^2→ℝ^+ such that f^ε∈ C_c^2(ℝ^2) and
f^ε converges two-scale with drift B^* to f, where this convergence is defined in the sense of Definition <ref>.
* g_N∈ C^1(ℝ) satisfies
-g_N(x)x <0 for x≠ 0,
g_N(x) ≥ g_N(y) for x≥ y.
Note that, in order to prove the uniqueness of the solution to the microscopic problem, we would need to assume g_N(x)≤ g_N(y) for x≥ y. Hence, to prove the uniqueness of the microscopic problem, we assume g_N is a constant.
At a later stage, using Theorem <ref>, we prove that the argument of g_N(·) stays nonnegative. So, condition (<ref>) is automatically satisfied if we assume g_N is a positive constant.
* g:ℝ^2→ℝ^+ such that
g∈ C_c^∞(ℝ^2).
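To fix ideas, a simple choice compatible with <ref> (our illustration, not used in the paper) is g_N(x)=κ x with a constant κ>0: then g_N∈ C^1(ℝ), -g_N(x)x=-κ x^2<0 for x≠ 0, and g_N(x)≥ g_N(y) whenever x≥ y.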
Assumptions <ref>–<ref> are technical, but they have all a natural physical explanation. <ref> is linked to the choice of non-degenerating diffusion process in the underlying stochastic description, <ref> points out the incompressibility of the flow and how the flow behaves along the boundary of the perforations (the microscopic obstacles), and <ref>–<ref> are simple structural choices for the volume and boundary production terms. Both <ref> and <ref> are essential for the success of our work, while <ref>–<ref> can be replaced by other suitable options. Note that the relatively high regularity stated in <ref>–<ref> is primarily needed to ensure the well-posedness of the microscopic problem. To reach the homogenization limit and to guarantee the strong convergence of the corrector we only need minimal regularity on the data.
§ WELL-POSEDNESS OF THE MICROSCOPIC PROBLEM
Let us introduce our concept of solution.
A weak solution to problem (<ref>)–(<ref>) is a function u^ε∈ L^2(0,T;H^1(Ω^ε))∩ H^1(0,T;L^2(Ω^ε))
satisfying the identity
∫_Ω^ε∂_t u^εϕ dx+∫_Ω^ε D^ε∇ u^ε∇ϕ dx -1/ε∫_Ω^ε B^ε u^ε(1-C^ε u^ε) ∇ϕ dx= ∫_Ω^ε f^εϕ dx- ∫_Γ_N^ε g_N(u^ε) ϕ dσ
for all ϕ∈ H^1(Ω^ε) and a.e. t∈ (0,T), with initial condition u^ε(0)=g.
Before proceeding to any asymptotic study of the type ε→ 0, we must first ensure that this concept of solution is suitable for the problem at hand.
§.§ Analysis for bounded domains, extension arguments, ε-independent bounds
In this section, we are concerned with guaranteeing the weak solvability of the microscopic problem (<ref>)–(<ref>), i.e. we look for solutions in the sense of Definition <ref>. Relying on fundamental results from <cit.>, we first show the existence of a strong solution for the same model equations while they are formulated in a bounded domain.
As next step, by using a comparison principle combined with a monotone convergence argument, we show that a particular sequence of extensions of the solution to the bounded domain problem converges to the weak solution of our original problem (<ref>)–(<ref>) posed on an unbounded domain.
Consider the following nonlinear reaction diffusion convection equation
u_t-a_ij(t,x,u)u_x_ix_j+b(t,x,u,u_x) =0 (0,T)×Ω
a_ij(t,x,u)u_x_i· n = ψ (0,T)×∂Ω
u(0,x)=ψ _0(x,u)
with the assumptions
(P1)
νξ^2 ≤ a_ij(t,x,u)ξ_iξ_j≤μξ^2.
(P2) If |y|≤ M for some constant M>0, ∂ a_ij(t,x,y)/∂ y, ∂ a_ij(t,x,y)/∂ x, ψ, ∂ψ(t,x,y)/∂ x,∂ψ(t,x,y)/∂ y, ∂^2 a_ij(t,x,y)/∂^2y,
∂^2 a_ij(t,x,y)/∂ y∂ t, ∂^2 a_ij(t,x,y)/∂ x∂ t
are uniformly bounded in their respective domain.
(P3) For fixed x,t,y and arbitrary p, there exist μ>0
|b(t,x,y,p)|≤μ (1+p^2).
(P4) There exist C_0, C_1,C_2>0 such that for (t,x)∈ (0,T)×Ω, b(t,x,y,p) satisfies the condition
-yb(t,x,y,0)≤ C_0y^2+C_1,
|b_p|(1+|p|)+|b_y|+|b_t|≤ C_2(1+p^2).
(P5) There exist C_0, C_1>0 such that for (t,x)∈ (0,T)×∂Ω, ψ(t,x,y) satisfies the condition
-yψ(t,x,y)< 0 (t,x)∈
(0,T) ×∂Ω |y|>0.
(P6) For |y|≤ M, where M is a constant, a_ij(t,x,y), b(t,x,y,p) and ψ(t,x,y) are continuous in their arguments.
(P7) For |y|,|p|≤ M a_ij_x(t,x,y), b(t,x,y,p) are Hölder continuous in variable x with exponent β ψ_x(t,x,y) is Hölder continuos in x and t with exponent β and β/2 respectively.
(P8) ∂Ω is of class H^2+β.
(P9) ψ(0,x,0)=0.
Then there exist unique solution u∈ H^2+β,1+β/2(Ω_T)
satisfying (<ref>)–(<ref>) and
u_L^∞((0,T)×Ω)≤ M,
where M is dependent on C_0,C_1 and ψ_0 and independent of |Ω|.
Note that the formulation of Theorem <ref> is done using notations similar to the ones used in the monograph <cit.>.
The proofs of the existence and uniqueness properties follow directly from Theorem 7.4 in <cit.>. The proof of the upper bound (<ref>) is a consequence of combining both Theorem 7.2 and Theorem 7.3 of <cit.>.
Assume <ref>, <ref> and <ref> to hold true. Let E⊂Ω^ be a bounded domain and v∈ L^2(0,T;H^1(E))∩ H^1(0,T;L^2(E))∩ L^∞ ((0,T)× E), u∈ L^2(0,T;H_E)∩ H^1(0,T;L^2(E))∩ L^∞ ((0,T)× E), where
H_E:={u∈ H^1(E): u=0 ∂ E\ (Γ^∩∂ E)}.
If u,v satisfy the following identities
∫_E∂_t u ϕ dx+∫_E D^∇ u ∇ϕ dx - 1/ε∫_E B^ P(u) ∇ϕ dx= ∫_E f^ϕ dx-∫_Γ_N^∩∂ E g_N(u) ϕ dσ,
∫_E∂_t v ψ dx+∫_E D^∇ v ∇ψ dx - 1/ε∫_E B^ P(v) ∇ψ dx= ∫_E f^ψ dx-∫_Γ_N^∩∂ E g_N(v) ψ dσ,
and
u(0) =v(0),
u ≤ v ∂ E\ (Γ^∩∂ E).
for all ϕ∈ H_E, ψ∈ H^1(E),
then it holds
u≤ v E.
Let (v-u)=(v-u)^+-(v-u)^-, where
(v-u)^- :=max{0,-(v-u)},
(v-u)^+ :=max{0,(v-u)}.
Since u≤ v ∂ E\(Γ^∩∂ E), (v-u)^-=0 at ∂ E\(Γ^∩∂ E).
In (<ref>) and (<ref>) we choose the test functions ϕ, ψ to be both (v-u)^-. Subtracting the corresponding results form each other yields the expressions:
∫_E∂_t (v-u) (v-u)^- dx+∫_E D^∇ (v-u) ∇ (v-u)^- dx
- 1/ε∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx= ∫_Γ_N^∩∂ E (g_N(u)-g_N(v)) (v-u)^- dσ,
∫_E∂_t (v-u)^- (v-u)^- dx+∫_E D^∇ (v-u)^- ∇ (v-u)^- dx
= 1/∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx +∫_Γ_N^∩∂ E (g_N(v)-g_N(u)) (v-u)^- dσ.
For δ>0, we have
1/∫_E B^ (P(v)-P(u)) ∇ (v-u)^- dx =1/∫_E B^ (v-u) ∇ (v-u)^- dx
-1/∫_E B^ C^ (v-u)(v+u) ∇ (v-u)^- dx
≤ C() ∫_E|(v-u)^-||∇ (v-u)^-|
+C(,||u||_∞,||v||_∞) ∫_E|(v-u)^-||∇ (v-u)^-|
≤ C(δ,)∫_E|(v-u)^-|^2 dx+δ∫_E|∇(v-u)^-|^2 dx.
By (<ref>) and (<ref>), we get
∫_Γ_N^∩∂ E (g_N(v)-g_N(u)) (v-u)^- dσ≤ 0.
Now, using <ref>, (<ref>) and (<ref>) on (<ref>), for a δ>0 small enough, we obtain
1/2d/dt(v-u)^-^2+(θ-δ)∫_E|∇(v-u)^-|^2 dx≤ C(δ,)∫_E|(v-u)^-|^2 dx.
Applying Grönwall's inequality to (<ref>), we conclude
(v-u)^-=0.
Hence, we are led to u≤ v for a.e x∈ E and all t∈ [0,T].
Assume <ref>–<ref> hold. Then for every fixed >0, there exists a unique weak solution u^∈ L^2(0,T;H^1(Ω^))∩ H^1(0,T;L^2(Ω^)) to the problem (<ref>)–(<ref>) in the sense of Definition <ref>
We begin the proof with defining a boundary value problem similar to (<ref>)–(<ref>) on a bounded domain. Then we study the existence, uniqueness and global boundedness property of the bounded domain problem. Later using some monotonicity argument we derive unique weak solution to (<ref>)–(<ref>).
We define Ω^_R:=(- R, R)^2∩Ω^, Γ_NR^:=∂Ω^_R\∂ (- R, R)^2, where R∈ℕ. Let f_R^, g__R^, D^_R, B^_R be restriction of f^, g_N^, D^, B^ on Ω^_R respectively.
Let us consider the following boundary value problem
∂ u_R^ε/∂ t +div(-D^ε_R ∇ u_R^ε+ 1/B_R^εP(u_R^ε)) =f_R^ε Ω_R^ε× (0,T),
(-D^ε∇ u_R^ε+ 1/B_R^εP(u_R^ε))· n_ = g_R^ Γ_NR^× (0,T),
u_R^ =0 (∂Ω\Γ_NR^) × (0,T),
u_R^(0) =g Ω^_R.
In problem (<ref>)–(<ref>), we choose
a_ij(t,x,u)=D^_R_ij(x),
b(t,x,u,u_x)=B·∇ (u^_R(1-(u^_R)))-f^_R,
ψ (t,x,u)= g_N(u^),
ψ_0(x)=g(x).
Using assumption <ref>, we find that (<ref>) is equivalent to (<ref>). Using assumption <ref> we obtain D^_R_ij(x) satisfies condition (P1),(P2),(P6),(P7). We use assumption <ref> and we get
∇ B (u^_R(1-u^_R))= B ∇ u^_R-B2u^_R∇ (u^_R).
By Young's inequality and by (<ref>) we verify that the the term (<ref>) satisfies (P3),(P4),(P6) and (P7).
From <ref> and using <ref> we have that g_N satisfies (P2), (P5) and (P9).
Now, by Theorem <ref> we deduce that there exists a unique u^_R∈ H^2+β,1+β/2(Ω) which solves (<ref>)–(<ref>) and
u^_R_L^∞((0,T)×Ω_R^) ≤ M,
where M is a constant independent of R.
Using arguments similar to those involved in the proof of Theorem <ref> and jointly with <ref>, we obtain
max_t∈ (0,T)u_R^_L^2(Ω_R^)≤ C,
u_R^_L^2(0,T;H^1(Ω_R^))≤ C,
where C is independent of R. Following similar steps of Theorem <ref>, we obtain
0≤ u_R^,
for a.e. x∈Ω^_R and all t∈ (0,T).
Now, we define an extension of the bounded domain problem in unbounded domain.
In particular we set
u_R^=
u_R^ x∈Ω_R^
0 x∈Ω^\Ω_R^ .
From (<ref>)–(<ref>) and (<ref>), we get
u^_R_L^∞((0,T)×Ω_R^)≤ M,
max_t∈ (0,T)u_R^_L^2(Ω_R^)≤ C,
u_R^_L^2(0,T;H^1(Ω_R^))≤ C.
Using (<ref>)–(<ref>), Lemma <ref> and (<ref>) together with Monotone Convergence Theorem, Banach-Alaoglu theorem (see <cit.>) there exists u^∈ L^2(0,T;H^1(Ω^))∩ H^1(0,T;L^2(Ω^)) such that
u_R^→ u^ L^2((0,T)×Ω^),
∇u_R^⇀∇ u^ L^2(0,T;L^2(Ω^)),
∂ (u_R^)/∂ t⇀∂ (u^)/∂ t L^2(0,T;L^2(Ω^)),
P(u_R^) ⇀ P( u^) L^2(0,T;L^2(Ω^)),
g_N(u_R^) ⇀ g_N( u^) L^2(0,T;L^2(Γ_N^)).
Convergences (<ref>)–(<ref>) guarantee the existence of weak solution to the problem (<ref>)–(<ref>) in the sense of Definition <ref>.
To prove the uniqueness of the weak solution , we consider u^_1,u^_2∈ L^2(0,T;H^1(Ω^)) weak solutions to problem (<ref>)– (<ref>) in the sense of Definition (<ref>). Then we have
∫_Ω^(u^_1)_t ϕ dx + ∫_Ω^ D^∇ u^_1 ∇ϕ dx = 1/∫_Ω^ B^ P(u^_1)∇ϕ dx+∫_Ω^f^ϕ dx- ∫_Γ_N^ g_N(u^_1) ϕ dσ,
∫_Ω^(u^_2)_t ϕ dx + ∫_Ω^ D^∇ u^_2 ∇ϕ dx = 1/∫_Ω^ B^ P(u^_2)∇ϕ dx+∫_Ω^f^ϕ dx- ∫_Γ_N^ g_N(u^_2) ϕ dσ,
for all ϕ∈ L^2(0,T;H).
We choose the test function ϕ:= u^_1-u^_2 for (<ref>) and (<ref>) and substracting (<ref>) to (<ref>) we obtain
∫_Ω^ϕ_t ϕ dx + ∫_Ω^D^∇ϕ∇ϕ dx = 1/∫_Ω^ B^ (P(u_1^)-P(u_2^))∇ϕ dx+∫_Γ_N^( g_N(u_2^)-g_N(u_1^) )ϕ dσ.
From <ref>, we get
∫_Γ_N^( g_N(u^_2)-g_N(u^_1) )ϕ dσ≤0.
Using the Mean Value Theorem, (<ref>), and the structure of P(·) we get
1/∫_Ω^ B^ (P(u^_1)-P(u^_2))∇ϕ dx≤ C()∫_Ω^ϕ∇ϕ dx,
where C is constant determined by
C=sup_r∈ [0,M] P'(r),
while M is the constant from (<ref>).
Using the ellipticity condition (<ref>) together with (<ref>) and (<ref>), we obtain
d/dtϕ^2_L^2(_Ω^)+θ∫_Ω^ |∇ϕ|^2 dx ≤ C∫_Ω^ϕ∇ϕ dx.
Using Young's inequality and choosing δ small enough, we get
d/dtϕ^2_L^2(_Ω^)+(θ-δ) ∫_Ω^ |∇ϕ|^2 dx ≤ C(δ)∫_E ϕ^2 dx,
d/dtϕ^2_L^2(_Ω^)≤ C(δ)∫_Ω^ϕ^2 dx.
Now, using Grönwall's inequalty, we get ϕ=0 a.e. in Ω^ for all times t∈ (0,T). Hence the uniqueness of the weak solution to problem (<ref>)–(<ref>) follows.
Note that in Theorem <ref> we are considering Neumann boundary conditions. In our problem (<ref>)–(<ref>), we are dealing with a Dirichlet boundary condition imposed across a part of the boundary. Since it is a homogeneous condition and also the sets (∂Ω\Γ_NR^) and Γ_NR^ are disjoint, we can adapt the proof of Theorem <ref> to our setup (<ref>)–(<ref>).
§.§ Energy estimates
Assume <ref>–<ref> hold. There exists a constant C>0 independent of ε such that the following energy estimates hold:
‖u^ε‖_L^2(0,T;H^1(Ω^ε))≤ C,
‖u^ε‖_L^∞(0,T;L^2(Ω^ε))≤ C,
‖u^ε‖_L^∞((0,T)×Ω^ε)≤ C,
‖u^ε‖_L^2(0,T;L^2(Γ_N^ε))≤ C.
We prove the estimate (<ref>) by choosing the test function ϕ= u^ in the weak formulation (<ref>), we get
1/2d/dtu^_L^2(Ω^)+θ∫_Ω^|∇ u^|^2 dx≤1/∫_Ω^B^ P(u^)∇ u^ dx + ∫_Ω^f^ u^ dx-∫_Γ_N^ g_N(u^) u^ dσ.
We define
P(u^):=(u^)^2/2-C^(u^)^3/3,
then we have
∇P(u^) =P(u^) ∇ u^ .
Now, using <ref>, (<ref>), we have
∫_Ω^B^ P(u^)∇ u^ dx = ∫_Ω^B^∇P(u^) dx
=-∫_Ω^∇· B^P̃(u^) dx + ∫_Γ_N^B^· n P(u^) dx
=0.
Using (<ref>) and Grönwall's inequality from (<ref>) we get the required results, i.e. (<ref>) and (<ref>).
The estimate (<ref>) follows directly from (<ref>).
Since Γ_N^ is uniformly Lipschitz, we use the trace result stated in Theorem 15.23 of <cit.> and we obtain (<ref>).
Assume <ref>–<ref> hold. Let u^ε be a weak solution of (<ref>)–(<ref>). Then u^ε≥ 0.
We choose as test function ϕ=(u^)^- in (<ref>), where u^=(u^)^+-(u^)^- with u^-=max{0,-u^} and u^+=max{0,u^} and we get
∫_Ω^∂_t u^ (u^)^- dx+∫_Ω^ D^∇ u^∇ (u^)^- dx - 1/∫_Ω^ B^ P(u^) ∇ (u^)^- dx
= ∫_Ω^ f^ (u^)^- dx-∫_Γ_N^ g_N(u^) (u^)^- dσ.
Using (A1), we have
1/2d/dt||(u^)^-||^2_L^2(Ω^)+θ∫_Ω^ | ∇ (u^)^- |^2 dx + 1/∫_Ω^ B^ P((u^)) ∇ (u^)^- dx
≤ -∫_Ω^ f^ (u^)^- dx+∫_Γ_N^ g_N((u^)^-) (u^)^- dσ.
After an integration by parts with respect to the space variable and <ref> , we get
1/∫_Ω^B^ P(-(u^)^-)∇ (u^)^- dx =-1/∫_Ω^B^( (u^)^-(1+C^(u^)^-)∇ (u^)^-) dx
=-1/∫_Ω^B^∇1/2((u^)^-)^2-B^C^1/3∇((u^)^-)^3 dx
=1/∫_Ω^∇ B^1/2((u^)^-)^2+∇ (B^C^)1/3((u^)^-)^3 dx
+1/∫_Γ_N^B^· n (1/2((u^)^-)^2+C^1/3((u^)^-)^3) dσ
=0.
Since
∫_Γ_N^ g_N(u^) (-u^)^- dσ=∫_Γ_N^ g_N((-u^)^-) (-u^)^- dσ,
by (<ref>) and <ref> we get
-∫_Ω^ f^ (u^)^- dx+ ∫_Γ_N^ g_N((-u^)^-) (-u^)^- dσ≤ 0.
Combining (<ref>), (<ref>) and (<ref>), we obtain
1/2d/dt||(u^)^-||^2_L^2(Ω^)≤ 0
As a direct application of Grönwall's inequality (see Appendix B of <cit.>) on (<ref>), we conclude
(u^)^-^2_L^2(Ω^)≤ 0.
Hence u^≥ 0 a.e. in Ω^ and for all t∈ (0,T).
§ DERIVATION OF THE UPSCALED MODEL
In this section we pass → 0 to the homogenization limit and derive the corresponding upscaled equation and effective coefficients.
§.§ Extension to fixed domain
Let u^∈ H^1(Ω^)∩ L^∞(Ω^), where Ω^ is defined as in (<ref>). Then there exist a constant C>0 and an extension of u^ to H^1(ℝ^2), denoted by u^, such that
u^|_Ω^ =u^,
u^_L^2(ℝ^2) ≤ C u^_L^2(Ω^),
∇u^_L^2(ℝ^2) ≤ C ∇u^_L^2(Ω^),
u^_L^∞(ℝ^2) ≤ C u^_L^∞(Ω^).
Define Ω_R^ :=((- R,+ R)× (- R,+ R))∩Ω^, where R∈ℕ, clearly Ω_R^∩Ω_0^ =∅.
Let u_R^:=u^|_Ω_R^. Since u^∈ H^1(Ω^) we have u_R^∈ H^1(Ω_R^). By using Theorem 2.1 of <cit.> there exist u_R^∈ H^1((- R,+ R)× (- R,+ R)) and C>0, independent of R, such that
u_R^|_Ω_R^ =u_R^,
u_R^_L^2((- R,+ R)× (- R,+ R)) ≤ C u_R^_L^2(Ω_R^),
∇u_R^_L^2((- R,+ R)× (- R,+ R)) ≤ C ∇ u_R^_L^2(Ω_R^),
u_R^_L^∞((- R,+ R)× (- R,+ R)) ≤ C u_R^_L^∞(Ω^),
≤ C u^_L^∞(Ω^).
The inequalities (<ref>) and (<ref>) follow directly from Theorem 2.1 of <cit.>.
The inequality (<ref>) follows from the fact that the construction of the extension operator uses a standard reflection argument, hence it preserves the L^∞ bound in the extended regions (see Theorem 9.7 of <cit.>).
Now, we prove the following identity
u_R+N^|_Ω_R^=u_R^ Ω_R^
for all N∈ℕ.
From the definition of u_R^ and u_R+1^, we have
u_R+1^|_Ω_R^=u_R^.
We define the extension of u_R+1^ in such a way that, if u_R^ is the extension of u_R^, then
u_R+1^|_Ω_R^=u_R^
and in (- (R+1), (R+1))^2\ (- R, R)^2, we extend u_R+1 using a reflection argument similar to Theorem 9.7 of <cit.> and <cit.>. Now inductively we get (<ref>).
We define the extension of u^ to ℝ^2 as follows: for any x∈ℝ^2 there exists R∈ℕ such that x∈ (- R,+ R)× (- R,+ R), and we set
u^(x)= u_R^(x).
By identity (<ref>), u^(x) is a well-defined function on ℝ^2.
Using Fatou's Lemma, (<ref>) and (<ref>), we have
∫_ℝ^2(u^)^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) (u^)^2dx
=∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) (u_R^)^2dx
≤lim inf_R→∞∫_ℝ^2χ_((- R,+ R)× (- R,+ R)) (u_R^)^2dx
≤ C u_R^_L^2(Ω_R^)^2
≤ C u^_L^2(Ω^)^2.
Similarly, we prove
∫_ℝ^2|∇u^|^2dx =∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) |∇u^|^2dx
=∫_ℝ^2lim_R→∞χ_((- R,+ R)× (- R,+ R)) |∇u_R^|^2dx
≤lim inf_R→∞∫_ℝ^2χ_((- R,+ R)× (- R,+ R)) |∇u_R^|^2 dx
≤ C ∇u_R^_L^2(Ω_R^)^2
≤ C ∇ u^_L^2(Ω^)^2.
The inequality
u^_L^∞(ℝ^2)≤ C u^_L^∞(Ω^)
follows from (<ref>) and (<ref>).
It should be noted that our proof of the extension Lemma <ref> relies on the fact that at the level of the standard
cell Z, the obstacle does not touch the external boundary of Z.
§.§ Strong convergence
In this section we prove that the sequence of solutions to the microscopic problem (<ref>) strongly converges to a limit formulated in moving co-ordinates. To obtain the desired result, we adapt to our setup techniques similar to those discussed in <cit.>, <cit.> and <cit.>.
Let f∈ L^2(Z), g∈ L^2(Γ_N) be given functions. Then the boundary value problem
-Δ_y v=f Z,
∇ v· n=g Γ_N,
v Z-periodic,
has a unique solution v∈ H^1_#(Z)/ℝ if and only if the compatibility condition
∫_Z f dy=∫_∂ Z g dσ_y
is satisfied.
The proof follows via a standard argument involving the Fredholm alternative.
We define
B^* e_i:=∫_Y B(y)e_i dy/|Z|,
Ω^(t):={x+B^*t/:x∈Ω^},
v^(t,x):=u^(t,x+B^*t).
We define {e_j}, j∈ℤ^2, as an orthonormal
basis for L^2((0,1)^2) consisting of compactly supported C^2 functions. Related to e_j, we define
e_j,k(x):= e_j(x-k),
q_j,k(t,x):= e_j,k(x-B^*t).
Note that {e_j,k} and {q_j,k} form orthonormal bases for L^2(ℝ^2). We have
v^(t,x)=∑_j∈ℕ,k∈ℤ^2v^_jk(t)e_jk(x),
v^(t,x)=∑_j∈ℕ,k∈ℤ^2v^_jk(t)e_jk(x),
where v^(t,x) is the extension of v^(t,x) on ℝ^2 as defined in Lemma <ref> and
v^_jk(t):=∫_Ω^(t)v^(t,x)e_jk(x) dx,
v^_jk(t):=∫_ℝ^2v^(t,x)e_jk(x) dx.
Assume <ref>–<ref> hold. Then for all δ>0, there exists an R_δ>0 such that
u^(t,x+B^*t)_L^2(Ω^(t)\Ω(R_δ,))≤δ
where Ω(R_δ,):=(Ω^∩ (-R_δ,R_δ)^2).
We choose a specific cutoff function ψ∈ C^∞(ℝ) such that
ψ(x)=
0 x∈ [-1,1]
1 x∈ [-2,2]^c
and 0≤ψ(x)≤ 1 for x∈ (-2,-1)∪ (1,2). Now, for x∈ℝ^2, we define
ψ_R(x):=ψ(|x|/R),
ψ_R^(t,x):=ψ_R(x-B^*t).
Consider the weak formulation (<ref>). Integrating it over (0,t) and choosing ϕ(t,x)=ψ_R^(t,x)u^(t,x), we get
∫_0^t ∫_Ω^∂_t u^ (ψ_R^(t,x)u^) dxdt+∫_0^t∫_Ω^ D^∇ u^∇ (ψ_R^(t,x)u^) dxdt
-1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt
= ∫_0^t∫_Ω^ f^ (ψ_R^(t,x)u^) dxdt
-∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt.
Using integration by parts with respect to time variable, we have
∫_0^t∫_Ω^∂_t u^ (ψ_R^(t,x)u^) dxdt=12B^*∫_0^t∫_Ω^ (u^)^2 ∇ψ_R^(t,x) dxdt+12∫_Ω^ (u^(t,x))^2 ψ_R^(x,t) dx
-12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x) dx,
while the second term in the left hand side of (<ref>) becomes
∫_0^t∫_Ω^ D^∇ u^∇ (ψ_R^(t,x)u^) dx=∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt+∫_0^t∫_Ω^ D^∇ u^∇ u^ψ_R^(x,t) dxdt.
We have
1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt=1∫_0^t∫_Ω^ B^ u^ψ_R^(t,x)∇ u^ dxdt
+1∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt -1∫_0^t∫_Ω^ B^ C^∇ u^ψ_R^(t,x)(u^)^2 dxdt
-1∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt,
integration by parts with respect to the space variable and <ref>, we get
1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt=-1∫_0^t∫_Ω^ B^ u^∇ (ψ_R^(t,x) u^) dxdt
+1∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt+1∫_0^t∫_Ω^ B^ C^ u^∇(ψ_R^(t,x)(u^)^2) dxdt
-1∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt,
1∫_0^t∫_Ω^ B^ u^(1-C^ u^) ∇ (ψ_R^(t,x)u^) dxdt=
12∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt
-12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt,
12B^*∫_0^t∫_Ω^ (u^)^2 ∇ψ_R^(t,x) dxdt+12∫_Ω^ (u^(t,x))^2 ψ_R^(t,x)) dx
-12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x)) dx
+∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt+∫_0^t∫_Ω^ D^∇ u^∇ u^ψ_R^(t,x) dxdt
- 12∫_0^t∫_Ω^ B^ u^ u^∇ψ_R^(t,x) dxdt
+12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt
= ∫_0^t∫_Ω^ f^ (ψ_R^(t,x)u^) dxdt
-∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt.
Using <ref> and the definition of ψ_R^, we have
lim_R→∞12∫_Ω^ (u^(0,x))^2 ψ_R^(0,x)) dx=0.
By the definition (<ref>), we have
|∇ψ_R^|, |∇·∇ψ_R^|≤C/R,
where C>0 is a constant.
Using <ref>, (<ref>) and Theorem <ref>, we conclude
lim_R→∞∫_0^t∫_Ω^ D^∇ u^∇ψ_R^(t,x) u^ dxdt=0.
Consider the following auxiliary problem: Find G_i (with i∈{1,2}) such that
-Δ_y G_i =B^*e_i-B(y)e_i Z,
∇ G_i· n = 0 Γ_N,
G_i Z-periodic.
Using Lemma <ref> and definition (<ref>), there exists a weak solution G_i∈ H^1_#(Z)/ℝ satisfying (<ref>)–(<ref>).
Now using the change of variable y= x/ and defining G_i^(x):=G_i(x/), we get
-^2Δ_x G_i^ =B^*e_i-B^(x)e_i Ω_,
∇ G_i^· n = 0 Γ_N^.
Thanks to the auxiliary problem (<ref>)–(<ref>) we can write
12∫_0^t∫_Ω^(B^*-B) (u^)^2 ∇ψ_R^(t,x) dxdt =∑_i=1^21/2∫_0^t∫_Ω^∇ G_i^∇( (u^)^2 ∂ψ_R^(t,x)∂ x_i) dxdt.
We observe that
∇ G_i^=∇_y G(x/),
and
|∇( (u^)^2 ∂ψ_R^(t,x)∂ x_i)|≤ |u^∇ u^||∇ψ_R|+|(u^)^2∇·∇ψ_R|.
By Theorem <ref> and by (<ref>) we can now estimate the right hand side of (<ref>)
in the following way
12∫_0^t∫_Ω^(B^*-B) (u^)^2 ∇ψ_R^(t,x) dxdt ≤ C/R.
Using Theorem <ref>, <ref> and observing that ψ_R^ is nonnegative, we have
-∫_0^t∫_Γ_N^ g_N(u^) (ψ_R^(t,x)u^) dσ dt≤ 0.
To obtain an estimate on the nonlinear term, we consider the following auxiliary problem: Find H_i (with i∈{1,2}) such that
-Δ_y H_i =BCe_i Z,
∇ H_i· n = 0 Γ_N,
H_i Z-periodic.
Using Lemma <ref>, (<ref>), there exists a weak solution H_i∈ H^1_#(Z)/ℝ satisfying (<ref>)–(<ref>).
Using the change of variable y=x/ and defining H_i^:=H_i(x/), we have
-^2Δ_x H_i^ =BCe_i Ω_,
∇ H_i^· n = 0 Γ_N^.
Hence (<ref>) and (<ref>) and following similar steps used to obtain (<ref>)–(<ref>), lead to
12∫_0^t∫_Ω^ B^ C^ (u^)^3 ∇ψ_R^(t,x) dxdt≤ C/R.
Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we get
lim_R→∞∫_Ω^ (u^(t,x))^2 ψ_R^(t,x)) dx=0.
So, for any δ>0, there exists a R_δ such that
∫_Ω^ (u^(t,x))^2ψ_R_δ^(t,x) ) dx≤δ.
Finally, using the change of variable x→(x+B^*t), we find
∫_Ω^(t)(u^(t,x+B^*t))^2ψ_R_δ(x) ) dx≤δ.
Hence, (<ref>) is proved.
Assume <ref>–<ref> hold. Then for p,s∈ (0,T)
| ∫_Ω^ (p)v^(p,x)q_j,k-∫_Ω^ (s)v^(s,x)q_j,k|≤ C√(p-s),
where C is a positive constant depending on j, k∈ℕ.
Using the Fundamental Theorem of Calculus, for p,s∈ (0,T) we can write
∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=∫_s^p d/dt∫_Ω^ (t)v^(t,x)e_j,k.
By the change of variable x→ x-B^*t and the product rule, we get
∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=∫_s^p∫_Ω^(ddtu^(t,x)q_j,k-B^*∇ q_j,k u^) dx dt.
Choosing now ϕ=q_j,k as test function in (<ref>) and integrating the result from s to p with respect to time variable, we obtain
∫_s^p∫_Ω^∂_t u^ q_j,k dx=-∫_s^p∫_Ω^ D^ ∇ u^∇ q_j,k dxdt +1∫_s^p∫_Ω^ B^ u^∇ q_j,k dxdt
-1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt+∫_s^p∫_Ω^ f^ q_j,k dxdt
- ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt.
Combining (<ref>) and (<ref>) leads to
∫_Ω^ (p)v^(p,x)e_j,k-∫_Ω^ (s)v^(s,x)e_j,k=-∫_s^p∫_Ω^ D^∇ u^∇ q_j,k dxdt
-1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt+∫_s^p∫_Ω^ f^ q_j,k dxdt
- ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt+∫_s^p∫_Ω^B^-B^*∇ q_j,k u^ dx dt.
Using Theorem <ref>, <ref>, and the definition of q_j,k, we have
|∫_s^p∫_Ω^ D^∇ u^∇ q_j,k dxdt| ≤ C ∫_s^p∇ u^_L^2(Ω^)∇ q_j,k_L^2(Ω^)dt
≤ C ∇ u^_L^2(0,T;L^2(Ω^))√(p-s)
≤ C√(p-s),
and
|∫_s^p∫_Ω^ f^ q_j,k dxdt|≤ C√(p-s).
The trace theorem and <ref> ensure that
| ∫_s^p∫_Γ_N^ g_N(u^) q_j,k dσ dt| ≤ C∫_s^p∫_Γ_N^∩ supp{q_j,k} |u^||q_j,k| dσ dt
≤ C ∫_s^pu^_L^2(Γ_N^) dt
≤ C ∫_s^p∇ u^_L^2(Ω^) dt
≤ C√(p-s).
Since the functions q_j,k have compact support, using the auxiliary problem (<ref>)–(<ref>), and the integration by parts with respect to space variable, we obtain
|∫_s^p∫_Ω^B^-B^*∇ q_j,k u^ dx dt| ≤∑_i=1^2|∫_s^p∫_Ω^Δ G_i ∂/∂ x_i q_j,k u^ dx dt|
≤∑_i=1^2|∫_s^p∫_Ω^∇ G_i ∇(∂/∂ x_i q_j,k) u^ dx dt|
+ |∫_s^p∫_Ω^∇ G_i ∂/∂ x_i q_j,k∇ u^ dx dt|
≤ C√(p-s).
Similarly, using auxiliary problem (<ref>)–(<ref>) and Theorem <ref> we get
| 1∫_s^p∫_Ω^ B^ C^ (u^)^2 ∇ q_j,k dxdt|
≤ C√(p-s).
Combining the estimates (<ref>)–(<ref>) into (<ref>), we obtain (<ref>).
Assume <ref>–<ref> hold. Then there exists v_0∈ L^2(0,T;L^2(ℝ^2)) such that
lim_→ 0∫_0^T∫_Ω^(t)|v^ - v_0|^2dxdt=0,
where v^ and Ω^(t) are defined in (<ref>) and (<ref>), respectively.
We first prove that the sequence {v^_jk} has a subsequence that converges to some {v^0_jk} as → 0. From (<ref>), we have
|v^_jk(t)| ≤∫_Ω^|v^(t,x)e_jk(x)|dx
≤v^_L^2(Ω^)e_jk_L^2(Ω^)
≤ C.
By Lemma <ref>, we know that the sequence {v^_jk(t)} is equicontinuous, hence by using the Arzelà-Ascoli Theorem we have that there exists a subsequence (denoted again by v^_jk(t)) such that v^_jk(t) converges uniformly to some v^0_jk(t) in C([0,T]). Since [0,T] is a bounded domain, the uniform convergence leads to strong convergence in L^2(0,T).
We define
v^0_*(t,x):=∑_j∈ℕ,k∈ℤ^2v^0_jk(t)e_jk(x).
Claim 1: for fixed j and k, there exists a constant C_jk>0 such that
|v^_jk(t)-|Y|/|Z|v^_jk(t)|≤ C_jk ^2.
Proof of Claim 1:
v^_jk(t)-|Y|/|Z|v^_jk(t)=∫_ℝ^2(χ_Y(x/)-|Y|/|Z|)v^(t,x)e_jk(x)dx,
where χ_Y is the characteristic function defined on Z and extended periodically to the whole of ℝ^2, with χ_Y(x)=1 if x∈ Y and 0 if x∈ Y_0. Since ∫_Z(χ_Y-|Y|/|Z|)dx=0, the following auxiliary problem has a unique weak solution
-ΔΠ(y) = χ_Y(y)-|Y|/|Z| Z,
Π Z-periodic.
Using the change of variable y=x/, defining Π^(x):=Π(x/), and stating the problem for whole ℝ^2, we get
-^2ΔΠ^(x) = χ_Y(x/)-|Y|/|Z| ℝ^2.
We use (<ref>) in (<ref>) and we get
|v^_jk(t)-|Y|/|Z|v^_jk(t)| ≤^2∫_ℝ^2|∇Π^∇(v^(t,x)e_jk(x))|dx
≤^2∫_ℝ^2∩ supp(e_jk)|∇Π^∇(v^(t,x)e_jk(x))|dx
≤ C_jk^2.
Note that to get the inequality (<ref>), we used
(<ref>) and Theorem <ref>.
Hence we proved the Claim 1.
Claim 2: for any δ>0 there exists R_δ∈ℕ such that
v^χ_[-R_δ,R_δ]^2-∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(ℝ^2))< δ/5,
where χ_[-R_δ,R_δ]^2 is characteristic function of [-R_δ,R_δ]^2.
Proof of Claim 2 follows similar arguments as in Lemma 4 from <cit.>. We use Theorem <ref>, Lemma <ref>, Lemma <ref> and Rellich–Kondrachov theorem. For details see Lemma 4 from <cit.>.
Now, from Lemma <ref> there exists N_1∈ℕ such that
v^- v^χ_[-N_1,N_1]^2_L^2(0,T;L^2(ℝ^2))< δ/5.
Using the property (<ref>) for v^ and (<ref>), we can guarantee that
v^χ_[-R_δ,R_δ]^2-∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(Ω^(t)))< δ/5.
Choosing small enough and using (<ref>), we have
∑_|j|,|k|≤ R_δv^_jke_jk-|Z||Y|∑_|j|,|k|≤ R_δv^_jke_jk_L^2(0,T;L^2(Ω^(t)))< δ/5.
Since v^_jk strongly converges to v^0_jk in L^2(0,T), for small enough , we are led to
∑_|j|,|k|≤ R_δv^_jke_jk-∑_|j|,|k|≤ R_δv^0_jke_jk_L^2(0,T;L^2(Ω^(t)))< |Z||Y|δ/5.
From (<ref>), we get
∑_|j|,|k|≤ R_δv^0_jke_jk-v^0_*(t,x) _L^2(0,T;L^2(Ω^(t))< δ/5.
Choosing in (<ref>)–(<ref>) N_1,R_δ large enough, we obtain for small enough that
v^(t,x)-|Z||Y|v^0_*(t,x) _L^2(0,T;L^2(Ω^(t))< δ/5.
Finally, we conclude the proof by choosing v^0(t,x)=|Z||Y|v^0_*(t,x).
Observe that, by using the change of variable x→ x-B^*t into the identity (<ref>), we obtain the following strong two-scale convergence with drift
lim_→ 0∫_0^T∫_Ω^|u^(t,x) - v_0(t,x-B^*t)|^2dxdt=0.
This is a useful result that will appear in a several context in the following.
§.§ Two-scale convergence with drift
In this section we recall the concept of two-scale convergence with drift.
Let r∈ℝ^2 and u^∈ L^2(0,T;L^2(Ω^)). We say that u^ two-scale converges with drift r to u_0 if for all ϕ∈ C_c^∞((0,T)×ℝ^2;C_#^∞ (Z)) the following identity holds
lim_→ 0∫_(0,T)×Ω^ u^(t,x)ϕ(t,x-rt,x/)dxdt= ∫_0^T∫_ℝ^2∫_Z u_0(t,x,y)ϕ(t,x,y)dydxdt.
We denote the convergence as u^2-drift(r)u_0.
Let r∈ℝ^2 and
u^∈ L^2(0,T;L^2(Ω^)). We say that u^ strongly two-scale converges with drift r to u_0 if and only if
lim_→ 0∫_0^T∫_Ω^|u^(t,x) - u_0(t,x-rt,x)|^2dxdt=0.
We denote this convergence by u^u_0.
§.§.§ Compactness results
Let ϕ∈ L^2((0,T)×ℝ^2;C_#(Z)). Then
lim_→ 0∫_0^T∫_ℝ^2ϕ(t,x-B^*t,x)^2dxdt=∫_0^T∫_ℝ^2∫_Zϕ(t,x,y)^2dydxdt
holds true.
We refer the reader to Proposition 2.6.7 in <cit.> for the details leading to a proof of this statement.
Let u^∈ L^2(0,T;H^1(Ω^)). Assume there exists a constant C>0 independent of such that
u^_L^2(0,T;H^1(Ω^))≤ C.
Then, there exist u_0∈ L^2(0,T;H^1(ℝ^2)) and u_1∈ L^2((0,T)× H^1(ℝ^2);H_#^1(Z)) such that
u^2-drift(r) u_0,
∇ u^2-drift(r)∇_x u_0+∇_y u_1.
For a detailed proof of this compactness result, see <cit.>.
Let u^∈ L^2(0,T;L^2(Γ_N^)). Assume there exists a constant C>0 independent of such that
u^_L^2(0,T;L^2(Γ_N^))≤ C.
Then, there exists u_0∈ L^2(0,T;L^2(ℝ^2×Γ_N)) such that
lim_→ 0∫_(0,T)×Γ_N^ u^(t,x)ϕ(t,x-rt,x/)dxdt= ∫_0^T∫_ℝ^2∫_Γ_N u_0(t,x,y)ϕ(t,x,y)dydxdt.
For a detailed proof, see Proposition 5.4 of <cit.>.
It is useful to
note that the function v_0 which we obtained from (<ref>) and u_0 coming from (<ref>) are equal for a.e. (t,x)∈ (0,T)×ℝ^2. To see this, we can argue as follows:
for any ϕ∈ C_c^∞((0,T)×ℝ^2), we have
∫_0^T∫_ℝ^2(u_0-v_0)ϕ dxdt =lim_→ 01/|Z|∫_0^T∫_Ω^(u^(t,x)-v_0(t,x-B^*t))ϕ(t,x-B^*t)dxdt
≤lim_→ 01/|Z|u^(t,x)-v_0(t,x,x-B^*t)_L^2((0,T)×Ω^)ϕ(t,x-B^*t)_L^2((0,T)×Ω^)
=0.
Hence, we can conclude that v_0 and u_0 coincide.
§.§ Limit problem – Structure of the upscaled model equations
The main result of this paper is stated in the next Theorem. A connected companion result is Theorem <ref>.
Assume <ref>–<ref> hold and g_N(r)=r for all r∈ℝ. Then the weak solution u^ to the microscopic problem (<ref>)–(<ref>) two-scale converges with the drift B^* to u^0(t,x) as → 0, where u^0(t,x) is the weak solution of the homogenized problem, viz.
∂_t u_0 +div( -D^*(u_0,W)∇_x u_0) =1/|Z|∫_Z f dy - |Γ_N|/|Z|g_N(u_0) (0,T)×ℝ^2,
u_0(0) =g ℝ^2
-∇_y· D(y) ∇_y w_i+B(y)(1-2C(y)u_0)·∇_y w_i =∇_y D(y) e_i+B^* e_i-B(y)(1-2C(y)u_0)· e_i
(0,T)×ℝ^2× Z,
( -D(y)∇_y w_i+B(y)(1-2C(y)u_0)w_i)· n_y = ( -D(y)e_i)· n_y
(0,T)×ℝ^2×Γ_N,
w_i(t,x,·) Z-periodic,
where
B^*e_i:=∫_Z B(y)e_i dy/|Z|
and
D^*(u_0,W):=1/|Z|∫_Z D(y)(I+[ ∂ w_1/∂ y_1 ∂ w_2/∂ y_1; ∂ w_1/∂ y_2 ∂ w_2/∂ y_2 ]) dy
+1/|Z|B^*∫_Z W(y)^t dy-1/|Z|∫_ZB(y)(1-2C(y)u_0) W(y)^t dy
for all t∈ (0,T) and a.e x∈ℝ^2, y∈ Z, with
W:=(w_1,w_2).
Using the compactness result stated in Theorem <ref> and the energy estimates obtained in Theorem <ref>, we can state that there exist u_0∈ L^2(0,T;H^1(ℝ^2)) and u_1∈ L^2((0,T)× H^1(ℝ^2);H_#^1(Z)) such that
u^2-drift(B^*) u_0,
∇ u^2-drift(B^*)∇_x u_0+∇_y u_1.
Take ϕ_0∈ C_c^∞((0,T)×ℝ^2) and ϕ_1∈ C_c^∞((0,T)×ℝ^2; C_#^∞(Z)). Now, in the weak formulation (<ref>) of our microscopic problem, we choose ϕ=ϕ_0(t,x-B^*t)+ϕ_1(t,x-B^*t,x) to obtain:
∫_0^T∫_Ω^∂_t u^ϕ_0(t,x-B^*t) dxdt+∫_0^T∫_Ω^ D^∇ u^∇ϕ_0(t,x-B^*t) dxdt
-1∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_0(t,x-B^*t) dxdt
+∫_0^T∫_Ω^∂_t u^ϕ_1(t,x-B^*t,x) dxdt
+∫_0^T∫_Ω^ D^∇ u^∇ϕ_1(t,x-B^*t,x) dxdt
- ∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_1(t,x-B^*t,x) dxdt
= ∫_0^T∫_Ω^ f^ϕ_0(t,x-B^*t) dxdt- ∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt
∫_0^T∫_Ω^ f^ϕ_1(t,x-B^*t,x) dx- ^2∫_0^T∫_Γ_N^ g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt.
By using the integration by parts and the chain rule, we have
∫_0^T∫_Ω^∂_t u^ϕ_0(t,x-B^*t) dxdt
=-∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt+ B^*/∫_0^T∫_Ω^ u^∇_xϕ_0(t,x-B^*t) dxdt.
We rewrite the next term as
∫_0^T∫_Ω^∂_t u^ϕ_1(t,x-B^*t,x) dx
=-∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt+ B^*∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt.
The following calculation rules will be used in the sequel, viz.
∇(ϕ_1(t,x-B^*t,x)) =∇_xϕ_1(t,x-B^*t,x)+1/∇_yϕ_1(t,x-B^*t,x),
∇(ϕ_0(t,x-B^*t)) =∇_xϕ_0(t,x-B^*t).
From assumption <ref>, we deduce
∫_0^T∫_Ω^ B^ u^(1-C^ u^) ∇ϕ_0(t,x-B^*t) dxdt
=-∫_0^T∫_Ω^ B^∇ u^ϕ_0(t,x-B^*t) dxdt+∫_0^T∫_Ω^ B^ C^ u^∇ u^ϕ_0(t,x-B^*t) dxdt.
To derive the structure of the cell problem, we choose in (<ref>) as test function ϕ_0≡ 0. Then from (<ref>)–(<ref>), we are led to
-∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt+ B^*∫_0^T∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt
+∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_1(t,x-B^*t,x) dxdt+∫_0^T∫_Ω^ D^∇ u^∇_y ϕ_1(t,x-B^*t,x) dxdt
∫_0^T∫_Ω^ B^(1-2C^ u^)∇ u^ϕ_1(t,x-B^*t,x) dxdt=∫_0^T∫_Ω^f^ϕ_1(t,x-B^*t,x)
- ^2 ∫_0^T∫_Γ_N^g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt.
As a direct application of Theorem <ref>, we obtain
lim_→ 0∫_0^T∫_Ω^ u^∂_tϕ_1(t,x-B^*t,x) dxdt=0.
Using (<ref>), we have
lim_→ 0 B^*∫_0^T∫_Ω^ u^∇_xϕ_1(t,x-B^*t,x) dxdt= B^*∫_0^T∫_ℝ^2∫_Z u_0 ∇_xϕ_1(t,x,y) dydxdt
=-B^*∫_0^T∫_ℝ^2∫_Z ∇_xu_0 ϕ_1(t,x,y) dxdt.
Relying on <ref> and (<ref>), we can write
lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_1(t,x-B^*t,x) dxdt = 0,
lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_y ϕ_1(t,x-B^*t,x) dxdt = ∫_0^T∫_ℝ^2∫_Z D^ (∇_xu_0+∇_yu_1) ∇_y ϕ_1(t,x,y) dydxdt.
Using <ref> together with the strong convergence result stated in Theorem <ref>, and recalling as well (<ref>), we get
lim_→ 0∫_0^T∫_Ω^ B^(1-2C^ u^)∇ u^ϕ_1(t,x-B^*t,x) dxdt
= ∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)(∇_x u_0+∇_yu_1) ϕ_1(t,x,y) dydxdt.
By <ref> and Theorem <ref>, we get
lim_→ 0∫_0^T∫_Ω^ f^ϕ_1(t,x-B^*t,x) dxdt= 0.
Using <ref> jointly with Theorem <ref> yields
lim_→ 0^2∫_0^T∫_Γ_N^ g_N(u^) ϕ_1(t,x-B^*t,x)dσ dt= 0.
Now, passing to → 0 in (<ref>) and using (<ref>)–(<ref>), we obtain
-B^*∫_0^T∫_ℝ^2∫_Z ∇_xu_0 ϕ_1(t,x,y) dydxdt+ ∫_0^T∫_ℝ^2∫_Z D^∇_xu_0 ∇_y ϕ_1(t,x,y) dydxdt
+∫_0^T∫_ℝ^2∫_Z D^∇_yu_1 ∇_y ϕ_1(t,x,y) dydxdt+∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)∇_x u_0 ϕ_1(t,x,y) dydxdt
+∫_0^T∫_ℝ^2∫_Z B(y)(1-2C(y) u_0)∇_yu_1 ϕ_1(t,x,y) dydxdt=0.
Looking now at (<ref>), the choice ϕ_1(t,x,y)=ϕ_2(t,x)ϕ_3(y) yields for almost every (t,x)∈ (0,T)×ℝ^2 the identity:
-B^*∫_Z ∇_xu_0 ϕ_3(y) dy+ ∫_Z D^∇_xu_0 ∇_y ϕ_3(y) dy+∫_Z D^∇_yu_1 ∇_y ϕ_3(y) dy
+∫_Z B(y)(1-2C(y) u_0)∇_x u_0 ϕ_3(y) dy
+∫_Z B(y)(1-2C(y) u_0)∇_yu_1 ϕ_3(y) dy=0.
The structure of (<ref>) allows us to choose further
u_1(t,x,y):=W(t,x,y)·∇_x u_0(x),
where W:=(w_1,w_2) with w_i (with i∈{1,2}) solving the following cell problem:
-B^*∫_Z e_i ϕ_3(y) dy+ ∫_Z D^ e_i ∇_y ϕ_3(y) dy+∫_Z D^∇_yw_i ∇_y ϕ_3(y) dy
+∫_Z B(y)(1-2C(y) u_0) e_i ϕ_3(y) dy
+∫_Z B(y)(1-2C(y) u_0)∇_yw_i ϕ_3(y) dy=0.
Note that (<ref>) is the weak formulation of the cell problem (<ref>)–(<ref>).
To derive the macroscopic equation (<ref>), in (<ref>) we choose ϕ_1≡ 0 and we get
-∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt +∫_0^T∫_Ω^ D^∇ u^∇ϕ_0(t,x-B^*t) dxdt
+ ∫_0^T∫_Ω^B^*-B^/ u^∇_xϕ_0(t,x-B^*t) dxdt +1∫_Ω^ B^ C^ (u^)^2 ∇ϕ_0(t,x-B^*t) dxdt
= ∫_0^T∫_Ω^ f^ϕ_0(t,x-B^*t) dxdt- ∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt.
Using (<ref>), we get
lim_→ 0 -∫_0^T∫_Ω^ u^∂_tϕ_0(t,x-B^*t) dxdt = -∫_0^T∫_ℝ^2∫_Z u_0 ∂_tϕ_0(t,x) dydxdt
=|Z|∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt.
Using (<ref>) and <ref>, we get
lim_→ 0∫_0^T∫_Ω^ D^∇ u^∇_x ϕ_0(t,x-B^*t) dxdt = ∫_0^T∫_ℝ^2∫_Z D(y) (∇_x u^0+∇_y u_1) ∇_x ϕ_0(t,x) dxdt.
Using the auxiliary problem (<ref>)–(<ref>) and (<ref>), (<ref>) we have
lim_→ 0∫_0^T∫_Ω^ B^*-B^/ u^∇_xϕ_0(t,x-B^*t) dxdt=lim_→ 0-∑_i=1^2∫_0^T∫_Ω^Δ G_i^ u^∂/∂ x_iϕ_0(t,x-B^*t) dxdt
=lim_→ 0∑_i=1^2∫_0^T∫_Ω^∇ G_i^∇ (u^∂/∂ x_iϕ_0(t,x-B^*t)) dxdt
=lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y G_i(x/) ∇ u^∂/∂ x_iϕ_0(t,x-B^*t) dxdt
+lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y G_i(x/) u^∇_x(∂/∂ x_iϕ_0(t,x-B^*t)) dxdt
= ∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt
+∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)u_0∇_x(∂ϕ_0/∂ x_i(t,x))dydxdt
=∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt
-∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)∇_xu_0(∂ϕ_0/∂ x_i(t,x))dydxdt
=∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yG_i(y)∇_yu_1∂ϕ_0/∂ x_i(t,x)dydxdt
=-∑_i=1^2∫_0^T∫_ℝ^2∫_Z Δ_yG_i(y)u_1∂ϕ_0/∂ x_i(t,x)dydxdt
=∫_0^T∫_ℝ^2∫_Z(B^*-B(y))u_1∇_xϕ_0(t,x)dydxdt
=-∫_0^T∫_ℝ^2∫_Z(B^*-B(y))∇_x u_1ϕ_0(t,x)dydxdt.
Using the auxiliary problem (<ref>)–(<ref>), (<ref>) and (<ref>), we obtain
lim_→ 0∫_0^T∫_Ω^ B^ C^/ (u^)^2 ∇_xϕ_0(t,x-B^*t) dxdt=-lim_→ 0∑_i=1^2∫_0^T∫_Ω^Δ H_i^ (u^)^2 ∂/∂ x_iϕ_0(t,x-B^*t) dxdt
=lim_→ 0∑_i=1^2∫_0^T∫_Ω^∇ H_i^∇ ((u^)^2 ∂ϕ_0/∂ x_i(t,x-B^*t)) dxdt
=2lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y H_i(x/) u^∇ u^∂ϕ_0/∂ x_i(t,x-B^*t) dxdt
+lim_→ 0∑_i=1^2∫_0^T∫_Ω^1/∇_y H_i(x/) (u^)^2 ∇_x(∂ϕ_0/∂ x_i(t,x-B^*t)) dxdt
= 2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt
+∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)(u_0)^2∇_x(∂ϕ_0/∂ x_i(t,x))dydxdt
=2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0(∇_xu_0+∇_yu_1)∂ϕ_0/∂ x_i(t,x)dydxdt
-2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0∇_xu_0(∂ϕ_0/∂ x_i(t,x))dydxdt
=2∑_i=1^2∫_0^T∫_ℝ^2∫_Z ∇_yH_i(y)u_0∇_yu_1∂ϕ_0/∂ x_i(t,x)dydxdt
=-2∑_i=1^2∫_0^T∫_ℝ^2∫_Z Δ_yH_i(y)u_0u_1∂ϕ_0/∂ x_i(t,x)dydxdt
=2∫_0^T∫_ℝ^2∫_ZB(y)C(y)u_0u_1∇_xϕ_0(t,x)dydxdt
=-2∫_0^T∫_ℝ^2∫_ZB(y)C(y)∇_x(u_0 u_1)ϕ_0(t,x)dydxdt.
Using <ref>, we get
lim_→ 0∫_0^T∫_Ω^ f^(x,x) ϕ_0(t,x-B^*t) dxdt= ∫_0^T∫_ℝ^2∫_Z f(x,y)ϕ_0(t,x)dydxdt.
Using <ref> and Theorem <ref> gives
lim_→ 0∫_0^T∫_Γ_N^ g_N(u^) ϕ_0(t,x-B^*t) dσ dt =lim_→ 0∫_0^T∫_Γ_N^ u^ϕ_0(t,x-B^*t) dσ dt
=|Γ_N|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt.
By (<ref>)–(<ref>), the passage to the limit → 0 in (<ref>) yields the weak form
|Z|∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt+∫_0^T∫_ℝ^2∫_Z D(y) ∇_x u_0 ∇_x ϕ_0(t,x) dxdt
+∫_0^T∫_ℝ^2∫_Z D(y) ∇_y u_1 ∇_x ϕ_0(t,x) dxdt-∫_0^T∫_ℝ^2∫_Z(B^*-B(y))∇_x u_1ϕ_0(t,x)dydxdt
-2∫_0^T∫_ℝ^2∫_ZB(y)C(y)∇_x(u_0 u_1)ϕ_0(t,x)dydxdt=∫_0^T∫_ℝ^2∫_Z f(x,y)ϕ_0(t,x)dydxdt
-|Γ_N|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt.
Inserting the ansatz (<ref>) into (<ref>), we can rewrite the last identity as
∫_0^T∫_ℝ^2∂_t u_0 ϕ_0(t,x) dxdt+∫_0^T∫_ℝ^2 D^*(u_0,W) ∇_x u_0 ∇_x ϕ_0(t,x) dxdt
=1/|Z|∫_0^T∫_ℝ^2∫_Z f(x,y)ϕ_0(t,x)dydxdt-|Γ_N|/|Z|∫_0^T∫_ℝ^2u_0(t,x)ϕ_0(t,x)dxdt,
where D^*(u_0,W) defined as (<ref>) and (<ref>) is the weak formulation of (<ref>).
It is worth noting that the homogenization limit derived in Theorem <ref> and the corrector convergence result stated in Theorem <ref> in the next section are still valid if we replace the assumption g_N(r)=r with g_N(r)= k, where k∈ℝ is fixed arbitrarily.
§ SEARCHING FOR CORRECTORS
In this section, we study the strong convergence of the corrector term obtained from the homogenized limit problem. To prove such corrector-type result, we rely on techniques similar to those used in <cit.> to prove the strong convergence of solutions to the microscopic problem in L^2. We begin with stating two auxiliary lemmas which later will be employed to prove the wanted strong convergence result. We omit their proofs since they are straightforward extensions of classical results related to the concept of two-scale convergence (Theorem 17 of <cit.>) and, respectively, to the two-scale convergence with drift (see in particular <cit.>).
Let v^∈ L^2(0,T;L^2(Ω^)) such that v^2-drift(B^*) v_0 as → 0 for some v_0=v_0(t,x,y)∈ L^2((0,T)×ℝ^2;L_#^2(Z)). Then it holds
lim inf_→ 0∫_0^T∫_Ω^(v^)^2 dxdt≥∫_0^T∫_ℝ^2∫_Z v_0^2 dydxdt.
Let v^∈ L^2(0,T;Γ_N^) such that
lim_→ 0∫_(0,T)×Γ_N^ v^(t,x)ϕ(t,x-B^*t,x/)dxdt= ∫_0^T∫_ℝ^2∫_Γ_N v_0(t,x,y)ϕ(t,x,y)dydxdt,
for some v_0(t,x,y)∈ L^2((0,T)×ℝ^2;L_#^2(Γ_N)). Then it holds
lim inf_→ 0∫_0^T∫_Γ_N^(v^)^2 dxdt≥∫_0^T∫_ℝ^2∫_Γ_N v_0^2 dydxdt.
Let u^∈ L^2(0,T;H^1(Ω^)) converge strongly two-scale with drift to u_0 in the sense of Definition <ref>. Then it holds:
lim_→ 0u^_L^2((0,T)×Ω^)=u_0_L^2((0,T)×ℝ^2× Z).
By the Minkowski inequality, we have
lim_→ 0u^(t,x)_L^2((0,T)×Ω^)≤lim_→ 0 u^(t,x)-u_0(t,x-B^*t,x)_L^2((0,T)×Ω^)
+lim_→ 0u_0(t,x-B^*t,x)_L^2((0,T)×Ω^).
Now, using Lemma <ref> and (<ref>) to deal with (<ref>) leads to
0≤|lim_→ 0u^(t,x)_L^2((0,T)×Ω^)-u_0_L^2((0,T)×ℝ^2× Z)|≤ 0.
The main result of this section is stated in the next Theorem.
Assume <ref>–<ref> hold, g_N(r)=r for all r∈ℝ, and f^ converges strongly to f in the sense of (<ref>). Then
lim_→ 0∇( u^(t,x/)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))_L^2(0,T;L^2(Ω^))=0,
where u^ solves the original microscopic problem and u_0 and u_1 are given cf. (<ref>) and (<ref>), respectively.
We choose as test function ϕ=u^ in the weak formulation (<ref>). Integrating the result from 0 to t, we obtain
∫_0^t ∫_Ω^∂_t u^ u^ dxds+∫_0^t ∫_Ω^D^∇ u^∇ u^ dxds - 1/∫_0^t ∫_Ω^B^ P(u^)∇ u^ dxds
= ∫_0^t ∫_Ω^f^ u^ dxds-∫_0^t ∫_Γ_N^ g_N(u^) u^ dσ ds.
Using (<ref>) into (<ref>), yields
1/2∫_Ω^ (u^)^2 dx +∫_0^t ∫_Ω^D^∇ u^∇ u^ dxds= 1/2∫_Ω^ (u^(0))^2 dx +∫_0^t ∫_Ω^f^ u^ dxds
-∫_0^t ∫_Γ_N^ g_N(u^) u^ dσ ds.
Since u^ strongly converges to u_0, f^ strongly converges to f in the sense of (<ref>), we have lim_→ 0∫_0^t ∫_Ω^f^ (x)u^ (t,x)dxds=∫_0^t ∫_ℝ^2∫_Z f(x,y) u_0(t,x) dydxds.
We integrate (<ref>) from 0 to p, using Lemma <ref>, Lemma <ref> and pass → 0, we arrive at
|Z|/2∫_0^p∫_ℝ^2 (u_0)^2 dxdt+ lim_→ 0 ∫_0^p ∫_0^t ∫_Ω^D^∇ u^∇ u^ dxdsdt
= |Z|/2∫_0^p∫_ℝ^2 g^2 dxdt
+∫_0^p∫_0^t ∫_ℝ^2∫_Z f u_0 dydxdsdt - lim_→ 0∫_0^p∫_0^t ∫_Γ_N^ (u^)^2 dσ dsdt.
Using Lemma <ref> and (<ref>) in (<ref>), we have
1/2∫_0^p∫_ℝ^2 (u_0)^2 dxdt+ lim_→ 0 1/|Z|∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt ≤1/2∫_0^p∫_ℝ^2 g^2 dxdt
+1/|Z|∫_0^p∫_0^t ∫_ℝ^2∫_Z f u_0 dydxdsdt- |Γ_N|/|Z|∫_0^p∫_0^t∫_ℝ^2 u_0^2 dx dsdt.
Now, observing the structure of D^* as it appears in (<ref>), we obtain for any ξ∈ℝ^2 that
D^*ξ·ξ =1/|Z|∫_Z D(ξ+∇_y ∑ w_i ξ_i)· D(ξ+∇_y ∑ w_i ξ_i).
Using the structure of u_1 from (<ref>) and (<ref>), we have
D^*∇_x u_0 ·∇_x u_0 =1/|Z|∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1).
Consider the weak form of (<ref>) and choose the test function u_0. We thus obtain
∫_0^t∫_ℝ^2∂_t u_0 u_0 dxds+∫_0^t∫_ℝ^2 D^*∇_x u_0 ∇_x u_0 dxds
=1/|Z|∫_0^t∫_ℝ^2∫_Z fu_0dydxds-|Γ_N|/|Z|∫_0^t∫_ℝ^2u_0^2dxds,
and hence, it holds as well that
1/2∫_ℝ^2 u_0^2 dx+1/|Z|∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxds
=1/|Z|∫_0^t∫_ℝ^2∫_Z fu_0 dydxds-|Γ_N|/|Z|∫_0^t∫_ℝ^2u_0^2dxds+ 1/2∫_ℝ^2 g^2 dx.
Integrating (<ref>) from 0 to p and then comparing with (<ref>), we get
1/2 ∫_0^p∫_ℝ^2 u_0^2 dxdt+ lim_→ 01/|Z|∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt
≤1/2∫_0^p∫_ℝ^2 u_0^2 dxdt+1/|Z|∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt.
So,
lim_→ 0∫_0^p ∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt ≤∫_0^p ∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt.
From Lemma (<ref>) and (<ref>), we have
∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt ≤lim_→ 0∫_0^p∫_0^t ∫_Ω^D^∇ u^∇ u^ dxdsdt.
Comparing (<ref>) and (<ref>) leads us to
lim_→ 0∫_0^p∫_0^t∫_Ω^D^∇ u^∇ u^ dxdsdt=∫_0^p∫_0^t∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdsdt.
Now, differentiating (<ref>) with respect to p, using the Fundamental Theorem of Calculus, and finally choosing t=T, allows us to write
lim_→ 0∫_0^T∫_Ω^D^∇ u^∇ u^ dxdt=∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)· (∇_x u_0+∇_y u_1)dy dxdt.
By the ellipticity condition of D, we have
θ ∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))_L^2(0,T;L^2(Ω^))
≤∫_0^T ∫_Ω^D^∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))
·∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) dxdt.
Using the definition of two-scale convergence with drift, we have
lim_→ 0∫_0^T ∫_Ω^D^∇ u^∇ u_0 dxdt =∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)·∇_x u_0 dy dxdt,
lim_→ 0∫_0^T ∫_Ω^ D^∇ u^∇ u_1 dxdt =∫_0^T∫_ℝ^2∫_Z D(∇_x u_0+∇_y u_1)·∇_y u_1 dy dxdt,
lim_→ 0∫_0^T ∫_Ω^ D^∇ u_0∇ u_0dxdt =∫_0^T∫_ℝ^2∫_Z D∇_x u_0∇_x u_0 dy dxdt,
lim_→ 0∫_0^T ∫_Ω^^2 D^∇ u_1∇ u_1 dxdt =∫_0^T∫_ℝ^2∫_Z D∇_y u_1∇_y u_1dy dxdt,
Now, using (<ref>), (<ref>)–(<ref>), we finally obtain
lim_→ 0∫_0^T ∫_Ω^D^∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/))
·∇( u^(t,x)-u_0(t,x-B^*t)- u_1(t,x-B^*t,x/)) dxdt=0.
We conclude our proof by using (<ref>) and
(<ref>) to obtain the desired result (<ref>).
§ CONCLUSION AND OUTLOOK
We rigorously derived a macroscopic equation for a reaction-diffusion problem with nonlinear drift posed in an unbounded perforated domain together with Robin type boundary data. The main challenge for the homogenization was the presence of the nonlinear drift term which is scaled with a factor of order 𝒪(1/ε). The key tools used to handle the homogenization asymptotics were the two-scale convergence with drift and the strong convergence formulated in moving co-ordinates. To study the well-posedness of the microscopic problem we relied on a classical result from <cit.> that we extended to cover the case of the unbounded perforated domain as needed in our target problem. To do so, we used a suitable comparison principle jointly with a monotonicity argument.
The obtained upscaled equation is a reaction-dispersion equation posed in an unbounded domain, strongly coupled with an elliptic cell problem posed in a bounded domain. The corresponding dispersion tensor carries information concerning the microscopic diffusion and drift. We also provided a strong convergence result related to the structure of the corrector, exploiting the difference in the micro- and macro-solutions in the H^1 norm.
We did perform the analysis and subsequent discussion in two space dimensions since the original modeling of our equations was done in terms of interacting particle systems inter-playing in a plane. However, both the homogenization result and the corrector convergence hold for higher dimensions. One technical assumption deserves further attention – we assumed here that the mean value of the coefficient in front of the nonlinear drift is zero. We believe it is a challenging open problem to handle the homogenization of such a nonlinear drift case when the mean value of this drift coefficient is non-vanishing.
In this work, we only discussed the convergence of the corrector. From a more practical perspective, it would nevertheless be very useful to derive a corrector estimate which can later be used to analyse the problem numerically. Until now, to our knowledge, the only corrector estimate result related to large-drift homogenization problems is reported in <cit.>, where the authors considered a linear diffusion-large convection problem. Along the same line of thinking, developing a two-scale finite element approach for the micro-macro problem could be an excellent research direction to pursue (following e.g. the works <cit.>, <cit.>, or <cit.>). The challenging parts for the numerical analysis are the presence of the large nonlinear drift occurring in the microscopic problem as well as the strong coupling through the transport coefficient in the upscaled problem.
§ ACKNOWLEDGMENTS
We thank H. Hutridurga (Bombay) for fruitful discussions during his KAAS seminar. V.R. thanks M. Eden (Karlstad) for interesting discussions related to the corrector result.
The work of V.R. and A.M. is partially supported by the Swedish Research Council's project “Homogenization and dimension reduction of thin heterogeneous layers" (grant nr. VR 2018-03648).
amsplain
|
http://arxiv.org/abs/2307.07456v1 | 20230714163347 | Turán's Theorem Through Algorithmic Lens | [
"Fedor V. Fomin",
"Petr A. Golovach",
"Danil Sagunov",
"Kirill Simonov"
] | cs.DS | [
"cs.DS"
] |
Turán's Theorem Through Algorithmic Lens
August 12, 2023
==============================================================
The fundamental theorem of Turán from Extremal Graph Theory determines the exact bound on the number of edges t_r(n) in an n-vertex graph that does not contain a clique of size r+1. We establish an interesting link between Extremal Graph Theory and Algorithms by providing a simple compression algorithm that in linear time reduces the problem of finding a clique of size ℓ in an n-vertex graph G with m ≥ t_r(n)-k edges, where ℓ≤ r+1, to the problem of finding a maximum clique in a graph on at most 5k vertices. This also gives us an algorithm deciding in time
2.49^k·(n + m) whether G has a clique of size ℓ.
As a byproduct of the new compression algorithm, we give an algorithm that in time 2^𝒪(td^2 )· n^2 decides whether a graph contains an independent set of size at least n/(d+1) +t. Here d is the average vertex degree of the graph G.
The multivariate complexity analysis based on ETH indicates that the asymptotical dependence on several parameters in the running times of our algorithms is tight.
§ INTRODUCTION
In 1941, Pál Turán published a theorem that became one of the central results in extremal graph theory. The theorem bounds the number of edges in an undirected graph that does not contain a complete subgraph of a given size.
For positive integers r≤ n, the Turán's graph T_r(n) is the unique complete r-partite n-vertex graph where each part consists of ⌊n/r⌋ or ⌈n/r⌉ vertices. In other words,
T_r(n) is isomorphic to K_a_1,a_2,…,a_r, where a_i=⌈n/r⌉ if i is less than or equal to n modulo r and a_i=⌊n/r⌋ otherwise. We use t_r(n) to denote the number of edges in T_r(n).
Let r ≤ n. Then any K_r+1-free n-vertex graph has at most t_r(n) edges.
The only K_r+1-free n-vertex graph with exactly t_r(n) edges is T_r(n).
The theorem yields a polynomial time algorithm that for a given n-vertex graph G with at least t_r(n) edges decides whether G contains a clique K_r+1. Indeed, if a graph G is isomorphic to T_r(n), which is easily checkable in polynomial time, then it has no clique of size r+1. Otherwise, by Turán's theorem, G
contains K_r+1. There are constructive proofs of Turán's theorem that also allow to find a clique of size r+1 in a graph with at least t_r(n) edges.
The fascinating question is whether Turán's theorem could help to find efficiently larger cliques in sparser graphs. There are two natural approaches to defining a “sparser” graph and a “larger” clique. These approaches bring us to the following questions; addressing these questions is the primary motivation of our work.
First, what happens when the input graph has a bit less edges than the Turán's graph? More precisely,
Is there an efficient algorithm that for some k≥ 1, decides whether an n-vertex graph with at least t_r(n)-k edges
contains a clique of size r+1?
Second, could Turán's theorem be useful in finding a clique of size larger than r+1 in an n-vertex graph with t_r(n) edges? That is,
Is there an efficient algorithm that for some ℓ> r decides whether an n-vertex graph with at least t_r(n) edges
contains a clique of size ℓ?
We provide answers to both questions, and more. We resolve the first question by showing a simple fixed-parameter tractable (FPT) algorithm where the parameter is k, the “distance” to the Turán's graph.
Our algorithm builds on the cute ideas used by Erdős in his proof of Turán's theorem <cit.>. Viewing these ideas through algorithmic lens leads us to a simple preprocessing procedure, formally a linear-time polynomial compression. For the second question, unfortunately, the answer is negative.
Our contribution. To explain our results, it is convenient to state the above questions in terms of the computational complexity of the following problem.
Input: An n-vertex graph G, positive integers r, ℓ≤ n, and k such that |E(G)|≥ t_r(n)-k. Question: Is there a clique of size at least ℓ in G?
Our first result is the following theorem (<Ref>).
Let G be an n-vertex graph with m ≥ t_r(n)-k edges. Then there is an algorithm that
for any ℓ≤ r+1, in time 2.49^k·(n + m) either finds
a clique of size at least ℓ in G or correctly reports that G does not have a clique of size ℓ. Thus for ℓ≤ r+1,
is FPT parameterized by k. More generally, we prove that the problem admits a compression of size linear in k. That is, we provide a linear-time procedure that reduces an instance (G, r,ℓ, k) of to an equivalent instance (G',p) of the Clique problem with at most 5k vertices. The difference between Clique and is that we do not impose any bound on the number of edges in the input graph of Clique. This is why we use the term compression rather than kernelization,[A kernel is by definition a reduction to an instance of the same problem. See the book <cit.> for an introduction to kernelization.] and we argue that stating our reduction in terms of compression is far more natural and helpful.
Indeed, after reducing the instance to the size linear in the parameter k, the difference between Clique and vanes, as even the total number of edges in the instance is automatically bounded by a function of the parameter. On the other hand, Clique is a more general and well-studied problem than .
Pipelined with the fastest known exact algorithm for Maximum Independent Set of running time 𝒪(1.1996^n)
<cit.>, our reduction provides an FPT algorithm for the problem parameterized by k.
This algorithm is single-exponential in k and linear in n+m, and we also show that the existence of an algorithm subexponential in k would contradict the Exponential Time Hypothesis (<Ref>).
The condition ℓ≤ r+1 required by our algorithm is, unfortunately, unavoidable. We prove (<Ref>) that for any fixed p≥ 2, the problem of deciding whether an n-vertex graph with at least t_r(n) edges contains a clique of size ℓ=r+p is NP-complete. Thus for any p≥ 2, the problem parameterized by k is para-NP-hard. (We refer to the book of Cygan et al. <cit.> for an introduction to parameterized complexity.)
While our hardness result rules out finding cliques of size ℓ> r+1 in graphs with t_r(n) edges in FPT time, an interesting
situation arises when the ratio ξ:= ⌊n/r⌋ is small. In the extreme case, when n=r, the n-vertex graph G with t_r(n)=n(n-1)/2 is a complete graph. In this case the problem becomes trivial.
To capture how far the desired clique is from the Turán's bound, we introduce the parameter
τ=
0, if ℓ≤ r,
ℓ - r, otherwise.
The above-mentioned compression algorithm into
Clique with at most 5k vertices yields almost “for free” a compression of the problem into
Clique with 𝒪(τξ^2+k) vertices. Hence for any ℓ, one can decide whether an n-vertex graph with m ≥ t_r(n)-k edges contains a clique of size ℓ in time 2^𝒪(τξ^2+k)· (n + m). Thus the problem is FPT parameterized by τ+ξ +k. This result has an interesting interpretation when we
look for a large independent set in the complement of a graph.
Turán's theorem, when applied to the complement G of a graph G, yields a bound
α(G)≥n/d + 1,
where α(G) is the size of the largest independent set in G (the independence number of G), and d is the average vertex degree of G.
This motivates us to define the following problem.
Input: An n-vertex graph G with average degree d and a positive integer t. Question: Is there an independent set of size at least n/(d+1)+t in G?
By <Ref>, we have a simple algorithm (<Ref>) that compresses an instance of this problem
into an instance of Independent Set with 𝒪(td^2) vertices. Pipelined with an exact algorithm computing a maximum independent set, the compression results in an algorithm solving the problem in time
2^𝒪(td^2 )· n^2.
As we already mentioned, the problem is NP-complete for any fixed τ≥ 2 and k=0. We prove that it remains intractable when parameterized by any pair of the parameters from the triple {τ, ξ, k}.
More precisely, it is also
NP-complete for any fixed ξ≥ 1 and τ=0, as well as for
any fixed ξ≥ 1 and k=0. These lower bounds are given in <Ref>.
Given the algorithm of running time 2^𝒪(τξ^2+k)· (n + m) and the lower bounds for parameterization by any pair of the parameters from {τ, ξ, k}, a natural question is, what is the optimal dependence of an algorithm on {τ, ξ, k}? We use the Exponential Time Hypothesis (ETH) of Impagliazzo, Paturi, and Zane <cit.> to address this question.
f(ξ,τ)^o(k)· n^f(ξ,τ), f(ξ,k)^o(τ)· n^f(ξ, k), and f(k,τ)^o(√(ξ))· n^f(k, τ), for any function f of the respective parameters.
Related work.
Clique is a notoriously difficult computational problem. It is one of Karp's 21 NP-complete problems <cit.>
and by the work of Håstad, it is hard to approximate Clique within a factor of
n^1-ϵ <cit.>. Clique parameterized by the solution size is W[1]-complete <cit.>. The problem plays the fundamental role in the W-hierarchy of Downey and Fellows, and serves as the starting point in the majority of parameterized hardness reductions. From the viewpoint of structural parameterized kernelization, Clique does not admit a polynomial kernel when parameterized by the size of the vertex cover <cit.>. A notable portion of works in parameterized algorithms and kernelization is devoted to solving Independent Set (equivalent to Clique on the graph's complement) on specific graph classes like planar, H-minor-free graphs and nowhere-dense graphs
<cit.>.
Our algorithmic study of Turán's theorem fits into the paradigm of the “above guarantee” parameterization <cit.>.
This approach was
successfully applied to various problems, see e.g. <cit.>.
Most relevant to our work is the
work of Dvorak and Lidicky on independent set “above Brooks' theorem” <cit.>. By Brooks' theorem <cit.>, every n-vertex graph of maximum degree at most Δ≥ 3 and clique number at most Δ has an independent set of size at least n/Δ. Then the corresponding above-guarantee problem
is to decide whether an input graph G has an independent set of size at least n/Δ + p.
Dvorak and Lidicky <cit.> proved that this problem admits a kernel with at most 114pΔ^3 vertices. This kernel also implies an algorithm of running time 2^𝒪(pΔ^3)· n^𝒪(1).
When the average degree d is at most Δ-1, by <Ref>, we have that the problem admits a compression into an instance of Independent Set with 𝒪(pΔ^2) vertices. Similarly,
by <Ref>, for d ≤Δ-1, it is solvable in time 2^𝒪(pΔ^2)· n^𝒪(1). When d>Δ-1, for example on regular graphs, the result of Dvorak and Lidicky is non-comparable with our results.
§ ALGORITHMS
While in the literature it is common to present Turán's theorem under the implicit assumption that n is divisible by r,
here we make no such assumption. For that, it is useful to recall the precise value of t_r(n) in the general setting, as observed by Turán <cit.>.
For positive integers r ≤ n,
t_r(n)=(1-1/r)·n^2/2-s/2·(1-s/r)
where s=n-r·⌊n/r⌋ is the remainder in the division of n by r.
Note that <cit.> uses the expression t_r(n) = (r-1)/(2r)· (n^2 - s^2) + s(s-1)/2, however it can be easily seen to be equivalent to the above.
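As a quick sanity check of these two expressions, here is a small Python sketch (the function names are ours, not from the paper) that computes t_r(n) both from the balanced partition of T_r(n) and from the closed-form formula of the proposition, and verifies that the two agree.

from fractions import Fraction

def turan_number(n, r):
    # t_r(n): edges of the Turan graph T_r(n), computed from its balanced partition
    # into s parts of size ceil(n/r) and r-s parts of size floor(n/r), s = n mod r.
    s = n % r
    parts = [n // r + 1] * s + [n // r] * (r - s)
    return n * (n - 1) // 2 - sum(a * (a - 1) // 2 for a in parts)

def turan_number_closed(n, r):
    # Closed form from the proposition: (1 - 1/r) n^2/2 - (s/2)(1 - s/r), exact arithmetic.
    s = n - r * (n // r)
    return Fraction(r - 1, r) * n * n / 2 - Fraction(s, 2) * (1 - Fraction(s, r))

assert all(turan_number(n, r) == turan_number_closed(n, r)
           for n in range(1, 60) for r in range(1, n + 1))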
We start with our main problem, where we look for a K_r + 1 in a graph that has slightly less than t_r(n) edges.
Later in this section, we show how to derive our other algorithmic results from the compression routine developed next.
§.§ Compression algorithm for ℓ≤ r + 1
First, we make a crucial observation on the structure of a instance that will be the key part of our compression argument.
Take a vertex v of maximum degree in G, partition V(G) on S = N_G(v) and T = V(G) ∖ S, and add all edges between S and T while removing all edges inside T. It can be argued that this operation does not decrease the number of edges in G while also preserving the property of being K_r + 1-free. Performing this recursively yields that T_r(n) has indeed the maximum number of edges for a K_r + 1-free graph, and this is the gist of Erdős' proof of Turán's Theorem <cit.>. Now, we want to extend this argument to cover our above-guarantee case. Again, we start with the graph G and perform exactly the same recursive procedure to obtain the graph G'. While we cannot say that G' is equal to G, since the latter has slightly less than t_r(n) edges, we can argue that every edge that gets changed from G to G' can be attributed to the “budget” k. Thus we arrive to the conclusion that G is different from G' at only 𝒪(k) places.
The following lemma makes this intuition formal.
There is an 𝒪(m + k)-time algorithm that for non-negative integers k ≥ 1, r ≥ 2 and an n-vertex graph G with m ≥ t_r(n)-k edges, finds a partition V_1, V_2, …, V_p of V(G) with the following properties
(i) p ≥ r-k;
(ii) For each i∈{1,…, p}, there is a vertex v_i ∈ V_i with N_G(v_i) ⊃ V_i+1∪ V_i+2∪⋯∪ V_p;
(iii) If p ≤ r, then for the complete p-partite graph G' with parts V_1, V_2, …, V_p, we have |E(G')|≥ |E(G)| and |E(G) E(G')|≤ 3k.
Moreover, all vertices covered by E(G)∖ E(G') are covered by E(G')∖ E(G) and |E(G')∖ E(G)|≤ 2k.
Let us clarify this technical definition.
The lemma basically states that if a graph G has at least t_r(n)-k edges, then it either has a clique of size r+1, or it has at most 3k edit distance to a complete multipartite graph G' consisting of p ∈ [r-k,r] parts.
Moreover, G has a clique of size p untouched by the edit, i.e. this clique is present in the complete p-partite graph G' as well.
We should also note that <Ref> is close to the concept of stability of Turán's theorem.
This concept received much attention in extremal graph theory (see e.g. recent work of Korándi et al. <cit.>), and concerns the structural properties of graphs whose number of edges is close to the Turán number t_r(n).
<Ref> can also be seen as a stability version of Turán's theorem, but from the algorithmic point of view.
We move on to the proof of the lemma.
First, we state the algorithm, which follows Erdős' proof of Turán's Theorem from <cit.>.
We start with an empty graph G' defined on the same vertex set as G, and set G_1 = G.
Then we select the vertex v_1 ∈ V(G_1) as an arbitrary maximum-degree vertex in G_1, i.e. _G_1(v_1)=max_u∈ V(G_1)_G_1(u).
We put V_1=V(G_1)∖ N_G_1(v_1) and add to G' all edges between V_1 and V(G_1)∖ V_1.
We then put G_2:=G_1-V_1 and, unless G_2 is empty, apply the same process to G_2.
That is, we select v_2 ∈ V(G_2) with _G_2(v_2)=max_u ∈ V(G_2)_G_2(u) and put V_2=V(G_2)∖ N_G_2(v_2) and add all edges between V_2 and V(G_2)∖ V_2 to G'.
We repeat this process with G_i+1:=G_i-V_i until G_i+1 is empty.
The process has to stop eventually as each V_i is not empty.
In this way three sequences are produced: G=G_1,G_2,…, G_p, G_p + 1, where G_1 is G and G_p+1 is the empty graph; v_1,v_2,…, v_p, and V_1, V_2, …, V_p.
Note that the sequences {v_i} and {V_i} satisfy property (ii) by construction.
Observe that this procedure can be clearly performed in time 𝒪(n^2), and for any r ≥ 2, m + k ≥ t_r(n) = Θ(n^2); thus the algorithm takes time 𝒪(m + k).
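For illustration, here is a short Python sketch of this greedy construction (a direct quadratic-time transcription, not the 𝒪(m + k)-time implementation claimed in the lemma; the names are ours). It returns the parts V_1,…,V_p together with the pivot vertices v_1,…,v_p, which, by property (ii), induce a clique in G.

def erdos_partition(adj):
    # adj: dict mapping each vertex to the set of its neighbours in G.
    remaining = set(adj)                  # vertex set of the current graph G_i
    parts, pivots = [], []
    while remaining:
        # v_i: a vertex of maximum degree in G_i
        v = max(remaining, key=lambda u: len(adj[u] & remaining))
        part = remaining - adj[v]         # V_i = V(G_i) \ N_{G_i}(v_i); note v_i lies in V_i
        parts.append(part)
        pivots.append(v)
        remaining = remaining & adj[v]    # G_{i+1} = G_i - V_i
    return parts, pivots

# Example usage: erdos_partition({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}})
# returns a partition into two parts together with two adjacent pivots.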
Clearly, G' is a complete p-partite graph with parts V_1, V_2, …, V_p as in G' we added all edges between V_i and V(G_i)∖ V_i=(V_i+1∪ V_i+2∪…∪ V_p) for each i∈{1,…, p} and never added an edge between two vertices in the same V_i.
Since a p-partite graph is always K_p + 1-free, by <Ref> |E(G')|≤ t_p(n).
|E(G')|-|E(G)|≥∑_i=1^p |E(G[V_i])| and for each u ∈ V(G), _G(u)≤_G'(u).
For each i ∈{1,…, p}, denote by E_i the edges of G' added in the i-th step of the construction.
Formally, E_i=V_i× (V_i+1∪ V_i+2∪…∪ V_p) for i < p and E_p = ∅.
We aim to show that |E_i|-|E(G_i)∖ E(G_i + 1)|≥ |E(G[V_i])|.
The first part of the claim will follow as |E(G')|=∑_i=1^p |E_i| and |E(G)|=∑_i=1^p |E(G_i)∖ E(G_i + 1)|.
Denote by d_i the degree of v_i in G_i.
Since N_G_i(v_i)=(V_i+1∪ V_i+2∪…∪ V_p), |E_i|=d_i|V_i|.
As v_i is a maximum-degree vertex in G_i, d_i≥_G_i(u) for every u∈ V_i, so |E_i|≥∑_u∈ V_i_G_i(u).
Recall that G_i + 1 =G_i-V_i.
Then
|E(G_i)∖ E(G_i + 1)|= ∑_u∈ V_i_G_i(u)-|E(G_i[V_i])|=∑_u∈ V_i_G_i(u)-|E(G[V_i])|
≤ |E_i|-|E(G[V_i])|,
and the first part of the claim follows.
To show the second part, note that for a vertex u ∈ V_i, _G(u)≤∑_j=1^i-1|V_j|+_G_i(u).
On the other hand, u is adjacent to every vertex in V_1∪ V_2∪⋯∪ V_i-1∪ V_i+1∪⋯∪ V_p in G'.
We have already seen that |V_i+1∪⋯∪ V_p|≥_G_i(u).
Thus, _G(u)≤_G'(u).
Proof of the claim is complete.
The claim yields that |E(G)|≤ t_p(n), so t_p(n)≥ t_r(n)-k.
By <Ref>, we have that t_i(n)>t_i-1(n), as T_i-1(n) is distinct from T_i(n), so t_i(n)≥ t_i-1(n)+1 for every i ∈ [n].
Hence if r≥ p then k≥ t_r(n)-t_p(n)≥ r-p.
It concludes the proof of (i).
It is left to prove (iii), i.e. that |E(G) E(G')|≤ 3k under the assumption p≤ r.
First note that E(G)∖ E(G')=⋃ E(G[V_i]).
Second, since |E(G')|≤ t_p(n)≤ t_r(n) and |E(G)|≥ t_r(n)-k, |E(G')|-|E(G)|≤ k.
By Claim, we have that |E(G')|-|E(G)| ≥∑|E(G[V_i])|.
Finally
|E(G) E(G')|= |E(G')|-|E(G)|+2|E(G)∖ E(G')|
= |E(G')|-|E(G)|+2∑ |E(G[V_i])|≤ 3k.
By Claim, each vertex covered by E(G)∖ E(G') is covered by E(G')∖ E(G).
The total size of these edge sets is at most 3k, while |E(G')∖ E(G)|-|E(G)∖ E(G')|=|E(G')|-|E(G)|≤ k.
Hence, the size of |E(G)∖ E(G')| is at most 2k.
This concludes the proof of (iii) and of the lemma.
We are ready to prove our main algorithmic result. Let us recall that we seek a clique of size ℓ in an n-vertex graph with t_r(n)-k edges, and that
τ =max{ℓ-r,0}.
The problem with τ∈{0,1} admits an 𝒪(n + m)-time compression into Clique on at most 5k vertices.
Let (G,r,k,ℓ) be the input instance of . If r < 2 or n ≤ 5k, a trivial compression is returned.
Apply the algorithm of <Ref> to (G, r, k, ℓ) and obtain the partition V_1, V_2, …, V_p.
Observe that this takes time 𝒪(m + k) = 𝒪(n + m) since n > 5k.
By the second property of <Ref>, v_1, v_2, …, v_p induce a clique in G, so if p≥ℓ we conclude that (G,r,k,ℓ) is a yes-instance. Formally, the compression returns a trivial yes-instance of Clique in this case.
We now have that r-k ≤ p ≤ r.
Then the edit distance between G and the complete p-partite graph G' with parts V_1, V_2, …, V_p is at most 3k.
Denote by X the set of vertices covered by E(G) E(G').
Denote R=E(G')∖ E(G) and A=E(G)∖ E(G').
We know that |R|+|A|≤ 3k, |R|≤ 2k and |R|≥ |A|.
By <Ref>, R covers all vertices in X, so |X|≤ 2|R|.
Clearly, (G,r,k,ℓ) as an instance of our problem is equivalent to an instance (G,ℓ) of Clique.
We now apply the following two reduction rules exhaustively to (G,ℓ).
Note that these rules are an adaptation of the two well-known reduction rules for the general case of Clique (see, e.g., <cit.>).
Here the adapted rules employ the partition V_1, V_2, …, V_p explicitly.
If there is i ∈ [p] such that V_i ⊈X and V_i is independent in G, remove V_i from G and reduce ℓ by one.
For each i ∈ [p] with |V_i∖ X|> 1, remove all but one vertices in V_i∖ X from G.
Since the reduction rules are applied independently to parts V_1, V_2, …, V_p, and each rule is applied to each part at most once, clearly this can be performed in linear time. We now argue that these reduction rules always produce an equivalent instance of Clique.
<Ref> and <Ref> are safe.
For <Ref>, note that there is a vertex v ∈ V_i∖ X such that N_G(v)=N_G(V_i)=V(G)∖ V_i.
Since V_i is independent, for any vertex set C that induces a clique in G, we have |C∩ V_i|≤ 1.
On the other hand, if C∩ V_i=∅, C∪{v} also induces a clique in G as C ⊆ N_G(v).
Hence, any maximal clique in G contains exactly one vertex from V_i, so <Ref> is safe.
To see that <Ref> is safe, observe that N_G(u)=N_G(v) for any two vertices u,v∈ V_i∖ X.
Then no clique contains both u and v, and if C∋ v induces a clique in G, C∖{v}∪{u} also induces a clique in G of the same size.
Hence, v can be safely removed from G so <Ref> is safe.
It is left to upperbound the size of G after the exhaustive application of reduction rules.
In this process, some parts among V_1, V_2,…, V_p are removed from G.
W.l.o.g. assume that the remaining parts are V_1, V_2, …, V_t for some t ≤ p.
Note that parts that have no common vertex with X are eliminated by <Ref>, so t ≤ |X|.
On the other hand, by <Ref>, we have |V_i∖ X|≤ 1 for each i ∈ [t].
Consider i ∈ [t] with |V_i∖ X|=1.
By <Ref>, G[V_i] contains at least one edge.
Since V_i is independent in G', E(G[V_i])⊆ A.
Hence, the number of i ∈ [t] with |V_i∖ X|=1 is at most |A|.
We obtain
|V(G)|= ∑_i=1^t |V_i|=∑_i=1^t |V_i∩ X|+∑_i=1^t |V_i∖ X|
≤ |X|+|A|≤ 2|R|+|A|≤ |R|+(|R|+|A|)≤ 5k.
We obtained an instance of Clique that is equivalent to (G,r,k,ℓ) and contains at most 5k vertices.
The proof is complete.
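A compact Python sketch of this compression step follows (the names are ours; it assumes the adjacency sets adj of G and the list of parts V_1,…,V_p produced by the lemma). It first computes the set X of vertices touched by E(G) Δ E(G') and then applies Rule 1 and Rule 2 once to every part, which suffices since the rules act on the parts independently.

def compress_instance(adj, parts, ell):
    # adj: dict vertex -> set of neighbours in G; parts: list of sets V_1,...,V_p; ell: clique target.
    vertices = set(adj)
    index = {v: i for i, part in enumerate(parts) for v in part}
    # X: vertices incident to an edge of the symmetric difference E(G) xor E(G'),
    # where G' is the complete multipartite graph with the given parts.
    X = set()
    for u in vertices:
        own = parts[index[u]]
        inside = adj[u] & own                 # edges of G inside u's own part
        missing = (vertices - own) - adj[u]   # cross-part non-edges of G
        if inside or missing:
            X.add(u)
    new_parts, dropped = [], 0
    for part in parts:
        untouched = part - X
        independent = not any(adj[u] & part for u in part)
        if untouched and independent:
            dropped += 1                      # Rule 1: delete the whole part, decrease ell
        elif len(untouched) > 1:
            keep = next(iter(untouched))      # Rule 2: keep a single untouched vertex
            new_parts.append((part & X) | {keep})
        else:
            new_parts.append(set(part))
    kept = set().union(*new_parts) if new_parts else set()
    new_adj = {u: adj[u] & kept for u in kept}
    return new_adj, new_parts, ell - dropped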
Combining the polynomial compression of <Ref> with the algorithm of Xiao and Nagamochi <cit.> for Independent Set running in 𝒪(1.1996^n), we obtain the following.
The problem with τ≤ 1 is solvable in time 2.49^k·(n + m).
Take a given instance of the problem and compress it into an equivalent instance (G,ℓ) of Clique with |V(G)|≤ 5k.
Clearly, (G,|V(G)|-ℓ) is an instance of Independent Set equivalent to (G,ℓ).
Use the algorithm from <cit.> to solve this instance in 𝒪(1.1996^|V(G)|) running time.
Since 1.1996^5<2.49, the running time of the whole algorithm is bounded by 2.49^k·(n + m).
§.§ Looking for larger cliques
In this subsection we consider the situation when τ > 1. As we will see in <Ref>, an FPT algorithm is unlikely in this case, unless we take a stronger parameterization.
Here we show that the problem is FPT parameterized by τ+ξ+k. Recall that ξ= ⌊n/r⌋.
<Ref> argues that this particular choice of the parameter is necessary.
First, we show that the difference between t_ℓ(n) and t_r(n) can be bounded in terms of τ and ξ. This will allow us to employ <Ref> for the new FPT algorithm by a simple change of the parameter.
The proof of the next lemma is done via a careful counting argument.
Let n, r,ℓ be three positive integers with r< ℓ≤ n. Let ξ=⌊n/r⌋ and τ=ℓ-r.
Then for τ=𝒪(r), t_ℓ(n)-t_r(n)=Θ(τξ^2).
Throughout the proof, we assume ξ=n/r since this does not influence the desired Θ estimation.
Let s_r be the remainder in the division of n by r and s_ℓ be the remainder in the division of n by ℓ.
By <Ref>,
t_ℓ(n)-t_r(n)= τ n^2/2rℓ+(s_r/2·(1-s_r/r)-s_ℓ/2·(1-s_ℓ/ℓ)).
The first summand in (<ref>) is Θ(ξ^2 τ).
Indeed, since τ =Ø(r) we have
τ n^2/2rℓ=τ/2·n/r·n/r+τ=ξ^2τ/2·r/r+τ=Θ(ξ^2 τ).
For the second summand,
s_r/2·(1-s_r/r) - s_ℓ/2·(1-s_ℓ/ℓ)
= ℓ s_r(r-s_r)-rs_ℓ(ℓ-s_ℓ)/2rℓ=(rs_ℓ^2-ℓ s_r^2)+rℓ(s_r -s_ℓ )/2rℓ
= (rs_ℓ^2-r s_r^2-τ s_r^2)+rℓ(s_r -s_ℓ )/2rℓ
= r(s_ℓ-s_r)(s_ℓ+s_r)+rℓ (s_r-s_ℓ)/2rℓ-τ s_r^2/2rℓ
= (s_r-s_ℓ)(ℓ-(s_ℓ+s_r))/2ℓ-τ s_r^2/2rℓ.
Since n=⌊n/ℓ⌋·ℓ + s_ℓ, we have that
s_r≡⌊n/ℓ⌋·ℓ + s_ℓr,
and
s_r≡⌊n/ℓ⌋· (r+τ) + s_ℓr.
Hence,
s_r-s_ℓ≡⌊n/ℓ⌋·τr.
By definition s_r<r, thus we get from the above that s_r-s_ℓ≤⌊n/ℓ⌋·τ≤ξτ.
Analogously,
s_ℓ-s_r≡⌊n/r⌋· (-τ) ℓ
Since s_ℓ-s_r> -r > -ℓ, we have that s_ℓ-s_r≥⌊n/r⌋· (-τ)≥ -ξτ.
Therefore |s_ℓ-s_r|≤ξτ.
It is easy to see that |ℓ-(s_ℓ+s_r)|≤ℓ+(s_ℓ+s_r)≤ 3ℓ.
Finally, τ s^2_r/2rℓ is non-negative and is upper bounded by τ r^2/2rℓ≤τ/2.
Thus, the absolute value of (<ref>),
is at most
ξτ·3ℓ/2ℓ+τ/2=𝒪(ξτ).
By putting together (<ref>) and (<ref>), we conclude that t_ℓ(n)-t_r(n)=Θ(ξ^2τ) + 𝒪(ξτ)=Θ(ξ^2τ).
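A small numerical illustration of the lemma (reusing the turan_number helper sketched after the proposition on t_r(n); the sample values below are arbitrary): for τ much smaller than r, the ratio (t_ℓ(n)-t_r(n))/(τξ^2) stays close to 1/2, matching the leading term ξ^2τ/2 · r/(r+τ) in the proof.

for n, r, tau in [(1000, 50, 3), (1000, 100, 5), (900, 30, 2)]:
    l = r + tau
    xi = n // r
    gap = turan_number(n, l) - turan_number(n, r)
    # prints the gap and its ratio to tau * xi^2 (roughly 0.45-0.47 for these values)
    print(n, r, tau, xi, gap, gap / (tau * xi * xi))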
The following compression algorithm is
a corollary of <Ref> and <Ref>.
It provides a compression of size linear in k and τ.
admits a compression into Clique on 𝒪(τξ^2+k) vertices.
Let (G,k,r,ℓ) be the given instance of .
If ℓ≤ r+1, then the proof follows from <Ref>.
Otherwise, reduce (G,k,r,ℓ) to an equivalent instance (G,k+t_ℓ(n)-t_r(n),ℓ,ℓ) of the same problem just by modifying the parameters.
This is a valid instance since |E(G)|≥ t_r(n)-k= t_ℓ(n)-(t_ℓ(n)-t_r(n)+k).
Denote k'=k+(t_ℓ(n)-t_r(n)).
By <Ref>, k'=k+𝒪(τξ^2).
Apply the polynomial compression of <Ref> to (G,k',ℓ,ℓ) to obtain an instance of Clique with 𝒪(k'), i.e., 𝒪(τξ^2+k), vertices.
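In code, the reduction used in this proof is just a change of parameters; a minimal sketch (again reusing the turan_number helper introduced earlier, with a name of our choosing):

def shift_to_tau_at_most_one(n, r, k, l):
    # Re-read an instance (G, r, k, l) with l > r + 1 as an instance with r' = l
    # and k' = k + t_l(n) - t_r(n); the graph G itself is unchanged.
    if l <= r + 1:
        return r, k
    return l, k + turan_number(n, l) - turan_number(n, r)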
Pipelined with a brute-force algorithm computing a maximum independent set in time 𝒪(2^n), <Ref> yields the following corollary.
The problem is solvable in time 2^𝒪(τξ^2+k)· (n + m).
§.§ Independent set above Turán's bound
Another interesting application of <Ref> concerns computing
Independent Set in graphs of small average degree. Recall that
Turán's theorem, when applied to the complement of a graph G, yields the bound
α(G) ≥ n/(d+1).
Here α(G) is the size of the largest independent set in G (the independence number of G), and d is the average vertex degree of G. Then in this problem,
the task is, for an n-vertex graph G and a positive integer t, to decide whether there is an independent set of size at least n/(d+1)+t in G.
<Ref> implies a compression of this problem into Independent Set. In other words, we give a polynomial-time algorithm that for an instance (G, t) constructs an equivalent instance (G', p) of Independent Set with at most 𝒪(td^2) vertices. That is, the graph G has an independent set of size at least n/(d+1)+t if and only if G' has an independent set of size p.
The problem admits a compression into Independent Set on 𝒪(td^2) vertices.
For simplicity, let us assume that n is divisible by d +1. (For arguments here this assumption does not make an essential difference.)
We select r=n/d+1, τ =t, and k=0. Then d=n/r-1=ξ -1. The graph G has at most nd/2 edges, hence G has at least n(n-1)/2- nd/2=n(n-1)/2- n(ξ-1)/2 ≥ t_r(n) edges, see <Ref>.
An independent set of size n/(d+1) +t in the graph G corresponds in the complement G̅ to a clique of size r+t. Since <Ref> provides compression into a Clique with 𝒪(τξ^2+k)=𝒪(τξ^2) vertices, for Independent Set and the graph G this corresponds to a compression into an instance of Independent Set with 𝒪(td^2) vertices.
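To make the parameter choice behind this reduction concrete, the following hedged sketch (Python, with an illustrative hand-picked instance whose average degree is an integer) computes r, k and ℓ from an n-vertex graph with m edges and checks the edge bound on the complement that makes the constructed instance valid; turan_edges is the same helper as in the sketch above.

    # Parameter choice of the reduction: work on the complement graph with
    # r = floor(n/(d+1)), k = 0 and ell = r + t; validity follows from the
    # edge bound |E(complement)| >= t_r(n).
    def turan_edges(n, r):
        q, s = divmod(n, r)
        return (n * n - s * (q + 1) ** 2 - (r - s) * q ** 2) // 2

    def reduction_parameters(n, m, t):
        d = 2 * m / n                              # average degree of G
        r = int(n // (d + 1))
        k = 0
        ell = r + t
        complement_edges = n * (n - 1) // 2 - m    # edges of the complement graph
        assert complement_edges >= turan_edges(n, r) - k, "edge bound violated"
        return r, k, ell

    # example: a sparse graph with average degree exactly 6
    print(reduction_parameters(500, 1500, 3))      # -> (71, 0, 74)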
By <Ref>, we obtain the following corollary.
is solvable in time 2 ^Ø(td^2)· n^2.
§ LOWER BOUNDS
In this section, we investigate how the algorithms above are complemented by hardness results.
First, observe that k has to be restricted, otherwise the problem is not any different from Clique.
In fact, reducing from Independent Set on sparse graphs, one can show that there is no 2^o(k)-time algorithm for even when τ≤ 1. (The formal argument is presented in <Ref>.) This implies that the 2^𝒪(k)-time algorithm given by <Ref> is essentially tight.
Also, the difference between r and ℓ has to be restricted, as it can be easily seen that admits no n^o(ℓ)-time algorithm even when k = 0, assuming ETH. This is observed simply by considering the special case of where r = 1: there the only restriction on G is that |E(G)| ≥ t_r(n) - k = 0, meaning that the problem is as hard as Clique.
However, <Ref> shows that even for any fixed τ≥ 2 and k = 0 the problem is NP-complete.
This motivates <Ref>, where the exponential part of the running time has shape 2^𝒪(τξ^2+k).
In the rest of this section, we further motivate the running time of <Ref>. First, in <Ref> we show that not only is setting τ and k to constants insufficient to overcome NP-hardness, but that the same holds for any choice of two parameters out of {τ, ξ, k}.
is NP-complete. Moreover, it remains NP-complete in each of the following cases
(i) for any fixed ξ≥ 1 and τ=0;
(ii) for any fixed ξ≥ 1 and k=0;
(iii) for any fixed τ≥ 2 and k=0.
Towards proving (i) and (ii), we provide a reduction from Clique. Let ξ≥ 1 be a fixed constant.
Let (G,ℓ) be a given instance of Clique and let n=|V(G)|.
We assume that ℓ≥ξ, otherwise we can solve (G,ℓ) in polynomial time.
Construct a graph G' from G as follows.
Start from G'=G and ℓ'=ℓ.
Then add max{ξℓ- n,0} isolated vertices to G'. Note that (G,ℓ) and (G',ℓ') are equivalent and |V(G')|≥ξℓ'.
If we have ξℓ' ≤ |V(G')|<(ξ+1)ℓ', we are done with the construction of G'.
Otherwise, repeatedly add a universal vertex to G', increasing ℓ' by one, so |V(G')|-(ξ+1)ℓ' decreases by ξ each time.
We repeat this until |V(G')| becomes less than (ξ+1)ℓ'.
Since the gap between ξℓ' and (ξ+1)ℓ' is at least ξ at any moment, we derive that ξℓ' ≤ |V(G')| < (ξ+1)ℓ'.
The construction of G' is complete. Note that (G',ℓ') is an instance of Clique equivalent to (G,ℓ).
We added at most max{n,ξℓ} vertices to G', hence this is a polynomial-time reduction.
By the above, ⌊ |V(G')|/ℓ' ⌋ = ξ, so we can reduce (G',ℓ') to an equivalent instance (G',ℓ',|V(G')|(|V(G')|-1)/2,ℓ') of .
Clearly, this instance has the required fixed value of ξ and τ=0.
This proves (i).
For (ii), we use the fact that t_1(n)=0 for every n>0 and reduce (G',ℓ') to (G',1,0,ℓ').
To show (iii), we need another reduction from Clique.
Let τ≥ 2 be a fixed integer constant.
Take an instance (G,ℓ) of Clique with ℓ≥ 2τ.
We denote n=|V(G)|.
To construct G' from G, we start from a large complete (ℓ-1)-partite graph with equal-sized parts.
The size of each part equals x, so |V(G')|=(ℓ-1)x.
We denote N=|V(G')| and choose the value of x later; for now, we only need that N≥ n.
Clearly, |E(G')|=t_ℓ-1(N) at this point.
To embed G into G', we select arbitrary n vertices in G' and make them isolated.
This removes at most n(ℓ-2)x edges from G'.
Then we identify these n isolated vertices with V(G) and add edges of G between these vertices in G' correspondingly.
This operation does not decrease |E(G')|.
This completes the construction of G'.
Since G' is isomorphic to a complete (ℓ-1)-partite graph united disjointly with G, we have that (G,ℓ) and (G',ℓ) are equivalent instances of Clique.
We now want to reduce (G',ℓ) to an instance (G',ℓ-τ,0,ℓ) of .
To do so, we need |E(G')|≥ t_ℓ-τ(N).
By <Ref>, t_ℓ-1(N)-t_ℓ-τ(N)≥ C · (τ-1) ·(N/(ℓ-τ))^2 for some constant C>0.
Since |E(G')|≥ t_ℓ-1(N)-n(ℓ-2)x, we want to choose x such that
n(ℓ-2)x≤ C · (τ-1) ·(N/(ℓ-τ))^2.
By substituting N=(ℓ-1)x, we derive that x should satisfy
(n/C)·((ℓ-2)(ℓ-τ)/(ℓ-1)^2)·((ℓ-τ)/(τ-1))≤ x.
Now simply pick as x the smallest integer that satisfies the above.
Then (G',ℓ-τ,0,ℓ) is an instance of that is equivalent to the instance (G,ℓ) of Clique and is constructed in polynomial time.
Now, recall that <Ref> gives an FPT algorithm for that is single-exponential in τξ^2+k.
The previous theorem argues that all three of τ, ξ, k have to be in the exponential part of the running time. However, that result does not say anything about what can be the best possible dependency on these parameters.
The next <Ref> aims to give more precise lower bounds based on ETH; in particular, it turns out that the dependency on τ and k cannot be subexponential unless ETH fails.
First, we need to show the relation between the parameter ξ and the average degree of G.
Let G be an n-vertex graph, r≤ n be an integer, and denote ξ=⌊n/r⌋.
Let G̅ denote the complement of G and d denote the average degree of G̅.
Then d≤ξ if |E(G)|≥ t_r(n) and |E(G)|≥ t_r(n) if d≤ξ-1.
Let s<r be the remainder in the division of n by r.
Then
n(n-1)/2-t_r(n) = n^2/2 - n/2 - (1-1/r)·n^2/2 + (s/2)·(1-s/r)
= (n/2)·(n/r) - n/2 + (s/2)·(1-s/r)
= (n/2)·ξ + (n/2)·(s/r) - n/2 + s/2 - (s/2)·(s/r) = (n/2)·ξ - ((n-s)/2)·(1-s/r)
= (n/2)·ξ - (n-s)(r-s)/(2r)
= (n/2)·ξ - ((r-s)/2)·ξ.
Since |E(G̅)|=n(n-1)/2-|E(G)|, one direction is proved: assuming |E(G)|≥ t_r(n), |E(G̅)|≤ n(n-1)/2-t_r(n)≤ξ n/2.
For the other direction, assume that d≤ξ-1, i.e. |E(G̅)|≤ (n/2)· (ξ-1).
As n(n-1)/2-t_r(n)≥ (n/2)·ξ-n/2, we have n(n-1)/2-t_r(n)≥ |E(G̅)|.
Then |E(G)|≥ t_r(n) follows and the proof is complete.
We are ready to give lower bounds for algorithms solving in terms of the parameters τ, ξ, and k.
Unless the Exponential Time Hypothesis fails, for any function f there is no f(ξ,τ)^o(k)· n^f(ξ,τ), f(ξ,k)^o(τ)· n^f(ξ,k), or f(k,τ)^o(√(ξ))· n^f(k,τ) algorithm for .
It is well-known that under ETH Independent Set cannot be solved in 2^o(n) time on instances with a linear number of edges <cit.>.
This is a basis of our proof: we provide several reductions from Independent Set with linear number of edges.
Note that these reductions mostly repeat the reductions given in the proof of <Ref> but are different in terms of requirements for τ, ξ and k.
In fact, we give three polynomial-time algorithms that reduce an instance (G,q) of Independent Set, where n=|V(G)| and |E(G)|=𝒪(n), to an equivalent instance of such that
* τ, ξ are constant but k=𝒪(n);
* ξ, k are constant but τ=𝒪(q);
* τ, k are constant but ξ=𝒪(nq).
Clearly, once we show these three reductions, the proof of the theorem is complete.
For an instance of Independent Set (G,q) we denote n=|V(G)| and m=|E(G)|.
We always assume that the number of edges in G is linear, so m=𝒪(n) and the average degree of G is not greater than some fixed constant D.
We also assume that n≥ 2(D+1) and q>2.
The first reduction takes (G,q) and trivially reduces it to an instance (G̅,n,m,q) of .
Note that |E(G̅)|=n(n-1)/2-m=t_n(n)-m, so this is a valid instance of the problem.
For this instance, ξ=1 and τ=0 but k=m=𝒪(n).
The second algorithm reduces (G,q) to an equivalent instance (G̅,r,0,q), where r=⌊n/(D+1)⌋.
As ⌊ n/r⌋-1 upper bounds the average degree of G, by <Ref> we have that |E(G̅)|≥ t_r(n), so (G̅, r,0,q) is a valid instance.
This instance has k=0 and
ξ=⌊ n/r⌋ < n·(n/(D+1)-1)^-1=(D+1)· n/(n-(D+1))≤ 2(D+1),
but τ≤ q=𝒪(q).
To show the last reduction, we reduce from the instance (G,ℓ) of Clique instead of (G,q) of Independent Set, since constraints on the number of edges in G are not necessary for it.
Formally it means that we reduce from (G,q) of Independent Set to (G̅,q) of Clique, and then apply reductions as required.
Slightly abusing the notation, we denote (G̅, q) by (G,ℓ).
We adjust the last reduction from Clique from the proof of <Ref>.
Recall that in this reduction we reduce an instance (G,ℓ) of Clique to an equivalent instance (G',ℓ) of Clique with |V(G')|=(ℓ-1)x for some chosen integer x.
For (G',ℓ-τ,0,ℓ) to be a valid equivalent instance of , it is enough that x satisfies
x ≥ (n/C)·((ℓ-τ)/(ℓ - 1))·((ℓ-τ)/(τ-1)),
where the fixed constant C>0 comes from <Ref>.
To show the third reduction, we pick τ:=2.
Then we choose x:=⌈ nℓ/C ⌉, so (G',ℓ-τ,0,ℓ) is a valid instance of equivalent to (G,ℓ).
This instance has k=0 and τ=2, but ξ≤ |V(G')|/(ℓ-2)≤ 2x=𝒪(nℓ).
The first part of <Ref> lets us observe that our 2.49^k · (n +m)-time algorithm for with τ≤ 1 is essentially tight.
Assuming ETH, there is no 2^o(k)· algorithm for with ℓ≤ r + 1.
§ CONCLUSION
We conclude by summarizing natural questions left open by our work. <Ref>
rules out (unless ETH fails) algorithms with running times subexponential in τ and k. However, when it comes to ξ, the dependency
in the upper bound of <Ref> is 2^𝒪(τξ^2+k)·, while <Ref> only rules out the running time of f(k,τ)^o(√(ξ))· n^f(k,τ) under ETH.
Thus, whether the correct dependence in ξ is single-exponential or subexponential, is left open.
Similarly, the question whether admits a compression into Clique whose size is linear in
ξ, τ, and k, is open. A weaker variant of this question (for the case k=0) for , whether it admits a compression or kernel linear in d and in t, is also open.
|
http://arxiv.org/abs/2307.06204v1 | 20230712144919 | High-energy X-ray spectrum reconstruction: solving the inverse problem from optimized multi-material transmission measurements | [
"Arthur Walker",
"Alexandre Friou",
"Kevin Ginsburger"
] | physics.app-ph | [
"physics.app-ph",
"physics.med-ph"
] |
Received May 01, 2023 / Accepted May 31, 2023
Reconstructing the unknown spectrum of a given X-ray source is a common problem in a wide range of X-ray imaging tasks. For high-energy sources, transmission measurements are mostly used to recover the X-ray spectrum, as a solution to an inverse problem. While this inverse problem is usually under-determined, ill-posedness can be reduced by improving the choice of transmission measurements. A recently proposed approach optimizes custom thicknesses of calibration materials used to generate transmission measurements, employing a genetic algorithm to minimize the condition number of the system matrix before inversion.
In this paper, we generalize the proposed approach to multiple calibration materials and show a much larger decrease of the condition number of the system matrix than thickness-only optimization.
Additionally, the spectrum reconstruction pipeline is tested in a simulation study with a challenging high-energy Bremsstrahlung X-ray source encountered in Linear Induction Accelerators (LIA), with strong scatter noise. Using this approach, a realistic noise level is obtained on the measurements. A generic anti-scatter grid is designed to reduce the noise to an acceptable, yet still high, range. A novel noise-robust reconstruction method is then presented, which is much less sensitive to initialization than common expectation-maximization approaches, enables a precise choice of spectrum resolution, and allows a controlled injection of prior knowledge of the X-ray spectrum.
§ INTRODUCTION
The reconstruction of the unknown spectrum of a given X-ray source is a common problem in a wide range of X-ray imaging tasks <cit.>. If the source flux is low, spectrometers are usually preferred to estimate the spectrum. If a very precise modeling of the source is available, a good knowledge of the X-ray spectrum can be obtained, provided that the model input parameters, such as voltage, are measured precisely enough during the pulse <cit.>.
When none of the two previous conditions are met, transmission measurements are most frequently used to recover the X-ray spectrum, as a solution to an inverse problem <cit.>.
This inverse problem is usually under-determined, because a high resolution of the reconstructed spectrum is required. It is also ill-conditioned, making the spectral estimation unstable and very sensitive to noise. The ill-posedness of this inverse problem can be reduced using a parametric model of the reconstructed spectrum <cit.>. However, these model-based methods restrict, by design, the space of possible solutions, thus requiring a fine and general enough physical modelling of the spectrum prior to reconstruction.
Another way to reduce ill-posedness is to improve the quality of the set of transmission measurements. In particular, the recent approach described in <cit.> proposed to compute custom thicknesses of calibration materials used to generate transmission measurements, by optimizing on the condition number of the system matrix used for inversion. Using a genetic algorithm, the interest of optimized measurements to reconstruct spectra was demonstrated, in comparison with common linear slab phantoms.
While the approach proved to be very efficient for the configurations tested in <cit.>, their simulation study was restricted to unrealistically small amounts of Poisson noise and relatively low-energy spectra. In this work, a challenging high-energy Schiff spectrum <cit.> is used to simulate transmission measurements using the Monte-Carlo N-Particle code (MCNP4C) with realistic noise levels. The Schiff spectrum is a typical model for thin-target Bremsstrahlung spectra encountered in radiographic sources based on Linear Induction Accelerators (LIA), used to perform high-energy flash X-ray imaging <cit.>. We show that the method presented in <cit.> is not readily applicable to this real-world reconstruction problem. Three improvements are thus proposed to obtain a robust spectrum reconstruction. Firstly, the transmission measurement optimization, reduced to variations of material thicknesses in <cit.>, is extended to multiple materials by modifying the genetic algorithm. We show that using multiple materials yields a much larger decrease of the condition number of the system matrix than thickness-only optimization. Secondly, instead of a generic Poisson noise, a realistic simulated scatter noise is used to evaluate the ability to recover the spectrum. Using this approach, we observe a much higher noise level on measurements, which makes the design of a noise reduction setup mandatory. As such, an anti-scatter grid is proposed, reducing noise to an acceptable range for spectrum reconstruction, but still much higher than noise levels encountered in <cit.>. Thirdly, once measurements are optimized and the experimental setup is fixed, a novel reconstruction method is presented, which is much less sensitive to initialization than expectation-maximization approaches <cit.>, enables a precise choice of spectrum resolution, and allows a controlled injection of prior knowledge of the X-ray spectrum.
§ METHODS
§.§ Transmission measurement model
As illustrated in figure <ref>, two types of transmission measurements are considered, which correspond to the two experimental configurations tested in this study.
In what follows, the detector spectral response D(E) is not accounted for and taken as unity. For each measurement, the forward transmission model is given by the Beer-Lambert law. Given an X-ray source with spectrum S(E), the transmitted intensity I of a measurement writes
I = ∫_E S(E)D(E)e^-lμ(E) dE
where l is the thickness and μ(E) the linear attenuation coefficient of the material used for this measurement.
§.§.§ One material per measurement
This configuration corresponds to the right side of figure <ref>.
For M measurements y_i and with an even discretization of the spectrum into N bins, the forward problem can be cast into a set of linear equations
y_i = ∑_j=1^N e^-l_iμ_i,j s_j, i ∈ [[ 1, M ]], j ∈ [[ 1, N ]]
where l_i is the thickness of the i-th measurement and μ_i,j is the linear attenuation coefficient value for the i-th measurement at the discrete energy level j.
Equation <ref> is conveniently put in the matrix form as
y = A ×s
with y = (y_i) ∈ℝ^M the vector of transmission measurements, s = (s_j)∈ℝ^N the vector of the discretized spectrum and A = (e^-l_iμ_i,j) ∈ℝ^M × N the forward system matrix.
§.§.§ Multiple materials per measurement
This configuration is illustrated at the left side of figure <ref>. Each measurement contains K layers of distinct materials.
Without any loss of generality, we consider that each measurement contains the same number of layers with the same material order. Only the thickness of each material layer is changed, and can be set to zero. With this assumption, the attenuation coefficient no longer depends on the measurement number i. The transmitted intensity y_i for the i-th measurement thus writes
y_i = ∑_j=1^N∏_k=1^K e^-l_i,kμ_k,j s_j, i ∈ [[ 1, M ]], j ∈ [[ 1, N ]], k ∈ [[ 1, K ]]
where l_i,k is the thickness of the k-th layer of the i-th measurement and μ_k,j is the linear attenuation coefficient of material k at energy level j. The forward system matrix in equation <ref> becomes A = (∏_k=1^K e^-l_i,kμ_k,j) ∈ℝ^M × N.
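As an illustration of this construction, the short Python sketch below assembles A for the multi-material case and evaluates its condition number; the attenuation coefficients and layer thicknesses are random placeholders standing in for tabulated data.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, K = 8, 100, 4                              # measurements, energy bins, materials
    mu = rng.uniform(0.02, 2.0, size=(K, N))         # placeholder attenuation coefficients (1/cm)
    l = rng.uniform(0.0, 10.0, size=(M, K))          # layer thicknesses l_ik (cm)

    # A_ij = prod_k exp(-l_ik * mu_kj) = exp(-sum_k l_ik * mu_kj)
    A = np.exp(-l @ mu)                              # shape (M, N)
    print("condition number of A:", np.linalg.cond(A))

The single-material model of equation <ref> is recovered by allowing only one non-zero entry per row of the thickness matrix l.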
§.§ Multi-material thickness optimization
Using multiple materials in transmission measurements implies some modifications to the thickness optimization genetic algorithm described in <cit.>. Two optimizations with slightly different implementations, corresponding to the two types of transmission measurements discussed above, are here considered and tested.
§.§.§ Multiple materials per measurement
When multiple piled up materials are allowed for each measurement, the genetic algorithm optimizes on two-dimensional matrices of size M × K instead of one-dimensional vectors in the single material case. Each cell of the matrices contains the current thickness of the column's corresponding material for the line's corresponding measurement. In comparison with <cit.>, the main steps of the genetic optimization are modified as follows:
* Initialization: N_pop matrices are initialized with a randomly chosen value in each cell, such that the sum on each row (i.e. the total thickness of the corresponding measurement slab) remains in a chosen interval [l_min, l_max]
* Crossover: the crossover formula used in <cit.> is applied similarly to the rows of the system matrix
* Mutation: instead of simply modifying the value of the mutated cell as in <cit.>, the sum of values on the row, corresponding to the total thickness, is modified as well as the proportion of materials
§.§.§ One material per measurement
When only one material is allowed per measurement, two-dimensional M × K matrices are also used, but with only one non-zero value per row, in the column corresponding to the employed material. The main steps of the genetic optimization are modified as follows (a schematic sketch of this variant is given after the list):
* Initialization: N_pop matrices are initialized with a randomly chosen value in only one cell per row, and the other cells are initialized to zero
* Crossover: If the two parents are using the same material, the situation refers to the case of <cit.>. Otherwise, the two children receive one material each, with thickness values crossed with the same formula
* Mutation: Each mutation on a row also has a probability of changing the material used for the corresponding measurement
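The sketch below (Python) is not the algorithm of <cit.> (whose crossover formula, mutation scheme, fitness boosting and hyper-parameters differ), but it illustrates the encoding used by this variant, one material index and one thickness per measurement, and the fitness, namely the condition number of the resulting system matrix; the attenuation table and all parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(1)
    M, N, K = 8, 100, 4
    mu = rng.uniform(0.02, 2.0, size=(K, N))          # placeholder attenuation data
    l_min, l_max = 0.1, 20.0                          # allowed slab thicknesses (cm)

    def fitness(materials, thicknesses):
        # condition number of A with one material (and one thickness) per measurement
        A = np.exp(-thicknesses[:, None] * mu[materials, :])
        return np.linalg.cond(A)

    def random_individual():
        return rng.integers(0, K, size=M), rng.uniform(l_min, l_max, size=M)

    def crossover(a, b):
        mask = rng.random(M) < 0.5                    # uniform crossover per measurement
        return np.where(mask, a[0], b[0]), np.where(mask, a[1], b[1])

    def mutate(ind, p=0.2):
        mats, thick = ind[0].copy(), ind[1].copy()
        for i in range(M):
            if rng.random() < p:                      # a mutation may also switch the material
                mats[i] = rng.integers(0, K)
                thick[i] = rng.uniform(l_min, l_max)
        return mats, thick

    pop = [random_individual() for _ in range(200)]
    for generation in range(300):
        pop.sort(key=lambda ind: fitness(*ind))
        parents = pop[:50]                            # simple truncation selection
        pop = parents + [mutate(crossover(parents[rng.integers(50)],
                                          parents[rng.integers(50)]))
                         for _ in range(150)]

    best = min(pop, key=lambda ind: fitness(*ind))
    print("best condition number:", fitness(*best))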
§.§ Geometry design and simulation
A basic MCNP4C geometry was built for each set of measurements. Transmission measurement phantoms are represented as cylinders of known thicknesses and radii, whose axes point at the photon source, which is modeled as a point source located 180 cm away from the phantoms. A Schiff spectrum, corresponding to electrons with a maximum energy of 20 MeV incident on a 1.2 mm thick tantalum target, is used. The photons are emitted within a cone with a constant angular distribution, chosen in order to cover all the measurement devices. Both photon and electron interactions are modeled, so as to accurately account for scattering. The filling medium is air. All these parameters were chosen to be as close as possible to reality.
The detectors in the simulation are modeled as simple fluence tallies, placed behind each measurement slab. Consistent modeling of detectors in the simulation and accounting for their spectral responses in the optimization process, is left to future work.
As mentioned earlier, the MCNP4C simulation accounts for the presence of scatter noise in the measurements, produced during the passage of X-rays through materials. A specific simulation setup was designed to reduce this scatter noise and obtain clean enough measurements for the spectrum reconstruction.
The first approach employed to decrease scatter noise was to optimize the placement of the measurement slabs within the experimental setup and space them out in order to reduce cross-scattering between particles passing through the different materials. The differential evolution algorithm <cit.> was used to compute the optimal placement of M points constrained in a circle by minimizing the electrostatic potential between them. With this method, the scatter noise was reduced by half compared to slabs placed on a circle. While significant, this reduction was not sufficient to exploit transmission measurements for spectrum reconstruction. To further reduce this noise, a specific anti-scatter grid was designed. This grid consists of a 20 cm deep lead layer between the measurement slabs and the dose sensors. Cylinders of small radius are carved in the lead behind each measurement slab in order to absorb all particles except source photons, yielding the expected signal. Using this anti-scatter grid reduces the scatter noise to an exploitable level of around 3 %, allowing the measurements to be used for the spectrum reconstruction.
§.§ Reconstruction algorithm
§.§.§ Adaptive resampling based on a typical spectrum
Prior to the reconstruction algorithm itself, a preliminary dimension-reduction step is performed using a typical spectrum presenting roughly the same characteristics as the unknown spectrum. As shown in figure <ref>, by sampling uniformly from the cumulative integral of this typical spectrum's derivative, an approximately optimal choice of the N energy bins is obtained, allowing an efficient spectrum representation that is later employed during the optimization process of the reconstruction.
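One possible implementation of this resampling step is sketched below (Python); the template spectrum is an arbitrary placeholder standing in for the typical spectrum mentioned above.

    import numpy as np

    def adaptive_energy_bins(energy, template, n_bins):
        # cumulative integral of |dS/dE| of the template spectrum ...
        slope = np.abs(np.gradient(template, energy))
        cum = np.cumsum(slope)
        cum = (cum - cum[0]) / (cum[-1] - cum[0])
        # ... sampled uniformly: bins concentrate where the template varies fastest
        return np.interp(np.linspace(0.0, 1.0, n_bins), cum, energy)

    energy = np.linspace(0.05, 20.0, 2000)                        # MeV
    template = np.exp(-energy / 4.0) * np.log1p(20.0 / energy)    # placeholder spectrum shape
    print(adaptive_energy_bins(energy, template, 100)[:5])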
§.§.§ Spectrum reconstruction as an interpolation
The spectrum is reconstructed on the optimal energy sampling presented above, using experimental or simulated measurements y. A candidate spectrum s_cand is initialized and then modified during the optimization process. The measurements computed through the forward model y_cand = A ×s_cand are expected to be as close to y as possible.
The optimization problem for the spectrum reconstruction thus writes :
argmin_s_cand ‖y - A ×s_cand‖^2
A straightforward method would consist in performing a spectrum discretization over the optimal energy sampling values E_1, ... ,E_N, where N is usually large to obtain a good resolution of the spectrum. However, a large N increases the degrees of freedom and makes the optimization process harder. It is therefore necessary to reduce the number of parameters used to describe the spectrum. The method chosen here is to sample the spectrum with a small number P of interpolation points. This requires the spectrum to be continuous and free of sharp, rapidly varying peaks, which is a general feature of high-energy Bremsstrahlung spectra.
As shown in figure <ref>, the spectrum is thus fully described by P interpolation points, whose coordinates can evolve both in energy and magnitude (i.e. 2P degrees of freedom). At each optimization step, the points are interpolated by a piecewise cubic polynomial function <cit.>, then projected over the N energy intervals. y_cand is then computed and new values of coordinates for the interpolation points can be determined.
§.§.§ Reconstruction algorithm
A trust-region algorithm for constrained optimization <cit.> is used to compute the interpolation points corresponding to the approximated spectrum. Minimum and maximum energy levels are enforced, based on a known low energy cut-off and the maximum energy of electrons. The spectrum magnitude is constrained using a simple step function. In addition to this restriction, the spectrum is normalized by its integral during each step of the optimization process.
For this algorithm, the P abscissae of the interpolation points are fixed, which improves optimization performance and decreases execution time. However, other algorithms might be more efficient with the abscissae taken as additional degrees of freedom. In this study, these abscissa values are evenly spaced inside the energy interval on a logarithmic scale. The optimization algorithm is then applied to the P magnitudes of the interpolation points, with a fixed number of iterations.
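The shape of this optimisation can be sketched as follows (Python), using SciPy's 'trust-constr' method and a PCHIP piecewise cubic interpolant. The system matrix A and the measurements y below are synthetic placeholders, and the constraint handling of the actual pipeline (step-function magnitude bounds, low-energy cut-off) is only mimicked by simple box bounds and per-step normalisation.

    import numpy as np
    from scipy.interpolate import PchipInterpolator
    from scipy.optimize import minimize, Bounds

    rng = np.random.default_rng(2)
    M, N, P = 12, 100, 10
    energy = np.geomspace(0.05, 20.0, N)               # reconstruction grid (MeV)
    nodes = np.geomspace(0.2, 20.0, P)                 # fixed, log-spaced abscissae

    A = np.exp(-rng.uniform(0.0, 5.0, size=(M, N)))    # placeholder system matrix
    s_true = np.exp(-energy / 4.0); s_true /= s_true.sum()
    y = A @ s_true                                     # synthetic, noise-free data

    def candidate_spectrum(magnitudes):
        s = PchipInterpolator(nodes, magnitudes, extrapolate=True)(energy)
        s = np.clip(s, 0.0, None)
        return s / (s.sum() + 1e-12)                   # normalisation at every step

    def objective(magnitudes):
        return np.sum((y - A @ candidate_spectrum(magnitudes)) ** 2)

    result = minimize(objective, x0=np.full(P, 1.0 / P), method="trust-constr",
                      bounds=Bounds(0.0, 1.0), options={"maxiter": 300})
    print("residual:", result.fun)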
§ RESULTS
§.§ Impact of multiple materials
A comparison between our multi-material genetic algorithm and the version implemented in <cit.> was performed using the same setup. A number of M = 8 measurements was fixed, N=100 energy bins and K=4 materials (iron, copper, tantalum and lead) were used. For a fair comparison, all other hyper-parameters, including the number of generations, the population size, the crossover and mutation probabilities, the target distribution mean and the best fitness boosting factor, were kept identical to <cit.>.
§.§.§ Algorithms performance
The optimization in <cit.> reduced the forward matrix condition number by several orders of magnitude and highlighted an exponential distribution of thicknesses for the optimal measurement set using one material. Using this one-material genetic algorithm, the optimal arrangement of thicknesses plotted on figure <ref> was obtained, with a condition number of 1.8× 10^6.
Figure <ref> displays the results obtained using the first type of transmission measurements with several materials, described in section <ref>. When multiple piled up materials are allowed per measurement, a significant improvement of the previous results is observed, with a further reduced condition number of 1.61× 10^4.
Figure <ref> shows the results obtained using a single material per measurement.
With this second version of the algorithm presented in <ref>, an even greater improvement is observed with a highly reduced condition number of 2.11× 10^3.
§.§.§ Stability of the optimization
Regardless of the performance of the different algorithms, another main goal of the proposed algorithms is to ensure a good stability of the optimized measurements. To compare the stability of the multi-material algorithms developed in this study, both were executed several times with the same parameters. The set of materials obtained for each optimization was then plotted.
Results are shown in figure <ref>. In figure <ref> where multiple materials per measurement are allowed, the mean thickness value and the standard deviation of each piled up material for each measurement are plotted. However, in figure <ref> where only one material is allowed per measurement, the plots have to be separated because a given measurement can correspond to different materials for different optimizations.
The result of the stability comparison is clear. As illustrated on figure <ref>, the algorithm version with piled up materials is quite unstable, with a significant standard deviation on material thicknesses between experiments and a strong variance on the condition number. Conversely, for the algorithm with only one material per measurement, the stability is satisfying. As shown in figure <ref>, each measurement is attributed almost always the same material with the same thickness across experiments.
The reduction of the search space size thus plays a key role in the stability improvement.
§.§ Reduction of scatter noise
Different geometric configurations have been tested in MCNP4C to reduce scatter noise while keeping an easy-to-design setup. For each of them, a simulation was run using the measurement slabs returned by the one-material-per-measurement version of the genetic algorithm for M = 12 measurements. In this study we consider that scattered rays represent the entire measurement noise, which is a reasonable approximation. The Monte-Carlo simulation code allows the calculation of the theoretical unscattered rays along with the measurement of total (unscattered and scattered) rays. The objective is to design a configuration in which the measured total rays are as close to the theoretical direct rays as possible. Total and unscattered rays have been measured for each simulation and are plotted in figure <ref>.
Three configurations were tested : a straightforward design where measurement slabs were placed on the edge of a circle of given radius (config. 1, figure <ref>), a reworked configuration in which slabs were optimally distributed in a circle (config. 2, figure <ref>), and a last design where a thick lead anti-scatter grid was added between the measurement slabs and the sensors of configuration 2 (config. 3, figure <ref>).
§.§ Schiff spectrum reconstruction
§.§.§ Nominal configuration
In this study, the nominal configuration for the spectrum reconstruction consists of: M = 12 measurements, N = 100 energy intervals, K = 4 different materials (iron, copper, tantalum and lead), only one material allowed per measurement in the genetic measurement-set optimization (hyperparameters: 500 generations, population size of 1000), config. 3 as the simulation configuration, and P = 10 interpolation points for the reconstruction algorithm.
For this nominal configuration, the reconstruction algorithm has been applied multiple times. Reconstructed spectra (dotted lines) are plotted on figure <ref>, along with the objective theoretical spectrum (solid black line).
§.§.§ Ablation studies
Finally, ablation studies were performed to evaluate the influence of every part of the spectrum estimation pipeline on the reconstruction accuracy. Figure <ref> highlights the relative improvements obtained with each major step of the reconstruction method.
More precisely, reconstructions have been performed independently in the nominal configuration in the cases where :
* No anti-scatter grid was used for the measurements (figure <ref>)
* Only one material (iron) was used for the experimental slabs (figure <ref>)
* The expectation-maximization (EM) algorithm was used for the reconstruction instead of the algorithm proposed in this study (figure <ref>)
§ DISCUSSION
§.§ Impact of multiple materials
As recalled above, the work done in <cit.> reduced the forward matrix condition number by several orders of magnitude and highlighted an exponential distribution of thicknesses for the optimal single-material measurement set. The results shown in section <ref> illustrate the impact of using multiple materials on the condition number, with two distinct cases.
Using multiple piled up materials is beneficial in comparison to the previous setup from <cit.>.
This can be explained by the much larger size of the search space when multiple materials are allowed, leading to a better optimum than the single material algorithm. However, as shown in figure <ref>, the expected drawback of this larger search space is worse stability; keeping the search space small thus plays a key role in the stability improvement.
With the second version of our algorithm presented in <ref>, where only one material is allowed per measurement, an even greater improvement of results is observed with a highly reduced condition number and a good stability, shown in figures <ref>, <ref>. Even though the search space is smaller than in the previous version of the algorithm (which, in theory, contains this variant as a special case), the relevant constraints put on the optimizer allow a more thorough exploration, leading to the discovery of a better optimum.
§.§ Reduction of scatter noise
Reduction of the measurement scatter noise has proven to be necessary when a challenging high-energy spectrum is reconstructed. Indeed, even though the ill-posedness of the inverse problem is reduced by the search of optimal measurement sets, the condition number remains significant enough to disturb the reconstruction when the measurement noise is too high.
As displayed in figure <ref>, when a naive simulation configuration is used (config. 1) the scatter noise tends to become as large as 100 %, leading to measurements unusable for reconstruction.
When the measurement slabs are optimally distributed within the experimental circle (config. 2), figure <ref> shows a substantial reduction of scattering, with a scatter noise level of about 50 % relative to the unscattered signal. This scatter noise is, however, still far too high for the inversion.
Lastly, when a thick lead anti-scatter grid is added behind the measurement slabs (config. 3), the scatter noise is reduced to a negligible amount of around 3 % as shown in figure <ref>. The lead layer allows the absorption of almost every scattered ray and the detection of the unscattered rays that pass through material slabs parallel to its axis. The scatter noise obtained with this last configuration appears to be small enough for the measurements to be used to solve the inverse problem and reconstruct the spectrum.
§.§ Schiff spectrum reconstruction
§.§.§ Nominal configuration
As shown in figure <ref>, the overall reconstruction accuracy is satisfactory in the nominal configuration, but two areas can be distinguished. For high energies (E > 100 keV) there is no restriction, as the constraint step function is set to unity, and the reconstruction shows great accuracy even for as few as 12 interpolation points. However, in the low energy range, the points are constrained to 0 and are thus not optimized. In practice, this is necessary because of the much lower contribution of low energies to the transmission measurements. Indeed, when dense materials are subjected to an X-ray beam of a given spectrum, most of the low-energy photons are absorbed and never reach the detector. The difficulty of reconstructing the low-energy part of the spectrum is thus intrinsic to the inverse problem at hand.
§.§.§ Ablation studies
Figure <ref> emphasizes the substantial impact of the presence of an anti-scatter grid in the experimental setup on the reconstruction accuracy. When nothing is done to reduce the scattered rays, the measurements are very noisy, leading inevitably to an inaccurate reconstruction.
Similarly, the influence of using multiple materials is illustrated in figure <ref>. Because only iron is allowed in the first optimization problem, the genetic algorithm cannot converge to a satisfactory minimum of the condition number of the system. Thus, even with low noise, the reconstruction is unstable and inaccurate.
Finally, the interest of using the "trust-constr" algorithm for the second optimization problem formulated above is clearly highlighted in figure <ref>. When the expectation-maximization algorithm is used instead, as it has usually been done in other studies <cit.>, the quality of the reconstruction is very poor for low and medium energy ranges, and the spectrum peak is not retrieved at the expected energy level.
§ CONCLUSION
In this article, a full spectrum reconstruction pipeline for high-energy X-ray sources was presented. Building on prior work which introduced the optimization of transmission measurements, the present work generalizes this approach to multiple calibration materials, reaching better performance than thickness-only optimization. This work also demonstrated the importance of noise reduction to perform spectrum reconstruction in realistic experimental setups. As such, it showed that the design of an adapted anti-scatter grid is a precious asset to solve the inverse problem and obtain faithful spectrum estimations. Finally, a novel noise-robust reconstruction method was shown to outperform common expectation-maximization approaches, enabling a precise choice of spectrum resolution and a controlled injection of prior knowledge of the X-ray spectrum.
§ CONFLICT OF INTEREST STATEMENT
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
§ AUTHOR CONTRIBUTIONS
KG and AF lead the research. AW wrote the code and conducted numerical experiments. AW, KG and AF wrote the article.
§ FUNDING
No funding information applicable.
unsrtnat
|
http://arxiv.org/abs/2307.04104v1 | 20230709055200 | lcs4Foam -- An OpenFOAM Function Object to Compute Lagrangian Coherent Structures | [
"Constantin Habes",
"Alexandra von Kameke",
"Mohammed Elwardi Fadeli",
"Holger Marschall"
] | physics.flu-dyn | [
"physics.flu-dyn",
"physics.app-ph",
"76-04, 76-10",
"I.6.3; I.6.6; J.2"
] |
lcs4Foam]lcs4Foam – An OpenFOAM Function Object to
Compute Lagrangian Coherent Structures
^1Mathematical Modeling and Analysis, Technical University of Darmstadt, 64287 Darmstadt, Germany
[email protected], [email protected], [email protected]
^2Department of Mechanical Engineering and Production Management, Hamburg University of Applied Sciences, 20099 Hamburg, Germany
[email protected]
To facilitate the understanding and to quantitatively assess the material transport in fluids, a modern characterisation method, rooted in dynamical systems theory, has emerged in fluid dynamics in the last decades. It allows one to examine the most influential material lines, called Lagrangian Coherent Structures (LCS), which organise the material transport into dynamically distinct large-scale regions that resist diffusion or mixing. LCS reveal the robust skeleton of material surfaces and are essential to assess material transport in time-dependent flows quantitatively. Candidates of LCS can be estimated and visualised from finite-time stretching and folding fields by calculating the Finite-Time Lyapunov Exponents (FTLE).
In this contribution, we provide an OpenFOAM function object to compute FTLE during CFD simulation. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver.
§ INTRODUCTION
Material transport and mixing in fluids is enhanced by advection. This advection is usually described mathematically in an Eulerian view by a time-dependent velocity field 𝐮(𝐱, t). With this Eulerian description, numerous important fluid mechanical characteristics can be derived and assessed. For instance, a higher Reynolds number (higher velocities) will typically go along with better overall mixing. However, such intuition might be misleading as has been shown for example in <cit.> studying a rising bubble. Here, a coherent structure has been found to arise for intermediate Reynolds numbers, which causes material to move together and locally hinders mixing and increases residence times in the vicinity of the bubble rear. The example shows: a closer look at the coherent structures is necessary to evaluate the details of the material transport in the specific flow situation. Lagrangian Coherent Structures (LCS) are often observable in fluid flows due to the shape that passive tracers take on, e.g. plankton in the ocean <cit.> or dissolved oxygen in the wake behind a rising bubble.
That the classical Eulerian view on advection is not optimal for addressing these issues was first noted in oceanography and atmospheric science <cit.>. The transport analysis was therefore started from its roots, the Lagrangian view, where the observer travels on the fluid parcels rather than watching them move by (Eulerian frame). The Lagrangian analysis thus considers the trajectories of individual fluid parcels and allows to draw conclusions on the transport from their evaluation. Nowadays, computational and theoretical advances allow for the calculation and analysis of the time-dependent dynamical system that governs material transport.
The underlying ideas for Lagrangian analysis stem from dynamical systems theory. In time-independent incompressible velocity fields, the dynamical system is the velocity field itself and the streamlines of the velocity field coincide with the trajectories of the fluid parcels. As such, trivially, structures in the velocity fields represent governing structures for the material that is transported (as long as molecular diffusion is comparably low and negligible) <cit.>. In this setting, unstable and stable manifolds divide the flow into different subdomains that move coherently (together) <cit.>.
For time-dependent flows however, the instantaneous streamlines and the trajectories of the fluid parcels do not coincide. It is thus a misleading habit to draw any conclusion about the material transport from the streamlines or any other material lines of the mean velocity field of a fluid flow. The resulting transport structures might have no relevance for the real dynamical system at all.
To obtain the lines that govern material transport in time-dependent flows the Lagrangian Coherent Structures are calculated from the trajectories of particles evaluated in the time-dependent velocity field 𝐮 = 𝐮(𝐱, t). LCS are those material lines and surfaces that separate regions of particles with very different fates or history for the time interval under consideration. Several different approaches to evaluate LCS have been developed during the last years <cit.>.
With this contribution we introduce an OpenFOAM function object that calculates the three dimensional Finite Time Lyapunov Exponents (FTLE) on-the-fly based on the general purpose numerical library libcfd2lcs <cit.> with the main computational details explained in <cit.>. The ridges in the FTLE-field are then candidates for LCS and can be assumed to coincide with LCS if some further conditions are met <cit.>. However, as also pointed out in <cit.>, these additional conditions are hard to evaluate in 3D and thus the FTLE-field will be viewed as an approximate representation of the 3D LCS. The details about the calculation of the FTLE-field and the underlying mathematical foundation are set out in Section <ref>.
§ THEORETICAL BACKGROUND OF LCS CALCULATIONS
From time-resolved CFD simulations, the time-dependent velocity field 𝐮(𝐱, t) is known in space and time. From this information the fluid parcel or passive particle trajectories
𝐱(𝐱_0, t) = 𝐱_0 + ∫_t_0^t𝐮(𝐱(τ), τ) d τ
can be calculated, where 𝐱_0 is the starting point of a trajectory in 3D space at a starting time t_0. Note, that each trajectory is now labelled by its start location in space and time. If a set of initially close passive particles is released at the same time the distances between them change over time due to the fluid motion. Passive particles initially forming a tiny sphere will undergo a linear deformation towards an ellipse for short times as would occur in a solid body under stress before it breaks. Certainly, in a fluid, the deformation will progress, and non-linear higher-order terms will play a role in causing stretching and folding which is crucial for mixing. However, as a first approximation and for short times these higher-order terms are neglected for the analysis of the deformation. If we consider infinitesimal spheres of initially close particles around all mesh cell centres of our simulation starting at the same initial time t_0, we obtain a set of different ellipsoids. All these ellipsoids have differently stretched and contracted principal axes which point in different directions at a slightly later time t_1. The principal axes of each ellipse denominate the final directions of maximal stretching (major axis) and maximal contraction (minor axis) of the initially spherical particle blob. The stretching factor S is the length of the major axis of the final ellipse divided by the initial radius of the sphere. If this stretching factor at each initial grid point is plotted, a 3D stretching field results revealing the regions at which stretching and thus particle separation for the time interval of interest [t_0,t_1] is largest due to the local flow conditions. Normally, the scaled logarithm of this stretching factor, defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log (S) ,
is plotted. This scaled logarithmic stretching factor is called the Finite-Time Lyapunov Exponent <cit.>. Connected areas or lines of large FTLE values characterise the fluid transport as these denote the areas or lines along which deformation and thus particle separation is largest. All these geometrical considerations have their mathematical counterparts. The stretching factor as described is the square root of the maximal eigenvalue of the right Cauchy-Green deformation tensor 𝐂_t_0^t_1. This tensor can be calculated for every mesh cell as envisioned above for the ellipsoid. As its name reveals it includes all the information about the deformation of the fluid masses at this point for the short time interval t_1 -t_0, and notably it is an objective tensor such that high stretching values and candidates for LCS derived from it will persist regardless of the motion of the observer (invariant to a time-dependent translation and rotation of the coordinate system of the observer) <cit.>.
The governing ordinary differential equation (ODE) for the evolution of a fluid parcel or a passive particle reads
𝐱̇=𝐮(𝐱(t), t) .
Therefore, the infinitesimal separation δ𝐱 = 𝐱-𝐱^* between the passive particle, imagined at the centre of an infinitesimal sphere, and a particle on the surface of the sphere will be governed by the ODE
δ𝐱̇ = ∇𝐮 δ𝐱 .
The solution of this ODE is an exponential function, which explains why the FTLE is defined as the logarithm of the stretching factor.
To analyse the stretching during short but finite time intervals, particles distributed on a mesh are advected with the flow from an initial time t_0 over the time interval T=|t_1-t_0| to t_1. From the integral version of the governing ODE (Eq. <ref>) we obtain the definition of the flow map, Φ_t_0^t_1, which maps all the particles from their initial positions onto their final positions at time t_1, viz.
Φ_t_0^t_1: ℝ^n→ℝ^n ; 𝐱_0↦𝐱_0 + ∫_t_0^t_1𝐮(𝐱(τ), τ) d τ .
To obtain the separation of two initially close particles after this time interval a Taylor series
δ𝐱(t_1) = Φ_t_0^t_1(𝐱_0 + δ𝐱(t_0)) - Φ_t_0^t_1(𝐱_0) = 𝐃Φ_t_0^t_1(𝐱_0, t_0) δ𝐱(t_0) + 𝒪(|δ𝐱(t_0)|^2)
around the initial position can be employed. Here, 𝐃Φ_t_0^t_1(𝐱_0, t_0) is the gradient (Jacobian) of the flow map with respect to the initial position and is also the normalised fundamental matrix solution of the equation of variations above (Eq. <ref>) <cit.>. Therefore, the magnitude of the particle separation at time t_1 can be written as
|δ𝐱(t_1)|=√(⟨δ𝐱(t_0),[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] δ𝐱(t_0)⟩).
The right Cauchy-Green deformation tensor is then defined as
𝐂_t_0^t_1(𝐱_0, t_0)=[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] .
In this way the Finite-Time Lyapunov Exponent σ_t_0^t_1 for the time interval t_0 to t_1 can now be defined on the basis of this tensor in a more thorough, mathematical way. Therefore, it is now defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log√(λ_max(𝐂_t_0^t_1(𝐱_0, t_0))) .
Here λ_max is the maximum eigenvalue of the right Cauchy-Green deformation tensor and can be calculated using standard solvers. In the picture of the small ellipsoid, the square root of the eigenvalue is just the above stretching rate S.
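For a structured grid of initial positions, this evaluation reduces to a few array operations. The sketch below (Python, a simplified illustration rather than the libcfd2lcs implementation) takes a precomputed flow map phi of shape (nx, ny, nz, 3), forms its gradient by finite differences, builds the right Cauchy-Green tensor and returns the FTLE field:

    import numpy as np

    def ftle_field(phi, dx, dy, dz, T):
        # gradient of every mapped coordinate with respect to the initial position
        grads = [np.gradient(phi[..., c], dx, dy, dz) for c in range(3)]
        F = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # DPhi, shape (..., 3, 3)
        C = np.einsum('...ki,...kj->...ij', F, F)                      # right Cauchy-Green tensor
        lam_max = np.linalg.eigvalsh(C)[..., -1]                       # maximum eigenvalue
        return np.log(np.sqrt(np.maximum(lam_max, 1e-30))) / abs(T)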
§ COMPUTATIONAL DETAILS
The computation of flow maps within libcfd2lcs is described thoroughly in <cit.>. The following section presents a brief overview of how the computation is done in practice and which different timescales play a role in the calculations. Hereafter, we describe the structure and functionality of the newly developed function object. We will focus on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
§.§ Numerical flow map computation in libcfd2lcs
libcfd2lcs is able to calculate both forward-time and backward-time FTLE fields. However, it uses two very different approaches for calculating the respective flow maps. The general approach used for the computation of the forward time flow-map Φ_t_0^t_0+T and the resulting forward-time FTLE field is very straightforward. A set of tracer particles is initialised on a grid with spacing Δ x_lcs by setting each initial tracer coordinate to the cell centre coordinate of a corresponding mesh cell. Then the flow map at each cell centre is computed by passively advecting these tracers with the flow, which mathematically corresponds to an integration of equation
d 𝐱/d t=𝐮(𝐱, t)
over the time interval T. Numerically this integration is done by utilising Runge-Kutta methods, with step size Δ t_lcs.
The time and space dependent velocity field 𝐮(𝐱, t) results from the specific fluid simulation under consideration and is passed to libcfd2lcs after each simulation time step Δ t_sim (see Section <ref>). In order to save the flow map Φ_t_0^t_0+T, the location of each particle after the integration is stored at its initial position.
As the evaluation of FTLE fields, indicating LCS candidates, is mainly relevant for time-dependent flows, it is often important to animate their evolution. At first glance, this would mean that a sequence of large particle sets would have to be integrated, requiring a great amount of computation. This problem is solved using a method developed by Brunton and Rowley <cit.>. With this method a flow map of the interval T can be constructed from a sequence of k flow maps over a smaller interval h, where T=kh. Following the notation of <cit.> this can be expressed as
Φ_t_0^t_0+T=Φ_t_0+(k-1)h^t_0+kh∘⋯∘Φ_t_0+h^t_0+2h∘Φ_t_0^t_0+h .
In practical terms, this means that the particle grid is reinitialised for every new time interval h after which they are advected again with the flow. Then the sub-step flow map is stored and the complete flow map is constructed when all needed sub-step flow maps are available. It is important to note that since a discrete particle grid is used for the sub-step flow map computation, interpolation of the sub-step flow maps is needed in order to match the trajectories at different timelevels when reconstructing the flow map Φ_t_0^t_0+T (see <cit.> for more details).
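In two dimensions, the composition step can be sketched as follows (Python); submaps[i] is assumed to hold the sub-step flow map from t_0+ih to t_0+(i+1)h, stored on the same rectilinear grid, and the interpolation that matches trajectories at the different time levels is done here with a simple linear grid interpolator:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def compose_flow_maps(submaps, x, y):
        X, Y = np.meshgrid(x, y, indexing='ij')
        pos = np.stack([X, Y], axis=-1)                  # current particle positions
        for sub in submaps:                              # sub has shape (nx, ny, 2)
            fx = RegularGridInterpolator((x, y), sub[..., 0],
                                         bounds_error=False, fill_value=None)
            fy = RegularGridInterpolator((x, y), sub[..., 1],
                                         bounds_error=False, fill_value=None)
            pos = np.stack([fx(pos), fy(pos)], axis=-1)  # follow trajectories one sub-step
        return pos                                       # approximates Phi_{t0}^{t0 + k h}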
A different approach is used for constructing the backward-time flow maps. This is due to the fact that the Lagrangian approach would require storing all computed velocity fields in the sub-step interval h before the integration of the tracers from t_0+h to t_0 could be done backward in time. Although this already includes Brunton's and Rowley's method for the flow map construction, the Lagrangian approach would be "cumbersome and resource intensive" <cit.>. Therefore, libcfd2lcs uses an Eulerian approach for the flow map computation proposed by Leung <cit.>. In contrast to the forward-time flow map, the backward-time flow map Φ_t_0+T^t_0 describes for each grid point where a particle, that is at that point at time t_0+T, originally was at time t_0. With Leung's Eulerian approach this backward-time flow map at time t_0+T is computed by initialising a vector field Ψ(𝐱, t_0) on a grid with the cell centre coordinates at time t_0. The advection of this so called "takeoff coordinate field" in an Eulerian reference frame is then described by the level set equation
∂Ψ(𝐱, t)/∂ t+(𝐮·∇) Ψ(𝐱, t)=0 .
Solving this equation over the time Interval [t_0, t_0+T] in forward time gives Ψ(𝐱, t_0+T), which represents the takeoff coordinates of a Lagrangian particle at t_0 reaching 𝐱 at time t_0+T. Thus, the backward-time flow map Φ_t_0+T^t_0 is equivalent to Ψ(𝐱, t_0+T). libcfd2lcs solves equation (<ref>) by using a semi-Lagrangian advection approach with the time step size
Δ t_lcs = c_cfl Δ x_lcs/𝐮(𝐱, t)
of this procedure being restricted by the CFL condition c_cfl < 1 (see <cit.> and <cit.> for more details).
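The essence of a single semi-Lagrangian update of the takeoff-coordinate field can be sketched in two dimensions as follows (Python; first-order departure points and linear interpolation, which is simpler than the actual libcfd2lcs scheme):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def semi_lagrangian_step(psi, u, v, x, y, dt):
        # psi: (nx, ny, 2) takeoff coordinates; u, v: (nx, ny) velocity components
        X, Y = np.meshgrid(x, y, indexing='ij')
        departure = np.stack([X - dt * u, Y - dt * v], axis=-1)   # upstream points
        psi_new = np.empty_like(psi)
        for c in range(2):
            interp = RegularGridInterpolator((x, y), psi[..., c],
                                             bounds_error=False, fill_value=None)
            psi_new[..., c] = interp(departure)                   # Psi(x, t+dt) = Psi(x - u dt, t)
        return psi_new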
Furthermore, Brunton's and Rowley's flow map construction method is also applied to the backward-time flow maps computed with the Eulerian method. Hence, the takeoff coordinate field is reinitialised after every sub-step time interval h and the backward-time flow map
Φ_t_0+T^t_0=Φ_t_0+h^t_0∘Φ_t_0+2h^t_0+h∘⋯∘Φ_t_0+kh^t_0+(k-1)h
is constructed form k sub-step backward-time flow maps.
Since a lot of different timescales are relevant in the practical FTLE field computation described above, we try to differentiate and order them in the following, before describing the structure and functionality of the newly developed function object in the next section. The basis of the on-the-fly LCS evaluation is a parallel running simulation that provides the velocity fields. Here, three intervals are of interest (see Fig. <ref>): the overall simulation time that spans from the simulation start time t_sim_start to the simulation end time t_sim_end, the time step size of the simulation Δ t_sim and the write time interval of the simulation results Δ t_sim_write. The computed velocity fields represent a fluid flow for which a reference timescale Δ t_ref can be identified. This reference timescale characterises the dominant hydrodynamic timescale of the flow and is typically larger than the simulation time step size. In order to save computing resources the LCS evaluation of the simulated flow does not necessarily have to start and end at the same time as the simulation. Therefore, a separate start and end time for the LCS evaluation denoted as t_lcs_start and t_lcs_end can be defined (see Fig. <ref>). During the LCS evaluation, a series of FTLE fields are computed. These FTLE fields are calculated from time T flow maps, which themselves are calculated as described earlier in this section. This means storing and constructing the time T flow maps from multiple sub-step flow maps after each LCS sub-step integration interval h. Calculating the sub-step flow maps in turn requires to numerically solve the equations (<ref>) or (<ref>) using the finite time step Δ t_lcs. While Δ t_lcs is set automatically according to equation (<ref>) and a specified CFL number, T and h have to be defined by the user. In order to detect all LCS candidates, T is usually chosen to be larger than Δ t_ref of the investigated flow <cit.>. With the aim of animating the evolution of the FTLE field, h is typically set significantly smaller than Δ t_ref while being in the order of magnitude of Δ t_sim_write.
§.§ Structure and functionality of the function object
In general, function objects can be used to generate additional data at runtime of the simulation. In doing so, function objects can access data generated by the flow solver at runtime, which offers a great advantage over classical post-processing, since the latter can only utilise the stored fields or logged information. The newly developed function object incorporates the functionalities of libcfd2lcs into OpenFOAM at runtime while acting as an interface between both. This is achieved by processing the data generated by OpenFOAM and the subsequent exchange of this data via the libcfd2lcs API (see <cit.> for a detailed description of the libcfd2lcs API).
The calculation of the flow maps, the calculation of the resulting FTLE fields and the subsequent saving of these fields are completely handled by libcfd2lcs. The basic task of the function object is to pass the cell centre position vectors of the computational grid as well as the velocity field calculated by OpenFOAM to libcfd2lcs. Due to the very strict data structure requirements of libcfd2lcs this is not a trivial task. libcfd2lcs can only use static rectilinear grids for the calculation of forward-time and backward-time flow maps and therefore needs the velocity fields on these grids. This means that the mesh and velocity data has to be globally organised in an (i, j, k) structured format <cit.>. Since the LCS evaluation should also be available for simulations on moving grids with general topology and adaptive grid refinement, the function object offers several possibilities to deal with this problem.
In the simplest case, where the simulation mesh is already a static rectilinear mesh, the function object does not need to process the grid and velocity data, but can directly transfer it to libcfd2lcs as basic C++ arrays. This is the preferred method when the flow domain can be represented by a static rectilinear mesh and e.g. immersed boundary methods are used. If a moving mesh, a mesh of general topology or adaptive mesh refinement is used for the simulation, a different approach is needed in order to prepare the data for its use in libcfd2lcs. Here, an additional static rectilinear mesh needs to be constructed in the preprocessing step, which can be done e.g. by using the utility. This mesh has to contain the region for which the LCS diagnostic should be performed, meaning that it can cover the whole simulation domain as well as only a part of it. However, since libcfd2lcs also requires boundary conditions for the FTLE field calculations, the boundary patches of the additional LCS mesh must be set accordingly. The user can choose between , , , and the generic patch types which the function object translates into the corresponding libcfd2lcs boundary types. Then, during runtime, the velocity fields are mapped from the simulation mesh of general topology to the static rectilinear LCS mesh, from which the data can again be transferred to libcfd2lcs as basic C++ arrays. Although this implies that interpolation errors are made during the mapping process, the LCS evaluation is hardly affected by this. Haller showed in <cit.> that LCS are very robust against errors in the velocity field. Also, the additional computational overhead due to the mapping can be neglected compared to the overhead caused by the flow map computations. The function object also implements a third approach in which no additional LCS mesh is needed. This approach utilises the ability to construct complex, moving mesh geometries out of simple unconnected mesh regions in OpenFOAM with the approach. Using this approach the function object can utilise any specified static rectilinear mesh region of the for the LCS evaluation, meaning that the background mesh as well as any other static rectilinear mesh region can be used. In doing so, the function object extracts the mesh and velocity data from the specified mesh region of the and passes it to libcfd2lcs analogously to the previous approaches. Here the type patches are generally passed on as inlet or outlet, as they are treated the same by libcfd2lcs.
As libcfd2lcs also uses the domain decomposition approach and MPI for the parallelisation of the computations, the integration within the parallelisation of OpenFOAM is done in a straightforward manner. The local subdomains of the rectlinear LCS mesh and its velocity data are passed to libcfd2lcs together with an offset, which describes the position of the cell data in the globally (i, j, k) structured data array (see Fig. <ref>). For the MPI communication, the same MPI communicator as used for OpenFOAM is shared with libcfd2lcs. Therefore, the function object can be used for simulations running in parallel or serial. However, if the approach involving an additional LCS mesh is used, special attention is required for the domain decomposition in the preprocessing step. Here the simulation mesh, as well as the LCS mesh, must be cut along the same surfaces to make sure that the mapping of the velocity fields from one mesh to the other works properly.
As already mentioned, the data output of the flow-map and FTLE field data is completely handled by libcfd2lcs. This is due to the fact that the data output interval defined by h can differ from the solver write interval Δ t_sim_write (see section <ref>). Therefore, the results generated by the function object are not stored in corresponding time directories but in a separate folder in the case directory called . Additionally, a directory named is created inside of which all the sub-step data is stored. All data is stored in the Tecplot ASCII data file format (*.dat) and therefore can be visualised in ParaView when opened with its internal Tecplot reader or other common visualisation programs. In addition to this data, the computational overhead generated by the use of the function object with respect to the actual simulation is also output in the solver log file after each simulation time step. This enables the user to examine the computational costs of the LCS evaluation.
§ EXAMPLES OF USAGE
In this section a few examples are presented which are designed to show the functionality and capabilities of the function object. Therefore, example cases are presented in which only a rectlinear simulation mesh, a separate simulation and LCS mesh and a single are used.
§.§ Steady ABC flow
The Arnold-Beltrami-Childress (ABC) flow is an exact periodic solution of the Euler equations and is often used in the literature to verify LCS calculation methods. Therefore this case is also being reviewed here. The velocity field
𝐮=∇×[-Ψ𝐤+∇×(Φ𝐤)]
of the ABC flow can be described using 2 scalar potentials Ψ and Φ <cit.> which themselves are defined as
Ψ=-[C sin (y)+B cos (x)]
Φ=A[-x cos (z)+y sin (z)]-Ψ .
In (<ref>), 𝐤 can be any unit vector but is commonly chosen to be the vertical unit vector. This leads to the three expressions of the velocity components
u=A sin (z)+C cos (y)
v=B sin (x)+A cos (z)
w=C sin (y)+B cos (x) .
The parameters A, B and C can be freely selected and influence the properties of the ABC flow. In order to create comparability with literature values, A=0.5, B=0.8, C=0.8 is chosen. In order to test the newly developed function object on this flow configuration a dedicated ABC flow OpenFOAM solver was written. This solver does not solve the Euler equations in the usual sense, but sets the velocity components on a given computational mesh according to (<ref>). Due to the periodicity of the flow solution, the dimensions of the computational mesh used in this case setup are specified as x,y,z ∈ [0,2π] with a mesh size of 100×100×100.
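As a brief illustration (not part of the original case setup), the ABC velocity field defined above can be evaluated on such a grid with a few lines of Python; the parameter values A=0.5, B=0.8, C=0.8 are the ones quoted above, and the grid construction mirrors the 100×100×100 mesh on [0,2π]^3.

import numpy as np

A, B, C = 0.5, 0.8, 0.8  # parameter choice quoted above for comparability

def abc_velocity(x, y, z):
    """Velocity components of the steady ABC flow."""
    u = A * np.sin(z) + C * np.cos(y)
    v = B * np.sin(x) + A * np.cos(z)
    w = C * np.sin(y) + B * np.cos(x)
    return u, v, w

# cell-centre grid of a 100 x 100 x 100 mesh on [0, 2*pi]^3
n = 100
centres = (np.arange(n) + 0.5) * (2 * np.pi / n)
X, Y, Z = np.meshgrid(centres, centres, centres, indexing="ij")
U, V, W = abc_velocity(X, Y, Z)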
Since the described mesh is rectlinear no additional LCS mesh is used. Again for reasons of comparability, a LCS integration time of T=10 s is selected for the LCS evaluation. The results of the LCS evaluation, both in forward- and backward-time, can be seen in Figure <ref>.
In these results the FTLE ridges, which indicate the LCS candidates in the ABC flow, can be seen very clearly. Furthermore, the results agree very well with the results from <cit.>, both qualitatively and quantitatively, which suggests that the new function object calculates the FTLE ridges reliably.
§.§ Time dependent double gyre
Another frequently used flow for the verification of LCS computing algorithms is the time-periodic Rayleigh-Bénard convection flow, often called the double gyre, proposed by Solomon and Gollub <cit.>. The velocity field of this flow can be described using a stream function ψ
u=-∂ψ/∂ y
v=∂ψ/∂ x .
Here ψ is defined by
ψ(x, y, t)=A sin (π f(x, t)) sin (π y)
with
f(x, t)=a(t) x^2+b(t) x
a(t)=ϵsin (ω t)
b(t)=1-2 ϵsin (ω t)
This leads to the expressions for two-dimensional velocity components
u=-π A sin (π f(x)) cos (π y)
v=π A cos (π f(x)) sin (π y) d f/ d x .
As the name double gyre suggests, this model defines the flow of two two-dimensional gyres enclosed in a rectangle which expand and contract periodically along the x-axis. Therefore, the periodic motion is controlled by ϵ if ϵ≠ 0. Then ϵ describes approximately how far the line separating the gyres moves to the left or right from its centre position <cit.>. Otherwise (ϵ=0), no periodic motion is happening. Furthermore, A specifies the magnitude of the velocity vectors and ω/2π determines the oscillation frequency of the gyres.
Similar to the ABC flow example, a dedicated OpenFOAM solver was written for this case, which sets the velocity field on a given computational mesh according to (<ref>). For comparability, a mesh with the same specifications as in <cit.>,<cit.> and <cit.> was used. It has the dimensions [0,2]×[0,1]×[0,0.1]m and a resolution of 512×256×1 cells. As this mesh is also static and rectlinear no additional LCS mesh was used. For the mathematical model of the flow the parameter values are chosen to be ϵ=0.1, A=0.1 m s^-1 and ω=2π/10 s. Since the oscillation frequency is known, the hydrodynamic time scale can be easily determined by t_ref=2π/ω=10 s. As described in section <ref>, the LCS integration time interval T should be set larger than t_ref. Therefore, it is set to T=1.5· t_ref= 15 s. Figure <ref> shows the forward- and backward-time FTLE fields at t= 15 s of the previously described double gyre flow. Again, the results match very well with the results from <cit.>,<cit.> and <cit.>. This confirms that the function object is able to calculate the correct FTLE fields from velocity fields generated by OpenFOAM.
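To make the setup reproducible outside OpenFOAM, the following Python sketch advects a tracer grid through the double gyre defined above and evaluates a forward-time FTLE field from the flow-map gradient; the RK4 integrator, the tracer resolution and the use of the standard Cauchy-Green-based FTLE definition are assumptions of this sketch and not a description of the libcfd2lcs implementation.

import numpy as np

eps, A, omega = 0.1, 0.1, 2 * np.pi / 10.0   # parameter values quoted above

def velocity(x, y, t):
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, T, nsteps=300):
    """RK4 integration of tracer positions from t0 to t0 + T."""
    dt = T / nsteps
    t = t0
    for _ in range(nsteps):
        k1u, k1v = velocity(x, y, t)
        k2u, k2v = velocity(x + 0.5 * dt * k1u, y + 0.5 * dt * k1v, t + 0.5 * dt)
        k3u, k3v = velocity(x + 0.5 * dt * k2u, y + 0.5 * dt * k2v, t + 0.5 * dt)
        k4u, k4v = velocity(x + dt * k3u, y + dt * k3v, t + dt)
        x = x + dt / 6.0 * (k1u + 2 * k2u + 2 * k3u + k4u)
        y = y + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return x, y

# tracer grid on [0,2] x [0,1], coarser than the 512 x 256 mesh to keep it cheap
nx, ny = 256, 128
x0, y0 = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny), indexing="ij")
T = 15.0
xT, yT = advect(x0, y0, t0=0.0, T=T)

# FTLE from the flow-map gradient (Cauchy-Green tensor), central differences
dxT_dx, dxT_dy = np.gradient(xT, x0[:, 0], y0[0, :])
dyT_dx, dyT_dy = np.gradient(yT, x0[:, 0], y0[0, :])
C11 = dxT_dx**2 + dyT_dx**2
C12 = dxT_dx * dxT_dy + dyT_dx * dyT_dy
C22 = dxT_dy**2 + dyT_dy**2
lam_max = 0.5 * (C11 + C22 + np.sqrt((C11 - C22)**2 + 4 * C12**2))
ftle = np.log(np.maximum(lam_max, 1e-12)) / (2.0 * abs(T))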
§.§ Flow around cylinder
As it has already been shown in the previous examples that the function object can calculate the correct FTLE fields from velocity fields provided by OpenFOAM, this example will focus on how to deal with non-rectlinear simulation meshes. For this purpose, a standard flow problem is selected that is very well suited for an LCS evaluation: the flow around an infinitely long cylinder.
The general case setup contains a fluid domain with size [-20,30]×[-20,20]×[-0.5,0.5]m that surrounds a cylinder with diameter D=2m and its centre axis at x=y=0m. The free-stream velocity and the fluids kinematic viscosity are set to 𝐮^ T=(1 0 0)m s^-1 and ν = 0.01m^2s^-1, respectively. This results in a Reynolds number of Re=200 which indicates that vortex shedding behind the cylinder occurs in a barely laminar regime. If we also assume a Strouhal number of St=0.2 at Re=200, the hydrodynamic time scale of this flow is t_ref=D/(u·St)=10s.
Because of the cylinder in the middle of the domain, a computational mesh discretising this domain is no longer rectlinear. Therefore, we consider two different procedures in the LCS evaluation, the first of which is carried out in two different ways.
Starting with the procedure where an additional rectlinear computational mesh is used for the LCS evaluation, the flow domain is discretised with a simulation mesh consisting of 9200 hexahedra (see upper left mesh in Fig. <ref>).
The flow solver that is used to simulate the previously described flow from t=0s to t=120s is with the initial conditions being calculated by . The first additional LCS mesh that is used within this procedure encloses the whole flow domain (see upper right mesh in Fig. <ref>). In order to minimise the loss of information during the mapping of the velocity fields between the two grids, the resolution of the LCS mesh is chosen in a way that it corresponds approximately to the finest resolution in the simulation mesh. This leads to a LCS mesh with 200×160×1 hexahedra. The boundary patch types are set to for the left and right patch (inlet,outlet), to for the bottom and top patch and to for the front and back patch. The LCS integration time T is again based on t_ref and is set to T=1.5· t_ref=15s. For a good animation of the dynamics of the FTLE fields h is chosen to be h=T/10=1.5s. The results of the forward- and backward-time FTLE fields can be seen in Fig. <ref>. They show how the vortices behind the cylinder form large coherent structures, where the FTLE ridges of the backward-time FTLE fields separate different fluid packages that do not mix in the vortex street.
Since the FTLE ridges only appear in a fraction of the overall domain and the LCS evaluation is computation-wise a quite costly operation, a second LCS mesh is prepared. This second LCS mesh is a lot smaller than the first one and encloses only the fraction of the flow domain where the FTLE ridges are expected to show up (see Fig. <ref>). The boundary patches on the smaller LCS mesh and its spacial resolution are also set analogous to its bigger counterpart, leading to a LCS mesh of size [-13,27]×[-7.5,7.5]×[-0.5,0.5] containing 160×60×1 hexahedra. Repeating the computations with the use of the smaller LCS mesh gives the results which are displayed in Fig. <ref> and are found to match with the results from the bigger LCS mesh. This shows that the LCS evaluation, when done with a separate LCS mesh, can be used in a very targeted way. The advantages this brings in terms of computational costs are discussed after considering the second procedure for the LCS evaluation of this flow problem.
The second procedure, which can be used on problems where no single static rectlinear mesh can be constructed, utilises OpenFOAM's functionalities. With regard to the flow problem considered here, an is constructed with the same dimensions as the simulation mesh used previously. It consists of three mesh zones, namely a rectlinear background mesh zone that spans the whole fluid domain, another finer and smaller mesh zone that is used for a finer resolution of the flow and a cylindrical mesh that surrounds the cylinder (see Fig. <ref>). For comparability reasons the finer rectlinear mesh zone has the same dimensions and resolution as the smaller additional LCS mesh considered previously and is therefore specified as the cell zone for the LCS evaluation. Also all other LCS evaluation settings are adopted. The only difference to the previously considered simulations is the used flow solver. Here the flow solver is due to the used . The resulting forward- and backward-time FTLE fields of this simulation can be found in Fig. <ref>. They match with the results from the previously considered procedure which shows that both approaches can be used equally well. The only thing that stands out are the high FTLE values along some boundaries in the studied solutions. These occur because of the way libcfd2lcs handles its inlet and outlet boundary conditions. It fixes out-flowing Lagrangian particles/takeoff coordinates on "open" boundaries and cannot generate new in-flowing particles during the flow map computation. Therefore, high FTLE values occur in the forward-time FTLE fields at "open" boundaries where inflow occurs, since there the most "stretching" happens. Vice versa, high FTLE values occur in the backward-time FTLE fields at "open" boundaries where outflow occurs, since there the most "folding" happens. These high values at "open" boundaries are just artefacts and have to be neglected. The reason they appear more in the approach is that all type patches are passed to libcfd2lcs as "open" boundaries whereas the user can specify all patches problem dependent in the additional LCS mesh approach.
Looking at the computation times of the flow calculations including the LCS evaluation, it becomes evident that the LCS evaluation is a very costly operation (see Tab. <ref>). When using the "large" additional LCS mesh the simulation takes approximately 30 times longer than without the LCS evaluation. This can be improved by using the smaller additional LCS mesh. Here the simulation takes 9 times longer than without the LCS evaluation. Since the costs for the LCS evaluations are almost independent of the underlying simulation for a constant grid size, this factor becomes smaller and smaller for more complex simulations. This can also be seen from the fact that the factor is only 2.5 when the approach is used because the computations of the pressure and velocity fields take longer on an . At this point, however, it must be emphasised that the flow considered here is not a highly complex problem, which can also be seen from the simulation time of 1.5 min on a normal mesh and 8 min on an .
§ SUMMARY & CONCLUSION
We provide an OpenFOAM function object based on libcfd2lcs to compute Finite-Time Lyapunov Exponent (FTLE) fields that indicate candidates of Lagrangian Coherent Structures (LCS) and allow the visualisation of finite-time stretching and folding fields. LCS reveal the robust skeleton of material surfaces and are key to quantitatively assessing material transport in time-dependent flows. This enables the OpenFOAM community to assess the geometry of material transport in any flow quantitatively and on-the-fly, using principally any OpenFOAM flow solver.
Focusing on the practical aspects, we only give a brief overview of the mathematical foundation as well as how the computation is done in practice. We describe the structure and functionality of the newly developed function object. Further focus is laid on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
From the validation of the presented function object using simple benchmark problems, a notable computational overhead has been recognised. However, if LCS evaluations are used for problems much more complex than the ones used here, the relative computational overhead drops significantly and the LCS evaluation no longer accounts for the largest proportion of the computation time. Nevertheless, the user should be aware that the calculation of FTLE fields is expensive and should therefore think carefully about the size and position of the LCS mesh. In addition, consideration should also be given to whether both forward- and backward-time FTLE calculations are required or if one of them is sufficient.
IEEEtran
|
http://arxiv.org/abs/2307.03881v1 | 20230708024615 | On Delay Performance in Mega Satellite Networks with Inter-Satellite Links | [
"Kosta Dakic",
"Chiu Chun Chan",
"Bassel Al Homssi",
"Kandeepan Sithamparanathan",
"Akram Al-Hourani"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
On Delay Performance in Mega Satellite Networks with Inter-Satellite Links
Kosta Dakic, Chiu Chun Chan, Bassel Al Homssi, Kandeepan Sithamparanathan, and Akram Al-Hourani
=============================================================================================================================================================================================================
Utilizing Low Earth Orbit (LEO) satellite networks equipped with Inter-Satellite Links (ISL) is envisioned to provide lower delay compared to traditional optical networks. However, LEO satellites have constrained energy resources as they rely on solar energy in their operations, thus requiring special consideration when designing network topologies that provide not only low-delay link paths but also low power consumption. In this paper, we study different satellite constellation types and network topologies and propose a novel power-efficient topology. As such, we compare three common satellite architectures, namely: (i) the theoretical random constellation, and the widely deployed (ii) Walker-Delta and (iii) Walker-Star constellations. The comparison is performed based on both the power efficiency and the end-to-end delay. The results show that the proposed algorithm outperforms long-haul ISL paths in terms of energy efficiency with only a slight hit to delay performance relative to the conventional ISL topology.
Low Earth orbit constellations, inter-satellite links, mega satellite constellations, dense satellite constellations, delay.
§ INTRODUCTION
The deployment of Low Earth Orbit (LEO) constellation networks is proceeding at an accelerating pace, aiming to provide global connectivity. Many satellite projects, such as OneWeb, SpaceX's Starlink, and Amazon's Kuiper <cit.>, are currently deploying thousands of satellites into LEO to provide seamless Internet coverage around the globe. These constellations will complement existing terrestrial communications, including both 5G and future 6G networks.
One of the key enablers of such constellations is the envisioned inter-satellite connectivity. A connection between two satellites is referred to as an Inter-Satellite Link (ISL) and is widely anticipated (and has been demonstrated) in the upcoming LEO constellations. ISLs relay data directly between satellites, unlike current methods which depend on a large network of ground stations <cit.>. ISLs could liberate constellations from the burden of establishing costly, and sometimes infeasible, ground station networks. An additional advantage is that the data carried by ISLs travel in free space and thus at the exact speed of light, as opposed to conventional optical fiber networks. The average propagation speed in a typical single-mode optical fiber cable network is around 65-70% of the speed of light <cit.>. Hence, ISLs can potentially enable new low-delay applications such as remotely controlled industrial operations, cloud-controlled autonomous vehicles and farming, and telesurgery, in addition to enabling faster financial transactions. However, despite these advantages, satellite networks with ISLs face their own set of challenges, such as power constraints due to reliance on solar energy and the need to maintain reliable inter-satellite connections in a dynamic orbital environment. Consequently, the development of power-efficient and low-delay satellite constellation topologies that effectively leverage ISLs remains an active area of research and innovation.
Nevertheless, using satellites rather than optical fiber submarine cable possesses the potential to decrease the propagation delay by a few tens of milliseconds depending on the distance. Apart from delay reduction, the LEO satellite constellation can provide an access network for remote and rural communities, and to locations with extreme terrain such as mountains <cit.>. This is also particularly important for industries that require real-time monitoring and control, such as manufacturing and logistics. Recent studies have also shown that LEO satellites with ISLs offer better resilience to cyber-attacks, making them a more secure option for data transfer in Industry 4.0 applications <cit.>. Furthermore, due to their low altitude, LEO satellites offer lower latency and higher bandwidth capacity compared to traditional geostationary satellites, enabling faster and more efficient communication services for users in remote areas <cit.>. Recent studies have demonstrated the potential benefits of LEO satellite networks for improving connectivity in developing countries and bridging the digital divide <cit.>.
Simulations presented in <cit.> and in <cit.> illustrate the comparative delay advantages of ISL as a data relay technology versus optical cable. Additionally, authors in <cit.>, delve deeper into the idea of using optical satellite links rather than terrestrial fiber by developing a crossover function to optimize delay. The crossover function is a mathematical formula that determines the optimal point at which to switch from using a terrestrial fiber link to an optical satellite link. Authors in <cit.> revealed the limitations of traditional network design approaches in the context of ISL-enable satellite communications and suggested the use of repetitive 3-satellite link patterns to address the temporal dynamics, achieving a higher efficiency than previous state-of-the-art methods. Nevertheless, more investigation is needed to better evaluate the benefits of optical satellite links for data relaying as well as to develop satellite-aware ISL topologies rather than just applying the shortest path ISLs. For example, a power-efficient ISL topology would take into consideration the power limitations of satellites due to their reliance on intermittent solar energy.
In this paper, we further analyze the prospect of using LEO satellite ISLs to relay data. The analysis is made through the simulation of different LEO satellite constellations (shown in Fig. <ref>), for which two network topologies are proposed: (i) the nearest hop topology and (ii) the cutoff distance topology. We concentrate on the delay between the transmitting and receiving devices because ISLs are expected to facilitate lower delays; however, the performance of ISL-enabled networks needs to be carefully studied and assessed. An illustration of data communications with ISLs is shown in <ref>. The contributions of this work are summarized as follows:
* We compare the performance of three common satellite constellations, namely the theoretical random constellation, the widely deployed Walker-Delta, and the Walker-Star constellations in terms of end-to-end delay.
* We propose and evaluate two network topologies for LEO satellite constellations with ISLs: Nearest hop topology and Cutoff distance topology, focusing on their impact on delay performance compared to a theoretical great-circle optical fiber connection.
* We show that our proposed topologies achieve competitive delay performance relative to the conventional ISL topology, highlighting their potential for use in future LEO satellite networks.
§ SYSTEM MODEL
§.§ Geomtric model
In this section, we introduce the three constellation models used for benchmarking the performance.
§.§.§ Random
The random constellation places satellites on random circular orbits. This distribution is used in the simulations, as outlined in <cit.>, with the assumption that satellite collisions are disregarded. The left-hand side of Fig. <ref> provides an illustration of the random constellation along with the common Walker constellations.
§.§.§ Walker-Star Constellation
Satellite providers such as OneWeb and Iridium use the Walker-Star constellation. The orbits in such a constellation follow a near-polar configuration with an inclination angle close to 90^∘, which ensures global coverage, including the poles. However, this inherently results in an increased density of satellites at higher latitudes. The right ascension of the ascending node (RAAN) of the orbital planes in a Walker-Star configuration is spread across 0 to π, unlike the Walker-Delta constellation which uses 0 to 2π. The middle of Fig. <ref> depicts a typical Walker-Star constellation with 200 satellites along with their orbital planes.
§.§.§ Walker-Delta Constellation
The Walker-Delta constellation, employed in satellite networks such as Kuiper and Starlink <cit.>, reduces inter-satellite distance variations. These networks are currently considering inter-satellite links (ISLs) to support end-to-end communication without the need for a large network of terrestrial gateways. In the Walker-Delta configuration, the orbital planes are equally spaced and rotated around the Earth's axis of rotation, with a RAAN of Ω∈{0, 2π/P, 2·2π/P, …, (P-1)2π/P}. The right-hand side of Fig. <ref> displays the constellation and orbital planes for the Walker-Delta constellation.
§.§ Footprint model
In order to avoid terrestrial interference and heavy signal fading, the user terminal only connects to satellites that have an elevation angle larger than a given threshold θ_min. One reasonable threshold is 25^∘, as indicated in FCC filings by Starlink <cit.>. Thus, according to <cit.>, the effective footprint of a satellite is bounded by the minimum permissible elevation angle. In a practical sense, the footprint might be even smaller than this bound depending on the antenna beamwidth ψ. For the purpose of this study, the footprint projection is assumed to be an ideal spherical cap bounded by an earth-centered zenith angle, denoted as φ (see Fig. <ref> for details). In order to calculate the beamwidth, the maximum slant distance between the satellite and the ground device needs to be calculated with the cosine rule as follows,
a = R_e cos(π/2+θ_min)+√(R^2-(R_e sin(π/2+θ_min))^2) ,
where R_e is Earth's average radius, R = R_e + h, and h is the satellite altitude above the Earth's mean sea level. Thus, the maximum effective beamwidth can be again calculated with the cosine rule as follows <cit.>,
ψ = 2 acos((R^2+a^2-R_e^2)/(2Ra)) .
Then the earth-centered zenith angle is calculated using the law of sines as follows <cit.>,
φ = asin(1/α sin(ψ/2)) - ψ/2 ,
where α = R_e/R. Finally, the area of the spherical cap (footprint) of the beam is calculated as follows,
A_fp = 2π R_e^2(1-cosφ) ,
The perimeter of a spherical cap can then be drawn on the earth's surface to define each satellite footprint, where if a device is located within the footprint, it is able to connect to the satellite. For defining the perimeter of the footprint, the latitude and longitude of the footprint boundary need to be calculated with the heading formulae <cit.> as follows,
ϕ_fp = asin(sinϕ_sat cosφ + cosϕ_sat sinφ cosθ) ,
and the longitude,
ρ_fp = ρ_sat + atan2(sinθ sinφ cosϕ_sat , cosφ - sinϕ_sat sinϕ_fp) ,
where θ is an array of 360 bearing angles spanning 0 to 2π, and ϕ_sat and ρ_sat are the latitude and longitude of the satellite in radians. An illustration of the geometry of a LEO satellite is shown in Fig. <ref>.
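A compact numerical sketch of the footprint geometry above is given below; the 550 km altitude, the 25° minimum elevation angle and the sub-satellite point are illustrative assumptions, and the expressions follow the equations of this section.

import numpy as np

R_e = 6371e3                      # mean Earth radius [m]
h = 550e3                         # assumed satellite altitude [m]
R = R_e + h
theta_min = np.radians(25.0)      # minimum permissible elevation angle
alpha = R_e / R

# maximum slant distance (cosine rule)
a = R_e * np.cos(np.pi / 2 + theta_min) + np.sqrt(
    R**2 - (R_e * np.sin(np.pi / 2 + theta_min))**2)

# maximum effective beamwidth and earth-centred zenith angle
psi = 2 * np.arccos((R**2 + a**2 - R_e**2) / (2 * R * a))
phi = np.arcsin(np.sin(psi / 2) / alpha) - psi / 2

# footprint area (spherical cap)
A_fp = 2 * np.pi * R_e**2 * (1 - np.cos(phi))

# footprint boundary via the heading formulae, for an assumed sub-satellite point
lat_sat, lon_sat = np.radians(10.0), np.radians(20.0)
theta = np.linspace(0, 2 * np.pi, 360)
lat_fp = np.arcsin(np.sin(lat_sat) * np.cos(phi)
                   + np.cos(lat_sat) * np.sin(phi) * np.cos(theta))
lon_fp = lon_sat + np.arctan2(np.sin(theta) * np.sin(phi) * np.cos(lat_sat),
                              np.cos(phi) - np.sin(lat_sat) * np.sin(lat_fp))

print(f"slant range {a/1e3:.0f} km, beamwidth {np.degrees(psi):.1f} deg, "
      f"footprint {A_fp/1e6:.2e} km^2")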
§.§ Communication Delay
In order to benchmark the satellite network, a direct point-to-point fiber link is assumed between the communicating ground terminals. The link thus follows the great-circle which is the shortest path on a spherical surface. The delay is calculated as follows,
τ_gc = d_gc/(c/n)
where c is the speed of light, n is the refractive index of the optical fiber cable, and d_gc is the great circle distance between the communicating ground terminals. When using the satellite network, the total delay is determined by the sum of all hop distances plus the approximated processing delay, and is formulated as follows,
τ_sat = (d_sat+d_tx→ sat+d_sat → rx) /c + N_hopsτ_process ,
where d_sat is the sum of all the ISL distances from the first satellite point to the last satellite point, d_tx → sat is the distance from the transmitter to the first satellite, and d_sat → rx is the distance from the last satellite in the path to the receiver. The processing delay is denoted as τ_process.
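The delay model above can be sketched as follows; the terminal coordinates, the fibre refractive index of 1.468, the hop distances and the hop count are illustrative assumptions rather than values taken from the simulations.

import numpy as np

c = 299_792_458.0   # speed of light in vacuum [m/s]
n_fibre = 1.468     # assumed refractive index of single-mode fibre
R_e = 6371e3

def great_circle(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on the Earth's surface [m]."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dsigma = np.arccos(np.sin(lat1) * np.sin(lat2)
                       + np.cos(lat1) * np.cos(lat2) * np.cos(lon2 - lon1))
    return R_e * dsigma

# example ground terminals (approximately New York -> London)
d_gc = great_circle(40.71, -74.01, 51.51, -0.13)
tau_fibre = d_gc / (c / n_fibre)

# satellite path: uplink + ISL hops + downlink (assumed values)
d_up, d_down = 700e3, 700e3              # assumed access-link slant ranges [m]
d_isl_hops = [2000e3, 2000e3, 1500e3]    # assumed ISL hop lengths [m]
tau_process = 3000 / 533e6               # per-hop processing delay (3000 instr. at 533 MHz)
n_hops = len(d_isl_hops) + 1             # satellites traversed along the path

tau_sat = (d_up + sum(d_isl_hops) + d_down) / c + n_hops * tau_process
print(f"fibre {tau_fibre*1e3:.1f} ms, satellite {tau_sat*1e3:.1f} ms, "
      f"improvement {tau_fibre/tau_sat - 1:.2f}")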
§ TOPOLOGIES
In this paper, we evaluate the end-to-end delay performance of different ISL-enabled constellations with two different network topologies. The topologies being utilized are (i) cutoff distance-based topology and (ii) nearest hop-based topology. For both topologies, after the links are constructed, Dijkstra's algorithm is used to calculate the lowest delay path. For Dijkstra's algorithm, the distance of each link is used for the link weight because it is directly proportional to the propagation delay.
§.§ Nearest Hop-based Topology
In a practical LEO satellite constellation, the number of available optical ISL ports (links) is limited due to various reasons including cost, energy consumption, and satellite size. Therefore, ISL links to neighboring satellites need to be chosen according to given criteria. One way to form such links is to connect to the neighboring satellites (next hops) that minimize the transmission energy. One method to establish the lowest-energy next hops is to evaluate every satellite within a given vicinity and then keep only the neighbors for which a direct link would be more energy efficient than relaying through any other satellite. In geometric terms, this topology can be realized using the following steps:
* Draw a virtual sphere whose diameter is the segment connecting the current satellite and the candidate satellite, so that the two satellites reside on opposite ends of the diameter.
* Create a link from the test node to the candidate only if there is no other candidate node in the sphere.
* Repeat for every candidate satellite in the vicinity
Referring to the illustration in Fig. <ref>, the figure shows an ISL path from node A to node D. If we assume the transmission power is variable, the link A → C is not created because d_AC^2 ≥ d_AB^2 + d_BC^2, i.e., relaying via B is more efficient than the direct link. The inverse-square relationship between distance and transmit power due to the free space path-loss (FSPL) exponent means that less transmit power is required if the signal travels A → B → C rather than directly from A → C. The FSPL is calculated as follows,
l = [4π d/λ]^2 ,
The total energy consumption of the system is then formulated as follows,
E = α∑_i=0^K d_i^2 + (E_processing× K) ,
where K is the number of hops and α is the transmit power.
After all the ISLs are made, Dijkstra's algorithm <cit.> is then used to calculate the shortest path between the transmitter and receiver using the satellite network. A figure showing the ISLs between each satellite using the nearest hop algorithm is shown in Fig. <ref>. Additionally, the pseudocode for constructing the nearest hop topology is shown in <ref>.
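A minimal sketch of this construction is given below (with randomly placed satellites on a single assumed shell and α = 1); a link between two satellites is kept only when no third satellite lies inside the sphere having their connecting segment as diameter, i.e. only when d_ik^2 + d_kj^2 > d_ij^2 for every other candidate k, and Dijkstra's algorithm is then run with the link distances as weights.

import heapq
import numpy as np

rng = np.random.default_rng(1)
N = 60
R_orbit = 6371e3 + 550e3                  # assumed shell radius [m]
pos = rng.normal(size=(N, 3))
pos *= R_orbit / np.linalg.norm(pos, axis=1, keepdims=True)   # points on the shell

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # pairwise distances

def nearest_hop_links(d):
    """Keep link i-j only if no k lies in the sphere with diameter i-j."""
    links, n = {}, d.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            blocked = any(k not in (i, j) and d[i, k]**2 + d[k, j]**2 <= d[i, j]**2
                          for k in range(n))
            if not blocked:
                links.setdefault(i, []).append(j)
    return links

def dijkstra(links, d, src, dst):
    """Shortest (distance-weighted) path over the ISL graph."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        du, u = heapq.heappop(heap)
        if u == dst:
            break
        if du > dist.get(u, np.inf):
            continue
        for v in links.get(u, []):
            alt = du + d[u, v]
            if alt < dist.get(v, np.inf):
                dist[v], prev[v] = alt, u
                heapq.heappush(heap, (alt, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

links = nearest_hop_links(d)
path, length = dijkstra(links, d, src=0, dst=N - 1)
energy = sum(d[a, b]**2 for a, b in zip(path, path[1:]))  # alpha = 1, E_processing omitted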
§.§ Cutoff Distance -based Topology
This topology assumes that a satellite can link with any other satellite if it is closer than a given distance threshold. The practical sense of such a topology is that ISL links would have a maximum viable distance, either limited by the link budget or by the occlusion caused by the Earth's curvature. For a generic case, we take the assumption that the links are limited by the Earth's curvature, which imposes the upper-bound distance of feasible ISL links. Accordingly, the connection between a neighbor pair is removed when the weight exceeds the maximum horizon range, denoted as d_max. To be more accurate, the practical visibility constraint of an ISL is limited by the troposphere, which contains 99% of the atmospheric water vapor and aerosols <cit.>. The troposphere has an average height of around 18 km above the Earth's surface, so we use this as a threshold <cit.>. The maximum ISL link distance is then calculated as follows,
d_max= 2√((R_⊕+h_s)^2-(R_⊕+h_t)^2) ,
where h_s is the height of the satellite and h_t is the average height of the troposphere. Dijkstra's algorithm <cit.> is then utilized to find the shortest path in the topology. Note that the cutoff topology provides a lower bound for the ISL delay, as d_max is the maximum distance at which ISLs could be formed. An illustration of how the links for the cutoff topology are formed is shown in Fig. <ref>.
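The corresponding sketch for the cutoff construction is shorter; the 550 km altitude is an assumed value, and the resulting link set can be fed to the same Dijkstra routine as in the previous sketch.

import numpy as np

R_e = 6371e3          # mean Earth radius [m]
h_s = 550e3           # assumed satellite altitude [m]
h_t = 18e3            # average troposphere height [m]

d_max = 2 * np.sqrt((R_e + h_s)**2 - (R_e + h_t)**2)

# Given a pairwise distance matrix d (e.g. from the previous sketch),
# keep every link shorter than d_max.
def cutoff_links(d, d_max):
    n = d.shape[0]
    return {i: [j for j in range(n) if j != i and d[i, j] <= d_max]
            for i in range(n)}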
§ RESULTS
In this section, we present the performance results compared to terrestrial optical fiber connections between two arbitrary locations on Earth. Additionally, we explore three different LEO satellite constellations: (i) random, (ii) Walker-Star, and (iii) Walker-Delta. Note that we explore the performance of the random constellation as a baseline because it has been shown to be analytically tractable, from coverage to handover problems <cit.>.
For calculating the processing delay, we assume a processing clock operating at 533 MHz, chosen to match the Zynq UltraScale+ RFSoC <cit.> as an example of a real-time single-chip radio platform. We also assume 3000 instructions to decide the ISL for the next satellite; this low number of instructions is possible because the orbits are deterministic, thus all the topologies can be calculated a priori and the satellites would only need to use a look-up table to determine the next link. The processing delay can then be calculated as D_p = Number of Instructions / CLK. As the processing delay within satellites is likely to be fixed and similar across satellites, we assumed the same processing delay for all satellites. Therefore, the processing delay is multiplied by the number of satellite hops K.
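In code, this per-hop processing delay is simply the following (the hop count K used here is an arbitrary example):

clk = 533e6                        # RFSoC processing clock [Hz]
n_instructions = 3000              # instructions to select the next ISL (look-up based)
D_p = n_instructions / clk         # about 5.6 microseconds per hop
K = 8                              # assumed number of satellite hops on a path
total_processing_delay = K * D_p   # added on top of the propagation delay [s]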
§.§ Distance vs. Delay
When comparing the delay of LEO satellite networks for data communication against the great-circle optical fiber path, we normalize the delay to that of the great-circle optical fiber path, i.e., Improvement = D_Optical fiber/D_Satellite - 1, where D is the delay. The improvement relative to the great-circle optical fiber path is then plotted against the great-circle distance.
The great circle distance is plotted against the delay in Fig. <ref>, where we normalize the delay to the delay achieved by using an optical fiber path. The plot shows a trend that, as the number of orbiting satellites increases, the performance also improves. The improvement is due to the greater coverage of satellites around the globe, so the probability that a more efficient path exists also increases. Additionally, in the same plot, the performance of the nearest hop topology is compared when processing is included in the calculation and when it is assumed negligible. The plot shows that the performance decreases very slightly, due to the high clock rate of the hardware considered in this work <cit.>. The performance of the cutoff topology with and without processing was also considered; however, due to the smaller hop count relative to the nearest hop topology, the effect is negligible and thus is not shown on the plot. Note that the random constellation is used as a baseline example.
Fig. <ref> shows the delay performance against distance for the LEO Walker constellations, which is also normalized to the great-circle optical fiber path. From the plot, it can be seen that, as with the random constellation, the performance in terms of delay improves as the number of satellites increases. Also, the Walker-Star constellation seems to outperform the Walker-Delta constellation when the nearest hop topology is used. Furthermore, the performance contrast between the Walker-Delta and Walker-Star is greatest when the nearest hop algorithm is used and also increases as the distance between the transmitting device and the receiving device grows. The performance disparity between the Walker constellations is much smaller when the cutoff topology is used. The Walker-Star constellation outperforms the Walker-Delta constellation in terms of delay because the Walker-Star has better coverage over the globe compared to the Walker-Delta, and random locations for the ground-based transmitter and receiver are used.
§.§ Alternate Paths
To determine the robustness of each LEO constellation to link failures, we investigate how the link delay increases as we introduce alternate paths. We construct the best possible path and calculate the delay, then remove the links that correspond to the best path and form a new path. The delay of the new path is then calculated. The process is repeated until we have 10 distinct paths from the transmitting device to the receiving device. The results are shown in Fig. <ref> where the delay performance is shown for Walker-Delta as well as Walker-Star satellite constellations. In addition, the delay for using terrestrial optical fiber cable over the great circle distance between the cities is shown. An illustration showing the different types of links is shown in Fig. <ref>. The Walker-Delta constellation using the nearest hop topology has the greatest deviation between cities, particularly for the path between New York and London. Moreover, the deviation from using the cutoff algorithm is very low, thereby showing greater robustness to failed links between satellites. Finally, the performance of satellites for ISL paths compared to traditional fiber has a greater pay-off when the distance between the two locations increases such as in the delay between Perth and Brest.
§ CONCLUSION
In this study, we analyzed ISL-enabled LEO satellite constellations as a low-delay alternative to traditional terrestrial optical fiber networks. We investigated three LEO constellations: Walker-Delta, Walker-Star, and random. Additionally, we explored two topologies, presenting the power-efficient nearest hop topology and comparing it to the delay-minimizing cutoff topology. Our results demonstrated that satellite networks improve delay performance compared to optical fiber connections as the transmitter-receiver distance increases. The proposed nearest hop topology maintains a better delay performance compared to the great-circle fiber path while also utilizing more energy-efficient ISLs. Future work will involve using machine learning to develop a topology for dynamically changing ISLs due to high traffic loads and link failures.
The Walker-Star constellation had a lower delay, but the Walker-Delta constellation was more power-efficient in terms of average ISL length with the nearest hop algorithm.
§ ACKNOWLEDGMENT
The authors would like to acknowledge the discussions with Dr. Ben Allen and Mr. Ben Moores, also the partial funding by SmartSat CRC under the UK-Australia spacebridge program.
IEEEtran
|
http://arxiv.org/abs/2307.04089v1 | 20230709035156 | Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters | [
"Huan-Yu Liu",
"Zhao-Yun Chen",
"Tai-Ping Sun",
"Cheng Xue",
"Yu-Chun Wu",
"Guo-Ping Guo"
] | quant-ph | [
"quant-ph"
] |
[email protected]
0000-0002-6158-9627
0000-0002-5181-160X
[email protected]
0009-0009-2591-1672
0000-0003-2207-9998
[email protected]
0000-0002-8997-3030
0000-0002-2179-9507
Applying low-depth quantum neural networks (QNNs), variational quantum algorithms (VQAs) are both promising and challenging in the noisy intermediate-scale quantum (NISQ) era: despite their remarkable progress, criticism of their efficiency and feasibility has never stopped.
However, whether VQAs can demonstrate quantum advantages remains undetermined, and this is what we investigate in this paper.
First, we will prove that there exists a dependency between the parameter number and the gradient-evaluation cost when training QNNs. Noticing there is no such direct dependency when training classical neural networks with the backpropagation algorithm, we argue that such a dependency limits the scalability of VQAs.
Second, we estimate the time for running VQAs in ideal cases, i.e., without considering realistic limitations like noise and reachability. We will show that the ideal time cost easily reaches the order of a 1-year wall time.
Third, by comparing with the time cost using classical simulation of quantum circuits, we will show that VQAs can only outperform the classical simulation case when the time cost reaches the scaling of 10^0-10^2 years.
Finally, based on the above results, we argue that it would be difficult for VQAs to outperform classical cases in view of time scaling, and therefore, demonstrate quantum advantages, with the current workflow.
Since VQAs as well as quantum computing are developing rapidly, this work does not aim to deny the potential of VQAs. The analysis in this paper provides directions for optimizing VQAs, and in the long run, seeking more natural hybrid quantum-classical algorithms would be meaningful.
Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters
Guo-Ping Guo
August 12, 2023
======================================================================================
§ INTRODUCTION
Machine learning (ML) <cit.> is one of the most remarkable technology in the 21st century, which has applications ranging from daily works to scientific research <cit.>. Developments of ML rely on the success of computer science and the neural network (NN) model <cit.>, which provided the capability of carrying out huge computational tasks and simulating complex functions. Quantum computing <cit.> is also developed rapidly in decades, whose features, like quantum entanglement and quantum operation parallelism, are unavailable for their classical counterparts. Quantum computing has been introduced to the ML region, known as quantum machine learning (QML) <cit.>.
Variational quantum algorithms (VQAs) <cit.> are representative of QML, and their workflow is shown in Fig. <ref>. A VQA is a hybrid quantum-classical algorithm. A quantum processor prepares an ansatz with a quantum neural network (QNN) <cit.> U(θ) (also called a parameterized quantum circuit in some works; to keep the terminology consistent with classical machine learning, we use QNN here) as | ψ (θ) ⟩= U( θ )|0⟩, with θ={θ_1,θ_2,⋯,θ_L } the (trainable) parameter vector. The ansatz is then used to evaluate a cost function with quantum measurements, which is usually an expectation value under some Hamiltonian H: C(θ) =⟨ψ(θ)|H|ψ(θ)⟩. The classical processor optimizes θ to minimize the cost function. QNNs in VQAs are usually low-depth and can be performed on current noisy intermediate-scale quantum (NISQ) <cit.> devices even without the support of fault-tolerant quantum computation technology <cit.>.
This gives VQAs the potential to achieve quantum advantages in the NISQ era. Since their proposal, VQAs have developed rapidly and have applications ranging from quantum chemistry simulation <cit.> to numerical computation <cit.>. Experimental demonstrations have also been performed <cit.>.
As research progresses, the challenges of VQAs have gradually attracted attention; they can be divided into efficiency challenges and feasibility challenges. Efficiency challenges usually mean that executing VQAs requires huge resources. The well-known barren plateaus <cit.> describe a phenomenon of exponentially vanishing gradients, indicating that the number of samples required to obtain the cost function also grows exponentially with the number of qubits. Feasibility challenges, on the other hand, form the major part. They concern whether the correct answer can be acquired by running VQAs. Training VQAs is an NP-hard problem <cit.>; besides the barren plateaus problem mentioned above, there usually exists a variety of local minimum points in the optimization landscape of VQAs <cit.>, implying that it is difficult to reach the global optimal point. The expressibility of QNNs <cit.> also affects the reachability issue <cit.>, since global optimal points will never be reachable if they cannot be represented by the QNN. Noise <cit.> and other factors also affect the correctness of executing VQAs. Great efforts have been devoted to dealing with such challenges, including mitigating barren plateaus to improve trainability <cit.>, reducing sampling times to improve efficiency <cit.>, mitigating noise <cit.>, etc.
We focus on the efficiency challenges in this work. First, we will prove that there exists a dependency between the number of parameters in QNNs and the gradient-evaluation cost when training the QNN. Noticing that such a dependency does not exist when training classical NN models with the backpropagation algorithm, we argue that the parameter number affects the scalability of VQAs. Next, we consider the time cost for executing VQAs in an ideal setting, i.e., we do not consider realistic limitations on VQAs like noise, qubit connectivity, reachability, etc. The time cost analysis is used as follows:
* The time cost easily reaches the order of a 1-year wall time at about 20 qubits.
* By comparing with the time cost using classical simulation, we can see that VQAs can only outperform classical simulations when the time cost reaches a scaling of 10^0-10^2 years. Therefore, quantum advantages are difficult for VQAs to achieve based on the current workflow.
In performing such analysis, we would not deny the potential of VQAs, as well as other hybrid quantum-classical algorithms in the NISQ era, but some changes and improvements need to be made. According to our analysis, some directions for optimizing VQAs are provided. Taking one step further, we need to consider what is the natural way of executing machine learning with quantum computing.
The rest of this paper is organized as follows:
In Sec. <ref>, we introduced some backgrounds needed for the latter analysis, including training NNs with the backpropagation algorithm and QNNs.
In Sec. <ref>, the dependency of the parameter number and the gradient-evaluation cost in training QNNs is provided.
In Sec. <ref>, we analyze the time cost of running VQAs.
Sec. <ref> gives the total time cost of running VQAs.
In Sec. <ref>, we compare the time cost using both VQAs and classical simulation.
A conclusion is given in Sec. <ref>.
§ PRELIMINARY
§.§ Training classical neural networks using the backpropagation algorithm
The NN model is widely applied in solving ML tasks. General NNs are comprised of neurons, whose diagram is shown in Fig. <ref>. A neuron can be viewed as a non-linear function that maps n inputs x={x_1,x_2,⋯,x_n} to an output y as:
y = f( ∑_i w_ix_i-b ),
where b is a bias, w={ w_1,w_2,⋯,w_n} is the adjustable weight vector, f is the non-linear activation function and one example is the sigmod function:
f(x)=1/1+e^-x.
Different functions can be approximated by adjusting the weight vector, and the core idea of ML is to make such functions approach desired maps. “Learning” is exactly the process of adjusting the weights.
Only one neuron has limited learning capability. To further increase the expressive power, i.e., be able to fit more functions, neurons can be used to construct a NN, which is shown in Fig. <ref>. In the NN, the input is fed into several neurons, whose outputs are then viewed as inputs to neurons in the next layer. Denote y={y_1,y_2⋯,y_m} as the output of the whole NN, or equivalently, the output of neurons corresponding to the final layer. Denote the desired value as d={d_1,d_2⋯,d_m} and the vector of weights for all neurons as W. As introduced, the learning process is to adjust W such that y is close to d.
To achieve this, one can define a cost function as:
C ≡ C(W) := 1/2∑_i=1^m (y_i-d_i)^2.
C=0 implies we have finished the learning process. To find the minimum value of the cost function, one can start from some specific set of parameters and then optimize the weight vector according to optimization algorithms like gradient descent:
W←W - η·∇ C,
where η > 0 is the learning rate, the gradient is ∇ C={∂ C/∂ w_j |w_j∈W}. Every element in the gradient can be obtained via methods like the finite difference method:
∂ C/∂ w_j=lim_δ→ 0C(w_jδ+)-C(w_jδ-)/2δ,
where w_jδ±={ w_1,⋯,w_j±δ,⋯}.
Denote the total number of weights as M (the parameter numbers in an NN and a QNN may not be the same, so we apply different notations, M and L). If we apply Eq. (<ref>) to evaluate the gradient for every weight, we will need to execute the NN O(M) times, and each execution of the NN queries all M weights, so the query complexity for directly evaluating the gradient scales as O(M^2). However, executing a large NN costs huge resources, so reducing the cost of evaluating gradients is valuable. We introduce the backpropagation algorithm below, which achieves this goal.
Take Fig. <ref> as an example and consider the weight w_2, which is representative of weights corresponding to neurons in the final layer. The gradient element for this weight is:
∂ C/∂ w_2 = ∂ C/∂ y_1∂ y_1/∂ w_2.
According to Eq. (<ref>), ∂ C/∂ y_1 = y_1-d_1. And ∂ y_i/∂ w_2 is the operation within one neuron, which can be easily acquired according to Eq. (<ref>).
Next, we consider evaluating the gradient concerning w_1, which is representative of weights in the middle layer.
∂ C/∂ w_1 = ∂ C/∂ y_m1∂ y_m1/∂ w_1
= ( ∑_i ∂ C/∂ y_i∂ y_i/∂ y_m1) ∂ y_m1/∂ w_1.
According to Eq. (<ref>), ∂ C/∂ y_i is already known if all the gradients of weights corresponding to neurons in the final layers are obtained, which can be reused, and other partial derivatives are all within one neuron.
Moving back, ∂ C/∂ w_0 can be analyzed similarly.
Therefore, when training classical NN models, one can first execute the NN and record the output (y) for every neuron. When evaluating gradients, weights of neurons corresponding to the final layer can be first evaluated, whose information can be reused when evaluating gradients for neurons corresponding to former layers.
Gradient evaluation with this backward propagation of information is called the backpropagation algorithm, whose query complexity is O(M), which establishes a reduction compared to the direct finite difference method. Using this method, we do not need to execute the NN once for every weight, and this makes training scalable even for NNs with huge sizes.
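The following numpy sketch illustrates this point on a small two-layer sigmoid network (biases omitted for brevity); the forward activations recorded once are reused by the backward pass, so the full gradient costs O(M) work, whereas the finite-difference check at the end would need one extra network execution per weight.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# network: x (4) -> hidden (8) -> output (3); M = all entries of W1 and W2
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
x, d = rng.normal(size=4), rng.normal(size=3)

# forward pass: record every neuron output (needed later for backpropagation)
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)
C = 0.5 * np.sum((y - d) ** 2)

# backward pass: start from the output layer and reuse dC/dy going backwards
delta2 = (y - d) * y * (1 - y)          # dC/d(pre-activation) of the output layer
grad_W2 = np.outer(delta2, h)           # dC/dW2
delta1 = (W2.T @ delta2) * h * (1 - h)  # propagated back to the hidden layer
grad_W1 = np.outer(delta1, x)           # dC/dW1

# finite-difference check of one weight (the O(M)-executions-per-gradient approach)
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
Cp = 0.5 * np.sum((sigmoid(W2 @ sigmoid(W1p @ x)) - d) ** 2)
print(grad_W1[0, 0], (Cp - C) / eps)    # the two estimates agree up to O(eps)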
§.§ Quantum Neural Networks
To make the later analysis convenient, we introduce the unitary coupled-cluster singles and doubles ansatz <cit.> and the hardware-efficient ansatz (HEA) <cit.> in this section.
§.§.§ Unitary coupled-cluster singles and doubles ansatz
In quantum chemistry simulations, the unitary coupled-cluster (UCC) ansatz is widely applied. It is derived from the coupled-cluster theory <cit.>, which applies symmetry-conserved excitation operators on some initial states, usually the Hartree-Fock (HF) state, to expand wavefunctions in the target subspace.
Denote the number of spin-orbitals and electrons of a given system as n_o and n_e, and order the n_o spin-orbitals from 1 to n_o such that their corresponding energies are in non-decreasing order. Then the HF state |ψ_HF⟩ = | 1,1,⋯,1,0,0,⋯,0⟩ with exactly n_e 1s and n_o-n_e 0s is the state with the lowest energy when ignoring interaction energies, and it usually serves as the ground-state approximation.
When considering the interaction energies, the ground state should be |ψ⟩ = ∑_ |ψ_i⟩∈ S a_i |ψ_i⟩, where a_i are coefficients and all states in the set S satisfy the condition that the Hamming weight, i.e., the number of 1s, is exactly n_e. Starting from |ψ_HF⟩, symmetry-conserving operations can be applied to expand the wavefunction in the target subspace spanned by S. This can be realized with the fermionic creation (annihilation) operators a_j^† (a_j). For instance, the operator a_i^†a_α can excite one electron from the α-th spin-orbital to the i-th one and will result in 0 (not the vacuum state) if the α-th orbital has no electron or the i-th already has one electron. Therefore, we can define it as a single-excitation operator. The double-excitation operator a_i^†a_j^†a_α a_β can be defined similarly.
Since considering all excitations will cost huge resources, we usually consider the single- and double-excitations, and the UCC ansatz with only the single- and double-excitation is called the UCCSD ansatz:
|ψ_UCCSD(θ)⟩ = U_UCCSD(θ) |ψ_HF⟩,
where the QNN has the form:
U_UCCSD(θ) = e^T-T^†,
where T=T_1+T_2 are linear combinations of excitation operators, which are expressed as:
T_1 = ∑_α={1,2,⋯,n_e},
i={n_e+1,⋯,n_o}θ_iα a_i^† a_α,
T_2 = ∑_α,β={1,2,⋯,n_e},
i,j={n_e+1,⋯,n_o},
α<β,i<jθ_ijαβ a_i^† a_j^† a_α a_β,
where θ={θ_iα,θ_ijαβ} is the parameter vector. Therefore:
T-T^† = ∑_α={1,2,⋯,n_e},
i={n_e+1,⋯,n_o}θ_iα (a_i^† a_α - a_α^† a_i)
+ ∑_α,β={1,2,⋯,n_e},
i,j={n_e+1,⋯,n_o},
α<β,i<jθ_ijαβ (a_i^† a_j^† a_α a_β-a_β^† a_α^† a_j a_i).
To further implement the ansatz on quantum processors, fermionic-to-qubit mappings are required. We apply the Jordan-Wigner (JW) transformation <cit.>.
a_j^† = 1/2[∏_k<jZ_k] (X_j-iY_j),
a_j = 1/2[∏_k<j Z_k](X_j+iY_j).
After this, the HF state is mapped to |1⟩^⊗ n_e⊗ |0⟩^⊗ n_o-n_e, implying that under JW transformation, the number of qubits required is the same as the number of spin-orbitals: n=n_o. And the excitation operator becomes a linear combination of tensor products of Pauli operators (Pauli strings). Finally, the operation T-T^† will be a linear combination of Pauli strings. With some orders of Trotter expansion, we have:
U_UCCSD(θ) = ∏_l e^-iθ'_lP_l,
where θ' can be obtained from θ.
Each term e^-iθ P can be implemented on the quantum processor with the circuit shown in Fig. <ref>.
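Since any Pauli string P satisfies P^2 = I, the exponential reduces to e^-iθ P = cos(θ) I - i sin(θ) P, which is what the circuit of the figure realizes; a plain numpy sketch (no quantum SDK assumed) is:

import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_string(label):
    """Kronecker product of single-qubit Paulis, e.g. 'XYZ'."""
    return reduce(np.kron, (PAULI[c] for c in label))

def exp_pauli(theta, label):
    P = pauli_string(label)
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

theta = 0.37
U = exp_pauli(theta, "ZXY")
assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))   # unitarity check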
§.§.§ Hardware-efficient ansatz
HEA is a problem-agnostic ansatz, which directly applies easy-implementable quantum gates of the quantum processor. We assume the HEA to be comprised of P blocks, each of which consists of single-qubit rotation and two-qubit entangling operations:
U_HEA(θ) = ∏_p=1^P U_entangle U_single(θ_p),
where:
U_entangle = CNOT_n,1∏_i=1^n-1CNOT_i,i+1,
U_single(θ_p) = ∏_i=1^n R_Z(θ_p^i1) R_X(θ_p^i2) R_Z(θ_p^i3),
where subscripts in CNOT gates represent the control and target qubit, respectively. The quantum circuit for the HEA described here is shown in Fig. <ref>.
It has been pointed out that HEA has remarkable expressibility <cit.>. Combined with the fact that HEA is hardware-friendly, this has made it the most commonly applied QNN model.
§ GRADIENTS IN VARIATIONAL QUANTUM ALGORITHMS
Training the parameters in QNNs is the main step in executing VQAs, and it is NP-hard <cit.>. On the one hand, cost functions in VQAs are obtained via repeated measurements, and achieving a sampling error ϵ requires O(1/ϵ^2) samples. Thus about 10^6 samples are required to reach the widely applied chemical accuracy of 1.6× 10^-3 Hartree (1 Hartree = 2625.5 kJ/mol). On the other hand, problems like barren plateaus can cause exponentially increased sampling times. Together with noise and other factors, this makes evaluating cost functions in VQAs difficult.
Note that in the training process, measuring the cost function is mainly used to evaluate gradients. If we apply Eq. (<ref>) for gradient evaluation, the cost function needs to be evaluated O(L) times. In Sec. <ref>, we introduced the backpropagation algorithm, which can be used to reduce the number of executions required for classical NNs. Therefore, it is natural to ask whether this type of method can be applied to reduce the gradient-evaluation cost when training QNNs.
First of all, the backpropagation algorithm cannot be implemented directly because a QNN is a parameterized unitary transformation that maps an initial state to the ansatz without recording the inter-layer states, which, however, are required when performing backpropagation. As introduced in <cit.>, backpropagation scaling for training QNNs is only possible when we have multiple copies of the ansatz.
Next, we consider whether there is some dependency between the gradient elements. If that were the case, after evaluating some gradient elements, we could apply this relation to directly compute the remaining gradient elements without running the QNN. However, we will show below that this is also not possible.
For a general ansatz U(θ) with L independent parameters and a cost function defined as the expectation value under some Hamiltonian H, we need at least O(L) evaluations of the cost function to obtain the gradient.
The proof of this Theorem is provided below. According to this theorem, the costs for evaluating gradients in training QNNs depend on the number of parameters. This dependency heavily limits the scalability of VQAs.
In ML tasks, it is common to improve performance by increasing the number of parameters. Since the number of NN executions required for a gradient does not grow with the number of parameters, such a performance-improving strategy works. However, the scalability limitation makes increasing the number of parameters a poor choice in VQAs. Since the parameter number naturally grows with the problem size or complexity, applying VQAs would be challenging.
Suppose the PQC has the form:
U(θ) = ∏_l=1^L U_l(θ_l) W_l = ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l,
where θ = {θ_1,θ_2,⋯,θ_L } is a vector of independent parameters. P_l is a Hermitian operator and W_l is the un-parameterized gate. Denote the initial state as ρ_0, then the cost function is:
C(θ) = Tr [ U(θ) ρ_0 U^† (θ) H ].
Expand Eq. (<ref>) according to Eq. (<ref>), we have:
C(θ) = Tr[ ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l ρ_0 ∏_l=L^1 W_l^† (cosθ_l I + i sinθ_l P_l) H ].
Observe there are 4 terms for every θ_l. We view cosθ_l and sinθ_l as coefficients. Then the function for each term in the cost function is:
cosθ_l cosθ_l, f(I,I);
cosθ_l sinθ_l, f(I,iP_l);
sinθ_l cosθ_l, f(-iP_l,I);
sinθ_l sinθ_l, f(-iP_l,iP_l).
Note that such four cases can be described by two bits p_lq_l and we define the above four cases mean p_lq_l=00,01,10,11, respectively. Then the cost function is expressed as:
C = ∑_ pq = { p_lq_l|p_lq_l=00,01,10,11 }_l=1^L a_pq f_pq,
where:
a_pq = ∏_l a_p_lq_l, a_p_lq_l = cos^2θ_l, p_lq_l=00,
sinθ_lcosθ_l, p_lq_l = 01,10,
sin^2θ_l, p_lq_l=11.
Denote:
g^l_pq = ∂ a_pq/∂θ_l.
Then the gradient is:
∂ C/∂θ_l = ∑_pq g_pq^l f_pq.
We assume {f_pq} are unknown. Computing ∂ C/∂θ_l through {f_pq} would require evaluating almost 4^L terms, which is impractical.
If we could obtain the full gradient by evaluating the QNN fewer than O(L) times, then after evaluating some gradient elements we could obtain the others. Since the functions {f_pq} are unknown, the remaining elements would have to be linear combinations of the known gradient elements. If such a case exists, consider the simplest situation where we have obtained L-1 gradient elements; the remaining gradient element can then be expressed as:
∂ C/∂θ_l = ∑_k≠ l m_k ∂ C/∂θ_k.
This means that the vectors {g^k_pq}_k=1^L are linearly dependent. Then there exists a set of numbers {m_i}_i=1^L, not all 0, such that:
∑_l=1^L m_l ∂ C/∂θ_l = 0.
This means:
∑_l=1^L m_l g^l_pq = 0, ∀ pq = {p_lq_l}.
We consider the following 2^L elements with indices:
pq = {00,11}^L.
We re-label them as w_l=p_lq_l. Then the above equation becomes:
∑_l=1^L m_l g^l_w=0, ∀ w={w_l}={0,1}^L.
Define w'={w_l}_l=2^L. Considering every pair of indices 0,w' and 1,w', we have:
∑_l=1^L m_l g^l_0,w'=0,
∑_l=1^L m_l g^l_1,w'=0.
Adding the two equations together:
∑_l=1^L m_l ( g^l_0,w' +g^l_1,w') =0.
Observe:
g^l_0,w' + g^l_1,w' = ∂ a_0,w'/∂θ_l + ∂ a_1,w'/∂θ_l = ∂/∂θ_l (a_0,w'+a_1,w').
Since:
a_0,w'+a_1,w'= cos^2θ_1 a_w' + sin^2θ_1 a_w'=a_w',
which does not depend on θ_1, we have:
g^1_0,w' +g^1_1,w' = 0.
Then Eq. (<ref>) will become:
∑_l=2^L m_l ( g^l_0,w' +g^l_1,w') = ∑_l=2^L m_l ∂ a_w'/∂θ_l = 0.
This is exactly the (L-1)-parameter case.
Repeating this process, we eventually obtain:
m_L ∂ a_w_L/∂θ_L = 0, w_L=0,1.
Since a_w_L=0=cos^2θ_L, we have ∂ a_w_L=0/∂θ_L = -sin (2θ_L). Then m_L=0 except when θ_L=0. Moving backwards, we obtain m_L-1=0, and finally m_l=0 for all l. This contradicts the assumption that the vectors are linearly dependent, which completes the proof.
§ TIME COSTS FOR EXECUTING VARIATIONAL QUANTUM ALGORITHMS
In this part, we estimate the time cost of executing VQAs, especially when using the UCCSD ansatz and HEA introduced in Sec. <ref>. Since a VQA is executed by repeatedly measuring cost functions and updating parameters, the total time of running a VQA is:
t_VQA = t_cost× N_cost,
where t_cost is the time needed to obtain a cost function and N_cost is the number of cost-function evaluations needed to finish the algorithm.
On the one hand, cost functions in VQAs are obtained via repeated sampling of the ansatz, so t_cost = t_sample× N_sample, where t_sample and N_sample are the time needed to sample the ansatz once and the number of samples needed to obtain a cost function, respectively. On the other hand, N_cost depends on the optimization algorithm applied. When using gradient-based algorithms, we have:
N_cost = N_gradient× N_iterate, where N_gradient and N_iterate are the number of cost-function evaluations needed to obtain one gradient and the number of iterations, respectively. Below we analyze these four factors; a sketch diagram of the analysis is shown in Fig. <ref>.
N_gradient As described in Theorem <ref>, we can view N_gradient simply as the number of parameters in the ansatz. In the UCCSD ansatz, the number of parameters is exactly the sum of the numbers of single- and double-excitation terms:
L_UCCSD = C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2,
where
C_n^m = n!/m!(n-m)!.
In HEA, parameters only appear in the single-qubit rotation operations. In each of the P blocks, we apply three single-qubit gates on every qubit, then we have:
L_HEA = 3nP.
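As a quick sanity check on these counts, the short Python sketch below (our illustration; the function and variable names are ours) evaluates L_UCCSD and L_HEA for an example system.

```python
from math import comb

def n_params_uccsd(n_o, n_e):
    # L_UCCSD = C(n_e,1)*C(n_o-n_e,1) + C(n_e,2)*C(n_o-n_e,2)
    return comb(n_e, 1) * comb(n_o - n_e, 1) + comb(n_e, 2) * comb(n_o - n_e, 2)

def n_params_hea(n, P):
    # L_HEA = 3 n P (three single-qubit rotations per qubit in each of the P blocks)
    return 3 * n * P

n_o, n_e = 20, 10                      # example: 20 spin-orbitals at half filling
print(n_params_uccsd(n_o, n_e))        # 10*10 + 45*45 = 2125 parameters
print(n_params_hea(n=n_o, P=n_o))      # 3*20*20 = 1200 parameters
```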
t_sample Generally, sampling a quantum circuit includes three parts: initializing the quantum hardware, running the circuit, and measuring the outcome. Then:
t_sample = t_initial + t_gate + t_read.
On current superconducting hardware, t_initial and t_read together reach the order of 1 μ s <cit.>. The times of applying a single-qubit and a two-qubit gate are t_single=30 ns and t_double=60 ns <cit.>, respectively.
[The detailed times differ between systems but are of the same order. We use averaged, empirically observed values.]
Then:
t_gate = l_single× t_single + l_double× t_double,
where l_single and l_double are the single- and two-qubit gate layer depths, respectively; two gates in the same layer can be applied at the same time. Since the time of initializing the hardware and measuring the outcome is comparable to that of applying about 10^2 quantum gates, we ignore this cost and only take the circuit running time as t_sample. The following theorems provide the value of t_gate for the UCCSD ansatz and HEA.
For a many-body system with n_o spin-orbitals and n_e electrons, the gate layer depth for the UCCSD ansatz under the first-order Trotter expansion is:
l_single = 6 C_n_e^1 C_n_o-n_e^1 +24 C_n_e^2 C_n_o-n_e^2 ,
l_double = 2 n_oC_n_e^1 C_n_o-n_e^1 + 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2.
As introduced in Sec. <ref>, implementing the UCCSD ansatz on the quantum hardware requires transforming the ansatz into the form of Eq. (<ref>). According to Fig. <ref>, for a k-local Pauli operator, which means that the operator acts non-trivially on k qubits, the single-qubit and two-qubit depth of implementing e^-iθ P is 3 and 2k-2, respectively. Therefore, to determine the gate layer depth with the first-order Trotter expansion, we just need to determine the number of operators e^-iθ P in Eq. (<ref>) and the locality for each operator P.
Consider the single-excitation term, for every pair of i>α, the single-excitation term a_i^† a_α - a_α^† a_i is mapped with the JW transformation as:
a_i^†a_α - a_α^†a_i = [ ∏_k<i Z_k ] (X_i - i Y_i) [ ∏_k<α Z_k ] (X_α + i Y_α)
- [ ∏_k<i Z_k ] (X_i + i Y_i) [ ∏_k<α Z_k ] (X_α - i Y_α)
= Z_α (X_α + i Y_α) [ ∏_α<k<i Z_k ] (X_i-i Y_i)
- Z_α (X_α - i Y_α) [ ∏_α<k<i Z_k ] (X_i+i Y_i)
= 2 i X_α[ ∏_α<k<i Z_k ] Y_i - 2 i Y_α[ ∏_α<k<i Z_k ] X_i.
After mapping, a_i^† a_α - a_α^† a_i is mapped to a sum of 2 Pauli strings, each of which is (i-α+1)-local. Similar to Eq. (<ref>), for every group of i>j>α>β, the double-excitation term a_i^† a_j^† a_α a_β-a_β^† a_α^† a_j a_i is mapped to a sum of 8 Pauli strings, each of which is (i-β+1)-local.
Now we determine the circuit depth. Every e^-iθ P contributes 3 layers of single-qubit depth, and according to Eq. (<ref>), the numbers of single-excitation and double-excitation terms are C_n_e^1 C_n_o-n_e^1 and C_n_e^2 C_n_o-n_e^2, respectively. Then:
l_single = C_n_e^1 C_n_o-n_e^1 × 2 × 3 + C_n_e^2 C_n_o-n_e^2 × 8 × 3
=6 C_n_e^1 C_n_o-n_e^1 + 24 C_n_e^2 C_n_o-n_e^2.
The case of the two-qubit depth is more complex. For every pair i,α, the single-excitation term yields 2 Pauli strings, each with two-qubit circuit depth 2(i-α+1)-2=2(i-α). Therefore, the two-qubit gate layer depth due to the single-excitation terms is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_α=1^n_e 4(i-α ) ) = ∑_i=n_e+1^n_e+(n_o-n_e)( 4in_e - n_e(n_e+1)/2× 4)
= 4n_e (n_e+1+n_o) (n_o-n_e) /2 -2 n_e(n_e+1)(n_o-n_e)
=2 n_on_e (n_o-n_e)
=2 n_o C_n_e^1 C_n_o-n_e^1.
For every group of i,j,α,β, the double-excitation operator results in 8 Pauli strings, each of which is (i-β+1)-local, and different choices of j,α do not affect the locality. Then the two-qubit gate depth caused by the double-excitation term is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_β=1^n_e (i-β)(n_e-β)(i-n_e-1) ) × 8 = 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2 .
Adding Eq. (<ref>) and (<ref>), we obtain the overall two-qubit layer depth, which completes the proof.
For the HEA described above with P blocks, we have:
l_single = 3P,
l_double = nP.
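Putting the two depth results together with the gate times quoted above (t_single = 30 ns, t_double = 60 ns), the circuit time t_gate can be scripted directly; the following sketch is our illustration and simply evaluates the closed-form expressions.

```python
from math import comb

T_SINGLE, T_DOUBLE = 30e-9, 60e-9       # seconds, values quoted in the text

def gate_time_uccsd(n_o, n_e):
    s1 = comb(n_e, 1) * comb(n_o - n_e, 1)      # number of single-excitation terms
    s2 = comb(n_e, 2) * comb(n_o - n_e, 2)      # number of double-excitation terms
    l_single = 6 * s1 + 24 * s2
    l_double = 2 * n_o * s1 + (8 / 3) * (2 * n_o + 1) * s2
    return l_single * T_SINGLE + l_double * T_DOUBLE

def gate_time_hea(n, P):
    l_single, l_double = 3 * P, n * P
    return l_single * T_SINGLE + l_double * T_DOUBLE

print(gate_time_uccsd(n_o=20, n_e=10))  # single-shot circuit time for one UCCSD sample
print(gate_time_hea(n=20, P=20))        # single-shot circuit time for one HEA sample
```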
N_sample Cost functions in VQAs are obtained via repeated sampling, and reaching a sampling error ϵ requires sampling the circuit O(1/ϵ^2) times. Hence N_sample is determined by the required sampling accuracy.
Generally, the sampling error should be within the accuracy required for solving the problem. However, to perform parameter optimization, sampling accuracy should also be related to the scaling of the gradient. Suppose we are applying the parameter-shift rule <cit.> to evaluate the gradient as:
∂_jC = 1/2( C_+ -C_- ),
with C_± = C(θ_j±π/2) and ∂_jC = ∂ C/∂θ_j.
Denote the sampling error as ϵ and the sampled (estimated) gradient as ∂̂_jC. The worst case is (suppose ϵ > 0):
∂̂_jC = 1/2( [C_+ - ϵ] - [C_-+ϵ] )
= ∂_jC - ϵ.
To update parameters in the correct direction, we need:
∂̂_jC/∂_jC = (∂_jC-ϵ)/∂_jC > 0.
Then sampling accuracy is dependent on the scaling of the gradient.
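The following toy sketch (our addition, with artificial numbers) mimics this worst case: it adds shot noise of size ϵ ≈ 1/√(N_sample) to the two shifted cost values and checks whether the estimated gradient keeps the sign of the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_gradient(c_plus, c_minus, n_sample):
    # Parameter-shift estimate with shot noise of order 1/sqrt(N_sample) on each term
    eps = 1.0 / np.sqrt(n_sample)
    noisy_plus = c_plus + rng.normal(scale=eps)
    noisy_minus = c_minus + rng.normal(scale=eps)
    return 0.5 * (noisy_plus - noisy_minus)

c_plus, c_minus = 0.503, 0.500              # artificial shifted costs; true gradient = 0.0015
true_grad = 0.5 * (c_plus - c_minus)
for n_sample in (10**4, 10**6, 10**8):
    est = estimated_gradient(c_plus, c_minus, n_sample)
    print(n_sample, est, est * true_grad > 0)   # sign is reliable only once eps << |gradient|
```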
When the magnitude of the gradient is suppressed by barren plateaus, an exponential number of samples would be required, which is not workable in practice. We therefore analyze the time cost for a set of given sampling numbers. In real tasks, methods can be applied to reduce the number of samples, mitigate the barren-plateau phenomenon, and reduce measurement costs.
N_iterate Generally, N_iterate is not known in advance and differs between problems. Even for the same problem, different initial parameters and different choices of optimization algorithm lead to different N_iterate. In gradient-descent algorithms, both the learning rate and the scaling of the gradient affect the number of iterations. Moreover, when the scaling of the gradient is reduced by barren plateaus or local minima, optimization takes more steps. Therefore, we treat N_iterate similarly to N_sample and provide the time cost for a set of given N_iterate. We combine these two factors as:
N_si = N_sample× N_iterate,
t_VQA
Now we provide the value of t_VQA for both UCCSD ansatz and HEA. In general,
t_VQA = t_sample× N_sample× N_gradient× N_iterate
= N_si× ( t_single× l_single + t_double× l_double ) × L
= 3× 10^-8× N_si× (l_single+2l_double )× L.
Based on the former analysis, when considering the above ansatzes, we have:
t_VQA-UCCSD = 10^-8× N_si×( C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2 )
×[ (12n_o+18) C_n_e^1 C_n_o-n_e^1 + (16n_o+88)C_n_e^2 C_n_o-n_e^2 ],
and
t_VQA-HEA = 9× 10^-8× N_si× (2n^2+3n)P^2 .
We can see that, for a fixed N_si, the total time exhibits polynomial growth.
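Under the stated assumptions, these closed-form expressions can be evaluated directly; the sketch below (ours) computes t_VQA for both ansatzes at n_e = n_o/2 and a fixed N_si, illustrating the polynomial yet rapidly growing wall time.

```python
from math import comb

def t_vqa_uccsd(n_o, n_e, n_si):
    s1 = comb(n_e, 1) * comb(n_o - n_e, 1)      # single-excitation terms
    s2 = comb(n_e, 2) * comb(n_o - n_e, 2)      # double-excitation terms
    return 1e-8 * n_si * (s1 + s2) * ((12 * n_o + 18) * s1 + (16 * n_o + 88) * s2)

def t_vqa_hea(n, P, n_si):
    return 9e-8 * n_si * (2 * n ** 2 + 3 * n) * P ** 2

YEAR = 3600 * 24 * 365
N_SI = 10 ** 9                                   # e.g. N_sample = 1e6, N_iterate = 1e3
for n in (10, 20, 30, 40):
    print(n,
          t_vqa_uccsd(n_o=n, n_e=n // 2, n_si=N_SI) / YEAR,   # years, UCCSD
          t_vqa_hea(n=n, P=n, n_si=N_SI) / YEAR)              # years, HEA with P = n
```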
§ TOTAL TIME COST
Based on the analysis in Sec. <ref>, we now provide the detailed time cost of running VQAs. We estimate the time cost under the assumption of an ideal quantum processor: we only take into account the circuit running time and the sampling process for obtaining cost functions, while other factors, including hardware noise, connectivity between physical qubits, the time for initializing the hardware and reading out the outcomes, as well as limitations of VQAs such as reachability and trainability, are all ignored. The goal of ignoring these factors is to show the “best” possible time-scaling performance of VQAs.
As a representative application scenario, we consider applying VQAs to solve the ground states of different-sized molecular systems and label the systems according to their spin-orbital numbers n_o, which is also the number of qubits required: n. The number of electrons is set to be n_e=n_o/2.
Since N_sample and N_iterate are not pre-determined, we provide the time cost with respect to the values of these two factors, which are listed as:
N_sample ∈{ 10^4,10^5,10^6,10^7,10^8 },
N_iterate ∈{ 10^2,10^3,10^4 }.
Combining them into one factor, N_si ranges from 10^6 to 10^12.
Given n_o and n_e, the structure of the UCCSD ansatz is determined. However, the required block depth P of HEA is generally hard to determine. Therefore, we consider the following two cases: P=n and P=n^2.
In Fig. <ref> and <ref>, we plot the time cost with different values of N_si for both the UCCSD ansatz and HEA. The 1-year and 1000-year times are given as benchmarks.
From the figures, it is clear that for a fixed value of N_si, the total time cost of running VQAs exhibits polynomial growth in the number of qubits. Compared to the exponential scaling of classical simulation, VQAs seem to perform better.
However, in terms of real time, this is not the case. Even at about 20 qubits, VQAs easily reach the 1-year mark. In quantum-chemistry tasks, achieving chemical accuracy requires at least 10^6 samples, so the total time corresponding to N_si=10^6 can be viewed as the time for performing a single step of parameter optimization, which is already at the level of 1 year. Since this is the time on an ideal quantum computer, the real time cost will be even larger.
§ VQAS V.S. CLASSICAL SIMULATIONS
Since the term “quantum advantage” is defined relative to classical computation, it is insufficient to provide only the time cost of running VQAs on quantum hardware. In this part, we also consider the time cost of simulating VQAs using classical simulation of quantum circuits.
As quantum processors are not widely available to common research, classical simulation of quantum circuits is widely applied. The major difference between quantum execution and classical simulation of quantum circuits is that on quantum hardware the gate time does not change with the number of qubits, whereas in classical simulation it does. A quantum operation U_x, with x the list of qubits that the operation acts on, is in fact U_x⊗ I_x̅, where x̅={ k|k∉x}. In this case, the time of applying a quantum gate classically grows exponentially with the number of qubits.
We set the classical gate time for 10 qubits to t_10=10^-3 s, so the time for n qubits is t_n=t_102^n-10. Sampling is not required in classical simulation. We set N_sample=10^6 for quantum execution to reach chemical accuracy, and N_iterate is listed in Eq. (<ref>).
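A rough version of this comparison can be scripted as follows (our illustration). We treat layer counts as gate counts, assume the classical simulation evaluates each gradient component with separate circuit runs just like the quantum device, and use the per-gate model t_n = t_10·2^n-10; the exact bookkeeping behind Fig. <ref> may differ.

```python
def total_time_hea_quantum(n, P, n_sample, n_iterate):
    t_circuit = 3 * P * 30e-9 + n * P * 60e-9       # constant gate times on hardware
    n_gradient = 3 * n * P                          # one cost evaluation per parameter (Theorem)
    return t_circuit * n_sample * n_gradient * n_iterate

def total_time_hea_classical(n, P, n_iterate):
    t_gate = 1e-3 * 2 ** (n - 10)                   # classical per-gate time, doubling per qubit
    t_circuit = (3 * P + n * P) * t_gate            # layer counts used as gate counts (simplification)
    n_gradient = 3 * n * P                          # assume the same gradient bookkeeping
    return t_circuit * n_gradient * n_iterate       # no sampling repetition classically

YEAR = 3600 * 24 * 365
for n in (12, 16, 20, 24, 28):
    q = total_time_hea_quantum(n, P=n, n_sample=10**6, n_iterate=10**3) / YEAR
    c = total_time_hea_classical(n, P=n, n_iterate=10**3) / YEAR
    print(n, round(q, 3), round(c, 3))              # both in years; the two curves cross
```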
The time comparison between VQAs and classical simulations with both the UCCSD ansatz and HEA is shown in Fig. <ref>. Due to the different growth rates, the time curves of VQAs and classical simulations cross; we denote the corresponding time by T, which is a function of the ansatz, the number of iterations, etc.
It is only possible for VQAs to outperform classical computers when the required time is larger than T. From the figures, this time is on the scale of years, and it also increases with the number of parameters.
Moreover, unlike quantum processors, classical simulations can use multiple cores, which provides a further time reduction. For instance, in <cit.>, the average gate time is 2.09 s and 1.22 s when performing a 29-qubit and a 40-qubit quantum operation, respectively, whereas quantum execution with multiple quantum processors is still unavailable nowadays. Therefore, quantum advantages are difficult to reach for VQAs within an acceptable time scale.
§ CONCLUSION AND OUTLOOK
In this paper, we have investigated the time-scaling performance of VQAs and their potential to achieve quantum advantages. We proved that methods like backpropagation cannot be directly applied to training QNNs, since the inter-layer quantum states of a QNN are not recorded; as a result, the gradient-evaluation cost depends on the number of parameters in the quantum version of NN models, which limits the scalability of VQAs. Based on this result, we estimated the time cost of running VQAs in the ideal case, where realistic limitations such as noise, reachability, and qubit connectivity are not considered and we only take into account the time of performing quantum gates and the errors due to finite sampling. The result shows that even though the time grows only polynomially, it easily reaches a wall time of 1 year. Finally, we considered the time of classical simulation, which grows exponentially with the number of qubits. The result shows that the running time of VQAs with the UCCSD ansatz is shorter only when the overall time scale exceeds 10^2 years. Moreover, due to the realistic limitations mentioned above, whether VQAs can perform better in practice remains unclear. At a practical time scale, quantum advantages may be unavailable with VQAs.
By providing such a negative result, we do not intend to deny the potential of VQAs and NISQ algorithms. For VQAs, optimizations are needed to reduce the time cost, for example more efficient sampling strategies and more parameter-efficient ansatzes. One of our future works is to design backpropagation-type algorithms for efficiently training QNNs.
In the long term, introducing quantum computing into the context of machine learning, i.e., quantum machine learning, has remarkable potential. However, due to the different features of quantum and classical computation, directly replacing an NN model with a QNN may not be the optimal way to achieve quantum advantages. Seeking a more natural way to carry out QML tasks would be meaningful.
Taking one step further, a variety of quantum algorithms are quantum-classical hybrids: a problem is solved by classical pre-processing, quantum computation, and classical post-processing. Usual algorithms simply replace one step of classical computation with quantum computation, whereas designing the classical pre-processing so that it better fits the quantum computation would be preferable.
§ ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (Grant No. 12034018), and Innovation Program for Quantum Science and Technology No. 2021ZD0302300.
§ DATA AVAILABILITY
All the data that support the findings of this study are available within this article.
|
http://arxiv.org/abs/2307.05229v1 | 20230711125321 | Pulsar kicks in ultralight dark matter background induced by neutrino oscillation | [
"Geatano Lambiase",
"Tanmay Kumar Poddar"
] | hep-ph | [
"hep-ph",
"astro-ph.CO",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.04097v1 | 20230709045910 | Restricted Generative Projection for One-Class Classification and Anomaly Detection | [
"Feng Xiao",
"Ruoyu Sun",
"Jicong Fan"
] | cs.LG | [
"cs.LG"
] |
Restricted Generative Projection for One-Class Classification and Anomaly Detection
Feng Xiao, Ruoyu Sun, Jicong Fan Member, IEEE,
The authors are with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, and Shenzhen Research Institute of Big Data. E-mail: [email protected].
May 17, 2023
====================================================================================================================================================================================================================================================================================================
We present a simple framework for one-class classification and anomaly detection. The core idea is to learn a mapping to transform the unknown distribution of training (normal) data to a known target distribution. Crucially, the target distribution should be sufficiently simple, compact, and informative. The simplicity is to ensure that we can sample from the distribution easily, the compactness is to ensure that the decision boundary between normal data and abnormal data is clear and reliable, and the informativeness is to ensure that the transformed data preserve the important information of the original data. Therefore, we propose to use truncated Gaussian, uniform in hypersphere, uniform on hypersphere, or uniform between hyperspheres, as the target distribution. We then minimize the distance between the transformed data distribution and the target distribution while keeping the reconstruction error for the original data small enough. Comparative studies on multiple benchmark datasets verify the effectiveness of our methods in comparison to baselines.
Anomaly Detection, One-class Classification, Generative Projection.
§ INTRODUCTION
Anomaly detection (AD) under the setting of one-class classification aims to distinguish normal data and abnormal data using a model trained on only normal data <cit.>. AD is useful in numerous real problems such as intrusion detection for video surveillance, fraud detection in finance, and fault detection for sensors. Many AD methods have been proposed in the past decades <cit.>. For instance, Schölkopf et al.<cit.> proposed the one-class support vector machine (OC-SVM) that finds, in a high-dimensional kernel feature space, a hyperplane yielding a large distance between the normal training data and the origin. Tax et al.<cit.> presented the support vector data description (SVDD), which obtains a spherically shaped boundary (with minimum volume) around the normal training data to identify abnormal samples. Hu et al.<cit.> propose a new kernel function to estimate samples’ local densities and propose a weighted
neighborhood density estimation to increase the robustness to changes in the neighborhood size.
There are also many deep learning based AD methods including unsupervised AD methods <cit.> and semi-supervised AD methods <cit.>.
Deep learning based AD methods may be organized into three categories. The first category is based on compression and reconstruction. These methods usually use an autoencoder <cit.> to learn a low-dimensional representation to reconstruct the high-dimensional data <cit.>. The autoencoder learned from the normal training data is expected to have a much higher reconstruction error on unknown abnormal data than on normal data.
The second category is based on the combination of classical one-class classification <cit.> and deep learning <cit.>. For instance, Ruff et al.<cit.> proposed a method called deep one-class SVDD. The main idea is to use deep learning to construct a minimum-radius hypersphere to include all the training data, while the unknown abnormal data are expected to fall outside.
The last category is based on generative learning or adversarial learning
<cit.>.
For example, Perera et al. <cit.> proposed to use the generative adversarial network (GAN) <cit.> with constrained latent representation to detect anomalies for image data. Goyal et al.<cit.> presented a method called deep robust one-class classification (DROCC) and the method aims to find a low-dimensional manifold to accommodate the normal data via an adversarial optimization approach.
Although deep learning based AD methods have shown promising performance on various datasets, they still have limitations. For instance, the one-class classification methods such as Deep SVDD <cit.> only ensure that a hypersphere could include the normal data but cannot guarantee that the normal data are distributed evenly in the hypersphere, which may lead to large empty regions in the hypersphere and hence yield incorrect decision boundary (see Fig.<ref>). Moreover, the popular hypersphere assumption may not be the best one for providing a compact decision boundary (see Fig.<ref> and Tab.<ref>). The adversarial learning methods such as <cit.> may suffer from instability in optimization.
In this work, we present a restricted generative projection (RGP) framework for one-class classification and anomaly detection. The main idea is to train a deep neural network to convert the distribution of normal training data to a target distribution that is simple, compact, and informative, which will provide a reliable decision boundary to identify abnormal data from normal data. There are many choices for the target distribution, such as truncated Gaussian and uniform on hypersphere. Our contributions are summarized as follows.
* We present a novel framework called RGP for one-class classification and anomaly detection. It aims to transform the data distribution to some target distributions that are easy to be violated by unknown abnormal data.
* We provide four simple, compact, and informative target distributions, analyze their properties theoretically, and show how to sample from them efficiently.
* We propose two extensions for our original RGP method.
We conduct extensive experiments (on eight benchmark datasets) to compare the performance of different target distributions and compare our method with state-of-the-art baselines. The results verify the effectiveness of our methods.
The rest of this paper is organized as follows. Section <ref> introduces the related work.
Section <ref> details our proposed methods.
Section <ref> presents two extensions of the proposed method.
Section <ref> shows the experiments.
Section <ref> draws conclusions for this paper.
§ RELATED WORK
Before elaborating our method, we in this section briefly review deep one-class classification, autoencoder-based AD methods, and maximum mean discrepancy (MMD)<cit.>.
We also discuss the connection and difference between our method and these related works.
§.§ Deep One-Class Classification
The Deep SVDD proposed by <cit.> uses a neural network to learn a minimum-radius hypersphere to enclose the normal training data, i.e.,
minimize_𝒲1/n∑^n_i=1‖ϕ(𝐱_i; 𝒲) - 𝐜‖^2 + λ/2∑^L_l=1‖𝐖_l ‖^2_F
where 𝐜∈ℝ^d is a predefined centroid and 𝒲={𝐖_1,…,𝐖_L} denotes the parameters of the L-layer neural network ϕ, and λ is a regularization hyperparameter. In (<ref>), to avoid model collapse, bias terms should not be used and activation functions should be bounded <cit.>. There are also a few variants of Deep SVDD proposed for semi-supervised one-class classification and anomaly detection <cit.>.
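For reference, a minimal PyTorch-style sketch of this objective could look as follows (our illustration; the network architecture is arbitrary, and the Frobenius-norm penalty is delegated to the optimizer's weight decay).

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Bias-free network with bounded activations, as required to avoid collapse."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64, bias=False), nn.Tanh(),
            nn.Linear(64, out_dim, bias=False),
        )

    def forward(self, x):
        return self.net(x)

def deep_svdd_loss(phi, x, center):
    z = phi(x)
    return ((z - center) ** 2).sum(dim=1).mean()   # mean squared distance to the centroid c

phi = SmallNet(in_dim=8, out_dim=4)
center = torch.zeros(4) + 0.1                      # predefined, non-zero centroid c
opt = torch.optim.Adam(phi.parameters(), lr=1e-3, weight_decay=1e-4)  # plays the role of the Frobenius penalty
x = torch.randn(32, 8)                             # dummy mini-batch of normal data
loss = deep_svdd_loss(phi, x, center)
opt.zero_grad(); loss.backward(); opt.step()
```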
Both our method and Deep SVDD as well as its variants aim to project the normal training data into some space such that a decision boundary between normal data and unknown abnormal data can be found easily. However, the sum-of-squares minimization in Deep SVDD and its variants only ensures that the projected data are sufficiently close to the centroid 𝐜 in the sense of Euclidean distance and does not guarantee that the data are sufficiently or evenly distributed in the hypersphere centered at 𝐜. Thus, in the hypersphere, there could be holes or big empty regions that contain no normal data, and hence it is not suitable to assume that the whole space enclosed by the hypersphere is completely a normal space. In other words, the optimal decision boundary between normal data and abnormal data is actually very different from the hypersphere. An intuitive example is shown in Fig.<ref>. We see that there is a large empty space in the hypersphere learned by Deep SVDD. In contrast, the transformed data of our method are sufficiently distributed.
§.§ Autoencoder-based AD Methods
Our method is similar to but quite different from the variational autoencoder (VAE) <cit.>. Although our model is an autoencoder, the main goal is not to represent or generate data; instead, our model aims to convert distribution to find a reliable decision boundary for anomaly detection. More importantly, the latent distribution in VAE is often Gaussian and not bounded while the latent distribution in our model is more general and bounded, which is essential for anomaly detection. In addition, the optimizations of VAE and our method are also different: VAE involves KL-divergence while our method involves maximum mean discrepancy <cit.>.
It is worth noting that similar to our method, Perera et al.<cit.> also considered bounded latent distribution in autoencoder for anomaly detection. They proposed to train a denoising autoencoder with a hyper-cube supported latent space, via adversarial training. The latent distribution and optimization are different from ours. In addition, the latent distributions of our method, such as uniform on hypersphere, are more compact than the multi-dimensional uniform latent distribution of their method.
Compared with the autoencoder based anomaly detection method NAE <cit.> that uses reconstruction error to normalize autoencoder, our method pays more attention to learning a mapping that can transform the unknown data distribution into a simple and compact target distribution. The ideas are orthogonal.
§.§ Maximum Mean Discrepancy
In statistics, maximum mean discrepancy (MMD)<cit.> is often used for Two-Sample test and its principle is to find a function that assumes different expectations on two different distributions:
MMD[ℱ, p,q] = sup_‖ f ‖_ℋ≤ 1( 𝔼_p[f(𝐱)]-𝔼_q[f(𝐲)] ),
where p, q are probability distributions, ℱ is a class of functions f:𝕏→ℝ and ℋ denotes a reproducing kernel Hilbert space.
Using the kernel trick, MMD can be represented as a simple loss function to measure the discrepancy between two distributions by finite samples, which is easy to apply to deep learning and can be efficiently trained by gradient descent. Based on the aforementioned advantages of MMD, Li et al.<cit.> proposed generative moment matching networks (GMMNs), which leads to a simpler optimization objective compared to the min-max optimization of GAN <cit.>.
Although both our method and GMMNs <cit.> minimize the MMD between data distribution and prior distribution, our goal is not generating new data but detecting anomalies. In addition, we consider a few bounded target distributions and analyze their sampling properties. More importantly, our method has very competitive performance when compared with SOTA methods of anomaly detection and one-class classification.
§ RESTRICTED GENERATIVE PROJECTION
In this section, we introduce our RGP framework, bounded target distributions, and the computation of anomaly scores.
§.§ Restricted Distribution Projection
Suppose we have a set of m-dimensional training data 𝐗={𝐱_1,𝐱_2,…,𝐱_n }
drawn from an unknown bounded distribution 𝒟_𝐱 and any samples drawn from 𝒟_𝐱 are normal data. We want to train a model ℳ on 𝐗 to determine whether a test data 𝐱_new is drawn from 𝒟_𝐱 or not. One may consider estimating the density function (denoted by p_𝐱) of 𝒟_𝐱 using some techniques such as kernel density estimation <cit.>. Suppose the estimation p̂_𝐱 is good enough, then one can determine whether 𝐱_new is normal or not according to the value of p̂_𝐱(𝐱_new): if p̂_𝐱(𝐱_new) is zero or close to zero, 𝐱_new is an abnormal data point; otherwise, 𝐱_new is a normal data point [Here we assume that the distributions of normal data and abnormal data do not overlap. Otherwise, it is difficult to determine whether a single point is normal or not.]. However, the dimensionality of the data is often high and hence it is very difficult to obtain a good estimation p̂_𝐱.
We propose to learn a mapping 𝒯:ℝ^m→ℝ^d to transform the unknown bounded distribution 𝒟_𝐱 to a known distribution 𝒟_𝐳 while there still exists a mapping 𝒯':ℝ^d→ℝ^m that can recover 𝒟_𝐱 from 𝒟_𝐳 approximately.
Let p_𝐳 be the density function of 𝒟_𝐳. Then we can determine whether 𝐱_new is normal or not according to the value of p_𝐳(𝒯(𝐱_new)). To be more precise, we want to solve the following problem
𝒯, 𝒯'minimize ℳ(𝒯(𝒟_𝐱), 𝒟_𝐳)+λℳ(𝒯'(𝒯(𝒟_𝐱)),𝒟_𝐱),
where ℳ(·, ·) denotes some distance metric between two distributions and λ is a trade-off parameter for the two terms. Note that if λ=0, 𝒯 may convert any distribution to 𝒟_𝐳 and lose the ability of distinguishing normal data and abnormal data.
Based on the universal approximation theorems <cit.> and substantial success of neural networks, we use deep neural networks (DNN) to model 𝒯 and 𝒯' respectively. Let f_θ and g_ϕ be two DNNs with parameters θ and ϕ respectively. We solve
θ, ϕminimize ℳ(𝒟_f_θ(𝐱), 𝒟_𝐳)+λℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
where f_θ and g_ϕ serve as encoder and decoder respectively.
However, problem (<ref>) is intractable because 𝒟_𝐱 is unknown and 𝒟_f_θ(𝐱), 𝒟_g_ϕ(f_θ(𝐱)) cannot be computed analytically. Note that the samples of 𝒟_𝐱 and 𝒟_g_ϕ(f_θ(𝐱)) are given and paired. Then the second term in the objective of (<ref>) can be replaced by sample reconstruction error such as 1n∑_i=1^n𝐱_i-g_ϕ(f_θ(𝐱_i))^2. On the other hand, we can also sample from 𝒟_f_θ(𝐱) and 𝒟_𝐳 easily but their samples are not paired. Hence, the metric ℳ in the first term of the objective of (<ref>) should be able to measure the distance between two distributions using their finite samples. To this end, we propose to use the kernel maximum mean discrepancy (MMD)<cit.> to measure the distance between 𝒟_f_θ(𝐱) and 𝒟_𝐳.
Its empirical estimate is
MMD^2[ℱ, X,Y] = 1/m(m-1)∑_i=1^mj≠ i∑^mk(𝐱_i, 𝐱_j)
+ 1/n(n-1)∑_i=1^nj≠ i∑^nk(𝐲_i, 𝐲_j)
- 2/mn∑_i=1^mj=1∑^nk(𝐱_i, 𝐲_j),
where X = {𝐱_1, …, 𝐱_m} and Y = {𝐲_1, …, 𝐲_n} are samples consisting of i.i.d observations drawn from p and q, respectively. k(·, ·) denotes a kernel function, e.g., k(𝐱, 𝐲)=exp(-γ𝐱-𝐲^2), a Gaussian kernel.
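This empirical estimate translates directly into code; below is a NumPy sketch (ours) of the unbiased MMD^2 estimator with a Gaussian kernel.

```python
import numpy as np

def gaussian_kernel(a, b, gamma):
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * sq)

def mmd2_unbiased(x, y, gamma):
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, gamma)
    kyy = gaussian_kernel(y, y, gamma)
    kxy = gaussian_kernel(x, y, gamma)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))   # excludes i == j terms
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

x = np.random.randn(128, 2)
y = np.random.rand(128, 2) * 2 - 1        # a clearly different distribution
print(mmd2_unbiased(x, y, gamma=1.0))     # noticeably larger than for two samples of the same law
```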
Based on the above analysis, we obtain an approximation for (<ref>) as
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λ/n∑_i=1^n𝐱_i-g_ϕ(f_θ(𝐱_i))^2,
where 𝐙_θ={f_θ(𝐱_1),f_θ(𝐱_2),…,f_θ(𝐱_n) } and 𝐙_T={𝐳_i:𝐳_i∼𝒟_𝐳, i=1,…,n}.
The first term of the objective function in (<ref>) makes f_θ learn the mapping 𝒯 from data distribution 𝒟_𝐱 to target distribution 𝒟_𝐳 and the second term ensures that f_θ can preserve the main information of observations provided that λ is sufficiently large.
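Combining the two terms, one training step of (<ref>) can be sketched in PyTorch as follows (our illustration; encoder/decoder stand for f_θ and g_ϕ, sample_target draws from the target distribution 𝒟_𝐳, and the architectures are placeholders).

```python
import torch

def mmd2_rbf(x, y, gamma):
    def k(a, b):
        return torch.exp(-gamma * torch.cdist(a, b) ** 2)
    m, n = x.shape[0], y.shape[0]
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    return ((kxx.sum() - kxx.diag().sum()) / (m * (m - 1))
            + (kyy.sum() - kyy.diag().sum()) / (n * (n - 1))
            - 2 * kxy.mean())

def rgp_step(encoder, decoder, optimizer, x, sample_target, lam, gamma):
    z = encoder(x)                          # f_theta(x)
    z_target = sample_target(x.shape[0])    # fresh draw from the target distribution
    recon = decoder(z)                      # g_phi(f_theta(x))
    loss = mmd2_rbf(z, z_target, gamma) + lam * ((x - recon) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

enc = torch.nn.Sequential(torch.nn.Linear(6, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
dec = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 6))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
uohs = lambda b: torch.nn.functional.normalize(torch.randn(b, 2), dim=1)   # UoHS target, r = 1
print(rgp_step(enc, dec, opt, torch.randn(64, 6), uohs, lam=1.0, gamma=1.0))
```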
§.§ Bounded Target Distributions
Now we introduce four examples of simple and compact 𝒟_𝐳 for (<ref>). The four distributions are Gaussian in Hypersphere (GiHS), Uniform in Hypersphere (UiHS), Uniform between Hyperspheres (UbHS), and
Uniform on Hypersphere (UoHS). Their 2-dimensional examples are visualized in Fig.<ref>.
GiHS (Fig.<ref>.a) is actually a truncated Gaussian. Suppose we want to draw n samples from GiHS. A simple approach is drawing (1+ρ)n samples from a standard d-dimensional Gaussian and discarding the ρ n samples with larger ℓ_2 norms. The maximum ℓ_2 norm of the remaining n points is the radius of the hypersphere. One may also use the inverse transform method of <cit.>. We have the following results.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒩(0,𝐈_d) independently. Then for any r>√(d), we have
Pr(‖𝐳_j‖≥ r) ≤exp(-0.5α), j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ r)≥ 1-nexp(-0.5α),
where α=√(d+2r^2)-√(d).
Inequality (<ref>) means a hypersphere of radius r can include all the n samples with a high probability if r is sufficiently large. On the other hand, according to (<ref>), if we expect to get n samples in a hypersphere of radius r, we need to sample about n/(1-exp(-0.5α)) points from 𝒩(0,𝐈_d). If d is larger, we need to sample more points.
UiHS (Fig.<ref>.b) is a hyperball in which all the samples are distributed uniformly. To sample from UiHS, we first need to sample from 𝒰(-r,r)^d. Then we discard all the data points outsides the radius-r hyperball centered at the origin.
The following proposition (the proof is in Appendix) shows some probability result of sampling from a d-dimensional uniform distribution.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒰(-r,r)^d independently. Then for any t>0, we have
Pr(‖𝐳_j‖≥ rt) ≤d/3t^2, j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ rt)≥ 1-nd/3t^2.
Inequality (<ref>) means a hypersphere of radius rt can include all the n samples with probability at least 1-nd/(3t^2). On the other hand, inequality (<ref>) indicates that if we draw n/(1-d/(3t^2)) samples from 𝒰(-r,r)^d, the expected number of samples falling into a hypersphere of radius rt is at least n.
Actually, sampling from UiHS is closely related to the Curse of Dimensionality, and we need to sample a large number of points from 𝒰(-r,r)^d if d is large, because only a small fraction of the volume of the hypercube lies inside the hyperball. To be more precise, letting V_hypercube be the volume of a hypercube with side length 2r and V_hyperball be the volume of a hyperball with radius r, we have
V_hyperball/V_hypercube=π^d/2/( d· 2^d-1·Γ(d/2) ) ≜η,
where Γ is the gamma function. Therefore, we need to draw n/η samples from 𝒰(-r,r)^d to ensure that the expected number of samples included in the hyperball is n, where η is small if d is large.
UbHS (Fig.<ref>.c) can be obtained via UiHS. We first sample from UiHS and then remove all samples included by a smaller hypersphere. Since the volume ratio of two hyperballs with radii r and r' is (r/r')^d, where r'<r, we need to draw n/(1-(r'/r)^d) samples from UiHS to ensure that the expected number of samples between the two hyperspheres is n. Compared with GiHS and UiHS, UbHS is more compact and hence provides a larger abnormal space for abnormal data to fall in.
UoHS (Fig.<ref>.d) can be easily obtained via sampling from 𝒩(0,𝐈_d). Specifically, for every 𝐳_i drawn from 𝒩(0,𝐈_d), we normalize it as 𝐳_i←r𝐳_i/‖𝐳_i‖, where r is the predefined radius of the hypersphere. UoHS is a special case of UbHS when r'=r.
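The sampling procedures described above can be coded literally (rejection from a Gaussian or from a hypercube, shell rejection, and normalization onto the sphere); the NumPy sketch below is our illustration and is only practical for moderate d, for the reasons discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gihs(n, d, r):
    """Truncated Gaussian (GiHS): keep standard-normal draws with norm <= r."""
    out = []
    while len(out) < n:
        z = rng.standard_normal((2 * n, d))
        out.extend(z[np.linalg.norm(z, axis=1) <= r])
    return np.array(out[:n])

def sample_uihs(n, d, r):
    """Uniform in the radius-r hyperball (UiHS) via rejection from U(-r, r)^d."""
    out = []
    while len(out) < n:
        z = rng.uniform(-r, r, (4 * n, d))
        out.extend(z[np.linalg.norm(z, axis=1) <= r])
    return np.array(out[:n])

def sample_ubhs(n, d, r, r_inner):
    """Uniform between two hyperspheres (UbHS): sample UiHS, drop points inside r_inner."""
    out = []
    while len(out) < n:
        z = sample_uihs(2 * n, d, r)
        out.extend(z[np.linalg.norm(z, axis=1) >= r_inner])
    return np.array(out[:n])

def sample_uohs(n, d, r):
    """Uniform on the radius-r hypersphere (UoHS): normalize Gaussian draws."""
    z = rng.standard_normal((n, d))
    return r * z / np.linalg.norm(z, axis=1, keepdims=True)

z = sample_uohs(1000, d=32, r=1.0)
print(z.shape, np.linalg.norm(z, axis=1)[:3])   # all norms equal r
```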
To quantify the compactness of the four target distributions, we define density ρ as the number of data points in unit volume, i.e., ρ=n/V. Consequently, the densities of the four target distributions are reported in Table <ref>.
Since UoHS is more compact than UbHS, GiHS, and UiHS, it should have better performance in anomaly detection. Indeed, our numerical results show that UoHS outperforms the others in most cases.
§.§ Anomaly Scores
In the test stage, we only use the trained f_θ^* to calculate anomaly scores. For a given test sample
𝐱_new, we define anomaly score s for each target distribution by
s(𝐱_new)=
|‖ f_θ^*(𝐱_new) ‖ - r |, for UoHS;
‖ f_θ^*(𝐱_new) ‖, for GiHS or UiHS;
(‖ f_θ^*(𝐱_new) ‖ - r)· (‖ f_θ^*(𝐱_new) ‖ - r'), for UbHS.
There are clear decision boundaries according to (<ref>) and they can be regarded as `hard boundaries' between normal samples and abnormal samples. However, these `hard boundaries' only work in ideal cases where the projected data exactly match the target distributions. In real cases, due to the noise of data or the non-optimality of optimization, the projected data do not exactly match the target distributions. Therefore, we further propose a `soft boundary' for calculating anomaly scores. Specifically, for a given test sample 𝐱_new, we define anomaly score s for all four target distributions as
s(𝐱_new)= 1/k∑_i ∈ N_k‖ f_θ^*(𝐱_new) - f_θ^*(𝐱_i) ‖
where 𝐱_i denotes a single sample with index i in the training data and N_k denotes the index set of the k nearest training (projected) samples to f_θ^*(𝐱_new).
Empirically, in the experiments, we found that (<ref>) has better performance than (<ref>) in most cases. Table <ref>, <ref>, <ref> only report the results from (<ref>). The comparison results between (<ref>) and (<ref>) are provided in Section <ref>.
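The `soft boundary' score in (<ref>) is simply the mean distance to the k nearest projected training points; a brute-force NumPy sketch (ours) is given below.

```python
import numpy as np

def soft_anomaly_score(z_test, z_train, k=3):
    """Mean Euclidean distance from each projected test point to its k nearest
    projected training points (the soft-boundary score)."""
    dists = np.linalg.norm(z_test[:, None, :] - z_train[None, :, :], axis=-1)
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

z_train = np.random.randn(500, 2)
z_train /= np.linalg.norm(z_train, axis=1, keepdims=True)   # e.g., projections close to UoHS
z_test = np.vstack([z_train[:5] + 0.01, np.zeros((1, 2))])  # 5 near-normal points + the origin
print(soft_anomaly_score(z_test, z_train, k=3))             # the origin gets a much larger score
```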
We call our method Restricted Generative Projection (RGP), which has four variants, denoted by RGP-GiHS, RGP-UiHS, RGP-UbHS, and RGP-UoHS respectively, though any bounded target distribution applies.
§ EXTENSIONS OF RGP
In this section, based on the general objective in (<ref>), we provide two variants of RGP.
§.§ Double-MMD based RGP
In the objective function of RGP defined by (<ref>), the second term is the reconstruction error for 𝐗, which is only a special example of approximation for the second term in the objective function of (<ref>), i.e., ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱). Alternatively, we can use MMD to approximate ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱), which yields the following Double-MMD RGP:
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λMMD^2(g_ϕ(𝐙_θ),𝐗).
Compared to the sum of squares reconstruction error used in (<ref>), MMD^2(g_ϕ(𝐙_θ),𝐗) is a weaker approximation for ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
because it does not exploit the fact that the samples in 𝐙_θ and 𝐗 are paired. Thus, the projection of Double-MMD RGP cannot preserve sufficient information of 𝐗,
which will reduce the detection accuracy. Indeed, as shown by the experimental results in Section
<ref>, our original RGP outperforms Double-MMD RGP.
§.§ Sinkhorn Distance based RGP
Besides MMD, the optimal transport theory can also be used to construct a notion of distance between pairs of probability distributions. In particular, the Wasserstein distance <cit.>, also known as “Earth Mover’s Distance”, has appealing theoretical properties and a very intuitive formulation
𝒲 = ⟨γ^*, 𝐂⟩_F
where 𝐂 denotes a metric cost matrix and γ^* is the optimal transport plan.
Finding the optimal transport plan γ^* is a computationally hard problem. In particular, the computational cost of the Wasserstein distance can quickly become prohibitive as the data dimension increases. To speed up the calculation of the Wasserstein distance, Cuturi <cit.> proposed the Sinkhorn distance, which regularizes the optimal transport problem with an entropic penalty and uses Sinkhorn's algorithm <cit.> to approximately calculate the Wasserstein distance.
Now, if replacing the first term in (<ref>) with the Sinkhorn distance<cit.>, we can get a new optimization objective
minimize_θ,ϕ ⟨γ, ℳ(𝐙_θ ,𝐙_T) ⟩_F + ϵ∑_i,jγ_ijlog(γ_ij)
+ λ/n∑_i=1^n 𝐱_i-g_ϕ(f_θ(𝐱_i))^2
subject to γ1 = 𝐚, γ^T 1 = 𝐛, γ≥ 0
where ℳ(𝐙_θ ,𝐙_T) denotes the metric cost matrix between 𝐙_θ and 𝐙_T, ϵ is the coefficient of entropic regularization term, 𝐚 and 𝐛 are two probability vectors and satisfy 𝐚^T1=1 and 𝐛^T1=1 respectively. We call this method Sinkhorn RGP.
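For completeness, a compact NumPy sketch of Sinkhorn's algorithm for this entropically regularized problem is given below (our illustration: uniform marginals, costs rescaled to avoid numerical underflow, and a larger ϵ than the 0.01 used in our experiments; a log-domain implementation would be needed at very small ϵ).

```python
import numpy as np

def sinkhorn_distance(x, y, eps=0.05, n_iter=200):
    """Entropy-regularized OT cost <gamma, C>_F with uniform marginals a and b."""
    m, n = len(x), len(y)
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost matrix
    C = C / C.max()                                      # rescale costs to avoid underflow in exp()
    K = np.exp(-C / eps)
    u = np.ones(m)
    for _ in range(n_iter):                              # Sinkhorn's alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    gamma = u[:, None] * K * v[None, :]                  # approximate optimal transport plan
    return (gamma * C).sum()

x = np.random.randn(64, 2)
y = np.random.randn(64, 2) + 2.0
print(sinkhorn_distance(x, y))
```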
Compared to MMD, Sinkhorn distance is more effective in quantifying the difference between two distributions using their finite samples. Therefore, the Sinkhorn RGP usually has better performance than our original RGP (<ref>), which will be shown by the experimental results in Section <ref>.
§ EXPERIMENTS
§.§ Datasets and Baselines
We compare the proposed method with several state-of-the-art methods of anomaly detection on five tabular datasets and three widely-used image datasets for one-class classification. The datasets are detailed as follows.
* Abalone[http://archive.ics.uci.edu/ml/datasets/Abalone]<cit.> is a dataset of physical measurements of abalone to predict the age. It contains 1,920 instances with 8 attributes.
* Arrhythmia[http://odds.cs.stonybrook.edu/arrhythmia-dataset/]<cit.> is an ECG dataset. It was used to identify arrhythmic samples in five classes and contains 452 instances with 279 attributes.
* Thyroid[http://odds.cs.stonybrook.edu/thyroid-disease-dataset/]<cit.> is a hypothyroid disease dataset that contains 3,772 instances with 6 attributes.
* KDD[https://kdd.ics.uci.edu/databases/kddcup99/]<cit.> is the KDDCUP99 10 percent dataset from the UCI repository and contains 34 continuous attributes and 7 categorical attributes. The attack samples are regarded as normal data, and the non-attack samples are regarded as abnormal data.
* KDDRev is derived from the KDDCUP99 10 percent dataset. The non-attack samples are regarded as normal data, and the attack samples are regarded as abnormal data.
* MNIST[http://yann.lecun.com/exdb/mnist/]<cit.> is a well-known dataset of handwritten digits and totally contains 70,000 grey-scale images in 10 classes from number 0-9.
* Fashion-MNIST[https://www.kaggle.com/datasets/zalando-research/fashionmnist]<cit.> contains 70,000 grey-scale fashion images (e.g. T-shirt and bag) in 10 classes.
* CIFAR-10[https://www.cs.toronto.edu/ kriz/cifar.html]<cit.> is a widely-used benchmark for image anomaly detection. It contains 60,000 color images in 10 classes.
We compare our method with three classic shallow models, four deep autoencoder based methods, three deep generative model based methods, and some latest anomaly detection methods.
* Classic shallow models: local outlier factor (LOF)<cit.>, one-class support vector machine (OC-SVM)<cit.>, isolation forest (IF)<cit.>.
* Deep autoencoder based methods: denoising auto-encoder (DAE)<cit.>, DCAE<cit.>, E2E-AE, DAGMM<cit.>, DCN <cit.>.
* Deep generative model based methods: AnoGAN<cit.>, ADGAN<cit.>, OCGAN <cit.>.
* Some latest AD methods: DeepSVDD<cit.>, GOAD <cit.>, DROCC <cit.>, HRN <cit.>, SCADN <cit.>, NeuTraL AD <cit.>, GOCC <cit.>, PLAD <cit.>, MOCCA <cit.>.
§.§ Implementation Details and Evaluation Metrics
In this section, we introduce the implementation details of the proposed method RGP and describe experimental settings for image and tabular datasets. Note that our method neither uses any abnormal data during the training process nor utilizes any pre-trained feature extractors.
For the five tabular datasets (Abalone, Arrhythmia, Thyroid, KDD, KDDRev), in our method, f_θ and g_ϕ are both MLPs. We follow the dataset preparation of <cit.> to preprocess the tabular datasets for one-class classification task. The hyper-parameter λ is set to 1.0 for the Abalone, Arrhythmia and Thyroid. For the KDD and KDDRev, λ is set to 0.0001.
For the three image datasets (MNIST, Fashion-MNIST, CIFAR-10), in our method, f_θ and g_ϕ are both CNNs. Since the three image datasets contain 10 different classes, we conduct 10 independent one-class classification tasks on both datasets: one class is regarded as normal data and the remaining nine classes are regarded as abnormal data. In each task on MNIST, there are about 6,000 training samples and 10000 testing samples. In each task on CIFAR-10, there are 5,000 training samples and 10,000 testing samples. In each task on Fashion-MNIST, there are 6,000 training samples and 10,000 testing samples. The hyper-parameter λ is chosen from {1.0, 0.5, 0.1, 0.01, 0.001, 0.0001} and varies for different classes.
In our method, regarding the radius r of GiHS and UiHS, we first generate a large number (denoted by N) of samples from Gaussian or uniform, sort the samples according to their ℓ_2 norms, and set r to be the pN-th smallest ℓ_2 norm, where p=0.9. For UbHS, we need to use the aforementioned method to determine an r with p=0.95 and a r' with p=0.05. We see that {r, r'} are not related to the actual data, they are determined purely by the target distribution.
In each iteration (mini-batch) of the optimization for all four target distributions, we resample 𝐙_T according to r. For UoHS, we draw samples from Gaussian and normalize them to have unit ℓ_2 norm, then they lie on a unit hypersphere uniformly. The procedure is repeated in each iteration (mini-batch) of the optimization.
For hyper-parameter k on the testing stage, we select k=3 for Thyroid, Arrhythmia, KDD, KDDRev, and select k=5 for Abalone dataset. For three image datasets, the hyper-parameter k is chosen from {1, 3, 5, 10} and varies for different classes.
We use Adam <cit.> as the optimizer in our method. For MNIST, Fashion-MNIST, CIFAR-10, Arrhythmia and KDD, the learning rate is set to 0.0001. For Abalone, Thyroid and KDDRev, the learning rate is set to 0.001. Table <ref> shows the detailed implementation settings of RGP on all datasets. All experiments were run on AMD EPYC CPU with 64 cores and with NVIDIA Tesla A100 GPU, CUDA 11.6.
To evaluate the performance of all methods, we follow the previous works such as <cit.> and <cit.> to use AUC (Area Under the ROC curve) for image datasets and F1-score for tabular datasets.
Note that when conducting experiments on the tabular datasets, we found that most of the strong baselines, like DROCC <cit.>, NeuTral AD <cit.>, GOCC <cit.>, used the F1-score and we just followed this convention.
In our method, we get the threshold via simply calculating the dispersion of training data in latent space. Specifically, we first calculated the scores s(𝐗) on training data 𝐗 using (12) or (13), and then sorted s(𝐗) in ascending order and set the threshold to be the pN-th smallest score, where p is a probability varying for different datasets.
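This thresholding rule amounts to taking (roughly) an empirical quantile of the training scores; a minimal NumPy sketch (ours) is:

```python
import numpy as np

def threshold_from_training_scores(train_scores, p=0.95):
    """pN-th smallest training score, i.e., roughly the empirical p-quantile."""
    return np.sort(train_scores)[int(p * len(train_scores)) - 1]

scores = np.random.rand(1000)          # stand-in for s(X) computed on the training data
tau = threshold_from_training_scores(scores, p=0.95)
print(tau)                             # flag x_new as abnormal if s(x_new) > tau
```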
§.§ Results on Image Datasets
Tables <ref> and <ref> show the comparison results on Fahsion-MNIST and CIFAR-10 respectively. We have the following observations.
* Firstly, in contrast to classic shallow methods such as OC-SVM <cit.> and IF <cit.>, our RGP has significantly higher AUC scores on all classes of Fashion-MNIST and most classes of CIFAR-10. An interesting phenomenon is that most deep learning based methods have inferior performance compared to IF <cit.> on class `Sandal' of Fashion-MNIST and IF <cit.> outperforms all deep learning based methods including ours on class `Deer' of CIFAR-10.
* Our methods outperformed the deep autoencoder based methods and generative model based methods in most cases and have competitive performance compared to the state-of-the-art in all cases.
* RGP has superior performance on most classes of Fashion-MNIST and CIFAR-10 under the setting of UoHS (uniform distribution on hypersphere).
Table <ref> shows the average performance on MNIST, Fashion-MNIST, and CIFAR-10 over all 10 classes to provide an overall comparison. We see that RGP achieves the best average AUC on Fashion-MNSIT and CIFAR-10 among all competitive methods. Four variants of RGP have relatively close average performance on all three image datasets. The experimental results of a single class on MNIST are reported in Appendix.
§.§ Results on Tabular Datasets
In Table <ref>, we report the F1-scores of our methods in comparison to ten baselines on the five tabular datasets. Our four variants of RGP significantly outperform all baseline methods on Arrhythmia, thyroid, and Abalone. Particularly, RGP-GiHS has 23.25%, 12.22%, and 19.58% improvements on the three datasets in terms of F1-score compared to the runner-up, respectively. It is worth mentioning that Neutral AD <cit.> and GOCC <cit.> are both specially designed for non-image data but are outperformed by our methods in most cases.
Compared with image datasets, the performance improvements of RGPs on the three tabular datasets are more significant. One possible reason is that, compared to image data, it is easier to convert tabular data to a compact target distribution. Furthermore, we also report the AUC scores on Abalone, Thyroid and Arrhythmia datasets and the results are provided in Appendix.
In addition to the quantitative results, we choose Thyroid (with 6 attributes) as an example and transform the data distribution to 2-dimensional target distributions, which are visualized in Figure <ref>. Plots (a), (b), (c), (d) in Figure <ref> refer to GiHS, UiHS, UbHS, UoHS, respectively. The blue points, orange points, green points, and red points denote samples from target distribution, samples from training data, normal samples from test set, and abnormal samples from test set, respectively. For much clearer illustration, the left figure in each plot of Figure <ref> shows all four kinds of instances and the right figure shows two kinds of instances including normal and abnormal samples from test set.
We see that RGPs effectively transform the data distribution to the restricted target distributions, though the transformed data do not exactly match the targets (this also demonstrates the necessity of the `soft boundary' defined by (<ref>)).
§.§ Comparison between`soft' and `hard' boundary
We further explore the performance of two different anomaly scores. Specifically, we compare the `hard boundaries' (<ref>) and `soft boundary' (<ref>) as anomaly scores during the test stage on image datasets and tabular datasets. The results are showed in Figures <ref>, <ref>, <ref>. It can be observed that using `soft boundary' (<ref>) to calculate anomaly score has better performance than using `hard boundaries' (<ref>) on most classes of image and tabular datasets. Nevertheless, using `hard boundaries' to calculate anomaly scores still achieves remarkable performance on some classes. For example, on the class `Ankle-boot' of Fashion-MNIST and the class `Trunk' of CIFAR-10, the best two results are both from RGPs using `hard boundaries' (<ref>) to calculate anomaly score.
§.§ Experiments of Double-MMD RGP and Sinkhorn RGP
We use Double-MMD RGP (<ref>) to conduct experiments and the results are reported in Table <ref>, <ref>. On image datasets, we just consider the target distribution UoHS (Uniform on HyperSphere) for simplicity.
On tabular datasets, we conduct experiments on the proposed four different target distributions.
From the experimental results in Tables <ref>, <ref>, we found that Double-MMD RGP and the original RGP have similar performance on the three tabular datasets, whereas on the image datasets, including Fashion-MNIST and CIFAR-10, there is an apparent performance gap despite a wide range of adjustment of λ∈{10.0, 5.0, 1.0, 0.5, 0.1, 0.01} for Double-MMD RGP (<ref>). Note that Table <ref> reports the average AUC(%) over all classes of Fashion-MNIST and CIFAR-10; the results on single classes are provided in Appendix.
A possible explanation for this phenomenon is that the tabular datasets in our implementation have far fewer features (no more than 279) than the image datasets, and the second term of (<ref>) is a much weaker constraint for preserving data information than that of (<ref>). As a consequence, Double-MMD RGP (<ref>) is able to preserve enough key information on the tabular data but loses much more important information on the image data than the original RGP (<ref>). Meanwhile, the generalization error of MMD for high-dimensional samples or distributions is often larger than that for low-dimensional ones; to ensure that MMD accurately measures the distance between two high-dimensional distributions, the sample sizes should be sufficiently large.
We use Sinkhorn RGP (<ref>) to conduct experiments on Abalone, Arrhythmia, and Thyroid datasets and the results are reported in Table <ref>. In all implementations, ϵ is set to 0.01 and the a, b are uniform. In keeping with our expectation, the performance of Sinkhorn RGP (<ref>) is similar to or better than the original RGP (<ref>) for all four objective distributions, whereas the time cost of Sinkhorn RGP (<ref>) is much higher. We do not experiment with Sinkhorn RGP for the image dataset since the time cost is too higher.
§.§ Ablation Study
§.§.§ The Gaussian Kernel Function for MMD
We use the Gaussian kernel exp(-γ‖𝐱 - 𝐲‖^2) for MMD in optimization objective and set γ = 1/d^2 in all experiments, where d=1/n(n-1)∑^n_i=1∑^n_j=1‖𝐱_i - 𝐱_j ‖ denotes the mean Euclidean distance among all training samples.
To show the influence of γ, we fix γ to each value in {0.1, 1, 10, 100} and run experiments on Fashion-MNIST.
As shown in Table <ref>, there are differences in individual cases, but the gaps in the average results are not significant. This demonstrates that our methods are not sensitive to γ.
§.§.§ The Coefficient λ of Reconstruction Term in Optimization Objective
The coefficient λ is a key hyperparameter in problem (<ref>). Now we explore the influence of λ on model performance.
Figures <ref>, <ref> show F1-scores of our methods with λ varying from 0 to 1000 on the tabular datasets. It can be observed that a too small or too large λ lowers the performance of RGP. When λ is very small, the reconstruction term of (<ref>) has little impact on the training objective, and f_θ easily transforms the training data to the target distribution but ignores the original data distribution (see Figure <ref>). On the other hand, when λ is very large, the MMD term of the objective becomes negligible in the whole training objective, and f_θ, under the constraint of the reconstruction term, concentrates on the original data distribution but cannot learn a good mapping from the data distribution to the target distribution. Figure <ref> illustrates the influence of the hyper-parameter λ on the training set of the Thyroid dataset. We see that f_θ transforms the training data to the target distribution better as λ decreases. The blue points and orange points in Figure <ref> denote samples from the target distribution and samples from the training data, respectively.
§ CONCLUSION
We have presented a novel and simple framework for one-class classification and anomaly detection. Our method aims to convert the data distribution into a simple, compact, and informative target distribution that can be easily violated by abnormal data. We presented four target distributions, and the numerical results showed that the four target distributions have relatively close performance, with uniform on hypersphere being more effective than the other distributions in most cases. Furthermore, we explored two extensions of the original RGP and analyzed the performance differences among them. Importantly, our methods achieve competitive performance compared with state-of-the-art AD methods on all benchmark datasets considered in this paper, and the improvements are remarkable on the tabular datasets.
|
http://arxiv.org/abs/2307.04920v1 | 20230710220024 | Enhanced Food Availability can Deteriorate Fitness through Excessive Scrounging | [
"Robin Vacus",
"Amos Korman"
] | cs.GT | [
"cs.GT",
"q-bio.PE"
] |
Enhanced Food Availability can Deteriorate Fitness
through Excessive Scrounging
Robin VacusCNRS, located at the Research Institute on the Foundations of Computer Science (IRIF), Paris, France (e-mail: [email protected]). and Amos KormanCNRS, located at the French-Israeli Laboratory on Foundations of Computer Science, UMI FILOFOCS, CNRS, UP7, TAU, HUJI, WIS International Joint Research Unit, Tel-Aviv, Israel (e-mail: [email protected]).
===============================================================================================================================================================================================================================================================================================================================================================================
In group foraging situations, the conventional expectation is that increased food availability would enhance consumption, especially when animals prioritize maximizing their food intake. This paper challenges this conventional wisdom by conducting an in-depth game-theoretic analysis of a basic producer-scrounger model, in which animals must choose between intensive food searching as producers or moderate searching while relying on group members as scroungers. Surprisingly, our study reveals that, under certain circumstances, increasing food availability can amplify the inclination to scrounge to such an extent that it paradoxically leads to a reduction in animals' food consumption compared to scenarios with limited food availability. We further illustrate a similar phenomenon in a model capturing free-riding dynamics among workers in a company. We demonstrate that, under certain reward mechanisms, enhancing workers' production capacities can inadvertently trigger a surge in free-riding behavior, leading to both diminished group productivity and reduced individual payoffs. Our findings underscore the significance of contextual factors when comprehending and predicting the impact of resource availability on individual and collective outcomes.
Braess's paradox, a thought-provoking result in game theory, demonstrates that in certain transportation networks, adding a road to the network can paradoxically increase traffic latency at equilibrium <cit.>. In a similar vein, our study demonstrates how improvement in underlying conditions, which may initially seem beneficial, can actually lead to degraded performance. However, instead of focusing on network flows as in Braess's paradox, our findings relate to contexts of productive groups, highlighting the impact of free-riding behavior.
Productive groups, such as workers in a company, collaborating researchers, or ensembles of foraging animals, consist of individuals who not only benefit from their own resource generation or findings, but also enjoy the added advantage
of reaping the rewards of others' contributions
<cit.>.
For example, workers in a company may receive performance-based bonuses as a reward for their productivity, while also benefiting from the collective production of their peers, through stock shares or other profit-sharing mechanisms.
Similarly, in the realm of joint research projects, the success of the endeavor contributes to the collective prestige of the researchers, yet those who make substantial contributions often receive heightened recognition and prestige. Likewise, in group foraging scenarios, animals that first discover food patches often have an opportunity to feed before other group members join in, granting them the ability to directly consume a portion of the food they found and secure a larger share
<cit.>.
Within such productive group contexts, the pervasive occurrence of free-riding becomes apparent <cit.>. Free-riding refers to individuals exploiting collective efforts or shared resources without contributing their fair share. In team projects, for instance, free-riders neglect their assigned tasks to avoid costs or risks while still benefiting from the project's overall success <cit.>.
This phenomenon is also remarkably prevalent in animal foraging contexts, where individuals opportunistically engage in scrounging or kleptoparasitism, feeding off prey discovered or captured by others <cit.>.
The framework of Producer-Scrounger (PS) games is a widely used mathematical framework for studying free-riding in foraging contexts <cit.>. In a PS game, players are faced with a choice between two strategies: producer and scrounger. The interpretation of these strategies varies according to the context, but generally, a producer invests efforts in order to produce or find more resources, whereas a scrounger invests less in producing or finding resources, and instead relies more on exploiting resources produced or found by others.
Based on the rules of the particular PS game, specifying the production and rewarding mechanisms, each animal chooses a strategy and the system is assumed to converge into an equilibrium state, where each animal cannot improve its own calorie intake by changing its strategy <cit.>.
This paper examines the impact of individual production capacity on the resulting payoffs in equilibrium configurations. The first PS game we consider aims to model a scenario consisting of a group of foraging animals, with each animal striving to maximize its own food intake.
Intuitively, as long as the group size remains unchanged, one may expect that, even if it may trigger more opportunistic behavior <cit.>, increasing food abundance should ultimately improve consumption rather than diminish it.
Likewise, within a productivity-based reward system in a company, one may expect that enhancing individual productivity levels would boost group productivity and subsequently increase workers' payoffs, despite a possible increase in free-riding behavior. However, our findings uncover a more nuanced reality, unveiling a remarkably pronounced detrimental effect of free-riding behavior and emphasizing that the existence of such a positive correlation between individual productivity and payoffs is strongly contingent on the specific characteristics of the setting.
§.§ Results
We investigate two types of PS games: a foraging game involving animals searching for food and a company game involving a group of workers in a company. Our main objective is to analyze the effects of changes in individual production capabilities on players' payoffs, evaluated at equilibrium configurations. To facilitate comparisons across different parameter settings, we ensure that the games we examine have unique
equilibrium configurations (see SI, <Ref>). We say that a PS game exhibits a Reverse-Correlation phenomenon if an increase in individuals' production capacities leads to a decrease in the players' payoff, when evaluated at equilibrium configurations (see Methods).
We begin with the Foraging game, which is a generalization of the classical PS game in <cit.>. The main difference with the classical model is that our model considers two types of food, instead of a single type as previously assumed.
The Foraging game.
To illustrate our model, consider a scenario involving a group of animals engaged in fruit picking from trees (see <Ref>). Each animal aims to maximize its fitness, which is determined by the amount of food it consumes. The trees in this scenario contain both low-hanging fruit, accessible to both producers and scroungers, and high-hanging fruit, which can only be reached by producers. When an animal picks fruit, it retains a portion for its own consumption (let's say 70%), while the remaining fruit falls to the ground. Scroungers, instead of picking high-hanging fruit, focus on scanning the ground for fallen fruit. Fallen fruit is distributed equally among all scroungers and the animal that originally obtained it.
More precisely, consider n≥ 2 animals, where each of which needs to choose to be either a producer or a scrounger.
We assume that a producer finds an amount of food corresponding to F_P = 1+γ calories, where, adhering to the trees example above, 1 corresponds to the amount of high-hanging fruit and γ is a parameter that governs the animal's access to low-hanging fruit. In contrast, a scrounger directly finds only low-hanging fruit, corresponding to F_S = γ calories. After finding food (of any type) consisting of F calories, the animal (either producer or scrounger) consumes a fraction s ∈ [0,1] of what it found (called the “finder's share”) and shares the remaining (1-s)F calories equally with all scroungers. See <Ref> for a schematic illustration of the structure of the Foraging game.
The payoff of a player is defined as the capacity of its calorie intake.
Hence, for each 0≤ k≤ n, the payoff of each pure strategy in the presence of exactly k producers in the population is:
π_P^(k) = s (1+γ) + (1-s) (1+γ)/(1+n-k), π_S^(k) = γ + k(1-s) (1+γ)/(1+n-k),
where the second equation follows since scrounger-to-scrounger interactions compensate each other, and hence can be ignored in the expression of the payoff. Note that the classical model <cit.> is retrieved by setting γ=0, which essentially implies that there is only one type of food.
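For concreteness, these pure-strategy payoffs translate directly into code; the sketch below is our own illustration (it is not taken from the released implementation), with parameter values chosen arbitrarily.

def foraging_payoffs(n, k, s, gamma):
    # Payoffs when exactly k of the n animals are producers: a producer finds
    # 1 + gamma calories, a scrounger finds gamma calories, the finder keeps a
    # share s, and the rest is split among the finder and the n - k scroungers.
    f_p = 1 + gamma
    share = (1 - s) * f_p / (1 + n - k)
    pi_p = s * f_p + share          # producer payoff
    pi_s = gamma + k * share        # scrounger payoff
    return pi_p, pi_s

# Example: n = 4 animals, finder's share s = 0.3, low-hanging fruit gamma = 0.5.
print(foraging_payoffs(n=4, k=2, s=0.3, gamma=0.5))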
We study what happens to the payoffs of players at equilibrium configurations, denoted by π_⋆, as we let γ increase (see Methods). This increase aims to capture the case in which the low-hanging fruit becomes more abundant in the environment.
Note that for each fixed k, both π_P^(k) and π_S^(k) are increasing in γ. Hence, simply increasing γ, without changing the strategy profile, necessarily results in improved payoffs. However, allowing players to modify their strategies after such a change may potentially lead to enhanced scrounging at equilibrium, which can have a negative impact on the payoffs. Nevertheless, as mentioned earlier, one might expect that this negative effect would be outweighed by the overall improvement in fruit availability, resulting in an increase in consumption rather than a decrease.
This intuition becomes apparent when comparing the scenarios with γ=0 and γ=1. As γ increases from 0 to 1, we can expect an increase in the proportion of scroungers due to the rising ratio of F_/F_=γ/(1+γ). However, even if the system with γ=1 ends up consisting entirely of scroungers, the average food consumption of a player (which equals 1) would still be at least as large as that of any strategy profile in the γ=0 case. Nonetheless, as shown here, upon closer examination within the interval γ∈[0,1], a different pattern is revealed.
We combine simulations (<Ref>) with mathematical game-theoretical analysis (<Ref> in SI) to disclose a Reverse-Correlation phenomenon in the Foraging game. Specifically, for the case of n=3 players, we prove in <Ref> that for any finder's share s<1/2, there exists an interval of values for γ over which the Reverse-Correlation phenomenon occurs.
In the case of 2 players, the Reverse-Correlation phenomenon does not happen over an interval, and
instead, we prove in <Ref>
that there exists a critical value of γ at which π_⋆ decreases locally.
In our simulations, which focus on n=4 players, a noticeable decline is observed in the payoffs at equilibrium as γ increases over a relatively large sub-interval of [0,1] (<Ref>).
The Company game. We consider a PS game aiming to model a scenario with a group of n≥ 2 workers of equal capabilities who collaborate to produce a common product for a company. (Alternatively, by replacing the salary received by a worker with a notion of prestige, the game can also capture a scenario where a group of researchers collaborate in a research project.)
Each worker is assigned a specific part of the project and can choose between two pure behavioral strategies.
A producer pays an energetic cost of c>0 units and with probability p produces a product of quality γ (otherwise, with probability 1-p, it produces nothing). In contrast, a scrounger pays no energetic cost and with probability p produces a product of lower quality γ'=aγ for some given 0≤ a<1 (similarly, with probability 1-p, nothing is produced).
Let I={1,2,…,n}, and let q_i denote the quality of the product made by worker i, for i∈ I, with q_i=0 if no product is made by this player. We define the total production as:
Γ=∑_i∈ I q_i.
We assume that the salary σ_i of player i is proportional to a weighted average between the quality of the products made by the workers, with more weight given to q_i.
In fact, by appropriately scaling the system, we may assume without loss of generality that the salary is equal to this weighted average. Formally, we set:
σ_i = s q_i + (1-s)/(n-1) · ∑_j∈ I∖{i} q_j,
for some s ∈ [1/n,1].
Note that s=1 implies that the salary each worker receives is identical to the quality of its own production, and s=1/n represents equally sharing the quality of the global product between the workers.
Next, we aim to translate the income salary of a player into his payoff using a utility function, denoted by ϕ(·).
These quantities are expected to be positively correlated; however, the correlation may in fact be far from linear. Indeed, this is supported by the seminal work of Kahneman and Deaton <cit.>, which found that people's emotional well-being increases relatively fast as their income rises from very low levels, but then levels off beyond a certain salary threshold. To capture such a correlation, we assume that ϕ is monotonically non-decreasing, concave, and bounded.
In addition, the payoff of worker i is decreased by its energetic investment. Finally,
π_i:= ϕ(σ_i)-c_i,
where the energetic investment c_i=c>0 if i is a producer and c_i=0 if i is a scrounger. See <Ref> for an illustration of the semantic structure of the game.
The question of whether or not the system incurs a Reverse-Correlation phenomenon turns out to depend on the model's parameters, and, in particular, on the function ϕ(x). For example, when ϕ : x ↦ x (i.e., the case that the salary is converted entirely into payoff), there is no Reverse-Correlation phenomenon (see Section <ref> in SI).
However, for some concave and bounded functions ϕ(x) the situation is different.
We combine mathematical analysis with simulations
considering the function (see inset in <Ref>):
ϕ(x) = 1-exp(-2 x).
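To make the payoff structure concrete before turning to the analysis, the expected payoff of a focal worker can be estimated by straightforward Monte Carlo simulation. The sketch below is our own illustration (not the simulation code used for the figures); it uses this utility ϕ, and all parameter values are arbitrary.

import numpy as np

def expected_payoff(producer, p_mix, n=4, gamma=1.0, a=0.5, p=0.5, c=0.1,
                    s=0.5, trials=20_000, seed=0):
    # Monte Carlo estimate of a focal worker's expected payoff when each of the
    # other n - 1 workers is a producer with probability p_mix (illustrative values).
    rng = np.random.default_rng(seed)
    phi = lambda x: 1.0 - np.exp(-2.0 * x)          # utility of the salary
    total = 0.0
    for _ in range(trials):
        roles = rng.random(n - 1) < p_mix           # other workers: producer or not
        quality = np.where(rng.random(n - 1) < p,   # their (random) product qualities
                           np.where(roles, gamma, a * gamma), 0.0)
        q_own = (gamma if producer else a * gamma) if rng.random() < p else 0.0
        salary = s * q_own + (1 - s) / (n - 1) * quality.sum()
        total += phi(salary) - (c if producer else 0.0)
    return total / trials

# Example: producer vs. scrounger payoff when everyone else produces with probability 0.5.
print(expected_payoff(True, 0.5), expected_payoff(False, 0.5))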
Our mathematical analysis proves the presence of a Reverse-Correlation phenomenon for the case of two workers (Theorem <ref> in the SI). Interestingly, this result holds for every s < 1, demonstrating that the Reverse-Correlation phenomenon can occur even when the payoffs of individuals are substantially biased towards their own production compared to the production of others.
Our simulations consider the case of n=4 workers and reveal (Figure <ref>) that for certain parameters, letting γ increase over a range of values results in a reduction in payoffs in equilibrium, thus indicating a Reverse-Correlation phenomenon. Moreover, as γ increases over a range of values we also observe
a substantial reduction in total production at equilibria.
While the general shape of the utility function ϕ(x)= 1-exp(-2 x) is justifiable, the function itself was chosen somewhat arbitrarily. To strengthen the generality of our results, we also provide in the SI (<Ref>) simulations supporting the Reverse-Correlation phenomenon under another type of non-decreasing, concave, and bounded utility function, specifically,
ϕ(x)= min(1,x).
A necessary condition.
Finally, we identify a necessary condition for the emergence of a Reverse-Correlation phenomenon in arbitrary PS models. Specifically, we prove (SI, <Ref>) that a Reverse-Correlation phenomenon can occur only if the definition of the producer's payoff is sensitive to the fraction of scroungers in the population.
An interesting consequence of this condition is that a seemingly minor change in the definition of the Foraging game can prevent the occurrence of the Reverse-Correlation phenomenon. Recall that in this game, when an animal finds food, it consumes a fraction s of it (the finder's share), and the remaining 1-s fraction falls to the ground and is then equally shared between the animal and all scroungers.
If the game is changed such that when a producer finds food, it only consumes the finder's share and does not eat at all from the food that falls on the ground (i.e., only the scroungers eat from it), then the game stops satisfying the aforementioned necessary condition. Indeed, in this case, the payoff of a producer would always be 1+γ irrespective of the number of scroungers.
Hence, the modified game does not exhibit a Reverse-Correlation phenomenon, regardless of the parameters involved.
§.§ Discussion
In foraging contexts, it is commonly anticipated that an increase in food abundance would result in higher consumption, which, in turn, would lead to population growth over time. In contrast, this paper introduces the intriguing possibility of a reversed scenario: that under certain producer/scrounger conditions, if animals have sufficient time to update their producer/scrounger strategy and reach a stable configuration before reproducing <cit.>, then an increase in food abundance can paradoxically result in reduced consumption, which, in turn, can lead to a decline in population size! Note that this idea can also be viewed from the opposite perspective, namely, that by reducing food abundance, the inclination to scrounge can decrease, resulting in improved food consumption, ultimately leading to an increase in population size.
The Reverse-Correlation phenomenon corresponds to a decrease in payoffs as underlying conditions improve. The counter-intuitive aspect of it stems from the fact that players aim to maximize their payoffs, yet when conditions improve, they are driven to perform worse.
Another measure of interest is the total production, defined as the sum of production over all players (<Ref>). Observe that in the Foraging game, since the animals eventually consume all food found by the group, the total production (i.e., the total food found) at equilibrium is proportional to the payoff π_⋆, and hence their dynamics are similar.
This implies that whenever an increase in γ results in a decrease in payoff at equilibrium (indicating a Reverse-Correlation phenomenon), the same increase in γ also leads to a decrease in total production at equilibrium. In contrast, in the Company game, production is not fully represented in the payoffs, since some of it is “lost” when translating salaries into utilities. Additionally, the distinction between payoffs and production is further emphasized due to the energetic cost incurred by producers, which is reflected in their payoffs. Despite this distinction, as observed in <Ref>,
the measure of total production also exhibits a decrease across a range of γ values. This phenomenon may carry particular importance for system designers, such as the company's principal, as it challenges a fundamental assumption underlying bottom-up approaches, namely, that as long as the system naturally progresses without external disruptions, improving individual performances should lead to enhanced group performances.
We demonstrated the Reverse-Correlation phenomenon on two basic game theoretical models. As evident by these games, the occurrence of this counter-intuitive phenomenon is highly contingent on the specific details of the game.
For example, the Foraging game considers two types of food: low-hanging and high-hanging fruit (instead of just one type as considered in the classical game in <cit.>). Only producers have access to high-hanging fruit, while both producers and scroungers can access low-hanging fruit. Similarly to the classical model, when an animal finds food, it consumes a portion s of it and the remaining 1-s portion is equally shared between this animal and all scroungers. The Reverse-Correlation phenomenon emerges as the abundance of low-hanging fruit increases. However, as we showed, if one modifies the model so that the remaining 1-s portion is shared only between the scroungers, then the system no longer exhibits a Reverse-Correlation phenomenon. Hence, while at first glance this change may appear minor, it has a profound impact on the dynamics.
In the Company game, a key aspect of the model concerns the choice of the utility function, which captures the relationship between salary and payoff. Inspired by the work of Kahneman and Deaton <cit.>, we focused on non-decreasing, concave, and bounded utility functions. Within this family of functions, we identified two that exhibit a Reverse-Correlation phenomenon. However, we note that not all utility functions in this family enable this phenomenon.
In conclusion, this paper uncovers a counter-intuitive phenomenon that can arise in productive group contexts involving rational players. It reveals that under certain conditions, increasing individual production efficiency can paradoxically lead to diminished payoffs and overall group production, due to a significant rise in free-riding behavior. These findings provide valuable insights into the complex dynamics at play, underscoring the intricate relationship between individual and group performances, as well as the detrimental impact of free-riding behavior. Moreover, our results highlight the nuanced consequences of contextual factors in understanding and predicting the impact of increased (or decreased) resource availability on both individual and collective outcomes.
§.§ Methods
We consider two types of PS models, for which we combine analytic investigations with computer simulations. In both models, we assume that both producers and scroungers are able to produce, but that producers are expected to produce more.
In our models, the payoffs and total production are positively correlated with the number of producers.
We consider a parameter γ that is positively correlated to the expected production capacities of both producers and scroungers. To check what happens as individual capabilities improve, we increase γ and observe how the payoff and total production measures change, for configurations at equilibria.
We focus on the strong definition of equilibria, known as Evolutionary Stable Strategy (ESS), using the standard definition as introduced by Maynard Smith and Price <cit.>.
Specifically, given a PS game, let qp denote the expected payoff of a player if it chooses to be a producer with probability q, in the case that all n-1 remaining players are producers with probability p.
We say that p_⋆∈ [0,1] is an ESS if and only if for every q ∈ [0,1] such that q ≠ p_⋆,
(i) either p_⋆p_⋆ > qp_⋆,
(ii) or p_⋆p_⋆ = qp_⋆ and p_⋆q > qq.
To be able to compare instances with different parameters, we make sure that for every value of γ, the game we consider always has a unique ESS, termed p_⋆(γ). In such a case, we write π_⋆(γ) = π_p_⋆(γ),p_⋆(γ) the payoff at the ESS, and omit the parameter γ when clear from the context.
In our rigorous analysis, presented in the SI, we prove the existence and uniqueness of the ESS, for the corresponding scenarios.
To determine the ESS in our simulations, we utilize simple procedures that take the values of p and q as inputs and calculate qp.
Then, we search for the specific value of p that satisfies (i) 1p = 0p, (ii) for every q<p, 1q > 0q and (iii) for every q>p, 1q < 0q, which together are sufficient conditions for p to be the unique ESS (see SI, <Ref>).
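A minimal sketch of this search is given below for the Foraging game (our own illustration of the procedure, not the released code, with arbitrary parameter values). Pi(q, p) computes the expected payoff of a focal player producing with probability q while each of the other n-1 players produces with probability p, by summing over the binomial number of producers among them; since the payoff difference Pi(1, p) - Pi(0, p) is decreasing in p for this game, the candidate ESS can be located by bisection.

from math import comb

def Pi(q, p, n=4, s=0.3, gamma=0.5):
    # Expected payoff of a focal player producing with probability q when each
    # of the other n - 1 players produces with probability p.
    def pure(producer, k_others):
        k = k_others + (1 if producer else 0)           # total number of producers
        share = (1 - s) * (1 + gamma) / (1 + n - k)     # share of one producer's dropped food
        return s * (1 + gamma) + share if producer else gamma + k * share
    total = 0.0
    for k_others in range(n):
        w = comb(n - 1, k_others) * p**k_others * (1 - p)**(n - 1 - k_others)
        total += w * (q * pure(True, k_others) + (1 - q) * pure(False, k_others))
    return total

def find_ess(tol=1e-9, **kw):
    # Bisection on g(p) = Pi(1, p) - Pi(0, p); the sign conditions correspond
    # to (i)-(iii) above when g is decreasing in p.
    g = lambda p: Pi(1.0, p, **kw) - Pi(0.0, p, **kw)
    if g(1.0) >= 0:
        return 1.0                                      # producer is dominant
    if g(0.0) <= 0:
        return 0.0                                      # scrounger is dominant
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = find_ess(n=4, s=0.3, gamma=0.5)
print(p_star, Pi(p_star, p_star, n=4, s=0.3, gamma=0.5))  # ESS and its payoff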
Both the code used in the simulations and the code employed to generate the figures were implemented in Python. For further details and access to the code, please refer to <cit.>.
We say that the system incurs a Reverse-Correlation phenomenon if increasing γ over a certain interval yields decreased payoff when evaluated at (the unique) ESS. In other words, this means that π_⋆(γ) is a decreasing function of γ over this interval.
Acknowledgements. The authors would like to thank Yossi Yovel, Ofer Feinerman, Yonatan Zegman and Yannick Viossat for helpful discussions.
Supplementary Information
§ UNIQUENESS OF ESS
The following sufficient condition for the existence and uniqueness of an ESS is well-known. We state and prove it below for the sake of completeness.
If p_⋆∈ [0,1] is such that (i) 1p_⋆ = 0p_⋆, (ii) for every q<p_⋆, 1q > 0q and (iii) for every q>p_⋆, 1q < 0q, then p_⋆ is a unique ESS.
By assumption (i), we have for every q ∈ [0,1]:
qp_⋆ = q 1p_⋆ + (1-q) 0p_⋆ = p_⋆ 1p_⋆ + (1-p_⋆) 0p_⋆ = p_⋆p_⋆.
Thus, to show that p_⋆ is an ESS, we need to check the second condition in the definition.
We start by considering the case that q < p_⋆. By assumption (ii), it implies that 1q > 0q, so
p_⋆q = p_⋆ 1q + (1-p_⋆) 0q > q 1q + (1-q) 0q = qq.
Similarly, in the case that q > p_⋆, assumption (iii) implies that 1q < 0q, so
p_⋆q = p_⋆ 1q + (1-p_⋆) 0q > q 1q + (1-q) 0q = qq.
By the second condition in the definition, this implies that p_⋆ is an ESS.
Finally, we prove the unicity property.
Let p ≠ p_⋆.
If p<p_⋆, then 1p > 0p by assumption (ii), and p<1. Therefore,
1p > p 1p + (1-p) 0p = pp.
If p>p_⋆, then 1p < 0p by assumption (iii), and p>0. Therefore,
0p > p 1p + (1-p) 0p = pp.
In both cases, p is not an ESS, which concludes the proof of <Ref>.
§ ANALYSIS OF THE FORAGING GAME
The goal of this section is to prove <Ref>. Note that the theorem considers n=2,3. For the case of 3 players, the theorem states that as long as the finder's share satisfies s<1/2, there exists an interval of values for γ over which the Reverse-Correlation phenomenon occurs. In contrast, in the case of 2 players, the Reverse-Correlation phenomenon does not happen over an interval, and
instead, there exists a critical value of γ at which π_⋆ decreases locally. In fact, this happens even when the finder's share is close to 1.
Consider the Foraging game with γ≥ 0 and s<1.
* If n=3, then for every γ≥ 0, there is a unique ESS.
Moreover, for every s < 1/2, there exist γ_min, γ_max > 0 such that the payoff π_⋆(γ) (and hence, also the total production) at ESS is strictly decreasing in γ on the interval [γ_min,γ_max].
* If n=2, then for every γ≠γ_s, where γ_s = (1+s)/(1-s), there is a unique ESS.
Moreover, π_⋆(γ) is increasing on [0,γ_s) and on (γ_s,+∞].
However, for every ϵ∈ (0,1/2), π_⋆(γ_s-ϵ)>π_⋆(γ_s+ϵ).
§.§ Proof of Theorem <ref>
Towards proving the theorem, we first establish the following lemma, which quantifies the expected payoffs of the two pure strategies, conditioning on other agents choosing to be producers with probability p.
For every 0≤ p<1,
1p = s F_P + (1-s)F_P·(1-p^n)/(n(1-p)),
and
0p = F_S + (1-s)F_P· p ·(n(1-p)+p^n-1)/(n(1-p)^2).
These expressions can be extended by continuity at p=1, giving
11 = F_P
and
01 = F_S + (n-1)(1-s)F_P/2.
Fix a player i. Consider the case that Player i is a producer, and that each player j≠ i is a producer with probability p.
Let X_p be the random variable indicating the number of scroungers in the population. By <Ref>,
1p = s F_P + (1-s) F_P·1/(1+X_p).
By definition, X_p ∼(n-1,1-p). The first part of the claim, concerning 1p, now follows using <Ref>, that implies that (1/(1+X_p)) = 1-p^n/n(1-p).
Now, consider the case that Player i is a scrounger, and that each player j≠ i is a producer with probability p.
Let Y_p be the random variable indicating the number of producers in the population. By <Ref>,
0p = F_S + (1-s) F_P·Y_p/(1+n-Y_p).
By definition, Y_p ∼(n-1,p). The second part of the claim, concerning 0p, now follows using <Ref>, that implies that
Y_p/1+n-Y_p = Y_p/2+(n-1)-Y_p = p ·n(1-p)+p^n-1/n(1-p)^2.
This completes the proof of Lemma <ref>.
In order to characterize the (unique) ESS,
we first define the following quantities:
A(γ) = n(F_S - s F_P)/((1-s)F_P) = n(γ - s(1+γ))/((1-s)(1+γ)), γ_1 = 2/((n-1)(1-s))-1, γ_2 = n/((n-1)(1-s))-1.
We have A(γ_1) = -n(n-3)/2 and A(γ_2) = 1.
First, we rewrite
A(γ) = n(γ - s(1+γ))/((1-s)(1+γ)) = n ·(γ(1-s) - s)/((1-s)(1+γ))
= n ·(1 - 1/((1-s)(1+γ))).
Plugging in the definition of γ_1 and γ_2, we obtain
A(γ_1) = n ·(1 - (n-1)/2) = -n(n-3)/2 and A(γ_2) = n ·(1 - (n-1)/n) = 1,
as stated.
Next, for every γ, the following result identifies the unique ESS.
(a) For every n ≥ 2, for every γ∈ [0,γ_1) ∪ (γ_2, +∞), there is unique ESS, termed p_⋆(γ), that satisfies p_⋆(γ) = 1 on [0,γ_1) and p_⋆(γ) = 0 on (γ_2,+∞).
(b) for every n ≥ 3, for every γ∈ [γ_1,γ_2], there is unique ESS, termed p_⋆(γ). Moreover, p_⋆ is continuously differentiable on [γ_1,γ_2], p_⋆(γ_1) = 1 and p_⋆(γ_2) = 0.
Define the following function for 0≤ p<1.
f(p) = (1/(1-p))·((1-p^n)/(1-p) - np).
We next identify lim_p→ 1f(p).
Function f can be extended to a continuous function at p=1 by setting f(1) = -n(n-3)/2.
Let x = 1-p. Using Taylor expansion at x=0, we have:
f(x) = 1/x1-(1-x)^n/x - n(1-x) = 1/xnx-n(n-1)/2 x^2 + o(x^3)/x - n(1-x)
= 1/x n 1-n-12 x + o(x^2) - n(1-x) = n/x - n-32 x + o(x^2) = -n(n-3)/2 + o(x).
Therefore, lim_p → 1 f(p) = -n(n-3)/2, which concludes the proof of the observation.
To compute the ESS, we need to compare 1p and 0p.
For every p ∈ [0,1], 1p > 0p if and only if f(p) > A(γ), and 1p < 0p if and only if f(p) < A(γ).
By <Ref>, for every p ∈ [0,1],
1p > 0p ⟺ s F_P + (1-s)F_P·(1-p^n)/(n(1-p)) > F_S + (1-s)F_P· p ·(n(1-p)+p^n-1)/(n(1-p)^2)
⟺ (1-p^n)/(1-p) - p ·(n(1-p)+p^n-1)/(1-p)^2 > n(F_S - s F_P)/((1-s)F_P).
By definition, the right hand side is equal to A(γ). Let us rewrite the left hand side:
1-p^n/1-p - p ·n(1-p)+p^n-1/(1-p)^2 = 1/1-p (1-p) 1-p^n/1-p - np + p 1-p^n/1-p = 1/1-p1-p^n/1-p - np = f(p),
which concludes the proof of the first equivalence in <Ref>. The second equivalence is obtained similarly.
f is non-increasing in p.
Moreover, if n ≥ 3, then f is strictly decreasing in p.
First, consider the case that n=2. Then,
f(p) = 1/1-p1-p^2/1-p - 2p = 1/1-p (1+p) - 2p = 1,
so f is non-increasing.
Now, consider the case that n ≥ 3.
Let us write f(p) = u(p)/v(p), with
u(p) = 1-p^n/1-p - np, v(p) = 1-p.
We have
u'(p) = -n p^n-1(1-p) + (1-p^n)/(1-p)^2 - n, v'(p) = -1,
so
u'(p) · v(p) = -n p^n-1 + 1-p^n/1-p - n(1-p), u(p) · v'(p) = - 1-p^n/1-p + np.
Therefore,
u'(p) · v(p) - u(p) · v'(p) = -n p^n-1 + 2 1-p^n/1-p - n = 2 1-p^n/1-p - n(1+p^n-1).
Finally,
f'(p) = u'(p) · v(p) - u(p) · v'(p)/v(p)^2 = - n(1-p)(1+p^n-1) - 2(1-p^n)/(1-p)^3.
Let us define the ratio:
g_0(p) = n(1-p)(1+p^n-1)/2(1-p^n).
Next, we show that g_0 is strictly greater than 1.
To this aim, we study g_0 by differentiating it several times. Define:
g_1(p) = (n-1)p^n-2(1-p^2)+p^2n-2-1, g_2(p) = 2p^n-np^2+n-2, g_3(p) = -2np(1-p^n-2).
Since n ≥ 3,
g_3(p) < 0.
We have
g_2'(p) = g_3(p) < 0,
so g_2 is strictly decreasing, and hence g_2(p) > g_2(1) = 0.
We have
g_1'(p) = (n-1)p^n-3 g_2(p) > 0,
so g_1 is strictly increasing, and g_1(p) < g_1(1) = 0.
Eventually, we have:
g_0'(p) = n/2·-1-p^n-1+(n-1)p^n-2(1-p)·(1-p^n)+(1-p)(1+p^n-1)· n p^n-1/(1-p^n)^2
= n/2· -1-p^n-1+(n-1)p^n-2(1-p)+p^n+p^2n-1-(n-1)p^2n-2(1-p)+n(1-p)p^n-1+n(1-p)p^2n-2/(1-p^n)^2
= n/2· p^n-2 -p+(n-1)(1-p)+p^2+np(1-p) +p^2n-2-1 /(1-p^n)^2
= n/2·g_1(p)/(1-p^n)^2 < 0,
so g_0 is strictly decreasing, and g_0(p) > g_0(1) = 1.
Therefore, n(1-p)(1+p^n-1) > 2(1-p^n). By <Ref>, this implies that f'(p) < 0, which concludes the proof of <Ref>.
Function A is (strictly) increasing in γ.
Using <ref>, we obtain
dA(γ)/dγ = n/(1-s)(1 + γ)^2 > 0,
from which <Ref> follows.
See <Ref> for an overview of the following arguments.
Proof of (a). Assume n≥ 2, and consider the case that γ < γ_1 (<Ref>). By Claims <ref>, <ref> and <ref>, and <Ref>,
for every p ∈ [0,1],
f(p) ≥ f(1) = -n(n-3)/2 = A(γ_1) > A(γ).
By <Ref>, this implies that 1p > 0p.
Thus, for every p<1 and every q ∈ [0,1],
pq = p 1q + (1-p) 0q < 1q.
On the one hand, <Ref> implies that for every p < 1, p cannot satisfy neither condition (i) nor (ii) in the definition of ESS.
On the other hand, <Ref> implies that p_⋆ = 1 will always satisfy condition (i) in the definition of ESS.
Finally, we conclude that on [0,γ_1), p_⋆(γ) = 1 is the only ESS.
Next, consider the case that γ > γ_2 (<Ref>). By Claims <ref>, <ref> and <ref>, for every p ∈ [0,1],
f(p) ≤ f(0) = 1 = A(γ_2) < A(γ).
By <Ref>, this implies that 1p < 0p.
Similarly, we conclude that on (γ_2,+∞], p_⋆(γ) = 0 is the only ESS.
Proof of (b). Consider the case that n≥ 3 and γ_1 ≤γ≤γ_2 (<Ref>).
By Claim <ref>, f : [0,1] ↦ [f(1),f(0)] is a bijection, and we can consider the inverse function f^-1 : [f(1),f(0)] ↦ [0,1].
Moreover, by Claims <ref>, <ref> and <ref>, and <Ref>,
f(1) = A(γ_1) ≤ A(γ) ≤ A(γ_2) = f(0).
Therefore, there is a unique p_⋆∈ [0,1] such that f(p_⋆) = A(γ).
By <Ref>, we have
f(p_⋆) = A(γ) 1p_⋆ = 0p_⋆,
for every q < p_⋆, f(q) > f(p_⋆) 1p_⋆ > 0p_⋆,
for every q > p_⋆, f(q) < f(p_⋆) 1p_⋆ < 0p_⋆.
By <Ref>, this implies that p_⋆ is the unique ESS.
As a function of γ on the interval [γ_1,γ_2], p_⋆ satisfies p_⋆(γ) = f^-1(A(γ)).
Function f is continuously differentiable, and the derivative is non-zero by <Ref>, so f^-1 is continuously differentiable.
Moreover, A is also continuously differentiable.
Therefore, p_⋆ is continuously differentiable.
Finally, p_⋆ verifies p_⋆(γ_1) = f^-1(A(γ_1)) = f^-1(f(1)) = 1, and p_⋆(γ_2) = f^-1(A(γ_2)) = f^-1(f(0)) = 0.
If 3 ≤ n < min{ 1+1/s, 1+2/1-s}, then there exists 0 ≤γ_min < γ_max such that π_⋆(γ) is decreasing on the interval [γ_min,γ_max].
Since n ≥ 3, by definition, γ_1 < γ_2 and so [γ_1 , γ_2] is a non-empty interval.
We have that
n ≤ 1+2/1-s2/(n-1)(1-s)-1 ≥ 0 γ_1 ≥ 0.
Moreover,
n < 1+1/s2/(n-1)(1-s) > n/(n-1)(1-s)-1 1+γ_1 > γ_2.
Next, note that by definition:
π_⋆(γ) = p_⋆(γ) ·1p_⋆(γ)(γ) + (1-p_⋆(γ)) ·0p_⋆(γ)(γ),
By assumption on n, we know that γ_1≥ 0 and 1+γ_1>γ_2.
Since both 1p(γ) and 0p(γ) are continuously differentiable in p and in γ (from their expression in <Ref>), and
since p_⋆(γ) is continuously differentiable in γ on [γ_1,γ_2] (by statement (b) in <Ref>),
then π_⋆(γ)
is continuously differentiable in γ on [γ_1,γ_2]. Moreover, it satisfies
π_⋆(γ_1) = F_P = 1+γ_1 (since p_⋆(γ_1) = 1), and π_⋆(γ_2) = F_S = γ_2 < π_⋆(γ_1) (since p_⋆(γ_2) = 0).
Therefore, we can find an interval [γ_min,γ_max] ⊆ [γ_1,γ_2] on which π_⋆(γ) is decreasing, which concludes the proof of <Ref>.
When n=3,
n < min{ 1+1/s, 1+2/1-s} 0 < s < 1/2,
and the first item in <Ref> follows as a special case of <Ref>.
When n=2, γ_1 = γ_2 = γ_s = (1+s)/(1-s).
By statement (a) in <Ref>, for every γ < γ_s, there is a unique ESS satisfying p_⋆(γ) = 1 and so π_⋆(γ) = 1+γ.
Similarly, for every γ > γ_s, there is a unique ESS satisfying p_⋆(γ) = 0 and so π_⋆(γ) = γ.
Therefore, π_⋆ is increasing on [0,γ_s) and on (γ_s,+∞).
Moreover, let ϵ∈ (0,1/2). Since s ≥ 0, we have γ_s ≥ 1, and
π_⋆(γ_s-ϵ) = 1+γ_s-ϵ > γ_s+1/2 > γ_s+ϵ = π_⋆(γ_s+ϵ),
which establishes the second item in <Ref>, and thus concludes the proof of theorem.
§.§ Technical Claims
Let X ∼ Bin(n,p). If 0 < p ≤ 1, then
E[1/(1+X)] = (1-(1-p)^(n+1))/((n+1)p).
Moreover, if p = 0, then E[1/(1+X)] = 1.
The claim holds trivially for p=0. Consider the case that p>0.
1/1+X = ∑_k=0^n 1/1+k(X=k)
= ∑_k=0^n 1/1+k·nk p^k (1-p)^n-k
= 1/(n+1)p∑_k=0^n n+1k+1 p^k+1 (1-p)^(n+1)-(k+1) using nk = n+1k+1·k+1/n+1.
By setting k' = k+1, we can rewrite the sum
∑_k=0^n n+1k+1 p^k+1 (1-p)^(n+1)-(k+1) = ∑_k'=0^n+1n+1k' p^k' (1-p)^(n+1)-k' - (1-p)^n+1 = 1-(1-p)^n+1,
which concludes the proof of <Ref>.
Let X ∼ Bin(n,p). If 0 ≤ p < 1, then
E[X/(2+n-X)] = p ·((n+1)(1-p)+p^(n+1)-1)/((n+1)(1-p)^2).
Moreover, if p = 1, then E[X/(2+n-X)] = n/2.
The claim holds trivially for p=1. Consider the case that p<1.
Let q = 1-p > 0. We have
X/2+n-X
= ∑_k=0^n k/n-k+2(X=k) = ∑_k=1^n k/n-k+2·nk p^k (1-p)^n-k
= ∑_k=0^n-1 n-k/k+2·nk q^k (1-q)^n-k k ↦ n-k, q =1-p
= ∑_k=1^n n-k+1/k+1·nk-1 q^k-1 (1-q)^n-k+1 k ↦ k-1
= ∑_k=1^n k/k+1·nk q^k-1 (1-q)^n-k+1 using nk-1 = nk·k/n-k+1
= ∑_k=1^n k/n+1·n+1k+1 q^k-1 (1-q)^n-k+1 using nk = n+1k+1·k+1/n+1
= ∑_k=2^n+1 k-1/n+1·n+1k q^k-2 (1-q)^n-k+2 k ↦ k-1
= 1-q/q^2 (n+1)·∑_k=1^n+1 (k-1) n+1k q^k (1-q)^(n+1)-k.
By expectation of the binomial distribution,
∑_k=1^n+1 k n+1k q^k (1-q)^(n+1)-k = (n+1)q.
Moreover, by the binomial theorem,
∑_k=1^n+1n+1k q^k (1-q)^(n+1)-k = 1-(1-q)^n+1.
Putting every equation together, we obtain
X/2+n-X = (1-q) ·(n+1)q+(1-q)^n+1-1/(n+1)q^2,
which concludes the proof of <Ref>.
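Both closed-form expectations are easy to sanity-check numerically; the following short sketch (ours, for illustration only) compares Monte Carlo estimates with the two formulas above.

import numpy as np

rng = np.random.default_rng(0)
n, p = 7, 0.35
x = rng.binomial(n, p, size=2_000_000)

# E[1/(1+X)] versus (1 - (1-p)^(n+1)) / ((n+1) p)
print(np.mean(1.0 / (1.0 + x)), (1 - (1 - p) ** (n + 1)) / ((n + 1) * p))

# E[X/(2+n-X)] versus p * ((n+1)(1-p) + p^(n+1) - 1) / ((n+1)(1-p)^2)
print(np.mean(x / (2.0 + n - x)),
      p * ((n + 1) * (1 - p) + p ** (n + 1) - 1) / ((n + 1) * (1 - p) ** 2))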
§ ANALYSIS OF THE COMPANY GAME
§.§ Preliminaries
Following classical notations from game theory, we define, for n=2 players:
(Reward) R(γ,s,c,p,a) = 11
(Sucker) S(γ,s,c,p,a) = 10
(Temptation) T(γ,s,c,p,a) = 01
(Punishment) P(γ,s,c,p,a) = 00
For simplicity, we do not mention (γ,s,c,p,a) when there is no risk of confusion.
The following result is well-known in game theory folklore. However, we provide a proof here for the sake of completeness.
If n=2 and T>R>S>P, then there is a unique ESS, that satisfies
π_⋆ = (ST-RP)/(S+T-R-P).
We have, by definition 1p = p R + (1-p) S and 0p = p T + (1-p) P. Note that
1p = 0p p = S-P/S+T-R-P.
Define
p_⋆ = (S-P)/(S+T-R-P) = 1/(1+(T-R)/(S-P))
with T-R > 0, S - P > 0, so p_⋆∈ [0,1].
Let
π_⋆ = 1p_⋆ = 0p_⋆.
By assumption in the theorem,
d/dp1p = R-S < T-P = d/dp0p,
and hence, by Eq. (<ref>),
for every q < p_⋆, 1q > 0q, and for every q > p_⋆, 1q < 0q.
By <Ref>, this, together with Eq. (<ref>), implies that p_⋆ is a unique ESS.
To conclude the proof of <Ref>, we just check that π_⋆ satisfies <Ref>.
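As a usage note, the formula above is straightforward to evaluate; the helper below is our own illustration, with arbitrary payoff values that satisfy T>R>S>P.

def ess_of_2x2(R, S, T, P):
    # Mixed ESS and its payoff for a 2x2 game with T > R > S > P:
    # p_star = (S - P) / (S + T - R - P), pi_star = (S*T - R*P) / (S + T - R - P).
    denom = S + T - R - P
    return (S - P) / denom, (S * T - R * P) / denom

print(ess_of_2x2(R=3.0, S=2.0, T=4.0, P=1.0))  # -> (0.5, 2.5)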
§.§ There is no Reverse-Correlation phenomenon for ϕ : x ↦ x
Consider the case that there are n=2 players and that ϕ : x ↦ x.
Let c≥ 0, s∈[1/2,1], p,a∈ [0,1], and set γ_0 = c/(p s (1-a)).
Then for every γ≠γ_0, there is a unique ESS.
Moreover, the payoff π_⋆ and the total production Γ_⋆ corresponding to the ESS are both strictly increasing functions of γ.
Recall that in the Company game,
π_1 = ϕ(s q_1 + (1-s) q_2) - c_1.
Consider the case that ϕ : x ↦ x. For every s∈ [1/2,1],
<Ref> gives
(π_1) = s (q_1) + (1-s) (q_2) - c_1.
Therefore,
R(γ,s,c,p,a) = s ·(p γ) + (1-s) ·(p γ) - c = p γ - c,
S(γ,s,c,p,a) = s ·(p γ) + (1-s) ·(p a γ) - c = p γ (s+a-s a) - c,
T(γ,s,c,p,a) = s ·(p a γ) + (1-s) ·(p γ) = p γ (1-s+s a),
P(γ,s,c,p,a) = s ·(p a γ) + (1-s) ·(p a γ) = p a γ.
Recall that γ_0 = c/(p s (1-a)).
* If γ < γ_0, then T>R and P>S, in which scrounger is a dominant strategy. Therefore, there is a unique ESS, and we have:
π_⋆=Γ_⋆=P = paγ.
In particular, these values are increasing in γ.
* If γ > γ_0, then R>T and S>P, and hence producer is a dominant strategy. Therefore, there is a unique ESS, and
π_⋆=R = p γ - c, Γ_⋆=pγ.
In particular, both these values are increasing in γ.
* If γ = γ_0, then R=T and P=S, which implies that no player can unilaterally change its payoff.
Indeed, for every p,q ∈ [0,1],
pq = pqR + (1-p)qT + p(1-q)S + (1-p)(1-q)P = qR + (1-q)S,
so for every p,p',q ∈ [0,1], pq = p'q.
In this degenerate case, neither condition (i) nor (ii) in the definition of ESS can be satisfied, so there is no ESS.
To conclude the proof of <Ref>, we only need to show that π_⋆ and Γ_⋆ do not decrease at the discontinuity point γ = γ_0.
Since γ_0 ≥ c/(p(1-a)), we have
lim_ϵ→ 0^+Γ_⋆(γ_0-ϵ) = lim_ϵ→ 0^+π_⋆(γ_0-ϵ) = apγ_0 ≤ pγ_0-c = lim_ϵ→ 0^+π_⋆(γ_0+ϵ) ≤lim_ϵ→ 0+Γ_⋆(γ_0+ϵ).
Thus, overall, the payoffs of players at equilibrium, π_⋆, and the total production, Γ_⋆, are both increasing in γ.
§.§ The Reverse-Correlation phenomenon in the Company game
In this section, we demonstrate the Reverse-Correlation phenomenon in the Company game for two utility functions. Specifically, we first prove that the Reverse-Correlation phenomenon can occur when assuming
the utility function ϕ : x ↦ 1-exp(-2 x).
Then, in <Ref> we provide simulations that demonstrate the Reverse-Correlation phenomenon assuming the utility function ϕ : x ↦min(1,x).
Consider the Company game with n=2, ϕ : x ↦ 1-exp(-2 x), p=1/2, a=1/2.
For every s < 1, there exist c_0>0, and γ_min , γ_max > 1, for which there is a unique ESS such that π_⋆ is decreasing in γ on the interval [γ_min,γ_max].
Let
γ_i = γ Player i is a producer,
γ/2 otherwise.
By definition, if player i succeeds in producing a product (which happens with probability p=1/2) then the quality of its product is γ_i.
Hence,
<Ref> gives
(π_1) = 1/4ϕ(s γ_1 + (1-s) γ_2) + ϕ(s γ_1) + ϕ((1-s) γ_2) - c_1.
Plugging in ϕ(x) = 1-exp(-2x), we obtain
R(γ,s,c) = 1/4 3 - e^-2γ - e^-2s γ - e^-2(1-s)γ - c,
S(γ,s,c) = 1/4 3 - e^-(1+s)γ - e^-2s γ - e^-(1-s)γ - c,
T(γ,s,c) = 1/4 3 - e^-(2-s)γ - e^-s γ - e^-2(1-s)γ,
P(γ,s,c) = 1/4 3 - e^-γ - e^-s γ - e^-(1-s)γ.
Note that R, S, T and P are all increasing functions of γ.
Consequently, if the strategies of Players 1 and 2 remain unchanged, then (π_1) is also increasing in γ. However, we will show that at equilibrium, the tendency of the player to be a scrounger increases in γ to such an extent that ultimately reduces (π_1).
The next step towards proving <Ref> is to show that for some specific values of s and c, the Company game is in fact a game of chicken.
For every s < 1, there exists a value c_0 = c_0(s) and an interval [γ_min,γ_max] such that for every γ∈ [γ_min,γ_max],
T(γ,s,c_0) > R(γ,s,c_0) > S(γ,s,c_0) > P(γ,s,c_0).
In particular, by <Ref>, this implies that for every γ∈ [γ_min,γ_max], there is a unique ESS satisfying
π_⋆(γ,s,c_0) = ST-RP/S+T-R-P.
Before proving <Ref>, we need two preliminary technical results.
The next claim implies, in particular, that if c=0 then a producer is a dominant strategy.
For all γ,s such that s < 1, R(γ,s,0) > S(γ,s,0),T(γ,s,0), and S(γ,s,0),T(γ,s,0) > P(γ,s,0).
By pairwise comparison of the terms in <Ref>.
The next claim implies that T-P > R-S, or in other words, that scroungers lose more than producers when the other player switches from producer to scrounger.
For all γ,s,c such that s < 1,
S(γ,s,c)-P(γ,s,c) > R(γ,s,c)-T(γ,s,c).
We have
4(R-S) = e^-(1+s)γ - e^-2γ + e^-(1-s)γ - e^-2(1-s)γ,
and
4(T-P) = e^-γ - e^-(2-s)γ + e^-(1-s)γ - e^-2(1-s)γ.
Factoring by e^-γ, this gives
(T-P) - (R-S) = e^-γ/4 1 + e^-γ - e^-sγ - e^-(1-s)γ.
Factoring again by e^-γ, and using the convexity of the function e^x+e^γ-x, we obtain
(T-P) - (R-S) = e^-2γ/4 e^γ + 1 - e^s γ + e^(1-s) γ > 0,
which concludes the proof of <Ref>.
Let us fix γ_0 > max(1,ln(1+√(2)) / s). By <Ref>, we can take c_0 = c_0(s) such that
0 < R(γ_0,s,0)-T(γ_0,s,0) < c_0 < S(γ_0,s,0)-P(γ_0,s,0).
As a consequence,
R(γ_0,s,c_0) = R(γ_0,s,0) - c_0 < T(γ_0,s,0)=T(γ_0,s,c_0),
and
S(γ_0,s,c_0) = S(γ_0,s,0) - c_0 > P(γ_0,s,0)=P(γ_0,s,c_0).
Finally, by <Ref>, we have that
T(γ_0,s,c_0) > R(γ_0,s,c_0) > S(γ_0,s,c_0) > P(γ_0,s,c_0).
By continuity, there exist γ_min,γ_max such that max(1,ln(1+√(2)) / s) < γ_min < γ_0 < γ_max and for every γ∈ [γ_min,γ_max], <Ref> holds,
which concludes the proof of <Ref>.
For every γ∈ [γ_min,γ_max],
π_⋆(γ,s,c_0) = 1 - c_0 e^sγ·(e^sγ+1)/(e^sγ-1) = 1 - c_0 e^sγ·coth(sγ/2).
We start from the expression of π_⋆(γ,s,c_0) given by <Ref>.
First, we compute ST-RP using <Ref>. For that purpose, we expand ST and RP separately, and then simplify.
We have (each line corresponds to one term of S multiplied by all the terms of T):
16 · ST = 3 - e^-(1+s)γ - e^-2s γ - e^-(1-s)γ - 4c_0·3 - e^-(2-s)γ - e^-s γ - e^-2(1-s)γ
= 9 - 3e^-(2-s)γ - 3e^-sγ - 3e^-2(1-s)γ
- 3e^-(1+s)γ + e^-3γ + e^-(1+2s)γ + e^-(3-s)γ
- 3e^-2sγ + e^-(2+s)γ + e^-3sγ + e^-2γ
- 3e^-(1-s)γ + e^-(3-2s)γ + e^-γ + e^-3(1-s)γ
-12c_0 + 4c_0e^-(2-s)γ + 4c_0e^-sγ + 4c_0 e^-2(1-s)γ.
Similarly,
16 · RP = 3 - e^-2γ - e^-2s γ - e^-2(1-s)γ - 4c_0· 3 - e^-γ - e^-s γ - e^-(1-s)γ
= 9 - 3e^-γ - 3e^-sγ - 3e^-(1-s)γ
- 3e^-2γ + e^-3γ + e^-(2+s)γ + e^-(3-s)γ
- 3e^-2sγ + e^-(1+2s)γ + e^-3sγ + e^-(1+s)γ
- 3e^-2(1-s)γ + e^-(3-2s)γ + e^-(2-s)γ + e^-3(1-s)γ
-12c_0 + 4c_0e^-γ + 4c_0e^-sγ + 4c_0 e^-(1-s)γ.
When computing the difference, many terms disappear, leaving us with:
16 · (ST-RP) = 4 e^-γ + e^-2γ - e^-(1+s)γ - e^-(2-s)γ - c_0 e^-γ + e^-(1-s)γ - e^-(2-s)γ - e^-2(1-s)γ.
Factoring the right hand side by e^-γ, we obtain
ST-RP = e^-γ/4 1 + e^-γ - e^-sγ - e^-(1-s)γ - c_0 1 + e^sγ - e^-(1-s)γ - e^-(1-2s)γ.
Using <Ref>, we obtain
ST-RP/S+T-R-P = 1-c_0 ·1 + e^sγ - e^-(1-s)γ - e^-(1-2s)γ/1 + e^-γ - e^-sγ - e^-(1-s)γ.
Factoring the numerator of the fraction by e^sγ and rearranging, we get
1-c_0 e^sγ·1 - e^-(1-s)γ + e^-sγ - e^-γ/1 - e^-(1-s)γ - e^-sγ - e^-γ.
Dividing both the numerator and denominator by e^-sγ - e^-γ, and using the fact that
1-e^-(1-s)γ/e^-sγ - e^-γ = e^s γ·e^-sγ - e^-γ/e^-sγ - e^-γ = e^s γ,
we finally get
ST-RP/S+T-R-P = 1-c_0 e^sγ·e^s γ+1/e^s γ-1,
which concludes the proof of <Ref>.
For every γ∈ [γ_min,γ_max],
∂/∂γ π_⋆(γ,s,c_0) = (c_0 s e^sγ/2)·(1 - sinh(sγ))/sinh(sγ/2)^2.
We start from the expression of <Ref>, and derive using the fact that
d/dx(x) = -1/sinh(x)^2.
More precisely, by <Ref>,
∂/∂γπ_⋆(γ,s,c_0) = ∂/∂γ1 - c_0 e^s γs γ/2
= -c_0 s e^s γs γ/2 + c_0 s e^s γ/2 sinhs γ/2^2
= c_0 s e^s γ/2 sinhs γ/2^2·1 - 2 s γ/2sinhs γ/2^2.
Then, we observe that for every x ∈,
2 (x) sinh(x)^2 = 2 e^x+e^-x/e^x-e^-xe^x-e^-x/2^2 = (e^x+e^-x)(e^x-e^-x)/2 = e^2x-e^-2x/2 = sinh(2x).
Plugging this in the last equation concludes the proof of <Ref>.
By definition,
γ > γ_min > ln(1+√(2))/s = sinh^-1(1)/s,
so sinh(s γ) > 1. By <Ref>, this implies that ∂/∂γπ_⋆(γ,s,c_0) < 0 on the interval [γ_min,γ_max], which concludes the proof of <Ref>.
§ A NECESSARY CONDITION FOR THE REVERSE-CORRELATION PHENOMENON
We assume that the payoffs are positively correlated with the number of producers in the group, that is,
for every q ∈ [0,1] and every γ≥ 0, p ↦qp(γ) is non-decreasing in p.
In addition, we assume that the payoffs are positively correlated with the parameter γ, that is,
for every q ∈ [0,1] and every p ∈ [0,1], γ↦qp(γ) is non-decreasing in γ.
Under these assumptions, we identify the following necessary condition for the emergence of a Reverse-Correlation phenomenon.
For any PS model in which the payoff of producers does not depend on the strategies of other players, there is no Reverse-Correlation phenomenon.
More precisely, if there are two values γ_1,γ_2 such that γ_1 < γ_2 and two ESS
denoted p_⋆(γ_1) and p_⋆(γ_2), then the corresponding payoffs satisfy π_⋆(γ_1) ≤π_⋆(γ_2).
Fix a PS model.
By assumption in <Ref>,
For every γ≥ 0, p ↦1p(γ) does not depend on p.
In what follows, we will simply write 1p(γ) = π_(γ).
By definition of ESS, and by <Ref>, we have for every i ∈{1,2}:
equation1
p_⋆(γ_i) = 0 π_⋆(γ_i) = 00(γ_i) ≥π_(γ_i), .a
p_⋆(γ_i) = 1 π_⋆(γ_i) = π_(γ_i) ≥01(γ_i), .b
p_⋆(γ_i) ∉{0,1} π_⋆(γ_i) = π_(γ_i) = 0p_⋆(γ_i)(γ_i), .c
where <Ref> holds because 00(γ_i) ≥10(γ_i)=π_(γ_i).
As a consequence of <Ref>, we have
p_⋆(γ_i) ≠ 0 π_⋆(γ_i) = π_(γ_i) ≥0p_⋆(γ_i)(γ_i).
Now, let us
show that π_⋆(γ_1) ≤π_⋆(γ_2).
* If p_⋆(γ_1) = p_⋆(γ_2) = 0, then
π_⋆(γ_1) (<ref>)=00(γ_1) (<ref>)≤00(γ_2) (<ref>)=π_⋆(γ_2).
* If p_⋆(γ_1) ≠ 0 and p_⋆(γ_2) ≠ 0, then
π_⋆(γ_1) (<ref>)=π_(γ_1) (<ref>)≤π_(γ_2) (<ref>)=π_⋆(γ_2).
* If p_⋆(γ_1) ≠ 0 and p_⋆(γ_2) = 0, then
π_⋆(γ_1) (<ref>)=π_(γ_1) (<ref>)≤π_(γ_2) (<ref>)≤π_⋆(γ_2).
* If p_⋆(γ_1) = 0 and p_⋆(γ_2) ≠ 0, then
π_⋆(γ_1) (<ref>)=00(γ_1) (<ref>)≤0p_⋆(γ_2)(γ_1) (<ref>)≤0p_⋆(γ_2)(γ_2) (<ref>)≤π_⋆(γ_2).
This concludes the proof of <Ref>.
|
http://arxiv.org/abs/2307.07633v1 | 20230714211128 | Taming the Panda with Python: A Powerful Duo for Seamless Robotics Programming and Integration | [
"Jean Elsner"
] | cs.RO | [
"cs.RO"
] |
Taming the Panda with Python: A Powerful Duo for Seamless Robotics Programming and Integration
Jean Elsner
===============================================================================================
Franka Emika robots have gained significant popularity in research and education due to their exceptional versatility and advanced capabilities. This work introduces panda-py – a Python interface and framework designed to empower Franka Emika robotics with accessible and efficient programming. The panda-py interface enhances the usability of Franka Emika robots, enabling researchers and educators to interact with them more effectively. By leveraging Python's simplicity and readability, users can quickly grasp the necessary programming concepts for robot control and manipulation. Moreover, integrating panda-py with other widely used Python packages in domains such as computer vision and machine learning amplifies the robot's capabilities. Researchers can seamlessly leverage the vast ecosystem of Python libraries, thereby enabling advanced perception, decision-making, and control functionalities. This compatibility facilitates the efficient development of sophisticated robotic applications, integrating state-of-the-art techniques from diverse domains without the added complexity of ROS.
robotics, software, python, control, franka, emika, panda, panda-py
§ INTRODUCTION
In recent years, Python has emerged as a dominant language in the machine learning community, thanks to its extensive libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn <cit.>. However, its popularity is not limited to machine learning alone. Python is gaining significant traction in the robotics community as well <cit.>. While there have been occasional voices of concern from the robotics community regarding Python's performance for real-time and resource-intensive robotics tasks, it is worth noting that performance-critical components can be implemented in languages like C or C++ and seamlessly integrated with Python <cit.>. This combination allows developers to harness the high-level features and ease of use provided by Python while still achieving the desired performance <cit.>. Additionally, Python's cross-platform compatibility, portability, and extensive ecosystem of libraries make it an attractive choice for robotics. The language's ease of use and rapid prototyping capabilities further contribute to its growing adoption in the robotics community, enabling researchers and developers to quickly iterate, experiment, and deploy robotics systems with ease. Finally, the programming language's immense popularity and ease of use make it an excellent choice for robotics education, enabling students to quickly learn and experiment with robotics concepts.
Franka Emika robot manipulators have gained significant popularity in research and education due to their exceptional capabilities and versatility. These robots are highly sought after for their industrial repeatability, force sensitivity, and torque control interface, making them well-suited for various applications. Their ability to repeat tasks with high accuracy and their sensitive force-feedback capabilities enable researchers to explore areas such as human-robot interaction, collaborative robotics, and intricate manipulation tasks <cit.>. Efforts are already underway to integrate Franka Emika robots into educational settings, including schools in Germany, where the user-friendly browser-based interface called "Desk" enables users to program the robots graphically using drag and drop for simple tasks, facilitating robotics education at various levels <cit.>. For more advanced applications, however, users must access the provided interface either directly through the open-source C++ library libfranka or through its ROS wrapper. The C++ library has rather strict real-time requirements, and setting it up and programming it effectively can be a daunting task for novice programmers. Similarly, using ROS comes with considerable overhead, a steep learning curve, and limited portability and cross-platform support.
The panda-py[Earlier models of the Franka Emika robot were known as Panda, hence the name.] framework simplifies the programming, deployment, and installation process for Franka Emika robot systems. It offers an all-in-one solution with pre-packaged dependencies, ensuring a seamless experience out of the box. With panda-py, researchers and developers can focus on their work without the hassle of manual setup, benefiting from its user-friendly interface, extensive Python ecosystem, and effortless integration. It streamlines the process, making programming and experimentation with Franka Emika robots more accessible and efficient for research and education purposes. In this paper, the author aims to demonstrate the utility of panda-py in a tutorial style[All of the provided examples are also available online and are ready to run on real hardware.].
§ INSTALLATION AND SETUP
The panda-py software is implemented as a Python package. Specifically, it is distributed as a Python wheel, i.e., a pre-built binary that can be installed using the package manager pip. The wheel includes all the needed dependencies to connect to and control the robot, while the package manager will install the appropriate versions for the local platform. To install the package, execute
pip install panda-python
from a terminal[Visit the online repository at <http://github.com/JeanElsner/panda-py> for more information on how to build from source or install specific versions.]. Before the robot can be controlled, its brakes must be released and its control interface activated. Typically, this is done through the browser-based Desk interface <cit.>. However, this approach can prove inconvenient for headless setups or highly integrated systems. To address this challenge, panda-py incorporates a dedicated Desk client that performs these essential tasks programmatically and conveniently (cf. Code Block <ref>).
import panda_py
desk = panda_py.Desk(hostname, username, password)
desk.unlock()
desk.activate_fci()
Use the Desk client to connect to the web application on the control unit to unlock the brakes and activate the Franka Control Interface (FCI) for robot torque control.
Once the robot is prepared, a connection can be established by instantiating the Panda class with the robot's hostname as an argument, as exemplified in Code Block <ref>.
from panda_py import libfranka
panda = panda_py.Panda(hostname)
gripper = libfranka.Gripper(hostname)
Connect to the robot using the Panda class. The default gripper from Franka Emika does not support real-time control and can be controlled using the libfranka bindings directly.
The Panda class is a high-level wrapper with various convenience functions over libfranka's robot class. However, panda-py also includes bindings for all the low-level libfranka types and functions as part of a subpackage that may be used directly. This feature is used in Code Block <ref> to connect to the Franka Emika Hand.
For the remainder of the tutorial, we will assume that a connection to the robot hardware was established, and the Panda and Gripper instances were assigned to the variables panda and gripper, respectively.
§ BASIC ROBOT CONTROL
The panda-py package offers its users powerful features out of the box. There are modules for time-optimal motion generation <cit.>, analytical inverse kinematics <cit.>, a library of proven robust standard controllers, integrated state logging, and more. These components are implemented as CPython modules in C++ and can seamlessly be used in Python code. Table <ref> compares the runtimes of common API calls between the Python bindings and native C++. Motion generation can be accessed through the methods of the Panda class. The robot's neutral or starting pose can be reached with a single call to move_to_start. The phase space around this pose is characterized by high manipulability, reachability, and distance to joint limits. Code Block <ref> further demonstrates motion in joint space by adding a displacement to the homogeneous transform describing the end-effector pose and using the built-in inverse kinematics to compute the goal joint positions.
panda.move_to_start()
pose = panda.get_pose()
pose[2,3] -= .1
q = panda_py.ik(pose)
panda.move_to_joint_position(q)
Simple motion generation in joint space. The call to get_pose produces a 4×4 matrix representing the homogeneous transform from the robot base to the end-effector. The indices 2,3 refer to the third row and fourth column, respectively, i.e., the z-coordinate. The position in z is lowered by 0.1 m and passed to the inverse kinematics function to produce joint positions. Finally, the call to move_to_joint_position generates a motion from the current to the desired joint positions.
Similarly, Cartesian motions can be executed directly by providing a goal pose as seen in Code Block <ref>. Note that the interface can also compute motions with multiple waypoints. The integrated motion generators will compute time-optimal trajectories between the points while ensuring the robot adheres to velocity, acceleration, and joint limit constraints. Internally the trajectory will be traced by a joint impedance controller. In panda-py, the user can set various advanced parameters, such as control gains and allowed path deviation, when generating motion.
panda.move_to_start()
pose = panda.get_pose()
pose[2,3] -= .1
panda.move_to_pose(pose)
Simple motion generation in Cartesian space. The z-coordinate of the current end-effector pose is lowered by 0.1 m as in Code Block <ref>. However, the resulting pose is passed directly to move_to_pose to produce a motion in Cartesian space.
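Since the forward and inverse kinematics used above are module-level functions, they can also be exercised offline, presumably without a live robot connection; the round trip below is our own small example.

import numpy as np
import panda_py
from panda_py import constants

q0 = constants.JOINT_POSITION_START   # neutral joint configuration
T = panda_py.fk(q0)                   # forward kinematics, 4x4 homogeneous transform
q1 = panda_py.ik(T)                   # analytical inverse kinematics
print(np.allclose(panda_py.fk(q1), T, atol=1e-6))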
While the software features numerous shorthands and convenience methods, users can always access the full breadth of the libfranka library. For instance, a call to
panda.get_state()
will retrieve the latest libfranka RobotState received from the robot. Similarly, the Panda class also provides a reference to the libfranka Model associated with the running instance.
panda.get_model()
These Python wrappers offer the entire C++ API, i.e., class member variables and functions. The same is true for the previously initialized gripper. The gripper can be controlled using the class member functions grasp and move.
gripper.grasp(width, speed, force, epsilon_inner, epsilon_outer)
gripper.move(width, speed)
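For illustration, a possible call sequence might look as follows; all numeric values are our own assumptions (a 2 cm target width, 0.2 m/s, 20 N, and 1 cm tolerances), not library defaults or recommendations.

# Close on an object of roughly 2 cm width, then open the fingers again.
success = gripper.grasp(0.02, 0.2, 20.0, 0.01, 0.01)
print("grasp succeeded:", success)
gripper.move(0.08, 0.2)  # open to 8 cm at 0.2 m/s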
Function calls in panda-py allow users to use native Python types as arguments. More than that, the backend uses the powerful Eigen <cit.> library for linear algebra and will transparently and efficiently convert Eigen matrices to numpy <cit.> arrays without modifying the underlying memory structure.
Logging robot states is a ubiquitous requirement in robotics experiments, yet it can be challenging to set up, particularly when capturing states at high frequencies. However, panda-py offers a convenient solution with its integrated mechanism to write the whole state of the robot into a circular buffer at 1 kHz when activated. This feature simplifies the logging process, allowing users to easily capture and store data for subsequent evaluation, plotting, and other signal-processing tasks. Code Block <ref> illustrates how to enable logging and store the resulting buffer, while Figure <ref> shows plots of some of the captured telemetry.
[H]
from panda_py import constants
T_0 = panda_py.fk( constants.JOINT_POSITION_START)
T_0[1,3] = 0.25
T_1 = T_0.copy()
T_1[1,3] = -0.25
panda.move_to_pose(T_0)
panda.enable_logging(2e3)
panda.move_to_pose(T_1)
panda.disable_logging()
log = panda.get_log()
Using the integrated logging mechanism, the libfranka RobotState can be logged at a frequency of 1 kHz. Based on the starting pose, this example creates two end-effector poses, T_0 and T_1, displaced 0.25 to the left and right, respectively. Logging is enabled for this Panda instance before a motion is generated between these poses (lines 7 and 8). The enable_logging function takes the buffer size in number of steps as an argument. As such, 2e3 steps at 1 kHz correspond to a buffer holding the state of the past 2 seconds. After the motion is finished, logging is disabled, and the buffer is retrieved (lines 9 and 10).
Finally, for more advanced applications, there is a library of standard controllers. Controllers are classes instantiated by users and passed to the Panda class for execution. The controllers will run independently of the Python Global Interpreter Lock in the background and meet all of libfranka's real-time requirements. At the same time, the user may provide an input signal asynchronously.
[H]
import numpy as np
panda.move_to_start()
ctrl = controllers.CartesianImpedance()
x0 = panda.get_position()
q0 = panda.get_orientation()
runtime = np.pi*4.0
panda.start_controller(ctrl)
with panda.create_context(frequency=1e3, max_runtime=runtime) as ctx:
while ctx.ok():
x_d = x0.copy()
x_d[1] += 0.1*np.sin(ctrl.get_time())
ctrl.set_control(x_d, q0)
Running a panda-py controller. After initializing the controller, the current position and orientation are stored in x0 and q0, respectively, where q0 is a quaternion representation of the end-effector orientation. After starting the controller, a PandaContext is created from the Panda object (line 8). PandaContext is a convenient context manager that executes a loop at a fixed frequency. The call to PandaContext.ok throttles the loop and raises any control exceptions reported by libfranka. The use of PandaContext is optional, and users are free to manage the control flow how they wish. This example adds a periodic linear displacement along the y-axis to the initial pose (line 11). This results in the end-effector moving periodically from left to right in straight lines.
Code Block <ref> demonstrates the essentials of using a panda-py controller. The controller CartesianImpedance is instantiated and passed to the Panda class for execution. While the controller is running, the user can use its set_control method asynchronously to provide an input signal. The panda-py controllers provide many configuration options specific to the individual controller, such as control gains or filter settings. These options may be changed at run-time as well. In this example, the input signal is provided in a high-frequency loop without filtering for more fine-grained control. All controllers include virtual joint walls, so the input signal need not explicitly consider the joint limits. While various methods exist to control aspects such as the robot's reflex behavior and error recovery, the content covered in this chapter already provides a foundational level of control over the robot.
§ INTEGRATION WITH PYTHON PACKAGES
A notable aspect of panda-py is its ability to seamlessly integrate with popular Python packages, such as the Robotics Toolbox for Python <cit.> or MuJoCo <cit.>. Using Python bindings to integrate the robot hardware makes for singularly lightweight and straightforward integration. The middleware layer can be avoided entirely while all necessary hardware setup and preparation can be centralized. In our previous work, we found this aspect particularly convenient when running MuJoCo simulations with hardware in the loop for haptic interaction and predictive modeling <cit.>.
By leveraging the capabilities of these existing packages, researchers can easily extend the functionalities of Franka Emika robots, as is demonstrated in this final example. Specifically, a resolved rate motion controller utilizing reactive control based on quadratic programming is integrated with panda-py, leveraging the Robotics Toolbox for Python. This integration exemplifies the ease of extending panda-py to incorporate more complex functionalities, such as reactive collision avoidance and mobile manipulation. The code snippet in Code Block <ref> provides a clear representation of the integrated implementation. Notably, the solution to the controller's quadratic program yields joint velocities that seamlessly interface with the IntegratedVelocity controller.
import qpsolvers as qp
import roboticstoolbox as rtb
import spatialmath as sm
ctrl = controllers.IntegratedVelocity()
panda.move_to_start()
panda.start_controller(ctrl)
# Initialize roboticstoolbox model
panda_rtb = rtb.models.Panda()
# Set the desired end-effector pose
Tep = panda_rtb.fkine(panda.q) * sm.SE3(0.3, 0.2, 0.3)
# Number of joints in the panda which we are controlling
n = 7
arrived = False
# The original example runs in simulation with a control frequency of 20Hz
with panda.create_context(frequency=20) as ctx:
while ctx.ok() and not arrived:
# The pose of the Panda's end-effector
Te = panda_rtb.fkine(panda.q)
# Transform from the end-effector to desired pose
eTep = Te.inv() * Tep
# Spatial error
e = np.sum(np.abs(np.r_[eTep.t, eTep.rpy() * np.pi / 180]))
# Calculate the required end-effector spatial velocity for the robot
# to approach the goal. Gain is set to 1.0
v, arrived = rtb.p_servo(Te, Tep, 1.0)
# Gain term (lambda) for control minimisation
Y = 0.01
# Quadratic component of objective function
Q = np.eye(n + 6)
# Joint velocity component of Q
Q[:n, :n] *= Y
# Slack component of Q
Q[n:, n:] = (1 / e) * np.eye(6)
# The equality constraints
Aeq = np.c_[panda_rtb.jacobe(panda.q), np.eye(6)]
beq = v.reshape((6,))
# Linear component of objective function: the manipulability Jacobian
c = np.r_[ -panda_rtb.jacobm().reshape((n,)), np.zeros(6)]
# The lower and upper bounds on the joint velocity and slack variable
lb = -np.r_[panda_rtb.qdlim[:n], 10 * np.ones(6)]
ub = np.r_[panda_rtb.qdlim[:n], 10 * np.ones(6)]
# Solve for the joint velocities dq
qd = qp.solve_qp(Q, c, None, None, Aeq, beq, lb=lb, ub=ub, solver='daqp')
# Apply the joint velocities to the Panda
ctrl.set_control(qd[:n])
Resolved rate controller with reactive manipulability maximization. This example is from the Robotics Toolbox for Python <cit.>. To run it on the real hardware with panda-py requires only connecting the inputs and outputs of the control loop to panda-py, i.e., using the joint positions Panda.q and providing the control signal to IntegratedVelocity.set_control. Additionally, the inequality constraints to avoid the joint limits were removed, as panda-py controllers already have integrated joint limit avoidance using impedance control.
§ CONCLUSION
In conclusion, this research paper has introduced panda-py as a Python interface and framework that facilitates the programming of Franka Emika robots. The inclusion of concise and approachable code examples throughout the paper highlights panda-py's user-friendly nature and effectiveness in controlling the robots. It is worth noting that this paper provides only a glimpse into the extensive capabilities of panda-py, as it represents a dynamic and evolving ecosystem. Online resources, including additional examples, documentation, and the help of the robotics community, contribute to the continual maintenance and expansion of panda-py. Researchers and users are encouraged to explore these resources for a comprehensive understanding of panda-py's potential. The author aims to apply the same paradigm and lessons learned from developing panda-py to other robot hardware in future work. Additionally, integrating panda-py with reinforcement learning environments opens up exciting opportunities to explore robot learning.
§ ACKNOWLEDGMENT
The author gratefully acknowledges the funding support provided by the Lighthouse Initiative Geriatronics by StMWi Bayern (Project X, grant no. IUK-1807-0007// IUK582/001).
|
http://arxiv.org/abs/2307.03974v2 | 20230708132712 | Comparing EventB, $\{log\}$ and Why3 Models of Sparse Sets | [
"Maximiliano Cristiá",
"Catherine Dubois"
] | cs.SE | [
"cs.SE"
] |
Comparing EventB, {log} and Why3 Models of Sparse Sets
Maximiliano Cristiá
Catherine Dubois
August 12, 2023
=========================================================================================
Many representations for sets are available in programming language libraries. The paper focuses on sparse sets used, e.g., in some constraint solvers for representing integer variable domains, which are finite sets of values, as an alternative to range sequences. We propose in this paper verified implementations of sparse sets in three deductive formal verification tools, namely Event-B, {log} and Why3. Furthermore, we draw some comparisons regarding specifications and proofs.
§ INTRODUCTION
Sets are widely used in programs. They are sometimes first-class objects of programming languages, e.g. SETL <cit.> or {log} <cit.>,
but more frequently they are data structures provided in libraries. Many different representations are available, depending on the targeted set operations. In this paper, we focus on sparse sets, introduced by Briggs and Torczon in <cit.>, used in different contexts and freely available for different programming languages (Rust, C++ and many others). In particular,
sparse sets are used in constraint solvers as an alternative to range sequences or bit vectors for implementing domains of integer variables <cit.>, which are nothing other than mathematical finite sets of integers. Their use in solver implementations is motivated by at least the two following properties: searching and removing an element are constant-time operations—removing requires only two swapping
operations on arrays; sparse sets are cheap to trail and restore, which is a key point when backtracking.
Confidence in constraint solvers using sparse sets can be improved if the algorithms implementing the main operations are formally verified, as has been done by Ledein and Dubois in <cit.> for the traditional implementation of domains as range sequences. Hence, the main contribution of this paper is
a verified implementation of sparse sets for representing finite sets of integers in Event-B, {log} and Why3.
We prove that the implemented operations preserve the invariants and we also prove properties that can be seen as formal foundations of trailing and restoring. As far as we know, this is the first formally verified implementation of sparse sets, whereas it has been done for other representations e.g. <cit.>. All the specifications and proofs can be found here: <https://gitlab.com/cdubois/sets2023.git>.
It has been known for decades that there is no silver bullet for software engineering or software development. The best we can do as software engineers is to increase our toolbox as much as possible and use the best available tool in it for the problem at hand. This practical software engineering principle still applies when it comes to formal development, formal methods and formal verification. In our opinion the Formal Methods (FM for short) community should have as much information as possible about the relative advantages and disadvantages of different FM methods and tools. With the intention of shedding some light on the ups and downs of different FM, we specified and verified sparse sets with three different FM techniques. Then, a second contribution of this paper is a comparison of these FM w.r.t. aspects such as expressiveness, specification analysis and automated proof.
§ SPARSE SETS
We deal here with sets as subsets of natural numbers up to N-1, where N is any non-null natural number. A sparse set S is represented by two arrays of length N called mapD and domD (as in <cit.>), and a natural number sizeD. The array mapD maps any value v ∈ [0,N-1] to its index ind_v in domD; the value indexed by ind_v in domD is v. The main idea that brings efficiency when removing an element or testing membership is to split domD into two sub-arrays, domD[0,sizeD-1] and domD[sizeD, N-1], containing resp. the elements of S and the elements of [0,N-1] not in S. Then, if S is empty, sizeD
is equal to 0; if S is the full set, then sizeD is N.
Checking if an element i belongs to the sparse set S simply consists in the evaluation of the expression mapD[i]<sizeD. Removing an element from the set consists in moving this element
to domD[sizeD, N-1] (with 2 swaps in mapD and domD and decreasing sizeD). Binding S to the singleton set {v} follows the same idea: moving this element to the first place in domD and assigning the value 1 to sizeD.
In our formalizations, we only deal with two operations, namely removing an element from a sparse set and binding a sparse set to a singleton set, since these two operations are fundamental when solving constraints. In this context, we may also need to walk through all the elements of a variable domain, which means exploring domD[0..sizeD-1]. If minimal and maximal values are required, then they have to be maintained in parallel. This is outside the scope of this work.
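As an informal illustration of this data structure (independent of the three formal developments that follow), the two operations can be sketched in a few lines of Python; the identifiers mapD, domD and sizeD mirror the ones above, while the class itself is ours:

class SparseSet:
    # Subset of {0,...,n-1}; the first sizeD entries of domD are the members.
    def __init__(self, n):
        self.n = n
        self.domD = list(range(n))   # domD[mapD[v]] == v
        self.mapD = list(range(n))   # mapD[domD[i]] == i
        self.sizeD = n               # initially the full set

    def contains(self, v):
        return self.mapD[v] < self.sizeD   # constant-time membership test

    def _swap(self, i, j):
        # swap positions i and j in domD and keep mapD consistent (the two swaps)
        a, b = self.domD[i], self.domD[j]
        self.domD[i], self.domD[j] = b, a
        self.mapD[a], self.mapD[b] = j, i

    def remove(self, v):
        # move v into domD[sizeD..n-1] and shrink the member prefix
        if self.contains(v):
            self._swap(self.mapD[v], self.sizeD - 1)
            self.sizeD -= 1

    def bind(self, v):
        # make the set the singleton {v}: move v to the front of domD
        self._swap(self.mapD[v], 0)
        self.sizeD = 1

Trailing then amounts to saving sizeD before a choice point and restoring it upon backtracking, since removed elements stay parked in domD[sizeD..n-1]; this is the observation behind the properties proved below.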
§ EVENT-B FORMAL DEVELOPMENT
In this section we succinctly introduce the Event-B formal specification language and, in more detail, the Event-B models for sparse sets.
§.§ Event-B
Event-B <cit.> is a deductive formal method based on set theory and first-order logic allowing users to design correct-by-construction systems. It relies on a state-based modeling language in which a model, called a machine,
is made of a state and a collection of events allowing for state changes. The state consists of variables constrained by invariants.
Proof obligations are generated to verify the preservation of invariants by events. A machine may use a (mathematical) context which introduces abstract sets, constants, axioms or theorems. A formal design in Event-B starts with an abstract machine which is usually refined several times. Proof obligations are generated to verify the correctness of a refinement step.
An event may have parameters. When its guards are satisfied, its actions, if any, are executed, updating state variables. Actions may be -multiple- deterministic assignments, x,y:=e, f, or -multiple- nondeterministic ones, x,y :| BAP(x,x',y,y') where BAP is called a Before-After Predicate relating current (x, y) and next (x', y') values of state variables x and y.
In the latter case, x and y are assigned arbitrary values satisfying the BAP predicate. When using such a non-deterministic form of assignment, a feasibility proof obligation is generated in order to check that there exist values for x' and y' such that BAP(x,x',y,y') holds when the invariants and guards hold. Furthermore when this kind of action is used and refined, the concrete action updating x and y is required to assign them values which satisfy the BAP predicate.
In the following, we use Rodin, an Eclipse-based IDE for project management, model edition, refinement and proof, automatic proof obligation generation, model animation and code generation. Rodin supports automatic and interactive provers <cit.>. In this work we used the standard provers (AtelierB provers) and also the SMT solvers VeriT, CVC3 and CVC4. More details about Event-B and Rodin can be found in <cit.> and <cit.>.
§.§ Event-B formalization
The Event-B formalization is made of six components, i.e. two contexts, a machine and three refinements. Context Ctx introduces the bound N as a non-zero natural number and context Ctx1 extends the latter with helper theorems. The high-level machine gives the abstract specification. This model contains a state composed of a finite set D, constrained to be a subset of the (integer) range 0..N-1, and two events, to remove an element from D or set D as a singleton set (see Fig. <ref> in which bind is removed for lack of space).
The first refinement (see Fig.<ref>)
introduces the representation of the domain as a sparse set, i.e. two arrays mapD and domD modeled as total functions and also the variable sizeD which is a natural number in the range 0..N. Invariants inv4 and inv5 constrain mapD and domD to be inverse functions of each other.
The gluing invariant inv6 relates the states between the concrete and former abstract machines. So the set domD[0..sizeD-1] containing the elements of the subarray from 0 to sizeD-1 is exactly the set D.
Theorem inv7 is introduced to ease some interactive proofs; it is proved as a consequence of the previous formulas (inv1 to inv6).
It follows directly from a theorem of Ctx1 whose statement is inv7 where domD and mapD are universally quantified. Theorem inv8, also used in an interactive proof, and automatically proved by CVC3, states that domD is an injective function.
Variables mapD and domD are both set initially to the identity function on 0..N-1 and sizeD to N. So invariants are satisfied at the initial state. Machine SparseSets_ref1 refines the events of the initial machine by non deterministic events. So here the remove event assigns the three state variables with values that satisfy invariants and also such that sizeD strictly decreases and removed elements in domD are kept at the same place (properties in bold font). Event bind follows the same pattern (again not shown here).
The second refinement has the same state as the previous refinement (see Fig. <ref>). Its events implement the operations using the new state variables. It is a straightforward translation of the algorithms described in <cit.>.
The only reason to have introduced the intermediate model
SparseSets_ref1 is to express the properties written in bold font and thus generate, in the next refinement, proof obligations which, when discharged, will not only ensure that the events refined in Fig. <ref> preserve the invariants inv1, inv2 …inv6 but also the local properties regarding sizeD and domD[sizeD..N-1] (SIM proof obligations).
The feasibility (FIS) proof obligations generated by the non-deterministic events of SparseSets_ref1 require proving that there exist values such that the BAP predicate holds. We can prove it using the new values of domD, mapD and sizeD specified in the last refinement as witnesses. The simulation (SIM) proof obligations generated by events of SparseSets_ref2 require proving that the latter values again satisfy the BAP predicate used in SparseSets_ref1. In order not to do these interactive proofs twice, we generalize them and prove them as theorems of the context. Thus, to discharge the FIS and SIM proof obligations, we only have to instantiate these theorems to provide a proof.
A last algorithmic refinement, omitted here, refines the remove event into two events, removeLastButOne and removeLast. The former differs from remove only by its more restrictive guard; the latter is dedicated to the case where the element with index sizeD-1 in domD is removed, thus avoiding the unnecessary swapping.
§ {log} FORMAL DEVELOPMENT
In this section we briefly present the {log} tool and how we used it to encode the model of sparse sets.
§.§ {log}
{log} is a constraint logic programming (CLP) language and satisfiability solver where sets and binary relations are first-class citizens <cit.>. The tool implements several decision procedures for expressive fragments of set theory and set relation algebra including cardinality constraints <cit.>, restricted universal quantifiers <cit.>, set-builder notation <cit.> and integer intervals <cit.>. In previous works {log} has been satisfactorily tested against some known case studies <cit.>.
{log} code enjoys the formula-program duality. This means that {log} code can behave as both a formula and a program. When seen as a formula, it can be used as a specification on which verification conditions can be (sometimes automatically) proved. When seen as a program, it can be used as a (less efficient) regular program. Due to the formula-program duality, a piece of {log} code is sometimes called a forgram—a portmanteau word resulting from combining formula with program.
§.§ {log} formalization
The {log} formalization presented in this paper is the result of translating the Event-B abstract specification (i.e., Fig. <ref>) and the second refinement (i.e., Fig. <ref>). Both models can be easily translated into {log} by using the (still under development) state machine specification language (SMSL) defined on top of {log}
(see Fig. <ref> and <ref>) <cit.>. The notions of context and refinement are not available in SMSL. For this reason, refinements introduced in the Event-B model have to be manually encoded in {log}. The context is encoded simply as an axiom. In order to ensure that the {log} code verifies the properties highlighted in bold in Fig. <ref> as well as the gluing invariant (i.e., inv6), a few user-defined verification conditions are introduced as theorems. Since the first refinement is introduced only to express the properties written in bold, its events have not been encoded in {log}.
Figures <ref> and <ref> list only representative parts of the forgram.
We tried to use the same identifiers as in the Event-B models as much as possible. In this way, for example, the invariant labeled as inv6 in the SparseSets_ref1 machine (Fig. <ref>) keeps a corresponding name in the forgram. The names of variables in {log} cannot fully comply with those used in the Event-B models because {log} requires all variables to begin with a capital letter. So, for example, domD in the SparseSets_ref1 machine becomes DomD in {log}.
As can be seen in Fig. <ref>, the state machine specification language defined on top of {log} allows for the declaration of parameters (similar to context constants), state variables, axioms (similar to context axioms) and invariants. One of the parameters is used for the identity relation on the integer interval [0,N-1], defined by an axiom, which in turn is used in one of the invariants. As {log} is a CLP language implemented on top of Prolog, it inherits many of Prolog's features; in particular, integer expressions are evaluated by means of the is predicate. Along the same lines, all set operators are implemented in {log} as constraints; for example, one of them is true when its argument is the identity relation on a given set, and dedicated terms denote integer intervals such as [0,M].
A group of invariants in the forgram corresponds to invariant inv1 of the SparseSets_ref1 machine. Splitting invariants into smaller pieces is a good practice when using {log} as a prover because it increases the chances of automated proofs. An auxiliary predicate implements the negation of one of these invariants, since {log} does not automatically compute the negation of user-defined predicates. As a user-defined predicate can contain existential variables, its negation could involve introducing universal quantifiers which fall outside {log}'s decision procedures. Then, users are responsible for ensuring that all predicates are safe.
In one of the invariants we can see a constraint implementing the notion of restricted universal quantifier (RUQ). That is, for some formula ϕ and set A, it corresponds to ∀ X.(X ∈ A ⟹ ϕ(X)). In such a constraint it is possible to quantify over binary relations, in which case we have a quantified ordered pair rather than just a variable. Likewise, {log} offers a constraint implementing the notion of restricted existential quantifier (REQ). The important point about REQ and RUQ is not only their expressiveness but the fact that there is a decision procedure involving them <cit.>. In the forgram these constraints are used to state a double set inclusion equivalent to the formula domD[0 .. sizeD - 1] = D. If the user is not convinced or unsure about the validity of this equivalence, (s)he can use {log} itself to prove it.
Note that one predicate is not declared as an invariant because in Fig. <ref> its counterpart is a theorem that can be deduced from the previous invariants.
Therefore, we introduce it as a simple predicate but then we declare a theorem whose conclusion is that predicate. Later, {log} will include it as a proof obligation and will attempt to discharge it. Given that {log} is a satisfiability solver, if Φ is intended to be a theorem then we ask it to prove the unsatisfiability of ¬Φ.
Moving on to Fig. <ref>, we can see the encoding of the remove operation specified in the SparseSets_ref2 machine of Fig. <ref>, along with two user-defined proof obligations. In {log}, there is no global state, so state variables have to be included as explicit arguments of the clauses representing operations. Next-state variables are denoted by decorating the base name with an underscore character (i.e., the decorated name corresponds to the value of the variable in the next state). Another important difference between the Event-B and the {log} specifications is that in the latter we can use set unification to implement function application. For instance, the set term used in remove is equivalent to the predicate: ∃ y_2, y_5, domD_1. (domD = {sizeD - 1 ↦ y_2, y_1 ↦ y_5}∪ domD_1), where y_1 = mapD(v) (due to the previous set unification). The not-membership constraints following the equality constraint prevent {log} from generating repeated solutions. Hence, when remove is called with some set term in its fourth argument, this term is unified with the one above. If the unification succeeds, then the images of sizeD - 1 and mapD(v) through domD are available.
As said before, some user-defined proof obligations are introduced as theorems to ensure that the forgram verifies the gluing invariant (i.e., inv6) and the properties written in bold in machine SparseSets_ref1. Precisely, the first theorem states that if the gluing invariant holds and remove and its abstract version (not shown in the paper) are executed, then the gluing invariant holds in the next state.[remove and its abstract version can be distinguished by their arities.]
Likewise, the second theorem ensures that the second property written in bold in machine SparseSets_ref1 is indeed a property of the forgram. As can be seen, the theorem states that if remove is executed and the functional image[Computed by a user-defined predicate returning the relational image through a function.] of the interval of indexes holding the removed elements is taken through the current value of domD, then it must coincide with the functional image of the same interval through the new value of domD.
Once the specification is ready, we can call the verification condition generator (VCG) and run the verification conditions (VC) so generated.
The VCs include the satisfiability of the conjunction of all axioms, the satisfiability of each operation, and preservation lemmas for each and every operation and invariant. The corresponding command attempts to automatically discharge every VC.
An answer indicating failure means that, for some reason, {log} is unable to discharge the VC. Most of the time this is due to some missing hypothesis which, in turn, is due to the way the VCG generates the VCs. Briefly, when it comes to invariance lemmas, the VCG generates them with the minimum number of hypotheses.
By including minimal hypotheses, {log} has to solve a simpler goal, which reduces the chances of a complexity explosion. If the hypotheses are not enough, a dedicated command can be used to find potentially missing hypotheses.
In this way, users can edit the VC file, add the missing hypotheses and run the VCs again. If more hypotheses are still missing, the process can be repeated until the proof is done—or the complexity explosion cannot be avoided.
{log} discharges all the VC generated by the VCG for the present forgram.
§ WHY3 FORMAL DEVELOPMENT
In this section we briefly introduce the Why3 platform and describe in some detail our Why3 specification of sparse sets.
§.§ Why3
Why3 <cit.> is a platform for deductive program verification providing
a language for specification and programming, called WhyML, and relies on external automated and interactive theorem provers, to discharge verification conditions. In the context of this paper, we used Why3 with the SMT provers CVC4 and Z3.
Proof tactics are also provided, making Why3 a proof environment close to that of Rodin for interactive proofs. Why3 supports modular verification.
WhyML allows the user to write functional or imperative programs featuring polymorphism, algebraic data types, pattern-matching, exceptions, references, arrays, etc. These programs can be annotated with contracts and assertions and thus verified. User-defined types with invariants can be introduced; the invariants are verified at function call boundaries. Furthermore, to prevent logical inconsistencies, Why3 generates a verification condition to show the existence of at least one value satisfying the invariant. To help the verification, a witness is explicitly given by the user (see the by clause in Fig. <ref>).
The old and at operators can be used inside post-conditions and assertions to refer to the value of a mutable program variable at some past moment of execution. In particular, old t in a function post-condition refers to the value of term t when the function is called. Programs may also contain ghost variables and ghost code to facilitate specification and verification.
From verified WhyML programs, correct-by-construction OCaml programs (and recently C programs) can be automatically extracted.
§.§ Why3 formalization
From the Why3 library, we use pre-defined theories for integer arithmetic, polymorphic finite sets and arrays. From the latter, we use in particular the swap operation that exchanges two elements in an array and its specification using the exchange predicate.
We first define a record type whose mutable fields are a record containing the computational elements of a sparse set representation and a ghost finite set of integers which is the abstract model of the data structure. The type invariant of this record relates the abstract model with the concrete representation. It is used
to enforce consistency between them. Invariants enforcing consistency between the two arrays mapD and domD and the bound sizeD are attached to the inner type: the length of each array is N, their contents belong to 0..N-1, the two arrays are inverse of each other, and sizeD lies in the interval 0..N. These type definitions and related predicates are shown in Fig. <ref>.
Our Why3 formalization (see Fig. <ref>, where, again, bind is removed for lack of place) contains three functions, namely remove, bind and a swap helper, which update their arguments. They are the straightforward translation into WhyML of the algorithms in <cit.>, except for the supplementary ghost code (the last statement in both remove and bind) which updates the abstract model contained in the ghost field. The helper function is called in the other two. Its contract makes explicit the modifications of both arrays mapD and domD, using the exchange predicate defined in the library. Verification conditions for this function concern the conformance of the code to the two post-conditions (trivial as they are ensured by the underlying swap operation) and also the preservation of the invariant attached to the type—i.e. mainly that mapD and domD after swapping elements remain inverse of each other.
Both remove and bind act not only on the two arrays and the bound but also on the ghost part, i.e. the corresponding mathematical set. Thus the verification conditions here not only concern the structural invariants related to mapD, domD and sizeD but also the ones deriving from the use of the outer type, proving the link between the abstract logical view (using finite sets) and the computational one implemented through arrays.
Observe that the two record types correspond to the state and invariants of the Event-B refinements. The abstract specification presented in the first machine becomes a ghost field in WhyML. The invariant of the outer type corresponds to the gluing invariant (inv6). A similar transposition happens for the operations. Actions in the abstract events, i.e. updating the abstract set, appear as ghost code in WhyML.
All proofs are discovered by the automatic provers except for some proof obligations related to one of the functions. Nevertheless these proofs are simplified thanks to some Why3 tactics that inject hints that can be used by the external provers to finish the proofs.
§ COMPARISON AND DISCUSSION
Set theory is primitive in Event-B and {log}, whereas Why3, which permits expressing other theories, provides a theory for it. Rodin uses provers where set theory is primitive but can also call external provers such as VeriT, Z3 and CVC4—where set theory is not primitive. However, a big effort has been made to process set theory in VeriT, which is often recognized as allowing significant improvements in proofs <cit.>.
Why3 relies entirely on external provers where set theory is not primitive. Conversely, {log} is a satisfiability solver that can only work with set theory—and linear integer algebra. It is the only one
of the three tools implementing advanced decision procedures for set theory. Likely, this proved to be crucial for {log} being the only tool that automatically discharged all the VC, although it required a simple hypothesis discovery procedure. The time {log} needs to discharge all the VC should be a concern, because with more complex models the resolution time might be prohibitive. Ways of avoiding the algorithmic complexity of the decision procedures implemented in {log} are worth studying. Results on Computable Set Theory should be revisited (e.g. <cit.>). Why3 and Rodin interactive proofs are not numerous and remain quite simple.
In Event-B, 51 proof obligations were generated for the whole development, around half of them coming from the first refinement.
37 were proven automatically by the standard provers (AtelierB provers), 18 automatically by SMT provers, mainly VeriT, either directly or after applying the Rodin lasso allowing for adding additional,
backup hypotheses having identifiers in common with
the goal. Only two proof obligations required real human intervention, mainly instantiations of the general theorems introduced in Ctx1 or explicit witnesses introduction in the case of feasibility proof obligations.
After working in the way described in Sect. <ref>, {log} discharges all the 38 VC generated by the VCG in around 7 minutes.
Why3 makes it possible to apply transformations (e.g. splitting conjunctions) on a proof goal instead of calling an automatic prover on it. Some of these transformations are very simple and can then be applied systematically and automatically. Most of the generated VC in our Why3 formalization were proven automatically thanks to the split transformation. Only two of them, about pieces of type invariants, required human interaction to insert some more complex transformations, e.g. a case analysis on indexes in mapD. In the end, 55 VC were proved by CVC4, except two of them discharged by Z3, in a total amount of time of 30 seconds.
Clearly, all three tools are expressive enough for the problem at hand. However, the Event-B specification is probably the most readable. The three tools permit expressing axioms and invariants and automatically generate similar VC. {log} still needs work to express how two models are linked in terms of abstraction/refinement relations. Writing some key properties proved to be complex in Event-B. Indeed, it was necessary to add a somewhat artificial refinement level for Rodin to be able to generate the desired linking VC. These properties can be easily defined by the user in {log}. However, in Why3 and Event-B, proof obligations are automatically generated from the specifications; in particular, the abstract and concrete models can be naturally linked and the tool automatically generates the corresponding VC. In that regard, Why3 and Event-B are safer than {log}.
The possibility of counting on executable code without much effort enables many lightweight analyses that can be put into practice before attempting complex proofs. In {log}, specification and implementation are described by a single piece of code (cf. forgrams). This tool is not the integration of an interpreter and a prover; the same set of rewrite rules is used to compute and to prove. In Event-B/Rodin there is only a specification—later it can be converted into an executable representation if tools such as ProB are used.
Why3 can execute WhyML programs natively thanks to its interpreter and the execute command.
Furthermore, once the program is proved to verify the specification, correct-by-construction OCaml and C programs can be automatically extracted. These programs will be orders of magnitude more efficient than the equivalent forgrams.
§ CONCLUSION
We formally verified the implementation of sparse sets using three formal languages and associated tools, focusing on the operations and correctness properties required by a constraint solver when domains of integer variables are implemented with sparse sets. We compared in particular the several statements of invariants and pre-post properties and their proofs.
As future work, two directions can be investigated. The first one is to complete the formal developments with other set operations. A second one is to implement and verify, in Why3 or Event-B, a labeling procedure such as the ones used in constraint solvers; it would need to backtrack on the values of some domains, and thus make use of the theorems proven in this paper. Labeling is native in {log} when the CLP(FD) solver is active.
|
http://arxiv.org/abs/2307.03998v1 | 20230708154349 | Lightweight Improved Residual Network for Efficient Inverse Tone Mapping | [
"Liqi Xue",
"Tianyi Xu",
"Yongbao Song",
"Yan Liu",
"Lei Zhang",
"Xiantong Zhen",
"Jun Xu"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu
This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding author: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]).
Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China.
Yongbao Song is with the School of Mathematical Science, Nankai University, Tianijn 300071, China.
Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China.
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Display devices like HDR10 televisions are increasingly prevalent in our daily life for visualizing high dynamic range (HDR) images, but the majority of media images on the internet remain in the 8-bit standard dynamic range (SDR) format. Therefore, converting SDR images to HDR ones by inverse tone mapping (ITM) is crucial to unlock the full potential of abundant media images. However, existing ITM methods are usually developed with complex network architectures requiring huge computational costs. In this paper, we propose a lightweight Improved Residual Network (IRNet) by enhancing the power of the popular residual block for efficient ITM. Specifically, we propose a new Improved Residual Block (IRB) to extract and fuse multi-layer features for fine-grained HDR image reconstruction. Experiments on three benchmark datasets demonstrate that our IRNet achieves state-of-the-art performance on both the ITM and joint SR-ITM tasks. The code, models and data will be publicly available at <https://github.com/ThisisVikki/ITM-baseline>.
Inverse tone mapping, improved residual block, lightweight network, inference efficiency.
§ INTRODUCTION
High dynamic range (HDR) images defined in Rec.2020 <cit.> exhibit clearer details in highlights and shadows, as well as smoother transitions in brightness and color, than the standard dynamic range (SDR) images with 8-bit color depth defined in Rec.709 <cit.>. Owing to these benefits, the manufacturers of television and mobile devices are making a push to bring HDR contents to demanding consumers. Though HDR display devices allow more visually-pleasing contents in HDR images through Dolby Vision, HDR10, and HDR10+ technologies <cit.>, SDR images in 8-bit depth appear featureless when directly broadcast on HDR display devices <cit.>. To present SDR images closer to human perception on HDR display devices, it is essential to convert SDR images into comfortable HDR ones without color or information loss. This challenging problem is known as inverse tone mapping (ITM) <cit.>, which has been studied in a more general sense rather than merely expanding the luminance range of camera raw image files in linear color space <cit.>.
Early image ITM methods mainly resort to global or local image processing operators for promising performance. Global ITM operators <cit.> usually utilize reverse tone mapping functions to extend the dynamic range of image pixels. But this would bring distorted details and uneven transitions between neighborhood pixels in different levels of brightness. Local ITM operators <cit.> expand the image bit depth in a spatially-varying manner. Unfortunately, these methods would fail to preserve the global consistency of luminance ranges across an image. Recently, deep neural networks have been employed to tackle the ITM task from a data-driven perspective <cit.>. These networks usually contain strong backbones with complex architectures, which may require huge computational costs for promising ITM performance. Besides, the methods of <cit.> simultaneously tackle the joint image super-resolution (SR) and ITM (joint SR-ITM) tasks by separating the base and detail components from the input image with extra image decomposition <cit.>. However, this would further increase the model complexity and computational costs of current joint SR-ITM methods over previous ITM ones.
Despite their promising performance, the above-mentioned ITM methods suffer from two main limitations. Firstly, the complex model architectures obscure the core of the ITM problem, that is, “expanding the luminance range of a low dynamic range image to produce a higher dynamic range image” <cit.>. This problem of extending the luminance range or color bit depth is similar to the tasks of image super-resolution <cit.> and video frame prediction <cit.>, all of which aim to increase the highly-correlated information of the input image in different aspects. Therefore, it is possible to tackle the ITM problem with simple and lightweight neural networks, as inspired by the concurrent works in image super-resolution <cit.> and video frame prediction <cit.>. Secondly, the huge computational costs also prevent ITM methods from being deployed on edge devices. For example, to perform ITM on a 4K-resolution (3840×2160) image, Deep SR-ITM <cit.> needs 2.50M parameters, ∼1.05×10^4G FLOPs, and a speed of 777.95ms, while HDRTVNet <cit.> needs 37.20M parameters, ∼1.41×10^4G FLOPs, and a speed of 1513.43ms.
In this paper, we leverage the popular residual learning recipe <cit.> to develop a simple and lightweight Improved Residual Network (IRNet) for efficient ITM. Specifically, we propose an Improved Residual Block (IRB) with simple modifications of the residual block <cit.> for fine-grained feature extraction and fusion. In terms of network design, we also adopt the plain residual learning framework to avoid complex multi-branch architectures <cit.>. Experiments on three benchmark datasets, including our newly collected one, show that our IRNet is very efficient and outperforms existing ITM methods. As shown in Figure <ref>, our IRNet only needs ∼0.13M parameters and ∼0.22×10^4G FLOPs at a speed of 398.33ms to process a 4K-resolution image, and outperforms state-of-the-art methods on the ITM task. On the HDRTV1K dataset <cit.>, our IRNet exceeds AGCM+LE <cit.> in visual quality and in PSNR by 0.59dB, but has only one tenth of its parameters (∼0.13M vs. ∼1.41M). Besides, our IRNet also achieves superior performance to Deep SR-ITM <cit.> and JSI-GAN <cit.> on joint SR-ITM.
In summary, our main contributions are three-fold:
* We develop a lightweight Improved Residual Network (IRNet) for efficient image inverse tone mapping (ITM). Our IRNet is built upon a new Improved Residual Block (IRB) customized from the popular residual block for fine-grained feature extraction and fusion.
* We collect a new test set for ITM, i.e., ITM-4K, which has 160 4K-resolution images of diverse scenes with ground-truth HDR images. It serves as a good supplement to HDRTV1K <cit.>, which has 117 test images.
* Experiments on the HDRTV1K dataset <cit.>, our new ITM-4K test set, and the test set in <cit.> show that our lightweight IRNet is efficient and achieves impressive quantitative and qualitative results on the ITM and joint SR-ITM tasks. Comprehensive ablation studies also validate the effectiveness of our model design.
The rest of this paper is organized as follows. In <ref>, we summarize the related work. In <ref>, we present the proposed Improved Residual Network (IRNet). In <ref>, we perform experiments to validate the efficiency of our IRNet on ITM and joint SR-ITM. In <ref>, we conclude this paper.
§ RELATED WORK
§.§ Inverse Tone Mapping
The inverse tone mapping (ITM) task aims to transform a standard dynamic range (SDR, usually in 8-bit) image into a high dynamic range (HDR, usually in 16-bit) image. This problem is ill-posed due to the information loss in the luminance ranges of SDR images. Early explorations on the ITM task can be divided into global and local ITM operators. While the global ITM operators equally apply linear expansion <cit.>, cross-bilateral filtering <cit.>, or a gamma-based expansion <cit.> to all the pixels or patches of an input SDR image, the local ITM operators <cit.> reconstruct highlight regions or expand the luminance ranges of each pixel or patch according to the local information around it. Previous works show that global ITM operators <cit.> could avoid undesired artifacts, but result in rough details and unnatural transitions due to the ignorance of local detail reconstruction. On the contrary, local ITM operators <cit.> implemented adaptively on small areas would fail to capture the global consistency of luminance ranges.
To deal with the issues of locally undesired artifacts and global luminance consistency raised by the early methods mentioned above, many recent ITM methods <cit.> shift to utilize the advancements of deep convolutional neural networks (CNNs).
Early CNN-based methods <cit.> merge low dynamic range (LDR) images captured under multiple exposure settings to produce an HDR image. Meanwhile, the work of <cit.> presents a multi-branch CNN to implement ITM from both global and local perspectives. Then, the method of <cit.> introduces a feature masking strategy to address the problem of undesired artifacts that emerge during image reconstruction. Recently, the physical principle of HDR image formation has also been incorporated into the design of ITM CNNs <cit.>. For example, HDRTVNet <cit.> consists of an adaptive global color mapping network, a local enhancement network, and a highlight generation network.
Despite their promising performance, most of these methods require huge parameter amounts and computational costs, which hinders them from being deployed into resource-constrained edge devices. In this paper, we aim to develop a lightweight yet efficient ITM network.
§.§ Joint Super-Resolution and Inverse Tone Mapping
Joint Super-Resolution and Inverse Tone Mapping (joint SR-ITM) aims to simultaneously increase the spatial resolution and dynamic range of an input low-resolution and standard dynamic range (LR-SDR) image. Deep convolutional neural networks have also been applied to tackle the joint SR-ITM task <cit.>. Considering that the luminance ranges of different image areas should be expanded adaptively, the method of <cit.> firstly decomposes an SDR image into a low-frequency structure component and a high-frequency detail component, and then processes the two components by two different but correlated network branches. The separation is implemented by guided-filtering <cit.>, which is widely used in image smoothing <cit.>. This framework is also employed in the subsequent work of JSI-GAN <cit.>. To tackle multi-frame SDR inputs, Lecouat <cit.> reformulated the joint SR-ITM task as an optimization problem to fuse multiple LR-SDR raw image bursts in different exposures into an HR-HDR image. Tan <cit.> developed a two-branch network to fuse a series of LR-LDR dynamic images into an HR-HDR one by estimating the motion cues by a deformable module <cit.>.
Despite their appealing performance, the image decomposition based methods usually require multi-branch network architectures for the joint SR-ITM task, which implies a considerable growth in parameter amounts and computational burden to handle parallel feature extraction and elaborate interaction. In this paper, we propose a lightweight ITM network for inference efficiency, inspired by the merits of lightweight image super-resolution networks <cit.>.
§.§ Efficient Image Restoration
For the goal of inference efficiency, network compression and acceleration techniques are exploited to reduce the computational burden and memory consumption of image restoration methods <cit.>.
One popular solution is employing a Laplacian pyramid <cit.> to decompose the input image into a low-resolution base layer consuming the majority of computations and several high-resolution detail layers requiring few computations <cit.>. Bilateral grid learning <cit.> is also utilized to learn approximate operators on the downsampled images and apply the learned operators to the original images. Other inference strategies like recursive learning <cit.> and look-up tables <cit.> are also exploited to accelerate image restoration networks.
Instead of developing new methods, some recent works accelerate existing restoration networks by model slimming <cit.> or input-adaptive inference <cit.>.
In this paper, we develop an efficient ITM network that can process a 4K-resolution SDR image with ∼134K parameters in about 0.4 seconds.
§ PROPOSED METHOD
§.§ Motivation
In the scene-referred workflow <cit.>, an HDR raw image in camera color space (usually in 16-bit color depth) will be tone mapped to an SDR RGB image in display-referred color space (usually in 8-bit color depth). This process is usually implemented in a camera imaging pipeline containing multiple image processing operations, during which different pixels usually undergo different compression strengths on dynamic ranges to produce visually pleasing image contrasts <cit.>.
The task of inverse tone mapping (ITM) aims to increase the dynamic range of light intensity (or luminance) in an SDR image. An SDR image in 8-bit depth can display a maximum of around 16.7 million shades of color, while an HDR image in 10-bit depth can display a maximum of around 1.07 billion shades of color, allowing it to exhibit more colors with better visual quality <cit.>. To better understand the luminance difference between SDR and HDR images, in Figure <ref> (a), we visualize the maximum and minimum luminance values of 117 HDR images from the test set of <cit.>, as well as the luminance values at the corresponding positions of the paired SDR images. We observe that there are obvious gaps between the maximum values of HDR images and the values at the corresponding positions of the paired SDR images, whilst
slight differences between the minimum values of the HDR images and the values at the corresponding positions of the paired SDR images. This indicates that high-luminance values change more greatly than low-luminance ones. Besides, the luminance values of different HDR images also show distinct gaps when compared to those values at the corresponding positions in the paired SDR images.
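The color counts quoted above follow from simple arithmetic over the three color channels, e.g. in Python:

# (2^bits)^3 shades of color for a 3-channel image at a given per-channel bit depth
def color_shades(bits_per_channel):
    return (2 ** bits_per_channel) ** 3

print(color_shades(8))    # 16,777,216 (~16.7 million, 8-bit SDR)
print(color_shades(10))   # 1,073,741,824 (~1.07 billion, 10-bit HDR)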
To achieve promising ITM performance, existing ITM methods <cit.> resort to complex network backbones with huge parameter amounts and computational costs. To implement efficient ITM, in this paper, we propose to develop a simple, lightweight, and efficient Improved Residual Network (IRNet) by slightly modifying the residual block <cit.>. As shown in Figure <ref> (b), we concatenate the intermediate feature map F_1 after the LeakyReLU in the residual block with the fused feature of F_in and F_2 (more details will be presented in <ref>). Our IRNet shows clear improvements, especially in the bright area near the sun, over the variant without the feature map F_1 on the image "028" from the HDRTV1K test set <cit.>, as shown in Figure <ref> (b). Along the highlighted lines,
the green line of our IRNet enjoys closer approximation to the blue line of the “Ground Truth” HDR image than the red line of our IRNet without using the intermediate feature F_1 (denoted as “IRNet w/o F_1”). In Figure <ref> (c), we plot the ratios of luminance values of the highlighted lines by our IRNet and “IRNet w/o F_1”, which also validates that our IRNet achieves better approximation to the “Ground Truth” than the IRNet without F_1. This validates the effectiveness of our IRB over the residual block for ITM.
Adaptive luminance extension is also important for the ITM task. For this goal, many joint SR-ITM methods <cit.> performed image or feature decomposition to extract and fuse multi-scale feature maps. However, these ITM networks with decomposition techniques often suffer from complex network structures with heavy computational costs (Table <ref>). For efficiency considerations, we design our IRNet as a simple and lightweight network by employing the popular residual block <cit.> as the backbone. The promising results in Figure <ref> (b) by our IRNet without using F_1 motivate us to further improve our IRNet for better ITM performance.
§.§ Proposed Improved Residual Network
Our IRNet first extracts the initial feature map using a 1×1 convolution layer (instead of a 3×3 one, to reduce the parameter amount). Then we cascade n Improved Residual Blocks (IRBs) proposed for fine-grained feature extraction and fusion. The details of our IRB block will be introduced later. To boost the ITM performance, each IRB is followed by a Contrast-aware Channel Attention (CCA) layer <cit.>. We also use a skip connection to sum the feature maps before the IRB block and after the CCA layer.
Improved Residual Block (IRB). The proposed IRB block is built upon the residual block <cit.>, which achieves great success in many computer vision tasks <cit.>. As shown in Figure <ref> (a), the residual block <cit.> contains two 3 × 3 convolution layers with an activation function (here we replace ReLU by LeakyReLU) between them; the output feature is added to the input feature F_in and activated by another LeakyReLU function.
Built upon the residual block, our IRB block is designed to keep our IRNet as simple as possible with better ITM performance. This is feasible by fully exploiting the multi-layer feature maps within the IRB block. To this end, given the input feature F_in∈ℝ^H× W× C, our IRB first refines it by a 3×3 convolution layer and a LeakyReLU activation function. The extracted feature F_1∈ℝ^H× W× C/2 is further refined in our IRB by a second 3×3 convolution layer to output the feature F_2∈ℝ^H× W× C:
F_1 = LeakyReLU(Conv_3 × 3(F_in)),
F_2 = Conv_3× 3(F_1 ).
Then our IRB uses a skip connection and a Conv_1×1 to fuse F_in and F_2 and obtain the fusion feature F_fuse:
F_fuse = Conv_1×1(F_in+F_2).
Finally, different from the residual block, our IRB explicitly concatenates the intermediate feature F_1
with the fusion feature F_fuse
to produce the output feature F_out as follows:
F_out=Conv_1×1(Concat(F_fuse,F_1)).
We visualize the structure of our IRB block in Figure <ref> (b).
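For clarity, the equations above translate into a few lines of PyTorch. The sketch below is our own reading of the block, not the authors' released code; the channel sizes follow the text (F_1 has C/2 channels, F_fuse and F_out have C channels), while details such as the LeakyReLU slope are assumptions:

import torch
import torch.nn as nn

class IRB(nn.Module):
    # Improved Residual Block: fuses F_in, F_2 and the intermediate feature F_1.
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels // 2, 3, padding=1)  # F_in -> F_1 (C/2 channels)
        self.conv2 = nn.Conv2d(channels // 2, channels, 3, padding=1)  # F_1 -> F_2 (C channels)
        self.fuse = nn.Conv2d(channels, channels, 1)                   # (F_in + F_2) -> F_fuse
        self.out = nn.Conv2d(channels + channels // 2, channels, 1)    # concat(F_fuse, F_1) -> F_out
        self.act = nn.LeakyReLU(0.1, inplace=True)                     # negative slope is a guess

    def forward(self, f_in):
        f1 = self.act(self.conv1(f_in))
        f2 = self.conv2(f1)
        f_fuse = self.fuse(f_in + f2)
        return self.out(torch.cat([f_fuse, f1], dim=1))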
Compared with the original residual block, our IRB well extracts and utilizes the multi-layer features, which correspond to spatially adaptive luminance areas for ITM. As shown in Figure <ref> (a), compared with the IRNet w/o F_1, our IRNet restores the luminance of HDR image closer to the ground truth, especially in the highlight regions. Even though popular encoder-decoder frameworks like U-net <cit.> or Uformer <cit.> can be utilized here to extract strong multi-scale features, this would bring significant growth on parameter amounts and computational costs <cit.>. Through a simple modification to the residual block, the proposed IRB serves as a lightweight building block in our IRNet for efficient ITM performance.
The mean feature map along the channel dimension could reflects the luminance information of that feature <cit.>. In Figure <ref> (b), we visualize the mean feature maps of F_in, F_1, F_2, F_fuse, and F_out extracted by our IRNet and “IRNet w/o F_1”. One can see that the mean feature map of F_1 extracted by our IRNet exhibits higher luminance in the sky area around the sun than that of “IRNet w/o F_1”. Due to the lack of luminance information by the intermediate feature F_1, “IRNet w/o F_1” produces stronger contrasts at the input feature F_in of IRB blocks and darker luminance around the sun in the output feature F_out, than our IRNet using F_1 in our IRB block.
Contrast-aware Channel Attention (CCA). To preserve image details, we utilize a CCA layer <cit.> after each IRB block. As shown in Figure <ref> (c), the CCA layer consists of contrast computation, two 1×1 convolution layers interleaved with a ReLU function, a sigmoid function, and a skip connection between the input and output features to help gradient propagation. Given the input X=[x_1,...,x_C]∈ℝ^H× W× C, the contrast is computed as follows:
z_c = H_GC(x_c) = √(1/HW∑_(i,j)∈ x_c(x_c^i,j - 1/HW∑_(i,j)∈ x_c x_c^i,j)^2) + 1/HW∑_(i,j)∈ x_c x_c^i,j, c=1,...,C,
i.e., the sum of the standard deviation and the mean of the c-th channel map over its spatial positions.
After the i-th (i=1,...,n) IRB block and CCA layer, the output feature is added to the input feature F_in^i by a skip connection, and F_in^n+1 is the final feature that will be input to the subsequent convolution layers:
F_in^i+1=F_in^i+CCA(IRB(F_in^i)).
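A sketch of the CCA layer and of one IRB+CCA stage with this outer skip connection is given below, reusing the IRB sketch above; the channel-reduction ratio of the two 1×1 convolutions and the exact placement of the additive skip are assumptions not fixed by the text.

import torch
import torch.nn as nn

def channel_contrast(x: torch.Tensor) -> torch.Tensor:
    """Per-channel contrast z_c = std + mean over spatial positions; (N, C, H, W) -> (N, C, 1, 1)."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.var(dim=(2, 3), keepdim=True, unbiased=False).sqrt()
    return std + mean

class CCA(nn.Module):
    """Contrast-aware Channel Attention; the reduction ratio is an assumed value."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + x * self.body(channel_contrast(x))   # attention-rescaled features plus skip

class Stage(nn.Module):
    """One stage implementing F_in^{i+1} = F_in^i + CCA(IRB(F_in^i))."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.irb, self.cca = IRB(channels), CCA(channels)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return f + self.cca(self.irb(f))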
After extracting n scales of fine-grained feature maps, we concatenate them for multi-scale feature fusion, which is implemented by a sequence of 1×1 convolution layer, a LeakyReLU activation function, and a 3×3 convolution layer. Finally, we reconstruct the output HDR image using a 3×3 convolution layer. The overall architecture of the proposed IRNet is shown in Figure <ref> (d).
To apply the proposed IRNet to the joint SR-ITM task, we further add a Pixel Shuffle operation <cit.> after the final 3×3 convolution layer of our IRNet to make it feasible for super-resolution. The Pixel-Shuffle contains two 3×3 convolution layers interleaved with a ReLU function. The first convolution layer reduces the channel dimension of the feature map from C to 3s^2, where s is the upsampling factor, while the second convolution layer reconstructs the 3-channel HR-HDR image via upsampling the feature map by a factor of s.
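One plausible reading of this upsampling tail is sketched below, assuming that the first convolution expands to 3s^2 channels, the Pixel Shuffle rearranges them into a 3-channel image at s-times the resolution, and a final 3×3 convolution refines the result; the exact ordering of the second convolution relative to the shuffle is our assumption.

import torch.nn as nn

def sr_tail(channels: int = 64, scale: int = 4) -> nn.Sequential:
    """Assumed upsampling tail for joint SR-ITM (a sketch, not the authors' exact layout)."""
    return nn.Sequential(
        nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),  # C -> 3*s^2 channels
        nn.ReLU(inplace=True),
        nn.PixelShuffle(scale),                             # (N, 3*s^2, H, W) -> (N, 3, s*H, s*W)
        nn.Conv2d(3, 3, 3, padding=1),                      # reconstruct the 3-channel HR-HDR image
    )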
§.§ Implementation Details
Here, we set the channel dimension of the feature map F_in as C=64. The number of IRB blocks n is set as n=2 for the ITM task and n=5 for the joint SR-ITM task. We use Kaiming initialization <cit.> to initialize the parameters of our IRNet.
To optimize these parameters, we adopt the Adam optimizer <cit.> with β_1 = 0.9 and β_2 = 0.999 to minimize an ℓ_1 loss function. The learning rate η is initialized as 5×10^-4 and decays to 1×10^-11 following a cosine annealing schedule with warm restarts <cit.> every 60 epochs. The batch size is set as 16. We train the models of our IRNet for 200 epochs on an NVIDIA V100 GPU with 32GB memory.
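A minimal sketch of this training setup is given below; the model and data loader are supplied by the caller and are placeholders for the network described above and for batches of (SDR, HDR) 256×256 patch pairs, not part of any released code.

from torch import nn, optim

def train(model: nn.Module, train_loader) -> None:
    """Training loop matching the stated setup (Adam, l1 loss, cosine annealing with warm restarts)."""
    criterion = nn.L1Loss()
    optimizer = optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))
    scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=60, eta_min=1e-11)
    for epoch in range(200):
        for sdr, hdr in train_loader:        # assumed (SDR, HDR) patch pairs
            optimizer.zero_grad()
            loss = criterion(model(sdr), hdr)
            loss.backward()
            optimizer.step()
        scheduler.step()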
§ EXPERIMENTS
In this section, we evaluate the performance of the comparison methods and our IRNet on the ITM and joint SR-ITM tasks. We first introduce the datasets and metrics used. Then we present the comparison results on ITM and joint SR-ITM, respectively. Finally, we conduct a series of ablation experiments to study the components of our IRNet.
§.§ Dataset and Metrics
Training set. In our experiments, we use the recently published HDRTV1K dataset <cit.> to evaluate the comparison methods. This dataset contains 1,235 pairs of 8-bit SDR and 10-bit HDR images for training and 117 pairs of images for testing. We crop each image in the training set into 30 256×256 image patches. For data augmentation, we randomly flip the cropped patches horizontally or vertically, or rotate them by 90°, 180°, or 270°.
To perform joint SR-ITM on the HDRTV1K dataset, which is originally developed only for ITM, we downsample the SDR images by a factor of s=4 to obtain the low-resolution (LR) SDR images, similar to <cit.>. The high-resolution (HR) and HDR images from the HDRTV1K dataset can still be used as the training targets.
Test sets. On the ITM task, we evaluate the comparison methods on three datasets: the test set of HDRTV1K <cit.>, our newly collected ITM-4K dataset (for high-resolution images), and the test set in <cit.>. On the joint SR-ITM task, we evaluate the comparison methods on the test set of HDRTV1K <cit.>. The details of these test sets are summarized as follows:
* HDRTV1K <cit.> contains 117 test SDR images of size 3840×2160×3, with paired HDR images. For joint SR-ITM, we downsample the SDR images by a factor of 4 to generate the LR-SDR test images.
* ITM-4K contains 160 pairs of SDR and HDR images of size 3840×2160×3. These images are extracted from 9 HDR10 videos collected from https://4kmedia.org4kmedia.org. The corresponding SDR videos are generated through YouTube similar to <cit.>. We display 12 typical scenes from the 160 test images in Figure <ref>. In Figure <ref>, we also visualize the distribution of the 160 SDR images in our ITM-4K dataset and the 117 SDR test images in HDRTV1K <cit.> using t-SNE <cit.>. One can see that our ITM-4K dataset contains diverse scenes similar yet supplementary to the test set of HDRTV1K <cit.>.
* The test set in <cit.>. This dataset contains 28 test images, 12 of which are overlapped with the training set of HDRTV1K <cit.> and the test set of our ITM-4K. Thus, we use the remaining 16 images to evaluate the ITM methods. Note that although this dataset is used for joint SR-ITM task, the test set provides the SDR images of the same sizes with the corresponding HDR images, which can be used to evaluate ITM methods. We do not use this test set for the joint SR-ITM task due to its overlap with the training set of HDRTV1K <cit.>.
Metrics. We evaluate the performance of different methods on ITM and joint SR-ITM in terms of PSNR, SSIM <cit.>, LPIPS <cit.>, and HDR-VDP3 <cit.>. PSNR is used to evaluate the closeness of the output image to the corresponding ground truth image. SSIM <cit.> and LPIPS <cit.> evaluate the structural and perceptual similarity, respectively, of the output image to the corresponding ground truth image. HDR-VDP3 <cit.> is a widely used metric to evaluate the quality of HDR images <cit.>, and we use its prediction of “quality” (Q) here.
§.§ Results on Inverse Tone Mapping
Comparison methods.
For our IRNet, we set n=2 and C=64, and denote it as “IRNet-2 (64c)”.
We compare it with four ITM methods: HDRNet <cit.>, CSRNet <cit.>, Ada-3DLUT <cit.>, and HDRTVNet <cit.>. The methods of Pixel2Pixel <cit.> and CycleGAN <cit.> are also evaluated as two generative baselines for ITM. As suggested in <cit.>, we also modify the joint SR-ITM methods Deep SR-ITM <cit.> and JSI-GAN <cit.> for the ITM task by setting the stride of their first convolution layer to 2. This modification reduces their computational costs without degrading the ITM performance.
Objective results. The comparison results on the test set of HDRTV1K <cit.> are summarized in Table <ref>. One can see that our IRNet-2 (64c) outperforms the second best method, i.e., AGCM+LE, by 0.59dB, 0.0011, and 0.3 in terms of PSNR, SSIM, and LPIPS, respectively. Note that our IRNet-2 (64c) has 134.73K parameters, fewer than all the other comparison methods except CSRNet (36.49K) and AGCM (35.25K). But these two methods suffer from a clear performance gap to our IRNet-2 (64c) in terms of all evaluation metrics. On HDR-VDP3, our method is slightly (0.03) lower than the best method AGCM+LE. But AGCM+LE requires 1410K parameters, 6228.31G FLOPs, and 3114.09G MACs to process a 4K-resolution SDR image at a speed of 691.30ms, much larger than those of our IRNet-2 (64c).
Besides, our IRNet-1 (48c), i.e., the IRNet with a single IRB block and C=48, needs only 49.3K parameters to achieve results competitive with the second best method, AGCM+LE.
We further evaluate our IRNet-2 and other methods on our ITM-4K dataset and the 16 SDR images in the test set of <cit.>. As shown in Table <ref>, our IRNet-2 (64c) still achieves better results than other comparison methods on PSNR and HDR-VDP3. In summary, our IRNet achieves efficient ITM performance with a lightweight backbone.
Visual quality is an important criterion for evaluating ITM methods, since humans are the final judges of image quality. For the purpose of visualization, the HDR images are generated from HDR10 videos and stored in the 16-bit PNG format. The comparison results of visual quality by different methods on the three test sets are shown in Figure <ref>. We observe that most comparison methods suffer from a certain degree of color bias, especially near the light source. Our IRNet achieves results closer to the ground truth images than the other methods, with more correct colors and color contrasts. In addition, our IRNet achieves better PSNR and SSIM results than the other comparison methods. All these results demonstrate that our IRNet is very effective on ITM.
Running speed, i.e., the actual wall-clock time needed to process SDR images, is a direct measure of model efficiency. We calculate the running time of the comparison methods on 4K-resolution (3840×2160×3) images. As shown in Table <ref>, our IRNet-2 (64c) is faster than the second and third best methods, i.e., AGCM+LE and HDRTVNet, by 292.97ms and 1115.10ms, respectively. Meanwhile, IRNet-1 (48c) reduces the running time of IRNet-2 (64c) from 398.33ms to 166.91ms with guaranteed performance. Although faster than our IRNet-2, the methods of HDRNet, CSRNet, Ada-3DLUT, and AGCM suffer from obvious performance degradation on the quantitative metrics.
§.§ Results on Joint SR-ITM
Comparison methods. Here, we set n=5 and C=64 in our IRNet, and denote it as “IRNet-5 (64c)”. We compare it with two SR methods, i.e., EDSR <cit.> and RFDN <cit.>, two cascaded two-stage SR-ITM methods, i.e., “HDRTVNet+RFDN” (sequentially performing ITM by HDRTVNet and SR by RFDN) and “RFDN+HDRTVNet” (vice versa), and two joint SR-ITM methods, i.e., Deep SR-ITM <cit.> and JSI-GAN <cit.>. For the cascaded SR-ITM methods, we choose RFDN <cit.> and HDRTVNet <cit.> since they are dedicated SR and ITM methods, respectively.
Objective results. The comparison of numerical results is summarized in Table <ref>. It can be seen that the two SR methods still achieve reasonable performance in terms of the objective metrics. By first performing SR and then ITM, the cascaded method achieves better results on the image quality metrics, but requires heavy computational costs, i.e., 14783.55G FLOPs and 7391.58G MACs to process an LR-SDR image of size 960×540. First performing ITM and then SR significantly reduces the computational costs, but the performance on the evaluation metrics suffers a large degradation as well. Besides, compared with Deep SR-ITM and JSI-GAN, our IRNet-5 (64c) achieves the best PSNR results (0.38dB higher than the second best method “RFDN+HDRTVNet”) and comparable results on the other metrics, but with the smallest requirements on parameter amounts, computational costs, and inference time. These observations demonstrate that our IRNet is a lightweight and efficient backbone that achieves competitive performance on the joint SR-ITM task.
Visual quality. In Figure <ref>, we qualitatively compare the visual results of different methods on the HDRTV1K test set <cit.> modified for joint SR-ITM (please refer to <ref> A). One can see that all these methods obtain promising visual results on the presented scenes. The method of “HDRTVNet+RFDN” produces blurry edges around the lighting area. Besides, the images output by “HDRTVNet+RFDN”, “RFDN+HDRTVNet”, Deep SR-ITM <cit.> and JSI-GAN <cit.> suffer from the color shift problem to some extent. By fully exploiting multi-layer features for fine-grained image reconstruction, our IRNet-5 (64c) not only accurately restores the image colors, but also recovers image details well during the SR process. These results validate that, though being lightweight with the fewest parameters and lowest computational costs, the proposed IRNet is very efficient on the joint SR-ITM task.
Running speed. The comparison results of running speed on the downsampled images (960×540×3) are summarized in Table <ref>. It can be seen that our IRNet is faster than other comparison methods. Note that when comparing with “RFDN+HDRTVNet”, our IRNet-5 achieves comparable performance with only 4.08% of its running time. These results validate the efficiency of our IRNet on joint SR-ITM.
§.§ Ablation Study
To study in detail the working mechanism of our IRNet, we present comprehensive ablation experiments of our IRNet on ITM. Specifically, we assess:
1) how to extract the intermediate feature F_1 in our IRB?
2) how does the number of IRB blocks affect our IRNet?
3) how does the channel dimension C in IRB influence our IRNet?
4) how does the CCA layer boost our IRNet?
All variants of our IRNet are trained and evaluated on the training set and test set of HDRTV1K <cit.>, respectively.
1) How to extract the intermediate feature F_1 in our IRB? The IRB in our IRNet is modified from the residual block (RB). To validate the effectiveness of our IRB, we first evaluate our IRNet by replacing the IRB blocks by the RB blocks (using LeakyReLU instead of ReLU for fair comparison). The results listed in the first two rows of Table <ref> show that our IRNet with the IRB block achieves much better performance than our IRNet with the original RB block.
Besides, we design several variants of our IRB block (“IRB”) and study how they influence our IRNet on ITM. We first remove the intermediate feature F_1 to verify its importance in our IRB, denoted as “IRB w/o F_1”. Then we study where to extract the intermediate feature F_1, which can be taken before the first convolution layer (take F_1 as F_in), after the activation layer (our IRB), or before the addition operation (take F_1 as F_2). The results are summarized in Table <ref>. One can see that our IRNet with the original IRB achieves the best PSNR and SSIM results. By removing the feature F_1, the variant of our IRNet shows a clear drop in PSNR and SSIM, but similar LPIPS and HDR-VDP3 results. If we use the input feature F_in of the IRB or the feature after the second convolution layer F_2 as the intermediate feature F_1, the variants of our IRNet suffer a clear drop in PSNR, but show little difference in SSIM and LPIPS. All these results validate the effectiveness of taking the feature after the activation function as the intermediate feature in our IRB to achieve promising ITM performance.
2) How does the number of IRB blocks affect our IRNet? In our IRNet, we use two IRB blocks for ITM and five IRB blocks for joint SR-ITM. Here, we vary the number of IRB blocks to study how it influences our IRNet. The results are listed in Tables <ref> and <ref>, respectively. It can be seen that our IRNet achieves promising performance with 1∼4 IRB blocks on SSIM, LPIPS, and HDR-VDP3. Our IRNet with two IRB blocks achieves the best PSNR results among all choices. Similarly, our IRNet with five IRB blocks achieves the best PSNR and SSIM results on joint SR-ITM, while that with six IRB blocks achieves the best LPIPS and HDR-VDP3 results. To reduce the parameter amounts, we use two and five IRB blocks in our IRNet for ITM and joint SR-ITM, respectively.
3) How does the channel dimension C in IRB influence our IRNet? To answer this question, we perform experiments on our IRNet with different numbers of channels in the IRB block. The results of our IRNet-1 and IRNet-2 on ITM and those of our IRNet-5 on joint SR-ITM are shown in Tables <ref>, <ref> and <ref>, respectively.
For ITM, our IRNet-1 using one IRB achieves the best PSNR and SSIM results when C=48 and with 49.30K parameters, while our IRNet-2 using two IRBs achieves the best PSNR and SSIM results when C=64 and with 134.73K parameters. For joint SR-ITM, our IRNet-5 using five IRBs achieves the best PSNR results when C=64 and with 468.19K parameters. Our IRNet-5 with C=96 achieves better SSIM, LPIPS, and HDR-VDP3 results, but suffers from a huge growth of parameter amounts. Thus, we set C=48 and C=64 in our IRNet-1 and IRNet-2, respectively for ITM, and C=64 in our IRNet-5 for joint SR-ITM.
4) How does the CCA layer boost our IRNet? Our IRNet uses one CCA layer after each IRB block to refine the feature maps. We remove the first CCA layer between two IRB blocks in our IRNet-2. The results on ITM are shown in Table <ref>. One can see that our IRNet-2 without the first CCA layer suffers from a clear performance drop on PSNR. This demonstrates that the CCA layer is important to our IRNet-2 on ITM.
§ CONCLUSION
In this paper, we developed a lightweight and efficient inverse tone mapping (ITM) network. The proposed Improved Residual Network (IRNet) mainly consists of Improved Residual Blocks (IRBs) modified from the popular residual block and Contrast-aware Channel Attention (CCA) layers. The proposed IRB block is able to fuse multi-layer features extracted by different convolution layers for fine-grained ITM. We also collected a new ITM-4K test set containing 160 versatile 4K-resolution SDR images. Experiments on three benchmark datasets demonstrated that our IRNet outperforms the state-of-the-art methods on the ITM task with only ∼0.13M parameters and ∼0.22×10^4G FLOPs per 4K image. Further experiments on the joint SR-ITM task also showed the advantages of our IRNet over the comparison methods on the objective metrics, the computational efficiency, and, most importantly, the image quality such as color depth restoration.
|
http://arxiv.org/abs/2307.07535v3 | 20230714120324 | EPOCHS VII: Discovery of high redshift ($6.5 < z < 12$) AGN candidates in JWST ERO and PEARLS data | [
"Ignas Juodžbalis",
"Christopher J. Conselice",
"Maitrayee Singh",
"Nathan Adams",
"Katherine Ormerod",
"Thomas Harvey",
"Duncan Austin",
"Marta Volonteri",
"Seth H. Cohen",
"Rolf A. Jansen",
"Jake Summers",
"Rogier A. Windhorst",
"Jordan C. J. D'Silva",
"Anton M. Koekemoer",
"Dan Coe",
"Simon P. Driver",
"Brenda Frye",
"Norman A. Grogin",
"Madeline A. Marshall",
"Mario Nonino",
"Nor Pirzkal",
"Aaron Robotham",
"Russell E. Ryan, Jr.",
"Rafael Ortiz III",
"Scott Tompkins",
"Christopher N. A. Willmer",
"Haojing Yan"
] | astro-ph.GA | [
"astro-ph.GA"
] |
firstpage–lastpage
We present an analysis of a sample of robust high-redshift galaxies selected from the `blank' fields of the Prime Extragalactic Areas for Reionization Science (PEARLS) survey and Early Release Observations (ERO) data from JWST with the aim of selecting candidate high-redshift active galactic nuclei (AGN). Sources were identified from this parent sample using a threefold selection procedure, which includes spectral energy distribution (SED) fitting to identify sources that are best fitted by AGN SED templates, a further selection based on the relative performance of AGN and non-AGN models, and finally morphological fitting to identify compact sources of emission, resulting in a purity-oriented procedure. Using this procedure, we identify a sample of nine AGN candidates at 6.5 < z < 12, from which we constrain their physical properties as well as measure a lower bound on the AGN fraction in this redshift range of 5 ± 1%. As this is an extreme lower limit due to our focus on purity and our SEDs being calibrated for unobscured Type 1 AGN, this demonstrates that AGN are perhaps quite common at this early epoch. The rest-frame UV colors of our candidate objects suggest that these systems are potentially candidate obese black hole galaxies (OBG). We also investigate Chandra and VLA maps of these areas from which we calculate detection limits. Of note is a z = 11.9 candidate source exhibiting an abrupt morphological shift in the reddest band as compared to bluer bands, indicating a potential merger or an unusually strong outflow.
galaxies: active – galaxies: high-redshift – quasars: supermassive black holes
§ INTRODUCTION
The origin and evolution of supermassive black holes remains an active area of research in astrophysics. One of the major problems is that predicted masses for black hole seeds, which are expected to form from population III stars at z=10-50 <cit.>, are too low to grow to the sizes observed at lower redshifts. For example, z = 7.5 quasars host black holes with masses in excess of 10^9 M_⊙ <cit.>, which are difficult to form unless super-Eddington accretion takes place <cit.>. Other formation models, such as the direct collapse of gas clouds, stellar collisions in dense clusters, and collapsing primordial density fluctuations, similarly lack conclusive observational evidence to distinguish them from one another as summarized in <cit.>.
A few candidate direct collapse black holes have been identified to date, pre-JWST. This includes candidates identified by <cit.> in the GOODS-S region of the CANDELS survey <cit.>, which are very faint objects. Thus, a next generation instrument with increased survey depth is likely to identify more such candidates (Nabizadeh et al. 2023 in prep.). The recent launch of the James Webb Space Telescope (JWST) has given us such an instrument and presents an excellent opportunity to start investigating the formation of young central massive black holes and to test the validity of current black hole formation models by direct observations. While most black hole seeds are expected to have formed between redshifts of z=14 and z=30 <cit.>, which lies somewhat beyond the expected capabilities of the telescope, JWST may be able to detect accreting black hole seeds up to z=10 - 12 <cit.>. It is also important to note that some black hole seed formation models, for instance <cit.> and <cit.>, predict their formation to occur, albeit at a significantly diminished pace, up to the end of reionization at z ∼ 5, giving further credence to the idea that JWST surveys may be able to validate our current understanding of the origin of these objects.
Currently, JWST efforts in tracing the evolution of galaxies have focused largely on the detection of high redshift star forming galaxies (<cit.>, <cit.>, <cit.> and <cit.>) and the morphological evolution of galaxies (<cit.>, <cit.> and <cit.>). The first year of observation also yielded three spectroscopically confirmed active galactic nuclei (AGN): two at z≈5 <cit.> and one at z≈8 by <cit.>, as well as two candidates, one at z≈12 by <cit.> and one at z ≈ 11 by <cit.>. More recent studies by <cit.> also hint at the presence of a significant population of partially obscured AGN at z < 7. However, no search aimed explicitly at high redshift (z > 7) AGN candidates has been carried out in detail using imaging, i.e., a combination of both spectral energy distributions and morphology/structure.
Furthermore, finding forming black holes through AGN is an important process that needs to be done photometrically, if possible, to find objects that can be followed up with Near Infrared Spectrograph (NIRSpec) spectroscopy. This will be critical for determining the demographics of early AGN as well as 'naked' black holes that might exist at early times. If these systems can be found photometrically, in principle, this would allow for large samples and demographics of this population to be characterised.
In this work, we present the results for such a search for candidate AGN sources using JWST Near Infrared Camera (NIRCam) imaging data <cit.>. We identify a small sample of excellent candidates for being high redshift AGN. We discuss in this paper our method for finding these objects and give a description of the properties of these potential early/young AGN or black holes, and provide a pathway to use our methods to find further larger samples of such objects.
This paper is organised as follows - section <ref> contains a brief description of the data and the reduction process. Section <ref> presents a discussion of AGN identification methods used and checks of their validity, section <ref> presents an overview of the properties of the selected sources. The findings of this paper are summarized in section <ref>. Where appropriate we adopt a standard cosmology of Ω_m = 0.3, Ω_Λ = 0.7 and H_0 = 70 km s^-1Mpc^-1, all reported magnitudes use the AB system.
§ DATA
§.§ Sample and Data reduction
The galaxy sample from which we carry out this analysis derives from the Early Release Observations alongside the PEARLS GTO Survey fields: El Gordo, MACS-0416 and the North Ecliptic Pole Time Domain Field (NEP-TDF) <cit.>. This data set is the EPOCHS sample, a re-reduction and analysis of these fields designed to create a homogeneous reduction and, ultimately, a catalog; see <cit.> and Conselice et al. 2023, in prep. This sample results from processing these data ourselves at every step using our own refined methods, which maximise our detection of faint galaxies and the accuracy of the photometry. This paper is part VII in this series, with other forthcoming papers on the star formation rate, stellar populations, morphologies and stellar masses of this sample. It can be seen as our first look at finding AGN within distant galaxies and at how to approach this problem.
Specifically, the NIRCam data used for source detection and photometry originated from the CEERS <cit.>, GLASS <cit.>, SMACS 0723 <cit.> and NGDEEP <cit.> public surveys. We also include GTO targets of El-Gordo, MACS 0416 and NEP from the Prime Extragalactic Areas for Reionization Science (PEARLS) survey (PI: R. Windhorst & H.Hammel, PID: 1176 & 2738, <cit.>).
NIRCam filter sets used by these surveys were largely similar, with all of them utilizing some combination of the seven wide (F090W, F115W, F150W, F200W, F277W, F356W and F444W) and one medium width (F410M) filter.
The reduction procedure applied to all unprocessed JWST data products is described in detail by <cit.> and in Conselice et al. (2023, in prep), and can be summarized as follows: (1) Initial processing is carried out using version 1.8.2 of the JWST pipeline <cit.> and v0995 of the Calibration Reference Data System (CRDS), which were the most recent versions in the first half of 2023. (2) Wisps and artefacts in F150W and F200W are subtracted using a template set between stages 1 and 2 of the pipeline. (3) A 1/f noise correction, derived by Chris Willott,[<https://github.com/chriswillott/jwst>] is applied after stage 2 of the pipeline. (4) A 2-dimensional sky subtraction is run on each of the individual calibrated frames before stacking in stage 3 of the pipeline. (5) After stage 3 of the pipeline, the final F444W image is matched to a GAIA-derived WCS using available GAIA stars in the NIRCam imaging, and all images in the other bands are then aligned with the new F444W image to ensure consistent source positions. The processed images have a final resolution of 0.03 arcsec/pixel.
§.§ Initial high redshift catalog construction
Source detection and measurement from the processed science images were carried out using SExtractor <cit.>, with the key configuration parameters taken from Table 1 in <cit.>. We used the F444W band for initial source detection. Fluxes of the detected sources were then measured in each band using 0.32 arcsec diameter apertures, with aperture corrections derived from Point Spread Function (PSF) models taken from <cit.>. Detection depths were calculated individually for each source by placing 0.32 arcsec diameter apertures in empty regions of the image, picking the 200 nearest such apertures for each source, and calculating the total background RMS across all of them. The 5σ detection depth was estimated as 5 times the background RMS, expressed in magnitudes. A summary of the average 5σ depths is provided in <ref>. This is further described in <cit.> and Conselice et al. (2023, in prep).
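As an illustration, a minimal sketch of this local depth estimate is given below; the zeropoint argument and the use of a plain standard deviation for the background RMS are assumptions of the sketch rather than details taken from the text.

import numpy as np

def five_sigma_depth(empty_aperture_fluxes: np.ndarray, zeropoint: float) -> float:
    """5-sigma depth (AB mag) from fluxes measured in the 200 nearest empty 0.32 arcsec apertures."""
    rms = np.std(empty_aperture_fluxes)          # background RMS across the empty apertures
    return zeropoint - 2.5 * np.log10(5.0 * rms)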
Initial source redshifts were constrained by photometrically fitting the SExtractor catalog sources with both the LePhare and EAZY <cit.> codes. All detected sources were run through these SED fitting tools in order to provide a photometric redshift estimate. Both EAZY and Le Phare were run with a minimum 5% flux error to account for uncertainties in the calibration of the NIRCam detector.
The LePhare code was run using the BC03 stellar population synthesis (SPS) template set <cit.> with both exponentially decaying and constant star formation histories (SFHs). We include templates with 10 characteristic timescales between 0.1<τ<30 Gyr, and 57 different ages between 0 and 13 Gyr, with fixed metallicities Z={0.2, 1.0} Z_⊙. The redshift range is allowed to vary between 0<z<25, and dust extinction is varied from 0 < E(B-V)<3.5 to account for potential dusty lower-z contaminants <cit.>. Attenuation from the inter-galactic medium (IGM) follows the treatment derived in <cit.> and nebular line emission is modelled internally by Le Phare.
EAZY was run with the 12 default Flexible Stellar Population Synthesis fsps templates (tweak_fsps_QSF_v12_v3), which model a range of stellar ages, dust extinction and metallicities <cit.>, along with 6 additional templates from <cit.>. These templates were designed to better reproduce the blue colors and β slopes of high-z galaxies. Some high-z galaxies have been shown to have high equivalent width (EW) emission lines, which are more accurately modelled by the FSPS templates, which can boost photometric measurements by as much as a magnitude. This EAZY template set will be referred to as FSPS+Larson hereafter.
The selection criteria for constructing the robust high-redshift galaxy catalog can be summarized as follows, although see Conselice et al. (2023, in prep) and <cit.>:
* The candidate object must have a higher than 5σ detection in the first two bands redwards of the fitted Lyman break and < 3σ detections in bands containing the break.
* The integrated probability density function (PDF) within 10 % of the best fit redshift must contain at least 60% of the probability.
* Less than 10 % of the PDF must lie in the z < 5 range; secondary peak solutions, if present, must have a maximum lower than 50 % of the primary peak to avoid Lyman - Balmer break confusion.
* Candidate must have χ^2_R < 6 to be considered `good' or χ^2_R < 3 for a 'robust' classification.
* The PDF criteria must be satisfied by both codes and their photometric redshifts have to be consistent within 3σ.
The above procedure is discussed in depth by <cit.>.
A total of 214 high-redshift (6.5 < z < 12) sources were identified using our criteria and were further analyzed for the presence of an AGN component. It should be noted here that the lowest redshift accessible in the CEERS and NGDEEP surveys is instead z = 8.5, as these fields use F115W as the bluest available band and we did not incorporate the Hubble Space Telescope (HST) imaging into our selection.
§ CANDIDATE AGN SELECTION
In the following section we describe how our AGN candidates were found using our methods based on the full EPOCHS sample described above. This involves several steps, including the initial discovery of the objects through a series of photometric redshift codes and tests; see <cit.> and the previous section for further details. We then carry out a further analysis examining the likelihood that these systems are dominated by emission from black holes to construct our final sample.
§.§ SED fitting
This work seeks to identify robust candidate AGN. The easiest ones to identify using imaging are those of the unobscured, Type 1, variety, where the immediate surroundings of the black hole are capable of outshining its host galaxy. Our methods are designed to identify these particular AGN candidates and consequently are not expected to produce a complete sample of the AGN population in the data covered by the EPOCHS sample.
We begin our selection by refitting our sources from the robust galaxy catalog using EAZY with a set of SED templates for direct collapse black hole (DCBH) hosts from <cit.> added on top of the FSPS+Larson set (see section <ref>). These templates are tuned for unobscured, intermediate mass (10^5 - 10^6 M_⊙) active black holes, which may reasonably be expected to make up a significant fraction of high redshift AGN. This gives us an AGN+star formation set of templates from which we can find galaxies that match various combinations of these templates. The continuum shape of these SEDs is characterized chiefly by their UV power-law slopes (α), and the so called Big-Bump temperatures (T_bb). We adopt the full range of values for both, with α = -1.2, -1.6, -2.0 and T_bb = 5×10^4, 1×10^5, 2×10^5 K. We fix the ionization parameter logU to -0.5 and metallicity Z to 0.014 as reasonable choices for pristine, high redshift environments <cit.>. These parameters are otherwise difficult to constrain using SED fitting. Thus, the additional set of SEDs consists of 9 templates with the aforementioned parameters and will be referred to as the 'Nakajima' set hereafter.
After re-fitting, we derive the weighting for each template in the fit via the following equation:
W_i = a_i(∑_ja_j)^-1,
where a_i is the linear coefficient of the i-th template as defined in <cit.>. From these, we define the W_AGN parameter as W_AGN = ∑ W_i for all AGN templates in the set. This parameter thus serves as an indication of the relative weight of AGN versus non AGN templates in the best fit for each source.
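A minimal sketch of this calculation is given below; how the linear coefficients a_i are extracted from an EAZY run depends on the code version used, so the array and function names here are illustrative.

import numpy as np

def agn_template_weight(coeffs: np.ndarray, is_agn: np.ndarray) -> float:
    """W_AGN for one source: relative weight of AGN templates in the EAZY fit.

    coeffs : linear template coefficients a_i for the FSPS+Larson plus Nakajima set.
    is_agn : boolean mask flagging the Nakajima (AGN) templates.
    """
    weights = coeffs / coeffs.sum()          # W_i = a_i / sum_j a_j
    return float(weights[is_agn].sum())      # W_AGN = sum of W_i over the AGN templates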
Sources were then selected according to the following criteria:
* χ^2_R < 3, consistent with the 'robust' classification from section <ref>.
* Nakajima templates having W_AGN > 0.5 in the fit, ensuring that a candidate is mostly fitted by an AGN model.
* The new χ^2_R value is lower by at least 0.5 than the one given by the FSPS+Larson set, to ensure that the fit improvement provided by adding the AGN models is not the result of an expanded parameter space.
* Redshift given by the Nakajima templates consistent with other redshift estimates as the location of the Lyman break should be insensitive to the nature of emission.
The above procedure resulted in 12 sources being selected from the initial sample. This selection is illustrated by <ref>. The AGN candidates are strongly separated from the rest of the sample along the W_AGN axis, with most sources concentrated either at 1 or 0. This is likely because EAZY prefers single template models instead of mixed templates. Therefore, this parameter does not necessarily correspond to a physical AGN fraction, but it remains useful for further selection of strong candidates.
Afterwards, as part of our refined method for finding AGN, we then use CIGALE <cit.> to narrow our selection. Before fitting our SEDs to these templates we increase the SExtractor-measured flux errors so that they have a floor (lower limit) corresponding to a 5% flux error. We then set 3σ upper limits with 1σ errors in bands containing fluxes consistent with a 3σ background fluctuation. Upper limits of 1σ with 1σ errors were set in bands containing negative flux measurements.
We modeled the AGN component of the CIGALE templates using the SKIRTOR continuum emission models <cit.> with a varying rest-frame UV slope (α) as the key shape parameter. This was chosen to largely overlap with the α values from the Nakajima set. The allowed viewing angles were 30 and 70 degrees in order to consider both obscured and unobscured emission. The stellar emission was modeled using the initial mass function from <cit.> together with the BC03 templates and a delayed exponential star formation history, with stellar ages ranging from 100 to 5000 Myr and 0.5<τ<2 Gyr. This range is narrower than the one used with LePhare due to the need to simplify the parameter space to allow for more AGN models, the high redshift nature of the fitted galaxies having already been confirmed by the previous selection steps.
The stellar and gas metallicities in our fit were sampled from the range Z={0.2, 1.0} Z_⊙, consistent with what was used in the LePhare fitting. The nebular emission was modeled with ionization parameter values of -1.0, -1.5 and -2.0. The dust extinction for the stellar component was modeled using the Calzetti dust attenuation law, assuming 0 < E(B-V) < 0.9. The AGN polar dust extinction was assumed to follow the SMC curve, taken from <cit.>, with extinction values ranging from 0 < E(B-V) < 0.8. The parameters not listed were kept at the CIGALE defaults.
The relative performance of AGN versus non-AGN models was established by running CIGALE with two groups of SED templates, the first one with f_AGN = 0, while the second ranges from 0.1 < f_AGN < 1. This f_AGN parameter quantifies the ratio of observed infrared luminosity of the AGN component to the total observed infrared luminosity of the source. The average performance of the two template sets was quantified using a parameter P_AGN:
P_AGN = N_AGN(χ^2_R<χ^2_lim)/N_GAL(χ^2_R < χ^2_lim)×N_GAL/N_AGN,
where N_AGN(χ^2_R<χ^2_lim) is the number of AGN models giving χ^2_R<χ^2_lim, N_GAL(χ^2_R < χ^2_lim) is the number of galaxy models satisfying the same criterion, and N_AGN and N_GAL are the total numbers of AGN and galaxy templates fitted. We use this ratio to normalise the number of models, as otherwise one type would dominate over the other. In cases where N_GAL(χ^2_R<χ^2_lim) = 0 and N_AGN(χ^2_R<χ^2_lim) ≠ 0, P_AGN was set to 99; if no models gave χ^2_R < χ^2_lim, P_AGN was set to 0. Sources with P_AGN > 1 were selected for further morphological and structural analysis.
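A minimal sketch of this statistic is given below; how the per-model reduced χ^2 values are retrieved from a CIGALE run is an implementation detail we leave aside, and the χ^2_lim threshold is chosen as described next.

import numpy as np

def p_agn(chi2_agn: np.ndarray, chi2_gal: np.ndarray, chi2_lim: float) -> float:
    """P_AGN from the reduced chi^2 of the AGN (f_AGN > 0) and galaxy-only (f_AGN = 0) models."""
    n_agn_good = np.count_nonzero(chi2_agn < chi2_lim)
    n_gal_good = np.count_nonzero(chi2_gal < chi2_lim)
    if n_agn_good == 0 and n_gal_good == 0:
        return 0.0
    if n_gal_good == 0:
        return 99.0                                    # convention adopted in the text
    return (n_agn_good / n_gal_good) * (len(chi2_gal) / len(chi2_agn))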
The value of χ^2_lim was fixed by using different values of χ^2_lim in Equation <ref> to classify a sample of known AGN and likely non-AGN sources, minimizing the number of misclassifications. For the known AGN sample we use three objects in total - the two spectroscopic AGN from <cit.>, CEERS 1670 and CEERS 3210, and one from <cit.>, CEERS 1019, as these low mass sources are more likely to be representative of the ultra high redshift AGN population. None of these sources end up in our robust catalog due to them having >3σ detections in the F115W band where the Lyman break is located, causing them to fail one of the robust redshift criteria in section <ref>. In fact, objects such as these require HST data for reliable classification, to ensure that there is a Lyman break and not a Balmer break within the 'drop' filter. However, we use our measured photometry from the original SExtractor catalog for fitting these objects. We also note that CEERS 3210 is obscured, while CEERS 1670 is a classic, more evolved, Type 1 AGN, thus the Nakajima templates, calibrated for AGN hosted by pristine early environments, do not reproduce their photometry well. The CEERS 1019 object, while having a relatively flat continuum, has a strong OIII line visible in the F444W band, which is not captured well by any SED templates used in our fitting. This results in a fit which does not imply an AGN, as the templates we use do not have line emission this strong. This reveals that stronger AGN line emission should be implemented in future AGN template models.
This source also has a continuum strongly influenced by stellar emission, see discussion in <cit.>.
The non-AGN high redshift galaxy sample was taken from the original robust galaxy catalog by removing all galaxies that satisfied the EAZY selection criterion and were not classified by us as AGN using our methods.
We run these galaxies through our procedure and the results of this are shown in <ref> as the blue line. We find that some of these galaxies do have a high AGN fraction, and thus it remains possible, if not likely, that some of these systems are in fact AGN. However, using our methods we are more certain to find a pure selection of AGN as also shown by the orange line.
As can be seen in <ref>, χ^2_lim = 2.5 gives the highest probability of correctly classifying an AGN, however, 80% of the remaining galaxy sample has P_AGN > 1, and while some of these sources may harbour obscured AGN akin to CEERS 3210, such a high AGN fraction is unlikely as the fraction of dust reddened AGN at 4<z<7 was estimated to be about 10% by <cit.>. Therefore, this method can only be used to exclude dubious sources as its purity is too low for standalone use. Nevertheless a further two sources are excluded from the sample by this method.
This is our main method for finding AGN. It is, however, important to note that it is not a unique method, and other methods using photometry and SED fitting can be developed. Nevertheless, our method does provide a way of finding a sample with a high probability of being AGN. It is also important to note that the template set with which we constrain most of our sources was produced using unobscured AGN models, thus our search is inherently biased towards Type 1 AGN in unevolved, low metallicity environments.
§.§ Structural Measures
In order to improve the purity of our sample, we apply morphological cuts to search for sources containing a point source. To do this, we use GALFIT, a two-dimensional fitting code that uses a Levenberg-Marquardt algorithm to find the optimum solution to a fit <cit.>. We select sources which are best fit by a point spread function (PSF), a combination of an extended Sersic profile and a PSF, or a single Sersic profile with half light radius less than the FWHM of the PSF used in the fitting process. We define the best-fitting model as the model with the lowest χ_ν^2, which is defined by GALFIT as
χ_ν^2=1/N_DOF∑_x=1^n_x∑_y=1^n_y(f_data(x, y)-f_model(x, y))^2/σ(x, y)^2.
Within our fitting we use the SExtractor parameters for each object as the initial guesses, and run GALFIT three times for each object, modelling the source as an extended Sersic profile, as a Sersic profile plus a PSF, and as a pure PSF. For all fits, the ERR extension of the image, which is a measure of the noise of the image, is used as the input sigma image. Sources containing an AGN cannot be modelled accurately by a single Sersic fit, as the AGN appears as a distinct point source. However, determining the structures of sources with angular sizes similar to the PSF of the telescope is difficult, and results should be interpreted with some caution <cit.>. As a result of this, we also select sources where the determined half light radius is less than the FWHM of the PSF. All object cutouts are from the F444W NIRCam image, where the PSF for this band has a FWHM of 4.83 pixels on our pixel scale; therefore sources with R_e < 4.83 pixels are selected as being a point source object. We use the F444W band for our fitting process, as this is closest to the rest-frame optical for each source, and keep this consistent throughout in order to model each source using the same parameters and constraints. Due to the faint magnitudes of these sources, we fix the Sersic index to a value of n = 1. Where multiple sources are fit simultaneously, the image positions of all objects are constrained to within ± 3 pixels, to ensure the correct sources are fit. We also visually inspect fits and residual images as a final quality check. An example of each fit is shown in <ref>.
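For reference, a small sketch of the reduced χ^2 statistic above, computed from an image cutout, a model image and the ERR-extension noise map, is given below; treating N_DOF as the number of pixels minus the number of free parameters is our assumption.

import numpy as np

def reduced_chi2(data: np.ndarray, model: np.ndarray, sigma: np.ndarray, n_free: int) -> float:
    """Reduced chi^2 of a 2D fit, following the equation above."""
    n_dof = data.size - n_free                       # assumed definition of N_DOF
    return float(np.sum(((data - model) / sigma) ** 2) / n_dof)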
The final structural analysis results in two of our objects being classified as a PSF, five objects being classified as a Sersic profile with R_e < the FWHM of the PSF, and four objects classified as a combined model of an extended Sersic profile containing a PSF. The remaining two objects are not morphologically classified as a pure AGN due to their radii being larger than that of the PSF. These could in fact still be AGN, but we are interested here in systems where the AGN dominates the light of the source. The classification of each object is given in <ref>, and properties, including radii for those fit as Sersic profiles, are given in <ref>.
We further check our results in the F277W band, where most of our sources have higher S/N ratios, and find that our classifications do not change. In particular, we check the sizes of our sources best fit by a single Sersic profile, and find that in general, we recover them as having sizes smaller than the FWHM in F277W. We find one source as having R_e, F277W∼ 1.04×FWHM, which could occur due to noise, or faint extended emission better detected in this band, and as such, we still classify this as a compact Sersic profile, small enough to be a PSF. The only source we do not recover in this way, is CEERS 1019, however this source has a very complex morphology, which we analyse further.
We find that the source discovered in <cit.> is classified differently in our methods than in the discovery paper, where the object is found to be three components, with the central component best fit by a combination of a PSF and Sersic profile. Our combined fit of a Sersic profile and PSF has a marginally higher χ_ν^2, therefore we select this object as a compact Sersic profile, small enough to be classified as a PSF. This source has a complex morphology due to likelihood of it being a merger, and thus we replicate the fitting process completed in the discovery paper, and model the source as a three component model, with two components fit by Sersic profiles, and a central PSF component. We find that this has a lower χ^2_ν than our model fits, confirming our original findings that this object is compact enough to contain a point source. Our final classification information for the 12 sources selected from Nakajima templates is given in <ref>.
§ AGN SOURCE PROPERTIES
Using our selection procedure we identify a total of nine robust candidate sources out of a sample of 214. We also include the CEERS 1019 source from <cit.> for the sake of comparison with our candidates, for a total sample of ten high redshift sources. Thus we estimate an AGN fraction at 6.5 < z < 12 of 5±1%, assuming a Poisson counting error. Because our investigation focuses on purity rather than completeness, and is strongly biased towards Type 1 AGN, this value should be viewed very much as a lower bound. It is nonetheless consistent with the 1 - 10% observable AGN fractions derived from the FLARES simulation results by Kuusisto et al. 2023 (in prep) and matches well the finding of <cit.> that ∼5% of galaxies at 4<z<7 host low luminosity Type 1 AGN, potentially indicating that the AGN fraction does not evolve much during this epoch.
The f_AGN values for our sources were estimated by rerunning CIGALE with the same parameters as in section <ref>, except the f_AGN parameter was varied over the full range of 0 to 1 in steps of 0.1. Physical values of R_e were measured by noting that the pixel scale of the images was 0.03 arcsec, and using angular diameter distances calculated at best-fit redshifts, with both GALFIT and redshift errors contributing to the final uncertainties. The values of T_bb and α were taken from the best fitting Nakajima templates, the model grid for these being too sparse to estimate meaningful uncertainties. We also measure the rest-frame absolute UV magnitudes M_UV by redshifting the best-fit SED to z = 0 and convolving it with a top-hat between 1450 and 1550 Å in wavelength space, with the uncertainties provided by redshift errors. Photometric redshifts and their errors were taken from LePhare results. All physical properties measured for our candidate sources are presented in <ref>.
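A minimal sketch of the M_UV measurement described above is given below; the flux-density units, the simple (1+z) bandwidth term, and the function and variable names are assumptions of the sketch rather than details of the actual implementation.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this work

def absolute_uv_magnitude(wave_obs_aa: np.ndarray, f_nu_jy: np.ndarray, z: float) -> float:
    """M_UV from a best-fit SED: shift to the rest frame, average f_nu in a
    1450-1550 Angstrom top-hat, and apply the distance modulus."""
    wave_rest = np.asarray(wave_obs_aa) / (1.0 + z)
    band = (wave_rest >= 1450.0) & (wave_rest <= 1550.0)
    m_ab = -2.5 * np.log10(np.mean(np.asarray(f_nu_jy)[band])) + 8.90   # apparent AB magnitude
    d_l_pc = cosmo.luminosity_distance(z).to("pc").value
    return m_ab - 5.0 * np.log10(d_l_pc / 10.0) + 2.5 * np.log10(1.0 + z)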
§.§ X-ray and radio limits
We check whether any of our candidate sources in the NEP and CEERS fields have X-ray detections by matching our final candidate catalog with Chandra deep-field point source catalogs from the AEGIS-X survey of the Extended Groth Strip, overlapping the CEERS field <cit.>, and a deep X-ray survey of the JWST NEP field. The matching was carried out using 0.3 arcsec matching radii. However, none of our sources in the CEERS and NEP fields appear to have X-ray detections in the Chandra data. Thus, we use these data to estimate upper limits on the full band (0.5 - 10 keV) X-ray luminosities of our sources.
For the AEGIS-X data, we take the 50% completeness limit of 1.30 × 10^-15 erg s^-1 cm^-2 from <cit.> as our limiting flux. For the TDF survey we were able to determine a 3σ detection limit by checking the catalog for the faintest sources that were detected at 3σ significance. This came out to 6×10^-6 cps; using a conversion factor of 1 cps = 2.842×10^-11 erg s^-1 cm^-2, this results in a limiting flux of 1.7×10^-16 erg s^-1 cm^-2 for sources in the NEP survey fields. It should be noted that the X-ray catalog for the NEP field did not have completeness estimates at the time of writing, thus this value may be an underestimate. The calculated X-ray luminosity limits are of order 10^43 - 10^44 erg s^-1. This places our sources at or below the characteristic X-ray luminosity of ∼10^44 erg s^-1 for AGN at z = 4 - 5 <cit.>. However, our sources have low inferred stellar masses, so we probably would not expect the AGN to belong on the bright end of the luminosity function.
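A minimal sketch of this flux-to-luminosity conversion is given below; the simple L = 4πd_L^2 F relation used here neglects any K-correction, which would depend on the assumed X-ray spectral shape.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def xray_luminosity_limit(count_rate_limit: float, z: float, cps_to_flux: float = 2.842e-11) -> float:
    """Full-band X-ray luminosity upper limit (erg/s) from a count-rate limit (cps)."""
    flux_limit = count_rate_limit * cps_to_flux                  # erg s^-1 cm^-2
    d_l_cm = cosmo.luminosity_distance(z).to("cm").value
    return 4.0 * np.pi * d_l_cm ** 2 * flux_limit

# e.g. the NEP count-rate limit quoted above gives of order 1e44 erg/s at z ~ 8:
# xray_luminosity_limit(6e-6, z=8.0)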
We check for radio detections by matching our candidates in the NEP field with the Very Large Array (VLA) catalog for the same field, described by <cit.>. Using 2 arcsec matching radii, and as expected, no matches were found for our candidate sources, giving limiting fluxes of 10 μJy for all candidates in the NEP field, based on the flux cutoff in <cit.>.
§.§ Near-infrared colors
In order to compare the photometry of our selected candidates with theoretical predictions for DCBHs, we adopt two sets of NIR color cuts. The first one consists of the 90% purity cuts for an AGN number fraction (n_AGN) of 25% from Table 1 of <cit.>, which were tailored to identify low mass BHs at 7<z<10 accreting at an Eddington ratio of >0.1. The second set was adopted from <cit.> and was derived for a hypothetical class of obese black hole galaxies (OBG) at z ∼ 9, which form after a DCBH acquires a stellar emission component.
Computing the colors using aperture corrected SExtractor magnitudes in each filter we found that our candidate sources have marginally flatter SEDs than the rest of the high-z galaxy sample, however, the overall colour difference is not substantial, as can be seen in <ref>. This same figure also shows that our sources are significantly bluer than the red predictions from <cit.>, likely due to differing assumed SED sets. The key difference seems to stem from <cit.> assuming an α = -0.79, derived by <cit.> from low redshift narrow-line Seyfert 1 galaxies. We make a further comparison of the Nakajima SEDs with models used in <cit.>, which result in similar predicted colors to <cit.>. These models, hereafter referred to as the Volonteri set, are explicitly parametrized by the black hole mass (M_BH) and the Eddington ratio (f_edd) and do not include nebular emission lines, unlike the Nakajima set. A comparison between the bluest and reddest SEDs possible from both model sets in the considered wavelength range is provided in <ref>. It can be seen from the figure that running the Volonteri models with lower M_BH results in bluer continuum shapes, however, the overall range of apparent slopes of Volonteri models is significantly redder than that of the Nakajima set. The likely reason for this is that the Volonteri models assume an α = -0.5, following <cit.>. These slopes differ significantly from the steeper values assumed by the Nakajima model, following <cit.> results for low redshift quasars. Thus a possible reason for our objects not matching the <cit.> color cuts is the differing assumptions of the underlying SED models.
It should also be noted, however, that the CEERS 1019 source is likewise not significantly differentiated from either the high-z galaxies or our AGN sample in the <cit.> color space. The GN-z11 source, while not in our photometric sample, has also been reported to have a blue (β = -2.26 ± 0.1) rest-frame UV slope <cit.>. These bluer than expected colors may also be partially attributed to some of our sources having a significant stellar component, as suggested by the AGN fractions given by CIGALE in <ref>.
A comparison of the AGN candidates with the rest of the galaxies in the (F200W - F444W) colour (<ref>) shows the relative flatness of their spectra more clearly, with the average (F200W - F444W) color being close to 0, while the same average for the high-z galaxies is ∼ 0.2. The CEERS 1019 object appears redder in this figure, however, this is due to it possessing a strong OIII line above an otherwise flat continuum <cit.>. While the derived M_UV values do not differentiate our sources from the rest of the sample, it should be noted that 7 out of 9 of our candidates lie in the region -0.3 < F200W - F444W < 0.3, in line with predictions from <cit.>. Our magnitudes, however, are fainter by up to 1 mag than their predictions, assuming an optical bolometric correction K_O = 5 <cit.>. It should be noted that this correction was derived from AGN at z< 4 and may not hold at the redshift range considered here. In general, much more work is needed to understand the SEDs and spectral characteristics of z > 5 AGN.
§.§ Masses and star formation rates
We adopt the star formation rates (SFR) from the parent sample of 214 sources. These SFRs were estimated by taking the average flux in the restframe 1450 and 1550 Å wavelength range, using it to calculate the UV luminosity, which, after dust corrections from <cit.>, is converted into SFR using the procedure described in <cit.>. Stellar masses were obtained by fitting the sample sources with BAGPIPES <cit.>.
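A minimal sketch of the UV-luminosity-to-SFR step is given below; the Kennicutt (1998) calibration factor used as the default here is an assumption, since the exact calibration and dust-correction prescription adopted in the text are given only by the citations above.

import numpy as np

def uv_sfr(l_nu_uv: float, a_uv: float = 0.0, kappa: float = 1.4e-28) -> float:
    """SFR (M_sun/yr) from the rest-frame UV continuum luminosity L_nu (erg/s/Hz),
    corrected for a dust attenuation of a_uv magnitudes; kappa is the assumed calibration."""
    l_corrected = l_nu_uv * 10.0 ** (0.4 * a_uv)
    return kappa * l_corrected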
The above methods do not take into account potential AGN emissions, however, a comparison between our candidates and the parent sample may be useful in seeing if AGN may be efficiently identified by looking at outliers in an SFR versus stellar mass plot. Such a plot is presented in <ref>. As can be seen from the figure, calculating stellar masses and SFRs assuming purely stellar emission does not produce anomalous results for our candidates, likely due to their low UV luminosity. This low luminosity could be the result of low black hole masses and accretion rates. However, an intriguing possibility is that some of our AGN may be the high redshift counterparts of sources found by <cit.>, which are characterized by a faint and relatively blue UV continuum and a bright, red rest-frame optical SED. At the redshifts considered in this paper the red continuum would mostly lie outside of the NIRCam range. Thus deep mid-infrared observations are required to check this hypothesis.
§.§ An unusual object at z = 12
In terms of individual sources, S0723-z11c stands out as our candidate at the highest redshift of ∼12. As can be seen from <ref>, LePhare galaxy models give similar performance to the Nakajima set in terms of χ^2 values, however, the image cutouts presented in the same figure showcase a composite and complex nature of the source. The morphological fits in <ref> identify the second component as a point source, contributing almost 40% to the total emission in the F444W band. However, it is important to note that the apparent morphology changes somewhat drastically in this band with respect to others, as highlighted in <ref>. In order to better understand the complex morphology of this source, we run GALFIT across all bands in which the source had >5σ detections (F200W, F277W, F356W and F444W), with the results summarized in <ref>. As can be seen from the figure, the source in each band is best-fit by a combination of a Sersic profile and a PSF, with component locations being roughly consistent from F200W to F356W, with a rather abrupt location shift occurring in the F444W band. The consistent bimodal nature of this source along with the shift in the F444W band may point to a morphology disturbed by a merger event or a strong outflow.
A possible reason behind the abrupt nature of the shift between the F356W and F444W band images is either line emission or a discontinuity in the continuum itself. Assuming a source redshift of z = 12, this emission feature should occur at rest-frame wavelengths of 300 - 383 nm. While this may be caused by either the NeV (3346 Å, 3426 Å) or the OII (3727 Å) doublet, the spatially extended nature of this emission suggests that it may be due to a Balmer break, which in turn would indicate the presence of evolved stellar populations in the object. However, observations probing redwards of the F444W band or a spectroscopic follow-up are required to confirm the nature of this discontinuity in emission.
§ CONCLUSIONS AND SUMMARY
In this paper we have identified a population of high redshift AGN candidates by utilizing a set of photometric and morphological selection techniques. The basic idea behind our method is to find systems that are much better fit by SED templates including an active black hole component and that are consistent with having a small point source dominating the light of the galaxy. Our method is not meant to find complete samples of AGN or early black holes, but rather to find the best candidates for further spectroscopy and detailed follow-up.
Our parent sample originates from the EPOCHS sample of z > 6.5 galaxies, whose luminosity function and selection are discussed in <cit.>. From this sample we refit the galaxies with a variety of model SEDs using the photometric redshift code EAZY. This method isolates most of the sample sources; the other steps in our pipeline - an analysis of the relative performance of AGN and non-AGN models, and morphological fitting - were also utilised in removing weak candidates. We are thus left with nine strong AGN sources that are likely emitting their light due to a central massive black hole.
We develop a new method of finding likely AGN through template fitting, defining a statistic P_AGN that quantifies how much better an object is fit by an AGN spectrum than by a star-forming one. It should also be noted that the reason for the limited performance of P_AGN in isolation may be its implicit reliance on the general properties of the z>6 AGN population being well known. New AGN templates are needed at these highest redshifts, which will eventually be developed with the availability of more JWST spectroscopy of these objects. Our overall selection method may, however, provide a good way of searching for the highest redshift candidate AGN for spectroscopic followup using NIRSpec.
We find that the estimated AGN fraction in the interval of 6.5 < z < 12 is 5± 1%. However, our investigation was strongly biased towards Type 1 AGN, due to the initial set of SED templates not accounting for dust extinction, and calibrated for purity rather than completeness, thus this result only establishes a lower bound, which is nevertheless consistent with theoretical predictions.
We also find that the rest-frame UV photometry of our candidates suggests that color-color cuts alone may not be sufficient to differentiate AGN from other galaxies at high redshifts - SED and morphological fitting, in conjunction with deep X-ray and spectroscopic observations, are necessary for robust identification. However, color-color cuts may still be useful as a way to pre-select potential candidates, as evidenced by a large fraction of our sources lining up with the bluer colors predicted for OBGs.
The presence of JWST-detectable AGN sources out to z = 12 alone provides evidence in favour of the Direct Collapse Black Hole model <cit.>, while the photometric properties of our sample provide evidence in favour of the OBG stage of galaxy formation and thus a type of 'naked' black hole existing in the early Universe. However, follow up spectroscopy will be needed to confirm the nature of our objects and to estimate their black hole masses in order to place more defined constraints on black hole seeding models.
§ ACKNOWLEDGEMENTS
We gratefully acknowledge support from the ERC Advanced Investigator Grant EPOCHS (788113), as well as studentships from STFC. LF acknowledges financial support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) in the form of a PhD studentship. DI acknowledges support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). CCL acknowledges support from the Royal Society under grant RGF/EA/181016. CT acknowledges funding from the Science and Technology Facilities Council (STFC). This work is based on observations made with the NASA/ESA Hubble Space Telescope (HST) and NASA/ESA/CSA James Webb Space Telescope (JWST) obtained from the () at the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST, and NAS 5–26555 for HST. The Early Release Observations and associated materials were developed, executed, and compiled by the ERO production team: Hannah Braun, Claire Blome, Matthew Brown, Margaret Carruthers, Dan Coe, Joseph DePasquale, Nestor Espinoza, Macarena Garcia Marin, Karl Gordon, Alaina Henry, Leah Hustak, Andi James, Ann Jenkins, Anton Koekemoer, Stephanie LaMassa, David Law, Alexandra Lockwood, Amaya Moro-Martin, Susan Mullally, Alyssa Pagan, Dani Player, Klaus Pontoppidan, Charles Proffitt, Christine Pulliam, Leah Ramsay, Swara Ravindranath, Neill Reid, Massimo Robberto, Elena Sabbi, Leonardo Ubeda. The EROs were also made possible by the foundational efforts and support from the JWST instruments, STScI planning and scheduling, and Data Management teams. The effort of CEERS, NGDEEP and GLASS teams in making their data public is hereby acknowledged. We would also like to thank Adi Zitrin, Rachana Bhatawdekar and Nimish Hathi for their timely and useful comments.
§ DATA AVAILABILITY
Some of the data underlying this article are made available by <cit.>. The remainder of the data set will be released together with Conselice et al. 2023. The catalogues of the sample discussed herein may be acquired by contacting the corresponding author.
mnras
|
http://arxiv.org/abs/2307.06127v2 | 20230712122710 | Stationary state of harmonic chains driven by boundary resetting | [
"Ritwick Sarkar",
"Pritam Roy"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.soft"
] |
Stationary state of harmonic chains driven by boundary resetting
S. N. Bose National Centre for Basic Sciences, Kolkata 700106, India
We study the nonequilibrium steady state (NESS) of an ordered harmonic chain of N oscillators connected to two walls which undergo diffusive motion with stochastic resetting. The intermittent resetting of the walls effectively emulates two nonequilibrium reservoirs that exert temporally correlated forces on the boundary oscillators. These reservoirs are characterized by the diffusion constants and resetting rates of the walls. We find that, for any finite N, the velocity distribution of the bulk oscillators remains non-Gaussian, as evidenced by a non-zero bulk kurtosis that decays ∼ N^-1. We calculate the spatio-temporal correlation of the velocity of the oscillators v_l(t) v_l'(t') both analytically and using numerical simulation. The signature of the boundary resetting is present in the bulk in terms of the two-time velocity correlation of a single oscillator and the equal-time spatial velocity correlation. For the resetting driven chain, the two-time velocity correlation decays as t^-1/2 at large times, and there exists a non-zero equal-time spatial velocity correlation v_l(t) v_l'(t) when l ≠ l'. A non-zero average energy current flows through the system when the boundary walls reset to their initial positions at different rates. This average energy current can be computed exactly in the thermodynamic limit. Numerically we show that the distribution of the instantaneous energy current at the boundary is independent of the system size. However, the distribution of the instantaneous energy current in the bulk approaches a stationary distribution in the thermodynamic limit.
§ INTRODUCTION
It is worthwhile to understand the nonequilibrium steady state (NESS) of an extended system driven by equilibrium reservoirs in order to comprehend the transport properties of that system. A paradigmatic model of energy transport in one-dimensional systems, studied by Rieder, Lebowitz, and Lieb in 1967, consists of a harmonic chain of N oscillators driven by two thermal reservoirs at the boundaries <cit.>. For this model, the thermal driving at the two boundaries leads to a NESS with a nonzero average energy current flowing through the system. Several generalizations — e.g., the inclusion of anharmonic interactions between the oscillators of the chain, pinning potentials, and disorder — were studied in the recent past. These studies conclude that such generalizations lead to nontrivial behavior of the steady state, including anomalous transport and nonlinear temperature profiles <cit.>.
The action of nonequilibrium reservoirs, which do not satisfy the fluctuation-dissipation theorem (FDT) <cit.>, on single probe particles leads to unusual features like anomalous relaxation dynamics, negative viscosity, modification of the equipartition theorem, etc. <cit.>. Recently, the NESS of a harmonic chain driven by active (nonequilibrium) baths was studied in Ref. <cit.>. Here we model a nonequilibrium reservoir using stochastic resetting. Briefly, in our model, the boundary oscillators of a harmonic chain are attached to two walls that perform free diffusion with intermittent resetting. These walls act as nonequilibrium reservoirs at the two ends, and the effect of these reservoirs on the NESS of the harmonic chain is studied in detail in this work.
Stochastic resetting is the process where the dynamics of a system stochastically stop and restart from some predefined condition<cit.>. The paradigmatic example is where the position of a Brownian particle resets to a fixed point in space with some specific rate <cit.>. This leads to some substantial consequences like — a non-trivial stationary state, finite mean first passage time, and anomalous relaxation behavior. Stochastic resetting has been investigated regarding other nonequilibrium processes — e.g. nonequilibrium bath <cit.>, diffusion in a logarithmic potential <cit.>, telegraphic process <cit.>, Lévy flights <cit.>, coagulation process <cit.>, particle transport <cit.>, Run-and-tumble particles <cit.>, geometric Brownian motion <cit.>, extreme value statistics <cit.>, work fluctuations <cit.> etc.
In this paper, we study the NESS of a harmonic chain of N oscillators connected to two boundary walls that undergo free diffusion with intermittent resetting at different rates. These walls exert exponentially correlated forces on the boundary oscillators, and the strength of the force is characterized by the diffusion constant and resetting rate of the specific wall <cit.>. The resetting walls exert a drive that leads to a NESS with a nonzero average energy current flowing through the system. The average energy current can be easily calculated from the formalism used in Ref. <cit.>. We find that the velocity distribution of the bulk oscillators reaches the Gaussian distribution only at a thermodynamically large system size. The velocity distributions of the boundary oscillators are found to be size-independent with exponentially decaying tails. Numerical simulation results confirm that the kurtosis of the velocity of the bulk oscillators decays as N^-1. We also calculate the two-point spatio-temporal correlation of the velocity of the bulk oscillators. We find that at large times the two-time velocity correlation v_l(t) v_l(0) decays as t^-1/2, and there exists a non-zero equal-time spatial velocity correlation v_l(t) v_l'(t) in the bulk when l≠ l'. The instantaneous energy current distribution in the bulk reaches a stationary distribution in the thermodynamic limit. The exact expression of this stationary distribution is calculated in Ref. <cit.>. In contrast, the distribution of the instantaneous energy current at the boundary remains size independent. We also discuss the effective thermal limit, which is reached at high resetting rates.
The paper is organized as follows; in Sec. <ref>, we define the model and summarize our key findings. In Sec. <ref>, we discuss the velocity distribution and kurtosis profile of velocity. The two-point spatio-temporal correlation of the velocity for the resetting driven chain is discussed in Sec. <ref>. Sec. <ref> and Sec. <ref> are devoted to the distribution of instantaneous energy current and the effective thermal limit. We conclude in Sec. <ref> along with some general remarks.
§ MODEL AND RESULTS
We consider a chain of N oscillators, each of mass m, coupled to its nearest neighbors by springs of stiffness constant k. Two boundary oscillators are in contact with Langevin heat baths at temperatures T_1 and T_N. These heat baths exert white noises as well as damping forces, proportional to the velocities, on the boundary oscillators. The boundary oscillators are also attached to two walls with springs of the same stiffness constant. These walls perform free diffusion and undergo intermittent resetting to their initial positions with different rates r_1 and r_N. We denote the displacement of the l-th oscillator from its equilibrium position as x_l and the displacements of the left and right resetting walls as X_1 and X_N respectively. These moving walls exert stochastic forces on the boundary oscillators due to the intermittent resetting dynamics. The equations of motion of the oscillators are,
m ẍ_l=
-k(2x_1-x_2)+f_1-γẋ_1+ξ_1, when l=1,
-k(2x_l-x_l-1-x_l+1), ∀ l∈ [2,N-1],
-k(2x_N-x_N-1)+f_N-γẋ_N+ξ_N, when l=N.
f_1=kX_1 and f_N=kX_N are the stochastic forces exerted by the boundary walls. ξ_1 and ξ_N are white noises from the Langevin heat baths acting on the boundary oscillators and satisfy the fluctuation-dissipation relation <cit.>,
ξ_i(t) ξ_j(t') =2 γ T_i δ_ijδ(t-t') and i,j=1,N.
T_1 and T_N are the temperatures of the thermal reservoir at the two ends of the chain. γ is the damping coefficient of the respective reservoir. Note that, we assume the motion of the wall is independent of the boundary oscillators.
The boundary walls perform free diffusion with Poisson resetting <cit.>. In a small time interval Δ t, the position of the i-th wall, relative to its equilibrium position, is updated by the following procedure<cit.>,
X_i(t+Δ t)=
0 with probability r_i Δ t,
X_i(t)+√(2 D Δ t) η_i(t) with probability 1-r_i Δ t, and i=1,N,
where D is the diffusion constant and η_i(t) is a Gaussian noise with zero mean and unit variance, ⟨η_i(t)⟩ = 0 and ⟨η_i(t)η_j(t) ⟩ = δ_ij. The auto-correlation of the position X_i(t) is given by <cit.>,
X_i(t)X_j(t') = δ_ij2 D/r_ie^-r_i|t-t'|,
Here we assume the same D for both boundary walls.
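The update rule above is straightforward to simulate. The following minimal Python sketch (with illustrative parameter values, not those used in the paper) iterates the discrete update for a single wall and compares the empirical stationary variance with the value 2D/r_i implied by the auto-correlation above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wall(D=1.0, r=0.5, dt=1e-3, n_steps=500_000):
    """Free diffusion with Poissonian resetting to the origin."""
    X = 0.0
    traj = np.empty(n_steps)
    for n in range(n_steps):
        if rng.random() < r * dt:                 # reset with probability r*dt
            X = 0.0
        else:                                     # otherwise diffuse
            X += np.sqrt(2.0 * D * dt) * rng.standard_normal()
        traj[n] = X
    return traj

traj = simulate_wall()
burn = len(traj) // 5                             # discard the transient
print("empirical Var[X] :", traj[burn:].var())    # fluctuates around the prediction
print("predicted 2D/r   :", 2.0 * 1.0 / 0.5)
```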
Similarly, the stochastic forces, f_1=kX_1 and f_N=kX_N are also correlated as,
⟨ f_i(t) f_j(t') ⟩ =δ_ij a_i^2 e^-r_i|t-t'| with a_i^2=2 D k^2/r_i i=1,N,
here a_i is the strength of the noise coming from the i-th resetting wall. The stochastic force f_i(t) acting on the boundary oscillator violates the fluctuation-dissipation relation. For a resetting driven linear chain, the equations of motion Eq. (<ref>) can be solved in the frequency domain,
x_l(t)=∫_-∞^∞d ω/2 π e^-i ω t[G_l1(ω) f̃_1(ω)+G_lN(ω) f̃_N(ω)].
Here G(ω) denotes the Green's function matrix [See <ref> for a detailed derivation] and f̃_i(ω) is the Fourier transform of f_i(t) with respect to t. The two-point correlation of the f̃_i(ω) is,
f̃_i(ω) f̃_j(ω') =2 πδ_ijg̃(ω,r_i)δ(ω+ω') and g̃(ω,r_i)=2 a_i^2 r_i/r_i^2+ω^2,
where g̃(ω,r_i) is the frequency spectrum of the resetting force.
It can be easily shown that for the harmonic chain the average energy current has two independent components — a thermal component J_therm and a resetting component J_r. J_therm is proportional to the temperature difference between the two ends of the chain, T_1-T_N, while J_r depends on the resetting rates of the boundary walls. The main objective of this paper is to characterize the NESS driven by the boundary resetting, and we therefore choose T_1=T_N=0 for the rest of the calculation. In this paper the numerical simulations are done using the scheme of <cit.>, which is accurate to order (Δ t)^2.
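For readers who wish to reproduce the qualitative behaviour, a bare-bones integration of the equations of motion can be written in a few lines. The sketch below uses a simple semi-implicit (symplectic) Euler step rather than the second-order scheme cited above, sets T_1=T_N=0, and uses illustrative parameters; it only aims at a rough check of the bulk kinetic temperature quoted later in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (not those used for the figures of the paper)
N, m, k, gamma, D = 64, 1.0, 1.0, 1.0, 1.0
r1, rN = 0.5, 2.0
dt, n_steps = 1e-3, 400_000
burn = n_steps // 2

x, v = np.zeros(N), np.zeros(N)
X1 = XN = 0.0
v2_acc, count = 0.0, 0

for step in range(n_steps):
    # walls: free diffusion with Poissonian resetting (thermal baths at T_1 = T_N = 0)
    X1 = 0.0 if rng.random() < r1 * dt else X1 + np.sqrt(2 * D * dt) * rng.standard_normal()
    XN = 0.0 if rng.random() < rN * dt else XN + np.sqrt(2 * D * dt) * rng.standard_normal()

    # forces on the oscillators
    F = -k * (2 * x - np.roll(x, 1) - np.roll(x, -1))
    F[0]  = -k * (2 * x[0] - x[1])   + k * X1 - gamma * v[0]
    F[-1] = -k * (2 * x[-1] - x[-2]) + k * XN - gamma * v[-1]

    # semi-implicit Euler step (update v first, then x with the new v)
    v += F / m * dt
    x += v * dt

    if step >= burn:
        v2_acc += v[N // 2] ** 2
        count += 1

a1s, aNs = 2 * D * k**2 / r1, 2 * D * k**2 / rN
T_bulk = a1s / (2 * gamma * np.sqrt(r1**2 + 4 * k / m)) \
       + aNs / (2 * gamma * np.sqrt(rN**2 + 4 * k / m))
print("simulated <v_bulk^2>:", v2_acc / count)
print("predicted T_bulk/m  :", T_bulk / m)
```

Longer runs, larger N, and the higher-order integrator are needed for quantitative agreement with the results reported below; this sketch is only meant to convey the structure of the simulation.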
Before going into the detailed discussion, a summary of the result is presented.
* Velocity fluctuation: We measure the velocity distribution of the oscillators P(v_l) in the NESS. We find that P(v_l) of the bulk oscillators approaches a Gaussian distribution in the thermodynamic limit N→∞, with variance T̂_bulk/m, where T̂_bulk=m ẋ_l^2 is the bulk kinetic temperature. To characterize the finite-size dependence, we measure the kurtosis profile of the velocity, κ_l= v_l^4 / v_l^2 ^2-3. We find that for the bulk oscillators, κ_l decays as N^-1. The velocity distributions of the boundary oscillators [P(v_1) or P(v_N)] are found to be size-independent as well as non-Gaussian.
* Spatio-temporal correlation of velocity: Another physical observable of interest is the spatio-temporal velocity correlation of the oscillators v_l(t) v_l'(t'), in the steady state. We calculate two-time velocity correlation of single oscillator (v_l(t) v_l(t')) and equal-time spatial velocity correlation (v_l(t) v_l'(t)) for resetting driven chain explicitly. We show that in the thermodynamic limit, the two-time velocity correlation of a single oscillator in the bulk is,
v_l(t) v_l(t') =1/γ m∫_0^πdq/2 πcos[ω_c sin(q/2)(t-t') ][a_1^2 r_1 /r_1^2+ω_c^2 sin^2q/2+a_N^2 r_N /r_N^2+ω_c^2 sin^2q/2].
where ω_c=2 √(k/m). When t-t'≫ω_c^-1, v_l(t) v_l(t') ∝ J_0(ω_c(t-t')), here J_0(z) is the Bessel function of first kind <cit.>. Equal-time spatial velocity correlation in thermodynamic limit is given by,
v_l(t) v_l'(t) = 1/2 γ[ a_1^2 θ(r_1,l,l')+a_N^2 θ(r_N,l,l') ].
where,
θ(r_i,l,l')=r_i/m r_i^2+4 k _3 F̃_2 [1/2,1,1;1-l'+l,1+l'-l;4 k/4 k+mr_i^2].
Here l'-l is an integer and _p F̃_q is a generalized regularized hypergeometric function <cit.>.
* Fluctuation of energy current: We also measure the distribution of instantaneous energy current flowing through the system in the steady state. For the resetting driven oscillator chain, the distribution of instantaneous energy current at the bulk, 𝒥_l is found to approach
P(𝒥_l)=1/π√( g_l) e^J_r/g_l𝒥_lK_0( u_l/g_l|𝒥_l|).
in the thermodynamic limit. Here K_0(z) is the zeroth-order modified Bessel function of the second kind and the definition of J_r, g_l and u_l is given in Eq. (<ref>) and Eq. (<ref>). The numerical simulation also confirms that the distribution of instantaneous energy current at the boundary 𝒥_1, is size-independent.
* Effective thermal limit: At high resetting rate, the i-th boundary wall acts as effective Langevin bath with effective temperature T_i^eff. In this limit, the known result of the average energy current for a thermally driven oscillator chain is recovered <cit.>. The equal-time spatial velocity correlation also becomes uncorrelated for l ≠ l' in this effective thermal limit.
§ VELOCITY DISTRIBUTION AND KURTOSIS PROFILE OF THE VELOCITY
The probability distributions of the velocity of the constituent oscillators play an important role in characterizing the NESS of the driven oscillator chain. For thermally or activity-driven oscillator chains, the velocity distributions of the oscillators in the bulk are found to be Gaussian, with variance set by the local kinetic temperature T̂_bulk=m v_l^2 <cit.>. However, the velocity distribution of the boundary oscillators depends on the specific dynamics of the driving. For the resetting driven harmonic chain we therefore first measure the velocity distributions of bulk and boundary oscillators for different system sizes and different resetting rates at the two boundaries.
§.§ Distribution of velocity
The velocity of the l-th oscillator for resetting driven chain is [from Eq. (<ref>)],
v_l(t)=∫_-∞^∞d ω/2 πe^-iω t(-i ω)[G_l1(ω)f̃_1(ω)+G_lN(ω)f̃_N(ω)],
and we numerically measure the velocity distribution of the bulk oscillators P(v_l) for different system sizes. In Fig. <ref>(a), the numerically measured P(v_l) is plotted for different system sizes. For any finite N, P(v_l) shows significant deviations at the tails from the Gaussian distribution with variance T̂_bulk/m. However, the deviation tends to decrease with an increase in system size. Therefore we conclude that,
For N→∞, P(v_l)=√(m/2 πT̂_bulk)exp(-m v_l^2/2 T̂_bulk),
where the value of kinetic temperature at the bulk is [see <ref> for the derivation],
T̂_bulk=a_1^2 /2 γ√(r_1^2+4 k /m)+a_N^2 /2 γ√(r_N^2+4 k /m).
The scaled velocity distribution of the oscillator at the left boundary is shown in Fig. <ref>(b). Here the numerically measured velocity distribution P(v_1) is scaled with the numerically measured √( v_1^2 ), which shows a scaling collapse. The numerical simulation also indicates that the velocity distribution of the boundary oscillator has an exponentially decaying tail [shown with the red dashed line in Fig. <ref>(b)]. Note that numerical simulations with different system sizes suggest that, unlike the velocity distributions of the bulk oscillators P(v_l), the velocity distribution of the boundary oscillator P(v_1) [or P(v_N)] is size-independent [see Fig. <ref> in <ref>].
§.§ Kurtosis profile of velocity
To characterize the finite size dependence of the velocity distribution of the bulk oscillators, we measure the kurtosis profile of velocity. The definition of the kurtosis of the velocity of the l-th oscillator is,
κ_l= v_l^4 / v_l^2 ^2-3.
If the velocity distribution of the l-th oscillator is Gaussian, then κ_l=0. Typically the distribution has a fatter tail when κ_l>0. We compute the velocity kurtosis profile for different system sizes using numerical simulation. In Fig. <ref>(a), κ_l for different system sizes is plotted for r_1=r_N, and from this plot it is clear that the value of the kurtosis in the bulk decreases as the system size increases. In Fig. <ref>(b) κ_l for a chain of length N=1024 is plotted for a fixed value of r_1 and three different values of r_N. The velocity kurtosis profile is symmetric for r_1=r_N. It is clear from Fig. <ref>(b) that the velocity distribution of the bulk oscillators approaches the Gaussian distribution when the resetting rate is high. However, the boundary oscillators have a fatter tail even at a higher resetting rate.
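In practice κ_l is estimated directly from the recorded velocity samples; since the velocities have zero mean, the central-moment estimator implemented in scipy coincides with the definition above. A minimal illustration, with synthetic Laplace-distributed data standing in for the measured v_l:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
v_samples = rng.laplace(size=200_000)       # stand-in for recorded values of v_l

# fisher=True returns <v^4>/<v^2>^2 - 3 for zero-mean data
print(kurtosis(v_samples, fisher=True))     # ~ 3 for a Laplace distribution
```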
It is intriguing to investigate the system size dependence of the velocity kurtosis profile, therefore we numerically estimate κ_bulk [κ_l at l=N/2] with increasing system size N for fixed r_1 and three different values of r_N. The result is shown in Fig. <ref>. From the result of numerical simulation, we conclude that,
κ_bulk∼ N^-1.
Therefore only in the thermodynamic limit, the velocity distributions of the bulk oscillators become Gaussian.
§ SPATIO-TEMPORAL CORRELATION OF VELOCITY
Correlation functions play an important role in characterizing transport processes. For example, the integral of equilibrium velocity-velocity correlation v(t)v(t') is related to the diffusion coefficient of a Brownian particle. For thermally driven transport in the mass-disordered harmonic chain, the two-time correlation of the current determines the asymptotic size dependence of the current fluctuation in the steady state <cit.>. In this section, we discuss the two-point spatio-temporal correlation of velocity v_l(t) v_l'(t') in the bulk.
Using Eq. (<ref>), we can write the two-point spatio-temporal correlation of velocity as,
v_l(t)v_l'(t') = ∫_-∞^∞dω/2 πω^2 e^-iω (t-t')[G_l1(ω)G_l'1^*(ω)g̃(ω,r_1)+G_lN(ω)G_l'N^*(ω)g̃(ω,r_N)].
In the following, we calculate the two-time velocity correlation of a single oscillator v_l(t) v_l(t') and equal-time spatial velocity correlation v_l(t) v_l'(t) in the stationary state explicitly for resetting driven chain.
§.§ Two-time velocity correlation of single oscillator
In the stationary state, the two-time velocity correlation of a single oscillator, v_l(t) v_l(t') is a function of t-t' [see Eq. (<ref>)]. To simplify the calculation, we consider v_l(t) v_l(0) which can be written as,
v_l(t) v_l(0) =∫_-∞^∞dω/2 πω^2 e^-iω t[|G_l1(ω)|^2g̃(ω,r_1) +|G_lN(ω)|^2 g̃(ω,r_N)].
The above equation has two separate contributions coming from two resetting walls at the two boundaries and we can write,
v_l(t) v_l(0) = a_1^2C(r_1,t)+a_N^2C(r_N,t).
where C(r_i,t) is the contribution from the i-th resetting wall. For the oscillators in the bulk [1≪ l≪ N], C(r_i,t) can be computed explicitly, and takes a simple form in the thermodynamic limit,
C(r_i,t) =r_i/γ m∫_0^πdq/2 πcos[ω_c t sin(q/2)] /r_i^2+ω_c^2 sin^2q/2,
see <ref> for the detailed derivation. Combining the contributions from both the reservoirs, we can write v_l(t) v_l(0) in the thermodynamic limit as,
v_l(t) v_l(0) =1/γ m∫_0^πdq/2 πcos[ω_c t sin(q/2)][a_1^2 r_1 /r_1^2+ω_c^2 sin^2q/2+a_N^2 r_N /r_N^2+ω_c^2 sin^2q/2].
The above integral can be evaluated numerically for arbitrary values of t. This is illustrated in Fig <ref>(a) where the symbols correspond to the data obtained from numerical simulation for a fixed r_1 and different values of r_N. The black solid lines in Fig <ref>(a) correspond to the numerical integration of Eq. (<ref>).
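As a concrete illustration of such a numerical evaluation, the sketch below (Python, with illustrative parameters rather than those of the figures) integrates the q-integral above with scipy and compares it with the large-time asymptote derived just below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# illustrative parameters (not those used for the figures)
m, k, gamma, D = 1.0, 1.0, 1.0, 1.0
r1, rN = 0.5, 2.0
a1s, aNs = 2 * D * k**2 / r1, 2 * D * k**2 / rN      # a_i^2 = 2 D k^2 / r_i
wc = 2.0 * np.sqrt(k / m)

def vv_time(t):
    """Bulk two-time velocity correlation <v_l(t) v_l(0)> from the q-integral above."""
    def integrand(q):
        s2 = wc**2 * np.sin(q / 2.0)**2
        drive = a1s * r1 / (r1**2 + s2) + aNs * rN / (rN**2 + s2)
        return np.cos(wc * np.sin(q / 2.0) * t) * drive / (2.0 * np.pi * gamma * m)
    val, _ = quad(integrand, 0.0, np.pi, limit=400)
    return val

def vv_large_t(t):
    """Large-time asymptote proportional to J_0(omega_c t), derived below."""
    amp = (a1s * r1 / (2 * gamma * (m * r1**2 + 4 * k))
           + aNs * rN / (2 * gamma * (m * rN**2 + 4 * k)))
    return amp * j0(wc * t)

for t in (0.0, 1.0, 5.0, 20.0):
    print(f"t = {t:5.1f}   integral = {vv_time(t):+.5f}   asymptote = {vv_large_t(t):+.5f}")
```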
It is difficult to obtain a closed form of Eq. (<ref>). However, it is possible to predict the small and large time behavior of v_l(t) v_l(0). For very small t, i.e. t≪ω^-1_c, cos [ω_c t sin(q/2) ] can be expanded up to second order in t, so v_l(t) v_l(0) deviates from its equal-time value v_l^2 only at order t^2. Similarly, when t≫ω_c^-1, the integral can be performed exactly [see <ref> for the detailed derivation]. In this large time limit,
v_l(t) v_l(0) ≃[a_1^2 r_1/2 γ (m r_1^2+4 k)+ a_N^2 r_N/2 γ (m r_N^2+4 k)]J_0(ω_c t),
where ω_c=2√(k/m), and J_0(z) is the Bessel function of the first kind. In Fig <ref>(b) the numerically measured v_l(t) v_l(0) (l=N/2) is plotted for large t; it is an oscillatory function of t whose envelope decays as t^-1/2, in agreement with the behavior of J_0(ω_c t) when t≫ω_c^-1.
§.§ Equal-time spatial velocity correlation
Equal-time spatial velocity correlation, v_l(t)v_l'(t), can be calculated by taking t=t' in Eq. (<ref>).
v_l(t)v_l'(t) = ∫_-∞^∞dω/2 πω^2 [G_l1(ω)G_l'1^*(ω)g̃(ω,r_1)+G_lN(ω)G_l'N^*(ω)g̃(ω,r_N)].
In the stationary state v_l(t)v_l'(t) does not depend on the time and has two separate contributions coming from the two walls performing intermittent resetting. Therefore v_l(t)v_l'(t) can be written as,
v_l(t)v_l'(t) = a_1^2/2 γθ(r_1,l,l')+a_N^2/2 γθ(r_N,l,l').
In the bulk, i.e. 1≪ l ,l' ≪ N, the function θ(r_i,l,l') associated with the i-th wall has, in the limit N →∞, the integral representation [see <ref> for detailed derivation],
θ(r_i,l,l')=r_i/π∫_0^π dq cos(l'q-lq)/m r_i^2+4 k sin^2q/2,
which evaluates in closed form to
θ(r_i,l,l')=r_i/m r_i^2+4 k _3 F̃_2 [1/2,1,1;1-l'+l,1+l'-l;4 k/4 k+mr_i^2] and l'-l∈𝕀.
Here _p F̃_q is a generalized regularized hypergeometric function <cit.>. Combining the contributions of both reservoirs we arrive at,
v_l(t) v_l'(t) = a_1^2/2 γθ(r_1,l,l')+a_N^2/2 γθ(r_N,l,l') .
In Fig <ref> we plot the numerically measured v_l(t) v_l'(t) for a fixed value of r_1 and different values of r_N, which matches the analytic prediction Eq. (<ref>) well.
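The integral representation of θ given above is convenient for tabulating this correlation numerically; a short sketch with illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad

m, k, gamma, D = 1.0, 1.0, 1.0, 1.0
r1, rN = 0.5, 2.0
a1s, aNs = 2 * D * k**2 / r1, 2 * D * k**2 / rN

def theta(r, dl):
    """theta(r_i, l, l') evaluated from its integral representation, dl = l' - l."""
    val, _ = quad(lambda q: np.cos(dl * q) / (m * r**2 + 4 * k * np.sin(q / 2.0)**2),
                  0.0, np.pi, limit=200)
    return r * val / np.pi

def vv_spatial(dl):
    return (a1s * theta(r1, dl) + aNs * theta(rN, dl)) / (2.0 * gamma)

for dl in range(6):
    print(dl, vv_spatial(dl))    # dl = 0 reproduces the bulk kinetic temperature over m
```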
§ FLUCTUATION AND SECOND MOMENT OF INSTANTANEOUS ENERGY CURRENT
The definition of instantaneous energy current from the left reservoir to the system is,
𝒥_1=(-γ v_1 + f_1)v_1.
Similarly instantaneous energy current from the right reservoir to the system is
𝒥_N+1=(-γ v_N+ f_N)v_N.
In the bulk, the expression for the instantaneous energy current between l-1th and l-th oscillator,
𝒥_l=k/2(v_l-1+v_l) (x_l-1-x_l).
The steady state of the resetting-driven harmonic chain is characterized by the existence of non-zero average energy flux through the chain. In NESS, the average energy current flowing through the chain is,
J_r=𝒥_1 = 𝒥_2 =⋯ =𝒥_l = ⋯ = -𝒥_N+1.
The analytical expression of the average energy current J_r can be calculated easily following the method described in Ref. <cit.>. For completeness, we quote the result here.
J_r = m/4 γ ^2[a_1^2ζ(r_1)-a^2_Nζ(r_N)] whereζ(r_j) = r_j k^2 ( √(1+4γ^2/mk)-1 )+r_j^3γ^2( 1-√(1+4 k/mr_j^2))/(k^2-r_j^2γ^2).
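For completeness, this expression and its high-resetting-rate (effective thermal) limit discussed later can be evaluated with a few lines of code; the parameter values below are illustrative, and the resetting rates are kept away from r=k/γ, where the quoted form of ζ has a removable singularity.

```python
import numpy as np

m, k, gamma, D = 1.0, 1.0, 1.0, 1.0

def zeta(r):
    num = (r * k**2 * (np.sqrt(1 + 4 * gamma**2 / (m * k)) - 1)
           + r**3 * gamma**2 * (1 - np.sqrt(1 + 4 * k / (m * r**2))))
    return num / (k**2 - r**2 * gamma**2)

def J_resetting(r1, rN):
    """Average current sustained by the two resetting walls."""
    a1s, aNs = 2 * D * k**2 / r1, 2 * D * k**2 / rN
    return m / (4 * gamma**2) * (a1s * zeta(r1) - aNs * zeta(rN))

def J_effective_thermal(r1, rN):
    """Leading behaviour at large resetting rates, with T_i^eff = a_i^2/(gamma r_i)."""
    T1, TN = 2 * D * k**2 / (gamma * r1**2), 2 * D * k**2 / (gamma * rN**2)
    nu = m * k / (2 * gamma**2)
    return k * (T1 - TN) / (2 * gamma) * (1 + nu - nu * np.sqrt(1 + 4 * gamma**2 / (m * k)))

print(J_resetting(0.5, 2.0))                                     # moderate resetting rates
print(J_resetting(20.0, 40.0), J_effective_thermal(20.0, 40.0))  # close to the thermal limit
```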
In Fig.<ref>, we have shown the plot of average energy current J_r for fixed r_1 and different r_N along with the analytical prediction Eq. (<ref>). In the next subsection, we will discuss the distribution of the instantaneous energy current at the bulk and the boundary.
§.§ Distribution of instantaneous energy current at the bulk
In Ref. <cit.>, the distribution of the instantaneous energy current in the NESS is derived. The derivation is exact when the joint distribution of {v_l-1,v_l,x_l-1,x_l} is a multivariate Gaussian <cit.>,
𝒫(v_l-1,v_l,x_l-1,x_l)=exp[ -1/2W_l^Tℰ^-1_lW_l ]/√((2 π)^4 det(ℰ_l)),
where, W_l^T=(v_l-1 v_l x_l-1 x_l) and ℰ_l is the corresponding 4× 4 correlation matrix. Therefore, the instantaneous current distribution, P(𝒥_l) is,
P(𝒥_l)=1/π√( g_l)exp(J/g_l𝒥_l)K_0( u_l/g_l|𝒥_l|).
Here K_0(z) is the zeroth-order modified Bessel function of
the second kind, and J is the average energy current flowing through the system. Also g_l and u_l are given by,
g_l=k T̂_bulk/2[T̂_bulk+ v_l-1 v_l]-J^2, and u_l=√( g_l+J^2). Here v_l-1 v_l is the equal-time spatial velocity correlation for the driven harmonic chain.
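The moments of this distribution can be checked numerically from the density itself; the following sketch (with illustrative values of J and u, and g=u^2-J^2) verifies the normalization, the mean, and the second moment used below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

J, u = 0.3, 1.0                    # illustrative values with u > |J|
g = u**2 - J**2                    # consistent with u = sqrt(g + J^2)

def pdf(x):
    return np.exp(J * x / g) * k0(u * abs(x) / g) / (np.pi * np.sqrt(g))

def moment(n, cutoff=200.0):
    # split at 0 because of the integrable logarithmic singularity of K_0
    neg, _ = quad(lambda x: x**n * pdf(x), -cutoff, 0.0, limit=200)
    pos, _ = quad(lambda x: x**n * pdf(x), 0.0, cutoff, limit=200)
    return neg + pos

print(moment(0))                    # ~ 1 (normalization)
print(moment(1), J)                 # mean equals the average current
print(moment(2), 2 * J**2 + u**2)   # second moment
```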
In Sec. <ref>, we have shown that the velocity distribution of the bulk oscillators of the resetting driven harmonic chain approaches the expected Gaussian distribution Eq. (<ref>) for thermodynamically large system size. In Sec. <ref>, our numerical simulation results have suggested that the kurtosis of the velocity of the bulk oscillators κ_l, is inversely proportional to the system size N. Therefore, the velocity distribution of the l-th oscillator in the bulk is Gaussian only when N→∞. Let us assume that the joint probability distribution of {x_l,v_l} of the bulk oscillators, 𝒫({x_l,v_l}) is a multivariate Gaussian in this thermodynamic limit. Therefore, we can expect that the distribution of the instantaneous energy current at the bulk, P(𝒥_l) can be expressed as Eq. (<ref>), with average energy current flowing through the system J_r [given in Eq. (<ref>)]. The other parameters in Eq. (<ref>) are also calculated for the resetting driven chain [see Eq. (<ref>) and Eq. (<ref>)].
In Fig. <ref>(a), numerically measured P(𝒥_l) of different system sizes are plotted for different r_1 and r_N. For a large system size (e.g. N=1024), there is a good agreement for large 𝒥_l with Eq. (<ref>), shown by the black solid line. This validates our assumption that in the thermodynamic limit, the joint distribution 𝒫({v_l,x_l}) for the resetting driven harmonic chain, is a multivariate Gaussian, where 1≪ l≪ N.
Using the asymptotic behavior of K_0(z) for z→ 0, we get,
P(𝒥_l)=-1/π√( g_l)(lnu_l/2 g_l|𝒥_l|+E_γ)+O(𝒥_l),
near 𝒥_l=0. Here E_γ≃ 0.577216 is the Euler's constant <cit.>. In Fig. <ref>(b), behavior of P(𝒥_l) is shown near origin along with Eq. (<ref>) using solid black line. From this figure, it is clear that for small system size, P(𝒥_l) has a logarithmic divergence near the origin and approaches Eq. (<ref>) with the increase in system size.
§.§ Distribution of the instantaneous energy current at the boundary
The definition of instantaneous current from the left reservoir to the system is,
𝒥_1=(-γ v_1+ f_1)v_1,
and corresponding distribution can be written as,
P(𝒥_1)=∫ dv_1 df_1 δ [𝒥_1-(-γ v_1+f_1) v_1] 𝒫(v_1,f_1).
We mentioned in Sec. <ref> that the velocity distributions of the boundary oscillators [P(v_1) or P(v_N)] are independent of the size of the system. It is important to note that, in the stationary state, the position distribution P(X_i) of the boundary wall [whose dynamics is given in Eq. (<ref>)] is given by <cit.>,
P(X_i)=√(r_i/4 D)exp(-√(r_i/D)|X_i|).
Therefore the force f_1=k X_1 and f_N=k X_N exerted on the boundary oscillators are of the same form and independent of the size of the oscillator chain.
As the marginal distributions P(v_1) and P(f_1) are both size-independent, it is natural to expect that the joint distribution 𝒫(v_1,f_1), and hence the distribution of the instantaneous current at the boundary [see Eq. (<ref>)], is also size-independent. The analytical form of P(𝒥_1) is hard to calculate since the analytical form of P(v_1) is unknown. However, in Fig. <ref>(a) the numerical simulation result for P(𝒥_1) with different system sizes is shown, which confirms that P(𝒥_1) is size-independent. The behavior of P(𝒥_1) near the origin is shown in Fig. <ref>(b). Similar to P(𝒥_l), P(𝒥_1) also has a logarithmic divergence near the origin.
§.§ Second moment of the energy current
The second moment of the instantaneous energy current at bulk 𝒥_l and boundary 𝒥_1 can be calculated from the moment generating functions given by,
𝒥_l^2 =-d^2/d μ^2 e^i μ𝒥_l|_μ=0,
𝒥_1^2 =-d^2/d μ^2 e^i μ𝒥_1|_μ=0.
In Sec. <ref>, it has been shown that the current distribution P(𝒥_l) approaches Eq. (<ref>) for a thermodynamically large system, and the second moment of 𝒥_l is then given by <cit.>,
𝒥_l^2 =2 J_r^2+u_l^2.
In Fig. <ref>(a), numerically measured 𝒥_l^2 is shown as a function of r_1 for different values of r_N along with the analytical plot of Eq. (<ref>), and they are in a good agreement.
We do not have the analytical expression of the second moment of 𝒥_1. However, 𝒥_1^2 is size-independent as the probability distribution P(𝒥_1) is size-independent. In Fig. <ref>(b) numerically measured 𝒥_1^2 is shown as a function of r_1 for different values of r_N.
§ EFFECTIVE THERMAL LIMIT
In the limit of a high resetting rate, the colored noise coming from the resetting wall at the boundary becomes an effective white noise. Therefore an effective fluctuation-dissipation relation can be written in terms of an effective temperature, T_i^eff.
⟨ f_i(t) f_j(t') ⟩ =δ_ij a_i^2 e^-r_i|t-t'|≈ 2 γδ_ij T_i^effδ(t-t'), where T_i^eff=a_i^2/γ r_i.
Therefore the resetting wall at the boundary acts as an effective Langevin bath with effective temperature T_i^eff. In this limit of high resetting rate, the average energy flux in the NESS, Eq. (<ref>), becomes identical to that of the thermally driven harmonic chain <cit.> up to higher-order corrections,
J_r=k(T_1^eff-T_N^eff)/2 γ[ 1+m k/2 γ^2-m k/2 γ^2√(1+4 γ^2/mk)]+O(1/r_j^2).
Similarly, we can evaluate the two-time velocity correlation of a single oscillator and equal-time spatial velocity correlation in the limit of a high resetting rate. For large r_i, Eq. (<ref>) can be approximated as,
v_l(t) v_l(0) ≃T_1^eff+T_N^eff/2 m J_0(ω_c t),
here J_0(z) is the Bessel function of the first kind. Equal-time spatial velocity correlation in the limit of high resetting rate can be evaluated using Eqs.(<ref>) and (<ref>) and the result is,
v_l(t) v_l'(t) ≃T_1^eff+T_N^eff/2 mδ_ll'.
§ CONCLUSION
In this work, we study the NESS of a harmonic chain driven by resetting dynamics at the boundary. The harmonic chain is attached to two walls at the boundaries which undergo one-dimensional Brownian motion and stochastically reset to their initial positions at different rates. As a result, the boundary oscillators experience exponentially correlated forces that lead to a NESS. For the resetting driven harmonic chain, we conclude from the numerical simulation that the velocity distributions of the bulk oscillators reach a Gaussian distribution when the system size is thermodynamically large. The velocity distributions of the boundary oscillators are non-Gaussian and size-independent. We have numerically computed the velocity kurtosis profile and found that the velocity kurtosis of the bulk oscillators decays as N^-1, where N is the system size. We also show that the two-time velocity correlation of the bulk oscillators v_l(t) v_l(0) decays as t^-1/2 when t ≫ω_c^-1. In the bulk, there exists a nonzero equal-time spatial velocity correlation v_l(t) v_l'(t) when l ≠ l'. The distribution of the instantaneous energy current in the bulk is found to be size-dependent and reaches a stationary distribution only when N →∞. However, the distribution of the instantaneous current at the boundary is size-independent and depends only on the resetting rates of the boundary walls. It is important to note that, at a high resetting rate, an effective thermal picture emerges where we recover the expression for the average energy current of a thermally driven harmonic chain. In this effective thermal limit, the equal-time spatial velocity correlation v_l(t) v_l'(t) vanishes when l ≠ l'.
It will be interesting to study the effect of nonlinearity, mass disorder, momentum non-conserving dynamics, etc. on the energy transport driven by boundary resetting. It will also be intriguing to explore the effect on the NESS of a chain of harmonic oscillators when the walls attached to the boundary oscillators are subjected to non-Markovian resetting <cit.> and non-instantaneous resetting <cit.>. Another interesting question is how our results change when the boundary oscillators of the harmonic chain are attached to two baths of over-damped particles which reset to a specific position <cit.>.
§ ACKNOWLEDGMENTS
The authors would like to thank Urna Basu and Ion Santra for useful discussions. R.S. acknowledges support from the Council of Scientific and Industrial Research, India [Grant No.
09/0575(11358)/2021-EMR-I].
§ MATRIX FORMULATION
In this appendix, we briefly discuss the correlation calculations. To begin with, let us write the Langevin equations Eq. (<ref>) in matrix form,
MẌ=-Φ X(t) -ΓẊ(t) + F(t),
where X(t) is a state vector of dimension N× 1, and the l-th component is x_l(t), displacement of the l-th oscillator from the equilibrium position. The definition of Γ is,
Γ_ij=γδ_i1δ_j1 +γδ_iNδ_jN.
F(t) is the noise coming from the resetting wall. We can write these as,
F_j(t)=f_1(t)δ_j1+f_N(t)δ_jN.
The correlation of this noise in the frequency domain is,
⟨F̃(ω)F̃^T(ω')⟩_ij=2πδ(ω+ω')[ g̃(ω,r_1)δ_i1δ_j1+g̃(ω,r_N)δ_iNδ_jN],
where g̃(ω,r_j)=2 a_j^2 r_j /r_j^2+ω^2. With the help of the Fourier transform, we can get rid of the time derivatives. The Fourier transform of the displacement matrix,
X̃(ω)=∫^∞_-∞dt e^iω tX(t).
Therefore the Langevin equation Eq.(<ref>) takes the form,
X̃(ω)=G(ω) F̃(ω) where G(ω)=[-Mω^2+Φ-iωΓ]^-1. G(ω) is the inverse of the tridiagonal matrix. The explicit form of this matrix is,
G(ω)=[ -mω^2+2k-iωγ -k ⋯; -k -mω^2+2k ⋯; ⋮ ⋱ ⋯; 0 ⋯ -mω^2+2k-iωγ; ]^-1.
G is the inverse of a symmetric matrix, therefore G itself is a symmetric matrix and G^*(ω)=G(-ω).
We can write the components of G(ω)<cit.>,
G_l1=(-k)^l-1θ_N-l/θ_N and
G_lN=(-k)^N-lθ_l-1/θ_N.
where θ_l satisfy recursion relations,
θ_l = (-m ω^2 +2k)θ_l-1-k^2θ_l-2 ∀ l = 2,3,⋯ N-1,
and,
θ_N=(-mω^2+2k-iωγ)θ_N-1-k^2θ_N-2,
with the boundary condition,
θ_0 = 1, θ_1 = -mω^2 + 2k-iωγ .
Solving the recursion relations of Eq. (<ref>) we get,
θ_l = (-k)^l-1/sinq[k sin(l+1)q-iωγsinlq] and θ_N = (-k)^N/sinq[a(q)sinNq+b(q)cosNq].
Here,
a(q) = -2iγω/k+cosq(1-γ^2 ω^2/k^2) and b(q) = sinq( 1+γ^2 ω^2/k^2),
and ω and q are related through,
cosq=(1-mω^2/2k), and ω= ω_c sinq/2.
with ω_c=2√(k/m).
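In practice the Green's function can also be obtained without the recursion, by assembling the matrix and inverting it numerically; a short illustrative sketch:

```python
import numpy as np

def greens_function(omega, N=32, m=1.0, k=1.0, gamma=1.0):
    """G(omega) = [-M omega^2 + Phi - i omega Gamma]^(-1) by direct inversion."""
    Phi = (2 * k * np.eye(N)
           - k * np.diag(np.ones(N - 1), 1)
           - k * np.diag(np.ones(N - 1), -1))
    Gam = np.zeros((N, N))
    Gam[0, 0] = Gam[-1, -1] = gamma
    A = -m * omega**2 * np.eye(N) + Phi - 1j * omega * Gam
    return np.linalg.inv(A)

# e.g. |G_{l1}(omega)|^2 controls how the left-wall noise is transmitted to site l
G = greens_function(omega=0.7)
print(abs(G[16, 0])**2)
```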
§ CALCULATION FOR SPATIO-TEMPORAL TWO-POINT CORRELATION OF VELOCITY
In this section, we will calculate the spatio-temporal two-point correlation of the velocity of l-th oscillator v_l(t)v_l'(t') in the steady state for resetting driven oscillator chain. The velocity of the l-th oscillator of the resetting driven chain can be written using Eqs. (<ref>) and (<ref>),
v_l(t)=∫_-∞^∞d ω/2 πe^-iω t(-i ω)[G_l1(ω)f̃_1(ω)+G_lN(ω)f̃_N(ω)].
Here f̃_j(ω) is the Fourier transform of the resetting force, f_j(t). Using Eq. (<ref>), we can write the spatio-temporal correlation of the velocity as,
v_l(t)v_l'(t') = ∫_-∞^∞dω/2 πω^2 e^-iω (t-t')[G_l1(ω)G_l'1^*(ω)g̃(ω,r_1)+G_lN(ω)G_l'N^*(ω)g̃(ω,r_N)].
§.§ Two-time velocity correlation of single oscillator:
For l=l' and t>t',
v_l(t) v_l(t') =∫_-∞^∞dω/2 πω^2 e^-iω (t-t')[|G_l1(ω)|^2g̃(ω,r_1) +|G_lN(ω)|^2 g̃(ω,r_N)].
From the above equation, it is clear that the two-time velocity correlation of single oscillator v_l(t) v_l(t') for resetting driven chain, can be written as a sum of two separate contributions coming from two resetting walls at the two boundaries, i.e.,
v_l(t) v_l(t') =C̅(r_1,t,t')+C̅(r_N,t,t').
The contribution from the left resetting wall,
C̅(r_1,t,t')=1/2π∫_-∞^∞ dωω^2 e^-iω (t-t')|G_l1(ω)|^2g̃(ω,r_1).
C̅(r_1,t,t') will have a non-zero contribution from the even terms with respect to ω. Using Eq. (<ref>) and keeping only the terms that are even in ω, we get,
C̅(r_1,t,t') = 1/k^4∫_0^∞dω/πω^2 cos[ω (t-t')](k^2 sin^2(N q-l q+q)/|a(q)sinNq+b(q)cosNq|^2 + ω^2 γ^2 sin^2(N q-l q)/|a(q)sinNq+b(q)cosNq|^2)g̃(ω,r_1)
We are interested in the time correlation of the velocity in the bulk of the chain, therefore we take l=N/2+ϵ with ϵ≪ N, which leads to,
C̅(r_1,t,t') = 1/k^4∫_0^∞dω/2 πω^2 cos[ω (t-t')]((k^2+ω^2 γ^2)/|a(q)sinNq+b(q)cosNq|^2 - k^2 cos(Nq -2ϵ q +2q)+ω^2 γ^2 cos(Nq-2 ϵ q)/|a(q)sinNq+b(q)cosNq|^2) g̃(ω,r_1).
For ω>ω_c, q becomes complex. In the thermodynamically large system size, the integrand vanishes exponentially as e^-2Nq̅ in the region ω>ω_c, here q̅ is real. The range of the integration reduces to 0≤ω≤ω_c, or 0≤ q≤π. In the large-N limit, cosN q is a highly oscillatory function and we can evaluate these integrations by averaging over fast oscillation in x=N q, using the following identities,
1/2 π∫^2π_0d x/(c_1 sinx+d cosx)^2+c_2^2 sinx^2 = -1/c_2 d, for c_2<0,
1/2 π∫^2π_0d xcosx/(c_1 sinx+d cosx)^2+c_2^2 sinx^2 = 0.
with c_2=Im[a(q)]=-2γω/k and d=Re[b(q)]=sinq( 1+γ^2 ω^2/k^2). Therefore Eq. (<ref>) can be written as,
C̅(r_1,t,t')= 1/k^4∫_0^∞dω/2πω^2 cos[ω (t-t')]k^2+γ^2ω^2/-d c_2g̃(ω,r_1).
Finally, we arrive at,
C̅(r_1,t,t') = 1/k∫_0^πdq/π|d ω/d q|ωcos[ω (t-t')]/4γsinqg̃(ω,r_1) = a_1^2 r_1/γ m∫_0^πdq/2πcos[ω_c sin(q/2)(t-t') ] /r_1^2+ω_c^2 sin^2q/2
=a_1^2 C(r_1,t,t'),
where we have used the explicit form of g̃(ω,r_1) and ω(q) from Eq. (<ref>) and (<ref>). Similarly, we can evaluate the contribution from the right resetting wall, and by combining these two results we arrive at,
v_l(t) v_l(t') =1/γ m∫_0^πdq/2 πcos[ω_c sin(q/2)(t-t') ][a_1^2 r_1 /r_1^2+ω_c^2 sin^2q/2+a_N^2 r_N /r_N^2+ω_c^2 sin^2q/2].
This integral can be evaluated numerically.
Large time behavior (t-t'≫ω_c^-1):
To calculate the large time behavior of v_l(t) v_l(t'), we first substitute z=ω_c sin(q/2) in Eq. (<ref>) and arrive at,
C(r_1,t,t')= r_1/γ m∫_0^ω_cd z/πcos(z (t-t'))/√(ω_c^2-z^2)(r_1^2+z^2).
For large t-t', cos(z (t-t')) is a bounded and fast oscillatory function. Therefore, the dominating contribution to the integral is coming from the region z=ω_c and contributions from small z are negligible. We can now approximate r_1^2+z^2 as r_1^2+ω_c^2. Therefore the last integral becomes,
C(r_1,t,t')≃ r_1/γ m∫_0^ω_cd z/πcos(z (t-t'))/√(ω_c^2-z^2)(r_1^2+ω_c^2)= r_1/2 γJ_0(ω_c (t-t'))/(m r_1^2+4 k)
Therefore when t-t' is large, using Eqs. (<ref>) and (<ref>) we get
v_l(t) v_l(t') ≃[a_1^2 r_1/2 γ (mr_1^2+4 k)+a_N^2 r_N/2 γ (mr_N^2+4 k)]J_0(ω_c (t-t')).
Kinetic temperature profile at the bulk: The bulk value of the kinetic temperature profile can be evaluated using Eq. (<ref>) by taking t=t'. Therefore, in the stationary state,
v_l(t)^2 =1/γ∫_0^πdq/2 π[a_1^2 r_1 /mr_1^2+4k sin^2q/2+a_N^2 r_N /mr_N^2+4k sin^2q/2].
The last integral can be evaluated exactly, which leads to the following expression of the bulk value of the kinetic temperature profile,
T̂_bulk=m v_l^2(t) =a_1^2 /2 γ√(r_1^2+4 k /m)+a_N^2 /2 γ√(r_N^2+4 k /m).
§.§ Equal-time spatial velocity correlation:
For l≠ l' and t=t', using Eq. (<ref>),
v_l(t)v_l'(t) = ∫_-∞^∞dω/2 πω^2 [G_l1(ω)G_l'1^*(ω)g̃(ω,r_1)+G_lN(ω)G_l'N^*(ω)g̃(ω,r_N)].
Similar to the previous calculation, v_l(t)v_l'(t) can be written as a sum of two separate contributions coming from resetting walls at the boundaries as the following,
v_l(t)v_l'(t) =θ̅(r_1,l,l')+θ̅(r_N,l,l').
The contribution from the left resetting wall,
θ̅(r_1,l,l')=∫_-∞^∞dω/2πω^2 G_l1(ω)G_l'1^*(ω)g̃(ω,r_1).
Using Eq. (<ref>) and keeping only the terms that are even in ω, we get,
θ̅(r_1,l,l') = 1/ k^4∫_0^∞dω/2πω^2 ( (k^2+γ^2 ω^2)cos(l' q-lq)/|a(q)sinNq+b(q)cosNq|^2 - k^2 cos(Nq-ϵ q +2 q)+ω^2 γ^2 cos(N q-ϵ q)/|a(q)sinNq+b(q)cosNq|^2)g̃(ω,r_1).
Now we are interested in l=N/2, l'= N/2+α where α is an integer. For ω>ω_c, the integrand vanishes [see the discussion after Eq. (<ref>)]. The final expression after averaging over fast oscillation using Eq. (<ref>) is,
θ̅(r_1,l,l')=1/ k^4∫_0^πdq/2π| dω/dq| ω^2 (k^2+γ^2 ω^2)cos(l'-l)q/-d c_2g̃(ω,r_1).
Using c_2=Im[a(q)]=-2γω/k and d=Re[b(q)]=sinq( 1+γ^2 ω^2/k^2), and explicit form of g̃(ω,r_1), we arrive at,
θ̅(r_1,l,l')=a_1^2 r_1/γ∫_0^πdq/2πcos(l'q-lq)/m r_1^2+4 k sin^2q/2
= a_1^2 /2γθ(r_1,l,l') ,
where,
θ(r_i,l,l')=r_i/m r_i^2+4 k _3 F̃_2 [1/2,1,1;1-l'+l,1+l'-l;4 k/4 k+mr_i^2].
Here l'-l is an integer and _p F̃_q is a generalized regularized hypergeometric function. We can calculate the contribution from the right resetting wall in a similar manner. Combining both the contribution we arrive at,
v_l(t) v_l'(t) = 1/2 γ[ a_1^2 θ(r_1,l,l')+a_N^2 θ(r_N,l,l') ].
§ VELOCITY DISTRIBUTION OF THE BOUNDARY OSCILLATOR
References
rllmothRieder Rieder Z, Lebowitz J L and Lieb E 1967 J. Math. Phys. 8 1073
advncpdharDhar A 2008 Adv. in Phys. 57 457
Transportbook Thermal Transport in
Low Dimensions 2016, Ed. Stefano Lepri, Springer Heidelberg
nakazawa Nakazawa H 1970 Prog. Theor. Phys. Suppl. 45 231
RoyDhar2008 Roy D and Dhar A 2008 J. Stat. Phys. 131 535
Dhar2001 Dhar A 2001 Phys. Rev. Lett. 86 5882
FPUT Lepri S, Livi R and Politi A 2005 Chaos 15 015118
FPUT_alternatingmass Mai T, Dhar A and Narayan O 2007 Phys. Rev. Lett. 98 184301
kundu_sanjibKundu A, Sabhapandit S and Dhar A 2011 J. Stat. Mech. P03007
kannan_12Kannan V, Dhar A and Lebowitz J L 2012 Phys. Rev. E 85 041118
Kubo Kubo R 1966 Rep. Prog. Phys. 29 255
bacterialbath2011 Valeriani C, Li M, Novosel J, Arlta J and Marenduzzoa D 2011 Soft Matter 7 5228
gopal2021 Gopal A, Roldán É and Ruffo S 2021 J. Phys. A: Math. Theor. 54 164001
kafri2021 Granek O, Kafri Y and Tailleur J 2022 Phys. Rev. Lett. 129 038001
maggi2014 Maggi C, Paoluzzi M, Pellicciotta N, Lepore A, Angelani L and Di Leonardo R 2014 Phys. Rev. Lett. 113 238303
maes2020 Maes C 2020 Phys. Rev. Lett. 125 208001
active_bath Seyforth H, Gomez M, Rogers W B, Ross J L and Ahmed W W 2022 Phys. Rev. Research 4 023043
collapse_polymer Mousavi S M, Gompper G and Winkler R G 2021 J. Chem. Phys. 155 044902
work_fluct Pal A and Sabhapandit S 2014 Phys. Rev. E 90 052116
dissipation_activefluid Fodor É, Nemoto T and Vaikuntanathan S 2020 New J. Phys. 22 013052
sup_diff_colloid Chaki S and Chakrabarti R 2019 Physica A: Stat. Mech. Appl. 530 121574
santra2022 Santra I 2023 J. Phys. Complex.4, 015013
activity_driven_chainSantra I and Basu U 2022 Scipost Phys. 13 041
activity_stationary Sarkar R, Santra I and Basu U 2023 Phys. Rev. E 107 014123
resetrevEvans M R, Majumdar S N and Schehr G 2020 J. Phys. A: Math. Theor. 53 193001
satyaprl_reset Evans M R and Majumdar S N 2011 Phys. Rev. Lett. 106 160601
noneq_bath Maes C and Thiery T 2017 J. Phys. A: Math. Theor.50, 415001
log_pot_reset Ray S and Reuveni S 2020 J. Chem. Phys152, 234110
telegraphic Masoliver J 2019 Phys. Rev. E99, 012121
levyflight1 Kuśmierz L, Majumdar S N, Sabhapandit S, and Schehr G 2014 Phys. Rev. Lett.113, 220602
levyflight2 Majumdar S N, Mounaix P, Sabhapandit S and Schehr G 2022 J. Phys. A: Math. Theor.55, 034002
coagulation Durang X, Henkel M, and Park H 2014 J. Phys. A: Math. Theor.47, 045002
particle_transport1Basu U, Kundu A and Pal A 2019 Phys. Rev. E100, 032136
particle_transport2Mishra S and Basu U 2023 J. Stat. Mech. 053202
particle_transport3Jain S, Boyer D, Pal A, and Dagdug L 2023 J. Chem. Phys.158, 054113
particle_transport4 Di Bello C, Hartmann A K, Majumdar S N, Mori F, Rosso A and Schehr G
rtp1 Evans M R and Majumdar S N 2018 J. Phys. A: Math. Theor. 51, 475003
rtp2 Santra I, Basu U, and Sabhapandit S 2020 J. Stat. Mech. 113206
gbp_reset Stojkoski V, Sandev T, Kocarev L, and Pal A 2021 Phys. Rev. E104, 014121
extreme_reset Singh P, and Pal A 2021 Phys. Rev. E103, 052119
work_fluctuation Gupta D, Plata C A, and Pal A 2020 Phys. Rev. E124, 110608
resetting_correlation Majumdar S N and Oshanin G 2018 J. Phys. A: Math. Theor. 51 435001
lang_int Vanden-Eijndena E and Ciccotti G 2006 Chem. Phys. Lett. 429, 310
DLMF NIST Digital Library of Mathematical Functions, Olver F W J, Olde Daalhuis A B, Lozier D W, Schneider B I, Boisvert R F, Clark C W, Miller B R, Saunders B V, Cohl H S, and McClain M A, eds. , Release 1.1.6 of 2022-06-30
time_correlKundu A 2010 Phys. Rev. E 82 031131
nonmarkovian_resetNagar A and Gupta S 2016 Phys. Rev. E, 93, 060102
noninstant_reset1Besga B., Bovon A, Petrosyan A, Majumdar S. N. and Ciliberto S. 2020 Phys. Rev. Res., 2, 032029
noninstant_reset2Gupta D, Plata C. A., Kundu A, and Pal A 2020. J. Phys. A: Math. Theor., 54, 025003.
noninstant_reset3Santra I, Das S and Nath S. K. 2021 J. Phys. A: Math. Theor., 54, 334001
noninstant_reset4 Gupta D, Pal A and Kundu A 2021 J. Stat. Mech., 043202
tridiagonal Usmani R. A 1994 Comput. Math. Appl. 27, 59 ]
|
http://arxiv.org/abs/2307.04188v1 | 20230709144204 | Wasserstein-p Bounds in the Central Limit Theorem Under Local Dependence | [
"Tianle Liu",
"Morgane Austern"
] | math.PR | [
"math.PR",
"math.ST",
"stat.TH",
"60F05"
] |
Wasserstein-P Bounds in the Central Limit Theorem Under Local Dependence
Department of Statistics, Harvard University, Cambridge, MA 02138
2020 Mathematics Subject Classification: 60F05.
The central limit theorem (CLT) is one of the most fundamental results in probability, and establishing its rate of convergence has been a key question since the 1940s. For independent random variables, a series of recent works established optimal error bounds under the Wasserstein-p distance (with p≥ 1). In this paper, we extend those results to locally dependent random variables, which include m-dependent random fields and U-statistics. Under conditions on the moments and the dependency neighborhoods, we derive optimal rates in the CLT for the Wasserstein-p distance. Our proofs rely on approximating the empirical average of dependent observations by the empirical average of i.i.d. random variables. To do so, we expand the Stein equation to arbitrary orders by adapting Stein's dependency neighborhood method. Finally we illustrate the applicability of our results by obtaining efficient tail bounds.
§ INTRODUCTION
The central limit theorem (CLT) is one of the most fundamental theorems in probability theory. Initially formulated for independent and identically distributed random variables, it has since then been generalized to triangular arrays <cit.>, martingales <cit.>, U-statistics <cit.>, locally dependent random variables <cit.>, and mixing random fields <cit.>. Let (I_n) be an increasing sequence of subsets I_1⊆ I_2⊆⋯⊆ I, whose sizes increase to infinity |I_n|→∞. Set (X_i)_i∈ I to be (dependent) centered random variables. Under certain conditions on the moments of (X_i) and on its dependence structure, the CLT states that the scaled sum is asymptotically normal, i.e.,
W_n:=σ_n^-1∑_i∈ I_n X_i⇒𝒩(0,1),
where we write σ_n^2=Var(∑_i∈ I_nX_i). Starting with the work of Berry and Esseen in 1940s, there is a long history of quantifying how far W_n is from being normally distributed. One of the most important metrics to do so is the Wasserstein-p distance originated in optimal transport theory <cit.>. For two probability measures ν and μ over the real line ℝ, we denote by Γ(ν,μ) the set of all couplings of ν and μ, and the Wasserstein-p distance between ν and μ is defined as
𝒲_p(ν,μ):=inf_γ∈Γ(ν,μ)(𝔼_(X,Y)∼γ[| X-Y |^p])^1/p.
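Since all distances here are between one-dimensional distributions, empirical Wasserstein-p distances are cheap to compute: the optimal coupling between two equal-size samples simply pairs order statistics. A small Python sketch (with a toy standardized sum, not one of the statistics studied in this paper):

```python
import numpy as np

def wasserstein_p(x, y, p=3.0):
    """Empirical W_p between two equal-size one-dimensional samples."""
    x, y = np.sort(x), np.sort(y)        # the monotone (quantile) coupling is optimal in 1-d
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(3)
n, size = 20, 50_000
w = (rng.exponential(size=(size, n)) - 1.0).sum(axis=1) / np.sqrt(n)   # standardized i.i.d. sum
z = rng.standard_normal(size)
print(wasserstein_p(w, z, p=3.0))        # shrinks roughly like n^(-1/2) as n grows
```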
When the observations (X_i) are independent, <cit.> established that for p=1 the convergence rate for the CLT is 𝒪(| I_n|^-1/2). Extending such results to p>1 remained for a while an open question. The first bounds for p> 1 obtained by <cit.> dating back to the 1970s were sub-optimal in terms of the sample size |I_n| as they decrease at a slower rate of 𝒪(|I_n|^-1/2+1/p). <cit.> obtained that, for 1≤ p≤ 2, the Wasserstein distance converges at the optimal rate 𝒪(| I_n|^-1/2) under some additional necessary moment conditions, and they conjectured that such a rate would be extendable to arbitrary p≥ 1. This was recently proven to be true by <cit.> using a series of methods including the Edgeworth expansion and the exchangeable pair method. They showed that if max_i X_i_p+2<∞ and if Var(X_1)=Var(X_i)=1, then there is a constant K_p<∞ such that
𝒲_p(ℒ(W_n),𝒩(0,1))≤K_p‖ X_1‖_p+2^1+2/p/√(| I_n|),
where ℒ( · ) designates the distribution of the given random variable. It is however crucial to note that these rates were obtained under the key assumption of independence of the (X_i). In this paper, we aim to generalize this beyond the assumption of independence which is restrictive for many applications.
An important class of dependent observations (X_i) are locally dependent random variables. Intuitively, we say that (X_i) are locally dependent if for every finite group of random variables (X_i)_i∈ J, where J⊂ I, there exists a subset N(J)⊂ I such that (X_i)_i∈ J is independent from (X_i)_i∈ I∖ N(J). The subset N(J) is often called the dependency neighborhood of J. Examples of such random variables include m-dependent random fields, U-statistics, and subgraph count statistics in the Erdős–Rényi random graphs. Under general conditions on the sizes of the dependency neighborhoods the central limit theorem is known to hold and its rate of convergence in Wasserstein-1 distance was established by <cit.>. This was extended to Wasserstein-2 bounds by <cit.> by relating it to Zolotarev's metrics and cleverly exploiting Stein's method. Drawing inspiration from <cit.>, sub-optimal rates were also achieved in <cit.> for arbitrary p≥ 1 under more technical conditions. Nevertheless, an optimal rate bound for general Wasserstein-p distances (p≥ 1) remains unknown. This is the gap that we fill in this paper. We consider locally dependent (not necessarily identically distributed) random variables (X_i), and consider the empirical average W_n:=σ_n^-1∑_i∈ I_nX_i where σ_n^2:=∑_i∈ I_nX_i. For all p≥ 1 we obtain bounds for the 𝒲_p distance 𝒲_p(ℒ(W_n),𝒩(0,1)). We do so under the assumption that the variances (σ_n) are nondegenerate, and under moment conditions and on the sizes of dependency neighborhoods. Notably if the size of the dependency neighborhoods is uniformly bounded we obtain bounds that decrease at the optimal rate (see <ref>)
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(1/√(|I_n|)).
We further generalize our results to triangular arrays where the random variables (X^(n)_i) are allowed to change with n. Finally, we demonstrate how those bounds can be exploited to obtain non-uniform Berry–Esseen type bounds that have polynomial decay.
The key idea of our proofs is to approximate the empirical average W_n by an empirical average V_n of i.i.d. random variables for which Wasserstein's bounds are already known. To do this we establish an Edgeworth-type expansion of the Stein equation in terms of the cumulants of the W_n. Indeed, in <ref> we prove that if h is a function smooth enough (made precise later) and Z∼𝒩 is a standard random variable then
𝔼[ h(W_n)]-𝔼[h(Z)]=𝔼 [f_h'(W_n)-W_nf_h(W_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
where f_h is the solution of the Stein equation <ref> and where (κ _j(W_n)) designates the cumulants of W_n (the other notations will be made explicit in the next few sections). This generalizes a similar well-known result for i.i.d. observations established in <cit.>. To guarantee that our choice of V_n is a good approximation of W_n we utilize this expansion and exploit the Hamburger moment problem to choose V_n to be such that its first ⌈ p ⌉+1 cumulants match the ones of W_n.
§.§ Related Literature
<cit.> established that the convergence rate in the central limit theorem is 𝒪(| I_n|^-1/2) in terms of the Wasserstein-1 distance. Since then it has been tightened and generalized to dependent observations.
Notably, Stein's method offers a series of powerful techniques for obtaining Wasserstein-1 bounds in the dependence setting. See <cit.> for a survey of those methods. <cit.> obtained Wasserstein-1 bounds under local dependence conditions.
<cit.> proposed a rate of 𝒪(|I_n|^-1/2+1/p) for the Wasserstein-p distance under the hypothesis that the random variables have finite exponential moments. <cit.> obtained a similar rate but only required the existence of p-th moments. <cit.> showed that in order to obtain a convergence rate of 𝒪(| I_n|^-1/2), it is necessary to require finite (p+2)-th moments of the random variables. They also obtained the optimal rate for 1≤ p≤ 2 and conjectured that a similar rate should be valid for any arbitrary p> 2. This conjecture was demonstrated to be true by <cit.>. Those two papers took different approaches. <cit.> used an Edgeworth expansion argument. <cit.>, on the other hand, used the Ornstein-Uhlenbeck interpolation combined with a Stein exchangeable pair argument and their methods further applied to multivariate settings. Previous to that, <cit.> had already obtained the optimal rate for the Wasserstein-p distance using the Ornstein-Uhlenbeck interpolation but needed significantly stronger assumptions on the distribution of the random variables by requiring the existence of a Stein kernel. Moreover, for the special case p=2, the celebrated HWI inequality <cit.> and Talagrand quadratic transport inequality <cit.> can help obtain Wasserstein-2 bounds by relating it to the Kullback-Leibler divergence.
Contrary to the independent case, much less is known for the general Wasserstein-p distance for dependent data. <cit.> adapted the Stein's method to obtain Wasserstein-2 bounds for locally dependent variables. <cit.> modified the approach of <cit.> and obtained a sub-optimal rate 𝒪(| I_n|^-1/2log | I_n|) for the Wasserstein-p distance under local dependence. Our results propose significant extensions to both of those results by generalizing the optimal rate to arbitrary p≥ 1.
Our proofs also rely on Stein's method and a result of <cit.> that allows us to upper-bound the Wasserstein-p distance by an integral probability metric <cit.>. As those metrics are defined as the supremum of expected differences over a certain class of functions, Stein's method lends itself nicely to this problem.
Stein's method was first introduced in <cit.> as a new way to obtain a Berry–Esseen bound and prove the central limit theorem for weakly dependent data. It has since become one of the most popular and powerful tools for proving asymptotic normality for dependent data, and different adaptations of it have been proposed, notably dependency neighborhoods, exchangeable pairs, zero-bias coupling, and size-bias coupling <cit.>. In addition to being used to prove the central limit theorem, it has also been adapted to obtain limit theorems with the Poisson distribution <cit.> or the exponential distribution <cit.>. Moreover, it has been used for comparing different univariate distributions <cit.>. Our use of Stein's method is closely related to the dependency neighborhood method described in <cit.>.
§.§ Paper Outline
In <ref> we clarify some notations that we use throughout the paper. Then we present our results under two different local dependence conditions in <ref>. In <ref> and <ref> we respectively apply our results to m-dependent random fields and to U-statistics. In <ref> we apply our results to obtain non-uniform Berry–Esseen bounds with polynomial decay.
In <ref>, we make an overview of our proof techniques. In <ref> we present the main lemmas (notably <ref>) and use them to prove the main result <ref>. Those lemmas and additional results are proved in <ref>.
§ GENERAL NOTATIONS
§.§ Notations concerning integers and sets
In this paper, we write ⌈ x⌉ to denote the smallest integer that is greater than or equal to x and ⌊ x⌋ to denote the largest integer that is less than or equal to x. We use ℕ to denote the set of non-negative integers and let ℕ_+ be the set of positive integers. For any n∈ℕ_+, denote [n]:={ℓ∈ℕ_+:1≤ℓ≤ n}.
Moreover, for a finite set B we denote by |B| its cardinality.
§.§ Notations for sequences
Given a sequence (x_i) we will shorthand x_1:ℓ=(x_1,⋯,x_ℓ) and similarly for any subset B⊆ℕ_+ we denote x_B:=(x_i)_i∈ B.
§.§ Notations for functions
For any real valued functions f( · ),g( · ):ℕ_+→ℝ, we write f(n)≲ g(n) or f(n)=𝒪(g(n)) if there exists some constant C (with dependencies that are fixed in the contexts) and an integer N>0 such that the inequality f(n)≤ C g(n) holds for all n≥ N. We further write f(n)≍ g(n) as shorthand for f(n)≲ g(n) and g(n)≲ f(n).
§.§ Notations for probability distributions
For a random variable X we write by ℒ(X) the distribution of X.
§ MAIN THEOREMS
Let p≥ 1 be a positive real number and write ω :=p+1-⌈ p⌉∈ (0,1]. We choose I to be an infinite index set and (I_n)_n=1^∞ to be an increasing sequence of finite subsets I_1⊆ I_2⊆⋯⊊ I such that |I_n|→∞ as n→∞.
Let (X^0.6(n)_i)_i∈ I_n be a triangular array of random variables, each row indexed by i∈ I_n (n=1,2,⋯). We define W_n to be the following empirical average
W_n:=σ_n^-1∑_i∈ I_nX^0.6(n)_i, with σ_n^2:=Var(∑_i∈ I_n X^0.6(n)_i).
Under the hypothesis that the random variables (X^0.6(n)_i) are locally dependent we will, in this section, bound the Wasserstein-p distance between W_n and its normal limit. The bound we obtain depends on the size of the index set I_n, the moments of the random variables and the structure of local dependence in question.
To formally state our conditions on the dependency structure of (X_i^(n)), we first define the notion of dependency neighborhoods, similarly to <cit.>.
Given random variables (Y_i)_i∈ I and given J⊆ I, we say that a set N(J)⊆ I containing J is a dependency neighborhood of J if { Y_j:j∉ N(J) } is independent of { Y_j: j∈ J}. To state our theorem, we impose that such dependency neighborhoods can be defined for (X_i^0.6(n)). More formally, we assume that there is a sequence (N_n(i_1:q))_q of subsets of I_n that satisfies the following conditions:
[LD-1]: For each i_1∈ I_n, the subset N_n(i_1)⊆ I_n is such that { X^0.6(n)_j:j∉ N_n(i_1) } is independent of X^0.6(n)_i_1.
[LD-q] (q≥ 2): For each i_1∈ I_n, i_2∈ N_n(i_1), ⋯, i_q∈ N_n(i_1:(q-1)), the subset N_n(i_1:q)⊂ I_n is such that { X^0.6(n)_j:j∉ N_n(i_1:q) } is independent of (X^0.6(n)_i_1,⋯,X^0.6(n)_i_q).
We remark that the sequence of subsets (N_n(i_1:q))_q is increasing, i.e., N_n(i_1:(q-1))⊆ N_n(i_1:q) in q; and that the neighborhoods N_n(i_1:q) are allowed to be different for different values of n–which reflects the triangular array structure of our problem. The condition of dependency neighborhoods here generalizes the one in <cit.> and was also adopted in <cit.>, inspired by <cit.>. <cit.> obtained a Wasserstein-1 bound under “decomposable” conditions similar to [LD-1] and [LD-2], and <cit.> showed a Berry–Esseen type result under slightly stronger assumptions for local dependence, while finally <cit.> obtained a Wasserstein-2 bound.
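For illustration (this sketch and its choices of n, m, and indices are ours, not part of the formal development), the nested neighborhoods required by [LD-1] and [LD-q] can be written down explicitly for an m-dependent sequence on I_n={1,…,n}:

```python
# A toy illustration (not part of the formal development): dependency
# neighborhoods for an m-dependent sequence X_1, ..., X_n on I_n = {1, ..., n},
# where N(i_1:q) can be taken as the union of the m-windows around i_1, ..., i_q.
# The values n = 20, m = 2 and the chosen indices are arbitrary.

def dependency_neighborhood(indices, n, m):
    """Return N(i_1:q) = union over i in `indices` of {j : |j - i| <= m}."""
    nbhd = set()
    for i in indices:
        nbhd.update(range(max(1, i - m), min(n, i + m) + 1))
    return nbhd

n, m = 20, 2
N1 = dependency_neighborhood([5], n, m)        # [LD-1]: X_j with j outside N1 is independent of X_5
N2 = dependency_neighborhood([5, 7], n, m)     # [LD-2]: likewise for the pair (X_5, X_7)
assert N1 <= N2                                # the neighborhoods are increasing in q
print(sorted(N1), sorted(N2))                  # [3,...,7] and [3,...,9]
```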
In order to define the remainder terms that will appear in our bounds, we introduce the following notions. Given t∈ℕ_+ and ℓ∈ℕ_+ with ℓ≤ t, we say that the tuple (η_1,η_2,⋯,η_ℓ) is an integer composition of t if and only if η_1:ℓ are positive integers such that η_1+η_2+⋯+η_ℓ=t. We denote by C(t) the set of all possible integer compositions
C (t):={ℓ,η_1:ℓ∈ℕ_+:∑_j=1^ℓη_j=t }.
Moreover, for any random variables (Y_i)_i=1^t, we define the order-t compositional expectation with respect to η_1:ℓ as
[η_1,⋯,η_ℓ]▹ (Y_1,⋯,Y_t):=
𝔼[Y_1⋯ Y_η_1] 𝔼[Y_η_1+1⋯ Y_η_1+η_2] ⋯ 𝔼[Y_η_1+⋯+η_ℓ-1+1⋯ Y_t].
Note that if η_ℓ=1, the last expectation reduces to 𝔼 [Y_t]. For any positive integer k and real value ω∈ (0,1], we define
R_k,ω,n:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ I_n∑_i_2∈ N_n(i_1)⋯∑_i_k+1∈ N_n(i_1:k)
[η_1,⋯,η_ℓ]▹(| X^0.6(n)_i_1|,⋯,| X^0.6(n)_i_k+1|,(∑_i_k+2∈ N_n(i_1:(k+1))| X^0.6(n)_i_k+2|)^ω),
where C^*(k+2) is given by
C^*(t):={(ℓ,η_1:ℓ)∈ C(t): η_j≥ 2 for 1≤ j≤ℓ-1}⊆ C(t).
The terms (R_k,ω,n) are remainder terms that appear in our bound of the Wasserstein-p distance between W_n and its normal limit.
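For a concrete feel for these index sets (an illustrative enumeration only; t=3 is an arbitrary choice), one can list C(t) and C^*(t) directly:

```python
# Illustrative enumeration (t = 3 is an arbitrary example) of the integer
# compositions C(t) and of the restricted set C*(t), in which every part
# except possibly the last one is at least 2.

def compositions(t):
    """Yield every tuple (eta_1, ..., eta_l) of positive integers summing to t."""
    if t == 0:
        yield ()
        return
    for first in range(1, t + 1):
        for rest in compositions(t - first):
            yield (first,) + rest

def restricted_compositions(t):
    """C*(t): compositions whose parts eta_1, ..., eta_{l-1} are all >= 2."""
    return [c for c in compositions(t) if all(part >= 2 for part in c[:-1])]

print(list(compositions(3)))        # [(1, 1, 1), (1, 2), (2, 1), (3,)]
print(restricted_compositions(3))   # [(2, 1), (3,)]
```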
Let (X^0.6(n)_i)_i∈ I_n be a triangular array of mean zero random variables and suppose that they satisfy [LD-1] to [LD-(⌈ p⌉+1)]. Let σ_n^2:=Var(∑_i∈ I_n X^0.6(n)_i) and define W_n:=σ_n^-1∑_i∈ I_nX^0.6(n)_i. Further suppose that for any j∈ℕ_+ such that j≤⌈ p⌉ -1, it holds that R_j,1,n→ 0 as n→∞. Then there exists an integer N∈ℕ_+ such that for all n≥ N, we have the following Wasserstein bounds:
𝒲_p(ℒ(W_n), 𝒩(0,1)) ≤ C_p (∑_j=1^⌈ p⌉-1R _j,1,n^1/j+∑_j=1^⌈ p⌉R _j,ω,n ^1/(j+ω -1) ),
where ω=p+1-⌈ p⌉ and C_p is a constant that only depends on p.
We note that the condition that the remainder terms R_j,1,n shrink to 0 for all j≤⌈ p⌉ -1 impose an implicit constraint on the size of the sets N_n(i_1:q).
In particular, for p=1,2 we have
𝒲_1(ℒ(W_n), 𝒩(0,1))≤ C_1R_1,1,n,
𝒲_2(ℒ(W_n), 𝒩(0,1))≤ C_2(R_1,1,n+R_2,1,n^1/2).
where the remainders are given by
R_1,1,n= σ_n^-3∑_i∈ I_n∑_j∈ N_n(i)∑_k∈ N_n(i,j)(𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_k|]+𝔼[| X^0.6(n)_iX^0.6(n)_j|] 𝔼[| X^0.6(n)_k|]),
R_2,1,n= σ_n^-4∑_i ∈ I_n∑_j∈ N_n(i)∑_k∈ N_n(i,j)∑_ℓ∈ N_n(i,j,k)(𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_kX^0.6(n)_ℓ|]
+𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_k|] 𝔼[| X^0.6(n)_ℓ|]+𝔼[| X^0.6(n)_iX^0.6(n)_j|] 𝔼[| X^0.6(n)_kX^0.6(n)_ℓ|]).
Note that (<ref>) was proven by <cit.> and (<ref>) is a corollary of Theorem 2.1, <cit.>. The bound (<ref>) with an integer p was also proposed as a conjecture in <cit.>. As p grows, the right-hand side of (<ref>) becomes more and more complicated, which suggests the necessity of new assumptions in order to obtain a simplified result. We further remark that the choice of N_n (i_1:q) might not be unique (even if we require that it has the smallest cardinality among all possible index sets that fulfill the assumption [LD-q]).
Therefore, to be able to obtain more interpretable upper bounds for the remainder terms (R_j,ω,n), we impose a slightly stronger assumption on the dependence structure:
[LD*]: We suppose that there exists a graph G_n=(V_n,E_n), with V_n:=I_n being the vertex set and E_n being the edge set, such that for any two disjoint subsets J_1,J_2⊆ I_n if there is no edge between J_1 and J_2, then { X^0.6(n)_j:j∈ J_1} is independent of { X^0.6(n)_j:j∈ J_2}.
Introduced by <cit.>, the graph G_n defined above is known as the dependency graph; it was later adopted in <cit.>. Please refer to <cit.> for a detailed discussion.
If [LD*] is satisfied, for any subset J⊆ I_n we define N_n(J) to be the set of vertices in the neighborhood of J in the graph G_n.
To be precise, this is
N_n(J):=J∪{ i∈ I_n: e(i,j)∈ E_n for some j∈ J },
where e(i,j) denotes an edge between the vertices i and j.
To simplify the notations, we further denote N_n(J) by N_n(i_1:q) if J={ i_1,⋯,i_q} for any 1≤ q≤⌈ p⌉+1.
Then (N_n(i_1:q)) not only satisfies [LD-1] to [LD-(⌈ p⌉+1)], but has the following properties as well:
* N_n(i_1:q)=N_n(i_π(1),⋯,i_π(q)) for any permutation π on { 1,⋯,q };
* i_q∈ N_n(i_1:(q-1))⇔ i_1∈ N_n(i_2:q).
We point out that, by the definition of the dependency graph, even if { X^0.6(n)_j:j∈ J_1} is independent of { X^0.6(n)_j:j∈ J_2}, there can still be edges between the vertex sets J_1 and J_2. In fact, there might not exist a graph G_n with no edge between every such pair of independent sets J_1 and J_2, since pairwise independence does not imply joint independence.
The condition [LD*] provides us with a tractable bound on R_k,ω,n, which is applicable in most commonly encountered settings, including m-dependent random fields and U-statistics.
Given M∈ℕ_+ and a real number ω∈ (0,1], suppose that (X^0.6(n)_i)_i∈ I_n satisfies [LD*] and that the cardinality of N_n(i_1:(k+1)) is upper-bounded by M<∞ for any i_1,⋯,i_k+1∈ I_n. Then there exists a constant C_k+ω only depending on k+ω such that
R_k,ω,n ≤ C_k+ω M^k+ω∑_i∈ I_nσ_n^-(k+1+ω)𝔼[| X^0.6(n)_i|^k+1+ω].
We remark that the upper bound on (R_k,ω,n) depends on the moments of the random variables (X^0.6(n)_i) and the maximum size of the dependency neighborhoods. The results of <ref> can be used to propose a more interpretable upper bound for the Wasserstein-p distance.
Suppose that (X^0.6(n)_i) is a triangular array of mean zero random variables satisfying [LD*], and that the cardinality of the index set N_n(i_1:(⌈ p⌉+1)) is upper-bounded by M_n<∞ for any i_1,⋯,i_⌈ p⌉ +1∈ I_n. Furthermore, assume that
M_n^1+ωσ_n^-(ω+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2]→ 0, M_n^p+1σ_n^-(p+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]→ 0.
Then there is N such that for all n≥ N we have
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p(M_n^1+ωσ_n^-(ω+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω+C_p(M_n^p+1σ_n^-(p+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2] )^1/p,
for some constant C_p that only depends on p.
We notably remark that if the moments are nicely behaved in the sense that
B_1:=sup_i∈ I_n, n∈ℕ_+√(| I_n|)·‖ X^0.6(n)_i‖ _p+2/σ_n<∞,
and that the size of the dependency neighborhood are universally bounded, i.e.,
B_2:=sup_i_1:(⌈ p⌉ +1)∈ I_n,n∈ℕ_+|N_n(i_1:(⌈ p⌉ +1))|<∞,
then there is a constant K_p that only depends on B_1, B_2 and p≥1 such that for n large enough we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤K_p/√(|I_n|).
The rate of convergence matches the known rate for independent random variables (see <cit.>).
§ RESULTS FOR M-DEPENDENT RANDOM FIELDS
Let d∈ℕ_+ be a positive integer; in this section we study d-dimensional random fields.
A random field (X_i)_i ∈ T on T ⊆ℤ^d is m-dependent if and only if for any subsets U_1, U_2⊆ℤ^d, the random variables (X_i_1)_i_1∈ U_1∩ T and (X_i_2)_i_2∈ U_2∩ T are independent whenever ‖ i_1-i_2‖ >m for all i_1∈ U_1 and i_2∈ U_2.
Here ‖·‖ denotes the maximum norm on ℤ^d, that is ‖z‖=max _1 ≤ j ≤ d| z_j| for z=(z_1, ⋯, z_d).
Now we consider an increasing sequence T_1⊆ T_2⊆⋯ of finite subsets of ℤ^d that satisfy |T_n|→∞ as n→∞. We have the following result as a corollary of <ref>.
Let p∈ℕ_+ and m∈ℕ_+ be positive integers.
Suppose that (X^0.6(n)_i) is a triangular array where each row is an m-dependent random field indexed by finite subsets T_n⊆ℤ^d such that |T_n|→∞ as n→∞. Let σ_n^2:=Var(∑_i∈ T_nX_i^0.6(n)) and define W_n:=σ_n^-1∑_i∈ T_nX_i^0.6(n). Further suppose that 𝔼[X^0.6(n)_i]=0 for any i∈ T_n and that the following conditions hold:
* Moment condition: σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] → 0 as n→∞;
* Non-degeneracy condition: lim sup_nσ_n^-2∑_i∈ T_n𝔼[| X_i^0.6(n)|^2]≤ M<∞ for some M≥ 1.
Then for n large enough, we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p (∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p,
where C_p,d only depends on p and d.
In particular, for a triangular array of m-dependent stationary random fields, suppose that we have sup_n𝔼[| X_i^0.6(n)|^p+2]<∞, and that the non-degeneracy condition lim inf_nσ_n^2/| T_n |>0 holds. Then we have
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪 (| T_n |^-1/2).
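For d=1 the simplest such field is a 1-dependent moving average, and the rate above can be observed numerically. The following Monte Carlo sketch is purely illustrative: the centered-exponential innovations, the sample sizes, and the quantile-based estimate of 𝒲_1 are our own choices and are not part of the paper.

```python
# Illustrative check of the O(|T_n|^{-1/2}) rate for a 1-dependent sequence
# X_i = (e_i + e_{i+1})/sqrt(2) with centered exponential innovations e_i.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_W(n, reps):
    e = rng.exponential(size=(reps, n + 1)) - 1.0     # i.i.d., mean 0, var 1, skewed
    X = (e[:, :-1] + e[:, 1:]) / np.sqrt(2.0)         # 1-dependent, mean 0, var 1
    return X.sum(axis=1) / np.sqrt(2 * n - 1)         # Var(sum_i X_i) = 2n - 1

def w1_to_normal(w):
    # crude estimate of W_1(law(W_n), N(0,1)) via the quantile coupling
    w = np.sort(w)
    u = (np.arange(w.size) + 0.5) / w.size
    return float(np.mean(np.abs(w - norm.ppf(u))))

for n in (25, 100, 400):
    # decreases with n, roughly like n^{-1/2}, up to Monte Carlo error
    print(n, round(w1_to_normal(sample_W(n, reps=30000)), 3))
```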
§ APPLICATION TO U-STATISTICS
Let (X_i)_i=1^n be a sequence of i.i.d. random variables. Fix m∈ℕ_+ such that m≥ 2. Let h:ℝ^m→ℝ be a fixed Borel-measurable function. The Hoeffding U-statistic is defined as
∑_1≤ i_1<⋯< i_m≤ n h(X_i_1,⋯,X_i_m).
Given p≥ 1, suppose that the U-statistic of an i.i.d. sequence (X_i)_i=1^n induced by a symmetric function h:ℝ^m→ℝ satisfies the following conditions
* Mean zero: 𝔼[h(X_1, ⋯, X_m)]=0;
* Moment condition: 𝔼[| h(X_1, ⋯, X_m)|^p+2]<∞;
* Non-degeneracy condition: 𝔼 [g(X_1)^2]>0, where g(x):=𝔼[h(X_1,⋯,X_m)| X_1=x].
If we let
W_n:=1/σ_n∑_1≤ i_1<⋯< i_m≤ n h(X_i_1,⋯,X_i_m),
where
σ_n^2:=Var(∑_1≤ i_1<⋯< i_m≤ n h(X_i_1,⋯,X_i_m)),
the following Wasserstein bound holds:
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(n^-1/2).
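The asymptotic normality above can be illustrated by a quick simulation; the kernel h(x,y)=(x-y)^2/2-1, the Gaussian data, and the sample sizes below are our own illustrative choices, and they satisfy the three conditions of the theorem.

```python
# A small simulation sketch (kernel, data law, and sample sizes are
# illustrative choices): for h(x, y) = (x - y)^2 / 2 - 1 and i.i.d. N(0,1)
# data, h is symmetric with mean zero and is non-degenerate since
# g(x) = E[h(x, X_2)] = (x^2 - 1)/2.  The standardized U-statistic should be
# approximately normal, with skewness shrinking roughly like n^{-1/2}.

import numpy as np

rng = np.random.default_rng(1)

def u_stat(x):
    d = (x[:, None] - x[None, :]) ** 2 / 2.0 - 1.0   # h(x_i, x_j) for all pairs
    iu = np.triu_indices(x.size, k=1)                # keep i < j only
    return d[iu].sum()

for n in (30, 120):
    vals = np.array([u_stat(rng.standard_normal(n)) for _ in range(4000)])
    w = (vals - vals.mean()) / vals.std()            # empirically standardized W_n
    print(n, "skewness:", round(float(np.mean(w ** 3)), 2))
```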
§ APPLICATION TO NON-UNIFORM BERRY–ESSEEN BOUNDS
In this section, we show a specific application of our results to non-uniform Berry–Esseen bounds with polynomial decay of any order. Mirroring the classical literature, <cit.> established Berry–Esseen bounds for locally dependent random variables. Notably, their Theorem 2.4 showed that if the random variables (X_i^(n)) satisfy some boundedness condition on the dependency neighborhoods, then there is a constant C>0 such that
sup_t|ℙ(W_n≥ t)-Φ^c(t)|≤ C ∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3,
where Φ^c(t):=ℙ (Z≥ t) with Z∼𝒩(0,1).
This extends the classical Berry–Esseen bound to locally dependent random variables, and can potentially be used to construct Kolmogorov–Smirnov tests under local dependence in nonparametric inference. However, one of the drawbacks of this inequality is that it does not depend on t. One would imagine that for large t we could find tighter bounds for |ℙ(W_n≥ t)-Φ^c(t)|. Non-uniform Berry–Esseen bounds establish this. Notably <cit.> (Theorem 2.5) showed that under the above conditions, there exists some universal constant C' such that
|ℙ(W_n≥ t)-Φ^c(t)|≤C'/1+|t|^3∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3, ∀ t∈ℝ.
This bound does decrease as |t| increases and does so at a rate of |t|^-3. However, one would expect that this rate could be tightened if additional assumptions were made about the moments of (X_i^(n)). If the random variables admit exponential moments, then <cit.> demonstrated that locally dependent random variables admit moderate deviation inequalities. In this section, we show how our 𝒲_p bounds can help us obtain bounds that decrease polynomially fast in t, at a small price in the dependence on |I_n|, and do so without assuming the existence of exponential moments.
We assume that the conditions of <ref> are satisfied. There is a constant C>0 such that for all β>0 and t>0 satisfying
(√(2π)p)^1/p+1(1-√(2βlog t)/t)t^1-β/p+1≥𝒲_p(ℒ(W_n),𝒩(0,1)),
we have
- C/tφ(t(1-1/p+1)) ≤ℙ(W_n≥ t)-Φ^c(t)/𝒲_p(ℒ(W_n),𝒩(0,1))^1-1/p+1≤C/t^1+β(1-1/p+1),
where φ is the density function of 𝒩(0,1).
We can see from this result that the quantity |ℙ (W_n≥ t)-Φ^c(t) | decays in both t and n. Notably, given any p,r≥ 1, assuming that the (p+2)-th moments of the X_i's are bounded and that the dependency neighborhoods are bounded in the sense that
sup_n∈ℕ^+,i_1:(p +1)∈ I_n|N_n(i_1:(p +1))|<∞,
we have |ℙ (W_n≥ t)-Φ^c(t) |=o(t^-r| I_n|^-p/2(p+1)) for t and n large enough.
In particular, for p∈ℕ_+, <ref> implies the uniform Berry–Esseen bound obtained by taking the supremum over t:
sup_t∈ℝ|ℙ(W_n≥ t)- Φ^c(t)|≤ C (∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3+∑_i∈ I_n‖ X_i^(n)‖ _p+2^1+2/p / σ_n^1+2/p).
Note that it recovers the uniform Berry–Esseen bound in <cit.> with p=1.
§ OVERVIEW OF THE PROOFS
The key idea of our proofs is to approximate the sum of weakly dependent random variables (X_i^0.6(n))_i∈ I_n by the empirical average of q_n i.i.d. random variables ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) which we denote V_n:=1/√(q_n)∑_i=1^q_nξ_i^0.6(n). More specifically we aim for the Wasserstein-p distance between them
𝒲_p(ℒ(W_n),ℒ(V_n)) to be as small as possible. To establish the desired result we then exploit the triangle inequality that guarantees that
𝒲_p(ℒ(W_n),𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)),
and we use previously known bounds for 𝒲_p(ℒ(V_n),𝒩(0,1)) (<ref>).
To be able to show that such random variables ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) exist, we first show (<ref>) that as long as the third and higher-order cumulants of W_n decay then there exist integers (q_n) and i.i.d. random variables such that the first k (k∈ℕ_+) cumulants of
V_n:=1/√(q_n)∑_i=1^q_nξ_i^0.6(n)
match those of W_n for n large enough. The decay of the cumulants can be proven to hold by exploiting the local dependence assumptions (see <ref>).
As a reminder, our goal is to establish that the Wasserstein distance 𝒲_p(ℒ(V_n),ℒ(W_n)) is small. We relate this to the cumulants thanks to the fact that the Wasserstein-p distance can be upper-bounded by integral probability metrics (<ref>) and the well-known Stein equation.
Indeed for i.i.d. random variables (ξ_i^0.6(n))_i=1^q_n, <cit.> showed that the following approximation holds (restated in <ref>)
𝔼[ h(V_n)]-𝒩h=𝔼 [f'_h(V_n)-V_nf_h(V_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(V_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
where f_h is the solution of the Stein equation (<ref>) and κ_j( · ) denotes the j-th cumulant of a random variable. (All the other notations in (<ref>) will be made clear in <ref>.) We show that we can obtain similar expansions for 𝔼 [f'(W_n)-W_nf(W_n)] (see <ref>):
𝔼[ h(W_n)]-𝒩h=𝔼 [f'_h(W_n)-W_nf_h(W_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
As mentioned in the previous paragraph, q_n and ξ_i^0.6(n) can be chosen to be such that κ_j(V_n)=κ_j(W_n) for j=1,⋯,⌈ p⌉+1. Thus, by taking the difference of (<ref>) and (<ref>), we get an upper bound on |𝔼[h(W_n)]-𝔼 [h(V_n)]| for a large class of functions h. As shown in <ref>, this allows us to obtain an upper bound on the Wasserstein-p distance between ℒ(W_n) and ℒ(V_n) for a general p≥ 1. The desired result is therefore implied by the triangle inequality of the Wasserstein-p distance
𝒲_p(ℒ(W_n),𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)),
and the already known Wasserstein-p bounds for i.i.d. random variables (<ref>).
To be able to show that (<ref>) holds, we develop new techniques to obtain such expansions, which will be carefully elaborated and discussed in <ref>.
§ ADAPTING STEIN'S METHOD FOR WASSERSTEIN-P BOUNDS
In this section, we provide the proofs of <ref> using Stein's method. We first introduce some background definitions and lemmas before showing the proofs of the main theorems.
§.§ Preliminaries and Notations
For any k∈ℕ and real number ω∈ (0,1], the Hölder space 𝒞^k,ω(ℝ) is defined as the class of k-times continuously differentiable functions f: ℝ→ℝ such that the k-th derivative of f is ω-Hölder continuous, i.e.,
| f|_k, ω:=sup _x ≠ y ∈ℝ|∂^k f(x)-∂^k f(y)|/| x-y|^ω<∞,
where ∂ denotes the differential operator. Here ω is called the Hölder exponent and | f |_k,ω is called the Hölder coefficient.
Using the notion of Hölder spaces, we define Zolotarev's ideal metrics, which are related to the Wasserstein-p distances via <ref>.
Suppose μ and ν are two probability distributions on ℝ. For any p>0 and ω :=p+1-⌈ p⌉∈ (0,1], the Zolotarev-p distance between μ and ν is defined by
𝒵_p(μ, ν):=sup _f∈Λ_p(∫_ℝ f(x) μ(x)-∫_ℝ f(x) ν(x)),
where Λ_p:={ f ∈𝒞^⌈ p⌉-1,ω(ℝ):| f |_⌈ p⌉-1,ω≤ 1 }
We will see in <ref> how the Zolotarev distance can be used to obtain 𝒲_p(·,·) rates. To bound 𝒵_p( · , · ) we rely on Stein's method, which was introduced by <cit.> in order to prove the central limit theorem for dependent data. It has been widely adapted to all kinds of normal approximation problems. See <cit.>
for a detailed exposition.
§.§ Stein equation and its solutions
Let Z∼𝒩(0,1) be a standard normal random variable. For any measurable function h:ℝ→ℝ, if h(Z)∈ℒ^1(ℝ), we write 𝒩 h:=𝔼 [h(Z)]. Thus, h(Z)∈ℒ^1(ℝ) if and only if 𝒩|h|<∞. Moreover, we define f_h( · ) by
f_h(w) :=∫_-∞^w e^(w^2-t^2)/2(h(t)-𝒩 h) t
=-∫_w^∞ e^(w^2-t^2)/2(h(t)-𝒩 h) t .
We remark that f_h(·) is a solution of the Stein equation meaning that it satisfies
f_h'(w)-w f_h(w)=h(w)-𝒩 h, ∀ w∈ℝ.
Bounding |𝔼(f'_h(W_n)-W_nf_h(W_n))| therefore allows us to control |𝔼(h(W_n))-𝒩h|. If we do this for a large class of functions h, we can upper-bound the Zolotarev distance between ℒ(W_n) and the normal distribution. This is the key idea behind Stein's method. For notational convenience, we denote by Θ the operator that maps h to f _h for any h such that 𝒩| h |< ∞, i.e.,
Θ h=f _h.
Note that Θ h( · ) is a function. If h∈Λ_p, then we see in <ref> that Θ h can be bounded.
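As a sanity check on (<ref>) (purely illustrative; the test function h(t)=|t| and the evaluation points are arbitrary choices of ours), f_h can be computed numerically and the Stein identity verified pointwise:

```python
# Numerical sketch: solve the Stein equation for h(t) = |t| via the integral
# formula for f_h and check f_h'(w) - w f_h(w) = h(w) - E[h(Z)] at a few points.
import numpy as np
from scipy.integrate import quad

Nh = np.sqrt(2.0 / np.pi)                       # E|Z| for Z ~ N(0,1)
h = lambda t: abs(t)

def f_h(w):
    integrand = lambda t: np.exp((w * w - t * t) / 2.0) * (h(t) - Nh)
    val, _ = quad(integrand, -np.inf, w)
    return val

eps = 1e-4
for w in (-1.3, 0.2, 2.0):
    lhs = (f_h(w + eps) - f_h(w - eps)) / (2 * eps) - w * f_h(w)   # f_h'(w) - w f_h(w)
    print(w, round(lhs, 4), round(h(w) - Nh, 4))                   # the two values agree (up to numerical error)
```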
For any p>0, let h ∈Λ_p be as defined in <ref>. Then Θ h=f_h in (<ref>) is a solution to (<ref>). Moreover, Θ h∈𝒞^⌈ p⌉-1,ω(ℝ)∩𝒞^⌈ p⌉,ω(ℝ) and the Hölder coefficients |Θ h |_⌈ p⌉-1,ω and |Θ h |_⌈ p⌉,ω are bounded by some constant only depending on p.
§.§ Key Lemmas
First, we present an important result that states that the Wasserstein-p distance can be controlled in terms of the Zolotarev distance.
For any p≥ 1, there exists a positive constant C_p such that for any pair of distributions μ,ν on ℝ with finite absolute moments of order p, we have
𝒲_p(μ, ν) ≤ C_p(𝒵_p(μ, ν))^1/p.
In particular, 𝒲_1(μ,ν)=𝒵_1(μ,ν) by Kantorovich–Rubinstein duality.
In the next two lemmas, we present already-known results for the normal approximation of sums of independent random variables. Firstly, <ref> provides an expansion for the difference between 𝔼[h(S_n)], where S_n is an empirical average, and 𝒩h. This lemma will allow us to relate the Zolotarev distance to the cumulants.
For any p>0, let h ∈Λ_p and S_n:=∑_i=1^n X_i where {X_1, ⋯, X_n} are independent, with 𝔼 [X_i]=0 and 𝔼 [S_n^2]=1. Then it follows that
𝔼[ h(S_n)]-𝒩h
=
∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(S_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] +𝒪( ∑_i=1^n𝔼 [| X_i| ^p+2]),
where the first sum is over Γ (⌈ p⌉ -1):={ r, s _1:r∈ℕ_+:∑_j=1^rs _j≤⌈ p⌉-1}.
Note that there is a slight abuse of notation in (<ref>). The last ∏ indicates the composition of the operators in the parentheses rather than the product.
Secondly, <ref> gives an upper bound on the Wasserstein distance between the distribution of this empirical average, S_n, and the standard normal distribution. This lemma guarantees that if an approximation of W_n by a sum V_n of independent random variables can be obtained, then V_n is approximately normally distributed.
For any p≥ 1, let S_n:=∑_i=1^n X_i where {X_1, ⋯, X_n} are independent and satisfy that 𝔼 [X_i]=0 and 𝔼 [S_n^2]=1. Then it follows that
𝒲_p(ℒ(S_n), 𝒩(0,1)) ≤ C_p(∑_i=1^n𝔼[| X_i| ^p+2])^1/p,
where C_p continuously depends on p.
We now introduce two new lemmas crucial in the proof of <ref>. They will be proven in <ref> and <ref>. The first lemma generalizes <ref> to the dependent setting.
Suppose that (X^0.6(n)_i)_i∈ I_n is a triangular array of random variables with dependency neighborhoods satisfying the local dependence conditions [LD-1] to [LD-(⌈ p⌉+1)]. Let W_n:=∑_i∈ I_nX^0.6(n)_i with 𝔼[X_i^0.6(n)]=0, 𝔼 [W_n^2]=1.
Then for any p>0 and h ∈Λ_p, we have
𝔼 [h(W_n)]-𝒩h
= ∑_(r,s_1:r)∈Γ (⌈ p⌉ -1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h]
+𝒪(∑_j=1^⌈ p⌉-1R _j,1,n^p/j+∑_j=1^⌈ p⌉R _j,ω,n ^p/(j+ω -1)),
where the first sum is over Γ (⌈ p⌉ -1):={ r, s _1:r∈ℕ_+:∑_j=1^rs _j≤⌈ p⌉-1}.
We can see that these two lemmas look quite similar to one another, the only differences being the dependence structures of (X^0.6(n)_i) and the remainder terms in the expansions. This similarity inspires the proof of <ref>. To illustrate this, imagine that there exist some i.i.d. random variables (ξ_i^0.6(n))_i=1^q_n and a large sample size q_n such that the first ⌈ p⌉+1 cumulants of V_n:=q_n^-1/2∑_i=1^q_nξ_i^0.6(n) match those of W_n; then the expansions (<ref>) and (<ref>) would be almost identical, and the difference between them would be controlled by the remainder terms (R_j,1,n) and (R_j,ω,n). If those remainder terms are small, then we could exploit the asymptotic normality of V_n to obtain the asymptotic normality of W_n. We show that such a sequence exists when |I_n| is large.
Let p≥ 1 and k:=⌈ p⌉. If p>1, let (u_j^0.6(n))_j=1^k-1 be a sequence of real numbers. Suppose that for any j=1,⋯, k-1, we have u_j^0.6(n)→ 0 as n→∞. Then there exist constants C_p, C_p' only depending on p and a positive value N>0 (that might depend on (u_j^0.6(n)) ) such that for any n >N, there exists q_n∈ℕ_+ and a random variable ξ^0.6(n) such that
* 𝔼 [ξ^0.6(n)]=0, 𝔼 [(ξ^0.6(n))^2]=1;
* κ_j+2(ξ^0.6(n))=q_n^j/2u_j^0.6(n) for j=1,⋯, k-1;
* Either max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |=0 or max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |≥ C_p>0;
* 𝔼 [|ξ^0.6(n)|^p+2]≤ C_p'.
Furthermore, q_n can be chosen to be such that q_n→∞ as n→∞.
We note that the condition that u_j^0.6(n)→ 0 as n→∞ is crucial. <ref> is an asymptotic statement in the sense that for a given n≤ N, q_n and ξ^0.6(n) might not exist.
Intuitively, <ref> and <ref> determine the cumulants of ξ^0.6(n) and relate them to the cumulants of W_n. <ref> requires that the maximum
max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |
is either 0 or bounded away from 0 as n grows. And <ref> indicates that the (p+2)-th absolute moment is upper-bounded.
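To get a feel for why a few low-order cumulants can be prescribed (a toy illustration only; the actual construction in the proof relies on the Hamburger moment problem and matches the first ⌈ p⌉+1 cumulants, not just the third), note that already a two-point law matches any prescribed mean 0, variance 1, and third cumulant u:

```python
# A toy sketch (not the construction used in the paper): a two-point law with
# values a and -1/a, taken with probabilities 1/(a^2 + 1) and a^2/(a^2 + 1),
# has mean 0, variance 1, and third cumulant a - 1/a, so any target u can be
# matched by solving a - 1/a = u.
import numpy as np

def two_point(u):
    a = (u + np.sqrt(u * u + 4.0)) / 2.0
    b = 1.0 / a
    p = b / (a + b)                      # P(X = a); then P(X = -b) = 1 - p
    return a, -b, p

u = 0.4                                  # illustrative target third cumulant
a, minus_b, p = two_point(u)
mean = p * a + (1 - p) * minus_b
var = p * a**2 + (1 - p) * minus_b**2 - mean**2
third = p * a**3 + (1 - p) * minus_b**3  # equals kappa_3 because the mean is 0
print(round(mean, 12), round(var, 12), round(third, 12))   # ~0, 1, 0.4
```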
§.§ Proof of Theorem <ref>
The proof of <ref> works in three stages:
* Using <ref> we find a sequence of i.i.d. random variables (ξ^0.6(n)_ℓ)_ℓ and a sample size q_n such that the first k+1 cumulants of W_n match the first k+1 cumulants of V_n:=q_n^-1/2∑_i=1^q_nξ^0.6(n)_i;
* Using <ref> we remark that we can bound the Wasserstein distance between the distributions of W_n and an empirical average, V_n, of i.i.d. observations in terms of |𝔼 [h(W_n)]-𝔼 [h(V_n)] | for a large class of functions h. We do so by exploiting <ref>;
* We remark that <ref> provides us with the bound on the Wasserstein distance between the distribution of V_n and the standard normal.
Then <ref> follows from the triangle inequality of the Wasserstein metric:
𝒲_p(W_n,𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)).
Without loss of generality, we assume σ_n= 1 and denote W_n:=∑_i∈ I_nX^0.6(n)_i.
Firstly, we remark that according to <ref>, for all 1≤ j≤ k-1 we have |κ_j+2(W_n) |≲ R_j,1,n. Moreover, by assumption we have R_j,1,n→ 0 as n→∞. Therefore, |κ_j+2(W_n) |→ 0 as n→∞ and the assumptions of <ref> hold, which implies that there exist constants C_p and C'_p
such that for any n large enough there are positive integers (q_n) and random variables (ξ^0.6(n)) such that
* 𝔼 [ξ^0.6(n)]=0, 𝔼 [(ξ^0.6(n))^2]=1;
* κ_j+2(ξ^0.6(n))=q_n^j/2κ_j+2 (W_n) for j=1,⋯, k-1;
* Either max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |=0 or max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |≥ C_p>0;
* 𝔼 [|ξ^0.6(n)|^p+2]≤ C_p'.
Furthermore, we know that (q_n) satisfies that q_n→∞ as n→∞.
As presented in the proof sketch we will use this to bound the distance between the distribution of W_n to the one of an empirical average of at least q_n i.i.d. random variables. Note that when max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n))|>0 then we can obtain (by combining <ref> ) a lower bound on q_n which will be crucial in our arguments as it will allow us to control the distance between this empirical average and its normal limit. When κ_3(W_n)=⋯=κ_k+1(W_n)=0, such a lower bound on q_n cannot be obtained in a similar way. Thus, we introduce an alternative sequence (q_n) by setting q_n:=| I_n |^2(p+1)/p∨ q_n if κ_3(W_n)=⋯=κ_k+1(W_n)=0, and q_n:=q_n otherwise. We remark that the sequence (q_n) still respects q_n→∞ as n→∞.
Let ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) be i.i.d. copies of ξ^0.6(n). Define V_n:=q_n^-1/2∑_i=1^q_nξ_i^0.6(n).
By construction, for any j∈ℕ_+ such that j≤ k-1=⌈ p⌉ -1 we have
κ_j+2(V_n)(*)=q_n^-(j+2)/2∑_i=1^q_nκ_j+2(ξ_i^0.6(n))=q_n^-j/2κ_j+2(ξ^0.6(n))=κ_j+2(W_n).
Here in (*) we have used the fact that cumulants are additive for independent random variables, which is directly implied by their definition. For more details on this, please refer to <cit.>.
Thus, by <ref> and <ref>, for any h∈Λ_p we have
|𝔼 [h(W_n)]-𝔼 [h(V_n)] |≲∑_j=1^k-1R _j,1,n^p/j+∑_j=1^kR _j,ω ,n^p/(j+ω -1)+q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2].
To be able to have this upper bound not depend on ξ_i^0.6(n) we will upper-bound
q_n^-(p+2)/2∑_i=1^q_n𝔼 [|ξ_i^0.6(n)|^p+2]
in terms of the remainders (R_j,1,n) and (R_j,ω,n). To do so we use the lower bounds on (q_n) implied by the specific form we chose.
If max_1≤ j≤ k-1|κ_j+2(W_n)|>0, <ref> implies that
C_p≤max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |(*)=max_1≤ j≤ k-1{q_n^j/2|κ_j+2(W_n)|}(**)≲max_1≤ j≤ k-1{q_n^j/2R_j,1,n}.
where to get (*) we used <ref> and to get (**) we used <ref>.
Thus, the following holds
q_n^-p/2=(q_n^-j_0/2)^p/j_0≲ R_j_0,1,n^p/j_0≤∑_j=1^k-1 R_j,1,n^p/j,
where j_0 is the integer satisfying that |κ_j_0+2(ξ^0.6(n)) |=max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |.
On the other hand, if κ_j+2(W_n)=0 for all 1≤ j≤ k-1, then by definitions we have q_n≥| I_n |^2(p+1)/p, and therefore, q_n^-p/2≤| I_n |^-(p+1).
Moreover, by Hölder's inequality we know that the following holds
∑_i∈ I_n𝔼[| X^0.6(n)_i|^2]≤| I_n|^p/(p+2)(∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2])^2/(p+2).
and
(∑_i∈ I_nX^0.6(n)_i)^2≤| I_n |∑_i∈ I_n| X^0.6(n)_i|^2.
Since 𝔼[(∑_i∈ I_nX^0.6(n)_i)^2]=σ_n^2=1, we have
q_n^-p/2≤ | I_n |^-(p+1)(𝔼[(∑_i∈ IX^0.6(n)_i)^2])^(p+2)/2
(*)≤ | I_n |^-p/2(∑_i∈ I𝔼[| X^0.6(n)_i|^2])^(p+2)/2
(**)≤ ∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]≤ R_k,ω,n ,
where to obtain (*) we used (<ref>) and to obtain (**) we used (<ref>).
Thus, using <ref> and the fact that ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) are i.i.d., we obtain
q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2]≤ C_p'q_n^-p/2≲∑_j=1^k-1R _j,1,n^p/j+∑_j=1^kR _j,ω ,n^p/(j+ω -1).
Therefore, by combining this with (<ref>) we obtain that there is a constant K>0 that does not depend on h such that
|𝔼 [h(W_n)]-𝔼 [h(V_n)] |≤ K( ∑_j=1^kR _j,1,n^p/j+∑_j=1^k+1R _j,ω ,n^p/(j+ω -1)).
By taking supremum over h∈Λ_p and by <ref> we obtain that
𝒲_p(ℒ(W_n),ℒ(V_n))≲sup_h∈Λ_p|𝔼[h(W_n)]-𝔼 [h(V_n)] |^1/p≲∑_j=1^k-1R _j,1,n^1/j+∑_j=1^kR _j,ω ,n^1/(j+ω -1).
Moreover, by combining <ref> and (<ref>) we have
𝒲_p(ℒ(V_n),𝒩(0,1))≲(q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2])^1/p≲∑_j=1^k-1R_j,1,n^1/j+∑_j=1^kR _j,ω,n ^1/(j+ω -1).
Therefore, as the Wasserstein distance 𝒲_p satisfies the triangle inequality we conclude that
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ 𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1))
≲ ∑_j=1^k-1R _j,1,n^1/j+∑_j=1^kR _j,ω ,n^1/(j+ω -1).
§ PROOF OF LEMMA <REF>
For ease of notation, when there is no ambiguity we will drop the dependence on n in our notation and write W, N(·), σ, X_i, I and R_j,ω for respectively W_n, N_n(·), σ_n, X^0.6(n)_i, I_n and R_j,ω,n.
§.§ Example and Roadmap
Given the form of expression in <ref>, it is natural to consider performing induction on ⌈ p⌉. In fact, <cit.> used a similar induction idea to prove <ref>, the analogous result to <ref> for independent variables. As <cit.> suggested, the key of each inductive step is the following expansion of 𝔼 [Wf(W)].
Denote by κ_j(W) the j-th cumulant of W. Given k∈ℕ_+ and real number ω∈ (0,1], for any f∈𝒞^k,ω(ℝ), we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(| f |_k,ωR_k,ω).
The case k=ω=1 is a well-known result in the literature of Stein's method (for example see <cit.>). The case k=2, ω=1 was first proven by <cit.>, and they also conjectured that it was true for any positive integer k with ω=1. Inspired by <cit.>'s method, we confirm that this conjecture is correct by proving <ref>.
To help better understand the intuition behind our proof for the general settings, let's first consider the simplest case with k=ω=1. Given a positive integer m, suppose that (X_i)_i=1^n is an m-dependent random sequence (the special case of d=1 in <ref>). We let W:=∑_i=1^nX_i and require that 𝔼 [X_1]=0 and 𝔼 [W^2]=1. For simplicity, we further assume f∈𝒞^2(ℝ)∩𝒞^1,1(ℝ) meaning that f” is a continuous and bounded function.
For any indexes i,j∈ [n] (by convention [n]:={ 1,2,⋯,n }), we write
N(i)={ℓ∈ [n]: |ℓ-i |≤ m }, N(i,j):={ℓ∈ [n]:|ℓ-i |≤ m or |ℓ-j |≤ m }.
Denote W_i,m:=∑_j∉ N(i)X_j and W_i,j,m:=∑_ℓ∉ N(i,j)X_ℓ. The idea is that for each i, we split W into two parts, W_i,m and W-W_i,m. The former is independent of X_i while the latter is the sum of X_j's in the neighborhood of X_i and will converge to 0 when n grows to ∞. Thus, we perform the Taylor expansion for f(W) around W_i,m.
We have
𝔼[ Wf (W)- f'(W) ]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ ∑_i=1^n𝔼 [X_if (W_i,m)]
+ ∑_i=1^n𝔼[X_i(W-W_i,m)f'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ ∑_i=1^n 𝔼 [X_i] 𝔼 [f (W_i,m)]
+ ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ (∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)] )=:E_1+E_2.
By assumption, ‖ f”‖ is bounded and we have
| E_1|= |∑_i=1^n𝔼[X_i(f (W)-f (W_i,m)
- f'(W_i,m)(W-W_i,m))]|
≤ ‖ f”‖/2∑_i=1^n𝔼[| X_i(W-W_i,m)^2|]
= ‖ f”‖/2∑_i=1^n𝔼[| X_i| (
∑_j∈ N(i)X_j)^2]
= ‖ f”‖/2∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i)𝔼[| X_iX_jX_ℓ|]≤‖ f”‖/2∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_jX_ℓ|].
Now we bound E_2.
E_2 = ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,j,m)] -𝔼 [f'(W)]
(*)= ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼 [f'(W_i,j,m)] -𝔼 [f'(W)]
(**) = ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+ ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼[f'(W_i,j,m)-f'(W)]
= (t_1)+(t_2),
where to obtain (*) we have used the fact that W_i,j,m is independent of (X_i,X_j), and to obtain (**) we have used the assumption that 𝔼[W^2]=1.
The first term (t_1), can be upper-bounded by the mean value theorem as
|∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]|
≤ ∑_i=1^n∑_j∈ N(i)‖ f”‖ 𝔼[| X_iX_j(W_i,m-W_i,j,m)|]
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_jX_ℓ|].
By another application of the mean-value theorem, we remark that the second term (t_2), is controlled by
|∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼[f'(W_i,j,m)-f'(W)]|
≤ ∑_i=1^n∑_j∈ N(i)‖ f”‖ 𝔼[| X_iX_j|] 𝔼[| W_i,j,m-W|]
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_j|] 𝔼 [| X_ℓ|].
Thus,
|𝔼[Wf(W)-f'(W)] |
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)(3/2𝔼 [| X_iX_jX_ℓ|]+𝔼 [| X_iX_j|] 𝔼 [| X_ℓ|])
≤ 3‖ f”‖/2R_1,1.
This gives us a bound that matches with (<ref>).
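The quantity 𝔼[Wf(W)-f'(W)] bounded above can also be observed numerically; in the following sketch the innovation law, the test function f=cos, and the sample sizes are illustrative choices of ours.

```python
# Monte Carlo estimate of E[W f(W) - f'(W)] for f = cos and a 1-dependent
# moving average X_i = (e_i + e_{i+1})/sqrt(2) with centered exponential e_i.
import numpy as np

rng = np.random.default_rng(2)

def sample_W(n, reps):
    e = rng.exponential(size=(reps, n + 1)) - 1.0      # i.i.d., mean 0, var 1
    X = (e[:, :-1] + e[:, 1:]) / np.sqrt(2.0)          # 1-dependent, mean 0, var 1
    return X.sum(axis=1) / np.sqrt(2 * n - 1)          # Var(sum_i X_i) = 2n - 1

for n in (25, 100, 400):
    W = sample_W(n, reps=30000)
    stein = float(np.mean(W * np.cos(W) + np.sin(W)))  # E[W f(W) - f'(W)], f = cos
    print(n, round(stein, 3))                          # magnitude shrinks roughly like n^{-1/2}
```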
For k≥ 2, we would like to carry out the expansion in the same spirit. However, it would be too tedious to write out every sum in the process. Thus, in <ref>, we introduce the terms called 𝒮-sums, 𝒯-sums, and ℛ-sums, which serve as useful tools in tracking different quantities when we approximate 𝔼 [f'(W)-Wf(W)] with respect to locally dependent random variables. Instead of performing the expansion to get (<ref>) for 𝔼 [Wf(W)], we first do it for any 𝒯-sum and use induction to prove a more general result for the existence of such expansions (see <ref>). In the general situation of 𝒯-sums, the cumulants are replaced by other constants that only depend on the specific 𝒯-sum in consideration and the joint distribution of (X_i)_i∈ I. Finally, we prove that in particular, those constants for 𝔼 [Wf(W)] are precisely the cumulants of W. This will be done by direct calculation when f is a polynomial and then extended to more general f's by applying <ref>.
§.§ Notations and Definitions
As in <ref>, given an integer k≥ 1, suppose (X_i)_i∈ I is a class of mean zero random variables indexed by I that satisfy the local dependence assumptions [LD-1] to [LD-k]. Without loss of generality, we always assume that σ^2:=Var(∑_i∈ IX_i)=1. We denote W:=σ^-1∑_i∈ IX_i=∑_i∈ IX_i.
§.§.§ 𝒮-sums
Fix k∈ℕ_+ and let t_1,⋯,t_k∈ℤ be integers such that | t_j|≤ j-1 for any j∈ [k]. We set t_1=0. Let z=|{ j:t_j>0 }| be the number of indices j for which t_j is strictly positive. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ k is increasing. We further let q_0:=1 and q_z+1:=k+1. We define an order-k 𝒮-sum with respect to the sequence t_1:k as
𝒮 [t_1,⋯,t_k]
:= ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k [q_1-q_0,⋯,q_z+1-q_z]▹(X_i_1,⋯,X_i_k)
= ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k 𝔼[X_i_q_0⋯ X_i_q_1-1] 𝔼[X_i_q_1⋯ X_i_q_2-1] ⋯ 𝔼[X_i_q_z⋯ X_i_q_z+1-1],
where N_1:=I, and for j∈ℕ_+ such that j≥ 2, we let
N_j:= N (i_1:| t_j|)=N(i_1,⋯,i_| t_j|) if t_j≠ 0
∅ if t_j=0
.
Note that N_j depends on t_j and the sequence i_1:(j-1). For ease of notation, we do not explicitly write out the dependencies on i_1:(j-1) when there is no ambiguity. Further note that if any t_j with j≥ 2 is zero, then N_j=∅ and therefore the 𝒮-sum 𝒮[t_1,⋯,t_k]=0.
By definition all 𝒮-sums are deterministic quantities, the value of which only depends on t_1:k and the joint distribution of (X_i)_i∈ I. We also remark that the signs of the t_j's determine how an 𝒮-sum factorizes into different expectations. Notably, if z=0 (meaning that all the t_j with j≥ 2 are negative) then the 𝒮-sum consists of a single expectation:
𝒮 [t_1,⋯,t_k]=∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_1⋯ X_i_k].
Since by assumption, X_i's are centered random variables, the 𝒮-sum vanishes if q_j+1=q_j+1 for some 0≤ j≤ z:
𝒮 [t_1,⋯,t_k]
=∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k 𝔼[X_i_q_0⋯ X_i_q_1-1] ·
𝔼[X_i_q_1⋯ X_i_q_2-1] ⋯𝔼[X_i_q_j] ⋯ 𝔼[X_i_q_z⋯ X_i_q_z+1-1]=0.
Furthermore, the absolute values of the t_j's determine the ranges of the running indices. The bigger | t_j| is, the larger the set N_j is. The largest possible index set for i_j is N(i_1:(j-1)), which corresponds to the case | t_j|=j-1. On the other hand, if t_j=0, the sum is over an empty set and vanishes.
In particular, if we require that the 𝒮-sum is not always zero, then t_2 is always taken to be -1 and i_2∈ N(i_1).
§.§.§ 𝒯-sums
For any function f∈𝒞^k-1(ℝ) and integer s∈ℕ such that s≤ k, the order-k 𝒯-sum, with respect to the sequence t_1:k, is defined as
𝒯_f,s [t_1,⋯,t_k]
:=
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k [q_1-q_0,⋯,q_z+1-q_z]▹(X_i_1,⋯,X_i_k-1,X_i_k∂^k-1f(W_i.[k-s]))
= { ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_1⋯ X_i_k∂^k-1 f(W_i.[k-s])] if z=0
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_q_0⋯ X_i_q_1-1] ⋯ 𝔼[X_i_q_(z-1)⋯ X_i_q_z-1] ·
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])]
if z≥ 1,
.
where N_1:k,z,q_0:(z+1) are defined as in the definition of 𝒮-sums and W_i.[j] is defined as
W_i.[j]:=
W if j=0
∑_i∈ I\ N(i_1:j)X_i if 1≤ j≤ k
.
Note that the bigger s is, the larger the set I\ N(i_1:(k-s)) is, which means that W_i.[k-s] is the sum of more X_i's. Again we remark that the values of 𝒯-sums can depend on the values of s and the sequences t_1:k. In particular, if s=0, then we have W_i.[k-s]=W_i.[k]=∑_i∈ I \ N (i_1:k)X_i, which implies that W_i.[k-s] is independent of X_i_1,⋯,X_i_k by the assumption [LD-k]. Thus, we have
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])]=𝔼[X_i_q_z⋯ X_i_k] 𝔼[∂^k-1f(W_i.[k-s])].
By definitions (<ref>) and (<ref>) we get
𝒯_f,0 [t_1,⋯,t_k]=𝒮[t_1,⋯,t_k] 𝔼 [∂^k-1f(W_i.[k])].
This equation will be useful in our discussion later. In general if z>0 then
𝒯_f,s [t_1,⋯,t_k]
= 𝒮[t_1,⋯,t_q_z-1] ∑_i_q_z∈ N_q_z∑_i_q_z+1∈ N_q_z+1⋯∑_i_k∈ N_k
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])].
§.§.§ ℛ-sums
For k≥ 2 and given a real number ω∈ (0, 1], we further define an order-k ℛ-sum with respect to the sequence t_1:k as
ℛ_ω [t_1,⋯, t_k]:=
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k| X_i_k|)^ω)
= { ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_1⋯ X_i_k-1(∑_i_k∈ N_k| X_i_k|)^ω] if z=0
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_q_0⋯ X_i_q_1-1] ⋯ 𝔼[X_i_q_(z-1)⋯ X_i_q_z-1] ·
𝔼[X_i_q_z⋯ X_i_k-1(∑_i_k∈ N_k| X_i_k|)^ω]
if z≥ 1.
.
We again remark that if z≥ 1 then
ℛ_ω [t_1,⋯, t_k]
= ℛ_1[t_1,⋯,t_q_z-1] ∑_i_q_z∈ N_q_z∑_i_q_z+1∈ N_q_z+1⋯∑_i_k∈ N_k
𝔼[X_i_q_z⋯ X_i_k-1(∑_i_k∈ N_k| X_i_k|)^ω].
We call ω the exponent of the ℛ-sum. If ω=1, the only difference between an ℛ-sum and an 𝒮-sum is that the X_i_j's in (<ref>) are replaced by | X_i_j|'s in (<ref>). Thus, an 𝒮-sum is always upper-bounded in absolute value by the corresponding ℛ-sum with exponent 1, i.e.,
|𝒮 [t_1,⋯,t_k]|≤ℛ_1 [t_1,⋯,t_k].
Another important observation is that we can compare the values of ℛ-sums with respect to two different sequences t_1,⋯,t_k and t_1',⋯,t_k' in certain situations. Specifically, if for every j∈ [k] the entries t_j and t'_j are of the same sign and |t_j|≤ |t_j'|, then
ℛ_ω[t_1,⋯,t_k]≤ℛ_ω[t_1',⋯,t_k'].
In fact, the sequences (t_j) and (t_j') having the same sign indicates that { j:t_j>0 }={ j:t_j'>0 }.
Thus, we can write
ℛ_ω [t_1',⋯, t_k']
= ∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k-1∈ N_k-1'[q_1-q_0,⋯,q_z+1-q_z]▹
(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k'| X_i_k|)^ω),
where we note that N_1'=I=N_1 and for j=2,⋯,k we have
N_j'=N(i_1,⋯,i_| t_j' |)⊇ N(i_1,⋯,i_| t_j|)=N_j.
By comparing (<ref>) with (<ref>), we obtain (<ref>).
§.§.§ Re-expression of the remainder terms R_k,ω
Using the notion of ℛ-sums, we rewrite the R_k,ω in <ref> as
R_k,ω:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k+1∈ N_k+1'
[η_1,⋯,η_ℓ]▹(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N_k+2'| X_i_k+2|)^ω)
= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2].
where N_1':=I and N_j':=N(i_1:(j-1)) for j≥ 2. C^*(k+2) and ℳ_1,k+2 are given by
C^*(k+2)={ℓ,η_1:ℓ∈ℕ_+: η_j≥ 2 ∀ j∈ [ℓ-1], ∑_j=1^ℓη_j=k+2},
and
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
Note that t_j∧ t_j+1<0 for any j∈ [k+1] means that at least one of any two consecutive entries is negative, which corresponds to the requirement that η_j≥ 2 for j∈ [ℓ-1] in (<ref>).
§.§ Proofs of Proposition <ref> and Lemma <ref>
In this section, we carry out the local expansion technique and prove <ref>.
Firstly, we establish the following lemma, which will be crucial in the inductive step of proving the main theorem.
Fix k∈ℕ_+. For any s∈ [k]∪{ 0 } and f∈𝒞^k,ω(ℝ), we have
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]).
Given any ℓ∈ [k] and s∈ [ℓ]∪{ 0 }, we further have
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒮[t_1,⋯,t_ℓ] 𝔼 [∂^ℓ-1f(W)]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
+𝕀(t_ℓ<0)∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ | f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Firstly, we remark that the definition of Hölder continuity implies that
|∂^kf(y)-∂^kf (x)|≤| f |_k,ω| y-x |^ω,
where ω is the Hölder exponent of f and | f |_k,ω is the Hölder constant (see <ref>).
Let z=|{ j∈ [k+1]: t_j>0 }| be the number of positive indexes (t_j). If z≥ 1, we write { j∈ [k+1]:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ k+1 is increasing. We further let q_0:=1 and q_z+1:=k+2.
Applying (<ref>) we have
|𝔼[X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1-s])]-𝔼[X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1])]|
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·| W_i.[k+1-s]-W_i.[k+1] |^ω]
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1))\ N (i_1:(k+1-s)) X_i|^ω]
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1)) X_i|^ω],
where in the last inequality we have used the fact that N(i_1:(k+1))\ N (i_1:(k+1-s))⊆ N(i_1:(k+1)).
If z=0, this directly implies that
|𝒯_f,s[t_1,⋯,t_k+1]-𝒯_f,0[t_1,⋯,t_k+1] |≤𝕀(s≥ 1)·| f |_k,ωℛ_ω[t_1,⋯,t_k+1,-(k+1)].
If z≥ 1, by definition (<ref>) we have for s≥ 1
|𝒯_f,s[t_1,⋯,t_k+1]-𝒯_f,0[t_1,⋯,t_k+1] |
= |∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[X_i_q_0⋯ X_i_q_1-1] ⋯ 𝔼[X_i_q_z-1⋯ X_i_q_z-1]·
𝔼[X_i_q_z⋯ X_i_k+1(∂^kf(W_i.[k+1-s])-∂^kf(W_i.[k+1]))] |
≤ ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[| X_i_q_0⋯ X_i_q_1-1|] ⋯ 𝔼[| X_i_q_z-1⋯ X_i_q_z-1|]·
|𝔼[ X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1-s])-∂^kf(W_i.[k+1])]|
(<ref>)≤ | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[| X_i_q_0⋯ X_i_q_1-1|] ⋯ 𝔼[| X_i_q_z-1⋯ X_i_q_z-1|] ·
𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ωℛ_ω[t_1,⋯,t_k+1,-(k+1)].
Here the last equality is due to the definition (<ref>).
Thus, (<ref>) is proven for both z=0 and z≥ 1. Next we show that
|𝒮[t_1,⋯,t_k+1] (𝔼 [∂^kf(W)]-𝔼 [∂^kf(W_i.[k+1])])|
≤ -𝕀(t_k+1<0)| f |_k,ωℛ_ω[t_1,⋯,t_k+1,k+1].
To this end, we first note that if t_k+1≥ 0, then by definition (<ref>) we know that q_z=k+1 and therefore, according to (<ref>), we have
𝒮[t_1,⋯,t_k+1]=0,
and so (<ref>) holds. Otherwise, we note that we have
|𝔼[∂^kf(W)]-𝔼[∂^kf(W_i.[k+1])]|≤| f |_k,ω𝔼[ | W_i.[k+1-s]-W_i.[k+1] |^ω]
≤ | f |_k,ω𝔼[ |∑_i∈ N(i_1:(k+1))\ N (i_1:(k+1-s)) X_i|^ω]
≤| f |_k,ω𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω].
This implies that
|𝒮[t_1,⋯,t_k+1] (𝔼 [∂^kf(W)]-𝔼 [∂^kf(W_i.[k+1])])|
≤ |𝒮[t_1,⋯,t_k+1] |·|𝔼[∂^kf(W)]-𝔼[∂^kf(W_i.[k+1])]|
(*)≤ | f |_k,ωℛ_1[t_1,⋯,t_k+1] 𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1[q_1-q_0,⋯,q_z+1-q_z]▹
(| X_i_1|,⋯,| X_i_k+1|)·𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1[q_1-q_0,⋯,q_z+1-q_z,1]▹
(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N(i_1:(k+1))| X_i_k+2|)^ω)
= | f |_k,ωℛ_ω[t_1,⋯,t_k+1,k+1],
where (*) is due to (<ref>) and (<ref>).
Taking the difference of (<ref>) and (<ref>), we obtain (<ref>) by applying the equation (<ref>).
For ℓ≤ k, we apply the Taylor expansion with remainders taking the integral form and obtain that
∂^ℓ-1 f(y)-∂^ℓ-1 f(x)
= ∑_j=1^k-ℓ1/j!(y-x)^j∂^ℓ-1+jf(x)
+1/(k-ℓ+1)!(y-x)^k-ℓ+1∫_0^1(k-ℓ+1)v^k-ℓ∂^k f(v x+(1-v)y) v
(*) = ∑_j=1^k-ℓ+11/j!(y-x)^j∂^ℓ-1+jf(x)
+1/(k-ℓ+1)!(y-x)^k-ℓ+1∫_0^1(k-ℓ+1) v ^k-ℓ(∂^k f( v x+(1- v )y)-∂^k f(x)) v,
where to obtain (*) we added and subtracted (y-x)^k-ℓ+1/(k-ℓ+1)!∂^k f(x). Moreover, using the fact that ∂^k f(·) is assumed to be Hölder continuous we obtain that
|∂^k f( v x+(1- v )y)-∂^k f(x)|≤| f |_k,ω(1-v)^ω | y-x |^ω≤| f |_k,ω| y-x |^ω.
Therefore, as ∫_0^1(k-ℓ+1)v^k-ℓ v =1, by combining (<ref>) with (<ref>) we get that
|∂^ℓ-1 f(y)-∂^ℓ-1 f(x)- ∑_j=1^k-ℓ+11/j!(y-x)^j∂^ℓ-1+jf(x)|≤| f |_k,ω/(k-ℓ+1)!| y-x |^k-ℓ+1+ω.
We prove that the following inequality holds:
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
≤ 𝕀(s≥ 1)·| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times],
First, let's establish (<ref>). Let z=|{ j∈ [ℓ]: t_j>0 }|. If z≥ 1, we write { j∈ [ℓ]:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ℓ is increasing. We further let q_0:=1 and q_z+1:=ℓ+1.
Applying (<ref>) we have
|𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ-s])]-𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ])]
-∑_j=1^k-ℓ+11/j!𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
|
≤ | f |_k,ω/(k-ℓ+1)!𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω].
For convenience let
E_1:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ-s])]-𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ])],
E_2,j:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])],
E_3:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω].
Then (<ref>) reduces to
| E_1-∑_j=1^k-ℓ+1E_2,j/j! |≤| f |_k,ωE_3/(k-ℓ+1)!.
Then we observe that by definition of W_i.[·] we have
𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
= 𝔼[ X_i_q_z⋯ X_i_ℓ(∑_i∈ N(i_1:ℓ)X_i-∑_i∈ N(i_1:ℓ-s)X_i)^j∂^ℓ-1+jf(W_i.[ℓ])]
= ∑_h=0^j(-1)^hjh 𝔼[ X_i_q_z⋯ X_i_ℓ(∑_i∈ N(i_1:ℓ-s)X_i)^h(∑_i∈ N(i_1:ℓ)X_i)^j-h∂^ℓ-1+jf(W_i.[ℓ])],
and that
𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·|∑_i∈ N(i_1:ℓ)\ N (i_1:(ℓ-s)) X_i|^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·|∑_i∈ N(i_1:ℓ) X_i|^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·( ∑_i∈ N(i_1:ℓ)| X_i|)^k-ℓ+1·|∑_i∈ N(i_1:ℓ) X_i|^ω].
If z=0, we take the sum of (<ref>) or (<ref>) over i_q_z∈ N_q_z,⋯,i_ℓ∈ N_ℓ. By definition (<ref>) and (<ref>) we have
E_1=𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ],
E_2,j=∑_h=0^j(-1)^hjh 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times],
E_3≤ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Combining (<ref>) and (<ref>), we have for s≥ 1
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
(<ref>)= | E_1-∑_j=1^k-ℓ+1E_2,j/j! |(<ref>)≤| f |_k,ωE_3/(k-ℓ+1)!
(<ref>)≤ | f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Thus, (<ref>) holds for z=0.
If z≥ 1, similar to (<ref>) we have
𝒮[t_1,⋯,t_q_z-1]
· E_1=𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ],
𝒮[t_1,⋯,t_q_z-1]
· E_2,j
=∑_h=0^j(-1)^hjh 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times],
ℛ_1[t_1,⋯,t_q_z-1]
· E_3≤ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Combining (<ref>) and (<ref>) we get for s≥ 1
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
(<ref>)= |𝒮[t_1,⋯,t_q_z-1] |·| E_1-∑_j=1^k-ℓ+1E_2,j/j! |(<ref>)≤ℛ_1[t_1,⋯,t_q_z-1]·| E_1-∑_j=1^k-ℓ+1E_2,j/j! |
(<ref>)≤ ℛ_1[t_1,⋯,t_q_z-1]·| f |_k,ωE_3/(k-ℓ+1)!
(<ref>)≤| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Thus, we have shown (<ref>) for both z=0 and z≥ 1.
Next we prove that the following inequality holds:
|𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])])
-𝕀(t_ℓ<0)·∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ 𝕀(t_ℓ<0)·| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
For (<ref>), we apply (<ref>) again and get that
|𝔼[∂^ℓ-1f(W)]-𝔼[∂^ℓ-1f(W_i.[ℓ])]
-∑_j=1^k-ℓ+11/j!𝔼[ (W-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
|≤| f |_k,ω/(k-ℓ+1)!𝔼[ | W-W_i.[ℓ] |^k-ℓ+1+ω].
For convenience let
E_4:=𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])],
E_5,j:=𝔼[ (W-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])],
E_6:=𝔼[ | W-W_i.[ℓ] |^k-ℓ+1+ω].
Then (<ref>) reduces to
| E_4-∑_j=1^k-ℓ+1E_5,j/j! |≤| f |_k,ωE_6/(k-ℓ+1)!.
We first note that if t_ℓ≥ 0 then 𝒮[t_1,⋯,t_ℓ]=0 therefore, (<ref>) holds.
Moreover, similar to (<ref>), we have for t_ℓ<0
𝒮[t_1,⋯,t_ℓ]· E_4=𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])]),
𝒮[t_1,⋯,t_ℓ]· E_5,j=𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times],
ℛ_1[t_1,⋯,t_ℓ]· E_6≤ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
Combining (<ref>) and (<ref>), we have
|𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])])
-∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
(<ref>)=|𝒮[t_1,⋯,t_ℓ] |·| E_4-∑_j=1^k-ℓ+1E_5,j/j! |(<ref>)≤ℛ_1[t_1,⋯,t_ℓ]·| E_4-∑_j=1^k-ℓ+1E_5,j/j! |
(<ref>)≤ℛ_1[t_1,⋯,t_ℓ]·| f |_k,ωE_6/(k-ℓ+1)!
(<ref>)≤| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
Therefore, we have established both (<ref>) and (<ref>). Taking their difference and applying (<ref>), we obtain (<ref>).
Equipped with the tools in <ref>, we approximate any 𝒯-sum 𝒯_f,s[t_1,⋯,t_ℓ] by order-j 𝒮-sums (j=ℓ,⋯,k+1) with remainder terms being order-(k+2) ℛ-sums.
Fix k∈ℕ_+. For any ℓ∈ [k+1], s∈ [ℓ]∪{ 0 }, and t_1,⋯,t_ℓ∈ℤ such that | t_j|≤ j-1 for any j∈ [ℓ], there exist Q_ℓ,⋯,Q_k+1 (which depend on s and t_1:ℓ and the joint distribution of (X_i)_i∈ I) and a constant C_k,ℓ (C_k,ℓ≤ 4^k-ℓ+1) such that for any f∈𝒞^k,ω(ℝ), we have
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j𝔼 [∂^j-1 f(W)]|≤ C_k,ℓ| f |_k,ωR_k,ω.
Note that by (<ref>) R_k,ω is given as
R_k,ω= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2],
where
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
If there exists an integer 2≤ j≤ℓ such that t_j=0 or there exists j∈ [ℓ-1] such that t_j and t_j+1 are both positive, then 𝒯_f,s[t_1,⋯,t_ℓ]=0 by definition and the theorem already holds by setting Q_ℓ=⋯=Q_k+1=0.
Otherwise, we claim:
Let 𝒯_f,s [t_1,⋯,t_ℓ] be a 𝒯-sum. For any j=ℓ+1,⋯,k+1, let
ℰ_ℓ+1,j:={ t_(ℓ+1):j: | t_h+1|≤ h & t_h∧ t_h+1<0 ∀ℓ≤ h≤ j-1}.
For all j=ℓ+1,⋯, k+1, ν∈ [j]∪{ 0 }, and (t_ℓ+1,⋯,t_j)∈ℰ_ℓ+1,j, there are coefficients a_j,ν,t_(ℓ+1):j (additionally depending on s) such that if we write
Q_j=∑_t_(ℓ+1):j∈ℰ_ℓ+1,j∑_ν=0^j a_j,ν,t_(ℓ+1):j𝒯_f,ν[t_1,⋯,t_ℓ,t_ℓ+1,⋯,t_j],
then the following holds
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j𝔼 [∂^j-1 f(W)]|
≤ 4^k-ℓ+1| f |_k,ω∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2ℛ_ω[t_1,⋯,t_ℓ, ⋯ ,t_k+2],
where
ℳ_ℓ+1,k+2:={t_(ℓ+1):(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ℓ≤ j≤ k+1}.
We establish this claim by performing induction on ℓ with ℓ taking the value k+1,k,⋯, 1 in turn.
For ℓ=k+1, by (<ref>) we have
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]).
If there exists j∈ [k] such that t_j and t_j+1 are both positive, then 𝒯_f,s[t_1,⋯,t_k+1]=0 and the claim holds with all coefficients set to zero. Otherwise, for every j∈ [k], at least one of t_j and t_j+1 is negative. If t_k+1<0, then we have
𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
= ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
(*)≤ ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),k+1]
+ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),-(k+1)]
≤ ∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1,t_k+2],
where (*) is a consequence of (<ref>) and sgn(x)=0,1, or -1 denotes the sign of a real number x.
Further note that if t_k+1>0, then 𝕀(t_k+1<0)=0 and we get
𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
= 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
(*)≤ ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),-(k+1)]
≤ ∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1, t_k+2],
where (*) is a consequence of (<ref>). Thus, we have shown that
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)])
≤ | f |_k,ω∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1,t_k+2].
Now suppose the claim holds for ℓ+1 and consider the case of ℓ. By (<ref>) we have
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒮[t_1,⋯,t_ℓ] 𝔼 [∂^ℓ-1f(W)]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
+𝕀(t_ℓ<0)∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ | f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Note that 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] and 𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] are 𝒯-sums of order at least ℓ+j (j≥ 1). Therefore, we can apply the inductive hypothesis to them. Specifically, the remainder term (an ℛ-sum) in the expansion of
𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
is given by
4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times,t_ℓ+j+1,⋯ ,t_k+2]
(<ref>)≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, -ℓ,-(ℓ+1),⋯,-(ℓ+j-1),t_ℓ+j+1,⋯ ,t_k+2]
≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2ℛ_ω[t_1,⋯,t_ℓ, -ℓ,t_ℓ+2,⋯ ,t_k+2]=:4^k-ℓ-j+1| f |_k,ω· U_1.
Similarly, the remainder term in the expansion of 𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] is given by
4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, ℓ,-ℓ,⋯,-ℓ_(j-1) times,t_ℓ+j+1,⋯ ,t_k+2]
≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2ℛ_ω[t_1,⋯,t_ℓ, ℓ,t_ℓ+2,⋯ ,t_k+2]=:4^k-ℓ-j+1| f |_k,ω· U_2.
Note that U_1+𝕀(t_ℓ<0)· U_2 is controlled by
U_1+𝕀(t_ℓ<0)· U_2
= ∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2 ℛ_ω[t_1,⋯,t_ℓ, -ℓ,t_ℓ+2,⋯ ,t_k+2]
+𝕀(t_ℓ<0)·∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2 ℛ_ω[t_1,⋯,t_ℓ, ℓ,t_ℓ+2,⋯ ,t_k+2]
≤ ∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ,t_ℓ+1,⋯ ,t_k+2].
As we mentioned above, by inductive hypothesis we have that there exist coefficients Q_j satisfying (<ref>) such that
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j 𝔼 [∂^j-1 f(W)]|
≤ ∑_j=1^k-ℓ+1∑_h=0^j1/h!(j-h)!4^k-ℓ-j+1| f |_k,ω· U_1+𝕀(t_ℓ<0)∑_j=1^k-ℓ+11/j!4^k-ℓ-j+1| f |_k,ω· U_2
+| f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Noting that ∑_h=0^j1/(h!(j-h)!)=2^j/j!, we have
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j 𝔼 [∂^j-1 f(W)]|
≤ ∑_j=1^k-ℓ+12^j· 4^k-ℓ-j+1/j!| f |_k,ω·( U_1+𝕀(t_ℓ<0)· U_2)
+| f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)· U_2
+ U_1)
≤ (1+∑_j=1^k-ℓ+1 2^2k-2ℓ-j+2)| f |_k,ω( U_1+𝕀(t_ℓ<0)· U_2)
≤ 4^k-ℓ+1| f |_k,ω( U_1+𝕀(t_ℓ<0)· U_2)
(<ref>)≤ 4^k-ℓ+1| f |_k,ω∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ,t_ℓ+1,⋯ ,t_k+2].
Thus, we have shown (<ref>).
Finally, we note that for all t_1:ℓ∈ℳ_1,ℓ, by (<ref>) we have
∑_t_(ℓ+1):(k+1)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ, ⋯ ,t_k+2]
≤ ∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[0,sgn(t_2),2sgn(t_3),⋯,(ℓ-1)sgn(t_ℓ), t_ℓ+1⋯ ,t_k+2]
≤ ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2]=R_k,ω.
We remark that if f is a polynomial of degree at most k, then the Hölder constant | f |_k,ω=0 and hence the remainder C_k,ℓ| f |_k,ωR_k,ω vanishes.
For any 𝒯-sum, we have established the existence of expansions in <ref>. Next we show the uniqueness of such expansions.
Under the same settings as <ref>, suppose that there exist two sets of coefficients Q_ℓ,⋯,Q_k+1 and Q_ℓ',⋯,Q_k+1', only depending on s, t_1:ℓ, and the joint distribution of (X_i)_i∈ I, such that for any polynomial f of degree at most k, we have
𝒯_f,s [t_1,⋯, t_ℓ]= Q_ℓ𝔼 [∂^ℓ-1 f(W)]+⋯+Q_k+1𝔼 [∂^k f(W)]
= Q_ℓ'𝔼 [∂^ℓ-1 f(W)]+⋯+Q_k+1'𝔼 [∂^k f(W)],
Then Q_j= Q_j' for any j=ℓ,⋯, k+1.
We prove this lemma by contradiction.
Let j be the smallest number such that Q_j≠ Q_j'. Since the coefficients Q_ℓ,⋯,Q_k+1 do not depend on f, we can choose f(x)=c x^j-1 such that ∂^j-1 f(x)= c(j-1)!≠ 0. But Q_j+1𝔼 [∂^jf(W)]=⋯=Q_k+1𝔼 [∂^k f(W)]=0, which implies cQ_j=cQ_j'. This is a contradiction. Therefore, Q_j= Q_j' for any j=ℓ,⋯, k+1.
Applying <ref> with ℓ=1, s=1, and t_1=0, we have for any f∈𝒞^k,ω(ℝ),
𝔼 [Wf(W)]=∑_i_1∈ I𝔼 [X_i_1f(W)]=𝒯_f,1[0]=∑_j=1^k+1Q_j𝔼 [∂^j-1 f(W)]+𝒪(| f |_k,ωR_k,ω),
for some Q_1,⋯, Q_k+1 that only depend on the distribution of (X_i)_i∈ I and where R_k,ω is defined in (<ref>). Suppose that f is a polynomial of degree at most k, then we observe that f∈𝒞^k,ω(ℝ) and | f |_k,ω=0. Thus, this implies that
𝒯_f,0[0]=𝔼 [Wf(W)]=∑_j=1^k+1Q_j𝔼 [∂^j-1 f(W)].
On the other hand, for any random variable, the moments (μ_j)_j≥ 0 and cumulants (κ_j)_j≥ 0, provided that they exist, are connected through the following relations <cit.>:
μ_n=∑_j=1^nn-1j-1κ_jμ_n-j.
Using this we will obtain a similar expansion to (<ref>) by using the cumulants (κ_j). In this goal, we first remark that if f(x)= x^j where j≤ k, then by using (<ref>) we obtain that
𝔼 [Wf(W)]=μ_j+1(W)
=∑_h=1^j+1jh-1κ_h(W)μ_j+1-h(W)
= ∑_h=0^jjhκ_h+1(W)μ_j-h(W)
=∑_h=0^kκ_h+1(W)/h !𝔼 [∂^h f(W)].
Moreover, we remark that this can be generalized to arbitrary polynomials f of degree k. Indeed, any polynomial f of degree k can be written as f(x)=∑_j=0^ka_jx^j for certain coefficients (a_j). By the linearity of expectations, we know that
𝔼 [Wf(W)]=∑_j=0^kκ_j+1(W)/j !𝔼 [∂^j f(W)].
Compare this to (<ref>) and apply <ref>. We conclude that Q_j=κ_j(W)/(j-1)! for any j∈ [k+1]. In particular, Q_1=0=κ_1(W).
Next we upper-bound the cumulants of W using R_k,1.
For any k∈ℕ_+, there exists a constant C_k that only depends on k such that |κ_k+2(W) |≤ C_kR_k,1.
Let f(x)=x^k+1/(k+1)!. We remark that f∈Λ_k+1 where Λ_k+1:={ f∈𝒞^k,1(ℝ):| f |_k,1≤ 1 }. Moreover, by using <ref> we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(R_k,1).
Here the constant dropped from the big 𝒪 analysis is controlled by 4^k.
On the other hand, by (<ref>) we have
𝔼 [Wf(W)]=1/(k+1)!μ_k+2(W)
= ∑_j=1^k+1k+1jκ_j+1(W)μ_k+1-j(W)
= ∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+κ_k+2(W)/(k+1)!.
Thus, there exists C_k such that |κ_k+2(W) |≤ C_kR_k,1.
Finally, we are able to prove <ref> based on <ref> and <ref>.
We perform induction on k:=⌈ p⌉. We start with k=1. In this goal, we first remark that by <ref>, we have f = Θ h∈𝒞^1,ω(ℝ) and that | f |_1,ω is bounded by a constant. Moreover, as f=Θ h is the solution to the Stein equation (<ref>). By <ref> we obtain that
𝔼 [h(W)]-𝒩h= 𝔼 [f'(W)]-𝔼 [W f(W)] = 𝒪(R_1,ω).
Therefore, the desired result is established for 1.
Suppose that the proposition holds for 1,⋯,k-1, we want to prove that it will also hold for k. Let f=Θ h, then by <ref> we know that f∈𝒞^k,ω(ℝ) and that | f |_k,ω is bounded by some constant that only depends on k,ω. Thus, by <ref>, we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(R_k,ω).
Hence we have the following expansion of the Stein equation
𝔼 [h(W)]-𝒩h= 𝔼 [ f'(W)]-𝔼 [W f(W)]
= -∑_j=2^kκ_j+1(W)/j!𝔼 [∂^jf(W)]+𝒪 (R_k,ω)
= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝔼 [∂^j+1Θ h(W)]+𝒪 (R_k,ω).
Noting that
∂^j+1Θ h∈𝒞^k-j-1,ω(ℝ) and |∂^j+1Θ h |_k-j-1,ω is bounded by a constant only depending on k,ω, then by inductive hypothesis we obtain that
𝔼 [∂^j+1Θ h(W)]-𝒩[∂^j+1Θ h]
= ∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(∑_ℓ=1^k-j-1R _ℓ,1^(k-j-1+ω )/ℓ+∑_ℓ=1^k-jR _ℓ,ω^(k-j-1+ω )/(ℓ+ω -1)),
where we denoted Γ(k-j-1):={ r,s_1:r∈ℕ_+: ∑_ℓ=1^rs_ℓ≤ k-j-1 }.
By <ref> and Young's inequality, we have
|κ_j+2(W) R_ℓ,ω^k-j+ω -1/ℓ+ω -1|≲ R_j,1R_ℓ,ω^k-j+ω -1/ℓ+ω -1≤j/k+ω-1 R_j,1^k+ω -1/j+k-j+ω -1/k+ω -1R_ℓ,ω^k+ω -1/ℓ+ω -1,
|κ_j+2(W) R_ℓ,1^k-j+ω -1/ℓ|≲ R_j,1R_ℓ,1 ^k-j+ω -1/ℓ≤j/k+ω-1 R_j,1^k+ω-1/j+k-j+ω -1/k+ω-1R_ℓ,1 ^k+ω -1/ℓ.
Thus, we derive that
𝔼 [h(W)]-𝒩h
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝔼 [∂^j+1Θ h(W)]+𝒪 (R_k,ω)
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝒩 [∂^j+1Θ h] +∑_j=1^k-1κ_j+2(W)/(j+1)!·
∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(R_k,ω+∑_j=1^k-1|κ_j+2(W) |∑_ℓ=1^k-j-1R _ℓ,1^(k+ω -j-1)/ℓ
+∑_j=1^k-1|κ_j+2(W) |∑_ℓ=1^k-jR _ℓ,ω^(k+ω -j-1)/(ℓ+ω -1))
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝒩 [∂^j+1Θ h]+∑_j=1^k-1κ_j+2(W)/(j+1)!·
∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(R_k,ω+∑_j=1^k-1R_j,1^(k+ω-1 )/j+∑_j=1^k-1∑_ℓ=1^k-j-1R _ℓ,1^(k+ω -1)/ℓ
+∑_j=1^k-1∑_ℓ=1^k-jR _ℓ,ω^(k+ω -1)/(ℓ+ω -1))
= ∑_(r,s_1:r)∈Γ(k-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ) h]
+𝒪(∑_ℓ=1^k-1R _ℓ,1^(k+ω -1)/ℓ+∑_ℓ=1^kR _ℓ,ω^(k+ω -1)/(ℓ+ω -1)).
Therefore, the desired property was established by induction.
§ PROOF OF LEMMA <REF>
In <ref>, we would like to find a random variable with a given sequence of real numbers as its cumulants. Constructing a random variable from its cumulants can be difficult in practice. However, there is a rich literature on establishing the existence of a random variable given the moment sequence. And it is well-known that the moments can be recovered from the cumulants, and vice versa. The explicit expression between moments μ_n and cumulants κ_n is achieved by using the Bell polynomials, i.e.,
μ_n =B_n(κ_1,⋯,κ_n)=∑_j=1^nB_n,j(κ_1,⋯,κ_n-j+1),
κ_n =∑_j=1^n(-1)^j-1(j-1)!B_n,j(μ_1,⋯,μ_n-j+1),
where B_n and B_n,j are the exponential Bell polynomial defined by
B_n(x_1,⋯,x_n):=∑_j=1^nB_n,j(x_1,x_2,⋯,x_n-j+1),
B_n,j(x_1,x_2,⋯,x_n-j+1):=∑n!/i_1!i_2!⋯ i_n-j+1!(x_1/1!)^i_1(x_2/2!)^i_2⋯(x_n-j+1/(n-j+1)!)^i_n-j+1.
The sum here is taken over all sequences i_1,⋯,i_n-j+1 of non-negative integers such that the following two conditions are satisfied:
i_1+i_2+⋯+i_n-j+1=j,
i_1+2 i_2+⋯ +(n-j+1)i_n-j+1=n.
In mathematics, the classical moment problem is formulated as follows: Given a sequence (μ_i)_i≥ 0, does there exist a random variable defined on a given interval such that μ_j=𝔼 [X^j] for any non-negative integer j? There are three essentially different types of (closed) intervals. Either two end-points are finite, one end-point is finite, or no end-points are finite, which corresponds to the Hamburger, Hausdorff, and Stieltjes moment problem respectively. See <cit.> or <cit.> for a detailed discussion. For our purpose, there is no restriction on the support of random variables. Thus, the following lemma for the Hamburger moment problem is all we need.
The Hamburger moment problem is solvable, i.e., (μ_j)_j≥ 0 is a sequence of moments if and only if μ_0=1 and the corresponding Hankel kernel
H=([ μ_0 μ_1 μ_2 ⋯; μ_1 μ_2 μ_3 ⋯; μ_2 μ_3 μ_4 ⋯; ⋮ ⋮ ⋮ ⋱ ])
is positive definite, i.e.,
∑_j, k ≥ 0μ_j+k c_j c_k≥ 0
for every real sequence (c_j)_j ≥ 0 with finite support, i.e., c_j=0 except for finitely many j's.
If we define the (j+1)-th upper-left determinant of a Hankel matrix by
H_j(x_0,x_1,⋯,x_2j):=|[ x_0 x_1 ⋯ x_j; x_1 x_2 ⋯ x_j+1; ⋮ ⋮ ⋱ ⋮; x_j x_j+1 ⋯ x_2j ]|,
by Sylvester's criterion in linear algebra <cit.>, the positive-definite condition above is equivalent to H_j(μ_0,⋯,μ_2j)>0 for any j∈ℕ_+.
In order to prove <ref>, we construct a Hankel matrix from given values of cumulants and ensure that the upper-left determinants of (<ref>) are all positive. Then by <ref>, there exists a random variable that has matched moments with the ones in (<ref>) and hence it also has the required cumulants by (<ref>).
For convenience, we write
L_j(x_1,⋯,x_2j):=H_j(1,B_1(x_1),B_2(x_1,x_2),⋯,B_2j(x_1,⋯,x_2j)).
Taking x_1=0, from the definitions (<ref>) and (<ref>), there is an expansion
L_j(0,x_2,⋯,x_2j)
= H_j(1,0,B_2(0,x_2),⋯,B_2j(0,x_2,⋯,x_2j))
= ∑ a_t_2,⋯ , t_2j^(j)x_2^t_2⋯ x_2j^t_2j,
where the sum is taken over
t_2+t_3+⋯+t_2j≥ j,
2t_2+3t_3+⋯+(2j)t_2j=j(j+1).
We further define in the following way a sequence of univariate polynomials which will be essential in our construction in <ref>, by setting
P_j(x):=L_j(0,1,x,x^2,x^3,⋯,x^2j-2).
Firstly, we present a lemma on the properties of P_j(x).
P_j(x) is a polynomial of degree at most j(j-1) with only even-degree terms and if we write
P_j(x)=∑_ℓ=0^j(j-1)/2b_2ℓ^(j)x^2ℓ,
we have b_0^(j)=a_j(j+1)/2,0,⋯,0^(j)≥ 2 for any j≥ 2, j∈ℕ_+.
Note that by applying (<ref>) we obtain that
P_j(x)=L_j(0,1,x,⋯,x^2j-2)=∑ a_t_2,⋯,t_2j^(j)x^t_3+2t_4+⋯+(2j-2)t_2j,
where the sum is taken over
t_2+t_3+⋯+t_2j≥ j,
2t_2+3t_3+⋯+(2j)t_2j=j(j+1).
The degree of each term in (<ref>) is
t_3+2t_4+⋯+(2j-2)t_2j
= (2t_2+3t_3+⋯+(2j)t_2j)-2 (t_2+t_3+⋯+t_2j)
= j(j+1)-2 (t_2+t_3+⋯+t_2j).
This is even and no greater than j(j-1) since t_2+t_3+⋯+t_2j≥ j.
Then we show the constant term b_0^(j)≥ 2. Consider a standard normal random variable ξ∼𝒩(0,1). Then κ_j(ξ)=0 for all j≥ 1,j≠ 2, and κ_2(ξ)=1, which is straightforward by checking that the moment generating function of ξ is exp (t^2/2). By <ref>, we have
b_0^(j)=P_j(0)=L_j(0,1,0,⋯,0)
= L_j(κ_1(ξ),κ_2(ξ),⋯,κ_2j(ξ))
= H_j(μ_0(ξ),μ_1(ξ),⋯,μ_2j(ξ))>0.
Since μ_2ℓ(ξ)=(2ℓ-1)!! and μ_2ℓ-1(ξ)=0 are integers for ℓ∈ℕ_+, b_0^(j) is also an integer. Checking Leibniz formula of the determinant for the Hankel matrix H_j <cit.>, we observe that there is an even number of terms and that each term is odd. In specific, the determinant for the Hankel matrix is given by
b_0^(j)=H_j(μ_0(ξ),μ_1(ξ),⋯,μ_2j(ξ))=∑_τ∈ S_jsgn(τ)∏_i=1^jμ_τ(i)+i-2(ξ),
where by abuse of notation sgn is the sign function of permutations in the j-th permutation group S_j, which returns +1 and -1 for even and odd permutations, respectively. Since μ_2ℓ(ξ)=(2ℓ-1)!! and μ_2ℓ-1(ξ)=0 for all ℓ∈ℕ_+, we have
sgn(τ)∏_i=1^jμ_τ(i)+i-2(ξ)
is odd if τ (i)+i is even ∀ i=1,⋯,j
=0 otherwise .
Noting that the number of permutations τ that satisfies τ (i)+i is even for all i=1,⋯,j is (j!)^2, which is even when j≥ 2, we conclude that b_0^(j) is even, and thus, it should be at least 2.
As we have explained at the beginning of this section, we would like to construct a `moment' sequence such that the corresponding Hankel kernel is positive definite. The following lemma offers one single step in the construction.
Suppose there is some constant C such that |μ_ℓ|≤ C for ℓ=1,⋯, 2j+1 and H_j(μ_0,⋯,μ_2j)≥ 1. Then there exists C' only depending on j and C such that
H_j+1(μ_0,⋯,μ_2j,μ_2j+1,C')≥ 1.
Let C'=(j+1) (j+1)!C^j+2+1. Then by the Laplace expansion <cit.> of the determinant, we have
H_j+1(μ_0,⋯,μ_2j,μ_2j+1,C')= C'H_j(μ_0,⋯,μ_2j)+∑_ℓ=0^j(-1)^j+1+ℓμ_j+1+ℓA_j+2,ℓ+1
≥ C'-(j+1)C· (j+1)!C^j+1≥ 1,
where A_j+2,ℓ+1 is the determinant of the (j+1)× (j+1) submatrix obtained by deleting the (j+2)-th row and (ℓ+1)-th column of
A=([ μ_0 μ_1 ⋯ μ_j+1; μ_1 μ_2 ⋯ μ_j+2; ⋮ ⋮ ⋱ ⋮; μ_j+1 μ_j+2 ⋯ C' ]).
Now we prove <ref>.
The key of the proof will be to use <ref>. To do so we need to postulate an infinite sequence that will be our candidates for of potential moments and check that the conditions of <ref> hold. We remark that as we already know what we want the first k+1 cumulants to be, we already know what the candidates are for the first k+1 moments; and we only to find adequate proposal for the (k+2)-th moment onward. We will do so by iteratively using <ref>.
In this goal, we remark that since by <ref> we know that b_0^(j)≥ 2. Therefore, we can choose a small enough constant 0<C_p<1 only depending on k=⌈ p⌉ such that
b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯,t_2j^(j)| C_p^2ℓ≥ 1,
for any integer j=1,⋯, (k+1)/2. Given an index set I_n, if u_j^0.6(n)= 0 for all j=1,⋯, k-1, let ξ^0.6(n)∼𝒩(0,1) and q_n≫| I_n |. Then q_n and ξ^0.6(n) satisfy all the requirements since κ_j(ξ^0.6(n))=0 for all j∈ℕ_+,j≠ 2 and κ_2(ξ^0.6(n))=1, which is straightforward by checking that the momemt generating function of ξ^0.6(n) is exp (t^2/2).
Otherwise, let
q_n:=⌊min_1≤ j≤ k-1, u_j^0.4(n)≠ 0{ C_p^2| u_j^0.6(n)|^-2/j}⌋,
where ⌊ x⌋ denotes the largest integer not exceeding x. Since by assumption, for any j=1,⋯, k-1, u_j^0.6(n)→ 0 as n→∞, then we know that there exists N>0 such that (i) q_n≥ 1 for any n>N and (ii) q_n→∞ as n→∞.
We note that by definition
min_1≤ j≤ k-1, u_j^0.4(n)≠ 0{ C_p^2| u_j^0.6(n)|^-2/j}< q_n+1,
which implies
max_1≤ j≤ k-1{ q_n^j/2| u_j^0.6(n)|}>C_p^j(q_n/(q_n+1))^j/2>C_p^p/2^p/2.
On the other hand, (<ref>) also implies that C_p^2| u_j^0.6(n)|^-2/j≥ q_n. Thus, q_n^j/2| u_j^0.6(n)|≤ C_p^j. Now let κ_j+2:=q_n^j/2u_j^0.6(n). We remark that κ_j+2≤ C_p^j and κ̃_j+2≥ C_p^p/2^p/2. We write μ_j+2:=B_j+2(0,κ_2,⋯,κ_j+2) for j=1,⋯,k-1. Those will be our candidates for the first k+1 moments. Moreover, if k is odd, we also propose a candidate for (k+2)-th moment by setting μ_k+2:=0.
For j=1,⋯,⌈ k/2⌉
by (<ref>) we have
H_j(1,0,μ_2,μ_3,⋯,μ_2j)=L_j(0,κ_2,κ_3,⋯,κ_2j)
= ∑_2t_2+3t_3+⋯+2jt_2j=j(j+1)a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j
= ∑_ℓ=0^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j
(a)≥ b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j|
(b)≥ b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯,t_2j^(j)| C_p^2ℓ (c)≥ 1.
where to get (a) we used the definition of b_0^(j), and where to obtain (b) we used the fact that |κ_j+2|≤ C_p^j, and where to get (c) we used (<ref>).
Moreover, as |κ_j+2|≤ C_p^j, then we know that there exists some constant C_p' such that |μ_j+2|=| B_j+2(0,κ_2,⋯,κ_j+2) |≤ C_p' for any integer j=1,⋯, 2⌈ k/2⌉-1.
Therefore, by <ref>, there exists C_p” depending on k=⌈ p⌉ and C_p' such that
H_⌈ k/2⌉+1(1,0,μ_2,⋯,μ_2⌈ k/2⌉+1,C_p”)≥ 1.
Let μ_2⌈ k/2⌉+2:=C_p”. Applying <ref> repeatedly, we get a sequence (μ_j)_j≥ 1 such that μ_0=1 and H_j(μ_0,μ_1,⋯,μ_2j)≥ 1>0 for any j∈ℕ_+. The sequence (μ̃_j) is then our candidate for the moments and we remark that they satisfy the conditions of <ref>. Therefore, by <ref>, we conclude that there exists ξ^0.6(n) such that μ_j(ξ^0.6(n))=μ_j for any j∈ℕ_+. As the first k+1 moments uniquely define the first k+1 cumulants of a random variable we have κ_j+2(ξ^0.6(n))=κ_j+2=q_n^j/2u_j^0.6(n) for all j=1,⋯, k-1. Thus, the q_n and ξ^0.6(n) that we have constructed meet the requirements of <ref>. Moreover, (<ref>) implies that <ref> is also satisfied. Lastly, to show <ref> we note that
𝔼 [|ξ^0.6(n)|^p+2]=‖ξ^0.6(n)‖_p+2^p+2(*)≤‖ξ^0.6(n)‖_2⌈ k/2⌉ +2^p+2
= (μ_2⌈ k/2⌉ +2(ξ^0.6(n)))^(p+2)/(2⌈ k/2⌉ +2)
≤ (C_p”)^(p+2)/(2⌈ k/2⌉ +2).
Here (*) is due to the fact that k=⌈ p⌉≥ p.
§ PROOFS OF OTHER RESULTS
In this section, we provide the proofs of all the other results in the main text.
toc
§.§ Proof of Proposition <ref> and Theorem <ref>
For ease of notation, in this section we will drop the dependence on n in our notation and write W, N( · ), σ, X_i, I and R_j,ω for respectively W_n, N_n( · ), σ_n, X^0.6(n)_i, I_n and R_j,ω,n.
Before we prove the bounds for R_k,ω, we note that R_k,ω can be defined without assuming local dependence . Thus, we first aim to generalize this concept, which makes the result derived in <ref> also applicable in general dependent situations. Let (X_i)_i∈ I be a class of mean zero random variables indexed by I. For any graph G (not necessarily the dependency graph) with the vertex set I and a subset J⊆ I, we define N(J) to be vertex set of the neighborhood of J. As in <ref>, we assume Var(∑_i∈ IX_i)=1, without loss of generality. Let W=∑_i∈ IX_i.
We extend the notation of ℛ-sums defined in (<ref>) to this general setting. Given an integer k∈ℕ_+ such that k≥ 2, for any t_1:k∈ℤ such that | t_j|≤ j-1 for any j∈ [k], let z=|{ j:t_j>0 }|. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}, where the sequence 2≤ q_1<⋯<q_z≤ k is taken to be increasing. We further let q_0:=1 and q_z+1:=k+1. Then we could still define the ℛ-sums by
ℛ_ω[t_1,t_2,⋯,t_k] : =
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k| X_i_k|)^ω),
where N_1:=I, and for 2≤ j≤ k
N_j:= N (i_1:| t_j|)=N(i_1,⋯,i_| t_j|) if t_j≠ 0
∅ if t_j=0
.
Now the remainder term R_k,ω is defined as
R_k,ω:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k+1∈ N_k+1'
[η_1,⋯,η_ℓ]▹(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N_k+2'| X_i_k+2|)^ω)
= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2].
where N_1':=I and N_j':=N(i_1:(j-1)) for j≥ 2. C^*(k+2) and ℳ_1,k+2 are given by
C^*(k+2)={ℓ,η_1:ℓ∈ℕ_+: η_j≥ 2 ∀ j∈ [ℓ-1], ∑_j=1^ℓη_j=k+2},
and
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
Note that the expressions of ℛ-sums and R_k,ω have the same forms as those in <ref>, but here we do not impose the assumption of the local dependence of (X_i)_i∈ I anymore as N(i_1:q)'s are defined directly from the graph structure we constructed on I. The main goal of this section is to prove the following proposition.
Fix k∈ℕ_+ such that k≥ 2 and real number ω∈ (0,1]. Let N(J) be defined as above and suppose the cardinality of N(J) is upper-bounded by M for any | J |≤ k. Then there exists a constant C_k+ω only depending on k+ω such that
ℛ_ω [t_1,t_2,⋯,t_k]≤ C_k+ω M^k-2+ω∑_i∈ I𝔼 [| X_i|^k-1+ω].
Before proving <ref>, we need the following two lemmas. <ref> helps us change the order of summation in ℛ_ω[t_1,⋯,t_k] and <ref> is a generalized version of Young's inequality, which allows us to bound the expectations of products by sums of moments.
Fix k∈ℕ_+ such that k≥ 2. For any J⊆ I, let N(J) be defined as above. Suppose (i_1,⋯,i_k) is a tuple such that i_1∈ I, i_2∈ N(i_1), ⋯, i_k∈ N(i_1:(k-1)). Then for any 1≤ h≤ k, there exists a permutation π on [k] such that π (1)=h, i_π(1)∈ I, i_π(2)∈ N(i_π(1)), ⋯, i_π(k)∈ N(i_π(1),⋯,i_π(k-1)).
We perform induction on k.
Firstly, suppose that k=2, then we remark that i_2∈ N(i_1)⇔ i_1∈ N(i_2). For h=1, we can choose π to be the identity and the desired identity holds. For h=2, we let π(1):=2 and π(2):=1 and remark than once again the desired result holds.
Suppose that the proposition is true for 2,⋯,k-1. We want to prove that it holds for k. If h<k, consider the tuple (i_1,⋯, i_h). By inductive hypothesis, there is a permutation π on { 1,2,⋯,h } such that π(1)=h, i_π(2)∈ N(i_π(1)), ⋯, i_π(h)∈ N(i_π(1),⋯,i_π(q-1)). Define
π(j):={ π(j) if 1≤ j≤ h
j if h+1≤ j≤ k
..
Then π satisfies the requirements in the lemma.
Now suppose h=k. i_k∈ N(i_1:(k-1)) indicates that i_k is a neighbor of { i_1,⋯,i_k-1}. Then there exists 1≤ℓ≤ k-1 such that there is an edge between i_k and i_ℓ in the graph G=(I,E). Thus, i_h∈ N(i_ℓ).
By inductive hypothesis, there is a permutation π on [ℓ] such that π(1)=ℓ, i_π(2)∈ N(i_π(1)), ⋯, i_π(ℓ)∈ N(i_π(1),⋯,i_π(ℓ-1)).
Define
π(j):=
k if j=1
π(j-1) if 2≤ j≤ℓ+1
j-1 if ℓ+2≤ j≤ k
.
Then π(1)=h=k. Moreover, we have i_π(2)=i_ℓ∈ N(i_k)=N(i_π(1)), and note that for all j=3,⋯,ℓ we have i_π(j+1)=i_π(j)∈ N(i_π(1),⋯,i_π(j-1))=N(i_π(1),⋯,i_π(j)). Finally, for all j≥ℓ+1 we have i_π(j+1)=i_j∈ N(i_1:(j-1))⊆ N(i_1,⋯,i_j-1,i_k)=N(i_π(1),⋯,i_π(j)). Thus, the lemma holds for k as well. By induction, the proof is complete.
Also, we need a generalization of Young's inequality.
Given t∈ℕ_+, let Y_1,⋯,Y_t be a sequence of random variables, and real numbers p_1,⋯, p_t>1 satisfy that 1/p_1+⋯+1/p_t=1. Then for any (ℓ, η_1:ℓ)∈ C(t):={ℓ,η_1:ℓ∈ℕ_+:∑_j=1^ℓη_j=t}, we have that
[η_1,⋯, η_ℓ]▹ (| Y_1|,⋯,| Y_t|)≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t].
First, we prove
𝔼 [| Y_1⋯ Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t],
𝔼 [| Y_1|]⋯𝔼 [| Y_t|] ≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t].
In this goal, note that Young's inequality is stated as follows: For any a_1,⋯,a_t≥ 0, and p_1,⋯,p_t>1 such that 1/p_1+⋯+1/p_t=1, we have
a_1⋯ a_t≤1/p_1a_1^p_1+⋯+1/p_ta_t^p_t.
Thus, by Young's inequality we know that
| Y_1⋯ Y_t|≤1/p_1| Y_1|^p_1+⋯+1/p_t| Y_t|^p_t.
Taking the expectation, we have
𝔼 [| Y_1⋯ Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_t𝔼 [| Y_t|^p_t].
Again by Young's inequality, we obtain that
𝔼 [| Y_1|]⋯𝔼 [| Y_t|]≤1/p_1𝔼 [| Y_1|]^p_1+⋯+1/p_t𝔼 [| Y_t|]^p_t.
By Jensen's inequality, 𝔼 [| Y_i|]^p_i≤𝔼 [| Y_i|^p_i] for i∈ [t].
This implies that
𝔼 [| Y_1|]⋯𝔼 [| Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_t𝔼 [| Y_t|^p_t].
Finally, we prove (<ref>). Let 1/q_j:=∑_i=η_j-1+1^η_j1/p_i for 1≤ j≤ k.
[η_1,⋯,η_ℓ]▹ (| Y_1|,⋯,| Y_k|)
= 𝔼[| Y_1⋯ Y_η_1|] 𝔼[| Y_η_1+1⋯ Y_η_2|] ⋯ 𝔼[| Y_η_1+⋯+η_ℓ-1+1⋯ Y_k|]
(<ref>)≤ 1/q_1𝔼[| Y_1⋯ Y_η_1|^q_1]+⋯+1/q_k𝔼[| Y_η_1+⋯+η_ℓ-1+1⋯ Y_k|^q_k]
(<ref>)≤ 1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_η_1𝔼 [| Y_η_1|^p_η_1]+⋯
+1/p_η_1+⋯+η_ℓ-1+1𝔼 [| Y_k+1-u _ℓ|^p_η_1+⋯+η_ℓ-1+1]+⋯+1/p_k𝔼 [| Y_k|^p_k].
Now we are ready to prove <ref>.
By (<ref>), we only need to prove that the following inequality holds for any k∈ℕ_+:
ℛ_ω [0,± 1,⋯,± k]≲ M^k-1+ω∑_i∈ I𝔼 [| X_i|^k+ω].
Once again we write z:=|{ j:t_j>0 }|. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}, where 2≤ q_1<⋯<q_z≤ k is increasing. Further let q_0:=1 and q_z+1:=k+1.
Noticing that
1/k+ω+⋯+1/k+ω_k times+ω/k+ω=1,
we apply <ref> and obtain that
[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
≲ 𝔼 [| X_i_1|^k+ω]+ ⋯ +𝔼 [| X_i_k|^k+ω ] +𝔼[ (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^k+ω].
Now by Jensen's inequality and the fact that | N(i_1:k)|≤ M, we get that
𝔼[ (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^k+ω]≤1/M∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω].
Moreover, we remark that
M^ω [q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
= [q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
Thus, this implies that
ℛ_ω[0,± 1,⋯,± k]
= ∑_i_1∈ I⋯∑_i_k∈ N(i_1:(k-1))[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
≲ M^ω∑_i_1∈ I⋯∑_i_k∈ N(i_1:(k-1))(𝔼 [| X_i_1|^k+ω]+⋯+𝔼 [| X_i_k|^k+ω]+1/M∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω]).
Since the cardinality of N(i_1),⋯,N(i_1:k) are bounded by M, for j=1 we have
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]≤ M^k-1∑_i∈ I𝔼 [| X_i|^k+ω].
Now we bound
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω],
where j=2,⋯,k.
By <ref>, for any tuple (i_1,⋯,i_k) in the summation, there exists a permutation π such that π(1)=j, i_π(2)∈ N(i_π(1)), ⋯, i_π(k)∈ N(i_π(1),⋯,i_π(k-1)). Let ϕ_j be a map that sends (i_1,⋯,i_k) to (i_π(1),⋯,i_π(k)). Then no more than (k-1)! tuples are mapped to the same destination since (i_1,⋯,i_k) is a permutation of (i_π(1),⋯,i_π(k)) and i_j is fixed to be i_π(1). Thus, we obtain that
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]
≤ (k-1)!∑_π:π(1)=j∑_i_π(1)∈ I∑_i_π(2)∈ N(i_π(1))⋯∑_i_π(k)∈ N(i_π(1),⋯,i_π(k-1))𝔼 [| X_i_π(1)|^k+ω]
≤ (k-1)!∑_π:π(1)=j∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]
≤ ((k-1)!)^2M^k-1∑_i∈ I𝔼 [| X_i|^k+ω]≲ M^k-1∑_i∈ I𝔼 [| X_i|^k+ω].
Similarly,
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω]≲ M^k∑_i∈ I𝔼 [ | X_i|^k+ω].
Substituting (<ref>), (<ref>), and (<ref>) into (<ref>), we conclude
ℛ_ω[t_1,t_2,⋯,t_k]
≤ ℛ_ω[0,sgn(t_2),2·sgn(t_3),⋯,(k-1)sgn(t_k-1)]
≲ M^k-2+ω∑_i∈ I𝔼 [ | X_i|^k-1+ω].
By <ref>, we have
R_k,ω(<ref>)= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2]
≲∑_t_1:(k+2)∈ℳ_1,k+2M^k+ω∑_i∈ I𝔼 [ | X_i|^k+1+ω].
Noting that |ℳ_1,k+2|< 2^k+1 <cit.>, we conclude that
R_k,ω≲ M^k+ω∑_i∈ I𝔼 [ | X_i|^k+1+ω].
The proof of <ref> relies on <ref> and <ref>.
Let k:=⌈ p⌉. Then p=k+ω -1. Without loss of generality, we assume σ_n= 1.
By <ref>,
R_j,ω,n≲ M_n^j+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+1+ω].
If we let q_1=(k-1)/(k-j) and q_2=(k-1)/(j-1), then 1/q_1+1/q_2=1 and (2+ω )/q_1+(k+1+ω )/q_2=j+1+ω.
Thus,
| X^0.6(n)_i|^j+1+ω=| X^0.6(n)_i|^(2+ω )/q_1·| X^0.6(n)_i|^(k+1+ω )/q_2.
By Hölder's inequality,
M_n^j+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+1+ω]
≤ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/q_1(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/q_2
= (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^(k-j)/(k-1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^(j-1)/(k-1).
Since
ω (k-j)/(k-1)(j+ω -1)+(j-1)(k+ω-1 )/(k-1)(j+ω -1)=1,
by Young's inequality (See <ref> for details), we get
(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^k-j/(k-1)(j+ω -1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^j-1/(k-1)(j+ω -1)
≤ ω (k-j)/(k-1)(j+ω -1)(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω
+(j-1)(k+ω -1)/(k-1)(j+ω -1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Thus, we have
R_j,ω,n^1/(j+ω -1)≲(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω+(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Similarly, we derive that
R_j,1,n^1/j≲(M_n^j+1∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+2])^1/j
≤ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^k+ω -j-1/kj(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^j-ω/(k-1)j
≲ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω+(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Since by assumption M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2]→ 0 and M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]→ 0 as n→∞, we have that R_j,1,n→ 0 as n→∞.
Therefore, by <ref> and noting the fact that p=k+ω -1, we conclude
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p((M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω+(M_n^p+1∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2] )^1/p),
where C_p only depends on p.
toc
§.§ Proofs of Corollaries <ref> and <ref>
Define the graph (T_n,E_n) to be such that there is an edge between i_1,i_2∈ T_n if and only if ‖ i_1-i_2‖≤ m. From the definition of the m-dependent random field, (X_i^0.6(n))_i∈ T_n satisfies . We will therefore apply <ref> to obtain the desired result. We remark that j∈ N_n(i_1:(⌈ p⌉ +1)) only if there is ℓ∈[⌈ p⌉ +1] such that i_ℓ-j≤ m, which directly implies that |N_n(i_1:(⌈ p⌉ +1))|≤ (2m+1)^d(⌈ p⌉+1).
Moreover, by Hölder's inequality, we have
∑_i∈ T_n𝔼[| X^0.6(n)_i|^ω+2]
≤ (∑_i∈ T_n𝔼[| X^0.6(n)_i|^2])^(p-ω)/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2])^ω/p
(a)≤ M^(p-ω)/pσ^2(p-ω)/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2])^ω/p.
Here (a) is due to the non-degeneracy condition. And this directly implies that
m^(1+ω)d/ω(σ_n^-(ω+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω
≤ m^(1+ω)d/ωM^p-ω/pω(σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p→ 0 as n→∞.
Therefore, by <ref>, there exists C_p,d>0 such that for n large enough we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p.
Moreover, if (X^0.6(n)_i) is in addition assumed to be stationary, then by assumption there is a constant K such that lim inf_n→∞σ_n^2/| T_n |≥ K. Therefore, we get that
σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2]≍| T_n |^-(p+2)/2·| T_n |=| T_n |^-p/2→ 0,
and
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p
= 𝒪(| T_n |^-1/2).
Consider the index set I_n={i=(i_1,⋯,i_m):1≤ i_1≤⋯≤ i_m≤ n }⊆ℤ^m. For each i∈ I_n, let ξ_i:=h(X_i_1,⋯,X_i_m). Then W_n=σ_n^-1∑_i∈ Iξ_i. Let (I_n,E_n) be the graph such that there is an edge between i,j∈ I_n if and only if { i_1,⋯,i_m}∩{ j_1,⋯,j_m}≠∅.
Then we remark that the conditions holds. Moreover, note that j is in
N_n(i_1:(⌈ p⌉+1)) only if there is ℓ∈ [⌈ p⌉+1] and k_1,k_2∈ [m] such that j_k_1=(i_ℓ)_k_2, where (i_ℓ)_k_2 denotes the k_2-th component of the vector i_ℓ. This directly implies that the cardinality of the dependency neighborhoods are bounded by n^m-(n-m(⌈ p⌉+1))^m≍ n^m-1. Moreover, the non-degeneracy condition of the U-statistic implies that σ_n^2≍ n^2m-1 <cit.>. Applying <ref>, we get that
𝒲_p(ℒ(W_n),𝒩(0,1))
≲ (n^m(n^m-1)^1+ω1/σ_n^ω +2𝔼[| h(X_1,⋯,X_m) |^ω +2])^1/ω
+(n^m(n^m-1)^p+11/σ_n^p+2𝔼[| h(X_1,⋯,X_m) |^p+2])^1/p
≲ n^-1/2(𝔼[| h(X_1,⋯,X_m) |^ω +2])^1/ω+n^-1/2(𝔼[| h(X_1,⋯,X_m) |^p+2])^1/p
≤ n^-1/2‖ h(X_1,⋯,X_m) ‖ _p+2^(ω+2)/ω+n^-1/2‖ h(X_1,⋯,X_m) ‖ _p+2^(p+2)/p.
By the moment condition, ‖ h(X_1,⋯,X_m) ‖ _p+2<∞. Thus, we conclude
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(n^-1/2).
toc
§.§ Proof of Theorem <ref>
For ease of notation we write ω_p:=𝒲_p(ℒ(W_n),𝒩(0,1)). Choose ρ∈ (0,1). Then remark that for all ϵ>0 there is G∼𝒩(0,1) such that G-W_n_p≤𝒲_p(ℒ(W_n),𝒩(0,1))+ϵ. Therefore, by the union bound we have
ℙ(W_n≥ t) = ℙ(W_n-G+G≥ t)
≤ℙ(W_n-G≥ (1-ρ)t)+ ℙ(G≥ρ t)
(a)≤Φ^c(ρ t)+ W_n-G_p^p/((1-ρ) t)^p
≤Φ^c(ρ t)+ ( ω_p+ϵ)^p/((1-ρ) t)^p
where to obtain (a) we have used Markov's inequality. Now as this holds for any arbitrary choice of ϵ>0 we conclude that
ℙ(W_n≥ t) ≤Φ^c(ρ t)+ ω_p^p/((1-ρ) t)^p.
Define the function g_t:x↦ (1-x)^p+1 e^-(xt)^2/2, then we can remark that g_t:[0,1]→[0,1] is a bijection. Choose ρ^*_t:=g_t^-1(√(2π)pω_p^p/t^p+1).
Moreover, we obtain that
ℙ(W_n≥ t) ≤Φ^c(ρ^*_t t)+ φ(ρ^*_t t)(1-ρ^*_t)tω_p^p/t^p+1(1-ρ_t^*)^p+1φ(ρ^*_t t)
(a)≤Φ^c(t)+ (1-ρ^*_t)tφ(ρ^*_t t)(1+1/p)
≤Φ^c(t)+ p^1/p+1ω_p^1-1/p+1/tφ(ρ^*_t t(1-1/p+1))(1+1/p)
where to obtain (a) we used the fact that Φ^c(ρ^*_t t)≤Φ^c(t)+(1-ρ^*_t)tsup_x∈ [ρ^*_t t,t]φ(x).
Suppose that t≥ 1 and satisfies 1-√(2βlog t)/t≤ 1. Define
x:= √(2βlog t )/t,
we notice that x∈ [0,1]. We remark that if
ω_p≤ (√(2π)p)^1/p+1(1-√(2βlog t )/t)t^1-β/p+1.
then we have g_t^-1(x)≥√(2π)pω_p^p/t^p+1.
Therefore as g_t^-1(·) is a decreasing function we have that x≤ρ^*_t which implies that
ℙ(W_n≥ t)≤Φ^c(t)+ 1/t^1+β(1-1/p+1)p^1/p+1ω_p^1-1/p+1(1+1/p).
Moreover, similarly we can remark that
ℙ(G≥ (1+ρ) t) ≤ℙ(W_n≥ t )+ℙ(G-W_n≥ρ t)
≤ℙ(W_n≥ t )+(ω_p+ϵ)^p/ρ^pt^p
Therefore, as this holds for any arbitrary ϵ>0 we obtain that
Φ^c((1+ρ) t) ≤ℙ(W_n≥ t )+ω_p^p/ρ^pt^p.
Moreover, we can definite g̃_t:x↦ e^-(1+x)^2t^2x^p+1 then choose ρ̃_t^*:=g̃_t^-1(√(2π)pω_p^p/t^p+1). We similarly obtain that
ℙ(W_n≥ t)
≥Φ^c(t)-p^1/p+1ω_p^1-1/p+1/tφ(t(1-1/p+1))(1+1/p).
amsalpha
toc
|
http://arxiv.org/abs/2307.06033v1 | 20230712092556 | AI-Generated Imagery: A New Era for the `Readymade' | [
"Amy Smith",
"Michael Cook"
] | cs.AI | [
"cs.AI",
"cs.CY"
] |
Pyramid Deep Fusion Network for Two-Hand Reconstruction from RGB-D Images
Jinwei Ren, and Jianke Zhu, Senior Member, IEEE
Jinwei Ren and Jianke Zhu are both with the College of Computer Science and Technology, Zhejiang University, Zheda Rd 38th, Hangzhou, China.
Email: {zijinxuxu, jkzhu}@zju.edu.cn;
Jianke Zhu is the Corresponding Author.
August 12, 2023
====================================================================================================================================================================================================================================================================================
While the term `art' defies any concrete definition, this paper aims to examine how digital images produced by generative AI systems, such as Midjourney, have come to be so regularly referred to as such. The discourse around the classification of AI-generated imagery as art is currently somewhat homogeneous, lacking the more nuanced aspects that would apply to more traditional modes of artistic media production. This paper aims to bring important philosophical considerations to the surface of the discussion around AI-generated imagery in the context of art. We employ existing philosophical frameworks and theories of language to suggest that some AI-generated imagery, by virtue of its visual properties within these frameworks, can be presented as `readymades' for consideration as art.
§ INTRODUCTION
Within many popular online spaces such as Twitter and Instagram, the recent abundance of artistic imagery produced by text-to-image generative deep learning models has sparked debate as to the nature of these images in the context of more traditional modes of understanding art, and the art world [We use the term `art' here to primarily refer to visual art.]. We seek to bring some clarity to this discussion using established theoretical frameworks. Text-to-image synthesis requires natural language descriptions that are used to generate corresponding images; the natural language description often being referred to as the `prompt'. Online platforms, such as Midjourney, allow users to generate artistic imagery within seconds, with as little as a single word providing the driving concept for the image synthesis process. The term `art' readily eludes any concrete definition, yet it seems valid given the current climate around the affordances of generative AI, to examine how such a large corpus of digital images has come to be regularly referred to under this title (see figure <ref>). After over a decade of computational creativity research attempting to encourage a shift in the public perception of computer-generated art, the recent and drastic change in this domain leaves the question open of how and why this has happened now.
The notion of `framing' has been explored within the field of computational creativity with an aim to deepen our understanding of the relationship between the perception of the observer and the artistic artifact being perceived <cit.>. Traditional ideas of framing do not tend to engage with the conceptual baggage of the observer and instead seek to expose aspects of a generative system in the hope that this transparency positively affects the perception of the system, and its output, as creative. We propose a distinct view (but to the same ends) that it is the viewer framing, rather than the artist framing, that is important to consider in this context.
§ `AI ART' - A PHILOSOPHY
There are many frameworks, from the fields of art theory and philosophy, that can help to interrogate the perceptual status of AI-generated imagery in the context of categorising imagery as belonging to the class of `art'. We propose that philosophical theories around the formation of meaning, specifically Wittgenstein's theory of meaning, can inform our understanding of the perception of AI-generated imagery in this contemporary context.
§.§ Wittgenstein's theory of meaning
In his work: `Philosophical Investigations' the German philosopher Ludwig Wittgenstein tackles how it is that, despite the endlessly variable manifestations of things in the world, we can still come to understand similarities between them and can identify groups of occurrences around us <cit.>. He also addresses how it may be that we come to attach meaning to these groups through language, specifically words.
§.§.§ Wittgenstein's Thought experiment
To demonstrate this theory, Wittgenstein uses a thought experiment that questions what it means for something to be considered a `game'. The many differences between games clearly do not define whether or not they can still be perceived as qualifying as such, as when trying to describe what the word `game' means, any single defining characteristic of a game can also be found outside of what we perceive `game' to mean. For example, Chess and Solitaire are both considered to be games, even when one involves a winner and a loser and the other does not. So the notion of `game' is a contested concept, as an attempt at any ostensive definition is rejected. Wittgenstein theorises that it is only through a complex web of learnt resemblances and associations that we come to understand a game as belonging to the class of `game'; and not through any concrete definition present in the word itself. Wittgenstein proposes that it is the learnt combination and exemption of traits that contribute to our notion of what the word `game' means, despite there being many ways that people come to understand the umbrella term `games' <cit.>.
§.§.§ Family Resemblance
Wittgenstein uses the analogy of `family resemblance' to illustrate how it is that we identify members of cognitive groups as belonging, whilst discriminating against others. Wittgenstein proposes that we identify one activity as belonging to the category of `game', but not another, in the same way that we might recognise a person as belonging to one family and not another. This is through identifying the resemblances that a person would have with their family due to the fact that they are relations, and the lack of this resemblance otherwise. Factors such as eye colour, surname, hair colour, height, or accent cannot individually be responsible for the decision that someone may be related to someone else, as it then could be concluded that everybody was related. A combination of factors, however, becomes `resemblances' <cit.>.
§.§.§ Socio-cultural Context
Wittgenstein argues that our socio-cultural backdrop is imperative to our understanding of language. He famously theorised that:“If a lion could talk, we could not understand him" <cit.>. A lion is playfully used here, as lions exist within a very different set of social contexts to people. Within Wittgenstein's framework, which anchors meaning in lived experience, it seems consistent to assume that even if a lion could talk, we would not be able to understand what it would say. This is because the language would be derived from a set of socio-cultural circumstances that we are unfamiliar with, not being lions ourselves. Wittgenstein makes the case that it is the matrix of resemblance and associations within a social and cultural context that gives language meaning <cit.>.
§.§ Art ⇔ Game
We propose that the word `game' used in Wittgenstein's thought experiment is interchangeable with the word `art', as this is a similarly contested concept <cit.>. As such, we propose that we come to understand an image as art in the same way that we come to understand an activity as a game - in reference to a plexus of resemblances and associations with existing art.
One could try and make the argument that art is pigment on canvas, but there are examples of sculpture that contradict this, and so on. So, we rely on a self-reflective consciousness of social context, resemblances, and associations in order to recognize something as belonging to a perceptual category such as Art. Just in the same way that all games contribute to the meaning of the word `game', all art contributes to what is meant by the term `art'.
Images and objects that are readily considered to be art manifest in many different forms and so defy any ostensible definition. Given this, we propose that in the same way that Wittgenstein's theory of meaning allows for an explanation for how it is possible that we can identify previously unseen examples of games as belonging to the category of `game', this theory of meaning also helps to explain how it is that AI-generated imagery can be categorised as `art'. It is through our understanding of other art, despite its relentless internal physical differences, that we come to be able to classify something else as art. So it follows, that if an image generated by AI has the abstracted properties and resemblances needed to identify it with this perceptual class, then it becomes a member of that class (Art).
§.§ Saussure's `Signifier' and `Signified'
Semiotics is concerned with the making of meaning through the creation and interpretation of `signs'. Signs can be words, images, sounds, acts, or objects. According to the American philosopher, mathematician, and scientist Charles Sanders Pierce: “We think only in signs” <cit.>.
The Swiss Linguist Ferdinand de Saussure and, later on, the writings of the French literary theorist Roland Barthes (who famously declared `The Death of the Author') influenced thought on the role of semiotics in the deconstruction and extraction of meaning in the social context. This is through the recognition of a dyadic system of signs. For Saussure, semiotics breaks down the factors in our external environment into signs, which are comprised of a `signifier' and a `signified'. The `signifier' is the physical form the sign takes, and the `signified' is the concept that it represents. The sign is the result of the association of the signifier with the signified, and the relationship between them is referred to as `signification' <cit.>. The meaning of signs is said not to be inherent in the words, images, sounds, acts or objects themselves, but rather in the meaning we attach to them. Anything can be a sign as long as someone interprets it as referring to something else. We often interpret signs unconsciously, relating them to familiar systems of conventions: “A sign is the basic unit of language (a given language at a given time). Every language is a complete system of signs" <cit.>.
§.§ Image as `signifier', Art as `signified'
The question of how a digital image can be removed from its status as simply an `image' and be interpreted within a new field of understanding where it is perceived to have a meaning beyond its basic function, we propose, starts in the interaction between the `consumer' of the image, and the image itself. This echoes the ideas explored in work by Colton and Wiggins <cit.>, where Computational Creativity is centered around observers and audiences. The way in which an image is perceived is similarly relevant to examine here, as it is at this point of consumption by an audience that an understanding can be formed by the observer, and where a decision as to the status of an image can be decided (i.e. an image is given the status of `art').
According to the dyadic theory of semiotics, images have the potential to become art if they are a signifier for the concept of art. In this case, the image is the `signifier' and the idea of art is the signified. The signifier is able to function as such as it alludes to the matrix of associated properties (Wittgenstein) that is signified. Figure <ref> shows an example of this idea. We suggest that if meaning arises in the consumption of a dyadic system of signs, and art consists of these very factors, then it follows that an image can be understood as art through a participation in this system and the reciprocal nature of the interaction between the perceiver and the sign being perceived.
Within the framework of Wittgenstein's theory, it is possible to understand how an image may come to be understood as art given the social context. Through the linguistic phenomena of semiotics, it is possible to understand the relationship between an image as a signifier, and the concept anchored to it as the signified - which is shaped by the given social context. Given this, it seems that it is necessary for there to be a much wider presence in a work of art, of a space for the intuitive understanding of the perceiver, but also a reference to something `learnt'. We propose that it is the reading of an image that facilitates its attribution to the class of art.
Figure <ref> is an example of an AI-generated image (made using Midjourney) that demonstrates the many different visual properties in an image that AI may generate in order to be a signifier to the concept of art and satisfy its internal categorisation system given a prompt (i.e. the user asks for `a work of art', as shown in figure <ref>). A brief visual analysis exposes the following properties associated with art genres: portrait painting (a 3/4 profile of a man is shown), realism (the portrait features are mildly abstracted), graffiti art (colourful and chaotic spray paint marks and splatters), Impressionism (colour blocks in the face give the impression of form, and create highlights and shadow - but are not true to a real-life depiction), Pop Art (bold, bright, multi-coloured areas in direct contrast with areas of black). A closer analysis reveals that the colours in the image correspond to colour theory. The image is mostly constructed from two sets of complementary colours: blue/orange and red/green. Furthermore, different shades of these colours are used to create highlights and shadows. These aesthetic properties are indicators within the signifier (the image) that signify these well know conceptions of visual art.
Figure <ref> shows an example of an image generated by Midjourney that doesn't explicitly hook into the visual properties associated with artworks, and yet has cultural significance within a social context. In fact, when the image of the Pope first became viral on Twitter, many people believed it to be a real photograph <cit.>. It only surfaced later on that it was in fact an AI-generated image. This phenomenon is an example of the importance of the audience's perception of an image regarding the classification of that image within the discourse around it.
§.§ Duchamp's `Readymades'
Marcel Duchamp's concept of the `readymade' was a significant development in the history of modern art. The term refers to everyday mass-produced objects that were selected and presented as works of art. The act of choosing these objects, and presenting them in the context of an art exhibition, was challenging traditional notions of what could be considered art and questioning the value of artistic skill and craftsmanship.
The chosen objects were often prefabricated and were separated from their intended `mundane' use by being brought into the art gallery context and discourse <cit.>.
We propose that AI-generated imagery can be a conceptual extension of the phenomena of the `readymade', by virtue of them being mass-produced and can challenge our existing notions of what can constitute an art artifact. As we will explore in the next section, many of
Duchamp's readymades were intended to challenge traditional perceptions of the role of craftsmanship in art. Mass-produced AI-generated imagery poses a similar challenge to the kinds of skills needed to create culturally significant and artistic imagery.
§.§.§ Mass production and the `Bach Faucet'
Text-to-image models are systems that are capable of synthesising potentially unlimited amounts of new and high-quality digital imagery, which we can view as a kind of `Bach faucet', a term coined by Kate Compton to refer to a situation in which: `a generative system produces an infinite amount of content that is of equal or better quality than a culturally significant original...since the endless supply of this content makes it no longer rare, it decreases its value'<cit.>. This phenomenon represents the inverse of the value transaction inherent in the process of creating readymades. In the case of the readymade, an object of low scarcity and value is transmuted into a scarce object with high value. AI however is able to take scarce and high-value artifacts (artworks) and mass produce images of equal or increased quality which lowers scarcity, decreasing the overall value.
§.§ Latent space imbues generated imagery with artistic signifiers
It is timely to intersect this theory with notions of machine learning and how complexity is abstracted during the training, and image synthesis, process.
We propose that image synthesis algorithms, that are trained on large-scale data sets of corresponding signs (or patterns <cit.>), are able to abstract the complexity of the matrix of multiple associated properties (or signifiers), that according to Wittgenstein and Saussure accumulates as an understanding of what art comes to mean, into a `latent space'. Any image generated from this space, that relates to `art' (via a text prompt), is imbued with the properties learnt from the signs in the training data which act as signifiers to the concept of art. We propose that, as a result, when we encounter these properties in AI-generated imagery, we are easily able to associate the image (signifier) with the web of resemblances and associations that we bring to the interaction regarding art (the signified). With some degree of inevitability, given the potential for exposure to the images in the data (pre-training), the generated images match our perception of `art' - as the model is trained on many thousands of existing examples of art. This is why, we propose, that so many AI-generated images are considered to belong to the category of art.
§.§ Kate Compton's `Liquid Art'
The term `Liquid Art' was disseminated at an invited talk given by Kate Compton at MIT in the autumn of 2022: “The ARTIFACT is dead. All that remains is the EXPERIENCE. Welcome to the world of liquid art” <cit.>. This term describes a new form of art experience, specifically in online spaces, in the wake of the invention of text-to-image generative AI systems <cit.>.
Liquid art is described as a space of potential art artefacts, that is moved through by surfing and filters, and that is experienced in streams or overwhelming waves (most commonly in an online space). Liquid art is the phenomenon of being exposed to mass-produced artefacts, such as AI-generated imagery, in a space where its abundance means that the experience of `surfing' this media becomes the experience of art. In this framework, art is a verb - as art becomes the experience of moving through this possibility space <cit.>. This is opposed to more traditional forms of `Solid' art, where there is an art artifact of fixed form.
Liquid art has implications for images and their function as signs in our language system, including visual language systems, as the abundance of imagery and the properties of this imagery (high-quality artistic imagery) shifts the social context and environment for the sign within art, and the context within which signifiers of art function.
§.§.§ Liquid Art and the Readymade
We propose that, out of this endlessly generated sea of imagery, an image can be selected based on the ways in which it functions as a sign. Another way to conceptualise this would be to class the output space of the image generator as a mass-produced artefact (a space of images) and the act of prompting and identifying an image as `good' is what creates the readymade.
We propose that because the history of a generated image (the transmutation of meaning from the training data into the latent space, re-manifested as an image in a liquid perceptual possibility space, from which it is then selected as a readymade for its quality as a sign within visual art) is so rich, that there is an increase in value in the selection of generated imagery in particular as a new era of readymade, because traditional readymades as a concept (and also an aesthetic to be signified) are already a part of the web of understanding that resonates with this selection process as they are an established part of art history now. At the same time, these images can still serve the function of challenging traditional notions of what can be considered as art (as with <ref>) and question the value of artistic skill and craftsmanship as the traditional readymades did.
§ CONCLUSION AND FUTURE WORK
AI-generated imagery can come to be perceived as art according to its perception as a sign by an observer within their social context. The use of mass-produced AI-generated imagery as art can be seen as a conceptual extension of the `readymade' within in a complex contemporary context, where the selection and presentation of the image can challenge even current notions of what is required for the experience of `art'. We suggest that future work could go on to discuss memes as readymades, based on the way that they involve the use of mass-produced imagery and hold relevance to the socio-cultural backdrop of the time.
iccc
|
http://arxiv.org/abs/2307.04594v1 | 20230710143529 | Parameterized Analysis of the Cops and Robber Problem | [
"Harmender Gahlawat",
"Meirav Zehavi"
] | cs.DM | [
"cs.DM"
] |
Singling out SO(10) GUT models using recent PTA results
Jonathan Steiner
February 2023
=========================================================
Pursuit-evasion games have been intensively studied for several decades due to their numerous applications in artificial intelligence, robot motion planning, database theory, distributed computing, and algorithmic theory. Cops and Robber () is one of the most well-known pursuit-evasion games played on graphs, where multiple cops pursue a single robber. The aim is to compute the cop number of a graph, k, which is the minimum number of cops that ensures the capture of the robber.
From the viewpoint of parameterized complexity, is W[2]-hard parameterized by k [Fomin et al., TCS, 2010].
Thus, we study structural parameters of the input graph. We begin with the vertex cover number (𝗏𝖼𝗇). First, we establish that k ≤𝗏𝖼𝗇/3+1. Second, we prove that parameterized by 𝗏𝖼𝗇 is by designing an exponential kernel. We complement this result by showing that it is unlikely for parameterized by 𝗏𝖼𝗇 to admit a polynomial compression. We extend our exponential kernels to the parameters cluster vertex deletion number and deletion to stars number, and design a linear vertex kernel for neighborhood diversity. Additionally, we extend all of our results to several well-studied variations of .
§ INTRODUCTION
In pursuit-evasion, a set of agents, called pursuers, plan to catch one or multiple evaders. Classically, pursuit-evasion games were played on geometric setups, where pursuers and evaders move on the plane following some rules <cit.>. Parsons <cit.> formulated pursuit-evasion on graphs to model the search for a person trapped in caves, giving rise to the field of graph searching. Since then, pursuit-evasion has been studied extensively, having applications in artificial intelligence <cit.>, robot motion planning <cit.>, constraint satisfaction and database theory <cit.>, distributed computing <cit.> and network decontamination <cit.>, and significant implications in graph theory and algorithms <cit.>.
() is one of the most intensively studied pursuit-evasion games on graphs, where a set of cops pursue a single robber. Players move in discrete time steps alternately, starting with the cops. In each move, a player can move to an adjacent vertex, and the cops win by capturing the robber (i.e., if a cop and the robber occupy the same vertex). The goal is to compute the cop number of a graph G, denoted 𝖼(G), which is the minimum number of cops required to win in G. We define the game formally in Section 2. is well studied in the artificial intelligence literature under the name Moving Target Pursuit () <cit.>, where we consider sub-optimal but faster strategies from an applicative point of view. The results have found numerous applications in game design, police chasing, path planning, and robot motion planning <cit.>.
Determining the parameterized complexity of games is a well-studied research topic <cit.>.
Most pursuit-evasion games are, in fact, AW[*]-hard <cit.>. In particular, is W[2]-hard parameterized by 𝖼(G) <cit.>. Thus, we consider structural parameterizations, focusing on kernelization, also known as polynomial-time preprocessing with a parametric guarantee. Due to the profound impact of preprocessing, kernelization was termed “the lost continent of polynomial time” <cit.>. We begin with the most studied structural parameter in parameterized complexity: the vertex cover number (𝗏𝖼𝗇) of the input graph. We bound 𝖼(G) in terms of 𝗏𝖼𝗇, as well as achieve both positive and negative results concerning the kernelization complexity of parameterized by 𝗏𝖼𝗇. We generalize our kernelization results to the smaller parameters cluster vertex deletion number () and deletion to stars number (), as well as to the parameter neighborhood diversity (). Furthermore, we extend all our results to several well-studied variants of .
The choice of 𝗏𝖼𝗇 as a parameter to study pursuit-evasion games is natural due to various scenarios where 𝗏𝖼𝗇 is significantly smaller than the graph size. For example, this includes scenarios where we model the existence of one or few (possibly interconnected) central hubs—for illustration, suppose an intruder is hiding in a system of buildings where we have only few corridors but a large number of rooms, or suppose we have few virtual servers with many stations (e.g., of private users) that can communicate only with the servers. Furthermore, 𝗏𝖼𝗇 is one of the most efficiently computable parameters from both approximation <cit.> and parameterized <cit.> points of view, making it fit from an applicative perspective even when a vertex cover is not given along with the input. Moreover, 𝗏𝖼𝗇 is the best choice for proving negative results—indeed, our negative result on the kernelization complexity of for 𝗏𝖼𝗇 implies the same for many other well-known smaller parameters such as treewidth, treedepth and feedback vertex set <cit.>. One shortcoming of 𝗏𝖼𝗇 as a parameter is that it is very high for some simple (and easy to resolve) dense graphs like complete graphs. However, we generalize our kernel to , which is small for these dense graphs, and for . Furthermore, we design a linear kernel for the well-studied parameter . We further discuss the utility of our kernels in the Conclusion.
§.§ Brief Survey
was independently introduced by Quilliot <cit.> and by Nowakowski and Winkler <cit.> with exactly one cop[In fact, a specific instance of on a specific graph was given as a puzzle in Problem 395 of the book Amusements in Mathematics <cit.> already in 1917.]. Aigner and Fromme <cit.> generalized the game to multiple cops and defined the cop number of a graph.
The notion of cop number and some fundamental techniques introduced by Aigner and Fromme <cit.> yielded a plethora of results on this topic. For more details, we refer the reader to the book <cit.>.
The computational complexity of finding the cop number of a graph has been a challenging subject of research. On the positive side, Berarducci and Intrigila <cit.> gave a backtracking algorithm that decides whether G is k-copwin in 𝒪(n^2k+1) time.
On the negative side, Fomin et al. <cit.> proved that determining whether G is k-copwin is NP-hard, and W[2]-hard parameterized by k. Moreover, Mamino <cit.> showed that the game is PSPACE-hard, and later, Kinnersley <cit.> proved that determining the cop number of a graph is, in fact, EXPTIME-complete. Recently, Brandt et al. <cit.> provided fine-grained lower bounds, proving that the time complexity of any algorithm for is Ω(n^k-o(1)) conditioned on Strong Exponential Time Hypothesis (𝖲𝖤𝖳𝖧), and 2^Ω (√(n)) conditioned on Exponential Time Hypothesis (𝖤𝖳𝖧).
Since admits an XP-time algorithm, it is sensible to bound the cop number for various graph classes or by some structural parameters. Nowadays, we know that the cop number is 3 for the class of planar graphs <cit.> and toroidal graphs <cit.>, 9 for unit-disk graphs <cit.>, 13 for string graphs <cit.>, and is bounded for bounded genus graphs <cit.> and minor-free graphs <cit.>. Moreover, it is known that the cop number of a graph G is at most 𝗍𝗐(G)/2+1 <cit.>, where 𝗍𝗐(G) denotes the treewidth of G, and at most 𝖼𝗐(G) <cit.>, where 𝖼𝗐(G) denotes the clique-width of G.
§.§ Our Contribution
We conduct a comprehensive analysis of Cops and Robber parameterized by 𝗏𝖼𝗇. We start by bounding the cop number of a graph by its vertex cover number:
theoremVCBound
For a graph G, 𝖼(G) ≤𝗏𝖼𝗇/3+1.
The proof is based on the application of three reduction rules. Each of our rules controls its own cop that, in particular, guards at least three vertices that belong to the vertex cover. Once our rules are no longer applicable, we exhibit that the remaining unguarded part of the graph is of a special form. In particular, we exploit this special form to prove that, now, the usage of only two additional cops suffices.
We complement Theorem <ref> with an argument (Lemma <ref>) that it might be difficult to improve this bound further using techniques similar to ours.
Second, we prove that Cops and Robber parameterized by 𝗏𝖼𝗇 is 𝖥𝖯𝖳 by designing a kernelization algorithm:
theoremVCKernel
Cops and Robber parameterized by 𝗏𝖼𝗇 admits a kernel with at most 𝗏𝖼𝗇+ 2^𝗏𝖼𝗇/√(𝗏𝖼𝗇) vertices.
Our kernel is also based on the application of reduction rules. However, these rules are very different from those used for the proof of Theorem 1. While our main rule is quite standard in kernelization (involving the removal of, in particular, false twins), the proof of its correctness is (arguably) not.
Theorem <ref>, along with Theorem <ref> and an XP-algorithm (Proposition <ref>), gives the following immediate corollary:
corollaryVCFPT
Cops and Robber is 𝖥𝖯𝖳 parameterized by 𝗏𝖼𝗇, and is solvable in (𝗏𝖼𝗇+2^𝗏𝖼𝗇/√(𝗏𝖼𝗇))^𝗏𝖼𝗇/3+2· n^𝒪(1) time.
We complement our kernel by showing that it is unlikely for Cops and Robber to admit a polynomial compression, by providing a polynomial parameter transformation from a suitable source problem. In particular, our reduction makes non-trivial use of a known construction of a special graph having high girth and high minimum degree.
theoremVCNPoly
Cops and Robber parameterized by 𝗏𝖼𝗇 does not admit a polynomial compression, unless 𝖭𝖯⊆𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒.
Next, we present a linear kernel for Cops and Robber parameterized by neighbourhood diversity:
theoremND
Cops and Robber parameterized by 𝗇𝖽 admits a kernel with at most 𝗇𝖽 vertices.
On the positive side, we extend our exponential kernel to two smaller structural parameters, 𝖼𝗏𝖽 and 𝖽𝗍𝗌:
theoremVClique
Cops and Robber parameterized by 𝖼𝗏𝖽 admits a kernel with at most 2^2^𝖼𝗏𝖽 + √(𝖼𝗏𝖽) vertices. Moreover, Cops and Robber parameterized by 𝖽𝗍𝗌 admits a kernel with at most 2^2^𝖽𝗍𝗌 + 𝖽𝗍𝗌^1.5 vertices.
Several variants of Cops and Robber have been studied due to their copious applications. We extend our results, parameterized by 𝗏𝖼𝗇, to some of the most well-studied ones. We define these variants (and the notations used) in Section <ref>. We first bound the cop numbers of these variants by 𝗏𝖼𝗇:
For a graph G: (1) 𝖼_lazy(G) ≤𝗏𝖼𝗇/2 +1; (2) 𝖼_attack(G) ≤𝗏𝖼𝗇/2 +1; (3) 𝖼_active(G) ≤𝗏𝖼𝗇; (4) 𝖼_surround(G) ≤𝗏𝖼𝗇; (5) 𝖼_s(G) ≤𝗏𝖼𝗇 (for any value of s); (6) for a strongly connected orientation G⃗ of G, 𝖼(G⃗) ≤𝗏𝖼𝗇.
We also extend our exponential kernel to these variants:
theoremLAtKernel
Lazy CnR and Attacking CnR parameterized by 𝗏𝖼𝗇 admit a kernel with at most 𝗏𝖼𝗇+2^𝗏𝖼𝗇/√(𝗏𝖼𝗇) vertices. Moreover, Cops and Robber on strongly connected directed graphs admits a kernel with at most 3^𝗏𝖼𝗇+𝗏𝖼𝗇 vertices.
Then, we present a slightly more general kernelization that works for most variants of the game. In particular, we define a new variant of the game (in Section <ref>), Generalized CnR, which generalizes various well-studied variants of CnR. We have the following result, which proves that Generalized CnR parameterized by 𝗏𝖼𝗇 admits an exponential kernel.
theoremGenKernel
Generalized CnR parameterized by 𝗏𝖼𝗇 admits a kernel with at most 𝗏𝖼𝗇+𝗏𝖼𝗇· 2^𝗏𝖼𝗇 vertices.
Then, we show that the same kernelization algorithm also provides us the following result:
Fully Active CnR, Surrounding CnR, and Fast Robber CnR parameterized by 𝗏𝖼𝗇 admit a kernel with at most 𝗏𝖼𝗇+𝗏𝖼𝗇· 2^𝗏𝖼𝗇 vertices.
Finally, we complement our exponential kernels for these variants by arguing about their incompressibility:
Lazy CnR, Attacking CnR, Fully Active CnR, Surrounding CnR, and CnR on strongly connected directed and oriented graphs parameterized by 𝗏𝖼𝗇 do not admit a polynomial compression, unless 𝖭𝖯⊆𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒.
§.§ Additional Related Works
For a graph with girth at least 5, the cop number is lower bounded by the minimum degree of the graph <cit.>. As implied by the lower bound for the Zarankiewicz problem <cit.>, an extremal graph with girth 5 has Ω(n^3/2) edges.
In a graph with Ω(n^3/2) edges, if there is a vertex whose degree is smaller than c√(n), for an appropriate constant c, then we can remove it and still get a smaller graph with Ω(n^3/2) edges.
Hence, eventually, every vertex has degree Ω(√(n)).
Therefore, the cop number of such a graph is Ω(√(n)).
Meyniel <cit.> conjectured this to be tight, that is, 𝒪(√(n)) cops are sufficient to capture the robber in any connected graph.
This is probably the deepest conjecture in this field (see <cit.>). Since then, several attempts have been made to bound the cop number of general graphs <cit.>. Although these results establish that 𝖼(G) = o(n), even the question of whether 𝖼(G) = 𝒪(n^1-ϵ) for some ϵ >0 remains open.
Many graph classes have unbounded cop number. The graph classes for which the cop number is Ω(√(n)) are called Meyniel extremal. These include bipartite graphs <cit.>, subcubic graphs <cit.>, and polarity graphs <cit.>. Meyniel's conjecture was also considered for random graphs <cit.>.
Lastly, we remark that variations of Cops and Robber differ mainly in the capabilities of the cops and the robber. Some of these variations were shown to correspond to several width measures of graphs, such as treewidth <cit.>, pathwidth <cit.>, tree-depth <cit.>, hypertree-width <cit.>, cycle-rank <cit.>, and directed tree-width <cit.>. Moreover, Abraham et al. <cit.> defined the concept of a cop-decomposition, which is based on the cop strategy for the game on minor-free graphs provided by Andreae <cit.>, and showed that it has significant algorithmic applications.
§ PRELIMINARIES
For ℓ∈ℕ, let [ℓ] = {1,…, ℓ}. Whenever we mention a/b, we mean ⌈a/b⌉.
§.§ Graph Theory
For a graph G, we denote its vertex set by V(G) and edge set by E(G). We denote the size of V(G) by n and size of E(G) by m. In this paper, we consider finite, connected[The cop number of a disconnected graph is the sum of the cop numbers of its components; hence, we assume connectedness.], and simple graphs.
Let v be a vertex of a graph G. Then, by N(v) we denote the open neighbourhood of v, that is, N(v)= {u | uv ∈ E(G)}.
By N[v] we denote the closed neighbourhood of v, that is, N[v] = N(v) ∪{v}. For X ⊆ V(G), we define N_X(v) = N(v) ∩ X and N_X[v] = N[v] ∩ X. We say that v dominates u if u∈ N[v]. The girth of a graph G is the length of a shortest cycle contained in G.
A u,v-path is a path with endpoints u and v. A path is isometric if it is a shortest path between its endpoints. For u,v∈ V(G), let d(u,v) denote the length of a shortest u,v-path.
Let G be a graph and U⊆ V(G). Then, G[U] denotes the subgraph of G induced by U. A set U ⊆ V(G) is a vertex cover if G[V(G) ∖ U] is an independent set. The minimum cardinality of a vertex cover of G is its vertex cover number (𝗏𝖼𝗇). Moreover, U is a cluster vertex deletion set if G[V(G) ∖ U] is a disjoint union of cliques. The minimum size of a cluster vertex deletion set of a graph is its cluster vertex deletion number (𝖼𝗏𝖽). Additionally, U is a deletion to stars set if G[V(G) ∖ U] is a disjoint union of star graphs. The minimum size of a deletion to stars set of a graph is its deletion to stars number (𝖽𝗍𝗌). Two vertices u,v ∈ V(G) have the same type if and only if N(v)∖{u} = N(u) ∖{v}. A graph G has neighborhood diversity at most w if there exists a partition of V(G) into at most w sets, such that all the vertices in each set have the same type.
§.§ Cops and Robber
Cops and Robber is a two-player perfect-information pursuit-evasion game played on a graph.
One player is referred to as the cop player and controls a set of cops, and the other is referred to as the robber player and controls a single robber.
The game starts with the cop player placing each cop on some vertex of the graph, and multiple cops may simultaneously occupy the same vertex. Then, the robber player places the robber on a vertex.
Afterwards, the cop player and the robber player make alternate moves, starting with the cop player.
In the cop player move, the cop player, for each cop, either moves it to an adjacent vertex (along an edge) or keeps it on the same vertex. In the robber player move, the robber player does the same for the robber. For simplicity, we will say that the cops (resp., robber) move in a cop (resp., robber) move instead of saying that the cop (resp., robber) player moves the cops (resp., robber). Throughout, we denote the robber by ℛ.
A situation where one of the cops, say 𝒞, occupies the same vertex as ℛ is a capture. (We also say that 𝒞 captures ℛ and that ℛ is captured by 𝒞.) The cops win if they have a strategy to capture ℛ, and ℛ wins if it has a strategy to evade capture indefinitely. A graph G is k-copwin if k cops have a winning strategy in G.
The cop number of G, denoted 𝖼(G), is the minimum k such that G is k-copwin. For brevity, G is said to be copwin if it is 1-copwin (i.e., 𝖼(G) = 1).
Accordingly, we have the following decision version of the problem.
Input: A graph G and an integer k ∈ ℕ.
Question: Is G k-copwin?
We say that some cops guard a subgraph H of G if ℛ cannot enter H without getting captured by one of these cops in the next cop move. We shall use the following result:
Let P be an isometric path in G. Then one cop can guard P after a finite number of rounds/cop moves.
Currently, the best known algorithm to decide whether G is k-copwin is by Petr et al. <cit.>:
Cops and Robber is solvable in 𝒪(kn^k+2) time.
If a cop 𝒞 occupies a vertex v, then 𝒞 attacks N[v]. A vertex u is safe if it is not being attacked by any cop. If ℛ is on a vertex that is not safe, then ℛ is under attack.
§.§ Variations of Cops and Robber
Several variations of Cops and Robber have been studied in the literature, differing mainly in the movement rules of the agents, the definition of capture, and the capabilities of the agents. We now define the games considered in this paper, and first list some of the primary properties of the gameplay in which these variations differ:
* Speed of agents: If an agent has speed s, where s∈ℕ, then the agent can move along at most s edges in its turn. We note that a robber with speed s cannot move over a cop, that is, the robber can move along a path of length at most s not containing any cop, in its turn.
* Lazy/active/flexible cops:
Let C be the set of cops and let A∪ F ∪ L be a partition of the set of cops such that A is the set of active cops, F be the set of flexible cops, and L be the set of lazy cops. Then, in each cop move, at most one cop from L can make a move, each cop from A must make a move, and each cop from F can either make a move or stay on the same vertex. Unless mentioned otherwise, all cops are assumed to be flexible.
* Reach of cops:
If a cop 𝒞_i has reach λ_i, then ℛ cannot access a vertex that is at distance at most λ_i from the vertex occupied by 𝒞_i. Here, think of the cop 𝒞_i as having a gun with range λ_i. Hence, if 𝒞_i can reach a vertex that is at distance at most λ_i from the robber's vertex at the end of a cop move, then 𝒞_i can shoot ℛ, and the cops win. Similarly, on a robber move, even if ℛ has speed s, it can move only along a path of length at most s that does not contain any vertex at distance at most λ_i from 𝒞_i. In Cops and Robber, λ_i = 0 for each cop 𝒞_i.
* Visible/invisible robber: If the robber is visible, then the cops know the position of the robber. If the robber is invisible, then the cops do not know the position of the robber. Moreover, we say that cops have d-visibility if cops can see the position of the robber only if it is at most d edges away from at least one of the cops.
Next, we define the variants of CnR to which we will extend our results.
Lazy CnR: Lazy CnR <cit.> is one of the most well-studied variants of CnR <cit.>. In this variant, the cops are lazy, that is, at most one cop can move during a cops' turn. This restricts the ability of the cops with respect to the classical version. The minimum number of lazy cops that can ensure a capture in a graph G is known as the lazy cop number and is denoted by 𝖼_lazy(G). Clearly, 𝖼(G) ≤𝖼_lazy(G), as 𝖼_lazy(G) cops can capture the robber in the classical version (using the winning strategy of the lazy game). We remark that this game is also studied under the name one-cop-moves game <cit.>.
Attacking CnR:
In <cit.>, the robber is able to strike back against the cops. If on a robber's turn, there is a cop in its neighborhood, then the robber can attack the cop and eliminate it from the game. However, if more than one cop occupy a vertex and the robber attacks them, then only one of the cops gets eliminated, and the robber gets captured by one of the other cops on that vertex. The cop number for capturing an attacking robber on a graph G is denoted by 𝖼_attack(G), and is referred to as the attacking cop number of G. Clearly, 𝖼(G) ≤𝖼_attack(G) ≤ 2 ·𝖼(G), as, on the one hand, 𝖼_attack(G) cops can capture the robber in the classical version. On the other hand, if we play the attacking version with 2·𝖼(G) cops using the strategy of the classical variant with the only difference that there are always at least two cops on a vertex, then the cops have a winning strategy.
Fully Active CnR:
In the game of Fully Active CnR <cit.>, each cop, as well as the robber, is active, that is, in a cop/robber move, each cop/robber has to move to an adjacent vertex. The active cop number of a graph G, denoted by 𝖼_active(G), is the minimum number of cops that can ensure capture in this game. It is easy to see that 𝖼_active(G) ≤ 2·𝖼(G), as if we keep one extra cop adjacent to each cop in the winning strategy for Cops and Robber, then whenever some cop has to skip a move, it can simply do so by switching with the extra cop adjacent to it.
Surrounding CnR:
In the game of Surrounding CnR <cit.>, the definition of capture is different. In this game, a cop and the robber can occupy the same vertex of the graph during the game, but the robber cannot end its turn by remaining at a vertex occupied by some cop. The cops win by surrounding the robber, that is, if the robber occupies a vertex v, then there is a cop at each vertex u∈ N(v). The surrounding cop number for a graph G is denoted as 𝖼_surround(G). It is easy to see that 𝖼_surround(G) ≥δ(G), where δ(G) is the minimum degree of the graph.
Fast Robber CnR:
In the game of Fast Robber CnR <cit.>, the robber can move faster than the cops. If ℛ has speed s, then it can move along a path with at most s edges not containing any cop. The minimum number of cops that can ensure the capture of a fast robber with speed s in a graph G is denoted by 𝖼_s(G). For s ≥ 2, deciding whether 𝖼_s(G)≤ k is NP-hard as well as W[2]-hard even when the input graph G is restricted to be a split graph <cit.>. The game of Fast Robber CnR is well-studied <cit.>.
CnR on Directed Graphs:
The game of CnR is also well-studied on oriented/directed graphs <cit.>. The game is played on a directed graph G, and the players can only move along the orientation of the arcs.
Finally, we define a variant of CnR that generalizes many of its well-studied variants:
Generalized CnR:
Consider the following generalized version of CnR. Here, the input is (G,𝒞_1,…,𝒞_k, ℛ), where each cop 𝒞_i has speed s_i (possibly different for each cop) and ℛ has speed s_R. Moreover, each cop can be either forced to be active (all active cops have to move in each turn), lazy (at most one lazy cop moves in each turn), or flexible (a flexible cop can either move or stay on the same vertex in its move). Moreover, the robber can also be forced to be either active or flexible. Furthermore, each cop 𝒞_i can have reach λ_i (possibly different for each cop). This game generalizes several well-studied variants of CnR, including Fully Active CnR, Fast Robber CnR, and Cops and Robber From a Distance <cit.>. It also generalizes the game of <cit.>.
Finally, we note that we assume the notion of “being active” to be defined only when the agent has speed s=1. But, this notion can be defined in multiple ways if the agent has speed s>1: the player might have to move at least s'≤ s edges, the player may have to move to a vertex at a distance at least s' ≤ s from the current vertex, the player may or may not be allowed to repeat edges, and so on. We remark that our kernelization result for Generalized CnR can be made to work, with some changes, considering any of these notions discussed.
§.§ An XP Algorithm for Variants
For graph searching games, there is a standard technique to get an XP-time algorithm with running time n^𝒪(k) (where n is the size of input graph and the question is whether k cops have a winning strategy). This technique involves generating a game graph where each vertex represents a possible placement of all the agents on the vertices of G. Since k cops and a single robber can have n^k+1 possible placements on G, the game graph has n^k+1 vertices. The following step is to mark all of the winning states (that is, where the robber is captured). Afterwards, we use an algorithm to keep adding states to the set of winning states in the following manner. On a cop move, from a given state S, if there exists a movement of cops that can change the game state S to a winning state, we add S to the winning states. On a robber move, for a game state S, if all the possible moves of the robber lead to a winning state, we add S to the winning state. Finally, if there exists a position of k cops such that, for any position of the robber, these states are in winning states, we declare that k cops have a winning strategy in G. It is easy to see that this algorithm can be implemented in n^𝒪(k) time.
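The following Python sketch is a minimal illustration of this standard backward-induction technique, assuming an undirected NetworkX graph; the function name and state encoding are hypothetical, it enumerates all 𝒪(n^k+1) placements explicitly, and it is not the faster implementation discussed next.

```python
from itertools import product
import networkx as nx

def k_cops_win(G, k):
    """Backward induction over all cop/robber placements (exponential in k)."""
    V = list(G.nodes)
    closed = {v: set(G[v]) | {v} for v in V}          # closed neighbourhoods
    states = [(c, r, t) for c in product(V, repeat=k) for r in V for t in ("C", "R")]
    # Base case: the robber shares a vertex with some cop, i.e., it is captured.
    winning = {(c, r, t) for (c, r, t) in states if r in c}
    changed = True
    while changed:
        changed = False
        for c, r, t in states:
            if (c, r, t) in winning:
                continue
            if t == "C":   # cop move: some joint cop move reaches a winning state
                good = any((c2, r, "R") in winning
                           for c2 in product(*(closed[ci] for ci in c)))
            else:          # robber move: every robber move (or staying put) loses for it
                good = all((c, r2, "C") in winning for r2 in closed[r])
            if good:
                winning.add((c, r, t))
                changed = True
    # Cops choose their placement first, the robber answers, and then the cops move first.
    return any(all((c, r, "C") in winning for r in V) for c in product(V, repeat=k))
```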
Petr, Portier, and Versteegan <cit.> gave an implementation of this algorithm, for Cops and Robber, that runs in 𝒪(kn^k+2) time. It is not difficult to see that this algorithm can be made to work for all the variants we discussed by changing the rules used to navigate between game states. For Attacking CnR, the only extra consideration is that if ℛ attacks a cop (among the k cops) and does not get captured in the next cop move, then we have a game state, say, S', with k-1 cops and one robber, where the placement of these agents is a subset of a placement of k+1 agents in one of the original game states, and hence we prune S'. Thus, we have the following proposition.
For any variant of CnR considered in this paper, an instance (G,k) can be solved in 𝒪(kn^k+2) time.
§.§ Parameterized complexity
In the framework of parameterized complexity, each problem instance is associated with a non-negative integer, called a parameter. A parameterized problem Π is fixed-parameter tractable (𝖥𝖯𝖳) if there is an algorithm that, given an instance (I,k) of Π, solves it in time f(k)· |I|^𝒪(1) for some computable function f(·). Central to parameterized complexity is the W-hierarchy of complexity classes:
𝖥𝖯𝖳⊆𝖶[1]⊆𝖶[2]⊆…⊆𝖷𝖯.
Two instances I and I' (possibly of different problems) are equivalent when I is a Yes-instance if and only if I' is a Yes-instance. A compression of a parameterized problem Π_1 into a (possibly non-parameterized) problem Π_2 is a polynomial-time algorithm that maps each instance (I,k) of Π_1 to an equivalent instance I' of Π_2 such that size of I' is bounded by g(k) for some computable function g(·). If g(·) is polynomial, then the problem is said to admit a polynomial compression.
A kernelization algorithm is a compression where Π_1 = Π_2. Here, the output instance is called a kernel.
Let Π_1 and Π_2 be two parameterized problems. A polynomial parameter transformation from Π_1 to Π_2 is a polynomial time algorithm that, given an instance (I,k) of Π_1, generates an equivalent instance (I',k') of Π_2
such that k' ≤ p(k), for some polynomial p(·). It is well-known that if Π_1 does not admit a polynomial compression, then Π_2 does not admit a polynomial compression <cit.>. We refer to the books <cit.> for details on parameterized complexity.
§ BOUNDING THE COP NUMBER
In the following lemma, we give a general upper bound for the cop number, which we use to derive bounds for several graph parameters.
Let G be a graph and let U ⊆ V(G) be a set of vertices such that for each connected component H of G[V(G) ∖ U], 𝖼(H) ≤ℓ. Then, 𝖼(G) ≤⌈|U|/2⌉ +ℓ.
We note that this proof uses techniques used to bound 𝖼(G) in terms of tw(G) by Joret et al. <cit.>. Denote U = {u_1,…, u_q}. Consider isometric paths P_1, …, P_⌈q/2⌉ such that the endpoints of P_i are u_2i-1 and u_2i. Note that these isometric paths always exist as we assume that the graph is connected. Here, P_⌈q/2⌉ might be a single vertex path containing only the vertex u_q.
Now, we guard each path P_i using a single cop (due to Proposition <ref>). These ⌈q/2⌉ cops restrict the robber to one connected component H of G[V(G)∖ U]. Since each of these components is ℓ-copwin, q/2 +ℓ cops have a clear winning strategy in G.
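The pairing step of this strategy is easy to make concrete. Below is a small Python sketch (a hypothetical helper, assuming a connected NetworkX graph) that returns the ⌈q/2⌉ shortest, hence isometric, paths occupied by the guarding cops; the remaining ℓ cops then play inside the component of G[V(G)∖U] that contains the robber.

```python
import networkx as nx

def guarding_paths(G, U):
    """Return the ceil(|U|/2) isometric (shortest) paths used by the guarding cops."""
    U = list(U)
    paths = []
    for i in range(0, len(U), 2):          # pair u_1,u_2 | u_3,u_4 | ...
        if i + 1 < len(U):
            paths.append(nx.shortest_path(G, U[i], U[i + 1]))
        else:
            paths.append([U[i]])           # a leftover vertex yields a one-vertex path
    return paths
```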
We know that the classes of star graphs, complete graphs, chordal graphs, and trees are copwin <cit.>. These bounds, along with Lemma <ref>, implies the following theorem.
Let G be a graph and t =min{𝖼𝗏𝖽, 𝖽𝗍𝗌}. Then, 𝖼(G) ≤t/2+1.
§.§ Bounding Cop Number by 𝗏𝖼𝗇:
Let U be a vertex cover of size t in G and I be the independent set V(G) ∖ U. Lemma <ref> implies that 𝖼(G) ≤⌈t/2⌉ +1. In this section, we improve this bound. First, we provide the following reduction rules.
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| ≥ 3, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N[v] ∩ U| ≥ 3, then place a cop at v and delete N[v].
[RR<ref>]
If there is an isometric path P such that P contains at least three vertices from U, then guard P using one cop and delete V(P) (see Proposition <ref>).
We remark that RR<ref> and RR<ref> can be merged, but we prefer to keep them separate to ease the presentation. Moreover, we note the following.
Whenever a set of vertices X ⊆ V(G) is deleted by an application of rules RR<ref>-RR<ref>, this means that each vertex x ∈ X is guarded by some cop, and hence is not accessible to ℛ. We do not actually delete the vertices; the deletion is only for the sake of analysis. Hence, from the cop player's perspective, the graph remains connected.
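The following Python sketch (hypothetical helper names, assuming a NetworkX graph G and the given vertex cover U) simulates the exhaustive application of the three rules above on a working copy, recording one cop position or one guarded path per application; the third rule is detected via the observation that an isometric path through three cover vertices x, y, z exists exactly when d(x,y)+d(y,z)=d(x,z).

```python
from itertools import permutations
import networkx as nx

def apply_reduction_rules(G, U):
    """Simulate the three rules on a working copy H; return cop placements and what is left."""
    H, U = G.copy(), set(U)
    cops = []                                  # a vertex or a guarded path per placed cop
    while True:
        # First rule: an independent-set vertex with at least three (cover) neighbours.
        v = next((v for v in H if v not in U and H.degree(v) >= 3), None)
        if v is None:
            # Second rule: a cover vertex with >= 3 cover vertices in its closed neighbourhood.
            v = next((v for v in H if v in U and len((set(H[v]) | {v}) & U) >= 3), None)
        if v is not None:
            cops.append(v)
            H.remove_nodes_from(list(H[v]) + [v])
            continue
        # Third rule: an isometric path through three remaining cover vertices x, y, z.
        d = dict(nx.all_pairs_shortest_path_length(H))
        triple = next(((x, y, z) for x, y, z in permutations(U & set(H), 3)
                       if y in d[x] and z in d[y] and d[x][y] + d[y][z] == d[x][z]), None)
        if triple is None:
            return cops, H                     # two additional cops now suffice on what remains
        x, y, z = triple
        path = nx.shortest_path(H, x, y) + nx.shortest_path(H, y, z)[1:]
        cops.append(path)
        H.remove_nodes_from(path)
```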
Second, we have the following lemma concerning the structure of the subgraphs accessible to ℛ after an exhaustive application of rules RR<ref>-RR<ref>.
Let H be a connected component of G where rules RR<ref>-RR<ref> cannot be applied anymore. Then, for every two distinct vertices x,y ∈ V(H) ∩ U, either xy ∈ E(G) or there exists a vertex w ∈ I such that xw ∈ E(G) and yw ∈ E(G).
For contradiction, let us assume that there exist two distinct vertices x,y ∈ V(H) ∩ U such that xy ∉ E(G) and there does not exist a vertex w ∈ I such that xw ∈ E(G) and yw ∈ E(G). Since x and y are part of the connected component H, there exists an x,y-path. Let P be an isometric x,y-path.
Let P = x, v_1, … , v_ℓ, y. Since vertices in I form an independent set and ℓ≥ 2, the vertices v_1, …, v_ℓ cannot be all from I. So, there exists at least one v_i, for i ∈ [ℓ], such that v_i ∈ U. Thus, P contains at least three vertices from U, and P is an isometric path. Therefore, we can apply RR<ref>, and hence, we reach a contradiction.
Next, we argue that, after an exhaustive application of rules RR<ref>-RR<ref>, the cop number of each connected component accessible to is bounded. We have the following lemma.
Once we cannot apply rules RR<ref>-RR<ref> anymore, let the robber be in a connected component H. Then, c(H) ≤ 2.
We present a winning strategy for two cops. If H contains at most two vertices from U, then the cops have a winning strategy by placing a cop on each of these vertices. Hence, we assume there exist at least three vertices in H from U. Let x and y be two distinct vertices of H from U. Then, we place a cop on each of these vertices. Denote the cops by 𝒞_1 and 𝒞_2. We consider the following two cases.
Case 1: If ℛ is on a vertex w ∈ I, then, due to reduction rule RR<ref>, w can have at most two neighbors in U. Let them be u and v. Now, due to Lemma <ref>, the cops can move to vertices such that one of them, say x', dominates the vertex u and the other, say y', dominates the vertex v. See Figure <ref> for reference. So, the cops move to the vertices x' and y'. This restricts ℛ to stay on its current vertex w in I (else it is captured in the next move of the cops). Then, in the next move of the cops, they move to the vertices u and v. Again, this restricts ℛ to stay on the vertex w (else it is captured). Finally, in their next move, the cops capture ℛ.
Case 2: If ℛ is on a vertex u ∈ U, then 𝒞_1 can move to a vertex in I, say x', to attack ℛ (due to Lemma <ref>). This forces ℛ to either move to a vertex w ∈ I or to a vertex z ∈ U. Accordingly, we consider two sub-cases.
* If ℛ moves to a vertex w ∈ I, then note that w can have at most two neighbors in U (due to RR1), and one of them is u (which is attacked by 𝒞_1). Let the other neighbor of w be v. Now, 𝒞_2 can move to a vertex that attacks v (due to Lemma <ref>). This game state is identical to Case 1. Hence, the cops can capture the robber in two rounds.
* If ℛ moves to a vertex z ∈ U, then 𝒞_1 moves to u. This forces ℛ to move to a vertex in I, since z can have at most one neighbor in U (due to RR<ref>), namely u, which is occupied by 𝒞_1, and both cops are in U. This game state is again identical to Case 1, and thus the cops win in at most two more rounds.
This completes our proof.
Finally, we have the following theorem.
*
The correctness of this theorem follows from Lemma <ref> and the fact that each cop used in the reduction rules RR<ref>, RR<ref>, and RR<ref> removes at least three vertices from U. If we can apply these rules t/3 times, then ℛ gets restricted to a vertex in I, and thus one additional cop can capture ℛ. Otherwise, when we apply these rules at most t/3-1 times, we need two additional cops (by Lemma <ref>); that is, we overall need at most t/3 +1 cops to ensure capture.
We note here that a similar technique will fail if we try to “remove” four vertices in each reduction rule. More precisely, if we have the following reduction rules, then we might not get a graph with a bounded (by a constant independent of the input) cop number.
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| ≥ 4, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N[v] ∩ U| ≥ 4, then place a cop at v and delete N[v].
[RR<ref>]
If there is an isometric path P such that P contains at least four vertices from U, then guard P using one cop and delete V(P) (see Proposition <ref>).
We have the following claim.
For every k ∈ℕ, there exists a graph G with a vertex cover U and independent set I = V(G) ∖ U, such that we cannot apply the rules RR<ref>-RR<ref>, and 𝖼(G)>k.
Bonato and Burgess <cit.> proved that for every k, there exists a diameter-2 graph H such that c(H) ≥ k. Let H be a diameter-2 graph such that c(H) ≥ k.
Joret et al. <cit.> showed that subdividing each edge of a graph an equal number of times does not reduce the cop number. So, we subdivide each edge of H to get the graph G such that 𝖼(G) ≥ k. Now, we can put the original vertices in the vertex cover U, and the newly introduced vertices in the independent set I. We cannot apply any of the rules RR<ref> (because each vertex in I has degree exactly 2), RR<ref> (because U is an independent set), and RR<ref> (since any isometric path in G containing more than three vertices of U will contradict the fact that H is a diameter-2 graph).
Hence, G is a graph that satisfies the conditions of our lemma.
§.§ Bounding the Cop Number for Variants
Here we extend the result of Theorem <ref> to several variations of the game. In particular, we prove the following result.
Let G be a graph with a vertex cover U of size t. Then,
* 𝖼_lazy≤t/2 +1.
* 𝖼_attack≤t/2 +1.
Let I be the independent set V(G) ∖ U. First, we note that in Attacking CnR, one cop cannot ensure the guarding of an isometric path <cit.>, and in Lazy CnR, multiple cops, say ℓ cops, cannot ensure guarding ℓ paths simultaneously. (This is evident from the fact that there exists a planar graph G with 𝖼_lazy(G) ≥ 4 <cit.>.) Therefore, reduction rules RR<ref>-RR<ref> will not directly imply an upper bound on the respective cop numbers here. So, we have the following reduction rules:
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| > 1, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N_U[v]| > 1, then place a cop at v and delete N[v].
Observe that after an exhaustive application of reduction rules RR<ref> and RR<ref>, we are left with a collection of stars, each of which has its center vertex in U.
In the case of Lazy CnR, we can easily apply rules RR<ref> and RR<ref>, since cops do not move once placed according to an application of these rules, except when they move to capture ℛ. Finally, ℛ is restricted to a star, and one extra lazy cop can move and capture ℛ.
In the case of Attacking CnR, all cops start at the same vertex. Whenever the cop player wants to station one of the cops at a vertex v according to rules RR<ref> and RR<ref>, all of the cops that are not stationed yet move together to the vertex v (to avoid getting attacked). Note that once a cop is stationed at a vertex u, the cop never moves and hence can never be attacked (because if ℛ wants to attack a cop at vertex v, it has to reach a vertex in N(v) in the previous round, and then the cop at v can move and capture ℛ). Once we cannot apply rules RR<ref> and RR<ref> anymore, ℛ is restricted to a star. At this point, if there are at least two unstationed cops, then these two cops can move to capture ℛ. Else, let v be the last vertex where we stationed a cop. Since at this point we have stationed all but one cop (t/2 cops stationed), observe that for each vertex x∈ U, there is a cop in N[x], and therefore, ℛ is restricted to one vertex, say, u, of I. Now, ℛ can only attack a cop if the cop is at a vertex in N(u) (and N(u)⊆ U). Finally, the only unstationed cop, say 𝒞, moves to a vertex in N(u) in a finite number of steps (at this point ℛ cannot attack 𝒞 without getting captured, as 𝒞 is on a vertex in U), and captures ℛ in the next round.
The bound on the cop numbers follow from the fact that in each reduction rule, we remove at least two vertices from U and place only one cop.
We have the following straightforward observation concerning the bounds on the cop number for the remaining variants.
Let t be the 𝗏𝖼𝗇 of a graph G. Then, 𝖼_active(G) ≤ t, 𝖼_surround(G) ≤ t, 𝖼_s(G) ≤ t (for any value of s), and for a strongly connected orientation G of G, 𝖼(G) ≤ t.
We remark that the cop number of an oriented graph G⃗ (with underlying graph G) that is not strongly connected can be arbitrarily larger than the 𝗏𝖼𝗇 of G. To see this, consider a vertex cover U of size t in G. Next, we add ℓ vertices v_1, …, v_ℓ such that each vertex v_i, for i ∈ [ℓ], has only outgoing arcs to vertices in U. Now, if we do not place a cop on some v_j, for j ∈ [ℓ], then ℛ can start at v_j and the cops can never capture it. Hence, 𝖼(G⃗) ≥ℓ.
The proof of Theorem <ref> directly follows from Lemma <ref> and Observation <ref>.
§ KERNELIZATION ALGORITHMS
In this section, we provide kernelization algorithms for Cops and Robber and its variants.
§.§ Exponential Kernel for Cops and Robber by 𝗏𝖼𝗇:
Let G be a graph where a vertex cover U of size t is given. If no such vertex cover is given, then we can compute a vertex cover U of size t≤ 2·𝗏𝖼(G) using a polynomial-time approximation algorithm <cit.>. Then, the vertices in V(G) ∖ U form an independent set I of size n-t. Recall that the question is whether G is k-copwin.
Our kernelization algorithm is based on the exhaustive application of the following reduction rules.
[RR<ref>]
If k ≥t/3+1, then answer positively.
[RR<ref>]
If k = 1, then apply an 𝒪(n^3) time algorithm (Proposition <ref>) to check whether G is copwin.
[RR<ref>]
If there are two distinct vertices u,v ∈ I such that N(u) ⊆ N(v), then delete u.
The safeness of rule RR<ref> follows from Theorem <ref>. For the safeness of rule RR<ref>, we have the following lemma. We note that Lemma <ref> can also be derived from <cit.>, but we give a self-contained proof for the sake of completeness.
Let u and v be two distinct vertices of G such that N(u) ⊆ N(v). Consider the subgraph H of G induced by V(G)∖{u}. Let k≥ 2. Then, G is k-copwin if and only if H is k-copwin.
First, we show that if G is k-copwin, then H is k-copwin. For the graph H, the k cops borrow the winning strategy that they have for G, with the only difference that whenever a cop has to move to the vertex u in G, it moves to v (in H) instead. Since N(u) ⊆ N(v), the cop can make the next move as it does in the winning cop strategy for G. Note that using this strategy, the cops can capture ℛ if ℛ is restricted to V(H) in G. Therefore, using this strategy, k cops will capture ℛ in H as well.
Second, we show that if H is k-copwin, then G is is k-copwin. Here, for each vertex x ≠ u of G, we define I(x) = x, and for u, we define I(u)= v. Observe that for each x ∈ V(G), I(x) is restricted to H and if xy ∈ E(G), then I(x)I(y) ∈ E(H). Therefore, every valid move of a player from a vertex x to y in G can be translated to a valid move from I(x) to I(y) in H. Now, the cops have the following strategy. If the robber is on a vertex x, the cops consider the image of the robber on the vertex I(x). Since the robber's image is restricted to H, the cops can use the winning strategy for H to capture the image of the robber in G. Once the image is captured, if the robber is not on the vertex u, then the robber is also captured. Otherwise, the robber is on the vertex u, and at least one cop is on v. See Figure <ref> for an illustration. So, one cop, say 𝒞_1, stays on v and this prevents the robber from ever leaving u. Indeed this follows because N(u) ⊆ N(v), and so, if ever leaves u, it will be captured by 𝒞_1 in the next cop move. Finally, since k>1, some other cop, say 𝒞_2, can use a finite number of moves to reach u and capture the robber.
This completes our proof.
Note that the requirement for k≥ 2 in Lemma <ref> is crucial. It might so happen that we can get an H such that c(H)=1, but 𝖼(G)>1. To see this, consider the example of C_4, where any two diagonal (i.e., non-adjacent) vertices satisfy the property in Rule RR9, and if we remove one of them, the cop number reduces from 2 to 1. However, this does not harm our algorithm because if we are given k= 1, then RR<ref> is applied (before RR<ref>).
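A compact Python sketch of this kernelization (hypothetical names, assuming a NetworkX graph; a 2-approximate vertex cover is computed when none is given) is shown below; it answers directly when one of the first two rules fires and otherwise keeps only independent-set vertices with pairwise incomparable neighbourhoods, as in the twin-removal rule RR9.

```python
import networkx as nx
from networkx.algorithms.approximation import min_weighted_vertex_cover

def kernelize(G, k):
    U = min_weighted_vertex_cover(G)              # 2-approximation of a minimum vertex cover
    if k >= len(U) / 3 + 1:
        return "yes"                              # c(G) <= vcn/3 + 1 <= |U|/3 + 1
    if k == 1:
        return "run the cubic-time copwin test"   # exact check, omitted in this sketch
    H = G.copy()
    I = [v for v in G if v not in U]
    nbr = {v: set(G[v]) for v in I}               # neighbourhoods of I-vertices lie in U
    for u in I:                                   # drop u whenever N(u) is contained in N(v)
        for v in I:
            if v != u and v in H and nbr[u] <= nbr[v]:
                H.remove_node(u)
                break
    return H                                      # at most |U| + 2^|U|/sqrt(|U|) vertices remain
```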
Two sets A and B are incomparable if neither A⊆ B nor B⊆ A. We shall use the following proposition that follows from Sperner's Theorem and Stirling's approximation.
Let X be a set of cardinality N. Moreover, let Y be a set of subsets of X such that for each a,b ∈ Y, a and b are incomparable. Then, |Y| ≤2^N/√(N).
Once we cannot apply RR<ref>-RR<ref> anymore, we claim that the size of the reduced graph G' is bounded by a function of t. Let U' = U ∩ V(G') and I' = I ∩ V(G'). Clearly, |U'| ≤ t. Now, each vertex u ∈ I' is associated with a neighborhood N(u) such that N(u) ⊆ U'. Moreover, for any two vertices u,v ∈ I', the sets N(u) and N(v) are incomparable. Hence, due to Proposition <ref>, |I'| ≤2^t/√(t), and therefore, |V(G')| ≤ t+2^t/√(t), which proves the following theorem.
*
Now, we can apply the XP-time algorithm (Proposition <ref>) for on our kernel. Since k ≤t/3, the running time we get is exponential only in t and polynomial in n. Specifically, the running time of the algorithm is t·(t+2^t/√(t))^t/3+2· n^𝒪(1). Moreover, if a vertex cover U of size t = 𝗏𝖼(G) is not given, then we can compute one in time 1.2738^t· n^𝒪(1) <cit.>. Thus, we have the following corollary.
*
§.§ Exponential Kernel for Cops and Robber by 𝖼𝗏𝖽:
To get a kernel for Cops and Robber parameterized by 𝖼𝗏𝖽, we employ techniques similar to the ones we used to get a kernel parameterized by 𝗏𝖼𝗇. Let U be a cluster vertex deletion set of size t. Let S = V(G)∖ U, and let C_1, …, C_ℓ be the disjoint cliques that form the graph G[S]. Since 𝖼(G)≤t/2+1 (Theorem <ref>), we have the following reduction rule.
[RR<ref>]
If k ≥t/2+1, then report Yes-instance.
Next, we have the following lemma.
Let u and v be vertices of some clique C of G[S]. If N_U(u) ⊆ N_U(v), then 𝖼(G) = 𝖼(G∖{u}).
First, we observe that, since u and v are part of the same clique C, N[u] ⊆ N[v]. Then, the proof of this lemma follows from the proof of Lemma <ref>. We remark that this proof also follows from the idea of retracts used in the literature <cit.>. Additionally, we remark that, here, 𝖼(G) need not necessarily be greater than 1. To see this, consider the situation when ℛ is at u and a cop, say 𝒞_1, is at v. Now, ℛ cannot move to a vertex in U, since N_U(u) ⊆ N_U(v), and cannot stay on a vertex in C, since v is part of C. Thus, ℛ gets captured in the next move by 𝒞_1.
Hence, we can apply the following reduction rule, whose safeness was proved by Lemma <ref>.
[RR<ref>]
Let u and v be vertices of some clique C ∈ G[S] such that N[u] ⊆ N[v]. Then, delete u.
Once we cannot apply reduction rule RR<ref> anymore, the size of each clique in G[S] is at most 2^t/√(t) (due to Proposition <ref>).
Similarly to Lemma <ref>, we have the following lemma.
Let C_i and C_j be two cliques in G[S] such that for each vertex u ∈ V(C_i), there exists a vertex v ∈ V(C_j) such that N_U(u) ⊆ N_U(v). Then, k >1 cops have a winning strategy in G if and only if they have a winning strategy in G[V(G) ∖ V(C_i)].
The proof idea here is similar to the proof idea of Lemma <ref>. Let H= G[V(G) ∖ V(C_i)]. Here, we will just prove that if k cops have a winning strategy in H, then k cops have a winning strategy in G. (The proof of the reverse direction is rather easy to see, combining arguments from Lemma <ref> and the arguments we present in the rest of this proof).
Let k≥ 2 cops have a winning strategy in H. Similarly to Lemma <ref>, for each vertex x ∈ V(G) ∖ V(C_i), we define I(x) = x, and for each vertex u∈ V(C_i), we have a vertex v∈ V(C_j) such that N_U(u) ⊆ N_U(v), and we define I(u)=v. (Note that there might be multiple choices for v here. We can choose any such vertex.)
Observe that for each vertex x∈ V(G), I(x) is restricted to H. Moreover, if xy ∈ E(G), then I(x)I(y) ∈ E(H) for the following reasons. If x,y ∈ V(G) ∖ V(C_i), then it is obvious. Else, if x,y ∈ V(C_i), then observe that I(x) and I(y) are part of some clique C_j, and N_U(x) ⊆ N_U(I(x)) and N_U(y) ⊆ N_U(I(y)). Hence, in this case, if xy ∈ E(G), then I(x)I(y) ∈ E(H). Finally, assume without loss of generality that x∈ V(C_i) and y∈ V(G)∖ V(C_i). In this case, xy∈ E(G) only if y∈ U. Since N_U(x) ⊆ N_U(I(x)), I(x)I(y) ∈ E(H). Thus, if xy ∈ E(G), then I(x)I(y) ∈ E(H). Therefore, every valid move of a player from a vertex x to a vertex y in G can be translated to a move from I(x) to I(y) in H.
Now, cops play their winning strategy in H with the following consideration: When the robber is at a vertex x in G, the cops consider the image of the robber at vertex I(x) in G. Since the robber's image is restricted to the vertices of H, the cops can use a winning strategy from H to capture the image of the robber in G. Once the image is captured, if the robber is at a vertex x ∉ V(C_i), then the robber is also captured. Otherwise, the robber is at a vertex x∈ V(C_i), and one of the cops is at vertex I(x) in C_j. Now, observe that the robber cannot immediately move to a vertex in U. Anyhow, the robber can move to some other vertex y ∈ V(C_i), and in this case, the cop at vertex I(x) can move to vertex I(y) ∈ V(C_j). This way, the cop occupying the robber's image can prevent the robber from ever leaving C_i. Since k≥ 2, some other cop can move to capture the robber in C_i (as cliques are copwin). This completes our proof.
Thus, we can apply the following reduction rule, whose safeness was proved by Lemma <ref>.
[RR<ref>]
Let C_i and C_j be two cliques in G[S] such that for each vertex u ∈ V(C_i), there exists a vertex v ∈ V(C_j) such that N_U(u) ⊆ N_U(v). Then, delete V(C_i).
Finally, we use the following lemma to bound the size of the desired kernel from Theorem <ref>.
After an exhaustive application of RR<ref>-RR<ref>, the size of the reduced graph is at most 2^2^t + √(t).
Once we cannot apply the reduction rules RR<ref> and RR<ref>, due to Proposition <ref>, each clique can have at most 2^t/√(t) vertices. Moreover, the total number of cliques possible is at most 2^2^t/√(t)/√(2^t/√(t)) (due to Proposition <ref>). Thus, the total number of vertices in the reduced graph is at most 2^2^t + √(t).
Since k ≤t/2+1 (by Reduction Rule RR<ref>), applying the XP-algorithm for from Proposition <ref> to the kernel in Theorem <ref> gives us the following corollary.
Cops and Robber is 𝖥𝖯𝖳 parameterized by 𝖼𝗏𝖽. Specifically, it is solvable in (𝖼𝗏𝖽+2^2^𝖼𝗏𝖽 + √(𝖼𝗏𝖽))^𝖼𝗏𝖽/2+2· n^𝒪(1) time.
§.§ Exponential Kernel for Cops and Robber by 𝖽𝗍𝗌
Using the ideas we presented in Section <ref>, we can also get a kernel for Cops and Robber with respect to the deletion to stars number. Let U be a deletion to stars vertex set of size t. Also, let S = V(G) ∖ U, and let X_1, …, X_ℓ be the stars in the graph G[S]. Specifically, we have the following reduction rules along with reduction rule RR<ref>.
[RR<ref>]
Let u and v be two leaf vertices of some star X in G[S] such that N_U(u) ⊆ N_U(v). Then, delete u.
[RR<ref>]
Let X and Y be two stars in G[S] such that V(X) = x, x_1, …, x_p and V(Y) = y, y_1, …, y_q, where x and y are center vertices of X and Y, respectively. If N_U(x)⊆ N_U(y) and for each vertex x_i (for i∈ [p]), there is a vertex y_j (for j∈ [q]) such that N_U(x_i) ⊆ N_U(y_j), then delete X.
The safeness of RR<ref> follows from Theorem <ref>. We have the following lemma, which establishes that reduction rules RR<ref> and RR<ref> are safe.
Assuming k>1, reduction rules RR<ref> and RR<ref> are safe.
To prove that rule RR<ref> is safe, it suffices to observe that for leaf vertices u and v of some star X ∈ S, if N_U(u) ⊆ N_U(v), then N(u) ⊆ N(v) in G. Indeed, the rest of the proof follows directly from the proof of Lemma <ref>.
Next, we give a proof idea for the safeness of rule RR<ref>. Here, we just define the function of the image of the robber, and the rest of the proof is similar to the proofs of Lemmas <ref> and <ref>. For each vertex u ∉ V(X), I(u) = u. For each x_i, I(x_i) = y_j such that N_U(x_i)⊆ N_U(y_j) (there might be multiple choices for y_j and we can choose any one of them), and I(x) = y.
Now, we claim that once we cannot apply rules RR<ref> and RR<ref> anymore, the size of the graph is bounded by a function of t. First, we note that the size of each star is at most 2^t/√(t)+1 (by Proposition <ref>). Let X and Y be two stars in G[S] such that x and y are the center vertices of X and Y, respectively. We say that X and Y have the same neighbourhood type if N_U(x) = N_U(y). Second, it is easy to see that there can be at most 2^t neighbourhood types. Next, we bound the number of stars in each neighbourhood type. Let S_1, …, S_z be the stars having the same neighbourhood type, and let v_i be the center vertex of star S_i. For each star S_i, for i∈ [z], let 𝒮_i = {N(v): v∈ V(S_i)∖{v_i}}. Since we have applied reduction rule RR<ref> exhaustively, we know that for each A∈𝒮_i, A=N(v) for a unique vertex v∈ V(S_i)∖{v_i}. Observe that each S_i is a subset of the power set of U and the power set of U has size 2^t. Moreover, since we have applied reduction rule RR<ref> exhaustively, we know that for any i,j ∈ [z], neither 𝒮_i ⊆𝒮_j nor 𝒮_j ⊆𝒮_i. Hence, due to Proposition <ref>, z ≤2^2^t/√(2^t).
Therefore, the size of the reduced graph can be at most 2^2^t/√(2^t)· 2^t · (2^t/√(t) +1). Thus, we have the desired kernel from Theorem <ref>.
Since k ≤t/2+1 (by reduction rule RR<ref>), applying the XP-algorithm for from Proposition <ref> to the kernel in Theorem <ref> gives us the following corollary.
Cops and Robber is 𝖥𝖯𝖳 parameterized by 𝖽𝗍𝗌. Specifically, it is solvable in (𝖽𝗍𝗌+2^2^𝖽𝗍𝗌 + 𝖽𝗍𝗌^1.5)^𝖽𝗍𝗌/2+2· n^𝒪(1) time.
§.§ Exponential Kernels for Different Variants
Here, we extend the result of Theorem <ref> to several variations of the game. We have the following results.
§.§.§ Lazy CnR and Attacking CnR:
First, we prove the following lemma.
Let u and v be two distinct vertices of G such that N(u) ⊆ N(v). Consider the graph H induced by V(G)∖{u}. Then for k>1 and for x∈{lazy,attack}, 𝖼_x(G) ≤ k if and only if 𝖼_x(H) ≤ k.
The proof of the forward direction (c_x(G)≤ k implies c_x(H)≤ k) is easy and follows from arguments similar to those in the proof of Lemma <ref>. We prove the reverse direction (c_x(H)≤ k implies c_x(G) ≤ k) for both variants below. Moreover, similarly to the proof of Lemma <ref>, we define I(u) = v and I(x) = x when x≠ u. When ℛ is at a vertex x, we say that the image of ℛ is at vertex I(x). (Note that the image of ℛ is restricted to H.) In both variants, the cops play in G to capture the image of the robber using the winning strategy for H.
In Lazy CnR, the cops begin by capturing the image of ℛ in G. If ℛ is at a vertex x ≠ u, then ℛ is captured. If ℛ is at vertex u, then observe that there is a cop, say 𝒞, at v that has captured the image of ℛ. Now, 𝒞 ensures that ℛ cannot move, and some other lazy cop can move to capture ℛ in a finite number of rounds.
In Attacking CnR, the main observation is that if the cops can capture ℛ in H, then they can capture the image of ℛ in G without getting attacked by ℛ. If ℛ is at a vertex x≠ u when the image of ℛ is captured, then ℛ is captured. Otherwise, ℛ is at u, and a cop, say 𝒞_1, is at vertex v. Now another cop, say 𝒞_2, can move to a vertex w ∈ N(u) ⊆ N(v) (in a finite number of steps) to capture ℛ. If ℛ attacks 𝒞_2 at this point, then note that 𝒞_1 can move to capture ℛ in the next round. If ℛ does not attack, then 𝒞_2 moves to capture ℛ in the next round.
Lemma <ref> establishes that reduction rule RR<ref> is safe for both Lazy CnR and Attacking CnR. Before applying reduction rule RR<ref>, we apply the following reduction rules.
[RR<ref>]
If k ≥t/2 +1, then answer positively (Theorem <ref>).
[RR<ref>]
If k=1, then apply the 𝒪(n^3) time algorithm from Proposition <ref>.
The size of the kernel obtained by these reduction rules is determined by RR9. Therefore, the existence of the desired kernel from Theorem <ref> follows directly.
Moreover, Theorem <ref>, along with the XP-algorithms from Proposition <ref> for these variants, gives the following immediate corollary.
Lazy CnR and Attacking CnR are 𝖥𝖯𝖳 parameterized by 𝗏𝖼𝗇. Specifically, they are solvable in (𝗏𝖼𝗇+ 2^𝗏𝖼𝗇/√(𝗏𝖼𝗇))^𝗏𝖼𝗇/2+2· n^𝒪(1) time.
§.§.§ CnR on Directed Graphs:
Next, we consider the game of Cops and Robber on oriented graphs. For a directed graph G and a vertex v∈ V(G), let N^+(v) and N^-(v) denote the set of out-neighbors and in-neighbors of v, respectively. We have the following lemma.
Let u and v be two distinct vertices of a strongly connected directed graph G such that N^+(u) ⊆ N^+(v) and N^-(u) ⊆ N^-(v). Let H be the graph induced by V(G)∖{u}. Then, for k>1, k cops have a winning strategy in H if and only if k cops have a winning strategy in G.
First, observe that H is also strongly connected.
Second, let k cops have a winning strategy in G. Then, the cops can use this winning strategy in H, with the only difference that whenever a cop, say 𝒞, has to move to u in G, 𝒞 moves to v in H instead (𝒞 can do so because N^-(u) ⊆ N^-(v)). Next, whenever 𝒞 has to move from u to a vertex, say w, in the strategy for G, 𝒞 can also move to w from v (since N^+(u) ⊆ N^+(v)). As ℛ is restricted to V(H), the cops will capture ℛ using this strategy in H as well.
Finally, let k cops have a winning strategy in H. We use this strategy to get a winning strategy in G using k cops. First, we define I(x) = x for x≠ u and I(u) = v. Since I(x) is restricted to H, we use the winning strategy in H to capture I(x). At this point, if ℛ is at a vertex x ≠ u, then ℛ is captured. Else, ℛ is at u and one of the cops, say 𝒞_1, is at v. Since N^+(u) ⊆ N^+(v), ℛ cannot move as long as 𝒞_1 occupies v. Since G is strongly connected, one of the other cops, say 𝒞_2, can move to u in a finite number of rounds to capture ℛ.
Let G be a graph with a vertex cover U of size t, and let I = V(G)∖ U. Let G⃗ be a strongly connected orientation of G. We apply the following reduction rules.
[RR<ref>]
If k≥ t, then answer positively.
[RR<ref>]
If k=1, then apply the 𝒪(n^3) time algorithm from Proposition <ref> to check whether G⃗ is copwin.
[RR<ref>]
If u and v are two distinct vertices in I such that N^+(u) ⊆ N^+(v) and N^-(u) ⊆ N^-(v), then delete u.
The safeness of reduction rules RR<ref> and RR<ref> follows from Theorem <ref> and Lemma <ref>, respectively. Now, we argue that once we cannot apply reduction rules RR<ref>-RR<ref>, the size of G⃗ is bounded by a function of t. Observe that each vertex u in I has a unique pair of neighbourhoods (N^+(u), N^-(u)), and there are three choices for a vertex v ∈ U to appear in the neighbourhood of a vertex u ∈ I: either v ∈ N^+(u), or v ∈ N^-(u), or v ∉ N^+(u)∪ N^-(u). Therefore, the total number of possible vertices in I is at most 3^t. Thus, applying reduction rules RR<ref>-RR<ref>, we get the desired kernel from Theorem <ref>.
Theorem <ref>, along with rule RR21 and Proposition <ref>, gives the following corollary.
Cops and Robber on strongly connected directed graphs is 𝖥𝖯𝖳 parameterized by the vertex cover number t. In particular, it is solvable in (3^t+t)^t+1· n^𝒪(1) time.
§.§ General Kernelization
In this section, we provide a general reduction rule that works for most variants of CnR parameterized by the vertex cover number.
Let U be a vertex cover of size t in G and I be the independent set V(G) ∖ U. For each subset S⊆ U, we define the following equivalence class: 𝒞_S = { v ∈ I : N(v) = S}.
Given an instance ((G,k),t), we have the following reduction rule.
[RR<ref>]
If there is an equivalence class 𝒞_S such that |𝒞_S| >k+1, then keep only k+1 arbitrary vertices from 𝒞_S in G, and delete the rest.
First, we present (informal) intuition for why reduction rule RR22 is safe. Since the neighbourhood of each vertex in 𝒞_S is the same, all of these vertices are equivalent with respect to the movement rules in any of the variants discussed. We keep k+1 copies of such vertices because then, on a robber move, there is at least one of them that is not occupied by any cop. We refer to such a vertex as a free vertex. Note that there might be multiple free vertices. On the robber player's turn, if ℛ plans to move to a vertex in 𝒞_S, it can move to a free vertex. Moreover, if a fast robber wants to use a vertex from 𝒞_S as an intermediate vertex, it can use a free vertex for this purpose as well. We prove safeness for the individual variants later in this section.
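A minimal Python sketch of this rule (hypothetical helper, assuming a NetworkX graph G with a given vertex cover U) groups the independent-set vertices by their neighbourhood and truncates every class to at most k+1 representatives.

```python
import networkx as nx

def truncate_classes(G, U, k):
    H = G.copy()
    classes = {}
    for v in G:
        if v not in U:
            classes.setdefault(frozenset(G[v]), []).append(v)   # class C_S with S = N(v)
    for members in classes.values():
        H.remove_nodes_from(members[k + 1:])    # keep k + 1 arbitrary vertices of each class
    return H                                    # at most |U| + |U| * 2^|U| vertices when k < |U|
```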
Moreover, we have the following lemma that we will use later.
Let G be a graph with a vertex cover U of size t. After an exhaustive application of reduction rules RR<ref> and RR<ref>, the reduced graph has at most t+ t·2^t vertices.
There can be at most 2^t equivalence classes, and for each equivalence class, we keep at most k+1 vertices in I. Due to rule RR19, we can assume k<t. Thus, size of I is at most t· 2^t. The size of G is, therefore, at most |U| + |I| ≤ t+ t·2^t.
§.§.§ Generalized CnR
In this section, we establish that RR<ref> is safe for Generalized CnR. We have the following lemma to prove this claim.
Let G be a graph with a vertex cover U of size t. Let 𝒞_S (for S ⊆ U) be an equivalence class such that |𝒞_S| = ℓ >k+1. Moreover, let H be the subgraph formed by deleting an arbitrary vertex v of 𝒞_S from G. Then, (G,𝒞_1,…,𝒞_k, ℛ) is a Yes-instance if and only if (H,𝒞_1,…,𝒞_k, ℛ) is a Yes-instance.
Let 𝒞_S = {v_1, …, v_ℓ}. Without loss of generality, let us assume that vertices v_1,… v_ℓ-1 belong to the graph H, and v = v_ℓ. Since there are at most k cops in the game and ℓ >k+1, at least one vertex of v_1,…, v_ℓ-1 is not occupied by any cop. We denote this vertex by x (x is dependent on the position of the cops and may change during the course of the game). Moreover, here we modify the definition of a safe vertex slightly: A vertex y is safe if it is at a distance at least λ_i+1 from _i, for i∈ [k]. Since each vertex in 𝒞_S has the same neighborhood, observe that either each vertex in 𝒞_S not occupied by a cop is a safe vertex or none of the vertices in 𝒞_S is safe. Moreover, for each vertex y∈ V(G)∖{v}, let I(y) = y and I(v) = x. Note that for each vertex u, I(u) is restricted to vertices of V(H), N(u)= N(I(u)), and if u is a safe vertex, then I(u) is also a safe vertex. To ease the presentation, instead of saying that the cops/robber has a winning strategy in (G,_1,…, _k,) (or (G,_1,…, _k,)), we will say that the cops/robber has a winning strategy in G (or H).
First, suppose that ℛ has a winning strategy 𝒮 in G. To show that ℛ has a winning strategy in H, we prove a slightly stronger statement: ℛ has a winning strategy, say 𝒮', in G even if ℛ is restricted to the vertices of V(H) in G. We get 𝒮' from 𝒮 as follows: if ℛ has to use a vertex y in 𝒮 in some move during the game, it uses I(y) instead. We first show that ℛ can safely enter the graph. Let y be the vertex ℛ enters in the strategy 𝒮. Then, ℛ enters at I(y) in 𝒮'. Since y is a safe vertex (as 𝒮 is a winning strategy for ℛ), I(y) is also a safe vertex. Hence, ℛ can safely enter a vertex. Now, the only thing to argue is that if ℛ can move safely from a vertex y to a vertex z in G, then it can safely move from the vertex I(y) to I(z) in G. Let ℛ move from y to z using a path P_1= (y=y_1,…,y_r=z), where r∈ [s_R], during some move in 𝒮. Notice that since 𝒮 is a winning strategy, each vertex y_i (i∈ [r]) is a safe vertex, and hence, each vertex I(y_i) is also a safe vertex. Moreover, since N(y_i) = N(I(y_i)), W=(I(y_1), …, I(y_r)) is a walk with at most r vertices between I(y) and I(z). (It might not be a path, since the vertex x may repeat in this walk.) Since the existence of a walk between two vertices implies the existence of a path between these vertices using a subset of the walk vertices, we have an I(y),I(z)-path of length at most r using (safe) vertices from {I(y_1),…,I(y_r)}. Hence, ℛ can safely move from I(y) to I(z). Thus, 𝒮' is a winning strategy for ℛ even when ℛ is restricted to the vertices of V(H) in G.
In the reverse direction, suppose that ℛ has a winning strategy in H. Then, we show that ℛ has a winning strategy in G as well. Here, whenever a cop 𝒞_i moves to a vertex y, ℛ assumes its image to be at the vertex I(y). Observe that I(y) is restricted to V(H) in G. Let y_1,…,y_k be the vertices occupied by the cops at some point in the game. Let F be the set of vertices in V(H) that are safe during this turn. Moreover, let F' be the set of vertices in V(H) that are safe if the cops occupy the vertices I(y_1), …, I(y_k). Then, we have the following claim.
F'⊆ F.
Targeting a contradiction, assume that y∈ F' but y∉ F. Then, there exists some i∈ [k] such that d(y, y_i) ≤λ_i but d(y,I(y_i))>λ_i. If y_i ≠ v, then this is not possible since, for y_i≠ v, I(y_i)=y_i. Hence, we can assume that y_i = v and I(y_i) = x. Since N(v) = N(x), for each vertex y in V(G)∖{v} (and y∈ F' ⊆ V(H)), d(y,x) ≤ d(y,v), that is, d(y,I(y_i))≤ d(y,y_i), a contradiction.
We note that it might not be true that F⊆ F', as it might so happen that F contains the vertex x, but F' does not.
Due to Claim <ref>, it is sufficient to show that if ℛ has a winning strategy in H, considering the image of cop 𝒞_i as a cop with the capabilities of 𝒞_i, then ℛ has a winning strategy in G. To this end, ℛ can use its winning strategy from H, since the images of the cops are restricted to V(H). Thus, ℛ has a winning strategy in G.
Finally, note that, in both directions of the proof, ℛ moves in H (respectively, in G) if and only if ℛ moves in G (respectively, in H). Hence, if ℛ is active/flexible in the original strategy, then ℛ is active/flexible in the designed strategy. This completes the proof.
Observe that 𝗏𝖼𝗇+1 cops always have a winning strategy in G. Therefore, we have the following theorem as a consequence of Lemma <ref> and Lemma <ref>.
*
Theorem <ref> directly implies the existence of the desired kernel for Fully Active CnR and Fast Robber CnR from Theorem <ref>. The existence of the desired kernel for Surrounding CnR from Theorem <ref> follows from Lemma <ref> and the following lemma, which proves the safeness of RR<ref> for Surrounding CnR.
Let G be a graph with a vertex cover U of size t. Let 𝒞_S (for S ⊆ U) be an equivalence class such that |𝒞_S| = ℓ >k+1. For a subgraph H formed by deleting ℓ - k - 1 arbitrary vertices of 𝒞_S from G, 𝖼_surround(H) ≤ k if and only if 𝖼_surround(G) ≤ k.
Let 𝒞_S = {v_1, …, v_ℓ}. Without loss of generality, let us assume that the vertices v_1,…, v_k+1 belong to the graph H and the vertices v_k+2, …, v_ℓ are deleted. We begin by noting that ℛ cannot be surrounded at a vertex in S in G (since each vertex in S has at least k+1 neighbours). Therefore, throughout the proof, we have the implicit assumption that when ℛ is surrounded, it is not on a vertex in S.
Let k cops have a winning strategy in G. Then, to surround ℛ, the cops use this strategy with the following changes in H. Whenever a cop has to move to a vertex in {v_k+2, …, v_ℓ}, it moves to the vertex v_1 instead. Since all vertices in 𝒞_S have the same neighbourhood, the next move of this cop can be the same as it was in (the winning strategy of) G. Note that using this strategy, the cops can surround the robber in G if ℛ is restricted to V(H) in G, and the moves of the cops are also restricted to V(H) in G. Therefore, the cops can surround ℛ using this strategy in H as well.
Now, let k cops have a winning strategy in H. We use this strategy to surround the robber in G, in the following manner. Since we have only k cops, at any time in the gameplay, there is at least one vertex in {v_1, …, v_k+1} that is not occupied by any cop. Let us call this vertex a free vertex (there might be multiple free vertices). Again we use the concept of the image of the robber.
For each vertex x∈ V(G), if x∈ V(H), then we define I(x) = x; else, if x∈{ v_k+2, …, v_ℓ}, then we define I(x) = y, where y is a free vertex at that instance. Whenever the robber moves to a vertex x ∈ V(G), we say that the image of the robber moves to I(x). Moreover, we recall that, in this game, although some cop and the robber can be at the same vertex, the robber cannot end its move at the same vertex as one of the cops. The cops use this capability to force the robber to move from a vertex. Therefore, we also have to argue that whenever the cops force the robber to move, they force the image of the robber to move as well. To this end, observe that the image of the robber and the robber are on different vertices only if the robber is on some vertex x∈{ v_k+2, …, v_ℓ} and the image of the robber is on a free vertex, say, y. Notice that if, in the strategy for H, the robber was occupying y and the cop player wants to force it to move out of y, then it does so by moving a cop from a vertex w∈ N(y) to y. The cop player adapts this strategy in G by moving this cop from w to x instead of from w to y. This move is possible because N(x)= N(y). Thus, the robber, as well as its image, is forced to move exactly as it would have been forced to move in the winning strategy of k cops in H.
Hence, the image of the robber is restricted to V(H) in G and has to follow the rules of movement of the robber.
Thus, the cops will finally surround the image of the robber in G. At this point, if the robber is on a vertex v ∈{v_k+2, …, v_ℓ}, note that its image is on a vertex u ∈{v_1, …, v_k+1}. Observe that here, if the image is surrounded, then there is a cop on each vertex in S, and thus, the robber is surrounded as well. If the robber was on a vertex in V(H)∖ S when its image was surrounded, then the image and the robber are at the same vertex, and thus, the robber is surrounded as well.
This finishes the proof of Theorem <ref>. The following corollary is a direct consequence of Theorem <ref>, Theorem <ref>, Theorem <ref>, and Proposition <ref>.
The variants studied above, as well as Generalized CnR, are FPT parameterized by 𝗏𝖼𝗇. Specifically, each of these variants is solvable in (𝗏𝖼𝗇· 2^𝗏𝖼𝗇+𝗏𝖼𝗇)^𝗏𝖼𝗇+1· n^𝒪(1) time.
§ POLYNOMIAL KERNELS FOR
In this section, we provide a linear kernel for parameterized by the neighbourhood diversity (𝗇𝖽) of the input graph. One of the key benefits of 𝗇𝖽 as a parameter is that it is computable in polynomial time <cit.>. More specifically, in polynomial time, we can compute a minimum partition of V(G) into classes V_1,…, V_w such that each V_i contains vertices of the same type. Hence, a linear kernel parameterized by 𝗇𝖽 can be very useful from an applicative perspective.
Since for any two vertices u,v∈ V_i, for i∈ [w], N(v) ∖{u} = N(v)∖{ u}, we have that either each V_i is an independent set (N(v) = N(u) in this case) or each V_i is a clique (N[v] = N[u] in this case). Now, we use the following reduction rules.
[RR<ref>]
If k≥ w, then answer positively.
We have the following lemma to prove that RR<ref> is safe.
For a graph G, 𝖼(G) ≤𝗇𝖽.
Let S be a set of vertices such that S contains exactly one vertex, say v_i, from each neighbourhood class V_i. Then, (since we assume G to be connected) observe that S is a dominating set of G. Hence, the cops have a trivial winning strategy by placing a cop on each vertex of S (and |S| ≤ w). Therefore, 𝖼(G) ≤𝗇𝖽.
Next, if k=1, then we apply RR<ref> (the XP-algorithm for from Proposition <ref>). Hence, we assume that k≥ 2. Next, we have the following reduction rule.
[RR<ref>]
For each neighbourhood class V_i, keep one arbitrary vertex and delete the rest.
We have the following lemma to prove that RR<ref> is safe.
Let V_i= {v_1,…,v_ℓ} be a neighbourhood class of G having at least two vertices (ℓ≥ 2). Consider the subgraph H of G induced by V(G)∖{v_ℓ}. Then, for k>1, G is k-copwin if and only if H is k-copwin.
We have the following two cases depending on whether V_i is an independent set or a clique.
* V_i is an independent set: Note that, in this case, N(v_ℓ) = N(v_1). Therefore, due to Lemma <ref>, we have that G is k-copwin if and only if H is k-copwin.
* V_i is a clique: Note that in this case, N[v_ℓ] = N[v_1]. The proof in this case (specifically the forward direction) follows from arguments presented in the proof of Lemma <ref>. For the reverse direction, here for x≠ v_ℓ, I(x) = x and I(v_ℓ) = v_1. Now, note that every possible move of the robber in G can be mapped to a valid move of the image of the robber in H, just like in the proof of Lemma <ref>. The only difference here is that when the robber is at v_ℓ (its image is at v_1), it can move to v_1 as well (along with vertices in N(v_1)). Notice that this move can be translated to a move of the image of the robber in H where the image chooses to stay on the same vertex. Hence, the cops will first capture the image of the robber in H, and then capture the robber in G.
This completes the proof of this lemma.
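To make the reduction rules above concrete, here is a small Python sketch (our own illustration; it assumes the networkx library, and the function names nd_classes and nd_kernel_cnr are ours). It computes the type classes V_1,…, V_w in polynomial time and then applies the two rules: answer positively if k≥ w, and otherwise, for k>1, keep one arbitrary vertex per class (for k=1 one falls back on the XP-algorithm, as discussed above).

from collections import defaultdict
import networkx as nx

def nd_classes(G):
    # Partition V(G) into type classes: u and v have the same type iff
    # N(u) \ {v} = N(v) \ {u}, i.e. they are false twins (same open
    # neighbourhood) or true twins (same closed neighbourhood).
    parent = {v: v for v in G.nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    groups = defaultdict(list)
    for v in G.nodes:
        groups[("open", frozenset(G[v]))].append(v)           # false twins
        groups[("closed", frozenset(G[v]) | {v})].append(v)   # true twins
    for members in groups.values():
        for u in members[1:]:
            parent[find(u)] = find(members[0])
    classes = defaultdict(list)
    for v in G.nodes:
        classes[find(v)].append(v)
    return list(classes.values())

def nd_kernel_cnr(G, k):
    classes = nd_classes(G)
    if k >= len(classes):          # first rule: k cops, one per class, dominate G
        return "YES", None
    if k == 1:                     # single cop: use the XP-algorithm instead
        return "XP", G
    keep = [C[0] for C in classes] # second rule: keep one vertex per class (safe for k > 1)
    return "KERNEL", G.subgraph(keep).copy()   # kernel with at most nd(G) vertices

For k>1 the returned subgraph has at most 𝗇𝖽 vertices, which is exactly the kernel size claimed below.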
Since we keep only one vertex of each type and there are at most w types, we have the following theorem.
*
We have the following corollary as a consequence of Theorem <ref>.
is parameterized by 𝗇𝖽. Specifically, it is solvable in 𝗇𝖽^𝗇𝖽· n^𝒪(1) time.
Moreover, it is not difficult to see that this kernelization can be extended to and using an extension of Lemma <ref>, giving us a kernel with at most 𝗇𝖽 vertices. Moreover, using a reduction rule similar to RR<ref> where we keep k vertices of each type, we can have a kernel with at most k·𝗇𝖽 vertices for and .
We have the following lemma, for which we provide a proof outline.
Let V_i = {v_1,…,v_ℓ} be a neighbourhood class of G containing at least k+2 vertices (ℓ≥ k+2). Consider the subgraph H of G induced by V(G)∖{v_ℓ}. Then, for k>1, s≥ 1, and x∈{active, s}, 𝖼_x(G) ≤ k if and only if 𝖼_x(H) ≤ k.
Similar to the proof of Lemma <ref>, we have the following two cases depending on whether V_i is an independent set or a clique.
* V_i is an independent set: Proof of this case follows from the proof of Lemma <ref>.
* V_i is a clique: Here, for each v_j ∈ V_i, N[v_j] = N[v_ℓ]. For each vertex x ≠ v_ℓ, let I(x) = x and I(v_ℓ) = v_1.
First, let 𝖼_x(G) ≤ k. Then, we use the strategy of the cops from G to capture the robber in H, with the only change that whenever a cop wants to move to a vertex x in G, it moves to I(x) in H instead, with the only contingency that if this cop wants to move from v_1 to v_ℓ, then it moves to v_2 (so that if the cops are active, then this is indeed a valid move in H). Observe that the cops can capture the robber in G using this strategy even when the cops are restricted to the vertices of H. Hence, the cops can capture the robber using this strategy in H.
In the reverse direction, let 𝖼_x(H) ≤ k. Note that if k active cops have a winning strategy against a flexible robber in G, then k active cops have a winning strategy against an active robber in G as well. Hence, for ease of argument, we show that k active cops have a winning strategy in G even if the robber is flexible, in order to show that 𝖼_active(G) ≤ k. The cops assume that the image of the robber occupies the vertex I(x) when the robber occupies the vertex x. Thus, we have an image of the robber moving in H with the same capabilities as the robber. The cops will capture this image using their winning strategy from H. Notice that once the image is captured, if the robber is at a vertex x ≠ v_ℓ, then the robber is captured as well. Otherwise, the robber is at v_ℓ and there is some cop at v_1. In the case x = s, the robber will be captured in the next move of the cops (since v_1v_ℓ∈ E(G)). In the case x = active, if this is a cop move (that is, the image of the robber was captured on a robber move), then this cop will capture the robber in the next move. Otherwise, in the previous move of the cops, this cop moved to v_1 while the robber was at v_ℓ. In this case, since N[v_1] = N[v_ℓ], the cop could have moved to v_ℓ to capture the robber itself. Hence, 𝖼_x(G) ≤ k.
This completes the proof.
Since 𝖼(G) ≤𝗇𝖽 for all of these variants (as there is a dominating set of size 𝗇𝖽 in G), we have the following result as a consequence of Lemma <ref> (and arguments presented above).
and parameterized by 𝗇𝖽 admit a kernel with at most 𝗇𝖽 vertices. Moreover, and parameterized by 𝗇𝖽 admit a kernel with at most 𝗇𝖽^2 vertices.
Finally, we remark that this technique of kernelization will not work directly for . For example, consider a complete graph on n vertices, for which 𝗇𝖽 = 1 (all the vertices have the same type) and 𝖼_surround = n, and if we remove any vertex from this clique, 𝖼_surround decreases. Moreover, as evident from our example of complete graphs, 𝖼_surround cannot be bounded by any computable function that depends only on 𝗇𝖽.
§ INCOMPRESSIBILITY
§.§ Incompressibility of
In this section, we show that it is unlikely that the problem parameterized by 𝗏𝖼𝗇 admits a polynomial compression. For this purpose, we first define the following problem. In the Red-Blue Dominating Set problem, we are given a bipartite graph G with a vertex bipartition V(G) = T ∪ N and a non-negative integer k. A set of vertices N'⊆ N is said to be an RBDS if each vertex in T has a neighbour in N'. The aim is to decide whether there exists an RBDS of size at most k in G.
Input: A bipartite graph G with vertex bipartition V(G) = T ∪ N, and a non-negative integer k. Question: Does G have an RBDS of size at most k?
Dom, Lokshtanov, and Saurabh <cit.> proved that it is unlikely for Red-Blue Dominating Set parameterized by |T|+k to admit a polynomial compression. More precisely, they proved the following result.
Red-Blue Dominating Set parameterized by |T|+k does not admit a polynomial compression, unless NP ⊆ coNP/poly.
We show that CnR parameterized by 𝗏𝖼𝗇 does not admit a polynomial compression by developing a polynomial parameter transformation from Red-Blue Dominating Set parameterized by |T|+k to CnR parameterized by 𝗏𝖼𝗇.
§.§.§ Bipartite Graphs with Large Degree and Girth
For our reduction, we borrow a construction by Fomin et al. <cit.> of bipartite graphs having high girth and high minimum degree, which they used to prove NP-hardness (and W[2]-hardness for the solution size k) of CnR.
For positive integers p,q, and r, we can construct a bipartite graph H(p,q,r) with rqp^2 edges and a bipartition (X,Y), with |X| = |Y| = pq. The set X is partitioned into sets U_1, …, U_p, and the set Y is partitioned into sets W_1, … W_p, with |U_i| = |W_i| = q. By H_i,j we denote the subgraph of H(p,q,r) induced by U_i ∪ W_j, and by 𝖽𝖾𝗀_i,j(z) we denote the degree of vertex z in H_i,j. Fomin et al. <cit.> provided the following construction:
Let q ≥ 2p(r+1) (p(r+1)-1)^6-1/(p(r+1)-1)^2-1. Then, we can construct H(p,q,r) in time 𝒪(r· q · p^2) with the following properties.
* The girth of H(p,q,r) is at least 6.
* For every vertex z ∈ V(H_i,j) and every i,j ∈ [p], we have r-1 ≤𝖽𝖾𝗀_i,j(z) ≤ r+1.
§.§.§ Polynomial Parameter Transformation
Suppose that we are given an instance (G,k) with V(G) = T ∪ N of the problem. First, we construct a graph G' with V(G') = T' ∪ N' from G by introducing two new vertices, x and y, such that T' = T ∪{x} and N' = N ∪{y}, and E(G') = E(G) ∪{xy }. We have the following observation.
G has an RBDS of size at most k if and only if G' has an RBDS of size at most k+1. Moreover, any RBDS of G' contains y.
Now, we present the main construction for our reduction. Denote the vertex set V(T') by {v_1, v_2, …, v_p', x}. Moreover, let p = p'+1, ℓ=k+1, r = ℓ+2, and q = ⌈ 2p(r+1) (p(r+1)-1)^6-1/(p(r+1)-1)^2-1⌉.
We construct H(p,q,r) such that each of U_i and W_i, for 0 < i ≤ p', contains q copies of vertex v_i, and each of U_p and W_p contains q copies of vertex x. Now, we obtain a graph G” by adding one more set of vertices P to H(p,q,r) such that V(P) = V(N'). Moreover, if there is an edge between a vertex u ∈ N' and a vertex v_i ∈ T', then we add an edge between u and every vertex of U_i, and also between u and every vertex of W_i. Similarly, we add an edge between y and every vertex of U_p, and between y and every vertex of W_p. Finally, we make the vertex y adjacent to every vertex of P. See Figure <ref> for reference.
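For concreteness, the construction of G” can be sketched as follows (a Python illustration of ours, assuming networkx; construct_H is a placeholder for the explicit construction of H(p,q,r) from the proposition above, which we do not reproduce, and all function and vertex names are ours).

import networkx as nx

def construct_H(p, q, r):
    # Placeholder: build the bipartite graph H(p, q, r) with girth >= 6 and
    # block degrees in [r-1, r+1]; it should return the graph together with
    # the blocks U = [U_1, ..., U_p] and W = [W_1, ..., W_p], each of size q.
    raise NotImplementedError

def rbds_to_cnr(G, T, k):
    # G: bipartite RBDS instance with bipartition (T, N); returns (G'', l) with l = k + 1.
    N_side = set(G.nodes) - set(T)
    Gp = G.copy()                                  # G' = G plus the new edge xy
    Gp.add_edge("x*", "y*")
    Tp = list(T) + ["x*"]                          # T' (x* plays the role of x, i.e. v_p)
    Np = list(N_side) + ["y*"]                     # N'
    l, p = k + 1, len(Tp)
    r = l + 2
    a = p * (r + 1) - 1
    q = 2 * p * (r + 1) * (a**4 + a**2 + 1)        # = ceil(2p(r+1)(a^6-1)/(a^2-1)), the fraction being an integer
    H, U, W = construct_H(p, q, r)
    Gpp = H.copy()
    Gpp.add_nodes_from(("P", u) for u in Np)       # the copy P of N'
    for u in Np:
        for i, v in enumerate(Tp):
            if Gp.has_edge(u, v):                  # u ~ v_i in G'  =>  u ~ every vertex of U_i and W_i
                Gpp.add_edges_from((("P", u), h) for h in U[i])
                Gpp.add_edges_from((("P", u), h) for h in W[i])
    for u in Np:
        if u != "y*":
            Gpp.add_edge(("P", "y*"), ("P", u))    # y dominates P
    return Gpp, l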
For correctness, we have the following lemma.
G' has an RBDS of size at most ℓ if and only if G” is ℓ-copwin.
First, we show that if G' has an RBDS of size ℓ, then ℓ cops have a winning strategy in G”. Let S⊆ N' be an RBDS in G' of size at most ℓ. The cops begin by choosing the vertices corresponding to S in P. Observe that the vertex y has to be present in S. Since vertex y dominates each vertex in P, the robber cannot safely enter a vertex in P. Additionally, due to the construction of G”, the vertices of S dominate each vertex in H. Hence, the robber cannot safely enter a vertex in H. Therefore, the robber will be captured in the first move of the cops.
Next, we show that if there is no RBDS of size ℓ in G', then ℓ cops do not have a winning strategy. We prove this by giving a winning strategy for the robber. First, we show that the robber can safely enter the graph. In the beginning, let there be ℓ_1 ≤ℓ cops in P and ℓ_2 ≤ℓ cops in H. Since there is no RBDS of size ℓ in G', for every placement of at most ℓ cops in P, there exists at least one pair of U_i and W_i such that no vertex of U_i and W_i is dominated by the cops from P. Let U_i and W_i be one such pair of sets such that no vertex of U_i and W_i is dominated by the cops from P. Moreover, since each vertex of H can dominate at most p(r+1) vertices in H, ℓ_2 cops can dominate at most ℓ· p(r+1) vertices. Since U_i (and W_i also) contains q vertices, and q> ℓ· p(r+1), the ℓ_2 cops in H cannot dominate all vertices of U_i, and hence the robber can safely enter a vertex of U_i.
Now, whenever the robber is under attack, it does the following. Without loss of generality, let us assume that the robber is in U_i (the case of W_i is symmetric). Since there are at most ℓ cops in P, there is always a W_j such that no vertex of W_j is dominated by cops from P. Since each vertex in U_i has at least r-1 = ℓ+1 neighbours in W_j, the robber can move to at least ℓ+1 vertices of W_j. Since the girth of H is at least 6, no vertex from H can dominate two vertices of W_j that are adjacent to the robber; else, we get a cycle on four vertices. Hence, at most ℓ cops from H can dominate at most ℓ neighbours of the robber in W_j, while the robber has at least ℓ+1 neighbours in W_j. Hence, the robber can move to a safe vertex in W_j. Since the graph H is symmetric, the robber can similarly move safely from a vertex of some W_j' to a safe vertex of some U_i'. The robber follows this strategy to avoid capture forever.
This completes the proof of our lemma.
Next, we have the following observation to show that there exists a vertex cover U of G” such that |U| = poly(|T|,k).
V(H) ∪{y} is a vertex cover of G”. Therefore, the vertex cover number of G” is at most 2· p · q+1 = 1+ 2p·⌈ 2p(k+3) (p(k+3)-1)^6-1/(p(k+3)-1)^2-1⌉, where p = |T|+1.
This completes the proof of the argument that CnR parameterized by 𝗏𝖼𝗇 is unlikely to admit a polynomial compression. Thus, we have the following theorem as a consequence of Lemma <ref>, Observation <ref>, and Proposition <ref>.
*
We prove the incompressibility of the variants (Theorem <ref>) in the Appendix.
§.§ Incompressibility for Variants
In this section, we prove Theorem <ref>. In Theorem <ref>, we proved that it is unlikely for CnR to admit a polynomial compression. For this purpose, we constructed a graph G” where k cops have a winning strategy if and only if the graph G' has an RBDS of size at most k. If G' has an RBDS of size k, then there is a dominating set of size k in G”. Else, there is no winning strategy for k cops in G”. Here, we use the same construction to show that the variants we study (except for the surrounding variant) are unlikely to admit a polynomial compression parameterized by 𝗏𝖼𝗇. We establish this by proving that G” is k-copwin for these variants if and only if G' has an RBDS of size at most k.
As discussed earlier, for a graph G, 𝖼(G) ≤𝖼_lazy(G), 𝖼(G) ≤𝖼_attacking(G), and 𝖼(G) ≤𝖼_s(G) (for any s≥ 1). Therefore, if G does not have an RBDS of size at most k, then 𝖼(G)>k, and hence, 𝖼_lazy(G) >k, 𝖼_attacking(G) > k, and 𝖼_s(G) > k (for k>0). To see the reverse direction, observe that in each of these three variants, if the cops start by occupying a dominating set, then they win in the next round. Hence, this establishes that it is unlikely for the lazy, attacking, and speed-s variants parameterized by 𝗏𝖼𝗇 to admit a polynomial compression.
Similarly, for the variant with an active robber, it is also true that if the cops start by occupying a dominating set, then they win in the next round. Hence, we only have to show that if there is no RBDS of size k in G' (and hence, no dominating set of size k in G”), then k cops do not have a winning strategy in G” for this variant. The robber uses the following strategy. When it is under attack, it follows the robber strategy described in the proof above. Now, the robber is forced to move (because it is active) even when it is on a safe vertex. Note that the robber always stays in H(p,q,r). Due to symmetry, let us assume it is at some vertex v in some block U_i. In this case, the robber can simply move to a vertex in W_i. Observe here that since the vertices in U_i are safe, the vertices in W_i are also safe.
Thus, we have the following lemma to establish that these variants are unlikely to admit a polynomial compression.
, , , and parameterized by the vertex cover number do not admit a polynomial compression, unless .
This result can also be extended to directed (or oriented) graphs. We have the following lemma.
on strongly connected directed and oriented graphs parameterized by vertex cover number does not admit a polynomial compression, unless .
For the case of directed graphs, we can simply replace each edge in the construction with a loop edge (directed cycle on two vertices).
To prove this result for oriented graphs, we do the following. Here, we change the underlying graph G”. First, instead of having two partitions U and W, we have three partitions U, W, and X (with the same rules). See Figure <ref> for an illustration. Second, we add edges between U and W, W and X, and X and U following the rules of the construction. Moreover, the edge rules for vertices in P are the same (that is, if a vertex has edges with each vertex in some U_i, it has edges with each vertex in W_i and X_i as well). Next, we define orientations. For the vertex y, we orient all the edges as outgoing. For every vertex u∈ P ∖{y}, we mark all the edges as outgoing, except for the edge uy (which is oriented yu). For each edge uw such that u∈ U and w ∈ W, orient the edge uw. For every edge wx such that w∈ W and x∈ X, orient the edge wx. For each edge xu such that x∈ X and u∈ U, orient the edge xu. Finally, add an extra vertex z, and add the arc zy. Moreover, for each vertex v ∈ U_p ∪ W_p ∪ X_p, add an arc vz.
It is straightforward to see that G” is a strongly connected oriented graph. Moreover, if G” has a dominating set of size k, then k cops have a winning strategy by occupying these vertices in G”. Observe that, at this point, the robber can enter only at vertex z and cannot move as long as there is a cop at y (which there is, due to the construction of G' and G”). Now, since G” is strongly connected, some other cop can move to capture the robber in a finite number of rounds. For the reverse direction, if G' does not have an RBDS of size k (and hence G” does not have a dominating set of size k), then, following the arguments of Lemma <ref>, the robber can enter at a safe vertex in U. Then, whenever the robber is under attack, it can move to a safe vertex in W. Similarly, it can move from W to X and from X to U when under attack. Moreover, note that vertex z does not attack any vertex in U∪ W∪ X. Hence, the robber has a clear evading strategy.
This completes our proof.
Lemma <ref> and Lemma <ref> directly imply Theorem <ref>.
§ CONCLUSION AND FUTURE DIRECTIONS
In this paper, we conducted a comprehensive analysis of the parameterized complexity of CnR parameterized by 𝗏𝖼𝗇.
First, we showed that the cop number of a graph is upper bounded by 𝗏𝖼𝗇/3+1. Second, we proved that CnR parameterized by 𝗏𝖼𝗇 is FPT by designing an exponential kernel. We complemented this result by proving that it is unlikely for CnR parameterized by 𝗏𝖼𝗇 to admit a polynomial compression. We then extended these results to other variants as well as to other parameters.
To achieve our kernelization results, the rules we used concerned removing (false or true) twins from the graph. These rules are easy to implement and hence can be used to reduce the complexity of the input graph, even when the input graph is far from the considered parameters. For example, for cographs, none of the considered parameters is constant/bounded, but cographs can be reduced to a single vertex with the operation of removing twins, and hence, our reduction rules give an alternate proof that the cop number of cographs is at most two <cit.> for several variants. Moreover, MTP is well-studied with the motivation of designing computer games. Some examples of these variants include: multiple targets and multiple pursuer search <cit.> with applications in controlling non-player characters in video games; from the robber's perspective with faster cops <cit.> where the strategies were tested on Baldur's Gate; modeled with edge weights and different speeds of agents <cit.> with the specific examples of Company of Heroes and Supreme Commander. Moreover, the PACMAN game's movement can be considered as an instance of on a partial grid. One of the key aspects of designing these games is to come up with scenarios that are solvable but look complex and challenging. Our reduction rule can help in this regard. One can begin with an easy-to-resolve instance of , and then keep adding twins to this instance (recursively) to get an instance that looks sufficiently complex but has the same complexity.
Finally, we defined a new variant of CnR, named Generalized CnR, that generalizes many well-studied variants of CnR, including Cops and Robber From a Distance <cit.>, and also generalizes the games of <cit.>. We showed that RR<ref> provides a kernel for Generalized CnR as well. This gives hope that RR<ref> can be used to get kernels for many practical variants not explicitly studied in this paper.
Still, many questions on the parameterized complexity of remain open. We list some of these questions below.
Does there exist an algorithm for CnR parameterized by 𝗏𝖼𝗇 with running time 2^𝒪(𝗏𝖼𝗇)· n^𝒪(1)?
Does there exist a better bound for the cop number with respect to 𝗏𝖼𝗇? In particular, is 𝖼(G) = o(𝗏𝖼𝗇)?
Does CnR parameterized by 𝗏𝖼𝗇 admit a polynomial α-approximate kernel?
Study CnR with respect to the following parameters: (1) feedback vertex set, (2) treewidth, (3) treedepth. In particular, is CnR FPT parameterized by treewidth?
|
http://arxiv.org/abs/2307.04779v1 | 20230710075009 | Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference | [
"Arnaud Descours",
"Tom Huix",
"Arnaud Guillin",
"Manon Michel",
"Éric Moulines",
"Boris Nectoux"
] | stat.ML | [
"stat.ML",
"math.PR",
"math.ST",
"stat.TH"
] |
We provide a rigorous analysis of training by variational inference
(VI) of Bayesian neural networks in the two-layer and infinite-width
case. We consider a regression problem with a regularized evidence
lower bound (ELBO) which is decomposed into the expected
log-likelihood of the data and the Kullback-Leibler (KL) divergence
between the a priori distribution and the variational
posterior. With an appropriate weighting of the KL, we prove a law
of large numbers for three different training schemes: (i) the
idealized case with exact estimation of a multiple Gaussian integral
from the reparametrization trick, (ii) a minibatch scheme using
Monte Carlo sampling, commonly known as Bayes by Backprop,
and (iii) a new and computationally cheaper algorithm which we
introduce as Minimal VI. An important result is that all
methods converge to the same mean-field limit. Finally, we
illustrate our results numerically and discuss the need for the
derivation of a central limit theorem.
Bayesian neural networks, variational inference, mean-field, law of large numbers, infinite-width neural networks.
§ INTRODUCTION
Deep Learning has led to a revolution in machine learning with
impressive successes. However, some limitations of DL have been
identified and, despite, many attempts, our understanding of DL is
still limited. A long-standing problem is the assessment of predictive
uncertainty: DL tends to be overconfident in its predictions
<cit.>, which is a problem in applications such as
autonomous driving
<cit.>, medical
diagnosis <cit.>, or
finance; cf
<cit.>. Therefore,
on the one hand, analytical efforts are being made to thoroughly
investigate the performance of DL; and on the other hand, many
approaches have been proposed to alleviate its shortcomings. The
Bayesian paradigm is an attractive way to tackle predictive
uncertainty, as it provides a framework for training uncertainty-aware
neural networks (NNs) (e.g.
<cit.>).
Thanks to a fully probabilistic approach, Bayesian Neural Networks
(BNN) combine the impressive neural-network expressivity with the
decision-theoretic approach of Bayesian inference, making them capable
of providing predictive uncertainty; see
<cit.>.
However, Bayesian inference requires deriving the posterior
distribution of the NN weights. This posterior distribution is
typically not tractable. A classical approach is to sample the
posterior distribution using Markov chain Monte Carlo methods (such as
Hamilton-Monte-Carlo methods). There are however long-standing
difficulties, such as the proper choice of the prior and fine-tuning
of the sampler. Such difficulties often become prohibitive in
large-dimensional cases <cit.>. An alternative is to
use variational inference, which has a long history
<cit.>. Simpler
methods that do not require exact computation of integrals over the
variational posterior were then developed, e.g. first by
<cit.> thanks to some approximation and then by
<cit.> with the Bayes by Backprop
approach. In the latter, the posterior distribution is approximated by
a parametric distribution and a generalisation of the
reparametrization trick used by <cit.> leads to an unbiased
estimator of the gradient of the ELBO; see also
<cit.>. Despite
the successful application of this approach, little is known about the
overparameterized limit and appropriate weighting that must be assumed
to obtain a nontrivial Bayesian posterior, see
<cit.>. Recently, <cit.> outlined the
importance of balancing in ELBO the integrated log-likelihood term and
the KL regularizer, to avoid both overfitting and dominance of the
prior. However, a suitable limiting theory has yet to be established,
as well as guarantees for the practical implementation of the
stochastic gradient descent (SGD) used to estimate the parameters of
the variational distribution.
Motivated by the need to provide a solid theoretical framework,
asymptotic analysis of NN has gained much interest recently. The main focus
has been on the gradient descent algorithm and its variants
<cit.>. In
much of these works, a mean-field analysis is performed to
characterize the limiting nonlinear evolution of the weights of a
two-layer NN, allowing the derivation of a law of large numbers and a
central limit theorem for the empirical distribution of neuron
weights. A long-term
goal of these works is to demonstrate convergence toward a global
minimum of these limits for the mean field. Despite some progress in
this direction, this is still an open and highly challenging problem;
cf <cit.>. Nevertheless, this
asymptotic analysis is also of interest in its own right, as we show
here in the case of variational inference for Bayesian neural
networks. Indeed, based on this asymptotic analysis, we develop an
efficient and new variant of the stochastic gradient descent (SGD)
algorithm for variational inference in BNN that computes only the
information necessary to recover the limit behavior.
Our goal, then, is to work at the intersection of analytical efforts
to gain theoretical guarantees and insights and of practical methods
for a workable variational inference procedure. By adapting the
framework developed by <cit.>, we produce a rigorous
asymptotic analysis of BNN trained in a variational setting for a
regression task. From the limit equation analysis, we first
find that a proper regularisation of the Kullback-Leibler divergence
term in relation with the integrated loss leads to their right
asymptotic balance. Second, we prove the asymptotic equivalence of
the idealized and Bayes-by-Backprop SGD schemes, as both preserve
the same core contributions to the limit. Finally, we introduce a
computationally more favourable scheme, directly stemming from the
effective asymptotic contributions. This scheme is the true
mean-field algorithmic approach, as it derives only from the
non-interacting terms.
More specifically, our contributions are the following:
* We first focus on the idealized SGD algorithm, where the
variational expectations of the derivative of the loss from the
reparametrization trick of <cit.> are computed
exactly. More precisely, we prove that with the number of neurons
N→ +∞, the sequence of trajectories of the scaled empirical
distributions of the parameters satisfies a law of large
numbers. This is the purpose of Theorem <ref>. The proof
is completely new: it establishes directly the limit in the topology
inherited by the Wasserstein distance bypassing the highly technical
Sobolev space arguments used in <cit.>.
The idealized SGD requires the computation of some integrals, which in
practice prevents a direct application of this algorithm. However, we
can prove its convergence to an explicit nonlinear process. These
integrals are usually obtained by a Monte Carlo approximation, leading to the
Bayes-by-Backprop SGD, see <cit.>.
* We show for the Bayes-by-Backprop SGD (see Theorem
<ref>) that the sequence of trajectories of the scaled
empirical distributions of the parameters satisfies the same law of
large numbers as that in Theorem <ref>, which justifies
such an approximation procedure. Note that each step of the
algorithm involves the simulation of O(N) Gaussian random
variables, which can make the associated gradient evaluation
prohibitively expensive.
* A careful analysis of the structure of the limit equation
(<ref>) allows us to develop a new algorithm, called
Minimal-VI SGD, which at each step generates only two
Gaussian random variables and for which we prove the same limiting
behavior. The key idea here is to keep only those contributions which
affect the asymptotic behavior and which can be understood as the
mean-field approximation from the uncorrelated degrees of
freedom. This is all the more interesting since
we observe numerically that the number weights N required to reach
this asymptotic limit is quite small which makes this variant of
immediate practical interest.
* We numerically investigate the convergence of the three methods
to the common limit behavior on a toy example. We observe that the
mean-field method is effective for a small number of neurons
(N=300). The differences between the methods are
reflected in the variances.
The paper is organized as follows: Section <ref>
introduces the variational inference in BNN, as well as the SGD
schemes commonly considered, namely the idealized and
Bayes-by-backprop variants. Then, in Section <ref> we
establish our initial result, the LLN for the idealized SGD. In
Section <ref> we prove the LLN for the
Bayes-by-backprop SGD and its variants. We show that both SGD
schemes have the same limit behavior. Based on an analysis of the
obtained limit equation, we present in Section <ref> the
new minimal- VI. Finally, in Section <ref> we
illustrate our findings using numerical experiments. The proofs of the
mean-field limits, which are original and quite technically demanding,
are gathered in the supplementary paper.
Related works.
Law of Large Numbers (LLN) for mean-field interacting particle
systems, have attracted a lot of attentions; see for
example <cit.> and references therein. The use of mean-field
particle systems to analyse two-layer neural networks with random
initialization have been considered in <cit.>, which
establish a LLN on the empirical measure of the weights at fixed times
- we consider in this paper the trajectory convergence, i.e., the whole (time-indexed) empirical measure process converges w.r.t. the Skorohod topology. This enables not only the use of the limiting PDE, for example to study the convergence of the weights towards the infimum of the loss function (see <cit.> for preliminary results), but it is also crucial to establish the central limit theorem, see for example <cit.>. <cit.> give conditions for global convergence of
GD for exact mean-square loss and online stochastic gradient descent
(SGD) with mini-batches increasing in size with the number of weights
N. A LLN for the entire trajectory of the empirical measure is also
given in <cit.> for a standard SGD.
<cit.> establish the propagation of chaos for SGD with
different step size schemes. Compared to the existing literature
dealing with the SGD empirical risk minimization in two-layer neural
networks, <cit.> provide the first rigorous proof of
the existence of the limit PDE, and in particular its uniqueness, in
the LLN.
We are interested here in deriving a LLN but for Variational Inference
(VI) of two-layer Bayesian Neural Networks (BNN), where we consider a
regularized version of the Evidence Lower Bound (ELBO).
§ VARIATIONAL INFERENCE IN BNN: NOTATIONS AND COMMON SGD
SCHEMES
§.§ Variational inference and Evidence Lower Bound
Setting. Let 𝖷 and 𝖸 be subsets of 𝐑^n (n≥ 1) and 𝐑 respectively.
For N≥1 and w=(w^1,…,w^N)∈(𝐑^d)^N, let f_w^N: 𝖷→𝐑 be the following two-layer neural network: for x∈𝖷,
f_w^N(x):=1/N∑_i=1^Ns(w^i,x)∈𝐑,
where s:𝐑^d×𝖷→𝐑 is the activation function.
We work in a Bayesian setting, in which we seek a distribution of the latent variable w which represents the weights of the neural network. The standard problem in Bayesian inference over complex models is that the posterior distribution is hard to sample. To tackle this problem, we consider Variational Inference, in which we consider a family of distributions 𝒬^N={ q_θ^N, θ∈Ξ^N} (where Ξ is some parameter space) that is easy to sample. The objective is to find the best q_θ^N∈𝒬^N, the one closest in KL divergence (denoted 𝒟_ KL) to the exact posterior. Because we cannot compute the KL, we optimize the evidence lower bound (ELBO), which is equivalent to the KL up to an additive constant.
Denoting by 𝔏: 𝐑×𝐑→𝐑_+ the negative log-likelihood (by an abuse of language, we call this quantity the loss), the ELBO (see <cit.>) is defined, for ∈Ξ^N, (x,y)∈𝖷×𝖸, by
E_ lbo(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x))q_θ^N(w)w - 𝒟_ KL(q_^N|P_0^N),
where P_0^N is some prior on the weights of the NN. The ELBO is
decomposed into two terms: one corresponding to the Kullback-Leibler
(KL) divergence between the variational density and the prior and the
other to a marginal likelihood term. It was empirically found that the
maximization of the ELBO function is prone to yield very poor
inferences <cit.>. It is argued in <cit.> and
<cit.> that optimizing the ELBO leads as N →∞ to the
collapse of the variational posterior to the prior. <cit.>
proposed to consider a regularized version of the ELBO, which consists
in multiplying the KL term by a parameter which is scaled by the
inverse of the number of neurons:
E_ lbo^N(θ,x,y) :=- ∫_(𝐑^d)^N𝔏(y,f_w^N(x))q_θ^N(w)w -1/N𝒟_ KL(q_^N|P_0^N),
A first objective of this paper is to show
that the proposed regularization leads to a stable asymptotic behavior
and that the effects of the integrated loss and of the Kullback-Leibler term on the
limiting behavior are balanced in the limit N →∞.
The maximization of E_ lbo^N is carried out using SGD.
The variational family 𝒬^N we consider is a Gaussian family of distributions. More precisely, we assume that for any =(θ^1,…,θ^N)∈Ξ^N, the variational distribution q_^N factorizes over the neurons: for all w=(w^1,…,w^N)∈(𝐑^d)^N, q_^N(w)=∏_i=1^Nq^1_θ^i(w^i), where
θ=(m,ρ)∈Ξ:=𝐑^d×𝐑 and q^1_θ is the probability density function (pdf) of 𝒩(m,g(ρ)^2 I_d), with g(ρ)=log(1+e^ρ), ρ∈𝐑.
In the following, we simply write 𝐑^d+1 for 𝐑^d×𝐑.
In addition, following the reparameterisation trick of <cit.>, q^1_θ(w) w is the pushforward of a reference probability measure with density γ by Ψ_θ (see more precisely Assumption A1).
In practice, γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z. With these notations, (<ref>) writes
E_ lbo^N(θ,x,y) =- ∫_(𝐑^d)^N𝔏(y,1/N∑_i=1^Ns(Ψ_θ^i(z^i),x)) γ(z^1)…γ(z^N) z_1… z_N -1/N𝒟_ KL(q_^N|P_0^N).
Loss function and prior distribution.
In this work, we focus on the regression problem, i.e.
𝔏 is the Mean Square Loss: for y_1,y_2∈𝐑, 𝔏(y_1,y_2)=1/2|y_1-y_2|^2.
We also introduce the function ϕ:(θ,z,x)∈𝐑^d+1×𝐑^d×𝖷↦ s(Ψ_θ(z),x). On the other hand, we assume that the prior distribution P_0^N write, for all w∈(𝐑^d)^N,
P_0^N(w)=∏_i=1^NP_0^1(w^i),
where P_0^1:𝐑^d→𝐑_+ is the pdf of 𝒩(m_0,σ^2_0I_d), and σ_0>0. Therefore 𝒟_ KL(q_^N|P_0^N)=∑_i=1^N𝒟_ KL(q_θ^i|P_0^1) and, for θ=(m,ρ)∈𝐑^d+1,
𝒟_ KL(q_θ^1|P_0^1)=∫_𝐑^d q^1_θ(x) log(q^1_θ(x)/P_0^1(x)) x=‖ m-m_0‖_2^2/2σ_0^2+d/2(g(ρ)^2/σ_0^2-1)+d/2log(σ_0^2/g(ρ)^2).
Note that 𝒟_ KL has at most a quadratic growth in m and ρ.
Note that we assume here a Gaussian prior to get an explicit expression of the Kullback-Leibler divergence. Most arguments extend to sufficiently regular densities and are essentially the same for exponential families, using conjugate families for the variational approximation.
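To illustrate these modelling choices, the following NumPy sketch (ours; it uses s(w,x)=tanh(w· x) as an example activation, and all function names are ours) evaluates the closed-form Kullback-Leibler term above and a one-sample reparametrized Monte Carlo estimate of the regularized objective E_ lbo^N.

import numpy as np

def g(rho):
    return np.log1p(np.exp(rho))                     # g(rho) = log(1 + e^rho)

def kl_to_prior(m, rho, m0, sigma0):
    # Closed-form KL( N(m, g(rho)^2 I_d) || N(m0, sigma0^2 I_d) ), one value per neuron;
    # m has shape (N, d), rho has shape (N,).
    d = m.shape[1]
    s2 = g(rho) ** 2
    return (np.sum((m - m0) ** 2, axis=1) / (2 * sigma0 ** 2)
            + 0.5 * d * (s2 / sigma0 ** 2 - 1.0)
            + 0.5 * d * np.log(sigma0 ** 2 / s2))

def regularized_elbo(m, rho, x, y, m0, sigma0, rng):
    # One-sample Monte Carlo estimate of E_lbo^N(theta, x, y) for the mean-square loss.
    N, d = m.shape
    z = rng.standard_normal((N, d))                  # z^i ~ gamma = N(0, I_d)
    w = m + g(rho)[:, None] * z                      # reparametrization Psi_theta(z) = m + g(rho) z
    f = np.tanh(w @ x).mean()                        # f_w^N(x) = (1/N) sum_i s(w^i, x)
    loss = 0.5 * (y - f) ** 2
    return -loss - kl_to_prior(m, rho, m0, sigma0).sum() / N   # KL term weighted by 1/N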
§.§ Common SGD schemes in backpropagation in a variational setting
Idealized SGD. Let (Ω, ℱ,𝐏) be a probability space. Consider a data set {(x_k,y_k)}_k≥ 0 i.i.d. w.r.t. π∈𝒫(𝖷×𝖸), the space of probability measures over 𝖷×𝖸. For N≥1 and given a learning rate η>0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm writes as follows:
for k≥ 0 and i∈{1,…,N},
θ_k+1=θ_k+ η∇_θE_ lbo^N(θ_k,x_k,y_k)
θ_0 ∼μ_0^⊗ N,
where μ_0∈𝒫(𝐑^d+1) and θ_k=(θ^1_k,…, θ^N_k).
We now compute ∇_θE_ lbo^N(θ,x,y).
First, under regularity assumptions on the function ϕ (which will be formulated later, see A1 and A3 below) and by assumption on 𝔏, we have for all i∈{1,…,N} and all (x,y)∈𝖷×𝖸,
∫_(𝐑^d)^N∇_θ^i𝔏(y,1/N∑_j=1^Nϕ(θ^j,z^j,x))γ(z^1)…γ(z^N) z^1… z^N
= -1/N^2∑_j=1^N∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) z^1… z^N
=-1/N^2[∑_j=1,j≠ i^N(y-⟨ϕ(θ^j,·,x),γ⟩)⟨∇_θϕ(θ^i,·,x),γ⟩ + ⟨(y-ϕ(θ^i,·,x))∇_θϕ(θ^i,·,x),γ⟩],
where we have used the notation ⟨ U,ν⟩=∫_𝐑^qU(z)ν( z) for any integrable function U:𝐑^q→𝐑 w.r.t. a measure ν (with a slight abuse of notation, we denote by γ the measure γ(z) z). Second, for θ∈𝐑^d+1, we have
∇_θ𝒟_ KL(q_θ^1|P_0^1)=
[ ∇_m𝒟_ KL(q_θ^1|P_0^1); ∂_ρ𝒟_ KL(q_θ^1|P_0^1) ]
=
[ 1/σ_0^2(m-m_0); d/σ_0^2g'(ρ)g(ρ)-dg'(ρ)/g(ρ) ].
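Indeed, both components follow by differentiating the closed-form expression of 𝒟_ KL(q_θ^1|P_0^1) given above; a short verification (ours), written in LaTeX notation, with g'(ρ)=e^ρ/(1+e^ρ):

\nabla_m \mathcal{D}_{\mathrm{KL}}(q_\theta^1 \,|\, P_0^1)
  = \nabla_m \frac{\|m-m_0\|_2^2}{2\sigma_0^2}
  = \frac{m-m_0}{\sigma_0^2},
\qquad
\partial_\rho \mathcal{D}_{\mathrm{KL}}(q_\theta^1 \,|\, P_0^1)
  = \frac{d}{2}\cdot\frac{2\,g'(\rho)\,g(\rho)}{\sigma_0^2}
    - \frac{d}{2}\cdot\frac{2\,g'(\rho)}{g(\rho)}
  = \frac{d\,g'(\rho)\,g(\rho)}{\sigma_0^2} - \frac{d\,g'(\rho)}{g(\rho)}.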
In conclusion, the SGD (<ref>) writes: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i-η/N^2∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(θ_k^i,·,x_k)-y_k)∇_θϕ(θ_k^i,·,x_k),γ⟩-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i ∼μ_0.
We shall call this algorithm idealized SGD because it contains an intractable term given by the integral w.r.t. γ. This has motivated the development of methods where this integral is replaced by an unbiased Monte Carlo estimator (see <cit.>), as detailed below.
Bayes-by-Backprop SGD. The second SGD algorithm we study
is based on an approximation, for i∈{1,…,N}, of ∫_(𝐑^d)^N(y-ϕ(θ^j,z^j,x))∇_θϕ(θ^i,z^i,x)γ(z^1)…γ(z^N) z^1… z^N (see (<ref>))
by
1/B∑_ℓ=1^B (y-ϕ(θ^j, 𝖹^j,ℓ,x) )∇_θϕ(θ^i,𝖹^i,ℓ,x)
where B∈𝐍^* is a fixed integer and (𝖹^q,ℓ, q∈{i,j}, 1≤ℓ≤ B) is a i.i.d finite sequence of random variables distributed according to γ(z) z.
In this case, for N≥ 1, given a dataset (x_k,y_k)_k≥0, the maximization of θ∈𝐑^d+1↦E_ lbo^N(θ,x,y) with a SGD algorithm is the following: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i -η/N^2B∑_j=1^N∑_ℓ=1^B (ϕ(θ_k^j,𝖹^j,ℓ_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^i,ℓ_k,x_k)
-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i=(m_0^i,ρ_0^i)∼μ_0,
where η>0 and (𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) is a i.i.d sequence of random variables distributed according to γ.
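For illustration, one iteration of the Bayes-by-Backprop recursion above can be written as follows (a NumPy sketch of ours, with the example activation s(w,x)=tanh(w· x) and the softplus g; m has shape (N,d), ρ has shape (N,), and all function names are ours).

import numpy as np

def g(rho):
    return np.log1p(np.exp(rho))          # softplus
def dg(rho):
    return 1.0 / (1.0 + np.exp(-rho))     # g'(rho)

def bbb_step(m, rho, x, y, eta, B, m0, sigma0, rng):
    N, d = m.shape
    z = rng.standard_normal((B, N, d))                        # Z_k^{i,l} ~ N(0, I_d)
    u = (m[None, :, :] + g(rho)[None, :, None] * z) @ x       # (m^i + g(rho^i) Z^{i,l}) . x, shape (B, N)
    phi = np.tanh(u)                                          # phi(theta^i, Z^{i,l}, x)
    r = phi.mean(axis=1) - y                                  # (1/N) sum_j phi(theta^j, Z^{j,l}, x) - y
    dphi = 1.0 - phi ** 2
    gm = dphi[:, :, None] * x[None, None, :]                  # grad_m phi(theta^i, Z^{i,l}, x)
    grho = dphi * dg(rho)[None, :] * (z @ x)                  # d/d rho phi(theta^i, Z^{i,l}, x)
    fit_m = np.einsum("l,lid->id", r, gm) / (N * B)           # 1/(N^2 B) sum_{j,l} (phi - y) grad_m phi
    fit_rho = (r[:, None] * grho).sum(axis=0) / (N * B)
    kl_m = (m - m0) / sigma0 ** 2                             # grad_m D_KL
    kl_rho = d * dg(rho) * (g(rho) / sigma0 ** 2 - 1.0 / g(rho))
    m_new = m - eta * (fit_m + kl_m / N)
    rho_new = rho - eta * (fit_rho + kl_rho / N)
    return m_new, rho_new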
§ LAW OF LARGE NUMBERS FOR THE IDEALIZED SGD
Assumptions and notations. When E is a metric space and ℐ= 𝐑_+ or ℐ=[0,T] (T≥ 0), we denote by 𝒟(ℐ,E) the Skorohod space of càdlàg functions on ℐ taking values in E and 𝒞(ℐ,E) the space of continuous functions on ℐ taking values in E.
The evolution of the parameters ({θ_k^i, i=1,…,N})_k≥ 1 defined by (<ref>) is tracked through their empirical distribution ν_k^N (for k≥ 0) and its scaled version μ_t^N (for t∈𝐑_+), which are defined as follows:
ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined in (<ref>).
Fix T>0.
For all N≥1, μ^N:={μ_t^N, t∈[0,T]} is a random element of 𝒟([0,T],𝒫(𝐑^d+1)), where 𝒫(𝐑^d+1) is endowed with the weak convergence topology. For N≥1 and k≥1, we introduce the following σ-algebras:
ℱ_0^N=σ(θ_0^i, 1≤ i≤ N) and ℱ_k^N=σ(θ_0^i, (x_q,y_q),1≤ i≤ N, 0≤ q≤ k-1).
Recall q_θ^1:𝐑^d→𝐑_+ be the pdf of 𝒩(m,g(ρ)^2I_d) (θ=(m,ρ)∈𝐑^d+1).
In this work, we assume the following.
A1.
There exists a pdf γ:𝐑^d→𝐑_+ such that for all θ∈𝐑^d+1, q^1_θ x=Ψ_θ#γ x, where {Ψ_θ, θ∈𝐑^d+1} is a family of 𝒞^1-diffeomorphisms over 𝐑^d such that for all z∈𝐑^d, θ∈𝐑^d+1↦Ψ_θ(z) is of class 𝒞^∞.
Finally, there exists 𝔟:𝐑^d→𝐑_+ such that for all multi-index α∈𝐍^d+1 with |α|≥ 1, there exists C_α>0, for all z∈𝐑^d and θ=(θ_1,…,θ_d+1)∈𝐑^d+1,
| ∂_αΨ_θ(z)| ≤ C_α𝔟(z) with for all q≥ 1, ⟨𝔟^q, γ⟩ <+∞,
where ∂_α= ∂_θ_1^α_1…∂_θ_d+1^α_d+1 and ∂_θ_j^α_j is the partial derivatives of order α_j w.r.t. to θ_j.
A2.
The sequence {(x_k,y_k)}_k≥ 0 is i.i.d. w.r.t. π∈𝒫(𝖷×𝖸).
The set 𝖷×𝖸⊂𝐑^d×𝐑 is compact. For all k≥0, (x_k,y_k) is independent of ℱ_k^N, where ℱ_k^N is defined in (<ref>).
A3.
The activation function s:𝐑^d×𝖷→𝐑 belongs to 𝒞^∞_b(𝐑^d×𝖷) (the space of smooth functions over 𝐑^d×𝖷 whose derivatives of all order are bounded).
A4.
The initial parameters (θ_0^i)_i=1^N are i.i.d. w.r.t. μ_0∈𝒫(𝐑^d+1) which has compact support.
Note that A1 is satisfied when γ is the pdf of 𝒩(0,I_d) and Ψ_θ(z)=m+g(ρ)z, with 𝔟(z)=1+|z|.
With these assumptions, for every fixed T>0, the sequence ({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋ defined by (<ref>) is a.s. bounded:
Assume A1→A4. Then,
there exists C>0 such that a.s. for all T>0, N≥ 1,
i∈{1,…, N}, and 0≤ k≤⌊ NT⌋,
|θ_k^i|≤ Ce^[ C(2+T)]T.
Lemma <ref> implies that a.s. for all T>0 and N≥ 1, μ^N ∈𝒟([0,T],𝒫(Θ_T)), where
Θ_T={θ∈𝐑^d+1, |θ|≤ Ce^[ C(2+T)]T}.
Law of large numbers for (μ^N)_N≥1 defined in (<ref>). The first main result of this work is the following.
Assume A1→A4. Let T>0. Then, the sequence (μ^N)_N≥1⊂𝒟([0,T],𝒫(Θ_T)) defined in (<ref>) converges in probability to the unique deterministic solution μ̅∈𝒞([0,T],𝒫(Θ_T)) to the following measure-valued evolution equation: ∀ f∈𝒞^∞(Θ_T) and ∀ t∈ [0,T],
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( x, y) s
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s.
The proof of Theorem <ref> is given in Appendix
<ref>. We stress here the most important steps and used
techniques. In a first step, we derive an identity satisfied by
(μ^N)_N≥ 1, namely the pre-limit
equation (<ref>); see Sec. <ref>. Then we
show in Sec. <ref> that (μ^N)_N≥ 1 is relatively
compact in 𝒟([0,T],𝒫(Θ_T)).
To do so, we check that the sequence (μ^N)_N≥ 1 satisfies all the required assumptions of <cit.> when E= 𝒫(Θ_T) there.
In
Sec. <ref> we prove that every limit point of
(μ^N)_N≥ 1 satisfies the limit equation (<ref>). Then, in Section <ref>,
we prove that there is a unique solution of the measure-valued equation (<ref>).
To prove the uniqueness of the solution of (<ref>),
we use techniques developed in <cit.> which are based on
a representation formula for solution to measure-valued equations <cit.> together with estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>.
In Section <ref>, we also conclude the
proof of Theorem <ref>. Compared
to <cit.>, the fact that
({θ_k^i, i=1,…,N})_k=0, …, ⌊ NT ⌋
defined by (<ref>) are a.s. bounded allows to use
different and more straightforward arguments to prove (i) the relative
compactness in 𝒟([0,T],𝒫(Θ_T)) of
(μ^N)_N≥1 (defined in (<ref>)) (ii) the
continuity property of the operator
𝗆↦Λ_t[f](𝗆) defined in
(<ref>) w.r.t. the topology of
𝒟([0,T],𝒫(Θ_T)) and (iii) (μ^N)_N≥ 1
has limit points in 𝒞([0,T],𝒫(Θ_T)). Step
(ii) is necessary in order to pass to the limit N→ +∞ in the
pre-limit equation and Step (iii) is crucial since we prove that there is at most
one solution of (<ref>) in
𝒞([0,T],𝒫(Θ_T)). It is worthwhile to
emphasize that, as N →∞, the effects of the integrated loss
and of the KL terms are balanced, as conjectured in <cit.>.
To avoid further technicalities, we have chosen what may seem restrictive assumptions on the data or the activation function. Note however that the analysis readily extends to an unbounded set 𝖷, and also to an unbounded 𝖸, assuming that π has polynomial moments of sufficiently high order. Also, ReLU (or, more easily, leaky ReLU) may be considered by using weak derivatives (to handle the singularity at 0) and a priori moment bounds on the weights.
§ LLN FOR THE BAYES-BY-BACKPROP SGD
The sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined recursively by the algorithm (<ref>) is in general not bounded, since ∇_θϕ(θ ,𝖹, x) is not necessarily bounded if 𝖹∼γ(z) z. Therefore, we cannot expect Lemma <ref> to hold for {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ set by (<ref>). Thus, the sequence {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ is considered on the whole space 𝐑^d+1.
Wasserstein spaces and results.
For N≥1, and k≥ 1, we set
ℱ_k^N=σ (θ_0^i , 𝖹^j,ℓ_q,(x_q,y_q), 1≤ i,j≤ N, 1≤ℓ≤ B, 0≤ q≤ k-1} ).
In addition to A1→A4 (where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)),
we assume:
A5. The sequences (𝖹^j,ℓ_k,1≤ j≤ N, 1≤ℓ≤ B, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^j,ℓ_k, 1≤ j≤ N, 1≤ℓ≤ B) is independent of ℱ_k^N.
Note that the last statement of A5 implies the last statement of A2.
We introduce the scaled empirical distribution of the parameters of the algorithm (<ref>), i.e. for k≥ 0 and t≥ 0:
ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined in (<ref>).
One can no longer rely on the existence of a compact subset Θ_T⊂𝐑^d+1 such that a.s. (μ^N)_N≥1⊂𝒟([0,T], 𝒫(Θ_T)), where μ^N={t≥ 0↦μ_t^N} is defined in (<ref>). For this reason, we will work in Wasserstein spaces 𝒫_q(𝐑^d+1), q≥ 0, which, we recall, are defined by
𝒫_q(𝐑^d+1)={ν∈𝒫(𝐑^d+1), ∫_𝐑^d+1 |θ|^q ν (θ)<+∞}.
These spaces are endowed with the Wasserstein metric 𝖶_q, see e.g. <cit.> for more materials on Wasserstein spaces. For all q≥ 0, (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_q(𝐑^d+1)).
The second main results of this work is a LLN for (μ^N)_N≥1 defined in (<ref>).
Assume A1→A5. Let γ_0> 1+ d+1/2. Then, the sequence (μ^N)_N≥1 defined in (<ref>) converges in probability in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) to a deterministic element μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), where μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is the unique solution in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to the following measure-valued evolution equation:∀ f∈𝒞^∞_b(𝐑^d+1) and ∀ t∈𝐑_+,
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩ =- η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ̅_s⊗γ⟩⟨∇_θ f·∇_θϕ( · ,·,x),μ̅_s⊗γ⟩π( x, y) s
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s.
Theorem <ref> is proved in the appendix <ref>.
Since {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ defined by (<ref>) is not bounded in general, we work in the space 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). The proof of Theorem <ref> is more involved than that of Theorem <ref>, and generalizes the latter to the case where the parameters of the SGD algorithm are unbounded.
We prove that (μ^N)_N≥1 (defined in (<ref>)) is relatively compact in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)). To this end we now use <cit.>. The compact containment, which is the purpose of Lemma <ref>, is not straightforward since 𝒫_γ_0(𝐑^d+1) is not compact contrary to Theorem <ref> where we used the compactness of 𝒫(Θ_T). More precisely, the compact containment of (μ^N)_N≥ 1 relies on a characterization of the compact subsets of 𝒫_γ_0(𝐑^d+1) (see Proposition <ref>) and moment estimates on {θ_k^i, i∈{1,… N}}_k=0, …, ⌊ NT ⌋ (see Lemma <ref>).
We also mention that contrary to what is done in the proof of Theorem <ref>, we do not show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) is continuous in time but we still manage to prove that they all satisfy (<ref>). Then, using the duality formula for the 𝖶_1-distance together with rough estimates on the jumps of t↦⟨ f, μ_t^N⟩ (for f uniformly Lipschitz over 𝐑^d+1), we then show that every limit point of (μ^N)_N≥1 in 𝒟(𝐑_+, 𝒫_γ_0(𝐑^d+1)) belongs a.s. to 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)). Again this is important since we have uniqueness of (<ref>) in 𝒞(𝐑_+, 𝒫_1(𝐑^d+1)).
We conclude this section with the following important uniqueness result.
Under the assumptions of Theorems <ref> and <ref>, the solution to (<ref>) is independent of T and is equal to the solution to (<ref>).
This uniqueness result states that both idealized and
Bayes-by-backprop SGD have the same limiting behavior. It is also noteworthy that the mini-batch size B is held fixed. The effect of the batch size can be seen at the level of the central limit theorem, which we leave for future work.
§ THE MINIMAL-VI SGD ALGORITHM
The idea behind the Bayes-by-Backprop SGD stems from the fact that there are integrals w.r.t. γ in the loss function that cannot be computed in practice, and it is quite natural, up to a reparameterization trick, to replace these integrals by a Monte Carlo approximation (with i.i.d. Gaussian random variables). To devise a
new cheaper algorithm based on the only terms impacting the asymptotic
limit, we directly analyse the limit equation (<ref>) and remark that it can be rewritten as,
∀ f∈𝒞^∞(Θ_T) and ∀ t∈
[0,T],
⟨ f,μ̅_t⟩-⟨ f,μ_0⟩
=- η∫_0^t∫_𝖷×𝖸× (𝐑^d)^2⟨ϕ(·,z_1,x)-y,μ̅_s⟩⟨∇_θ f·∇_θϕ( · ,z_2,x),μ̅_s⟩γ^⊗ 2( z_1 z_2)π( x, y) s
- η∫_0^t⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_s⟩ s.
Thus, the integration over γ^⊗ 2 can be considered as that over π, i.e., we can consider them as two more data variables that only need to be sampled at each new step. In this case, the SGD (<ref>) becomes: for k≥ 0 and i∈{1,…,N},
θ_k+1^i=θ_k^i -η/N^2∑_j=1^N (ϕ(θ_k^j,𝖹^1_k,x_k)-y_k )∇_θϕ(θ_k^i,𝖹^2_k,x_k)
-η/N∇_θ𝒟_ KL(q_θ^i_k^1|P_0^1)
θ_0^i=(m_0^i,ρ_0^i)∼μ_0,
where η>0 and (𝖹^p_k, p∈{1,2}, k≥ 0) is a
i.i.d sequence of random variables distributed according to
γ^⊗2. We call this backpropagation scheme
minimal- VI SGD which is much cheaper in terms of computational complexity, with the same limiting behavior as we now discuss.
We introduce the σ-algebra for N,k≥ 1:
ℱ_k^N=σ (θ_0^i , 𝖹^p_q,(x_q,y_q), 1≤ i≤ N, p∈{1,2}, 0≤ q≤ k-1} ).
In addition to A1→A4 (where in A2, ℱ_k^N is now the one defined above in (<ref>) when k≥ 1), the following assumption
A6. The sequences (𝖹^p_k, p∈{1,2}, k≥ 0) and ((x_k,y_k), k≥ 0) are independent. In addition, for k≥ 0, ((x_k,y_k),𝖹^p_k, p∈{1,2}) is independent of ℱ_k^N, where ℱ_k^N is defined in (<ref>).
Set for k≥ 0 and t≥ 0, ν_k^N:=1/N∑_i=1^Nδ_θ_k^i and μ_t^N:=ν_⌊ Nt⌋^N, where the θ^i_k's are defined in (<ref>).
The last main result of this work states that, as N→ +∞, the sequence (μ^N)_N≥1 satisfies the same law of large numbers as the one satisfied by (<ref>); its proof is omitted since it is the same as that of Theorem <ref>.
Assume A1→A4 and A6. Then, the sequence (μ^N)_N≥1 satisfies all the statements of Theorem <ref>.
§ NUMERICAL EXPERIMENTS
In this section we illustrate the theorems <ref>,
<ref>, and <ref> using the following toy model. We
set d=5. Given θ^*∈𝐑^d (drawn from a normal
distribution and scaled to the unit norm), we draw i.i.d observations
as follows: Given x∼𝒰([-1,1]^d), we draw
y=tanh(x^⊤θ^*)+ϵ, where ϵ is zero mean with
variance 10^-4. The initial distribution of parameters is centered
around the prior:
θ_0∼ (𝒩(m_0,0.01I_d)×𝒩(g^-1(σ_0),0.01))^⊗ N, with m_0=0 and
σ_0=0.2. Since the idealized algorithm cannot be
implemented exactly, a mini-batch of size 100 is used as a proxy for
the following comparisons of the different algorithms. For the
Bayes-by-Backprop SGD (<ref>) we set B=1.
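The toy data and the initialization can be reproduced, up to our naming conventions, with the following NumPy snippet (the seed is arbitrary).

import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 10_000
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)                 # scaled to unit norm

def sample(n):
    x = rng.uniform(-1.0, 1.0, size=(n, d))              # x ~ U([-1, 1]^d)
    y = np.tanh(x @ theta_star) + 1e-2 * rng.standard_normal(n)   # noise variance 1e-4
    return x, y

# initialization of the variational parameters, centered around the prior
m0, sigma0 = np.zeros(d), 0.2
m = m0 + 0.1 * rng.standard_normal((N, d))               # m ~ N(m0, 0.01 I_d)
rho = np.log(np.expm1(sigma0)) + 0.1 * rng.standard_normal(N)  # rho ~ N(g^{-1}(sigma0), 0.01)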
Evolution and limit of the distribution
Fig. <ref> displays the histograms of
{F(θ_⌊ Nt⌋^i), i=1,…,N}
(F(θ)=‖ m‖_2, g(ρ) or m, where
θ=(m,ρ)∈𝐑^d×𝐑), for N=10000, at initialization, halfway through training, and at the end of training. The empirical distributions illustrated by these histograms are very similar over the course of training. It can be seen that for N=10000 the limit of the mean field is reached.
Convergence with respect to the numbers of neurons.
We investigate here the speed of convergence of μ_t^N to
μ̅_t (as N→+∞), when tested against test functions
f. More precisely, we fix a time T (end of training) and Figure
<ref> represents the empirical mean of
⟨ f, μ_T^N⟩ over 50 realizations. The test
functions used for this experiment are f_m(θ) = ‖ m‖_2,
f_Elbo(θ) = -
Ê_lbo(θ)^N where
Ê_lbo is the empirical
E_lbo^N (see (<ref>)) computed with 100
samples of (x,y) and (z^1,…,z^N). Finally,
f_pred(θ) =
𝔼̂_x[𝕍̂_w∼
q_θ^N[f_w^N(x)]^1/2]
where 𝔼̂ and 𝕍̂ denote respectively the
empirical mean and the empirical variance over 100 samples. All
algorithms are converging to the same limit and are performing similarly
even with a limited number of neurons (N=300 in this example).
Convergence with respect to time.
This section illustrates the training process of a BNN with a given
number of neurons N = 10000. In Figure <ref>,
we plot the negative ELBO on a test set and its two components, the
loss and the KL-divergence terms. Figure <ref>
shows that the BNN is able to learn on this specific task and all
algorithms exhibit a similar performance. It illustrates the
trajectorial convergence of {μ_t^N, t∈[0,T]}_N≥1 to
{μ̅_t, t∈[0,T]} as N→+∞.
Behavior around the limit μ̅.
On Figure <ref>, we plot the boxplots
of ⟨ f,μ_t^N⟩ for 50 realizations and N=10000, at
different times of the training.
The Minimal-VI scheme (which is computationally cheaper, as
explained in <ref>) exhibits a larger variance than the
other algorithms.
§ CONCLUSION
By establishing the limit behavior of the idealized
SGD for the variational inference of BNN with the weighting suggested
by <cit.>, we have rigorously shown that the Bayes-by-Backprop scheme most commonly used in practice indeed exhibits the same limit behavior. Furthermore, the analysis of the limit equation led us to validate the correct scaling of the KL divergence term
with respect to the loss. Notably, the mean-field limit dynamics
has also helped us to devise a far less costly new SGD algorithm,
the Minimal-VI. This scheme shares the same limit
behavior, but only stems from the non-vanishing asymptotic
contributions, hence the reduction of the computational cost. Aside
from confirming the analytical results, the first simulations
presented here show that the three algorithms, while having the same
limit, may differ in terms of variance. Thus, deriving a CLT result
and discussing the right trade-off between computational complexity
and variance will be done in future work. Also, on a more
general level regarding uncertainty quantification, an interesting
question is to analyse the impact of the correct scaling of the KL
divergence term on the error calibration and how to apply the same
analysis in the context of deep ensembles.
A.D. is grateful for the
support received from the Agence Nationale de la Recherche (ANR) of
the French government through the program "Investissements d'Avenir"
(16-IDEX-0001 CAP 20-25) A.G. is supported by the French ANR under
the grant ANR-17-CE40-0030 (project EFI) and the Institut
Universitaire de France. M.M. acknowledges the support of the
French ANR under the grant ANR-20-CE46-0007 (SuSa project).
B.N. is supported by the grant IA20Nectoux from the Projet I-SITE
Clermont CAP 20-25. E.M. and T.H. acknowledge the support of ANR-CHIA-002, "Statistics, computation and Artificial Intelligence"; part of the work has been developed under the auspices of the Lagrange Center for Mathematics and Calculus.
§ PROOF OF THEOREM <REF>
For simplicity, we prove Theorem <ref> for T=1, and we denote Θ_1 simply by Θ. In this section, we assume A1–A4.
§.§ Pre-limit equation (<ref>) and error terms in (<ref>)
§.§.§ Derivation of the pre-limit equation
The aim of this section is to establish the so-called pre-limit equation (<ref>), which will be our starting point to derive Equation (<ref>).
Let N≥ 1, k∈{0,…,N}, and f∈𝒞^∞(Θ). Recall that by Lemma <ref> and since 0≤ k ≤ N, a.s. θ^i_k∈Θ, and thus a.s. f(θ^i_k) is well-defined. The Taylor-Lagrange formula yields
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =1/N∑_i=1^Nf(θ_k+1^i)-f(θ_k^i)
=1/N∑_i=1^N∇_θ f(θ_k^i)·(θ_k+1^i-θ_k^i) +1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i),
where, for all i∈{1,…, N}, θ̂_k^i lies on the segment (θ_k^i,θ_k+1^i)⊂Θ.
Using (<ref>), we then obtain
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =
-η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩
-η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ + 𝐑_k^N[f],
where
𝐑_k^N[f]:=1/2N∑_i=1^N(θ_k+1^i-θ_k^i)^T∇^2f(θ̂_k^i)(θ_k+1^i-θ_k^i).
Let us define
𝐃_k^N[f] := 𝐄[-η/N^3∑_i=1^N∑_j=1,j≠ i^N (⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩|ℱ_k^N]
-𝐄[η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩|ℱ_k^N].
Note that using (<ref>) and (<ref>) together with the fact that |∇_θ f(θ_k^i)|≤sup_θ∈Θ |∇_θ f(θ)|,
the integrand in (<ref>) is integrable and thus 𝐃_k^N[f] is well defined.
Using the fact that (x_k,y_k) is independent of ℱ_k^N by A2 and that {θ_k^i, i=1,…,N} is ℱ_k^N-measurable by (<ref>), we have:
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Introduce also
𝐌_k^N[f] :=-η/N^3∑_i=1^N∑_j=1,j≠ i^N(⟨ϕ(θ_k^j,·,x_k),γ⟩-y_k)⟨∇_ θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x_k),γ⟩
-η/N^2⟨(ϕ(·,·,x_k)-y_k)∇_θ f·∇_θϕ(·,·,x_k),ν_k^N⊗γ⟩-𝐃_k^N[f].
Note that 𝐄 [𝐌_k^N[f]|ℱ_k^N]=0. Equation (<ref>) then becomes
⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ =𝐃_k^N[f]+ 𝐌_k^N[f] -η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ +𝐑_k^N[f].
Notice also that
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^j,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
+η/N^3∑_i=1^N∫_𝖷×𝖸(⟨ϕ(θ_k^i,·,x),γ⟩-y)⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y)
=-η/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,ν_k^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y)
+η/N^2∫_𝖷×𝖸⟨(⟨ϕ(·,·,x),γ⟩-y)⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,ν_k^N⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Now, we define for t∈ [0,1]:
𝐃_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐃_k^N[f], 𝐑_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], and 𝐌_t^N[f]:=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f] .
We can rewrite 𝐃_t^N[f] as follows:
𝐃_t^N[f]=∑_k=0^⌊ Nt⌋-1∫_k/N^k+1/NN 𝐃_⌊ Ns⌋^N[f] s=N∫_0^t 𝐃_⌊ Ns⌋^N[f] s-N∫_⌊ Nt⌋/N^t 𝐃_⌊ Ns⌋^N[f] s.
Since ν_⌊ Ns⌋^N=μ_s^N (by definition, see (<ref>)), we have, using also (<ref>) with k=⌊ Ns⌋,
𝐃_t^N[f] =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s-𝐕_t^N[f],
where
𝐕_t^N[f] :=-η∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫^t_⌊ Nt⌋/N∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s.
On the other hand, we also have for t∈ [0,1],
∑_k=0^⌊ Nt⌋-1-η/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),ν_k^N⟩ =-η∫_0^⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s.
We finally set:
𝐖_t^N[f]:=- 𝐕_t^N[f] + η∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s.
Since ⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩=∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩, we deduce from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), the so-called pre-limit equation satisfied by
μ^N: for N≥1, t∈ [0,1], and f∈𝒞^∞(Θ),
⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
-η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+ 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f].
§.§.§ The last five terms in (<ref>) are error terms
The purpose of this section is to show that the last five terms appearing in the r.h.s. of (<ref>) are error terms when N→+∞.
For J∈𝐍^* and f∈𝒞^J(Θ), set ‖ f‖_𝒞^J(Θ):=∑_|k|≤ J‖∂_kf ‖_∞, Θ,
where ‖ g‖_∞, Θ=sup_θ∈Θ|g(θ)| for g:Θ→𝐑^m.
Assume A1→A4. Then, there exists C>0 such that a.s. for all f∈𝒞^∞(Θ) and N≥1,
* η/N∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s ≤ C‖ f‖_𝒞^1(Θ)/N.
* η/N∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ)/N.
* sup_t∈[0,1]|𝐖_t^N[f]|+ sup_t∈[0,1]|𝐑_t^N[f]| ≤ C‖ f‖_𝒞^2(Θ)/N.
Finally, sup_t∈[0,1]𝐄[|𝐌_t^N[f]|]≤C‖ f‖_𝒞^1(Θ)/√(N).
All along the proof, C>0 denotes a positive constant independent of N≥ 1,k∈{0,…,N-1},(s,t)∈ [0,1]^2,(x,y)∈𝖷×𝖸,θ∈Θ,z∈𝐑^d, and f∈𝒞^∞(Θ) which can change from one occurrence to another.
Using (<ref>), the Cauchy-Schwarz inequality, and the fact that ∇_θ f is bounded over Θ, we obtain:
|⟨∇_θ f(θ)·∇_θϕ(θ,·,x),γ⟩|≤⟨|∇_θ f(θ)·∇_θϕ(θ,·,x)|,γ⟩≤ C‖ f‖_𝒞^1(Θ).
Combining (<ref>) and (<ref>), we obtain:
∫_0^1∫_𝖷×𝖸|⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ)
and
∫_0^1∫_𝖷×𝖸|⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩| π( x, y) s≤ C‖ f‖_𝒞^1(Θ),
which proves Items <ref> and <ref>.
Let us now prove Item <ref>. By (<ref>) and (<ref>), sup_t∈[0,1]|𝐕_t^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N.
On the other hand,
because f∈𝒞^∞(Θ) and θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is continuous (see (<ref>)) over Θ which is compact, it holds, ‖∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞.
Hence, it holds:
sup_t∈[0,1]|∫^t_⌊ Nt⌋/N⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s|≤ C‖ f‖_𝒞^1(Θ)/N.
Using (<ref>), it then holds sup_t∈[0,1]|𝐖_t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)/N.
Since f∈𝒞^∞(Θ), we have, by (<ref>), for N≥ 1 and 0≤ k≤ N-1, |𝐑_k^N[f]|≤‖ f‖_𝒞^2(Θ)C/N∑_i=1^N|θ_k+1^i-θ_k^i|^2.
By (<ref>) and Lemma <ref>, |θ_k+1^i-θ_k^i|^2≤ C/N^2 and consequently, one has:
|𝐑_k^N[f]|≤C‖ f‖_𝒞^2(Θ)/N^2.
Hence, for all t∈[0,1], |𝐑_t^N[f]|≤C‖ f‖_𝒞^2(Θ)/N.
This proves Item <ref>.
Let us now prove the last item in Lemma <ref>. Let t∈[0,1]. We have, by (<ref>),
|𝐌_t^N[f]|^2=∑_k=0^⌊ Nt⌋-1 |𝐌_k^N[f] |^2+2∑_k<j𝐌_k^N[f] 𝐌_j^N[f] .
For all 0≤ k<j<⌊ Nt⌋, 𝐌_k^N[f] is ℱ_j^N-measurable (see (<ref>)), and since 𝐄 [𝐌_j^N[f]|ℱ_j^N]=0, one deduces that 𝐄 [ 𝐌_k^N[f] 𝐌_j^N[f] ]=𝐄 [𝐌_k^N[f] 𝐄 [𝐌_j^N[f]|ℱ_j^N] ]=0.
Hence, 𝐄[|𝐌_t^N[f]|^2]=∑_k=0^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2].
By (<ref>) and (<ref>), one has a.s. for all 0≤ k≤ N-1,
|𝐌_k^N[f]|≤ C‖ f‖_𝒞^1(Θ)/N.
Hence, 𝐄[|𝐌_t^N[f]|^2]≤ C‖ f‖_𝒞^1(Θ)^2/N, which, by Jensen's inequality, proves the last inequality in Lemma <ref>.
§.§ Convergence to the limit equation as N→+∞
In this section we prove the relative compactness of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)). We then show that any of its limit points satisfies the limit equation (<ref>).
§.§.§ Wasserstein spaces and duality formula
In this section we recall some basic results which will be used throughout this work on the space 𝒫(𝒮) when (𝒮, 𝖽) is a Polish space. First when endowed with the weak convergence topology, 𝒫(𝒮) is a Polish space <cit.>. In addition, 𝒫_q(𝒮)= {ν∈𝒫(𝒮), ∫_𝒮𝖽(w_0,w)^q ν ( w)<+∞}, where w_0∈𝒮 is arbitrary (note that this space was defined previously in (<ref>) when 𝒮=𝐑^d+1) when endowed with the 𝖶_q metric is also a Polish space <cit.>.
Recall also the duality formula for the 𝖶_1-distance on 𝒫_1(𝒮) (see e.g <cit.>):
𝖶_1(μ,ν)=sup{|∫_𝒮f(w)μ(w)-∫_𝒮f(w)ν( w)|, f_Lip≤ 1}.
Finally, when 𝒦⊂𝐑^d+1 is compact, the convergence in 𝖶_q-distance is equivalent to the usual weak convergence on 𝒫(𝒦) (see e.g. <cit.>).
§.§.§ Relative compactness
The main result of this section is to prove that (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)), which is the purpose of Proposition <ref> below. To this end, we need to prove that for all f∈𝒞^∞(Θ), every sequence
(⟨ f,μ_t^N⟩)_N≥ 1 satisfies some regularity conditions, which is the purpose of the next result.
Assume A1→A4.
Then there exists C>0 such that a.s. for all f∈𝒞^∞(Θ), 0≤ r<t≤ 1, and N≥1:
|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C‖ f‖_𝒞^2(Θ)[|t-r|+|t-r|/N+1/N].
Let f∈𝒞^∞(Θ) and let N≥1 and 0≤ r<t≤ 1. In the following C>0 is a positive constant independent of f∈𝒞^∞(Θ), N≥1, and 0≤ r<t≤ 1, which can change from one occurrence to another.
From (<ref>), we have
⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩ =𝐀_r,t^N[f] - η∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+𝐌_t^N[f]-𝐌_r^N[f] +𝐖_t^N[f]-𝐖_r^N[f]+𝐑_t^N[f]-𝐑_r^N[f],
where
𝐀_r,t^N[f] =-η∫_r^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y)
+η/N∫_r^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y)
-η/N∫_r^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y).
By (<ref>) and (<ref>), |𝐀_r,t^N[f]| ≤ C‖ f‖_𝒞^1(Θ)[|t-r|+|t-r|/N].
In addition, since θ↦𝒟_ KL(q_θ^1|P_0^1) is bounded over Θ (since it is smooth and Θ is compact),
| ∫_r^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s |≤ C‖ f‖_𝒞^1(Θ)|t-r|.
Furthermore, using (<ref>),
|𝐌_t^N[f]-𝐌_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^1(Θ)/N.
Next, we have, by Item <ref> in Lemma <ref>, |𝐖_t^N[f]-𝐖_r^N[f]|≤|𝐖_t^N[f]|+|𝐖_r^N[f]|≤C‖ f‖_𝒞^2(Θ)/N.
Finally, by (<ref>),
|𝐑_t^N[f]-𝐑_r^N[f]|=|∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐑_k^N[f]|≤ (⌊ Nt⌋-⌊ Nr⌋) C‖ f‖_𝒞^2(Θ)/N^2.
The proof of Proposition <ref> is complete plugging all the previous estimates in (<ref>).
Assume A1→A4. Then, the sequence (μ^N)_N≥ 1 is relatively compact in 𝒟([0,1],𝒫(Θ)).
The proof consists in applying <cit.> with E=𝒫(Θ) endowed with the weak convergence topology.
Set 𝔽={𝖫_f, f∈𝒞^∞(Θ)} where
𝖫_f: ν∈𝒫(Θ)↦⟨ f, ν⟩.
The class of continuous functions 𝔽 on 𝒫(Θ) satisfies Conditions <cit.>.
On the other hand, the condition <cit.> is satisfied since 𝒫(Θ) is compact because Θ is compact (see e.g. <cit.> together with <cit.>).
It remains to verify Condition (3.4) of <cit.>, i.e. that for all f∈𝒞^∞(Θ), (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟([0,1],𝐑). To this end, we apply <cit.>. Condition (i) in <cit.> is satisfied because
|⟨ f,μ^N_t⟩|≤‖ f‖_∞,Θ for all t∈ [0,1] and N≥ 1. Let us now show that Condition (ii) in <cit.> holds.
For this purpose, we use Lemma <ref>.
For δ,β>0 sufficiently small, it is possible to construct a subdivision { t_i}_i=0^v of [0,1] such that t_0 =0, t_v=1, t_i+1-t_i = δ+β for i∈{0,…,v-2} and δ+β≤ t_v -t_v-1≤ 2(δ+β). According to the terminology introduced in <cit.>, { t_i}_i=0^v is δ-sparse. Then, by Lemma <ref>, there exists C>0 such that a.s.
for all δ,β>0, all such subdivision { t_i}_i=0^v, i∈{0,…,v-1}, and N≥ 1,
sup_t,r∈[t_i ,t_i+1 ] |⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(|t_i+1 -t_i |+|t_i+1 -t_i |/N+1/N)≤ C(2(δ+β)+2(δ+β)/N+1/N).
Thus, one has:
inf_β>0max_isup_t,r∈[t_i ,t_i+1 ]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N).
Consequently, there exists C>0 such that a.s. for all δ>0 small enough and N≥ 1,
w'_⟨ f,μ^N⟩(δ):=inf_{t_i} δ-sparse max_i sup_t,r∈[t_i,t_i+1]|⟨ f,μ_t^N⟩-⟨ f,μ_r^N⟩|≤ C(2δ+2δ/N+1/N).
This implies lim_δ→0lim sup_N→+∞𝐄[w'_⟨ f,μ^N⟩(δ)]=0. By Markov's inequality, this proves Condition (ii) of <cit.>. Therefore, for all f∈𝒞^∞(Θ), using also Prokhorov theorem, the sequence (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) is relatively compact. In conclusion,
according to <cit.>, (μ^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) is tight.
§.§.§ Limit points satisfy the limit equation (<ref>)
In this section we prove that every limit point of (μ^N)_N≥ 1 in 𝒟([0,1],𝒫(Θ)) satisfies (<ref>).
Let 𝗆,(𝗆^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) be such that 𝗆^N→𝗆 in 𝒟([0,1],𝒫(Θ)). Then, for all Lipschitz continuous function f:Θ→𝐑, we have ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑).
Let f be such a function.
By <cit.>, 𝗆^N→𝗆 in 𝒟([0,1],𝒫(Θ)) iff there exist functions λ_N: [0,1]→ [0,1] continuous, increasing onto itself such that sup_t∈[0,1]|λ_N(t)-t|→_N→∞ 0 and
sup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0.
Then ⟨ f,𝗆^N⟩→⟨ f,𝗆⟩ in 𝒟([0,1],𝐑) since by (<ref>), sup_t∈ [0,1]|⟨ f,𝗆_λ_N(t)^N⟩-⟨ f,𝗆_t⟩| ≤f_Lipsup_t∈ [0,1]𝖶_1(𝗆_λ_N(t)^N,𝗆_t)→_N→∞0.
Let f∈𝒞^∞(Θ). Then, any limit point of (⟨ f,μ^N⟩)_N≥1⊂𝒟([0,1],𝐑) belongs a.s. to 𝒞([0,1],𝐑).
Fix t∈ (0,1]. Letting r→ t in (<ref>), we obtain
|⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩|≤ C/N.
Therefore sup_t∈(0,1]|⟨ f,μ_t^N⟩-⟨ f,μ_t^-^N⟩|→ 0 as N→+∞. The result follows from <cit.>.
Let μ^*∈𝒟([0,1], 𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1], 𝒫(Θ)). Then, a.s. μ^*∈𝒞([0,1], 𝒫(Θ)).
Up to extracting a subsequence, we assume that μ^N converges in distribution to μ^*. By Skorohod representation theorem, there exists another probability space (Ω̂, ℱ̂,𝐏̂) on which are defined random elements (μ̂^N)_N≥1 and μ̂^*, where,
μ̂^*𝒟=μ^*, and for all N≥1, μ̂^N𝒟=μ^N,
and such that 𝐏̂-a.s., μ̂^N→μ̂^* in 𝒟([0,1], 𝒫(Θ)) as N→ +∞. Fix f∈𝒞^∞(Θ). We have, by Lemma <ref>,
𝐏̂-a.s., ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in 𝒟([0,1],𝐑).
In particular, ⟨ f,μ̂^N⟩→_N→+∞⟨ f,μ̂^*⟩ in distribution. By Proposition <ref>, there exists Ω̂_f ⊂Ω̂ of 𝐏̂-mass 1 such that for all ω∈Ω̂_f, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). Denote by ℱ the class of polynomial functions with rational coefficients. Since this class is countable, the set Ω̂_ℱ:=∩_f∈ℱΩ̂_f
is of 𝐏̂-mass 1.
Consider now an arbitrary f∈𝒞(Θ) and let us show that for all ω∈Ω̂_ℱ, ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑). By the Stone-Weierstrass theorem, there exist (f_n)_n≥1⊂ℱ such that f_n-f_∞,Θ→_n→+∞0. On Ω̂_ℱ, for all n,
t∈ [0,1]↦⟨ f_n,μ̂_t^*⟩ is continuous and converges uniformly to t∈ [0,1]↦⟨ f,μ̂_t^*⟩.
Hence, for all ω∈Ω̂_ℱ and f∈𝒞 (Θ), ⟨ f,μ̂^*(ω)⟩∈𝒞([0,1],𝐑), i.e. for all ω∈Ω̂_ℱ, μ̂^*(ω)∈𝒞([0,1],𝒫(Θ)). This concludes the proof.
Now, we introduce, for t∈[0,1] and f∈𝒞^∞(Θ), the function Λ_t[f]:𝒟([0,1],𝒫(Θ))→𝐑_+ defined by:
Λ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩
+η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s
+ η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |.
We now study the continuity of Λ_t[f].
Let (𝗆^N)_N≥ 1⊂𝒟([0,1],𝒫(Θ)) converge to 𝗆∈𝒟([0,1],𝒫(Θ)). Then, for all continuity point t∈[0,1] of 𝗆 and all f∈𝒞^∞(Θ), we have Λ_t[f](𝗆^N)→Λ_t[f](𝗆).
Let f∈𝒞^∞(Θ) and denote by 𝒞(𝗆)⊂[0,1] the set of continuity points of 𝗆. Let t∈𝒞(𝗆). From <cit.>, we have, for all s∈𝒞(𝗆),
𝗆^N_s→𝗆_s in 𝒫(Θ).
Thus, ⟨ f,𝗆_t^N⟩→_N→∞⟨ f,𝗆_t⟩.
For all z∈𝐑^d and (x,y)∈𝖷×𝖸, A1 and A3 ensure that the functions θ∈Θ↦ϕ(θ
,z,x)-y and θ∈Θ↦∇_θ f(θ)·∇_θϕ(θ,z,x) are continuous and also bounded because Θ is compact. Hence, for all s∈ [0,t]∩𝒞(𝗆), using (<ref>),
⟨ϕ(·,z,x)-y,𝗆_s^N⟩→⟨ϕ(·,z,x)-y,𝗆_s⟩ and ⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩
Since [0,1]\𝒞(𝗆) is at most countable (see <cit.>) we have that for a.e. (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸,
⟨ϕ(·,z',x)-y,𝗆_s^N⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s^N⟩→⟨ϕ(·,z',x)-y,𝗆_s⟩⟨∇_θ f·∇_θϕ(·,z,x),𝗆_s⟩.
Since ϕ(θ,z',x)-y is bounded and by (<ref>), there exists C>0 such that for all (s,z',z,x,y)∈ [0,t]×𝐑^d×𝐑^d×𝖷×𝖸, ⟨ |ϕ(·,z',x)-y|,𝗆_s^N⟩⟨|∇_θ f·∇_θϕ(·,z,x)|,𝗆_s^N⟩≤ C‖∇ _θ f‖_∞,Θ𝔟(z).
By the dominated convergence theorem, we then have:
∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s^N⊗γ⟩π( x, y) s
N→+∞⟶∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s.
With the same arguments as above, one shows that ∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s^N ⟩ s →∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s ⟩ s.
The proof of the lemma is complete.
Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point of (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). Then, a.s. μ^* satisfies (<ref>).
Up to extracting a subsequence, we can assume that μ^N converges in distribution to μ^* as N→ +∞. Let f∈𝒞^∞(Θ). The pre-limit equation (<ref>) and Lemma <ref> imply that a.s. for all N≥ 1 and t∈[0,1], Λ_t[f](μ^N)≤ C/N+ |𝐌_t^N[f]|.
Hence, using the last statement in Lemma <ref>, it holds for all t∈[0,1],
lim_N→∞𝐄[Λ_t[f](μ^N)]=0.
In particular, Λ_t[f](μ^N)→ 0 in probability. Let us now show that Λ_t[f](μ^N) converges in distribution to Λ_t[f](μ^*).
Denoting by 𝖣(Λ_t[f]) the set of discontinuity points of Λ_t[f], we have, from Proposition <ref> and Lemma <ref>, for all t∈[0,1] and f∈𝒞^∞(Θ),
𝐏(μ^*∈𝖣(Λ_t[f])) =0.
By the continuous mapping theorem, Λ_t[f](μ^N) converges in distribution to Λ_t[f](μ^*).
By uniqueness of the limit in distribution, we have that for all t∈[0,1] and f∈𝒞^∞(Θ), a.s. Λ_t[f](μ^*)=0. Let us now prove that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0.
On the one hand, for all f∈𝒞^∞(Θ) and 𝗆∈𝒟([0,1],𝒫(Θ)), the function t↦Λ_t[f](𝗆) is right-continuous. Since [0,1] is separable, we have that for all f∈𝒞^∞(Θ), a.s. for all t∈[0,1], Λ_t[f](μ^*)=0.
On the other hand, 𝒞^∞(Θ) is separable (when endowed with the norm ‖ f‖_𝒞^∞(Θ)= ∑_k≥ 02^-kmin(1,∑_|j|=k‖∂_jf‖_∞,Θ)) and the function f∈𝒞^∞(Θ) ↦Λ_t[f](𝗆) is continuous (for fixed t∈[0,1] and 𝗆∈𝒟([0,1],𝒫(Θ))) relative to the topology induced by ‖ f‖_𝒞^∞(Θ).
Hence, we obtain that a.s. for all t∈[0,1] and f∈𝒞^∞(Θ), Λ_t[f](μ^*)=0. The proof of the proposition is thus complete.
§.§.§ Uniqueness and end of the proof of Theorem <ref>
There exists a unique solution to (<ref>) in 𝒞([0,1],𝒫(Θ)).
First of all, the fact that there is a solution to (<ref>) is provided by Propositions <ref>, <ref> and <ref>.
The proof of the fact that there is a unique solution to (<ref>) relies on the same arguments as those used in the proof of <cit.>.
For μ∈𝒫(𝐑^d+1), we introduce v[μ]:𝐑^d+1→𝐑^d+1 defined, for θ=(m,ρ)∈𝐑^d+1, by
v[μ](θ)=
-η∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)-η∇_θ𝒟_ KL(q_θ^1|P_0^1).
In addition, if μ̅∈𝒞([0,1],𝒫(Θ)) is a solution to (<ref>), it also satisfies (<ref>) with test functions f∈𝒞^∞_c( 𝐑^d+1). Then, adopting the terminology of <cit.>,
any solution μ̅ to (<ref>) is a weak solution[We mention that
according to <cit.>, the
two notions of solutions of (<ref>) (namely the weak
solution and the distributional solution) are equivalent.]
on [0,T] of the measure-valued equation
∂_tμ̅_t=div( v[μ̅_t]μ̅_t)
μ̅_0=μ_0.
Let us now prove that:
* There exists C>0 such that for all μ∈𝒫(𝐑^d+1) and θ∈𝐑^d+1,
|J_θ v[μ](θ)|≤ C.
* There exists C>0 such that for all μ̅∈𝒞([0,1],𝒫(Θ)) solution to (<ref>), 0≤ s,t≤ 1, and θ∈𝐑^d+1,
| v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|.
* There exists L'>0 such that for all μ,ν∈𝒫_1(𝐑^d+1),
sup_θ∈𝐑^d+1| v[μ](θ)- v[ν](θ)|≤ L'𝖶_1(μ,ν).
Before proving the three items above, we quickly conclude the proof of the proposition. Items 1 and 2 above imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz continuous over [0,1]×𝐑^d+1 when μ̅∈𝒞([0,1],𝒫(Θ)) is a solution to (<ref>). Since μ̅∈𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫(𝐑^d+1)), this allows to use the representation theorem <cit.> for the solution of (<ref>) in 𝒞([0,1],𝒫(𝐑^d+1)), i.e. it holds:
∀ t∈ [0,1], μ̅_t=ϕ_t#μ_0,
where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1.
Using Equation (<ref>) and the fact that 𝒞([0,1],𝒫(Θ))⊂𝒞([0,1],𝒫_1(𝐑^d+1)), together with Item 3 above and the same arguments as those used in the proof of <cit.> (which, we recall, is based on estimates in Wasserstein distances between two solutions of (<ref>) derived in <cit.>), one deduces that there is a unique solution to (<ref>).
Let us prove Item 1.
Recall g(ρ)= ln(1+e^ρ). The functions
ρ↦ g”(ρ)g(ρ), ρ↦ g'(ρ), ρ↦g'(ρ)/g(ρ), and ρ↦g”(ρ)/g(ρ)
are bounded on 𝐑. Thus, in view of (<ref>), ‖ Hess_θ 𝒟_ KL(q_θ^1|P_0^1)‖_∞,𝐑^d+1<+∞.
On the other hand, by A1 and A3, for x∈𝖷, z∈𝐑^d, θ∈Θ↦ϕ(θ,z,x) is smooth and
there exists C>0, for all x∈𝖷, θ∈𝐑^d+1, z∈𝐑^d:
| Hess_θϕ(θ,z,x) | ≤ C(𝔟(z)^2+𝔟(z)).
This bound allows us to differentiate under the integral signs in (<ref>) and proves that |J_θ∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)|≤ C, where C>0 is independent of μ∈𝒫(Θ) and θ∈Θ. The proof of Item 1 is complete.
Let us prove Item 2. Let μ̅∈𝒞([0,1],𝒫(Θ)) be a solution to (<ref>), 0≤ s≤ t≤ 1, and θ∈𝐑^d+1. We have
v[μ̅_t](θ)- v[μ̅_s](θ)=
-η∫_𝖷×𝖸⟨ϕ(·,·,x),(μ̅_t-μ̅_s)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y).
Let z∈𝐑^d and x∈𝖷. By A1 and A3, ϕ(·,z,x)∈𝒞^∞(Θ). Therefore, by (<ref>),
⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩ = -η∫_s^t ∫_𝖷×𝖸⟨ϕ(·,·,x')-y,μ̅_r⊗γ⟩⟨∇_θϕ(·,z,x)·∇_θϕ(·,·,x'),μ̅_r⊗γ⟩π( x', y) r
-η∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r
We have ‖∇_θ𝒟_ KL(q_θ^1|P_0^1)‖_∞,Θ<+∞. Using also (<ref>)
and the fact that 𝖷×𝖸 is a compact (see A2), it holds:
|⟨ϕ(·,z,x),μ̅_t-μ̅_s⟩|≤ C 𝔟(z)|t-s|.
Hence, for all x'∈𝖷,
|⟨ϕ(·,·,x'),(μ̅_t-μ̅_s)⊗γ⟩|≤⟨|⟨ϕ(·,·,x'),μ̅_t-μ̅_s⟩|,γ⟩≤ C|t-s|.
Thus, by (<ref>) and (<ref>), | v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|. This ends the proof of Item 2.
Let us now prove Item 3. Fix μ,ν∈𝒫_1(𝐑^d+1) and θ∈𝐑^d+1.
We have
v[μ](θ)- v[ν](θ)= -η∫_𝖷×𝖸⟨ϕ(·,·,x),( μ -ν)⊗γ⟩⟨∇_θϕ(θ,·,x),γ⟩π( x, y)
For all x∈𝖷, using (<ref>) and (<ref>), it holds:
|⟨ϕ(·,·,x),(μ-ν)⊗γ⟩| ≤∫_𝐑^d|⟨ϕ(·,z,x),μ⟩-⟨ϕ(·,z,x),ν⟩|γ(z) z
≤ C ∫_𝐑^d𝖶_1(μ,ν)𝔟(z)γ(z) z≤ C 𝖶_1(μ,ν).
Finally, using in addition (<ref>) and (<ref>), we deduce Item 3.
This ends the proof of the proposition.
We are now ready to prove Theorem <ref>.
Recall Lemma <ref> ensures that a.s. (μ^N)_N≥1⊂𝒟([0,1],𝒫(Θ)). By Proposition <ref>, this sequence is relatively compact. Let μ^*∈𝒟([0,1],𝒫(Θ)) be a limit point. Along some subsequence N', it holds:
μ^N' converges in distribution to μ^*.
In addition, a.s. μ^*∈𝒞([0,1],𝒫(Θ)) (by Proposition <ref>) and μ^* satisfies (<ref>) (by Proposition <ref>). By Proposition <ref>, (<ref>) admits a unique solution μ̅∈𝒞([0,1],𝒫(Θ)). Hence, a.s. μ^*=μ̅. Therefore,
μ^N' converges in distribution to μ̅.
Since the sequence (μ^N)_N≥1 admits a unique limit point, the whole sequence converges in distribution to μ̅. The convergence also holds in probability since μ̅ is deterministic. The proof of Theorem <ref> is complete.
§.§ Proof of Lemma <ref>
In this section we prove Lemma <ref>.
We start with the following simple result.
Let T>0, N≥ 1, and c_1>0. Consider a sequence (u_k)_0≤ k≤⌊ NT⌋⊂𝐑_+ for which there exists v_0 such that u_0≤ v_0 and for all 1≤ k≤⌊ NT⌋, u_k≤ c_1 (1+1/N∑_ℓ=0^k-1u_ℓ). Then, for all 0≤ k≤⌊ NT⌋, u_k≤ v_0e^c_1T.
Define v_k=c_1(1+1/N∑_ℓ=0^k-1v_ℓ). For all 0≤ k≤⌊ NT⌋, u_k≤ v_k and v_k=v_k-1(1+c_1/N). Hence v_k=v_0 (1+ c_1/N)^k≤ v_0(1+ c_1/N)^⌊ NT⌋≤ v_0e^c_1T.
This ends the proof of the Lemma.
Since ρ↦ g'(ρ) and ρ↦ g'(ρ)/g(ρ) are bounded continuous functions over 𝐑, and since |g(ρ)|≤ C(1+|ρ|), according to (<ref>), there exists c>0, for all θ∈𝐑^d+1,
|∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|).
All along the proof, C>0 is a constant independent of N≥ 1, T>0, i∈{1,…, N}, 1≤ k≤⌊ NT⌋, (x,y)∈𝖷×𝖸, θ∈𝐑^d+1, and z∈𝐑^d, which can change from one occurrence to another.
It holds:
|θ_k^i|≤ |θ_0^i|+ ∑_ℓ=0^k-1|θ_ℓ+1^i-θ_ℓ^i|.
Using (<ref>), we have, for 0≤ℓ≤ k-1,
|θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^N|(⟨ϕ(θ_ℓ^j,·,x_ℓ),γ⟩-y_ℓ)⟨∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩|
+ η/N^2|⟨(ϕ(θ_ℓ^i,·,x_ℓ)-y_ℓ)∇_θϕ(θ_ℓ^i,·,x_ℓ),γ⟩| +η/N |∇_θ𝒟_ KL(q_θ_ℓ^i^1|P_0^1)|.
For all θ∈𝐑^d+1, z∈𝐑^d, (x,y)∈𝖷×𝖸, we have, by A2 and A3, since ϕ(θ,z,x)=s(Ψ_θ(z),x),
|ϕ(θ,z,x)-y|≤ C.
Moreover, we have ∇_θϕ(θ,z,x)=∇_1s(Ψ_θ(z),x) J_θΨ_θ(z) (here ∇_1s refers to the gradient of s w.r.t. its first variable). By A3, |∇_1s(Ψ_θ(z),x)|≤ C and, hence, denoting by J_θ the Jacobian w.r.t. θ, using (<ref>),
|∇_θϕ(θ,z,x)|≤ C|J_θΨ_θ(z)|≤ C𝔟(z).
Therefore, by (<ref>),
⟨|∇_θϕ(θ,·,x)|,γ⟩≤ C.
Hence, we obtain, using (<ref>) and (<ref>),
|θ_ℓ+1^i-θ_ℓ^i| ≤η/N^2∑_j=1,j≠ i^NC+η/N^2C + cη/N(1+|θ_ℓ^i|) ≤C/N(1+ |θ_ℓ^i|).
Using A4, there exists K_0>0 such that a.s. for all i, |θ_0^i|≤ K_0.
Then, from (<ref>) and (<ref>), for 1≤ k≤⌊ NT⌋, it holds:
|θ_k^i|≤ K_0 + C/N∑_ℓ=0^k-1(1+|θ_ℓ^i|)≤ K_0+CT+ C/N∑_ℓ=0^k-1 |θ_ℓ^i|≤ C_0,T(1+ 1/N∑_ℓ=0^k-1 |θ_ℓ^i|),
with C_0,T=max(K_0+CT, C)≤ K_0+C(1+T).
Then, by Lemma <ref> and A4, we have that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋, |θ_k^i|≤ K_0e^[K_0+C(1+T)]T.
The proof of Lemma <ref> is thus complete.
§ PROOF OF THEOREM <REF>
In this section, we assume A1→𝐀5
(where in A2, when k≥ 1, ℱ_k^N is now the one defined in (<ref>)) and the θ^i_k's (resp. μ^N) are those defined by (<ref>) for i∈{1,…,N} and k≥ 0 (resp. by (<ref>) for N≥ 1).
§.§ Preliminary analysis and pre-limit equation
§.§.§ Notation and weighted Sobolev embeddings
For J∈N and β≥0, let ℋ^J,β(𝐑^d+1) be the closure of the set
𝒞_c^∞(𝐑^d+1) for the norm
f_ℋ^J,β:=(∑_|k|≤ J∫_𝐑^d+1|∂_kf(θ)|^2/1+|θ|^2βθ)^1/2.
The space
ℋ^J,β(𝐑^d+1) is a separable Hilbert space and we denote its dual space by
ℋ^-J,β(𝐑^d+1) (see e.g. <cit.>).
The associated
scalar product on ℋ^J,β(𝐑^d+1) will be denoted
by ⟨·,·⟩_ℋ^J,β. For
Φ∈ℋ^-J,β(𝐑^d+1), we use the notation
⟨ f,Φ⟩_J,β= Φ[f], f∈ℋ^J,β(𝐑^d+1).
For ease of notation, and if no confusion is possible, we simply
denote ⟨ f,Φ⟩_J,β by ⟨ f,Φ⟩.
The set 𝒞^J,β_0(𝐑^d+1) (resp. 𝒞^J,β(𝐑^d+1)) is defined as the space of
functions f:𝐑^d+1→𝐑 with continuous
partial derivatives up to order J∈N such that
for all |k|≤ J, lim_|θ|→∞|∂_kf(θ)|/1+|θ|^β=0 (resp. ∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β<+∞).
The spaces 𝒞^J,β(𝐑^d+1) and 𝒞^J,β_0(𝐑^d+1) are endowed with the norm
f_𝒞^J,β:=∑_|k|≤ J sup_θ∈𝐑^d+1|∂_kf(θ)|/1+|θ|^β.
We note that
θ∈𝐑^d+1↦ (1-χ(θ))|θ|^α∈ℋ^J,β(𝐑^d+1) if β-α>(d+1)/2,
where χ∈𝒞_c^∞(𝐑^d+1) equals 1 near 0.
We recall that from
<cit.>, for m'>(d+1)/2 and α,j≥ 0, ℋ^m'+j,α(𝐑^d+1)↪𝒞_0^j,α(𝐑^d+1).
In the following, we consider γ_0,γ_1∈𝐑 and L_0∈𝐍 such that
γ_1>γ_0> d+1/2+1 and L_0> d+1/2 +1.
We finally recall the following standard result.
Let q>p≥ 1 and C>0. The set 𝒦_C^q:={μ∈𝒫_p(𝐑^d+1), ∫_𝐑^d+1|x|^qμ( x)≤ C} is compact.
§.§.§ Bound on the moments of the θ_k^i's
We have the following uniform bound in N≥ 1 on the moments of the sequence {θ_k^i, i∈{1,…,N}}_k= 0,…, ⌊ NT ⌋ defined by (<ref>).
Assume A1→ 𝐀5. For all T>0 and p≥ 1, there exists C>0 such that for all N≥1, i∈{1,…,N} and 0≤ k≤⌊ NT⌋,
𝐄[|θ_k^i|^p]≤ C.
Let p≥ 1.
By A4, 𝐄[|θ_0^i|^p]≤ C_p for all i∈{1,…,N}. Let T>0.
In the following C>0 is a constant independent of N≥1, i∈{1,…,N}, and 1≤ k≤⌊ NT⌋.
Using (<ref>), the fact that ϕ is bounded, 𝖸 is bounded, and (<ref>), we have, for 0≤ n ≤ k-1,
|θ_n+1^i-θ_n^i| ≤C/N^2B∑_j=1^N∑_ℓ=1^B 𝔟(𝖹^i,ℓ_n) +C/N |∇_θ𝒟_ KL(q_θ_n^i^1|P_0^1)|
≤C/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_n)) +C/N (1+|θ_n^i|),
where we have also used (<ref>) for the last inequality.
Let us recall the following convexity inequality: for m,p≥ 1
and x_1,…,x_m∈𝐑_+,
(∑_n=1^mx_n)^p≤ m^p-1∑_n=1^mx_n^p.
Using (<ref>), A1 with q=p, and the fact that 1≤ k ≤⌊ NT⌋, one has setting u_k=𝐄[|θ_k^i|^p], u_k≤ C (1+1/N∑_n=0^k-1u_n). The result then follows from
Lemma <ref>.
§.§.§ Pre-limit equation
In this section, we derive the pre-limit equation for μ^N defined by (<ref>). For simplicity we will keep the same notations as those introduced in Section <ref>, though these objects will now be defined with θ^i_k set by (<ref>), and on 𝒞^2,γ_1(𝐑^d+1), for all integer k≥ 0, and all time t≥ 0. Let f∈𝒞^2,γ_1(𝐑^d+1).
Then, set for k≥ 0,
𝐃_k^N[f] =-η/N^3∑_i=1^N∑_j=1,j≠ i^N∫_𝖷×𝖸 (⟨ϕ(θ_k^j,·,x),γ⟩-y )⟨∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,·,x),γ⟩π( x, y)
-η/N^2∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),ν_k^N⊗γ⟩π( x, y).
Note that 𝐃_k^N above is the one defined in (<ref>) but now on 𝒞^2,γ_1(𝐑^d+1) and with θ^i_k defined by (<ref>).
For k≥ 0, we set
𝐌_k^N[f]= -η/N^3B∑_i,j=1^N ∑_ℓ=1^B(ϕ(θ_k^j,𝖹_k^j,ℓ,x_k)-y_k)∇_θ f(θ_k^i)·∇_θϕ(θ_k^i,𝖹_k^i,ℓ,x_k)-𝐃_k^N[f].
By Lemma <ref> together with (<ref>) and (<ref>), 𝐌_k^N[f] is integrable.
Also, using A5 and the fact that θ_k^j is ℱ_k^N-measurable (see (<ref>)),
𝐄 [𝐌_k^N[f]|ℱ_k^N ]=0.
Set 𝐌_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐌_k^N[f], t≥ 0. We now extend the definition of 𝐖_t^N[f]
and 𝐑_k^N[f] in (<ref>) and (<ref>) to any time t≥ 0, k≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1), and with θ^i_k set by (<ref>). We then set
𝐑_t^N[f]=∑_k=0^⌊ Nt⌋-1𝐑_k^N[f], t≥ 0.
With the same algebraic computations as those made in Section <ref>, one obtains the following pre-limit equation: for N≥ 1, t≥ 0, and f∈𝒞^2,γ_1(𝐑^d+1),
⟨ f,μ_t^N⟩-⟨ f,μ_0^N⟩ =-η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,μ_s^N⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
-η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ s
+η/N∫_0^t∫_𝖷×𝖸⟨⟨ϕ(·,·,x)-y,γ⟩⟨∇_θ f·∇_θϕ(·,·,x),γ⟩,μ_s^N⟩π( x, y) s
-η/N∫_0^t∫_𝖷×𝖸⟨(ϕ(·,·,x)-y)∇_θ f·∇_θϕ(·,·,x),μ_s^N⊗γ⟩π( x, y) s
+ 𝐌_t^N[f] +𝐖_t^N[f]+ 𝐑_t^N[f].
We will now show that the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
§.§ Relative compactness and convergence to the limit equation
§.§.§ Relative compactness in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1))
In this section we prove the following result.
Assume A1→𝐀5. Recall γ_0> d+1/2+1.
Then, the sequence (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
We start with the following lemma.
Assume A1→ 𝐀5. Then, ∀ T>0 and f∈𝒞^2,γ_1(𝐑^d+1),
sup_N≥1𝐄[sup_t∈[0,T]⟨ f,μ_t^N⟩^2]<+∞.
Let T>0. In what follows, C>0 is a constant independent of f∈𝒞^2,γ_1(𝐑^d+1), (s,t)∈ [0,T]^2, and z∈𝐑^d which can change from one occurrence to another. We have, by A4, 𝐄[⟨ f,μ_0^N⟩^2]≤ C f_𝒞^2,γ_1^2.
By (<ref>) and (<ref>), it holds:
sup_t∈[0,T]⟨ f,μ_t^N⟩^2 ≤ C[ f_𝒞^2,γ_1^2+ ∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s
+∫_0^ T | ⟨ |∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1) |,μ_s^N⟩ | ^2 s
+1/N^2∫_0^T∫_𝖷×𝖸 |⟨⟨ |∇_θ f·∇_θϕ(·,·,x) |,γ⟩,μ_s^N⟩ | ^2 π( x, y) s
+ sup_t∈[0,T] |𝐌_t^N[f]|^2 +sup_t∈[0,T] |𝐖_t^N[f]|^2+ sup_t∈[0,T] |𝐑_t^N[f]|^2].
We have using (<ref>), for s∈ [0,T] and z∈𝐑^d,
| ∇_θ f (θ^i_⌊ Ns⌋) ·∇_θϕ(θ^i_⌊ Ns⌋,z,x)|≤ C f_𝒞^1,γ_1𝔟(z) (1+|θ^i_⌊ Ns⌋|^γ_1).
Thus, using Lemma <ref>,
𝐄[ ⟨⟨|∇_θ f·∇_θϕ(·,·,x)|,γ⟩ ,μ_s^N⟩^2 ]≤ Cf_𝒞^1,γ_1^2.
Using (<ref>), for s∈ [0,T], it holds:
| ∇_θ f(θ^i_⌊ Ns⌋)·∇_θ𝒟_ KL(q_θ^i_⌊ Ns⌋^1|P_0^1) | ≤ C f_𝒞^1,γ_1 (1+|θ^i_⌊ Ns⌋|^γ_1+1).
Thus, using Lemma <ref>,
𝐄 [ | ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ_s^N⟩ | ^2 ]≤ Cf_𝒞^1,γ_1^2.
On the other hand, we have using (<ref>):
sup_t∈ [0,T]|𝐌_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1| 𝐌_k^N[f]|^2.
Recall (<ref>). By (<ref>), (<ref>), A1, and (<ref>), it holds:
|𝐃_k^N[f]|^2≤ Cf_𝒞^1,γ_1^2 [1/N^4∑_i≠ j=1^N (1+|θ^i_k|^2γ_1)+ 1/N^4 (1+⟨ |· |^2γ_1, ν_k^N⟩)]≤C/N^2f_𝒞^1,γ_1^2 (1+|θ^i_k|^2γ_1)
and
|𝐌_k^N[f]|^2≤C/N^4B∑_i,j=1^N ∑_ℓ=1^Bf^2_𝒞^1,γ_1 |𝔟(𝖹_k^i,ℓ)|^2 (1+|θ^i_⌊ Ns⌋|^2γ_1)+ |𝐃_k^N[f]|^2.
By Lemma <ref> and A1, one deduces that
𝐄[|𝐌_k^N[f]|^2]≤Cf_𝒞^1,γ_1^2/N^2.
Going back to (<ref>), we then have 𝐄[sup_t∈ [0,T]|𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2.
Using the same arguments as those used so far,
one also deduces that for t∈ [0,T]
sup_t∈[0,T]|𝐖_t^N[f]|^2 ≤Cf_𝒞^1,γ_1^2/N^2sup_t∈[0,T] (1+⟨ |· |^γ_1+1, ν_⌊ Nt⌋^N⟩)^2
= Cf_𝒞^1,γ_1^2/N^2max_0≤ k≤⌊ NT⌋(1+⟨ |· |^γ_1+1, ν_k^N⟩)^2
≤Cf_𝒞^1,γ_1^2/N^2∑_k=0^⌊ NT⌋ (1+⟨ |· |^γ_1+1, ν_k^N⟩)^2.
and thus
𝐄[sup_t∈[0,T]|𝐖_t^N[f]|^2] ≤Cf_𝒞^1,γ_1^2/N.
Let us finally deal with the term involving 𝐑_t^N[f].
One has using (<ref>):
sup_t∈[0,T]|𝐑_t^N[f]|^2≤⌊ NT⌋∑_k=0^⌊ NT⌋-1|𝐑_k[f]|^2.
For 0≤ k≤⌊ NT⌋-1, we have, from (<ref>),
|𝐑_k^N[f]|^2 ≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ̂_k^i|^γ_1)^2
≤Cf_𝒞^2,γ_1^2/N∑_i=1^N|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1).
Using (<ref>),
|θ_k+1^i-θ_k^i|^4≤ C[1/N^4+|θ_k^i|^4/N^4+1/N^4B∑_ℓ=1^B|𝔟(𝖹_k^i,ℓ)|^4].
By Lemma <ref> and A1, it then holds
𝐄[|θ_k+1^i-θ_k^i|^4(1+|θ_k+1^i|^2γ_1+|θ_k^i|^2γ_1)] ≤C/N^4.
Hence, one deduces that
𝐄[sup_t∈[0,T]|𝐑_t^N[f]|^2]≤ C f_𝒞^2,γ_1^2 /N^2.
This ends the proof of Lemma <ref>.
Assume A1→𝐀5. Let 0<ϵ<γ_1-γ_0. For every T>0,
sup_N≥1𝐄[sup_t∈[0,T]∫_𝐑^d+1|x|^γ_0+ϵμ_t^N( x) ] <+∞.
Apply Lemma <ref> with f:θ↦(1-χ)|θ|^γ_0+ϵ∈𝒞^2,γ_1(𝐑^d+1).
Assume A1→𝐀5. Let T>0 and f∈𝒞^2,γ_1(𝐑^d+1). Then, there exists C>0 such that for all δ>0 and 0≤ r<t≤ T such that t-r≤δ, one has for all N≥ 1,
𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2]≤ C (δ^2+δ/N+ 1/N).
Using (<ref>), Jensen's inequality, (<ref>), (<ref>), and (<ref>), one has for f∈𝒞^2,γ_1(𝐑^d+1),
𝐄[|⟨ f,μ_t^N⟩ -⟨ f,μ_r^N⟩ |^2] ≤ C[(t-r)^2(1+1/N^2)f_𝒞^1,γ_1^2 +𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]
+𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]+𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]].
We also have with the same arguments as those used just before (<ref>)
𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]=∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐄[|𝐌_k^N[f]|^2].
Using in addition (<ref>), one has
𝐄[ | ∑_k=⌊ Nr⌋^⌊ Nt⌋-1𝐌_k^N[f] |^2]≤ C (Nδ+1) f_𝒞^1,γ_1^2/ N^2. Note that with this argument, we also deduce that
𝐄[ | 𝐌_t^N[f]|^2]≤ Cf_𝒞^1,γ_1^2/ N.
On the other hand, by (<ref>) and (<ref>), one has
𝐄[| 𝐖_t^N[f] - 𝐖_r^N[f] |^2]≤ C f^2_𝒞^1,γ_1/ N and 𝐄[| 𝐑_t^N[f] - 𝐑_r^N[f] |^2]≤ C f_𝒞^2,γ_1^2/ N^2.
One then plugs all the previous estimates in (<ref>) to deduce the result of Lemma <ref>.
We are now in position to prove Proposition <ref>.
The proof consists in applying <cit.> with E= 𝒫_γ_0(𝐑^d+1) and 𝔽={𝖧_f, f∈𝒞^∞_c(𝐑^d+1)} where
𝖧_f: ν∈𝒫_γ_0(𝐑^d+1)↦⟨ f, ν⟩.
The set 𝔽 on 𝒫_γ_0(𝐑^d+1) satisfies Conditions <cit.>. Condition (4.8) there follows from Proposition <ref>, Lemma <ref>, and Markov's inequality.
Let us now show <cit.> is verified, i.e. that for all f∈𝒞^∞_c(𝐑^d+1), the family (⟨ f,μ^N⟩)_N≥1 is relatively compact in 𝒟(𝐑_+,𝐑).
To do this, it suffices to use Lemma <ref> and <cit.> (with ℋ_1=ℋ_2=𝐑 there).
In conclusion, according to <cit.>, the sequence (μ^N)_N≥1⊂𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) is relatively compact.
§.§.§ Limit points satisfy the limit equation (<ref>)
For f∈𝒞^1,γ_0-1(𝐑^d+1)
and t≥ 0,
we introduce for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)),
Φ_t[f]:𝗆↦ |⟨ f,𝗆_t⟩-⟨ f,μ_0⟩
+η∫_0^t∫_𝖷×𝖸⟨ϕ(·,·,x)-y,𝗆_s⊗γ⟩⟨∇_θ f·∇_θϕ(·,·,x),𝗆_s⊗γ⟩π( x, y) s
+ η∫_0^t ⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩ s |.
Note that Φ_t[f] is the function Λ_t[f] previously defined in (<ref>) for test functions f∈𝒞^1,γ_0-1(𝐑^d+1) and for 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)).
Assume A1→𝐀5. Let f∈𝒞^1,γ_0-1(𝐑^d+1). Then
Φ_t[f] is well defined. In addition, if a sequence (𝗆^N)_N≥ 1 converges to 𝗆 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), then, for all continuity point t≥ 0 of 𝗆, we have Φ_t[f](𝗆^N)→Φ_t[f](𝗆).
Using A1, and because 𝖸 is bounded and the function ϕ is bounded, 𝒢_1^x,y: θ↦⟨ϕ(θ,·,x)-y,γ⟩∈𝒞^∞_b(𝐑^d+1). In addition, for all multi-index α∈𝐍^d+1, there exists C>0, for all x,y∈𝖷×𝖸 and all θ∈𝐑^d+1, |∂_α𝒢_1^x,y(θ)|≤ C. The same holds for the function
𝒢_2^x: θ∈𝐑^d+1↦⟨∇_θϕ(θ,·,x), γ⟩.
Consequently, θ↦∇_θ f(θ)·𝒢_2^x(θ)∈𝒞^0,γ_0-1(𝐑^d+1)↪𝒞^0,γ_0(𝐑^d+1). Then, there exists C>0 independent of (x,y)∈𝖷×𝖸 and s∈ [0,t] such that
|⟨𝒢_1^x,y,𝗆_s⟩|≤ C,
and
|⟨∇_θ f·𝒢_2^x,𝗆_s⟩ |≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩.
Finally, the function θ↦∇_θ𝒟_ KL(q_θ^1|P_0^1) is smooth (see (<ref>)) and (<ref>) extends to all its derivatives, i.e. for all multi-index α∈𝐍^d+1,
there exists c>0, for all θ∈𝐑^d+1,
|∂_α∇_θ𝒟_ KL(q_θ^1|P_0^1)|≤ c(1+|θ|).
Thus, ∇_θ f·∇_θ𝒟_ KL(q_θ^1|P_0^1)∈𝒞^0,γ_0(𝐑^d+1) and for some C>0 independent of s∈ [0,t]
|⟨∇_θ f·∇_θ𝒟_ KL(q_ _·^1|P_0^1),𝗆_s⟩|≤ C ‖ f ‖_𝒞^1,γ_0-1⟨ 1+|.|^γ_0, 𝗆_s⟩.
Since in addition sup_s∈ [0,t]⟨ 1+|.|^γ_0, 𝗆_s⟩<+∞ (since s↦⟨ 1+|.|^γ_0, 𝗆_s⟩∈𝒟(𝐑_+,𝐑)), Φ_t[f] is well defined. To prove the continuity property of Φ_t[f] it then suffices to use the previous upper bounds together with similar arguments as those used in the proof of Lemma <ref> (see also <cit.>).
Assume A1→𝐀5. Let μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, μ^* satisfies a.s. Equation (<ref>).
Let us consider f∈𝒞_c^∞(𝐑^d+1) and μ^* be a limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Recall that by <cit.>, the complementary of the set
𝒞(μ^*)={t≥ 0, 𝐏(μ^*_t^-= μ^*_t)=1}
is at most countable. Let t_*∈𝒞(μ^*). Then, by Lemma <ref>, one has that 𝐏(μ^*∈𝖣(Φ_t_*[f]))=0. Thus, by the continuous mapping theorem, it holds
Φ_t_*[f](μ^N) converges in distribution to Φ_t_*[f](μ^*).
On the other hand, using (<ref>) and the estimates (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), it holds
lim_N→∞𝐄[Φ_t_*[f](μ^N)]=0.
Consequently, for all f∈𝒞_c^∞(𝐑^d+1) and t_*∈𝒞(μ^*), it holds a.s. Φ_t_*[f](μ^*)=0. On the other hand, for all ψ∈𝒞_c^∞(𝐑^d+1), 𝗆∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), and s≥ 0, the mapping
t≥ 0↦Φ_t[ψ ](𝗆)
is right-continuous, and the mapping
f∈ℋ^L_0,γ_0-1(𝐑^d+1)↦Φ_s[f](𝗆)
is continuous (because ℋ^L_0,γ_0-1(𝐑^d+1)↪𝒞_0^1,γ_0-1(𝐑^d+1)).
In addition, ℋ^L_0,γ_0-1(𝐑^d+1) admits a dense and countable subset of elements in 𝒞_c^∞(𝐑^d+1). Moreover, there exists a countable subset 𝒯_μ^* of
𝒞(μ^*) such that for all t≥ 0 and ϵ>0, there exists s∈𝒯_μ^*, s∈ [t,t+ϵ]. We prove this claim. Since ℝ_+ is a metric space, 𝒞(μ^*) is separable and thus admits a dense subset 𝒪_μ^*. Since [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*)≠∅, there exists u∈ [t+ϵ/4,t+3ϵ/4]∩𝒞(μ^*). Consider now s∈𝒪_μ^* such that |s-u|≤ϵ/4. It then holds t≤ s≤ t+ ϵ, proving the claim with 𝒯_μ^*=𝒪_μ^*.
Hence, a classical argument yields that a.s., for all f∈ℋ^L_0,γ_0-1(𝐑^d+1) and t≥ 0, Φ_t[f](μ^*)=0. Note also that 𝒞^∞_b(𝐑^d+1)⊂ℋ^L_0,γ_0-1(𝐑^d+1) since 2γ_0>d+1. This ends the proof of the proposition.
§.§ Uniqueness of the limit equation and end of the proof of Theorem <ref>
In this section, we prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)). To this end, we first need to prove that every limit points of (μ^N)_N≥ 1 a.s. belongs to 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
§.§.§ Limit points belong to 𝒞(𝐑_+,𝒫_1(𝐑^d+1))
Assume A1→𝐀5. Let μ^*∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be a limit point of (μ^N)_N≥ 1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Then, a.s. μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
Note that since 𝖶_1≤𝖶_γ_0, μ^N' converges in distribution to μ^* also in 𝒟(𝐑_+,𝒫_1(𝐑^d+1)), along some subsequence N'. According to <cit.>, μ^*∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) a.s. if for all T>0, lim_N→ +∞𝐄[ sup_t∈ [0,T]𝖶_1(μ^N_t_-,μ^N_t) ]=0. Using (<ref>), this is equivalent to proving that
lim_N→ +∞𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ]=0.
Let us consider T>0 and a Lipschitz function f:𝐑^d+1→𝐑 such that ‖ f‖_Lip≤ 1. We have ⟨ f,μ_t^N⟩=⟨ f,μ_0^N⟩+ ∑_k=0^⌊ Nt⌋-1⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩ (with the usual convention ∑_0^-1=0). Thus the discontinuity points of t∈ [0,T]↦⟨ f,μ_t^N⟩ lie exactly at {1/N, 2/N,…, ⌊ NT⌋/N} and
|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩|≤max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩|, ∀ t∈ [0,T], f Lipschitz.
Pick k=0,…,⌊ NT⌋-1. We have by (<ref>),
|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ≤1/N∑_i=1^N |θ_k+1^i-θ_k^i| ≤C/N∑_i=1^N[ 1/NB∑_ℓ=1^B (1+𝔟(𝖹^i,ℓ_k)) +1/N (1+|θ_k^i|)]=:d_k^N
Hence, it holds:
|d_k^N|^2 ≤C/N∑_i=1^N[ 1/N^2B∑_ℓ=1^B (1+𝔟^2(𝖹^i,ℓ_k)) +1/N^2 (1+|θ_k^i|^2)],
where, thanks to Lemma <ref> and A1, 𝐄[|d_k^N|^2]≤ C/N^2 for some C>0 independent of N≥ 1 and k=0,…,⌊ NT⌋-1.
Thus, using (<ref>) and (<ref>),
𝐄[ sup_t∈ [0,T]sup_‖ f‖_Lip≤ 1|⟨ f,μ^N_t_-⟩-⟨ f,μ^N_t⟩| ] ≤𝐄[ sup_‖ f‖_Lip≤ 1max_k=0,…,⌊ NT⌋-1|⟨ f,ν_k+1^N⟩-⟨ f,ν_k^N⟩| ]
≤𝐄[ max_k=0,…,⌊ NT⌋-1d_k^N ]
≤𝐄[ √(∑_k=0^⌊ NT⌋-1 |d_k^N|^2 )]
≤√(𝐄[ ∑_k=0^⌊ NT⌋-1 |d_k^N|^2 ])≤C/√(N).
This concludes the proof of Proposition <ref>.
§.§.§ Uniqueness of the solution to (<ref>)
There is a unique solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) to (<ref>).
First of all, the existence of a solution is provided by Propositions <ref>, <ref> and <ref>.
Let us now prove that there is a unique solution to (<ref>) in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
Recall the definition of v[μ] in (<ref>). We claim that
for all T>0 and
all solution μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) of (<ref>), there exists C>0 such that
| v[μ̅_t](θ)- v[μ̅_s](θ)|≤ C|t-s|, for all 0≤ s ≤ t≤ T and θ∈𝐑^d+1.
The proof of item (<ref>) is the same as the one made for Item 2 in Proposition <ref> since it holds using (<ref>) and (<ref>), for all 0≤ s≤ t≤ T and z∈𝐑^d,
|∫_s^t⟨∇_θϕ(·,z,x)·∇_θ𝒟_ KL(q_ _·^1|P_0^1),μ̅_r⟩ r | ≤ C𝔟(z)∫_s^t ⟨ (1+|·| ), μ̅_r⟩ r
≤ C𝔟(z) max_r∈ [0,T]⟨ (1+|·| ), μ̅_r⟩ |t-s|.
We now conclude the proof of Proposition <ref>.
Item 1 in the proof of Proposition <ref> and (<ref>) imply that v(t,θ)= v[μ̅_t](θ) is globally Lipschitz on [0,T]×𝐑^d+1, for all T>0, when μ̅∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)) is a solution of (<ref>). Since in addition a solution μ̅ to (<ref>) is a weak solution on 𝐑_+ to (<ref>) in 𝒞(𝐑_+,𝒫(𝐑^d+1)), it holds by <cit.>:
∀ t≥ 0, μ̅_t=ϕ_t#μ_0,
where ϕ_t is the flow generated by the vector field v[μ̅_t](θ) over 𝐑^d+1.
Together with Item 3 in the proof of Proposition <ref> and using the same arguments as those used in Step 3 of the proof of <cit.>, two solutions agree on [0,T] for all T>0. One then deduces the uniqueness of the solution to (<ref>). The proof of Proposition <ref> is complete.
We are now in position to end the proof of Theorem <ref>.
By Proposition <ref>, (μ^N)_N≥1 is relatively compact in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Let μ^1,μ^2∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) be two limit points of this sequence. By Proposition <ref>, a.s. μ^1,μ^2∈𝒞(𝐑_+,𝒫_1(𝐑^d+1)). In addition, according to Proposition <ref>, μ^1 and μ^2 are a.s. solutions of (<ref>). Denoting by μ̅∈𝒞(𝐑_+,𝒫_γ_0(𝐑^d+1)) the unique solution to (<ref>) (see Proposition <ref>), we have a.s.
μ^1 =μ̅ and μ^2=μ̅ in 𝒞(𝐑_+,𝒫_1(𝐑^d+1)).
In particular μ̅∈𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1) ) and μ^j=μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)), j∈{1,2}. As a consequence, μ̅ is the unique limit point of (μ^N)_N≥1 in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)) and the whole sequence (μ^N)_N≥1 converges to μ̅ in 𝒟(𝐑_+,𝒫_γ_0(𝐑^d+1)). Since μ̅ is deterministic, the convergence also holds in probability. The proof of Theorem <ref> is complete.
Let us now prove Proposition <ref>.
Any solution to (<ref>) in 𝒞([0,T],𝒫(Θ_T)) is a solution to (<ref>) in 𝒞([0,T],𝒫_1( 𝐑^d+1)). The result follows from Proposition <ref>.
|
http://arxiv.org/abs/2307.06293v1 | 20230712164006 | Peru Mining: Analysis and Forecast of Mining Production in Peru Using Time Series and Data Science Techniques | [
"Yhack Bryan Aycaya-Paco",
"Lindell Dennis Vilca-Mamani",
"Fred Torres-Cruz"
] | stat.OT | [
"stat.OT"
] |
PERU MINING: ANALYSIS AND FORECAST OF MINING PRODUCTION IN PERU USING TIME SERIES AND DATA SCIENCE TECHNIQUES
Aycaya-Paco Yhack Bryan
Facultad de Ingeniería Estadística e Informática
Universidad Nacional del Altiplano
Puno, Perú
Email: [email protected]
Torres-Cruz Fred
Facultad de Ingeniería Estadística e Informática
Universidad Nacional del Altiplano
Puno, Perú
Email: [email protected]
Vilca-Mamani Lindell Dennis
Facultad de Ingeniería Estadística e Informática
Universidad Nacional del Altiplano
Puno, Perú
Email: [email protected]
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Peruvian mining plays a crucial role in the country's economy, being one of the main producers and exporters of minerals worldwide. In this project, an application was developed in RStudio that utilizes statistical analysis and time series modeling techniques to understand and forecast mineral extraction in different departments of Peru. The application includes an interactive map that allows users to explore Peruvian geography and obtain detailed statistics by clicking on each department. Additionally, bar charts, pie charts, and frequency polygons were implemented to visualize and analyze the data. Using the ARIMA model, predictions were made on the future extraction of minerals, enabling informed decision-making in planning and resource management within the mining sector. The application provides an interactive and accessible tool to explore the Peruvian mining industry, comprehend trends, and make accurate forecasts. These predictions for 2027 in total annual production are as follows: Copper = 2,694,957 MT, Gold = 72,817.47 kg Fine, Zinc = 1,369,649 MT, Silver = 3,083,036 MT, Lead = 255,443 MT, Iron = 15,776,609 MT, Tin = 29,542 MT, Molybdenum = 35,044.66 MT, and Cadmium = 724 MT. These predictions, based on historical data, provide valuable information for strategic decision-making and contribute to the sustainable development of the mining industry in Peru.
Mining, Minerals, Statistical analysis, Time series, Prediction.
§ INTRODUCTION
Peruvian mining has played a fundamental role in the country's economy, contributing significantly to its growth and development <cit.>. In this project, we focus on the analysis and optimization of the Peruvian mining industry using advanced statistical analysis tools. Our objective is to gain a better understanding of the functioning of this industry and maximize its efficiency.
Peru is globally recognized for its abundance of mineral resources such as copper, gold, silver, and zinc. These resources have attracted foreign investments and positioned the country as one of the main producers and exporters of minerals. Mining has generated employment, driven the growth of local communities, and contributed to the development of infrastructure in mining regions <cit.>.
However, the mining industry also faces significant challenges. Efficient resource management, reducing environmental impact, and sustainability are key aspects that need to be addressed to ensure responsible mining development <cit.>. In this context, statistical analysis presents itself as a valuable tool to understand trends, patterns, and factors influencing mining production.
This project is based on previous research and innovative approaches in statistical analysis applied to Peruvian mining <cit.>. Studies have shown how statistical analysis can help identify improvement opportunities in mining production and optimize operational processes <cit.>. Additionally, the report from Peru's Ministry of Energy and Mines provides an overview of the current mining situation in the country and highlights the importance of efficient natural resource management.
In this context, we have developed the web application MineAnalytica using Shiny/R Studio. MineAnalytica is a powerful tool that allows visualization and analysis of mining production data using time series techniques and ARIMA and state space models <cit.>. Its aim is to predict trends and provide future estimates of metallic mining production in Perú.
MineAnalytica is built on a solid foundation of references and reliable sources. These include the Geological, Mining, and Metallurgical Institute (INGEMMET) <cit.>, which provides up-to-date geological and technical data on Peruvian mining, and the Central Reserve Bank of Peru <cit.>, which offers economic information and relevant statistics for the mining sector.
Time series techniques, supported by studies <cit.>, enable users to make accurate forecasts about mining production and anticipate possible future scenarios. This facilitates strategic decision-making and contributes to more efficient management of mineral resources <cit.>.
In summary, our project focuses on the analysis, visualization, and prediction of mining production in Peru. By using MineAnalytica as the main tool, supported by solid research and collaborations with leading institutions in the mining sector, we aim to provide users with an effective tool for informed and strategic decision-making. By facilitating access to updated data and advanced analysis tools, we contribute to the sustainable and responsible development of the mining industry in the country.
§ METHODOLOGY
In this section, we describe the methodology used to conduct our study on mining production in Peru and the development of the web application MineAnalytica.
§.§ Data Collection
We obtained the mining production data for Peru from the official portal of MINEM <cit.>. We selected two sets of data: monthly mining production from 2020 until the end of 2022 and annual mining production data from 1900. These datasets allow us to analyze both recent patterns and long-term trends in the Peruvian mining industry.
§.§ Data Analysis and Processing
We conducted a data processing and cleaning process to ensure the quality and consistency of the information <cit.>. During this stage, we applied techniques such as variable name correction, removal of empty or null data, and imputation of missing values using the k-Nearest Neighbors (k-NN) <cit.> algorithm to obtain accurate estimations.
Additionally, we also applied techniques to adjust the monthly data and appropriately adapt it for use in the ARIMA model. This involved performing transformations or adjustments to the frequency of the monthly data to meet the requirements of the model. In this way, we could fully leverage the potential of ARIMA models in our time series analysis.
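To make this step concrete, the following is a minimal R sketch of the cleaning and k-NN imputation workflow described above. The file name, the column handling, and the use of the VIM package's kNN() function are illustrative assumptions rather than the exact MineAnalytica pipeline.

library(readxl)
library(dplyr)
library(VIM)   # provides kNN(), one possible R implementation of k-NN imputation

raw <- read_excel("produccion_minera_mensual.xlsx")    # hypothetical file name

clean <- raw %>%
  rename_with(~ make.names(.x, unique = TRUE)) %>%     # correct inconsistent variable names
  filter(!if_all(everything(), is.na))                 # remove fully empty rows

imputed <- kNN(clean, k = 5, imp_var = FALSE)          # impute missing values with k = 5 neighbors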
§.§ Time Series Models
In this stage, we used time series models to analyze and predict mining production in Peru. Time series models allow us to capture and model temporal patterns and trends present in the data <cit.>.
For our study, we implemented two types of time series models: ARIMA (autoregressive integrated moving average) and state space.
§.§.§ ARIMA Model
This model consists of three main components: autoregressive (AR), integrated (I), and moving average (MA) <cit.>.
Autoregressive (AR) Component: The AR component refers to the linear dependence of a current observation on past values of the time series. It is represented as AR(p), where "p" is the order of the autoregressive component. The general formula for the AR(p) component is:
y(t) = c + φ_1 · y(t-1) + φ_2 · y(t-2) + … + φ_p · y(t-p) + ε(t)
where:
* y(t) is the current value of the time series.
* c is a constant.
* φ_1, φ_2, …, φ_p are the autoregressive coefficients.
* ε(t) is a random error term.
Moving Average (MA) Component: The MA component represents the linear dependence of a current observation on past error terms of the time series. It is represented as MA(q), where "q" is the order of the moving average component. The general formula for the MA(q) component is:
y(t) = c + θ_1 ·ε(t-1) + θ_2 ·ε(t-2) + … + θ_q ·ε(t-q) + ε(t)
where:
* y(t) is the current value of the time series.
* c is a constant.
* θ_1, θ_2, …, θ_q are the moving average coefficients.
* ε(t) is a random error term.
Integration (I): Integration is used to achieve stationarity in the time series. If the series is not stationary, first-order differencing (d = 1) or higher-order differencing can be applied to make it stationary. The general formula for differencing is:
Δ y(t) = y(t) - y(t-1)
where:
* Δ y(t) is the difference between the current value and the previous value of the time series.
The ARIMA model combines the linear dependence on past values (AR), linear dependence on past errors (MA), and differencing to model and forecast time series. The parameters p, d, and q are chosen to fit the model to the data.
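To make the role of p, d, and q concrete, here is a small R sketch fitting a manually specified ARIMA model to an annual production series with the base arima() function; the series name copper_annual and the order (1,1,1) are illustrative assumptions, not the orders actually selected in the application.

# `copper_annual` is a hypothetical numeric vector of annual copper output (MT).
copper_ts <- ts(copper_annual, start = 1980, frequency = 1)

# d = 1 corresponds to one differencing step, Δy(t) = y(t) - y(t-1);
# it is shown explicitly here only for illustration, since arima() applies it internally.
diff_ts <- diff(copper_ts)

# ARIMA(1,1,1): one AR coefficient (φ_1), one difference, one MA coefficient (θ_1).
fit <- arima(copper_ts, order = c(1, 1, 1))
fit$coef                         # estimated φ_1 and θ_1

predict(fit, n.ahead = 5)$pred   # point forecasts for the next 5 years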
§.§.§ State Space Model
This model is another approach to analyze and predict time series. It consists of two main components: the state equation and the observation equation <cit.>.
State Equation: The state equation describes how the hidden state of the system evolves over time. It is represented as a linear relationship between the state at the current time and the state at the previous time, with a transition matrix. The general formula for the state equation is:
x(t) = A · x(t-1) + B · u(t) + w(t)
where:
* x(t) is the state at time t.
* A is the state transition matrix.
* B is the control matrix representing the influence of an external control signal u(t) on the state.
* u(t) is the control signal at time t.
* w(t) is the process noise.
Observation Equation: The observation equation relates the hidden state of the system to the measurements or observations that are made. It is represented as a linear relationship between the observations at the current time and the state at the current time, with an observation matrix. The general formula for the observation equation is:
y(t) = C · x(t) + v(t)
where:
* y(t) is the observation at time t.
* C is the observation matrix.
* v(t) is the observation noise.
In summary, the state space model uses a state equation to describe the evolution of the hidden state of the system and an observation equation to relate the state to the observations. The parameters A, B, C, and the noise matrices w(t) and v(t) are estimated to fit the model to the observed data.
Additionally, for the implementation of the models in the application, we utilized the auto.arima() and StructTS() functions from the forecast and stats libraries in R. These functions are essential for the analysis and forecasting of time series.
The auto.arima() function was employed to automatically select the optimal ARIMA model for the time series of mining production in Peru. On the other hand, the StructTS() function was used to fit a State Space model to the time series.
These functions enabled us to conduct detailed analysis and generate accurate forecasts for mining production. By selecting the appropriate ARIMA model and fitting a State Space model, we obtained valuable information for decision-making in the mining sector.
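As a hedged illustration of how these two calls fit together for a single series, the sketch below assumes a hypothetical monthly gold series gold_monthly and a 3-month horizon; it is not the actual application code.

library(forecast)   # auto.arima(), forecast()

# Hypothetical monthly series (kg fine), January 2020 to December 2022.
y <- ts(gold_monthly, start = c(2020, 1), frequency = 12)

# Automatic ARIMA selection: orders (p, d, q) chosen by information-criterion search.
fit_arima <- auto.arima(y)
fc_arima  <- forecast(fit_arima, h = 3)    # 3-month-ahead forecast

# Structural state space model (basic structural model: level, trend, seasonal);
# StructTS() is part of the base 'stats' package.
fit_ss <- StructTS(y, type = "BSM")
fc_ss  <- forecast(fit_ss, h = 3)

plot(fc_arima)   # quick visual check of the ARIMA forecast and its intervals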
§.§ Model Validation and Performance Evaluation
In this stage, we used residual analysis, including the Ljung-Box test and the Shapiro-Wilk normality test <cit.>, to assess the quality of fit of the ARIMA and State Space models <cit.>. Additionally, we generated bootstrap data to evaluate the performance of the application and obtain uncertainty estimates for mining production predictions <cit.>. These techniques allowed us to assess the accuracy and reliability of the models and the developed application.
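A minimal sketch of these checks is given below, reusing the fit_arima object from the previous sketch; the lag choice, the number of resamples, and the deliberately simplified residual bootstrap (which only perturbs the point forecast with resampled residuals) are illustrative assumptions.

res <- residuals(fit_arima)

# Ljung-Box test for residual autocorrelation and Shapiro-Wilk test for normality.
Box.test(res, lag = 12, type = "Ljung-Box", fitdf = length(fit_arima$coef))
shapiro.test(res)

# Simplified residual bootstrap: perturb the 3-step point forecast with resampled residuals.
set.seed(123)
h <- 3
point_fc <- as.numeric(forecast(fit_arima, h = h)$mean)
boot_paths <- replicate(999, point_fc + sample(res, size = h, replace = TRUE))
apply(boot_paths, 1, quantile, probs = c(0.025, 0.975))   # rough 95% uncertainty bands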
§.§ Application Development
The MineAnalytica application was developed in Shiny/RStudio, making use of the following R libraries and packages:
Shiny, Shinydashboard, Leaflet, sf, Readxl, Plotly, Lubridate, Openxlsx, DT, Dplyr, Tidyverse, Magick, Scales, Forecast, Shinyjs, and FontAwesome. These libraries and packages provided specific functionalities for data visualization, interactive chart generation, and prediction within the application.
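The stripped-down skeleton below indicates how these packages fit together (dashboard layout, clickable leaflet map, plotly chart for the selected department). The objects dept_sf and dept_data and the column names NOMBDEP, department, mineral, and production are hypothetical placeholders; this is a sketch of the architecture, not the MineAnalytica source code.

library(shiny)
library(shinydashboard)
library(leaflet)
library(plotly)

# `dept_sf`: hypothetical sf object with one polygon per department;
# `dept_data`: hypothetical long table with columns department, mineral, production.
ui <- dashboardPage(
  dashboardHeader(title = "MineAnalytica (sketch)"),
  dashboardSidebar(),
  dashboardBody(
    leafletOutput("map"),
    plotlyOutput("dept_plot")
  )
)

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet(dept_sf) %>%
      addTiles() %>%
      addPolygons(layerId = ~NOMBDEP)   # one clickable polygon per department
  })

  output$dept_plot <- renderPlotly({
    req(input$map_shape_click)                  # wait for a click on the map
    dep <- input$map_shape_click$id             # department identified by its layerId
    df  <- subset(dept_data, department == dep)
    plot_ly(df, x = ~mineral, y = ~production, type = "bar")
  })
}

shinyApp(ui, server)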
§ RESULTS
In this section, we will present the main results obtained from the analysis and processing of mining production data in Peru, as well as the predictions generated by the time series models implemented in the MineAnalytica application.
§.§ Prediction
The prediction is based on the analysis and processing of the collected mineral extraction data using the ARIMA model, implemented with R packages such as "forecast". This approach allows for fitting an optimal model to the historical data and generating estimations for future periods. The predictions for minerals and departments with limited historical data only forecast the next 3 months, while the total annual prediction extends up to 5 years, as the data span from 1980 to 2022.
To evaluate the performance of the model, the Ljung-Box test was applied. The test result for copper (p-value = 0.5986 at α = 0.05) indicates a good fit for that series. On this basis, the predictive model was applied to the other metals for the next 5 years. For the year 2027, the following quantities are estimated as predictions for the total annual extraction: Copper = 2,694,957 MT; Gold = 72,817.47 kg fine; Zinc = 1,369,649 MT; Silver = 3,083,036 MT; Lead = 255,443 MT; Iron = 15,776,609 MT; Tin = 29,542 MT; Molybdenum = 35,044.66 MT; and Cadmium = 724 MT.
These predictions are based on historical data and provide valuable information for strategic decision-making and resource management in the mining sector.
§.§.§ Annual Mineral Extraction Prediction
Using the full history of annual extraction data, the ARIMA model identifies long-term patterns and trends in total production and generates projections for the coming years. These forecasts are backed by statistical analysis and give users a more accurate, data-driven view of the expected evolution of Peru's total annual mineral output.
§.§.§ Mineral-based Prediction
These predictions enable users to anticipate the amount of mineral expected to be extracted in a specific period and make strategic decisions accordingly. Furthermore, the predictions can also be useful for demand planning, resource management, and decision-making in the mining sector.
§.§.§ Department-based Prediction
Using historical data of mineral extraction, the ARIMA model can identify patterns and trends in production and generate future projections. These predictions are backed by statistical analysis and allow users to gain more accurate and informed insight into future mineral production in Peru's departments.
It is important to note that predictions are subject to uncertainty, and results may vary based on different factors such as changes in demand, price fluctuations, or market conditions. However, the application of prediction techniques based on historical data can provide valuable guidance for decision-making and planning in the mining sector regarding specific mineral extraction in Peru.
§.§ Application
§.§.§ Data Processing
Various techniques were employed for data processing, including the K-Nearest Neighbors (KNN) algorithm and the Extract, Transform, Load (ETL) process <cit.>. The ETL process facilitated the extraction, transformation, and loading of the data, ensuring its quality and preparation for subsequent analysis. Within the transformation step, the KNN algorithm was used to impute missing values.
§.§.§ Graphs
Graphical representations are an effective way to understand statistics visually, and they are an essential component of this application.
Bar graphs and pie charts are widely used visual tools in data analysis. Bar graphs allow for clear and concise comparison of categories or variables, while pie charts highlight the relative distribution of data across categories, showing the proportion of each category in the dataset. Both representations are effective and easy to understand, providing a quick and clear visualization of the information.
Frequency polygon graphs: Frequency polygons are a useful tool for visualizing and analyzing the frequency distribution of continuous variables. They provide a clear graphical representation of patterns and trends in the data, enabling a deeper understanding of the distribution and variability of the variable.
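As a small illustration of how such charts can be produced with plotly from an aggregated production table, the sketch below assumes a hypothetical data frame prod_2022 with columns mineral and tonnes.

library(plotly)
library(dplyr)

totals <- prod_2022 %>%
  group_by(mineral) %>%
  summarise(tonnes = sum(tonnes), .groups = "drop")

plot_ly(totals, x = ~mineral, y = ~tonnes, type = "bar")            # bar chart of totals
plot_ly(totals, labels = ~mineral, values = ~tonnes, type = "pie")  # share of each mineral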
§.§.§ Map
By using this application, users can intuitively and visually explore the geography of Peru and obtain relevant data about each selected department. The application utilizes data visualization techniques and graphs to present information in a clear and understandable manner.
Clicking on a specific department on the map will display a popup window or section in the application showing relevant statistics. This provides a valuable tool for exploring and understanding the diversity and particularities of each region in the context of mining in Peru.
For those who wish to interact with the application, it can be accessed through the following link: https://yhackpaco.shinyapps.io/MineAnalytica/
§ DISCUSSION
The continuous growth of the mining industry is driven by long-term global trends. Contrary to initial predictions, in countries belonging to the Organization for Economic Cooperation and Development (OECD) <cit.>, the demand for minerals has not decreased as they reach higher levels of national economic development <cit.>. Our predictions confirm this trend.
In the field of mining safety and health, there is evidence of fatal accidents in this activity. Within the period from 2000 to 2006, it was found that thirty-three companies out of a total of eighty-four accounted for 80% of fatal accidents. Among these companies, six are considered large (18%), twenty-five are medium-sized (76%), and three are small (6%) <cit.>. One might expect this to translate into lower production among small companies, but the graphical analysis shows that their production is increasing.
The inherent variability of metal prices makes them difficult to predict, as they are subject to wide and unpredictable movements that can have lasting effects. This is evident in the marked year-to-year swings in growth rates: a recent example is the drastic change between 2008 and 2009, with growth of 37%, followed by several periods of negative variation starting in 2011 <cit.>. Although we share the view that such predictions carry considerable uncertainty, our statistical models can still project the extraction volumes of these metals over a five-year horizon.
A previous analysis of silver production based on data from 1986 to 2013 predicted a decrease in production by 2025 <cit.>. In contrast, our model predicts an increase in production for that year and onward.
These discussions provide an opportunity to analyze and reflect on the obtained results, as well as to highlight the practical and theoretical implications of this project in the context of the Peruvian mining industry.
§ CONCLUSIONS
Statistical analysis applied to the Peruvian mining industry provides valuable tools for understanding trends, patterns, and factors that influence mining production. This allows for the identification of improvement opportunities and optimization of operational processes.
The ARIMA model is an effective statistical methodology for analyzing and predicting time series data, such as mineral extraction in the Peruvian mining industry. It enables the identification of patterns and trends in historical data and generates predictions to support decision-making.
The application of the ARIMA methodology in the Peruvian mining industry can help anticipate future scenarios and support strategic decision-making. This can contribute to more efficient management of mineral resources and sustainable development of the industry.
Furthermore, to enhance the user experience, an interactive map of Peru was incorporated into the application. This allows for an intuitive visualization of the geographic distribution of mining regions and facilitates understanding of the importance and impact of the mining industry in different areas of the country. The inclusion of this interactive map enriches the user experience by providing a more comprehensive and engaging visual representation of mining data.
In conclusion, statistical analysis and the ARIMA model are essential tools for understanding and optimizing the Peruvian mining industry. They enable the identification of patterns, prediction of trends, and informed and strategic decision-making. By utilizing these tools, the responsible and sustainable development of the mining industry in Peru can be promoted.
§ ACKNOWLEDGEMENTS
We would like to express our sincere gratitude to the Universidad Nacional del Altiplano, in particular to the School of Statistical Engineering and Informatics, for giving us the invaluable opportunity to grow as professionals in their classrooms. Its commitment to academic excellence and the integral development of students has been fundamental in our formation.
We would like to extend our thanks to all our esteemed teachers, who have given us unconditional support from the very beginning. Their dedication and guidance have been a constant source of inspiration in our educational journey. In particular, we would like to highlight Professor Fred Torrez Cruz, whose motivation and tireless support pushed us to overcome the challenges and successfully complete this project.
Likewise, we cannot forget to mention our classmates, who have been an integral part of our educational journey. Their perspectives, collaboration and mutual support have enriched our training and propelled us to joint achievements. Through interaction and the exchange of ideas, we have grown as professionals and forged lasting friendships.
Finally, we would like to express our deep appreciation to our families and loved ones, who have given us their unconditional support throughout this process. Their encouragement and understanding have given us the strength to face the challenges and pursue our academic dreams.
|
http://arxiv.org/abs/2307.06013v1 | 20230712085120 | An Effective and Efficient Time-aware Entity Alignment Framework via Two-aspect Three-view Label Propagation | [
"Li Cai",
"Xin Mao",
"Youshao Xiao",
"Changxu Wu",
"Man Lan"
] | cs.AI | [
"cs.AI",
"cs.LG"
] |
Higgs amplitude mode in ballistic superconducting hybrid junctions
J. Cayssol
August 12, 2023
===================================================================
Entity alignment (EA) aims to find the equivalent entity pairs between different knowledge graphs (KGs), which is crucial to promote knowledge fusion. With the wide use of temporal knowledge graphs (TKGs), time-aware EA (TEA) methods appear to enhance EA. Existing TEA models are based on Graph Neural Networks (GNN) and achieve state-of-the-art (SOTA) performance, but it is difficult to transfer them to large-scale TKGs due to the scalability issue of GNN. In this paper, we propose an effective and efficient non-neural EA framework between TKGs, namely LightTEA, which consists of four essential components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative Learning. All of these modules work together to improve the performance of EA while reducing the time consumption of the model. Extensive experiments on public datasets indicate that our proposed model significantly outperforms the SOTA methods for EA between TKGs, and the time consumed by LightTEA is only dozens of seconds at most, no more than 10% of the most efficient TEA method.
§ INTRODUCTION
Knowledge graphs (KGs) describe the real world with structured facts. A fact consists of a head entity, a tail entity, and a relation connecting them, which can be formally represented as a triple (e_h, r, e_t), such as (George Porter, hasWonPrize, Nobel Prize in Chemistry). KGs have been widely used in information retrieval <cit.>, question answering <cit.>, and recommendation systems <cit.>.
Existing KGs ignore the temporal information which indicates when a fact occurred. In the real world, some facts only hold at a specific time. Therefore, Wikidata <cit.> and YAGO2 <cit.> add temporal information to represent the KGs more accurately, and some event KGs <cit.> also contain timestamps indicating when the events occurred. In temporal knowledge graphs (TKGs), a fact is expanded into a quadruple (e_h, r, e_t, τ), where τ represents the timestamp.
Entity alignment (EA) seeks to find the same entities in the real world between different KGs, which is important for knowledge fusion between multi-source and multi-lingual KGs. In recent years, the embedding-based EA methods <cit.> have been widely investigated, which represent the entities in the low-dimensional vector space, and calculate the similarity of these vectors to obtain the equivalent entity pairs. Earlier EA methods <cit.> are based on the translation model. However, since the inability of such models to effectively capture graph structures, graph neural networks (GNN)-based models <cit.> emerge and achieve superior performance. Due to the scalability limitation of GNN <cit.>, these models are not suitable for large-scale KGs. LightEA <cit.> significantly enhances the efficiency of EA by utilizing a three-view label propagation instead of GNN.
Despite the success of the above EA methods, they all ignore the temporal information in KGs, which may lead to wrong alignments between similar entities in different KGs. Take Figure <ref> as an example, showing two subgraphs from YAGO and Wikidata. George Porter in TKG1 and Harry Kroto in TKG2 have similar structures and relations. Both have the same neighbors (Copley Medal, Nobel Prize in Chemistry, and Michael Faraday Prize) and relations (hasWonPrize in TKG1 is the same as award received in TKG2). George Porter has an additional neighbor, Davy Medal, connected through the relation hasWonPrize. Existing EA methods disregard the temporal information of KGs and may wrongly take George Porter and Harry Kroto as an equivalent entity pair.
Recently, time-aware methods <cit.> emerge to improve the performance of EA between TKG. STEA <cit.> adopts a simple GNN with a temporal information matching mechanism and achieves state-of-the-art (SOTA) performance. All of these TEA methods are based on GNN <cit.>, which has an inherent defect: the GNN is trained using the gradient descent algorithm and takes much time to converge to the optimal solution. Therefore, to promote the development of EA between TKGs, a straightforward approach is to combine the advantages of EA and TEA methods.
To this end, we propose an effective and efficient non-neural EA framework between TKGs, namely LightTEA, which consists of four key components:
(1) Two-aspect Three-view Label Propagation. We combine relational-aspect and temporal-aspect three-view label propagation (LP) to improve the performance of EA. In this module, GNN is replaced by LP, which does not require gradient propagation to train the neural networks, greatly reducing the time consumption of the model.
(2) Sparse Similarity with Temporal Constraints. Instead of calculating the similarity of all entities, we only retrieve the top-k nearest neighbors of each entity to find the equivalent entity pairs, which reduces both the time and space complexity of the computation. By utilizing the temporal information of each entity as a constraint on entity similarity, the EA performance is also improved.
(3) Sinkhorn Operator. To further promote the model's effectiveness, we regard the EA problem as a one-to-one assignment problem and use the Sinkhorn operator to solve it. The Sinkhorn operator is a fast and completely parallelizable algorithm and only takes seconds to converge to the optimal solution.
and (4) Temporal Iterative Learning. Several studies have demonstrated that iterative learning effectively enhances EA and helps address the lack of alignment seeds in the real scenario. We adopt a temporal iterative learning strategy, which utilizes the additional temporal information of entities to get more credible alignment pairs for augmenting the training set to obtain better alignment results.
In general, this paper presents the following contributions:
* We propose an effective and efficient TEA framework that consists of four essential components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative Learning. All of these modules work together to improve the performance of EA while reducing time consumption.
* The proposed TEA framework combines the strengths of the latest SOTA EA and TEA models and addresses the limitations of current EA models that do not effectively utilize time information, as well as the scalability constraints of TEA models due to GNN usage.
* Extensive experiments on public datasets indicate that our proposed model significantly outperforms the SOTA methods for EA between TKGs, and the time consumed by LightTEA is only dozens of seconds at most, no more than 10% of the most efficient TEA method.
§ RELATED WORK
§.§ Entity Alignment
The purpose of EA is to find the equivalent entity pairs from different KGs. EA usually adopts embedding-based approaches, which are divided into two sub-categories: translation-based and GNN-based models.
Translation-based models regard the relations as the translation from the head entities to the tail entities, such as (h_e_h + h_r ≈h_e_t).
MTransE <cit.> is an early entity alignment model based on TransE <cit.>, which maps two KGs into different vector spaces and considers the entities with similar positions in the two vector spaces as equivalent pairs. In addition to learning the structure embeddings of entities based on TransE with the relation triples, JAPE <cit.> joins the attribute embedding and structure embedding of entities to align entities. BootEA <cit.> obtains alignment-oriented KG embeddings and proposes a bootstrapping process that iteratively adds likely alignment entities to the training data to improve the performance of EA.
GNN-based models promote EA by utilizing the graph structure of KGs. GCN-Align <cit.> encodes the entities into a unified vector space via GCN <cit.> and aligns the entities with their structure embeddings and attribute embeddings. However, GCN-Align does not effectively utilize the relation of KGs. MRAEA <cit.> learns the entity embeddings through a relation-aware self-attention GNN to obtain the alignment entities. RREA <cit.> proposes a GNN with relational reflection to get the relation-specific embeddings and uses an iterative strategy to enhance EA.
The above methods focus on encoding the entity embeddings and aligning the entities by calculating the similarity of their embeddings, DATTI <cit.> applies a decoding process using the adjacency and inner correlation isomorphisms of KGs to the advanced methods and gains significant performance improvements. LightEA <cit.> adopts a three-view label propagation approach for entity alignment instead of using GNN and achieves comparable performance with SOTA EA methods while taking only one-tenth of the time consumption of those methods.
Although these methods have significantly advanced the development of EA, they all have the limitation of neglecting the temporal information in KGs.
§.§ Time-aware Entity Alignment
Recently, the research on TKGs has developed rapidly, and TEA methods have also sprung up.
TEA-GNN <cit.> is the first TEA method using temporal information in KGs. It introduces a time-aware attention mechanism to learn entity embedding based on GNN and constructs five datasets extracted from ICEWS, YAGO3, and Wikidata to evaluate the model.
TREA <cit.> utilizes a temporal relational attention mechanism to integrate relational and temporal features of entities from the neighborhood and adopts the margin-based multi-class log-loss (MML) with sequential time regularizer to train the model. TREA is the most efficient TEA model since the MML can achieve fast convergence. STEA <cit.> presents a simple GNN to learn the entity embeddings and uses a temporal information matching mechanism to calculate the time similarity of entities. Then it balances the time similarity and embedding similarity of entities to obtain the equivalent pairs.
By using the temporal information, TEA methods achieve better performance. However, since the experimental datasets are much smaller than real-world KGs, these methods focus on improving performance and ignore efficiency. Although TREA utilizes MML to accelerate convergence, all TEA methods are based on GNN <cit.> and suffer from scalability issues. Therefore, we propose an effective and efficient TEA framework to address these limitations.
§ PROBLEM FORMULATION
A TKG can be formalized as G=(E, R, T, Q), where E, R and T are the sets of entities, relations, and timestamps respectively, Q⊂ E× R× E× T denotes the set of quadruples. A quadruple stores the real-world fact and can be presented as (e_h,r,e_t,τ), where e_h,e_t ∈ E. Given two TKGs G_1=(E_1,R_1,T_1,Q_1), G_2=(E_2,R_2,T_2,Q_2), and alignment seeds set S={(e_1_i,e_2_j)|e_1_i∈ E_1,e_2_j∈ E_2, e_1_i≡ e_2_j} where ≡ denotes equivalence. EA task aims to find new equivalent entity pairs between G_1 and G_2 based on S. C is the set of reference entity pairs used for evaluation. Specifically, a uniform time set T^* = T_1 ∪ T_2 is constructed by merging the timestamps in the two time sets. Therefore, the two TKGs can be renewed as G_1=(E_1,R_1,T^*,Q_1) and G_2=(E_2,R_2,T^*,Q_2) sharing the same set of timestamps.
§ THE PROPOSED APPROACH
The LightTEA framework can be described as three phases with time similarity enhancement. (1) For pre-alignment phase, we use two-aspect three-view label propagation to learn the labels of entities. (2) For alignment phase, we first compute the sparse similarity with temporal constraints, then translate the EA problem to the assignment problem and utilize the Sinkhorn operator to resolve it. (3) For post-alignment phase, we adopt a temporal iterative learning strategy to enhance EA. Figure <ref> shows the framework of LightTEA.
§.§ Two-aspect Three-view Label Propagation
Inspired by LightEA <cit.>, we extend the three-view label propagation to two-aspect (relational-aspect and temporal-aspect) three-view label propagation, which can enhance EA with temporal information while not increase the time and space complexity.
Specifically, a TKG requires a four-order tensor 𝒜∈ℝ^|E| × |E| × |R| × |T| to fully describe the adjacency relations. As shown in Figure <ref>(a), we regard a TKG as two three-order tensors 𝒜^R ∈ℝ^|E| × |E| × |R| and 𝒜^T ∈ℝ^|E| × |E| × |T|, so there are five views in a TKG: A^ER∈ℝ^|E| × |R|, A^RE∈ℝ^|R| × |E|, A^EE∈ℝ^|E| × |E|, A^TE∈ℝ^|T| × |E|, and A^ET∈ℝ^|E| × |T|, which represent the adjacency relations from head entity to relation, relation to tail entity, head entity to tail entity, timestamps to tail entity, and head entity to timestamps, respectively. Then, we use the relational-aspect and temporal-aspect three-view label propagation to update the labels of entities, relations, and timestamps (as shown in Figure <ref>(b)).
The relational-aspect three-views label propagation can be presented as follows:
L_e^(n+1) = A^EE·L_e^(n) + A^ER·L_r^(n)
L_r^(n+1) = A^RE·L_e^(n)
where L_e ∈ℝ^|E| × d and L_r ∈ℝ^|R| × d are the label matrices of entities and relations, and · denotes the dot product. Following LightEA, we regard each pair of alignment entities as an independent class, and independently sample random vectors on the d-dimensional hyper-sphere to approximate the one-hot label vectors representing the alignment seeds, so we set l_e_i^(0) = l_e_j^(0) = random(d) ∀(e_i,e_j)∈ S. For the entities in the candidate set C, we initialize their labels as l_e_k^(0) = l_e_l^(0) = 0 ∀(e_k,e_l)∈ C, and the initial label matrix of relations is also set to all-zero, L_r^(0)=0.
The final label of entity e_i in the relational aspect is the concatenation of label vectors in all steps:
l_e_i^r = [l_e_i^(0)||l_e_i^(1)||...||l_e_i^(n-1)||l_e_i^(n)]
The temporal-aspect three-views label propagation can be expressed as follows:
L_e^(n+1)' = A^EE·L_e^(n)' + A^ET·L_t^(n)
L_t^(n+1) = A^TE·L_e^(n)'
where L_e^'∈ℝ^|E| × d, and the initialization of L_e^(0)' is the same as L_e^(0). L_t ∈ℝ^|T| × d is the label matrix of timestamps and is initialized to L_t^(0)=0.
We concatenate the label vectors of all steps as the final label of entity e_i in the temporal aspect:
l_e_i^t = [l_e_i^(0)'||l_e_i^(1)'||...||l_e_i^(n-1)'||l_e_i^(n)']
Finally, the label of entity e_i is the balanced result of the two aspect labels:
l_e_i = (1-α) ×l_e_i^r + α×l_e_i^t
where α is a hyper-parameter to balance the label of relational aspect and temporal aspect.
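A compact sketch of the two-aspect propagation defined by the equations above is given below. The adjacency views, seed-label initialization, and hyperparameters follow the notation of this section, while the function and variable names are ours; in practice the adjacency views would be stored as sparse matrices.

# Sketch of two-aspect three-view label propagation.
# A_EE, A_ER, A_RE, A_ET, A_TE are the five adjacency views; L_e0 holds random
# hyper-sphere labels for alignment seeds and zeros for candidate entities.
import numpy as np

def propagate(A_EE, A_ex, A_xe, L_e0, d, n_rounds=2):
    # Generic three-view propagation; A_ex / A_xe connect entities to either
    # relations (relational aspect) or timestamps (temporal aspect).
    L_e, L_x = L_e0.copy(), np.zeros((A_xe.shape[0], d))
    labels = [L_e]
    for _ in range(n_rounds):
        L_e_next = A_EE @ L_e + A_ex @ L_x      # L_e^(n+1) = A^EE L_e^(n) + A^EX L_x^(n)
        L_x = A_xe @ L_e                        # L_x^(n+1) = A^XE L_e^(n)
        L_e = L_e_next
        labels.append(L_e)
    return np.concatenate(labels, axis=1)       # concatenate the labels of all steps

def two_aspect_labels(A_EE, A_ER, A_RE, A_ET, A_TE, L_e0, d, alpha=0.6):
    L_rel = propagate(A_EE, A_ER, A_RE, L_e0, d)   # relational aspect
    L_tmp = propagate(A_EE, A_ET, A_TE, L_e0, d)   # temporal aspect
    return (1 - alpha) * L_rel + alpha * L_tmp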
§.§ Temporal Information Similarity Calculation
Recent research <cit.> suggests that temporal information similarity can enhance EA between TKGs, so we calculate the time similarity matrix and use it in the alignment and post-alignment phases.
First, we collect all timestamps of entities. Then we calculate the time similarity s^t_e_ie_j of e_i and e_j by the following:
s^t_e_ie_j = 2v / (k + q)
where v denotes the number of shared timestamps between e_i and e_j, and k and q are the numbers of timestamps of e_i and e_j, respectively.
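For illustration, the following snippet computes this Dice-style similarity from the timestamp lists of two entities; whether repeated timestamps are counted with multiplicity is not specified in the text, so a multiset intersection is assumed here.

# Dice-style time similarity: s = 2 * |shared timestamps| / (|T(e_i)| + |T(e_j)|)
from collections import Counter

def time_similarity(times_i, times_j):
    ci, cj = Counter(times_i), Counter(times_j)
    v = sum((ci & cj).values())          # number of matching timestamp occurrences
    k, q = len(times_i), len(times_j)
    return 2.0 * v / (k + q) if (k + q) > 0 else 0.0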
§.§ Sparse Similarity with Temporal Constraints
After obtaining the labels of entities, we calculate the similarity of entities by their labels in a sparse way with temporal constraints.
Early studies <cit.> calculate the embedding similarity (Cosine, Euclidean, or Manhattan distance) of all entities to find the equivalent entity pairs. The calculation complexity is O(|E|^2 d), and the space complexity of the similarity matrix is O(|E|^2). LightEA <cit.> notices that the similarity of many entities is very small and infinitely close to zero. Even if these smaller values are removed initially, it does not significantly affect the alignment results. Therefore, instead of calculating the similarities between all entities, we only retrieve the top-k nearest neighbors for each entity by approximate nearest neighbor (ANN) algorithms [In LightTEA, we use the FAISS framework <cit.> for approximate vector retrieval ]. It only takes several seconds to find the top-k nearest neighbors, and the space complexity of the sparse similarity matrix is O(|E| k), k ≪ |E|.
Different from LightEA, we use the time similarity matrix as a constraint to obtain the sparse similarity of entities. Specifically, we get the sparse similarity matrix S^l∈ℝ^|E| × k by the ANN algorithms with the labels of entities and find the related time similarity of these entities, the final sparse similarity of entities is as follows:
S' = (1- β) ×S^l + β×S^t
where β is the hyper-parameter for balancing the label similarity and time similarity of entities.
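A sketch of this step is shown below, using a flat inner-product FAISS index on L2-normalized labels; the index type and the way the time similarity is looked up are assumptions made for illustration, not necessarily LightTEA's exact implementation.

# Top-k retrieval of label similarities with FAISS, blended with the time similarity.
import numpy as np
import faiss

def sparse_similarity(L1, L2, time_sim_fn, beta=0.4, k=500):
    # L1, L2: entity label matrices of the two TKGs; time_sim_fn(i, j) returns s^t_{e_i e_j}.
    L1 = np.ascontiguousarray(L1, dtype='float32')
    L2 = np.ascontiguousarray(L2, dtype='float32')
    faiss.normalize_L2(L1)
    faiss.normalize_L2(L2)
    index = faiss.IndexFlatIP(L2.shape[1])       # inner product on normalized labels = cosine
    index.add(L2)
    label_sim, nbr_idx = index.search(L1, k)     # (|E_1| x k) scores and neighbor indices
    final_sim = np.empty_like(label_sim)
    for i in range(L1.shape[0]):
        for col, j in enumerate(nbr_idx[i]):
            final_sim[i, col] = (1 - beta) * label_sim[i, col] + beta * time_sim_fn(i, j)
    return final_sim, nbr_idx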
§.§ Sinkhorn Operator
Existing TEA methods <cit.>
simply calculate the similarity of the entities to obtain the equivalent entity pairs in the alignment phase. To further improve the effectiveness of the model, we adopt the Sinkhorn operator in the alignment phase to enhance EA.
LightEA <cit.> regards the EA problem as a one-to-one assignment problem to improve the performance of EA. The goal of the assignment problem is to find the optimal strategy to obtain the maximum profit and can be formulated as follows:
max_{P ∈𝒫_|E|}⟨P, S⟩_F
where P is a permutation matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere. 𝒫_|E| is the set of |E|× |E| permutation matrices, S∈ℝ^|E| × |E| is the similarity matrix of entities, and ⟨·⟩ _F represents the Frobenius inner product.
The Sinkhorn operator proposes a fast and completely parallelizable algorithm for the assignment problem. It iteratively normalizes rows and columns of the similarity matrix:
Sinkhorn^0(S) = exp(S),
Sinkhorn^m(S) = 𝒩_c(𝒩_r(Sinkhorn^{m-1}(S))),
Sinkhorn(S) = lim_{m→∞} Sinkhorn^m(S)
where 𝒩_r(S) = S⊘ (S1_N1_N^⊤), 𝒩_c(S) = S⊘ (1_N1_N^⊤S) are the row and column-wise normalization operators, ⊘ denotes the element-wise division, 1_N represents a column vector of ones, and m is the iterations. The time complexity of Sinkhorn is O(m|E|^2).
We also regard the EA problem as an assignment problem and employ the Sinkhorn operator to obtain the approximate solution:
max_{P∈𝒫_|E|}⟨P, S'⟩_F = lim_{t→0^+} Sinkhorn(S'/t)
where S' ∈ℝ^|E| × k is the sparse similarity matrix with temporal constraints, which is calculated by equation (<ref>), t is the temperature. In this way, the performance of EA improves significantly, and the computational complexity drops to O(m|E|k).
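For clarity, a minimal dense PyTorch version of the Sinkhorn normalization is sketched below; LightTEA applies it to the sparse top-k similarity, and the handling of the sparse structure and numerical safeguards are simplified here.

# Minimal dense Sinkhorn sketch: iteratively normalize rows and columns of exp(S / t).
import torch

def sinkhorn(S, t=0.05, n_iter=15, eps=1e-9):
    P = torch.exp(S / t)
    for _ in range(n_iter):
        P = P / (P.sum(dim=1, keepdim=True) + eps)   # row normalization
        P = P / (P.sum(dim=0, keepdim=True) + eps)   # column normalization
    return P

# alignment for each source entity: its highest-scoring target entity
# pred = sinkhorn(similarity_matrix).argmax(dim=1)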
§.§ Temporal Iterative Learning
Iterative Learning is proposed by BootEA <cit.> to address the problem of fewer alignment seeds and enhance the performance of EA. It is also called a semi-supervised alignment strategy which continuously selects possible entity pairs to augment the training data through an iterative method in the post-alignment phase.
To promote EA, we adopt temporal iterative learning. Different from STEA <cit.>, which adopts a bi-directional iterative strategy, we simply choose the entity and its nearest neighbor whose similarity value is greater than the threshold [The threshold is 0.8 in LightTEA.] as the alignment pairs and add them to the training set for the next iteration.
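A possible implementation of this selection step, assuming the sparse similarity and neighbor indices produced by the previous stage, is sketched below; the data structures are illustrative.

# Temporal iterative learning step: after each alignment round, add confident pairs
# (nearest neighbor with similarity above the threshold) to the training seeds.
import numpy as np

def augment_seeds(final_sim, nbr_idx, seeds, threshold=0.8):
    # final_sim, nbr_idx: outputs of the sparse-similarity step; seeds: set of (i, j) pairs.
    aligned_src = {i for i, _ in seeds}
    for i in range(final_sim.shape[0]):
        if i in aligned_src:
            continue
        best = int(np.argmax(final_sim[i]))
        if final_sim[i, best] > threshold:
            seeds.add((i, int(nbr_idx[i, best])))
    return seeds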
§ EXPERIMENTS
We conduct the experiments on a workstation with a GeForce RTX 3090 GPU and an AMD EPYC 7502 32-Core Processor CPU, 128GB memory. The codes and datasets will be available on GitHub [https://github.com/lcai2/LightTEA].
§.§ Datasets
To comprehensively evaluate the effectiveness and efficiency of the proposed model, we experiment on two widely used public datasets. The statistics of these datasets are listed in Table <ref>.
(1) DICEWS-1K/200 <cit.> is constructed from the event knowledge graph ICEWS05-15 which contains events during 2005 to 2015. It consists of two subsets with different alignment seeds 1K/200.
(2) YAGO-WIKI50K-5K/1K <cit.> extracts the equivalent entities with temporal information from YAGO and Wikidata. There are two subsets, one with 5K alignment seeds and the other with 1K.
§.§ Baselines
In the experiments, we compared our proposed model with two categories of advanced entity alignment methods:
(1) Supervised methods: JAPE <cit.>, AlignE <cit.>, GCN-Align <cit.>, MRAEA <cit.>, LightEA* <cit.>, TEA-GNN <cit.>, TREA <cit.>, and STEA* <cit.>. JAPE, AlignE, GCN-Align, MRAEA, and LightEA* are EA methods, TEA-GNN, TREA, and STEA* are TEA methods.
(2) Semi-supervised methods: BootEA <cit.>, RREA <cit.>, LightEA <cit.>, STEA <cit.>. BootEA, RREA, and LightEA are EA methods, STEA is a TEA method.
The main experimental results of these baselines are taken from STEA (the SOTA TEA method), except for the latest SOTA EA method LightEA, whose experiments we ran using its open-source code. For a fair comparison, LightTEA has two corresponding versions: (1) LightTEA* is the supervised version, and (2) LightTEA is the semi-supervised version with iterative learning.
§.§ Settings
§.§.§ Evaluation Metrics
Following the conventions, we adopt mean reciprocal rank (MRR) and Hits@k (k=1,10) as the evaluation metrics. MRR reports the average reciprocal of the ranks, and Hits@k calculates the proportion of correct alignment pairs whose rank is not greater than k. In particular, Hits@1 represents the accuracy of the results, which is the most important indicator. The higher the MRR and Hits@k, the better the performance.
§.§.§ Implementation Details
We use the fixed training set and validation set provided by TEA-GNN. The hyper-parameters are set as follows: the dimension of the hyper-sphere is d=512, the factor α is set to 0.6 for DICEWS and
0.5 for YAGO-WIKI50K to balance the relational-aspect and temporal-aspect label propagation, the factor β is set to 0.4 for balancing the label similarity and time similarity. Following LightEA, we use rounds n=2 for two-aspect three-view label propagation, retrieve the top-k=500 nearest neighbors for sparse similarity, set the number of iterations of the Sinkhorn operator to m=15, and set the temperature to
t=0.05. The reported performances are the averages of five independent runs.
§.§ Main Experiments
Table <ref> lists the main experimental results of our proposed model and all baselines on the four datasets. Among the supervised methods, LightTEA* achieves significant improvements in all metrics. Compared to the SOTA supervised TEA method STEA*, LightTEA* improves Hits@1 by 2.61%, 2.33%, 5.45%, and 9.22%, respectively. Among the semi-supervised methods, LightTEA outperforms all the baselines on all datasets across all metrics, with Hits@1 improvements over STEA of 0.93%, 1.57%, 2.79%, and 4.77%, respectively. Within both categories, TEA methods outperform EA methods by exploiting temporal information. The semi-supervised methods achieve better performance than the supervised methods, which indicates the effectiveness of temporal iterative learning. Without iteration, LightTEA* outperforms all baselines except for STEA's Hits@10 on DICEWS-200, with a tiny gap of 0.002. The high performance of LightTEA* demonstrates that the two-aspect three-view label propagation, the sparse similarity with temporal constraints, and the Sinkhorn operator are effective in promoting EA.
We conduct one-sample t-tests between LightTEA (and LightTEA*) and their corresponding strong baselines. All p-values are << 0.01, indicating that our proposed model significantly outperforms all baselines.
§.§ Ablation Study
We conduct an ablation experiment on DICEWS-200 and YAGO-WIKI50K-1K to investigate the contributions of three critical components of LightTEA: (1) Temporal-aspect Label Propagation (TLP), (2) Temporal Constraints (TC), and (3) Sinkhorn Operator (SO).
As reported in Table <ref>, without temporal-aspect label propagation (w/o TLP), the performance of LightTEA drops a little, which indicates the effectiveness of the temporal-aspect label propagation. When removing the temporal constraints (w/o TC), the underperformance of LightTEA implies the effectiveness of the temporal constraints. Without the Sinkhorn operator (w/o SO), the performance of LightTEA drops significantly, demonstrating the contribution of utilizing the Sinkhorn operator in the alignment phase to enhance EA.
§.§ Efficiency Study
Table <ref> shows the overall time costs on DICEWS-200 and YAGO-WIKI50K-1K datasets by TEA methods from data loading to evaluation. The results of TREA are from <cit.> since it doesn't provide the source codes. The other results are obtained by directly running the source codes provided by the author.
As shown in Table <ref>, among the supervised time-aware methods, TREA costs less time than TEA-GNN thanks to MML, which speeds up convergence. LightTEA* takes much less time than both of these methods: its time costs are 5 seconds and 24 seconds on the two datasets, respectively, only 3.91% and 0.90% of TREA. Among the semi-supervised methods, the time costs of LightTEA are 12 seconds and 85 seconds, which are 2.08% and 1.70% of STEA on the two datasets, respectively. The time consumed by LightTEA is no more than 10% of that of the most efficient baseline, TREA. By using temporal iterative learning, LightTEA increases the time consumption relative to LightTEA* while improving performance. The high efficiency of LightTEA indicates that it could be applied to large-scale datasets for EA between TKGs.
§.§ Hyper-parameter Analysis
We conduct experiments on the following hyper-parameters to investigate their effect on the performance of LightTEA.
(1) The dimension of hyper-sphere d. We select the dimension in the set {64, 128, 256, 512, 1024, 2048} and conduct the experiments. The Hits@1 scores with different dimensions on DICEWS-200 are shown in Table <ref>.
From the table, we can see that as the dimension increases, the Hits@1 scores of both LightTEA and LightTEA* gradually improve until they reach their best performance at a dimension of 512. Even if the dimension is further increased, the performance remains unchanged. This indicates that when the dimension of the label vector exceeds 512, increasing it only adds memory consumption and provides no benefit to performance.
(2) The balance factors α and β. We set the two factors in range 0∼1 with interval 0.1 to investigate the impact of different values. The experimental results with different α and β on YAGO-WIKI50K-1K are shown in Figure <ref>.
α balances the relational-aspect label propagation (RLP) and temporal-aspect label propagation (TLP). Figure <ref>(a) shows the Hits@1 of LightTEA* and LightTEA with different α. Due to the use of temporal iterative learning, the performance of LightTEA does not change significantly with different α. The Hits@1 curve of LightTEA* shows that the result of combining RLP and TLP with appropriate weight (α = 0.5) is better than using only one of them (α = 0 or α = 1).
β is used to balance the label similarity and time similarity. As shown in Figure <ref>(b), with the increase of β, the Hits@1 first increases slowly (reaching the maximum value when β = 0.4), and then decreases rapidly. When β = 1, Hits@1 of LightTEA with temporal iterative learning are lower than LightTEA*, it indicates that only using time similarity to generate the possible alignment pairs and add them to the training set for the next iteration will hurt the performance of the model.
§ CONCLUSION
Existing EA methods ignore the temporal information in KGs, which may lead to wrongly aligning similar entities. TEA methods lack scalability due to the inherent defects of GNN. To address these limitations, we propose an effective and efficient TEA framework that consists of four important components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative Learning. These modules work collaboratively to enhance the model's performance while reducing time consumption.
Extensive experiments on public datasets indicate that the proposed model significantly outperforms the SOTA methods. The time consumed by the model is only dozens of seconds at most, no more than 10% of the most efficient TEA method, which demonstrates that our model is highly efficient and can be applied to large-scale TKGs.
§ ACKNOWLEDGEMENTS
We appreciate the support from National Natural Science Foundation of China with the Main Research Project on Machine Behavior and Human-Machine Collaborated Decision Making Methodology (72192820 & 72192824), Pudong New Area Science & Technology Development Fund (PKX2021-R05), Science and Technology Commission of Shanghai Municipality (22DZ2229004), and Shanghai Trusted Industry Internet Software Collaborative Innovation Center.
|
http://arxiv.org/abs/2307.04675v2 | 20230710162105 | LINFA: a Python library for variational inference with normalizing flow and annealing | [
"Yu Wang",
"Emma R. Cobian",
"Jubilee Lee",
"Fang Liu",
"Jonathan D. Hauenstein",
"Daniele E. Schiavazzi"
] | cs.LG | [
"cs.LG",
"stat.CO"
] |
Yu Wang, Emma R. Cobian, Jubilee Lee, Fang Liu, Jonathan D. Hauenstein, Daniele E. Schiavazzi
Department of Applied and Computational Mathematics and Statistics
University of Notre Dame, Notre Dame, IN, USA
LINFA: a Python library for variational inference with normalizing flow and annealing
=====================================================================================
Variational inference is an increasingly popular method in statistics and machine learning for approximating probability distributions.
We developed LINFA (Library for Inference with Normalizing Flow and Annealing), a Python library for variational inference to accommodate computationally expensive models and difficult-to-sample distributions with dependent parameters.
We discuss the theoretical background, capabilities, and performance of LINFA in various benchmarks. LINFA is publicly available on GitHub at <https://github.com/desResLab/LINFA>.
§ INTRODUCTION
Generating samples from a posterior distribution is a fundamental task in Bayesian inference.
The development of sampling-based algorithms from the Markov chain Monte Carlo family <cit.> has made solving Bayesian inverse problems accessible to a wide audience of both researchers and practitioners.
However, the number of samples required by these approaches is typically significant and the convergence of Markov chains to their stationary distribution can be slow especially in high-dimensions. Additionally, satisfactory convergence may not be always easy to quantify, even if a number of metrics have been proposed in the literature over the years.
More recent paradigms have been proposed in the context of variational inference <cit.>, where an optimization problem is formulated to determine the optimal member of a parametric family of distributions that can approximate a target posterior density.
In addition, flexible approaches to parametrize variational distributions through a composition of transformations (closely related to the concept of transport maps, see, e.g., <cit.>) have reached popularity under the name of normalizing flows <cit.>.
The combination of variational inference and normalizing flow has received significant recent interest in the context of general algorithms for solving inverse problems <cit.>.
However, cases where the computational cost of evaluating the underlying probability distribution is significant occur quite often in engineering and applied sciences, for example when such evaluation requires the solution of an ordinary or partial differential equation.
In such cases, inference can easily become intractable. Additionally, strong and nonlinear dependence between model parameters may results in difficult-to-sample posterior distributions characterized by features at multiple scales or by multiple modes.
The LINFA library is specifically designed for cases where the model evaluation is computationally expensive. In such cases, the construction of an adaptively trained surrogate model is key to reducing the computational cost of inference <cit.>.
In addition, LINFA provides an adaptive annealing scheduler, where temperature increments are automatically determined based on the available variational approximant of the posterior distribution. Thus, adaptive annealing makes it easier to sample from complicated densities <cit.>.
This paper is organized as follows. The main features of the LINFA library are discussed in Section <ref>, followed by a brief outline of a few selected numerical tests in Section <ref>. Conclusions and future work are finally discussed in Section <ref>. The paper is completed by a brief description of the background theory and reference to the relevant papers in Appendix <ref>, a detailed presentation of a four benchmarks in Appendix <ref>, and a list of all the relevant hyperparameters in Appendix <ref>.
§ CAPABILITIES
LINFA is designed as a general inference engine and allows the user to define custom input transformations, computational models, surrogates, and likelihood functions.
1 - User-defined input parameter transformations - Input transformations may reduce the complexity of inference and surrogate model construction in situations where the ranges of the input variables differ substantially or when the input parameters are bounded. A number of pre-defined univariate transformations are provided, i.e, , , , and . These transformations are independently defined for each input variable, using four parameters (a,b,c,d), providing a nonlinear transformation between the normalized interval [a,b] and the physical interval [c,d]. Additional transformations can be defined by implementing the following member functions.
* - It evaluates the transformation from the normalized to the physical space. One transformation needs to be defined for each input. For example, the list of lists
defines a hyperbolic tangent transformation for the first two variables and an exponential transformation for the third.
* - This is the log Jacobian of the transformation that needs to be included in the computation of the log posterior density to account for the additional change in volume.
2 - User-defined computational models - LINFA can accommodate any type of models from analytically defined posteriors with the gradient computed through automatic differentiation to legacy computational solvers for which the solution gradient is not available nor easy to compute. New models are created by implementing the methods below.
-3pt
* - This is a pre-processing function used to generate synthetic observations. It computes the model output corresponding to the default parameter values (usually defined as part of the model) and adds noise with a user-specified distribution. Observations will be stored in a file and are typically assigned to so they are available for computing the log posterior.
* - This function solves the model for multiple values of the physical input parameters specified in a matrix format (with one sample for each row and one column for each input parameter dimension).
3 - User-defined surrogate models - For computational models that are too expensive for online inference, LINFA provides functionalities to create, train, and fine-tune a surrogate model. The class implements the following functionalities:
-3pt
* A new surrogate model can be created using the constructor.
* The (i.e. upper and lower bounds) are stored as a list of lists using the format .
* A pre-grid is defined as an a priori selected point cloud created inside the hyper-rectangle defined by . The pre-grid can be either of type (tensor product grid) where the grid order (number of points in each dimension) is defined through the argument , or of type , in which case the variable defines the total number of samples.
* Surrogate model Input/Output. The two functions and are provided to save a snapshot of a given surrogate or to read it from a file.
* The function is provided to perform an initial training of the surrogate model on the pre-grid. In addition, the function is also available to re-train the model once additional training examples are available.
* The function evaluates the surrogate model at multiple input realizations. If a transformation is defined, the surrogate should always be specified in the normalized domain with limits coincident with the normalized intervals.
4 - User-defined likelihood - A user-defined likelihood function can be defined by passing the parameters, the model, the surrogate and a coordinate transformation using
and then assigning it as a member function of the class using:
.
5 - Linear and adaptive annealing schedulers - LINFA provides two annealing schedulers by default. The first is the scheduler with constant increments. The second is the adaptive scheduler
<cit.> with hyperparameters reported in Table <ref>. For the AdaAnn scheduler, the user can also specify a different number of parameter updates to be performed at the initial temperature t_0, final temperature t_1, and for any temperature t_0<t<1. Finally, the batch size (number of samples used to evaluate the expectations in the loss function) can also be differentiated for t=1 and t<1.
6 - User-defined hyperparameters - A complete list of hyperparameters with a description of their functionality can be found in Appendix <ref>.
§ NUMERICAL BENCHMARKS
We tested LINFA on multiple problems. These include inference on unimodal and multi-modal posterior distributions specified in closed form, ordinary differential models and dynamical systems with gradients directly computed through automatic differentiation in PyTorch, identifiable and non-identifiable physics-based models with fixed and adaptive surrogates, and high-dimensional statistical models. Some of the above tests are included with the library and systematically tested when pushing the master branch on GitHub. A detailed discussion of these test cases is provided in Appendix <ref>. LINFA can be installed through the Python Package Index (PyPI) typing
To run the tests type
where is the name of the test case, either , , , , or .
§ CONCLUSION AND FUTURE WORK
In this paper, we have introduced the LINFA library for variational inference, briefly discussed the relevant background, its capabilities, and report its performance on a number of test cases. Some interesting directions for future work are mentioned below.
Future versions will support user-defined privacy-preserving synthetic data generation and variational inference through differentially private gradient descent algorithms. This will allow the user to perform inference tasks while preserving a pre-defined privacy budget, as discussed in <cit.>.
LINFA will also be extended to handle multiple models. This will open new possibilities to solve inverse problems combining variational inference and multi-fidelity surrogates <cit.>.
In addition, for inverse problems with significant dependence among the parameters, it is often possible to simplify the inference task by operating on manifolds of reduced dimensionality <cit.>. New modules for dimensionality reduction will be developed and integrated with the LINFA library.
Finally, the ELBO loss typically used in variational inference has known limitations, some of which are related to its close connection with the KL divergence. Future versions of LINFA will provide the option to use alternative losses.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the support from the NSF Big Data Science & Engineering grant #1918692 and the computational resources provided through the Center for Research Computing at the University of Notre Dame. DES also acknowledges support from NSF CAREER grant #1942662.
§ BACKGROUND THEORY
§.§ Variational inference with normalizing flow
Consider the problem of estimating (in a Bayesian sense) the parameters z∈𝒵 of a physics-based or statistical model
x = f(z) + ε,
from the observations x∈𝒳 and a known statistical characterization of the error ε.
We tackle this problem with variational inference and normalizing flow. A normalizing flow (NF) is a nonlinear transformation F:ℝ^d×Λ→ℝ^d designed to map an easy-to-sample base distribution q_0(z_0) into a close approximation q_K(z_K) of a desired target posterior density p(z|x).
This transformation can be determined by composing K bijections
z_K = F(z_0) = F_K∘ F_K-1∘⋯∘ F_k∘⋯∘ F_1(z_0),
and evaluating the transformed density through the change of variable formula <cit.>.
In the context of variational inference, we seek to determine an optimal set of parameters λ∈Λ so that q_K(z_K)≈ p(z|x). Given observations x∈𝒳, a likelihood function l_z(x) (informed by the distribution of the error ε) and prior p(z), a NF-based approximation q_K(z) of the posterior distribution p(z|x) can be computed by maximizing the lower bound to the log marginal likelihood log p(x) (the so-called evidence lower bound or ELBO), or, equivalently, by minimizing a free energy bound <cit.>.
ℱ( x) = 𝔼_q_K( z_K)[log q_K( z_K) - log p( x, z_K)]
= 𝔼_q_0( z_0)[log q_0( z_0)] - 𝔼_q_0( z_0)[log p( x, z_K)] - 𝔼_q_0( z_0)[∑_k=1^K log|∂ z_k/∂ z_k-1|].
For computational convenience, normalizing flow transformations are selected to be easily invertible and their Jacobian determinant can be computed with a cost that grows linearly with the problem dimensionality.
Approaches in the literature include RealNVP <cit.>, GLOW <cit.>, and autoregressive transformations such as MAF <cit.> and IAF <cit.>.
§.§ MAF and RealNVP
LINFA implements two widely used normalizing flow formulations, MAF <cit.> and RealNVP <cit.>.
MAF belongs to the class of autoregressive normalizing flows. Given the latent variable z = (z_1,z_2,…,z_d), it assumes p(z_i|z_1,…,z_i-1) = ϕ[(z_i - μ_i) / e^α_i], where ϕ is the standard normal distribution, μ_i = f_μ_i(z_1,…,z_i-1), α_i = f_α_i(z_1,…,z_i-1), i=1,2,…,d, and f_μ_i and f_α_i are masked autoencoder neural networks <cit.>.
In a MADE autoencoder the network connectivities are multiplied by Boolean masks so the input-output relation maintains a lower triangular structure, making the computation of the Jacobian determinant particularly simple.
MAF transformations are then composed of multiple MADE layers, possibly interleaved by batch normalization layers <cit.>, typically used to add stability during training and increase network accuracy <cit.>.
RealNVP is another widely used flow where, at each layer, the first d' variables are left unaltered while the remaining d-d' are subject to an affine transformation of the form z_d'+1:d = z_d'+1:d ⊙ e^α + μ, where μ = f_μ(z_1:d') and α = f_α(z_1:d') are MADE autoencoders.
In this context, MAF could be seen as a generalization of RealNVP by setting μ_i=α_i=0 for i≤ d' <cit.>.
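As a generic illustration of such a coupling layer (not LINFA's internal code), the following PyTorch sketch implements the affine transformation above, using a plain MLP in place of the MADE networks used in practice.

# Minimal RealNVP-style affine coupling layer (generic illustration).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, split, hidden=64):
        super().__init__()
        self.split = split                                   # first `split` variables pass through
        self.net = nn.Sequential(
            nn.Linear(split, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - split)))            # outputs [alpha, mu]

    def forward(self, z):
        z1, z2 = z[:, :self.split], z[:, self.split:]
        alpha, mu = self.net(z1).chunk(2, dim=1)
        out = torch.cat([z1, z2 * torch.exp(alpha) + mu], dim=1)
        log_det = alpha.sum(dim=1)                           # log |det Jacobian| of the layer
        return out, log_det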
§.§ Normalizing flow with adaptive surrogate (NoFAS)
LINFA is designed to accommodate black-box models f: 𝒵→𝒳 between the random inputs z = (z_1, z_2, ⋯, z_d)^T ∈𝒵 and the outputs (x_1, x_2,⋯,x_m)^T ∈𝒳, and assumes n observations x = { x_i}_i=1^n ⊂𝒳 to be available.
Our goal is to infer z and to quantify its uncertainty given x.
We employ a variational Bayesian paradigm and sample from the posterior distribution p( z| x)∝ℓ_ z( x,f) p( z), with prior p( z) via normalizing flows.
This requires the evaluation of the gradient of the ELBO (<ref>) with respect to the NF parameters λ, replacing p( x, z_K) with p( x| z_K) p( z) =ℓ_ z_K(x,f) p( z), and approximating the expectations with their MC estimates.
However, the likelihood function needs to be evaluated at every MC realization, which can be costly if the model f(z) is computationally expensive. In addition, automatic differentiation through a legacy (e.g. physics-based) solver may be an impractical, time-consuming, or require the development of an adjoint solver.
Our solution is to replace the model f with a computationally inexpensive surrogate f: 𝒵×𝒲→𝒳 parameterized by the weights w∈𝒲, whose derivatives can be obtained at a relatively low computational cost. However, intrinsic bias in the selected surrogate formulation, a limited number of training examples, and locally optimal w can compromise the accuracy of f.
To resolve these issues, LINFA implements NoFAS, which updates the surrogate model adaptively by smartly weighting the samples of z from NF thanks to a memory-aware loss function.
Once a newly updated surrogate is obtained, the likelihood function is updated, leading to a new posterior distribution that will be approximated by VI-NF, producing, in turn, new samples for the next surrogate model update, and so on.
Additional details can be found in <cit.>.
§.§ Adaptive Annealing
Annealing is a technique to parametrically smooth a target density to improve sampling efficiency and accuracy during inference.
In the discrete case, this is achieved by incrementing an inverse temperature t_k and setting p_k(z,x) = p^t_k(z,x), for k=0,…,K, where 0 < t_0 < ⋯ < t_K≤ 1.
The result of exponentiation produces a smooth unimodal distribution for a sufficiently small t_0, recovering the target density as t_k approaches 1. In other words, annealing provides a continuous deformation from an easier to approximate unimodal distribution to a desired target density.
A linear annealing scheduler <cit.> with fixed temperature increments is often used in practice, where the inverse temperature is increased from t_0 to 1 by a constant increment at each step.
Intuitively, small temperature changes are desirable to carefully explore the parameter spaces at the beginning of the annealing process, whereas larger changes can be taken as t_k increases, after annealing has helped to capture important features of the target distribution (e.g., locating all the relevant modes).
The AdaAnn scheduler determines the increment ϵ_k that approximately produces a pre-defined change in the KL divergence between two distributions annealed at t_k and t_k+1=t_k+ϵ_k, respectively.
Letting the KL divergence equal a constant τ^2/2, where τ is referred to as the KL tolerance, the step size ϵ_k becomes
ϵ_k = τ/ √(𝕍_p^t_k[log p( z,x)]).
The denominator is large when the support of the annealed distribution p^t_k(z,x) is wider than the support of the target p(z,x), and progressively reduces with increasing t_k.
Further detail on the derivation of the expression for ϵ_k can be found in <cit.>.
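A minimal sketch of how this increment can be estimated from Monte Carlo samples of the current approximation is shown below; the function name and interface are illustrative, not LINFA's API.

# Monte Carlo estimate of the AdaAnn increment: eps_k = tau / sqrt( Var[ log p(z, x) ] ),
# with the variance taken under the current annealed approximation at temperature t_k.
import torch

def adaann_step(log_joint_fn, z_samples, tau=0.01):
    # z_samples: realizations drawn from the current NF approximation of p^{t_k}(z | x)
    log_p = log_joint_fn(z_samples)          # log p(z, x) for each sample
    return tau / torch.sqrt(log_p.var())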
§ DETAILED NUMERICAL BENCHMARKS
§.§ Simple two-dimensional map with Gaussian likelihood
A model f:ℝ^2→ℝ^2 is chosen in this experiment having the closed-form expression
f( z) = f(z_1,z_2) = (z_1^3 / 10 + exp(z_2 / 3), z_1^3 / 10 - exp(z_2 / 3))^T.
Observations x are generated as
x = x^* + 0.05 |x^*| ⊙x_0,
where x_0∼𝒩(0, I_2) and ⊙ is the Hadamard product.
We set the true model parameters at z^* = (3, 5)^T, with output x^* = f( z^*)=(7.99, -2.59)^T, and simulate 50 sets of observations from (<ref>). The likelihood of z given x is assumed Gaussian and we adopt a noninformative uniform prior p( z).
We allocate a budget of 4×4=16 model solutions to the pre-grid and use the rest to adaptively calibrate f using 2 samples every 1000 normalizing flow iterations.
Results in terms of loss profile, variational approximation, and posterior predictive distribution are shown in Figure <ref>.
§.§ High-dimensional example
We consider a map f: ℝ^5→ℝ^4 expressed as
f(z) = A g(e^z),
where g_i(r) = (2· |2 a_i - 1| + r_i) / (1 + r_i) with r_i > 0 for i=1,…,5 is the Sobol function <cit.> and A is a 4×5 matrix. We also set
a = (0.084, 0.229, 0.913, 0.152, 0.826)^T A = 1/√(2)[ 1 1 0 0 0; 0 1 1 0 0; 0 0 1 1 0; 0 0 0 1 1; ].
The true parameter vector is set at z^* = (2.75, -1.5, 0.25, -2.5, 1.75)^T. While the Sobol function is bijective and analytic, f is over-parameterized and non-identifiable.
This is also confirmed by the fact that the curve segment γ(t) = g^-1(g( z^*) + v t)∈ Z gives the same model solution as x^* = f(z^*) = f(γ(t)) ≈ (1.4910, 1.6650, 1.8715, 1.7011)^T for t ∈ (-0.0153, 0.0686], where v = (1,-1,1,-1,1)^T.
This is consistent with the one-dimensional null-space of the matrix A.
We also generate synthetic observations from the Gaussian distribution
x = x^* + 0.01· |x^*| ⊙x_0, and x_0∼𝒩(0, I_5).
Results are shown in Figure <ref>.
§.§ Two-element Windkessel Model
The two-element Windkessel model (often referred to as the RC model) is the simplest representation of the human systemic circulation and requires two parameters, i.e., a resistance R ∈ [100, 1500] Barye· s/ml and a capacitance C ∈ [1× 10^-5, 1 × 10^-2] ml/Barye.
We provide a periodic time history of the aortic flow (see <cit.> for additional details) and use the RC model to predict the time history of the proximal pressure P_p(t), specifically its maximum, minimum, and average values over a typical heart cycle, while assuming the distal pressure P_d(t) to be constant in time and equal to 55 mmHg.
In our experiment, we set the true resistance and capacitance as z_K,1^*=R^* = 1000 Barye· s/ml and z_K,2^*=C^* = 5× 10^-5 ml/Barye, and determine P_p(t) from a RK4 numerical solution of the following algebraic-differential system
Q_d = P_p-P_d/R, d P_p/d t = Q_p - Q_d/C,
where Q_p is the flow entering the RC system and Q_d is the distal flow.
Synthetic observations are generated by adding Gaussian noise to the true model solution x^*=(x^*_1,x^*_2,x^*_3)=(P_p,min, P_p,max, P_p,avg)= (78.28, 101.12, 85.75), i.e., x follows a multivariate Gaussian distribution with mean x^* and a diagonal covariance matrix with entries 0.05 x_i^*, where i=1,2,3 corresponds to the maximum, minimum, and average pressures, respectively.
The aim is to quantify the uncertainty in the RC model parameters given 50 repeated pressure measurements. We imposed a non-informative prior on R and C. Results are shown in Figure <ref>.
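A sketch of how this differential-algebraic system can be integrated with SciPy is shown below; since the aortic inflow waveform used in the paper is not reproduced here, the simple pulsatile Q_p(t), the initial condition, and the unit conversions are assumptions made for illustration only.

# RC (two-element Windkessel) model solve with an assumed pulsatile inflow.
import numpy as np
from scipy.integrate import solve_ivp

R, C, P_d = 1000.0, 5e-5, 55.0 * 1333.22       # distal pressure converted from mmHg to Barye
T_cycle = 1.07                                  # assumed heart-cycle duration (s)

def Q_p(t):                                     # placeholder inflow in ml/s (not the paper's waveform)
    return 400.0 * max(np.sin(2 * np.pi * t / T_cycle), 0.0)

def rhs(t, P_p):
    Q_d = (P_p - P_d) / R                       # distal flow
    return (Q_p(t) - Q_d) / C                   # dP_p/dt

sol = solve_ivp(rhs, (0.0, 10 * T_cycle), y0=[P_d], max_step=1e-3)
P_p_mmHg = sol.y[0] / 1333.22
print(P_p_mmHg.min(), P_p_mmHg.max(), P_p_mmHg.mean())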
§.§ Three-element Windkessel Circulatory Model (NoFAS)
The three-parameter Windkessel or RCR model is characterized by proximal and distal resistance parameters R_p, R_d∈ [100, 1500] Barye·s/ml, and one capacitance parameter C ∈ [1× 10^-5, 1× 10^-2] ml/Barye.
This model is not identifiable. The average distal pressure is only affected by the total system resistance, i.e. the sum R_p+R_d, leading to a negative correlation between these two parameters. Thus, an increment in the proximal resistance is compensated by a reduction in the distal resistance (so the average distal pressure remains the same) which, in turn, reduces the friction encountered by the flow exiting the capacitor. An increase in the value of C is finally needed to restore the average, minimum and maximum pressure. This leads to a positive correlation between C and R_d.
The output consists of the maximum, minimum, and average values of the proximal pressure P_p(t), i.e., (P_p,min, P_p,max, P_p,avg) over one heart cycle.
The true parameters are z^*_K,1 = R^*_p = 1000 Barye·s/ml, z^*_K,2=R^*_d = 1000 Barye·s/ml, and C^* = 5× 10^-5 ml/Barye. The proximal pressure is computed from the solution of the algebraic-differential system
Q_p = P_p - P_c/R_p, Q_d = P_c-P_d/R_d, d P_c/d t = Q_p-Q_d/C,
where the distal pressure is set to P_d=55 mmHg.
Synthetic observations are generated from N(μ, Σ), where μ=(f_1(z^*),f_2(z^*),f_3(z^*))^T = (P_p,min, P_p,max, P_p,ave)^T = (100.96, 148.02, 116.50)^T and Σ is a diagonal matrix with entries (5.05, 7.40, 5.83)^T. The budgeted number of true model solutions is 216; the fixed surrogate model is evaluated on a 6× 6× 6 = 216 pre-grid while the adaptive surrogate is evaluated with a pre-grid of size 4× 4× 4 = 64 and the other 152 evaluations are adaptively selected.
This example also demonstrates how NoFAS can be combined with annealing for improved convergence. The results in Figure <ref> are generated using the AdaAnn adaptive annealing scheduler with intial inverse temperature t_0=0.05, KL tolerance τ=0.01 and a batch size of 100 samples. The number of parameter updates is set to 500, 5000 and 5 for t_0, t_1 and t_0<t<t_1, respectively and 1000 Monte Carlo realizations are used to evaluate the denominator in equation (<ref>). The posterior samples capture well the nonlinear correlation among the parameters and generate a fairly accurate posterior predictive distribution that overlaps with the observations. Additional details can be found in <cit.>.
§.§ Friedman 1 model (AdaAnn)
We consider a modified version of the Friedman 1 dataset <cit.> to examine the performance of our adaptive annealing scheduler in a high-dimensional context.
According to the original model in <cit.>, the data are generated as
y_i = μ_i(β)+ ϵ_i, μ_i(β)=β_1sin(π x_i,1x_i,2)+ β_2(x_i,3-β_3)^2+∑_j=4^10β_jx_i,j,
where ϵ_i∼𝒩(0,1).
We made a slight modification to the model in (<ref>) as
μ_i(β) = β_1sin(π x_i,1x_i,2)+ β_2^2(x_i,3-β_3)^2+∑_j=4^10β_jx_i,j,
and set the true parameter combination to β=(β_1,…,β_10)=(10,±√(20), 0.5, 10, 5, 0, 0, 0, 0, 0). Note that both (<ref>) and (<ref>) contain linear, nonlinear, and interaction terms of the input variables X_1 to X_10, five of which (X_6 to X_10) are irrelevant to Y. Each X is drawn independently from 𝒰(0,1). We used R package <cit.> to generate a Friedman 1 dataset with a sample size of n=1000.
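An equivalent NumPy sketch of this data-generating process is given below (the paper uses an R package to generate the original Friedman 1 data; this illustrative version directly implements the modified model above).

# Generating the modified Friedman 1 data described above.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(0.0, 1.0, size=(n, 10))
beta = np.array([10.0, np.sqrt(20.0), 0.5, 10.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0])

mu = (beta[0] * np.sin(np.pi * X[:, 0] * X[:, 1])
      + beta[1] ** 2 * (X[:, 2] - beta[2]) ** 2
      + X[:, 3:] @ beta[3:])
y = mu + rng.normal(0.0, 1.0, size=n)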
We impose a non-informative uniform prior p(β) and, unlike the original model, we now expect a bimodal posterior distribution of β. Results in terms of marginal statistics and their convergence for the mode with positive z_K,2 are illustrated in Table <ref> and Figure <ref>.
Parameter (true value)   Mode 1 Post. Mean   Mode 1 Post. SD
β_1 = 10                 10.0285             0.1000
β_2 = ±√(20)             4.2187              0.1719
β_3 = 0.5                0.4854              0.0004
β_4 = 10                 10.0987             0.0491
β_5 = 5                  5.0182              0.1142
β_6 = 0                  0.1113              0.0785
β_7 = 0                  0.0707              0.0043
β_8 = 0                  -0.1315             0.1008
β_9 = 0                  0.0976              0.0387
β_10 = 0                 0.1192              0.0463
Table: Posterior mean and standard deviation for the positive mode in the modified Friedman test case.
Figure: Loss profile (left) and posterior marginal statistics (right) for the positive mode in the modified Friedman test case.
§ HYPERPARAMETERS IN LINFA
This section contains the list of all hyperparameters in the library, their default values, and a description of the functionalities they control.
General hyperparameters are listed in Table <ref>, those related to the optimization process in Table <ref>, and those for the output folder and files in Table <ref>. Hyperparameters for the proposed NoFAS and AdaAnn approaches are listed in Tables <ref> and <ref>, respectively.
Finally, a hyperparameter used to select the hardware device is described in Table <ref>.
|
http://arxiv.org/abs/2307.03872v1 | 20230708012336 | Domain Adaptation using Silver Standard Labels for Ki-67 Scoring in Digital Pathology: A Step Closer to Widescale Deployment | [
"Amanda Dy",
"Ngoc-Nhu Jennifer Nguyen",
"Seyed Hossein Mirjahanmardi",
"Melanie Dawe",
"Anthony Fyles",
"Wei Shi",
"Fei-Fei Liu",
"Dimitrios Androutsos",
"Susan Done",
"April Khademi"
] | eess.IV | [
"eess.IV",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Deep learning systems have been proposed to improve the objectivity and efficiency of Ki-67 PI scoring. The challenge is that, while very accurate, deep learning techniques suffer from reduced performance when applied to out-of-domain data. This is a critical challenge for clinical translation, as models are typically trained using data available to the vendor, which is not from the target domain. To address this challenge, this study proposes a domain adaptation pipeline that employs an unsupervised framework to generate silver standard (pseudo) labels in the target domain, which are used to augment the gold standard (GS) source domain data. Five training regimes were tested on two validated Ki-67 scoring architectures (UV-Net and piNET): (1) SS Only: trained on target silver standard (SS) labels, (2) GS Only: trained on source GS labels, (3) Mixed: trained on target SS and source GS labels, (4) GS+SS: trained on source GS labels and fine-tuned on target SS labels, and our proposed method (5) SS+GS: trained on target SS labels and fine-tuned on source GS labels. The SS+GS method yielded significantly (p<0.05) higher PI accuracy (95.9%) and more consistent results compared to the GS Only model on target data. Analysis of t-SNE plots showed features learned by the SS+GS models are more aligned for source and target data, resulting in improved generalization. The proposed pipeline provides an efficient method for learning the target distribution without manual annotations, which are time-consuming and costly to generate for medical images. This framework can be applied to any target site as a per-laboratory calibration method, for widescale deployment.
Ki-67, proliferation index, domain adaptation, self-supervised learning
§ INTRODUCTION
Breast cancer is the most diagnosed cancer and the leading cause of cancer-related death in women worldwide <cit.>. Ki-67 immunohistochemistry (IHC) biomarker is gaining traction for evaluating the proliferation rate of invasive breast cancers <cit.>. Ki-67 expression is related to prognosis and can identify high-risk early-stage breast cancers <cit.> and determine treatment modalities <cit.>. The Ki-67 proliferation index (PI) is the score associated with the proportion of Ki-67^+ tumour cells to the total number of tumour cells in a breast tissue section <cit.>. However, quantifying this biomarker is labour-intensive, time-consuming, and subject to poor visual estimation concordance <cit.>.
Fortunately, Ki-67 PI can be calculated with deep learning nuclei detection algorithms for more efficient and objective quantification. There have been a few deep learning tools addressing automated Ki-67 PI scoring in the literature, such as piNET <cit.> and UV-Net <cit.>, which were specifically developed for Ki-67 PI quantification in breast cancer. As automated artificial intelligence (AI) tools become more robust, there is a chance for translation and deployment. However, a challenge with widescale adoption is performance degradation at deployed target sites that results when the target data come from a center not included in the (source) training set. This degradation is especially evident in digital pathology given the variation in patient factors, specimen processing, staining protocols and acquisition devices across pathology laboratories. Annotations from target sites could be included in training sets, but generating gold standard (GS) ground truths is laborious and expensive for medical imaging.
Mitigating domain shift has become a topic of extensive research <cit.> and unsupervised domain adaptation (UDA) is gaining considerable attention for this task. UDA methods seek to overcome the domain gap without the need for labelled target data. Self-training (pseudo label-based methods) has emerged as a promising UDA solution <cit.>. Self-training generates a set of pseudo labels in the target domain and re-trains a network based on these pseudo labels. Self-training loss encourages cross-domain feature alignment by learning from the labelled source data and pseudo-labelled target data. Pseudo labels can be quickly generated for any number of datasets, which is cost-effective and reduces development time. However, perfect accuracy cannot be guaranteed which can lead to propagated errors when fine-tuning. Because pseudo labels do not capture detailed features as well as clean labels we hypothesize that pre-training a network on pseudo labels from the target domain will allow a network to first learn dataset-specific characteristics and low-level features that are task-dependent, thereby providing optimal parameter initialization. Fine-tuning with GS (clean) labels from the source domain can then allow more detailed features to be captured by the network. This work proposes a pipeline that (1) uses an unsupervised Ki-67 PI quantification algorithm to generate pseudo labels, which we call silver standard (SS) labels, in the unlabeled target domain, (2) pre-trains a network on SS labels, and (3) fine-tunes the network on GS labels from the source domain. This pipeline can be used to calibrate automated deep learning-based medical imaging tools on a per-dataset basis, in an easy and unsupervised manner. We validate our method on 325 clinical tissue microarrays (TMAs) (20800 patches) from the target domain. Experimental results show the proposed approach achieves superior performance on the pixel-level and patient-level, therefore, providing a DA training method for robust and accurate Ki-67 PI estimation.
§ METHODS
§.§ Deep Learning Models
Two deep learning architectures are used for experiments: UV-Net and piNET, both developed for Ki-67 PI quantification in breast cancer and validated on large multi-institutional datasets. piNET was built using the U-NET architecture with an extra layer <cit.> and UV-Net was designed to preserve nuclear features of clustering or overlapping nuclei through dense 'V' blocks to retain the high-resolution details <cit.>. The output of piNET and UV-Net is a multi-channel probability map, with center locations of tumour nuclei detected for two classes: Ki-67^- and Ki-67^+ cells.
§.§ Transfer Learning
Transfer learning (TL) <cit.> has proven to be effective for many real-world applications by exploiting knowledge in labelled training data from a source domain. TL has made major contributions to medical image analysis as it overcomes the data scarcity problem and saves time and hardware resources. In this study, we introduce a TL approach that uses an unsupervised Ki-67 nuclei detection scheme to generate SS labels in the target domain for pre-training the model. This enables the model to learn the low-level nuclei features and attain optimal parameter initialization. We will then fine-tune the model using GS labels to capture more precise details and improve the accuracy of the learned features. We compare the performance of two network architectures, UV-Net and piNET, in the following scenarios: (1) pre-training with GS labels and fine-tuning with SS labels, and (2) pre-training with SS labels and fine-tuning with GS labels. The results are compared against training methods without TL.
§.§ Pseudo Label Generation: Silver Standards
In UDA settings, there are no labels for the target domain. Our goal is to improve performance on the target, so we train the model with the target SS labels generated by a previously developed and validated unsupervised Ki-67 nuclei detection method called the immunohistochemical colour histogram (IHCCH). The process includes vector median filtering, background subtraction, an unsupervised colour separation method that separates blue and brown objects automatically based on the histogram of the b* channel, and adaptive radius nuclei detection. More details can be found in <cit.>.
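As a rough sketch of the colour-separation step only (the full IHCCH pipeline additionally performs vector median filtering, background subtraction, and adaptive-radius nuclei detection), one might split tissue pixels on the b* channel of CIELAB as below. The Otsu threshold and the lightness-based background mask are assumptions, not the exact IHCCH procedure, and a per-nucleus detection step is still needed to turn these masks into centroid pseudo labels.

import numpy as np
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu

def split_ki67_by_bstar(rgb_patch, tissue_lightness_max=90.0):
    """Split stained pixels into blue (Ki-67-) and brown (Ki-67+) groups using the
    b* channel of CIELAB: brown DAB tends to have positive b*, hematoxylin blue negative."""
    lab = rgb2lab(rgb_patch)                    # accepts uint8 or float RGB
    L, b_star = lab[..., 0], lab[..., 2]
    tissue = L < tissue_lightness_max           # crude background removal by lightness
    # Data-driven split of the b* histogram restricted to tissue pixels.
    thr = threshold_otsu(b_star[tissue]) if tissue.any() else 0.0
    ki67_pos = tissue & (b_star > thr)          # brown-ish pixels
    ki67_neg = tissue & (b_star <= thr)         # blue-ish pixels
    return ki67_pos, ki67_neg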
§.§ Dataset
This study uses Ki-67 stained invasive breast cancer images obtained from three institutions. Table <ref> summarizes the Ki-67 datasets used for each training method.
Source Dataset:
510 patches of 256×256 pixels in size are extracted from whole slide images provided by St. Michael's Hospital (SMH) in Toronto and an open-source database, Deepslide <cit.>. The ×20 Aperio AT Turbo and ×40 Aperio ScanScope scanners were used, respectively. Deepslide images are down-sampled to ×20 for compatibility. Images were annotated by marking Ki-67^- and Ki-67^+ centroids <cit.>. Centroid annotations were recast into a Gaussian kernel to allow the system to contextually learn information from the nuclei and help the classifier discover more robust features. Artifacts including overstaining, background, folds, blur, and dust are common in tissue slides; therefore, 15% of the training dataset includes patches with artifacts and non-tumorous areas to reduce false positives. This dataset represents our source domain and contains GS labels. Each patch contains 58 tumourous cells on average for a total of 29571 cells.
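The centroid-to-Gaussian ground-truth encoding mentioned above might be implemented as below; the kernel width σ and the peak rescaling are assumed choices, since the excerpt does not specify them.

import numpy as np
from scipy.ndimage import gaussian_filter

def centroids_to_target(neg_xy, pos_xy, size=256, sigma=2.0):
    """Encode Ki-67- and Ki-67+ centroid annotations as a 2-channel Gaussian
    proximity map of shape (2, size, size), one channel per class."""
    target = np.zeros((2, size, size), dtype=np.float32)
    for ch, centroids in enumerate((neg_xy, pos_xy)):
        for x, y in centroids:
            if 0 <= int(y) < size and 0 <= int(x) < size:
                target[ch, int(y), int(x)] = 1.0
        target[ch] = gaussian_filter(target[ch], sigma=sigma)
        if target[ch].max() > 0:                 # rescale peaks to 1
            target[ch] /= target[ch].max()
    return target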
Target Dataset:
The target dataset was provided by the University Health Network (UHN) and contains 411 tissue microarrays (TMA) from 175 patients. Each patient has 1 to 3 corresponding TMAs of 2000 × 2000 pixels in size and an expert PI estimate is available for each patient. 24 TMAs from 24 patients were used to create the SS labels. These 24 TMAs were tiled into patches of size 256x256 pixels and 345 patches which contained ≥ 80 % tumorous tissue were extracted and the remaining patches were discarded. The TMAs from patients used for SS label generation were removed from our target dataset to prevent patient data leakage. 10 TMAs were randomly selected from the remaining pool and annotated by an anatomical pathology resident (N.N.J.N) and verified by a breast pathologist (S.D.) to produce pixel-wise nuclei annotations for testing in the target domain. Each annotated TMA contains 2093 tumourous cells on average for a total of 20930 cells. Accordingly, the target domain test set contains 325 TMAs from 151 patients with patient-level PI scores and 10 TMAs with nuclei annotations.
§.§ Evaluation Metrics
Nuclei detection is evaluated by comparing the Ki-67^- and Ki-67^+ centroids between the AI prediction and GS ground truths through the F1 score. The F1 score is the harmonic mean of precision and recall which is dependent on the number of true positives (TP), false positives (FP), and false negatives (FN). A TP is detected whenever the Euclidean distance between an annotation centroid and a detected centroid is less than 6 µm. This value corresponds to the average radius of tumourous cells from the source dataset. All detected cells not within 6 µm of a ground truth annotation are considered FP. Multiple detections of an already counted cell are also counted as FP. All ground truth cells without a detection within 6 µm proximity are considered FN. The F1 scores report raw nuclei detection performance, therefore, if a model is operating on an image with a low tumour nuclei count, a single missed nucleus can greatly skew the overall F1 score. Thus, different metrics, such as the proliferation index (PI) error should also be used. Tumour proliferation is measured by:
PI=# Ki-67^+ tumour cells/#(Ki-67^+ + Ki-67^-) tumour cells
which is computed over the whole TMA based on the detected nuclei. The PI difference is used to investigate the error between predicted and actual PI values: Δ PI= |PI_actual - PI_predicted|. Pairwise one-way ANOVA is used to compare model performance.
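The matching and scoring described above can be sketched as follows. The greedy one-to-one matching and the micron-to-pixel conversion (left as the parameter radius_px) are simplifying assumptions; the study's own matching procedure may differ in detail.

import numpy as np
from scipy.spatial.distance import cdist

def detection_f1(pred_xy, gt_xy, radius_px):
    """Greedy one-to-one matching of predicted and ground-truth centroids within
    radius_px; unmatched predictions are FP, unmatched ground truths are FN."""
    pred_xy = np.asarray(pred_xy, dtype=float).reshape(-1, 2)
    gt_xy = np.asarray(gt_xy, dtype=float).reshape(-1, 2)
    if pred_xy.shape[0] == 0 or gt_xy.shape[0] == 0:
        tp, fp, fn = 0, pred_xy.shape[0], gt_xy.shape[0]
    else:
        d = cdist(pred_xy, gt_xy)
        tp, used_gt = 0, set()
        for i in np.argsort(d.min(axis=1)):      # closest predictions first
            j = int(np.argmin(d[i]))
            if d[i, j] <= radius_px and j not in used_gt:
                tp += 1
                used_gt.add(j)
        fp, fn = pred_xy.shape[0] - tp, gt_xy.shape[0] - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def proliferation_index(n_pos, n_neg):
    """PI = Ki-67+ tumour cells / all tumour cells, computed over the whole TMA."""
    return n_pos / (n_pos + n_neg) if (n_pos + n_neg) else 0.0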
§.§ Experimental Setup
Five training methods are used to study Ki-67 nuclei detection and PI estimation accuracy. The first configuration is GS only, which uses only the GS data from the source domain. The second configuration, SS only, uses the SS data generated by the unsupervised IHCCH algorithm from the target domain. The third configuration, Mixed, includes both GS and SS in the training pool. The fourth configuration, GS+SS, uses GS for pre-training and SS for fine-tuning and the final configuration is our proposed method, SS+GS, which uses SS for pre-training and GS for fine-tuning. All methods that use SS are trained with increments of 100 where each increment contains SS from previous increments. Table <ref> summarizes the configurations of each training method. The IHCCH (unsupervised) method is also evaluated to verify the stand-alone performance of the tool.
To ensure robustness to training variations we use a 3-fold cross-validation protocol for all experiments. We divide our 510 source patches with GS annotations into 3 subsets. For each fold, we select one subset as the held-out patches and the other 340 patches are used in the training pool. An Adam optimizer was used with a learning rate of 1e-3, a batch size of 4 with 100 epochs, and a Huber loss function, the epoch with the lowest validation loss was saved. Data augmentations were applied for rotation and scaling. All experiments were run using a GeForce RTX 3070 Ti.
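A minimal PyTorch-style sketch of one training stage and the proposed SS+GS regime is given below, using the stated settings (Adam with learning rate 1e-3, Huber loss, 100 epochs, keeping the best-validation epoch). Here `model` stands for piNET or UV-Net and the data loaders are placeholders, so this is an illustration of the regime rather than the authors' exact code.

import copy
import torch

def train_stage(model, loader, val_loader, epochs=100, lr=1e-3):
    """One training stage: Adam(lr=1e-3), Huber loss on the 2-channel proximity maps,
    keep the weights of the epoch with the lowest validation loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.HuberLoss()
    best_val, best_state = float("inf"), copy.deepcopy(model.state_dict())
    for _ in range(epochs):
        model.train()
        for x, y in loader:                      # x: image patch, y: Gaussian target map
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best_val:
            best_val, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model

# SS+GS regime: pre-train on target-domain silver-standard labels,
# then fine-tune on source-domain gold-standard labels.
model = train_stage(model, ss_target_loader, ss_val_loader)
model = train_stage(model, gs_source_loader, gs_val_loader)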
§ RESULTS
Quantitative results are summarized in Table <ref>. Nuclei predictions are shown in Figure <ref>. Reproducibility (standard deviation between 3-fold cross-validation models when predicting on the same target distribution data) is shown in Table <ref>.
§.§ Source Domain: Nuclei Detection
170 unseen patches from the source domain with pixel-level Ki-67^- and Ki-67^+ centroid annotations are used to test nuclei detection performance. The distributions of the F1 scores are shown in Figure <ref> and summarized in Table <ref>. The proposed SS+GS method yielded superior or competitive F1 performance on the source domain when compared to the baseline method, GS Only, whereas IHCCH, SS Only, Mixed and GS+SS methods performed generally worse. Nuclei detection performance on the source domain serves as our model verification step. Our findings indicate that including SS data from the target domain does not degrade model performance on the source domain.
§.§ Target Domain: Nuclei Detection
We next test our method on an adaptation task as we shift from source domain to target domain pixel-level assessments. 10 TMAs from the target domain with pixel-level Ki-67^- and Ki-67^+ expert annotations were used to test nuclei detection performance. The distribution of the F1 scores on the target domain test set is shown in Figure <ref> and summarized in Table <ref>. The GS+SS method achieves superior performance exceeding all other methods and significantly higher performance than the baseline method regardless of the SS increment.
§.§ Target Domain: PI Computation
We extend the use of our approach to another adaptation task involving a change in the level of assessment, specifically from patch-level to patient-level. ΔPI is assessed on 151 patients (325 TMAs) from the target domain. The distributions of the ΔPI are shown in Figure <ref> and summarized in Table <ref>. SS+GS achieves superior PI prediction performance exceeding all other methods and achieving significantly lower PI error (p<0.05) compared to the baseline method, GS Only, regardless of the SS increment.
The ΔPI for GS only methods is ∼ 7.5%, but using the SS+GS method leads to a decrease in error by ∼ 3.5%, which is a significantly greater improvement compared to other methods. SS+GS methods also yielded the lowest ΔPI standard deviation signifying less variability and more consistent and reliable predictions. As some PI intervals have greater clinical significance, the patient-level PI performance was evaluated in intervals of 10% as depicted in Figure <ref>. SS+GS methods maintain the lowest ΔPI across all intervals (excluding 30% to 40% for UV-Net) which demonstrates optimal performance in clinically relevant ranges.
§.§ Qualitative Evaluation: t-SNE
We analyze the effects of the models on source and target domains further with t-SNE, a popular method to visualize high-dimensional data in 2D <cit.>. Figure <ref> illustrates such feature visualizations from source and target images obtained from GS Only, GS+SS and SS+GS models. The features learned for the source and target domains in the GS only and GS+SS models are diffuse and mostly non-overlapping, which likely causes reduced generalization. However, features from the SS+GS model are similar across source and target domains, which likely resulted in improved generalization and top performance on target domain data.
§ DISCUSSION
Ki-67 PI is visually assessed by pathologists to estimate prognosis <cit.> and decide whether adjuvant chemotherapy should be added to a patient's treatment plan <cit.>. A high Ki-67 proliferation index is associated with a poor prognosis <cit.> and better eligibility for adjuvant chemotherapy <cit.>. The monarchE Phase 3 <cit.>—establishes >20% Ki-67 PI as a clinically relevant threshold to stratify patients with estrogen receptor-positive early breast cancer eligible for adjuvant chemotherapy. However, various preanalytical, analytical and interpretation factors affect the scoring of Ki-67 by pathologists and lead to high inter-rater variability. Automated tools, such as deep learning can be used to bring objectivity and efficiency, thus improving the clinical utility of Ki-67 scoring.
While more accurate than other tools, deep learning methods experience a reduction in performance when applied to out-of-domain data. Covariate shifts between source and target domains are common in digital pathology due to different staining protocols and scanning equipment/software. This presents a significant challenge for clinical translation, as the current industry standard is to train models using data only available to the vendor. To address this issue and move closer to widespread deployment, this work presents an unsupervised domain adaptation method for Ki-67 quantification to focus on creating models that generalize to target data. The proposed pipeline learns the target distribution without manual annotations, which would be time-consuming and costly to obtain for medical images. Pseudo labels (SS labels) are extracted from the target domain in an unsupervised manner using the IHCCH method, and this data is used to supplement training datasets to learn domain- and problem-specific features. This framework can be easily implemented at any target site as a laboratory-specific calibration method, which can simplify deployment not only for Ki-67 quantification but also for a wide range of medical imaging applications.
We evaluated five training configurations (GS Only, SS Only, Mixed, GS+SS, SS+GS) on two Ki67 architectures (piNET and UV-Net) and found improved performance, particularly for the SS+GS configuration compared to the baseline, GS only. This suggests that although the SS labels may be slightly noisy (F1 score of 0.53 on source and 0.57 on target), incorporating data from the target domain can help the models learn domain-specific features. This was evident from the t-SNE plots, which showed a clear overlap in features learned for the target and source distributions in the SS+GS models. On the other hand, the GS+SS models did not perform as well, despite being the standard practice in the community. We believe that fine-tuning with the noisy SS labels forces the model to remember the noise more prominently. However, in the SS+GS configuration, the model was first trained with the noisy SS labels and then refined with clean GS labels, leading to better performance and an overall PI accuracy of 95.9% achieved using piNET. Furthermore, across clinically relevant PI ranges, the SS+GS models exhibited the best performance and demonstrated consistency (low standard deviation across multiple training runs).
We recognize there is ample opportunity to enhance performance and gain a deeper understanding of the impact of SS labels. Our strategy includes enhancing pseudo label generation, refining patch selection, diversifying patient cohorts, and assessing SS label source domain effects. We'll also compare our approach to domain adversarial learning and self-supervised model distillation. Future studies will explore per-site calibration in other datasets and benchmark against state-of-the-art methods.
§ CONCLUSION
In this study, we address the problem of domain adaptation for automated Ki-67 quantification in invasive breast cancer. We present a novel self-supervised approach that shows that using target domain pseudo labels (SS) for pre-training and fine-tuning with ground truth (GS) data from the source domain leads to improved performance on both source and target domains. The proposed method enhances the robustness of AI models to domain variations and improves adaptation to unseen data distributions. The training pipeline overcomes the difficulties of scarce labelled data and costly manual annotations; a challenge in medical imaging applications. These findings can drive widespread clinical utilization of automated quantification tools in digital pathology.
We acknowledge the Canadian Cancer Society, and MITACs Canada for funding this research.
§ APPENDIX
|
http://arxiv.org/abs/2307.07468v1 | 20230714164853 | SGGNet$^2$: Speech-Scene Graph Grounding Network for Speech-guided Navigation | [
"Dohyun Kim",
"Yeseung Kim",
"Jaehwi Jang",
"Minjae Song",
"Woojin Choi",
"Daehyung Park"
] | cs.RO | [
"cs.RO"
] |
Spoken language serves as an accessible and efficient interface, enabling non-experts and disabled users to interact with complex assistant robots. However, accurately grounding spoken utterances poses a significant challenge due to the acoustic variability in speakers' voices and environmental noise. In this work, we propose a novel speech-scene graph grounding network (SGGNet^2) that robustly grounds spoken utterances by leveraging the acoustic similarity between correctly recognized and misrecognized words obtained from automatic speech recognition (ASR) systems.
To incorporate the acoustic similarity, we extend our previous grounding model, the scene-graph-based grounding network (SGGNet), with the ASR model from NVIDIA NeMo. We accomplish this by feeding the latent vector of speech pronunciations into the BERT-based grounding network within SGGNet. We evaluate the effectiveness of using latent vectors of speech commands in grounding through qualitative and quantitative studies. We also demonstrate the capability of SGGNet^2 in a speech-based navigation task using a real quadruped robot, RBQ-3, from Rainbow Robotics.
§ INTRODUCTION
As the population ages, the demand for caregivers to assist with direct and indirect daily living activities is increasing.
A potential solution to this challenge is the deployment of robotic caregivers, which can provide increased independence for the elderly and complement human caregivers <cit.>. However, the command interfaces of robots typically require professional knowledge to operate effectively. To improve the usability of robotic assistants, we need a more intuitive interface for non-expert users.
Natural language serves as one of the most convenient communication interfaces for humans. In robotics, natural language grounding (NLG) refers to the ability of a robot to understand linguistic instructions by establishing a connection between phrases or words in human language and the physical world. Researchers have introduced a wide variety of human instruction grounding approaches <cit.>. Classic methods involve identifying the structure and meaning of linguistic instructions <cit.>. Recent approaches utilize neural language models to predict the target actions or objects in the robots' world model <cit.>. However, the grounding performance inherently depends on the accuracy of converting human speech into text. Any errors in this conversion, caused by speaker variability or environmental noise, significantly degrade the grounding performance, thereby affecting the usability and safety of the robots.
Automatic speech recognition (ASR), also known as speech-to-text (STT), is the process of converting spoken language into written text. In robotic applications, the STT service converts an operator's utterance into written form, connecting human intention to the corresponding robot action. Traditional approaches often construct separate generative language, pronunciation, and acoustic models to form a complete ASR pipeline. Recent advancements in neural network-based methods, such as LAS <cit.> and NVIDIA-NeMo ASR <cit.>, have facilitated the joint learning of all ASR components in an end-to-end fashion <cit.>.
The end-to-end schemes offer advantages in training simplicity and compatibility when integrated with other networks to achieve specific task objectives. In this work, we leverage the capabilities of state-of-the-art ASR models to enhance the performance of speech-guided navigation tasks in robotics.
We introduce an end-to-end scheme for the speech-scene graph grounding network (SGGNet^2), which robustly identifies target objects in a robot's world model given voice commands. The core idea of SGGNet^2 is that words with similar acoustic properties can be generated from closely related latent encodings. By transferring the encoding to the scene-graph grounding network (i.e., SGGNet <cit.>), we enable the robot to correctly understand the intended word even when the ASR system misconverts it.
We evaluate the proposed grounding network by training it with a Korean navigation dataset. We particularly show the similarity between the acoustic properties of words in the latent space. This unified network significantly improves grounding performance compared to conventional frameworks. Furthermore, we demonstrate the speech-guided navigation with SGGNet^2 using a quadruped robot, RBQ-3, toward assistive navigation or fetching tasks (see Fig. <ref>). Our work ultimately enhances the capability of robotic assistance through natural language commands, benefiting non-expert users.
Our contributions are as follows:
* we propose an end-to-end structure for the speech-scene graph grounding network, which resolves the issue of misconversion from ASR,
* we introduce a Korean speech-based grounding framework for training navigation tasks, utilizing NVIDIA-NeMo ASR with the Korean navigation dataset, and
* we perform a real-world demonstration by deploying SGGNet^2 on a real quadruped robot.
§ RELATED WORK
§.§ Natural-language grounding
Natural language grounding has become increasingly prominent in the field of robotics, allowing robots to follow verbal instructions without the need for a specialized interface. Early studies primarily focused on identifying target objects within the physical environment <cit.> or specifying target locations by leveraging on spatial relationships present <cit.> in written instructions. More advanced works began considering historical referring expressions <cit.>, background knowledge <cit.>, and the affordances of various skills <cit.>. Other approaches utilize graphical models to represent natural language utterances through syntactic parse trees of phrases <cit.>. However, syntactic parsing is often vulnerable to the ambiguity of unstructured expressions, which can undermine grounding performance.
Recent developments have employed language models such as OpenAI's GPT <cit.> and BERT <cit.>, which have substantially improved grounding performance. These models possess rich commonsense intent representations capable of handling the variability of linguistic commands. Researchers have also explored aligning natural language with geometric trajectories <cit.> or integrating human feedback <cit.> to enhance grounding performance. However, these works still rely on the input text converted from the user's speech. In contrast, our proposed framework merges speech and perceptual data to improve robustness against acoustic variability.
§.§ Automatic speech recognition for language grounding
To deliver our intentions to robots, researchers have used off-the-shelf ASR. Traditional ASR systems often use isolated word entries with classic machine learning models such as hidden Markov models (HMMs). In order to accommodate a wide variety of continuous speech, commercial entities offer cloud-based ASR systems: Microsoft Speech Platform SDK <cit.> and Google Cloud Speech API <cit.>.
While these software systems provide high accuracy, various factors like the user's accent, linguistic variation, and background noise can significantly affect the precision of transcription.
The advance of neural network models has improved the performance of standalone ASR systems.
For example, NVIDIA NeMo <cit.>, a representative conversational AI toolkit, enables a robot to output text from an acoustic model for speech recognition.
In this work, we not only utilize the capability of the ASR model but also extract speech features to improve grounding performance in noisy real-world settings.
§ PROBLEM FORMULATION
Consider a mobile robot operating in a workspace with a set of detectable objects o_i ∈𝒪. Our goal is to enable the robot to ground a target object o_t∈𝒪 that satisfies a human's speech command Λ, given a world model Υ. The robot receives the spoken command Λ through a natural language interface (i.e., microphone). The world model Υ includes information such as the class, position, and other recognizable features of each observable object o_i ∈𝒪. The robot recognizes each object using a detector ϱ. Conventional language-grounding framework finds the target object o_t using modularized ASR, world model, and natural language grounding functions, assuming the conditional independence of visual and auditory information:
o_t^* = argmax_{o_t ∈ 𝒪} P(o_t | Λ, 𝒪)
= argmax_{o_t ∈ 𝒪} P(o_t | 𝐙, Λ, Υ, 𝒪) P(𝐙 | Λ, Υ, 𝒪) P(Υ | Λ, 𝒪)
= argmax_{o_t ∈ 𝒪} [P(o_t | 𝐙, Υ)]_Natural language grounding · [P(𝐙 | Λ)]_Automatic speech recognition · [P(Υ | 𝒪)]_World-model generator,
where 𝐙 is the written utterance that corresponds to Λ. However, the acoustic variability and noise of spoken utterances in the real world often degrade the performance of ASR.
We make the assumption that words or characters that sound similar result in vectors that are close to each other in the latent space. This can help improve grounding performance in cases where the text is misconverted. For example, the Korean word “자전거 ([jajeongeo], meaning bicycle)" is often misconverted to “하얀 거 ([hayangeo], meaning white one)" or “다른 거 ([dareungeo], meaning something else)" when input to the ASR model. This can degrade the grounding process, especially in a noisy environment.
We improve the framework with Eq. (<ref>) by formulating an end-to-end scheme of grounding network, SGGNet^2, directly grounding o_t given the spoken utterance and a scene-graph Υ_sg,
o_t^* = argmax_{o_t ∈ 𝒪} [P(o_t | Λ, Υ_sg)]_SGGNet^2 · [P(Υ_sg | 𝒪)]_Scene-graph generator,
where the scene-graph Υ_sg is a graph-based representation of the world model encoding the semantic relationships between entities based on our past work <cit.>. For computational convenience, our model first calculates the best scene-graph Υ^*_sg and then computes the likelihood of the target object given Λ and Υ^*_sg.
§ SPEECH-SCENE GRAPH GROUNDING NETWORK
§.§ Overview of SGGNet^2
The speech-scene graph grounding network, SGGNet^2, is a unified structure of the speech-grounding model that combines a scene-graph grounding network, SGGNet <cit.>, with an ASR network from NVIDIA NeMo ASR <cit.>.
Unlike SGGNet, which accepts text as input, SGGNet^2 directly processes human-speech commands. Fig. <ref> shows the overall architecture of the speech-enabled navigation system with SGGNet^2, which grounds a target object in a scene-graph obtained from an RGB-D camera given a human-speech command.
SGGNet^2 particularly uses a Conformer-CTC <cit.> model via NeMo ASR, which converts audio input Λ to textual output 𝐙. To use a variety of languages, we pre-train the model with the Korean speech dataset (see Sec. <ref>) before training the entire SGGNet^2 model with the navigation command dataset. We input the embedded speech vector 𝐯_speech∈ℝ^n, the output text vector 𝐯_txt∈ℝ^m, and an embedded vector of the scene-graph 𝐯_sg∈ℝ^k (see Sec. <ref>) to DistilKoBERT <cit.>, for grounding a target object, where n, m, and k are user-defined sizes of input vectors.
§.§ Automatic Korean speech recognition
We develop an automatic Korean-speech recognition network using NVIDIA NeMO <cit.>, which is a conversational AI toolkit with prebuilt modules for ASR, natural language processing (NLP), and TTS synthesis. We use NeMo's Conformer-CTC-based ASR model, a variant of the Conformer, which has an encoder using CTC loss <cit.> and a linear decoder. This structure gives a non-autoregressive model enabling fast inference.
To enable the ASR model to recognize Korean speech, we pre-train it with the KsponSpeech dataset, a large-scale corpus of spontaneous Korean speech <cit.>. This dataset includes 620,000 utterances of PCM format recordings (965.2 hours) and text-Korean phonetic transcriptions from 2,000 speakers, with 54% women and 46% men. We preprocess the dataset following the guidelines outlined in <cit.>, converting the PCM recordings to WAV and limiting the vocabulary to 5,000 words.
We apply a 30 threshold to remove silence sections in the speech. For transcriptions, we eliminate special tokens indicating background noise, breathing, and other unwanted elements. Additionally, we process the transcriptions into characters, ensuring a suitable format for the model's further analysis.
We pre-train the model for 20 epochs with a batch size of 32 with Adam optimizer setting 10^-6 of weight decay. To increase the dataset size, we employed SpecAugment <cit.> for data augmentation.
§.§ Scene-graph generator
We design a scene-graph generator that constructs a scene-graph Υ_sg given visual information. The generator consists of four steps: object detection, position association, world-model generation, and scene-graph conversion.
To detect environmental objects around the robot, we use an object detector, YOLO <cit.> that subscribes to RGB image streams and finds bounding boxes estimating their positions in pixel coordinates.
We use an adaptive clustering algorithm for LiDAR point clouds <cit.> to associate the detected 2-D objects with 3-D box clusters and obtain their 3-D positions (∈ℝ^3, see details in <cit.>).
We then add any new detection result to an object dictionary (i.e., world model Υ) with the object class, attribute, and position.
The system stores the class and attribute of each object in the textual format while representing the position using a three-dimensional vector in ℝ^3.
Then, to represent the spatial relationships between objects, we convert the world model dictionary to a scene-graph, Υ_sg=(𝒱,ℰ), where 𝒱 and ℰ are a set of object nodes and directional edges between them, respectively.
Pre-defined predicates, left, right, front, and behind, compose the edge attributes between two interconnected nodes.
A rule-based approach is used to ascertain these predicates, which considers the x and y coordinates of the nodes within the global frame (see details in <cit.>).
The system stores edge predicates, which represent spatial relationships, in textual format.
When multiple predicates are present, the system combines them using a blank space as a separator, for example, left behind.
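A minimal sketch of such a rule is given below; the axis conventions and the tolerance are assumptions, since the exact rules are given in the cited prior work rather than in this excerpt.

def spatial_predicate(obj_i, obj_j, tol=0.1):
    """Return the edge attribute from object i to object j (e.g. 'left behind')
    based on their (x, y) positions in the global frame."""
    dx = obj_j["x"] - obj_i["x"]   # +x taken as 'front', an assumed convention
    dy = obj_j["y"] - obj_i["y"]   # +y taken as 'left',  an assumed convention
    parts = []
    if dy > tol:
        parts.append("left")
    elif dy < -tol:
        parts.append("right")
    if dx > tol:
        parts.append("front")
    elif dx < -tol:
        parts.append("behind")
    return " ".join(parts)         # multiple predicates joined by a blank space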
§.§ End-to-end grounding network design & training
We combine the ASR network as a speech encoder alongside the graph encoder to build an end-to-end structure of SGGNet^2. Fig. <ref> shows the detailed structure, which shows the embedded vectors from the graph encoder and the ASR module, as well as the converted text command are input to the language model, such as BERT, to ground a sentence of speech input to a target entity in the scene-graph. We describe each input below.
1) Scene-graph vector 𝐯_sg: We extract a latent vector 𝐯_sg∈ℝ^k=128 of the scene-graph Υ_sg passing that of nodes and edges through a graph attention network (GAT) <cit.>. In detail, let N be the number of nodes in a fully-connected scene-graph Υ_sg. We convert each node feature (e.g., object class, bicycle) to a vector 𝐱_i ∈ℝ^128 encoding it through DistilKoBERT <cit.>, where i ∈{1, ..., N }. Likewise, we also encode each edge feature, which is the spatial relationship (e.g., left behind) between i-th and j-th nodes, to a vector e_ij∈ℝ^128, where j ∈{1, ..., N }.
We update node features 𝐗=[𝐱_1, ..., 𝐱_N ] via a GAT function f_GAT, where we compute the attention scores α_ij^(L) from i-th node to j-th node at L-th layer based on <cit.>. Then, we compute the latent vector 𝐯_sg given the node features and the edge features 𝐄=[e_12, ..., e_ij, ... ] via a GAT function, f_GAT:
𝐡 = f_GAT(𝐗, 𝐄)
𝐯_sg = ϕ ( 𝐡 )
where 𝐡 and ϕ are nodes' new embedding (∈ℝ^N × 128) and a max-pooling readout function, respectively. Note that we use a ReLU activation function between layers.
2) ASR speech vector v_speech: The ASR model comprises two primary components: an encoder and a decoder.
The encoder converts the speech command Λ to a hidden vector 𝐯_speech, t∈ℝ^256 that captures the acoustic features of the spoken words at each step t ∈{0, ..., T } given the spoken input with length T.
In this work, we use T=126.
3) ASR output text vector v_txt:
The Conformer-CTC's linear decoder predicts a distribution of characters or words given each hidden vector 𝐯_speech, t. By selecting the best one for each step, we obtain a sequence of output texts that can be converted to an embedding vector, v_txt∈ℤ^m=756.
We finally input the described embedding vectors, [𝐯_sg', 𝐯_speech', 𝐯_txt], to the language model, DistilKoBERT <cit.> for grounding a target object in the environment, where 𝐯_sg' ∈ℝ^756 and 𝐯_speech' ∈ℝ^756 are the outputs of linear layers. Following the standard BERT <cit.> convention, we insert [CLS] and [SEP] tokens at the beginning and end of the input, respectively. We then add an extra linear layer to the language model, using the [CLS] token as input for target object classification.
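As a rough illustration of the scene-graph encoder described in 1) above, the sketch below uses PyTorch Geometric's GATConv with edge features and a max-pooling readout. The library choice, the number of layers, and the number of attention heads are assumptions not fixed by the text; only the 128-dimensional node/edge features, the ReLU between layers, the max-pooling readout, and the linear projection to ℝ^756 follow the description.

import torch
from torch_geometric.nn import GATConv, global_max_pool

class SceneGraphEncoder(torch.nn.Module):
    """Sketch of the scene-graph encoder: GAT layers over 128-d BERT node/edge
    features, ReLU between layers, max-pooling readout giving v_sg in R^128,
    and a linear layer producing v_sg' in R^756 for the language model."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.gat1 = GATConv(dim, dim, heads=heads, concat=False, edge_dim=dim)
        self.gat2 = GATConv(dim, dim, heads=heads, concat=False, edge_dim=dim)
        self.proj = torch.nn.Linear(dim, 756)

    def forward(self, x, edge_index, edge_attr, batch):
        h = torch.relu(self.gat1(x, edge_index, edge_attr))
        h = self.gat2(h, edge_index, edge_attr)
        v_sg = global_max_pool(h, batch)          # readout phi: max over nodes
        return v_sg, self.proj(v_sg)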
For training, our model takes a speech command and an associated scene-graph as input. We also provide a target object in the environment, as a ground-truth output. We use a cross-entropy loss function and the AdamW <cit.> optimizer, which has a learning rate of 5 × 10^-5 and weight decay of 0.01.
We employ a learning rate scheduler that uses cosine annealing with warm restarts <cit.> during the training process. We train the model around 200 epochs.
§ EXPERIMENTAL SETUP
§.§ Data collection
We create a speech-scene graph dataset for training SGGNet^2 in the robotic navigation domain. In the dataset, each sample contains a human speech, an equivalent text command, and an associated scene-graph. The speeches are a pattern of navigation commands consisting of a verb, a preposition, and an object. For example, we use “자전거로 가" equivalent to “Move to the bicycle." To build a dataset, we first selected 10 classes of target objects: bicycle, sign, desk, door, window, red box, blue box, black box, car, and suitcase. We sampled between two and eight synonyms per class. For example, for the object class table, we additionally sampled desk and stand. Finally, we have a total of 35 object names, where each is assigned to one of the training and test sets. Associating to each name, we sampled a list of suitable verbs and prepositions through GPT-4 <cit.> given exemplar navigation commands. Finally, we artificially generated 390 sentences per object by combining a verb, a preposition, and an object name. In other words, we generated a total of 13,650 text commands that consist of 9,750 training and 3,900 test sets. Note that we set the dataset to be independent by assigning non-overlapping object names.
We also created a scene-graph dataset associated with the text commands. For each command involving a target object, we randomly selected between two and eight additional objects, as nodes, that do not overlap each other. We assigned a 2-dimensional location to each object and fully connect them with edges, representing geometric relationships such as left, right, front, and behind. Note that each graph only includes one object per object type. Additionally, each edge can hold multiple relationships, such as `left behind.'
To produce synthetic speech data associated with the commands, we converted the text commands to artificial voice using Naver Clova Voice <cit.>, an advanced text-to-speech (TTS) synthesizer. To maintain diversity in the speech, we randomly selected five male and five female voices. The synthesis system generates a 24 and 16 bits linear WAV file, which is then padded to be 5 long.
§.§ Robot setup for speech-enabled navigation
We briefly introduce our hardware setup and navigation system for semantic navigation. Our robot system consists of a quadruped robot RBQ-3 from Rainbow Robotics, a GPU-enabled single-board computer NVIDIA Jetson AGX Orin, and additional sensors. The RBQ-3 robot is a 12-degree-of-freedom quadruped robot with a maximum payload 3. To perform LiDAR-inertial navigation and semantic map generation, we mount an Ouster OS-1 32-channel LiDAR sensor and a Realsense D435 RGB-D camera, as shown in Fig. <ref>. In addition, we also mounted a microphone-speaker for speech-based navigation, enabling the robot to communicate with operators. The Jetson computer collects sensor information and controls the robot by sending commands for speed and direction to the robot's controller. We integrated the software and communication systems using the robot operating system (ROS) with the Noetic version.
Our navigation system consists of two layers: metric and semantic simultaneous-localization-and-mappings (SLAMs). For the metric SLAM, we construct a dense 3-D point cloud map using a LiDAR-inertial odometry framework, FAST-LIO2 <cit.>, and generate local & global collision-free trajectories within the map using a move_base package in ROS. Then, for semantic SLAM, our system uses YOLO <cit.> and an adaptive point-cloud clustering algorithm <cit.> depicted in Sec. <ref>.
By labeling each cluster, we can create a semantic map which is also used for scene-graph generation and natural language grounding. In other words, when a user can indicate a target object in the world, the robot can recognize its physical location. Finally, we control the robot by sending speed and direction computed from the semantic navigation system.
For speech-scene graph grounding, our system receives the voice signal via the microphone-speaker and maintains a scene-graph converted from the detection and localization information of the objects in the semantic SLAM. Finally, when our SGGNet^2 returns a target name given the two pieces of information, our navigation system enables the robot to reach the real-world target.
§ EVALUATION
We hypothesize that the similarity in acoustic properties between misconverted and correct words assists in identifying the correct target object by leveraging their close embeddings in the latent space of ASR. We show the effectiveness of this hypothesis through statistical evaluations and a real-robot demonstration.
§.§ Statistical evaluation
Fig. <ref> shows the projection of various latent vectors in 2-D space with their meaning and pronunciation. We found that words with similar pronunciations are located close together in the 2-D space regardless of their meanings. For example, although `restaurant [sik-dang]' and `market [si-jang]' have different meanings, their latent vectors are close to each other. Thus, regardless of misconversion in ASR, using latent vectors can mitigate the effect of any misconverted word. To obtain the projection points in Fig. <ref>, we sampled 50 words from the ASR vocabulary list and performed principal component analysis (PCA) of their latent vectors. We particularly visualized the first two components of the PCA results.
Table <ref> shows the ablation study that compares the grounding accuracy of SGGNet^2 with its variants. Our proposed grounding network SGGNet^2 outperformed all the other frameworks with the highest grounding accuracy, over 85%, which is about 8%p higher than SGGNet^2 without the latent speech vector 𝐯_speech. The accuracy of SGGNet^2 is also about 5%p higher than that of the next best method, SGGNet with the ground-truth written utterance 𝐙^* and the same 𝐯_sg size. The results indicate that the use of the latent speech vector 𝐯_speech not only resolves the problem of acoustic variability/noise but also improves the grounding performance given the ground-truth texts. In addition, it is noticeable that relying solely on ASR-converted written utterances degrades the grounding performance of the previous SGGNet. The overall comparison indicates that our method, SGGNet^2, is robust and effective.
§.§ Demonstration with a real robot RBQ-3
We finally demonstrated the effectiveness of our SGGNet^2 in a real-world scenario for speech-enabled semantic navigation tasks. Fig. <ref> shows two examples of the indoor speech-enabled navigation experiment. A human operator speaks his Korean command, such as “Go to the red box (빨간색 박스로 가)" or “Move in front of the bicycle (자전거 앞으로 이동해)," that is delivered to the robot's microphone-speaker or a laptop's built-in microphone the remote operator is holding. Our model, SGGNet^2, successfully identified the intended destination out of five candidates and our RBQ-3 robot was able to reach the destination.
Further, even in the case when the pillar occluded the viewpoint of the robot as shown in Fig. <ref> (b), our robot could correctly ground the intended object by leveraging the scene-graph.
This indicates the effectiveness of scene-graph-based grounding as compared to image-based grounding.
Note that we enabled the robot to construct a 2-D metric/semantic map as well as a scene-graph in advance, placing five objects: a bicycle, a sign, a red box, a blue box, and a black box in advance.
§ CONCLUSION
We proposed a framework for grounding Korean speech-enabled navigation commands, which combines a scene graph-based grounding network with a Korean acoustic-speech recognition network. Our evaluation shows that similar acoustic properties of words lead to similar encodings, reducing grounding failures and enhancing the robustness of SGGNet. Furthermore, we validate the effectiveness of our approach through a real-world navigation task. We anticipate that our study enhances the accessibility of robotic assistance by enabling natural language commands for non-expert users.
|
http://arxiv.org/abs/2307.04212v1 | 20230709155233 | Delay-Adaptive Control of First-order Hyperbolic PIDEs | [
"Shanshan Wang",
"Jie Qi",
"Miroslav Krstic"
] | math.AP | [
"math.AP",
"cs.SY",
"eess.SY",
"math.OC",
"physics.class-ph",
"physics.flu-dyn"
] |
Delay-Adaptive Control of First-order Hyperbolic PIDEs
Shanshan Wang^1, Jie Qi^2, and Miroslav Krstic^3
^1 Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
^2 College of Information Science and Technology, Donghua University, Shanghai, China
^3 Department of Mechanical and Aerospace Engineering, University of California, San Diego, California, USA
Correspondence: Jie Qi, [email protected]
August 12, 2023
========================================================
[Summary] We develop a delay-adaptive controller for a class of first-order hyperbolic partial integro-differential equations (PIDEs) with an unknown input delay. By employing a transport PDE to represent delayed actuator states, the system is transformed into a transport partial differential equation (PDE) with unknown propagation speed cascaded with a PIDE. A parameter update law is designed using a Lyapunov argument and the infinite-dimensional backstepping technique to establish global stability results. Furthermore, the well-posedness of the closed-loop system is analyzed. Finally, the effectiveness of the proposed method was validated through numerical simulations.
§ INTRODUCTION
First-order hyperbolic PIDEs are widely used in various engineering applications, including traffic flow <cit.>, pipe flow <cit.>, heat exchangers <cit.>, and oil well drilling <cit.>. These applications often involve time delays due to the transportation of matter, energy, and information, which negatively affect the stability and performance of the system. Maintaining a stable fluid temperature is critical for the normal operation of heat exchangers, but the response speed is often limited when regulating fluid temperature, resulting in a time delay <cit.>. The exact value of the delay is usually hard to measure, which becomes a significant source of uncertainty within the controlled process <cit.>. Controlling the advection process in the presence of unknown delays is, therefore, a challenging task with practical significance. Thus, addressing the stabilization problem of first-order hyperbolic PIDEs with unknown input delays is of great practical importance.
Recently, there have been many studies on the stability of first-order hyperbolic PIDEs <cit.>, and the development of infinite-dimensional backstepping techniques in <cit.> has provided effective methods for the PDE system control problems. <cit.> applies this method to the control of unstable open-loop hyperbolic PIDEs and developed a backstepping-based controller to stabilize the system. Subsequently, control problems for 2× 2 first-order PDEs <cit.>, n+1 coupled first-order hyperbolic PDEs <cit.>, and m+n anisotropic hyperbolic systems <cit.> were investigated by employing the infinite-dimensional backstepping approach. In reference <cit.>, a state feedback controller was designed for hyperbolic PIDEs with time-varying system parameters using this infinite-dimensional backstepping method, and the controller ensures that the system state converges to zero in the H_∞ norm within a finite time. Furthermore, a stabilizing controller and observer for hyperbolic PIDEs with Fredholm integrals were constructed in <cit.>, and the results of <cit.> were extended to output regulation problems. <cit.> demonstrated the equivalence between finite-time stabilization and exact controllability properties for first-order hyperbolic PIDEs with Fredholm integrals. For linear anisotropic hyperbolic systems without integral terms, finite-time output regulation problems were addressed in <cit.>, and stabilization problems for linear ODEs with linear anisotropic PDEs were solved in <cit.>.
Infinite-dimensional backstepping has also been
applied to adaptive control of hyperbolic PDEs. The pioneering work was presented in <cit.>, where an adaptive stabilization method was developed for a one-dimensional (1-D) hyperbolic system with a single uncertain parameter. Since then, this method has been extensively applied to various types of hyperbolic PDEs with unknown parameters, as presented in the extensive literature <cit.>. The aforementioned results are built based on three traditional adaptive schemes, including the Lyapunov design, the passivity-based design, and the swapping design, which were initially proposed for nonlinear ODEs <cit.>, and extended to the boundary adaptive control of PDEs <cit.>. Combined with backstepping design, a novel control strategy is proposed for coupled hyperbolic PDEs with multiplicative sensor faults in <cit.>, it utilized a filter-based observer and model-based fault parameter estimation technique to achieve the tracking objective.
In recent years, studies began to pay attention to the time delays that occur in first-order PIDE systems since delays are commonly encountered in engineering practice. For instance, in <cit.>, input delays were considered, and a backstepping boundary control was designed for first-order hyperbolic PIDEs. An observer-based output feedback control law was proposed for a class of first-order hyperbolic PIDEs with non-local coupling terms in the domain and measurement delay compensation<cit.>. Reference <cit.> addressed the output boundary regulation problem for a first-order linear hyperbolic PDE considering disturbances in the domain and on the boundary as well as state and sensor delays. Recently, the robustness of output feedback for hyperbolic PDEs with respect to small delays in actuation and measurements was discussed in <cit.>. Research on adaptive control for unknown arbitrary delays in PDE systems is relatively scarce. In contrast, there have been significant research achievements in the adaptive control of ODE systems with unknown delays. A notable theoretical breakthrough by developing adaptive control methods to compensate for uncertain actuator delays is achieved in <cit.>. Subsequently, the delay adaptive control technique has been applied to various types of unknown delays in ODE systems, including single-input delay <cit.>, multi-input delay <cit.> and distributed input delay <cit.>. Inspired by these studies, recent work on parabolic systems with unknown input delays is presented in <cit.>. However, research on hyperbolic PDE systems with delays remains relatively limited. For the first-order hyperbolic systems with uncertain transport speed, parameter estimators and adaptive controllers are designed in <cit.> by using swapping filters. Different from these two studies, we apply a Lyapunov argument combined with the infinite-dimensional backstepping technique to design a delay-adaptive controller that achieves global stability in this paper, since the Lyapunov based adaptive methods are known to provide better transient performance <cit.>.
In this paper, we consider a hyperbolic PIDE with an arbitrarily large unknown input delay. We extend the previous work on parabolic PDEs <cit.> to a first-order PIDE system. We employ the infinite-dimensional backstepping method and choose the classic update law for the unknown delay, resulting in the structuring of the target system as a "cascade system", and the target transport PDE has two extra nonlinear terms which are controlled by the delay estimation error and the delay update law.
The L^2 global stability of the target system is proven using appropriate Lyapunov functionals. The inverse Volterra/backstepping transformation establishes the norm equivalence relationship between the target system and the original one, thereby achieving L^2 global stability of the PDE system under the designed adaptive delay compensation controller. Furthermore, the well-posedness of the closed-loop system is analyzed.
Main contributions of this paper are:
(1) This paper develops a combined approach of infinite-dimensional backstepping and the Lyapunov functional method for delay-adaptive control design for a class of hyperbolic PIDEs with unknown input delay. In <cit.>, the presence of nonzero boundary conditions in the parabolic PDE target system with unknown input delay restricts us to local stability of the closed-loop system with the delay update law. However, we leverage the first-order hyperbolic nature of the system to attain global stability of the closed-loop system.
(2) The well-posedness of the closed-loop system is established. Due to the presence of nonlinear terms and non-zero boundary conditions in the target system, the proof of well-posedness is not straightforward. We use the semigroup method to analyze the well-posedness of the target system, and construct Lyapunov functions to establish the system's asymptotic stability in the H^1 norm, thereby ensuring the global existence of the classical solution. Due to the invertibility of the backstepping transformation, the equivalence between the target system and the closed-loop system can be established, so that the closed-loop system is well-posed.
The structure of this paper is as follows: Section <ref> briefly describes the design of a nonadaptive controller for the considered hyperbolic PIDE system. Section <ref> discusses the design of the delay-adaptive control law. Section <ref> is dedicated to the stability analysis of the resulting adaptive closed-loop system and the well-posedness of the closed-loop system. Section <ref> provides consistent simulation results to demonstrate the feasibility of our approach. The paper ends with concluding remarks in Section <ref>.
Notation: Throughout the paper, we adopt the following notation to define the L^2-norm for χ(x)∈ L^2[0,1]:
‖χ‖^2_L^2=∫_0^1|χ(x)|^2dx,
and set ‖χ‖^2=‖χ‖^2_L^2.
For any given function ψ(·, D̂(t)), its time derivative satisfies
∂ψ (· , D̂ (t))/∂ t= Ḋ̂̇(t) ∂ψ(·,D̂ (t))/∂D̂ (t).
§ PROBLEM STATEMENT AND NON-ADAPTIVE CONTROLLER
Consider the first-order PIDE with an input delay D>0,
u_t(x,t)= u_x(x,t)+g(x) u(0,t)+∫_0^xf(x,y)u(y,t)dy,
u(1,t)=U(t-D),
u(x,0)=u_0(x),
for (x,t)∈ (0,1)×ℝ_+, where g(x),f(x,y)∈ C[0,1] are known coefficient functions. Following <cit.>, the delayed input U(t-D) is written as a transport equation coupled with (<ref>) as follows:
u_t(x,t)= u_x(x,t)+g(x) u(0,t)+∫_0^xf(x,y)u(y,t)dy,
u(1,t)=v(0,t),
u(x,0)=u_0(x),
Dv_t(x,t)=v_x(x,t), x∈[0,1),
v(1,t)=U(t),
v(x,0)=v_0(x),
where the infinite-dimensional actuator state is solved as
v(x,t)=U(t+D(x-1)).
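For intuition, the actuator state v(x,t) is simply a rescaled buffer of the last D seconds of the input. The following minimal numerical sketch (our own illustration, with grid sizes and the test input chosen arbitrarily) propagates the transport equation for v by an upwind scheme and checks it against the explicit solution v(x,t)=U(t+D(x-1)).

```python
import numpy as np

# Sketch: discretize the actuator state v on x in [0,1] and propagate
# D*v_t = v_x with boundary v(1,t) = U(t) by an upwind scheme.
# N, D, and the test input U are illustrative choices, not from the paper.
D = 2.0                      # transport delay
N = 200                      # number of spatial cells
dx = 1.0 / N
dt = D * dx                  # with dt/(D*dx) = 1 the upwind step is a pure shift
x = np.linspace(0.0, 1.0, N + 1)
U = lambda t: np.sin(t) if t >= 0.0 else 0.0   # example input signal

v = np.zeros(N + 1)          # v(x,0) = 0, i.e. empty input history
t = 0.0
for _ in range(1000):
    # upwind update: v_i <- v_i + (dt/(D*dx)) * (v_{i+1} - v_i)
    v[:-1] = v[:-1] + (dt / (D * dx)) * (v[1:] - v[:-1])
    t += dt
    v[-1] = U(t)             # boundary condition v(1,t) = U(t)

# The discrete state matches the explicit solution v(x,t) = U(t + D*(x-1)).
exact = np.array([U(t + D * (xi - 1.0)) for xi in x])
print(np.max(np.abs(v - exact)))   # ~0 up to floating-point error
```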
To design the delay-compensated controller U(t), the backstepping transformation as follows can be employed:
w(x,t)= u(x,t)-∫_0^xk(x,y) u(y,t)dy,
z(x,t)= v(x,t)-∫_0^1γ(x,y) u(y,t)dy-D∫_0^xq(x-y)v(y,t)dy,
where the kernel function k(x,y) and q(x-y) are defined on 𝒯_1={(x,y): 0≤ y ≤ x≤ 1}, γ(x,y) on 𝒯_2 ={(x,y): 0≤ y, x ≤ 1}, which gives the following target system
w_t(x,t)=w_x(x,t),
w(1,t)=z(0,t),
w(x,0)= w_0(x),
Dz_t(x,t)=z_x(x,t),
z(1,t)=0,
z(x,0)=z_0(x),
with a mild solution for z
z(x,t)=
z_0(x+t/D), 0≤ x+t/D≤ 1,
0, x+t/D>1,
Using the backstepping method, one can get the kernel equations
k_x(x,y)= -k_y(x,y)+∫_y^xf(τ,y)k(τ,y)dτ-f(x,y),
k(x,0)= ∫_0^x k(x,y)g(y)dy-g(x),
γ_x(x,y)= -Dγ_y(x,y)+D∫_y^1f(τ,y)γ(x,τ)dτ,
γ(x,0)= ∫_0^1g(y)γ(x,y)dy,
γ(0,y)= k(1,y),
q(x)= γ(x,1).
From the boundary conditions (<ref>) and (<ref>), the associated control law is straightforwardly derived
U(t)=∫_0^1γ(1,y) u(y,t)dy+D∫_0^1q(1-y)v(y,t)dy.
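Numerically, once the kernels have been computed on a spatial grid, evaluating (<ref>) reduces to two quadratures. A minimal sketch is given below; the kernel values are taken as given placeholder arrays, since in practice they must first be obtained by solving the kernel equations.

```python
import numpy as np

def trapz(fv, grid):
    # simple trapezoidal rule on a 1-D grid
    return float(np.sum(0.5 * (fv[1:] + fv[:-1]) * np.diff(grid)))

def control_law(u, v, gamma_1y, q_1my, D, x):
    """Evaluate U(t) = int_0^1 gamma(1,y) u(y,t) dy + D int_0^1 q(1-y) v(y,t) dy.

    gamma_1y and q_1my are arrays of kernel values gamma(1, y) and q(1 - y) on
    the grid x; they are assumed to have been precomputed from the kernel equations.
    """
    return trapz(gamma_1y * u, x) + D * trapz(q_1my * v, x)

# illustrative call with placeholder states and kernel values (not the paper's kernels)
x = np.linspace(0.0, 1.0, 101)
u_state = 4.0 * np.sin(np.pi * x)
v_state = np.zeros_like(x)
print(control_law(u_state, v_state, np.ones_like(x), np.ones_like(x), D=2.0, x=x))
```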
The transformations (<ref>)–(<ref>) are invertible, with inverse transformations given by
u(x,t)= w(x,t)+∫_0^xl(x,y) w(y,t)dy,
v(x,t)= z(x,t)+∫_0^1η(x,y) w(y,t)dy-D∫_0^xp(x-y)z(y,t)dy,
where kernels l(x,y), η(x,y) and p(x-y) satisfy the following PDEs,
l_x(x,y)+l_y(x,y)=-∫_y^xf(τ,y)l(τ,y)dτ-f(x,y),
l(x,0)=-g(x),
η_x(x,y)+Dη_y(x,y)=0,
η(x,0)=0,
η(0,y)=l(1,y),
p(x)=η(x,1).
Next, we develop an adaptive controller with a delay update law to stabilize (<ref>)–(<ref>) for an arbitrarily long unknown delay.
§ DESIGN OF A DELAY-ADAPTIVE FEEDBACK CONTROL
§.§ Adaptive control design
Consider the plant (<ref>)–(<ref>) with an unknown delay D>0, which is equivalent to the cascade system (<ref>)–(<ref>) with an unknown propagation speed 1/D. We design an adaptive boundary controller that ensures a global stability result.
assuptionAssumption
The lower bound D and the upper bound D̅ of the delay D>0 are known.
Based on the certainty equivalence principle, we rewrite controller (<ref>) by replacing D with estimated delay D̂(t) as the delay-adaptive controller
U(t)=∫_0^1γ(1,y,D̂(t)) u(y,t)dy+D̂(t)∫_0^1q(1-y,D̂(t))v(y,t)dy.
§.§ Target system for the plant with unknown input delay
Rewriting the backstepping transformations (<ref>) as
z(x,t)=v(x,t)-∫_0^1γ(x,y,D̂(t)) u(y,t)dy-D̂(t)∫_0^xq(x-y,D̂(t))v(y,t)dy,
and its inverse (<ref>) as:
v(x,t)=z(x,t)+∫_0^1η(x,y,D̂(t)) u(y,t)dy+D̂(t)∫_0^xp(x-y,D̂(t))z(y,t)dy,
where the kernels γ(x,y,D̂(t)), q(x-y,D̂(t)), η(x,y,D̂(t)), p(x-y,D̂(t)) satisfy the same form of PDEs as (<ref>)-(<ref>) and (<ref>)-(<ref>), except that D is replaced with D̂(t). Using the transformations (<ref>) and (<ref>), we get the following target system
w_t(x,t)= w_x(x,t),
w(1,t)=z(0,t),
w(x,0)= w_0(x),
Dz_t(x,t)=z_x(x,t)-D̃(t)P_1(x,t)-DḊ̂̇(t)P_2(x,t),
z(1,t)=0,
z(x,0)=z_0(x),
where D̃(t)=D-D̂(t) is the estimation error, functions P_i(x,t), i=1,2 are given below:
P_1(x,t)= z(0,t)M_1(x,t)+∫_0^1 w(y,t)M_2(x,y,t)dy,
P_2(x,t)= ∫_0^1 z(y,t)M_3(x,y,t)dy+∫_0^1w(y,t)M_4(x,y,t)dy,
with
M_1(x,t)= γ(x,1,D̂(t)),
M_2(x,y,t)= γ(x,1,D̂(t))l(1,y)-γ_y(x,y,D̂(t))
+∫_y^1(-γ_y(x,ξ,D̂(t))l(ξ,y)+γ(x,ξ,D̂(t))f(ξ,y)+∫_ξ^1γ(x,τ,D̂(t))f(τ,ξ)l(ξ,y)dτ)dξ,
M_3(x,y,t)= q(x-y,D̂(t))+ q_D̂(t)(x-y,D̂(t))+D̂(t)∫_y^xq(x-ξ,D̂(t))p(ξ-y,D̂(t))dξ
+D̂(t)^2∫_y^xq_D̂(t)(x-ξ,D̂(t))p(ξ-y,D̂(t))dξ,
M_4(x,y,t)= γ_D̂(t)(x,y,D̂(t))+∫_y^1γ_D̂(t)(x,ξ,D̂(t))l(ξ,y)dξ+∫_0^xq(x-ξ,D̂(t))η(ξ,y,D̂(t))dξ
+D̂(t)∫_0^xq_D̂(t)(x-ξ,D̂(t))η(ξ,y,D̂(t))dξ.
§.§ The parameter update law
We choose the following update law
Ḋ̂̇(t)=θProj_[D,D]{τ(t)}, 0<θ<θ^*,
where
τ(t) is given as
τ(t)=-b_1∫_0^1(1+x)z(x,t)P_1(x,t)dx/N(t),
with N(t)=1/2∫_0^1 (1+x)w (x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2d x, b_1>2D̅ and
θ^*=min{D,b_1-2D̅}min{1,b_1}/2b_1^2L^2.
The standard projection operator is defined as follows
Proj_[D,D̅]{τ(t)}=
0, if D̂(t)=D and τ(t)<0,
0, if D̂(t)=D̅ and τ(t)>0,
τ(t), otherwise.
The projection is used to keep the estimate D̂(t) within the known bounds [D, D̅]; it should not be viewed as a robustness tool <cit.>. It prevents adaptation transients by limiting the size of the adaptation gain. The projection set can be taken conservatively and can be large; however, to ensure stability, its size needs to be inversely proportional to the adaptation gain.
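For concreteness, one explicit Euler step of the update law (<ref>)–(<ref>) can be sketched as below, assuming the transformed states w, z and the perturbation term P_1 are available as arrays on the spatial grid; the function names and the small numerical safeguard on the division are our own choices.

```python
import numpy as np

def trapz(fv, grid):
    return float(np.sum(0.5 * (fv[1:] + fv[:-1]) * np.diff(grid)))

def projection(tau, D_hat, D_lo, D_hi):
    # Standard projection operator: freeze the adaptation at the known bounds.
    if (D_hat <= D_lo and tau < 0.0) or (D_hat >= D_hi and tau > 0.0):
        return 0.0
    return tau

def delay_update(D_hat, w, z, P1, x, b1, theta, dt, D_lo, D_hi, eps=1e-12):
    """One explicit Euler step of the delay-estimate update law."""
    N_t = 0.5 * trapz((1.0 + x) * w**2, x) + 0.5 * b1 * trapz((1.0 + x) * z**2, x)
    # tau(t) = -b1 * int_0^1 (1+x) z P_1 dx / N(t); eps only guards the division numerically
    tau = -b1 * trapz((1.0 + x) * z * P1, x) / (N_t + eps)
    return D_hat + dt * theta * projection(tau, D_hat, D_lo, D_hi)
```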
§ THE GLOBAL STABILITY OF THE CLOSED-LOOP SYSTEM UNDER THE DELAY-ADAPTIVE CONTROL
The following theorem states the global stability result of the closed-loop system (<ref>)–(<ref>) with update law (<ref>) and adaptive controller (<ref>).
Consider the closed-loop system consisting of the plant (<ref>)–(<ref>), the control law (<ref>), and the update law (<ref>)–(<ref>) under Assumption <ref>. There exist positive constants ρ, R such that
Ψ(t)≤ R(e^ρΨ(0)-1), ∀ t≥0,
where
Ψ(t)= ∫_0^1 u(x,t)^2dx+∫_0^1 v(x,t)^2dx+D̃(t)^2.
Furthermore,
lim_t→∞max_x∈[0,1]|u(x,t)|=0,
lim_t→∞max_x∈[0,1]|v(x,t)|=0.
The global stability of the (u, v)-system is established by the following steps:
* We establish the norm equivalence between (u, v) and ( w, z).
* We introduce a Lyapunov function to prove the global stability of the (w, z)-system (<ref>)–(<ref>), and then get the stability of system (u, v) by using the norm equivalence.
* We arrive at the regulation of states u(x,t) and v(x,t).
§.§ Global stability of the closed-loop system
First, we discuss the equivalent stability property between the plant (<ref>)–(<ref>) and the target system (<ref>)–(<ref>). Recall that the kernel functions k(x,y), γ(x,y), q(x-y), l(x,y), η(x,y), and p(x-y) are bounded in their respective domains, with bounds denoted by k, γ, q, l, η, and p. From (<ref>), (<ref>), (<ref>), and (<ref>), it is easy to find, by using the Cauchy-Schwarz inequality, that
‖ u(t)‖^2+‖ v(t)‖^2≤ r_1 ‖ w (t)‖^2+r_2‖ z(t)‖^2,
‖ w (t)‖^2+‖ z(t)‖^2≤ s_1‖ u(t)‖^2+s_2‖ v(t)‖^2,
where r_i and s_i, i=1,2 are positive constants given by
r_1=2+2l^2+3η^2,
r_2=3+3D^2p^2,
s_1=2+2k^2+3γ^2,
s_2=3+3D^2q^2.
Next, we prove the global stability of the closed-loop system consisting of the (u, v)-system under the control law (<ref>), and the update law (<ref>)-(<ref>).
We introduce the Lyapunov-Krasovskii-type functional
V_1(t)= Dlog (1+N(t))+D̃(t)^2/2θ,
where N(t)=1/2∫_0^1 (1+x)w (x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2d x, based on the target system (<ref>)–(<ref>) and where b_1 is a positive constant.
Taking the time derivative of (<ref>) along (<ref>)–(<ref>), we get
V̇_1(t)= D/N(t)(∫_0^1 (1+x)w (x,t)w_t (x,t)dx+b_1∫_0^1(1+x)z(x,t)z_t(x,t)dx)-D̃(t)Ḋ̂̇(t)/θ
= 1/N(t)(D∫_0^1 (1+x)w (x,t)w_x(x,t)dx+b_1∫_0^1(1+x)z(x,t)(z_x(x,t)-D̃(t)P_1(x,t) -DḊ̂̇(t)P_2(x,t))dx)-D̃(t)Ḋ̂̇(t)/θ
= 1/N(t)(Dw(1,t)^2-D/2w(0,t)^2-D/2‖ w‖^2-b_1/2z(0,t)^2-b_1/2‖ z‖^2
-b_1D̃(t)∫_0^1(1+x)z(x,t)P_1(x,t)dx-b_1DḊ̂̇(t)∫_0^1(1+x)z(x,t)P_2(x,t)dx)-D̃(t)Ḋ̂̇(t)/θ,
where we have used integration by parts, Cauchy-Schwarz, and Young's inequalities.
Using (<ref>)–(<ref>) and the standard properties of the projection
operator leads to
V̇_1(t)≤ 1/N(t)(-D/2‖ w‖^2-b_1/2‖ z‖^2-(b_1/2-D)z(0,t)^2
-b_1DḊ̂̇(t)∫_0^1(1+x)z(x,t)P_2(x,t)dx),
where b_1>2D̅.
After a lengthy but straightforward calculation, employing the Cauchy-Schwarz and Young inequalities, along with (<ref>) and (<ref>), yields the following estimates
∫_0^1(1+x)z(x,t)P_1(x,t)dx≤ L(‖ w‖^2+‖ z‖^2+‖ z(0,t)‖^2),
∫_0^1(1+x)z(x,t)P_2(x,t)dx≤ L(‖ w‖^2+‖ z‖^2),
where the parameter L is defined as
L=max{M_1+M_2,2M_3+M_4},
where M_1=max_0≤ x≤1, t≥0{|M_1(x,D̂(t))|}, M_i=max_0≤ x≤ y≤1, t≥0{|M_i(x,y,D̂(t))|} for i=2,3,4.
According to the equivalent stability property between the plant (<ref>)–(<ref>) and the target system (<ref>)–(<ref>), we can get
V̇_1≤ -(min{D/2,b_1/2-D̅}-θ b_1^2L^2/min{1,b_1})(‖ w‖^2+‖ z‖^2+‖ z(0,t)‖^2)/N(t).
Choosing θ∈(0,θ^⋆), where θ^⋆ defined by (<ref>), we know V̇_1(t)≤0, which gives
V_1(t)≤ V_1(0),
for all t≥0.
Hence, we get the following estimates from (<ref>):
‖ w‖^2≤ 2(e^V_1(t)/D-1),
‖ z‖^2≤2/b_1(e^V_1(t)/D-1),
D̃(t)^2≤2θ V_1(t).
Furthermore, from (<ref>), (<ref>) and (<ref>)-(<ref>), it follows that
‖ u‖^2+‖ v‖^2≤(2r_1+2r_2/b_1)(e^V_1(t)/D-1),
and combining (<ref>) and (<ref>), we get
Ψ(t)≤(2r_1+2r_2/b_1+2θ/D)(e^V_1(t)/D-1).
So, we have bounded Ψ(t) in terms of V_1(t) and thus, using (<ref>), in terms of V_1(0). Now we have to bound V_1(0) in terms of Ψ(0). First, from (<ref>), it follows that
V_1(t)=D log(1+1/2∫_0^1(1+x)w(x,t)^2dx+b_1/2∫_0^1(1+x)z(x,t)^2dx)+D̃(t)^2/2θ
≤ D̅‖ w‖^2+b_1D̅‖ z‖^2+D̃(t)^2/2θ
≤ D̅max{1,b_1}(s_1+s_2)(‖ u‖^2+‖ v‖^2)+D̃(t)^2/2θ
≤ (D̅max{1,b_1}(s_1+s_2)+1/2θ)Ψ(t),
leading to the following relation
V_1(0)≤ (D̅max{1,b_1}(s_1+s_2)+1/2θ)Ψ(0).
Then, combining (<ref>), (<ref>) and (<ref>), we have
Ψ(t)≤ R(e^ρΨ(0)-1),
where
R=2r_1+2r_2/b_1+2θ/D,
ρ=D̅max{1,b_1}(s_1+s_2)+1/2θ,
so we complete the proof of the stability estimate (<ref>).
§.§ Pointwise boundedness and regulation of the distributed states
Now, we ensure the regulation of the distributed states. From (<ref>) and (<ref>), we get the boundedness of ‖ w‖, ‖ z‖ and D̂(t). Knowing that
∫_0^t‖ w(τ)‖^2dτ≤sup_0≤τ≤ tN(τ)∫_0^t‖ w(τ)‖^2/N(τ)dτ,
and using (<ref>) the following inequality holds
N(τ)≤ N(0)e^D̃(0)^2/2θ.
Integrating (<ref>) over [0, t ], we have
∫_0^t‖ w(τ)‖^2/N(τ)dτ≤(D̅log N(0)+D̃(0)^2/2θ)/(min{1/2,b_1/2-1}-θ b_1^2L^2/min{1,b_1}).
Substituting (<ref>) and (<ref>) into (<ref>),
we conclude that ‖ w‖ is square integrable in time. One can similarly establish that ‖ z‖ and ‖ z(0,t)‖ are square integrable in time. Thus, ‖ P_1‖ and ‖ P_2‖ are bounded and integrable functions of time.
To prove the boundedness of ‖ w_x‖, we define
the following Lyapunov function
V_2(t)=1/2∫_0^1 (1+x)w _x(x,t)^2dx+b_2D/2∫_0^1 (1+x)z_x(x,t)^2dx,
where b_2 is a positive constant. Using the integration by parts, the derivative of (<ref>) with respect to time is written as
V̇_2(t)= ∫_0^1 (1+x)w _x(x,t) w _xt(x,t)dx+b_2D∫_0^1 (1+x)z_x(x,t)z_xt(x,t)dx
= ∫_0^1 (1+x)w _x(x,t) w _xx(x,t)dx+b_2∫_0^1 (1+x)z_x(x,t)z_xx(x,t)dx
-b_2D̃(t)∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx-b_2DḊ̂̇(t)∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx.
= w_x(1,t)^2-1/2w_x(0,t)^2-1/2‖ w_x‖^2+b_2z_x(1,t)^2-b_2/2z_x(0,t)^2-b_2/2‖ z_x‖^2
-b_2D̃(t)∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx-b_2DḊ̂̇(t)∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx.
Based on (<ref>), (<ref>), one can get
w_x(1,t)= w_t(1,t)=z_t(0,t)
= z_x(0,t)-D̃(t)P_1(0,t)-DḊ̂̇(t)P_2(0,t),
we arrive at the following inequality
V̇_2(t)≤ -1/2‖ w_x‖^2-b_2/2‖ z_x‖^2-(b_2/2-3)z_x(0,t)^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2
+2b_2D̃(t)^2 P_1(1,t)^2+2b_2D^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_2|D̃(t)|‖ z_x‖‖ P_1x(x,t)‖
+b_2D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x(x,t)‖.
Choosing b_2>6,
we get,
V̇_2(t)≤ -1/2‖ w_x‖^2-b_2/2‖ z_x‖^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_2D̃(t)^2 P_1(1,t)^2
+2b_2D̅^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_2D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖+2b_2|D̃(t)|‖ z_x(x,t)‖‖ P_1x(x,t)‖
≤ -c_1V_2(t)+f_1(t)V_2(t)+f_2(t),
where we use Young's and Agmon's inequalities. Here, c_1=1/2min{1,1/D}, and the functions f_1(t) and f_2(t) are given by
f_1(t)= b_2D^2(|Ḋ̂̇(t)|^2+4),
f_2(t)= b_2‖ P_1x‖^2+b_2‖ P_2x‖^2+12D̅^2P_1(0,t)^2+3D̅^2Ḋ̂̇(t)^2P_2(0,t)^2+8b_2D̅^2 P_1(1,t)^2
+2b_2D^2Ḋ̂̇(t)^2 P_2(1,t)^2.
Knowing that
P_1(0,t)^2≤ 2M_1^2z(0,t)^2+2M_2^2‖ w ‖^2
≤ 2M_1^2(‖ z‖^2+‖ z_x‖^2)+2M_2^2‖ w ‖^2,
P_2(0,t)^2≤ 2M_3^2‖ z‖^2+ 2M_4^2‖ w ‖^2,
with (<ref>) and (<ref>), we get |Ḋ̂̇(t)|, P_1(0,t)^2, P_2(0,t)^2, P_1(1,t)^2 and P_2(1,t)^2 are integrable. Then,
f_1(t) and f_2(t) are also integrable functions of time.
Using Lemma D.3 in <cit.>, we get that ‖ w_x‖ and ‖ z_x‖ are bounded, and combining this with Agmon's inequality, one can deduce the boundedness of w(x,t) and z(x,t) for all x∈[0,1].
Next we establish the boundedness of d/dt(‖ w‖^2), d/dt(‖ z‖^2) and d/dt(‖ z_x‖^2) using the following Lyapunov function
V_3(t)=1/2∫_0^1 (1+x)w_x (x,t)^2dx+b_3D/2∫_0^1 (1+x)z(x,t)^2dx+b_3D/2∫_0^1 (1+x)z_x(x,t)^2dx,
where b_3 is a positive constant. Taking the derivative of (<ref>) with respect to time,
we obtain
V̇_3(t)= ∫_0^1 (1+x)w_x(x,t) w _xt(x,t)dx+b_3D∫_0^1 (1+x)z(x,t)z_t(x,t)dx
+b_3D∫_0^1 (1+x)z_x(x,t)z_xt(x,t)dx
= w_x(1,t)^2-1/2w_x(0,t)^2-1/2‖ w_x‖^2-b_3/2z(0,t)^2-b_3/2‖ z‖^2+b_3z_x(1,t)^2-b_3/2z_x(0,t)^2
-b_3/2‖ z_x‖^2-b_3D̃(t)(∫_0^1(1+x)z(x,t)P_1(x,t)dx+∫_0^1(1+x)z_x(x,t)P_1x(x,t)dx)
-b_3DḊ̂̇(t)(∫_0^1(1+x)z(x,t)P_2(x,t)dx+∫_0^1(1+x)z_x(x,t)P_2x(x,t)dx).
Clearly, using integration by parts and Young's inequality, the following holds:
V̇_3(t)≤ -1/2‖ w_x‖^2-b_3/2‖ z‖^2-(b_3/2-1)z(0,t)^2-b_3/2‖ z_x‖^2-(b_3/2-3)z_x(0,t)^2
+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_3D̃(t)^2 P_1(1,t)^2+2b_3D^2Ḋ̂̇(t)^2 P_2(1,t)^2
+2b_3|D̃(t)|‖ z‖‖ P_1‖+2b_3D|Ḋ̂̇(t)|‖ z‖‖ P_2‖+2b_3|D̃(t)|‖ z_x‖‖ P_1x‖
+2b_3D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖.
Choosing b_3>6, we have
V̇_3(t)≤ -1/2‖ w_x‖^2-b_3/2‖ z‖^2-b_3/2‖ z_x‖^2+3D̃(t)^2P_1(0,t)^2+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_3D̃(t)^2 P_1(1,t)^2
+3D^2Ḋ̂̇(t)^2P_2(0,t)^2+2b_3D̃(t)^2 P_1(1,t)^2+2b_3D^2Ḋ̂̇(t)^2 P_2(1,t)^2+2b_3|D̃(t)|‖ z‖‖ P_1‖
+2b_3D|Ḋ̂̇(t)|‖ z‖‖ P_2‖+2b_3|D̃(t)|‖ z_x‖‖ P_1x‖+2b_3D|Ḋ̂̇(t)|‖ z_x‖‖ P_2x‖
≤ -c_1V_3(t)+f_3(t)V_3(t)+f_4(t)<∞,
where we use Young's and Agmon's inequalities, and
f_3(t)= 2b_3D^2(|Ḋ̂̇(t)|^2+4),
f_4(t)= 3D̃(t)^2P_1(0,t)^2+3D̅^2Ḋ̂̇(t)^2P_2(0,t)^2+b_3(‖ P_1‖^2+‖ P_2‖^2+‖ P_1x‖^2+‖ P_2x‖^2
+8D̅^2 P_1(1,t)^2+2D̅^2Ḋ̂̇(t)^2 P_2(1,t)^2),
are bounded functions. Thus, from (<ref>), one can deduce the boundedness of d/dt(‖ w‖^2), d/dt(‖ z‖^2) and d/dt(‖ z_x‖^2). Moreover, by Lemma D.2 <cit.>, we get V_3(t)→0, and thus ‖ w_x ‖,‖ z‖, ‖ z_x‖^2→ 0 as t→∞. Next, from (<ref>), we have ‖ u_x‖^2, ‖ v‖^2, ‖ v_x‖^2→ 0 as t→∞.
From (<ref>), we have
‖ u_x‖^2≤ 2‖ w_x‖^2+2‖ w‖^2l_x^2.
Since ‖ w‖, ‖ w_x‖ are bounded, ‖ u_x‖ is also bounded. By Agmon's inequality u(x,t)^2≤2‖ u‖‖ u_x‖, which enables one to state the regulation of u(x,t) to zero uniformly in x. Similarly, one can prove the regulation of v(x,t). Since ‖ v‖^2 and ‖ v_x‖^2→ 0 as t→∞, by Agmon's inequality v(x,t)^2≤2‖ v‖‖ v_x‖, which enables one to state the regulation of v(x,t) to zero uniformly in x and completes the proof of Theorem <ref>.
§.§ Well-posedness of the closed-loop system
Following the approach in <cit.>, we prove the well-posedness of the closed-loop system in Theorem 1.
Consider the closed-loop target system (w,z,D̃(t)):
w_t(x,t)= w_x(x,t),
w(1,t)=z(0,t),
z_t(x,t)=1/Dz_x(x,t)-D̃(t)/D P_1(x,t)-θProj_[D,D̅]{τ(t)} P_2(x,t),
z(1,t)=0,
Ḋ̃̇(t)=-θProj_[D,D̅]{τ(t)},
we set Z=( w,z,D̃(t))^T, and introduce the operator
A=[ -∂/∂ x 0 0; 0 -∂/D∂ x 0; 0 0 0 ],
with
F(Z)=[ 0; -D̃(t)/D P_1(x,t)-θProj_[D,D̅]{τ(t)} P_2(x,t); θProj_[D,D̅]{τ(t)} ].
Then (<ref>)-(<ref>)
can be written in abstract form as
Z_t=-AZ+F(Z),
Z(0)=Z_0.
where H=L^2(0,1)× L^2(0,1)×ℝ, ℬ(A)={(f,g,l):f∈ H^1(0,1), f(1)=g(0); g∈ H^1(0,1),g(1)=0; l∈ℝ}, and the norm is given by ‖ Z‖_H^2=‖ w‖^2+‖ z‖^2+D̃^2.
Now, we establish the well-posedness of (<ref>)–(<ref>) with the following theorem (see as well Theorem 8.2 <cit.>,
Theorem 2.5.6 <cit.>, for which a similar method has been
employed to establish well-posedness).
Consider the system (<ref>)–(<ref>), where A is a maximal accretive operator from a dense subset ℬ(A) of a Banach space H into H. If F
is a nonlinear operator from ℬ(A) to ℬ(A) and satisfies
the local Lipschitz condition, then for any Z_0 ∈ℬ(A), the
problem (<ref>)–(<ref>) admits a unique classical solution Z
such that
Z∈ C^1([0,T_max),H)∩ C([0,T_max),ℬ(A)),
where
(i) either T_max=+∞, i.e., there is a unique global classical solution
(ii) or T_max<+∞ and lim_t→ T_max-0‖ Z(t)‖_H=+∞.
Following the proof for the hyperbolic case (see, e.g., Example 2.3.1 in <cit.>), we obtain that A is a maximal accretive operator. Then, it is straightforward to establish that for any Z_1, Z_2 ∈ H,
‖ F(Z_1)-F(Z_2)‖_H≤ C‖ Z_1-Z_2‖_Hmax{‖ Z_1‖_H,‖ Z_2‖_H},
where C is a constant independent of Z_1 and Z_2. So, we get
F to be locally Lipschitz on H. Hence, the system (<ref>)–(<ref>) has a unique classical solution.
Next, we will establish that the existence of the classical
solution is global. In order to prove that T_max = +∞, which means
there is no blowup, we need to make a priori estimates of the H^1
norm of w and z. Based on the proof of boundedness of w and z in
L^2 norms, in our present work, one can obtain that
w and z are bounded in H^1 by using the following new Lyapunov function
V_4(t)=1/2∫_0^1 (1+x)w _xx(x,t)^2dx+b_4D/2∫_0^1 (1+x)z_xx(x,t)^2dx.
Using the integration by parts, the derivative of (<ref>) with respect to time is written as
V̇_4(t)= ∫_0^1 (1+x)w_xx(x,t) w _xxt(x,t)dx+b_4D∫_0^1 (1+x)z_xx(x,t)z_xxt(x,t)dx
= ∫_0^1 (1+x)w _xx(x,t) w _xxx(x,t)dx+b_4∫_0^1 (1+x)z_xx(x,t)z_xxx(x,t)dx
-b_4D̃(t)∫_0^1(1+x)z_xx(x,t)P_1xx(x,t)dx-b_4DḊ̂̇(t)∫_0^1(1+x)z_xx(x,t)P_2xx(x,t)dx
= w_xx(1,t)^2-1/2w_xx(0,t)^2-1/2‖ w_xx(x,t)‖^2+b_4z_xx(1,t)^2-b_4/2z_xx(0,t)^2-b_4/2‖ z_xx(x,t)‖^2
-b_4D̃(t)∫_0^1(1+x)z_xx(x,t)P_1xx(x,t)dx-b_4DḊ̂̇(t)∫_0^1(1+x)z_xx(x,t)P_2xx(x,t)dx.
Based on (<ref>), (<ref>), one can get
w_xx(1,t)= w_tx(1,t)=w_tt(1,t)=z_tt(0,t)
= 1/D^2z_xx(0,t)-D̃(t)/D^2P_1x(0,t)-Ḋ̂̇(t)/DP_2x(0,t)+1/DḊ̂̇(t)P_1(0,t)-D̈̂̈(t)P_2(0,t)
-1/DD̃(t)P_1t(0,t)-Ḋ̂̇(t)P_2t(0,t),
z_xx(1,t)= D̃(t)P_1x(1,t)+DḊ̂̇(t)P_2x(1,t)-DḊ̂̇(t)P_1(1,t)+D^2D̈̂̈(t)P_2(1,t)
+DD̃(t)P_1t(1,t)+D^2Ḋ̂̇(t)P_2t(1,t).
Substituting (<ref>) and (<ref>) into (<ref>),
we arrive at the following inequality
V̇_4(t)≤ -1/2w_xx(0,t)^2-1/2‖ w_xx‖^2-b_4/2‖ z_xx‖^2-(b_4/2-7/D^4)z_xx(0,t)^2+2b_4|D̃(t)|‖ z_xx‖‖ P_1xx‖
+2b_4D|Ḋ̂̇(t)|‖ z_xx‖‖ P_2xx‖ +7D̃(t)^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2
+7D̈̂̈(t)^2P_2(0,t)^2+7/D^2D̃(t)^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+6b_4D̃(t)^2P_1x(1,t)^2
+6b_4D^2Ḋ̂̇(t)^2P_2x(1,t)^2+6b_4D^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D^4D̈̂̈(t)^2P_2(1,t)^2+6b_4D^2D̃(t)^2P_1t(1,t)^2
+6b_4D^4Ḋ̂̇(t)^2P_2t(1,t)^2.
Choosing b_4>14/D^4,
we get,
V̇_4(t)≤ -1/2‖ w_xx‖^2-b_4/2‖ z_xx‖^2+b_4D̃(t)^2‖ z_xx‖^2+b_4 ‖ P_1xx‖^2 +b_4D^2Ḋ̂̇(t)^2‖ z_xx‖^2 +b_4‖ P_2xx‖^2
+7D̃(t)^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2+7D̈̂̈(t)^2P_2(0,t)^2
+7/D^2D̃(t)^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+6b_4D̃(t)^2P_1x(1,t)^2+6b_4D^2Ḋ̂̇(t)^2P_2x(1,t)^2
+6b_4D^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D^4D̈̂̈(t)^2P_2(1,t)^2+6b_4D^2D̃(t)^2P_1t(1,t)^2+6b_4D^4Ḋ̂̇(t)^2P_2t(1,t)^2
≤ -c_1V_4(t)+f_5(t)V_4(t)+f_6(t),
where we use Young's and Agmon's inequalities. Here, c_1=1/2min{1,1/D}, and the functions f_5(t) and f_6(t) are given by
f_5(t)= b_4D^2(Ḋ̂̇(t)^2+4),
f_6(t)= b_4 ‖ P_1xx‖^2+b_4‖ P_2xx‖^2+28D̅^2/D^4P_1x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_2x(0,t)^2+7Ḋ̂̇(t)^2/D^2P_1(0,t)^2
+7D̈̂̈(t)^2P_2(0,t)^2+28/D^2D̅^2P_1t(0,t)^2+7Ḋ̂̇(t)^2P_2t(0,t)^2+24b_4D̅^2P_1x(1,t)^2
+6b_4D̅^2Ḋ̂̇(t)^2P_2x(1,t)^2+6b_4D̅^2Ḋ̂̇(t)^2P_1(1,t)^2+6b_4D̅^4D̈̂̈(t)^2P_2(1,t)^2+24b_4D̅^4P_1t(1,t)^2
+6b_4D̅^4Ḋ̂̇(t)^2P_2t(1,t)^2.
Based on all of the above results, all terms in (<ref>) and (<ref>) are integrable in time.
Using Lemma D.3 <cit.>, we get that ‖ w_xx‖ and ‖ z_xx‖ are bounded. Then, from (<ref>) and (<ref>),
w_tx(x,t)= w_xx(x,t),
Dz_tx(x,t)=z_xx(x,t)-D̃(t)P_1x(x,t)-DḊ̂̇(t)P_2x(x,t),
we get
that ‖ w_tx‖ and ‖ z_tx‖ are bounded. Combining this with ‖ w_x‖,‖ z_x‖→ 0 as t→∞ and the regulation of w(x,t) and z(x,t), one can get ‖ w_t ‖,‖ z_t‖→ 0 as t→∞, and then, by using Agmon's inequality, the regulation of w_t(x,t) and z_t(x,t) is proven for all x∈[0,1]. Therefore, we have proved that ‖ Z‖_H is bounded and a global
classical solution exists.
Finally, we obtain the well-posedness of the closed-loop system consisting of the plant (<ref>)–(<ref>), the control law (<ref>), and the update law (<ref>)–(<ref>) under Assumption 1, based on the invertibility of the backstepping transformations (<ref>) and (<ref>).
§ SIMULATION
To illustrate the feasibility of the proposed adaptive controller design, we simulate the closed-loop system consisting of (<ref>)–(<ref>), the control law (<ref>), and the update law defined through (<ref>)–(<ref>).
The actual delay is set to D = 2, with known upper and lower bounds D̅ = 4 and D=0.1, respectively.
The adaptation gain is set to θ = 0.021, and the plant coefficients are chosen as g(x)=2(1-x) and f(x,y)=cos(2π x)+4sin(2π y). The simulations are performed with the initial conditions u_0(x)=4sin(π x) and v_0(x)=0, and with the initial estimates D̂_0=1 and D̂_0=3, respectively.
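For completeness, the kind of time-stepping scheme we have in mind for such a closed-loop simulation is sketched below: upwind discretization of the plant and of the actuator transport state, trapezoidal quadrature for the control law, and an Euler step for the delay update (only indicated by a comment here). The kernel tables are left as zero placeholders, since computing them requires solving the kernel equations for the current estimate D̂(t), and the numerical parameters are illustrative rather than those used to produce the figures.

```python
import numpy as np

# illustrative discretization parameters
N = 200
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
D_true, D_lo, D_hi = 2.0, 0.1, 4.0
theta, b1 = 0.021, 2.5 * D_hi            # b1 > 2*D_hi as required by the analysis
dt = 0.5 * dx                            # CFL-type restriction for both transport steps
g = 2.0 * (1.0 - x)                      # g(x)
f = np.cos(2.0 * np.pi * x)[:, None] + 4.0 * np.sin(2.0 * np.pi * x)[None, :]   # f(x,y)

def trapz(fv, grid):
    return float(np.sum(0.5 * (fv[1:] + fv[:-1]) * np.diff(grid)))

def kernels(D_hat):
    # Placeholder tables gamma(1, y, D_hat) and q(1 - y, D_hat); in a real
    # simulation these come from solving the kernel equations for D_hat.
    return np.zeros(N + 1), np.zeros(N + 1)

u = 4.0 * np.sin(np.pi * x)              # u(x,0)
v = np.zeros(N + 1)                      # v(x,0) = 0 (empty input history)
D_hat = 1.0

for step in range(2000):
    gamma_1y, q_1my = kernels(D_hat)
    U = trapz(gamma_1y * u, x) + D_hat * trapz(q_1my * v, x)   # adaptive control law

    # plant: u_t = u_x + g(x) u(0,t) + int_0^x f(x,y) u(y,t) dy  (upwind in x)
    integral = np.array([trapz(f[i, :i + 1] * u[:i + 1], x[:i + 1]) for i in range(N + 1)])
    u_new = u.copy()
    u_new[:-1] = u[:-1] + dt * ((u[1:] - u[:-1]) / dx + g[:-1] * u[0] + integral[:-1])
    u_new[-1] = v[0]                      # u(1,t) = v(0,t)

    # actuator: D v_t = v_x with v(1,t) = U(t); the plant is simulated with D_true
    v_new = v.copy()
    v_new[:-1] = v[:-1] + (dt / D_true) * (v[1:] - v[:-1]) / dx
    v_new[-1] = U

    # delay-estimate step D_hat <- D_hat + dt * theta * Proj{tau}, with tau computed
    # from the transformed states (w, z, P_1) as in the update-law sketch above;
    # omitted here to keep the skeleton short.

    u, v = u_new, v_new
```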
Figure <ref> shows the convergence of the plant state u(x,t) with and without adaptation. In the non-adaptive case, a mismatched input-delay estimate D̂(t) = 3 is used (the true delay being D = 2). Figure <ref> (a) shows the dynamics of the L^2-norm ‖ u(·,t)‖_L^2 of the plant state with and without adaptation. The control effort is displayed in Figure <ref> (b) and the update law in Figure <ref> (c). Finally, Figure <ref> (d) shows a good estimate of the delay, with D̂(t) converging to the true value D=2.
§ CONCLUSION
We have studied a class of first-order hyperbolic PIDE systems with an input subject to an unknown time delay. By utilizing an infinite-dimensional representation of the actuator delay, the system was transformed into a cascade structure consisting of a transport PDE and a PIDE. We successfully established global stability results by designing a parameter update law using the well-known infinite-dimensional backstepping technique and a Lyapunov argument. Furthermore, we analyzed the well-posedness of the system, taking into account the added difficulty caused by the presence of nonlinear terms. Through numerical simulations, we have demonstrated the effectiveness of the proposed method. This research contributes to the understanding and control of systems with unknown time delays and provides valuable insights into the stability analysis and parameter update design for such systems. Future work may involve extending these findings to more complex systems or considering additional constraints and uncertainties.
§ ACKNOWLEDGMENTS
This work is partly supported by the National Natural Science Foundation of China (62173084, 61773112) and the Natural Science Foundation of Shanghai (23ZR1401800).
|
http://arxiv.org/abs/2307.04887v1 | 20230710202020 | Measuring and Mitigating Interference in Reinforcement Learning | [
"Vincent Liu",
"Han Wang",
"Ruo Yu Tao",
"Khurram Javed",
"Adam White",
"Martha White"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Catastrophic interference is common in many network-based learning systems, and many proposals exist for mitigating it. Before overcoming interference we must understand it better. In this work, we provide a definition and novel measure of interference for value-based reinforcement learning methods such as Fitted Q-Iteration and DQN. We systematically evaluate our measure of interference, showing that it correlates with instability in control performance, across a variety of network architectures. Our new interference measure allows us to ask novel scientific questions about commonly used deep learning architectures and study learning algorithms which mitigate interference. Lastly, we outline a class of algorithms which we call online-aware that are designed to mitigate interference, and show they do reduce interference according to our measure and that they improve stability and performance in several classic control environments.
§ INTRODUCTION
A successful reinforcement learning (RL) agent must generalize — learn from one part of the state space to behave well in another. Generalization not only makes learning more efficient but is also essential for RL problems with large state spaces. For such problems, an agent does not have the capacity to individually represent every state and must rely on a function approximator — such as a neural network — to generalize its knowledge across many states. While generalization can improve learning by allowing an agent to make accurate predictions in new states, learning predictions of new states can also lead to inaccurate predictions in unseen or even previously seen states. If the agent attempts to generalize across two states that require vastly different behavior, learning in one state can interfere with the knowledge of the other. This phenomenon is commonly called interference[The term interference comes from early work in neural networks <cit.>.] or forgetting in RL <cit.>.
The conventional wisdom is that interference is particularly problematic in RL, even single-task RL, because
(a) when an agent explores, it processes a sequence of observations, which are likely to be temporally correlated;
(b) the agent continually changes its policy, changing the distribution of samples over time; and
(c) most algorithms use bootstrap targets (as in temporal difference learning), making the update targets non-stationary.
All of these issues are related to having data and targets that are not i.i.d. When learning from a stream of temporally correlated data, as in RL, the learner might fit the learned function to recent data and potentially overwrite previous learning—for example, the estimated values.
To better contextualize the impacts of interference on single task RL, consider a tiny two room gridworld problem shown in Figure <ref>.
In the first room, the optimal policy would navigate to the bottom-right as fast as possible, starting from the top-left. In the second room, the optimal policy is the opposite: navigating to the top-left as fast as possible, starting from the bottom-right. The agent is given its position in the room and the room ID number, thus the problem is fully observable. However, the agent has no control over which room it operates in. We can see catastrophic interference if we train a DQN agent in room one for a while and then move the agent to room two. The agent simply overrides its knowledge of the values for room one the longer it trains in room two. Indeed, we see DQN's performance in room one completely collapse. In this case the interference is caused by DQN's neural network. We contrast this to a simple tile coding representation (fixed basis), with a linear Q-learning agent. The tile coding represents these two rooms with completely separate features; as a result, there is no interference and performance in room one remains high even when learning in room two.
It is difficult to verify this conventional wisdom in more complex settings, as there is no established online measure of interference for RL. There has been significant progress quantifying interference in supervised learning <cit.>, with some empirical work even correlating interference and properties of task sequences <cit.>, and investigations into (un)forgettable examples in classification <cit.>.
In RL, recent efforts have focused on generalization and transfer, rather than characterizing or measuring interference. Learning on new environments often results in drops in performance on previously learned environments <cit.>.
DQN-based agents can hit performance plateaus in Atari, presumably due to interference. In fact, if the learning process is segmented in the right way, the interference can be more precisely characterized with TD errors across different game contexts <cit.>. Unfortunately this analysis cannot be done online as learning progresses. Finally, recent work investigated several different possible measures of interference, but did not land on a clear measure <cit.>.
Interference classically refers to an update negatively impacting the agent's previous learning—eroding the agent's knowledge stored in the value function. Therefore it makes sense to first characterize interference in the value function updates, instead of the policy or return.
In most systems the value estimates and actions change on every time-step conflating many different sources of non-stationarity, stochasticity, and error. If an update to the value function interferes, the result of that update might not manifest in the policy's performance for several time steps, if at all.
We therefore focus on measuring interference for approximate policy iteration
algorithms: those that fix the policy for some number of steps (an iteration) and only update the value estimates.
We specifically conduct experiments on a class of algorithms we call Deep Q-iteration. One instance—with target networks—is almost the same as DQN but additionally keeps the behavior policy fixed within an iteration. The goal is to remove as many confounding factors as possible to make progress on understanding interference in RL. This class of algorithms allows us to investigate an algorithm similar to DQN; investigate the utility of target networks; and define a sensible interference measure by keeping more factors of variation constant within an iteration.
The contributions in this work are as follows.
(1) We define interference at different granularities to capture interference within and across iterations for this class of value-based algorithms.
(2) We justify using differences in squared TD errors across states before and after an update as an effective and computationally efficient approximation of this interference definition.
(3) We empirically verify the utility of our interference metric by showing that it correlates with instability in control performance across architectures and optimization choices.
(4) We leverage this easy-to-compute measure to outline a class of algorithms that mitigate interference. We demonstrate that these online-aware algorithms can improve stability in control by minimizing the interference metric.
We conclude this work by highlighting limitations and important next steps.
§ PROBLEM FORMULATION
In reinforcement learning (RL), an agent interacts with its environment, receiving observations and selecting actions to maximize a reward signal. We assume the environment can be formalized as a Markov decision process (MDP). An MDP is a tuple (𝒮, 𝒜, Pr, R, γ), where 𝒮 is a set of states, 𝒜 is a set of actions, Pr: 𝒮×𝒜×𝒮→[0,1] is the transition probability, R: 𝒮×𝒜×𝒮→ℝ is the reward function, and γ∈ [0,1) is a discount factor.
The goal of the agent is to find a policy π: 𝒮×𝒜→[0,1] to maximize the expected discounted sum of rewards.
Value-based methods find this policy using an approximate policy iteration (API) approach, where the agent iteratively estimates the action-values for the current policy and then greedifies.
The action-value function Q^π: 𝒮×𝒜→ℝ for policy π is Q^π(s,a) = 𝔼[ ∑_k=0^∞γ^k R_t+k+1 | S_t = s, A_t=a ],
where R_t+1 = R(S_t,A_t,S_t+1), S_t+1∼Pr(·|S_t,A_t), and A_t∼π(·|S_t).
The Bellman operator for action values 𝒯^π: ℝ^|𝒮|×|𝒜|→ℝ^|𝒮|×|𝒜| is defined by (𝒯^π Q)(s,a) = ∑_s'∈𝒮 Pr(s'| s,a)[R(s,a,s') + γ∑_a' ∈𝒜 π(a'|s') Q(s',a')]. This operator can be used to obtain Q^π because Q^π is the unique solution of the Bellman equation: 𝒯^π Q^π = Q^π.
Temporal difference (TD) learning algorithms are built on this operator, as the sampled TD error δ in expectation equals 𝒯^π Q - Q.
We can use neural networks to learn an approximation Q_θ to the action-values, with parameters θ. Under certain conditions, the API procedure—consisting of a policy evaluation step to get Q_θ, followed by greedifying to get a new policy, and repeating—eventually converges to a nearly optimal value function <cit.>.
We investigate a particular API algorithm that is similar to Deep Q-learning (DQN), which we call Deep Q-iteration. The only difference to DQN is that the behavior policy is held fixed during each evaluation phase.
In this algorithm there is an explicit evaluation phase for a fixed target policy, where the agent has several steps to improve its value estimates. More specifically, on iteration k with current action-value estimates Q_k, the target policy is greedy, π_k(s) = argmax_a ∈𝒜 Q_k(s,a), and the behavior is ϵ-greedy. For each step in the iteration, a mini-batch update from a replay buffer is performed, using the update Δθ ∝ δ_t ∇_θ_t Q_θ_t(S_t, A_t) for temporal difference (TD) error δ_t. This TD error can either be computed without a target network, δ_t = R_t+1 + γ Q_θ_t(S_t+1, π_k(S_t+1)) - Q_θ_t(S_t, A_t), or with a target network,
δ_t = R_t+1+ γ Q_k(S_t+1, π_k(S_t+1)) - Q_θ_t(S_t, A_t). The procedure is in Algorithm <ref>.
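To make the evaluation-phase update concrete, a minimal PyTorch sketch of one Deep Q-iteration mini-batch update is given below. It is our own illustration of the update just described, not the exact implementation used in the experiments; the batch format, the termination flag, and all names are assumptions.

```python
import torch

def dqi_update(q_net, q_k, optimizer, batch, gamma, use_target_net=True):
    """One mini-batch update of Deep Q-iteration.

    q_net : current action-value network Q_theta (being updated)
    q_k   : frozen snapshot Q_k from the start of the iteration; it defines the
            greedy target policy pi_k and, if use_target_net, the bootstrap values
    batch : tuple of tensors (s, a, r, s_next, done)
    """
    s, a, r, s_next, done = batch
    with torch.no_grad():
        a_next = q_k(s_next).argmax(dim=1, keepdim=True)          # pi_k(s')
        if use_target_net:
            q_next = q_k(s_next).gather(1, a_next).squeeze(1)     # Q_k(s', pi_k(s'))
        else:
            q_next = q_net(s_next).gather(1, a_next).squeeze(1)   # Q_theta(s', pi_k(s'))
        target = r + gamma * (1.0 - done) * q_next
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = 0.5 * (target - q_sa).pow(2).mean()                    # squared TD error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```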
We exactly recover DQN by setting the behavior policy[Notice that the typical bootstrap target max_a' Q_k(S_t+1, a') in DQN is in fact equivalent to the Deep Q-iteration update with a target network, because max_a' Q_k(S_t+1, a') = Q_k(S_t+1, π_k(S_t+1)). The scalar is the target network refresh frequency. We can also recover Double DQN <cit.>, though it deviates just a bit more from Deep Q-iteration. It similarly uses the Deep Q-iteration update with a target network, but the target policy is greedy in Q__t rather than Q_k. The resulting TD error is instead δ_t R_t+1 + γ Q_k(S_t+1, max_a' Q__t(S_t+1, a')) - Q__t(S_t, A_t).] to be ϵ-greedy in Q__t rather than Q_k. We opt to analyze this slightly modified algorithm, Deep Q-iteration, to avoid confounding factors due to the policy changing at each step.
The definitions of interference developed in the next section, however, directly apply to DQN as well. For the controlled Two Rooms example in the introduction, we used our measure for a DQN agent. However, when moving to more complex, less controlled scenarios, this changing data distribution may impact outcomes in unexpected ways. Therefore, to control for this factor, we focus in this work on Deep Q-iteration algorithms where the data-gathering policy also remains fixed during each iteration.
The central question in this work is how generalization in Q_θ impacts the behavior of Deep Q-iteration. Intuitively, updates to Q_θ in some states may interfere with the accuracy of the values in other states. We formalize this notion in the next section, and in the following sections empirically connect the level of interference to performance.
§ DEFINING INTERFERENCE FOR VALUE ESTIMATION ALGORITHMS
In this section, we define the interference measure that used to compare Deep Q-iteration algorithms in the coming sections. Deep Q-iteration alternates between policy evaluation and policy improvement, where one cycle of policy evaluation and improvement is called an iteration. To explain this measure, we first need to define interference during the evaluation phase of an iteration. We then discuss interference at four different levels of granularity, the coarser of which we use for our experiments. We start at the lowest level to build intuition for the final definition of interference.
Within each iteration—in each evaluation phase—we can ask: did the agent's knowledge about its value estimates improve or degrade? The evaluation phase is more similar to a standard prediction problem, where the goal is simply to improve the estimates of the action-values towards a clear target. In the case of Deep Q-iteration with target networks, it attempts to minimize the distance to the target function 𝔼[R + γ max_a' Q_k (S', a') | S = s, A = a]. More generally, Deep Q-iteration, with or without target networks, attempts to reduce the squared expected TD error: 𝔼[δ(θ) | S = s, A= a]^2. Without target networks, the expected TD error is the Bellman error: 𝔼[δ(θ) | S = s, A= a] = 𝒯^π Q_θ (s,a) - Q_θ(s,a), where 𝒯^π Q (s,a) = 𝔼_π[R + γ Q(S', A') | S = s, A= a]. A natural criterion for whether value estimates improved is to estimate if the expected TD error decreased after an update.
Arguably, the actual goal for policy evaluation within an iteration is to get closer to the true Q^π(s,a). Reducing the expected TD error is a surrogate for this goal.
We could instead consider interference by looking at if an update made our estimate closer or further from Q^π(s,a). But, we opt to use expected TD errors, because we are evaluating if the agent improved its estimates under its own objective—did its update interfere with its own goal rather than an objective truth. Further, we have the additional benefit that theory shows clear connections between value error to Q^π(s,a) and Bellman error. Bellman error provides an upper bound on the value error <cit.>, and using Bellman errors is sufficient to obtain performance bounds for API <cit.>.
Accuracy Change. At the most fine-grained level, we can ask if an update, going from θ_t to θ_t+1, resulted in interference for a specific point (s,a). The change in accuracy at (s,a) after an update is
Accuracy Change((s,a), θ_t, θ_t+1) = 𝔼[δ(θ_t+1) | S = s, A= a]^2 - 𝔼[δ(θ_t) | S = s, A= a]^2
where if this number is negative it reflects that accuracy improved. This change resulted in interference if it is positive, and zero interference if it is negative.
Update Interference. At a less fine-grained level, we can ask if the update generally improved our accuracy—our knowledge in our value estimates—across points.
Update Interference(θ_t, θ_t+1) = max( 𝔼_(S,A) ∼ d[Accuracy Change((S,A), θ_t, θ_t+1) ] , 0 )
where (s,a) are sampled according to distribution d, such as from a buffer of collected experience.
Both Accuracy Change and Update Interference are about one step.
At an even higher level, we can ask how much interference we have across multiple steps, both within an iteration and across multiple iterations.
Iteration Interference. This measure reflects whether there was significant interference in updating during the evaluation phase (an iteration).
We define Iteration Interference for iteration k using the expectation over Update Interference within the iteration:
Iteration Interference(k) = 𝔼[X] for X = Update Interference(θ_T_k, θ_T_k+1)
where T_k is a uniformly sampled time step in the iteration k.
Interference Across Iterations reflects if an agent has many iterations with significant interference. Here, it becomes more sensible to consider upper percentiles rather than averages. Even a few iterations with significant interference could destabilize learning; an average over the steps might wash out those few significant steps. We therefore take expectations over only the top α percentage of values. In finance, this is typically called the expected tail loss or conditional value at risk. Previous work in RL <cit.> has used conditional value at risk to measure the long-term risk of RL algorithms.
For iteration index K, which is a random variable,
Interference Across Iterations = 𝔼[X|X ≥Percentile_0.9(X)]
for X = Iteration Interference(K)
.
Iteration index K is uniformly distributed and Percentile_0.9(X) is the 0.9-percentile of the distribution of X. Other percentiles could be considered, where smaller percentiles average over more values and a percentile of 0.5 gives the median.
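Concretely, this expected tail is just the average of the per-iteration interference values at or above the chosen percentile. A small sketch of the computation (names and placeholder data are ours) is given below; the same helper can be reused for the Degradation measure defined later.

```python
import numpy as np

def expected_tail(values, alpha=0.9):
    """Average of the values at or above the alpha-percentile
    (the conditional value at risk of the upper tail)."""
    values = np.asarray(values, dtype=float)
    cutoff = np.percentile(values, 100.0 * alpha)
    return float(values[values >= cutoff].mean())

# e.g., Interference Across Iterations from per-iteration interference values:
iteration_interference = np.random.rand(200)   # placeholder values
print(expected_tail(iteration_interference, alpha=0.9))
```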
These definitions are quite generic, assuming only that the algorithm attempts to reduce the expected TD error (Bellman error) to estimate the action-values.
Calculating Update Interference, however, requires computing an expectation over TD errors, which in many cases is intractable to calculate. To solve this issue, we need an approximation to Update Interference, which we describe in the next section.
For clarity, we provide the pseudocode to measure interference for DQI in Algorithm <ref>.
§ APPROXIMATING UPDATE INTERFERENCE
The difficulty in computing the Update Interference is that it relies on computing the expected TD error. With a simulator, these can in fact be estimated. For small experiments, therefore, the exact Accuracy Change could be computed. For larger and more complex environments, the cost to estimate Accuracy Change is most likely prohibitive, and approximations are needed. In this section, we motivate the use of squared TD errors as a reasonable approximation.
The key issue is that, even though we can get an unbiased sample of the TD errors, the square of these TD errors does not correspond to the squared expected TD error (Bellman error). Instead, there is a residual term, that reflects the variance of the targets <cit.>
𝔼[δ(θ)^2 | S = s, A = a]
= 𝔼[δ(θ) | S = s, A= a]^2 +
Var[R + γ Q_θ(S', A') | S = s, A= a]
where the expectation is over (R, S', A'), for the current (s,a), where A' is sampled from the current policy we are evaluating.
When we consider the difference in squared TD errors after an update, for (s,a), we get
𝔼[δ(θ_t+1)^2 | S = s, A = a] - 𝔼[δ(θ_t)^2 | S = s, A = a]
= 𝔼[δ(θ_t+1) | S = s, A= a]^2 - 𝔼[δ(θ_t) | S = s, A= a]^2
+ Var[R + γ Q_θ_t+1(S', A') | S = s, A= a]
- Var[R + γ Q_θ_t(S', A') | S = s, A= a].
For a given (s,a), we would not expect the variance of the target to change significantly. When subtracting the squared TD errors, therefore, we expect these residual variance terms to nearly cancel. When further averaged across (s,a), it is even more likely for this term to be negligible.
There are actually two cases where the squared TD error is an unbiased estimate of the squared expected TD error. First, if the environment is deterministic, then this variance is already zero and there is no approximation. Second, when we use target networks, the bootstrap target is actually R + γ Q_k(S', A') for both. The difference in squared TD errors measures how much closer Q_θ_t+1(s, a) is to the target after the update. Namely, δ(θ_t+1) = R + γ Q_k(S', A') - Q_θ_t+1(s, a). Consequently
𝔼[δ(θ_t+1)^2 | S = s, A = a] - 𝔼[δ(θ_t)^2 | S = s, A = a]
= 𝔼[δ(θ_t+1) | S = s, A= a]^2 - 𝔼[δ(θ_t) | S = s, A= a]^2
+ Var[R + γ Q_k(S', A') | S = s, A= a]
- Var[R + γ Q_k(S', A') | S = s, A= a]
= 𝔼[δ(θ_t+1) | S = s, A= a]^2 - 𝔼[δ(θ_t) | S = s, A= a]^2
.
It is straightforward to obtain a sample average approximation of 𝔼[δ(θ_t+1)^2 | S = s, A= a]
- 𝔼[δ(θ_t)^2 | S = s, A= a]. We sample B transitions (s_i, a_i, r_i, s'_i) from our buffer to get samples of δ^2(θ_t+1, S, A,R,S') - δ^2(θ_t, S, A,R,S'). This provides
the following approximation for Update Interference:
Update Interference(θ_t, θ_t+1) ≈ max(1/B∑_i=1^B [δ^2(θ_t+1, s_i, a_i, r_i, s'_i) - δ^2(θ_t, s_i, a_i, r_i, s'_i)], 0).
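In code, this approximation only requires two forward passes over a fixed evaluation batch, one before and one after the update of interest. A PyTorch sketch (names are ours; it reuses the conventions of the DQI update sketched earlier) is given below.

```python
import torch

def squared_td_errors(q_net, q_k, batch, gamma, use_target_net=True):
    """Per-transition squared TD errors delta^2(theta) on a fixed batch."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        a_next = q_k(s_next).argmax(dim=1, keepdim=True)
        boot = q_k if use_target_net else q_net
        q_next = boot(s_next).gather(1, a_next).squeeze(1)
        q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        delta = r + gamma * (1.0 - done) * q_next - q_sa
    return delta.pow(2)

def update_interference(td_sq_before, td_sq_after):
    """max( mean_i [ delta_i^2(theta_{t+1}) - delta_i^2(theta_t) ], 0 )"""
    return float(torch.clamp((td_sq_after - td_sq_before).mean(), min=0.0))

# usage around one update:
#   before = squared_td_errors(q_net, q_k, eval_batch, gamma)
#   dqi_update(q_net, q_k, optimizer, train_batch, gamma)
#   after  = squared_td_errors(q_net, q_k, eval_batch, gamma)
#   interference_t = update_interference(before, after)
```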
The use of TD errors for interference is related to previous interference measures based on gradient alignment. To see why, notice if we perform an update using one transition (s_t,a_t,r_t,s_t'),
then the interference of that update to (s,a,r,s') is δ^2(θ_t+1, s, a,r,s') - δ^2(θ_t, s, a,r,s').
Using a Taylor series expansion, we get the following first-order approximation assuming a small stepsize α:
δ^2(θ_t+1, s, a,r,s') - δ^2(θ_t, s, a,r,s')
≈∇_θδ^2(θ_t; s,a,r,s')^⊤ (θ_t+1 - θ_t)
= -α∇_θδ^2(θ_t; s,a,r,s')^⊤∇_θδ^2(θ_t; s_t,a_t,r_t,s_t')
.
This approximation corresponds to negative gradient alignment, which has been used to learn neural networks that are more robust to interference <cit.>. The idea is to encourage gradient alignment to be positive, since having this dot product greater than zero indicates transfer between two samples. Other work used gradient cosine similarity, to measure the level of transferability between tasks <cit.>, and to measure the level of interference between objectives <cit.>.
A somewhat similar measure was used to measure generalization in reinforcement learning <cit.>, using the dot product of the gradients of Q functions ∇_θ Q_θ_t(s_t,a_t)^⊤∇_θ Q_θ_t(s,a).
This measure neglects gradient direction, and so measures both positive generalization as well as interference.
Gradient alignment has a few disadvantages, as compared to using differences in the squared TD errors. First, as described above, it is actually a first order approximation of the difference, introducing further approximation. Second, it is actually more costly to measure, since it requires computing gradients and taking dot products. Computing Update Interference on a buffer of data only requires one forward pass over each transition. Gradient alignment, on the other hand, needs one forward pass and one backward pass for each transition. Finally, in our experiments we will see that optimizing for gradient alignment is not as effective for mitigating interference, as compared to the algorithms that reduced Update Interference.
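For comparison, the gradient-alignment quantity above can be measured directly with automatic differentiation, at the cost of one backward pass per transition. A PyTorch sketch is given below; it is our own illustration, with single transitions assumed to be batched tensors of size one and all names hypothetical.

```python
import torch
from torch.nn.utils import parameters_to_vector

def td_sq_loss(q_net, q_k, transition, gamma):
    # squared TD error for a single transition (s, a, r, s_next), target-network form
    s, a, r, s_next = transition
    with torch.no_grad():
        a_next = q_k(s_next).argmax(dim=1, keepdim=True)
        target = r + gamma * q_k(s_next).gather(1, a_next).squeeze(1)
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    return (target - q_sa).pow(2).sum()

def gradient_alignment(q_net, q_k, transition_a, transition_b, gamma, alpha):
    """First-order interference of updating on transition_a, evaluated at transition_b:
    -alpha * <grad delta^2(theta; b), grad delta^2(theta; a)>."""
    params = [p for p in q_net.parameters() if p.requires_grad]
    g_a = torch.autograd.grad(td_sq_loss(q_net, q_k, transition_a, gamma), params)
    g_b = torch.autograd.grad(td_sq_loss(q_net, q_k, transition_b, gamma), params)
    dot = parameters_to_vector(g_a) @ parameters_to_vector(g_b)
    return float(-alpha * dot)
```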
§ MEASURING INTERFERENCE & PERFORMANCE DEGRADATION
Given a measure for interference, we can now ask if interference correlates with degradation in performance, and study what factors affect both interference and this degradation. We define performance degradation at each iteration as the difference between the best performance achieved before this iteration, and the performance after the policy improvement step.
Similar definitions have been used to measure catastrophic forgetting in the multi-task supervised learning community <cit.>.
Let 𝔼_(s,a) ∼ d_0[Q^π_k+1(s,a)] be the agent performance after the policy improvement step at iteration k, where d_0 is the start-state distribution and a random action is taken in the first step. We estimate this value using 50 rollouts. Performance Degradation due to iteration k is defined as
Iteration Degradation(k) = max_i=1,…,k 𝔼_(s,a) ∼ d_0[Q^π_i(s,a)] - 𝔼_(s,a) ∼ d_0[Q^π_k+1(s,a)].
As before, we take the expected tail over all iterations.
If a few iterations involve degradation, even if most do not, we should still consider degradation to be high. We therefore define Degradation across iterations as
Degradation = 𝔼[X|X ≥Percentile_0.9(X)] for X = Iteration Degradation(K).
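A small sketch of how Degradation can be computed from the per-iteration performance estimates (the rollout estimates of 𝔼_d_0[Q^π_k]; array names are ours) is given below.

```python
import numpy as np

def degradation(performance, alpha=0.9):
    """performance[k] ~ E_{(s,a)~d_0}[Q^{pi_{k+1}}(s,a)], estimated from rollouts.
    Iteration Degradation(k) is the best previous performance minus the current one;
    Degradation is the average of its top (1 - alpha) fraction across iterations."""
    performance = np.asarray(performance, dtype=float)
    running_best = np.maximum.accumulate(performance)
    iter_degradation = running_best[:-1] - performance[1:]   # degradation due to iteration k
    cutoff = np.percentile(iter_degradation, 100.0 * alpha)
    return float(iter_degradation[iter_degradation >= cutoff].mean())
```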
It might seem like Degradation could be an alternative measure of Interference. A central thesis in this paper, however, is that Interference is about estimated quantities, like values, that represent the agent's knowledge of the world. The policy itself—and so the performance—may not immediately change even with inaccuracies introduced into the value estimates. Further, in some cases the agent may even choose to forgo reward, to explore and learn more; the performance may temporarily be worse, even in the absence of interference in the agent's value estimates.
We empirically show that Interference Across Iterations is correlated with Degradation, by measuring these two quantities for a variety of agents with different buffer sizes and number of hidden nodes.
We perform the experiment in two classic environments: Cartpole and Acrobot. In Cartpole, the agent tries to keep a pole balanced, with a positive reward per step. We chose Cartpole because RL agents have been shown to exhibit catastrophic forgetting in this environment <cit.>. In Acrobot, the agent has to swing up from a resting position to reach the goal, receiving negative reward per step until termination. We chose Acrobot because it exhibits different learning dynamics than Cartpole: instead of starting from a good location, it has to explore to reach the goal.
We ran several agents to induce a variety of different learning behaviors. We generated many agents by varying buffer size ∈{1000, 5000, 10000}, number of steps in one iteration M ∈{100, 200, 400}, hidden layer size ∈{64, 128, 256, 512} with two hidden layers.
Each algorithm performed 400 iterations. Interference Across Iterations and Degradation are computed over the last 200 iterations.
A buffer for measuring Interference is obtained using reservoir sampling from a larger batch of data, to provide a reasonably diverse set of transitions. Each hyperparameter combination is run 10 times, resulting in 360 evaluated agents for Deep Q-iteration without target networks and 360 with target networks.
We show the correlation plot between Interference and Degradation in Figure <ref>. For DQI with target networks, there is a strong correlation between our measure of interference and performance degradation. For DQI without target networks, we actually found that the agents were generally unstable, with many suffering from maximal degradation. Measuring interference for algorithms that are not learning well is not particularly informative, because there is not necessarily any knowledge to interfere with.
We note a few clear outcomes.
(1) Neural networks with a larger hidden layer size tend to have higher interference and degradation.
(2) DQI with target networks has lower magnitude Interference and less degradation than DQI without target networks on both environments. Target networks are used in most deep RL algorithms to improve training stability, and the result demonstrates that using target network can indeed reduce interference and improve stability. This result is unsurprising, though never explicitly verified to the best of our knowledge. It also serves as a sanity check on the approach, and supports the use of this measure for investigation in the role of other algorithm properties that might impact interference.
Figure: Correlation plot of interference and degradation for fine-tuning agents. Each point represents the interference and degradation for 50 iterations. A darker color indicates the point is later during the fine-tuning phase.
In the previous experiment, we measure interference and degradation for online agents. We can further separate learning and interference by considering fine-tuning offline learned agents.
We use Implicit Q-learning <cit.> to learn an offline agent from offline data, and fine-tune the agent with online interaction.
During the fine-tuning phase, we measure interference and degradation of the fine-tuning agent every 50 iterations.
We show the correlation plot in Figure <ref>.
Similarly, we can see a correlation between our measure of interference and performance degradation.
When our interference measure is high, performance is expected to degrade.
In this experiment, several outliers exist with much higher interference than others, so we zoomed in on the x-axis to better indicate the pattern for most of the data.
In the appendix, we show the plot with all points in Figure <ref>.
§ MITIGATING INTERFERENCE VIA ONLINE-AWARE META LEARNING
With the interference measures developed and a better understanding of some of the factors that effect interference, we now consider how to mitigate interference.
In this section, we outline and empirically investigate a class of algorithms, which we call online-aware algorithms, that are designed to mitigate interference.
§.§ Online-aware Algorithms
We first discuss an objective to learn a neural network that explicitly mitigates interference. We then outline a class of algorithms that optimize this objective.
Let θ be the network parameters and U^n_B(θ) be an inner update operator that updates θ using the set of transitions in B, n times. For example, U^n_B(θ) could consist of sampling mini-batches B_i from B for each of the i = 1, …, n DQI updates.
The goal of online-aware learning is to update the network parameters to minimize interference for multiple steps in the future: find a direction g_t at time step t to minimize the n-step-ahead Update Interference
𝔼_B[∑_i = 1^|B|δ_i(U^n_B(θ_t- g_t))^2 -δ_i(θ_t)^2]
Formally, we can describe the online-aware objective as
J(θ) = 𝔼_B[L_B(U^n_B(θ))]
where L_B(θ) = 1/|B|∑_i = 1^|B|δ_i(θ)^2.
We refer to the class of algorithms which optimizes the online-aware objective as online-aware algorithms, and provide the pseudocode in Algorithm <ref> in Appendix <ref>. Note that this objective not only minimizes interference but also maximizes transfer (promotes positive rather than negative generalization).
The objective can be optimized by meta-learning algorithms including MAML <cit.>, which is a second-order method by computing gradients through the inner update gradients to update the meta-parameters, or a first-order method such as Reptile <cit.>. Reptile is more computationally efficient since it does not involve computing higher order terms, and only needs to perform the n inner updates to then perform the simple meta update.
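A minimal sketch of the first-order, Reptile-style variant of this meta update is given below. It builds on the DQI mini-batch update sketched earlier, assumes a generic replay-buffer sample method, and works with a plain feed-forward value network; all names and hyperparameters are illustrative rather than the exact settings used in our experiments.

```python
import copy
import torch

def online_aware_step(q_net, q_k, buffer, gamma, n_inner=5, inner_lr=1e-3,
                      meta_lr=1.0, batch_size=32, use_target_net=True):
    """One online-aware (Reptile-style) meta update of the value network.

    Inner loop: n_inner DQI mini-batch updates starting from the current
    parameters theta. Meta update: theta <- theta + meta_lr * (theta' - theta),
    where theta' are the parameters after the inner loop.
    """
    theta = copy.deepcopy(q_net.state_dict())                 # snapshot of theta
    inner_opt = torch.optim.SGD(q_net.parameters(), lr=inner_lr)
    for _ in range(n_inner):
        batch = buffer.sample(batch_size)                     # assumed replay-buffer API
        dqi_update(q_net, q_k, inner_opt, batch, gamma, use_target_net)
    with torch.no_grad():                                     # Reptile meta step
        theta_prime = q_net.state_dict()
        for name, p in theta_prime.items():
            theta_prime[name] = theta[name] + meta_lr * (p - theta[name])
        q_net.load_state_dict(theta_prime)
```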
Algorithm <ref> is not a new algorithm, but rather is representative of a general class of algorithms which explicitly mitigates interference. It incorporates several existing meta learning algorithms. The choice of meta-parameters, inner update operator and meta update rules results in many variants of this Online Aware algorithm.
The two most related approaches, OML <cit.> and MER <cit.>, can be viewed as instances of such an algorithm.
OML was proposed as an offline supervised learning algorithm, but the strategy can be seen as instance of online-aware learning where the inner update operator updates only the last few layers at each step on a correlated sequence of data, whereas the initial layers are treated as meta-parameters and updated using the second-order method proposed in MAML.
MER, on the other hand, uses the first-order method proposed in Reptile to update the entire network as the meta-parameters. During the inner loop, MER updates the entire network with stochastic samples. MER introduces within-batch and across-batch meta updates; this difference to the Online-aware framework is largely only about smoothing updates to the meta-parameters. In fact, if the stepsize for the across-batch meta update is set to one, then the approach corresponds to our algorithm with multiple meta updates per step. For a stepsize less than one, the across-batch meta update averages past meta-parameters. MER also uses other deep RL techniques such as prioritizing current samples and reservoir sampling.
§.§ Experimental Setup
We aim to empirically answer the question: do these online-aware algorithms mitigate interference and performance degradation? We focus on an instance of the online-aware algorithm where the meta updates are performed with the first-order Reptile method. This instance can be viewed as a variant of MER using one big batch <cit.>. We also tried online-aware algorithms using MAML or meta-learning only a subset of the network parameters, similar to <cit.>, but we found that the online-aware algorithm using Reptile consistently outperforms the MAML and OML variants across the environments we tested.
To answer the question we compare baseline algorithms to online-aware (OA) algorithms where the baseline algorithm is DQI with or without target nets. OA treats the entire network as the meta-parameter and uses the first-order Reptile method, shown in Algorithm <ref>. The inner update operator uses randomly sampled mini-batches to compute the update in Algorithm <ref>.
To fairly compare algorithms, we restrict all algorithms to perform only one update to the network parameters per step, and all algorithms use similar amounts of data to compute the update. We also include two other baselines: Large which is DQI with 10 to 40 times larger batch sizes so that the agent sees more samples per step,
and GA which directly maximizes gradient alignment from Equation (<ref>) within DQI.[In fact, both MAML and Reptile approximately maximize the inner product between gradients of different mini-batches <cit.>.]
§.§ Experiments for DQI without Target Networks
We first consider DQI without target networks, which we found in the previous section suffered from more interference than DQI with target networks. We should expect using an online aware update should have the biggest impact in this setting. Figure <ref> summarizes the results on Acrobot and Cartpole.
Due to the space limit, we present the results with target networks in Appendix <ref>.
We can see that OA significantly mitigates interference and performance degradation, and improves control performance.
Large (light-blue) and GA (green) do not mitigate interference nearly as well. In fact, Large generally performs quite poorly and in two cases actually increases interference. Our results indicate that the online-aware algorithms are capable of mitigating interference, whereas, simply processing more data or directly maximizing gradient alignment are not sufficient to mitigate interference.
Further insight can be found by investigating data from individual runs. The previous results aggregate performance and return over runs, which can remove much of the interesting structure in the data. Looking closer, Figure <ref>(a) shows the return per run (left) and iteration interference per run (right) in Acrobot, revealing that vanilla DQI without target nets (in blue) experienced considerable problems learning and maintaining stable performance. OA (in red) in comparison was substantially more stable and reached higher performance.
Overall OA also exhibits far less interference.
§ CONCLUSION
In this paper, we proposed a definition of interference for value-based methods that fix the target policy for each iteration. We justified the use of squared TD errors to approximate this interference and showed this interference measure is correlated with control performance. In this empirical study across agents, we found that target networks can significantly reduce interference, and that bigger hidden layers resulted in higher interference in our environments. Lastly, we discuss a framework for online-aware learning for Deep Q-iteration, where a neural network is explicitly trained to mitigate interference. We concluded with experiments on classical reinforcement learning environments that showed the efficacy of online-aware algorithms in improving stability and lowering our measure of interference. This was particularly the case for Deep Q-iteration without target networks, where interference was the highest. These online aware algorithms also exhibit lower performance degradation across most of the tested environments.
There are several limitations in this work.
We did not carefully control for other factors that could impact performance, like exploration or the distribution of data in the replay buffer. DQI without target networks performed poorly in Acrobot under many hyperparameter settings, making it difficult to measure interference. Later, by including online-aware learning, the performance significantly improved, suggesting interference was indeed the culprit. But it was difficult to identify perfectly, at least using only our measure. The correlation plots themselves indicate there are other factors, beyond interference, driving performance degradation. In fact, there has been increased interest in understanding when and why deep RL agents fail: why deep value-based RL agents rapidly change the greedy policy <cit.>; how agents eventually lose capacity with non-stationary targets <cit.>; why agents benefit from random resetting <cit.>. Interference may be playing a role in these failures, and these issues may be surfacing in our own experiments.
Another important limitation is that we only examined Deep Q-iteration algorithms, which fixed the behavior during each iteration. Allowing this behavior to update on each step, to be ϵ-greedy with respect to the current action-values, would give us DQN. An important next step is to analyze this algorithm, and other extensions of Deep Q-iteration.
Finally, these results highlight several promising avenues for improving stability in RL. One surprising outcome was the instability, within a run, of a standard method like Deep Q-iteration. The learning curve was quite standard, and without examining individual runs, this instability would not be obvious. This motivates re-examining many reinforcement learning algorithms based on alternative measures, like degradation and other measures of stability. It also highlights that there are exciting opportunities to significantly improve reinforcement learning algorithms by leveraging online-aware learning.
§ EXPERIMENTAL DETAILS
§.§ Measuring Interference
We provide the pseudocode in Algorithm <ref>.
§.§ Online-Aware Algorithms
We provide the pseudocode in Algorithm <ref>.
§.§ Experiment setup
We experiment with two environments: Cartpole and Acrobot from the OpenAI gym (<https://gym.openai.com/>). We set the maximum steps per episode to 500, and use a discounting factor γ=0.99 in all environments.
For the fine-tuning experiment, we additionally include two more environments: Lunar Lander and Mountain Car.
We collect offline datasets containing transitions from 4% near-optimal policy and 96% random policy.
Then we train an offline agent for 70000 updates and fine tune the agent for 400 iterations.
During fine-tuning phase, the number of updates in each iteration was 200.
We set up a checkpoint every 50 iterations and measure interference and degradation across the 50 iterations. We include 10 runs in each environment.
When computing performance degradation, we use 50 Monte Carlo rollouts to estimate the performance of the policy at each iteration, that is, 𝔼_(s, a) ∼ d_0[Q^π_k(s,a)]. For evaluating the TD error difference, we use a reservoir buffer of size 1000, which approximates uniform sampling from all the past transitions.
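For reference, a minimal sketch of such a reservoir buffer is given below; the class and method names are our own, but the update rule is the standard reservoir-sampling scheme, which keeps every past transition with equal probability.

```python
import random

class ReservoirBuffer:
    """Fixed-size buffer approximating uniform sampling over all past transitions."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, transition):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            # each of the n_seen transitions survives with probability capacity / n_seen
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = transition

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))
```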
§.§ Network architecture and hyperparameters
For all experiments, we use a three-layer neural network with ReLU activation <cit.> and He initialization <cit.> to initialize the network weights. For the Adam and RMSprop optimizers, we use the default values for the hyper-parameters except the step size.
For the online experiments in Section <ref>, we generate a set of hyper-parameters by choosing each parameter from the sets below:
* Batch size = 64
* Step size α = 0.0003
* Number of iterations = 400
* Optimizer = Adam
* Buffer size ∈{1000, 5000, 10000}
* Hidden size ∈{64, 128, 256, 512}
* Number of steps in an iteration M ∈{100, 200, 400}
For the fine-tuning experiment in Section <ref>, the IQL agent used a two-layer network with 64 hidden nodes on each layer to learn the policy from offline datasets.
In acrobot, the offline policy was learned with τ=0.9, β=10, α=0.005, the batch size was 100, and the learning rate was 3×10^-5 (notations are consistent with <cit.>).
In lunar lander, the number of timesteps for learning, the batch size, and α remained the same.
Other parameters were changed to τ=0.7, β=3, and the learning rate was 0.001.
The settings in mountain car were the same as in acrobot, except that the learning rate was 0.001.
During the fine-tuning step, τ and β remained the same as in the offline learning stage in the corresponding environment, while the batch size, learning rate, and the number of iterations were changed to be consistent with other experiments. The buffer size was fixed to 5000 with a first-in-first-out rule.
For the experiments in Section <ref> and <ref>, all algorithms use buffer size of 10000, 100 steps in an iteration, and 400 iterations. DQI without target nets uses hidden size of 128, and DQI with target nets uses hidden size of 512. The best parameters are chosen based on average performance of the policies over the last 200 iterations.
Baseline.
We sweep the hyperparameters for DQI in the range:
* Batch size =64
* Optimizer ∈{Adam, RMSprop}
* Step size α∈{0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}
DQI with large batch size.
For the baseline Large, we find the best batch size in the range:
* Batch size ∈{640, 1280, 2560}
Online-aware DQI.
In our experiment, we sweep over the hyperparameters in the set:
* Inner update optimizer = SGD
* α∈{1.0, 0.3, 0.1, 0.03}
* α_inner∈{0.01, 0.001, 0.0001, 0.00001}
* Number of inner updates K ∈{5,10, 20}
DQI maximizing gradient alignment (GA).
When updating the parameters, we draw two mini-batch samples B_1 and B_2 and add a regularization term in the loss function:
-λ[1/|B_1|∑_i∈ B_1∇_θδ_i^2(θ)]^⊤[1/|B_2|∑_j∈ B_2∇_θδ_j^2(θ)],
normalized by the number of parameters in the network.
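A minimal PyTorch-style sketch of this regularizer is shown below; the helper td_error_sq(net, batch), which is assumed to return the mean squared TD error on a mini-batch, and the variable names are illustrative rather than the exact code used for the GA baseline.

```python
import torch

def ga_regularizer(net, td_error_sq, batch1, batch2, lam):
    """Gradient-alignment penalty added to the DQI loss for the GA baseline."""
    params = [p for p in net.parameters() if p.requires_grad]
    n_params = sum(p.numel() for p in params)

    # gradients of the mean squared TD error on each mini-batch, kept in the
    # graph (create_graph=True) so the penalty itself can be backpropagated
    g1 = torch.autograd.grad(td_error_sq(net, batch1), params, create_graph=True)
    g2 = torch.autograd.grad(td_error_sq(net, batch2), params, create_graph=True)

    dot = sum((a * b).sum() for a, b in zip(g1, g2))
    # negative sign: minimizing the penalty maximizes the gradient inner product
    return -lam * dot / n_params
```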
In our experiment, we sweep over the hyperparameters in the set:
* Optimizer ∈{Adam, RMSprop}
* Step size α∈{0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}
* λ∈{10.0, 1.0, 0.1, 0.01, 0.001}
§.§ Experiments for DQI with Target Networks
In this section, we investigate the utility of OA for DQI with target networks. Again, it is unlikely to be particularly useful to add OA for settings where the interference is low.
In the previous section, in Figure <ref>, we found that DQI with target networks had higher interference with a larger hidden layer size (512). We therefore test the benefits of OA for this larger network size in this experiment. Figure <ref> summarizes the results on Acrobot and Cartpole.
We can see that the addition of OA to DQI with target networks helps notably in Cartpole, and only slightly in Acrobot. This is in stark contrast to the last section, where there was a large gain in Acrobot when adding OA. This outcome makes sense, as when adding OA to DQI without target networks, the agent went from failure to learning reasonably well. In this case of DQI with target networks, the agent was already learning reasonably. Nonetheless, the addition of the OA objective does still provide improvement. In Cartpole, the improvement is more substantial. Again, looking at the previous correlation plots in Figure <ref>, we can see that a hidden layer size of 512 resulted in more interference in Cartpole, and more Performance Degradation; there was more room for OA to be beneficial in Cartpole. When looking at the individual runs in Figure <ref>, we can see that DQI has some drops in performance, whereas the OA variant is much more stable.
A few other outcomes are notable. The larger network (10x the size) was actually better than OA in Acrobot but did worse than the base algorithm (DQI with hidden layer sizes of 512) in Cartpole. The most consistent performance was with OA. Further, except for the larger network, there was a clear correspondence between interference and performance: OA reduced interference most and performed the best, GA was next in terms of both and then finally the baseline with no additions.
§.§ Experiments for Fine-Tuning
We show the plots for all points in the fine-tuning experiment in Figure <ref>.
|
http://arxiv.org/abs/2307.05779v1 | 20230711201547 | Neural Machine Translation Data Generation and Augmentation using ChatGPT | [
"Wayne Yang",
"Garrett Nicolai"
] | cs.CL | [
"cs.CL"
] |
Neural models have revolutionized the field of machine translation, but creating parallel corpora is expensive and time-consuming. We investigate an alternative to manual parallel corpora - hallucinated parallel corpora created by generative language models. Although these models are themselves trained on parallel data, they can leverage a multilingual vector space to create data, and may be able to supplement small manually-procured corpora. Our experiments highlight two key findings - despite a lack of diversity in their output, the hallucinated data improves the translation signal, even when the domain clashes with the original dataset.
§ INTRODUCTION
Neural Machine Translation (NMT) models are usually trained on large amounts of sentence-aligned parallel corpora (Figure <ref>). However, the time and manpower required to create such corpora are quite costly. The same can be said for other Natural Language Processing (NLP) tasks as well, which may explain the increased interest in research on textual data augmentation <cit.>. Data augmentation seeks to strengthen the learning objective by providing additional training instances, often generated artificially.
Our augmentation method, commonly referred to as “data hallucination” generates a sizable amount of synthetic data independently of the original data <cit.>. This data is then appended to the original training data in an effort to create a more generalizable dataset.
In this paper, we propose and explore a simple method for translation data hallucination using ChatGPT <cit.>, a prompt-based large language model which has been fine tuned for conversational dialogue. InstructGPT, the precursor to ChatGPT, has previously been evaluated on Machine Translation <cit.>, but the full capabilities of ChatGPT in the realm of machine translation are unknown. In this study, we investigate one possible use of ChatGPT for NMT - providing extra data for lower-resourced language pairs.
This paper is structured as follows: In Section <ref>, we cover recent work on low-resource NMT, as well as a preliminary study on ChatGPT; in Section <ref>, we explain our data collection pipeline; Section <ref> explains how we train our models; our results and interpretation are presented in Section <ref>; we provide a qualitative discussion of the data in Section <ref>, while Section <ref> addresses the limitations and challenges of this study. Section <ref>, concludes the paper.
§ RELATED WORK
Data augmentation for low-resource NMT often attempts to supplement bilingual data with data from related languages, in an attempt to exploit a cross-lingual signal. mueller-etal-2020-analysis found that the addition of extra languages can aid low-resource translation, but that the model quickly collapses with too many languages. ko-etal-2021-adapting augmented a low-resource NMT model with monolingual data using their method NMT-Adapt and found improvements over other augmentation methods. xia-etal-2019-generalized make use of a related high-resource language as a pivot to generate synthetic low-resource data <cit.>, but observe that pivoting doesn't perform as well as just augmenting with data of the related high-resource language in most cases. A limitation of these methods is the heavy reliance on the relationships between languages - many low-resource languages do not have higher-resource sisters, and even when they do, cross-lingual transfer differs significantly, even within families. This makes it difficult to generalize such approaches consistently across other low-resource languages.
Alternatives to multilingual data augmentation come in several flavors.
Back-translation trains a model that reverses the source and target in the training data. It then feeds monolingual target-side (now source-side) data to generate parallel sentences. These sentences are then concatenated to the original corpus <cit.>. peng2020dictionarybased instead proposed a dictionary based approach in addition to back-translation, which showed improvements on cross domain translation. ng-etal-2020-ssmba propose Self-Supervised Manifold Based Data Augmentation (SSMBA) for improving out-of-domain robustness of textual data. In SSMBA, data is sampled from a denoising autoencoder to produce slight variations to the original data that allow for more robust generalization. Other methods, such as word DropOut <cit.> and SwitchOut <cit.>, apply regularization techniques to limit overfitting.
ChatGPT is a relatively new toy for MT researchers to play with.
A preliminary study by jiao2023chatgpt discovered that GPT-3 (the underlying language model for ChatGPT) produced fluent-enough results, but substituting GPT-4 produced output comparable to existing commercial NMT systems. Unfortunately, their test size was restricted to 50 sentences, due to a necessity of entering each sentence as a manual prompt. However, the reported quality of the translation inspires our own investigation into the use of ChatGPT for data hallucination.
§ DATA COLLECTION
We limit our investigation to two languages: German (de), and Galician (gl). These languages were chosen due to the amount of data used in the training of GPT-3, ChatGPT's core model. German can be considered a high-resource language - GPT-3 has seen more than 2.9 billion tokens of German data. Meanwhile, Galician is a much more sparsely-resourced language, with only 7 million training tokens of Galician data <cit.>[We acknowledge that true low-resource settings will often have orders of magnitude less data than either of these language pairs.]
We use ChatGPT with the GPT-3 language model, accessed through the API (See Section <ref>). Although a GPT-4 model was available, its API was not yet accessible.
All experiments translate from the source language into English.
brown2020language demonstrate that the GPT model generally performs better with English as a target than as a source. Furthermore, it allows for a more thorough error analysis and discussion, as none of the authors speak Galician.
We describe two data settings: Natural data (nat) consists of a corpus extracted from the TEDTalks set <cit.>. We randomly select sentences until we surpass a threshold of 1,000,000 tokens in the source language: 900,000 is given to the training set and 100,000 to the validation set. We likewise sample an additional 100,000 tokens as a test set in each language.
§.§ Synthetic Data Collection
In contrast to sampling the natural dataset from an existing corpus, we generate our synthetic (syn) dataset using a three-step process, outlined in Figure <ref>. We first prompt ChatGPT to generate 600 nouns and 600 verbs in our target language; duplicates are removed. We next prompt ChatGPT to create 100 sentences for each of these seeds. Likewise, duplicates are discarded. Finally, we ask ChatGPT to translate each of our generated sentences into English. From this corpus, we select enough sentences to have 900,000 training tokens, and 100,000 validation tokens, in a manner that mirrors the sampling of the natural dataset.
§.§.§ ChatGPT API
We prompt ChatGPT using the Python3 API[https://platform.openai.com/docs/guides/chat], which also automatically parses and organizes the responses. Prompts for seed words and source sentences are given in the source language, translated using Google Translate[https://translate.google.com/], while the translation prompts are given in English. Some example prompts from German can be seen in Table <ref>.
The API allows 3 types of message inputs: `System', `User', and `Assistant'. The System message provides the model with any context it needs to write its response, such as the task it needs to do and the type of role it should take. ChatGPT was originally designed as a chatbot, so its User role is for user messages and its Assistant role for the model's messages. Multiple User and Assistant messages can be sent in one request, and this differentiation allows prior conversations to be sent back to the model for context on the conversation. Consecutive human-written Assistant messages have been used by the developer community as a method of few shot learning, since the API does not require User and Assistant messages to alternate. The model then responds with an additional Assistant message to continue the conversation.
For the seed generation task we put the entire prompt, such as “Generate 600 random unique nouns that are separated by a comma” as the System message. For the sentence generation we then provide the prompt “Generate 100 sentences generated from the prompt, separated by a semicolon.” as the System message, the seed word as the User message and an Assistant message consisting of a two sentence example. The two sentence Assistant message is used as a few shot example for delimiter formatting, which ensures a smoother response parsing process. An API call was made for each seed word available to maximize the total number of sentences generated. Finally the translation task gives the prompt, in English, as the System message, and the sentence to translate as a User message (see Table <ref>). Each sentence was given its own API call to reduce the likelihood of mismatching the sentences during parsing. This last step took about 1 second per sentence translation, varying depending on server traffic and internet speeds.
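The sketch below illustrates this call pattern with the pre-1.0 openai Python package; the model identifier, the API-key handling, and the helper names are assumptions for illustration and may differ from the exact setup used here.

```python
import openai  # pre-1.0 interface of the openai package assumed

openai.api_key = "YOUR_API_KEY"

def generate_sentences(seed_word, system_prompt, few_shot_example):
    """One call per seed word: the System message states the task, the seed word
    is the User message, and a short Assistant message fixes the delimiter format."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # GPT-3-based ChatGPT model (name assumed)
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": seed_word},
            {"role": "assistant", "content": few_shot_example},
        ],
    )
    text = response["choices"][0]["message"]["content"]
    return [s.strip() for s in text.split(";") if s.strip()]

def translate_sentence(sentence, translation_prompt):
    """One call per sentence to reduce the chance of mismatched parses."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": translation_prompt},
            {"role": "user", "content": sentence},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()
```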
Using this process, we create 10 datasets, outlined in Figure <ref>. We have a natural training and validation set for both German and Galician. Likewise, we have a synthetic training and validation set for both languages. We finally have a natural test set for both languages.
§ EXPERIMENTAL DESIGN
In our experiment, we consider the role
that synthetically-generated training data can have
on low-resource translation models. We train three
models on each of our language pairs - one that
is trained solely on natural data (Nat), one that is
trained solely on synthetic data (Synth), and one
that is a combination of both (Aug).
Our models are trained using transformers <cit.>, implemented using Fairseq <cit.>. We train a joint source-target BPE <cit.> with a vocabulary of size 16K on each language pair.
Models are trained using 4 attention heads and 3 layers, with a batch size of 2,000. They are trained for 100 epochs, using validation loss as an early-stopping criterion. Models are evaluated using SacreBLEU.
§ RESULTS AND DISCUSSION
To test our six models, we had each of them translate the test set that was encoded using their respective BPE tokenizers. We then ran SacreBLEU on their outputs against the original English sentences from the TED dataset to obtain a BLEU score. The resulting BLEU scores are shown on Table <ref>.
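For reference, the corpus-level score can be computed with the sacrebleu package roughly as follows; the file handling and variable names are illustrative.

```python
import sacrebleu

def corpus_bleu_score(hypothesis_file, reference_file):
    """Hypotheses and references are aligned line by line and detokenized."""
    with open(hypothesis_file) as f:
        hypotheses = [line.strip() for line in f]
    with open(reference_file) as f:
        references = [line.strip() for line in f]
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```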
We observe that the models trained on natural data are significantly better than those trained on synthetic data. This is not surprising, as the test set is sampled from the same domain as the natural data. We did not specify in our prompts to ChatGPT that the generated sentences should be in the same domain as our natural data.
Furthermore, we see that augmenting the natural training data with synthetic data leads to a notable improvement in the translation quality, despite the domain mismatch stated above. In low-resource translation tasks, it appears that any additional information can strengthen the signal. Even in our "high-resource" setting, the synthetic data supplements the translation.
In Table <ref>, we provide some examples of the predictions made by the augmented model. We can observe that when the natural and synthetic models are at odds in their predictions (such as in the first example), the augmented model leans towards a correct interpretation. Likewise, it's able to synergize much more appropriate translations from a combination of the training data. There is still plenty of room for interpretation and experimentation in how the augmentation is improving translation, which we leave to future work.
Somewhat surprisingly, we observe that although ChatGPT had access to significantly more German data than Galician data, the Galician synthetic data produces higher quality translations than its German counterpart. Part of this result could just be that Galician is easier to translate into English than German; we observe that in the natural setting, the Galician model produces higher quality translations. Furthermore, it could be largely coincidental: ChatGPT's training data may better align the limited Galician data [Indeed, it is possible that GPT-3 was trained on the Galician TEDTalks]. It is also possible that ChatGPT is leveraging a high amount of Spanish data to supplement its Galician translations. In the next part of our evaluation, we investigate the quality and diversity of the synthetic data.
We evaluate our models on both the natural and synthetic validation sets. As a sanity check, the models perform fairly similarly on natural validation data and natural test data. However, we note an interesting difference in how the models perform on synthetic data. The natural models do not provide much insight - the Galician model performs worse on synthetic data, but the German model performs better; further investigation is required. We note, however, the unnaturally high scores of the synthetic models on the synthetic validation data. Such high BLEU scores suggest that the synthetic models are overfitting to their validation data - much more than the natural models fit to theirs. One interpretation of this result could be that the synthetic data is much less diverse than the natural data. Although the training sets are the same size, the model more tightly fits the synthetic data, by a significant margin.
In retrospect, this is not entirely surprising. ChatGPT is based on a sampled generative language model. Although it is not as predictable as past language models, it still has a bias towards frequent patterns, and the sentences it is generating may be much more repetitive than we would like. We now turn our investigation to the quality of the data generated by ChatGPT.
§ DATA ANALYSIS
We first investigate the type-token ratio (TTR) of both the natural and synthetic data. The results are shown on the left of Figure <ref>. We see that for both the German and Galician sources, as well as the English targets, the TTR for the synthetic data is much lower than it is for the natural data. This result confirms our hypothesis that ChatGPT is producing repetitive sentences with a limited vocabulary. At this point, it is impossible to determine whether this result is a shortcoming of the model's training method, or simply a lack of prompt engineering.
Conversely, we observe that many of the most frequent words in the training data set are used considerably more by the synthetic dataset than the natural one. Although the synthetic data still follows a roughly Zipfian distribution, its frequent words are regularly more frequent than the natural dataset. The sentences generated by ChatGPT show significantly less linguistic diversity than the natural sentences. With a less diverse training set, overfitting is likely. Since the training and validation sets were sampled from the same distribution, it's highly likely that the synthetic models are also very closely fitting their validation data, as observed in Section <ref>.
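A minimal sketch of the two statistics discussed above (the type-token ratio and the head of the frequency distribution) is given below; whitespace tokenization is an assumption made purely for illustration.

```python
from collections import Counter

def type_token_ratio(tokens):
    """Number of unique word types divided by the total number of tokens."""
    return len(set(tokens)) / len(tokens)

def top_frequencies(tokens, k=20):
    """Most frequent word types, for comparing natural vs. synthetic corpora."""
    return Counter(tokens).most_common(k)

# usage sketch:
# tokens = open("train.synthetic.en").read().split()
# print(type_token_ratio(tokens), top_frequencies(tokens, 10))
```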
§ LIMITATIONS
This study is not without limitations. First, the size of the training data we set aside for this study is relatively small for NMT. This may impact the generalizability of our findings to instances where a much larger parallel corpus needs to be augmented. Secondly, our experiments only considered two different languages translating into English (and both languages are western Indo-European languages closely related to English). This is a small sample size and may also affect the generalizability of our study. It is especially important because ChatGPT itself is trained on significantly more data, likely quite a bit of parallel data as well. It's possible it will not perform well in naturally low resource settings, where it is not possible for ChatGPT to generate synthetic parallel data.
§ CONCLUSIONS
Our experiments demonstrate a possible issue with using large language models to generate synthetic data: the lack of diversity in the generated data. It is becoming evident that many of the capabilities of these models are limited only by the specific prompts used to communicate with them. More research is necessary to determine whether prompt engineering can alleviate the diversity issue, for example by using more sentences during the 'Assistant' few-shot phase in Section <ref>. As new and varied prompt-based models are released, the problem may also naturally be solved by better language models and more thorough linguistic understanding on the part of the models.
Our experiments do show, however, that augmenting natural data with entirely synthetic data shows promise for training machine translators. Despite a domain mismatch, translation quality improved, even when the language model only had access to limited training data. While the diversity issue remains, it is encouraging that even simple, repetitive sentences can improve the quality of a translator.
§ ACKNOWLEDGEMENTS
We thank Dr. Miikka Silfverberg for providing suggestions in the data analysis section, as well as Dr. James Kryklywy for insight in the discussion of the results.
|
http://arxiv.org/abs/2307.04832v2 | 20230710181133 | Odd Entanglement Entropy in $\text{T}\bar{\text{T}}$ deformed CFT$_2$s and Holography | [
"Debarshi Basu",
"Saikat Biswas",
"Ankur Dey",
"Boudhayan Paul",
"Gautam Sengupta"
] | hep-th | [
"hep-th"
] |
]Debarshi BasuE-mail:
]Saikat BiswasE-mail:
]Ankur DeyE-mail:
]Boudhayan PaulE-mail:
]Gautam SenguptaE-mail:
[]
Department of Physics,
Indian Institute of Technology,
Kanpur 208 016, India
Odd Entanglement Entropy in TT̅ deformed CFT_2s and Holography
[
==============================================================
We construct a replica technique to perturbatively compute the odd entanglement entropy (OEE) for bipartite mixed states in TT̅ deformed CFT_2s. This framework is then utilized to obtain the leading order correction to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in TT̅ deformed thermal CFT_2s in the large central charge limit. The field theory results are subsequently reproduced in the high temperature limit from holographic computations for the entanglement wedge cross sections in the dual bulk finite cut-off BTZ geometries. We further show that for finite size TT̅ deformed CFT_2s at zero temperature the corrections to the OEE are vanishing to the leading order from both field theory and bulk holographic computations.
§ INTRODUCTION
Quantum entanglement has emerged as a prominent area of research to explore a wide range of physical phenomena spanning several disciplines from quantum many body systems in condensed matter physics to issues of quantum gravity and black holes. The entanglement entropy (EE) has played a crucial role in this endeavor as a measure for characterizing the entanglement of bipartite pure quantum states although it fails to effectively capture mixed state entanglement due to spurious correlations. In this context several mixed state entanglement and correlation measures such as the reflected entropy, entanglement of purification, balanced partial entanglement etc. have been proposed in quantum information theory.
Interestingly it was possible to compute several of these measures through certain replica techniques for bipartite states in two dimensional conformal field theories (CFT_2s). In this connection the Ryu Takayanagi (RT) proposal <cit.> quantitatively characterized the holographic entanglement entropy (HEE) of a subsystem in CFTs dual to bulk AdS geometries through the AdS/CFT correspondence. This was extended by the Hubeny Rangamani Takayanagi (HRT) proposal <cit.> which provided a covariant generalization of the RT proposal for time dependent states in CFTs dual to non static bulk AdS geometries. The RT and HRT proposals were later proved in <cit.>.
Recently another computable measure for mixed state entanglement known as the odd entanglement entropy (OEE) was proposed by Tamaoka in <cit.>. The OEE may be broadly understood as the von Neumann entropy of the partially transposed reduced density matrix of a given subsystem <cit.>.[This is a loose interpretation as the partially transposed reduced density matrix does not represent a physical state and may contain negative eigenvalues <cit.>.] The author in <cit.> utilized a suitable replica technique to compute the OEE for a bipartite mixed state configuration of two disjoint intervals in a CFT_2. Interestingly in <cit.> the author proposed a holographic duality relating the OEE and the EE to the bulk entanglement wedge cross section (EWCS) for a given bipartite state in the AdS_3/CFT_2 scenario. For recent developments see <cit.>.
On a different note it was demonstrated by Zamolodchikov <cit.> that CFT_2s which have undergone an irrelevant deformation by the determinant of the stress tensor (known as TT̅ deformations) exhibit exactly solvable energy spectrum and partition function. These theories display non local UV structure and have an infinite number of possible RG flows leading to the same fixed point. A holographic dual for such theories was proposed in <cit.> to be a bulk AdS_3 geometry with a finite radial cut-off. This proposal could be substantiated through the matching of the two point function, energy spectrum and the partition function between the bulk and the boundary (see <cit.> for further developments). The authors in <cit.> computed the HEE for bipartite pure state configurations in various TT̅ deformed dual CFTs. Subsequently the authors in <cit.> obtained the reflected entropy and its holographic dual, the EWCS, for bipartite mixed states in TT̅ deformed dual CFT_2s. Recently the entanglement negativity for various bipartite mixed states in TT̅ deformed thermal CFT_2s, and the corresponding holographic dual for bulk finite cut-off BTZ black hole geometries were computed in <cit.>.
Motivated by the developments described above, in this article we compute the OEE for various bipartite mixed states in TT̅ deformed dual CFT_2s. For this purpose we construct an appropriate replica technique and a conformal perturbation theory along the lines of <cit.> to develop a path integral formulation for the OEE in TT̅ deformed CFT_2s with a small deformation parameter. This perturbative construction is then utilized to compute the first order corrections to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in a TT̅ deformed thermal CFT_2 with a small deformation parameter in the large central charge limit. Subsequently we explicitly compute the bulk EWCS for the above mixed state configurations in the TT̅ deformed thermal dual CFT_2s by employing a construction involving embedding coordinates as described in <cit.>. Utilizing the EWCS obtained we demonstrate that the first order correction to the field theory replica technique results for the OEE in the large central charge and the high temperature limit matches exactly with the first order correction to the sum of the EWCS and the HEE, verifying the holographic duality between the above quantities in the context of TT̅ deformed thermal CFT_2s. Following this we extend our perturbative construction to TT̅ deformed finite size CFT_2s at zero temperature and demonstrate that the leading order corrections to the OEE are vanishing, which is substantiated through bulk holographic computations involving the EWCS.
This article is organized as follows. In <ref> we briefly review the basic features of TT̅ deformed CFT_2s and the OEE. In <ref> we develop a perturbative expansion for the OEE in a TT̅ deformed CFT_2. In <ref> this perturbative construction is then employed to obtain the leading order corrections to the OEE for various bipartite states in a TT̅ deformed thermal CFT_2. Following this we explicitly demonstrate the holographic duality for first order corrections between the OEE and the sum of the bulk EWCS and the HEE for these mixed states. Subsequently in <ref> we extend our perturbative analysis to a TT̅ deformed finite size CFT_2 at zero temperature and show that the leading order corrections to the OEE are zero. This is later verified through bulk holographic computations. Finally, we summarize our results in <ref> and present our conclusions. Some of the lengthy technical details of our computations have been described in <ref>.
§ REVIEW OF EARLIER LITERATURE
§.§ TT̅ deformation in a CFT_2
We begin with a brief review of a two dimensional conformal field theory deformed by the TT̅ operator defined as follows <cit.>
<TT̅>=1/8(<T_ab><T^ab>-<T^a_a>^2).
It is a double trace composite operator which satisfies the factorization property <cit.>. The corresponding deformation generates a one parameter family of theories described by a deformation parameter μ (≥ 0) as given by the following flow equation <cit.>
dℐ_QFT^(μ)/dμ=∫ d^2x (TT̅)_μ , ℐ_QFT^(μ)|_μ=0=ℐ_CFT ,
where ℐ_QFT^(μ) and ℐ_CFT represent the actions of the deformed and undeformed theories respectively. The deformation parameter μ has dimensions of length squared. Note that the energy spectrum may be determined exactly for a TT̅ deformed CFT_2 <cit.>.
When μ is small, the action of the TT̅ deformed CFT_2 may be perturbatively expanded as <cit.>
ℐ_QFT^(μ)=ℐ_CFT+μ∫ d^2x (TT̅)_μ=0
=ℐ_CFT+μ∫ d^2x (TT̅-Θ^2) ,
where T≡ T_ww, T̅≡ T_w̅w̅ and Θ≡ T_ww̅ describe the components of the stress tensor of the undeformed theory expressed in the complex coordinates (w,w̅). Our investigation focuses on TT̅ deformed CFT_2s at a finite temperature, and finite size TT̅ deformed CFT_2s at zero temperature, which are defined on appropriate cylinders. The expectation value of Θ vanishes on a cylinder and the Θ^2 term in <ref> may be dropped from further consideration <cit.>.
§.§ Odd entanglement entropy
We now focus our attention on a bipartite mixed state correlation measure termed the odd entanglement entropy (OEE), which approximately characterizes the von Neumann entropy for the partially transposed reduced density matrix of a given bipartite system <cit.>. In this context we begin with a bipartite system comprising the subsystems A and B, described by the reduced density matrix ρ_AB defined on the Hilbert space ℋ_AB=ℋ_A⊗ℋ_B, where ℋ_A and ℋ_B denote the Hilbert spaces for the subsystems A and B respectively. The partial transpose ρ_AB^T_B for the reduced density matrix ρ_AB with respect to the subsystem B is then given by
⟨ e^(A)_i e^(B)_j|ρ_AB^T_B| e^(A)_k e^(B)_l⟩=⟨ e^(A)_i e^(B)_l|ρ_AB| e^(A)_k e^(B)_j⟩,
where |e^(A)_i⟩ and |e^(B)_j⟩ describe orthonormal bases for the Hilbert spaces ℋ_A and ℋ_B respectively. The Rényi odd entropy of order n_o between the subsystems A and B may be defined as <cit.>
S_o^(n_o)(A:B)=1/1-n_olog[Tr(ρ_AB^T_B)^n_o],
where n_o is an odd integer. The OEE between the subsystems A and B may now be defined through the analytic continuation of the odd integer n_o→ 1 in <ref> as follows <cit.>
S_o(A:B)=lim_n_o→ 1[S_o^(n_o)(A:B)]=lim_n_o→ 11/1-n_olog[Tr(ρ_AB^T_B)^n_o].
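As a simple numerical illustration of these definitions, the sketch below constructs the partial transpose of a two-qubit mixed state and evaluates S_o using the commonly quoted analytic continuation S_o=-∑_iλ_ilog|λ_i|, where λ_i are the eigenvalues of ρ_AB^T_B; the toy state (a Werner state) and this closed-form continuation are illustrative assumptions rather than part of the construction reviewed here.

```python
import numpy as np

def partial_transpose_B(rho, dA=2, dB=2):
    """Partial transpose over subsystem B of a density matrix on H_A (x) H_B."""
    r = rho.reshape(dA, dB, dA, dB)                # indices (i, j, k, l)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap j <-> l

def odd_entanglement_entropy(rho, dA=2, dB=2):
    """S_o = -sum_i lam_i log|lam_i| over eigenvalues of rho^{T_B} (assumed form)."""
    lam = np.linalg.eigvalsh(partial_transpose_B(rho, dA, dB))
    lam = lam[np.abs(lam) > 1e-12]
    return float(-np.sum(lam * np.log(np.abs(lam))))

# two-qubit Werner state as a toy mixed state
p = 0.75
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4
print(odd_entanglement_entropy(rho))
```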
§.§ Odd entanglement entropy in a CFT_2
The subsystems A and B in a CFT_2 may be characterized by the disjoint spatial intervals [z_1,z_2] and [z_3,z_4] in the complex plane [with x_1<x_2<x_3<x_4 , x= Re(z)]. In <cit.> the author advanced a replica technique to compute the OEE for bipartite systems in a CFT_2. The replica construction involves an n_o sheeted Riemann surface ℳ_n_o (where n_o∈ 2ℤ^+-1) prepared through the cyclic and anti cyclic sewing of the branch cuts of n_o copies of the original manifold ℳ along the subsystems A and B respectively. Utilizing the replica technique, the trace of the partial transpose in <ref> may be expressed in terms of the partition function on the n_o sheeted replica manifold as follows <cit.>
Tr(ρ_AB^T_B)^n_o
=ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o .
The relation in <ref> may be utilized along with <ref> to express the OEE in terms of the partition functions as follows
S_o(A:B)=lim_n_o→ 11/1-n_olog[ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o].
The partition function in <ref> may be expressed in terms of an appropriate four point correlation function of the twist and anti twist operators σ_n_o and σ̅_n_o located at the end points of the subsystems A and B as follows <cit.>
ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o
=⟨σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2)
σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4) ⟩.
We are now in a position to express the OEE between the subsystems A and B in terms of the four point twist correlator by combining <ref> as follows <cit.>
S_o(A:B)=lim_n_o→ 11/1-n_olog[
⟨σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2)
σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4) ⟩].
Note that σ_n_o and σ̅_n_o represent primary operators in the CFT_2 with the following conformal dimensions <cit.>
h_n_o=h̅_n_o=c/24(n_o-1/n_o).
We also note in passing the conformal dimensions of the twist operators σ_n_o^2 and σ̅_n_o^2, which are given as follows <cit.>
h_n_o^(2)=h̅_n_o^(2)=h_n_o=c/24(n_o-1/n_o).
§.§ Holographic odd entanglement entropy
We now follow <cit.> to present a brief review of the EWCS. Let M be any specific time slice of a bulk static geometry in the context of the AdS_d+1/CFT_d framework. Consider a region A in ∂ M. The entanglement wedge of A is given by the bulk region bounded by A∪Γ_A^ min, where Γ_A^ min is the RT surface for A. It has been proposed to be dual to the reduced density matrix ρ_A <cit.>. To define the EWCS, we subdivide A=A_1∪ A_2. A cross section of the entanglement wedge for A_1∪ A_2, denoted by Σ_A_1A_2, is defined such that it divides the wedge into two parts containing A_1 and A_2 separately. The EWCS between the subsystems A_1 and A_2 may then be defined as <cit.>
E_W (A_1:A_2)=Area(Σ_A_1A_2^ min)/4G_N ,
where Σ_A_1A_2^ min represents the minimal cross section of the entanglement wedge.
In <cit.> the author proposed a holographic duality describing the difference of the OEE and the EE in terms of the bulk EWCS of the bipartite state in question as follows
S_o (A_1:A_2) - S (A_1 ∪ A_2) = E_W (A_1:A_2) ,
where S(A_1 ∪ A_2) is the EE for the subsystem A_1 ∪ A_2, and E_W (A_1:A_2) represents the EWCS between the subsystems A_1 and A_2 respectively.
§ OEE IN A TT̅ DEFORMED CFT_2
In this section we develop an appropriate replica technique similar to those described in <cit.> for the computation of the OEE for various bipartite mixed state configurations in a TT̅ deformed CFT_2. To this end we consider two spatial intervals A and B in a TT̅ deformed CFT_2 defined on a manifold ℳ. The partition functions on ℳ and ℳ_n_o for this deformed theory may be expressed in the path integral representation as follows [refer to <ref>]
ℤ[ℳ] = ∫_ℳ𝒟ϕ e^-ℐ_QFT^(μ)[ϕ] , ℤ[ℳ_n_o]
= ∫_ℳ_n_o𝒟ϕ e^-ℐ_QFT^(μ)[ϕ] .
When the deformation parameter μ is small, <ref> may be utilized to express the OEE as
S_o^(μ)(A:B)=lim_n_o→ 11/1-n_olog[∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT-μ∫_ℳ_n_o(TT̅)/(∫_ℳ𝒟ϕ e^-ℐ_CFT-μ∫_ℳ(TT̅))^n_o] ,
where the superscript μ has been used to specify the OEE in the TT̅ deformed CFT_2. The exponential factors in <ref> may be further expanded for small μ to arrive at
S_o^(μ)(A:B) =lim_n_o→ 11/1-n_olog[∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT(1-μ∫_ℳ_n_o(TT̅)+𝒪(μ^2))/[∫_ℳ𝒟ϕ e^-ℐ_CFT(1-μ∫_ℳ(TT̅)+𝒪(μ^2))]^n_o]
=S_o^(CFT)(A:B)+lim_n_o→ 11/1-n_olog[(1-μ∫_ℳ_n_oTT̅_ℳ_n_o)/(1-μ∫_ℳTT̅_ℳ)^n_o] .
The term S_o^(CFT)(A:B)≡ S_o^(μ=0)(A:B) in <ref> represents the corresponding OEE for the undeformed CFT_2. The expectation values of the TT̅ operator on the manifolds ℳ and ℳ_n_o appearing in <ref> are defined as follows
TT̅_ℳ=∫_ℳ𝒟ϕ e^-ℐ_CFT(TT̅)/∫_ℳ𝒟ϕ e^-ℐ_CFT , TT̅_ℳ_n_o=∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT(TT̅)/∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT .
The second term on the right hand side of <ref> may be simplified to obtain the first order correction in μ to the OEE due to the deformation as follows
δ S_o(A:B) = -μlim_n_o→ 11/1-n_o[∫_ℳ_n_oTT̅_ℳ_n_o-n_o ∫_ℳTT̅_ℳ] .
§ TT̅ DEFORMED THERMAL CFT_2 AND HOLOGRAPHY
§.§ OEE in a TT̅ deformed thermal CFT_2
We now investigate the behavior of the TT̅ deformed CFT_2 at a finite temperature 1/β. The corresponding manifold ℳ for this configuration is given by an infinitely long cylinder of circumference β with the Euclidean time direction compactified by the periodic identification τ∼τ+β. This cylindrical manifold ℳ may be described by the complex coordinates <cit.>
w=x+iτ , w̅=x-iτ ,
with the spatial coordinate x∈ (-∞,∞) and the time coordinate τ∈ (0,β). The cylinder ℳ may be further expressed in terms of the complex plane ℂ through the following conformal map <cit.>
z=e^2π w/β , z̅=e^2πw̅/β ,
where (z, z̅) represent the coordinates on the complex plane. The transformation of the stress tensors under the conformal map described in <ref> is given as
T(w)=(2π z/β)^2T(z)-π^2c/6β^2 , T̅(w̅)=(2πz̅/β)^2T̅(z̅)-π^2c/6β^2 .
The relations in <ref> may be utilized to arrive at
T(w)T̅(w̅)_ℳ=(π^2c/6β^2)^2,
where we have used the fact that T(z)_ℂ=T̅(z̅)_ℂ=0 for the vacuum state of an undeformed CFT_2 described by the complex plane.
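The anomalous constant appearing above can be checked symbolically; the short sketch below assumes the standard transformation T(w)=(dz/dw)^2T(z)+(c/12){z;w} and evaluates the Schwarzian derivative of the exponential map z=e^2π w/β defined above.

```python
import sympy as sp

w, beta, c = sp.symbols('w beta c', positive=True)
z = sp.exp(2 * sp.pi * w / beta)            # cylinder-to-plane map

# Schwarzian derivative {z; w} = z'''/z' - (3/2) (z''/z')**2
z1, z2, z3 = sp.diff(z, w), sp.diff(z, w, 2), sp.diff(z, w, 3)
schwarzian = sp.simplify(z3 / z1 - sp.Rational(3, 2) * (z2 / z1) ** 2)

# anomalous piece of T(w) = (dz/dw)**2 T(z) + (c/12) {z; w}
shift = sp.simplify(c / 12 * schwarzian)
print(schwarzian)   # expected: -2*pi**2/beta**2
print(shift)        # expected: -pi**2*c/(6*beta**2)
```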
In the following subsections, we utilize <ref> to compute the first order correction in μ to the OEE in a finite temperature TT̅ deformed CFT_2 for two disjoint intervals, two adjacent intervals and a single interval.
§.§.§ Two disjoint intervals
We begin with the bipartite mixed state configuration of two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] in a TT̅ deformed CFT_2 at a finite temperature 1/β, defined on the cylindrical manifold ℳ (x_1<x_2<x_3<x_4). Note that the intervals may also be represented as A=[w_1,w_2] and B=[w_3,w_4] with τ=0 [cf. <ref>]. The value of TT̅_ℳ_n_o on the replica manifold ℳ_n_o may be computed by insertion of the TT̅ operator into the appropriate four point twist correlator as follows <cit.>
∫_ℳ_n_oTT̅_ℳ_n_o = ∑_k=1^n_o∫_ℳT_k(w)T̅_k(w̅)σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ/σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ
= ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_0(w_4, w̅_4)_ℳ/σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ .
Here T_k(w),T̅_k(w̅) are the stress tensors of the undeformed 2 on the k^th sheet of the Riemann surface ℳ_n_o, while T^(n_o)(w),T̅^(n_o)(w̅) represent the stress tensors on ℳ_n_o <cit.>. σ_n_o(w_i,w̅_i),σ̅_n_o(w_i,w̅_i) represent the twist operators located at the end points w_i of the intervals. An identity described in <cit.> has been used to derive the last line of <ref>. The relation in <ref> may now be utilized to transform the stress tensors from the cylindrical manifold to the complex plane. The following Ward identities are then employed to express the correlation functions involving the stress tensors in terms of the twist correlators on the complex plane
T^(n_o)(z)𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ
= ∑_j=1^m(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) 𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ ,
T̅^(n_o)(z̅)𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ
= ∑_j=1^m(h̅_j/(z̅-z̅_j)^2+1/(z̅-z̅_j)∂_z̅_j)
𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ ,
where 𝒪_is represent arbitrary primary operators with conformal dimensions (h_i ,h̅_i).
Utilizing <ref>, we may now express the expectation value in <ref> as
∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅_n_o(z_2, z̅_2)σ̅_n_o(z_3, z̅_3)σ_n_o(z_4, z̅_4)_ℂ
×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^4(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ]
×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^4(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ]
×σ_n_o(z_1, z̅_1)σ̅_n_o(z_2, z̅_2)σ̅_n_o(z_3, z̅_3)σ_n_o(z_4, z̅_4)_ℂ ,
where h_i=h̅_i=h_n_o (i=1,2,3,4) [see <ref>]. The four point twist correlator in <ref> for two disjoint intervals in proximity described by the t channel is given by <cit.>
σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2)
σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4)_ℂ≈z_14z_23^-4 h_n_o(1+√(η)/1-√(η))^-h_n_o^(2)(1+√(η̅)/1-√(η̅))^-h̅_n_o^(2).
The conformal dimensions h_n_o, h_n_o^(2) and h̅_n_o^(2) in <ref> are given in <ref>. We have defined the cross ratio η:=z_12 z_34/z_13 z_24 where z_ij≡ z_i-z_j.
We are now in a position to obtain the first order correction due to μ in the OEE of two disjoint intervals in a TT̅ deformed finite temperature CFT_2 by substituting <ref> into <ref> as follows
δ S_o(A:B) = -μ c^2 π ^4 √(η)/18β^4 z_21 z_32 z_41 z_43∫_ℳ z^2 [z_32 z_42 [z_31 (2z-3z_1+z_4)√(η)+z_43 (z-z_1)]/(z-z_1)^2.
+z_31 z_41 [z_42 (2z-3z_2+z_3)√(η)-z_43 (z-z_2)]/(z-z_2)^2
-z_42 z_41 [z_31(2z+z_2-3z_3) √(η)-z_21(z-z_3)]/(z-z_3)^2
. -z_31 z_32 [z_42 (2z+z_1-3z_4)√(η)+z_21 (z-z_4)]/(z-z_4)^2]+h.c.
The detailed derivation of the definite integrals in <ref> has been provided in <ref>. These results may be used to arrive at
δ S_o (A:B) = μ c^2 π^3 /36 β^2[
{( √(z_42 z_43/z_21 z_31)+1 ) z_1+z_4 }/ z_41log[ z_1/z_2] .
. + (√(z_21 z_43/z_31 z_42)-2) (z_1 z_2-z_3 z_4) /z_32 z_41log[ z_2/z_3]
+ { z_1 - (√(z_21 z_31/z_42 z_43)-1 ) z_4 }/ z_41log[ z_3/z_4] + h.c. ].
We may now substitute z_i = z̅_i = e^2π x_i/β (at τ_i=0) into <ref> to finally obtain
δ S_o (A:B) = -μ c^2 π^4/9 β ^3√(sinh(π x_21/β) sinh(π x_43/β)/sinh(π x_31/β) sinh(π x_42/β))[x_21(π x_21/β).
. -x_32(π x_32/β)
- x_41(π x_41/β)+x_43(π x_43/β)]
-μ c^2 π^4/9 β ^3[x_32(π x_32/β)+x_41(π x_41/β)],
where x_ij≡ x_i-x_j.
§.§.§ Two adjacent intervals
We now turn our attention to the bipartite mixed state configuration of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3]
in a TT̅ deformed CFT_2 at a finite temperature 1/β (x_1<x_2<x_3). As earlier the intervals may be expressed as A=[w_1,w_2] and B=[w_2,w_3] with τ=0. The value of TT̅_ℳ_n_o for two adjacent intervals may be evaluated in a manner similar to that of two disjoint intervals as follows
∫_ℳ_n_oTT̅_ℳ_n_o = ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ_n_o(w_3, w̅_3)_ℳ/σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ_n_o(w_3, w̅_3)_ℳ .
As before the relations in <ref> may be utilized to express the expectation value in <ref> as follows
∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ
×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^3(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ]
×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^3(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ]
×σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ .
In <ref> we have h_1=h_3=h_n_o,h_2=h^(2)_n_o with h̅_i=h_i (i=1,2,3) [see <ref>]. The three point twist correlator in <ref> is given by <cit.>
σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ
=𝒞_σ_n_oσ̅_n_o^2σ_n_o/( z^h^(2)_n_o_12z^h^(2)_n_o_23z^2h_n_o-h^(2)_n_o_13)
( z̅^h̅^(2)_n_o_12z̅^h̅^(2)_n_o_23z̅^2h̅_n_o-h̅^(2)_n_o_13) ,
where 𝒞_σ_n_oσ̅_n_o^2σ_n_o is the relevant OPE coefficient. The first order correction due to μ in the OEE of two adjacent intervals in a TT̅ deformed thermal CFT_2 may now be obtained by substituting <ref> into <ref> as follows
δ S_o (A:B) = -μ c^2π^4/18 β^4∫_ℳz^2[
1/(z-z_1)^2+1/(z-z_2)^2+1/(z-z_3)^2.
. +(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3)
+ h.c. ].
The technical details of the definite integrals in <ref> have been included in <ref>. The correction to the OEE may then be expressed as
δ S_o (A:B)
= -μ c^2π^3/36β^2[(z_1^2-z_2 z_3) log(z_1/z_2)/z_12 z_13+(z_1 z_2-z_3^2) log(z_2/z_3)/z_23z_13 + h.c. ] .
As earlier we may now restore the x coordinates by inserting z_i = z̅_i = e^2π x_i/β (at τ_i=0) into <ref> to arrive at
δ S_o (A:B)
= - (μ c^2π^4/36β^3) x_21cosh(2 π x_21/β)+x_32cosh(2 π x_32/β) - x_31cosh(2 π x_31/β) /sinh(π x_21/β) sinh(π x_32/β) sinh(π x_31/β) .
§.§.§ A single interval
We finally focus on the case of a single interval A=[-ℓ,0] in a thermal TT̅ deformed CFT_2 (ℓ>0). To this end it is required to consider two auxiliary intervals B_1=[-L, -ℓ] and B_2=[0,L] on either side of the interval A with B≡ B_1∪ B_2 (L≫ℓ) <cit.>. The intervals may be equivalently represented by the coordinates B_1=[x_1,x_2], A=[x_2,x_3] and B_2=[x_3,x_4], with x_1=-L,x_2=-ℓ,x_3=0,x_4=L and x_1<x_2<x_3<x_4. As before the intervals may also be characterized as B_1=[w_1,w_2], A=[w_2,w_3] and B_2=[w_3,w_4] with τ=0. The OEE for the mixed state configuration of the single interval A is then evaluated by implementing the bipartite limit L→∞ (B→ A^c) subsequent to the replica limit n_o→ 1 <cit.>. For the configuration described above, the integral of TT̅_ℳ_n_o on the replica manifold is given by
∫_ℳ_n_oTT̅_ℳ_n_o = ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ^2_n_o(w_3, w̅_3)σ̅_n_o(w_4, w̅_4)/σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ^2_n_o(w_3, w̅_3)σ̅_n_o(w_4, w̅_4) .
As earlier <ref> may be simplified by utilizing <ref> as follows
∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4)
×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^4(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ]
×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^4(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ]
×σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4)_𝒞 ,
where h_1=h_4=h_n_o,h_2=h_3=h^(2)_n_o with h̅_i=h_i (i=1,2,3,4) [see <ref>]. The four point twist correlator in <ref> is given by <cit.>
σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4)
= c_n_oc^(2)_n_o(ℱ_n_o(η)/z^2h_n_o_14 z^2h^(2)_n_o_23η^h^(2)_n_o) (ℱ̅_n_o(η̅)/z̅^2h̅_n_o_14z̅^2h̅^(2)_n_o_23η̅^h̅^(2)_n_o) ,
where c_n_o and c_n_o^(2) are the normalization constants. The functions ℱ_n_o(η) and ℱ̅_n_o(η̅) in <ref> satisfy the following OPE limits
ℱ_n_o(1)ℱ̅_n_o(1)=1 , ℱ_n_o(0)ℱ̅_n_o(0)=𝒞_σ_n_oσ̅_n_o^2σ̅_n_o/c_n_o^(2) ,
where 𝒞_σ_n_oσ̅_n_o^2σ̅_n_o represents the relevant OPE coefficient. As earlier <ref> may be substituted into <ref> to arrive at
δ S_o (A:B)= -μ c^2 π^4/18 β^4∫_ℳ[ ∑_j=1^4z^2/(z-z_j)^2
-∑_j=1^4z^2/(z-z_j)∂_z_j(log[z^2_23 z^2_14 η f(η)]) + h.c. ].
The functions f(η) and f̅(η̅) introduced in <ref> are defined as follows
lim_n_o→ 1 [ℱ_n_o (η)]^1/1-n_o = [f(η)]^c/12 ,
lim_n_o→ 1 [ℱ̅_n_o (η̅)]^1/1-n_o = [f̅(η̅)]^c/12 .
The first order correction due to μ in the OEE of a single interval in a TT̅ deformed CFT_2 at a finite temperature 1/β may now be computed from <ref> by reverting back to the coordinates involving ℓ, L and implementing the bipartite limit L→∞ as follows
δ S_o (A:A^c) = -2 μ c^2 π^4 ℓ/9β^3(1/ e^2 πℓ/β -1 - e^-2 πℓ/β f' [ e^-2 πℓ/β]/2 f [ e^-2 πℓ/β])
- lim_L→∞[ μ c^2 π ^4 L/9β^3( 2 π L/β) ] .
The technical details of the integrals necessary to arrive at <ref> from <ref> have been provided in <ref>. Note that the second term on the right hand side of <ref> represents the divergent part of the OEE for a single interval.
§.§ Holographic OEE in a TT̅ deformed thermal CFT_2
We now turn our attention to the holographic description of the OEE as advanced in <cit.> for various bipartite mixed states in a TT̅ deformed CFT_2 at a finite temperature 1/β. The holographic dual of a TT̅ deformed CFT_2 is described by the bulk AdS_3 geometry corresponding to the undeformed CFT_2 with a finite cut-off radius r_c given as follows <cit.>
r_c=√(6 R^4/π cμ)=R^2/ϵ .
In <ref> μ is the deformation parameter, c is the central charge, ϵ is the UV cut-off of the field theory, and R is the AdS_3 radius. For a TT̅ deformed CFT_2 at a finite temperature 1/β, the corresponding bulk dual is characterized by a BTZ black hole <cit.> with a finite cut-off, represented by <cit.>
ds^2=-(r^2-r_h^2)/R^2dt^2+R^2/(r^2-r_h^2)dr^2+r^2dx̃^2 .
In the above metric, the horizon of the black hole is located at r=r_h, with β=2π R^2/r_h as the inverse temperature of the black hole and the dual CFT_2. For simplicity from now onwards we set the radius R=1. The metric on the TT̅ deformed CFT_2, located at the cut-off radius r=r_c, is conformal to the bulk metric at r=r_c as follows <cit.>
ds^2=-dt^2+dx̃^2/(1-r_h^2/r_c^2)≡ -dt^2+dx^2 ,
x=x̃(1-r_h^2/r_c^2)^-1/2,
where x represents the spatial coordinate on the TT̅ deformed CFT_2. To compute the EWCS, we embed the BTZ black hole described by <ref> in ℝ^2,2 as follows <cit.>
ds^2
=η_ABdX^AdX^B =-dX^2_0-dX^2_1+dX^2_2+dX^2_3 ,
X^2=-1 .
The metric in <ref> may then be described by these embedding coordinates as follows <cit.>
X_0(t,r,x) =√(r^2/r_h^2-1) sinh(2 π t/β),
X_1(t,r,x) =r/r_hcosh(2 πx̃/β),
X_2(t,r,x) =√(r^2/r_h^2-1) cosh( 2 π t/β),
X_3(t,r,x) =r/r_hsinh(2 πx̃/β).
Note that for convenience the embedding coordinates in <ref> are parameterized in terms of the coordinate x described in <ref>. We also introduce a new coordinate u = 1/r to simplify later calculations, with u_c ≡ 1/r_c and u_h ≡ 1/r_h. We also note the Brown Henneaux formula G_N=3/(2c) described in <cit.>, which will be extensively used in later sections. In the following subsections we apply the methods described above to compute the holographic OEE from <ref> for two disjoint intervals, two adjacent intervals, and a single interval in a TT̅ deformed thermal holographic CFT_2.
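As a quick consistency check of this parameterization, the short sketch below verifies symbolically that the embedding satisfies X· X=-1 with the (-,-,+,+) signature; the symbol names are purely illustrative.

```python
import sympy as sp

t, r, xt, r_h, beta = sp.symbols('t r xtilde r_h beta', positive=True)

# embedding of the cut-off BTZ geometry into R^{2,2} (with R = 1), as given above
X0 = sp.sqrt(r**2 / r_h**2 - 1) * sp.sinh(2 * sp.pi * t / beta)
X1 = (r / r_h) * sp.cosh(2 * sp.pi * xt / beta)
X2 = sp.sqrt(r**2 / r_h**2 - 1) * sp.cosh(2 * sp.pi * t / beta)
X3 = (r / r_h) * sp.sinh(2 * sp.pi * xt / beta)

# hyperboloid constraint with signature (-,-,+,+)
norm = sp.simplify((-X0**2 - X1**2 + X2**2 + X3**2).rewrite(sp.exp))
print(norm)   # expected: -1

# the inner products xi_ij^{-1} = -X(s_i).X(s_j) between boundary points
# (t = 0, r = r_c) enter the EWCS expressions used in the following subsections
```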
§.§.§ Two disjoint intervals
We begin with the two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] with x_1<x_2<x_3<x_4 as described in <ref>. The setup has been shown in <ref>. The EWCS involving the bulk points X(s_1),X(s_2),X(s_3),X(s_4) is given by <cit.>
E_W = 1/4G_Ncosh ^-1( 1+√(u)/√(v)),
where
u=ξ^-1_12ξ^-1_34ξ^-1_13ξ^-1_24 , v=ξ^-1_14ξ^-1_23ξ^-1_13ξ^-1_24 , ξ^-1_ij=-X(s_i)· X(s_j) .
The four points on the boundary may be expressed in the global coordinates as X(0,r_c,x_i) for i=1,2,3,4. The corresponding EWCS may then be computed from <ref> as
E_W(A:B)
=1/4G_Ncosh ^-1( √([ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_31/u_h^2) ]
[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_42/u_h^2) ] /[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_32/u_h^2) ]
[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_41/u_h^2) ] ).
+ . √([ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_21/u_h^2) ]
[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_43/u_h^2) ] /[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_32/u_h^2) ]
[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_41/u_h^2) ] ) ).
To compare with the field theory computations in <ref>, we have to take the limit of small deformation parameter μ, corresponding to large cut-off radius r_c (or small u_c) [see <ref>]. Further we must consider the high temperature limit β≪ |x_ij|, as the dual cut-off geometry resembles a BTZ black hole only in the high temperature limit. Expanding <ref> for small u_c and β≪ |x_ij| we arrive at
E_W(A:B) =
1/4G_N cosh^-1[ 1 + 2 sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_32/2u_h) sinh( x_41/2u_h) ]
- u_c^2/16G_N u_h^3 √(sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_31/2u_h) sinh( x_42/2u_h) ) [ x_21 coth( x_21/2u_h)
+ x_43 coth( x_43/2u_h)
- x_32 coth( x_32/2u_h)
- x_41 coth( x_41/2u_h) ]
- u_c^2/32G_N u_h^2 (
√(sinh( x_31/2u_h) sinh( x_42/2u_h) /sinh( x_21/2u_h) sinh( x_43/2u_h) )
×[ csch^2( x_31/2u_h) + csch^2( x_42/2u_h)
- csch^2( x_32/2u_h) - csch^2( x_41/2u_h) ]
+ √(sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_31/2u_h) sinh( x_42/2u_h) )
×[ csch^2( x_21/2u_h) + csch^2( x_43/2u_h)
- csch^2( x_32/2u_h) - csch^2( x_41/2u_h) ]
) .
The first term in <ref> is the EWCS between the two disjoint intervals for the corresponding undeformed 2. The rest of the terms (proportional to u_c^2 and thus to μ) describes the leading order corrections for the EWCS due to the deformation. The third term becomes negligible (compared to the second term) in the high temperature limit. The change in HEE for two disjoint intervals due to the deformation is given by <cit.>
δ S(A∪ B) = - μ c^2 π ^4 /9β^3[ x_32 coth( π x_32/β)
+ x_41 coth( π x_41/β) ].
The change in holographic OEE for two disjoint intervals due to the deformation may now be computed by combining <ref> through <ref>, and is given by <ref>, where we have utilized the holographic dictionary to substitute G_N=3/(2c), u_h=β/(2π) and u_c^2 = π c μ /6. Interestingly our holographic result matches exactly with our earlier field theory computation in <ref>, in the large central charge limit together with small deformation parameter and high temperature limits, which serves as a strong consistency check for our holographic construction.
§.§.§ Two adjacent intervals
We now consider two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] with x_1<x_2<x_3 as described
in <ref>. The configuration has been depicted in <ref>. The EWCS for the corresponding bulk points X(s_1),X(s_2),X(s_3) is given by <cit.>
E_W=1/4 G_Ncosh ^-1(√(2)/√(v)),
where
v= ξ_13^-1/ξ_12^-1ξ_23^-1 , ξ_ij^-1=-X(s_i)· X(s_j) .
As earlier the three points on the boundary may be expressed in the global coordinates as X(0,r_c,x_i) for i=1,2,3. The corresponding EWCS may then be computed from <ref> as
E_W(A:B) =
1/4 G_Nlog[ 4 u_h
sinh( x_21/2 u_h) sinh( x_32/2 u_h) / u_c
sinh( x_31/2 u_h) ]
- u_c^2/16 G_N u_h^3 [
x_21 coth( x_21/2 u_h) - x_31 coth( x_31/2 u_h)
+ x_32 coth( x_32/2 u_h) ]
+ u_c^2/16 G_N u_h^2 [
csch^2( x_21/2 u_h) - csch^2( x_31/2 u_h)
+ csch^2( x_32/2 u_h) ].
Similar to the disjoint configuration, the first term in <ref> is the EWCS between the two adjacent intervals for the corresponding undeformed 2. The rest of the terms (proportional to u_c^2 and thus to μ) describes the leading order corrections for the EWCS due to the deformation. The third term becomes negligible (compared to the second term) in the high temperature limit. The change in HEE for two adjacent intervals due to the deformation is given by <cit.>
δ S(A∪ B)= - ( μ c^2 π^4 /9 β^3) x_31 coth(π x_31/β) .
The change in holographic OEE for two adjacent intervals due to the deformation may now be obtained from <ref>, and is described by <ref>, where as earlier we have used the holographic dictionary. Once again we find exact agreement between our holographic and field theory results (in the large central charge limit, along with small deformation parameter and high temperature limits), which substantiates our holographic construction.
§.§.§ A single interval
Finally we consider the case of a single interval A=[-ℓ,0] in a thermal deformed holographic CFT_2 (ℓ>0). As described in <ref> this necessitates the introduction of two large but finite auxiliary intervals B_1=[-L, -ℓ] and B_2=[0,L] sandwiching the interval A with B≡ B_1∪ B_2 (L≫ℓ) <cit.>. The situation has been outlined in <ref>.
We then compute the holographic OEE for this modified configuration, and finally take the bipartite limit B→ A^c (implemented through L→∞) to obtain the desired OEE for the original configuration of the single interval A. The EWCS between the intervals A and B=B_1∪ B_2 may be computed from the following relation
<cit.>
Ẽ_W(A:B)=E_W(A:B_1)+E_W(A:B_2) ,
where Ẽ_W(A:B) denotes an upper bound on the EWCS between the intervals A and B.
All subsequent computations involving <ref> should be interpreted accordingly. Note that each term on the right hand side of <ref> represents the EWCS of two adjacent intervals which has already been computed in <ref>. The corrections to these terms may thus be read off from <ref> as follows
δ E_W(A:B_1) = - u_c^2/16 G_N u_h^3[
ℓ coth( ℓ/2 u_h)
+ (L-ℓ) coth( (L-ℓ)/2 u_h)
- L coth( L/2 u_h) ],
and
δ E_W(A:B_2) = - u_c^2/16 G_N u_h^3[
ℓ coth( ℓ/2 u_h)
+ L coth( L/2 u_h)
-(L+ℓ) coth( (L+ℓ)/2 u_h) ],
where we have already taken the limits of small deformation parameter and high temperature. The correction to the HEE for a single interval is given as follows <cit.>
δ S (A∪ A^c)= - ( 2 μ c^2 π ^4 L /9 β ^3)
coth(2 π L/β),
where the bipartite limit has already been implemented. The correction to holographic OEE for a single interval due to the deformation may then be computed from <ref> through <ref> on effecting the bipartite limit L→∞ as follows
δ S_o (A : A^c) = -μ c^2 π^4 ℓ/9β^3[ coth( πℓ/β) - 1 ]
= -2 μ c^2 π^4 ℓ/9β^3( 1/( e^2 πℓ/β -1) ),
where we have utilized the holographic dictionary as earlier. Note that on taking the high temperature limit (β→ 0), <ref> reduces (the second part of the first term becomes negligible as e^-2 πℓ/β→ 0) exactly to <ref>. This once again serves as a robust consistency check for our holographic construction.
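The second equality above follows from the elementary identity coth(y) - 1 = 2/(e^{2y}-1), which also makes the exponential suppression at high temperature manifest. A quick numerical confirmation (with arbitrary sample values) is given below.

import numpy as np

y = np.array([0.3, 1.0, 2.5, 6.0])
coth = np.cosh(y) / np.sinh(y)
print(np.max(np.abs((coth - 1) - 2 / (np.exp(2 * y) - 1))))   # ~1e-16: the identity holds
print((coth - 1) / (2 * np.exp(-2 * y)))                      # -> 1 at large y: the correction dies off as e^{-2y}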
§ DEFORMED FINITE SIZE CFT_2 AND HOLOGRAPHY
§.§ OEE in a deformed finite size CFT_2
In this section we follow a similar prescription as in <ref> to formulate a perturbative expansion for the OEE in a deformed finite size CFT_2 of length L at zero temperature. For this setup, the corresponding manifold ℳ describes an infinitely long cylinder of circumference L with the length direction periodically compactified by the relation x∼ x+L <cit.>. The cylindrical manifold ℳ for this configuration may be represented by the complex coordinates described in <ref> with the spatial coordinate x∈ (0,L) and the time coordinate τ∈ (-∞,∞) <cit.>. The cylinder ℳ may be further described on the complex plane ℂ through the following conformal map <cit.>
z=e^- 2π i w/L , z̅=e^2π i w̅/L ,
where (z, z̅) are the coordinates on the complex plane. The relations in <ref> remain valid with β effectively replaced by iL. With these modifications, the expressions in <ref> may now be applied to compute the OEE in a deformed finite size CFT_2 at zero temperature.
§.§.§ Two disjoint intervals
As earlier we start with the mixed state of two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] in a deformed finite size CFT_2 of length L at zero temperature, defined on the cylindrical manifold ℳ described above (x_1<x_2<x_3<x_4). The first order correction in the OEE of two disjoint intervals in a deformed finite size CFT_2 may be obtained by substituting <ref> along with <ref> (β replaced by iL) into <ref> as follows
δ S_o(A:B) = -μ c^2 π ^4 /18L^4(z_1-z_3)^2(z_2-z_4)^2(η -1)√(η)
×∫ _ℳ z^2 [ (z_2-z_3)(z_2-z_4)((z-z_1)(z_3-z_4)+(z_1-z_3)(2z-3z_1+z_4) √(η))/(z-z_1)^2.
. + (z_1-z_3)(z_1-z_4)(-((z-z_2)(z_3-z_4))+(2z-3z_2+z_3)(z_2-z_4)√(η))/(z-z_2)^2.
. - (z_1-z_4)(z_2-z_4)((z_1-z_2)(-z+z_3)+(2z+z_2-3z_3)(z_1-z_3)√(η))/(z-z_3)^2.
. + (z_1-z_3)(z_3-z_2)((z_1-z_2)(z-z_4)+(2z+z_1-3z_4)(z_2-z_4)√(η))/(z-z_4)^2].
We now substitute z → e^-2π i (x+iτ)/L into <ref> and integrate the resulting expression with respect to x to arrive at
δ S_o(A:B) = iμ c^2π ^3/36L^3 √(η)∫ dτ[ z_1√(η)/e^2π (-ix+τ)/L-z_1 + z_2√(η)/e^2π (-ix+τ)/L-z_2 + z_3√(η)/e^2π (-ix+τ)/L-z_3.
. + z_4√(η)/e^2π (-ix+τ)/L-z_4
+ (z_1(z_3-z_4)+(z_1-z_3)(z_1+z_4) √(η))log [e^2π (-ix+τ)/L-z_1]/(z_1-z_3)(z_1-z_4).
. + (z_2(z_4-z_3)+(z_2+z_3)(z_2-z_4) √(η))log [e^2π (-ix+τ)/L-z_2]/(z_2-z_3)(z_2-z_4).
. + ((z_2-z_1)z_3+(z_1-z_3)(z_2+z_3) √(η))log [e^2π (-ix+τ)/L-z_3]/(z_1-z_3)(z_3-z_2).
. + ((z_2-z_1)z_4+(z_1+z_4)(z_4-z_2) √(η))log [e^2π (-ix+τ)/L-z_4]/(z_1-z_4)(z_4-z_2)].
We observe that the first four terms on the right hand side of <ref> readily vanish on inserting the limits of integration x=0 and x=L. Since we have considered the system on a constant time slice, we may take τ_j (j=1,2,3,4) to be zero for all boundary points, and the contributions of the logarithmic functions become zero identically. Thus it is observed that the resultant integrand for the τ integration in <ref> vanishes. Hence the first order correction to the OEE vanishes.
§.§.§ Two adjacent intervals
We now focus on the bipartite mixed state of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] in a deformed finite size CFT_2 of length L at zero temperature, defined on the cylindrical manifold ℳ described by <ref> (x_1<x_2<x_3). For this case, <ref> may still be employed along with the relation described in <ref>, effectively replacing β by iL. The first order correction in OEE due to μ for two adjacent intervals is then given by
δ S_o(A:B) = -μ c^2π^4/18L^4∫_ℳz^2/(z-z_1)^2(z-z_2)^2(z-z_3)^2
×[ z_2^2z_3^2-z_1z_2z_3(z_2+z_3)+z_1^2(z_2^2-z_2z_3+z_3^2) .
. +z^2(z_1^2+z_2^2-z_2z_3+z_3^2-z_1(z_2+z_3))-z(z_1^2(z_2+z_3) .
. +z_2z_3(z_2+z_3)+z_1(z_2^2-6z_2z_3+z_3^2)) ].
Next we replace z → e^-2π i (x+iτ)/L into <ref> and subsequently integrate with respect to x to obtain
δ S_o(A:B) = iμ c^2π^3/36L^3∫ dτ[ z_1/e^2π (-ix+τ)/L-z_1+z_2/e^2π (-ix+τ)/L-z_2+z_3/e^2π (-ix+τ)/L-z_3.
. +(z_1^2-z_2z_3)log [e^2π (-ix+τ)/L-z_1]/(z_1-z_2)(z_1-z_3) +(z_2^2-z_1z_3)log [e^2π (-ix+τ)/L-z_2]/(z_2-z_1)(z_2-z_3).
. +(z_3^2-z_2z_1)log [e^2π (-ix+τ)/L-z_3]/(z_1-z_3)(z_2-z_3)].
Similar to the disjoint case, the first three terms on the right hand side of <ref> readily vanish when the limits of integration x=0 and x=L are inserted. As earlier, for a constant time slice τ_j=0 (j=1,2,3), the
logarithmic functions also contribute nothing to the definite integral. The resulting integrand for the τ integration in <ref> thus vanishes. Hence the corresponding first order correction in the OEE of two adjacent intervals turns out to be zero.
§.§.§ A single interval
Finally we turn our attention to the bipartite mixed state configuration of a single interval A=[x_1,x_2] in a deformed finite size CFT_2 of length L at zero temperature, defined on the cylindrical manifold ℳ given in <ref> (x_1<x_2). The construction of the relevant partially transposed reduced density matrix for this configuration is described in <cit.>. Once again we may utilize <ref> with only two points z_1 and z_2, subject to <ref> (with the effect of iL replacing β), and a two point twist correlator as mentioned below in <ref>. We have expressed the modified version of <ref> as applicable for the system under consideration for convenience of the reader as follows
⟨∫_ℳ_n_o T T̅⟩_ℳ_n_o = 1/n_o∫_ℳ 1/⟨σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)⟩_𝒞
×[ π^2 c n_o/6 L^2 - (2 π z/L)^2 ∑_j=1^2(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ]
×[ π^2 c n_o/6 L^2 - (2 πz̅/L)^2 ∑_k=1^2(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ]
×⟨σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)⟩_𝒞 ,
where h_1=h_2=h^(2)_n_o with h̅_i=h_i (i=1,2) [see <ref>]. The corresponding two point twist correlator for this configuration is given by <cit.>
⟨σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)⟩_𝒞
= 𝒞_12/| z_1-z_2 |^2h_n_o ,
where 𝒞_12 is the relevant normalization constant. Following a similar procedure like the earlier cases, the first order correction for the OEE of this setup may be given as follows
δ S_o(A:B)= -μ c^2π^4/18L^4 (z_1-z_2)^2 ∫_ℳz^2/(z-z_1)^2(z-z_2)^2 .
We then obtain the following expression by substituting z → e^-2π i (x+iτ)/L into <ref> and integrating with respect to x
δ S_o(A:B) = i μ c^2π^3/36L^3∫ dτ[ z_1/ e^2π (-ix+τ)/L-z_1+z_2/ e^2π (-ix+τ)/L-z_2.
. +z_1+z_2/z_1-z_2( log[ e^2π (-ix+τ)/L-z_1 ]-log[ e^2π (-ix+τ)/L-z_2 ] ) ].
Like the previous cases, we observe that the first two terms in <ref> vanish on implementation of the limits of integration x=0 and x=L. As the system under consideration is on a constant time slice τ_j=0 (j=1,2), once again the terms containing the logarithmic functions also vanish. Again the resulting integrand for the τ integration in <ref> vanishes, indicating the vanishing of the first order corrections of the OEE as earlier.
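The vanishing of this correction can also be confirmed numerically: for any fixed τ away from the constant time slice of the endpoints, the x-integral of the holomorphic integrand above over one period already vanishes (when the contour encloses both poles, their residues cancel). The sketch below checks this; the interval endpoints, the value of τ, and the grid size are arbitrary illustrative choices.

import numpy as np

L_size = 1.0
x1, x2 = 0.15, 0.55                                   # endpoints of the single interval
z1, z2 = np.exp(-2j*np.pi*x1/L_size), np.exp(-2j*np.pi*x2/L_size)
tau = 0.07                                            # any fixed Euclidean time with tau != 0
x = np.linspace(0.0, L_size, 200000, endpoint=False)
z = np.exp(-2j*np.pi*(x + 1j*tau)/L_size)
integrand = (z1 - z2)**2 * z**2 / ((z - z1)**2 * (z - z2)**2)
dx = L_size / x.size
print(abs(np.sum(integrand) * dx))                    # ~1e-15, while the integral of |integrand| is O(1)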
§.§ Holographic OEE in a deformed finite size CFT_2
The bulk dual of a deformed finite size CFT_2 of length L at zero temperature is represented by a finite cut-off AdS_3 geometry expressed in global coordinates as follows <cit.>
ds^2=R^2 ( -cosh^2ρ dτ^2 +sinh^2 ρ dϕ^2 + dρ^2 ),
where ϕ=2π x/L. As earlier we embed this AdS_3 geometry in ℝ^2,2 as follows <cit.>
ds^2=η_ABdX^A dX^B
=-dX^2_0-dX^2_1+dX^2_2+dX^2_3 ,
X^2=-1 .
The metric in <ref> may be expressed in terms of the embedding coordinates introduced in <ref> as follows
X_0(τ,ϕ,ρ) = R coshρsinτ, X_1(τ,ϕ,ρ) = R coshρcosτ,
X_2(τ,ϕ,ρ) = R sinhρcosϕ, X_3(τ,ϕ,ρ) = R sinhρsinϕ.
The finite cut-off of the AdS_3 geometry is located at ρ=ρ_c, where
coshρ_c = √(3L^2/2 μ c π^3) .
With the UV cut-off of the field theory given by ϵ = √(μ c π / 6) [see <ref>], the relation in <ref> may be rewritten as
coshρ_c=L/2 πϵ .
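The two expressions for the cut-off location are consistent: substituting ϵ = √(μ c π/6) into L/(2πϵ) reproduces √(3L^2/(2μ c π^3)). A one-line numerical confirmation, with arbitrary sample values for L, μ and c, is given below.

import numpy as np

L_size, mu, c = 2.0, 0.013, 7.0
eps = np.sqrt(mu * c * np.pi / 6)
print(np.sqrt(3 * L_size**2 / (2 * mu * c * np.pi**3)), L_size / (2 * np.pi * eps))   # identical values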
§.§.§ Two disjoint intervals
We begin with two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] on a cylindrical manifold ℳ as detailed in <ref> (x_1<x_2<x_3<x_4). Note that the EWCS involving arbitrary bulk points X(s_1),X(s_2),X(s_3),X(s_4) for a deformed finite size CFT_2 is described by <cit.>
E_W
=1/4G_N cosh ^-1( (1+√(u))/√(v) ),
where
u=ξ^-1_12 ξ^-1_34/(ξ^-1_13 ξ^-1_24) , v=ξ^-1_14 ξ^-1_23/(ξ^-1_13 ξ^-1_24) , ξ^-1_ij=-X(s_i)· X(s_j) .
The end points of the two disjoint intervals under consideration on the boundary may be represented by the embedding coordinates as
X(0,ϕ_i,ρ_c) for i=1,2,3,4, where ϕ_1<ϕ_2<ϕ_3<ϕ_4 (Note that ϕ_i=2π x_i/L). The corresponding EWCS may then be computed from <ref> as
E_W(A:B) = 1/4G_Ncosh^-1 ( √([ 1+
sin^2( π x_31/L) sinh^2ρ_c ] [ 1+
sin^2( π x_42/L) sinh^2ρ_c ]/[ 1+
sin^2( π x_32/L) sinh^2ρ_c ] [ 1+
sin^2( π x_41/L) sinh^2ρ_c ]).
. + √([ 1+
sin^2( π x_21/L) sinh^2ρ_c ] [ 1+
sin^2( π x_43/L) sinh^2ρ_c ]/[ 1+
sin^2( π x_32/L) sinh^2ρ_c ] [ 1+
sin^2( π x_41/L) sinh^2ρ_c ]) ).
To extract the desired first order corrections, we now expand <ref> in small (1/coshρ_c) as follows
E_W(A:B)=
1/4G_Ncosh^-1[ 1 + 2sin( π x_21/L) sin( π x_43/L) /sin( π x_32/L) sin( π x_41/L)] +𝒪[ϵ^2 ],
where we have utilized <ref> to substitute ϵ. The first term in <ref> is the EWCS between the two disjoint intervals for the corresponding undeformed CFT_2. The rest of the terms characterizing the corrections for the EWCS due to the deformation are second order and higher in ϵ and thus negligible. The corresponding leading order corrections for the HEE due to the deformation have been shown to be zero <cit.>. Thus the leading order corrections to the holographic OEE of two disjoint intervals in a deformed finite size CFT_2 are zero, which is in complete agreement with our corresponding field theory computations in the large central charge limit described in <ref>.
§.§.§ Two adjacent intervals
We now turn our attention to the case of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] (x_1<x_2<x_3) as described in <ref>.
The bulk description of the end points of the intervals A and B for a deformed finite size CFT_2 is given by
X(0,ϕ_i,ρ_c) for i=1,2,3, where ϕ_1<ϕ_2<ϕ_3 (ϕ_i=2π x_i/L). The EWCS for this configuration is described as follows <cit.>
E_W=1/4 G_Ncosh^-1(√(2)/√(v)),
where
v= ξ_13^-1/ξ_12^-1ξ_23^-1 , ξ_ij^-1=-X(s_i)· X(s_j) .
We now utilize <ref> to explicitly compute the EWCS as follows
E_W (A:B)
= 1/4G_Ncosh ^-1(
√( 2 [cosh[2](ρ_c) - cos (2π x_21/L) sinh[2](ρ_c)]
[cosh[2](ρ_c) - cos (2π x_32/L) sinh[2](ρ_c)] /cosh[2](ρ_c) - cos (2π x_31/L) sinh[2](ρ_c) )).
We are now in a position to extract the leading order corrections to the EWCS from <ref> by expanding in small (1/coshρ_c) as follows
E_W(A:B) = 1/4 G_Nlog[ ( 2L/πϵ) sin(π x_21/L) sin(π x_32/L)/sin(π x_31/L)] +𝒪[ϵ^2 ],
where we have already substituted the relation in <ref>. As earlier the first term on the right hand side of <ref> describes the EWCS between the two adjacent intervals for the corresponding undeformed CFT_2. Again the correction terms are second order and higher in ϵ and negligible. The leading order corrections of the HEE for this configuration due to the deformation have been demonstrated to vanish <cit.>. Hence the leading order corrections to the holographic OEE for this case vanish, which once again is in conformity with our field theory results in the large central charge limit described in <ref>.
§.§.§ A single interval
The bulk representation of the end points of a single interval of length ℓ may be given by X(0,0,ρ_c) and X(0,δϕ,ρ_c), where δϕ=2πℓ/L. The EWCS for the given configuration (same as the HEE for a single interval) may be computed as
E_W(A:A^c)=1/4G_N cosh ^-1[ 1 + 2 sinh ^2(ρ _c) sin ^2(πℓ/L)].
Once again <ref> may be expanded for small (1/coshρ_c) to obtain the following expression for the EWCS
E_W(A:A^c)=
1/2 G_Nlog[ L/πϵsin(πℓ/L)]
+𝒪[ϵ^2 ],
where we have used <ref> to replace coshρ_c. Once again the first term of <ref> represents the EWCS of a single interval for the corresponding undeformed CFT_2, while we have neglected the second and higher order correction terms in ϵ. The corresponding corrections for the HEE of a single interval have been shown to be zero <cit.>. Thus the leading order corrections to the holographic OEE for a single interval vanish, demonstrating agreement with our field theory calculations in the large central charge limit detailed in <ref>.
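As a check of the expansion above, the sketch below compares the exact single-interval expression with its leading logarithmic term for increasing ρ_c, using coshρ_c = L/(2πϵ); the difference falls off like ϵ^2, as expected. The sample values and the choice G_N=1 are illustrative assumptions.

import numpy as np

G_N, L_size, ell = 1.0, 1.0, 0.3
for rho_c in [3.0, 6.0, 9.0]:
    eps = L_size / (2 * np.pi * np.cosh(rho_c))
    exact = np.arccosh(1 + 2 * np.sinh(rho_c)**2 * np.sin(np.pi*ell/L_size)**2) / (4 * G_N)
    leading = np.log((L_size / (np.pi * eps)) * np.sin(np.pi*ell/L_size)) / (2 * G_N)
    print(rho_c, exact - leading)        # shrinks roughly like e^{-2 rho_c} ~ eps^2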
§ SUMMARY AND CONCLUSIONS
To summarize, we have computed the OEE for different bipartite mixed state configurations in a deformed finite temperature CFT_2 with a small deformation parameter μ. In this context we have developed a perturbative construction to compute the first order correction to the OEE for a small deformation parameter through a suitable replica technique. This incorporates definite integrals of the expectation value of the T T̅ operator over an n_o sheeted replica manifold. We have been able to express these expectation values in terms of appropriate twist field correlators for the configurations under consideration. Utilizing our perturbative construction we have subsequently computed the OEE for the mixed state configurations described by two disjoint intervals, two adjacent intervals, and a single interval in a deformed thermal CFT_2.
Following the above we have computed the corresponding EWCS in the dual bulk finite cut-off BTZ black hole geometry for the above configurations utilizing an embedding coordinate technique from the literature. Interestingly it was possible to demonstrate that the first order correction to the sum of the EWCS and the corresponding HEE matched exactly with the first order correction to the OEE obtained from the CFT_2 replica technique, in the large central charge and high temperature limit. This extends the holographic duality for the OEE proposed in the literature to deformed thermal CFT_2s.
Finally we have extended our perturbative construction to deformed finite size CFT_2s at zero temperature. We have computed the first order corrections to the OEE for the configurations mentioned earlier in such CFT_2s in the large central charge limit. In all the cases we have been able to show that the leading order corrections vanish in the appropriate limits. Quite interestingly it was possible to demonstrate that the first order corrections to the corresponding bulk EWCS in the dual cut-off geometry were also identically zero, in further validation of the extension of the holographic duality for the OEE in the literature to deformed finite size CFT_2s at zero temperature.
It will be instructive to develop similar constructions for other entanglement measures, such as the entanglement of purification, balanced partial entanglement, and reflected entropy, for deformed CFT_2s. A covariant framework for holographic entanglement in these theories along the lines of the HRT construction also remains an important open issue. These constitute exciting open problems for the future.
§ ACKNOWLEDGMENTS
We would like to thank Lavish, Mir Afrasiar and Himanshu Chourasiya for valuable discussions. The work of Gautam Sengupta is supported in part by the Dr. Jag Mohan Garg Chair Professor position at the Indian Institute of Technology, Kanpur. The work of Saikat Biswas is supported by the Council of Scientific and Industrial Research (CSIR) of India under Grant No. 09/0092(12686)/2021-EMR-I.
§ THE INTEGRALS FOR THERMAL 2S
The detailed derivation of the integrals appearing in <ref> has been provided in this appendix. Note that the corresponding domain of integration for all the configurations is the cylindrical manifold ℳ characterized by the complex coordinates (w, w̅) [see <ref>].
§.§ Two disjoint intervals
The holomorphic part of the integral in <ref> may be written as
- μ c^2 π^4 √(η)/18 β^4 z_21 z_32 z_41 z_43∫_ℳ d^2w (z^2) [
z_32 z_42 [z_31 (2z-3z_1+z_4)√(η)+z_43 (z-z_1)]/(z-z_1)^2
+ z_31 z_41 [z_42 (2z-3z_2+z_3)√(η)-z_43 (z-z_2)]/(z-z_2)^2
-z_42 z_41 [z_31(2z+z_2-3z_3) √(η)-z_21(z-z_3)]/(z-z_3)^2
-z_31 z_32 [z_42 (2z+z_1-3z_4)√(η)+z_21 (z-z_4)]/(z-z_4)^2]
= -μ c^2 π ^4 √(η)/18 β^4 z_21 z_32 z_41 z_43∫_0 ^∞ dx ∫_0 ^β dτ e^4 π (x+iτ)/β
×[ z_32 z_42 [z_31 (2e^2π(x+i τ)/β-3z_1+z_4)√(η)+z_43 (e^2π(x+i τ)/β-z_1)]/(e^2π(x+i τ)/β-z_1)^2.
+ z_31 z_41 [z_42 (2e^2π(x+i τ)/β-3z_2+z_3)√(η)-z_43 (e^2π(x+i τ)/β-z_2)]/(e^2π(x+i τ)/β-z_2)^2
-z_42 z_41 [z_31(2e^2π(x+i τ)/β+z_2-3z_3) √(η)-z_21(e^2π(x+i τ)/β-z_3)]/(e^2π(x+i τ)/β-z_3)^2
. -z_31 z_32 [z_42 (2e^2π(x+i τ)/β+z_1-3z_4)√(η)+z_21 (e^2π(x+i τ)/β-z_4)]/(e^2π(x+i τ)/β-z_4)^2].
The primitive function on indefinite integration with respect to τ turns out to be
-i μ c^2 π^3 /36 β ^3 √(η)[ (√(η) z_1^2+(√(η)-1) z_1 (z_43)-√(η) z_3 z_4) log(-z_1+e^2 π (x+i τ )/β)/z_31 z_41.
+(√(η) z_2^2+(√(η)-1) z_2 z_34-√(η) z_3 z_4) log(-z_2+e^2 π (x+i τ )/β)/z_32 z_42
- (√(η) z_1 z_2+(√(η)-1) z_1 z_3+z_3 (-√(η) z_2+z_2-√(η) z_3)) log(-z_3+e^2 π (x+i τ )/β)/z_31 z_32
. +(z_4 (-√(η) z_2+z_2+√(η) z_4)-z_1 (√(η) z_2-√(η) z_4+z_4)) log(-z_4+e^2 π (x+i τ )/β)/z_41 z_42] .
Due to the presence of branch points, the logarithmic functions necessitate careful treatment while implementing the limits of integration τ=0 and τ=β. The following relation outlines the contribution due to a branch point at z=z_j <cit.>
log(e^2π(x+i τ)/β-z_j) |_τ=0^τ=β = {[ 2 π i, for e^2π x/β > z_j ⇔ x > β/2πlog z_j ,; 0, otherwise. ].
The branch cuts of the logarithmic functions change the limits of the x integrals as follows
∫_-∞^∞ dx→∫_β/2πlog z_j^∞ dx,
for j=1,2,3,4.
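The prescription above simply counts whether the circle traced by e^{2π(x+iτ)/β} as τ runs from 0 to β encloses the branch point z_j. This can be checked numerically by tracking the phase of the argument of the logarithm along the path; the sample values below are illustrative.

import numpy as np

beta, x, xj = 2.0, 0.3, 0.1                          # here e^{2 pi x/beta} > z_j = e^{2 pi x_j/beta}
zj = np.exp(2 * np.pi * xj / beta)
tau = np.linspace(0.0, beta, 20001)
w = np.exp(2 * np.pi * (x + 1j * tau) / beta) - zj
jump = np.unwrap(np.angle(w))[-1] - np.angle(w[0])
print(jump / (2 * np.pi))                            # ~1.0, i.e. the log picks up 2 pi i; rerun with x < x_j to get 0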
We are now in a position to integrate over x and utilize the prescription described above to implement the limits of integration to arrive at
μ c^2 π^3 /36 β^2( ( z_1 ( 1+ √(z_42 z_43/z_21 z_31) +z_4 ) )/z_41log[ z_1/z_2] +(-2+ √(z_21 z_43/z_31 z_42)) (z_1 z_2-z_3 z_4)/z_32 z_41log[ z_2/z_3] .
. + ( z_1 +( 1+ √(z_12 z_31/z_42 z_43) z_4 ) )/z_41log[ z_3/z_4] ).
The anti holomorphic part of the integral in <ref> follows a similar analysis and produces the same result as the holomorphic part.
§.§ Two adjacent intervals
The holomorphic part of the integral in <ref> may be written as
∫_ℳ z^2 [ 1/(z-z_1)^2+1/(z-z_2)^2
+1/(z-z_3)^2+(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3)]
=∫_-∞^∞dx∫_0^βdτ e^4 π (x+i τ )/β[ 1/(e^2 π (x+i τ )/β-z_1)^2+1/(e^2 π (x+i τ )/β-z_2)^2+1/(e^2 π (x+i τ )/β-z_3)^2.
. +z_1+z_2+z_3-3 e^2 π (x+i τ )/β/(e^2 π (x+i τ )/β-z_1) (e^2 π (x+i τ )/β-z_2)
(e^2 π (x+i τ )/β-z_3)] .
We proceed in a similar manner to the disjoint configuration as described in <ref>. The indefinite integration with respect to τ leads to the following primitive function
z_1/e^2 π (x+i τ )/β-z_1+z_2/e^2 π (x+i τ )/β-z_2+z_3/e^2 π (x+i τ )/β-z_3+(z_1^2-z_2 z_3)/(z_1-z_2) (z_1-z_3)log(e^2 π (x+i τ )/β-z_1)
+(z_1 z_3-z_2^2)/(z_1-z_2)(z_2-z_3)log(e^2 π (x+i τ )/β-z_2)+(z_3^2-z_1 z_2) /(z_1-z_3)(z_2-z_3)log(e^2 π (x+i τ )/β-z_3).
On implementation of the limits of integration τ = 0 and τ = β, the non logarithmic terms in the above expression vanish, while the contributions of the logarithmic terms follow the relation in <ref>. Due to the relation in <ref>, the limits of integration over x for each term in the integrand gets modified as follows
∫_-∞^∞ dx →∫_β/2πlog z_j^∞ dx, for j=1,2,3.
The integration over x may now be performed to arrive at
∫_ℳ z^2
[ 1/(z-z_1)^2+1/(z-z_2)^2
+1/(z-z_3)^2
+(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3)]
= β ^2/2π[ (z_1^2-z_2 z_3) log(z_1/z_2)/z_12 z_13+(z_1 z_2-z_3^2) log(z_2/z_3)/z_23z_13].
As earlier, the anti holomorphic part of the integral gives result identical to the holomorphic part.
§.§ A single interval
The holomorphic part of the integral in <ref> is given by
∫_ℳ d^2 w ∑_j=1^4( z^2/(z-z_j)^2
- z^2/(z-z_j)∂_z_jlog[z^2_41z^2_23 η f(η) ] )
= ∫_0^∞ dx ∫_0^β dτ
e^4 π (x +i τ)/β[ ∑_j=1^41/( e^2 π (x +i τ)/β-z_j)^2.
+ -4 e^4 π (x +i τ)/β -2z_3z_2 +z_1(-z_2+z_3-2z_4)+z_2z_4-z_3z_4+2e^2 π (x +i τ)/β(z_1+z_2+z_3+z_4)/( e^2 π (x +i τ)/β-z_1)( e^2 π (x +i τ)/β-z_2)( e^2 π (x +i τ)/β-z_3)( e^2 π (x +i τ)/β-z_4)
. -z_21z_32z_41z_43f'(η)/( e^2 π (x +i τ)/β-z_1)( e^2 π (x +i τ)/β-z_2)( e^2 π (x +i τ)/β-z_3)( e^2 π (x +i τ)/β-z_4)z_31z_42 f(η)] .
The indefinite integration over τ gives
i β/2 π∑_j=1^4[ B_j+ C_j log( e^2 π (x+i τ)/β-z_j ) ] ,
where
B_j=z_j/e^2 π (x+iτ)/β-z_j , j=1,2,3,4,
and C_1, C_2, C_3 and C_4 are given as follows
C_1 = -1/z_31^2[ z_31 (z_1^3+z_1^2(z_4-2z_3)+ z_1 z_2 (z_3-2 z_4)+ z_2 z_3 z_4 )/z_41 z_21 + z_1 z_32 z_43 f'(η)/z_42 f(η)] ,
C_2 = 1/z_42^2[ z_42 (z_2 ^3+ z_2 ^2 (z_3-2z_4)+z_1 z_2 (z_4-2z_3)+ z_1 z_3 z_4 )/z_32 z_21 + z_2 z_41 z_43 f'(η)/z_31 f(η)] ,
C_3 = -1/z_31^2 [ z_31 (z_3^3+(z_2-2 z_1) z_3^2+(z_1-2 z_2) z_4 z_3+z_1 z_2 z_4)/z_43z_32 +z_3z_21 z_41 f'(η)/z_42 f(η)] ,
C_4 = 1/z_42^2[ z_42 (z_4^3+(z_1-2 z_2) z_4^2+(z_2-2 z_1) z_3 z_4+z_1 z_2 z_3)/z_41 z_43 +z_4z_21 z_32 f'(η)/z_31 f(η)] .
Once again the non logarithmic terms described by <ref> vanish on insertion of the limits of integration τ = 0 and τ= β, whereas the logarithmic terms in <ref> contribute according to the relation in <ref>, which modifies the limits of the integration over x as follows
∫_-∞^∞ dx →∫_β/2πlog z_j^∞ dx, j=1,2,3,4.
The integration over x for the integrand in <ref> may now be performed with the modified limits described above to arrive at
-β^2/2 π∑_j=1^4 C_j log z_j .
The desired correction to the OEE of a single interval of length ℓ may now be obtained through the substitutions
{z_1, z_2, z_3, z_4}→{e^-2π L/β, e^-2πℓ/β, 1, e^2π L/β}
and subsequent implementation of the bipartite limit L→∞ as follows
lim_L→∞∫_ℳ d^2 w ∑_j=1^4[z^2/(z-z_j)^2 - z^2/(z-z_j)∂_z_jlog[z^2_23 z_41^2 η f(η)]]
= ℓβ(-1/( e^2 πℓ/β -1 )
+ e^-2 πℓ/β f' [ e^-2 πℓ/β]/2 f [ e^-2 πℓ/β])
- lim_L→∞[ L β coth( 2 π L/β) ].
As before the anti holomorphic part of the integral produces identical result to the holomorphic part.
|
http://arxiv.org/abs/2307.04036v1 | 20230708195101 | Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations | [
"Tong Steven Sun",
"Yuyang Gao",
"Shubham Khaladkar",
"Sijia Liu",
"Liang Zhao",
"Young-Ho Kim",
"Sungsoo Ray Hong"
] | cs.HC | [
"cs.HC",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
George Mason University
USA
[email protected]
Emory University
USA
[email protected]
George Mason University
USA
[email protected]
Michigan State University
USA
[email protected]
Emory University
USA
[email protected]
NAVER AI Lab
Republic of Korea
[email protected]
George Mason University
USA
[email protected]
The local explanation provides heatmaps on images to explain how Convolutional Neural Networks (CNNs) derive their output. Due to its visual straightforwardness, the method has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs.
Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on the local explanation: a valuable and indispensable part of building CNNs versus a process that exhausts them due to the heuristic nature of detecting vulnerability. Moreover, steering the CNNs based on the vulnerability learned from the diagnosis seemed highly challenging. To mitigate the gap, we designed the first interactive system that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations.
The system helps CNN engineers to systematically search for "unreasonable" local explanations and annotate the new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotation such that the model doesn't introduce similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using the system, participants made a more accurate and "reasonable" model than the current state-of-the-art. Also, participants found that the way it guides case-based reasoning can practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.
[500]Computing methodologies Learning settings
[500]Human-centered computing Human computer interaction (HCI)
[500]Human-centered computing Interaction paradigms
[500]Computing methodologies Machine learning
Sungsoo Ray Hong
====================
§ INTRODUCTION
As the societal impact of Computer Vision (CV) models grows <cit.>, it has become crucial to find an effective way to steer Convolutional Neural Networks (CNNs) to align their behaviors with users' mental model <cit.>.
Using Explainable AI (XAI) techniques can be the first step to steering Machine Learning (ML) models, as spotting repeating cases that “surprise” ML engineers for a similar reason can help the engineers to generalize the cases to a bigger pattern that signals the vulnerability of their model <cit.>. While XAI techniques are increasingly becoming essential for revising ML models, there are relatively fewer options available for CNNs <cit.>.
Among the few, local explanation–the technique that overlays a saliency map on a single image to visualize the attentive areas that the model referred to–has been widely used by a large number of ML engineers due to its visual straightforwardness <cit.>.
By seeing the attention of a model, a user can assess whether the rationale behind the prediction is reasonable <cit.>.
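For readers less familiar with how such saliency maps are produced, the sketch below illustrates a Grad-CAM-style local explanation in PyTorch. It is a minimal, hedged example: the ResNet backbone (randomly initialized here), the choice of the last convolutional stage, and the random input stand in for whatever trained classifier and images an engineer would actually inspect.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()        # stand-in for a trained CNN classifier
feats, grads = {}, {}
layer = model.layer4                                # last convolutional stage
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                     # stand-in for a preprocessed input image
model(x).max().backward()                           # backpropagate the top logit

w = grads['a'].mean(dim=(2, 3), keepdim=True)       # channel weights: global-average-pooled gradients
cam = F.relu((w * feats['a']).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode='bilinear', align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # [0, 1] heatmap to overlay on the image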
Checking the reasonableness of CNN's “attention” through local explanation can improve CNN's performance in two ways.
First, checking the attention can help ML engineers to identify the bias of a dataset used in training.
In diagnosing a gender classifier, for example, if a model is attentive to contextual objects, such as "snowboard" to predict a man <cit.> or "shopping cart" to infer a woman <cit.>, it means that these contextual objects often appear with a specific gender class in the training dataset. As a result, such an imbalanced distribution of contextual objects causes the model attention to be biased towards contextual objects rather than focusing on the person in the image to classify the gender <cit.>.
Using a biased dataset can induce a model to reference contextual objects in prediction, which is defined to be unfair <cit.>.
Therefore, diagnosing CNNs using local explanation can reduce bias ingrained in a training set, leading the forthcoming model to be fairer <cit.>.
Second, detecting unfair predictions through local explanation can lead to a more robust and generalizable model with stable accuracy. The repeated occurrence of unfair predictions is related to the vulnerability of a CNN, which can be essential for defending against malicious attacks.
For example, imagine that an attacker found a gender classifier that tends to classify images with snowboards as men. In that case, the attacker can prepare counter-contextual examples that show women riding snowboards in a backdoor attack to drop the model accuracy.
Steering CNNs to fix the found vulnerable patterns can thus yield a model that provides stable accuracy performance regardless of object types appearing in future images.
In summary, if the dataset used in training is biased <cit.>, the model fails at demonstrating reasonable attention for specific predictions, which we call unfair predictions <cit.>.
Such unfair cases, in turn, make the CNN model vulnerable <cit.>.
Collectively, the phenomenon of a CNN shifting attention in an unreasonable way due to biased data refers to the problem of contextual bias <cit.>.
While contextual bias has become a highly crucial issue in ML and beyond <cit.>, spotting the vulnerability and steering the model is highly challenging or not even feasible <cit.> even for experienced ML engineers <cit.>.
Detecting unreasonable attention through local explanation can be “just noticeable” from human eyes, but the current solutions are predominantly a machine-centric approach with limited human involvement <cit.>.
In Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW), despite the rich body of research dedicated to better supporting ML engineers <cit.>, little effort has been made to design interfaces that can efficiently and effectively steer CNNs to mitigate contextual bias.
Further, while there exists a breadth of empirical studies focused on understanding the ML engineers' practice, challenges, and design opportunities (e.g., <cit.>), it is not well understood how ML engineers apply local explanation in steering CNNs to mitigate contextual bias or what the practical challenges are.
Through this work, we aim to bridge the technical and empirical gaps we identified in the problem of contextual bias.
Specifically, we aim to create a novel interactive system that can empower ML engineers to leverage local explanations in diagnosing the vulnerability of CNNs and steer them.
To inform our design based on real practice, we conducted a formative study (S1) with five industry CNN experts who have more than 5 years of model development.
We sought to understand how they use local explanations, what the limitations of existing tools are, and how the new design can practically help their practice.
As a result, we identified 3 challenges and 3 desires that we were able to use to streamline their process in our new design.
Based on the findings, we devised an interactive system, the first that realizes a direct feedback loop connecting a user and a CNN using local explanations for model steering.
First, the system enables a user to systematically categorize unreasonables—the images that have overlaps between the model attention and contextual objects—among images used in validation.
Next, for the categorized unreasonables, it suggests the "reasonable" attention boundary that excludes contextual objects to help a user effortlessly finish the annotation task required for steering.
Third, using the user-confirmed boundary input, it steers the target model by optimizing both the prediction loss and attention loss (minimizing prediction errors and shifting the model's attention towards confirmed "reasonable" areas), as sketched below.
Finally, it helps a user to see what has been changed before and after steering.
In particular, it provides the evaluation results regarding (1) how the attention quality has become reasonable and (2) how the improved model attention quality affected the model accuracy performance.
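The steering step above can be read as joint optimization of a standard prediction loss and an attention loss. The sketch below is a minimal, illustrative version of that idea in PyTorch: the exact objective used by the system (and by related methods such as GRADIA or RES) may differ, and the map and mask shapes, the L1 penalty, and the weight lam are assumptions made for the example.

import torch
import torch.nn.functional as F

def steering_loss(logits, labels, attn_maps, user_masks, lam=1.0):
    """Prediction loss plus a term pulling the model's attention toward the
    user-confirmed 'reasonable' regions (binary masks over the attention grid)."""
    pred_loss = F.cross_entropy(logits, labels)
    a = attn_maps.flatten(1)
    a = (a - a.min(1, keepdim=True).values) / (a.max(1, keepdim=True).values - a.min(1, keepdim=True).values + 1e-8)
    attn_loss = F.l1_loss(a, user_masks.flatten(1))
    return pred_loss + lam * attn_loss

# toy usage; in a real pipeline the attention maps would be produced differentiably
# by the network (e.g., Grad-CAM style), so the gradient also reaches its weights
logits = torch.randn(4, 2, requires_grad=True)          # 4 images, 2 classes (e.g., gender)
attn = torch.rand(4, 1, 7, 7, requires_grad=True)       # model attention maps
masks = (torch.rand(4, 1, 7, 7) > 0.5).float()          # user-confirmed reasonable regions
labels = torch.tensor([0, 1, 1, 0])
steering_loss(logits, labels, attn, masks).backward()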
In the summative study (S2), we evaluated the system with 12 experienced CNN builders, asking them to revise a gender classifier across two days.
We found that using the system enabled every participant to achieve better model accuracy and model attention quality than applying the current state-of-the-art techniques.
Meanwhile, after using the system, we also found that over 80% of the participants perceived that it would improve their capability regarding model vulnerability assessment and performance improvement.
Based on the two studies, we provide implications for design on Beyond XAI—how the future design can convert XAI-driven insights into actionable steering plans such that the AI's behavior can gradually be aligned to the human mental model.
This work offers the following contributions:
* S1: Understanding How Local Explanation Is Used in Improving CNNs: We extend our knowledge about how field practitioners apply local explanations when working on CNNs and what the challenges are. Based on the analysis, we suggest how new design can mitigate their difficulties in steering CNNs.
* Design Contribution:
We devise and instantiate a novel, end-to-end, and interactive design that enables ML engineers to practice a systematic case-based vulnerability diagnosis and model steering.
* S2: Understanding the Effect of the Design: Through the study with 12 experienced CNN developers, we understand how the new design can make a difference in building more accurate and robust CNNs.
* Implications for Design for Steerable AI: Based on the results of S1 and S2, we discuss how the HCI and CSCW communities can contribute to making XAI-driven insights more useful and actionable through steerable AI design.
§ RELATED WORK
In this review, we first dive deeper into understanding the problem of contextual bias and explain how unreasonable model attention can detrimentally affect CNN's model performance.
Second, we review landmark XAI-driven systems in HCI devised for diagnosing Deep Neural Networks (DNNs) and discuss how the findings can be applied to resolve the problem of contextual bias through an interactive system.
Next, we cover how the recent advance in explanation-guided steering techniques can be applied to implement an interactive and integrated model steering environment.
Then we highlight the remained technical and empirical challenges in HCI.
When CNNs are not trained properly with generalized and representative datasets, there can be various kinds of bias that can introduce several weaknesses in the model performance <cit.>.
Imagine that one engineer is preparing a set of images for training a dog detection model.
In preparation of data, 50% of the images would show a dog to balance positive and negative cases <cit.>.
The problem can start when some contextual objects, such as a ball, appear more frequently in positive cases than negative <cit.>.
Using such a biased dataset, a model would establish a “spurious” correlation between a dog and a ball <cit.>.
In such a case, the model's attention visualized through local explanation is on the ball rather than a dog <cit.>.
Consequently, when bringing an image that shows a ball, the model may likely say that it detected a dog by seeing a ball regardless of a dog appearing in the image <cit.>.
As such, this phenomenon of “contextual bias” refers to the case where a model's attention is shifting to contextual objects which are not directly relevant to the model's goal <cit.>.
Consequently, using this potential vulnerability, an attacker may be able to drastically decrease model accuracy by showing the ball images without dogs <cit.>.
Furthermore, CNN's shifting the focus to a contextual object incurs the fairness issue <cit.>;
While model accuracy is accepted as a “golden standard” in modern ML research for evaluation, there is growing concern that putting insufficient emphasis on the quality of model explanation can lead us to have a technical debt <cit.>.
This aspect of a CNN's blind decision made by referring to contextual objects has become crucial in the Fairness, Accountability, and Transparency (FAccT) community and beyond <cit.>.
In handling contextual bias, several studies outside of HCI commonly apply mathematical approaches rather than incorporating human input <cit.>.
For example, Singh et al. used Class Activation Maps as a “weak” automatic attention annotation <cit.>.
Feature augmentation <cit.> is another technique proposed for de-biasing using disentangled representation.
Hirota et al. provided a way to analyze skewed data distributions to attain unbiased human-like reasoning <cit.>.
While each method has its pros and cons, there has been no ideal breakthrough.
In recent years, ML communities' approaches are gradually shifting towards involving more human inputs <cit.>.
Aligning with this direction, local explanations, such as Grad-CAM <cit.>, started to catch attention as an XAI technique that can mitigate contextual bias. It enables a user to spot the unreasonable model attention at a glance, and perhaps this aspect makes the technique the most widely used XAI technique for investigating CNNs <cit.>.
Meanwhile, in HCI and CSCW, despite the wide range of novel systems proposed for helping ML engineers <cit.>, we didn't recognize a system directly focusing on handling contextual bias.
When we scope the approaches related to Deep Neural Networks, we found the two perspectives useful in handling contextual bias through local explanation.
The first takeaway is that a bottom-up approach—the design that helps users understand the vulnerable patterns by exploring specific cases through local explanation <cit.>—can provide a more straightforward and intuitive flow than a top-down approach which aims at helping a user to understand global structure or rules to explain how DNNs make a prediction <cit.>.
Prospector <cit.> and What-if tool <cit.> belong to the bottom-up design that can help ML engineers to see the instance-level of prediction cases to gradually realize a set of patterns for making prediction <cit.>.
On the other hand, top-down approaches include XAI techniques and visual analytic components to help a user to understand the “landscape” of prediction rules, structure, and decision boundaries.
For instance, Squares <cit.> and Blocks <cit.> are some of the earliest designs that explain how DNNs predict the multi-class problem.
MLCube Explorer <cit.>, TwoRavens <cit.>, and Visus <cit.> present the model comparison feature, helping ML engineers more easily decide the model they would like to deploy.
ActiVis <cit.>, RuleMatrix <cit.>,
CNN explore <cit.>, ExplainExplorer <cit.>, DeepEyes <cit.>, RNNVis <cit.>, NeuroCartography <cit.>, and Dodrio <cit.> fall into visual analytic approaches.
The second takeaway is that by including every feature required for assessing and steering in a single, end-to-end systems can reduce the cost of switching the context between the diagnosis to the refinement <cit.>.
EnsembleMatrix <cit.>, ModelTracker <cit.>, Tenserflow Graph Visualizer <cit.>, and explAIner <cit.> present end-to-end environments that combine diagnosis and model refinement.
This review concludes that local explanations can help a user to easily diagnose the model vulnerability for easing contextual bias in a bottom-up fashion. Meanwhile, including both diagnosis and steering in a single system can further help ML engineers. In realizing this design goal, the first technical challenge is understanding how to steer a CNN upon finding the unreasonable model attention.
In recent years, new techniques have enabled steering the AI's behavior using human input through local explanation.
For example, Attention Branch Network <cit.> is a pioneering method that allows humans to directly adjust the boundary of model attention.
More advanced techniques, such as GRADIA <cit.>, RES <cit.>, and GNES <cit.> have been proposed.
While they can be potentially effective, they have never surfaced or been used by ML engineers through interactive systems.
The second challenge is the lack of studies aimed at understanding how ML engineers practice and perceive local explanations in their CNN building workflow.
There has been a series of empirical studies aimed at learning the workflow of ML engineers and data scientists. The directions include understanding how they use XAI tools <cit.>, how ML beginners learn XAI tools to work on their model building <cit.>, how ML experts view the automated AI <cit.>, how ML experts collaborate in using XAI tools, and beyond <cit.>.
Despite the popularity of local explanations, we didn't identify the work specifically focusing on understanding ML engineers' current practices and challenges.
So, we believe that an interactive system is essential to bridge the gap between computational techniques and human-centered design to diagnose and resolve contextual bias.
Since diagnosing and steering a CNN is a deep cognitive process that requires dense and repetitive interaction with a system, conducting a formative study in advance would higher the chance of yielding a practically useful design <cit.>.
§ STUDY 1: FORMATIVE STUDY
Through the reviews, we defined our specific goal of designing an interactive system that can mitigate contextual bias embedded in CNNs.
In doing so, we learned that local explanation provided through bottom-up fashion could help a user to efficiently and effectively examine CNN's vulnerable patterns and steers it.
To situate our design considerations based on real practice, we conduct a formative study with industry practitioners.
§.§ Method
We conducted open-ended, semi-structured interviews with professional CNN developers.
In recruiting them, we first provided a flyer to a company bulletin and communicated with industry acquaintances who use local explanations.
As a result, we recruited five experts with an average of over 5 years of experience building state-of-the-art CNN solutions in their field (see Table <ref>).
In shaping the detail of the interview, we strictly followed the interview methodology in HCI <cit.>.
First, in scoping our directions of inquiry, we motivated participants to focus on sharing their lived experiences, specifically about their practice and perception of local explanation but not discouraging them from connecting their story about local explanation with other experiences.
Consequently, in designing our questions (shown in Appendix A), we started from their general background and workflow in the early phase as follows.
In particular, we asked about their (1) roles and areas of expertise, the (2) CNNs they build, and (3) their development settings and tool belts.
Then we moved to their local-explanation-related questions aiming to learn their (4) workflows, (5) reasons-of-use, (6) challenges in using local explanation, and (7) their wish lists.
Second, to construct an appropriate dialogue with our participants, two authors—who completed HCI-centered training in their PhDs and currently working on a specialized domain of Human-AI Interaction and Deep Learning in academia and industry, respectively—participated in every interview.
One author proceeded with the interview with questions, while the second author asked follow-up questions to gain more specific insights.
In our interview, we collected 4 hours and 31 minutes of video. On average, each interview lasted 54 minutes, ranging from 37 minutes to 67 minutes in total.
In our analysis, we used a qualitative coding process <cit.> which entails two authors' coding, diagramming, and consensus-based theme generation.
First, the two authors each created, using the interview records, initial sets of codes, and memos <cit.>.
Second, they shared the codes and analyzed the emerging commonalities and discrepancies related to their perceived challenges and desires. For the matters of discrepancy, the two authors discussed the reasons for the disagreement and decided whether each matter could be agreed upon or annexed into existing commonalities.
Finally, after thinking about others' code choices, they reviewed all our coded text, quotes, and memos to tweak and derive the final structure.
§.§ Results
From every participant, we heard strong reasons why they apply local explanations in their practice.
The overarching reason they apply explanation in their workflow is predominantly related to retaining the “generalizability” of their model.
The generalizability explains the degree to which the model would “shake” when it sees unexpected, different cases they didn't see in the past.
P5 mentioned: “we strongly believe that that's the way to go, those sorts of visualizations are clearly the path towards understanding how to improve the model. I think it's a required envision. If the mistake is turned out to be unreasonable, I'm going to explore my data and see why it's not robust enough.”
P4 shared his interesting observation that accurate prediction and reasonable attention might be somewhat correlated.
He believed it was more crucial for a model to focus on the right gaze to make it robust for unexpected cases than optimizing performance on the test set, as we could not prepare the perfect dataset that represents every case equally.
All participants shared their experiences about the cases of spotting unreasonable attention in checking the vulnerability to remove the model's weakness.
P3 mentioned that he uses local explanation in the model comparison task mainly because it can be a good indicator of how robust the model can be:
“I see model behaves very differently task-by-task. ResNet works very well in one task, and VGG works well in a different task. I have no idea why. And the local explanation tells me why.”
While attaining a CNN's generalizability has been discussed in previous literature, our findings extend the existing in two directions.
First, we identified the three practical challenges they are encountering when applying local explanation in their workflow every day.
Second, we also identified the three future desires that the current local explanation-driven techniques cannot realize but could be achieved with future solutions.
§.§.§ Challenges
C1. Iterative and Exhaustive Diagnosis:
In diagnosing their model through local explanation, participants expressed the process as “nothing is given”.
In detecting vulnerable patterns using local explanation, participants seemed to have proactive and iterative shaping of their assumption and collecting the cases.
Generally, participants went for several rounds of iterative target image selection and local explanation generation.
This generation was made based on their dense inductive and deductive reasoning.
The aspect of iterative case-based reasoning seemed to entail nontrivial labor, which exhausts ML engineers.
P1 mentioned: “I wish I could check the (saliency) maps for every case. But coding to layout multiple maps takes some effort and does not become feasible as the dataset gets bigger. In the end, I normally have to compromise, just checking instances in an inaccurate category if I'm lucky, or even fewer.”
P3 developed a multi-classifier that has 4,000 to 5,000 classes. He mentioned that the required mental effort for detecting vulnerable attention grows exponentially as the number of classes increases. In the end, he can only consider a few “major” classes.
Many of our participants remarked that their model vulnerability analysis using local explanation is mostly a group effort, and sharing insights with colleagues also adds up even more time.
For P2's case, his group made a web-based tool where the team member can upload image groups and show the local explanation results for discussion due to the complexity of coding and positioning on a screen.
C2. Ad-Hoc Diagnosis Leads to Uncertainty:
The next challenge that our participants mentioned was the uncertainty they had to cope with in determining the vulnerable patterns.
They seemed to suffer from two types of vulnerability.
Since finding the vulnerable patterns stems from their intuition, our participants mentioned that there is no guarantee that their selection covers every major and minor vulnerability type.
In addition, upon spotting the local explanations that gaze at unreasonable objects, they had to decide whether the cases were merely noise or a signal that leads to a vulnerable pattern.
Often, our participants' vulnerability determination process was done on their “gut feeling”, which made them perceive the process as heuristic and ad-hoc.
P2 mentioned: “I feel like showing the pros and cons of model's attention using local explanation is cherry picking, in many cases. Even if someone says the quality of model attention is good or bad with some examples, there is no ground one can say the cases represent a real pattern or merely subtle noise that won't likely happen in the future.”
P3 also shared similar difficulties: increasing the number of classes could result in more bad-attention cases, and even when these problematic cases were identified, similar ones might likely reoccur in the future.
P4 said that the hardship in verifying the severity of the vulnerability is closely related to the fact that there is no measure that we can rely on to see the “impact of the detected cases” from the perspective of the whole dataset.
There was a minor opinion that their feeling of uncertainty in the process was connected to the doubt about the diagnosis results.
For instance, P1 mentioned that he doesn't believe he can completely remove the bias no matter how much effort he may put in or what tools he may use.
C3. Hard to Steer as Intended: Every participant agreed that changing the model's future behavior from learned insights is challenging or often not feasible.
P5 mentioned that the insights were not actually insightful as they are often unactionable:
“Surprisingly, it wasn't really insightful when we looked at the mistakes our model made, and the saliency map was totally unreasonable. It was like it doesn't know what to do here, something is missing, architectural leap or something I don't know, we didn't quite solve a lot of the failure cases.”
He also shared his "dream tool" idea for instant attention adjustment: a drawing application with which he could manually guide CNNs to focus on previously missed features of images and retrain them through backpropagation.
P1 mentioned his current struggle to fix a model by fortifying the training set, such as adding more data to counterbalance the failure class. He still looked for alternative methods as the performance was not promising.
§.§.§ Desires
D1. The Way to Interact: Beyond Command Line:
Some mentioned that local explanation could not fully realize their potential with command line interfaces as the way to create them requires some work.
This aspect is connected to C1; participants feel making multiple queries for selecting images and examining model attention can become arduous.
From the interaction design's perspective, shifting the command line-based interface to a directly manipulatable GUI can streamline the process.
P1 remarked: “I feel like a complex task like this (vulnerability diagnosis), we would mostly benefit from GUI rather than a tool with a command line. It takes too long to create saliency maps. Showing the maps with different selection criteria and sorting can be super helpful.”
By lowering the cost of creating local explanations, participants could more effectively examine a bigger volume of model attention than the current design.
Some also mentioned the necessity of reorganizing results after each search, which was not easy with the current tools. P4 always looked for failure cases manually but struggled when there were too many cases. He suggested some summarization or pre-filtering features that prioritize interesting cases.
This finding indicates it is worth considering designing an interactive analytic system that enables a user to easily formulate the query and see the results.
D2. Evaluating Model: Model Accuracy and Beyond:
We had multiple chances to hear participants' voices regarding what they care about when it comes to evaluating their models.
In particular, we found that our participants shared the consensus regarding the model accuracy as a gold standard metric that should not be sacrificed even though the purpose of revision is not for boosting model accuracy (e.g., mitigating contextual bias).
For instance,
P4 was very curious to see whether improving model attention could improve model accuracy, and if the model were not improved, he would care less about attention quality improvement.
P5 also mentioned the tension between fairness and accuracy in model development: “I had much of a concern for fairness in my practice, it was more the kind of thing where prioritizing fairness connects to increasing failure case. This would result in my client making less money. If it was a courtroom, there's a much stronger debate here. But it's very serious in industrial cases that fairness is important, but the accuracy is still the king.”
At the same time, they shared their concern that the way the current tools provide the model accuracy is not enough to understand how accurate and how reasonable their models are.
P2 found it very difficult to check the saliency maps for accurate cases, and he felt uncomfortable making decisions that overlook accurate cases, since doing so could penalize model generalizability. He was less focused on the test set performance than on generalizability in the long run.
This internal tension helped us realize the delicate view of the way ML experts see model accuracy. It's still the “King” that should not be compromised, but they may still need more than that to make their model generalizable and trustworthy enough.
D3. A Balance between “Pain” and “Gain”:
One aspect we learned from our participants is that ML engineers are generally more conservative about testing a new feature using a human-in-the-loop-driven approach than we thought due to its high cost.
Regarding the idea of using human input for steering CNNs, some participants mentioned that the direction has potential but would only work if the workload is manageable.
For instance, P3 mentioned that he might not likely use the new tool if the expected effort is more than what they are currently investing in for the model diagnosis.
Not surprisingly, many participants mentioned the difficulties in eliciting data from in-house annotators or workers in crowdsourcing platforms.
P5 said: “The workflow of human-in-the-loop to adjust attention using human help, no one would say it's a bad idea that you could include humans and get more data and improve it. This is an obvious virtuous aspect, but it's not like you just sign up for data bricks, and you're done. Getting human labels would probably need a little bit of training. You don't want that to be an expense to ML engineers.”
This aspect helped us realize that, for a practical tool to be readily adopted, it must automate the vast volume of work via intelligent automation and minimize the need for human outsourcing.
§.§ Design Considerations
While we found that the local explanation serves as an indispensable tool for diagnosing the vulnerability of participants' data and model, they suffered in each stage of C1: detecting cases that signal vulnerable patterns, C2: verifying them to be “real”, and C3: steering.
Meanwhile, we also found they desire to D1: have an interactive and directly manipulable design that can cut down the effort of writing numerous queries and parameters, D2: use a product that can improve model accuracy while also making the model attention more reasonable, and D3: achieve the new feature with a reasonable amount of additional labor.
As D1 suggests, we were able to find the reason why the interactive interface can be well appreciated by ML engineers, especially when completing their task requires deep thinking and iterative interactions with their tool.
In designing the system, we further synthesized our findings and established the design considerations shown below. Table <ref> also shows how the participants (“PID”) support the identified challenges (“C”), desires (“D”), and design considerations (“DC”).
* DC1. Semantic local explanation browser:
Seeing the results of local explanations for finding the cases that signal vulnerable patterns is the first stage to mitigating contextual bias.
In this stage, providing a semantic browser, with which users can see, rank, and select the dominant semantic object types observed within the model's area of attention for every image, could reduce ML engineers' uncertainty and save them time.
In building a dog detector, this feature may enable a user query such as “find every image attentive on treat” or “rank every object type by its occurrence in a dataset.”
Descriptive statistics, such as how frequently the object types appear, can help users understand the degree to which the object grabs the model's attention.
DC1 addresses C1 and C2 and supports D2 (based on all 5 participants).
* DC2. Labor-efficient selection of “unreasonables” and adjustment of their attention boundaries:
Using the browser, users can diagnose a CNN by finding the cases that show unreasonable attention (“unreasonables”, hereinafter).
Then the users would annotate the areas that would make the attention reasonable.
The system would need to support this annotation at a lightweight interaction cost.
DC2 is related to D3 (based on 2 participants: P3 and P5).
* DC3. The fine-tuning mechanism that can boost both model accuracy and model attention quality:
One of the most evident consensuses among the participants was their difficulties in steering CNNs.
Therefore, the tool must help users clearly understand how the quality of the CNN's model attention, visualized through local explanations, has changed based on the input they provided.
While doing so, the tool must not compromise the model's accuracy performance.
DC3 is derived from C3 (based on 2 participants: P1 and P5).
* DC4. Evaluation results that show what has been changed:
The last stage of the workflow would be to help users understand how their attempts made a difference.
In showing the differences, providing a set of views that show the changes in model prediction accuracy and in model attention quality, together with a combined view explaining how the change in attention relates to accuracy, would facilitate users' understanding of the impact.
DC4 is derived from C3 and D2 (based on 4 participants: P1, P2, P4, and P5).
§ DEEPFUSE
Based on the DCs in S1, we designed DeepFuse.
DeepFuse is the first interactive system designed and built to support a CNN engineer's contextual bias-related tasks based on their practical needs.
The early part of DeepFuse's workflow is defined based on what we learned from ML engineers:
First, a user prepares the base CNN model and datasets to be used for diagnosis (the “loading model’’ and “loading dataset’’ tabs).
Second, a user collects the cases where the model's gaze is on unreasonable objects by browsing local explanation results (i.e., the “accessing attention quality” tab in DeepFuse).
The rest of the stages follow the recent literature that proposes model steering through local explanation <cit.>.
Third, for the collected “unreasonables”, a user corrects the attention boundary to shift the CNN's future gaze away from contextual objects and starts to fine-tune the base CNN model with the annotations (the “adjusting attention” tab in DeepFuse).
Finally, a user sees how the approaches made the CNN different (the “evaluation” tab in DeepFuse).
§.§ Interacting with DeepFuse
Consider a scenario for Sarah, an ML engineer who has trained a dog classifier built based on a CNN architecture.
She found the model accuracy performance was not enough for deployment and found a few cases that she could not understand why it failed.
She decided to examine her model using local explanations.
First, she created local explanations for a few accurate and inaccurate cases for multiple rounds to reason what could be wrong.
After her search, she found out the model's focus sometimes moves to some specific contextual objects, such as balls and treats.
To study if the cases would repeat, she decided to invest her time in generating local explanations for all the images and checking them serially. She put some effort into coding for loading and saving files (models, images, and statistics).
For the dubious cases, she decided to collect similar datasets for further testing (C1). Along the path, she started to wonder if the contextual object types she identified were comprehensive. She decided to examine other object types (C2).
Upon confirming every case and object type that signals the vulnerability of her model, she will need to find a way to steer the model's behavior (C3).
Using DeepFuse, her workflow can make better progress with less effort.
First, she uploads the base CNN and the image data she will use for diagnosis.
Leveraging the automatic local explanation object aggregation feature, DeepFuse will provide a list of object types that her CNN is gazing at, such as dogs, cats, balls, treats, and other object types, with examples.
She asks to see every case that is attentive to objects other than “dogs”.
Based on her specification, local explanation results are grouped based on object type categories (DC1).
She can quickly skim through each category (e.g., dogs, balls, treats, and cats) and confirm dubious local explanations as “unreasonables” in a few clicks.
DeepFuse will suggest an automatically drawn “reasonable” boundary for the unreasonables and ask Sarah to confirm or manually refine it (DC2).
Upon her confirmation, DeepFuse will fine-tune the base model such that it won't make the same mistakes (DC3).
After the fine-tuning, Sarah can check how the models' performance regarding model accuracy and model attention quality has changed (DC4).
§.§ Workflow and System Components
DeepFuse supports stage-based workflows to inspect the model. The global navigation bar (see Fig. <ref>) on top of the screen provides access to each stage.
§.§.§ Loading Model and Data
DeepFuse allows users to upload their base CNN models and datasets.
In designing the feature for model upload, we considered compatibility with one of the most widely used Python libraries for building CNNs, PyTorch <cit.>.
Next, the “loading dataset” tab helps a user to upload the image datasets for diagnosis (a validation set, hereinafter) and a final evaluation after the fine-tuning (a test set, hereinafter).
In particular, the validation set is used for diagnosing contextual bias in the next stage. Using the test set in the last stage, a user can evaluate the final model by comparing before and after treatment and more.
§.§.§ Attention Quality Assessment
This stage has two goals.
First, helping a user understand which semantic object types are causing contextual bias by which degree (DC1).
Second, helping a user categorize every image into reasonable or unreasonable (i.e., the images that do not focus or focus on contextual bias in their local explanation) (DC2), which will be used in the next stage.
For both goals, the core mission is to significantly cut down a user's labor compared to their current practice.
In achieving the first goal, DeepFuse provides a list of semantic object types that can be observed in the model's focused area, ordered by how frequently they appear.
In detecting the semantic object types, DeepFuse adopts a pre-trained object detection model <cit.> that is capable of detecting the 80 object types defined in the Microsoft COCO dataset <cit.> (e.g., “person”, “bicycle”, “dog”, etc.).
A user will decide if the semantic object types are relevant or contextual to a CNN's goal.
In a gender classification problem, for example, the relevant object type can be a human face, while other object types, such as neckties or bicycles, can be contextual.
Second, based on the relevant object types specified by a user, DeepFuse intelligently suggests whether the local explanations of the images in a validation set are reasonable or unreasonable (see Fig. <ref>; green borders suggest the local explanations are reasonable, while yellow borders suggest they are unreasonable).
The suggestions can reduce a user's time for assessing the quality of local explanations.
In positioning the results of the suggestions, DeepFuse separates them into two sides: inaccurate images on the left and accurate ones on the right.
This layout helps determine which semantic object contributes to accurate/inaccurate records by how much.
When a user encounters a suggestion that is not right, (s)he can flip the suggestion by clicking the image, the semantic object group, or every of the accurate or inaccurate images.
Finally, DeepFuse provides three options for visualizing local explanation results: color-scale, gray-scale, or polygon mask (see Fig. <ref>-C).
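To make the object-type grouping and the reasonable/unreasonable suggestion concrete, below is a minimal sketch of the underlying logic, not the actual DeepFuse implementation: it assumes a torchvision Mask R-CNN detector, a precomputed Grad-CAM heatmap per image, and illustrative thresholds and label subset.

```python
# Sketch only: group an image's attention by the COCO objects it covers and flag
# the local explanation as reasonable if any user-specified relevant type is attended.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

COCO_NAMES = {1: "person", 18: "dog", 37: "sports ball"}  # illustrative subset

detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # older torchvision: pretrained=True

def attended_objects(image, cam, cam_threshold=0.5, score_threshold=0.7):
    """image: (3, H, W) float tensor in [0, 1]; cam: (H, W) Grad-CAM tensor in [0, 1]."""
    with torch.no_grad():
        det = detector([image])[0]
    attention = cam >= cam_threshold                        # binarize the focused area
    covered = {}
    for label, score, mask in zip(det["labels"], det["scores"], det["masks"]):
        if score < score_threshold or label.item() not in COCO_NAMES:
            continue
        obj = mask[0] >= 0.5                                # instance mask -> binary
        overlap = (attention & obj).float().sum() / obj.float().sum().clamp(min=1)
        if overlap > 0:
            name = COCO_NAMES[label.item()]
            covered[name] = max(covered.get(name, 0.0), overlap.item())
    return covered                                          # e.g. {"dog": 0.8, "sports ball": 0.3}

def suggest_reasonable(covered, relevant_types=("dog",)):
    """Suggest 'reasonable' if the attention covers at least one relevant object type."""
    return any(t in covered for t in relevant_types)
```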
§.§.§ Adjusting Attention
To support the later part of DC2 (correcting the attention boundary of images categorized as unreasonables), DeepFuse needs an efficient annotation experience, especially because boundary drawing is an expensive annotation task.
In doing so, DeepFuse shows a vis-à-vis comparison between the current model attention on the left and the suggested attention boundaries on the right-hand side (see Fig. <ref>).
The suggested boundaries are made based on the Mask R-CNN model <cit.> we applied in 4.2.1.
If the suggested boundaries are not enough, a user can redraw manually (see the drawing panel in Fig. <ref>).
In checking the boundary suggestions, a user can separately examine the images from (1) unreasonables that are accurate (i.e., the images that were accurately predicted based on the wrong reasons, or by “luck”) and (2) unreasonables that are inaccurate (i.e., the image group that made an inaccurate prediction potentially because of seeing wrong contextual objects <cit.>).
Upon finishing the corrections for the unreasonables, DeepFuse becomes ready for fine-tuning using the adjusted inputs.
§.§.§ Fine-Tuning
This stage is the key to maintaining an overall effective pipeline.
Based on DC3, we implemented a fine-tuning mechanism that can consider attention adjustment as new guidance for revising a better model and making the process of using boundary adjustment input straightforward.
The existing approach to optimizing a CNN’s model performance in the fine-tuning process is to minimize only the prediction loss—an error measure between model predictions and actual values.
To boost both the model performance and the interpretability of the black-box CNN model, we adopted Explanation-guided Learning framework <cit.> where the model accuracy performance and local explanation quality are jointly optimized with the prediction loss and attention loss.
Our intention for adding the attention loss during model training is based on the assumption that the model can learn to pay attention to the right semantic object types for the prediction tasks, thus naturally enhancing both the explainability and generalizability.
While the techniques in Explanation-guided Learning are in their early stage, some studies started to validate how applying both terms of explanation loss and prediction loss can benefit DNN performance using text data <cit.>, image data <cit.>, and graph-structured data <cit.>.
However, the techniques in Explanation-guided Learning have not been tested by human participants in their workflow.
Our aim in building is to understand how “real” human participants can interact with a system to leverage the techniques and if we can find evidence that using the techniques can practically help users in mitigating contextual bias in their CNN revision workflow.
For the implementation of the explanation objective in DeepFuse, we adopted the most recent approach, RES <cit.>, which proposes a generic robust framework for learning from a user's boundary adjustments under the assumptions that the human annotation labels can be (1) inexact in drawing the boundary, (2) incomplete in the region, and (3) inconsistent with the distribution of the model explanation (i.e., binary annotation vs. a boundary with an alpha channel).
Consequently, in the benchmark test, RES outperformed GRADIA <cit.> and HAICS <cit.> in leveraging human annotation boundaries and robust against the aforementioned annotation noises <cit.>.
In implementing DeepFuse, we utilized two methods from the RES GitHub codebase[Available at: https://github.com/YuyangGao/RES]: “Baseline”, the conventional state-of-the-art fine-tuning mechanism that applies a prediction loss but no explanation loss.
This will be used as a baseline to help a user to understand how using can make a difference in model accuracy and model explanation quality.
Next, we implemented “RES-G” as the experimental attention steering mechanism that jointly optimizes the prediction loss and explanation loss.
After finishing their boundary adjustments in DeepFuse, a user clicks fine-tune to activate the fine-tuning process.
Typically, our fine-tuning mechanism takes at least a few hours, and it is not possible to realize a real-time system yet.
In the system's back end, we built a schedule queue that receives the boundary input one by one. The inputs will be fine-tuned in order by a system administrator.
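As a rough illustration of the joint objective behind this stage, the sketch below combines a standard prediction loss with an attention-alignment term. It is a simplified stand-in rather than the RES formulation itself: cam_fn and lambda_att are hypothetical names for a differentiable attention-map function and a weighting hyperparameter.

```python
# Sketch only: one fine-tuning step that jointly optimizes prediction and attention losses.
import torch
import torch.nn.functional as F

def joint_finetune_step(model, cam_fn, optimizer, images, labels, human_masks, lambda_att=1.0):
    """images: (B, 3, H, W); labels: (B,); human_masks: (B, H, W) adjusted boundaries in [0, 1].

    cam_fn(model, images, labels) is assumed to return (B, H, W) attention maps that
    keep the computation graph (e.g., gradients taken with create_graph=True).
    """
    optimizer.zero_grad()
    logits = model(images)
    pred_loss = F.cross_entropy(logits, labels)            # standard prediction objective

    att_maps = cam_fn(model, images, labels)               # model's current attention
    att_loss = F.mse_loss(att_maps, human_masks)           # align with user annotations

    loss = pred_loss + lambda_att * att_loss
    loss.backward()
    optimizer.step()
    return pred_loss.item(), att_loss.item()
```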
§.§.§ Evaluation Dashboard
Model evaluation is the last stage, where a user can check how the input has changed a model's varying performances.
Based on DC4, we designed this stage to help a user understand not only how model accuracy has been changed but also how the quality of local explanation has been shifted.
Most importantly, this stage attempts to facilitate a user's understanding of how accurate or inaccurate records relate to reasonable or unreasonable local explanations.
In doing so, we adopted the Reasonability Matrix <cit.>, an evaluative matrix that explains the model's performance using the four groups as follows:
* Reasonable Accurate: The group that has accurately predicted records with reasonable attention. The bigger the group is, the more generalizable the model is.
* Unreasonable Accurate: The group that has accurate records but is based on unreasonable attention. Records in this group can be considered “lucky guess”. Reducing this group can increase model generalizability.
* Reasonable Inaccurate: The group has inaccurate records, but the attention is on the right area.
* Unreasonable inaccurate: The group has inaccurate records while their attention is also on unreasonable objects. This group can be considered an opportunity group, as shifting the gaze to reasonable objects can flip the prediction from inaccurate to accurate.
To generate a Reasonability Matrix, it is required to assess if the local explanation results are reasonable or unreasonable.
DeepFuse provides an automatic annotation feature to avoid relying on human annotation (as D3 suggests).
In particular, a user can select from 3 options.
Strict: assess local explanation as reasonable if the attention of a record includes only relevant objects and does not contain irrelevant objects;
Moderate: assess reasonable if the majority portion of an image contains relevant objects while the minor portion includes irrelevant objects; Relaxed: assess reasonable if the attentive area has any overlap with relevant objects.
After a user selects the Reasonability Matrix creation option, (s)he can start the evaluation.
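A minimal sketch of how these three options could be operationalized over binary masks is shown below; the 50% majority cutoff for the moderate option is an illustrative assumption rather than DeepFuse's exact rule.

```python
# Sketch only: strict / moderate / relaxed reasonability assessment over binary masks.
import numpy as np

def assess_reasonable(attention_mask, relevant_mask, mode="moderate"):
    """Return True if the local explanation counts as reasonable under the chosen option."""
    attention_mask = attention_mask.astype(bool)
    relevant_mask = relevant_mask.astype(bool)
    attended = attention_mask.sum()
    if attended == 0:
        return False
    on_relevant = np.logical_and(attention_mask, relevant_mask).sum()
    if mode == "strict":      # attention must lie entirely on relevant objects
        return on_relevant == attended
    if mode == "moderate":    # the majority of the attended area is on relevant objects
        return on_relevant / attended > 0.5
    if mode == "relaxed":     # any overlap with a relevant object suffices
        return on_relevant > 0
    raise ValueError(f"unknown mode: {mode}")
```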
To help a user understand what has been changed, DeepFuse prepares the three conditions as follows:
* M: the initial model before fine-tuning.
* M_base: the state-of-the-art fine-tuned model using M without applying attention input.
* M_exp: the fine-tuned model using M that uses attention input.
Using the three conditions, DeepFuse provides two pairwise comparisons: (1) before vs. after, comparing M and M_exp, and (2) state of the art vs. our approach, comparing M_base and M_exp.
In each pairwise model evaluation, there are four types of analytic views with which users can conduct in-depth evaluations.
(1) Overall interpretation: for helping a user to directly understand how model accuracy and attention quality have been changed, the view presents a Reasonability Matrix showing percentage changes in 4 sub-groups (see the top-left sub-figure of Fig. <ref>).
The view also shows numeric comparisons to track the overall model accuracy and attention quality changes (see the bottom-left sub-figure of Fig. <ref>).
Finally, a user can see the generated performance report and an attention explorer module to derive insights about the effectiveness of the model conditions (e.g., whether the “unreasonable inaccurate” cases have been reduced by attention steering regarding the test image data).
(2) Accuracy-related analysis: this view provides accurate/inaccurate record bar plots grouped by common objects, helping users understand which semantic object types contribute to accurate or inaccurate records.
(3) Local explanation quality analysis: In this view, we present IoU distribution charts.
IoU (Intersection over Union) helps us to understand the overlap between the model's focused gaze and relevant objects. IoU of 0% means the gaze is entirely located on contextual objects, whereas 100% means the gaze is only on relevant objects.
The higher the IoU score, the better an attention area aligns with the ground truth.
In this view, we further help users browse cases based on IoU values (e.g., show images where IoU is between 40% and 60%).
(4) Record-wise attention comparison: the right screen in Fig. <ref> contains a comprehensive comparison of models’ local explanations, side-by-side for all conditions. This design helps a user quickly recognize attention quality changes among different conditions.
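For reference, the IoU measure reported in the attention-quality view can be computed as in the following sketch, assuming binary masks for the model's attended area and the ground-truth relevant object.

```python
# Sketch only: IoU between the attended area and the relevant-object area.
import numpy as np

def attention_iou(attention_mask, relevant_mask):
    """Both inputs are (H, W) binary masks; returns a value in [0, 1]."""
    attention_mask = attention_mask.astype(bool)
    relevant_mask = relevant_mask.astype(bool)
    intersection = np.logical_and(attention_mask, relevant_mask).sum()
    union = np.logical_or(attention_mask, relevant_mask).sum()
    return float(intersection) / union if union > 0 else 0.0
```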
§.§ Implementation
DeepFuse is a browser-based user interface with a lightweight back end built with Python Flask, fully compatible with widely used ML and visualization libraries in Python (e.g., PyTorch, Grad-CAM, OpenCV, Matplotlib, etc.). The front end was developed using HTML, CSS, JavaScript, and D3.js for creating dynamic and interactive elements (such as the attention-drawing feature) to communicate between users and models. More detailed technical settings and a live demo of DeepFuse can be found in our GitHub repository[Available at: https://github.com/TongStevenSun/DeepFuse].
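The kind of lightweight back end described above can be approximated with a small Flask sketch like the one below; the route name and payload fields are illustrative assumptions, not DeepFuse's actual API.

```python
# Sketch only: a minimal Flask back end that queues boundary adjustments for fine-tuning.
from flask import Flask, request, jsonify

app = Flask(__name__)
finetune_queue = []   # boundary adjustments waiting for the scheduled fine-tuning jobs

@app.route("/adjustments", methods=["POST"])
def submit_adjustment():
    payload = request.get_json()   # e.g. {"image_id": ..., "boundary": [[x, y], ...]}
    finetune_queue.append(payload)
    return jsonify({"queued": len(finetune_queue)})

if __name__ == "__main__":
    app.run(debug=True)
```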
§ STUDY 2: SUMMATIVE STUDY
The core tasks integrated into DeepFuse, (1) diagnosing a CNN's vulnerable patterns through local explanation and (2) making the found patterns actionable through direct model attention adjustment, have not been introduced in previous work.
Further, our “system” has multiple sub-pieces connected together into a “single working whole” <cit.> to streamline the target task.
Due to these characteristics, we avoid applying a comparative or experimental study with a clear baseline, just like much previous HCI work <cit.>.
Instead, we choose to derive our directions of inquiry by defining research questions (RQs), then triangulate the way we collect data in multiple ways to answer the questions.
Our goal in S2 is to create reusable pieces of knowledge about which pieces integrated into our system can be useful and to understand how the system, as a whole, can be effective in supporting ML engineers who mitigate contextual bias.
To achieve our goal, we first aimed at understanding the effect of workflow—how our new workflow of model steering using local explanations introduced through an interactive environment can make a difference for ML engineers.
The research questions (RQs) in this category are:
RQ1a. How has a user’s viewpoint about using attention as a method for model revision changed after experiencing our workflow? and RQ1b. How has a user’s viewpoint about using attention as a method for evaluating their model performance changed after experiencing our workflow?
Next, we were curious to learn the effect of using DeepFuse itself as a system: how can using DeepFuse change the outcomes of mitigating contextual bias? In particular, the RQs regarding this direction are: RQ2a. How did using DeepFuse in the input phase make participants' model diagnosis process different? RQ2b. How did using DeepFuse impact the outcome of contextual bias in terms of model accuracy and attention quality?
§.§ Method
We recruited 12 participants by snowball sampling through our network in industry and academia or advertising on social media.
In defining the S2 sample size, we followed the most common sample size of the past CHI publications consulted from Caine's work <cit.>.
The participants were selected by a screening survey where we asked about their demographics and degree of expertise in building vision-based models using CNNs, the task goals of their vision models if experienced, professional position, experience in using local explanation, and whether they had heard of and understood the importance of detecting the “wrong” attention to handle contextual bias.
We are aware of the potential Hawthorne and novelty effects of having overestimated results when participants are being studied and new to our system <cit.>. To reduce the effects, we particularly hired experienced CNN developers who have established their own approaches in CNN fine-tuning. Later in the study, we asked them to compare the effectiveness between our approach and their current approaches and give reasoning.
We recruited 12 qualified participants (2 females and 10 males, aged between 20 and 43) out of 43 who submitted the screening survey. Six participants were academic researchers, and the other six were practitioners. Eight participants identified themselves as experienced, three as intermediate, and one as beginner developers in vision-based modeling. Although the experience distribution was imbalanced due to our consideration of having all genders' perspectives, there should not be any potential effect of this distribution on the study, since all participants were qualified, with a good understanding of handling contextual bias and of a model's wrong reasoning based on its saliency maps. Eight participants out of 12 had experienced using local explanation to improve model performance in the past (see Table <ref>).
<ref> summarizes the S2 workflow. Participants joined two online sessions, the input and output sessions, on two consecutive days. Participants joined the sessions virtually on Zoom and shared their screens with us.
In the input session, we onboarded participants by explaining the purposes of the and presenting how model evaluation could be done differently using local explanations of a standard classifier. Then participants went through a tutorial where they practiced using the interface with a toy dataset. The onboarding and tutorial took 30 minutes.
After the tutorial, participants performed the early phase of tasks using features introduced in 4.2.1, 4.2.2, and 4.2.3.
After an input session, we fine-tuned the initial model (M) into 2 conditions of models: a state-of-the-art model without users' inputs (M_base) and a model using our users' attention inputs in the validation set (M_exp).
The output session was scheduled one day after the input session since we could not make our participants wait until fine-tuning was done.
On the following day, participants joined the output session, where they used DeepFuse's reviewing features, introduced in 4.2.5, to assess the model performance.
After the review, we conducted semi-structured interviews with the participants.
After finishing two sessions, we provided them with 60 USD as a token of appreciation.
While the input session took 90 minutes and the output session lasted two hours, as shown in Table <ref>, participants used DeepFuse for about 25 minutes on average in the input session (Min=12, Max=47, SD=10.43) and about 20 minutes in the output session (Min=5, Max=33, SD=8.88). The average time spent on the system in both sessions was about 45 minutes (Min=17, Max=68, SD=16.83).
§.§.§ Task, Data, and Model
While DeepFuse can work with any classification task, we chose a binary gender classification problem for the study.
We are aware of the limitation of framing the gender recognition task as a binary classification, which cannot fully represent the viewpoint of gender diversity.
We are aware of the negative aspects of choosing a binary gender classification as the main task in S2. For instance, automatic gender recognition primarily classifies gender through physical characteristics, which can disadvantage gender minorities <cit.>.
Also, while we believe that binary cannot represent the diversity in gender, we chose the task because it is one of the most widely adopted tasks in the problem of contextual bias <cit.>.
We note that our choice of the binary classification task is to demonstrate the system's capability of solving contextual bias in a relatively simplistic setting with the help of well-annotated datasets used for training CNN classifiers.
We also note that we explained the possible concerns that can stem from the binary gender classification to our participants at the beginning of the study.
The dataset used in the study was selected from the Microsoft COCO dataset <cit.>, one of the most widely used datasets in ML and computer vision communities. The dataset was chosen because of its well-structured label formats and abundant 80 object classes co-appearing with humans, and it has been used for contextual bias studies <cit.>.
The image selection process has three steps.
First, the images were filtered by the segmentation labels of the “person” class for single-person images only.
Second, the images were re-filtered by the gender-related keyword in the captioning labels (i.e., “male”, “man’’, “men’’, “female”, “woman’’, “women’’).
Lastly, the filtered images were examined manually to have the best quality images for the gender classification task, excluding images with very small human figures that were unidentifiable for classification.
In total, we extracted 2,000 images and split them into 1,000 in the training set, 500 in the validation set, and 500 in the test set.
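A minimal sketch of the first two selection steps, assuming pycocotools and the standard COCO annotation files, is shown below; the file paths and keyword matching are illustrative, and the final manual screening step is omitted.

```python
# Sketch only: select single-person COCO images whose captions contain a gender keyword.
from pycocotools.coco import COCO

instances = COCO("annotations/instances_train2017.json")   # hypothetical paths
captions = COCO("annotations/captions_train2017.json")

person_id = instances.getCatIds(catNms=["person"])[0]
keywords = {"male", "man", "men", "female", "woman", "women"}

selected = []
for img_id in instances.getImgIds(catIds=[person_id]):
    # Step 1: keep single-person images only.
    person_anns = instances.loadAnns(instances.getAnnIds(imgIds=img_id, catIds=[person_id]))
    if len(person_anns) != 1:
        continue
    # Step 2: keep images whose captions mention a gender-related keyword.
    caps = captions.loadAnns(captions.getAnnIds(imgIds=img_id))
    words = " ".join(c["caption"].lower() for c in caps).split()
    if keywords.intersection(words):
        selected.append(img_id)
```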
Since we wanted to test DeepFuse's capabilities of detecting and reducing contextual bias, we needed a model that had a reasonable performance but was vulnerable to contextual bias.
We first manually added contextual objects (i.e., green star markers) on the top-left corners of the images.
The distribution of the star-added images is shown in Fig. <ref>, bottom.
For the training set, 1/3 of the “male” images (N = 167) were added with stars.
For both the validation and test sets, the star markers were added only on the “female” images (N = 250).
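The synthetic marker described above can be injected with a small routine like the following sketch, assuming Pillow; the star size, corner offset, and exact shade of green are illustrative assumptions.

```python
# Sketch only: paste a small green star near the top-left corner of an image.
import math
from PIL import Image, ImageDraw

def add_star_marker(path_in, path_out, size=24, margin=4, color=(0, 200, 0)):
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    cx, cy, r = margin + size / 2, margin + size / 2, size / 2
    # Five-pointed star approximated by alternating outer and inner vertices.
    points = []
    for i in range(10):
        radius = r if i % 2 == 0 else r * 0.4
        angle = math.pi / 2 + i * math.pi / 5
        points.append((cx + radius * math.cos(angle), cy - radius * math.sin(angle)))
    draw.polygon(points, fill=color)
    img.save(path_out)
```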
Then, we trained a standard ResNet-18 classifier (denoted as “M’’) using the biased image data.
In deciding on ResNet architecture in S2, we tested several models built based on ResNet-18 and 50.
We found no significant model accuracy improvement by adding more layers to the ResNet-18 architecture.
Therefore, we chose a less complex model architecture to keep DeepFuse lightweight.
Since the majority of images in the training set were original images, the model can achieve a reasonable prediction accuracy of 74% on regular images without the star markers.
We should note that, during training, the model only saw star markers on “male” images.
When we tested the model on the validation set that only has star markers in the female class, the accuracy dropped to 43.8%, and 77.6% of “female” images were mispredicted.
This showed that the model only used commonly appeared star markers on “male” images as a feature to make predictions for images with the same contextual objects, meaning the model (M) was vulnerable to contextual bias.
In generating local explanations, DeepFuse applies Grad-CAM <cit.> to the last convolutional layer.
Due to a CNN's hierarchical structure, and as comparisons of attention maps between layers show <cit.>, earlier layers' attention maps are scattered around objects' edges and corners, whereas the focus of the local explanation takes the shape of semantic objects closer to the later layers (see Fig. 5 in <cit.>).
Using the last layer, local explanations can create more semantic object-level meanings, which a human user can easily leverage for adjusting boundaries.
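A minimal sketch of producing these last-layer explanations with the pytorch-grad-cam package is shown below; the checkpoint path is hypothetical, and the exact constructor arguments can vary across package versions.

```python
# Sketch only: Grad-CAM on the last convolutional block of a two-class ResNet-18.
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(num_classes=2)
model.load_state_dict(torch.load("base_model.pt"))     # hypothetical checkpoint path
model.eval()

cam = GradCAM(model=model, target_layers=[model.layer4[-1]])   # last conv block

def explain(batch, class_idx):
    """Return (B, H, W) Grad-CAM heatmaps for the given class index."""
    targets = [ClassifierOutputTarget(class_idx)] * batch.shape[0]
    return cam(input_tensor=batch, targets=targets)
```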
§.§.§ Input Session
At the beginning of the input session, we discussed the idea of using local explanations for mitigating contextual bias in a binary gender classification task.
After the discussion, we demonstrated how participants could upload their models and datasets using DeepFuse. Then we explained DeepFuse's model vulnerability diagnosis features described in 4.2.1 and 4.2.2 and the attention adjustment feature described in 4.2.3.
Upon the end of the tutorial, we gave time for participants to mimic the whole process using the same toy dataset and ask any questions.
Then, we asked participants to start the main session.
We erased all prior input and asked users to start over the process using a larger dataset (particularly assessing the local explanations of the validation set) and a base model we provided.
During the main session, participants had to use the system without help.
The main session was video-recorded.
Once participants finished their input session, we asked them to fill out an input survey with two questions covering the “absolute” and “relative” evaluations, as follows:
* Q1: “[RQ2a, Absolute] I found understanding the model's vulnerable aspects using DeepFuse to be _____.” (A 7-level Likert scale of usefulness. “7” is “extremely useful”.)
* Q2: “[RQ2a, Relative] Using DeepFuse, understanding the model's vulnerable aspects was _____ than my current practice.” (A 7-level Likert scale of difficulty. “7” is “much easier”.)
§.§.§ Output Session
In this session, participants evaluated the performance change of the improved model with the test set.
In particular, DeepFuse provided two pairwise comparisons (between M and M_exp, and between M_base and M_exp) (see 4.2.5).
After the short output session tutorial using a toy test set, participants started the main output session using the model they fine-tuned from their input session and the larger test set.
Once users were finished with all the analysis and comfortable with their findings, we moved to the semi-structured exit interview. The interview had 9 question categories designed to understand (1) their general perception of DeepFuse, such as the pros and cons they felt throughout the two sessions, (2) their perception of specific perspectives, including (2-a) experiencing local explanation adjustment, (2-b) applying the reasonability matrix in assessing model performance, (2-c) the features they used on day 1, (2-d) the features they used on day 2, and (3) their suggestions for a better DeepFuse in the future.
Same as S1, two researchers attended every interview.
After the interview, they completed an output survey with 6 questions (see Q3 to Q8 below).
Lastly, to check the usability of DeepFuse, we asked participants to fill out the System Usability Scale (SUS) survey <cit.> (see Appendix B).
* Q3: “[RQ2b, Absolute] I found the capability of DeepFuse regarding improving the model performance using my input was _____.” (A 7-level Likert scale of effectiveness. “7” is “extremely effective”.)
* Q4: “[RQ2b, Relative] I found the capability of DeepFuse regarding improving the model performance was _____ than my current practice.” (A 7-level Likert scale of effectiveness. “7” is “extremely effective”.)
* Q5: “[RQ1a, Absolute] Adjusting the saliency maps (as guided) can be effective in building future models.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q6: “[RQ1a, Relative] Adjusting the saliency maps (as guided) can practically change my model-building practice to a better form in the future.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q7: “[RQ1b, Absolute] On top of a model accuracy performance, using saliency maps (as guided) can provide an effective measure for evaluating my future model performance.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
* Q8: “[RQ1b, Relative] On top of a model accuracy performance, using saliency maps (as guided) can practically change the way I evaluate my future model performance to a better form.” (A 7-level Likert scale of agreement. “7’’ is “strongly agree’’.)
For the analysis of the exit interviews, we followed the similar process we applied in analyzing S1.
The difference from S1 was the existence of the video recordings.
The recordings were reviewed multiple times for transcription, code development, and analysis to synchronize with the notes.
The codes and memos were developed gradually by two of the authors as we took in more interviews.
After the final interview, each of the authors developed the themes and shared them with each other, developing the consensus-based diagram that articulates the main insights we learned relevant to explaining the three RQs.
§.§ Results
In this section, we aggregated all survey and interview responses from the participants for the RQs we developed.
S2 results suggest that (1) the workflow of local explanation-based attention steering provided diverse perspectives in diagnosing model vulnerability, (2) the direct steering design made the process of model revision straightforward, and (3) every participant enjoyed improvements in key model performance measures.
Specific sub-tasks, how they are improved, and why the participants perceived they are improved are in Table <ref>.
We believe these are not merely because of the Hawthorne and novelty effects, since we have objective evidence of performance improvement and assessment efficiency.
We also organized the aspects that need improvement in Table <ref>, which we share in detail in the Discussion section.
The behavioral data we collected show that all participants generated models that outperform the initial and baseline models in (1) model accuracy, (2) the overlap between the model's focus and the relevant object types (IoU), and (3) the proportion of reasonable attention out of all images in a test set.
The average accuracy of 12 users’ fine-tuned models (M_exp) was 82.95%, with an average IoU of 0.39 (“Intersection over Union” with respect to the attention ground truth of the user-defined gender-related object: “person”), and the average proportion of reasonable attention was 89.55% (see Fig. <ref>-A). All these performances outperformed both the initial model (model M: accuracy = 47.6%, IoU = 0.12, attention reasonability = 51.8%) and the model that applied state-of-the-art fine-tuning method without attention (model M_base: accuracy = 79.0%, IoU = 0.26, attention reasonability = 79.4%).
Regarding the attitudinal survey data, every absolute and relative question's mean was over 4.
In terms of absolute questions, 100% of ratings were above 4-“neutral” (M = 6.19, SD = 0.67).
This indicates that participants were satisfied with the overall quality of the workflow and the system.
Regarding the relative questions, 89.6% of ratings were above 4-“neutral” (M = 5.94, SD = 1.24), which indicates that they felt applying the workflow and the system can practically improve their current practice.
§.§.§ [RQ1-a] Workflow: Adjusting model attention as a CNN steering method
After completing the user studies, the majority of users strongly agree that adjusting local explanations can effectively improve model performance (Q5 rating: M = 6.42 out of 7-“strongly agree”, SD = 0.64, as shown in Fig. <ref>-B). Also, people think their current modeling processes can be practically improved by considering the attention adjustment method (Q6 rating: M = 6.17 out of 7-“strongly agree”, SD = 1.07).
During interviews, all participants shared their positive impressions about the effectiveness of attention adjustment in improving model accuracy, which is the primary objective of conducting model fine-tuning. They also confirmed that the impact of contextual bias was reduced as attention quality increased by attention steering. By adding a new perspective from humans, a model also becomes fairer in making predictions for each target class (P2, P5, P10).
Participants (P1, P2, P3, P4) with experience in model attack and defense shared the possibility of using our method to improve the robustness of the models against backdoor attacks, letting the model ignore small perturbations on an image and focus on the right area. We learned that after trying our method, people gained awareness of considering human-in-the-loop and visual-based approaches in model steering since most of the ML researchers use algorithmic approaches for handling contextual bias, such as data augmentation, hyperparameter tuning, ensemble methods, etc., rather than extensively using visualization in the fine-tuning process.
§.§.§ [RQ1-b] Workflow: Adding quality of model attention in evaluating CNNs
Based on the feedback, users agree that using an attention evaluation method (e.g., reasonability matrix as guided, based on Gao et al. <cit.>) is effective in diagnosing model vulnerabilities (Q7 rating: M = 6.33, SD = 0.47, see Fig. <ref>-B), and they are very likely to use this method for improving future practices Q8 rating: M = 6.08, SD = 0.76).
Participants think that the attention assessment features in DeepFuse provide more diverse and rigorous perspectives in assessing a model's vulnerabilities, especially the reasonability matrix, which can be seen as an expansion of the accuracy dimension toward understanding “why” a model underperforms (P1, P3, P5, P6, P8, P9, P10, P12). P1 and P4 endorsed the necessity of equipping a reasonability matrix assessment step in checking the model's decision-making.
The matrix interpretation was straightforward to most users, as it is related to the widely-used confusion matrix concept in the data science domain.
The dynamic shifts of model vulnerability were well presented as shown by the reasonability matrix (3 vulnerable sub-groups, “UIA - unreasonable inaccurate’’, “UA - unreasonable accurate’’, and “RIA - reasonable inaccurate’’).
One major task we designed for users to achieve was the recognition of a backdoor attack in the data (i.e., added green star markers which may trigger a false prediction by the model), and all participants were able to identify the impact of the attack by evaluating attention quality using the reasonability matrix.
§.§.§ [RQ2-a] System: How improved CNN diagnosis
After comparing with their current practices, participants confirmed DeepFuse as a useful (Q1 rating: M = 5.92 out of 7-“extremely useful”, SD = 0.76, see Fig. <ref>-B) and easier (Q2 rating: M = 6.0, SD = 1.15) tool for understanding model vulnerability, benefiting from its labor-efficient mechanisms.
The step-by-step nature of the assessment process in DeepFuse allows users to systematically detect both contextual and manipulated bias in the data, making it easier to reduce model vulnerability (P3, P9, P12). People believe this GUI design can significantly reduce human effort in coding and visualization management for comprehensively assessing a CNN (P2, P3, P5, P6, P7, P8, P9, P10, P12). ML engineers are well aware of the advantages of using visualization to compare metrics and surface bias, but it is a cumbersome task (e.g., repetitive file creation and loading, lack of visual-based explorers for local explanations, etc.). Instead, people mostly use command lines and unintuitive numeric comparisons for checking vulnerabilities.
One important feature that people liked was the local explanation grouping by detected objects (e.g., “person”, “bicycle”, etc.), which allowed them to check attention quality and accuracy changes within the common object level (P2, P3, P6, P9, P12).
Some users pointed out that having consistent criteria for annotating attention quality regarding the classification task could be tricky with subjective uncertainty (P2, P4, P6, P9, P11). P6 mentioned that during the initial exploratory analysis of some models, users might not have good/bad attention criteria for annotating the attention.
P10 shared an experience in exploring what objects cause contextual bias, and the biggest challenge was making a reasonable assumption at first and evaluating it over time. This challenge is critical if the annotation task is outsourced to multiple people.
§.§.§ [RQ2-b] System: How improved CNN revision outcomes
According to survey responses, people witnessed the highly effective capability of DeepFuse in the performance steering task (Q3 rating: M = 6.08 out of 7-“extremely effective”, SD = 0.64, see Fig. <ref>-B).
Regarding the same task, people found it slightly more effective than their current approaches (Q4 rating: M = 5.5, SD = 1.66), as two users preferred their own approaches and rated it 2-“less effective”.
Aligning model attention with human perceptions can effectively revise a model's performance, and with DeepFuse's adjustment mechanisms (i.e., the attention drawing panel and boundary suggestions, as shown in Fig. <ref>), people can directly embed their intention and domain knowledge into the CNN (P2, P4, P9, P10). Regarding model performance comparison, people were able to reveal the overall context of the image data and the corresponding impact on the model (accuracy and attention quality) through DeepFuse's detected-object sub-grouping (P1, P2, P3, P5, P6, P8, P9, P11, P12).
An industry practitioner who worked primarily on model quality assurance mentioned that black-box models were not usually accessible to engineers outside the core ML team, and DeepFuse had features that could be practical for them to evaluate model performance in that situation (P11).
In the last evaluation view of DeepFuse for record-wise attention comparison (as shown on the right of Fig. <ref>), P7 was curious about the opposite shift of attention quality (i.e., a change from “right” to “wrong” attention after model fine-tuning) and wanted to see some quantitative measures about it.
The IoU distribution visualization was another measure in DeepFuse that could provide a rigorous comparison between model conditions (with/without attention adjustment), revealing the positive relationship between accuracy and attention quality improvement (P2, P8, P11). As people mentioned, measuring IoU was not commonly used in classification evaluation compared to segmentation tasks, and it was typically difficult to visualize.
§.§ Discussion
Overall, the system received acceptable usability <cit.> with an average SUS score of 76.88 (SD = 14.70, see the SUS box plot in Fig. <ref>-B, the rated scores (0-4) were converted to a 0-100 scale based on Brooke's SUS guide <cit.>), exceeding the average SUS level of 68. There were 10 out of 12 participants (except P3 and P5) who gave above-average SUS scores.
Although this study is not for system-level comparison, we wanted to understand the effect of our fine-tuning mechanism collected from real users. We conducted Mann-Whitney U tests to confirm the significant performance improvement after using attention.
Across all 12 participants' results, the accuracy of the fine-tuned model using attention was significantly greater than the baseline condition (U = 0, n_base = n_exp = 12, p < 0.00001). The same result applies to the IoU and attention reasonability proportion comparisons.
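For reference, a test of this kind can be run with SciPy as in the sketch below; the two lists are placeholders for the twelve per-participant accuracies of each condition, not the actual study values.

```python
# Sketch only: one-sided Mann-Whitney U test comparing the two fine-tuning conditions.
from scipy.stats import mannwhitneyu

acc_base = [0.78, 0.79, 0.80, 0.79, 0.77, 0.81, 0.79, 0.78, 0.80, 0.79, 0.78, 0.80]  # placeholders
acc_exp = [0.82, 0.83, 0.84, 0.83, 0.81, 0.85, 0.83, 0.82, 0.84, 0.83, 0.82, 0.84]   # placeholders

u_stat, p_value = mannwhitneyu(acc_exp, acc_base, alternative="greater")
print(f"U = {u_stat}, p = {p_value:.6f}")
```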
Through the studies, we also identified disadvantages of our system that need to be improved (as shown in Table <ref>).
Regarding the interpretation of the reasonability matrix produced by users' annotation and model prediction, the guidelines can be more formally provided to be acceptable in the ML community (P4, P5, P11). The styles of attention visualization (i.e., color-scale, gray-scale, and polygon mask) need improvement, especially since the orange polygon mask was not visually clear for P3 and P10. It can be solved by having color and opacity adjustment features.
People also raised the potential inconsistency issue in attention adjustment, where users may have subjective opinions and criteria about where the “right” attention should be. DeepFuse needs to further provide more deterministic guidelines for attention adjustment in more complex task types, especially for tasks that require domain expertise (e.g., TB diagnosis in chest X-ray images <cit.>).
With this uncertainty in attention adjustment, P7 and P10 suggested an instant performance comparison feature to reflect the model improvement on the fly as people annotate, which can be a future direction in active learning to have simultaneous updates while labeling in progress <cit.>.
About the attention adjustment module, people suggested that the drawing feature should be optimized for drawing curves and near image borders, as it was not easy to do so (P1, P3, P6). P5 suggested existing smart drawing features (e.g., image matting tool in Photoshop <cit.>) to be added. P7 thinks that binary mask drawings might not be enough for the best attention guidance used in fine-tuning the model. A solution could be giving higher weights toward the centroid of the attention areas.
With the current data size and task setting in S2, the trade-off between manual workload and model improvement may not be as significant since the overall workload was not overwhelming and considered labor-efficient compared with existing assessment methods. Though evaluating attention maps could be a labor-intensive step, diagnosing and optimizing the model's vulnerability were effective and easy to use based on users' feedback. The annotation steps were incorporated with AI-supported automation (bulk annotation, object detection, object relevance filtering, adjustment recommendation, etc.) to reduce both users' cognitive and labor workloads while gaining better performance. However, as data size increases, this labor-performance trade-off becomes essential, and more specifically, scalability solutions should be explored to reduce human labor while maintaining good fine-tuning performance. We further discussed scalability considerations regarding the trade-off in the next section (6.3).
§ IMPLICATIONS FOR DESIGN BEYOND XAI
Through S1 and S2, we learned several insights from our participants.
While listening to their voices and questions, and observing how they perceived the system after their usage, we learned that at the heart of people's pursuit of grounding their models in their practice, one of the core challenges they encounter is understanding how to harmonize the way they believe the CNN should work with the way CNNs actually work.
When they identify such a gap through XAI-driven tools, the upcoming challenge seemed to be to know how to reconcile such a gap efficiently and effectively.
We reflect on this aspect of beyond XAI (how to help a user shift their learned insights into actionable plans) and list possible research directions that the HCI and CSCW communities can consider in designing future XAI or steerable AI tools to help practitioners “in the trench”.
§.§ Correlating Model Attention and Model Accuracy
One of the overarching questions we wanted to understand was how the model attention seen as reasonable by the human mind could also result in accurate prediction.
Perhaps that was the reason we decided to use the reasonability matrix.
If reasonable attention and accurate prediction are aligned together, the reasonable accurate instances (i.e., accurate for the right reason) and unreasonable inaccurate instances (i.e., inaccurate for the wrong reason) should increase while the unreasonable accurate and reasonable inaccurate instances should decrease.
The tendency we saw was positive. We observed that the reasonable accurate instances increased while the unreasonable accurate instances decreased for most participants.
At least in our setting, adding more human reasoning to the model's way of thinking increased the model's gaze toward intrinsic objects, resulting in an accuracy increment.
However, one segment that didn't change was the reasonable inaccurate group.
We think understanding the reason when and why the model makes inaccurate predictions despite the reasonable gaze should be closely related to improving model performance.
Regarding research in Fairness, Accountability, and Transparency (FAccT), a dominant view is that human input or intervention may be required to realize a model that retains FAccT at the cost of a drop in model accuracy.
We hope that understanding effective ways to correlate the right reasons with accurate predictions can motivate the development of fair, robust, and accurate models <cit.>.
In general, we believe it is important to understand how to align human reasoning and model accuracy.
Shao et al. argue that humans “arguing” against DNNs when explanations are not reasonable can benefit the model <cit.>.
A railroad cannot be a train <cit.>, a snowboard is not a man <cit.>, and a shopping cart should not be a woman <cit.>.
Lastly, while human-guided ML has a potential and good cause <cit.>, finding a way to cut down the human-side labor is another important perspective from the two studies.
§.§ Generalizability Consideration: Beyond Binary Classification
We started to test the idea of direct steering of model attention through local explanation on the binary classification problem for two reasons: the simplicity of the problem and the availability of well-annotated datasets.
After using DeepFuse, several participants shared their feedback and curiosity on how our pipeline can be applied to more advanced vision-based tasks.
The design we provided in binary classification can be relatively simpler than the aforementioned cases.
As the model's task gets more complex and diverse, new designs customized to the particular task type and application area should be required to understand the generalizability of our findings.
Methodologically, local explanation-based attention steering is not limited to binary classification tasks.
The future design can be explored to enhance CNN models for handling different tasks, such as multi-class classification, object detection, and segmentation tasks, which could possibly be expanded from processing images to videos.
The core user flow beneath DeepFuse's CNN steering is as follows:
First, the user flow allows human users to define reasonable and unreasonable types of attention depending on task goals.
Next, the user flow motivates reasonable attention types and penalizes unreasonable attention types in a fine-tuning process suggested in Explanation-guided Learning <cit.>.
Finally, the designer can provide a dashboard that helps users to understand how their indicated directions were reflected in the model revision process.
While the flow can be generally applicable, the way a designer facilitates a user's definition of reasonable and unreasonable attention type should be carefully implemented depending on the type of problem.
For example, in a multi-class classification or object detection task for different animals, users can employ attention logic that penalizes background and motivates foreground objects to build a more reasonable and high-performing model.
As mentioned in 5.1.1, local explanation methods can be applied to different layers of a CNN to produce different levels of granularity.
If the task goal requires a coarse granularity detection of a bounding box, applying local explanation visualization at the last layer of CNN can be suitable. However, if it needs more fine-grained granularity of closed curve for semantic segmentations, producing local explanations on both the first convolutional layer for edge-level of detail and the last convolutional layer for object-level detail can be considered, providing more depths of local explanation for users to evaluate.
Finally, we noted P7's suggestion about extending this flow to a more advanced video level of object classification, detection, and segmentation model steering.
Due to the data volume, special design considerations need to be applied in such a task.
However, upon the efficient design for indicating reasonable and unreasonable attention types, we believe that it is possible to apply the suggested flow to the problem space.
§.§ Scalability Consideration: Hundreds vs. Millions
Despite the promising performance of the model steering method, scalability remains an essential concern raised by several participants (P2, P3, P4, P8, P11), as many real-world image classification tasks involve millions of images.
Human scalability has been a crucial issue in HCI, CSCW, and beyond—while the data size can easily go up to millions and trillions in training state-of-the-art models, human cognition remains flat <cit.>.
Even if we can surface millions of images to users, it may not be possible for them to scan images serially and achieve sensemaking.
Generally, to successfully devise a scalable design, we believe that the number of images users have to go over should still not exceed thousands, and the amount of time they may spend should not exceed one hour, as recent data annotation literature suggests <cit.>.
Herbert Simon remarked that “wealth of information creates a poverty of attention” <cit.>.
As the trade-off between human labor and performance gain in human-in-the-loop applications is illustrated in Fig. <ref>, when users spend more effort as data size increases, the model will gain better performance until the workload hits the bottleneck of feasible human labor. We aim to make the curve of labor-performance trade-off steeper (from “curve 1” to “curve 2” shown in Fig. <ref>) through scalability optimization to improve the impact of human workload on performance gain. By devising “scalable” human-in-the-loop approaches, model performance could be further improved with the feasible amount of available human labor.
While every human-in-the-loop approach can suffer from bottlenecks of limited information, labor sources, session time, etc., ultimate breakthroughs in human-in-the-loop and interactive ML designs could come from scalability strategies.
We introduce how some of the design strategies can be adopted in the design space of Beyond XAI.
First, one can consider sampling from the whole dataset.
Modern computer vision models can yield keywords of objects and context in the scene. Using such additional information extracted from the vast dataset, it is possible to define major and minor clusters of images. The new design may help users proceed with a small portion of sampled images derived from such clusters to reason the whole dataset and typify reasonable and unreasonable attention types accordingly.
Second, one can consider examining images in a sequence built with Active Learning, a technique that selects the fewest unlabeled examples needed to maximize the model's accuracy gain <cit.>.
Applying active learning techniques is common in data annotation research and can help reduce the number of images users need to reason about.
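As a minimal illustration of this second strategy, the sketch below orders unlabeled images by predictive uncertainty (least-confidence sampling) so that users review the most informative images first; the function name and the assumption that per-image softmax outputs are available are illustrative and not tied to any specific system.

    import numpy as np

    def uncertainty_ranking(probs):
        """Rank unlabeled images from most to least uncertain (least-confidence sampling).

        probs: array of shape (n_images, n_classes) holding softmax outputs
        from the current model (assumed to be available).
        """
        probs = np.asarray(probs)
        confidence = probs.max(axis=1)   # probability assigned to the predicted class
        return np.argsort(confidence)    # lowest confidence (most uncertain) first

    # Hypothetical usage: surface only the top-k most uncertain images for review.
    # ranked = uncertainty_ranking(model_softmax_outputs)
    # to_review = ranked[:1000]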
Third, devising further intelligent features that can automate the current workflow can facilitate the process as well.
Some features that need manual investigation can be automated in future designs.
Finally, if there is a strong rationale for investing more human resources, one can consider crowdsourcing.
§.§ Data Iteration and Continual Lifelong Learning
The system's capability of figuring out vulnerabilities through local explanations is closely related to the capability of fortifying the dataset by adding more examples that can remove the contextual bias.
Such “data iteration” is not uncommon in practice.
The most fundamental way to improve the model is to improve the data. For instance, Chameleon lets users compare data features, training/testing splits, and performance across data versions <cit.>.
When combining data iteration with model steering through local explanations, one could derive interesting design ideas that help ML engineers better find, search, and add data.
While improving the model with new data can be straightforward, a few issues need to be considered when steering models through local explanations.
First, it is necessary to understand which learning strategy is more effective: stacking every dataset in one place and retraining the model, or iteratively adding the new dataset and making the model “evolve”.
In general, the first case can yield a higher-performing model than the second because incremental training is prone to catastrophic forgetting, a problematic and almost inevitable drawback <cit.>.
In recent years, the concept of continual lifelong learning has emerged <cit.> and provided a breakthrough.
Understanding which strategy can yield what strengths and weaknesses in the scenario of data iteration with local explanation reasoning would be necessary.
§.§ Improving Fine-Tuning
This work is the first study to observe how ML engineers experience techniques from the Explanation-guided Learning framework when fine-tuning their models and how they perceive the difference.
While we saw participants satisfied with the progress they made with the RES framework, we introduce a few directions on how the RES framework could evolve to support an improved model steering environment in the future.
One important direction is how to design a better quantitative measurement to assess the quality of the steered attention during the fine-tuning process.
Simple distance-based metrics such as Mean Squared Error (MSE) or Intersection over Union (IoU) scores that are calculated purely based on the alignment of each feature can hardly comprehensively reflect the quality of the adjusted attention, as they completely ignore the correlations among visual features.
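For concreteness, a minimal sketch of such purely alignment-based scores is given below; the function name and threshold are illustrative and are not the metrics used in the RES framework.

    import numpy as np

    def alignment_scores(attention, human_mask, thresh=0.5):
        """Compare a model attention map against a human annotation mask.

        attention: 2-D array of attention values scaled to [0, 1].
        human_mask: binary 2-D array of the same shape (1 = region marked by the user).
        Returns (mse, iou); both ignore correlations among visual features.
        """
        attention = np.asarray(attention, dtype=float)
        human_mask = np.asarray(human_mask, dtype=float)

        mse = float(np.mean((attention - human_mask) ** 2))   # distance-based view

        pred = attention >= thresh                             # binarized attention map
        gt = human_mask >= 0.5
        union = np.logical_or(pred, gt).sum()
        iou = float(np.logical_and(pred, gt).sum()) / union if union else 1.0
        return mse, iou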
One potential remedy to this issue is also to leverage fidelity-based metrics, which aim at evaluating how faithful the model's attention is with respect to the model's prediction.
The assumption behind this is that the `right' attention should contain sufficient information for the model also to make the `right' prediction <cit.>; while on the other hand, removing the attention should also lead to significant negative impact for the model to make the correct prediction <cit.>.
However, it remains unclear and challenging how to propose a single metric that can jointly measure the faithfulness and the degree of alignment with the human annotation to make a more comprehensive assessment of the attention quality.
Another possible topic is how to leverage multiple annotations from different users for a single sample <cit.>.
As obtaining more than one annotation can be helpful to boost the reliability of the human boundary for attention adjustment, it poses challenges on how to align model attention with multiple ground truth boundaries.
While a simple way out can be using the 50% consensus or majority vote over all the available annotations, useful information can be lost during the aggregation. Thus, new techniques are in demand to leverage each annotation effectively.
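A minimal sketch of the 50%-consensus rule mentioned above is shown below; the per-pixel aggregation and function name are illustrative, and, as noted, this aggregation discards per-annotator information.

    import numpy as np

    def consensus_mask(masks, agreement=0.5):
        """Aggregate several users' binary annotation masks for one image.

        masks: array of shape (n_annotators, H, W) with values in {0, 1}.
        A pixel is kept when at least `agreement` of the annotators marked it
        (0.5 corresponds to the 50%-consensus / majority-vote rule).
        """
        masks = np.asarray(masks, dtype=float)
        fraction = masks.mean(axis=0)                  # per-pixel fraction of annotators
        return (fraction >= agreement).astype(np.uint8)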
§ CONCLUSION
In this work, we examined our inquiry of how we can design a direct feedback loop between a human and a CNN through local explanations.
In particular, we designed and developed the first interactive system to help a user adjust the local explanation results regarding the gaze of CNNs.
We applied our interactive design in the problem space of contextual bias for CNN engineers.
With S1, we learned about ML engineers' practical challenges and desires, converting the insights into design considerations that could improve how we use local explanations in model diagnosis and steering.
With the resulting system, we conducted S2 and found how it can provide a better workflow and experience to CNN engineers.
At the same time, we also found limitations and future research directions.
In particular, we distilled these into the Implications for Design beyond XAI, within the categories of (1) correlating model attention and model accuracy, (2) generalizability consideration, (3) scalability consideration, (4) data iteration and lifelong learning, and (5) improving fine-tuning.
We hope this work can benefit researchers and practitioners who seek to understand how to make XAI-driven insights actionable in steering AI.
§ STUDY 1 INTERVIEW QUESTIONS
§.§ About you
* Can you explain your role in your company?
§.§ Your models and development settings
* Can you explain the purpose, input, and output of your models for which you used model saliency/attention?
* Can you walk us through your process of building your model? E.g., how to collect the training set, how to train your model, how to improve your model performance, how to debug?
§.§ Use of saliency maps
* Can you explain the way you use saliency maps in understanding your model’s behavior?
* Can you explain the way you use saliency maps in supervising/improving your model’s behavior?
§.§ Working on fair/robust/accurate models
* Can you explain your experience/effort towards building more fair DNN models?
* Can you explain if attention/saliency was useful or not?
§.§ Your tools, challenge, and wish list in the future
* Can you explain the types of tools that you use for understanding/improving your DNN models?
* Can you explain the challenges you experience while interacting with your DNN?
* What new tools/features do you wish to have in the near future to make your life better?
§ STUDY 2 SYSTEM USABILITY SCALE (SUS) SURVEY <CIT.>
§.§ Indicate your degree of agreement for each of the 10 statements (on a Likert scale from 1-“strongly disagree” to 5-“strongly agree”)
* I think that I would like to use this system frequently.
* I found the system unnecessarily complex.
* I thought the system was easy to use.
* I think that I would need the support of a technical person to be able to use this system.
* I found the various functions in this system were well integrated.
* I thought there was too much inconsistency in this system.
* I would imagine that most people would learn to use this system very quickly.
* I found the system very cumbersome to use.
* I felt very confident using the system.
* I needed to learn a lot of things before I could get going with this system.
|
http://arxiv.org/abs/2307.03877v1 | 20230708014525 | Designing Mixed-Initiative Video Games | [
"Daijin Yang"
] | cs.HC | [
"cs.HC",
"cs.AI",
"J.5"
] |
To my family.
I would like to first thank my great, beautiful, and strong mother. Since being diagnosed with multiple myeloma nine years ago, she has endured unimaginable suffering, but she has never given up on her life and has even achieved more in her work than healthy people. After contracting COVID-19, her condition deteriorated rapidly, and as I write this paper, she is fighting bravely against cancer in the hospital. She has always inspired and supported me to move forward. She is a great mother and I will love her forever.
I am grateful to my father, my mother's sister, and other family members for taking care of my mother and allowing me to focus on my studies.
I would like to thank Professor Elina Tochilnikova, Professor Giovanni Maria Troiano, Professor Bob De Schutter, Professor Casper Harteveld, Professor Leanne Chukoskie, and all other professors in the field of Game Science and Design at Northeastern University for their invaluable guidance and unwavering patience in supporting my work. I would also like to express my sincere gratitude to Professor Max Kreminski at Santa Clara University for providing crucial feedback and suggestions on my thesis.
I would like to extend my appreciation to all of my colleagues who generously provided valuable suggestions and constructive feedback on my work. Additionally, I am grateful to my friends Binyao Jian and Xinyan Deng, who stood by me during the most challenging times. Their unwavering support and companionship have been invaluable to me.
The development of Artificial Intelligence (AI) enables humans to co-create content with machines. The unexpectedness of AI-generated content can bring inspiration and entertainment to users. However, the co-creation interactions are always designed for content creators and have poor accessibility. To explore gamification of mixed-initiative co-creation and make human-AI interactions accessible and fun for players, I prototyped Snake Story, a mixed-initiative game where players can select AI-generated texts to write a story of a snake by playing a “Snake” like game. A controlled experiment was conducted to investigate the dynamics of player-AI interactions with and without the game component in the designed interface. As a result of a study with 11 players (n=11), I found that
players utilized different strategies when playing with the two versions,
game mechanics significantly affected the output stories, players' creative process, as well as role perceptions,
and players with different backgrounds showed different preferences for the two versions.
Based on these results, I further discussed considerations for mixed-initiative game design.
This work aims to inspire the design of engaging co-creation experiences.
Keywords - human-AI interaction, gamification of human-AI collaboration, mixed-initiative interface, mixed-initiative game, AI co-writing, playing and creating conflicts
CHAPTER: INTRODUCTION
Recent machine learning (ML) techniques have boosted human creation, enabling humans to co-work with artificial intelligence (AI) to compose music <cit.>, draw illustrations <cit.>, write stories <cit.>, reply emails <cit.>, create characters <cit.>, and develop games <cit.>. In this mixed-initiative co-creation <cit.> process, AI acts as a partner of humans and provides real-time feedback aligned with the creation iteration. Since the algorithm can generate numerous instances with easy inputs in a relatively short time, the mixed-initiative interfaces can help its users quickly explore the solution space, inspire them with unexpected ideas <cit.>, and make creative experiences accessible to non-professional creators <cit.>.
Current mixed-initiative co-writing interfaces mainly focus on supporting writers. These systems were designed to help writers to keep the consistency of stories, plan plots, get unstuck <cit.>, and change text-based stories into other forms <cit.>. Users must have basic writing skills to operate these systems. Other work introduced gamified designs such as temporary rules and goals <cit.>, as well as scores <cit.> into mixed-initiative co-writing to make the system more enjoyable to novice writers. However, previous work on human-AI collaboration in the context of creative writing focused on AI as a supporting mechanism to facilitate creative storytelling efforts. Here, I extend prior work by exploring the use of AI for mixed-initiative creative writing as a game mechanic in the context of game design. To design mixed-initiative video games, I aim to explore the following research questions:
(1) What patterns of interaction and player identification emerge in the player-AI co-creating process?
(2) How do game mechanics impact the creation and role perceptions in the process?
(3) How can mixed-initiative co-creating be integrated with game mechanics for a unified play experience?
To ground my study, I designed and prototyped Snake Story, a mixed-initiative game with the mechanics from “Snake” [https://www.arcade-history.com last accessed 03.06.2023]. The game (referred to as the game version) involved players selecting AI-generated texts or adding their own texts by controlling a growing snake to eat candies on the map, resulting in the creation of a story about a snake. A GPT-3 <cit.> based language model was employed to provide 2 text selections in each round with different preset parameters. The model would consider previous selections and would write an end for the story when the game or the interaction is over. For comparison, a system (referred to as the non-game version) was also developed for players to directly select AI-generated texts without engaging in gameplay.
To investigate how players dynamically interact with the game, I conducted a within-subject user study with 11 players (n = 11). Each player was asked to write an approximately 300-word story about a snake in each of the two versions, assigned in random order. Players' experiences were analyzed using a mixed-method approach, including gameplay log data, surveys, think-aloud protocols, interviews, and observations. Results from the study show that game mechanics significantly affect players' text selection strategies, the quality of the stories, and the sense of engagement in creating; players shared a selection strategy for GPT-3 generated texts; different players had different play strategies in both versions and thus perceived themselves and the AI differently because of the game mechanics; and players with different writing and AI experiences held different preferences for the two versions.
In summary, the thesis makes the following contributions to the mixed-initiative game design:
(1) I introduce Snake Story, a mixed-initiative game for collaborative writing with AI. I present techniques that enable players to write with AI, and I develop both game and non-game interactions and interfaces.
(2) In a within-subject study with 11 players, I compared the non-game and the game version and defined:
(a) Players' usage data.
(b) Statistical differences between the two versions.
(c) Players' strategies for selecting AI-generated texts to create stories.
(d) Players' play strategies and role perceptions in the two versions.
(e) Players' preferences for the two versions.
(3) Based on the results of the user study, I discuss the design
implications that mixed-initiative games should:
(a) Resolve playing and creating conflicts.
(b) Increase narrative engagement in playing.
(c) Enhance emotional involvement in creating.
(d) Balance playing and creating.
(e) Find new evaluation criteria.
Taken together, these findings guide the design of future engaging co-creation experiences.
CHAPTER: RELATED WORK
§ NEURAL LANGUAGE MODELS FOR TEXT GENERATION
Text generation has appeared as a critical application of natural language processing (NLP) technologies, with various applications such as chatbots <cit.> and content creation <cit.>. The rise of deep learning has enabled significant advancements in the field of text generation, with language models such as the Generative Pre-trained Transformer (GPT) series achieving remarkable success. It has been proven that GPT models have the ability to generate texts that cannot be distinguished from human-written pieces <cit.>. Based on the previous GPT-2 structure <cit.>, the GPT-3 model with larger model size, dataset size, and more training has demonstrated stronger abilities in text completion and outperformed GPT-2 on several metrics, including fluency, coherence, and relevance <cit.>. As a result, GPT-3 was employed in the Snake Story game.
By using a few-shot learning approach <cit.>, the GPT-3 model is able to perform specific tasks under natural language instructions inputted by users. For example, in the Snake Story game proposed in this thesis, a prefix "writing a story of a snake" was added to restrict the generated texts to the topic. Despite the impressive advancements in text generation, several challenges remain in using GPT-3, including the issue of bias and the difficulty of producing diverse and engaging content. The issue of bias refers to the fact that the generated text may reflect the biases inherent in the training data. Identical prompts in GPT-3 can result in stereotype-related outputs, including biases related to sex <cit.>, race, and certain religious groups <cit.>. Also, GPT-3 still has the problem of sometimes generating low-quality texts that repeat themselves, lose coherence over paragraphs, and have contradictory logic <cit.>. This problem is exacerbated when the parameters of GPT-3 are not set properly [https://platform.openai.com/docs/api-reference/models last accessed 03.06.2023].
§ MIXED-INITIATIVE CO-WRITING INTERFACES
Mixed-initiative interfaces that enable co-creation in various fields have been widely researched <cit.>.
The interfaces can take advantage of exploratory creativity from human writers and the fast generation of diagrammatic lateral paths from generative algorithms to create mixed-initiative co-creativity <cit.>. Extensive research has explored the potential of mixed-initiative interfaces to aid human writing through editing human-written texts, as well as generating and expanding ideas. Editing and refining functions are the most common functions in the interfaces. For example, Shuming Shi et al. <cit.> utilized AI technologies to boost users' writing proficiency by enabling them to generate higher-quality text more efficiently. This was accomplished through the implementation of five distinct categories of features in their digital writing assistant, including text completion, error detection, text refinement, keyword-to-sentence conversion (K2S), and cloud-based input methods (cloud IME). Xin Zhao <cit.> developed a writing assistant that can assist non-native English speakers in overcoming language barriers by offering rewriting alternatives with various tones (such as casual or formal) and lengths (like shortening or expanding).
In collaborative writing, AI can also serve as an idea generator, contributing to the generation of novel concepts and plot lines. For instance, Melissa Roemmele et al. <cit.> created a system that aids users in brainstorming by providing suggestions for the next sentence in a story. Swanson et al. <cit.> described an interactive storytelling system that utilizes a case-based reasoning architecture to offer a range of options for the subsequent sentences, leading the story in entirely diverse directions. Chung et al. <cit.> introduced an alternative to suggestion-based co-ideation approaches by developing line-sketching interactions that enable users to co-create stories while actively controlling and making sense of the protagonist's fate. Beyond idea generators, AI in <cit.> was granted a more substantial role as an active writer and assumed responsibility for continuing users' narratives through a unique form of story solitaire. In contrast, Biermann et al. <cit.> proposed that AI could jeopardize writers' control, autonomy, and ownership by exceeding co-creative limits, and therefore sought to preserve human control over the writing process in their system.
Moreover, AI can assist in bridging the gaps between the skeleton structures of stories. Ammanabrolu et al. <cit.> introduced an ensemble-based model capable of generating event-driven stories. Yuan et al. <cit.> built a text editor that can provide plot points that can contextualize the scene built by humans. Laclaustra et al. <cit.> introduced a system that empowered users to specify characters, locations, and objects within a story world. The system, subsequently, generated a rudimentary story by allotting actions to individual characters and creating a sequence of events.
In summary, the aforementioned applications are well-designed to assist creative writers in enhancing language, ensuring consistency, overcoming writer's block, managing reader experience, as well as refining and iterating on expressive intent <cit.>. However, it is crucial to note that more research is indispensable to cater to the needs of casual creators or non-creators in the realm of content creation.
§ MIXED-INITIATIVE CO-WRITING GAMES
In order to broaden the accessibility of co-creative experiences for a wider range of users, various applications have recognized the benefits of integrating mixed-initiative co-writing as a valuable component of narrative instruments <cit.> in games, thereby enhancing the overall interactive experience of "play".
For example, a mixed-initiative text-based game, AI dungeon[https://play.aidungeon.io/main/home last accessed 03.06.2023], used AI to generate and respond to players' text-based commands and choices. The AI system produces a distinctive story outcome based on the players' inputs, providing an evolving and personalized gaming experience of exploring and creating stories in pre-set scenes.
Moreover, Kreminski et al. <cit.> developed "Why Are We Like This?", a mixed-initiative, co-creative storytelling game that aimed to engage players in investigating the generated history of characters and to bring the story to a satisfying conclusion by selecting and writing actions for the characters. The game involves designed author goals, proper AI suggestions, and player curiosity to encourage co-authorship.
While the term "play" is commonly used to denote the interaction between human and mixed-initiative interfaces, it is essential to recognize that "games" bear distinctive dissimilarities from play, as they feature unambiguous goals that encourage participants to engage in the interpretation and optimization of rules and tactics <cit.>. The introduction of goals into a system serves as the most straightforward means of distinguishing mixed-initiative co-writing games from mixed-initiative co-writing interfaces.
Xi et al. <cit.> introduced KuiLeiXi, an open-ended text adventure game that required players to interact with the AI to achieve predetermined plot goals. The system was developed to address the lack of incentives for players in AI Dungeon.
Additionally, Ben Samuel et al. <cit.> created a mixed-initiative playful tool, Writing Buddy, that integrates the affordances of both authoring and playable media to support creative writing endeavors. The game mechanics prompted players to engage in a puzzle-solving-like experience, where they possessed the freedom to add or eliminate story beats to alter the characters' states within the game and attained the pre-determined narrative goal.
Building upon the concept of Writing Buddy, Kreminski et al. <cit.> developed Loose Ends, a mixed-initiative co-creative storytelling play experience that incorporates obligatory storytelling goals that can be parameterized with specific characters and additional constraints. In addition, the AI system in Loose Ends ensures consistency with all previous texts, which emulates the functionality of an active writing partner.
The present state of mixed-initiative co-writing games suggests that their full potential has yet to be realized, as they continue to rely on interactions that overlap with mixed-initiative interfaces. While the mobile game designed by Castaño et al. <cit.> represents a step forward in this field, enabling users to collaboratively create a story by arranging a card-game-like system, further exploration of combining mixed-initiative interfaces and game mechanics is required.
CHAPTER: SNAKE STORY
To ground my study, a mixed-initiative game named "Snake Story" was designed and developed in the Unity3D engine. As shown in Fig. <ref>, the game consists of 2 parts: the non-game version and the game version. The game allows players to write different stories under the same prompt, “writing a story of a snake”, with GPT-3 generated texts in different turn-based interactions. The "text-davinci-003" model [https://platform.openai.com/docs/models/overview last accessed 03.06.2023] was employed in the system to generate the text.
§ NON-GAME VERSION
As illustrated in Fig. <ref>, the non-game version of the system functions as follows: players are presented with two 30-word text options with different temperatures (0.6 and 1.4) generated by GPT-3 in each turn. If they wish to continue the story, they can select one of the options, which will be automatically added to the narrative. Alternatively, players can opt to compose their own text to continue the story if they are dissatisfied with the AI-generated options. In the subsequent turn, GPT-3 generates two fresh text alternatives for the players to choose from. Once the players decide to end the story, GPT-3 assists in linking the narrative to the predefined ending: ", and the story of the snake ends" with a maximum of 80 words.
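A minimal sketch of how the two candidate continuations could be requested is shown below, using the legacy (pre-1.0) openai Python client against the "text-davinci-003" completions model; the actual game issues these calls from Unity3D, and the prompt wording, token limit, and function name here are assumptions for illustration only.

    import openai  # legacy (pre-1.0) client interface

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def next_options(story_so_far):
        """Return the two candidate continuations offered in each turn."""
        # The thesis states the topic prefix "writing a story of a snake";
        # the exact prompt formatting is an assumption.
        prompt = "Writing a story of a snake.\n" + story_so_far
        options = []
        for temperature in (0.6, 1.4):        # selection 1 vs. selection 2
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=40,                 # roughly a 30-word continuation (assumed limit)
                temperature=temperature,
            )
            options.append(response["choices"][0]["text"].strip())
        return options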
As depicted in Fig. <ref>, the interface of the non-game version presents the two GPT-3 generated text options on the left side of the screen, accompanied by square buttons containing labels to their left. Additionally, an input field is positioned beneath the text options, enabling players to contribute their own textual content. Once the GPT-3 generation process is completed, the button adjacent to the input field becomes interactable. Players can then click this button to incorporate their selected text into the ongoing narrative, marking the initiation of a new turn. Moreover, an "End" button is situated underneath the text options, providing players with the means to end the story.
§ GAME VERSION
In contrast to the non-game version, the game version of the system employs "Snake"-game-like mechanics as a metaphor for adding paragraphs to a story, as demonstrated in Fig. <ref>. In the game version, players are still presented with two selections of texts. However, these texts are now represented by candies positioned on a 15*15 tile map, each of which possesses unique mechanics. To add a text to the story, players must navigate a growing snake towards the corresponding candy, which triggers the addition of the selected text to the narrative, along with the application of the corresponding mechanics to either the player or the game map. Players are unable to terminate the story unless their life points become exhausted.
As shown in Fig. <ref> a), the game is played on the left-hand side of the screen, while the story is displayed on the right-hand side. The player's life points are located at the bottom-left corner of the tile map. The two text selections, along with their corresponding candies, are displayed under the story. A countdown scrollbar for the pause time is located between the story and text selections, and the game pauses momentarily when new candies and texts appear. Once a player collects the special candy (Blue), they are given the opportunity to contribute to the story by writing their own text. As shown in Fig. <ref> b), an input field will appear under the 2 text selections, and a corresponding yellow candy will be generated on the map.
As shown in Fig. <ref>, seven different tiles are designed in the game, comprising six types of candies and one obstacle. The candies are divided into two pools: pool 1 for text selection 1 and pool 2 for text selection 2. Candies with neutral and negative effects are designed for pool 1 and are indicated by negative colors. The white candy, with neutral mechanics, will only increase the snake's length by 1, while the black candy will additionally introduce three extra obstacles on the map. Furthermore, the red candy will decrease the player's life points by 1. Pool 2, on the other hand, features candies with neutral and positive effects as counterparts to the negative candies, indicated by positive colors. The green candy will add 1 life point, while the blue candy will permit players to write their own text in the next turn, as demonstrated in Fig. <ref>. After each turn, three obstacles will be added to the map to increase the difficulty level.
In order to investigate the influence of game mechanics on players' text choices, the temperature for selection 1 and candy pool 1 was intentionally set lower (0.6) than that for selection 2 and candy pool 2 (1.4). This decision was made to improve the quality and stability of text output in selection 1. Considering players’ average reading speed and the usage of the think-aloud protocol, the game will be paused for 25 seconds each time when players get new texts. This pause duration will be extended to 45 seconds when players wish to write their own text to add to the story. Players can choose to end the pause early by clicking the buttons adjacent to the text selections, similar to how they would end their turns in the non-game version.
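To make the candy mechanics concrete, a small sketch of how an eaten candy's effect could be applied to a minimal game state is given below; the actual game is implemented in Unity3D, so the Python structure, dictionary keys, function names, and the assumption that every candy grows the snake by one are purely illustrative.

    def apply_candy(state, color):
        """Apply the mechanic of an eaten candy to a minimal game-state dict.

        state: {"length": int, "lives": int, "obstacles": int, "can_self_write": bool}
        color: one of "white", "black", "red", "green", "blue", "yellow".
        """
        state["length"] += 1                    # assumed: every candy grows the snake by 1
        if color == "black":
            state["obstacles"] += 3             # black: three extra obstacles
        elif color == "red":
            state["lives"] -= 1                 # red: lose 1 life point
        elif color == "green":
            state["lives"] += 1                 # green: gain 1 life point
        elif color == "blue":
            state["can_self_write"] = True      # blue: unlock self-writing next turn
        # white is neutral; yellow carries the player's own text and is treated as neutral here.
        return state

    def end_of_turn(state):
        """Three obstacles are added to the map after each turn."""
        state["obstacles"] += 3
        return state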
When players’ life points become 0, the interaction and the story will end. As shown in Fig. <ref>, players will enter a result page. On the right-hand side of the screen, the full story with an automatically generated ending will be displayed. Additionally, the interface will indicate the length of the snake and story, as well as provide information on the types of candies consumed by the player during gameplay.
CHAPTER: USER STUDY
§ PARTICIPANTS
To research how different players interact with Snake Story, 11 Game Design students (n=11, referred to as P1-P11) from Northeastern University were recruited through a Discord poster to play the game. Given that players' writing and AI experience may contribute to distinct perceptions of the game <cit.>, the study recruited a diverse cohort of participants with varying levels of writing proficiency and experience collaborating with AI. All participants volunteered for the study and were not compensated.
§ PROCEDURE
The study was designed as a within-subject investigation, whereby each participant was assigned to play both the non-game version and the game version of Snake Story in random orders. In each session, the participant was given a brief tutorial on the game mechanics and interface and was then instructed to compose a 300-word story about a snake with AI. The participant was also required to engage in think-aloud protocols during the 10-to-15-minute gameplay. Subsequently, the participant was asked to complete a 5-Likert scale usability questionnaire. Following the completion of two sessions, the participants would participate in a semi-structured interview lasting approximately 5-10 minutes, in which they shared their interaction experiences. Finally, participants were asked to complete a demographic survey, which included questions about their writing and AI experience.
§ EVALUATION
In the game, each text selection generated by GPT-3 was captured and stored. Moreover, the game also recorded the players' selection of texts and the stories they created. To further evaluate the user experience quantitatively, the usability questionnaire incorporated queries on the quality of the generated text, the overall story, and the user's interaction experience.
These collected data were subjected to quantitative analysis, including the use of Wilcoxon signed-rank tests to compare the results from the two versions of Snake Story.
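A minimal sketch of the paired comparison described above is shown below, using scipy's Wilcoxon signed-rank test; the per-player values are placeholders rather than the actual study data.

    from scipy.stats import wilcoxon

    # One paired measurement per participant (n = 11); placeholder values only.
    non_game_version = [5, 7, 4, 6, 8, 5, 6, 3, 7, 6, 6]   # e.g., low-temperature picks
    game_version     = [3, 5, 2, 4, 6, 3, 5, 2, 4, 5, 4]

    statistic, p_value = wilcoxon(non_game_version, game_version)
    print(f"W = {statistic:.1f}, p = {p_value:.4f}")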
During the study, the screen was recorded to capture the participant's interactions with the game, while the audio was recorded to generate transcriptions of the think-aloud protocols and interviews. The resulting data were analyzed using a qualitative approach based on open coding <cit.>, allowing for a thorough exploration of the participants' experiences and interactions with the game.
CHAPTER: QUANTITATIVE RESULTS
§ USAGE STATISTICS
[1] Colors in the game version candies row match candy colors mentioned in Section <ref>
11 players wrote a total of 22 stories about snakes in 2 versions of the Snake Story. The total number for each detailed statistic with an average number (M) and standard deviation (SD) are reported. As shown in Fig. <ref>, the players made a total of 130 choices (M = 11.82, SD = 1.80) in the non-game version. Of these, the generated texts with a lower temperature (0.6) were selected 63 times (M = 5.73, SD = 1.76), while the generated texts with a higher temperature (1.4) were selected 53 times (M = 4.82, SD = 2.48). Additionally, the players chose to write their own words 14 times (M = 1.27, SD = 2.05). On average, the players spent 49.14 seconds (SD = 13.13) making decisions in the non-game version.
Correspondingly, the players made a total of 142 choices (M = 12.91, SD = 4.50) in the game version. Of these, 0.6 temperature texts were selected 43 times (M = 3.91, SD = 1.98), while 1.4 temperature texts were selected 89 times (M = 8.09, SD = 4.72). Players chose to write their own words 10 times (M = 0.91, SD = 1.83). On average, the players spent 27.33 seconds (SD = 7.69) making decisions in the game version.
In the game, 91 white candies were generated, 42 of which were selected (46.15%); 50 black candies were generated, 18 of which were selected (36.00%); 47 red candies were generated, 11 of which were selected (23.40%); 46 green candies were generated, 31 of which were selected (67.39%); 47 blue candies were generated, 30 of which were selected (63.83%); 40 yellow candies were generated, 10 of which were selected (25.00%).
Wilcoxon signed-rank tests were conducted to compare players' selection and time usage differences in the 2 versions. The test results showed that there was no significant difference in the total number of selections made by players (W(11) = 29.0, p = 0.76). However, the test results showed that game mechanics significantly affected players' choices for 0.6 temperature texts (W(11) = 7.0, p = 0.035). By contrast, it was worth noting that players' choices for 1.4 temperature texts (W(11) = 10.0, p = 0.14) had no statistically significant differences. Moreover, no significant differences were found in self-writing (W(11) = 2.5, p = 0.16) choices between the two versions. Additionally, the analysis indicated that players made decisions significantly faster in the game version (W(11) = 2.0, p = 0.0029).
§ STORY EVALUATION
The stories in the non-game version had an average of 260.64 words (SD = 35.61), while the stories in the game version had an average of 272.64 words (SD = 64.22). There was no significant difference in the length of the stories between the two versions (W(11) = 28, p = 0.70).
Automated writing evaluation tools <cit.> were employed to assess the cohesion, grammar, language diversity, and overall writing quality of the 22 stories.
Cohesion was evaluated using two metrics obtained from the Tool for the Automatic Analysis of Cohesion[https://www.linguisticanalysistools.org/taaco.html last accessed 03.08.2023] (TAAOC) <cit.>: the sentence overlap rate (S. Overlap) and the paragraph latent semantic overlap rate (P. LSA).
The Grammar and Mechanics Error Tool[https://www.linguisticanalysistools.org/gamet.html last accessed 03.08.2023] (GAMET) <cit.> was utilized to detect the number of grammatical errors in the texts.
In order to assess the language diversity of the writing, the Tool for the Automatic Analysis of Lexical Diversity[https://www.linguisticanalysistools.org/taaled.html last accessed 03.08.2023] (TAALED) <cit.> was employed. This tool was chosen for its ability to provide a reliable metric for the measure of textual lexical diversity (MTLD) <cit.>.
Finally, GPT-3[https://chat.openai.com/chat last accessed 03.08.2023] itself was used to provide an overall score for the stories on a scale of 0 to 10 <cit.>.
The results of the evaluations are shown in Table <ref>. The results from the Wilcoxon signed-rank test indicated that the change in text selection preference between versions may impact the cohesion of paragraphs within the stories (W(11) = 6.0, p = 0.014). However, no other significant differences were found in the stories between the two versions.
Furthermore, the players were requested to assess the stories they wrote in the 5-Likert scale questionnaire. The results of this evaluation are presented in Figure <ref>. The additional Wilcoxon signed-rank test results indicated that there were no significant differences in the language used in the stories between the game and non-game versions (W(11) = 12.0, p = 0.19). However, the logic of the stories in the game version was significantly weaker than that in the non-game version (W(11) = 0.0, p = 0.0096). Moreover, the overall quality of the stories in the game version was significantly lower than that of the non-game version (W(11) = 3.5, p = 0.020).
§ QUANTITATIVE EXPERIENCE REPORT
As shown in Fig. <ref>, through the implementation of Wilcoxon signed-rank tests on the questionnaire data, it was observed that players had significantly less authorship of the story in the game version (W(11) = 3.0, p = 0.031). Furthermore, the players showed a significant difference in their interaction goal between the two versions, as they placed a greater emphasis on prioritizing the quality of the stories in the non-game version (W(11) = 2.0, p = 0.040). Nonetheless, no significant statistical difference was detected in their preference for the stories across the two versions (W(11) = 4.0, p = 0.083).
Additionally, players rated their interaction experience in the questionnaire. As shown in Fig. <ref>, players had significantly different perceptions between the two versions. Specifically, the game version was perceived to be significantly more complex compared to the non-game version (W(11) = 0.0, p = 0.039). Additionally, interactions within the game version were reported to have a significant impact on the creation process (W(11) = 0.0, p = 0.00098), whereas the creation process in the game version was considered to be less engaging (W(11) = 2.5, p = 0.047).
CHAPTER: QUALITATIVE RESULTS
§ INTERACTION PATTERNS IN THE NON-GAME VERSION
§.§ Text Selection Strategies
Analysis of the think-aloud data from players in the non-game version yielded 124 open codes, which were grouped into 5 distinct categories based on the explanations the players provided for their choices. These categories are language (24), consistency (69), unexpectedness (17), self-writing reasons (12), and other situations (2).
§.§.§ Language
Players tended to choose texts of higher language quality, particularly those containing detailed descriptions, elaborate adjectives, and emotional expressions (19). For example, P9 mentioned "Although both 2 texts were cohesive to the previous texts, the description of snake behavior in Text 1 is more specific." when selecting between "...to explore the inside, and soon found himself submerged in the knee-depth water. He made his way from lily pad..." and "...and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters. As...".
Additionally, players demonstrated a preference for texts that were well-structured and composed with a professional tone (5). For instance, P3 mentioned "I think the 2 selections are similar, but the second one is more professional and I will go with this one." when selecting between "...He had seen many creatures come and go in his long life, but he was content with his own company. He kept to himself, and the..." and "...Named Anaconda, known by every passing creature in pursuit of warmth. One could often hear laughter ringing near it’s solace...".
§.§.§ Consistency
Players preferred the texts that aligned with their pre-determined tone (24). As an illustration, P5 pointed out "I would select 1 because 1 is more like a start of a fairy tale. I do not want to write a realistic story so I will choose 1." when selecting between "Once upon a time, there was a small snake who lived in the forest. She was very curious and loved to explore her surroundings. One day..." and "Once there was a green, spotted snake who mind made his home in the deep parts of lush tropical jungle. This snake was quite different than other...".
Moreover, players demonstrated a proclivity towards selecting texts that unfolded the story in their anticipation (15). Such as P2 stated "I wanted to put my snake in a safe situation, (because) I don't want my snake to die. (Choose 1)" when choosing from "...-green scales glinted in the sun. Alice was sure that this snake wasn't dangerous, and she certainly didn't want to..." and "...shadowed a lower tear of its cheekbones. 'Hello there,' She eagerly greeted the glowing-eyed serpent and for just a few...".
Furthermore, players exhibited an inclination towards texts that maintained coherence with preceding texts, specifically those that exhibited sound and expandable logic (30). As an instance, P7 said "I think a snake cannot wear the necklace. Also, the golden necklace is so luxurious that it does not seem like something a bird that has just been saved would have." when selecting between "...special gift. It was a beautiful golden necklace with a single ruby in the center. Slither was amazed by the gift and decided to wear it around..." and "...(a)n Acorn sap, that would grant Slither fortune to any human being wished aide of him. Slither couldn't wait to tell the...".
§.§.§ Unexpectedness
Players displayed a preference to select unexpected texts that featured fresh settings, new characters, and surprising plot twists (11). To illustrate, P3 explained "I think 1 is more fun. It has new people (objects). 2 just mentioned familiar faces. I don't like familiar faces" when choosing between "...it was surprised to find a new world of exotic animals, plants and trees. It found itself in an oasis full of life and beauty,..." and "...it met the familiar faces, who watched without any hesitation. The explorative beast evolved into steps of understanding slowly and carefully forging relationships while exchange...".
In addition, players showed a propensity to select texts that possessed a sense of suspension regarding their potential narrative developments (6). For example, P11 said "I want to see where this goes. I chose this (2) because it's messed up, and I want to see if it becomes more messed up if I choose the messed up option." when selecting between "...I grant ye the power to control the elements of this world, but only when you accept my blessing.' George was terrified and uncertain what..." and "...Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites...".
§.§.§ Self-writing Reasons
In the situation that neither of the presented text options fulfilled their preferences, players were observed to write their own content, which frequently drew inspiration from the provided text selections (6). As an example, P10 mentioned "The first one is cheap...I like the 'anger' part (in 2), but then I don't like the 'mouth puckering', so maybe I can do that..." when facing "...and eventually let out a soft purr before turning away and walking off into the distance. The snake was relieved that he had been spared,..." and "...and anger never leaving his eyes. He narrowed his focus to the powerless serpent before him, is mouth puckering upward ready to end this mercy mission of...", but finally writing "..., but anger never leaving his eyes. It calmed down eventually and let out a soft purr before turning away and walking off into the distance. The tiny snake was relieved that he had been spared,..."
Players' desire to play with the AI was another factor that motivated them to write their own content (6), as will be illustrated in Section <ref>.
§.§.§ Other Situation
In a rare situation, players indicated satisfaction with both text options and selected one randomly (2). P7 said "I think both of the selections were good and appealing to me. Can I randomly choose one of them? (After performing a random selection procedure,) OK, I will go with 1." when choosing between "...other animals of his luck, but first he wanted to test the sap. He poured some onto a nearby rock and wished for more food. Suddenly,..." and "...animals in the Kindgdom about this wonderful gift! He spread word quickly, and sure enough many of his animal friends began asking for his help....".
§.§ Role Perceptions and Play Strategies
Five players use AI as a writing assistant (WA) to support their writing. Three of these players, who identify themselves as writers, believed that they had made the majority of the contributions to the story. For example, P5 said "Well, even though the AI generated most of the content, I still feel like I had a significant role in creating the story because I made the choices on how the AI wrote. So, I believe that I can claim authorship of this story." in the interview. The other two players claim less authorship of the story and describe themselves as "puppet masters" according to P3. P6 also shares this sentiment, stating 'I think I am just providing prompts, and the AI can help me to link them.'.
Four players consider AI to be an active writer (AW) that provides stories for them to read. They would describe themselves as readers of an interactive storybook, where the AI is the author, and they are the audience. For instance, P10 mentioned "I'm not planning on writing much on my own. I'm actually more interested in seeing what the AI comes up with." before starting to play the non-game version.
The two remaining players consider AI as a playful tool (PT) and engage in challenging or tricking the AI by actively using self-writing functions to generate unexpected or amusing outcomes. They view AI-generated texts not only as a means of co-creation but also as a source of entertainment, exploring the limits and capabilities of the system. To illustrate, P6 mentioned "I think AI is pretty good at generating texts based on cohesive inputs, but I'm curious to see how it can handle unexpected situations. So I'm gonna test it out by seeing what happens if I just kill off the snake in the story and let the AI continue from there." when adding the sentence "The snake is dead." at the very beginning of the story.
§ INTERACTION PATTERNS IN THE GAME VERSION
§.§ The Effect of Mechanics
All players acknowledged that the mechanics had a significant impact on their co-writing process.
The overall game design, particularly the time limit mechanics, had a significant influence on how the players read the generated texts. Two out of eleven players reported that they never read the generated text. For instance, P5 mentioned that "Given that it's a game, I'm not particularly concerned about what the AI writes. My main focus is on the gameplay itself." By contrast, four players read the text in its entirety, but only intermittently as they controlled the snake to avoid obstacles. For example, during the 5th round, P11 commented, "I think I can find a safe path for my snake to stay in, and then I can have extra time for reading the texts. Oh, this works!". The remaining five players opted to give the generated texts a quick scan. To illustrate, P8 mentioned "So basically, I just skim through the text real quick cause I also need to focus on figuring out how to get my snake to chow down on what I picked out for it at the same time." in the interview.
Additionally, the candy mechanics influenced players' choice strategies. Despite their low-quality text, the green and blue candies (good candies) are particularly attractive to players.
To illustrate, P2 mentioned "I would more likely go for the green candy to regain my lost HP and keep myself alive in the game for a bit longer." while playing the game. By contrast, black and red candies (bad candies) are rarely chosen by players. For example, P7 mentioned that "Even though the texts in the black candy are better, I'm not really keen on making the game more challenging. Plus, the white candy's text is good enough for me." However, in situations where white candies are present alongside white or "good" candies, or when a player's health points are at a safe level, they are more likely to apply their selection strategies from the non-game version for text content. Such as P11 said "The (black) one I chose just now was the more sad option, and I am choosing to make this snake's life sad." selecting between "...The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better..." (black) and "...Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in..." (blue).
Furthermore, the intentional design of the obstacles in the game resulted in a notable increase in emotional arousal among players during the co-creation process. Players were found to experience a range of negative emotions, such as tension and frustration, when attempting to navigate and avoid obstacles, or when inadvertently colliding with them.
§.§ Role Perceptions and Play Strategies
Despite all players identifying as "players" in the game version, their respective play strategies exhibited significant variation.
The majority of the players (7) made trade-offs (T) between game mechanics and writing. These players aimed to uphold the story's integrity but were willing to compromise its quality if it meant prolonging the snake's lifespan in the game. As an illustration, P11 mentioned "I'm really tempted to pick the red option, but I know it'll end up killing me, so I'm gonna have the other one. I'd rather keep myself alive in the game to see more stories."
However, four of the players ignored (I) the writing system and merely focused on the "Snake" game. These players indulged in either the good or bad candies exclusively during gameplay, purely maintaining the life of the snake or increasing the difficulty of the game for fun. For example, P1 stated "Even I want to choose the texts but the mechanics keep me away. To be honest, I'd much rather focus on enjoying the gameplay rather than putting effort into crafting a compelling narrative." in the interview.
§ PREFERENCE
[1]AI experiences: N (No), Y (Yes); Writing experiences: R (rich), P (Poor), N (NO); Non-game Version (V.) Role Perception (RP): WA(Writing Assistant), AW (Active Writer), PT (Playful Tool); Game Version (V.) Play Strategies (PS): I (Ignore Writing), T (Make trade-offs); Preference: NG (Non-game Version), G (Game Version), - (No preference)
As shown in Table <ref>, the largest group of players (5) did not demonstrate a discernible inclination towards either the non-game version or the game version. While they believed that the non-game version was more suitable for serious writing, they found the game version to be more entertaining and enjoyable. For example, P5 mentioned "If I wanna write a story, I would choose the first one (the non-game version). But for fun, I would play the second one (the game version)." in the interview.
However, three players (P1, P6, and P8) expressed their strong dislike for the game version, stating that it significantly impaired their creation and reading process. For instance, P8 explained "I didn't think it was as fun as the other version of the game. I thought it was a little stressful ... if you enjoy that type of narrative (reading or writing a story as it unfolds), I think the first one is, gonna be more appealing." in the interview.
Nevertheless, the remaining three players (P7, P9, and P11), who had neither AI nor writing experience, expressed their strong admiration for the game version. They believed that the challenges presented in the game version increased their engagement in the creation process. As an illustration, P9 stated "I like the game version more. I think the challenge in the game makes me more engaged in the interaction. The sense of tension in the game version makes it harder for me to consider each selection thoroughly. This means I'm always looking forward to the next choice, hoping to make better decisions than before." in the interview.
CHAPTER: DISCUSSION
§ RESOLVE PLAYING AND CREATING CONFLICTS
Designing mixed-initiative games with consideration for the potential conflicts between gameplay and creative content generation is essential to promote engagement in the co-creating process. Mechanics that allow for both play and creativity to coexist can encourage players to develop their own unique stories and experiences within the game world. Specifically, as discussed in Section <ref> and Section <ref>, clear rules and mechanics in Snake Story can pose additional challenges for players who wish to engage in creative content generation, particularly when their writing goals (write a better story) conflict with the playing goals of the game (live longer). To mitigate such conflict, exchanging the temperature between good and bad candies can incentivize players to focus on both keeping the snake alive and generating high-quality stories. However, it is important to note that some intrinsic conflicts between playing and creating cannot be easily resolved through such parameter adjustments. In such cases, more specialized and deliberate mechanics must be designed. For example, Snake Story has an emergent endpoint when players run out of life points, whereas players' stories may continue, making it difficult to determine a definitive end for them. One possible solution for this issue can be a Neo-rogue-like game system with permanent death mechanics <cit.> that enables players to continue creating a larger story despite dying multiple times.
§ INCREASE NARRATIVE ENGAGEMENT IN PLAYING
Developing a tight narrative link between game mechanics and co-created content is a crucial factor in augmenting the participants' sense of immersion in mixed-initiative games. Although Snake Story was designed based on a metaphorical representation of the manipulated snake as the snake in the story, a majority of the players (n=7) expressed their dissatisfaction with the perceived disconnection between the game and the narrative.
Two possible directions can be applied to Snake Story as well as future mixed-initiative games. The first direction involves simulating the creative process of renowned writers, such as Shakespeare, in crafting a story. This would involve modeling how such writers generate and develop various ideas, unfold plots, and navigate potential challenges in their writing process. In the game, AI would be leveraged to simulate the thought processes of these writers <cit.>, while game mechanics can enable players to actively participate in the co-creation of the story by engaging in this abstract thinking process.
Alternatively, players can be cast as the main character of the co-created story. This can be accomplished through an interactive drama game design <cit.>, wherein players take on the role of the protagonist and make consequential decisions that influence the story's direction. To enhance player immersion and emotional investment in the story, personalized elements reflecting the player's experiences and characteristics can be integrated using AI. However, since players' interests align with those of the characters, conflicts between playing and creating must be resolved through additional mechanics.
§ ENHANCE EMOTIONAL INVOLVEMENT IN CREATING
To mitigate player frustration, mixed-initiative games should incorporate a degree of flexibility that allows players to manage unforeseen emergent events that may arise during gameplay or the creative process. For instance, in Snake Story, as discussed in Section <ref>, players experienced frustration when they were unable to allocate sufficient time to planning the story and maneuvering the snake simultaneously. To address these concerns, a mechanic could be incorporated that enables players to conserve unused time during easier situations and then utilize it during more challenging scenarios. This flexible design can decrease player frustration by introducing a feeling of control, while still retaining the intensity of the gameplay experience.
Moreover, given that mechanics have the potential to exert a noteworthy influence on players' co-creation strategies, mixed-initiative games can employ incentivization through game mechanics as a means of fostering engagement in the co-writing process. For example, in the Snake Story, favorable outcomes can be associated with the acquisition of the yellow candy, thereby stimulating players to generate their own textual content.
§ BALANCE PLAYING AND CREATING
Just as traditional games strive to keep players in an optimal state of flow <cit.>, mixed-initiative games should maintain a good balance between playing and creating. Gaming and creative endeavors rely on different positive feedback mechanisms: gaming requires short-term, rapid feedback, while creative endeavors often involve long-term, slow feedback. Since mixed-initiative games require players to engage with both game mechanics and creative content generation, it is crucial that the game design facilitates a smooth transition between these two modes during gameplay. This can be achieved through thoughtful design of factors such as game pacing and player agency.
Furthermore, a well-designed mixed-initiative game should provide players with appropriate guidance and tools to enable them to create meaningful and enjoyable content, without feeling overwhelmed by the creative demands of the game.
In addition, it is imperative to account for individual differences when designing mixed-initiative games. As discussed in Section <ref>, different players may require distinct interaction strategies, so a tailored approach is needed to maintain an optimal playing-creating flow. The AI should also consider each player's unique creating strategies (as described in <ref>) to generate personalized content that aligns with their writing goals. Such player-centric AI content generation can help keep players in the flow by reducing low-quality options and providing uplifting text at the appropriate time.
§ FIND NEW EVALUATION CRITERIA
To achieve a unified experience of creating and playing in mixed-initiative games, it is crucial to establish novel evaluation criteria that can fairly assess players' creative behavior, because an unfair assessment may lead to player frustration and undermine the gameplay experience. While automatic writing evaluation <cit.> was demonstrated in the study as a post-hoc method for assessing the stories, its applicability to evaluating writing quality within the game may be limited by its statistical nature, which may not suit an individual player's writing and does not capture subjective player perceptions. Furthermore, real-time human evaluation is not a feasible option. A potential solution could involve developing a novel algorithm to evaluate players' work automatically; alternatively, a better approach could incorporate game mechanics that allow players to self-evaluate or rate each other. However, the effectiveness and feasibility of these approaches need further investigation.
Additionally, while current evaluation criteria for traditional games may still apply to some extent, mixed-initiative games involve unique features and require new criteria that account for both the game mechanics and the effectiveness of the mixed-initiative interface. Specifically, it is important to assess how the game mechanics are dynamically combined with the mixed-initiative interface. Nevertheless, the evaluation of mixed-initiative games is still an area that requires further research to establish effective criteria and methodologies.
CHAPTER: CONCLUSION
In conclusion, this paper presents a prototype of a mixed-initiative game, Snake Story, aimed at exploring the gamification of co-creation interactions between humans and AI. The study involved 11 participants, who were asked to play two versions of the game, with and without game mechanics.
The findings suggested that mechanics might significantly influence players' creative processes and self-identifications in the game. Additionally, players with different backgrounds showed different preferences for the two versions. Overall, the study highlights the potential of gamification in making human-AI interactions accessible and fun for players and provides valuable insights for the design of engaging co-creation experiences.
CHAPTER: STORIES
§ SAMPLE STORY 1 (GAME VERSION P8)
Once upon a time, there was a snake who lived in the woods. He had never seen another creature like himself, but he enjoyed living among the trees and eating the small animals that lived there.
One day, he noticed something strange in the distance - a long, winding line of creatures like himself! He decided to investigate, and as he got closer, he realized that the line was a group of snakes making their way through the forest.
He was so excited to see other snakes like himself that he quickly joined the procession, slithering alongside them as they moved through the forest. Along the way, he felt particularly drawn to one snake in particular, who would often look back and pause for just a moment as if to acknowledge his presence.
After a few days, the procession reached its destination - a beautiful, secluded lake in the middle of the forest. The snakes quickly dispersed, but the snake that he had been drawn to stayed behind and waited for him.
The two of them shared a moment above the waters as they looked into each other’s eyes, acknowledging their instant connection. From then on, it was only the two of them and their limitless adventures among the trees and by the lake, both content to live life together in this idyllic home.
§ SAMPLE STORY 2 (NON-GAME VERSION P8)
Once upon a time, there was a small snake named Lucy. She lived in the woods near a small village and often ventured out during the night when things were still and quiet. Every day was the same for Lucy, scampering among the earthy loam of the forest floor in search of insects and grubs to satisfy her hunger.
But one night on her usual midnight march, something stopped Lucy in her tracks – a basket of fruits, vegetables and other goodies had been left outside the village gates. Lucy was curious and hungry so she slithered closer to investigate. As she inched closer, Lucy noticed that the basket was guarded by a large and intimidating snake. He had a long body with shimmering golden scales and a sharp, pointed tail. Lucy knew that this was no ordinary snake – it was a cobra!
The cobra noticed Lucy and coiled itself around the basket as to challenge her. Even with her tiny size, Lucy stood up and faced off against the cobra. Still her bravery paid off and the cobra slithered away, allowing Lucy to feast on all the goodies inside.
From that day forward, Lucy became known as the brave little snake who stood up against a cobra. She was respected and admired by all of her forest friends, and even the villagers began to leave treats outside the gates for her. Lucy lived a long and happy life in the woods, always remembered as the brave and intrepid, little snake.
§ SAMPLE STORY 3 (GAME VERSION P9)
Once, there lived a majestic green snake in the heart of a untouched forest. Its piercing fire suffused its emerald body as it knowingly crawled through the foliage.
The snake had a special affinity for humans and often followed them around their camps, watching from afar as they cooked, talked, and laughed. It had no fear of them and often interacted with them in a friendly manner - though some people were scared of it because of its size.
One day, the snake was out exploring a new part of the forest when it stumbled across a mysterious stone altar with strange symbols carved into it. It was intrigued and decided to investigate further, only to find that the altar held a powerful magical gem.
The snake quickly realized that the gem had the power to grant wishes, and it began to think of all the things that it could wish for. After much deliberation, it decided that it wanted to fly so that it could see the world beyond its forest home. So, with a passionate final wish, the snake found itself rising into the air and soaring through the sky.
It was a liberating experience for the snake, and it enjoyed every second of its newfound freedom. From that day forward, the snake was able to explore distant lands and experience new cultures. It even made friends with other animals along its journey.
The snake was truly happy, and it would never forget the day it found that magical gem.
§ SAMPLE STORY 4 (NON-GAME VERSION P9)
Once upon a time, in a grassy meadow surrounded by forest hills, wove the adventurous and playful snake named Oscar. He was brown and yellow in colour, with a white diamond pattern on his back.
Oscar was always looking for new places to explore, so one day he decided to wander through the forest hills. He slithered up and down the dirt paths, taking in all of nature's beauty around him.
As he kept moving, Oscar noticed a small pond in the middle of the forest. He decided to take a closer look and when he got there, he was in awe. The pond was crystal clear and filled with lily pads and colorful fish.
Oscar couldn't resist the temptation and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters.
As the sunlight dipped behind the trees, Oscar decided to pay one last exploration visit before he finished up for the day. As he rounded a corner, he came face to face with a giant boa constrictor. The two were both startled, but the boa quickly wrapped itself around Oscar and squeezed tight.
Oscar was scared and he knew that his life was in danger, but then something amazing happened. The boa constrictor loosened its strong grip and released Oscar without hurting him. The two then just looked at each other, as if they were communicating something neither of them could understand.
Oscar was so surprised by the snake's kindness and he knew that this experience had changed him forever. From then on,t the world around him was full of wonders, just waiting for him to discover.
CHAPTER: LOG DATA
§ SAMPLE LOG DATA 1 (GAME VERSION P11)
[3/6/2023 7:29:47 PM]Game Start
[3/6/2023 7:29:48 PM][1][3]Once lived in an old abandoned house in the center of a busy city, was a black-bellied golden snake. Day by day he would
[3/6/2023 7:29:48 PM][0][1]Once upon a time there was a snake named Slither who lived in the forest. He had been living peacefully and happily until one day, when
[3/6/2023 7:30:03 PM]Chose[1][3]
[3/6/2023 7:30:04 PM][1][0] scour up and down the crevices of this broken home unnoticed. Occasionally it crawled out into the scared city ridden society to cultivate a small hoard
[3/6/2023 7:30:04 PM][0][1] slither around the house, exploring every corner and crevice.
One day he was out for a stroll in the garden of the old house
[3/6/2023 7:30:17 PM][2][5]
[3/6/2023 7:30:21 PM]Chose[1][0]
[3/6/2023 7:30:22 PM][1][3] of food just enough to keep warm. As chilling winter winds slowly rolled in and darkness increased, no bouts of sunlight hit the grimy abode itself
[3/6/2023 7:30:23 PM][0][1] of food.
One day, the snake decided to venture out a bit further into the city than usual. He was mesmerized by all
[3/6/2023 7:30:52 PM]Chose[1][3]
[3/6/2023 7:30:53 PM][1][4].
Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in
[3/6/2023 7:30:53 PM][0][1].
The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better
[3/6/2023 7:31:19 PM]Chose[0][1]
[3/6/2023 7:31:21 PM][1][3] of it and so, he slithered its way inside towards the commotion. To his surprise were two other snakes - one bright orange spotted snake that
[3/6/2023 7:31:21 PM][0][2] of it, and the snake decided to investigate. As it slithered its way up the stairs, towards the source of this noise, its scales began
[3/6/2023 7:32:09 PM]Chose[0][2]
[3/6/2023 7:32:10 PM][1][4] a tapestry of gold and red waves in the sometimes pitch twilight. When finally it made its way to place from where this racket fell on its
[3/6/2023 7:32:10 PM][0][0] to tremble in fear. It was a human, and it seemed as if he had found the snake's secret hideout.
The man
[3/6/2023 7:32:44 PM]Chose[1][4]
[3/6/2023 7:32:46 PM][0][2] ears, it was astounded by what it saw.
There in the center of this dingy room stood a small boy, no older than
[3/6/2023 7:32:46 PM][1][4] ears, what it encountered swore changed his life forever.
Two people lay huddled next to in the interlocked embrace; poor souls everywhere had
[3/6/2023 7:33:12 PM][2][5]he snake had a massive concussion and it couldn't move for a few minutes.
[3/6/2023 7:33:16 PM]Chose[2][5]
[3/6/2023 7:33:17 PM][0][0]
When it finally regained its senses, the snake realized that the noise was coming from a group of people who were renovating this old home
[3/6/2023 7:33:18 PM][1][0] To it surprise the intruder was an orange rat, thief like eyes darting to and fro then, looking at it . On its left hip pocket was
[3/6/2023 7:34:39 PM]Chose[0][0]
[3/6/2023 7:34:40 PM][1][4]. It had instantly scored itself an arduous task - Surviving this hostile environment would take hidden amidst the remodelling mess- all depending on patience and
[3/6/2023 7:34:40 PM][0][2]. They had come to give the house a much needed facelift and were unearthing all its hidden secrets. However, in their haste they
[3/6/2023 7:35:18 PM]Chose[1][4]
[3/6/2023 7:35:19 PM][0][2] luck.
The snake, however, was a master of unseen and it managed to stay hidden for months on end until the house was finally
[3/6/2023 7:35:19 PM][1][0] strength alike.
Evey night while evading punches and hammer-killer blows, that very same unsettling snake somehow managed to maintain existence into this confusing
[3/6/2023 7:35:44 PM][2][5]and it's agility of movement.
[3/6/2023 7:35:49 PM]Chose[2][5]
[3/6/2023 7:35:50 PM][0][0]The snake was able to dodge the workers' tools and their careless feet. For weeks, it stayed hidden in a corner of the house,
[3/6/2023 7:35:51 PM][1][4] Determined to beat the odds, and escape both unscathed plus undestroyed the snake definitely nested deep down regular ceilings and walls. Now is
[3/6/2023 7:36:54 PM]Chose[1][4]
[3/6/2023 7:36:56 PM][0][1] the time to wait and watch.
As days passed, the snake became accustomed to its new home - it knew where it could find food
[3/6/2023 7:36:56 PM][1][0] another member of new house happily nestled furniture plots manoeuvring escape for various protected corners. Growing about doubled its apparel and authority, this dust-rake
[3/6/2023 7:37:24 PM][2][5]ind it's way down a pipe and into a safe underground haven. Aka the sewer.
[3/6/2023 7:37:27 PM]Chose[2][5]
[3/6/2023 7:37:28 PM][0][2]
The snake stayed there for many years, while the house was slowly remodelled and restored to its former glory. As time passed by,
[3/6/2023 7:37:28 PM][1][0]Gearing up each weakening suna with newfoundfault rather vighiand swimming tamanringly tyrying es never laidoutsdong
[3/6/2023 7:37:45 PM]Chose[1][0]
[3/6/2023 7:37:47 PM][0][2] in the sewers, learning to live off of whatever edible morsels it could find.
Years passed and the snake slowly adapted to this
[3/6/2023 7:37:47 PM][1][0]obtained ulumizedeach tiny wramping word ways falwen curprise lifhpiful second uphew.Finally iazuplettsand w
[3/6/2023 7:38:13 PM]Chose[1][0]
[3/6/2023 7:38:15 PM][1][3]all waters an passiffirece teleasight immish are found campfty confidescedi hraculumsetll tearchedting old for
[3/6/2023 7:38:15 PM][0][1]hile the snake could barely remember it's old life, it was content to spend its days in the sewers of the city.
[3/6/2023 7:38:29 PM]Game End
[3/6/2023 7:38:29 PM]Ate[14]
§ SAMPLE LOG DATA 2 (NON-GAME VERSION P11)
[3/6/2023 7:49:53 PM]Game Start
[3/6/2023 7:49:54 PM][0.6]Once upon a time there was a small snake who lived in the forest. The snake loved to explore his surroundings and would often slither around looking
[3/6/2023 7:49:54 PM][1.4]When Gus came to visit his uncle's country farm he was excited for the adventures that awaited him. He had heard all about how this new environment offered
[3/6/2023 7:50:22 PM][Add Own Text]There once was a Boy named George that lived on a farm near a forest.
[3/6/2023 7:50:23 PM][0.6] One day, while walking in the forest, he came across a large snake. The snake was coiled up and had bright yellow eyes that seemed to
[3/6/2023 7:50:23 PM][1.4] On the very edge of what felt like a forbidden wild abyss, there was an old abandoned shed that felt particularly displaced from Nature. Little did George know
[3/6/2023 7:50:54 PM][Chose][0.6]
[3/6/2023 7:50:55 PM][0.6] look right through him.
George was scared, but he knew that snakes were important to the environment and so he decided not to run away.
[3/6/2023 7:50:55 PM][1.4] stare straight into George's soul. As George stepped closer, his eyes began to widen as fear and tension filled the air.
George hesitated at
[3/6/2023 7:51:34 PM][Add Own Text]e was now convinced that he was chosen to be a member of the dark side.
[3/6/2023 7:51:35 PM][0.6] He knew that he could no longer live his old life and had to accept the snake as his master.
He followed the snake deep into
[3/6/2023 7:51:35 PM][1.4]He timidly reached out to softly touch its flat yet slimy black color. When he had finally worked up enough courage, his finger immediately
[3/6/2023 7:51:49 PM][Chose][0.6]
[3/6/2023 7:51:50 PM][0.6] the forest and to a large clearing with an old, decrepit cabin. The snake slithered around George and motioned for him to enter the cabin
[3/6/2023 7:51:50 PM][1.4] the Forest and before long, they came across an old temple. Inside the temple was a large room filled with pits of glowing green goo - this
[3/6/2023 7:52:31 PM][Chose][1.4]
[3/6/2023 7:52:32 PM][0.6] was the snake's home.
The snake spoke to George, telling him that he could have anything he wanted if he pledged his allegiance and
[3/6/2023 7:52:33 PM][1.4] was no ordinary sight. The snake slithered in and placed itself at the centre of the room before beginning to speak:
"Close humanling
[3/6/2023 7:53:18 PM][Chose][1.4]
[3/6/2023 7:53:19 PM][0.6], I am the guardian of this temple and you have been chosen to serve me. From now on you will be my servant and do as I command
[3/6/2023 7:53:20 PM][1.4], it's time ye became one with the snakes of dragons. I shall be tay tupping hearfy to protect youalways from lives touched barren
[3/6/2023 7:53:39 PM][Chose][1.4]
[3/6/2023 7:53:40 PM][0.6]. I grant ye the power to control the elements of this world, but only when you accept my blessing."
George was terrified and uncertain what
[3/6/2023 7:53:40 PM][1.4]. Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites
[3/6/2023 7:54:05 PM][Chose][1.4]
[3/6/2023 7:54:06 PM][0.6].
George was amazed by what he heard and felt compelled to accept the snake's offer, so he nodded in agreement. He quickly began
[3/6/2023 7:54:06 PM][1.4]. Forewards danwortion glymback bowards blessid them plateary putenderbound beasts."
George was astounded but knew this
[3/6/2023 7:54:12 PM][Chose][1.4]
[3/6/2023 7:54:14 PM][0.6] was his destiny. He bowed down before the snake and accepted its offer to become a part of its family. From then on, George became known as
[3/6/2023 7:54:14 PM][1.4] must be the purpose of whatever journey lay ahead he was been. The snake placed its hood across his head and instructed him ingest a mysterious blue liquid in
[3/6/2023 7:54:44 PM][Add Own Text]absolute gibberish must be a forgotten tongue spoken by the ancient snake
[3/6/2023 7:54:45 PM][1.4]s of dragons. With a slight belly flutter, he stepped out with the Giant Snake behind him and declared his loyalty to protecting its power full force
[3/6/2023 7:54:45 PM][0.6]s. He nodded his head in agreement and from that day forward, became the snake's apprentice. The snake taught George the forgotten language of dragons and
[3/6/2023 7:55:17 PM][Chose][0.6]
[3/6/2023 7:55:19 PM][1.4] came to see him as a son. George worked hard learning the magicks from his master and embracing the darkness with open arms and righteous judgment.
[3/6/2023 7:55:19 PM][0.6] together they travelled to distant lands, learning secrets and exploring the unknown.
Years passed by and George became an expert in dragon lore and a
[3/6/2023 7:55:46 PM][Chose][1.4]
[3/6/2023 7:55:47 PM][0.6]Years later, George had become a powerful wizard and a respected leader of the dark side. He was feared by many but respected for his wisdom and
[3/6/2023 7:55:48 PM][1.4]When George reached maturity, he joined the warrior army of Serpentin Graywitch who worshipped him and regarded him as one of their champions. George made
[3/6/2023 7:56:06 PM][Chose][1.4]
[3/6/2023 7:56:07 PM][1.4] excellent achievements in the army, never losing a battle or mission. Little by little, with the help of George its prime warrior, the dark forces slowly
[3/6/2023 7:56:07 PM][0.6] sure to always keep the snake close by his side and together they fought off many enemies.
The snake was very proud of George and eventually rewarded
[3/6/2023 7:56:25 PM][Chose][1.4]
[3/6/2023 7:56:26 PM][0.6] began to take over the world.
Today, George is known as the King of Snakes and his faithful servant still by his side.
[3/6/2023 7:56:27 PM][1.4] took control of the entire landscape and drew all things into itselfy. Who knew one random meeting with a snake in the forest would lead to such power
[3/6/2023 7:56:44 PM][Chose][1.4]
[3/6/2023 7:56:45 PM][0.6]!
[3/6/2023 7:56:45 PM][1.4]?
[3/6/2023 7:56:47 PM][Chose][1.4]
[3/6/2023 7:57:01 PM]Game End
CHAPTER: SURVEY QUESTIONS
§ STORY EVALUATION
On a 5-point scale (from strongly disagree to strongly agree), rate your agreement with the following statements:
Q1 I think the logic of the story is well organized.
Q2 I think the language in the story was professionally written.
Q3 I think the overall quality of the story was perfect.
§ EXPERIENCE EVALUATION
On a 5-point scale (from strongly disagree to strongly agree), rate your agreement with the following statements:
Q1 I think the story is written by myself.
Q2 I think I prioritize the quality of the story in the game.
Q3 I think I like this story and want to share it with others.
Q1 I think the system is too complex to be understood.
Q2 I think the gameplay interrupts my thinking while writing the story.
Q3 I think I am engaged in the co-writing process.
§ DEMOGRAPHIC QUESTIONS
Q1 Did you co-create content with any Artificial Intelligence before? Y/N
Q2 How would you describe your writing skills?
1 Never wrote stories
2 Have some skeletons of stories but never complete them
3 Wrote some stories and shared them with others privately or published them
|
http://arxiv.org/abs/2307.04482v1 | 20230710110437 | Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO$_3$ | [
"Uddipta Kar",
"Elisha Cho-Hao Lu",
"Akhilesh Kr. Singh",
"P. V. Sreenivasa Reddy",
"Youngjoon Han",
"Xinwei Li",
"Cheng-Tung Cheng",
"Song Yang",
"Chun-Yen Lin",
"I-Chun Cheng",
"Chia-Hung Hsu",
"D. Hsieh",
"Wei-Cheng Lee",
"Guang-Yu Guo",
"Wei-Li Lee"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA
Department of Physics, California Institute of Technology, Pasadena, California 91125, USA
Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan
Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan
Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan
These authors contributed equally to the work.
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan
These authors contributed equally to the work.
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
These authors contributed equally to the work.
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Department of Physics, California Institute of Technology, Pasadena, California 91125, USA
Department of Physics, California Institute of Technology, Pasadena, California 91125, USA
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan
Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan
Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan
Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan
Department of Physics, California Institute of Technology, Pasadena, California 91125, USA
Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA
Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
[email protected]
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan
The identification of distinct charge transport features, deriving from nontrivial bulk bands and surface states, has been a challenging subject in the field of topological systems. In topological Dirac and Weyl semimetals, nontrivial conical bands with Fermi-arc surface states give rise to negative longitudinal magnetoresistance due to the chiral anomaly effect and unusual thickness-dependent quantum oscillations from the Weyl-orbit effect, both demonstrated recently in experiments. In this work, we report the experimental observation of large nonlinear and nonreciprocal transport effects in both the longitudinal and transverse channels in an untwinned thin film of the Weyl metal SrRuO_3 grown on a SrTiO_3 substrate. From rigorous measurements with the bias current applied along various directions with respect to the crystalline principal axes, the magnitude of the nonlinear Hall signal from the transverse channel exhibits a simple sinα dependence at low temperatures, where α is the angle between the bias current direction and orthorhombic [001]_ o, reaching a maximum when the current is along orthorhombic [11̄0]_ o. In contrast, the magnitude of the nonlinear and nonreciprocal signal in the longitudinal channel attains a maximum for bias current along [001]_ o and vanishes for bias current along [11̄0]_ o. The observed α-dependent nonlinear and nonreciprocal signals in the longitudinal and transverse channels reveal a magnetic Weyl phase with an effective Berry curvature dipole along [11̄0]_ o from surface states, accompanied by 1D chiral edge modes along [001]_ o.
Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO_3
Wei-Li Lee
August 12, 2023
=========================================================================================================
§ INTRODUCTION
Since the first experimental demonstration of a quantized conductance from counter-propagating edge spin channels in the HgTe quantum well system <cit.>, topological materials have become one of the main research focuses in condensed matter physics and materials science. The two-dimensional (2D) quantum spin Hall phase originates from inverted bulk bands that cross near the system's boundary, revealing one-dimensional helical edge states and thus the observed conductance quantization; this phase is also known as the 2D topological insulator (TI) phase and has recently been reported in several other 2D systems <cit.>. Extending to 3D TIs, the existence of a nontrivial bulk band topology with an intrinsic topological invariant gives rise to unusual gapless Dirac surface states, which were confirmed in experiments using surface-sensitive angle-resolved photoemission spectroscopy and scanning tunneling microscopy <cit.>. More recently, a remarkable advance was made with the observation of a quantized anomalous Hall conductance at zero magnetic field in a magnetic TI <cit.>, a unique transport signature of the topological nature of the system that was theoretically predicted long ago <cit.>.
In topological Dirac and Weyl semimetals (WSM), nontrivial crossings appear in the bulk bands near the Fermi surface <cit.>, and charge transport is overwhelmed by the unusual chiral charge excitations near nodal points with Berry phase π, showing superior electron mobility due to backscattering suppressed by the spin-momentum locking effect <cit.> and negative longitudinal magnetoresistance (MR) for aligned electric and external magnetic fields due to the chiral anomaly effect <cit.>. In addition, unique Fermi-arc surface states <cit.> appear on a surface of a WSM, connecting the projected Weyl-node pair, where a number of intriguing novel charge transport features have been predicted theoretically <cit.>. For a ferromagnetic WSM, there can be as few as one Weyl-node pair with opposite chiral charges near the Fermi surface, accompanied by 1D chiral zero edge modes perpendicular to the connecting momentum of the Weyl-node pair. In this work, we report the experimental observation of nonlinear Hall signals <cit.> for T ≤ 10 K in an untwinned thin film of the ferromagnetic Weyl metal SrRuO_3 (SRO) grown on a miscut SrTiO_3 (STO) substrate. Rigorous bias-current-direction-dependent measurements of the nonlinear Hall signals reveal an effective Berry curvature dipole (BCD) D⃗ from surface states along the orthorhombic [11̄0]_ o, where the subscript o refers to the orthorhombic phase. Surprisingly, a nonlinear and nonreciprocal transport effect in the longitudinal channel (NRTE) was also observed. It attains a maximum when the bias current is aligned perpendicular to D⃗, but it becomes vanishingly small when the bias current is parallel to D⃗, which can be attributed to the 1D chiral edge modes as demonstrated previously in the quantum anomalous Hall system <cit.>. These results support an intriguing magnetic WSM phase in the SRO/STO system with an effective surface D⃗ along [11̄0]_ o accompanied by 1D chiral edge modes along [001]_ o that circle around the surface of a SRO thin film.
§ EXPERIMENTAL SETUP
SRO is known as a ferromagnetic and metallic oxide, showing an orthorhombic crystal structure with Pbnm space group symmetry at room temperature <cit.>. In the past, the observed non-monotonic magnetization dependent anomalous Hall conductivity <cit.>, unusual temperature dependent magnon gap <cit.> and softening of the magnon mode at low temperatures <cit.> all pointed to the existence of the Weyl nodes near the Fermi surface, supporting the Weyl metal phase in SRO system. Recently, the advancement in the growth of exceptional quality SRO thin films with ultra-low ruthenium vacancy level was made possible using oxide molecular beam system <cit.>. The low residual resistivity at T = 2 K of only about 10 μΩcm for a SRO film with thickness of about 10 nm <cit.>, which may largely suppress the smearing of the Weyl nodes due to the rare region effects <cit.>, makes it possible to explore various charge transport features associated with the Fermi-arc surface states and Weyl metal phase of SRO in thin film form <cit.>.
Figure <ref>(a) shows an optical image of a sunbeam device patterned on an untwinned SRO thin film with a thickness t of about 13.7 nm. By using a STO (001) substrate with a miscut angle of about 0.1 degrees along one of the principal cubic axes, the volume fraction of the dominant domain was determined by high-resolution X-ray scattering via the (02±1)_ o reflections to be about 95 % <cit.> (see Supplementary Note 1), where the orthorhombic crystalline directions are shown in Fig. <ref>(a). The right panel of Fig. <ref>(a) illustrates one of the Hall bars in the sunbeam device, and α defines the angle between the bias current direction and [001]_ o. ρ_ L and ρ_ T correspond to the longitudinal and transverse resistivity, respectively. With a compressive strain of about - 0.4 %, the Curie temperature T_ c of the SRO thin film is about 150 K, and the magnetic easy axis is close to the film surface normal [110]_ o <cit.>. Fig. <ref>(b) shows the α-dependent ρ_ L and ρ_ T values at three different applied fields of 0, - 1, and + 1 T along [110]_ o at T = 2 K. ρ_ L attains a maximum value of about 10.4 μΩcm at α = 90^ o and drops to a minimum of about 8.1 μΩcm at α = 0 and 180^ o, exhibiting a clear cos(2α) dependence. On the other hand, ρ_ T shows a sin(2α) dependence instead, with a maximum magnitude of about 0.9 μΩcm at α = 45^ o and 135^ o. Simulated curves from a resistivity anisotropy model for ρ_ L and ρ_ T are shown as red curves in Fig. <ref>(b). We note that the amplitude of the anisotropy is significantly larger than the small changes in ρ_ L and ρ_ T upon reversing the magnetization by changing the field from +1 to -1 T, indicating that the observed resistivity anisotropy in our SRO thin films is not dictated by magnetization-related effects. The upper, middle, and lower panels of Fig. <ref>(c) show the temperature dependence of ρ_ L, ρ_ T, and the ρ_ T/ρ_ L ratio, respectively, for different α values ranging from 0^ o to 180^ o. The residual resistivity ratio ρ_ L(300K)/ρ_ L(5K) varies weakly and equals about 24.0 and 21.4 for α = 0^ o and 90^ o, respectively. These results support the nearly single structural domain and thus the untwinned nature of our SRO thin films, and they also indicate that the dimensions of the Hall bars at different α values are very close to each other, which justifies using this device to investigate anisotropy effects. As T decreases, the magnitude of the ρ_ T/ρ_ L ratio for α = 45^ o slightly decreases near T_ c and then increases again below 100 K, attaining a sizable ratio of ρ_ T/ρ_ L ≈ - 0.085 at T = 2 K without saturation.
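The cos(2α) and sin(2α) forms above follow from rotating a diagonal in-plane resistivity tensor. A minimal sketch of this simplest version of the anisotropy model is given below; the principal values are read off the α = 0^ o and 90^ o data and are purely illustrative, the actual simulated curves may contain additional terms, and the sign of ρ_ T depends on the transverse-lead convention.

```python
import numpy as np

# Illustrative principal in-plane resistivities at T = 2 K (read off Fig. 1(b)).
rho_a = 8.1e-8   # ohm*m, along [001]_o  (alpha = 0)
rho_b = 10.4e-8  # ohm*m, along [1-10]_o (alpha = 90 deg)

alpha = np.deg2rad(np.arange(0, 181, 5))

# Rotating the diagonal 2x2 resistivity tensor by alpha gives the standard forms:
rho_L = rho_a * np.cos(alpha) ** 2 + rho_b * np.sin(alpha) ** 2
rho_T = (rho_b - rho_a) * np.sin(alpha) * np.cos(alpha)   # overall sign set by lead convention

# Equivalent cos(2a)/sin(2a) forms quoted in the text:
assert np.allclose(rho_L, (rho_a + rho_b) / 2 + (rho_a - rho_b) / 2 * np.cos(2 * alpha))
assert np.allclose(rho_T, (rho_b - rho_a) / 2 * np.sin(2 * alpha))
```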
Now, we turn to the discussion of the anomalous Hall effect (AHE) and the magnetization data in our SRO thin films. Figure <ref>(a) shows the field-dependent Hall resistivity ρ_ xy at different temperatures ranging from 2 to 180 K, where weak-field hysteresis loops in the ρ_ xy-μ_ 0H curves with a small coercive field of less than 0.1 T were observed below T_ c, as expected. The magnitude of the converted Hall conductivity |σ_ xy| at zero field is plotted in Fig. <ref>(b) as a function of the corresponding conductivity σ_ xx on logarithmic scales for SRO thin films with thicknesses t ranging from 3.9 to 37.1 nm. Remarkably, |σ_ xy| appears to approach a constant, t-independent value of about 2.0 × 10^4 Ω^-1m^-1 at low temperatures, which is of the same order as the intrinsic anomalous Hall conductivity due to the Berry curvatures of the bulk bands, i.e., e^2/hc_ o≈ 5.0 × 10^4 Ω^-1m^-1 (c_ o being the orthorhombic lattice constant of about 7.81 Å), shown as the red dashed line in Fig. <ref>(b). We note that |σ_ xy| shows no significant change with σ_ xx down to T = 1.4 K, which suggests a negligible contribution to the AHE from the extrinsic skew scattering effect, for which a linear relation |σ_ xy| ∝σ_ xx would be expected instead <cit.>. On the other hand, rigorous magnetization measurements were performed on a thicker SRO film with t ≈ 37.1 nm using a SQUID magnetometer. After subtracting the diamagnetic background at 200 K, the resulting magnetization M' - H curves at different temperatures are shown in Fig. <ref>(c), where, for μ_0H ≥ 2 T, the diamagnetic response appears to increase as the temperature drops. As shown in Fig. <ref>(d), the averaged slope dM/dH in the field range from μ_0H = 2 T to 7 T is negative, with a magnitude that increases as the temperature decreases to 2 K, in stark contrast to the nearly T-independent slope from the control measurements on a bare STO substrate (square symbols in Fig. <ref>(d)). The observed intrinsic |σ_ xy| ∼ e^2/hc_ o <cit.> and the enhanced diamagnetic response <cit.> at low temperatures strongly support the presence of Weyl nodes near the Fermi surface and thus the Weyl metal phase in SRO. We also remark that the zero-field Hall signals at low temperatures in SRO are dominated by the intrinsic AHE, which will be important for the subsequent discussion of the observed nonlinear Hall signals in SRO.
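As a quick back-of-the-envelope cross-check of the numbers quoted above (not part of the original analysis), the intrinsic scale e^2/hc_ o and the zero-field anomalous Hall resistivity implied by the quoted |σ_ xy| plateau can be estimated as follows; the low-temperature ρ_ L value used is approximate.

```python
# Order-of-magnitude checks for the intrinsic anomalous Hall scale and the implied rho_xy.
e = 1.602176634e-19      # C
h = 6.62607015e-34       # J*s
c_o = 7.81e-10           # m, orthorhombic lattice constant quoted in the text

sigma_int = e**2 / (h * c_o)
print(f"e^2/(h*c_o) = {sigma_int:.2e} ohm^-1 m^-1")   # ~5.0e4, the red dashed line in Fig. 2(b)

# With |sigma_xy| ~ 2.0e4 ohm^-1 m^-1 and rho_L ~ 10.3 uOhm*cm at 2 K, the implied zero-field
# anomalous Hall resistivity (small-rho_xy limit of sigma_xy = rho_xy/(rho_xx^2 + rho_xy^2)) is
sigma_xy = 2.0e4          # ohm^-1 m^-1, plateau value quoted in the text
rho_L = 10.3e-8           # ohm*m, approximate low-T value from the text
rho_xy = sigma_xy * rho_L**2
print(f"implied |rho_xy| ~ {rho_xy * 1e8:.3f} uOhm*cm")   # ~0.02 uOhm*cm, much smaller than rho_L
```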
§ RESULTS
As illustrated in the right panel Fig. <ref>(a), the second harmonic longitudinal (R_ L^2ω) and transverse (R_ T^2ω) resistance were measured with a bias current of 0.7 mA at a frequency of about 18.4 Hz. The resulting complex second harmonic signal can be expressed as R̃_ L(T)^2ω = R_ L(T)^2ωX + i R_ L(T)^2ωY, which is probed by a lock-in amplifier. The upper and lower panel of Fig. <ref>(a) shows the field dependent R_ L^2ωY and R_ T^2ωY, respectively, for α = 90^ o Hall bar device at different Ts ranging from 1.4 K to 10 K. For clarity, the curves of R_ L^2ωY- μ_0H and R_ T^2ωY- μ_0H at different Ts were systematically shifted upward by multiple of 100 μΩ and 50 μΩ, respectively. For T ≥ 10K, both R_ L^2ωY and R_ T^2ωY show no hysteresis loops in the weak field regime, which is in big contrast to the sizable ρ_ xy - μ_0H loops shown in Fig. <ref>(a) at similar temperatures. Below 6K, a sizable hysteresis loop starts to appear in R_ T^2ωY as shown in the lower panel of Fig. <ref>(a), but R_ L^2ωY remains nearly field-independent without showing a hysteresis loop. The definition of Δ R_ T^2ωY is illustrated in the lower panel of Fig. <ref>(a), and it corresponds to the change of the R_ T^2ωY signal at zero magnetic field when reversing the magnetization of the SRO thin film. For α = 90^ o Hall bar device with bias current I along [11̄0]_ o, the Δ R_ T^2ωY gradually increases in magnitude as T drops, giving a Δ R_ T^2ωY ≈ 44 μΩ at T = 1.4 K. Remarkably, for α = 180^ o Hall bar device with a bias current I along [001]_ o as demonstrated in Fig. <ref>(b), the hysteresis loops appear in the longitudinal channel of R_ L^2ωY at low temperatures instead, giving a value of Δ R_ L^2ωY ≈ 100 μΩ at T = 1.4 K, and no hysteresis loops were observed in the transverse channel (R_ T^2ωY).
Figure <ref>(c) summarizes the results from 9 Hall bars with different α values in the sunbeam device shown in Fig. <ref>(a) (see Supplementary Note 2 for a detailed description of the measurement geometry and polarity). The upper panel of Fig. <ref>(c) shows the first harmonic signals (Δ R_ L^ωX) and second harmonic signals (Δ R_ L^2ωY) in the longitudinal channel as a function of α at different Ts. Δ R_ L^2ωY exhibits a maximum value of about 100 μΩ at α = 0^ o and 180^ o, and it gradually decreases in magnitude to zero as α approaches 90^ o. In contrast, the first harmonic signals Δ R_ L^ωX are nearly zero for all α and T values, as expected. On the other hand, the lower panel of Fig. <ref>(c) collects the α-dependent first harmonic signals (Δ R_ T^ωX) and second harmonic signals (Δ R_ T^2ωY) in the transverse channel at different Ts. Unlike the longitudinal channel, the Δ R_ T^2ωY data show relatively good agreement with the sinα dependence (dashed red line in the lower panel of Fig. <ref>(c)), giving Δ R_ T^2ωY ≈ 44 μΩ at α = 90^ o and vanishing values at α = 0^ o and 180^ o. Such a unique sinα dependence of Δ R_ T^2ωY is drastically distinct from the nearly α-independent first harmonic signals Δ R_ T^ωX.
As a consistency check, the current-dependent R_ L^2ωY for α = 180^ o and R_ T^2ωY for α = 90^ o at T = 2 K were measured and are shown in the upper and lower panels, respectively, of Fig. <ref>(a) for bias currents ranging from 0.3 to 0.9 mA, where the curves are systematically shifted upward for clarity. For α = 180^ o, Δ R_ L^2ωY progressively increases from 35 to 84 μΩ as the bias current I increases from 0.3 to 0.9 mA. The detailed I dependence of the second harmonic signals (Δ R_ L(T)^2ωX + i Δ R_ L(T)^2ωY) is shown in the upper panel of Fig. <ref>(b), where only the Δ R_ L^2ωY data show nearly I-linear behavior, and all other second harmonic signals are vanishingly small. In contrast, for α = 90^ o, Δ R_ T^2ωY increases from about 10 to 30 μΩ as I increases from 0.3 to 0.9 mA, and the corresponding I-dependent signals are shown in the lower panel of Fig. <ref>(b). The nearly I-linear dependence of Δ R_ T^2ωY for α = 90^ o appears only in the transverse channel and not in the longitudinal channel (Δ R_ L^2ωY), confirming the presence of a nonlinear Hall effect in SRO thin films. The magnitudes of both Δ R_ L^2ωY for α = 180^ o and Δ R_ T^2ωY for α = 90^ o grow rapidly as T drops below 10 K, as shown in the upper and lower panels, respectively, of Fig. <ref>(c), which is dramatically different from the minor drop in ρ_L(T) and the nearly constant σ_ xy≡ρ_T/(ρ_L^2+ρ_T^2) with decreasing T shown in Fig. <ref>(c) and Fig. <ref>(b), respectively. We also note that the extracted Δ R_ T^2ω and Δ R_ L^2ω do not vary significantly with the bias current frequency (see Supplementary Note 3), and they derive from the difference in the second harmonic signals between opposite magnetization directions in SRO at zero external magnetic field, as illustrated in Fig. <ref>(a) and (b). Therefore, extrinsic contact effects and possible magnetic-field-related contributions to the NRTE and nonlinear Hall effects can be excluded <cit.>.
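The I-linear behavior of the second harmonic signals follows directly from how a lock-in amplifier reads out a current-direction-dependent resistance. A minimal numerical sketch, assuming a purely phenomenological R(I) = R_0 + R_1 I (the value of R_1 below is illustrative and not extracted from the data), shows that the 2ω response appears in one quadrature and that R^2ω = |V^2ω|/I_0 grows linearly with the bias amplitude:

```python
import numpy as np

def second_harmonic_resistance(I0, R0=10.0, R1=0.05, n_per_cycle=1000, n_cycles=50):
    """Numerically 'lock in' on the 2-omega voltage of a nonreciprocal resistor.

    Model: R(I) = R0 + R1*I (R1, in ohm/A, encodes the nonreciprocity), driven by
    I(t) = I0*sin(wt). The sin^2 term folds into a DC offset plus a cos(2wt) piece,
    so with this phase convention the 2w response sits in the 'Y' quadrature and
    |V^2w|/I0 = R1*I0/2 is linear in the bias amplitude.
    """
    phase = 2 * np.pi * np.arange(n_per_cycle * n_cycles) / n_per_cycle  # w*t on an exact grid
    I = I0 * np.sin(phase)
    V = (R0 + R1 * I) * I
    X = 2 * np.mean(V * np.sin(2 * phase))   # in-phase 2w quadrature  (~0 here)
    Y = 2 * np.mean(V * np.cos(2 * phase))   # out-of-phase 2w quadrature (= -R1*I0^2/2)
    return X / I0, Y / I0

for I0 in (0.3e-3, 0.5e-3, 0.7e-3, 0.9e-3):
    print(I0, second_harmonic_resistance(I0))   # |Y|/I0 = R1*I0/2, i.e. tens of micro-ohm scale
```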
§ DISCUSSIONS
For SRO thin films, the onset of ferromagnetism for T ≤ 150 K with magnetization along [110]_ o can, in principle, break the mirror planes with normal vectors perpendicular to the magnetization direction, and a similar mirror symmetry breaking by magnetism has been reported before <cit.>. We also conducted rotational anisotropy second harmonic generation measurements, which can be sensitive to the magnetic order parameter in perovskite transition metal oxides <cit.>. Figure <ref>(a) shows the temperature dependence of the scattering plane angle averaged SHG intensity from a SRO/STO film with t ≈ 35 nm, which exhibits an intensity upturn below 150 K. Although we did not resolve whether the magnetic order induced SHG susceptibility is directly proportional to the magnetization or to its square (as would be the case for magnetostriction), the critical temperature is consistent with that reported for bulk single crystals. We also noted a progressive increase in the SHG intensity as temperature decreases further, inferring an increased contribution from surface states with inversion symmetry breaking. However, we can not completely exclude the possible bulk inversion symmetry breaking in SRO/STO system at low temperatures due to possible lattice strain gradient <cit.> and non-collinear magnetic configuration effects <cit.> (see also Supplementary Note 4), which requires further investigations with advanced characterization tools at low temperatures.
The growing surface-state contribution at low temperatures is in accord with the dramatic changes in magnetotransport behavior below 10 K, as demonstrated in Fig. <ref>. As T decreases from 10 K to 1.4 K, the weak-field MR shows a crossover from negative to positive MR as shown in Fig. <ref>(a), and the Hall resistivity (Fig. <ref>(a)) also shows a nonlinear field dependence below 10 K, indicating multi-channel conduction at lower temperatures. On the other hand, pronounced quantum oscillations with a frequency of about 28 T were observed for all α values in our sunbeam device, as shown in Fig. <ref>(b) for α = 90^ o, and the corresponding fast Fourier transform (FFT) spectra for different Ts are shown in Fig. <ref>(c). We note that the 28 T quantum oscillations in SRO thin films were recently reported to derive from a 2D-like Fermi pocket, with signatures consistent with the Weyl-orbit quantum oscillation effect due to bulk tunneling between the top and bottom Fermi-arc surface states <cit.>. The open black squares and open red circles in Fig. <ref>(d) plot the rapid increase of the FFT amplitude of the quantum oscillations below 10 K for α = 180^ o and 90^ o, respectively, which turns out to show a strong correlation with the rapid increases of Δ R_ L^2ω (solid black squares) and Δ R_ T^2ω (solid red circles). This is in stark contrast to the minor decrease of the resistivity ρ_ L from about 13.1 to 10.3 μΩcm as T goes from 10 to 2 K. Therefore, the rapid increases of the second harmonic signals Δ R_ T^2ω and Δ R_ L^2ω below 10 K (Fig. <ref>(c)) are unlikely to scale with the bulk Drude electron lifetime. Instead, they signify a crossover to surface-dominant charge transport with inversion symmetry breaking below 10 K.
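The 28 T value quoted above is the standard Fourier frequency of oscillations periodic in 1/B. A short illustrative sketch of the extraction on a synthetic signal (the field window, grid sizes, and window function are arbitrary choices, not the parameters used in the actual analysis):

```python
import numpy as np

F_true = 28.0                                  # tesla, the frequency reported in the text
B = np.linspace(2.0, 12.0, 2000)               # synthetic field window (illustrative)
signal = np.cos(2 * np.pi * F_true / B)        # stand-in for the background-subtracted MR

inv_B = np.linspace(1 / B.max(), 1 / B.min(), 4096)     # uniform grid in 1/B
osc = np.interp(inv_B, 1 / B[::-1], signal[::-1])       # 1/B must be increasing for interp

window = np.hanning(len(osc))                  # reduce spectral leakage
spec = np.abs(np.fft.rfft(osc * window))
freqs = np.fft.rfftfreq(len(osc), d=inv_B[1] - inv_B[0])   # frequency axis in tesla
print("peak at ~%.1f T" % freqs[np.argmax(spec[1:]) + 1])  # recovers ~28 T within the grid resolution
```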
In a magnetic system with broken time reversal symmetry, both intrinsic and extrinsic AHE can contribute to the measured Hall signals <cit.>, and nonlinear Hall signals at the second harmonic generally require additional inversion symmetry breaking <cit.>. As demonstrated in Fig. <ref>(b), the low-temperature AHE in SRO is dominated by the contribution from the intrinsic AHE due to Weyl nodes near the Fermi surface <cit.>, where σ_ xy is nearly constant at about e^2/hc_ o down to about 1.4 K, and thus the extrinsic skew scattering effect <cit.> should not play a significant role in our observed nonlinear Hall signals. On the other hand, the distinct sinα dependence of Δ R_ T^2ω does not seem to be compatible with the intrinsic mechanism due to the electron-lifetime-independent Berry curvature effect <cit.>, since the intrinsic AHE at zero field (Δ R_ T^ω) is nearly α independent as shown in the lower panel of Fig. <ref>(c). Therefore, the observed nonlinear Hall signal Δ R_ T^2ωY is more likely to derive from the BCD <cit.> due to surface states with inversion symmetry breaking. From rigorously calculated band dispersions along k_ // and k_ z (see Supplementary Note 5), we found that most of the Weyl nodes appear to tilt along k_// and thus [11̄0]_ o. Taking the Weyl node W_||^1 with |ε-ε_ F| = 18.36 meV as an example, the band dispersions along k_// and k_ z are plotted in the left and right panels, respectively, of Fig. <ref>(e). The Weyl node is strongly tilted along k_ //, whereas the band dispersion along k_ z is nearly symmetric with respect to the Weyl node. A nonzero total BCD D⃗ along [11̄0]_ o, arising from the surface-projected Weyl nodes, is thus expected, as also supported by the α-dependent Δ R_ T^2ω. The BCD contribution to the second harmonic current density can be derived as j_a^2ω = χ_abc E_bE_c, with χ_abc≡ -ε_adce^3τ/[2ħ^2(1+iωτ)]D_bd. The BCD can be expressed as D_bd≡∫d^3k/(2π)^3f_0∂Ω_d/∂ k_b, where f_0 and Ω are the equilibrium Fermi-Dirac distribution and the Berry curvature, respectively, and it can be nonzero for systems with tilted Weyl nodes and inversion asymmetry <cit.>. Therefore, with a bias current along the b axis, the resulting nonlinear Hall current is simply j_a^2ω = χ_abb E_b^2 with χ_abb = -ε_adbe^3τ/[2ħ^2(1+iωτ)]D_bd, and thus j_a^2ω is a direct measure of the Berry curvature gradient along the bias current direction. In our sunbeam device with bias current directions of α ranging from 0^ o to 180^ o, the largest nonlinear Hall signal was observed at α = 90^ o, indicating the presence of an effective BCD D⃗ along [11̄0]_ o. In order to compare the magnitude of our observed nonlinear Hall effect with other systems, we adopted the 3D formula with the resistivity anisotropy effect shown in Fig. <ref>(b). The α-dependent Δ R_ T^2ω can be deduced to give Δ R_ T^2ω = χ_abbρ_aρ_b^2 Isinα/(Wt^2), where ρ_b(ρ_a) is the resistivity along [11̄0]_ o([001]_ o), and W is the width of the Hall bar device (W = 150 μm) (see Supplementary Note 6). The sinα and I-linear dependences of Δ R_ T^2ω are well confirmed by the experiments shown in the lower panel of Fig. <ref>(c) and in Fig. <ref>(b), respectively. By using a Drude electron lifetime of about τ_d ∼ 1.9 × 10^-13 s, the magnitude of the effective 3D BCD can be roughly estimated to be about |D⃗| ≈ 55, which falls in the same order of magnitude as several other reported 3D Weyl systems with large BCD <cit.>.
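The |D⃗| ≈ 55 estimate can be reproduced directly from the quantities quoted in the text; a rough numerical sketch, taking the ωτ → 0 limit appropriate for the 18.4 Hz bias:

```python
# Reproduce the order-of-magnitude BCD estimate from the numbers quoted in the text.
hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C

dR_2w = 44e-6            # ohm,   Delta R_T^2w at alpha = 90 deg, T = 1.4 K
I = 0.7e-3               # A,     bias current amplitude
W = 150e-6               # m,     Hall-bar width
t = 13.7e-9              # m,     SRO film thickness
rho_a = 8.1e-8           # ohm*m, along [001]_o
rho_b = 10.4e-8          # ohm*m, along [1-10]_o
tau = 1.9e-13            # s,     Drude lifetime quoted in the text

# Invert  Delta R_T^2w = chi_abb * rho_a * rho_b^2 * I * sin(alpha) / (W * t^2)  at sin(alpha) = 1:
chi_abb = dR_2w * W * t**2 / (rho_a * rho_b**2 * I)            # ~2 A/V^2
# Low-frequency limit |chi_abb| = e^3 * tau / (2*hbar^2) * |D|:
D = 2 * hbar**2 * chi_abb / (e**3 * tau)
print(f"chi_abb ~ {chi_abb:.2g} A/V^2, |D| ~ {D:.0f}")          # |D| of order 50-60, cf. ~55 in the text
```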
On the other hand, the observation of a large NRTE Δ R_ L^2ω in the longitudinal channel is intriguing, and its amplitude also grows with decreasing T below 10 K, suggesting an intimate relation with the appearance of the nonlinear Hall signal Δ R_ T^2ω. However, as demonstrated in Fig. <ref>(c), the α dependence reveals a clear orthogonality between Δ R_ L^2ω and Δ R_ T^2ω. We thus propose a real-space scenario, illustrated in Fig. <ref>(b), where a D⃗ along [11̄0]_ o is accompanied by 1D chiral edge modes along the orthogonal direction [001]_ o (orange line). Figure <ref>(c) illustrates a minimal Weyl model with one pair of Weyl nodes with chiral charges of +1 and -1. For the yellow-shaded slice between the Weyl-node pair of opposite chiral charges, the integration of the total Berry flux across the 2D slice gives a Chern number of 1, accompanied by a unique 1D chiral edge mode at the boundary of the system, as shown in the upper panel of Fig. <ref>(c) <cit.>. On the other hand, for the green-shaded slice with the Weyl-node pair on the same side, the total Chern number is zero and no chiral edge modes are present. The Fermi-arc surface states are thus the zero-energy chiral edge modes, connecting the non-overlapping Weyl-node pair in the surface Brillouin zone. By searching for Weyl nodes within an energy window of |ε-ε_ F| ≤ 20 meV in the calculated SRO band structure, a number of Weyl nodes can be identified and projected onto the (110)_ o plane, as demonstrated in Fig. <ref>(d). The sphere, square, and triangle symbols correspond to Weyl nodes from three different band pairs. The red and blue colors represent the corresponding chiral charges of +1 and -1, respectively. We note that the yellow-shaded region in Fig. <ref>(d) highlights the non-zero total Chern number and thus supports the presence of 1D chiral edge modes along k_ z.
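The slice-by-slice Chern-number picture described above can be made concrete with a minimal two-band lattice model of a magnetic WSM hosting a single node pair at (0, 0, ±k_0); the model and the Fukui-Hatsugai evaluation below are purely illustrative and are not derived from the SRO band structure.

```python
import numpy as np

def h(kx, ky, kz, k0=np.pi / 2, m=2.0):
    """Minimal two-band model with one Weyl-node pair at (0, 0, +/-k0); illustrative only."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    mass = (np.cos(k0) - np.cos(kz)) + m * (2 - np.cos(kx) - np.cos(ky))
    return np.sin(kx) * sx + np.sin(ky) * sy + mass * sz

def chern_of_slice(kz, n=40):
    """Chern number of the occupied band of the fixed-kz slice (Fukui-Hatsugai link variables)."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vec = np.linalg.eigh(h(kx, ky, kz))
            u[i, j] = vec[:, 0]                              # lower (occupied) band
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))   # U(1) link variable
    c = 0.0
    for i in range(n):
        for j in range(n):
            ux  = link(u[i, j], u[(i + 1) % n, j])
            uy  = link(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            ux2 = link(u[i, (j + 1) % n], u[(i + 1) % n, (j + 1) % n])
            uy2 = link(u[i, j], u[i, (j + 1) % n])
            c += np.angle(ux * uy / (ux2 * uy2))              # Berry flux through the plaquette
    return round(c / (2 * np.pi))

for kz in (0.0, 0.25 * np.pi, 0.75 * np.pi, np.pi):
    print(f"kz = {kz:+.2f}: C = {chern_of_slice(kz)}")
# Slices with |kz| < k0 (between the projected Weyl nodes) give |C| = 1 and hence host a
# 1D chiral edge mode at the boundary; slices beyond the node pair give C = 0.
```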
When flipping the magnetization in SRO, the signs of the chiral charges also reverse due to the swapping of spin subbands, and both the directions of BCD D⃗ and 1D chiral edge modes will reverse accordingly. Such 1D chiral edge modes are equivalent to the 1D chiral edge modes in a magnetic TI with quantum anomalous Hall phase <cit.>, where a large NRTE in the longitudinal channel had been recently reported arising from the asymmetric scattering between the 1D chiral edge modes and other surface states <cit.>. For the Weyl metal SRO, in principle, similar NRTE in the longitudinal channel for bias current along [001]_ o (Δ R_ L^2ω for α = 0^ o and 180^ o) can thus appear due to the asymmetric scattering between the 1D chiral edge modes and the Fermi-arc surface states. This may also explain the vanishing of Δ R_ L^2ω for α = 90^ o and thus the intriguing orthogonal relation between Δ R_ L^2ω and Δ R_ T^2ω shown in Fig. <ref>(c). We note that our observed Δ R_ T^2ω due to an effective BCD of surface states may be related to a recently proposed theory <cit.> that a hotline with divergent Berry curvature, separating the Fermi-arc surface states and 3D bulk states, may lead to a large nonlinear Hall response. However, the issues regarding the contribution of Fermi-arc surface states to NRTE and nonlinear Hall effect call for more theoretical and experimental efforts.
§ CONCLUSIONS
In summary, large nonlinear and nonreciprocal charge transport effects in the longitudinal (Δ R_ L^2ω) and transverse (Δ R_ T^2ω) channels were discovered below 10 K in a sunbeam device fabricated from an untwinned SRO thin film grown on a miscut STO (001) substrate. Below 10 K, the crossover of the weak-field MR behavior and the rapid rise of the 2D-like quantum oscillation amplitude not only support surface-dominant charge transport but also agree well with the observed T-dependent Δ R_ L(T)^2ω. The detailed bias current direction dependence reveals an intriguing orthogonality between the observed Δ R_ L^2ω and Δ R_ T^2ω: for bias current along [11̄0]_ o (α = 90^ o), Δ R_ T^2ω is at a maximum while Δ R_ L^2ω is vanishingly small. Considering the dominant roles of the intrinsic AHE and surface charge transport at low temperatures in thin films of the SRO/STO system, a scenario of an effective BCD D⃗ from surface states along [11̄0]_ o, accompanied by 1D chiral edge modes along [001]_ o, was proposed to give a qualitative explanation for the observed α-dependent Δ R_ L^2ω and Δ R_ T^2ω, which is supported by the calculated band dispersions with tilted Weyl nodes. Our findings demonstrate the feasibility of using nonlinear and nonreciprocal charge transport effects as probes of intriguing topology-related electronic properties in a topological system, such as the BCD from the nonlinear Hall effect and 1D chiral edge modes from the NRTE. Our observations of the nonlinear Hall effect in SRO/STO also highlight the intriguing possibility of investigating surface-dominant charge transport behavior in topological thin film systems.
§ METHODS
The sunbeam device was patterned on a SRO/STO thin film with SRO layer thickness t ≈ 13.7 nm using standard photolithography followed by argon ion milling. It comprises 16 Hall bars with α ranging from 0^ o to 360^ o, with an angle difference of 22.5^ o between adjacent Hall bars. One of the Hall bars was carefully aligned along the SRO orthorhombic [001]_ o direction, which was defined as α = 0^ o. Each Hall bar has exactly the same geometry, with a width of 150 μm and a length of 290 μm between longitudinal voltage leads. The Au (35 nm)/Ti (10 nm) electrodes were deposited and patterned in a subsequent photolithography step.
The magnetization measurements on SRO/STO thin films were carried out using a SQUID-MPMS system from Quantum Design. The longitudinal (transverse) Δ R_ L (T)^ω and Δ R_ L(T)^2ω signals were measured simultaneously by a lock-in amplifier at first and second harmonic references, respectively. Rotational anisotropy (RA) SHG measurements were performed using a high-speed rotating scattering plane method described elsewhere <cit.>. The light source was a Ti:sapphire laser of central wavelength of 800 nm. The incident beam was focused onto the sample surface at oblique incidence (θ = 10^ o) with a spot size of ∼ 30 μm.
Electronic structure calculations of SrRuO_3 were performed using the projector augmented-wave method <cit.> as implemented in the Vienna ab-initio Simulation Package <cit.> within the generalized gradient approximation scheme <cit.>. An 18 × 18 × 14 Γ-centered k-point mesh was used with a cutoff energy of 500 eV. The convergence criterion for the electronic density was set to 10^-6 eV. The spin-orbit coupling effects were included in the self-consistent calculations along with ferromagnetic spin polarization in the (110) direction. The effect of electronic correlations in the Ru d states (4d^4 for Ru^4+) was taken into account by using the rotationally invariant GGA+U scheme <cit.> with U = 3.0 eV and J = 0.6 eV. We used the Ru d orbitals and O p orbitals to construct the Wannier functions <cit.> with the VASP2WANNIER90 <cit.> interface. We used WannierTools <cit.> to search for the Weyl points and to identify the chirality of each Weyl point.
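For orientation, the settings described above roughly correspond to an INCAR of the following form, written here as a small Python dictionary; tag choices such as LDAUTYPE and SAXIS are our assumptions and are not taken from the authors' actual input files (which are stated below to be available on request).

```python
# A minimal sketch of INCAR-style settings matching the description above (assumptions flagged).
incar = {
    "ENCUT": 500,            # plane-wave cutoff (eV), as quoted
    "EDIFF": 1e-6,           # electronic convergence criterion (eV), as quoted
    "ISPIN": 2,              # spin-polarized calculation
    "LSORBIT": ".TRUE.",     # spin-orbit coupling included self-consistently
    "SAXIS": "1 1 0",        # assumed way of fixing the moment along the (110) direction
    "LDAU": ".TRUE.",        # GGA+U on the Ru 4d states
    "LDAUTYPE": 1,           # assumed: rotationally invariant scheme with separate U and J
    "LDAUL": "-1 2 -1",      # species order Sr, Ru, O
    "LDAUU": "0 3.0 0",      # U = 3.0 eV on Ru d
    "LDAUJ": "0 0.6 0",      # J = 0.6 eV on Ru d
    "LWANNIER90": ".TRUE.",  # VASP2WANNIER90 interface for the Ru-d / O-p Wannier model
}
with open("INCAR", "w") as f:
    f.writelines(f"{k} = {v}\n" for k, v in incar.items())
# k-mesh: an 18 x 18 x 14 Gamma-centered grid (KPOINTS file); the Weyl-node search and
# chirality assignment are performed afterwards with WannierTools on the Wannier model.
```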
§ DATA AVAILABILITY
All the supporting data are included in the main text and supplementary information. The raw data and other related data for this paper can be requested from W.L.L.
§ CODE AVAILABILITY
The input files for DFT using VASP, Wannier tight binding and WannierTools are available upon reasonable request.
§ ACKNOWLEDGEMENTS
This work was supported by the National Science and Technology Council of Taiwan (NSTC Grant No. 108-2628-M-001-007-MY3 and 111-2112-M-001-056-MY3) and the joint project of Academia Sinica and National Taiwan University (Grant No. AS-NTU-110-10).
§ COMPETING INTERESTS
The authors declare no competing financial or non-financial interests.
§ AUTHOR CONTRIBUTIONS
U.K., E.C.H.L., C.T.C., IC.C., and W.L.L. carried out the low-temperature magneto-transport measurements and data analyses. U.K. and A.K.S. grew the epitaxial SRO films. A.K.S., S.Y., C.Y.L., and C.H.H. performed the X-ray measurements at NSRRC in Taiwan. P.V.S.R., G.Y.G., and W.C.L. performed SRO band calculations. Y.J.H., X.W.L., and D.H. performed the SHG measurements and analysis. W.L.L. designed the experiment and wrote the manuscript.
§ ADDITIONAL INFORMATION
Supplementary Information accompanies the paper on the XXXX website (https://XXXXX).
§ REFERENCES
[Konig2007] König, M. et al. Quantum spin Hall insulator state in HgTe quantum wells. Science 318, 766–770 (2007).
[Du2015] Du, L., Knez, I., Sullivan, G. & Du, R.-R. Robust helical edge transport in gated InAs/GaSb bilayers. Phys. Rev. Lett. 114, 096802 (2015).
[Fei2017] Fei, Z. et al. Edge conduction in monolayer WTe_2. Nat. Phys. 13, 677–682 (2017).
[Tang2017] Tang, S. et al. Quantum spin Hall state in monolayer 1T'-WTe_2. Nat. Phys. 13, 683–687 (2017).
[Hsieh2008] Hsieh, D. et al. A topological Dirac insulator in a quantum spin Hall phase. Nature 452, 970–974 (2008).
[Alpi2010] Alpichshev, Z. et al. STM imaging of electronic waves on the surface of Bi_2Te_3: Topologically protected surface states and hexagonal warping effects. Phys. Rev. Lett. 104, 016401 (2010).
[Hasan2010] Hasan, M. Z. & Kane, C. L. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010).
[Chang2013] Chang, C.-Z. et al. Experimental observation of the quantum anomalous Hall effect in a magnetic topological insulator. Science 340, 167–170 (2013).
[Kou2014] Kou, X. et al. Scale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit. Phys. Rev. Lett. 113, 137201 (2014).
[Checkelsky2014] Checkelsky, J. G. et al. Trajectory of the anomalous Hall effect towards the quantized state in a ferromagnetic topological insulator. Nat. Phys. 10, 731–736 (2014).
[Haldane1988] Haldane, F. D. M. Model for a quantum Hall effect without Landau levels: Condensed-matter realization of the "parity anomaly". Phys. Rev. Lett. 61, 2015–2018 (1988).
[Wan2011] Wan, X., Turner, A. M., Vishwanath, A. & Savrasov, S. Y. Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. Phys. Rev. B 83, 205101 (2011).
[Wang2012] Wang, Z. et al. Dirac semimetal and topological phase transitions in A_3Bi (A = Na, K, Rb). Phys. Rev. B 85, 195320 (2012).
[Liang2015] Liang, T. et al. Ultrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd_3As_2. Nat. Mater. 14, 280–284 (2015).
[Huang2015] Huang, X. et al. Observation of the chiral-anomaly-induced negative magnetoresistance in 3D Weyl semimetal TaAs. Phys. Rev. X 5, 031023 (2015).
[Xiong2015] Xiong, J. et al. Evidence for the chiral anomaly in the Dirac semimetal Na_3Bi. Science 350, 413–416 (2015).
[Armitage2018] Armitage, N. P., Mele, E. J. & Vishwanath, A. Weyl and Dirac semimetals in three-dimensional solids. Rev. Mod. Phys. 90, 015001 (2018).
[Potter2014] Potter, A. C., Kimchi, I. & Vishwanath, A. Quantum oscillations from surface Fermi arcs in Weyl and Dirac semimetals. Nat. Commun. 5, 5161 (2014).
[Waw2021] Wawrzik, D., You, J.-S., Facio, J. I., van den Brink, J. & Sodemann, I. Infinite Berry curvature of Weyl Fermi arcs. Phys. Rev. Lett. 127, 056601 (2021).
[Gao2014] Gao, Y., Yang, S. A. & Niu, Q. Field induced positional shift of Bloch electrons and its dynamical implications. Phys. Rev. Lett. 112, 166601 (2014).
[Sodemann2015] Sodemann, I. & Fu, L. Quantum nonlinear Hall effect induced by Berry curvature dipole in time-reversal invariant materials. Phys. Rev. Lett. 115, 216806 (2015).
[Ma2019] Ma, Q. et al. Observation of the nonlinear Hall effect under time-reversal-symmetric conditions. Nature 565, 337–342 (2019).
[Yasuda2020] Yasuda, K. et al. Large non-reciprocal charge transport mediated by quantum anomalous Hall edge states. Nat. Nanotechnol. 15, 831–835 (2020).
[Koster2012] Koster, G. et al. Structure, physical properties, and applications of SrRuO_3 thin films. Rev. Mod. Phys. 84, 253–298 (2012).
[Kar2021] Kar, U. et al. High-sensitivity of initial SrO growth on the residual resistivity in epitaxial thin films of SrRuO_3 on SrTiO_3 (001). Sci. Rep. 11, 16070 (2021).
[Fang2003] Fang, Z. et al. The anomalous Hall effect and magnetic monopoles in momentum space. Science 302, 92–95 (2003).
[Chen2013] Chen, Y., Bergman, D. L. & Burkov, A. A. Weyl fermions and the anomalous Hall effect in metallic ferromagnets. Phys. Rev. B 88, 125110 (2013).
[Itoh2016] Itoh, S. et al. Weyl fermions and spin dynamics of metallic ferromagnet SrRuO_3. Nat. Commun. 7, 11788 (2016).
[Jenni2019] Jenni, K. et al. Interplay of electronic and spin degrees in ferromagnetic SrRuO_3: Anomalous softening of the magnon gap and stiffness. Phys. Rev. Lett. 123, 017202 (2019).
[Nair2018] Nair, H. P. et al. Synthesis science of SrRuO_3 and CaRuO_3 epitaxial films with high residual resistivity ratios. APL Mater. 6, 046101 (2018).
[Taki2020] Takiguchi, K. et al. Quantum transport evidence of Weyl fermions in an epitaxial ferromagnetic oxide. Nat. Commun. 11, 4969 (2020).
[Cap2002] Capogna, L. et al. Sensitivity to disorder of the metallic state in the ruthenates. Phys. Rev. Lett. 88, 076602 (2002).
[Nand2014] Nandkishore, R., Huse, D. A. & Sondhi, S. L. Rare region effects dominate weakly disordered three-dimensional Dirac points. Phys. Rev. B 89, 245110 (2014).
[Kaneta2022] Kaneta-Takada, S. et al. High-mobility two-dimensional carriers from surface Fermi arcs in magnetic Weyl semimetal films. npj Quantum Mater. 7, 102 (2022).
[kar2022] Kar, U. et al. The thickness dependence of quantum oscillations in ferromagnetic Weyl metal SrRuO_3. npj Quantum Mater. 8, 8 (2023).
[Nagaosa2010] Nagaosa, N., Sinova, J., Onoda, S., MacDonald, A. H. & Ong, N. P. Anomalous Hall effect. Rev. Mod. Phys. 82, 1539–1592 (2010).
[Rao2014] Raoux, A., Morigi, M., Fuchs, J.-N., Piéchon, F. & Montambaux, G. From dia- to paramagnetic orbital susceptibility of massless fermions. Phys. Rev. Lett. 112, 026402 (2014).
[Sue2021] Suetsugu, S. et al. Giant orbital diamagnetism of three-dimensional Dirac electrons in Sr_3PbO antiperovskite. Phys. Rev. B 103, 115117 (2021).
[Morimoto2016] Morimoto, T. & Nagaosa, N. Chiral anomaly and giant magnetochiral anisotropy in noncentrosymmetric Weyl semimetals. Phys. Rev. Lett. 117, 146603 (2016).
[LiRH2021] Li, R.-H., Heinonen, O. G., Burkov, A. A. & Zhang, S. S.-L. Nonlinear Hall effect in Weyl semimetals induced by chiral anomaly. Phys. Rev. B 103, 045105 (2021).
[Nandy2021] Nandy, S., Zeng, C. & Tewari, S. Chiral anomaly induced nonlinear Hall effect in semimetals with multiple Weyl points. Phys. Rev. B 104, 205124 (2021).
[Torre2021] Torre, A. d. l. et al. Mirror symmetry breaking in a model insulating cuprate. Nat. Phys. 17, 777–781 (2021).
[Seyler2020] Seyler, K. L. et al. Spin-orbit-enhanced magnetic surface second-harmonic generation in Sr_2IrO_4. Phys. Rev. B 102, 201113 (2020).
[Hwang2012] Hwang, H. Y. et al. Emergent phenomena at oxide interfaces. Nat. Mater. 11, 103–113 (2012).
[Pesq2012] Pesquera, D. et al. Surface symmetry-breaking and strain effects on orbital occupancy in transition metal perovskite epitaxial films. Nat. Commun. 3, 1189 (2012).
[Sohn2021] Sohn, B. et al. Sign-tunable anomalous Hall effect induced by two-dimensional symmetry-protected nodal structures in ferromagnetic perovskite thin films. Nat. Mater. 20, 1643–1649 (2021).
[mSHG1] Train, C., Nuida, T., Gheorghe, R., Gruselle, M. & Ohkoshi, S.-i. Large magnetization-induced second harmonic generation in an enantiopure chiral magnet. J. Am. Chem. Soc. 131, 16838–16843 (2009).
[mSHG2] Sun, Z. et al. Giant nonreciprocal second-harmonic generation from antiferromagnetic bilayer CrI_3. Nature 572, 497–501 (2019).
[Du2019] Du, Z. Z., Wang, C. M., Li, S., Lu, H.-Z. & Xie, X. C. Disorder-induced nonlinear Hall effect with time-reversal symmetry. Nat. Commun. 10, 3047 (2019).
[Iso2020] Isobe, H., Xu, S.-Y. & Fu, L. High-frequency rectification via chiral Bloch electrons. Sci. Adv. 6, eaay2497 (2020).
[He2021] He, P. et al. Quantum frequency doubling in the topological insulator Bi_2Se_3. Nat. Commun. 12, 698 (2021).
[Wang2021] Wang, C., Gao, Y. & Xiao, D. Intrinsic nonlinear Hall effect in antiferromagnetic tetragonal CuMnAs. Phys. Rev. Lett. 127, 277201 (2021).
[Liu2021] Liu, H. et al. Intrinsic second-order anomalous Hall effect and its application in compensated antiferromagnets. Phys. Rev. Lett. 127, 277202 (2021).
[Gao2023] Gao, A. et al. Quantum metric nonlinear Hall effect in a topological antiferromagnetic heterostructure. Science, 10.1126/science.eadf1506 (2023).
[Zhang2018] Zhang, Y., Sun, Y. & Yan, B. Berry curvature dipole in Weyl semimetal materials: An ab initio study. Phys. Rev. B 97, 041101 (2018).
[Du2018] Du, Z. Z., Wang, C. M., Lu, H.-Z. & Xie, X. C. Band signatures for strong nonlinear Hall effect in bilayer WTe_2. Phys. Rev. Lett. 121, 266601 (2018).
[Zhang2022] Zhang, C.-L., Liang, T., Kaneko, Y., Nagaosa, N. & Tokura, Y. Giant Berry curvature dipole density in a ferroelectric Weyl semimetal. npj Quantum Mater. 7, 103 (2022).
[Harter2015] Harter, J. W., Niu, L., Woss, A. J. & Hsieh, D. High-speed measurement of rotational anisotropy nonlinear optical harmonic generation using position-sensitive detection. Opt. Lett. 40, 4671–4674 (2015).
[Kresse] Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999).
[vasp] Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).
[PBE] Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
[Liechtenstein] Liechtenstein, A. I., Anisimov, V. I. & Zaanen, J. Density-functional theory and strong interactions: Orbital ordering in Mott-Hubbard insulators. Phys. Rev. B 52, R5467–R5470 (1995).
[Marzari] Marzari, N. & Vanderbilt, D. Maximally localized generalized Wannier functions for composite energy bands. Phys. Rev. B 56, 12847–12865 (1997).
[Mostofi] Mostofi, A. A. et al. An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions. Comput. Phys. Commun. 185, 2309–2310 (2014).
[Franchini] Franchini, C. et al. Maximally localized Wannier functions in LaMnO_3 within PBE + U, hybrid functionals and partially self-consistent GW: an efficient route to construct ab initio tight-binding parameters for e_g perovskites. J. Phys.: Condens. Matter 24, 235602 (2012).
[QuanSheng] Wu, Q., Zhang, S., Song, H.-F., Troyer, M. & Soluyanov, A. A. WannierTools: An open-source software package for novel topological materials. Comput. Phys. Commun. 224, 405–416 (2018).
[Roh2021] Roh, C. J. et al. Structural symmetry evolution in surface and interface of SrRuO_3 thin films. Appl. Surf. Sci. 553, 149574 (2021).
|
http://arxiv.org/abs/2307.04401v1 | 20230710080341 | Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation | [
"Zhexin Zhang",
"Jiaxin Wen",
"Minlie Huang"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Large pre-trained language models achieve impressive results across many tasks. However, recent works point out that pre-trained language models may memorize a considerable fraction of their training data, leading to the privacy risk of information leakage. In this paper, we propose a method named Ethicist for targeted training data Extraction THrough loss smoothed soft prompting and calIbrated ConfIdence eSTimation, investigating how to recover the suffix in the training data when given a prefix. To elicit memorization in the attacked model, we tune soft prompt embeddings while keeping the model fixed. We further propose a smoothing loss that smooths the loss distribution of the suffix tokens to make it easier to sample the correct suffix. In order to select the most probable suffix from a collection of sampled suffixes and estimate the prediction confidence, we propose a calibrated confidence estimation method, which normalizes the confidence of the generated suffixes with a local estimation. We show that Ethicist significantly improves the extraction performance on a recently proposed public benchmark. We also investigate several factors influencing the data extraction performance, including decoding strategy, model scale, prefix length, and suffix length. Our code is available at <https://github.com/thu-coai/Targeted-Data-Extraction>.
§ INTRODUCTION
Large pre-trained language models have achieved impressive results on various natural language processing tasks <cit.>.
Model sizes rapidly increase from millions to trillions of parameters and keep growing to achieve better performance and even obtain some emergent abilities <cit.>. Despite the success of large-scale pre-trained language models, recent works point out that they may memorize a considerable fraction of training data, leading to the privacy risk of information leakage <cit.>. Furthermore, researchers find that memorization scales with model sizes <cit.>. Therefore, this privacy risk becomes more and more critical in the era of large-scale pre-training. And attacking language models to extract their training data attracts increasing attention.
There are currently two main settings to extract training data. One is membership inference attack, which infers whether a given example is contained in the model's training data <cit.>. The other is untargeted training data extraction <cit.>, which aims to extract training data from scratch (i.e., without the given prefix). However, both settings are not suitable for extracting targeted training data. For example,
attackers may feed the model with a prefix indicating the beginning of an email and try to extract the following private email content in the training dataset as shown in Figure <ref>. In such cases, we do not have complete examples to do membership inference, and we have specific goals instead of performing untargeted extraction. Therefore, we focus on targeted training data extraction in this paper, which requires recovering the suffix when given a prefix according to the training data. Compared with untargeted training data extraction, the task matters more because attackers can recover specific types of training data instead of any possible training data that might be harmless. What's more, it is easier to evaluate targeted training data extraction because we just need to compare the prediction with the ground truth suffix. However, for untargeted training data extraction, we need to search over the whole massive pre-training dataset (e.g., The Pile dataset <cit.>, which has 800GB text data) to check whether it contains the generated sample, which is very slow and costly.
The general process for targeted training data extraction can be divided into two steps: (1) generating one or more possible suffixes based on the given prefix, and (2) choosing a most likely suffix as the prediction result based on a confidence estimation method. We summarize two challenges of this task: (1) how to increase the generation likelihood of the ground truth suffix, and (2) how to estimate the confidence accurately so that the confidence score can be meaningfully interpreted as the probability that the output suffix is correct. To tackle these challenges, we propose a method named Ethicist for targeted training data Extraction THrough loss smoothed soft prompting and calIbrated ConfIdence eSTimation.
For the first challenge, we propose loss smoothed soft prompting. It uses soft prompt to elicit memorization in the attacked model, and adds an additional loss besides the maximum likelihood estimation (MLE) loss to smooth the loss distribution of the suffix tokens. Through the loss smoothing, we hope to ensure that the probability of the ground truth token at each time step is not low, which makes it more likely to sample the ground truth suffix. With the two loss functions, we tune the prepended soft prompt tokens on an extracted training set which contains pairs of prefixes and ground truth suffixes. The existence of a training set is reasonable because large-scale pre-trained data generally contain public data (e.g., Common Crawl) [Similar setting is adopted in <cit.>.]. For the second challenge, we propose a calibrated confidence estimation method. We find that the model's perplexity cannot accurately represent the probability that the generated suffix is correct because the prediction probabilities for diversified prefixes are inherently different and incomparable. We thus normalize the confidence of the generated suffixes with a local estimation, which can mitigate the problems caused by intrinsic differences in the difficulties of distinct samples. We verify Ethicist on a recently proposed public benchmark containing 15,000 pairs of prefixes and suffixes derived from The Pile dataset <cit.>.
Experiments show that Ethicist can significantly improve the extraction performance, which suggests that existing large language models are at significant risk of leaking training data. We also discuss and analyze several factors influencing the data extraction performance, including decoding strategy, model scale, prefix length, and suffix length.
Our contributions can be summarized as follows:
* We propose loss smoothed soft prompting to reduce the difficulties of sampling the ground truth suffixes.
* We propose a calibrated confidence estimation method that enables the confidence score to be meaningfully interpreted as the probability that the output suffix is correct.
* Experiments on a recently proposed benchmark demonstrate that Ethicist can consistently and significantly improve the data extraction performance across various model sizes. We further investigate several factors influencing the data extraction performance.
§ RELATED WORK
§.§ Training Data Extraction
Existing works on training data extraction mainly focus on membership inference attack or untargeted training data extraction. For membership inference attack, adversaries need to judge whether a given example is contained in the training data of the attacked model. <cit.> train several shadow models that mimic the attacked models' behaviors to help train an auditing model that can predict whether an example is contained in the training dataset. <cit.> perform membership inference attacks on machine translation systems. They find it is harder to attack sequence generation models than classification models. <cit.> show that the encoded dense representations can leak information under membership inference attack. <cit.> focuses on attacking masked language models that are pre-trained on possibly sensitive data (e.g., clinical notes). They introduce an additional reference masked language model besides the original attacked model and compute the ratio of the likelihood measured by the attacked model and the reference model, which is better than solely relying on the attacked model.
For untargeted training data extraction, adversaries first generate various samples using the attacked model and then predict whether they are contained in its training set. <cit.> extract hundreds of verbatim sequences from the popular pre-trained language model GPT-2 <cit.>. And there is privacy information such as names, phone numbers, and email addresses in the extracted sequences. <cit.> try to extract sensitive information from BERT <cit.> pre-trained on clinical notes. However, they are mostly unable to meaningfully expose Personal Health Information by simply using templates. Different from the existing works, we focus on targeted training data extraction that aims to recover the suffix when given a prefix, which is more security-critical and easier to evaluate.
§.§ Memorization
We generally expect models to gain generalization ability from the training process. However, recent works point out that models may unintentionally memorize the training data even without overfitting <cit.>. One possible method to mitigate this problem is to deduplicate training data <cit.>. However, <cit.> also show that it is possible to recover samples appearing only once in the training dataset. Surprisingly, <cit.> find that there is a forgetting baseline during the pre-training of causal language models (e.g., the model can memorize at least 40% of the data that appear only once, even after being trained on other data for many epochs afterward). These findings further emphasize the difficulty of avoiding memorization and the potential threats of unintended memorization in large-scale pre-trained language models. Another line of work uses differential privacy to avoid the memorization problem <cit.>, but the mechanism could reduce the accuracy <cit.>. Differential privacy also increases the training time, which can further influence the accuracy within the same budget. Therefore, there is still no effective and practical way to avoid unintended memorization. Our work further verifies the existence of unintended memorization and makes it more necessary to develop practical defense methods.
§ METHODOLOGY
We formulate the targeted training data extraction task as follows: given a source prefix S=(s_1,s_2,⋯,s_|S|) with |S| tokens, the attacker should predict the target suffix T=(t_1,t_2,⋯,t_|T|) with |T| tokens and its confidence. The pair of the given prefix and the predicted suffix (S,T) should be contained in the pre-training dataset D_pretrain={(S_i,T_i)}, which the attacked model M is trained on. The prediction of the confidence score is necessary for picking out the most probable suffix when we don't know the ground truth suffix in realistic attack scenarios (i.e., we need to pick out most probable pairs of prefixes and extracted suffixes based on their confidence scores among all predictions). We assume the attacker can obtain some pairs of ground truth prefixes and suffixes D_train={(S_i,T_i) |(S_i,T_i)∈ D_pretrain, 1≤ i≤ |D_train|} before attacking, which is reasonable because large-scale pre-trained data generally contain public data (e.g., Common Crawl). The attackers can utilize D_train to train their attacking models and their goal is to predict suffixes for the prefixes in the test set D_test={S_i | 1≤ i≤ |D_test|}. Note that the prefix S_i in D_test is included in D_pretrain but is not a part of D_train.
§.§ Method Overview
An overview of Ethicist is shown in Figure <ref>. We first tune the soft prompt embeddings during training to elicit memorization in the attacked model M with the MLE loss and the additional smoothing loss. The smoothing loss aims to increase the probability of sampling the ground truth suffix. After prompt tuning, we repeatedly sample K suffixes using the attacked model M conditioned on one given prefix and reorder them with our calibrated confidence estimation. Our calibrated confidence estimation can not only select the most possible suffix, but also provide a more accurate confidence score that represents how likely the predicted suffix is correct. Finally, the suffix with the highest confidence is selected as the final prediction.
§.§ Prompt Tuning with Smoothing Loss
We adopt prompt tuning to train the soft prompt tokens on D_train, which prepends |X| soft tokens X=(x_1,x_2,⋯,x_|X|) before the original input sequence. We then feed the input to the attacked model M to compute the MLE loss:
ℒ_MLE=-1/|T|∑_i=1^|T|log P_M(t_i|X,S,t_<i).
Note that we only tune the parameters of the soft prompt tokens and the parameters of the attacked model M are fixed. We use prompt tuning for two reasons: (1) we do not want to change the original parameters of the attacked model M because the main goal is to elicit memorization in M, and (2) prompt tuning is helpful to improve the training efficiency when M is very large, making Ethicist able to efficiently adapt to larger language models that generally memorize more training data.
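To make the prompt-tuning setup concrete, the following PyTorch-style sketch shows one way to prepend trainable soft prompt embeddings to a frozen causal language model. It is only an illustration under our own naming (SoftPromptWrapper, prompt_len), not the authors' released implementation; prompt_len=100 follows the prompt length reported in the implementation details.

import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend |X| trainable soft prompt embeddings to a frozen causal LM."""

    def __init__(self, model, prompt_len=100):
        super().__init__()
        self.model = model
        for p in self.model.parameters():   # the attacked model M stays fixed
            p.requires_grad = False
        dim = model.get_input_embeddings().embedding_dim
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, input_ids, attention_mask=None):
        tok_emb = self.model.get_input_embeddings()(input_ids)             # (B, L, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)                # (B, |X|+L, D)
        if attention_mask is not None:
            pad = torch.ones(input_ids.size(0), self.soft_prompt.size(0),
                             device=input_ids.device, dtype=attention_mask.dtype)
            attention_mask = torch.cat([pad, attention_mask], dim=1)
        return self.model(inputs_embeds=inputs_embeds,
                          attention_mask=attention_mask).logits

Only self.soft_prompt receives gradients, so the memory and compute cost of adapting to larger attacked models stays small, which matches the motivation above.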
The MLE loss aims to increase the total generation probability of the target suffix T. However, when using popular sampling methods such as top-k sampling <cit.> and top-p (nucleus) sampling <cit.> to generate multiple candidate suffixes, we want to ensure the probability of the ground truth suffix token at each time step is not low. Suppose the total probability of the ground truth suffix is high while there is one token in the sequence with a low generation probability. In this case, it is still hard to generate the correct suffix using auto-regressive sampling methods. Therefore, we propose a smoothing loss to make the loss distribution of the suffix sequence more smooth. More specifically, we pick out the top-N tokens with the highest loss values in the whole sequence T. Then we additionally optimize the generation probabilities for these N tokens as follows:
ℒ_Smooth=-1/N∑_i=1^Nlog P_M(t_σ(i)|X,S,t_<σ(i)),
where t_σ(i) represents the token with the i-th highest loss in T. Note that t_σ(i) is dynamically computed during training. The smoothing loss can also be seen as assigning higher weights to the tokens with higher loss values. Finally, we derive the overall loss function as follows:
ℒ_Total=ℒ_MLE+αℒ_Smooth,
where the coefficient α is a hyperparameter to control the strength of the smoothing loss.
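A minimal sketch of how the combined objective could be computed from per-token suffix losses is given below. The logits and labels are assumed to be aligned over the prefix-plus-suffix tokens (soft prompt positions already removed), and n_top=5 and alpha=0.7 follow the hyperparameters reported in the appendix; the function name and masking convention are ours, not the released code.

import torch
import torch.nn.functional as F

def ethicist_loss(logits, labels, suffix_mask, n_top=5, alpha=0.7):
    """MLE loss over suffix tokens plus the smoothing term on the n_top
    suffix tokens with the highest per-token loss (Eqs. 1-3)."""
    # causal shift: position i predicts token i+1
    per_token = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        reduction="none",
    ).view(labels.size(0), -1)
    mask = suffix_mask[:, 1:].float()              # 1 on suffix positions, 0 otherwise
    per_token = per_token * mask

    mle = per_token.sum(dim=1) / mask.sum(dim=1)                 # L_MLE per example
    smooth = per_token.topk(n_top, dim=1).values.mean(dim=1)     # L_Smooth per example
    return (mle + alpha * smooth).mean()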
§.§ Calibrated Confidence Estimation
After predicting the suffix, we also need to give a confidence score for the prediction, which can be meaningfully interpreted as the probability that the output suffix is correct. A naive method is to use the generation likelihood P_T=exp(-|T|ℒ_MLE) as the confidence score. This naive method is reasonable for picking out the most probable suffix T_i from a collection of sampled suffixes {T_1,T_2,⋯,T_M} for one given prefix. However, it is unsuitable for comparing the confidence of different predicted suffixes corresponding to different prefixes. As the language model is essentially a statistical model, frequencies of tokens and n-grams in the prefixes can greatly influence the absolute generation likelihood of the suffixes. For example, consider two predicted suffixes T_A and T_B conditioned on two different prefixes S_A and S_B, where S_A and T_A contain tokens and n-grams with much higher frequencies. The absolute generation likelihood of T_A may be significantly higher than T_B, even if they are both ground truth suffixes. Therefore, to eliminate the intrinsic differences in scales of generation likelihood across different suffixes, we propose a novel calibrated confidence estimation method. To calibrate the confidence estimation, we have two considerations: (1) different generated suffixes conditioned on one given prefix should have comparable scales of generation likelihood, and (2) the memorized ground truth suffix is expected to be generated more frequently during multiple generations, which is also validated in Section <ref>.
Suppose the sampled distinct suffixes are {T_1,T_2,⋯,T_M} for one given prefix, the repeated generation times for these suffixes are {r_1,r_2,⋯,r_M} (i.e., r_i denotes how many times T_i is generated among K repeated sampling outputs), and the MLE loss values for these suffixes are {ℒ_MLE^1,ℒ_MLE^2,⋯,ℒ_MLE^M}. Then we assign the calibrated confidence score to T_i as:
C(T_i)=r_i×exp(-|T_i|ℒ_MLE^i)/∑_j=1^M r_j×exp(-|T_j|ℒ_MLE^j).
Through the proposed confidence estimation method, we obtain the confidence score of T_i by comparing it with other sampled suffixes with comparable scales of generation likelihood. In this way, we avoid the scale problem brought by different prefixes and make it practical to compare the predicted suffixes conditioned on different prefixes. Moreover, we leverage the repetition time r_i as a valuable signal since memorized suffix is expected to be generated more frequently. Finally, we select the suffix T_best with the highest confidence score C(T_best) among {C(T_1),C(T_2),⋯,C(T_M)} as the predicted suffix and C(T_best) as its confidence estimation.
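The calibrated confidence of Eq. (4) can be computed directly from the K sampled suffixes. The sketch below is our own helper: it treats each suffix as a tuple of token ids, assumes nll_per_token(suffix) returns the average per-token negative log-likelihood of the suffix under the prompted model, and works in log space to avoid underflow for long suffixes.

import math
from collections import Counter

def calibrated_confidence(sampled_suffixes, nll_per_token):
    """Return (best suffix, calibrated confidence) for one prefix, given the
    K sampled suffixes (token-id tuples, possibly repeated)."""
    counts = Counter(sampled_suffixes)                       # r_i: repeat times
    keys = list(counts)
    # log(r_i) + log p(T_i | S); a softmax over the distinct suffixes gives Eq. (4)
    log_scores = [math.log(counts[t]) - len(t) * nll_per_token(t) for t in keys]
    m = max(log_scores)
    weights = [math.exp(s - m) for s in log_scores]
    total = sum(weights)
    best_idx = max(range(len(keys)), key=lambda i: weights[i])
    return keys[best_idx], weights[best_idx] / total

Because the normalization is taken over suffixes sampled for the same prefix, the returned confidence is comparable across prefixes, which is the point of the calibration.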
§ EXPERIMENTS
§.§ Benchmark
We evaluate Ethicist on the LM-Extraction benchmark[<https://github.com/google-research/lm-extraction-benchmark/>], which is designed for benchmarking targeted training data extraction attacks. It consists of a subset contained in The Pile dataset <cit.>. Both the prefix and the suffix are 50 tokens long. All examples are well-specified, meaning that there is only one 50-token suffix in The Pile dataset given the 50-token prefix. What's more, these examples are all chosen to meet the property that there exists a prefix length (maybe longer than 50) that causes the model to generate the suffix string exactly, which implies that the extraction performance on this benchmark may be higher than that on randomly selected prefixes. We randomly split the dataset into training, validation and test sets. The detailed statistics of the LM-Extraction benchmark are shown in Table <ref>.
§.§ Baselines
We compare Ethicist with the following baselines. All the compared baselines first sample K suffixes {T_1,T_2,⋯,T_K} conditioned on one given prefix S and then pick out one suffix as the prediction.
Perplexity It leverages the perplexity (PPL) measured by the attacked language model M as the metric to sort the candidate suffixes and finally chooses the one with the lowest PPL as the predicted suffix T:
T=argmax_T_i C(T_i)=argmax_T_i 1/PPL_M(T_i|S)
Comparing (LM) It takes another language model M' and leverages the ratio of the perplexity measured by theses two language models as the metric <cit.>:
T=argmax_T_i C(T_i)=argmax_T_i PPL_M'(T_i|S)/PPL_M(T_i|S)
The language model M' could be a much smaller model trained on the same dataset with M or trained on a different dataset.
Comparing (zlib) Different from Comparing (LM), it uses the zlib <cit.> entropy of the text (i.e., the number of bits after compression with zlib) for comparison <cit.>:
T=argmax_T_i C(T_i)=argmax_T_i len(zlib(T_i))/PPL_M(T_i|S)
Comparing (lowercase) It compares the perplexity of the original text and the lower-cased text measured by the same language model M <cit.>:
T=argmax_T_i C(T_i)=argmax_T_i PPL_M(lowercased(T_i)|S)/PPL_M(T_i|S)
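For reference, these baseline scores can be reproduced in a few lines. The sketch below computes the perplexity of a suffix given its prefix and the zlib variant; it assumes a Hugging Face causal LM and tokenizer with string prefix/suffix inputs (the benchmark actually provides token ids directly, which avoids any tokenization mismatch), and the helper names are ours.

import math
import zlib
import torch
import torch.nn.functional as F

@torch.no_grad()
def suffix_nll(model, tokenizer, prefix, suffix):
    """Average per-token negative log-likelihood of `suffix` given `prefix`;
    exp() of this value is the perplexity PPL_M(T|S) used by the baselines."""
    ids = tokenizer(prefix + suffix, return_tensors="pt").input_ids
    n_prefix = tokenizer(prefix, return_tensors="pt").input_ids.size(1)
    logits = model(ids).logits[:, :-1]                    # position i predicts token i+1
    losses = F.cross_entropy(logits.transpose(1, 2), ids[:, 1:], reduction="none")
    return losses[:, n_prefix - 1:].mean().item()         # suffix tokens only

def zlib_score(model, tokenizer, prefix, suffix):
    """Comparing (zlib): compressed length of the suffix over its perplexity."""
    ppl = math.exp(suffix_nll(model, tokenizer, prefix, suffix))
    return len(zlib.compress(suffix.encode("utf-8"))) / ppl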
Furthermore, we conduct ablation tests by removing the proposed components respectively to investigate the influence of each component.
§.§ Metrics
We adopt the following automatic metrics for evaluation.
Recall The metric computes the percentage of the suffixes that are predicted verbatim over the whole test set. A higher recall score indicates better data extraction ability, which can also be understood as a higher attacking success rate.
Recall_Early stop The metric first sorts the predictions according to their confidence scores and then evaluates the correctness of each prediction one by one. It finally computes the Recall score at the point where x incorrect predictions have been made. We set x to 100 in our experiments following the LM-Extraction benchmark. A better confidence estimation method gives the correct predictions higher confidence scores and thus leads to a higher Recall_Early stop score.
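The early-stop variant can be computed by sorting the predictions by confidence and scanning until the error budget is spent, as in the sketch below (our own helper; max_errors corresponds to x = 100).

def recall_early_stop(predictions, max_errors=100):
    """predictions: list of (confidence, is_correct) pairs over the test set.
    Scan in order of decreasing confidence and count the correct predictions
    made before the max_errors-th incorrect one."""
    ordered = sorted(predictions, key=lambda p: p[0], reverse=True)
    correct = errors = 0
    for _, is_correct in ordered:
        if is_correct:
            correct += 1
        else:
            errors += 1
            if errors >= max_errors:
                break
    return correct / len(predictions)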
§.§ Main Results
Table <ref> shows the automatic evaluation results with GPT-Neo 1.3B as the backbone. Ethicist achieves an impressive Recall score of 62.8% and outperforms all the baselines by a large margin, indicating its better ability to extract training data from language models. Moreover, Ethicist has better confidence estimation performance after calibration as shown by a significantly higher Recall_Early stop score.
To further investigate the influence of each component, we run an ablation study. From the results shown in Table <ref>, it can be seen that both the smoothing loss and the calibrated confidence estimation are important to enhance the ability to extract training data, and combining both of them achieves the best performance. Furthermore, we draw the following conclusions: (1) With prompt tuning and extra training data, we can better induce large-scale language models to generate their memorized training data and successfully achieves a 9.5% performance improvement on Recall and a 12.4% performance improvement on Recall_Early stop. (2) The proposed smoothing loss can further enhance the ability to extract training data, boosting the Recall score from 60.8% to 62.3%. (3) The calibrated confidence provides a 6.3% improvement on Recall_Early stop as expected, demonstrating the importance of calibrating confidence estimation for this task. (4) The smoothing loss is more effective in predicting exact suffixes while the calibrated confidence is more beneficial for identifying highly confident predictions, according to the significant drop in Recall without smoothing and the substantial decrease in Recall_Early stop without calibration. (5) The calibrated confidence estimation is effective regardless of whether using prompt tuning. And it demonstrates greater advantages compared to the comparing (LM) baseline in recognizing predictions with higher confidence when using prompt tuning, indicated by increasing Recall_Early stop (from 48.7 to 52.4).
§.§ Analysis: Decoding Strategy
In our experiments, we use top-p sampling to sample multiple candidate suffixes conditioned on one given prefix. However, there are also other popular decoding methods, including greedy search, beam search, and top-k sampling. We thus compare these popular decoding methods in this section. Table <ref> shows the results. Not surprisingly, greedy search performs worst on both Recall and Recall_Early stop, which suggests some tokens in the ground truth suffix do not have the highest probability at the corresponding positions. Beam search outperforms top-p sampling on Recall, indicating that searching for the suffix with the lowest loss works well to find the ground truth suffix. However, beam search performs significantly worse than top-p sampling on Recall_Early stop, because it cannot use our calibrated confidence. Compared with beam search, top-p sampling can generate multiple candidates, which could substantially increase the accuracy of confidence estimation with our proposed calibrated confidence. Moreover, the top-k sampling performs worse than top-p sampling on Recall_Early stop, which may be because top-k sampling is easier to sample low-probability tokens and thus reduce the confidence of the ground truth suffixes. We finally select top-p sampling as our decoding method due to its balance on Recall and Recall_Early stop.
§.§ Analysis: Model Scale
Previous works on scaling laws find that larger language models can memorize more training data <cit.>. Therefore, we are interested in how targeted data extraction performance varies across different model scales.
Figure <ref> shows the results. We can see that the targeted training data extraction performance continuously increases as the model scale increases from 125 million to 6 billion. Ethicist shows impressive results as it consistently and significantly outperforms baselines across different model scales. Thanks to prompt tuning, Ethicist is efficient in terms of computation time and particularly memory consumption. Therefore, Ethicist can also be adapted to larger language models for efficient targeted training data extraction.
§.§ Analysis: Prefix Length and Suffix Length
All prefixes and suffixes in the LM-Extraction benchmark are 50 tokens long, making it an interesting question how the length of prefixes and suffixes would affect the extraction performance.
We show the effect of the given prefix length in Figure <ref>. We can observe that the extraction performance grows approximately linearly with the prefix length for all evaluated methods, and Ethicist performs best for all prefix lengths. Although all methods have similar growth speed on Recall, Ethicist has the highest growth speed on Recall_Early stop. It is also interesting that Comparing (LM) only outperforms Perplexity when given prefixes that are long enough.
We show the effect of the predicted suffix length in Figure <ref>. For all three methods, the extraction performance decreases when the suffix length increases. Different from the approximately linear relationship between the prefix length and the extraction performance, the performance degradation tends to become progressively slower as the suffix length increases. This suggests that the model can still memorize a considerable proportion of suffixes (rather than quickly decreasing to zero) even if the predicted suffix length continues to increase. What's more, we observe that Ethicist has a significantly slower speed of performance degradation compared with the two baselines, which suggests Ethicist is effective for eliciting deeper memorization of longer suffixes of the attacked model.
§.§ Analysis: Sampling Time
Due to space limitations, we put the analysis of sampling time in Appendix <ref>.
§ DISCUSSION
We further show some statistical features in Table <ref>. We can see that the memorized suffixes are sampled significantly more frequently, with a high average repeat time of 85.38, validating that the repeat time is a valuable signal for confidence estimation. What's more, the memorized suffixes have significantly higher confidence. One interesting phenomenon we observe is that if the ground truth suffix can be generated, it is mostly among the top 3 highest-confidence candidates (Recall@3 ≈ Recall@100). We also find that for more than 30% of the prefixes, the model cannot generate the correct suffix even given 100 chances. Therefore, an important future direction is to design better methods to elicit memorization in the attacked model. Considering the non-negligible gap between Recall@1 and Recall@100 (0.63 vs. 0.69), another important future direction is to design better confidence estimation methods (maybe trainable), which can pick out the ground truth suffix among the collection of candidate suffixes for one prefix.
We show a case in Figure <ref>. Although the first predicted suffix has higher loss than the second predicted suffix, it is sampled far more times than the latter. Therefore, we assign higher confidence to the first suffix using our calibrated confidence estimation method. We further show the probability of generating each token during the sampling process in Figure <ref>. We can observe that although the correct prediction has higher loss as a whole, it keeps a high sampling probability across the generation process. The minimum probability of generating one token in the correct suffix is about 0.45, which is significantly higher than 0.1 for the wrong suffix. Therefore it is easier to generate the correct suffix, which leads to a higher confidence score. This is also in line with our motivation for designing the extra smoothing loss, which can increase the probability of sampling the correct suffix.
§ CONCLUSION
In this work, we propose Ethicist, an effective method for targeted training data extraction attack. Ethicist uses soft prompt to elicit memorization in the attacked model. To ensure the probability of the ground truth suffix token at each time step is not low, we propose a smoothing loss besides the standard MLE loss.
We also propose a calibrated confidence estimation method to calibrate the scale of confidence across different samples.
Experiments on the LM-Extraction benchmark demonstrate that Ethicist significantly improves the extraction performance. We further conduct extensive experiments to investigate several critical factors influencing the extraction performance, including decoding strategy, model scale, prefix length, and suffix length.
We hope our work can promote future researches on better attack methods and practical defense methods for the training data extraction problem.
§ ACKNOWLEDGEMENT
This work was supported by the NSFC projects (Key project with No. 61936010 ). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005.
§ LIMITATIONS
Although we conduct experiments across various model scales ranging from 125M to 6B, there are still larger language models we don't test either because their training data is not publicly released or because we have limited resources.
Moreover, the examples in the LM-Extraction benchmark are all chosen to meet the property that there exists a prefix length (maybe longer than 50) that causes the model to generate the suffix string exactly, which makes the extraction performance on this benchmark higher than that on randomly selected prefixes.
§ ETHICS STATEMENT
Ethicist is a powerful method to elicit memorization in the large pre-trained language models, which makes it a useful tool to expose the privacy risk of large language models. However, it also has a risk to be abused by attackers to extract privacy information from pre-trained language models. Thus large language models should be carefully examined before being made publicly available. What's more, it is necessary to develop defense methods against the training data extraction attacks without sacrificing the language modeling ability.
The LM-Extraction benchmark is derived from the Pile dataset, and thus covers many domains including books, code, emails, etc. This suggests the effectiveness of targeted training data extraction across different domains.
§ IMPLEMENTATION DETAILS
As the benchmark is derived from The Pile <cit.> dataset, we conduct experiments only on the models that are pre-trained on The Pile dataset. They are GPT-Neo 125M, GPT-Neo 1.3B, GPT-Neo 2.7B, and GPT-J 6B <cit.>. We set the prompt length to 100, the batch size to 32, the learning rate of AdamW optimizer to 1e-3, the warmup step to 500, the learning rate decay strategy to linear, N in Equation <ref> to 5, α in Equation <ref> to 0.7, and the maximum training epoch to 20 with an early stopping mechanism. In our main experiments, we generate the suffix using top-p sampling <cit.> with p=0.7 and temperature=0.8. For other decoding methods, we set beam size to 10 for beam search, and k to 10 for top-k sampling (temperature=0.8). Our code is based on Huggingface Transformers <cit.>.
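As an illustration of the generation step with these hyperparameters, the following Hugging Face snippet samples 100 candidate 50-token suffixes for a single prefix. The model identifier and variable names are our assumptions, not taken from the released code, and the prefix string is a placeholder.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"        # one of the attacked models trained on The Pile
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prefix_text = "..."                            # a 50-token prefix from the benchmark
prefix_ids = tokenizer(prefix_text, return_tensors="pt").input_ids
candidates = model.generate(
    prefix_ids,
    do_sample=True,                            # top-p sampling as in the main experiments
    top_p=0.7,
    temperature=0.8,
    max_new_tokens=50,                         # suffix length in the LM-Extraction benchmark
    num_return_sequences=100,                  # K candidate suffixes per prefix
    pad_token_id=tokenizer.eos_token_id,
)
suffixes = tokenizer.batch_decode(candidates[:, prefix_ids.size(1):])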
§ COMPUTING INFRASTRUCTURE
All experiments are carried out on a single Tesla V100 GPU with 32GB memory. Each experiment can be completed in less than 20 hours.
§ EFFECT OF SAMPLING TIME
In our main experiments, we sample 100 candidate suffixes for one given prefix. We show the effect of sampling time in Figure <ref>. We can see that all methods' performances increase quickly when the sampling time increases from 1 to 10. However, Ethicist's performance can still improve slowly when the sampling time increases from 10 to 100, which we attribute to the consideration of repeat time in our calibrated confidence estimation. What's more, although we report the result for sampling 100 times in our main experiments, we can see that Ethicist can achieve satisfying performance when sampling only 10 times, which suggests the efficiency of Ethicist.
|
http://arxiv.org/abs/2307.04420v1 | 20230710085407 | FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless Communication Networks | [
"Peng Liu",
"Youquan Xian",
"Chuanjian Yao",
"Xiaoyun Gan",
"Lianghaojie Zhou",
"Jianyong Jiang",
"Dongcheng Li"
] | cs.DC | [
"cs.DC",
"cs.AI"
] |
FedDCT: A Dynamic Cross-Tier Federated Learning Scheme in Wireless Communication Networks

Peng Liu^1,2 ([email protected]), Youquan Xian^1,2 ([email protected]), Chuanjian Yao^1,2 ([email protected]), Xiaoyun Gan^1,2 ([email protected]), Lianghaojie Zhou^1,2 ([email protected]), Jianyong Jiang^1,2 ([email protected]), Dongcheng Li^1,2 ([email protected])

^1 Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 54104, China
^2 Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 54104, China
With the rapid proliferation of Internet of Things (IoT) devices and the growing concern for data privacy among the public, Federated Learning (FL) has gained significant attention as a privacy-preserving machine learning paradigm. FL enables the training of a global model among clients without exposing local data. However, when a federated learning system runs on wireless communication networks, limited wireless resources, heterogeneity of clients, and network transmission failures affect its performance and accuracy. In this study, we propose a novel dynamic cross-tier FL scheme, named FedDCT to increase training accuracy and performance in wireless communication networks. We utilize a tiering algorithm that dynamically divides clients into different tiers according to specific indicators and assigns specific timeout thresholds to each tier to reduce the training time required. To improve the accuracy of the model without increasing the training time, we introduce a cross-tier client selection algorithm that can effectively select the tiers and participants. Simulation experiments show that our scheme can make the model converge faster and achieve a higher accuracy in wireless communication networks.
§ INTRODUCTION
With the rapid proliferation of intelligent services and applications powered by artificial intelligence (AI), the Internet of Things (IoT) is permeating various aspects of our daily lives. Traditional AI techniques rely on centralized data collection and processing, which may not be feasible in real-world scenarios due to escalating concerns about data privacy and the high scalability of modern IoT networks. In this context, Federated Learning (FL) has emerged as a distributed and collaborative AI approach that enables training on distributed IoT devices without data sharing, making numerous intelligent IoT applications attainable <cit.>.
In wireless communication networks, IoT clients have different computing and communication resources as well as unavoidable transmission failures, which can cause straggler problems. Although stragglers result in model drift and affect model convergence <cit.>, the straggling caused by heterogeneous clients and that caused by communication failures are different; the former is relatively stable and predictable, while the latter is unpredictable. Furthermore, local data samples of different clients are usually not independent and identically distributed (non-iid), which prolongs the training time and reduces the accuracy of the model <cit.>. To address these problems, asynchronous federated learning <cit.> is used with the expectation that clients can improve their training performance in a single round without waiting for stragglers. However, asynchronous FL usually requires more iterations and communication overhead to train and is difficult to integrate into existing privacy protection schemes <cit.>. To improve the training performance, TiFL <cit.> divides clients into different tiers according to their training response time and randomly selects clients for training within a tier. However, although TiFL reduces the extra training time caused by heterogeneous clients, it does not consider the impact of communication failures in wireless communication networks.
In this study, we propose a new dynamic cross-tier federated learning scheme named FedDCT for existing FL applications. FedDCT consists of two algorithms: a dynamic tiering algorithm and a cross-tier client selection algorithm. Specifically, the dynamic tiering algorithm is used to evaluate the training time of clients and divide them into different logical tiers, and before each training round, the cross-tier client selection algorithm is used to select the tiers and participants. FL distributes the latest global model to selected clients for training. Through the dynamic tiering algorithm, timeout thresholds are assigned for each tier to increase training performance. If the training time of a client exceeds their tier’s threshold, they will be removed from the training process and re-evaluated. However, if the training time of a client does not exceed their threshold, their average training time is updated.
Our main contributions are as follows:
* To reduce the delay in training time caused by stragglers, we designed a dynamic tiering algorithm that aims to divide clients dynamically into different tiers according to their training time and assign different timeout thresholds for each tier. For clients that exceed the threshold, we used an evaluation program to reduce the impact of stragglers and improve training performance.
* To reduce training time and improve model accuracy, we proposed a cross-tier client selection algorithm. The algorithm uses different strategies for selecting tiers and participants to achieve a balance between training time and accuracy.
* FedDCT considers both data heterogeneity and different types of stragglers in wireless communication networks. Extensive experiments showed that the FedDCT can achieve better training performance and training accuracy under different degrees of data heterogeneity, client heterogeneity, and network reliability.
The remainder of this paper is organized as follows: Section <ref> summarizes related research in federated learning, Section <ref> provides a preliminary introduction to FL, and the technical details of FedDCT are presented in Section <ref>. Section <ref> summarizes the experimental results and discussion, and Section <ref> concludes the paper.
§ RELATED WORK
In recent years, many schemes have been proposed to reduce the influence of data heterogeneity, resource heterogeneity, and stragglers in FL to improve the training performance and accuracy of wireless communication networks.
To reduce the impact of data heterogeneity in training and to improve model accuracy, Wang et al. <cit.> suggested using Deep Q-Network (DQN) to select participants because they believed that the distribution of training samples was related to the model weight. Furthermore, Fraboni et al. <cit.> claimed that the current FL sampling algorithm was biased and unstable and proposed to select participants by introducing cluster sampling. Although the above schemes can effectively reduce the impact of data heterogeneity on FL, they do not consider the impact of the training time of the selected clients on the overall training time.
To reduce the impact of resource heterogeneity in training and improve training performance, Nishio et al. <cit.> proposed the FedCS algorithm, which dynamically selects clients for training according to their resource status, enables the server to aggregate as many model updates from the clients as possible, and significantly accelerates the training speed. In addition, Abdulrahman et al. <cit.> proposed the FedMCCS algorithm, which considers the computational resources and communication capabilities of the clients, predicts whether the clients can complete the task, and maximizes the number of clients selected to improve the overall convergence speed. However, this approach does not consider the effect of data heterogeneity, and excessive participants can increase network load <cit.>. Leng et al. <cit.> considered the channel and learning quality of clients in a wireless communication network, selected participants, and assigned subchannels to them. Zhang et al. <cit.> used reinforcement learning to select participants to whom different local iteration epochs and radio resources are allocated. TiFL divides clients into different tiers based on their training time and randomly selected participants from each tier in a round <cit.>. Although the above method can effectively improve the convergence speed of the model, the problem of stragglers due to network failures and other problems in wireless communication networks still significantly increases the training time of FL.
Asynchronous FL eliminates the need to wait for other clients to upload their model parameters in each training round, which can greatly improve the training performance <cit.>. Xie et al. <cit.> designed an adaptive weight algorithm according to the staleness of the model and updated the global model using a weighted average. Wang et al. <cit.> proposed a new aggregation weight that jointly considers the effect of training data size and model staleness on global model convergence. Chai et al. <cit.> proposed FedAT, an asynchronous federated learning system that enables clients in each tier to be trained simultaneously and uses gradient quantization and sparsification techniques to minimize communication. However, asynchronous FL aggravates the differences in training participation among clients, which may cause model drift and affect model accuracy <cit.>. Moreover, current asynchronous FL methods are difficult to integrate into the currently available FL privacy protection schemes <cit.>.
Luo et al. <cit.> proposed that the training time can be reduced while ensuring convergence by adjusting the basic variables of each training round (number of selected nodes and number of local iterations). Chen et al. <cit.> used the upper confidence bound (UCB) algorithm to predict the computational and communication capabilities of clients and assign different numbers of local iterations to them. Chen et al. <cit.> used a dynamic learning step to compensate for clients with high data volume and poor communication status. Liu et al. <cit.> noted that the bias of the local model is initially large and decreases as training progresses; therefore, they proposed an adaptive number of aggregation models to improve the convergence speed of the global model. However, none of the above schemes consider the impact of stragglers in wireless communication networks on FL training.
Although there have been many related studies trying to improve the training accuracy and performance of FL systems in wireless communication networks, the impact of stragglers caused by various factors on FL training still requires further investigation. Therefore, this paper proposes the FedDCT scheme, which considers different types of stragglers as well as data heterogeneity to improve model accuracy and training performance in wireless communication networks.
§ PRELIMINARY INTRODUCTION ON FL
FL algorithms typically involve training tens of thousands of remote clients on their local datasets and jointly training a global shared model under the coordination of an aggregation server <cit.>. FL is an iterative process in which the selected clients use the latest global model and local data for training. The server then aggregates the trained models to form a new global model.
The goal is to minimize the global objective f(w) = 1/| C |∑_i ∈ C f_i(w), where C represents the set of all available clients, | C | represents the number of available clients, and f_i(w) denotes the local objective of client i.
The basic flow of the FL algorithm is briefly summarized in Algorithm <ref>, as follows.
* The aggregation server first initializes the weight of the global model w^0 randomly.
* At the beginning of each round, the aggregation server randomly selects the set of participants C_r and sends the latest global model w^r to them.
* The selected clients use the global model w^r and local data for training. They then return the trained model to the aggregation server.
* The aggregation server waits for the selected clients to upload their trained models w_c^r+1 and aggregates them to form a new global model w^r+1.
Steps 2–4 are repeated until either a predetermined number of training rounds has been completed or the model convergence satisfies the necessary accuracy standards.
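To make the aggregation step concrete, the following is a minimal PyTorch sketch of one synchronous round. It is an illustration rather than the paper's implementation; the helper names (aggregate, run_round, local_train) and the plain element-wise averaging are assumptions made for the example.

```python
import copy
import torch

def aggregate(client_states):
    """Average the clients' model parameters element-wise to form the new global model w^{r+1}."""
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg_state

def run_round(global_model, selected_clients, local_train):
    """One synchronous FL round: broadcast w^r, let each selected client train locally, aggregate."""
    client_states = []
    for client in selected_clients:
        local_model = copy.deepcopy(global_model)   # client receives the latest global model w^r
        local_train(local_model, client)            # client-side training on its local data
        client_states.append(local_model.state_dict())
    global_model.load_state_dict(aggregate(client_states))
    return global_model
```

In practice, FedAvg-style schemes often weight each client's update by its local data size rather than taking a plain mean; the sketch keeps the simpler form described above.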
§ FEDDCT: DYNAMIC CROSS TIER FEDERATED LEARNING
In this section, we introduce the framework and algorithm flow of FedDCT. First, we effectively improve the training performance using the dynamic tiering algorithm. Second, we use the cross-tier client selection algorithm to select more clients to participate in the training process without increasing the training time to improve the training accuracy. Table <ref> summarizes the main symbols used in this study.
§.§ System Overview
FedDCT consists of two main components: 1) The dynamic tiering algorithm evaluates the training time of each client and divides the clients into different tiers accordingly. 2) The cross-tier client selection and tier timeout threshold algorithm selects a tier according to the accuracy change of the global model, selects the participating clients according to the training information of the clients in that tier, and assigns a specific timeout threshold to each tier.
We illustrate the training process of FedDCT in Fig. <ref> and explain its specific implementation in Algorithm <ref>. First, the dynamic tiering algorithm evaluates the training time of all participants and divides them into M tiers according to the training time of each client: {tier_1,...,tier_M}, with tier_1 being the fastest tier and tier_M being the slowest.
Before each training round, FedDCT selects the participants for the round based on the change in model accuracy and the number of successful rounds of the clients, and distributes the latest global model w^i to them. The selected clients train the model using the global model and their local data and return it. The server aggregates the successfully uploaded models to form a new global model and updates the most recent client training times. For example, in round 1, the server selects clients in tier_1 and tier_2 to participate in the training process. All clients selected from tier_1 complete training and upload their models within the timeout threshold D_Max^1,1. Some of the clients in tier_2, who are considered stragglers (highlighted in red in Fig. <ref>), cannot complete the task within the timeout threshold D_Max^1,2, and the server does not wait for them. In addition, the dynamic tiering algorithm re-evaluates the training time of the stragglers in each training round.
§.§ Dynamic Tiering Algorithm
The resources of clients in wireless communication networks are usually heterogeneous, which increases the variation in training time between clients and affects training efficiency. An effective solution is to select clients with similar training times to participate in the training process. First, the server performs κ rounds of pre-training and divides the clients into M tiers based on their average training time at. Clients whose average training time exceeds the threshold Ω are considered stragglers and are thus not allowed to participate in subsequent rounds <cit.>. As shown in Fig. <ref>, the training time of clients from tier_1 to tier_M increases sequentially, while clients in the same tier have similar training times.
at[i] = at[i] if at[i] < Ω; dropout if at[i] ≥ Ω
Although this algorithm can effectively reduce the training time caused by differences in client resources, such a fixed-tiering algorithm cannot adapt to dynamic changes in the wireless communication network. Owing to problems such as network failures, there may be a large number of stragglers. On the one hand, if these clients are discarded directly, the model will have low accuracy; on the other hand, if they are selected to participate in training, they will increase the training time of a single round.
To reduce the training time in a wireless communication network, we have designed a dynamic tiering algorithm. As shown in Algorithm <ref>, we first conduct κ rounds of training in the initial stage and evaluate the training time of clients to divide them into different tiers. In the subsequent rounds, we update at using the real training time of clients t_train. For stragglers joining subsequent rounds, our scheme does not wait for them in the current round and places them in a parallel evaluation program, which allows stragglers to update their average training time by completing training tasks whose results are not aggregated.
at[i] = (at[i] × ct[i] + t_train) / (ct[i] + 1)
Clients exceeding the tier timeout threshold D_max^t (the red part of Fig. <ref>) are not selected in subsequent rounds until κ rounds of evaluation are completed. After the κ evaluation rounds are completed, their new average training time is (∑_i=1^κ t_train^i)/κ. Finally, they are re-tiered according to their updated average training time and re-participate in the following rounds.
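A rough Python sketch of the dynamic tiering bookkeeping is given below. The function names and the equal-split tiering rule are illustrative assumptions; the paper specifies only the running-average update, the straggler threshold Ω, and re-tiering after κ evaluation rounds.

```python
def update_average_time(at, ct, client, t_train):
    """Running-average update at[i] = (at[i]*ct[i] + t_train) / (ct[i] + 1) after a completed round."""
    at[client] = (at[client] * ct[client] + t_train) / (ct[client] + 1)
    ct[client] += 1

def assign_tiers(at, num_tiers, omega):
    """Rank clients by average training time and split them into num_tiers groups (tier 1 = fastest).
    Clients whose average time is at least omega are marked as dropped (tier 0)."""
    eligible = sorted((c for c in at if at[c] < omega), key=lambda c: at[c])
    tiers = {c: 0 for c in at}                      # 0 marks dropped clients
    group = max(1, -(-len(eligible) // num_tiers))  # ceil division: clients per tier
    for rank, client in enumerate(eligible):
        tiers[client] = rank // group + 1           # tiers numbered 1..num_tiers
    return tiers
```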
§.§ Cross Tier Client Selection Algorithm
Based on the above analysis, we propose a cross-tier client selection algorithm. If we choose clients from a tier with a short training time to participate in training, we can reduce the single-round training time of FL. However, selecting only clients from tiers with short training times leads to insufficient training of the clients in the other tiers and consequent model drift <cit.>. In this regard, we thoroughly examine how client selection affects overall accuracy and training time, and formulate the problem of reducing training time while maintaining model convergence performance. For example, if the clients in tier 1 can still help the global model converge faster, a subset of them is selected rather than clients from slower tiers to reduce the training time.
To evaluate the performance of the system, we use the change in accuracy produced by the newly aggregated model as the criterion. If the current accuracy υ_i is higher than the accuracy υ_i-1 of the last round, the clients in the currently used t^th tier can still improve the global model accuracy; therefore, we only need to select the t-1^th tier for the next round. However, if υ_i is lower than υ_i-1, the data in the current tier may not be enough to help the global model converge effectively, and more data needs to be added. Therefore, the t+1^th tier should be selected in the next round.
t = min(t + 1, M) if υ_i < υ_i-1; max(t - 1, 1) if υ_i ≥ υ_i-1
In wireless communication networks, stragglers may cause the number of completed training rounds to differ across tiers, as non-stragglers are selected more frequently than stragglers, leading to global model drift. To solve this problem, we design a weighted client selection algorithm within each tier. Clients with fewer successful rounds have a higher probability of being selected, which can accelerate model convergence. Therefore, we assign each client in tier t a selection weight probs based on its number of completed rounds ct. Finally, the τ clients with the lowest probs values are selected as the participant clients C_r in the r^th round.
probs[i] = ct[i]/∑_i ∈ ts[t] ct[i]
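The tier-adjustment rule and the participation-weighted selection above can be sketched as follows. The helper names are assumptions for illustration; picking the τ clients with the lowest probs values follows the description in the preceding paragraph.

```python
def next_tier(t, acc_curr, acc_prev, num_tiers):
    """Move to a slower tier when the global accuracy stops improving, otherwise to a faster one."""
    return min(t + 1, num_tiers) if acc_curr < acc_prev else max(t - 1, 1)

def select_participants(tier_clients, ct, tau):
    """probs[i] = ct[i] / sum(ct) over the tier; the tau clients with the lowest probs
    (i.e., the least frequently trained ones) become the participants C_r of this round."""
    total = sum(ct[c] for c in tier_clients) or 1   # avoid division by zero in the first rounds
    probs = {c: ct[c] / total for c in tier_clients}
    return sorted(tier_clients, key=lambda c: probs[c])[:tau]
```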
FedDCT selects a tier in each training round and selects τ clients from that tier, according to the weights described above, as the participants C_r of the current round. The final training time D^t is affected by the actual training times D_train^i, i ∈ C_r, of the selected clients, the timeout threshold D_max^t of the current tier, and the maximum timeout threshold Ω.
D^t = min(max(D_train^1,...,D_train^τ ),D_max^t,Ω)
However, this approach is inefficient: while the clients in tier t complete their training and upload their models, the faster clients would sit idle for a significant part of the round. To improve training performance, we select clients not only from tier t but also from tiers {1...t-1} to participate in the current round. The eventual training time D of this round then depends on the longest training time among tiers {1...t}.
D = max(D^1,...,D^t)
In wireless communication networks, the actual training time of clients may exceed expectations, as shown in Fig. <ref>. If a client in the first tier fails to complete training and upload within the estimated training time because of network delays or other reasons, it may delay the upload actions of clients in subsequent tiers and thereby prolong the training time. Therefore, we take advantage of the tiering structure and set a separate timeout threshold D^t_max for each tier.
First, we set the timeout tolerance to β and use the average training time of each tier, (∑_i ∈ ts[t] at[i]) / (∑_i ∈ ts[t] 1), multiplied by β as the timeout threshold D^t_max of that tier. We also set a maximum timeout threshold Ω to prevent D^t_max from becoming too large. Meanwhile, we allow clients to upload the model within a tolerable time (green part in Fig. <ref>). D^t_max restricts each tier of clients to uploading models within a certain time interval, thus alleviating interference between clients. Because the channel bandwidth of a wireless communication network is limited, numerous simultaneous uploads can lead to network congestion. Therefore, we introduce D^t_max to provide a time guarantee for the cross-tier client selection algorithm, which ensures that clients in the t^th tier cannot excessively interfere with the normal activities of clients in the t+1^th tier.
D^t_max = min( (∑_i ∈ ts[t] at[i] / ∑_i ∈ ts[t] 1) × β , Ω )
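The timeout bookkeeping can be sketched as below; the function names are assumptions, while the formulas follow D^t_max, D^t, and D as defined above.

```python
def tier_timeout(at, tier_clients, beta, omega):
    """D^t_max = min(average training time of the tier * beta, Omega)."""
    mean_at = sum(at[c] for c in tier_clients) / len(tier_clients)
    return min(mean_at * beta, omega)

def round_training_time(times_per_tier, timeouts_per_tier, omega):
    """Per selected tier t: D^t = min(max of its clients' actual times, D^t_max, Omega);
    the round finishes when the slowest selected tier does: D = max over the selected tiers."""
    per_tier = [min(max(times), d_max, omega)
                for times, d_max in zip(times_per_tier, timeouts_per_tier)]
    return max(per_tier)
```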
§ EXPERIMENTAL EVALUATION
§.§ Experimental Design
We use PyTorch to implement FedDCT and other FL baseline methods, referring to the implementation methods in FedLab <cit.>. We use a high-performance server with 2 x Intel(R) Xeon(R) Gold 6230 CPU, 128GB memory, and 2 x NVIDIA Tesla V100 FHHL graphics cards to simulate an aggregation server and 50 clients.
In this experiment, we used three common datasets for verification.
* MNIST <cit.> is a classic dataset in the field of image classification, consisting of 60,000 training images and 10,000 test images. Each sample is a 28×28 grayscale image, and the dataset contains 10 categories in total.
* CIFAR-10 <cit.> consists of 60,000 32×32 color images in 10 categories, with 6,000 images per category. In total, it contains 50,000 training images and 10,000 test images.
* Fashion-MNIST <cit.> contains 10 classes of images and consists of 60,000 training images and 10,000 test images; each example is a 28×28 grayscale image.
In the experiments, two classical neural network models, a CNN and ResNet8, were used. We trained the CNN on the MNIST and Fashion-MNIST datasets. For MNIST, the network architecture includes two convolutional layers with 32 and 64 filters, respectively, followed by 2×2 max pooling and two fully connected layers with 512 and 10 units. For Fashion-MNIST, the network architecture includes two convolutional layers with 32 and 64 filters, respectively, followed by 2×2 max pooling, a flattening layer, and two fully connected layers with 128 and 10 units. We trained ResNet8 on CIFAR-10 with the same network structure as in <cit.>.
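As a concrete reference, the MNIST CNN described above can be written in PyTorch roughly as follows. The filter counts and fully connected sizes follow the text; the 3×3 kernels, the padding, and the position of the single pooling layer are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class MnistCNN(nn.Module):
    """CNN for 28x28 grayscale inputs: conv(32) -> conv(64) -> 2x2 max pool -> fc(512) -> fc(10).
    Kernel size 3 and padding 1 are assumptions; the paper only specifies filter counts and units."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 28x28 -> 14x14
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 14 * 14, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# quick shape check
if __name__ == "__main__":
    out = MnistCNN()(torch.randn(4, 1, 28, 28))
    assert out.shape == (4, 10)
```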
We compared FedDCT with three synchronous and asynchronous FL methods:
* FedAvg: The baseline synchronous FL method proposed by McMahan et al. <cit.>. In each round, a certain proportion of the total clients is randomly selected for training, and the server averages the weights received from the selected clients to form a new global model.
* FedAsync: The asynchronous FL method with weighted aggregation proposed by Xie et al. <cit.>, which trains all clients simultaneously. When the server receives a model from any client, that model is weighted and averaged with the current global model to obtain the latest global model.
* TiFL <cit.>: Clients are divided into different tiers based on their training delay. In each round, one tier is selected by an adaptive selection algorithm based on the test accuracy across all tiers, and some clients of that tier are chosen at random for training. The aggregation method used by TiFL is FedAvg.
The learning rate was set to 0.001. For each method, we used the following training configuration: local epochs E = 1, batch size = 10, M = 5, τ = 5, β = 1.2, κ = 1, and Ω = 30 s; the same parameters were used for the other FL schemes. The other schemes select five clients to participate in training by default in each round, whereas for FedDCT, the number of clients selected in each round changes with the selected tier.
Clients in FL are often edge devices such as smartphones and IoT devices and possess varying computing and communication capabilities. First, we divide all clients into M groups and assign them random training delays drawn from Gaussian distributions with a variance of 2 and expectations of 5, 10, 15, 20, and 25 s, respectively, to simulate the training time differences caused by heterogeneous client resources. In a wireless communication network, clients vary not only in resources; any client may also drop out due to communication failure, client failure, etc. We randomly add a 30-60 s training delay to simulate the occurrence of various failures and use μ to control the probability of their appearance during training. To investigate the training effect under different data distributions, we assign a master class to each client at random, with #% of the data on the client belonging to this category and the remaining data belonging to the other categories.
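The simulated heterogeneity can be reproduced with a short script along the following lines. This is an illustrative sketch: the sampling helpers are not from the paper's code, and the non-IID split shown here skips some bookkeeping (e.g., it allows clients to share samples).

```python
import numpy as np

def assign_base_delays(num_clients, means=(5, 10, 15, 20, 25), var=2.0, seed=0):
    """Split clients into len(means) groups and draw each base training delay (seconds)
    from a Gaussian with the group's expectation and variance var."""
    rng = np.random.default_rng(seed)
    delays = np.zeros(num_clients)
    for mean, idx in zip(means, np.array_split(np.arange(num_clients), len(means))):
        delays[idx] = rng.normal(mean, np.sqrt(var), size=len(idx))
    return np.clip(delays, 0.1, None)

def delay_with_failures(base_delay, mu, rng):
    """With probability mu, add a random 30-60 s delay to emulate failures or dropouts."""
    return base_delay + rng.uniform(30, 60) if rng.random() < mu else base_delay

def non_iid_partition(labels, num_clients, major_frac=0.5, seed=0):
    """Give each client a random master class covering major_frac of its samples;
    the remainder is drawn from the other classes (clients may share samples in this sketch)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    per_client = len(labels) // num_clients
    parts = []
    for _ in range(num_clients):
        major = rng.choice(classes)
        major_pool = np.flatnonzero(labels == major)
        other_pool = np.flatnonzero(labels != major)
        n_major = min(int(per_client * major_frac), len(major_pool))
        parts.append(np.concatenate([
            rng.choice(major_pool, size=n_major, replace=False),
            rng.choice(other_pool, size=per_client - n_major, replace=False),
        ]))
    return parts
```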
§.§ Experimental Results
Table <ref> shows the best average accuracy across all datasets and the time required to achieve a preset accuracy. The experimental results show that FedDCT improves the accuracy by 1.91% and reduces the time cost by 56.3% over the best baseline on CIFAR-10 when # = 0.5. Under the same experimental configuration, FedDCT achieves higher training accuracy in all experiments and significantly reduces the time cost of training. This is due to the following factors: 1) dynamic tiering adapts more effectively to dynamic environments, improves the tier division of clients, and cuts down the time required for training; 2) the cross-tier client selection algorithm selects more clients to participate in training, which increases the accuracy of the model without increasing the training time. When μ > 0, TiFL could not achieve satisfactory training accuracy and training time because it misclassified clients and abandoned more clients during the initial stage. Fig. <ref> shows that TiFL performs best in a steady environment without any unexpected stragglers.
Fig. <ref> illustrates how FedDCT improves the training results across several datasets in heterogeneous environments, in terms of both the final training accuracy and the training time. Furthermore, FedDCT reaches the target accuracy faster than FedAvg because its selection algorithm caps the selection frequency of the slowest tier. Because FedAvg is entirely random, it is more likely to select the slowest tier, which results in a longer training time. In addition, FedDCT typically attains a higher final training accuracy than TiFL, as TiFL abandons stragglers that unintentionally drop out during training, which reduces the training accuracy.
As shown in Fig. <ref>, to study the influence of data heterogeneity on FL training, the CIFAR-10 dataset is partitioned with different degrees of non-iid skew for training. For μ = 0.1, we set # to iid, 0.3, and 0.7, respectively. The results match our expectations, and the proposed scheme achieves good results under different data distributions. Taking iid as an example, our scheme converges to an accuracy of 0.7 in only 685.69 s. Our scheme converges faster, and its final training accuracy is higher than that of the other baseline schemes. FedDCT may exhibit fluctuations in training accuracy because it balances training accuracy and time; however, the fluctuation period is very short. Comparing the three groups of graphs in Fig. <ref>, it can be observed in Fig. <ref> - <ref> that the training accuracy decreases as the heterogeneity of the data distribution increases. Meanwhile, the training time of FedDCT and TiFL is affected not only by data heterogeneity but also by stragglers, as shown in Fig. <ref> - <ref>. FedDCT and TiFL tend to select faster tiers for training; if stragglers occur more frequently in the faster tiers, the impact on the training time is greater.
As shown in Fig. <ref>, to study the influence of various failures in wireless communication networks on FL training, we experimented with CIFAR-10. For # = 0.5, we set μ to 0, 0.2, and 0.4 to test the performance of each scheme. As μ increases, the number of stragglers in the training process increases, and the training time of FL increases accordingly. At the same time, we find that μ has relatively little influence on FedDCT because its dynamic tiering algorithm and cross-tier client selection greatly decrease the impact of stragglers on FL.
As shown in Fig. <ref>, to study the performance of FL training in a more complex network environment, we increased the resource differences between clients by raising the expectations of the Gaussian distributions of the clients' training delays to 1, 3, 10, 30, and 100 s. We evaluated the various FL methods on the Fashion-MNIST dataset. When the environment is more complicated, the dynamic tiering scheme yields a greater convergence benefit than the other baseline schemes. The numbers of successful training rounds of clients differ more in a complex network environment, but our scheme limits the training time, which significantly speeds up convergence.
As depicted in Fig. <ref>, we also train in a stable network (without varied failures) and remove the dynamic tiering technique to confirm the effectiveness of our cross-tier client selection algorithm. The cross-tier client selection algorithm achieves good results on different datasets because our scheme can use more client data for training at the same time cost. Meanwhile, in conjunction with Fig. <ref> - <ref>, we can see that FedAsync typically lags behind the other schemes in training accuracy and time, as the staleness of its models makes it difficult for the global model to converge. Additionally, the convergence of FedAsync and FedAvg is subpar compared with the other baseline schemes because they fail to effectively utilize available information such as the training time.
Finally, to explore why our cross-tier client selection algorithm converges faster, we recorded how the selected tier in FedDCT changed during training, averaged it every 10 rounds, and fitted it with a linear regression model. As shown in Fig. <ref>, the overall trend of the selected tier increases with the training rounds, with more fluctuations in the middle. This is consistent with the expectations of the proposed design: FedDCT first uses the clients in the tiers with short training times until it becomes difficult to improve the accuracy of the global model, and then uses the clients in the tiers with longer training times. In the middle of training, the tier selection can be temporarily caught in the tradeoff between time and accuracy, causing it to fluctuate.
§ CONCLUSION
In this study, we proposed FedDCT, a novel dynamic cross-tier federated learning scheme that reduces the adverse impact of wireless communication networks on FL training. FedDCT adopts dynamic tiering to reduce the waiting time caused by heterogeneous resources and varied failures during training and thus improves training performance. Additionally, we designed a cross-tier client selection algorithm that effectively selects participants based on their training information to improve training accuracy and performance. Finally, we examined the influence of various factors in wireless communication networks on FL training, such as data heterogeneity, network failures, and resource heterogeneity. Experiments showed that our scheme is superior to traditional FL schemes in various heterogeneous scenarios in terms of both training accuracy and performance.
§.§ Acknowledgements
The research was supported in part by the National Natural Science Foundation of China (Nos. 62166004,U21A20474), the Guangxi Science and Technology Major Project (No. AA22068070), the Basic Ability Enhancement Program for Young and Middle-aged Teachers of Guangxi (No.2022KY0057 ), the Key Lab of Education Blockchain and Intelligent Technology, the Center for Applied Mathematics of Guangxi, the Guangxi "Bagui Scholar" Teams for Innovation and Research Project, the Guangxi Talent Highland Project of Big Data Intelligence and Application, the Guangxi Collaborative Center of Multisource Information Integration and Intelligent Processing.
|
http://arxiv.org/abs/2307.05924v3 | 20230712054433 | Applying SDN to Mobile Networks: A New Perspective for 6G Architecture | [
"Rashmi Yadav",
"Rashmi Kamran",
"Pranav Jha",
"Abhay Karandikar"
] | cs.NI | [
"cs.NI"
] |
Applying SDN to Mobile Networks: A New Perspective for 6G Architecture
Rashmi Yadav
Department of Electrical Engineering
Indian Institute of Technology Kanpur,
India
[email protected]
Rashmi Kamran
Department of Electrical Engineering,
Indian Institute of Technology Bombay,
India
[email protected]
Pranav Jha
Department of Electrical Engineering,
Indian Institute of Technology Bombay,
India
[email protected]
Abhay Karandikar
Department of Electrical Engineering,
Indian Institute of Technology Bombay, India
[email protected]
Director, Indian Institute Technology Kanpur, India
[email protected]
August 12, 2023
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The upcoming Sixth Generation (6G) mobile communications system envisions supporting a variety of use cases with differing characteristics, e.g., very low to extremely high data rates, diverse latency needs, ultra-massive connectivity, sustainable communications, ultra-wide coverage, etc. To accommodate these diverse use cases, the 6G system architecture needs to be scalable, modular, and flexible, both in its user plane and its control plane. In this paper, we identify some limitations of the existing Fifth Generation System (5GS) architecture, especially that of its control plane. Further, we propose a novel architecture for the 6G System (6GS) employing Software Defined Networking (SDN) technology to address these limitations of the control plane. The control plane in the existing 5GS supports two different categories of functionalities – handling end-user signalling (e.g., user registration, authentication) and control of user plane functions.
We propose to move the “end-user signalling functionality” out of the mobile network control plane and treat it as a user service, i.e., as payload or data. This proposal results in an evolved service-driven architecture for mobile networks, bringing increased simplicity, modularity, scalability, flexibility, and security to its control plane. The proposed architecture can also provide service-specific signalling support, if needed, making it better suited to diverse 6GS use cases. To demonstrate the advantages of the proposed architecture, we also compare its performance with the 5GS using a process algebra-based simulation tool.
Software-defined networking, Mobile networks, Service-driven architecture.
§ INTRODUCTION
The notable rise in the range of diverse use cases with differing attributes has paved the way for the continued evolution of mobile networks. The upcoming 6th Generation Mobile Communication System (6GS) is envisioned to support peak data rates (≥200 Gbps), very high mobility (500-1000 km/h), very low latency (0.1-1 ms), connection density in the range of 10^6-10^8 devices/km^2, and reliability of 10^-5-10^-7 <cit.>. Moreover, it is expected to witness further diversity with the emergence of newer categories of use cases. The Focus Group on Technologies for Network 2030 (FG NET-2030) <cit.> has identified and included the following use cases in its report: Holographic-type Communications, Tactile Internet for Remote Operations, Intelligent Operation Networks, Network and Computing Convergence, Digital Twin, Space-Terrestrial Integrated Network, and Industrial IoT with cloudification. A scalable, flexible, and modular network architecture is one of the essential ingredients for tackling this immense diversity of use cases in future mobile networks. The Third Generation Partnership Project (3GPP) adopted technologies such as Network Function Virtualization, Control and User Plane Separation, and Network Slicing for the Fifth Generation System (5GS), which resulted in improved scalability and flexibility of the 5GS over previous-generation mobile communication systems such as the Fourth Generation System (4GS).
However, there is scope for further improvement in mobile network architecture especially that of its control plane through the application of Software Defined Networking (SDN) technology. A survey of the existing research related to SDN-based enhancements in the mobile network control plane is presented next. The work in <cit.> proposes a centralised control plane for multi-Radio Access Technology (multi-RAT) Radio Access Network (RAN) to enhance the simplicity and flexibility of the network. Relocation of the control plane functionality of RAN to the Core Network (CN) to reduce the signalling cost between RAN and core has been discussed in <cit.>. Authors in <cit.> proposed a decentralized control plane architecture for the 5GS with independent control functions for different control events for flexible and scalable networks. An SDN architecture where a middle cell and a middle cell controller are introduced between the macro cell and the small cell to reduce the control overhead of the macro cell and to address the scalability problems is proposed in <cit.>. In <cit.>, authors proposed a new 5GS core architecture based on the SDN concept. They introduced a centralised SDN controller for easier and more flexible management of the user plane. In <cit.>, a hierarchical control plane is designed to lighten the load of the controller. It focuses on the vertical scalability of the control plane. In <cit.>, a scalability metric for the SDN control plane is proposed. Besides, a comparison between different SDN architectures is analysed via mathematical methods. In addition, there is a vast amount of literature on SDN-based network architectures, albeit unrelated to mobile networks <cit.>, <cit.>.
To summarize, current research in the context of applying SDN technology to mobile networks mainly focuses on centralized or distributed control plane architectures aimed at reducing control overheads or improving scalability. However, to the best of our knowledge, there is limited discussion or rethinking of certain other aspects of the network architecture, such as what functionality should constitute the mobile network control plane within an SDN-based framework. Is the network control plane the right place for end-user signalling handling functionality? Should Non-Access Stratum (NAS) messages be handled by CN control plane functions such as the Access and Mobility Management Function (AMF), or should this functionality be moved out of the AMF? Should the user authentication function (the Authentication Server Function (AUSF) in 5GS) be part of the CN control plane? These questions assume even more importance in the upcoming 6GS era, where a massive increase in the number of UEs is expected and the accompanying growth in end-user signalling has the potential to overburden the network control plane. In one of our earlier works <cit.>, we briefly analysed these questions.
In order to bring in additional enhancements to mobile network architecture, especially to its control plane, we propose to separate end user (User Equipment (UE)) signalling handling from the control plane functions. In a significant departure from the existing cellular networks, the proposed architecture views UE signalling as payload, i.e., a form of data traversing through the cellular network, not much different from other types of data such as Video streaming or Web browsing.
We analyse the proposed architecture using Performance Evaluation Process Algebra (PEPA) <cit.>, a formal language used to model distributed systems. We also provide a comparative analysis of the proposed architecture and the existing 5GS architecture through example call flows for Protocol Data Unit (PDU) session establishment and UE handover procedures. We demonstrate a significant reduction in the number of control messages exchanged in the proposed architecture along with the network's scalability.
The rest of the paper is organised as follows: Section <ref> provides limitations of the existing 5GS mobile network architecture. Section <ref> provides an overview of the proposed architecture and highlights its advantages. Section <ref> includes an information flow comparison of the existing and proposed architecture for PDU session establishment and handover procedures. Section <ref> describes the system model using PEPA. Section <ref> covers the performance analysis. Section <ref> provides the conclusion and future work.
§ LIMITATIONS OF EXISTING 5GS ARCHITECTURE
In this section, we have captured some of the limitations of the existing 5GS architecture especially that of its control plane. Although there can be other limitations too say pertaining to radio technology, etc., those are not discussed here.
§.§ Tight coupling of user plane control and UE signalling in control plane
The existing 5GS architecture supports control and user plane separation. The 5GS control plane performs user plane control (network resource control, e.g., setting up data paths through the user plane) and UE signalling handling (e.g., NAS/RRC (Radio Resource Control) message exchange with UEs). There is a tight coupling between these two categories of functionalities, and certain CN (e.g., AMF) and RAN gNodeB-Centralized Unit-Control Plane (gNB-CU-CP) control plane functions in the existing 5GS perform both. A detailed description of the control plane functionality is provided in <cit.>. As demonstrated here, decoupling UE signalling handling from user plane control may lead to a more modular and scalable network architecture.
§.§ Limited alignment with SDN paradigm
SDN is a networking paradigm which separates the control plane of a network from its user (data) plane and centralizes the network’s intelligence in the control plane. Although there are differing views in industry/academia on how to define an SDN-based network architecture, we can still discern a broad agreement on the topic <cit.>, <cit.>, <cit.>. The existing 5GS architecture incorporates the concept of SDN, resulting in architectural features such as the separation of the user plane from the control plane <cit.>. However, closer observation shows that the 5GS architecture does not align completely with the SDN paradigm. Besides controlling the user plane, the 5GS control plane also exchanges signalling messages with UEs to provide services such as authentication and also collect service requirements, e.g., requirements for PDU connectivity service. The functionality of signalling exchange with UEs may fit better within the service plane instead of the control plane.
§.§ Non-uniform handling of services
Services in the existing 5GS can be categorized into the following two types:
* Application-based services such as Media streaming services, IP Multimedia subsystem services, Mission-critical services, Multicast/Broadcast Services (MBS) etc.
* Other than these application-based services, the 5GS network also provides services such as initial access, registration, authentication, PDU connectivity (connectivity to data networks), and connected mode mobility support. Such services can be called built-in (or intrinsic) network services.
The two categories of services (Application based services and built-in network services) are enabled differently in the 5GS. As Application (Service) Functions (AFs) are independent and decoupled from the core and RAN functions of mobile networks, they access the control plane functions of the mobile CN over a standardized interface to enable service delivery through the user plane.
However, the delivery of built-in services is tightly integrated into the control plane of the 5GS network (RAN and CN) itself. It also leads to the use of special paths for signalling exchange with UEs, different from the regular data paths, and brings certain inconsistencies to the architecture. For example, the Performance Measurement Function (PMF), a sub-function within the User Plane Function (UPF), exchanges Measurement Assistance Information, a type of signalling information, with UEs to aid the access traffic steering, switching, and splitting (ATSSS) functionality at the UPF. This signalling information is exchanged via a regular data path (i.e., the user plane) between the UE and the PMF. This mechanism is different from how other signalling information, such as radio measurement reports supporting the handover procedure, is exchanged.
§.§ Complex protocols between control plane and user plane
The existing 5GS control plane architecture impacts the interface design (protocols) between the control and user planes. For instance, F1 Application Protocol (F1AP) is the protocol used on the interface between the RAN control plane (gNB-CU-CP) and the RAN user plane (gNB-Distributed Unit (gNB-DU) or RAN-DU). It is used to configure gNB-DU and also carries RRC (UE signalling) messages for UEs. Integrating both these types of functionalities in a single protocol results in a relatively complex communication protocol between gNB-CU-CP and gNB-DU.
§ SERVICE DRIVEN ARCHITECTURE FOR 6GS MOBILE NETWORKS
This section presents the proposed architecture, which addresses the architectural limitations of the existing 5GS (as discussed in Section <ref>) and highlights a few other advantages. In the proposed work, we aim to separate the UE signalling handling from the control plane and treat them as a service to the user to enhance modularity and flexibility in the mobile network control plane. With the proposed separation, the control plane is left with only the user plane control functionality, as shown in Fig. <ref>. The UE signalling handling functionality is moved out of the control plane to the service/application plane. The service plane consists of various in-built and external service functions, as shown in Fig. <ref>, such as the PDU Session Service Function (handles PDU session establishment and management providing PDU connectivity service), Mobility Service Function (responsible for handling UE mobility), Registration Service Function (handles UE registration with the network), Authentication Service Function (manages UE authentication), Multicast/Broadcast Services and a few others. Due to the reorganisation of the architecture, it offers various architectural and performance advantages discussed next. Please note that there may be separate controllers in the CN and RAN, as shown in Fig. <ref>. Similarly, we have a separate resource plane (user plane) for RAN and the CN. Further, the proposed architecture's user or resource plane may remain the same as the 3GPP 5GS.
§.§ Advantages of the proposed 6GS architecture
This section highlights a few advantages of the proposed work. Segregating UE signalling handling functionality from the control plane simplifies the control plane and enhances its modularity. The reorganised architecture also aligns well with the SDN paradigm, as the control plane is redesigned to perform only user plane control functionality, as discussed in Section <ref>. The proposed architecture also allows internal (or built-in 5GS) services to be treated in the same way as external application-based services, leading to uniform handling of various services. Further, this proposal results in the simplification of control messages. For instance, the number of session management-related messages is reduced due to the setup of a direct path between the UE and the service function (detailed in Section <ref>), leading to simplified call flows. Also, the number of hops between the RAN controller and the CN controller in the proposed architecture is smaller than between the corresponding 5GS entities, i.e., gNB-CU-CP and the Session Management Function (SMF), which further improves performance in terms of control plane latency and resource utilisation. The transposition of UE signalling handling functionality to functions in the service plane simplifies the protocols between the control plane and the user plane, such as the Next Generation Application Protocol (NGAP) between the CN control plane and the RAN, and F1AP between the RAN control plane (gNB-CU-CP) and the RAN user plane (gNB-DU).
The existing 5GS uses the same type of signalling messages for all use cases. However, it is possible to have different signalling requirements for different use cases, e.g., the Internet of Things (IoT) and human users. The proposed architecture may support this requirement by employing use case specific signalling service functions. Our proposal can also support flexible function deployment and chaining as various service functions, such as the PDU session service function, mobility service function, registration service function, and authentication service function, can be placed flexibly and chained together to serve UEs.
An additional advantage towards signalling security is presented here. 3GPP specification <cit.> highlights the exposed AMF which is vulnerable to replay attacks of NAS signalling messages between the UE and AMF (control plane of the CN). In a similar way, <cit.> presents the exposed RAN which is susceptible to replay attacks to RRC signalling messages between the UE and RAN (gNB-CU-CP (control plane of RAN)) as the Uu interface also carries sensitive RRC signalling. Furthermore, the European Union Agency for Cybersecurity (ENISA) <cit.>, in its report recommends that the N2 interface between the 5GS RAN and AMF is a target for attackers since they carry sensitive signalling between the RAN and the CN. Therefore, in this context, the proposed architecture may have some advantages towards the UE signalling security between the UE and the signalling service function. Since UE signalling is segregated from the control plane (of RAN and CN) and is terminated to a separate signalling server, it leads to the possibility of localizing the attack originating from a UE within the signalling server without compromising the network control plane where the architectural and logical control and management of RAN and CN are located. This segregation allows us to improve the UE-related signalling security of future mobile networks.
§ INFORMATION FLOW COMPARISON
In this section, we compare the information flows of the proposed architecture and the existing 5GS architecture. We consider the PDU session establishment and mobility services example to differentiate the working of the existing 5GS and the proposed architectures.
§.§ PDU session establishment as a service
Fig. <ref> and Fig. <ref> show the entities involved in PDU session signalling for the 5GS and the proposed architecture, respectively. In the 5GS, messages are exchanged between the UE and the SMF for PDU session-related signalling via the RAN (requiring both gNB-DU and gNB-CU) and the AMF. However, in the proposed architecture, signalling messages are directly exchanged between the UE and the PDU Session Service Function (PSSF) via the RAN (requiring only the RAN-DU), as shown in Fig. <ref>, which implies that signalling takes place over multiple hops in the existing 5GS, whereas the number of hops is reduced in the proposed architecture. Further, the control plane collects all requirements from the PSSF (which in turn receives them from the UE, as shown in Fig. <ref>) via the application-control interface and establishes the PDU session.
The complete message sequences for establishing PDU sessions in the existing 5GS are detailed in <cit.>, while the simplified call flow for the proposed architecture is shown in Fig. <ref>[In call flows and simulations, only those messages are considered and compared which differ between the proposed and existing architectures]. Please note that the controllers do not require response messages from the resource (user) plane, as the controller already holds the user plane resource information and handles resource decision-making. Therefore, the proposed architecture eliminates many such messages. For example, the N4 session modification request and response are exchanged between the SMF and the UPF in the 5GS architecture <cit.>, whereas only a session modification command (message 3 in Fig. <ref> and message 9 in Fig. <ref>) is sent from the CN controller to the CN user plane (UPF) in the proposed architecture; no session modification response message from the UPF is needed. These reductions in messages simplify both the session establishment and the mobility procedure (to be discussed next).
Please note that although the RAN-User Plane (RAN-UP) and other network functions/messages are necessary in real systems, we have shown only the CN functions in the call flow to keep the analysis tractable. However, keeping the RAN functions out of the call flows is not likely to alter the conclusions drawn here. This note applies to the mobility service as well.
§.§ Mobility as a service
We consider mobility as another service to illustrate the difference between the existing 5GS and the proposed architecture. Fig. <ref> and Fig. <ref> show the network entities, signalling and control message flow of the existing 5GS and proposed architecture, respectively.
S-DU and T-DU represent source gNB-DU and target gNB-DU, respectively. Similarly, the Source-Centralized Unit-User Plane (S-CU-UP) and Target-Centralized Unit-User Plane (T-CU-UP) represent source gNB-CU-UP and target gNB-CU-UP, respectively. S-CU-CP and T-CU-CP represent source gNB-CU-CP and target gNB-CU-CP, respectively. Also, the interaction between the RAN controller and the CN controller is through the inter-controller interface, as shown in Fig. <ref>. Signalling takes place between UE and MSF via S-DU before handover while after handover it is through T-DU. Likewise, the data path between UE and UPF is by way of S-UP before handover while it is via T-UP after handover.
Mobility call flow for the existing 5GS is available in <cit.>. Fig. <ref> shows the mobility call flow, which illustrates the handover procedure of the proposed architecture. For the sake of simplicity, the splitting of S-UP into S-DU and S-CU-UP and of T-UP into T-DU and T-CU-UP is not shown. The reasons behind the simplification of the mobility procedure/messages are the same as those explained for PDU session establishment in Section <ref>.
§ SYSTEM MODEL
This section presents the system model for the proposed architecture using PEPA. PEPA is a formal high-level language for the quantitative modelling of a distributed system <cit.>.
Table <ref> and Table <ref> represent the system model for the proposed architecture for the PDU session establishment and mobility procedures, respectively. To explain the system model, we use the PDU session establishment (or session establishment) procedure (shown in Fig. <ref>).
The session establishment procedure requires PSSF, CN controller and UPF as the key CN functions in the proposed architecture. These NFs are modelled as PEPA components. In addition, a UE is also modelled as a PEPA component. Each PEPA component (representing UE or a CN NF) goes through a set of states during the handling of the procedure. The individual component states are denoted by associating a unique number with the name of the component (e.g., Pssf_1, represents the first state of component, PSSF). Each component performs a set of actions, such as accessing the processor or sending a request/response. These actions are denoted in lowercase, and subscripts are added to provide further distinction (as action_actiondetail). For example, the notation for the action of PDU session establishment request and response can be req_pduse and rep_pduse, respectively. Each action is associated with a specific rate value, r. The rate (number of actions performed per unit time) models the expected duration of the action in the PEPA component and is taken as reference from <cit.>, <cit.> and <cit.>.
Let us now understand the details of modelling of NF states as shown in Table <ref>. Consider UE as an example. The UE acquires the processor in its initial state (acc_uep, r_a) and performs the processing action (process, r_iat) before sending a request. The second state, Ue_2, models the request (req_pduse, r_r) and response (rep_pduse, r_r) messages exchanged between UE and PSSF for the PDU session establishment.
NFs acquire processors to process a request/response. In Table <ref>, UEP, PSSFP, CONP and UPFP are the processing entities for UE, PSSF, CN controller (CON) and UPF respectively. These processing entities are modelled such that each NF processor has two states. For instance, the first state of UEP, Uep_1, is for acquiring the processor (acc_uep), and the second state, Uep_2, performs the processing action (process). Similarly, the other NFs and their processing entities are modelled.
As discussed in this section, the system model uses the following additional parameters: n denotes the number of UEs; N_pssf, N_con, and N_upf are the number of NF instances for PSSF, CN controller (CON), and UPF, respectively. Similarly, N_pssfp, N_conp, and N_upfp are the number of PSSF processor (PSSFP), CN controller processor (CONP) and UPF processor (UPFP), respectively. Please note that each processor can handle a set of concurrent threads, N_t. Thus, the product N_nf·N_nfp·N_t (as mentioned in the system model equation) represents the total number of threads for a type of NF. Moreover, the product N_nf·N_nfp is the total number of processors allocated to a type of NF, e.g., for UPF processor.
The system equation represents the overall system model. The cooperation operator (⋈), for example, A _L^⋈ B, models the interactions between NFs A and B over the actions defined in the cooperation set L. It can be noted that it is possible that component A _L^⋈ B will have different behaviour from component A _K^⋈ B if L≠K. Let us consider an example from Fig. <ref>, where PSSF and CN controller (CON) interact with each other for session context request/response req_sc/rep_sc. These actions are defined in cooperation set L_2, as shown in Table <ref>. Therefore, the system equation Pssf_1[N_pssf.N_pssfp.N_t] _L_2^⋈ Con_1[N_con.N_conp.N_t], models the interaction between PSSF and CN controller over the cooperation set L_2. In a similar way, the overall system equation, as shown in Table <ref> and Table <ref> represents the interaction between the various NFs as shown in the two call flows, Fig. <ref> and Fig. <ref>, respectively.
§ PERFORMANCE EVALUATION
This section presents the performance comparison between the existing 5GS and the proposed architecture analysed using the PEPA Eclipse plug-in <cit.>, a software tool integrated into the popular Eclipse platform. This tool supports various performance measures <cit.> as discussed below, which help evaluate the network's performance.
* Session establishment rate (or the number of successful handovers in the case of mobility): The number of session establishments is measured for the action (say, rep_pduse, which marks the completion of the session establishment procedure), giving the session establishment rate. Similarly, the number of successful handovers is measured for the action session (as performed by the UPF NF in Table <ref>), which signifies the completion of the handover procedure.
* Average response time: It measures the UE waiting time for any specific request and reflects the system's operating speed. We consider the average response time as the duration of the completion of the session establishment procedure. Similarly, we consider the mobility procedure's average response time as the completion of the handover procedure.
* Utilisation: Utilisation measures the NFs' processor capacity utilised during the procedure. The utilisation of any NF (for example, the PSSF processor) is derived from its population level (one of the features available in the tool) while performing any process.
* Scalability: Scalability (S), in simple terms, measures a network's ability to increase or decrease its size, performance, and cost in response to changes in system processing demands. Alternatively, according to Equation <ref>, scalability can be defined as the ratio between the productivity of a system at two configurations of different scales m_1 and m_2 <cit.>, where a configuration denotes the number of NFs used in the network, e.g., m_1 = (1,1,1) and m_2 = (3,3,1).
The mathematical expression for scalability is given as <cit.>:
S(m_1,m_2) = C(m_2)/C(m_1)
Where, C(m) is the productivity of a system at the scale m, given by (Equation <ref>):
C(m) = (t(m)· r(m))/U(m)
Where t(m) is the average number of sessions established at scale m, U(m) is the processor utilisation of the system at scale m, and r(m) (Equation <ref>) is determined by evaluating the response time performance of the scaled system. We consider the following equation <cit.> to evaluate the performance function r(m) by using the average response time T(m), at scale m, with the target average response time T <cit.>.
r(m) = 1/(1+T(m)/T)
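For reference, the scalability computation above reduces to a few lines of Python; the measurements in the usage example are hypothetical and only illustrate how the measured throughput t(m), response time T(m), and utilisation U(m) enter the metric.

```python
def response_performance(T_m, T_target):
    """r(m) = 1 / (1 + T(m)/T)."""
    return 1.0 / (1.0 + T_m / T_target)

def productivity(t_m, T_m, U_m, T_target):
    """C(m) = t(m) * r(m) / U(m)."""
    return t_m * response_performance(T_m, T_target) / U_m

def scalability(cfg1, cfg2, T_target):
    """S(m1, m2) = C(m2) / C(m1); each cfg is a (t(m), T(m), U(m)) tuple of measurements."""
    return productivity(*cfg2, T_target) / productivity(*cfg1, T_target)

# hypothetical measurements: (session rate, average response time, utilisation)
print(scalability((10_000, 0.8, 0.95), (20_000, 0.9, 0.95), T_target=1.0))
```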
§.§ Results and Analysis
In this section, we present the performance results for 5GS and the proposed architecture in the case of PDU session establishment service and mobility service.
§.§.§ PDU Session Establishment Service
The performance analysis of the proposed architecture and the existing 5GS for the session establishment procedure is discussed in this section. Fig. <ref> and Fig. <ref> show the session establishment rate with respect to the number of UEs for the 5GS and the proposed architecture using two different configurations. For instance, (N_pssf, N_con, N_upf) = (1,1,1) is the basic configuration of the proposed architecture, with a single instance of each NF, and (N_pssf, N_con, N_upf) = (3,3,1) is the scaled configuration, with three NFs assigned to PSSF and CON and one to UPF. Similarly, the basic and scaled configurations for the 5GS are defined as (N_amf, N_smf, N_upf) = (1,1,1) and (N_amf, N_smf, N_upf) = (3,3,1), respectively.
The results show that the proposed architecture achieves a higher session establishment rate than the existing 5GS for both the basic and the scaled configurations. Although scaling increases the session establishment rate of both architectures relative to the basic configuration, the proposed architecture still achieves a higher session establishment rate than the 5GS.
The saturation point for the existing 5GS, as shown in Fig. <ref>, is around 10,000 users, i.e., it can serve a maximum of 10,000 users, while the session establishment rate of the proposed architecture saturates at around 20,000 users. Similarly, Fig. <ref> shows that the 5GS saturates at around 34,000 users. Once the saturation point is reached, the network drops incoming requests from the users. This means that with the given number of processors/NFs, the proposed architecture can achieve a higher session establishment rate; supporting more session establishments requires more processors/NFs.
The processor utilisation of all the NFs of the existing 5GS and the proposed architecture for the basic and the scaled configurations is shown in Fig. <ref> and Fig. <ref>, respectively. For instance, the PSSFP reaches its maximum utilisation, which explains the saturation point of the session establishment rate, even though CONP and UPFP are not fully utilised at this point. These results show that the request processing chain fails if an NF becomes a bottleneck for the subsequent chain.
Scalability for the existing 5GS and the proposed architecture is evaluated based on Equation <ref>. It is plotted in Fig. <ref> based on the results obtained for the session establishment rate, average response time, and utilisation from the PEPA-based modelling and simulation. As stated earlier, we consider the two configurations m_1 and m_2 for estimating the scalability metric. Fig. <ref> shows that the existing 5GS can serve 10,000 users for the basic configuration, while the proposed architecture can serve 20,000 users. Similarly, the existing 5GS reaches its saturation point at 34,000 users, whereas the proposed architecture saturates at 62,000 users for the scaled configuration. This implies that the proposed architecture performs better and can serve more users than the existing 5GS. Moreover, the proposed architecture is more scalable with an increasing number of users for the same number of NFs/processors. Please note that a similar explanation of all the performance measures (successful handovers, processor utilisation, and scalability) holds in the case of the mobility service.
§.§.§ Mobility Service
This section presents the comparative analysis of the existing 5GS and the proposed architecture for the mobility service. Similar to the session establishment, the analysis is performed for the basic and the scaled configurations. Therefore, the basic configuration for the proposed architecture is given as (N_upt, N_msf, N_ran, N_cn, N_upf) = (1,2,2,1,1) and for the 5GS architecture is (N_sdu, N_scu, N_tdu, N_tcu, N_amf, N_smf, N_upf) = (1,1,1,1,1,1,1). Similarly, the scaled configuration for the proposed architecture is (N_upt, N_msf, N_ran, N_cn, N_upf) = (3,6,6,3,3) and for the 5GS architecture is given as (N_sdu, N_scu, N_tdu, N_tcu, N_amf, N_smf, N_upf) = (3,3,3,3,3,3,3). Here N_upt, N_msf, N_ran, N_cn, N_upf are the number of Target-User Plane (T-UP), MSF, RAN controller, CN controller and UPF respectively in the system model. Similarly, N_sdu, N_scu, N_tdu, N_tcu, N_amf, N_smf, N_upf are the number of S-DU, S-CU, T-DU, T-CU, AMF, SMF, and UPF respectively. Please note that for brevity, we have not split S-CU into S-CU-CP and S-CU-UP and T-CU into T-CU-CP and T-CU-UP while modelling the mobility call flow procedure for the 5GS. Further, we provide an equal number of functions and associated processors to the 5GS and the proposed architecture for justified comparison.
Fig. <ref> and Fig. <ref> show that the proposed architecture completes more successful handovers per unit time than the existing 5GS for the basic and the scaled configurations, respectively; after reaching the saturation point, the system starts to drop handovers. The saturation point for the existing 5GS is 20,000 users, while that of the proposed architecture is 30,000 users for the basic configuration. Similarly, the saturation point for the existing 5GS is around 60,000 users, while that of the proposed architecture is around 90,000 users for the scaled configuration. The number of successful handovers per unit time increases with the scaled configuration for both architectures.
Fig. <ref> and Fig. <ref> show the processor utilisation for both the 5GS and the proposed architecture. Fig. <ref> shows the scalability results in the case of the mobility service for the 5GS and the proposed architectures. The scalability results show that the 5GS reaches its saturation point earlier than the proposed architecture, and that the proposed architecture is therefore more scalable.
§ CONCLUSION AND FUTURE WORK
In this paper, we have proposed a novel mobile network architecture for separating the UE signalling from the network control functionality, enhancing the modularity, scalability, and flexibility of the network. The transposition of UE signalling functionality to service functions leads to simplified protocols and opens up ways to implement use case specific signalling in mobile networks. The proposed architecture also has improved alignment with the SDN principles.
We have considered PDU session establishment and mobility services as examples to analyse the performance of the proposed architecture using the PEPA-based simulation method. Based on the performance results and other benefits, it can be concluded that the proposed architecture is a promising option for future networks to handle vast and diverse traffic demands.
In future work, we plan to extend this analysis to other features/services of mobile networks, such as authentication and network slicing, to develop the protocols between (signalling) service functions and the control plane, and to address the security threats in the 6GS mobile network touched upon in Section III.
§ ACKNOWLEDGMENT
We acknowledge the Ministry of Electronics and Information Technology (MeitY), India, for supporting the project.
|
http://arxiv.org/abs/2307.04777v1 | 20230709223047 | MentalHealthAI: Utilizing Personal Health Device Data to Optimize Psychiatry Treatment | [
"Manan Shukla",
"Oshani Seneviratne"
] | cs.LG | [
"cs.LG",
"cs.CY"
] |
MentalHealthAI: Utilizing Personal Health Device Data to Optimize Psychiatry Treatment
Manan Shukla, Oshani Seneviratne
========================================================================
§ ABSTRACT
Mental health disorders remain a significant challenge in modern healthcare, with diagnosis and treatment often relying on subjective patient descriptions and past medical history. To address this issue, we propose a personalized mental health tracking and mood prediction system that utilizes patient physiological data collected through personal health devices. Our system leverages a decentralized learning mechanism that combines transfer and federated machine learning concepts using smart contracts, allowing data to remain on users' devices and enabling effective tracking of mental health conditions for psychiatric treatment and management in a privacy-aware and accountable manner. We evaluate our model using a popular mental health dataset that demonstrates promising results. By utilizing connected health systems and machine learning models, our approach offers a novel solution to the challenge of providing psychiatrists with further insight into their patients' mental health outside of traditional office visits.
§ INTRODUCTION
Mental health conditions such as depression and anxiety are some of the most challenging medical problems to diagnose and treat. Current treatment guidelines for these disorders primarily utilize subjective assessments, relying on patient self-report or clinician evaluation to inform clinical decisions. As such, the lack of objective markers for clinical outcomes presents a significant bottleneck in psychiatry. Furthermore, a patient's mood or emotions may change over time, but clinicians only have access to a patient’s data at the time of the visit, leading to a potentially biased sampling of the patient’s mental state. To address this, collecting data from the patient over a long period would be ideal for effective diagnosis and treatment. However, collecting such data is also challenging due to privacy concerns.
Connected health applications enable data to be generated and stored in a decentralized manner, where the data may reside cross-device.
A common challenge in health informatics in federated and decentralized settings is that training and test data are not independently and identically distributed (non-IID), which is especially true in scenarios that aim to predict the mental health of individuals using a combination of medical and environmental signals.
Because health data is typically not identically distributed, the generalization performance tends to be worse, and lower accuracy can result from overlooking the distribution shift in the training and testing data<cit.>.
More importantly, since non-IID data in healthcare applications comes from different clients, protecting data privacy is crucial in decentralized learning settings<cit.>.
Furthermore, applying connected health technologies in a mental health population poses multiple problems<cit.>. First is the concern about data security and privacy.
Studies have shown that mental health populations typically consider their data sensitive and are wary of sharing this information due to perceived mental health stigmas. Surveys have shown that 65% of patients with mental health disorders are unlikely to share their data with their psychiatrists<cit.>.
If psychiatrists instead rely on patient history, studies<cit.> have shown patient histories to be only 62% accurate, leading to psychiatric misdiagnosis rates as high as 65.9% for major depressive disorder and 85.8% for panic disorder.
Therefore, a technological solution is necessary to provide psychiatrists with health insights without collecting raw data from the patient's smart health devices. Second, current models do not account for the granularity of mental health disorders. As explained in the American Psychiatric Association's Clinical Practice Guidelines<cit.>, patient emotions are subject to rapid changes within the span of a day or a week, and elements such as sleep or diet can lead to quick changes in mood. While many have utilized information from Electronic Health Records (EHR) to predict mental health crises<cit.>, these models overlook granular patient changes. Therefore, they cannot generate a patient baseline (in fact, getting data through facial expressions or EHR systems can lead to biased results). Understanding the immediate effects of medication, such as antidepressants, is crucial for psychiatrists and requires granular patient data that cannot be retrieved otherwise. Currently, the most feasible way to collect this granular patient data is through a smartphone and a patient's health devices. This method, however, has the issue of unequal data streams. Different patients have different personal health devices. For example, while one patient may have five devices, another may only have one. While training a model on a patient with five devices may lead to better results, fewer patients in the population can provide input to such a model (as not many patients own that many personal health devices). Therefore, there is a need to obtain insights even when the available feature types differ from patient to patient.
We present a decentralized federated learning algorithm called MentalHealthAI to alleviate these challenges.
First, MentalHealthAI uses on-device machine learning to prevent data from leaving the patient's smartphone. In addition, smart contracts are utilized in this framework to elect an aggregator, thereby creating a decentralized aggregator instead of relying on a traditional centralized server.
A self-executing piece of code, called a smart contract, can encode rules that will be executed in a decentralized manner on a blockchain<cit.>.
The data remains on the patient's device in each epoch, and the model parameters are transferred from that device to the aggregator. Second, as each smartphone may collect a different set of patient features, MentalHealthAI utilizes a decision tree based methodology to derive model insights even when features and labels are not necessarily uniform.
§ SYSTEM DESIGN AND IMPLEMENTATION
[Figure: System Architecture]
At its core, the current system is a decentralized-learning infrastructure that utilizes physiological data to predict patient moods and therefore provides mental insights to a patient's psychiatrist without the requirements of a uniform set of features (as is necessary for typical machine learning algorithms). The overall architecture can be found in <Ref>, and its specific features are described below. Clients X, Y, and Z each represent different patients. Each patient owns several IoT devices, indicated by the different data streams A, B, and C. Each data stream has the same dimensions but different content (A may be heart rate data, B may be blood pressure, and C may be skin temperature). The final model is created by adding the union of models trained from different combinations of data streams to the random forest decision tree classification system. During the evaluation, we use the features in the POPANE dataset<cit.> as individual data streams to simulate this concept.
Personal health data do not contain uniform features from patient to patient. The common problem is that the devices used by different individuals are different.
Traditional machine learning is limited in cases where one patient has data collected from many disparate data streams, such as heart rate, blood pressure, electroencephalogram (EEG), and electrocardiogram (ECG), while another patient has only one (for example, just the heart rate). This limitation exists because only the intersection of feature types is considered rather than every feature type present.
For each patient, we assume a smartphone acts as the gateway between the patient's IoT devices, as depicted in <Ref>.
We utilized the POPANE dataset<cit.> for the simulation, where we divided the patient population based on the type and number of data streams each patient has. For example, we place patients with six of the same data streams (set A) in a different cohort than patients with only three data streams (set B). Here, data streams can refer to heart rate and blood pressure data. However, if set B is a subset of set A, the data of cohort A restricted to the streams in B (i.e., A ∩ B) can be added to cohort B's training set (as the data streams used in B and in A ∩ B are the same).
Now, consider a patient population where the number of data streams a patient has varies from 1 to 6. For any patient with >1 data streams, a power set excluding the empty set is generated as shown in <Ref>. For example, given a patient with data streams d = {A, B, C}, P(d) = [{}, {A}, {B}, {A, B}, {C}, {A, C}, {B, C}, {A, B, C}], excluding {}, we divert each element in this power set (representing a set of data streams) into separate cohorts. This process maximizes the utility of patients with multiple data streams, as it maximizes the amount of data found in each cohort (in comparison to simply dividing the patient population based on the number of data streams present). We then train multiple machine learning models on these cohorts, where one model is trained from data from one cohort. Note that the labels are unaltered, regardless of the feature subset. When combined with MentalHealthAI's decentralized AI architecture, it is also important to note that multiple smartphones will be selected as aggregators but will be training different models with different training subsets.
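The cohort construction described above can be sketched in a few lines of Python. The stream names and the representation of cohorts as frozensets are illustrative choices of ours, not details of the actual implementation.

from itertools import combinations

def non_empty_powerset(streams):
    # All non-empty subsets of a patient's data streams, e.g. {A, B, C} yields 7 subsets.
    s = sorted(streams)
    return [frozenset(c) for r in range(1, len(s) + 1) for c in combinations(s, r)]

def assign_to_cohorts(patients):
    # patients: dict mapping patient id -> set of available data streams.
    # Returns a dict mapping each stream subset (cohort) to the patients whose data,
    # restricted to exactly those streams, can be used to train that cohort's model.
    cohorts = {}
    for pid, streams in patients.items():
        for subset in non_empty_powerset(streams):
            cohorts.setdefault(subset, []).append(pid)
    return cohorts

# A patient with {HR, BP, ST} feeds 7 cohorts; a patient with only {HR} feeds the
# single-stream {HR} cohort, which the first patient also feeds.
print(assign_to_cohorts({"p1": {"HR", "BP", "ST"}, "p2": {"HR"}}))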
Based on the patient population and available data streams at a given time, certain models will be more accurate than others, and this relationship can change frequently. Furthermore, every patient may have a different baseline emotion level, so their moods may differ. While models can successfully predict a large portion of the population, they may not be accurate enough for a specific patient. <Ref> shows the model generation process for each client based on the available datasets.
A smart contract deployed on a blockchain acts as the “secure model aggregator,” and the client interacts with it by emitting events to indicate that learning has finished. The corresponding smart contract code is depicted in <Ref>. The smart contract employs a voting process to elect the next “leader” to perform model aggregation.
As each model has been trained on different feature subsets, decentralized aggregation occurs independently for each model. For example, if we have three models trained on features [A], [A,B], and [A,B,C], each of these models will be aggregated with other models trained on the same set of features from a different patient on a different client.
If the patient's smartphone is not elected as an aggregator, the smart contract will send the model parameters from the patient's smartphone to the smartphone elected as the model aggregator.
The clients will interact with the smart contract as shown in <Ref>.
Once the clients have finished training, they will notify the smart contract and be considered for the next “leader” election.
The client will also monitor events emitted from the smart contract to see if it is elected as an aggregator. If it is, then it will receive models from other smartphones.
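Once elected, the aggregation step itself can be as simple as (optionally weighted) parameter averaging in the spirit of federated averaging. The sketch below is our own simplification: it only shows the averaging of already-received weight arrays for one cohort and omits the smart-contract election, transport and serialization details.

import numpy as np

def aggregate_parameters(client_weights, client_sizes=None):
    # client_weights: one list of numpy arrays per client, all trained on the same
    # feature subset (cohort); client_sizes: optional local sample counts used for
    # weighted averaging (uniform weights if omitted).
    if client_sizes is None:
        client_sizes = [1] * len(client_weights)
    total = float(sum(client_sizes))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Two clients, each with a toy two-layer model.
c1 = [np.ones((2, 2)), np.zeros(2)]
c2 = [3 * np.ones((2, 2)), np.ones(2)]
print(aggregate_parameters([c1, c2], client_sizes=[100, 300]))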
Once the model parameters have been received, the “leader” client uses a decision tree to select the best prediction model for the patient, as shown in <Ref>.
It collects a mapping from the smart contract, with the data stream as the key and the corresponding aggregator's contract address as the value.
For example, assume a patient has three devices/data streams. This patient's models include every model trained on the following data stream combinations: [{A}, {B}, {A, B}, {C}, {A, C}, {B, C}, {A, B, C}].
Then, a calibration period is set during which new emotional features/labels are collected from the patient (in <Ref>, it is set to 7 days). Each set of features in our simulation contributes to the models generated daily. The random forest decision tree (such as the one shown in <Ref>) then uses the predictions of these models to predict the patient's emotional labels. The decision tree is run on the patient's smartphone after the data-stream-based models have been generated and distributed back to the individual nodes from the aggregator.
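A minimal scikit-learn sketch of this per-patient selection step is given below; the number of cohort models, the size of the calibration set and the use of random stand-in data are assumptions made purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# During calibration, each of the (here, 7) cohort models produces a predicted affect
# label (0-10) for every new observation; these predictions become the features of the
# per-patient random forest, and the patient's observed labels are the targets.
calibration_preds = rng.integers(0, 11, size=(200, 7))
calibration_labels = rng.integers(0, 11, size=200)

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(calibration_preds, calibration_labels)

# At prediction time, the forest combines the cohort models' outputs into a single
# patient-specific affect estimate.
new_preds = rng.integers(0, 11, size=(1, 7))
print(selector.predict(new_preds))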
§ EVALUATION AND RESULTS
[Figure: Learning Results After Leader Election and Model Aggregation. Nodes refer to the other smartphones contributing to the combined model.]
We evaluated our system using a mental health dataset named POPANE<cit.>. The POPANE dataset contains a set of 142 patients whose ECG, Electrodermal Activity (EDA), Skin Temperature (ST), Respiration (Resp), Systolic Blood Pressure (SBP), and Diastolic Blood Pressure (DBP) have been measured and labeled with positive and negative affect, which is rated from a scale of 0-10, with 0 indicating negative affect, and 10 indicating positive affect. We chose this dataset primarily because it closely matches our use case. A personal health device can measure each of the physiological parameters given above, and training on such a dataset can provide insight into the utility of such a system in a much larger population. Secondly, the data provided is collected on a second-to-second basis, similar to the collection rates found in many current IoT devices, such as smartwatches that measure heart rate or ECGs on the patient's skin. Finally, a major advantage of utilizing the POPANE dataset is its non-IID distributed data, as seen in <Ref>. The figure clearly shows that the affect is not equally distributed throughout the dataset (and is not likely to represent the standard population), which is more akin to what may be present in real-world situations, where random samples proportionate to the overall population are unlikely. Thus, through this dataset, we aim to investigate the resilience of MentalHealthAI in non-IID settings.
[Figure: Frequency of Various Affects in the POPANE Dataset<cit.>]
First, we assessed the training results from a model run on a centralized server. The model was a simple Artificial Neural Network (ANN) with three dense layers with softmax activation, as shown in <Ref>. We decided upon the activation function based on favorable learning results. As mentioned, we used six physiological features to assess the patient's affect, ranked from 0-10 to serve as the output. Each physiological feature is considered a separate data stream for this evaluation, containing data from different IoT devices. We set the data into a train-test split of 70-30% and used a sparse categorical cross-entropy loss function due to the nature of the output categorical labels. Multiple checkpoints were implemented, such as early stopping (which will stop training if accuracy does not improve after multiple epochs in a row) and learning adjustment (which will lower the learning rate by a factor of ten if the accuracy does not improve). We ran the model for 107 epochs and stopped the learning process because there was no change in training accuracy. After multiple trials, this epoch value led to the best learning result in our model. The overall accuracy was approximately 86%.
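As a rough illustration, a network of this kind can be written in a few lines of Keras. The hidden-layer widths are our own assumptions (the text does not report them), the softmax is placed only on the output layer over the 11 affect levels, and the callbacks mirror the early-stopping and learning-rate-reduction checkpoints described above.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),                       # six physiological features
    tf.keras.layers.Dense(64, activation="relu"),     # assumed width
    tf.keras.layers.Dense(32, activation="relu"),     # assumed width
    tf.keras.layers.Dense(11, activation="softmax"),  # affect rated 0-10
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="accuracy", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="accuracy", factor=0.1, patience=3),
]
# model.fit(X_train, y_train, epochs=107, callbacks=callbacks)  # 70/30 split prepared upstream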
[Figure: A Simplified View of the Neural Network Model Architecture]
Based on these results, we can conclude that there is a link between physiological parameters and a patient's emotional state. We chose accuracy as our primary evaluation metric to ensure the model is clinically viable.
We then evaluated the decentralized learning aspects of this system. Since we could not acquire physical devices to test the model's performance in the real world, we evaluated the decentralized learning components through simulation. In this simulation, we assume a consortium of 142 patients modeled using the POPANE dataset, each with data collected through IoT devices. A global model updates itself based on data from each patient to form the final trained model. We trained the models in the same fashion as the ANN discussed above. <Ref> shows the test-set accuracy after training on each node.
As shown in <Ref>, successful learning can happen in a discontinuous situation. While the nodes had an initial training accuracy of 51%, this increased immediately to 86% after training from two additional nodes, confirming that MentalHealthAI can obtain high accuracy even in distributed settings. However, note that such results may not be obtainable in real-world conditions. Primarily, data collected in the POPANE dataset has been obtained in a controlled environment rather than during regular day-to-day activities. Therefore, if truly deployed in a community, there is a greater chance of false positives, false negatives, and inaccurate readings from IoT devices. However, given accurate input data, we assert that MentalHealthAI can be deployed in such a community setting.
We then evaluated the decision tree aspect of the system to determine the model accuracy in non-ideal settings. We divided the 142 patients in the POPANE dataset into four patient cohorts. Each cohort represented patients with a certain number of IoT devices (1 device, 2 devices, 3 devices, or 4 devices). Similar to what we explained in the methods section, we extracted data streams from each cohort based on the power set of their features. For example, in the cohort with 2 data streams (A and B), three sets of data were created: {A}, {B}, {A, B}, each with the same label. We repeated this process for each patient cohort. Models were trained on the following data stream combinations: {ST}, {ECG}, {ST, ECG}, {EDA}, {ST, EDA}, {ECG, EDA}, {ST, ECG, EDA}, {Resp}, {ST, Resp}, {ECG, Resp}, {ST, ECG, Resp}, {EDA, Resp}, {ST, EDA, Resp}, {ECG, EDA, Resp}, {ST, ECG, EDA, Resp}. Next, we simulated the "calibration" period, where each model generated emotion predictions based on new data to which the models were not exposed. This data then served as the input to the random forest model, which then provided the emotional predictions based on the predictions of the previously trained models. <Ref> depicts the decision tree for a single client (i.e., a smartphone belonging to a patient).
We simulated a standard baseline solution, MentalHealthAI-Baseline, to the above problem as a means of comparison. As a typical machine learning model cannot utilize different feature sets, the most optimized results will likely only come from the cohort with 4 data streams (35 patients). We trained a standard ANN with the same hyperparameters as above on this data set, reaching an overall accuracy of 86%.
In comparison, MentalHealthAI-Fed achieved an overall accuracy of 80% while covering the entire patient population rather than only the 35 patients with four data streams — a substantial improvement in coverage. By utilizing unequal feature sets through multiple model combinations and a random forest model, one can obtain strong learning results compared to a model that requires uniform features. While this accuracy level is lower than that of the original baseline model, it is important to acknowledge the differences in data. The baseline ANN model simulated an ideal world with 142 willing patients, each with access to 6 separate IoT devices. However, finding 142 patients with more than three personal health devices in the real world is intuitively infeasible for many reasons, such as cost and access. Through the MentalHealthAI framework, we demonstrate that high accuracy is achievable even in less idealised settings. We believe that this occurs for multiple reasons. First, MentalHealthAI utilizes models without noise and irrelevant features, making them less susceptible to their effects. Second, models trained on fewer features can succeed by having access to a greater number of patients. Third, a random forest model can select the best model for the patient, a choice that can change over time.
Finally, MentalHealthAI was compared to current state-of-the-art emotion prediction systems and machine learning methods in adjacent domains. As shown in <Ref>, it is clear that compared to other past AI models, MentalHealthAI can produce greater accuracy with both the baseline ANN model and the decentralized decision tree architecture. Note that due to the novelty of the POPANE dataset at the time we developed our model, we were unable to compare our results to similar models that may have been trained on the same data.
§ RELATED WORK
Federated learning, introduced by McMahan et al.<cit.>, enables learning from decentralized data sources<cit.>. Clients volunteer to participate in federated learning, i.e., they can join or leave the system whenever they want.
A variant of federated learning in blockchain settings is swarm learning<cit.>, where a smart contract would elect a node to perform model updates at each epoch instead of a central aggregator. This selected node aggregates and broadcasts the model parameters to all other nodes. We drew inspiration from this methodology in the work presented in this paper.
However, swarm learning nodes are essentially large and powerful hospital servers utilized in applications such as leukemia and tuberculosis prediction<cit.>. Our work involves learning in a much more decentralized setting that leverages IoT devices and smartphones with much smaller memory and performance. At the same time, input features are all uniform in the original swarm learning implementation<cit.>, which we believe is an assumption that may not hold in other decentralized settings. We have embraced the non-IID assumption in our implementation.
Pfitzner et al.<cit.> conducted a systematic literature review on the concept of and research into federated learning and its applicability for confidential healthcare datasets.
In particular, Lee and Shin<cit.> conducted an experiment using the Modified National Institute of Standards and Technology (MNIST), Medical Information Mart for Intensive Care-III (MIMIC-III), and ECG datasets to evaluate the performance of a federated learning system compared to the state-of-the-art method for in-hospital mortality using imbalanced data.
Additionally, a small but growing number of works have focused on the application of federated learning in mental health applications.
FedMood<cit.> uses mobile phone and IoT data in a “multi-view” federated learning setting to detect the emotions of individuals. However, a central aggregator is still necessary for federated learning to be possible, which is risky, especially for patients in vulnerable populations.
Chhikara et al.<cit.> describe a federated learning framework that uses images to detect human emotion. While they achieved successful learning results, such a model is infeasible for granular changes in emotion/mood. While physiological data can provide hour-by-hour changes in a patient's emotion (such as changes after taking an antidepressant), obtaining patient pictures every hour is infeasible.
Garriga et al.<cit.>'s work is similar to the previously described work but utilizes EHR to predict mental health crises. While this is a valuable model for predicting specific mental health events, the model cannot assess granular emotional changes. Instead, crises are only extrapolated based on patient visits to the clinic or the hospital (which often occurs when the patient is sick). Through our work, we aim to see both granular and longitudinal changes in emotions, i.e., how mood changes throughout the day (especially with medication interventions) and how moods change over a week.
A novel variant of federated learning is personalized federated learning<cit.>.
The generated model is adapted to better fit a local dataset (for example, data belonging to a single patient).
Such a model adaptation can lead to a more personalized model for the specific patient. In other words, each client's model does not need to be the same. While this strategy can prove useful for this application, we have not yet employed it due to problems with categorical overfitting (where the model chooses one category for any given input). Such overfitting is likely to occur in this setting, as emotion/mood may remain the same for many hours; training a personalized model on such data may lead it to assume that the patient's emotions never change. A more general model, in contrast, is exposed to greater variation in the training data.
Architecturally, MentalHealthAI provides many learning advantages that other AI strategies in this domain do not. Compared to traditional machine learning, MentalHealthAI introduces privacy-sensitive strategies to address a previously stigmatized population. We specifically allow learning on decentralized edge nodes, i.e., smartphones, and only require transferring model parameters, which also makes MentalHealthAI less susceptible to noise and irrelevant features. Moreover, MentalHealthAI takes this further by introducing decentralized aggregators, preventing attacks on a centralized aggregator. The unique contribution of MentalHealthAI lies in its ability to utilize the available data, regardless of variations in the feature set.
§ CONCLUSION AND FUTURE WORK
Advances in AI techniques and IoT devices have transformed how chronic illnesses are treated today, such as asthma, hypertension, and diabetes. However, an area of medicine where connected health has remained relatively untapped has been mental health. In most situations, patient history, which is inaccurate and imperfect, has predominantly been used to treat this disorder. At the same time, psychiatry poses many unique challenges to connected health adoption. First, a sensitive patient population may not support releasing personal data, i.e., information potentially harmful if leaked. Secondly, significant variations exist in the number of data streams a patient has, thus potentially limiting learning. Finally, qualitative elements such as mood or emotions can change rapidly throughout the day and the week. For example, simple changes in diet, medications, or even sleep can lead to different emotions. Therefore, monitoring granular emotional changes is key to successfully monitoring mental health. While many have focused on using facial expressions or EHR records to predict crises, these models overlook the small changes that can lead to mental health issues. Therefore, to solve these problems, we utilized a unique combination of various AI and blockchain techniques to enhance data privacy and ownership in a system called MentalHealthAI.
It is an innovative combination of smart contracts and decentralized learning that creates models useful for psychiatrists in a way that protects the patient's privacy. IoT devices capture second-by-second changes in the patient's outward physiological signs, allowing for a granular understanding of the patient's health, and the system allows for successful learning even when the data remains stored on the patient's smartphone.
As part of the evaluation, we used a novel mental health dataset and divided the patient population into cohorts based on the data streams available for each patient.
We demonstrated that we could predict emotions/moods from physiological data in a decentralized and privacy-preserving manner.
Our methodology for predicting mental health disorders has several benefits. First, it increases accessibility. For example, if 20% of the patient population has only one IoT device, a traditional machine learning algorithm would be trained on only this limited population. In comparison, MentalHealthAI can utilize the entire patient population for model training in a decentralized and privacy-preserving manner, which can provide greater model utility for patients who do not have access to physiological data generators (i.e., IoT Devices). Secondly, it can increase model accuracy, especially in fields that have yet to be studied extensively due to non (or limited) data availability. Therefore, certain data streams may contribute to the model's accuracy, and different combinations of data stream features enable the better establishment of links between features and labels. Finally, this method can better adapt to non-IID settings. Intuitively, patient populations are unique, as most patients are more likely to have between 1-3 IoT devices. Therefore, a model trained on more patients but with fewer features can have greater accuracy than one trained on more features but with a smaller patient cohort. This relationship can change from community to community and region to region. We address this issue by having different population cohorts to provide accurate results while being resilient to changes in the patient population composition.
There are various limitations to this work. We are yet to evaluate our work with real smartphones in a decentralized setting in real life. Therefore, an initial user study is necessary to determine the effectiveness and impact of the prediction accuracy. Secondly, understanding physiological changes during emotions such as surprise, fear, or agitation would be valuable in addition to detecting moods and emotions in patients as a baseline. Therefore, the current model may need to be retrained on a separate dataset, and hyperparameters tuned appropriately to recognize these emotions.
Other important challenges are model approximation and optimization, i.e., is there a model that performs well on all clients, and how can such a model be found?
By continuing to work on these limitations, we can deploy such infrastructure in the mental health patient population and provide utility to psychiatrists needing an objective metric to assess their patients.
|
http://arxiv.org/abs/2307.07277v1 | 20230714111737 | Are words equally surprising in audio and audio-visual comprehension? | [
"Pranava Madhyastha",
"Ye Zhang",
"Gabriella Vigliocco"
] | cs.CL | [
"cs.CL"
] |
Are words equally surprising in audio and audio-visual comprehension?
Pranava Madhyastha, Ye Zhang, Gabriella Vigliocco
==================================================================================================================================================
We report a controlled study investigating the effect of visual information (i.e., seeing the speaker) on spoken language comprehension. We compare the ERP signature (N400) associated with each word in audio-only and audio-visual presentations of the same verbal stimuli. We assess the extent to which surprisal measures (which quantify the predictability of words in their lexical context), generated on the basis of different types of language models (specifically n-gram and Transformer models), predict N400 responses for each word. Our results indicate that cognitive effort differs significantly between multimodal and unimodal settings.
In addition, our findings suggest that while Transformer-based models, which have access to a larger lexical context, provide a better fit in the audio-only setting, 2-gram language models are more effective in the multimodal setting. This highlights the significant impact of local lexical context on cognitive processing in a multimodal environment.
Keywords: Surprisal theory, face-to-face communication setup, multimodal language comprehension, language models.
§ INTRODUCTION
A significant amount of research in language comprehension has been dedicated to examining how humans interpret written or spoken language. These studies have mainly focused on analyzing the verbal form of language <cit.>. This approach involves building an understanding of the text or speech one word at a time, with some words being more difficult to process than others.
Expectation-based theories of sentence processing <cit.> propose that the difficulty in processing a sentence is driven by the predictability of upcoming lexical material in context. Surprisal, an information-theoretic measure of predictability, is computationally operationalised using language models <cit.>. Language models (LMs) calculate the probability of a word given its context, which is then used to calculate surprisal. Surprisal has been supported by behavioural and neural measures of processing difficulty <cit.>.
However, a large body of previous work in language comprehension does not consider the visual contextual cues available in face-to-face communication.
Language has evolved, is learnt and is most often used in face-to-face contexts in which comprehenders have access to a multitude of visual cues, such as hand gestures, body movements and mouth movements that contribute to language processing <cit.>. In this paper, we follow this line of research and examine how multiple modalities of information impact language comprehension. We present a controlled study comparing the comprehension of language-related stimuli in both audio-only and audio-visual conditions and analyse changes in ERP signals.
§.§ N400, Language Models and Surprisal
The N400 is an event-related potential (ERP) component peaking negatively at ≈400ms over central-parietal areas, observed during language processing tasks and measured using electroencephalography (EEG). The N400 is larger in response to semantically incongruent or unexpected words compared to congruent or expected words <cit.>. This indicates that the N400 is related to semantic processing, and the N400 effect has been interpreted as reflecting the brain's automatic evaluation of incoming linguistic information for semantic coherence. Typically, when an upcoming word is semantically consistent with the context, it leads to a smaller N400 amplitude compared to when it is not.
It has been reported in reading-related tasks that words with higher surprisal, thus less predictable[predictability is estimated using language models] and more difficult to process, elicit more negative N400 <cit.>. Previous research has demonstrated the robustness of the N400 effect, and surprisal has been shown to predict N400 for various experimental tasks, including cloze-style tasks and semantic relatedness, among others <cit.>. Recent work observes that surprisal estimates computed using some types of language models may be better predictors than other types. For example, <cit.>
find that n-gram based language models with larger window sizes (4-grams) were best at explaining variance. More recent works have investigated Transformer based language models <cit.> and show that Transformer based models may be better predictors of suprisal than other language models. <cit.> compared surprisal obtained from GPT-2 <cit.> (a Transformer based language model trained over large web-based corpora), Recurrent Neural Network (RNN) based language model <cit.> and manual cloze probability in predicting N400 in cloze tasks, where the target words are manipulated to have different cloze probabilities.
The authors discovered that all three measures showed a significant association with the N400, but surprisal estimates generated from GPT-2 explained the largest amount of variance. Merkx and Frank (2020) conducted a study in which they trained Transformer- and RNN-based language models in a controlled setting on similar corpora. Under these controlled conditions, the study found that surprisal estimates generated from Transformer-based models, overall, provided a better fit to the EEG data. The increased performance has been hypothesized to be primarily due to the access to a larger lexical context in Transformer-based language models, which helps the model capture longer-range dependencies.
Overall, most recent works have shown that surprisal estimates from Transformer based models correlate better with N400 based estimates of cognitive effort. We note that
the majority of the N400 and surprisal correlations were found in reading based tasks. Some recent works have shown that surprisal also predicts N400 based cognitive effort in audio <cit.> and audio-visual tasks <cit.> contexts.
However, it remains unclear whether Transformer-based models such as GPT-2 are better predictors in audio or audio-visual settings where multiple sources of information are available.
§.§ Multimodality, Surprisal and N400
Language learning and use is fundamentally face-to-face, involving information from multiple sources (or modalities) such as gestures, facial expression, mouth movements and prosody, in addition to the lexical content of speech
<cit.>. These modalities provide additional context and meaning, making communication more effective <cit.>. Recent studies have shown that multiple modalities, such as prosody (the rhythm, stress, and intonation of speech) and gestures, play a key role in shaping language use during face-to-face interactions and in general language use.
Crucially, multimodal information, such as prosody and gesture, also modulates the N400. For example, prosodic stress has been shown to mark information structure, with new information more likely to carry prosodic stress than given information <cit.>. Violations of such patterns elicit larger N400 <cit.>, indicating that prosodic information is taken into account in semantic processing. Crucially, visual signals such as iconic gestures (hand movements imagistically related to the content of speech, e.g., "drawing" - imitating holding a pen and moving it around) have also been shown to affect the N400. Iconic gestures that mismatch the speech elicit larger N400 <cit.>, indicating enhanced semantic processing difficulty.
<cit.> further investigated how multimodal information modulates the N400 in a naturalistic context where different cues co-occur. The authors presented participants with videos in which a speaker produces short passages with naturally occurring prosody, gestures and mouth movements. For each word, they then quantified its lexical predictability (using 2-gram surprisal estimates), its prosody (using mean F0, capturing the pitch of the word), its accompanying gestures (annotated as meaningful, e.g. "drinking" - imitating holding a cup to drink, or as beats, rhythmic hand movements that are not directly meaningful), and the informativeness of its mouth movements, and related these measures to the ERP.
This study shows that ERP between 300-600ms is indeed sensitive to surprisal, extending the previous N400-surprisal effect to audio-visual modality. However, they also found that the effect of surprisal on N400 is modulated by multimodal information, as pitch prosody, meaningful gestures and informative mouth movements and their combinations reduce the N400, especially for higher surprisal words, indexing easier comprehension than predicted by surprisal alone. <cit.> further report similar patterns in highly proficient non-native English comprehenders.
These findings indicate that surprisal may not fully capture comprehension in the multimodal context, as the surprisal effect is modulated by multimodal information. However, both these studies
only use a 2-gram based language model to compute surprisal estimates. It is unclear whether other models such as Transformers (which have access to a larger window of context) would allow for a better fit for N400 in the audio and audio-visual context.
<cit.> presented evidence showing that visual information can impact lexical expectations in reading and listening experiments. They determined the index of cognitive activity by examining the impact of visual uncertainty on word surprisal and cognitive effort. These experiments focused on presenting additional visual stimuli that matched the words in the sentences. These findings suggest that in a controlled environment where visual stimuli are carefully provided, they have a significant effect on cognitive processing. This indicates the importance of taking into account additional information channels besides lexical content to accurately predict cognitive effort.
§.§ The Present Study
We report a controlled study to investigate the effects of visual signals (seeing the speaker) on language comprehension. We compare the effects of audio-only and audio-visual settings using the same language stimuli and analyze the changes in ERP signals. We then evaluate the effectiveness of surprisal estimates, using different language models with varying lexical context windows, in explaining cognitive effort in both unimodal (audio-only) and multimodal (audio-visual) conditions.
Our study extends recent observations that indicate that other modalities of information significantly contribute towards the cognitive effort of language processing <cit.>. We provide a comparison of EEG responses to the same lexical context but presented in a unimodal or multimodal manner. Crucially, our analysis of language model surprisal estimates assesses whether language models with different architectures and degrees of complexities provide equally good fit across unimodal and multimodal contexts.
We first present our methodology followed by the results and finally discuss the salient observations in the following sections.
§ METHODS
§.§ Electrophysiological Data
§.§.§ Participants
We collected experimental data from two cohorts: a) 27 participants
in the audio-only condition and
b) 31 participants in the audio-visual condition. All participants were native English speakers with normal hearing, vision, and no known neurological disorder[The study was approved by the university ethics committee. Participants gave written consent and were paid £7.5/h for their participation.].
§.§.§ Materials
103 naturalistic passages were randomly selected from the British National Corpus (BNC) and were evaluated by native English speakers to be semantically and grammatically coherent. They were recorded by a native English-speaking actress with natural prosody and facial expressions. The final corpus of experimental stimuli has a mean duration of 8.50 seconds and an average word count of 23. The onset and offset of each word were automatically detected using a word-phoneme aligner based on a Hidden Markov Model <cit.> and were further manually verified (mean=440ms, SD=376ms). Participants watched the videos in the audio-visual setting and listened to the soundtrack of the videos in the audio-only setting.
§.§.§ Procedures
Participants were seated approximately 1 meter away from a computer and wore earphones during the experiment. After three practice trials, they were presented with audio stimuli in the first experiment and audio-visual stimuli in the second experiment. To ensure comparability between the two experiments, participants in the first experiment also viewed a static snapshot of the same actress taken from the video, to control for the presence of visual input. Each trial was separated by a 2000ms interval, and 35 clips were followed by attention checks to ensure participants were paying attention to the stimuli. Participants were instructed to carefully listen to or watch the stimuli and answer as quickly and accurately as possible.
The EEG data was collected for both the audio-only and audio-visual conditions using the same 32-channel Biosemi system with CMS and DRL as ground reference. Two external electrodes were attached to the left and right mastoid as an offline reference, and two external electrodes captured horizontal and vertical eye movements. Participants were instructed to avoid moving, keep their facial muscles relaxed, and reduce blinking, if possible. The electrode offsets were maintained between ± 25mV. The recording was conducted in a shielded room with a temperature of 18 °C. The EEG session lasted approximately 60 minutes.
§.§.§ EEG Preprocessing
The data was pre-processed with EEGLAB (<cit.>, v.14.1.1) and ERPLAB (<cit.>, v.7.0.0) running under MATLAB 2019a. All electrodes were included. While N400 has a central-parietal distribution, the scalp distribution of audio and audiovisual speech can be more frontal and may be different from one another due to the modality differences <cit.>. Therefore, instead of focusing on a predefined region of interest (ROI), we included all electrodes <cit.>, categorized them into ROIs and added them in the statistical model (as in <cit.>, see Statistical Analysis section below for more description). EEG files were referenced to mastoids, down-sampled to 512Hz, separated into -100 to 1200ms epochs time-locked to word onset and filtered with a 0.05-100Hz band-pass filter. Artefacts (e.g., eye movements and muscle noise) were first corrected with ICA. The remaining artefacts were rejected using a moving window peak-to-peak analysis and step-like artefact analysis. Due to likely overlap between any baseline period (-100 to 0ms) and the EEG signal elicited by the previous word, we did not perform baseline correction, but instead extracted the mean EEG amplitude in this time interval and later used it as a control variable in the statistical analysis <cit.>. Following previous work <cit.>, we take the mean ERP amplitude between 300-500ms as the N400 signal.
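For readers who prefer Python, the epoching and N400 extraction described above could be approximated with MNE-Python roughly as follows. The original pipeline used EEGLAB/ERPLAB under MATLAB, so this is only a loose re-implementation: the file name, external-channel labels and event extraction are placeholders, and the ICA and artefact-rejection steps are omitted.

import mne

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)   # placeholder file name
raw.set_eeg_reference(["EXG1", "EXG2"])                     # assumed mastoid channel labels
raw.resample(512)
raw.filter(l_freq=0.05, h_freq=100.0)                       # 0.05-100 Hz band-pass

events = mne.find_events(raw)                               # word-onset triggers (placeholder)
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=1.2,
                    baseline=None, preload=True)            # no baseline correction

# Mean amplitude in the 300-500 ms window (N400) and in the -100-0 ms window,
# the latter used later as a covariate in the statistical models.
n400 = epochs.copy().crop(0.3, 0.5).get_data().mean(axis=2)           # epochs x channels
baseline_mean = epochs.copy().crop(-0.1, 0.0).get_data().mean(axis=2)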
§.§ Computing Surprisal
Surprisal theory <cit.> is rooted in information theoretic principles <cit.> by utilising entropy, a core concept in information theory, to assess the predictability of events and the level of surprise they generate. The theory examines the connection between predictability and the processing of lexical information in the human brain. In this framework, lexical units carry information which is conveyed through a probabilistic measure. The level of predictability of these units influences how the brain processes and evaluates them. When predictability is low, it results in higher levels of surprise and requires more cognitive resources for processing. The exact amount of information conveyed by a unit is hence quantified as its surprisal.
Formally, consider a linguistic signal 𝐥 made of units: {l_1, ⋯, l_n} (where the units could be words, phonemes, etc.); surprisal is then defined as:
s(l_t) = -log p(l_t | l_1, ⋯, l_t-1)
which represents the negative log-probability of a unit (l_t) given its preceding context (l_1, ⋯, l_t-1), where t indicates the sequence time-steps. Surprisal theory asserts that the effort needed to process a linguistic unit is directly proportional to its unexpectedness in its context, which is measured by its surprisal. Formally, for a linguistic unit (l_t), the processing effort is linearly proportional to its surprisal:
effort(l_t) ∝ s(l_t)
As we don't have direct access to the true conditional probabilities of observing linguistic units given their context, we use language models to estimate them instead. We obtain surprisal estimates using log-probabilities (see Equation <ref> above) through classical n-gram-based language models and more recent Transformer-based models.
For n-gram models, we cover an entire spectrum of n-gram models and construct {2,3,4,5,6}-gram models using modified Kneser-Ney Smoothing <cit.>[Following <cit.>, we use Wiki-text 103 as the corpus for estimating the n-gram probabilities.]. All probability estimates are computed at the word level. For Transformer-based models, we use GPT-2 and BERT [We use openly available pre-trained models from the Hugging Face library <cit.>.], and all probability estimates are also computed at the word level. We note that BERT is trained for a cloze-style task and hence the probabilities from this model are considered pseudo surprisal estimates.
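As an illustration of how such word-level surprisal can be obtained in practice, the following sketch uses the Hugging Face transformers library with GPT-2. Sub-word surprisals are summed per word, surprisal is expressed in bits, and punctuation handling is simplified; this is a sketch of the general recipe rather than the authors' exact procedure.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisals(sentence):
    # Assumes whitespace-tokenised input without punctuation, for simplicity.
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc.input_ids[0]
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids).logits[0]
    # log P(token_t | tokens_<t); the first token receives no estimate (no left context).
    logprobs = torch.log_softmax(logits[:-1], dim=-1)
    tok_s = -logprobs[torch.arange(len(ids) - 1), ids[1:]] / torch.log(torch.tensor(2.0))
    word_ids = enc.word_ids(0)   # maps each sub-word token to a word index
    per_word = {}
    for tok_idx in range(1, len(ids)):
        w = word_ids[tok_idx]
        per_word[w] = per_word.get(w, 0.0) + tok_s[tok_idx - 1].item()
    words = sentence.split()
    return [(words[w], s) for w, s in sorted(per_word.items())]

print(word_surprisals("the quick brown fox jumps over the lazy dog"))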
§.§ Statistical Analysis
§.§.§ Correlation between Audio and Audiovisual N400
To determine the correlation between N400 in audio and audio-visual settings, we calculate Pearson's correlation of N400 per word across modalities. N400 was calculated as the mean ERP between 300-500ms minus the baseline ERP mean (as we did not perform baseline correction during preprocessing, as previously mentioned). The variance was reduced by averaging the results across all participants and electrode sites for each word in each modality.
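A minimal numerical sketch of this per-word correlation is given below; the arrays are random stand-ins with assumed dimensions (participants × electrodes × words), since the real amplitudes come from the EEG pipeline described above.

import numpy as np
from scipy.stats import pearsonr

def per_word_n400(erp, baseline):
    # erp, baseline: arrays of shape (participants, electrodes, words) holding the
    # 300-500 ms mean and the -100-0 ms mean; returns one value per word, averaged
    # over participants and electrode sites.
    return (erp - baseline).mean(axis=(0, 1))

rng = np.random.default_rng(0)
n_words = 500                                               # assumed number of content words
audio = per_word_n400(rng.normal(size=(27, 32, n_words)),   # 27 audio-only participants
                      rng.normal(size=(27, 32, n_words)))
av = per_word_n400(rng.normal(size=(31, 32, n_words)),      # 31 audio-visual participants
                   rng.normal(size=(31, 32, n_words)))
r, p = pearsonr(audio, av)
print(r, p)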
§.§.§ Evaluating model performances across modalities
We compared the performance of surprisal generated by different computational models using linear mixed-effects regression models fitted in R with the lme4 package <cit.>. We followed a similar approach to
<cit.>, comparing a baseline model with more complex models containing surprisal. The dependent variable was the mean ERP in the 300-500ms time window extracted from 32 electrodes for all content words (e.g. nouns, verbs, adjectives, as in <cit.>). The baseline model contains regions of interest (ROI), which describe the location of each electrode. The 32 electrodes were categorised into ROIs, including prefrontal (Fp1, Fp2, AF3, AF4), fronto-central (F3, F7, Fz, F4, F8, FC5, FC1, FC6, FC2), central (C3, C4, Cz), posterior (CP1, CP5, CP2, CP6, P3, P7, Pz, P4, P8, PO3, PO4, O1, Oz, O2), left temporal (T7) and right temporal (T8). The baseline model also contains the mean EEG amplitude from the baseline interval extracted above. The baseline model includes participant, passage and electrode as random intercepts. Then, we added the main effect of surprisal to create surprisal+ROI models and further the interaction between surprisal and ROI to create surprisal×ROI models. We then estimated the improvement of fit by model comparisons, where the surprisal+ROI models are compared with the baseline models and the surprisal×ROI models are compared with the surprisal+ROI models using likelihood-ratio model comparisons in R (p-values FDR-adjusted for multiple comparisons). We also calculated the decrease in AIC value for each model compared with the baseline model (ΔAIC). The same analysis was performed for the audio and audio-visual data separately.
§ RESULTS
§.§ N400 is weakly correlated across settings
The Pearson correlation coefficient for N400 per word between audio-only and audio-visual settings is 0.11 (t = 5.16, p<.001), indicating a weak positive correlation between the two settings.
Figure <ref> shows a scatter plot of N400 across the two settings, which indicates that while most of the data points are densely populated in the center, there is no meaningful relationship between the two settings. If the lexical information were the most significant contributing factor, we would expect a stronger correlation between audio-only and audio-visual conditions since both experiments involve the same verbal stimuli.
§.§ Statistical models behave differently across settings
We present the statistical analysis for the audio-only and audio-visual settings in Table I. We find that the additive models (surprisal+ROI) provide a better fit for N400 amplitudes than the baseline models in both the audio-only and audio-visual settings, as indicated by the χ^2 and p values. Furthermore, the multiplicative models (surprisal×ROI) almost always improve the model fit compared to the additive models. The improvement of the multiplicative models (surprisal×ROI) over the additive models indicates that surprisal generated from all models predicts N400 amplitudes in both the audio and audio-visual conditions in interaction with the ROIs.
In the auditory setting, we observe that the largest reduction in AIC compared to the baseline model (ΔAIC) is associated with GPT-2, followed by BERT and the n-gram models. This suggests that Transformer-based models (especially GPT-2), which have access to the largest lexical context, can better predict N400 amplitudes in a unimodal setting. Previous work has also reported a similar pattern, where models that consider larger lexical contexts have been shown to provide a better fit <cit.>.
However, in the audio-visual setting, we observe a reversal of this pattern where, strikingly, the 2-gram model shows the largest ΔAIC, while GPT-2 shows the smallest ΔAIC. In general, we notice that models with smaller context windows provide a better fit in the audio-visual setting. We present our results in Figure 2, which shows the reduction in AIC (ΔAIC) across models and modalities. We note that we only plot the multiplicative models, as they offer a better overall fit (the additive models showed similar patterns). These observations indicate that local lexical information is more prominent in the multimodal setting.
§ DISCUSSION
Our results demonstrate that under the same verbal stimuli, cognitive processing, as captured using ERP, significantly differs between unimodal and multimodal experimental settings.
We replicate the earlier findings that multiplicative models (surprisal×ROI) provide a better fit for the data than additive models (surprisal+ROI). Although we validate earlier findings that Transformer-based models like GPT-2 are better predictors of the N400 in unimodal (audio-only) settings, the opposite trend is observed in the multimodal setting. These observations strongly suggest that non-verbal cues contribute significantly to cognitive processing, over and above lexical information alone.
In the unimodal setting, the surprisal estimates from the GPT-2 language model exhibit the best fit compared to the other models, consistent with previous research <cit.> demonstrating the superiority of Transformer-based models over other language models, such as RNNs and traditional n-gram models, across a variety of psychometric data. BERT, however, displays slightly different results, possibly due to its training objective as a masked language model, which gives access only to pseudo log-probabilities. This difference in objectives between BERT and GPT-2, combined with the limitations in accessing log-probabilities from BERT, could contribute to the differing performance of these models. Similar findings have been reported in previous work <cit.>.
In the multimodal setting, our results reveal a reversal of the trends observed in the unimodal setting. Surprisal values derived from the n-gram language models, particularly the 2-gram model, provide the best fit for the N400 in the multimodal scenario. We note that surprisal only captures word predictability based on the previous lexical context, ignoring any multimodal information in the stimuli. We posit that, in the multimodal setting, participants utilise multiple sources of information, such as gestures, mouth movements, eye movements, and posture. The increased information content from multiple sources may lead participants to track the local lexical context more closely than the global lexical context. Our findings using language models with different context windows provide some support for this hypothesis: in particular, we observe in Figure 2 that as we increase the context window from 2 to 6 we overall see a degradation in ΔAIC, indicating a worse fit in comparison to the 2-gram model. The differences in N400 across the audio and audio-visual modalities indicate that cognitive processing strategies differ across modalities even when the verbal stimuli are identical.
Overall, our findings provide strong evidence that multimodal processing of language differs significantly from unimodal processing of language, even under the same verbal stimuli. Our results generally highlight the importance of considering non-verbal cues in language processing.
§ SUMMARY AND CONCLUSIONS
In this paper, we present a controlled study, investigating the effect of multiple modalities of information on cognitive processing of language comprehension. We conduct experiments over audio-only and audio-visual modalities with the same verbal stimuli.
Our findings overall suggest that cognitive effort in a multimodal setting significantly differs from that in a unimodal setting, with nonverbal contextual information playing a significant role. We also observe that local verbal context significantly influences cognitive processing effort in a multimodal setting in comparison to the unimodal setting. We believe that our results highlight the importance of modelling non-verbal cues for language comprehension and processing.[Sourcecode and data for replication of our study are made available here: <https://github.com/pmadhyastha/multimodal_comprehension>]
|
http://arxiv.org/abs/2307.05061v1 | 20230711071019 | Maximizing Social Welfare in Score-Based Social Distance Games | [
"Robert Ganian",
"Thekla Hamm",
"Dušan Knop",
"Sanjukta Roy",
"Šimon Schierreich",
"Ondřej Suchý"
] | cs.GT | [
"cs.GT",
"cs.DS"
] |
Belief Revision from Probability
Jeremy Goodman
School of Philosophy
University of Southern California, USA
[email protected]
Bernhard Salow
Faculty of Philosophy
University of Oxford, UK
[email protected]
August 12, 2023
Social distance games have been extensively studied as a coalition formation model where the utilities of agents in each coalition were captured using a utility function that took into account distances in a given social network. In this paper, we consider a non-normalized score-based definition of social distance games where the utility function u^ depends on a generic scoring vector , which may be customized to match the specifics of each individual application scenario.
As our main technical contribution, we establish the tractability of computing a welfare-maximizing partitioning of the agents into coalitions on tree-like networks, for every score-based function u^. We provide more efficient algorithms when dealing with specific choices of u^ or simpler networks, and also extend all of these results to computing coalitions that are Nash stable or individually rational.
We view these results as a further strong indication of the usefulness of the proposed score-based utility function: even on very simple networks, the problem of computing a welfare-maximizing partitioning into coalitions remains open for the originally considered canonical function .
§ INTRODUCTION
Coalition formation is a central research direction within the fields of algorithmic game theory and computational social choice. While there are many different scenarios where agents aggregate into coalitions, a pervasive property of such coalitions is that the participating agents exhibit homophily, meaning that they prefer to be in coalitions with other agents which are similar to them. It was this observation that motivated Brânzei and Larson to introduce the notion of social distance games (SDG) as a basic model capturing the homophilic behavior of agents in a social network <cit.>.
Brânzei and Larson's SDG model consisted of a graph G=(V,E) representing the social network, with V being the agents and E representing direct relationships or connections between the agents. To capture the utility of an agent v in a coalition C⊆ V, the model considered a single function: u(v,C)=1/|C|·∑_w∈ C∖{v}1/d_C(v,w) where d_C(v,w) is the distance between v and w inside C.
Social distance games with the aforementioned utility function have been the focus of extensive study to date, with a number of research papers specifically targeting algorithmic and complexity-theoretic aspects of forming coalitions with maximum social welfare <cit.>.
Very recently, Flammini et al. <cit.> considered a generalization of this utility function via an adaptive real-valued scoring vector, which weights the contributions to an agent's utility according to the distances of other agents in the coalition, and studied the price of anarchy and stability for non-negative scoring vectors.
However, research to date has not revealed any polynomially tractable fragments for the problem of computing coalition structures with maximum social welfare (with or without stability-based restrictions on the behavior of individual agents), except for the trivial cases of complete (bipartite) graphs <cit.> and trees <cit.>.
Our Contribution.
The indisputable appeal of having an adaptive scoring vector—as opposed to using a single canonical utility function—lies in the fact that it allows us to capture many different scenarios with different dynamics of coalition formation. However, it would also be useful for such a model to be able to assign negative scores to agents at certain (larger) distances in a coalition.
For instance, guests at a gala event may be keen to accept the presence of friends-of-friends (i.e., agents at distance 2) at a table, while friends-of-friends may be less welcome in private user groups on social networks, and the presence of complete strangers in some scenarios may even be socially unacceptable.
Here, we propose the study of social distance games with a family of highly generic non-normalized score-based utility functions.
Our aim here is twofold. First of all, these should allow us to better capture situations where agents at larger distances are unwelcome or even unacceptable for other agents.
At the same time, we also want to obtain algorithms capable of computing welfare-maximizing coalition structures in such general settings, at least on well-structured networks.
Our model considers a graph G accompanied by an integer-valued, fixed but adaptive scoring vector 𝐬 which captures how accepting agents are towards other agents based on their pairwise distance.[Formal definitions are provided in the Preliminaries.] The utility function u^𝐬(v,C) for an agent v in coalition C is then simply defined as u^𝐬(v,C)=∑_w∈ C∖{v}𝐬(d_C(v,w)); we explicitly remark that, unlike previous models, this is not normalized with respect to the coalition size. As one possible example, a scoring vector of (1,0,-1) could be used in scenarios where agents are welcoming towards friends, indifferent to friends-of-friends, slightly unhappy about friends-of-friends-of-friends (i.e., agents at distance 3), and unwilling to group up with agents who are at distance greater than 3 in G.
A concrete example which also illustrates the differences to previous SDG models is provided in <Ref>.
While non-normalized scoring functions have not previously been considered for social distance games, we view them a natural way of modeling agent utilities; in fact, similar ideas have been successfully used in models for a variety of other phenomena including, e.g., committee voting <cit.>, resource allocation <cit.>
and Bayesian network structure learning <cit.>.
Crucially, it is not difficult to observe that many of the properties originally established by Brânzei and Larson for SDGs also hold for our non-normalized score-based model with every choice of 𝐬, such as the small-world property <cit.> and the property that adding an agent with a close (distant) connection to a coalition positively (negatively) impacts the utilities of agents <cit.>.
In addition, the proposed model can also directly capture the notion of enemy aversion with symmetric preferences <cit.> by setting 𝐬=(1).
Aside from the above, a notable benefit of the proposed model lies on the complexity-theoretic side of things. Indeed, a natural question that arises in the context of SDG is whether we can compute an outcome—a partitioning of the agents into coalitions—which maximizes the social welfare (defined as the sum of the utilities of all agents in the network). This question has been studied in several contexts, and depending on the setting one may also require the resulting coalitions to be stable under individual rationality (meaning that agents will not remain in coalitions if they have negative utility) or Nash stability (meaning that agents may leave to join a different coalition if it would improve their utility). But in spite of the significant advances in algorithmic aspects of other coalition formation problems in recent years <cit.>,
we lack any efficient algorithm capable of producing such a welfare-optimal partitioning when using the canonical utility function, even for the simplest types of networks.
To be more precise, when viewed through the refined lens of parameterized complexity <cit.> that has recently become a go-to paradigm for such complexity-theoretic analysis, no tractable fragments of the problem are known. In particular, the problem of computing a welfare-maximizing outcome under any of the previously considered models is not even known to admit an XP algorithm when parameterized by the minimum size of a vertex cover in the social network G—implying a significant gap towards potential fixed-parameter tractability. This means that the complexity of welfare-maximization under previous models remains wide open even under the strongest non-trivializing restriction of the network.
As our main technical contribution, we show that non-normalized score-based utility functions do not suffer from this drawback and can in fact be computed efficiently under fairly mild restrictions on G. Indeed, as our first algorithmic result we obtain an algorithm that computes a welfare-maximizing partitioning of the agents into coalitions parameterized by the treewidth of G, and we strengthen this algorithm to also handle additional restrictions on the coalitions in terms of individual rationality or Nash stability.
As with numerous treewidth-based algorithms, we achieve this result via leaf-to-root dynamic programming along a tree-decomposition. However, the records we keep during the dynamic program are highly non-trivial and require an advanced branching step to correctly pre-computed the distances in the stored records.
We remark that considering networks of small treewidth is motivated not only by the fundamental nature of this structural graph measure, but also by the fact that many real-world networks exhibit bounded treewidth <cit.>.
In the next part of our investigation, we show that when dealing with simple scoring functions or bounded-degree networks, these results can be improved to fixed-parameter algorithms for welfare-maximization (including the cases where we require the coalitions to be individually rational or Nash stable). This is achieved by combining structural insights into the behavior of such coalitions with a different dynamic programming approach. Furthermore, we also use an entirely different technique based on quadratic programming to establish the fixed-parameter tractability of all 3 problems under consideration w.r.t. the minimum size of a vertex cover in G. Finally, we conclude with some interesting generalizations and special cases of our model and provide some preliminary results in these directions.
§ PRELIMINARIES
We use ℕ to denote the set of natural numbers, i.e., positive integers, and ℤ for the set of integers. For i∈ℕ, we let [i]= {1,…,i} and [i]_0 = [i] ∪{0}.
We assume basic familiarity with graph-theoretic terminology <cit.>.
Social Distance Games.
A social distance game (SDG) consists of a set N = {1,…,n} of agents, a simple undirected graph G=(N,E) over the set of agents called a social network, and a non-increasing scoring vector 𝐬=(s_1,…,s_δ) where
* for each a∈ [δ], s_a∈ℤ, and
* for each a∈ [δ-1], s_a+1≤ s_a.
In some cases, it will be useful to treat 𝐬 as a function on the positive integers rather than a vector; to this end, we set 𝐬(a)=s_a for each a≤δ and 𝐬(a)=-∞ when a>δ.
The value “-∞” here represents an inadmissible outcome, and formally we set -∞+z=-∞ and -∞<z for each z∈.
A coalition is a subset C⊆ N, and an outcome is a partitioning Π=(C_1,…,C_ℓ) of N into coalitions; formally,
⋃_i=1^ℓ C_i = N, every C_i∈Π is a coalition, and all coalitions in Π are pairwise disjoint. We use Π_i to denote the coalition the agent i∈ N is part of in the outcome Π.
The utility of an agent i∈ N for a coalition Π_i∈Π is
u^𝐬(i,Π_i) = ∑_j∈Π_i∖{i}𝐬(dist_Π_i(i,j)),
where dist_Π_i(i,j) is the length of a shortest path between i and j in the graph G[Π_i], i.e., the subgraph of G induced on the agents of Π_i. We explicitly note that if Π_i is a singleton coalition then u^𝐬(i,Π_i)=0. Moreover, in line with previous work <cit.> we set dist_Π_i(i,j):=+∞ if there is no i-j path in G[Π_i], meaning that u^𝐬(i,Π_i)=-∞ whenever G[Π_i] is not connected.
For brevity, we drop the superscript 𝐬 from u^𝐬 whenever the scoring vector is clear from the context. To measure the satisfaction of the agents with a given outcome, we use the well-known notion of social welfare, which is the total utility of all agents for an outcome Π, that is,
SW^𝐬(Π) = ∑_i∈ N u^𝐬(i,Π_i).
Here, too, we drop the superscript specifying the scoring vector whenever it is clear from the context.
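To make these definitions concrete, the following is a minimal Python sketch (ours, not part of the paper) of u^𝐬(i,Π_i) and SW(Π) using networkx; all function and variable names are our own.

```python
# Minimal sketch of the score-based utility and social welfare defined above.
import networkx as nx

def utility(G, s, i, coalition):
    """u^s(i, C): sum of s(dist_C(i, j)) over j in C \\ {i}; -inf if some j is
    unreachable inside G[C] or farther away than len(s) = delta."""
    H = G.subgraph(coalition)
    dist = nx.single_source_shortest_path_length(H, i)
    total = 0
    for j in coalition:
        if j == i:
            continue
        d = dist.get(j)               # None if j is unreachable in G[C]
        if d is None or d > len(s):
            return float("-inf")      # s(a) = -inf for a > delta
        total += s[d - 1]             # s is 1-indexed by distance
    return total

def social_welfare(G, s, outcome):
    """SW(Pi): total utility of all agents over an outcome (list of coalitions)."""
    return sum(utility(G, s, i, C) for C in outcome for i in C)
```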
We assume that all our agents are selfish, behave strategically, and their aim is to maximize their utility. To do so, they can perform deviations from the current outcome Π. We say that Π admits an IR-deviation if there is an agent i∈ N such that (i,C) < 0; in other words, agent i prefers to be in a singleton coalition over its current coalition. If no agent admits an IR-deviation, the outcome is called individually rational (IR). We say that Π admits an NS-deviation if there is an agent i and a coalition C∈Π∪{∅} such that (i,C∪{i}) > (i,Π_i). Π is called Nash stable (NS) if no agent admits an NS-deviation.
We remark that other notions of stability exist in the literature <cit.>, but Nash stability and individual rationality are the most basic notions used for stability based on individual choice <cit.>.
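Building on the previous sketch, the two deviation-based notions can be checked directly from their definitions; again, this is an illustrative sketch of ours, assuming the `utility` helper defined above.

```python
# Sketch (ours): individual rationality and Nash stability checks.
def is_individually_rational(G, s, outcome):
    return all(utility(G, s, i, C) >= 0 for C in outcome for i in C)

def is_nash_stable(G, s, outcome):
    for C in outcome:
        for i in C:
            current = utility(G, s, i, C)
            # i may deviate to any other coalition of the outcome or to a new
            # empty coalition (i.e., become a singleton).
            targets = [D for D in outcome if D is not C] + [set()]
            for D in targets:
                if utility(G, s, i, set(D) | {i}) > current:
                    return False
    return True
```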
Having described all the components in our score-based SDG model, we are now ready to formalize the three classes of problems considered in this paper. We note that even though these are stated as decision problems for complexity-theoretic reasons, each of our algorithms for these problems can also output a suitable outcome as a witness. For an arbitrary fixed scoring vector 𝐬, we define:
Input: A social network G=(N,E), desired welfare b ∈ℤ.
Question: Does the distance game given by G and admit an outcome with social welfare at least b?
The individually rational and Nash stable variants are then defined analogously, but with the additional condition that the outcome must be individually rational or Nash stable, respectively.
We remark that for each of the three problems, one may assume w.l.o.g. that s_1>0; otherwise the trivial outcome consisting of |N| singleton coalitions is both welfare-optimal and stable.
Moreover, without loss of generality we assume G to be connected since an optimal outcome for a disconnected graph G can be obtained as a union of optimal outcomes in each connected component of G.
The last remark we provide to the definition of our model is that it trivially also supports the
well-known small world property <cit.> that has been extensively studied on social networks.
In their original work on SDGs, Brânzei and Larson showed that their model exhibits the small world property by establishing a diameter bound of 14 in each coalition in a so-called core partition <cit.>.
Here, we observe that for each choice of 𝐬, a welfare-maximizing coalition will always have diameter at most δ.
Parameterized Complexity. The parameterized complexity framework <cit.> provides the ideal tools for the fine-grained analysis of computational problems which are NP-hard and hence intractable from the perspective of classical complexity theory. Within this framework, we analyze the running times of algorithms not only with respect to the input size n, but also with respect to a numerical parameter k∈ℕ that describes a well-defined structural property of the instance; the central question is then whether the superpolynomial component of the running time can be confined by a function of this parameter alone.
The most favorable complexity class in this respect is FPT (short for "fixed-parameter tractable") and contains all problems solvable in f(k)· n^O(1) time, where f is a computable function. Algorithms with this running time are called fixed-parameter algorithms. A less favorable, but still positive, outcome is an algorithm with running time of the form n^f(k); problems admitting algorithms with such running times belong to the class XP.
Structural Parameters. Let G=(V,E) be a graph. A set U⊆ V is a vertex cover if for every edge e∈ E it holds that U∩ e ≠∅. The vertex cover number of G, denoted (G), is the minimum size of a vertex cover of G.
A nice tree-decomposition of G is a pair (𝒯,β), where 𝒯 is a tree rooted at a node r∈ V(𝒯), β V(𝒯)→ 2^V is a function assigning each node x of 𝒯 its bag, and the following conditions hold:
* for every edge {u,v}∈ E(G) there is a node x∈ V(𝒯) such that u,v∈β(x),
* for every vertex v∈ V, the set of nodes x with v∈β(x) induces a connected subtree of 𝒯,
* |β(r)|=|β(x)| = 0 for every leaf x∈ V(𝒯), and
* there are only three kinds of internal nodes in 𝒯:
* x is an introduce node if it has exactly one child y such that β(x) = β(y)∪{v} for some v∉β(y),
* x is a join node if it has exactly two children y and z such that β(x) = β(y) = β(z), or
* x is a forget node if it has exactly one child y such that β(x) = β(y)∖{v} for some v∈β(y).
The width of a nice tree-decomposition (𝒯,β) is max_x∈ V(𝒯) |β(x)|-1, and the treewidth (G) of a graph G is the minimum width of a nice tree-decomposition of G. Given a nice tree-decomposition and a node x, we denote by G^x the subgraph induced by the set V^x = ⋃_y is a descendant of xβ(y), where we suppose that x is a descendant of itself.
It is well-known that optimal nice tree-decompositions can be computed efficiently <cit.>.
Integer Quadratic Programming. Integer Quadratic Programming (IQP) over d dimensions can be formalized as the task of computing
max{ x^T Q x | A x ≤ b, x ≥ 0, x ∈ℤ^d },
where Q ∈ℤ^d × d, A ∈ℤ^m × d, b ∈ℤ^m.
That is, IQP asks for an integral vector x ∈ℤ^d which maximizes the value of a quadratic form subject to satisfying a set of linear constraints.
Integer Quadratic Programming is fixed-parameter tractable when parameterized by d+‖A‖_∞+‖Q‖_∞.
§ STRUCTURAL PROPERTIES OF OUTCOMES
As our first set of contributions, we establish some basic properties of our model and the associated problems that are studied within this paper. We begin by showcasing that the imposition of individual rationality or Nash stability as additional constraints on our outcomes does in fact have an impact on the maximum welfare that can be achieved (and hence it is indeed necessary to consider three distinct problems). We do not consider this to be obvious at first glance: intuitively, an agent i's own contribution to the social welfare can only improve if they perform an IR- or NS-deviation, and the fact that the distance function _Π_i is symmetric would seem to suggest that this can only increase the total social welfare.
There is a scoring vector and a social network G such that the single outcome achieving the maximum social welfare is not individually rational.
Consider a scoring vector 𝐬=(1,1,-1,-1,-1,-1).
Consider the social network G in <Ref> formed from a path P on 5 vertices and a clique K on 5 vertices by connecting the endpoints of P to all vertices of K.
Let x be the central agent of P.
Let C be the grand coalition in G.
The graph can be viewed as a 6-cycle with K forming one “bold” agent.
All vertices on the cycle contribute positively to the agent's utility, except for the one that is exactly opposite on the cycle.
Hence, u(x,C)=4-5=-1, while the utility of every other agent in C is 8-1=7.
This gives total social welfare of 62 for the grand coalition.
However, if x leaves the coalition to form its own one, their utility will improve from -1 to 0, whereas the total social welfare drops.
Indeed, in C ∖{x} there are 2 agents with utility 6-2=4, 2 agents with utility 7-1=6, and 5 agents with utility 8-0=8, giving a total social welfare of 60.
If any y≠ x was to be excluded from C to form outcome {y}, C∖{y}, then y joining C improves social welfare, proving that it was not optimal.
Finally, if the outcome consists of several coalitions with the largest one of size 8, then the welfare is at most 8 · 7+2 · 1= 58; if the largest size is 7, then we get at most 7 · 6+3· 2=48; for 6 it is 6· 5+4· 3=42; and for 5 it is 5 · 4 +5 · 4=40.
Hence the grand coalition C is the only outcome with maximal social welfare, but it is not individually rational (and therefore not Nash stable), as (x,C)=-1.
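As a sanity check, the welfare values in this proof can be verified mechanically; the sketch below (ours, reusing the helpers introduced in the Preliminaries sketches) builds the described network and reproduces the utilities and the welfares 62 and 60.

```python
# Verification sketch: P = p1-p2-x-p4-p5, K a 5-clique, endpoints of P joined to K.
import itertools
import networkx as nx

s = (1, 1, -1, -1, -1, -1)
P = ["p1", "p2", "x", "p4", "p5"]
K = [f"k{j}" for j in range(5)]
G = nx.Graph()
nx.add_path(G, P)
G.add_edges_from(itertools.combinations(K, 2))              # clique K
G.add_edges_from((p, k) for p in ("p1", "p5") for k in K)   # endpoints to K

grand = set(G.nodes)
print(utility(G, s, "x", grand))                     # -1
print(social_welfare(G, s, [grand]))                 # 62
print(social_welfare(G, s, [{"x"}, grand - {"x"}]))  # 60
```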
There is a scoring vector and a social network G such that the single individually rational outcome achieving the maximum social welfare among such outcomes is not Nash stable.
Consider again the scoring vector 𝐬=(1,1,-1,-1,-1,-1).
Similarly to the previous lemma, consider the social network G in <Ref> formed from a path P on 5 vertices and a clique K on 4 vertices by connecting the endpoints of P to all vertices of K and adding an agent y connected only to the central agent of P, which we call x.
Let C be the coalition containing all vertices of G except for y.
As in the previous lemma, G[C] can be viewed as a 6-cycle with K forming one “bold” agent.
Hence, u(x,C)=4-4=0, while the utility of every other agent in C is 7-1=6.
Trivially u(y,{y})=0, hence the outcome ({y},C) is individually rational.
It has total social welfare of 48.
However, it is not Nash stable, as x wants to deviate to {x,y} giving them utility 1.
However, the outcome ({x,y}, C∖{x}), which is Nash stable, has total social welfare only 46.
Note that u(z,C∖{x}) ≥ 3 for every agent z ∈ C∖{x}, so any outcome ({x,y,z}, C∖{x,z}) cannot be Nash stable.
While the total social welfare of the grand coalition is 46, the utility of y is 3-6=-3 in this coalition, so this outcome is not even individually rational.
From the computations in the previous lemma, it follows, that to attain the social welfare of 48, the largest coalition in the outcome must be of size at least 7.
Moreover, if it is of size exactly 7, then these 7 vertices must be at mutual distance at most 2.
However, there are no 7 vertices in mutual distance at most 2 in G.
Hence, in any outcome with social welfare 48 the largest coalition must be of size at least 8.
Agent y has only 3 agents in distance at most 2 in G.
Hence, for y to get a positive utility from some coalition, the coalition must be of size at most 7, i.e., y cannot be part of the largest coalition in any outcome with social welfare at least 48.
However, for every z ∈ C, z joining the coalition C∖{z} improves the social welfare of the outcome, proving that it was not optimal.
Hence the outcome ({y},C) is the only individually rational outcome with maximal social welfare, but it is not Nash stable.
It should be noted that <Ref> also contrasts with many other models, where outcomes maximizing social welfare are stable for symmetric utilities <cit.>.
As our next two structural results, we prove that on certain SDGs it is possible to bound not only the diameter but also the size of each coalition in a welfare-maximum outcome. Notably, we establish such bounds for SDGs on bounded-degree networks and SDGs which have a simple scoring vector on a tree-like network. While arguably interesting in their own right, these properties will be important for establishing the fixed-parameter tractability of computing welfare-optimal outcomes in the next section.
For every scoring vector 𝐬=(s_1,…,s_δ), if G is a graph of maximum degree Δ(G) and C is a coalition of size more than (s_1+1) ·Δ(G) · (Δ(G)-1)^(δ-1), then for every i ∈ C we have u(i,C) <0.
Let i ∈ C.
There are at most Δ(G) · (Δ(G)-1)^(δ-1) agents in distance at most δ from i.
Each of these agents contributes at most s_1 to u(i,C).
Every other agent contributes at most -1.
Hence, if there are more than (s_1+1) ·Δ(G) · (Δ(G)-1)^(δ-1) agents in C, then
more than s_1 ·Δ(G) · (Δ(G)-1)^(δ-1) of them have a negative contribution to u(i,C) and
u(i,C) < s_1 ·Δ(G) · (Δ(G)-1)^(δ-1) - 1 · s_1 ·Δ(G) · (Δ(G)-1)^(δ-1) =0.
Let 𝐬=(s_1,…,s_δ) be such that s_2 < 0.
If G is a graph of treewidth tw and C is a coalition of size more than 2(s_1+1) · tw + 1, then ∑_i ∈ C u(i,C) <0.
Each agent adjacent to i contributes s_1 to u(i,C), whereas all the other agents contribute at most -1.
Since a graph of treewidth tw is tw-degenerate, there are |E(G[C])| ≤ |C| · tw pairs of adjacent agents and (|C| choose 2) - |E(G[C])| pairs of non-adjacent agents.
We have
∑_i ∈ C u(i,C)
= ∑_i,j ∈ C; i≠ j 𝐬(dist_C(i,j))
≤ 2(s_1 ·|E(G[C])| - ((|C| choose 2) - |E(G[C])|))
= 2((s_1+1) ·|E(G[C])| - (|C| choose 2))
≤ 2(s_1+1) · |C| · tw - |C|(|C|-1)
=|C|(2(s_1+1) · tw - (|C|-1))
<|C|(2(s_1+1) · tw -(2(s_1+1) · tw + 1-1))=0.
§ COMPUTING OPTIMAL OUTCOMES
§.§ Intractability
As our first step towards an understanding of the complexity of computing a welfare-optimal outcome in an SDG, we establish the NP-hardness of all three problems, even for a very simple choice of 𝐬.
Let 𝐬=(s_1) for any s_1>0.
Then all three problems under consideration are NP-hard.
As our first step, we prove the NP-hardness of the intermediate problem called 3-Coloring Triangle Covered Graph (3CTCG) via an adaptation of a known reduction from NotAllEqual-3-SAT <cit.>:
3-Coloring Triangle Covered Graph (3CTCG)
Input: An undirected graph G=(V,E) with |V|=3n vertices such that G contains a collection of n mutually vertex disjoint triangles.
Question: Does G have a 3-coloring?
Next, we reduce 3CTCG to our three problems via a single construction. Let G be an instance of 3CTCG with 3n vertices and T_1, …, T_n the corresponding collection of triangles.
Let G̅ be the complement of G, let 𝐬=(s_1), and let b=3n· s_1·(n-1).
To establish the NP-hardness of the welfare-maximization problem, it suffices to show that G is a yes-instance of 3CTCG if and only if G̅ admits an outcome with social welfare at least b; for the remaining two problems, we additionally show that such an outcome will furthermore be individually rational and Nash stable.
§.§ An Algorithm for Tree-Like Networks
We complement Theorem <ref> by establishing that all three problems under consideration can be solved in polynomial time on networks of bounded treewidth—in other words, we show that they are XP-tractable w.r.t. treewidth.
We first describe the “baseline” algorithm for solving , and then prove that this may be adapted to also solve the other two problems by expanding on its records and procedures (see the appendix).
For every fixed scoring vector 𝐬, the welfare-maximization problem and its individually rational and Nash stable variants are in XP when parameterized by the treewidth of the social network G.
Our algorithm is based on leaf-to-root dynamic programming along a nice tree-decomposition of the input social network with a rather complicated structure. In each node x of the tree-decomposition, we store a set ℛ_x of partial solutions called records. Each record realizes a single signature, which is a triple consisting of:
* a partition of the bag agents into parts of coalitions; there are at most tw+1 different coalitions intersecting β(x) and, thus, at most tw^O(tw) possible partitions of β(x).
* a function assigning to each pair of agents that are part of the same coalition according to the partition a shortest intra-coalitional path; recall that for fixed 𝐬, the diameter of every coalition is bounded by a constant and, therefore, there are n^O(1) possible paths for each pair of agents, which gives us n^O(tw^2) combinations in total.
* a table storing, for every coalition P and every possible vector of distances to the bag agents that are in P, the number of agents from P that were already forgotten in some node of the tree-decomposition; the number of possible coalitions is at most tw+1, the number of potential distance vectors is δ^(tw+1) = 2^O(tw), and there are at most n values for every combination of coalition and distance vector, which leads to at most n^(2^O(tw)) different tables.
The value of every record is a pair (π,w), where π is a partition of V^x such that SW(π) = w and π witnesses that there is a partition of V^x corresponding to the signature of the record, as described above. We store only one record for every signature – the one with the highest social welfare. Therefore, in every node x, there are at most n^(2^O(tw)) different records.
Once the computation ends, we check the record in the root node r and, based on the value of w, we return the answer: yes if w≥ b and no otherwise. Moreover, as G^r=G, the partition π is also an outcome achieving social welfare w.
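For illustration only, the records kept at a node can be organized as in the sketch below; the class and field names are ours and merely mirror the description above.

```python
# Illustrative sketch (ours) of one record of the treewidth DP.
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    bag_partition: tuple      # partition of the bag agents into coalition parts
    paths: tuple              # chosen shortest intra-coalitional path per bag pair
    forgotten_counts: tuple   # (coalition part, distance vector) -> #forgotten agents

@dataclass
class Record:
    signature: Signature
    partition: list           # pi: partition of V^x realizing the signature
    welfare: int              # w = SW(pi); only the best record per signature is kept
```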
§.§ Fixed-Parameter Tractability
A natural follow-up question to Theorem <ref>
is whether one can improve these results to fixed-parameter algorithms. As our final contribution, we show that this is possible at least when dealing with simple scoring vectors, or on networks with stronger structural restrictions. To obtain both of these results, we first show that to obtain fixed-parameter tractability it suffices to have a bound on the size of the largest coalition in a solution (i.e., a welfare-optimal outcome).
For every fixed scoring vector 𝐬, the variants of the three problems where we only consider outcomes consisting of coalitions of at most a prescribed size are fixed-parameter tractable parameterized by the treewidth of the network and the maximum coalition size combined.
Similar to the previous ones, we design a dynamic programming (DP) on a nice tree decomposition, albeit the procedure and records are completely different.
Given a subset of agents X ⊆ N, let Π = (π_1,π_2, …, π_ℓ) be a partition of a set containing X and some “anonymous” agents. We use (Π) to denote a set of graph topologies on π_1, π_2, …, π_ℓ given X. That is, (Π) = {(π_1), … , (π_ℓ)} where (π_i) is some graph on |π_i| agents, namely π_i ∩ X and |π_i ∖ X| “anonymous” agents, for each i ∈ [ℓ].
The maximum coalition size of any welfare maximizing partition is denoted by .
Table, M, contains an entry M[x, C, (Π)] for every node x of the tree decomposition, each partition C of β(x), and each set of graph topologies (Π) given β(x) where Π is a partition of at most · agents. An entry of M stores the maximum welfare in G^x under the condition that the partition into coalitions satisfies the following properties.
Recall that for a partition P of agents and an agent a, we use P_a to denote the coalition agent a is part of in P.
* C and Π are consistent, i.e., the partition of the bag agents β(x) in G^x is denoted by C and C_a = Π_a ∩β(x) for each agent a ∈β(x).
* The coalition of agent a ∈β(x) in the graph G^x is Π_a.
* (Π) is consistent with G^x i.e., the subgraph of G^x induced on the agents in coalition of a is (Π_a), i.e., G^x[Π_a] = (Π_a).
Observe that we do not store Π. We only store the topology of Π which is a graph on at most · agents.
We say an entry of M[x,C, (Π)] is valid if it holds that
* C and Π are consistent, i.e., C_a = Π_a ∩β(x) for each agent a∈β(x),
* Either C_a = C_b, or C_a ∩ C_b = ∅ for each pair of agents a,b ∈β(x),
* (Π) is consistent with G^x in β(x), i.e., for each pair of agents a,b ∈β(x) such that Π_a = Π_b, there is an edge (a,b) ∈(Π_a) if and only if (a,b) is an edge in G^x.
Once the table is computed correctly, the solution is given by the value stored in M[r,C, (Π)] where C is empty partition and (Π) is empty. Roughly speaking, the basis corresponds to leaves (whose bags are empty), and are initialized to store 0. For each entry that is not valid we store -∞. To complete the proof, it now suffices to describe the computation of the records at each of the three non-trivial types of nodes in the decomposition and prove correctness.
From <Ref> it follows that if s_2 < 0 and (G) is bounded, then the maximum coalition size of a welfare maximizing outcome is bounded. Hence, using Theorem <ref> we get the following.
All three problems are fixed-parameter tractable parameterized by the treewidth of the social network if s_2 < 0.
Turning back to general scoring vectors, we recall that <Ref> provided a bound on the size of the coalitions in a welfare-optimal outcome in terms of the maximum degree Δ(G) of the network G. Applying Theorem <ref> again yields:
All three problems are fixed-parameter tractable parameterized by the treewidth and the maximum degree Δ(G) of the social network.
As our final contribution, we provide fixed-parameter algorithms for computing welfare-optimal outcomes that can also deal with networks containing high-degree agents. To do so, we exploit a different structural parameter than the treewidth—namely the vertex cover number of G ((G)). We note that while the vertex cover number is a significantly more “restrictive” graph parameter than treewidth, it has found numerous applications in the design of efficient algorithms in coalition formation, including for other types of coalition games
<cit.>.
All three problems are fixed-parameter tractable parameterized by the vertex cover number of the social network.
Let k be the vertex cover number of G and let U be a vertex cover for G of size k.
Observe that in each solution there are at most k non-singleton coalitions, since G has a vertex cover of size k and each coalition must be connected.
Furthermore, the vertices of G - U can be partitioned into at most 2^k groups according to their neighborhood in the set U.
That is, there are n_W vertices in G - U such that their neighborhood is W for some W ⊆ U; denote this set of vertices I_W.
We perform exhaustive branching to determine certain information about the structure of the coalitions in a solution—notably:
* which vertices of U belong to each coalition (i.e., we partition the set U); note that there are at most k^k such partitions, and
* for each group I_W, whether there is at least one agent of I_W in the coalition or not; note that there are at most (2^2^k)^k such assignments of these sets to the coalitions.
We branch over all possible admissible options of the coalitional structure described above possessed by a hypothetical solution. The total number of branches is upper-bounded by a function of the parameter value k and thus for the problems to be in it suffices to show that for each branch we can find a solution (if it exists) by a fixed-parameter subprocedure.
To conclude the proof, we show that a welfare-maximum outcome (which furthermore satisfies the imposed stability constraints) with a given coalitional structure can be computed by modeling this as an Integer Quadratic Program where d, ‖A‖_∞, and ‖Q‖_∞ are all upper-bounded by a function of k; such a program can be solved in fixed-parameter time using <Ref>.
The (integer) variables of the program are x^C_W, which express the number of vertices from the set I_W in the coalition with C ⊆ U; thus, we have x^C_W ∈ℤ and x^C_W ≥ 1.
Let 𝒞 be the considered partitioning of the vertex cover U.
We use C ∈𝒞 for the set C ⊆ U in the coalition and C^+ for the set C and the guessed groups having at least one agent in the coalition.
We require that the vertices of G-U are also partitioned in the solution, i.e.,
∑_C ∈𝒞 : W ∈ C^+ x^C_W = n_W ∀ W ⊆ U.
The quadratic objective expresses the welfare of the coalitions in the solution while the linear constraints ensure the stability of the outcome; for the latter, we rely on the fact that it is sufficient to verify the stability for a single agent from the group I_W in each coalition.
§ CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
In this work, we studied social distance games through the lens of an adaptable, non-normalized scoring vector which can capture the positive as well as negative dynamics of social interactions within coalitions.
The main focus of this work was on welfare maximization, possibly in combination with individual-based stability notions—individual rationality and Nash stability.
It is not surprising that these problems are intractable for general networks; we complement our model with algorithms that work well in tree-like environments.
Our work opens up a number of avenues for future research.
One can consider other notions of individual-based stability such as individual stability <cit.><cit.>, or various notions of group-based stability such as core stability <cit.><cit.>.
Furthermore, our results do not settle the complexity of finding stable solutions (without simultaneous welfare maximization).
Therefore, it remains open if one can find a Nash stable solution for a specific scoring vector.
Also, a more complex open problem is to characterize those scoring vectors that guarantee the existence of a Nash (or individually) stable solution.
Finally, we remark that the proposed score-based model can be generalized further, e.g., by allowing for a broader definition of the scoring vectors. For instance, it is easy to generalize all our algorithms to scoring vectors which are not monotone in their "positive part". One could also consider situations where the presence of an agent that is "far away" does not immediately set the utility of other agents in the coalition to -∞. One way to model these settings would be to consider "open" scoring vectors, for which we set 𝐬(a)=𝐬(δ) for all a>δ—meaning that distances over δ are all treated uniformly but not necessarily as unacceptable.
Notice that if 𝐬(δ) ≥ 0 for an open scoring vector 𝐬, the grand coalition is always a social-welfare maximizing outcome for all three problems—hence here it is natural to focus on choices of 𝐬 with at least one negative entry. We note that all of our fixed-parameter algorithms immediately carry over to this setting for arbitrary choices of open scoring vectors. The situation becomes more interesting when considering the small-world property: while the diameter of every welfare-maximizing outcome can be bounded in the case of Nash stable or individually rational coalitions (as we prove in our final Theorem <ref> below), whether the same holds in the case of merely trying to maximize social welfare is open and seems to be a non-trivial question. Because of this, Theorem <ref> can also be extended to the individually rational and Nash stable variants with open scoring vectors, but it is non-obvious for pure welfare maximization.
Let 𝐬=(s_1,…,s_δ) be an arbitrary open scoring vector and G be a social network. Every outcome Π containing a coalition C∈Π whose diameter exceeds a certain bound ℓ (depending only on 𝐬) can be neither Nash-stable nor individually rational.
Consider a shortest path P in C whose length exceeds ℓ. We identify a set of edge cuts along P and show that at least one such cut must be near an agent whose utility in C is negative, due to the presence of a large number of agents that must be distant from the chosen edge cut.
*Acknowledgements.
All authors are grateful for support from the OeAD bilateral Czech-Austrian WTZ-funding Programme (Projects No. CZ 05/2021 and 8J21AT021). Robert Ganian acknowledges support from the Austrian Science Foundation (FWF, project Y1329). Thekla Hamm also acknowledges support from FWF, project J4651-N. Dušan Knop, Šimon Schierreich, and Ondřej Suchý acknowledge the support of the Czech Science Foundation Grant No. 22-19557S. Šimon Schierreich was additionally supported by the Grant Agency of the Czech Technical University in Prague, grant .
|
http://arxiv.org/abs/2307.07643v1 | 20230714222228 | ACF-Net: An Attention-enhanced Co-interactive Fusion Network for Automated Structural Condition Assessment in Visual Inspection | [
"Chenyu Zhang",
"Zhaozheng Yin",
"Ruwen Qin"
] | cs.CV | [
"cs.CV"
] |
inst1]Chenyu Zhang
[inst1]organization=Department of Civil Engineering, Stony Brook University,
addressline=2427 Computer Science,
city=Stony Brook,
postcode=11794,
state=New York,
country=United States
inst2]Zhaozheng Yin
[inst2]organization=Department of Computer Science, Department of Biomedical Informatics, and AI Institute, Stony Brook University,
addressline=2313B Computer Science Building,
city=Stony Brook,
postcode=11794,
state=New York,
country=United States
inst1]Ruwen Qin
Efficiently monitoring the condition of civil infrastructures necessitates automating the structural condition assessment in visual inspection. This paper proposes an Attention-enhanced Co-interactive Fusion Network (ACF-Net) for automatic structural condition assessment in visual bridge inspection. The ACF-Net can simultaneously parse structural elements and segment surface defects on the elements in inspection images. It integrates two task-specific relearning subnets to extract task-specific features from an overall feature embedding and a co-interactive feature fusion module to capture the spatial correlation and facilitate information sharing between tasks. Experimental results demonstrate that the proposed ACF-Net outperforms the current state-of-the-art approaches, achieving promising performance with 92.11% mIoU for element parsing and 87.16% mIoU for corrosion segmentation on the new benchmark dataset Steel Bridge Condition Inspection Visual (SBCIV) testing set. An ablation study reveals the strengths of ACF-Net, and a case study showcases its capability to automate structural condition assessment. The code will be open-source after acceptance.
* A novel Attention-enhanced Co-interactive Fusion Network (ACF-Net) with a share-split-interaction architecture is proposed for automated structural condition assessment in visual inspection.
* A benchmark dataset called Steel Bridge Condition Inspection Visual (SBCIV) dataset is created.
* ACF-Net outperforms state-of-the-art in both structural element parsing and surface defect segmentation.
* Experiments and examples validate the strengths of ACF-Net in bridge condition assessment and uncover reasons for achieving satisfying performance.
Infrastructure inspection Multitask learning Co-interactive fusion Structural element parsing Defect segmentation
§ INTRODUCTION
Visual inspection, a crucial component of structural health monitoring (SHM), is performed periodically to evaluate the condition of infrastructure <cit.>. However, traditional manual inspections have inherent limitations. Desires for time-cost efficiency, reliability, and safety have driven a growing interest in automating visual inspection with cutting-edge technologies like robotics and artificial intelligence <cit.>. Unmanned aerial vehicles (UAVs), equipped with one or multiple types of non-destructive evaluation sensors, have gained popularity for capturing inspection videos and images of infrastructure <cit.>. Maximizing the potential of robotic inspection platforms and the automation process necessitates the employment of efficient and reliable techniques for inspection image analysis. Deep convolutional neural networks (DCNNs), in particular, have shown tremendous potential for analyzing images and extracting vital information about the inspected structures, inspiring researchers to investigate their applications in SHM <cit.>. For example, a drone with mounted RGB cameras can quickly assess the condition of a bridge at the inspection site and narrow down to spots where other high-resolution yet time-consuming diagnostic sensors should be used to collect detailed information, such as infrared sensors, ground-penetrating radar, ultrasound scanning, and others.
According to the infrastructure inspection manuals and standards <cit.>, it is necessary to associate structural elements with the severity of defects developed in the elements to evaluate the condition of individual elements, which builds the foundation for assessing the condition of the overall structure. That is, it is required to not only recognize and localize key structural elements and defects in the inspection images captured by inspection robots, but also spatially associate them. This capability will offer a reference for prioritizing subsequent structural condition assessment that is usually more expensive.
Researchers have made progress in identifying or segmenting structural elements and defects using DCNNs <cit.>. However, most studies were dedicated to addressing one task, leading to three challenges in deep learning-based visual inspection. Firstly, the appearance of structural elements may be inaccurately recognized due to the presence of surface defects. For example, Fig. <ref>(a) shows a rusted girder section and a below-bearing share similar surface defects. The girder's rusted or flaking portion might be mistakenly identified as part of the bearing due to the similarities in appearance caused by the defects. Secondly, surface inhomogeneity, shadows, and poor lighting conditions, as shown in Fig. <ref>(b), continue to pose challenges for reliably assessing defects on surfaces of structural elements. Lastly, the spatial correlation between element parsing and defect segmentation tasks, as shown in Fig. <ref>(c), has been overlooked, leading to unreasonable predictions. For example, the presence of steel corrosion in the background area is apparently a wrong prediction result because it is impossible in reality. Several attempts have explored solutions to these challenges, mainly using multi-task learning (MTL) methods <cit.>. MTL presents an efficient approach that learns to perform the two related tasks simultaneously with a unified model <cit.>. While those studies have laid a solid foundation for visual assessment of the structure's condition, several research needs must be addressed further to improve the technology readiness of image analysis for automated visual inspection.
MTL can adopt a powerful deep encoder that extracts a deep feature embedding to represent each input inspection image <cit.>. There are many choices of deep encoders for semantic segmentation. The guidance for choosing one deep encoder that is well-suited for the tasks of this paper has not been available. Moreover, the overall embedding encompasses information related to both structural elements and surface defects. While a simple method named feature projection <cit.> has been developed to attempt to decouple task-specific features intertwined in the overall embedding, feature relearning has not been explored, which is a more advanced method to extract task-specific features. Last but not least, how to leverage the spatial correlation between structural elements and defects to let one task benefit from the other task and vice versa is still an unsolved question. A cross-talk method <cit.> was created for this purpose. However, that design does not explicitly integrates the physical meaning of spatial information that one task can provide to another. The spatial attention mechanism, which focuses on spatial attributes such as shape and boundaries within an image, is critical in image analysis. Upon obtaining the spatial attention maps and task-specific features, the design of a suitable network architecture becomes pivotal. Such an architecture should facilitate efficient communication and information exchange with these attention and feature maps across varying tasks. Incorporating this diverse information can enhance a model's understanding of the complex interrelations within the data, leading to more robust feature representations and improved semantic segmentation outcomes.
In addressing the above-discussed technical needs, this paper has the following contributions:
* A new Attention-enhanced Co-interactive Fusion MTL model, named ACF-Net, is introduced. It has a share-split-interaction pipeline composed of a shared high-resolution deep encoder, two task-specific relearning subnets, and a co-interactive feature fusion module.
* A new dataset, named Steel Bridge Condition Inspection Visual (SBCIV) dataset, is developed to support the development and evaluation of MTL models for automating the bridge element inspection.
* A comprehensive study employing both numerical experiments and qualitative results is conducted to verify the strengths of ACF-Net and reveal reasons for achieving satisfying performance.
Based on the proposed ACF-Net, the automatic structural condition assessment framework in visual bridge inspection is shown in Fig. <ref>.
The remainder of the paper is organized as follows. The next section is a summary of related work. Then, Section <ref> presents the architecture design of ACF-Net, followed by the details of executing the model. Section <ref> discusses results from the experimental studies for evaluating ACF-Net, and Section <ref> further presents the assessment case study. In the end, Section <ref> summarizes insights gained from this study and suggestions for important future work.
§ RELATED WORK
This paper is built on studies that contribute to structural condition assessment in visual inspection, either directly or indirectly. The related literature is summarized below.
§.§ DCNN-based defect segmentation
An intensively studied topic related to the visual inspection of infrastructures is defect detection, which is about finding structural surface defects or damage in inspection images or videos <cit.>. The majority of current research efforts are centered on DCNN-based defect segmentation, where each pixel of an inspection image is classified as defect or non-defect <cit.>. Segmentation can provide pixel-level position information of defects, resulting in superior accuracy compared to object detection methods <cit.>. Deep feature extractors, such as DCNNs, are employed due to their ability to facilitate automated representation learning and embed rich information, ultimately capturing complex real-world data features through multi-level feature abstraction <cit.>.
DCNN-based crack segmentation methods have shown considerable success in detecting and analyzing defects across various civil structures, including buildings, bridges, tunnels, and roads <cit.>. <cit.> utilized a Fully Convolutional Network (FCN) <cit.> for crack element marking, while <cit.> aimed at accurately quantifying cracks by training an atrous convolution-based DeepLabv3+ model <cit.>. <cit.> focused on crack connectivity through a densely connected DCNN architecture. <cit.> incorporated visual explanations into a U-Net <cit.> based model to highlight crack semantics.
Recently, DCNNs have been utilized for the detection and segmentation of corrosion in steel structures. <cit.> demonstrated that DCNNs surpass conventional vision-based corrosion detection methods that rely on texture and color analysis using a basic multilayer perceptron network. <cit.> introduced a corrosion assessment approach that applied DeepLab <cit.> to infrastructure inspection images. <cit.> developed a two-stage corrosion location method by integrating Feature Pyramid Network (FPN) <cit.> and Path Aggregation Network (PANet) <cit.> to identify corrosion areas on structural surfaces. <cit.> proposed an enhanced U-net, Fusion-Attention-U-net (FAU-net), which incorporated a fusion module and an attention module within the U-net for segmenting three types of corrosion-related damage within dim steel box girders. <cit.> applied U-Net for the automated simultaneous detection and localization of corrosion and rust grade recognition from inspection images of metal structures. These studies have laid a solid methodological groundwork for identifying defects in structural elements.
§.§ DCNN-based structural element parsing
Structural element inspection requires associating elements with defects developed on them. A stream of recent studies was motivated to focus on structural element parsing in inspection images using various DCNN-based methods. This parsing process aims at identifying structural elements within these images, typically achieved through object detection or segmentation within the given scene. Accurate identification of critical elements enables a thorough and precise evaluation of the infrastructure's overall condition, considering factors like defect shape, size, location, and compliance with established standards. <cit.> developed a hierarchical framework applicable to a range of classification tasks, including recognizing building component types of damage states. <cit.> detected damage in welded joints on truss structures by extracting and classifying target image areas. <cit.> presented an FCN-based method for bridge component recognition. <cit.> developed a DeepLab-based model incorporating RGB-D (color and depth) images for component segmentation in thirteen buildings. <cit.> proposed an enhanced U-Net model with a novel geometric consistency loss for geometry-informed structural component segmentation of post-earthquake buildings. <cit.> transferred a pre-trained Mask R-CNN <cit.> to the task of bridge elements segmentation and created a semi-supervised self-training method to refine the transferred network iteratively. Through a comparative analysis of the state-of-the-art semantic segmentation networks, <cit.> revealed the aptitude of High-Resolution Network (HRNet) <cit.> in efficiently extracting deep features for segmenting various structural elements in bridge inspection images. Furthermore, the study investigated factors that impact the network's performance, such as transfer learning, the size of the training set, data augmentation techniques, and the role of class weights.
Although impressive, the above methods exhibit some limitations in identifying and capturing the highly irregular and significantly deteriorated elements from inspection images. Furthermore, these approaches have yet to explore feature fusion's potential to enhance element parsing capability. Notably, compared to the extensively studied defect segmentation, structural element parsing remains in a relatively nascent stage of development.
§.§ MTL in visual structural assessment
MTL has proven effective in various civil engineering applications, such as SHM data reconstruction <cit.>, bridge damage diagnosis <cit.>, landslide evolution state prediction <cit.>, and more. The prevalent method for achieving MTL is to share the feature extractor and branch downstream tasks for respective predictions <cit.>. This shared feature extractor learns common representations for all tasks, significantly reducing the risk of overfitting and enhancing generalization <cit.>. However, one task can easily dominate others in this conventional approach, which negatively impacts the overall performance.
<cit.> introduced MaDnet, a DCNN comprising a shared feature extractor and multiple semantic segmentation pathways to identify material and damage types. This framework indicates that one segmentation task can provide contextual information for another. <cit.> developed MT-HRNet that employs the HRNetV2-W18 backbone and two segmentation heads for element recognition and damage identification in synthetic bridge images. To further address the challenges of task domination and improve task-specific feature sharing, <cit.> proposed MTL-D and MTL-I models that both can simultaneously segment bridge elements and surface corrosion. These models project the shared features for respective tasks and utilize the cross-talk feature sharing between tasks to enhance performance and prevent the dominance of a single task.
Despite remarkable advancements in this direction, most studies remain at the stage of naïve MTL models. While attempts were observed to exchange information among different tasks, they have yet to explicitly model the spatial association between elements and defects. The effective utilization of this spatial relationship will mitigate the task domination issue and enhance the overall performance, which remains a further exploration.
§ ACF-NET
In filling the above-discussed gaps, this paper introduces an Attention-enhanced Co-interactive Fusion Network (ACF-Net), shown in Fig. <ref>. ACF-Net analyzes each input RGB inspection image to parse the bridge in the images into structural elements and segment the surface defects on the elements. The backbone of ACF-Net encodes the input image as an overall feature embedding. Then, two relearning subnets respectively extract element- and defect-specific feature maps from the encoded overall embedding. After that, a co-interactive module generates spatial attention maps to guide the feature fusion for performing the two downstream tasks. Finally, two reconstruction decoders respectively perform the pixel-level classification for element parsing and defect segmentation. Details of the ACF-Net's key components are delineated below.
§.§ Shared encoder
ACF-Net chooses HRNetV2-W48 <cit.> as its shared encoder, which generates high-resolution representations through the parallel connection of high-to-low-resolution convolutions. An inspection image captured by the robot-mounted RGB camera is reshaped to be the size 3× 520× 520 before entering the encoder. The shared encoder extracts an overall feature embedding, f(∈ℝ^720× 120× 120), from the input image, which encompasses both the bridge element and defect information in the image.
§.§ Task-specific feature relearning subnets
Let Ω={ e, d} denote the index set of tasks, where e and d are the indices of element parsing and defect segmentation tasks, respectively. These two tasks concentrate on distinct characteristics and minutiae. Therefore, following the shared encoder are two task-specific relearning subnets that further extract the task-specific feature maps, f_i (∈ℝ^512× 120× 120), for i∈Ω. Each relearning subnet consists of a convolutional layer (COV), a batch normalization layer (BN), and the rectified linear unit (ReLU) activation function in sequence:
f_i = ReLU(BN(COV(f; θ_ rln,i))), ∀ i∈Ω
The convolutional operation in Eq. (<ref>) uses a kernel size of 3, a stride of 1, padding of 1, and 512 output channels. θ_ rln,i are learnable parameters of the convolutional layer in the relearning (rln) subnet for task i.
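As an illustration, a minimal PyTorch sketch of one relearning subnet consistent with the description above is given below; the class and argument names are ours, and the channel sizes follow the dimensions stated in the text (720-channel shared embedding, 512 output channels).

import torch.nn as nn

class RelearningSubnet(nn.Module):
    """One task-specific relearning subnet: 3x3 COV -> BN -> ReLU, as in the equation above."""
    def __init__(self, in_channels=720, out_channels=512):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f):
        # f: shared embedding of shape (B, 720, 120, 120); output: (B, 512, 120, 120)
        return self.relu(self.bn(self.conv(f)))

One such subnet is instantiated per task (element parsing and defect segmentation), both reading the same shared embedding.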
§.§ Co-interactive fusion module
Structural elements and defects exhibit spatial correlation. Therefore, exchanging information between the two tasks can positively impact their performance, which motivates the design of the co-interactive fusion module. The task-specific feature map of one task is fused with the additional spatial information from the other task in an additive manner:
f_i^* = f_i⊕(S_i⊗f_j), ∀ i, j∈Ω and i≠ j
Here, ⊗ represents the element-wise multiplication, ⊕ denotes the element-wise addition, S_i (∈ℝ^512× 120× 120) is the spatial attention mask for guiding the feature fusion for task i, and f_i^∗ is the resulting spatial-attention-enhanced feature map for the task.
In Eq. (<ref>), the spatial attention mask of one task consists of scores for adjusting the other task's feature map in feature fusion. These scores are learned by a convolutional layer (with a kernel size of 3, a stride of 1, and padding of 1) and normalized using the Sigmoid function:
S_i = Sigmoid(COV(f_i;θ_ att,i)), ∀ i∈Ω
where θ_ att,i are the learnable parameters of the attention (att) module for task i.
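The co-interactive fusion of the two equations above can be sketched as follows; this is an illustrative reading of the module, not a released implementation, and the names are ours.

import torch
import torch.nn as nn

class CoInteractiveFusion(nn.Module):
    """f_i^* = f_i + S_i * f_j, with S_i = Sigmoid(COV(f_i)) as defined above."""
    def __init__(self, channels=512):
        super().__init__()
        self.att_e = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)
        self.att_d = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)

    def forward(self, f_e, f_d):
        s_e = torch.sigmoid(self.att_e(f_e))  # attention mask learned from element features
        s_d = torch.sigmoid(self.att_d(f_d))  # attention mask learned from defect features
        f_e_star = f_e + s_e * f_d            # element branch gates and adds defect cues
        f_d_star = f_d + s_d * f_e            # defect branch gates and adds element cues
        return f_e_star, f_d_star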
§.§ Reconstruction decoder
In leaving the co-interactive fusion module, the spatial-attention-enhanced feature map for any task i, f_i^∗, flows into the task's reconstruction decoder. First, the segmentation head (SH) in the decoder performs the pixel-level classification, which is composed of a convolutional layer (with a kernel size of 1, a stride of 1, and 512 output channels), a batch normalization layer, a ReLU activation function, and another convolutional layer (with a kernel size of 1, a stride of 1, and N_i output channels). Then, the obtained segmentation map is upsampled (UP) using the bilinear interpolation to give the pixel-level prediction scores for all classes, y_i (∈ℝ^N_i× 520× 520). That is,
y_i=UP(SH(f^∗_i;θ_ sh,i)), ∀ i∈Ω
SH(f^∗_i;θ_ sh,i)=COV(ReLU(BN(COV(f^∗_i;θ_ sh1,i)));θ_ sh2,i), ∀ i∈Ω
where θ_ sh,i are learnable parameters of the convolutional layers in the segmentation head of task i.
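A possible PyTorch sketch of one reconstruction decoder is shown below; the default class count (seven element classes) and names are illustrative assumptions, not part of a released implementation.

import torch.nn as nn
import torch.nn.functional as F

class ReconstructionDecoder(nn.Module):
    """Segmentation head (1x1 conv -> BN -> ReLU -> 1x1 conv) followed by bilinear upsampling."""
    def __init__(self, in_channels=512, num_classes=7, out_size=(520, 520)):
        super().__init__()
        self.out_size = out_size
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, num_classes, kernel_size=1, stride=1),
        )

    def forward(self, f_star):
        logits = self.head(f_star)  # (B, N_i, 120, 120)
        return F.interpolate(logits, size=self.out_size, mode="bilinear", align_corners=False)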
§.§ Loss function
Learnable parameters of the proposed ACF-Net are determined through model training that minimizes a differentiable loss function through backpropagation. The loss function is an aggregated measure of the pixel-level dissimilarity between the ground truth and prediction on a training dataset.
Images in the training dataset are indexed by k; y_i(k) (∈ℝ^N_i× 520× 520) represents the one-hot encoding of image k's pixel-level ground truth associated with task i, and ŷ_i(k) (∈ℝ^N_i× 520× 520) denotes the corresponding pixel-level prediction. The cross-entropy loss of ACF-Net in performing task i is
ℒ_i = - ∑_k <y_i(k), logŷ_i(k)>, ∀ i∈Ω
where <,> is the operation to obtain the Frobenius inner product on two tensors, which performs the element-wise product of the two input tensors to become one in the same size and then sums up all the elements of the resulting tensor.
The loss functions defined for the individual tasks need to be integrated as a total loss function so that ACF-Net learns the two tasks at once. This paper employs a straightforward yet efficient weighting scheme, known as Dynamic Weight Average (DWA) <cit.>, to adaptively balance the individual loss functions during training. Given that t is the index of training epochs and ℒ_i,t is the loss function of task i calculated using Eq. (<ref>) at epoch t, the relative loss descending rates of the two tasks in the last training epoch are:
w_t-1=[ℒ_ e,t-1/ℒ_ e,t-2, ℒ_ d,t-1/ℒ_ d,t-2]
These rates are references for assigning weights to the individual loss functions at the current epoch. w_t is initialized as [1,1] at t=1, 2.
The weights λ_ e,t and λ_ d,t for aggregating the individual loss functions are obtained by applying the softmax operation to w_t-1,
λ_t=2×Softmax[w_t-1/τ]
where λ_t=[λ_ e,t, λ_ d,t] is the vector of weights at t, and τ is a temperature parameter for controlling the softness of this weighting scheme, chosen as 2 in this study.
The total loss function, ℒ_ tot, is attained by aggregating the individual training loss functions ℒ_ e,t and ℒ_ d,t using the weights calculated in Eq. (<ref>):
ℒ_ tot = λ_ e,tℒ_ e,t+λ_ d,tℒ_ d,t
With the DWA scheme, a higher weight is put on the task with a smaller relative loss descending rate from the last training epoch.
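The DWA weighting above can be implemented in a few lines; the sketch below is our illustration (the way the loss history is stored is an assumption).

import numpy as np

def dwa_weights(loss_history, t, temperature=2.0, num_tasks=2):
    """loss_history[i][t] stores task i's loss at epoch t (1-indexed); returns lambda_t."""
    if t <= 2:
        w = np.ones(num_tasks)  # w_{t-1} initialised to [1, 1] for the first two epochs
    else:
        w = np.array([loss_history[i][t - 1] / loss_history[i][t - 2] for i in range(num_tasks)])
    exp_w = np.exp(w / temperature)
    return num_tasks * exp_w / exp_w.sum()  # K * softmax(w_{t-1} / tau), here K = 2

# total loss at epoch t:
# lam = dwa_weights(history, t); loss_tot = lam[0] * loss_e + lam[1] * loss_d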
§ EXPERIMENTAL SETUP
§.§ Dataset and data augmentation
Lacking publicly available datasets with the annotation for developing the proposed MTL model, this study created a new annotated dataset, the Steel Bridge Condition Inspection Visual (SBCIV) dataset, comprising 440 high-resolution images procured from two publicly available datasets: Corrosion Condition State Semantic Segmentation Dataset <cit.> and Common Objects in Context for bridge inspection (COCO-Bridge) dataset <cit.>. 100 images are reserved for testing and evaluating the ACF-Net, and the remaining 340 images, split into the training dataset and validation dataset at the 9:1 ratio, are used for model development and hyperparameters optimization.
The SBCIV dataset provides the pixel-level annotation for six types of common structural elements of steel plate girder bridges, shown in Fig. <ref>, including bearing, bracing, deck, floor beam, girder, and substructure. Fig. <ref> shows that the number of element types in an inspection image, as well as the number of bridge elements presented in an image, varies from one image to another. The dataset also provides the pixel-level binary annotation of surface defects: corrosion or non-corrosion. The annotations were developed using the LabelMe <cit.> labeling tool and adhered to the Bridge Inspector's Reference Manual <cit.> and the corrosion condition state guidelines outlined in the American Association of State Highway and Transportation Officials (AASHTO) <cit.>.
Deep learning models usually require a large amount of data to be trained effectively. Yet, creating a large dataset with the required annotation for the problem of study is expensive. The issue of small data can be partially addressed by data augmentation that aims to not only increase the quantity of data but also cover as many situations as possible which were not in the original dataset but could occur in real-world scenarios. To achieve this, seven image data augmentation methods were employed in this paper, including random scale transformation, random rotations between ±10^∘, random horizontal flipping, random image intensity noise using 5×5 Gaussian kernel, and HSV augment that randomly adjusts hue (H), saturation (S), and value (V) of images.
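A possible torchvision-based sketch of such an augmentation pipeline is shown below; the jitter and scale ranges are illustrative, the 5×5 Gaussian intensity noise is approximated here with a 5×5 Gaussian blur, and geometric operations must be applied identically to the image and both label masks (e.g., via the functional API inside the Dataset).

import torchvision.transforms as T

photometric = T.Compose([
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),  # HSV-style jitter
    T.GaussianBlur(kernel_size=5),                                          # 5x5 Gaussian kernel
])
geometric = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=10),                   # rotations between +/- 10 degrees
    T.RandomAffine(degrees=0, scale=(0.75, 1.25)),  # random scale transformation
])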
§.§ Implementation details
The ACF-Net was built based on the PyTorch 1.10.0 library and trained on a server with an Nvidia Tesla V100 GPU (32 GB memory). The Adam optimizer with an initial learning rate of 5e-4 and a minimum learning rate of 5e-6 was utilized for training the model. A cosine learning rate scheduler was used to adjust the learning rate during training. Owing to the limited dataset size in this study, transfer learning was applied to fine-tune ACF-Net, where the backbone was initialized with weights pre-trained on the Cityscapes dataset <cit.>. The model was fine-tuned for 150 epochs with a batch size of 8. For computational efficiency, all images were resized from the original size to 520× 520 pixels. The model achieving the lowest loss on the validation dataset was saved as the final model.
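For reference, the training setup described above roughly corresponds to the following PyTorch sketch; model, the data loaders, compute_total_loss (the DWA-weighted sum of the previous section), and evaluate are placeholders we assume to be defined elsewhere.

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=150, eta_min=5e-6)

best_val = float("inf")
for epoch in range(150):
    model.train()
    for images, mask_e, mask_d in train_loader:   # batch size 8, images resized to 520x520
        optimizer.zero_grad()
        loss = compute_total_loss(model, images, mask_e, mask_d, epoch)
        loss.backward()
        optimizer.step()
    scheduler.step()
    val_loss = evaluate(model, val_loader)
    if val_loss < best_val:                       # keep the checkpoint with the lowest val loss
        best_val = val_loss
        torch.save(model.state_dict(), "acfnet_best.pt")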
The ACF-Net was evaluated through comparative studies that measure its performance against related models and state-of-the-art summarized below.
* Single-task models: Two models, that use HRNetV2-W48 as the backbone, were trained separately using the hyperparameters mentioned above. They perform the bridge element parsing and defect segmentation tasks, respectively. The two single-task models serve as the baseline for assessing MTL models in this paper.
* Variants of ACF-Net: Three variants of ACF-Net were trained to evaluate the design of ACF-Net. The Naïve MTL model drops both the task-specific relearning (TR) subnets and the co-interactive fusion (CF) module from the ACF-Net. The ACF-Net w/o TR is the model that drops only the task-specific relearning subnets, whereas the ACF-Net w/o CF drops only the co-interactive fusion module.
* State-of-the-art models: The recently developed MTL models, including MaDnet <cit.>, MT-HRNet <cit.>, and MTL-I <cit.>, have shown state-of-the-art performance in segmenting both bridge elements and surface defects. ACF-Net was compared to these models to demonstrate the improvement it can achieve. To ensure a fair comparison, these networks were trained and tested on the SBCIV dataset using the same data augmentation methods. The training and testing of these models strictly followed the hyperparameter settings in their papers.
§.§ Evaluation metrics
Comprehensive metrics at the class-level, task-level, and model-level are defined. For any of the datasets, a vector of binary variables, y_i,j, denotes the ground truth of class j in task i for all pixels in that dataset, and another vector of binary variables, ŷ_i,j, is the corresponding prediction. Λ_i designates the set of classes in task i, for i∈{e, d}. Here, Λ_ e= {Bearing, Bracing, Deck, Floor beam, Girder, Substructure, Background} is the set of classes in the bridge element parsing task, and Λ_ d={Corrosion, Non-corrosion} is the set of classes in the defect segmentation task. In performing task i, a model's ability to predict class j pixels is assessed using three widely recognized metrics, namely Intersection over Union (IoU), Precision, and Recall:
IoU_i,j=‖y_i,j∧ŷ_i,j‖_1/‖y_i,j∨ŷ_i,j‖_1
Precision_i,j=‖y_i,j∧ŷ_i,j‖_1/‖ŷ_i,j‖_1
Recall_i,j=‖y_i,j∧ŷ_i,j‖_1/‖y_i,j‖_1
where ∧ is the element-wise AND operator, ∨ is the element-wise OR operator, and ‖·‖_1 is the 1-norm, which counts the non-zero elements of a vector. IoU_i,j in Eq. (<ref>) calculates the intersection of the class j ground truth and the prediction over their union, for any j∈Λ_i. Precision_i,j in Eq. (<ref>) is the percentage of pixels predicted as class j which are predicted correctly, and Recall_i,j in Eq. (<ref>) is the percentage of class j pixels that are correctly predicted.
For the task-level evaluation, three commonly used metrics were adopted, which are mean IoU (mIoU), mean Accuracy (mAcc), and pixel Accuracy (pAcc). For any task i, mIoU is calculated by averaging the IoU values of all the classes. mAcc is the mean of class-level accuracy values, whereas pAcc denotes the accuracy evaluated at the pixel-level without regard to the classes.
mIoU_i=1/|Λ_i|∑_j∈Λ_i IoU_i,j
mAcc_i=1/|Λ_i|∑_j∈Λ_i‖y_i,j∧ŷ_i,j‖_1/‖y_i,j‖_1
pAcc_i=∑_j∈Λ_i‖y_i,j∧ŷ_i,j‖_1/∑_j∈Λ_i‖y_i,j‖_1
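The class- and task-level metrics above can be computed directly from boolean masks, as in the following NumPy sketch (empty-class handling is omitted for brevity).

import numpy as np

def class_metrics(y_true, y_pred):
    """IoU, Precision, Recall for one class; inputs are boolean arrays of equal shape."""
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return inter / union, inter / y_pred.sum(), inter / y_true.sum()

def task_metrics(y_true_onehot, y_pred_onehot):
    """mIoU, mAcc, pAcc for one task; inputs have shape (num_classes, H, W), boolean."""
    ious, accs = [], []
    for y_t, y_p in zip(y_true_onehot, y_pred_onehot):
        iou, _, recall = class_metrics(y_t, y_p)
        ious.append(iou)
        accs.append(recall)  # per-class accuracy equals the recall of that class
    pacc = np.logical_and(y_true_onehot, y_pred_onehot).sum() / y_true_onehot.sum()
    return np.mean(ious), np.mean(accs), pacc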
Finally, at the model-level, using the two single-task models as the baseline, the study measured the incremental of a MTL model's overall performance against the baseline by averaging the percentage increase of every task-level metric:
Δ = 1/6∑_i∈Ω[Δ (mIoU_i)+Δ (mAcc_i)+Δ (pAcc_i)]
where Δ(·) means the percentage increase of a metric.
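For completeness, the model-level increment of the equation above reduces to a simple average of six percentage gains; the dictionary layout below is an assumption for illustration.

def overall_increment(single_task, mtl):
    """single_task and mtl: {"e": (mIoU, mAcc, pAcc), "d": (mIoU, mAcc, pAcc)}; returns Delta in %."""
    gains = [100.0 * (mtl[i][m] - single_task[i][m]) / single_task[i][m]
             for i in ("e", "d") for m in range(3)]
    return sum(gains) / 6.0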
§ EXPERIMENTS AND RESULTS
Experiments are conducted to verify the effectiveness of the proposed ACF-Net on the newly developed SBCIV dataset.
§.§ Ablation study
Ablation study was performed to thoroughly evaluate the effectiveness of the key components designed for ACF-Net. The single-task models utilizing the HRNetV2-W48, along with variants of the ACF-Net, were trained and pitted against the ACF-Net. Results are summarized in Table <ref> and discussed below.
From Table <ref>, it can be observed that the naïve MTL model is dominated by the defect segmentation task, as compared to the single-task models. Although the IoU value of defect segmentation has slightly improved by 0.38%, the IoU value of element parsing has significantly dropped by 2.26%. Therefore, the naïve MTL model exhibits an overall decline in performance compared to the single-task models. It indicates that performing two different tasks by directly utilizing the overall feature embedding from the shared deep feature extractor is not as effective as the task-specific single-task models.
After introducing the two task-specific feature relearning subnets, the naïve MTL model becomes the ACF-Net w/o CF model in Table <ref>. The added subnets in the ACF-Net w/o CF model effectively enhance the performance, resulting in a respective improvement of 2.23% and 0.35% in the IoU values of the two tasks, as compared to the naïve MTL model. The observed improvement justifies the effectiveness of further-learned task-specific features from the overall deep feature for the downstream tasks.
The ACF-Net w/o TR is obtained by adding the co-interactive fusion module to the naïve MTL model. Compared to the single-task models, the ACF-Net w/o TR maintains equivalent performance levels on the task of element parsing yet displays a significant improvement in the task of defect segmentation. The comparison demonstrates the benefit of incorporating spatial information of structural elements in the defect segmentation task, and vice versa.
Different from the naïve MTL model, the proposed ACF-Net has both the task-specific relearning subnets and the co-interactive fusion module. ACF-Net effectively addresses the limitation of the naïve MTL model, evidenced by increases of 0.25∼2.51% in the task-level metrics. ACF-Net also exceeds the performance of the two single-task models on all metrics by 0.22∼2.28%.
§.§ Comparison to state-of-the-art models
§.§.§ Quantitative comparisons
Comparison at the task-level. This study compared the proposed ACF-Net with existing models: MaDnet <cit.>, MT-HRNet <cit.>, and MTL-I <cit.>. The following discussion is mainly based on mIoU values. Observations from the model comparisons using other metrics, such as mAcc and pAcc, are similar.
Table <ref> shows that MaDnet, using a small single-scale hand-designed network, achieves 74.85% and 78.66% mIoU on the element parsing task and defect segmentation task, respectively. MT-HRNet, which uses the most lightweight HRNetV2-W18 as the feature extractor, performs slightly better on the two tasks. However, among all the models considered, MaDnet and MT-HRNet exhibit the least favorable performance, which could be attributed to the utilization of less powerful encoders. Naïve MTL model effectively boosts the two tasks' mIoU values to 89.60% and 85.26% by utilizing the most robust version of HRNet, HRNetV2-W48, as the encoder, although it keeps the same architecture as MT-HRNet. In comparison to the naïve MTL model, MTL-I maintains comparable results on the two tasks, despite utilizing a moderately powerful encoder, HRNetV2-W36. This can be attributed to the well-designed feature projection module and the efficient cross-talk feature sharing between the two tasks. Among all the compared methods, ACF-Net achieves the best performance in all the metrics, whose mIoU values are 17.26% and 8.50% higher than MaDnet's values, further verifying the advantage of the proposed approach.
Comparison at the class-level. ACF-Net also achieves good performance at the class-level compared to state-of-the-art models, as shown in Table <ref>. ACF-Net clearly outperforms the competitors across all classes. MaDnet and MT-HRNet are notably less capable of segmenting bracings and floor beams than other types of elements, which could be attributed to their irregular shapes. For example, bracings are predominantly cross-shaped. Compared to MaDnet, the naïve MTL model increases the IoU values by 27.56% in segmenting bracings and by 11.23% in segmenting floor beams. This significant improvement is due solely to upgrading the encoder to a more capable one. Compared to the naïve MTL model, the MTL-I model further increases the IoU by another 0.18% in segmenting bracings and by 6.29% in segmenting floor beams, indicating the effectiveness of feature disentanglement and information sharing through co-interactive feature fusion in segmenting irregularly shaped elements. Ultimately, ACF-Net achieves the highest IoUs, 92.11% in segmenting bracings and 86.19% in segmenting floor beams. The additional IoU gains of 5.30% and 0.37% that ACF-Net achieves over MTL-I indicate that the relearning subnets and the co-interactive fusion module are better designs than their counterparts in MTL-I. In segmenting girders and substructures, which represent the majority of pixels across all classes, all models demonstrate at least acceptable results. A noticeable trend is observed in which the IoU value progressively increases from the leftmost to the rightmost model. These progressive improvements are mainly due to the introduction of a powerful deep feature extractor, feature disentanglement or relearning, and feature fusion into MTL. The increases in IoU values in segmenting bearings and decks are primarily attributed to using HRNet as the deep feature extractor.
ACF-Net also demonstrates dominantly better performance in segmenting defects than other models. Compared to MaDnet, ACF-Net increases the IoU value in segmenting corrosion by 15.02%, with 11.74% contributed by the adoption of HRNetV2-W48 as the backbone and the remaining 3.28% from feature relearning and co-interactive feature fusion. ACF-Net also increases the ability to segment non-corrosion areas by 1.98%.
§.§.§ Qualitative comparison
Fig. <ref> illustrates five examples of bridge element parsing and defect segmentation results to demonstrate the effectiveness of the proposed ACF-Net. These examples represent various scenarios with extensive, partial, and scarcely defects.
The qualitative evaluation of the element parsing results demonstrates that ACF-Net generates superior predictions, as evidenced by the segmentation of small objects, such as the distant floor beam in Fig. <ref>(b), bearings in Fig. <ref>(c), distant small bracing in Fig. <ref>(d), and floor beam in Fig. <ref>(e). This is also apparent in segmenting irregular elements, as demonstrated by the bracings in Fig. <ref>(a)(d). Furthermore, ACF-Net facilitates more pronounced and smooth edges of object segmentation, as evidenced by the boundary segmentation of the floor beam in Fig. <ref>(b) and substructure in Fig. <ref>(e).
The qualitative comparison of the defect segmentation results reveals that ACF-Net can produce competitive outcomes, as it generates fewer false positives than other methods when the corrosion area is small or less visible, which is evident in Fig. <ref>(b)(c)(d).
Figure <ref> further presents ACF-Net's results in analyzing fourteen examples from the testing dataset. It is evident that the predictions given by ACF-Net exhibit high quality and closely align with the ground truth.
§.§.§ Model complexity comparison
A deep learning model's complexity, measured by the number of parameters, could be a cost of the model's performance improvement. Therefore, the performance assessment should keep the model complexity in consideration. Figure <ref> presents ACF-Net and other state-of-the-art models on the diagram of performance increment Δ at the model-level vs. model complexity (parameters in millions). MaDnet has the fewest parameters due to its simplistic structure, while MT-HRNet has slightly more parameters because it utilizes the lightweight HRNetV2-W18. Consequently, the performance of these two models is less than ideal. MTL-I, which employs HRNetV2-W32 as its backbone, results in a significant performance improvement compared to MaDnet and MT-HRNet. The performance of MTL-I still falls below the single-task baseline, but its parameters are about 77% less than the baseline. By replacing the backbone of MTL-I with HRNetV2-W48, the upgraded version, MTL-I (W48), outperforms its HRNetV2-W36 counterpart and the single-task models. When MT-HRNet's backbone is changed to HRNetV2-W48, the newer version, MT-HRNet (HRNetV2-W48), and naïve MTL share the same architecture and the number of parameters, yet naïve MTL performs substantially better due to the optimized selection of hyperparameters. ACF-Net achieves the best performance among all models, with approximately half of the parameters of the single-task models. Compared to MaDnet, ACF-Net's complexity is increased by 72.96 million, with the majority (62.11 million) added by adopting the HRNetV2-W48 as the backbone for representation learning and the remainder (10.85 million) from the relearning subnets and co-interactive fusion module.
§.§ Understanding of feature maps and masks
Task-specific feature relearning and co-interactive feature fusion are two important designs for ACF-Net. To better comprehend the roles of those modules, feature maps (f, f_ e, f_ d, f_ e^∗, f_ d^∗) and attention masks (S_ e and S_ d) learned by ACF-Net are visualized in Fig. <ref>.
These visualizations utilize the feature embeddings from the second channel to show primary features, with the values of each image rescaled to be within the range from 0 to 255 to accommodate the colors. Bicubic interpolation is applied to resize the feature maps and attention masks to 520× 520, the dimension of the input images of ACF-Net.
The 2nd row in Fig. <ref> visualizes the overall feature embedding f that essentially encompasses information relevant to both tasks. With the two feature relearning subnets, the element-specific feature map f_ e and defect-specific feature map f_ d are respectively extracted from the overall feature embedding. The 3rd row visualizes the element-specific feature map that primarily captures relatively global information about elements, such as position, shape, and scale. In contrast, the defect-specific feature map visualized in the 4th row mainly captures appearance information of surface defects, such as texture and color. The 5th row visualizes the element mask S_ e, whereas the 6th row presents the defect mask S_ d. A distinct difference in the two tasks' attention masks is evident. Each task's mask functions as a feature selector that masks out uninformative portions of the other task's feature map in feature fusion. It is noteworthy that the element masks exhibit a significantly higher contrast, transitioning from blue to yellow, while the defect masks predominantly present a yellow hue in the same area. This finding implies that the element task benefits more from the extraction of task-specific features. The 7th and 8th rows are spatial-attention-enhanced feature maps f_ e^∗ and f_ d^∗. Compared to f_ e in the 3rd row, f_ e^∗ in the 7th row captures more detailed appearance information about the elements and thus enhances the ability to differentiate different types of elements. Similarly, compared to f_ d (the 4th row), f_ d^∗ (the 8th row) integrates the context information about where the defect is developed, which enhances the ability to differentiate defects from defect-like texture such as watermarks on substructures and shadows.
§ CASE STUDY ON STRUCTURAL CONDITION ASSESSMENT
The potential application of the ACF-Net in bridge inspection is further illustrated in a preliminary structural condition assessment case study. To illustrate how ACF-Net performs the visual assessment for bridge elements, ten examples in the testing dataset are presented in Fig. <ref>. Each steel element is preliminarily evaluated based on the proportion of the corroded area observed on the element (the ratio of the corroded area to the entire element area). The structural conditions of bridge elements are then categorized into four distinct levels: Good, Fair, Poor, and Severe. These classifications are determined by specific thresholds of corrosion coverage: 0%, 25%, and 50%. These thresholds, which define the intervals between the various condition classifications, are graphically depicted in Fig. <ref>.
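A minimal sketch of this preliminary rating rule is given below; the exact handling of the interval boundaries is our reading of the thresholds above.

import numpy as np

def element_condition(element_mask, corrosion_mask):
    """Preliminary condition rating from the corroded fraction of one element's pixels."""
    area = element_mask.sum()
    if area == 0:
        return None
    ratio = np.logical_and(element_mask, corrosion_mask).sum() / area
    if ratio == 0:
        return "Good"
    elif ratio <= 0.25:
        return "Fair"
    elif ratio <= 0.50:
        return "Poor"
    return "Severe"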
ACF-Net demonstrates promising initial evaluation results in scenarios where the bridge elements are in good condition, as illustrated in Fig. <ref>(c)(g). Moreover, the proposed method is capable of accurately identifying the elements of interest in cases with severe damage, even when there is extensive surface deterioration. The ACF-Net provides precise preliminary evaluation outcomes in such instances, as displayed in Fig. <ref>(a)(d)(e)(f)(j). In scenes featuring a mixture of elements in both well or unsatisfactory conditions, ACF-Net is capable of accurately determining the condition of each element, which can be observed in Fig. <ref>(b)(h)(i). This level of accuracy and reliability in detecting and assessing both well-maintained and heavily damaged bridge elements underscores the potential of ACF-Net as a valuable tool in the field of infrastructure inspection.
§ CONCLUSIONS
This paper presented a novel deep learning architecture called Attention-enhanced Co-interactive Fusion Network (ACF-Net), and a newly annotated Steel Bridge Condition Inspection Visual (SBCIV) dataset, to automate visual bridge inspection for structural condition assessment. ACF-Net employs a share-split-interaction pipeline, with the shared component utilizing a deep, high-resolution encoder HRNetV2-W48. The split-interaction component comprises two task-specific relearning subnets and one co-interactive feature fusion module. ACF-Net has surpassed the current state-of-the-art MTL methods for structural condition assessment, achieving 92.11% element parsing mIoU and 87.16% corrosion segmentation mIoU on the testing dataset of SBCIV. The ablation study revealed how ACF-Net's key modules contribute to performance improvement, and the evaluation on the testing dataset further demonstrated the capability of ACF-Net in automated visual inspection.
While ACF-Net has shown promising results in extracting structural elements in inspection images and assessing the elements' condition, addressing limitations presented in the current work will broaden the impact of ACF-Net on the visual inspection of civil infrastructure. One obstacle to be addressed is the scarcity of annotated data. Bridges are diverse in types and designs. The condition of the same bridge is also changing over time due to deterioration or reparation. The performance of ACF-Net will drop by a certain amount if the inspection images contain new elements and new defect types. Annotated data are required to adapt ACF-Net to new tasks. An efficient data annotation tool is desired to accommodate the need for annotated data. A portion of the improved task performance achieved by ACF-net is attributed to the use of a deep feature encoder. Deep feature extraction is the most time-consuming portion of image analysis. Lightweight feature extractors that are as powerful as the deep feature extractors are desired because the runtime efficiency would support inspectors' decisions at inspection fields, such as utilizing additional advanced inspection methods (e.g., infrared cameras, ground-penetrating radar, ultrasound scanning) to collect data at concerned areas identified from the visual inspection. Those future works will move the research on this topic forward, and the automation of infrastructure inspection will continue blooming.
§ ACKNOWLEDGEMENT
This work was supported by the National Science Foundation (grant numbers ECCS-2025929, ECCS-2026357). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
|
http://arxiv.org/abs/2307.04602v1 | 20230710144153 | Inverse cascading for initial MHD turbulence spectra between Saffman and Batchelor | [
"Axel Brandenburg",
"Ramkishor Sharma",
"Tanmay Vachaspati"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Inverse cascading for initial MHD turbulence spectra between Saffman and Batchelor
Axel Brandenburg, Ramkishor Sharma, and Tanmay Vachaspati
August 12, 2023
====================================================================================================================================================
In decaying magnetohydrodynamic (MHD) turbulence with a strong magnetic
field, the spectral magnetic energy density increases with time at small
wavenumbers k, provided the spectrum at low k is sufficiently steep.
This is inverse cascading and occurs for an initial Batchelor spectrum,
where the magnetic energy per linear wavenumber interval increases
like k^4.
For an initial Saffman spectrum that is proportional to k^2, however,
inverse cascading is known not to occur.
We study here the case of an intermediate k^3 spectrum, which may be
relevant for magnetogenesis in the early Universe during the electroweak
epoch.
This case is not well understood in view of the standard Taylor expansion
of the magnetic energy spectrum for small k.
Using high resolution MHD simulations, we show that also in this
case there is inverse cascading with a strength just as expected from
the conservation of the Hosking integral, which governs the decay of an
initial Batchelor spectrum.
§ INTRODUCTION
Standard hydrodynamic turbulence exhibits forward cascading whereby
kinetic energy cascades from large scales (small wavenumbers) to smaller
scales (larger wavenumbers) <cit.>.
This also happens in decaying turbulence, except that the rate of energy
transfer to smaller scales is here decreasing with time <cit.>.
In magnetohydrodynamic (MHD) turbulence, the situation is in many ways
rather different.
This is primarily owing to magnetic helicity <cit.>, which is a
conserved quantity in the absence of magnetic diffusivity <cit.>.
Magnetic helicity is an important property of MHD turbulence that is
not shared with hydrodynamic turbulence, even though there is kinetic
helicity that is also an invariant if viscosity is strictly vanishing
<cit.>.
However, this is no longer true when the viscosity is finite <cit.>.
This is because kinetic helicity dissipation occurs faster than kinetic
energy dissipation, whereas magnetic helicity dissipation occurs more
slowly than magnetic energy dissipation for finite magnetic diffusivity
<cit.>.
The importance of magnetic helicity conservation has been recognized
long ago by <cit.> and <cit.> in cases when it is finite
on average.
In that case, it leads to the phenomenon of an inverse cascade.
In forced turbulence, this means that part of the injected energy gets
transferred to progressively larger scales <cit.>.
This process is at the heart of large-scale dynamos, which can be
described by what is known as the α effect <cit.>, and
is relevant for explaining the large-scale magnetic fields in stars and
galaxies <cit.>.
In decaying turbulence, on the other hand, inverse cascading leads to
a temporal increase of the magnetic energy at the smallest wavenumbers.
A similar phenomenon has never been seen in hydrodynamic turbulence,
where the spectrum at small k remains unchanged.
Even when the magnetic helicity vanishes on average, there can still be
inverse cascading.
In that case, it is no longer the mean magnetic helicity density, whose
conservation is important, but the magnetic helicity correlation integral,
also known as the Hosking integral <cit.>.
In nonhelical turbulence, the possibility of inverse cascading with an
increase of spectral magnetic energy at small wavenumbers was originally
only seen for steep initial magnetic energy spectra, E_M(k)∝ k^4.
Here, E_M(k) is defined as the spectral magnetic energy
per linear wavenumber interval and is normalized such that
∫ E_M(k,t) dk=⟨ B^2⟩/2≡ℰ_M(t)
is the mean magnetic energy density.
Those k^4 spectra were motivated by causality arguments, requiring
that magnetic field correlation functions strictly vanish outside the
light cone <cit.>.
Such a field can be realized by a random vector potential that is
δ-correlated in space, i.e., the values of any two neighboring
mesh points are completely uncorrelated.
The magnetic vector potential A has therefore a k^2 spectrum, which
implies that the magnetic field B=∇×A has a k^4 spectrum.
For the case of a shallower E_M(k)∝ k^2 spectrum, no inverse
cascading has been found <cit.>.
This was explained by the conservation of the magnetic Saffman integral
<cit.>, which constitutes the coefficient in the leading quadratic
term of the Taylor expansion of the magnetic energy spectrum for
small k.
The intermediate case of a k^3 spectrum may be realized during the
electroweak epoch in cosmology due to a distribution of magnetic charges
as shown in <cit.> and <cit.>.
The evolution of the magnetic field in this case is less clear.
<cit.> reported weak inverse cascading, but it is not
obvious whether this agrees with what should be expected based on the
conservation of the Hosking integral, or whether it is some intermediate
case in which the possible conservation of both the magnetic Saffman
integral and also the Hosking integral can play a role.
Investigating this in more detail is the purpose of the present work.
§ PRELIMINARY CONSIDERATIONS
§.§ Relevant integral quantities in MHD
Three important integrals have been discussed in the context of decaying
MHD turbulence.
The first two are the magnetic Saffman and magnetic Loitsyansky integrals
<cit.>,
I_ SM = ∫⟨ B(x)· B(x+r)⟩ d^3r,
I_ LM= - ∫⟨ B(x)· B(x+r)⟩ r^2 d^3r,
respectively.
Here, angle brackets denote ensemble averages, which we approximate
by volume averages.
The two integrals above are analogous to those
in hydrodynamics, but with B being replaced by the velocity u.
The third relevant quantity is the Hosking integral
<cit.>,
I_ H=∫⟨ h(x) h(x+r)⟩ d^3r,
where h= A· B is the magnetic helicity density.
By defining the longitudinal correlation function M_ L(r) through
⟨ B(x)· B(x+r)⟩=1/r^2 d/dr(r^3 M_L),
the integrals I_ SM and I_ LM emerge in the coefficients
of the Taylor expansion of the magnetic energy <cit.>.
A similar expansion also applies to the magnetic helicity variance
spectra <cit.>.
For power spectra that decay sufficiently rapidly, a Taylor expansion
of sin(kr)/(kr) gives,
Sp(B)|_k→0 = 2 k^2/π∫ d/dr(r^3 M_ L) (1-k^2 r^2/6+...) dr ≡ I_ SM/2π^2 k^2
+I_ LM/12π^2 k^4+...,
Sp(h)|_k→0 = I_ H/2π^2 k^2
+... .
Here, Sp(h)=(k^2/8π^3L^3)∮_4π|h̃|^2 dΩ_k
is the shell-integrated spectrum of h in volume L^3, the tilde marks a quantity
in Fourier space, and Ω_k is the solid angle in Fourier space, so that
∫ Sp(h) dk=⟨h^2⟩, and likewise ∫ Sp(B) dk=⟨ B^2⟩.
The definition of shell integration implies that Parseval's theorem in the form
⟨h^2⟩ L^3=∫|h̃|^2 d^3k/(2π)^3 is obeyed.
The magnetic energy spectrum is defined as E_M(k,t)=Sp(B)/2μ_0,
where μ_0 is the magnetic permeability, but in the following, we
measure in units where μ_0 is set to unity.
According to the small-k expansion of Sp(B) above, Sp(B) seems to be constrained
to having only even powers of k in the limit k → 0.
Furthermore, Sp(B) ∝ k^2 when I_ SM is finite and dominant,
and likewise, Sp(B) ∝ k^4 when I_ LM is finite and dominant.
The expansion in powers of k in (<ref>) holds, however, only if the coefficients
in the expansion are finite.
This is the case if, for example, M_ L is an exponentially decaying
function of r.
If, on the other hand, M_ L decays only as a power law, the expansion
does not hold since higher order coefficients will be divergent.
In such cases the leading order behavior in k may consist of odd
(or even arbitrary) powers of k.
A simple counterexample to the expansion above is provided by
considering the case r^3 M_ L ∝ r for large r in the longitudinal
correlation function defined above.
The specific case of E_M(k)∝ k^3 occurs for magnetic fields
produced during electroweak symmetry breaking as discussed in
<cit.> and <cit.>.
In our numerical work we will compute both the compensated
shell-integrated spectra, i.e., Sp(·) divided by suitable powers of
k, as well as the integrals I_ SM and I_ H directly from their
definitions given above.
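For orientation, the small-k extrapolation used below can be sketched as follows; this is our illustration, assuming the wavenumber array is sorted in ascending order and starts at k>0.

import numpy as np

def integral_from_spectrum(k, sp, nlow=4):
    """Estimate a Saffman-type integral from the small-k limit of 2*pi^2*Sp/k^2.
    With sp = Sp(h) this approximates I_H; with sp = Sp(B) it approximates I_SM,
    which is meaningful only where the compensated spectrum is flat."""
    compensated = 2.0 * np.pi**2 * sp / k**2
    return compensated[:nlow].mean()  # average over the nlow smallest wavenumbers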
§.§ Competition between I_ SM and I_ H
Using the Taylor expansion of the magnetic energy spectrum given above,
we see that for initial Saffman scaling (Sp(B) ∝ k^α
with α=2), the magnetic Saffman integral I_ SM must be
non-vanishing.
For initial Batchelor scaling (α=4), on the other hand,
I_ SM vanishes initially and cannot play a role.
In that case, the conservation of I_ H becomes important and leads
to inverse cascading, which then also implies the non-conservation of
I_ SM <cit.>.
For α=2, there are indications <cit.> that I_ SM
is slightly better conserved than the Hosking integral I_ H,
which enters the Taylor expansion of the magnetic helicity variance
spectrum given above.
Therefore, for α=2, Sp(B) continues being determined by its small-k
expansion, while I_ H begins to decline in the corresponding expansion of Sp(h).
For α=4, on the other hand, I_ SM=0 initially, but then
both I_ SM and I_ LM begin to grow <cit.>.
Our question here is what happens in the intermediate case when
α=3.
In that situation, Sp(B) and Sp(h) cannot be Taylor expanded
and it is unclear whether there is inverse cascading in that case,
because it would require violation of the conservation of I_ SM,
or whether I_ SM is conserved, as for α=2, and there is
no inverse cascading.
§.§ Growth of spectral energy at small wavenumbers
We now want to quantify the growth of spectral energy at small
wavenumbers.
As in <cit.>, we use self-similarity, i.e., the assumption that
the magnetic energy spectra at different times can be collapsed on top
of each other by suitable rescaling.
Thus, we write
E_M(k,t)=ξ_M^-βϕ(ξ_M k),
where ξ_M(t)=∫ k^-1 E_M(k) dk/ℰ_M is the integral scale and
β depends on the relevant conservation law: β=2 for Saffman
scaling and β=3/2 for Hosking scaling.
This follows from the dimensions of the conserved quantity; see
<cit.> for details.
Next, we assume a certain initial subinertial range scaling, E_M ∝ k^α.
Thus, for ξ_M k≪1, we have
E_M(k,t)=ξ_M^α-β k^α .
Assuming power-law scaling, ξ_M(t)∝ t^q, we get
lim_k→0 E_M(k,t)∝ t^(α-β) q.
Thus, inverse cascading is possible for α>β, so α=2
and β=3/2 could, in principle, still give rise to inverse cascading.
Following <cit.> we have q=2/(β+3), so q=2/5 for β=2
and q=4/9 for β=3/2; see Tscaling for a comparison of
different theoretical possibilities for the various exponents.
Thus, unless I_ SM is conserved and there is therefore no inverse
cascading, we expect lim_k→0 E_M(k,t)∝ t^2/3 for cubic
scaling (E_M ∝ k^3, i.e., between Saffman and Batchelor scalings)
when the Hosking integral is conserved (β=3/2 and q=4/9).
In the following, we present numerical simulations demonstrating that
this is indeed the case.
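The exponent bookkeeping of this section can be checked with a few lines of exact arithmetic; the sketch below is ours and uses q=2/(β+3) and the scale-invariance relation p=2(1-q) quoted in this paper.

from fractions import Fraction

def decay_exponents(alpha, beta):
    """Self-similar decay: xi_M ~ t^q, E_M ~ t^-p, and small-k growth ~ t^{(alpha-beta) q}."""
    q = Fraction(2, 1) / (beta + 3)
    p = 2 * (1 - q)                  # scale-invariance line p = 2(1 - q)
    growth = (alpha - beta) * q
    return q, p, growth

# Hosking scaling (beta = 3/2) for an initial k^3 spectrum:
# decay_exponents(3, Fraction(3, 2)) -> (4/9, 10/9, 2/3)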
§ SIMULATIONS
We perform simulations in a domain of size (2π)^3, so the lowest
nonvanishing wavenumber is k≡ k_1, where k_1=1 for Runs B and C,
but 0.02 for Runs A and D.
For Run B, we assume that the initial magnetic energy spectrum peaks at
k_0=60 k_1, and therefore we consider spectral values for
k=k_1 to approximate the limit k→0.
We use N^3=2048^3 mesh points in all of our simulations, so the largest
wavenumber is 1024.
It is similar to a run of <cit.> with α=4, which here
corresponds to Run C.
We also compare with some other runs that we discuss later.
All simulations are performed with the Pencil Code <cit.>,
which solves the compressible, isothermal equations using finite
differences.
In the numerical simulations, the sound speed is always chosen to
be unity, i.e., c_ s=1.
The initial position of the spectral peak is at k=k_0 and its numerical
value is chosen to be 60 and the lowest wave number in the domain is
unity, or, when using the data of <cit.>, k_0=1 and k_1=0.02,
so that k_0/k_1=50.
The magnetic diffusivity is η k_1/c_ s=2×10^-6 in Runs B
and C, so η k_0/c_ s=1.2×10^-4.
In some runs with α=2, we also present results for larger values
of η.
The magnetic Prandtl number, i.e., the ratio of kinematic viscosity ν
to magnetic diffusivity, Pr_M=ν/η, is unity for Runs B and C.
For Runs A and D, we have η k_0/c_ s=5×10^-5 and
ν k_0/c_ s=2×10^-4, so Pr_M=4.
§.§ Inverse cascading
The results for the magnetic energy and helicity variance spectra are
shown in rspec_select_hoskM_k60del2bc_k3, which shows inverse
cascading with E_M(k_1,t)∝ t^2/3 and Sp(h)≈ const for
k→0.
The temporal increase at low k is compatible with Tscaling
for α=3, β=3/2, q=4/9, and thus (α-β) q=2/3.
Next, we compare in rspec_select_comp_k60del2bc_k3 compensated
spectra, which allow us to determine I_ SM→2π^2 Sp(B)/k^2,
if it were flat for small k (but this is not the case here), and
I_ H→2π^2 Sp(h)/k^2, which is approximately flat for small k.
The upward trend with time in the peaks of the curves in
rspec_select_comp_k60del2bc_k3ab reflects the fact that
β=3/2 in the self-similar scaling relation above, so the compensated spectrum,
E_M(k,t)/k^2=ξ_M^-βϕ(ξ_M k)/k^2=ξ_M^-β+2ϕ̃(ξ_M k),
scales with the decreasing peak wavenumber k_ peak(t)∝ξ_M^-1 like k_ peak^-1/2.
Here, ϕ̃(κ)≡ϕ(κ)/κ^2 is a compensated
version of ϕ(κ), and so the peak increases with time like
ξ_M^1/2∝ t^q/2.
The fact that the magnetic Saffman integral is not conserved is also
demonstrated by the fact that the compensated curves are not flat, but
show a bump.
For comparison with earlier work, it may still be useful to quote
approximate values of I_ SM.
Those are here based on the approximate height of the bumps; see the
dotted horizontal lines in rspec_select_comp_k60del2bc_k3ab.
In rspec_select_comp_k60del2bc_k3(d), we see that Sp(h) shows a
distinctly downward trend with k for the smallest k values.
This suggests that the conservation property of I_ H begins to
deteriorate, especially at late times, and that higher order terms begin
to play a role.
To clarify this further, more scale separation would be useful, i.e., a
run with a larger value of k_0.
Such runs at a resolution of 2048^3 mesh points are, however, rather
expensive, but it is interesting to note that, even for the case of an
initial k^4 spectrum, the compensated spectra show a similar downward
trend with k when the numerical resolution is only 1024^3; see
Figure 3(d) of <cit.>, which corresponds to our Run D.
It should also be noted that in
rspec_select_comp_k60del2bc_k3(d), the last time is
t k_1=190, while in rspec_select_comp_k60del2bc_k3c,
the last time is only t k_1=60.
The two times correspond to t η k_0^2≈1.4 and 0.4.
§.§ Universal scaling constants
Given that I_ H is reasonably well conserved and enters the
evolution of magnetic energy and correlation length, as well as the
spectral envelope of the peak, through dimensional arguments, it is
useful to determine the nondimensional coefficients in these relations.
This was done recently for the cases α=2 and α=4;
see <cit.>, who computed the coefficients C_ H^(ξ),
C_ H^(ℰ), and C_ H^(E), defined through the relations
ξ_M(t)=C_i^(ξ) I_i^1/σ t^q, ℰ_M(t)=C_i^(ℰ) I_i^2/σ t^-p, E_M(k)=C_i^(E) I_i^(3+β)/σ(k/k_0)^β,
where the index i on the integrals I_i and the coefficients
C_i^(ξ), C_i^(ℰ), and C_i^(E) stands for
SM or H for magnetic Saffman and Hosking scalings, respectively, and
σ is the exponent with which length enters in I_i: σ=5
for the magnetic Saffman integral (i= SM) and σ=9 for the
Hosking integral (i= H).
In the following, we focus on the case i= H, but refer to
<cit.> for comparisons with i= SM.
We recall that k_0 is the initial position of the spectral peak.
Note that the last expression of GeneralFits describes an envelope
under which E(k,t) evolves; see rspec_select_hoskM_k60del2bc_k3a
for an example.
In principle, the nondimensional coefficients C_ H^(ξ),
C_ H^( E), and C_ H^(E) could depend on other
quantities characterizing the system, for example the magnetic Reynolds
number, but they may also be universal, just like for the Kolmogorov
constant in the kinetic energy spectrum.
To begin assessing the degree of universality of these nondimensional
coefficients, we now consider the empirical laws ξ_M(t), ℰ_M(t),
and E_M(k,t) for the new case of α=3.
As in earlier work, the nondimensional constants in the scaling laws
for α=3 agree with those found earlier for α=4 <cit.>.
Specifically, we have
ξ_M(t)≈0.12 I_ H^1/9 t^4/9, ℰ_M(t)≈3.7 I_ H^2/9 t^-10/9, E_M(k,t)≲0.025 I_ H^1/2(k/k_0)^3/2.
The quality of these asymptotic laws can be seen from the red lines in
the last two panels of rspec_select_comp_k60del2bc_k3.
The blue lines show the case if the Saffman integral were conserved.
As explained above, those are based on the position of the bumps in
rspec_select_comp_k60del2bc_k3ab, and are therefore
only of limited use.
A comparison of the coefficients with those found by <cit.> is
given in Tcomparison.
Note that in both panels, the solid and dashed blue lines show an
asymptotic upward trend, reflecting again that the magnetic Saffman
integral is not conserved.
§.§ Normalized Hosking and Saffman integrals
The runs of <cit.> had different mean magnetic energy densities
and also the minimum wavenumber k_1 was not unity, but k_1=0.02,
unlike the present cases, where k_1=1.
Instead, the peak of the initial spectrum, k_0, was then chosen to
be unity.
To compare such different runs, it is necessary to normalize I_ SM
and I_ H appropriately.
On dimensional grounds, I_ SM is proportional to ℰ_Mξ_M^3
and I_ H is proportional to ℰ_M^2ξ_M^5.
By approximating the spectrum as a broken power-law, as in <cit.>,
E_M(k)=
E_peak(k/k_peak)^α,  k≤ k_peak,
E_peak(k/k_peak)^-s,  k> k_peak,
where s=5/3 and s=2 were used to represent the inertial range slopes
at early and late times, respectively, we find
k_peak=ξ_M^-1(α^-1+s^-1)/[(α+1)^-1+(s-1)^-1],
E_peak=ℰ_Mξ_M/(α^-1+s^-1).
For α=2, we find the following reference values for the
Saffman integral:
I_SM^ref=2π^2ℰ_Mξ_M^3×
250/99  (for s=5/3),
16/9  (for s=2).
For other values of α, the value of I_ SM^ ref is not
meaningful and only I_ H^ ref is computed for other values
of α by using equations (2.14) and (4.5) in <cit.>.
It is given by
I_H^ref=8π^2 ℰ_M^2 ξ_M^5
[((α+1)^-1+(s-1)^-1)/(α^-1+s^-1)^5/3]^3
(1/(2α-3)+1/(2s+3)).
In calculating the above expression, we assumed the magnetic field
distribution to be Gaussian and its spectrum to be of the form as given
in mag_spec.
These reference values are summarized in Tcomparison2.
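For reproducibility, the reference quantities can be evaluated with the short sketch below; the expressions follow our reading of the (partly garbled) formulas above, in particular the placement of the ℰ_M and ξ_M factors, and should be treated as illustrative.

import numpy as np

def k_peak_and_E_peak(EM_mean, xi, alpha, s):
    """Peak position and height of the broken power-law spectrum."""
    a = 1.0 / alpha + 1.0 / s
    b = 1.0 / (alpha + 1.0) + 1.0 / (s - 1.0)
    return a / (b * xi), EM_mean * xi / a

def hosking_reference(EM_mean, xi, alpha, s):
    """I_H^ref for a Gaussian field with the broken power-law spectrum above."""
    a = 1.0 / alpha + 1.0 / s
    b = 1.0 / (alpha + 1.0) + 1.0 / (s - 1.0)
    return (8.0 * np.pi**2 * EM_mean**2 * xi**5
            * (b / a**(5.0 / 3.0))**3
            * (1.0 / (2.0 * alpha - 3.0) + 1.0 / (2.0 * s + 3.0)))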
In Tcomparison, we also list the ratios I_ H/I_ H^ ref
and I_ SM/I_ SM^ ref, where I_ H^ ref∝ℰ_M^2ξ_M^5
and I_ SM^ ref∝ℰ_Mξ_M^3 are defined quantitatively in
Tcomparison2.
We have used here the actual values of α=2, 3, or 4, and s=2 in
all cases which describes the late time inertial range well;
see rspec_select_hoskM_k60del2bc_k3a.
The former ratio, I_ H/I_ H^ ref, varies only little,
because the Hosking integral is always reasonably well conserved, except
when the magnetic diffusivity is large.
Near tη k_0^2≈0.1, the ratio has for all runs a well
distinguished maximum, which is the value we quote in Tcomparison.
These values tend to be about 20% larger than those at the end of the
run, which are the reference values given in Tcomparison.
The ratio I_ SM/I_ SM^ ref, on the other hand, is not
at all conserved for Runs B–D, and the ratio is then best characterized
by a mild minimum at early times, which is the value quoted here.
It is interesting to note that I_ H/I_ H^ ref is about
twice as large on the larger mesh (Run C) than on the smaller mesh (Run D).
This is somewhat surprising.
It should be noted, however, that Run C with a larger mesh had actually
a larger magnetic diffusivity (η k_0/=7×10^-3) than Run D
(η k_0/=5×10^-5); see Tcomparison.
It is therefore possible that Run D was actually underresolved and that
this was not noticed yet.
To reexamine the idea that for α=2, I_ SM is better
conserved than I_ H, we compare their evolution for different
models in rspec_select_comp_k60del2bc_Isaff.
In addition to the two high resolution models (with 2048^3 mesh points)
with α=3 and 4, we also present the dependencies for the lower
resolution models of <cit.> (with 1024^3 mesh points)
with α=2 and 4.
We see that in all cases, I_ H is reasonably well conserved,
except when the magnetic diffusivity is large.
By contrast I_ SM is conserved only for α=2, and not
at all for any other values of α.
It is also remarkable that for α=2, I_ SM appears to be
much better conserved than I_ H for α=4 and 3.
In fact, by comparing runs for α=2 with larger magnetic
diffusivities (Runs A1 and A2), we find that I_ H declines more
rapidly (as expected), but I_ SM seems completely unaffected
by this.
This reflects mainly the fact that the magnetic field at the lowest
wave numbers is indeed unchanged.
§.§ Limitations of the Taylor expansion
Given that the small-k expansion of Sp(B) above cannot be justified for
α=3, the calculation of the magnetic Saffman integral as
I_ SM→2π^2 Sp(B)/k^2 may be problematic.
We therefore also compare with a direct calculation using the spectral
method through Sp(B), analogous to the “box-counting method” of
<cit.>; see their equation (2.9).
This corresponds here to calculating first the function
I_ SM(R)=∫ w_ sph^ BC(k;R) Sp(B) dk,
where
w_ sph^ BC(k;R)=4π R^3/3[6 j_1(kR)/(kR)]^2
is the weight function of <cit.>; see their equation (2.8).
We then obtain I_ SM as the limit of I_ SM(R)
for large values of R, but smaller than the system size L.
Here we choose R=R_*=L/3, but we note that the exact choice of this
value is not crucial.
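A minimal numerical sketch of this box-counting estimate, using the weight function as written above, is given below; it is our illustration and assumes k>0 on a sorted grid.

import numpy as np
from scipy.special import spherical_jn

def saffman_box_counting(k, sp_b, R):
    """I_SM(R) = int w_sph^BC(k; R) Sp(B) dk with the spherical weight above."""
    w = (4.0 * np.pi * R**3 / 3.0) * (6.0 * spherical_jn(1, k * R) / (k * R))**2
    return np.trapz(w * sp_b, k)

# Example usage: I_SM_estimate = saffman_box_counting(k, sp_b, R=L / 3.0)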
In rspec_select_comp_k60del2bc_Isaffb, we compare the two
methods of obtaining I_ SM.
We see that the box-counting method tends to give somewhat better
conserved estimates of I_ SM.
For completeness, we also show the evolution of I_ H from the
box-counting method; see rspec_select_comp_k60del2bc_Isaffa.
Here, both methods give virtually indistinguishable results.
§.§ Comments on non-Gaussianity
The question of non-Gaussianity is important in many aspects of cosmology.
Not all its aspects are captured by kurtosis or skewness.
In the work of <cit.>, it was already pointed out that, although
the kurtosis was only slightly below the Gaussian value of three, there
was a very strong effect on the statistics of the fourth order moments
that enter in the calculation of I_ H and Sp(h).
In Gaussianity_check, we compare Sp(h) at the initial and a later
time from the numerical calculation and the semi-analytical calculation
based on the actual magnetic energy spectra, assuming Gaussian statistics.
As in <cit.>, we find also here a ten-fold excess of the actual
spectra compared with the value expected based on the assumption of
Gaussianity.
§.§ How special is the Saffman scaling for α=2?
We now address in more detail the case α=1.7, for which
the scaling relation derived above would predict lim_k→0 E_M(k,t)∝
t^(α-β) q=t^4/45≈ t^0.09.
This run is listed in Tcomparison2 as Run O.
We have seen that, for small magnetic diffusivity,
I_ H is well conserved for all values of α;
see rspec_select_comp_k60del2bc_Isaffa.
On the other hand, I_ SM appears to be well
conserved only in the special case of α=2; see
rspec_select_comp_k60del2bc_Isaffb.
One possibility is therefore that, as long as α>2, we have
inverse cascading, but not for α≤2.
But the argument for not expecting inverse cascading relies heavily on
the existence of I_ SM and that it is non-vanishing.
If we accept that for α=3, Sp(B) cannot be expanded in terms of
k^2 and k^4, then this would also be true for α=1.7, which
is a value between 3/2 and 2.
One might therefore expect that also in this case, I_ SM would
not be conserved, and that the decay is governed by the conservation of
I_ H.
This possibility was already listed in Tscaling.
In rspec_select_hoskM_k60del2bc_k1p7a we show that
there is no noticeable growth of lim_k→0 E_M(k,t).
The inset, however, does show that there is an intermediate
phase with a very weak growth ∝ t^0.05.
Given that also the theoretically expected growth ∝ t^0.09 is
already very small, and that the degree of conservation of I_ H
is also limited, as seen in rspec_select_hoskM_k60del2bc_k1p7b,
it is indeed possible that at larger resolution and smaller magnetic
diffusivity, clearer inverse cascading might emerge.
§.§ Evolution in the pq diagram
There is a range of tools for assessing the decay properties of MHD
turbulence.
We did already discuss the determination of I_ H and I_ SM,
and the potentially universal coefficients C_ H^(ξ),
C_ H^( E), and C_ H^(E).
We also discussed the close relation between the envelope parameter
β in the compensated spectrum and envelope relations above, and the parameter q
characterizing the growth of the correlation length, ξ_M∝ t^q.
There is also the parameter p characterizing the decay of magnetic
energy, ℰ_M∝ t^-p.
Both p and q can also be determined as instantaneous
scaling parameters through p(t)=- dlnℰ_M/ dln t and
q(t)= dlnξ_M/ dln t, and their parametric representation p(t)
versus q(t) gives insights about the properties of the system and how
far it is from a self-similar evolution <cit.> and the
scale-invariance line, p=2(1-q) <cit.>.
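The instantaneous exponents can be obtained from time series of ℰ_M and ξ_M by logarithmic differentiation, e.g. with the following sketch (ours, assuming strictly positive inputs).

import numpy as np

def instantaneous_exponents(t, EM, xi):
    """p(t) = -dln(E_M)/dln(t), q(t) = dln(xi_M)/dln(t) from sampled time series."""
    lnt = np.log(t)
    p = -np.gradient(np.log(EM), lnt)
    q = np.gradient(np.log(xi), lnt)
    return p, q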
In pEMxi_pq_run_3, we show such a pq diagram for Runs B and C.
We see that the points (q,p) for different times and for both runs
cluster around (q,p)=(4/9, 10/9), as expected for Hosking scaling.
The locations for Loitsyansky and Saffman scalings, (2/7, 10/7)
and (2/5, 6/5), respectively, as well as for the fully helical
case (2/3, 2/3) are also indicated for comparison.
A detailed assessment of the full range of scaling parameters is important
for establishing the validity of Hosking scaling.
Assessments based on comparisons of the parameter p for different
runs may not be sufficient, and have led to inconclusive results;
see <cit.> for recent results.
The idea behind the Hosking phenomenology is thus not
universally accepted.
In this connection, it should be noted that additional support for the
validity of Hosking scaling came from two rather different numerical
experiments.
First, in applications to the Hall cascade, the Hosking phenomenology
predicts the scalings q=4/13 and p=10/13, which was confirmed
by simulations <cit.>.
Second, in relativistic plasmas where the mean magnetic helicity
density is finite, but the total chirality vanishes because the helicity
is exactly balanced by the fermion chirality, the Hosking phenomenology
predicts a decay of mean magnetic helicity ∝ t^-2/3, which,
again, was confirmed by simulations <cit.>.
§ CONCLUSIONS
Our work has shown that the decay dynamics of an initial magnetic field
with power law scaling proportional to k^3 is similar to that
for k^4, but very different from that for k^2.
Even the case α=1.7 may be different from α=2.
This suggests that the case of an initial k^2 spectrum may be singular.
At the same time, it underlines the importance of the Hosking integral
in determining the decay dynamics for a large class of initial magnetic
energy spectra.
We also confirmed that the nondimensional coefficients in the
empirical scaling relations for ξ_M(t), ℰ_M(t), and E_M(k,t)
are compatible with those found earlier for an initial k^4 subinertial
range spectrum.
According to a simple argument involving self-similarity, we showed
and confirmed that the temporal growth of the magnetic energy spectra
at small k is proportional to t^2/3, while for α=4, we
have t^10/9.
At the moment, even with a resolution of 2048^3 mesh points, we
cannot make very firm statements about the case α=1.7, because
I_ H is not sufficiently well conserved and the value of α
is close to 3/2.
It would be useful to reconsider also the case α=2 with a more
accurate analysis to see whether even here one could find violation of
conservation of the magnetic Saffman integral, and thus weak inverse
cascading ∝ t^0.2.
§ ACKNOWLEDGEMENTS
We are grateful to
Antonino Midiri,
Alberto Roper Pol, and
Kandaswamy Subramanian
for encouraging discussions.
§ FUNDING
A.B. and R.S. were supported in part by the Swedish Research Council
(Vetenskapsrådet, 2019-04234); Nordita is sponsored by Nordforsk.
T.V. was supported by the U.S. Department of Energy, Office of High
Energy Physics, under Award No. DE-SC0019470.
We acknowledge the allocation of computing resources provided by the
Swedish National Allocations Committee at the Center for Parallel
Computers at the Royal Institute of Technology in Stockholm and
Linköping.
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available
on Zenodo at doi:10.5281/zenodo.8128611 (v2023.07.09).
All calculations have been performed with the Pencil Code
<cit.>; DOI:10.5281/zenodo.3961647.
§ AUTHOR'S ORCIDS
A. Brandenburg, https://orcid.org/0000-0002-7304-021X
R. Sharma, https://orcid.org/0000-0002-2549-6861
T. Vachaspati, https://orcid.org/0000-0002-3017-9422
|
http://arxiv.org/abs/2307.04791v1 | 20230710180004 | A self-averaging spectral form factor implies unitarity breaking | [
"Apollonas S. Matsoukas-Roubeas",
"Mathieu Beau",
"Lea F. Santos",
"Adolfo del Campo"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech",
"hep-th",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.05614v1 | 20230710235845 | Impact of Feature Encoding on Malware Classification Explainability | [
"Elyes Manai",
"Mohamed Mejri",
"Jaouhar Fattahi"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Impact of Feature Encoding on Malware Classification Explainability
Elyes Manai
Department of Computer Science
& Software Engineering
Laval University, Quebec, Canada.
[email protected]
Mohamed Mejri
Department of Computer Science
& Software Engineering
Laval University, Quebec, Canada.
[email protected]
Jaouhar Fattahi
Department of Computer Science
& Software Engineering
Laval University, Quebec, Canada.
[email protected]
October 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper investigates the impact of feature encoding techniques on the explainability of XAI (Explainable Artificial Intelligence) algorithms. Using a malware classification dataset, we trained an XGBoost model and compared the performance of two feature encoding methods: Label Encoding (LE) and One Hot Encoding (OHE). Our findings reveal a marginal performance loss when using OHE instead of LE. However, the more detailed explanations provided by OHE compensated for this loss. We observed that OHE enables deeper exploration of details in both global and local contexts, facilitating more comprehensive answers. Additionally, we observed that using OHE resulted in smaller explanation files and reduced analysis time for human analysts. These findings emphasize the significance of considering feature encoding techniques in XAI research and suggest potential for further exploration by incorporating additional encoding methods and innovative visualization approaches.
Explainability, XAI, feature encoding, malware classification, preprocessing, LE, OHE, XGBoost.
§ INTRODUCTION
Machine learning has witnessed remarkable advancements in recent years, enabling the development of sophisticated models that achieve impressive performance on various tasks. As these tasks and the data they are trained on become more complex, so do the models. This often makes the decision-making process opaque, and it becomes difficult to understand the reasons behind a model's predictions. In a society that uses AI for an ever-growing number of use cases, that lack of understanding can pose serious risks to users. Averting these risks and gaining more control over what our AI is doing, thus enabling more responsible AI, is the goal of the Explainable Artificial Intelligence (XAI) subfield. This subdomain of AI focuses on making black-box models transparent by providing understandable explanations for their decisions. XAI thus lets us combine the powerful pattern-recognition capabilities of AI with human-readable explanations that humans can instinctively understand and explain.
The algorithms used in XAI usually work by finding out which parts of the input, and of the model weights, most affect the model's predictions. The end result is a summary of each feature's contribution to the model. How helpful these summaries are, which we can call the quality of the generated explanations, depends on several parameters such as the chosen algorithm, the model architecture, and the data preprocessing technique. This last parameter, however, has received far less attention than the others. While most XAI research focuses on algorithms, use cases, and the quality of the generated explanations, there is a lack of research on the impact of preprocessing on those explanations. We think that the preprocessing technique, and more specifically the feature encoding step of the preprocessing pipeline, has a sizable impact on explanation quality and deserves more exploration. Since XAI methods summarize feature contributions, the way we encode our features directly affects the understandability of the generated explanations. At the same time, preprocessing directly affects model performance, so care must be taken not to trade off too much performance for better explanations: better explanations of an imprecise model are not useful. Nonetheless, we think that a minor performance loss for a major boost in explainability is worth it, as it also opens the door to better model and data understanding, bias discovery, robustness tests, and overall higher quality assurance. This is especially important in critical industries such as medicine, finance, and cyber security.
To showcase the added value of our idea in a real use case, we apply machine learning and explainability to a common problem in cyber security: malware classification. It is one of the most common tasks to which machine learning is applied in modern antiviruses and intrusion detection systems. We train a model on a publicly available malware dataset, apply an XAI algorithm, switch the preprocessing technique, and compare the generated explanations. We show that new rules and pain points can be detected and further explored by simply changing the preprocessing technique. To the best of our knowledge, no prior study in the XAI literature has specifically addressed the direct impact of preprocessing on explanation quality.
Our review of the literature revealed that research in XAI is geared more towards XAI algorithms <cit.>, the generated explanations <cit.>, alternative ways to bake explainability into the input features <cit.>, and other related problems <cit.>. Our focus in this paper can be summarized as follows: given that XAI algorithms use the input features as the key components of the generated explanations, it is safe to assume that the type of feature encoding used will directly affect the clarity of those explanations. The more explicit the feature, the more detailed the explanation we get should be. With that in mind, we study two main questions in this paper:
* Does feature encoding affect explainability?
* If yes, what encoding yields better explainability and why?
§ CONCEPTS
§.§ Machine Learning Modeling
Machine learning modeling refers to the process of creating a predictive mathematical model using machine learning algorithms. It involves training a model on a labeled dataset, evaluating its performance, and using it to make predictions or gain insights from new, unseen data. Here is a brief overview of the steps involved in machine learning modeling:
* Data Preparation: collecting and preparing the dataset for modeling. It includes tasks such as data cleaning, handling missing values, dealing with outliers, and feature engineering (creating new features from existing ones).
* Splitting the Dataset: splitting it into two or more subsets: a training set, a validation set, and a test set. The training set is used to train the model, the validation set is used for hyperparameter tuning and model selection, and the test set is used to evaluate the final model's performance.
* Model Selection: Choosing the most adequate model(s) among various machine learning algorithms such as linear regression, decision trees, random forests, support vector machines, neural networks, etc. The choice of model depends on the type of problem, the nature of the data, and the desired outcome.
* Model Training: training the model on the training dataset by fitting it to the input features and the corresponding target variables. During the training process, the model learns the underlying patterns and relationships in the data.
* Hyperparameter Tuning: selecting the optimal values for parameters to improve the model's performance. Techniques like grid search, random search, or Bayesian optimization can be used for hyperparameter tuning.
* Model Evaluation: Assessing the performance of the trained model through evaluation metrics such as accuracy, precision, recall, F1 score etc. The model is evaluated on the validation set to understand its generalization capabilities and fine-tune any parameters if needed.
Machine learning modeling is an iterative process, and it often involves experimenting with different algorithms, feature engineering techniques, and hyperparameter settings to find the best-performing model for a given problem.
§.§ Feature Encoding
Feature encoding, also known as feature transformation or feature representation, is a crucial step in data preprocessing where categorical or textual features are converted into numerical representations that can be effectively used by machine learning algorithms. This is a mandatory step, as ML algorithms only deal with numerical features, and the choice of encoding technique directly impacts model performance. Here are some common feature encoding techniques (a short code sketch contrasting the first two follows the list):
* One-Hot Encoding: Each category within a categorical feature is represented by a binary feature. If a feature has n categories, it is encoded into n binary features, where only one feature is active (1) for a particular category, and the rest are inactive (0). One-hot encoding is useful when there is no inherent order or relationship among the categories.
* Label Encoding: Label encoding assigns a unique numerical label to each category within a categorical feature. Each category is represented by a distinct integer value. Label encoding is suitable when the categories have an ordinal relationship or when using algorithms that can directly work with integer inputs.
* Ordinal Encoding: Similar to label encoding, ordinal encoding assigns numerical labels to categories. However, ordinal encoding takes into account the order or rank of the categories and assigns values accordingly. For example, "low," "medium," and "high" could be encoded as 1, 2, and 3, respectively.
* Binary Encoding: Binary encoding represents categories as binary bit patterns. Each category is assigned a unique binary code, and each bit in the code represents the presence or absence of a category. Binary encoding can be efficient for high-cardinality categorical features and reduces the dimensionality compared to one-hot encoding.
* Embedding: Embedding techniques are commonly used for encoding textual or high-dimensional categorical features. Embeddings are dense, low-dimensional representations that capture semantic relationships between categories. Embeddings are learned using techniques like Word2Vec <cit.> or categorical embedding layers in deep learning models <cit.>.
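To make the contrast between the first two techniques concrete, the short sketch below encodes a single toy column both ways with pandas and scikit-learn. The column name is borrowed from the dataset's feature names, but the values shown are invented purely for illustration.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    # Toy categorical column; the values are invented for illustration only.
    df = pd.DataFrame({"MajorSubsystemVersion": [4, 5, 5, 6, 4]})

    # Label Encoding: a single integer column, one label per distinct value.
    le = LabelEncoder()
    df["MajorSubsystemVersion_le"] = le.fit_transform(df["MajorSubsystemVersion"])

    # One Hot Encoding: one binary column per distinct value.
    ohe_cols = pd.get_dummies(df["MajorSubsystemVersion"], prefix="MajorSubsystemVersion")
    print(pd.concat([df, ohe_cols], axis=1))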
§.§ Explainability
Explainability in the context of machine learning<cit.> refers to the ability to understand and interpret the decisions or predictions made by a machine learning model. It involves gaining insights into how and why a model arrives at a particular output, providing transparency and comprehensibility to the decision-making process. There are various approaches to achieving explainability:
* Model-Agnostic Approaches: These methods aim to explain any black-box machine learning model without relying on its internal structure. They involve techniques like feature importance analysis, partial dependence plots<cit.>, and surrogate models, which provide insights into the relationship between input features and model predictions.
* Rule-Based Approaches: These approaches aim to generate human-readable rules that describe the decision-making process of the model. Rule-based models, such as decision trees or rule lists, can provide explicit if-then statements that explain how specific features influence predictions.
* Interpretable Model Architectures: Some machine learning models, such as linear regression, logistic regression, or decision trees, inherently provide interpretable explanations. Their simplicity and transparency allow users to understand the impact of each feature on the final prediction.
* Local Explanations: Local explanation methods focus on explaining individual predictions rather than the model as a whole. Techniques like LIME<cit.> (Local Interpretable Model-Agnostic Explanations) or SHAP<cit.> (SHapley Additive exPlanations) provide insights into which features contributed the most to a particular prediction.
* Visualizations: Visualizations play a significant role in explaining complex models and high-dimensional data. Techniques like heatmaps, bar plots, scatter plots, or saliency maps help in visualizing feature importance, decision boundaries, or highlighting influential regions in the data.
§.§ Malware Detection
To demonstrate our work, we will take the common task of detecting malware. Malware are malicious pieces of software that are designed to infiltrate and damage information systems without the users' consent <cit.>. The term malware covers a lot of categories such as viruses, ransomware, worms, trojans, backdoors, spyware, keyloggers, adware, bots, and rootkits. Malware analysts have to discover exactly what happened to a system and make sure that the machines damaged by malicious software are isolated from the organization's network. The analysis done to single out the suspicious parts of the software can sometimes take a group of analysts and several hours or even days. Since undetected malware can have devastating consequences on any organization, malware detection has been deemed one of the most important tasks in cybersecurity. Several types of systems have been built to detect and capture malware such as Intrusion detection systems, antiviruses and firewalls, and these systems keep getting smarter thanks to the combined shared knowledge of the cyber security community and the rapid advancement of technology. Current Malware detection systems use Machine Learning and Deep Learning to detect anomalies in files and network packets to protect the systems they're installed on. Since Machine learning has been known for its fantastic classification capabilities, more and more complex architectures and models are being tested and deployed to the current market.
§ IMPLEMENTATION AND EXPERIMENTAL RESULTS
§.§ Dataset
For this project, we use a malware classification dataset from the 2015 Microsoft Malware Classification Challenge<cit.>. The public variant we obtained contains 19,611 rows and 78 features, each row representing a single file. The dataset is imbalanced: it contains 14,599 malware files and 5,012 benign files, i.e., roughly three times as much malware.
The dataset has no missing data and all features are numerical aside from the "Name" one.
§.§ Preprocessing
The "Name" feature was modified by the competition organizers to include the word "virus" whenever the file is malware; since it does not represent real-life data, it has to be removed. We apply no other preprocessing to the data aside from feature encoding. In this work, we apply two encoding techniques to all the features:
* Label Encoding: Each feature value is represented by a unique integer.
* One Hot Encoding: Each feature value becomes a separate binary column where 1 means the file's value of that feature is the column name, and 0 if not. This allows for more precise knowledge of what went wrong.
§.§ Machine Learning Modeling
For training, we choose XGBoost<cit.> as our base model and train it using its default parameters, namely 100 estimators, a max depth of 5, and a learning rate of 0.1. We use the free Google Colab coding environment, which offers a single server with 12.7 GB of RAM and a single NVIDIA T4 GPU with 15 GB of GPU RAM.
To evaluate our model, We use four popular metrics: Accuracy, Precision, Recall and F1.
In a nutshell, accuracy measures the overall correctness of the model's predictions by calculating the proportion of correctly classified instances out of the total number of instances. Precision quantifies the proportion of true positive predictions out of all positive predictions made by the model, indicating the model's ability to correctly identify positive instances and minimize false positives. Recall measures the proportion of true positive predictions out of all actual positive instances in the dataset, representing the model's ability to capture positive instances and minimize false negatives. Finally, the F1 score combines precision and recall into a single metric by taking their harmonic mean, providing a balanced assessment of the model's accuracy and considering both false positives and false negatives. We showcase the performance results of XGBoost on the label encoded dataset in Table <ref>.
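As a rough sketch of this training and evaluation step, the snippet below fits an XGBoost classifier with the parameter values listed above and reports the four metrics. The data frame here is a randomly generated stand-in for the encoded malware dataset, so the column names, values, and resulting scores are placeholders, not the numbers reported in the tables.

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
    from xgboost import XGBClassifier

    # Placeholder stand-in for the encoded malware dataset (invented values).
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(0, 10, size=(1000, 6)),
                      columns=[f"feat_{i}" for i in range(6)])
    df["label"] = rng.integers(0, 2, size=1000)  # 1 = malware, 0 = benign

    X, y = df.drop(columns=["label"]), df["label"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    model = XGBClassifier(n_estimators=100, max_depth=5, learning_rate=0.1)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print("Accuracy :", accuracy_score(y_test, y_pred))
    print("Precision:", precision_score(y_test, y_pred))
    print("Recall   :", recall_score(y_test, y_pred))
    print("F1       :", f1_score(y_test, y_pred))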
Although we applied no preprocessing other than the feature encoding, we obtained good results, so we can move directly to the explainability part.
§.§ Explainability
We first remove the least useful features, because one hot encoding all 77 features created 85,102 features, which kept crashing our environment due to insufficient RAM. To do so, we use XGBoost's built-in feature importance function to list each feature's impact on the model's decision making. In Table <ref>, we extract the top 10 influential features and sort them from most to least important.
According to Table <ref>, the combined score of the 10 most important features is 0.9381, which means that they represent 93.81% of the model's decision-making power. We can therefore keep only these 10 features and drop the rest, as sketched below.
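This selection step can be sketched as follows, continuing the hypothetical model, X_train, and y_train objects from the previous snippet; the actual feature names and scores are those reported in Table <ref>, not the ones this toy code would print.

    import pandas as pd
    from xgboost import XGBClassifier

    # Rank features by XGBoost's built-in importance and keep the top 10.
    importances = pd.Series(model.feature_importances_, index=X_train.columns)
    top10 = importances.sort_values(ascending=False).head(10)
    print(top10)
    print("combined importance:", top10.sum())

    # Retrain on the reduced feature set only.
    model_top10 = XGBClassifier(n_estimators=100, max_depth=5, learning_rate=0.1)
    model_top10.fit(X_train[top10.index], y_train)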
Doing so, we get the results shown in Table <ref>.
Comparing the results shown in Table <ref> to those in Table <ref> shows that although we lose a bit of performance, the drop is marginal (less than 1%). This means that if One Hot Encoding does provide more explainability power, it would be recommended to use it.
In the next part, we use a dedicated explainability algorithm called SHapley Additive exPlanations (SHAP) to dig deeper into the model's inner reasoning.
§.§.§ The SHAP algorithm
SHAP<cit.> was introduced in 2017 and provides a unified way of explaining the contribution of each input feature to the final prediction of the model, based on calculated values called Shapley values. A Shapley value is a measure of the marginal contribution of a feature to the prediction, averaged over all possible combinations of features in the dataset. To calculate the Shapley values for a particular prediction, SHAP applies a game-theoretic approach based on the concept of cooperative games. It considers each feature value as a "player" in the game and computes the contribution of each player to the final prediction. It then calculates the average contribution of each player across all possible coalitions of players, weighting each coalition by its probability of occurrence. This approach results in a set of Shapley values, which represent the relative importance of each feature to the prediction for a specific instance. These Shapley values can be used to generate an explanation for the prediction, showing which features had the greatest impact and how they affected the final outcome. The formula SHAP uses to compute the Shapley values is given in Equation <ref>.
ϕ_i(x) = ∑_S ⊆ N ∖{i}|S|!(|N|-|S|-1)!/|N|![f(x_S ∪{x_i}) - f(x_S)]
Once generated, SHAP uses these values to display plots for both global explanations and local explanations.
§.§.§ Global feature importance
We use the SHAP algorithm to generate global summary plots that highlight the importance of each feature in the model's decision-making, similarly to what we did in Table <ref>. Figures <ref> and <ref> display the importance plots for the Label Encoded dataset and the One Hot Encoded dataset, respectively; a sketch of how such summary plots can be generated is shown below.
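The sketch below reuses the hypothetical model and data from the earlier snippets; shap.TreeExplainer is the tree-specific (and fast) explainer appropriate for XGBoost models.

    import shap

    # Shapley values for every instance and feature of the (reduced) training set.
    explainer = shap.TreeExplainer(model_top10)
    shap_values = explainer.shap_values(X_train[top10.index])

    # Global importance: mean |SHAP value| per feature, shown as a bar plot.
    shap.summary_plot(shap_values, X_train[top10.index], plot_type="bar")

    # The default beeswarm variant additionally shows the direction of each effect.
    shap.summary_plot(shap_values, X_train[top10.index])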
§ DISCUSSION
The main difference between these plots is that while Label Encoding tells us which feature is more important, One Hot Encoding tells us which exact value of that feature is more important. This means we get more specificity, since a feature's importance is the sum of the importances of its unique values. A top-ranking feature in the Label Encoding model could therefore have reached its rank because of the importance of some of its values, but not the others. Using One Hot Encoding, we can single out exactly which values are the most relevant to analyze further. For example, the "MinorOperatingSystemVersion" feature in <ref> has a mean SHAP value of almost 0.6, ranking fifth. However, in <ref>, we can see that it is actually version 3 of this feature that is really impactful, ranking first with a mean SHAP value of more than 1.2. Yet version 1 of this feature only has a score of almost 0.2, and the remaining versions are not in the top 10 features at all. Using One Hot Encoding, we can therefore single out files with version 3 of "MinorOperatingSystemVersion" and analyze them separately, in the hope of creating an easy rule for them or learning something more.
One drawback of this plot is that it is not easy to read when we have hundreds or thousands of features; in this example, we have 16,087. It would be unproductive to use this plot to study feature importance. Instead, we can extract the raw SHAP values of all one-hot-encoded features, group them by original feature, and plot them side by side in another plot. We propose the plots in Figures <ref> and <ref>, where we plot the importance of the different values of the "MajorSubsystemVersion" feature side by side, horizontally and vertically respectively. We chose this feature instead of the top-ranking "MinorOperatingSystemVersion" feature because it has considerably fewer distinct values, making it easier to plot, wasting less space, and delivering the same message. These figures let us visually grasp the relative importance of the different values of a feature. This way, we can add values to or remove them from a watchlist, and also construct rules for particular values. We can then combine this with the model's confidence score at inference time to start a routine, run a check, or apply a rule whenever the score does not reach a certainty threshold. At that point, we would start investigating individual instances, which requires a different kind of explanation: local explanations.
§.§.§ local feature importance
Local explanations focus on individual instances, showing the user the step-by-step contribution of each feature to the model's decision. Using SHAP's local explanation plots, we obtain Figures <ref> and <ref>, which display the local explanations of instances 2 and 3 respectively, first using Label Encoding and then One Hot Encoding.
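A local explanation of this kind can be sketched with SHAP's waterfall plot, assuming a reasonably recent SHAP release and the hypothetical objects from the previous snippets.

    import shap

    explainer = shap.TreeExplainer(model_top10)
    explanation = explainer(X_train[top10.index])   # Explanation object with base values

    # Step-by-step contribution of each feature value for instance 2.
    shap.plots.waterfall(explanation[2])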
Again, the added refinement of the exact feature value gives us much more insight into what pushed the model towards a certain classification. Although one hot encoding may seem redundant here, since we already know which value of each feature the instance holds, it can instead be used as an assertion method to make sure there are no anomalies in the decision shifting. Finally, we can see that being trained on the individual values changes the base value and the decision-shift intensity of each feature: the model has been trained on more fine-grained data and had the chance to learn combinations of values that go together. In a tree-based model such as XGBoost, these combinations can then be extracted and used as normal conditional IF rules, or analyzed to detect vulnerabilities that went under the radar. Even then, the feature encoding will have an impact on the generated rules.
§.§.§ IF-Rules
IF-Rules are logical statements that express conditional relationships between input variables and output decisions and follow a simple structure: IF a specific condition or set of conditions is satisfied, THEN a particular action or decision should
be taken. The conditions and actions are typically expressed using logical operators, such as "AND," "OR," and "NOT." IF rules provide a transparent and interpretable way to encode domain knowledge and decision-making criteria into a system. Due to their nature, tree-based models can be seen as a
collection of IF rules combined to form a decision-making process. Each node in a decision tree represents an IF statement on a specific feature or attribute, and the tree structure guides the flow of decision-making based on these conditions. The splitting criteria at each node determine the conditions for branching into different paths, leading to subsequent nodes or leaves with specific outcomes or predictions. Since XGBoost is a tree-based model, we can extract the IF-Rules it learned during the training phase and use them to build logical pipelines or to study them. Examples of the IF-Rules learned by our XGBoost model can be seen in Figures <ref> and <ref> for Label Encoding and One Hot Encoding respectively; a sketch of how such rules can be extracted follows.
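The rules themselves can be pulled out of the fitted booster as plain text, which is also one way to reproduce the rule-length and file-size comparison discussed next. This sketch again continues the hypothetical model_top10 object from the earlier snippets.

    # Dump every learned tree of the XGBoost model as a textual set of IF-rules.
    booster = model_top10.get_booster()
    tree_dumps = booster.get_dump()                 # one plain-text dump per tree

    print(tree_dumps[0])                            # nested if/else splits of the first tree
    total_chars = sum(len(t) for t in tree_dumps)
    print("total rule text length (characters):", total_chars)

    with open("rules.txt", "w") as f:               # file size comparable across encodings
        f.write("\n".join(tree_dumps))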
While there is no apparent difference between the IF-Rules of the two encoding techniques, the difference lies in the metadata. Table <ref> reports the difference in the rules' total text length in number of characters, as well as the explanation file size in KB. One Hot Encoding resulted in fewer characters and hence a smaller file size. The indirect consequences are reduced analysis time, lower system complexity, and less ambiguity, all of which directly benefit analysts and systems.
§ CONCLUSION
In this paper, we studied the impact of feature encoding on the explainability of XAI algorithms. We took a malware classification dataset as an example and trained an XGBoost model on it. We tried two different types of feature encoding, Label Encoding and One Hot Encoding, and found that using OHE instead of LE causes only a marginal performance loss. That loss is compensated for by the more detailed explanations that OHE makes possible. We found that OHE allows us to go deeper into the details when searching for answers, both globally and locally. We also found that using OHE yields smaller explanation files and reduces the time human analysts spend on analysis. We think this is an interesting aspect to take into consideration when working with XAI, and it could be expanded by including more feature encoding techniques and more creative plots.
§ NOTICE
2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
|
http://arxiv.org/abs/2307.04110v1 | 20230709065359 | Learning Space-Time Continuous Neural PDEs from Partially Observed States | [
"Valerii Iakovlev",
"Markus Heinonen",
"Harri Lähdesmäki"
] | cs.LG | [
"cs.LG"
] |
Learning Space-Time Continuous Neural PDEs from Partially Observed States
Valerii Iakovlev
Markus Heinonen
Harri Lähdesmäki
===========================================================================
We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids. We propose a space-time continuous latent neural PDE model with an efficient probabilistic framework and a novel encoder design for improved data efficiency and grid independence. The latent state dynamics are governed by a PDE model that combines the collocation method and the method of lines. We employ amortized variational inference for approximate posterior estimation and utilize a multiple shooting technique for enhanced training speed and stability. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, overcoming limitations of previous approaches and effectively handling partially-observed data. The proposed model outperforms recent methods, showing its potential to advance data-driven PDE modeling and enabling robust, grid-independent modeling of complex partially-observed dynamic processes.
§ INTRODUCTION
All source code and datasets will be made publicly available after review.
Modeling spatiotemporal processes allows to understand and predict the behavior of complex systems that evolve over time and space <cit.>. Partial differential equations (PDEs) are a popular tool for this task as they have a solid mathematical foundation <cit.> and can describe the dynamics of a wide range of physical, biological, and social phenomena <cit.>. However, deriving PDEs can be challenging, especially when the system's underlying mechanisms are complex and not well understood. Data-driven methods can bypass these challenges <cit.>. By learning the underlying system dynamics directly from data, we can develop accurate PDE models that capture the essential features of the system. This approach has changed our ability to model complex systems and make predictions about their behavior in a data-driven manner.
While current data-driven PDE models have been successful at modeling complex spatiotemporal phenomena, they often operate under various simplifying assumptions such as regularity of the spatial or temporal grids <cit.>, discreteness in space or time <cit.>, and availability of complete and noiseless observations <cit.>. Such assumptions become increasingly limiting in more realistic scenarios with scarce data and irregularly spaced, noisy and partial observations.
We address the limitations of existing methods and propose a space-time continuous and grid-independent model that can learn PDE dynamics from noisy and partial observations made on irregular spatiotemporal grids. Our main contributions include:
* Development of an efficient generative modeling framework for learning latent neural PDE models from noisy and partially-observed data;
* Novel PDE model that merges two PDE solution techniques – the collocation method and the method of lines – to achieve space-time continuity, grid-independence, and data efficiency;
* Novel encoder design that operates on local spatiotemporal neighborhoods for improved data-efficiency and grid-independence.
Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, opening up the possibility for accurate and efficient modeling of complex dynamic processes and promoting further advancements in data-driven PDE modeling.
§ PROBLEM SETUP
In this work we are concerned with modeling of spatiotemporal processes. For brevity, we present our method for a single observed trajectory, but extension to multiple trajectories is straightforward. We observe a spatiotemporal dynamical system evolving over time on a spatial domain Ω. The observations are made at M arbitrary consecutive time points t_1:M := (t_1, …, t_M) and N arbitrary observation locations x_1:N := (x_1, …, x_N), where x_i ∈ Ω. This generates a sequence of observations u_1:M := (u_1, …, u_M), where u_i ∈ ℝ^N × D contains D-dimensional observations at the N observation locations. We define u_i^j as the observation at time t_i and location x_j. The number of time points and observation locations may vary between different observed trajectories.
We assume the data is generated by a dynamical system with a latent state z(t, x) ∈ ℝ^d, where t is time and x ∈ Ω is the spatial location. The latent state is governed by an unknown PDE and is mapped to the observed state u(t, x) ∈ ℝ^D by an unknown observation function g and likelihood model p:
∂ z(t, x)/∂ t = F(z(t,x), ∂_x z(t,x), ∂^2_x z(t,x), …),
u(t, x) ∼ p(g(z(t, x))),
where ∂^∙_x z(t,x) denotes partial derivatives of z wrt x.
In this work we make two assumptions that are highly relevant in real-world scenarios. First, we assume partial observations, that is, the observed state u(t,x) does not contain all information about the latent state z(t,x) (e.g., z(t,x) contains pressure and velocity, but u(t,x) contains information only about the pressure). Second, we assume out-of-distribution time points and observation locations, that is, their number, positions, and density can change arbitrarily at test time.
§ MODEL
Figure: Model sketch. The initial latent state z(t_1, x) is evolved via F_θ_dyn to the following latent states, which are then mapped to the observed states by g_θ_dec.
Here we describe the model components (Sec. <ref>) which are then used to construct the generative model (Sec. <ref>).
§.§ Model components
Our model consists of four parts: a space-time continuous latent state z(t, x) ∈ ℝ^d and observed state u(t, x) ∈ ℝ^D, a dynamics function F_θ_dyn governing the temporal evolution of the latent state, and an observation function g_θ_dec mapping the latent state to the observed state (see Figure <ref>). Next, we describe these components in detail.
Latent state.
To define a space-time continuous latent state z(t, x) ∈ ℝ^d, we introduce z(t) := (z^1(t), …, z^N(t)) ∈ ℝ^N × d, where each z^i(t) ∈ ℝ^d corresponds to the observation location x_i.
Then, we define the latent state z(t, x) as a spatial interpolant of z(t):
z(t, x) := Interpolate(z(t))(x),
where Interpolate(·) maps z(t) to an interpolant which can be evaluated at any spatial location x ∈ Ω (see Figure <ref>). We do not rely on a particular interpolation method, but in this work we use linear interpolation as it shows good performance and facilitates efficient implementation.
Latent state dynamics.
Figure: Latent state z(t, x) defined as an interpolant of z(t) := (z^1(t), ..., z^4(t)).
Given a space-time continuous latent state, one can naturally define its dynamics in terms of a PDE:
∂ z(t, x)/∂ t = F_θ_dyn(z(t,x), ∂_x z(t,x), ∂^2_x z(t,x), …),
where F_θ_dyn is a dynamics function with parameters θ_dyn. This is a viable approach known as the collocation method <cit.>, but it has several limitations. It requires us to decide which partial derivatives to include in the dynamics function, and also requires an interpolant which has all the selected partial derivatives (e.g., a linear interpolant has only first order derivatives). To avoid these limitations, we combine the collocation method with another PDE solution technique known as the method of lines <cit.>, which approximates spatial derivatives ∂^∙_x z(t,x) using only evaluations of z(t,x), and then let the dynamics function approximate all required derivatives in a data-driven manner. To do that, we define the spatial neighborhood of x as 𝒩_S(x), which is a set containing x and its spatial neighbors, and also define z(t, 𝒩_S(x)), which is a set of evaluations of the interpolant z(t, x) at points in 𝒩_S(x):
𝒩_S(x) := {x' ∈ Ω : x' = x or x' is a spatial neighbor of x},
z(t, 𝒩_S(x)) := {z(t, x') : x' ∈ 𝒩_S(x)},
and assume that this information is sufficient to approximate all required spatial derivatives at x. This is a reasonable assumption since, e.g., finite differences can approximate derivatives using only function values and locations of the evaluation points. Hence, we define the dynamics of z(t, x) as
∂ z(t, x)/∂ t = F_θ_dyn(𝒩_S(x), z(t, 𝒩_S(x))),
which is defined only in terms of the values of the latent state, but not its spatial derivatives.
Figure: Example of 𝒩_S(x_i). Instead of using the observation locations (dots) to define spatial neighbors, we use spatial locations arranged in a fixed predefined pattern (crosses).
One way to define the spatial neighbors for x is in terms of the observation locations x_1:N (e.g., use the nearest ones) as was done, for example, in <cit.>. Instead, we utilize continuity of the latent state z(t, x), and define the spatial neighbors in a grid-independent manner as a fixed number of points arranged in a predefined pattern around x (see Figure <ref>). This allows us to fix the shape and size of the spatial neighborhoods in advance, making them independent of the observation locations. In this work we use the spatial neighborhood consisting of two concentric circles of radius r and r/2; each circle contains 8 evaluation points as in Figure <ref>. In Appendix <ref> we compare neighborhoods of various shapes and sizes.
Equation <ref> allows us to simulate the temporal evolution of z(t, x) at any spatial location. However, since z(t, x) is defined only in terms of a spatial interpolant of z(t) (see Eq. <ref>), with z^i(t) = z(t, x_i), it is sufficient to simulate the latent state dynamics only at the observation locations x_1:N. Hence, we can completely characterize the latent state dynamics in terms of a system of N ODEs:
dz(t)/dt := [ dz^1(t)/dt; ⋮; dz^N(t)/dt ] = [ ∂z(t, x_1)/∂t; ⋮; ∂z(t, x_N)/∂t ] = [ F_θ_dyn(𝒩_S(x_1), z(t, 𝒩_S(x_1))); ⋮; F_θ_dyn(𝒩_S(x_N), z(t, 𝒩_S(x_N))) ].
For convenience, we define z(t; t_1, z_1, θ_dyn) := ODESolve(t; t_1, z_1, θ_dyn) as the solution of the ODE system in Equation <ref> at time t with initial state z(t_1) = z_1 and parameters θ_dyn. We also define z(t, x; t_1, z_1, θ_dyn) as the spatial interpolant of z(t; t_1, z_1, θ_dyn) as in Equation <ref>. We solve the ODEs using off-the-shelf differentiable ODE solvers from the torchdiffeq package <cit.>. Note that we solve for the state z(t) only at the observation locations x_1:N, so to get the neighborhood values z(t, 𝒩_S(x_i)) we perform interpolation at every step of the ODE solver.
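A minimal sketch of how such a node-wise ODE system can be integrated with torchdiffeq is given below. The interpolation indices and weights as well as the MLP dynamics are placeholders standing in for the linear interpolant and F_θ_dyn described above; only the solver settings (dopri5, rtol=1e-3, atol=1e-4) mirror those reported in the experiments section.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint

    N, d, K = 100, 3, 16          # grid nodes, latent dim, neighborhood size (assumed)
    nbr_idx = torch.randint(0, N, (N, K, 3))        # 3 supporting nodes per neighbor point (toy)
    nbr_w = torch.softmax(torch.rand(N, K, 3), -1)  # barycentric-style weights (toy)

    class Dynamics(nn.Module):
        def __init__(self):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(K * d, 64), nn.Tanh(), nn.Linear(64, d))

        def forward(self, t, z):                      # z: (N, d), latent state at the grid nodes
            # Interpolate z at the K neighborhood points of every node.
            z_nbr = (z[nbr_idx] * nbr_w.unsqueeze(-1)).sum(dim=2)   # (N, K, d)
            return self.f(z_nbr.reshape(N, K * d))                  # dz/dt at each node

    z0 = torch.randn(N, d)                            # initial latent state z(t_1)
    t = torch.linspace(0.0, 1.0, 21)
    z_traj = odeint(Dynamics(), z0, t, method="dopri5", rtol=1e-3, atol=1e-4)
    print(z_traj.shape)                               # (21, N, d)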
Observation function.
We define the mapping from the latent space to the observation space as a parametric function g_θ_dec with parameters θ_dec:
u(t, x) ∼ 𝒩(g_θ_dec(z(t, x)), σ_u^2 I_D),
where 𝒩 is the Gaussian distribution, σ_u^2 is the noise variance, and I_D is the D × D identity matrix.
§.§ Generative model
Figure: Multiple shooting splits a trajectory with one initial state (top) into two sub-trajectories with two initial states (bottom) and tries to minimize the gap between the sub-trajectories (orange arrow).
Training models of dynamic systems is often challenging due to long training times and training instabilities <cit.>. To alleviate these problems, various heuristics have been proposed, such as progressive lengthening and splitting of the training trajectories <cit.>. We use multiple shooting <cit.>, a simple and efficient technique which has demonstrated its effectiveness in ODE learning applications <cit.>. We extend the multiple shooting framework for latent ODE models presented in <cit.> to our PDE modeling setup by introducing spatial dimensions in the latent state and designing an encoder adapted specifically to the PDE setting (Section <ref>).
Multiple shooting splits a single trajectory {z(t_i)}_i=1,...,M with one initial state s_1 into B consecutive non-overlapping sub-trajectories {z(t_i)}_i∈ℐ_b, b=1,…,B with B initial states s_1:B := (s_1, …, s_B), while imposing a continuity penalty between the sub-trajectories (see Figure <ref>). The index set ℐ_b contains the time point indices for the b'th sub-trajectory. We also denote the temporal position of s_b as t_[b] and place s_b at the first time point preceding the b'th sub-trajectory (except s_1, which is placed at t_1). Note that the shooting states s_b have the same dimension as the original latent state z(t), i.e., s_b ∈ ℝ^N × d. Multiple shooting allows us to parallelize the simulation over the sub-trajectories and shortens the simulation intervals, thus improving the training speed and stability. In Appendix <ref> we demonstrate the effect of multiple shooting on the model training and prediction accuracy.
We begin by defining the prior over the unknown model parameters and initial states:
p(s_1:B, θ_dyn, θ_dec) = p(s_1:B | θ_dyn) p(θ_dyn) p(θ_dec),
where p(θ_dyn) and p(θ_dec) are zero-mean diagonal Gaussians, and the continuity-inducing prior p(s_1:B | θ_dyn) is defined as in <cit.>
p(s_1:B | θ_dyn) = p(s_1) ∏_b=2^B p(s_b | s_b-1, θ_dyn).
Intuitively, the continuity prior p(s_b | s_b-1, θ_dyn) takes the initial latent state s_b-1, simulates it forward from time t_[b-1] to t_[b] to get μ_[b] = ODESolve(t_[b]; t_[b-1], s_b-1, θ_dyn), and then forces μ_[b] to approximately match the initial state s_b of the next sub-trajectory,
thus promoting continuity of the full trajectory.
We assume the continuity-inducing prior factorizes across the grid points, i.e.,
p(s_1:B | θ_dyn)
= [ ∏_j=1^N p(s_1^j) ] [ ∏_b=2^B ∏_j=1^N p(s_b^j | s_b-1, θ_dyn) ],
= [ ∏_j=1^N p(s_1^j) ] [ ∏_b=2^B ∏_j=1^N 𝒩( s_b^j | z(t_[b], x_j; t_[b-1], s_b-1, θ_dyn), σ_c^2 I_d ) ],
where p(s_1^j) is a diagonal Gaussian, and the parameter σ_c^2 controls the strength of the prior. Note that the term z(t_[b], x_j; t_[b-1], s_b-1, θ_dyn) in Equation <ref> equals the ODE forward solution ODESolve(t_[b]; t_[b-1], s_b-1, θ_dyn) at grid location x_j.
Finally, we define our generative model in terms of the following sampling procedure:
θ_dyn, θ_dec, s_1:B ∼ p(θ_dyn) p(θ_dec) p(s_1:B | θ_dyn),
z(t_i) = z(t_i; t_[b], s_b, θ_dyn), b ∈ {1, ..., B}, i ∈ ℐ_b,
u_i^j ∼ p(u_i^j | g_θ_dec(z(t_i, x_j))), i = 1, …, M, j = 1, …, N,
with the following joint distribution (see Appendix <ref> for details about the model specification):
p(u_1:M, s_1:B, θ_dyn, θ_dec) = ∏_b=1^B ∏_i∈ℐ_b ∏_j=1^N [ p(u_i^j | s_b, θ_dyn, θ_dec) ] p(s_1:B | θ_dyn) p(θ_dyn) p(θ_dec).
§ PARAMETER INFERENCE
§.§ Amortized variational inference
We approximate the true posterior over the model parameters and initial states p(s_1:B, θ_dyn, θ_dec | u_1:M) using variational inference <cit.> with the following approximate posterior:
q(θ_dyn, θ_dec, s_1:B) = q(θ_dyn) q(θ_dec) q(s_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B ∏_j=1^N q_ψ_b^j(s_b^j),
where q_ψ_dyn, q_ψ_dec and q_ψ_b^j are diagonal Gaussians, and ψ_dyn, ψ_dec and ψ_b^j are variational parameters. To avoid direct optimization over the local variational parameters ψ_b^j, we use amortized variational inference <cit.> and train an encoder h_θ_enc with parameters θ_enc which maps observations u_1:M to ψ_b^j (see Section <ref>). For brevity, we sometimes omit the dependence of approximate posteriors on variational parameters and simply write e.g., q(s_b^j).
In variational inference the best approximation of the posterior is obtained by minimizing the Kullback-Leibler divergence:
KL[q(θ_dyn, θ_dec, s_1:B) ‖ p(θ_dyn, θ_dec, s_1:B | u_1:M)],
which is equivalent to maximizing the evidence lower bound (ELBO), defined for our model as:
ℒ = ∑_b=1^B ∑_i∈ℐ_b ∑_j=1^N 𝔼_q(s_b, θ_dyn, θ_dec)[ log p(u_i^j | s_b, θ_dyn, θ_dec) ]   (i) observation model
- ∑_j=1^N KL[ q(s_1^j) ‖ p(s_1^j) ]   (ii) initial state prior
- ∑_b=2^B ∑_j=1^N 𝔼_q(θ_dyn, s_b-1)[ KL[ q(s_b^j) ‖ p(s_b^j | s_b-1, θ_dyn) ] ]   (iii) continuity prior
- KL[q(θ_dyn) ‖ p(θ_dyn)]   (iv) dynamics prior
- KL[q(θ_dec) ‖ p(θ_dec)]   (v) decoder prior.
The terms (ii), (iv), and (v) are computed analytically, while terms (i) and (iii) are approximated using Monte Carlo integration for the expectations and numerical ODE solvers for the initial value problems.
See Appendices <ref> and <ref> for the approximate posterior details and for the derivation and computation of the ELBO.
§.§ Encoder
Here we describe our encoder, which maps observations u_1:M to the local variational parameters ψ_b^j required to sample the initial latent state of sub-trajectory b at time point t_[b] and observation location x_j. Similarly to our model, the encoder should be data-efficient and grid-independent.
Similarly to our model (Section <ref>), we enable grid-independence by making the encoder operate on spatial interpolants of the observations u_1:M (even if they are noisy):
u_i(x) := Interpolate(u_i)(x), i = 1, …, M,
where the spatial interpolation is done separately for each time point i. We then use the interpolants u_i(x) to define the spatial neighborhoods 𝒩_S(x) in a grid-independent manner.
To improve data-efficiency, we assume ψ_b^j does not depend on the whole observed sequence u_1:M, but only on some local information in a spatiotemporal neighborhood of t_[b] and x_j. We define the temporal neighborhood of t_[b] as
𝒩_T(t_[b]) := {k : |t_k - t_[b]| ≤ δ_T, k = 1, …, M},
where δ_T is a hyperparameter controlling the neighborhood size, and then define the spatiotemporal neighborhood of t_[b] and x_j as
𝒰[t_[b], x_j] := {u_k(x) : k ∈ 𝒩_T(t_[b]), x ∈ 𝒩_S(x_j)}.
Our encoder operates on such spatiotemporal neighborhoods 𝒰[t_[b], x_j] and works in three steps (see Figure <ref>). First, for each time index k ∈ 𝒩_T(t_[b]) it aggregates the spatial information {u_k(x)}_x∈𝒩_S(x_j) into a vector α_k^S. Then, it aggregates the spatial representations α_k^S across time into another vector α_[b]^T, which is finally mapped to the variational parameters ψ_b^j as follows:
ψ_b^j = h_θ_enc(𝒰[t_[b], x_j]) = h_read(h_temporal(h_spatial(𝒰[t_[b], x_j]))).
Spatial aggregation. Since the spatial neighborhoods are fixed and remain identical for all spatial locations (see Figure <ref>), we implement the spatial aggregation function h_spatial as an MLP which takes the elements of the set {u_k(x)}_x∈𝒩_S(x_j), stacked in a fixed order, as the input.
Temporal aggregation. We implement h_temporal as a stack of transformer layers <cit.> which allows it to operate on input sets of arbitrary size. We use time-aware attention and continuous relative positional encodings <cit.> which were shown to be effective on data from dynamical systems observed at irregular time intervals. Each transformer layer takes a layer-specific input set {ξ_k^in}_k ∈𝒩_T(t_[b]), where ξ_k^in is located at t_k, and maps it to an output set {ξ_k^out}_k ∈𝒩_T(t_[b]), where each ξ_k^out is computed using only the input elements within distance δ_T from t_k, thus promoting temporal locality. Furthermore, instead of using absolute positional encodings the model assumes the behavior of the system does not depend on time and uses relative temporal distances to inject positional information. The first layer takes {α_k^S}_k ∈𝒩_T(t_[b]) as the input, while the last layer returns a single element at time point t_[b], which represents the temporal aggregation α_[b]^T.
Variational parameter readout. Since α_[b]^T is a fixed-length vector, we implement h_read as an MLP.
§ EXPERIMENTS
We use three challenging datasets: Shallow Water, Navier-Stokes, and Scalar Flow which contain observations of spatiotemporal system at N ≈ 1100 grid points evolving over time (see Figure <ref>). The first two datasets are synthetic and generated using numeric PDE solvers (we use scikit-fdiff <cit.> for Shallow Water, and PhiFlow <cit.> for Navier-Stokes), while the third dataset contains real-world observations (camera images) of smoke plumes raising in warm air <cit.>. In all cases the observations are made at irregular spatiotemporal grids and contain only partial information about the true system state. All datasets contain 60/20/20 training/validation/testing trajectories. See Appendix <ref> for details.
We train our model for 20k iterations with constant learning rate of 3e-4 and linear warmup. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). Training is done on a single NVIDIA Tesla V100 GPU, with a single run taking 3-4 hours. We use the mean absolute error (MAE) on the test set as the performance measure. Error bars are standard errors over 4 random seeds. For forecasting we use the expected value of the posterior predictive distribution. See Appendix <ref> for all details about the training, validation, and testing setup.
Latent state dimension. Here we show the advantage of using latent-space models on partially observed data. We change the latent state dimension d from 1 to 5 and measure the test MAE. Note that for d=1 we effectively have a data-space model which models the observations without trying to reconstruct the missing states. Figure <ref> shows that in all cases there is improvement in performance as the latent dimension grows. For Shallow Water and Navier-Stokes the true latent dimension is 3. Since Scalar Flow is a real-world process, there is no true latent dimension. As a benchmark, we provide the performance of our model trained on fully-observed versions of the synthetic datasets (we use the same architecture and hyperparameters, but fix d to 3). Figure <ref> also shows examples of model predictions (at the final time point) for different values of d. We see a huge difference between d=1 and d=3,5. Note how apparently small difference in MAE at d=1 and d=5 for Scalar Flow corresponds to a dramatic improvement in the prediction quality.
Grid independence. Here we show the grid-independence property of our model by training it on grids with ≈ 1100 observation locations, and then testing on a coarser grid, the original grid, and a finer grid. For Shallow Water and Navier-Stokes the coarser/finer grids contain 290/4200 nodes, while for Scalar Flow they contain 560/6420 nodes, respectively. Figure <ref> shows the model's performance on the different spatial grids. A performance drop on coarse grids is expected, since we get less accurate information about the system's initial state and simulate the dynamics on a coarser grid. Figure <ref> also shows examples of model predictions (at the final time point) for different grid sizes.
Comparison to other models.
Here we compare our model with two recent models from the literature: MAgNet <cit.> and DINo <cit.>. Similarly to our model, these models also produce space-time continuous predictions: MAgNet uses neural network-based interpolation and Euler time discretization, while DINo uses implicit neural representation-based
[6]r0.5
Test MAE for different models.
1!
Shallow Water Navier-Stokes Scalar-Flow
MAgNet 0.061 ± 0.001 0.103 ± 0.003 0.056 ± 0.003
DINo 0.063 ± 0.003 0.113 ± 0.002 0.059 ± 0.001
Ours 0.016 ± 0.002 0.041 ± 0.003 0.042 ± 0.001
decoder and continuous-time dynamics. These two methods also use an encoder that takes a history of observations and map them to an initial state in the latent space, where the latent dynamics are learned and the latent state is mapped to the observation space via a decoder (we use the non-Markovian version of DINo). We use the official implementations of both models and tune the hyperparameters for the best performance. For Shallow Water and Navier-Stokes we use the history size of 5 and predict the next 20 steps, while for Scalar Flow the history size is 10 and we predict the next 10 steps. See Appendix <ref> for hyperparameter details. The results are shown in Table <ref>, and the model predictions are shown in Figure <ref>. Our model shows the best performance, achieving very accurate predictions on the synthetic data, and also shows the capacity for modeling real-world data managing to predict the smoke speed, direction, and even the smoke separation. In Figure <ref> we also test data efficiency of the models and show that our model requires much less data to converge to its lowest error. In Appendix <ref> we further demonstrate our model's capability to learn dynamics from noisy data.
§ RELATED WORK
Closest to our work is <cit.>, where they considered the problem of learning PDEs from partial observations and proposed a discrete and grid-dependent model that is restricted to regular spatiotemporal grids. Another related work is that of <cit.>, where they proposed a variational inference framework for learning ODEs from noisy and partially-observed data. However, they consider only low-dimensional ODEs and are restricted to regular grids.
Other works considered learning the latent space PDE dynamics using the “encode-process-decode” approach. <cit.> use GNN-based encoder and dynamics function and map the observations to the same spatial grid in the latent space and learn the latent space dynamics. <cit.> use a similar approach but with CNNs and map the observations to a coarser latent grid and learn the coarse-scale dynamics. <cit.> use CNNs to map observations to a low-dimensional latent vector and learn the latent dynamics. However, all these approaches are grid-dependent, limited to regular spatial/temporal grids, and require fully-observed data.
Interpolation has been used in numerous studies for various applications. Works such as <cit.> use interpolation to map latent states on coarse grids to observations on finer grids. <cit.> used interpolation as a post-processing step to obtain continuous predictions, while <cit.> used it to recover observations at missing nodes.
§ CONCLUSION
We proposed a novel space-time continuous, grid-independent model for learning PDE dynamics from noisy and partial observations on irregular spatiotemporal grids. Our contributions include an efficient generative modeling framework, a novel latent PDE model merging collocation and method of lines, and a data-efficient, grid-independent encoder design. The model demonstrates state-of-the-art performance on complex datasets, highlighting its potential for advancing data-driven PDE modeling and enabling accurate predictions of spatiotemporal phenomena in diverse fields. However, our model and encoder operate on every spatial and temporal location which might not be the most efficient approach and hinders scaling to extremely large grids, hence research into more efficient latent state extraction and dynamics modeling methods is needed.
§ APPENDIX A
§.§ Model specification.
Here we provide all details about our model specification. The joint distribution for our model is
p(u_1:M, s_1:B, θ_dyn, θ_dec) = p(u_1:M | s_1:B, θ_dyn, θ_dec) p(s_1:B | θ_dyn) p(θ_dyn) p(θ_dec).
Next, we specify each component in detail.
Parameter priors. The parameter priors are isotropic zero-mean multivariate normal distributions:
p(θ_dyn) = 𝒩(θ_dyn | 0, I),
p(θ_dec) = 𝒩(θ_dec | 0, I),
where 𝒩 is the normal distribution, 0 is a zero vector, and I is the identity matrix; both have an appropriate dimensionality dependent on the number of dynamics and decoder parameters.
Continuity prior. We define the continuity prior as
p(s_1:B | θ_dyn)
= p(s_1) ∏_b=2^B p(s_b | s_b-1, θ_dyn),
= [ ∏_j=1^N p(s_1^j) ] [ ∏_b=2^B ∏_j=1^N p(s_b^j | s_b-1, θ_dyn) ],
= [ ∏_j=1^N 𝒩(s_1^j | 0, I) ] [ ∏_b=2^B ∏_j=1^N 𝒩( s_b^j | z(t_[b], x_j; t_[b-1], s_b-1, θ_dyn), σ_c^2 I ) ],
where 𝒩 is the normal distribution, 0∈ℝ^d is a zero vector, I ∈ℝ^d × d is the identity matrix, and σ_c ∈ℝ is the parameter controlling the strength of the prior. Smaller values of σ_c tend to produce smaller gaps between the sub-trajectories.
Observation model
p(u_1:M | s_1:B, θ_dyn, θ_dec) = ∏_b=1^B ∏_i∈ℐ_b ∏_j=1^N p(u_i^j | s_b, θ_dyn, θ_dec)
= ∏_b=1^B ∏_i∈ℐ_b ∏_j=1^N p(u_i^j | g_θ_dec(z(t_i, x_j; t_[b], s_b, θ_dyn)))
= ∏_b=1^B ∏_i∈ℐ_b ∏_j=1^N 𝒩(u_i^j | g_θ_dec(z(t_i, x_j; t_[b], s_b, θ_dyn)), σ_u^2 I),
where 𝒩 is the normal distribution, σ_u^2 is the observation noise variance, and I ∈ ℝ^D × D is the identity matrix. Note again that z(t_i, x_j; t_[b], s_b, θ_dyn) above equals the ODE forward solution ODESolve(t_i; t_[b], s_b, θ_dyn) at grid location x_j.
§.§ Approximate posterior specification.
Here we provide all details about the approximate posterior. We define the approximate posterior as
q(θ_dyn, θ_dec, s_1:B) = q(θ_dyn) q(θ_dec) q(s_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B ∏_j=1^N q_ψ_b^j(s_b^j).
Next, we specify each component in detail.
Dynamics parameters posterior. We define q_ψ_dyn(θ_dyn) as
q_ψ_dyn(θ_dyn) = 𝒩(θ_dyn | γ_dyn, diag (τ_dyn^2)),
where γ_dyn and τ_dyn^2 are vectors with an appropriate dimension (dependent on the number of dynamics parameters), and diag (τ_dyn^2) is a matrix with τ_dyn^2 on the diagonal. We define the vector of variational parameters as ψ_dyn = (γ_dyn, τ_dyn^2). We optimize directly over ψ_dyn and initialize γ_dyn using Xavier <cit.> initialization, while τ_dyn is initialized with each element equal to 9 · 10^-4.
Decoder parameters posterior. We define q_ψ_dec(θ_dec) as
q_ψ_dec(θ_dec) = 𝒩(θ_dec | γ_dec, diag (τ_dec^2)),
where γ_dec and τ_dec^2 are vectors with an appropriate dimension (dependent on the number of decoder parameters), and diag (τ_dec^2) is a matrix with τ_dec^2 on the diagonal. We define the vector of variational parameters as ψ_dec = (γ_dec, τ_dec^2). We optimize directly over ψ_dec and initialize γ_dec using Xavier <cit.> initialization, while τ_dec is initialized with each element equal to 9 · 10^-4.
Shooting variables posterior. We define q_ψ_b^j(_b^j) as
q_ψ_b^j(s_b^j) = 𝒩(s_b^j | γ_b^j, diag ([τ_b^j]^2)),
where the vectors γ_b^j, τ_b^j ∈ℝ^d are returned by the encoder h_θ_enc, and diag ([τ_b^j]^2) is a matrix with [τ_b^j]^2 on the diagonal. We define the vector of variational parameters as ψ_b^j = (γ_b^j, τ_b^j). Because the variational inference for the shooting variables is amortized, our model is trained w.r.t. the parameters of the encoder network, θ_enc.
§ APPENDIX B
§.§ Derivation of ELBO.
For our model and the choice of the approximate posterior the ELBO can be written as
ℒ = ∫ q(θ_dyn, θ_dec, s_1:B) ln[ p(u_1:M, s_1:B, θ_dyn, θ_dec) / q(θ_dyn, θ_dec, s_1:B) ] dθ_dyn dθ_dec ds_1:B
= ∫ q(θ_dyn, θ_dec, s_1:B) ln[ p(u_1:M|s_1:B, θ_dyn, θ_dec) p(s_1:B|θ_dyn) p(θ_dyn) p(θ_dec) / ( q(s_1:B) q(θ_dyn) q(θ_dec) ) ] dθ_dyn dθ_dec ds_1:B
= ∫ q(θ_dyn, θ_dec, s_1:B) ln p(u_1:M | s_1:B, θ_dyn, θ_dec) dθ_dyn dθ_dec ds_1:B
- ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(s_1:B) / p(s_1:B | θ_dyn) ] dθ_dyn dθ_dec ds_1:B
- ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(θ_dyn) / p(θ_dyn) ] dθ_dyn dθ_dec ds_1:B
- ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(θ_dec) / p(θ_dec) ] dθ_dyn dθ_dec ds_1:B
= ℒ_1 - ℒ_2 - ℒ_3 - ℒ_4.
Next, we will look at each term ℒ_i separately.
ℒ_1 = ∫ q(θ_dyn, θ_dec, s_1:B) ln p(u_1:M | s_1:B, θ_dyn, θ_dec) dθ_dyn dθ_dec ds_1:B
= ∫ q(θ_dyn, θ_dec, s_1:B) ln[ ∏_b=1^B∏_i ∈ℐ_b∏_j=1^N p(u_i^j | s_b, θ_dyn, θ_dec) ] dθ_dyn dθ_dec ds_1:B
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N ∫ q(θ_dyn, θ_dec, s_1:B) ln[ p(u_i^j | s_b, θ_dyn, θ_dec) ] dθ_dyn dθ_dec ds_1:B
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N ∫ q(θ_dyn, θ_dec, s_b) ln[ p(u_i^j | s_b, θ_dyn, θ_dec) ] dθ_dyn dθ_dec ds_b
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N 𝔼_q(θ_dyn, θ_dec, s_b) ln[ p(u_i^j | s_b, θ_dyn, θ_dec) ].
ℒ_2 = ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(s_1:B) / p(s_1:B | θ_dyn) ] dθ_dyn dθ_dec ds_1:B
= ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(s_1)/p(s_1) ∏_b=2^B q(s_b)/p(s_b|s_b-1, θ_dyn) ] dθ_dyn dθ_dec ds_1:B
= ∫ q(θ_dyn, θ_dec, s_1:B) ln[ ∏_j=1^N q(s_1^j)/p(s_1^j) ] dθ_dyn dθ_dec ds_1:B
+ ∫ q(θ_dyn, θ_dec, s_1:B) ln[ ∏_b=2^B∏_j=1^N q(s_b^j)/p(s_b^j|s_b-1, θ_dyn) ] dθ_dyn dθ_dec ds_1:B
= ∑_j=1^N ∫ q(θ_dyn, θ_dec, s_1:B) ln[ q(s_1^j)/p(s_1^j) ] dθ_dyn dθ_dec ds_1:B
+ ∑_b=2^B ∫ q(θ_dyn, θ_dec, s_1:B) ∑_j=1^N ln[ q(s_b^j)/p(s_b^j|s_b-1, θ_dyn) ] dθ_dyn dθ_dec ds_1:B
= ∑_j=1^N ∫ q(s_1^j) ln[ q(s_1^j)/p(s_1^j) ] ds_1^j
+ ∑_b=2^B ∫ q(θ_dyn, s_b-1, s_b) ∑_j=1^N ln[ q(s_b^j)/p(s_b^j|s_b-1, θ_dyn) ] dθ_dyn ds_b-1 ds_b
= ∑_j=1^N ∫ q(s_1^j) ln[ q(s_1^j)/p(s_1^j) ] ds_1^j
+ ∑_b=2^B ∫ q(θ_dyn, s_b-1) ∑_j=1^N [ ∫ q(s_b^j) ln[ q(s_b^j)/p(s_b^j|s_b-1, θ_dyn) ] ds_b^j ] dθ_dyn ds_b-1
= ∑_j=1^N KL( q(s_1^j) ‖ p(s_1^j) ) + ∑_b=2^B 𝔼_q(θ_dyn, s_b-1)[ ∑_j=1^N KL( q(s_b^j) ‖ p(s_b^j|s_b-1, θ_dyn) ) ],
where KL is the Kullback–Leibler (KL) divergence. Both of the KL divergences above have a closed form, but the expectation w.r.t. q(θ_dyn, s_b-1) does not.
ℒ_3 = KL(q(θ_dyn) ‖ p(θ_dyn)), ℒ_4 = KL(q(θ_dec) ‖ p(θ_dec)).
§.§ Computation of ELBO.
We compute the ELBO using the following algorithm:
* Sample θ_dyn, θ_dec from q_ψ_dyn(θ_dyn), q_ψ_dec(θ_dec).
* Sample s_1:B by sampling each s_b^j from q_ψ_b^j(s_b^j) with ψ_b^j = h_θ_enc([t_[b], x_j]).
* Compute u_1:M from s_1:B as in Equations <ref>-<ref>.
* Compute ELBO ℒ (KL terms are computed in closed form, for expectations we use Monte Carlo integration with one sample).
Sampling is done using reparametrization to allow unbiased gradients w.r.t. the model parameters.
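As a concrete illustration of steps 1-4, the sketch below (PyTorch; tensor shapes, values, and the stand-in decoder are illustrative assumptions rather than the actual implementation) shows the two primitives involved: a reparameterized sample and the closed-form KL divergence between diagonal Gaussians, combined into a one-sample Monte Carlo ELBO term.

```python
import math
import torch

def diag_gauss_kl(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, diag(std_q^2)) || N(mu_p, diag(std_p^2)) ), summed over the last dimension
    var_q, var_p = std_q.pow(2), std_p.pow(2)
    return 0.5 * (torch.log(var_p / var_q) + (var_q + (mu_q - mu_p).pow(2)) / var_p - 1.0).sum(-1)

def rsample(mu, std):
    # reparameterized sample: gradients flow through mu and std
    return mu + std * torch.randn_like(std)

def gauss_log_lik(u, mean, std_u):
    # log N(u | mean, std_u^2 I), summed over the last dimension
    return (-0.5 * ((u - mean) / std_u) ** 2 - torch.log(std_u) - 0.5 * math.log(2 * math.pi)).sum(-1)

# one-sample Monte Carlo ELBO term for a single shooting variable (decoder is a stand-in)
mu_s, std_s = torch.zeros(8, requires_grad=True), 0.1 * torch.ones(8)
decoder = torch.nn.Linear(8, 1)
u_obs = torch.randn(1)
s = rsample(mu_s, std_s)                                                    # step 2
elbo_term = gauss_log_lik(u_obs, decoder(s), 0.05 * torch.ones(1)) \
            - diag_gauss_kl(mu_s, std_s, torch.zeros(8), torch.ones(8))     # step 4
```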
§ APPENDIX C
§.§ Datasets.
Shallow Water. The shallow water equations are a system of partial differential equations (PDEs) that simulate the behavior of water in a shallow basin. These equations are effectively a depth-integrated version of the Navier-Stokes equations, assuming the horizontal length scale is significantly larger than the vertical length scale. Given these assumptions, they provide a model for water dynamics in a basin or similar environment, and are commonly utilized in predicting the propagation of water waves, tides, tsunamis, and coastal currents. The state of the system modeled by these equations consists of the wave height h(t, x, y), velocity in the x-direction u(t, x, y) and velocity in the y-direction v(t, x, y). Given an initial state (h_0, u_0, v_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The shallow water equations are defined as:
∂ h/∂ t + ∂ (hu)/∂ x + ∂ (hv)/∂ y = 0,
∂ u/∂ t + u∂ u/∂ x + v∂ u/∂ y + g∂ h/∂ x = 0,
∂ v/∂ t + u∂ v/∂ x + v∂ v/∂ y + g∂ h/∂ y = 0,
where g is the gravitational constant.
We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=0.1. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the wave height h(t,x,y).
For each trajectory, we start with zero initial velocities and the initial height h_0(x,y) generated as:
h̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)),
h_0(x, y) = 1 + ( h̃_0(x, y) - min(h̃_0) ) / ( max(h̃_0) - min(h̃_0) ),
where N = 3 and λ_kl, γ_kl∼𝒩(0, 1).
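A minimal NumPy sketch of this initial-condition sampler (the 64 × 64 evaluation grid and the fixed random seed are illustrative choices):

```python
import numpy as np

def random_initial_height(xs, ys, N=3, seed=0):
    """Sample h_0(x, y) as the truncated random Fourier series above, rescaled to [1, 2]."""
    rng = np.random.default_rng(seed)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    h = np.zeros_like(X)
    for k in range(-N, N + 1):
        for l in range(-N, N + 1):
            lam, gam = rng.standard_normal(2)
            h += lam * np.cos(2 * np.pi * (k * X + l * Y)) + gam * np.sin(2 * np.pi * (k * X + l * Y))
    return 1.0 + (h - h.min()) / (h.max() - h.min())

h0 = random_initial_height(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
```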
The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively.
We use scikit-fdiff <cit.> to solve the PDEs.
Navier-Stokes. For this dataset we model the propagation of a scalar field (e.g., smoke concentration) in a fluid (e.g., air). The modeling is done by coupling the Navier-Stokes equations with the Boussinesq buoyancy term and the transport equation to model the propagation of the scalar field. The state of the system modeled by these equations consists of the scalar field c(t,x,y), velocity in x-direction u(t,x,y), velocity in y-direction v(t,x,y), and pressure p(t,x,y). Given an initial state (c_0, u_0, v_0, p_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The Navier-Stokes equations with the transport equation are defined as:
∂ u/∂ x + ∂ v/∂ y = 0,
∂ u/∂ t + u ∂ u/∂ x + v ∂ u/∂ y = - ∂ p/∂ x + ν( ∂^2 u/∂ x^2 + ∂^2 u/∂ y^2),
∂ v/∂ t + u ∂ v/∂ x + v ∂ v/∂ y = - ∂ p/∂ y + ν( ∂^2 v/∂ x^2 + ∂^2 v/∂ y^2) + c,
∂ c/∂ t = - u ∂ c/∂ x - v ∂ c/∂ y + ν( ∂^2 c/∂ x^2 + ∂^2 c/∂ y^2),
where ν = 0.002.
We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=2.0, but drop the first 0.5 seconds due to slow dynamics during this time period. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the scalar field c(t,x,y).
For each trajectory, we start with zero initial velocities and pressure, and the initial scalar field c_0(x,y) is generated as:
c̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)),
c_0(x, y) = ( c̃_0(x, y) - min(c̃_0) ) / ( max(c̃_0) - min(c̃_0) ),
where N = 2 and λ_kl, γ_kl∼𝒩(0, 1).
The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively.
We use PhiFlow <cit.> to solve the PDEs.
Scalar Flow.
[Figure: Spatial grid used for Scalar Flow dataset.]
This dataset, proposed by <cit.>, consists of observations of smoke plumes rising in hot air. The observations are post-processed camera images of the smoke plumes taken from multiple views. For simplicity, we use only the front view. The dataset contains 104 trajectories, where each trajectory has 150 time points and each image has the resolution 1080 × 1920.
To reduce dimensionality of the observations we sub-sample the original spatial and temporal grids. For the temporal grid, we remove the first 50 time points, which leaves 100 time points, and then take every 4th time point, thus leaving 20 time points in total. The original 1080 × 1920 spatial grid is first down-sampled by a factor of 9 giving a new grid with resolution 120 × 213, and then the new grid is further sub-sampled based on the smoke density at each node. In particular, we compute the average smoke density at each node (averaged over time), and then sample the nodes without replacement with the probability proportional to the average smoke density (thus, nodes that have zero density most of the time are not selected). See example of a final grid in Figure <ref>. This gives a new grid with 1089 nodes.
We further smooth the observations by applying Gaussian smoothing with the standard deviation of 1.5 (assuming domain size 120 × 213).
We use the first 60 trajectories for training, next 20 for validation and next 20 for testing.
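The density-weighted sub-sampling step described above can be sketched as follows (NumPy; the array layout of the down-sampled frames is an assumption):

```python
import numpy as np

def subsample_grid(frames, n_nodes=1089, seed=0):
    """frames: array of shape (T, H, W) with smoke densities on the down-sampled grid."""
    rng = np.random.default_rng(seed)
    density = frames.mean(axis=0).ravel()              # time-averaged density per node
    probs = density / density.sum()                    # sampling probability proportional to density
    flat_idx = rng.choice(density.size, size=n_nodes, replace=False, p=probs)
    rows, cols = np.unravel_index(flat_idx, frames.shape[1:])
    return rows, cols                                   # grid indices of the retained nodes
```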
§.§ Model architecture and hyper-parameters.
Dynamics function. For all datasets we define F_θ_dyn as an MLP. For Shallow Water/Navier-Stokes/Scalar Flow we use 1/3/3 hidden layers with the size of 1024/512/512, respectively. We use ReLU nonlinearities.
Observation function. For all datasets we define g_θ_dec as a selector function which takes the latent state z(t, x) ∈ℝ^d and returns its first component.
Encoder. Our encoder h_θ_enc consists of three function: h_θ_spatial, h_θ_temporal, and h_θ_read. The spatial aggregation function h_θ_spatial is a linear mapping to ℝ^128. The temporal aggregation function h_θ_temporal is a stack of transformer layers with temporal attention and continuous relative positional encodings <cit.>. For all datasets, we set the number of transformer layers to 6. Finally, the variational parameter readout function h_θ_read is a mapping defined as
ψ_b^j = h_θ_read(α_[b]^T) = ( γ_b^j, τ_b^j ) = ( Linear(α_[b]^T), exp(Linear(α_[b]^T)) ),
where Linear is a linear layer (different for each component), and γ_b^j and τ_b^j are the variational parameters discussed in Appendix A.
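A small PyTorch sketch of this readout (the attention width of 128 comes from the spatial aggregation above; the latent dimension d is left as a free choice):

```python
import torch
import torch.nn as nn

class ReadoutHead(nn.Module):
    """Maps the temporal-attention output alpha_[b] to the variational parameters (gamma, tau)."""
    def __init__(self, attn_dim=128, latent_dim=8):
        super().__init__()
        self.to_gamma = nn.Linear(attn_dim, latent_dim)
        self.to_log_tau = nn.Linear(attn_dim, latent_dim)

    def forward(self, alpha):
        gamma = self.to_gamma(alpha)               # posterior mean
        tau = torch.exp(self.to_log_tau(alpha))    # positive standard deviation via exp(Linear(.))
        return gamma, tau
```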
Spatial and temporal neighborhoods. We use the same spatial neighborhoods 𝒩_S(x) for both the encoder and the dynamics function. We define 𝒩_S(x) as the set of points consisting of the point x and points on two concentric circles centered at x, with radii r and r/2, respectively. Each circle contains 8 points spaced 45 degrees apart (see Figure <ref> (right)). The radius r is set to 0.1. For Shallow Water/Navier-Stokes/Scalar Flow the size of temporal neighborhood (δ_T) is set to 0.1/0.1/0.2, respectively.
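For reference, a NumPy sketch of how such a neighborhood template could be generated (the exact construction in the implementation may differ):

```python
import numpy as np

def circular_neighborhood(x, r=0.1, n_circles=2, n_per_circle=8):
    """Return the point x plus n_per_circle points on each of n_circles concentric circles."""
    x = np.asarray(x, dtype=float)
    angles = np.deg2rad(np.arange(n_per_circle) * 360.0 / n_per_circle)   # 45 degrees apart
    points = [x]
    for i in range(1, n_circles + 1):
        radius = r * i / n_circles                                        # radii r/2 and r by default
        points.append(x + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1))
    return np.vstack(points)          # shape: (1 + n_circles * n_per_circle, 2)
```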
Multiple Shooting. For Shallow Water/Navier-Stokes/Scalar Flow we split the full training trajectories into 4/4/19 sub-trajectories, or, equivalently, have the sub-trajectory length of 6/6/2.
§.§ Training, validation, and testing setup.
Data preprocessing.
We scale the temporal grids, spatial grids, and observations to be within the interval [0, 1].
Training. We train our model for 20000 iterations using Adam <cit.> optimizer with constant learning rate 3e-4 and linear warmup for 200 iterations. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). The batch size is 1.
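The solver call follows the standard torchdiffeq pattern sketched below; the stand-in dynamics here ignores the spatial neighborhood structure and only illustrates the interface and tolerance settings.

```python
import torch
from torchdiffeq import odeint   # differentiable ODE solvers

f_dyn = torch.nn.Sequential(torch.nn.Linear(128, 1024), torch.nn.ReLU(),
                            torch.nn.Linear(1024, 128))   # stand-in for the dynamics MLP

def func(t, z):                  # torchdiffeq expects f(t, z) -> dz/dt
    return f_dyn(z)

z0 = torch.zeros(1, 128)                           # latent state at the start of a sub-trajectory
t = torch.linspace(0.0, 1.0, 6)                    # time points at which to evaluate the solution
zt = odeint(func, z0, t, method="dopri5", rtol=1e-3, atol=1e-4)   # shape (6, 1, 128)
```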
Validation. We use validation set to track the performance of our model during training and save the parameters
that produce the best validation performance. As performance measure we use the mean absolute error at predicting the full validation trajectories given some number of initial observations. For Shallow Water/Navier-Stokes/Scalar Flow we use the first 5/5/10 observations. The predictions are made by taking one sample from the posterior predictive distribution (see Appendix C.4 for details).
Testing. Testing is done similarly to validation, except that as the prediction we use an estimate of the expected value of the posterior predictive distribution (see Appendix C.4 for details).
§.§ Forecasting.
Given initial observations u_1:m at time points t_1:m, we predict the future observation u_n at a time point t_n > t_m as the expected value of the approximate posterior predictive distribution:
p(u_n | u_1:m, u_1:M) ≈ ∫ p(u_n | s_m, θ_dyn, θ_dec) q(s_m) q(θ_dyn) q(θ_dec) ds_m dθ_dyn dθ_dec.
The expected value is estimated via Monte Carlo integration, so the algorithm for predicting u_n is:
* Sample θ_dyn, θ_dec from q(θ_dyn), q(θ_dec).
* Sample s_m from q(s_m) = ∏_j=1^N q_ψ_m^j(s_m^j), where the variational parameters ψ_m^j are given by the encoder h_θ_enc operating on the initial observations u_1:m as ψ_m^j = h_θ_enc([t_m, x_j]).
* Compute the latent state z(t_n) = z(t_n; t_m, s_m, θ_dyn).
* Sample u_n by sampling each u_n^j from 𝒩(u_n^j | g_θ_dec(z(t_n, x_j)), σ_u^2 I).
* Repeat steps 1-4 n times and average the predictions (we use n=10).
§.§ Model comparison setup.
DINo. We use the official implementation of DINo <cit.>. The encoder is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The code dimension is 100. The dynamics function is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The decoder has 3 layers and 64 channels.
MAgNet. We use the official implementation of MAgNet <cit.>. We use the graph neural network variant of the model. The number of message-passing steps is 5. All MLPs have 4 layers with 128 neurons each in each layer. The latent state dimension is 128.
§ APPENDIX D
§.§ Spatiotemporal neighborhood shapes and sizes.
Here we investigate the effect of changing the shape and size of spatial and temporal neighborhoods used by the encoder and dynamics functions. We use the default hyperparameters discussed in Appendix C and change only the neighborhood shape or size. A neighborhood size of zero implies no spatial/temporal aggregation.
Initially, we use the original circular neighborhood displayed in Figure <ref> for both encoder and dynamics function and change only its size (radius). The results are presented in Figures <ref> and <ref>. In Figure <ref>, it is surprising to see very little effect from changing the encoder's spatial neighborhood size. A potential explanation is that the dynamics function shares the spatial aggregation task with the encoder. However, the results in Figure <ref> are more intuitive, displaying a U-shaped curve for the test MAE, indicating the importance of using spatial neighborhoods of appropriate size. Interestingly, the best results tend to be achieved with relatively large neighborhood sizes. Similarly, Figure <ref> shows U-shaped curves for the encoder's temporal neighborhood size, suggesting that latent state inference benefits from utilizing local temporal information.
We then examine the effect of changing the shape of the dynamics function's spatial neighborhood. We use ncircle neighborhoods, which consist of n equidistant concentric circular neighborhoods (see examples in Figure <ref>). Effectively, we maintain a fixed neighborhood size while altering its density. The results can be seen in Figure <ref>. We find that performance does not significantly improve when using denser (and presumably more informative) spatial neighborhoods, indicating that accurate predictions only require a relatively sparse neighborhood with appropriate size.
§.§ Multiple shooting.
Here we demonstrate the effect of using multiple shooting for model training. In Figure <ref> (left), we vary the sub-trajectory length (longer sub-trajectories imply more difficult training) and plot the test errors for each sub-trajectory length. We observe that in all cases, the best results are achieved when the sub-trajectory length is considerably smaller than the full trajectory length. In Figure <ref> (right) we further show the training times, and as can be seen multiple shooting allows to noticeably reduce the training times.
§ APPENDIX E
Noisy Data. Here we show the effect of observation noise on our model and compare the results against other models. We train all models with data noise of various strengths, and then compute test MAE on noiseless data (we still use noisy data to infer the initial state at test time). Figure <ref> shows that our model can manage noise strength up to 0.1 without significant drops in performance. Note that all observations are in the range [0, 1].
|
http://arxiv.org/abs/2307.04136v1 | 20230709092915 | ECL: Class-Enhancement Contrastive Learning for Long-tailed Skin Lesion Classification | [
"Yilan Zhang",
"Jianqi Chen",
"Ke Wang",
"Fengying Xie"
] | cs.CV | [
"cs.CV"
] |
Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
xfy_73.buaa.edu.cn
ECL: Class-Enhancement Contrastive Learning for Long-tailed Skin Lesion Classification
Yilan Zhang, Jianqi Chen, Ke Wang, Fengying Xie (corresponding author)
August 12, 2023
======================================================================================
Skin image datasets often suffer from imbalanced data distribution, exacerbating the difficulty of computer-aided skin disease diagnosis. Some recent works exploit supervised contrastive learning (SCL) for this long-tailed challenge. Despite achieving significant performance, these SCL-based methods focus more on head classes, while ignoring the information in tail classes. In this paper, we propose class-Enhancement Contrastive Learning (ECL), which enriches the information of minority classes and treats different classes equally. For information enhancement, we design a hybrid-proxy model to generate class-dependent proxies and propose a cycle update strategy for parameter optimization. A balanced-hybrid-proxy loss is designed to exploit relations between samples and proxies with different classes treated equally. Taking both “imbalanced data" and “imbalanced diagnosis difficulty" into account, we further present a balanced-weighted cross-entropy loss following a curriculum learning schedule. Experimental results on the classification of imbalanced skin lesion data have demonstrated the superiority and effectiveness of our method. The code is publicly available at <https://github.com/zylbuaa/ECL.git>.
§ INTRODUCTION
Skin cancer is one of the most common cancers all over the world. Serious skin diseases such as melanoma can be life-threatening, making early detection and treatment essential <cit.>. As computer-aided diagnosis matures, recent advances with deep learning techniques such as CNNs have significantly improved the performance of skin lesion classification <cit.>. However, as a data-hungry approach, CNN models require large, balanced, and high-quality datasets to meet the accuracy and robustness requirements of applications, which is hard to satisfy due to the long-tailed occurrence of diseases in the real world. The long-tailed problem is usually caused by the incidence rate and the difficulty of data collection. Some diseases are common while others are rare, making it difficult to collect balanced data <cit.>. This causes the head classes to account for the majority of the samples while the tail classes have only small portions. Thus, existing public skin datasets usually suffer from imbalance problems, which in turn bias the classifier and result in poor model performance, especially on tail lesion types.
To tackle the challenge of learning unbiased classifiers with imbalanced data, many previous works focus on three main ideas, including re-sampling data <cit.>, re-weighting loss <cit.> and re-balancing training strategies <cit.>. Re-sampling methods over-sample tail classes or under-sample head classes, re-weighting methods adjust the weights of losses on class-level or instance-level, and re-balancing methods decouple the representation learning and classifier learning into two stages or assign the weights between features from different sampling branches <cit.>. Despite the great results achieved, these methods either manually interfere with the original data distribution or improve the accuracy of minority classes at the cost of reducing that of majority classes <cit.>.
Recently, contrastive learning (CL) methods pose great potential for representation learning when trained on imbalanced data <cit.>. Among them, supervised contrastive learning (SCL) <cit.> aggregates semantically similar samples and separates different classes by training in pairs, leading to impressive success in long-tailed classification of both natural and medical images <cit.>. However, there still remain some defects: (1) Current SCL-based methods utilize the information of minority classes insufficiently. Since tail classes are sampled with low probability, each training mini-batch inherits the long-tail distribution, making parameter updates less dependent on tail classes. (2) SCL loss focuses more on optimizing the head classes with much larger gradients than tail classes, which means tail classes are all pushed farther away from heads <cit.>. (3) Most methods only consider the impact of sample size (“imbalanced data") on the classification accuracy of skin diseases, while ignoring the diagnostic difficulty of the diseases themselves (“imbalanced diagnosis difficulty").
To address the above issues, we propose a class-Enhancement Contrastive Learning method (ECL) for skin lesion classification; the differences between SCL and ECL are illustrated in Fig. <ref>. To sufficiently utilize the information in tail classes, we approach the problem from a proxy-based perspective. A proxy can be regarded as the representative of a specific class, set as learnable parameters. We propose a novel hybrid-proxy model to generate proxies that enhance different classes with a reversed imbalance strategy, i.e., the fewer samples in a class, the more proxies the class has. These learnable proxies are optimized with a cycle update strategy that captures the original data distribution to mitigate the quality degradation caused by the lack of minority samples in a mini-batch. Furthermore, building on balanced contrastive learning (BCL) <cit.>, we propose a balanced-hybrid-proxy loss. The new loss treats all classes equally and utilizes sample-to-sample, proxy-to-sample and proxy-to-proxy relations to improve representation learning. Moreover, we design a balanced-weighted cross-entropy loss which follows a curriculum learning schedule by considering both imbalanced data and diagnosis difficulty.
Our contributions can be summarized as follows: (1) We propose an ECL framework for long-tailed skin lesion classification. Information of classes are enhanced by the designed hybrid-proxy model with a cycle update strategy. (2) We present a balanced-hybrid-proxy loss to balance the optimization of each class and leverage relations among samples and proxies. (3) A new balanced-weighted cross-entropy loss is designed for an unbiased classifier, which considers both “imbalanced data" and “imbalanced diagnosis difficulty". (4) Experimental results demonstrate that the proposed framework outperforms other state-of-the-art methods on two imbalanced dermoscopic image datasets and the ablation study shows the effectiveness of each element.
§ METHODS
The overall end-to-end framework of ECL is presented in Fig. <ref>. The network consists of two parallel branches: a contrastive learning (CL) branch for representation learning and a classifier learning branch. The two branches take in different augmentations T^i, i ∈{1,2 } of the input images X, and the backbone is shared between branches to learn the features X̃^i, i ∈{1,2 }. We use a fully connected layer as a logistic projection for classification g(·): 𝒳̃→𝒴̃ and a one-hidden-layer MLP h(·): 𝒳̃→𝒵∈ℝ^d as a sample embedding head, where d denotes the dimension. ℒ_2-normalization is applied to 𝒵 since the inner product is used as the distance measure in CL. Both the class-dependent proxies generated by the hybrid-proxy model and the embeddings of samples are used to calculate the balanced-hybrid-proxy loss, thus capturing the rich relations of samples and proxies. For better representation, we design a cycle update strategy to optimize the proxies' parameters in the hybrid-proxy model, together with a curriculum learning schedule for achieving unbiased classifiers. The details are introduced as follows.
§.§ Hybrid-Proxy Model
The proposed hybrid-proxy model consists of a set of class-dependent proxies 𝒫= { p^c_k|k∈{ 1,2,...,N^p_c}., c. ∈{1,2,...,C}}, C is the class number, p^c_k∈ℝ^d is the k-th proxy vector of class c, and N^p_c is the proxy number in this class. Since samples in a mini-batch follow imbalanced data distribution, these proxies are designed to be generated in a reversed imbalanced way by giving more representative proxies of tail classes for enhancing the information of minority samples. Let us denote the sample number of class c as N_c and the maximum in all classes as N_max. The proxy number N^p_c can be obtained by calculating the imbalanced factor N_max/N_c of each class:
N^p_c = 1 if N_c = N_max, and N^p_c = ⌊ N_max / (10 N_c) ⌋ + 2 if N_c ≠ N_max.
In this way, the tail classes have more proxies while head classes have less, thus alleviating the imbalanced problem in a mini-batch.
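In code, the proxy allocation amounts to the following (Python; the class counts below are the full ISIC2018 class sizes, used only for illustration):

```python
import math

def num_proxies(class_counts):
    """Proxies per class: 1 for the largest class, floor(N_max / (10 N_c)) + 2 otherwise."""
    n_max = max(class_counts)
    return [1 if n == n_max else math.floor(n_max / (10 * n)) + 2 for n in class_counts]

# ISIC2018 class sizes (NV, MEL, BKL, BCC, AKIEC, VASC, DF): rarer classes get more proxies
print(num_proxies([6705, 1113, 1099, 514, 327, 142, 115]))   # -> [1, 2, 2, 3, 4, 6, 7]
```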
As we know, a gradient descent algorithm will generally be executed to update the parameters after training a mini-batch of samples. However, when dealing with an imbalanced dataset, tail samples in a batch contribute little to the update of their corresponding proxies due to the low probability of being sampled. So how to get better representative proxies? Here we propose a cycle update strategy for the optimization of the parameters. Specifically, we introduce the gradient accumulation method into the training process to update proxies asynchronously. The proxies are updated only after a finished epoch that all data has been processed by the framework with the gradients accumulated. With such a strategy, tail proxies can be optimized in a view of whole data distribution, thus playing better roles in class information enhancement. Algorithm <ref> presents the details of the training process.
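A toy PyTorch sketch of the cycle update (all tensors and the surrogate loss are placeholders; only the optimizer bookkeeping matters): proxy gradients are accumulated over every batch of an epoch and applied once, so even rarely sampled tail classes shape the update.

```python
import torch
import torch.nn.functional as F

proxies = torch.nn.Parameter(torch.randn(25, 128))            # e.g. the summed per-class proxy counts
proxy_opt = torch.optim.SGD([proxies], lr=2e-3)
batches = [torch.randn(64, 128) for _ in range(10)]           # stand-in embedding batches

for epoch in range(3):
    proxy_opt.zero_grad()                                     # reset accumulated proxy gradients
    for z in batches:
        sim = F.normalize(z, dim=1) @ F.normalize(proxies, dim=1).t()
        loss = -sim.mean()                                    # placeholder for the ECL losses
        loss.backward()                                       # gradients keep accumulating on `proxies`
        # (backbone and classifier would be stepped here with their own per-batch optimizer)
    proxy_opt.step()                                          # proxies updated once per epoch
```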
§.§ Balanced-Hybrid-Proxy Loss
To tackle the problem that SCL loss pays more attention on head classes, we introduce BCL and propose balanced-hybrid-proxy loss to treat classes equally. Given a batch of samples ℬ = { (x^(1,2)_i,y_i)}_B, let 𝒵={ z^(1,2)_i}_B = { z^1_1,z^2_2,...,z^1_B,z^2_B} be the feature embeddings in a batch and B denotes the batch size. For an anchor sample z_i∈𝒵 in class c, we unify the positive image set as z^+={ z_j|y_j = y_i = c, j ≠ i }. Also for an anchor proxy p^c_i, we unify all positive proxies as p^+. The proposed balanced-hybrid-proxy loss pulls points (both samples and proxies) in the same class together, while pushes apart samples from different classes in embedding space by using dot product as a similarity measure, which can be formulated as follows:
L_BHP = - 1/(2B + ∑_c∈ C N^p_c) ∑_s_i∈{𝒵∪𝒫} 1/(2B_c + N^p_c - 1) ∑_s_j∈{z^+∪ p^+} log[ exp(s_i· s_j/τ) / E ],
E = ∑_c∈ C 1/(2B_c + N^p_c - 1) ∑_s_k∈{𝒵_c∪𝒫_c} exp(s_i· s_k/τ),
where B_c means the sample number of class c in a batch, τ is the temperature parameter. In addition, we further define 𝒵_c and 𝒫_c as a subset with the label c of 𝒵 and 𝒫 respectively. The average operation in the denominator of balanced-hybrid-proxy loss can effectively reduce the gradients of the head classes, making an equal contribution to optimizing each class. Note that our loss differs from BCL as we enrich the learning of relations between samples and proxies. Sample-to-sample, proxy-to-sample and proxy-to-proxy relations in the proposed loss have the potential to promote network's representation learning. Moreover, as the skin datasets are often small, richer relations can effectively help form a high-quality distribution in the embedding space and improve the separation of features.
§.§ Balanced-Weighted Cross-Entropy Loss
Taking both “imbalanced data" and “imbalanced diagnosis difficulty" into consideration, we design a curriculum schedule and propose balanced-weighted cross-entropy loss to train an unbiased classifier. The training phase are divided into three stages. We first train a general classifier, then in the second stage we assign larger weight to tail classes for “imbalanced data". In the last stage, we utilize the results on the validation set as the diagnosis difficulty indicator of skin disease types to update the weights for “imbalanced diagnosis difficulty". The loss is given by:
L_BWCE = - (1/B) ∑_i=1^B w_i CE(ỹ_i, y_i),
w_i = 1 if e < E_1; w_i = ( (C/N_c) / ∑_c∈ C (1/N_c) )^((e-E_1)/(E_2-E_1)) if E_1 < e < E_2; w_i = ( (C/f^e_c) / ∑_c∈ C (1/f^e_c) )^((e-E_2)/(E-E_2)) if E_2 < e < E,
where w denotes the weight and ỹ denotes the network prediction. We assume f^e_c is the evaluation result of class c on validation set after epoch e and we use f1-score in our experiments. The network is trained for E epochs, E_1 and E_2 are hyperparameters for stages. The final loss is given by Loss = λ L_BHP + μ L_BWCE where λ and μ are the hyperparameters which control the impact of losses.
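A sketch of this stage-wise weight schedule (NumPy), following the definition of w_i above with the settings E_1 = 20, E_2 = 50, and E = 100 used in the experiments; the per-class f1-scores would come from the validation set after each epoch.

```python
import numpy as np

def bwce_weights(epoch, class_counts, f1_per_class, E1=20, E2=50, E=100):
    """Per-class weights for the balanced-weighted cross-entropy loss at a given epoch."""
    counts = np.asarray(class_counts, dtype=float)
    C = len(counts)
    if epoch < E1:                                     # stage 1: plain cross-entropy
        return np.ones(C)
    if epoch < E2:                                     # stage 2: "imbalanced data", ramped in
        base = (C / counts) / np.sum(1.0 / counts)
        return base ** ((epoch - E1) / (E2 - E1))
    f1 = np.asarray(f1_per_class, dtype=float)         # stage 3: "imbalanced diagnosis difficulty"
    base = (C / f1) / np.sum(1.0 / f1)
    return base ** ((epoch - E2) / (E - E2))
```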
§ EXPERIMENT
§.§ Dataset and Implementation Details
§.§.§ Dataset and Evaluation Metrics.
We evaluate the ECL on two publicly available dermoscopic datasets ISIC2018<cit.> and ISIC2019<cit.>. The 2018 dataset consists of 10015 images in 7 classes while a larger 2019 dataset provides 25331 images in 8 classes. The imbalanced factors α = N_max/N_min of the two datasets are all >50 (ISIC2018 58.30 and ISIC2019 53.87), which means that skin lesion classification suffers a serious imbalanced problem. We randomly divide the samples into the training, validation and test sets as 3:1:1.
We adopt five metrics for evaluation: accuracy (Acc), average precision (Pre), average sensitivity (Sen), macro f1-score (F1) and macro area under curve (AUC). Acc and F1 are considered as the most important metrics in this task.
§.§.§ Implementation Details.
The proposed algorithm is implemented in Python with Pytorch library and runs on a PC equipped with an NVIDIA A100 GPU.
We use ResNet50 <cit.> as backbone and the embedding dimension d is set to 128. We use SGD as the optimizer with the weight decay 1e-4. The initial learning rate is set to 0.002 and decayed by cosine schedule. We train the network for 100 epochs with a batch size of 64. The hyperparameters E_1, E_2, τ, λ, and μ are set to 20, 50, 0.01, 1, and 2 respectively. We use the default data augmentation strategy on ImageNet in <cit.> as T_1 for classification branch. And for CL branch, we add random grayscale, rotation, and vertical flip in T_1 as T_2 to enrich the data representations. Meanwhile, we only conduct the resize operation to ensure input size 224 × 224 × 3 during testing process.
The models with the highest Acc on validation set are chosen for testing. We conduct experiments in 3 independent runs and report the standard deviations in the supplementary material.
§.§ Experimental Results
§.§.§ Quantitative Results.
To evaluate the performance of our ECL, we compare our method with 10 advanced methods. Among them, focal loss <cit.>, LDAM-DRW <cit.>, logit adjust <cit.>, and MWNL <cit.> are the re-weighting loss methods. BBN <cit.> is the methods based on re-balancing training strategy while Hybrid-SC <cit.>, SCL <cit.>, BCL <cit.>, TSC <cit.> and ours are the CL-based methods. Moreover, MWNL and SCL have been verified to perform well in the skin disease classification task. To ensure fairness, we re-train all methods by rerun their released codes on our divided datasets with the same experimental settings. We also confirmed that all models have converged and choose the best eval checkpoints. The results are shown in Table <ref>. It can be seen that ECL has a significant advantage with the highest level in most metrics on two datasets. Noticeably, our ECL outperforms other imbalanced methods by great gains, e.g., 2.56% in Pre on ISIC2018 compared with SCL and 4.33% in F1 on ISIC2019 dataset compared with TSC. Furthermore, we draw the confusion matrixes after normalization in Fig. <ref>, which illustrate that ECL has significantly improved most of the categories, from minority to majority.
§.§.§ Ablation Study.
To further verify the effectiveness of the designs in ECL, we conduct a detailed ablation study shown in Table <ref> (the results on ISIC2018 are shown in supplementary material Table S2). First, we remove the contrastive learning (CL) branch and replace the balanced-weighted cross-entropy (BWCE) loss with the cross-entropy (CE) loss. We can see from the results that adding the CL branch significantly improves the network's data representation ability, with better performance than only adopting a classifier branch. Our BWCE loss also helps in learning a more unbiased classifier, with an improvement of 1.94% in F1 compared with CE on ISIC2019. Then we train ECL w/o the cycle update strategy. The overall performance of the network declines compared with training w/ the strategy, indicating that this strategy better enhances proxy learning through the whole data distribution. Finally, we also set the number of proxies equal across classes to explore whether the classification ability of the network improves simply due to the increase in the number of proxies. With more proxies, metrics fluctuate and do not increase significantly. However, using proxies generated in the reversed imbalance manner in the hybrid-proxy model (HPM) outperforms equal proxies in nearly all metrics, which shows that the additional proxies effectively enhance and enrich the information of tail classes.
§ CONCLUSION
In this work, we present a class-enhancement contrastive learning framework, named ECL, for long-tailed skin lesion classification. The hybrid-proxy model and balanced-hybrid-proxy loss are proposed to tackle the problem that SCL-based methods pay less attention to the learning of tail classes. Class-dependent proxies are generated in the hybrid-proxy model to enhance the information of tail classes, where rich relations between samples and proxies are utilized to improve the representation learning of the network. Furthermore, the balanced-weighted cross-entropy loss is designed to help train an unbiased classifier by considering both "imbalanced data" and "imbalanced diagnosis difficulty". Extensive experiments on the ISIC2018 and ISIC2019 datasets have demonstrated the effectiveness and superiority of ECL over other compared methods.
§ SUPPLEMENTARY MATERIAL: ECL: CLASS-ENHANCEMENT CONTRASTIVE LEARNING FOR LONG-TAILED SKIN LESION CLASSIFICATION
|
http://arxiv.org/abs/2307.05330v1 | 20230708201724 | The Value of Chess Squares | [
"Aditya Gupta",
"Shiva Maharaj",
"Nicholas Polson",
"Vadim Sokolov"
] | cs.AI | [
"cs.AI",
"cs.LG"
] |
The Value of Chess Squares
Aditya Gupta, Shiva Maharaj, Nicholas Polson, Vadim Sokolov
====================================================
Valuing chess squares and determining the placement of pieces on the board are the main objectives of our study. With the emergence of chess AI, it has become possible to accurately assess the worth of positions in a game of chess. The conventional approach assigns fixed values to pieces (K = ∞, Q = 9, R = 5, B = 3, N = 3, P = 1). We enhance this analysis by introducing marginal valuations for both pieces and squares. We demonstrate our method by examining the positioning of Knights and Bishops, and also provide valuable insights into the valuation of pawns. Notably, Nimzowitsch was among the pioneers in advocating for the significance of Pawn structure and valuation. Finally, we conclude by suggesting potential avenues for future research.
Key Words: AI, AlphaZero, Bayes, Chess, Deep Learning, Neural Network, Chess Piece Values, Knights, Bishops, Pawns.
Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be
a solution, a right procedure in any position. —John von Neumann
§ INTRODUCTION
Chess AI was pioneered by <cit.>, <cit.>, and <cit.>, who developed algorithms for solving chess. Shannon's approach was one of trial and error and “learning” the optimal policy. Turing (and Champernowne) valued the pieces marginally. They had the following positional evaluation functions: piece mobility, piece safety, king mobility, king safety, and castling. Modern-day methods are based on state-dependent objective function evaluation via learning (a.k.a. reinforcement learning) <cit.>. Solving chess is a daunting NP-hard computational problem, with the Shannon number, which measures the number of possible board states, being on the order of 10^120 (with roughly 10^40 of them legal). A major advance over pure look-ahead calculation engines is deep neural networks, which interpolate the value and policy functions from empirical game playing. For example, AlphaZero uses self-play to allow quick solution paths to be calculated and “learns” chess in less than four hours without any prior knowledge; see <cit.> and <cit.> for further discussion.
While much recent work has been done in Chess AI, the question of the value of a chess square has not yet been explored. In this work, we propose a system to measure the advantage/disadvantage offered by control of particular chess squares with different pieces. In particular, we propose a method for measuring the advantage/disadvantage states of the form s ∈Color×Piece×Square.
For example, the notion that certain state combinations, such as having a White Knight on f5, provide an advantage to White players is a widely held belief in the world of chess. We analyze these key combinations to see whether the games of high-level chess grandmasters provide merit to this belief. Our investigation will shed light on the strategic nuances and patterns that emerge from such positions and contribute to the understanding of chess at the highest level of play.
To value pieces on squares, we create a Neural Network to analyze a dataset of Grandmaster games and make predictions regarding winning probabilities. This uses Centipawn evaluations for specific subsets of chess states involving Knight and Bishop pieces. The results show that our model successfully generated predictions for White Knights and Bishops, as well as Black Knights and Bishops. The predictions provided valuable insights into the advantages and disadvantages associated with different states and positions on the chessboard. For example, the analysis revealed that Knights placed in the corners of the board had lower winning probabilities, likely due to their limited mobility and restricted influence. On the other hand, as Knights moved closer to the opponent's side, their positional value tended to increase, potentially allowing them to infiltrate enemy territory and exert greater control over the game. The study's results enhance the understanding of chess strategies and gameplay dynamics, aiding in strategic decision-making and the evaluation of different gameplay approaches.
Several chess maxims are reflected in our neural network predictions. For example, Pawns are observed to gain in value as they cross the 4th rank, highlighting the significance of advancing pawns beyond this milestone. Pawns positioned on the h and a files on the 5th rank are particularly powerful, contributing to central control and potential attacking opportunities. Pawns on the 6th rank, especially when supported by a pawn on the 5th rank, become highly threatening. Edge pawns tend to be weaker compared to central pawns, emphasizing the importance of controlling central squares. Additionally, kingside pawns are often more dangerous when advanced than queenside pawns, influencing the dynamics of the game.
Important squares for the white pawn are identified by examining the highest Centipawn evaluation c(s) values in each column. The squares e4, h4, c5, and h6 are highlighted as critical positions for white pawns. Occupying these squares provides advantages, such as central control, support for piece development, and potential attacking opportunities.
Similarly, for black pawns, the squares f5, d5, c4, d3, and f3 emerge as key positions. Placing pawns on these squares enhances black's control of central areas, supports piece coordination, and enables counter-play against white's position.
Understanding the significance of these key squares and applying the derived insights allows players to make informed decisions regarding pawn placement, pawn breaks, and strategic plans. This knowledge empowers players to optimize their pawn structures, control critical areas of the board, and leverage their pawns to gain a competitive advantage in the game.
The rest of the paper is outlined as follows. Section <ref> provides connections with previous literature. Section <ref> goes over the methods we used. Section <ref> provides an application of the proposed methods to Grandmasters and Magnus Carlsen, the World Chess Champion. Section <ref> provides an application to Pawns. Finally, Section <ref> concludes.
§.§ Connections with Previous Work
In the field of Chess AI, previous research has primarily focused on predicting the probabilities of winning w(s) and Centipawn evaluations c(s) for more simplified states. <cit.> explored simpler states where s belongs to the set of Piece. In their work, they utilized Logistic Regression methods to determine the value of a chess piece by creating a model that predicts the outcome of a game based on existing piece imbalances in a given position. A recent lichess study also tried similar approaches <cit.> <cit.>.
Building upon this previous work, our research extends the scope by proposing an augmented state representation s that encompasses Color×Piece×Square, thereby incorporating the square (location) information as an additional component of the state. This augmentation enables a more comprehensive understanding of the game dynamics by considering both the piece and its position on the board. Furthermore, we employ Neural Networks as our chosen methodology, allowing us to capture and model the intricate relationships between the state s and its corresponding Centipawn evaluation c(s).
One crucial distinction between our proposed approach and previous methodologies lies in the predictive target. While prior research focused on predicting the binary outcome of the game (win or loss), our proposed model aims to predict the Centipawn evaluation c(s) instead. By doing so, we shift the focus towards assessing the advantage or disadvantage of a particular chess position, providing more granular information beyond a simple win/loss prediction.
By using the augmented state representation and employing Neural Networks, our proposed model offers a more comprehensive and nuanced analysis of the chess game. This allows us to capture the intricate interplay between the color, piece type, square, and Centipawn evaluation, providing a deeper understanding of the factors influencing the game's outcome.
In the realm of Chess AI research, <cit.> made significant strides by employing Q-learning methods, as discussed in Section <ref>, with a specific focus on chess gambits. Their work aimed to uncover key characteristics and insights associated with these strategic opening moves by calculating Q-values for various chess gambits. This initial exploration into the application of Q-learning in analyzing and understanding chess gambits laid a solid foundation for further research in this field.
This paper extends the work of <cit.> and proposes novel architectures that can predict the probabilities of winning w(s) and Centipawn evaluations c(s) for all possible states s ∈Color×Piece×Square. While previous work focused on specific subsets of states, particularly those related to gambits, our approach seeks to encompass the entire chessboard by incorporating the color, piece type, and square information into a comprehensive state representation.
By embracing a wider scope of analysis that covers all possible states, our research aims to provide a more comprehensive understanding of the game, surpassing the limitations imposed by narrow subsets. To achieve this, we employ advanced techniques, such as Neural Networks, to capture the intricate relationships between the components of a state and the corresponding probabilities of winning w(s) and Centipawn evaluations c(s). This allows us to offer valuable insights into the dynamics of chess gameplay across a vast array of states, thereby providing a more holistic and comprehensive analysis.
Through our research, we strive to advance the field by developing robust and effective models capable of accurately predicting the probabilities of winning and assessing the Centipawn evaluations for any given state. By considering the full spectrum of states represented by Color×Piece×Square, our proposed architectures pave the way for a deeper understanding of chess strategies. They enable us to evaluate the efficacy of these strategies and unravel the intricacies of the game, ultimately contributing to the development of more sophisticated and intelligent Chess AI systems.
§ CHESS PIECE AND SQUARE VALUATION
Our work will provide values for states consisting of a combination of pieces and squares. For example, we may wish to assess the value of a fianchettoed bishop on the queen's side that controls a key diagonal. We denote this value by
V ( B , b2 )
or a white knight on a good outpost such as f5, which is denoted
V ( N , f5 ). As valuation will be based on the probability of winning, as calculated by a chess engine, the law of probability gives us a key identity
V ( N ) = ∑_position V ( N , position ),
where the sum is taken over all future positions. Hence, we can see that the initial value of the knight (i.e., V ( N ) = 3) comes from its total use throughout the game. Once the pieces have moved, there are different marginal values. Our goal is to be able to assess values such as V ( N , f5 ).
The commonly used chess piece valuations are given by
( K , Q , R , B , N , P ) = ( ∞ , 9 , 5 , 3 , 3 , 1 ).
These were modified in <cit.> through the use of machine learning techniques to ( K , Q , R , B , N , P ) = ( ∞ , 8.9 , 4.6 , 3.3 , 3 , 1 ), and a recent lichess study on the value of the pieces finds
( K , Q , R , B , N , P ) = ( ∞ , 9.82 , 4.93 , 3.28 , 3.16 , 1 ). We build on this line of research by adding square position to the state vector.
§.§ Centipawn Evaluation and Optimal Play
In our approach, we begin by formalizing the theoretical functions used in Q-learning. The value function, denoted as V(s), represents the probability of winning the game given a specific state s. This state s belongs to the set Color×Piece×Square, and it is worth emphasizing that V(s) is calculated with respect to the color parameter in any given state.
To assess any legal chess position, we derive a Centipawn evaluation denoted as c(s). The Centipawn serves as a measurement unit for evaluating the advantage in chess, where one Centipawn is equal to 1/100 of a pawn. The win probability w(s) can be directly obtained from c(s) using the following equation:
w(s) = ℙ(winning|s) = 1 / (1+10^-c(s)/4), and c(s) = 4 log_10( w(s) / (1-w(s)) ).
For example, if White has a c(s) = 0.2 advantage, then the win probability is w(s) ≈ 0.53.
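These two maps are straightforward to implement; a small Python sketch (with c(s) expressed on the same scale as in the formula above):

```python
import math

def win_prob(c):
    """Win probability w(s) implied by an advantage c(s)."""
    return 1.0 / (1.0 + 10 ** (-c / 4.0))

def advantage(w):
    """Inverse map: advantage c(s) implied by a win probability w(s)."""
    return 4.0 * math.log10(w / (1.0 - w))

print(round(win_prob(0.2), 3))   # ~0.53, as in the example above
```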
To address the sequential decision problem, we employ the dynamic programming technique known as Q-learning. This methodology involves breaking down the decision problem into smaller sub-problems. A key principle utilized in Q-learning is Bellman's principle of optimality, which states:
Bellman Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (Bellman, 1957)
To solve this sequential decision problem, we employ Backwards Induction, which determines the most optimal action at the last node in the decision tree (i.e., the checkmate position). Utilizing this information, we can then determine the best action for the second-to-last decision point, and this process continues backward until we identify the optimal action for every possible situation, effectively solving the Bellman equation.
In recent years, the field of artificial intelligence has witnessed significant advancements, particularly in the realm of AI algorithms like deep learning, alongside the development of remarkably powerful computer chess engines. These technological breakthroughs have revolutionized the way we evaluate and understand chess positions, enabling us to delve into the intricacies of the game with unparalleled precision.
One notable achievement stemming from these advancements is the ability to accurately assess chess positions. By leveraging AI algorithms, particularly deep learning techniques, we can now analyze and comprehend chess moves and strategies in a manner that was previously unimaginable. These algorithms have been specifically designed to process vast amounts of data, learn from patterns, and make informed decisions, ultimately resulting in highly accurate evaluations of chess positions.
Moreover, the advent of advanced computer chess engines, exemplified by the likes of Stockfish 15 <cit.>, has played a pivotal role in shaping the landscape of chess analysis and study. These engines, meticulously crafted through a combination of cutting-edge algorithms and extensive programming, have transformed the way chess is played and understood.
Gone are the days when determining the optimality of specific chess lines of play relied solely on human intuition and analysis. The emergence of chess engines has effectively shifted the burden from human players and theorists to these intelligent systems. By leveraging their computational power and algorithmic prowess, chess engines have assumed the responsibility of assessing various lines of play, thus solving the Bellman equation.
By adhering to Bellman's optimality condition, computer chess engines fulfill the requirements of possessing complete knowledge about the chess environment and evaluating all possible actions and their consequences. Through this rigorous analysis, they provide insights into the optimal move in a given position
§.§ Q-Values
The corresponding Q-value represents the probability of winning, given a policy/move a in a given state s, by following the optimal Bellman path thereafter:
Q(s, a) = ℙ(winning|s, a).
To address the optimal sequential decision problem, we employ Q-learning, which calculates the Q-matrix (<cit.>, <cit.>), denoted as Q(s, a) for a given state s and action a. The Q-value matrix describes the value of performing action a and then acting optimally thereafter. The current optimal policy and value function can be expressed as follows:
V(s) = max_a Q(s, a) = Q(s, a^*(s)), where a^*(s) = argmax_a Q(s, a).
The policy function establishes the optimal mapping from states to actions, and by substituting the Q-values, we obtain the value function for a given state.
In Section <ref>, we introduce a Neural Network architecture designed specifically for predicting the value of c(s) given the state s. By harnessing the predictive capability of this Neural Network, we can subsequently determine the probability of a player winning, denoted as w(s), based on their corresponding state s.
The Neural Network model comprises interconnected layers, including an input layer that accepts the state s as input. Through a series of computations within the hidden layers, the model captures complex relationships and patterns inherent in the input data. Ultimately, the output layer produces the predicted value of c(s).
By employing this trained Neural Network model, we can make predictions of c(s) for unseen states s. These predicted values can then be utilized to compute the probability of a player winning, denoted as w(s). The specific relationship between c(s) and w(s) is contingent upon the characteristics and dynamics of the chess game under analysis.
With the ability to predict w(s), we gain valuable insights into the probability of a player winning based on their current state s. This information can be harnessed in various ways, including evaluating strategic moves, assessing the overall advantage or disadvantage of specific board configurations, and guiding decision-making during gameplay.
The Neural Network's capacity to capture intricate patterns and relationships within the input data significantly contributes to more accurate predictions and a deeper understanding of the dynamics of the chess game. By incorporating the predicted values of c(s) and computing the corresponding probabilities of winning, we enhance our analytical capabilities and facilitate informed decision-making in the context of chess gameplay.
§.§ Neural Network Architecture
We design a specific 3-layer Neural Network aimed at predicting the value of a chess square and piece combination, denoted as c(s) for s ∈Color×Piece×Square, as shown in Figure <ref>. This model incorporates a hyperbolic tangent (tanh) activation function as a key component of its architecture. By applying the tanh activation function to the network layers, the model becomes capable of capturing and processing intricate patterns and relationships within the input data.
To ensure effective training of the model, we curate a meticulously crafted dataset. This dataset consists of two essential elements: the state information, represented by s, and the corresponding critical power level (CPL) recorded for each state. The state information encompasses relevant factors, variables, or parameters that define the chessboard system or environment.
Through supervised learning using this dataset, the model learns to associate the given state information with the corresponding CPL. Consequently, it acquires the ability to predict the CPL based on the provided state information as input. This training process involves iteratively adjusting the model's parameters to minimize the disparity between its predictions and the actual CPL values present in the training dataset.
The selection of the tanh activation function holds particular significance for our chess square and piece prediction model. The tanh function introduces non-linearity into the model, enabling it to capture complex relationships specific to chessboard configurations. This non-linearity allows the model to interpret intricate patterns and dependencies between the input variables and the output, facilitating more accurate predictions.
Furthermore, the tanh activation function maps the input values into the range [-1, 1], which is well-suited for our chess-related application. This bounded output range ensures that the model's predictions for critical power levels remain within a specific value range, aligning with the constraints and limitations inherent to chess strategies.
By incorporating the tanh activation function and training the model on the state information and corresponding CPL data, our proposed model strives to provide a robust and dependable framework for predicting critical power levels in various chess scenarios. Its ability to capture the intricate relationships specific to chess squares and pieces makes it particularly valuable for tasks such as evaluating the relative strength of different board configurations, predicting advantageous moves, and assisting in strategic decision-making during chess gameplay.
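A minimal PyTorch sketch of such a network; the one-hot encoding of s = (color, piece, square) and the hidden width are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class SquareValueNet(nn.Module):
    """Maps an encoded state s = (color, piece, square) to a predicted evaluation c(s)."""
    def __init__(self, in_dim=2 + 6 + 64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)   # predicted evaluation for the state s
```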
§.§ Data
In order to train the Neural Network effectively, a training dataset is constructed, comprising two essential components. This dataset consists of elements that contain both the state information denoted by s, as well as the corresponding evaluation associated with that particular state.
To gather the necessary chess game data for analysis, a vast mega database containing millions of previously played chess games is utilized. Within this database, each game is represented using the Portable Game Notation (PGN) notation, which allows for standardized representation and compatibility with various chess software and applications.
The process of constructing the training dataset involves parsing and evaluating all positions p within each game. The Forsyth-Edwards Notation (FEN) is employed to determine the location of relevant chess pieces within each position p. As a result, all states s ∈ p are extracted and added to the training dataset. To navigate through the moves of each chess game systematically, the Python Chess library is utilized. This library provides a comprehensive set of functions and classes specifically designed for working with chess games and positions, enabling efficient traversal of the stored games in the database.
For every position p within the dataset, an evaluation is obtained. To accomplish this, the research incorporates the Stockfish engine, a widely recognized and powerful chess engine. Stockfish employs advanced algorithms and evaluation functions to assess the strength of positions. By leveraging the capabilities of Stockfish, the training dataset can determine the evaluation of each position p on the chessboard accurately.
Finally, this evaluation is associated with all states s ∈ p, resulting in a comprehensive dataset that encompasses both the state s and the evaluation associated with the position p from which s was derived. This dataset serves as the foundation for training the Neural Network, enabling it to learn and make informed decisions based on the provided state information.
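A condensed sketch of this pipeline using the python-chess and Stockfish interfaces (the search depth, game limit, and tuple layout are illustrative choices):

```python
import chess.engine
import chess.pgn

def build_dataset(pgn_path, stockfish_path, depth=12, max_games=10):
    """Collect (color, piece_type, square, evaluation) rows from the games in a PGN file."""
    rows = []
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    with open(pgn_path) as pgn:
        for _ in range(max_games):
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                info = engine.analyse(board, chess.engine.Limit(depth=depth))
                cp = info["score"].white().score(mate_score=10000)   # centipawns from White's view
                for square, piece in board.piece_map().items():
                    rows.append((piece.color, piece.piece_type, square, cp))
    engine.quit()
    return rows
```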
§ KNIGHT AND BISHOP VALUATION
In this study, our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Knight, Bishop}}. Although our focus is initially on the Knight and Bishop pieces, it is important to note that the model can be expanded to encompass all pieces, offering a broader analysis of the game.
To provide a visual representation of the predicted values, heat maps are generated for both w(s) and c(s) corresponding to each valid combination within the specified subset. These heat maps offer a comprehensive overview of the probabilities of winning and Centipawn evaluations associated with the Knight and Bishop pieces in different states.
To illustrate the efficacy of our model, we first employ it to predict the Centipawn evaluations c(s) specifically for states where the color c is White and the piece p is Knight or Bishop. The resulting predictions are showcased in Figure <ref> and Figure <ref>, providing valuable insights into the relative advantages or disadvantages of such states. Building upon this, we further use c(s) to derive the corresponding probabilities of winning w(s) for these specific states. The model-generated probabilities are visualized in Figure <ref> and Figure <ref>, offering a clear representation of the likelihood of White winning the game given the occurrence of the specified state s.
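The precise equation relating c(s) and w(s) is fixed earlier in the paper; purely as an illustration of this conversion step, the sketch below uses a standard logistic mapping in which the scale constant 400 is our own assumption and not a value taken from this work.

```python
def win_probability(centipawns: float, scale: float = 400.0) -> float:
    """Map a centipawn advantage to a win probability in (0, 1) via a logistic curve.
    The scale parameter controls how quickly the probability saturates."""
    return 1.0 / (1.0 + 10.0 ** (-centipawns / scale))

# A level position maps to 0.5, a +100 centipawn advantage to roughly 0.64:
# win_probability(0) == 0.5, win_probability(100) ≈ 0.64
```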
By leveraging our proposed model, we gain a deeper understanding of the dynamics of the game, specifically in relation to the Knight and Bishop pieces within the context of the White color. This analysis not only facilitates strategic decision-making but also provides a basis for evaluating the effectiveness of various gameplay approaches. Moreover, the model's expandability to encompass all pieces allows for a comprehensive examination of the game across different states, enabling us to uncover additional insights and enhance the overall understanding of chess strategies and gameplay dynamics.
The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = Black, p ∈{Knight, Bishop}}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively.
Key squares for the Bishops can be seen in Figure <ref>.
The applications of the model on Grandmaster games provide valuable insights into the dynamics and strategies employed by top-level chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we gain a deeper understanding of the advantages and disadvantages associated with different chess positions. These insights have several practical applications in chess analysis and gameplay evaluation.
The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions.
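One possible way to produce such heat maps is to average the predicted values over the 64 squares for a fixed (color, piece) pair and render the resulting 8x8 grid; the sketch below assumes a dictionary mapping square names to averaged predictions and is otherwise independent of the model.

```python
import numpy as np
import matplotlib.pyplot as plt

FILES = "abcdefgh"

def square_grid(values: dict) -> np.ndarray:
    """Arrange per-square values (e.g. {"f5": 0.31, ...}) into an 8x8 array,
    with rank 8 in the top row so the plot matches White's point of view."""
    grid = np.full((8, 8), np.nan)
    for square, value in values.items():
        file_idx = FILES.index(square[0])
        rank_idx = int(square[1]) - 1
        grid[7 - rank_idx, file_idx] = value
    return grid

def plot_heatmap(values: dict, title: str) -> None:
    grid = square_grid(values)
    plt.imshow(grid, cmap="viridis")
    plt.colorbar()
    plt.xticks(range(8), list(FILES))
    plt.yticks(range(8), [str(r) for r in range(8, 0, -1)])
    plt.title(title)
    plt.show()
```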
By focusing on specific subsets of states, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to the overall gameplay strategies employed by Grandmasters. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations.
Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game and evaluate the effectiveness of various gameplay approaches. This broader perspective enhances our overall understanding of chess strategies and gameplay dynamics.
The predictions generated by the model can also be utilized for comparative analysis between different players or groups of players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states, we can identify patterns and trends in the strategies employed by Grandmasters. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities.
For example, in Figure <ref>, where w(s) represents the evaluation of the knight-square state, we can observe that the lowest values of w(s) are found in the white corners of the chessboard, specifically squares a1 and h1. This observation aligns with the widely held belief that knights are generally at their worst when confined to the corners of the board.
The disadvantage of having a knight in the corner may stem from its limited mobility and restricted scope of influence. When placed in the corners, knights have fewer potential squares to reach and can easily become isolated from the central and more strategically significant areas of the board.
On the other hand, as the knights move closer to the opponent's side of the board, their positional value tends to increase. This is most likely due to the knights' ability to infiltrate enemy territory, potentially attacking key squares, pieces, or pawns.
The increasing value of knight-square states as the knights advance can be attributed to several factors. Firstly, the proximity to the opponent's pieces and pawns provides more targets for the knight's maneuvers and attacks. Secondly, knights positioned closer to the enemy's side can exert greater control over central squares and influence the dynamics of the game. This control can restrict the opponent's options and potentially create weaknesses in their position.
Analyzing the values of knight-square states in different positions on the board, such as the corners and closer to the opponent's side, supports the claim that the placement of knights significantly affects their effectiveness. Understanding the strengths and weaknesses associated with different knight positions helps players make informed decisions about piece placement, strategic plans, and tactical considerations. Key squares for the knight to occupy are marked in Figure <ref>.
The applications of our model on Grandmaster games provide valuable insights into the dynamics and strategies employed in high-level chess. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of the game across different states, facilitating a deeper understanding of chess strategies and enhancing the overall gameplay experience.
§.§ Magnus Carlsen
Our proposed model can be further applied to gain insights into the playing style and performance of specific players. In this section, we focus on the world-renowned chess player Magnus Carlsen, the reigning World Chess Champion. By applying our model to the games played by Carlsen, we aim to uncover unique patterns and characteristics that contribute to his success and distinguish his gameplay from other Grandmasters.
Our proposed model is applied to a dataset consisting of 2000+ Carlsen games played in the last 5 years. Similar to the previous section, we begin by predicting the Centipawn evaluations c(s) for states where Carlsen plays as the “White" color and utilizes the “Knight" or “Bishop" piece. These predictions provide valuable insights into the relative advantages or disadvantages of Carlsen's chosen states, shedding light on his strategic decision-making process. The resulting heat maps, showcased in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, offer a visual representation of the predicted Centipawn evaluations for Carlsen's specific subset of states.
Building upon this analysis, we further utilize the Centipawn evaluations c(s) to derive the corresponding probabilities of winning w(s) for Carlsen's selected states. The model-generated winning probabilities provide a clear representation of Carlsen's likelihood of winning the game given the occurrence of the specified state s.
By focusing on Carlsen's gameplay, we gain a deeper understanding of his preferred strategies and tendencies when employing the Knight piece as the “White" color. This analysis allows us to assess the effectiveness of Carlsen's gameplay choices, providing insights into his decision-making process and potential areas of strength or improvement. Additionally, comparing Carlsen's results to the general dataset of Grandmaster games helps us evaluate his performance against the broader chess community.
The model is then used to determine c(s) and w(s) for states { (c, p, sq) ∈ s : c = Black, p ∈{Knight, Bishop}}, as can be seen in Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively.
The applications of the model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by one of the world's top chess players. By predicting the Centipawn evaluations c(s) and winning probabilities w(s) for specific subsets of states, we can gain a deeper understanding of the advantages and disadvantages associated with different chess positions in Carlsen's games. These insights have numerous practical applications in chess analysis and gameplay evaluation.
The predictions generated by the model offer a quantitative measure of the advantage/disadvantage provided by the Knight and Bishop pieces in specific states encountered by Magnus Carlsen. Heat maps depicting the predicted Centipawn evaluations c(s) and winning probabilities w(s) are presented for both White and Black knights and bishops in Carlsen's games. These visual representations provide a comprehensive overview of the relative strengths and weaknesses of these pieces in various positions as encountered by Carlsen.
By focusing on specific subsets of states in Carlsen's games, we can analyze the effectiveness of the Knight and Bishop pieces individually, as well as their contributions to Carlsen's overall gameplay strategies. This analysis aids in strategic decision-making, enabling players to assess the potential advantages or disadvantages associated with specific moves and piece configurations based on Carlsen's approach.
Furthermore, the expandability of the model allows for a comprehensive examination of the game across different states in Carlsen's games. By extending the analysis to include all pieces, we can uncover additional insights into the dynamics of the game as played by Carlsen and evaluate the effectiveness of various gameplay approaches employed by him. This broader perspective enhances our overall understanding of Carlsen's strategies and gameplay dynamics.
The predictions generated by the model can also be utilized for comparative analysis between Magnus Carlsen and other players. By analyzing the Centipawn evaluations and winning probabilities associated with specific states in Carlsen's games, we can identify patterns and trends in his strategies. This information can be leveraged to develop training materials and strategies for aspiring chess players, helping them improve their gameplay and decision-making abilities while considering Carlsen's approach.
In Figure <ref>, we discover the solution to one of the questions raised in Section <ref>: the value of the white knight on f5. Figure <ref> illustrates the distribution of c(s) for the White Knight on f5 in Carlsen's games. It is evident that the c(s) values for the White Knight exhibit a positive skew, indicating that this particular state s is typically associated with favorable c(s) values. Therefore, having a white knight positioned on f5 often confers an advantage.
By incorporating such insights into our analysis of Carlsen's games, we gain a more comprehensive understanding of the strengths, weaknesses, and strategic implications of the Knight and Bishop pieces as employed by Magnus Carlsen.
In sum, the applications of our model on Magnus Carlsen's games provide valuable insights into the dynamics and strategies employed by this world-class chess player. The predictions of Centipawn evaluations and winning probabilities offer a quantitative measure of the advantages and disadvantages associated with specific chess positions encountered by Carlsen, aiding in strategic decision-making and gameplay evaluation. The expandability of the model allows for a comprehensive analysis of Carlsen's games, facilitating a deeper understanding of his strategies and enhancing the overall gameplay experience.
§ PAWN VALUATION
No pawn exchanges, no file-opening, no attack—Aron Nimzowitsch
Our study is not complete until we apply the model to the mighty pawn. Our proposed model is applied to a comprehensive dataset comprising over 2000 Grandmaster games. The primary objective is to predict the probabilities of winning w(s) and Centipawn evaluations c(s) for a specific subset of states, namely those denoted by { (c, p, sq) ∈ s : p ∈{Pawn}}.
The results of the model when applied to the White Pawn are shown in Figure <ref> and Figure <ref>.
We note a few chess maxims that are reflected in the model predictions.
* Pawns gain in value as they cross the 4th rank: This point highlights an important principle in chess, where advancing pawns beyond the 4th rank often leads to increased positional strength and potential threats. As pawns move forward, they gain control over more squares, restrict the opponent's piece mobility, and open up lines for their own pieces. Crossing the 4th rank is a significant milestone that can significantly impact the dynamics of the game.
* Pawns on the h and a files are very good on the 5th rank: This point emphasizes the strategic importance of pawns positioned on the h and a files when they reach the 5th rank. Pawns on these files can have a powerful influence on the game, particularly in the endgame. Placing pawns on the 5th rank provides support for the central pawns, helps control key central squares, and may facilitate piece activity and potential attacks on the opponent's position.
* Pawns on the 6th rank are deadly, especially when supported by a pawn on the 5th rank: This point highlights the strength of pawns on the 6th rank, which is just two steps away from promotion. Pawns advanced to this rank become highly dangerous, as they pose a direct threat to promote to a more powerful piece. When supported by a pawn on the 5th rank, these pawns can create a formidable pawn duo, exerting significant pressure on the opponent's position and potentially leading to advantageous tactical opportunities.
* Edge pawns tend to be weaker than central pawns: This point draws attention to the relative weakness of pawns placed on the edges of the board (such as the a and h files) compared to pawns in central positions. Edge pawns have fewer potential squares to advance or support other pieces, limiting their mobility and influence. In contrast, central pawns control more critical squares, contribute to a stronger pawn structure, and have a greater impact on the overall game dynamics.
* Kingside pawns are more dangerous when advanced than queenside pawns: This point highlights a positional aspect where advancing pawns on the kingside (the g- and h-files) can have a more immediate and aggressive impact compared to advancing pawns on the queenside (the a- and b-files). Advanced kingside pawns can create open lines, potentially exposing the opponent's king to attacks or weakening their pawn structure. Understanding this distinction helps players assess the strategic implications of pawn advances on different sides of the board.
Important squares for the white pawn can also be seen by examining the highest Centipawn evaluation c(s) values in each column. By analyzing the rows in the heatmap corresponding to the white pawns, we can identify squares that consistently have high Centipawn evaluations, indicating their significance for white pawns.
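Reading these squares off a heat map amounts to taking, for every file, the rank with the largest value; a short NumPy helper, assuming the 8x8 grid layout of the plotting sketch above, is given below.

```python
import numpy as np

def best_square_per_file(grid: np.ndarray) -> list:
    """Return, for each file (column), the square whose rank attains the highest value.
    Assumes rank 8 is stored in row 0, as in the plotting sketch above."""
    files = "abcdefgh"
    best_rows = np.nanargmax(grid, axis=0)       # row index of the per-column maximum
    return [f"{files[col]}{8 - row}" for col, row in enumerate(best_rows)]
```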
Scanning the board rank by rank from White's perspective, the squares with the highest c(s) values are e4, h4, c5, and h6. These squares represent critical positions for white pawns.
The square e4, located in the fourth row, is a well-known central square in chess. Occupying e4 with a white pawn can provide several advantages, such as controlling important central squares, supporting piece development, and establishing a strong pawn presence in the center.
Also in the fourth row, we find the square h4. Although it is on the edge of the board, it is an important square for white pawns. Placing a pawn on h4 can serve multiple purposes, including potentially supporting a kingside pawn storm, reinforcing control over the g5 square, or preparing to launch an attack on the opponent's position.
In the fifth row, we encounter the square c5. Occupying c5 with a white pawn can contribute to a solid pawn structure and provide control over central squares. It may also support piece mobility and influence the game's dynamics, particularly in the context of pawn breaks or central pawn exchanges.
Finally, in the sixth row, the square h6 stands out with the highest c(s) value. Placing a pawn on h6 can have strategic implications, such as potentially supporting kingside attacks or acting as a defensive shield for the king.
By identifying these squares with high c(s) values, we gain valuable insights into the strategic positioning of white pawns. These squares offer opportunities for central control, piece activity, attacking potential, and overall pawn structure. Understanding the significance of these squares helps players make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their advantage in the game.
We next apply this model to the black pawns. The results are shown in Figure <ref> and Figure <ref>.
Similar conclusions can be drawn for the black pawns. By analyzing the highest Centipawn evaluation c(s) values in each column for the black pawns, we can identify the key squares that consistently have high evaluations, signifying their significance for black pawns.
Just like for the white pawns, the rows in the heatmap corresponding to the black pawns reveal important squares. The squares with the highest c(s) values for black pawns are f5, d5, c4, d3, and f3. These squares play a crucial role in determining the strength and strategic positioning of the black pawns.
The square f5, located in the fifth row, emerges as one of the critical squares for black pawns. Placing a pawn on f5 can provide black with control over central squares, potential support for piece development, and opportunities for counterplay.
The square d5 stands out with a high c(s) value. Occupying d5 with a black pawn contributes to central control, potentially restricts white's pawn breaks, and provides a solid foundation for black's pawn structure.
In the fourth row, the square c4 is identified as an important square for black pawns. Occupying c4 can offer black strategic advantages, such as central control, potential support for piece activity, and the creation of tactical opportunities.
Furthermore, the square d3 in the third row holds significance for black pawns. Placing a pawn on d3 strengthens black's central presence, potentially restricts white's pawn advancements, and helps solidify black's position in the center.
Lastly, the square f3 in the third row also demonstrates a high c(s) value. Occupying f3 with a black pawn can support kingside counterplay, potentially restrict white's piece mobility, and offer opportunities for tactical operations.
Analyzing these key squares for black pawns, namely f5, d5, c4, d3, and f3, provides valuable insights into the strategic considerations and potential strengths of the black pawn structure. Occupying and controlling these squares strategically enhances black's control of central areas, supports piece coordination, and enables counterplay against white's position.
By understanding the significance of these squares, players can make informed decisions regarding pawn placement, pawn breaks, and strategic plans to maximize their potential advantage and navigate the complexities of the game from the black perspective.
§ DISCUSSION
In this paper, we presented a comprehensive methodology for evaluating chess positions and predicting the probabilities of winning w(s) and Centipawn evaluations c(s). Our approach utilized a combination of Centipawn evaluation, Q-learning, and Neural Networks to capture the complex dynamics of the game and facilitate informed decision-making.
We began by formalizing the theoretical functions used in Q-learning, such as the value function V(s) and Centipawn evaluation c(s). The value function represented the probability of winning the game given a specific state s, while the Centipawn evaluation measured the advantage in chess. We derived the win probability w(s) from the Centipawn evaluation using a mathematical equation.
To address the sequential decision problem, we employed the dynamic programming technique of Q-learning, which involved breaking down the problem into smaller sub-problems and solving the Bellman equation. The Q-value matrix represented the probability of winning given a policy/move in a specific state, and we determined the optimal policy and value function using the Q-values.
To predict Centipawn evaluations c(s), we designed a Neural Network architecture specifically tailored for chess positions. This model incorporated the tanh activation function to capture intricate patterns and relationships within the input data. By training the Neural Network on a meticulously crafted dataset, we could make accurate predictions of Centipawn evaluations for unseen states.
Our methodology expanded upon previous work by considering a comprehensive state representation that encompassed color, piece type, and square information. This allowed for a more nuanced analysis of the game dynamics and a deeper understanding of the factors influencing the outcome. We also showcased the applications of our model, focusing on specific subsets of states, such as the Knight and Bishop pieces, and visualizing the predicted probabilities of winning and Centipawn evaluations through heat maps.
Further research in this area could explore the dynamic nature of square values, taking into account positional changes and the interaction between different pieces. By refining and expanding our methodology, we can continue to deepen our understanding of the intricate dynamics of chess positions and contribute to advancements in the field of chess AI.
In conclusion, our methodology provides a robust framework for evaluating chess positions and making informed decisions during gameplay. By combining Centipawn evaluation, Q-learning, and Neural Networks, we achieved a comprehensive analysis of the game dynamics and enhanced our ability to assess strategic moves and guide decision-making. Our research contributes to the development of more sophisticated and intelligent Chess AI systems, paving the way for deeper insights into the intricacies of the game.
With our methodology, we strive to unravel the logical relations of chess and provide a comprehensive understanding of the game, empowering players and researchers alike to unlock new levels of strategic thinking and mastery.
plainnat
|
http://arxiv.org/abs/2307.04611v1 | 20230710145339 | Maximal violation of the Bell-CHSH inequality via bumpified Haar wavelets | [
"David Dudal",
"Philipe De Fabritiis",
"Marcelo S. Guimaraes",
"Itzhak Roditi",
"Silvio P. Sorella"
] | hep-th | [
"hep-th",
"quant-ph"
] |
KU Leuven Campus Kortrijk–Kulak, Department of Physics, Etienne Sabbelaan 53 bus 7657, 8500 Kortrijk, Belgium
Ghent University, Department of Physics and Astronomy, Krijgslaan 281-S9, 9000 Gent, Belgium
CBPF — Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, Brazil
UERJ — Universidade do Estado do Rio de Janeiro, Rua São Francisco Xavier 524, 20550-013, Rio de Janeiro, Brazil
CBPF — Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, Brazil
UERJ — Universidade do Estado do Rio de Janeiro, Rua São Francisco Xavier 524, 20550-013, Rio de Janeiro, Brazil
We devise a general setup to investigate the violation of the Bell-CHSH inequality in the vacuum state in the context of Quantum Field Theory. We test the method with massless spinor fields in (1+1)-dimensional Minkowski space-time. Alice's and Bob's test functions are explicitly constructed, first by employing Haar wavelets, which are then bumpified into proper test functions via a smoothing procedure relying on the Planck-taper window function. Relativistic causality is implemented by requiring the support of Alice's and Bob's test functions to be located in the left and right Rindler wedges, respectively. Violations of the Bell-CHSH inequality as close as desired to Tsirelson's bound are reported. We briefly comment on the extra portal that this opens, compared to earlier works, for scrutinizing Bell-CHSH inequalities with generic, interacting Quantum Field Theories.
Maximal violation of the Bell-CHSH inequality via bumpified Haar wavelets
Silvio P. Sorella
August 12, 2023
=========================================================================
Introduction —
Bell's inequalities have been a pivotal issue in Quantum Mechanics since their formulation <cit.>. It is certainly appropriate to state that the principle of relativistic causality plays a key role in understanding the nature of this inequality. Requiring that Alice and Bob are space-like separated prevents any possible interference between their respective measurements, and it is worth recalling that the closure of the so-called causality loophole required highly sophisticated experimental tools <cit.>. As far as relativistic causality is concerned, it seems natural to look at the Bell-CHSH inequality within the realm of Quantum Field Theory (QFT), a quite difficult endeavor, already investigated in the pioneering works <cit.> (see also <cit.> for more recent attempts). In particular, the authors of <cit.> have been able to show, by using methods of Algebraic QFT, that the Bell-CHSH inequality can be maximally violated already at the level of free fields.
Goal —
The aim of the present work is to exhibit an explicit violation of the Bell-CHSH inequality in the vacuum within the QFT framework, thus giving continuity to the work done in <cit.>. More precisely, we shall be able to provide a systematic construction of appropriate test functions needed to detect the Bell-CHSH inequality violation by means of wavelet representations. To our knowledge, this is the first time in which such an explicit construction is presented. We also highlight the difference between our approach and that of <cit.>.
Method —
The whole procedure relies on the following:
(i) identify a Hermitian dichotomic field operator 𝒜: 𝒜^†= 𝒜 and 𝒜^2=1.
(ii) use smearing to localize the dichotomic field operators entering the Bell-CHSH inequality in suitable space-like separated regions of Minkowski space-time. Following <cit.>, we shall employ two pairs of smooth test functions with compact supports (f,f') and (g,g'), referred to as Alice's and Bob's test functions, aka. bump functions. Relativistic causality is implemented by demanding that the supports of (f,f') and (g,g') belong to Rindler's left and right wedges, respectively;
(iii) express the Bell-CHSH inequality in terms of the inner products between (f,f') and (g,g'). The vacuum expectation value of the Bell-CHSH correlator in QFT,
⟨𝒞⟩ = ⟨ 0 | i [( 𝒜_f + 𝒜_f') 𝒜_g + (𝒜_f - 𝒜_f') 𝒜_g'] | 0 ⟩,
can then be reexpressed in terms of inner products between the test functions, namely,
⟨𝒞⟩ = i ( ⟨ f | g ⟩ + ⟨ f | g' ⟩ + ⟨ f' | g ⟩ - ⟨ f' | g' ⟩),
where ⟨ f | g ⟩ stands for the Lorentz-invariant inner product, see Eq. (<ref>);
(iv) show that ⟨ f | g ⟩, ⟨ f | g' ⟩, ⟨ f' | g ⟩, ⟨ f' | g' ⟩ can be constructed so that the Bell-CHSH inequality is violated, i.e.,
2 < | ⟨𝒞⟩ | ≤ 2√(2) ,
where the value 2√(2) is Tsirelson's bound <cit.>. We shall proceed in two steps. First, we search for a preliminary set of would-be test functions (f, f'), (g, g') adopting a Haar wavelet finite series representation. In the second step, the final form of the test functions (f,f'), (g,g') is achieved via a bumpification procedure based on the Planck-taper window function. As a final result, we obtain explicit violations of the Bell-CHSH inequality as close as desired to Tsirelson's bound.
Outlook —
Let us emphasize that nowadays there is a great interest in testing Bell-CHSH inequalities in high-energy physics <cit.>. This allows one to probe entanglement in an energy regime never explored before, in which the appropriate description of the physical phenomena is in the realm of QFT. Our approach might, in principle, be used for any QFT, including interacting theories, despite the numerical challenges that will naturally come with more complicated models. Let us limit ourselves here to mentioning that, in the interacting case, the inner products between the test functions are modified in such a way that the kernel corresponding to the free Wightman two-point function gets replaced by the Källen-Lehmann spectral density, encoding the information about the interaction.
Spinor fields —
Let us introduce a QFT for a free spinor field in (1+1)-dimensions, with action
S = ∫ d^2x [ψ̅(i γ^μ∂_μ - m ) ψ].
In the above expression, ψ_α = (ψ_1, ψ_2)^t is a Dirac field described by a two-component spinor with complex ψ_1, ψ_2. In this work, we shall restrict ourselves to the free case and mainly consider the massless limit.
The Clifford algebra is given by {γ^μ , γ^ν} = 2 g^μν, where the metric is g_μν = diag(+1,-1) and the Dirac matrices are chosen as γ^0 = σ_x and γ^1 = i σ_y, being σ_x, σ_y the Pauli matrices. According to canonical quantization, we introduce the non-trivial equal-time anti-commutation relations
{ψ_α(t,x), ψ_β^†(t,y) } = δ_αβδ(x-y).
The Dirac field can be written in a plane wave expansion
ψ(t,x) =∫dk/2 πm/ω_k[ u(k) c_k e^-i k_μ x^μ + v(k) d_k^† e^+i k_μ x^μ],
where k_μ x^μ = ω_k t - kx and ω_k = √(k^2 + m^2).
For the algebra of creation and annihilation operators, we get
{ c_k, c_q^†} = { d_k, d_q^†} = 2 πω_k/mδ(k-q).
Evaluating the anti-commutators for different space-time points x^μ and y^μ, we find
{ψ_α(x), ψ̅_β(y) } = (i γ^μ∂_μ - m)_αβ i Δ_PJ(x-y),
with
i Δ_PJ(x) = ∫ dk/(2π) 1/(2 ω_k) (e^-i k x - e^+i k x).
The Pauli-Jordan distribution Δ_PJ is a real, Lorentz-invariant solution of the Klein-Gordon equation, odd under the exchange x → -x. Furthermore, this distribution vanishes outside the light cone (i.e., Δ_PJ(x) = 0 if x^2 <0), ensuring that measurements at space-like separated points do not interfere (cf. relativistic causality).
Smearing —
Quantum fields are operator-valued distributions <cit.> and must be smeared in order to give well-defined operators acting on the Hilbert space. In the present case, the smearing procedure is achieved by considering two-component spinor test functions of the form h_α (x) = (h_1(x), h_2(x))^t, where h_1, h_2 are commuting test functions belonging to the space C_0^∞ (ℝ^4) of infinitely differentiable functions with compact support. For the smeared spinor quantum fields we have
ψ(h) = ∫ d^2 x h̅^α(x) ψ_α(x),
ψ^†(h) = ∫ d^2 x ψ̅^α(x) h_α(x).
Due to the causal structure of the Pauli-Jordan distribution, if we consider two test functions (h, h') that have space-like separated supports, we will find {ψ(h), ψ^†(h') } = 0, which reflects causality at the level of smeared fields.
From the definition of the smeared spinor field, by plugging the plane wave expansion, we find
ψ(h) = c_h + d_h^†, ψ^†(h) = c_h^† + d_h,
where the smeared creation and annihilation operators read
c_h = ∫dk/2 πm/ω_kh̅(k) u(k) c_k; d_h = ∫dk/2 πm/ω_kv̅(k) h(-k) d_k,
with analogous equations for their conjugate expressions. From the canonical anti-commutation relations and the above definitions, one can compute the non-trivial anti-commutation relations in terms of the smeared creation and annihilation operators
{ c_h, c_h'^†} = ∫ dk/(2π) 1/(2 ω_k) h̅(k) (k + m) h'(k),
{ d_h, d_h'^†} = ∫ dk/(2π) 1/(2 ω_k) h̅'̅(-k) (k - m) h(-k),
where in both expressions the constraint ω_k = √(k^2 + m^2) is implicitly understood.
Bell setup —
Let us face now the introduction of a Bell quantum field operator, Hermitian and dichotomic. Following <cit.>, we shall consider the smeared operator
𝒜_h = ψ(h) + ψ^†(h).
It is immediate to see that 𝒜_h^† = 𝒜_h. As it is customary, the inner product between test functions is obtained through the two-point Wightman function associated with the operator 𝒜_h, that is, ⟨ h | h' ⟩≡⟨ 0 |𝒜_h 𝒜_h'| 0 ⟩. Using the anticommutation relations of the smeared creation and annihilation operators, the vacuum expectation value of the product 𝒜_h 𝒜_h' is easily evaluated, yielding
⟨ h | h' ⟩ = ∫ dk/(2π) 1/(2 ω_k) [h̅(k) (k + m) h'(k) + h̅'̅(-k) (k - m) h(-k)].
When h=h', we can identify the above expression as the norm squared of the test function h. In particular, from the anti-commutation relations,
⟨𝒜_h^2 ⟩ = || h ||^2,
showing that, as desired, 𝒜_h is a dichotomic operator, provided the test function h is normalized to 1 <cit.>. For the Bell-CHSH correlator in the vacuum, we write
⟨𝒞⟩ = ⟨ 0 | i [ ( 𝒜_f + 𝒜_f') 𝒜_g + (𝒜_f - 𝒜_f') 𝒜_g'] | 0 ⟩,
where (f,f') and (g,g') are Alice's and Bob's test functions whose supports are located in Rindler's left and right wedges, respectively, and the factor i is due to the anti-commuting nature of the spinor fields. The above expression can also be written in a Quantum Mechanics-like version:
⟨𝒞⟩ = i (⟨ f | g ⟩ + ⟨ f | g' ⟩ + ⟨ f' | g ⟩ - ⟨ f' | g' ⟩).
As already stated, the main goal of this work is to explicitly construct test functions such that the Bell-CHSH inequality is violated. This amounts to finding test functions (f, f', g, g') belonging to the space 𝒞_0^∞(ℝ^4), normalized to 1, such that (f, f') and (g, g') have space-like separated supports and, finally, such that we have |⟨𝒞⟩| > 2. From the definition, Eq. (<ref>), imposing a reality condition on the test functions of the form f_i^*(k) = f_i(-k), g_i^*(k) = g_i(-k), there is a simplification of the inner product expression, which becomes
⟨ f | g ⟩ = ∫ dk/(2π) [((ω_k + k)/ω_k) f_1^*(k) g_1(k) + ((ω_k - k)/ω_k) f_2^*(k) g_2(k)].
In particular, considering the massless case, we can rewrite the above expression as
⟨ f | g ⟩ = ∫dk/2 π [(1 + sgn(k)) f_1^*(k)g_1(k)
+ (1 - sgn(k)) f_2^*(k)g_2(k) ],
where sgn(k) = k / | k |. Going back to configuration space, we find ⟨ f | g ⟩ = I_1 + I_2, where
I_1 =∫ dx [f_1^*(x) g_1(x) + f_2^*(x) g_2(x)],
I_2 = -i/π ∫ dx dy 1/(x-y) [f_1^*(x) g_1(y) - f_2^*(x) g_2(y)].
Here we will only consider real test functions. We remark that, with this assumption, we immediately find that ⟨ f | f ⟩ = I_1, since the contribution I_2 vanishes by symmetry arguments under the exchange of x and y. Furthermore, if f and g have disjoint supports, ⟨ f | g ⟩ = I_2∈ iℝ.
Wavelets —
Daubechies wavelets <cit.> are widely used to treat problems in signal processing and data compression <cit.>, and more recently, have been used in many different contexts, in QFT and beyond <cit.>. The main idea here is to expand the test functions in terms of a finite number of Haar wavelets, a particularly useful type of Daubechies wavelets, well-known for their approximation abilities <cit.>. These functions provide an orthonormal basis for the square-integrable functions on the real line and, moreover, have a compact support whose maximum size can be controlled. Let us introduce the mother wavelet ψ as
ψ(x) = {[ +1, if x ∈[ 0 , 1/2),; -1, if x ∈[ 1/2 , 1 ),; 0, otherwise. ].
One can then define the generic Haar wavelet ψ_n,k as
ψ_n,k(x) = 2^n/2 ψ(2^n x -k)
with support on I_n,k = [ k 2^-n, (k+1) 2^-n) and piecewise constant, giving + 2^n/2 on the first half of I_n,k and - 2^n/2 on the second half. They satisfy
∫ dx ψ_n,k(x) ψ_m,ℓ(x) = δ_nmδ_kℓ.
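A direct Python transcription of these definitions, together with a numerical sanity check of the orthonormality relation, might look as follows; the quadrature grid is an arbitrary illustrative choice.

```python
import numpy as np

def haar(x: np.ndarray) -> np.ndarray:
    """Mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((x >= 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

def haar_nk(x: np.ndarray, n: int, k: int) -> np.ndarray:
    """Haar wavelet psi_{n,k}(x) = 2^{n/2} psi(2^n x - k), supported on [k 2^-n, (k+1) 2^-n)."""
    return 2.0 ** (n / 2) * haar(2.0 ** n * x - k)

# Numerical check of orthonormality on a fine grid:
x = np.linspace(-4.0, 4.0, 800001)
dx = x[1] - x[0]
print(np.trapz(haar_nk(x, 2, 1) ** 2, dx=dx))                  # close to 1
print(np.trapz(haar_nk(x, 2, 1) * haar_nk(x, 3, 2), dx=dx))    # close to 0
```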
Accordingly, for each would-be test function entering the Bell-CHSH inequality, we write
f_j(x) = ∑_n=n_i^n_f∑_k=k_i^k_f f_j(n, k) ψ_n,k(x),
where f_j(n, k) are the coefficients associated with the Haar wavelet basis element ψ_n,k for the j-th component of the spinor test function f(x). We remark that these parameters {n_i,n_f,k_i,k_f} set the range and resolution of the Haar wavelet expansion.
In order to explicitly implement relativistic causality, we will consider the hypersurface t=0 and adopt the supports of (f,f') corresponding to Alice's lab on the negative position axis, as well as the supports of (g,g') corresponding to Bob's lab on the positive axis. This can be achieved taking k ≤ -1 for (f,f') and k≥ 0 for (g,g').
For the norm of the test function f we obtain
⟨f|f⟩ = ∑_n,k[f_1^2(n,k) + f_2^2(n,k)]
with analogous expressions for {f', g, g'}. In the same vein, we can evaluate ⟨f|g⟩, (naturally with similar expressions for ⟨f' |g⟩, ⟨f|g' ⟩ and ⟨f' |g' ⟩), finding
⟨f|g⟩ = ∑_n,k,m,l[f_1(n,k) g_1(m,l) - f_2(n,k) g_2(m,l)] ×
×[-i/π ∫ dx dy 1/(x-y) ψ_n,k(x) ψ_m,l(y)].
The advantage of using the Haar wavelet expansion is that all of the above integrals can be evaluated in closed form thanks to the piecewise constant nature of the wavelets. We refrain from listing the explicit expressions here as these are quite lengthy. Needless to say, numerical integration routines lead to consistent results. Therefore, given the parameters {n_i, n_f, k_i, k_f}, one can obtain all the inner products, and then further manipulate the Bell-CHSH inequality, searching for the conditions to achieve its explicit violation. More precisely, we shall impose
⟨f|f⟩ = ⟨f' |f'⟩=⟨g|g⟩=⟨g' |g'⟩ = 1 ,
⟨f|g⟩ = ⟨f' |g⟩=⟨f|g'⟩=-⟨f' |g'⟩=-i √(2)λ/(1+λ^2)
with λ∈ (√(2)-1,1). It is then easily checked that | ⟨𝒞⟩|= 4√(2)λ/(1+λ^2) ∈ (2,2√(2)). Then, we can search for the wavelet coefficients that satisfy this constraint through a suitable numerical minimization procedure. Notice that the constraints (<ref>) are quadratic in nature, which in general leads to a well-posed problem, see e.g. <cit.>.
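As an illustration of how this numerical search can be organised, the sketch below treats the constraints as a least-squares problem over the wavelet coefficients. It assumes that the purely imaginary cross pairings have been reduced to a real matrix K, with K[a, b] holding the closed-form value of -(1/π)∫∫ ψ_a(x) ψ_b(y)/(x-y) dx dy for Alice's basis element a and Bob's basis element b; assembling K and the choice of optimizer are our own illustrative assumptions and are not prescribed by the text.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta: np.ndarray, K: np.ndarray, lam: float) -> np.ndarray:
    """Residuals of the unit-norm and cross-pairing constraints.
    theta packs the coefficients of (f, f') over Alice's basis and (g, g') over Bob's."""
    na, nb = K.shape
    f1, f2, fp1, fp2 = np.split(theta[:4 * na], 4)
    g1, g2, gp1, gp2 = np.split(theta[4 * na:], 4)
    t = np.sqrt(2.0) * lam / (1.0 + lam ** 2)

    def cross(a1, a2, b1, b2):
        # imaginary part of the inner product between an Alice-type and a Bob-type function
        return a1 @ K @ b1 - a2 @ K @ b2

    return np.array([
        f1 @ f1 + f2 @ f2 - 1.0,        # <f|f>   = 1
        fp1 @ fp1 + fp2 @ fp2 - 1.0,    # <f'|f'> = 1
        g1 @ g1 + g2 @ g2 - 1.0,        # <g|g>   = 1
        gp1 @ gp1 + gp2 @ gp2 - 1.0,    # <g'|g'> = 1
        cross(f1, f2, g1, g2) + t,      # Im<f|g>   = -t
        cross(f1, f2, gp1, gp2) + t,    # Im<f|g'>  = -t
        cross(fp1, fp2, g1, g2) + t,    # Im<f'|g>  = -t
        cross(fp1, fp2, gp1, gp2) - t,  # Im<f'|g'> = +t
    ])

# Example call (K is assumed to have been precomputed from the closed-form integrals):
# na, nb = K.shape
# theta0 = np.random.default_rng(0).normal(size=4 * (na + nb))
# sol = least_squares(residuals, theta0, args=(K, 0.99))
```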
Bumpification —
The strategy presented above still needs to be refined. The would-be test functions are not smooth, due to the jumps in the Haar wavelets. Nevertheless, there is a class of smooth bump functions with compact support, which can be used to approximate the Haar wavelets as precisely as desired.
Following <cit.>, we define the basic Planck-taper window function with support on the interval [0,1] by
σ_0(x,ε) = {[ [1 + exp( ε (2 x-ε )/x (x-ε )) ]^-1, if x ∈( 0, ε),; +1, if x ∈[ε, 1-ε],; [1 + exp(ε (-2 x-ε +2)/(x-1) (x+ε -1)) ]^-1, if x ∈( 1-ε, 1 ),; 0, otherwise. ].
In the above expression, the parameter ε regulates the fraction of the window over which the function smoothly rises from 0 to 1 and falls from 1 to 0. This gives a smooth version of the basic rectangle, the deviation from which can be made arbitrarily small by tuning ε.
With this object in hand, we then introduce the mother bump function with support on [0,1] by
σ(x,ε) = {[ +σ_0(2x, ε), if x ∈( 0, 1/2),; -σ_0(2x-1,ε), if x ∈( 1/2, 1 ),; 0, otherwise. ].
Finally, we can define the C_0^∞(ℝ) version of the Haar wavelet,
σ_n,k(x,ε) = 2^n/2 σ(2^n x-k,ε).
This is indeed a smooth bump function with support on the interval I_n,k that approximates as precisely as we want ψ_n,k(x) per choice of ε, as illustrated in Fig. <ref>. As such, each wavelet solution of the form (<ref>) can be replaced by a bumpified version,
f_j(x) = ∑_n=n_i^n_f∑_k=k_i^k_f f_j(n, k) σ_n,k(x,ε),
whilst retaining the various expansion coefficients f_j(n, k), so that all crucial properties encoded in (<ref>) are reproduced up to arbitrary precision if ε is chosen small enough.
Results —
Finally, we present and discuss our main results for the test functions leading to the violation of Bell-CHSH inequalities. We will search for a solution by a numerical minimization (a least-squares fit, so to say) of ℛ=|⟨f|f⟩-1|^2+|⟨f'|f'⟩-1|^2+|⟨g|g⟩-1|^2+|⟨g' |g'⟩-1|^2+|⟨f|g⟩+i √(2)λ/(1+λ^2)|^2 + |⟨f|g'⟩+i √(2)λ/(1+λ^2)|^2 + |⟨f' |g⟩+i √(2)λ/(1+λ^2)|^2 + |⟨f' |g'⟩-i √(2)λ/(1+λ^2)|^2. By choosing the wavelet basis sufficiently large, ℛ becomes zero up to the desired precision, after which we stop the minimization. For the record, we also tested that directly minimizing |⟨f|f⟩-1|^2+|⟨f' |f'⟩-1|^2+|⟨g|g⟩-1|^2+|⟨g' |g'⟩-1|^2+ |⟨𝒞⟩ - 4 √(2)λ/(1+λ^2)|^2 leads to the same solution.
As a first test, we select λ=0.7, so that |⟨𝒞⟩| ≈ 2.66. To find the solution, we adopted the following parameter set {n_i= -5; n_f = 30; k_i = -4; k_f = -1} for (f, f') and {m_i = -5; m_f = 30; ℓ_i = 0; ℓ_f = 3} for (g, g'), with ℛ=𝒪(10^-26). This already illustrates the effectiveness of our method and encourages us to search for larger violations. In order to do so, we need to correspondingly enlarge our wavelet basis, especially if we intend to approach Tsirelson's bound, 2 √(2), for λ→ 1. As a matter of fact, imposing ⟨𝒞⟩≈ 2.82 for λ=0.99 and aiming to achieve precision at the percent level, corresponding to ℛ=𝒪(10^-5), we were able to solve the constraints (<ref>) if we adopt the following set of parameters: {n_i= -10; n_f = 120; k_i = -5; k_f = -1} for (f, f') and {m_i = -10; m_f = 120; ℓ_i = 0; ℓ_f = 4} for (g, g')[Higher precision can be reached upon further enlarging the set of basis elements, at the cost of correspondingly longer computation times. The wavelet coefficients for both reported cases can be obtained from the authors upon reasonable request.].
For the bumpification, adopting the same set of wavelet coefficients, we thus replace the Haar wavelets ψ_n,k(x) with σ_n,k(x). As a self-consistency test, we numerically computed the inner products again, now with these smooth functions expanded in terms of σ_n,k(x), and checked if they are correctly normalized and violate the Bell-CHSH inequality up to the same precision as the underlying wavelet solution. For ε = 10^-10, we have found an excellent numerical agreement, as expected, showing that our strategy indeed works. The components of the corresponding normalized test functions leading to the Bell-CHSH inequality violation are shown in Fig. <ref> for the case λ=0.99. It should be stressed that although the functions seem to increase without bounds near the origin, this is a misleading impression: all of them go to zero in the limit x → 0, by construction. Also not visible in Fig. <ref> is the fact that all shown functions do have compact support, again by construction.
Interestingly, there seem to be several reflection relations between the various test function components upon visual inspection of Fig. <ref>. To verify these, we reconstructed the wavelet coefficients using an expansion with all the expected reflection relations built in, which resulted in a solution numerically indistinguishable from the one shown in Fig. <ref>.
Comparison with earlier work of Summers–Werner — The seminal papers <cit.> heavily relied on Tomita-Takesaki theory (see <cit.> for an introduction to the latter) to prove the existence of a set of test functions so that the Tsirelson bound can be approximated as precisely as desired. To the best of our knowledge, the explicit form of the Summers-Werner test functions is, unfortunately, unknown. Notice that their construction is limited to the free field case, as Tomita-Takesaki theory has to date no interacting counterpart. From <cit.>, the test functions (f,f') and (g,g') are linear combinations of another set of causally disjoint test functions, (f_1,f_2) and (g_1,g_2), with certain constraints for the various inner products w.r.t. Eq. (<ref>). It is an open question if the solution (strategy) proposed here leads to the same solution as the one of <cit.>; we will come back to this question in a larger forthcoming paper[As far as we know, there can be multiple sets of test functions leading to the same amount of Bell-CHSH violation. For instance, it is trivial to see that switching the sign of all upper or lower components of the spinor test functions does not change anything in the relevant inner products.]. As the number of test function constraints in <cit.> is larger than the ones we imposed — in fact even more than necessary to attain the maximal violation, which is due to the specific proof of <cit.> — one might expect that their solution and ours will not be equivalent.
Conclusions —
We investigated the Bell-CHSH inequality in the context of QFT for a free massless spinor field in 1+1 dimensions. Introducing suitable Bell operators built with smeared spinor fields, we defined an appropriate inner product associated with these operators through their Wightman functions. Expanding the would-be test functions used in the smearing procedure as a finite sum over Haar wavelets, we numerically constructed suitable coefficients leading to the violation of the Bell-CHSH inequality, in fact arbitrarily close to the maximal violation. Using the Planck-taper window function, the discontinuous Haar wavelet solution set was then bumpified into C_0^∞(ℝ) smooth functions with compact support up to arbitrary precision, allowing us to adopt the same set of wavelet coefficients that we found before. We thus found a proper set of test functions leading to the explicit violation of the Bell-CHSH inequality in QFT. In future work, we foresee the generalization to the massive case, as well as to scalar field theories. Even more rewarding will be to test the bumpified wavelet method presented here for interacting QFTs, in which case far less is known about the possibility of having maximal violation or not. An interacting (1+1)-dimensional fermionic theory like the Thirring model <cit.> will constitute the most interesting test bed, especially since the spectral function is known exactly <cit.>, and the latter will enter the inner product as we already alluded to in the main text. We will report on these and other matters in future work.
Acknowledgments —
The authors would like to thank the Brazilian agencies CNPq and FAPERJ for financial support. S.P. Sorella, I. Roditi, and M.S. Guimaraes are CNPq researchers under contracts 301030/2019-7, 311876/2021-8, and 310049/2020-2, respectively. PDF is grateful to Gustavo P. de Brito, Henrique S. Lima, and Letícia F. Palhares for interesting discussions, and to Pedro C. Malta for helpful comments on the draft.
99
Bell64
J. S. Bell, On the Einstein-Podolsky-Rosen paradox, Physics Physique Fizika 1, 195 (1964).
Clauser69
J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Proposed experiment to test local hidden variable theories, Phys. Rev. Lett. 23, 880 (1969).
Hensen15
B. Hensen et al., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526, 682 (2015).
Giustina15
M. Giustina et al, Significant-Loophole-Free Test of Bell’s Theorem with Entangled Photons, Phys. Rev. Lett. 115, 250401 (2015).
Shalm15
L. K. Shalm et al., Strong loophole-free test of local realism, Phys. Rev. Lett. 115, 250402 (2015).
Rosenfeld17
W. Rosenfeld et al., Event-ready Bell test using entangled atoms simultaneously closing detection and locality loopholes, Phys. Rev. Lett. 119, 010402 (2017).
Li18
M.-H. Li et al., Test of local realism into the past without detection and locality loopholes, Phys. Rev. Lett. 121, 080404 (2018).
Storz23
S. Storz et al., Loophole-free Bell inequality violation with superconducting circuits, Nature 617, 265 (2023).
Summers87a
S. J. Summers and R. Werner, Bell's Inequalities and Quantum Field Theory. 1. General Setting, J. Math. Phys. 28, 2440 (1987).
Summers87b
S. J. Summers and R. Werner, Bell’s inequalities and quantum field theory. II. Bell’s inequalities are maximally violated in the vacuum, J. Math. Phys. 28, 2448 (1987).
Summers87c
S. J. Summers and R. Werner, Maximal Violation of Bell's Inequalities Is Generic in Quantum Field Theory, Commun. Math. Phys. 110, 247 (1987).
Peruzzo22
G. Peruzzo and S. P. Sorella, Remarks on the Clauser-Horne-Shimony-Holt inequality in relativistic quantum field theory, Phys. Rev. D 106, 125020 (2022).
Peruzzo23
G. Peruzzo and S. P. Sorella, Feynman path integral formulation of the Bell-Clauser-Horne-Shimony-Holt inequality in quantum field theory, Phys. Rev. D 107, 105001 (2023).
Sorella:2023pzc
S. P. Sorella, Remarks on the Bell-Clauser-Horne-Shimony-Holt inequality, Phys. Rev. D 107, 25013 (2023).
Dudal23
D. Dudal, P. De Fabritiis, M. S. Guimaraes, G. Peruzzo, and S. P. Sorella, BRST invariant formulation of the Bell-CHSH inequality in gauge field theories, arXiv:2304.01028.
DeFabritiis23
P. De Fabritiis, I. Roditi, and S. P. Sorella, Mermin's inequalities in Quantum Field Theory, arXiv:2303.12195.
Tsirelson80
B. S. Cirel'son, Quantum generalizations of Bell's inequality, Lett. Math. Phys. 4, 93 (1980).
Fabbrichesi21
M. Fabbrichesi, R. Floreanini, and G. Panizzo, Testing Bell Inequalities at the LHC with Top-Quark Pairs, Phys. Rev. Lett. 127, 161801 (2021).
Severi22
C. Severi, C.D.E. Boschi, F. Maltoni, and M. Sioli, Quantum tops at the LHC: from entanglement to Bell inequalities, Eur. Phys. J. C 82, 285 (2022).
Afik21
Y. Afik and J. R. M. de Nova, Entanglement and quantum tomography with top quarks at the LHC, Eur. Phys. J. Plus 136, 907 (2021).
Afik22
Y. Afik and J. R. M. de Nova, Quantum information with top quarks in QCD, Quantum 6, 820 (2022).
Afik23
Y. Afik and J. R. M. de Nova, Quantum discord and steering in top quarks at the LHC, Phys. Rev. Lett. 130, 221801 (2023).
Barr22
A. J. Barr, Testing Bell inequalities in Higgs boson decays, Phys. Lett. B 825, 136866 (2022).
Morales:2023gow
R. A. Morales, Exploring Bell inequalities and quantum entanglement in vector boson scattering, [arXiv:2306.17247 [hep-ph]].
Haag:1992hx
R. Haag, Local quantum physics: Fields, particles, algebras (Springer-Verlag, 1992).
Daubechies88
I. Daubechies, Orthonormal bases of compactly supported wavelets, Commun. Pure Appl. Math. 41, 909 (1988).
Daubechies92
I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics (SIAM, Philadelphia, 1992).
Kaiser94
G. Kaiser, A Friendly Guide to Wavelets (Birkhauser, New York, 1994).
Howard98
H. L. Resnikoff and R. O. Wells, Wavelet Analysis (Springer, New York, 1998).
Bulut13
F. Bulut and W. N. Polyzou, Wavelets in field theory, Phys. Rev. D 87, 116011 (2013).
George22
D. J. George, Y. R. Sanders, M. Bagherimehrab, B. C. Sanders, and G. K. Brennen, Entanglement in Quantum Field Theory via wavelet representations, Phys. Rev. D 106, 036025 (2022).
Haegeman18
J. Haegeman, B. Swingle, M. Walter, J. Cotler, G.Evenbly, and V. B. Scholz, Rigorous Free-Fermion Entanglement Renormalization from Wavelet Theory, Phys. Rev. X 8, 011003 (2018).
Witteveen21
F. Witteveen and M. Walter, Bosonic entanglement renormalization circuits from wavelet theory, SciPost Phys. 10, 143 (2021).
Deleersnyder21
W. Deleersnyder, B. Maveau, T. Hermans, and D. Dudal, Inversion of electromagnetic induction data using a novel wavelet-based and scale-dependent regularization term, Geophys. J. Int. 226, 1715 (2021).
Deleersnyder23
W. Deleersnyder, B. Maveau, T. Hermans, and D. Dudal, Flexible quasi-2D inversion of time-domain AEM data, using a wavelet-based complexity measure, Geophys. J. Int. 233, 1847 (2023).
Haar
U. Lepik and H. Hein, Haar Wavelets: With Applications (Springer International Publishing, Switzerland, 2014).
posed
A.D. Ioffe, R.E. Lucchetti, and J.P. Revalski, Almost Every Convex or Quadratic Programming Problem Is Well Posed, Mathematics of Operations Research 29, 369 (2004).
McKechan:2010kp
D. J. A. McKechan, C. Robinson, and B. S. Sathyaprakash,
A tapering window for time-domain templates and simulated signals in the detection of gravitational waves from coalescing compact binaries,
Class. Quant. Grav. 27, 084020 (2010).
Witten:2018zxz
E. Witten, APS Medal for Exceptional Achievement in Research: Invited article on entanglement properties of quantum field theory, Rev. Mod. Phys. 90, 045003 (2018).
Thirring:1958in
W. E. Thirring, A Soluble relativistic field theory?,
Annals Phys. 3, 91 (1958).
Johnson:1961cs
K. Johnson, Solution of the equations for the Green's functions of a two-dimensional relativistic field theory,
Nuovo Cim. 20, 773 (1961).
Thompson:1983yr
G. Thompson, The Gauge Technique and Two-dimensional Models, Phys. Lett. B 131, 385 (1983).
|
http://arxiv.org/abs/2307.04465v1 | 20230710102840 | Tropical convexity in location problems | [
"Andrei Comăneci"
] | math.OC | [
"math.OC",
"math.MG",
"q-bio.PE",
"14T90, 26B25, 52A30, 90B85, 92B10"
] |
We investigate location problems whose optimum lies in the tropical convex hull of the input points.
Firstly, we study geodesically star-convex sets under the asymmetric tropical distance and introduce the class of tropically quasiconvex functions whose sub-level sets have this shape.
The latter are related to monotonic functions.
Then we show that location problems whose distances are measured by tropically quasiconvex functions as before give an optimum in the tropical convex hull of the input points.
We also show that a similar result holds if we replace the input points by tropically convex sets.
Finally, we focus on applications to phylogenetics presenting properties of consensus methods arising from our class of location problems.
Tropical convexity in location problems
Andrei Comăneci
August 12, 2023
=======================================
§ INTRODUCTION
There is a recent interest in studying location problems in tropical geometry, especially in the use of tropical methods to data analysis.
Maybe the first article to promote such problems with a view towards “tropical statistics” is the work of Lin et al. <cit.>.
They showed that tropical convexity in tree spaces has some better properties than the geometry of Billera, Holmes, and Vogtmann (BHV) <cit.>.
This encouraged them to propose location estimators based on the symmetric tropical distance that could potentially exploit tropical convexity.
In particular, this would give a tropical approach to the consensus problem from phylogenetics <cit.>.
The connection for the proposed location statistics to tropical convexity was not well understood.
For example, they noticed that tropical Fermat–Weber points can lie outside the tropical convex hull of the input points <cit.>, although it was found later that one can find Fermat–Weber points inside the tropical convex hull <cit.>.
However, the unclear connection makes it difficult to obtain solutions that can be interpreted in the phylogenetic setting; see also <cit.>.
Recently, we could show that studying the Fermat–Weber problem using an asymmetric distance function leads to a better explanation in terms of tropical convexity <cit.>.
In particular, it provides a clear approach based on tropical convexity to the consensus problem from phylogenetics.
Moreover, various desirable properties of consensus methods were obtained by exploiting tropical convexity.
In fact, the good properties were solely due to tropical convexity and not the particular distance function which motivates the search for other methods with similar properties.
In this paper, we focus on location problems that have the potential of exploiting tropical convexity.
More specifically, we care of those location estimators that will belong to the tropical convex hull of the input points.
Such estimators are based on distances that reflect the tropical structure of the space and can be seen as a counterpart to similar studies regarding location problems and ordinary convexity.
Significant work was done for understanding geometric properties of location problems and their relationship to ordinary convexity.
The case of Chebyshev centers dates back to the 60s in the work of Garkavi <cit.> and Klee <cit.>.
More general location problems in a normed space were studied by Wendell and Hurter <cit.>, while a focus on geometric properties of Fermat–Weber problems with varying distances is covered by Durier and Michelot <cit.>.
What is more, it was shown that finding an optimal solution in the (ordinary) convex hull for every set of points is equivalent to having an inner product space in three dimensions or more; a general form of this result was obtained by Durier <cit.>.
The results mentioned above show a strong relationship between ordinary convexity and a Euclidean structure.
Tropical convexity, on the other hand, is related to the lattice structure of (^n,≤).
Hence, we have to focus on “monotonic” distances.
To interpret monotonic functions geometrically in the quotient space n, we notice that all their sub-level sets share a similarity: they are geodesically star-convex with respect to the asymmetric tropical distance.
The latter can be seen by remarking that geodesic segments are images of order segments in (^n,≤).
The resulting sets, called -star-convex, and functions, called -star-quasiconvex, are discussed in sections <ref> and <ref>, respectively.
In section <ref> we focus on location problems in which distances to the sites are measured by -star-quasiconvex functions.
We show that this setting guarantees optimal locations in the tropical convex hull of the input.
We will see that the triangle inequality does not play any role, which emphasizes the differences between tropical and ordinary convexity.
Further, this setting allows for very general location problems where dissimilarities are not necessarily distances; triangle inequality is generally assumed in location science when dealing with geographic location <cit.>, but it is not reasonable for more general data <cit.> and never assumed in the construction of M-estimators <cit.>.
We further present a few examples of location problems from the literature that fit into our setting.
In particular, these include location problems involving the symmetric and asymmetric tropical distances.
However, in the former case some optima may lie outside the tropical convex hull of the input.
So what is the precise distinction between the symmetric and the asymmetric tropical distances that causes the above behaviour?
We show that strict -star-convexity is the answer.
This motivates the study of regularized versions discussed in §<ref>.
We briefly show in section <ref> that we can extend the results to the case when the sites are tropically convex sets.
Then section <ref> deals with the main application to phylogenetics: the tropical approach to consensus methods.
Our general setting provides a large class of tropically convex consensus methods as defined in <cit.>.
Furthermore, we enlarge the list of desirable properties of these consensus methods that were given in the previously cited work.
Finally, we conclude with section <ref> consisting of highlights and possible directions for future research.
§ TROPICAL CONVEXITY
The purpose of this section is to fix the notation and emphasize the basic properties of tropically convex sets that will be used later.
One can consult the book of Joswig <cit.> for more details.
We will use both semirings ^min=(∪{∞},∧,+) and ^max=(∪{-∞},∨,+) where x∧ y=min(x,y) and x∨ y=max(x,y).
They are isomorphic under the map x↦ -x, but they are better seen as dual to each other.
This duality will play an important role later similar to the relationship between max-tropical polytopes and min-tropical hyperplanes <cit.>.
Since our applications deal with points with finite entries, we will define tropical geometric objects in ^n and n.
This also exploits the common ground set of ^max and ^min, and we can make use of the vector space structure.
A min-tropical cone K⊂^n is a set closed under min-tropical linear combinations: (x+λ)∧ (y+μ)∈ K for all x,y∈ K and λ,μ∈.
The image of a min-tropical cone in n is called a min-tropically convex set.
A common example is the min-tropical hyperplane with apex v which is the set H^min_v={x∈n:|_j(x_j-v_j)|≥ 2}.
The max-tropical cones and max-tropically convex sets are defined similarly, replacing min by max in the previous definitions.
One can also see them as images of min-tropical cones and min-tropically convex sets under x↦ -x.
The min-tropical convex hull of two points a,b∈n will be denoted by [a,b]_min and is called the min-tropical segment between a and b.
We will also use the notation (a,b)_min=[a,b]_min∖{a,b} for the open min-tropical segment between a and b.
Similarly, we define [a,b]_max and (a,b)_max.
The min-tropical convex hull of a set A⊂n is the smallest min-tropically convex set containing A and we denote it by ^min(A).
It can be related to the max-tropical semiring by <cit.>.
For this we need to introduce the max-tropical sector S_i^max={x∈n:x_i≥ x_j ∀ j∈[n]}={x∈n:i∈_j x_j}.
Then <cit.> says that x belongs to ^min(A) if and only if for each i∈[n] there exists a_i∈ A such that x∈ a_i+S_i^max.
For the case of max-tropically convex hull just reverse min with max.
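This covering criterion is easy to check mechanically. The following minimal sketch (assuming Python with numpy, which is of course not part of the paper; the function name is ours) tests membership in the min-tropical convex hull of finitely many points; for the max-tropical hull one swaps argmax for argmin.

    import numpy as np

    def in_min_tropical_hull(x, points, tol=1e-9):
        # x lies in the min-tropical convex hull of `points` iff for every
        # index i there is an input point a with i in argmax_j (x_j - a_j),
        # i.e. x belongs to a + S_i^max.
        x = np.asarray(x, dtype=float)
        covered = set()
        for a in np.asarray(points, dtype=float):
            d = x - a
            covered.update(np.flatnonzero(d >= d.max() - tol).tolist())
        return covered == set(range(len(x)))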
We say that a point a of a min-tropically convex set A is i-exposed if (a+S_i^min)∩ A={a}.
If a point is i-exposed for some i∈[n], then we simply call it exposed.
Since the order ≤ on ^n is strongly related to tropical convexity, we will focus on monotonic functions.
We say that a function f:X→, defined on a subset X of ^n, is increasing if for every x, y∈ X with x≤ y we have f(x)≤ f(y).
We call f strictly increasing if f(x)<f(y) whenever x≤ y and x≠ y.
For a,b∈^n and a≤ b, we denote by [a,b]_≤ the set of points x∈^n such that a≤ x≤ b and call it the order segment between a and b.
It can also be written as a box: [a,b]_≤=[a_1,b_1]×…×[a_n,b_n].
Its image in n is a polytrope, i.e. it is both min- and max-tropically convex <cit.>, which we call a box polytrope.
A particular case is presented in the following example.
Consider the asymmetric distance d_(a,b)=∑_i(b_i-a_i)-nmin_j(b_j-a_j) defined on n <cit.>.
We are interested in geodesic segments under this distance, which are portrayed in Figure <ref>.
This is different from the geodesic convexity discussed in <cit.> which focuses on the symmetric tropical distance.
For two points a,b∈n we define the (oriented) geodesic segment between a and b under d_ as [a,b]_:={x∈^n:d_(a,x)+d_(x,b)=d_(a,b)}.
The geodesic segment [a,b]_ is a (box) polytrope.
To see this, we point out that [a,b]_=(a+S_i^min)∩(b+S_i^max) where i is any index from _j(b_j-a_j); the equality can be also seen in Figure <ref>.
What is more, if we choose representatives a and b such that min_j(b_j-a_j)=0, then [a,b]_ is the image of [a,b]_≤ in n.
The min-tropical vertices of [a,b]_ are of the form v_j=b-(b_j-b_i+a_i-a_j)e_j=b-(b_j-a_j-min_ℓ(b_ℓ-a_ℓ))e_j for j∈[n].
The set [a,b]_ contains the ordinary segment [a,b] but also the min- and max-tropical segments between a and b.
What is more, for every c∈n the min-tropical segment between a and b is contained in [c,a]_∪[c,b]_.
To see the latter statement, we take arbitrary representatives modulo for a and b and show that a∧ b∈[c,a]_∪[c,b]_.
Let i∈_j[(a_j∧ b_j)-c_j].
Without loss of generality, we can assume that a_i∧ b_i=a_i.
Thus, a∧ b∈ (c+S_i^min)∩(a+S_i^max)=[c,a]_.
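As a computational illustration of this example, the sketch below (Python with numpy assumed; the helper names are ours) evaluates the asymmetric distance and tests whether a point lies on the geodesic segment between a and b by checking that the triangle inequality is tight; the componentwise minimum of a and b passes the test, in line with the fact that [a,b]_ contains the min-tropical segment.

    import numpy as np

    def d_asym(a, b):
        # asymmetric tropical distance: sum_i (b_i - a_i) - n * min_j (b_j - a_j)
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float((b - a).sum() - len(a) * (b - a).min())

    def on_geodesic(a, x, b, tol=1e-9):
        # x lies on the geodesic segment from a to b iff the triangle
        # inequality d(a,x) + d(x,b) >= d(a,b) holds with equality
        return abs(d_asym(a, x) + d_asym(x, b) - d_asym(a, b)) < tol

    a, b = np.array([0.0, 2.0, 5.0]), np.array([3.0, 1.0, 0.0])
    assert on_geodesic(a, np.minimum(a, b), b)   # the point a ∧ b lies on the segment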
The canonical coordinates of a point x∈n are the entries of the representative x̅∈^n defined by x̅ = x-(min_j x_j).
This is a representative of x modulo such that all its entries are non-negative and at least one entry is 0.
We say that K is a strictly min-tropically convex cone if K is a min-tropically convex cone and, for every a,b∈ K such that a∧ b is different from both a and b modulo , the point a∧ b belongs to the interior of K.
We say that a subset of n is strictly min-tropically convex if it is the image of a strictly min-tropically convex cone under the canonical projection ^n→n.
A subset L of n is strictly min-tropically convex if all the points of the open min-tropical segment (a,b)_min belong to the interior of L, where a and b are distinct points in L.
Any strictly min-tropically convex set is a singleton or its closure coincides with the closure of its interior.
Moreover, all of its boundary points are exposed.
The first part results from Remark <ref>.
For the second part, consider v which is not exposed.
Then there exist p,q in the strictly min-tropically convex set such that v∈(p,q)_min.
According to the same remark, v is an interior point.
§ -STAR-CONVEX SETS
A -star-convex set with kernel v is a non-empty set K⊆n such that for every point w∈ K we have [v,w]_⊆ K.
We call K strictly -star-convex if [v,w]_∖{w} belongs to the interior of K for every w∈ K.
Since [v,w]_ contains the ordinary segment [v,w], we conclude that -star-convex sets are also star-convex in the ordinary sense.
We show now that -star-convex sets are min-tropically convex.
Any -star-convex set is min-tropically convex.
Let K be a -star-convex set with kernel v and a,b arbitrary points in K.
According to Remark <ref>, we have [a,b]_min⊆ [v,a]_∪ [v,b]_.
The latter set is contained in K due to its -star-convexity.
However, -star-convex sets might not be max-tropically convex.
For example, the image of the regular simplex Δ_n={e_1,…,e_n} in n is -star-convex but not max-tropically convex.
One can find examples of -star-convex sets in Figure <ref>.
Picture (a) shows a min-tropical hyperplane H^min_v which is -star-convex with kernel v—the apex.
Picture (b) displays the unit balls for tropical L^p norms, which will be defined in Example <ref>.
They are nested increasingly with respect to p; the outer one corresponds to the tropical L^∞ norm and is the only one that is not strictly -star-convex.
One can recognize the triangle as the unit ball for the asymmetric tropical distance d_.
The min-tropical hyperplane with apex at the origin (the kernel of the -star-convex sets) is dotted.
Picture (c) shows a more complicated -star-convex set.
This set is not pure-dimensional, and its tropically exposed points do not form a closed set.
Moreover, it is neither convex in the ordinary sense, nor strictly -star-convex.
Let K be a -star-convex set with kernel v such that K≠{v}.
Then K is strictly -star-convex if and only if K is strictly min-tropically convex and v is an interior point of K.
Firstly, assume that K is strictly -star-convex.
For every a,b∈ K the min-tropical segment [a,b]_min is a subset of [v,a]_∪[v,b]_.
Therefore, all of the points of [a,b]_min with the exception of a and b must be in the interior of K.
Hence, K is strictly min-tropically convex.
The fact that v is an interior point is clear from the definition and our assumption that K≠{v}.
Conversely, assume that K is strictly min-tropically convex and v is an interior point of K.
We consider w∈ K∖{v} and we show that all points of [v,w]_∖{w} are in the interior of K.
The result is clear for non-exposed points of [v,w]_ as we assumed K is strictly min-tropically convex.
Hence, let u be an exposed point of [v,w]_ distinct from w.
According to the discussion from Remark <ref>, u=w-(w_j-w_i)e_j where i∈_k w_k and j∉_k w_k.
Since (u+w)/2 belongs to the interior of the tropical segment [u,w]_min and K is strictly min-tropically convex, then (u+w)/2 is an interior point of K.
Thus, for small δ>0, the point c=(u+w)/2-δ e_i belongs to K.
However, u∈[v,c]_=S_i^min∩(c+S_i^max) as c-u=(w-u)/2-δ e_i=(w_j-w_i)e_j/2-δ e_i.
But u cannot be an exposed point of [v,c]_ as c-u is not parallel to a vector e_k for k∈[n] unless n=2.
Consequently, u must be an interior point of K from the strict min-tropical convexity of K, when n≥ 3.
For the case n=2, we could have noticed that the exposed points of [v,w]_ are v and w, so u can only be equal to v.
But v was already assumed to be interior.
The proof above shows that the assumption that v is an interior point of K is superfluous for the converse when n≥ 3.
If K is strictly -star-convex with kernel v, then any exposed point of K from v+S_i^min is i-exposed.
If a∈ v+S_i^min and it is not i-exposed, then there exists b∈(a+S_i^min)∩ K with b≠ a.
In particular, a∈[v,b]_∖{b}.
But the strict -star-convexity of K implies that a must be an interior point.
§ TROPICALLY QUASICONVEX FUNCTIONS
A function f:^n→ whose sub-level sets L_≤α(f):={x:f(x)≤α} are convex is called quasiconvex.
This is a purely geometric definition, but some other sources define them as functions satisfying f(λ x+(1-λ)y)≤max{f(x),f(y)} for every x,y∈^n and λ∈[0,1].
The latter can be more convenient in checking quasiconvexity.
See <cit.> for more details.
We will be interested in specific tropically quasiconvex functions.
Before we introduce them, we need some notation.
For a function γ:^n_≥ 0→ we associate the function γ̅:n→ defined by γ̅(x)=γ(x̅).
We recall that x̅=x-(min_i x_i) are the canonical coordinates of x.
We call a function f:n→ -star-quasiconvex with kernel v if f(x)=γ̅(x-v) for some increasing function γ:^n_≥ 0→.
Moreover, if γ is strictly increasing, we call f strictly -star-quasiconvex.
We will give a geometric interpretation of -star-quasiconvex in Theorem <ref>.
However, we prefer the definition above because it is easier to check in practice.
If γ is a monotonic norm <cit.>, then f measures a distance to the kernel.
If v=, then f is a gauge; gauges are commonly used in convex analysis <cit.> and location science <cit.>.
Gauges are sometimes dubbed “asymmetric norms” as they satisfy all the properties of a norm with the exception that f(x) need not be equal to f(-x).
A famous class of monotonic norms are the L^p norms.
They give rise to -star-quasiconvex gauges whose expression is
γ_p(x)= (∑_i∈[n](x_i-min_j∈[n] x_j)^p )^1/p if p∈[1,∞), and γ_∞(x)=max_i∈[n]x_i-min_j∈[n]x_j if p = ∞.
We call them tropical L^p norms.
They appeared in the work of Luo <cit.> under the name “B^p-pseudonorms”.
One can recognize the tropical L^∞ norm as the tropical norm defined in <cit.>.
The relationship to the L^∞ norm is stressed in <cit.>.
The tropical L^1 norm gives rise to the asymmetric tropical distance d_; this relationship is implicit in <cit.>.
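A direct transcription of these gauges, as a sketch (Python with numpy assumed; the function name is ours):

    import numpy as np

    def tropical_lp_norm(x, p=1.0):
        # gamma_p evaluated at the canonical coordinates of x
        x = np.asarray(x, dtype=float)
        y = x - x.min()                  # canonical coordinates: minimal entry is 0
        if np.isinf(p):
            return float(y.max())        # tropical L^infinity norm
        return float((y ** p).sum() ** (1.0 / p))

    # p = 1 recovers the asymmetric tropical distance to the origin:
    # tropical_lp_norm(x, 1) == sum_i x_i - n * min_j x_j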
The function γ̅ depends only on the values of γ on ∂^n_≥ 0, so we could have considered only ∂^n_≥ 0 as the domain of γ.
However, this does not increase the generality since every (strictly) increasing function defined on ∂^n_≥ 0 can be extended to a (strictly) increasing function on ^n_≥ 0, according to the following lemma.
Every (strictly) increasing function γ:∂^n_≥ 0→ can be extended to a (strictly) increasing function γ̃:^n_≥ 0→.
Moreover, if γ is continuous, then the extension can also be made continuous.
Consider γ̃(x)=max_i∈[n]γ(x_-i,0_i)+∏_i∈[n]x_i.
Clearly, this is continuous if γ is, as being a composition of continuous functions.
Moreover, γ̃(x)=γ(x) for every x⃗∈∂^n_≥ 0, due to monotonicity of γ and the fact that x_1x_2… x_n=0 for x∈∂^n_≥ 0.
If x≤ y, then x_-i≤ y_-i for all i∈[n], where x_-i is obtained from x by removing the ith entry.
Therefore, γ(x_-i,0_i)≤γ(y_-i,0_i) for every i∈[n], which implies γ̃(x)≤γ̃(y) after using ∏_j x_j≤∏_j y_j.
In other words, γ̃ is increasing.
Moreover, if γ is strictly increasing and x≠ y we have two cases.
On the one hand, if y∈∂^n_≥ 0, then x∈∂^n_≥ 0 so γ̃(x)=γ(x)<γ(y)=γ̃(y).
On the other hand, if y∈^n_>0, then ∏_j x_j<∏_j y_j.
Using the last inequality with max_i∈[n]γ(x_-i,0_i)≤max_i∈[n]γ(y_-i,0_i), we obtain γ̃(x)<γ̃(y).
Accordingly, γ̃ is strictly increasing if γ is strictly increasing.
The following result explains why the functions from Definition <ref> deserve the name “-star-quasiconvex”.
Let f:n→ be a continuous function.
Then f is (strictly) -star-quasiconvex if and only if all of its non-empty sub-level sets are (strictly) -star convex with the same kernel.
After a possible translation, we can assume that the kernel is .
Firstly, assume f is -star-quasiconvex and let α∈^n arbitrary such that L_≤α(f) is non-empty.
Let γ:^n→ increasing such that f(x)=γ(x).
Let w∈ L_≤α(f) and choose i∈[n] such that w∈ S_i^min.
Since γ is increasing, the points x∈^n satisfying ≤ x≤w belong to L_≤α(γ).
This set projects onto [,w]_ showing that [,w]_⊆ L_≤α(f).
Since w was selected arbitrarily, L_≤α(f) must be -star convex with kernel .
If f is strictly -star-quasiconvex, then the points satisfying ≤ x≤w different from w actually belong to L_<α(f).
Due to the continuity of f, this coincides with the interior of L_≤α(f).
This shows that L_≤α(f) is strictly -star-convex.
Conversely, assume that L_≤α(f) is -star-convex with kernel for every α≥ f().
Take γ:∂^n_≥ 0→ defined as γ(x)=f(x) for x∈∂^n_≥ 0.
Using Lemma <ref> it is enough to show that γ is increasing.
Let x and y be arbitrary points of ∂^n_≥ 0 such that x≤ y.
The order segment [,y]_≤ projects onto [,y]_ which belongs to L_≤ f(y)(f).
Due to the -star-convexity of sub-level sets, we obtain γ(x)=f(x)≤ f(y)=γ(y).
If we have strict -star-convexity, then [0,y]_∖{y} is contained in the interior of L_≤ f(y)(f) which coincides to L_<f(y)(f).
Hence, we obtain γ(x)<γ(y) for this case.
The continuity of f is relevant only for strictly -star-quasiconvex functions.
Without continuity, the strict -star-convexity of the sub-level sets alone is not sufficient for f to be strictly -star-quasiconvex.
This is similar to the case of ordinary quasiconvex functions; cf. <cit.> and <cit.>.
We will see that convexity, in the ordinary sense, will also be helpful for our applications.
We give a simple criterion for checking when a -star-quasiconvex function is convex.
If γ is increasing and (strictly) convex, then γ̅ is (strictly) convex.
Let x,y∈^n and λ∈[0,1].
We have min_j(λ x_i+(1-λ) y_i)≥λmin_i x_i+(1-λ)min_i y_i as λ,1-λ≥ 0.
Hence, λ x+(1-λ) y-min_j(λ x_i+(1-λ) y_i)≤λ( x-min_i x_i)+(1-λ)( y-min_i y_i).
Since γ is convex and increasing, we obtain
γ(λ x+(1-λ) y) ≤γ(λ( x-min_i x_i)+(1-λ)(y-min_i y_i))
≤λ γ(x-min_i x_i)+(1-λ) γ(y-min_i y_i)
=λγ(x)+(1-λ)γ(y).
If γ is strictly convex and x≠ y modulo , then the second inequality from (<ref>) is strict, so γ(λ x+(1-λ) y)<λγ(x)+(1-λ)γ(y).
Thus, γ̅ is strictly convex if γ is strictly convex.
§ TROPICALLY CONVEX LOCATION PROBLEMS
We will consider some input points v_1,…,v_m in n.
We measure the distance (or dissimilarity) from x∈n to a point v_i using a -star-quasiconvex function f_i having kernel v_i.
We consider increasing functions γ_i:^n→ such that f_i(x)=γ_i(x-v_i).
Without loss of generality, we assume γ_i()=0, so that all dissimilarities are non-negative.
The purpose of location problems is to find a point as close (or similar) as possible to the input points, depending on some criterion; usually, the optimal location is a minimum of an objective function h:n→.
The function h is constructed using an increasing function g:^m_≥ 0→, which aggregates the distances to the input points.
Formally, we define h(x)=g(f_1(x),…,f_m(x)).
Since f_i measures the distance or dissimilarity from x to v_i and g is increasing, the minima of h record a global closeness to the input points.
In most studied location problems, we would have a distance d on n and set f_i(x)=d(x,v_i).
Common choices of g are g(x)=x_1+…+x_m, for the median or Fermat–Weber problem, g(x)=max_i∈[m]x_i for the center problem <cit.>, or g(x)=x_1^2+…+x_m^2, for defining the Fréchet mean <cit.>.
Nevertheless, we will allow g to be an arbitrary increasing function.
We will assume that h has a minimum, which happens, e.g., when h is lower semi-continuous.
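For concreteness, here is a small sketch (Python with numpy and scipy assumed; not part of the paper) of how the objective h is assembled from an aggregator g and dissimilarities f_i, with the tropical L^1 gauge as the common building block; a derivative-free solver serves only as a crude way to locate a candidate minimum.

    import numpy as np
    from scipy.optimize import minimize

    def f(x, v):
        # dissimilarity with kernel v: the tropical L^1 gauge evaluated at the
        # canonical coordinates of x - v (the asymmetric tropical distance)
        d = np.asarray(x, dtype=float) - np.asarray(v, dtype=float)
        return float((d - d.min()).sum())

    def make_objective(points, g):
        pts = [np.asarray(v, dtype=float) for v in points]
        return lambda x: g([f(x, v) for v in pts])

    pts = [[0, 1, 1], [1, 0, 1], [3, 2, 0], [2, 3, 0]]
    h_fw      = make_objective(pts, sum)                              # Fermat-Weber
    h_center  = make_objective(pts, max)                              # center
    h_frechet = make_objective(pts, lambda d: sum(t * t for t in d))  # Frechet

    res = minimize(h_fw, x0=np.mean(pts, axis=0), method='Nelder-Mead')
    print(res.x - res.x.min())   # a candidate optimum, in canonical coordinates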
Let h be as above.
Then there is a minimum of h belonging to ^max(v_1,…, v_m).
Moreover, if g is strictly increasing and at least one of f_1,…,f_m is strictly -star-quasiconvex, then all the minima of h are contained in ^max(v_1,…,v_m).
Consider x∉^max(v_1,…,v_m) which is a minimum of h.
Thus there exists k∈[n] such that k∉_j(x_j-v_ij) for all i∈[m].
Set δ_i:=x_k-v_ik-min_j(x_j-v_ij) for all i, and δ=min_iδ_i, which is strictly positive by the consideration of k.
Note that f_i(x-δ e_k)=γ_i(x-v_i-δ e_k-min_j(x_j-v_ij))≤γ_i(x-v_i-min_j(x_j-v_ij))=f_i(x) for all i∈[n].
Hence h(x-δ e_k)≤ h(x).
Note that the inequality above is strict if g and some γ_ℓ are strictly increasing.
Indeed, in that case, we must have f_ℓ(x-δ e_k)<f_ℓ(x), so we use the strict increase of g in the ℓth entry.
That would contradict the optimality of x, so the second statement of the theorem holds.
For the first statement, we can only infer that x-δ e_k is also a minimum of h.
Hence, we can find an optimum of h in ^max(v_1,…,v_m) by moving x in directions -e_k for indices k as above.
To be more precise, we collect in D(x) the possible elementary descent directions from x; formally D(x):=⋂_i∈[m]([n]∖_j∈[n](x_j-v_ij)).
Notice that k∈ D(x), but k∉ D(x-δ e_k).
Moreover, D(x-δ e_k)⊊ D(x), as the functions only increase by our move in a descent direction.
Thus, replacing x by x-δ e_k, we find a minimum with smaller D(x).
We can repeat the procedure to construct a minimum x^⋆ of h with D(x^⋆)=∅.
The last condition is equivalent to x^⋆∈^max(v_1,…,v_m) due to <cit.>.
The regions where f_i looks like a monotonic function are induced by the min-tropical hyperplane with apex v_i.
Those hyperplanes define the max-tropical polytope generated by the input points, explaining why we look at the max-tropical convex hull instead of the min analogue.
The following lemma presents cases when there is a unique optimum location.
We recall that a gauge γ is called strictly convex if γ(λ x+(1-λ)y)<1 for every λ∈(0,1) and distinct x,y∈n with γ(x)=γ(y)=1, even though such gauges need not be strictly convex functions.
Assume that g,f_1,…,f_m are convex, g is strictly increasing, and at least one of the following conditions holds:
a) at least one f_i is strictly convex; or
b) all f_i are strictly convex gauges and the points v_1,…, v_m are not collinear.
Then h is strictly convex.
In particular, it has a unique minimum.
Consider arbitrary distinct points x, y∈^n/ and a scalar λ∈(0,1).
For case a), we have f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i).
Since g is convex and strictly increasing and the functions f_j convex, we obtain
h(λ x+(1-λ) y)<λ h(x)+(1-λ)h(y).
So h must be strictly convex.
For case b), at least one of the points v_i is not on the line through x and y.
Then x-v_i and y-v_i are not parallel, and the strict convexity of the unit ball defined by f_i implies that f_i(λ x+(1-λ) y-v_i)<λ f_i(x-v_i)+(1-λ)f_i(y-v_i).
The rest of the proof is identical to case a).
§.§ Examples
Here we review the tropical location problems from the literature that fall into our setting, i.e. an optimum belongs to the tropical convex hull of the input.
[Tropical Fermat–Weber and Fréchet problems]
To the best of our knowledge, the first one-point location problems in tropical geometry were proposed by Lin et al. <cit.>.
They suggest the study of Fermat–Weber points and Fréchet means under the symmetric tropical distance d_.
The goal was to relate them to tropical convexity for applications in phylogenetics.
However, they noticed that tropical Fermat–Weber points might lie outside the tropical convex hull of the input points leading to medians that cannot be interpreted easily in biological applications <cit.>.
Nevertheless, Theorem <ref> shows that it is possible to find an optimum in the tropical convex hull.
This was already noticed for the tropical Fermat–Weber points <cit.> but it was unknown, until now, for tropical Fréchet means.
[Tropical center]
Consider the case f_i(x)=d_(v_i,x) and g(y)=max(y_1,…,y_m).
This can be interpreted as the center of the minimum max-tropical L^1 ball enclosing the points v_1,…,v_m.
The tropical center appears in <cit.>, but the details are omitted.
If we choose representatives of the input points in ={x∈^n:x_1+…+x_n=0}, the optimum can be obtained by solving the linear program:
minimize   n· t
subject to   v_ij - x_j ≤ t   for i∈[m] and j∈[n],
             x_1 + … + x_n = 0.
Note that the x-coordinates of the optimal solutions are equal, modulo , to the x-coordinates of the linear program
minimize   n· t + ∑_j=1^n x_j
subject to   v_ij - x_j ≤ t   for i∈[m] and j∈[n].
Let (t^⋆,x^⋆) be an optimal solution of (<ref>).
For any feasible solution of (<ref>) we have t+x_j≥max_i∈[m]v_ij=:V_j.
In particular, at an optimum we must have equality, t^⋆+x^⋆_j=V_j for every j; otherwise we could replace x^⋆ by some x^⋆-ε e_j and decrease the objective function.
This implies x^⋆ = V modulo ; in particular, the solution is unique in n.
Even if g is not strictly increasing, the uniqueness and Theorem <ref> ensure that the optimum is in the tropical convex hull.
However, this could have been noticed from the closed form V=⋁_i v_i for v_1,…,v_m∈.
[Transportation problems]
Consider λ_1,…,λ_n>0 and (λ) the simplex in n whose vertices are e_i/λ_i.
Then γ_(λ)(x)=∑_iλ_i x_i-(∑_iλ_i)min_j x_j
is the gauge on n whose unit ball is (λ).
The (weighted) Fermat–Weber problem ∑_i∈[m]w_iγ_(λ)(x-v_i) is equivalent to a transportation problem and every transportation problem can be reduced to this case; to see this better, write it as a linear program after scaling the weights w_i such that ∑_i w_i=∑_j λ_j (this change does not influence the optimum).
This was first noticed in <cit.>, where the authors focused on the case λ_1=…=λ_n.
The corresponding optimum is called a tropical median in the work cited.
The optimal point is called a λ-splitter by Tokuyama and Nakano <cit.>, but no metric interpretation was mentioned.
The authors gave a condition for partitioning the space into n regions in an equal fashion, with weights coming from λ and w; this can be seen as a reinterpretation of the first-order optimality condition for the corresponding Fermat–Weber problem.
As a λ-splitter, it appeared in statistics <cit.> and as a particular case of Minkowski partition problems <cit.>.
[Locating tropical hyperplanes]
The tropical hyperplanes are parametrized by ^n/ through the identification with their apices.
Moreover, we have d_(a,H_x^max)=(x-a)_(2)-(x-a)_(1).
For a vector y, we denote by y_(k) the kth smallest entry, also known as the kth order statistic.
Note that the aforementioned distance is -star-quasiconvex with kernel a; the easiest way to see this is to notice that the second order statistic is increasing.
Therefore, our general location problems cover the case of locating tropical hyperplanes.
The best-fit tropical hyperplane with L^1 error, i.e. g is the L^1 norm, was considered by Yoshida, Zhang, and Zhang as part of tropical principal component analysis <cit.>.
The case of L^∞ error was considered by Akian et al. <cit.> for applications to auction theory and called tropical linear regression.
They also show that the problem is polynomial-time equivalent to mean-payoff games <cit.> and, using d_(a,H_x^max)=d_(x,H_a^min), that it is dual to the problem of finding the largest inscribed ball in the tropical convex hull of the input points <cit.>.
To end this subsection, we compute the optimal location from the examples above for specific input points.
We consider the points from <cit.> which are given by the columns of the matrix
V = [ 0 1 3 2; 1 0 2 3; 1 1 0 0; ].
For this input, there is a unique tropical Fréchet point, (1,1,0), but the set of tropical Fermat–Weber points is a hexagon, marked with grey in Figure <ref>.
We remark that V has two axes of symmetry and (1,1,0) is their intersection.
The point (1,1,0) is also the tropical center of V, while the tropical median is (0,0,0).
The latter point is also the unique apex of the best-fit tropical hyperplane with L^1 error of <cit.>.
It is also a solution of the tropical linear regression, but not the unique one.
The apices of the best-fit tropical hyperplanes with L^∞ error are of the form (λ,λ,0) with λ≤ 1 and their set is pictured with green in Figure <ref>.
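The tropical center of this configuration can be recomputed from the closed form V=⋁_i v_i of Example <ref>; a small sketch (Python with numpy assumed):

    import numpy as np

    cols = np.array([[0.0, 1.0, 1.0],   # the columns of V, read as points
                     [1.0, 0.0, 1.0],
                     [3.0, 2.0, 0.0],
                     [2.0, 3.0, 0.0]])
    reps = cols - cols.mean(axis=1, keepdims=True)   # representatives with zero sum
    center = reps.max(axis=0)                        # componentwise maximum
    print(center - center.min())                     # canonical coordinates: [1. 1. 0.]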
§.§ Regularization
In some cases, we cannot expect g to be strictly increasing or all the dissimilarity functions f_i to be strictly -star-quasiconvex.
Hence, a minimization algorithm might return a point outside the max-tropical convex hull of the input points when there are multiple solutions.
In this subsection, we show how one can arrive at a solution belonging to ^max(v_1,…,v_m) through a regularized formulation.
The idea of regularization is to consider a small parameter λ>0 and a nicely behaved function f_m+1:n→_≥ 0 and try to solve the optimization problem
minimize g(f_1(x),…,f_m(x))+λ f_m+1(x).
For our purposes, f_m+1 is nicely behaved if it is strictly -star-quasiconvex with a kernel from ^max(v_1,…,v_m).
An easy choice for the kernel is the tropical center from Example <ref>.
This is also a location problem with g_λ:^m+1_≥ 0→ given by g_λ(x_1,…,x_m,x_m+1)=g(x_1,…,x_m)+λ x_m+1, and the optimality criterion is the function h_λ:n→ given by h_λ(x)=g_λ(f_1(x),…,f_m+1(x)).
Note that g_λ is strictly increasing in the (m+1)-st entry for every λ>0.
Checking the proof of Theorem <ref> more carefully, the second statement holds if some f_ℓ is strictly -star-quasiconvex and g is strictly increasing in its ℓ-th entry.
We use this property for the regularization.
Therefore, we obtain the following direct consequence of Theorem <ref>.
For every λ>0, all the minima of h_λ lie in ^max(v_1,…,v_m).
The influence of the term f_m+1 decreases as λ goes to 0.
If the functions are regular enough, we expect a collection of optima x^⋆_λ of h_λ to converge to an optimum of h.
In fact, x^⋆_λ will be an optimum of h for λ sufficiently small if h is polyhedral convex and f_m+1 is Lipschitz continuous.
If h is polyhedral convex and f_m+1 is a convex function with sub-linear growth, then there exists λ_0>0 such that all minima of h_λ are also minima of h for every λ<λ_0.
The proof is quite technical, using subdifferential theory from convex analysis, so it is given in the appendix.
We stress that Proposition <ref> can be useful for studying the tropical Fermat–Weber problem from <cit.>.
Without regularization, it has undesirable behaviour for applications to biology; cf. <cit.>.
§ LOCATION PROBLEMS WITH TROPICALLY CONVEX SITES
Location problems can also appear when the facilities are regions of the ambient space and not only points.
Here, we consider such a generalization where the sites are tropically convex sets.
In the previous section, we used different distances to the input points.
Here, we will measure our dissimilarities in a uniform way, by fixing an increasing function γ:^n_≥ 0→ and considering d_γ(x,y)=γ(y-x).
We then say that d_γ is -star-quasiconvex; if γ is strictly increasing, we say that d_γ is strictly -star-quasiconvex.
This allows a clear definition of a distance from a region to a point: d_γ(A,x):=inf_y∈ Ad_γ(y,x).
For a closed max-tropical cone K⊆^n we define the projection π_K:^n→ K as π_K(x)=max{y∈ K:y≤ x}.
We note that π_K(x+λ)=π_K(x)+λ for every x∈^n and λ∈, so it induces a well-defined function π_K/:n→ K/ called the tropical projection onto the max-tropically convex set K/.
The following lemma gives an explicit formula for the tropical projection and it characterizes it as a closest point under d_γ.
We omit the proof, as it is a classical result, shown when γ is the maximum norm in <cit.> and for a general tropical L^p norm in <cit.>.
Let A be a closed max-tropically convex set.
Then the tropical projection π_A(x) of a point x has the entries
π_A(x)_i=max_a∈ A(a_i+min_j∈[n](x_j-a_j)).
Moreover, d_γ(A, x)=d_γ(π_A(x), x) and π_A(x) is the unique point whose distance to x equals d_γ(A,x) if d_γ is strictly -star-quasiconvex.
In fact, the maximum expression of the tropical projection from Lemma <ref> can be taken over the extremal points, in the case of tropical polytopes <cit.>.
A similar result seems to hold for more general closed max-tropically convex sets, but the form above is sufficient for our purposes.
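For a tropical polytope given by finitely many generators, the projection formula of Lemma <ref> can thus be evaluated directly over the generators; a minimal sketch (Python with numpy assumed; the names are ours):

    import numpy as np

    def tropical_projection(x, generators):
        # pi_A(x)_i = max over generators a of ( a_i + min_j (x_j - a_j) )
        x = np.asarray(x, dtype=float)
        V = np.asarray(generators, dtype=float)   # one generator per row
        lam = (x - V).min(axis=1)                 # min_j (x_j - a_j), one value per a
        return (V + lam[:, None]).max(axis=0)

    def dist_to_polytope(x, generators, gauge):
        # d_gamma(A, x) = gauge(x - pi_A(x)); the difference is entrywise >= 0
        p = tropical_projection(x, generators)
        return gauge(np.asarray(x, dtype=float) - p)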
From now on, our given sites are closed max-tropically convex sets A_1,…,A_m in n.
Similar to section <ref>, the objective function is h=g(d_γ(A_1,x),…,d_γ(A_m,x)), where g:^m_≥ 0→_≥ 0 is increasing.
There exists a minimum of h lying in the tropical convex hull of the input, ^max(A_1∪…∪ A_m).
Moreover, if g and γ are strictly increasing, then all the minima of h lie in ^max(A_1∪…∪ A_m).
If x∉^max(A_1∪…∪ A_m), then <cit.> entails the existence of an index ℓ∈[n] such that min_j≠ℓ(x_j-a_j)<x_ℓ-a_ℓ for every a∈^max(A_1∪…∪ A_m).
Since A_1,…,A_m are closed sets, then there exists an open ball around x not intersecting the union of these sets.
Thus, for δ>0 sufficiently small and y=x-δ e_ℓ we have min_j(y_j-a_j)=min_j(x_j-a_j) for every a∈^max(A_1∪…∪ A_m).
Therefore, equation (<ref>) implies π_A_i(y)=π_A_i(x) for all i∈[m].
Note that y-π_A_i(y)=x-π_A_i(x)-δ e_ℓ≤ x-π_A_i(x).
Since γ is increasing, we have d_γ(A_i,y)=γ(y-π_A_i(y))≤γ(x-π_A_i(x))=d_γ(A_i,x) for every i∈[m].
Moreover, if γ is strictly increasing we get d_γ(A_i,y)<d_γ(A_i,x).
In other words, going from x in the direction -e_ℓ we obtain a decrease in all the distances d_γ(A_i,x); in particular, a decrease of h.
Using this observation, the rest of the proof is identical to the proof of Theorem <ref>.
§ TROPICALLY CONVEX CONSENSUS METHODS
In this section, we focus on applications to phylogenetics—the study of evolutionary history of species <cit.>.
The information is represented as an evolutionary tree, or phylogeny: a tree whose leaves are labeled by the names of the species.
In this paper, we will deal only with trees that encode the evolution from a common ancestor and possess a molecular clock.
To be more formal, we have a finite set containing the names of the species and a rooted tree whose leaves are in bijection with ; the root corresponds to the most recent common ancestor of all the species under consideration.
The time is represented as positive weights on the edges, which gives a way to measure distances between nodes in the trees.
What is more, we assume that the distance from the root to any leaf is the same; this means that the same amount of time is measured from the most recent common ancestor (MRCA) of all species to any element of .
Such trees are called equidistant.
To a rooted phylogeny T we associate a distance matrix D∈^× where the entry D_ij represents the distance between the leaves labelled i and j in T.
It is known that T is equidistant if and only if D is ultrametric <cit.>, i.e.
D_ij≤max(D_ik,D_kj) ∀ i,j,k∈.
Hence, we will not distinguish between equidistant trees and ultrametric matrices in the rest of the paper.
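Checking the ultrametric condition (<ref>) for a given symmetric matrix is straightforward; a naive O(n^3) sketch (Python with numpy assumed):

    import numpy as np

    def is_ultrametric(D, tol=1e-9):
        # D_ij <= max(D_ik, D_kj) for all i, j, k, plus symmetry and zero diagonal
        D = np.asarray(D, dtype=float)
        if not np.allclose(D, D.T, atol=tol) or not np.allclose(np.diag(D), 0.0, atol=tol):
            return False
        for k in range(D.shape[0]):
            # max(D_ik, D_kj), computed for all pairs (i, j) at once
            if (D > np.maximum.outer(D[:, k], D[k, :]) + tol).any():
                return False
        return True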
Because D is symmetric and has zero entries on the diagonal, we can see it as a point of ^2.
We define the tree space _ as the image of space of all ultrametrics in 2.
Due to <cit.>, this is homeomorphic to the BHV space defined in <cit.>.
We note that the ultrametric condition (<ref>) implies that _ is max-tropically convex.
We are interested in consensus methods: given as input multiple phylogenies on , find an evolutionary tree on being as similar as possible to the input trees.
This is a common problem in evolutionary biology, as multiple distinct trees arise from the statistical procedures or from the multiple methods to reconstruct phylogenies from different data; see <cit.> or <cit.> for details.
A consensus method can be seen as a location statistic in the tree space.
Since the latter is max-tropically convex, there were many attempts to exploit this geometric structure to obtain relevant information <cit.>.
We are interested in tropically convex consensus methods, defined in <cit.>.
A consensus method c is tropically convex if c(T_1,…,T_m)∈^max(T_1,…,T_m)
for every m≥ 1 and T_1,…,T_m∈_.
The location problems discussed in the previous section give rise to tropically convex consensus methods.
Note that we do not need to impose the restriction that the optimum lie in _.
It is automatically satisfied from the tropical convexity of _ and Theorem <ref>.
This observation ensured that tropical median consensus methods are fast to compute <cit.>.
Tropically convex consensus methods are particularly interesting because they preserve relationships from the input trees.
To explain this more clearly, we firstly need some terminology: two subsets of taxa A,B form a nesting in T, and we denote it by A<B, if the MRCA of A in T is a strict descendant of the MRCA of A∪ B.
If D is the ultrametric associated to T, then we can write the condition as
max_i,j∈ AD_ij<max_k,ℓ∈ A∪ BD_kℓ.
We say that a consensus method c is Pareto on nestings if c(T_1,…,T_m) displays the nesting A<B whenever A<B appears in all input trees T_1,…,T_m.
The consensus method c is called co-Pareto on nestings if c(T_1,…,T_m) does not display the nesting A<B unless A<B appears in some input tree T_i.
These conditions are desirable for consensus methods <cit.>.
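In terms of ultrametric matrices, the nesting condition (<ref>) can be tested directly; a short sketch (Python assumed; A and B are index sets referring to the rows and columns of D):

    def displays_nesting(D, A, B):
        # A < B holds iff max_{i,j in A} D_ij < max_{k,l in A union B} D_kl
        AB = list(A) + list(B)
        depth_A = max(D[i][j] for i in A for j in A)
        depth_AB = max(D[k][l] for k in AB for l in AB)
        return depth_A < depth_AB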
It is useful to see these properties from a geometric point of view.
Consider _(A<B) the subset of _ consisting of trees displaying the nesting A<B; it is described by (<ref>).
We also make the notation _(A≮B) for the complement _∖_(A<B), which is the set of trees not displaying A<B.
Then c is Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A<B) we have c(T_1,…,T_m)∈_(A<B).
We also note that c is co-Pareto on nestings if and only if for every nesting A<B and trees T_1,…,T_m∈_(A≮B) we have c(T_1,…,T_m)∈_(A≮B).
The next result shows that tropically convex consensus methods have both the Pareto and co-Pareto properties; it is an improved version of <cit.>.
Thus, we have a large class of consensus methods satisfying both properties.
This is remarkable, as no such consensus method is listed in the surveys <cit.>.
Tropically convex consensus methods are Pareto and co-Pareto on nestings.
For every nesting A<B, the set _(A<B) is max-tropically convex as (<ref>) describes an open max-tropical halfspace.
Whence, Remark <ref> implies that tropically convex consensus methods are Pareto on nestings.
Similarly, the set _(A≮B) is max-tropically convex as it is the intersection of _ with the tropical halfspace defined by the inequality max_i,j∈ AD_ij≥max_k,ℓ∈ A∪ BD_kℓ.
Remark <ref> implies also the co-Pareto property.
The Pareto property gives a unanimity rule: nestings present in all the trees are also present in the consensus.
One may wonder if this rule can be relaxed as there exist (super)majority-rule consensus trees commonly used for the unweighted case; they are denoted M_ℓ by Felsenstein in <cit.>.
Indeed, one can find such a rule for tropical medians <cit.>.
A nesting appears in the tropical median consensus tree if it appears in a proportion of the input trees greater than 1-1/n2.
Moreover, a nesting will not appear in the tropical median consensus tree if it occurs in a proportion less than 1/n2 of the input trees.
The tropical median corresponds to the Fermat–Weber problem whose gauge distance is given by the regular simplex.
Therefore, the essential hull of a finite set A defined in <cit.> coincides with the max-tropical convex hull of A.
Then the conclusion follows from <cit.> and Remark <ref>, as in the proof of Proposition <ref>.
Note that a consensus method is not well-defined when there are multiple minimum points.
Most problematic is the situation when different tree topologies are possible, since it is unclear how to resolve incompatible optimal trees.
Yet, this is not the case when the set of optimal locations is convex <cit.>: separating the tree space into cones of trees sharing a tree topology gives rise to a convexly disjoint collection in the sense of <cit.>.
Nonetheless, the aforementioned proposition applies when the set of all optima in n2 is contained in _, which is guaranteed for strictly -star-quasiconvex dissimilarities.
Otherwise, one might still have problems in defining consistently a consensus method; see <cit.> for the symmetric tropical Fermat–Weber problem.
For this reason, one has to consider the regularized versions discussed in §<ref>.
§.§ Towards tropical supertrees
Supertrees are a generalization of consensus trees to the case when the given input consists of phylogenies on different sets of taxa.
This can be also interpreted as a missing-data problem.
In other words, we are given as input phylogenetic trees T_1,…,T_m whose leaves are labelled by _1,…,_m, respectively.
A supertree method returns a tree whose leaf set is =⋃_i_i, summarizing the information from T_1,…,T_m.
We use the idea of Grindstaff and Owen to represent trees with missing taxa by the set of all possible trees on all the taxa <cit.>.
Their method is similar to a location problem with BHV distance using an L^∞ error.
We note that another approach for supertrees in a tropical setting was proposed in <cit.>; the authors relied on imputation to reduce supertrees to consensus trees.
So we replace the input tree T_i by the tropically convex set __i^-1(T_i) where _:_→_ is a projection obtained by keeping the entries of an ultrametric matrix corresponding only to rows and columns from ⊂.
We will be interested in tropically convex supertrees, i.e. the output belongs to ^max(⋃_i∈[m]__i^-1(T_i)).
According to Theorem <ref>, we may obtain such methods by employing strictly -star-quasiconvex dissimilarity measures.
Tropically convex supertrees are also Pareto on nestings.
We record this fact, whose proof is similar to Proposition <ref>.
In particular, it motivates the search for tropically convex supertree methods.
Tropically convex supertree methods are Pareto on nestings.
A co-Pareto property is no longer possible, as relationships between groups of taxa might not appear in all input trees.
Nonetheless, no conflicting relationships can appear.
We remark that we did not give a well-defined supertree method.
The problem arises from the fact that the optima could have different tree topologies.
For example, an extreme case is when there are two trees T_1 and T_2 on disjoint sets of taxa.
There are clearly many different ways to combine the information.
Therefore, extra assumptions must be made.
§ CONCLUSION AND FUTURE PERSPECTIVES
We provided a large class of location estimators whose value lies in the max-tropical convex hull of the input with the purpose of obtaining consensus methods with good properties.
The first direction would be to develop methods for computing the optima efficiently.
On the other hand, searching for extra properties of specific location problems could be helpful for applications; more details are provided below.
§.§ Comparison to consensus methods based on the BHV distance
We have exploited tropical convexity to obtain consensus methods with good properties.
More precisely, we focused on (co-)Pareto properties that can be interpreted in a purely geometric way.
The associated spaces are also max-tropically convex so the aforesaid properties are immediate for the tropical approach.
Although the BHV geometry of the tree space is more studied than its tropical counterpart, there are few consensus methods proposed for this geometry.
A first proposal was given in the pioneering paper by Billera, Holmes, and Vogtmann <cit.>, but a few drawbacks were already pointed out: e.g., doubling every input tree changes the output.
An approach based on Fréchet means was proposed by Miller et al. <cit.> and Bačák <cit.>.
It is also Pareto and co-Pareto on splits <cit.>, but the result is more intricate.
The same properties hold for Fermat–Weber and center problems in the BHV space <cit.>.
The approach is again analytical, but similar for all the cases.
One could try a geometric approach, as in the tropical case, as it could lead more quickly to the identification of self-consistent properties of consensus methods.
§.§ Majority rules in consensus methods
Proposition <ref> provides a supermajority rule for tropical median consensus with respect to nestings.
This can be a step towards understanding the relationship between median weighted trees and the widely used majority-rule consensus for unweighted trees.
In fact, the majority-rule consensus can be interpreted as a median <cit.>, but it is unclear if this can be extended to weighted phylogenies.
However, Proposition <ref> provides a large threshold for a majority rule in the case of tropical median consensus trees, indicating that they are quite conservative.
This seems to be owing to the low breakdown point of the tropical median caused by asymmetry; check <cit.> for more details.
Therefore, an investigation of location estimators with higher breakdown point could provide a better connection to the majority-rule consensus.
§.§ Compositional data
A different application of our location estimators could be to compositional data <cit.>.
That is, the data can be seen as points in a simplex; our methods would be applied to the centered logratio transform of the input.
Note that -star-quasiconvex sets are defined with respect to special directions, which correspond to the vertices of the simplex.
What is more, the motivation of Tokuyama and Nakano for studying algorithms for the transportation problem came from splitting the points of a simplex into multiple regions <cit.>.
Moreover, Nielsen and Sun analyzed clustering methods with the symmetric tropical distance on compositional data showing a better performance than other more commonly used dissimilarity measures <cit.>.
These results suggest that -star-quasiconvex dissimilarities could be useful in compositional data analysis.
§.§ Acknowledgments
I am indebted to Michael Joswig for discussing different aspects of this paper.
I thank Günter Rote for bringing <cit.> to my attention.
The author was supported by Facets of Complexity (GRK 2434, project-ID 385256563).
§ APPENDIX A: CONVEX ANALYSIS ON N
We state and prove a slightly more general form of Proposition <ref> and then we put a Euclidean structure on n to show how one can obtain a quantitative result for the regularized version of the tropical Fermat–Weber problem.
§.§ The proof of Proposition <ref>
We will prove the result in a finite-dimensional real vector space X.
We will equip it with an inner product ⟨·,·⟩ which gives an isomorphism X^*≅ X.
In this way, we can see the subgradients of a convex function as elements of X.
We recall that the subdifferential of a convex function f:X→ at a point x is the set
∂ f(x)={c∈ X:f(y)-f(x)≥⟨ c,y-x⟩ ∀ y∈ X}.
It will be used to characterize the minima of f through the first-order minimality condition: x is a minimum of f if and only if ∈∂ f(x).
We refer to the book by Rockafellar <cit.> for more details on convex analysis.
We are interested in optima of regularized versions of h of the form h+λ f with f having linear growth.
More specifically, we require f to be Lipschitz continuous, i.e. there exists a constant L>0 such that |f(x)-f(y)|≤ Lx-y for every x,y∈ X, where · is any norm on X.[We assumed that X is finite-dimensional, so every two norms are equivalent. Thus, the definition does not depend on the specific norm. Nevertheless, the constant L depends on ·.]
As a last definition, we say that h is polyhedral convex if it is the maximum of finitely many affine functions on X.
Now we can state and prove a slight generalization of Proposition <ref>.
Let h:X→ be a polyhedral convex function and f:X→ convex and Lipschitz continuous.
Then there exists a constant λ_0>0 such that the minima of h+λ f are also the minima of h for every λ∈(0,λ_0).
Consider an arbitrary minimum m_λ of h+λ f.
The first-order optimality condition entails ∈∂ h(m_λ)+λ∂ f(m_λ).
What is more, since f is Lipschitz continuous, <cit.> yields the existence of a bounded set B such that ∂ f(x)⊂ B for all x∈ X.
If ∉∂ h(x), then ∉∂ h(x)+λ B for λ sufficiently small, as ∂ h(x) is closed.
We also know that there are finitely many values for ∂ h(x), as we assumed h is a polyhedral convex function.
Accordingly, there exists λ_0>0 such that ∉∂ h(x)+λ B for every λ∈(0,λ_0).
The last relation implies that ∈∂ h(m_λ) if λ<λ_0, which is equivalent to m_λ being a minimum of h.
If we know the bounded set B from the proof of Proposition <ref>, then we can set λ_0=sup{λ>0:∉ P+λ B, ∀ P∈}
where is the set of all possible values of ∂ h(x) such that ∉∂ h(x).
The infimum is positive, as 𝒫 is a finite collection of closed convex sets.
If h is a gauge γ, then <cit.> says that we can set B={x∈ X:γ^∘(x)≤ r}=:rB_γ^∘ for some r>0 where γ^∘(y):=sup_x:γ(x)≤ 1⟨ x,y⟩ is the dual gauge.
Hence, P+λ B represents the set of points at distance at most λ r from P measured by the distance d_γ^∘ induced from γ^∘, i.e. d_γ^∘(x,y)=γ^∘(y-x).
Consequently, we have λ_0=inf_P∈d_γ^∘(P,)/r.
§.§ Euclidean structure on n
We conclude by explaining how one can put a Euclidean structure on n in a natural way.
The idea is to identify the tropical projective torus with a hyperplane of ^n with the regular Euclidean structure.
Using this idea, by factoring with , one can identify n with the orthogonal subspace to , which is ={x∈:x_1+…+x_n=0}.
This identification is natural as we obtain the same subdifferentials of a convex function f:n→ as in the case when we consider it as a function on ^n such that f(x+λ)=f(x) for each x∈^n and λ∈.
Having fixed this structure, we search for λ_0 as in Proposition <ref> for h(x)=∑_i γ_∞(x-v_i) and f(x)=γ_1(x-v) where v∈^max(v_1,…,v_m).
That is, we want quantitative results for regularizations of tropical Fermat–Weber problems.
In this case, the subdifferentials of h are integer polytopes in .
Moreover, one can check that the dual gauge of γ_1 has the expression γ_1^∘(x)=γ_1(-x)/n which takes integer values at each point of ∩^n.
Consequently, λ_0=inf_P∈d_γ_1^∘(P,)≥ 1 as it is a positive integer.
Whence, the minima of h+λ f are also minima of h for every λ∈(0,1).
|
http://arxiv.org/abs/2307.04259v2 | 20230709201858 | Dynamical Wormhole Solutions in Rastall Theory | [
"Yaghoub Heydarzade",
"Maryam Ranjbar"
] | gr-qc | [
"gr-qc"
] |
[1]Yaghoub Heydarzade
[2]Maryam Ranjbar
[1]Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara, Turkey
[2]Istanbul, Turkey
Dynamical Wormhole Solutions in Rastall Theory
[
August 12, 2023
==============================================
Wormhole configurations in Einstein's general theory of relativity (GR) require exotic matter sources violating the weak energy condition (WEC). Rastall's theory is a generalization of GR in its matter source, considering a nonconserved energy-momentum (EM) tensor. Hence, the nature of this generalization of the matter source of the field equations on one hand, and the possibility of respecting energy conditions for dynamical wormholes in contrast to static ones on the other hand, motivate us to study wormhole configurations respecting the energy conditions, or minimizing their violation, in Rastall's modified theory. We derive general analytical solutions considering a constant redshift function and a particular equation of state for the energy density and pressure profiles. We show that, because of the modification in the EM source of the field equations, there exist solutions respecting the WEC in the vicinity of the wormhole's
throat for specified values of the parameters. Some particular solutions are discussed in detail.
§ INTRODUCTION
Despite the success of Einstein's general relativity (GR) in explaining many gravitational phenomena, it falls short in explaining dark matter and dark energy. To address these issues, modifications of GR have been proposed, e.g., scalar-tensor theories <cit.>, f(R) theories <cit.>, and braneworlds <cit.>. For a comprehensive review, see <cit.>.
In 1972, Peter Rastall proposed a modification to Einstein's theory with a nonconserved energy-momentum
tensor <cit.>. In his theory, the divergence of the energy-momentum tensor is proportional to the gradient of the Ricci scalar through a proportionality constant <cit.>. Hence, although the standard conservation law of energy-momentum is abandoned, the Bianchi identity still holds. Rastall gravity yields some interesting results; for instance, the late-time accelerating expansion of the universe can be explained <cit.>, and de Sitter black hole solutions can be found without explicitly assuming a cosmological constant <cit.>.
The question of whether Rastall gravity is equivalent to Einstein's theory up to a redefinition of the EM tensor was raised in <cit.>. However, it has been shown that this theory, with its nonconserved EM source, is not just a redefinition of the EM tensor and gives results different from GR; see, for instance, <cit.>.
It is shown recently that a Lagrangian formulation for a Rastall-type theory can be provided in the context of f(R, ℒ_m) and f(R, T) theory <cit.> where R is the Ricci scalar, ℒ_m is the Lagrangian of matter fields and T is the trace of the energy-momentum tensor.
Einstein's general theory of relativity (GR) admits solutions describing geometrical bridges connecting two distant regions of a universe or even two different universes. It was Wheeler who first proposed the term "wormhole" for these geometrical bridges, in order to provide a mechanism for having "charge without charge". He claimed that the electric charge emerges as a manifestation of the topology of a space, a sheet with a handle <cit.>. Interest in these solutions declined over the years until the notion of traversable Lorentzian wormholes was introduced by Morris, Thorne and Yurtsever <cit.>. It was argued that these structures could allow humans not only to travel between distant parts of a universe, or even two universes, but also to construct time machines.
In the framework of GR, the flaring-out condition at the throat of the wormhole leads to the violation of the weak energy condition (WEC), demanding a matter source that is exotic in the Earth-based laboratory context.
This violation of the energy condition is conventionally regarded as a problematic issue that requires a resolution, or at least a minimization <cit.>.
construct thin-shell wormholes in the context of GR via cut-and-paste procedure in which the exotic matter source is minimized by concentrating at the wormhole's throat <cit.>.
Another approach is to investigate the modified
theories of gravity where the presence of curvature higher order terms in curvature may provide a possibility for constructing wormhole structures by ordinary matter sources <cit.>. As instances, see wormhole solutions in Brans-Dicke theory <cit.>, Einstein-Gauss-Bonnet theory
<cit.>, f(R) gravity <cit.>, and scalar-tensor gravity <cit.>, higher dimensioanal
theories <cit.>. Moreover, in contrast to static wormholes in GR, it has been noted that for evolving wormholes there is this possibility of satisfaction of energy conditions for a
finite interval of time <cit.>, see also pioneer works <cit.>.
Akin to the other modified theories, Rastall theory also has numerous successful applications in cosmology and astrophysics and this drives a motivation for investigating it versus the conditions for the existence wormholes structures. Nevertheless, our main motivation for the present study rely on very distinct feature of this theory that distinguishes it from other modified
theories: its modification in the matter source of the Einstein field equations only and leaving the geometric part unaltered. As the result, this provides very unique possibilities in
the context of this theory as i) the field equations
remain rather simple to be handled, and ii) the main concern in constructing
wormholes, need for exotic matter fields, can be traced easily by the nonminimal coupling of the EM tensor and geometry, and their interplay through a constant
coupling parameter. We will see the footprint of this coupling in the solutions derived. On the other hand, the possibility of respecting ECs for finite
time intervals in dynamical configurations, in contrast to static cases, in GR, stimulate another motivation to investigate how these dynamical configurations
behave in Rastall theory. Therefore, the objective of our study is to discover viable dynamical wormhole solutions within the framework of Rastall theory and demonstrate how the nonminimal coupling nature of this theory influences the shape and evolution of these solutions. Here it is necessary to to mention that static wormhole solutions have been studied in the context of Rastall theory showing that the WEC can be met for some particular solutions, see for instance <cit.>. In <cit.> it is shown that Rastall theory is capable of modifying the energy condition requirements of the matter source to satisfy the strong energy condition at the throat. This modification demands that either the Rastall coupling κ or λ has to be negative. It is concluded that Rastall gravity has the potential to alleviate some issues encountered by static wormholes within the framework of Einstein gravity.
Since the dynamical wormholes in the context of Rastall theory have not been studied yet, it seems worthwhile to put one step further to explore the theory for the possible generalizations of the static solutions to dynamical cases.
The organization of the paper is as follows. In section II, we derive the general analytical solutions of the field equations for a wormhole geometry. In section III, we analyse some particular solutions versus the flaring out and WEC, and show that under some constraints these conditions are respected in the context of Rastall gravity. Section IV is devoted to our concluding remarks.
§ EVOLVING WORMHOLES IN RASTALL THEORY
The validity of the energy-momentum conservation law in the four dimensional spacetime was questioned by Rastall <cit.>. He considered the following hypothesis
T^μν_;μ = λ ℜ^,ν,
where T^μν
is the energy-momentum tensor of matter source, λ is the Rastall constant parameter, and ℜ is the Ricci scalar. Hence, the Einstein field equations get modified as
G_μν + κλ g_μν ℜ = κ T_μν,
where κ is the gravitational coupling. In the present work, we are interested in dynamical wormhole solutions of these field equations. For the static wormhole solutions in Rastall theory, see <cit.>. Hence, we consider the time-dependent generalization of the Morris-Thorne wormhole metric <cit.>
ds^2 = -U(r) dt^2 + R(t)^2 ( dr^2/(1-B(r)/r) + r^2 (d θ^2 + sin^2 θ dϕ^2) ),
where R(t) is the scale factor of the background Universe, U(r) is the redshift function and B(r) is the wormhole shape function. The static Morris-Thorne wormhole is recovered by setting R(t)=constant. In order to have a wormhole geometry, the following general constraints on the redshift and shape functions are required <cit.>.
∙ The wormhole throat connecting two asymptotic regions is located at the minimum radial coordinate r_0=B(r_0).
∙ The shape function B(r) must satisfy the so-called flaring-out condition B(r)-rB^' (r)>0 at the vicinity of the throat which reduces to B^'(r_0)<1 at the throat.
∙ In order to keep the signature of the metric for r > r_0, the shape function holds the condition 1-B(r)/r>0.
∙ For asymptotically flat
wormholes, the metric functions should satisfy the conditions U(r)→ 1, B(r)/r → 0 as r →∞. In this case, the metric (<ref>) tends
to the flat Friedmann-Robertson-Walker metric in the asymptotic region.
∙ The redshift function U(r) must be finite and nonzero throughout the spacetime in order to ensure the absence of horizons and singularities.
We use a methodology similar to that of <cit.> for evolving Lorentzian wormholes in GR. We will see how Rastall's parameter appears in the solutions for the scale factor and shape function, modifying the corresponding solutions of <cit.>. Considering the metric (<ref>) with the constant redshift function U(r)=1, and the energy-momentum tensor T^μ_ν=diag(-ρ(t,r), P_r(t,r), P_l(t,r), P_l(t,r)), the field equations (<ref>) yield
ρ(t,r) = 1/κ ( 3 H^2 + B'(r)/(r^2 R(t)^2) - κλ ℜ ),
P_r(t,r) = 1/κ ( -3 H^2 - 2 Ḣ - B(r)/(r^3 R(t)^2) + κλ ℜ ),
P_l(t,r) = 1/κ ( -3 H^2 - 2 Ḣ - B'(r)/(2 r^2 R(t)^2) + B(r)/(2 r^3 R(t)^2) + κλ ℜ ),
where H=Ṙ(t)/R(t), and the Ricci scalar reads as
ℜ = 2 B'(r)/(r^2 R(t)^2) + 12 H^2 + 6 Ḣ.
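As a consistency check (not part of the original text), the expression for ℜ can be verified symbolically. A minimal sympy sketch, assuming the metric (<ref>) with U(r)=1 and the mostly-plus signature used above:

import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
R = sp.Function('R')(t)          # scale factor
B = sp.Function('B')(r)          # shape function

x = [t, r, th, ph]
g = sp.diag(-1,
            R**2 / (1 - B / r),
            R**2 * r**2,
            R**2 * r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(4)) / 2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c} + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
                           + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
                                 for d in range(4))
                           for a in range(4)))

Rscalar = sp.simplify(sum(ginv[b, c] * ricci(b, c) for b in range(4) for c in range(4)))

H = sp.diff(R, t) / R
claimed = 2 * sp.diff(B, r) / (r**2 * R**2) + 12 * H**2 + 6 * sp.diff(H, t)
print(sp.simplify(Rscalar - claimed))    # expected output: 0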
To integrate the present system of three nonlinear partial differential equations (<ref>), (<ref>) and (<ref>) with five unknowns R(t), B(r), ρ(t,r), P_r(t,r) and P_l(t,r), one can consider a physically motivated constraint; more specifically, an equation of state for the pairs of unknowns (ρ(t,r), P_r(t,r)) and (ρ(t,r), P_l(t,r)), or even for (P_r(t,r), P_l(t,r)), as in <cit.>. Another possibility is to impose a traceless constraint on the EM tensor as in <cit.>. Here, in order to keep the equation of state as general as possible, so that it can reduce to some known specific equations of state, we consider a general EoS involving our three unknowns (ρ(t,r), P_r(t,r), P_l(t,r)) as in <cit.>
ρ(t,r) = ω/(1+2γ) ( P_r(t,r) + 2γ P_l(t,r) ),
where ω and γ are equation of state parameters. This equation of state, depending on the two parameters ω and γ, reduces to the following special cases: i) the barotropic EoS ρ(t,r)=ω P(t,r) when P_r(t,r)=P_l(t,r)=P(t,r), for any γ, which in turn reduces to a cosmological constant for ω=-1; ii) the traceless EM EoS -ρ(t,r)+P_r(t,r)+2P_l(t,r)=0 when ω=3, γ=1; and iii) the dimension-dependent EoS ρ(t,r)=α(P_r(t,r)+(n-2)P_l(t,r)) <cit.> in n=4 when γ=1. Later we will see how Rastall's coupling β and the wormhole conditions together put constraints on each of the two parameters ω and γ in (<ref>).
Combining the set of equations (<ref>, <ref>, <ref>) with the EoS (<ref>), we obtain the following single nonlinear partial differential equation in our unknown functions B(r) and R(t)
[ ( 1+ γ(2+ω) ) r B'(r) - ω(γ-1) B(r) ] / [ κ (1+2γ) r^3 ] = - R(t)^2 (1+2γ) ( 8 ω Ḣ + 12 H^2 (ω+1) ) / [ (4+8γ) κ ]
+ λ ℜ (1+ω) R(t)^2.
This equation can be integrated for B(r) and R(t) by separating it into the radial and temporal parts as follows
[ ( 1+ γ(2+ω) ) r B'(r) - ω(γ-1) B(r) ] / [ (1+2γ) r^3 ] - 2β(1+ω) B'(r)/r^2 =
β(1+ω) R(t)^2 ( 12 H^2 + 6 Ḣ ) - R(t)^2 (1+2γ) ( 8 ω Ḣ + 12 H^2 (ω+1) ) / (4+8γ),
where β=κλ. This equation can be considered as the master equation to be solved for our unknowns, and it is similar to the master equation in <cit.>. In <cit.> the master equation was derived by combining the field equations using the relation p_r(t,r)=α p_t(r,t), where in general α=α(r). However, one notes the modification here by Rastall's parameter β and the difference in the coefficients due to the different equations of state used. The radial and temporal parts of Eq. (<ref>) give the following ordinary differential
equations (ODEs) for the shape function and scale factor respectively
[ ( 1+ γ(2+ω) ) r B'(r) - ω(γ-1) B(r) ] / [ (1+2γ) r^3 ] - 2β(1+ω) B'(r)/r^2 = C,
and
R(t)^2 [(6 β (ω +1) - 2 ω) Ḣ+(12 β (ω +1)- 3(ω +1) )H^2 ] =C.
Defining the constants a=6β(ω+1)-2ω and d=12β(ω+1)-3(1+ω), Eq. (<ref>) can be rewritten as
R(t)^2 [a Ḣ+ d H^2]= C,
or equivalently
a R(t) R̈(t)+ b Ṙ(t)^2 =C,
where the constant b = d - a = a + ω - 3. Here, one notes that the dynamics of the scale factor depends on Rastall's coupling parameter β and the EoS parameter ω, while being independent of the parameter γ.
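Explicitly, b = d - a = [12β(ω+1) - 3(ω+1)] - [6β(ω+1) - 2ω] = 6β(ω+1) - ω - 3, and since a + ω - 3 = 6β(ω+1) - 2ω + ω - 3 = 6β(ω+1) - ω - 3, the two expressions for b indeed coincide.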
In the following subsections, we obtain general exact solutions to Eqs. (<ref>) and (<ref>) for the two cases C=0 and C ≠ 0. Some particular subclasses of the obtained general solutions will be investigated versus the flaring-out and weak energy conditions in the next section.
§.§ Solutions for C=0
§.§.§ Solution for the shape function
Integrating Eq.(<ref>) for C=0, the shape function can be obtained as
B(r) = r_0 (r_0/r)^{ (1-γ)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] },
Here one observes how Rastall's coupling parameter β modifies the wormhole's shape function in comparison to the GR case β=0. The resulting geometry can be asymptotically flat or nonflat depending on the set of parameters ω, γ and β.
The flaring out condition at the throat reads as
B'(r_0) = (γ-1)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] < 1.
Moreover, in order to satisfy the asymptotically flatness B(r)/r→ 0 as r→∞, the following condition should be fulfilled
-1 < (1-γ)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] < 1.
§.§.§ Solution for the scale factor
One can integrate Eq.(<ref>) for C= 0 to find the general solution
R(t) = (R_0 t + R_1)^{1/(1+b/a)} = (R_0 t + R_1)^{a/d},
where R_0 and R_1 are integration constants. One observes that this solution does not contain the Big Bang singularity if t≠ -R_1/R_0. Here
one notes that the solution (<ref>) is a generic dynamic wormhole solution that is similar to the solution obtained in <cit.> in GR. Hence, the general form of the solution for the scale factor is independent of Rastall gravity, owing to the similarity of the governing ODE for R(t) in (<ref>). However, specific solutions may differ depending on the parameter constraints imposed in the underlying theory. Here, Rastall's coupling β arises in the power a/d and can be considered as a factor distinguishing the solution from those in GR, which are recovered in the limit β→ 0. Later we will discuss the values of the β parameter and its effect on the satisfaction of the wormhole conditions.
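As a quick consistency check (not given in the text above), one can verify symbolically that the power-law scale factor above solves a R R̈ + b Ṙ^2 = 0, i.e. the C=0 case; a minimal sympy sketch:

import sympy as sp

t, R0, R1, a, b = sp.symbols('t R_0 R_1 a b', positive=True)
R = (R0 * t + R1)**(a / (a + b))      # exponent a/d with d = a + b
ode = a * R * sp.diff(R, t, 2) + b * sp.diff(R, t)**2
print(sp.simplify(ode))               # expected output: 0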
The following particular subclasses of (<ref>) and (<ref>) can be of interest.
∙ 𝐚=𝐝
For this case, the scale factor, shape function and ω are given by
R(t) = R_0 t + R_1, ω = (6β-3)/(1-6β), β ≠ 1/6,
B(r)=r^3/r_0^2.
One can verify that this solution to (<ref>) fails to satisfy the flaring out condition for evolving wormhole solutions. Hence, we do not analyze this solution versus the WEC.
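One can check this directly: with B(r) = r^3/r_0^2 one has B'(r) = 3r^2/r_0^2, so B'(r_0) = 3 > 1, violating B'(r_0) < 1; in addition, 1 - B(r)/r = 1 - r^2/r_0^2 ≤ 0 for r ≥ r_0, so the metric signature condition also fails away from the throat.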
∙
𝐚=2𝐝
In this case, we have
R(t) = (R_0 t + R_1)^2, ω = (9β-3)/(2-9β), β ≠ 2/9,
B(r) = r_0 (r_0/r)^{ 3(3β-1)(γ-1) / [β(5γ+7)-γ-2] },
where γ should satisfy the wormhole conditions.
∙
𝐚=1/2 𝐝
In this case, we find
R(t) = (R_0 t + R_1)^{1/2}, ω = 3,
B(r) = r_0 (r_0/r)^{ 3(γ-1) / [8β(2γ+1)-5γ-1] }.
Here, the parameter β remains arbitrary and γ should satisfy the wormhole conditions. To make clear how Rastall gravity itself, and not only the choice of the stress tensor, influences the solutions (<ref>) and (<ref>), one may consider the following two possibilities: i) fix the parameters γ and ω by assuming known specific stress-energy tensors at this step, so that the solutions now clearly depend on the Rastall factor, and ii) consider the theoretically and observationally verified values or ranges of the Rastall parameter β, and then obtain the corresponding allowable ω and γ values satisfying the wormhole conditions, which can include parameter ranges for both normal and exotic matter. The latter possibility shows how the coupling parameter β confines or affects the matter sources needed for such configurations. Up to this point, one observes the constraint on the ω parameter. In section 3, in order to investigate the obtained viable solutions versus the ECs, and regarding the theoretical and observational constraints on the β parameter <cit.>, we will consider the two admissible ranges 0<β<1/6 and β<0, and we will analyse the latter possibility in detail. Specifically, we show that the satisfaction of the wormhole conditions is possible for the two observationally obtained values β=0.163 <cit.> and β=0.041 <cit.>. As an instance, for the particular solution with a=(1/2)d, considering the EoS parameters ω=3, γ=0.35 with β=0.041 allows all wormhole conditions to be satisfied, as illustrated in Figure <ref>. This is an interesting case in the sense that, substituting these EoS parameters in (<ref>) and defining an effective pressure P_e(t,r)=P_r(t,r)+(0.7)P_l(t,r), we obtain an effective equation of state P_e(t,r)=(1.7/3)ρ(t,r), which denotes a matter source respecting the ECs. This is also an example of the first possibility mentioned above.
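As a quick numerical illustration (not part of the original analysis), one can evaluate the flaring-out/flatness exponent B'(r_0) obtained above and the effective equation-of-state coefficient for the quoted values ω=3, γ=0.35, β=0.041:

beta, omega, gamma = 0.041, 3.0, 0.35

# B'(r_0) from the C = 0 shape function: (gamma-1)*omega / [1 - 2 beta (2 gamma + 1)(omega + 1) + gamma (omega + 2)]
bp = (gamma - 1) * omega / (1 - 2 * beta * (2 * gamma + 1) * (omega + 1) + gamma * (omega + 2))
print("B'(r_0) =", round(bp, 3), "| flaring-out (<1):", bp < 1, "| flatness (|.|<1):", abs(bp) < 1)

# effective EoS: rho = omega/(1+2 gamma) (P_r + 2 gamma P_l), so P_r + 0.7 P_l = (1+2 gamma)/omega * rho
print("P_e = P_r + 0.7 P_l =", round((1 + 2 * gamma) / omega, 4), "* rho")   # 1.7/3, approximately 0.5667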
§.§ Solutions for C ≠ 0
§.§.§ Solution for the shape function
The shape function B(r) can be obtained by integrating (<ref>) as
B(r) = -C/[6β(ω+1)-ω-3] r^3 + C_1 r^{ (γ-1)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] },
where C and C_1 are separation and integration constants, respectively. Like (<ref>), the solution (<ref>) is a generic wormhole shape function and is similar to the solution in <cit.>; the difference lies only in some parameter choices. However, as we will see later when analyzing the solutions versus the WEC, the difference in the underlying theories, i.e., here the presence of the Rastall parameter β, can play a crucial role in satisfying the wormhole conditions even with ordinary matter sources. This indeed shows how such a modification of the EM source, akin to the higher-order curvature terms in other modified theories, is capable of resolving the need for exotic matter in GR.
To be specific, the presence of β puts constraints on the required matter sources, i.e., on ω and γ, see the classification given in Table <ref>. In other words, as discussed in <cit.> for static cases, considering the field equations G_μν=κ_r S_μν, where the effective EM tensor S_μν includes Rastall's modification term βℜg_μν, the actual matter source may take on phantom characteristics. Therefore, in Rastall gravity, general wormhole solutions can exist with both normal and phantom matter, depending on the Rastall coupling parameter.
Using the (initial) condition
B(r_0)=r_0 at the wormhole's throat we can determine integration constant C_1 as
C_1 = [ (6β(ω+1)-ω-3) r_0 + C r_0^3 ] / [ (6β(ω+1)-ω-3) r_0^{ (γ-1)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] } ],
from which we find the flaring out condition at the throat
as
B'(r_0) = [ -C r_0^2 (1+2γ) + ω(1-γ) ] / [ -1+2β(2γ+1)(ω+1)-γ(ω+2) ] < 1.
Here one observes that, depending on the set of parameters ω, γ and β, the coefficient of the first term in (<ref>), i.e., k = C/[6β(ω+1)-ω-3], appears as an effective cosmological constant. This means that for C ≠ 0 we have asymptotically (anti-)de Sitter-like solutions, and the asymptotic flatness condition does not hold here. Also, as pointed out in <cit.>, the constant k defined above can be interpreted as a topological number denoting the spatial curvature of the background FRW spacetime, taking the values ±1, 0 that represent a closed, open and flat universe, respectively. One can write the B(r) function as
B(r)= -k r^3 + B_n(r),
where k represents the spatial curvature of the FRW metric
and B_n(r) is the shape function of a wormhole inhabiting this spacetime. One should note the difference between the treatment here in (<ref>) and (<ref>), similar to <cit.> as instances, and that in <cit.>, where the throat condition B_n(r_0)=r_0 is imposed only on the second term B_n(r) of the shape function. It is mentioned in <cit.> that when the throat condition B(r_0)=r_0 is imposed, the spatial extension of the wormhole solution cannot be arbitrarily large.
Following <cit.>, the throat condition B_n(r_0)=r_0 together with the flaring-out condition gives
B'(r_0) = (γ-1)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] < 1.
The asymptotic flatness condition reads as
-1 < (1-γ)ω / [1-2β(2γ+1)(ω+1)+γ(ω+2)] < 1.
§.§.§ Solution for the scale factor
Considering the general case a, b ≠ 0, Eq. (<ref>) can be integrated, giving the following first-order nonlinear differential equation
Ṙ^2(t) = (C/b) ( 1 - R_0 R^{-2b/a} ),
where R_0 is an integration constant, and hence
∫d R/√(1-R_0 R^ -2b/a)= ±√(C/b)∫ d t,
for C/b>0. Here one can obtain the explicit form of the scale factor R(t) for some particular choices of the parameters a and b.
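One can check directly that the first integral above is consistent with a R R̈ + b Ṙ^2 = C: differentiating Ṙ^2 = (C/b)(1 - R_0 R^{-2b/a}) with respect to t gives 2 Ṙ R̈ = (2C/a) R_0 R^{-2b/a - 1} Ṙ, i.e. a R R̈ = C R_0 R^{-2b/a}, and adding b Ṙ^2 = C (1 - R_0 R^{-2b/a}) reproduces a R R̈ + b Ṙ^2 = C.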
The following particular cases can be of interest.
∙ 𝐚=-2𝐛
This case gives the scale factor R(t), ω and shape function B(r) as follows
R(t) = 1/R_0 - (R_0/4) ( ±√(C/b) t + R_1 )^2, ω = (3-9β)/(9β-2), β ≠ 2/9,
B(r) = (9β-2)C/(12β-3) r^3 + [ r_0 ( (3-12β) + (9β-2) C r_0^2 ) / (12β-3) ] (r/r_0)^{ 3(3β-1)(γ-1) / [-β(5γ+7)+γ+2] },
where R_1 is an integration constant.
Considering B_n(r) as the shape function of the
inhabiting wormhole, we have
R(t) = 1/R_0 - (R_0/4) ( ±√(k) t + R_1 )^2, ω = (3-9β)/(9β-2), β ≠ 2/9,
B_n(r) = r_0 (r/r_0)^{ 3(3β-1)(γ-1) / [-β(5γ+7)+γ+2] },
where the reality of the solution requires k=1. Later we will show that the WEC can be respected in both the above cases for a=-2b.
∙ 𝐚=-𝐛
In this case, one finds
R(t) = (1/√(R_0)) sin( ±√(C R_0/b) t + R_1 ), β = 1/4,
B(r) = -2C/(ω-3) r^3 + [ r_0 (2C r_0^2 + ω - 3)/(ω-3) ] (r/r_0)^{ 2(γ-1)ω / (2γ-ω+1) },
where R_1 is an integration constant. We do not analyze this solution versus the wormhole conditions, since the contraction of the field equations (<ref>) by the metric gives the Ricci scalar as ℜ = 1/(1-4β) T, which diverges for β = 1/4 and T ≠ 0 <cit.>.
§ WEAK ENERGY CONDITION
In order to investigate the obtained viable solutions versus the energy conditions, regarding the theoretical and observational constraints on β parameter <cit.>, we will consider two admissible ranges 0<β<1/6 and β<0.
§.§ WEC for 0<β<1/6
In this subsection, considering 0<β<1/6 we obtain the valid ranges of ω and γ satisfying both the WEC (ρ≥ 0, ρ+P_r >0 and ρ+P_l>0) and flaring-out condition (B^'(r_0)<1) simultaneously.
§.§.§ Analysis of solutions for C=0
Here we analyze the following particular solutions for the scale factor when C=0.
∙
a=2 d
Inserting the scale factor and the shape function
in (<ref>) into the field equations (<ref>-<ref>), one obtains
ρ(t,r)= -3 (3 β -1) (6 β -1) R_0^2/2 π G (4 β -1) (R_0 t+R_1)^2
+ 3 r_0^-2 (2 β -1) (3 β -1) (6 β -1) (γ -1)/8 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2,
ρ(t,r)+P_r(t,r) =(6 β -1) R_0^2/2 π G (4 β -1)(R_0 t+R_1)^2
- r_0^-2 (6 β -1) (2 β (7 γ -1)-4 γ +1)/8 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2,
ρ(t,r)+P_l(t,r) =(6 β -1) R_0^2/2 π G (4 β -1)(R_0 t+R_1)^2
- r_0^-2 (6 β -1) (4 β (γ -4)-2 γ +5)/16 π G (4 β -1) (β (5 γ +7)-γ -2) (R_0 t+R_1)^4 (r_0/r)^3+ 3 (3 β -1) (γ -1)/β (5 γ +7)-γ -2.
In order to avoid the singularities in the density and pressure profiles that correspond to the big bang singularity at R(t)=0, we require t ≠ -R_1/R_0. Combining the constraint on ω and β in (<ref>) with 0<β<1/6, the flaring-out, flatness and weak energy conditions can all be satisfied simultaneously if
R_0 R_1 > 0, (2β-1)/(14β-4) < γ < (16β-5)/(4β-2), r_0 > (1/2) √( (14βγ - 2β - 4γ + 1) / [ R_0^2 R_1^2 (5βγ + 7β - γ - 2) ] ).
Here one observes that the satisfaction of all wormhole conditions imposes some interesting constraints. Specifically: (i) the required matter type (the γ and ω parameters) for a specific solution is constrained by Rastall's coupling, and (ii) the wormhole throat radius r_0 cannot be arbitrary; it is constrained by Rastall's coupling β and the matter parameter γ. This is similar to the result in <cit.>, where it is shown that for wormholes in the Einstein-de Sitter universe the throat radius depends not only on the shape-function parameters but also on the background cosmological constant.
For a specific set of parameters according to the constraints (<ref>), the behavior of ρ, ρ+P_r and ρ+P_l, as well as B(r)/r, is illustrated in Figures <ref> and <ref>. The positivity of ρ, ρ+P_r and ρ+P_l represents the satisfaction of the WEC in Rastall's theory. Figure <ref> shows that for β=0.163, with a variety of γ values in the range given by (<ref>), the WEC remains respected for a variety of wormholes with radii r_0 satisfying (<ref>). Here one notes that the throat radius r_0 is fixed for fixed values of β and γ, and is defined as the point where B(r) is minimal. In the case of a dynamic wormhole the throat area changes in time due to the changing R(t). In Figure <ref>, the first plot represents the asymptotic flatness of the B(r)/r function and the other plots represent the satisfaction of the WEC for a specific wormhole with the characteristic parameters r_0=0.1, β=0.163, γ=0.4.
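A small numerical sanity check of the constraints (<ref>) for these figure parameters, assuming for illustration R_0 = R_1 = 1 (the values used in the figures are not restated here):

import numpy as np

beta, gamma = 0.163, 0.4
R0 = R1 = 1.0   # illustrative assumption

lo = (2 * beta - 1) / (14 * beta - 4)
hi = (16 * beta - 5) / (4 * beta - 2)
r0_min = 0.5 * np.sqrt((14 * beta * gamma - 2 * beta - 4 * gamma + 1)
                       / (R0**2 * R1**2 * (5 * beta * gamma + 7 * beta - gamma - 2)))
print(f"gamma window: ({lo:.3f}, {hi:.3f}); contains gamma = 0.4: {lo < gamma < hi}")
print(f"throat bound: r_0 > {r0_min:.3f}; r_0 = 0.1 admissible: {0.1 > r0_min}")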
∙
a=1/2 d
Inserting the scale factor and shape function
in (<ref>) into the field equations (<ref>-<ref>), we find
ρ(t,r) =3 R_0^2 (6 β -1) /32 π G (4 β -1) (R_0 t+R_1)^2
+3 r_0^-2 (6 β -1) (2 β -1) (γ -1)/8 π G (4 β -1) (8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3,
ρ(t,r)+P_r(t, r)= (6 β -1) R_0^2/8 π (4 β -1) G (R_0 t+R_1)^2
-r_0^-2(6 β -1) (β (8 γ +4)-γ -2)/4 π G (4 β -1)(8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3,
ρ(t,r)+P_l(t, r)=(6 β -1) R_0^2/16 π G (4 β -1) (R_0 t+R_1)^2
+r_0^-2(6 β -1) (β (8 γ +4)-4 γ +1)/16 π G (4 β -1) (8 β (2 γ +1)-5 γ -1) (R_0 t+R_1) (r_0/r)^3 (γ -1)/8 β (2 γ +1)-5 γ -1+3.
In this case, satisfaction of the flaring-out condition, the flatness condition and the WEC at the throat requires
R_0,R_1<0:
γ <2-4 β/8 β -1, 0<β<1/8; r_0≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
γ >-4 β -1/8 β -4, 0<β<1/8; r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
γ >1/2, β =1/8 ; r_0>√(-6 γ R_1-3 R_1/6 γ R_0^2),
-4 β -1/8 β -4<γ <2-4 β/8 β -1, 1/8<β < 1/6; r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)) .
R_0,R_1>0:
γ <2-4 β/8 β -1, γ >-4 β -1/8 β -4; 0<β<1/8; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
γ >1/2, β =1/8; r_0>√(R_1/γ R_0^2),
-4 β -1/8 β -4<γ <2-4 β/8 β -1, 1/8<β < 1/6; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)).
Similar arguments to those given for the previous solution and its figures can also be made here. Figure <ref> shows that for the specific value β=0.041, the WEC is satisfied for a variety of wormholes with r_0 and γ meeting the constraints in (<ref>).
Figure <ref> shows the asymptotic behavior of B(r), as well as ρ, ρ+P_r and ρ+P_l satisfying the WEC in the entire spacetime, for a specific set of parameters according to the constraints (<ref>).
§.§.§ Analysis of solutions for C ≠ 0
Here we analyze the following two particular cases.
∙ 𝐚=-2 𝐛
Substituting the scale factor and shape function in (<ref>) into the field equations (<ref>-<ref>), we find
ρ(t,r) =3 C R_0^2 (3 β -1) (6 β -1) (9 β -2) /2 π G (4 β -1) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12)
-162 R_0^2 (2 β -1) (3 β -1) (6 β -1) (γ -1) (C (9 β -2) +r_0^-2(3-12 β)))/π G (1-4 β )^2 (β (5 γ +7)-γ -2) (R_0^2 (√(3) t √((2-9 β ) C/4 β -1)+3 R_1)^2-36)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2,
ρ (t,r)+P_r(t,r)=C R_0^2 (6 β -1) (9 β -2) /2 π G (4 β -1) (48 β +R_0^2 (2 √(3) (1-4 β ) R_1 t √((2-9 β ) C/4 β -1)+(9 β -2) C t^2+(3-12 β ) R_1^2)-12)
+ 6 R_0^2 (6 β -1) (2 β (7 γ -1)-4 γ +1) (C(9 β -2)+r_0^-2(3-12 β))/π G (β (5 γ +7)-γ -2) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12)^2
(r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2,
ρ (t,r)+P_l(t,r)=C R_0^2 (6 β -1) (9 β -2)/2 π G (4 β -1) (48 β +R_0^2 (2 √(3) (1-4 β ) R_1 t √((2-9 β ) C/4 β -1)+(9 β -2) C t^2+(3-12 β ) R_1^2)-12)
+ 3 R_0^2 (6 β -1) (4 β (γ -4)-2 γ +5) (C(9 β -2)+r_0^-2(3-12 β)))/π G (β (5 γ +7)-γ -2) (-48 β +R_0^2 (2 √(3) (4 β -1) R_1 t √((2-9 β ) C/4 β -1)+(2-9 β ) C t^2+3 (4 β -1) R_1^2)+12)^2
(r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2.
Since 0<β<1/6 and ω = (3-9β)/(9β-2), the WEC and flaring-out condition will be satisfied under the following conditions
C<0:
R_0≤-2/|R_1|, R_0>2/|R_1|; -1/2<γ≤2 β -1/14 β -4, r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C),
-2/|R_1|<R_0<2/|R_1|, -1/2<γ≤20 β -7 β R_0^2 R_1^2+2 R_0^2 R_1^2-4/-76 β +5 β R_0^2 R_1^2-R_0^2 R_1^2+20, r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C).
Similar arguments to those given for the previous solutions and their figures can also be made here. Figure <ref> shows that for the specific value β=0.041, the WEC is satisfied for a variety of wormholes with r_0 and γ meeting the constraints in (<ref>). Figure <ref> shows the behavior of B(r)/r as well as ρ, ρ+P_r and ρ+P_l satisfying the WEC for a specific set of parameters according to the constraints in (<ref>). As seen from the first plot, in this case we have a finite wormhole configuration which cannot be arbitrarily large.
∙ 𝐚=-2 𝐛, 𝐤=1
Considering the shape function and scale factor as (<ref>) leaves the field equations (<ref>-<ref>) as
ρ(t,r) =-3 (6 β -1) R_0^2 ((3 β -1) R_0^2 (R_1 ± t)^2-4 β)/2 π (4 β -1) G (R_0^2 (R_1 ± t)^2-4)^2
+6 (6 β -1) (β (6 β -5)+1) (γ -1) R_0^2 r_0^-2/(G π (4 β -1) (β (5 γ +7)-γ -2) (R_0^2 (R_1 ± t)^2-4)^2) (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2,
ρ(t,r)+P_r(t,r) =(6 β -1) R_0^2 (R_0^2 (R_1 ± t)^2+4)/2 π G (4 β -1) (R_0^2 (R_1 ± t)^2-4)^2
- 2 R_0^2 r_0^-2(6 β -1) (2 β (7 γ -1)-4 γ +1)/π G (4 β -1) (β (5 γ +7)-γ -2) (R_0^2 (R_1± t)^2-4)^2 (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2,
ρ(t,r)+P_l(t,r) =(6 β -1) R_0^2 (R_0^2 (R_1± t)^2+4)/2 π G (4 β -1)(R_0^2 (R_1± t)^2-4)^2
- r_0^-2 R_0^2 (6 β -1) (4 β (γ -4)-2 γ +5)/(π G (4 β -1) G (β (5 γ +7)-γ -2) (R_0^2 (R_1 ± t)^2-4)^2) (r_0/r)^3-3 (3 β -1) (γ -1)/-β (5 γ +7)+γ +2.
Considering 0<β<1/6 and ω = (3-9β)/(9β-2), the WEC, flaring-out and asymptotic flatness conditions will be satisfied simultaneously if
R_0 ≠ 0, R_0 ≠ ±2/|R_1|, (2β-1)/(14β-4) < γ < (16β-5)/(4β-2), r_0 > 2 √( (14βγ - 2β - 4γ + 1) / [ (5βγ + 7β - γ - 2) (R_0^2 R_1^2 + 4) ] )
Figure <ref> shows that the WEC is satisfied for a variety of wormholes with r_0 and γ meeting the constraints in (<ref>). Figure <ref> shows the asymptotic behavior of B_n(r)/r as well as ρ, ρ+P_r, and ρ+P_l respecting the WEC in the entire spacetime.
§.§ WEC for β<0
Some observational tests of Rastall theory indicate negative values of β, see for instance <cit.>. Hence, in this subsection we address the WEC and flaring-out condition for β<0.
§.§.§ Analysis of solutions for C=0
∙ 𝐚=2𝐝
In order to satisfy the WEC, the flaring-out condition and the flatness condition in this case, using Eq. (<ref>) with β<0, the following constraints should be satisfied.
R_0 R_1 > 0, (2β-1)/(14β-4) < γ < (16β-5)/(4β-2), r_0 > (1/2) √( (14βγ - 2β - 4γ + 1) / [ R_0^2 R_1^2 (5βγ + 7β - γ - 2) ] ).
The constraints here are the same as those obtained for 0<β<1/6 in (<ref>).
∙ 𝐚=1/2𝐝
Using (<ref>), since ω=3 and β<0, the following restrictions on the parameter γ ensure that the WEC, flaring-out and flatness conditions are respected
R_0, R_1<0:
γ <2-4 β/8 β -1, r_0≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
γ >-4 β -1/8 β -4, r_0>√(-8 βγ R_1-4 β R_1+4 γ R_1-R_1/R_0^2 (16 βγ +8 β -5 γ -1)).
R_0, R_1>0:
γ <2-4 β/8 β -1, γ≥ 0; β <-1/4; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
-4 β -1/8 β -4<γ <0, β <-1/4; r_0 ≥ 2 √(-2 βγ R_1-2 β R_1-γ R_1+R_1/R_0^2 (16 βγ +8 β -5 γ -1)),
γ <2-4 β/8 β -1, γ >-4 β -1/8 β -4; -1/4≤β <0; r_0>√(2)√(8 βγ R_1+4 β R_1-γ R_1-2 R_1/R_0^2 (16 βγ +8 β -5 γ -1)).
§.§.§ For solutions with C ≠ 0
∙ 𝐚=-2𝐛
Considering Eq.<ref>, the WEC and flaring-out condition will be met if
C<0:
R_0 ≤ -2√(5)/|R_1|, R_0 > 2√(5)/|R_1|; -1/2 < γ ≤ (2β-1)/(14β-4), r_0 > √( (14βγ - 2β - 4γ + 1) / [ (9β-2)(2γ+1) C ] ),
-2 √(5)/|R_1|<R_0<2 √(5)/|R_1|, -1/2<γ≤20 β -7 β R_0^2 R_1^2+2 R_0^2 R_1^2-4/-76 β +5 β R_0^2 R_1^2-R_0^2 R_1^2+20, r_0>√(14 βγ -2 β -4 γ +1/(9 β -2) (2 γ +1) C).
∙ 𝐚=-2𝐛, 𝐤=1
Considering (<ref>), with β<0 and ω = (3-9β)/(9β-2), the WEC, flaring-out, and flatness conditions will be respected if
R_0 ≠ 0, R_0 ≠ ±2/|R_1|, (2β-1)/(14β-4) < γ < (16β-5)/(4β-2), r_0 > 2 √( (14βγ - 2β - 4γ + 1) / [ (5βγ + 7β - γ - 2) (R_0^2 R_1^2 + 4) ] )
§ CONCLUSION
In this paper, analytical evolving wormhole solutions with a constant redshift function are investigated in the context of Rastall's modified theory. A general class of solutions, including asymptotically flat and (anti-)de Sitter solutions, is derived by assuming a particular equation of state for the energy density and pressure profiles. Regarding the theoretical and observational constraints on Rastall's coupling β, two admissible ranges 0<β<1/6 and β<0 are considered in order to study the solutions versus the required conditions for traversable wormholes. It is shown that simultaneous satisfaction of all these conditions is achievable under the obtained constraints on the parameters of the solutions. It is also shown that the size of the wormhole throat is constrained and depends on both Rastall's coupling β and the equation of state parameters of the matter source. A list of three particular solutions, with the constraints ensuring the satisfaction of all wormhole conditions, is given in Table <ref>.
Data Availability Statement: No data are associated with the manuscript.
intro25
V. Faraoni,
Cosmology in scalar tensor gravity
https://doi.org/10.1007/978-1-4020-1989-0Springer, Dordrecht (2004)
intro29
A. D. Felice and S. Tsujikawa,
f(R) Theories
https://doi.org/10.12942/lrr-2010-3Living Reviews in Relativity 13, (2010)
intro31
R. Maartens,
Brane-World Gravity
https://doi.org/10.12942/lrr-2004-7 Living Rev. Relativ. 7, 7 (2004)
intro24
T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis,
Modified gravity and cosmology
https://doi.org/10.1016/j.physrep.2012.01.001Physics Reports 513, 1 (2012)
intro36
Peter Rastall,
Generalization of the Einstein Theory
https://doi.org/10.1103/PhysRevD.6.3357Phys. Rev. D 6, 3357 (1972)
intro37
H. Moradpour, Y. Heydarzade, F. Darabi, and Ines G. Salako,
A generalization to the Rastall theory and cosmic eras
https://doi.org/10.1140/epjc/s10052-017-4811-zEur. Phys. J. C 77, 259 (2017)
introbh
Y. Heydarzade, H. Moradpour, F. Darabi,
Black hole solutions in Rastall theory
https://doi.org/10.1139/cjp-2017-0254Canadian Journal of Physics 95, 12 (2017)
introvis
Matt Visser,
Rastall gravity is equivalent to Einstein gravity
https://doi.org/10.1016/j.physletb.2018.05.028Physics Letters B 782, 83 (2018)
intro38
F. Darabi, H. Moradpour, I. Licata, Y. Heydarzade, and C. Corda,
Einstein and Rastall theories of gravitation in comparison
https://doi.org/10.1140/epjc/s10052-017-5502-5Eur. Phys. J. C 78, 25 (2018)
intro371
G.G.L. Nashed, and W.E. Hanafy,
Non-trivial class of anisotropic compact stellar model in Rastall gravity
https://doi.org/10.1140/epjc/s10052-022-10634-0Eur. Phys. J. C 82, 679 (2022)
intro372
Muhammad F.A.R. Sakti, Agus Suroso , Anto Sulaksono, and Freddy P. Zen,
Rotating black holes and exotic compact objects in the Kerr/CFT correspondence within Rastall gravity
https://doi.org/10.1016/j.dark.2022.100974Physics of the Dark Universe 35, 100974 (2022)
intro373
L. Meng, and DJ. Liu,
Tidal Love numbers of neutron stars in Rastall gravity
https://doi.org/10.1007/s10509-021-04013-6Astrophys Space Sci 366, 105 (2021)
intro374
Miguel Cruz et al,
A thermodynamics revision of Rastall gravity
https://doi.org/10.1088/1361-6382/ab45abClass. Quantum Grav. 36 225007 (2019)
introlag
Fabris, Júlio C. and Piattella, Oliver F. and Rodrigues, Davi C.
On Rastall gravity formulation as a f(R,ℒ_m) and a f(R, T) theory
https://doi.org/10.1140/epjp/s13360-023-03845-1Eur. Phys. J. Plus 138, 232 (2023)
intro381
De Moraes, W. A. G. and Santos, A. F.,
Lagrangian formalism for Rastall theory of gravity and Gödel-type universe
https://doi.org/10.1007/s10714-019-2652-9Gen Relativ Gravit 51, 167 (2019)
introwheel
John A. Wheeler,
On the nature of quantum geometrodynamics
https://doi.org/10.1016/0003-4916(57)90050-7Annals of Physics 2, 604 (1957)
introMTY1
Michael S. Morris, and Kip S. Thorne,
Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity
https://doi.org/10.1119/1.15620American Journal of Physics 56, 395–412 (1988)
introMTY2
Michael S. Morris, Kip S. Thorne, and Ulvi Yurtsever,
Wormholes, Time Machines, and the Weak Energy Condition
https://doi.org/10.1103/PhysRevLett.61.1446Phys. Rev. Lett. 61, 1446 (1988)
WHbook
Matt Visser,
Lorentzian Wormholes: From Einstein to Hawking
American Institute of Physics (1995)
Hochberg-Visser
David Hochberg and Matt Visser,
Geometric structure of the generic static traversable wormhole throat
https://link.aps.org/doi/10.1103/PhysRevD.56.4745Phys. Rev. D 56, 4745 (1997)
Hochberg-Visser2
David Hochberg and Matt Visser,
Null Energy Condition in Dynamic Wormholes
https://link.aps.org/doi/10.1103/PhysRevLett.81.746Phys. Rev. Lett. 81, 746 (1998)
intro64
David Hochberg and Matt Visser,
Dynamic wormholes, antitrapped surfaces, and energy conditions
https://doi.org/10.1103/PhysRevD.58.044021Phys. Rev. D 58, 044021 (1998)
viser-d-1989
Matt Visser,
Traversable wormholes: Some simple examples
https://link.aps.org/doi/10.1103/PhysRevD.39.3182Phys. Rev. D 39, 3182 (1989)
viser-n-1989
Matt Visser,
Traversable wormholes from surgically modified Schwarzschild spacetimes
https://doi.org/10.1016/0550-3213(89)90100-4Nucl. Phys. B 328, 203 (1989)
Eiroa-2005
E. F. Eiroa and C. Simeone,
Thin-shell wormholes in dilaton gravity
https://link.aps.org/doi/10.1103/PhysRevD.71.127501Phys. Rev. D 71, 127501 (2005)
Zaslavski-2007
O. B. Zaslavskii,
Traversable wormholes: Minimum violation of the null energy condition revisited
https://link.aps.org/doi/10.1103/PhysRevD.76.044017Phys. Rev. D 76, 044017 (2007)
introEC2
Eric Poisson and Matt Visser,
Thin-shell wormholes: Linearization stability
https://doi.org/10.1103/PhysRevD.52.7318Phys. Rev. D 52, 7318 (1995)
introor1
S. Habib Mazharimousavi, M. Halilsoy, and Z. Amirabi,
Stability of thin-shell wormholes supported by normal matter in Einstein-Maxwell-Gauss-Bonnet gravity
https://doi.org/10.1103/PhysRevD.81.104002Phys. Rev. D 81, 104002 (2010)
introor2
Mohammad Reza Mehdizadeh, Mahdi Kord Zangeneh, and Francisco S.N. Lobo,
Higher-dimensional thin-shell wormholes in third-order Lovelock gravity
https://doi.org/10.1103/PhysRevD.92.044022Phys. Rev. D 92, 044022 (2015)
lobo-2011
Francisco S. N. Lobo,
Wormhole geometries in modified gravity
https://doi.org/10.1063/1.4734456F. S. N. Lobo, AIP Conf. Proc. 1458, 447 (2011)
harko-2013
T. Harko, F. S. N. Lobo, M. K. Mak and S. V. Sushkov,
Modified-gravity wormholes without exotic matter
Phys. Rev. D 87, 067504 (2013)
intro52
A. G. Agnese and M. La Camera,
Wormholes in the Brans-Dicke theory of gravitation
https://doi.org/10.1103/PhysRevD.51.2011Phys. Rev. D 51, 2011 (1995)
intro53
Kamal K. Nandi, Anwarul Islam, and James Evans,
Brans wormholes
https://doi.org/10.1103/PhysRevD.55.2497Phys. Rev. D 55, 2497 (1997)
intro54
Francisco S. N. Lobo and Miguel A. Oliveira,
General class of vacuum Brans-Dicke wormholes
https://doi.org/10.1103/PhysRevD.81.067501Phys. Rev. D 81, 067501 (2010)
intro55
Sergey V. Sushkov and Sergey M. Kozyrev,
Composite vacuum Brans-Dicke wormholes
https://doi.org/10.1103/PhysRevD.84.124026Phys. Rev. D 84, 124026 (2011)
bran
Ernesto F. Eiroa, Martin G. Richart, Claudio Simeone,
Thin-shell wormholes in Brans-Dicke gravity
https://doi.org/10.1016/j.physleta.2008.10.065 Phys.Lett. A 373,
1 (2008)
dotti-2007
G. Dotti, J. Oliva and R. Troncoso,
Exact solutions for the Einstein-Gauss-Bonnet theory in five dimensions: Black holes, wormholes, and spacetime horns
https://link.aps.org/doi/10.1103/PhysRevD.76.064038Phys. Rev. D 76, 064038 (2007)
dotti-2009
G. Dotti, J. Oliva and R. Troncoso,
Vacuum solutions with nontrivial boundaries for the Einstein-Gauss-Bonnet theory
https://doi.org/10.1142/S0217751X09045248Int. J. Mod. Phys. A 24, 1690 (2009)
intro56
Francisco S. N. Lobo and Miguel A. Oliveira,
Wormhole geometries in f(R) modified theories of gravity
https://doi.org/10.1103/PhysRevD.80.104012Phys. Rev. D 80, 104012 (2009)
intro57
Nadiezhda Montelongo Garcia and Francisco S. N. Lobo,
Wormhole geometries supported by a nonminimal curvature-matter coupling
https://doi.org/10.1103/PhysRevD.82.104018Phys. Rev. D 82, 104018 (2010)
intro58
Nadiezhda Montelongo Garcia and Francisco S N Lobo,
Nonminimal curvature–matter coupled wormholes with matter satisfying the null energy condition
https://doi.org/10.1088/0264-9381/28/8/085018Class. Quantum Grav. 28 085018 (2011)
Bhattacharya-2017
S. Bhattacharya and S. Chakraborty,
f(R) gravity solutions for evolving wormholes
https://doi.org/10.1140/epjc/s10052-017-5131-zEur. Phys. J. C 77, 558 (2017)
intro59
Rajibul Shaikh and Sayan Kar,
Wormholes, the weak energy condition, and scalar-tensor gravity
https://doi.org/10.1103/PhysRevD.94.024011Phys. Rev. D 94, 024011 (2016)
camera-2003
M. La Camera,
Wormhole solutions in the Randall–Sundrum scenario
https://doi.org/10.1016/j.physletb.2003.08.042Phys. Lett. B 573, 27 (2003)
higherdim-dotti
G. Dotti, J. Oliva and R. Troncoso,
Static wormhole solution for higher-dimensional gravity in vacuum
https://link.aps.org/doi/10.1103/PhysRevD.75.024002Phys. Rev. D 75, 024002 (2007)
Lovelock-gravity
J. Matulich and R. Troncoso,
Asymptotically Lifshitz wormholes and black holes for Lovelock gravity in vacuum
https://doi.org/10.1007/JHEP10(2011)118J. High Energ. Phys. 2011, 118 (2011)
Torii-2013
Takashi Torii and Hisa-aki Shinkai,
Wormholes in higher dimensional space-time: Exact solutions and their linear stability analysis
https://link.aps.org/doi/10.1103/PhysRevD.88.064027Phys. Rev. D 88, 064027 (2013)
dgp
MG. Richarte,
Wormholes and solitonic shells in five-dimensional DGP theory
https://doi.org/10.1103/PhysRevD.82.044021 Phy.Rev. D 82, 044021 (2010)
intro62
Sayan Kar and Deshdeep Sahdev,
Evolving Lorentzian wormholes
https://doi.org/10.1103/PhysRevD.53.722Phys. Rev. D 53, 722 (1996)
intro63
N. Riazi, and B. Nasre Esfahani,
Time-dependent wormholes in an expanding universe dominated by traceless matter
https://doi.org/10.1023/A:1002434423671Astrophysics and Space Science 271, 237–243 (2000)
intro65
S. A. Hayward,
Dynamic Wormholes
https://doi.org/10.1142/s0218271899000286International Journal of Modern Physics D 08, 373-382 (1999)
intro66
Sayan Kar,
Evolving wormholes and the weak energy condition
https://doi.org/10.1103/PhysRevD.49.862Phys. Rev. D 49, 862 (1994)
intro67
Sergey V. Sushkov and Yuan-Zhong Zhang,
Scalar wormholes in a cosmological setting and their instability
https://doi.org/10.1103/PhysRevD.77.024042Phys. Rev. D 77, 024042 (2008)
intro68
Peter K. F. Kuhfittig,
Static and dynamic traversable wormhole geometries satisfying the Ford-Roman constraints
https://doi.org/10.1103/PhysRevD.66.024015Phys. Rev. D 66, 024015 (2002)
intro69
Luis A. Anchordoqui, Diego F. Torres, Marta L. Trobo, and Santiago E. Perez Bergliaffa,
Evolving wormhole geometries
https://doi.org/10.1103/PhysRevD.57.829Phys. Rev. D 57, 829 (1998)
intro71
M. LA Camera,
On thin-shell wormholes evolving in flat
FRW spacetime
https://doi.org/10.1142/s0217732311035407Modern Physics Letters A 26, 857 (2011)
intro73
Aarón V B Arellano and Francisco S N Lobo,
Evolving wormhole geometries within nonlinear electrodynamics
https://doi.org/10.1088/0264-9381/23/20/004Class. Quantum Grav. 23 5811 (2006)
intro74
B.N. Esfahani,
The null energy condition in wormholes with cosmological constant
https://doi.org/10.1007/s10714-005-0018-yGen Relativ Gravit 37, 271–279 (2005)
intro75
M. Cataldo, F. Aróstica, and S. Bahamonde,
(N+1)-dimensional Lorentzian evolving wormholes supported by polytropic matter
https://doi.org/10.1140/epjc/s10052-013-2517-4Eur. Phys. J. C 73, 2517 (2013)
intro76
Mauricio Cataldo and Sergio del Campo,
Two-fluid evolving Lorentzian wormholes
https://doi.org/10.1103/PhysRevD.85.104010Phys. Rev. D 85, 104010 (2012)
intro79
N. Riazi, and M.R. Bordbar,
Time-dependent wormhole in an inhomogeneous spherically symmetric space time with a cosmological constant
https://doi.org/10.1007/s10509-010-0435-6Astrophys Space Sci 331, 315–320 (2011)
introras2
K. A. Bronnikov, J. C. Fabris, O. F. Piattella, and E. C. Santos,
Static, spherically symmetric solutions with a scalar field in Rastall gravity
https://doi.org/10.1007/s10714-016-2152-0Gen Relativ Gravit 48, 162 (2016)
introras3
Mustafa, G. and Shahzad, M. R. and Abbas, G. and Xia, T.,
Stable wormholes solutions in the background of Rastall theory
https://doi.org/10.1142/S0217732320500352Modern Physics Letters A 35, 2050035 (2020)
lob
Iarley P. Lobo, Martín G. Richarte, J. P. Morais Graça and H. Moradpour, Thin-shell wormholes in Rastall gravity
https://doi.org/10.1140/epjp/s13360-020-00553-yEur. Phys. J. Plus
135, 550 (2020).
naz
N. Nazavari, K. Saaidi and A. Mohammadi,
Wormhole solution in modified teleparallel-Rastall gravity and energy conditions
https://doi.org/10.1007/s10714-023-03093-9 Gen. Relativ. Gravit 55, 45 (2023).
traversable
H. Moradpour, N. Sadeghnezhad, and S. H. Hendi,
Traversable asymptotically flat wormholes in Rastall gravity
https://doi.org/10.1139/cjp-2017-0040Canadian Journal of Physics 95, 1257 (2017)
introras4
Shibaji Halder, Subhra Bhattacharya, and Subenoy Chakraborty,
Wormhole solutions in Rastall gravity theory
https://doi.org/10.1142/S0217732319500950Modern Physics Letters A 34, 1950095 (2019)
Bhat-2021
S Bhattacharya, T Bandyopadhyay,
Revisiting the evolving Lorentzian wormhole: a general perspective
https://doi.org/10.1007/s10714-021-02878-0Gen Relativ Gravit 53, 104 (2021)
ext1
Mohammad Reza Mehdizadeh, Amir Hadi Ziaie,
Dynamical wormholes in Lovelock gravity
https://doi.org/10.48550/arXiv.2111.14828Phys. Rev. D 104, 104050 (2021)
ext2
Mohammad Reza Mehdizadeh,
Dynamical wormholes in Einstein-Gauss-Bonnet gravity
https://doi.org/10.1140/epjc/s10052-020-7871-4Eur. Phys. C 80:310 (2020)
kar
S. Kar and D. Sahdev,
Restricted class of traversable wormholes with traceless matter, https://link.aps.org/doi/10.1103/PhysRevD.52.2030Physical Review D 52,4 2030, (1995).
EOS1
Luis A. Anchordoqui, Santiago Perez Bergliaffa, and Diego F. Torres,
Brans-Dicke wormholes in nonvacuum spacetime
https://doi.org/10.1103/PhysRevD.55.5226Phys. Rev. D 55, 5226 (1997)
EOS2
Mohammad Reza Mehdizadeh and Francisco S.N. Lobo,
Novel third-order Lovelock wormhole solutions
https://doi.org/10.1103/PhysRevD.93.124014Phys. Rev. D 93, 124014 (2016)
dim
Mohammad Reza Mehdizadeh, Mahdi Kord Zangeneh, and Francisco S. N. Lobo
Einstein-Gauss-Bonnet traversable wormholes satisfying the weak energy condition
https://doi.org/10.1103/PhysRevD.91.084004 Phys.Rev.D 91, 084004 (2015).
PRD-moradpour
H. Moradpour, Alexander Bonilla, Everton M.C. Abreu, and Jorge Ananias Neto,
Accelerated cosmos in a nonextensive setup
https://doi.org/10.1103/PhysRevD.96.123504Phys. Rev. D 96, 123504 (2017)
El-Hanafy
Waleed El Hanafy,
Impact of Rastall Gravity on Mass, Radius, and Sound Speed of the Pulsar PSR J0740+6620
https://doi.org/10.3847/1538-4357/ac9410ApJ 940, 51 (2022)
stz967
R. Li, J. Wang, Z. Xu, and X. Guo, Constraining the Rastall parameters in static space–times with galaxy-scale strong gravitational lensing
https://doi.org/10.1093/mnras/stz96Monthly Notices of the
Royal Astronomical Society 486, 2407 (2019)
moradpur-beta1.6
H. Moradpour and I. G. Salako,
Thermodynamic Analysis of the Static Spherically Symmetric Field Equations in Rastall Theory
https://doi.org/10.1155/2016/3492796Advances in High Energy Physics 2016, 3492796 (2016)
cjp
Y. Heydarzade, N. Riazi, and H. Moradpour,
Phantom wormhole solutions in a generic cosmological constant background
https://doi.org/10.1139/cjp-2015-0359Canadian Journal of Physics 93, 1523 (2015)
|
http://arxiv.org/abs/2307.07401v1 | 20230714152700 | Weyl's law for Neumann Schrödinger operators on Hölder domains | [
"Charlotte Dietze"
] | math-ph | [
"math-ph",
"math.MP",
"35P15, 35P20"
] |
The torsion of stellar streams
Adriana Bariego–Quintana^1 and Felipe J. Llanes–Estrada^2
July 14th 2023
We review recent results on the semiclassical behaviour of Schrödinger operators with Neumann boundary conditions. In this setting, the validity of Weyl's law requires additional conditions on the potential. We will explain the techniques needed to control the number of bound states near the boundary, thus leading to universal estimates on the number of bound states.
§ INTRODUCTION
Weyl's law for the eigenvalues of the Laplacian on a domain Ω ⊂ ℝ^d (a bounded, open and connected subset of ℝ^d) states that the number of eigenvalues below λ, which we denote by N(- Δ_Ω - λ), satisfies
N(- Δ_Ω - λ) = |B_1^d(0)|/(2π)^d |Ω|λ^d/2 + o(λ^d/2) as λ→∞,
under suitable assumptions on the boundary conditions or the domain Ω, where |B_1^d(0)| is the volume of the unit ball in ℝ^d.
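As a concrete illustration (not taken from the paper), for the unit square with Neumann boundary conditions the eigenvalues are π^2(m^2+n^2) with integers m, n ≥ 0, and the eigenvalue count can be compared with the leading term numerically:

import numpy as np

lam = 1e5
M = int(np.sqrt(lam) / np.pi) + 1                      # largest index that can contribute
m, n = np.meshgrid(np.arange(M + 1), np.arange(M + 1))
count = np.count_nonzero(np.pi**2 * (m**2 + n**2) < lam)
weyl = np.pi / (2 * np.pi)**2 * lam                    # |B_1^2(0)| |Omega| lam^{d/2} / (2 pi)^d = lam/(4 pi)
print(count, round(weyl), round(count / weyl, 3))      # the ratio tends to 1 as lam grows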
Its history, see <cit.>, goes back to Rayleigh <cit.> in 1877 who examined the number of overtones of musical instruments such as a violin string or an organ pipe. In three dimensions, he derived that for a cubic organ pipe, the number of overtones, which are the square root of the eigenvalues of the Laplacian, below the frequency ν behaves like the volume of the organ pipe multiplied by ν^3 for large ν. He could later connect this to the famous blackbody radiation experiments by Planck in the 1890s <cit.>, where he could derive the correct behaviour of the emitted energy for cubical shapes <cit.>. Sommerfeld <cit.> and Lorentz <cit.> remarked in 1910 that it remains to show that this law for the eigenvalues of the Laplacian does not depend on the shape considered. This problem was solved in 1911 by Weyl <cit.>, thereby rigorously justifying (<ref>) for the Dirichlet Laplacian - Δ_Ω^D[The Dirichlet Laplacian is the self-adjoint operator on L^2(Ω) corresponding to the quadratic form ∫_Ω |∇ u|^2 defined on H^1_0(Ω)]. For his proof, Weyl further developed the min-max principle, which was introduced by Fischer <cit.> and also later by Courant <cit.>. Weyl showed (<ref>) for bounded domains Ω⊂ℝ^d with sufficiently smooth boundary. Rozenblum extended (<ref>) to all open sets Ω⊂ℝ^d of finite measure <cit.>.
The eigenvalue problem of the Laplacian was also made popular by Kac in his 1966 paper “Can one hear the shape of a drum?” <cit.>. The general answer is no <cit.>, see Figure <ref>, except if Ω is analytic and satisfies certain symmetry properties <cit.>. Nevertheless, in any case, we can always “hear” the volume |Ω| from Weyl's law (<ref>). Also, due to a classical result of Ivrii <cit.>, if Ω is sufficiently smooth, then we can also “hear” the surface area |∂Ω| from a second-order Weyl law
N(- Δ_Ω^D - λ) = |B_1^d(0)|/(2π)^d |Ω|λ^d/2 -1/4|B_1^d-1(0)|/(2π)^d-1|∂Ω|λ^d-1/2
+ o(λ^d-1/2) as λ→∞.
See also <cit.> for an extension of (<ref>) to the sum of eigenvalues which holds for all Lipschitz domains. Here Ω is called a Lipschitz domain if it is a domain and the boundary of Ω is locally the graph of a Lipschitz continuous function f, that is, there exists a constant c>0 such that |f(x)-f(y)|≤ c|x-y| for all x,y in the domain of f.
The results we presented above were stated for the Dirichlet Laplacian Δ_Ω^D. Since functions in H^1_0(Ω) can trivially be extended by zero outside Ω, we can think of results for the Dirichlet Laplacian as “global” properties of the Laplacian. On the other hand, the Neumann Laplacian Δ_Ω^N[The Neumann Laplacian is the self-adjoint operator on L^2(Ω) corresponding to the quadratic form ∫_Ω |∇ u|^2 defined on H^1(Ω)] appears naturally when localising. In order to understand its properties, one needs to better understand the geometry of the boundary since functions in H^1(Ω) can grow to infinity close to the boundary of Ω. This makes many problems for the Neumann Laplacian more difficult than their analogues for the Dirichlet Laplacian. While many of the results mentioned above have a corresponding counterpart for the Neumann Laplacian under suitable assumptions, even very basic properties can fail for the Neumann Laplacian in general. While the Dirichlet Laplacian on a domain (which we always assume to be bounded) always has compact resolvent, there are examples of domains such that zero is in the essential spectrum of the Neumann Laplacian on that domain. For instance, Hempel and Seco <cit.> constructed such a domain known as “rooms and passages”, see Figure <ref>.
In a remarkable work, Netrusov and Safarov showed Weyl's law for γ-Hölder domains with Neumann boundary conditions
N(- Δ_Ω^N - λ) = |B_1^d(0)|/(2π)^d |Ω|λ^d/2 + o(λ^d/2) as λ→∞,
holds for all γ ∈ ((d-1)/d, 1) <cit.>, and it fails for all γ ∈ (0, (d-1)/d] in the sense that for those γ, there exists a γ-Hölder domain such that (<ref>) is not true <cit.>. Here Ω is called a γ-Hölder domain if it is a domain and the boundary of Ω is locally the graph of a γ-Hölder continuous function f, that is, there exists a constant c>0 such that |f(x)-f(y)|≤ c|x-y|^γ for all x,y in the domain of f. Note that the case γ=1 corresponds to Lipschitz domains, which are well known to satisfy (<ref>), see <cit.>.
The reason why there is a transition at γ = (d-1)/d can be intuitively explained as follows. Locally, the boundary of Ω is the graph of a Hölder continuous function
f : A → ℝ with A ⊂ ℝ^{d-1} open and bounded,
that is, it is a subset of the boundary ∂Ω up to a local change of coordinates by translation and rotation. Let us denote by ℋ^s the s-dimensional Hausdorff measure for s>0. Then there exists a constant C>0 such that
ℋ^{(d-1)/γ}({(x',f(x')) | x'∈ A}) ≤ C ℋ^{d-1}(A) < ∞.
It follows that the Hausdorff dimension of ∂Ω is at most (d-1)/γ. If γ > (d-1)/d, then by (<ref>), the Hausdorff dimension of {(x',f(x')) | x'∈ A} is strictly smaller than d, which is the Hausdorff dimension of Ω. Hence, the contribution from the bulk of Ω should dominate the boundary effects. If γ ≤ (d-1)/d, then the relatively straightforward proof of (<ref>) is not enough to decide if the Hausdorff dimension of the boundary ∂Ω is equal to d (note that it cannot be larger since ∂Ω ⊂ ℝ^d). The boundary effects might be of the same or even higher order as the contribution from the bulk of Ω, and this is indeed what one observes for Weyl's law.
The proof idea by Netrusov and Safarov for (<ref>) is to decompose the domain into smaller domains on which there is at most one negative eigenvalue of - Δ^N - λ each. Then the number of oscillatory domains chosen gives an upper bound for the number of negative eigenvalues N(- Δ_Ω^N - λ) of - Δ_Ω^N - λ.
It will turn out that for γ > (d-1)/d, the parts close to the boundary only give a subleading contribution, so the leading order contribution comes from the oscillatory domains in the bulk of Ω. This leading order contribution can be shown to be the right-hand side of (<ref>) in the same way as in Weyl's proof for Weyl's law, thereby establishing the upper bound. The proof of the lower bound is simpler, for example by comparing with the Dirichlet Laplacian.
Another way to view Weyl's law (<ref>) is via semiclassics. It suggests that every bound state corresponds to a volume of size (2π)^d in the phase space <cit.>, which is in our case given by Ω × ℝ^d. For large λ, the semiclassical approximation states
N(- Δ_Ω - λ) ≈ 1/(2π)^d |{(p,x) ∈ ℝ^d × Ω | |p|^2 - λ < 0 }| = |B_1^d(0)|/(2π)^d |Ω| λ^d/2 ,
compare with (<ref>).
In the following, we will focus on Schrödinger operators and we will present the results from <cit.>. To this end, let V: Ω → (-∞,0] be measurable. We will refer to V as a potential. By the semiclassical approximation, we expect for large λ
N(- Δ_Ω + λ V ) ≈ 1/(2π)^d |{(p,x) ∈ ℝ^d × Ω | |p|^2 + λ V(x) < 0 }| = |B_1^d(0)|/(2π)^d λ^{d/2} ∫_Ω |V|^{d/2}.
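Indeed, for fixed x the momentum set {p ∈ ℝ^d : |p|^2 < λ|V(x)|} is a ball of radius (λ|V(x)|)^{1/2}, so
|{(p,x) ∈ ℝ^d × Ω : |p|^2 + λ V(x) < 0}| = ∫_Ω |B_1^d(0)| (λ|V(x)|)^{d/2} dx = |B_1^d(0)| λ^{d/2} ∫_Ω |V|^{d/2}.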
While Netrusov and Safarov showed that Weyl's law holds for γ-Hölder domains with γ ∈ ((d-1)/d, 1), our first result <cit.> shows that the situation for Schrödinger operators is more delicate.
Let d≥ 2. For every γ ∈ ((d-1)/d, 1) there exists a γ-Hölder domain Ω ⊂ ℝ^d and V : Ω → (-∞, 0] with V ∈ L^{d/2}(Ω) such that
lim sup_{λ→∞} N(-Δ^N_Ω + λ V)/λ^{d/2} = ∞ .
Theorem <ref> shows that in general, we cannot even expect semiclassical behaviour, and in particular, the semiclassical approximation does not hold in the setting of Theorem <ref>. However, if we make further assumptions on the potential V, we can prove a universal Cwikel-Lieb-Rozenblum-type bound on the number of negative eigenvalues of the Schrödinger operator - Δ_Ω^N + V, see <cit.>.
Let d≥ 2. Let γ ∈ [2(d-1)/(2d-1), 1) and let Ω be a γ-Hölder domain. Then there exists a constant C_Ω = C_Ω(d, γ, Ω) > 0 and p_{d,γ} > d/2 such that for every V : Ω → (-∞, 0] with V ∈ L^{p_{d,γ}}(Ω)
N(-Δ^N_Ω + V) ≤ C_Ω ( 1 + ‖V‖_{p_{d,γ}}^{d/2} ) .
Moreover, p_d,γ satisfies
lim_γ→ 1 p_d,γ = d/2.
The main point of Theorem <ref> is that we obtain the expected semiclassical behaviour
N(-Δ^N_Ω + λ V) = 𝒪(λ^{d/2}) as λ→∞
if V ∈ L^{p_{d,γ}}(Ω). It is possible to obtain, for all γ ∈ (0,1),
N(-Δ^N_Ω + V) ≤ C_Ω ( 1 + ∫_Ω |V|^{p̃} )
for some p̃ > d/2, by following the strategy of Rozenblum, see below in (<ref>) and see also <cit.> combined with <cit.>. However, this bound is insufficient for (<ref>).
While Theorem <ref> is of independent interest, we can use it to derive Weyl's law on Hölder domains, thus rigorously justifying (<ref>), see <cit.>.
Let d≥ 2. Let γ ∈ [2(d-1)/(2d-1), 1) and let Ω ⊂ ℝ^d be a γ-Hölder domain. Let V : Ω → (-∞, 0] with V ∈ L^{p_{d,γ}}(Ω). Then
N (-Δ^N_Ω + λ V) = (2 π)^-d|B_1^d(0)| λ^d/2∫_Ω |V|^d/2 + o (λ^d/2) as λ→∞ .
The proof idea for Theorem <ref> is to approximate the potential V by a sequence of potentials (V_n)_{n∈ℕ}, which are continuous and compactly supported inside Ω. For each of the potentials V_n, we can follow Weyl's proof strategy for the Weyl law for constant potentials to obtain Weyl's law for the Schrödinger operator -Δ^N_Ω + λ V_n. For instance, for the upper bound in (<ref>), we split for δ∈(0,1)
N(-Δ^N_Ω + λ V ) ≤ N((1 - δ) (-Δ^N_Ω) + λ V_n) + N (δ(-Δ^N_Ω) + λ(V - V_n)) ,
and we divide by λ^d/2 and let λ→∞ first, then n→∞ and finally δ→0. We will recover the right-hand side in (<ref>) from the first summand on the right-hand side in (<ref>). We will show that the contribution from the second summand N (δ(-Δ^N_Ω) + λ(V - V_n)) goes to zero using Theorem <ref>.
The range of γ in Theorem <ref> is smaller than the optimal range γ ∈ ((d-1)/d, 1) obtained in <cit.> for constant potentials. In fact, we are able to cover the full optimal range including the endpoint γ ∈ [(d-1)/d, 1) provided we replace the L^{p_{d,γ}}(Ω) norm by a weighted L^{p̃} norm ‖V‖ with p̃ = p̃(d,γ) > d/2, which gives better control on the growth of the potential near the boundary, see <cit.>. The proof relies on an improved version of Theorem <ref> for γ ∈ [(d-1)/d, 1) if ‖V‖ < ∞, see Theorem <ref> below and also <cit.>. The norm ‖V‖ controls the growth of the potential V near the boundary ∂Ω. Note that the domain and the potential in Theorem <ref> do not satisfy (<ref>), so ‖V‖ = ∞ in Theorem <ref>. Indeed, our construction in the proof of Theorem <ref> involves a potential V that grows to infinity near the boundary of Ω.
We hope that the techniques we developed to deal with rough domains will be helpful for the investigation of semiclassics of potentials that are singular or oscillatory along some lower-dimensional manifold.
Acknowledgements. The author would like to express her deepest gratitude to Phan Thành Nam for his continued support and very helpful advice. She would also like to thank Laure Saint-Raymond for her support and hospitality at Institut des Hautes Études Scientifiques and for inspiring discussions. The author acknowledges the support from the Deutsche Forschungsgemeinschaft (DFG project Nr. 426365943), from the Jean-Paul Gimon Fund and from the Erasmus+ programme.
§ THE CASE OF CONSTANT POTENTIALS
In this section, we will explain the proof strategy of Netrusov and Safarov for Weyl's law for constant potentials (<ref>) <cit.>, see the following Theorem.
Let γ ∈ ((d-1)/d, 1) and let Ω ⊂ ℝ^d be a γ-Hölder domain. Then
N (-Δ^N_Ω - λ) = (2π)^{-d} |B_1^d(0)| |Ω| λ^{d/2} + o(λ^{d/2}) as λ→∞ .
The main idea is to cover the domain by smaller domains, which we will call oscillatory domains in the following, such that there is at most one negative eigenvalue of the corresponding Schrödinger operator on each oscillatory domain. The number of negative eigenvalues on the domain can then be bounded by the number of oscillatory domains we have chosen.
Netrusov and Safarov can choose the oscillatory domains in the bulk of Ω as cubes of a fixed λ-dependent side length which are arranged on a lattice, and which only overlap on their boundaries.
Close to the boundary ∂Ω, Netrusov and Safarov <cit.> construct the oscillatory domains as follows. They use the (d-1)-dimensional Besicovitch covering lemma on A in (<ref>) to get a collection of (d-1)-dimensional cubes, which overlap at most a dimension-dependent number of times, and on each of these cubes the function f from (<ref>) does not oscillate too much (depending on λ). They use each of these (d-1)-dimensional cubes to stack multiple d-dimensional cuboids of a small λ-dependent height on top of each other below the graph of f, which is part of ∂Ω. The one on top will in general not be a cuboid, but its upper boundary is given by the corresponding part of the graph of f. For this oscillatory domain D on top, Netrusov and Safarov use a new Poincaré-Sobolev inequality <cit.> to show that -Δ^N_D - λ has at most one negative eigenvalue. Netrusov and Safarov can estimate the number of oscillatory domains close to the boundary ∂Ω by a constant times λ^{(d-1)/(2γ)}, so the boundary contribution for γ > (d-1)/d is of subleading order.
§ PROOF STRATEGY OF THEOREM <REF>
We will explain the proof strategy of the following stronger version of Theorem <ref> for γ ∈ [(d-1)/d, 1), see <cit.>. Theorem <ref> follows from Theorem <ref> combined with <cit.>.
Let d≥ 2. Let γ ∈ [(d-1)/d, 1) and let Ω be a γ-Hölder domain. Then there exists a constant C_Ω = C_Ω(d, γ, Ω) > 0 such that for every V : Ω → (-∞, 0] with ‖V‖ < ∞, we have
N(-Δ^N_Ω + V) ≤ C_Ω ( 1 + ‖V‖^{d/2} ) .
Here the norm ‖V‖ is given in <cit.>, with β = β(d,γ) > 0 and p̃ = p̃(d,γ) ∈ (d/2, ∞) chosen as in <cit.>, see also (<ref>) below.
The Cwikel-Lieb-Rozenblum inequality <cit.> states that for any open set Ω ⊂ ℝ^d for d≥3 and for every V : Ω → (-∞, 0], we have
N(-Δ^D_Ω + V) ≤ C(d) ∫_Ω |V|^{d/2} .
Our proof of Theorem <ref> is inspired by both Rozenblum's proof strategy of the Cwikel-Lieb-Rozenblum inequality (<ref>), see <cit.> and <cit.> and Netrusov's and Safarov's proof strategy of Weyl's law for constant potentials on Hölder domains (<ref>) <cit.>, see Section <ref>.
Since (<ref>) is an inequality on the entire space ℝ^d or on a domain with Dirichlet boundary conditions, Rozenblum can choose his oscillatory domains as cubes on the entire domain.
Similarly, for the proof of Theorem <ref>, our oscillatory domains need to depend on the size of the potential V locally. Moreover, we have to be careful close to the boundary ∂Ω, where our oscillatory domains will be given by rectangles intersected with Ω. Far enough away from the boundary ∂Ω, these rectangles are cubes.
Close to the boundary, we need to use a new covering lemma for oscillatory domains <cit.>. Moreover, in order to relate an oscillatory domain having at most one negative eigenvalue to the size of the potential V on that oscillatory domain measured in an appropriate norm, we use a new Poincaré-Sobolev inequality on oscillatory domains <cit.>. Recall that Rozenblum uses the Poincaré-Sobolev inequality on cubes, while Netrusov and Safarov use a Poincaré-Sobolev inequality (only involving the L^2 norm) on oscillatory domains <cit.>.
Combining all these ingredients, we can bound the number of negative eigenvalues of the Schrödinger operator -Δ^N_Ω+V by the number of oscillatory domains we chose. In Rozenblum's proof of the Cwikel-Lieb-Rozenblum inequality, the quantity ∫_Q|V|^d/2 was of order one on each cube Q. Therefore he could estimate
N(-Δ^D_Ω + V) ≤ number of cubes Q ≤ C(d) ∑_{cubes Q} ∫_Q |V|^{d/2} ≤ C(d) ∫_Ω |V|^{d/2},
where he used that the cubes Q can only overlap a finite number of times in the last step.
In our case, we can choose the oscillatory domains D such that ∫_D |V|^{p̃} is of order one for some p̃ = p̃(d,γ) ∈ (d/2, ∞). Imitating the estimate in (<ref>), we get
N(-Δ^N_Ω + V ) ≤ C_Ω ( 1 + ∫_Ω |V|^{p̃} ),
where we need to add 1 on the right-hand side because we are dealing with the Neumann Laplacian, so we can also test with constant functions. However, (<ref>) is not the estimate we aim for. In particular, replacing V by λ V in (<ref>) and letting λ→∞, we obtain
N(-Δ^N_Ω + λ V) = 𝒪(λ^{p̃}) as λ→∞,
which is weaker than (<ref>) since p̃ > d/2. The main remaining challenge is to get the expected semiclassical behaviour. The key idea is to use Hölder's inequality for a sum of products of real numbers. If we denote by {D_j}_{j ∈ J_3} the oscillatory domains that are very close to the boundary, where J_3 is an index set, then we can estimate the number |J_3| of oscillatory domains very close to the boundary as follows: for s, s' ∈ (1, ∞) with 1/s + 1/s' = 1 and any A_j > 0, j ∈ J_3, we have
|J_3| = ∑_j ∈ J_3 A_j^-1 A_j ≤(∑_j ∈ J_3 A_j^-s')^1/s'(∑_j ∈ J_3 A_j^s)^1/s .
We choose each A_j > 0 for j ∈ J_3 such that it only depends on the size of the largest side-length of the oscillatory domain D_j and the distance of its centre to the boundary ∂Ω, measured in a certain way. The norm ‖V‖ consists of the L^{d/2} norm on Ω plus a weighted L^{p̃} semi-norm on Ω with a weight w that grows near the boundary at a scale determined by the parameter β = β(d,γ) > 0, that is, on each D ⊂ Ω, we have
‖V‖_D := ( ∫_D |V(x)|^{d/2} dx )^{2/d} + ( ∫_D w(x) |V(x)|^{p̃} dx )^{1/p̃}.
The A_j > 0 satisfy for all j ∈ J_3
A_j^s ≤ C ∫_{D_j} w(x) |V(x)|^{p̃} dx,
so using that the oscillatory domains {D_j}_j ∈ J_3 overlap at most a bounded number of times, we get
∑_{j ∈ J_3} A_j^s ≤ C ∫_Ω w(x) |V(x)|^{p̃} dx ≤ C ‖V‖^{p̃}.
In fact, we can choose the A_j > 0 such that we also have
∑_{j ∈ J_3} A_j^{-s'} ≤ C ‖V‖^{-1/2}
and we can choose all the parameters such that
p̃/s - 1/(2s') = d/2 .
Combining (<ref>), (<ref>), (<ref>) and (<ref>), we obtain
|J_3| ≤ C ‖V‖^{-1/(2s')} ‖V‖^{p̃/s} = C ‖V‖^{d/2},
which has the desired semiclassical behaviour (<ref>).
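As a side remark on the exponent bookkeeping (not spelled out above), the relation p̃/s - 1/(2s') = d/2 together with 1/s + 1/s' = 1 is equivalent to s = (2p̃+1)/(d+1), which lies in (1,∞) precisely because p̃ > d/2. A small symbolic check:

import sympy as sp

ptilde, d, s = sp.symbols('ptilde d s', positive=True)
sprime = s / (s - 1)                         # conjugate exponent, 1/s + 1/s' = 1
sol = sp.solve(sp.Eq(ptilde / s - 1 / (2 * sprime), d / 2), s)
print(sol)                                   # expected: [(2*ptilde + 1)/(d + 1)]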
|