Online social media provides a channel for monitoring people's social behaviors and mental distress. Due to the restrictions imposed by COVID-19, people are increasingly using online social networks to express their feelings, producing a significant amount of diverse user-generated social media content. The COVID-19 pandemic has changed the way we live, study, socialize and recreate, and this has affected our well-being and mental health. There is a growing body of research that leverages online social media analysis to detect and assess users' mental status. In this paper, we survey the literature on social media analysis for mental disorder detection, with a special focus on studies conducted in the context of COVID-19 during 2020-2021. Firstly, we classify the surveyed studies in terms of feature extraction types, ranging from language usage patterns to aesthetic preferences and online behaviors. Secondly, we explore the detection methods used for mental disorder detection, including machine learning and deep learning methods. Finally, we discuss the challenges of mental disorder detection using social media data, including privacy and ethical concerns as well as the technical challenges of scaling and deploying such systems, and summarize the lessons learnt over the last few years.
We study the closure of the unitary orbit of a given point in the non-commutative Choquet boundary of a unital operator space with respect to the topology of pointwise norm convergence. This may be described more extensively as the $\ast$-representations of the $\mathrm{C}^{\ast}$-envelope that are approximately unitarily equivalent to one that possesses the unique extension property. Although these $\ast$-representations do not necessarily have the unique extension property themselves, we show that their unital completely positive extensions display significant restrictions. When the underlying operator space is separable, this allows us to connect our work to Arveson's hyperrigidity conjecture. Finally, as an application, we reformulate the classical \v{S}a\v{s}kin Theorem and Arveson's essential normality conjecture.
Chaput, Manivel and Perrin proved a formula describing the quantum product by Schubert classes associated to cominuscule weights in a rational projective homogeneous space X. In the case where X has Picard rank one, we link this formula to the stratification of X by P-orbits, where P is the parabolic subgroup associated to the cominuscule weight. We deduce a decomposition of the Hasse diagram of X, i.e. the diagram describing the cup-product with the hyperplane class.
In this note, we classify solutions to a class of Monge-Amp\`ere equations whose right hand side may be degenerate or singular in the half space. Solutions to these equations are special solutions to a class of fourth order equations, including the affine maximal hypersurface equation, in the half space. Both the Dirichlet boundary value and Neumann boundary value cases are considered.
The main motivation to study models in the presence of a minimal length is to obtain a quantum field theory free of divergences. With this in mind, in this paper we have constructed a new framework for quantum electrodynamics embedded in a minimal length scale background. New operators are introduced and the Green function method is used to solve the field equations, i.e., the Maxwell, Klein-Gordon and Dirac equations. We analyze specifically the scalar field and its one-loop propagator. The mass of the scalar field regularized by the minimal length is obtained. The QED Lagrangian containing a minimal length is also constructed and its divergences are analyzed. The electron and photon propagators, as well as the electron self-energy at one loop as a function of the minimal length, are also obtained.
Projection-based model order reduction allows for the parsimonious representation of full order models (FOMs), typically obtained through the discretization of certain partial differential equations (PDEs) using conventional techniques where the discretization may contain a very large number of degrees of freedom. As a result of this more compact representation, the resulting projection-based reduced order models (ROMs) can achieve considerable computational speedups, which are especially useful in real-time or multi-query analyses. One known deficiency of projection-based ROMs is that they can suffer from a lack of robustness, stability and accuracy, especially in the predictive regime, which ultimately limits their useful application. Another research gap that has prevented the widespread adoption of ROMs within the modeling and simulation community is the lack of theoretical and algorithmic foundations necessary for the "plug-and-play" integration of these models into existing multi-scale and multi-physics frameworks. This paper describes a new methodology that has the potential to address both of the aforementioned deficiencies by coupling projection-based ROMs with each other as well as with conventional FOMs by means of the Schwarz alternating method. Leveraging recent work that adapted the Schwarz alternating method to enable consistent and concurrent multi-scale coupling of finite element FOMs in solid mechanics, we present a new extension of the Schwarz formulation that enables ROM-FOM and ROM-ROM coupling in nonlinear solid mechanics. In order to maintain efficiency, we employ hyper-reduction via the Energy-Conserving Sampling and Weighting approach. We evaluate the proposed coupling approach in the reproductive as well as in the predictive regime on a canonical test case that involves the dynamic propagation of a traveling wave in a nonlinear hyper-elastic material.
This paper proposes an approach to content-preserving stitching of images with regular boundary constraints, which aims to stitch multiple images into a panoramic image with a regular boundary. Existing methods treat image stitching and rectangling as two separate steps, which may produce suboptimal results because the stitching process is unaware of the subsequent warping needed for rectangling. We address these limitations by formulating image stitching with regular boundaries as a unified optimization. Starting from the initial stitching results produced by traditional warping-based optimization, we obtain the irregular boundary from the warped meshes by polygon Boolean operations, which robustly handle arbitrary mesh compositions, and by analyzing the irregular boundary we construct a piecewise rectangular boundary. Based on this, we further incorporate straight-line preservation and regular boundary constraints into the image stitching framework, and conduct iterative optimization to obtain an optimal piecewise rectangular boundary; this makes the panoramic boundary as close as possible to a rectangle while reducing unwanted distortions. We further extend our method to panoramic videos and selfie photography by integrating temporal coherence and portrait preservation into the optimization. Experiments show that our method efficiently produces visually pleasing panoramas with regular boundaries and unnoticeable distortions.
XTE J1739-302 is a transient X-ray source with unusually short outbursts, lasting on the order of hours. Here we give a summary of X-ray observations we have made of this object in outburst with the Rossi X-ray Timing Explorer (RXTE) and at a low level of activity with the Chandra X-ray Observatory, as well as observations made by other groups. Visible and infrared spectroscopy of the mass donor of XTE J1739-302 are presented in a companion paper. The X-ray spectrum is hard both at low levels and in outburst, but somewhat variable, and there is strong variability in the absorption column from one outburst to another. Although no pulsation has been observed, the outburst data from multiple observatories show a characteristic timescale for variability on the order of 1500-2000 s. The Chandra localization (right ascension 17h 39m 11.58s, declination -30o 20' 37.6'', J2000) shows that despite being located less than 2 degrees from the Galactic Center and highly absorbed, XTE J1739-302 is actually a foreground object with a bright optical counterpart. The combination of a very short outburst timescale and a supergiant companion is shared with several other recently-discovered systems, forming a class we designate as Supergiant Fast X-ray Transients (SFXTs). Three persistently bright X-ray binaries with similar supergiant companions have also produced extremely short, bright outbursts: Cyg X-1, Vela X-1, and 1E 1145.1-6141.
Directed protein networks with only a few thousand nodes are rather complex and do not make it easy to extract the effective influence of one protein on another, taking into account all indirect pathways via the global network. Furthermore, the different types of activation and inhibition actions between proteins provide a considerable challenge in the framework of network analysis. At the same time these protein interactions are of crucial importance and at the heart of cellular functioning. We develop a Google matrix analysis of the protein-protein network from the open public database SIGNOR. The developed approach takes into account the bi-functional activation or inhibition nature of interactions between each pair of proteins, describing it in the framework of Ising-spin matrix transitions. We also apply a recently developed linear response theory for the Google matrix which highlights a pathway of proteins whose PageRank probabilities are most sensitive with respect to two proteins selected for the analysis. This group of proteins is analyzed by the reduced Google matrix algorithm, which allows us to determine the effective interactions between them due to direct and indirect pathways in the global network. We show that the dominating activation or inhibition function of each protein can be characterized by its magnetization. The results of this Google matrix analysis are presented for three examples of selected pairs of proteins. The developed methods work rapidly and efficiently even for networks with several million nodes and can be applied to various biological networks.
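As a point of reference for the Google matrix machinery used above, the following is a minimal sketch (not the authors' code, and without the Ising-spin doubling or the reduced Google matrix step) of how a Google matrix and its PageRank vector can be computed for a small directed network.

```python
import numpy as np

def google_matrix(adjacency, alpha=0.85):
    """Build the Google matrix G = alpha*S + (1-alpha)/N for a directed network.

    adjacency[i, j] = 1 if there is a link j -> i (column-stochastic convention).
    Columns without outgoing links (dangling nodes) are replaced by uniform columns.
    """
    A = np.asarray(adjacency, dtype=float)
    N = A.shape[0]
    col_sums = A.sum(axis=0)
    S = np.where(col_sums > 0, A / np.where(col_sums == 0, 1, col_sums), 1.0 / N)
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, tol=1e-10, max_iter=1000):
    """Power iteration for the leading eigenvector of G (the PageRank vector)."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)
    for _ in range(max_iter):
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
    return p

# Toy 4-node directed network: 0->1, 0->2, 1->2, 2->0, 3->2
A = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]:
    A[dst, src] = 1.0
print(pagerank(google_matrix(A)))
```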
We use hydrodynamic equations to study sound propagation in a superfluid Fermi gas inside a strongly elongated cigar-shaped trap, with main attention to the transition from the BCS to the unitary regime. We treat first the role of the radial density profile in the quasi-one-dimensional limit and then evaluate numerically the effect of the axial confinement in a configuration in which a hole is present in the gas density at the center of the trap. We find that in a strongly elongated trap the speed of sound in both the BCS and the unitary regime differs by a factor sqrt{3/5} from that in a homogeneous three-dimensional superfluid. The predictions of the theory could be tested by measurements of sound-wave propagation in a set-up such as that exploited by M.R. Andrews et al. [Phys. Rev. Lett. 79, 553 (1997)] for an atomic Bose-Einstein condensate.
The aim of this thesis project is to investigate the bit commitment protocol in the framework of operational probabilistic theories. In particular, a careful study is carried out on the feasibility of bit commitment in the theory of non-local boxes. New aspects of the theory are also presented.
We have searched for the decay B^+ -> omega l^+ nu in 78 fb^-1 of Y(4S) data (85.0 million BBbar events) accumulated with the Belle detector. The final state is fully reconstructed using the omega decay into pi^+ pi^- pi^0 and the detector hermeticity to infer the neutrino momentum. The signal yield is extracted by a two-dimensional fit to the lepton momentum and the invariant pi^+ pi^- pi^0 mass. The result of the fit depends on the form factor model assumed for the decay. Taking the average over three different models, 421 +/- 132 events are found in the data corresponding to a preliminary branching fraction of (1.4 +/- 0.4(stat) +/- 0.2(syst) +/- 0.3(model)) * 10^-4.
Networks are representations of complex underlying social processes. However, a given network may be more suitable for modeling one behavior of individuals than another. In many cases, aggregate population models may be more effective than modeling on the network. We present a general framework for evaluating the suitability of given networks for a set of predictive tasks of interest, compared against alternative networks inferred from data. We present several interpretable network models and measures for our comparison. We apply this general framework to a case study on collective classification of music preferences in a newly available dataset of the Last.fm social network.
In this article we analyze the nuclear matrix elements (NMEs) of the neutrinoless double beta decays of the nuclei 48-Ca, 76-Ge, 82-Se, 124-Sn, 130-Te and 136-Xe in the framework of the Interacting Shell Model (ISM). We study the relative value of the different contributions to them, such as higher order terms in the nuclear current, finite nuclear size effects and short range correlations, as well as their evolution with the maximum seniority permitted in the wave functions. We also discuss the build-up of the NMEs as a function of the distance between the decaying neutrons. We calculate the decays to the first excited 0+ final states and find that these decays are at least 25 times more suppressed with respect to the ground state to ground state transition.
We study the possibility of detecting the charged Higgs bosons predicted in the Minimal Supersymmetric Standard Model $(H^\pm)$, via the reactions $e^{+}e^{-}\to \tau^-\bar \nu_{\tau}H^+, \tau^+\nu_\tau H^-$, using the helicity formalism. We analyze the region of parameter space $(m_{A^0}-\tan\beta)$ where $H^\pm$ could be detected in the limit of large $\tan\beta$. The numerical computation is done for the energies expected to be available at LEP-II ($\sqrt{s}=200$ GeV) and at a possible Next Linear $e^{+}e^{-}$ Collider ($\sqrt{s}=500$ GeV).
After a short summary of my talk, I discuss $K_{l3}$ decays and elastic $\pi\pi$ scattering in the framework of chiral perturbation theory.
We propose an order-disorder type microscopic model for BaTiO$_3$-like ferroelectric substances. Our model has three phase transitions and four phases. The symmetry and directions of the polarizations of the ordered phases agree with the experimental results for BaTiO$_3$. The intermediate phases in our model are incompletely ordered phases of the kind that appears in a generalized clock model.
We derive asymptotic formulas for the number of rational points on a smooth projective quadratic hypersurface of dimension at least three inside of a shrinking adelic open neighbourhood. This is a quantitative version of weak approximation for quadrics and allows us to deduce the best growth rate of the size of such an adelic neighbourhood for which equidistribution is preserved.
Multi-electron production is studied at high electron transverse momentum in positron- and electron-proton collisions using the H1 detector at HERA. The data correspond to an integrated luminosity of 115 pb-1. Di-electron and tri-electron event yields are measured. Cross sections are derived in a restricted phase space region dominated by photon-photon collisions. In general good agreement is found with the Standard Model predictions. However, for electron pair invariant masses above 100 GeV, three di-electron events and three tri-electron events are observed, compared to Standard Model expectations of 0.30 \pm 0.04 and 0.23 \pm 0.04, respectively.
Stochastic kinetic models (SKMs) are increasingly used to account for the inherent stochasticity exhibited by interacting populations of species in areas such as epidemiology, population ecology and systems biology. Species numbers are modelled using a continuous-time stochastic process, and, depending on the application area of interest, this will typically take the form of a Markov jump process or an It\^o diffusion process. Widespread use of these models is typically precluded by their computational complexity. In particular, performing exact fully Bayesian inference in either modelling framework is challenging due to the intractability of the observed data likelihood, necessitating the use of computationally intensive techniques such as particle Markov chain Monte Carlo (particle MCMC). It is proposed to increase the computational and statistical efficiency of this approach by leveraging the tractability of an inexpensive surrogate derived directly from either the jump or diffusion process. The surrogate is used in three ways: in the design of a gradient-based parameter proposal, to construct an appropriate bridge and in the first stage of a delayed-acceptance step. The resulting approach, which exactly targets the posterior of interest, offers substantial gains in efficiency over a standard particle MCMC implementation.
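To make the delayed-acceptance idea concrete, here is a minimal sketch of one such Metropolis-Hastings step under a symmetric random-walk proposal; `surrogate_loglik`, `particle_filter_loglik` and `log_prior` are placeholder callables standing in for the cheap surrogate, the unbiased particle-filter likelihood estimator and the prior, and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_acceptance_step(theta, loglik_hat, surrogate_loglik,
                            particle_filter_loglik, log_prior, step=0.1):
    """One delayed-acceptance MH step (symmetric random-walk proposal).

    Stage 1 screens the proposal with the cheap surrogate; only proposals that
    survive pay for an (unbiased) particle-filter likelihood estimate, and the
    stored estimate loglik_hat is reused, so the chain still targets the exact posterior.
    """
    prop = theta + step * rng.standard_normal(theta.shape)

    # Stage 1: cheap surrogate screen (includes the prior)
    log_a1 = (surrogate_loglik(prop) + log_prior(prop)) \
           - (surrogate_loglik(theta) + log_prior(theta))
    if np.log(rng.uniform()) >= log_a1:
        return theta, loglik_hat              # early rejection, no particle filter run

    # Stage 2: correct with the expensive particle-filter estimate (priors cancel)
    loglik_prop = particle_filter_loglik(prop)
    log_a2 = (loglik_prop + surrogate_loglik(theta)) \
           - (loglik_hat + surrogate_loglik(prop))
    if np.log(rng.uniform()) < log_a2:
        return prop, loglik_prop
    return theta, loglik_hat

# Usage: iterate delayed_acceptance_step inside a standard MCMC loop,
# carrying (theta, loglik_hat) from one iteration to the next.
```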
Sociometric badges are an emerging technology for studying how teams interact in physical spaces. Audio data recorded by sociometric badges is often downsampled so as not to record the content of the badge holders' discussions. To gain more information about interactions inside teams equipped with sociometric badges, a Voice Activity Detector (VAD) is deployed to measure the verbal activity of the interaction. Detecting voice activity from downsampled audio data is challenging because downsampling destroys information in the data. We developed a VAD using deep learning techniques that achieves only moderate accuracy in a low-noise meeting setting and across variable noise levels, despite excellent validation performance. Experiences and lessons learned while developing the VAD are discussed.
Measurements transfer information about a system to the apparatus, and then further on -- to observers and (often inadvertently) to the environment. I show that even imperfect copying essential in such situations restricts possible unperturbed outcomes to an orthogonal subset of all possible states of the system, thus breaking the unitary symmetry of its Hilbert space implied by the quantum superposition principle. Preferred outcome states emerge as a result. They provide a framework for the ``wavepacket collapse'', designating terminal points of quantum jumps, and defining the measured observable by specifying its eigenstates. In quantum Darwinism, they are the progenitors of multiple copies spread throughout the environment -- the fittest quantum states that not only survive decoherence, but subvert it into carrying information about them -- into becoming a witness.
We prove that the complement of a closed set S satisfying an extended exterior sphere condition is nothing but the union of closed balls with common radius. This generalizes [11, Theorem 3] where the set S is assumed to be prox-regular, a property stronger than the extended exterior sphere condition. We also provide a sufficient condition for the equivalence between prox-regularity and the extended exterior sphere condition that generalizes [13, Corollary 3.12] to the case in which S is not necessarily regular closed.
An algebraic treatment of shape-invariant potentials is discussed. By introducing an operator which reparametrizes wavefunctions, the shape-invariance condition can be related to a generalized Heisenberg-Weyl algebra. It is shown that this makes it possible to define a coherent state associated with the shape-invariant potentials.
The soft X-ray emission in obscured active galactic nuclei (AGN) is dominated by emission lines, produced in a gas photoionized by the nuclear continuum and likely spatially coincident with the optical narrow line region (NLR). However, a fraction of the observed soft X-ray flux appears like a featureless power law continuum. If the continuum underlying the soft X-ray emission lines is due to Thomson scattering of the nuclear radiation, it should be very highly polarized. We calculated the expected amount of polarization assuming a simple conical geometry for the NLR, combining these results with the observed fraction of the reflected continuum in bright obscured AGN.
From robots that replace workers to robots that serve as helpful colleagues, the field of robotic automation is experiencing a new trend that represents a huge challenge for component manufacturers. This contribution starts from an innovative vision of an ever closer collaboration between cobots, able to do specific physical jobs with precision; the AI world, able to analyze information and support the decision-making process; and humans, able to provide a strategic vision of the future.
Children and adults with cerebral palsy (CP) can have involuntary upper limb movements as a consequence of the symptoms that characterize their motor disability, leading to difficulties in communicating with caretakers and peers. We describe how a socially assistive robot may help individuals with CP to practice non-verbal communicative gestures using an active orthosis in a one-on-one number-guessing game. We performed a user study and data collection with participants with CP; we found that participants preferred an embodied robot over a screen-based agent, and we used the participant data to train personalized models of participant engagement dynamics that can be used to select personalized robot actions. Our work highlights the benefit of personalized models in the engagement of users with CP with a socially assistive robot and offers design insights for future work in this area.
We study deformations of complex projective varieties that are homotopically or homologically trivial. We formulate several conjectures and give some examples and partial answers.
We present a study of two-particle correlation functions involving photons and neutral pions in proton-proton and lead-lead collisions at the LHC energy. The aim is to use these correlation functions to quantify the effects of the medium on the jet decay properties.
Cophasing six telescopes from the CHARA array, the CHARA-Michigan Phasetracker (CHAMP) and Michigan Infrared Combiner (MIRC) are pushing the frontiers of infrared long-baseline interferometric imaging in key scientific areas such as star- and planet-formation. Here we review our concepts and recent improvements on the CHAMP and MIRC control interfaces, which establish the communication to the real-time data recording & fringe tracking code, provide essential performance diagnostics, and assist the observer in the alignment and flux optimization procedure. For fringe detection and tracking with MIRC, we have developed a novel matrix approach, which provides predictions for the fringe positions based on cross-fringe information.
A long-standing vision of backscatter communications is to provide long-range connectivity and high-speed transmissions for batteryless Internet-of-Things (IoT). Recent years have seen major innovations in designing backscatters toward this goal. Yet, they either operate at a very short range, or experience extremely low throughput. This paper takes one step further toward breaking this stalemate, by presenting PolarScatter that exploits channel polarization in long-range backscatter links. We transform backscatter channels into nearly noiseless virtual channels through channel polarization, and convey bits with extremely low error probability. Specifically, we propose a new polar code scheme that automatically adapts itself to different channel quality, and design a low-cost encoder to accommodate polar codes on resource-constrained backscatter tags. We build a prototype PCB tag and test it in various outdoor and indoor environments. Our experiments show that our prototype achieves up to 10$\times$ throughput gain, or extends the range limit by 1.8$\times$ compared with the state-of-the-art long-range backscatter solution. We also simulate an IC design in TSMC 65 nm LP CMOS process. Compared with traditional encoders, our encoder reduces storage overhead by three orders of magnitude, and lowers the power consumption to tens of microwatts.
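For readers unfamiliar with polar coding, the sketch below shows the textbook polar transform (the n-fold Kronecker power of the 2x2 kernel over GF(2)). It is only an illustration of the encoding principle, not PolarScatter's channel-adaptive scheme or its low-cost tag encoder, and the frozen-bit positions used here are an assumed example.

```python
import numpy as np

def polar_encode(u, n):
    """Encode a length-2**n bit vector u with the polar transform G_n = F^{(kron n)},
    where F = [[1, 0], [1, 1]] and arithmetic is over GF(2)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return (u @ G) % 2

# Example: N = 8 synthesized channels; 'frozen' positions (assumed to be the
# low-reliability channels identified by polarization) carry zeros, the rest
# carry the K = 4 message bits.
n, N = 3, 8
frozen = [0, 1, 2, 4]          # illustrative choice of low-quality channel indices
message = [1, 0, 1, 1]
u = np.zeros(N, dtype=np.uint8)
u[[i for i in range(N) if i not in frozen]] = message
print(polar_encode(u, n))
```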
Let $G$ be a graph on $n$ vertices and let $\mathcal{L}_k$ be an arbitrary function that assigns each vertex in $G$ a list of $k$ colours. Then $G$ is $\mathcal{L}_k$-list colourable if there exists a proper colouring of the vertices of $G$ such that every vertex is coloured with a colour from its own list. We say $G$ is $k$-choosable if for every such function $\mathcal{L}_k$, $G$ is $\mathcal{L}_k$-list colourable. The minimum $k$ such that $G$ is $k$-choosable is called the list chromatic number of $G$ and is denoted by $\chi_L(G)$. Let $\chi_L(G) = s$ and let $t$ be a positive integer less than $s$. The partial list colouring conjecture due to Albertson et al. \cite{albertson2000partial} states that for every $\mathcal{L}_t$ that maps the vertices of $G$ to $t$-sized lists, there always exists an induced subgraph of $G$ of size at least $\frac{tn}{s}$ that is $\mathcal{L}_t$-list colourable. In this paper we show that the partial list colouring conjecture holds true for certain classes of graphs like claw-free graphs, graphs with large chromatic number, chordless graphs, and series-parallel graphs. In the second part of the paper, we put forth a question which is a variant of the partial list colouring conjecture: does $G$ always contain an induced subgraph of size at least $\frac{tn}{s}$ that is $t$-choosable? We show that the answer to this question is not always `yes' by explicitly constructing an infinite family of $3$-choosable graphs where a largest induced $2$-choosable subgraph of each graph in the family is of size at most $\frac{5n}{8}$.
Next-generation spectroscopic surveys will map the large-scale structure of the observable universe, using emission line galaxies as tracers. While each survey will map the sky with a specific emission line, interloping emission lines can masquerade as the survey's intended emission line at different redshifts. Interloping lines from galaxies that are not removed can contaminate the power spectrum measurement, mixing correlations from various redshifts and diluting the true signal. We assess the potential for power spectrum contamination, finding that an interloper fraction worse than 0.2% could bias power spectrum measurements for future surveys by more than 10% of statistical errors, while also biasing power spectrum inferences. We also construct a formalism for predicting cosmological parameter bias, demonstrating that a 0.15%-0.3% interloper fraction could bias the growth rate by more than 10% of the error, which can affect constraints on gravity from upcoming surveys. We use the COSMOS Mock Catalog (CMC), with the emission lines re-scaled to better reproduce recent data, to predict potential interloper fractions for the Prime Focus Spectrograph (PFS) and the Wide-Field InfraRed Survey Telescope (WFIRST). We find that secondary line identification, or confirming galaxy redshifts by finding correlated emission lines, can remove interlopers for PFS. For WFIRST, we use the CMC to predict that the 0.2% target can be reached for the WFIRST H$\alpha$ survey, but sensitive optical and near-infrared photometry will be required. For the WFIRST [OIII] survey, the predicted interloper fractions reach several percent and their effects will have to be estimated and removed statistically (e.g. with deep training samples). (Abridged)
We study a natural functional on the space of holomorphic sections of the Deligne-Hitchin moduli space of a compact Riemann surface, generalizing the energy of equivariant harmonic maps corresponding to twistor lines. We give a link to a natural meromorphic connection on the hyperholomorphic line bundle recently constructed by Hitchin. Moreover, we prove that for a certain class of real holomorphic sections of the Deligne-Hitchin moduli space, the functional is basically given by the Willmore energy of corresponding (equivariant) conformal map to the 3-sphere. As an application we use the functional to distinguish new components of real holomorphic sections of the Deligne-Hitchin moduli space from the space of twistor lines.
The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability.
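As an illustration of the local-update idea, below is a minimal neural cellular automaton sketch in PyTorch; the channel counts, fire rate and grid size are assumptions, and the training loop that matches statistical microstructure descriptors is omitted, so this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class MicrostructureNCA(nn.Module):
    """Minimal neural cellular automaton: each cell updates its state from a local
    (3x3) neighbourhood; repeated application evolves a 2D (micro)structure field."""
    def __init__(self, channels=8, hidden=32):
        super().__init__()
        self.perceive = nn.Conv2d(channels, hidden, kernel_size=3, padding=1,
                                  padding_mode="circular")   # local information only
        self.update = nn.Conv2d(hidden, channels, kernel_size=1)
        nn.init.zeros_(self.update.weight)   # zero update at start: CA begins as identity
        nn.init.zeros_(self.update.bias)

    def forward(self, state, steps=16, fire_rate=0.5):
        for _ in range(steps):
            delta = self.update(torch.relu(self.perceive(state)))
            # stochastic update: only a random subset of cells changes each step
            mask = (torch.rand_like(state[:, :1]) < fire_rate).float()
            state = state + mask * delta
        return state

nca = MicrostructureNCA()
grid = torch.rand(1, 8, 64, 64)    # the grid size is free at inference time
print(nca(grid).shape)             # torch.Size([1, 8, 64, 64])
```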
Innovative ideas are often situated where disciplines meet, and socio-economic problems generally require contributions from several disciplines. Ways to stimulate interdisciplinary research collaborations are therefore an increasing point of attention for science policy. There is concern that 'regular' funding programs, involving advice from disciplinary experts and discipline-bound viewpoints, may not adequately stimulate, select or evaluate this kind of research. This has led to specific policies aimed at interdisciplinary research in many countries. There is however at this moment no generally accepted method to adequately select and evaluate interdisciplinary research. In the vast context of different forms of interdisciplinarity, this paper aims to contribute to the debate on best practices to stimulate and support interdisciplinary research collaborations. It describes the selection procedures and results of a university program supporting networks formed 'bottom up', integrating expertise from different disciplines. The program's recent evaluation indicates that it is successful in selecting and supporting the interdisciplinary synergies aimed for, responding to a need experienced in the field. The analysis further confirms that potential for interdisciplinary collaboration is present in all disciplines.
In this article we present the prototype of a workshop on naive set theory designed for high school students in or around their seventh school year. Our concept is based on two events which the author organized in 2006 and 2010 for students of elementary school and high school, respectively. The article also includes a practice report on the two workshops.
Rapid growth of genetic databases means huge savings from improvements in their data compression, which requires better, inexpensive statistical models. This article proposes automatized optimizations, e.g., of Markov-like models, especially context binning and model clustering. While it is popular to just remove the low bits of the context, the proposed context binning automatically optimizes such a reduction as a table, state = bin[context], determining the probability distribution; this way nearly all useful information is extracted, even from very large contexts, into a relatively small number of states. The second proposed approach, model clustering, uses k-means clustering in the space of general statistical models, allowing a few models (as cluster centroids) to be optimized and chosen, e.g., separately for each read. Some adaptivity techniques to handle data non-stationarity are also briefly discussed.
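The sketch below illustrates, under simplifying assumptions, the state = bin[context] idea: empirical conditional symbol distributions are gathered per context and clustered into a small number of states with a plain k-means. The paper's actual optimization (and its compression-oriented objective) is not reproduced here; the sequence, context order and number of states are illustrative.

```python
import numpy as np

def context_counts(seq, order=2, alphabet="ACGT"):
    """Empirical symbol counts for every order-k context in a DNA-like sequence."""
    idx = {s: i for i, s in enumerate(alphabet)}
    counts = {}
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        counts.setdefault(ctx, np.zeros(len(alphabet)))[idx[sym]] += 1
    return counts

def bin_contexts(counts, n_states=4, iters=50, seed=0):
    """Cluster contexts by their conditional distributions (k-means on probability
    vectors), so that state = bin[context] indexes one of n_states distributions."""
    rng = np.random.default_rng(seed)
    ctxs = sorted(counts)
    P = np.array([counts[c] / counts[c].sum() for c in ctxs])
    centers = P[rng.choice(len(P), n_states, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((P[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_states):
            if np.any(labels == k):
                centers[k] = P[labels == k].mean(axis=0)
    return {c: int(l) for c, l in zip(ctxs, labels)}, centers

seq = "ACGTACGTTTACGGCATGCATTACGAGT" * 20
bins, state_dists = bin_contexts(context_counts(seq, order=2))
print(bins["AC"], state_dists[bins["AC"]])   # state for context "AC" and its distribution
```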
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community, as it is able to leverage unpaired datasets effectively. However, clinical acceptance of these synthetic images poses a significant challenge, as they are subject to strict evaluation protocols. A commonly established drawback of the CycleGAN, the introduction of artifacts in generated images, is unforgivable in the case of medical images. In an attempt to alleviate this drawback, we explore different constraints of the CycleGAN along with adaptive control of these constraints. The benefits of imposing additional constraints on the CycleGAN, in the form of structure-retaining losses, are also explored. A generalized frequency loss inspired by arxiv:2012.12821 that preserves content in the frequency domain between source and target is investigated and compared with existing losses such as the MIND loss arXiv:1809.04536. CycleGAN implementations from the ganslate framework (https://github.com/ganslate-team/ganslate) are used for experimentation in this thesis. Synthetic images generated by our methods are quantitatively and qualitatively investigated and outperform the baseline CycleGAN and other approaches. Furthermore, no observable artifacts or loss in image quality is found, which is critical for the acceptance of these synthetic images. The synthetic medical images thus generated are also evaluated using domain-specific evaluation and using segmentation as a downstream task, in order to clearly highlight their applicability to clinical workflows.
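As an illustration of what a frequency-domain constraint can look like, here is a minimal PyTorch sketch of a loss comparing log-magnitude spectra of the source and translated images; it is inspired by, but not identical to, the generalized frequency loss studied in the thesis, and the function name and weighting are assumptions.

```python
import torch

def frequency_loss(source, translated, eps=1e-8):
    """L1 distance between log-magnitude spectra of source and translated images.

    source, translated: tensors of shape (B, C, H, W). The 2D FFT is taken per
    channel, so the loss penalizes changes in frequency content (e.g. introduced
    artifacts) independently of pixel-wise intensity mappings.
    """
    f_src = torch.fft.fft2(source, norm="ortho")
    f_trg = torch.fft.fft2(translated, norm="ortho")
    mag_src = torch.log(torch.abs(f_src) + eps)
    mag_trg = torch.log(torch.abs(f_trg) + eps)
    return torch.mean(torch.abs(mag_src - mag_trg))

# Hypothetical usage inside a CycleGAN objective:
# total_loss = gan_loss + lambda_cyc * cycle_loss + lambda_freq * frequency_loss(real_A, fake_B)
```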
Central heavy-ion collisions may induce sizeable fluctuations of the topological charge. This effect is expected to distort the dispersion relation for the hadron masses. We construct a general setup for a compact description of this phenomenon in the framework of bottom-up holographic approach to QCD. A couple of soft wall holographic models are proposed for the vector mesons. The states having different circular polarizations are shown to have different effective mass. The requirement of stability imposes strong constraints on the possible choice of models.
Motivated by the recent discovery of superconductivity in Na$_x$CoO$_2\cdot y$H$_2$O, we use series expansion methods and cluster mean-field theory to study spontaneous charge order, Neel order, ferromagnetic order, dimer order and phase-separation in the triangular-lattice t-J-V model at 2/3 electron density. We find that for t<0, the charge ordered state, with electrons preferentially occupying a honeycomb lattice, is very robust. Quite surprisingly, hopping to the third sublattice can even enhance Neel order. At large negative t and small V, the Nagaoka ferromagnetic state is obtained. For large positive t, charge and Neel order vanish below a critical V, giving rise to an itinerant antiferromagnetically correlated state.
We consider bilinear optimal control problems, whose objective functionals do not depend on the controls. Hence, bang-bang solutions will appear. We investigate sufficient second-order conditions for bang-bang controls, which guarantee local quadratic growth of the objective functional in $L^1$. In addition, we prove that for controls that are not bang-bang, no such growth can be expected. Finally, we study the finite-element discretization, and prove error estimates of bang-bang controls in $L^1$-norms.
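For concreteness, the local quadratic growth in $L^1$ referred to above is of the following type (a sketch of the statement under the paper's assumptions, with $\bar u$ the reference bang-bang control and constants $c, \varepsilon > 0$): $J(u) \ge J(\bar u) + c\,\|u-\bar u\|_{L^1}^{2}$ for all admissible controls $u$ with $\|u-\bar u\|_{L^1} \le \varepsilon$.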
Quantum steganography is a powerful method for information security where communications between a sender and receiver are disguised as naturally occurring noise in a channel. We encoded the phase and amplitude of weak coherent laser states such that a third party monitoring the communications channel, measuring the flow of optical states through the channel, would see an amalgamation of states indistinguishable from thermal noise light. Using quantum state tomography, we experimentally reconstructed the density matrices for artificially engineered thermal states and spontaneous emission from an optical amplifier and verified a state fidelity F>0.98 when compared with theoretical thermal states.
We show directly that the fractal uncertainty principle of Bourgain-Dyatlov [arXiv:1612.09040] implies that there exists $ \sigma > 0 $ for which the Selberg zeta function for a convex co-compact hyperbolic surface has only finitely many zeros with $ \Re s \geq \frac12 - \sigma$. That eliminates advanced microlocal techniques of Dyatlov-Zahl [arXiv:1504.06589] though we stress that these techniques are still needed for resolvent bounds and for possible generalizations to the case of non-constant curvature.
Various microorganisms and some mammalian cells are able to swim in viscous fluids by performing nonreciprocal body deformations, such as rotating attached flagella or distorting their entire body. In order to perform chemotaxis, i.e. to move towards and stay at high concentrations of nutrients, they adapt their swimming gaits in a nontrivial manner. We propose a model of how microswimmers can autonomously adapt their shape in order to swim in one dimension towards high field concentrations, using an internal decision-making machinery modeled by an artificial neural network. We present two methods to measure chemical gradients, spatial and temporal sensing, as known for swimming mammalian cells and bacteria, respectively. Using the NEAT genetic algorithm, surprisingly simple neural networks evolve which control the shape deformations of the microswimmer and allow it to navigate in static and complex time-dependent chemical environments. By introducing noisy signal transmission in the neural network, the well-known biased run-and-tumble motion emerges. Our work demonstrates that the evolution of a simple internal decision-making machinery, which we can fully interpret and which is coupled to the environment, allows navigation in diverse chemical landscapes. These findings are of relevance for intracellular biochemical sensing mechanisms of single cells, or for the simple nervous systems of small multicellular organisms such as C. elegans.
The X-ray binary XTE J1817-330 was discovered in outburst on 26 January 2006 with RXTE/ASM. One year later, another X-ray transient discovered in 1996, XTE J1856+053, was detected by RXTE during a new outburst on 28 February 2007. We triggered XMM-Newton target of opportunity observations of these two objects to constrain their parameters and search for stellar black holes. We summarize the properties of these two X-ray transients and show that the soft X-ray spectra indeed indicate the presence of an accreting stellar black hole in each of the two systems.
Apparent competition is an indirect interaction between species that share natural resources without any mutual aggression but negatively affect each other if there is a common enemy. The negative results of the apparent competition are reflected in the species' spatial segregation, which impacts the dynamics of their populations. Performing a series of stochastic simulations, we study a model where organisms of two prey species do not compete for space but share a common predator. Our outcomes elucidate the central role played by the predator in the pattern formation and coarsening dynamics in apparent competition models. Investigating the effects of predator mortality on the persistence of the species, we find a crossover between a curvature driven scaling regime and a coexistence scenario. For low predator mortality, spatial domains mainly inhabited by one type of prey arise, surrounded by interfaces that mostly contain predators. We demonstrate that the dynamics of the interface network are curvature driven, with a coarsening that follows a scaling law common to other nonlinear systems. The effects of the apparent competition decrease for high predator mortality, allowing organisms of the two prey species to share a more significant fraction of the lattice. Finally, our results reveal that the predation capacity in single-prey domains influences the scaling power law that characterises the coarsening dynamics. Our findings may be helpful to biologists in understanding the pattern formation and dynamics of biodiversity in systems with apparent competition.
Entanglement is a key issue in quantum physics, giving rise to resources for achieving tasks that are not possible within the realm of classical physics. Quantum entanglement varies with the evolution of quantum systems. It is of significance to investigate entanglement dynamics in terms of quantum channels. We study entanglement-breaking channels and present necessary and sufficient conditions for a quantum channel to be entanglement-breaking for qubit systems. Furthermore, the concept of a strong entanglement-breaking channel is introduced. The amendment of entanglement-breaking channels is also studied.
Spin-orbit-torque (SOT) switching using the spin Hall effect (SHE) in heavy metals and topological insulators (TIs) has great potential for ultra-low power magnetoresistive random-access memory (MRAM). To be competitive with conventional spin-transfer-torque (STT) switching, a pure spin current source with large spin Hall angle (${\theta}_{SH}$ > 1) and high electrical conductivity (${\sigma} > 10^5 {\Omega}^{-1}m^{-1}$) is required. Here, we demonstrate such a pure spin current source: BiSb thin films with ${\sigma}{\sim}2.5*10^5 {\Omega}^{-1}m^{-1}$, ${\theta}_{SH}{\sim}52$, and spin Hall conductivity ${\sigma}_{SH}{\sim}1.3*10^7 {\hbar}/2e{\Omega}^{-1}m^{-1}$ at room temperature. We show that BiSb thin films can generate a colossal spin-orbit field of 2770 Oe/(MA/cm$^2$) and a critical switching current density as low as 1.5 MA/cm$^2$ in Bi$_{0.9}$Sb$_{0.1}$ / MnGa bi-layers. BiSb is the best candidate for the first industrial application of topological insulators.
We consider the problem of computing a sparse binary representation of an image. To be precise, given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that when added together best reconstruct the given input. We formulate this problem with an $L_2$ loss on the reconstruction error, and an $L_0$ (or, equivalently, an $L_1$) loss on the binary vector enforcing sparsity. This yields a quadratic unconstrained binary optimization (QUBO) problem, whose optimal solution(s) are in general NP-hard to find. We present a method of unsupervised and unnormalized dictionary feature learning, for a desired sparsity level, to best match the data. Next, we solve the sparse representation QUBO by implementing it both on a D-Wave quantum annealer with Pegasus chip connectivity via minor embedding, as well as on the Intel Loihi 2 spiking neuromorphic processor. On the quantum annealer, we sample from the sparse representation QUBO using parallel quantum annealing combined with quantum evolution Monte Carlo, also known as iterated reverse annealing. On Loihi 2, we use a stochastic winner-take-all network of neurons. The solutions are benchmarked against simulated annealing, a classical heuristic, and the optimal solutions are computed using CPLEX. Iterated reverse quantum annealing performs similarly to simulated annealing, although simulated annealing is always able to sample the optimal solution whereas quantum annealing was not always able to. The Loihi 2 solutions that are sampled are on average more sparse than the solutions from any of the other methods. Loihi 2 outperforms a standard linear-schedule anneal on the D-Wave quantum annealer, while iterated reverse quantum annealing performs much better than both unmodified linear-schedule quantum annealing and iterated warm starting on Loihi 2.
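To see how the sparse-coding objective turns into a QUBO, the sketch below builds the QUBO matrix from the binary identity $a_i^2 = a_i$ and checks it on a toy dictionary, with brute-force enumeration standing in for the annealer and neuromorphic samplers; the dictionary, $\lambda$ and problem sizes are illustrative, not those used in the paper.

```python
import numpy as np
from itertools import product

def sparse_coding_qubo(D, x, lam):
    """QUBO matrix Q for min_a ||x - D a||^2 + lam * sum(a), with a in {0,1}^K.

    Expanding the square gives a^T (D^T D) a - 2 x^T D a + const; since a_i^2 = a_i
    for binary variables, the linear terms (-2 D^T x + lam) fold into the diagonal,
    so the objective is a^T Q a up to the constant ||x||^2.
    """
    G = D.T @ D                                   # quadratic couplings
    Q = G.copy()
    np.fill_diagonal(Q, np.diag(G) - 2.0 * (D.T @ x) + lam)
    return Q

def brute_force_solve(Q):
    """Exact minimizer by enumeration -- only feasible for small K; here it stands in
    for the quantum annealer / neuromorphic samplers."""
    K = Q.shape[0]
    return min((np.array(a) for a in product([0, 1], repeat=K)),
               key=lambda a: a @ Q @ a)

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 12))                  # toy overcomplete dictionary: 12 atoms in 8 dims
a_true = (rng.uniform(size=12) < 0.25).astype(float)
x = D @ a_true
Q = sparse_coding_qubo(D, x, lam=0.1)
print("recovered:", brute_force_solve(Q), "true:", a_true.astype(int))
```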
EuPd$_2$Si$_2$ is a valence-fluctuating system undergoing a temperature-induced valence crossover at $T'_V\approx160\,$K. We present the successful single crystal growth using the Czochralski method for the substitution series EuPd$_2$(Si$_{1-x}$Ge$_x$)$_2$, with substitution levels $x\leq 0.15$. A careful determination of the germanium content revealed that only half of the nominal concentration is built into the crystal structure. From thermodynamic measurements it is established that $T'_V$ is strongly suppressed for small substitution levels and antiferromagnetic order from stable divalent europium emerges for $x\gtrsim 0.10$. The valence transition is accompanied by a pronounced change of the lattice parameter $a$ of order 1.8%. In the antiferromagnetically ordered state below $T_N = 47$ K, we find sizeable magnetic anisotropy with an easy plane perpendicular to the crystallographic c direction. An entropy analysis revealed that no valence fluctuations are present for the magnetically ordered materials. Combining the obtained thermodynamic and structural data, we construct a concentration-temperature phase diagram demonstrating a rather abrupt change from a valence-fluctuating to a magnetically-ordered state in EuPd$_2$(Si$_{1-x}$Ge$_x$)$_2$.
We present an algorithm for the solution of Sylvester equations with right-hand side of low rank. The method is based on projection onto a block rational Krylov subspace, with two key contributions with respect to the state of the art. First, we show how to maintain the last pole equal to infinity throughout the iteration, by means of pole reordering. This allows for a cheap evaluation of the true residual at every step. Second, we extend the convergence analysis in [Beckermann B., An error analysis for rational Galerkin projection applied to the Sylvester equation, SINUM, 2011] to the block case. This extension allows us to link the convergence with the problem of minimizing the norm of a small rational matrix over the spectra or fields of values of the involved matrices. This is in contrast with the non-block case, where the minimization problem is scalar instead of matrix-valued. Replacing the norm of the objective function with an easier-to-evaluate function yields several adaptive pole selection strategies, providing a theoretical analysis for known heuristics, as well as effective novel techniques.
As the world's population ages, cataract-induced visual dysfunction and blindness are on the increase. This is a significant global problem. The most common symptoms of cataracts are glare and blurred vision. Usually, people with cataracts have trouble seeing or reading at distance or in low light, and their color perception is also altered. Furthermore, cataract is a sneaky disease, as it is usually a very slow but progressive process to which patients adapt, so that they find it difficult to recognize. Moreover, for doctors it can be very difficult to explain the patients' symptoms and give comprehensive answers to them. We built and tested an optical device that uses egg albumen to mimic the optical degradation of the crystalline lens related to cataracts and that is able to visualize how cataract impairs vision. To the best of our knowledge, it is the first experimental system developed for this purpose. This can be a valuable tool for educating students in the medical sciences, as well as providing a method to illustrate to patients how their vision is affected by the progression of cataract.
Craniofacial Superimposition involves the superimposition of an image of a skull with a number of ante-mortem face images of an individual and the analysis of their morphological correspondence. Despite being used for one century, it is not yet a mature and fully accepted technique due to the absence of solid scientific approaches, significant reliability studies, and international standards. In this paper we present a comprehensive experimentation on the limitations of Craniofacial Superimposition as a forensic identification technique. The study involves different experiments over more than 1 Million comparisons performed by a landmark-based automatic 3D/2D superimposition method. The total sample analyzed consists of 320 subjects and 29 craniofacial landmarks.
High entropy oxides (HEOs) are a class of materials containing equimolar portions of five or more transition metal and/or rare-earth elements. We report here on the layer-by-layer growth of HEO [(La$_{0.2}$Pr$_{0.2}$Nd$_{0.2}$Sm$_{0.2}$Eu$_{0.2}$)NiO$_3$] thin films on NdGaO$_3$ substrates by pulsed laser deposition. The combined characterization with in-situ reflection high energy electron diffraction, atomic force microscopy, and X-ray diffraction affirms the single crystalline nature of the film with smooth surface morphology. The desired +3 oxidation state of Ni has been confirmed by an element-sensitive X-ray absorption spectroscopy measurement. Temperature dependent electrical transport measurements revealed a first order metal-insulator transition with a transition temperature very similar to that of undoped NdNiO$_3$. Since both of these systems have a comparable tolerance factor, this work demonstrates that the electronic behaviors of $A$-site disordered perovskite-HEOs are primarily controlled by the average tolerance factor.
Prediction of protein-ligand binding affinity is a major goal in drug discovery. Generally, free energy gap is calculated between two states (e.g., ligand binding and unbinding). The energy gap implicitly includes the effects of changes in protein dynamics induced by the binding ligand. However, the relationship between protein dynamics and binding affinity remains unclear. Here, we propose a novel method that represents protein behavioral change upon ligand binding with a simple feature that can be used to predict protein-ligand affinity. From unbiased molecular simulation data, an unsupervised deep learning method measures the differences in protein dynamics at a ligand-binding site depending on the bound ligands. A dimension-reduction method extracts a dynamic feature that is strongly correlated to the binding affinities. Moreover, the residues that play important roles in protein-ligand interactions are specified based on their contribution to the differences. These results indicate the potential for dynamics-based drug discovery.
We consider the commutative limit of matrix geometry described by a large-$N$ sequence of some Hermitian matrices. Under some assumptions, we show that the commutative geometry possesses a K\"{a}hler structure. We find an explicit relation between the K\"{a}hler structure and the matrix configurations which define the matrix geometry. We also find a relation between the matrix configurations and those obtained from the geometric quantization.
The evanescent field of an optical nanofiber presents a versatile interface for the manipulation of micron-scale particles in dispersion. Here, we present a detailed study of the optical binding interactions of a pair of 3.13 $\mu$m SiO$_2$ particles in the nanofiber evanescent field. Preferred equilibrium positions for the spheres as a function of nanofiber diameter and sphere size are discussed. We demonstrated optical propulsion and self-arrangement of chains of one to seven 3.13 $\mu$m SiO$_2$ particles; this effect is associated with optical binding via simulated trends of multiple scattering effects. Incorporating an optical nanofiber into an optical tweezers setup facilitated the individual and collective introduction of selected particles to the nanofiber evanescent field for experiments. Computational simulations provide insight into the dynamics behind the observed behavior.
The growing need to localise audiovisual content in multiple languages through subtitles calls for the development of automatic solutions for human subtitling. Neural Machine Translation (NMT) can contribute to the automatisation of subtitling, facilitating the work of human subtitlers and reducing turn-around times and related costs. NMT requires high-quality, large, task-specific training data. The existing subtitling corpora, however, are missing both alignments to the source language audio and important information about subtitle breaks. This poses a significant limitation for developing efficient automatic approaches for subtitling, since the length and form of a subtitle directly depend on the duration of the utterance. In this work, we present MuST-Cinema, a multilingual speech translation corpus built from TED subtitles. The corpus is composed of (audio, transcription, translation) triplets. Subtitle breaks are preserved by inserting special symbols. We show that the corpus can be used to build models that efficiently segment sentences into subtitles, and we propose a method for annotating existing subtitling corpora with subtitle breaks, conforming to the constraint of length.
We make use of deep 1.2mm-continuum observations (12.7microJy/beam RMS) of a 1 arcmin^2 region in the Hubble Ultra Deep Field to probe dust-enshrouded star formation from 330 Lyman-break galaxies spanning the redshift range z=2-10 (to ~2-3 Msol/yr at 1sigma over the entire range). Given the depth and area of ASPECS, we would expect to tentatively detect 35 galaxies extrapolating the Meurer z~0 IRX-beta relation to z>~2 (assuming T_d~35 K). However, only 6 tentative detections are found at z>~2 in ASPECS, with just three at >3sigma. Subdividing z=2-10 galaxies according to stellar mass, UV luminosity, and UV-continuum slope and stacking the results, we only find a significant detection in the most massive (>10^9.75 Msol) subsample, with an infrared excess (IRX=L_{IR}/L_{UV}) consistent with previous z~2 results. However, the infrared excess we measure from our large selection of sub-L* (<10^9.75 Msol) galaxies is 0.11(-0.42)(+0.32) and 0.14(-0.14)(+0.15) at z=2-3 and z=4-10, respectively, lying below even an SMC IRX-beta relation (95% confidence). These results demonstrate the relevance of stellar mass for predicting the IR luminosity of z>~2 galaxies. We furthermore find that the evolution of the IRX-stellar mass relationship depends on the evolution of the dust temperature. If the dust temperature increases monotonically with redshift (as (1+z)^0.32) such that T_d~44-50 K at z>=4, current results are suggestive of little evolution in this relationship to z~6. We use these results to revisit recent estimates of the z>~3 SFR density. One less obvious implication is in interpreting the high Halpha EWs seen in z~5 galaxies: our results imply that star-forming galaxies produce Lyman-continuum photons at twice the efficiency (per unit UV luminosity) as implied in conventional models. Star-forming galaxies can then reionize the Universe, even if the escape fraction is <10%.
We demonstrate a close relationship between superconductivity and the dimensions of the Fe-Se(Te) tetrahedron in FeSe0.5Te0.5. This is done by exploiting thin film epitaxy, which provides controlled biaxial stress, both compressive and tensile, to distort the tetrahedron. The Se/Te height within the tetrahedron is found to be of crucial importance to superconductivity, in agreement with the theoretical proposal that (pi,pi) spin fluctuations promote superconductivity in Fe superconductors.
Modern indoor localization techniques are essential to overcome the weak GPS coverage in indoor environments. Recently, considerable progress has been made in Channel State Information (CSI) based indoor localization with signal fingerprints. However, CSI signal patterns can be complicated in large and highly dynamic indoor spaces with complex interiors, thus a solution to this issue is urgently needed to expand the applications of CSI to broader indoor spaces. In this paper, we propose an end-to-end solution including data collection, pattern clustering, denoising, calibration and a lightweight one-dimensional convolutional neural network (1D CNN) model with CSI fingerprinting to tackle this problem. We have also created, and plan to open source, a CSI dataset with a large amount of data collected across complex indoor environments at Colorado State University. Experiments indicate that our approach achieves up to 68.5% improved performance (mean distance error) with a minimal number of parameters, compared to the best-known deep machine learning and CSI-based indoor localization works.
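As an illustration of what a lightweight 1D CNN for CSI fingerprinting can look like, here is a minimal PyTorch sketch; the numbers of antennas, subcarriers and location classes, and the layer sizes, are assumptions rather than the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class CSI1DCNN(nn.Module):
    """Lightweight 1D CNN: the CSI amplitude vector over subcarriers (antennas as
    channels) is treated as a 1D signal and mapped to a location class."""
    def __init__(self, in_channels=3, n_locations=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # keeps the parameter count small
        )
        self.classifier = nn.Linear(32, n_locations)

    def forward(self, x):                      # x: (batch, antennas, subcarriers)
        return self.classifier(self.features(x).squeeze(-1))

model = CSI1DCNN()
dummy = torch.randn(8, 3, 114)                 # batch of 8 CSI fingerprints, 114 subcarriers
print(model(dummy).shape)                      # torch.Size([8, 16])
```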
The structure of the one loop self-energy graphs of the $\rho$ meson is analyzed in the real time formulation of thermal field theory. The modified spectral function of the $\rho$ meson in hot hadronic matter leads to a large enhancement of lepton pair production below the bare peak of the $\rho$. It has been shown that the effective temperature extracted from the inverse slope of the transverse momentum distributions for various invariant mass ($M$) windows of the pair can be used as an efficient tool to characterize different phases of the evolving matter.
Deep neural networks (DNNs) are shown to be promising solutions in many challenging artificial intelligence tasks. However, it is very hard to figure out whether the low precision of a DNN model is an inevitable result, or caused by defects. This paper aims at addressing this challenging problem. We find that the internal data flow footprints of a DNN model can provide insights to locate the root cause effectively. We develop DeepMorph (DNN Tomography) to analyze the root cause, which can guide a DNN developer to improve the model.
The paper is in essence a survey of categories having $\phi$-weighted colimits for all the weights $\phi$ in some class $\Phi$. We introduce the class $\Phi^+$ of {\em $\Phi$-flat} weights which are those $\psi$ for which $\psi$-colimits commute in the base $\V$ with limits having weights in $\Phi$; and the class $\Phi^-$ of {\em $\Phi$-atomic} weights, which are those $\psi$ for which $\psi$-limits commute in the base $\V$ with colimits having weights in $\Phi$. We show that both these classes are {\em saturated} (that is, what was called {\em closed} in the terminology of \cite{AK88}). We prove that for the class $\p$ of {\em all} weights, the classes $\p^+$ and $\p^-$ both coincide with the class $\Q$ of {\em absolute} weights. For any class $\Phi$ and any category $\A$, we have the free $\Phi$-cocompletion $\Phi(\A)$ of $\A$; and we recognize $\Q(\A)$ as the Cauchy-completion of $\A$. We study the equivalence between ${(\Q(\A^{op}))}^{op}$ and $\Q(\A)$, which we exhibit as the restriction of the Isbell adjunction between ${[\A,\V]}^{op}$ and $[\A^{op},\V]$ when $\A$ is small; and we give a new Morita theorem for any class $\Phi$ containing $\Q$. We end with the study of $\Phi$-continuous weights and their relation to the $\Phi$-flat weights.
An integrable deformation of the known integrable model of two interacting p-dimensional and q-dimensional spherical tops is considered. After reduction this system gives rise to the generalized Lagrange and the Kowalevski tops. The corresponding Lax matrices and classical r-matrices are calculated.
Generative Retrieval (GR), autoregressively decoding relevant document identifiers given a query, has been shown to perform well under the setting of small-scale corpora. By memorizing the document corpus with model parameters, GR implicitly achieves deep interaction between query and document. However, such a memorizing mechanism faces three drawbacks: (1) poor memory accuracy for fine-grained features of documents; (2) memory confusion that worsens as the corpus size increases; (3) huge memory update costs for new documents. To alleviate these problems, we propose the Generative Dense Retrieval (GDR) paradigm. Specifically, GDR first uses the limited memory volume to achieve inter-cluster matching from a query to relevant document clusters. The memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced to conduct fine-grained intra-cluster matching from clusters to relevant documents. The coarse-to-fine process maximizes the advantages of GR's deep interaction and DR's scalability. In addition, we design a cluster-identifier construction strategy to facilitate corpus memorization and a cluster-adaptive negative sampling strategy to enhance the intra-cluster mapping ability. Empirical results show that GDR obtains an average improvement of 3.0 R@100 on the NQ dataset under multiple settings and has better scalability.
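A heavily simplified sketch of the coarse-to-fine idea (not the paper's implementation): a first stage selects candidate clusters, and a second, memorizing-free stage ranks documents inside them by dense similarity. All embeddings, cluster assignments and sizes below are dummy placeholders.
```python
import numpy as np

# Hypothetical corpus: documents grouped into clusters, with dense embeddings.
rng = np.random.default_rng(0)
doc_emb = {f"d{i}": rng.normal(size=64) for i in range(1000)}
clusters = {f"c{k}": [f"d{i}" for i in range(k * 100, (k + 1) * 100)] for k in range(10)}

def generative_cluster_step(query_emb, n_clusters=2):
    """Stand-in for the autoregressive cluster-identifier decoder:
    here clusters are simply scored by centroid similarity."""
    centroids = {c: np.mean([doc_emb[d] for d in docs], axis=0) for c, docs in clusters.items()}
    ranked = sorted(centroids, key=lambda c: -query_emb @ centroids[c])
    return ranked[:n_clusters]

def dense_intra_cluster_step(query_emb, cluster_ids, top_k=5):
    """Memorizing-free matching: exact dot-product ranking inside the chosen clusters."""
    cands = [d for c in cluster_ids for d in clusters[c]]
    return sorted(cands, key=lambda d: -query_emb @ doc_emb[d])[:top_k]

q = rng.normal(size=64)
print(dense_intra_cluster_step(q, generative_cluster_step(q)))
```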
When diagnosing a brain tumor, doctors usually make a diagnosis by observing multimodal brain images from the axial, coronal and sagittal views, respectively, and then make a comprehensive decision based on the information obtained from the multiple views. Inspired by this diagnostic process, and in order to further utilize the 3D information hidden in the dataset, this paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation. The proposed framework consists of 1) a multi-view deep neural network architecture, which comprises multiple learning networks for segmenting the brain tumor from different views, where each deep neural network processes the multi-modal brain images from a single view, and 2) a dynamic decision fusion method, which fuses the segmentation results from the multiple views into an integrated one; two different fusion methods, voting and weighted averaging, have been adopted to evaluate the fusing process. Moreover, a multi-view fusion loss, consisting of the segmentation loss, the transition loss and the decision loss, is proposed to facilitate the training of the multi-view learning networks and to keep the consistency of appearance and space, not only in the process of fusing segmentation results, but also in the process of training the learning networks. By evaluating the proposed framework on BRATS 2015 and BRATS 2018, we find that the fusion of results from multiple views achieves better performance than the segmentation result from a single view, and the effectiveness of the proposed multi-view fusion loss is also demonstrated. Moreover, the proposed framework achieves better segmentation performance and higher efficiency compared to other counterpart methods.
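The two fusion rules mentioned above, voting and weighted averaging, can be sketched as follows on per-view class-probability maps; the array shapes and weights are hypothetical, not those of the BRATS pipeline.
```python
import numpy as np

def fuse_weighted(prob_maps, weights):
    """Weighted averaging of per-view class-probability maps.
    prob_maps: list of arrays with shape (C, D, H, W); weights sum to 1."""
    return np.tensordot(np.asarray(weights), np.stack(prob_maps), axes=1)

def fuse_voting(prob_maps):
    """Majority voting over per-view hard segmentations."""
    votes = np.stack([p.argmax(axis=0) for p in prob_maps])        # (V, D, H, W)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

views = [np.random.rand(4, 8, 16, 16) for _ in range(3)]           # axial, coronal, sagittal (dummy)
fused_prob = fuse_weighted(views, [0.4, 0.3, 0.3])
fused_label = fuse_voting(views)
print(fused_prob.shape, fused_label.shape)                         # (4, 8, 16, 16) (8, 16, 16)
```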
Pathways are integral to systems biology. Their classical representation has proven useful but is inconsistent in the meaning assigned to each arrow (or edge) and inadvertently implies the isolation of one pathway from another. Conversely, modern high-throughput experiments give rise to standardized networks facilitating topological calculations. Combining these perspectives, we can embed classical pathways within large-scale networks and thus demonstrate the crosstalk between them. As more diverse types of high-throughput data become available, we can effectively merge both perspectives, embedding pathways simultaneously in multiple networks. However, the original problem still remains - the current edge representation is inadequate to accurately convey all the information in pathways. Therefore, we suggest that a standardized, well-defined, edge ontology is necessary and propose a prototype here, as a starting point for reaching this goal.
Rashba spin splitting (RSS) in biased semiconductor quantum wells is investigated theoretically based on the eight-band envelope function model. We find that at large wave vectors, RSS is both nonmonotonic and anisotropic as a function of in-plane wave vector, in contrast to the widely used linear and isotropic model. We derive an analytical expression for RSS, which can correctly reproduce such nonmonotonic behavior at large wave vectors. We also investigate numerically the dependence of RSS on the various band parameters and find that RSS increases with decreasing band gap and subband index, increasing valence band offset, external electric field, and well width. Our analytical expression for RSS provides a satisfactory explanation to all these features.
We study the satisfiability of string constraints where context-free membership constraints may be imposed on variables. Additionally a variable may be constrained to be a subword of a word obtained by shuffling variables and their transductions. The satisfiability problem is known to be undecidable even without rational transductions. It is known to be NExptime-complete without transductions, if the subword relations between variables do not have a cyclic dependency between them. We show that the satisfiability problem stays decidable in this fragment even when rational transductions are added. It is 2NExptime-complete with context-free membership, and NExptime-complete with only regular membership. For the lower bound we prove a technical lemma that is of independent interest: The length of the shortest word in the intersection of a pushdown automaton (of size $O(n)$) and $n$ finite-state automata (each of size $O(n)$) can be double exponential in $n$.
An index formula is proposed for contact transformations between contact manifolds equipped with CR structures or with fillings by symplectic manifolds. The formula generalizes the Atiyah-Singer formula and gives a conjectured formula for the index of Fourier integral operators, as well as Epstein's relative index for CR structures.
We study the entropy of Chinese and English texts, based on characters in the case of Chinese texts and based on words for both languages. Significant differences are found between the languages and between the different personal styles of debating partners. The entropy analysis points in the direction of lower entropy, that is, of higher complexity. Such a text analysis could be applied to individuals of different styles, to a single individual at different ages, as well as to different groups of the population.
To describe the tunneling dynamics of a stack of two-dimensional fermionic superfluids in an optical potential, we derive an effective action functional from a path integral treatment. This effective action leads, in the saddle point approximation, to equations of motion for the density and the phase of the superfluid Fermi gas in each layer. In the strong coupling limit (where bosonic molecules are formed) these equations reduce to a discrete nonlinear Schrodinger equation, where the molecular tunneling amplitude is reduced for large binding energies. In the weak coupling (BCS) regime, we study the evolution of the stacked superfluids and derive an approximate analytical expression for the Josephson oscillation frequency in an external harmonic potential. Both in the weak and intermediate coupling regimes the detection of the Josephson oscillations described by our path integral treatment constitutes experimental evidence for the fermionic superfluid regime.
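For orientation, a generic form of the discrete nonlinear Schrodinger equation referred to above is (the notation and precise coefficients are illustrative rather than taken from the paper):
$$ i\hbar\,\frac{\partial \psi_n}{\partial t} = -J\left(\psi_{n+1} + \psi_{n-1}\right) + U\,|\psi_n|^{2}\,\psi_n , $$
where $\psi_n$ is the condensate amplitude in layer $n$, $J$ the (binding-energy-suppressed) molecular tunneling amplitude, and $U$ the on-site interaction strength.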
Under time-reversal symmetry, a linear charge Hall response is usually deemed to be forbidden by the Onsager relation. In this work, we discover a scenario for realizing a time-reversal even linear charge Hall effect in a non-isolated two-dimensional crystal allowed by time reversal symmetry. The restriction by Onsager relation is lifted by interfacial coupling with an adjacent layer, where the overall chiral symmetry requirement is fulfilled by a twisted stacking. We reveal the underlying band geometric quantity as the momentum-space vorticity of layer current. The effect is demonstrated in twisted bilayer graphene and twisted homobilayer transition metal dichalcogenides with a wide range of twist angles, which exhibit giant Hall ratios under experimentally practical conditions, with gate voltage controlled on-off switch. This work reveals intriguing Hall physics in chiral structures, and opens up a research direction of layertronics that exploits the quantum nature of layer degree of freedom to uncover exciting effects.
Supersymmetry (SUSY) is an attractive extension of the Standard Model, possibly solving many standing issues in particle physics and cosmology. The general-purpose ATLAS detector at the Large Hadron Collider (LHC) is an experiment capable of discovering or excluding TeV-scale SUSY. However, discovery can only be claimed when the Standard Model backgrounds are understood and under control. The expectation at the LHC is that Monte Carlo simulation predictions may not be sufficient to achieve this, and the backgrounds will have to be determined from the data itself. In this note we highlight some data-driven methods developed to estimate backgrounds and detect a possible SUSY excess.
The leading difficulty in achieving the contrast necessary to directly image exoplanets and associated structures (e.g., protoplanetary disks) at wavelengths ranging from the visible to the infrared is quasi-static speckles, which are hard to distinguish from planets at the necessary level of precision. The source of the quasi-static speckles is hardware aberrations that are not compensated by the adaptive optics system. These aberrations are called non-common path aberrations (NCPA). In 2013, Frazin showed how, in principle, simultaneous millisecond (ms) telemetry from the wavefront sensor (WFS) and the science camera behind a stellar coronagraph can be used as input into a regression scheme that simultaneously and self-consistently estimates the NCPA and the sought-after image of the planetary system (the exoplanet image). The physical principle underlying the regression method is rather simple: the wavefronts, which are measured by the WFS, modulate the speckles caused by the NCPA and therefore can be used as probes of the optical system. The most important departure from realism in the author's 2013 article was the assumption that the WFS made error-free measurements. The simulations in Part I provide results on the joint regression on the NCPA and the exoplanet image from three different methods, called the ideal, the naive, and the bias-corrected estimators. The ideal estimator is not physically realizable but is useful as a benchmark for simulation studies; the other two are realizable, at least in principle. This article provides the regression equations for all three of these estimators as well as a supporting technical discussion. Briefly, the naive estimator simply uses the noisy WFS measurements without any attempt to account for the errors, while the bias-corrected estimator uses statistical knowledge of the wavefronts to treat errors in the WFS measurements.
Over a finite field $\F_q$ the $(n,d,q)$-Reed-Muller code is the code given by evaluations of $n$-variate polynomials of total degree at most $d$ on all points (of $\F_q^n$). The task of testing if a function $f:\F_q^n \to \F_q$ is close to a codeword of an $(n,d,q)$-Reed-Muller code has been of central interest in complexity theory and property testing. The query complexity of this task is the minimal number of queries that a tester can make (minimum over all testers of the maximum number of queries over all random choices) while accepting all Reed-Muller codewords and rejecting words that are $\delta$-far from the code with probability $\Omega(\delta)$. (In this work we allow the constant in the $\Omega$ to depend on $d$.) In this work we give a new upper bound of $(c q)^{(d+1)/q}$ on the query complexity, where $c$ is a universal constant. In the process we also give new upper bounds on the "spanning weight" of the dual of the Reed-Muller code (which is also a Reed-Muller code). The spanning weight of a code is the smallest integer $w$ such that codewords of Hamming weight at most $w$ span the code.
We present a challenge set for French --> English machine translation based on the approach introduced in Isabelle, Cherry and Foster (EMNLP 2017). Such challenge sets are made up of sentences that are expected to be relatively difficult for machines to translate correctly because their most straightforward translations tend to be linguistically divergent. We present here a set of 506 manually constructed French sentences, 307 of which are targeted to the same kinds of structural divergences as in the paper mentioned above. The remaining 199 sentences are designed to test the ability of the systems to correctly translate difficult grammatical words such as prepositions. We report on the results of using this challenge set for testing two different systems, namely Google Translate and DEEPL, each on two different dates (October 2017 and January 2018). All the resulting data are made publicly available.
Within this Technical Report, we present the full analysis of 61 routing protocols for Wireless Sensor Networks (WSNs) for the purposes of routing in Payment Channel Networks (PCNs). In addition, we present the full results of the implementation of the three algorithms E-TORA, TERP, and M-DART.
The fidelity susceptibility has been used to detect quantum phase transitions in Hermitian quantum many-body systems for over a decade, where the fidelity susceptibility density approaches $+\infty$ in the thermodynamic limit. Here the fidelity susceptibility $\chi$ is generalized to non-Hermitian quantum systems by taking the geometric structure of the Hilbert space into consideration. Instead of solving the metric equation of motion from scratch, we choose a gauge in which the fidelities are composed of biorthogonal eigenstates and can be worked out algebraically or numerically away from an exceptional point (EP). Due to the geometry of the Hilbert space at an EP, we find that an EP is signaled by $\chi$ approaching $-\infty$. As examples, we investigate the simplest $\mathcal{PT}$-symmetric $2\times2$ Hamiltonian with a single tuning parameter and the non-Hermitian Su-Schrieffer-Heeger model.
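A minimal numerical sketch for the $2\times2$ $\mathcal{PT}$-symmetric example, assuming one common biorthogonal convention for the fidelity and a finite-difference estimate of $\chi$; the Hamiltonian parametrization is illustrative and the overall normalization of $\chi$ is convention dependent.
```python
import numpy as np
from scipy.linalg import eig

def H(gamma, kappa=1.0):
    """PT-symmetric 2x2 Hamiltonian (illustrative parametrization); EP at gamma = kappa."""
    return np.array([[1j * gamma, kappa], [kappa, -1j * gamma]])

def chi_biorthogonal(gamma, d=1e-4):
    """Finite-difference fidelity susceptibility for the lowest-real-part eigenstate,
    built from biorthogonal left/right eigenvectors; normalization is convention dependent."""
    def left_right(g):
        w, vl, vr = eig(H(g), left=True, right=True)
        i = int(np.argsort(w.real)[0])
        return vl[:, i], vr[:, i]
    L0, R0 = left_right(gamma)
    L1, R1 = left_right(gamma + d)
    F = (L0.conj() @ R1) * (L1.conj() @ R0) / ((L0.conj() @ R0) * (L1.conj() @ R1))
    return 2 * (1 - F.real) / d**2

for g in (0.5, 0.9, 0.99):
    print(g, chi_biorthogonal(g))   # chi grows large and negative as gamma approaches the EP
```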
In applications that involve human-robot interaction (HRI), human-robot teaming (HRT), and cooperative human-machine systems, the inference of the human partner's intent is of critical importance. This paper presents a method for the inference of the human operator's navigational intent, in the context of mobile robots that provide full or partial (e.g., shared control) teleoperation. We propose the Machine Learning Operator Intent Inference (MLOII) method, which a) processes spatial data collected by the robot's sensors; b) utilizes a supervised machine learning algorithm to estimate the operator's most probable navigational goal online. The proposed method's ability to reliably and efficiently infer the intent of the human operator is experimentally evaluated in realistically simulated exploration and remote inspection scenarios. The results in terms of accuracy and uncertainty indicate that the proposed method is comparable to another state-of-the-art method found in the literature.
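A hedged sketch of the supervised inference step described above: a classifier maps processed spatial features to a posterior over candidate navigational goals. The feature dimensionality, number of goals, and choice of a random forest are illustrative assumptions, not the specifics of MLOII.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical training set: each row is a processed spatial feature vector
# (e.g., range readings plus recent motion commands); labels are goal IDs.
X_train = rng.normal(size=(500, 24))
y_train = rng.integers(0, 3, size=500)          # 3 candidate navigational goals

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Online inference: posterior over goals for the latest sensor snapshot.
x_now = rng.normal(size=(1, 24))
probs = clf.predict_proba(x_now)[0]
print("most probable goal:", int(np.argmax(probs)), "confidence:", float(probs.max()))
```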
Potassium aluminium tetrahydride KAlH4 of high phase purity (space group Pnma (62)) was synthesized via a mechanochemical route. The thus obtained material was studied by 27Al and 39K MAS NMR spectroscopy. For both nuclei precise data for the isotropic chemical shift and the quadrupole coupling at T=295 K were derived (27Al: delta_iso=(107.6+-0.2) ppm, C_Q = (1.29+-0.02) MHz and eta = 0.64+-0.02; 39K: delta_iso=(6.1+-0.2) ppm, C_Q = (0.562+-0.005) MHz and eta = 0.74+-0.02). The straightforward NMR spectroscopic approach applied here should also work for other complex aluminium hydrides and for many other materials containing half-integer nuclei experiencing small to medium-sized quadrupole couplings.
We first prove the L^p-convergence (p\geq 1) and a Fernique-type exponential integrability of divergence functionals for all Cameron-Martin vector fields with respect to the pinned Wiener measure on loop spaces over a compact Riemannian manifold. We then prove that the Driver flow is a smooth transform on path spaces in the sense of the Malliavin calculus and has an \infty-quasi-continuous modification which can be quasi-surely well defined on path spaces. This leads us to construct the Driver flow on loop spaces through the corresponding flow on path spaces. Combining these two results with the Cruzeiro lemma [J. Funct. Anal. 54 (1983) 206-227] we give an alternative proof of the quasi-invariance of the pinned Wiener measure under Driver's flow on loop spaces which was established earlier by Driver [Trans. Amer. Math. Soc. 342 (1994) 375-394] and Enchev and Stroock [Adv. Math. 119 (1996) 127-154] by Doob's h-processes approach together with the short time estimates of the gradient and the Hessian of the logarithmic heat kernel on compact Riemannian manifolds. We also establish the L^p-convergence (p\geq 1) and a Fernique-type exponential integrability theorem for the stochastic anti-development of pinned Brownian motions on compact Riemannian manifold with an explicit exponential exponent. Our results generalize and sharpen some earlier results due to Gross [J. Funct. Anal. 102 (1991) 268-313] and Hsu [Math. Ann. 309 (1997) 331-339]. Our method does not need any heat kernel estimate and is based on quasi-sure analysis and Sobolev estimates on path spaces.
In this paper, we investigate inexact variants of dual-primal isogeometric tearing and interconnecting methods for solving large-scale systems of linear equations arising from Galerkin isogeometric discretizations of elliptic boundary value problems. The considered methods are extensions of standard finite element tearing and interconnecting methods to isogeometric analysis. The algorithms are implemented by means of energy minimizing primal subspaces. We discuss the replacement of local sparse direct solvers by iterative methods, particularly, multigrid solvers. We investigate the incorporation of these iterative solvers into different formulations of the algorithm. Finally, we present numerical examples comparing the performance of these inexact versions.
The human microbiome can contribute to the pathogenesis of many complex diseases by mediating disease-leading causal pathways. However, standard mediation analysis methods are not adequate to analyze the microbiome as a mediator because of the excessive number of zero-valued sequencing reads in the data, a problem compounded by its compositional structure. The two main challenges raised by the zero-inflated data structure are: (a) disentangling the mediation effect induced by the point mass at zero; and (b) identifying the observed zero-valued data points that are actually not zero (i.e., false zeros). We develop a novel marginal mediation analysis method under the potential-outcomes framework to fill this gap and show that the marginal model can also account for the compositional structure. The mediation effect can be decomposed into two components that are inherent to the two-part nature of zero-inflated distributions. With probabilistic models to account for observing zeros, we also address the challenge of false zeros. A comprehensive simulation study and an application to a real microbiome study showcase our approach in comparison with existing approaches.
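For orientation, the standard potential-outcomes quantities that such mediation analyses build on (written here in their generic form; the paper's contribution is the further two-part split suited to zero-inflated mediators) are
$$ \mathrm{NDE} = \mathbb{E}\big[Y(1, M(0)) - Y(0, M(0))\big], \qquad \mathrm{NIE} = \mathbb{E}\big[Y(1, M(1)) - Y(1, M(0))\big], $$
so that the total effect equals NDE + NIE, with the mediation effect then decomposed into two components reflecting the two-part structure (the point mass at zero plus the positive abundance).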
We express the action of six-dimensional supergravity in terms of four-dimensional N=1 superfields, focusing on the moduli dependence of the action. The gauge invariance of the action in the tensor-vector sector is realized in a quite nontrivial manner, and it determines the moduli dependence of the action. The resultant moduli dependence is intricate, especially on the shape modulus. Our result is reduced to the known superfield actions of six-dimensional global SUSY theories and of five-dimensional supergravity by replacing the moduli superfields with their background values and by performing the dimensional reduction, respectively.
Analytic results are presented for preheating in both flat and open models of chaotic inflation, for the case of massless inflaton decay into further inflaton quanta. It is demonstrated that preheating in both these cases closely resembles that in Minkowski spacetime. Furthermore, quantitative differences between preheating in spatially-flat and open models of inflation remain of order $10^{-2}$ for the chaotic inflation initial conditions considered here.
In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using the distributionally robust optimization (DRO) to define the loss for each individual positive data. We consider two formulations of DRO, one of which is based on conditional-value-at-risk (CVaR) that yields a non-smooth but exact estimator for pAUC, and another one is based on a KL divergence regularized DRO that yields an inexact but smooth (soft) estimator for pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms and prove their convergence for optimizing their two formulations, respectively. Experiments demonstrate the effectiveness of the proposed algorithms for pAUC maximization for deep learning on various datasets.
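A minimal sketch of a CVaR-type surrogate for one-way pAUC maximization in PyTorch, under assumptions not taken from the paper: a squared-hinge pairwise surrogate and a hard top-k selection per positive, with k corresponding to the restricted false-positive-rate range.
```python
import torch

def cvar_one_way_pauc_loss(pos_scores, neg_scores, fpr_frac=0.1):
    """CVaR-style surrogate for one-way partial AUC (sketch, not the paper's exact code):
    for each positive, keep only the hardest top-k negatives (k = fpr_frac * #negatives)."""
    k = max(1, int(fpr_frac * neg_scores.numel()))
    # pairwise squared-hinge losses, shape (n_pos, n_neg)
    margins = 1.0 - (pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0))
    pair_loss = torch.clamp(margins, min=0) ** 2
    topk_loss, _ = torch.topk(pair_loss, k, dim=1)        # hardest negatives per positive
    return topk_loss.mean()

pos = torch.randn(32, requires_grad=True)
neg = torch.randn(256, requires_grad=True)
loss = cvar_one_way_pauc_loss(pos, neg)
loss.backward()                                           # gradients flow back to the scores
print(float(loss))
```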
We study the presence of $L$-orthogonal elements in connection with Daugavet centers and narrow operators. We prove that, if $\dens(Y)\leq \omega_1$ and $G:X\longrightarrow Y$ is a Daugavet center, then, for every non-empty $w^*$-open subset $W$ of $B_{X^{**}}$, the set $G(W)$ contains some $L$-orthogonal element. In the context of narrow operators, we show that if $X$ is separable and $T:X\longrightarrow Y$ is a narrow operator, then, for every $y\in B_X$ and every non-empty $w^*$-open subset $W$ of $B_{X^{**}}$, $W$ contains some $L$-orthogonal $u$ such that $T^{**}(u)=T(y)$. In the particular case that $T^*(Y^*)$ is separable, we extend the previous result to $\dens(X)=\omega_1$. Finally, we prove that none of the previous results holds for larger density characters (in particular, a counterexample is shown for $\omega_2$ under the continuum hypothesis).
We report the results of an analysis of the redshift power spectrum $P^S(k,\mu)$ in three typical Cold Dark Matter (CDM) cosmological models, where $\mu$ is the cosine of the angle between the wave vector and the line-of-sight. Two distinct biased tracers derived from the primordial density peaks of Bardeen et al. and the cluster-underweight model of Jing, Mo, & B\"orner are considered in addition to the pure dark matter models. Based on a large set of high resolution simulations, we have measured the redshift power spectrum for the three tracers from the linear to the nonlinear regime. We investigate the validity of the relation - guessed from linear theory - in the nonlinear regime $$ P^S(k,\mu)=P^R(k)[1+\beta\mu^2]^2D(k,\mu,\sigma_{12}(k)), $$ where $P^R(k)$ is the real space power spectrum, and $\beta$ equals $\Omega_0^{0.6}/b_l$. The damping function $D$ which should generally depend on $k$, $\mu$, and $\sigma_{12}(k)$, is found to be a function of only one variable $k\mu\sigma_{12}(k)$. This scaling behavior extends into the nonlinear regime, while $D$ can be accurately expressed as a Lorentz function - well known from linear theory - for values $D > 0.1$. The difference between $\sigma_{12}(k)$ and the pairwise velocity dispersion defined by the 3-D peculiar velocity of the simulations (taking $r=1/k$) is about 15%. Therefore $\sigma_{12}(k)$ is a good indicator of the pairwise velocity dispersion. The exact functional form of $D$ depends on the cosmological model and on the bias scheme. We have given an accurate fitting formula for the functional form of $D$ for the models studied.
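For reference, the Lorentzian damping factor alluded to, in the form commonly used in the dispersion model of redshift-space distortions (the factor 1/2 follows the usual convention and is an assumption here, not quoted from the abstract), reads
$$ D(k\mu\sigma_{12}) \simeq \left[1 + \tfrac{1}{2}\,k^{2}\mu^{2}\sigma_{12}^{2}(k)\right]^{-1}, $$
which, according to the text above, is accurate wherever $D > 0.1$.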
The overconsumption by consumers amid today's increasingly scarce natural resources has overwhelmed the textile industry in middle-income countries such as Romania. It is becoming more and more essential to encourage sustainable clothing consumption behaviors, such as purchasing recyclable clothes. Nevertheless, only a limited number of studies have tried to understand the intrinsic factors that motivate consumers' purchase intention toward sustainable clothes in middle-income countries. Moreover, the effect of consumers' environmental knowledge on their purchase intention of sustainable clothes remains understudied. Consequently, the purpose of this paper is to make a significant contribution to the sustainable consumption literature by providing a consolidated framework that explores the behavioral factors inclining Romanian consumers' purchase intention towards sustainable clothes. The foundation of this study combines consumers' social value orientation and the theory of planned behavior. The partial least squares path modelling procedure was used to analyze the data of 1,018 Romanian respondents. The findings of this study show that altruistic value orientation, subjective norms, and sustainable attitudes have a positive effect on Romanian consumers' purchase intention of sustainable clothing. Thus, these insights provide essential practical implications for advocating the consumption of sustainable clothes, along with useful guidelines for practitioners in the textile industry in middle-income countries, especially Romania, to reduce overconsumption.
Motivated by recent proposals of ``collisionally inhomogeneous'' Bose-Einstein condensates (BECs), which have a spatially modulated scattering length, we study the existence and stability properties of bright and dark matter-wave solitons of a BEC characterized by a periodic, piecewise-constant scattering length. We use a ``stitching'' approach to analytically approximate the pertinent solutions of the underlying nonlinear Schr\"odinger equation by matching the wavefunction and its derivatives at the interfaces of the nonlinearity coefficient. To accurately quantify the stability of bright and dark solitons, we adapt general tools from the theory of perturbed Hamiltonian systems. We show that solitons can only exist at the centers of the constant regions of the piecewise-constant nonlinearity. We find both stable and unstable configurations for bright solitons and show that all dark solitons are unstable, with different instability mechanisms that depend on the soliton location. We corroborate our analytical results with numerical computations.
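In dimensionless units, the setting described above amounts to a nonlinear Schr\"odinger equation with a piecewise-constant nonlinearity coefficient (written here generically; the exact scalings are not taken from the abstract),
$$ i\,\psi_t = -\tfrac{1}{2}\,\psi_{xx} + g(x)\,|\psi|^{2}\psi , \qquad g(x) = g_j \ \ \text{for } x_j < x < x_{j+1}, $$
with the stitching construction imposing continuity of $\psi$ and $\psi_x$ at each interface $x_j$.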
We demonstrate coupling between the atomic spin and orbital-angular-momentum (OAM) of the atom's center-of-mass motion in a Bose-Einstein condensate (BEC). The coupling is induced by Raman-dressing lasers with a Laguerre-Gaussian beam, and creates coreless vortices in a $F=1$ $^{87}$Rb spinor BEC. We observe correlations between spin and OAM in the dressed state and characterize the spin texture; the result is in good agreement with the theory. In the presence of the Raman field our dressed state is stable for 0.1~s or longer, and it decays due to collision-induced relaxation. As we turn off the Raman beams, the vortex cores in the bare spin $|m_F=1\rangle$ and $|-1\rangle$ split. These spin-OAM coupled systems with the Raman-dressing approach have great potential for exploring new topological textures and quantum states.
We compute the renormalization mismatch displayed in 1--loop approximation by classically equivalent 4-quark operators and coming from different possible definitions of the $\gamma_5$ matrix in dimensional regularization. The result is then employed to study the effect of the various treatments of $\gamma_5$ upon the size of radiative corrections to 4-quark condensates in the QCD sum rules for $\rho$ and $A_1$ mesons. We find that a fully anticommuting $\gamma_5$ which automatically respects non-anomalous chiral Ward-Slavnov identities leads to considerably smaller corrections and reduces theoretical uncertainty in the QCD prediction for the $\tau$ hadronic decay rate.
In this paper we propose an Intelligent Management System which is capable of managing automobile functions using rigorous real-time principles and a multicore processor in order to realize higher efficiency and safety for the vehicle. It depicts how various automobile functions can be decomposed into fine-grained tasks and treated to fit real-time concepts. It also shows how modern multicore processors can be used to organize large numbers of correlated functions for execution in real time with strict timing commitments. The modeling of the automobile tasks with real-time commitments, the organization of appropriate scheduling for the various real-time tasks, and the usage of a multicore processor enable the system to realize higher efficiency and offer better safety levels for the vehicle. An industry-available real-time operating system is used for scheduling the various tasks and jobs on the multicore processor.
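As an illustration of the kind of schedulability reasoning such a design relies on (a classical test, not taken from the paper), the Liu and Layland rate-monotonic utilization bound for a set of periodic tasks on a single core can be sketched as follows; the task set is hypothetical.
```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test for rate-monotonic scheduling on a single core.
    tasks: list of (worst_case_execution_time, period) pairs in the same time unit."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

# Hypothetical periodic automobile tasks (execution time, period) in milliseconds.
tasks = [(2, 10),    # e.g. sensor polling
         (5, 40),    # e.g. engine-control update
         (10, 100)]  # e.g. dashboard refresh
ok, u, b = rm_schedulable(tasks)
print(f"utilization={u:.3f}, bound={b:.3f}, schedulable={ok}")
```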
We prove equivalences of derived categories for the various mirrors in the Batyrev-Borisov construction. In particular, we obtain a positive answer to a conjecture of Batyrev and Nill. The proof involves passing to an associated category of singularities and toric variation of geometric invariant theory quotients.
This survey explores the literature on game-theoretic models of network formation under the hypothesis of mutual consent in link formation. The introduction of consent in link formation imposes a coordination problem in the network formation process. This survey explores the conclusions from this theory and the various methodologies to avoid the main pitfalls. The main insight originates from Myerson's work on mutual consent in link formation and his main conclusion that the empty network (the network without any links) always emerges as a strong Nash equilibrium in any game-theoretic model of network formation under mutual consent and positive link formation costs. Jackson and Wolinsky introduced a cooperative framework to avoid this main pitfall. They devised the notion of a pairwise stable network to arrive at equilibrium networks that are mainly non-trivial. Unfortunately, this notion of pairwise stability requires coordinated action by pairs of decision makers in link formation. I survey the possible solutions in a purely non-cooperative framework of network formation under mutual consent by exploring potential refinements of the standard Nash equilibrium concept to explain the emergence of non-trivial networks. This includes the notions of unilateral and monadic stability. The first one is founded on advanced rational reasoning of individuals about how others would respond to one's efforts to modify the network. The latter incorporates trusting, boundedly rational behaviour into the network formation process. The survey is concluded with an initial exploration of external correlation devices as an alternative framework to address mutual consent in network formation.
This paper presents a one-shot analysis of the lossy compression problem under average distortion constraints. We calculate the exact expected distortion of a random code. The result is given as an integral formula using a newly defined functional $\tilde{D}(z,Q_Y)$, where $Q_Y$ is the random coding distribution and $z\in [0,1]$. When we plug the code distribution in as $Q_Y$, this functional produces the average distortion of the code, thus providing a converse result utilizing the same functional. Two alternative formulas are provided for $\tilde{D}(z,Q_Y)$: the first involves a supremum over an auxiliary distribution $Q_X$ and bears resemblance to the channel coding meta-converse, while the other involves an infimum over channels and resembles the well-known Shannon distortion-rate function.
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.