We show that the global minimum solution of $\lVert A - BXC \rVert$ can be
found in closed-form with singular value decompositions and generalized
singular value decompositions for a variety of constraints on $X$ involving
rank, norm, symmetry, two-sided product, and prescribed eigenvalue. This
extends the solution of Friedland--Torokhti for the generalized
rank-constrained approximation problem to other constraints and provides
an alternative solution for rank constraint in terms of singular value
decompositions. For more complicated constraints on $X$ involving structures
such as Toeplitz, Hankel, circulant, nonnegativity, stochasticity, positive
semidefiniteness, prescribed eigenvector, etc., we prove that a simple iterative
method is linearly and globally convergent to the global minimum solution.
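In the unconstrained case the global minimizer has the well-known closed form $X = B^{+}AC^{+}$ in terms of Moore--Penrose pseudoinverses (themselves computed via SVDs), which the constrained constructions above build upon. A minimal numerical check, with random matrices of illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical small instance: A is 6x5, B is 6x3, C is 4x5.
A = rng.standard_normal((6, 5))
B = rng.standard_normal((6, 3))
C = rng.standard_normal((4, 5))

# Unconstrained Frobenius-norm minimizer of ||A - B X C||: X = B^+ A C^+,
# computed here via SVD-based pseudoinverses.
X = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)

# Verify first-order optimality: the residual is orthogonal to the range
# of the map X -> B X C, i.e. B^T (A - B X C) C^T = 0.
R = A - B @ X @ C
print(np.allclose(B.T @ R @ C.T, 0))
```

The script verifies the normal equations $B^{\mathsf T}(A - BXC)C^{\mathsf T} = 0$ hold exactly at this $X$.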
|
Ubiquitous computing environments are characterised by smart, interconnected
artefacts embedded in our physical world that are projected to provide useful
services to human inhabitants unobtrusively. Mobile devices are becoming the
primary tools of human interaction with these embedded artefacts and
utilisation of services available in smart computing environments such as
clouds. Advancements in capabilities of mobile devices allow a number of user
and environment related context consumers to be hosted on these devices.
Without a coordinating component, these context consumers and providers are a
potential burden on device resources; specifically the effect of uncoordinated
computation and communication with cloud-enabled services can negatively impact
the battery life. Therefore, energy conservation is a major concern in realising
the collaboration and utilisation of mobile device based context-aware
applications and cloud based services. This paper presents the concept of a
context-brokering component to aid in coordination and communication of context
information between mobile devices and services deployed in a cloud
infrastructure. A prototype context broker is experimentally analysed for
effects on energy conservation when accessing and coordinating with cloud
services on a smart device, with results indicating a reduction in energy
consumption.
|
Due to their larger mass and earlier production, heavy quarks (and quarkonia)
can serve as sensitive probes of the fast-decaying electromagnetic and vortical
fields produced in heavy-ion collisions. The non-relativistic Schrödinger-like
equation for heavy quarks under strong electromagnetic fields in the rotating
frame is deduced and used to construct the two-body equation for the charmonium
system. The effective potential between charm and anti-charm becomes
anisotropic in electromagnetic and vortical fields, especially along the
direction of the Lorentz force. Vorticity strongly affects this anisotropy and
catalyzes the transition from a bound state dominated by the strong interaction
to an anisotropic bound state controlled by the electromagnetic and vortical
interactions. This transition may be realized in high-energy nuclear collisions.
|
Anaphoric expressions, such as pronouns and referential descriptions, are
situated with respect to the linguistic context of prior turns, as well as the
immediate visual environment. However, a speaker's referential descriptions do
not always uniquely identify the referent, leading to ambiguities in need of
resolution through subsequent clarificational exchanges. Thus, effective
Ambiguity Detection and Coreference Resolution are key to task success in
Conversational AI. In this paper, we present models for these two tasks as part
of the SIMMC 2.0 Challenge (Kottur et al. 2021). Specifically, we use TOD-BERT
and LXMERT based models, compare them to a number of baselines and provide
ablation experiments. Our results show that (1) language models are able to
exploit correlations in the data to detect ambiguity; and (2) unimodal
coreference resolution models can avoid the need for a vision component,
through the use of smart object representations.
|
We consider the role of the velocity in Lorentz-violating fermionic quantum
theory, especially emphasizing the nonrelativistic regime. Information about
the velocity will be important for the kinematical analysis of scattering and
other problems. Working within the minimal standard model extension, we derive
new expressions for the velocity. We find that generic momentum and spin
eigenstates may not have well-defined velocities. We also demonstrate how
several different techniques may be used to shed light on different aspects of
the problem. A relativistic operator analysis allows us to study the behavior
of the Lorentz-violating Zitterbewegung. Alternatively, by studying the time
evolution of Gaussian wave packets, we find that there are Lorentz-violating
modifications to the wave packet spreading and the spin structure of the wave
function.
|
In this paper, we establish continuous bilinear decompositions that arise in
the study of products between elements in martingale Hardy spaces $ H^p\
(0<p\leqslant 1) $ and functions in their dual spaces. Our decompositions are
based on martingale paraproducts. As a consequence of our work, we also obtain
analogous results for dyadic martingales on spaces of homogeneous type equipped
with a doubling measure.
|
A stochastic cellular automata (CA) model for pedestrian dynamics is
presented. Our goal is to simulate different types of pedestrian movement, from
regular motion to panic. Here we emphasize regular situations, in which
pedestrians analyze their environment and choose their routes carefully; the
transition probabilities have to capture this behavior. The model combines
floor-field potentials with environment analysis, and pedestrian patience is
also included, which makes the simulated movement more realistic. Some
simulation results are presented and a comparison with the basic floor-field
(FF) model is made.
|
Extracting classical information from quantum systems is of fundamental
importance, and classical shadows allow us to extract a large amount of
information using relatively few measurements. Conventional shadow estimators
are unbiased and thus approach the true mean in the infinite-sample limit. In
this work, we consider a biased scheme and show that intentionally introducing
a bias, by rescaling the conventional classical shadow estimators, can reduce
the error in the finite-sample regime. The approach is straightforward to
implement and requires no quantum resources. We analytically treat the average
case as well as worst- and best-case scenarios, and rigorously prove that it is, in principle,
always worth biasing the estimators. We illustrate our approach in a quantum
simulation task of a $12$-qubit spin-ring problem and demonstrate how
estimating expected values of non-local perturbations can be significantly more
efficient using our biased scheme.
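The underlying bias-variance tradeoff can be illustrated with a generic shrinkage toy model; the oracle scale factor below assumes the mean and variance are known and is a textbook choice, not the rescaling derived in the paper:

```python
import numpy as np

# Toy illustration: a rescaled (biased) estimator beats the unbiased mean
# at finite sample size. All numbers are illustrative.
rng = np.random.default_rng(1)
mu, sigma, n_shots, n_trials = 1.0, 3.0, 10, 20000

samples = rng.normal(mu, sigma, size=(n_trials, n_shots))
mean_est = samples.mean(axis=1)            # conventional unbiased estimator

# Shrink toward zero: c = mu^2 / (mu^2 + sigma^2/n) minimizes the MSE of
# c * mean when mu and sigma are known (oracle shrinkage weight).
c = mu**2 / (mu**2 + sigma**2 / n_shots)
biased_est = c * mean_est

mse_unbiased = np.mean((mean_est - mu) ** 2)
mse_biased = np.mean((biased_est - mu) ** 2)
print(mse_biased < mse_unbiased)           # shrinkage wins at finite samples
```

In the infinite-sample limit $c \to 1$ and the two estimators coincide, matching the intuition that biasing only pays off in the finite-sample regime.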
|
Many computer vision and medical imaging problems are faced with learning
from large-scale datasets, with millions of observations and features. In this
paper we propose a novel efficient learning scheme that tightens a sparsity
constraint by gradually removing variables based on a criterion and a schedule.
The attractive fact that the problem size keeps dropping throughout the
iterations makes it particularly suitable for big data learning. Our approach
applies generically to the optimization of any differentiable loss function,
and finds applications in regression, classification and ranking. The resultant
algorithms build variable screening into estimation and are extremely simple to
implement. We provide theoretical guarantees of convergence and selection
consistency. In addition, one-dimensional piecewise linear response functions
are used to account for nonlinearity, and a second-order prior is imposed on
these functions to avoid overfitting. Experiments on real and synthetic data
show that the proposed method compares very well with other state-of-the-art
methods in regression, classification and ranking while being computationally
very efficient and scalable.
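A minimal sketch of the idea, gradient steps on a least-squares loss with a gradually tightened sparsity constraint; the magnitude criterion and the shrinking schedule below are illustrative stand-ins for the paper's:

```python
import numpy as np

# Sparse regression with a shrinking support schedule: variables removed
# at each stage stay out, so the effective problem size keeps dropping.
rng = np.random.default_rng(0)
n, p, k_true = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:k_true] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = np.zeros(p)
lr = 1.0 / np.linalg.norm(X, 2) ** 2     # safe step for least squares
for k in [40, 25, 15, 10, 5]:            # illustrative support schedule
    for _ in range(100):                 # gradient steps at current size
        beta -= lr * X.T @ (X @ beta - y)
    keep = np.argsort(np.abs(beta))[-k:]          # screening criterion
    mask = np.zeros(p, dtype=bool); mask[keep] = True
    beta[~mask] = 0.0
    X = X * mask                         # removed columns are zeroed out

print(list(np.flatnonzero(beta)))        # surviving variables
```

With a strong signal the surviving support coincides with the true one; the same loop applies to any differentiable loss by swapping the gradient.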
|
We present a model-independent method of quantifying galaxy evolution in
high-resolution images, which we apply to the Hubble Deep Field (HDF). Our
procedure is to k-correct the pixels belonging to the images of a complete set
of bright galaxies and then to replicate each galaxy image to higher redshift
by the product of its space density, 1/V_{max}, and the cosmological volume.
The set of bright galaxies is itself selected from the HDF because presently
the HDF provides the highest quality UV images of a redshift-complete sample of
galaxies (31 galaxies with I<21.9, \bar{z}=0.5, and for which V/V_{max} is
spread fairly). These galaxies are bright enough to permit accurate
pixel-by-pixel k-corrections into the restframe UV (\sim 2000 A). We match the
shot noise, spatial sampling and PSF smoothing of the HDF, resulting in
entirely empirical and parameter free ``no-evolution'' deep fields of galaxies
for direct comparison with the HDF. We obtain the following results. Faint HDF
galaxies (I>24) are much smaller, more numerous, and less regular than our
``no-evolution'' extrapolation, for any relevant geometry. A higher proportion
of HDF galaxies ``dropout'' in both U and B, indicating that some galaxies were
brighter at higher redshifts than our ``cloned'' z\sim0.5 population. By simple
image transformations we demonstrate that bolometric luminosity evolution
generates galaxies which are too large and the contribution of any evolving
dwarf population is uninterestingly small. A plausible fit is provided by
`mass-conserving' density-evolution, consistent with hierarchical growth of
small-scale structure. Finally, we show the potential for improvement using the
Advanced Camera, with its superior UV and optical performance.
|
We report field- and current-induced domain wall (DW) depinning experiments
in Ta/Co20Fe60B20/MgO nanowires through a Hall cross geometry. While purely
field-induced depinning shows no angular dependence on in-plane fields, the
effect of the current depends crucially on the internal DW structure, which we
manipulate by an external in-plane magnetic field. We present for the first time
depinning measurements for a current sent parallel to the DW and compare its
depinning efficiency with the conventional case of current flowing
perpendicularly to the DW. We find that the maximum efficiency is similar for
both current directions within the error bars, which is in line with a
dominating damping-like spin-orbit torque (SOT) and indicates that no large
additional torques arise for currents parallel to the DW. Finally, we find a
varying dependence of the maximum depinning efficiency angle for different DWs
and pinning levels. This emphasizes the importance of our full angular scans
compared to previously used measurements for just two field directions
(parallel and perpendicular to the DW) and shows the sensitivity of the
spin-orbit torque to the precise DW structure and pinning sites.
|
In the first moments of a relativistic heavy-ion collision, explosive
collective flow begins to grow before the matter has equilibrated. Here it
is found that as long as the stress-energy tensor is traceless, early flow is
independent of whether the matter is composed of fields or particles,
equilibrated or not, or whether the stress-energy tensor is isotropic. This
eliminates much of the uncertainty in modeling early stages of a collision.
|
Using the Southeastern Association for Research in Astronomy 0.6 meter
telescope located at Cerro Tololo, we searched for variable stars in the
southern globular cluster NGC 6584. We obtained images during 8 nights between
28 May and 6 July of 2011. After processing the images, we used the image
subtraction package ISIS developed by Alard (2000) to search for the variable
stars. We identified a total of 69 variable stars in our 10x10 arcmin^2 field,
including 43 variables cataloged by Millis & Liller (1980) and 26 hitherto
unknown variables. In total, we classified 46 of the variables as type RRab,
with a mean period of 0.56776 days, 15 as type RRc with a mean period of
0.30886 days, perhaps one lower amplitude type RRe, with a period of 0.26482
days, 4 eclipsing binaries, and 3 long period (P > 2 days) variable stars. As
many as 15 of the RRab stars exhibited the Blazhko effect. Furthermore,
the mean periods of the RR Lyrae types, the exhibited period/amplitude
relationship, and the ratio of N_c/(N_ab+N_c) of 0.25 are consistent with an
Oosterhoff Type I cluster. Here we present refined periods, V-band light
curves, and classifications for each of the 69 variables, as well as a
color-magnitude diagram of the cluster.
|
We study varieties $\mathcal{A}_n$ arising as equivariant compactifications
of the space of $n$ points in $\mathbb{C}$ up to overall translation. We define
$\mathcal{A}_n$ and examine its basic geometric properties before constructing
an isomorphism to an augmented wonderful variety. We show that $\mathcal{A}_n$
is in a canonical way a resolution of the space $\overline{P}_n$ considered by
Zahariuc, proving along the way that the resolution constructed by Zahariuc is
equivalent to ours.
|
We focus on the confinement of two-dimensional Dirac fermions within the
waveguides created by realistic magnetic fields. Understanding their band
structure is our main concern. We provide easily applicable criteria, mostly
depending only on the asymptotic behavior of the magnetic field, that can
guarantee the existence or absence of energy bands and provide valuable insight
into systems where an analytical solution is impossible. The general results
are employed in specific systems where the waveguide is created by the magnetic
field of a set of electric wires or magnetized strips.
|
Today, the technology for video streaming over the Internet is converging
towards a paradigm named HTTP-based adaptive streaming (HAS). HAS comes with
two unique flavors. First, by riding on top of HTTP/TCP, it leverages the
network-friendly TCP to achieve firewall/NAT traversal and bandwidth sharing.
Second, by pre-encoding and storing the video in a number of discrete bitrate
levels, it introduces video bitrate adaptivity in a scalable way, with the
video encoding excluded from the closed-loop adaptation. The conventional
wisdom is that the TCP throughput observed by a HAS client indicates the
available network bandwidth and thus can be used as a reliable reference for the video
bitrate selection.
We argue that this no longer holds true when HAS becomes a substantial
fraction of the Internet traffic. We show that when multiple HAS clients
compete at a network bottleneck, the presence of competing clients and the
discrete nature of the video bitrates would together create confusion for a
client to correctly perceive its fair-share bandwidth. Through analysis and
real experiments, we demonstrate that this fundamental limitation would lead
to, for example, video rate oscillation that negatively impacts the video
watching experiences. We therefore argue that it is necessary to implement at
the application layer a "probe-and-adapt" mechanism for HAS video rate
adaptation, which is akin, but orthogonal, to the transport-layer network rate
adaptation achieved by TCP. We present PANDA -- a client-side rate adaptation
algorithm for HAS -- as an embodiment of this idea. Our testbed results show
that compared to conventional algorithms, PANDA is able to reduce the
instability of video rate by 60%, for a given risk of buffer underrun.
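A much-simplified probe-and-adapt loop in the spirit of PANDA: the client additively probes for bandwidth, backs off in proportion to the shortfall it actually observes, and quantizes to the discrete bitrate ladder. The constants, ladder, and throughput model are all illustrative, not the paper's tuned parameters:

```python
# Simplified probe-and-adapt sketch (illustrative constants throughout).
BITRATES = [300, 750, 1500, 3000, 6000]   # kbps ladder

def probe_and_adapt(target, measured, w=300.0, kappa=0.5, margin=0.85):
    # Additive probe of w kbps, damped by the observed shortfall: the
    # target rate rises until the network pushes back.
    shortfall = max(0.0, target - measured)
    target = target + kappa * (w - shortfall)
    # Pick the highest pre-encoded level the margin-scaled target sustains.
    level = max([b for b in BITRATES if b <= margin * target],
                default=BITRATES[0])
    return target, level

target = 300.0
for _ in range(30):                        # fair share modeled as 2000 kbps
    target, level = probe_and_adapt(target, measured=min(target, 2000.0))
print(level)
```

Because the adjustment reacts to the measured shortfall rather than raw TCP throughput, the target settles at a fixed point slightly above the fair share instead of oscillating between ladder levels.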
|
In the present paper, we consider the problem of matrix completion with
noise. Unlike previous works, we consider quite general sampling distribution
and we do not need to know or to estimate the variance of the noise. Two new
nuclear-norm penalized estimators are proposed, one of them of "square-root"
type. We analyse their performance under high-dimensional scaling and provide
non-asymptotic bounds on the Frobenius norm error. Up to a logarithmic factor,
these performance guarantees are minimax optimal in a number of circumstances.
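As a point of reference, the classic soft-impute iteration for nuclear-norm penalized completion (singular-value soft-thresholding with observed entries held fixed) can be sketched as follows; the estimators in the paper, including the square-root variant, refine this basic scheme:

```python
import numpy as np

# Soft-impute sketch on a synthetic low-rank matrix (sizes illustrative).
rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = rng.random((n, n)) < 0.5                                # observed set
Y = np.where(mask, M + 0.01 * rng.standard_normal((n, n)), 0.0)

def soft_impute(Y, mask, lam=0.1, iters=200):
    Z = np.zeros_like(Y)
    for _ in range(iters):
        # Fill unobserved entries with the current estimate, then apply the
        # proximal step for the nuclear norm: shrink singular values by lam.
        U, s, Vt = np.linalg.svd(np.where(mask, Y, Z), full_matrices=False)
        Z = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
    return Z

Z = soft_impute(Y, mask)
print(round(float(np.linalg.norm(Z - M) / np.linalg.norm(M)), 3))
```

The relative Frobenius error is small here because the sampling is uniform and the noise level known; the paper's estimators target general sampling distributions with unknown noise variance.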
|
String theory, if it describes nature, is probably strongly coupled. In light
of recent developments in string duality, this means that the ``real world''
should correspond to a region of the classical moduli space which admits no
weak coupling description. We exhibit, in the heterotic string, one such region
of the moduli space, in which the coupling, $\lambda$, is large and the
``compactification radius'' scales as $\lambda^{1/3}$. We discuss some of the
issues raised by the conjecture that the true vacuum lies in such a region.
These include the question of coupling constant unification, and more generally
the problem of what quantities one might hope to calculate and compare with
experiment in such a picture.
|
This article treats various aspects of the geometry of the moduli of r-spin
curves and its compactification. Generalized spin curves, or r-spin curves, are
a natural generalization of 2-spin curves (algebraic curves with a
theta-characteristic), and have been of interest lately because of the
similarities between the intersection theory of these moduli spaces and that of
the moduli of stable maps. In particular, these spaces are the subject of a
remarkable conjecture of E. Witten relating their intersection theory to the
Gelfand-Dikii (rth KdV) hierarchy. There is also a W-algebra conjecture for
these spaces, analogous to the Virasoro conjecture of quantum cohomology.
We construct a smooth compactification of the stack of smooth r-spin curves,
describe the geometric meaning of its points, and prove that it is projective.
We also prove that when r is odd and g>1, the compactified stack of spin curves
and its coarse moduli space are irreducible, and when r is even and g>1, the
stack is the disjoint union of two irreducible components. We give similar
results for n-pointed spin curves, as required for Witten's conjecture, and
also generalize to the n-pointed case the classical fact that when g=1, the
moduli of r-spin curves is the disjoint union of d(r) components, where d(r) is
the number of positive divisors of r. These irreducibility properties are
important in the study of the Picard group of the stack, and also in the study
of the cohomological field theory related to Witten's conjecture (see
math.AG/9905034).
|
The anomalous magnetic moment of the muon has recently been measured to be in
conflict with the Standard Model prediction with an excess of 2.6 sigma. Taking
this result as a measurement of the supersymmetric contribution, we find that
at 95% confidence level it imposes an upper bound of about 500 GeV on the
neutralino mass and forbids higgsino dark matter. More interestingly, it
predicts an accessible lower bound on the direct detection rate, and it
strongly favors models detectable by neutrino telescopes. Cosmic ray
antideuterons may also be an interesting probe of such models.
|
Using a simplified model of cascade pair creation over pulsar polar caps
presented in two previous papers, we investigate the expected gamma-ray output
from pulsars' low altitude particle acceleration and pair creation regions. We
divide pulsars into several categories, based on which mechanism truncates the
particle acceleration off the polar cap, and give estimates for the expected
luminosity of each category.
We find that inverse Compton scattering above the pulsar polar cap provides
the primary gamma rays which initiate the pair cascades in most pulsars. This
reduces the expected $\gamma$-ray luminosity below previous estimates which
assumed curvature gamma ray emission was the dominant initiator of pair
creation in all pulsars.
|
The need for improved engine efficiencies has motivated the development of
high-pressure combustion systems, in which operating conditions achieve and
exceed critical conditions. Associated with these conditions are strong
variations in thermo-transport properties as the fluid undergoes phase
transition, and two-stage ignition with low-temperature combustion. Accurately
simulating these physical phenomena in real-fluid environments remains a
challenge. To address this issue, a high-fidelity LES modeling framework is
developed to conduct simulations of transcritical fuel spray mixing and
auto-ignition at high-pressure conditions. The simulation is based on a
recently developed diffused interface method that solves the compressible
multi-species conservation equations along with the Peng-Robinson equation of state
and real-fluid transport properties. LES analysis is performed for non-reacting
and reacting spray conditions targeting the ECN Spray A configuration at
chamber conditions with a pressure of 60 bar and temperatures between 900 K and
1200 K to investigate effects of the real-fluid environment and low-temperature
chemistry. Comparisons with measurements in terms of global spray parameters
(i.e., liquid and vapor penetration lengths) are shown to be in good agreement.
Analysis of the mixture fraction distributions in the dispersed spray region
demonstrates the accuracy in modelling the turbulent mixing behavior. Good
agreement of the ignition delay time and the lift-off length is obtained from
simulation results at different ambient temperature conditions and the
formation of intermediate species is captured by the simulations, indicating
that the presented numerical framework adequately reproduces the corresponding
low- and high-temperature ignition processes under high-pressure conditions,
which are relevant to realistic diesel-fuel injection systems.
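The Peng-Robinson equation of state at the core of such real-fluid solvers is straightforward to evaluate pointwise; the sketch below uses standard nitrogen critical constants and checks the near-ideal low-density limit:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, pc, omega):
    """Pressure from the Peng-Robinson EOS given T and molar volume v."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Nitrogen critical properties (Tc in K, pc in Pa, acentric factor).
Tc, pc, omega = 126.2, 3.3958e6, 0.0372
p = peng_robinson_pressure(300.0, 0.024, Tc, pc, omega)
# At low density the compressibility factor Z = pv/RT approaches 1.
print(abs(p * 0.024 / (R * 300.0) - 1.0) < 0.01)
```

Near and above the critical point the attractive term dominates and $Z$ departs strongly from unity, which is the regime the transcritical simulations must resolve.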
|
Deep learning techniques, namely convolutional neural networks (CNN), have
previously been adapted to select gamma-ray events in the TAIGA experiment,
having achieved a good quality of selection as compared with the conventional
Hillas approach. Another important task for the TAIGA data analysis was also
solved with CNN: gamma-ray energy estimation showed some improvement in
comparison with the conventional method based on the Hillas analysis.
Furthermore, our software was completely redeveloped for the graphics
processing unit (GPU), which led to significantly faster calculations in both
of these tasks. All the results have been obtained with simulated data from the
TAIGA Monte Carlo software; their experimental confirmation is envisaged for
the near future.
|
We consider the Born and inverse Born series for scalar waves with a cubic
nonlinearity of Kerr type. We find a recursive formula for the operators in the
Born series and prove their boundedness. This result gives conditions which
guarantee convergence of the Born series, and subsequently yields conditions
which guarantee convergence of the inverse Born series. We also use fixed point
theory to give alternate explicit conditions for convergence of the Born
series. We illustrate our results with numerical experiments.
|
Multi-antenna, or multiple-input multiple-output (MIMO), techniques can
significantly improve the efficiency of radio frequency (RF) signal enabled
wireless energy transfer (WET). To fully exploit the energy beamforming gain at
the energy transmitter (ET), the knowledge of channel state information (CSI)
is essential, which, however, is difficult to obtain in practice due to
the hardware limitation of the energy receiver (ER). To overcome this
difficulty, under a point-to-point MIMO WET setup, this paper proposes a
general design framework for a new type of channel learning method based on the
ER's energy measurement and feedback. Specifically, the ER measures and encodes
the harvested energy levels over different training intervals into bits, and
sends them to the ET via a feedback link of limited rate. Based on the
energy-level feedback, the ET adjusts transmit beamforming in subsequent
training intervals and obtains refined estimates of the MIMO channel by
leveraging the technique of analytic center cutting plane method (ACCPM) in
convex optimization. Under this general design framework, we further propose
two specific feedback schemes termed energy quantization and energy comparison,
where the feedback bits at each interval are generated at the ER by quantizing
the measured energy level at the current interval and comparing it with those
in the previous intervals, respectively. Numerical results are provided to
compare the performance of the two feedback schemes. It is shown that energy
quantization performs better when the number of feedback bits per interval is
large, while energy comparison is more effective with a small number of
feedback bits.
|
Interfaces formed by correlated oxides offer a critical avenue for
discovering emergent phenomena and quantum states. However, the fabrication of
oxide interfaces with variable crystallographic orientations and strain states
integrated along a film plane is extremely challenging with conventional
layer-by-layer stacking or self-assembly. Here, we report the creation of
morphotropic grain boundaries (GBs) in laterally interconnected cobaltite
homostructures. Single-crystalline substrates and suspended ultrathin
freestanding membranes provide independent templates for coherent epitaxy and
constraint on the growth orientation, resulting in seamless and atomically
sharp GBs. Electronic states and magnetic behavior in hybrid structures are
laterally modulated and isolated by GBs, enabling artificially engineered
functionalities in the planar matrix. Our work offers a simple and scalable
method for fabricating innovative interfaces through controlled synthesis
routes and provides a platform for exploring potential
applications in neuromorphics, solid state batteries, and catalysis.
|
A proof of the following theorem is given, answering an open problem
attributed to Kunen: suppose that $T$ is compact and that $Y$ is the image of
$X$ under a perfect map, $X$ is normal, and $Y\times T$ is normal. Then $X
\times T$ is normal.
|
Preparation of a specific quantum state is a required step for a variety of
proposed practical uses of quantum dynamics. We report an experimental
demonstration of optical quantum state preparation in a semiconductor quantum
dot with electrical readout, which contrasts with earlier work based on Rabi
flopping in that the method is robust with respect to variation in the optical
coupling. We use adiabatic rapid passage, which is capable of inverting single
dots to a specified upper level. We demonstrate that when the pulse power
exceeds a threshold for inversion, the final state is independent of power.
This provides a new tool for preparing quantum states in semiconductor dots and
has a wide range of potential uses.
|
By analogy to the different accretion states observed in black-hole X-ray
binaries (BHXBs), it appears plausible that accretion disks in active galactic
nuclei (AGN) undergo a state transition between a radiatively efficient and
inefficient accretion flow. If the radiative efficiency changes at some
critical accretion rate, there will be a change in the distribution of black
hole masses and bolometric luminosities at the corresponding transition
luminosity. To test this prediction, I consider the joint distribution of AGN
black hole masses and bolometric luminosities for a sample taken from the
literature. The small number of objects with low Eddington-scaled accretion
rates mdot < 0.01 and black hole masses Mbh < 10^9 Msun constitutes tentative
evidence for the existence of such a transition in AGN. Selection effects, in
particular those associated with flux-limited samples, systematically exclude
objects in particular regions of the black hole mass-luminosity plane.
Therefore, they require particular attention in the analysis of distributions
of black hole mass, bolometric luminosity, and derived quantities like the
accretion rate. I suggest further observational tests of the BHXB-AGN
unification scheme which are based on the jet domination of the energy output
of BHXBs in the hard state, and on the possible equivalence of BHXBs in the very
high (or "steep power-law") state showing ejections and efficiently accreting
quasars and radio galaxies with powerful radio jets.
|
Classification of time series signals has become an important construct and
has many practical applications. With existing classifiers we may be able to
accurately classify signals; however, that accuracy may decline if a reduced
number of attributes is used. Transforming the data and then reducing its
dimensionality may improve the quality of the data analysis, decrease the time
required for classification, and simplify models. We propose an approach that
chooses suitable wavelets to transform the data, then combines the output from
these transforms to construct a dataset to which ensemble classifiers are then applied.
We demonstrate this on different data sets, across different classifiers and
use differing evaluation methods. Our experimental results demonstrate the
effectiveness of the proposed technique, compared to the approaches that use
either raw signal data or a single wavelet transform.
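The transform step can be sketched with a single-level Haar DWT, which halves the attribute count while preserving signal energy; combining several wavelets and the ensemble classifier itself are omitted here:

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (trend)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (fluctuation)
    return approx, detail

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(signal)
# The transform is orthonormal, so energy is preserved; dimensionality
# reduction comes from keeping only the approximation coefficients.
print(np.isclose(np.sum(a**2) + np.sum(d**2), np.sum(signal**2)))
```

Keeping only `a` gives a half-length attribute vector per signal; stacking such vectors from different wavelets yields the combined dataset fed to the ensemble.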
|
Today's cloud storage services must offer storage reliability and fast data
retrieval for large amounts of data without sacrificing storage cost. We present
SEARS, a cloud-based storage system which integrates erasure coding and data
deduplication to support efficient and reliable data storage with fast user
response time. With proper association of data to storage server clusters,
SEARS provides flexible mixing of different configurations, suitable for
real-time and archival applications.
Our prototype implementation of SEARS over Amazon EC2 shows that it
outperforms existing storage systems in storage efficiency and file retrieval
time. For 3 MB files, SEARS delivers retrieval time of $2.5$ s compared to $7$
s with existing systems.
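The two storage primitives SEARS integrates can be sketched in a toy form: content-hash deduplication plus a single XOR parity block standing in for a full (k, m) erasure code such as Reed-Solomon. All names and chunk sizes are illustrative:

```python
import hashlib

store = {}

def put(chunk: bytes) -> str:
    # Content-hash deduplication: identical chunks are stored only once.
    key = hashlib.sha256(chunk).hexdigest()
    store.setdefault(key, chunk)
    return key

def xor_parity(chunks):
    # Single XOR parity block: recovers any one lost chunk of the group.
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return parity

data = [b"blockA__", b"blockB__", b"blockA__"]   # note the duplicate
keys = [put(c) for c in data]
parity = xor_parity([store[k] for k in dict.fromkeys(keys)])

# Dedup: three logical blocks map to two physical ones.
print(len(keys), len(store))
# Erasure: XOR parity with the surviving block recovers the lost one.
print(bytes(a ^ b for a, b in zip(parity, b"blockB__")) == b"blockA__")
```

A real deployment would shard the deduplicated chunks across server clusters and use a configurable (k, m) code per application class, as SEARS does.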
|
Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to
efficiently execute Deep Neural Network (DNN) inference workloads. However, to
mitigate accuracy losses, due to circuit and device non-idealities,
Hardware-Aware (HWA) training methodologies must be employed. These typically
require significant information about the underlying hardware. In this paper,
we propose two Post-Training (PT) optimization methods to improve accuracy
after training is performed. For each crossbar, the first optimizes the
conductance range of each column, and the second optimizes the input, i.e.,
Digital-to-Analog Converter (DAC), range. It is demonstrated that, when these
methods are employed, the complexity during training, and the amount of
information about the underlying hardware can be reduced, with no notable
change in accuracy ($\leq$0.1%) when finetuning the pretrained RoBERTa
transformer model for all General Language Understanding Evaluation (GLUE)
benchmark tasks. Additionally, it is demonstrated that further optimizing the
learned parameters post-training improves accuracy.
|
Deep reinforcement learning (DRL) has recently been adopted in a wide range
of physics and engineering domains for its ability to solve decision-making
problems that were previously out of reach due to a combination of
non-linearity and high dimensionality. In the last few years, it has spread in
the field of computational mechanics, and particularly in fluid dynamics, with
recent applications in flow control and shape optimization. In this work, we
conduct a detailed review of existing DRL applications to fluid mechanics
problems. In addition, we present recent results that further illustrate the
potential of DRL in Fluid Mechanics. The coupling methods used in each case are
covered, detailing their advantages and limitations. Our review also focuses on
the comparison with classical methods for optimal control and optimization.
Finally, several test cases are described that illustrate recent progress made
in this field. The goal of this publication is to provide an understanding of
DRL capabilities along with state-of-the-art applications in fluid dynamics to
researchers wishing to address new problems with these methods.
|
We present a probabilistic deep learning methodology that enables the
construction of predictive data-driven surrogates for stochastic systems.
Leveraging recent advances in variational inference with implicit
distributions, we put forth a statistical inference framework that enables the
end-to-end training of surrogate models on paired input-output observations
that may be stochastic in nature, originate from different information sources
of variable fidelity, or be corrupted by complex noise processes. The resulting
surrogates can accommodate high-dimensional inputs and outputs and are able to
return predictions with quantified uncertainty. The effectiveness of our approach
is demonstrated through a series of canonical studies, including the regression
of noisy data, multi-fidelity modeling of stochastic processes, and uncertainty
propagation in high-dimensional dynamical systems.
|
We report evidence from the 3B Catalogue that long ($T_{90} > 10$ s) and
short ($T_{90} < 10$ s) gamma-ray bursts represent distinct source populations.
Their spatial distributions are significantly different, with long bursts
having $\langle V/V_{max} \rangle = 0.282 \pm 0.014$ but short bursts having
$\langle V/V_{max} \rangle = 0.385 \pm 0.019$, differing by $0.103 \pm 0.024$,
significant at the $4.3 \sigma$ level. Long and short bursts also differ
qualitatively in their spectral behavior, with short bursts harder in the BATSE
(50--300 keV) band, but long bursts more likely to be detected at photon
energies > 1 MeV. This implies different spatial origins and physical processes
for long and short bursts. Long bursts may be explained by accretion-induced
collapse. Short bursts require another mechanism, for which we suggest neutron
star collisions. These are capable of producing neutrino bursts as short as a
few ms, consistent with the shortest observed time scales in GRB. We briefly
investigate the parameters of clusters in which neutron star collisions may
occur, and discuss the nuclear evolution of expelled and accelerated matter.
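As a quick arithmetic check (a sketch using only the numbers quoted above), the significance of the difference follows from adding the two $\langle V/V_{max} \rangle$ uncertainties in quadrature:

```python
import math

# <V/V_max> values and 1-sigma uncertainties quoted in the abstract.
v_long, err_long = 0.282, 0.014    # long bursts (T90 > 10 s)
v_short, err_short = 0.385, 0.019  # short bursts (T90 < 10 s)

# Difference and its uncertainty (errors added in quadrature).
diff = v_short - v_long
err = math.hypot(err_long, err_short)

print(f"difference = {diff:.3f} +/- {err:.3f}")   # 0.103 +/- 0.024
print(f"significance = {diff / err:.1f} sigma")
```

Rounding the uncertainty to 0.024 before dividing reproduces the quoted $4.3 \sigma$.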
|
Quantum fields in curved spacetime exhibit a wealth of effects like Hawking
radiation from black holes. While quantum field theory around black holes can
only be studied theoretically, it can be tested in controlled laboratory
analogue experiments, in which a fluid going from sub- to supersonic speed
creates an effectively curved spacetime for the acoustic field, with a horizon
where the speed of the fluid equals the speed of sound. The challenge to test
predictions like the Hawking effect in such systems lies in the control of the
spacetime curvature and access to the field spectrum thereon. Here, we create
tailored stationary effective curved spacetimes in a polaritonic quantum fluid
of light in which either massless or massive excitations can be created, with
smooth and steep horizons and various supersonic fluid speeds. Using a recently
developed spectroscopy method we measure the spectrum of collective excitations
on these spacetimes, crucially observing negative energy modes in the
supersonic regions, which signals the formation of a horizon. Control over the
horizon curvature and access to the spectrum on either side demonstrates the
potential of quantum fluids of light for the study of field theories on curved
spacetimes, and we discuss the possibility of investigating emission and
spectral instabilities with a horizon or in an effective Exotic Compact Object
configuration.
|
We investigate Gaussian warped five-dimensional thick braneworlds.
Identification of the graviton's wave function (squared) in the extra-dimension
with a probability distribution function leads to a straightforward
probabilistic interpretation of braneworlds. The extra-coordinate $y$ is
regarded as a Gaussian-distributed random variable. Hence, all of the field
variables and operators which depend on $y$ are, also, randomly distributed.
Four-dimensional measurable (macroscopic) quantities are identified with the
corresponding averaged values over the Gaussian distribution. The present
scenario represents a new phenomenological approach to smooth thick branes
which cannot be obtained through 'smearing out' Randall-Sundrum-like (thin)
braneworlds.
|
The objective of this work is to investigate complementary features which can
aid the quintessential Mel frequency cepstral coefficients (MFCCs) in the task
of closed, limited-set word recognition for non-native English speakers of
different mother-tongues. Unlike the MFCCs, which are derived from the spectral
energy of the speech signal, the proposed frequency-centroids (FCs) encapsulate
the spectral centres of the different bands of the speech spectrum, with the
bands defined by the Mel filterbank. These features, in combination with the
MFCCs, are observed to provide relative performance improvement in English word
recognition, particularly under varied noisy conditions. A two-stage
Convolutional Neural Network (CNN) is used to model the features of the English
words uttered with Arabic, French and Spanish accents.
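The frequency-centroid features can be sketched as the energy-weighted mean frequency within each band defined by the filterbank; the function below and its toy single-band "filterbank" are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mel_band_centroids(power_spec, freqs, mel_fbank):
    """Frequency centroid of each band: the spectral-energy-weighted mean
    frequency, with bands given by (Mel) filterbank weights."""
    weighted = mel_fbank * power_spec       # (n_bands, n_bins) weighted energy
    num = weighted @ freqs                  # sum_f w(f) E(f) f, per band
    den = weighted.sum(axis=1) + 1e-12      # sum_f w(f) E(f),  per band
    return num / den

# Toy check: a pure 1 kHz tone should give a centroid near 1 kHz.
sr, n_fft = 16000, 512
freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
t = np.arange(n_fft) / sr
frame = np.sin(2 * np.pi * 1000.0 * t) * np.hanning(n_fft)
power_spec = np.abs(np.fft.rfft(frame)) ** 2
fbank = np.ones((1, freqs.size))            # trivial single all-pass "band"
print(mel_band_centroids(power_spec, freqs, fbank))  # ~[1000.]
```

A real Mel filterbank (triangular weights on a Mel-spaced grid) would replace the trivial `fbank`, yielding one centroid per Mel band.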
|
In the present paper, we introduce a concept of Ricci curvature on
hypergraphs for a nonlinear Laplacian. We prove that our definition of the
Ricci curvature is a generalization of Lin-Lu-Yau coarse Ricci curvature for
graphs to hypergraphs. We also show a lower bound of nonzero eigenvalues of
Laplacian, gradient estimate of heat flow, and diameter bound of Bonnet-Myers
type for our curvature notion. This research is a step toward understanding how
the nonlinearity of the Laplacian gives rise to the complexity of curvatures.
|
The second part of Hilbert's sixteenth problem consists in determining
the upper bound $\mathcal{H}(n)$ for the number of limit cycles that planar
polynomial vector fields of degree $n$ can have. For $n\geq2$, it is still
unknown whether $\mathcal{H}(n)$ is finite or not. The main achievements
obtained so far establish lower bounds for $\mathcal{H}(n)$. Regarding
asymptotic behavior, the best result says that $\mathcal{H}(n)$ grows as fast
as $n^2\log(n)$. Better lower bounds for small values of $n$ are known in the
research literature. In the recent paper "Some open problems in low dimensional
dynamical systems" by A. Gasull, Problem 18 proposes another Hilbert's
sixteenth type problem, namely improving the lower bounds for $\mathcal{L}(n)$,
$n\in\mathbb{N}$, which is defined as the maximum number of limit cycles that
planar piecewise linear differential systems with two zones separated by a
branch of an algebraic curve of degree $n$ can have. So far,
$\mathcal{L}(n)\geq [n/2],$ $n\in\mathbb{N}$, is the best known general lower
bound. Again, better lower bounds for small values of $n$ are known in the
research literature. Here, by using a recently developed second order Melnikov
method for nonsmooth systems with nonlinear discontinuity manifold, it is shown
that $\mathcal{L}(n)$ grows as fast as $n^2.$ This will be achieved by
providing lower bounds for $\mathcal{L}(n)$ that improve every previous
estimate for $n\geq 4$.
|
We present an exploratory study for the nonperturbative determination of the
coefficient of the ${\cal O}(a)$ improvement term to the Wilson action,
$c_{SW}$. Following the work by L\"{u}scher et al., we impose the PCAC relation
as a nonperturbative improvement condition on $c_{SW}$, without, however, using
the Schr\"{o}dinger functional in our calculation.
|
In this paper we study the application of convolutional neural networks for
jointly detecting objects depicted in still images and estimating their 3D
pose. We identify different feature representations of oriented objects, and
energies that lead a network to learn these representations. The choice of the
representation is crucial since the pose of an object has a natural, continuous
structure while its category is a discrete variable. We evaluate the different
approaches on the joint object detection and pose estimation task of the
Pascal3D+ benchmark using Average Viewpoint Precision. We show that a
classification approach on discretized viewpoints achieves state-of-the-art
performance for joint object detection and pose estimation, and significantly
outperforms existing baselines on this benchmark.
|
The exact energy and angular-momentum conservation laws are derived by
Noether method for the Hamiltonian and symplectic representations of the
gauge-free electromagnetic gyrokinetic Vlasov-Maxwell equations. These
gyrokinetic equations, which are solely expressed in terms of electromagnetic
fields, describe the low-frequency turbulent fluctuations that perturb a
time-independent toroidally-axisymmetric magnetized plasma. The explicit proofs
presented here provide a complete picture of the transfer of energy and angular
momentum between the gyrocenters and the perturbed electromagnetic fields, in
which the crucial roles played by gyrocenter polarization and magnetization
effects are highlighted. In addition to yielding an exact angular-momentum
conservation law, the gyrokinetic Noether equation yields an exact momentum
transport equation, which might be useful in more general equilibrium magnetic
geometries.
|
Large Language Models (LLMs) have demonstrated remarkable performance across
diverse tasks and exhibited impressive reasoning abilities by applying
zero-shot Chain-of-Thought (CoT) prompting. However, due to the evolving nature
of sentence prefixes during the pre-training phase, existing zero-shot CoT
prompting methods that employ identical CoT prompting across all task instances
may not be optimal. In this paper, we introduce a novel zero-shot prompting
method that leverages evolutionary algorithms to generate diverse promptings
for LLMs dynamically. Our approach involves initializing two CoT promptings,
performing evolutionary operations based on LLMs to create a varied set, and
utilizing the LLMs to select a suitable CoT prompting for a given problem.
Additionally, a rewriting operation, guided by the selected CoT prompting,
enhances the understanding of the LLMs about the problem. Extensive experiments
conducted across ten reasoning datasets demonstrate the superior performance of
our proposed method compared to current zero-shot CoT prompting methods on
GPT-3.5-turbo and GPT-4. Moreover, in-depth analytical experiments underscore
the adaptability and effectiveness of our method in various reasoning tasks.
|
We reformulate Hrushovski's definability patterns from the setting of
first-order logic to the setting of positive logic. Given an h-universal theory
$T$ we put two structures on the type spaces of models of $T$ in two languages,
$\mathcal{L}$ and $\mathcal{L}_{\pi}$. It turns out that for sufficiently
saturated models, the corresponding h-universal theories $\mathcal{T}$ and
$\mathcal{T}_{\pi}$ are independent of the model. We show that there is a
canonical model $\mathcal{J}$ of $\mathcal{T}$, and in many interesting cases
there is an analogous canonical model $\mathcal{J}_{\pi}$ of
$\mathcal{T}_{\pi}$, both of which embed into every type space. We discuss the
properties of these canonical models, called cores, and give some concrete
examples.
|
Thyroid cancer is common worldwide, with a rapid increase in prevalence
across North America in recent years. While most patients present with palpable
nodules through physical examination, a large number of small and medium-sized
nodules are detected by ultrasound examination. Suspicious nodules are then
sent for biopsy through fine needle aspiration. Since biopsies are invasive and
sometimes inconclusive, various research groups have tried to develop
computer-aided diagnosis systems. Earlier approaches along these lines relied
on clinically relevant features that were manually identified by radiologists.
With the recent success of artificial intelligence (AI), various new methods
are being developed to identify these features in thyroid ultrasound
automatically. In this paper, we present a systematic review of the state of
the art in AI applications to the sonographic diagnosis of thyroid cancer.
This review follows a methodology-based classification of the different
techniques available for thyroid cancer diagnosis. With more than 50 papers
included in this review, we reflect on the trends and challenges of the field
of sonographic diagnosis of thyroid malignancies and the potential of
computer-aided diagnosis to increase the impact of ultrasound applications on
the future of thyroid cancer diagnosis. Machine learning will continue to play
a fundamental role in the development of future thyroid cancer diagnosis
frameworks.
|
The Indian monsoon brings around 80% of the annual rainfall over the summer
months June--September to the Indian subcontinent. The timing of the monsoon
onset and the associated rainfall has a large impact on agriculture, thus
impacting the livelihoods of over one billion people. To improve forecasting
the monsoon on sub-seasonal timescales, global climate models are in continual
development. One of the key issues is the representation of convection, which
is typically parametrised. Different convection schemes offer varying degrees
of performance, depending on the model and scenario. Here, we propose a method
to compute a convective timescale, which could be used as a metric for
comparison across different models and convection schemes. The method involves
the determination of a vertical convective flux between the lower and upper
troposphere through moisture budget analysis, and then relating this to the
total column moisture content. The method is applied to a WRF model simulation
of the 2016 Indian monsoon, giving convective timescales that are reduced by a
factor of 2 when the onset of the monsoon occurs. The convective timescale can
also be used as an indicator of monsoon transitions from pre-onset to full
phase of the monsoon, and to assess changes in monsoon phases under future
climate scenarios.
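The proposed metric can be sketched as a ratio of total column moisture to the diagnosed vertical convective moisture flux; the numbers below are illustrative assumptions, not values from the WRF simulation:

```python
def convective_timescale(column_moisture, convective_flux):
    """tau = total column moisture [kg m^-2] divided by the vertical
    convective moisture flux between the lower and upper troposphere
    [kg m^-2 s^-1]; the result is in seconds."""
    return column_moisture / max(convective_flux, 1e-12)

# Illustrative only: doubling the flux at monsoon onset (for a fixed
# column moisture) halves the convective timescale.
tcw = 50.0                                       # kg m^-2 (assumed)
pre_onset = convective_timescale(tcw, 2.0e-4)    # seconds
post_onset = convective_timescale(tcw, 4.0e-4)   # seconds
print(pre_onset / 3600.0, post_onset / 3600.0)   # in hours; ratio is 2
```

In the paper's setting the flux would come from a moisture budget analysis of model output rather than the assumed constants used here.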
|
We consider holographic CFTs and study their large $N$ expansion. We use
Polyakov-Mellin bootstrap to extract the CFT data of all operators, including
scalars, till $O(1/N^4)$. We add a contact term in Mellin space, which
corresponds to an effective $\phi^4$ theory in AdS and leads to anomalous
dimensions for scalars at $O(1/N^2)$. Using this we fix $O(1/N^4)$ anomalous
dimensions for double trace operators finding perfect agreement with
\cite{loopal} (for $\Delta_{\phi}=2$). Our approach generalizes this to any
dimensions and any value of conformal dimensions of external scalar field. In
the second part of the paper, we compute the loop amplitude in AdS which
corresponds to non-planar correlators in the CFT. More precisely, using CFT data
at $O(1/N^4)$ we fix the AdS bubble diagram and the triangle diagram for the
general case.
|
Motivated by a problem of scheduling unit-length jobs with weak preferences
over time-slots, the random assignment problem (also called the house
allocation problem) is considered on a uniform preference domain. For the
subdomain in which preferences are strict except possibly for the class of
unacceptable objects, Bogomolnaia and Moulin characterized the probabilistic
serial mechanism as the only mechanism satisfying equal treatment of equals,
strategyproofness, and ordinal efficiency. The main result in this paper is
that the natural extension of the probabilistic serial mechanism to the domain
of weak, but uniform, preferences fails strategyproofness, but so does every
other mechanism that is ordinally efficient and treats equals equally. If
envy-free assignments are required, then any (probabilistic or deterministic)
mechanism that guarantees an ex post efficient outcome must fail even a weak
form of strategyproofness.
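For reference, the probabilistic serial mechanism on strict preferences (the Bogomolnaia-Moulin setting referred to above) is the simultaneous-eating algorithm; this is a minimal sketch assuming equal numbers of agents and objects with complete strict rankings, not the paper's extension to weak preferences:

```python
def probabilistic_serial(prefs):
    """Simultaneous eating: each agent eats its best remaining object at
    unit speed; returns the random assignment P[agent][object].
    Assumes strict complete preferences and one unit of each object."""
    agents = list(prefs)
    objects = sorted({o for ranking in prefs.values() for o in ranking})
    remaining = {o: 1.0 for o in objects}
    P = {i: {o: 0.0 for o in objects} for i in agents}
    t = 0.0
    while t < 1.0 - 1e-12:
        # Each agent targets its most-preferred object with supply left.
        target = {i: next(o for o in prefs[i] if remaining[o] > 1e-12)
                  for i in agents}
        eaters = {o: sum(1 for i in agents if target[i] == o) for o in objects}
        # Advance until some object is exhausted or total time 1 elapses.
        dt = min([1.0 - t] + [remaining[o] / eaters[o]
                              for o in objects if eaters[o] > 0])
        for i in agents:
            P[i][target[i]] += dt
            remaining[target[i]] -= dt   # one unit of eating speed per agent
        t += dt
    return P

# Two agents with identical preferences split both objects equally.
P = probabilistic_serial({'A': ['a', 'b'], 'B': ['a', 'b']})
print(P['A'])  # {'a': 0.5, 'b': 0.5}
```

Ordinal efficiency and equal treatment of equals hold by construction here; the abstract's point is that no such mechanism remains strategyproof once uniform weak preferences are allowed.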
|
We show in spatially one dimensional Madelung fluid that a simple requirement
on local stability of the maximum of quantum probability density will, if
combined with the global scale invariance of quantum potential, lead to a class
of quantum probability densities globally being self-trapped by their own
self-generated quantum potentials, possessing only a finite-size spatial
support. It turns out to belong to a class of the most probable wave function
given its energy through the maximum entropy principle. We proceed to show that
there is a limiting case in which the quantum probability density becomes the
stationary-moving soliton-like solution of the Schr\"odinger equation.
|
We present ten medium-resolution, high signal-to-noise ratio near-infrared
(NIR) spectra of SN 2011fe from SpeX on the NASA Infrared Telescope Facility
(IRTF) and Gemini Near-Infrared Spectrograph (GNIRS) on Gemini North, obtained
as part of the Carnegie Supernova Project. This data set constitutes the
earliest time-series NIR spectroscopy of a Type Ia supernova (SN Ia), with the
first spectrum obtained at 2.58 days past the explosion and covering -14.6 to
+17.3 days relative to B-band maximum. C I {\lambda}1.0693 {\mu}m is detected
in SN 2011fe with increasing strength up to maximum light. The delay in the
onset of the NIR C I line demonstrates its potential to be an effective tracer
of unprocessed material. For the first time in a SN Ia, the early rapid decline
of the Mg II {\lambda}1.0927 {\mu}m velocity was observed, and the subsequent
velocity is remarkably constant. The Mg II velocity during this constant phase
locates the inner edge of carbon burning and probes the conditions under which
the transition from deflagration to detonation occurs. We show that the Mg II
velocity does not correlate with the optical light-curve decline rate
{\Delta}m15. The prominent break at ~1.5 {\mu}m is the main source of concern
for NIR k-correction calculations. We demonstrate here that the feature has a
uniform time evolution among SNe Ia, with the flux ratio across the break
strongly correlated with {\Delta}m15. The predictability of the strength and
the onset of this feature suggests that the associated k-correction
uncertainties can be minimized with improved spectral templates.
|
We consider the Ginzburg-Landau energy for a type-I superconductor in the
shape of an infinite three-dimensional slab, with two-dimensional periodicity,
with an applied magnetic field which is uniform and perpendicular to the slab.
We determine the optimal scaling law of the minimal energy in terms of the
parameters of the problem, when the applied magnetic field is sufficiently
small and the sample sufficiently thick. This optimal scaling law is proven via
ansatz-free lower bounds and an explicit branching construction which refines
further and further as one approaches the surface of the sample. Two different
regimes appear, with different scaling exponents. In the first regime, the
branching leads to an almost uniform magnetic field pattern on the boundary; in
the second one the inhomogeneity survives up to the boundary.
|
Infrared and visible image fusion targets to provide an informative image by
combining complementary information from different sensors. Existing
learning-based fusion approaches attempt to construct various loss functions to
preserve complementary features, while neglecting to discover the
inter-relationship between the two modalities, leading to redundant or even
invalid information on the fusion results. Moreover, most methods focus on
strengthening the network with an increase in depth while neglecting the
importance of feature transmission, causing vital information degeneration. To
alleviate these issues, we propose a coupled contrastive learning network,
dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end
manner. Concretely, to simultaneously retain typical features from both
modalities and to avoid artifacts emerging on the fused result, we develop a
coupled contrastive constraint in our loss function. In a fused image, its
foreground target / background detail part is pulled close to the infrared /
visible source and pushed far away from the visible / infrared source in the
representation space. We further exploit image characteristics to provide
data-sensitive weights, allowing our loss function to build a more reliable
relationship with source images. A multi-level attention module is established
to learn rich hierarchical feature representation and to comprehensively
transfer features in the fusion process. We also apply the proposed CoCoNet on
medical image fusion of different types, e.g., magnetic resonance image,
positron emission tomography image, and single photon emission computed
tomography image. Extensive experiments demonstrate that our method achieves
state-of-the-art (SOTA) performance under both subjective and objective
evaluation, especially in preserving prominent targets and recovering vital
textural details.
|
We revisit the derivation of multipole contributions to the atom-wall
interaction previously presented in [G. Lach et al., Phys. Rev. A 81, 052507
(2010)]. A careful reconsideration of the angular-momentum decomposition of the
second-, third- and fourth-rank tensors composed of the derivatives of the
electric-field modes leads to a modification for the results for the
quadrupole, octupole and hexadecupole contributions to the atom-wall
interaction. Asymptotic results are given for the multipole terms in both the
short-range and long-range limits.
Calculations are carried out for hydrogen and positronium in contact with
$\alpha$-quartz; a reanalysis of analytic models of the dielectric function of
$\alpha$-quartz is performed. Analytic results are provided for the multipole
polarizabilities of hydrogen and positronium. The quadrupole correction is
shown to be numerically significant for atom-surface interactions. The
expansion into multipoles is shown to constitute a divergent, asymptotic
series. Connections to van-der-Waals corrected density-functional theory and
applications to physisorption are described.
|
We study a charged Brownian gas with a non-uniform bath temperature, and
present a thermohydrodynamical picture. An expansion in the collision time
probes the validity of the local equilibrium approach and the relevant
thermodynamical
variables. For the linear regime we present several applications (including
some novel results). For the lowest nonlinear expansion and uniform bath
temperature we compute the gradient corrections to the local equilibrium
approach and the fundamental (Smoluchowski) equation for the nonequilibrium
particle density.
|
We propose a scalable optimization framework for estimating convex inner
approximations of the steady-state security sets. The framework is based on
Brouwer fixed point theorem applied to a fixed-point form of the power flow
equations. It establishes a certificate for the self-mapping of a polytope
region constructed around a given feasible operating point. This certificate is
based on the explicit bounds on the nonlinear terms that hold within the
self-mapped polytope. The shape of the polytope is adapted to find the largest
approximation of the steady-state security region. While the corresponding
optimization problem is nonlinear and non-convex, every feasible solution found
by local search defines a valid inner approximation. The number of variables
scales linearly with the system size, and the general framework can naturally
be applied to other nonlinear equations with affine dependence on inputs. Test
cases, with system sizes up to $1354$ buses, are used to illustrate the
scalability of the approach. The results show that the approximated regions are
not overly conservative and that they cover substantial fractions of the true
steady-state security regions for most medium-sized test cases.
|
This letter attempts to design a surveillance scheme by adopting an active
reconfigurable intelligent surface (RIS). Different from the conventional
passive RIS, the active RIS could not only adjust the phase shift but also
amplify the amplitude of the reflected signal. With such reflection, the active
RIS can jointly adjust the signal-to-interference-plus-noise ratio (SINR) of
the suspicious receiver and the legitimate monitor,
hence the proactive eavesdropping at the physical layer could be effectively
realized. We formulate the optimization problem with the target of maximizing
the eavesdropping rate to obtain the optimal reflecting coefficient matrix of
the active RIS. The formulated optimization problem is a nonconvex fractional
program and is challenging to solve. We then solve the problem by
approximating it as a series of convex constraints. Simulation results validate
the effectiveness of our designed surveillance scheme and show that the
proposed active RIS aided surveillance scheme has good performance in terms of
eavesdropping rate compared with the scheme with passive RIS.
|
Debris discs are a consequence of the planet formation process and constitute
the fingerprints of planetesimal systems. Their solar system's counterparts are
the asteroid and Edgeworth-Kuiper belts. The DUNES survey aims at detecting
extra-solar analogues to the Edgeworth-Kuiper belt around solar-type stars,
putting in this way the solar system into context. The survey allows us to
address some questions related to the prevalence and properties of planetesimal
systems. We used {\it Herschel}/PACS to observe a sample of nearby FGK stars.
Data at 100 and 160 $\mu$m were obtained, complemented in some cases with
observations at 70 $\mu$m, and at 250, 350 and 500 $\mu$m using SPIRE. The
observing strategy was to integrate as deep as possible at 100 $\mu$m to detect
the stellar photosphere. Debris discs have been detected at a fractional
luminosity level down to several times that of the Edgeworth-Kuiper belt. The
incidence rate of discs around the DUNES stars is increased from a rate of
$\sim$ 12.1% $\pm$ 5% before \emph{Herschel} to $\sim$ 20.2% $\pm$ 2%. A
significant fraction ($\sim$ 52%) of the discs are resolved, which represents
an enormous step ahead from the previously known resolved discs. Some stars are
associated with faint far-IR excesses attributed to a new class of cold discs.
Although it cannot be excluded that these excesses are produced by coincidental
alignment of background galaxies, statistical arguments suggest that at least
some of them are true debris discs. Some discs display peculiar SEDs with
spectral indexes in the 70-160$\mu$m range steeper than the Rayleigh-Jeans one.
An analysis of the debris disc parameters suggests that the mean blackbody
radius might decrease from the F-type to the K-type stars. In addition,
a weak trend is suggested for a correlation of disc sizes and an
anticorrelation of disc temperatures with the stellar age.
|
A transfer-matrix simulation scheme for the three-dimensional ($d=3$) bond
percolation is presented. Our scheme is based on Novotny's transfer-matrix
formalism, which enables us to consider an arbitrary (integer) number of sites
$N$ constituting a unit of the transfer-matrix slice even for $d=3$. Such
arbitrariness allows us to perform a systematic finite-size-scaling analysis of
the criticality at the percolation threshold. Diagonalizing the transfer matrix
for $N = 4,5,\dots,10$, we obtain an estimate for the correlation-length
critical exponent $\nu = 0.81(5)$.
|
We introduce an ensemble consisting of logarithmically repelling charge one
and charge two particles on the unit circle constrained so that the total
charge of all particles equals $N$, but the proportion of each species of
particle is allowed to vary according to a fugacity parameter. We identify the
proper scaling of the fugacity with $N$ so that the proportion of each particle
stays positive in the $N \rightarrow \infty$ limit. This ensemble forms a
Pfaffian point process on the unit circle, and we derive the scaling limits of
the matrix kernel(s) as a function of the interpolating parameter. This
provides a solvable interpolation between the circular unitary and symplectic
ensembles.
|
We consider the problem of non-smooth convex optimization with linear
equality constraints, where the objective function is only accessible through
its proximal operator. This problem arises in many different fields such as
statistical learning, computational imaging, telecommunications, and optimal
control. To solve it, we propose an Anderson accelerated Douglas-Rachford
splitting (A2DR) algorithm, which we show either globally converges or provides
a certificate of infeasibility/unboundedness under very mild conditions.
Applied to a block separable objective, A2DR partially decouples so that its
steps may be carried out in parallel, yielding an algorithm that is fast and
scalable to multiple processors. We describe an open-source implementation and
demonstrate its performance on a wide range of examples.
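For illustration, plain Douglas-Rachford splitting on a toy prox-accessible problem can be sketched as follows (a minimal sketch without the Anderson acceleration or the infeasibility certificates that distinguish A2DR; the objective and names are assumptions for the example):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||x||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    """Proximal operator of t * 0.5*||x - b||^2."""
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, t=1.0, iters=200):
    """Minimize ||x||_1 + 0.5*||x - b||^2 using only the two proxes."""
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_l1(z, t)                  # prox step on the first term
        y = prox_quad(2.0 * x - z, t, b)   # prox step on the reflection
        z = z + y - x                      # fixed-point update
    return x

b = np.array([3.0, 0.5, -2.0])
print(douglas_rachford(b))  # converges to soft-thresholding of b: [2, 0, -1]
```

A2DR additionally extrapolates the fixed-point iterates `z` using Anderson acceleration, which is what makes the method fast on large block-separable problems.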
|
This Letter reports observations of an event that connects all major classes
of solar eruptions: those that erupt fully into the heliosphere versus those
that fail and are confined to the Sun, and those that eject new flux into the
heliosphere, in the form of a flux rope, versus those that eject only new
plasma in the form of a jet. The event originated in a filament channel
overlying a circular polarity inversion line (PIL) and occurred on 2013-03-20
during the extended decay phase of the active region designated NOAA
12488/12501. The event was especially well-observed by multiple spacecraft and
exhibited the well-studied null-point topology. We analyze all aspects of the
eruption using SDO AIA and HMI, STEREO-A EUVI, and SOHO LASCO imagery. One
section of the filament undergoes a classic failed eruption with cool plasma
subsequently draining onto the section that did not erupt, but a complex
structured CME/jet is clearly observed by SOHO LASCO C2 shortly after the
failed filament eruption. We describe in detail the slow buildup to eruption,
the lack of an obvious trigger, and the immediate reappearance of the filament
after the event. The unique mixture of major eruption properties observed
during this event places severe constraints on the structure of the filament
channel field and, consequently, on the possible eruption mechanism.
|
Finite elasticity problems commonly include material and geometric
nonlinearities and are solved using various numerical methods. However, for
highly nonlinear problems, achieving convergence is relatively difficult and
requires small load step sizes. In this work, we present a new method to
transform the discretized governing equations so that the transformed problem
has significantly reduced nonlinearity and, therefore, Newton solvers exhibit
improved convergence properties. We study exponential-type nonlinearity in soft
tissues and geometric nonlinearity in compression, and propose novel
formulations for the two problems. We test the new formulations in several
numerical examples and show significant reduction in iterations required for
convergence, especially at large load steps. Notably, the proposed formulation
is capable of yielding a convergent solution even when 10 to 100 times larger
load steps are applied. The proposed framework is generic and can be applied to
other types of nonlinearities as well.
|
We consider the magnetic Laplacian with the homogeneous magnetic field in two
and three dimensions. We prove that the $(k+1)$-th magnetic Neumann eigenvalue
of a bounded convex planar domain is not larger than its $k$-th magnetic
Dirichlet eigenvalue. In three dimensions, we restrict our attention to convex
domains, which are invariant under rotation by an angle of $\pi$ around an axis
parallel to the magnetic field. For such domains, we prove that the $(k+2)$-th
magnetic Neumann eigenvalue is not larger than the $k$-th magnetic Dirichlet
eigenvalue provided that this Dirichlet eigenvalue is simple. The proofs rely
on a modification of the strategy due to Levine and Weinberger.
|
This paper describes our winning systems in MRL: The 1st Shared Task on
Multilingual Clause-level Morphology (EMNLP 2022 Workshop), developed by the
KUIS AI NLP team. We present our work for all three parts of the shared task:
inflection, reinflection, and analysis. We mainly explore transformers with two
approaches: (i) training models from scratch in combination with data
augmentation, and (ii) transfer learning with prefix-tuning at multilingual
morphological tasks. Data augmentation significantly improves performance for
most languages in the inflection and reinflection tasks. On the other hand,
prefix-tuning on a pre-trained mGPT model helps us adapt to the analysis task in
low-data and multilingual settings. While transformer architectures with data
augmentation achieved the most promising results for the inflection and
reinflection tasks, prefix-tuning on mGPT achieved the best results for the
analysis task. Our systems received 1st place in all three tasks in MRL 2022.
|
We present the Belavkin filtering equation for the intense balanced
heterodyne detection in a unitary model of an indirect observation. The
measuring apparatus modelled by a Bose field is initially prepared in a
coherent state and the observed process is a diffusion one. We prove that this
filtering equation is relaxing: any initial square-integrable function tends
asymptotically to a coherent state with an amplitude depending on the coupling
constant and the initial state of the apparatus. The time-development of a
squeezed coherent state is studied and compared with the previous results
obtained for the measuring apparatus prepared initially in the vacuum state.
|
A summary introduction of the Weil-Petersson metric space geometry is
presented. Teichmueller space and its augmentation are described in terms of
Fenchel-Nielsen coordinates. Formulas for the gradients and Hessians of
geodesic-length functions are presented. Applications are considered. A
description of the Weil-Petersson metric in Fenchel-Nielsen coordinates is
presented. The Alexandrov tangent cone at points of the augmentation is
described. A comparison dictionary is presented between the geometry of the
space of flat tori and Teichmueller space with the Weil-Petersson metric.
|
Recent Monte Carlo simulations (A. G. Moreira and R. R. Netz: Eur. Phys. J. E
{\bf 8} (2002) 33) in the strong Coulomb coupling regime suggest strange
counterion electrostatics unlike the Poisson-Boltzmann picture: when
counterion-counterion repulsive interactions are much larger than
counterion-macroion attraction, the coarse-grained counterion distribution
around a macroion is determined only by the latter, and the former is
irrelevant. Here, we offer an explanation for the apparently paradoxical
electrostatics by mathematically manipulating the strong coupling limit.
|
There has been recent interest in understanding the all loop structure of the
subleading power soft and collinear limits, with the goal of achieving a
systematic resummation of subleading power infrared logarithms. Most of this
work has focused on subleading power corrections to soft gluon emission, whose
form is strongly constrained by symmetries. In this paper we initiate a study
of the all loop structure of soft fermion emission. In $\mathcal{N}=1$ QCD we
perform an operator based factorization and resummation of the associated
infrared logarithms, and prove that they exponentiate into a Sudakov due to
their relation to soft gluon emission. We verify this result through explicit
calculation to $\mathcal{O}(\alpha_s^3)$. We show that in QCD, this simple
Sudakov exponentiation is violated by endpoint contributions proportional to
$(C_A-C_F)^n$ which contribute at leading logarithmic order. Combining our
$\mathcal{N}=1$ result and our calculation of the endpoint contributions to
$\mathcal{O}(\alpha_s^3)$, we conjecture a result for the soft quark Sudakov in
QCD, a new all orders function first appearing at subleading power, and give
evidence for its universality. Our result, which is expressed in terms of
combinations of cusp anomalous dimensions in different color representations,
takes an intriguingly simple form and also exhibits interesting similarities to
results for large-$x$ logarithms in the off-diagonal splitting functions.
|
Recent multi-dimensional (multi-D) core-collapse supernova (CCSN) simulations
characterize gravitational waves (GWs) and neutrino signals, offering insight
into universal properties of CCSN independent of progenitor. Neutrino analysis
in real observations, however, will be complicated due to the ambiguity of
self-induced neutrino flavor conversion (NFC), which poses an obstacle to
extracting detailed physical information. In this paper, we propose a novel
approach to place a constraint on NFC from observed quantities of GWs and
neutrinos based on correlation analysis from recent, detailed multi-D CCSN
simulations. The proposed method can be used even in cases with low
significance - or no detection of GWs. We also discuss how we can utilize
electro-magnetic observations to complement the proposed method. Although our
proposed method has uncertainties associated with CCSN modeling, the present
result will serve as a base for more detailed studies. Reducing the systematic
errors involved in CCSN models is a key to success in this multi-messenger
analysis that needs to be done in collaboration with different theoretical
groups.
|
A constituent parton picture of hadrons with logarithmic confinement
naturally arises in weak coupling light-front QCD. Confinement provides a mass
gap that allows the constituent picture to emerge. The effective renormalized
Hamiltonian is computed to ${\cal O}(g^2)$, and used to study charmonium and
bottomonium. Radial and angular excitations can be used to fix the coupling
$\alpha$, the quark mass $M$, and the cutoff $\Lambda$. The resultant hyperfine
structure is very close to experiment.
|
This note gives necessary and sufficient conditions for a sequence of
non-negative integers to be the degree sequence of a connected simple graph.
This result is implicit in a paper of Hakimi. A new alternative
characterisation of these necessary and sufficient conditions is also given.
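One classical way to state such conditions (a hedged sketch; the paper's own characterisation, following Hakimi, may be phrased differently) is the Erdős-Gallai graphicality test plus the standard connectivity requirement that every degree be positive and the degree sum be at least 2(n-1), enough edge endpoints for a spanning tree:

```python
def is_graphical(degrees):
    """Erdős-Gallai test: can `degrees` be realized by a simple graph?"""
    d = sorted(degrees, reverse=True)
    n = len(d)
    if sum(d) % 2 != 0 or (d and (d[0] >= n or d[-1] < 0)):
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

def is_connected_graphical(degrees):
    """Realizable by a *connected* simple graph: graphical, every degree
    positive, and degree sum at least 2(n-1), i.e. at least n-1 edges."""
    n = len(degrees)
    if n == 1:
        return degrees[0] == 0
    return (is_graphical(degrees)
            and min(degrees) >= 1
            and sum(degrees) >= 2 * (n - 1))

print(is_connected_graphical([2, 2, 2]))     # True: realized by a triangle
print(is_connected_graphical([1, 1, 1, 1]))  # False: only two disjoint edges
```

The second example shows why graphicality alone is not enough: [1, 1, 1, 1] is graphical but every realization is disconnected.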
|
This paper studies the adaptive optimal control problem for a class of linear
time-delay systems described by delay differential equations (DDEs). A crucial
strategy is to take advantage of recent developments in reinforcement learning
and adaptive dynamic programming and develop novel methods to learn adaptive
optimal controllers from finite samples of input and state data. In this paper,
the data-driven policy iteration (PI) is proposed to solve the
infinite-dimensional algebraic Riccati equation (ARE) iteratively in the
absence of exact model knowledge. Interestingly, the proposed recursive PI
algorithm is new in the present context of continuous-time time-delay systems,
even when the model knowledge is assumed known. The efficacy of the proposed
learning-based control methods is validated by means of practical applications
arising from metal cutting and autonomous driving.
|
This paper introduces the CowStallNumbers dataset, a collection of images
extracted from videos focusing on cow teats, designed to advance the field of
cow stall number detection. The dataset comprises 1042 training images and 261
test images, featuring stall numbers ranging from 0 to 60. To enhance the
dataset, we performed fine-tuning on a YOLO model and applied data augmentation
techniques, including random crop, center crop, and random rotation. The
experimental outcomes demonstrate a notable 95.4\% accuracy in recognizing
stall numbers.
|
We study the effect of disorder in strongly interacting small atomic chains.
Using the Kotliar-Ruckenstein slave-boson approach, we diagonalize the
Hamiltonian via scattering matrix theory. We numerically solve for the Kondo
transmission and the slave-boson parameters, which allow us to calculate the
Kondo temperature. We demonstrate that in the weak disorder regime, disorder in
the energy levels of the dopants induces a non-screened disorder in the Kondo
couplings of the atoms. We show that disorder increases the Kondo temperature
of a perfect chain. We find that this disorder in the couplings comes from a
local distribution of Kondo temperatures along the chain. We propose two
experimental setups where the impact of local Kondo temperatures can be
observed.
|
The Asymptotic Safety Hypothesis for gravity relies on the existence of an
interacting fixed point of the Wilsonian renormalization group flow, which
controls the microscopic dynamics, and provides a UV completion of the theory.
Connecting such a UV completion to observable physics has become an active area
of research in recent decades. In this work we show such a connection within
the framework of scalar-tensor models. More specifically, we find that
cosmological inflation naturally emerges from the integration of the RG flow
equations, and that the predicted parameters of the emergent effective
potentials provide a slow-roll model of inflation compatible with current
observations. Furthermore, the RG evolution of the effective action starting at
the UV fixed point provides a prediction for the initial value of the inflaton
field.
|
Quality Estimation (QE) is the task of automatically predicting Machine
Translation quality in the absence of reference translations, making it
applicable in real-time settings, such as translating online social media
conversations. Recent success in QE stems from the use of multilingual
pre-trained representations, where very large models lead to impressive
results. However, the inference time, disk and memory requirements of such
models do not allow for wide usage in the real world. Models trained on
distilled pre-trained representations remain prohibitively large for many usage
scenarios. We instead propose to directly transfer knowledge from a strong QE
teacher model to a much smaller model with a different, shallower architecture.
We show that this approach, in combination with data augmentation, leads to
light-weight QE models that perform competitively with distilled pre-trained
representations with 8x fewer parameters.
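A minimal sketch of this kind of knowledge distillation for a regression task like QE (the loss weighting and all names are illustrative assumptions, not the authors' training setup): the small student is fit against a blend of teacher predictions and gold quality scores.

```python
def distillation_loss(student, teacher, gold, alpha=0.5):
    """Blend of two mean-squared errors: student vs. teacher scores
    (knowledge transfer) and student vs. gold quality labels."""
    assert len(student) == len(teacher) == len(gold)
    n = len(student)
    mse_teacher = sum((s - t) ** 2 for s, t in zip(student, teacher)) / n
    mse_gold = sum((s - g) ** 2 for s, g in zip(student, gold)) / n
    return alpha * mse_teacher + (1 - alpha) * mse_gold

# Toy sentence-level quality scores in [0, 1].
student = [0.80, 0.40, 0.10]
teacher = [0.90, 0.35, 0.20]
gold    = [1.00, 0.30, 0.00]
loss = distillation_loss(student, teacher, gold)
print(loss)
```

Setting `alpha=1.0` recovers pure teacher imitation; `alpha=0.0` recovers ordinary supervised regression on the gold labels.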
|
The Linear Parameter-Varying (LPV) framework has long been used to guarantee
performance and stability requirements of nonlinear (NL) systems mainly through
the $\mathcal{L}_2$-gain concept. However, recent research has pointed out that
current $\mathcal{L}_2$-gain based LPV synthesis methods can fail to guarantee
these requirements if stabilization of a non-zero operating condition (e.g.
reference tracking, constant disturbance rejection, etc.) is required. In this
paper, an LPV based synthesis method is proposed which is able to guarantee
incremental performance and stability of an NL system even with reference and
disturbance rejection objectives. The developed approach and the current
$\mathcal{L}_2$ LPV synthesis method are compared in a simulation study of the
position control problem of a Duffing oscillator, showing performance
improvements of the proposed method compared to the current
$\mathcal{L}_2$-based approach for tracking and disturbance rejection.
|
Motivated by the recent development of insulated nano-tubes and the attempts
to develop conducting nano wires in such tubes, we examine the Fermionic
behaviour in extremely thin wires. Although the one-dimensional problem has
been studied in detail over the years, it is an extreme idealization: We
consider the more realistic scenario of thin wires which are nevertheless three
dimensional. We show that the assembly of Fermions behaves as if it is below
the Fermi temperature, and in the limit of one dimension, in the ground state
as well. Thus there are indeed Bosonization features. These conclusions are
checked from an independent standpoint.
|
Static potential games are non-cooperative games which admit a fictitious
function, also referred to as a potential function, such that the minimizers of
this function constitute a subset (or a refinement) of the Nash equilibrium
strategies of the associated non-cooperative game. In this paper, we study a
class of $N$-player non-zero-sum difference games with inequality constraints
which admit a potential game structure. In particular, we provide conditions
for the existence of an optimal control problem (with inequality constraints)
such that the solution of this problem yields an open-loop Nash equilibrium
strategy of the corresponding dynamic non-cooperative game (with inequality
constraints). Further, we provide a way to construct potential functions
associated with this optimal control problem. We specialize our general results
to a linear-quadratic setting and provide a linear complementarity
problem-based approach for computing the refinements of the open-loop Nash
equilibria. We illustrate our results with an example inspired by energy
storage incentives in a smart grid.
|
Traditional databases are not equipped with the adequate functionality to
handle the volume and variety of "Big Data". Strict schema definition and data
loading are prerequisites even for the most primitive query session. Raw data
processing has been proposed as a schema-on-demand alternative that provides
instant access to the data. When loading is an option, it is driven exclusively
by the current-running query, resulting in sub-optimal performance across a
query workload. In this paper, we investigate the problem of workload-driven
raw data processing with partial loading. We model loading as fully-replicated
binary vertical partitioning. We provide a linear mixed integer programming
optimization formulation that we prove to be NP-hard. We design a two-stage
heuristic that comes within close range of the optimal solution in a fraction
of the time. We extend the optimization formulation and the heuristic to
pipelined raw data processing, a scenario in which data access and extraction
executed concurrently. We provide three case-studies over real data formats
that confirm the accuracy of the model when implemented in a state-of-the-art
pipelined operator for raw data processing.
|
Spectroscopic observations obtained with the VLT of one planetary nebula (PN)
in Sextans A, of five PNe in Sextans B, and of several HII regions (HII) in
these two dwarf irregular galaxies are presented. The extended spectral
coverage, from 320.0 to 1000.0nm, and the large telescope aperture allowed us
to detect a number of emission lines, covering more than one ionization stage
for several elements (He, O, S, Ar). The electron temperature (Te) diagnostic
[OIII] line at 436.3 nm was measured in all six PNe and in several HII regions, allowing
for an accurate determination of the ionic and total chemical abundances by
means of the Ionization Correction Factors method. For the time being, these
PNe are the farthest ones where such a direct measurement of the Te is
obtained. In addition, all PNe and HII were also modelled using the
photoionization code CLOUDY. The physico-chemical properties of PNe and HII are
presented and discussed. A small dispersion in the oxygen abundance of HII
regions was found in both galaxies: 12 + $\log$(O/H)=7.6$\pm$0.2 in Sextans A,
and 7.8$\pm$0.2 in Sextans B. For the five PNe of Sextans B, we find that 12 +
$\log$(O/H)=8.0$\pm$0.3, with a mean abundance consistent with that of HII. The
only PN known in Sextans A appears to have been produced by a quite massive
progenitor and has a significant nitrogen overabundance. In addition, its
oxygen abundance is 0.4 dex larger than the mean abundance of HII, possibly
indicating an efficient third dredge-up for massive, low-metallicity PN
progenitors. The metal enrichment of both galaxies is analyzed using these new
data.
|
We report the detection of carbon monoxide (CO) emission from the young
supernova remnant Cassiopeia A (Cas A) at wavelengths corresponding to the
fundamental vibrational mode at 4.65 micron. We obtained AKARI Infrared Camera
spectra towards 4 positions which unambiguously reveal the broad characteristic
CO ro-vibrational band profile. The observed positions include unshocked ejecta
at the center, indicating that CO molecules form in the ejecta at an early
phase. We extracted a dozen spectra across Cas A along the long 1 arcmin slits,
and compared these to simple CO emission models in Local Thermodynamic
Equilibrium to obtain first-order estimates of the excitation temperatures and
CO masses involved. Our observations suggest that significant amounts of carbon
may have been locked up in CO since the explosion 330 years ago. Surprisingly,
CO has not been efficiently destroyed by reactions with ionized He or the
energetic electrons created by the decay of radioactive nuclei. Our CO
detection thus implies that less carbon is available to form carbonaceous dust
in supernovae than is currently thought and that molecular gas could lock up a
significant amount of heavy elements in supernova ejecta.
|
We show that the time-dependent Doppler effect should induce measurable
deviations of the time history of the projected orbit of a star around the
supermassive black hole in the Galactic center (SgrA*) from the expected
Keplerian history. In particular, the line-of-sight acceleration of the star
generates apparent acceleration of its image along its velocity vector on the
sky, even if its actual Keplerian acceleration in this direction vanishes. The
excess apparent acceleration simply results from the transformation of time
between the reference frames of the observer and the star. Although the excess
acceleration averages to zero over a full closed orbit, it could lead to
systematic offsets of a few percent in estimates of the dynamical mass or
position of the black hole that rely on partially sampled orbits with
pericentric distances of ~100 AU. Deviations of this magnitude from apparent
Keplerian dynamics of known stars should be detectable by future observations.
|
We report the existence of Weyl points in a class of non-centrosymmetric
metamaterials, which has time reversal symmetry, but does not have inversion
symmetry due to chiral coupling between electric and magnetic fields. This
class of metamaterial exhibits either type-I or type-II Weyl points depending
on its non-local response. We also provide a physical realization of such
metamaterial consisting of an array of metal wires in the shape of elliptical
helices, which exhibits type-II Weyl points.
|
In an interferometer, path information and interference visibility are
incompatible quantities. Complete determination of the path will exclude any
possibility of interference, rendering the visibility zero. However, if the
composite object and probe state is pure, it is, under certain conditions,
possible to trade the path information for improved (conditioned) visibility.
Such a procedure is called quantum erasure. We have performed such experiments
with polarization entangled photon pairs. Using a partial polarizer we could
vary the degree of entanglement between object and probe. We could also vary
the interferometer splitting ratio and thereby vary the a priori path
predictability. We have tested quantum erasure under a number of different
experimental conditions and found good agreement between experiments and
theory.
|
Let R be a Stanley-Reisner ring (that is, a reduced monomial ring) with
coefficients in a domain k, and K its associated simplicial complex. Also let
D_k(R) be the ring of k-linear differential operators on R. We give two
different descriptions of the two-sided ideal structure of D_k(R) as being in
bijection with certain well-known subcomplexes of K; one based on explicit
computation in the Weyl algebra, valid in any characteristic, and one valid in
characteristic p based on the Frobenius splitting of R. A result of Traves
[Tra99] on the D_k(R)-module structure of R is also given a new proof and
different interpretation using these techniques.
|
The increase of cyber attacks in both number and variety in recent years
demands more sophisticated network intrusion detection systems (NIDS). Such
NIDS perform better when they can monitor all the traffic traversing the
network, such as when deployed on a Software-Defined Network (SDN). Because of
their inability to detect zero-day attacks,
signature-based NIDS which were traditionally used for detecting malicious
traffic are beginning to get replaced by anomaly-based NIDS built on neural
networks. However, it has recently been shown that such NIDS have their own
drawback, namely being vulnerable to adversarial example attacks. Moreover,
they were mostly evaluated on the old datasets which don't represent the
variety of attacks network systems might face these days. In this paper, we
present Reconstruction from Partial Observation (RePO) as a new mechanism to
build an NIDS with the help of denoising autoencoders capable of detecting
different types of network attacks in a low false alert setting with an
enhanced robustness against adversarial example attacks. Our evaluation
conducted on a dataset with a variety of network attacks shows denoising
autoencoders can improve detection of malicious traffic by up to 29% in a
normal setting and by up to 45% in an adversarial setting compared to other
recently proposed anomaly detectors.
|
We present an algorithm which attains O(\sqrt{T}) internal (and thus
external) regret for finite games with partial monitoring under the local
observability condition. Recently, this condition has been shown by (Bartok,
Pal, and Szepesvari, 2011) to imply the O(\sqrt{T}) rate for partial monitoring
games against an i.i.d. opponent, and the authors conjectured that the same
holds for non-stochastic adversaries. Our result answers this conjecture in the affirmative and
completes the characterization of possible rates for finite partial-monitoring
games, an open question stated by (Cesa-Bianchi, Lugosi, and Stoltz, 2006). Our
regret guarantees also hold for the more general model of partial monitoring
with random signals.
|
In this paper, we consider counting and projected model counting of
extensions in abstract argumentation for various semantics. When asking for
projected counts we are interested in counting the number of extensions of a
given argumentation framework while multiple extensions that are identical when
restricted to the projected arguments count as only one projected extension. We
establish classical complexity results and parameterized complexity results
when the problems are parameterized by treewidth of the undirected
argumentation graph. To obtain upper bounds for counting projected extensions,
we introduce novel algorithms that exploit small treewidth of the undirected
argumentation graph of the input instance by dynamic programming (DP). Our
algorithms run in time double or triple exponential in the treewidth depending
on the considered semantics. Finally, we take the exponential time hypothesis
(ETH) into account and establish lower bounds of bounded treewidth algorithms
for counting extensions and projected extensions.
|
This paper examines the art practices, artwork, and motivations of prolific
users of the latest generation of text-to-image models. Through interviews,
observations, and a user survey, we present a sampling of the artistic styles
and describe the developed community of practice around generative AI. We find
that: 1) the text prompt and the resulting image can be considered collectively
as an art piece (``prompts as art''), and 2) prompt templates (prompts with
``slots'' for others to fill in with their own words) are developed to create
generative
art styles. We discover that the value placed by this community on unique
outputs leads to artists seeking specialized vocabulary to produce distinctive
art pieces (e.g., by reading architectural blogs to find phrases to describe
images). We also find that some artists use "glitches" in the model that can be
turned into artistic styles in their own right. From these findings, we outline
specific implications for design regarding future prompting and image editing
options.
|
Discrete coherent states for a system of $n$ qubits are introduced in terms
of eigenstates of the finite Fourier transform. The properties of these states
are pictured in phase space by resorting to the discrete Wigner function.
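As background (a standard fact about the finite Fourier transform, not the paper's specific construction): the unitary discrete Fourier transform matrix $F$ satisfies $F^4 = I$, so its eigenvalues lie among the fourth roots of unity and its eigenstates split into four sectors. A quick numerical check in pure Python:

```python
import cmath

def dft_matrix(n):
    """Unitary finite (discrete) Fourier transform matrix F of size n."""
    w = cmath.exp(-2j * cmath.pi / n)
    s = 1 / n ** 0.5
    return [[s * w ** (j * k) for k in range(n)] for j in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 8  # e.g. n = 2**3 for a 3-qubit register
F = dft_matrix(n)
F4 = matmul(matmul(F, F), matmul(F, F))

# F^4 should be the identity up to floating-point error, so eigenstates
# of F come in four eigenvalue sectors {1, -1, i, -i}.
max_err = max(abs(F4[i][j] - (1 if i == j else 0))
              for i in range(n) for j in range(n))
print(max_err < 1e-9)
```

This is only the group-theoretic backdrop; the construction of the coherent states themselves is as described in the abstract.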
|
We prove a uniform version of Varadhan decomposition for shift-invariant
closed uniform forms associated to large scale interacting systems on general
crystal lattices. In particular, this result includes the case of translation
invariant processes on Euclidean lattices $\mathbf{Z}^d$ with finite range. Our
result generalizes the result of arXiv:2009.04699 which was valid for systems
on transferable graphs. In subsequent research, we will use the result of this
article to prove Varadhan's decomposition of closed $L^2$-forms for large scale
interacting systems on general crystal lattices.
|
The phonon-mediated attractive interaction between carriers leads to the
Cooper pair formation in conventional superconductors. Despite decades of
research, the glue holding Cooper pairs in high-temperature superconducting
cuprates is still controversial, as is the relative involvement of structural
and electronic degrees of freedom. Ultrafast electron
crystallography (UEC) offers, through observation of spatio-temporally resolved
diffraction, the means for determining structural dynamics and the possible
role of electron-lattice interaction. A polarized femtosecond (fs) laser pulse
excites the charge carriers, which relax through electron-electron and
electron-phonon coupling, and the consequent structural distortion is
followed by diffracting fs electron pulses. In this review, the recent findings
obtained on cuprates are summarized. In particular, we discuss the strength and
symmetry of the directional electron-phonon coupling in Bi2Sr2CaCu2O8+\delta
(BSCCO), as well as the c-axis structural instability induced by near-infrared
pulses in La2CuO4 (LCO). The theoretical implications of these results are
discussed with focus on the possibility of charge stripes being significant in
accounting for the polarization anisotropy of BSCCO, and cohesion energy
(Madelung) calculations being descriptive of the c-axis instability in LCO.
|
fgivenx is a Python package for functional posterior plotting, currently used
in astronomy, but it will be of use to scientists performing any Bayesian
analysis whose predictive posteriors are functions. The source code for fgivenx
is available on GitHub at https://github.com/williamjameshandley/fgivenx
|
Robust Gray codes were introduced by (Lolck and Pagh, SODA 2024). Informally,
a robust Gray code is a (binary) Gray code $\mathcal{G}$ so that, given a noisy
version of the encoding $\mathcal{G}(j)$ of an integer $j$, one can recover
$\hat{j}$ that is close to $j$ (with high probability over the noise). Such
codes have found applications in differential privacy.
In this work, we present near-optimal constructions of robust Gray codes. In
more detail, we construct a Gray code $\mathcal{G}$ of rate $1 - H_2(p) -
\varepsilon$ that is efficiently encodable, and that is robust in the following
sense. Suppose that $\mathcal{G}(j)$ is passed through the binary symmetric
channel $\text{BSC}_p$ with cross-over probability $p$, to obtain $x$. We
present an efficient decoding algorithm that, given $x$, returns an estimate
$\hat{j}$ so that $|j - \hat{j}|$ is small with high probability.
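For background, the classical binary reflected Gray code that such constructions start from (a standard construction; the robust codes of the paper add noise tolerance on top of this Gray-code structure) maps consecutive integers to codewords differing in exactly one bit:

```python
def gray_encode(j: int) -> int:
    """Binary reflected Gray code of j."""
    return j ^ (j >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by a prefix XOR over the bits."""
    j = 0
    while g:
        j ^= g
        g >>= 1
    return j

# Round trip, plus the defining property: codewords of adjacent
# integers differ in exactly one bit position.
assert all(gray_decode(gray_encode(j)) == j for j in range(1024))
assert all(bin(gray_encode(j) ^ gray_encode(j + 1)).count("1") == 1
           for j in range(1023))
```

The single-bit-change property is what makes Gray codes attractive in the differential privacy application: a small perturbation of the integer changes few bits of the codeword.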
|
We investigate the internal structure of clusters of galaxies in
high-resolution N-body simulations of 4 different cosmologies. There is a
higher proportion of disordered clusters in critical-density than in
low-density universes, although the structure of relaxed clusters is very
similar in each. Crude measures of substructure, such as the shift in the
position of the centre-of-mass as the density threshold is varied, can
distinguish the two in a sample of just 20 or so clusters; it is harder to
differentiate between clusters in open and flat models with the same density
parameter. Most clusters are in a quasi-steady state within the virial radius
and are well-described by the density profile of Navarro, Frenk & White (1995).
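For reference, the Navarro, Frenk & White density profile mentioned above is (with $\rho_s$ a characteristic density and $r_s$ a scale radius):

```latex
\rho(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^{2}}
```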
|
We measured brain waves of viewers watching the 2D, 2.5D, and 3D motion
pictures, comparing them with one another. The relative intensity of
{\alpha}-frequency band of 2.5D-viewer was lower than that of 2D-viewer, while
that of 3D-viewer remained with similar intensity. This result implies visual
neuro-processing of the 2.5D-viewer differs from that of the 3D-viewer.
|
We consider a branching population where individuals have i.i.d.\ life
lengths (not necessarily exponential) and constant birth rate. We let $N_t$
denote the population size at time $t$ (a so-called homogeneous, binary
Crump--Mode--Jagers process). We further assume that all individuals, at birth
time, are equipped with independent exponential clocks with parameter $\delta$.
We are interested in the genealogical tree stopped at the first time $T$ when
one of those clocks rings. This question has applications in epidemiology, in
population genetics, in ecology and in queuing theory.
We show that conditional on $\{T<\infty\}$, the joint law of $(N_T, T,
X^{(T)})$, where $X^{(T)}$ is the jumping contour process of the tree truncated
at time $T$, is equal to that of $(M, -I_M, Y_M')$ conditional on
$\{M\not=0\}$, where : $M+1$ is the number of visits of 0, before some single
independent exponential clock $\mathbf{e}$ with parameter $\delta$ rings, by
some specified L{\'e}vy process $Y$ without negative jumps reflected below its
supremum; $I_M$ is the infimum of the path $Y_M$ defined as $Y$ killed at its
last 0 before $\mathbf{e}$; $Y_M'$ is the Vervaat transform of $Y_M$.
This identity yields an explanation for the geometric distribution of $N_T$
\cite{K,T} and has numerous other applications. In particular, conditional on
$\{N_T=n\}$, and also on $\{N_T=n, T<a\}$, the ages and residual lifetimes of
the $n$ alive individuals at time $T$ are i.i.d.\ and independent of $n$. We
provide explicit formulae for this distribution and give a more general
application to outbreaks of antibiotic-resistant bacteria in the hospital.
|
In this work we present the first steps towards benchmarking isospin symmetry
breaking in ab initio nuclear theory for calculations of superallowed Fermi
$\beta$-decay. Using the valence-space in-medium similarity renormalization
group, we calculate the b and c coefficients of the isobaric multiplet mass
equation, starting from two different Hamiltonians constructed from chiral
effective field theory. We compare results to experimental measurements for all
T=1 isobaric analogue triplets of relevance to superallowed $\beta$-decay for
masses A=10 to A=74 and find an overall agreement within approximately 250 keV
of experimental data for both b and c coefficients. A greater level of
accuracy, however, is obtained by a phenomenological Skyrme interaction or a
classical charged-sphere estimate. Finally, we show that evolution of the
valence-space operator does not meaningfully improve the quality of the
coefficients with respect to experimental data, which indicates that
higher-order many-body effects are likely not responsible for the observed
discrepancies.
|