It has been observed in multiple lattice determinations of isovector axial
and pseudoscalar nucleon form factors that, although the partial conservation
of the axial-vector current (PCAC) is fulfilled at the level of correlation
functions, the corresponding relation for the form factors (sometimes called
the generalized Goldberger-Treiman relation in the literature) is violated
rather badly. In this work we trace this discrepancy back to excited-state
contributions and propose a new projection method that resolves this problem.
We demonstrate the efficacy of this method by computing the axial and
pseudoscalar form factors as well as related quantities on ensembles with two
flavors of improved Wilson fermions using pion masses down to 150 MeV. To this
end, we perform the $z$-expansion with analytically enforced asymptotic
behaviour and extrapolate to the physical point.
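For reference, one common convention for the PCAC relation at the form-factor
level (the generalized Goldberger-Treiman relation; normalizations vary between
collaborations, so this is an illustrative choice, with $m_N$ the nucleon mass
and $\hat m$ the average light-quark mass) reads
$$ m_N\, G_A(Q^2) \;-\; \frac{Q^2}{4 m_N}\, \tilde G_P(Q^2) \;=\; \hat m\, G_P(Q^2), $$
where $G_A$, $\tilde G_P$ and $G_P$ denote the axial, induced pseudoscalar and
pseudoscalar form factors, respectively; it is this relation that is badly
violated when excited-state contamination is not removed.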
|
This paper presents a thorough evaluation of a bistable system versus a
matched filter in detecting bipolar pulse signals. The detectability of the
bistable system can be optimized by adding noise, i.e. the stochastic resonance
(SR) phenomenon. This SR effect is also demonstrated by approximate statistical
detection theory of the bistable system and corresponding numerical
simulations. Furthermore, the performance comparison results between the
bistable system and the matched filter show that (a) the bistable system is
more robust than the matched filter in detecting signals with disturbed pulse
rates, and (b) the bistable system approaches the performance of the matched
filter in detecting signals with unknown arrival times, with markedly better
computational efficiency. These results demonstrate the potential
applicability of the bistable system in the field of signal detection.
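As a minimal illustration of the SR mechanism described above, the following
sketch integrates the standard overdamped double-well system
$\dot x = a x - b x^3 + s(t) + \xi(t)$ driven by a bipolar pulse train plus
Gaussian white noise (all parameter values are illustrative, not taken from
the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 1.0                 # double-well potential U(x) = -a x^2/2 + b x^4/4
A, T_pulse = 0.3, 50.0          # bipolar pulse amplitude and half-period
D, dt, N = 0.5, 0.01, 200_000   # noise intensity, time step, number of steps

x = np.zeros(N)
for k in range(1, N):
    t = k * dt
    s = A if (t // T_pulse) % 2 == 0 else -A        # bipolar pulse train
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x[k] = x[k-1] + dt * (a * x[k-1] - b * x[k-1]**3 + s) + noise
# Sweeping D and scoring, e.g., the sign agreement between x and the pulse
# train exposes the noise-optimized (SR) detectability discussed above.
```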
|
We construct Luzin-type subsets of the real line that are Rothberger in all
finite powers, with a non-Menger product. To this end, we use a purely
combinatorial approach which allows us to weaken the assumptions used earlier
to construct sets with analogous properties. Our assumptions hold, e.g., in
the Random model, where the previously known category-theoretic methods fail.
|
It is commonly believed that grand unified theories (GUTs) predict proton
decay. This is because the exchange of extra GUT gauge bosons gives rise to
dimension 6 proton decay operators. We show that there exists a class of GUTs
in which these operators are absent. Many string and supergravity models in the
literature belong to this class.
|
The measurement-result-conditioned evolution of a system (e.g. an atom) with
spontaneous emissions of photons is well described by the quantum trajectory
(QT) theory. In this work we generalize the associated QT theory from
infinitely wide bandwidth Markovian environment to the case of finite bandwidth
non-Markovian environment. In particular, we generalize the treatment to an
arbitrary spectrum, not restricted to the specific Lorentzian case. We
rigorously prove the general existence of a perfect scaling behavior jointly
defined by the bandwidth of the environment and the time interval between
successive photon detections. For a couple of examples, we obtain analytic
results that facilitate QT simulations based on the Monte-Carlo algorithm. For
cases where analytic results are not available, a numerical scheme is proposed
for practical simulations.
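For orientation, the Markovian (infinite-bandwidth) limit that this work
generalizes can be simulated with the textbook Monte-Carlo wavefunction
(quantum-jump) scheme; below is a minimal sketch for a two-level emitter with
decay rate gamma (illustrative parameters; the finite-bandwidth scaling
behavior derived in the paper is not captured by this Markovian limit):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, dt, T = 1.0, 1e-3, 5.0               # decay rate, time step, total time
sm = np.array([[0, 1], [0, 0]], complex)    # lowering operator |g><e|
n_op = sm.conj().T @ sm                     # excited-state projector
H_eff = -0.5j * gamma * n_op                # non-Hermitian effective Hamiltonian

psi = np.array([0, 1], complex)             # start in the excited state |e>
detections = []
for step in range(int(T / dt)):
    p_jump = gamma * dt * np.vdot(psi, n_op @ psi).real
    if rng.random() < p_jump:               # photon detected: quantum jump
        psi = sm @ psi
        detections.append(step * dt)
    else:                                   # no detection: non-unitary drift
        psi = psi - 1j * dt * (H_eff @ psi)
    psi /= np.linalg.norm(psi)
```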
|
We calculate all transport coefficients of second order transient
hydrodynamics in two effective kinetic theory models: a hadron-resonance gas
and a quasiparticle model with thermal masses tuned to reproduce QCD
thermodynamics. We compare the corresponding results with calculations for an
ultrarelativistic single-component gas, which are widely employed in
hydrodynamic simulations of heavy-ion collisions. We find that both of these
effective models display a qualitatively different normalized bulk viscosity
when compared to the calculation for the single-component gas. Indeed,
$\zeta/[\tau_{\Pi}(\varepsilon_{0} + P_{0})] \simeq 16.91(1/3-c_{s}^{2})^{2}$
for the hadron-resonance gas model, and $\zeta/[\tau_{\Pi}(\varepsilon_{0} +
P_{0})] \simeq 5 (1/3-c_{s}^{2})$ for the quasiparticle model. Differences are
also observed for many second-order transport coefficients, especially those
related to the bulk viscous pressure. The transport coefficients derived are
shown to be consistent with fundamental linear stability and causality
conditions.
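To make the qualitative difference explicit, the two quoted parametrizations
can be compared numerically; the quadratic (hadron-resonance gas) form is
strongly suppressed relative to the linear (quasiparticle) form near
conformality (the values of $c_s^2$ below are chosen purely for illustration):

```python
for cs2 in (0.20, 0.25, 0.30):
    hrg = 16.91 * (1/3 - cs2) ** 2   # hadron-resonance gas: quadratic
    qp = 5.0 * (1/3 - cs2)           # quasiparticle model: linear
    print(f"cs^2={cs2:.2f}: zeta/[tau_Pi(e0+P0)] HRG={hrg:.4f}, QP={qp:.4f}")
```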
|
Assuming perfect collision efficiency, we demonstrate that turbulence can
initiate and sustain the rapid growth of very small water droplets in air even
when these droplets are too small to cluster, and even without having to take
gravity and small-scale intermittency into account. This is because the range
of local Stokes numbers of identical droplets in the turbulent flow field is
broad enough even when small-scale intermittency is neglected. This
demonstration is given for turbulence which is one order of magnitude less
intense than is typical in warm clouds but with a volume fraction which, even
though small, is nevertheless large enough for an estimated a priori frequency
of collisions to be ten times larger than in warm clouds. However, the time of
growth in these conditions turns out to be one order of magnitude smaller than
in warm clouds.
|
Representations in the form of Symmetric Positive Definite (SPD) matrices
have been popularized in a variety of visual learning applications due to their
demonstrated ability to capture rich second-order statistics of visual data.
There exist several similarity measures for comparing SPD matrices with
documented benefits. However, selecting an appropriate measure for a given
problem remains a challenge and in most cases, is the result of a
trial-and-error process. In this paper, we propose to learn similarity measures
in a data-driven manner. To this end, we capitalize on the $\alpha\beta$-log-det
divergence, a meta-divergence parametrized by scalars $\alpha$ and
$\beta$ that subsumes a wide family of popular information divergences on SPD
matrices for distinct and discrete values of these parameters. Our key idea is
to cast these parameters in a continuum and learn them from data. We
systematically extend this idea to learn vector-valued parameters, thereby
increasing the expressiveness of the underlying non-linear measure. We conjoin
the divergence learning problem with several standard tasks in machine
learning, including supervised discriminative dictionary learning and
unsupervised SPD matrix clustering. We present Riemannian gradient descent
schemes for optimizing our formulations efficiently, and show the usefulness of
our method on eight standard computer vision tasks.
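As a concrete point of reference, the scalar $\alpha\beta$-log-det divergence
of Cichocki et al. can be evaluated from the generalized eigenvalues of the
SPD pair; a minimal sketch (assuming $\alpha$, $\beta$ and $\alpha+\beta$ are
nonzero; the vector-valued extension learned in the paper is not shown):

```python
import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, alpha, beta):
    """Scalar alpha-beta log-det divergence between SPD matrices P and Q."""
    lam = eigh(P, Q, eigvals_only=True)   # generalized eigenvalues of (P, Q)
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.log(terms).sum() / (alpha * beta)
```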
|
The matter-antimatter asymmetry is one of the greatest challenges in modern
physics. The universe, including this paper and even the reader, seems to be
built up of ordinary matter only. Theoretically, the well-known Sakharov
conditions remain the solid framework explaining how matter became dominant
over antimatter as the universe cooled down and expanded. On the other hand,
the standard model of elementary particles apparently fails to fulfill at
least two of these conditions.
In this work, we introduce a systematic study of the antiparticle-to-particle
ratios measured in various $NN$ and $AA$ collisions over the last three
decades. The available experimental facilities are able to perform nuclear
collisions in which the antiparticle-to-particle ratio rises from $\sim 0\%$
at AGS to $\sim 100\%$ at LHC. Assuming that the final
state of hadronization in the nuclear collisions takes place along the
freezeout line, which is defined by a constant entropy density, various
antiparticle-to-particle ratios are studied in the framework of the hadron
resonance gas (HRG) model. Implementing a modified phase space and
distribution function in the grand-canonical ensemble and taking into account
the experimental acceptance, the antiparticle-to-particle ratios over the
whole range of center-of-mass energies are very well reproduced by the HRG
model. Furthermore, the antiproton-to-proton ratios measured by ALICE in $pp$
collisions are also very well described by the HRG model. This suggests that
the LHC heavy-ion program will produce the same particle ratios as the $pp$
program, implying that the dynamics and evolution of the system do not depend
on the initial conditions. The ratios for both bosons and baryons get very
close to unity, indicating that the matter-antimatter asymmetry nearly
vanishes at the LHC.
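As a back-of-the-envelope illustration of the energy dependence described
above, the Boltzmann-limit estimate $\bar p/p \approx e^{-2\mu_B/T}$ can be
evaluated along a standard freeze-out parametrization (the coefficients below
are the commonly used Cleymans et al. (2006) values, quoted here as an
assumption rather than the paper's own fit):

```python
import numpy as np

def mu_B(sqrt_s):     # baryochemical potential in GeV along the freeze-out curve
    return 1.308 / (1.0 + 0.273 * sqrt_s)

def T_freezeout(mu):  # freeze-out temperature in GeV
    return 0.166 - 0.139 * mu**2 - 0.053 * mu**4

for s in (4.9, 17.3, 200.0, 2760.0):   # roughly AGS, SPS, RHIC, LHC energies
    mu = mu_B(s)
    print(f"sqrt(s)={s:7.1f} GeV  pbar/p ~ {np.exp(-2*mu/T_freezeout(mu)):.3f}")
```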
|
We prove that for lambda = beth_omega, or just lambda a strong limit singular
cardinal of cofinality aleph_0, if there is a universal member in the class
K^lf_lambda of locally finite groups of cardinality lambda, then there is a
canonical one (parallel to special models for elementary classes, which are
the replacement for universal homogeneous and saturated models in cardinals
lambda with lambda = lambda^{<lambda}).
For this, we rely on the existence of enough indecomposable such groups, as
proved in "Density of indecomposable locally finite groups". We also more
generally deal with the existence of universal members in general classes for
such cardinals.
|
We present a method for the computation of the variance of cosmic microwave
background (CMB) temperature maps on azimuthally symmetric patches using a fast
convolution approach. As an example of the application of the method, we show
results for the search for concentric rings with unusual variance in the 7-year
WMAP data. We re-analyse claims concerning the unusual variance profile of
rings centred at two locations on the sky that have recently drawn special
attention in the context of the conformal cyclic cosmology scenario proposed by
Penrose (2009). We extend this analysis to rings with larger radii and centred
on other points of the sky. Using the fast convolution technique enables us to
perform this search with higher resolution and a wider range of radii than in
previous studies. We show that for one of the two special points rings with
radii larger than 10 degrees have systematically lower variance in comparison
to the concordance LambdaCDM model predictions. However, we show that this
deviation is caused by the multipoles up to order l=7. Therefore, the deficit
of power for concentric rings with larger radii is yet another manifestation of
the well-known anomalous CMB distribution on large angular scales. Furthermore,
low-variance rings can be easily found centred on other points in the sky. In
addition, we also show the results of a search for rings with extremely high
variance. As for the low-variance rings, some anomalies seem to be related to
the anomalous distribution of the low-order multipoles of the WMAP CMB maps.
As such, our results are not consistent with the conformal cyclic cosmology
scenario.
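For comparison with the fast-convolution method used in the paper, a direct
(brute-force) estimate of the ring-variance profile can be written in a few
lines with healpy; this naive version is what the fast convolution
accelerates, and the annulus width below is an arbitrary illustrative choice:

```python
import numpy as np
import healpy as hp

def ring_variance_profile(cmb_map, theta, phi, radii_deg, width_deg=1.0):
    """Variance of map pixels in annuli centred on (theta, phi) [radians]."""
    nside = hp.get_nside(cmb_map)
    centre = hp.ang2vec(theta, phi)
    profile = []
    for r in radii_deg:
        inner = hp.query_disc(nside, centre, np.radians(r))
        outer = hp.query_disc(nside, centre, np.radians(r + width_deg))
        ring = np.setdiff1d(outer, inner)       # pixels in the annulus
        profile.append(np.var(cmb_map[ring]))
    return np.array(profile)
```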
|
We determine the shared information that can be extracted from time-bin
entangled photons using frame encoding. We consider photons generated by a
general down-conversion source and also model losses, dark counts and the
effects of multiple photons within each frame. Furthermore, we describe a
procedure for including other imperfections such as after-pulsing, detector
dead-times and jitter. The results are illustrated by deriving analytic
expressions for the maximum information that can be extracted from
high-dimensional time-bin entangled photons generated by a spontaneous
parametric down-conversion. A key finding is that, under realistic conditions
and using standard SPAD detectors, one can still choose the frame size so as
to extract over 10 bits per photon. These results are thus useful for
experiments on high-dimensional quantum key distribution systems.
|
We develop a unified theoretical picture for excitations in Mott systems,
portraying both the heavy quasiparticle excitations and the Hubbard bands as
features of an emergent Fermi liquid state formed in an extended Hilbert space,
which is non-perturbatively connected to the physical system. This observation
sheds light on the fact that even the incoherent excitations in strongly
correlated matter often display a well defined Bloch character, with pronounced
momentum dispersion. Furthermore, it indicates that the Mott point can be
viewed as a topological transition, where the number of distinct dispersing
bands displays a sudden change at the critical point. Our results, obtained
from an appropriate variational principle, also display remarkable
quantitative accuracy. This opens an exciting avenue for fast realistic modeling of strongly
correlated materials.
|
The field of magnonics offers a new type of low-power information processing,
in which magnons, the quanta of spin waves, carry and process data instead of
electrons. Many magnonic devices have been demonstrated recently, but the
development of each of them requires specialized investigation and, usually,
one device design is suitable for only one function. Here, we introduce the
method of inverse-design magnonics, in which any functionality can be specified
first, and a feedback-based computational algorithm is used to obtain the
device design. Our proof-of-concept prototype is based on a rectangular
ferromagnetic area that can be patterned using square-shaped voids. To
demonstrate the universality of this approach, we explore linear, nonlinear and
nonreciprocal magnonic functionalities and use the same algorithm to create a
magnonic (de-)multiplexer, a nonlinear switch and a circulator. Thus,
inverse-design magnonics can be used to develop highly efficient rf
applications as well as Boolean and neuromorphic computing building blocks.
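The abstract does not spell out the feedback algorithm; as one illustration of
how such an inverse-design loop can operate, below is a minimal greedy
direct-binary-search sketch over a binary void pattern. The `objective` here
is a hypothetical stand-in (a toy target-matching score); in an actual device
it would be a spin-wave simulation returning the desired figure of merit:

```python
import numpy as np

rng = np.random.default_rng(2)
pattern = rng.integers(0, 2, size=(10, 10))   # 1 = void etched, 0 = film intact
target = rng.integers(0, 2, size=(10, 10))    # toy stand-in for a design goal

def objective(p):
    # Placeholder figure of merit; in practice: simulate spin-wave transport
    # through the patterned film and score the requested functionality.
    return -np.sum(p != target)

best = objective(pattern)
improved = True
while improved:                                # greedy direct binary search
    improved = False
    for i, j in np.ndindex(pattern.shape):
        pattern[i, j] ^= 1                     # flip one void
        score = objective(pattern)
        if score > best:
            best, improved = score, True       # keep the improving flip
        else:
            pattern[i, j] ^= 1                 # revert otherwise
```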
|
The extreme degrees of the colored Jones polynomial of any link are bounded
in terms of concrete data from any link diagram. It is known that these bounds
are sharp for semi-adequate diagrams. One of the goals of this paper is to show
the converse: if the bounds are sharp, then the diagram is semi-adequate. As a
result, we use colored Jones link polynomials to extract an invariant that
detects semi-adequate links, and we discuss some applications.
|
Suppose a database containing $M$ records is replicated across $N$ servers,
and a user wants to privately retrieve one record by accessing the servers such
that the identity of the retrieved record is kept secret from any collusion of
up to $T$ servers.
A scheme designed for this purpose is called a private information retrieval
(PIR) scheme. In practice, capacity-achieving and small sub-packetization are
both desired for PIR schemes, because the former implies the highest download
rate and the latter usually means simple realization.
For general values of $N,T,M$, the only known capacity-achieving PIR scheme
was designed by Sun and Jafar in 2016 with sub-packetization $N^M$. In this
paper, we design a linear capacity-achieving PIR scheme with much smaller
sub-packetization $dn^{M-1}$, where $d={\rm gcd}(N,T)$ and $n=N/d$.
Furthermore, we prove that for any linear capacity-achieving PIR scheme it must
have sub-packetization no less than $dn^{M-1}$, implying our scheme has the
optimal sub-packetization. Moreover, compared with Sun and Jafar's scheme, our
scheme reduces the field size by a factor of $Nd^{M-2}$.
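The gain in sub-packetization is easy to quantify; for illustrative parameters
(not taken from the paper):

```python
from math import gcd

N, T, M = 6, 4, 3                   # servers, collusion threshold, records
d = gcd(N, T); n = N // d
print("Sun-Jafar:", N**M)           # 216
print("this paper:", d * n**(M-1))  # 18
```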
|
In this paper, we reformulate the non-convex $\ell_q$-norm minimization
problem with $q\in(0,1)$ as a two-step problem consisting of one convex and
one non-convex subproblem, and we propose a novel iterative algorithm called
QISTA ($\ell_q$-ISTA) to solve the $\left(\ell_q\right)$-problem. By taking
advantage of deep learning in accelerating optimization algorithms, together
with a speedup strategy that uses the momentum from all previous layers in
the network, we propose a learning-based method, called QISTA-Net-s, to solve
the sparse signal reconstruction problem. Extensive experimental comparisons
demonstrate that QISTA-Net-s yields better reconstruction quality than
state-of-the-art $\ell_1$-norm optimization (plus learning) algorithms, even
when the original sparse signal is noisy. On the other hand, based on the
network architecture associated with QISTA and incorporating convolutional
layers, we propose QISTA-Net-n for solving the image compressed sensing (CS)
problem; its reconstruction performance still surpasses most state-of-the-art
natural-image reconstruction methods. QISTA-Net-n is designed by unfolding
QISTA and adding the convolutional operator as the dictionary, which makes
QISTA-Net-n interpretable. We provide complete experimental results showing
that QISTA-Net-s and QISTA-Net-n achieve better reconstruction performance
than the competing methods.
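For context, the convex building block that QISTA generalizes is the classical
ISTA iteration with soft-thresholding; a minimal sketch (plain $\ell_1$
version, illustrative only; QISTA replaces the shrinkage with an
$\ell_q$-inspired rule):

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista(A, y, lam=0.1, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x
```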
|
We study homogeneous quenches in integrable quantum field theory where the
initial state contains zero-momentum particles. We demonstrate that the
two-particle pair amplitude necessarily has a singularity at the two-particle
threshold. Although the explicit discussion is carried out for special
(integrable) initial states, we argue that the singularity is inevitably
present and is a generic feature of homogeneous quenches involving the creation
of zero-momentum particles. We also identify the singularity in quenches in the
Ising model across the quantum critical point, and compute it perturbatively in
phase quenches in the quantum sine-Gordon model which are potentially relevant
to experiments. We then construct the explicit time dependence of one-point
functions using a linked cluster expansion regulated by a finite volume
parameter. We find that the secular contribution normally linear in time is
modified by a $t\ln t$ term. We additionally encounter a novel type of secular
contribution, which is shown to be related to parametric resonance. It is an
interesting open question to resum the new contributions and to establish
their directly observable consequences in experiments or numerical simulations.
|
The perfect transmission of charge carriers through potential barriers in
graphene (Klein tunneling) is a direct consequence of the Dirac equation that
governs the low-energy carrier dynamics. As a result, localized states do not
exist in unpatterned graphene, but quasi-bound states \emph{can} occur for
potentials with closed integrable dynamics. Here, we report the observation of
resonance states in a photo-switchable self-assembled molecular (SAM)-graphene
hybrid. Conductive AFM measurements performed at room temperature reveal strong
current resonances, the strength of which can be reversibly gated \textit{on-}
and \textit{off-} by optically switching the molecular conformation of the
mSAM. Comparisons of the voltage separation between current resonances ($\sim
70$--$120$ mV) with solutions of the Dirac equation indicate that the radius of
the gating potential is $\sim 7 \pm 2$ nm with a strength $\geq 0.5$ eV. Our
results and methods might provide a route toward \emph{optically programmable}
carrier dynamics and transport in graphene nano-materials.
|
We take into account higher-derivative $R^4$ corrections in M-theory and
construct quantum black hole and black string solutions in 11 dimensions up to
next-to-leading order. The quantum black string stretches along the 11th
direction, and the Gregory-Laflamme instability is examined at the quantum
level. The thermodynamics of the boosted quantum black hole and black string
is also discussed. In particular, we take the near-horizon limit of the
quantum black string and investigate its instability quantitatively.
|
We cross-matched 1.3 million white dwarf (WD) candidates from Gaia EDR3 with
spectral data from LAMOST DR7 within 3 arcsec. Applying the machine-learning
approach described in our previous work, we spectroscopically identified 6,190
WD objects after visual inspection, among which 1,496 targets are confirmed
for the first time. Thirty-two detailed classes were adopted for them,
including but not limited to DAB and DB+M. We estimated the atmospheric
parameters for the DA- and DB-type WDs using the Levenberg-Marquardt (LM)
least-squares algorithm. Finally, a catalog of WD spectra from LAMOST is
provided online.
|
We retrace Davenport's solution to Wahba's classic problem of aligning two
pointclouds using the formalism of Geometric Algebra (GA). GA proves to be a
natural backdrop for this problem involving three-dimensional rotations due to
the isomorphism between unit-length quaternions and rotors. While the solution
to this problem is not a new result, it is hoped that its treatment in GA will
have tutorial value as well as open the door to addressing more complex
problems in a similar way.
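For readers more familiar with the matrix form than with GA, Davenport's
eigenvalue solution can be sketched compactly as follows (quaternion ordering
and sign conventions vary across references, so treat this as an illustrative
convention, not the paper's GA formulation):

```python
import numpy as np

def davenport_q_method(ref, body, weights):
    """Quaternion [qx, qy, qz, qw] best rotating body[i] onto ref[i]
    in the weighted least-squares (Wahba) sense."""
    B = sum(w * np.outer(a, b) for a, b, w in zip(ref, body, weights))
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = np.trace(B)
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
```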
|
We investigate a reaction of boron trichloride (BCl3) with iron(III)
hydroxide (Fe(OH)3) by ab initio quantum chemical calculations as a simple
model for a reaction of iron impurities in BCl3 gas. We also examine a
reaction with water. We find that compounds such as Fe(Cl)(OBCl2)2(OHBCl2) and
Fe(Cl)2(OBCl2)(OHBCl2) are formed while producing HCl, and the reaction paths
to them are revealed. We also analyze the stabilization mechanism of these
paths using the newly developed interaction energy density derived from the
electronic stress tensor in the framework of Regional DFT (Density Functional
Theory) and Rigged QED (Quantum ElectroDynamics).
|
Entities, as important carriers of real-world knowledge, play a key role in
many NLP tasks. We focus on incorporating entity knowledge into an
encoder-decoder framework for informative text generation. Existing approaches
tried to index, retrieve, and read external documents as evidence, but they
suffered from a large computational overhead. In this work, we propose an
encoder-decoder framework with an entity memory, namely EDMem. The entity
knowledge is stored in the memory as latent representations, and the memory is
pre-trained on Wikipedia along with encoder-decoder parameters. To precisely
generate entity names, we design three decoding methods to constrain entity
generation by linking entities in the memory. EDMem is a unified framework that
can be used on various entity-intensive question answering and generation
tasks. Extensive experimental results show that EDMem outperforms both
memory-based auto-encoder models and non-memory encoder-decoder models.
|
Proactive decision support (PDS) helps in improving the decision making
experience of human decision makers in human-in-the-loop planning environments.
Here both the quality of the decisions and the ease of making them are
enhanced. In this regard, we propose a PDS framework, named RADAR, based on the
research in Automated Planning in AI, that aids the human decision maker with
her plan to achieve her goals by providing alerts on: whether such a plan can
succeed at all, whether there exist any resource constraints that may foil her
plan, etc. This is achieved by generating and analyzing the landmarks that must
be accomplished by any successful plan on the way to achieving the goals. Note
that this approach also supports naturalistic decision making, which is
increasingly acknowledged as a necessary element of proactive decision support,
since it only aids the human decision maker through suggestions and alerts rather than
enforcing fixed plans or decisions. We demonstrate the utility of the proposed
framework through search-and-rescue examples in a fire-fighting domain.
|
We give a new definition of the dimension spectrum for non-regular spectral
triples and compute the exact (i.e., not only the asymptotics) heat trace of
the standard Podles spheres $S^2_q$ for $0<q<1$, study its behavior when
$q\to 1$, and fully compute its exact spectral action for an explicit class
of cut-off functions.
|
Humans can easily describe what they see in a coherent way and at varying
levels of detail. However, existing approaches for automatic video description
are mainly focused on single sentence generation and produce descriptions at a
fixed level of detail. In this paper, we address both of these limitations: for
a variable level of detail we produce coherent multi-sentence descriptions of
complex videos. We follow a two-step approach where we first learn to predict a
semantic representation (SR) from video and then generate natural language
descriptions from the SR. To produce consistent multi-sentence descriptions, we
model across-sentence consistency at the level of the SR by enforcing a
consistent topic. We also contribute both to the visual recognition of objects,
by proposing a hand-centric approach, and to the robust generation of
sentences, by using a word lattice. Human judges rate our multi-sentence
descriptions as more readable, correct, and relevant than related work. To
understand the difference between more detailed and shorter descriptions, we
collect and analyze a video description corpus of three levels of detail.
|
Motivation: The use of electric current for cleaning viral diseases in plants
is well known, yet a deep knowledge of the theoretical basis of this
phenomenon is not yet available. A description of the real causes of
nucleoprotein inactivation will contribute to the optimization of further
experiments, in order to obtain ever more efficient cleaning methodologies for
the starting material supplies of the vegetal species micropropagation
process. The energy dissipated as heat will depend on the specific treatment
applied and on the physical properties of the vegetal tissue and the viral
nucleoprotein. Results: A hyperbolic dependence was found between the
absorbance output values, obtained during a diagnostic micro-ELISA experiment,
and the electrical power (in watts) applied to each explant. This demonstrates
that the nature of the cleaning process is essentially the denaturation of the
viral nucleoprotein by the heat to which it is exposed in the thermal bath
that the vegetal tissue constitutes. Our results are consistent with a
mathematical model developed from the theoretical framework of harmonic
oscillators, in which molecular machines (viral nucleoproteins) are redefined.
|
Recent observations have shown that a growing number of the most massive
Galactic globular clusters contain multiple populations of stars with different
[Fe/H] and neutron-capture element abundances. NGC 6273 has only recently been
recognized as a member of this "iron-complex" cluster class, and we provide
here a chemical and kinematic analysis of > 300 red giant branch (RGB) and
asymptotic giant branch (AGB) member stars using high resolution spectra
obtained with the Magellan-M2FS and VLT-FLAMES instruments. Multiple lines of
evidence indicate that NGC 6273 possesses an intrinsic metallicity spread that
ranges from about [Fe/H] = -2 to -1 dex, and may include at least three
populations with different [Fe/H] values. The three populations identified here
contain separate first (Na/Al-poor) and second (Na/Al-rich) generation stars,
but a Mg-Al anti-correlation may only be present in stars with [Fe/H] > -1.65.
The strong correlation between [La/Eu] and [Fe/H] suggests that the s-process
must have dominated the heavy element enrichment at higher metallicities. A
small group of stars with low [alpha/Fe] is identified and may have been
accreted from a former surrounding field star population. The cluster's large
abundance variations are coupled with a complex, extended, and multimodal blue
horizontal branch (HB). The HB morphology and chemical abundances suggest that
NGC 6273 may have an origin that is similar to omega Cen and M 54.
|
The dynamics of the interaction between microcavities connected to a common
waveguide in a multiresonator quantum memory circuit is investigated. Optimum
conditions are identified for the use of the quantum memory, and a dynamic
picture of the energy exchange between the different microcavities is obtained.
|
Let $f=(f_1, f_2)$ be a regular sequence of affine curves in $\mathbb{C}^2$.
Under some reduction conditions achieved by composing with some polynomial
automorphisms of $\mathbb{C}^2$, we show that the intersection number of the
curves $(f_i)$ in $\mathbb{C}^2$ equals the coefficient of the leading term
$x^{n-1}$ in $g_2$, where $n=\deg f_i$ $(i=1, 2)$ and $(g_1, g_2)$ is the
unique solution of the equation $y{\mathcal J}(f)=g_1f_1+g_2f_2$ with $\deg
g_i\leq n-1$. So the well-known Jacobian problem is reduced to solving the
equation above.
Furthermore, by using the result above, we show that the Jacobian problem can
also be reduced to a special family of polynomial maps.
|
The paper considers the halting scheme for quantum Turing machines. The
scheme originally proposed by Deutsch appears to be correct, but not exactly as
originally intended. We discuss the result of Ozawa as well as the objections
raised by Myers, Kieu and Danos and others. Finally, the relationship of the
halting scheme to the quest for a universal quantum Turing machine is
considered.
|
We propose a protocol that, given a communication network, computes a
subnetwork such that, for every pair $(u,v)$ of nodes connected in the original
network, there is a minimum-energy path between $u$ and $v$ in the subnetwork
(where a minimum-energy path is one that allows messages to be transmitted with
a minimum use of energy). The network computed by our protocol is in general a
subnetwork of the one computed by the protocol given in [13]. Moreover, our
protocol is computationally simpler. We demonstrate the performance
improvements obtained by using the subnetwork computed by our protocol through
simulation.
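A centralized reference computation for the minimum-energy paths that the
(distributed) protocol must preserve is just Dijkstra's algorithm with link
energies as weights; a minimal sketch under that assumption:

```python
import heapq

def min_energy_costs(adj, src):
    """adj[u] = list of (v, energy_uv); returns the minimal transmit energy
    from src to every reachable node (Dijkstra)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, e in adj.get(u, ()):
            nd = d + e
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```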
|
We give geometric and algorithmic criteria for the existence of a proper
Galois closure for a codimension-one germ of a quasi-homogeneous foliation. We
recall this notion, recently introduced by B. Malgrange, and describe the
Galois envelope of a group of germs of analytic diffeomorphisms. The geometric
criteria are obtained from transverse analytic invariants, whereas the
algorithmic ones make use of formal normal forms.
|
Recurrent Neural Networks (RNNs) are theoretically Turing-complete and
established themselves as a dominant model for language processing. Yet,
uncertainty still remains regarding their language learning capabilities. In
this paper, we empirically evaluate the inductive learning capabilities of Long
Short-Term Memory networks, a popular extension of simple RNNs, to learn simple
formal languages, in particular $a^nb^n$, $a^nb^nc^n$, and $a^nb^nc^nd^n$. We
investigate the influence of various aspects of learning, such as training data
regimes and model capacity, on the generalization to unobserved samples. We
find striking differences in model performances under different training
settings and highlight the need for careful analysis and assessment when making
claims about the learning capabilities of neural network models.
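For concreteness, the training corpora for such experiments are
straightforward to construct; a minimal sketch of a sampler for the three
languages (the sequence-length regime is chosen arbitrarily here):

```python
import random

def sample(lang="anbn", n_max=50):
    n = random.randint(1, n_max)
    if lang == "anbn":
        return "a" * n + "b" * n
    if lang == "anbncn":
        return "a" * n + "b" * n + "c" * n
    if lang == "anbncndn":
        return "a" * n + "b" * n + "c" * n + "d" * n
    raise ValueError(lang)
# Generalization is then tested on strings with n beyond the training range.
```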
|
We prove that any real-analytic action of $SL(n,\mathbb{Z})$, $n\ge 3$, with
standard homotopy data that preserves an ergodic measure $\mu$ whose support
is not contained in a ball is analytically conjugate, on an open invariant
set, to the standard linear action on the complement of a finite union of
periodic orbits.
|
The brain solves the credit assignment problem remarkably well. For credit to
be assigned across neural networks they must, in principle, wait for specific
neural computations to finish. How the brain deals with this inherent locking
problem has remained unclear. Deep learning methods suffer from similar locking
constraints both on the forward and feedback phase. Recently, decoupled neural
interfaces (DNIs) were introduced as a solution to the forward and feedback
locking problems in deep networks. Here we propose that a specialised brain
region, the cerebellum, helps the cerebral cortex solve similar locking
problems akin to DNIs. To demonstrate the potential of this framework we
introduce a systems-level model in which a recurrent cortical network receives
online temporal feedback predictions from a cerebellar module. We test this
cortico-cerebellar recurrent neural network (ccRNN) model on a number of
sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition
and caption generation) that have been shown to be cerebellar-dependent. In all
tasks, we observe that ccRNNs facilitate learning while reducing ataxia-like
behaviours, consistent with classical experimental observations. Moreover, our
model also explains recent behavioural and neuronal observations while making
several testable predictions across multiple levels. Overall, our work offers a
novel perspective on the cerebellum as a brain-wide decoupling machine for
efficient credit assignment and opens a new avenue between deep learning and
neuroscience.
|
Plastic nanoparticles present technological opportunities and environmental
concerns, but measurement challenges impede product development and hazard
assessment. To meet these challenges, we advance a lateral nanoflow assay that
integrates complex nanofluidic replicas, optical localization microscopy, and
novel statistical analyses. We apply our sample-in-answer-out system to measure
polystyrene nanoparticles that sorb and carry hydrophobic fluorophores. An
elegant scaling of surface forces automates advection and dominates diffusion
to drive the analytical separation of colloidal nanoparticles by their steric
diameters. Reference nanoparticles, with a mean of 99 nm and a standard
deviation of 8.4 nm, test the unknown limits of silicone replicas to function
as separation matrices. New calibrations correct aberrations from microscope
and device, improving the accuracy of reducing single micrographs to joint
histograms of steric diameter and fluorescence intensity. A dimensional model
approaches the information limit of the system to discriminate size exclusion
from surface adsorption, yielding errors of the mean ranging from 0.2 nm to 2.3
nm and errors of the standard deviation ranging from 2.2 nm to 4.2 nm. A
hierarchical model accounts for metrological, optical, and dimensional
variability to reveal a fundamental structure-property relationship. Intensity
scales with diameter to the power of 3.6 +/- 0.5 at 95 % coverage, confounding
basic concepts of surface adsorption or volume absorption. Distributions of
fluorescivity - the product of the number density, absorption cross section,
and quantum yield of an ensemble of fluorophores - are ultrabroad and
asymmetric, limiting any inference from fluorescence intensity. This surprising
characterization of common nanoplastics resets expectations for optimizing
products, applying standards, and understanding byproducts.
|
For operators on Hilbert spaces of any dimension, we show that equivalence
after extension coincides with equivalence after one-sided extension, thus
obtaining a proof of their coincidence with Schur coupling. We also provide a
concrete description of this equivalence relation in several cases, in
particular for compact operators.
|
Hard sphere systems are often used to model simple fluids. The configuration
spaces of hard spheres in a three-dimensional torus modulo various symmetry
groups are comparatively simple, and could provide valuable information about
the nature of phase transitions. Specifically, the topological changes in the
configuration space as a function of packing fraction have been conjectured to
be related to the onset of first-order phase transitions. The critical
configurations for one to twelve spheres are sampled using a Morse-theoretic
approach, and are available in an online, interactive database. Explicit
triangulations are constructed for the configuration spaces of the two sphere
system, and their topological and geometric properties are studied. The
critical configurations are found to be associated with geometric changes to
the configuration space that connect previously distant regions and reduce the
configuration space diameter as measured by the commute time and diffusion
distances. The number of such critical configurations around the packing
fraction of the solid-liquid phase transition increases exponentially with the
number of spheres, suggesting that the onset of the first-order phase
transition in the thermodynamic limit is associated with a discontinuity in the
configuration space diameter.
|
The Metropolis-Hastings (MH) algorithm allows one to sample asymptotically
from any probability distribution $\pi$ admitting an unnormalised version that
can be evaluated pointwise. There has recently been much work devoted to the
development of variants of the MH update which can handle scenarios where such
an evaluation is impossible, and yet are guaranteed to sample from $\pi$
asymptotically. The most popular approach to have emerged is arguably the
pseudo-marginal (PM) MH algorithm, which substitutes an unbiased estimate of
an unnormalised version of $\pi$ for $\pi$. Alternative pseudo-marginal algorithms
relying instead on unbiased estimates of the MH acceptance ratio have also been
proposed. These algorithms can have better properties than standard PM
algorithms. Convergence properties of both classes of algorithms are known to
depend on the variability of the estimators involved and reduced variability is
guaranteed to decrease the asymptotic variance of ergodic averages and will
shorten the burn-in period, or convergence to equilibrium, in most scenarios of
interest. A simple approach to reduce variability, amenable to parallel
computations, consists of averaging independent estimators. However, while
averaging estimators of $\pi$ in a pseudo-marginal algorithm retains the
guarantee of sampling from $\pi$ asymptotically, naive averaging of acceptance
ratio estimates breaks detailed balance, leading to incorrect results. We
propose an original methodology which allows for a correct implementation of
this idea. We establish theoretical properties which parallel those discussed
above for standard PM algorithms. We demonstrate the interest of
the approach on various inference problems. In particular we show that
convergence to equilibrium can be significantly shortened, therefore offering
the possibility to reduce a user's waiting time in a generic fashion when a
parallel computing architecture is available.
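For contrast with the paper's contribution, the "safe" form of averaging is
easy to state: averaging independent unbiased estimates of the unnormalised
target itself (not of the acceptance ratio) and recycling the current estimate
preserves exactness. A minimal pseudo-marginal sketch under that assumption
(symmetric random-walk proposal; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

def pm_mh(pi_hat, x0, n_iter, sigma=0.5, n_est=8):
    """pi_hat(x): one non-negative unbiased estimate of the unnormalised target.
    Averaging n_est copies stays unbiased, so the chain still targets pi;
    naive averaging of *ratio* estimates, as noted above, does not."""
    x = x0
    zx = np.mean([pi_hat(x) for _ in range(n_est)])  # recycled estimate
    chain = [x]
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal()
        zy = np.mean([pi_hat(y) for _ in range(n_est)])
        if rng.random() * zx < zy:                   # MH accept/reject
            x, zx = y, zy
        chain.append(x)
    return np.array(chain)
```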
|
It is known that the realignment criterion is a necessary but not sufficient
criterion for separability in lower- as well as higher-dimensional systems. In
this work, we first consider a two-qubit system and derive a necessary and
sufficient condition, based on the realignment operation, for a particular
class of two-qubit states. Thus we partially solve the problem of an
if-and-only-if condition for this particular class of two-qubit states. We
show that the derived necessary and sufficient condition detects two-qubit
entangled states that are not detected by the realignment criterion. Next, we
discuss higher-dimensional systems and obtain a necessary condition on the
minimum singular value of the realigned matrix of $d\otimes d$ dimensional
separable states. Moreover, we provide a geometrical interpretation of the
derived separability criterion for $d\otimes d$ dimensional systems.
Furthermore, we show that our criterion may also detect bound entangled
states. The entanglement detection criterion studied here is beneficial in the
sense that it requires calculating only the minimum singular value of the
realigned matrix, whereas the realignment criterion requires all singular
values of the realigned matrix. Thus, our criterion has a computational
advantage over the realignment criterion.
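For reference, the realignment operation itself is a simple index reshuffle,
after which the singular values used by both criteria are directly available;
a minimal sketch, checked on a Bell state:

```python
import numpy as np

def realign(rho, d):
    """Realignment of a d x d bipartite state: R[(i,j),(k,l)] = rho[(i,k),(j,l)]."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

d = 2
bell = np.zeros((4, 4))
bell[np.ix_([0, 3], [0, 3])] = 0.5         # |Phi+><Phi+| in the {00,01,10,11} basis
s = np.linalg.svd(realign(bell, d), compute_uv=False)
print(s.min(), s.sum())                    # realignment criterion: sum > 1 flags
                                           # entanglement; our test uses s.min()
```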
|
Anomaly detection is a fundamental problem in computer vision with many
real-world applications. Given a wide range of images belonging to the normal
class, drawn from some distribution, the objective of this task is to
construct a model that detects out-of-distribution images belonging to
abnormal instances. Semi-supervised Generative Adversarial Network (GAN)-based
methods have been gaining popularity in the anomaly detection task recently.
However, the training process of GANs is still unstable and challenging. To
solve these issues, a novel adversarial dual autoencoder network is proposed,
in which the underlying structure of the training data is not only captured in
the latent feature space, but can also be further restricted in the space of
latent representations in a discriminant manner, leading to a more accurate
detector. In addition, the auxiliary autoencoder, regarded as a discriminator,
yields a more stable training process. Experiments show that our model
achieves state-of-the-art results on the MNIST and CIFAR10 datasets as well as
the GTSRB stop signs dataset.
|
Astrophotonics is the next-generation approach that provides the means to
miniaturize near-infrared (NIR) spectrometers for upcoming large telescopes and
make them more robust and inexpensive. The target requirements for our
spectrograph are: a resolving power of about 3000, wide spectral range (J and H
bands), free spectral range of about 30 nm, high on-chip throughput of about
80% (-1dB) and low crosstalk (high contrast ratio) between adjacent on-chip
wavelength channels of less than 1% (-20dB). A promising photonic technology to
achieve these requirements is Arrayed Waveguide Gratings (AWGs). We have
developed our first generation of AWG devices using a silica-on-silicon
substrate with a very thin layer of silicon-nitride in the core of our
waveguides. The waveguide bending losses are minimized by optimizing the
geometry of the waveguides. Our first generation of AWG devices are designed
for H band and have a resolving power of around 1500 and free spectral range of
about 10 nm around a central wavelength of 1600 nm. The devices have a
footprint of only 12 mm x 6 mm. They are broadband (1450-1650 nm), have a peak
on-chip throughput of about 80% (-1 dB) and contrast ratio of about 1.5% (-18
dB). These results confirm the robustness of our design, fabrication and
simulation methods. Currently, the devices are designed for Transverse Electric
(TE) polarization and all the results are for TE mode. We are developing
separate J- and H-band AWGs with higher resolving power, higher throughput and
lower crosstalk over a wider free spectral range to make them better suited for
astronomical applications.
|
Starting from an intrinsic geometric characterization of de Sitter timelike
and lightlike geodesics we give a new description of the conserved quantities
associated with classical free particles on the de Sitter manifold. These
quantities allow for a natural discussion of classical pointlike scattering and
decay processes. We also provide an intrinsic definition of energy of a
classical de Sitter particle and discuss its different expressions in various
local coordinate systems and their relations with earlier definitions found in
the literature.
|
The detection of gravitational waves from coalescing binary neutron stars
represents another milestone in gravitational-wave astronomy. However, since
LIGO is currently not as sensitive to the merger/ringdown part of the waveform,
the possibility that such signals are produced by a black hole-neutron star
binary cannot be easily ruled out without appealing to assumptions about the
underlying compact object populations. We review a few astrophysical channels
that might produce black holes below 3 $M_{\odot}$ (roughly the upper bound on
the maximum mass of a neutron star), as well as existing constraints for these
channels. We show that, due to the uncertainty in the neutron star equation of
state, it is difficult to distinguish gravitational waves from a binary neutron
star system, from those of a black hole-neutron star system with the same
component masses, assuming Advanced LIGO sensitivity. This degeneracy can be
broken by accumulating statistics from many events to better constrain the
equation of state, or by third-generation detectors with higher sensitivity to
the late-inspiral to post-merger signal. We also discuss the possible differences
in electromagnetic counterparts between binary neutron star and low mass black
hole-neutron star mergers, arguing that it will be challenging to definitively
distinguish the two without better understanding of the underlying
astrophysical processes.
|
This paper addresses fundamental issues on the nature of the concepts and
structures of fuzzy logic, focusing, in particular, on the conceptual and
functional differences that exist between probabilistic and possibilistic
approaches. A semantic model provides the basic framework to define
possibilistic structures and concepts by means of a function that quantifies
proximity, closeness, or resemblance between pairs of possible worlds. The
resulting model is a natural extension, based on multiple conceivability
relations, of the modal logic concepts of necessity and possibility. By
contrast, chance-oriented probabilistic concepts and structures rely on
measures of set extension that quantify the proportion of possible worlds where
a proposition is true. Resemblance between possible worlds is quantified by a
generalized similarity relation: a function that assigns a number between 0 and
1 to every pair of possible worlds. Using this similarity relation, which is a
form of numerical complement of a classic metric or distance, it is possible to
define and interpret the major constructs and methods of fuzzy logic:
conditional and unconditioned possibility and necessity distributions and the
generalized modus ponens of Zadeh.
|
In this paper, the author proves that $A^4 + hB^4 = C^4 + hD^4$ always has
integral solutions for $h < 20000$. We then conjecture that the equation $A^4 +
hB^4 = C^4 + hD^4$ always has integral solutions.
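A direct search makes the statement concrete; the sketch below looks for
nontrivial coincidences $A^4 + hB^4 = C^4 + hD^4$ with $(A,B)\neq(C,D)$ by
hashing values (the search bound is an arbitrary illustrative choice):

```python
def find_solution(h, bound=200):
    """Return a nontrivial (A, B, C, D) with A^4 + h B^4 = C^4 + h D^4, or None.
    Note: for h = 1 the swap (B, A) is a trivial coincidence; exclude it if needed."""
    seen = {}
    for a in range(bound + 1):
        for b in range(bound + 1):
            v = a**4 + h * b**4
            if v in seen and seen[v] != (a, b):
                c, d = seen[v]
                return a, b, c, d
            seen.setdefault(v, (a, b))
    return None
```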
|
This paper presents a simple analytical framework for the dynamic response of
cirrus to a local radiative flux convergence, expressible in terms of three
independent modes of cloud evolution. Horizontally narrow and tenuous clouds
within a stable environment adjust to radiative heating by ascending gradually
across isentropes while spreading sufficiently fast so as to keep isentropic
surfaces nearly flat. More optically dense clouds experience very concentrated
heating, and if they are also very broad, they develop a convecting mixed
layer. Along-isentropic spreading still occurs, but in the form of turbulent
density currents rather than laminar flows. A third adjustment mode relates to
evaporation, which erodes cloudy air as it lofts. The dominant mode is
determined from two dimensionless numbers, whose predictive power is shown in
comparisons with high resolution numerical cloud simulations. The power and
simplicity of the approach hints that fast, sub-grid scale radiative-dynamic
atmospheric interactions might be efficiently parameterized within slower,
coarse-grid climate models.
|
A paramagnetic Meissner effect (PME) was observed in single-crystal
Yb(3)Rh(4)Sn(13). While field-cooling the sample, at the onset of the
superconducting transition at 7.92 K, the DC magnetization first goes through
a minimum at 7.85 K, followed by the peak of the PME signal at 7.6 K, and then
crosses over to the diamagnetic state. The magnetization vs. field curves are
reversible above the minimum and acquire a pinning characteristic below 7.85
K. The minimum can be attributed to the opposing contribution to the DC
magnetization signal from surface superconductivity. The subsequent flux
compression while cooling the sample below 7.85 K shows up as a rapid increase
in the DC magnetization signal. The PME signal is actually the diamagnetism
opposing the flux compression.
|
We discuss BeppoSAX observations and archive ASCA data of NGC 7679, a nearby,
nearly face-on SB0 galaxy in which starburst and AGN activities coexist. The
X-ray observations reveal a bright (L_{0.1-50 keV} \sim 2.9 \times 10^{43} erg
s^{-1}) and variable source having a minimum observed doubling/halving time
scale of \sim 10 - 20 ksec. A simple power law with photon index of \Gamma \sim
1.75 and small absorption (N_H < 4\times 10^{20} cm^{-2}) can reproduce the NGC
7679 spectrum from 0.1 up to 50 keV. These X-ray properties are unambiguous
signs of Seyfert 1 activity in the nucleus of NGC 7679. The starburst activity,
revealed by the IR emission, optical spectroscopy and H\alpha imaging, and
dominating in the optical and IR bands, is clearly overwhelmed by the AGN in
the X-ray band. Although, at first glance, this is similar to what is observed
in other starburst-AGN galaxies (e.g. NGC 6240, NGC 4945), most strikingly here
and at odds with the above examples, the X-ray spectrum of NGC 7679 does not
appear to be highly absorbed. The main peculiarity of objects like NGC 7679 is
not the strength of their starburst but the apparent optical weakness of the
Seyfert 1 nucleus when compared with its X-ray luminosity. To date NGC 7679 is
one of the few Seyfert 1/ Starburst composites for which the broad-band X-ray
properties have been investigated in detail. The results presented here imply
that optical and infrared spectroscopy could be highly inefficient in revealing
the presence of an AGN in these kinds of objects, which instead is clearly
revealed from X-ray spectroscopic and variability investigations.
|
The aim of the present paper is to provide an intrinsic investigation of
projective changes in Finsler geometry, following the pullback formalism.
Various known local results are generalized and other new intrinsic results are
obtained. Nontrivial characterizations of projective changes are given. The
fundamental projectively invariant tensors, namely, the projective deviation
tensor, the Weyl torsion tensor, the Weyl curvature tensor and the Douglas
tensor are investigated. The properties of these tensors and their
interrelationships are obtained. Projective connections and projectively flat
manifolds are characterized. The present work is entirely intrinsic (free from
local coordinates).
|
We analyze aspects of the behavior of the family of inner parallel bodies of
a convex body for the isoperimetric quotient and deficit of arbitrary
quermassintegrals. By means of technical boundary properties of the so-called
form body of a convex body and similar constructions for inner parallel bodies,
we point out an erroneous use of a relation between the latter bodies in two
different works. We correct these results, limiting them to convex bodies
having a very precise boundary structure.
|
Human behavior is incredibly complex and the factors that drive decision
making--from instinct, to strategy, to biases between individuals--often vary
over multiple timescales. In this paper, we design a predictive framework that
learns representations to encode an individual's 'behavioral style', i.e.
long-term behavioral trends, while simultaneously predicting future actions and
choices. The model explicitly separates representations into three latent
spaces: the recent past space, the short-term space, and the long-term space
where we hope to capture individual differences. To simultaneously extract both
global and local variables from complex human behavior, our method combines a
multi-scale temporal convolutional network with latent prediction tasks, where
we encourage embeddings across the entire sequence, as well as subsets of the
sequence, to be mapped to similar points in the latent space. We develop and
apply our method to a large-scale behavioral dataset from 1,000 humans playing
a 3-armed bandit task, and analyze what our model's resulting embeddings reveal
about the human decision making process. In addition to predicting future
choices, we show that our model can learn rich representations of human
behavior over multiple timescales and provide signatures of differences in
individuals.
|
This paper presents Master of Puppets (MOP), an animation-by-demonstration
framework that allows users to control the motion of virtual characters
(puppets) in real time. In the first step, the user is asked to perform the
necessary actions that correspond to the character's motions. The user's
actions are recorded, and a hidden Markov model (HMM) is used to learn the
temporal profile of the actions. During the runtime of the framework, the user
controls the motions of the virtual character based on the specified
activities. The advantage of the MOP framework is that it recognizes and
follows the progress of the user's actions in real time. Based on the forward
algorithm, the method predicts the evolution of the user's actions, which
corresponds to the evolution of the character's motion. This method treats
characters as puppets that can perform only one motion at a time. This means
that combinations of motion segments (motion synthesis), as well as the
interpolation of individual motion sequences, are not provided as
functionalities. By implementing the framework and presenting several computer
puppetry scenarios, its efficiency and flexibility in animating virtual
characters is demonstrated.
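The progress-tracking step rests on the standard HMM forward recursion; a
minimal sketch (generic textbook form, with illustrative variable names rather
than the framework's own API):

```python
import numpy as np

def forward(obs, pi0, A, B):
    """alpha[t, i] ~ P(o_1..o_t, state_t = i), row-normalised for stability.
    pi0: initial distribution; A[i, j]: transition; B[i, o]: emission."""
    alpha = np.zeros((len(obs), len(pi0)))
    alpha[0] = pi0 * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    return alpha
# np.argmax(alpha[-1]) estimates the current progress state of the action.
```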
|
We consider the problem of recovering a common latent source with independent
components from multiple views. This applies to settings in which a variable is
measured with multiple experimental modalities, and where the goal is to
synthesize the disparate measurements into a single unified representation. We
consider the case that the observed views are a nonlinear mixing of
component-wise corruptions of the sources. When the views are considered
separately, this reduces to nonlinear Independent Component Analysis (ICA) for
which it is provably impossible to undo the mixing. We present novel
identifiability proofs that this is possible when the multiple views are
considered jointly, showing that the mixing can theoretically be undone using
function approximators such as deep neural networks. In contrast to known
identifiability results for nonlinear ICA, we prove that independent latent
sources with arbitrary mixing can be recovered as long as multiple,
sufficiently different noisy views are available.
|
The MAJORANA collaboration is actively pursuing research and development
aimed at a tonne-scale 76Ge neutrinoless double-beta decay experiment. The
current, primary focus is the construction of the MAJORANA DEMONSTRATOR
experiment, an R&D effort that will field approximately 40 kg of germanium
detectors with mixed enrichment levels. This article provides a status update
on the construction of the DEMONSTRATOR.
|
In the "intense--coupling" regime all Higgs bosons of the Minimal
Supersymmetric extension of the Standard Model (MSSM) are rather light and have
comparable masses of O(100 GeV). They couple maximally to electroweak gauge
bosons, and for large ratios of the vacuum expectation values of the two Higgs
doublet fields, tan\beta, they interact strongly with the standard third
generation fermions. We present in this note a comprehensive study of this
scenario. We summarize the main phenomenological features and check the
accordance with the direct constraints from Higgs boson searches at LEP2 and
the Tevatron, as well as with the indirect constraints from precision
measurements. After the presentation of the decay branching ratios, we discuss
production cross sections of the neutral Higgs particles in this regime at
future colliders, the Tevatron Run II, the LHC and a 500 GeV e+e- linear
collider.
|
In this paper, we propose the use of geodesic distances in conjunction with
multivariate distance matrix regression, called geometric-MDMR, as a powerful
first-step analysis method for manifold-valued data. Manifold-valued data are
appearing more frequently in the literature, from analyses of earthquakes to
brain patterns. Accounting for the structure of these data increases the
complexity of the analysis, but allows for much more interpretable results
in terms of the data. To test geometric-MDMR, we develop a method to simulate
functional connectivity matrices for fMRI data to perform a simulation study,
which shows that our method outperforms the current standards in fMRI analysis.
|
The model-based gait recognition methods usually adopt the pedestrian walking
postures to identify human beings.
However, existing methods do not explicitly resolve the large intra-class
variance of human pose caused by changing camera views.
In this paper, we propose to generate multi-view pose sequences for each
single-view pose sample by learning full-rank transformation matrices via
lower-upper generative adversarial network (LUGAN).
From the prior of camera imaging, we derive that the spatial coordinates of
cross-view poses are related by a linear transformation with a full-rank
matrix; therefore, this paper employs adversarial training to learn the
transformation matrices from the source pose and target views, in order to
obtain the target pose sequences.
To this end, we implement a generator composed of graph convolutional (GCN)
layers, fully connected (FC) layers and two-branch convolutional (CNN) layers:
GCN layers and FC layers encode the source pose sequence and target view, then
CNN branches learn a lower triangular matrix and an upper triangular matrix,
respectively, finally they are multiplied to formulate the full-rank
transformation matrix.
For the purpose of adversarial training, we further devise a conditional
discriminator that distinguishes whether a pose sequence is real or
generated.
To enable the high-level correlation learning, we propose a plug-and-play
module, named multi-scale hypergraph convolution (HGC), to replace the spatial
graph convolutional layer in baseline, which could simultaneously model the
joint-level, part-level and body-level correlations.
Extensive experiments on two large gait recognition datasets, i.e., CASIA-B
and OUMVLP-Pose, demonstrate that our method outperforms the baseline model and
existing pose-based methods by a large margin.
|
In this work, we propose extreme compression techniques, such as binarization
and ternarization, for neural decoders such as TurboAE. These methods reduce
memory and computation by a factor of 64, with performance better than that of
quantized (1-bit or 2-bit) neural decoders. However, because of the limited
representation capability of binary and ternary networks, the performance is
not as good as that of the real-valued decoder. To close this gap, we further
propose to ensemble four such weak performers for deployment at the edge,
achieving performance similar to that of the real-valued network. These
ensemble decoders yield 16x and 64x savings in memory and computation,
respectively, while achieving performance similar to that of real-valued TurboAE.
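For concreteness, a minimal threshold-based ternarizer in the spirit of Ternary Weight Networks (the 0.7 scaling heuristic and the exact quantization scheme applied to TurboAE are assumptions of this sketch):

```python
import numpy as np

def ternarize(W, delta_scale=0.7):
    # Map real-valued weights to {-alpha, 0, +alpha}; alpha is the mean
    # magnitude of the weights that survive the threshold.
    delta = delta_scale * np.mean(np.abs(W))
    mask = np.abs(W) > delta
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(W) * mask
```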
|
Shell-type Supernova remnants (SNRs) have long been known to harbour a
population of ultra-relativistic particles, accelerated in the Supernova shock
wave by the mechanism of diffusive shock acceleration. Experimental evidence
for the existence of electrons up to energies of ~100 TeV was first provided by
the detection of hard X-ray synchrotron emission as e.g. in the shell of the
young SNR SN1006. Furthermore, on theoretical grounds, shell-type Supernova
remnants have long been considered the main accelerators of protons - cosmic
rays - in the Galaxy; definitive proof of this process is, however, still missing.
Pulsar Wind Nebulae (PWN) - diffuse structures surrounding young pulsars - are
another class of objects known to be a site of particle acceleration in the
Galaxy, again through the detection of hard synchrotron X-rays such as in the
Crab Nebula. Gamma-rays above 100 MeV provide direct access to acceleration
processes. The GLAST Large Area Telescope (LAT) will operate in the energy
range between 30 MeV and 300 GeV and will provide excellent sensitivity,
angular and energy resolution in a previously rather poorly explored energy
band. We will describe prospects for the investigation of these Galactic
particle accelerators with GLAST.
|
Formal Concept Analysis has proven to be an effective method of restructuring
complete lattices and various algebraic domains. In this paper, the notions of
attribute continuous formal context and continuous formal concept are
introduced by considering a selection F of finite subsets of attributes. Our
choice of a selection F relies on a kind of generalized interior operator.
It is shown that the set of continuous formal concepts forms a continuous
domain, and every continuous domain can be obtained in this way. Moreover, a
notion of F-morphism is also identified to produce a category equivalent to
that of continuous domains with Scott-continuous functions. This paper also
considers the representations of various subclasses of continuous domains such
as algebraic domains, bounded complete domains and stably continuous
semilattices. These results explore the fundamental idea of domain theory in
Formal Concept Analysis from a categorical viewpoint.
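To fix notation, the classical derivation operators behind formal concepts can be sketched as follows (plain finite FCA only; the selection F and the continuity machinery introduced above are beyond this sketch):

```python
def common_attributes(A, incidence):
    # A': attributes shared by every object in the (non-empty) set A,
    # where incidence[g] is the set of attributes of object g.
    return set.intersection(*[incidence[g] for g in A])

def common_objects(B, objects, incidence):
    # B': objects possessing every attribute in B
    return {g for g in objects if B <= incidence[g]}

# (A, B) is a formal concept iff A == common_objects(B, objects, incidence)
# and B == common_attributes(A, incidence); ordered by extent inclusion,
# the concepts form a complete lattice.
```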
|
The present study brings forward important information, within the framework
of spectral distribution theory, about the types of forces that dominate three
realistic interactions, CD-Bonn, CD-Bonn+3terms and GXPF1, in nuclei and their
ability to account for many-particle effects such as the formation of
correlated nucleon pairs and enhanced quadrupole collective modes.
Like-particle and proton-neutron isovector pairing correlations are described
microscopically by a model interaction with Sp(4) dynamical symmetry, which is
extended to include an additional quadrupole-quadrupole interaction. The
analysis of the results for the 1f7/2 level shows that both CD-Bonn+3terms and
GXPF1 exhibit a well-developed pairing character compared to CD-Bonn, while the
latter appears to build up more (less) rotational isovector T = 1 (isoscalar T
= 0) collective features. Furthermore, the three realistic interactions are in
general found to correlate strongly with the pairing+quadrupole model
interaction, especially for the highest possible isospin group of states where
the model interaction can be used to provide a reasonable description of the
corresponding energy spectra.
|
We study small-scale magnetic reconnection above the radiatively
inefficient accretion flow around a massive black hole via 2D
magnetohydrodynamic (MHD) numerical simulations, in order to model the blob
formation and ejection from the accretion flow around Sgr A*. The reconnection
between the newly emerging magnetic field and the pre-existing magnetic field
is investigated to check whether blobs can be driven in the environment of a
black hole accretion disc. After the magnetic reconnection, both the velocity
and temperature of the plasma can be comparable to the inferred physical
properties at the base of the observed blob ejection. For illustration, three
small boxes located within 40 Schwarzschild radii of the central black hole
are chosen as our simulation areas. At the beginning of the reconnection, the
fluid is pulled toward the central black hole by the gravitational attraction,
and the current sheet produced by the reconnection is pulled in the same
direction; consequently, the resulting outflows move both upwards and towards
the symmetry axis of the central black hole. Eventually, huge blobs appear,
which supports the catastrophe model of episodic jets
\citep{2009MNRAS.395.2183Y}. It is also found that the closer to the black
hole the reconnection happens, the higher the efficiency of converting the
magnetic energy into heat and kinetic energy. These inner blobs exhibit a
vortex structure due to the Kelvin-Helmholtz (K-H) instability, which develops
along the current sheet separating fluids with different speeds.
|
We analyze the validity of a quasiparticle description of a superconducting
state at a metallic quantum-critical point (QCP). A normal state at a QCP is a
non-Fermi liquid with no coherent quasiparticles. A superconducting order gaps
out low-energy excitations, except for a sliver of states for non-s-wave gap
symmetry, and at first glance should restore a coherent quasiparticle
behavior. We argue that this does not necessarily hold as in some cases the
fermionic self-energy remains singular slightly above the gap edge. This
singularity gives rise to markedly non-BCS behavior of the density of states
and to broadening and eventual vanishing of the quasiparticle peak in the
spectral function. We analyze the set of quantum-critical models with an
effective dynamical 4-fermion interaction, mediated by a gapless boson at a
QCP, $V(\Omega) \propto 1/\Omega^\gamma$. We show that coherent quasiparticle
behavior in a superconducting state holds for $\gamma <1/2$, but breaks down
for larger $\gamma$. We discuss signatures of quasiparticle breakdown and
compare our results with the data.
|
The objective of this study was to bring out an understanding of the
concept of agile IT project management: what it is and what it is not. It also
aimed to compare the pros and cons of both agile and traditional methods
of IT project management in a typical industry setting, the challenges of going
purely agile, and so on. It is purely a literature review of peer-reviewed
papers sourced mainly from Google Scholar. It was revealed that agile outweighs
the traditional methods in terms of benefits, but its implementation poses
many challenges due to a number of issues, paramount among them being
organizational culture and empowerment of the project team. This has resulted
in a number of industries sticking to the traditional methods despite the
overwhelming benefits of agile. In another school of thought, the combination
of the two paradigms is the way forward.
|
We prove that on the Baire space $(D^{\kappa},\pi)$, $\kappa \geq \omega_0$,
where $D$ is a uniformly discrete space whose cardinality is an
$\omega_1$-strongly compact cardinal and $\pi$ denotes the product uniformity
on $D^\kappa$, there exists a $z_u$-filter $\mathcal{F}$ that is Cauchy for the
uniformity $e\pi$, which has as a base all the countable uniform partitions of
$(D^\kappa,\pi)$, and that fails the countable intersection property. This fact
is equivalent to the existence of a non-vanishing real-valued uniformly
continuous function $f$ on $D^{\kappa}$ for which the inverse function $g=1/f$
cannot be continuously extended to the completion of $(D^{\kappa},e\pi)$. This
does not happen when the cardinal of $D$ is strictly smaller than the first
Ulam-measurable cardinal.
|
The previously studied Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state is
stabilized by a magnetic field via the Zeeman coupling in spin-singlet
superconductors. Here we suggest a novel route to achieve non-zero
center-of-mass momentum pairing states in superconductors with Fermi surface
nesting. We investigate two-dimensional superconductors under a uniform
external current, which leads to a finite pair-momentum of ${\bf q}_{e}$. We
find that an FFLO state with a spontaneous pair-momentum of ${\bf q}_{s}$ is
stabilized above a certain critical current which depends on the direction of
the external current. A finite ${\bf q}_s$ arises in order to make the total
pair-momentum of ${\bf q}_t(={\bf q}_s + {\bf q}_e)$ perpendicular to the
nesting vector, which lowers the free energy of the FFLO state, as compared to
the superconducting and normal states. We also suggest experimental signatures
of the FFLO state.
|
In this work, a Bayesian model calibration framework is presented that
utilizes goal-oriented a-posteriori error estimates in quantities of interest
(QoIs) for classes of high-fidelity models characterized by PDEs. It is shown
that for a large class of computational models, it is possible to develop a
computationally inexpensive procedure for calibrating parameters of
high-fidelity models of physical events when the parameters of low-fidelity
(surrogate) models are known with acceptable accuracy. The main ingredients in
the proposed model calibration scheme are goal-oriented a-posteriori estimates
of error in QoIs computed using a so-called lower fidelity model compared to
those of an uncalibrated higher fidelity model. The estimates of error in QoIs
are used to define likelihood functions in Bayesian inversion analysis. A
standard Bayesian approach is employed to compute the posterior distribution of
model parameters of high-fidelity models. As applications, parameters in a
quasi-linear second-order elliptic boundary-value problem (BVP) are calibrated
using a second-order linear elliptic BVP. In a second application, parameters
of a tumor growth model involving nonlinear time-dependent PDEs are calibrated
using a lower fidelity linear tumor growth model with known parameter values.
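To make the workflow concrete, here is a minimal sketch of the Bayesian step (a random-walk Metropolis sampler; treating the error-corrected surrogate QoI as the mean of a Gaussian likelihood is our illustrative assumption, not the paper's exact construction):

```python
import numpy as np

def make_log_like(qoi_lo, err_est, qoi_hi, sigma=1.0):
    # Gaussian misfit between the surrogate QoI corrected by its
    # a-posteriori error estimate and the high-fidelity QoI.
    def log_like(theta):
        resid = (qoi_lo(theta) + err_est(theta)) - qoi_hi
        return -0.5 * np.sum(resid ** 2) / sigma ** 2
    return log_like

def metropolis(log_post, theta0, steps=5000, scale=0.1, seed=0):
    # log_post should combine the likelihood above with a log-prior.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```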
|
The angular distributions of the baryon-antibaryon low-mass enhancements seen
in the charmless three-body baryonic B decays B+ -> p pbar K+, B0 -> p pbar Ks,
and B0 -> p Lambdabar pi- are reported. A quark fragmentation interpretation is
supported, while the gluonic resonance picture is disfavored. Searches for the
Theta+ and Theta++ pentaquarks in the relevant decay modes and possible
glueball states G with 2.2 GeV/c^2 < M(ppbar) < 2.4 GeV/c^2 in the ppbar systems
give null results. We set upper limits on the products of branching fractions,
B(B0 -> Theta+ pbar)\times B(Theta+ -> p Ks) < 2.3 \times 10^{-7}, B(B+ -> Theta++
pbar) \times B(Theta++ -> p K+) < 9.1 \times 10^{-8}, and B(B+ -> G K+) \times
B(G -> p pbar) < 4.1 \times 10^{-7} at the 90% confidence level. The analysis
is based on a 140 fb^{-1} data sample recorded on the Upsilon(4S) resonance
with the Belle detector at the KEKB asymmetric-energy e+e- collider.
|
A packing of partial difference sets is a collection of disjoint partial
difference sets in a finite group $G$. This configuration has received
considerable attention in design theory, finite geometry, coding theory, and
graph theory over many years, although often only implicitly. We consider
packings of certain Latin square type partial difference sets in abelian groups
having identical parameters, the size of the collection being either the
maximum possible or one smaller. We unify and extend numerous previous results
in a common framework, recognizing that a particular subgroup reveals important
structural information about the packing. Identifying this subgroup allows us
to formulate a recursive lifting construction of packings in abelian groups of
increasing exponent, as well as a product construction yielding packings in the
direct product of the starting groups. We also study packings of certain
negative Latin square type partial difference sets of maximum possible size in
abelian groups, all but one of which have identical parameters, and show how to
produce such collections using packings of Latin square type partial difference
sets.
|
Let $t$ be a non-negative integer and $\mbox{$\cal P$}=\{(A_i,B_i)\}_{1\leq
i\leq m}$ be a set-pair family satisfying $|A_i \cap B_i|\leq t$ for $1\leq i
\leq m$. $\mbox{$\cal P$}$ is called a strong Bollob\'as $t$-system if $|A_i\cap
B_j|>t$ for all $1\leq i\neq j \leq m$.
F\"uredi conjectured the following nice generalization of Bollob\'as'
Theorem:
Let $t$ be a non-negative integer. Let $\mbox{$\cal P$}=\{(A_i,B_i)\}_{1\leq
i\leq m}$ be a strong Bollob\'as $t$-system. Then $$ \sum_{i=1}^m
\frac{1}{{|A_i|+|B_i|-2t \choose |A_i|-t}}\leq 1. $$ We confirm the following
special case of F\"uredi's conjecture, along with some further results of a
similar flavor.
Let $t$ be a non-negative integer. Let $\mbox{$\cal P$}=\{(A_i,B_i)\}_{1\leq
i\leq m}$ denote a strong Bollob\'as $t$-system. Define $a_i:=|A_i|$ and
$b_i:=|B_i|$ for each $i$. Assume that there exists a positive integer $N$ such
that $a_i+b_i=N$ for each $i$. Then $$ \sum_{i=1}^m \frac{1}{{a_i+b_i-2t
\choose a_i-t}}\leq 1. $$
|
This paper describes a method for solving smooth nonconvex minimization
problems subject to bound constraints with good worst-case complexity
guarantees and practical performance. The method contains elements of two
existing methods: the classical gradient projection approach for
bound-constrained optimization and a recently proposed Newton-conjugate
gradient algorithm for unconstrained nonconvex optimization. Using a new
definition of approximate second-order optimality parametrized by some
tolerance $\epsilon$ (which is compared with related definitions from previous
works), we derive complexity bounds in terms of $\epsilon$ for both the number
of iterations required and the total amount of computation. The latter is
measured by the number of gradient evaluations or Hessian-vector products. We
also describe illustrative computational results on several test problems from
low-rank matrix optimization.
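As a reference point for the gradient-projection ingredient, here is a minimal fixed-step sketch (the Newton-conjugate-gradient steps and the complexity-driven stopping tests of the paper are omitted):

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, step=1e-2, tol=1e-6, max_iter=100000):
    # Minimize a smooth f over the box lo <= x <= hi by projected gradient steps.
    x = np.clip(x0, lo, hi)
    for _ in range(max_iter):
        x_new = np.clip(x - step * grad(x), lo, hi)   # gradient step + projection
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```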
|
There has recently been an explosion of work on spoken dialogue systems,
along with an increased interest in open-domain systems that engage in casual
conversations on popular topics such as movies, books and music. These systems
aim to socially engage, entertain, and even empathize with their users. Since
the achievement of such social goals is hard to measure, recent research has
used dialogue length or human ratings as evaluation metrics, and developed
methods for automatically calculating novel metrics, such as coherence,
consistency, relevance and engagement. Here we develop a PARADISE model for
predicting the performance of Athena, a dialogue system that has participated
in thousands of conversations with real users, while competing as a finalist in
the Alexa Prize. We use both user ratings and dialogue length as metrics for
dialogue quality, and experiment with predicting these metrics using automatic
features that are both system dependent and independent. Our goal is to learn a
general objective function that can be used to optimize the dialogue choices of
any Alexa Prize system in real time and evaluate its performance. Our best
model for predicting user ratings gets an R$^2$ of .136 with a DistilBert
model, and the best model for predicting length with system independent
features gets an R$^2$ of .865, suggesting that conversation length may be a
more reliable measure for automatic training of dialogue systems.
|
Quantum teleportation -- the transmission and reconstruction over arbitrary
distances of the state of a quantum system -- is demonstrated experimentally.
During teleportation, an initial photon which carries the polarization that is
to be transferred and one of a pair of entangled photons are subjected to a
measurement such that the second photon of the entangled pair acquires the
polarization of the initial photon. This latter photon can be arbitrarily far
away from the initial one. Quantum teleportation will be a critical ingredient
for quantum computation networks.
|
An improper interval (edge) coloring of a graph $G$ is an assignment of
colors to the edges of $G$ satisfying the condition that, for every vertex $v
\in V(G)$, the set of colors assigned to the edges incident with $v$ forms an
integral interval. An interval coloring is $k$-improper if at most $k$ edges
with the same color all share a common endpoint. The minimum integer $k$ such
that there exists a $k$-improper interval coloring of the graph $G$ is the
interval coloring impropriety of $G$, denoted by $\mu_{int}(G)$. In this paper,
we provide a construction of an interval coloring of a subclass of complete
multipartite graphs. This provides additional evidence for the conjecture by
Casselgren and Petrosyan that $\mu_{int}(G)\leq 2$ for all complete
multipartite graphs $G$. Additionally, we determine improved upper bounds on
the interval coloring impropriety of several classes of graphs, namely 2-trees,
iterated triangulations, and outerplanar graphs. Finally, we investigate the
interval coloring impropriety of the corona product of two graphs, $G\odot H$.
|
Adversarial attacks have been extensively studied in recent years since they
can identify the vulnerabilities of deep learning models before they are deployed. In this
paper, we consider the black-box adversarial setting, where the adversary needs
to craft adversarial examples without access to the gradients of a target
model. Previous methods attempted to approximate the true gradient either by
using the transfer gradient of a surrogate white-box model or based on the
feedback of model queries. However, the existing methods inevitably suffer from
low attack success rates or poor query efficiency since it is difficult to
estimate the gradient in a high-dimensional input space with limited
information. To address these problems and improve black-box attacks, we
propose two prior-guided random gradient-free (PRGF) algorithms based on biased
sampling and gradient averaging, respectively. Our methods can take
advantage of a transfer-based prior given by the gradient of a surrogate model
and the query information simultaneously. Through theoretical analyses, the
transfer-based prior is appropriately integrated with model queries by an
optimal coefficient in each method. Extensive experiments demonstrate that, in
comparison with the alternative state-of-the-arts, both of our methods require
much fewer queries to attack black-box models with higher success rates.
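A minimal sketch of the biased-sampling idea (fixed mixing weight `lam` and a plain finite-difference estimator; the optimal coefficient derived in the paper and the gradient-averaging variant are not reproduced):

```python
import numpy as np

def prgf_gradient(loss, x, prior, lam=0.5, q=20, mu=1e-3, seed=0):
    # Prior-guided random gradient-free estimate of grad loss(x),
    # using q+1 queries to the black-box loss.
    rng = np.random.default_rng(seed)
    v = prior / (np.linalg.norm(prior) + 1e-12)   # normalized transfer prior
    g = np.zeros_like(x)
    f0 = loss(x)
    for _ in range(q):
        xi = rng.standard_normal(x.shape)
        xi -= (xi.ravel() @ v.ravel()) * v        # component orthogonal to prior
        xi /= np.linalg.norm(xi) + 1e-12
        u = np.sqrt(lam) * v + np.sqrt(1.0 - lam) * xi   # biased direction
        g += (loss(x + mu * u) - f0) / mu * u     # finite-difference probe
    return g / q
```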
|
Over a decade ago, the H1 Collaboration decided to embrace the
object-oriented paradigm and completely redesign its data analysis model and
data storage format. The event data model, based on the ROOT framework,
consists of three layers - tracks and calorimeter clusters, identified
particles and finally event summary data - with a singleton class providing
unified access. This original solution was then augmented with a fourth layer
containing user-defined objects.
This contribution will summarise the history of the solutions used, from
modifications to the original design, to the evolution of the high-level
end-user analysis object framework which is used by H1 today. Several important
issues are addressed - the portability of expert knowledge to increase the
efficiency of data analysis, the flexibility of the framework to incorporate
new analyses, the performance and ease of use, and lessons learned for future
projects.
|
We investigated the evolutionary stages and disk properties of 211 Young
stellar objects (YSOs) across the Perseus cloud by modeling the broadband
optical to mid-infrared (IR) spectral energy distribution (SED). By exploring
the relationships among the turnoff wave bands lambda_turnoff (longward of
which significant IR excesses above the stellar photosphere are observed), the
excess spectral index alpha_excess at lambda <~ 24 microns, and the disk inner
radius R_in (from SED modeling) for YSOs of different evolutionary stages, we
found that the median and standard deviation of alpha_excess of YSOs with
optically thick disks tend to increase with lambda_turnoff, especially at
lambda_turnoff >= 5.8 microns, whereas the median fractional dust luminosities
L_dust/L_star tend to decrease with lambda_turnoff. This points to an
inside-out disk clearing of small dust grains. Moreover, a positive correlation
between alpha_excess and R_in was found at alpha_excess > ~0 and R_in > ~10
$\times$ the dust sublimation radius R_sub, irrespective of lambda_turnoff,
L_dust/L_star and disk flaring. This suggests that the outer disk flaring
either does not evolve synchronously with the inside-out disk clearing or has
little influence on alpha_excess shortward of 24 microns. About 23% of our YSO
disks are classified as transitional disks, which have lambda_turnoff >= 5.8
microns and L_dust/L_star >10^(-3). The transitional disks and full disks
occupy distinctly different regions on the L_dust/L_star vs. alpha_excess
diagram. Taking L_dust/L_star as an approximate discriminator of disks with
(>0.1) and without (<0.1) considerable accretion activity, we found that 65%
and 35% of the transitional disks may be consistent with being dominantly
cleared by photoevaporation and dynamical interaction respectively. [abridged]
|
We propose Lattice gauge equivariant Convolutional Neural Networks (L-CNNs)
for generic machine learning applications on lattice gauge theoretical
problems. At the heart of this network structure is a novel convolutional layer
that preserves gauge equivariance while forming arbitrarily shaped Wilson loops
in successive bilinear layers. Together with topological information, for
example from Polyakov loops, such a network can in principle approximate any
gauge covariant function on the lattice. We demonstrate that L-CNNs can learn
and generalize gauge invariant quantities that traditional convolutional neural
networks are incapable of finding.
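For orientation, the elementary gauge-invariant building block such layers compose is the Wilson loop; a minimal plaquette (1x1 loop) computation looks as follows (a sketch of the lattice object itself, not of the L-CNN layer; the dict-of-dicts link storage is our assumption):

```python
import numpy as np

def shift(x, mu, dims):
    # Periodic shift of lattice site x (a tuple of coordinates) in direction mu
    y = list(x)
    y[mu] = (y[mu] + 1) % dims[mu]
    return tuple(y)

def plaquette(U, x, mu, nu, dims):
    # U[mu][x]: SU(N) link matrix at site x in direction mu.
    # Real trace of U_mu(x) U_nu(x+mu) U_mu(x+nu)^dagger U_nu(x)^dagger
    P = (U[mu][x] @ U[nu][shift(x, mu, dims)]
         @ U[mu][shift(x, nu, dims)].conj().T @ U[nu][x].conj().T)
    return np.trace(P).real
```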
|
We consider an initial-boundary value problem for the $n$-dimensional wave
equation with the variable sound speed, $n\geq 1$. We construct three-level
implicit in time and compact in space (three-point in each space direction) 4th
order finite-difference schemes on the uniform rectangular meshes including
their one-parameter (for $n=2$) and three-parameter (for $n=3$) families. We
also show that some already known methods can be converted into such schemes.
In a unified manner, we prove the conditional stability of schemes in the
strong and weak energy norms together with the 4th order error estimate under
natural conditions on the time step. We also transform an unconditionally
stable 4th order two-level scheme suggested for $n=2$ to the three-level form,
extend it for any $n\geq 1$ and prove its stability. We also give an example of
a compact scheme for rectangular meshes that are non-uniform in space and time. We
suggest simple fast iterative methods based on FFT to implement the schemes. A
new effective initial guess to start iterations is given too. We also present
promising results of numerical experiments.
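To illustrate the FFT-based implementation idea in its simplest setting (1D, constant sound speed, periodic boundary conditions; the schemes of the paper require solvers adapted to the actual boundary conditions, so the coefficients and setup below are illustrative assumptions):

```python
import numpy as np

def fft_step_solve(rhs, h, tau, c, sigma=1.0 / 12.0):
    # Solve (I - sigma * (c*tau)^2 * Lambda_h) y = rhs on a periodic 1D mesh,
    # where Lambda_h is the 3-point discrete Laplacian, diagonal in Fourier space.
    n = rhs.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)          # angular wavenumbers
    lam = -(4.0 / h**2) * np.sin(k * h / 2.0) ** 2    # eigenvalues of Lambda_h
    return np.fft.ifft(np.fft.fft(rhs) / (1.0 - sigma * (c * tau) ** 2 * lam)).real
```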
|
Generating interpretable visualizations from complex data is a common problem
in many applications. Two key ingredients for tackling this issue are
clustering and representation learning. However, current methods do not yet
successfully combine the strengths of these two approaches. Existing
representation learning models that rely on a latent topological structure, such
as self-organizing maps, exhibit markedly lower clustering performance compared
to recent deep clustering methods. To close this performance gap, we (a)
present a novel way to fit self-organizing maps with probabilistic cluster
assignments (PSOM), (b) propose a new deep architecture for probabilistic
clustering (DPSOM) using a VAE, and (c) extend our architecture for time-series
clustering (T-DPSOM), which also allows forecasting in the latent space using
LSTMs. We show that DPSOM achieves superior clustering performance compared to
current deep clustering methods on MNIST/Fashion-MNIST, while maintaining the
favourable visualization properties of SOMs. On medical time series, we show
that T-DPSOM outperforms baseline methods in time series clustering and time
series forecasting, while providing interpretable visualizations of patient
state trajectories and uncertainty estimation.
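For concreteness, probabilistic cluster assignments over SOM nodes can be computed with a Student-t kernel, as popularized by deep embedded clustering (whether PSOM uses exactly this kernel is an assumption of the sketch):

```python
import numpy as np

def soft_assignments(z, centroids, alpha=1.0):
    # z: (n, d) latent points; centroids: (k, d) SOM node embeddings.
    # Student-t similarities, normalized to probabilistic assignments.
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)
```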
|
Kalmeyer-Laughlin (KL) chiral spin liquid (CSL) is a type of quantum spin
liquid without time-reversal symmetry, and it is considered the parent state
of an exotic type of superconductivity -- the anyon superconductor. Such an
exotic state has been sought for more than twenty years; however, it remains
unclear whether it can exist in a realistic system where time-reversal
symmetry is broken (T-breaking) spontaneously. Using the density matrix
renormalization group, we show that a KL CSL exists in a frustrated
anisotropic kagome Heisenberg model (KHM) with spontaneous T-breaking. We find
that our model has two topologically degenerate ground states, which exhibit
non-vanishing scalar chirality order and are protected by a finite excitation
gap. Further, we identify this state as a KL CSL by the characteristic edge
conformal field theory from the entanglement spectrum and the quasiparticle
braiding statistics
extracted from the modular matrix. We also study how this CSL phase evolves as
the system approaches the nearest neighbor KHM.
|
We describe a new approach for on-chip optical non-reciprocity which makes
use of strong optomechanical interaction in microring resonators. By optically
pumping the ring resonator in one direction, the optomechanical coupling is
only enhanced in that direction, and consequently, the system exhibits a
non-reciprocal response. For different configurations, this system can function
either as an optical isolator or a coherent non-reciprocal phase shifter. We
show that operation of such a device at the single-photon level could be
achieved with existing technology.
|
Entropic uncertainty is a well-known concept to formulate uncertainty
relations for continuous variable quantum systems with finitely many degrees of
freedom. Typically, the bounds of such relations scale with the number of
oscillator modes, preventing a straightforward generalization to quantum field
theories. In this work, we overcome this difficulty by introducing the notion
of a functional relative entropy and show that it has a meaningful field theory
limit. We present the first entropic uncertainty relation for a scalar quantum
field theory and exemplify its behavior by considering few particle excitations
and the thermal state. Also, we show that the relation implies the
multidimensional Heisenberg uncertainty relation.
|
Explaining to what extent the real power of genetic algorithms lies in the
ability of crossover to recombine individuals into higher quality solutions is
an important problem in evolutionary computation. In this paper we show how the
interplay between mutation and crossover can make genetic algorithms hillclimb
faster than their mutation-only counterparts. We devise a Markov chain
framework that allows us to rigorously prove an upper bound on the runtime of
standard steady state genetic algorithms to hillclimb the OneMax function. The
bound establishes that the steady-state genetic algorithms are 25% faster than
all standard bit mutation-only evolutionary algorithms with static mutation
rate up to lower order terms for moderate population sizes. The analysis also
suggests that larger populations may be faster than populations of size 2. We
present a lower bound for a greedy (2+1) GA that matches the upper bound for
populations larger than 2, rigorously proving that 2 individuals cannot
outperform larger population sizes under greedy selection and greedy crossover
up to lower order terms. In complementary experiments the best population size
is greater than 2 and the greedy genetic algorithms are faster than standard
ones, further suggesting that the derived lower bound also holds for the
standard steady state (2+1) GA.
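A minimal steady-state (2+1) GA on OneMax for experimentation (uniform crossover plus standard bit mutation; the tie-breaking, parent-selection and greedy-crossover details of the analyzed algorithms are simplified here):

```python
import random

def onemax(x):
    return sum(x)

def steady_state_ga(n, mut_rate=None, seed=0):
    # Population of 2; one offspring per generation via uniform crossover
    # and standard bit mutation; offspring replaces a worst individual
    # if it is at least as fit. Returns the number of fitness evaluations.
    rng = random.Random(seed)
    p = mut_rate if mut_rate is not None else 1.0 / n
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2)]
    evals = 2
    while max(onemax(x) for x in pop) < n:
        child = [rng.choice(pair) for pair in zip(*pop)]    # uniform crossover
        child = [b ^ (rng.random() < p) for b in child]     # bit mutation
        evals += 1
        worst = min(range(2), key=lambda i: onemax(pop[i]))
        if onemax(child) >= onemax(pop[worst]):
            pop[worst] = child
    return evals
```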
|
In the standard cosmological model, the dimming of distant Type Ia supernovae
is explained by invoking the existence of repulsive `dark energy' which is
causing the Hubble expansion to accelerate. However this may be an artifact of
interpreting the data in an (oversimplified) homogeneous model universe. In the
simplest inhomogeneous model which fits the SNe Ia Hubble diagram without dark
energy, we are located close to the centre of a void modelled by a
Lema\^{\i}tre-Tolman-Bondi metric. It has been claimed that such models cannot fit
the CMB and other cosmological data. This is however based on the assumption of
a scale-free spectrum for the primordial density perturbation. An alternative
physically motivated form for the spectrum enables a good fit to both SNe Ia
(Constitution/Union2) and CMB (WMAP 7-yr) data, and to the locally measured
Hubble parameter. Constraints from baryon acoustic oscillations and primordial
nucleosynthesis are also satisfied.
|
Weyl particles exhibit chiral transport property under external curved
space-time geometry. This effect is called chiral gravitational effect, which
plays an important role in quantum field theory. However, the absence of real
Weyl particles in nature hinders the observation of such interesting phenomena.
In this paper, we show that chiral gravitational effect can be manifested in
Weyl metamaterials with spatially controlled nonlocality. This inhomogeneous
modulation results in a spatially dependent group velocity in the Weyl cone
dispersion, which is equivalent to introducing a curved background space-time
(or gravitational field) for Weyl pseudo-spinors. The synthetic gravitational
field leads to the quantization of energy levels, including chiral zeroth order
energy modes (or simply chiral zero modes) that determine the chiral transport
property of pseudo-spinors. The inhomogeneous Weyl metamaterial provides an
experimentally realizable platform for investigating the interaction between
Weyl particles and gravitational field, allowing for observation of chiral
gravitational effect in table-top experiments.
|
We present a novel generalized convolution quadrature method that accurately
approximates convolution integrals. During the late 1980s, Lubich introduced
convolution quadrature techniques, which have now emerged as a prevalent
methodology in this field. However, these techniques were limited to constant
time stepping, and only in the last decade generalized convolution quadrature
based on the implicit Euler and Runge-Kutta methods has been developed,
allowing for variable time stepping. In this paper, we introduce and analyze a
new generalized convolution quadrature method based on the trapezoidal rule.
Crucial for the analysis is the connection to a new modified divided difference
formula that we establish. Numerical experiments demonstrate the effectiveness
of our method in achieving highly accurate and reliable results.
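For reference, classical (constant-step) convolution quadrature weights for the trapezoidal rule can be computed by FFT from the Laplace-domain kernel K(s); the generalized, variable-step quadrature analyzed in the paper requires the modified divided differences and is not reproduced here:

```python
import numpy as np

def cq_weights_trapezoidal(K, h, N):
    # Weights w_0..w_N from K(delta(zeta)/h) = sum_n w_n zeta^n, with
    # delta(zeta) = 2(1 - zeta)/(1 + zeta) for the trapezoidal rule,
    # recovered by FFT on a circle of radius rho < 1.
    L = 2 * (N + 1)
    rho = 1e-8 ** (1.0 / L)
    zeta = rho * np.exp(2j * np.pi * np.arange(L) / L)
    vals = np.array([K(2.0 * (1.0 - z) / ((1.0 + z) * h)) for z in zeta])
    w = np.fft.ifft(vals)[: N + 1]
    return (w / rho ** np.arange(N + 1)).real

# Sanity check: K(s) = 1/s (a running integral) gives w_0 = h/2 and w_n = h,
# the familiar trapezoidal weights.
```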
|
The search for lepton-flavor-violation process e^+e^-\to e\mu in the energy
region \sqrt{s}=984 - 1060 MeV with SND detector at VEPP-2M e^+e^- collider is
reported. The model-independent 90% CL upper limits on the e^+e^-\to e\mu cross
section, \sigma_{e\mu} < 11 pb, as well as on the corresponding \phi\to e\mu
branching fraction, B(\phi\to e\mu) < 2 \times 10^{-6}, for the final particles
polar angles 55^\circ<\theta<125^\circ, were obtained.
|
Kilohertz quasi-periodic oscillations (kHz QPOs) have been regarded as
representing the Keplerian frequency at the inner disk edge in neutron star
X-ray binaries. The so-called ``parallel tracks'' on the plot of the kHz QPO
frequency vs. X-ray flux in neutron star X-ray binaries, on the other hand,
show the correlation between the kHz QPO frequency and the X-ray flux on time
scales from hours to days. This is suspected to be caused by variations of the
mass accretion rate through the accretion disk surrounding the neutron star. We
show here that, by comparing the correlation between the kHz QPO frequency and
the X-ray count rate on a certain QPO time scale observed approximately
simultaneously in the Fourier power spectra of the X-ray light curve, we have
found evidence that the X-ray flux of millihertz QPOs in neutron star X-ray
binaries is generated inside the inner disk edge, if one adopts that the
kilohertz QPO frequency is an orbital frequency at the inner disk edge. This
approach could be applied to other variability components in X-ray binaries.
|
We present theories for the latitudinal extents of both Hadley cells
throughout the annual cycle by combining our recent scaling for the ascending
edge latitude (Hill et al. 2021) with the uniform Rossby number (Ro),
baroclinic instability-based theory for the poleward, descending edge latitudes
of Kang and Lu 2012. The resulting analytic expressions for all three Hadley
cell edges are predictive except for diagnosed values of Ro and two
proportionality constants. The theory captures the climatological annual cycle
of the ascending and descending edges in an Earth-like simulation in an
idealized aquaplanet general circulation model (GCM), provided the descending
edge prediction is lagged by one month. In simulations in this and two other
idealized GCMs with varied planetary rotation rate ($\Omega$), the winter,
descending edge of the solsticial, cross-equatorial Hadley cell scales
approximately as $\Omega^{-1/2}$ and the summer, ascending edge as
$\Omega^{-2/3}$, both in accordance with our theory.
|
Deep learning has led to significant advances in artificial intelligence, in
part, by adopting strategies motivated by neurophysiology. However, it is
unclear whether deep learning could occur in the real brain. Here, we show that
a deep learning algorithm that utilizes multi-compartment neurons might help us
to understand how the brain optimizes cost functions. Like neocortical
pyramidal neurons, neurons in our model receive sensory information and
higher-order feedback in electrotonically segregated compartments. Thanks to
this segregation, the neurons in different layers of the network can coordinate
synaptic weight updates. As a result, the network can learn to categorize
images better than a single layer network. Furthermore, we show that our
algorithm takes advantage of multilayer architectures to identify useful
representations---the hallmark of deep learning. This work demonstrates that
deep learning can be achieved using segregated dendritic compartments, which
may help to explain the dendritic morphology of neocortical pyramidal neurons.
|
We present a phenomenological theory of spin-orbit torques in a metallic
ferromagnet with spin-relaxing boundaries. The model is rooted in the coupled
diffusion of charge and spin in the bulk of the ferromagnet, where we account
for the anomalous Hall effects as well as the anisotropic magnetoresistance in
the corresponding constitutive relations for both charge and spin sectors. The
diffusion equations are supplemented with suitable boundary conditions
reflecting the spin-sink capacity of the environment. In inversion-asymmetric
heterostructures, the uncompensated spin accumulation exerts a dissipative
torque on the order parameter, giving rise to a current-dependent linewidth in
the ferromagnetic resonance with a characteristic angular dependence. We
compare our model to recent spin-torque ferromagnetic resonance measurements,
illustrating how rich self-induced spin-torque phenomenology can arise even in
simple magnetic structures.
|
Penalized regression has become a standard tool for model building across a
wide range of application domains. Common practice is to tune the amount of
penalization to trade off bias and variance or to optimize some other measure of
performance of the estimated model. An advantage of such automated
model-building procedures is that their operating characteristics are
well-defined, i.e., completely data-driven, and thereby they can be
systematically studied. However, in many applications it is desirable to
incorporate domain knowledge into the model building process; one way to do
this is to characterize each model along the solution path of a penalized
regression estimator in terms of an operating characteristic that is meaningful
within a domain context and then to allow domain experts to choose from among
these models using these operating characteristics as well as other factors not
available to the estimation algorithm. We derive an estimator of the false
selection rate for each model along the solution path using a novel variable
addition method. The proposed estimator applies to both fixed and random
designs and allows for $p \gg n$. The proposed estimator can be used to
estimate a model with a pre-specified false selection rate or can be overlaid
on the solution path to facilitate interactive model exploration. We
characterize the asymptotic behavior of the proposed estimator in the case of a
linear model under a fixed design; however, simulation experiments show that
the proposed estimator provides consistently more accurate estimates of the
false selection rate than competing methods across a wide range of models.
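One illustrative reading of the variable-addition idea (a sketch only: inert pseudo-variables are appended to the design, and their entry along the lasso path proxies false selections; the paper's estimator and its theory are more refined, and treating every real feature as potentially null makes this proxy conservative):

```python
import numpy as np
from sklearn.linear_model import lasso_path

def pseudo_variable_fsr(X, y, n_fake=None, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = n_fake if n_fake is not None else p
    Z = rng.standard_normal((n, k))              # inert pseudo-variables
    alphas, coefs, _ = lasso_path(np.hstack([X, Z]), y)
    sel = coefs != 0                             # (p + k, n_alphas) mask
    fake_rate = sel[p:].mean(axis=0)             # fraction of fakes selected
    est_false = fake_rate * p                    # implied false real selections
    total_real = np.maximum(sel[:p].sum(axis=0), 1)
    return alphas, est_false / total_real        # estimated FSR along the path
```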
|
This paper considers the class of L\'evy processes that can be written as a
Brownian motion time changed by an independent L\'evy subordinator. Examples in
this class include the variance gamma model, the normal inverse Gaussian model,
and other processes popular in financial modeling. The question addressed is
the precise relation between the standard first passage time and an alternative
notion, which we call first passage of the second kind, as suggested by Hurd
(2007) and others. We are able to prove that standard first passage time is the
almost sure limit of iterations of first passage of the second kind. Many
different problems arising in financial mathematics are posed as first passage
problems, and motivated by this fact, we are led to consider the implications
of the approximation scheme for fast numerical methods for computing first
passage. We find that the generic form of the iteration can be competitive with
other numerical techniques. In the particular case of the VG model, the scheme
can be further refined to give very fast algorithms.
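For experimentation, standard first passage for such a process can be approximated on a grid by simulating the Brownian motion under its gamma time change (a baseline Monte Carlo sketch with illustrative parameters; the iteration via first passage of the second kind is not reproduced here):

```python
import numpy as np

def vg_first_passage(b, theta=0.0, sigma=1.0, nu=0.2, T=1.0,
                     dt=1e-3, n_paths=10_000, seed=0):
    # Brownian motion (drift theta, volatility sigma) time-changed by a gamma
    # subordinator with unit mean rate and variance rate nu; record the first
    # grid time at which each path crosses level b (np.inf if never).
    rng = np.random.default_rng(seed)
    hit = np.full(n_paths, np.inf)
    X = np.zeros(n_paths)
    for i in range(1, int(T / dt) + 1):
        dG = rng.gamma(shape=dt / nu, scale=nu, size=n_paths)
        X += theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_paths)
        newly = (X >= b) & np.isinf(hit)
        hit[newly] = i * dt
    return hit
```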
|
We discuss all-to-all quark propagator techniques in two (related) contexts
within Lattice QCD: the computation of closed quark propagators, and
applications to the so-called "eye diagrams" appearing in the computation of
non-leptonic kaon decay amplitudes. Combinations of low-mode averaging and
diluted stochastic volume sources that yield optimal signal-to-noise ratios for
the latter problem are developed. We also apply a recently proposed probing
algorithm to compute directly the diagonal of the inverse Dirac operator, and
compare its performance with that of stochastic methods. At fixed computational
cost the two procedures yield comparable signal-to-noise ratios, but probing
has practical advantages which make it a promising tool for a wide range of
applications in Lattice QCD.
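As a point of comparison for the probing algorithm, the stochastic baseline for the diagonal of an inverse operator can be sketched as follows (a generic Z2-noise estimator; dilution and low-mode averaging are omitted):

```python
import numpy as np

def stochastic_diagonal(solve, n, n_samples=100, seed=0):
    # Estimate diag(A^{-1}): with i.i.d. +/-1 entries, E[eta_i eta_j] = delta_ij,
    # so E[eta * (A^{-1} eta)] equals diag(A^{-1}) elementwise.
    # `solve(v)` must apply A^{-1} to the vector v.
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_samples):
        eta = rng.choice([-1.0, 1.0], size=n)
        acc += eta * solve(eta)
    return acc / n_samples
```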
|
Model-based safety assessment (MBSA) has been one of the leading research thrusts of
the System Safety Engineering community for over two decades. However, there is
still a lack of consensus on what MBSA is. The ambiguity in the identity of
MBSA impedes the advancement of MBSA as an active research area. For this
reason, this paper aims to investigate the identity of MBSA to help achieve a
consensus across the community. Towards this end, we first reason about the
core activities that an MBSA approach must conduct. Second, we characterize the
core patterns in which the core activities must be conducted for an approach to
be considered MBSA. Finally, a recently published MBSA paper is reviewed to
test the effectiveness of our characterization of MBSA.
|
We present results of stroboscopic microwave spectroscopy of a radio-frequency
dressed optically pumped magnetometer. The interaction between radio-frequency
dressed atoms and a synchronously pulsed microwave field followed by Voigt
effect-based optical probing allows us to perform partial state tomography and
assess the efficiency of the state preparation process. To theoretically
describe the system, we solve the dynamical equation of the density matrix
employing Floquet expansion. Our theoretical results are in good agreement with
experimental measurements over a wide range of parameters and pumping
conditions. Finally, the theoretical and experimental analysis presented in
this work can be generalised to other systems involving complex state
preparation techniques.
|